[ { "msg_contents": "Hi,\n\nI normally build the code with warnings enabled (specifically,\n-Wshadow) which exposes many \"shadowed\" declarations.\n\nIt would be better to reduce warnings wherever it's easy to do so,\nbecause if we always see/ignore lots of warnings then sooner or later\nsomething important may escape attention. In any case, just removing\nthese warnings sometimes makes the code more readable by removing any\nambiguity.\n\nCurrently, I'm seeing 350+ shadow warnings. See the attached summary.\n\nIt will be WIP to eliminate them all, but here is a patch to address\nsome of the low-hanging fruit. If the patch is accepted, then I can\ntry later to deal with more of them.\n\n======\n\nThe following are fixed by changing the param/var from 'text' to 'txt'.\n\nexecute.c:111:19: warning: declaration of ‘text’ shadows a global\ndeclaration [-Wshadow]\n../../../../src/include/c.h:700:24: warning: shadowed declaration is\nhere [-Wshadow]\n\n~\n\nprepare.c:104:26: warning: declaration of ‘text’ shadows a global\ndeclaration [-Wshadow]\n../../../../src/include/c.h:700:24: warning: shadowed declaration is\nhere [-Wshadow]\nprepare.c:270:12: warning: declaration of ‘text’ shadows a global\ndeclaration [-Wshadow]\n../../../../src/include/c.h:700:24: warning: shadowed declaration is\nhere [-Wshadow]\n\n~\n\nc_keywords.c:36:32: warning: declaration of ‘text’ shadows a global\ndeclaration [-Wshadow]\n../../../../src/include/c.h:700:24: warning: shadowed declaration is\nhere [-Wshadow]\necpg_keywords.c:39:35: warning: declaration of ‘text’ shadows a global\ndeclaration [-Wshadow]\n../../../../src/include/c.h:700:24: warning: shadowed declaration is\nhere [-Wshadow]\n\n~\n\ntab-complete.c:1638:29: warning: declaration of ‘text’ shadows a\nglobal declaration [-Wshadow]\n../../../src/include/c.h:700:24: warning: shadowed declaration is here\n[-Wshadow]\ntab-complete.c:5082:46: warning: declaration of ‘text’ shadows a\nglobal declaration [-Wshadow]\n../../../src/include/c.h:700:24: warning: shadowed declaration is here\n[-Wshadow]\ntab-complete.c:5111:38: warning: declaration of ‘text’ shadows a\nglobal declaration [-Wshadow]\n../../../src/include/c.h:700:24: warning: shadowed declaration is here\n[-Wshadow]\ntab-complete.c:5120:36: warning: declaration of ‘text’ shadows a\nglobal declaration [-Wshadow]\n../../../src/include/c.h:700:24: warning: shadowed declaration is here\n[-Wshadow]\ntab-complete.c:5129:37: warning: declaration of ‘text’ shadows a\nglobal declaration [-Wshadow]\n../../../src/include/c.h:700:24: warning: shadowed declaration is here\n[-Wshadow]\ntab-complete.c:5140:33: warning: declaration of ‘text’ shadows a\nglobal declaration [-Wshadow]\n../../../src/include/c.h:700:24: warning: shadowed declaration is here\n[-Wshadow]\ntab-complete.c:5148:43: warning: declaration of ‘text’ shadows a\nglobal declaration [-Wshadow]\n../../../src/include/c.h:700:24: warning: shadowed declaration is here\n[-Wshadow]\ntab-complete.c:5164:40: warning: declaration of ‘text’ shadows a\nglobal declaration [-Wshadow]\n../../../src/include/c.h:700:24: warning: shadowed declaration is here\n[-Wshadow]\ntab-complete.c:5172:50: warning: declaration of ‘text’ shadows a\nglobal declaration [-Wshadow]\n../../../src/include/c.h:700:24: warning: shadowed declaration is here\n[-Wshadow]\ntab-complete.c:5234:19: warning: declaration of ‘text’ shadows a\nglobal declaration [-Wshadow]\n../../../src/include/c.h:700:24: warning: shadowed declaration is here\n[-Wshadow]\ntab-complete.c:5605:32: warning: declaration 
of ‘text’ shadows a\nglobal declaration [-Wshadow]\n../../../src/include/c.h:700:24: warning: shadowed declaration is here\n[-Wshadow]\ntab-complete.c:5685:33: warning: declaration of ‘text’ shadows a\nglobal declaration [-Wshadow]\n../../../src/include/c.h:700:24: warning: shadowed declaration is here\n[-Wshadow]\ntab-complete.c:5733:37: warning: declaration of ‘text’ shadows a\nglobal declaration [-Wshadow]\n../../../src/include/c.h:700:24: warning: shadowed declaration is here\n[-Wshadow]\ntab-complete.c:5778:33: warning: declaration of ‘text’ shadows a\nglobal declaration [-Wshadow]\n../../../src/include/c.h:700:24: warning: shadowed declaration is here\n[-Wshadow]\ntab-complete.c:5910:27: warning: declaration of ‘text’ shadows a\nglobal declaration [-Wshadow]\n../../../src/include/c.h:700:24: warning: shadowed declaration is here\n[-Wshadow]\n\n======\n\nThe following was fixed by changing the name of the global static from\n'progname' to 'prog_name'.\n\npg_createsubscriber.c:341:46: warning: declaration of ‘progname’\nshadows a global declaration [-Wshadow]\npg_createsubscriber.c:114:20: warning: shadowed declaration is here [-Wshadow]\n\n~\n\nThe following was fixed by changing the name of the global static from\n'dbinfo' to 'db_info'.\n\npg_createsubscriber.c:437:25: warning: declaration of ‘dbinfo’ shadows\na global declaration [-Wshadow]\npg_createsubscriber.c:121:31: warning: shadowed declaration is here [-Wshadow]\npg_createsubscriber.c:734:40: warning: declaration of ‘dbinfo’ shadows\na global declaration [-Wshadow]\npg_createsubscriber.c:121:31: warning: shadowed declaration is here [-Wshadow]\npg_createsubscriber.c:841:46: warning: declaration of ‘dbinfo’ shadows\na global declaration [-Wshadow]\npg_createsubscriber.c:121:31: warning: shadowed declaration is here [-Wshadow]\npg_createsubscriber.c:961:47: warning: declaration of ‘dbinfo’ shadows\na global declaration [-Wshadow]\npg_createsubscriber.c:121:31: warning: shadowed declaration is here [-Wshadow]\npg_createsubscriber.c:1104:41: warning: declaration of ‘dbinfo’\nshadows a global declaration [-Wshadow]\npg_createsubscriber.c:121:31: warning: shadowed declaration is here [-Wshadow]\npg_createsubscriber.c:1142:41: warning: declaration of ‘dbinfo’\nshadows a global declaration [-Wshadow]\npg_createsubscriber.c:121:31: warning: shadowed declaration is here [-Wshadow]\npg_createsubscriber.c:1182:45: warning: declaration of ‘dbinfo’\nshadows a global declaration [-Wshadow]\npg_createsubscriber.c:121:31: warning: shadowed declaration is here [-Wshadow]\npg_createsubscriber.c:1242:54: warning: declaration of ‘dbinfo’\nshadows a global declaration [-Wshadow]\npg_createsubscriber.c:121:31: warning: shadowed declaration is here [-Wshadow]\npg_createsubscriber.c:1272:56: warning: declaration of ‘dbinfo’\nshadows a global declaration [-Wshadow]\npg_createsubscriber.c:121:31: warning: shadowed declaration is here [-Wshadow]\npg_createsubscriber.c:1314:70: warning: declaration of ‘dbinfo’\nshadows a global declaration [-Wshadow]\npg_createsubscriber.c:121:31: warning: shadowed declaration is here [-Wshadow]\npg_createsubscriber.c:1363:60: warning: declaration of ‘dbinfo’\nshadows a global declaration [-Wshadow]\npg_createsubscriber.c:121:31: warning: shadowed declaration is here [-Wshadow]\npg_createsubscriber.c:1553:57: warning: declaration of ‘dbinfo’\nshadows a global declaration [-Wshadow]\npg_createsubscriber.c:121:31: warning: shadowed declaration is here [-Wshadow]\npg_createsubscriber.c:1627:55: warning: declaration of 
‘dbinfo’\nshadows a global declaration [-Wshadow]\npg_createsubscriber.c:121:31: warning: shadowed declaration is here [-Wshadow]\npg_createsubscriber.c:1681:64: warning: declaration of ‘dbinfo’\nshadows a global declaration [-Wshadow]\npg_createsubscriber.c:121:31: warning: shadowed declaration is here [-Wshadow]\npg_createsubscriber.c:1739:69: warning: declaration of ‘dbinfo’\nshadows a global declaration [-Wshadow]\npg_createsubscriber.c:121:31: warning: shadowed declaration is here [-Wshadow]\npg_createsubscriber.c:1830:64: warning: declaration of ‘dbinfo’\nshadows a global declaration [-Wshadow]\npg_createsubscriber.c:121:31: warning: shadowed declaration is here [-Wshadow]\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Thu, 12 Sep 2024 10:32:50 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": true, "msg_subject": "Remove shadowed declaration warnings" }, { "msg_contents": "On Thu, 12 Sept 2024 at 12:33, Peter Smith <smithpb2250@gmail.com> wrote:\n> I normally build the code with warnings enabled (specifically,\n> -Wshadow) which exposes many \"shadowed\" declarations.\n>\n> It would be better to reduce warnings wherever it's easy to do so,\n> because if we always see/ignore lots of warnings then sooner or later\n> something important may escape attention. In any case, just removing\n> these warnings sometimes makes the code more readable by removing any\n> ambiguity.\n\n0fe954c28 did add -Wshadow=compatible-local to the standard set of\ncomplication flags. I felt it was diminishing returns after that, but\n-Wshadow=local would be the next step before going full -Wshadow.\n\nThere was justification for -Wshadow=compatible-local because there\nhas been > 1 bug (see af7d270dd) fixed that would have been found if\nwe'd had that sooner. Have we ever had any bugs that would have been\nhighlighted by -Wshadow but not -Wshadow=compatible-local? I'd be\ncurious to know if you do go through this process of weeding these out\nif you do find any bugs as a result.\n\nI also wonder if we could ever get this to a stable point. I didn't\ntake the time to understand what 388e80132 did. Is that going to\nprotect us from getting warnings where fixing them is beyond our\ncontrol for full -Wshadow?\n\nDavid\n\n\n", "msg_date": "Thu, 12 Sep 2024 12:58:19 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Remove shadowed declaration warnings" }, { "msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> On Thu, 12 Sept 2024 at 12:33, Peter Smith <smithpb2250@gmail.com> wrote:\n>> I normally build the code with warnings enabled (specifically,\n>> -Wshadow) which exposes many \"shadowed\" declarations.\n\n> 0fe954c28 did add -Wshadow=compatible-local to the standard set of\n> complication flags. I felt it was diminishing returns after that, but\n> -Wshadow=local would be the next step before going full -Wshadow.\n\nI think that insisting that local declarations not shadow globals\nis an anti-pattern, and I'll vote against any proposal to make\nthat a standard warning. Impoverished as C is, it does have block\nstructure; why would we want to throw that away by (in effect)\ndemanding a single flat namespace for the entire program?\n\nI do grant that sometimes shadowing of locals can cause bugs. 
I don't\nrecall right now why we opted for -Wshadow=compatible-local over\n-Wshadow=local, but we could certainly take another look at that.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 11 Sep 2024 22:02:59 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Remove shadowed declaration warnings" }, { "msg_contents": "On Thu, 12 Sept 2024 at 14:03, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I do grant that sometimes shadowing of locals can cause bugs. I don't\n> recall right now why we opted for -Wshadow=compatible-local over\n> -Wshadow=local, but we could certainly take another look at that.\n\nI don't recall if it was discussed, but certainly, it was an easier\ngoal to achieve.\n\nLooks like there are currently 47 warnings with -Wshadow=local. I'm\nnot quite sure what the definition of \"compatible\" is for this flag,\nbut looking at one warning in pgbench.c:4548, I see an int shadowing a\nbool. So maybe -Wshadow=local is worthwhile.\n\nDavid\n\n\n", "msg_date": "Thu, 12 Sep 2024 14:25:38 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Remove shadowed declaration warnings" }, { "msg_contents": "On 12.09.24 04:25, David Rowley wrote:\n> On Thu, 12 Sept 2024 at 14:03, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I do grant that sometimes shadowing of locals can cause bugs. I don't\n>> recall right now why we opted for -Wshadow=compatible-local over\n>> -Wshadow=local, but we could certainly take another look at that.\n> \n> I don't recall if it was discussed, but certainly, it was an easier\n> goal to achieve.\n> \n> Looks like there are currently 47 warnings with -Wshadow=local. I'm\n> not quite sure what the definition of \"compatible\" is for this flag,\n> but looking at one warning in pgbench.c:4548, I see an int shadowing a\n> bool. So maybe -Wshadow=local is worthwhile.\n\nAnother thing to keep in mind with these different shadow warning \nvariants is that we should try to keep them consistent across compilers, \nat least the common ones. The current -Wshadow=compatible-local is only \navailable in gcc, not in clang, so it's currently impossible to rely on \nclang to write warning-free code.\n\nOf course we have other warning flags that we use that don't exist in \nall compilers, but in my experience these are either for very esoteric \ncases or something that is very obviously wrong and a developer would \nnormally see immediately. For example, there is no warning flag in \nclang that mirrors the switch \"fallthrough\" labeling that we have set up \nwith gcc. But this is not as much of a problem in practice because the \nwrong code would usually misbehave in an obvious way and the issue can \nbe found by looking at the code with two lines of context. With the \nshadow warning, the issues are much harder to find visually, and in most \ncases they are not actually a problem.\n\nThe shadow warning levels in gcc and clang appear to be very differently \nstructured, so I'm not sure how we can improve interoperability here.\n\n\n\n", "msg_date": "Wed, 18 Sep 2024 09:53:11 +0200", "msg_from": "Peter Eisentraut <peter@eisentraut.org>", "msg_from_op": false, "msg_subject": "Re: Remove shadowed declaration warnings" } ]
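For readers following the thread above, the warning class being discussed is easiest to see with a tiny standalone sketch. The example below is not taken from the PostgreSQL tree; it merely reproduces the pattern behind the quoted diagnostics (a parameter shadowing a file-level name, like the `text` typedef in c.h) and the rename-style fix used in the proposed patch.

```
/* shadow_demo.c -- minimal illustration of -Wshadow; not PostgreSQL code.
   Compile with: gcc -Wshadow -c shadow_demo.c */
#include <string.h>

/* A global typedef named "text", analogous to the one in src/include/c.h. */
typedef struct varlena_like { int len; char data[1]; } text;

/* gcc -Wshadow: "declaration of 'text' shadows a global declaration" */
size_t shadowed_len(const char *text)
{
    return strlen(text);
}

/* Renaming the parameter, as the proposed patch does, removes the warning
   without changing behavior. */
size_t renamed_len(const char *txt)
{
    return strlen(txt);
}
```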
[ { "msg_contents": "Hi, \n\nThe documentation on extending using C functions, \nleaves a blank on how to call other internal Postgres functions.\nThis can leave first-timers hanging.\n\nI think it’d be helpful to spend some paragraphs on discussing the DirectFunctionCall API.\n\nHere’s an attempt (also a PR[0]).\n\nHere’s the relevant HTML snippet for convenience:\nTo call another version-1 function, you can use DirectFunctionCalln(func, arg1,...,argn). This is particularly useful when you want to call functions defined in the standard internal library, by using an interface similar to their SQL signature.\n\nDifferent flavors of similar macros can be found in fmgr.h. The main point though is that they expect a C function name to call as their first argument (or its Oid in some cases), and actual arguments should be supplied as Datums. They always return Datum.\n\nFor example, to call the starts_with(text, text) from C, you can search through the catalog and find out that its C implementation is based on the Datum text_starts_with(PG_FUNCTION_ARGS) function.\n\nIn fmgr.h there are also available macros the facilitate conversions between C types and Datum. For example to turn text* into Datum, you can use DatumGetTextPP(X). If your extension defines additional types, it is usually convenient to define similar macros for these types too.\n\nI’ve also added the below example function:\n\nPG_FUNCTION_INFO_V1(t_starts_with);\n\nDatum\nt_starts_with(PG_FUNCTION_ARGS)\n{\n Datum t1 = PG_GETARG_DATUM(0);\n Datum t2 = PG_GETARG_DATUM(1);\n bool bool_res;\n\n Datum datum_res = DirectFunctionCall2(text_starts_with, t1, t2);\n bool_res = DatumGetBool(datum_res);\n\n PG_RETURN_BOOL(bool_res);\n}\nPS1: I was not sure if src/tutorial is still relevant with this part of the documentation.\nIf so, it needs updating too.\n\n\n[0] https://github.com/Florents-Tselai/postgres/pull/1/commits/1651b7bb68e0f9c2b61e1462367295d846d253ec\n", "msg_date": "Thu, 12 Sep 2024 12:41:42 +0300", "msg_from": "Florents Tselai <florents.tselai@gmail.com>", "msg_from_op": true, "msg_subject": "[PATCH] Add some documentation on how to call internal functions" } ]
[ { "msg_contents": "Hi,\n\nAttached is a self-sufficient patch extracted from a larger patchset\n[1]. The entire patchset probably will not proceed further in the\nnearest future. Since there was interest in this particular patch it\ndeserves being discussed in a separate thread.\n\nCurrently we support 32-bit integer values in GUCs, but don't support\n64-bit ones. The proposed patch adds this support.\n\nFirstly, it adds DefineCustomInt64Variable() which can be used by the\nextension authors.\n\nSecondly, the following core GUCs are made 64-bit:\n\n```\nautovacuum_freeze_min_age\nautovacuum_freeze_max_age\nautovacuum_freeze_table_age\nautovacuum_multixact_freeze_min_age\nautovacuum_multixact_freeze_max_age\nautovacuum_multixact_freeze_table_age\n```\n\nI see several open questions with the patch in its current state.\n\nFirstly, I'm not sure if it is beneficial to affect the named GUCs out\nof the context of the larger patchset. Perhaps we have better GUCs\nthat could benefit from being 64-bit? Or should we just leave alone\nthe core GUCs and focus on providing DefineCustomInt64Variable() ?\n\nSecondly, DefineCustomInt64Variable() is not test-covered. Turned out\nit was not even defined (although declared) in the original patch.\nThis was fixed in the attached version. Maybe one of the test modules\ncould use it even if it makes little sense for this particular module?\nFor instance, test/modules/worker_spi/ could use it for\nworker_spi.naptime.\n\nLast but not least, large values like 12345678912345 could be\ndifficult to read. Perhaps we should also support 12_345_678_912_345\nsyntax? This is not implemented in the attached patch and arguably\ncould be discussed separately when and if we merge it.\n\nThoughts?\n\n[1]: https://www.postgresql.org/message-id/flat/CAJ7c6TP9Ce021ebJ%3D5zOhMjiG3Wqg4hO6Mg0WsccErvAD9vZYA%40mail.gmail.com\n\n--\nBest regards,\nAleksander Alekseev", "msg_date": "Thu, 12 Sep 2024 14:08:15 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": true, "msg_subject": "[PATCH] Support Int64 GUCs" }, { "msg_contents": "Hi, Alexander!\nThank you for working on this!\n\nOn Thu, 12 Sept 2024 at 15:08, Aleksander Alekseev <aleksander@timescale.com>\nwrote:\n\n> Hi,\n>\n> Attached is a self-sufficient patch extracted from a larger patchset\n> [1]. The entire patchset probably will not proceed further in the\n> nearest future. Since there was interest in this particular patch it\n> deserves being discussed in a separate thread.\n>\n> Currently we support 32-bit integer values in GUCs, but don't support\n> 64-bit ones. The proposed patch adds this support.\n>\n> Firstly, it adds DefineCustomInt64Variable() which can be used by the\n> extension authors.\n>\n> Secondly, the following core GUCs are made 64-bit:\n>\n> ```\n> autovacuum_freeze_min_age\n> autovacuum_freeze_max_age\n> autovacuum_freeze_table_age\n> autovacuum_multixact_freeze_min_age\n> autovacuum_multixact_freeze_max_age\n> autovacuum_multixact_freeze_table_age\n> ```\n>\n> I see several open questions with the patch in its current state.\n>\n> Firstly, I'm not sure if it is beneficial to affect the named GUCs out\n> of the context of the larger patchset. Perhaps we have better GUCs\n> that could benefit from being 64-bit? 
Or should we just leave alone\n> the core GUCs and focus on providing DefineCustomInt64Variable() ?\n>\nI think the direction is good and delivering 64-bit GUCs is very much worth\ncommitting.\nThe patch itself looks good, but we could need to add locks against\nconcurrently modifying 64-bit values, which could be non-atomic on older\narchitectures.\n\n\n> Secondly, DefineCustomInt64Variable() is not test-covered. Turned out\n> it was not even defined (although declared) in the original patch.\n> This was fixed in the attached version. Maybe one of the test modules\n> could use it even if it makes little sense for this particular module?\n> For instance, test/modules/worker_spi/ could use it for\n> worker_spi.naptime.\n>\n> Last but not least, large values like 12345678912345 could be\n> difficult to read. Perhaps we should also support 12_345_678_912_345\n> syntax? This is not implemented in the attached patch and arguably\n> could be discussed separately when and if we merge it.\n>\n\nI think 12345678912345 is good enough. Underscore dividers make reading\nlittle bit easier but look weird overall. I can't remember other places\nwhere we output long numbers with dividers.\n\nRegards,\nPavel Borisov\nSupabase\n\nHi, Alexander!Thank you for working on this!On Thu, 12 Sept 2024 at 15:08, Aleksander Alekseev <aleksander@timescale.com> wrote:Hi,\n\nAttached is a self-sufficient patch extracted from a larger patchset\n[1]. The entire patchset probably will not proceed further in the\nnearest future. Since there was interest in this particular patch it\ndeserves being discussed in a separate thread.\n\nCurrently we support 32-bit integer values in GUCs, but don't support\n64-bit ones. The proposed patch adds this support.\n\nFirstly, it adds DefineCustomInt64Variable() which can be used by the\nextension authors.\n\nSecondly, the following core GUCs are made 64-bit:\n\n```\nautovacuum_freeze_min_age\nautovacuum_freeze_max_age\nautovacuum_freeze_table_age\nautovacuum_multixact_freeze_min_age\nautovacuum_multixact_freeze_max_age\nautovacuum_multixact_freeze_table_age\n```\n\nI see several open questions with the patch in its current state.\n\nFirstly, I'm not sure if it is beneficial to affect the named GUCs out\nof the context of the larger patchset. Perhaps we have better GUCs\nthat could benefit from being 64-bit? Or should we just leave alone\nthe core GUCs and focus on providing DefineCustomInt64Variable() ?I think the direction is good and delivering 64-bit GUCs is very much worth committing.The patch itself looks good, but we could need to add locks against concurrently modifying 64-bit values, which could be non-atomic on older architectures. \nSecondly, DefineCustomInt64Variable() is not test-covered. Turned out\nit was not even defined (although declared) in the original patch.\nThis was fixed in the attached version. Maybe one of the test modules\ncould use it even if it makes little sense for this particular module?\nFor instance, test/modules/worker_spi/ could use it for\nworker_spi.naptime.\n\nLast but not least, large values like 12345678912345 could be\ndifficult to read. Perhaps we should also support 12_345_678_912_345\nsyntax? This is not implemented in the attached patch and arguably\ncould be discussed separately when and if we merge it. I think 12345678912345 is good enough. Underscore dividers make reading little bit easier but look weird overall. 
I can't remember other places where we output long numbers with dividers.Regards,Pavel BorisovSupabase", "msg_date": "Thu, 12 Sep 2024 15:29:14 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Support Int64 GUCs" }, { "msg_contents": "Hi Pavel,\n\n> I think the direction is good and delivering 64-bit GUCs is very much worth committing.\n> The patch itself looks good, but we could need to add locks against concurrently modifying 64-bit values, which could be non-atomic on older architectures.\n\nThanks for the feedback.\n\n> I think 12345678912345 is good enough. Underscore dividers make reading little bit easier but look weird overall. I can't remember other places where we output long numbers with dividers.\n\nWe already support this in SQL:\n\npsql (18devel)\n=# SELECT 123_456;\n ?column?\n----------\n 123456\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Thu, 12 Sep 2024 14:34:40 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Support Int64 GUCs" }, { "msg_contents": "On Thu, Sep 12, 2024 at 02:08:15PM +0300, Aleksander Alekseev wrote:\n> Secondly, the following core GUCs are made 64-bit:\n> \n> ```\n> autovacuum_freeze_min_age\n> autovacuum_freeze_max_age\n> autovacuum_freeze_table_age\n> autovacuum_multixact_freeze_min_age\n> autovacuum_multixact_freeze_max_age\n> autovacuum_multixact_freeze_table_age\n> ```\n> \n> I see several open questions with the patch in its current state.\n> \n> Firstly, I'm not sure if it is beneficial to affect the named GUCs out\n> of the context of the larger patchset. Perhaps we have better GUCs\n> that could benefit from being 64-bit? Or should we just leave alone\n> the core GUCs and focus on providing DefineCustomInt64Variable() ?\n\nI don't understand why we would want to make these GUCs 64-bit. All of the\nallowed values fit in an int32, so AFAICT this would only serve to mislead\nusers into thinking they could set these much higher than they can/should.\n\nTBH I'm quite skeptical that this would even be particularly useful for\nextension authors. In what cases would a floating point value not suffice?\nI'm not totally opposed to the idea of 64-bit GUCs, but I'd like more\ninformation about the motivation.\n\n-- \nnathan\n\n\n", "msg_date": "Thu, 12 Sep 2024 08:46:15 -0500", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Support Int64 GUCs" }, { "msg_contents": "On Thu, Sep 12, 2024 at 2:29 PM Pavel Borisov <pashkin.elfe@gmail.com> wrote:\n> Hi, Alexander!\n> Thank you for working on this!\n>\n> On Thu, 12 Sept 2024 at 15:08, Aleksander Alekseev <aleksander@timescale.com> wrote:\n>>\n>> Hi,\n>>\n>> Attached is a self-sufficient patch extracted from a larger patchset\n>> [1]. The entire patchset probably will not proceed further in the\n>> nearest future. Since there was interest in this particular patch it\n>> deserves being discussed in a separate thread.\n>>\n>> Currently we support 32-bit integer values in GUCs, but don't support\n>> 64-bit ones. 
The proposed patch adds this support.\n>>\n>> Firstly, it adds DefineCustomInt64Variable() which can be used by the\n>> extension authors.\n>>\n>> Secondly, the following core GUCs are made 64-bit:\n>>\n>> ```\n>> autovacuum_freeze_min_age\n>> autovacuum_freeze_max_age\n>> autovacuum_freeze_table_age\n>> autovacuum_multixact_freeze_min_age\n>> autovacuum_multixact_freeze_max_age\n>> autovacuum_multixact_freeze_table_age\n>> ```\n>>\n>> I see several open questions with the patch in its current state.\n>>\n>> Firstly, I'm not sure if it is beneficial to affect the named GUCs out\n>> of the context of the larger patchset. Perhaps we have better GUCs\n>> that could benefit from being 64-bit? Or should we just leave alone\n>> the core GUCs and focus on providing DefineCustomInt64Variable() ?\n>\n> I think the direction is good and delivering 64-bit GUCs is very much worth committing.\n> The patch itself looks good, but we could need to add locks against concurrently modifying 64-bit values, which could be non-atomic on older architectures.\n\nGUCs are located in the local memory. No concurrent read/writes of\nthem are possible. It might happen that SIGHUP comes during\nread/write of the GUC variable. But, that's protected the other way:\nSignalHandlerForConfigReload() just sets the ConfigReloadPending flag,\nwhich is processed during CHECK_FOR_INTERRUPTS(). So, I don't see a\nneed to locks here.\n\n------\nRegards,\nAlexander Korotkov\nSupabase\n\n\n", "msg_date": "Fri, 20 Sep 2024 15:13:29 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Support Int64 GUCs" }, { "msg_contents": "Hi, Aleksander!\n\nThank you for your work on this subject.\n\nOn Thu, Sep 12, 2024 at 2:08 PM Aleksander Alekseev\n<aleksander@timescale.com> wrote:\n> Attached is a self-sufficient patch extracted from a larger patchset\n> [1]. The entire patchset probably will not proceed further in the\n> nearest future. Since there was interest in this particular patch it\n> deserves being discussed in a separate thread.\n>\n> Currently we support 32-bit integer values in GUCs, but don't support\n> 64-bit ones. The proposed patch adds this support.\n>\n> Firstly, it adds DefineCustomInt64Variable() which can be used by the\n> extension authors.\n>\n> Secondly, the following core GUCs are made 64-bit:\n>\n> ```\n> autovacuum_freeze_min_age\n> autovacuum_freeze_max_age\n> autovacuum_freeze_table_age\n> autovacuum_multixact_freeze_min_age\n> autovacuum_multixact_freeze_max_age\n> autovacuum_multixact_freeze_table_age\n> ```\n>\n> I see several open questions with the patch in its current state.\n>\n> Firstly, I'm not sure if it is beneficial to affect the named GUCs out\n> of the context of the larger patchset. Perhaps we have better GUCs\n> that could benefit from being 64-bit? Or should we just leave alone\n> the core GUCs and focus on providing DefineCustomInt64Variable() ?\n\nIt doesn't look like these *_age GUCs could benefit from being 64-bit,\nbefore 64-bit transaction ids get in. However, I think there are some\nbetter candidates.\n\nautovacuum_vacuum_threshold\nautovacuum_vacuum_insert_threshold\nautovacuum_analyze_threshold\n\nThis GUCs specify number of tuples before vacuum/analyze. That could\nbe more than 2^31. With large tables of small tuples, I can't even\nsay this is always impractical to have values greater than 2^31.\n\n> Secondly, DefineCustomInt64Variable() is not test-covered. 
Turned out\n> it was not even defined (although declared) in the original patch.\n> This was fixed in the attached version. Maybe one of the test modules\n> could use it even if it makes little sense for this particular module?\n> For instance, test/modules/worker_spi/ could use it for\n> worker_spi.naptime.\n\nI don't think there are good candidates among existing extension GUCs.\nI think we could add something for pure testing purposes somewhere in\nsrc/test/modules.\n\n> Last but not least, large values like 12345678912345 could be\n> difficult to read. Perhaps we should also support 12_345_678_912_345\n> syntax? This is not implemented in the attached patch and arguably\n> could be discussed separately when and if we merge it.\n\nI also think we're good with 12345678912345 so far.\n\n------\nRegards,\nAlexander Korotkov\nSupabase\n\n\n", "msg_date": "Fri, 20 Sep 2024 15:33:21 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Support Int64 GUCs" }, { "msg_contents": "Hi, Alexander!\n\n> Thank you for your work on this subject.\n\nThanks for your feedback.\n\n> It doesn't look like these *_age GUCs could benefit from being 64-bit,\n> before 64-bit transaction ids get in. However, I think there are some\n> better candidates.\n>\n> autovacuum_vacuum_threshold\n> autovacuum_vacuum_insert_threshold\n> autovacuum_analyze_threshold\n>\n> This GUCs specify number of tuples before vacuum/analyze. That could\n> be more than 2^31. With large tables of small tuples, I can't even\n> say this is always impractical to have values greater than 2^31.\n\nSounds good to me. Fixed.\n\n> > Secondly, DefineCustomInt64Variable() is not test-covered. Turned out\n> > it was not even defined (although declared) in the original patch.\n> > This was fixed in the attached version. Maybe one of the test modules\n> > could use it even if it makes little sense for this particular module?\n> > For instance, test/modules/worker_spi/ could use it for\n> > worker_spi.naptime.\n>\n> I don't think there are good candidates among existing extension GUCs.\n> I think we could add something for pure testing purposes somewhere in\n> src/test/modules.\n\nI found a great candidate in src/test/modules/delay_execution:\n\n```\n DefineCustomIntVariable(\"delay_execution.post_planning_lock_id\",\n \"Sets the advisory lock ID to be\nlocked/unlocked after planning.\",\n```\n\nAdvisory lock IDs are bigints [1]. I modified the module to use Int64's.\n\nI guess it may also answer Nathan's question.\n\n> > Last but not least, large values like 12345678912345 could be\n> > difficult to read. Perhaps we should also support 12_345_678_912_345\n> > syntax? This is not implemented in the attached patch and arguably\n> > could be discussed separately when and if we merge it.\n>\n> I also think we're good with 12345678912345 so far.\n\nFair enough.\n\nPFA the updated patch.\n\n[1]: https://www.postgresql.org/docs/current/functions-admin.html#FUNCTIONS-ADVISORY-LOCKS\n\n--\nBest regards,\nAleksander Alekseev", "msg_date": "Tue, 24 Sep 2024 12:27:20 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Support Int64 GUCs" }, { "msg_contents": "Hi,\n\n> PFA the updated patch.\n\nIt is worth mentioning that v2 should not be merged as is.\nParticularly although it changes the following GUCs:\n\n> autovacuum_vacuum_threshold\n> autovacuum_vacuum_insert_threshold\n> autovacuum_analyze_threshold\n\n... 
it doesn't affect the code that uses these GUCs which results in\ncasting int64s to ints.\n\nI would appreciate a bit more feedback on v2. If the community is fine\nwith modifying these GUCs I will correct the patch in this respect.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Tue, 24 Sep 2024 12:35:03 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Support Int64 GUCs" }, { "msg_contents": "Hi, Aleksander Alekseev\r\n\r\nThanks for updating the patch.\r\n\r\n> On Sep 24, 2024, at 17:27, Aleksander Alekseev <aleksander@timescale.com> wrote:\r\n> \r\n> Hi, Alexander!\r\n> \r\n>> Thank you for your work on this subject.\r\n> \r\n> Thanks for your feedback.\r\n> \r\n>> It doesn't look like these *_age GUCs could benefit from being 64-bit,\r\n>> before 64-bit transaction ids get in. However, I think there are some\r\n>> better candidates.\r\n>> \r\n>> autovacuum_vacuum_threshold\r\n>> autovacuum_vacuum_insert_threshold\r\n>> autovacuum_analyze_threshold\r\n>> \r\n>> This GUCs specify number of tuples before vacuum/analyze. That could\r\n>> be more than 2^31. With large tables of small tuples, I can't even\r\n>> say this is always impractical to have values greater than 2^31.\r\n> \r\n> Sounds good to me. Fixed.\r\n\r\nI found the autovacuum_vacuum_threshold, autovacuum_vacuum_insert_threshold\r\nand autovacuum_analyze_threshold is change to int64 for relation option,\r\nhowever the GUCs are still integers.\r\n\r\n```\r\npostgres=# select * from pg_settings where name = 'autovacuum_vacuum_threshold' \\gx\r\n-[ RECORD 1 ]---+------------------------------------------------------------\r\nname | autovacuum_vacuum_threshold\r\nsetting | 50\r\nunit |\r\ncategory | Autovacuum\r\nshort_desc | Minimum number of tuple updates or deletes prior to vacuum.\r\nextra_desc |\r\ncontext | sighup\r\nvartype | integer\r\nsource | default\r\nmin_val | 0\r\nmax_val | 2147483647\r\nenumvals |\r\nboot_val | 50\r\nreset_val | 50\r\nsourcefile |\r\nsourceline |\r\npending_restart | f\r\n```\r\n\r\nIs there something I missed?\r\n\r\n> \r\n>>> Secondly, DefineCustomInt64Variable() is not test-covered. Turned out\r\n>>> it was not even defined (although declared) in the original patch.\r\n>>> This was fixed in the attached version. Maybe one of the test modules\r\n>>> could use it even if it makes little sense for this particular module?\r\n>>> For instance, test/modules/worker_spi/ could use it for\r\n>>> worker_spi.naptime.\r\n>> \r\n>> I don't think there are good candidates among existing extension GUCs.\r\n>> I think we could add something for pure testing purposes somewhere in\r\n>> src/test/modules.\r\n> \r\n> I found a great candidate in src/test/modules/delay_execution:\r\n> \r\n> ```\r\n> DefineCustomIntVariable(\"delay_execution.post_planning_lock_id\",\r\n> \"Sets the advisory lock ID to be\r\n> locked/unlocked after planning.\",\r\n> ```\r\n> \r\n> Advisory lock IDs are bigints [1]. 
I modified the module to use Int64's.\r\n\r\nI check the delay_execution.post_planning_lock_id parameter, and it’s varitype\r\nis int64, maybe bigint is better, see [1].\r\n\r\n```\r\npostgres=# select * from pg_settings where name = 'delay_execution.post_planning_lock_id' \\gx\r\n-[ RECORD 1 ]---+----------------------------------------------------------------\r\nname | delay_execution.post_planning_lock_id\r\nsetting | 0\r\nunit |\r\ncategory | Customized Options\r\nshort_desc | Sets the advisory lock ID to be locked/unlocked after planning.\r\nextra_desc | Zero disables the delay.\r\ncontext | user\r\nvartype | int64\r\nsource | default\r\nmin_val | 0\r\nmax_val | 9223372036854775807\r\nenumvals |\r\nboot_val | 0\r\nreset_val | 0\r\nsourcefile |\r\nsourceline |\r\npending_restart | f\r\n```\r\n\r\n[1] https://www.postgresql.org/docs/current/datatype-numeric.html\r\n\r\n\r\n--\r\nRegrads,\r\nJapin Li\r\n\r\n\r\n", "msg_date": "Tue, 24 Sep 2024 10:55:16 +0000", "msg_from": "Li Japin <japinli@hotmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Support Int64 GUCs" }, { "msg_contents": "On Tue, Sep 24, 2024 at 12:27:20PM +0300, Aleksander Alekseev wrote:\n>> It doesn't look like these *_age GUCs could benefit from being 64-bit,\n>> before 64-bit transaction ids get in. However, I think there are some\n>> better candidates.\n>>\n>> autovacuum_vacuum_threshold\n>> autovacuum_vacuum_insert_threshold\n>> autovacuum_analyze_threshold\n>>\n>> This GUCs specify number of tuples before vacuum/analyze. That could\n>> be more than 2^31. With large tables of small tuples, I can't even\n>> say this is always impractical to have values greater than 2^31.\n>\n> [...]\n> \n> I found a great candidate in src/test/modules/delay_execution:\n> \n> ```\n> DefineCustomIntVariable(\"delay_execution.post_planning_lock_id\",\n> \"Sets the advisory lock ID to be\n> locked/unlocked after planning.\",\n> ```\n> \n> Advisory lock IDs are bigints [1]. I modified the module to use Int64's.\n> \n> I guess it may also answer Nathan's question.\n\nHm. I'm not sure I find any of these to be particularly convincing\nexamples of why we need int64 GUCs. Yes, the GUCs in question could\npotentially be set to higher values, but I've yet to hear of this being a\nproblem in practice. We might not want to encourage such high values,\neither.\n\n-- \nnathan\n\n\n", "msg_date": "Tue, 24 Sep 2024 09:48:04 -0500", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Support Int64 GUCs" }, { "msg_contents": "Hi,\n\n> I found the autovacuum_vacuum_threshold, autovacuum_vacuum_insert_threshold\n> and autovacuum_analyze_threshold is change to int64 for relation option,\n> however the GUCs are still integers.\n>\n> ```\n> postgres=# select * from pg_settings where name = 'autovacuum_vacuum_threshold' \\gx\n> -[ RECORD 1 ]---+------------------------------------------------------------\n> name | autovacuum_vacuum_threshold\n> setting | 50\n> unit |\n> category | Autovacuum\n> short_desc | Minimum number of tuple updates or deletes prior to vacuum.\n> extra_desc |\n> context | sighup\n> vartype | integer\n> source | default\n> min_val | 0\n> max_val | 2147483647\n> enumvals |\n> boot_val | 50\n> reset_val | 50\n> sourcefile |\n> sourceline |\n> pending_restart | f\n> ```\n>\n> Is there something I missed?\n\nNo, you found a bug. The patch didn't change ConfigureNamesInt64[]\nthus these GUCs were still treated as int32s.\n\nHere is the corrected patch v3. 
Thanks!\n\n=# select * from pg_settings where name = 'autovacuum_vacuum_threshold';\n-[ RECORD 1 ]---+------------------------------------------------------------\nname | autovacuum_vacuum_threshold\nsetting | 1234605616436508552\nunit |\ncategory | Autovacuum\nshort_desc | Minimum number of tuple updates or deletes prior to vacuum.\nextra_desc |\ncontext | sighup\nvartype | int64\nsource | configuration file\nmin_val | 0\nmax_val | 9223372036854775807\nenumvals |\nboot_val | 50\nreset_val | 1234605616436508552\nsourcefile | /Users/eax/pginstall/data-master/postgresql.conf\nsourceline | 664\npending_restart | f\n\n\n-- \nBest regards,\nAleksander Alekseev", "msg_date": "Wed, 25 Sep 2024 14:03:12 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Support Int64 GUCs" }, { "msg_contents": "On Sep 25, 2024, at 19:03, Aleksander Alekseev <aleksander@timescale.com> wrote:\n\nHi,\n\nI found the autovacuum_vacuum_threshold, autovacuum_vacuum_insert_threshold\nand autovacuum_analyze_threshold is change to int64 for relation option,\nhowever the GUCs are still integers.\n\n```\npostgres=# select * from pg_settings where name = 'autovacuum_vacuum_threshold' \\gx\n-[ RECORD 1 ]---+------------------------------------------------------------\nname | autovacuum_vacuum_threshold\nsetting | 50\nunit |\ncategory | Autovacuum\nshort_desc | Minimum number of tuple updates or deletes prior to vacuum.\nextra_desc |\ncontext | sighup\nvartype | integer\nsource | default\nmin_val | 0\nmax_val | 2147483647\nenumvals |\nboot_val | 50\nreset_val | 50\nsourcefile |\nsourceline |\npending_restart | f\n```\n\nIs there something I missed?\n\nNo, you found a bug. The patch didn't change ConfigureNamesInt64[]\nthus these GUCs were still treated as int32s.\n\nHere is the corrected patch v3. 
Thanks!\n\n=# select * from pg_settings where name = 'autovacuum_vacuum_threshold';\n-[ RECORD 1 ]---+------------------------------------------------------------\nname | autovacuum_vacuum_threshold\nsetting | 1234605616436508552\nunit |\ncategory | Autovacuum\nshort_desc | Minimum number of tuple updates or deletes prior to vacuum.\nextra_desc |\ncontext | sighup\nvartype | int64\nsource | configuration file\nmin_val | 0\nmax_val | 9223372036854775807\nenumvals |\nboot_val | 50\nreset_val | 1234605616436508552\nsourcefile | /Users/eax/pginstall/data-master/postgresql.conf\nsourceline | 664\npending_restart | f\n\n\nThanks for updating the patch.\n\nAfter testing the v3 patch, I found it cannot correctly handle the number with underscore.\n\nSee:\n\n```\npostgres=# alter system set autovacuum_vacuum_threshold to 2_147_483_648;\nERROR: invalid value for parameter \"autovacuum_vacuum_threshold\": \"2_147_483_648\"\npostgres=# alter system set autovacuum_vacuum_threshold to 2_147_483_647;\nALTER SYSTEM\n```\n\nIIRC, the lexer only supports integers but not int64.\n\n--\nBest regards,\nJapin Li\n\n\n\n\n\n\n\n\n\n\nOn Sep 25, 2024, at 19:03, Aleksander Alekseev <aleksander@timescale.com> wrote:\n\nHi,\n\nI found the autovacuum_vacuum_threshold, autovacuum_vacuum_insert_threshold\nand autovacuum_analyze_threshold is change to int64 for relation option,\nhowever the GUCs are still integers.\n\n```\npostgres=# select * from pg_settings where name = 'autovacuum_vacuum_threshold' \\gx\n-[ RECORD 1 ]---+------------------------------------------------------------\nname            | autovacuum_vacuum_threshold\nsetting         | 50\nunit            |\ncategory        | Autovacuum\nshort_desc      | Minimum number of tuple updates or deletes prior to vacuum.\nextra_desc      |\ncontext         | sighup\nvartype         | integer\nsource          | default\nmin_val         | 0\nmax_val         | 2147483647\nenumvals        |\nboot_val        | 50\nreset_val       | 50\nsourcefile      |\nsourceline      |\npending_restart | f\n```\n\nIs there something I missed?\n\n\nNo, you found a bug. The patch didn't change ConfigureNamesInt64[]\nthus these GUCs were still treated as int32s.\n\nHere is the corrected patch v3. 
Thanks!\n\n=# select * from pg_settings where name = 'autovacuum_vacuum_threshold';\n-[ RECORD 1 ]---+------------------------------------------------------------\nname            | autovacuum_vacuum_threshold\nsetting         | 1234605616436508552\nunit            |\ncategory        | Autovacuum\nshort_desc      | Minimum number of tuple updates or deletes prior to vacuum.\nextra_desc      |\ncontext         | sighup\nvartype         | int64\nsource          | configuration file\nmin_val         | 0\nmax_val         | 9223372036854775807\nenumvals        |\nboot_val        | 50\nreset_val       | 1234605616436508552\nsourcefile      | /Users/eax/pginstall/data-master/postgresql.conf\nsourceline      | 664\npending_restart | f\n\n\n\n\n\n\n\nThanks for updating the patch.\n\n\n\n\nAfter testing the v3 patch, I found it cannot correctly handle the number with underscore.\n\n\n\n\nSee:\n\n\n\n\n```\n\npostgres=# alter system set autovacuum_vacuum_threshold to 2_147_483_648;\n\nERROR:  invalid value for parameter \"autovacuum_vacuum_threshold\": \"2_147_483_648\"\n\npostgres=# alter system set autovacuum_vacuum_threshold to 2_147_483_647;\n\nALTER SYSTEM\n\n```\n\n\n\n\nIIRC, the lexer only supports integers but not int64.\n\n--\nBest regards,\nJapin Li", "msg_date": "Wed, 25 Sep 2024 14:38:46 +0000", "msg_from": "Li Japin <japinli@hotmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Support Int64 GUCs" }, { "msg_contents": "Hi,\n\n> ```\n> postgres=# alter system set autovacuum_vacuum_threshold to 2_147_483_648;\n> ERROR: invalid value for parameter \"autovacuum_vacuum_threshold\": \"2_147_483_648\"\n> postgres=# alter system set autovacuum_vacuum_threshold to 2_147_483_647;\n> ALTER SYSTEM\n> ```\n>\n> IIRC, the lexer only supports integers but not int64.\n\nRight. Supporting underscores for GUCs was discussed above but not\nimplemented in the patch. As Alexander rightly pointed out this is not\na priority and can be discussed separately.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Wed, 25 Sep 2024 17:44:28 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Support Int64 GUCs" }, { "msg_contents": "FWIW, I agree with the upthread opinions that we shouldn't do this\n(invent int64 GUCs). I don't think we need the added code bloat\nand risk of breaking user code that isn't expecting this new GUC\ntype. We invented the notion of GUC units in part to ensure that\nint32 GUCs could be adapted to handle potentially-large numbers.\nAnd there's always the fallback position of using a float8 GUC\nif you really feel you need a wider range.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 25 Sep 2024 11:08:02 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Support Int64 GUCs" }, { "msg_contents": "Hi, Tom!\n\nOn Wed, Sep 25, 2024 at 6:08 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> FWIW, I agree with the upthread opinions that we shouldn't do this\n> (invent int64 GUCs). I don't think we need the added code bloat\n> and risk of breaking user code that isn't expecting this new GUC\n> type. 
We invented the notion of GUC units in part to ensure that\n> int32 GUCs could be adapted to handle potentially-large numbers.\n> And there's always the fallback position of using a float8 GUC\n> if you really feel you need a wider range.\n\nThank you for your feedback.\nDo you think we don't need int64 GUCs just now, when 64-bit\ntransaction ids are far from committable shape? Or do you think we\ndon't need int64 GUCs even if we have 64-bit transaction ids? If yes,\nwhat do you think we should use for *_age variables with 64-bit\ntransaction ids?\n\n------\nRegards,\nAlexander Korotkov\nSupabase\n\n\n", "msg_date": "Wed, 25 Sep 2024 21:04:36 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Support Int64 GUCs" }, { "msg_contents": "Hi Alexander\n I think we need int64 GUCs, due to these parameters(\nautovacuum_freeze_table_age, autovacuum_freeze_max_age,When a table age is\ngreater than any of these parameters an aggressive vacuum will be\nperformed, When we implementing xid64, is it still necessary to be in the\nint range? btw, I have a suggestion to record a warning in the log when the\ntable age exceeds the int maximum. These default values we can set a\nreasonable values ,for example autovacuum_freeze_max_age=4294967295 or\n8589934592.\n\n\nThanks\n\nAlexander Korotkov <aekorotkov@gmail.com> 于2024年9月26日周四 02:05写道:\n\n> Hi, Tom!\n>\n> On Wed, Sep 25, 2024 at 6:08 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > FWIW, I agree with the upthread opinions that we shouldn't do this\n> > (invent int64 GUCs). I don't think we need the added code bloat\n> > and risk of breaking user code that isn't expecting this new GUC\n> > type. We invented the notion of GUC units in part to ensure that\n> > int32 GUCs could be adapted to handle potentially-large numbers.\n> > And there's always the fallback position of using a float8 GUC\n> > if you really feel you need a wider range.\n>\n> Thank you for your feedback.\n> Do you think we don't need int64 GUCs just now, when 64-bit\n> transaction ids are far from committable shape? Or do you think we\n> don't need int64 GUCs even if we have 64-bit transaction ids? If yes,\n> what do you think we should use for *_age variables with 64-bit\n> transaction ids?\n>\n> ------\n> Regards,\n> Alexander Korotkov\n> Supabase\n>\n>\n>\n\n Hi Alexander       I think we need int64 GUCs, due to these  parameters( autovacuum_freeze_table_age, autovacuum_freeze_max_age,When a table age is greater than any of these parameters an aggressive vacuum will be performed, When we implementing xid64, is it still necessary to be in the int range? btw, I have a suggestion to record a warning in the log when the table age exceeds the int maximum. These default values we can set a reasonable values ,for example autovacuum_freeze_max_age=4294967295 or 8589934592.Thanks Alexander Korotkov <aekorotkov@gmail.com> 于2024年9月26日周四 02:05写道:Hi, Tom!\n\nOn Wed, Sep 25, 2024 at 6:08 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> FWIW, I agree with the upthread opinions that we shouldn't do this\n> (invent int64 GUCs).  I don't think we need the added code bloat\n> and risk of breaking user code that isn't expecting this new GUC\n> type.  
We invented the notion of GUC units in part to ensure that\n> int32 GUCs could be adapted to handle potentially-large numbers.\n> And there's always the fallback position of using a float8 GUC\n> if you really feel you need a wider range.\n\nThank you for your feedback.\nDo you think we don't need int64 GUCs just now, when 64-bit\ntransaction ids are far from committable shape?  Or do you think we\ndon't need int64 GUCs even if we have 64-bit transaction ids?  If yes,\nwhat do you think we should use for *_age variables with 64-bit\ntransaction ids?\n\n------\nRegards,\nAlexander Korotkov\nSupabase", "msg_date": "Thu, 26 Sep 2024 17:30:47 +0800", "msg_from": "wenhui qiu <qiuwenhuifx@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Support Int64 GUCs" }, { "msg_contents": "On Thu, Sep 26, 2024 at 12:30 PM wenhui qiu <qiuwenhuifx@gmail.com> wrote:\n> I think we need int64 GUCs, due to these parameters( autovacuum_freeze_table_age, autovacuum_freeze_max_age,When a table age is greater than any of these parameters an aggressive vacuum will be performed, When we implementing xid64, is it still necessary to be in the int range? btw, I have a suggestion to record a warning in the log when the table age exceeds the int maximum. These default values we can set a reasonable values ,for example autovacuum_freeze_max_age=4294967295 or 8589934592.\n\nIn principle, even with 64-bit transaction ids we could specify *_age\nGUCs as int32 with bigger units or as float8. That feels a bit\nawkward for me. This is why I queried more about Tom's opinion in\nmore details: did he propose to wait with int64 GUCs before we have\n64-bit transaction ids, or give up about them completely?\n\nLinks.\n1. https://www.postgresql.org/message-id/3649727.1727276882%40sss.pgh.pa.us\n\n------\nRegards,\nAlexander Korotkov\nSupabase\n\n\n", "msg_date": "Thu, 26 Sep 2024 18:55:07 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Support Int64 GUCs" }, { "msg_contents": "Alexander Korotkov <aekorotkov@gmail.com> writes:\n> Do you think we don't need int64 GUCs just now, when 64-bit\n> transaction ids are far from committable shape? Or do you think we\n> don't need int64 GUCs even if we have 64-bit transaction ids? If yes,\n> what do you think we should use for *_age variables with 64-bit\n> transaction ids?\n\nI seriously doubt that _age values exceeding INT32_MAX would be\nuseful, even in the still-extremely-doubtful situation that we\nget to true 64-bit XIDs. But if you think we must have that,\nwe could still use float8 GUCs for them. float8 is exact up\nto 2^53 (given IEEE math), and you certainly aren't going to\nconvince me that anyone needs _age values exceeding that.\nFor that matter, an imprecise representation of such an age\nlimit would still be all right wouldn't it?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 26 Sep 2024 12:39:10 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Support Int64 GUCs" }, { "msg_contents": "Hi,\n\n> I seriously doubt that _age values exceeding INT32_MAX would be\n> useful, even in the still-extremely-doubtful situation that we\n> get to true 64-bit XIDs. But if you think we must have that,\n> we could still use float8 GUCs for them. 
float8 is exact up\n> to 2^53 (given IEEE math), and you certainly aren't going to\n> convince me that anyone needs _age values exceeding that.\n> For that matter, an imprecise representation of such an age\n> limit would still be all right wouldn't it?\n\nConsidering the recent feedback. I'm marking the corresponding CF\nentry as \"Rejected\".\n\nThanks to everyone involved!\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Mon, 30 Sep 2024 00:31:33 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Support Int64 GUCs" } ]
[ { "msg_contents": "While working on the \"POC, WIP: OR-clause support for indexes\" project \n[0], it was suggested to use the construct_array function to form a \none-dimensional array.\n\nI noticed that there is a comment that values ​​with NULL are not \nprocessed there, but in fact this function calls the construct_md_array \nfunction, which\ncontains a comment that it can handle NULL values.\n\n/*\n  * construct_array    --- simple method for constructing an array object\n  *\n  * elems: array of Datum items to become the array contents\n  *          (NULL element values are not supported).\n\n*/\n\n\n/*\n  * construct_md_array    --- simple method for constructing an array object\n  *                            with arbitrary dimensions and possible NULLs\n\n*/\n\nIn the places where the construct_md_array function is called, I did not \nsee a check for NULL and a limitation on the use of the function, if any.\n\nThe tests during the check did not show that there is a problem with \nthis [1].\n\nIs this comment correct or we should update it?\n\n\n[0] https://commitfest.postgresql.org/49/4450/\n\n[1] \nhttps://www.postgresql.org/message-id/CACJufxHCJvC3X8nUK-jRvRru-ZEXp16EBPADOwTGaqmOYM1Raw%40mail.gmail.com\n\n\n\n\n\n\nWhile working on the \"POC, WIP: OR-clause\n support for indexes\" project [0], it was suggested to use\n the construct_array function to form a one-dimensional\n array. \n\nI noticed that there is a comment that values\n ​​with NULL are not processed there, but in fact this\n function calls the construct_md_array function, which \n contains a comment that it can handle NULL values.\n/*\n  * construct_array    --- simple method for constructing an\n array object\n  *\n  * elems: array of Datum items to become the array contents\n  *          (NULL element values are not supported).\n*/\n\n\n/*\n  * construct_md_array    --- simple method for constructing\n an array object\n  *                            with arbitrary dimensions and\n possible NULLs\n*/\n\n In the places where the construct_md_array\n function is called, I did not see a check for NULL and a\n limitation on the use of the function, if any.\n\n\nThe tests during the check did not show that\n there is a problem with this [1]. \n\nIs this comment correct or we should update\n it?\n\n\n[0]\n https://commitfest.postgresql.org/49/4450/\n[1]\nhttps://www.postgresql.org/message-id/CACJufxHCJvC3X8nUK-jRvRru-ZEXp16EBPADOwTGaqmOYM1Raw%40mail.gmail.com", "msg_date": "Thu, 12 Sep 2024 18:43:12 +0300", "msg_from": "Alena Rybakina <a.rybakina@postgrespro.ru>", "msg_from_op": true, "msg_subject": "may be a mismatch between the construct_array and construct_md_array\n comments" }, { "msg_contents": "Alena Rybakina <a.rybakina@postgrespro.ru> writes:\n> I noticed that there is a comment that values ​​with NULL are not \n> processed there, but in fact this function calls the construct_md_array \n> function, which\n> contains a comment that it can handle NULL values.\n\nRight. 
construct_md_array has a \"bool *nulls\" argument, but\nconstruct_array doesn't --- it passes NULL for that to\nconstruct_md_array, which will therefore assume there are no null\narray elements.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 12 Sep 2024 13:44:11 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: may be a mismatch between the construct_array and\n construct_md_array comments" }, { "msg_contents": "On 12.09.2024 20:44, Tom Lane wrote:\n> Alena Rybakina <a.rybakina@postgrespro.ru> writes:\n>> I noticed that there is a comment that values ​​with NULL are not\n>> processed there, but in fact this function calls the construct_md_array\n>> function, which\n>> contains a comment that it can handle NULL values.\n> Right. construct_md_array has a \"bool *nulls\" argument, but\n> construct_array doesn't --- it passes NULL for that to\n> construct_md_array, which will therefore assume there are no null\n> array elements.\n>\nUnderstood.\n\nAt first I thought this comment was related to the value of a NULL \nelement that might be in the Array, but now I realized that this is not \nthe case.\n\nThanks for the explanation, it helped a lot!\n\n\n\n", "msg_date": "Sun, 15 Sep 2024 00:31:07 +0300", "msg_from": "Alena Rybakina <a.rybakina@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: may be a mismatch between the construct_array and\n construct_md_array comments" } ]
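To make the distinction discussed above concrete, here is a small hedged sketch (a standalone illustration, not code from any patch in this thread): `construct_array()` exposes no per-element null flags, so an array containing a NULL element has to be built with `construct_md_array()`.

```
#include "postgres.h"
#include "catalog/pg_type.h"    /* INT4OID */
#include "utils/array.h"

/* Build the int4 array {1, NULL, 3}; only the md variant can carry nulls. */
static ArrayType *
build_int4_array_with_null(void)
{
    Datum   elems[3] = {Int32GetDatum(1), (Datum) 0, Int32GetDatum(3)};
    bool    nulls[3] = {false, true, false};
    int     dims[1] = {3};
    int     lbs[1] = {1};

    /* construct_array(elems, 3, INT4OID, sizeof(int32), true, 'i') would
       treat all three Datums as valid values, per the comment Tom cites;
       the nulls[] array below is what marks the middle element as NULL. */
    return construct_md_array(elems, nulls, 1, dims, lbs,
                              INT4OID, sizeof(int32), true, 'i');
}
```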
[ { "msg_contents": "I happened to notice that Postgres will let you do\n\nregression=# create table foo (id timestamp primary key);\nCREATE TABLE\nregression=# create table bar (ts timestamptz references foo);\nCREATE TABLE\n\nThis strikes me as a pretty bad idea, because whether a particular\ntimestamp is equal to a particular timestamptz depends on your\ntimezone setting. Thus the constraint could appear to be violated\nafter a timezone change.\n\nI'm inclined to propose rejecting FK constraints if the comparison\noperator is not immutable. Among the built-in opclasses, the only\ninstances of non-immutable btree equality operators are\n\nregression=# select amopopr::regoperator from pg_amop join pg_operator o on o.oid = amopopr join pg_proc p on p.oid = oprcode where amopmethod=403 and amopstrategy=3 and provolatile != 'i';\n amopopr \n---------------------------------------------------------\n =(date,timestamp with time zone)\n =(timestamp without time zone,timestamp with time zone)\n =(timestamp with time zone,date)\n =(timestamp with time zone,timestamp without time zone)\n(4 rows)\n\nA possible objection is that if anybody has such a setup and\nhasn't noticed a problem because they never change their\ntimezone setting, they might not appreciate us breaking it.\nSo I certainly wouldn't propose back-patching this. But\nmaybe we should add it as a foot-gun defense going forward.\n\nThoughts?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 12 Sep 2024 17:33:46 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Mutable foreign key constraints" }, { "msg_contents": "On Thursday, September 12, 2024, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n>\n> A possible objection is that if anybody has such a setup and\n> hasn't noticed a problem because they never change their\n> timezone setting, they might not appreciate us breaking it.\n> So I certainly wouldn't propose back-patching this. But\n> maybe we should add it as a foot-gun defense going forward.\n>\n\nI’m disinclined to begin enforcing this. If they got a volatile data type\nin a key column and don’t attempt to index the key, which would fail on the\nvolatile side, I’d be mighty surprised. I don’t really have much sympathy\nfor anyone who got themselves into the described position but I don’t see\nthis unsafe enough to force a potentially large table rewrite on those that\nmanaged to build a fragile but functioning model.\n\nI suggest adding the commentary and queries used to check for just such a\nsituation to the “don’t do this page” of the wiki and there just explain\nwhile allowed for backward compatibility it is definitely not a recommended\nsetup.\n\nDavid J.\n\nOn Thursday, September 12, 2024, Tom Lane <tgl@sss.pgh.pa.us> wrote:\nA possible objection is that if anybody has such a setup and\nhasn't noticed a problem because they never change their\ntimezone setting, they might not appreciate us breaking it.\nSo I certainly wouldn't propose back-patching this.  But\nmaybe we should add it as a foot-gun defense going forward.\nI’m disinclined to begin enforcing this.  If they got a volatile data type in a key column and don’t attempt to index the key, which would fail on the volatile side, I’d be mighty surprised.  
I don’t really have much sympathy for anyone who got themselves into the described position but I don’t see this unsafe enough to force a potentially large table rewrite on those that managed to build a fragile but functioning model.I suggest adding the commentary and queries used to check for just such a situation to the “don’t do this page” of the wiki and there just explain while allowed for backward compatibility it is definitely not a recommended setup.David J.", "msg_date": "Thu, 12 Sep 2024 14:59:00 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Mutable foreign key constraints" }, { "msg_contents": "\"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n> On Thursday, September 12, 2024, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> A possible objection is that if anybody has such a setup and\n>> hasn't noticed a problem because they never change their\n>> timezone setting, they might not appreciate us breaking it.\n>> So I certainly wouldn't propose back-patching this. But\n>> maybe we should add it as a foot-gun defense going forward.\n\n> I’m disinclined to begin enforcing this. If they got a volatile data type\n> in a key column and don’t attempt to index the key, which would fail on the\n> volatile side, I’d be mighty surprised.\n\nUm, neither type is \"volatile\" and each can be indexed just fine.\nIt's the cross-type comparison required by the FK that brings the\nhazard.\n\n> I suggest adding the commentary and queries used to check for just such a\n> situation to the “don’t do this page” of the wiki and there just explain\n> while allowed for backward compatibility it is definitely not a recommended\n> setup.\n\nYeah, that's a possible approach.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 12 Sep 2024 18:23:45 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Mutable foreign key constraints" }, { "msg_contents": "On Thu, 2024-09-12 at 17:33 -0400, Tom Lane wrote:\n> I happened to notice that Postgres will let you do\n> \n> regression=# create table foo (id timestamp primary key);\n> CREATE TABLE\n> regression=# create table bar (ts timestamptz references foo);\n> CREATE TABLE\n> \n> This strikes me as a pretty bad idea, because whether a particular\n> timestamp is equal to a particular timestamptz depends on your\n> timezone setting. Thus the constraint could appear to be violated\n> after a timezone change.\n> \n> I'm inclined to propose rejecting FK constraints if the comparison\n> operator is not immutable.\n\nI think that is the only sane thing to do. 
Consider\n\n test=> SHOW timezone;\n TimeZone \n ═══════════════\n Europe/Vienna\n (1 row)\n\n test=> INSERT INTO foo VALUES ('2024-09-13 12:00:00');\n INSERT 0 1\n test=> INSERT INTO bar VALUES ('2024-09-13 12:00:00+02');\n INSERT 0 1\n test=> SELECT * FROM foo JOIN bar ON foo.id = bar.ts;\n id │ ts \n ═════════════════════╪════════════════════════\n 2024-09-13 12:00:00 │ 2024-09-13 12:00:00+02\n (1 row)\n\n test=> SET timezone = 'Asia/Kolkata';\n SET\n test=> SELECT * FROM foo JOIN bar ON foo.id = bar.ts;\n id │ ts \n ════╪════\n (0 rows)\n\n test=> INSERT INTO foo VALUES ('2024-09-14 12:00:00');\n INSERT 0 1\n test=> INSERT INTO bar VALUES ('2024-09-14 12:00:00+02');\n ERROR: insert or update on table \"bar\" violates foreign key constraint \"bar_ts_fkey\"\n DETAIL: Key (ts)=(2024-09-14 15:30:00+05:30) is not present in table \"foo\".\n\nThat's very broken and should not be allowed.\n\n> A possible objection is that if anybody has such a setup and\n> hasn't noticed a problem because they never change their\n> timezone setting, they might not appreciate us breaking it.\n\nI hope that there are few cases of that in the field, and I think it\nis OK to break them. After all, it can be fixed with a simple\n\n ALTER TABLE foo ALTER id TYPE timestamptz;\n\nIf the session time zone is UTC, that wouldn't even require a rewrite.\n\nI agree that it cannot be backpatched.\n\nYours,\nLaurenz Albe\n\n\n", "msg_date": "Fri, 13 Sep 2024 04:41:40 +0200", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: Mutable foreign key constraints" }, { "msg_contents": "On 9/13/24 4:41 AM, Laurenz Albe wrote:\n> That's very broken and should not be allowed.\n\n+1\n\n>> A possible objection is that if anybody has such a setup and\n>> hasn't noticed a problem because they never change their\n>> timezone setting, they might not appreciate us breaking it.\n> \n> I hope that there are few cases of that in the field, and I think it\n> is OK to break them. After all, it can be fixed with a simple\n> \n> ALTER TABLE foo ALTER id TYPE timestamptz;\n> \n> If the session time zone is UTC, that wouldn't even require a rewrite.\n> \n> I agree that it cannot be backpatched.\n\nI unfortunately suspect there might be more cases than we think in the \nfield due to many people not understanding the difference between \ntimestamp and timestamptz but the good thing is that \ntimestamp/timestamptz are rare in foreign keys, even in composite ones.\n\nSince this is quite broken and does not have any real world usefulness I \nthink we should just go ahead and disallow it and have a few people \ncomplain.\n\nAndreas\n\n\n\n", "msg_date": "Fri, 13 Sep 2024 15:05:49 +0200", "msg_from": "Andreas Karlsson <andreas@proxel.se>", "msg_from_op": false, "msg_subject": "Re: Mutable foreign key constraints" }, { "msg_contents": "On 9/13/24 15:05, Andreas Karlsson wrote:\n> On 9/13/24 4:41 AM, Laurenz Albe wrote:\n>> That's very broken and should not be allowed.\n> \n> +1\n> \n>>> A possible objection is that if anybody has such a setup and\n>>> hasn't noticed a problem because they never change their\n>>> timezone setting, they might not appreciate us breaking it.\n>>\n>> I hope that there are few cases of that in the field, and I think it\n>> is OK to break them.  
After all, it can be fixed with a simple\n>>\n>>    ALTER TABLE foo ALTER id TYPE timestamptz;\n>>\n>> If the session time zone is UTC, that wouldn't even require a rewrite.\n>>\n>> I agree that it cannot be backpatched.\n> \n> I unfortunately suspect there might be more cases than we think in the \n> field due to many people not understanding the difference between \n> timestamp and timestamptz but the good thing is that \n> timestamp/timestamptz are rare in foreign keys, even in composite ones.\n\n\nIt will become a lot more common with WITHOUT OVERLAPS, so I think it is \nimportant to fix this at the same time or earlier as that feature.\n\n\n> Since this is quite broken and does not have any real world usefulness I \n> think we should just go ahead and disallow it and have a few people \n> complain.\n\n\n+1\n-- \nVik Fearing\n\n\n\n", "msg_date": "Fri, 13 Sep 2024 15:38:03 +0200", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": false, "msg_subject": "Re: Mutable foreign key constraints" }, { "msg_contents": "\nOn 2024-09-12 Th 5:33 PM, Tom Lane wrote:\n> I happened to notice that Postgres will let you do\n>\n> regression=# create table foo (id timestamp primary key);\n> CREATE TABLE\n> regression=# create table bar (ts timestamptz references foo);\n> CREATE TABLE\n>\n> This strikes me as a pretty bad idea, because whether a particular\n> timestamp is equal to a particular timestamptz depends on your\n> timezone setting. Thus the constraint could appear to be violated\n> after a timezone change.\n>\n> I'm inclined to propose rejecting FK constraints if the comparison\n> operator is not immutable. Among the built-in opclasses, the only\n> instances of non-immutable btree equality operators are\n>\n> regression=# select amopopr::regoperator from pg_amop join pg_operator o on o.oid = amopopr join pg_proc p on p.oid = oprcode where amopmethod=403 and amopstrategy=3 and provolatile != 'i';\n> amopopr\n> ---------------------------------------------------------\n> =(date,timestamp with time zone)\n> =(timestamp without time zone,timestamp with time zone)\n> =(timestamp with time zone,date)\n> =(timestamp with time zone,timestamp without time zone)\n> (4 rows)\n>\n> A possible objection is that if anybody has such a setup and\n> hasn't noticed a problem because they never change their\n> timezone setting, they might not appreciate us breaking it.\n> So I certainly wouldn't propose back-patching this. But\n> maybe we should add it as a foot-gun defense going forward.\n>\n> Thoughts?\n>\n> \t\t\t\n\n\nIsn't there an upgrade hazard here? People won't thank us if they can't \nnow upgrade their clusters. If we can get around that then +1.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Sat, 14 Sep 2024 07:15:01 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Mutable foreign key constraints" }, { "msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> On 2024-09-12 Th 5:33 PM, Tom Lane wrote:\n>> I'm inclined to propose rejecting FK constraints if the comparison\n>> operator is not immutable.\n\n> Isn't there an upgrade hazard here? People won't thank us if they can't \n> now upgrade their clusters. If we can get around that then +1.\n\nYeah, they would have to fix the bad DDL before upgrading. 
It'd\nbe polite of us to add a pg_upgrade precheck for such cases,\nperhaps.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 14 Sep 2024 10:51:49 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Mutable foreign key constraints" } ]
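A practical follow-up to the wiki suggestion in the thread above: a catalog query along the following lines should list existing foreign keys whose PK = FK equality operator is not immutable. This is only a sketch assembled for this note (the pg_constraint.conpfeqop-based approach is not taken from the thread), so treat it as a starting point rather than a vetted check.

```sql
-- Find foreign-key constraints whose PK = FK comparison operator is
-- backed by a non-immutable function (e.g. timestamp = timestamptz).
SELECT con.conname,
       con.conrelid::regclass  AS referencing_table,
       con.confrelid::regclass AS referenced_table,
       op.oprname,
       op.oprleft::regtype     AS left_type,
       op.oprright::regtype    AS right_type
FROM pg_constraint con
CROSS JOIN LATERAL unnest(con.conpfeqop) AS u(opr)  -- equality operator used per FK column
JOIN pg_operator op ON op.oid = u.opr
JOIN pg_proc pro    ON pro.oid = op.oprcode
WHERE con.contype = 'f'
  AND pro.provolatile <> 'i';
```

On the timestamp/timestamptz tables from the thread, this should report the =(timestamp without time zone,timestamp with time zone) operator backing the constraint from bar to foo.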
[ { "msg_contents": "On Thu, Sep 12, 2024 at 8:12 PM Amit Langote <amitlangote09@gmail.com> wrote:\n>\n> Hi Andreas,\n>\n> On Thu, Sep 12, 2024 at 7:08 PM Andreas Ulbrich\n> <andreas.ulbrich@matheversum.de> wrote:\n> >\n> > Salvete!\n> >\n> >\n> > Sorry for my out of the rules replay, but I'm not at home, and also I can't verify and check your patch.\n> >\n> > But I think you have missed the docu in your fix:\n> >\n> > Must the example there not also be changed:\n> >\n> > From\n> >\n> > JSON_QUERY(jsonb '[1,[2,3],null]', 'lax $[*][$off]' PASSING 1 AS off WITH CONDITIONAL WRAPPER) → [3]\n> >\n> > to\n> >\n> > JSON_QUERY(jsonb '[1,[2,3],null]', 'lax $[*][$off]' PASSING 1 AS off WITH CONDITIONAL WRAPPER) → 3\n>\n> You're right, good catch.\n>\n> I had checked whether the documentation text needed fixing, but failed\n> to notice that an example\n> is using CONDITIONAL. Will fix, thanks for the report.\n\nI have pushed the fix. Thanks Andreas for the report.\n\n-- \nThanks, Amit Langote\n\n\n", "msg_date": "Fri, 13 Sep 2024 16:13:05 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: json_query conditional wrapper bug" } ]
[ { "msg_contents": "Hello. This is my first patch to the project. The patch adds support for system columns in JOIN USING clause.  The problemcan be demonstrated with this code: ```sqlCREATE TABLE t  (id int);CREATE TABLE tt (id int); -- Works:SELECT * FROM t JOIN tt ON t.xmin = tt.xmin;-- Doesn't work:SELECT * FROM t JOIN tt USING (xmin);``` Solution:1. Use the scanNSItemForColumn() function instead of buildVarFromNSColumn() forconstructing Var objects, as it correctly handles system columns.2. Remove extra calls to markVarForSelectPriv(), since this function is alreadyinvoked in scanNSItemForColumn().3. For system columns, collect their negative attribute numbers along withuser-defined column indices into l_colnos and r_colnos.4. Create a buildVarFromSystemAttribute() function for rebuilding Var objectswith system attributes, analogous to buildVarFromNSColumn(), sincescanNSItemForColumn() is complex and has side effects.5. Implement a fillNSColumnParametersFromVar() function for building NS columnswith system attributes.6. Add SystemAttributeTotalNumber() function to heap.c to ensure memory forres_nscolumns is allocated with system columns in mind. Link to PR on GitHub: https://github.com/hilltracer/postgres/pull/3-- Best regards,  Denis Garsh,  d.garsh@arenadata.io", "msg_date": "Fri, 13 Sep 2024 11:03:42 +0300", "msg_from": "Denis Garsh <d.garsh@arenadata.io>", "msg_from_op": true, "msg_subject": "Add system column support to the USING clause" }, { "msg_contents": "On Friday, September 13, 2024, Denis Garsh <d.garsh@arenadata.io> wrote:\n>\n>\n> The patch adds support for system columns in JOIN USING clause.\n>\n\nDefinitely not high on my list of oversights to fix. Resorting to the ON\nclause for the rare query that would need to do such a thing isn’t that\ncostly. But as the patch exists I’ll leave it to others to judge the cost\nof actually adding it, or worthwhile-mess of reviewing it.\n\n\n> Link to PR on GitHub: https://github.com/hilltracer/postgres/pull/3\n>\n\nYou apparently missed the note on GitHub that says we don’t work with pull\nrequests. Patches are to be submitted directly to the mailing list.\n\nDavid J.\n\nOn Friday, September 13, 2024, Denis Garsh <d.garsh@arenadata.io> wrote: The patch adds support for system columns in JOIN USING clause. Definitely not high on my list of oversights to fix.  Resorting to the ON clause for the rare query that would need to do such a thing isn’t that costly.  But as the patch exists I’ll leave it to others to judge the cost of actually adding it, or worthwhile-mess of reviewing it. Link to PR on GitHub: https://github.com/hilltracer/postgres/pull/3You apparently missed the note on GitHub that says we don’t work with pull requests.  Patches are to be submitted directly to the mailing list.David J.", "msg_date": "Fri, 13 Sep 2024 07:06:49 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add system column support to the USING clause" }, { "msg_contents": "On Friday, September 13, 2024, David G. Johnston <david.g.johnston@gmail.com>\nwrote:\n\n>\n> Link to PR on GitHub: https://github.com/hilltracer/postgres/pull/3\n>>\n>\n> You apparently missed the note on GitHub that says we don’t work with pull\n> requests. Patches are to be submitted directly to the mailing list.\n>\n\nSorry, I see now that you did both - that makes sense.\n\nDavid J.\n\nOn Friday, September 13, 2024, David G. 
Johnston <david.g.johnston@gmail.com> wrote:Link to PR on GitHub: https://github.com/hilltracer/postgres/pull/3You apparently missed the note on GitHub that says we don’t work with pull requests.  Patches are to be submitted directly to the mailing list.Sorry, I see now that you did both - that makes sense.David J.", "msg_date": "Fri, 13 Sep 2024 07:07:49 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Add system column support to the USING clause" }, { "msg_contents": "Denis Garsh <d.garsh@arenadata.io> writes:\n> The patch adds support for system columns in JOIN USING clause.\n\nI think this is an actively bad idea, and it was likely intentional\nthat it's not supported today. A few reasons why:\n\n* There are, fundamentally, no use-cases for joining on system\ncolumns. The only one that is stable enough to even consider\nusing for the purpose is tableoid, and I'm not detecting a reason\nwhy that'd be useful. If there are any edge cases where people\nwould actually wish to do that, it can be done easily enough with\na standard JOIN ON clause.\n\n* Non-heap table AMs may not provide the same system columns that\nheap does, further reducing the scope for plausible use-cases.\n(Yeah, I know we don't support that too well today.)\n\n* This breaks ruleutils.c's mechanism for dealing with name\nconflicts across multiple USING clauses. That relies on being\nable to assign aliases to the USING inputs at the table level\n(that is, \"FROM realtable AS aliastable(aliascolumn,...)\").\nThere's no way to alias a system column in the FROM syntax.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 13 Sep 2024 10:56:19 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Add system column support to the USING clause" }, { "msg_contents": "On 13.09.2024 17:56, Tom Lane wrote:\n> I think this is an actively bad idea, and it was likely intentional\n> that it's not supported today. A few reasons why:\n\nThank you, Tom and David, for your feedback.\n\nI admit my mistake. I should have asked if this problem was worth\nsolving before diving in. However, since I’ve already spent a lot of\ntime into the patch, so I'll try to fight a little ;-)\n\nIt looks like this feature hasn't been added because it's not obvious \nhow to do it. And it is difficult to assess the consequences of adding a \nsystem column in RTE. Personally, I had to sweat to do it.\n\n >* There are, fundamentally, no use-cases for joining on system\n >columns. The only one that is stable enough to even consider\n >using for the purpose is tableoid, and I'm not detecting a reason\n >why that'd be useful. If there are any edge cases where people\n >would actually wish to do that, it can be done easily enough with\n >a standard JOIN ON clause.\n\nBut after all, it's implemented in `JOIN ON`. Accordingly, it seems like \nit should also be supported in `JOIN USING`. And is there any guarantee \nthat new system columns won't be added in the future that may be more \nuseful?\n\n > * This breaks ruleutils.c's mechanism for dealing with name\n > conflicts across multiple USING clauses. That relies on being\n > able to assign aliases to the USING inputs at the table level\n > (that is, \"FROM realtable AS aliastable(aliascolumn,...)\").\n > There's no way to alias a system column in the FROM syntax.\n\nCould you please provide an example of such a query? I've tried creating \nmulti-join queries with aliases, but I couldn't break it. 
For example:\n```sql\nCREATE TABLE t    (id1 int);\nCREATE TABLE tt   (id2 int);\nCREATE TABLE ttt  (id3 int);\nCREATE TABLE tttt (id4 int);\n\nBEGIN;\nINSERT INTO t    VALUES (1);\nINSERT INTO tt   VALUES (101);\nINSERT INTO ttt  VALUES (201);\nINSERT INTO tttt VALUES (301);\nCOMMIT;\n\nBEGIN;\nINSERT INTO t    VALUES (2);\nINSERT INTO tt   VALUES (102);\nINSERT INTO ttt  VALUES (202);\nINSERT INTO tttt VALUES (302);\nCOMMIT;\n\nINSERT INTO t    VALUES (3);\nINSERT INTO tt   VALUES (103);\nINSERT INTO ttt  VALUES (203);\nINSERT INTO tttt VALUES (303);\n\nSELECT *FROM t FULL JOIN tt USING (xmin);\n-- xmin | id1 | id2\n--------+-----+-----\n-- 1057 |   1 | 101\n-- 1058 |   2 | 102\n-- 1059 |   3 |\n-- 1060 |     | 103\n--(4 rows)\n\nSELECT *FROM ttt FULL JOIN tttt USING (xmin);\n-- xmin | id3 | id4\n--------+-----+-----\n-- 1057 | 201 | 301\n-- 1058 | 202 | 302\n-- 1061 | 203 |\n-- 1062 |     | 303\n--(4 rows)\n\nSELECT * FROM t FULL JOIN tt USING (xmin) FULL JOIN ttt USING (xmin);\n-- xmin | id1 | id2 | id3\n--------+-----+-----+-----\n-- 1057 |   1 | 101 | 201\n-- 1058 |   2 | 102 | 202\n-- 1059 |   3 |     |\n-- 1060 |     | 103 |\n-- 1061 |     |     | 203\n--(5 rows)\n\nSELECT *FROM\n     (t FULL JOIN tt USING (xmin)) AS alias1(col1, col21, col31)\n     JOIN\n     (ttt FULL JOIN tttt USING (xmin)) AS alias2(col1, col22, col32)\n     USING (col1);\n-- col1 | col21 | col31 | col22 | col32\n--------+-------+-------+-------+-------\n-- 1057 |     1 |   101 |   201 |   301\n-- 1058 |     2 |   102 |   202 |   302\n--(2 rows)\n```\n\nI noticed that after adding it to the RTE, the negative system column \nattributes will be used in `ruleutils.c` (see \n[here](https://github.com/postgres/postgres/blob/52c707483ce4d0161127e4958d981d1b5655865e/src/backend/utils/adt/ruleutils.c#L5055)), \nand then in the `colinfo` structure. However, I didn't find any issues \nwith `colinfo`. For example:\n\n```sql\ncreate table tt2 (a int, b int, c int);\ncreate table tt3 (ax int8, b int2, c numeric);\ncreate table tt4 (ay int, b int, q int);\ncreate view v2 as select * from\ntt2 join tt3 using (b,c,xmin) join tt4 using (b, xmin);\nselect pg_get_viewdef('v2', true);\n-- SELECT tt2.b, tt2.xmin, tt3.c, tt2.a, tt3.ax, tt4.ay, tt4.q\n--    FROM tt2 JOIN tt3 USING (b, c, xmin) JOIN tt4 USING (b, xmin);\nalter table tt2 add column d int;\nalter table tt2 add column e int;\nselect pg_get_viewdef('v2', true);\n-- SELECT tt2.b, tt2.xmin, tt3.c, tt2.a, tt3.ax, tt4.ay, tt4.q\n--    FROM tt2 JOIN tt3 USING (b, c, xmin) JOIN tt4 USING (b, xmin);\n--       alter table tt3 rename c to d;\n\nselect pg_get_viewdef('v2', true);\n-- SELECT tt2.b, tt2.xmin, tt3.c, tt2.a, tt3.ax, tt4.ay, tt4.q\n--    FROM tt2 JOIN tt3 tt3(ax, b, c) USING (b, c, xmin) JOIN tt4 USING \n(b, xmin);\nalter table tt3 add column c int;\nalter table tt3 add column e int;\nselect pg_get_viewdef('v2', true);\n-- SELECT tt2.b, tt2.xmin, tt3.c, tt2.a, tt3.ax, tt4.ay, tt4.q\n--    FROM tt2 JOIN tt3 tt3(ax, b, c, c_1, e) USING (b, c, xmin)\n--       JOIN tt4 USING (b, xmin);\n\nalter table tt2 drop column d;\nselect pg_get_viewdef('v2', true);\n-- SELECT tt2.b, tt2.xmin, tt3.c, tt2.a, tt3.ax, tt4.ay, tt4.q\n--    FROM tt2 JOIN tt3 tt3(ax, b, c, c_1, e) USING (b, c, xmin)\n--       JOIN tt4 USING (b, xmin);\n```\n\n-- \nBest regards,\nDenis Garsh\n\n\n\n\n\n\n\n\nOn 13.09.2024 17:56, Tom Lane wrote:\n\n\n\nI think this is an actively bad idea, and it was likely intentional\nthat it's not supported today. 
A few reasons why:\n\n\nThank you, Tom and David, for your feedback.\n\n I admit my mistake. I should have asked if this problem was worth\n \n solving before diving in. However, since I’ve already spent a lot\n of \n time into the patch, so I'll try to fight a little ;-)\n\nIt looks like this feature hasn't been added because it's not\n obvious how to do it. And it is difficult to assess the\n consequences of adding a system column in RTE. Personally, I had\n to sweat to do it.\n >* There are, fundamentally, no use-cases for joining on system  \n >columns. The only one that is stable enough to even consider  \n >using for the purpose is tableoid, and I'm not detecting a\n reason  \n >why that'd be useful. If there are any edge cases where people  \n >would actually wish to do that, it can be done easily enough\n with  \n >a standard JOIN ON clause.\n\n But after all, it's implemented in `JOIN ON`. Accordingly, it seems\n like it should also be supported in `JOIN USING`. And is there any\n guarantee that new system columns won't be added in the future that\n may be more useful?\n\n > * This breaks ruleutils.c's mechanism for dealing with name  \n > conflicts across multiple USING clauses. That relies on being  \n > able to assign aliases to the USING inputs at the table level  \n > (that is, \"FROM realtable AS aliastable(aliascolumn,...)\").  \n > There's no way to alias a system column in the FROM syntax.\n\n Could you please provide an example of such a query? I've tried\n creating multi-join queries with aliases, but I couldn't break it.\n For example:\n ```sql\n CREATE TABLE t    (id1 int);\n CREATE TABLE tt   (id2 int);\n CREATE TABLE ttt  (id3 int);\n CREATE TABLE tttt (id4 int);\n\n BEGIN;\n INSERT INTO t    VALUES (1);\n INSERT INTO tt   VALUES (101);\n INSERT INTO ttt  VALUES (201);\n INSERT INTO tttt VALUES (301);\n COMMIT;\n\n BEGIN;\n INSERT INTO t    VALUES (2);\n INSERT INTO tt   VALUES (102);\n INSERT INTO ttt  VALUES (202);\n INSERT INTO tttt VALUES (302);\n COMMIT;\n\n INSERT INTO t    VALUES (3);\n INSERT INTO tt   VALUES (103);\n INSERT INTO ttt  VALUES (203);\n INSERT INTO tttt VALUES (303);\n\n SELECT *FROM t FULL JOIN tt USING (xmin);\n -- xmin | id1 | id2 \n --------+-----+-----\n -- 1057 |   1 | 101\n -- 1058 |   2 | 102\n -- 1059 |   3 |    \n -- 1060 |     | 103\n --(4 rows)\n\n SELECT *FROM ttt FULL JOIN tttt USING (xmin);\n -- xmin | id3 | id4 \n --------+-----+-----\n -- 1057 | 201 | 301\n -- 1058 | 202 | 302\n -- 1061 | 203 |    \n -- 1062 |     | 303\n --(4 rows)\n\n SELECT * FROM t FULL JOIN tt USING (xmin) FULL JOIN ttt USING\n (xmin);\n -- xmin | id1 | id2 | id3 \n --------+-----+-----+-----\n -- 1057 |   1 | 101 | 201\n -- 1058 |   2 | 102 | 202\n -- 1059 |   3 |     |    \n -- 1060 |     | 103 |    \n -- 1061 |     |     | 203\n --(5 rows)\n\n SELECT *FROM \n     (t FULL JOIN tt USING (xmin)) AS alias1(col1, col21, col31) \n     JOIN\n     (ttt FULL JOIN tttt USING (xmin)) AS alias2(col1, col22, col32)\n     USING (col1);\n -- col1 | col21 | col31 | col22 | col32 \n --------+-------+-------+-------+-------\n -- 1057 |     1 |   101 |   201 |   301\n -- 1058 |     2 |   102 |   202 |   302\n --(2 rows)\n ```\n\n I noticed that after adding it to the RTE, the negative system\n column attributes will be used in `ruleutils.c` (see\n[here](https://github.com/postgres/postgres/blob/52c707483ce4d0161127e4958d981d1b5655865e/src/backend/utils/adt/ruleutils.c#L5055)),\n and then in the `colinfo` structure. However, I didn't find any\n issues with `colinfo`. 
For example: \n\n ```sql\n create table tt2 (a int, b int, c int);\n create table tt3 (ax int8, b int2, c numeric);\n create table tt4 (ay int, b int, q int);\n create view v2 as select * from \n tt2 join tt3 using (b,c,xmin) join tt4 using (b, xmin);\n select pg_get_viewdef('v2', true);\n -- SELECT tt2.b, tt2.xmin, tt3.c, tt2.a, tt3.ax, tt4.ay, tt4.q \n --    FROM tt2 JOIN tt3 USING (b, c, xmin) JOIN tt4 USING (b, xmin);\n alter table tt2 add column d int;\n alter table tt2 add column e int;\n select pg_get_viewdef('v2', true);\n -- SELECT tt2.b, tt2.xmin, tt3.c, tt2.a, tt3.ax, tt4.ay, tt4.q\n --    FROM tt2 JOIN tt3 USING (b, c, xmin) JOIN tt4 USING (b, xmin);\n --       alter table tt3 rename c to d;\n\n select pg_get_viewdef('v2', true);\n -- SELECT tt2.b, tt2.xmin, tt3.c, tt2.a, tt3.ax, tt4.ay, tt4.q \n --    FROM tt2 JOIN tt3 tt3(ax, b, c) USING (b, c, xmin) JOIN tt4\n USING (b, xmin);\n alter table tt3 add column c int;\n alter table tt3 add column e int;\n select pg_get_viewdef('v2', true);\n -- SELECT tt2.b, tt2.xmin, tt3.c, tt2.a, tt3.ax, tt4.ay, tt4.q\n --    FROM tt2 JOIN tt3 tt3(ax, b, c, c_1, e) USING (b, c, xmin)\n --       JOIN tt4 USING (b, xmin);\n\n alter table tt2 drop column d;\n select pg_get_viewdef('v2', true);\n -- SELECT tt2.b, tt2.xmin, tt3.c, tt2.a, tt3.ax, tt4.ay, tt4.q\n --    FROM tt2 JOIN tt3 tt3(ax, b, c, c_1, e) USING (b, c, xmin)\n --       JOIN tt4 USING (b, xmin);\n ```\n-- \nBest regards,  \nDenis Garsh", "msg_date": "Mon, 16 Sep 2024 10:19:13 +0300", "msg_from": "Denis Garsh <d.garsh@arenadata.io>", "msg_from_op": true, "msg_subject": "Re: Add system column support to the USING clause" }, { "msg_contents": "Hello, I'm still hoping for an answer.\n\n-- \n\nBest regards,\nDenis Garsh\n\n\n\n", "msg_date": "Thu, 26 Sep 2024 10:54:16 +0300", "msg_from": "Denis Garsh <d.garsh@arenadata.io>", "msg_from_op": true, "msg_subject": "Re: Add system column support to the USING clause" } ]
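One way to picture Tom's third objection in the thread above (the ruleutils.c aliasing issue): name conflicts across multiple USING clauses are resolved by attaching a column alias list to the input table, and that syntax only covers user columns. The sketch below reuses the single-column tables t and tt from the thread; the exact error wording is quoted from memory and may differ slightly.

```sql
-- User columns can be renamed through a FROM-list alias, which is what
-- ruleutils.c relies on when deparsing views with conflicting USING columns:
SELECT a.x, b.y
FROM t  AS a(x)
JOIN tt AS b(y) ON a.x = b.y;

-- There is no comparable syntax for system columns: the alias list may only
-- name the table's user columns, so something like
--   FROM t AS a(x, xmin_alias)
-- is rejected (roughly: table "t" has 1 columns available but 2 columns
-- specified).  A join on xmin therefore has to be spelled out with ON:
SELECT * FROM t JOIN tt ON t.xmin = tt.xmin;
```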
[ { "msg_contents": "Hello,\n\nI would like to add the information of the PID that caused the failure\nwhen acquiring a lock with \"FOR UPDATE NOWAIT\".\n\nWhen \"FOR UPDATE\" is executed and interrupted by lock_timeout,\nxid and PID are output in the logs, but in the case of \"FOR UPDATE \nNOWAIT\",\nno information is output, making it impossible to identify the cause of \nthe lock failure.\nTherefore, I would like to output information in the logs in the same \nway as\nwhen \"FOR UPDATE\" is executed and interrupted by lock_timeout.\n\nThe patch is attached as well.\n\nRegards,\n--\nYuki Seino\nNTT DATA CORPORATION", "msg_date": "Fri, 13 Sep 2024 20:49:36 +0900", "msg_from": "Seino Yuki <seinoyu@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "=?UTF-8?Q?Add_=E2=80=9CFOR_UPDATE_NOWAIT=E2=80=9D_lock_details_t?=\n =?UTF-8?Q?o_the_log=2E?=" } ]
[ { "msg_contents": "Can you lead me to a beginner friendly task so I can start hacking?\n\n-- \n\nSiavosh Kasravi\n* \"Save a Tree\" - Please print this email only if necessary.*\n\nCan you lead me to a beginner friendly task so I can start hacking?-- Siavosh Kasravi \"Save a Tree\" - Please print this email only if necessary.", "msg_date": "Fri, 13 Sep 2024 19:57:48 +0330", "msg_from": "sia kc <siavosh.kasravi@gmail.com>", "msg_from_op": true, "msg_subject": "A starter task" }, { "msg_contents": "Sorry I am not sure if I am doing this right. Should I look somewhere else\nfor tasks?\n\nOn Fri, Sep 13, 2024 at 7:57 PM sia kc <siavosh.kasravi@gmail.com> wrote:\n\n> Can you lead me to a beginner friendly task so I can start hacking?\n>\n> --\n>\n> Siavosh Kasravi\n> * \"Save a Tree\" - Please print this email only if necessary.*\n>\n\n\n-- \n\nSiavosh Kasravi\n* \"Save a Tree\" - Please print this email only if necessary.*\n\nSorry I am not sure if I am doing this right. Should I look somewhere else for tasks?On Fri, Sep 13, 2024 at 7:57 PM sia kc <siavosh.kasravi@gmail.com> wrote:Can you lead me to a beginner friendly task so I can start hacking?-- Siavosh Kasravi \"Save a Tree\" - Please print this email only if necessary.\n-- Siavosh Kasravi \"Save a Tree\" - Please print this email only if necessary.", "msg_date": "Sun, 15 Sep 2024 23:12:54 +0330", "msg_from": "sia kc <siavosh.kasravi@gmail.com>", "msg_from_op": true, "msg_subject": "Re: A starter task" }, { "msg_contents": "On 9/15/24 21:42, sia kc wrote:\n> Sorry I am not sure if I am doing this right. Should I look somewhere\n> else for tasks?\n> \n\nHi,\n\nI think you can take a look at https://wiki.postgresql.org/wiki/Todo and\nsee if there's a patch/topic you would be interested in. It's really\ndifficult to \"assign\" a task based on a single sentence, with no info\nabout the person (experience with other projects, etc.).\n\nIf you're staring with PostgreSQL development, maybe take a look at\nhttps://wiki.postgresql.org/wiki/Developer_FAQ first. There's a lot of\nways to develop, but this is a good intro.\n\nFWIW, maybe it'd be better to start by looking at existing patches and\ndo a bit of a review, learn how to apply/test those and learn from them.\n\n\nregards\n\n-- \nTomas Vondra\n\n\n", "msg_date": "Sun, 15 Sep 2024 22:00:23 +0200", "msg_from": "Tomas Vondra <tomas@vondra.me>", "msg_from_op": false, "msg_subject": "Re: A starter task" }, { "msg_contents": "I am reading the documents. Think the Todo list is what I needed thanks.\nI think I should respond by sending my response to the mailing list but not\nsure why gmail does not have such a button. Please correct me if I am wrong.\n\nOn Sun, Sep 15, 2024 at 11:30 PM Tomas Vondra <tomas@vondra.me> wrote:\n\n> On 9/15/24 21:42, sia kc wrote:\n> > Sorry I am not sure if I am doing this right. Should I look somewhere\n> > else for tasks?\n> >\n>\n> Hi,\n>\n> I think you can take a look at https://wiki.postgresql.org/wiki/Todo and\n> see if there's a patch/topic you would be interested in. It's really\n> difficult to \"assign\" a task based on a single sentence, with no info\n> about the person (experience with other projects, etc.).\n>\n> If you're staring with PostgreSQL development, maybe take a look at\n> https://wiki.postgresql.org/wiki/Developer_FAQ first. 
There's a lot of\n> ways to develop, but this is a good intro.\n>\n> FWIW, maybe it'd be better to start by looking at existing patches and\n> do a bit of a review, learn how to apply/test those and learn from them.\n>\n>\n> regards\n>\n> --\n> Tomas Vondra\n>\n\n\n-- \n\nSiavosh Kasravi\n* \"Save a Tree\" - Please print this email only if necessary.*\n\nI am reading the documents. Think the Todo list is what I needed thanks.I think I should respond by sending my response to the mailing list but not sure why gmail does not have such a button. Please correct me if I am wrong.On Sun, Sep 15, 2024 at 11:30 PM Tomas Vondra <tomas@vondra.me> wrote:On 9/15/24 21:42, sia kc wrote:\n> Sorry I am not sure if I am doing this right. Should I look somewhere\n> else for tasks?\n> \n\nHi,\n\nI think you can take a look at https://wiki.postgresql.org/wiki/Todo and\nsee if there's a patch/topic you would be interested in. It's really\ndifficult to \"assign\" a task based on a single sentence, with no info\nabout the person (experience with other projects, etc.).\n\nIf you're staring with PostgreSQL development, maybe take a look at\nhttps://wiki.postgresql.org/wiki/Developer_FAQ first. There's a lot of\nways to develop, but this is a good intro.\n\nFWIW, maybe it'd be better to start by looking at existing patches and\ndo a bit of a review, learn how to apply/test those and learn from them.\n\n\nregards\n\n-- \nTomas Vondra\n-- Siavosh Kasravi \"Save a Tree\" - Please print this email only if necessary.", "msg_date": "Sun, 15 Sep 2024 23:40:52 +0330", "msg_from": "sia kc <siavosh.kasravi@gmail.com>", "msg_from_op": true, "msg_subject": "Re: A starter task" }, { "msg_contents": "Tomas Vondra <tomas@vondra.me> writes:\n> I think you can take a look at https://wiki.postgresql.org/wiki/Todo and\n> see if there's a patch/topic you would be interested in. It's really\n> difficult to \"assign\" a task based on a single sentence, with no info\n> about the person (experience with other projects, etc.).\n\nBeware that that TODO list is poorly maintained, so items may be out\nof date. Worse, most of what's there got there because it's hard,\nor there's not consensus about what the feature should look like,\nor both. So IMO it's not a great place for a beginner to start.\n\n> FWIW, maybe it'd be better to start by looking at existing patches and\n> do a bit of a review, learn how to apply/test those and learn from them.\n\nYeah, this is a good way to get some exposure to our code base and\ndevelopment practices.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 15 Sep 2024 16:43:59 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: A starter task" }, { "msg_contents": "On 9/15/24 22:10, sia kc wrote:\n> I am reading the documents. Think the Todo list is what I needed thanks.\n> I think I should respond by sending my response to the mailing list but\n> not sure why gmail does not have such a button. Please correct me if I\n> am wrong.\n> \n\nI'm pretty sure gmail has \"reply to all\" button somewhere.\n\nAlso, please don't top post, reply in line.\n\n\n-- \nTomas Vondra\n\n\n", "msg_date": "Sun, 15 Sep 2024 22:47:10 +0200", "msg_from": "Tomas Vondra <tomas@vondra.me>", "msg_from_op": false, "msg_subject": "Re: A starter task" }, { "msg_contents": "On 9/15/24 22:43, Tom Lane wrote:\n> Tomas Vondra <tomas@vondra.me> writes:\n>> I think you can take a look at https://wiki.postgresql.org/wiki/Todo and\n>> see if there's a patch/topic you would be interested in. 
It's really\n>> difficult to \"assign\" a task based on a single sentence, with no info\n>> about the person (experience with other projects, etc.).\n> \n> Beware that that TODO list is poorly maintained, so items may be out\n> of date. Worse, most of what's there got there because it's hard,\n> or there's not consensus about what the feature should look like,\n> or both. So IMO it's not a great place for a beginner to start.\n> \n\nTrue, some of the items may be obsolete, some are likely much harder\nthan expected (or perhaps even infeasible), etc. But it's the only such\nlist we have, and it is at least a reasonable overview of the areas.\n\nPresumably a new contributor will start by discussing the patch first,\nand won't waste too much time on it.\n\n\nregards\n\n-- \nTomas Vondra\n\n\n", "msg_date": "Sun, 15 Sep 2024 22:53:36 +0200", "msg_from": "Tomas Vondra <tomas@vondra.me>", "msg_from_op": false, "msg_subject": "Re: A starter task" }, { "msg_contents": "So isn't there something like Jira backlog to manage the tasks?\n\nMy plan is to do some really small tasks like adding an option to a command\nbut aim for harder ones like optimizer stuff.\n\nOn Mon, Sep 16, 2024, 00:14 Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Tomas Vondra <tomas@vondra.me> writes:\n> > I think you can take a look at https://wiki.postgresql.org/wiki/Todo and\n> > see if there's a patch/topic you would be interested in. It's really\n> > difficult to \"assign\" a task based on a single sentence, with no info\n> > about the person (experience with other projects, etc.).\n>\n> Beware that that TODO list is poorly maintained, so items may be out\n> of date. Worse, most of what's there got there because it's hard,\n> or there's not consensus about what the feature should look like,\n> or both. So IMO it's not a great place for a beginner to start.\n>\n> > FWIW, maybe it'd be better to start by looking at existing patches and\n> > do a bit of a review, learn how to apply/test those and learn from them.\n>\n> Yeah, this is a good way to get some exposure to our code base and\n> development practices.\n>\n> regards, tom lane\n>\n\nSo isn't there something like Jira backlog to manage the tasks?My plan is to do some really small tasks like adding an option to a command but aim for harder ones like optimizer stuff.On Mon, Sep 16, 2024, 00:14 Tom Lane <tgl@sss.pgh.pa.us> wrote:Tomas Vondra <tomas@vondra.me> writes:\n> I think you can take a look at https://wiki.postgresql.org/wiki/Todo and\n> see if there's a patch/topic you would be interested in. It's really\n> difficult to \"assign\" a task based on a single sentence, with no info\n> about the person (experience with other projects, etc.).\n\nBeware that that TODO list is poorly maintained, so items may be out\nof date.  Worse, most of what's there got there because it's hard,\nor there's not consensus about what the feature should look like,\nor both.  
So IMO it's not a great place for a beginner to start.\n\n> FWIW, maybe it'd be better to start by looking at existing patches and\n> do a bit of a review, learn how to apply/test those and learn from them.\n\nYeah, this is a good way to get some exposure to our code base and\ndevelopment practices.\n\n                        regards, tom lane", "msg_date": "Mon, 16 Sep 2024 00:45:31 +0330", "msg_from": "sia kc <siavosh.kasravi@gmail.com>", "msg_from_op": true, "msg_subject": "Re: A starter task" }, { "msg_contents": "Tomas Vondra <tomas@vondra.me> writes:\n> Presumably a new contributor will start by discussing the patch first,\n> and won't waste too much time on it.\n\nYeah, that is a really critical piece of advice for a newbie: no\nmatter what size of patch you are thinking about, a big part of the\njob will be to sell it to the rest of the community. It helps a lot\nto solicit advice while you're still at the design stage, before you\nspend a lot of time writing code you might have to throw away.\n\nStuff that is on the TODO list has a bit of an advantage here, because\nthat indicates there's been at least some interest and previous\ndiscussion. But that doesn't go very far, particularly if there\nwas not consensus about how to do the item. Job 1 is to build that\nconsensus.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 15 Sep 2024 17:45:27 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: A starter task" }, { "msg_contents": "I have a bad experience. I picked up a task from MariaDB backlog, explained\nin their chat rooms that I started doing that. After it was done which was\na SQL command which MySQL already supported to restart server instance with\nSQL, they started rethinking the validity of the feature for the MariaDB.\nSo the task got suspended.\nAbout inlining not sure how it is done with gmail. Maybe should use another\nemail client.\n\nAbout reply to all button, I think only sending to mailing list address\nshould suffice. Why including previous recipients too?\n\nI have a bad experience. I picked up a task from MariaDB backlog, explained in their chat rooms that I started doing that. After it was done which was a SQL command which MySQL already supported to restart server instance with SQL, they started rethinking the validity of the feature for the MariaDB. So the task got suspended.About inlining not sure how it is done with gmail. Maybe should use another email client.About reply to all button, I think only sending to mailing list address should suffice. Why including previous recipients  too?", "msg_date": "Mon, 16 Sep 2024 01:47:44 +0330", "msg_from": "sia kc <siavosh.kasravi@gmail.com>", "msg_from_op": true, "msg_subject": "Re: A starter task" }, { "msg_contents": "sia kc <siavosh.kasravi@gmail.com> writes:\n> About reply to all button, I think only sending to mailing list address\n> should suffice. Why including previous recipients too?\n\nIt's a longstanding habit around here for a couple of reasons:\n\n* The mail list servers are occasionally slow. (Our infrastructure\nis way better than it once was, but that still happens sometimes.)\nIf you directly cc: somebody, they can reply to that copy right away\nwhether or not they get a copy from the list right away.\n\n* pgsql-hackers is a fire hose. 
cc'ing people who have shown interest\nin the thread is useful because they will get those copies separately\nfrom the list traffic, and so they can follow the thread without\nhaving to dig through all the traffic.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 15 Sep 2024 18:32:27 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: A starter task" }, { "msg_contents": "On 9/16/24 00:32, Tom Lane wrote:\n> sia kc <siavosh.kasravi@gmail.com> writes:\n>> About reply to all button, I think only sending to mailing list address\n>> should suffice. Why including previous recipients too?\n> \n> It's a longstanding habit around here for a couple of reasons:\n> \n> * The mail list servers are occasionally slow. (Our infrastructure\n> is way better than it once was, but that still happens sometimes.)\n> If you directly cc: somebody, they can reply to that copy right away\n> whether or not they get a copy from the list right away.\n> \n> * pgsql-hackers is a fire hose. cc'ing people who have shown interest\n> in the thread is useful because they will get those copies separately\n> from the list traffic, and so they can follow the thread without\n> having to dig through all the traffic.\n> \n\nTrue, but it's also up to the client - the messages sent through the\nmailing list have the appropriate headers (List-Id etc.) and it's up to\nthe client to show the \"reply to list\" button. Thunderbird does, for\nexample. I thought gmail would too, but perhaps it doesn't.\n\n\nregards\n\n-- \nTomas Vondra\n\n\n\n", "msg_date": "Mon, 16 Sep 2024 02:00:12 +0200", "msg_from": "Tomas Vondra <tomas@vondra.me>", "msg_from_op": false, "msg_subject": "Re: A starter task" }, { "msg_contents": "FWIW, maybe it'd be better to start by looking at existing patches and\ndo a bit of a review, learn how to apply/test those and learn from them.\n\nlets say i have experience in wal,physical replication,buffer management\nwhere can i find patches to review on these topics?\n\nregards\nTony Wayne\n\nOn Mon, Sep 16, 2024 at 1:30 AM Tomas Vondra <tomas@vondra.me> wrote:\n\n> On 9/15/24 21:42, sia kc wrote:\n> > Sorry I am not sure if I am doing this right. Should I look somewhere\n> > else for tasks?\n> >\n>\n> Hi,\n>\n> I think you can take a look at https://wiki.postgresql.org/wiki/Todo and\n> see if there's a patch/topic you would be interested in. It's really\n> difficult to \"assign\" a task based on a single sentence, with no info\n> about the person (experience with other projects, etc.).\n>\n> If you're staring with PostgreSQL development, maybe take a look at\n> https://wiki.postgresql.org/wiki/Developer_FAQ first. There's a lot of\n> ways to develop, but this is a good intro.\n>\n> FWIW, maybe it'd be better to start by looking at existing patches and\n> do a bit of a review, learn how to apply/test those and learn from them.\n>\n>\n> regards\n>\n> --\n> Tomas Vondra\n>\n>\n>\n\nFWIW, maybe it'd be better to start by looking at existing patches anddo a bit of a review, learn how to apply/test those and learn from them.lets say i have experience in wal,physical replication,buffer management where can i find patches to review on these topics?regardsTony WayneOn Mon, Sep 16, 2024 at 1:30 AM Tomas Vondra <tomas@vondra.me> wrote:On 9/15/24 21:42, sia kc wrote:\n> Sorry I am not sure if I am doing this right. 
Should I look somewhere\n> else for tasks?\n> \n\nHi,\n\nI think you can take a look at https://wiki.postgresql.org/wiki/Todo and\nsee if there's a patch/topic you would be interested in. It's really\ndifficult to \"assign\" a task based on a single sentence, with no info\nabout the person (experience with other projects, etc.).\n\nIf you're staring with PostgreSQL development, maybe take a look at\nhttps://wiki.postgresql.org/wiki/Developer_FAQ first. There's a lot of\nways to develop, but this is a good intro.\n\nFWIW, maybe it'd be better to start by looking at existing patches and\ndo a bit of a review, learn how to apply/test those and learn from them.\n\n\nregards\n\n-- \nTomas Vondra", "msg_date": "Mon, 16 Sep 2024 12:19:27 +0530", "msg_from": "Tony Wayne <anonymouslydark3@gmail.com>", "msg_from_op": false, "msg_subject": "Re: A starter task" }, { "msg_contents": "On 9/16/24 08:49, Tony Wayne wrote:\n> FWIW, maybe it'd be better to start by looking at existing patches and\n> do a bit of a review, learn how to apply/test those and learn from them.\n> \n> lets say i have experience in wal,physical replication,buffer management\n> where can i find patches to review on these topics?\n>\n\nStart by looking at the current commitfest:\n\nhttps://commitfest.postgresql.org/49/\n\nThis is the only place tracking patches people are currently working on,\nand submitted to the mailing list for a discussion/review.\n\n\nregards\n\n-- \nTomas Vondra\n\n\n\n", "msg_date": "Mon, 16 Sep 2024 09:49:53 +0200", "msg_from": "Tomas Vondra <tomas@vondra.me>", "msg_from_op": false, "msg_subject": "Re: A starter task" }, { "msg_contents": "On 9/16/24 00:17, sia kc wrote:\n> I have a bad experience. I picked up a task from MariaDB backlog,\n> explained in their chat rooms that I started doing that. After it was\n> done which was a SQL command which MySQL already supported to restart\n> server instance with SQL, they started rethinking the validity of the\n> feature for the MariaDB. So the task got suspended.\n> \n\nUnfortunately this can happen here too, to some extent. Sometimes it's\nnot obvious how complex the patch will be, the feature may conflict with\nanother feature in some unexpected way, etc. It's not like we have a\n100% validated and agreed design somewhere.\n\nThis is why my advice is to pick a patch the contributor is personally\ninterested in. It puts him/her in a better position to advocate for the\nfeature, decide what trade offs are more appropriate, etc.\n\n> About inlining not sure how it is done with gmail. Maybe should use\n> another email client.\n> \n\nCan you just expand the email, hit enter in a place where you want to\nadd a response, and write.\n\n\nregards\n\n-- \nTomas Vondra\n\n\n\n", "msg_date": "Mon, 16 Sep 2024 09:58:08 +0200", "msg_from": "Tomas Vondra <tomas@vondra.me>", "msg_from_op": false, "msg_subject": "Re: A starter task" }, { "msg_contents": "On Mon, Sep 16, 2024 at 11:28 AM Tomas Vondra <tomas@vondra.me> wrote:\n\n> On 9/16/24 00:17, sia kc wrote:\n> > I have a bad experience. I picked up a task from MariaDB backlog,\n> > explained in their chat rooms that I started doing that. After it was\n> > done which was a SQL command which MySQL already supported to restart\n> > server instance with SQL, they started rethinking the validity of the\n> > feature for the MariaDB. So the task got suspended.\n> >\n>\n> Unfortunately this can happen here too, to some extent. 
Sometimes it's\n> not obvious how complex the patch will be, the feature may conflict with\n> another feature in some unexpected way, etc. It's not like we have a\n> 100% validated and agreed design somewhere.\n>\n\n\n> This is why my advice is to pick a patch the contributor is personally\n> interested in. It puts him/her in a better position to advocate for the\n> feature, decide what trade offs are more appropriate, etc.\n>\nBy picking a patch I assume you mean picking an already done task and\nseeing for example how I would have done it, right?\n\n\n\n>\n> > About inlining not sure how it is done with gmail. Maybe should use\n> > another email client.\n> >\n>\n> Can you just expand the email, hit enter in a place where you want to\n> add a response, and write.\n>\n>\nThanks.\n\n\n-- \n\nSiavosh Kasravi\n* \"Save a Tree\" - Please print this email only if necessary.*\n\nOn Mon, Sep 16, 2024 at 11:28 AM Tomas Vondra <tomas@vondra.me> wrote:On 9/16/24 00:17, sia kc wrote:\n> I have a bad experience. I picked up a task from MariaDB backlog,\n> explained in their chat rooms that I started doing that. After it was\n> done which was a SQL command which MySQL already supported to restart\n> server instance with SQL, they started rethinking the validity of the\n> feature for the MariaDB. So the task got suspended.\n> \n\nUnfortunately this can happen here too, to some extent. Sometimes it's\nnot obvious how complex the patch will be, the feature may conflict with\nanother feature in some unexpected way, etc. It's not like we have a\n100% validated and agreed design somewhere.\n\nThis is why my advice is to pick a patch the contributor is personally\ninterested in. It puts him/her in a better position to advocate for the\nfeature, decide what trade offs are more appropriate, etc.By picking a patch I assume you mean picking an already done task and seeing for example how I would have done it, right? \n\n> About inlining not sure how it is done with gmail. Maybe should use\n> another email client.\n> \n\nCan you just expand the email, hit enter in a place where you want to\nadd a response, and write.Thanks.-- Siavosh Kasravi \"Save a Tree\" - Please print this email only if necessary.", "msg_date": "Mon, 16 Sep 2024 12:05:24 +0330", "msg_from": "sia kc <siavosh.kasravi@gmail.com>", "msg_from_op": true, "msg_subject": "Re: A starter task" }, { "msg_contents": "On 9/16/24 10:35, sia kc wrote:\n> \n> \n> On Mon, Sep 16, 2024 at 11:28 AM Tomas Vondra <tomas@vondra.me\n> <mailto:tomas@vondra.me>> wrote:\n> \n> On 9/16/24 00:17, sia kc wrote:\n> > I have a bad experience. I picked up a task from MariaDB backlog,\n> > explained in their chat rooms that I started doing that. After it was\n> > done which was a SQL command which MySQL already supported to restart\n> > server instance with SQL, they started rethinking the validity of the\n> > feature for the MariaDB. So the task got suspended.\n> >\n> \n> Unfortunately this can happen here too, to some extent. Sometimes it's\n> not obvious how complex the patch will be, the feature may conflict with\n> another feature in some unexpected way, etc. It's not like we have a\n> 100% validated and agreed design somewhere.\n> \n> \n> \n> This is why my advice is to pick a patch the contributor is personally\n> interested in. 
It puts him/her in a better position to advocate for the\n> feature, decide what trade offs are more appropriate, etc.\n> \n> By picking a patch I assume you mean picking an already done task and\n> seeing for example how I would have done it, right?\n> \n\nI mean both the patch you'd review and the patch/feature you'd be\nwriting yourself. My experience is that when a person is genuinely\ninterested in a topic, that makes it easier to reason about approaches,\ntrade offs, and stick with the patch even if it doesn't go smoothly.\n\nIt's a bit similar to a homework. I always absolutely hated homework\ndone only for the sake of a homework, and done the absolutely bare\nminimum. But if it was something useful/interesting, I'd spend hours\nperfecting it. Patches are similar, IMO.\n\nIf you pick a patch that's useful for you (e.g. the feature would make\nyour job easier), that's a huge advantage IMO.\n\n\nregards\n\n-- \nTomas Vondra\n\n\n", "msg_date": "Mon, 16 Sep 2024 11:27:37 +0200", "msg_from": "Tomas Vondra <tomas@vondra.me>", "msg_from_op": false, "msg_subject": "Re: A starter task" }, { "msg_contents": "On 2024-09-15 Su 6:17 PM, sia kc wrote:\n>\n>\n> About inlining not sure how it is done with gmail. Maybe should use \n> another email client.\n\n\nClick the three dots with the tooltip \"Show trimmed content\". Then you \ncan scroll down and put your reply inline. (Personally I detest the \nGmail web interface, and use a different MUA, but you can do this even \nwith the Gmail web app.)\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2024-09-15 Su 6:17 PM, sia kc wrote:\n\n\n\n\n\n\n\n\nAbout inlining not sure\n how it is done with gmail. Maybe should use another email\n client.\n\n\n\n\n\n\nClick the three dots with the tooltip \"Show trimmed content\".\n Then you can scroll down and put your reply inline. (Personally I\n detest the Gmail web interface, and use a different MUA, but you\n can do this even with the Gmail web app.)\n\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com", "msg_date": "Tue, 17 Sep 2024 10:48:21 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: A starter task" } ]
[ { "msg_contents": "Hi,\n\nWith Postgres 17 RC1 on Windows, `float_to_shortest_decimal_buf` and\n`float_to_shortest_decimal_bufn` are not longer exported. This causes\n`unresolved external symbol` linking errors for extensions that rely on\nthese functions (like pgvector). Can these functions be exported like\nprevious versions of Postgres?\n\nThanks,\nAndrew\n\nHi,With Postgres 17 RC1 on Windows, `float_to_shortest_decimal_buf` and `float_to_shortest_decimal_bufn` are not longer exported. This causes `unresolved external symbol` linking errors for extensions that rely on these functions (like pgvector). Can these functions be exported like previous versions of Postgres?Thanks,Andrew", "msg_date": "Fri, 13 Sep 2024 13:07:13 -0700", "msg_from": "Andrew Kane <andrew@ankane.org>", "msg_from_op": true, "msg_subject": "Exporting float_to_shortest_decimal_buf(n) with Postgres 17 on\n Windows" }, { "msg_contents": "Andrew Kane <andrew@ankane.org> writes:\n> With Postgres 17 RC1 on Windows, `float_to_shortest_decimal_buf` and\n> `float_to_shortest_decimal_bufn` are not longer exported. This causes\n> `unresolved external symbol` linking errors for extensions that rely on\n> these functions (like pgvector). Can these functions be exported like\n> previous versions of Postgres?\n\nAFAICS it's in the exact same place it was in earlier versions.\nYou might need to review your linking commands.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 13 Sep 2024 16:58:20 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Exporting float_to_shortest_decimal_buf(n) with Postgres 17 on\n Windows" }, { "msg_contents": "\n\n> On 13 Sep 2024, at 11:07 PM, Andrew Kane <andrew@ankane.org> wrote:\n> \n> Hi,\n> \n> With Postgres 17 RC1 on Windows, `float_to_shortest_decimal_buf` and `float_to_shortest_decimal_bufn` are not longer exported. This causes `unresolved external symbol` linking errors for extensions that rely on these functions (like pgvector). Can these functions be exported like previous versions of Postgres?\n\nProbably a Windows thing?\nJust tried on Darwin with 17_RC1, and pgvector 0.7.4 build installcheck’s OK. \n\n\n> \n> Thanks,\n> Andrew\n\n\n\n", "msg_date": "Sat, 14 Sep 2024 00:27:47 +0300", "msg_from": "Florents Tselai <florents.tselai@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Exporting float_to_shortest_decimal_buf(n) with Postgres 17 on\n Windows" }, { "msg_contents": "On Fri, Sep 13, 2024 at 04:58:20PM -0400, Tom Lane wrote:\n> Andrew Kane <andrew@ankane.org> writes:\n>> With Postgres 17 RC1 on Windows, `float_to_shortest_decimal_buf` and\n>> `float_to_shortest_decimal_bufn` are not longer exported. This causes\n>> `unresolved external symbol` linking errors for extensions that rely on\n>> these functions (like pgvector). Can these functions be exported like\n>> previous versions of Postgres?\n> \n> AFAICS it's in the exact same place it was in earlier versions.\n> You might need to review your linking commands.\n\nI do see a fair amount of special handling for f2s.c in the build files. 
I\nwonder if something got broken for Windows in the switch from the MSVC\nscripts to meson.\n\n-- \nnathan\n\n\n", "msg_date": "Fri, 13 Sep 2024 16:41:23 -0500", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Exporting float_to_shortest_decimal_buf(n) with Postgres 17 on\n Windows" }, { "msg_contents": "> Probably a Windows thing?\n\nCorrect, it's only on Windows.\n\n> I do see a fair amount of special handling for f2s.c in the build files.\n I\nwonder if something got broken for Windows in the switch from the MSVC\nscripts to meson.\n\nThis was my hunch as well since none of the source files changed. Also,\nneither function is present with `dumpbin /EXPORTS /SYMBOLS\nlib\\postgres.lib`, which led me to believe it may need to be addressed\nupstream.\n\n- Andrew\n\nOn Fri, Sep 13, 2024 at 2:41 PM Nathan Bossart <nathandbossart@gmail.com>\nwrote:\n\n> On Fri, Sep 13, 2024 at 04:58:20PM -0400, Tom Lane wrote:\n> > Andrew Kane <andrew@ankane.org> writes:\n> >> With Postgres 17 RC1 on Windows, `float_to_shortest_decimal_buf` and\n> >> `float_to_shortest_decimal_bufn` are not longer exported. This causes\n> >> `unresolved external symbol` linking errors for extensions that rely on\n> >> these functions (like pgvector). Can these functions be exported like\n> >> previous versions of Postgres?\n> >\n> > AFAICS it's in the exact same place it was in earlier versions.\n> > You might need to review your linking commands.\n>\n> I do see a fair amount of special handling for f2s.c in the build files. I\n> wonder if something got broken for Windows in the switch from the MSVC\n> scripts to meson.\n>\n> --\n> nathan\n>\n\n> Probably a Windows thing?Correct, it's only on Windows.> I do see a fair amount of special handling for f2s.c in the build files.  Iwonder if something got broken for Windows in the switch from the MSVCscripts to meson.This was my hunch as well since none of the source files changed. Also, neither function is present with `dumpbin /EXPORTS /SYMBOLS lib\\postgres.lib`, which led me to believe it may need to be addressed upstream.- AndrewOn Fri, Sep 13, 2024 at 2:41 PM Nathan Bossart <nathandbossart@gmail.com> wrote:On Fri, Sep 13, 2024 at 04:58:20PM -0400, Tom Lane wrote:\n> Andrew Kane <andrew@ankane.org> writes:\n>> With Postgres 17 RC1 on Windows, `float_to_shortest_decimal_buf` and\n>> `float_to_shortest_decimal_bufn` are not longer exported. This causes\n>> `unresolved external symbol` linking errors for extensions that rely on\n>> these functions (like pgvector). Can these functions be exported like\n>> previous versions of Postgres?\n> \n> AFAICS it's in the exact same place it was in earlier versions.\n> You might need to review your linking commands.\n\nI do see a fair amount of special handling for f2s.c in the build files.  I\nwonder if something got broken for Windows in the switch from the MSVC\nscripts to meson.\n\n-- \nnathan", "msg_date": "Fri, 13 Sep 2024 15:26:42 -0700", "msg_from": "Andrew Kane <andrew@ankane.org>", "msg_from_op": true, "msg_subject": "Re: Exporting float_to_shortest_decimal_buf(n) with Postgres 17 on\n Windows" }, { "msg_contents": "On Fri, Sep 13, 2024 at 03:26:42PM -0700, Andrew Kane wrote:\n> This was my hunch as well since none of the source files changed. Also,\n> neither function is present with `dumpbin /EXPORTS /SYMBOLS\n> lib\\postgres.lib`, which led me to believe it may need to be addressed\n> upstream.\n\nHmm. 
Perhaps we are not careful enough with the calls of\nmsvc_gendef.pl in meson.build, missing some spots? \n--\nMichael", "msg_date": "Sat, 14 Sep 2024 09:15:25 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Exporting float_to_shortest_decimal_buf(n) with Postgres 17 on\n Windows" } ]
[ { "msg_contents": "Hi,\n\n(adding TOm in Cc as committer/co-author of the original patch)\n\nWhile adapting in pg_stat_kcache the fix for buggy nesting level calculation, I\nnoticed that one comment referencing the old approach was missed. Trivial\npatch attached.", "msg_date": "Sat, 14 Sep 2024 12:24:16 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": true, "msg_subject": "Obsolete comment in pg_stat_statements" }, { "msg_contents": "Julien Rouhaud <rjuju123@gmail.com> writes:\n> While adapting in pg_stat_kcache the fix for buggy nesting level calculation, I\n> noticed that one comment referencing the old approach was missed. Trivial\n> patch attached.\n\nHmm ... I agree that para is out of date, but is there anything to\nsalvage rather than just delete it?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 14 Sep 2024 00:39:04 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Obsolete comment in pg_stat_statements" }, { "msg_contents": "On Sat, 14 Sept 2024, 12:39 Tom Lane, <tgl@sss.pgh.pa.us> wrote:\n\n> Julien Rouhaud <rjuju123@gmail.com> writes:\n> > While adapting in pg_stat_kcache the fix for buggy nesting level\n> calculation, I\n> > noticed that one comment referencing the old approach was missed.\n> Trivial\n> > patch attached.\n>\n> Hmm ... I agree that para is out of date, but is there anything to\n> salvage rather than just delete it?\n\n\nI thought about it but I think that now that knowledge is in the else\nbranch, with the mention that we still have to bump the nesting level even\nif it's not locally handled.\n\n>\n>\n\nOn Sat, 14 Sept 2024, 12:39 Tom Lane, <tgl@sss.pgh.pa.us> wrote:Julien Rouhaud <rjuju123@gmail.com> writes:\n> While adapting in pg_stat_kcache the fix for buggy nesting level calculation, I\n> noticed that one comment referencing the old approach was missed.  Trivial\n> patch attached.\n\nHmm ... I agree that para is out of date, but is there anything to\nsalvage rather than just delete it?I thought about it but I think that now that knowledge is in the else branch, with the mention that we still have to bump the nesting level even if it's not locally handled.", "msg_date": "Sat, 14 Sep 2024 12:56:05 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Obsolete comment in pg_stat_statements" }, { "msg_contents": "Julien Rouhaud <rjuju123@gmail.com> writes:\n> On Sat, 14 Sept 2024, 12:39 Tom Lane, <tgl@sss.pgh.pa.us> wrote:\n>> Hmm ... I agree that para is out of date, but is there anything to\n>> salvage rather than just delete it?\n\n> I thought about it but I think that now that knowledge is in the else\n> branch, with the mention that we still have to bump the nesting level even\n> if it's not locally handled.\n\nAfter sleeping on it I looked again, and I think you're right,\nthere's no useful knowledge remaining in this para. Pushed.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 14 Sep 2024 11:44:23 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Obsolete comment in pg_stat_statements" }, { "msg_contents": "On Sat, 14 Sept 2024, 23:44 Tom Lane, <tgl@sss.pgh.pa.us> wrote:\n\n> Julien Rouhaud <rjuju123@gmail.com> writes:\n> > On Sat, 14 Sept 2024, 12:39 Tom Lane, <tgl@sss.pgh.pa.us> wrote:\n> >> Hmm ... 
I agree that para is out of date, but is there anything to\n> >> salvage rather than just delete it?\n>\n> > I thought about it but I think that now that knowledge is in the else\n> > branch, with the mention that we still have to bump the nesting level\n> even\n> > if it's not locally handled.\n>\n> After sleeping on it I looked again, and I think you're right,\n> there's no useful knowledge remaining in this para. Pushed.\n\n\nthanks!\n\nOn Sat, 14 Sept 2024, 23:44 Tom Lane, <tgl@sss.pgh.pa.us> wrote:Julien Rouhaud <rjuju123@gmail.com> writes:\n> On Sat, 14 Sept 2024, 12:39 Tom Lane, <tgl@sss.pgh.pa.us> wrote:\n>> Hmm ... I agree that para is out of date, but is there anything to\n>> salvage rather than just delete it?\n\n> I thought about it but I think that now that knowledge is in the else\n> branch, with the mention that we still have to bump the nesting level even\n> if it's not locally handled.\n\nAfter sleeping on it I looked again, and I think you're right,\nthere's no useful knowledge remaining in this para.  Pushed.thanks!", "msg_date": "Sun, 15 Sep 2024 10:30:02 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Obsolete comment in pg_stat_statements" } ]
[ { "msg_contents": "Hello hackers,\n\nWhile trying to reproduce inexplicable drongo failures (e. g., [1])\nseemingly related to BackgroundPsql, I stumbled upon close, but not\nthe same issue. After many (6-8 thousands) iterations of the\n015_stream.pl TAP test, psql failed to start with a STATUS_DLL_INIT_FAILED\nerror, and a corresponding Windows popup preventing a test exit (see ss-1\nin the archive attached).\n\nUpon reaching that state of the system, following test runs fail with one\nor another error related to memory (not only with psql, but also with the\nserver processes):\ntestrun/subscription_13/015_stream/log/015_stream_publisher.log:2024-09-11 20:01:51.777 PDT [8812] LOG:  server process \n(PID 11532) was terminated by exception 0xC00000FD\ntestrun/subscription_14/015_stream/log/015_stream_subscriber.log:2024-09-11 20:01:19.806 PDT [2036] LOG:  server process \n(PID 10548) was terminated by exception 0xC00000FD\ntestrun/subscription_16/015_stream/log/015_stream_publisher.log:2024-09-11 19:59:41.513 PDT [9128] LOG:  server process \n(PID 14476) was terminated by exception 0xC0000409\ntestrun/subscription_19/015_stream/log/015_stream_publisher.log:2024-09-11 20:03:27.801 PDT [10156] LOG:  server process \n(PID 2236) was terminated by exception 0xC0000409\ntestrun/subscription_20/015_stream/log/015_stream_publisher.log:2024-09-11 19:59:41.359 PDT [10656] LOG:  server process \n(PID 14712) was terminated by exception 0xC000012D\ntestrun/subscription_3/015_stream/log/015_stream_publisher.log:2024-09-11 20:02:23.815 PDT [13704] LOG:  server process \n(PID 13992) was terminated by exception 0xC00000FD\ntestrun/subscription_9/015_stream/log/015_stream_publisher.log:2024-09-11 19:59:41.360 PDT [9760] LOG:  server process \n(PID 11608) was terminated by exception 0xC0000142\n...\n\nWhen tests fail, I see Commit Charge reaching 100% (see ss-2 in the\nattachment), while Physical Memory isn't all used up. To get OS to a\nworking state, I had to reboot it — killing processes, logoff/logon didn't\nhelp.\n\nI reproduced this issue again, investigated it and found out that it is\ncaused by robocopy (used by PostgreSQL::Test::Cluster->init), which is\nleaking kernel objects, namely \"Event objects\" within Non-Paged pool on\neach run.\n\nThis can be seen with Kernel Pool Monitor, or just with this simple PS script:\nfor ($i = 1; $i -le 100; $i++)\n{\necho \"iteration $i\"\nrm -r c:\\temp\\target\nrobocopy.exe /E /NJH /NFL /NDL /NP c:\\temp\\initdb-template c:\\temp\\target\nGet-WmiObject -Class Win32_PerfRawData_PerfOS_Memory | % PoolNonpagedBytes\n}\n\nIt shows to me:\niteration 1\n                Total    Copied   Skipped  Mismatch    FAILED Extras\n     Dirs :        27        27         0         0         0 0\n    Files :       968       968         0         0         0 0\n    Bytes :   38.29 m   38.29 m         0         0         0 0\n    Times :   0:00:00   0:00:00                       0:00:00 0:00:00\n...\n1226063872\n...\niteration 100\n                Total    Copied   Skipped  Mismatch    FAILED Extras\n     Dirs :        27        27         0         0         0 0\n    Files :       968       968         0         0         0 0\n    Bytes :   38.29 m   38.29 m         0         0         0 0\n    Times :   0:00:00   0:00:00                       0:00:00 0:00:00\n...\n1245220864\n\n(That is, 0.1-0.2 MB leaks per one robocopy run.)\n\nI observed this on Windows 10 (Version 10.0.19045.4780), with all updates\ninstalled, but not on Windows Server 2016 (10.0.14393.0). 
Moreover, using\nrobocopy v14393 on Windows 10 doesn't affect the issue.\n\nPerhaps this information can be helpful for someone who is running\nbuildfarm/CI tests on Windows animals...\n\n[1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2024-09-11%2007%3A24%3A53\n\nBest regards,\nAlexander", "msg_date": "Sat, 14 Sep 2024 16:00:00 +0300", "msg_from": "Alexander Lakhin <exclusion@gmail.com>", "msg_from_op": true, "msg_subject": "Robocopy might be not robust enough for never-ending testing on\n Windows" }, { "msg_contents": "\nOn 2024-09-14 Sa 9:00 AM, Alexander Lakhin wrote:\n> Hello hackers,\n>\n> While trying to reproduce inexplicable drongo failures (e. g., [1])\n> seemingly related to BackgroundPsql, I stumbled upon close, but not\n> the same issue. After many (6-8 thousands) iterations of the\n> 015_stream.pl TAP test, psql failed to start with a \n> STATUS_DLL_INIT_FAILED\n> error, and a corresponding Windows popup preventing a test exit (see ss-1\n> in the archive attached).\n>\n> Upon reaching that state of the system, following test runs fail with one\n> or another error related to memory (not only with psql, but also with the\n> server processes):\n> testrun/subscription_13/015_stream/log/015_stream_publisher.log:2024-09-11 \n> 20:01:51.777 PDT [8812] LOG:  server process (PID 11532) was \n> terminated by exception 0xC00000FD\n> testrun/subscription_14/015_stream/log/015_stream_subscriber.log:2024-09-11 \n> 20:01:19.806 PDT [2036] LOG:  server process (PID 10548) was \n> terminated by exception 0xC00000FD\n> testrun/subscription_16/015_stream/log/015_stream_publisher.log:2024-09-11 \n> 19:59:41.513 PDT [9128] LOG:  server process (PID 14476) was \n> terminated by exception 0xC0000409\n> testrun/subscription_19/015_stream/log/015_stream_publisher.log:2024-09-11 \n> 20:03:27.801 PDT [10156] LOG:  server process (PID 2236) was \n> terminated by exception 0xC0000409\n> testrun/subscription_20/015_stream/log/015_stream_publisher.log:2024-09-11 \n> 19:59:41.359 PDT [10656] LOG:  server process (PID 14712) was \n> terminated by exception 0xC000012D\n> testrun/subscription_3/015_stream/log/015_stream_publisher.log:2024-09-11 \n> 20:02:23.815 PDT [13704] LOG:  server process (PID 13992) was \n> terminated by exception 0xC00000FD\n> testrun/subscription_9/015_stream/log/015_stream_publisher.log:2024-09-11 \n> 19:59:41.360 PDT [9760] LOG:  server process (PID 11608) was \n> terminated by exception 0xC0000142\n> ...\n>\n> When tests fail, I see Commit Charge reaching 100% (see ss-2 in the\n> attachment), while Physical Memory isn't all used up. 
To get OS to a\n> working state, I had to reboot it — killing processes, logoff/logon \n> didn't\n> help.\n>\n> I reproduced this issue again, investigated it and found out that it is\n> caused by robocopy (used by PostgreSQL::Test::Cluster->init), which is\n> leaking kernel objects, namely \"Event objects\" within Non-Paged pool on\n> each run.\n>\n> This can be seen with Kernel Pool Monitor, or just with this simple PS \n> script:\n> for ($i = 1; $i -le 100; $i++)\n> {\n> echo \"iteration $i\"\n> rm -r c:\\temp\\target\n> robocopy.exe /E /NJH /NFL /NDL /NP c:\\temp\\initdb-template c:\\temp\\target\n> Get-WmiObject -Class Win32_PerfRawData_PerfOS_Memory | % \n> PoolNonpagedBytes\n> }\n>\n> It shows to me:\n> iteration 1\n>                Total    Copied   Skipped  Mismatch    FAILED Extras\n>     Dirs :        27        27         0         0         0 0\n>    Files :       968       968         0         0         0 0\n>    Bytes :   38.29 m   38.29 m         0         0         0 0\n>    Times :   0:00:00   0:00:00                       0:00:00 0:00:00\n> ...\n> 1226063872\n> ...\n> iteration 100\n>                Total    Copied   Skipped  Mismatch    FAILED Extras\n>     Dirs :        27        27         0         0         0 0\n>    Files :       968       968         0         0         0 0\n>    Bytes :   38.29 m   38.29 m         0         0         0 0\n>    Times :   0:00:00   0:00:00                       0:00:00 0:00:00\n> ...\n> 1245220864\n>\n> (That is, 0.1-0.2 MB leaks per one robocopy run.)\n>\n> I observed this on Windows 10 (Version 10.0.19045.4780), with all updates\n> installed, but not on Windows Server 2016 (10.0.14393.0). Moreover, using\n> robocopy v14393 on Windows 10 doesn't affect the issue.\n>\n> Perhaps this information can be helpful for someone who is running\n> buildfarm/CI tests on Windows animals...\n>\n> [1] \n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2024-09-11%2007%3A24%3A53\n>\n>\n\nInteresting, good detective work. Still, wouldn't this mean drongo would \nfail consistently?\n\nI wonder why we're using robocopy instead of our own RecursiveCopy module?\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Sat, 14 Sep 2024 10:22:11 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Robocopy might be not robust enough for never-ending testing on\n Windows" }, { "msg_contents": "Hello Andrew,\n\n14.09.2024 17:22, Andrew Dunstan wrote:\n>\n> On 2024-09-14 Sa 9:00 AM, Alexander Lakhin wrote:\n>> While trying to reproduce inexplicable drongo failures (e. g., [1])\n>> seemingly related to BackgroundPsql, I stumbled upon close, but not\n>> the same issue. ...\n>>\n>\n> Interesting, good detective work. Still, wouldn't this mean drongo would fail consistently?\n\nYes, I think drongo is suffering from another disease, we're yet to find\nwhich.\n\n>\n> I wonder why we're using robocopy instead of our own RecursiveCopy module?\n>\n\nAFAICS, robocopy is also used by regress.exe, so switching to the perl\nmodule would require perl even for regress tests. 
I know that perl was\na requirement for MSVC builds, but maybe that's not so with meson...\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Sat, 14 Sep 2024 18:00:00 +0300", "msg_from": "Alexander Lakhin <exclusion@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Robocopy might be not robust enough for never-ending testing on\n Windows" }, { "msg_contents": "On Sun, Sep 15, 2024 at 1:00 AM Alexander Lakhin <exclusion@gmail.com> wrote:\n> (That is, 0.1-0.2 MB leaks per one robocopy run.)\n>\n> I observed this on Windows 10 (Version 10.0.19045.4780), with all updates\n> installed, but not on Windows Server 2016 (10.0.14393.0). Moreover, using\n> robocopy v14393 on Windows 10 doesn't affect the issue.\n\nI don't understand Windows but that seems pretty weird to me, as it\nseems to imply that a driver or something fairly low level inside the\nkernel is leaking objects (at least by simple minded analogies to\noperating systems I understand better). Either that or robocop.exe\nhas userspace stuff involving at least one thread still running\nsomewhere after it's exited, but that seems unlikely as I guess you'd\nhave noticed that...\n\nJust a thought: I was surveying the block cloning landscape across\nOSes and filesystems while looking into clone-based CREATE DATABASE\n(CF #4886) and also while thinking about the new TAP test initdb\ntemplate copy trick, is that robocopy.exe tries to use Windows' block\ncloning magic, just like cp on recent Linux and FreeBSD systems (at\none point I was wondering if that was causing some funky extra flush\nstalls on some systems, I need to come back to that...). It probably\ndoesn't actually work unless you have Windows 11 kernel with DevDrive\nenabled (from reading, no Windows here), but I guess it still probably\nuses the new system interfaces, probably something like CopyFileEx().\nDoes it still leak if you use /nooffload or /noclone?\n\n\n", "msg_date": "Sun, 15 Sep 2024 08:32:04 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Robocopy might be not robust enough for never-ending testing on\n Windows" }, { "msg_contents": "On Sun, Sep 15, 2024 at 8:32 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> template copy trick, is that robocopy.exe tries to use Windows' block\n\nErm, sorry for my fumbled early morning message editing and garbled\nEnglish... I meant to write \"one thing I noticed is that..\".\n\n\n", "msg_date": "Sun, 15 Sep 2024 08:45:39 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Robocopy might be not robust enough for never-ending testing on\n Windows" }, { "msg_contents": "Hello Thomas,\n\n14.09.2024 23:32, Thomas Munro wrote:\n> On Sun, Sep 15, 2024 at 1:00 AM Alexander Lakhin <exclusion@gmail.com> wrote:\n>> (That is, 0.1-0.2 MB leaks per one robocopy run.)\n>>\n>> I observed this on Windows 10 (Version 10.0.19045.4780), with all updates\n>> installed, but not on Windows Server 2016 (10.0.14393.0). Moreover, using\n>> robocopy v14393 on Windows 10 doesn't affect the issue.\n> I don't understand Windows but that seems pretty weird to me, as it\n> seems to imply that a driver or something fairly low level inside the\n> kernel is leaking objects (at least by simple minded analogies to\n> operating systems I understand better). 
Either that or robocop.exe\n> has userspace stuff involving at least one thread still running\n> somewhere after it's exited, but that seems unlikely as I guess you'd\n> have noticed that...\n\nYes, I see no robocopy process left after the test, and I think userspace\nthreads would not survive logoff.\n\n> Just a thought: I was surveying the block cloning landscape across\n> OSes and filesystems while looking into clone-based CREATE DATABASE\n> (CF #4886) and also while thinking about the new TAP test initdb\n> template copy trick, is that robocopy.exe tries to use Windows' block\n> cloning magic, just like cp on recent Linux and FreeBSD systems (at\n> one point I was wondering if that was causing some funky extra flush\n> stalls on some systems, I need to come back to that...). It probably\n> doesn't actually work unless you have Windows 11 kernel with DevDrive\n> enabled (from reading, no Windows here), but I guess it still probably\n> uses the new system interfaces, probably something like CopyFileEx().\n> Does it still leak if you use /nooffload or /noclone?\n\nI tested the following (with the script above):\nWindows 10 (Version 10.0.19045.4780):\nrobocopy.exe (10.0.19041.4717) /NOOFFLOAD\niteration 1\n496611328\n...\niteration 1000\n609701888\n\nThat is, it leaks\n\n/NOCLONE is not supported by that robocopy version:\nERROR : Invalid Parameter #1 : \"/NOCLONE\"\n\nThen, Windows 11 (Version 10.0.22000.613), robocopy 10.0.22000.469:\niteration 1\n141217792\n...\niteration 996\n151670784\n...\niteration 997\n152817664\n...\niteration 1000\n151674880\n\nThat is, it doesn't leak.\n\nrobocopy.exe /NOOFFLOAD\niteration 1\n152666112\n...\niteration 1000\n153341952\n\nNo leak.\n\n/NOCLONE is not supported by that robocopy version:\n\nThen I updated that Windows 11 to Version 10.0.22000.2538 (with KB5031358),\nrobocopy 10.0.22000.1516:\niteration 1\n122753024\n...\niteration 1000\n244674560\n\nIt does leak.\n\nrobocopy /NOOFFLOAD\niteration 1\n167522304\n...\niteration 1000\n283484160\n\nIt leaks as well.\n\nFinally, I've installed newest Windows 11 Version 10.0.22631.4169, with\nrobocopy 10.0.22621.3672:\nNon-paged pool increased from 133 to 380 MB after 1000 robocopy runs.\n\nrobocopy /OFFLOAD leaks too.\n\n/NOCLONE is not supported by that robocopy version:\n\nSo this leak looks like a recent and still existing defect.\n\n(Sorry for a delay, fighting with OS updates/installation took me a while.)\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Mon, 16 Sep 2024 09:00:00 +0300", "msg_from": "Alexander Lakhin <exclusion@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Robocopy might be not robust enough for never-ending testing on\n Windows" }, { "msg_contents": "On Mon, Sep 16, 2024 at 6:00 PM Alexander Lakhin <exclusion@gmail.com> wrote:\n> So this leak looks like a recent and still existing defect.\n\n From my cartoon-like understanding of Windows, I would guess that if\nevent handles created by a program are leaked after it has exited, it\nwould normally imply that they've been duplicated somewhere else that\nis still running (for example see the way that PostgreSQL's\ndsm_impl_pin_segment() calls DuplicateHandle() to give a copy to the\npostmaster, so that the memory segment continues to exist after the\nbackend exits), and if it's that, you'd be able to see the handle\ncount going up in the process monitor for some longer running process\nsomewhere (as seen in this report from the Chrome hackers[1]). 
And if\nit's not that, then I would guess it would have to be a kernel bug\nbecause something outside userspace must be holding onto/leaking\nhandles. But I don't really understand Windows beyond trying to debug\nPostgreSQL at a distance, so my guesses may be way off. If we wanted\nto try to find a Windows expert to look at a standalone repro, does\nyour PS script work with *any* source directory, or is there something\nabout the initdb template, in which case could you post it in a .zip\nfile so that a non-PostgreSQL person could see the failure mode?\n\n[1] https://randomascii.wordpress.com/2021/07/25/finding-windows-handle-leaks-in-chromium-and-others/\n\n\n", "msg_date": "Tue, 17 Sep 2024 13:01:17 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Robocopy might be not robust enough for never-ending testing on\n Windows" }, { "msg_contents": "Hello Thomas,\n\n17.09.2024 04:01, Thomas Munro wrote:\n> On Mon, Sep 16, 2024 at 6:00 PM Alexander Lakhin <exclusion@gmail.com> wrote:\n>> So this leak looks like a recent and still existing defect.\n> From my cartoon-like understanding of Windows, I would guess that if\n> event handles created by a program are leaked after it has exited, it\n> would normally imply that they've been duplicated somewhere else that\n> is still running (for example see the way that PostgreSQL's\n> dsm_impl_pin_segment() calls DuplicateHandle() to give a copy to the\n> postmaster, so that the memory segment continues to exist after the\n> backend exits), and if it's that, you'd be able to see the handle\n> count going up in the process monitor for some longer running process\n> somewhere (as seen in this report from the Chrome hackers[1]). And if\n> it's not that, then I would guess it would have to be a kernel bug\n> because something outside userspace must be holding onto/leaking\n> handles. But I don't really understand Windows beyond trying to debug\n> PostgreSQL at a distance, so my guesses may be way off. If we wanted\n> to try to find a Windows expert to look at a standalone repro, does\n> your PS script work with *any* source directory, or is there something\n> about the initdb template, in which case could you post it in a .zip\n> file so that a non-PostgreSQL person could see the failure mode?\n>\n> [1] https://randomascii.wordpress.com/2021/07/25/finding-windows-handle-leaks-in-chromium-and-others/\n\nThat's very interesting reading. 
I'll try to research the issue that deep\nlater (though I guess this case is different — after logging off and\nlogging in as another user, I can't see any processes belonging to the\nfirst one, while those \"Event objects\" in non-paged pool still occupy\nmemory), but finding a Windows expert who perhaps can look at the\nrobocopy's sources, would be good too (and more productive).\n\nSo, the repro we can show is:\nrm -r c:\\temp\\source\nmkdir c:\\temp\\source\nfor ($i = 1; $i -le 1000; $i++)\n{\necho 1 > \"c:\\temp\\source\\$i\"\n}\n\nfor ($i = 1; $i -le 1000; $i++)\n{\necho \"iteration $i\"\nrm -r c:\\temp\\target\nrobocopy.exe /E /NJH /NFL /NDL /NP c:\\temp\\source c:\\temp\\target\nGet-WmiObject -Class Win32_PerfRawData_PerfOS_Memory | % PoolNonpagedBytes\n}\n\nIt produces for me (on Windows 10 [Version 10.0.19045.4780]):\niteration 1\n...\n216887296\n...\niteration 1000\n\n\n------------------------------------------------------------------------------\n\n                Total    Copied   Skipped  Mismatch    FAILED Extras\n     Dirs :         1         1         0         0         0 0\n    Files :      1000      1000         0         0         0 0\n    Bytes :     7.8 k     7.8 k         0         0         0 0\n    Times :   0:00:00   0:00:00                       0:00:00 0:00:00\n\n\n    Speed :               17660 Bytes/sec.\n    Speed :               1.010 MegaBytes/min.\n    Ended : Monday, September 16, 2024 8:58:09 PM\n\n365080576\n\nJust \"touch c:\\temp\\source\\$i\" is not enough, files must be non-empty for\nthe leak to happen.\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Tue, 17 Sep 2024 08:00:00 +0300", "msg_from": "Alexander Lakhin <exclusion@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Robocopy might be not robust enough for never-ending testing on\n Windows" }, { "msg_contents": "On Tue, Sep 17, 2024 at 5:00 PM Alexander Lakhin <exclusion@gmail.com> wrote:\n> but finding a Windows expert who perhaps can look at the\n> robocopy's sources, would be good too (and more productive).\n\nnumber_of_kernel_bugs_found_by_pgsql_hackers++;\n\nI am reliably informed that a kernel bug fix is in the pipeline and\nshould reach Windows 10 in ~3 months.\n\n\n", "msg_date": "Fri, 20 Sep 2024 13:57:57 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Robocopy might be not robust enough for never-ending testing on\n Windows" } ]
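A minimal sketch of the alternative Andrew raises above (copying the initdb template with the in-tree Perl module instead of shelling out to robocopy), assuming the RecursiveCopy::copypath() interface already used elsewhere in the TAP framework; the paths and the variable name are illustrative only:

    use RecursiveCopy;

    # Hypothetical replacement for the robocopy invocation in
    # PostgreSQL::Test::Cluster->init: copy the cached initdb template
    # into the new node's data directory without an external process.
    RecursiveCopy::copypath('c:\temp\initdb-template', $node_data_dir);

This would sidestep the kernel-object leak entirely, at the cost of requiring Perl for regress.exe's copy path, as Alexander notes above.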
[ { "msg_contents": "Building --with-system-tzdata and the latest tzdata 2024b fails the \nregression tests for me (see attached .diffs). This seems to be because \nof [1], which changed the way \"PST8PDT\" is handled. This is the timezone \nthat the regression tests are run with.\n\n From 2024b on, \"PST8PDT\" is the same as \"America/Los_Angeles\", so by \nchanging the regression tests to use the latter as the default, we're \ngetting consistent output on at least 2024a and 2024b.\n\nPatch attached.\n\nBest,\n\nWolfgang\n\n[1]: \nhttps://github.com/eggert/tz/commit/a0b09c0230089252acf2eb0f1ba922e99f7f4a03", "msg_date": "Sat, 14 Sep 2024 15:02:38 +0200", "msg_from": "Wolfgang Walther <walther@technowledgy.de>", "msg_from_op": true, "msg_subject": "Regression tests fail with tzdata 2024b" }, { "msg_contents": "Wolfgang Walther <walther@technowledgy.de> writes:\n> Building --with-system-tzdata and the latest tzdata 2024b fails the \n> regression tests for me (see attached .diffs). This seems to be because \n> of [1], which changed the way \"PST8PDT\" is handled. This is the timezone \n> that the regression tests are run with.\n\nThat's quite annoying, especially since it was not mentioned in the\n2024b release notes. (I had read the notes and concluded that 2024b\ndidn't require any immediate attention on our part.)\n\n> From 2024b on, \"PST8PDT\" is the same as \"America/Los_Angeles\", so by \n> changing the regression tests to use the latter as the default, we're \n> getting consistent output on at least 2024a and 2024b.\n\nI'm fairly un-thrilled with this answer, not least because it exposes\nthat zone's idiosyncratic \"LMT\" offset of -7:52:58 for years before\n1883. (I'm surprised that that seems to affect only one or two\nregression results.) Also, as a real place to a greater extent\nthan \"PST8PDT\" is, it's more subject to historical revisionism when\nsomebody turns up evidence of local law having been different than\nTZDB currently thinks.\n\nWe may not have a lot of choice though. I experimented with using\nfull POSIX notation, that is \"PST8PDT,M3.2.0,M11.1.0\", but that is\nactually worse in terms of the number of test diffs, since it doesn't\nmatch the DST transition dates that the tests expect for years before\n2007. Another objection is that the tests would then not exercise any\nof the mainstream tzdb-file-reading code paths within the timezone\ncode itself.\n\nGrumble.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 14 Sep 2024 16:37:19 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Regression tests fail with tzdata 2024b" }, { "msg_contents": "Tom Lane:\n> Also, as a real place to a greater extent\n> than \"PST8PDT\" is, it's more subject to historical revisionism when\n> somebody turns up evidence of local law having been different than\n> TZDB currently thinks.\n\nI now tried all versions of tzdata which we had in tree back to 2018g, \nthey all work fine with the same regression test output. 2018g was an \narbitrary cutoff, I just didn't try any further.\n\nIn the end, we don't need a default timezone that will never change. We \njust need one that didn't change in a reasonable number of releases \ngoing backwards. Once America/Los_Angeles is changed, we need to switch \nto a different zone, which could be one that wouldn't work today. 
Kind \nof a sliding window.\n\nOne positive might be: With this timezone, we are more likely to see \nrelevant changes mentioned in the upstream release notes.\n\nBest,\n\nWolfgang\n\n\n", "msg_date": "Sun, 15 Sep 2024 11:08:11 +0200", "msg_from": "Wolfgang Walther <walther@technowledgy.de>", "msg_from_op": true, "msg_subject": "Re: Regression tests fail with tzdata 2024b" }, { "msg_contents": "Wolfgang Walther <walther@technowledgy.de> writes:\n> Tom Lane:\n>> Also, as a real place to a greater extent\n>> than \"PST8PDT\" is, it's more subject to historical revisionism when\n>> somebody turns up evidence of local law having been different than\n>> TZDB currently thinks.\n\n> I now tried all versions of tzdata which we had in tree back to 2018g, \n> they all work fine with the same regression test output. 2018g was an \n> arbitrary cutoff, I just didn't try any further.\n\nYeah, my belly-aching above is just about hypothetical future\ninstability. In reality, I'm sure America/Los_Angeles is pretty\nwell researched and so it's unlikely that there will be unexpected\nchanges in its zone history.\n\n> In the end, we don't need a default timezone that will never change.\n\nWe do, really. For example, there's a nonzero chance the USA will\ncancel DST altogether at some future time. (This would be driven by\npoliticians who don't remember the last time, but there's no shortage\nof those.) That'd likely break some of our test results, and even if\nit happened not to, it'd still be bad because we'd probably lose some\ncoverage of the DST-transition-related code paths in src/timezone/.\nSo I'd really be way happier with a test timezone that never changes\nbut does include DST behavior. I thought PST8PDT fit those\nrequirements pretty well, and I'm annoyed at Eggert for having tossed\nit overboard for no benefit whatever. But I don't run tzdb, so\nhere we are.\n\n> We just need one that didn't change in a reasonable number of\n> releases going backwards.\n\nWe've had this sort of fire-drill before, e.g. commit 8d7af8fbe.\nIt's no fun, and the potential surface area for unexpected changes\nis now much greater than the few tests affected by that change.\n\nBut here we are, so I pushed your patch with a couple of other\ncosmetic bits. There are still a couple of references to PST8PDT\nin the tree, but they don't appear to care what the actual meaning\nof that zone is, so I left them be.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 16 Sep 2024 01:09:24 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Regression tests fail with tzdata 2024b" }, { "msg_contents": "On Mon, Sep 16, 2024 at 7:09 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Wolfgang Walther <walther@technowledgy.de> writes:\n> > Tom Lane:\n> >> Also, as a real place to a greater extent\n> >> than \"PST8PDT\" is, it's more subject to historical revisionism when\n> >> somebody turns up evidence of local law having been different than\n> >> TZDB currently thinks.\n>\n> > I now tried all versions of tzdata which we had in tree back to 2018g,\n> > they all work fine with the same regression test output. 2018g was an\n> > arbitrary cutoff, I just didn't try any further.\n>\n> Yeah, my belly-aching above is just about hypothetical future\n> instability. 
In reality, I'm sure America/Los_Angeles is pretty\n> well researched and so it's unlikely that there will be unexpected\n> changes in its zone history.\n>\n> > In the end, we don't need a default timezone that will never change.\n>\n> We do, really. For example, there's a nonzero chance the USA will\n> cancel DST altogether at some future time. (This would be driven by\n> politicians who don't remember the last time, but there's no shortage\n> of those.) That'd likely break some of our test results, and even if\n> it happened not to, it'd still be bad because we'd probably lose some\n> coverage of the DST-transition-related code paths in src/timezone/.\n> So I'd really be way happier with a test timezone that never changes\n> but does include DST behavior. I thought PST8PDT fit those\n> requirements pretty well, and I'm annoyed at Eggert for having tossed\n> it overboard for no benefit whatever. But I don't run tzdb, so\n> here we are.\n>\n> > We just need one that didn't change in a reasonable number of\n> > releases going backwards.\n>\n> We've had this sort of fire-drill before, e.g. commit 8d7af8fbe.\n> It's no fun, and the potential surface area for unexpected changes\n> is now much greater than the few tests affected by that change.\n>\n> But here we are, so I pushed your patch with a couple of other\n> cosmetic bits. There are still a couple of references to PST8PDT\n> in the tree, but they don't appear to care what the actual meaning\n> of that zone is, so I left them be.\n>\n\nThis is an unfortunate change as this will break extensions tests using\npg_regress for testing. We run our tests against multiple minor versions\nand this getting backported means our tests will fail with the next minor\npg release. Is there a workaround available to make the timezone for\npg_regress configurable without going into every test?\n\nRegards, Sven Klemm\n\nOn Mon, Sep 16, 2024 at 7:09 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:Wolfgang Walther <walther@technowledgy.de> writes:\n> Tom Lane:\n>> Also, as a real place to a greater extent\n>> than \"PST8PDT\" is, it's more subject to historical revisionism when\n>> somebody turns up evidence of local law having been different than\n>> TZDB currently thinks.\n\n> I now tried all versions of tzdata which we had in tree back to 2018g, \n> they all work fine with the same regression test output. 2018g was an \n> arbitrary cutoff, I just didn't try any further.\n\nYeah, my belly-aching above is just about hypothetical future\ninstability.  In reality, I'm sure America/Los_Angeles is pretty\nwell researched and so it's unlikely that there will be unexpected\nchanges in its zone history.\n\n> In the end, we don't need a default timezone that will never change.\n\nWe do, really.  For example, there's a nonzero chance the USA will\ncancel DST altogether at some future time.  (This would be driven by\npoliticians who don't remember the last time, but there's no shortage\nof those.)  That'd likely break some of our test results, and even if\nit happened not to, it'd still be bad because we'd probably lose some\ncoverage of the DST-transition-related code paths in src/timezone/.\nSo I'd really be way happier with a test timezone that never changes\nbut does include DST behavior.  I thought PST8PDT fit those\nrequirements pretty well, and I'm annoyed at Eggert for having tossed\nit overboard for no benefit whatever.  
But I don't run tzdb, so\nhere we are.\n\n> We just need one that didn't change in a reasonable number of\n> releases going backwards.\n\nWe've had this sort of fire-drill before, e.g. commit 8d7af8fbe.\nIt's no fun, and the potential surface area for unexpected changes\nis now much greater than the few tests affected by that change.\n\nBut here we are, so I pushed your patch with a couple of other\ncosmetic bits.  There are still a couple of references to PST8PDT\nin the tree, but they don't appear to care what the actual meaning\nof that zone is, so I left them be.This is an unfortunate change as this will break extensions tests using pg_regress for testing. We run our tests against multiple minor versions and this getting backported means our tests will fail with the next minor pg release. Is there a workaround available to make the timezone for pg_regress configurable without going into every test?Regards, Sven Klemm", "msg_date": "Mon, 16 Sep 2024 10:33:56 +0200", "msg_from": "Sven Klemm <sven@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Regression tests fail with tzdata 2024b" }, { "msg_contents": "Sven Klemm <sven@timescale.com> writes:\n> This is an unfortunate change as this will break extensions tests using\n> pg_regress for testing. We run our tests against multiple minor versions\n> and this getting backported means our tests will fail with the next minor\n> pg release. Is there a workaround available to make the timezone for\n> pg_regress configurable without going into every test?\n\nConfigurable to what? If your test cases are dependent on the\nhistorical behavior of PST8PDT, you're out of luck, because that\nsimply isn't available anymore (or won't be once 2024b reaches\nyour platform, anyway).\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 16 Sep 2024 11:19:42 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Regression tests fail with tzdata 2024b" }, { "msg_contents": "On Mon, Sep 16, 2024 at 5:19 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Configurable to what? If your test cases are dependent on the\n> historical behavior of PST8PDT, you're out of luck, because that\n> simply isn't available anymore (or won't be once 2024b reaches\n> your platform, anyway).\n>\n\nI was wondering whether the timezone used by pg_regress could be made\nconfigurable.\n\n-- \nRegards, Sven Klemm\n\nOn Mon, Sep 16, 2024 at 5:19 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\nConfigurable to what?  If your test cases are dependent on the\nhistorical behavior of PST8PDT, you're out of luck, because that\nsimply isn't available anymore (or won't be once 2024b reaches\nyour platform, anyway).I was wondering whether the timezone used by pg_regress could be made configurable.-- Regards, Sven Klemm", "msg_date": "Tue, 17 Sep 2024 07:38:15 +0200", "msg_from": "Sven Klemm <sven@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Regression tests fail with tzdata 2024b" }, { "msg_contents": "Sven Klemm <sven@timescale.com> writes:\n> On Mon, Sep 16, 2024 at 5:19 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Configurable to what? If your test cases are dependent on the\n>> historical behavior of PST8PDT, you're out of luck, because that\n>> simply isn't available anymore (or won't be once 2024b reaches\n>> your platform, anyway).\n\n> I was wondering whether the timezone used by pg_regress could be made\n> configurable.\n\nYes, I understood that you were suggesting that. 
My point is that\nit wouldn't do you any good: you will still have to change any\nregression test cases that depend on behavior PST8PDT has/had that\nis different from America/Los_Angeles. That being the case,\nI don't see much value in making it configurable.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 17 Sep 2024 01:42:35 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Regression tests fail with tzdata 2024b" }, { "msg_contents": "Tom Lane:\n>> I was wondering whether the timezone used by pg_regress could be made\n>> configurable.\n> \n> Yes, I understood that you were suggesting that. My point is that\n> it wouldn't do you any good: you will still have to change any\n> regression test cases that depend on behavior PST8PDT has/had that\n> is different from America/Los_Angeles. That being the case,\n> I don't see much value in making it configurable.\n\nJust changing it back to PST8PDT wouldn't really help as Tom pointed \nout. You'd still get different results depending on which tzdata version \nyou are running with.\n\nThe core regression tests need to be run with a timezone that tests \nspecial cases in the timezone handling code. But that might not be true \nfor extensions - all they want could be a stable output across major and \nminor versions of postgres and versions of tzdata. It could be helpful \nto set pg_regress' timezone to UTC, for example?\n\nBest,\n\nWolfgang\n\n\n", "msg_date": "Tue, 17 Sep 2024 08:39:53 +0200", "msg_from": "Wolfgang Walther <walther@technowledgy.de>", "msg_from_op": true, "msg_subject": "Re: Regression tests fail with tzdata 2024b" }, { "msg_contents": "Wolfgang Walther <walther@technowledgy.de> writes:\n> The core regression tests need to be run with a timezone that tests \n> special cases in the timezone handling code. But that might not be true \n> for extensions - all they want could be a stable output across major and \n> minor versions of postgres and versions of tzdata. It could be helpful \n> to set pg_regress' timezone to UTC, for example?\n\nI would not recommend that choice. It would mask simple errors such\nas failing to apply the correct conversion between timestamptz and\ntimestamp values. Also, if you have test cases that are affected by\nthis issue at all, you probably have a need/desire to test things like\nDST transitions.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 17 Sep 2024 10:15:25 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Regression tests fail with tzdata 2024b" }, { "msg_contents": "On Tue, Sep 17, 2024 at 4:15 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Wolfgang Walther <walther@technowledgy.de> writes:\n> > The core regression tests need to be run with a timezone that tests\n> > special cases in the timezone handling code. But that might not be true\n> > for extensions - all they want could be a stable output across major and\n> > minor versions of postgres and versions of tzdata. It could be helpful\n> > to set pg_regress' timezone to UTC, for example?\n>\n> I would not recommend that choice. It would mask simple errors such\n> as failing to apply the correct conversion between timestamptz and\n> timestamp values. Also, if you have test cases that are affected by\n> this issue at all, you probably have a need/desire to test things like\n> DST transitions.\n>\n\nAs far as I'm aware timescaledb does not rely on specifics of tzdata\nversion but just needs a stable setting for timezone. 
I guess I'll adjust\nour tests to not depend on upstream pg_regress timezone.\n\n-- \nRegards, Sven Klemm\n\nOn Tue, Sep 17, 2024 at 4:15 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:Wolfgang Walther <walther@technowledgy.de> writes:\n> The core regression tests need to be run with a timezone that tests \n> special cases in the timezone handling code. But that might not be true \n> for extensions - all they want could be a stable output across major and \n> minor versions of postgres and versions of tzdata. It could be helpful \n> to set pg_regress' timezone to UTC, for example?\n\nI would not recommend that choice.  It would mask simple errors such\nas failing to apply the correct conversion between timestamptz and\ntimestamp values.  Also, if you have test cases that are affected by\nthis issue at all, you probably have a need/desire to test things like\nDST transitions.As far as I'm aware timescaledb does not rely on specifics of tzdata version but just needs a stable setting for timezone. I guess I'll adjust our tests to not depend on upstream pg_regress timezone. -- Regards, Sven Klemm", "msg_date": "Wed, 18 Sep 2024 08:04:03 +0200", "msg_from": "Sven Klemm <sven@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Regression tests fail with tzdata 2024b" } ]
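For extension test suites in Sven's position, the least invasive fix is to stop relying on pg_regress's built-in default altogether and pin the zone explicitly. A minimal sketch; any IANA zone with the DST behaviour the tests need works, America/Los_Angeles is shown only because it matches the new core default:

    -- at the top of each timezone-sensitive regression script,
    -- so expected output no longer depends on pg_regress's default
    SET timezone = 'America/Los_Angeles';

    SELECT '2024-07-01 12:00'::timestamptz;   -- stable: 2024-07-01 12:00:00-07

The same effect can be had once per test database with ALTER DATABASE ... SET timezone, avoiding per-file edits.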
[ { "msg_contents": "I noticed while working on bug #18617 [1] that we are fairly slipshod\nabout which SQLSTATE to report when libxml2 returns an error. There\nare places using ERRCODE_INTERNAL_ERROR for easily-triggered errors;\nthere are different places that use different ERRCODEs for exactly\nthe same condition; and there are places that use different ERRCODEs\nfor failures from xmlXPathCompile and xmlXPathCompiledEval. I found\nthat this last is problematic because some errors you might expect\nto be reported during XPath compilation are not detected till\nexecution, notably namespace-related errors. That seems more like\na libxml2 implementation artifact than something we should expect to\nbe stable behavior, so I think we should avoid using different\nERRCODEs.\n\nA lot of this can be blamed on there not being any especially on-point\nSQLSTATE values back when this code was first written. I learned that\nrecent revisions of SQL have a whole new SQLSTATE class, class 10 =\n\"XQuery Error\", so we have an opportunity to sync up with that as well\nas be more self-consistent. The spec's subclass codes in this class\nseem quite fine-grained. It might be an interesting exercise to try\nto teach xml_errorHandler() to translate libxml2's error->code values\ninto fine-grained SQLSTATEs, but I've not attempted that; I'm not\nsure whether there is a close mapping between what libxml2 reports\nand the set of codes the SQL committee chose. What I've done in the\nattached first-draft patch is just to select one relatively generic\ncode in class 10, 10608 = invalid_argument_for_xquery, and use that\nwhere it seemed apropos.\n\nThis is pretty low-priority, so I'll stash it in the next CF.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/356363.1726333674%40sss.pgh.pa.us", "msg_date": "Sat, 14 Sep 2024 15:14:28 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Cleaning up ERRCODE usage in our XML code" }, { "msg_contents": "I wrote:\n> [ v1-clean-up-errcodes-for-xml.patch ]\n\nPer cfbot, rebased over d5622acb3. No functional changes.\n\n\t\t\tregards, tom lane", "msg_date": "Fri, 20 Sep 2024 15:00:42 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Cleaning up ERRCODE usage in our XML code" }, { "msg_contents": "> On 20 Sep 2024, at 21:00, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> I wrote:\n>> [ v1-clean-up-errcodes-for-xml.patch ]\n> \n> Per cfbot, rebased over d5622acb3. No functional changes.\n\nLooking over these I don't see anything mis-characterized so +1 on going ahead\nwith these. It would be neat if we end up translating xml2 errors into XQuery\nError SQLSTATEs but this is a clear improvement over what we have until then.\n\nThere is an ERRCODE_INTERNAL_ERROR in xml_out_internal() which seems a tad odd\ngiven that any error would be known to be parsing related and b) are caused by\nlibxml and not internally. Not sure if it's worth bothering with but with the\nother ones improved it stood out.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Mon, 23 Sep 2024 13:19:17 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Cleaning up ERRCODE usage in our XML code" }, { "msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n>> On 20 Sep 2024, at 21:00, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Per cfbot, rebased over d5622acb3. 
No functional changes.\n\n> Looking over these I don't see anything mis-characterized so +1 on going ahead\n> with these. It would be neat if we end up translating xml2 errors into XQuery\n> Error SQLSTATEs but this is a clear improvement over what we have until then.\n\nThanks for looking at it!\n\n> There is an ERRCODE_INTERNAL_ERROR in xml_out_internal() which seems a tad odd\n> given that any error would be known to be parsing related and b) are caused by\n> libxml and not internally. Not sure if it's worth bothering with but with the\n> other ones improved it stood out.\n\nYeah, I looked at that but wasn't sure what to do with it. We should\nhave validated the decl header when the XML value was created, so if\nwe get here then either the value got corrupted on-disk or in-transit,\nor something forgot to do that validation, or libxml has changed its\nmind since then about what's a valid decl. At least some of those\ncases are probably legitimately called INTERNAL_ERROR. I thought for\nawhile about ERRCODE_DATA_CORRUPTED, but I'm not convinced that's a\nbetter fit.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 23 Sep 2024 13:17:02 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Cleaning up ERRCODE usage in our XML code" }, { "msg_contents": "> On 23 Sep 2024, at 19:17, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Daniel Gustafsson <daniel@yesql.se> writes:\n> \n>> There is an ERRCODE_INTERNAL_ERROR in xml_out_internal() which seems a tad odd\n>> given that any error would be known to be parsing related and b) are caused by\n>> libxml and not internally. Not sure if it's worth bothering with but with the\n>> other ones improved it stood out.\n> \n> Yeah, I looked at that but wasn't sure what to do with it. We should\n> have validated the decl header when the XML value was created, so if\n> we get here then either the value got corrupted on-disk or in-transit,\n> or something forgot to do that validation, or libxml has changed its\n> mind since then about what's a valid decl. At least some of those\n> cases are probably legitimately called INTERNAL_ERROR. I thought for\n> awhile about ERRCODE_DATA_CORRUPTED, but I'm not convinced that's a\n> better fit.\n\nI agree that it might not be an obvious better fit, but also not an obvious\nworse fit. It will make it easier to filter on during fleet analysis so I\nwould be inclined to change it, but the main value of the patch are other hunks\nso feel free to ignore.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Tue, 24 Sep 2024 08:48:06 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Cleaning up ERRCODE usage in our XML code" }, { "msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n> On 23 Sep 2024, at 19:17, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Yeah, I looked at that but wasn't sure what to do with it. We should\n>> have validated the decl header when the XML value was created, so if\n>> we get here then either the value got corrupted on-disk or in-transit,\n>> or something forgot to do that validation, or libxml has changed its\n>> mind since then about what's a valid decl. At least some of those\n>> cases are probably legitimately called INTERNAL_ERROR. I thought for\n>> awhile about ERRCODE_DATA_CORRUPTED, but I'm not convinced that's a\n>> better fit.\n\n> I agree that it might not be an obvious better fit, but also not an obvious\n> worse fit. 
It will make it easier to filter on during fleet analysis so I\n> would be inclined to change it, but the main value of the patch are other hunks\n> so feel free to ignore.\n\nFair enough. Pushed with ERRCODE_DATA_CORRUPTED used there.\nThanks again for reviewing.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 24 Sep 2024 13:01:04 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Cleaning up ERRCODE usage in our XML code" } ]
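For readers following the thread, the net effect on libxml2-failure call sites is roughly this shape. This is a sketch only, assuming the errcodes.h macro for SQLSTATE 10608 is spelled ERRCODE_INVALID_ARGUMENT_FOR_XQUERY as in existing xpath code, and the variable and message text are illustrative:

    /*
     * libxml2 failed to compile or evaluate an XPath expression: report it
     * in SQLSTATE class 10 rather than as XX000 (internal_error).
     */
    if (xpathcomp == NULL)
        ereport(ERROR,
                (errcode(ERRCODE_INVALID_ARGUMENT_FOR_XQUERY),
                 errmsg("invalid XPath expression")));

Genuinely not-supposed-to-happen states (for example the corrupted decl header case in xml_out_internal discussed above) keep ERRCODE_INTERNAL_ERROR or, per the final commit, ERRCODE_DATA_CORRUPTED.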
[ { "msg_contents": "Hi,\n\nWhen working with jsonb/jsonpath, I’ve always wanted more fluent string operations for data cleaning.\nIdeally, chaining text processing methods together.\n\nHere’s a draft patch that adds a string replace method.\n\nIt works like this\nselect jsonb_path_query('\"hello world\"', '$.replace(\"hello\",\"bye\")');\n jsonb_path_query \n------------------\n \"bye world\"\n(1 row)\nAnd looks like plays nicely with existing facilities \n\nselect jsonb_path_query('\"hello world\"', '$.replace(\"hello\",\"bye\") starts with \"bye\"');\n jsonb_path_query \n------------------\n true\n(1 row)\nI’d like some initial feedback on whether this is of interested before I proceed with the following:\n- I’ve tried respecting the surrounding code, but a more experienced eye can spot some inconsistencies. I’ll revisit those\n- Add more test cases (I’ve added the obvious ones, but ideas on more cases are welcome).\n- pg upgrades currently fail on CI (see <https://github.com/Florents-Tselai/postgres/pull/3/checks?check_run_id=30154989488>)\n- better error handling/reporting: I’ve kept the wording of error messages, but we’ll need something akin to ERRCODE_INVALID_ARGUMENT_FOR_SQL_JSON_DATETIME_FUNCTION.\n- documentation.\n\n", "msg_date": "Sun, 15 Sep 2024 04:15:12 +0300", "msg_from": "Florents Tselai <florents.tselai@gmail.com>", "msg_from_op": true, "msg_subject": "[PATCH] WIP: replace method for jsonpath" }, { "msg_contents": "Here’s an updated version of this patch.\nThe previous version failed several CI steps; this passes them all.\n\nUnless someone disagrees,\nI’ll proceed with the documentation and add this for the next CF.\n\n\nAs a future note:\nIt’s worth noting that both this newly added jspItem and other ones like (jpiDecimal, jpiString)\nuse jspGetRightArg and jspGetLeftArg.\nleft and right can be confusing if more complex methods are added in the future.\ni.e. jsonpath methods with nargs>=3 .\nI was wondering if we’d like something like JSP_GETARG(n)\n\n\nGitHub PR in case you prefer it’s UI https://github.com/Florents-Tselai/postgres/pull/3 \n\n\n\n\n\n\n> On 15 Sep 2024, at 4:15 AM, Florents Tselai <florents.tselai@gmail.com> wrote:\n> \n> Hi,\n> \n> When working with jsonb/jsonpath, I’ve always wanted more fluent string operations for data cleaning.\n> Ideally, chaining text processing methods together.\n> \n> Here’s a draft patch that adds a string replace method.\n> \n> It works like this\n> select jsonb_path_query('\"hello world\"', '$.replace(\"hello\",\"bye\")');\n> jsonb_path_query \n> ------------------\n> \"bye world\"\n> (1 row)\n> And looks like plays nicely with existing facilities \n> \n> select jsonb_path_query('\"hello world\"', '$.replace(\"hello\",\"bye\") starts with \"bye\"');\n> jsonb_path_query \n> ------------------\n> true\n> (1 row)\n> I’d like some initial feedback on whether this is of interested before I proceed with the following:\n> - I’ve tried respecting the surrounding code, but a more experienced eye can spot some inconsistencies. 
I’ll revisit those\n> - Add more test cases (I’ve added the obvious ones, but ideas on more cases are welcome).\n> - pg upgrades currently fail on CI (see <https://github.com/Florents-Tselai/postgres/pull/3/checks?check_run_id=30154989488>)\n> - better error handling/reporting: I’ve kept the wording of error messages, but we’ll need something akin to ERRCODE_INVALID_ARGUMENT_FOR_SQL_JSON_DATETIME_FUNCTION.\n> - documentation.\n> \n> <v1-0001-jsonpath-replace-method.patch>", "msg_date": "Tue, 17 Sep 2024 01:39:08 +0300", "msg_from": "Florents Tselai <florents.tselai@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] WIP: replace method for jsonpath" }, { "msg_contents": "On Sep 16, 2024, at 18:39, Florents Tselai <florents.tselai@gmail.com> wrote:\n\n> Here’s an updated version of this patch.\n\nOh, nice function.\n\nBut a broader question for hackers: Is replace() specified in the SQL/JSON spec? If not, what’s the process for evaluating whether or not to add features not specified by the spec?\n\n> As a future note:\n> It’s worth noting that both this newly added jspItem and other ones like (jpiDecimal, jpiString)\n> use jspGetRightArg and jspGetLeftArg.\n> left and right can be confusing if more complex methods are added in the future.\n> i.e. jsonpath methods with nargs>=3 .\n> I was wondering if we’d like something like JSP_GETARG(n)\n\nSo far I think we have only functions defined by the spec, which tend to be unary or binary, so left and right haven’t been an issue.\n\nBest,\n\nDavid\n\n\n\n", "msg_date": "Tue, 17 Sep 2024 14:40:57 -0400", "msg_from": "\"David E. Wheeler\" <david@justatheory.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] WIP: replace method for jsonpath" }, { "msg_contents": "\n\n> On 17 Sep 2024, at 9:40 PM, David E. Wheeler <david@justatheory.com> wrote:\n> \n> On Sep 16, 2024, at 18:39, Florents Tselai <florents.tselai@gmail.com> wrote:\n> \n>> Here’s an updated version of this patch.\n> \n> Oh, nice function.\n> \n> But a broader question for hackers: Is replace() specified in the SQL/JSON spec? If not, what’s the process for evaluating whether or not to add features not specified by the spec?\n\nThat’s the first argument I was expecting, and it’s a valid one.\n\nFrom a user’s perspective the answer is:\nWhy not?\nThe more text-processing facilities I have in jsonb,\nThe less back-and-forth-parentheses-fu I do,\nThe easier my life is.\n\nFrom a PG gatekeeping it’s a more complicated issue. \n\nIt’s not part of the spec, \nBut I think the jsonb infrastructure in PG is really powerful already and we can built on it,\nAnd can evolve into a superset DSL of jsonpath.\n\nFor example, apache/age have lift-and-shifted this infra and built another DSL (cypher) on top of it.\n\n> \n>> As a future note:\n>> It’s worth noting that both this newly added jspItem and other ones like (jpiDecimal, jpiString)\n>> use jspGetRightArg and jspGetLeftArg.\n>> left and right can be confusing if more complex methods are added in the future.\n>> i.e. 
jsonpath methods with nargs>=3 .\n>> I was wondering if we’d like something like JSP_GETARG(n)\n> \n> So far I think we have only functions defined by the spec, which tend to be unary or binary, so left and right haven’t been an issue.\n\nIf the answer to the Q above is: “we stick to the spec” then there’s no thinking about this.\n\nBut tbh, I’ve already started experimenting with other text methods in text $.strip() / trim() / upper() / lower() etc.\n\nFallback scenario: make this an extension, but in a first pass I didn’t find any convenient hooks.\nOne has to create a whole new scanner, grammar etc.\n\n> \n> Best,\n> \n> David\n> \n\n\n\n", "msg_date": "Tue, 17 Sep 2024 22:03:17 +0300", "msg_from": "Florents Tselai <florents.tselai@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] WIP: replace method for jsonpath" }, { "msg_contents": "On Sep 17, 2024, at 15:03, Florents Tselai <florents.tselai@gmail.com> wrote:\n\n> Fallback scenario: make this an extension, but in a first pass I didn’t find any convenient hooks.\n> One has to create a whole new scanner, grammar etc.\n\nYeah, it got me thinking about the RFC-9535 JSONPath \"Function Extension\" feature[1], which allows users to add functions. Would be cool to have a way to register jsonpath functions somehow, but I would imagine it’d need quite a bit of specification similar to RFC-9535. Wouldn’t surprise me to see something like that appear in a future version of the spec, with an interface something like CREATE OPERATOR.\n\nI don’t have a strong feeling about what should be added that’s not in the spec; my main interest is not having to constantly sync my port[2]. I’m already behind, and’t just been a couple months! 😂\n\nBest,\n\nDavid\n\n[1]: https://www.rfc-editor.org/rfc/rfc9535.html#name-function-extensions\n[2]: https://github.com/theory/sqljson\n\n\n\n", "msg_date": "Tue, 17 Sep 2024 15:16:57 -0400", "msg_from": "\"David E. Wheeler\" <david@justatheory.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] WIP: replace method for jsonpath" }, { "msg_contents": "On 17.09.24 21:16, David E. Wheeler wrote:\n> On Sep 17, 2024, at 15:03, Florents Tselai <florents.tselai@gmail.com> wrote:\n> \n>> Fallback scenario: make this an extension, but in a first pass I didn’t find any convenient hooks.\n>> One has to create a whole new scanner, grammar etc.\n> \n> Yeah, it got me thinking about the RFC-9535 JSONPath \"Function Extension\" feature[1], which allows users to add functions. Would be cool to have a way to register jsonpath functions somehow, but I would imagine it’d need quite a bit of specification similar to RFC-9535. Wouldn’t surprise me to see something like that appear in a future version of the spec, with an interface something like CREATE OPERATOR.\n\nWhy can't we \"just\" call any suitable pg_proc-registered function from \nJSON path? The proposed patch routes the example \n'$.replace(\"hello\",\"bye\")' internally to the internal implementation of \nthe SQL function replace(..., 'hello', 'bye'). Why can't we do this \nautomatically for any function call in a JSON path expression?\n\n\n\n", "msg_date": "Wed, 18 Sep 2024 10:23:22 +0200", "msg_from": "Peter Eisentraut <peter@eisentraut.org>", "msg_from_op": false, "msg_subject": "Re: [PATCH] WIP: replace method for jsonpath" }, { "msg_contents": "> On 18 Sep 2024, at 11:23 AM, Peter Eisentraut <peter@eisentraut.org> wrote:\n> \n> On 17.09.24 21:16, David E. 
Wheeler wrote:\n>> On Sep 17, 2024, at 15:03, Florents Tselai <florents.tselai@gmail.com> wrote:\n>>> Fallback scenario: make this an extension, but in a first pass I didn’t find any convenient hooks.\n>>> One has to create a whole new scanner, grammar etc.\n>> Yeah, it got me thinking about the RFC-9535 JSONPath \"Function Extension\" feature[1], which allows users to add functions. Would be cool to have a way to register jsonpath functions somehow, but I would imagine it’d need quite a bit of specification similar to RFC-9535. Wouldn’t surprise me to see something like that appear in a future version of the spec, with an interface something like CREATE OPERATOR.\n> \n> Why can't we \"just\" call any suitable pg_proc-registered function from JSON path? The proposed patch routes the example '$.replace(\"hello\",\"bye\")' internally to the internal implementation of the SQL function replace(..., 'hello', 'bye'). Why can't we do this automatically for any function call in a JSON path expression?\n> \n\n\nWell, we can.\nA couple of weeks ago, I discovered transform_jsonb_string_values, which is already available in jsonfuncs.h\nand that gave me the idea for this extension https://github.com/Florents-Tselai/jsonb_apply \n\nIt does exactly what you’re saying: searches for a suitable pg_proc in the catalog, and directly applies it.\n\nselect jsonb_apply('{\n \"id\": 1,\n \"name\": \"John\",\n \"messages\": [\n \"hello\"\n ]\n}', 'replace', 'hello', 'bye’);\n\nselect jsonb_filter_apply('{\n \"id\": 1,\n \"name\": \"John\",\n \"messages\": [\n \"hello\"\n ]\n}', '{messages}', 'md5’);\n\nBut, I don't know… jsonb_apply? That seemed “too fancy”/LISPy for standard Postgres.\n\nNow that you mention it, though, there’s an alternative of tweaking the grammar and calling the suitable text proc.\nOn 18 Sep 2024, at 11:23 AM, Peter Eisentraut <peter@eisentraut.org> wrote:On 17.09.24 21:16, David E. Wheeler wrote:On Sep 17, 2024, at 15:03, Florents Tselai <florents.tselai@gmail.com> wrote:Fallback scenario: make this an extension, but in a first pass I didn’t find any convenient hooks.One has to create a whole new scanner, grammar etc.Yeah, it got me thinking about the RFC-9535 JSONPath \"Function Extension\" feature[1], which allows users to add functions. Would be cool to have a way to register jsonpath functions somehow, but I would imagine it’d need quite a bit of specification similar to RFC-9535. Wouldn’t surprise me to see something like that appear in a future version of the spec, with an interface something like CREATE OPERATOR.Why can't we \"just\" call any suitable pg_proc-registered function from JSON path?  The proposed patch routes the example '$.replace(\"hello\",\"bye\")' internally to the internal implementation of the SQL function replace(..., 'hello', 'bye').  Why can't we do this automatically for any function call in a JSON path expression?Well, we can.A couple of weeks ago, I discovered transform_jsonb_string_values, which is already available in  jsonfuncs.hand that gave me the idea for this extension https://github.com/Florents-Tselai/jsonb_apply It does exactly what you’re saying: searches for a suitable pg_proc in the catalog, and directly applies it.select jsonb_apply('{  \"id\": 1,  \"name\": \"John\",  \"messages\": [    \"hello\"  ]}', 'replace', 'hello', 'bye’);select jsonb_filter_apply('{  \"id\": 1,  \"name\": \"John\",  \"messages\": [    \"hello\"  ]}', '{messages}', 'md5’);But, I don't know… jsonb_apply? 
That seemed “too fancy”/LISPy for standard Postgres.Now that you mention it, though, there’s an alternative of tweaking the grammar and calling the suitable text proc.", "msg_date": "Wed, 18 Sep 2024 11:39:25 +0300", "msg_from": "Florents Tselai <florents.tselai@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] WIP: replace method for jsonpath" }, { "msg_contents": "\nOn 2024-09-18 We 4:23 AM, Peter Eisentraut wrote:\n> On 17.09.24 21:16, David E. Wheeler wrote:\n>> On Sep 17, 2024, at 15:03, Florents Tselai \n>> <florents.tselai@gmail.com> wrote:\n>>\n>>> Fallback scenario: make this an extension, but in a first pass I \n>>> didn’t find any convenient hooks.\n>>> One has to create a whole new scanner, grammar etc.\n>>\n>> Yeah, it got me thinking about the RFC-9535 JSONPath \"Function \n>> Extension\" feature[1], which allows users to add functions. Would be \n>> cool to have a way to register jsonpath functions somehow, but I \n>> would imagine it’d need quite a bit of specification similar to \n>> RFC-9535. Wouldn’t surprise me to see something like that appear in a \n>> future version of the spec, with an interface something like CREATE \n>> OPERATOR.\n>\n> Why can't we \"just\" call any suitable pg_proc-registered function from \n> JSON path?  The proposed patch routes the example \n> '$.replace(\"hello\",\"bye\")' internally to the internal implementation \n> of the SQL function replace(..., 'hello', 'bye'). Why can't we do this \n> automatically for any function call in a JSON path expression?\n>\n>\n>\n\nThat might work. The thing that bothers me about the original proposal \nis this: what if we add a new non-spec jsonpath method and then a new \nversion of the spec adds a method with the same name, but not compatible \nwith our method? We'll be in a nasty place. At the very least I think we \nneed to try hard to avoid that. Maybe we should prefix non-spec method \nnames with \"pg_\", or maybe use an initial capital letter.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Wed, 18 Sep 2024 08:47:21 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: [PATCH] WIP: replace method for jsonpath" }, { "msg_contents": "On 18 Sep 2024, at 3:47 PM, Andrew Dunstan <andrew@dunslane.net> wrote:\n\n\nOn 2024-09-18 We 4:23 AM, Peter Eisentraut wrote:\n\nOn 17.09.24 21:16, David E. Wheeler wrote:\n\nOn Sep 17, 2024, at 15:03, Florents Tselai <florents.tselai@gmail.com>\nwrote:\n\nFallback scenario: make this an extension, but in a first pass I didn’t\nfind any convenient hooks.\nOne has to create a whole new scanner, grammar etc.\n\n\nYeah, it got me thinking about the RFC-9535 JSONPath \"Function Extension\"\nfeature[1], which allows users to add functions. Would be cool to have a\nway to register jsonpath functions somehow, but I would imagine it’d need\nquite a bit of specification similar to RFC-9535. Wouldn’t surprise me to\nsee something like that appear in a future version of the spec, with an\ninterface something like CREATE OPERATOR.\n\n\nWhy can't we \"just\" call any suitable pg_proc-registered function from JSON\npath? The proposed patch routes the example '$.replace(\"hello\",\"bye\")'\ninternally to the internal implementation of the SQL function replace(...,\n'hello', 'bye'). Why can't we do this automatically for any function call\nin a JSON path expression?\n\n\n\n\nThat might work. 
The thing that bothers me about the original proposal is\nthis: what if we add a new non-spec jsonpath method and then a new version\nof the spec adds a method with the same name, but not compatible with our\nmethod? We'll be in a nasty place. At the very least I think we need to try\nhard to avoid that. Maybe we should prefix non-spec method names with\n\"pg_\", or maybe use an initial capital letter.\n\n\nIf naming is your main reservation, then I take it you’re generally\npositive.\n\nHaving said that, “pg_” is probably too long for a jsonpath expression,\nMost importantly though, “pg_” in my mind is a prefix for things like\ncatalog lookup and system monitoring.\nNot a function that the average user would use.\nThus, I lean towards initial-capital.\n\nThe more general case would look like:\nA new jsonpath item of the format $.Func(arg1, …, argn) can be applied\n(recursively or not) to a json object.\n\nAs a first iteration/version only pg_proc-registered functions of the\nformat func(text, ...,) -> text are available.\nWe can focus on oid(arg0) = TEXTOID and rettype = TEXTOID fist.\nThe first arg0 has to be TEXTOID (the current json string) while subsequent\nargs are provided by the user\nin the jsonpath expression.\n\nThe functions we want to support will be part of jsonpath grammar\nand during execution we'll have enough info to find the appropriate\nPGFunction to call.\n\nWhat I'm missing yet is how we could handle vars jsonb,\nin case the user doesn't want to just hardcode the actual function\narguments.\nThen resolving argtypes1...n is a bit more complex:\n\nThe signature for jsonb_apply(doc jsonb, func text[, variadic \"any\"\nargs1_n]); [0]\nHere, variadic \"any\" works beautifully, but that's a brand-new function.\n\nIn existing jsonb{path} facilities, vars are jsonb objects which could work\nas well I think.\nUnless I'm missing something.\n\n[0] https://github.com/Florents-Tselai/jsonb_apply\n\n\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\nOn 18 Sep 2024, at 3:47 PM, Andrew Dunstan <andrew@dunslane.net> wrote:On 2024-09-18 We 4:23 AM, Peter Eisentraut wrote:On 17.09.24 21:16, David E. Wheeler wrote:On Sep 17, 2024, at 15:03, Florents Tselai <florents.tselai@gmail.com> wrote:Fallback scenario: make this an extension, but in a first pass I didn’t find any convenient hooks.One has to create a whole new scanner, grammar etc.Yeah, it got me thinking about the RFC-9535 JSONPath \"Function Extension\" feature[1], which allows users to add functions. Would be cool to have a way to register jsonpath functions somehow, but I would imagine it’d need quite a bit of specification similar to RFC-9535. Wouldn’t surprise me to see something like that appear in a future version of the spec, with an interface something like CREATE OPERATOR.Why can't we \"just\" call any suitable pg_proc-registered function from JSON path?  The proposed patch routes the example '$.replace(\"hello\",\"bye\")' internally to the internal implementation of the SQL function replace(..., 'hello', 'bye'). Why can't we do this automatically for any function call in a JSON path expression?That might work. The thing that bothers me about the original proposal is this: what if we add a new non-spec jsonpath method and then a new version of the spec adds a method with the same name, but not compatible with our method? We'll be in a nasty place. At the very least I think we need to try hard to avoid that. 
Maybe we should prefix non-spec method names with \"pg_\", or maybe use an initial capital letter.If naming is your main reservation, then I take it you’re generally positive.Having said that, “pg_” is probably too long for a jsonpath expression, Most importantly though, “pg_” in my mind is a prefix for things like catalog lookup and system monitoring.Not a function that the average user would use. Thus, I lean towards initial-capital.The more general case would look like:A new jsonpath item of the format $.Func(arg1, …, argn) can be applied (recursively or not) to a json object.As a first iteration/version only pg_proc-registered functions of the format func(text, ...,) -> text are available.We can focus on oid(arg0) = TEXTOID and rettype = TEXTOID fist.The first arg0 has to be TEXTOID (the current json string) while subsequent args are provided by the userin the jsonpath expression.The functions we want to support will be part of jsonpath grammarand during execution we'll have enough info to find the appropriate PGFunction to call.What I'm missing yet is how we could handle vars jsonb,in case the user doesn't want to just hardcode the actual function arguments.Then resolving argtypes1...n is a bit more complex:The signature for jsonb_apply(doc jsonb, func text[, variadic \"any\" args1_n]); [0]Here, variadic \"any\" works beautifully, but that's a brand-new function.In existing jsonb{path} facilities, vars are jsonb objects which could work as well I think.Unless I'm missing something.[0] https://github.com/Florents-Tselai/jsonb_apply cheersandrew--Andrew DunstanEDB: https://www.enterprisedb.com", "msg_date": "Thu, 19 Sep 2024 15:57:00 +0300", "msg_from": "Florents Tselai <florents.tselai@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] WIP: replace method for jsonpath" } ]
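Until a replace() string method (or a general mechanism for calling pg_proc-registered functions from jsonpath) exists, the behaviour discussed in the thread above can be approximated in plain SQL by unwrapping the jsonb string, applying the ordinary text function, and wrapping the result back up. A minimal sketch, for a top-level jsonb string only; the value and the search/replace strings are illustrative:

-- unwrap with #>> '{}', use the regular text function, re-wrap with to_jsonb()
SELECT to_jsonb(replace(j #>> '{}', 'hello', 'bye'))
FROM (VALUES ('"hello world"'::jsonb)) AS v(j);
-- "bye world"

Strings nested inside objects or arrays need jsonb_set() or a full rebuild of the container, which is essentially the "parentheses-fu" the proposal is trying to avoid.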
[ { "msg_contents": "Hi,\n\nI've been doing a lot of tests under valgrind lately, and it made me\nacutely aware of how long check-world takes. I realize valgrind is\ninherently expensive and slow, and maybe the reasonable reply is to just\nrun a couple tests that are \"interesting\" for a patch ...\n\nAnyway, I did a simple experiment - I ran check-world with timing info\nfor TAP tests, both with and without valgrind, and if I plot the results\nI get the attached charts (same data, second one has log-scale axes).\n\nThe basic rule is that valgrind means a very consistent 100x slowdown. I\nguess it might vary a bit depending on compile flags, but not much.\n\nBut there are two tests very clearly stand out - not by slowdown, that's\nperfectly in line with the 100x figure - but by total duration. I've\nlabeled them on the linear-scale chart.\n\nIt's 002_pg_upgrade and 027_stream_regress. I guess the reasons for the\nslowness are pretty clear - those are massive tests. pg_upgrade creates,\ndumps and restores many objects (thousands?), stream_regress runs the\nwhole regress test suite on primary, and cross-checks what gets\nreplicated to standby. So it's expected to be somewhat expected.\n\nStill, I wonder if there might be faster way to do these tests, because\nthese two tests alone add close to 3h of the valgrind run. Of course,\nit's not just about valgrind - these tests are slow even in regular\nruns, taking almost a minute each, but it's a different scale (minutes\ninstead of hours). Would be nice to speed it up too, though.\n\nI don't have a great idea how to speed up these tests, unfortunately.\nBut one of the problems is that all the TAP tests run serially - one\nafter each other. Could we instead run them in parallel? The tests setup\ntheir \"private\" clusters anyway, right?\n\nregards\n\n-- \nTomas Vondra", "msg_date": "Sun, 15 Sep 2024 20:20:01 +0200", "msg_from": "Tomas Vondra <tomas@vondra.me>", "msg_from_op": true, "msg_subject": "how to speed up 002_pg_upgrade.pl and 025_stream_regress.pl under\n valgrind" }, { "msg_contents": "Tomas Vondra <tomas@vondra.me> writes:\n> [ 002_pg_upgrade and 027_stream_regress are slow ]\n\n> I don't have a great idea how to speed up these tests, unfortunately.\n> But one of the problems is that all the TAP tests run serially - one\n> after each other. Could we instead run them in parallel? The tests setup\n> their \"private\" clusters anyway, right?\n\nBut there's parallelism within those two tests already, or I would\nhope so at least. If you run them in parallel then you are probably\ncausing 40 backends instead of 20 to be running at once (plus 40\nvalgrind instances). Maybe you have a machine beefy enough to make\nthat useful, but I don't.\n\nReally the way to fix those two tests would be to rewrite them to not\ndepend on the core regression tests. The core tests do a lot of work\nthat's not especially useful for the purposes of those tests, and it's\nnot even clear that they are exercising all that we'd like to have\nexercised for those purposes. In the case of 002_pg_upgrade, all\nwe really need to do is create objects that will stress all of\npg_dump. 
It's a little harder to scope out what we want to test for\n027_stream_regress, but it's still clear that the core tests do a lot\nof work that's not helpful.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 15 Sep 2024 14:31:13 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: how to speed up 002_pg_upgrade.pl and 025_stream_regress.pl under\n valgrind" }, { "msg_contents": "On 9/15/24 20:31, Tom Lane wrote:\n> Tomas Vondra <tomas@vondra.me> writes:\n>> [ 002_pg_upgrade and 027_stream_regress are slow ]\n> \n>> I don't have a great idea how to speed up these tests, unfortunately.\n>> But one of the problems is that all the TAP tests run serially - one\n>> after each other. Could we instead run them in parallel? The tests setup\n>> their \"private\" clusters anyway, right?\n> \n> But there's parallelism within those two tests already, or I would\n> hope so at least. If you run them in parallel then you are probably\n> causing 40 backends instead of 20 to be running at once (plus 40\n> valgrind instances). Maybe you have a machine beefy enough to make\n> that useful, but I don't.\n> \n\nI did look into that for both tests, albeit not very thoroughly, and\nmost of the time there were only 1-2 valgrind processes using CPU. The\nstream_regress seems more aggressive, but even for that the CPU spikes\nare short, and the machine could easily do something else in parallel.\n\nI'll try to do better analysis and some charts to visualize this ...\n\n> Really the way to fix those two tests would be to rewrite them to not\n> depend on the core regression tests. The core tests do a lot of work\n> that's not especially useful for the purposes of those tests, and it's\n> not even clear that they are exercising all that we'd like to have\n> exercised for those purposes. In the case of 002_pg_upgrade, all\n> we really need to do is create objects that will stress all of\n> pg_dump. It's a little harder to scope out what we want to test for\n> 027_stream_regress, but it's still clear that the core tests do a lot\n> of work that's not helpful.\n> \n\nPerhaps, but that's a lot of work and time, and tricky - it seems we\nmight easily remove some useful test, even if it's not the original\npurpose of that particular script.\n\n\nregards\n\n-- \nTomas Vondra\n\n\n", "msg_date": "Sun, 15 Sep 2024 21:47:28 +0200", "msg_from": "Tomas Vondra <tomas@vondra.me>", "msg_from_op": true, "msg_subject": "Re: how to speed up 002_pg_upgrade.pl and 025_stream_regress.pl under\n valgrind" }, { "msg_contents": "On Mon, Sep 16, 2024 at 6:31 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Really the way to fix those two tests would be to rewrite them to not\n> depend on the core regression tests. The core tests do a lot of work\n> that's not especially useful for the purposes of those tests, and it's\n> not even clear that they are exercising all that we'd like to have\n> exercised for those purposes. In the case of 002_pg_upgrade, all\n> we really need to do is create objects that will stress all of\n> pg_dump. It's a little harder to scope out what we want to test for\n> 027_stream_regress, but it's still clear that the core tests do a lot\n> of work that's not helpful.\n\n027_stream_regress wants to get test coverage for the _redo routines\nand replay subsystem, so I've wondered about defining a\nsrc/test/regress/redo_schedule that removes what can be removed\nwithout reducing _redo coverage. 
For example, join_hash.sql must eat\na *lot* of valgrind CPU cycles, and contributes nothing to redo\ntesting.\n\nThinking along the same lines, 002_pg_upgrade wants to create database\nobjects to dump, so I was thinking you could have a dump_schedule that\nremoves anything that doesn't leave objects behind. But you might be\nright that it'd be better to start from scratch for that with that\ngoal in mind, and arguably also for the other.\n\n(An interesting archeological detail about the regression tests is\nthat they seem to derive from the Wisconsin benchmark, famous for\nbenchmark wars and Oracle lawyers[1]. It seems quaint now that 'tenk'\nwas a lot of tuples, but I guess that Ingres on a PDP 11, which caused\noffence by running that benchmark 5x faster, ran in something like\n128kB of memory[2], so I can only guess the buffer pool must have been\nsomething like 8 buffers or not much more in total?)\n\n[1] https://jimgray.azurewebsites.net/BenchmarkHandbook/chapter4.pdf\n[2] https://www.seas.upenn.edu/~zives/cis650/papers/INGRES.PDF\n\n\n", "msg_date": "Mon, 16 Sep 2024 09:22:41 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: how to speed up 002_pg_upgrade.pl and 025_stream_regress.pl under\n valgrind" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> (An interesting archeological detail about the regression tests is\n> that they seem to derive from the Wisconsin benchmark, famous for\n> benchmark wars and Oracle lawyers[1].\n\nThis is quite off-topic for the thread, but ... we actually had an\nimplementation of the Wisconsin benchmark in src/test/bench, which\nwe eventually removed (a05a4b478). It does look like the modern\nregression tests borrowed the definitions of \"tenk1\" and some related\ntables from there, but I think it'd be a stretch to say the tests\ndescended from it.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 15 Sep 2024 17:59:40 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: how to speed up 002_pg_upgrade.pl and 025_stream_regress.pl under\n valgrind" }, { "msg_contents": "On 9/15/24 21:47, Tomas Vondra wrote:\n> On 9/15/24 20:31, Tom Lane wrote:\n>> Tomas Vondra <tomas@vondra.me> writes:\n>>> [ 002_pg_upgrade and 027_stream_regress are slow ]\n>>\n>>> I don't have a great idea how to speed up these tests, unfortunately.\n>>> But one of the problems is that all the TAP tests run serially - one\n>>> after each other. Could we instead run them in parallel? The tests setup\n>>> their \"private\" clusters anyway, right?\n>>\n>> But there's parallelism within those two tests already, or I would\n>> hope so at least. If you run them in parallel then you are probably\n>> causing 40 backends instead of 20 to be running at once (plus 40\n>> valgrind instances). Maybe you have a machine beefy enough to make\n>> that useful, but I don't.\n>>\n> \n> I did look into that for both tests, albeit not very thoroughly, and\n> most of the time there were only 1-2 valgrind processes using CPU. The\n> stream_regress seems more aggressive, but even for that the CPU spikes\n> are short, and the machine could easily do something else in parallel.\n> \n> I'll try to do better analysis and some charts to visualize this ...\n\nI see there's already a discussion about how to make these tests cheaper\nby running only a subset of the regression tests, but here are two\ncharts showing how many processes and CPU usage for the two tests (under\nvalgrind). 
In both cases there are occasional spikes with >10 backends,\nand high CPU usage, but most of the time it's only 1-2 processes, using\n1-2 cores.\n\nIn fact, the two charts are almost exactly the same - which is somewhat\nexpected, considering the expensive part is running regression tests,\nand that's the same for both.\n\nBut doesn't this also mean we might speed up check-world by reordering\nthe tests a bit? The low-usage parts happen because one of the tests in\na group takes much longer, so what if moved those slow tests into a\ngroup on their own?\n\nregards\n\n-- \nTomas Vondra", "msg_date": "Mon, 16 Sep 2024 13:34:04 +0200", "msg_from": "Tomas Vondra <tomas@vondra.me>", "msg_from_op": true, "msg_subject": "Re: how to speed up 002_pg_upgrade.pl and 025_stream_regress.pl under\n valgrind" } ]
[ { "msg_contents": "hi.\none minor issue in src/backend/catalog/information_schema.sql\n/*\n * 6.22\n * COLUMNS view\n */\nCREATE VIEW columns ....\n\n\nCAST(CASE WHEN a.attgenerated = '' THEN pg_get_expr(ad.adbin,\nad.adrelid) END AS character_data) AS column_default,\ncan change to\nCAST(CASE WHEN a.attgenerated = '' AND a.atthasdef THEN\npg_get_expr(ad.adbin, ad.adrelid) END AS character_data) AS\ncolumn_default,\n\n\nCAST(CASE WHEN a.attgenerated <> '' THEN 'ALWAYS' ELSE 'NEVER' END AS\ncharacter_data) AS is_generated,\ncan change to\nCAST(CASE WHEN a.attgenerated <> '' AND a.atthasdef THEN 'ALWAYS' ELSE\n'NEVER' END AS character_data) AS is_generated,\n\nCAST(CASE WHEN a.attgenerated <> '' THEN pg_get_expr(ad.adbin,\nad.adrelid) END AS character_data) AS generation_expression,\ncan change to\nCAST(CASE WHEN a.attgenerated <> '' AND a.atthasdef THEN\npg_get_expr(ad.adbin, ad.adrelid) END AS character_data) AS\ngeneration_expression,\n\n\ni guess, it will have some minor speed up, also more accurate.\n\n\n", "msg_date": "Mon, 16 Sep 2024 12:12:55 +0800", "msg_from": "jian he <jian.universality@gmail.com>", "msg_from_op": true, "msg_subject": "information_schema.view attgenerated" }, { "msg_contents": "On 16.09.24 06:12, jian he wrote:\n> hi.\n> one minor issue in src/backend/catalog/information_schema.sql\n> /*\n> * 6.22\n> * COLUMNS view\n> */\n> CREATE VIEW columns ....\n> \n> \n> CAST(CASE WHEN a.attgenerated = '' THEN pg_get_expr(ad.adbin,\n> ad.adrelid) END AS character_data) AS column_default,\n> can change to\n> CAST(CASE WHEN a.attgenerated = '' AND a.atthasdef THEN\n> pg_get_expr(ad.adbin, ad.adrelid) END AS character_data) AS\n> column_default,\n> \n> \n> CAST(CASE WHEN a.attgenerated <> '' THEN 'ALWAYS' ELSE 'NEVER' END AS\n> character_data) AS is_generated,\n> can change to\n> CAST(CASE WHEN a.attgenerated <> '' AND a.atthasdef THEN 'ALWAYS' ELSE\n> 'NEVER' END AS character_data) AS is_generated,\n> \n> CAST(CASE WHEN a.attgenerated <> '' THEN pg_get_expr(ad.adbin,\n> ad.adrelid) END AS character_data) AS generation_expression,\n> can change to\n> CAST(CASE WHEN a.attgenerated <> '' AND a.atthasdef THEN\n> pg_get_expr(ad.adbin, ad.adrelid) END AS character_data) AS\n> generation_expression,\n> \n> \n> i guess, it will have some minor speed up, also more accurate.\n\nI'm having a hard time interpreting this report. Could you be more \nclear about what is the existing code, and what is the code you are \nproposing as new.?\n\n\n\n", "msg_date": "Wed, 18 Sep 2024 10:09:38 +0200", "msg_from": "Peter Eisentraut <peter@eisentraut.org>", "msg_from_op": false, "msg_subject": "Re: information_schema.view attgenerated" }, { "msg_contents": "On Wed, Sep 18, 2024 at 4:09 PM Peter Eisentraut <peter@eisentraut.org> wrote:\n>\n> >\n> > i guess, it will have some minor speed up, also more accurate.\n>\n> I'm having a hard time interpreting this report. Could you be more\n> clear about what is the existing code, and what is the code you are\n> proposing as new.?\n>\n\nsorry for confusion. 
The changes I propose, also attached.\n\n\ndiff --git a/src/backend/catalog/information_schema.sql\nb/src/backend/catalog/information_schema.sql\nindex c4145131ce..ff8b9305e4 100644\n--- a/src/backend/catalog/information_schema.sql\n+++ b/src/backend/catalog/information_schema.sql\n@@ -688,7 +688,7 @@ CREATE VIEW columns AS\n CAST(c.relname AS sql_identifier) AS table_name,\n CAST(a.attname AS sql_identifier) AS column_name,\n CAST(a.attnum AS cardinal_number) AS ordinal_position,\n- CAST(CASE WHEN a.attgenerated = '' THEN\npg_get_expr(ad.adbin, ad.adrelid) END AS character_data) AS\ncolumn_default,\n+ CAST(CASE WHEN a.attgenerated = '' AND a.atthasdef THEN\npg_get_expr(ad.adbin, ad.adrelid) END AS character_data) AS\ncolumn_default,\n CAST(CASE WHEN a.attnotnull OR (t.typtype = 'd' AND\nt.typnotnull) THEN 'NO' ELSE 'YES' END\n AS yes_or_no)\n AS is_nullable,\n@@ -777,8 +777,8 @@ CREATE VIEW columns AS\n CAST(seq.seqmin AS character_data) AS identity_minimum,\n CAST(CASE WHEN seq.seqcycle THEN 'YES' ELSE 'NO' END AS\nyes_or_no) AS identity_cycle,\n\n- CAST(CASE WHEN a.attgenerated <> '' THEN 'ALWAYS' ELSE\n'NEVER' END AS character_data) AS is_generated,\n- CAST(CASE WHEN a.attgenerated <> '' THEN\npg_get_expr(ad.adbin, ad.adrelid) END AS character_data) AS\ngeneration_expression,\n+ CAST(CASE WHEN a.attgenerated <> '' AND a.atthasdef THEN\n'ALWAYS' ELSE 'NEVER' END AS character_data) AS is_generated,\n+ CAST(CASE WHEN a.attgenerated <> '' AND a.atthasdef THEN\npg_get_expr(ad.adbin, ad.adrelid) END AS character_data) AS\ngeneration_expression,", "msg_date": "Wed, 18 Sep 2024 16:23:52 +0800", "msg_from": "jian he <jian.universality@gmail.com>", "msg_from_op": true, "msg_subject": "Re: information_schema.view attgenerated" }, { "msg_contents": "On 18.09.24 10:23, jian he wrote:\n> On Wed, Sep 18, 2024 at 4:09 PM Peter Eisentraut <peter@eisentraut.org> wrote:\n>>\n>>>\n>>> i guess, it will have some minor speed up, also more accurate.\n>>\n>> I'm having a hard time interpreting this report. Could you be more\n>> clear about what is the existing code, and what is the code you are\n>> proposing as new.?\n>>\n> \n> sorry for confusion. 
The changes I propose, also attached.\n\nI think this change is not technically wrong, but I think it doesn't \nmake a difference either way, so I don't see why we should make a change \nhere.\n\n\n> diff --git a/src/backend/catalog/information_schema.sql\n> b/src/backend/catalog/information_schema.sql\n> index c4145131ce..ff8b9305e4 100644\n> --- a/src/backend/catalog/information_schema.sql\n> +++ b/src/backend/catalog/information_schema.sql\n> @@ -688,7 +688,7 @@ CREATE VIEW columns AS\n> CAST(c.relname AS sql_identifier) AS table_name,\n> CAST(a.attname AS sql_identifier) AS column_name,\n> CAST(a.attnum AS cardinal_number) AS ordinal_position,\n> - CAST(CASE WHEN a.attgenerated = '' THEN\n> pg_get_expr(ad.adbin, ad.adrelid) END AS character_data) AS\n> column_default,\n> + CAST(CASE WHEN a.attgenerated = '' AND a.atthasdef THEN\n> pg_get_expr(ad.adbin, ad.adrelid) END AS character_data) AS\n> column_default,\n> CAST(CASE WHEN a.attnotnull OR (t.typtype = 'd' AND\n> t.typnotnull) THEN 'NO' ELSE 'YES' END\n> AS yes_or_no)\n> AS is_nullable,\n> @@ -777,8 +777,8 @@ CREATE VIEW columns AS\n> CAST(seq.seqmin AS character_data) AS identity_minimum,\n> CAST(CASE WHEN seq.seqcycle THEN 'YES' ELSE 'NO' END AS\n> yes_or_no) AS identity_cycle,\n> \n> - CAST(CASE WHEN a.attgenerated <> '' THEN 'ALWAYS' ELSE\n> 'NEVER' END AS character_data) AS is_generated,\n> - CAST(CASE WHEN a.attgenerated <> '' THEN\n> pg_get_expr(ad.adbin, ad.adrelid) END AS character_data) AS\n> generation_expression,\n> + CAST(CASE WHEN a.attgenerated <> '' AND a.atthasdef THEN\n> 'ALWAYS' ELSE 'NEVER' END AS character_data) AS is_generated,\n> + CAST(CASE WHEN a.attgenerated <> '' AND a.atthasdef THEN\n> pg_get_expr(ad.adbin, ad.adrelid) END AS character_data) AS\n> generation_expression,\n\n\n\n", "msg_date": "Sun, 29 Sep 2024 21:49:09 -0400", "msg_from": "Peter Eisentraut <peter@eisentraut.org>", "msg_from_op": false, "msg_subject": "Re: information_schema.view attgenerated" } ]
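One way to see why the extra a.atthasdef test would make no practical difference, supporting the conclusion above: the columns view joins pg_attrdef with a LEFT JOIN, so ad.adbin is NULL for any column without a default, and pg_get_expr() is strict, meaning the existing CASE arm already comes out NULL in that case. A quick check on a stock installation (the regclass argument is only there to satisfy the signature):

SELECT pg_get_expr(NULL, 'pg_class'::regclass) IS NULL;
-- true

And since the executor never calls a strict function when one of its arguments is NULL, there is no meaningful speedup to gain either.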
[ { "msg_contents": "Using a trigram index with an non-deterministic collation can\nlead to wrong query results:\n\n CREATE COLLATION faux_cn (PROVIDER = icu, LOCALE = 'und', DETERMINISTIC = FALSE, RULES = '&l = r');\n\n CREATE TABLE boom (id integer PRIMARY KEY, t text COLLATE faux_cn);\n\n INSERT INTO boom VALUES (1, 'right'), (2, 'light');\n\n SELECT * FROM boom WHERE t = 'right';\n\n id │ t \n ════╪═══════\n 1 │ right\n 2 │ light\n (2 rows)\n\n CREATE INDEX ON boom USING gin (t gin_trgm_ops);\n\n SET enable_seqscan = off;\n\n SELECT * FROM boom WHERE t = 'right';\n\n id │ t \n ════╪═══════\n 1 │ right\n (1 row)\n\nI also see questionable results with the similarity operator (with and\nwithout the index):\n\n SELECT * FROM boom WHERE t % 'rigor';\n\n id │ t \n ════╪═══════\n 1 │ right\n (1 row)\n\nBut here you could argue that the operator ignores the collation, so\nthe result is correct. With equality, there is no such loophole.\n\nI don't know what the correct fix would be. Perhaps just refusing to use\nthe index for equality comparisons with non-deterministic collations.\n\nYours,\nLaurenz Albe\n\n\n", "msg_date": "Tue, 17 Sep 2024 08:00:18 +0200", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": true, "msg_subject": "Wrong results with equality search using trigram index and\n non-deterministic collation" }, { "msg_contents": "On Tue, 2024-09-17 at 08:00 +0200, Laurenz Albe wrote:\n> Using a trigram index with an non-deterministic collation can\n> lead to wrong query results:\n> [...]\n> \n> I don't know what the correct fix would be. Perhaps just refusing to use\n> the index for equality comparisons with non-deterministic collations.\n\nLooking into fixing that, how can you tell the optimizer to consider\na certain index only for certain collations?\n\nYours,\nLaurenz Albe\n\n\n", "msg_date": "Thu, 19 Sep 2024 14:53:52 +0200", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": true, "msg_subject": "Re: Wrong results with equality search using trigram index and\n non-deterministic collation" } ]
[ { "msg_contents": "Hi hackers,\n\nI just noticed that the docs for pg_service.conf\n(https://www.postgresql.org/docs/current/libpq-pgservice.html) don't\nmention the actual key word to use in the libpq connection string until\nthe example in the last sentence, but it mentions the env var in the\nfirst paragraph. Here's a patch to make the key word equally prominent.\n\n- ilmari", "msg_date": "Tue, 17 Sep 2024 10:24:33 +0100", "msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org>", "msg_from_op": true, "msg_subject": "[PATCH] Mention service key word more prominently in\n pg_service.conf docs" }, { "msg_contents": "> On 17 Sep 2024, at 11:24, Dagfinn Ilmari Mannsåker <ilmari@ilmari.org> wrote:\n\n> I just noticed that the docs for pg_service.conf\n> (https://www.postgresql.org/docs/current/libpq-pgservice.html) don't\n> mention the actual key word to use in the libpq connection string until\n> the example in the last sentence, but it mentions the env var in the\n> first paragraph. Here's a patch to make the key word equally prominent.\n\nFair point, that seems like a good change, will apply.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Tue, 17 Sep 2024 11:37:15 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Mention service key word more prominently in\n pg_service.conf docs" } ]
[ { "msg_contents": "Currently: \n\njsonb_strip_nulls ( jsonb ) → jsonb\nDeletes all object fields that have null values from the given JSON value, recursively. Null values that are not object fields are untouched.\n\n> Null values that are not object fields are untouched. \n\nCan we revisit this and make it work with arrays, too?\nTbh, at first sight that looked like the expected behavior for me.\nThat is strip nulls from arrays as well.\n\nThis has been available since 9.5 and iiuc predates lots of the jsonb array work.\n\nIn practice, though, whenever jsonb_build_array is used (especially with jsonpath),\na few nulls do appear in the resulting array most of the times,\nCurrently, there’s no expressive way to remove this. \n\nWe could also have jsonb_array_strip_nulls(jsonb) as well\n\n\nCurrently: jsonb_strip_nulls ( jsonb ) → jsonbDeletes all object fields that have null values from the given JSON value, recursively. Null values that are not object fields are untouched.> Null values that are not object fields are untouched. Can we revisit this and make it work with arrays, too?Tbh, at first sight that looked like the expected behavior for me.That is strip nulls from arrays as well.This has been available since 9.5 and iiuc predates lots of the jsonb array work.In practice, though, whenever jsonb_build_array is used (especially with jsonpath),a few nulls do appear in the resulting array most of the times,Currently, there’s no expressive way to remove this.  We could also have jsonb_array_strip_nulls(jsonb) as well", "msg_date": "Tue, 17 Sep 2024 12:26:36 +0300", "msg_from": "Florents Tselai <florents.tselai@gmail.com>", "msg_from_op": true, "msg_subject": "jsonb_strip_nulls with arrays? " }, { "msg_contents": "On 2024-09-17 Tu 5:26 AM, Florents Tselai wrote:\n>\n> Currently:\n>\n>\n> |jsonb_strip_nulls| ( |jsonb| ) → |jsonb|\n>\n> Deletes all object fields that have null values from the given JSON \n> value, recursively. Null values that are not object fields are untouched.\n>\n>\n> > Null values that are not object fields are untouched.\n>\n>\n> Can we revisit this and make it work with arrays, too?\n>\n> Tbh, at first sight that looked like the expected behavior for me.\n>\n> That is strip nulls from arrays as well.\n>\n>\n> This has been available since 9.5 and iiuc predates lots of the jsonb \n> array work.\n>\n\nI don't think that's a great idea. Removing an object field which has a \nnull value shouldn't have any effect on the surrounding data, nor really \nany on other operations (If you try to get the value of the missing \nfield it should give you back null). But removing a null array member \nisn't like that at all - unless it's the trailing member of the array it \nwill renumber all the succeeding array members.\n\nAnd I don't think we should be changing the behaviour of a function, \nthat people might have been relying on for the better part of a decade.\n\n\n>\n> In practice, though, whenever jsonb_build_array is used (especially \n> with jsonpath),\n>\n> a few nulls do appear in the resulting array most of the times,\n>\n> Currently, there’s no expressive way to remove this.\n>\n>\n> We could also have jsonb_array_strip_nulls(jsonb) as well\n>\n\nWe could, if we're going to do anything at all in this area. Another \npossibility would be to provide a second optional parameter for \njson{b}_strip_nulls. 
That's probably a better way to go.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2024-09-17 Tu 5:26 AM, Florents\n Tselai wrote:\n\n\n\nCurrently: \n\n\njsonb_strip_nulls ( jsonb )\n → jsonb\nDeletes\n all object fields that have null values from the given JSON\n value, recursively. Null values that are not object fields are\n untouched.\n\n\n>\n Null values that are not object fields are untouched. \n\n\nCan\n we revisit this and make it work with arrays, too?\nTbh,\n at first sight that looked like the expected behavior for me.\nThat is strip nulls from arrays\n as well.\n\n\nThis has been available since 9.5\n and iiuc predates lots of the jsonb array work.\n\n\n\nI don't think that's a great idea. Removing an object field which\n has a null value shouldn't have any effect on the surrounding\n data, nor really any on other operations (If you try to get the\n value of the missing field it should give you back null). But\n removing a null array member isn't like that at all - unless it's\n the trailing member of the array it will renumber all the\n succeeding array members.\nAnd I don't think we should be changing the behaviour of a\n function, that people might have been relying on for the better\n part of a decade.\n\n\n\n\n\n\nIn practice, though, whenever\n jsonb_build_array is used (especially with jsonpath),\na few nulls do appear in the\n resulting array most of the times,\nCurrently,\n there’s no expressive way to remove this.  \n\n\nWe\n could also have jsonb_array_strip_nulls(jsonb) as well\n\n\n\nWe could, if we're going to do anything at all in this area.\n Another possibility would be to provide a second optional\n parameter for json{b}_strip_nulls. That's probably a better way to\n go.\n\n\n\ncheers\n\n\nandrew\n\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com", "msg_date": "Tue, 17 Sep 2024 10:11:47 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: jsonb_strip_nulls with arrays?" }, { "msg_contents": "On Tue, Sep 17, 2024 at 5:11 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n\n>\n> On 2024-09-17 Tu 5:26 AM, Florents Tselai wrote:\n>\n> Currently:\n>\n>\n> jsonb_strip_nulls ( jsonb ) → jsonb\n>\n> Deletes all object fields that have null values from the given JSON value,\n> recursively. Null values that are not object fields are untouched.\n>\n>\n> > Null values that are not object fields are untouched.\n>\n>\n> Can we revisit this and make it work with arrays, too?\n>\n> Tbh, at first sight that looked like the expected behavior for me.\n>\n> That is strip nulls from arrays as well.\n>\n>\n> This has been available since 9.5 and iiuc predates lots of the jsonb\n> array work.\n>\n>\n> I don't think that's a great idea. Removing an object field which has a\n> null value shouldn't have any effect on the surrounding data, nor really\n> any on other operations (If you try to get the value of the missing field\n> it should give you back null). 
But removing a null array member isn't like\n> that at all - unless it's the trailing member of the array it will renumber\n> all the succeeding array members.\n>\n> And I don't think we should be changing the behaviour of a function, that\n> people might have been relying on for the better part of a decade.\n>\n>\n>\n> In practice, though, whenever jsonb_build_array is used (especially with\n> jsonpath),\n>\n> a few nulls do appear in the resulting array most of the times,\n>\n> Currently, there’s no expressive way to remove this.\n>\n>\n> We could also have jsonb_array_strip_nulls(jsonb) as well\n>\n>\n> We could, if we're going to do anything at all in this area. Another\n> possibility would be to provide a second optional parameter for\n> json{b}_strip_nulls. That's probably a better way to go.\n>\nHere's a patch that adds that argument (only for jsonb; no json\nimplementation yet)\n\nThat's how I imagined & implemented it,\nbut there may be non-obvious pitfalls in the semantics.\n\nas-is version\n\nselect jsonb_strip_nulls('[1,2,null,3,4]');\n jsonb_strip_nulls\n--------------------\n [1, 2, null, 3, 4]\n(1 row)\n\nselect\njsonb_strip_nulls('{\"a\":1,\"b\":null,\"c\":[2,null,3],\"d\":{\"e\":4,\"f\":null}}');\n jsonb_strip_nulls\n--------------------------------------------\n {\"a\": 1, \"c\": [2, null, 3], \"d\": {\"e\": 4}}\n(1 row)\n\nwith the additional boolean flag added\n\nselect jsonb_strip_nulls('[1,2,null,3,4]', *true*);\n jsonb_strip_nulls\n-------------------\n [1, 2, 3, 4]\n(1 row)\n\nselect\njsonb_strip_nulls('{\"a\":1,\"b\":null,\"c\":[2,null,3],\"d\":{\"e\":4,\"f\":null}}',\n*true*);\n jsonb_strip_nulls\n--------------------------------------\n {\"a\": 1, \"c\": [2, 3], \"d\": {\"e\": 4}}\n(1 row)\n\n\nGH PR view: https://github.com/Florents-Tselai/postgres/pull/6/files\n\n> cheers\n>\n>\n> andrew\n>\n>\n> --\n> Andrew Dunstan\n> EDB: https://www.enterprisedb.com\n>\n>", "msg_date": "Tue, 17 Sep 2024 23:53:58 +0300", "msg_from": "Florents Tselai <florents.tselai@gmail.com>", "msg_from_op": true, "msg_subject": "Re: jsonb_strip_nulls with arrays?" } ]
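Until something like the optional boolean parameter sketched above is available, stripping nulls out of a jsonb array has to be spelled out by hand, one level at a time. A minimal sketch for a top-level array (not recursive, unlike the proposed behaviour; the input value is illustrative):

SELECT jsonb_agg(elem)
FROM jsonb_array_elements('[1, 2, null, 3, 4]'::jsonb) AS t(elem)
WHERE elem <> 'null'::jsonb;
-- [1, 2, 3, 4]

Note that jsonb_agg() over zero rows returns SQL NULL rather than an empty array, so an all-null input needs coalesce(..., '[]'::jsonb) around the aggregate.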
[ { "msg_contents": "Here are a few miscellaneous cleanup patches for pg_upgrade. I don't think\nthere's anything controversial here.\n\n0001 removes some extra whitespace in the status message for failed data\ntype checks. I noticed that when the check fails, this status message is\nindented beyond all the other output. This appears to have been introduced\nin commit 347758b, so I'd back-patch this one to v17.\n\n0002 improves the coding style in many of the new upgrade task callback\nfunctions. I refrained from adjusting this code too much when converting\nthese tasks to use the new pg_upgrade task framework (see commit 40e2e5e),\nbut now I think we should. This decreases the amount of indentation in\nsome places and removes a few dozen lines of code.\n\n0003 adds names to the UpgradeTaskSlotState enum and the UpgradeTaskSlot\nstruct. I'm not aware of any established project policy in this area, but\nI figured it'd be good to at least be consistent within the same file.\n\nThoughts?\n\n-- \nnathan", "msg_date": "Tue, 17 Sep 2024 14:22:21 -0500", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": true, "msg_subject": "miscellaneous pg_upgrade cleanup" }, { "msg_contents": "> On 17 Sep 2024, at 21:22, Nathan Bossart <nathandbossart@gmail.com> wrote:\n> \n> Here are a few miscellaneous cleanup patches for pg_upgrade. I don't think\n> there's anything controversial here.\n\nNo objections to any of these changes, LGTM.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Mon, 23 Sep 2024 15:04:22 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: miscellaneous pg_upgrade cleanup" }, { "msg_contents": "On Mon, Sep 23, 2024 at 03:04:22PM +0200, Daniel Gustafsson wrote:\n> No objections to any of these changes, LGTM.\n\nThanks for reviewing. I'll commit these once the v17 release freeze is\nover (since 0001 needs to be back-patched there).\n\n-- \nnathan\n\n\n", "msg_date": "Mon, 23 Sep 2024 09:06:03 -0500", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": true, "msg_subject": "Re: miscellaneous pg_upgrade cleanup" }, { "msg_contents": "Committed.\n\n-- \nnathan\n\n\n", "msg_date": "Thu, 26 Sep 2024 13:58:24 -0500", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": true, "msg_subject": "Re: miscellaneous pg_upgrade cleanup" } ]
[ { "msg_contents": "Hi, hackers!\n\nI've noticed that there is no way to specify a custom connection string \nwhen\ncalling the PostgreSQL::Test::Cluster->background_psql() method compared \nto the\nPostgreSQL::Test:Cluster->psql(). It seems useful to have this feature \nwhile\ntesting with BackgroundPsql, for example, when the default host value \nneeds to\nbe overridden to establish different types of connections.\n\nWhat do you think?", "msg_date": "Wed, 18 Sep 2024 01:08:26 +0300", "msg_from": "a.imamov@postgrespro.ru", "msg_from_op": true, "msg_subject": "Custom connstr in background_psql()" }, { "msg_contents": "On Wed, Sep 18, 2024 at 01:08:26AM +0300, a.imamov@postgrespro.ru wrote:\n> I've noticed that there is no way to specify a custom connection string when\n> calling the PostgreSQL::Test::Cluster->background_psql() method compared to the\n> PostgreSQL::Test:Cluster->psql(). It seems useful to have this feature while\n> testing with BackgroundPsql, for example, when the default host value needs\n> to be overridden to establish different types of connections.\n> \n> What do you think?\n\nI think that it makes sense to extend the routine as you are\nsuggesting. At least I can see myself using it depending on the test\nsuite I am dealing with. So count me in.\n--\nMichael", "msg_date": "Wed, 18 Sep 2024 07:57:11 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Custom connstr in background_psql()" }, { "msg_contents": "Michael Paquier писал(а) 2024-09-18 01:57:\n> On Wed, Sep 18, 2024 at 01:08:26AM +0300, a.imamov@postgrespro.ru \n> wrote:\n>> I've noticed that there is no way to specify a custom connection \n>> string when\n>> calling the PostgreSQL::Test::Cluster->background_psql() method \n>> compared to the\n>> PostgreSQL::Test:Cluster->psql(). It seems useful to have this feature \n>> while\n>> testing with BackgroundPsql, for example, when the default host value \n>> needs\n>> to be overridden to establish different types of connections.\n>> \n>> What do you think?\n> \n> I think that it makes sense to extend the routine as you are\n> suggesting. At least I can see myself using it depending on the test\n> suite I am dealing with. So count me in.\n> --\n> Michael\n\nShould I register the proposal in CF?\nWhich one to choose if so?\n\n--\nregards,\nAidar Imamov\n\n\n", "msg_date": "Thu, 19 Sep 2024 16:00:30 +0300", "msg_from": "a.imamov@postgrespro.ru", "msg_from_op": true, "msg_subject": "Re: Custom connstr in background_psql()" }, { "msg_contents": "On Thu, Sep 19, 2024 at 04:00:30PM +0300, a.imamov@postgrespro.ru wrote:\n> Should I register the proposal in CF?\n> Which one to choose if so?\n\nI think that's useful, so I'll just go apply it.\n--\nMichael", "msg_date": "Fri, 20 Sep 2024 07:53:13 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Custom connstr in background_psql()" } ]
[ { "msg_contents": "Don't enter parallel mode when holding interrupts.\n\nDoing so caused the leader to hang in wait_event=ParallelFinish, which\nrequired an immediate shutdown to resolve. Back-patch to v12 (all\nsupported versions).\n\nFrancesco Degrassi\n\nDiscussion: https://postgr.es/m/CAC-SaSzHUKT=vZJ8MPxYdC_URPfax+yoA1hKTcF4ROz_Q6z0_Q@mail.gmail.com\n\nBranch\n------\nmaster\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/ac04aa84a7f06635748278e6ff4bd74751bb3e8e\n\nModified Files\n--------------\nsrc/backend/optimizer/plan/planner.c | 6 ++++++\nsrc/test/regress/expected/select_parallel.out | 24 +++++++++++++++++++++\nsrc/test/regress/sql/select_parallel.sql | 31 +++++++++++++++++++++++++++\n3 files changed, 61 insertions(+)", "msg_date": "Wed, 18 Sep 2024 02:58:48 +0000", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": true, "msg_subject": "pgsql: Don't enter parallel mode when holding interrupts." }, { "msg_contents": "On Wed, 2024-09-18 at 02:58 +0000, Noah Misch wrote:\n> Don't enter parallel mode when holding interrupts.\n> \n> Doing so caused the leader to hang in wait_event=ParallelFinish, which\n> required an immediate shutdown to resolve.  Back-patch to v12 (all\n> supported versions).\n> \n> Francesco Degrassi\n> \n> Discussion: https://postgr.es/m/CAC-SaSzHUKT=vZJ8MPxYdC_URPfax+yoA1hKTcF4ROz_Q6z0_Q@mail.gmail.com\n\nDoes that warrant mention on this page?\nhttps://www.postgresql.org/docs/current/when-can-parallel-query-be-used.html\n\nYours,\nLaurenz Albe\n\n\n", "msg_date": "Wed, 18 Sep 2024 09:27:36 +0200", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: pgsql: Don't enter parallel mode when holding interrupts." }, { "msg_contents": "On Wed, Sep 18, 2024 at 3:27 AM Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n> On Wed, 2024-09-18 at 02:58 +0000, Noah Misch wrote:\n> > Don't enter parallel mode when holding interrupts.\n> >\n> > Doing so caused the leader to hang in wait_event=ParallelFinish, which\n> > required an immediate shutdown to resolve. Back-patch to v12 (all\n> > supported versions).\n> >\n> > Francesco Degrassi\n> >\n> > Discussion: https://postgr.es/m/CAC-SaSzHUKT=vZJ8MPxYdC_URPfax+yoA1hKTcF4ROz_Q6z0_Q@mail.gmail.com\n>\n> Does that warrant mention on this page?\n> https://www.postgresql.org/docs/current/when-can-parallel-query-be-used.html\n\nIMHO, no. This seems too low-level and too odd to mention.\n\nTBH, I'm kind of surprised to learn that it's possible to start\nexecuting a query while holding an LWLock. I see Tom is expressing\nsome doubts on the original thread, too. I wonder if we should instead\nbe erroring out if an LWLock is held at the start of query execution\n-- or even earlier, like when we try to call a plpgsql function while\nholding one. Leaving parallel query aside, what would prevent us from\nattempting to reacquire the exact same LWLock that we already hold and\nself-deadlocking? Or attempting to acquire some other LWLock and\ndeadlocking that way? I don't really feel like this is a parallel\nquery problem. I don't think we should be trying to run any\nuser-defined code while holding an LWLock, unless that code is written\nin C (or C++, Rust, etc.). 
Trying to run procedural code at that point\ndoesn't seem reasonable.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 19 Sep 2024 09:25:05 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Don't enter parallel mode when holding interrupts." }, { "msg_contents": "On Thu, Sep 19, 2024 at 09:25:05AM -0400, Robert Haas wrote:\n> On Wed, Sep 18, 2024 at 3:27 AM Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n> > On Wed, 2024-09-18 at 02:58 +0000, Noah Misch wrote:\n> > > Don't enter parallel mode when holding interrupts.\n> > >\n> > > Doing so caused the leader to hang in wait_event=ParallelFinish, which\n> > > required an immediate shutdown to resolve. Back-patch to v12 (all\n> > > supported versions).\n> > >\n> > > Francesco Degrassi\n> > >\n> > > Discussion: https://postgr.es/m/CAC-SaSzHUKT=vZJ8MPxYdC_URPfax+yoA1hKTcF4ROz_Q6z0_Q@mail.gmail.com\n> >\n> > Does that warrant mention on this page?\n> > https://www.postgresql.org/docs/current/when-can-parallel-query-be-used.html\n> \n> IMHO, no. This seems too low-level and too odd to mention.\n\nAgreed. If I were documenting it, I would document it with the material for\nwriting opclasses. It's probably too esoteric to document even there.\n\n> TBH, I'm kind of surprised to learn that it's possible to start\n> executing a query while holding an LWLock. I see Tom is expressing\n> some doubts on the original thread, too. I wonder if we should instead\n> be erroring out if an LWLock is held at the start of query execution\n> -- or even earlier, like when we try to call a plpgsql function while\n> holding one. Leaving parallel query aside, what would prevent us from\n> attempting to reacquire the exact same LWLock that we already hold and\n> self-deadlocking? Or attempting to acquire some other LWLock and\n> deadlocking that way? I don't really feel like this is a parallel\n> query problem. I don't think we should be trying to run any\n> user-defined code while holding an LWLock, unless that code is written\n> in C (or C++, Rust, etc.). Trying to run procedural code at that point\n> doesn't seem reasonable.\n\nNothing prevents those lwlock deadlocks. If you think it's worth breaking the\nthings folks use today (see original thread) in order to prevent that, please\ndo share that on the original thread. I'm fine either way. I think given\ninfinite resources across both postgresql.org and all extension maintainers, I\nwould do what you're thinking in v18 while in back branches, I would change\n\"erroring out\" to \"warn when assertions are enabled\". I also think it's a\nlow-priority bug, given the only known ways to reach it are C code or a custom\nopclass. Since resources aren't infinite, I'm inclined toward one of (a) stop\nhere or (b) all branches \"warn when assertions are enabled\" and maybe block\nthe plancache route discussed on the original thread.\n\n\n", "msg_date": "Fri, 20 Sep 2024 11:39:31 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": true, "msg_subject": "Re: pgsql: Don't enter parallel mode when holding interrupts." } ]
[ { "msg_contents": "Hi hackers,\n\nThere is some places where we check that a struct is full of zeroes:\n\npgstat_report_bgwriter()\npgstat_report_checkpointer()\npgstat_relation_flush_cb()\n\nIndeed that's the way we check if there is pending statistics to flush/report.\n\nThe current code is like (taking pgstat_relation_flush_cb() as an example):\n\n\"\nstatic const PgStat_TableCounts all_zeroes;\n.\n.\nif (memcmp(&lstats->counts, &all_zeroes,\n sizeof(PgStat_TableCounts)) == 0)\n.\n.\n\"\n\nThe static declaration is not \"really\" related to the purpose of the function\nit is declared in. It's there \"only\" to initialize a memory area with zeroes \nand to use it in the memcmp.\n\nI think it would make sense to \"hide\" all of this in a new macro, so please find\nattached a patch proposal doing so (Andres suggested something along those lines\nin [1] IIUC).\n\nThe macro is created in pgstat_internal.h as it looks like that \"only\" the \nstatistics related code would benefit of it currently (could be moved to other\nheader file later on if needed).\n\n[1]: https://www.postgresql.org/message-id/20230105002733.ealhzubjaiqis6ua%40awork3.anarazel.de\n\nLooking forward to your feedback,\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Wed, 18 Sep 2024 04:16:12 +0000", "msg_from": "Bertrand Drouvot <bertranddrouvot.pg@gmail.com>", "msg_from_op": true, "msg_subject": "define pg_structiszero(addr, s, r)" }, { "msg_contents": "On Wed, Sep 18, 2024 at 04:16:12AM +0000, Bertrand Drouvot wrote:\n> The macro is created in pgstat_internal.h as it looks like that \"only\" the \n> statistics related code would benefit of it currently (could be moved to other\n> header file later on if needed).\n\nI'm OK to add a helper macro in pgstat_internal.h as this is a pattern\nused only for some stats kinds (the other one I'm aware of is the\nallzero check for pages around bufmgr.c), cleaning up all these static\ndeclarations to make the memcpy() calls cheaper. That can also be\nuseful for anybody doing a custom pgstats kind, fixed or\nvariable-numbered.\n\n#define pg_structiszero(addr, s, r) \\\n\nLocating that at the top of pgstat_internal.h seems a bit out of order\nto me. Perhaps it would be better to move it closer to the inline\nfunctions?\n\nAlso, is this the best name to use here? Right, this is something\nthat may be quite generic. However, if we limit its scope in the\nstats, perhaps this should be named pgstat_entry_all_zeros() or\nsomething like that?\n--\nMichael", "msg_date": "Wed, 18 Sep 2024 15:07:15 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: define pg_structiszero(addr, s, r)" }, { "msg_contents": "Hi,\n\nOn Wed, Sep 18, 2024 at 03:07:15PM +0900, Michael Paquier wrote:\n> On Wed, Sep 18, 2024 at 04:16:12AM +0000, Bertrand Drouvot wrote:\n> > The macro is created in pgstat_internal.h as it looks like that \"only\" the \n> > statistics related code would benefit of it currently (could be moved to other\n> > header file later on if needed).\n> \n> I'm OK to add a helper macro in pgstat_internal.h as this is a pattern\n> used only for some stats kinds (the other one I'm aware of is the\n> allzero check for pages around bufmgr.c), cleaning up all these static\n> declarations to make the memcpy() calls cheaper. 
That can also be\n> useful for anybody doing a custom pgstats kind, fixed or\n> variable-numbered.\n\nThanks for looking at it!\n\n> \n> #define pg_structiszero(addr, s, r) \\\n> \n> Locating that at the top of pgstat_internal.h seems a bit out of order\n> to me. Perhaps it would be better to move it closer to the inline\n> functions?\n\nMakes sense, done that way in v2 attached.\n\n> \n> Also, is this the best name to use here? Right, this is something\n> that may be quite generic. However, if we limit its scope in the\n> stats, perhaps this should be named pgstat_entry_all_zeros() or\n> something like that?\n\nAgree, we could still rename it later on if there is a need outside of\nthe statistics code area. Done in v2.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Wed, 18 Sep 2024 07:54:20 +0000", "msg_from": "Bertrand Drouvot <bertranddrouvot.pg@gmail.com>", "msg_from_op": true, "msg_subject": "Re: define pg_structiszero(addr, s, r)" }, { "msg_contents": "On 18.09.24 06:16, Bertrand Drouvot wrote:\n> +#define pg_structiszero(addr, s, r)\t\t\t\t\t\t\t\t\t\\\n> +\tdo {\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\\\n> +\t\t/* We assume this initializes to zeroes */\t\t\t\t\t\\\n> +\t\tstatic const s all_zeroes;\t\t\t\t\t\t\t\t\t\\\n> +\t\tr = (memcmp(addr, &all_zeroes, sizeof(all_zeroes)) == 0);\t\\\n> +\t} while (0)\n\nThis assumption is kind of the problem, isn't it? Because, you can't \nassume that. And the existing code is arguably kind of wrong. But \nmoreover, this macro also assumes that the \"addr\" argument has no random \npadding bits.\n\nIn the existing code, you can maybe make a local analysis that the code \nis working correctly, although I'm not actually sure. But if you are \nrepackaging this as a general macro under a general-sounding name, then \nthe requirements should be more stringent.\n\n\n", "msg_date": "Wed, 18 Sep 2024 10:03:21 +0200", "msg_from": "Peter Eisentraut <peter@eisentraut.org>", "msg_from_op": false, "msg_subject": "Re: define pg_structiszero(addr, s, r)" }, { "msg_contents": "Hi,\n\nOn Wed, Sep 18, 2024 at 10:03:21AM +0200, Peter Eisentraut wrote:\n> On 18.09.24 06:16, Bertrand Drouvot wrote:\n> > +#define pg_structiszero(addr, s, r)\t\t\t\t\t\t\t\t\t\\\n> > +\tdo {\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\\\n> > +\t\t/* We assume this initializes to zeroes */\t\t\t\t\t\\\n> > +\t\tstatic const s all_zeroes;\t\t\t\t\t\t\t\t\t\\\n> > +\t\tr = (memcmp(addr, &all_zeroes, sizeof(all_zeroes)) == 0);\t\\\n> > +\t} while (0)\n>\n\nThanks for the feedback.\n \n> This assumption is kind of the problem, isn't it? Because, you can't assume\n> that. And the existing code is arguably kind of wrong. But moreover, this\n> macro also assumes that the \"addr\" argument has no random padding bits.\n> \n> In the existing code, you can maybe make a local analysis that the code is\n> working correctly, although I'm not actually sure.\n\nI think it is but will give it a closer look.\n\n> But if you are\n> repackaging this as a general macro under a general-sounding name, then the\n> requirements should be more stringent.\n\nAgree. 
That said in v2 ([1]), it has been renamed to pgstat_entry_all_zeros().\n\nI think that I will:\n\n1/ take a closer look at the existing assumption\n2/ if 1/ outcome is fine, then add more detailed comments around\npgstat_entry_all_zeros() to make sure it's not used outside of the existing\ncontext \n\nDoes that sound good to you?\n\n[1]: https://www.postgresql.org/message-id/ZuqHLCdZXtEsbyb/%40ip-10-97-1-34.eu-west-3.compute.internal\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 18 Sep 2024 18:57:39 +0000", "msg_from": "Bertrand Drouvot <bertranddrouvot.pg@gmail.com>", "msg_from_op": true, "msg_subject": "Re: define pg_structiszero(addr, s, r)" } ]
[ { "msg_contents": "Correct me if I'm wrong,\nbut for an extension that defines composite types,\nthere's currently no easy way to get a TupleDesc, even for its own types.\n\nSomething like\nTupleDesc get_extension_type_tupledesc(const char *extname, const char\n*typname)\n\nHere's a routine I've stolen borrowed from pramsey's code and have been\nusing ever since.\n\nCould this be exposed in extension.h ? (probably without the version check)", "msg_date": "Wed, 18 Sep 2024 10:03:33 +0300", "msg_from": "Florents Tselai <florents.tselai@gmail.com>", "msg_from_op": true, "msg_subject": "Get TupleDesc for extension-defined types" }, { "msg_contents": "Hi\n\nst 18. 9. 2024 v 9:04 odesílatel Florents Tselai <florents.tselai@gmail.com>\nnapsal:\n\n> Correct me if I'm wrong,\n> but for an extension that defines composite types,\n> there's currently no easy way to get a TupleDesc, even for its own types.\n>\n> Something like\n> TupleDesc get_extension_type_tupledesc(const char *extname, const char\n> *typname)\n>\n> Here's a routine I've stolen borrowed from pramsey's code and have been\n> using ever since.\n>\n> Could this be exposed in extension.h ? (probably without the version check)\n>\n\nI don't think this functionality is generally useful. Wrapping\nTypeGetTupleDesc(typoid, NIL) is very specific, and probably this code\nshould be inside the extension.\n\nDifferent question is API for searching in system catalog and dependencies.\nI can imagine some functions like\n\nOid extid = get_extension_id(extname);\nOid objid = get_extension_object_id(extid, schema_can_be_null, name,\nTYPEOID); // can be used for routine, table, ...\n\ntupdesc = TypeGetTupleDesc(objid, NIL);\n\nRegards\n\nPavel\n\nHist 18. 9. 2024 v 9:04 odesílatel Florents Tselai <florents.tselai@gmail.com> napsal:Correct me if I'm wrong,but for an extension that defines composite types,there's currently no easy way to get a TupleDesc, even for its own types.Something like TupleDesc get_extension_type_tupledesc(const char *extname, const char *typname)Here's a routine I've stolen borrowed from pramsey's code and have been using ever since.Could this be exposed in extension.h ? (probably without the version check)I don't think this functionality is generally useful.  Wrapping TypeGetTupleDesc(typoid, NIL) is very specific, and probably this code should be inside the extension.Different question is API for searching in system catalog and dependencies. I can imagine some functions likeOid extid = get_extension_id(extname);Oid objid = get_extension_object_id(extid, schema_can_be_null, name, TYPEOID); // can be used for routine, table, ...tupdesc = TypeGetTupleDesc(objid, NIL);RegardsPavel", "msg_date": "Wed, 18 Sep 2024 12:09:04 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Get TupleDesc for extension-defined types" }, { "msg_contents": "st 18. 9. 2024 v 9:04 odesílatel Florents Tselai <florents.tselai@gmail.com>\nnapsal:\n\n> Correct me if I'm wrong,\n> but for an extension that defines composite types,\n> there's currently no easy way to get a TupleDesc, even for its own types.\n>\n> Something like\n> TupleDesc get_extension_type_tupledesc(const char *extname, const char\n> *typname)\n>\n> Here's a routine I've stolen borrowed from pramsey's code and have been\n> using ever since.\n>\n> Could this be exposed in extension.h ? 
(probably without the version check)\n>\n>\nAnother significant issue - an only name is not a unique identifier in an\nextension.\n\nRegards\n\nPavel\n\nst 18. 9. 2024 v 9:04 odesílatel Florents Tselai <florents.tselai@gmail.com> napsal:Correct me if I'm wrong,but for an extension that defines composite types,there's currently no easy way to get a TupleDesc, even for its own types.Something like TupleDesc get_extension_type_tupledesc(const char *extname, const char *typname)Here's a routine I've stolen borrowed from pramsey's code and have been using ever since.Could this be exposed in extension.h ? (probably without the version check)Another significant issue - an only name is not a unique identifier in an extension. RegardsPavel", "msg_date": "Wed, 18 Sep 2024 12:11:49 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Get TupleDesc for extension-defined types" }, { "msg_contents": "On Wed, Sep 18, 2024 at 1:09 PM Pavel Stehule <pavel.stehule@gmail.com>\nwrote:\n\n> Hi\n>\n> st 18. 9. 2024 v 9:04 odesílatel Florents Tselai <\n> florents.tselai@gmail.com> napsal:\n>\n>> Correct me if I'm wrong,\n>> but for an extension that defines composite types,\n>> there's currently no easy way to get a TupleDesc, even for its own types.\n>>\n>> Something like\n>> TupleDesc get_extension_type_tupledesc(const char *extname, const char\n>> *typname)\n>>\n>> Here's a routine I've stolen borrowed from pramsey's code and have been\n>> using ever since.\n>>\n>> Could this be exposed in extension.h ? (probably without the version\n>> check)\n>>\n>\n> I don't think this functionality is generally useful. Wrapping\n> TypeGetTupleDesc(typoid, NIL) is very specific, and probably this code\n> should be inside the extension.\n>\n> Different question is API for searching in system catalog and\n> dependencies. I can imagine some functions like\n>\n\nThat's a better phrasing\n\n>\n> Oid extid = get_extension_id(extname);\n> Oid objid = get_extension_object_id(extid, schema_can_be_null, name,\n> TYPEOID); // can be used for routine, table, ...\n>\n> tupdesc = TypeGetTupleDesc(objid, NIL);\n>\n\nThese are valid.\nFor context:\nThe \"problem\" (inconvenience really) I'm trying to solve is this:\nMost extensions define some convenient PG_GETARG_MYTYPE(n) macros.\nWhen these types are varlena, things are easy.\n\nWhen they're composite types though things get more verbose.\ni.e. the lines of code the author needs to get from a Datum argument to\nstruct MyType are too many\nand multiple extensions copy-paste the same logic.\n\nMy hope is we could come up with a few routines that ease and standardize\nthis a bit.\n\nYou're right that extname isn't unique, so Oid should be the argument for\nextension, rather than char *extname,\nbut in my mind the \"default\" is \"the current extension\" , but no arguing\nabout that.\n\n>\n> Regards\n>\n> Pavel\n>\n>\n>\n\nOn Wed, Sep 18, 2024 at 1:09 PM Pavel Stehule <pavel.stehule@gmail.com> wrote:Hist 18. 9. 2024 v 9:04 odesílatel Florents Tselai <florents.tselai@gmail.com> napsal:Correct me if I'm wrong,but for an extension that defines composite types,there's currently no easy way to get a TupleDesc, even for its own types.Something like TupleDesc get_extension_type_tupledesc(const char *extname, const char *typname)Here's a routine I've stolen borrowed from pramsey's code and have been using ever since.Could this be exposed in extension.h ? (probably without the version check)I don't think this functionality is generally useful.  
Wrapping TypeGetTupleDesc(typoid, NIL) is very specific, and probably this code should be inside the extension.Different question is API for searching in system catalog and dependencies. I can imagine some functions likeThat's a better phrasing Oid extid = get_extension_id(extname);Oid objid = get_extension_object_id(extid, schema_can_be_null, name, TYPEOID); // can be used for routine, table, ...tupdesc = TypeGetTupleDesc(objid, NIL);These are valid.For context:The \"problem\" (inconvenience really) I'm trying to solve is this:Most extensions define some convenient PG_GETARG_MYTYPE(n) macros. When these types are varlena, things are easy.When they're composite types though things get more verbose.i.e. the lines of code the author needs to get from a Datum argument to struct MyType are too manyand multiple extensions copy-paste the same logic. My hope is we could come up with a few routines that ease and standardize this a bit.You're right that extname isn't unique, so Oid should be the argument for extension, rather than char *extname,but in my mind the  \"default\" is \"the current extension\" , but no arguing about that.RegardsPavel", "msg_date": "Wed, 18 Sep 2024 17:25:03 +0300", "msg_from": "Florents Tselai <florents.tselai@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Get TupleDesc for extension-defined types" }, { "msg_contents": "st 18. 9. 2024 v 16:25 odesílatel Florents Tselai <florents.tselai@gmail.com>\nnapsal:\n\n>\n>\n> On Wed, Sep 18, 2024 at 1:09 PM Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n>\n>> Hi\n>>\n>> st 18. 9. 2024 v 9:04 odesílatel Florents Tselai <\n>> florents.tselai@gmail.com> napsal:\n>>\n>>> Correct me if I'm wrong,\n>>> but for an extension that defines composite types,\n>>> there's currently no easy way to get a TupleDesc, even for its own types.\n>>>\n>>> Something like\n>>> TupleDesc get_extension_type_tupledesc(const char *extname, const char\n>>> *typname)\n>>>\n>>> Here's a routine I've stolen borrowed from pramsey's code and have been\n>>> using ever since.\n>>>\n>>> Could this be exposed in extension.h ? (probably without the version\n>>> check)\n>>>\n>>\n>> I don't think this functionality is generally useful. Wrapping\n>> TypeGetTupleDesc(typoid, NIL) is very specific, and probably this code\n>> should be inside the extension.\n>>\n>> Different question is API for searching in system catalog and\n>> dependencies. I can imagine some functions like\n>>\n>\n> That's a better phrasing\n>\n>>\n>> Oid extid = get_extension_id(extname);\n>> Oid objid = get_extension_object_id(extid, schema_can_be_null, name,\n>> TYPEOID); // can be used for routine, table, ...\n>>\n>> tupdesc = TypeGetTupleDesc(objid, NIL);\n>>\n>\n> These are valid.\n> For context:\n> The \"problem\" (inconvenience really) I'm trying to solve is this:\n> Most extensions define some convenient PG_GETARG_MYTYPE(n) macros.\n> When these types are varlena, things are easy.\n>\n> When they're composite types though things get more verbose.\n> i.e. the lines of code the author needs to get from a Datum argument to\n> struct MyType are too many\n> and multiple extensions copy-paste the same logic.\n>\n> My hope is we could come up with a few routines that ease and standardize\n> this a bit.\n>\n> You're right that extname isn't unique, so Oid should be the argument for\n> extension, rather than char *extname,\n> but in my mind the \"default\" is \"the current extension\" , but no arguing\n> about that.\n>\n\nwhat you mean \"the current extension\" - there is nothing like this. 
The\nfunctions have not any information without searching in the catalog about\ntheir extension. Function knows just its own oid and arguments. I can\nimagine so fmgr can be enhanced by it - it can reduce some searching, but\ncurrently there is nothing like current or owner extension (extension id).\n\n\n\n>> Regards\n>>\n>> Pavel\n>>\n>>\n>>\n\nst 18. 9. 2024 v 16:25 odesílatel Florents Tselai <florents.tselai@gmail.com> napsal:On Wed, Sep 18, 2024 at 1:09 PM Pavel Stehule <pavel.stehule@gmail.com> wrote:Hist 18. 9. 2024 v 9:04 odesílatel Florents Tselai <florents.tselai@gmail.com> napsal:Correct me if I'm wrong,but for an extension that defines composite types,there's currently no easy way to get a TupleDesc, even for its own types.Something like TupleDesc get_extension_type_tupledesc(const char *extname, const char *typname)Here's a routine I've stolen borrowed from pramsey's code and have been using ever since.Could this be exposed in extension.h ? (probably without the version check)I don't think this functionality is generally useful.  Wrapping TypeGetTupleDesc(typoid, NIL) is very specific, and probably this code should be inside the extension.Different question is API for searching in system catalog and dependencies. I can imagine some functions likeThat's a better phrasing Oid extid = get_extension_id(extname);Oid objid = get_extension_object_id(extid, schema_can_be_null, name, TYPEOID); // can be used for routine, table, ...tupdesc = TypeGetTupleDesc(objid, NIL);These are valid.For context:The \"problem\" (inconvenience really) I'm trying to solve is this:Most extensions define some convenient PG_GETARG_MYTYPE(n) macros. When these types are varlena, things are easy.When they're composite types though things get more verbose.i.e. the lines of code the author needs to get from a Datum argument to struct MyType are too manyand multiple extensions copy-paste the same logic. My hope is we could come up with a few routines that ease and standardize this a bit.You're right that extname isn't unique, so Oid should be the argument for extension, rather than char *extname,but in my mind the  \"default\" is \"the current extension\" , but no arguing about that.what you mean \"the current extension\" - there is nothing like this. The functions have not any information without searching in the catalog about their extension.  Function knows just its own oid and arguments. I can imagine so fmgr can be enhanced by it - it can reduce some searching, but currently there is nothing like current or owner extension (extension id).RegardsPavel", "msg_date": "Wed, 18 Sep 2024 16:31:36 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Get TupleDesc for extension-defined types" } ]
[ { "msg_contents": "\nHi,\n\nCurrently detoast_attr always detoast the data into a palloc-ed memory\nand then if user wants the detoast data in a different memory, user has to\ncopy them, I'm thinking if we could provide a buf as optional argument for\ndetoast_attr to save such wastage. \n\ncurrent format:\n\n/* ----------\n * detoast_attr -\n *\n *\tPublic entry point to get back a toasted value from compression\n *\tor external storage. The result is always non-extended varlena form.\n *\n * Note some callers assume that if the input is an EXTERNAL or COMPRESSED\n * datum, the result will be a pfree'able chunk.\n * ----------\n */\nstruct varlena *\ndetoast_attr(struct varlena *attr)\n\nnew format:\n\n/* ----------\n * detoast_attr -\n \n * ...\n *\n * Note if caller provides a non-NULL buffer, it is the duty of caller\n * to make sure it has enough room for the detoasted format (Usually\n * they can use toast_raw_datum_size to get the size) Or else a\n * palloced memory under CurrentMemoryContext is used.\n */\n\nstruct varlena *\ndetoast_attr(struct varlena *attr, char *buffer)\n\nThere are 2 user cases at least:\n\n1. The shared detoast datum patch at [1], where I want to avoid the\nduplicated detoast effort for the same datum, for example:\n\nSELECT f(big_toast_col) FROM t WHERE g(big_toast_col);\n\nCurrent master detoast it twice now.\n\nIn that patch, I want to detoast the datum into a MemoryContext where the\nlifespan is same as slot->tts_values[*] rather than CurrentMemoryContext\nso that the result can be reused in the different expression. Within the\nproposal here, we can detoast the datum into the desired MemoryContext\ndirectly (just allocating the buffer in the desired MemoryContext is OK).\n\n2. make printtup function a bit faster [2]. That patch already removed\nsome palloc, memcpy effort, but it still have some chances to \noptimize further. for example in text_out function, it is still detoast\nthe datum into a palloc memory and then copy them into a StringInfo.\n\nOne of the key point is we can always get the varlena rawsize cheaply\nwithout any real detoast activity in advance, thanks to the existing\nvarlena design.\n\nIf this can be accepted, it would reduce the size of patch [2] at some\nextend, and which part was disliked by Thomas (and me..) [3].\n\nWhat do you think? \n\n[1] https://commitfest.postgresql.org/49/4759/\n[2] https://www.postgresql.org/message-id/87wmjzfz0h.fsf%40163.com\n[3] https://www.postgresql.org/message-id/6718759c-2dac-48e4-bf18-282de4d82204%40enterprisedb.com\n\n-- \nBest Regards\nAndy Fan\n\n\n\n", "msg_date": "Wed, 18 Sep 2024 17:35:56 +0800", "msg_from": "Andy Fan <zhihuifan1213@163.com>", "msg_from_op": true, "msg_subject": "detoast datum into the given buffer as a optimization." }, { "msg_contents": "On Wed, Sep 18, 2024 at 05:35:56PM +0800, Andy Fan wrote:\n> Currently detoast_attr always detoast the data into a palloc-ed memory\n> and then if user wants the detoast data in a different memory, user has to\n> copy them, I'm thinking if we could provide a buf as optional argument for\n> detoast_attr to save such wastage. \n>\n> [...]\n> \n> What do you think? 
\n\nMy first thought is that this seems reasonable if there are existing places\nwhere we are copying the data out of the palloc'd memory, but otherwise it\nmight be more of a prerequisite patch for the other things you mentioned.\n\n-- \nnathan\n\n\n", "msg_date": "Wed, 18 Sep 2024 16:23:42 -0500", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: detoast datum into the given buffer as a optimization." }, { "msg_contents": "On Wed, Sep 18, 2024 at 2:23 PM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>\n> On Wed, Sep 18, 2024 at 05:35:56PM +0800, Andy Fan wrote:\n> > Currently detoast_attr always detoast the data into a palloc-ed memory\n> > and then if user wants the detoast data in a different memory, user has to\n> > copy them, I'm thinking if we could provide a buf as optional argument for\n> > detoast_attr to save such wastage.\n> >\n> > [...]\n> >\n> > What do you think?\n>\n> My first thought is that this seems reasonable if there are existing places\n> where we are copying the data out of the palloc'd memory, but otherwise it\n> might be more of a prerequisite patch for the other things you mentioned.\n>\n\nThis would also simplify data copying patterns many extensions have to do.\nFor instance, often they have to move the data from Postgres memory into\nanother language's allocation types. Or just for custom data structures,\nof course.\n\nI would suggest that this be added as new API, however, instead of a change\nto `detoast_attr`. This would make different return types more sensical,\nas there is no need to implicitly allocate. It could return an error type?\n\n> * Note if caller provides a non-NULL buffer, it is the duty of caller\n> * to make sure it has enough room for the detoasted format (Usually\n> * they can use toast_raw_datum_size to get the size)\n\nI'm not entirely sure why the caller is being given the burden of checking,\nbut I suppose they probably did check? I can imagine scenarios where they\nare not interested, however, and the callee always has to obtain the data\nfor len written anyways. So I would probably make writable length a third arg.\n\n\n", "msg_date": "Wed, 18 Sep 2024 16:10:19 -0700", "msg_from": "Jubilee Young <workingjubilee@gmail.com>", "msg_from_op": false, "msg_subject": "Re: detoast datum into the given buffer as a optimization." }, { "msg_contents": "Andy Fan <zhihuifan1213@163.com> writes:\n> * Note if caller provides a non-NULL buffer, it is the duty of caller\n> * to make sure it has enough room for the detoasted format (Usually\n> * they can use toast_raw_datum_size to get the size)\n\nThis is a pretty awful, unsafe API design. It puts it on the caller\nto know how to get the detoasted length, and it implies double\ndecoding of the toast datum.\n\n> One of the key point is we can always get the varlena rawsize cheaply\n> without any real detoast activity in advance, thanks to the existing\n> varlena design.\n\nThis is not an assumption I care to wire into the API design.\n\nHow about a variant like\n\nstruct varlena *\ndetoast_attr_cxt(struct varlena *attr, MemoryContext cxt)\n\nwhich promises to allocate the result in the specified context?\nThat would cover most of the practical use-cases, I think.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 18 Sep 2024 19:21:37 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: detoast datum into the given buffer as a optimization." 
}, { "msg_contents": "\nThank you all for the double check.\n\n> Andy Fan <zhihuifan1213@163.com> writes:\n>> * Note if caller provides a non-NULL buffer, it is the duty of caller\n>> * to make sure it has enough room for the detoasted format (Usually\n>> * they can use toast_raw_datum_size to get the size)\n>\n> ..., It puts it on the caller to know how to get the detoasted length\n\nYes.\n\n> and it implies double decoding of the toast datum.\n\nYes, We need to decoding the toast datum to know the rawsize as what we\ndid in toast_raw_datum_size, this is an extra effrot.\n\nBut I want to highlight that this \"decoding\" is different from\n\"detoast\", the later one need to scan toast_relation or decompression\nthe data so it is a heavy work, but the former one just decoding some\nexisting memory at hand which should be very cheap.\n\n>> One of the key point is we can always get the varlena rawsize cheaply\n>> without any real detoast activity in advance, thanks to the existing\n>> varlena design.\n>\n> This is not an assumption I care to wire into the API design.\n\nOK. (I just was excited to find out we can get the rawsize so cheaply,\nso we can find out an API to satify the both user cases.) \n\n> How about a variant like\n>\n> struct varlena *\n> detoast_attr_cxt(struct varlena *attr, MemoryContext cxt)\n>\n> which promises to allocate the result in the specified context?\n> That would cover most of the practical use-cases, I think.\n\nI think this works for my user case 1 but doesn't work for my user case 2\nwhich requires the detoasted data is writen into a given memory\nbuffer (not only a certain MemoryContext). IIUC the user cases Jubilee\nprovided is more like user case 2. \n\n\"\"\" (user case 2)\n2. make printtup function a bit faster [2]. The patch there already\nremoved some palloc, memcpy effort, but it still have some chances to\noptimize further. for example text_out function, it is still detoast\nthe datum into a palloc memory and then copy them into a StringInfo.\n\"\"\"\n\nI really want to make some progress in this direction, so thank you for\nthe feedback. \n\n-- \nBest Regards\nAndy Fan\n\n\n\n", "msg_date": "Thu, 19 Sep 2024 08:03:38 +0800", "msg_from": "Andy Fan <zhihuifan1213@163.com>", "msg_from_op": true, "msg_subject": "Re: detoast datum into the given buffer as a optimization." }, { "msg_contents": "Jubilee Young <workingjubilee@gmail.com> writes:\n\n\n> On Wed, Sep 18, 2024 at 2:23 PM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>>\n>> On Wed, Sep 18, 2024 at 05:35:56PM +0800, Andy Fan wrote:\n>> > Currently detoast_attr always detoast the data into a palloc-ed memory\n>> > and then if user wants the detoast data in a different memory, user has to\n>> > copy them, I'm thinking if we could provide a buf as optional argument for\n>> > detoast_attr to save such wastage.\n>> >\n>> > [...]\n>> >\n>> > What do you think?\n>>\n>> My first thought is that this seems reasonable if there are existing places\n>> where we are copying the data out of the palloc'd memory, but otherwise it\n>> might be more of a prerequisite patch for the other things you mentioned.\n>>\n>\n> This would also simplify data copying patterns many extensions have to do.\n> For instance, often they have to move the data from Postgres memory into\n> another language's allocation types. Or just for custom data structures,\n> of course.\n\nI thought we have, but did't check it so far. If we figured out an API,\nwe can use them for optimizing some existing code. 
\n\n> I would suggest that this be added as new API, however, instead of a change\n> to `detoast_attr`.\n\nI agree that new API would usually be clearer than adding a more\nargument, just that I don't want copy-paste too much existing\ncode. Actually the thing in my mind is:\n\nstruct varlena *\ndetoast_attr(struct varlena *attr)\n{\n int rawsize = toast_raw_datum_size(attr);\n char *buffer = palloc(rawsize) \n return detoast_attr_buffer(attr, buffer);\n}\n\nstruct varlena *\ndetoast_attr_buffer(struct varlena *attr, char *buffer)\n{\n ...\n}\n\nIn this case:\n- there is no existing code need to be changed.\n- detoast_attr_buffer is tested sufficiently automatically. \n\n> This would make different return types more sensical,\n> as there is no need to implicitly allocate. It could return an error\n> type?\n\nI can't understand the error here. \n\n>\n>> * Note if caller provides a non-NULL buffer, it is the duty of caller\n>> * to make sure it has enough room for the detoasted format (Usually\n>> * they can use toast_raw_datum_size to get the size)\n>\n> I'm not entirely sure why the caller is being given the burden of checking,\n> but I suppose they probably did check? I can imagine scenarios where they\n> are not interested, however, and the callee always has to obtain the data\n> for len written anyways. So I would probably make writable length a\n> third arg.\n\nI didn't follow up here as well:(, do you mind to explain a bit? \n\n-- \nBest Regards\nAndy Fan\n\n\n\n", "msg_date": "Thu, 19 Sep 2024 08:34:38 +0800", "msg_from": "Andy Fan <zhihuifan1213@163.com>", "msg_from_op": true, "msg_subject": "Re: detoast datum into the given buffer as a optimization." } ]
[ { "msg_contents": "hi.\n\nwhile looking at tablecmd.c, BuildDescForRelation\n attdim = list_length(entry->typeName->arrayBounds);\n if (attdim > PG_INT16_MAX)\n ereport(ERROR,\n errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),\n errmsg(\"too many array dimensions\"))\n\nmakes me related to array_in refactor previously we did.\nat first, i thought it should be \"if (attdim > MAXDIM)\"\n\n\npg_attribute attndims description in [1]\nattndims int2\nNumber of dimensions, if the column is an array type; otherwise 0.\n(Presently, the number of dimensions of an array is not enforced, so\nany nonzero value effectively means “it's an array”.)\n\npg_type typndims description in [2]\ntypndims int4\ntypndims is the number of array dimensions for a domain over an array\n(that is, typbasetype is an array type). Zero for types other than\ndomains over array types.\n\nsince array_in is the only source of the real array data.\nMAXDIM (6) ensure the max dimension is 6.\n\nCan we error out at the stage \"create table\", \"create domain\"\ntime if the attndims or typndims is larger than MAXDIM (6) ?\n\nfor example, error out the following queries immediately\ncreate table t112(a int[][] [][] [][] [][][]);\ncreate domain d_text_arr text [1][][][][][][][];\n\nin the doc, we can still say \"the number of dimensions of an array is\nnot enforced\",\nbut attndims, typndims value would be within a sane threshold.\n\nWe can change typndims from int4 to int2,\nso array type's dimension is consistent with domain type's dimension.\nbut it seems with the change, pg_type occupies the same amount of\nstorage as int4.\n\n\n[1] https://www.postgresql.org/docs/current/catalog-pg-attribute.html\n[2] https://www.postgresql.org/docs/current/catalog-pg-type.html\n\n\n", "msg_date": "Wed, 18 Sep 2024 21:06:00 +0800", "msg_from": "jian he <jian.universality@gmail.com>", "msg_from_op": true, "msg_subject": "attndims, typndims still not enforced, but make the value within a\n sane threshold" }, { "msg_contents": "jian he <jian.universality@gmail.com> writes:\n> Can we error out at the stage \"create table\", \"create domain\"\n> time if the attndims or typndims is larger than MAXDIM (6) ?\n\nThe last time this was discussed, I think the conclusion was\nwe should remove attndims and typndims entirely on the grounds\nthat they're useless. I certainly don't see a point in adding\nmore logic that could give the misleading impression that they\nmean something.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 18 Sep 2024 10:10:57 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: attndims, typndims still not enforced,\n but make the value within a sane threshold" }, { "msg_contents": "On Wed, Sep 18, 2024 at 10:10 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> jian he <jian.universality@gmail.com> writes:\n> > Can we error out at the stage \"create table\", \"create domain\"\n> > time if the attndims or typndims is larger than MAXDIM (6) ?\n>\n> The last time this was discussed, I think the conclusion was\n> we should remove attndims and typndims entirely on the grounds\n> that they're useless. 
I certainly don't see a point in adding\n> more logic that could give the misleading impression that they\n> mean something.\n>\n\nhttps://commitfest.postgresql.org/43/\nsearch \"dim\" or \"pg_attribute\", no relevant result,\ni am assuming, nobody doing work to remove attndims and typndims entirely?\nIf so, I will try to make one.\n\n\n", "msg_date": "Wed, 18 Sep 2024 22:35:46 +0800", "msg_from": "jian he <jian.universality@gmail.com>", "msg_from_op": true, "msg_subject": "Re: attndims, typndims still not enforced, but make the value within\n a sane threshold" }, { "msg_contents": "On Wed, Sep 18, 2024 at 10:35 PM jian he <jian.universality@gmail.com> wrote:\n>\n> > The last time this was discussed, I think the conclusion was\n> > we should remove attndims and typndims entirely on the grounds\n> > that they're useless. I certainly don't see a point in adding\n> > more logic that could give the misleading impression that they\n> > mean something.\n> >\n\n\nattached patch removes attndims and typndims entirely.\nsome tests skipped in my local my machine, not skipped are all OK.", "msg_date": "Fri, 20 Sep 2024 10:11:00 +0800", "msg_from": "jian he <jian.universality@gmail.com>", "msg_from_op": true, "msg_subject": "Re: attndims, typndims still not enforced, but make the value within\n a sane threshold" }, { "msg_contents": "On Fri, Sep 20, 2024 at 10:11 AM jian he <jian.universality@gmail.com> wrote:\n>\n> On Wed, Sep 18, 2024 at 10:35 PM jian he <jian.universality@gmail.com> wrote:\n> >\n> > > The last time this was discussed, I think the conclusion was\n> > > we should remove attndims and typndims entirely on the grounds\n> > > that they're useless. I certainly don't see a point in adding\n> > > more logic that could give the misleading impression that they\n> > > mean something.\n> > >\n>\n>\n> attached patch removes attndims and typndims entirely.\n> some tests skipped in my local my machine, not skipped are all OK.\n\nShould you also bump the catalog version?\n\n-- \nRegards\nJunwang Zhao\n\n\n", "msg_date": "Fri, 20 Sep 2024 11:51:49 +0800", "msg_from": "Junwang Zhao <zhjwpku@gmail.com>", "msg_from_op": false, "msg_subject": "Re: attndims, typndims still not enforced, but make the value within\n a sane threshold" }, { "msg_contents": "On Fri, Sep 20, 2024 at 11:51:49AM +0800, Junwang Zhao wrote:\n> Should you also bump the catalog version?\n\nNo need to worry about that when sending a patch because committers\ntake care of that when merging a patch into the tree. Doing that in\neach patch submitted just creates more conflicts and work for patch\nauthors because they'd need to recolve conflicts each time a\ncatversion bump happens. And that can happen on a daily basis\nsometimes depending on what is committed.\n--\nMichael", "msg_date": "Fri, 20 Sep 2024 13:30:46 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: attndims, typndims still not enforced, but make the value within\n a sane threshold" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Fri, Sep 20, 2024 at 11:51:49AM +0800, Junwang Zhao wrote:\n>> Should you also bump the catalog version?\n\n> No need to worry about that when sending a patch because committers\n> take care of that when merging a patch into the tree. Doing that in\n> each patch submitted just creates more conflicts and work for patch\n> authors because they'd need to recolve conflicts each time a\n> catversion bump happens. 
And that can happen on a daily basis\n> sometimes depending on what is committed.\n\nRight. Sometimes the committer forgets to do that :-(, which is\nnot great but it's not normally a big problem either. We've concluded\nit's better to err in that direction than impose additional work\non patch submitters.\n\nIf you feel concerned about the point, best practice is to include a\nmention that catversion bump is needed in your draft commit message.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 20 Sep 2024 00:38:16 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: attndims, typndims still not enforced,\n but make the value within a sane threshold" }, { "msg_contents": "Hi Tom and Michael,\n\nOn Fri, Sep 20, 2024 at 12:38 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Michael Paquier <michael@paquier.xyz> writes:\n> > On Fri, Sep 20, 2024 at 11:51:49AM +0800, Junwang Zhao wrote:\n> >> Should you also bump the catalog version?\n>\n> > No need to worry about that when sending a patch because committers\n> > take care of that when merging a patch into the tree. Doing that in\n> > each patch submitted just creates more conflicts and work for patch\n> > authors because they'd need to recolve conflicts each time a\n> > catversion bump happens. And that can happen on a daily basis\n> > sometimes depending on what is committed.\n>\n> Right. Sometimes the committer forgets to do that :-(, which is\n> not great but it's not normally a big problem either. We've concluded\n> it's better to err in that direction than impose additional work\n> on patch submitters.\n>\n> If you feel concerned about the point, best practice is to include a\n> mention that catversion bump is needed in your draft commit message.\n>\n> regards, tom lane\n\nGot it, thanks for both of your explanations.\n\n-- \nRegards\nJunwang Zhao\n\n\n", "msg_date": "Fri, 20 Sep 2024 15:14:20 +0800", "msg_from": "Junwang Zhao <zhjwpku@gmail.com>", "msg_from_op": false, "msg_subject": "Re: attndims, typndims still not enforced, but make the value within\n a sane threshold" }, { "msg_contents": "On Fri, Sep 20, 2024 at 10:11:00AM +0800, jian he wrote:\n> On Wed, Sep 18, 2024 at 10:35 PM jian he <jian.universality@gmail.com> wrote:\n> >\n> > > The last time this was discussed, I think the conclusion was\n> > > we should remove attndims and typndims entirely on the grounds\n> > > that they're useless. 
I certainly don't see a point in adding\n> > > more logic that could give the misleading impression that they\n> > > mean something.\n> > >\n> \n> \n> attached patch removes attndims and typndims entirely.\n> some tests skipped in my local my machine, not skipped are all OK.\n\nI have been hoping for a patch links this because I feel the existence\nof these system columns is deceptive since we don't honor them properly.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n When a patient asks the doctor, \"Am I going to die?\", he means \n \"Am I going to die soon?\"\n\n\n", "msg_date": "Fri, 20 Sep 2024 09:27:47 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: attndims, typndims still not enforced, but make the value within\n a sane threshold" }, { "msg_contents": "\nOn 2024-09-20 Fr 12:38 AM, Tom Lane wrote:\n> Michael Paquier <michael@paquier.xyz> writes:\n>> On Fri, Sep 20, 2024 at 11:51:49AM +0800, Junwang Zhao wrote:\n>>> Should you also bump the catalog version?\n>> No need to worry about that when sending a patch because committers\n>> take care of that when merging a patch into the tree. Doing that in\n>> each patch submitted just creates more conflicts and work for patch\n>> authors because they'd need to recolve conflicts each time a\n>> catversion bump happens. And that can happen on a daily basis\n>> sometimes depending on what is committed.\n> Right. Sometimes the committer forgets to do that :-(, which is\n> not great but it's not normally a big problem either. We've concluded\n> it's better to err in that direction than impose additional work\n> on patch submitters.\n\n\nFWIW, I have a git pre-commit hook that helps avoid that. Essentially it \nchecks to see if there are changes in src/include/catalog but not in \ncatversion.h. That's not a 100% check, but it probably catches the vast \nmajority of changes that would require a catversion bump.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Mon, 23 Sep 2024 15:30:42 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: attndims, typndims still not enforced, but make the value within\n a sane threshold" } ]
[ { "msg_contents": "Hi,\nOne database app developer complied that He constantly has a problem \nwith row estimations in joins over groupings and provided the demo example:\n\nCREATE TABLE line (id int PRIMARY KEY, docId int, amount numeric);\nCREATE INDEX line_doc ON line (docid);\nINSERT INTO line (id, docId, amount)\n (SELECT docId*100 + id AS id, docId, random() AS amount\n FROM generate_series(1, 10) AS id,\n generate_series(1, 25000) AS docid);\nINSERT INTO line (id, docId, amount)\n (SELECT docId*100 + id AS id, docId, random() AS amount\n FROM generate_series(1, 20) AS id,\n generate_series(25001, 50000) AS docid);\nINSERT INTO line (id, docId, amount)\n (SELECT docId*100 + id AS id, docId, random() AS amount\n FROM generate_series(1, 50) AS id,\n generate_series(50001, 75000) AS docid);\nINSERT INTO line (id, docId, amount)\n (SELECT docId*100 + id AS id, docId, random() AS amount\n FROM generate_series(1, 100) AS id,\n generate_series(75001, 100000) AS docid);\nCREATE TABLE tmp (id int PRIMARY KEY);\nINSERT INTO tmp (id) SELECT * FROM generate_series(1, 50);\nANALYZE line, tmp;\n\nEXPLAIN\nSELECT tmp.id, sq.amount FROM tmp\n LEFT JOIN\n (SELECT docid, SUM(amount) AS amount FROM line\n JOIN tmp ON tmp.id = docid GROUP BY 1) sq\n ON sq.docid = tmp.id;\n\nwith this query we have bad estimation of the top JOIN:\n\n Hash Right Join (rows=855)\n Hash Cond: (line.docid = tmp.id)\n -> GroupAggregate (cost=3.49..117.25 rows=3420 width=36)\n Group Key: line.docid\n -> Merge Join (cost=3.49..57.40 rows=3420 width=15)\n Merge Cond: (line.docid = tmp_1.id)\n\t\t...\n\nThis wrong prediction makes things much worse if the query has more \nupper query blocks.\nHis question was: Why not consider the grouping column unique in the \nupper query block? It could improve estimations.\nAfter a thorough investigation, I discovered that in commit 4767bc8ff2 \nmost of the work was already done for DISTINCT clauses. So, why not do \nthe same for grouping? A sketch of the patch is attached.\nAs I see it, grouping in this sense works quite similarly to DISTINCT, \nand we have no reason to ignore it. After applying the patch, you can \nsee that prediction has been improved:\n\nHash Right Join (cost=5.62..162.56 rows=50 width=36)\n\n-- \nregards, Andrei Lepikhov", "msg_date": "Thu, 19 Sep 2024 09:55:19 +0200", "msg_from": "Andrei Lepikhov <lepihov@gmail.com>", "msg_from_op": true, "msg_subject": "Improve statistics estimation considering GROUP-BY as a 'uniqueiser'" }, { "msg_contents": "On 19/9/2024 09:55, Andrei Lepikhov wrote:\n> This wrong prediction makes things much worse if the query has more \n> upper query blocks.\n> His question was: Why not consider the grouping column unique in the \n> upper query block? It could improve estimations.\n> After a thorough investigation, I discovered that in commit  4767bc8ff2 \n> most of the work was already done for DISTINCT clauses. So, why not do \n> the same for grouping? A sketch of the patch is attached.\n> As I see it, grouping in this sense works quite similarly to DISTINCT, \n> and we have no reason to ignore it. After applying the patch, you can \n> see that prediction has been improved:\n> \n> Hash Right Join  (cost=5.62..162.56 rows=50 width=36)\n> \nA regression test is added into new version.\nThe code looks tiny, simple and non-invasive - it will be easy to commit \nor reject. 
So I have added it to the next commitfest.\n\n-- \nregards, Andrei Lepikhov", "msg_date": "Tue, 24 Sep 2024 07:08:09 +0200", "msg_from": "Andrei Lepikhov <lepihov@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Improve statistics estimation considering GROUP-BY as a\n 'uniqueiser'" } ]
[ { "msg_contents": "Hello!\nCurrently PostgreSQL built on 64-bit Windows has 2Gb limit for\nGUC variables due to sizeof(long)==4 used by Windows compilers.\nTechnically 64-bit addressing for maintenance_work_mem is possible,\nbut code base historically uses variables and constants of type \"long\",\nwhen process maintenance_work_mem value.\nModern vector indexes like pgvector or pgvectorscale require as much\nas possible maintenance_work_mem, 2 Gb limit  dramatically decrease\nbuild index performace making impossible to build indexes for large\ndatasets.\n​​​ The proposed patch fixes all appearences of \"long\" variables and constants\nthat can affect maintenance_work_mem (hash index, vacuum, planner only\naffected, gin, gist, brin, bloom, btree indexes process value\ncorrectly).\nConstant MAX_SIZE_T_KILOBYTES added as upper limit for GUC variables\nthat depend on size_t only (currently only maintenance_work_mem).\nOther GUC variables could use this constant after fixing \"long\" type\ndependence.\nThis patch tested on\na) Windows 10 64-bit AMD64, compiled by msvc-19.37.32822\nb) linux gcc (Debian 12.2.0-14) AMD64\nAll tests are passed.\n\nBest regards\n\nVladlen Popolitov\npostgrespro.com", "msg_date": "Thu, 19 Sep 2024 16:55:42 +0300", "msg_from": "\n =?utf-8?q?=D0=9F=D0=BE=D0=BF=D0=BE=D0=BB=D0=B8=D1=82=D0=BE=D0=B2_=D0=92=D0=BB=D0=B0=D0=B4=D0=BB=D0=B5=D0=BD?=\n <v.popolitov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Increase of =?utf-8?q?maintenance=5Fwork=5Fmem?= limit in 64-bit\n Windows" }, { "msg_contents": "On Fri, 20 Sept 2024 at 01:55, Пополитов Владлен\n<v.popolitov@postgrespro.ru> wrote:\n> Currently PostgreSQL built on 64-bit Windows has 2Gb limit for\n> GUC variables due to sizeof(long)==4 used by Windows compilers.\n> Technically 64-bit addressing for maintenance_work_mem is possible,\n> but code base historically uses variables and constants of type \"long\",\n> when process maintenance_work_mem value.\n\nI agree. Ideally, we shouldn't use longs for anything ever. We should\nlikely adopt trying to remove the usages of them when possible.\n\nI'd like to suggest you go about this patch slightly differently with\nthe end goal of removing the limitation from maintenance_work_mem,\nwork_mem, autovacuum_work_mem and logical_decoding_work_mem.\n\nPatch 0001: Add a macro named something like WORK_MEM_KB_TO_BYTES()\nand adjust all places where we do <work_mem_var> * 1024L to use this\nnew macro. Make the macro do the * 1024L as is done today so that this\npatch is a simple refactor.\nPatch 0002: Convert all places that use long and use Size instead.\nAdjust WORK_MEM_KB_TO_BYTES to use a Size type rather than 1024L.\n\nIt might be wise to break 0002 down into individual GUCs as the patch\nmight become large.\n\nI suspect we might have quite a large number of subtle bugs in our\ncode today due to using longs. 
7340d9362 is an example of one that was\nfixed recently.\n\nDavid\n\n\n", "msg_date": "Mon, 23 Sep 2024 13:28:47 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Increase of maintenance_work_mem limit in 64-bit Windows" }, { "msg_contents": "David Rowley писал(а) 2024-09-23 04:28:\n> On Fri, 20 Sept 2024 at 01:55, Пополитов Владлен\n> <v.popolitov@postgrespro.ru> wrote:\n>> Currently PostgreSQL built on 64-bit Windows has 2Gb limit for\n>> GUC variables due to sizeof(long)==4 used by Windows compilers.\n>> Technically 64-bit addressing for maintenance_work_mem is possible,\n>> but code base historically uses variables and constants of type \n>> \"long\",\n>> when process maintenance_work_mem value.\n> \n> I agree. Ideally, we shouldn't use longs for anything ever. We should\n> likely adopt trying to remove the usages of them when possible.\n> \n> I'd like to suggest you go about this patch slightly differently with\n> the end goal of removing the limitation from maintenance_work_mem,\n> work_mem, autovacuum_work_mem and logical_decoding_work_mem.\n> \n> Patch 0001: Add a macro named something like WORK_MEM_KB_TO_BYTES()\n> and adjust all places where we do <work_mem_var> * 1024L to use this\n> new macro. Make the macro do the * 1024L as is done today so that this\n> patch is a simple refactor.\n> Patch 0002: Convert all places that use long and use Size instead.\n> Adjust WORK_MEM_KB_TO_BYTES to use a Size type rather than 1024L.\n> \n> It might be wise to break 0002 down into individual GUCs as the patch\n> might become large.\n> \n> I suspect we might have quite a large number of subtle bugs in our\n> code today due to using longs. 7340d9362 is an example of one that was\n> fixed recently.\n> \n> David\n\nHi David,\nThank you for proposal, I looked at the patch and source code from this\npoint of view. In this approach we need to change all <work_mem_var>.\nI counted the appearences of these vars in the code:\nmaintenance_work_mem appears 63 times in 20 files\nwork_mem appears 113 times in 48 files\nlogical_decoding_work_mem appears 10 times in 2 files\nmax_stack_depth appears 11 times in 3 files\nwal_keep_size_mb appears 5 times in 3 files\nmin_wal_size_mb appears 5 times in 2 files\nmax_wal_size_mb appears 10 times in 2 files\nwal_skip_threshold appears 5 times in 2 files\nmax_slot_wal_keep_size_mb appears 6 times in 3 files\nwal_sender_timeout appears 23 times in 3 files\nautovacuum_work_mem appears 11 times in 4 files\ngin_pending_list_limit appears 8 times in 5 files\npendingListCleanupSize appears 2 times in 2 files\nGinGetPendingListCleanupSize appears 2 times in 2 files\n\nmaintenance_work_mem appears 63 times and had only 4 cases, where \"long\"\nis used (I fix it in patch). I also found, that this patch also fixed\nautovacuum_work_mem , that has only 1 case - the same place in code as\nmaintenance_work_mem.\n\nNow <work_mem_vars> in the code are processed based on the context: they \nare\nassigned to Size, uint64, int64, double, long, int variables (last 2 \ncases\nneed to fix) or multiplied by (uint64)1024, (Size)1024, 1024L (last case\nneeds to fix). Also signed value is used for max_stack_depth (-1 used as\nerror value). I am not sure, that we can solve all this cases by one\nmacro WORK_MEM_KB_TO_BYTES(). 
The code needs case by case check.\n\nIf I check the rest of the variables, the patch does not need\nMAX_SIZE_T_KILOBYTES constant (I introduced it for variables, that are\nalready checked and fixed), it will contain only fixes in the types of\nthe variables and the constants.\nIt requires a lot of time to check all appearances and neighbour\ncode, but final patch will not be large, I do not expect a lot of\n\"long\" in the rest of the code (only 4 case out of 63 needed to fix\nfor maintenance_work_mem).\nWhat do you think about this approach?\n\n-- \nBest regards,\n\nVladlen Popolitov.\n\n\n", "msg_date": "Mon, 23 Sep 2024 12:01:31 +0300", "msg_from": "Vladlen Popolitov <v.popolitov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Increase of maintenance_work_mem limit in 64-bit Windows" }, { "msg_contents": "On Mon, 23 Sept 2024 at 21:01, Vladlen Popolitov\n<v.popolitov@postgrespro.ru> wrote:\n> Thank you for proposal, I looked at the patch and source code from this\n> point of view. In this approach we need to change all <work_mem_var>.\n> I counted the appearences of these vars in the code:\n> maintenance_work_mem appears 63 times in 20 files\n> work_mem appears 113 times in 48 files\n> logical_decoding_work_mem appears 10 times in 2 files\n> max_stack_depth appears 11 times in 3 files\n> wal_keep_size_mb appears 5 times in 3 files\n> min_wal_size_mb appears 5 times in 2 files\n> max_wal_size_mb appears 10 times in 2 files\n> wal_skip_threshold appears 5 times in 2 files\n> max_slot_wal_keep_size_mb appears 6 times in 3 files\n> wal_sender_timeout appears 23 times in 3 files\n> autovacuum_work_mem appears 11 times in 4 files\n> gin_pending_list_limit appears 8 times in 5 files\n> pendingListCleanupSize appears 2 times in 2 files\n> GinGetPendingListCleanupSize appears 2 times in 2 files\n\nWhy do you think all of these appearances matter? I imagined all you\ncare about are when the values are multiplied by 1024.\n\n> If I check the rest of the variables, the patch does not need\n> MAX_SIZE_T_KILOBYTES constant (I introduced it for variables, that are\n> already checked and fixed), it will contain only fixes in the types of\n> the variables and the constants.\n> It requires a lot of time to check all appearances and neighbour\n> code, but final patch will not be large, I do not expect a lot of\n> \"long\" in the rest of the code (only 4 case out of 63 needed to fix\n> for maintenance_work_mem).\n> What do you think about this approach?\n\nI don't think you can do maintenance_work_mem without fixing work_mem\ntoo. I don't think the hacks you've put into RI_Initial_Check() to\nensure you don't try to set work_mem beyond its allowed range are very\ngood. It effectively means that maintenance_work_mem does not do what\nit's meant to for the initial validation of referential integrity\nchecks. If you're not planning on fixing work_mem too, would you just\npropose to leave those hacks in there forever?\n\nDavid\n\n\n", "msg_date": "Tue, 24 Sep 2024 00:35:45 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Increase of maintenance_work_mem limit in 64-bit Windows" }, { "msg_contents": "David Rowley писал(а) 2024-09-23 15:35:\n> On Mon, 23 Sept 2024 at 21:01, Vladlen Popolitov\n> <v.popolitov@postgrespro.ru> wrote:\n>> Thank you for proposal, I looked at the patch and source code from \n>> this\n>> point of view. 
In this approach we need to change all <work_mem_var>.\n>> I counted the appearences of these vars in the code:\n>> maintenance_work_mem appears 63 times in 20 files\n>> work_mem appears 113 times in 48 files\n>> logical_decoding_work_mem appears 10 times in 2 files\n>> max_stack_depth appears 11 times in 3 files\n>> wal_keep_size_mb appears 5 times in 3 files\n>> min_wal_size_mb appears 5 times in 2 files\n>> max_wal_size_mb appears 10 times in 2 files\n>> wal_skip_threshold appears 5 times in 2 files\n>> max_slot_wal_keep_size_mb appears 6 times in 3 files\n>> wal_sender_timeout appears 23 times in 3 files\n>> autovacuum_work_mem appears 11 times in 4 files\n>> gin_pending_list_limit appears 8 times in 5 files\n>> pendingListCleanupSize appears 2 times in 2 files\n>> GinGetPendingListCleanupSize appears 2 times in 2 files\n> \n> Why do you think all of these appearances matter? I imagined all you\n> care about are when the values are multiplied by 1024.\nCommon pattern in code - assign <work_mem_var> to local variable and \nsend\nlocal variable as parameter to function, then to nested function, and\nsomewhere deep multiply function parameter by 1024. It is why I needed \nto\ncheck all appearances, most of them are correct.\n>> If I check the rest of the variables, the patch does not need\n>> MAX_SIZE_T_KILOBYTES constant (I introduced it for variables, that are\n>> already checked and fixed), it will contain only fixes in the types of\n>> the variables and the constants.\n>> It requires a lot of time to check all appearances and neighbour\n>> code, but final patch will not be large, I do not expect a lot of\n>> \"long\" in the rest of the code (only 4 case out of 63 needed to fix\n>> for maintenance_work_mem).\n>> What do you think about this approach?\n> \n> I don't think you can do maintenance_work_mem without fixing work_mem\n> too. I don't think the hacks you've put into RI_Initial_Check() to\n> ensure you don't try to set work_mem beyond its allowed range are very\n> good. It effectively means that maintenance_work_mem does not do what\n> it's meant to for the initial validation of referential integrity\n> checks. If you're not planning on fixing work_mem too, would you just\n> propose to leave those hacks in there forever?\nI agree, it is better to fix all them together. I also do not like this\nhack, it will be removed from the patch, if I check and change\nall <work_mem_vars> at once.\nI think, it will take about 1 week to fix and test all changes. I will\nestimate the total volume of the changes and think, how to group them\nin the patch ( I hope, it will be only one patch)\n\n-- \nBest regards,\n\nVladlen Popolitov.\n\n\n", "msg_date": "Mon, 23 Sep 2024 17:47:52 +0300", "msg_from": "Vladlen Popolitov <v.popolitov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Increase of maintenance_work_mem limit in 64-bit Windows" }, { "msg_contents": "On Tue, 24 Sept 2024 at 02:47, Vladlen Popolitov\n<v.popolitov@postgrespro.ru> wrote:\n> I agree, it is better to fix all them together. I also do not like this\n> hack, it will be removed from the patch, if I check and change\n> all <work_mem_vars> at once.\n> I think, it will take about 1 week to fix and test all changes. 
I will\n> estimate the total volume of the changes and think, how to group them\n> in the patch ( I hope, it will be only one patch)\n\nThere's a few places that do this:\n\nSize maxBlockSize = ALLOCSET_DEFAULT_MAXSIZE;\n\n/* choose the maxBlockSize to be no larger than 1/16 of work_mem */\nwhile (16 * maxBlockSize > work_mem * 1024L)\n\nI think since maxBlockSize is a Size variable, that the above should\nprobably be:\n\nwhile (16 * maxBlockSize > (Size) work_mem * 1024)\n\nMaybe there can be a precursor patch to fix all those to get rid of\nthe 'L' and cast to the type we're comparing to or assigning to rather\nthan trying to keep the result of the multiplication as a long.\n\nDavid\n\n\n", "msg_date": "Tue, 24 Sep 2024 10:07:23 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Increase of maintenance_work_mem limit in 64-bit Windows" }, { "msg_contents": "David Rowley писал(а) 2024-09-24 01:07:\n> On Tue, 24 Sept 2024 at 02:47, Vladlen Popolitov\n> <v.popolitov@postgrespro.ru> wrote:\n>> I agree, it is better to fix all them together. I also do not like \n>> this\n>> hack, it will be removed from the patch, if I check and change\n>> all <work_mem_vars> at once.\n>> I think, it will take about 1 week to fix and test all changes. I will\n>> estimate the total volume of the changes and think, how to group them\n>> in the patch ( I hope, it will be only one patch)\n> \n> There's a few places that do this:\n> \n> Size maxBlockSize = ALLOCSET_DEFAULT_MAXSIZE;\n> \n> /* choose the maxBlockSize to be no larger than 1/16 of work_mem */\n> while (16 * maxBlockSize > work_mem * 1024L)\n> \n> I think since maxBlockSize is a Size variable, that the above should\n> probably be:\n> \n> while (16 * maxBlockSize > (Size) work_mem * 1024)\n> \n> Maybe there can be a precursor patch to fix all those to get rid of\n> the 'L' and cast to the type we're comparing to or assigning to rather\n> than trying to keep the result of the multiplication as a long.\nYes. It is what I mean, when I wrote about the context - in this case\nvariable is used in \"Size\" context and the cast to Size type should be\nused. It is why I need to check all places in code. I am going to do it\nduring this week.\n\n-- \nBest regards,\n\nVladlen Popolitov.\n\n\n", "msg_date": "Tue, 24 Sep 2024 10:19:13 +0300", "msg_from": "Vladlen Popolitov <v.popolitov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Increase of maintenance_work_mem limit in 64-bit Windows" }, { "msg_contents": "David Rowley писал(а) 2024-09-24 01:07:\n> On Tue, 24 Sept 2024 at 02:47, Vladlen Popolitov\n> <v.popolitov@postgrespro.ru> wrote:\n>> I agree, it is better to fix all them together. I also do not like \n>> this\n>> hack, it will be removed from the patch, if I check and change\n>> all <work_mem_vars> at once.\n>> I think, it will take about 1 week to fix and test all changes. 
I will\n>> estimate the total volume of the changes and think, how to group them\n>> in the patch ( I hope, it will be only one patch)\n> \n> There's a few places that do this:\n> \n> Size maxBlockSize = ALLOCSET_DEFAULT_MAXSIZE;\n> \n> /* choose the maxBlockSize to be no larger than 1/16 of work_mem */\n> while (16 * maxBlockSize > work_mem * 1024L)\n> \n> I think since maxBlockSize is a Size variable, that the above should\n> probably be:\n> \n> while (16 * maxBlockSize > (Size) work_mem * 1024)\n> \n> Maybe there can be a precursor patch to fix all those to get rid of\n> the 'L' and cast to the type we're comparing to or assigning to rather\n> than trying to keep the result of the multiplication as a long.\n\nHi\n\nI rechecked all <work_mem_vars>, that depend on MAX_KILOBYTES limit and \nfixed\nall casts that are affected by 4-bytes long type in Windows 64-bit. Now\nnext variables are limited by 2TB in all 64-bit systems:\nmaintenance_work_mem\nwork_mem\nlogical_decoding_work_mem\nmax_stack_depth\nautovacuum_work_mem\ngin_pending_list_limit\nwal_skip_threshold\nAlso wal_keep_size_mb, min_wal_size_mb, max_wal_size_mb,\nmax_slot_wal_keep_size_mb are not affected by \"long\" cast.", "msg_date": "Tue, 01 Oct 2024 00:30:40 +0300", "msg_from": "v.popolitov@postgrespro.ru", "msg_from_op": false, "msg_subject": "Re: Increase of maintenance_work_mem limit in 64-bit Windows" } ]
[ { "msg_contents": "In [1] I whined about how the parallel heap scan machinery should have\nnoticed that the same ParallelTableScanDesc was being used to give out\nblock numbers for two different relations. Looking closer, there\nare Asserts that mean to catch this type of error --- but they are\ncomparing relation OIDs, whereas what would have been needed to detect\nthe problem was to compare RelFileLocators.\n\nIt seems to me that a scan is fundamentally operating at the physical\nrelation level, and therefore these tests should check RelFileLocators\nnot OIDs. Hence I propose the attached. (For master only, of course;\nthis would be an ABI break in the back branches.) This passes\ncheck-world and is able to catch the problem exposed in the other\nthread.\n\nAnother possible view is that we should check both physical and\nlogical relation IDs, but that seems like overkill to me.\n\nThoughts?\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/2042942.1726781733%40sss.pgh.pa.us", "msg_date": "Thu, 19 Sep 2024 19:45:24 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Rethinking parallel-scan relation identity checks" } ]
[ { "msg_contents": "\nHi,\n\nstatic inline void\nFullTransactionIdAdvance(FullTransactionId *dest)\n{\n\tdest->value++;\n\n\t/* see FullTransactionIdAdvance() */\n\tif (FullTransactionIdPrecedes(*dest, FirstNormalFullTransactionId))\n\t\treturn;\n\n\twhile (XidFromFullTransactionId(*dest) < FirstNormalTransactionId)\n\t\tdest->value++;\n}\n\nI understand this functiona as: 'dest->value++' increases the epoch when\nnecessary and we don't want use the TransactionId which is smaller than\nFirstNormalTransactionId. But what is the point of the below code:\n\n/* see FullTransactionIdAdvance() */\nif (FullTransactionIdPrecedes(*dest, FirstNormalFullTransactionId))\n\treturn;\n\nIt looks to me it will be never true(I added a 'Assert(false);' above\nthe return, make check-world pass). and if it is true somehow, retruning\na XID which is smaller than FirstNormalTransactionId looks strange as\nwell. IIUC, should we remove it to save a prediction on each\nGetNewTransactionId call? \n\n-- \nBest Regards\nAndy Fan\n\n\n\n", "msg_date": "Fri, 20 Sep 2024 17:38:40 +0800", "msg_from": "Andy Fan <zhihuifan1213@163.com>", "msg_from_op": true, "msg_subject": "FullTransactionIdAdvance question" }, { "msg_contents": "Hi,\n\nOn 2024-09-20 17:38:40 +0800, Andy Fan wrote:\n> static inline void\n> FullTransactionIdAdvance(FullTransactionId *dest)\n> {\n> \tdest->value++;\n> \n> \t/* see FullTransactionIdAdvance() */\n> \tif (FullTransactionIdPrecedes(*dest, FirstNormalFullTransactionId))\n> \t\treturn;\n> \n> \twhile (XidFromFullTransactionId(*dest) < FirstNormalTransactionId)\n> \t\tdest->value++;\n> }\n> \n> I understand this functiona as: 'dest->value++' increases the epoch when\n> necessary and we don't want use the TransactionId which is smaller than\n> FirstNormalTransactionId. But what is the point of the below code:\n> \n> /* see FullTransactionIdAdvance() */\n> if (FullTransactionIdPrecedes(*dest, FirstNormalFullTransactionId))\n> \treturn;\n>\n> It looks to me it will be never true(I added a 'Assert(false);' above\n> the return, make check-world pass).\n\nHm. I think in the past we did have some code that could end up calling\nFullTransactionIdAdvance() on special xids for some reason, IIRC it was\nrelated to BootstrapTransactionId. Turning those into a normal xid doesn't\nseem quite right, I guess and could hide bugs.\n\nBut I'm not sure it'd not better to simply assert out in those cases.\n\n\n> and if it is true somehow, retruning a XID which is smaller than\n> FirstNormalTransactionId looks strange as well.\n\nWell, it'd be true if you passed it a special xid.\n\n\n> IIUC, should we remove it to save a prediction on each GetNewTransactionId\n> call?\n\nI could see adding an unlikely() to make sure the compiler orders the code to\nmake it statically predictable.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 20 Sep 2024 14:17:05 -0400", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: FullTransactionIdAdvance question" }, { "msg_contents": "Hi Andres:\n\n> On 2024-09-20 17:38:40 +0800, Andy Fan wrote:\n>> static inline void\n>> FullTransactionIdAdvance(FullTransactionId *dest)\n>> {\n..\n>> }\n>> \n>> I understand this functiona as: 'dest->value++' increases the epoch when\n>> necessary and we don't want use the TransactionId which is smaller than\n>> FirstNormalTransactionId. 
But what is the point of the below code:\n>> \n>> /* see FullTransactionIdAdvance() */\n>> if (FullTransactionIdPrecedes(*dest, FirstNormalFullTransactionId))\n>> \treturn;\n>>\n>> It looks to me it will never be true (I added an 'Assert(false);' above\n>> the return, and make check-world passes).\n>\n> Hm. I think in the past we did have some code that could end up calling\n> FullTransactionIdAdvance() on special xids for some reason, IIRC it was\n> related to BootstrapTransactionId. Turning those into a normal xid doesn't\n> seem quite right, I guess and could hide bugs.\n>\n> But I'm not sure it'd not better to simply assert out in those cases.\n\nPer my current understanding, special XIDs are special and are better\nused explicitly (vs. advancing special XID-1 to special XID-2)? Currently\nif the input is BootstrapTransactionId, then we would get\nFrozenTransactionId, and I still can't understand the reason for it. I\nchecked out the code at the commit where FullTransactionIdAdvance was\nintroduced, and then added the \"Assert(false)\"; make clean .. make\ncheck-world still passed.\n\n>> IIUC, should we remove it to save a prediction on each GetNewTransactionId\n>> call?\n>\n> I could see adding an unlikely() to make sure the compiler orders the code to\n> make it statically predictable.\n\nThanks. Attached is a version that adds the 'unlikely' while I'm still\nexploring the possibility of removing the code. I will defer to you after\nmy understanding is expressed. Actually I'm more interested in the knowledge\nabout the XIDs which I was not aware of. I think either the fix for the\n'prediction miss' or removing the code probably won't make any noticeable\nimprovement in this case. So thanks for checking this!\n\n-- \nBest Regards\nAndy Fan", "msg_date": "Mon, 23 Sep 2024 08:53:24 +0800", "msg_from": "Andy Fan <zhihuifan1213@163.com>", "msg_from_op": true, "msg_subject": "Re: FullTransactionIdAdvance question" } ]
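For concreteness, this is what the unlikely() variant discussed above looks like; it is the function as quoted in the thread with only the branch hint added, and may differ in detail from the patch that was actually attached:

```c
static inline void
FullTransactionIdAdvance(FullTransactionId *dest)
{
	dest->value++;

	/*
	 * Stay in the "special" XID range if we started there (for example, a
	 * caller advancing BootstrapTransactionId).  Normal XID allocation never
	 * takes this branch, hence the unlikely() hint, which keeps the common
	 * path statically predictable.
	 */
	if (unlikely(FullTransactionIdPrecedes(*dest, FirstNormalFullTransactionId)))
		return;

	while (XidFromFullTransactionId(*dest) < FirstNormalTransactionId)
		dest->value++;
}
```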
[ { "msg_contents": "Hello Hackers,\n(CC people involved in the earlier discussion)\n\nWhile considering the implementation of timestamp-based conflict\nresolution (last_update_wins) in logical replication (see [1]), there\nwas a feedback at [2] and the discussion on whether or not to manage\nclock-skew at database level. We tried to research the history of\nclock-skew related discussions in Postgres itself and summarized that\nat [3].\n\nWe also analyzed how other databases deal with it. Based on our\nresearch, the other classic RDBMS like Oracle and IBM, using similar\ntimestamp-based resolution methods, do not address clock-skew at the\ndatabase level. Instead, they recommend using external time\nsynchronization solutions, such as NTP.\n\n- Oracle while handling conflicts[2] assumes clocks are synchronized\nand relies on external tools like NTP for time synchronization between\nnodes[4].\n- IBM Informix, similarly, recommends using their network commands to\nensure clock synchronization across nodes[5].\n\nOther postgres dependent databases like EDB-BDR and YugabyteDB provide\nGUC parameters to manage clock-skew within the database:\n\n- EDB-BDR allows configuration of parameters like\nbdr.maximum_clock_skew and bdr.maximum_clock_skew_action to define\nacceptable skew and actions when it exceeds[6].\n- YugabyteDB offers a GUC max_clock_skew_usec setting, which causes\nthe node to crash if the clock-skew exceeds the specified value[7].\n\nThere are, of course, other approaches to managing clock-skew used by\ndistributed systems, such as NTP daemons, centralized logical clocks,\natomic clocks (as in Google Spanner), and time sync services like\nAWS[4].\n\nImplementing any of these time-sync services for CDR seems quite a bit\nof deviation and a big project in itself, which we are not sure is\nreally needed. At best, for users' aid, we should provide some GUCs\nbased implementation to handle clock-skew in logical replication. The\nidea is that users should be able to handle clock-skew outside of the\ndatabase. But in worst case scenarios, users can rely on these GUCs.\n\nWe have attempted to implement a patch which manages clock-skew in\nlogical replication. It works based on these new GUCs: (see [10] for\ndetailed discussion)\n\n- max_logical_rep_clock_skew: Defines the tolerable limit for clock-skew.\n- max_logical_rep_clock_skew_action: Configures the action when\nclock-skew exceeds the limit.\n- max_logical_rep_clock_skew_wait: Limits the maximum wait time if the\naction is configured as \"wait.\"\n\nThe proposed idea is implemented in attached patch v1. Thank you\nShveta for implementing it.\nThanks Kuroda-san for assisting in the research.\n\nThoughts? 
Looking forward to hearing others' opinions!\n\n[1]: https://www.postgresql.org/message-id/CAJpy0uD0-DpYVMtsxK5R%3DzszXauZBayQMAYET9sWr_w0CNWXxQ%40mail.gmail.com\n[2]: https://www.postgresql.org/message-id/CAFiTN-uTycjZWdp1kEpN9w7b7SQpoGL5zyg_qZzjpY_vr2%2BKsg%40mail.gmail.com\n[3]: https://www.postgresql.org/message-id/CAA4eK1Jn4r-y%2BbkW%3DJaKCbxEz%3DjawzQAS1Z4wAd8jT%2B1B0RL2w%40mail.gmail.com\n[4]: https://www.oracle.com/cn/a/tech/docs/technical-resources/wp-oracle-goldengate-activeactive-final2-1.pdf\n[5]: https://docs.oracle.com/en/operating-systems/oracle-linux/8/network/network-ConfiguringNetworkTime.html\n[6]: https://www.ibm.com/docs/en/informix-servers/14.10?topic=environment-time-synchronization\n[7]: https://www.enterprisedb.com/docs/pgd/latest/reference/pgd-settings/#bdrmaximum_clock_skew\n[8]: https://support.yugabyte.com/hc/en-us/articles/4403707404173-Too-big-clock-skew-leading-to-error-messages-or-tserver-crashes\n[9]: https://aws.amazon.com/about-aws/whats-new/2023/11/amazon-time-sync-service-microsecond-accurate-time/\n[10]: https://www.postgresql.org/message-id/CAJpy0uDCW%2BvrBoUZWrBWPjsM%3D9wwpwbpZuZa8Raj3VqeVYs3PQ%40mail.gmail.com\n\n--\nThanks,\nNisha", "msg_date": "Fri, 20 Sep 2024 16:12:53 +0530", "msg_from": "Nisha Moond <nisha.moond412@gmail.com>", "msg_from_op": true, "msg_subject": "Clock-skew management in logical replication" }, { "msg_contents": "Nisha Moond <nisha.moond412@gmail.com> writes:\n> While considering the implementation of timestamp-based conflict\n> resolution (last_update_wins) in logical replication (see [1]), there\n> was a feedback at [2] and the discussion on whether or not to manage\n> clock-skew at database level.\n\nFWIW, I cannot see why we would do anything beyond suggesting that\npeople run NTP. That's standard anyway on the vast majority of\nmachines these days. Why would we add complexity that we have\nto maintain (and document) in order to cater to somebody not doing\nthat?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 20 Sep 2024 10:21:34 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Clock-skew management in logical replication" }, { "msg_contents": "Nisha Moond <nisha.moond412@gmail.com> writes:\n> Thoughts? Looking forward to hearing others' opinions!\n\nHad a productive conversation with Amit Kaplia today about time skew\nin distributed systems, and wanted to share some thoughts.\nEssentially, we're grappling with the classic distributed snapshot\nproblem. In a multi-active environment, where multiple nodes can\nindependently process transactions, it becomes crucial to determine\nthe visibility of these transactions across the system. Time skew,\nwhere different machines have different timestamps make it a hard\nproblem. How can we ensure consistent transaction ordering and\nvisibility when time itself is unreliable?\n\nAs you mentioned, there are several ways to tackle the time skew\nproblem in distributed systems. These approaches generally fall into\nthree main categories:\n\n1. Centralized Timestamps (Timestamp Oracle)\n\nMechanism: A dedicated server acts as a single source of truth for\ntime, eliminating skew by providing timestamps to all nodes. Google\nPercolator and TiDB use this approach.\nConsistency level: Serializable\nPros: Simple to implement.\nCons: High latency for cross-geo transactions due to reliance on a\ncentral server. Can become a bottleneck.\n\n2. 
Atomic Clocks (True Time)\n\nMechanism: Utilizes highly accurate atomic clocks to provide a\nglobally consistent view of time, as seen in Google Spanner.\nConsistency level: External Serializable\nPros: Very high consistency level (externally consistent).\nCons: Requires specialized and expensive hardware. Adds some latency\nto transactions, though less than centralized timestamps.\n\n3. Hybrid Logical Clocks\n\nMechanism: CombinesNTP for rough time synchronization with logical\nclocks for finer-grained ordering. Yugabyte and CockroachDB employ\nthis strategy.\nConsistency level: Serializable\nPros: Avoids the need for specialized hardware.\nCons: Can introduce significant latency to transactions.\n\n4 Local Clocks\n\nMechanism: Just use logical clock\nConsistency level: Eventual Consistency\nPros: Simple implementation\nCons: The consistency level is very low\n\nOf the four implementations considered, only local clocks and the HLC\napproach offer a 'pure database' solution. Given PostgreSQL's\npractical use cases, I recommend starting with a local clock\nimplementation. However, recognizing the increasing prevalence of\ndistributed clock services, we should also implement a pluggable time\naccess method. This allows users to integrate with different time\nservices as needed.\n\nIn the mid-term, implementing the HLC approach would provide highly\nconsistent snapshot reads. This offers a significant advantage for\nmany use cases.\n\nLong-term, we should consider integrating with a distributed time\nservice like AWS Time Sync Service. This ensures high accuracy and\nscalability for demanding applications.\n\nThanks,\nShihao\n\n\n", "msg_date": "Sat, 21 Sep 2024 01:31:15 -0400", "msg_from": "shihao zhong <zhong950419@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Clock-skew management in logical replication" }, { "msg_contents": "On 9/21/24 01:31, shihao zhong wrote:\n> Nisha Moond <nisha.moond412@gmail.com> writes:\n>> Thoughts? Looking forward to hearing others' opinions!\n> \n> Had a productive conversation with Amit Kaplia today about time skew\n> in distributed systems, and wanted to share some thoughts.\n> Essentially, we're grappling with the classic distributed snapshot\n> problem. In a multi-active environment, where multiple nodes can\n> independently process transactions, it becomes crucial to determine\n> the visibility of these transactions across the system. Time skew,\n> where different machines have different timestamps make it a hard\n> problem. How can we ensure consistent transaction ordering and\n> visibility when time itself is unreliable?\n> \n> As you mentioned, there are several ways to tackle the time skew\n> problem in distributed systems. These approaches generally fall into\n> three main categories:\n> \n> 1. Centralized Timestamps (Timestamp Oracle)\n> 2. Atomic Clocks (True Time)\n> 3. Hybrid Logical Clocks\n> 4 Local Clocks\n\n> I recommend ...<snip>... implement a pluggable time access method. This\n> allows users to integrate with different time services as needed.\n\nHuge +1\n\n> In the mid-term, implementing the HLC approach would provide highly\n> consistent snapshot reads. This offers a significant advantage for\n> many use cases.\n\nagreed\n\n> Long-term, we should consider integrating with a distributed time\n> service like AWS Time Sync Service. 
This ensures high accuracy and\n> scalability for demanding applications.\n\nI think the pluggable access method should make this possible, no?\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Sun, 22 Sep 2024 09:54:11 -0400", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: Clock-skew management in logical replication" }, { "msg_contents": ">\n> > Long-term, we should consider integrating with a distributed time\n> > service like AWS Time Sync Service. This ensures high accuracy and\n> > scalability for demanding applications.\n>\n> > I think the pluggable access method should make > this possible, no?\n\n\n\n> I am sorry that I did not explain clearly in previous email. What do I\n> mean is the pluggable time access method should provide the mechanism to\n> use customized time service. But there is no out of box solution for\n> customer who want to use customized time service. I am suggesting we\n> provide some default implementation for popular used time service like AWS\n> time sync service. Maybe that should be done outside of the mainstream but\n> this is something provide better user experience\n>\n> --\n> Joe Conway\n> PostgreSQL Contributors Team\n> RDS Open Source Databases\n> Amazon Web Services: https://aws.amazon.com\n>\n\n\n\n> Long-term, we should consider integrating with a distributed time\n> service like AWS Time Sync Service. This ensures high accuracy and\n> scalability for demanding applications.\n> I think the pluggable access method should make > this possible, no?  I am sorry that I did not explain clearly in previous email. What do I mean is the pluggable time access method should provide the mechanism to use customized time service. But there is no out of box solution for customer who want to use customized time service. I am suggesting we provide some default implementation for popular used time service like AWS time sync service. Maybe that should be done outside of the mainstream but this is something provide better user experience \n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Sun, 22 Sep 2024 14:18:28 -0400", "msg_from": "shihao zhong <zhong950419@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Clock-skew management in logical replication" }, { "msg_contents": "On Sun, Sep 22, 2024 at 7:24 PM Joe Conway <mail@joeconway.com> wrote:\n>\n> On 9/21/24 01:31, shihao zhong wrote:\n> > Nisha Moond <nisha.moond412@gmail.com> writes:\n> >> Thoughts? Looking forward to hearing others' opinions!\n> >\n> > Had a productive conversation with Amit Kaplia today about time skew\n> > in distributed systems, and wanted to share some thoughts.\n> > Essentially, we're grappling with the classic distributed snapshot\n> > problem. In a multi-active environment, where multiple nodes can\n> > independently process transactions, it becomes crucial to determine\n> > the visibility of these transactions across the system. Time skew,\n> > where different machines have different timestamps make it a hard\n> > problem. How can we ensure consistent transaction ordering and\n> > visibility when time itself is unreliable?\n> >\n> > As you mentioned, there are several ways to tackle the time skew\n> > problem in distributed systems. These approaches generally fall into\n> > three main categories:\n> >\n> > 1. Centralized Timestamps (Timestamp Oracle)\n> > 2. 
Atomic Clocks (True Time)\n> > 3. Hybrid Logical Clocks\n> > 4 Local Clocks\n>\n> > I recommend ...<snip>... implement a pluggable time access method. This\n> > allows users to integrate with different time services as needed.\n>\n> Huge +1\n>\n\nThe one idea to provide user control over timestamps that are used for\n'latest_write_wins' strategy could be to let users specify the values\nin a special column in the table that will be used to resolve\nconflicts.\n\nCREATE TABLE foo(c1 int, c2 timestamp default conflict_fn, CHECK CONFLICTS(c2));\n\nNow, for column c2 user can provide its function which can provide\nvalue for each row that can be used to resolve conflict. If the\ntable_level conflict column is provided then that will be used to\nresolve conflicts, otherwise, the default commit timestamp provided by\ncommit_ts module will be used to resolve conflict.\n\nOn the apply-side, we will use a condition like:\nif ((source_new_column_value > replica_current_column_value) ||\noperation.type == \"delete\")\n apply_update();\n\nIn the above example case, source_new_column_value and\nreplica_current_column_value will be column c2 on publisher and\nsubscriber. Note, that in the above case, we allowed deletes to always\nwin as the delete operation doesn't update the column values. We can\nchoose a different strategy to apply deletes like comparing the\nexisting column values as well.\n\nNote that MYSQL [1] and Oracle's Timesten [2] provide a similar\nstrategy at the table level for conflict resolution to avoid reliance\non system clocks.\n\nThough this provides a way for users to control values required for\nconflict resolution, I prefer a simple approach at least for the first\nversion which is to document that users should ensure time\nsynchronization via NTP. Even Oracle mentions the same in their docs\n[3] (See from: \"It is critical to ensure that the clocks on all\ndatabases are identical to one another and it’s recommended that all\ndatabase servers are configured to maintain accurate time through a\ntime server using the network time protocol (NTP). Even in\nenvironments where databases span different time zones, all database\nclocks must be set to the same time zone or Coordinated Universal Time\n(UTC) must be used to maintain accurate time. 
Failure to maintain\naccurate and synchronized time across the databases in an\nactive-active replication environment will result in data integrity\nissues.\")\n\n[1] - https://dev.mysql.com/doc/refman/9.0/en/mysql-cluster-replication-schema.html#ndb-replication-ndb-replication\n[2] - https://docs.oracle.com/en/database/other-databases/timesten/22.1/replication/configuring-timestamp-comparison.html#GUID-C8B0580B-B577-435F-8726-4AF341A09806\n[3] - https://www.oracle.com/cn/a/tech/docs/technical-resources/wp-oracle-goldengate-activeactive-final2-1.pdf\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 23 Sep 2024 15:17:55 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Clock-skew management in logical replication" }, { "msg_contents": "On Fri, Sep 20, 2024 at 7:51 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Nisha Moond <nisha.moond412@gmail.com> writes:\n> > While considering the implementation of timestamp-based conflict\n> > resolution (last_update_wins) in logical replication (see [1]), there\n> > was a feedback at [2] and the discussion on whether or not to manage\n> > clock-skew at database level.\n>\n> FWIW, I cannot see why we would do anything beyond suggesting that\n> people run NTP. That's standard anyway on the vast majority of\n> machines these days. Why would we add complexity that we have\n> to maintain (and document) in order to cater to somebody not doing\n> that?\n>\n> regards, tom lane\n\nThank you for your response.\n\nI agree with suggesting users to run NTP and we can recommend it in\nthe docs rather than introducing additional complexities.\n\nIn my research on setting up NTP servers on Linux, I found that\nChrony[1] is a lightweight and efficient solution for time\nsynchronization across nodes. Another reliable option is the classic\nNTP daemon (ntpd)[2], which is also easy to configure and maintain.\nBoth Chrony and ntpd can be used to configure a local machine as an\nNTP server for localized time synchronization, or as clients syncing\nfrom public NTP servers such as 'ntp.ubuntu.com' (default ntp server\npool for Ubuntu systems) or 'time.google.com'(Google Public NTP).\nFor example, on Ubuntu, Chrony is straightforward to install and\nconfigure[3]. 
Comprehensive NTP(ntpd) configuration guides are\navailable for various Linux distributions, such as Ubuntu[4] and\nRedHat-Linux[5].\n\nFurther, I’m exploring options for implementing NTP on Windows systems.\n\n[1] https://chrony-project.org/index.html\n[2] https://www.ntp.org/documentation/4.2.8-series/\n[3] https://documentation.ubuntu.com/server/how-to/networking/serve-ntp-with-chrony/\n[4] https://askubuntu.com/questions/14558/how-do-i-setup-a-local-ntp-server\n[5] https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/system_administrators_guide/ch-configuring_ntp_using_ntpd#s1-Understanding_the_ntpd_Configuration_File\n\nThanks,\nNisha\n\n\n", "msg_date": "Mon, 23 Sep 2024 16:00:00 +0530", "msg_from": "Nisha Moond <nisha.moond412@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Clock-skew management in logical replication" }, { "msg_contents": "On Mon, Sep 23, 2024 at 4:00 PM Nisha Moond <nisha.moond412@gmail.com> wrote:\n>\n> On Fri, Sep 20, 2024 at 7:51 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > Nisha Moond <nisha.moond412@gmail.com> writes:\n> > > While considering the implementation of timestamp-based conflict\n> > > resolution (last_update_wins) in logical replication (see [1]), there\n> > > was a feedback at [2] and the discussion on whether or not to manage\n> > > clock-skew at database level.\n> >\n> > FWIW, I cannot see why we would do anything beyond suggesting that\n> > people run NTP. That's standard anyway on the vast majority of\n> > machines these days. Why would we add complexity that we have\n> > to maintain (and document) in order to cater to somebody not doing\n> > that?\n> >\n> > regards, tom lane\n>\n> Thank you for your response.\n>\n> I agree with suggesting users to run NTP and we can recommend it in\n> the docs rather than introducing additional complexities.\n>\n> In my research on setting up NTP servers on Linux, I found that\n> Chrony[1] is a lightweight and efficient solution for time\n> synchronization across nodes. Another reliable option is the classic\n> NTP daemon (ntpd)[2], which is also easy to configure and maintain.\n> Both Chrony and ntpd can be used to configure a local machine as an\n> NTP server for localized time synchronization, or as clients syncing\n> from public NTP servers such as 'ntp.ubuntu.com' (default ntp server\n> pool for Ubuntu systems) or 'time.google.com'(Google Public NTP).\n> For example, on Ubuntu, Chrony is straightforward to install and\n> configure[3]. Comprehensive NTP(ntpd) configuration guides are\n> available for various Linux distributions, such as Ubuntu[4] and\n> RedHat-Linux[5].\n>\n> Further, I’m exploring options for implementing NTP on Windows systems.\n>\n\nWindows platforms provide built-in time synchronization services. As a\nclient, they allow users to sync system time using internet or public\nNTP servers. This can be easily configured by selecting a public NTP\nserver directly in the Date and Time settings. More details can be\nfound at [1].\n\nAdditionally, Windows servers can be configured as NTP servers for\nlocalized time synchronization within a network, allowing other nodes\nto sync with them. 
Further instructions on configuring an NTP server\non Windows can be found at [2].\n\n[1] https://learn.microsoft.com/en-us/windows-server/networking/windows-time-service/how-the-windows-time-service-works\n[2] https://learn.microsoft.com/en-us/troubleshoot/windows-server/active-directory/configure-authoritative-time-server\n\nThanks,\nNisha\n\n\n", "msg_date": "Wed, 25 Sep 2024 08:20:19 +0530", "msg_from": "Nisha Moond <nisha.moond412@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Clock-skew management in logical replication" }, { "msg_contents": "On Fri, Sep 20, 2024 at 10:21:34AM -0400, Tom Lane wrote:\n> FWIW, I cannot see why we would do anything beyond suggesting that\n> people run NTP. That's standard anyway on the vast majority of\n> machines these days. Why would we add complexity that we have\n> to maintain (and document) in order to cater to somebody not doing\n> that?\n\nAgreed. I am on the same boat as you are here. I don't think that\nthe database should be in charge of taking like decisions based on a\nclock that may have gone crazy. Precise clocks are a difficult\nproblem, for sure, but this patch is just providing a workaround for a\nproblem that should not be linked to the backend engine by default and\nI agree that we will feel better if we neither maintain this stuff nor\nenter in this territory.\n\nMaking that more pluggable, though, has the merit to let out-of-core\nfolks do what they want, even if we may finish with an community\necosystem that has more solutions than the number of fingers on one\nhand. I've seen multiple ways of solving conflicts across multiple\nlogical nodes in the past years, some being clock-based, some not,\nwith more internal strictly-monotonic counting solution to solve any\nconflicts.\n--\nMichael", "msg_date": "Wed, 25 Sep 2024 12:21:07 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Clock-skew management in logical replication" }, { "msg_contents": "Dear hackers,\r\n\r\n> Though this provides a way for users to control values required for\r\n> conflict resolution, I prefer a simple approach at least for the first\r\n> version which is to document that users should ensure time\r\n> synchronization via NTP. Even Oracle mentions the same in their docs\r\n\r\nI researched some cloud services and found that the time-sync services on the\r\ncloud are integrated with the NTP or PTP direct connection. This means that there\r\nare no specific APIs to synchronize the machine clock. Based on that,\r\nI also agree with the simple approach (just document). I feel the synchronization\r\ncan be regarded as the low-layer task and can rely on the OS.\r\n\r\nThe below part shows the status of cloud vendors and Oracle.\r\n\r\n## AWS case\r\n\r\nAWS provides a \"Time Sync Service\" [1] that can be used via NTP. The source server\r\nis at 169.254.169.123; users can modify the configuration file to refer to it shown below.\r\n\r\n```\r\nserver 169.254.169.123 prefer iburst\r\n```\r\n\r\nOr users can even directly connect to the local and accurate hardware clock.\r\n\r\n## GCP case\r\n\r\nGCP compute engines must use an NTP server on the GCP cloud [2], located at\r\nmetadata.google.internal, or other public NTP servers. 
The configuration will\r\nlook like this:\r\n\r\n```\r\nserver metadata.google.internal iburst\r\n```\r\n\r\n## Oracle case\r\n\r\nOracle RAC requires that all participants are well synchronized by NTP.\r\nFormally, it had an automatic synchronization feature called \"Cluster Time\r\nSynchronization Service (CTSS).\" It is de-supported in Oracle Database 23ai [3].\r\n\r\n[1]: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/configure-ec2-ntp.html\r\n[2]: https://cloud.google.com/compute/docs/instances/configure-ntp\r\n[3]: https://docs.oracle.com/en/database/oracle/oracle-database/23/cwlin/server-configuration-checklist-for-oracle-grid-infrastructure.html\r\n\r\nBest regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n", "msg_date": "Wed, 25 Sep 2024 09:39:49 +0000", "msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Clock-skew management in logical replication" }, { "msg_contents": "On Wed, Sep 25, 2024 at 3:09 PM Hayato Kuroda (Fujitsu)\n<kuroda.hayato@fujitsu.com> wrote:\n>\n> > Though this provides a way for users to control values required for\n> > conflict resolution, I prefer a simple approach at least for the first\n> > version which is to document that users should ensure time\n> > synchronization via NTP. Even Oracle mentions the same in their docs\n>\n> I researched some cloud services and found that the time-sync services on the\n> cloud are integrated with the NTP or PTP direct connection. This means that there\n> are no specific APIs to synchronize the machine clock. Based on that,\n> I also agree with the simple approach (just document). I feel the synchronization\n> can be regarded as the low-layer task and can rely on the OS.\n>\n> The below part shows the status of cloud vendors and Oracle.\n>\n> ## AWS case\n>\n> AWS provides a \"Time Sync Service\" [1] that can be used via NTP. The source server\n> is at 169.254.169.123; users can modify the configuration file to refer to it shown below.\n>\n> ```\n> server 169.254.169.123 prefer iburst\n> ```\n>\n> Or users can even directly connect to the local and accurate hardware clock.\n>\n> ## GCP case\n>\n> GCP compute engines must use an NTP server on the GCP cloud [2], located at\n> metadata.google.internal, or other public NTP servers. The configuration will\n> look like this:\n>\n> ```\n> server metadata.google.internal iburst\n> ```\n>\n\nIf NTP already provides a way to configure other time-sync services as\nshown by you then I don't think we need to do more at this stage\nexcept to document it with the conflict resolution patch. In the\nfuture, we may want to provide an additional column in the table with\na special meaning that can help in conflict resolution.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 26 Sep 2024 14:26:21 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Clock-skew management in logical replication" } ]
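To make the GUC-based proposal from the start of this thread more concrete, here is a conceptual C sketch of how an apply worker might enforce such settings; this is not taken from the attached v1 patch, and the variable and function names below are assumptions for illustration only:

```c
#include "postgres.h"
#include "miscadmin.h"
#include "utils/timestamp.h"

/* Hypothetical GUCs matching the names proposed above (types illustrative). */
extern int	max_logical_rep_clock_skew;			/* seconds, -1 disables */
extern bool	max_logical_rep_clock_skew_wait_enabled;	/* action: wait vs. error */
extern int	max_logical_rep_clock_skew_wait;	/* maximum wait, in seconds */

/*
 * Sketch: called with the remote commit timestamp before applying a change
 * when timestamp-based (last_update_wins) conflict resolution is in use.
 */
static void
check_remote_clock_skew(TimestampTz remote_commit_ts)
{
	long		ahead_ms;

	if (max_logical_rep_clock_skew < 0)
		return;					/* clock-skew management disabled */

	/* How far is the remote timestamp ahead of the local clock? (0 if behind) */
	ahead_ms = TimestampDifferenceMilliseconds(GetCurrentTimestamp(),
											   remote_commit_ts);

	if (ahead_ms <= (long) max_logical_rep_clock_skew * 1000)
		return;					/* within the tolerable limit */

	if (!max_logical_rep_clock_skew_wait_enabled)
		ereport(ERROR,
				(errmsg("remote commit timestamp is ahead of the local clock by %ld ms",
						ahead_ms)));

	/* "wait" action: sleep out the excess skew, capped by the wait limit. */
	pg_usleep(Min(ahead_ms, (long) max_logical_rep_clock_skew_wait * 1000) * 1000L);
}
```

Either way, as the rest of the thread concludes, the primary recommendation remains keeping clocks synchronized externally (NTP, chrony, or the platform time services listed above); a check like this would only be a fallback.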
[ { "msg_contents": "Why PostgreSQL DOCs needs to show or compare the Oracle way of doing things\n?\n\nI understand that on page Porting from Oracle PL/SQL is ok to mention\nOracle, but there are other places where it's not needed. Or, if it's ok to\nmention, why not mention SQL Server or MySQL or any other ?\n\nBug Reporting Guidelines\nEspecially refrain from merely saying that “This is not what SQL\nsays/Oracle does.”\n\nLOCK\nthe PostgreSQL lock modes and the LOCK TABLE syntax are compatible with\nthose present in Oracle.\n\nSELECT\nApplications written for Oracle frequently use a workaround involving the\nautomatically generated rownum column, which is not available in\nPostgreSQL, to implement the effects of these clauses.\n\nROLLBACK TO SAVEPOINT\nThe SQL standard specifies that the key word SAVEPOINT is mandatory, but\nPostgreSQL and Oracle allow it to be omitted\n\nData Type Formatting Functions\nFM modifies only the next specification, while in Oracle FM affects all\nsubsequent specifications, and repeated FM modifiers toggle fill mode on\nand off.\n\nData Type Formatting Functions\nA sign formatted using SG, PL, or MI is not anchored to the number; for\nexample, to_char(-12, 'MI9999') produces '- 12' but to_char(-12, 'S9999')\nproduces ' -12'. (The Oracle implementation does not allow the use of MI\nbefore 9, but rather requires that 9 precede MI.)\n\nregards\nMarcos\n\nWhy PostgreSQL DOCs needs to show or compare the Oracle way of doing things ?I understand that on page Porting from Oracle PL/SQL is ok to mention Oracle, but there are other places where it's not needed. Or, if it's ok to mention, why not mention SQL Server or MySQL or any other ?Bug Reporting GuidelinesEspecially refrain from merely saying that “This is not what SQL says/Oracle does.”LOCKthe PostgreSQL lock modes and the LOCK TABLE syntax are compatible with those present in Oracle.SELECTApplications written for Oracle frequently use a workaround involving the automatically generated rownum column, which is not available in PostgreSQL, to implement the effects of these clauses.ROLLBACK TO SAVEPOINTThe SQL standard specifies that the key word SAVEPOINT is mandatory, but PostgreSQL and Oracle allow it to be omittedData Type Formatting Functions FM modifies only the next specification, while in Oracle FM affects all subsequent specifications, and repeated FM modifiers toggle fill mode on and off.Data Type Formatting Functions A sign formatted using SG, PL, or MI is not anchored to the number; for example, to_char(-12, 'MI9999') produces '-  12' but to_char(-12, 'S9999') produces '  -12'. (The Oracle implementation does not allow the use of MI before 9, but rather requires that 9 precede MI.)regardsMarcos", "msg_date": "Fri, 20 Sep 2024 09:36:51 -0300", "msg_from": "Marcos Pegoraro <marcos@f10.com.br>", "msg_from_op": true, "msg_subject": "Why mention to Oracle ?" }, { "msg_contents": "\n\nOn 9/20/24 14:36, Marcos Pegoraro wrote:\n> Why PostgreSQL DOCs needs to show or compare the Oracle way of doing\n> things ?\n> \n> I understand that on page Porting from Oracle PL/SQL is ok to mention\n> Oracle, but there are other places where it's not needed. Or, if it's ok\n> to mention, why not mention SQL Server or MySQL or any other ?\n> \n\nIt's not quite clear to me whether your suggestion is to not mention any\nother databases ever, or to always mention every existing one. 
;-)\n\nI didn't dig into all the places you mention, but I'd bet those places\nreference Oracle simply because it was the most common DB people either\nmigrated from or needed to support in their application next to PG, and\nthus were running into problems. The similarity of the interfaces and\nSQL dialects also likely played a role. It's less likely to run into\nsubtle behavior differences e.g. SQL Server when you have to rewrite\nT-SQL stuff from scratch anyway.\n\n\nregards\n\n-- \nTomas Vondra\n\n\n", "msg_date": "Fri, 20 Sep 2024 16:56:48 +0200", "msg_from": "Tomas Vondra <tomas@vondra.me>", "msg_from_op": false, "msg_subject": "Re: Why mention to Oracle ?" }, { "msg_contents": "Tomas Vondra <tomas@vondra.me> writes:\n> On 9/20/24 14:36, Marcos Pegoraro wrote:\n>> Why PostgreSQL DOCs needs to show or compare the Oracle way of doing\n>> things ?\n\n> I didn't dig into all the places you mention, but I'd bet those places\n> reference Oracle simply because it was the most common DB people either\n> migrated from or needed to support in their application next to PG, and\n> thus were running into problems. The similarity of the interfaces and\n> SQL dialects also likely played a role. It's less likely to run into\n> subtle behavior differences e.g. SQL Server when you have to rewrite\n> T-SQL stuff from scratch anyway.\n\nAs far as the mentions in \"Data Type Formatting Functions\" go, those\nare there because those functions are not in the SQL standard; we\nstole the API definitions for them from Oracle, lock stock and barrel.\n(Except for the discrepancies that are called out by referencing what\nOracle does differently.) A number of the other references probably\nhave similar origins.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 20 Sep 2024 11:53:49 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Why mention to Oracle ?" }, { "msg_contents": "On Fri, Sep 20, 2024 at 5:37 AM Marcos Pegoraro <marcos@f10.com.br> wrote:\n\n> Why PostgreSQL DOCs needs to show or compare the Oracle way of doing\n> things ?\n>\n> I understand that on page Porting from Oracle PL/SQL is ok to mention\n> Oracle, but there are other places where it's not needed. Or, if it's ok to\n> mention, why not mention SQL Server or MySQL or any other ?\n>\n\nIt would be a boon to the community if someone were to put together a\nweb/wiki page or mini-app that details this kind of information and, if\nconsidered accurate and relevant enough by the community, link to that more\nglobally while also remove the random and incomplete references of this\nnature from the main documentation. As it stands the info is at least\nrelevant, and its incompleteness doesn't cause enough grief, IMO, to\nwarrant its outright removal absent there existing an alternative.\n\n\n> Bug Reporting Guidelines\n> Especially refrain from merely saying that “This is not what SQL\n> says/Oracle does.”\n>\n\nI would agree that this admonishment be re-worded. I suggest:\n\nIf referencing some external authority, like the SQL Standard or another\nrelational database product, mention it, but also include the literal\noutput values.\n\nDavid J.\n\nOn Fri, Sep 20, 2024 at 5:37 AM Marcos Pegoraro <marcos@f10.com.br> wrote:Why PostgreSQL DOCs needs to show or compare the Oracle way of doing things ?I understand that on page Porting from Oracle PL/SQL is ok to mention Oracle, but there are other places where it's not needed. 
Or, if it's ok to mention, why not mention SQL Server or MySQL or any other ?It would be a boon to the community if someone were to put together a web/wiki page or mini-app that details this kind of information and, if considered accurate and relevant enough by the community, link to that more globally while also remove the random and incomplete references of this nature from the main documentation.  As it stands the info is at least relevant, and its incompleteness doesn't cause enough grief, IMO, to warrant its outright removal absent there existing an alternative.Bug Reporting GuidelinesEspecially refrain from merely saying that “This is not what SQL says/Oracle does.”I would agree that this admonishment be re-worded.  I suggest:If referencing some external authority, like the SQL Standard or another relational database product, mention it, but also include the literal output values.David J.", "msg_date": "Fri, 20 Sep 2024 09:18:13 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Why mention to Oracle ?" }, { "msg_contents": "Em sex., 20 de set. de 2024 às 12:53, Tom Lane <tgl@sss.pgh.pa.us> escreveu:\n\n> As far as the mentions in \"Data Type Formatting Functions\" go, those\n> are there because those functions are not in the SQL standard; we\n> stole the API definitions for them from Oracle, lock stock and barrel.\n> (Except for the discrepancies that are called out by referencing what\n> Oracle does differently.) A number of the other references probably\n> have similar origins.\n>\n\nAll the time we see somebody adding a new function to Postgres core that\nexists in Python or GO or MySQL, but none of them are mentioned on DOCS.\n\nI did never use Oracle but I'm almost sure on Oracle DOCS there are no\nmentions of Postgres, right ? Why do we need to mention it ?\n\nRegards\nMarcos\n\nEm sex., 20 de set. de 2024 às 12:53, Tom Lane <tgl@sss.pgh.pa.us> escreveu:As far as the mentions in \"Data Type Formatting Functions\" go, those\nare there because those functions are not in the SQL standard; we\nstole the API definitions for them from Oracle, lock stock and barrel.\n(Except for the discrepancies that are called out by referencing what\nOracle does differently.)  A number of the other references probably\nhave similar origins.All the time we see somebody adding a new function to Postgres core that exists in Python or GO or MySQL, but none of them are mentioned on DOCS.I did never use Oracle but I'm almost sure on Oracle DOCS there are no mentions of Postgres, right ? Why do we need to mention it ?RegardsMarcos", "msg_date": "Fri, 20 Sep 2024 14:31:47 -0300", "msg_from": "Marcos Pegoraro <marcos@f10.com.br>", "msg_from_op": true, "msg_subject": "Re: Why mention to Oracle ?" }, { "msg_contents": "Em sex., 20 de set. de 2024 às 11:56, Tomas Vondra <tomas@vondra.me>\nescreveu:\n\n> It's not quite clear to me whether your suggestion is to not mention any\n> other databases ever, or to always mention every existing one. ;-)\n>\n\nMy suggestion is: Postgres DOCs are written and have to be read by Postgres\nusers, just that. 
If you are Oracle user, search for a tutorial on how to\nmigrate to Postgres or find tools for it, but not in DOCs\nBecause if you write something for Oracle users, SQL Server users can claim\nwhy there is no \"Porting from T-SQL to PL/pgSQL\" ?\nAnd MySQL users can do the same, and so on.\n\nOracle simply because it was the most common DB people either\n> migrated from or needed to support in their application next to PG, and\n> thus were running into problems.\n>\n\nMaybe Oracle was the most common DB which migrated to Postgres, but I'm\nnot sure this is true for today.\n\nregards\nMarcos\n\nEm sex., 20 de set. de 2024 às 11:56, Tomas Vondra <tomas@vondra.me> escreveu:It's not quite clear to me whether your suggestion is to not mention any\nother databases ever, or to always mention every existing one. ;-)My suggestion is: Postgres DOCs are written and have to be read by Postgres users, just that. If you are Oracle user, search for a tutorial on how to migrate to Postgres or find tools for it, but not in DOCsBecause if you write something for Oracle users, SQL Server users can claim why there is no \"Porting from T-SQL to PL/pgSQL\" ?And MySQL users can do the same, and so on.Oracle simply because it was the most common DB people either\nmigrated from or needed to support in their application next to PG, and\nthus were running into problems. Maybe Oracle was the most common DB which migrated to Postgres, but I'm not sure this is true for today. regardsMarcos", "msg_date": "Fri, 20 Sep 2024 14:42:17 -0300", "msg_from": "Marcos Pegoraro <marcos@f10.com.br>", "msg_from_op": true, "msg_subject": "Re: Why mention to Oracle ?" }, { "msg_contents": "Em sex., 20 de set. de 2024 às 13:18, David G. Johnston <\ndavid.g.johnston@gmail.com> escreveu:\n\n> It would be a boon to the community if someone were to put together a\n> web/wiki page or mini-app that details this kind of information and, if\n> considered accurate and relevant enough by the community, link to that more\n> globally while also remove the random and incomplete references of this\n> nature from the main documentation. As it stands the info is at least\n> relevant, and its incompleteness doesn't cause enough grief, IMO, to\n> warrant its outright removal absent there existing an alternative.\n>\n\nOracle DOCs or MySQL DOCs or any other have these comparisons ? I don't\nthink so, so why does Postgres have to mention it ?\n\nAll these places, and others I didn't find, I think it's correct to say\nPostgres' way of doing that, not what is different from Oracle.\n\nregards\nMarcos\n\nEm sex., 20 de set. de 2024 às 13:18, David G. Johnston <david.g.johnston@gmail.com> escreveu:It would be a boon to the community if someone were to put together a web/wiki page or mini-app that details this kind of information and, if considered accurate and relevant enough by the community, link to that more globally while also remove the random and incomplete references of this nature from the main documentation.  As it stands the info is at least relevant, and its incompleteness doesn't cause enough grief, IMO, to warrant its outright removal absent there existing an alternative.Oracle DOCs or MySQL DOCs or any other have these comparisons ? I don't think so, so why does Postgres have to mention it ? 
All these places, and others I didn't find, I think it's correct to say Postgres' way of doing that, not what is different from Oracle.regardsMarcos", "msg_date": "Fri, 20 Sep 2024 14:48:51 -0300", "msg_from": "Marcos Pegoraro <marcos@f10.com.br>", "msg_from_op": true, "msg_subject": "Re: Why mention to Oracle ?" }, { "msg_contents": "\n\nOn 9/20/24 19:31, Marcos Pegoraro wrote:\n> Em sex., 20 de set. de 2024 às 12:53, Tom Lane <tgl@sss.pgh.pa.us\n> <mailto:tgl@sss.pgh.pa.us>> escreveu:\n> \n> As far as the mentions in \"Data Type Formatting Functions\" go, those\n> are there because those functions are not in the SQL standard; we\n> stole the API definitions for them from Oracle, lock stock and barrel.\n> (Except for the discrepancies that are called out by referencing what\n> Oracle does differently.)  A number of the other references probably\n> have similar origins.\n> \n> \n> All the time we see somebody adding a new function to Postgres core that\n> exists in Python or GO or MySQL, but none of them are mentioned on DOCS.\n> \n\nWhich Python/Go/MySQL functions we added to Postgres, for example?\n\nAFAIK we're now adding stuff that is either described by SQL standard,\nor stuff that's our own invention. Neither cases would benefit from\nexplaining how other products behave. That's very different from the\ninterfaces we copied from Oracle.\n\n> I did never use Oracle but I'm almost sure on Oracle DOCS there are no\n> mentions of Postgres, right ? Why do we need to mention it ?\n> \n\nI think Tom already explained that we copied a lot of this stuff from\nOracle, so it makes sense to explain in which cases the behavior\ndiffers. I don't see how removing this would help users, it'd very\nclearly make life harder for them.\n\nI'm no fan of Oracle corp myself, but I admit I don't quite understand\nwhy you're upset with the handful of places mentioning the product.\n\n\nregards\n\n-- \nTomas Vondra\n\n\n", "msg_date": "Fri, 20 Sep 2024 20:00:32 +0200", "msg_from": "Tomas Vondra <tomas@vondra.me>", "msg_from_op": false, "msg_subject": "Re: Why mention to Oracle ?" }, { "msg_contents": "On 9/20/24 19:48, Marcos Pegoraro wrote:\n> Em sex., 20 de set. de 2024 às 13:18, David G. Johnston\n> <david.g.johnston@gmail.com <mailto:david.g.johnston@gmail.com>> escreveu:\n> \n> It would be a boon to the community if someone were to put together\n> a web/wiki page or mini-app that details this kind of information\n> and, if considered accurate and relevant enough by the community,\n> link to that more globally while also remove the random and\n> incomplete references of this nature from the main documentation. \n> As it stands the info is at least relevant, and its incompleteness\n> doesn't cause enough grief, IMO, to warrant its outright removal\n> absent there existing an alternative.\n> \n> \n> Oracle DOCs or MySQL DOCs or any other have these comparisons ? I don't\n> think so, so why does Postgres have to mention it ? \n> \n\nI fail to see why would \"entity X does not do A\" be a good reason to not\ndo A ourselves. Commercial companies may have their own reasons not to\nmention competing products, and few of those will likely apply to our\nproject. 
And maybe they're wrong to not do that, not us.\n\n> All these places, and others I didn't find, I think it's correct to say\n> Postgres' way of doing that, not what is different from Oracle.\n> \n\nIMHO it's quite reasonable to say \"we do X, but this other product\n(which is what we try to mimic) does Y\".\n\n\nregards\n\n-- \nTomas Vondra\n\n\n", "msg_date": "Fri, 20 Sep 2024 20:11:22 +0200", "msg_from": "Tomas Vondra <tomas@vondra.me>", "msg_from_op": false, "msg_subject": "Re: Why mention to Oracle ?" }, { "msg_contents": "Em sex., 20 de set. de 2024 às 15:11, Tomas Vondra <tomas@vondra.me>\nescreveu:\n\n> IMHO it's quite reasonable to say \"we do X, but this other product\n> (which is what we try to mimic) does Y\".\n>\n\nOk, for Data Type Formatting Functions is fine, if they were really copied\nfrom, but the others ...\nBug Reporting Guidelines, LOCK, SELECT, ROLLBACK TO SAVEPOINT and CURSORS\n\nEm sex., 20 de set. de 2024 às 15:11, Tomas Vondra <tomas@vondra.me> escreveu:IMHO it's quite reasonable to say \"we do X, but this other product\n(which is what we try to mimic) does Y\".Ok, for Data Type Formatting Functions is fine, if they were really copied from, but the others ...Bug Reporting Guidelines, LOCK, SELECT, ROLLBACK TO SAVEPOINT and CURSORS", "msg_date": "Fri, 20 Sep 2024 17:26:49 -0300", "msg_from": "Marcos Pegoraro <marcos@f10.com.br>", "msg_from_op": true, "msg_subject": "Re: Why mention to Oracle ?" }, { "msg_contents": "On Fri, Sep 20, 2024 at 4:27 PM Marcos Pegoraro <marcos@f10.com.br> wrote:\n\n> Em sex., 20 de set. de 2024 às 15:11, Tomas Vondra <tomas@vondra.me>\n> escreveu:\n>\n>> IMHO it's quite reasonable to say \"we do X, but this other product\n>> (which is what we try to mimic) does Y\".\n>>\n>\n> Ok, for Data Type Formatting Functions is fine, if they were really copied\n> from, but the others ...\n> Bug Reporting Guidelines, LOCK, SELECT, ROLLBACK TO SAVEPOINT and CURSORS\n>\n\nSeems to me this has already been answered well multiple times by multiple\npeople; I’m not sure why this is such an issue, or one that warrants\ncontinued discussion.\n\nBy your own admission, you wouldn’t see the value, where others who came\nfrom Oracle would. Additionally, your assumption is incorrect: many Oracle\ndatabases are migrated to Postgres, more-so today than when much of that\nwas written.\n\nYou’re arguing against it being in the docs and talking about how much\nbetter it would be in other more focused content, which does have some\nmerit. But, at the same time, you’re neither qualified nor volunteering to\nwrite it. As such, getting rid of it here serves what purpose other than\nomitting useful information to those it would benefit directly in the\ndocumentation?\n\nOn Fri, Sep 20, 2024 at 4:27 PM Marcos Pegoraro <marcos@f10.com.br> wrote:Em sex., 20 de set. de 2024 às 15:11, Tomas Vondra <tomas@vondra.me> escreveu:IMHO it's quite reasonable to say \"we do X, but this other product\n(which is what we try to mimic) does Y\".Ok, for Data Type Formatting Functions is fine, if they were really copied from, but the others ...Bug Reporting Guidelines, LOCK, SELECT, ROLLBACK TO SAVEPOINT and CURSORSSeems to me this has already been answered well multiple times by multiple people; I’m not sure why this is such an issue, or one that warrants continued discussion.By your own admission, you wouldn’t see the value, where others who came from Oracle would. 
Additionally, your assumption is incorrect: many Oracle databases are migrated to Postgres, more-so today than when much of that was written.You’re arguing against it being in the docs and talking about how much better it would be in other more focused content, which does have some merit. But, at the same time, you’re neither qualified nor volunteering to write it. As such, getting rid of it here serves what purpose other than omitting useful information to those it would benefit directly in the documentation?", "msg_date": "Fri, 20 Sep 2024 17:34:06 -0400", "msg_from": "\"Jonah H. Harris\" <jonah.harris@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Why mention to Oracle ?" }, { "msg_contents": "On Fri, Sep 20, 2024 at 11:43 AM Marcos Pegoraro <marcos@f10.com.br> wrote:\n\n>\n> My suggestion is: Postgres DOCs are written and have to be read by\n> Postgres users, just that. If you are Oracle user, search for a tutorial on\n> how to migrate to Postgres or find tools for it, but not in DOCs\n>\n\nAs Tomas, Tom and others pointed out, it's simply because it is a common\ndatabase people\nmigrate from and ask for help, and people contributed patches to the\ndocumentation out of their\nown need, or to help others.\n\n(Several) years ago I wrote a since-deprecated section of the docs to port\nfrom PL/SQL to\nPL/pgSQL because it was needed back then.\n\nBecause if you write something for Oracle users, SQL Server users can claim\n> why there is no \"Porting from T-SQL to PL/pgSQL\" ?\n> And MySQL users can do the same, and so on.\n>\n\nAnd those users are welcome to contribute patches to the docs explaining\nwhy they think\nthose additions to our docs would be helpful.\n\n\n> Maybe Oracle was the most common DB which migrated to Postgres, but I'm\n> not sure this is true for today.\n>\n\nI don't know about you, but in my experience that is absolutely not true. I\ndeal with lots of people\nand companies migrating from Oracle, or whose staff have experience with\nOracle and need\nhelp adapting that knowledge to Postgres.\n\nRoberto\n\nOn Fri, Sep 20, 2024 at 11:43 AM Marcos Pegoraro <marcos@f10.com.br> wrote:My suggestion is: Postgres DOCs are written and have to be read by Postgres users, just that. If you are Oracle user, search for a tutorial on how to migrate to Postgres or find tools for it, but not in DOCsAs Tomas, Tom and others pointed out, it's simply because it is a common database peoplemigrate from and ask for help, and people contributed patches to the documentation out of theirown need, or to help others. (Several) years ago I wrote a since-deprecated section of the docs to port from PL/SQL toPL/pgSQL because it was needed back then. Because if you write something for Oracle users, SQL Server users can claim why there is no \"Porting from T-SQL to PL/pgSQL\" ?And MySQL users can do the same, and so on.And those users are welcome to contribute patches to the docs explaining why they thinkthose additions to our docs would be helpful. Maybe Oracle was the most common DB which migrated to Postgres, but I'm not sure this is true for today. I don't know about you, but in my experience that is absolutely not true. I deal with lots of peopleand companies migrating from Oracle, or whose staff have experience with Oracle and needhelp adapting that knowledge to Postgres. Roberto", "msg_date": "Fri, 20 Sep 2024 16:48:57 -0600", "msg_from": "Roberto Mello <roberto.mello@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Why mention to Oracle ?" }, { "msg_contents": "Em sex., 20 de set. 
de 2024 às 18:34, Jonah H. Harris <\njonah.harris@gmail.com> escreveu:\n\n> Seems to me this has already been answered well multiple times by multiple\n> people; I’m not sure why this is such an issue, or one that warrants\n> continued discussion.\n>\n\nNo, I do not want to continue a discussion about a closed issue, I just\nwant to improve DOCs, or get it cleaner, just that.\n\nhttps://www.postgresql.org/docs/current/sql-lock.html\nPostgreSQL lock modes and the LOCK TABLE syntax are compatible with those\npresent in Oracle.\n\nhttps://www.postgresql.org/docs/current/sql-rollback-to.html\nThe SQL standard specifies that the key word SAVEPOINT is mandatory, but\nPostgreSQL and Oracle allow it to be omitted.\n\nhttps://www.postgresql.org/docs/current/sql-select.html\nApplications written for Oracle frequently use a workaround involving the\nautomatically generated rownum column, which is not available in\nPostgreSQL, to implement the effects of these clauses.\n\nhttps://www.postgresql.org/docs/current/plpgsql-cursors.html\nFOR can be replaced by IS for Oracle compatibility.\n\nSo, except for Data Type Formatting, because Postgres mimics Oracle version,\nand converting to PL/pgSQL, these other cases, and I don't know if other\nexists, the DOC says something that is specific to a database, so users\nwhich come from any other database could ask why not their database\ncompatibility is shown too. So I think all these cases could be removed.\n\nEm sex., 20 de set. de 2024 às 18:34, Jonah H. Harris <\njonah.harris@gmail.com> escreveu:\n\n> By your own admission, you wouldn’t see the value, where others who came\n> from Oracle would. Additionally, your assumption is incorrect: many Oracle\n> databases are migrated to Postgres, more-so today than when much of that\n> was written.\n>\n\nI didn't say no more people are migrating from Oracle, I just say that\nmaybe migrations are now coming from other databases, like SQL Server,\nMySQL, DB2, Firebird, Mongo and many others. So why do you document only\nfor those which come from Oracle ?\nNew Postgres users are today 90% coming from Oracle or 10%, I think we\ncannot have this number exactly. And if nobody knows, why mention any of\nthem ?\n\nThanks for your time and I repeat, I just want to get Postgres DOCs better,\njust that.\n\nregards\nMarcos\n\nEm sex., 20 de set. de 2024 às 18:34, Jonah H. 
Harris <jonah.harris@gmail.com> escreveu:Seems to me this has already been answered well multiple times by multiple people; I’m not sure why this is such an issue, or one that warrants continued discussion.No, I do not want to continue a discussion about a closed issue, I just want to improve DOCs, or get it cleaner, just that.https://www.postgresql.org/docs/current/sql-lock.htmlPostgreSQL lock modes and the LOCK TABLE syntax are compatible with those present in Oracle.https://www.postgresql.org/docs/current/sql-rollback-to.htmlThe SQL standard specifies that the key word SAVEPOINT is mandatory, but PostgreSQL and Oracle allow it to be omitted.https://www.postgresql.org/docs/current/sql-select.htmlApplications written for Oracle frequently use a workaround involving the automatically generated rownum column, which is not available in PostgreSQL, to implement the effects of these clauses.https://www.postgresql.org/docs/current/plpgsql-cursors.htmlFOR can be replaced by IS for Oracle compatibility.So, except for Data Type Formatting, because Postgres mimics Oracle version, and converting to PL/pgSQL, these other cases, and I don't know if other exists, the DOC says something that is specific to a database, so users which come from any other database could ask why not their database compatibility is shown too. So I think all these cases could be removed.Em sex., 20 de set. de 2024 às 18:34, Jonah H. Harris <jonah.harris@gmail.com> escreveu:By your own admission, you wouldn’t see the value, where others who came from Oracle would. Additionally, your assumption is incorrect: many Oracle databases are migrated to Postgres, more-so today than when much of that was written.I didn't say no more people are migrating from Oracle, I just say that maybe migrations are now coming from other databases, like SQL Server, MySQL, DB2, Firebird, Mongo and many others. So why do you document only for those which come from Oracle ?New Postgres users are today 90% coming from Oracle or 10%, I think we cannot have this number exactly. And if nobody knows, why mention any of them ?Thanks for your time and I repeat, I just want to get Postgres DOCs better, just that.regardsMarcos", "msg_date": "Sat, 21 Sep 2024 13:50:37 -0300", "msg_from": "Marcos Pegoraro <marcos@f10.com.br>", "msg_from_op": true, "msg_subject": "Re: Why mention to Oracle ?" }, { "msg_contents": "On Sat, Sep 21, 2024 at 01:50:37PM -0300, Marcos Pegoraro wrote:\n> I didn't say no more people are migrating from Oracle, I just say that maybe\n> migrations are now coming from other databases, like SQL Server, MySQL, DB2,\n> Firebird, Mongo and many others. So why do you document only for those which\n> come from Oracle ?\n> New Postgres users are today 90% coming from Oracle or 10%, I think we cannot\n> have this number exactly. And if nobody knows, why mention any of them ?\n> \n> Thanks for your time and I repeat, I just want to get Postgres DOCs better,\n> just that.\n\nI suggest you explain what changes would make the docs better (meaing\nmore useful).\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n When a patient asks the doctor, \"Am I going to die?\", he means \n \"Am I going to die soon?\"\n\n\n", "msg_date": "Sat, 21 Sep 2024 17:42:18 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Why mention to Oracle ?" }, { "msg_contents": "Em sáb., 21 de set. 
de 2024 às 18:42, Bruce Momjian <bruce@momjian.us>\nescreveu:\n\n> I suggest you explain what changes would make the docs better (meaing\n> more useful).\n>\n\nWell, I think I already did this in this discussion.\nTom said that some functions were copied from Oracle, so it is ok to\nmention them. I don't think so.\nTomas said that we can mention other vendors, even if others don't do the\nsame for us. I don't think so again.\nBut David answered that would be cool if we create a separate\npage/wiki/tool which compares, translates or anything like that to other\ndatabases.\nSo, if we have a \"Compatibility/Translation/Feature Comparison/ ... with\nother Databases\", it would be so cool.\nBut we don't have this kind of page, so why do we need to mention just one\nof them ?\nSearching on SGML there are 0 mentions to SQL Server and MySQL, but there\nare almost 50 mentions to Oracle.\nSo this is my point, if you don't do the same for others, why do it for\nOracle ?\n\nAnd again, I'm not saying that migrations from Oracle are not important,\nI'm saying migrations from Oracle have the same importance than from MySQL,\nSQL Server, Mongo, ...\n\nregards\nMarcos\n\nEm sáb., 21 de set. de 2024 às 18:42, Bruce Momjian <bruce@momjian.us> escreveu:I suggest you explain what changes would make the docs better (meaing\nmore useful).Well, I think I already did this in this discussion.Tom said that some functions were copied from Oracle, so it is ok to mention them. I don't think so.Tomas said that we can mention other vendors, even if others don't do the same for us. I don't think so again.\n\nBut David answered that would be cool if we create a separate page/wiki/tool which compares, translates or anything like that to other databases.So, if we have a \"Compatibility/Translation/Feature Comparison/ ... with other Databases\", it would be so cool.But we don't have this kind of page, so why do we need to mention just one of them ? Searching on SGML there are 0 mentions to SQL Server and MySQL, but there are almost 50 mentions to Oracle. So this is my point, if you don't do the same for others, why do it for Oracle ?And again, I'm not saying that migrations from Oracle are not important, I'm saying migrations from Oracle have the same importance than from MySQL, SQL Server, Mongo, ...regardsMarcos", "msg_date": "Sun, 22 Sep 2024 11:09:30 -0300", "msg_from": "Marcos Pegoraro <marcos@f10.com.br>", "msg_from_op": true, "msg_subject": "Re: Why mention to Oracle ?" }, { "msg_contents": "On Sun, Sep 22, 2024 at 8:10 AM Marcos Pegoraro <marcos@f10.com.br> wrote:\n\n> Em sáb., 21 de set. de 2024 às 18:42, Bruce Momjian <bruce@momjian.us>\n> escreveu:\n>\n>> I suggest you explain what changes would make the docs better (meaing\n>> more useful).\n>>\n>\n> So, if we have a \"Compatibility/Translation/Feature Comparison/ ... 
with\n> other Databases\", it would be so cool.\n> But we don't have this kind of page, so why do we need to mention just one\n> of them ?\n>\n\nBecause people contributed those.\n\n\n> Searching on SGML there are 0 mentions to SQL Server and MySQL, but there\n> are almost 50 mentions to Oracle.\n> So this is my point, if you don't do the same for others, why do it for\n> Oracle ?\n>\n\nBecause people contributed those.\n\n\n> And again, I'm not saying that migrations from Oracle are not important,\n> I'm saying migrations from Oracle have the same importance than from MySQL,\n> SQL Server, Mongo, ...\n>\n\nAgain, different people at different times felt it was important to\ncontribute patches to the documentation\nexplaining Oracle differences, or porting, etc. That's why those are in the\ncurrent docs.\n\nIf you're volunteering to add a MySQL, SQL Server, Mongo, etc porting to\nthe docs, I'm sure it could be a\nnice addition.\n\nRoberto\n\nOn Sun, Sep 22, 2024 at 8:10 AM Marcos Pegoraro <marcos@f10.com.br> wrote:Em sáb., 21 de set. de 2024 às 18:42, Bruce Momjian <bruce@momjian.us> escreveu:I suggest you explain what changes would make the docs better (meaing\nmore useful).So, if we have a \"Compatibility/Translation/Feature Comparison/ ... with other Databases\", it would be so cool.But we don't have this kind of page, so why do we need to mention just one of them ? Because people contributed those. Searching on SGML there are 0 mentions to SQL Server and MySQL, but there are almost 50 mentions to Oracle. So this is my point, if you don't do the same for others, why do it for Oracle ?Because people contributed those. And again, I'm not saying that migrations from Oracle are not important, I'm saying migrations from Oracle have the same importance than from MySQL, SQL Server, Mongo, ...Again, different people at different times felt it was important to contribute patches to the documentationexplaining Oracle differences, or porting, etc. That's why those are in the current docs.If you're volunteering to add a MySQL, SQL Server, Mongo, etc porting to the docs, I'm sure it could be anice addition.Roberto", "msg_date": "Sun, 22 Sep 2024 09:48:38 -0600", "msg_from": "Roberto Mello <roberto.mello@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Why mention to Oracle ?" }, { "msg_contents": "Em dom., 22 de set. de 2024 às 12:49, Roberto Mello <roberto.mello@gmail.com>\nescreveu:\n\n> If you're volunteering to add a MySQL, SQL Server, Mongo, etc porting to\n> the docs, I'm sure it could be a\n> nice addition.\n>\n\nAnd if we create a page like https://www.postgresql.org/about/featurematrix/\nBut instead of Postgres versions we have other vendors.\nEvery feature would have a Postgres way of doing and what differs from his\nold database.\nFeature PostgreSQL Oracle SQL Server MySQL Firebird\nSELECT N ROWS LIMIT 10 TOP 10 FIRST 10\nCONCAT STRINGS 'Name: ' || Name 'Name: ' + Name\nREBUILD INDEX REINDEX ALTER INDEX… REBUILD\nCURRENT DATE CURRENT_DATE GETDATE\nThis is just an example, for sure there would be several tables, for DMLs,\nfor DDL, for Maintenance ...\n\nWhat do you think ?\n\nregards\nMarcos\n\nEm dom., 22 de set. 
de 2024 às 12:49, Roberto Mello <roberto.mello@gmail.com> escreveu:If you're volunteering to add a MySQL, SQL Server, Mongo, etc porting to the docs, I'm sure it could be anice addition.And if we create a page like https://www.postgresql.org/about/featurematrix/But instead of Postgres versions we have other vendors.Every feature would have a Postgres way of doing and what differs from his old database.\n\nFeature        | PostgreSQL       | Oracle               | SQL Server      | MySQL | Firebird\nSELECT N ROWS  | LIMIT 10         |                      | TOP 10          |       | FIRST 10\nCONCAT STRINGS | 'Name: ' || Name |                      | 'Name: ' + Name |       |\nREBUILD INDEX  | REINDEX          | ALTER INDEX… REBUILD |                 |       |\nCURRENT DATE   | CURRENT_DATE     |                      | GETDATE         |       |\n\nThis is just an example, for sure there would be several tables, for DMLs, for DDL, for Maintenance ...What do you think ?regardsMarcos", "msg_date": "Tue, 24 Sep 2024 12:59:26 -0300", "msg_from": "Marcos Pegoraro <marcos@f10.com.br>", "msg_from_op": true, "msg_subject": "Re: Why mention to Oracle ?" }, { "msg_contents": "> And if we create a page like\n> https://www.postgresql.org/about/featurematrix/\n> But instead of Postgres versions we have other vendors.\n>\n\nThis sounds like something that would fit well on the Postgres wiki:\n\nhttps://wiki.postgresql.org/\n\nCheers,\nGreg\n\nAnd if we create a page like https://www.postgresql.org/about/featurematrix/But instead of Postgres versions we have other vendors.This sounds like something that would fit well on the Postgres wiki:https://wiki.postgresql.org/Cheers,Greg", "msg_date": "Wed, 25 Sep 2024 08:35:35 -0400", "msg_from": "Greg Sabino Mullane <htamfids@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Why mention to Oracle ?" } ]
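A minimal SQL sketch of the PostgreSQL-side syntax named in the comparison matrix above; the table customers and index customers_name_idx are hypothetical, added only so the statements run as written.

    -- Limit the number of returned rows (SQL Server: TOP 10, Firebird: FIRST 10).
    SELECT name FROM customers ORDER BY name LIMIT 10;

    -- Concatenate strings with the SQL-standard || operator (SQL Server uses +).
    SELECT 'Name: ' || name FROM customers;

    -- Rebuild an index (Oracle: ALTER INDEX ... REBUILD).
    REINDEX INDEX customers_name_idx;

    -- Current date (SQL Server: GETDATE()).
    SELECT CURRENT_DATE;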
[ { "msg_contents": "Hi,\n\nI would like to improve the following two points on the result outputs\nof pgbench related to faild transaction. The patch is attached.\n\n(1) Output per-script statistics even when there are no successful\ntransaction if there is any failed transactions due to serialization\nor deadlock errors.\n\nPreviously, per-script statistics were never output when any transactions\nare failed. However, it is reasonable to report per-script failed transactions\nif they are due to serialization or deadlock errors, since these kinds of\nfailures are now objects to be reported.\n \nThis is fixed by modifying the following condition to use \"total_cnt <= 0\".\n\n /* Remaining stats are nonsensical if we failed to execute any xacts */\n if (total->cnt + total->skipped <= 0)\n return;\n\n(2) Avoid to print \"NaN%\" in lines on failed transaction reports.\n \nIf the total number of successful, skipped, and failed transactions is zero,\nwe don't have to report the number of failed transactions as similar to that\nthe number of skipped transactions is not reported in this case.\n\nSo, I moved the check of total_cnt mentioned above before reporting the number\nof faild transactions. Also, I added a check of script_total_cnt before\nreporting per-script number of failed transactions.\n\nRegards,\nYugo Nagata\n\n-- \nYugo Nagata <nagata@sraoss.co.jp>", "msg_date": "Sat, 21 Sep 2024 00:35:44 +0900", "msg_from": "Yugo Nagata <nagata@sraoss.co.jp>", "msg_from_op": true, "msg_subject": "pgbench: Improve result outputs related to failed transactinos" }, { "msg_contents": "Hi,\n\n> Hi,\n> \n> I would like to improve the following two points on the result outputs\n> of pgbench related to faild transaction. The patch is attached.\n> \n> (1) Output per-script statistics even when there are no successful\n> transaction if there is any failed transactions due to serialization\n> or deadlock errors.\n> \n> Previously, per-script statistics were never output when any transactions\n> are failed. However, it is reasonable to report per-script failed transactions\n> if they are due to serialization or deadlock errors, since these kinds of\n> failures are now objects to be reported.\n> \n> This is fixed by modifying the following condition to use \"total_cnt <= 0\".\n> \n> /* Remaining stats are nonsensical if we failed to execute any xacts */\n> if (total->cnt + total->skipped <= 0)\n> return;\n> \n> (2) Avoid to print \"NaN%\" in lines on failed transaction reports.\n\nThis itself sounds good. However, in case (1) still \"NaN%\" are\nprinted. 
This looks inconsistent.\n\nt-ishii$ src/bin/pgbench/pgbench -p 11002 -c1 -t 1 -f c.sql -f d.sql --failures-detailed test\npgbench (18devel)\nstarting vacuum...end.\ntransaction type: multiple scripts\nscaling factor: 1\nquery mode: simple\nnumber of clients: 1\nnumber of threads: 1\nmaximum number of tries: 1\nnumber of transactions per client: 1\nnumber of transactions actually processed: 0/1\nnumber of failed transactions: 1 (100.000%)\nnumber of serialization failures: 1 (100.000%)\nnumber of deadlock failures: 0 (0.000%)\nlatency average = 7023.604 ms (including failures)\ninitial connection time = 4.964 ms\ntps = 0.000000 (without initial connection time)\nSQL script 1: c.sql\n - weight: 1 (targets 50.0% of total)\n - 0 transactions (NaN% of total, tps = 0.000000)\nSQL script 2: d.sql\n - weight: 1 (targets 50.0% of total)\n - 0 transactions (NaN% of total, tps = 0.000000)\n - number of failed transactions: 1 (100.000%)\n - number of serialization failures: 1 (100.000%)\n - number of deadlock failures: 0 (0.000%)\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS K.K.\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp\n\n\n", "msg_date": "Sun, 22 Sep 2024 17:59:34 +0900 (JST)", "msg_from": "Tatsuo Ishii <ishii@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: pgbench: Improve result outputs related to failed transactinos" }, { "msg_contents": "On Sun, 22 Sep 2024 17:59:34 +0900 (JST)\nTatsuo Ishii <ishii@postgresql.org> wrote:\n\n> > I would like to improve the following two points on the result outputs\n> > of pgbench related to faild transaction. The patch is attached.\n> > \n> > (1) Output per-script statistics even when there are no successful\n> > transaction if there is any failed transactions due to serialization\n> > or deadlock errors.\n> > \n> > Previously, per-script statistics were never output when any transactions\n> > are failed. However, it is reasonable to report per-script failed transactions\n> > if they are due to serialization or deadlock errors, since these kinds of\n> > failures are now objects to be reported.\n> > \n> > This is fixed by modifying the following condition to use \"total_cnt <= 0\".\n> > \n> > /* Remaining stats are nonsensical if we failed to execute any xacts */\n> > if (total->cnt + total->skipped <= 0)\n> > return;\n> > \n> > (2) Avoid to print \"NaN%\" in lines on failed transaction reports.\n> \n> This itself sounds good. However, in case (1) still \"NaN%\" are\n> printed. 
This looks inconsistent.\n> \n> t-ishii$ src/bin/pgbench/pgbench -p 11002 -c1 -t 1 -f c.sql -f d.sql --failures-detailed test\n> pgbench (18devel)\n> starting vacuum...end.\n> transaction type: multiple scripts\n> scaling factor: 1\n> query mode: simple\n> number of clients: 1\n> number of threads: 1\n> maximum number of tries: 1\n> number of transactions per client: 1\n> number of transactions actually processed: 0/1\n> number of failed transactions: 1 (100.000%)\n> number of serialization failures: 1 (100.000%)\n> number of deadlock failures: 0 (0.000%)\n> latency average = 7023.604 ms (including failures)\n> initial connection time = 4.964 ms\n> tps = 0.000000 (without initial connection time)\n> SQL script 1: c.sql\n> - weight: 1 (targets 50.0% of total)\n> - 0 transactions (NaN% of total, tps = 0.000000)\n> SQL script 2: d.sql\n> - weight: 1 (targets 50.0% of total)\n> - 0 transactions (NaN% of total, tps = 0.000000)\n> - number of failed transactions: 1 (100.000%)\n> - number of serialization failures: 1 (100.000%)\n> - number of deadlock failures: 0 (0.000%)\n\nI overlooked the \"NaN% of total\" in per-script results.\nI think this NaN also should be avoided.\n\nI fixed the number of transactions in per-script results to include\nskipped and failed transactions. It prevents to print \"total of NaN%\"\nwhen any transactions are not successfully processed. \n\nAlthough it breaks the back-compatibility, this seems reasonable\nmodification because not only succeeded transactions but also skips and\nfailures ones are now handled and reported for each script. Also, the\nnumber of transactions actually processed per-script and TPS based on\nit are now output explicitly in a separate line.\n\nHere is an example of the results.\n\n$ pgbench -f sleep.sql -f deadlock.sql --failures-detailed -t 2 -r -c 4 -j 4\npgbench (18devel)\nstarting vacuum...end.\ntransaction type: multiple scripts\nscaling factor: 1\nquery mode: simple\nnumber of clients: 4\nnumber of threads: 4\nmaximum number of tries: 1\nnumber of transactions per client: 2\nnumber of transactions actually processed: 5/8\nnumber of failed transactions: 3 (37.500%)\nnumber of serialization failures: 0 (0.000%)\nnumber of deadlock failures: 3 (37.500%)\nlatency average = 7532.531 ms (including failures)\ninitial connection time = 7.447 ms\ntps = 0.331894 (without initial connection time)\nSQL script 1: sleep.sql\n - weight: 1 (targets 50.0% of total)\n - 2 transactions (25.0% of total)\n - number of transactions actually pocessed: 2 (tps = 0.132758)\n - number of failed transactions: 0 (0.000%)\n - number of serialization failures: 0 (0.000%)\n - number of deadlock failures: 0 (0.000%)\n - latency average = 1002.506 ms\n - latency stddev = 0.320 ms\n - statement latencies in milliseconds and failures:\n 1002.506 0 select pg_sleep(1)\nSQL script 2: deadlock.sql\n - weight: 1 (targets 50.0% of total)\n - 6 transactions (75.0% of total)\n - number of transactions actually pocessed: 3 (tps = 0.199136)\n - number of failed transactions: 3 (50.000%)\n - number of serialization failures: 0 (0.000%)\n - number of deadlock failures: 3 (50.000%)\n - latency average = 9711.271 ms\n - latency stddev = 466.328 ms\n - statement latencies in milliseconds and failures:\n 0.426 0 begin;\n 5352.229 0 lock b;\n 2003.416 0 select pg_sleep(2);\n 0.829 3 lock a;\n 8.774 0 end;\n\nI've attached the updated patch.\n\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>", "msg_date": "Tue, 24 Sep 2024 15:05:50 +0900", "msg_from": "Yugo NAGATA 
<nagata@sraoss.co.jp>", "msg_from_op": false, "msg_subject": "Re: pgbench: Improve result outputs related to failed transactinos" }, { "msg_contents": "> I overlooked the \"NaN% of total\" in per-script results.\n> I think this NaN also should be avoided.\n> \n> I fixed the number of transactions in per-script results to include\n> skipped and failed transactions. It prevents to print \"total of NaN%\"\n> when any transactions are not successfully processed. \n\nThanks for the fix. Here is the new run with the v2 patch. The result\nlooks good to me.\n\nsrc/bin/pgbench/pgbench -p 11002 -c1 -t 1 -f c.sql -f d.sql --failures-detailed -r test\npgbench (18devel)\nstarting vacuum...end.\ntransaction type: multiple scripts\nscaling factor: 1\nquery mode: simple\nnumber of clients: 1\nnumber of threads: 1\nmaximum number of tries: 1\nnumber of transactions per client: 1\nnumber of transactions actually processed: 1/1\nnumber of failed transactions: 0 (0.000%)\nnumber of serialization failures: 0 (0.000%)\nnumber of deadlock failures: 0 (0.000%)\nlatency average = 2.434 ms\ninitial connection time = 2.117 ms\ntps = 410.846343 (without initial connection time)\nSQL script 1: c.sql\n - weight: 1 (targets 50.0% of total)\n - 1 transactions (100.0% of total)\n - number of transactions actually pocessed: 1 (tps = 410.846343)\n - number of failed transactions: 0 (0.000%)\n - number of serialization failures: 0 (0.000%)\n - number of deadlock failures: 0 (0.000%)\n - latency average = 2.419 ms\n - latency stddev = 0.000 ms\n - statement latencies in milliseconds and failures:\n 0.187 0 begin;\n 0.153 0 set transaction isolation level serializable;\n 0.977 0 insert into t1 select max(i)+1,2 from t1;\n 1.102 0 end;\nSQL script 2: d.sql\n - weight: 1 (targets 50.0% of total)\n - 0 transactions (0.0% of total)\n - statement latencies in milliseconds and failures:\n 0.000 0 begin;\n 0.000 0 set transaction isolation level serializable;\n 0.000 0 insert into t1 select max(i)+1,2 from t1;\n 0.000 0 end;\n\n> Although it breaks the back-compatibility, this seems reasonable\n> modification because not only succeeded transactions but also skips and\n> failures ones are now handled and reported for each script. Also, the\n> number of transactions actually processed per-script and TPS based on\n> it are now output explicitly in a separate line.\n\nOkay for me as long as the patch is pushed to master branch.\n\nA small comment on the comments in the patch: pgindent dislikes some\nof the comment indentation styles. See attached pgindent.txt. Although\nsuch a small defect would be fixed by committers when a patch gets\ncommitted anyway, you might want to help committers beforehand.\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS K.K.\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp\n\n*** /tmp/pgbench.c\t2024-09-24 18:42:20.632311240 +0900\n--- src/bin/pgbench/pgbench.c\t2024-09-24 18:42:51.824299286 +0900\n***************\n*** 6392,6399 ****\n \t}\n \n \t/*\n! \t * Remaining stats are nonsensical if we failed to execute any xacts\n! \t * due to others than serialization or deadlock errors\n \t */\n \tif (total_cnt <= 0)\n \t\treturn;\n--- 6392,6399 ----\n \t}\n \n \t/*\n! \t * Remaining stats are nonsensical if we failed to execute any xacts due\n! \t * to others than serialization or deadlock errors\n \t */\n \tif (total_cnt <= 0)\n \t\treturn;\n***************\n*** 6514,6520 ****\n \t\t\t\t\t\t\t\tscript_total_cnt));\n \t\t\t\t\t}\n \n! 
\t\t\t\t\t/* it can be non-zero only if max_tries is not equal to one */\n \t\t\t\t\tif (max_tries != 1)\n \t\t\t\t\t{\n \t\t\t\t\t\tprintf(\" - number of transactions retried: \" INT64_FORMAT \" (%.3f%%)\\n\",\n--- 6514,6523 ----\n \t\t\t\t\t\t\t\tscript_total_cnt));\n \t\t\t\t\t}\n \n! \t\t\t\t\t/*\n! \t\t\t\t\t * it can be non-zero only if max_tries is not equal to\n! \t\t\t\t\t * one\n! \t\t\t\t\t */\n \t\t\t\t\tif (max_tries != 1)\n \t\t\t\t\t{\n \t\t\t\t\t\tprintf(\" - number of transactions retried: \" INT64_FORMAT \" (%.3f%%)\\n\",", "msg_date": "Tue, 24 Sep 2024 19:00:04 +0900 (JST)", "msg_from": "Tatsuo Ishii <ishii@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: pgbench: Improve result outputs related to failed transactinos" }, { "msg_contents": "On Tue, 24 Sep 2024 19:00:04 +0900 (JST)\nTatsuo Ishii <ishii@postgresql.org> wrote:\n\n> > I overlooked the \"NaN% of total\" in per-script results.\n> > I think this NaN also should be avoided.\n> > \n> > I fixed the number of transactions in per-script results to include\n> > skipped and failed transactions. It prevents to print \"total of NaN%\"\n> > when any transactions are not successfully processed. \n> \n> Thanks for the fix. Here is the new run with the v2 patch. The result\n> looks good to me.\n> \n> src/bin/pgbench/pgbench -p 11002 -c1 -t 1 -f c.sql -f d.sql --failures-detailed -r test\n> pgbench (18devel)\n> starting vacuum...end.\n> transaction type: multiple scripts\n> scaling factor: 1\n> query mode: simple\n> number of clients: 1\n> number of threads: 1\n> maximum number of tries: 1\n> number of transactions per client: 1\n> number of transactions actually processed: 1/1\n> number of failed transactions: 0 (0.000%)\n> number of serialization failures: 0 (0.000%)\n> number of deadlock failures: 0 (0.000%)\n> latency average = 2.434 ms\n> initial connection time = 2.117 ms\n> tps = 410.846343 (without initial connection time)\n> SQL script 1: c.sql\n> - weight: 1 (targets 50.0% of total)\n> - 1 transactions (100.0% of total)\n> - number of transactions actually pocessed: 1 (tps = 410.846343)\n> - number of failed transactions: 0 (0.000%)\n> - number of serialization failures: 0 (0.000%)\n> - number of deadlock failures: 0 (0.000%)\n> - latency average = 2.419 ms\n> - latency stddev = 0.000 ms\n> - statement latencies in milliseconds and failures:\n> 0.187 0 begin;\n> 0.153 0 set transaction isolation level serializable;\n> 0.977 0 insert into t1 select max(i)+1,2 from t1;\n> 1.102 0 end;\n> SQL script 2: d.sql\n> - weight: 1 (targets 50.0% of total)\n> - 0 transactions (0.0% of total)\n> - statement latencies in milliseconds and failures:\n> 0.000 0 begin;\n> 0.000 0 set transaction isolation level serializable;\n> 0.000 0 insert into t1 select max(i)+1,2 from t1;\n> 0.000 0 end;\n> \n> > Although it breaks the back-compatibility, this seems reasonable\n> > modification because not only succeeded transactions but also skips and\n> > failures ones are now handled and reported for each script. Also, the\n> > number of transactions actually processed per-script and TPS based on\n> > it are now output explicitly in a separate line.\n> \n> Okay for me as long as the patch is pushed to master branch.\n> \n> A small comment on the comments in the patch: pgindent dislikes some\n> of the comment indentation styles. See attached pgindent.txt. 
Although\n> such a small defect would be fixed by committers when a patch gets\n> committed anyway, you might want to help committers beforehand.\n\nThank you for your comments.\nI've attached a updated patch that I applied pgindent.\n\nRegards,\nYugo Nagata\n\n\n> Best reagards,\n> --\n> Tatsuo Ishii\n> SRA OSS K.K.\n> English: http://www.sraoss.co.jp/index_en/\n> Japanese:http://www.sraoss.co.jp\n\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>", "msg_date": "Tue, 24 Sep 2024 19:55:04 +0900", "msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>", "msg_from_op": false, "msg_subject": "Re: pgbench: Improve result outputs related to failed transactinos" } ]
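A sketch of how the pgbench runs shown in this thread can be reproduced. The script bodies are taken from the per-statement latency listings above; the table definitions and the initial row are assumptions, since the thread does not show the setup.

    -- Assumed setup (not shown in the thread):
    CREATE TABLE t1 (i int, j int);
    INSERT INTO t1 VALUES (1, 1);
    CREATE TABLE a (x int);
    CREATE TABLE b (x int);

    -- c.sql and d.sql (both listings show the same body); concurrent runs can
    -- fail with serialization errors because each transaction reads max(i) and
    -- then inserts a row that would change it:
    BEGIN;
    SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
    INSERT INTO t1 SELECT max(i)+1, 2 FROM t1;
    END;

    -- sleep.sql:
    SELECT pg_sleep(1);

    -- deadlock.sql, as listed above; the reported failures occur at the final
    -- LOCK when sessions end up waiting on each other's locks:
    BEGIN;
    LOCK b;
    SELECT pg_sleep(2);
    LOCK a;
    END;

    -- Invocations as run in the thread:
    --   pgbench -p 11002 -c1 -t 1 -f c.sql -f d.sql --failures-detailed -r test
    --   pgbench -f sleep.sql -f deadlock.sql --failures-detailed -t 2 -r -c 4 -j 4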
[ { "msg_contents": "Hi,\n\nI noticed two headers are not in alphabetical order in pg_checkums.c,\npatch attached.\n\n\nMichael", "msg_date": "Fri, 20 Sep 2024 19:20:15 +0200", "msg_from": "Michael Banck <mbanck@gmx.net>", "msg_from_op": true, "msg_subject": "pg_checksums: Reorder headers in alphabetical order" }, { "msg_contents": "On Fri, Sep 20, 2024 at 07:20:15PM +0200, Michael Banck wrote:\n> I noticed two headers are not in alphabetical order in pg_checkums.c,\n> patch attached.\n\nThis appears to be commit 280e5f1's fault. Will fix.\n\n-- \nnathan\n\n\n", "msg_date": "Fri, 20 Sep 2024 13:56:16 -0500", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_checksums: Reorder headers in alphabetical order" }, { "msg_contents": "On Fri, Sep 20, 2024 at 01:56:16PM -0500, Nathan Bossart wrote:\n> On Fri, Sep 20, 2024 at 07:20:15PM +0200, Michael Banck wrote:\n>> I noticed two headers are not in alphabetical order in pg_checkums.c,\n>> patch attached.\n> \n> This appears to be commit 280e5f1's fault. Will fix.\n\nCommitted, thanks!\n\n-- \nnathan\n\n\n", "msg_date": "Fri, 20 Sep 2024 15:20:16 -0500", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_checksums: Reorder headers in alphabetical order" }, { "msg_contents": "On Fri, Sep 20, 2024 at 03:20:16PM -0500, Nathan Bossart wrote:\n> On Fri, Sep 20, 2024 at 01:56:16PM -0500, Nathan Bossart wrote:\n> > On Fri, Sep 20, 2024 at 07:20:15PM +0200, Michael Banck wrote:\n> >> I noticed two headers are not in alphabetical order in pg_checkums.c,\n> >> patch attached.\n> > \n> > This appears to be commit 280e5f1's fault. Will fix.\n\nOops, that was my fault then :)\n\n> Committed, thanks!\n\nThanks!\n\n\nMichael\n\n\n", "msg_date": "Fri, 20 Sep 2024 22:23:11 +0200", "msg_from": "Michael Banck <mbanck@gmx.net>", "msg_from_op": true, "msg_subject": "Re: pg_checksums: Reorder headers in alphabetical order" }, { "msg_contents": "\n\nOn 2024/09/21 5:20, Nathan Bossart wrote:\n> On Fri, Sep 20, 2024 at 01:56:16PM -0500, Nathan Bossart wrote:\n>> On Fri, Sep 20, 2024 at 07:20:15PM +0200, Michael Banck wrote:\n>>> I noticed two headers are not in alphabetical order in pg_checkums.c,\n>>> patch attached.\n>>\n>> This appears to be commit 280e5f1's fault. Will fix.\n> \n> Committed, thanks!\n\nI don’t have any objections to this commit, but I’d like to confirm\nwhether we really want to proactively reorder #include directives,\neven for standard C library headers. I’m asking because I know there are\nseveral source files, like xlog.c and syslogger.c, where such #include\ndirectives aren't in alphabetical order. I understand we usually reorder\n#include directives for PostgreSQL header files, though.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n\n", "msg_date": "Sat, 21 Sep 2024 11:55:53 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: pg_checksums: Reorder headers in alphabetical order" }, { "msg_contents": "Fujii Masao <masao.fujii@oss.nttdata.com> writes:\n> I don’t have any objections to this commit, but I’d like to confirm\n> whether we really want to proactively reorder #include directives,\n> even for standard C library headers.\n\nI'm hesitant to do that. We can afford to insist that our own header\nfiles be inclusion-order-independent, because we have the ability to\nfix any problems that might arise. 
We have no ability to do something\nabout it if the system headers on $random_platform have inclusion\norder dependencies. (In fact, I'm fairly sure there are already\nplaces in plperl and plpython where we know we have to be careful\nabout inclusion order around those languages' headers.)\n\nSo I would tread pretty carefully around making changes of this\ntype, especially in long-established code. I have no reason to\nthink that the committed patch will cause any problems, but\nI do think it's mostly asking for trouble.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 20 Sep 2024 23:09:17 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_checksums: Reorder headers in alphabetical order" }, { "msg_contents": "\n\nOn 2024/09/21 12:09, Tom Lane wrote:\n> Fujii Masao <masao.fujii@oss.nttdata.com> writes:\n>> I don’t have any objections to this commit, but I’d like to confirm\n>> whether we really want to proactively reorder #include directives,\n>> even for standard C library headers.\n> \n> I'm hesitant to do that. We can afford to insist that our own header\n> files be inclusion-order-independent, because we have the ability to\n> fix any problems that might arise. We have no ability to do something\n> about it if the system headers on $random_platform have inclusion\n> order dependencies. (In fact, I'm fairly sure there are already\n> places in plperl and plpython where we know we have to be careful\n> about inclusion order around those languages' headers.)\n> \n> So I would tread pretty carefully around making changes of this\n> type, especially in long-established code. I have no reason to\n> think that the committed patch will cause any problems, but\n> I do think it's mostly asking for trouble.\n\nSounds reasonable to me.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n\n", "msg_date": "Sat, 21 Sep 2024 14:48:32 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: pg_checksums: Reorder headers in alphabetical order" }, { "msg_contents": "On Sat, Sep 21, 2024 at 02:48:32PM +0900, Fujii Masao wrote:\n> On 2024/09/21 12:09, Tom Lane wrote:\n>> Fujii Masao <masao.fujii@oss.nttdata.com> writes:\n>> > I don�t have any objections to this commit, but I�d like to confirm\n>> > whether we really want to proactively reorder #include directives,\n>> > even for standard C library headers.\n>> \n>> I'm hesitant to do that. We can afford to insist that our own header\n>> files be inclusion-order-independent, because we have the ability to\n>> fix any problems that might arise. We have no ability to do something\n>> about it if the system headers on $random_platform have inclusion\n>> order dependencies. (In fact, I'm fairly sure there are already\n>> places in plperl and plpython where we know we have to be careful\n>> about inclusion order around those languages' headers.)\n>> \n>> So I would tread pretty carefully around making changes of this\n>> type, especially in long-established code. I have no reason to\n>> think that the committed patch will cause any problems, but\n>> I do think it's mostly asking for trouble.\n> \n> Sounds reasonable to me.\n\nOh, sorry. I thought it was project policy to keep these alphabetized, and\nI was unaware of the past problems with system header ordering. 
I'll keep\nthis in mind in the future.\n\n-- \nnathan\n\n\n", "msg_date": "Sat, 21 Sep 2024 08:15:48 -0500", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_checksums: Reorder headers in alphabetical order" } ]
[ { "msg_contents": "I’m in the process of trying to restore some PG15/16 backups in PG17.\n\nWhile playing with different -t and -n combinations I was browsing through the docs.\n\nIn pg_restore there are two notes about both -t / -n \n\n> When -n / -t is specified, pg_dump makes no attempt to ...\n\nIn pg_dump though there’s the equivalent note only for the -t option.\n\nShouldn’t it be a note as well for -n ? \n\nOtherwise I would expect -n to cascade the restore to objects in other schemas;\nWhich I don’t think it does.\n\nAm I missing something? \nI’m in the process of trying to restore some PG15/16 backups in PG17.While playing with different -t and -n combinations I was browsing through the docs.In pg_restore there are two notes about both -t / -n > When -n / -t is specified, pg_dump makes no attempt to ...In pg_dump though there’s the equivalent note only for the -t option.Shouldn’t it be a note as well for -n ? Otherwise I would expect -n to cascade the restore to objects in other schemas;Which I don’t think it does.Am I missing something?", "msg_date": "Sat, 21 Sep 2024 20:33:53 +0300", "msg_from": "Florents Tselai <florents.tselai@gmail.com>", "msg_from_op": true, "msg_subject": "Docs pg_restore: Shouldn't there be a note about -n ?" }, { "msg_contents": "On Sat, Sep 21, 2024 at 8:34 PM Florents Tselai <florents.tselai@gmail.com>\nwrote:\n\n> I’m in the process of trying to restore some PG15/16 backups in PG17.\n>\n> While playing with different -t and -n combinations I was browsing through\n> the docs.\n>\n> In *pg_restore* there are two notes about both -t / -n\n>\n> > When -n / -t is specified, pg_dump makes no attempt to ...\n>\n> In pg_dump though there’s the equivalent note only for the -t option.\n>\n> Shouldn’t it be a note as well for -n ?\n>\n> Otherwise I would expect -n to cascade the restore to objects in other\n> schemas;\n> Which I don’t think it does.\n>\n> Am I missing something?\n>\n\nAh, swapped them by mistake on the previous email:\n\nThey're both available in the pg_dump and note on -n missing in pg_restore.\n\nThe question remains though:\nShouldn’t there be a note about -n in pg_restore ?\n\nOn Sat, Sep 21, 2024 at 8:34 PM Florents Tselai <florents.tselai@gmail.com> wrote:I’m in the process of trying to restore some PG15/16 backups in PG17.While playing with different -t and -n combinations I was browsing through the docs.In pg_restore there are two notes about both -t / -n > When -n / -t is specified, pg_dump makes no attempt to ...In pg_dump though there’s the equivalent note only for the -t option.Shouldn’t it be a note as well for -n ? Otherwise I would expect -n to cascade the restore to objects in other schemas;Which I don’t think it does.Am I missing something? Ah,  swapped them by mistake on the previous email: They're both available in the pg_dump and note on -n missing in pg_restore. The question remains though: Shouldn’t there be a note about -n in pg_restore ?", "msg_date": "Sat, 21 Sep 2024 20:40:27 +0300", "msg_from": "Florents Tselai <florents.tselai@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Docs pg_restore: Shouldn't there be a note about -n ?" }, { "msg_contents": "Florents Tselai <florents.tselai@gmail.com> writes:\n> Ah, swapped them by mistake on the previous email:\n> They're both available in the pg_dump and note on -n missing in pg_restore.\n> The question remains though:\n> Shouldn’t there be a note about -n in pg_restore ?\n\nProbably. I see that pg_dump has a third copy of the exact same\nnote for \"-e\". 
pg_restore lacks that switch for some reason,\nbut this is surely looking mighty duplicative. I propose getting\nrid of the per-switch Notes and putting a para into the Notes\nsection, along the lines of\n\n When -e, -n, or -t is specified, pg_dump makes no attempt to dump\n any other database objects that the selected object(s) might\n depend upon. Therefore, there is no guarantee that the results of\n a selective dump can be successfully restored by themselves into a\n clean database.\n\nand mutatis mutandis for pg_restore.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 21 Sep 2024 14:22:49 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Docs pg_restore: Shouldn't there be a note about -n ?" }, { "msg_contents": "\n\n> On 21 Sep 2024, at 9:22 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Florents Tselai <florents.tselai@gmail.com> writes:\n>> Ah, swapped them by mistake on the previous email:\n>> They're both available in the pg_dump and note on -n missing in pg_restore.\n>> The question remains though:\n>> Shouldn’t there be a note about -n in pg_restore ?\n> \n> Probably. I see that pg_dump has a third copy of the exact same\n> note for \"-e\". pg_restore lacks that switch for some reason,\n> but this is surely looking mighty duplicative. I propose getting\n> rid of the per-switch Notes and putting a para into the Notes\n> section, along the lines of\n> \n> When -e, -n, or -t is specified, pg_dump makes no attempt to dump\n> any other database objects that the selected object(s) might\n> depend upon. Therefore, there is no guarantee that the results of\n> a selective dump can be successfully restored by themselves into a\n> clean database.\n\nAgree with that, but I think there should be a pointer like “see Notes” .\nOtherwise I’m pretty sure most would expect pg doing magic.\nCan’t remember I scrolledl to the bottom of a page “notes” after finding the option I want.\n\nI would also add an example of what “depend upon” means,\nTo underline that it’s really not that uncommon.\nSomething like: \n“If you pg_dump only with -t A and A has foreign key constraints to table B,\nThose constraints won’t succeed If B has not been already restored” \n\n\n\n> \n> and mutatis mutandis for pg_restore.\n> \n> \t\t\tregards, tom lane\n\n\n\n", "msg_date": "Sat, 21 Sep 2024 21:48:45 +0300", "msg_from": "Florents Tselai <florents.tselai@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Docs pg_restore: Shouldn't there be a note about -n ?" }, { "msg_contents": "On Sat, Sep 21, 2024 at 9:48 PM Florents Tselai <florents.tselai@gmail.com>\nwrote:\n\n>\n>\n> > On 21 Sep 2024, at 9:22 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > Florents Tselai <florents.tselai@gmail.com> writes:\n> >> Ah, swapped them by mistake on the previous email:\n> >> They're both available in the pg_dump and note on -n missing in\n> pg_restore.\n> >> The question remains though:\n> >> Shouldn’t there be a note about -n in pg_restore ?\n> >\n> > Probably. I see that pg_dump has a third copy of the exact same\n> > note for \"-e\". pg_restore lacks that switch for some reason,\n> > but this is surely looking mighty duplicative. I propose getting\n> > rid of the per-switch Notes and putting a para into the Notes\n> > section, along the lines of\n> >\n> > When -e, -n, or -t is specified, pg_dump makes no attempt to dump\n> > any other database objects that the selected object(s) might\n> > depend upon. 
Therefore, there is no guarantee that the results of\n> > a selective dump can be successfully restored by themselves into a\n> > clean database.\n>\n> Agree with that, but I think there should be a pointer like “see Notes” .\n> Otherwise I’m pretty sure most would expect pg doing magic.\n> Can’t remember I scrolledl to the bottom of a page “notes” after finding\n> the option I want.\n>\n> I would also add an example of what “depend upon” means,\n> To underline that it’s really not that uncommon.\n> Something like:\n> “If you pg_dump only with -t A and A has foreign key constraints to table\n> B,\n> Those constraints won’t succeed If B has not been already restored”\n>\n>\nAttached an idea.\nThe A/B example may be wordy / redundant,\nbut the -e note on the distinction between installing binaries / creating\nan extension I think it's important.", "msg_date": "Thu, 26 Sep 2024 01:28:16 +0300", "msg_from": "Florents Tselai <florents.tselai@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Docs pg_restore: Shouldn't there be a note about -n ?" } ]
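To make the dependency pitfall behind the proposed wording concrete, a small sketch; the table names and commands are an assumed scenario, not text proposed for the documentation.

    -- A schema with a cross-table dependency:
    CREATE TABLE b (id int PRIMARY KEY);
    CREATE TABLE a (id int PRIMARY KEY,
                    b_id int REFERENCES b (id));

    -- A selective dump such as
    --     pg_dump -t a mydb > a_only.sql
    -- contains table a and its foreign-key constraint, but not table b, so
    -- restoring a_only.sql into a clean database fails when that constraint
    -- is created.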
[ { "msg_contents": "Hi hackers,\n\nI just noticed that all indents in upgrade scripts of pg_stat_statemets are\nmade using spaces, except for \"CREATE FUNCTION pg_stat_statements_reset\"\nstatement in pg_stat_statements--1.6--1.7.sql and\npg_stat_statements--1.10--1.11.sql.\n\nI made a patch to fix it in pg_stat_statements--1.10--1.11.sql if it's not\ntoo late for PostgreSQL 17 yet.\n\nBest regards,\nKarina Litskevich\nPostgres Professional: http://postgrespro.com/", "msg_date": "Mon, 23 Sep 2024 16:23:56 +0300", "msg_from": "Karina Litskevich <litskevichkarina@gmail.com>", "msg_from_op": true, "msg_subject": "pg_stat_statements: use spaces to indent in upgrade scripts" } ]
[ { "msg_contents": "Hello,\r\n\r\nWe've set allow_system_table_mods = on so that we could rename \r\npg_database and in its place put a custom view that only lists the\r\ndatabases the current user has CONNECT privileges to. This is because\r\n1) we allow customers direct (read only) access to their databases, but\r\n2) we don't want them to see the other customers, and 3) restricting\r\naccess to pg_database altogether leads to the GUIs the customers use\r\nspamming error messages because they expect pg_database to be readable,\r\nand that makes the customers (or their consultants) annoyed.\r\n\r\nA problem arose after the fix for CVE-2024-7348, because certain\r\nqueries that pg_dump runs use pg_database, and those are now blocked,\r\nso pg_dump fails. Well, actually, it's just subscriptions that are the\r\nproblem when it comes to pg_dump: pg_dump --no-subscriptions works in\r\nour case. However, pg_dumpall runs a different query that also uses\r\npg_database and that I don't think is possible to avoid.\r\n\r\nI realise that if you use allow_system_table_mods, you're kinda on your\r\nown, but it exists after all, and this security fix seems to make it\r\nless usable, if not unusable.\r\n\r\nCould views owned by postgres and/or in the pg_catalog namespace be\r\nconsidered system relations, even if customized? There's no way to\r\nsuppress the use of restrict_nonsystem_relation_kind if you know that\r\nthere are no untrusted users with object creation privileges, is there?\r\n\r\nAlternatively, do you have any other suggestions as to how to solve the\r\noriginal problem (we'd like to avoid renaming the databases so they\r\ndon't reveal the customer names)?\r\n\r\n-- \r\nGreetings,\r\nMagnus Holmgren\r\n\r\nMilient Software | www.milientsoftware.com\r\n\r\n\r\n", "msg_date": "Mon, 23 Sep 2024 15:38:43 +0000", "msg_from": "Magnus Holmgren <magnus.holmgren@milientsoftware.com>", "msg_from_op": true, "msg_subject": "restrict_nonsystem_relation_kind led to regression (kinda)" }, { "msg_contents": "Hi Magnus,\n\nOn 2024-Sep-23, Magnus Holmgren wrote:\n\n> We've set allow_system_table_mods = on so that we could rename \n> pg_database and in its place put a custom view that only lists the\n> databases the current user has CONNECT privileges to. This is because\n> 1) we allow customers direct (read only) access to their databases, but\n> 2) we don't want them to see the other customers, and 3) restricting\n> access to pg_database altogether leads to the GUIs the customers use\n> spamming error messages because they expect pg_database to be readable,\n> and that makes the customers (or their consultants) annoyed.\n\nYour use case and problem seem to match bug report #18604 almost\nexactly:\nhttps://postgr.es/m/18604-04d64b68e981ced6@postgresql.org\n\nI suggest to read that discussion, as it contains useful information.\nAs I understand, you're only really safe (not just theatrically safe) by\ngiving each customer a separate Postgres instance.\n\nRegards\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Mon, 23 Sep 2024 17:50:49 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: restrict_nonsystem_relation_kind led to regression (kinda)" } ]
[ { "msg_contents": "Hi hackers,\n\nI noticed unnecessary variable \"low\" in index_delete_sort() \n(/postgres/src/backend/access/heap/heapam.c), patch attached. What do \nyou think?\n\nRegards,\nKoki Nakamura <btnakamurakoukil@oss.nttdata.com>", "msg_date": "Tue, 24 Sep 2024 17:32:48 +0900", "msg_from": "btnakamurakoukil <btnakamurakoukil@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "index_delete_sort: Unnecessary variable \"low\" is used in heapam.c" }, { "msg_contents": "> On 24 Sep 2024, at 10:32, btnakamurakoukil <btnakamurakoukil@oss.nttdata.com> wrote:\n\n> I noticed unnecessary variable \"low\" in index_delete_sort() (/postgres/src/backend/access/heap/heapam.c), patch attached. What do you think?\n\nThat variable does indeed seem to not be used, and hasn't been used since it\nwas committed in d168b666823. The question is if it's a left-over from\ndevelopment which can be removed, or if it should be set and we're missing an\noptimization. Having not read the referenced paper I can't tell so adding\nPeter Geoghegan who wrote this for clarification.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Tue, 24 Sep 2024 14:31:30 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: index_delete_sort: Unnecessary variable \"low\" is used in heapam.c" } ]
[ { "msg_contents": "Hi,\n\nCurrently we modify pages while just holding a share lock, for hint bit\nwrites. Writing a buffer out only requires a share lock. Because of that we\ncan't compute checksums and write out pages in-place, as a concurent hint bit\nwrite can easily corrupt the checksum.\n\nThat's not great, but not awful for our current synchronous buffered IO, as\nwe only ever have a single page being written out at a time.\n\nHowever, it becomes a problem even if we just want to write out in chunks\nlarger than a single page - we'd need to reserve not just one BLCKSZ sized\nbuffer for this, but make it PG_IOV_MAX * BLCKSZ sized. Perhaps still\ntolerable.\n\nWith AIO this becomes a considerably bigger issue:\na) We can't just have one write in progress at a time, but many\nb) To be able to implement AIO using workers the \"source\" or \"target\" memory\n of the IO needs to be in shared memory\n\nBesides that, the need to copy the buffers makes checkpoints with AIO\nnoticeably slower when checksums are enabled - it's not the checksum but the\ncopy that's the biggest source of the slowdown.\n\n\nSo far the AIO patchset has solved this by introducing a set of \"bounce\nbuffers\", which can be acquired and used as the source/target of IO when doing\nit in-place into shared buffers isn't viable.\n\nI am worried about that solution however, as either acquisition of bounce\nbuffers becomes a performance issue (that's how I did it at first, it was hard\nto avoid regressions) or we reserve bounce buffers for each backend, in which\ncase the memory overhead for instances with relatively small amount of\nshared_buffers and/or many connections can be significant.\n\n\nWhich lead me down the path of trying to avoid the need for the copy in the\nfirst place: What if we don't modify pages while it's undergoing IO?\n\nThe naive approach would be to not set hint bits with just a shared lock - but\nthat doesn't seem viable at all. For performance we rely on hint bits being\nset and in many cases we'll only encounter the page in shared mode. We could\nimplement a conditional lock upgrade to an exclusive lock and do so while\nsetting hint bits, but that'd obviously be concerning from a concurrency point\nof view.\n\nWhat I suspect we might want instead is something inbetween a share and an\nexclusive lock, which is taken while setting a hint bit and which conflicts\nwith having an IO in progress.\n\nOn first blush it might sound attractive to introduce this on the level of\nlwlocks. However, I doubt that is a good idea - it'd make lwlock.c more\ncomplicated which would imply overhead for other users, while the precise\nsemantics would be fairly specific to buffer locking. A variant of this would\nbe to generalize lwlock.c to allow implementing different kinds of locks more\neasily. But that's a significant project on its own and doesn't really seem\nnecessary for this specific project.\n\nWhat I'd instead like to propose is to implement the right to set hint bits as\na bit in each buffer's state, similar to BM_IO_IN_PROGRESS. Tentatively I\nnamed this BM_SETTING_HINTS. It's only allowed to set BM_SETTING_HINTS when\nBM_IO_IN_PROGRESS isn't already set and StartBufferIO has to wait for\nBM_SETTING_HINTS to be unset to start IO.\n\nNaively implementing this, by acquiring and releasing the permission to set\nhint bits in SetHintBits() unfortunately leads to a significant performance\nregression. 
While the performance is unaffected for OLTPish workloads like\npgbench (both read and write), sequential scans of unhinted tables regress\nsignificantly, due to the per-tuple lock acquisition this would imply.\n\nBut: We can address this and improve performance over the status quo! Today we\ndetermine tuple visiblity determination one-by-one, even when checking the\nvisibility of an entire page worth of tuples. That's not exactly free. I've\nprototyped checking visibility of an entire page of tuples at once and it\nindeed speeds up visibility checks substantially (in some cases seqscans are\nover 20% faster!).\n\nOnce we have page-level visibility checks we can get the right to set hint\nbits once for an entire page instead of doing it for every tuple - with that\nin place the \"new approach\" of setting hint bits only with BM_SETTING_HINTS\nwins.\n\n\nHaving a page level approach to setting hint bits has other advantages:\n\nE.g. today, with wal_log_hints, we'll log hint bits on the first hint bit set\non the page and we don't mark a page dirty on hot standby. Which often will\nresult in hint bits notpersistently set on replicas until the page is frozen.\n\nAnother issue is that we'll often WAL log hint bits for a page (due to hint\nbits being set), just to then immediately log another WAL record for the same\npage (e.g. for pruning), which is obviously wasteful. With a different\ninterface we could combine the WAL records for both.\n\nI've not prototyped either, but I'm fairly confident they'd be helpful.\n\n\nDoes this sound like a reasonable idea? Counterpoints? If it does sound\nreasonable, I'll clean up my pile of hacks into something\nsemi-understandable...\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 24 Sep 2024 11:55:08 -0400", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "AIO writes vs hint bits vs checksums" }, { "msg_contents": "On Tue, Sep 24, 2024 at 11:55:08AM -0400, Andres Freund wrote:\n> So far the AIO patchset has solved this by introducing a set of \"bounce\n> buffers\", which can be acquired and used as the source/target of IO when doing\n> it in-place into shared buffers isn't viable.\n> \n> I am worried about that solution however, as either acquisition of bounce\n> buffers becomes a performance issue (that's how I did it at first, it was hard\n> to avoid regressions) or we reserve bounce buffers for each backend, in which\n> case the memory overhead for instances with relatively small amount of\n> shared_buffers and/or many connections can be significant.\n\n> But: We can address this and improve performance over the status quo! Today we\n> determine tuple visiblity determination one-by-one, even when checking the\n> visibility of an entire page worth of tuples. That's not exactly free. I've\n> prototyped checking visibility of an entire page of tuples at once and it\n> indeed speeds up visibility checks substantially (in some cases seqscans are\n> over 20% faster!).\n\nNice! It sounds like you refactored the relationship between\nheap_prepare_pagescan() and HeapTupleSatisfiesVisibility() to move the hint\nbit setting upward or the iterate-over-tuples downward. 
Is that about right?\n\n> Once we have page-level visibility checks we can get the right to set hint\n> bits once for an entire page instead of doing it for every tuple - with that\n> in place the \"new approach\" of setting hint bits only with BM_SETTING_HINTS\n> wins.\n\nHow did page-level+BM_SETTING_HINTS performance compare to performance of the\npage-level change w/o the BM_SETTING_HINTS change?\n\n> Having a page level approach to setting hint bits has other advantages:\n> \n> E.g. today, with wal_log_hints, we'll log hint bits on the first hint bit set\n> on the page and we don't mark a page dirty on hot standby. Which often will\n> result in hint bits notpersistently set on replicas until the page is frozen.\n\nNice way to improve that.\n\n> Does this sound like a reasonable idea? Counterpoints?\n\nI guess the main part left to discuss is index scans or other scan types where\nwe'd either not do page-level visibility or we'd do page-level visibility\nincluding tuples we wouldn't otherwise use. BM_SETTING_HINTS likely won't\nshow up so readily in index scan profiles, but the cost is still there. How\nshould we think about comparing the distributed cost of the buffer header\nmanipulations during index scans vs. the costs of bounce buffers?\n\nThanks,\nnm\n\n\n", "msg_date": "Tue, 24 Sep 2024 12:43:40 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: AIO writes vs hint bits vs checksums" }, { "msg_contents": "Hi,\n\nOn 2024-09-24 12:43:40 -0700, Noah Misch wrote:\n> On Tue, Sep 24, 2024 at 11:55:08AM -0400, Andres Freund wrote:\n> > So far the AIO patchset has solved this by introducing a set of \"bounce\n> > buffers\", which can be acquired and used as the source/target of IO when doing\n> > it in-place into shared buffers isn't viable.\n> >\n> > I am worried about that solution however, as either acquisition of bounce\n> > buffers becomes a performance issue (that's how I did it at first, it was hard\n> > to avoid regressions) or we reserve bounce buffers for each backend, in which\n> > case the memory overhead for instances with relatively small amount of\n> > shared_buffers and/or many connections can be significant.\n>\n> > But: We can address this and improve performance over the status quo! Today we\n> > determine tuple visiblity determination one-by-one, even when checking the\n> > visibility of an entire page worth of tuples. That's not exactly free. I've\n> > prototyped checking visibility of an entire page of tuples at once and it\n> > indeed speeds up visibility checks substantially (in some cases seqscans are\n> > over 20% faster!).\n>\n> Nice! It sounds like you refactored the relationship between\n> heap_prepare_pagescan() and HeapTupleSatisfiesVisibility() to move the hint\n> bit setting upward or the iterate-over-tuples downward. Is that about right?\n\nI've tried about five variations, so I don't have one answer to this yet :).\n\nOne problem is that having repeated loops doing PageGetItemId(),\nPageGetItem(), ItemIdGetLength() isn't exactly free. To some degree it can be\nhidden by allowing for better superscalar execution, but not entirely.\n\nI've been somewhat confused by the compiler generated code around ItemId\nhandling for a while, it looks way more expensive than it should - it\nregularly is a bottleneck due to the sheer number of instructions being\nexecuted leading to being frontend bound. 
But never quite looked into it\ndeeply enough to figure out what's causing it / how to fix it.\n\n\n> > Once we have page-level visibility checks we can get the right to set hint\n> > bits once for an entire page instead of doing it for every tuple - with that\n> > in place the \"new approach\" of setting hint bits only with BM_SETTING_HINTS\n> > wins.\n>\n> How did page-level+BM_SETTING_HINTS performance compare to performance of the\n> page-level change w/o the BM_SETTING_HINTS change?\n\nJust ran that. There probably is a performance difference, but it's small\n(<0.5%) making it somewhat hard to be certain. It looks like the main reason\nfor that is ConditionVariableBroadcast() on the iocv shows up even though\nnobody is waiting.\n\nI've been fighting that with AIO as well, so maybe it's time to figure out the\nmemory ordering rules that'd allow to check that without a full spinlock\nacquisition.\n\nIf we figure it out, we perhaps should use the chance to get rid of\nBM_PIN_COUNT_WAITER...\n\n\n> > Does this sound like a reasonable idea? Counterpoints?\n>\n> I guess the main part left to discuss is index scans or other scan types where\n> we'd either not do page-level visibility or we'd do page-level visibility\n> including tuples we wouldn't otherwise use. BM_SETTING_HINTS likely won't\n> show up so readily in index scan profiles, but the cost is still there.\n\nI could indeed not make it show up in some simple index lookup heavy\nworkloads. I need to try some more extreme cases though (e.g. fetching all\ntuples in a table via an index or having very long HOT chains).\n\nIf it's not visible cost-wise compared to all the other costs of index scans -\ndoes it matter? If it's not visible it's either because it proportionally is\nvery small or because it's completely hidden by superscalar execution.\n\n\nAnother thing I forgot to mention that probably fits into the \"tradeoffs\"\nbucket: Because BM_SETTING_HINTS would be just a single bit, one backend\nsetting hint bits would block out another backend setting hint bits. In most\nsituations that'll be fine, or even faster than not doing so due to reducing\ncache line ping-pong, but in cases of multiple backends doing index lookups to\ndifferent unhinted tuples on the same page it could be a bit worse.\n\nBut I suspect that's fine because it's hard to believe that you could have\nenough index lookups to unhinted tuples for that to be a bottleneck -\nsomething has to produce all those unhinted tuples after all, and that's\nrather more expensive. And for single-tuple visibility checks the window in\nwhich hint bits are set is very small.\n\n\n> How should we think about comparing the distributed cost of the buffer\n> header manipulations during index scans vs. the costs of bounce buffers?\n\nWell, the cost of bounce buffers would be born as long as postgres is up,\nwhereas a not-measurable (if it indeed isn't) cost during index scans wouldn't\nreally show up. If there are cases where the cost of the more expensive hint\nbit logic does show up, it'll get a lot harder to weigh.\n\n\nSo far my prototype uses the path that avoids hint bits being set while IO is\ngoing on all the time, not just when checksums are enabled. We could change\nthat, I guess. However, our habit of modifying buffers while IO is going on is\ncausing issues with filesystem level checksums as well, as evidenced by the\nfact that debug_io_direct = data on btrfs causes filesystem corruption. 
So I\ntend to think it'd be better to just stop doing that alltogether (we also do\nthat for WAL, when writing out a partial page, but a potential fix there would\nbe different, I think).\n\n\n\nA thing I forgot to mention: Bounce buffers are kind of an architectural dead\nend, in that we wouln't need them in that form if we get to a threaded\npostgres.\n\n\nZooming out (a lot) more: I like the idea of having a way to get the\npermission to perform some kinds of modifications on a page without an\nexlusive lock. While obviously a lot more work, I do think there's some\npotential to have some fast-paths that perform work on a page level without\nblocking out readers. E.g. some simple cases of insert could correctly be done\nwithout blocking out readers (by ordering the update of the max page offset\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 24 Sep 2024 16:30:25 -0400", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: AIO writes vs hint bits vs checksums" }, { "msg_contents": "On Wed, Sep 25, 2024 at 8:30 AM Andres Freund <andres@anarazel.de> wrote:\n> Just ran that. There probably is a performance difference, but it's small\n> (<0.5%) making it somewhat hard to be certain. It looks like the main reason\n> for that is ConditionVariableBroadcast() on the iocv shows up even though\n> nobody is waiting.\n\n. o O { Gotta fix that. Memory barriers might be enough to check for\nempty wait list?, and even in the slow path, atomic wait lists or\nsomething better than spinlocks... }\n\n> However, our habit of modifying buffers while IO is going on is\n> causing issues with filesystem level checksums as well, as evidenced by the\n> fact that debug_io_direct = data on btrfs causes filesystem corruption. So I\n> tend to think it'd be better to just stop doing that alltogether (we also do\n> that for WAL, when writing out a partial page, but a potential fix there would\n> be different, I think).\n\n+many. Interesting point re the WAL variant. For the record, here's\nsome discussion and a repro for that problem, which Andrew currently\nworks around in a build farm animal with mount options:\n\nhttps://www.postgresql.org/message-id/CA%2BhUKGKSBaz78Fw3WTF3Q8ArqKCz1GgsTfRFiDPbu-j9OFz-jw%40mail.gmail.com\n\n\n", "msg_date": "Wed, 25 Sep 2024 12:45:10 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: AIO writes vs hint bits vs checksums" }, { "msg_contents": "On Tue, Sep 24, 2024 at 04:30:25PM -0400, Andres Freund wrote:\n> On 2024-09-24 12:43:40 -0700, Noah Misch wrote:\n> > On Tue, Sep 24, 2024 at 11:55:08AM -0400, Andres Freund wrote:\n> > > Besides that, the need to copy the buffers makes checkpoints with AIO\n> > > noticeably slower when checksums are enabled - it's not the checksum but the\n> > > copy that's the biggest source of the slowdown.\n\nHow big is that copy's contribution to the slowdown there? A measurable CPU\noverhead on writes likely does outweigh the unmeasurable overhead on index\nscans, but ...\n\n> > > Does this sound like a reasonable idea? Counterpoints?\n\n> > How should we think about comparing the distributed cost of the buffer\n> > header manipulations during index scans vs. the costs of bounce buffers?\n> \n> Well, the cost of bounce buffers would be born as long as postgres is up,\n> whereas a not-measurable (if it indeed isn't) cost during index scans wouldn't\n> really show up.\n\n... neither BM_SETTING_HINTS nor keeping bounce buffers looks like a bad\ndecision. 
From what I've heard so far of the performance effects, if it were\nme, I would keep the bounce buffers. I'd pursue BM_SETTING_HINTS and bounce\nbuffer removal as a distinct project after the main AIO capability. Bounce\nbuffers have an implementation. They aren't harming other design decisions.\nThe AIO project is big, so I'd want to err on the side of not designating\nother projects as its prerequisites.\n\n> Zooming out (a lot) more: I like the idea of having a way to get the\n> permission to perform some kinds of modifications on a page without an\n> exlusive lock. While obviously a lot more work, I do think there's some\n> potential to have some fast-paths that perform work on a page level without\n> blocking out readers. E.g. some simple cases of insert could correctly be done\n> without blocking out readers (by ordering the update of the max page offset\n\nTrue.\n\n\n", "msg_date": "Tue, 24 Sep 2024 19:00:22 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: AIO writes vs hint bits vs checksums" }, { "msg_contents": "Andres Freund <andres@anarazel.de> wrote:\n\n> What I'd instead like to propose is to implement the right to set hint bits as\n> a bit in each buffer's state, similar to BM_IO_IN_PROGRESS. Tentatively I\n> named this BM_SETTING_HINTS. It's only allowed to set BM_SETTING_HINTS when\n> BM_IO_IN_PROGRESS isn't already set and StartBufferIO has to wait for\n> BM_SETTING_HINTS to be unset to start IO.\n> \n> Naively implementing this, by acquiring and releasing the permission to set\n> hint bits in SetHintBits() unfortunately leads to a significant performance\n> regression. While the performance is unaffected for OLTPish workloads like\n> pgbench (both read and write), sequential scans of unhinted tables regress\n> significantly, due to the per-tuple lock acquisition this would imply.\n\nAn alternative approach: introduce a flag that tells that the checksum is\nbeing computed, and disallow setting hint bits when that flag is set. As long\nas the checksum computation takes take much less time than the IO, fewer hint\nbit updates should be rejected.\n\nOf course, SetHintBits() would have to update the checksum too. But if it\ncould determine where the hint bit is located in the buffer and if \"some\nintermediate state\" of the computation was maintained for each page in shared\nbuffers, then the checksum update might be cheaper than the initial\ncomputation. But I'm not sure I understand the algorithm enough.\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com\n\n\n", "msg_date": "Wed, 25 Sep 2024 09:06:15 +0200", "msg_from": "Antonin Houska <ah@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: AIO writes vs hint bits vs checksums" }, { "msg_contents": "Antonin Houska <ah@cybertec.at> wrote:\n\n> Andres Freund <andres@anarazel.de> wrote:\n> \n> > What I'd instead like to propose is to implement the right to set hint bits as\n> > a bit in each buffer's state, similar to BM_IO_IN_PROGRESS. Tentatively I\n> > named this BM_SETTING_HINTS. It's only allowed to set BM_SETTING_HINTS when\n> > BM_IO_IN_PROGRESS isn't already set and StartBufferIO has to wait for\n> > BM_SETTING_HINTS to be unset to start IO.\n> > \n> > Naively implementing this, by acquiring and releasing the permission to set\n> > hint bits in SetHintBits() unfortunately leads to a significant performance\n> > regression. 
While the performance is unaffected for OLTPish workloads like\n> > pgbench (both read and write), sequential scans of unhinted tables regress\n> > significantly, due to the per-tuple lock acquisition this would imply.\n> \n> An alternative approach: introduce a flag that tells that the checksum is\n> being computed, and disallow setting hint bits when that flag is set. As long\n> as the checksum computation takes take much less time than the IO, fewer hint\n> bit updates should be rejected.\n\nWell, the checksum actually should not be computed during the IO, so the IO\nwould still disallow hint bit updates :-(\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com\n\n\n", "msg_date": "Wed, 25 Sep 2024 10:12:27 +0200", "msg_from": "Antonin Houska <ah@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: AIO writes vs hint bits vs checksums" }, { "msg_contents": "On Wed, Sep 25, 2024 at 12:45 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Wed, Sep 25, 2024 at 8:30 AM Andres Freund <andres@anarazel.de> wrote:\n> > However, our habit of modifying buffers while IO is going on is\n> > causing issues with filesystem level checksums as well, as evidenced by the\n> > fact that debug_io_direct = data on btrfs causes filesystem corruption. So I\n> > tend to think it'd be better to just stop doing that alltogether (we also do\n> > that for WAL, when writing out a partial page, but a potential fix there would\n> > be different, I think).\n>\n> +many. Interesting point re the WAL variant. For the record, here's\n> some discussion and a repro for that problem, which Andrew currently\n> works around in a build farm animal with mount options:\n>\n> https://www.postgresql.org/message-id/CA%2BhUKGKSBaz78Fw3WTF3Q8ArqKCz1GgsTfRFiDPbu-j9OFz-jw%40mail.gmail.com\n\nHere's an interesting new development in that area, this time from\nOpenZFS, which committed its long awaited O_DIRECT support a couple of\nweeks ago[1] and seems to have taken a different direction since that\nlast discussion. Clearly it has the same checksum stability problem\nas BTRFS and PostgreSQL itself, so an O_DIRECT mode with the goal of\navoiding copying and caching must confront that and break *something*,\nor accept something like bounce buffers and give up the zero-copy\ngoal. Curiously, they seem to have landed on two different solutions\nwith three different possible behaviours: (1) On FreeBSD, temporarily\nmake the memory non-writeable, (2) On Linux, they couldn't do that so\nthey have an extra checksum verification on write. I haven't fully\ngrokked all this yet, or even tried it, and it's not released or\nanything, but it looks a bit like all three behaviours are bad for our\ncurrent hint bit design: on FreeBSD, setting a hint bit might crash\n(?) if a write is in progress in another process, and on Linux,\ndepending on zfs_vdev_direct_write_verify, either the concurrent write\nmight fail (= checkpointer failing on EIO because someone concurrently\nset a hint bit) or a later read might fail (= file is permanently\ncorrupted and you don't find out until later, like btrfs). I plan to\nlook more closely soon and see if I understood that right...\n\n[1] https://github.com/openzfs/zfs/pull/10018/commits/d7b861e7cfaea867ae28ab46ab11fba89a5a1fda\n\n\n", "msg_date": "Fri, 27 Sep 2024 09:56:34 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: AIO writes vs hint bits vs checksums" } ]
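To make the proposed locking protocol concrete, here is a minimal standalone model using C11 atomics. It is a sketch under stated assumptions, not the AIO patchset: the flag names, the spin-wait, and the exclusive treatment of hint setting merely mirror the description in the thread (the hinting right may only be taken while no IO is in progress, and the StartBufferIO equivalent must wait for it to clear); a real implementation would live in the buffer manager, operate on BufferDesc.state alongside BM_IO_IN_PROGRESS, and sleep on the buffer's condition variable rather than spinning.

```c
/* Toy model of the BM_SETTING_HINTS idea -- not PostgreSQL code. */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

#define IO_IN_PROGRESS	(1u << 0)
#define SETTING_HINTS	(1u << 1)

typedef struct ToyBufferDesc
{
	_Atomic uint32_t state;
} ToyBufferDesc;

/*
 * Reader side: try to acquire the right to set hint bits.  Fails (and the
 * caller simply skips hinting this time) if IO is already running or
 * another backend is currently hinting.
 */
static bool
try_begin_set_hints(ToyBufferDesc *buf)
{
	uint32_t	old = atomic_load(&buf->state);

	while ((old & (IO_IN_PROGRESS | SETTING_HINTS)) == 0)
	{
		if (atomic_compare_exchange_weak(&buf->state, &old,
										 old | SETTING_HINTS))
			return true;
		/* a failed CAS reloaded "old"; the loop re-checks both flags */
	}
	return false;
}

static void
end_set_hints(ToyBufferDesc *buf)
{
	atomic_fetch_and(&buf->state, ~SETTING_HINTS);
}

/*
 * Writer side: the StartBufferIO analogue -- wait until nobody holds the
 * hinting right, then mark the buffer as undergoing IO.
 */
static void
begin_io(ToyBufferDesc *buf)
{
	uint32_t	old = atomic_load(&buf->state);

	for (;;)
	{
		if ((old & (IO_IN_PROGRESS | SETTING_HINTS)) == 0 &&
			atomic_compare_exchange_weak(&buf->state, &old,
										 old | IO_IN_PROGRESS))
			return;
		old = atomic_load(&buf->state); /* real code would sleep on the io CV */
	}
}

static void
end_io(ToyBufferDesc *buf)
{
	atomic_fetch_and(&buf->state, ~IO_IN_PROGRESS);
}
```

The performance question debated above is how often the reader-side path runs: once per tuple is what made the naive prototype regress on sequential scans, while once per page — after batched, page-level visibility checks — keeps the extra atomics cheap.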
[ { "msg_contents": "Hello hackers,\r\n\r\nI have found an instance of a time overflow with the start time that is written in \"postmaster.pid\". On a 32-bit Linux system, if the system date is past 01/19/2038, when you start Postgres with `pg_ctl start -D {datadir} ...`, the start time will have rolled back to approximately 1900. This is an instance of the \"2038 problem\". On my system, pg_ctl will not exit if the start time has overflowed.\r\n\r\nThis can be fixed by casting \"MyStartTime\" to a long long instead of just a long in \"src/backend/utils/init/miscinit.c\". Additionally, in \"src/bin/pg_ctl/pg_ctl.c\", when we read that value from the file, we should use \"atoll()\" instead of \"atol()\" to ensure we are reading it as a long long.\r\nI have verified that this fixes the start time overflow on my 32-bit arm system. My glibc is compiled with 64-bit time_t.\r\nMost systems running Postgres likely aren't 32-bit, but for embedded systems, this is important to ensure 2038 compatibility.\r\n\r\nThis is a fairly trivial patch, and I do not currently see any issues with using long long. I was told on IRC that a regression test is likely not necessary for this patch.\r\nI look forward to hearing any feedback. This is my first open-source contribution!\r\n\r\nThank you,\r\n\r\n\r\nMax Johnson\r\n\r\nEmbedded Linux Engineer I\r\n\r\n​\r\n\r\nNovaTech, LLC\r\n\r\n13555 W. 107th Street | Lenexa, KS 66215 ​\r\n\r\nO: 913.451.1880\r\n\r\nM: 913.742.4580​\r\n\r\nnovatechautomation.com<http://www.novatechautomation.com/> | NovaTechLinkedIn<https://www.linkedin.com/company/565017>\r\n\r\n\r\nNovaTech Automation is Net Zero committed. #KeepItCool<https://www.keepitcool.earth/>\r\n\r\nReceipt of this email implies compliance with our terms and conditions<https://www.novatechautomation.com/email-terms-conditions>.", "msg_date": "Tue, 24 Sep 2024 19:33:24 +0000", "msg_from": "Max Johnson <max.johnson@novatechautomation.com>", "msg_from_op": true, "msg_subject": "pg_ctl/miscinit: print \"MyStartTime\" as a long long instead of long\n to avoid 2038 problem." }, { "msg_contents": "On Tue, Sep 24, 2024 at 07:33:24PM +0000, Max Johnson wrote:\n> I have found an instance of a time overflow with the start time that is\n> written in \"postmaster.pid\". On a 32-bit Linux system, if the system date\n> is past 01/19/2038, when you start Postgres with `pg_ctl start -D\n> {datadir} ...`, the start time will have rolled back to approximately\n> 1900. This is an instance of the \"2038 problem\". On my system, pg_ctl\n> will not exit if the start time has overflowed.\n\nNice find. I think this has been the case since the start time was added\nto the lock files [0].\n\n> -\tsnprintf(buffer, sizeof(buffer), \"%d\\n%s\\n%ld\\n%d\\n%s\\n\",\n> +\tsnprintf(buffer, sizeof(buffer), \"%d\\n%s\\n%lld\\n%d\\n%s\\n\",\n> \t\t\t amPostmaster ? (int) my_pid : -((int) my_pid),\n> \t\t\t DataDir,\n> -\t\t\t (long) MyStartTime,\n> +\t\t\t (long long) MyStartTime,\n> \t\t\t PostPortNumber,\n> \t\t\t socketDir);\n\nI think we should use INT64_FORMAT here. That'll choose the right length\nmodifier for the platform. And I don't think we need to cast MyStartTime,\nsince it's a pg_time_t (which is just an int64).\n\n[0] https://postgr.es/c/30aeda4\n\n-- \nnathan\n\n\n", "msg_date": "Tue, 24 Sep 2024 15:26:47 -0500", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_ctl/miscinit: print \"MyStartTime\" as a long long instead of\n long to avoid 2038 problem." 
}, { "msg_contents": "Nathan Bossart <nathandbossart@gmail.com> writes:\n> I think we should use INT64_FORMAT here. That'll choose the right length\n> modifier for the platform. And I don't think we need to cast MyStartTime,\n> since it's a pg_time_t (which is just an int64).\n\nAgreed. However, a quick grep finds half a dozen other places that\nare casting MyStartTime to long. We should fix them all.\n\nAlso note that if any of the other places are using translatable\nformat strings, INT64_FORMAT is problematic in that context, and\n\"long long\" is a better answer for them.\n\n(I've not dug in the history, but I rather imagine that this is all\na hangover from MyStartTime having once been plain \"time_t\".)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 24 Sep 2024 16:44:41 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_ctl/miscinit: print \"MyStartTime\" as a long long instead of\n long to avoid 2038 problem." }, { "msg_contents": "On Tue, Sep 24, 2024 at 04:44:41PM -0400, Tom Lane wrote:\n> Nathan Bossart <nathandbossart@gmail.com> writes:\n>> I think we should use INT64_FORMAT here. That'll choose the right length\n>> modifier for the platform. And I don't think we need to cast MyStartTime,\n>> since it's a pg_time_t (which is just an int64).\n> \n> Agreed. However, a quick grep finds half a dozen other places that\n> are casting MyStartTime to long. We should fix them all.\n\n+1\n\n> Also note that if any of the other places are using translatable\n> format strings, INT64_FORMAT is problematic in that context, and\n> \"long long\" is a better answer for them.\n\nAt a glance, I'm not seeing any translatable format strings that involve\nMyStartTime. But that is good to know...\n\n-- \nnathan\n\n\n", "msg_date": "Tue, 24 Sep 2024 15:58:08 -0500", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_ctl/miscinit: print \"MyStartTime\" as a long long instead of\n long to avoid 2038 problem." }, { "msg_contents": "Hi there,\r\n\r\nI have amended my patch to reflect the changes that were discussed and have verified on my system that it works the same as before. I have also fixed a typo and changed the name of the patch to more accurately reflect what it does now. Please let me know if there is anything else you'd like me to do.\r\n\r\nThanks again,\r\n\r\n\r\nMax Johnson\r\n\r\nEmbedded Linux Engineer I\r\n\r\n​\r\n\r\nNovaTech, LLC\r\n\r\n13555 W. 107th Street | Lenexa, KS 66215 ​\r\n\r\nO: 913.451.1880\r\n\r\nM: 913.742.4580​\r\n\r\nnovatechautomation.com<http://www.novatechautomation.com/> | NovaTechLinkedIn<https://www.linkedin.com/company/565017>\r\n\r\n\r\nNovaTech Automation is Net Zero committed. #KeepItCool<https://www.keepitcool.earth/>\r\n\r\nReceipt of this email implies compliance with our terms and conditions<https://www.novatechautomation.com/email-terms-conditions>.\r\n\r\n________________________________\r\nFrom: Nathan Bossart <nathandbossart@gmail.com>\r\nSent: Tuesday, September 24, 2024 3:58 PM\r\nTo: Tom Lane <tgl@sss.pgh.pa.us>\r\nCc: Max Johnson <max.johnson@novatechautomation.com>; pgsql-hackers@postgresql.org <pgsql-hackers@postgresql.org>\r\nSubject: Re: pg_ctl/miscinit: print \"MyStartTime\" as a long long instead of long to avoid 2038 problem.\r\n\r\nOn Tue, Sep 24, 2024 at 04:44:41PM -0400, Tom Lane wrote:\r\n> Nathan Bossart <nathandbossart@gmail.com> writes:\r\n>> I think we should use INT64_FORMAT here. 
That'll choose the right length\r\n>> modifier for the platform. And I don't think we need to cast MyStartTime,\r\n>> since it's a pg_time_t (which is just an int64).\r\n>\r\n> Agreed. However, a quick grep finds half a dozen other places that\r\n> are casting MyStartTime to long. We should fix them all.\r\n\r\n+1\r\n\r\n> Also note that if any of the other places are using translatable\r\n> format strings, INT64_FORMAT is problematic in that context, and\r\n> \"long long\" is a better answer for them.\r\n\r\nAt a glance, I'm not seeing any translatable format strings that involve\r\nMyStartTime. But that is good to know...\r\n\r\n--\r\nnathan", "msg_date": "Wed, 25 Sep 2024 15:17:45 +0000", "msg_from": "Max Johnson <max.johnson@novatechautomation.com>", "msg_from_op": true, "msg_subject": "Re: pg_ctl/miscinit: print \"MyStartTime\" as a long long instead of\n long to avoid 2038 problem." }, { "msg_contents": "On Wed, Sep 25, 2024 at 03:17:45PM +0000, Max Johnson wrote:\n> I have amended my patch to reflect the changes that were discussed and\n> have verified on my system that it works the same as before. I have also\n> fixed a typo and changed the name of the patch to more accurately reflect\n> what it does now. Please let me know if there is anything else you'd like\n> me to do.\n\nThanks! I went through all the other uses of MyStartTime and fixed those\nas needed, too. Please let me know what you think.\n\n-- \nnathan", "msg_date": "Wed, 25 Sep 2024 13:48:35 -0500", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_ctl/miscinit: print \"MyStartTime\" as a long long instead of\n long to avoid 2038 problem." }, { "msg_contents": "Hi Nathan,\n\nI think your patch looks good, no objections. I am happy to have contributed.\n\nThanks,\nMax\n________________________________\nFrom: Nathan Bossart <nathandbossart@gmail.com>\nSent: Wednesday, September 25, 2024 1:48 PM\nTo: Max Johnson <max.johnson@novatechautomation.com>\nCc: tgl@sss.pgh.pa.us <tgl@sss.pgh.pa.us>; pgsql-hackers@postgresql.org <pgsql-hackers@postgresql.org>\nSubject: Re: pg_ctl/miscinit: print \"MyStartTime\" as a long long instead of long to avoid 2038 problem.\n\nOn Wed, Sep 25, 2024 at 03:17:45PM +0000, Max Johnson wrote:\n> I have amended my patch to reflect the changes that were discussed and\n> have verified on my system that it works the same as before. I have also\n> fixed a typo and changed the name of the patch to more accurately reflect\n> what it does now. Please let me know if there is anything else you'd like\n> me to do.\n\nThanks! I went through all the other uses of MyStartTime and fixed those\nas needed, too. Please let me know what you think.\n\n--\nnathan\n\n\n\n\n\n\n\n\nHi Nathan,\n\n\n\n\nI think your patch looks good, no objections. I am happy to have contributed.\n\n\n\n\nThanks,\n\nMax\n\n\n\nFrom: Nathan Bossart <nathandbossart@gmail.com>\nSent: Wednesday, September 25, 2024 1:48 PM\nTo: Max Johnson <max.johnson@novatechautomation.com>\nCc: tgl@sss.pgh.pa.us <tgl@sss.pgh.pa.us>; pgsql-hackers@postgresql.org <pgsql-hackers@postgresql.org>\nSubject: Re: pg_ctl/miscinit: print \"MyStartTime\" as a long long instead of long to avoid 2038 problem.\n \n\n\nOn Wed, Sep 25, 2024 at 03:17:45PM +0000, Max Johnson wrote:\n> I have amended my patch to reflect the changes that were discussed and\n> have verified on my system that it works the same as before. 
I have also\n> fixed a typo and changed the name of the patch to more accurately reflect\n> what it does now. Please let me know if there is anything else you'd like\n> me to do.\n\nThanks!  I went through all the other uses of MyStartTime and fixed those\nas needed, too.  Please let me know what you think.\n\n-- \nnathan", "msg_date": "Wed, 25 Sep 2024 20:04:59 +0000", "msg_from": "Max Johnson <max.johnson@novatechautomation.com>", "msg_from_op": true, "msg_subject": "Re: pg_ctl/miscinit: print \"MyStartTime\" as a long long instead of\n long to avoid 2038 problem." }, { "msg_contents": "On Wed, Sep 25, 2024 at 08:04:59PM +0000, Max Johnson wrote:\n> I think your patch looks good, no objections. I am happy to have contributed.\n\nGreat. I've attached what I have staged for commit.\n\nMy first instinct was to not bother back-patching this since all\ncurrently-supported versions will have been out of support for over 8 years\nby the time this becomes a practical issue. However, I wonder if it makes\nsense to back-patch for the kinds of 32-bit embedded systems you cited\nupthread. I can imagine that such systems might need to work for a very\nlong time without any software updates, in which case it'd probably be a\ngood idea to make this fix available in the next minor release. What do\nyou think?\n\n-- \nnathan", "msg_date": "Thu, 26 Sep 2024 21:38:40 -0500", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_ctl/miscinit: print \"MyStartTime\" as a long long instead of\n long to avoid 2038 problem." }, { "msg_contents": "I think that it would be a good idea to include these fixes in the next minor release. After working for a couple months on getting our embedded systems 2038 compliant, it has become very apparent that 2038 will be a substantial ordeal. Maximizing the number of systems that include this fix would make things a little easier when that time comes around.\n\nThanks,\n\nMax\n________________________________\nFrom: Nathan Bossart <nathandbossart@gmail.com>\nSent: Thursday, September 26, 2024 9:38 PM\nTo: Max Johnson <max.johnson@novatechautomation.com>\nCc: pgsql-hackers@postgresql.org <pgsql-hackers@postgresql.org>; tgl@sss.pgh.pa.us <tgl@sss.pgh.pa.us>\nSubject: Re: pg_ctl/miscinit: print \"MyStartTime\" as a long long instead of long to avoid 2038 problem.\n\nOn Wed, Sep 25, 2024 at 08:04:59PM +0000, Max Johnson wrote:\n> I think your patch looks good, no objections. I am happy to have contributed.\n\nGreat. I've attached what I have staged for commit.\n\nMy first instinct was to not bother back-patching this since all\ncurrently-supported versions will have been out of support for over 8 years\nby the time this becomes a practical issue. However, I wonder if it makes\nsense to back-patch for the kinds of 32-bit embedded systems you cited\nupthread. I can imagine that such systems might need to work for a very\nlong time without any software updates, in which case it'd probably be a\ngood idea to make this fix available in the next minor release. What do\nyou think?\n\n--\nnathan\n\n\n\n\n\n\n\n\nI think that it would be a good idea to include these fixes in the next minor release. After working for a couple months on getting our embedded systems 2038 compliant, it has become very apparent that 2038\n will be a substantial ordeal. 
Maximizing the number of systems that include this fix would make things a little easier when that time comes around.\n\n\n\n\nThanks,\n\n\n\n\nMax\n\n\n\nFrom: Nathan Bossart <nathandbossart@gmail.com>\nSent: Thursday, September 26, 2024 9:38 PM\nTo: Max Johnson <max.johnson@novatechautomation.com>\nCc: pgsql-hackers@postgresql.org <pgsql-hackers@postgresql.org>; tgl@sss.pgh.pa.us <tgl@sss.pgh.pa.us>\nSubject: Re: pg_ctl/miscinit: print \"MyStartTime\" as a long long instead of long to avoid 2038 problem.\n \n\n\nOn Wed, Sep 25, 2024 at 08:04:59PM +0000, Max Johnson wrote:\n> I think your patch looks good, no objections. I am happy to have contributed.\n\nGreat.  I've attached what I have staged for commit.\n\nMy first instinct was to not bother back-patching this since all\ncurrently-supported versions will have been out of support for over 8 years\nby the time this becomes a practical issue.  However, I wonder if it makes\nsense to back-patch for the kinds of 32-bit embedded systems you cited\nupthread.  I can imagine that such systems might need to work for a very\nlong time without any software updates, in which case it'd probably be a\ngood idea to make this fix available in the next minor release.  What do\nyou think?\n\n-- \nnathan", "msg_date": "Fri, 27 Sep 2024 14:48:01 +0000", "msg_from": "Max Johnson <max.johnson@novatechautomation.com>", "msg_from_op": true, "msg_subject": "Re: pg_ctl/miscinit: print \"MyStartTime\" as a long long instead of\n long to avoid 2038 problem." }, { "msg_contents": "On Fri, Sep 27, 2024 at 02:48:01PM +0000, Max Johnson wrote:\n> I think that it would be a good idea to include these fixes in the next\n> minor release. After working for a couple months on getting our embedded\n> systems 2038 compliant, it has become very apparent that 2038 will be a\n> substantial ordeal. Maximizing the number of systems that include this\n> fix would make things a little easier when that time comes around.\n\nAlright. I was able to back-patch it to v12 without too much trouble,\nfortunately. I'll commit that soon unless anyone else has feedback.\n\n-- \nnathan\n\n\n", "msg_date": "Fri, 27 Sep 2024 14:10:47 -0500", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_ctl/miscinit: print \"MyStartTime\" as a long long instead of\n long to avoid 2038 problem." } ]
[ { "msg_contents": "Hi hackers,\nI am new in contributing to postgres. I have a doubt regarding if we want\nto send a patch which has an extension and also has changes in pg source\nalso,what's the way to do it?\nIt seems odd right ? we are doing extension then ,then why changes in pg?\n\nThis is the 1st version of the idea,if it goes well 🤞 we can move the\nchanges from extension to pg.\n\nregards,\nTony Wayne\n\nHi hackers,I am new in contributing to postgres. I have a doubt regarding if we want to send a patch which has an extension and also has changes in pg source also,what's the way to do it?It seems odd right ? we are doing extension then ,then why changes in pg?This is the 1st version of the idea,if it goes well 🤞 we can move the changes from extension to pg. regards,Tony Wayne", "msg_date": "Wed, 25 Sep 2024 06:02:08 +0530", "msg_from": "Tony Wayne <anonymouslydark3@gmail.com>", "msg_from_op": true, "msg_subject": "How to send patch with so many files changes?" }, { "msg_contents": "On Wed, Sep 25, 2024 at 6:02 AM Tony Wayne <anonymouslydark3@gmail.com>\nwrote:\n\n>\n> I am new in contributing to postgres. I have a doubt regarding if we want\n> to send a patch which has an extension and also has changes in pg source\n> also,what's the way to do it?\n>\n> is git diff enough?\n\nOn Wed, Sep 25, 2024 at 6:02 AM Tony Wayne <anonymouslydark3@gmail.com> wrote:I am new in contributing to postgres. I have a doubt regarding if we want to send a patch which has an extension and also has changes in pg source also,what's the way to do it?is git diff enough?", "msg_date": "Wed, 25 Sep 2024 06:06:52 +0530", "msg_from": "Tony Wayne <anonymouslydark3@gmail.com>", "msg_from_op": true, "msg_subject": "Re: How to send patch with so many files changes?" }, { "msg_contents": "On Tue, Sep 24, 2024 at 5:32 PM Tony Wayne <anonymouslydark3@gmail.com>\nwrote:\n\n>\n> This is the 1st version of the idea,if it goes well 🤞 we can move the\n> changes from extension to pg.\n>\n>\nIf you are saying you are planning to add something to the contrib directly\nthen you should just post the patch(es) that do it. Your ability to make\nit digestible will highly influence whether anyone is willing to review it.\n\nIf this isn't intended for core or contrib you are not in the correct\nplace. If you wish to share an external extension you are publishing the\n-general channel would be the place to discuss such things.\n\nDavid J.\n\nOn Tue, Sep 24, 2024 at 5:32 PM Tony Wayne <anonymouslydark3@gmail.com> wrote:This is the 1st version of the idea,if it goes well 🤞 we can move the changes from extension to pg. If you are saying you are planning to add something to the contrib directly then you should just post the patch(es) that do it.  Your ability to make it digestible will highly influence whether anyone is willing to review it.If this isn't intended for core or contrib you are not in the correct place.  If you wish to share an external extension you are publishing the -general channel would be the place to discuss such things.David J.", "msg_date": "Tue, 24 Sep 2024 17:37:02 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: How to send patch with so many files changes?" }, { "msg_contents": "On Tue, Sep 24, 2024 at 5:37 PM Tony Wayne <anonymouslydark3@gmail.com>\nwrote:\n\n>\n> On Wed, Sep 25, 2024 at 6:02 AM Tony Wayne <anonymouslydark3@gmail.com>\n> wrote:\n>\n>>\n>> I am new in contributing to postgres. 
I have a doubt regarding if we want\n>> to send a patch which has an extension and also has changes in pg source\n>> also,what's the way to do it?\n>>\n>> is git diff enough?\n>\n\nUsually you'd want to use format-patch so your commit message(s) make it\ninto the artifact. Especially for something complex/large.\n\nDavid J.\n\nOn Tue, Sep 24, 2024 at 5:37 PM Tony Wayne <anonymouslydark3@gmail.com> wrote:On Wed, Sep 25, 2024 at 6:02 AM Tony Wayne <anonymouslydark3@gmail.com> wrote:I am new in contributing to postgres. I have a doubt regarding if we want to send a patch which has an extension and also has changes in pg source also,what's the way to do it?is git diff enough? Usually you'd want to use format-patch so your commit message(s) make it into the artifact.  Especially for something complex/large.David J.", "msg_date": "Tue, 24 Sep 2024 17:38:36 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: How to send patch with so many files changes?" }, { "msg_contents": "These changes are for core ,I think it would be better to either move whole\nchanges to core or contrib as an extension.\n\nOn Wed, Sep 25, 2024 at 6:09 AM David G. Johnston <\ndavid.g.johnston@gmail.com> wrote:\n\n> On Tue, Sep 24, 2024 at 5:37 PM Tony Wayne <anonymouslydark3@gmail.com>\n> wrote:\n>\n>>\n>> On Wed, Sep 25, 2024 at 6:02 AM Tony Wayne <anonymouslydark3@gmail.com>\n>> wrote:\n>>\n>>>\n>>> I am new in contributing to postgres. I have a doubt regarding if we\n>>> want to send a patch which has an extension and also has changes in pg\n>>> source also,what's the way to do it?\n>>>\n>>> is git diff enough?\n>>\n>\n> Usually you'd want to use format-patch so your commit message(s) make it\n> into the artifact. Especially for something complex/large.\n>\n> David J.\n>\n\nThese changes are for core ,I think it would be better to either move whole changes to core or contrib as an extension. On Wed, Sep 25, 2024 at 6:09 AM David G. Johnston <david.g.johnston@gmail.com> wrote:On Tue, Sep 24, 2024 at 5:37 PM Tony Wayne <anonymouslydark3@gmail.com> wrote:On Wed, Sep 25, 2024 at 6:02 AM Tony Wayne <anonymouslydark3@gmail.com> wrote:I am new in contributing to postgres. I have a doubt regarding if we want to send a patch which has an extension and also has changes in pg source also,what's the way to do it?is git diff enough? Usually you'd want to use format-patch so your commit message(s) make it into the artifact.  Especially for something complex/large.David J.", "msg_date": "Wed, 25 Sep 2024 06:19:39 +0530", "msg_from": "Tony Wayne <anonymouslydark3@gmail.com>", "msg_from_op": true, "msg_subject": "Re: How to send patch with so many files changes?" }, { "msg_contents": "On Wed, Sep 25, 2024 at 06:19:39AM +0530, Tony Wayne wrote:\n> These changes are for core ,I think it would be better to either move whole\n> changes to core or contrib as an extension.\n\nPlease avoid top-posting. The community mailing lists use\nbottom-posting, to ease discussions. See:\nhttps://en.wikipedia.org/wiki/Posting_style#Bottom-posting\n\n> On Wed, Sep 25, 2024 at 6:09 AM David G. Johnston <\n> david.g.johnston@gmail.com> wrote:\n>> Usually you'd want to use format-patch so your commit message(s) make it\n>> into the artifact. 
Especially for something complex/large.\n\nThe community wiki has some guidelines about all that:\nhttps://wiki.postgresql.org/wiki/Submitting_a_Patch\n\nIn my experience, it is much easier to sell a feature to the community\nif a patch is organized into independent useful pieces with\nrefactoring pieces presented on top of the actual feature. In order\nto achieve that `git format-patch` is essential because it is possible\nto present a patch set organizing your ideas so as others need to\nspend less time trying to figure out what a patch set is doing when\ndoing a review. format-patch with `git am` is also quite good to\ntrack the addition of new files or the removal of old files. Writing\nyour ideas in the commit logs can also bring a lot of insight for\nanybody reading your patches.\n\nFor simpler and localized changes, using something like git diff would\nbe also OK that can be applied with a simple `patch` command can also\nbe fine. I've done plenty of work with patches sent to the lists this\nway for bug fixes. Of course this is case-by-case, for rather complex\nbug fixes format-patch can still be a huge gain of time when reading\nsomebody else's ideas on a specific matter.\n--\nMichael", "msg_date": "Wed, 25 Sep 2024 10:06:38 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: How to send patch with so many files changes?" }, { "msg_contents": "On Wed, Sep 25, 2024 at 6:36 AM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Wed, Sep 25, 2024 at 06:19:39AM +0530, Tony Wayne wrote:\n> > These changes are for core ,I think it would be better to either move\n> whole\n> > changes to core or contrib as an extension.\n>\n> Please avoid top-posting. The community mailing lists use\n> bottom-posting, to ease discussions. See:\n> https://en.wikipedia.org/wiki/Posting_style#Bottom-posting\n>\n> Thanks for the feedback.\n\n> > On Wed, Sep 25, 2024 at 6:09 AM David G. Johnston <\n> > david.g.johnston@gmail.com> wrote:\n> >> Usually you'd want to use format-patch so your commit message(s) make it\n> >> into the artifact. Especially for something complex/large.\n>\n> The community wiki has some guidelines about all that:\n> https://wiki.postgresql.org/wiki/Submitting_a_Patch\n>\n> In my experience, it is much easier to sell a feature to the community\n> if a patch is organized into independent useful pieces with\n> refactoring pieces presented on top of the actual feature. In order\n> to achieve that `git format-patch` is essential because it is possible\n> to present a patch set organizing your ideas so as others need to\n> spend less time trying to figure out what a patch set is doing when\n> doing a review. format-patch with `git am` is also quite good to\n> track the addition of new files or the removal of old files. Writing\n> your ideas in the commit logs can also bring a lot of insight for\n> anybody reading your patches.\n>\n> For simpler and localized changes, using something like git diff would\n> be also OK that can be applied with a simple `patch` command can also\n> be fine. I've done plenty of work with patches sent to the lists this\n> way for bug fixes. 
Of course this is case-by-case, for rather complex\n> bug fixes format-patch can still be a huge gain of time when reading\n> somebody else's ideas on a specific matter.\n> --\n> Michael\n>\nThanks, I got it 👍.\n\nOn Wed, Sep 25, 2024 at 6:36 AM Michael Paquier <michael@paquier.xyz> wrote:On Wed, Sep 25, 2024 at 06:19:39AM +0530, Tony Wayne wrote:\n> These changes are for core ,I think it would be better to either move whole\n> changes to core or contrib as an extension.\n\nPlease avoid top-posting.  The community mailing lists use\nbottom-posting, to ease discussions.  See:\nhttps://en.wikipedia.org/wiki/Posting_style#Bottom-posting\nThanks for the feedback. \n> On Wed, Sep 25, 2024 at 6:09 AM David G. Johnston <\n> david.g.johnston@gmail.com> wrote:\n>> Usually you'd want to use format-patch so your commit message(s) make it\n>> into the artifact.  Especially for something complex/large.\n\nThe community wiki has some guidelines about all that:\nhttps://wiki.postgresql.org/wiki/Submitting_a_Patch\n\nIn my experience, it is much easier to sell a feature to the community\nif a patch is organized into independent useful pieces with\nrefactoring pieces presented on top of the actual feature.  In order\nto achieve that `git format-patch` is essential because it is possible\nto present a patch set organizing your ideas so as others need to\nspend less time trying to figure out what a patch set is doing when\ndoing a review.  format-patch with `git am` is also quite good to\ntrack the addition of new files or the removal of old files.  Writing\nyour ideas in the commit logs can also bring a lot of insight for\nanybody reading your patches.\n\nFor simpler and localized changes, using something like git diff would\nbe also OK that can be applied with a simple `patch` command can also\nbe fine.  I've done plenty of work with patches sent to the lists this\nway for bug fixes.  Of course this is case-by-case, for rather complex\nbug fixes format-patch can still be a huge gain of time when reading\nsomebody else's ideas on a specific matter.\n--\nMichaelThanks, I got it 👍.", "msg_date": "Wed, 25 Sep 2024 06:50:09 +0530", "msg_from": "Tony Wayne <anonymouslydark3@gmail.com>", "msg_from_op": true, "msg_subject": "Re: How to send patch with so many files changes?" } ]
[ { "msg_contents": "Hi hackers,\n\n\nThe following code fails to pass the ecpg compilation, although it is accepted by the gcc compiler.\n\n\n```\n#if ABC /* this is a multi-line\n * comment including single star character */\nint a = 1;\n#endif\n```\n\n\nThe issue arises from the first '*' in the second line. Upon its removal, the ecpg compiler functions properly.\n\n\n```\n#if ABC /* this is a multi-line\n comment without single star character */\nint a = 1;\n#endif\n```\n\n\nThe problem has been identified as a bug in the `cppline` definition within the `pgc.l` file.\n\n\n```\ncppline {space}*#([^i][A-Za-z]*|{if}|{ifdef}|{ifndef}|{import})((\\/\\*[^*/]*\\*+\\/)|.|\\\\{space}*{newline})*{newline}\n```\n[Source](https://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=src/interfaces/ecpg/preproc/pgc.l;h=f3c03482aec61956691f30426f61510920c5c702;hb=HEAD#l461)\n\n\nMore specifically, the bug originates from the regex pattern for the C block code comment.\n\n\n```\n\\/\\*[^*/]*\\*+\\/\n```\n\n\nAttempting another example:\n\n\n```\n#if ABC /* hello * world */\nint a = 1;\n#endif\n```\n\n\nThis time, the ecpg compiler also functions correctly.\n\n\nConfused! I am uncertain how to rectify the regex. I hope someone can address this bug.\nHi hackers,The following code fails to pass the ecpg compilation, although it is accepted by the gcc compiler.```#if ABC /* this is a multi-line              * comment including single star character */int a = 1;#endif```The issue arises from the first '*' in the second line. Upon its removal, the ecpg compiler functions properly.```#if ABC /* this is a multi-line                comment without single star character */int a = 1;#endif```The problem has been identified as a bug in the `cppline` definition within the `pgc.l` file.```cppline {space}*#([^i][A-Za-z]*|{if}|{ifdef}|{ifndef}|{import})((\\/\\*[^*/]*\\*+\\/)|.|\\\\{space}*{newline})*{newline}```[Source](https://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=src/interfaces/ecpg/preproc/pgc.l;h=f3c03482aec61956691f30426f61510920c5c702;hb=HEAD#l461)More specifically, the bug originates from the regex pattern for the C block code comment.```\\/\\*[^*/]*\\*+\\/```Attempting another example:```#if ABC /* hello * world */int a = 1;#endif```This time, the ecpg compiler also functions correctly.Confused! I am uncertain how to rectify the regex. I hope someone can address this bug.", "msg_date": "Wed, 25 Sep 2024 15:36:48 +0800 (CST)", "msg_from": "\"Winter Loo\" <winterloo@126.com>", "msg_from_op": true, "msg_subject": "[ecpg bug]: can not use single '*' in multi-line comment after c\n preprocessor directives" }, { "msg_contents": "\"Winter Loo\" <winterloo@126.com> writes:\n> The following code fails to pass the ecpg compilation, although it is accepted by the gcc compiler.\n\nYeah ... an isolated \"/\" inside the comment doesn't work either.\n\n> Confused! I am uncertain how to rectify the regex. I hope someone can address this bug.\n\nI poked at this for awhile and concluded that we probably cannot make\nit work with a single regexp for \"cppline\". The right thing would\ninvolve an exclusive start condition for parsing a cppline, more or\nless like the way that /* comments are parsed in the <xc> start\ncondition. 
This is kind of a lot of work compared to the value :-(.\nMaybe somebody else would like to take a crack at it, but I can't\nget excited enough about it.\n\nThere are other deficiencies too in ecpg's handling of these things,\nlike the fact that (I think) comments are mishandled in #include\ndirectives.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 26 Sep 2024 15:04:58 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [ecpg bug]: can not use single '*' in multi-line comment after c\n preprocessor directives" } ]
[ { "msg_contents": "Hi,\n\nI'm working on the flags VERSION (X076), INCLUDING XMLDECLARATION, and\nEXCLUDING XMLDECLARATION (X078) for XMLSerialize, and I have a question\nfor SQL/XML experts on the list.\n\nIs there any validation mechanism for VERSION <character string\nliteral>? The SQL/XML spec says\n\n\"The <character string literal> immediately contained in <XML serialize\nversion> shall be '1.0' or '1.1', or it shall identify some successor to\nXML 1.0 and XML 1.1.\"\n\nI was wondering if a validation here would make any sense, since\nXMLSerialize is only supposed to print a string --- not to mention that\nvalidating \"some successor to XML 1.0 and XML 1.1\" can be challenging :)\nBut again, printing an \"invalid\" XML string also doesn't seem very nice.\n\nThe oracle implementation accepts pretty much anything:\n\nSQL> SELECT xmlserialize(DOCUMENT xmltype('<foo><bar>42</bar></foo>')\nVERSION 'foo') AS xml FROM dual;\n\nXML\n--------------------------------------------------------------------------------\n<?xml version=\"foo\"?>\n<foo>\n  <bar>42</bar>\n</foo>\n\n\nIn db2, anything other than '1.0' raises an error:\n\ndb2 => SELECT XMLSERIALIZE(CONTENT XMLELEMENT(NAME \"db2\",service_level)\nAS varchar(100) VERSION '1.0' INCLUDING XMLDECLARATION) FROM\nsysibmadm.env_inst_info;\n\n1                                                                                                 \n \n----------------------------------------------------------------------------------------------------\n<?xml version=\"1.0\" encoding=\"UTF-8\"?><db2>DB2\nv11.5.9.0</db2>                                      \n\n  1 record(s) selected.\n\n\ndb2 => SELECT XMLSERIALIZE(CONTENT XMLELEMENT(NAME \"db2\",service_level)\nAS varchar(100) VERSION '1.1' INCLUDING XMLDECLARATION) FROM\nsysibmadm.env_inst_info;\nSQL0171N  The statement was not processed because the data type, length or\nvalue of the argument for the parameter in position \"2\" of routine\n\"XMLSERIALIZE\" is incorrect. Parameter name: \"\".  SQLSTATE=42815\n\nAny thoughts on how we should approach this feature?\n\nThanks!\n\nBest, Jim\n\n\n", "msg_date": "Wed, 25 Sep 2024 14:51:45 +0200", "msg_from": "Jim Jones <jim.jones@uni-muenster.de>", "msg_from_op": true, "msg_subject": "XMLSerialize: version and explicit XML declaration" }, { "msg_contents": "Jim Jones <jim.jones@uni-muenster.de> writes:\n> Is there any validation mechanism for VERSION <character string\n> literal>?\n\nAFAICS, all we do with an embedded XML version string is pass it to\nlibxml2's xmlNewDoc(), which is the authority on whether it means\nanything. I'd be inclined to do the same here.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 25 Sep 2024 12:02:47 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: XMLSerialize: version and explicit XML declaration" }, { "msg_contents": "Hi Tom\n\nOn 25.09.24 18:02, Tom Lane wrote:\n> AFAICS, all we do with an embedded XML version string is pass it to\n> libxml2's xmlNewDoc(), which is the authority on whether it means\n> anything. I'd be inclined to do the same here.\n\nThanks. I used xml_is_document(), which calls xmlNewDoc(), to check if\nthe returned document is valid or not. 
It then decides if an unexpected\nversion deserves an error or just a warning.\n\nAttached v1 with the first attempt to implement these features.\n\n==== INCLUDING / EXCLUDING XMLDECLARATION (SQL/XML X078) ====\n\nThe flags INCLUDING XMLDECLARATION and EXCLUDING XMLDECLARATION include\nor remove the XML declaration in the XMLSerialize output of the given\nDOCUMENT or CONTENT, respectively.\n\nSELECT\n  xmlserialize(\n    DOCUMENT '<foo><bar>42</bar></foo>'::xml AS text\n    INCLUDING XMLDECLARATION);\n\n                         xmlserialize\n---------------------------------------------------------------\n <?xml version=\"1.0\" encoding=\"UTF8\"?><foo><bar>42</bar></foo>\n(1 row)\n\nSELECT\n  xmlserialize(\n    DOCUMENT '<?xml version=\"1.0\"\nencoding=\"UTF-8\"?><foo><bar>42</bar></foo>'::xml AS text\n    EXCLUDING XMLDECLARATION);\n\n       xmlserialize\n--------------------------\n <foo><bar>42</bar></foo>\n(1 row)\n\n\nIf omitted, the output will contain an XML declaration only if the given\nXML value had one.\n\nSELECT\n  xmlserialize(\n    DOCUMENT '<?xml version=\"1.0\"\nencoding=\"UTF-8\"?><foo><bar>42</bar></foo>'::xml AS text);\n\n                          xmlserialize                          \n----------------------------------------------------------------\n <?xml version=\"1.0\" encoding=\"UTF-8\"?><foo><bar>42</bar></foo>\n(1 row)\n\nSELECT\n  xmlserialize(\n    DOCUMENT '<foo><bar>42</bar></foo>'::xml AS text);\n       xmlserialize       \n--------------------------\n <foo><bar>42</bar></foo>\n(1 row)\n\n\n==== VERSION (SQL/XML X076)====\n\nVERSION can be used to specify the version in the XML declaration of the\nserialized DOCUMENT or CONTENT.\n\nSELECT\n  xmlserialize(\n    DOCUMENT '<foo><bar>42</bar></foo>'::xml AS text\n    VERSION '1.0'\n    INCLUDING XMLDECLARATION);\n    \n                         xmlserialize                          \n---------------------------------------------------------------\n <?xml version=\"1.0\" encoding=\"UTF8\"?><foo><bar>42</bar></foo>\n(1 row)\n\n\nIn case of XML values of type DOCUMENT, the version will be validated by\nlibxml2's xmlNewDoc(), which will raise an error for invalid\nversions or a warning for unsupported ones. 
For CONTENT values no\nvalidation is performed.\n\nSELECT\n  xmlserialize(\n    DOCUMENT '<foo><bar>42</bar></foo>'::xml AS text\n    VERSION '1.1'\n    INCLUDING XMLDECLARATION);\n    \nWARNING:  line 1: Unsupported version '1.1'\n<?xml version=\"1.1\" encoding=\"UTF8\"?><foo><bar>42</bar></foo>\n                   ^\n                         xmlserialize\n---------------------------------------------------------------\n <?xml version=\"1.1\" encoding=\"UTF8\"?><foo><bar>42</bar></foo>\n(1 row)\n\nSELECT\n  xmlserialize(\n    DOCUMENT '<foo><bar>42</bar></foo>'::xml AS text\n    VERSION '2.0'\n    INCLUDING XMLDECLARATION);\n\nERROR:  Invalid XML declaration: VERSION '2.0'\n\nSELECT\n  xmlserialize(\n    CONTENT '<foo><bar>42</bar></foo>'::xml AS text\n    VERSION '2.0'\n    INCLUDING XMLDECLARATION);\n\n                         xmlserialize\n---------------------------------------------------------------\n <?xml version=\"2.0\" encoding=\"UTF8\"?><foo><bar>42</bar></foo>\n(1 row)\n\nThis option is ignored if the XML value had no XML declaration and\nINCLUDING XMLDECLARATION was not used.\n\nSELECT\n  xmlserialize(\n    CONTENT '<foo><bar>42</bar></foo>'::xml AS text\n    VERSION '1111');\n\n       xmlserialize\n--------------------------\n <foo><bar>42</bar></foo>\n(1 row)\n\n\nBest, Jim", "msg_date": "Mon, 30 Sep 2024 10:08:34 +0200", "msg_from": "Jim Jones <jim.jones@uni-muenster.de>", "msg_from_op": true, "msg_subject": "Re: XMLSerialize: version and explicit XML declaration" } ]
[ { "msg_contents": "Hi,\n\njust came across this:\n\nsrc/backend/optimizer/util/plancat.c -> Is this correct English?\n-> We need not lock the relation since it was already locked ... \n\nI am not a native speaker, but this sounds strange.\n\nRegards\nDaniel?\n\n", "msg_date": "Wed, 25 Sep 2024 16:52:47 +0000", "msg_from": "\"Daniel Westermann (DWE)\" <daniel.westermann@dbi-services.com>", "msg_from_op": true, "msg_subject": "src/backend/optimizer/util/plancat.c -> Is this correct English" }, { "msg_contents": "On Wed, Sep 25, 2024 at 04:52:47PM +0000, Daniel Westermann (DWE) wrote:\n> just came across this:\n> \n> src/backend/optimizer/util/plancat.c -> Is this correct English?\n> -> We need not lock the relation since it was already locked ... \n> \n> I am not a native speaker, but this sounds strange.\n\nI think it's fine. It could also be phrased like this:\n\n\tWe do not need to lock the relation since...\n\n-- \nnathan\n\n\n", "msg_date": "Wed, 25 Sep 2024 11:59:04 -0500", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: src/backend/optimizer/util/plancat.c -> Is this correct English" }, { "msg_contents": ">>I think it's fine. It could also be phrased like this:\n>\n> We do not need to lock the relation since...\n\nThat's what I would have expected. But, as said, maybe this only sounds strange to me.\n\nRegards\nDaniel\n\n\n\n\n\n\n\n\n>>I think it's fine.  It could also be phrased like this:\n>\n>        We do not need to lock the relation since...\n\n\n\n\nThat's what I would have expected. But, as said, maybe this only sounds strange to me.\n\n\n\n\nRegards\n\nDaniel", "msg_date": "Wed, 25 Sep 2024 17:35:48 +0000", "msg_from": "\"Daniel Westermann (DWE)\" <daniel.westermann@dbi-services.com>", "msg_from_op": true, "msg_subject": "Re: src/backend/optimizer/util/plancat.c -> Is this correct English" }, { "msg_contents": "\"Daniel Westermann (DWE)\" <daniel.westermann@dbi-services.com> writes:\n> That's what I would have expected. But, as said, maybe this only sounds strange to me.\n\n\"Need not\" is perfectly good English, although perhaps it has a\nfaintly archaic whiff to it.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 25 Sep 2024 13:50:30 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: src/backend/optimizer/util/plancat.c -> Is this correct English" } ]
[ { "msg_contents": "Hello hackers,\n\nThis patch is a follow-up and generalization to [0].\n\nIt adds the following jsonpath methods: lower, upper, initcap, l/r/btrim,\nreplace, split_part.\n\nIt makes jsonpath able to support expressions like these:\n\nselect jsonb_path_query('\" hElLo WorlD \"',\n '$.btrim().lower().upper().lower().replace(\"hello\",\"bye\") starts with\n\"bye\"');\nselect jsonb_path_query('\"abc~@~def~@~ghi\"', '$.split_part(\"~@~\", 2)')\n\nThey, of course, forward their implementation to the internal\npg_proc-registered function.\n\nAs a first wip/poc I've picked the functions I typically need to clean up\nJSON data.\nI've also added a README.jsonpath with documentation on how to add a new\njsonpath method.\nIf I had this available when I started, it would have saved me some time.\nSo, I am leaving it here for the next hacker.\n\nThis patch is not particularly intrusive to existing code:\nAfaict, the only struct I've touched is JsonPathParseItem , where I added {\nJsonPathParseItem *arg0, *arg1; } method_args.\nUp until now, most of the jsonpath methods that accept arguments rely on\nleft/right operands,\nwhich works, but it could be more convenient for future more complex\nmethods.\nI've also added the appropriate jspGetArgX(JsonPathItem *v, JsonPathItem\n*a).\n\nOpen items\n- What happens if the jsonpath standard adds a new method by the same name?\nA.D. mentioned this in [0] with the proposal of having a prefix like pg_ or\ninitial-upper letter.\n- Still using the default collation like the rest of the jsonpath code.\n- documentation N/A yet\n- I do realize that the process of adding a new method sketches an\nimaginary.\nCREATE JSONPATH FUNCTION. This has been on the back of my mind for some\ntime now,\nbut I can't say I have an action plan for this yet.\n\nGitHub PR view if you prefer:\nhttps://github.com/Florents-Tselai/postgres/pull/18\n\n[0]\nhttps://www.postgresql.org/message-id/flat/185BF814-9225-46DB-B1A1-6468CF2C8B63%40justatheory.com#1850a37a98198974cf543aefe225ba56\n\nAll the best,\nFlo", "msg_date": "Wed, 25 Sep 2024 21:17:20 +0300", "msg_from": "Florents Tselai <florents.tselai@gmail.com>", "msg_from_op": true, "msg_subject": "PATCH: jsonpath string methods: lower, upper, initcap, l/r/btrim,\n replace, split_part" }, { "msg_contents": "Florents Tselai <florents.tselai@gmail.com> writes:\n> This patch is a follow-up and generalization to [0].\n> It adds the following jsonpath methods: lower, upper, initcap, l/r/btrim,\n> replace, split_part.\n\nHow are you going to deal with the fact that this makes jsonpath\noperations not guaranteed immutable? (See commit cb599b9dd\nfor some context.) 
Those are all going to have behavior that's\ndependent on the underlying locale.\n\nWe have the kluge of having separate \"_tz\" functions to support\nnon-immutable datetime operations, but that way doesn't seem like\nit's going to scale well to multiple sources of mutability.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 25 Sep 2024 17:03:57 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: PATCH: jsonpath string methods: lower, upper, initcap, l/r/btrim,\n replace, split_part" }, { "msg_contents": "On Thu, Sep 26, 2024 at 12:04 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Florents Tselai <florents.tselai@gmail.com> writes:\n> > This patch is a follow-up and generalization to [0].\n> > It adds the following jsonpath methods: lower, upper, initcap, l/r/btrim,\n> > replace, split_part.\n>\n> How are you going to deal with the fact that this makes jsonpath\n> operations not guaranteed immutable? (See commit cb599b9dd\n> for some context.) Those are all going to have behavior that's\n> dependent on the underlying locale.\n>\n> We have the kluge of having separate \"_tz\" functions to support\n> non-immutable datetime operations, but that way doesn't seem like\n> it's going to scale well to multiple sources of mutability.\n\nWhile inventing \"_tz\" functions I was thinking about jsonpath methods\nand operators defined in standard then. Now I see huge interest on\nextending that. I wonder if we can introduce a notion of flexible\nmutability? Imagine that jsonb_path_query() function (and others) has\nanother function which analyzes arguments and reports mutability. If\njsonpath argument is constant and all methods inside are safe then\njsonb_path_query() is immutable otherwise it is stable. I was\nthinking about that back working on jsonpath, but that time problem\nseemed too limited for this kind of solution. Now, it's possibly time\nto shake off the dust from this idea. 
What do you think?\n\n------\nRegards,\nAlexander Korotkov\nSupabase\n\n\n", "msg_date": "Thu, 26 Sep 2024 13:55:44 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PATCH: jsonpath string methods: lower, upper, initcap, l/r/btrim,\n replace, split_part" }, { "msg_contents": "Hi, Florents!\n\nOn Wed, Sep 25, 2024 at 9:18 PM Florents Tselai\n<florents.tselai@gmail.com> wrote:\n> This patch is a follow-up and generalization to [0].\n>\n> It adds the following jsonpath methods: lower, upper, initcap, l/r/btrim, replace, split_part.\n>\n> It makes jsonpath able to support expressions like these:\n>\n> select jsonb_path_query('\" hElLo WorlD \"', '$.btrim().lower().upper().lower().replace(\"hello\",\"bye\") starts with \"bye\"');\n> select jsonb_path_query('\"abc~@~def~@~ghi\"', '$.split_part(\"~@~\", 2)')\n\nDid you check if these new methods now in SQL standard or project of\nSQL standard?\n\n------\nRegards,\nAlexander Korotkov\nSupabase\n\n\n", "msg_date": "Thu, 26 Sep 2024 13:57:25 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PATCH: jsonpath string methods: lower, upper, initcap, l/r/btrim,\n replace, split_part" }, { "msg_contents": "On Thu, Sep 26, 2024 at 1:55 PM Alexander Korotkov <aekorotkov@gmail.com>\nwrote:\n\n> On Thu, Sep 26, 2024 at 12:04 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Florents Tselai <florents.tselai@gmail.com> writes:\n> > > This patch is a follow-up and generalization to [0].\n> > > It adds the following jsonpath methods: lower, upper, initcap,\n> l/r/btrim,\n> > > replace, split_part.\n> >\n> > How are you going to deal with the fact that this makes jsonpath\n> > operations not guaranteed immutable? (See commit cb599b9dd\n> > for some context.) Those are all going to have behavior that's\n> > dependent on the underlying locale.\n> >\n> > We have the kluge of having separate \"_tz\" functions to support\n> > non-immutable datetime operations, but that way doesn't seem like\n> > it's going to scale well to multiple sources of mutability.\n>\n> While inventing \"_tz\" functions I was thinking about jsonpath methods\n> and operators defined in standard then. Now I see huge interest on\n> extending that. I wonder if we can introduce a notion of flexible\n> mutability? Imagine that jsonb_path_query() function (and others) has\n> another function which analyzes arguments and reports mutability. If\n> jsonpath argument is constant and all methods inside are safe then\n> jsonb_path_query() is immutable otherwise it is stable. I was\n> thinking about that back working on jsonpath, but that time problem\n> seemed too limited for this kind of solution. Now, it's possibly time\n> to shake off the dust from this idea. 
What do you think?\n>\n> ------\n> Regards,\n> Alexander Korotkov\n> Supabase\n>\n\nIn case you're having a deja vu, while researching this\nI did come across [0] where disussing this back in 2019.\n\nIn this patch I've conveniently left jspIsMutable and jspIsMutableWalker\nuntouched and under the rug,\nbut for the few seconds I pondered over this,the best answer I came with\nwas\na simple heuristic to what Alexander says above:\nif all elements are safe, then the whole jsp is immutable.\n\nIf we really want to tackle this and make jsonpath richer though,\nI don't think we can avoid being a little more flexible/explicit wrt\nmutability.\n\nSpeaking of extensible: the jsonpath standard does mention function\nextensions [1] ,\nso it looks like we're covered by the standard, and the mutability aspect\nis an implementation detail. No?\nAnd having said that, the whole jsonb/jsonpath parser/executor\ninfrastructure is extremely powerful\nand kinda under-utilized if we use it \"only\" for jsonpath.\nTbh, I can see it supporting more specific DSLs and even offering hooks for\nextensions.\nAnd I know for certain I'm not the only one thinking about this.\nSee [2] for example where they've lifted, shifted and renamed the\njsonb/jsonpath infra to build a separate language for graphs\n\n[0]\nhttps://www.postgresql.org/message-id/CAPpHfdvDci4iqNF9fhRkTqhe-5_8HmzeLt56drH+_Rv2rNRqfg@mail.gmail.com\n[1] https://www.rfc-editor.org/rfc/rfc9535.html#name-function-extensions\n[2] https://github.com/apache/age/blob/master/src/include/utils/agtype.h\n\nOn Thu, Sep 26, 2024 at 1:55 PM Alexander Korotkov <aekorotkov@gmail.com> wrote:On Thu, Sep 26, 2024 at 12:04 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Florents Tselai <florents.tselai@gmail.com> writes:\n> > This patch is a follow-up and generalization to [0].\n> > It adds the following jsonpath methods:  lower, upper, initcap, l/r/btrim,\n> > replace, split_part.\n>\n> How are you going to deal with the fact that this makes jsonpath\n> operations not guaranteed immutable?  (See commit cb599b9dd\n> for some context.)  Those are all going to have behavior that's\n> dependent on the underlying locale.\n>\n> We have the kluge of having separate \"_tz\" functions to support\n> non-immutable datetime operations, but that way doesn't seem like\n> it's going to scale well to multiple sources of mutability.\n\nWhile inventing \"_tz\" functions I was thinking about jsonpath methods\nand operators defined in standard then.  Now I see huge interest on\nextending that.  I wonder if we can introduce a notion of flexible\nmutability?  Imagine that jsonb_path_query() function (and others) has\nanother function which analyzes arguments and reports mutability.  If\njsonpath argument is constant and all methods inside are safe then\njsonb_path_query() is immutable otherwise it is stable.  I was\nthinking about that back working on jsonpath, but that time problem\nseemed too limited for this kind of solution.  Now, it's possibly time\nto shake off the dust from this idea.  
What do you think?\n\n------\nRegards,\nAlexander Korotkov\nSupabaseIn case you're having a deja vu, while researching this  I did come across [0] where disussing this back in 2019.In this patch I've conveniently left jspIsMutable and jspIsMutableWalker untouched and under the rug,but for the few seconds I pondered over this,the best answer I came with was a simple heuristic to what Alexander says above:if all elements are safe, then the whole jsp is immutable.If we really want to tackle this and make jsonpath richer though, I don't think we can avoid being a little more flexible/explicit wrt mutability.Speaking of extensible: the jsonpath standard does mention function extensions [1] ,so it looks like we're covered by the standard, and the mutability aspect is an implementation detail. No?And having said that, the whole jsonb/jsonpath parser/executor infrastructure is extremely powerfuland kinda under-utilized if we use it \"only\" for jsonpath.Tbh, I can see it supporting more specific DSLs and even offering hooks for extensions.And I know for certain I'm not the only one thinking about this.See [2] for example where they've lifted, shifted and renamed the jsonb/jsonpath infra to build a separate language for graphs[0] https://www.postgresql.org/message-id/CAPpHfdvDci4iqNF9fhRkTqhe-5_8HmzeLt56drH+_Rv2rNRqfg@mail.gmail.com[1] https://www.rfc-editor.org/rfc/rfc9535.html#name-function-extensions[2] https://github.com/apache/age/blob/master/src/include/utils/agtype.h", "msg_date": "Thu, 26 Sep 2024 15:59:51 +0300", "msg_from": "Florents Tselai <florents.tselai@gmail.com>", "msg_from_op": true, "msg_subject": "Re: PATCH: jsonpath string methods: lower, upper, initcap, l/r/btrim,\n replace, split_part" }, { "msg_contents": "On Sep 26, 2024, at 13:59, Florents Tselai <florents.tselai@gmail.com> wrote:\n\n> Speaking of extensible: the jsonpath standard does mention function extensions [1] ,\n> so it looks like we're covered by the standard, and the mutability aspect is an implementation detail. No?\n\nThat’s not the standard used for Postgres jsonpath. Postgres follows the SQL/JSON standard in the SQL standard, which is not publicly available, but a few people on the list have copies they’ve purchased and so could provide some context.\n\nIn a previous post I wondered if the SQL standard had some facility for function extensions, but I suspect not. Maybe in the next iteration?\n\n> And having said that, the whole jsonb/jsonpath parser/executor infrastructure is extremely powerful\n> and kinda under-utilized if we use it \"only\" for jsonpath.\n> Tbh, I can see it supporting more specific DSLs and even offering hooks for extensions.\n> And I know for certain I'm not the only one thinking about this.\n> See [2] for example where they've lifted, shifted and renamed the jsonb/jsonpath infra to build a separate language for graphs\n\nI’m all for extensibility, though jsonpath does need to continue to comply with the SQL standard. Do you have some idea of the sorts of hooks that would allow extension authors to use some of that underlying capability?\n\nBest,\n\nDavid\n\n\n\n", "msg_date": "Fri, 27 Sep 2024 10:45:23 +0100", "msg_from": "\"David E. Wheeler\" <david@justatheory.com>", "msg_from_op": false, "msg_subject": "Re: PATCH: jsonpath string methods: lower, upper, initcap, l/r/btrim,\n replace, split_part" }, { "msg_contents": "\n\n> On 27 Sep 2024, at 12:45 PM, David E. 
Wheeler <david@justatheory.com> wrote:\n> \n> On Sep 26, 2024, at 13:59, Florents Tselai <florents.tselai@gmail.com> wrote:\n> \n>> Speaking of extensible: the jsonpath standard does mention function extensions [1] ,\n>> so it looks like we're covered by the standard, and the mutability aspect is an implementation detail. No?\n> \n> That’s not the standard used for Postgres jsonpath. Postgres follows the SQL/JSON standard in the SQL standard, which is not publicly available, but a few people on the list have copies they’ve purchased and so could provide some context.\n> \n> In a previous post I wondered if the SQL standard had some facility for function extensions, but I suspect not. Maybe in the next iteration?\n> \n>> And having said that, the whole jsonb/jsonpath parser/executor infrastructure is extremely powerful\n>> and kinda under-utilized if we use it \"only\" for jsonpath.\n>> Tbh, I can see it supporting more specific DSLs and even offering hooks for extensions.\n>> And I know for certain I'm not the only one thinking about this.\n>> See [2] for example where they've lifted, shifted and renamed the jsonb/jsonpath infra to build a separate language for graphs\n> \n> I’m all for extensibility, though jsonpath does need to continue to comply with the SQL standard. Do you have some idea of the sorts of hooks that would allow extension authors to use some of that underlying capability?\n\nRe-tracing what I had to do\n\n1. Define a new JsonPathItemType jpiMyExtType and map it to a JsonPathKeyword\n2. Add a new JsonPathKeyword and make the lexer and parser aware of that,\n3. Tell the main executor executeItemOptUnwrapTarget what to do when the new type is matched.\n\nI think 1, 2 are the trickiest because they require hooks to jsonpath_scan.l and parser jsonpath_gram.y \n\n3. is the meat of a potential hook, which would be something like \nextern JsonPathExecResult executeOnMyJsonpathItem(JsonPathExecContext *cxt, JsonbValue *jb, JsonValueList *found);\nThis should be called by the main executor executeItemOptUnwrapTarget when it encounters case jpiMyExtType\n\nIt looks like quite an endeavor, to be honest.\n\n", "msg_date": "Fri, 27 Sep 2024 13:28:21 +0300", "msg_from": "Florents Tselai <florents.tselai@gmail.com>", "msg_from_op": true, "msg_subject": "Re: PATCH: jsonpath string methods: lower, upper, initcap, l/r/btrim,\n replace, split_part" } ]
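To make the mutability concern discussed in this thread concrete: the proposed string methods follow the underlying locale, much as the datetime methods follow the session timezone, which is why the existing stable `_tz` function variants came up as a precedent. A minimal sketch — the first query assumes the proposed patch is applied, the second uses the already-existing `jsonb_path_exists_tz()`:

```
-- Assumes the proposed patch: lower()'s result depends on the underlying
-- locale, so the expression is not guaranteed immutable across environments.
SELECT jsonb_path_query('" hElLo WorlD "', '$.btrim().lower()');

-- Existing precedent: timezone-dependent jsonpath operations are only
-- available through the separate, stable "_tz" functions.
SELECT jsonb_path_exists_tz('["2015-08-01 12:00:00-05"]',
                            '$[*] ? (@.datetime() < "2015-08-02".datetime())');
```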
[ { "msg_contents": "Hi all\n\nWhile testing out 17 RC1 I found that a construct that previously worked\nhas now stopped working:\n\nCREATE OR REPLACE FUNCTION index_truncate(src TEXT) RETURNS TEXT AS $$\n SELECT LOWER(LEFT(src, 100));\n$$ LANGUAGE SQL;\n\nCREATE OR REPLACE FUNCTION join_for_index(TEXT [])\n RETURNS TEXT LANGUAGE SQL IMMUTABLE AS\n$$\nSELECT index_truncate(array_to_string($1, ' '))\n$$;\n\nDROP TABLE IF EXISTS test;\nCREATE TABLE test (\nstrings TEXT[]\n);\n\nCREATE INDEX test_strings_idx ON test (join_for_index(strings));\n\nThis worked fine 9.5-16 but the CREATE INDEX statement now fails with:\n\nCREATE INDEX test_strings_idx ON test (join_for_index(strings));\npsql:test.sql:21: ERROR: function index_truncate(text) does not exist\nLINE 2: SELECT index_truncate(array_to_string($1, ' '))\n ^\nHINT: No function matches the given name and argument types. You might\nneed to add explicit type casts.\nQUERY:\nSELECT index_truncate(array_to_string($1, ' '))\n\nCONTEXT: SQL function \"join_for_index\" during inlining\n\nI presume that this is related to the work in 17 around using restricted\nsearch paths in more places, but it's just a guess. CREATE INDEX isn't\nmentioned in the release notes.\n\nFWIW this is from an older db migration of ours - a later one redefined\njoin_for_index to use an explicit path to find index_truncate, and that\nworks fine. But this breakage will then require us to go patch this older\nmigration in many installations.\n\nReporting in case this is unexpected. At the very least if a function used\nin an index must now always find other functions using an explicit path, it\nseems like this should be documented and noted in the release notes.\n\nCheers\n\nTom\n\nHi allWhile testing out 17 RC1 I found that a construct that previously worked has now stopped working:CREATE OR REPLACE FUNCTION index_truncate(src TEXT) RETURNS TEXT AS $$  SELECT LOWER(LEFT(src, 100));$$ LANGUAGE SQL;CREATE OR REPLACE FUNCTION join_for_index(TEXT [])  RETURNS TEXT LANGUAGE SQL IMMUTABLE AS$$SELECT index_truncate(array_to_string($1, ' '))$$;DROP TABLE IF EXISTS test;CREATE TABLE test (\tstrings TEXT[]);CREATE INDEX test_strings_idx ON test (join_for_index(strings));This worked fine 9.5-16 but the CREATE INDEX statement now fails with:CREATE INDEX test_strings_idx ON test (join_for_index(strings));psql:test.sql:21: ERROR:  function index_truncate(text) does not existLINE 2: SELECT index_truncate(array_to_string($1, ' '))               ^HINT:  No function matches the given name and argument types. You might need to add explicit type casts.QUERY:  SELECT index_truncate(array_to_string($1, ' '))CONTEXT:  SQL function \"join_for_index\" during inliningI presume that this is related to the work in 17 around using restricted search paths in more places, but it's just a guess. CREATE INDEX isn't mentioned in the release notes.FWIW this is from an older db migration of ours - a later one redefined join_for_index to use an explicit path to find index_truncate, and that works fine. But this breakage will then require us to go patch this older migration in many installations.Reporting in case this is unexpected. At the very least if a function used in an index must now always find other functions using an explicit path, it seems like this should be documented and noted in the release notes.CheersTom", "msg_date": "Thu, 26 Sep 2024 12:22:32 +0930", "msg_from": "Tom Dunstan <pgsql@tomd.cc>", "msg_from_op": true, "msg_subject": "CREATE INDEX regression in 17 RC1 or expected behavior?" 
}, { "msg_contents": "On Thu, 26 Sept 2024 at 12:22, Tom Dunstan <pgsql@tomd.cc> wrote:\n\n> I presume that this is related to the work in 17 around using restricted\n> search paths in more places, but it's just a guess. CREATE INDEX isn't\n> mentioned in the release notes.\n>\n\nReading a bit closer yields:\n\n> Functions used by expression indexes and materialized views that need to\nreference non-default schemas must specify a search path during function\ncreation.\n\nSo I guess that makes this an intended breakage.\n\nIt might help to add CREATE INDEX (and maybe CREATE MATERIALIZED VIEW if\nthat's also affected) to the list of commands affected in the release notes\nto make this more obvious - having a list of commands that are affected\nthat didn't include it made me think that this wasn't intended.\n\nCheers\n\nTom\n\nOn Thu, 26 Sept 2024 at 12:22, Tom Dunstan <pgsql@tomd.cc> wrote:I presume that this is related to the work in 17 around using restricted search paths in more places, but it's just a guess. CREATE INDEX isn't mentioned in the release notes.Reading a bit closer yields:> Functions used by expression indexes and materialized views that need to reference non-default schemas must specify a search path during function creation.So I guess that makes this an intended breakage.It might help to add CREATE INDEX (and maybe CREATE MATERIALIZED VIEW if that's also affected) to the list of commands affected in the release notes to make this more obvious - having a list of commands that are affected that didn't include it made me think that this wasn't intended.CheersTom", "msg_date": "Thu, 26 Sep 2024 12:42:55 +0930", "msg_from": "Tom Dunstan <tom@tomd.cc>", "msg_from_op": false, "msg_subject": "Re: CREATE INDEX regression in 17 RC1 or expected behavior?" }, { "msg_contents": "On Thu, Sep 26, 2024 at 12:22:32PM +0930, Tom Dunstan wrote:\n> Reporting in case this is unexpected. At the very least if a function used\n> in an index must now always find other functions using an explicit path, it\n> seems like this should be documented and noted in the release notes.\n\nThe first compatibility entry in the release notes [0] has the following\nsentence:\n\n\tFunctions used by expression indexes and materialized views that need\n\tto reference non-default schemas must specify a search path during\n\tfunction creation.\n\nDo you think this needs to be expanded upon?\n\n[0] https://www.postgresql.org/docs/release/17.0/\n\n-- \nnathan\n\n\n", "msg_date": "Wed, 25 Sep 2024 22:16:06 -0500", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: CREATE INDEX regression in 17 RC1 or expected behavior?" }, { "msg_contents": "On Wed, 25 Sep 2024 22:16:06 -0500\nNathan Bossart <nathandbossart@gmail.com> wrote:\n\n> On Thu, Sep 26, 2024 at 12:22:32PM +0930, Tom Dunstan wrote:\n> > Reporting in case this is unexpected. 
At the very least if a function used\n> > in an index must now always find other functions using an explicit path, it\n> > seems like this should be documented and noted in the release notes.\n> \n> The first compatibility entry in the release notes [0] has the following\n> sentence:\n> \n> \tFunctions used by expression indexes and materialized views that need\n> \tto reference non-default schemas must specify a search path during\n> \tfunction creation.\n\nAlso, this is documented as followins in\nhttps://www.postgresql.org/docs/17/sql-createindex.html .\n\n While CREATE INDEX is running, the search_path is temporarily changed to pg_catalog, pg_temp.\n\nBy the way, this is not mentioned in CREATE MATERIALIZED VIEW documentation, although\nwe can find in REFRESH MATERIALIZED VIEW doc. So, I sent the doc patch in [1], \nand create a commitfest entry [2].\n\n[1] https://www.postgresql.org/message-id/20240805160502.d2a4975802a832b1e04afb80%40sraoss.co.jp\n[2] https://commitfest.postgresql.org/49/5182/\n\nRegards,\nYugo Nagata\n\n-- \nYugo Nagata <nagata@sraoss.co.jp>\n\n\n", "msg_date": "Thu, 26 Sep 2024 12:51:10 +0900", "msg_from": "Yugo Nagata <nagata@sraoss.co.jp>", "msg_from_op": false, "msg_subject": "Re: CREATE INDEX regression in 17 RC1 or expected behavior?" }, { "msg_contents": "On Thu, 26 Sept 2024 at 13:21, Yugo Nagata <nagata@sraoss.co.jp> wrote:\n\n> By the way, this is not mentioned in CREATE MATERIALIZED VIEW\n> documentation, although\n> we can find in REFRESH MATERIALIZED VIEW doc. So, I sent the doc patch in\n> [1],\n> and create a commitfest entry [2].\n>\n\nThanks.\n\nI suggest adding CREATE INDEX and CREATE MATERIALIZED VIEW to the release\nnotes list of commands, as I looked for CREATE INDEX there and only raised\nthis due to its absence.\n\nCheers\n\nTom\n\nOn Thu, 26 Sept 2024 at 13:21, Yugo Nagata <nagata@sraoss.co.jp> wrote:\nBy the way, this is not mentioned in CREATE MATERIALIZED VIEW documentation, although\nwe can find in REFRESH MATERIALIZED VIEW doc. So, I sent the doc patch in [1], \nand create a commitfest entry [2].Thanks.I suggest adding CREATE INDEX and CREATE MATERIALIZED VIEW to the release notes list of commands, as I looked for CREATE INDEX there and only raised this due to its absence.CheersTom", "msg_date": "Thu, 26 Sep 2024 13:27:54 +0930", "msg_from": "Tom Dunstan <tom@tomd.cc>", "msg_from_op": false, "msg_subject": "Re: CREATE INDEX regression in 17 RC1 or expected behavior?" }, { "msg_contents": "On Thu, 26 Sep 2024 13:27:54 +0930\nTom Dunstan <tom@tomd.cc> wrote:\n\n> On Thu, 26 Sept 2024 at 13:21, Yugo Nagata <nagata@sraoss.co.jp> wrote:\n> \n> > By the way, this is not mentioned in CREATE MATERIALIZED VIEW\n> > documentation, although\n> > we can find in REFRESH MATERIALIZED VIEW doc. 
So, I sent the doc patch in\n> > [1],\n> > and create a commitfest entry [2].\n> >\n> \n> Thanks.\n> \n> I suggest adding CREATE INDEX and CREATE MATERIALIZED VIEW to the release\n> notes list of commands, as I looked for CREATE INDEX there and only raised\n> this due to its absence.\n\nI've proposed to improve the release notes to include CREATE INDEX and\nCREATE MATERIALIZED VIEW into the command list.\n\n[1] https://www.postgresql.org/message-id/20240926141921.57d0b430fa53ac4389344847%40sraoss.co.jp\n\nRegards,\nYugo Nagata\n\n> \n> Cheers\n> \n> Tom\n\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n", "msg_date": "Thu, 26 Sep 2024 14:21:27 +0900", "msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>", "msg_from_op": false, "msg_subject": "Re: CREATE INDEX regression in 17 RC1 or expected behavior?" } ]
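For completeness, one way to adapt the example at the top of this thread to the v17 behaviour — as the release-note entry quoted above suggests — is to pin a search path (or schema-qualify the inner call) when creating the wrapper function. A sketch, assuming both functions live in `public`:

```
CREATE OR REPLACE FUNCTION join_for_index(TEXT[])
  RETURNS TEXT LANGUAGE SQL IMMUTABLE
  SET search_path = public, pg_temp   -- pinned so CREATE INDEX can resolve index_truncate
AS $$
SELECT public.index_truncate(array_to_string($1, ' '))
$$;

CREATE INDEX test_strings_idx ON test (join_for_index(strings));
```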
[ { "msg_contents": "Hi hackers,\n\nI noticed incorrect comment in /postgres/src/bin/pg_walsummary/nls.mk.\nThe part that should be \"pg_walsummary\" is \"pg_combinebackup\", patch \nattached.\n\nRegards,\nKoki Nakamura <btnakamurakoukil@oss.nttdata.com>", "msg_date": "Thu, 26 Sep 2024 15:05:45 +0900", "msg_from": "btnakamurakoukil <btnakamurakoukil@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Modify comment in /postgres/src/bin/pg_walsummary/nls.mk" }, { "msg_contents": "\n\nOn 2024/09/26 15:05, btnakamurakoukil wrote:\n> Hi hackers,\n> \n> I noticed incorrect comment in  /postgres/src/bin/pg_walsummary/nls.mk.\n> The part that should be \"pg_walsummary\" is \"pg_combinebackup\", patch attached.\n\nThis seems a simple copy/paste mistake when the file was created.\nThanks for the patch! Pushed.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n\n", "msg_date": "Fri, 27 Sep 2024 10:24:31 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Modify comment in /postgres/src/bin/pg_walsummary/nls.mk" } ]
[ { "msg_contents": "Hi all. A brief overview of our use case follows.\n\nWe are developing a foreign data wrapper which employs parallel scan\nsupport and predicate pushdown; given the types of queries we run,\nforeign scans can be very long and often return very few rows.\n\nAs the scan can be very long and slow, we'd like to provide partial\nresults to the user as rows are being returned. We found two problems\nwith that:\n1. Leader backend would not poll the parallel workers queue until it\nitself found a row to return; we worked around it by turning\n`parallel_leader_participation` to off.\n2. Parallel workers tuple queues have buffering, and are not flushed\nuntil a certain fill threshold is reached; as our queries yield few\nresult rows, oftentimes these rows would only get returned at the end\nof the (very long) scan.\n\nThe proposal is to add a `parallel_tuplequeue_autoflush` GUC (bool,\ndefault false) that would force every row returned by a parallel\nworker to be immediately flushed to the leader; this was already the\ncase before v15, so it simply allows to opt for the previous\nbehaviour.\n\nThis would be achieved by configuring a `auto_flush` field on\n`TQueueDestReceiver`, so that `tqueueReceiveSlot` would pass\n`force_flush` when calling `shm_mq_send`.\n\nThe attached patch, tested on master @ 1ab67c9dfaadda , is a poc\ntentative implementation.\nBased on feedback, we're available to work on a complete and properly\ndocumented patch.\n\nThanks in advance for your consideration.\n\nRegards,\nFrancesco", "msg_date": "Thu, 26 Sep 2024 16:15:56 +0200", "msg_from": "Francesco Degrassi <francesco.degrassi@optionfactory.net>", "msg_from_op": true, "msg_subject": "RFC/PoC: GUC option to enable tuple queue autoflush for parallel\n workers" } ]
[ { "msg_contents": "Hello\n\nWhile studying a review note from Jian He on not-null constraints, I\ncame across some behavior introduced by commit 9139aa19423b[1] that I\nthink is mistaken. Consider the following example:\n\nCREATE TABLE parted (a int CONSTRAINT the_check CHECK (a > 0)) PARTITION BY LIST (a);\nCREATE TABLE parted_1 PARTITION OF parted FOR VALUES IN (1);\nALTER TABLE ONLY parted DROP CONSTRAINT the_check;\n\nThe ALTER TABLE fails with the following message:\n\nERROR: cannot remove constraint from only the partitioned table when partitions exist\nHINT: Do not specify the ONLY keyword.\n\nand the relevant code in ATExecDropConstraint is:\n\n\t/*\n\t * For a partitioned table, if partitions exist and we are told not to\n\t * recurse, it's a user error. It doesn't make sense to have a constraint\n\t * be defined only on the parent, especially if it's a partitioned table.\n\t */\n\tif (rel->rd_rel->relkind == RELKIND_PARTITIONED_TABLE &&\n\t\tchildren != NIL && !recurse)\n\t\tereport(ERROR,\n\t\t\t\t(errcode(ERRCODE_INVALID_TABLE_DEFINITION),\n\t\t\t\t errmsg(\"cannot remove constraint from only the partitioned table when partitions exist\"),\n\t\t\t\t errhint(\"Do not specify the ONLY keyword.\")));\n\nNote that the comment here is confused: it talks about a constraint that\nwould \"be defined only on the parent\", but that's bogus: the end result\nwould be that the constraint no longer exist on the parent but would\ncontinue to exist on the children. Indeed it's not entirely\nunimaginable that you start with a partitioned table with a bunch of\nconstraints which are enforced on all partitions, then you later decide\nthat you want this constraint to apply only to some of the partitions,\nnot the whole partitioned table. To implement that, you would drop the\nconstraint on the parent using ONLY, then drop it on a few of the\npartitions, but still keep it on the other partitions. This would work\njust fine if not for this ereport(ERROR).\n\nAlso, you can achieve the same end result by creating the constraint on\nonly some of the partitions and not on the partitioned table to start\nwith.\n\nThis also applies to ALTER TABLE ONLY ... DROP NOT NULL.\n\nOf course, *adding* a constraint in this fashion is also forbidden, but\nthat makes perfect sense. Both restrictions were added as part of the\nsame commit, so I suppose we thought they were symmetrical behaviors and\nfailed to notice they weren't.\n\nThe DROP of such constraints can already be done on a table with legacy\ninheritance children; it's just partitioned tables that have this\nweirdness.\n\nIt doesn't surprise me that nobody has reported this inconsistency,\nbecause it seems an unusual enough situation. For the same reason, I\nwouldn't propose to backpatch this change. But I put forward the\nattached patch, which removes the ereport(ERROR)s.\n\n[1] Discussion: https://postgr.es/m/7682253a-6f79-6a92-00aa-267c4c412870@lab.ntt.co.jp\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"Here's a general engineering tip: if the non-fun part is too complex for you\nto figure out, that might indicate the fun part is too ambitious.\" (John Naylor)\nhttps://postgr.es/m/CAFBsxsG4OWHBbSDM%3DsSeXrQGOtkPiOEOuME4yD7Ce41NtaAD9g%40mail.gmail.com", "msg_date": "Thu, 26 Sep 2024 19:52:03 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": true, "msg_subject": "ALTER TABLE ONLY .. 
DROP CONSTRAINT on partitioned tables" }, { "msg_contents": "Hi Alvaro,\n\nOn Fri, Sep 27, 2024 at 2:52 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> While studying a review note from Jian He on not-null constraints, I\n> came across some behavior introduced by commit 9139aa19423b[1] that I\n> think is mistaken. Consider the following example:\n>\n> CREATE TABLE parted (a int CONSTRAINT the_check CHECK (a > 0)) PARTITION BY LIST (a);\n> CREATE TABLE parted_1 PARTITION OF parted FOR VALUES IN (1);\n> ALTER TABLE ONLY parted DROP CONSTRAINT the_check;\n>\n> The ALTER TABLE fails with the following message:\n>\n> ERROR: cannot remove constraint from only the partitioned table when partitions exist\n> HINT: Do not specify the ONLY keyword.\n>\n> and the relevant code in ATExecDropConstraint is:\n>\n> /*\n> * For a partitioned table, if partitions exist and we are told not to\n> * recurse, it's a user error. It doesn't make sense to have a constraint\n> * be defined only on the parent, especially if it's a partitioned table.\n> */\n> if (rel->rd_rel->relkind == RELKIND_PARTITIONED_TABLE &&\n> children != NIL && !recurse)\n> ereport(ERROR,\n> (errcode(ERRCODE_INVALID_TABLE_DEFINITION),\n> errmsg(\"cannot remove constraint from only the partitioned table when partitions exist\"),\n> errhint(\"Do not specify the ONLY keyword.\")));\n>\n> Note that the comment here is confused: it talks about a constraint that\n> would \"be defined only on the parent\", but that's bogus: the end result\n> would be that the constraint no longer exist on the parent but would\n> continue to exist on the children. Indeed it's not entirely\n> unimaginable that you start with a partitioned table with a bunch of\n> constraints which are enforced on all partitions, then you later decide\n> that you want this constraint to apply only to some of the partitions,\n> not the whole partitioned table. To implement that, you would drop the\n> constraint on the parent using ONLY, then drop it on a few of the\n> partitions, but still keep it on the other partitions. This would work\n> just fine if not for this ereport(ERROR).\n>\n> Also, you can achieve the same end result by creating the constraint on\n> only some of the partitions and not on the partitioned table to start\n> with.\n>\n> This also applies to ALTER TABLE ONLY ... DROP NOT NULL.\n>\n> Of course, *adding* a constraint in this fashion is also forbidden, but\n> that makes perfect sense. Both restrictions were added as part of the\n> same commit, so I suppose we thought they were symmetrical behaviors and\n> failed to notice they weren't.\n>\n> The DROP of such constraints can already be done on a table with legacy\n> inheritance children; it's just partitioned tables that have this\n> weirdness.\n\nYeah, I don’t quite recall why I thought the behavior for both ADD and\nDROP had to be the same. I went back and reviewed the thread, trying\nto understand why DROP was included in the decision, but couldn’t find\nanything that explained it. It also doesn’t seem to be related to the\npg_dump issue that was being discussed at the time.\n\nSo, I think you might be right that the restriction on DROP is\noverkill, and we should consider removing it, at least in the master\nbranch.\n\n> It doesn't surprise me that nobody has reported this inconsistency,\n> because it seems an unusual enough situation. For the same reason, I\n> wouldn't propose to backpatch this change. 
But I put forward the\n> attached patch, which removes the ereport(ERROR)s.\n\nThe patch looks good to me.\n\n-- \nThanks, Amit Langote\n\n\n", "msg_date": "Fri, 27 Sep 2024 13:51:58 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: ALTER TABLE ONLY .. DROP CONSTRAINT on partitioned tables" }, { "msg_contents": "Hello,\n\nOn 2024-Sep-27, Amit Langote wrote:\n\n> On Fri, Sep 27, 2024 at 2:52 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> > While studying a review note from Jian He on not-null constraints, I\n> > came across some behavior introduced by commit 9139aa19423b[1] that I\n> > think is mistaken.\n\n> Yeah, I don’t quite recall why I thought the behavior for both ADD and\n> DROP had to be the same. I went back and reviewed the thread, trying\n> to understand why DROP was included in the decision, but couldn’t find\n> anything that explained it. It also doesn’t seem to be related to the\n> pg_dump issue that was being discussed at the time.\n\nRight.\n\n> So, I think you might be right that the restriction on DROP is\n> overkill, and we should consider removing it, at least in the master\n> branch.\n\nThanks for looking! I have pushed the removal now.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n<inflex> really, I see PHP as like a strange amalgamation of C, Perl, Shell\n<crab> inflex: you know that \"amalgam\" means \"mixture with mercury\",\n more or less, right?\n<crab> i.e., \"deadly poison\"\n\n\n", "msg_date": "Mon, 30 Sep 2024 12:01:20 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": true, "msg_subject": "Re: ALTER TABLE ONLY .. DROP CONSTRAINT on partitioned tables" } ]
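Spelled out as a script, the use case described in this thread looks like this once the restriction is removed (the tables and constraint name are the ones from the example in the first message, plus a second partition):

```
CREATE TABLE parted (a int CONSTRAINT the_check CHECK (a > 0)) PARTITION BY LIST (a);
CREATE TABLE parted_1 PARTITION OF parted FOR VALUES IN (1);
CREATE TABLE parted_2 PARTITION OF parted FOR VALUES IN (2);

ALTER TABLE ONLY parted DROP CONSTRAINT the_check;   -- no longer raises an error
ALTER TABLE parted_1 DROP CONSTRAINT the_check;      -- drop it on one partition...
-- ...while parted_2 keeps the_check, so the constraint now applies to
-- only some of the partitions, as described above.
```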
[ { "msg_contents": "Hi hackers,\n\nI would like to suggest a patch to truncate some long queries. I believe\nsometimes there is no need to log a query containing some gigabytes of\nblob, for example. In patch a new parameter, named max_log_size, is\nintroduced. It defines the maximum size of logged query, in bytes.\nEverything beyond that size is truncated.\n\nBest regards,\nKirill Gavrilov", "msg_date": "Thu, 26 Sep 2024 21:30:08 +0300", "msg_from": "diPhantxm <diphantxm@gmail.com>", "msg_from_op": true, "msg_subject": "Truncate logs by max_log_size" }, { "msg_contents": "On Thu, Sep 26, 2024, at 3:30 PM, diPhantxm wrote:\n> I would like to suggest a patch to truncate some long queries. I believe sometimes there is no need to log a query containing some gigabytes of blob, for example. In patch a new parameter, named max_log_size, is introduced. It defines the maximum size of logged query, in bytes. Everything beyond that size is truncated.\n\nI don't know how useful is this proposal. IMO the whole query is usually\ncrucial for an analysis. Let's say you arbitrarily provide max_log_size = 100\nbut it means you cannot see a WHERE clause and you have a performance issue in\nthat query. It won't be possible to obtain the whole query for an EXPLAIN. It\nwould break audit systems that requires the whole query. I don't know if there\nare some log-based replication systems but it would break such tools too.\n\nThere are other ways to avoid logging such long queries. The GRANT ... ON\nPARAMETER and SET LOCAL commands are your friends. Hence, you can disable\nspecific long queries even if you are not a superuser.\n\nIf your main problem is disk space, you can adjust the rotation settings or have\nan external tool to manage your log files (or even use syslog).\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/\n\nOn Thu, Sep 26, 2024, at 3:30 PM, diPhantxm wrote:I would like to suggest a patch to truncate some long queries. I believe sometimes there is no need to log a query containing some gigabytes of blob, for example. In patch a new parameter, named max_log_size, is introduced. It defines the maximum size of logged query, in bytes. Everything beyond that size is truncated.I don't know how useful is this proposal. IMO the whole query is usuallycrucial for an analysis. Let's say you arbitrarily provide max_log_size = 100but it means you cannot see a WHERE clause and you have a performance issue inthat query. It won't be possible to obtain the whole query for an EXPLAIN. Itwould break audit systems that requires the whole query. I don't know if thereare some log-based replication systems but it would break such tools too.There are other ways to avoid logging such long queries. The GRANT ... ONPARAMETER and SET LOCAL commands are your friends. Hence, you can disablespecific long queries even if you are not a superuser.If your main problem is disk space, you can adjust the rotation settings or havean external tool to manage your log files (or even use syslog).--Euler TaveiraEDB   https://www.enterprisedb.com/", "msg_date": "Thu, 26 Sep 2024 21:30:05 -0300", "msg_from": "\"Euler Taveira\" <euler@eulerto.com>", "msg_from_op": false, "msg_subject": "Re: Truncate logs by max_log_size" }, { "msg_contents": "\n\n> On 27 Sep 2024, at 03:30, Euler Taveira <euler@eulerto.com> wrote:\n> \n> Let's say you arbitrarily provide max_log_size = 100\n\nConsider max_log_size = 10Mb. The perspective might look very different. It’s not about WHERE anymore. 
It's a guard against heavy abuse.\n\nThe feature looks very important for me.\n\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Fri, 27 Sep 2024 13:36:58 +0300", "msg_from": "\"Andrey M. Borodin\" <x4mmm@yandex-team.ru>", "msg_from_op": false, "msg_subject": "Re: Truncate logs by max_log_size" } ]
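A sketch of how the proposed parameter might be set, assuming the patch is applied; the parameter name and byte-based unit are taken from the proposal above, and the value is only illustrative:

```
-- Hypothetical: truncate any logged statement beyond ~10 MB, guarding
-- against multi-gigabyte literals filling the server log.
ALTER SYSTEM SET max_log_size = 10485760;
SELECT pg_reload_conf();
```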
[ { "msg_contents": "Here's a patch that adjusts several routines in nbtcompare.c and related\nfiles to use the branchless integer comparison functions added in commit\n6b80394. It's probably unlikely this produces a measurable benefit (at\nleast I've been unable to find any in my admittedly-limited testing), but\nin theory it should save a cycle here and there. I was hoping that this\nwould trim many lines of code, but maintaining the STRESS_SORT_INT_MIN\nstuff eats up most of what we save.\n\nAnyway, I don't feel too strongly about this patch, but I went to the\ntrouble of writing it, and so I figured I'd post it.\n\n-- \nnathan", "msg_date": "Thu, 26 Sep 2024 15:17:03 -0500", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": true, "msg_subject": "micro-optimize nbtcompare.c routines" }, { "msg_contents": "On Fri, 27 Sept 2024 at 08:17, Nathan Bossart <nathandbossart@gmail.com> wrote:\n> Here's a patch that adjusts several routines in nbtcompare.c and related\n> files to use the branchless integer comparison functions added in commit\n> 6b80394. It's probably unlikely this produces a measurable benefit (at\n> least I've been unable to find any in my admittedly-limited testing), but\n> in theory it should save a cycle here and there. I was hoping that this\n> would trim many lines of code, but maintaining the STRESS_SORT_INT_MIN\n> stuff eats up most of what we save.\n\nI had been looking at [1] (which I've added your version to now). I\nhad been surprised to see gcc emitting different code for the first 3\nversions. Clang does a better job at figuring out they all do the same\nthing and emitting the same code for each.\n\nI played around with the attached (hacked up) qsort.c to see if there\nwas any difference. Likely function call overhead kills the\nperformance anyway. There does not seem to be much difference between\nthem. I've not tested with an inlined comparison function.\n\nLooking at your version, it doesn't look like there's any sort of\nimprovement in terms of the instructions. Certainly, for clang, it's\nworse as it adds a shift left instruction and an additional compare.\nNo jumps, at least.\n\nWhat's your reasoning for returning INT_MIN and INT_MAX?\n\nDavid\n\n[1] https://godbolt.org/z/33T8h151M", "msg_date": "Fri, 27 Sep 2024 14:50:13 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: micro-optimize nbtcompare.c routines" }, { "msg_contents": "On Fri, Sep 27, 2024 at 02:50:13PM +1200, David Rowley wrote:\n> I had been looking at [1] (which I've added your version to now). I\n> had been surprised to see gcc emitting different code for the first 3\n> versions. Clang does a better job at figuring out they all do the same\n> thing and emitting the same code for each.\n\nInteresting.\n\n> I played around with the attached (hacked up) qsort.c to see if there\n> was any difference. Likely function call overhead kills the\n> performance anyway. There does not seem to be much difference between\n> them. I've not tested with an inlined comparison function.\n\nI'd expect worse performance with the branchless routines for the inlined\ncase. However, I recall that clang was able to optimize med3() as well as\nit can with the branching routines, so that may not always be true.\n\n> Looking at your version, it doesn't look like there's any sort of\n> improvement in terms of the instructions. 
Certainly, for clang, it's\n> worse as it adds a shift left instruction and an additional compare.\n> No jumps, at least.\n\nI think I may have forgotten to add -O2 when I was inspecting this code\nwith godbolt.org earlier. *facepalm* The different versions look pretty\ncomparable with that added.\n\n> What's your reasoning for returning INT_MIN and INT_MAX?\n\nThat's just for the compile option added by commit c87cb5f, which IIUC is\nintended to test that we correctly handle comparisons that return INT_MIN.\n\n-- \nnathan\n\n\n", "msg_date": "Thu, 26 Sep 2024 22:23:43 -0500", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": true, "msg_subject": "Re: micro-optimize nbtcompare.c routines" }, { "msg_contents": "I've marked this one as Withdrawn. Apologies for the noise.\n\n-- \nnathan\n\n\n", "msg_date": "Fri, 27 Sep 2024 10:44:00 -0500", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": true, "msg_subject": "Re: micro-optimize nbtcompare.c routines" } ]
[ { "msg_contents": "The attached patch series refactors the collation and ctype behavior\ninto method tables, and provides a way to hook the creation of a\npg_locale_t so that an extension can create any kind of method table it\nwants.\n\nIn practice, the main use is to replace, for example, ICU with a\ndifferent version of ICU. But it can also be used to control libc\nbehavior, or to use a different set of methods that have nothing to do\nwith ICU or libc.\n\nIt also isolates code to some new files: ICU code goes in\npg_locale_icu.c, and libc code goes in pg_locale_libc.c. And it reduces\na lot of code that branches on the provider. That's easier to reason\nabout, in my opinion.\n\nWith these patches, the collation provider becomes mainly a catalog\nconcept used to create the right pg_locale_t, rather than an execution-\ntime concept.\n\nWe could take this further and make providers a concept in the catalog,\nlike \"CREATE LOCALE PROVIDER\", and it would just provide an arbitrary\nhandler function to create the pg_locale_t. If we decide how we'd like\nto handle versioning, that could potentially allow a much smoother\nupgrade process that preserves the provider versions.\n\nRegards,\n\tJeff Davis", "msg_date": "Thu, 26 Sep 2024 15:30:09 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Collation & ctype method table, and extension hooks" } ]
[ { "msg_contents": "Hi hackers,\n\nper David's suggestion, this patch implements general\npurpose array sort.\n\nWe can do the following with this patch:\n\nSELECT array_sort('{1.1,3.3,5.5,2.2,4.4,6.6}'::float8[], 'asc');\nSELECT array_sort('{abc DEF 123abc,ábc sßs ßss DÉF,DŽxxDŽ džxxDž\nDžxxdž,ȺȺȺ,ⱥⱥⱥ,ⱥȺ}'::text[]);\nSELECT array_sort('{abc DEF 123abc,ábc sßs ßss DÉF,DŽxxDŽ džxxDž\nDžxxdž,ȺȺȺ,ⱥⱥⱥ,ⱥȺ}'::text[], 'asc', 'pg_c_utf8');\n\n-- \nRegards\nJunwang Zhao", "msg_date": "Fri, 27 Sep 2024 21:15:45 +0800", "msg_from": "Junwang Zhao <zhjwpku@gmail.com>", "msg_from_op": true, "msg_subject": "general purpose array_sort" }, { "msg_contents": "On Fri, Sep 27, 2024 at 9:15 PM Junwang Zhao <zhjwpku@gmail.com> wrote:\n>\n> Hi hackers,\n>\n> per David's suggestion, this patch implements general\n> purpose array sort.\n>\n> We can do the following with this patch:\n>\n> SELECT array_sort('{1.1,3.3,5.5,2.2,4.4,6.6}'::float8[], 'asc');\n> SELECT array_sort('{abc DEF 123abc,ábc sßs ßss DÉF,DŽxxDŽ džxxDž\n> Džxxdž,ȺȺȺ,ⱥⱥⱥ,ⱥȺ}'::text[]);\n> SELECT array_sort('{abc DEF 123abc,ábc sßs ßss DÉF,DŽxxDŽ džxxDž\n> Džxxdž,ȺȺȺ,ⱥⱥⱥ,ⱥȺ}'::text[], 'asc', 'pg_c_utf8');\n>\n> --\n> Regards\n> Junwang Zhao\n\nPFA v2, use COLLATE keyword to supply the collation suggested by\nAndreas offlist.\n\nSELECT array_sort('{abc DEF 123abc,ábc sßs ßss DÉF,DŽxxDŽ džxxDž\nDžxxdž,ȺȺȺ,ⱥⱥⱥ,ⱥȺ}'::text[]);\nSELECT array_sort('{abc DEF 123abc,ábc sßs ßss DÉF,DŽxxDŽ džxxDž\nDžxxdž,ȺȺȺ,ⱥⱥⱥ,ⱥȺ}'::text[] COLLATE \"pg_c_utf8\");\n\nI also created a CF entry[1] so it can be easily reviewed.\n\n[1]: https://commitfest.postgresql.org/50/5277/\n\n-- \nRegards\nJunwang Zhao", "msg_date": "Sat, 28 Sep 2024 19:52:00 +0800", "msg_from": "Junwang Zhao <zhjwpku@gmail.com>", "msg_from_op": true, "msg_subject": "Re: general purpose array_sort" }, { "msg_contents": "On Sat, Sep 28, 2024 at 7:52 PM Junwang Zhao <zhjwpku@gmail.com> wrote:\n>\n> PFA v2, use COLLATE keyword to supply the collation suggested by\n> Andreas offlist.\n>\nthis is better. otherwise we need extra care to handle case like:\nSELECT array_sort('{1,3,5,2,4,6}'::int[] COLLATE \"pg_c_utf8\");\n\n\n+ <row>\n+ <entry role=\"func_table_entry\"><para role=\"func_signature\">\n+ <indexterm>\n+ <primary>array_sort</primary>\n+ </indexterm>\n+ <function>array_sort</function> ( <type>anyarray</type>\n<optional>, <parameter>dir</parameter> </optional>)\n+ <returnvalue>anyarray</returnvalue>\n+ </para>\n+ <para>\n+ Sorts the array in either ascending or descending order.\n+ <parameter>dir</parameter> must be <literal>asc</literal>\n+ or <literal>desc</literal>. The array must be empty or one-dimensional.\n+ </para>\n+ <para>\n+ <literal>array_sort(ARRAY[1,2,5,6,3,4])</literal>\n+ <returnvalue>{1,2,3,4,5,6}</returnvalue>\n+ </para></entry>\n+ </row>\nI am confused with <parameter>dir</parameter>. 
I guess you want to say\n\"direction\"\nBut here, I think <parameter>sort_asc</parameter> would be more appropriate?\n\n\n<parameter>dir</parameter> can have only two potential values, make it\nas a boolean would be more easier?\nyou didn't mention information: \"by default, it will sort by\nascending order; the sort collation by default is using the array\nelement type's collation\"\n\ntuplesort_begin_datum can do null-first, null-last, so the\none-dimension array can allow null values.\n\nBased on the above and others, I did some refactoring, feel free to take it.\nmy changes, changed the function signature, so you need to pay\nattention to sql test file.", "msg_date": "Sat, 28 Sep 2024 22:40:00 +0800", "msg_from": "jian he <jian.universality@gmail.com>", "msg_from_op": false, "msg_subject": "Re: general purpose array_sort" }, { "msg_contents": "On Sat, Sep 28, 2024 at 10:41 PM jian he <jian.universality@gmail.com> wrote:\n>\n> On Sat, Sep 28, 2024 at 7:52 PM Junwang Zhao <zhjwpku@gmail.com> wrote:\n> >\n> > PFA v2, use COLLATE keyword to supply the collation suggested by\n> > Andreas offlist.\n> >\n> this is better. otherwise we need extra care to handle case like:\n> SELECT array_sort('{1,3,5,2,4,6}'::int[] COLLATE \"pg_c_utf8\");\n>\n>\n> + <row>\n> + <entry role=\"func_table_entry\"><para role=\"func_signature\">\n> + <indexterm>\n> + <primary>array_sort</primary>\n> + </indexterm>\n> + <function>array_sort</function> ( <type>anyarray</type>\n> <optional>, <parameter>dir</parameter> </optional>)\n> + <returnvalue>anyarray</returnvalue>\n> + </para>\n> + <para>\n> + Sorts the array in either ascending or descending order.\n> + <parameter>dir</parameter> must be <literal>asc</literal>\n> + or <literal>desc</literal>. The array must be empty or one-dimensional.\n> + </para>\n> + <para>\n> + <literal>array_sort(ARRAY[1,2,5,6,3,4])</literal>\n> + <returnvalue>{1,2,3,4,5,6}</returnvalue>\n> + </para></entry>\n> + </row>\n> I am confused with <parameter>dir</parameter>. 
I guess you want to say\n> \"direction\"\n> But here, I think <parameter>sort_asc</parameter> would be more appropriate?\n\nThis doc is mostly copied and edited from intarray.sgml sort part.\n\nAnd the logic is basically the same, you can check the intarray module.\n\n>\n>\n> <parameter>dir</parameter> can have only two potential values, make it\n> as a boolean would be more easier?\n> you didn't mention information: \"by default, it will sort by\n> ascending order; the sort collation by default is using the array\n> element type's collation\"\n>\n> tuplesort_begin_datum can do null-first, null-last, so the\n> one-dimension array can allow null values.\n\nThe following(create extension intarry first) will give an error, I\nkeep the same for array_sort.\n\nSELECT sort('{1234234,-30,234234, null}');\n\n>\n> Based on the above and others, I did some refactoring, feel free to take it.\n> my changes, changed the function signature, so you need to pay\n> attention to sql test file.\n\nThanks for your refactor, I will take some in the next version.\n\n\n-- \nRegards\nJunwang Zhao\n\n\n", "msg_date": "Sun, 29 Sep 2024 10:05:34 +0800", "msg_from": "Junwang Zhao <zhjwpku@gmail.com>", "msg_from_op": true, "msg_subject": "Re: general purpose array_sort" }, { "msg_contents": "On Sat, Sep 28, 2024 at 7:05 PM Junwang Zhao <zhjwpku@gmail.com> wrote:\n\n> On Sat, Sep 28, 2024 at 10:41 PM jian he <jian.universality@gmail.com>\n> wrote:\n> >\n> > <parameter>dir</parameter> can have only two potential values, make it\n> > as a boolean would be more easier?\n> > you didn't mention information: \"by default, it will sort by\n> > ascending order; the sort collation by default is using the array\n> > element type's collation\"\n> >\n> > tuplesort_begin_datum can do null-first, null-last, so the\n> > one-dimension array can allow null values.\n>\n> The following(create extension intarry first) will give an error, I\n> keep the same for array_sort.\n>\n> SELECT sort('{1234234,-30,234234, null}');\n>\n>\nI would suggest accepting:\nasc\ndesc\nasc nulls first\nasc nulls last *\ndesc nulls first *\ndesc nulls last\n\nAs valid inputs for \"dir\" - and that the starred options are the defaults\nwhen null position is omitted.\n\nIn short, mimic create index.\n\nDavid J.\n\nOn Sat, Sep 28, 2024 at 7:05 PM Junwang Zhao <zhjwpku@gmail.com> wrote:On Sat, Sep 28, 2024 at 10:41 PM jian he <jian.universality@gmail.com> wrote:>\n> <parameter>dir</parameter> can have only two potential values, make it\n> as a boolean would be more easier?\n> you didn't mention information:  \"by default, it will sort by\n> ascending order; the sort collation by default is using the array\n> element type's collation\"\n>\n> tuplesort_begin_datum can do null-first, null-last, so the\n> one-dimension array can allow null values.\n\nThe following(create extension intarry first) will give an error, I\nkeep the same for array_sort.\n\nSELECT sort('{1234234,-30,234234, null}');I would suggest accepting:ascdescasc nulls firstasc nulls last *desc nulls first *desc nulls lastAs valid inputs for \"dir\" - and that the starred options are the defaults when null position is omitted.In short, mimic create index.David J.", "msg_date": "Sat, 28 Sep 2024 19:50:38 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: general purpose array_sort" }, { "msg_contents": "On Sun, Sep 29, 2024 at 10:51 AM David G. 
Johnston\n<david.g.johnston@gmail.com> wrote:\n>\n> On Sat, Sep 28, 2024 at 7:05 PM Junwang Zhao <zhjwpku@gmail.com> wrote:\n>>\n>> On Sat, Sep 28, 2024 at 10:41 PM jian he <jian.universality@gmail.com> wrote:\n>> >\n>> > <parameter>dir</parameter> can have only two potential values, make it\n>> > as a boolean would be more easier?\n>> > you didn't mention information: \"by default, it will sort by\n>> > ascending order; the sort collation by default is using the array\n>> > element type's collation\"\n>> >\n>> > tuplesort_begin_datum can do null-first, null-last, so the\n>> > one-dimension array can allow null values.\n>>\n>> The following(create extension intarry first) will give an error, I\n>> keep the same for array_sort.\n>>\n>> SELECT sort('{1234234,-30,234234, null}');\n>>\n>\n> I would suggest accepting:\n> asc\n> desc\n> asc nulls first\n> asc nulls last *\n> desc nulls first *\n> desc nulls last\n>\n> As valid inputs for \"dir\" - and that the starred options are the defaults when null position is omitted.\n>\n> In short, mimic create index.\n>\n> David J.\n>\n\nPFA v3 with David's suggestion addressed.\n\n-- \nRegards\nJunwang Zhao", "msg_date": "Mon, 30 Sep 2024 13:01:25 +0800", "msg_from": "Junwang Zhao <zhjwpku@gmail.com>", "msg_from_op": true, "msg_subject": "Re: general purpose array_sort" }, { "msg_contents": "On Mon, Sep 30, 2024 at 1:01 PM Junwang Zhao <zhjwpku@gmail.com> wrote:\n>\n> > I would suggest accepting:\n> > asc\n> > desc\n> > asc nulls first\n> > asc nulls last *\n> > desc nulls first *\n> > desc nulls last\n> >\n> > As valid inputs for \"dir\" - and that the starred options are the defaults when null position is omitted.\n> >\n> > In short, mimic create index.\n> >\n> > David J.\n> >\n>\n> PFA v3 with David's suggestion addressed.\n>\n\nI think just adding 2 bool arguments (asc/desc, nulls last/not nulls\nlast) would be easier.\nbut either way, (i don't have a huge opinion)\nbut document the second argument, imagine case\nSELECT array_sort('{a,B}'::text[] , E'aSc NulLs LaST \\t\\r\\n');\nwould be tricky?\n\n\nerrmsg(\"multidimensional arrays sorting are not supported\")));\nwrite a sql test to trigger the error message that would be great.\n\nyou can add two or one example to collate.icu.utf8.sql to demo that it\nactually works with COLLATE collation_name\nlike:\nSELECT array_sort('{a,B}'::text[] COLLATE case_insensitive);\nSELECT array_sort('{a,B}'::text[] COLLATE \"C\");\n\n\n#define WHITESPACE \" \\t\\n\\r\"\nyou may also check function scanner_isspace\n\n\n+ typentry = (TypeCacheEntry *) fcinfo->flinfo->fn_extra;\n+ if (typentry == NULL || typentry->type_id != elmtyp)\n+ {\n+ typentry = lookup_type_cache(elmtyp, sort_asc ? TYPECACHE_LT_OPR :\nTYPECACHE_GT_OPR);\n+ fcinfo->flinfo->fn_extra = (void *) typentry;\n+ }\nyou need to one-time check typentry->lt_opr or typentry->gt_opr exists?\nsee CreateStatistics.\n /* Disallow data types without a less-than operator */\n type = lookup_type_cache(attForm->atttypid, TYPECACHE_LT_OPR);\n if (type->lt_opr == InvalidOid)\n ereport(ERROR,\n (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n errmsg(\"column \\\"%s\\\" cannot be used in\nstatistics because its type %s has no default btree operator class\",\n attname, format_type_be(attForm->atttypid))));\n\n\n", "msg_date": "Mon, 30 Sep 2024 23:13:06 +0800", "msg_from": "jian he <jian.universality@gmail.com>", "msg_from_op": false, "msg_subject": "Re: general purpose array_sort" } ]
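Tying the last two review points together, here is a fragment-level sketch (not the actual patch; it assumes the surrounding function provides fcinfo and elmtyp as in the snippet quoted above) of caching the type cache entry in fn_extra while also insisting on a default btree ordering operator, following the CreateStatistics precedent:

    TypeCacheEntry *typentry;

    typentry = (TypeCacheEntry *) fcinfo->flinfo->fn_extra;
    if (typentry == NULL || typentry->type_id != elmtyp)
    {
        typentry = lookup_type_cache(elmtyp,
                                     TYPECACHE_LT_OPR | TYPECACHE_GT_OPR);

        /* Disallow element types without a default btree ordering operator */
        if (!OidIsValid(typentry->lt_opr) || !OidIsValid(typentry->gt_opr))
            ereport(ERROR,
                    (errcode(ERRCODE_UNDEFINED_FUNCTION),
                     errmsg("could not identify an ordering operator for type %s",
                            format_type_be(elmtyp))));

        fcinfo->flinfo->fn_extra = (void *) typentry;
    }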
[ { "msg_contents": "Hi all! Congrats on releasing v17!\n\nI'm adding support for Index Only Scans to a custom IAM impl and I've got a little dilemma. \n\nMy IAM implementation is essentially a composite index that might have up to 32 columns and while it can return any column in the index definition it's quite expensive to do so. It doesn't have an already formed index tuple sitting right there like the built-in btree index does. Returning 1 or 2 columns is usually a win over the regular Index Scan version, but more than that and it's at least a wash if not a total loss.\n\nSince not all Index Only Scans need *all* the columns, was there ever any thought around providing the required attnos as a field on IndexScanDescData? That information seems to be there in the nodeindexonlyscan.c machinery.\n\nAs a compromise, I've settled on my `amcanreturn` function saying it only knows how to return attno==1, which is sufficient for some query patterns, but I'd like to be able to have my index directly feed tuples into aggregate plans and such too. It's just too expensive for me to always return all the columns when generally maybe only 1 or 2 are needed (there doesn't seem to be a way to weave that into the cost estimations, but that wouldn't matter if I knew which specific columns I need to fetch out of my index).\n\nI'm pretty familiar with the IAM and the internals around it, but maybe I've missed something -- can I get at this information some other way?\n\nThanks for your time and consideration!\n\neric\n\n", "msg_date": "Fri, 27 Sep 2024 09:33:13 -0400", "msg_from": "Eric Ridge <eebbrr@gmail.com>", "msg_from_op": true, "msg_subject": "IndexAccessMethod API & Index Only Scans" } ]
[ { "msg_contents": "tl;dr let's assume SSDs are popular and HDDs are the exception and flip our\ndefault\n\nAs I write this email, it's the year 2024. I think it is time we lower our\n\"default\" setting of random_page_cost (as set in postgresql.conf.sample and\nthe docs). Even a decade ago, the current default of 4 was considered\nfairly conservative and often lowered. The git logs shows that this value\nwas last touched in 2006, during the age of spinning metal. We are now in a\nnew era, the age of SSDs, and thus we should lower this default value to\nreflect the fact that the vast majority of people using Postgres these days\nare doing so on solid state drives. We tend to stay ultra-conservative in\nall of our settings, but we also need to recognize when there has been a\nmajor shift in the underlying hardware - and calculations that our defaults\nare based on.\n\nGranted, there are other factors involved, and yes, perhaps we should tweak\nsome of the similar settings as well, but ranom_page_cost is the one\nsetting most out of sync with today's hardware realities. So I'll be brave\nand throw a number out there: 1.2. And change our docs to say wordage like\n\"if you are using an older hard disk drive technology, you may want to try\nraising rpc\" to replace our fairly-hidden note about SSDs buried in the\nlast sentence - of the fourth paragraph - of the rpc docs.\n\nReal data about performance on today's SSDs are welcome, and/or some way to\ngenerate a more accurate default.\n\nCheers,\nGreg\n\ntl;dr let's assume SSDs are popular and HDDs are the exception and flip our defaultAs I write this email, it's the year 2024. I think it is time we lower our \"default\" setting of random_page_cost (as set in postgresql.conf.sample and the docs). Even a decade ago, the current default of 4 was considered fairly conservative and often lowered. The git logs shows that this value was last touched in 2006, during the age of spinning metal. We are now in a new era, the age of SSDs, and thus we should lower this default value to reflect the fact that the vast majority of people using Postgres these days are doing so on solid state drives. We tend to stay ultra-conservative in all of our settings, but we also need to recognize when there has been a major shift in the underlying hardware - and calculations that our defaults are based on.Granted, there are other factors involved, and yes, perhaps we should tweak some of the similar settings as well, but ranom_page_cost is the one setting most out of sync with today's hardware realities. So I'll be brave and throw a number out there: 1.2. 
And change our docs to say wordage like \"if you are using an older hard disk drive technology, you may want to try raising rpc\" to replace our fairly-hidden note about SSDs buried in the last sentence - of the fourth paragraph - of the rpc docs.Real data about performance on today's SSDs are welcome, and/or some way to generate a more accurate default.Cheers,Greg", "msg_date": "Fri, 27 Sep 2024 10:07:14 -0400", "msg_from": "Greg Sabino Mullane <htamfids@gmail.com>", "msg_from_op": true, "msg_subject": "Changing the default random_page_cost value" }, { "msg_contents": "On Fri, Sep 27, 2024 at 8:07 AM Greg Sabino Mullane <htamfids@gmail.com>\nwrote:\n\n> tl;dr let's assume SSDs are popular and HDDs are the exception and flip\n> our default\n>\n\n<snip>\n\n\n> Granted, there are other factors involved, and yes, perhaps we should\n> tweak some of the similar settings as well, but ranom_page_cost is the one\n> setting most out of sync with today's hardware realities. So I'll be brave\n> and throw a number out there: 1.2. And change our docs to say wordage like\n> \"if you are using an older hard disk drive technology, you may want to try\n> raising rpc\" to replace our fairly-hidden note about SSDs buried in the\n> last sentence - of the fourth paragraph - of the rpc docs.\n>\n\n+1\n\nI suggest a slightly nicer comment in the default conf file, like \"For\nspinning hard drives, raise this to at least 3 and test\"\n\nRoberto\n\nOn Fri, Sep 27, 2024 at 8:07 AM Greg Sabino Mullane <htamfids@gmail.com> wrote:tl;dr let's assume SSDs are popular and HDDs are the exception and flip our default<snip> Granted, there are other factors involved, and yes, perhaps we should tweak some of the similar settings as well, but ranom_page_cost is the one setting most out of sync with today's hardware realities. So I'll be brave and throw a number out there: 1.2. And change our docs to say wordage like \"if you are using an older hard disk drive technology, you may want to try raising rpc\" to replace our fairly-hidden note about SSDs buried in the last sentence - of the fourth paragraph - of the rpc docs.+1I suggest a slightly nicer comment in the default conf file, like \"For spinning hard drives, raise this to at least 3 and test\"Roberto", "msg_date": "Fri, 27 Sep 2024 08:26:38 -0600", "msg_from": "Roberto Mello <roberto.mello@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Changing the default random_page_cost value" }, { "msg_contents": "On Fri, 2024-09-27 at 10:07 -0400, Greg Sabino Mullane wrote:\n> So I'll be brave and throw a number out there: 1.2.\n\n+1\n\nLaurenz Albe\n\n\n", "msg_date": "Fri, 27 Sep 2024 17:35:51 +0200", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: Changing the default random_page_cost value" }, { "msg_contents": "Greg Sabino Mullane <htamfids@gmail.com> writes:\n\n> So I'll be brave and throw a number out there: 1.2. And change our\n> docs to say wordage like \"if you are using an older hard disk drive\n> technology, you may want to try raising rpc\" to replace our\n> fairly-hidden note about SSDs buried in the last sentence - of the\n> fourth paragraph - of the rpc docs.\n\nIt might also be worth mentioning cloudy block storage (e.g. 
AWS' EBS),\nwhich is typically backed by SSDs, but has extra network latency.\n\n- ilmari\n\n\n", "msg_date": "Fri, 27 Sep 2024 17:03:33 +0100", "msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org>", "msg_from_op": false, "msg_subject": "Re: Changing the default random_page_cost value" }, { "msg_contents": "On Fri, Sep 27, 2024 at 12:03 PM Dagfinn Ilmari Mannsåker <ilmari@ilmari.org>\nwrote:\n\n> It might also be worth mentioning cloudy block storage (e.g. AWS' EBS),\n> which is typically backed by SSDs, but has extra network latency.\n>\n\nThat seems a little too in the weeds for me, but wording suggestions are\nwelcome. To get things moving forward, I made a doc patch which changes a\nfew things, namely:\n\n* Mentions the distinction between ssd and hdd right up front.\n* Moves the tablespace talk to the very end, as tablespace use is getting\nrarer (again, thanks in part to ssds)\n* Mentions the capability to set per-database and per-role since we mention\nper-tablespace.\n* Removes a lot of the talk of caches and justifications for the 4.0\nsetting. While those are interesting, I've been tuning this parameter for\nmany years and never really cared about the \"90% cache rate\". The proof is\nin the pudding: rpc is the canonical \"try it and see\" parameter. Tweak.\nTest. Repeat.\n\nCheers,\nGreg", "msg_date": "Mon, 30 Sep 2024 10:05:29 -0400", "msg_from": "Greg Sabino Mullane <htamfids@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Changing the default random_page_cost value" } ]
[ { "msg_contents": "The following bug has been logged on the website:\n\nBug reference: 18641\nLogged by: Alexander Lakhin\nEmail address: exclusion@gmail.com\nPostgreSQL version: 17.0\nOperating system: Ubuntu 22.04\nDescription: \n\nLogical decoding of a transaction like:\r\nBEGIN;\r\nINSERT INTO test_tab VALUES(1);\r\nPREPARE TRANSACTION 'pt';\r\n\r\nwhere test_tab defined as:\r\nCREATE TABLE test_tab(a int primary key,\r\n t text DEFAULT 'Some TOASTable value';\r\n\r\nfor a subscription created with two_phase = on, fails as below:\r\n2024-09-28 06:44:50.708 UTC [3741774:6][client backend][6/2:740] LOG: \nstatement: PREPARE TRANSACTION 'pt';\r\n2024-09-28 06:44:50.709 UTC [3741774:7][client backend][:0] LOG: \ndisconnection: session time: 0:00:00.006 user=law database=postgres\nhost=[local]\r\n2024-09-28 06:44:50.713 UTC [3741741:17][walsender][25/0:0] ERROR: \nunexpected table_index_fetch_tuple call during logical decoding\r\n2024-09-28 06:44:50.713 UTC [3741741:18][walsender][25/0:0] BACKTRACE: \r\ntable_index_fetch_tuple at tableam.h:1253:3\r\nindex_fetch_heap at indexam.c:637:10\r\nindex_getnext_slot at indexam.c:697:6\r\nsystable_getnext_ordered at genam.c:717:5\r\nheap_fetch_toast_slice at heaptoast.c:698:17\r\ntable_relation_fetch_toast_slice at tableam.h:1924:1\r\ntoast_fetch_datum at detoast.c:379:2\r\ndetoast_attr at detoast.c:123:10\r\npg_detoast_datum_packed at fmgr.c:1867:10\r\ntext_to_cstring at varlena.c:220:23\r\nAttrDefaultFetch at relcache.c:4537:17\r\nRelationBuildTupleDesc at relcache.c:697:4\r\nRelationBuildDesc at relcache.c:1188:24\r\nRelationIdGetRelation at relcache.c:2116:7\r\nReorderBufferProcessTXN at reorderbuffer.c:2246:17\r\nReorderBufferReplay at reorderbuffer.c:2725:2\r\nReorderBufferPrepare at reorderbuffer.c:2822:2\r\nDecodePrepare at decode.c:826:2\r\nxact_decode at decode.c:347:5\r\nLogicalDecodingProcessRecord at decode.c:123:1\r\nXLogSendLogical at walsender.c:3413:33\r\nWalSndLoop at walsender.c:2814:4\r\nStartLogicalReplication at walsender.c:1523:2\r\nexec_replication_command at walsender.c:2148:5\r\nPostgresMain at postgres.c:4763:11\r\nBackendInitialize at backend_startup.c:123:1\r\npostmaster_child_launch at launch_backend.c:281:9\r\nBackendStartup at postmaster.c:3593:8\r\nServerLoop at postmaster.c:1677:10\r\nPostmasterMain at postmaster.c:1372:11\r\nstartup_hacks at main.c:217:1\r\n /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0x80)\n[0x7f30a90d7e40]\r\n postgres: node_publisher: walsender law postgres [local]\nSTART_REPLICATION(_start+0x25) [0x5647b6023565]\r\n2024-09-28 06:44:50.713 UTC [3741741:19][walsender][25/0:0] LOG: released\nlogical replication slot \"test_sub\"\r\n2024-09-28 06:44:50.713 UTC [3741741:20][walsender][25/0:0] LOG: could not\nsend data to client: Broken pipe\r\n2024-09-28 06:44:50.713 UTC [3741741:21][walsender][25/0:0] FATAL: \nconnection to client lost\r\n\r\nThe issue can be easily reproduced with 022_twophase_cascade.pl modified\r\nlike this:\r\n@@ -67,0 +68 @@ $node_A->safe_psql('postgres',\r\n+my $default = join('', map {chr(65 + rand 26)} (1 .. 
10000));\r\n@@ -69 +70 @@ $node_B->safe_psql('postgres',\r\n- \"CREATE TABLE test_tab (a int primary key, b bytea, c timestamptz\nDEFAULT now(), d bigint DEFAULT 999)\"\r\n+ \"CREATE TABLE test_tab (a int primary key, b bytea, c timestamptz\nDEFAULT now(), d bigint DEFAULT 999, t text DEFAULT '$default')\"\r\n@@ -72 +73 @@ $node_C->safe_psql('postgres',\r\n- \"CREATE TABLE test_tab (a int primary key, b bytea, c timestamptz\nDEFAULT now(), d bigint DEFAULT 999)\"\r\n+ \"CREATE TABLE test_tab (a int primary key, b bytea, c timestamptz\nDEFAULT now(), d bigint DEFAULT 999, t text DEFAULT '$default')\"\r\n\r\nReproduced on REL_15_STABLE (starting from a8fd13cab) .. master.", "msg_date": "Sat, 28 Sep 2024 11:00:01 +0000", "msg_from": "PG Bug reporting form <noreply@postgresql.org>", "msg_from_op": true, "msg_subject": "BUG #18641: Logical decoding of two-phase commit fails with TOASTed\n default values" }, { "msg_contents": "Hi, Alexander and hackers\n\n>\n> The following bug has been logged on the website:\n>\n> Bug reference: 18641\n> Logged by: Alexander Lakhin\n> Email address: exclusion@gmail.com\n> PostgreSQL version: 17.0\n> Operating system: Ubuntu 22.04\n> Description:\n>\n> Logical decoding of a transaction like:\n> BEGIN;\n> INSERT INTO test_tab VALUES(1);\n> PREPARE TRANSACTION 'pt';\n>\n> where test_tab defined as:\n> CREATE TABLE test_tab(a int primary key,\n> t text DEFAULT 'Some TOASTable value';\n>\n> for a subscription created with two_phase = on, fails as below:\n> 2024-09-28 06:44:50.708 UTC [3741774:6][client backend][6/2:740] LOG:\n> statement: PREPARE TRANSACTION 'pt';\n> 2024-09-28 06:44:50.709 UTC [3741774:7][client backend][:0] LOG:\n> disconnection: session time: 0:00:00.006 user=law database=postgres\n> host=[local]\n> 2024-09-28 06:44:50.713 UTC [3741741:17][walsender][25/0:0] ERROR:\n> unexpected table_index_fetch_tuple call during logical decoding\n> 2024-09-28 06:44:50.713 UTC [3741741:18][walsender][25/0:0] BACKTRACE:\n> table_index_fetch_tuple at tableam.h:1253:3\n> index_fetch_heap at indexam.c:637:10\n> index_getnext_slot at indexam.c:697:6\n> systable_getnext_ordered at genam.c:717:5\n> heap_fetch_toast_slice at heaptoast.c:698:17\n> table_relation_fetch_toast_slice at tableam.h:1924:1\n> toast_fetch_datum at detoast.c:379:2\n> detoast_attr at detoast.c:123:10\n> pg_detoast_datum_packed at fmgr.c:1867:10\n> text_to_cstring at varlena.c:220:23\n> AttrDefaultFetch at relcache.c:4537:17\n> RelationBuildTupleDesc at relcache.c:697:4\n> RelationBuildDesc at relcache.c:1188:24\n> RelationIdGetRelation at relcache.c:2116:7\n> ReorderBufferProcessTXN at reorderbuffer.c:2246:17\n> ReorderBufferReplay at reorderbuffer.c:2725:2\n> ReorderBufferPrepare at reorderbuffer.c:2822:2\n> DecodePrepare at decode.c:826:2\n> xact_decode at decode.c:347:5\n> LogicalDecodingProcessRecord at decode.c:123:1\n> XLogSendLogical at walsender.c:3413:33\n> WalSndLoop at walsender.c:2814:4\n> StartLogicalReplication at walsender.c:1523:2\n> exec_replication_command at walsender.c:2148:5\n> PostgresMain at postgres.c:4763:11\n> BackendInitialize at backend_startup.c:123:1\n> postmaster_child_launch at launch_backend.c:281:9\n> BackendStartup at postmaster.c:3593:8\n> ServerLoop at postmaster.c:1677:10\n> PostmasterMain at postmaster.c:1372:11\n> startup_hacks at main.c:217:1\n> /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0x80)\n> [0x7f30a90d7e40]\n> postgres: node_publisher: walsender law postgres [local]\n> START_REPLICATION(_start+0x25) [0x5647b6023565]\n> 2024-09-28 06:44:50.713 
UTC [3741741:19][walsender][25/0:0] LOG: released\n> logical replication slot \"test_sub\"\n> 2024-09-28 06:44:50.713 UTC [3741741:20][walsender][25/0:0] LOG: could not\n> send data to client: Broken pipe\n> 2024-09-28 06:44:50.713 UTC [3741741:21][walsender][25/0:0] FATAL:\n> connection to client lost\n>\n> The issue can be easily reproduced with 022_twophase_cascade.pl modified\n> like this:\n> @@ -67,0 +68 @@ $node_A->safe_psql('postgres',\n> +my $default = join('', map {chr(65 + rand 26)} (1 .. 10000));\n> @@ -69 +70 @@ $node_B->safe_psql('postgres',\n> - \"CREATE TABLE test_tab (a int primary key, b bytea, c timestamptz\n> DEFAULT now(), d bigint DEFAULT 999)\"\n> + \"CREATE TABLE test_tab (a int primary key, b bytea, c timestamptz\n> DEFAULT now(), d bigint DEFAULT 999, t text DEFAULT '$default')\"\n> @@ -72 +73 @@ $node_C->safe_psql('postgres',\n> - \"CREATE TABLE test_tab (a int primary key, b bytea, c timestamptz\n> DEFAULT now(), d bigint DEFAULT 999)\"\n> + \"CREATE TABLE test_tab (a int primary key, b bytea, c timestamptz\n> DEFAULT now(), d bigint DEFAULT 999, t text DEFAULT '$default')\"\n>\n> Reproduced on REL_15_STABLE (starting from a8fd13cab) .. master.\n\nThank you for reporting the issue.\nI was able to reproduce the issue by modifying 022_twophase_cascade.pl\naccordingly.\n\nThe scan for toast index is actually done under systable_getnext_ordered,\nwhere HandleConcurrentAbort() is called. So it seems to me that this\nscan is actually safe for concurrent abort in logical decoding.\nLogic around HandleConcurrentAbort is intorduced\nhttps://github.com/postgres/postgres/commit/7259736a6e5b7c7588fff9578370736a6648acbb.\n\nThough I may not understand the logic around HandleConcurrentAbort\nfully and I am not sure not-setting bsysscan at\nsystable_beginscan_ordered is intentional,\nit seems to me setting and unsetting a bsysscan flag in\nsystable_beginscan_ordered and systable_endscan_ordered would resolve\nthe issue.\nPatch is attached.\n\nRegards,\nTakeshi Ideriha", "msg_date": "Mon, 30 Sep 2024 10:16:00 +0900", "msg_from": "Takeshi Ideriha <iderihatakeshi@gmail.com>", "msg_from_op": false, "msg_subject": "Re: BUG #18641: Logical decoding of two-phase commit fails with\n TOASTed default values" }, { "msg_contents": "On Mon, Sep 30, 2024 at 6:46 AM Takeshi Ideriha\n<iderihatakeshi@gmail.com> wrote:\n>\n> Patch is attached.\n>\n\nThanks for the patch. I'll look into it.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 30 Sep 2024 18:14:32 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: BUG #18641: Logical decoding of two-phase commit fails with\n TOASTed default values" }, { "msg_contents": "On Mon, Sep 30, 2024 at 6:46 AM Takeshi Ideriha\n<iderihatakeshi@gmail.com> wrote:\n>\n> Thank you for reporting the issue.\n> I was able to reproduce the issue by modifying 022_twophase_cascade.pl\n> accordingly.\n>\n> The scan for toast index is actually done under systable_getnext_ordered,\n> where HandleConcurrentAbort() is called. 
So it seems to me that this\n> scan is actually safe for concurrent abort in logical decoding.\n> Logic around HandleConcurrentAbort is intorduced\n> https://github.com/postgres/postgres/commit/7259736a6e5b7c7588fff9578370736a6648acbb.\n>\n> Though I may not understand the logic around HandleConcurrentAbort\n> fully and I am not sure not-setting bsysscan at\n> systable_beginscan_ordered is intentional,\n> it seems to me setting and unsetting a bsysscan flag in\n> systable_beginscan_ordered and systable_endscan_ordered would resolve\n> the issue.\n>\n\nWe forgot to set/unset the flag in functions\nsystable_beginscan_ordered and systable_endscan_ordered. BTW,\nshouldn't this occur even without prepare transaction? If so, we need\nto backpatch this till 14.\n\nAlso, it is better to have a test for this, and let's ensure that the\nnew test doesn't increase the regression time too much if possible.\n\nOne minor point:\n+\n+ /*\n+ * If CheckXidAlive is set then set a flag to indicate that system table\n\nThe indentation in the first comment line seems off.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 30 Sep 2024 21:50:00 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: BUG #18641: Logical decoding of two-phase commit fails with\n TOASTed default values" } ]
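For context, the shape of the proposed fix (based on the existing CheckXidAlive handling in systable_beginscan() and systable_endscan() in genam.c; the attached patch may differ in detail) is simply to mirror that handling in the ordered variants:

/* in systable_beginscan_ordered(), once the index scan is set up: */

    /*
     * If CheckXidAlive is set then set a flag to indicate that system table
     * scan is in-progress.  See detailed comments in xact.c where these
     * variables are declared.
     */
    if (TransactionIdIsValid(CheckXidAlive))
        bsysscan = true;

/* in systable_endscan_ordered(), before returning: */

    /*
     * Reset the bsysscan flag at the end of the systable scan.  See detailed
     * comments in xact.c where these variables are declared.
     */
    if (TransactionIdIsValid(CheckXidAlive))
        bsysscan = false;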
[ { "msg_contents": "A week or so ago I upgraded the msys2 animal fairywren to the latest \nmsys2, and ever since then the build has been failing for Release 15. \nIt's complaining like this:\n\nccache gcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wimplicit-fallthrough=3 -Wcast-function-type -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -g -O2 -DFRONTEND -DUNSAFE_STAT_OK -I/home/pgrunner/bf/root/REL_15_STABLE/pgsql.build/../pgsql/src/interfaces/libpq -I../../../src/include -I/home/pgrunner/bf/root/REL_15_STABLE/pgsql.build/../pgsql/src/include -I../pgsql/src/include/port/win32 -I/c/progra~1/openssl-win64/include \"-I/home/pgrunner/bf/root/REL_15_STABLE/pgsql.build/../pgsql/src/include/port/win32\" -DWIN32_STACK_RLIMIT=4194304 -I../../../src/port -I/home/pgrunner/bf/root/REL_15_STABLE/pgsql.build/../pgsql/src/port -DSO_MAJOR_VERSION=5 -c -o fe-secure-common.o /home/pgrunner/bf/root/REL_15_STABLE/pgsql.build/../pgsql/src/interfaces/libpq/fe-secure-common.c\nC:/tools/xmsys64/home/pgrunner/bf/root/REL_15_STABLE/pgsql/src/interfaces/libpq/fe-secure-common.c: In function 'pq_verify_peer_name_matches_certificate_ip':\nC:/tools/xmsys64/home/pgrunner/bf/root/REL_15_STABLE/pgsql/src/interfaces/libpq/fe-secure-common.c:219:21: error: implicit declaration of function 'inet_pton'; did you mean 'inet_aton'? [-Wimplicit-function-declaration]\n 219 | if (inet_pton(AF_INET6, host, &addr) == 1)\n | ^~~~~~~~~\n | inet_aton\nmake[3]: *** [<builtin>: fe-secure-common.o] Error 1\n\nconfigure has determined that we have inet_pton, and I have repeated the \ntest manually. It's not a ccache issue - I have cleared the cache and \nthe problem persists. The test run by meson on the same animal reports \nnot finding the function.\n\nSo I'm a bit flummoxed about how to fix this, and would appreciate any \nsuggestions.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nA week or so ago I upgraded the msys2\n animal fairywren to the latest msys2, and ever since then the\n build has been failing for Release 15. It's complaining like\n this:\nccache gcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wimplicit-fallthrough=3 -Wcast-function-type -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation -g -O2 -DFRONTEND -DUNSAFE_STAT_OK -I/home/pgrunner/bf/root/REL_15_STABLE/pgsql.build/../pgsql/src/interfaces/libpq -I../../../src/include -I/home/pgrunner/bf/root/REL_15_STABLE/pgsql.build/../pgsql/src/include -I../pgsql/src/include/port/win32 -I/c/progra~1/openssl-win64/include \"-I/home/pgrunner/bf/root/REL_15_STABLE/pgsql.build/../pgsql/src/include/port/win32\" -DWIN32_STACK_RLIMIT=4194304 -I../../../src/port -I/home/pgrunner/bf/root/REL_15_STABLE/pgsql.build/../pgsql/src/port -DSO_MAJOR_VERSION=5 -c -o fe-secure-common.o /home/pgrunner/bf/root/REL_15_STABLE/pgsql.build/../pgsql/src/interfaces/libpq/fe-secure-common.c\nC:/tools/xmsys64/home/pgrunner/bf/root/REL_15_STABLE/pgsql/src/interfaces/libpq/fe-secure-common.c: In function 'pq_verify_peer_name_matches_certificate_ip':\nC:/tools/xmsys64/home/pgrunner/bf/root/REL_15_STABLE/pgsql/src/interfaces/libpq/fe-secure-common.c:219:21: error: implicit declaration of function 'inet_pton'; did you mean 'inet_aton'? 
[-Wimplicit-function-declaration]\n 219 | if (inet_pton(AF_INET6, host, &addr) == 1)\n | ^~~~~~~~~\n | inet_aton\nmake[3]: *** [<builtin>: fe-secure-common.o] Error 1\n\nconfigure has determined that we have inet_pton, and I have\n repeated the test manually. It's not a ccache issue - I have\n cleared the cache and the problem persists. The test run by meson\n on the same animal reports not finding the function.\nSo I'm a bit flummoxed about how to fix this, and would\n appreciate any suggestions.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com", "msg_date": "Sat, 28 Sep 2024 09:50:29 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": true, "msg_subject": "msys inet_pton strangeness" }, { "msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> It's complaining like this:\n\n> C:/tools/xmsys64/home/pgrunner/bf/root/REL_15_STABLE/pgsql/src/interfaces/libpq/fe-secure-common.c:219:21: error: implicit declaration of function 'inet_pton'; did you mean 'inet_aton'? [-Wimplicit-function-declaration]\n> 219 | if (inet_pton(AF_INET6, host, &addr) == 1)\n> | ^~~~~~~~~\n\n> configure has determined that we have inet_pton, and I have repeated the \n> test manually.\n\nconfigure's test is purely a linker test. It does not check to see\nwhere/whether the function is declared. Meanwhile, the compiler is\ncomplaining that it doesn't see a declaration. So the problem\nprobably can be fixed by adding an #include, but you'll need to\nfigure out what.\n\nI see that our other user of inet_pton, fe-secure-openssl.c,\nhas a rather different #include setup than fe-secure-common.c;\ndoes it compile OK?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 28 Sep 2024 11:49:06 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: msys inet_pton strangeness" }, { "msg_contents": "On 2024-09-28 Sa 11:49 AM, Tom Lane wrote:\n> Andrew Dunstan<andrew@dunslane.net> writes:\n>> It's complaining like this:\n>> C:/tools/xmsys64/home/pgrunner/bf/root/REL_15_STABLE/pgsql/src/interfaces/libpq/fe-secure-common.c:219:21: error: implicit declaration of function 'inet_pton'; did you mean 'inet_aton'? [-Wimplicit-function-declaration]\n>> 219 | if (inet_pton(AF_INET6, host, &addr) == 1)\n>> | ^~~~~~~~~\n>> configure has determined that we have inet_pton, and I have repeated the\n>> test manually.\n> configure's test is purely a linker test. It does not check to see\n> where/whether the function is declared. Meanwhile, the compiler is\n> complaining that it doesn't see a declaration. 
So the problem\n> probably can be fixed by adding an #include, but you'll need to\n> figure out what.\n>\n> I see that our other user of inet_pton, fe-secure-openssl.c,\n> has a rather different #include setup than fe-secure-common.c;\n> does it compile OK?\n\n\nI'll try, but this error occurs before we get that far.\n\nWe should have included ws2tcpip.h, which includes this:\n\n #define InetPtonA inet_pton\n WINSOCK_API_LINKAGE INT WSAAPI InetPtonA(INT Family, LPCSTR pStringBuf, PVOID pAddr);\n\nIt's conditioned on (_WIN32_WINNT >= 0x0600), but that should be true.\n\nSo I'm still very confused ;-(\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2024-09-28 Sa 11:49 AM, Tom Lane\n wrote:\n\n\nAndrew Dunstan <andrew@dunslane.net> writes:\n\n\nIt's complaining like this:\n\n\n\n\n\nC:/tools/xmsys64/home/pgrunner/bf/root/REL_15_STABLE/pgsql/src/interfaces/libpq/fe-secure-common.c:219:21: error: implicit declaration of function 'inet_pton'; did you mean 'inet_aton'? [-Wimplicit-function-declaration]\n 219 | if (inet_pton(AF_INET6, host, &addr) == 1)\n | ^~~~~~~~~\n\n\n\n\n\nconfigure has determined that we have inet_pton, and I have repeated the \ntest manually.\n\n\n\nconfigure's test is purely a linker test. It does not check to see\nwhere/whether the function is declared. Meanwhile, the compiler is\ncomplaining that it doesn't see a declaration. So the problem\nprobably can be fixed by adding an #include, but you'll need to\nfigure out what.\n\nI see that our other user of inet_pton, fe-secure-openssl.c,\nhas a rather different #include setup than fe-secure-common.c;\ndoes it compile OK?\n\n\n\nI'll try, but this error occurs before we get that far.\nWe should have included ws2tcpip.h, which includes this:\n\n#define InetPtonA inet_pton\nWINSOCK_API_LINKAGE INT WSAAPI InetPtonA(INT Family, LPCSTR pStringBuf, PVOID pAddr);\n\nIt's conditioned on (_WIN32_WINNT >= 0x0600), but that should\n be true.\nSo I'm still very confused ;-(\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com", "msg_date": "Sat, 28 Sep 2024 13:26:21 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": true, "msg_subject": "Re: msys inet_pton strangeness" }, { "msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> We should have included ws2tcpip.h, which includes this:\n\n> #define InetPtonA inet_pton\n> WINSOCK_API_LINKAGE INT WSAAPI InetPtonA(INT Family, LPCSTR pStringBuf, PVOID pAddr);\n\n> It's conditioned on (_WIN32_WINNT >= 0x0600), but that should be true.\n\n> So I'm still very confused ;-(\n\nMe too. Does this compiler support the equivalent of -E, so\nthat you can verify that the InetPtonA declaration is being\nread?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 28 Sep 2024 15:49:56 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: msys inet_pton strangeness" }, { "msg_contents": "On Sun, Sep 29, 2024 at 6:26 AM Andrew Dunstan <andrew@dunslane.net> wrote:\n> We should have included ws2tcpip.h, which includes this:\n>\n> #define InetPtonA inet_pton\n> WINSOCK_API_LINKAGE INT WSAAPI InetPtonA(INT Family, LPCSTR pStringBuf, PVOID pAddr);\n>\n> It's conditioned on (_WIN32_WINNT >= 0x0600), but that should be true.\n\nCan you print out the value to be sure? 
I can't imagine they'd set it\nlower themselves or make it go backwards in an upgrade, but perhaps\nit's somehow not being set at all, and then we do:\n\n#if defined(_MSC_VER) && _MSC_VER >= 1900\n#define MIN_WINNT 0x0600\n#else\n#define MIN_WINNT 0x0501\n#endif\n\nIn 16 we don't do that anymore, we just always set it to 0x0A00\n(commit 495ed0ef2d72). And before 15, we didn't want that function\nyet (commit c1932e542863).\n\n\n", "msg_date": "Sun, 29 Sep 2024 09:52:52 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: msys inet_pton strangeness" }, { "msg_contents": "Hello Thomas and Andrew,\n\n28.09.2024 23:52, Thomas Munro wrote:\n> On Sun, Sep 29, 2024 at 6:26 AM Andrew Dunstan <andrew@dunslane.net> wrote:\n>> We should have included ws2tcpip.h, which includes this:\n>>\n>> #define InetPtonA inet_pton\n>> WINSOCK_API_LINKAGE INT WSAAPI InetPtonA(INT Family, LPCSTR pStringBuf, PVOID pAddr);\n>>\n>> It's conditioned on (_WIN32_WINNT >= 0x0600), but that should be true.\n> Can you print out the value to be sure? I can't imagine they'd set it\n> lower themselves or make it go backwards in an upgrade, but perhaps\n> it's somehow not being set at all, and then we do:\n>\n> #if defined(_MSC_VER) && _MSC_VER >= 1900\n> #define MIN_WINNT 0x0600\n> #else\n> #define MIN_WINNT 0x0501\n> #endif\n>\n> In 16 we don't do that anymore, we just always set it to 0x0A00\n> (commit 495ed0ef2d72). And before 15, we didn't want that function\n> yet (commit c1932e542863).\n\nFWIW, I'm observing the same here.\nFor a trivial test.c (compiled with the same command line as\nfe-secure-common.c) like:\n\"===_WIN32\"\n_WIN32;\n\"===_WIN32_WINNT\";\n_WIN32_WINNT;\n\nwith gcc -E (from mingw-w64-ucrt-x86_64-gcc 14.2.0-1), I get:\n\"===_WIN32\"\n1;\n\"===_WIN32_WINNT\";\n_WIN32_WINNT;\n\nThat is, _WIN32_WINNT is not defined, but with #include <windows.h> above,\nI see:\n\"===_WIN32_WINNT\";\n0x603\n\nWith #include \"postgres_fe.h\" (as in fe-secure-common.c) I get:\n\"===_WIN32_WINNT\";\n0x0501;\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Sun, 29 Sep 2024 08:00:00 +0300", "msg_from": "Alexander Lakhin <exclusion@gmail.com>", "msg_from_op": false, "msg_subject": "Re: msys inet_pton strangeness" }, { "msg_contents": "On 2024-09-29 Su 1:00 AM, Alexander Lakhin wrote:\n> Hello Thomas and Andrew,\n>\n> 28.09.2024 23:52, Thomas Munro wrote:\n>> On Sun, Sep 29, 2024 at 6:26 AM Andrew Dunstan <andrew@dunslane.net> \n>> wrote:\n>>> We should have included ws2tcpip.h, which includes this:\n>>>\n>>> #define InetPtonA inet_pton\n>>> WINSOCK_API_LINKAGE INT WSAAPI InetPtonA(INT Family, LPCSTR \n>>> pStringBuf, PVOID pAddr);\n>>>\n>>> It's conditioned on (_WIN32_WINNT >= 0x0600), but that should be true.\n>> Can you print out the value to be sure?  I can't imagine they'd set it\n>> lower themselves or make it go backwards in an upgrade, but perhaps\n>> it's somehow not being set at all, and then we do:\n>>\n>> #if defined(_MSC_VER) && _MSC_VER >= 1900\n>> #define MIN_WINNT 0x0600\n>> #else\n>> #define MIN_WINNT 0x0501\n>> #endif\n>>\n>> In 16 we don't do that anymore, we just always set it to 0x0A00\n>> (commit 495ed0ef2d72).  
And before 15, we didn't want that function\n>> yet (commit c1932e542863).\n>\n> FWIW, I'm observing the same here.\n> For a trivial test.c (compiled with the same command line as\n> fe-secure-common.c) like:\n> \"===_WIN32\"\n> _WIN32;\n> \"===_WIN32_WINNT\";\n> _WIN32_WINNT;\n>\n> with gcc -E (from mingw-w64-ucrt-x86_64-gcc 14.2.0-1), I get:\n> \"===_WIN32\"\n> 1;\n> \"===_WIN32_WINNT\";\n> _WIN32_WINNT;\n>\n> That is, _WIN32_WINNT is not defined, but with #include <windows.h> \n> above,\n> I see:\n> \"===_WIN32_WINNT\";\n> 0x603\n>\n> With #include \"postgres_fe.h\" (as in fe-secure-common.c) I get:\n> \"===_WIN32_WINNT\";\n> 0x0501;\n>\n>\n>\n\n\nYeah, src/include/port/win32/sys/socket.h has:\n\n #include <winsock2.h>\n #include <ws2tcpip.h>\n #include <windows.h>\n\nI'm inclined to think we might need to reverse the order of the last \ntwo. TBH I don't really understand how this has worked up to now.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2024-09-29 Su 1:00 AM, Alexander\n Lakhin wrote:\n\nHello\n Thomas and Andrew,\n \n\n 28.09.2024 23:52, Thomas Munro wrote:\n \nOn Sun, Sep 29, 2024 at 6:26 AM Andrew\n Dunstan <andrew@dunslane.net> wrote:\n \nWe should have included ws2tcpip.h,\n which includes this:\n \n\n #define InetPtonA inet_pton\n \n WINSOCK_API_LINKAGE INT WSAAPI InetPtonA(INT Family, LPCSTR\n pStringBuf, PVOID pAddr);\n \n\n It's conditioned on (_WIN32_WINNT >= 0x0600), but that\n should be true.\n \n\n Can you print out the value to be sure?  I can't imagine they'd\n set it\n \n lower themselves or make it go backwards in an upgrade, but\n perhaps\n \n it's somehow not being set at all, and then we do:\n \n\n #if defined(_MSC_VER) && _MSC_VER >= 1900\n \n #define MIN_WINNT 0x0600\n \n #else\n \n #define MIN_WINNT 0x0501\n \n #endif\n \n\n In 16 we don't do that anymore, we just always set it to 0x0A00\n \n (commit 495ed0ef2d72).  And before 15, we didn't want that\n function\n \n yet (commit c1932e542863).\n \n\n\n FWIW, I'm observing the same here.\n \n For a trivial test.c (compiled with the same command line as\n \n fe-secure-common.c) like:\n \n \"===_WIN32\"\n \n _WIN32;\n \n \"===_WIN32_WINNT\";\n \n _WIN32_WINNT;\n \n\n with gcc -E (from mingw-w64-ucrt-x86_64-gcc 14.2.0-1), I get:\n \n \"===_WIN32\"\n \n 1;\n \n \"===_WIN32_WINNT\";\n \n _WIN32_WINNT;\n \n\n That is, _WIN32_WINNT is not defined, but with #include\n <windows.h> above,\n \n I see:\n \n \"===_WIN32_WINNT\";\n \n 0x603\n \n\n With #include \"postgres_fe.h\" (as in fe-secure-common.c) I get:\n \n \"===_WIN32_WINNT\";\n \n 0x0501;\n \n\n\n\n\n\n\n\n\nYeah, src/include/port/win32/sys/socket.h has:\n\n#include <winsock2.h>\n#include <ws2tcpip.h>\n#include <windows.h>\n\n\nI'm inclined to think we might need to reverse the order of the\n last two. TBH I don't really understand how this has worked up to\n now.\n\n\n\ncheers\n\n\nandrew\n\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com", "msg_date": "Sun, 29 Sep 2024 11:47:23 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": true, "msg_subject": "Re: msys inet_pton strangeness" }, { "msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> Yeah, src/include/port/win32/sys/socket.h has:\n\n> #include <winsock2.h>\n> #include <ws2tcpip.h>\n> #include <windows.h>\n\n> I'm inclined to think we might need to reverse the order of the last \n> two. TBH I don't really understand how this has worked up to now.\n\nI see the same in src/include/port/win32_port.h ... 
wouldn't that\nget included first?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 29 Sep 2024 12:24:13 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: msys inet_pton strangeness" }, { "msg_contents": "29.09.2024 18:47, Andrew Dunstan wrote:\n> Yeah, src/include/port/win32/sys/socket.h has:\n>\n> #include <winsock2.h>\n> #include <ws2tcpip.h>\n> #include <windows.h>\n>\n> I'm inclined to think we might need to reverse the order of the last two. TBH I don't really understand how this has \n> worked up to now.\n>\n\nAs far as I can see, in my environment  _WIN32_WINNT defined with\n#ifndef _WIN32_WINNT\n#define _WIN32_WINNT 0x603\n#endif\n\ninside C:/msys64/ucrt64/include/_mingw.h, which can be included\n                  from C:/msys64/ucrt64/include/corecrt.h:10,\n                  from C:/msys64/ucrt64/include/crtdefs.h:10,\n                  from ../../../src/include/pg_config_os.h:40,\n                  from ../../../src/include/c.h:56,\n                  from ../../../src/include/postgres_fe.h:25,\n                  from fe-secure-common.c:20\n\nor (if HAVE_CRTDEFS_H is not defined):\n                  from C:/msys64/ucrt64/include/corecrt.h:10,\n                  from C:/msys64/ucrt64/include/corecrt_stdio_config.h:10,\n                  from C:/msys64/ucrt64/include/stdio.h:9,\n                  from ../../../src/include/c.h:59,\n                  from ../../../src/include/postgres_fe.h:25,\n                  from fe-secure-common.c:20\n\nor (if winsock2.h included directly):\n                  from C:/msys64/ucrt64/include/windows.h:9,\n                  from C:/msys64/ucrt64/include/winsock2.h:23\n\nso including winsock2.h is sufficient to include _mingw.h, but it doesn't\nredefine _WIN32_WINNT, unfortunately.\n\nBest regards,\nAlexander\n\n\n\n\n\n29.09.2024 18:47, Andrew Dunstan wrote:\n\n\n\n Yeah, src/include/port/win32/sys/socket.h has:\n \n#include <winsock2.h>\n#include <ws2tcpip.h>\n#include <windows.h>\n\n\nI'm inclined to think we might need to reverse the order of the\n last two. 
TBH I don't really understand how this has worked up\n to now.\n\n\n\n As far as I can see, in my environment  _WIN32_WINNT defined with\n #ifndef _WIN32_WINNT\n #define _WIN32_WINNT 0x603\n #endif\n\n inside C:/msys64/ucrt64/include/_mingw.h, which can be included\n                  from C:/msys64/ucrt64/include/corecrt.h:10,\n                  from C:/msys64/ucrt64/include/crtdefs.h:10,\n                  from ../../../src/include/pg_config_os.h:40,\n                  from ../../../src/include/c.h:56,\n                  from ../../../src/include/postgres_fe.h:25,\n                  from fe-secure-common.c:20\n\n or (if HAVE_CRTDEFS_H is not defined):\n                  from C:/msys64/ucrt64/include/corecrt.h:10,\n                  from\n C:/msys64/ucrt64/include/corecrt_stdio_config.h:10,\n                  from C:/msys64/ucrt64/include/stdio.h:9,\n                  from ../../../src/include/c.h:59,\n                  from ../../../src/include/postgres_fe.h:25,\n                  from fe-secure-common.c:20\n\n or (if winsock2.h included directly):\n                  from C:/msys64/ucrt64/include/windows.h:9,\n                  from C:/msys64/ucrt64/include/winsock2.h:23\n\n so including winsock2.h is sufficient to include _mingw.h, but it\n doesn't\n redefine _WIN32_WINNT, unfortunately.\n\n Best regards,\n Alexander", "msg_date": "Sun, 29 Sep 2024 23:00:00 +0300", "msg_from": "Alexander Lakhin <exclusion@gmail.com>", "msg_from_op": false, "msg_subject": "Re: msys inet_pton strangeness" }, { "msg_contents": "Just an idea...\n\n--- a/src/include/port/win32.h\n+++ b/src/include/port/win32.h\n@@ -16,7 +16,7 @@\n * get support for GetLocaleInfoEx() with locales. For everything else\n * the minimum version is Windows XP (0x0501).\n */\n-#if defined(_MSC_VER) && _MSC_VER >= 1900\n+#if !defined(_MSC_VER) || _MSC_VER >= 1900\n #define MIN_WINNT 0x0600\n #else\n #define MIN_WINNT 0x0501\n\nThat was done to reveal the Vista locale stuff, which MingGW certainly\nhas: we're calling it unconditionally in the master branch in at least\none place (and we should do more of that, to make MSVC and MinGW code\npaths the same wherever possible). In 15 the users of\nGetLocaleInfoEx() are guarded with checks that you're on MSVC so it\nstill wouldn't actually call them anyway.\n\nObviously it's not good to change the target in the back branches.\nBut apparently it already changed by accident due to some header order\nnonsense (could it be related to MinGW's recent switch to the UCRT by\ndefault?), so changing it again so that it compiles seems OK? We\ndon't seem to have a documented MinGW support range, and I've always\nsort of assumed that it's just 'recent versions only' because it's\neffectively only for developers (cross builds and suchlike). And it\ncertainly didn't really intend to be runnable on Windows XP\n(PostgreSQL 11 was the last to claim to run on Windows XP (0x0501)).\nI doubt anyone's actually going to test this on Vista or other ancient\nSDKs either, which is why I was looking for a change that *only*\naffects MinGW and doesn't risk changining anything for MSVC users on\nthe retro-computers and toolchains we claim to support. 
For example,\nheader order dependencies and side effects are a little chaotic on\nthat OS, so you could easily break something else...\n\nI guess the objection would be that (apparently) some translation\nunits are being compiled with 0x0603 from system headers, and this one\nwould use 0x0600, which might be confusing.\n\n\n", "msg_date": "Mon, 30 Sep 2024 11:28:04 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: msys inet_pton strangeness" }, { "msg_contents": "Hello Andrew and Thomas,\n\n29.09.2024 18:47, Andrew Dunstan пишет:\n>\n> I'm inclined to think we might need to reverse the order of the last two. TBH I don't really understand how this has \n> worked up to now.\n>\n\nI've looked at the last successful run [1] and discovered that\nfe-secure-common.c didn't compile cleanly too:\nccache gcc ... /home/pgrunner/bf/root/REL_15_STABLE/pgsql.build/../pgsql/src/interfaces/libpq/fe-secure-common.c\nC:/tools/nmsys64/home/pgrunner/bf/root/REL_15_STABLE/pgsql/src/interfaces/libpq/fe-secure-common.c: In function \n'pq_verify_peer_name_matches_certificate_ip':\nC:/tools/nmsys64/home/pgrunner/bf/root/REL_15_STABLE/pgsql/src/interfaces/libpq/fe-secure-common.c:219:21: warning: \nimplicit declaration of function 'inet_pton'; did you mean 'inet_aton'? [-Wimplicit-function-declaration]\n   219 |                 if (inet_pton(AF_INET6, host, &addr) == 1)\n       |                     ^~~~~~~~~\n       |                     inet_aton\n\nSo it worked just because that missing declaration generated just a\nwarning, not an error.\n\n30.09.2024 01:28, Thomas Munro wrote:\n> Just an idea...\n>\n> --- a/src/include/port/win32.h\n> +++ b/src/include/port/win32.h\n> @@ -16,7 +16,7 @@\n> * get support for GetLocaleInfoEx() with locales. For everything else\n> * the minimum version is Windows XP (0x0501).\n> */\n> -#if defined(_MSC_VER) && _MSC_VER >= 1900\n> +#if !defined(_MSC_VER) || _MSC_VER >= 1900\n> #define MIN_WINNT 0x0600\n> #else\n> #define MIN_WINNT 0x0501\n\nThis change works for me in the msys case. I have no VS 2013 on hand to\ntest the other branch, but it looks like HAVE_INET_PTON set to 1\nunconditionally in src/tools/msvc/Solution.pm, so we probably will stumble\nupon the same issue with _MSC_VER = 1800. What if we just set\nMIN_WINNT 0x0600 for REL_15_STABLE? Or may be it would make sense to get\nthat old Visual Studio and recheck?\n\nThe other question that I still have is: where we expect to get system\n_WIN32_WINNT from? 
As far as I can see, in the fe-secure-common.c case we\nhave the following include chain:\n#include \"postgres_fe.h\"\n     #include \"c.h\" // no other includes above\n         #include \"postgres_ext.h\"\n             #include \"pg_config_ext.h\"\n             ...\n             #include \"pg_config.h\"\n             #include \"pg_config_manual.h\"    /* must be after pg_config.h */\n             #include \"pg_config_os.h\"        /* must be before any system header files */\n                 // checks _WIN32_WINNT:\n                 #if defined(_WIN32_WINNT) && _WIN32_WINNT < MIN_WINNT\n\nSo if pg_config_os.h is really included before any system headers,\nchecking _WIN32_WINNT makes sense only when that define passed with\n-D_WIN32_WINNT, no?\n\n[1] https://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=fairywren&dt=2024-09-19%2023%3A10%3A10&stg=build\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Mon, 30 Sep 2024 14:00:00 +0300", "msg_from": "Alexander Lakhin <exclusion@gmail.com>", "msg_from_op": false, "msg_subject": "Re: msys inet_pton strangeness" }, { "msg_contents": "\nOn 2024-09-30 Mo 7:00 AM, Alexander Lakhin wrote:\n> Hello Andrew and Thomas,\n>\n> 29.09.2024 18:47, Andrew Dunstan пишет:\n>>\n>> I'm inclined to think we might need to reverse the order of the last \n>> two. TBH I don't really understand how this has worked up to now.\n>>\n>\n> I've looked at the last successful run [1] and discovered that\n> fe-secure-common.c didn't compile cleanly too:\n> ccache gcc ... \n> /home/pgrunner/bf/root/REL_15_STABLE/pgsql.build/../pgsql/src/interfaces/libpq/fe-secure-common.c\n> C:/tools/nmsys64/home/pgrunner/bf/root/REL_15_STABLE/pgsql/src/interfaces/libpq/fe-secure-common.c: \n> In function 'pq_verify_peer_name_matches_certificate_ip':\n> C:/tools/nmsys64/home/pgrunner/bf/root/REL_15_STABLE/pgsql/src/interfaces/libpq/fe-secure-common.c:219:21: \n> warning: implicit declaration of function 'inet_pton'; did you mean \n> 'inet_aton'? [-Wimplicit-function-declaration]\n>   219 |                 if (inet_pton(AF_INET6, host, &addr) == 1)\n>       |                     ^~~~~~~~~\n>       |                     inet_aton\n>\n> So it worked just because that missing declaration generated just a\n> warning, not an error.\n\n\n\nAh, so this is because gcc 14.1.0 treats this as an error but gcc 12.2.0 \ntreats it as a warning. Now it makes sense.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Mon, 30 Sep 2024 07:11:23 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": true, "msg_subject": "Re: msys inet_pton strangeness" }, { "msg_contents": "\nOn 2024-09-29 Su 6:28 PM, Thomas Munro wrote:\n> Just an idea...\n>\n> --- a/src/include/port/win32.h\n> +++ b/src/include/port/win32.h\n> @@ -16,7 +16,7 @@\n> * get support for GetLocaleInfoEx() with locales. 
For everything else\n> * the minimum version is Windows XP (0x0501).\n> */\n> -#if defined(_MSC_VER) && _MSC_VER >= 1900\n> +#if !defined(_MSC_VER) || _MSC_VER >= 1900\n> #define MIN_WINNT 0x0600\n> #else\n> #define MIN_WINNT 0x0501\n\n\nThis seems reasonable as just about the most minimal change we can make \nwork.\n\n\ncheers\n\n\nandrew\n\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Mon, 30 Sep 2024 09:28:57 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": true, "msg_subject": "Re: msys inet_pton strangeness" }, { "msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> Ah, so this is because gcc 14.1.0 treats this as an error but gcc 12.2.0 \n> treats it as a warning. Now it makes sense.\n\nNot entirely ... if fairywren had been generating that warning all\nalong, I would have noticed it long ago, because I periodically\nscrape the BF database for compiler warnings. There has to have\nbeen some recent change in the system include files.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 30 Sep 2024 10:08:19 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: msys inet_pton strangeness" }, { "msg_contents": "\nOn 2024-09-30 Mo 10:08 AM, Tom Lane wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n>> Ah, so this is because gcc 14.1.0 treats this as an error but gcc 12.2.0\n>> treats it as a warning. Now it makes sense.\n> Not entirely ... if fairywren had been generating that warning all\n> along, I would have noticed it long ago, because I periodically\n> scrape the BF database for compiler warnings. There has to have\n> been some recent change in the system include files.\n\n\nhere's what I see on vendikar:\n\n\npgbfprod=> select min(snapshot) from build_status_log where log_stage in \n('build.log', 'make.log') and branch = 'REL_15_STABLE' and sysname = \n'fairywren' and snapshot > now() - interval '1500 days' and log_text ~ \n'inet_pton';\n          min\n---------------------\n  2022-06-30 18:04:08\n(1 row)\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Mon, 30 Sep 2024 11:05:34 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": true, "msg_subject": "Re: msys inet_pton strangeness" }, { "msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> On 2024-09-30 Mo 10:08 AM, Tom Lane wrote:\n>> Not entirely ... if fairywren had been generating that warning all\n>> along, I would have noticed it long ago, because I periodically\n>> scrape the BF database for compiler warnings. There has to have\n>> been some recent change in the system include files.\n\n> here's what I see on vendikar:\n\nOh, wait, I forgot this is only about the v15 branch. I seldom\nsearch for warnings except on HEAD. Still, I'd have expected to\nnotice it while v15 was development tip. Maybe we changed something\nsince then?\n\nAnyway, it's pretty moot, I see no reason not to push forward\nwith the proposed fix.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 30 Sep 2024 11:11:26 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: msys inet_pton strangeness" }, { "msg_contents": "On 2024-09-30 Mo 11:11 AM, Tom Lane wrote:\n> Andrew Dunstan<andrew@dunslane.net> writes:\n>> On 2024-09-30 Mo 10:08 AM, Tom Lane wrote:\n>>> Not entirely ... 
if fairywren had been generating that warning all\n>>> along, I would have noticed it long ago, because I periodically\n>>> scrape the BF database for compiler warnings. There has to have\n>>> been some recent change in the system include files.\n>> here's what I see on vendikar:\n> Oh, wait, I forgot this is only about the v15 branch. I seldom\n> search for warnings except on HEAD. Still, I'd have expected to\n> notice it while v15 was development tip. Maybe we changed something\n> since then?\n>\n> Anyway, it's pretty moot, I see no reason not to push forward\n> with the proposed fix.\n>\n> \t\t\t\n\n\nThanks, done.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com\n\n\n\n\n\n\n\n\nOn 2024-09-30 Mo 11:11 AM, Tom Lane\n wrote:\n\n\nAndrew Dunstan <andrew@dunslane.net> writes:\n\n\nOn 2024-09-30 Mo 10:08 AM, Tom Lane wrote:\n\n\nNot entirely ... if fairywren had been generating that warning all\nalong, I would have noticed it long ago, because I periodically\nscrape the BF database for compiler warnings. There has to have\nbeen some recent change in the system include files.\n\n\n\n\n\n\nhere's what I see on vendikar:\n\n\n\nOh, wait, I forgot this is only about the v15 branch. I seldom\nsearch for warnings except on HEAD. Still, I'd have expected to\nnotice it while v15 was development tip. Maybe we changed something\nsince then?\n\nAnyway, it's pretty moot, I see no reason not to push forward\nwith the proposed fix.\n\n\t\t\t\n\n\n\nThanks, done.\n\n\ncheers\n\n\nandrew\n\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com", "msg_date": "Mon, 30 Sep 2024 11:46:23 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": true, "msg_subject": "Re: msys inet_pton strangeness" } ]
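For readers skimming the thread above: after the one-line change Thomas suggested and Andrew pushed, the version guard in src/include/port/win32.h ends up looking roughly like the sketch below. It is reconstructed from the quoted diff and the pg_config_os.h check cited earlier, so the comments and exact surrounding text are approximations rather than the committed wording.

/* Sketch of the guard in src/include/port/win32.h after the fix. */
#if !defined(_MSC_VER) || _MSC_VER >= 1900
#define MIN_WINNT 0x0600	/* Vista/2008-era APIs, including inet_pton() */
#else
#define MIN_WINNT 0x0501	/* Windows XP, kept only for very old MSVC */
#endif

/* The check quoted earlier in the thread: raise a too-low _WIN32_WINNT. */
#if defined(_WIN32_WINNT) && _WIN32_WINNT < MIN_WINNT
#undef _WIN32_WINNT
#endif

#ifndef _WIN32_WINNT
#define _WIN32_WINNT MIN_WINNT
#endif

With the msys gcc, _MSC_VER is not defined at all, so MIN_WINNT becomes 0x0600 and <ws2tcpip.h> then exposes the inet_pton() declaration, which is what removes the implicit-declaration warning (an outright error under gcc 14).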
[ { "msg_contents": "Hi,\n\nCurrently, index scans that order by an operator (for instance, `location\n<-> POINT(0, 0)`) and have a filter for the same expression (`location <->\nPOINT(0, 0) < 2`) can end up scanning much more of the index than is\nnecessary.\n\nHere's a complete example:\n\nCREATE TABLE stores (location point);\nINSERT INTO stores SELECT POINT(0, i) FROM generate_series(1, 100000) i;\nCREATE INDEX ON stores USING gist (location);\nEXPLAIN (ANALYZE, COSTS OFF) SELECT * FROM stores WHERE location <->\nPOINT(0, 0) < 2 ORDER BY location <-> POINT(0, 0) LIMIT 10;\n\nOnce the second tuple returned from the index has a distance >= 2, the scan\nshould be able to end (as it's an ascending order scan). Instead, it scans\nthe entire index, filtering out the next 99,998 rows.\n\n Limit (actual time=0.166..32.573 rows=1 loops=1)\n -> Index Only Scan using stores_location_idx on stores (actual\ntime=0.165..32.570 rows=1 loops=1)\n Order By: (location <-> '(0,0)'::point)\n Filter: ((location <-> '(0,0)'::point) < '2'::double precision)\n Rows Removed by Filter: 99999\n\nThis can be especially costly for vector index scans (this was found while\nworking on an upcoming feature for pgvector [1]).\n\n- Andrew\n\n[1] https://github.com/pgvector/pgvector/issues/678\n\nHi,Currently, index scans that order by an operator (for instance, `location <-> POINT(0, 0)`) and have a filter for the same expression (`location <-> POINT(0, 0) < 2`) can end up scanning much more of the index than is necessary.Here's a complete example:CREATE TABLE stores (location point);INSERT INTO stores SELECT POINT(0, i) FROM generate_series(1, 100000) i;CREATE INDEX ON stores USING gist (location);EXPLAIN (ANALYZE, COSTS OFF) SELECT * FROM stores WHERE location <-> POINT(0, 0) < 2 ORDER BY location <-> POINT(0, 0) LIMIT 10;Once the second tuple returned from the index has a distance >= 2, the scan should be able to end (as it's an ascending order scan). Instead, it scans the entire index, filtering out the next 99,998 rows. Limit (actual time=0.166..32.573 rows=1 loops=1)   ->  Index Only Scan using stores_location_idx on stores (actual time=0.165..32.570 rows=1 loops=1)         Order By: (location <-> '(0,0)'::point)         Filter: ((location <-> '(0,0)'::point) < '2'::double precision)         Rows Removed by Filter: 99999This can be especially costly for vector index scans (this was found while working on an upcoming feature for pgvector [1]).- Andrew[1] https://github.com/pgvector/pgvector/issues/678", "msg_date": "Sat, 28 Sep 2024 14:18:15 -0700", "msg_from": "Andrew Kane <andrew@ankane.org>", "msg_from_op": true, "msg_subject": "ORDER BY operator index scans and filtering" } ]
[ { "msg_contents": "Hi Hackers,\n\nI am not sure if this is a bug or I am missing something:\n\nThere is a partitioned table with partitions being a mix of foreign and regular tables.\nI have a function:\n\nreport(param text) RETURNS TABLE(…) STABLE LANGUAGE sql AS\n$$\nSELECT col1, expr1(col2), expr2(col2), sum(col3) FROM tbl GROUP BY col1, expr1(col2), expr2(col2)\n$$\n\nEXPLAIN SELECT * FROM report(‘xyz’);\n\nreturns expected plan pushing down aggregate expression to remote server.\n\nWhen I add STRICT or SET search_path to the function definition, the plan is (as expected) a simple function scan.\nBut - to my surprise - auto explain in the logs shows unexpected plan with all nodes scanning partitions having row estimates = 1\n\nIs it expected behavior?\n\n—\nMichal\n\n", "msg_date": "Sun, 29 Sep 2024 20:49:31 +0200", "msg_from": "=?utf-8?Q?Micha=C5=82_K=C5=82eczek?= <michal@kleczek.org>", "msg_from_op": true, "msg_subject": "SET or STRICT modifiers on function affect planner row estimates" }, { "msg_contents": "Hi Michal,\nIt is difficult to understand the exact problem from your description.\nCan you please provide EXPLAIN outputs showing the expected plan and\nthe unexpected plan; plans on the node where the query is run and\nwhere the partitions are located.\n\nOn Mon, Sep 30, 2024 at 12:19 AM Michał Kłeczek <michal@kleczek.org> wrote:\n>\n> Hi Hackers,\n>\n> I am not sure if this is a bug or I am missing something:\n>\n> There is a partitioned table with partitions being a mix of foreign and regular tables.\n> I have a function:\n>\n> report(param text) RETURNS TABLE(…) STABLE LANGUAGE sql AS\n> $$\n> SELECT col1, expr1(col2), expr2(col2), sum(col3) FROM tbl GROUP BY col1, expr1(col2), expr2(col2)\n> $$\n>\n> EXPLAIN SELECT * FROM report(‘xyz’);\n>\n> returns expected plan pushing down aggregate expression to remote server.\n>\n> When I add STRICT or SET search_path to the function definition, the plan is (as expected) a simple function scan.\n> But - to my surprise - auto explain in the logs shows unexpected plan with all nodes scanning partitions having row estimates = 1\n>\n> Is it expected behavior?\n>\n> —\n> Michal\n>\n\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Mon, 30 Sep 2024 17:44:11 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: SET or STRICT modifiers on function affect planner row estimates" }, { "msg_contents": "Hi,\nThanks for taking a look.\n\n> On 30 Sep 2024, at 14:14, Ashutosh Bapat <ashutosh.bapat.oss@gmail.com> wrote:\n> \n> Hi Michal,\n> It is difficult to understand the exact problem from your description.\n> Can you please provide EXPLAIN outputs showing the expected plan and\n> the unexpected plan; plans on the node where the query is run and\n> where the partitions are located.\n\nThe table structure is as follows:\n\nCREATE TABLE tbl (…) PARTITION BY RANGE year(col02_date)\n\nCREATE TABLE tbl_2015 PARTITION OF tbl FOR VALUES BETWEEN (2023) AND (2024) PARTITION BY HASH (col01_no)\n… subsequent years\n\n\nCREATE TABLE tbl_2021_32_9 PARTITION OF tbl_2021 FOR VALUES WITH (MODULUS 32 REMAINDER 9)\n…\n\nCREATE FOREIGN TABLE tbl_2022_16_9 PARTITION OF tbl_2022 FOR VALUES WITH (MODULUS 32 REMAINDER 9)\n\nAll tables are freshly ANALYZEd.\n\nI have a function:\n\nCREATE FUNCTION report(col01 text, year_from, month_from, year_to, month_to) RETURNS … LANGUAGE sql\n$$\nSELECT\n col01_no, year(col02_date), month(col02_date), sum(col03) FROM tbl WHERE col02_no = col02 AND (col02_date 
conditions) GROUP BY 1, 2, 3\n$$\n\nEXPLAIN (…) SELECT * FROM report(…);\n\ngives\n\nPlan 1 (performs pushdown of aggregates):\n \nAppend (cost=9.33..76322501.41 rows=3 width=58) (actual time=5.051..6.414 rows=21 loops=1)\n InitPlan 1 (returns $0)\n -> Result (cost=0.00..0.01 rows=1 width=4) (actual time=0.001..0.001 rows=1 loops=1)\n Output: '2021-01-01'::date\n InitPlan 2 (returns $1)\n -> Result (cost=0.00..0.01 rows=1 width=4) (actual time=0.001..0.001 rows=1 loops=1)\n Output: '2023-12-31'::date\n -> GroupAggregate (cost=9.31..9.85 rows=1 width=58) (actual time=0.073..0.074 rows=0 loops=1)\n Output: op.col01_no, (year(op.col02_date)), (month(op.col02_date)), sum(op.col03) FILTER (WHERE (op.debit_flag = '0'::bpchar)), sum(op.col03) FILTER (WHERE (op.debit_flag <> '0'::bpchar)), count(1)\n Group Key: (year(op.col02_date)), (month(op.col02_date))\n -> Sort (cost=9.31..9.32 rows=1 width=44) (actual time=0.072..0.073 rows=0 loops=1)\n Output: (year(op.col02_date)), (month(op.col02_date)), op.col01_no, op.col03, op.debit_flag\n Sort Key: (year(op.col02_date)), (month(op.col02_date))\n Sort Method: quicksort Memory: 25kB\n -> Index Only Scan using tbl_2021_32_9_universal_gist_idx_3a2df25af5bc48a on cbt.tbl_2021_32_9 op (cost=0.28..9.30 rows=1 width=44) (actual time=0.063..0.063 rows=0 loops=1)\n Output: year(op.col02_date), month(op.col02_date), op.col01_no, op.col03, op.debit_flag\n Index Cond: ((op.col01_no = '22109020660000000110831697'::text) AND (op.col02_date >= $0) AND (op.col02_date <= $1))\n Filter: ((year(op.col02_date) >= 2021) AND (year(op.col02_date) <= 2023))\n Heap Fetches: 0\n -> Async Foreign Scan (cost=100.02..76322480.36 rows=1 width=58) (actual time=0.753..0.755 rows=11 loops=1)\n Output: op_1.col01_no, (year(op_1.col02_date)), (month(op_1.col02_date)), (sum(op_1.col03) FILTER (WHERE (op_1.debit_flag = '0'::bpchar))), (sum(op_1.col03) FILTER (WHERE (op_1.debit_flag <> '0'::bpchar))), ((count(1))::double precision)\n Relations: Aggregate on (cbt_c61d467d1b5fd1d218d9e6e7dd44a333.tbl_2022_16_9 op_1)\n Remote SQL: SELECT col01_no, cbt.year(col02_date), cbt.month(col02_date), sum(col03) FILTER (WHERE (debit_flag = '0')), sum(col03) FILTER (WHERE (debit_flag <> '0')), count(1) FROM cbt.tbl_2022_16_9 WHERE ((cbt.year(col02_date) >= 2021)) AND ((cbt.year(col02_date) <= 2023)) AND ((col02_date >= $1::date)) AND ((col02_date <= $2::date)) AND ((col01_no = '22109020660000000110831697')) GROUP BY 1, 2, 3\n -> GroupAggregate (cost=10.63..11.17 rows=1 width=58) (actual time=4.266..4.423 rows=10 loops=1)\n Output: op_2.col01_no, (year(op_2.col02_date)), (month(op_2.col02_date)), sum(op_2.col03) FILTER (WHERE (op_2.debit_flag = '0'::bpchar)), sum(op_2.col03) FILTER (WHERE (op_2.debit_flag <> '0'::bpchar)), count(1)\n Group Key: (year(op_2.col02_date)), (month(op_2.col02_date))\n -> Sort (cost=10.63..10.64 rows=1 width=44) (actual time=4.238..4.273 rows=735 loops=1)\n Output: (year(op_2.col02_date)), (month(op_2.col02_date)), op_2.col01_no, op_2.col03, op_2.debit_flag\n Sort Key: (year(op_2.col02_date)), (month(op_2.col02_date))\n Sort Method: quicksort Memory: 82kB\n -> Index Only Scan using tbl_2023_128_9_universal_gist_idx_3a2df25af5bc48a on cbt.tbl_2023_128_9 op_2 (cost=0.54..10.62 rows=1 width=44) (actual time=0.295..4.059 rows=735 loops=1)\n Output: year(op_2.col02_date), month(op_2.col02_date), op_2.col01_no, op_2.col03, op_2.debit_flag\n Index Cond: ((op_2.col01_no = '22109020660000000110831697'::text) AND (op_2.col02_date >= $0) AND (op_2.col02_date <= $1))\n Filter: 
((year(op_2.col02_date) >= 2021) AND (year(op_2.col02_date) <= 2023))\n Heap Fetches: 0\n \n \n\nBUT after I perform\nCREATE OR REPLACE report(…)\nSTRICT TO … AS … (same code)\n\nThe plan (as reported by auto_explain) changes to:\n\nPlan 2 (no pushdown):\n\nGroupAggregate (cost=1781983216.68..1781983324.62 rows=200 width=58) (actual time=16.065..16.432 rows=21 loops=1)\n Output: op.col01_no, (year(op.col02_date)), (month(op.col02_date)), sum(op.col03) FILTER (WHERE (op.debit_flag = '0'::bpchar)), sum(op.col03) FILTER (WHERE (op.debit_flag <> '0'::bpchar)), count(1)\n Group Key: (year(op.col02_date)), (month(op.col02_date))\n InitPlan 1 (returns $0)\n -> Result (cost=0.00..0.01 rows=1 width=4) (actual time=0.002..0.002 rows=1 loops=1)\n Output: make_date($2, $3, 1)\n InitPlan 2 (returns $1)\n -> Result (cost=0.00..0.02 rows=1 width=4) (actual time=0.002..0.003 rows=1 loops=1)\n Output: (((make_date($4, $5, 1) + '1 mon'::interval) - '1 day'::interval))::date\n -> Sort (cost=1781983216.65..1781983217.33 rows=272 width=44) (actual time=16.041..16.117 rows=1564 loops=1)\n Output: (year(op.col02_date)), (month(op.col02_date)), op.col01_no, op.col03, op.debit_flag\n Sort Key: (year(op.col02_date)), (month(op.col02_date))\n Sort Method: quicksort Memory: 171kB\n -> Append (cost=0.55..1781983205.65 rows=272 width=44) (actual time=1.013..15.445 rows=1564 loops=1)\n Subplans Removed: 269\n -> Index Only Scan using accoper_2021_32_9_universal_gist_idx_3a2df25af5bc48a on cbt.accoper_2021_32_9 op_1 (cost=0.28..9.30 rows=1 width=44) (actual time=0.084..0.084 rows=0 loops=1)\n Output: year(op_1.col02_date), month(op_1.col02_date), op_1.col01_no, op_1.col03, op_1.debit_flag\n Index Cond: ((op_1.col01_no = $1) AND (op_1.col02_date >= $0) AND (op_1.col02_date <= $1))\n Filter: ((year(op_1.col02_date) >= $2) AND (year(op_1.col02_date) <= $4))\n Heap Fetches: 0\n -> Async Foreign Scan on cbt_c61d467d1b5fd1d218d9e6e7dd44a333.accoper_2022_16_9 op_2 (cost=100.00..76322479.83 rows=1 width=44) (actual time=0.658..3.870 rows=829 loops=1)\n Output: year(op_2.col02_date), month(op_2.col02_date), op_2.col01_no, op_2.col03, op_2.debit_flag\n Remote SQL: SELECT col03, col02_date, debit_flag, col01_no FROM cbt.accoper_2022_16_9 WHERE ((col02_date >= $1::date)) AND ((col02_date <= $2::date)) AND ((col01_no = $3::text)) AND ((cbt.year(col02_date) >= $4::integer)) AND ((cbt.year(col02_date) <= $5::integer))\n -> Index Only Scan using accoper_2023_128_9_universal_gist_idx_3a2df25af5bc48a on cbt.accoper_2023_128_9 op_3 (cost=0.54..10.62 rows=1 width=44) (actual time=0.361..4.043 rows=735 loops=1)\n Output: year(op_3.col02_date), month(op_3.col02_date), op_3.col01_no, op_3.col03, op_3.debit_flag\n Index Cond: ((op_3.col01_no = $1) AND (op_3.col02_date >= $0) AND (op_3.col02_date <= $1))\n Filter: ((year(op_3.col02_date) >= $2) AND (year(op_3.col02_date) <= $4))\n Heap Fetches: 0\n\nI understand wrong rows estimates ( =1 ) are due to missing statistics on expressions year() and month().\n\nBut why plans are different?\n\n\nBTW. 
I tried to add extended statistics on the above expressions but ANALYZE didn’t seem to update any values (as seen in pg_stat_ext) - ndistinct is NULL for example.\n\nThanks,\n\n--\nMichal\n\n\n\nHi,Thanks for taking a look.On 30 Sep 2024, at 14:14, Ashutosh Bapat <ashutosh.bapat.oss@gmail.com> wrote:Hi Michal,It is difficult to understand the exact problem from your description.Can you please provide EXPLAIN outputs showing the expected plan andthe unexpected plan; plans on the node where the query is run andwhere the partitions are located.The table structure is as follows:CREATE TABLE tbl (…) PARTITION BY RANGE year(col02_date)CREATE TABLE tbl_2015 PARTITION OF tbl FOR VALUES BETWEEN (2023) AND (2024) PARTITION BY HASH (col01_no)… subsequent yearsCREATE TABLE tbl_2021_32_9 PARTITION OF tbl_2021 FOR VALUES WITH (MODULUS 32 REMAINDER 9)…CREATE FOREIGN TABLE tbl_2022_16_9 PARTITION OF tbl_2022 FOR VALUES WITH (MODULUS 32 REMAINDER 9)All tables are freshly ANALYZEd.I have a function:CREATE FUNCTION report(col01 text, year_from, month_from, year_to, month_to) RETURNS … LANGUAGE sql$$SELECT  col01_no, year(col02_date), month(col02_date), sum(col03) FROM tbl WHERE col02_no = col02 AND (col02_date conditions) GROUP BY 1, 2, 3$$EXPLAIN (…) SELECT * FROM report(…);givesPlan 1 (performs pushdown of aggregates): Append  (cost=9.33..76322501.41 rows=3 width=58) (actual time=5.051..6.414 rows=21 loops=1)  InitPlan 1 (returns $0)    ->  Result  (cost=0.00..0.01 rows=1 width=4) (actual time=0.001..0.001 rows=1 loops=1)          Output: '2021-01-01'::date  InitPlan 2 (returns $1)    ->  Result  (cost=0.00..0.01 rows=1 width=4) (actual time=0.001..0.001 rows=1 loops=1)          Output: '2023-12-31'::date  ->  GroupAggregate  (cost=9.31..9.85 rows=1 width=58) (actual time=0.073..0.074 rows=0 loops=1)        Output: op.col01_no, (year(op.col02_date)), (month(op.col02_date)), sum(op.col03) FILTER (WHERE (op.debit_flag = '0'::bpchar)), sum(op.col03) FILTER (WHERE (op.debit_flag <> '0'::bpchar)), count(1)        Group Key: (year(op.col02_date)), (month(op.col02_date))        ->  Sort  (cost=9.31..9.32 rows=1 width=44) (actual time=0.072..0.073 rows=0 loops=1)              Output: (year(op.col02_date)), (month(op.col02_date)), op.col01_no, op.col03, op.debit_flag              Sort Key: (year(op.col02_date)), (month(op.col02_date))              Sort Method: quicksort  Memory: 25kB              ->  Index Only Scan using tbl_2021_32_9_universal_gist_idx_3a2df25af5bc48a on cbt.tbl_2021_32_9 op  (cost=0.28..9.30 rows=1 width=44) (actual time=0.063..0.063 rows=0 loops=1)                    Output: year(op.col02_date), month(op.col02_date), op.col01_no, op.col03, op.debit_flag                    Index Cond: ((op.col01_no = '22109020660000000110831697'::text) AND (op.col02_date >= $0) AND (op.col02_date <= $1))                    Filter: ((year(op.col02_date) >= 2021) AND (year(op.col02_date) <= 2023))                    Heap Fetches: 0  ->  Async Foreign Scan  (cost=100.02..76322480.36 rows=1 width=58) (actual time=0.753..0.755 rows=11 loops=1)        Output: op_1.col01_no, (year(op_1.col02_date)), (month(op_1.col02_date)), (sum(op_1.col03) FILTER (WHERE (op_1.debit_flag = '0'::bpchar))), (sum(op_1.col03) FILTER (WHERE (op_1.debit_flag <> '0'::bpchar))), ((count(1))::double precision)        Relations: Aggregate on (cbt_c61d467d1b5fd1d218d9e6e7dd44a333.tbl_2022_16_9 op_1)        Remote SQL: SELECT col01_no, cbt.year(col02_date), cbt.month(col02_date), sum(col03) FILTER (WHERE (debit_flag = '0')), sum(col03) FILTER 
(WHERE (debit_flag <> '0')), count(1) FROM cbt.tbl_2022_16_9 WHERE ((cbt.year(col02_date) >= 2021)) AND ((cbt.year(col02_date) <= 2023)) AND ((col02_date >= $1::date)) AND ((col02_date <= $2::date)) AND ((col01_no = '22109020660000000110831697')) GROUP BY 1, 2, 3  ->  GroupAggregate  (cost=10.63..11.17 rows=1 width=58) (actual time=4.266..4.423 rows=10 loops=1)        Output: op_2.col01_no, (year(op_2.col02_date)), (month(op_2.col02_date)), sum(op_2.col03) FILTER (WHERE (op_2.debit_flag = '0'::bpchar)), sum(op_2.col03) FILTER (WHERE (op_2.debit_flag <> '0'::bpchar)), count(1)        Group Key: (year(op_2.col02_date)), (month(op_2.col02_date))        ->  Sort  (cost=10.63..10.64 rows=1 width=44) (actual time=4.238..4.273 rows=735 loops=1)              Output: (year(op_2.col02_date)), (month(op_2.col02_date)), op_2.col01_no, op_2.col03, op_2.debit_flag              Sort Key: (year(op_2.col02_date)), (month(op_2.col02_date))              Sort Method: quicksort  Memory: 82kB              ->  Index Only Scan using tbl_2023_128_9_universal_gist_idx_3a2df25af5bc48a on cbt.tbl_2023_128_9 op_2  (cost=0.54..10.62 rows=1 width=44) (actual time=0.295..4.059 rows=735 loops=1)                    Output: year(op_2.col02_date), month(op_2.col02_date), op_2.col01_no, op_2.col03, op_2.debit_flag                    Index Cond: ((op_2.col01_no = '22109020660000000110831697'::text) AND (op_2.col02_date >= $0) AND (op_2.col02_date <= $1))                    Filter: ((year(op_2.col02_date) >= 2021) AND (year(op_2.col02_date) <= 2023))                    Heap Fetches: 0  BUT after I performCREATE OR REPLACE report(…)STRICT TO … AS … (same code)The plan (as reported by auto_explain) changes to:Plan 2 (no pushdown):GroupAggregate  (cost=1781983216.68..1781983324.62 rows=200 width=58) (actual time=16.065..16.432 rows=21 loops=1)  Output: op.col01_no, (year(op.col02_date)), (month(op.col02_date)), sum(op.col03) FILTER (WHERE (op.debit_flag = '0'::bpchar)), sum(op.col03) FILTER (WHERE (op.debit_flag <> '0'::bpchar)), count(1)  Group Key: (year(op.col02_date)), (month(op.col02_date))  InitPlan 1 (returns $0)    ->  Result  (cost=0.00..0.01 rows=1 width=4) (actual time=0.002..0.002 rows=1 loops=1)          Output: make_date($2, $3, 1)  InitPlan 2 (returns $1)    ->  Result  (cost=0.00..0.02 rows=1 width=4) (actual time=0.002..0.003 rows=1 loops=1)          Output: (((make_date($4, $5, 1) + '1 mon'::interval) - '1 day'::interval))::date  ->  Sort  (cost=1781983216.65..1781983217.33 rows=272 width=44) (actual time=16.041..16.117 rows=1564 loops=1)        Output: (year(op.col02_date)), (month(op.col02_date)), op.col01_no, op.col03, op.debit_flag        Sort Key: (year(op.col02_date)), (month(op.col02_date))        Sort Method: quicksort  Memory: 171kB        ->  Append  (cost=0.55..1781983205.65 rows=272 width=44) (actual time=1.013..15.445 rows=1564 loops=1)              Subplans Removed: 269              ->  Index Only Scan using accoper_2021_32_9_universal_gist_idx_3a2df25af5bc48a on cbt.accoper_2021_32_9 op_1  (cost=0.28..9.30 rows=1 width=44) (actual time=0.084..0.084 rows=0 loops=1)                    Output: year(op_1.col02_date), month(op_1.col02_date), op_1.col01_no, op_1.col03, op_1.debit_flag                    Index Cond: ((op_1.col01_no = $1) AND (op_1.col02_date >= $0) AND (op_1.col02_date <= $1))                    Filter: ((year(op_1.col02_date) >= $2) AND (year(op_1.col02_date) <= $4))                    Heap Fetches: 0              ->  Async Foreign Scan on 
cbt_c61d467d1b5fd1d218d9e6e7dd44a333.accoper_2022_16_9 op_2  (cost=100.00..76322479.83 rows=1 width=44) (actual time=0.658..3.870 rows=829 loops=1)                    Output: year(op_2.col02_date), month(op_2.col02_date), op_2.col01_no, op_2.col03, op_2.debit_flag                    Remote SQL: SELECT col03, col02_date, debit_flag, col01_no FROM cbt.accoper_2022_16_9 WHERE ((col02_date >= $1::date)) AND ((col02_date <= $2::date)) AND ((col01_no = $3::text)) AND ((cbt.year(col02_date) >= $4::integer)) AND ((cbt.year(col02_date) <= $5::integer))              ->  Index Only Scan using accoper_2023_128_9_universal_gist_idx_3a2df25af5bc48a on cbt.accoper_2023_128_9 op_3  (cost=0.54..10.62 rows=1 width=44) (actual time=0.361..4.043 rows=735 loops=1)                    Output: year(op_3.col02_date), month(op_3.col02_date), op_3.col01_no, op_3.col03, op_3.debit_flag                    Index Cond: ((op_3.col01_no = $1) AND (op_3.col02_date >= $0) AND (op_3.col02_date <= $1))                    Filter: ((year(op_3.col02_date) >= $2) AND (year(op_3.col02_date) <= $4))                    Heap Fetches: 0I understand wrong rows estimates ( =1 ) are due to missing statistics on expressions year() and month().But why plans are different?BTW. I tried to add extended statistics on the above expressions but ANALYZE didn’t seem to update any values (as seen in pg_stat_ext) - ndistinct is NULL for example.Thanks,--Michal", "msg_date": "Mon, 30 Sep 2024 20:36:26 +0200", "msg_from": "=?utf-8?Q?Micha=C5=82_K=C5=82eczek?= <michal@kleczek.org>", "msg_from_op": true, "msg_subject": "Re: SET or STRICT modifiers on function affect planner row estimates" }, { "msg_contents": "=?utf-8?Q?Micha=C5=82_K=C5=82eczek?= <michal@kleczek.org> writes:\n>> On 30 Sep 2024, at 14:14, Ashutosh Bapat <ashutosh.bapat.oss@gmail.com> wrote:\n>> It is difficult to understand the exact problem from your description.\n>> Can you please provide EXPLAIN outputs showing the expected plan and\n>> the unexpected plan; plans on the node where the query is run and\n>> where the partitions are located.\n\n> The table structure is as follows:\n\n> CREATE TABLE tbl (…) PARTITION BY RANGE year(col02_date)\n\nYou're still expecting people to magically intuit what all those\n\"...\"s are. I could spend many minutes trying to reconstruct\na runnable example from these fragments, and if it didn't behave\nas you say, it'd be wasted effort because I didn't guess right\nabout some un-mentioned detail. Please provide a *self-contained*\nexample if you want someone to poke into this in any detail.\nYou have not mentioned your PG version, either.\n\nMy first guess would be that adding STRICT or adding a SET clause\nprevents function inlining, because it does. However, your Plan 2\ndoesn't seem to involve a FunctionScan node, so either these plans\naren't really what you say or there's something else going on.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 30 Sep 2024 15:24:26 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: SET or STRICT modifiers on function affect planner row estimates" } ]
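Tom's inlining point can be reproduced with a much smaller experiment than the partitioned setup in the thread above. The sketch below uses invented names (t, f_inline, f_opaque) purely for illustration: the first function is a candidate for being expanded into the calling query like a view, while STRICT and the SET clause keep the second one as an opaque Function Scan whose inner query is planned separately (that separately planned query is what auto_explain then logs).

CREATE TABLE t (id int, val int);

-- Candidate for inlining: plain SQL table function, no STRICT, no SET clause.
CREATE FUNCTION f_inline(p int) RETURNS TABLE (n bigint)
LANGUAGE sql STABLE
AS $$ SELECT count(*) FROM t WHERE id = p $$;

-- Not inlined: STRICT and the SET clause both block inlining.
CREATE FUNCTION f_opaque(p int) RETURNS TABLE (n bigint)
LANGUAGE sql STABLE STRICT SET search_path = public
AS $$ SELECT count(*) FROM t WHERE id = p $$;

EXPLAIN SELECT * FROM f_inline(1);   -- expanded plan over t
EXPLAIN SELECT * FROM f_opaque(1);   -- just a Function Scan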
[ { "msg_contents": "Hi,\n\nif a cluster contains invalid databases that we cannot connect to \nanymore, pg_upgrade would currently fail when trying to connect to the \nfirst encountered invalid database with\n\n\nPerforming Consistency Checks\n-----------------------------\nChecking cluster versions ok\n\nconnection failure: connection to server on socket \"/tmp/.s.PGSQL.50432\" \nfailed: FATAL: cannot connect to invalid database \"foo\"\nHINT: Use DROP DATABASE to drop invalid databases.\n\nFailure, exiting\n\n\nIf there is more than one invalid database, we need to run pg_upgrade \nmore than once (unless the user inspects pg_database).\n\nI attached two small patches for PG 17 and PG 18 (can be easily \nbackported to all previous versions upon request). Instead of just \nfailing to connect with an error, we collect all invalid databases in a \nreport file invalid_databases.txt:\n\n\nPerforming Consistency Checks\n-----------------------------\nChecking cluster versions ok\nChecking for invalid databases fatal\n\nYour installation contains invalid databases as a consequence of\ninterrupted DROP DATABASE. They are now marked as corrupted databases\nthat cannot be connected to anymore. Consider removing them using\n DROP DATABASE ...;\nA list of invalid databases is in the file:\n\n/usr/local/pgsql/data/18/pg_upgrade_output.d/20240929T200559.707/invalid_databases.txt\nFailure, exiting\n\n\nAny thoughts on the proposed patches?", "msg_date": "Sun, 29 Sep 2024 20:45:50 -0400", "msg_from": "Thomas Krennwallner <tk@postsubmeta.net>", "msg_from_op": true, "msg_subject": "pg_upgrade check for invalid databases" }, { "msg_contents": "On Sun, Sep 29, 2024 at 08:45:50PM -0400, Thomas Krennwallner wrote:\n> if a cluster contains invalid databases that we cannot connect to anymore,\n> pg_upgrade would currently fail when trying to connect to the first\n> encountered invalid database with\n> \n> [...]\n> \n> If there is more than one invalid database, we need to run pg_upgrade more\n> than once (unless the user inspects pg_database).\n> \n> I attached two small patches for PG 17 and PG 18 (can be easily backported\n> to all previous versions upon request). Instead of just failing to connect\n> with an error, we collect all invalid databases in a report file\n> invalid_databases.txt:\n\nShould we have pg_upgrade skip invalid databases? If the only valid action\nis to drop them, IMHO it seems unnecessary to require users to manually\ndrop them before retrying pg_upgrade.\n\n-- \nnathan\n\n\n", "msg_date": "Mon, 30 Sep 2024 10:12:41 -0400", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade check for invalid databases" }, { "msg_contents": "Nathan Bossart <nathandbossart@gmail.com> writes:\n> Should we have pg_upgrade skip invalid databases? If the only valid action\n> is to drop them, IMHO it seems unnecessary to require users to manually\n> drop them before retrying pg_upgrade.\n\nI was thinking the same. But I wonder if there is any chance of\nlosing data that could be recoverable. It feels like this should\nnot be a default behavior.\n\nTBH I'm not finding anything very much wrong with the current\nbehavior... 
this has to be a rare situation, do we need to add\ndebatable behavior to make it easier?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 30 Sep 2024 10:55:39 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade check for invalid databases" }, { "msg_contents": "> On 30 Sep 2024, at 16:55, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> TBH I'm not finding anything very much wrong with the current\n> behavior... this has to be a rare situation, do we need to add\n> debatable behavior to make it easier?\n\nOne argument would be to make the checks consistent, pg_upgrade generally tries\nto report all the offending entries to help the user when fixing the source\ndatabase. Not sure if it's a strong enough argument for carrying code which\nreally shouldn't see much use though.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Mon, 30 Sep 2024 23:29:35 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade check for invalid databases" }, { "msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n>> On 30 Sep 2024, at 16:55, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> TBH I'm not finding anything very much wrong with the current\n>> behavior... this has to be a rare situation, do we need to add\n>> debatable behavior to make it easier?\n\n> One argument would be to make the checks consistent, pg_upgrade generally tries\n> to report all the offending entries to help the user when fixing the source\n> database. Not sure if it's a strong enough argument for carrying code which\n> really shouldn't see much use though.\n\nOK, but the consistency argument would be to just report and fail.\nI don't think there's a precedent in other pg_upgrade checks for\ntrying to fix problems automatically.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 30 Sep 2024 18:20:41 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade check for invalid databases" } ]
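Since the thread above mentions inspecting pg_database by hand, a sketch of that inspection: on branches where an interrupted DROP DATABASE marks the victim as invalid, the marker is, to the best of my understanding (verify against your major version), datconnlimit = -2, so every affected database can be listed and dropped in one pass before re-running pg_upgrade.

-- List all databases flagged as invalid on the old cluster.
SELECT datname
FROM pg_database
WHERE datconnlimit = -2;

-- Then, for each reported name:
-- DROP DATABASE "name_reported_above";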
[ { "msg_contents": "*Hi, hackers*\n\n*When calculating the path, *forknum* is hardcoded as *MAIN_FORKNUM*:*\n/* Compute the path. */\np = relpathperm(ftag->rnode, MAIN_FORKNUM);\n\n\n*But since the *ftag* structure already contains *forknum*:*\ntypedef struct FileTag\n{\nint16 handler; /* SyncRequestHandler value, saving space */\nint16 forknum; /* ForkNumber, saving space */\nRelFileNode rnode;\nuint32 segno;\n} FileTag;\n\n\n*Wouldn’t it be more flexible to use the value from the *ftag* structure\ndirectly?*\n\n\n*Best regards, *\n\n*Pixian Shi*\n\nHi, hackersWhen calculating the path, forknum is hardcoded as MAIN_FORKNUM:/* Compute the path. */ p = relpathperm(ftag->rnode, MAIN_FORKNUM);But since the ftag structure already contains forknum:typedef struct FileTag{ int16 handler; /* SyncRequestHandler value, saving space */ int16 forknum; /* ForkNumber, saving space */ RelFileNode rnode; uint32 segno;} FileTag;Wouldn’t it be more flexible to use the value from the ftag structure directly?Best regards, Pixian Shi", "msg_date": "Mon, 30 Sep 2024 10:43:17 +0800", "msg_from": "px shi <spxlyy123@gmail.com>", "msg_from_op": true, "msg_subject": "a litter question about mdunlinkfiletag function" } ]
[ { "msg_contents": "Hi hackers,\nI found probably something to fix in pg_walsummary.\n\npg_walsummary specifies “f:iqw:” as the third argument of getopt_long().\n\n> /* process command-line options */\n> while ((c = getopt_long(argc, argv, \"f:iqw:\",\n> \t\t\t\tlong_options, &optindex)) != -1)\n\nHowever, only i and q are valid options.\n\n> \tswitch (c)\n> \t{\n> \t\tcase 'i':\n> \t\t\tbreak;\n> \t\tcase 'q':\n> \t\t\topt.quiet = true;\n> \t\t\tbreak;\n> \t\tdefault:\n> \t\t\t/* getopt_long already emitted a complaint */\n> \t\t\tpg_log_error_hint(\"Try \\\"%s --help\\\" for more information.\", \n> progname);\n> \t\t\texit(1);\n> \t}\n\nTherefore, shouldn't “f:” and “w:” be removed?\n\nBest regards,\nYusuke Sugie\n\n\n", "msg_date": "Mon, 30 Sep 2024 15:02:03 +0900", "msg_from": "btsugieyuusuke <btsugieyuusuke@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "pg_walsummary, Character-not-present-in-option" }, { "msg_contents": "I forgot to attach the patch file, so I'm attaching it in reply.\n\n\n2024-09-30 15:02 に btsugieyuusuke さんは書きました:\n> Hi hackers,\n> I found probably something to fix in pg_walsummary.\n> \n> pg_walsummary specifies “f:iqw:” as the third argument of \n> getopt_long().\n> \n>> /* process command-line options */\n>> while ((c = getopt_long(argc, argv, \"f:iqw:\",\n>> \t\t\t\tlong_options, &optindex)) != -1)\n> \n> However, only i and q are valid options.\n> \n>> \tswitch (c)\n>> \t{\n>> \t\tcase 'i':\n>> \t\t\tbreak;\n>> \t\tcase 'q':\n>> \t\t\topt.quiet = true;\n>> \t\t\tbreak;\n>> \t\tdefault:\n>> \t\t\t/* getopt_long already emitted a complaint */\n>> \t\t\tpg_log_error_hint(\"Try \\\"%s --help\\\" for more information.\", \n>> progname);\n>> \t\t\texit(1);\n>> \t}\n> \n> Therefore, shouldn't “f:” and “w:” be removed?\n> \n> Best regards,\n> Yusuke Sugie", "msg_date": "Mon, 30 Sep 2024 19:40:30 +0900", "msg_from": "btsugieyuusuke <btsugieyuusuke@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: pg_walsummary, Character-not-present-in-option" }, { "msg_contents": "btsugieyuusuke <btsugieyuusuke@oss.nttdata.com> writes:\n>> Therefore, shouldn't “f:” and “w:” be removed?\n\nLooks like that to me too. Pushed.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 30 Sep 2024 12:08:11 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_walsummary, Character-not-present-in-option" } ]
[ { "msg_contents": "I think there's an unnecessary underscore in config.sgml.\nAttached patch fixes it.\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS K.K.\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp", "msg_date": "Mon, 30 Sep 2024 15:34:04 +0900 (JST)", "msg_from": "Tatsuo Ishii <ishii@postgresql.org>", "msg_from_op": true, "msg_subject": "Doc: typo in config.sgml" }, { "msg_contents": "On Mon, 30 Sep 2024 15:34:04 +0900 (JST)\nTatsuo Ishii <ishii@postgresql.org> wrote:\n\n> I think there's an unnecessary underscore in config.sgml.\n> Attached patch fixes it.\n\nI could not apply the patch with an error.\n\n error: patch failed: doc/src/sgml/config.sgml:9380\n error: doc/src/sgml/config.sgml: patch does not apply\n\nI found your patch contains an odd character (ASCII Code 240?)\nby performing `od -c` command on the file. See the attached file.\n\nRegards,\nYugo Nagata\n\n> \n> Best reagards,\n> --\n> Tatsuo Ishii\n> SRA OSS K.K.\n> English: http://www.sraoss.co.jp/index_en/\n> Japanese:http://www.sraoss.co.jp\n\n\n-- \nYugo Nagata <nagata@sraoss.co.jp>", "msg_date": "Mon, 30 Sep 2024 16:47:28 +0900", "msg_from": "Yugo Nagata <nagata@sraoss.co.jp>", "msg_from_op": false, "msg_subject": "Re: Doc: typo in config.sgml" }, { "msg_contents": ">> I think there's an unnecessary underscore in config.sgml.\n>> Attached patch fixes it.\n> \n> I could not apply the patch with an error.\n> \n> error: patch failed: doc/src/sgml/config.sgml:9380\n> error: doc/src/sgml/config.sgml: patch does not apply\n\nStrange. I have no problem applying the patch here.\n\n> I found your patch contains an odd character (ASCII Code 240?)\n> by performing `od -c` command on the file. See the attached file.\n\nYes, 240 in octal (== 0xc2) is in the patch but it's because current\nconfig.sgml includes the character. You can check it by looking at\nline 9383 of config.sgml.\n\nI think it was introduced by 28e858c0f95.\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS K.K.\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp\n\n\n", "msg_date": "Mon, 30 Sep 2024 17:23:24 +0900 (JST)", "msg_from": "Tatsuo Ishii <ishii@postgresql.org>", "msg_from_op": true, "msg_subject": "Re: Doc: typo in config.sgml" }, { "msg_contents": "On Mon, 30 Sep 2024 17:23:24 +0900 (JST)\nTatsuo Ishii <ishii@postgresql.org> wrote:\n\n> >> I think there's an unnecessary underscore in config.sgml.\n> >> Attached patch fixes it.\n> > \n> > I could not apply the patch with an error.\n> > \n> > error: patch failed: doc/src/sgml/config.sgml:9380\n> > error: doc/src/sgml/config.sgml: patch does not apply\n> \n> Strange. I have no problem applying the patch here.\n> \n> > I found your patch contains an odd character (ASCII Code 240?)\n> > by performing `od -c` command on the file. See the attached file.\n> \n> Yes, 240 in octal (== 0xc2) is in the patch but it's because current\n> config.sgml includes the character. You can check it by looking at\n> line 9383 of config.sgml.\n\nYes, you are right, I can find the 0xc2 char in config.sgml using od -c,\nalthough I still could not apply the patch. \n\nI think this is non-breaking space of (C2A0) of utf-8. 
I guess my\nterminal normally regards this as a space, so applying patch fails.\n\nI found it also in line 85 of ref/drop_extension.sgml.\n\n\n> \n> I think it was introduced by 28e858c0f95.\n> \n> Best reagards,\n> --\n> Tatsuo Ishii\n> SRA OSS K.K.\n> English: http://www.sraoss.co.jp/index_en/\n> Japanese:http://www.sraoss.co.jp\n\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n", "msg_date": "Mon, 30 Sep 2024 18:03:30 +0900", "msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>", "msg_from_op": false, "msg_subject": "Re: Doc: typo in config.sgml" }, { "msg_contents": ">>> I think there's an unnecessary underscore in config.sgml.\n\nI was wrong. The particular byte sequences just looked an underscore\non my editor but the byte sequence is actually 0xc2a0, which must be a\n\"non breaking space\" encoded in UTF-8. I guess someone mistakenly\ninsert a non breaking space while editing config.sgml.\n\nHowever the mistake does not affect the patch.\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS K.K.\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp\n\n\n", "msg_date": "Mon, 30 Sep 2024 18:03:44 +0900 (JST)", "msg_from": "Tatsuo Ishii <ishii@postgresql.org>", "msg_from_op": true, "msg_subject": "Re: Doc: typo in config.sgml" }, { "msg_contents": "On Mon, 30 Sep 2024 18:03:44 +0900 (JST)\nTatsuo Ishii <ishii@postgresql.org> wrote:\n\n> >>> I think there's an unnecessary underscore in config.sgml.\n> \n> I was wrong. The particular byte sequences just looked an underscore\n> on my editor but the byte sequence is actually 0xc2a0, which must be a\n> \"non breaking space\" encoded in UTF-8. I guess someone mistakenly\n> insert a non breaking space while editing config.sgml.\n> \n> However the mistake does not affect the patch.\n\nIt looks like we've crisscrossed our mail.\nAnyway, I agree with removing non breaking spaces, as well as\none found in line 85 of ref/drop_extension.sgml.\n\nRegards,\nYugo Nagata\n\n> \n> Best reagards,\n> --\n> Tatsuo Ishii\n> SRA OSS K.K.\n> English: http://www.sraoss.co.jp/index_en/\n> Japanese:http://www.sraoss.co.jp\n\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n", "msg_date": "Mon, 30 Sep 2024 18:22:16 +0900", "msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>", "msg_from_op": false, "msg_subject": "Re: Doc: typo in config.sgml" }, { "msg_contents": "> On 30 Sep 2024, at 11:03, Tatsuo Ishii <ishii@postgresql.org> wrote:\n> \n>>>> I think there's an unnecessary underscore in config.sgml.\n> \n> I was wrong. The particular byte sequences just looked an underscore\n> on my editor but the byte sequence is actually 0xc2a0, which must be a\n> \"non breaking space\" encoded in UTF-8. I guess someone mistakenly\n> insert a non breaking space while editing config.sgml.\n\nI wonder if it would be worth to add a check for this like we have to tabs?\nThe attached adds a rule to \"make -C doc/src/sgml check\" for trapping nbsp\n(doing so made me realize we don't have an equivalent meson target).\n\n--\nDaniel Gustafsson", "msg_date": "Mon, 30 Sep 2024 11:59:48 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Doc: typo in config.sgml" }, { "msg_contents": "On Mon, 30 Sep 2024 11:59:48 +0200\nDaniel Gustafsson <daniel@yesql.se> wrote:\n\n> > On 30 Sep 2024, at 11:03, Tatsuo Ishii <ishii@postgresql.org> wrote:\n> > \n> >>>> I think there's an unnecessary underscore in config.sgml.\n> > \n> > I was wrong. 
The particular byte sequences just looked an underscore\n> > on my editor but the byte sequence is actually 0xc2a0, which must be a\n> > \"non breaking space\" encoded in UTF-8. I guess someone mistakenly\n> > insert a non breaking space while editing config.sgml.\n> \n> I wonder if it would be worth to add a check for this like we have to tabs?\n> The attached adds a rule to \"make -C doc/src/sgml check\" for trapping nbsp\n> (doing so made me realize we don't have an equivalent meson target).\n\nYour patch couldn't detect 0xA0 in config.sgml in my machine, but it works\nwhen I use `grep -P \"[\\xA0]\"` instead of `grep -e \"\\xA0\"`.\n\nHowever, it also detects the following line in charset.sgml.\n(https://www.postgresql.org/docs/current/collation.html)\n\n For example, locale und-u-kb sorts 'àe' before 'aé'.\n\nThis is not non-breaking space, so should not be detected as an error.\n\nRegards,\nYugo Nagata\n\n> --\n> Daniel Gustafsson\n> \n\n\n-- \nYugo Nagata <nagata@sraoss.co.jp>\n\n\n", "msg_date": "Mon, 30 Sep 2024 19:34:02 +0900", "msg_from": "Yugo Nagata <nagata@sraoss.co.jp>", "msg_from_op": false, "msg_subject": "Re: Doc: typo in config.sgml" }, { "msg_contents": ">> I wonder if it would be worth to add a check for this like we have to tabs?\n\n+1.\n\n>> The attached adds a rule to \"make -C doc/src/sgml check\" for trapping nbsp\n>> (doing so made me realize we don't have an equivalent meson target).\n> \n> Your patch couldn't detect 0xA0 in config.sgml in my machine, but it works\n> when I use `grep -P \"[\\xA0]\"` instead of `grep -e \"\\xA0\"`.\n> \n> However, it also detects the following line in charset.sgml.\n> (https://www.postgresql.org/docs/current/collation.html)\n> \n> For example, locale und-u-kb sorts 'àe' before 'aé'.\n> \n> This is not non-breaking space, so should not be detected as an error.\n\nThat's because non-breaking space (nbsp) is not encoded as 0xa0 in\nUTF-8. nbsp in UTF-8 is \"0xc2 0xa0\" (2 bytes) (A 0xa0 is a nbsp's code\npoint in Unicode. i.e. U+00A0).\nSo grep -P \"[\\xC2\\xA0]\" should work to detect nbsp.\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS K.K.\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp\n\n\n", "msg_date": "Mon, 30 Sep 2024 20:07:31 +0900 (JST)", "msg_from": "Tatsuo Ishii <ishii@postgresql.org>", "msg_from_op": true, "msg_subject": "Re: Doc: typo in config.sgml" }, { "msg_contents": "On Mon, 30 Sep 2024 20:07:31 +0900 (JST)\nTatsuo Ishii <ishii@postgresql.org> wrote:\n\n> >> I wonder if it would be worth to add a check for this like we have to tabs?\n> \n> +1.\n> \n> >> The attached adds a rule to \"make -C doc/src/sgml check\" for trapping nbsp\n> >> (doing so made me realize we don't have an equivalent meson target).\n> > \n> > Your patch couldn't detect 0xA0 in config.sgml in my machine, but it works\n> > when I use `grep -P \"[\\xA0]\"` instead of `grep -e \"\\xA0\"`.\n> > \n> > However, it also detects the following line in charset.sgml.\n> > (https://www.postgresql.org/docs/current/collation.html)\n> > \n> > For example, locale und-u-kb sorts 'àe' before 'aé'.\n> > \n> > This is not non-breaking space, so should not be detected as an error.\n> \n> That's because non-breaking space (nbsp) is not encoded as 0xa0 in\n> UTF-8. nbsp in UTF-8 is \"0xc2 0xa0\" (2 bytes) (A 0xa0 is a nbsp's code\n> point in Unicode. i.e. U+00A0).\n> So grep -P \"[\\xC2\\xA0]\" should work to detect nbsp.\n\n`LC_ALL=C grep -P \"\\xC2\\xA0\"` works for my environment. 
\n([ and ] were not necessary.)\n\nWhen LC_ALL is null, `grep -P \"\\xA0\"` could not detect any characters in charset.sgml,\nbut I think it is better to specify both LC_ALL=C and \"\\xC2\\xA0\" for making sure detecting\nnbsp.\n\nOne problem is that -P option can be used in only GNU grep, and grep in mac doesn't support it.\n\nOn bash, we can also use `grep $'\\xc2\\xa0'`, but I am not sure we can assume the shell is bash.\n\nMaybe, better way is use perl itself rather than grep as following.\n\n `perl -ne '/\\xC2\\xA0/ and print' `\n\nI attached a patch fixed in this way.\n\nRegards,\nYugo Nagata\n\n> \n> Best reagards,\n> --\n> Tatsuo Ishii\n> SRA OSS K.K.\n> English: http://www.sraoss.co.jp/index_en/\n> Japanese:http://www.sraoss.co.jp\n\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>", "msg_date": "Mon, 30 Sep 2024 23:18:39 +0900", "msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>", "msg_from_op": false, "msg_subject": "Re: Doc: typo in config.sgml" } ]
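For anyone who wants to run the same check by hand, outside the Makefile rule being discussed, a standalone form of the perl-based detection from the last message could look like the pipeline below. The file list and the file:line output format are choices made here for illustration, not part of the posted patch.

# Report every U+00A0 (non-breaking space, bytes 0xC2 0xA0 in UTF-8) with file and line.
find doc/src/sgml -name '*.sgml' -print0 |
  xargs -0 perl -ne 'print "$ARGV:$.: $_" if /\xC2\xA0/; close ARGV if eof'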
[ { "msg_contents": "Greetings\n\nI understand the complexity of implementing a pseudo data type when\npassing it over parameters, or using it when creating an object.\nvide: git grep pseudo | egrep -v -e \"po|out|sgml|sql\" | more\n\nMy initial problem was saving history (for backup to be used during\ntroubleshoot analysis) of plan changing and so on.. of the pg_statistic\ntable/object.\n\nI was surprised - in a good way - to observe so much effort when handling\nit for that specific purpose. I started to wonder if/when I want to create\nan object of other *pseudatatypes *or pass it through a function parameter\nthat the same level of effort during code implementation would be the same.\n\nI understand this would be much more a code architecture discretion rather\nthan a punctual question.\n\nI have posted in pgsql-admin a question which is \"simple problem\" when\ncreating a table using anyarray and indeed the problem is simple - the\nsolution might not be.\n\nWhat folks, more experienced in this subject, would recommend as a starting\npoint to achieve that objective?\n\nKind regards,\n\nBazana Schmidt, Vinícius\n\nPS.: apologize in advance for the HTML email.\n\nGreetingsI understand the complexity of implementing a pseudo data type when passing it over parameters, or using it when creating an object.vide: git grep pseudo | egrep -v -e \"po|out|sgml|sql\" | moreMy initial problem was saving history (for backup to be used during troubleshoot analysis) of plan changing and so on..  of the pg_statistic table/object.I was surprised - in a good way - to observe so much effort when handling it for that specific purpose. I started to wonder if/when I want to create an object of other pseudatatypes or pass it through a function parameter that the same level of effort during code implementation would be the same.I understand this would be much more a code architecture discretion rather than a punctual question. I have posted in pgsql-admin a question which is \"simple problem\" when creating a table using anyarray and indeed the problem is simple - the solution might not be.What folks, more experienced in this subject, would recommend as a starting point to achieve that objective? Kind regards,Bazana Schmidt, ViníciusPS.: apologize in advance for the HTML email.", "msg_date": "Mon, 30 Sep 2024 09:01:37 +0100", "msg_from": "=?UTF-8?B?Vmluw61jaXVzIEFicmFow6Nv?= <vinnix.bsd@gmail.com>", "msg_from_op": true, "msg_subject": "Possibilities on code change to implement pseudodatatypes" }, { "msg_contents": "On Mon, Sep 30, 2024 at 9:01 AM Vinícius Abrahão <vinnix.bsd@gmail.com>\nwrote:\n\n> Greetings\n>\n> I understand the complexity of implementing a pseudo data type when\n> passing it over parameters, or using it when creating an object.\n> vide: git grep pseudo | egrep -v -e \"po|out|sgml|sql\" | more\n>\n> My initial problem was saving history (for backup to be used during\n> troubleshoot analysis) of plan changing and so on.. of the pg_statistic\n> table/object.\n>\n> I was surprised - in a good way - to observe so much effort when handling\n> it for that specific purpose. 
I started to wonder if/when I want to create\n> an object of other *pseudatatypes *or pass it through a function\n> parameter that the same level of effort during code implementation would be\n> the same.\n>\n> I understand this would be much more a code architecture discretion rather\n> than a punctual question.\n>\n> I have posted in pgsql-admin a question which is \"simple problem\" when\n> creating a table using anyarray and indeed the problem is simple - the\n> solution might not be.\n>\n> What folks, more experienced in this subject, would recommend as a\n> starting point to achieve that objective?\n>\n> Kind regards,\n>\n> Bazana Schmidt, Vinícius\n>\n> PS.: apologize in advance for the HTML email.\n>\n\nComplementing -\n\nUnder this optics below:\n\n[vinnix@vesuvio postgres]$ git grep CheckAttributeType\nsrc/backend/catalog/heap.c: * flags controls which datatypes are\nallowed, cf CheckAttributeType.\nsrc/backend/catalog/heap.c:\nCheckAttributeType(NameStr(TupleDescAttr(tupdesc, i)->attname),\nsrc/backend/catalog/heap.c: * CheckAttributeType\nsrc/backend/catalog/heap.c:CheckAttributeType(const char *attname,\nsrc/backend/catalog/heap.c: CheckAttributeType(attname,\ngetBaseType(atttypid), attcollation,\nsrc/backend/catalog/heap.c:\nCheckAttributeType(NameStr(attr->attname),\nsrc/backend/catalog/heap.c: CheckAttributeType(attname,\nget_range_subtype(atttypid),\nsrc/backend/catalog/heap.c: CheckAttributeType(attname,\natt_typelem, attcollation,\nsrc/backend/catalog/index.c:\n CheckAttributeType(NameStr(to->attname),\nsrc/backend/commands/tablecmds.c:\nCheckAttributeType(NameStr(attribute->attname), attribute->atttypid,\nattribute->attcollation,\nsrc/backend/commands/tablecmds.c: CheckAttributeType(colName,\ntargettype, targetcollid,\nsrc/backend/commands/tablecmds.c:\nCheckAttributeType(partattname,\nsrc/include/catalog/heap.h:/* flag bits for\nCheckAttributeType/CheckAttributeNamesTypes */\nsrc/include/catalog/heap.h:extern void CheckAttributeType(const char\n*attname,\n\nand this line:\nhttps://github.com/postgres/postgres/blob/master/src/backend/catalog/heap.c#L562\n\n if (att_typtype == TYPTYPE_PSEUDO)\n {\n /*\n * We disallow pseudo-type columns, with the exception of ANYARRAY,\" <<==\n\n\nWhat WE are missing? WHY? How could we make this state true for creating\ntable commands?\nI will try to find time to keep researching about it - if you folks have\nany insights please let me know.\n\nOn Mon, Sep 30, 2024 at 9:01 AM Vinícius Abrahão <vinnix.bsd@gmail.com> wrote:GreetingsI understand the complexity of implementing a pseudo data type when passing it over parameters, or using it when creating an object.vide: git grep pseudo | egrep -v -e \"po|out|sgml|sql\" | moreMy initial problem was saving history (for backup to be used during troubleshoot analysis) of plan changing and so on..  of the pg_statistic table/object.I was surprised - in a good way - to observe so much effort when handling it for that specific purpose. I started to wonder if/when I want to create an object of other pseudatatypes or pass it through a function parameter that the same level of effort during code implementation would be the same.I understand this would be much more a code architecture discretion rather than a punctual question. I have posted in pgsql-admin a question which is \"simple problem\" when creating a table using anyarray and indeed the problem is simple - the solution might not be.What folks, more experienced in this subject, would recommend as a starting point to achieve that objective? 
Kind regards,Bazana Schmidt, ViníciusPS.: apologize in advance for the HTML email.\nComplementing - Under this optics below:[vinnix@vesuvio postgres]$ git grep CheckAttributeTypesrc/backend/catalog/heap.c: *           flags controls which datatypes are allowed, cf CheckAttributeType.src/backend/catalog/heap.c:             CheckAttributeType(NameStr(TupleDescAttr(tupdesc, i)->attname),src/backend/catalog/heap.c: *           CheckAttributeTypesrc/backend/catalog/heap.c:CheckAttributeType(const char *attname,src/backend/catalog/heap.c:             CheckAttributeType(attname, getBaseType(atttypid), attcollation,src/backend/catalog/heap.c:                     CheckAttributeType(NameStr(attr->attname),src/backend/catalog/heap.c:             CheckAttributeType(attname, get_range_subtype(atttypid),src/backend/catalog/heap.c:             CheckAttributeType(attname, att_typelem, attcollation,src/backend/catalog/index.c:                    CheckAttributeType(NameStr(to->attname),src/backend/commands/tablecmds.c:       CheckAttributeType(NameStr(attribute->attname), attribute->atttypid, attribute->attcollation,src/backend/commands/tablecmds.c:       CheckAttributeType(colName, targettype, targetcollid,src/backend/commands/tablecmds.c:                       CheckAttributeType(partattname,src/include/catalog/heap.h:/* flag bits for CheckAttributeType/CheckAttributeNamesTypes */src/include/catalog/heap.h:extern void CheckAttributeType(const char *attname,and this line:https://github.com/postgres/postgres/blob/master/src/backend/catalog/heap.c#L562 \n  if (att_typtype == TYPTYPE_PSEUDO)\n  {\n  /*\n  * We disallow pseudo-type columns, with the exception of ANYARRAY,\"  <<==\nWhat WE are missing? WHY? How could we make this state true for creating table commands?I will try to find time to keep researching about it - if you folks have any insights please let me know.", "msg_date": "Mon, 30 Sep 2024 10:19:04 +0100", "msg_from": "=?UTF-8?B?Vmluw61jaXVzIEFicmFow6Nv?= <vinnix.bsd@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Possibilities on code change to implement pseudodatatypes" } ]
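A practical footnote to the thread above: for the stated goal of keeping a history of pg_statistic for troubleshooting, the anyarray restriction in CheckAttributeType() can be sidestepped by serializing those columns instead of storing them with their pseudo-type. In the sketch below, the history table name, the captured_at column, the column subset, and the ::text casts are all choices made here for illustration, and reading pg_statistic requires a suitably privileged role.

-- Snapshot a few planner-statistics columns; anyarray values are kept as text.
CREATE TABLE pg_statistic_history AS
SELECT now() AS captured_at,
       starelid,
       staattnum,
       stainherit,
       stanullfrac,
       stawidth,
       stadistinct,
       stavalues1::text AS stavalues1,
       stavalues2::text AS stavalues2
FROM pg_statistic;

Later snapshots can be appended with an INSERT ... SELECT of the same shape, which needs no change to the TYPTYPE_PSEUDO check quoted above.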
[ { "msg_contents": "Hi hackers,\nI found a flaw in the ACL_MAINTAIN comment.\n\nCommands such as VACUUM are listed as commands that are allowed to be \nexecuted by the MAINTAIN privilege.\nHowever, LOCK TABLE is missing from the comment.\n\n> \t/*\n> \t * Check if ACL_MAINTAIN is being checked and, if so, and not already \n> set\n> \t * as part of the result, then check if the user is a member of the\n> \t * pg_maintain role, which allows VACUUM, ANALYZE, CLUSTER, REFRESH\n> \t * MATERIALIZED VIEW, and REINDEX on all relations.\n> \t */\n\nTherefore, shouldn't LOCK TABLE be added to the comment?\n\nBest regards,\nYusuke Sugie\n\n\n", "msg_date": "Mon, 30 Sep 2024 17:29:03 +0900", "msg_from": "btsugieyuusuke <btsugieyuusuke@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "ACL_MAINTAIN, Lack of comment content" }, { "msg_contents": "> On 30 Sep 2024, at 10:29, btsugieyuusuke <btsugieyuusuke@oss.nttdata.com> wrote:\n> \n> Hi hackers,\n> I found a flaw in the ACL_MAINTAIN comment.\n> \n> Commands such as VACUUM are listed as commands that are allowed to be executed by the MAINTAIN privilege.\n> However, LOCK TABLE is missing from the comment.\n> \n>> /*\n>> * Check if ACL_MAINTAIN is being checked and, if so, and not already set\n>> * as part of the result, then check if the user is a member of the\n>> * pg_maintain role, which allows VACUUM, ANALYZE, CLUSTER, REFRESH\n>> * MATERIALIZED VIEW, and REINDEX on all relations.\n>> */\n> \n> Therefore, shouldn't LOCK TABLE be added to the comment?\n\nThat's correct, for the list to be complete LOCK TABLE should be added as per\nthe attached.\n\n--\nDaniel Gustafsson", "msg_date": "Mon, 30 Sep 2024 11:40:29 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: ACL_MAINTAIN, Lack of comment content" }, { "msg_contents": "On Mon, 30 Sep 2024 11:40:29 +0200\nDaniel Gustafsson <daniel@yesql.se> wrote:\n\n> -\t * MATERIALIZED VIEW, and REINDEX on all relations.\n> +\t * MATERIALIZED VIEW, REINDEX and LOCK TABLE on all relations.\n\nShould we put a comma between REINDEX and \"and\" as following?\n\n \"... MATERIALIZED VIEW, REINDEX, and LOCK TABLE on all relations.\"\n\nRegards,\nYugo Nagata \n\n-- \nYugo Nagata <nagata@sraoss.co.jp>\n\n\n", "msg_date": "Mon, 30 Sep 2024 19:38:35 +0900", "msg_from": "Yugo Nagata <nagata@sraoss.co.jp>", "msg_from_op": false, "msg_subject": "Re: ACL_MAINTAIN, Lack of comment content" }, { "msg_contents": "> On 30 Sep 2024, at 12:38, Yugo Nagata <nagata@sraoss.co.jp> wrote:\n> \n> On Mon, 30 Sep 2024 11:40:29 +0200\n> Daniel Gustafsson <daniel@yesql.se> wrote:\n> \n>> - * MATERIALIZED VIEW, and REINDEX on all relations.\n>> + * MATERIALIZED VIEW, REINDEX and LOCK TABLE on all relations.\n> \n> Should we put a comma between REINDEX and \"and\" as following?\n> \n> \"... MATERIALIZED VIEW, REINDEX, and LOCK TABLE on all relations.\"\n\nI'm not a native speaker so I'm not sure which is right, but grepping for other\nlists of items shows that the last \"and\" item is often preceded by a comma so\nI'll do that.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Mon, 30 Sep 2024 16:13:55 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: ACL_MAINTAIN, Lack of comment content" }, { "msg_contents": "On Mon, Sep 30, 2024 at 04:13:55PM +0200, Daniel Gustafsson wrote:\n>> On 30 Sep 2024, at 12:38, Yugo Nagata <nagata@sraoss.co.jp> wrote:\n>> \n>> Should we put a comma between REINDEX and \"and\" as following?\n>> \n>> \"... 
MATERIALIZED VIEW, REINDEX, and LOCK TABLE on all relations.\"\n> \n> I'm not a native speaker so I'm not sure which is right, but grepping for other\n> lists of items shows that the last \"and\" item is often preceded by a comma so\n> I'll do that.\n\nI'm not aware of a project policy around the Oxford comma [0], but I tend\nto include one.\n\n[0] https://en.wikipedia.org/wiki/Serial_comma\n\n-- \nnathan\n\n\n", "msg_date": "Mon, 30 Sep 2024 10:17:29 -0400", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: ACL_MAINTAIN, Lack of comment content" }, { "msg_contents": "Nathan Bossart <nathandbossart@gmail.com> writes:\n> On Mon, Sep 30, 2024 at 04:13:55PM +0200, Daniel Gustafsson wrote:\n>> I'm not a native speaker so I'm not sure which is right, but grepping for other\n>> lists of items shows that the last \"and\" item is often preceded by a comma so\n>> I'll do that.\n\n> I'm not aware of a project policy around the Oxford comma [0], but I tend\n> to include one.\n\nYeah, as that wikipedia article suggests, you can find support for\neither choice. I'd say do what looks best in context.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 30 Sep 2024 11:43:18 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: ACL_MAINTAIN, Lack of comment content" }, { "msg_contents": "> On 30 Sep 2024, at 17:43, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Nathan Bossart <nathandbossart@gmail.com> writes:\n>> On Mon, Sep 30, 2024 at 04:13:55PM +0200, Daniel Gustafsson wrote:\n>>> I'm not a native speaker so I'm not sure which is right, but grepping for other\n>>> lists of items shows that the last \"and\" item is often preceded by a comma so\n>>> I'll do that.\n> \n>> I'm not aware of a project policy around the Oxford comma [0], but I tend\n>> to include one.\n> \n> Yeah, as that wikipedia article suggests, you can find support for\n> either choice. I'd say do what looks best in context.\n\nThanks for input, I ended up keeping the comma.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Tue, 1 Oct 2024 00:04:14 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: ACL_MAINTAIN, Lack of comment content" } ]
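For reference, the check that the comment fix applies to is believed to be in pg_class_aclmask_ext() in src/backend/utils/adt/acl.c. A sketch of how the amended comment reads in place follows; the surrounding condition is paraphrased and may not match the committed code character for character.

    /*
     * Check if ACL_MAINTAIN is being checked and, if so, and not already set
     * as part of the result, then check if the user is a member of the
     * pg_maintain role, which allows VACUUM, ANALYZE, CLUSTER, REFRESH
     * MATERIALIZED VIEW, REINDEX, and LOCK TABLE on all relations.
     */
    if (mask & ACL_MAINTAIN &&
        !(result & ACL_MAINTAIN) &&
        has_privs_of_role(roleid, ROLE_PG_MAINTAIN))
        result |= ACL_MAINTAIN;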
[ { "msg_contents": "Hi,\n\nshouldn't this give the same error message?\n\n$ pg_basebackup --checkpoint=fast --format=t --compress --pgdata=/var/tmp/dummy\npg_basebackup: error: must specify output directory or backup target\npg_basebackup: hint: Try \"pg_basebackup --help\" for more information.\n\n$ pg_basebackup --pgdata=/var/tmp/dummy --checkpoint=fast --format=t --compress\npg_basebackup: option '--compress' requires an argument\npg_basebackup: hint: Try \"pg_basebackup --help\" for more information.\n\nI don't see why the first case gives not the same message as the second, especially as the \"output directory\" is specified.\n\nTested on v17 and devel, but I guess it was always like this.\n\nRegards\nDaniel\n\n", "msg_date": "Mon, 30 Sep 2024 11:10:09 +0000", "msg_from": "\"Daniel Westermann (DWE)\" <daniel.westermann@dbi-services.com>", "msg_from_op": true, "msg_subject": "pg_basebackup and error messages dependent on the order of the\n arguments" }, { "msg_contents": "\n\nOn 2024/09/30 20:10, Daniel Westermann (DWE) wrote:\n> Hi,\n> \n> shouldn't this give the same error message?\n> \n> $ pg_basebackup --checkpoint=fast --format=t --compress --pgdata=/var/tmp/dummy\n> pg_basebackup: error: must specify output directory or backup target\n> pg_basebackup: hint: Try \"pg_basebackup --help\" for more information.\n> \n> $ pg_basebackup --pgdata=/var/tmp/dummy --checkpoint=fast --format=t --compress\n> pg_basebackup: option '--compress' requires an argument\n> pg_basebackup: hint: Try \"pg_basebackup --help\" for more information.\n> \n> I don't see why the first case gives not the same message as the second, especially as the \"output directory\" is specified.\n\nI guess because \"--pgdata=/var/tmp/dummy\" is processed as the argument of\n--compress option in the first case, but not in the second case.\nYou can see the same error if you specify other optoin requiring an argument,\nsuch as --label, in the first case.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n\n", "msg_date": "Tue, 1 Oct 2024 01:34:22 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: pg_basebackup and error messages dependent on the order of the\n arguments" }, { "msg_contents": "\"Daniel Westermann (DWE)\" <daniel.westermann@dbi-services.com> writes:\n> shouldn't this give the same error message?\n\n> $ pg_basebackup --checkpoint=fast --format=t --compress --pgdata=/var/tmp/dummy\n> pg_basebackup: error: must specify output directory or backup target\n> pg_basebackup: hint: Try \"pg_basebackup --help\" for more information.\n\n> $ pg_basebackup --pgdata=/var/tmp/dummy --checkpoint=fast --format=t --compress\n> pg_basebackup: option '--compress' requires an argument\n> pg_basebackup: hint: Try \"pg_basebackup --help\" for more information.\n\n> I don't see why the first case gives not the same message as the second, especially as the \"output directory\" is specified.\n\nIt appears that the first case treats \"--pgdata=/var/tmp/dummy\"\nas the argument of --compress, and it doesn't bother to check that\nthat specifies a valid compression algorithm until much later.\n\nAs this example shows, we really ought to validate the compression\nargument on sight in order to get sensible error messages. 
The\ntrouble is that for server-side compression the code wants to just\npass the string through to the server and not form its own opinion\nas to whether it's a known algorithm.\n\nPerhaps it would help if we simply rejected strings beginning\nwith a dash? I haven't tested, but roughly along the lines of\n\n case 'Z':\n+ /* we don't want to check the algorithm yet, but ... */\n+ if (optarg[0] == '-')\n+ pg_fatal(\"invalid compress option \\\"%s\\\"\", optarg);\n backup_parse_compress_options(optarg, &compression_algorithm,\n &compression_detail, &compressloc);\n break;\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 30 Sep 2024 12:39:40 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_basebackup and error messages dependent on the order of the\n arguments" }, { "msg_contents": "I wrote:\n> As this example shows, we really ought to validate the compression\n> argument on sight in order to get sensible error messages. The\n> trouble is that for server-side compression the code wants to just\n> pass the string through to the server and not form its own opinion\n> as to whether it's a known algorithm.\n\n> Perhaps it would help if we simply rejected strings beginning\n> with a dash? I haven't tested, but roughly along the lines of\n\nTaking a closer look, many of the other switches-requiring-an-argument\nalso just absorb \"optarg\" without checking its value till much later,\nso I'm not sure how far we could move the needle by special-casing\n--compress.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 30 Sep 2024 13:15:42 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_basebackup and error messages dependent on the order of the\n arguments" }, { "msg_contents": ">I wrote:\n>> As this example shows, we really ought to validate the compression\n>> argument on sight in order to get sensible error messages. The\n>> trouble is that for server-side compression the code wants to just\n>> pass the string through to the server and not form its own opinion\n>> as to whether it's a known algorithm.\n\n>> Perhaps it would help if we simply rejected strings beginning\n>> with a dash? I haven't tested, but roughly along the lines of\n\n>Taking a closer look, many of the other switches-requiring-an-argument\n>also just absorb \"optarg\" without checking its value till much later,\n>so I'm not sure how far we could move the needle by special-casing\n>--compress.\n\nMy point was not so much about --compress but rather giving a good error message.\n\nLooking at this:\n$ pg_basebackup --checkpoint=fast --format=t --compress --pgdata=/var/tmp/dummy\npg_basebackup: error: must specify output directory or backup target\n\n... the error message is misleading and will confuse users more than it helps.\n\nRegards\nDaniel
", "msg_date": "Mon, 30 Sep 2024 17:56:51 +0000", "msg_from": "\"Daniel Westermann (DWE)\" <daniel.westermann@dbi-services.com>", "msg_from_op": true, "msg_subject": "Re: pg_basebackup and error messages dependent on the order of the\n arguments" }, { "msg_contents": "\"Daniel Westermann (DWE)\" <daniel.westermann@dbi-services.com> writes:\n>> Taking a closer look, many of the other switches-requiring-an-argument\n>> also just absorb \"optarg\" without checking its value till much later,\n>> so I'm not sure how far we could move the needle by special-casing\n>> --compress.\n\n> My point was not so much about --compress but rather giving a good error message.\n\nRight, and my point was that the issue is bigger than --compress.\nFor example, you get exactly the same misbehavior with\n\n$ pg_basebackup --checkpoint=fast --format=t -d --pgdata=/var/tmp/dummy\npg_basebackup: error: must specify output directory or backup target\npg_basebackup: hint: Try \"pg_basebackup --help\" for more information.\n\nI'm not sure how to solve the problem once you consider this larger\nscope. I don't think we could forbid arguments beginning with \"-\" for\nall of these switches.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 30 Sep 2024 15:09:51 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_basebackup and error messages dependent on the order of the\n arguments" } ]
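The behaviour described in this thread follows directly from how getopt_long() handles long options declared with required_argument: when the option is given without "=value", the next argv element is consumed as the argument even if it looks like another option. The following small, self-contained C program is an illustration only; the option table mirrors two pg_basebackup-style options but is not pg_basebackup's actual code.

    /* getopt_demo.c - illustrative only */
    #include <stdio.h>
    #include <getopt.h>

    int
    main(int argc, char **argv)
    {
        static struct option long_options[] = {
            {"pgdata", required_argument, NULL, 'D'},
            {"compress", required_argument, NULL, 'Z'},
            {NULL, 0, NULL, 0}
        };
        int         c;

        while ((c = getopt_long(argc, argv, "D:Z:", long_options, NULL)) != -1)
        {
            switch (c)
            {
                case 'D':
                    printf("pgdata = %s\n", optarg);
                    break;
                case 'Z':
                    /* with "--compress --pgdata=/x", optarg is "--pgdata=/x" here */
                    printf("compress = %s\n", optarg);
                    break;
                default:
                    return 1;
            }
        }
        return 0;
    }

Running it as "./getopt_demo --compress --pgdata=/var/tmp/dummy" should print "compress = --pgdata=/var/tmp/dummy" and never report a pgdata value at all, which is exactly the situation that produces the misleading "must specify output directory" error discussed above.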
[ { "msg_contents": "Hi!\n\nWorking with temp relations is some kind of bottleneck in Postgres, in my\nview.\nThere are no problems if you want to handle it from time to time, not\narguing\nthat. But if you have to make a massive temp tables creation/deletion,\nyou'll\nsoon step into a performance degradation.\n\nTo the best of my knowledge, there are two obvious problems:\n1. We have to add or remove an entry in pg_class when temp table created\nand\n deleted, resulting in \"bloating\" of pg_class. Thus, auto-vacuum is\nneeded, but\n it will acquire a lock, slowing things down.\n2. Temp tables almost universally treated as regular tables. And this is\n100%\ncorrect and makes code much simpler. But also involve all the locking\nmechanism.\n\nAs for the first issue, I do not see how any significant improvements can\nbe made,\nunfortunately.\n\nBut for the second one: do we really need any lock for temp relations?\nAFAICU\nthey are backend only, apart from pg_class entries.\n\nI do not have any particular solution for now, only some kind of concept:\nwe can\nput checks for temp relations in LockAcquire/LockRelease in order to skip\nlocking.\n\nDo I miss something and idea is doomed or there are no visible obstacles\nhere\nand it's worth the effort to make a POC patch?\n\n-- \nBest regards,\nMaxim Orlov.\n\nHi!Working with temp relations is some kind of bottleneck in Postgres, in my view. There are no problems if you want to handle it from time to time, not arguing that. But if you have to make a massive temp tables creation/deletion, you'll soon step into a performance degradation.To the best of my knowledge, there are two obvious problems:1. We have to add or remove an entry in pg_class when temp table created and    deleted, resulting in \"bloating\" of pg_class. Thus, auto-vacuum is needed, but   it will acquire a lock, slowing things down.2. Temp tables almost universally treated as regular tables. And this is 100% correct and makes code much simpler. But also involve all the locking mechanism.As for the first issue, I do not see how any significant improvements can be made, unfortunately.But for the second one: do we really need any lock for temp relations? AFAICU they are backend only, apart from pg_class entries.I do not have any particular solution for now, only some kind of concept: we canput checks for temp relations in LockAcquire/LockRelease in order to skip locking.Do I miss something and idea is doomed or there are no visible obstacles hereand it's worth the effort to make a POC patch?-- Best regards,Maxim Orlov.", "msg_date": "Mon, 30 Sep 2024 17:49:36 +0300", "msg_from": "Maxim Orlov <orlovmg@gmail.com>", "msg_from_op": true, "msg_subject": "Do not lock temp relations" }, { "msg_contents": "Maxim Orlov <orlovmg@gmail.com> writes:\n> But for the second one: do we really need any lock for temp relations?\n\nYes. Our implementation restrictions preclude access to the contents\nof another session's temp tables, but it is not forbidden to do DDL\non them so long as no content access is required. (Without this,\nit'd be problematic for example to clean out a crashed session's temp\ntables. See the \"orphan temporary tables\" logic in autovacuum.c.)\nYou need fairly realistic locking to ensure that's OK.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 30 Sep 2024 11:05:36 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Do not lock temp relations" } ]
[ { "msg_contents": "I am just contacting you to talk about a current issue with our database.\nWe have run out of a positive sequence in one of our tables and are now\noperating with negative sequences. To address this, we plan to migrate from\nthe int4 ID column to an int8 ID column.\n\nThe plan involves renaming the int8 column to the id column and setting it\nas the primary key. However, this process will require downtime, which may\nbe substantial in a production environment. Fortunately, we have noted that\nother tables do not use the id column as a foreign key, which may help\nmitigate some concerns.\nOur Approach:\n\n 1.\n\n *Copy Data: *Copy all the data to the new *id_copy* column.\n 2.\n\n *C**reate a Unique Index*: We will then create a unique index on the new\n ID column before renaming it ,and alter it to non-nullable. This step will\n necessitate scanning the entire table to verify uniqueness and\n non-nullability.\n 3.\n\n *A**dd Primary Key*: After ensuring the uniqueness, we will add the ID\n column as the primary key. By doing this, we hope to bypass the additional\n scanning for uniqueness and nullability, as the column will already be set\n as not nullable and will have the uniqueness constraint from the unique\n index.\n\nWe want to confirm if this approach will work as expected. If we should be\naware of any potential pitfalls or considerations, could you please provide\ninsights or point us toward relevant documentation?\n\nThank you so much for your help, and I look forward to your guidance.\n\nBest regards,\n\nAditya Narayan Singh\nLoyalty Juggernaut Inc.\n\n-- \n\n*Confidentiality Warning:*\nThis message and any attachments are intended \nonly for the use of the intended recipient(s), are confidential, and may be \nprivileged. If you are not the intended recipient, you are hereby notified \nthat any disclosure, copying, distribution, or other use of this message \nand any attachments is strictly prohibited. If received in error, please \nnotify the sender immediately and permanently delete it.\n\nI am just contacting you to talk about a current issue with our database. We have run out of a positive sequence in one of our tables and are now operating with negative sequences. To address this, we plan to migrate from the int4 ID column to an int8 ID column.The plan involves renaming the int8 column to the id column and setting it as the primary key. However, this process will require downtime, which may be substantial in a production environment. Fortunately, we have noted that other tables do not use the id column as a foreign key, which may help mitigate some concerns.Our Approach:Copy Data: Copy all the data to the new  id_copy column. Create a Unique Index: We will then create a unique index on the new ID column before renaming it ,and alter it to non-nullable. This step will necessitate scanning the entire table to verify uniqueness and non-nullability.Add Primary Key: After ensuring the uniqueness, we will add the ID column as the primary key. By doing this, we hope to bypass the additional scanning for uniqueness and nullability, as the column will already be set as not nullable and will have the uniqueness constraint from the unique index.We want to confirm if this approach will work as expected. 
", "msg_date": "Mon, 30 Sep 2024 22:35:00 +0530", "msg_from": "Aditya Singh <aditya.singh@lji.io>", "msg_from_op": true, "msg_subject": "Request for Insights on ID Column Migration Approach" } ]
[ { "msg_contents": "I happened to notice that `set_rel_pathlist` params, RelOptInfo *rel\nand RangeTblEntry *rte are\nunnecessary, because upon all usages,\n`rte=root->simple_rte_array[rti]` and\n`rel=root->simple_rel_array[rti]` holds. What's the point of providing\nthe same information 3 times? Is it kept like that for extension\nbackward compatibility.\n\nSo, I propose to refactor this a little bit.\n\nAm I missing something?\n\n\n\n-- \nBest regards,\nKirill Reshke\n\n\n", "msg_date": "Tue, 1 Oct 2024 00:38:56 +0500", "msg_from": "Kirill Reshke <reshkekirill@gmail.com>", "msg_from_op": true, "msg_subject": "set_rel_pathlist function unnecessary params." }, { "msg_contents": "Kirill Reshke <reshkekirill@gmail.com> writes:\n> I happened to notice that `set_rel_pathlist` params, RelOptInfo *rel\n> and RangeTblEntry *rte are\n> unnecessary, because upon all usages,\n> `rte=root->simple_rte_array[rti]` and\n> `rel=root->simple_rel_array[rti]` holds. What's the point of providing\n> the same information 3 times?\n\nTo avoid having to re-fetch it from those arrays?\n\n> So, I propose to refactor this a little bit.\n> Am I missing something?\n\nI'm -1 on changing this. It'd provide no detectable benefit\nwhile creating back-patching hazards in this code.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 30 Sep 2024 15:56:22 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: set_rel_pathlist function unnecessary params." } ]
[ { "msg_contents": "Hi, there!\n\nI created patch improving the log messages generated by\nlog_lock_waits.\n\nSample output (log_lock_waits=on required):\n\nsession 1:\nCREATE TABLE foo (val integer);\nINSERT INTO foo (val) VALUES (1);\nBEGIN;\nUPDATE foo SET val = 3;\n\nsession 2:\nBEGIN;\nUPDATE TABLE foo SET val = 2;\n\nOutput w/o patch:\n\nLOG: process 3133043 still waiting for ShareLock on transaction 758\nafter 1000.239 ms\nDETAIL: Process holding the lock: 3132855. Wait queue: 3133043.\nCONTEXT: while updating tuple (0,7) in relation \"foo\"\nSTATEMENT: update foo SET val = 2;\n\nOutput with path\n\nLOG: process 3133043 still waiting for ShareLock on transaction 758\nafter 1000.239 ms\nDETAIL: Process holding the lock: 3132855. Wait queue: 3133043.\nProcess 3132855: update foo SET val = 3;\nCONTEXT: while updating tuple (0,7) in relation \"foo\"\nSTATEMENT: update foo SET val = 2;\n\nAs you can see information about query that holds the lock goes into log.\n\nIf this approach proves unacceptable, we can make the log_lock_waits\nparameter as an enum\nand display the query if the log_lock_waits=verbose (for example).\n\nWhat do you think?\n\nRegards,\n\n--\nOrlov Alexey", "msg_date": "Mon, 30 Sep 2024 23:15:38 +0300", "msg_from": "Alexey Orlov <aporlov@gmail.com>", "msg_from_op": true, "msg_subject": "Patch: Show queries of processes holding a lock" } ]