[ { "msg_contents": "Andrew Martin wrote:\n> Just run the regression tests on 6.1 and as I suspected the array bug\n> is still there. The regression test passes because the expected output\n> has been fixed to the *wrong* output.\n\nOK, I think I understand the current array behavior, which is apparently\ndifferent than the behavior for v1.0x.\n\nPostgres v6.1 allows one to specify a dimensionality for an array object\nwhen declaring that object/column. However, that specification is not\nused when decoding a field. Instead, the dimensionality is deduced from\nthe input string itself. The dimensionality is stored with each field,\nand is used to encode the array on output. So, one is currently allowed\nto mix array dimensions within a column, but Postgres seems to keep that\nall straight for input and output.\n\nIs this the behavior that we want? Just because it is different from\nprevious behavior doesn't mean that it is undesirable. However, when\nmixing dimensionality within the same column it must be more difficult\nto figure out how to do comparison and other operations on arrays.\n\nIf we are to enforce dimensionality within columns, then I need to\nfigure out how to get that information from the table declaration when\ndecoding fields. Bruce, do you know where to look for that kind of code?\nAnyone have an idea on how much this code has changed over the last\nyear??\n\n\t\t\t- Tom\n\n\n--ELM913966242-1523-0_\n\n--ELM913966242-1523-0_--\n", "msg_date": "Tue, 24 Jun 1997 15:16:32 +0000", "msg_from": "\"Thomas G. Lockhart\" <Thomas.Lockhart@jpl.nasa.gov>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Array bug is still there...." } ]
[ { "msg_contents": "\nignore\n\n", "msg_date": "Sun, 4 Jan 1998 00:47:02 -0500 (EST)", "msg_from": "\"Marc G. Fournier\" <scrappy>", "msg_from_op": true, "msg_subject": "setting up mhonarc one list at a time" } ]
[ { "msg_contents": "\nI've just created a *very* simple script that creates a snapshot\nbased on the current source tree. Nothing at all fancy, its just\nmeant to give those without CVSup access to something to test and\nwork with.\n\nIt will get regenerated every Friday/Saturday night via cron\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sun, 4 Jan 1998 03:26:34 -0400 (AST)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "New Snapshot(s)" } ]
[ { "msg_contents": "\nOn Sat, 3 Jan 1998, Bruce Momjian wrote:\n\n> Sounds maybe a little too serious. We currently use WARN a lot to\n> indicate errors in the supplied SQL statement. Perhaps we need to make\n> the parser elog's ERROR, and the non-parser WARN's ABORT? Is that good?\n> When can I make the change? I don't want to mess up people's current work.\n\nThis shouldn't affect JDBC. The only thing that would break things, is if\nthe notification sent by the \"show datestyle\" statement is changed.\n\n-- \nPeter T Mount petermount@earthling.net or pmount@maidast.demon.co.uk\nMain Homepage: http://www.demon.co.uk/finder\nWork Homepage: http://www.maidstone.gov.uk Work EMail: peter@maidstone.gov.uk\n\n\n", "msg_date": "Sun, 4 Jan 1998 12:16:08 +0000 (GMT)", "msg_from": "Peter T Mount <psqlhack@maidast.demon.co.uk>", "msg_from_op": true, "msg_subject": "Re: Error messages/logging (Was: Re: [HACKERS] Re: [COMMITTERS]\n\t'pgsql/src/backend/parser gram.y parse_oper.c')" }, { "msg_contents": "Hello,\n\nfor thanks for the great stuff you wrote.\n\nI remember some time ago talk about error numbers in PostgreSQL. Are \nthey implemented? And standard SQL error codes (5 character long \nstrings). As PostgreSQL is moving towards ANSI SQL rather fast, it \nwould be nice to see standard error codes and messages as well.\n\n> > > > I just think the WARN word coming up on users terminals is odd. I can\n> > > > make the change in all the source files easily if we decide what the new\n> > > > error word should be. Error? Failure?\n> > > >\n> > > \n> > > Yes, that's one of the things I don't understand with PostgreSQL.\n> > > ERROR would be much better.\n> > \n> > How about ABORT ?\n> \n> Sounds maybe a little too serious. We currently use WARN a lot to\n> indicate errors in the supplied SQL statement. Perhaps we need to make\n> the parser elog's ERROR, and the non-parser WARN's ABORT? Is that good?\n> When can I make the change? I don't want to mess up people's current work.\n\nMe too :-)\n\nCiao\n\nCiao\n\nDas Boersenspielteam.\n\n---------------------------------------------------------------------------\n http://www.boersenspiel.de\n \t Das Boersenspiel im Internet\n *Realitaetsnah* *Kostenlos* *Ueber 6000 Spieler*\n---------------------------------------------------------------------------\n", "msg_date": "Sun, 4 Jan 1998 14:04:44 +0000", "msg_from": "\"Boersenspielteam\" <boersenspiel@vocalweb.de>", "msg_from_op": false, "msg_subject": "Re: Error messages/logging (Was: Re: [HACKERS] Re: [COMMITTERS] " }, { "msg_contents": "> \n> Mattias Kregert wrote:\n> > \n> > Bruce Momjian wrote:\n> > >\n> > \n> > > I just think the WARN word coming up on users terminals is odd. I can\n> > > make the change in all the source files easily if we decide what the new\n> > > error word should be. Error? Failure?\n> > >\n> > \n> > Yes, that's one of the things I don't understand with PostgreSQL.\n> > ERROR would be much better.\n> \n> How about ABORT ?\n\nSo I assume no one has pending patches where this change would cause a\nproblem. 
So I will go ahead.\n\n\n-- \nBruce Momjian\nmaillist@candle.pha.pa.us\n", "msg_date": "Sun, 4 Jan 1998 21:22:50 -0500 (EST)", "msg_from": "Bruce Momjian <maillist@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Error messages/logging (Was: Re: [HACKERS] Re: [COMMITTERS]\n\t'pgsql/src/backend/parser gram.y parse_oper.c')" }, { "msg_contents": "> ABORT means that transaction is ABORTed.\n> Will ERROR mean something else ?\n> Why should we use two different flag-words for the same thing ?\n> Note, that I don't object against using ERROR, but against using two words.\n\nI wanted two words to distinguish between user errors like a mis-spelled\nfield name, and internal errors like btree failure messages.\n\nMake sense?\n\nI made all the error messages coming from the parser as ERROR, and\nnon-parser messages as ABORT. I think I will need to fine-tune the\nmessages because I am sure I missed some messages that should be ERROR\nbut are ABORT. For example, utils/adt messages about improper data\nformats, is that an ERROR or an ABORT?\n\n-- \nBruce Momjian\nmaillist@candle.pha.pa.us\n", "msg_date": "Sun, 4 Jan 1998 22:25:03 -0500 (EST)", "msg_from": "Bruce Momjian <maillist@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Error messages/logging (Was: Re: [HACKERS] Re: [COMMITTERS]\n\t'pgsql/src/backend/parser gram.y parse_oper.c')" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> >\n> > Mattias Kregert wrote:\n> > >\n> > > Bruce Momjian wrote:\n> > > >\n> > >\n> > > > I just think the WARN word coming up on users terminals is odd. I can\n> > > > make the change in all the source files easily if we decide what the new\n> > > > error word should be. Error? Failure?\n> > > >\n> > >\n> > > Yes, that's one of the things I don't understand with PostgreSQL.\n> > > ERROR would be much better.\n> >\n> > How about ABORT ?\n> \n> Sounds maybe a little too serious. We currently use WARN a lot to\n> indicate errors in the supplied SQL statement. Perhaps we need to make\n> the parser elog's ERROR, and the non-parser WARN's ABORT? Is that good?\n> When can I make the change? I don't want to mess up people's current work.\n\nABORT means that transaction is ABORTed.\nWill ERROR mean something else ?\nWhy should we use two different flag-words for the same thing ?\nNote, that I don't object against using ERROR, but against using two words.\n\nVadim\n", "msg_date": "Mon, 05 Jan 1998 10:35:10 +0700", "msg_from": "\"Vadim B. Mikheev\" <vadim@sable.krasnoyarsk.su>", "msg_from_op": false, "msg_subject": "Re: Error messages/logging (Was: Re: [HACKERS] Re: [COMMITTERS]\n\t'pgsql/src/backend/parser gram.y parse_oper.c')" }, { "msg_contents": "> \n> Bruce Momjian wrote:\n> > \n> > > ABORT means that transaction is ABORTed.\n> > > Will ERROR mean something else ?\n> > > Why should we use two different flag-words for the same thing ?\n> > > Note, that I don't object against using ERROR, but against using two words.\n> > \n> > I wanted two words to distinguish between user errors like a mis-spelled\n> > field name, and internal errors like btree failure messages.\n> > \n> > Make sense?\n> \n> No, for me. 
Do Informix, Oracle, etc use two words ?\n> What benefit of special \"in-parser-error\" word for user - in any case\n> user will read error message itself to understand what caused error.\n\nOK, if no one likes my idea in the next day, I will make them all ERROR.\n\n-- \nBruce Momjian\nmaillist@candle.pha.pa.us\n", "msg_date": "Sun, 4 Jan 1998 23:43:21 -0500 (EST)", "msg_from": "Bruce Momjian <maillist@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Error messages/logging (Was: Re: [HACKERS] Re: [COMMITTERS]\n\t'pgsql/src/backend/parser gram.y parse_oper.c')" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> > ABORT means that transaction is ABORTed.\n> > Will ERROR mean something else ?\n> > Why should we use two different flag-words for the same thing ?\n> > Note, that I don't object against using ERROR, but against using two words.\n> \n> I wanted two words to distinguish between user errors like a mis-spelled\n> field name, and internal errors like btree failure messages.\n> \n> Make sense?\n\nNo, for me. Do Informix, Oracle, etc use two words ?\nWhat benefit of special \"in-parser-error\" word for user - in any case\nuser will read error message itself to understand what caused error.\n\n> \n> I made all the error messages coming from the parser as ERROR, and\n> non-parser messages as ABORT. I think I will need to fine-tune the\n> messages because I am sure I missed some messages that should be ERROR\n> but are ABORT. For example, utils/adt messages about improper data\n> formats, is that an ERROR or an ABORT?\n\nGood question :)\n\nFollowing your way\n\ninsert into X (an_int2_field) values (9999999999);\n\nshould cause ERROR message, but\n\ninsert into X (an_int2_field) select an_int4_field from Y;\n\nshould return ABORT message if value of some an_int4_field in Y is\ngreater than 32768.\n\nVadim\n", "msg_date": "Mon, 05 Jan 1998 11:51:30 +0700", "msg_from": "\"Vadim B. Mikheev\" <vadim@sable.krasnoyarsk.su>", "msg_from_op": false, "msg_subject": "Re: Error messages/logging (Was: Re: [HACKERS] Re: [COMMITTERS]\n\t'pgsql/src/backend/parser gram.y parse_oper.c')" }, { "msg_contents": "> > > I wanted two words to distinguish between user errors like a mis-spelled\n> > > field name, and internal errors like btree failure messages.\n> > >\n> > > Make sense?\n> >\n> > No, for me. Do Informix, Oracle, etc use two words ?\n> > What benefit of special \"in-parser-error\" word for user - in any case\n> > user will read error message itself to understand what caused error.\n>\n> OK, if no one likes my idea in the next day, I will make them all ERROR.\n\nWell, _I_ like your idea. Seems like we can distinguish between operator error\n(which the operator can fix) and internal problems, and we could flag them\ndifferently. Perhaps there are so many grey areas that this becomes difficult to\ndo??\n\n - Tom\n\n", "msg_date": "Mon, 05 Jan 1998 07:12:11 +0000", "msg_from": "\"Thomas G. Lockhart\" <lockhart@alumni.caltech.edu>", "msg_from_op": false, "msg_subject": "Re: Error messages/logging (Was: Re: [HACKERS] Re: [COMMITTERS]\n\t'pgsql/src/backend/parser gram.y parse_oper.c')" }, { "msg_contents": "Thomas G. Lockhart wrote:\n> \n> > > > I wanted two words to distinguish between user errors like a mis-spelled\n> > > > field name, and internal errors like btree failure messages.\n> > > >\n> > > > Make sense?\n> > >\n> > > No, for me. 
Do Informix, Oracle, etc use two words ?\n> > > What benefit of special \"in-parser-error\" word for user - in any case\n> > > user will read error message itself to understand what caused error.\n> >\n> > OK, if no one likes my idea in the next day, I will make them all ERROR.\n> \n> Well, _I_ like your idea. Seems like we can distinguish between operator error\n> (which the operator can fix) and internal problems, and we could flag them\n> differently. Perhaps there are so many grey areas that this becomes difficult to\n> do??\n\nAll adt/*.c are \"grey areas\":\n\ninsert into X (an_int2_field) values (9999999999);\n\nshould cause ERROR message, but\n\ninsert into X (an_int2_field) select an_int4_field from Y;\n\nshould return ABORT message if value of some an_int4_field in Y is\ngreater than 32768.\n\nVadim\n", "msg_date": "Mon, 05 Jan 1998 14:48:59 +0700", "msg_from": "\"Vadim B. Mikheev\" <vadim@sable.krasnoyarsk.su>", "msg_from_op": false, "msg_subject": "Re: Error messages/logging (Was: Re: [HACKERS] Re: [COMMITTERS]\n\t'pgsql/src/backend/parser gram.y parse_oper.c')" }, { "msg_contents": "> > I made all the error messages coming from the parser as ERROR, and\n> > non-parser messages as ABORT. I think I will need to fine-tune the\n> > messages because I am sure I missed some messages that should be ERROR\n> > but are ABORT. For example, utils/adt messages about improper data\n> > formats, is that an ERROR or an ABORT?\n> \n> Good question :)\n> \n> Following your way\n> \n> insert into X (an_int2_field) values (9999999999);\n> \n> should cause ERROR message, but\n> \n> insert into X (an_int2_field) select an_int4_field from Y;\n\nThis generates an ERROR, because the parser catches the type mismatch.\n\nIt looks like the changes are broken up pretty much among directories. \nutils/adt and catalog/ and commands/ are all pretty much ERROR.\n\n> \n> should return ABORT message if value of some an_int4_field in Y is\n> greater than 32768.\n> \n> Vadim\n> \n\n\n-- \nBruce Momjian\nmaillist@candle.pha.pa.us\n", "msg_date": "Mon, 5 Jan 1998 11:10:37 -0500 (EST)", "msg_from": "Bruce Momjian <maillist@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Error messages/logging (Was: Re: [HACKERS] Re: [COMMITTERS]\n\t'pgsql/src/backend/parser gram.y parse_oper.c')" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> > > I made all the error messages coming from the parser as ERROR, and\n> > > non-parser messages as ABORT. I think I will need to fine-tune the\n> > > messages because I am sure I missed some messages that should be ERROR\n> > > but are ABORT. For example, utils/adt messages about improper data\n> > > formats, is that an ERROR or an ABORT?\n> >\n> > Good question :)\n> >\n> > Following your way\n> >\n> > insert into X (an_int2_field) values (9999999999);\n> >\n> > should cause ERROR message, but\n> >\n> > insert into X (an_int2_field) select an_int4_field from Y;\n> \n> This generates an ERROR, because the parser catches the type mismatch.\n\nHm - this is just example, I could use casting here...\n\nVadim\n", "msg_date": "Tue, 06 Jan 1998 00:12:38 +0700", "msg_from": "\"Vadim B. Mikheev\" <vadim@sable.krasnoyarsk.su>", "msg_from_op": false, "msg_subject": "Re: Error messages/logging (Was: Re: [HACKERS] Re: [COMMITTERS]\n\t'pgsql/src/backend/parser gram.y parse_oper.c')" }, { "msg_contents": "> > This generates an ERROR, because the parser catches the type mismatch.\n> \n> Hm - this is just example, I could use casting here...\n\nAh, you got me here. 
If you cast int2(), you would get a different\nmessage. You are right.\n\nI changes parser/, commands/, utils/adt/, and several of the /tcop\nfiles. Should take care of most of them. Any errors coming out of the\noptimizer or executor, or cache code should be marked as serious. Let's\nsee if it helps. I can easily make them all the same.\n\n-- \nBruce Momjian\nmaillist@candle.pha.pa.us\n", "msg_date": "Mon, 5 Jan 1998 12:27:53 -0500 (EST)", "msg_from": "Bruce Momjian <maillist@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Error messages/logging (Was: Re: [HACKERS] Re: [COMMITTERS]\n\t'pgsql/src/backend/parser gram.y parse_oper.c')" } ]
[ { "msg_contents": "\nOn Sat, 3 Jan 1998, Bruce Momjian wrote:\n\n> OK, I am CC'ing Peter. I know he suspected a protocol issue, and\n> perhaps this will help him.\n\nPS: Its best to send postgresql stuff to my home addresses, rather than\nthe work one. Also, I'm actually on leave this week, so stuff sent to\nmaidstone.gov.uk won't be read until the 12th.\n\n> > The output produced looks like the lo_import() call worked (a id number is\n> > returned), but the second query runs into problenms. For example:\n> > \n> > Here is some output:\n> > \n> > IMPORTING FILE: /tmp/10-225341.jpg\n> > \n> > File Id is 18209Executing Query : INSERT INTO foo VALUES ('Testing', 18209)\n> > \n> > Database Error: unknown protocol character 'V' read from backend. (The protocol\n> > character is the first character the backend sends in\n> > response to a query it receives). \n\nThis was the first bug that I found (and the reason I first suspected the\nprotocol). I have a possible fix for this one (which I applied to the jdbc\ndriver), but another problem then rears its head else where.\n\nPS: this problem is infact in libpq.\n\n> > this is a CGI program, and the http user has all access to table \"foo\" in the\n> > database. The postgres.log file also spits out this:\n> > NOTICE:LockRelease: locktable lookup failed, no lock\n\nYep, I get this as well. It's on my list of things to find.\n\n> > -------------------------------------------------------------\n> > \n> > \n> > Bruce:\n> > \n> > Do you see a problem with the code that I missed, or are the large objects\n> > interface functions broken in 6.2.1?\n\nThey are broken.\n\nWhen testing, I find that as long as the large object is < 1k in size, it\nworks once, then will never work again. This obviously is useless.\n\nI'm spending the whole of next week hitting this with a vengence (doing\nsome preliminary work this afternoon). I'll keep everyone posted.\n\n-- \nPeter T Mount petermount@earthling.net or pmount@maidast.demon.co.uk\nMain Homepage: http://www.demon.co.uk/finder\nWork Homepage: http://www.maidstone.gov.uk Work EMail: peter@maidstone.gov.uk\n\n\n", "msg_date": "Sun, 4 Jan 1998 12:39:39 +0000 (GMT)", "msg_from": "Peter T Mount <psqlhack@maidast.demon.co.uk>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: [QUESTIONS] tuple is too big?" } ]
[ { "msg_contents": "> \n> On Sat, 3 Jan 1998, Bruce Momjian wrote:\n> \n> > I believe it is 8k to match the base size for the filesystem block size,\n> > for performance.\n> \n> \tHrmmm...how does one find out what the file system block size is? I know\n> there is an option to newfs for changing this, and at least under FreeBSD, the\n> default is set to:\n> \n> sys/param.h:#define DFLTBSIZE 4096\n> \n> \tSo maybe a multiple of block size should be considered more appropriate?\n> Maybe have it so that you can stipulate # of blocks that equal max tuple size?\n> Then, if I wanted, I could format a drive with a block size of 16k that is only\n> going to be used for databases, and have a tuple size up to that level?\n> \n\nYes, you certainly could do that. The comment says:\n\t\n\t/*\n\t * the maximum size of a disk block for any possible installation.\n\t *\n\t * in theory this could be anything, but in practice this is actually\n\t * limited to 2^13 bytes because we have limited ItemIdData.lp_off and\n\t * ItemIdData.lp_len to 13 bits (see itemid.h).\n\t */\n\t#define MAXBLCKSZ 8192\n\nYou can now specify the actual file system block size at the time of\nnewfs. We actually could query the file system at time of compile, but\nthat would be strange becuase the database location is set at time of\npostmaster startup, and I don't think we can make this a run-time\nparameter, but I may be wrong.\n\nOf course, you have to change the structures the mention.\n\n-- \nBruce Momjian\nmaillist@candle.pha.pa.us\n", "msg_date": "Sun, 4 Jan 1998 10:50:53 -0500 (EST)", "msg_from": "Bruce Momjian <maillist@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] include/config.h FOLLOWUP" }, { "msg_contents": "On Sun, 4 Jan 1998, Bruce Momjian wrote:\n\n> Yes, you certainly could do that. The comment says:\n> \t\n> \t/*\n> \t * the maximum size of a disk block for any possible installation.\n> \t *\n> \t * in theory this could be anything, but in practice this is actually\n> \t * limited to 2^13 bytes because we have limited ItemIdData.lp_off and\n> \t * ItemIdData.lp_len to 13 bits (see itemid.h).\n> \t */\n> \t#define MAXBLCKSZ 8192\n> \n> You can now specify the actual file system block size at the time of\n> newfs. We actually could query the file system at time of compile, but\n> that would be strange becuase the database location is set at time of\n> postmaster startup, and I don't think we can make this a run-time\n> parameter, but I may be wrong.\n\n\tNo, don't make it a run-time or auto-detect thing, just a compile time\noption. By default, leave it at 8192, since \"that's the way its always been\"...\nbut if we are justifying it based on disk block size, its 2x the disk block \nsize that my system is setup for. What's the difference between that and making\nit 3x or 4x? Or, hell, would I get a performance increase if I brought it\ndown to 4096, which is what my actually disk block size is?\n\n\tSo, what we would really be doing is setting the default to 8192, but give\nthe installer the opportunity (with a caveat that this value should be a multiple\nof default file system block size for optimal performance) to increase it as they\nsee fit.\n\nMarc G. 
Fournier \nSystems Administrator @ hub.org \nprimary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sun, 4 Jan 1998 14:18:53 -0400 (AST)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] include/config.h FOLLOWUP" }, { "msg_contents": "> \tNo, don't make it a run-time or auto-detect thing, just a compile time\n> option. By default, leave it at 8192, since \"that's the way its always been\"...\n> but if we are justifying it based on disk block size, its 2x the disk block \n> size that my system is setup for. What's the difference between that and making\n> it 3x or 4x? Or, hell, would I get a performance increase if I brought it\n> down to 4096, which is what my actually disk block size is?\n> \n> \tSo, what we would really be doing is setting the default to 8192, but give\n> the installer the opportunity (with a caveat that this value should be a multiple\n> of default file system block size for optimal performance) to increase it as they\n> see fit.\n\nI assume you changed the default, becuase the BSD44 default is 8k\nblocks, with 1k fragments.\n\nI don't think there is any 'performance' improvement with making it\ngreater than the file system block size.\n\n-- \nBruce Momjian\nmaillist@candle.pha.pa.us\n", "msg_date": "Sun, 4 Jan 1998 14:05:32 -0500 (EST)", "msg_from": "Bruce Momjian <maillist@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] include/config.h FOLLOWUP" }, { "msg_contents": "On Sun, 4 Jan 1998, Bruce Momjian wrote:\n\n> > \tNo, don't make it a run-time or auto-detect thing, just a compile time\n> > option. By default, leave it at 8192, since \"that's the way its always been\"...\n> > but if we are justifying it based on disk block size, its 2x the disk block \n> > size that my system is setup for. What's the difference between that and making\n> > it 3x or 4x? Or, hell, would I get a performance increase if I brought it\n> > down to 4096, which is what my actually disk block size is?\n> > \n> > \tSo, what we would really be doing is setting the default to 8192, but give\n> > the installer the opportunity (with a caveat that this value should be a multiple\n> > of default file system block size for optimal performance) to increase it as they\n> > see fit.\n> \n> I assume you changed the default, becuase the BSD44 default is 8k\n> blocks, with 1k fragments.\n\n\tGood question, I don't know. What does BSDi have it set at? Linux? NetBSD?\n\n\tI just checked our sys/param.h file under Solaris 2.5.1, and it doesn't\nseem to define a DEFAULT, but a MAXSIZE of 8192...oops, newfs defines the default\nthere for 8192 also\n\n> I don't think there is any 'performance' improvement with making it\n> greater than the file system block size.\n\n\tNo no...you missed the point. If we are saying that max tuple size is 8k\nbecause of block size of the file system, under FreeBSD, the tuple size is 2x\nthe block size of the file system. So, if there a performance decrease because\nof that...on modern OSs, how much does that even matter anymore? The 8192 that\nwe have current set, that's probably still from the original Postgres4.2 system\nthat was written in which decade? :)\n\n\nMarc G. 
Fournier \nSystems Administrator @ hub.org \nprimary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sun, 4 Jan 1998 15:31:49 -0400 (AST)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] include/config.h FOLLOWUP" }, { "msg_contents": "> \n> On Sun, 4 Jan 1998, Bruce Momjian wrote:\n> \n> > > \tNo, don't make it a run-time or auto-detect thing, just a compile time\n> > > option. By default, leave it at 8192, since \"that's the way its always been\"...\n> > > but if we are justifying it based on disk block size, its 2x the disk block \n> > > size that my system is setup for. What's the difference between that and making\n> > > it 3x or 4x? Or, hell, would I get a performance increase if I brought it\n> > > down to 4096, which is what my actually disk block size is?\n> > > \n> > > \tSo, what we would really be doing is setting the default to 8192, but give\n> > > the installer the opportunity (with a caveat that this value should be a multiple\n> > > of default file system block size for optimal performance) to increase it as they\n> > > see fit.\n> > \n> > I assume you changed the default, becuase the BSD44 default is 8k\n> > blocks, with 1k fragments.\n> \n> \tGood question, I don't know. What does BSDi have it set at? Linux? NetBSD?\n> \n> \tI just checked our sys/param.h file under Solaris 2.5.1, and it doesn't\n> seem to define a DEFAULT, but a MAXSIZE of 8192...oops, newfs defines the default\n> there for 8192 also\n> \n> > I don't think there is any 'performance' improvement with making it\n> > greater than the file system block size.\n> \n> \tNo no...you missed the point. If we are saying that max tuple size is 8k\n> because of block size of the file system, under FreeBSD, the tuple size is 2x\n> the block size of the file system. So, if there a performance decrease because\n> of that...on modern OSs, how much does that even matter anymore? The 8192 that\n> we have current set, that's probably still from the original Postgres4.2 system\n> that was written in which decade? :)\n\nI see, we could increase it and it probably would not matter much.\n\n-- \nBruce Momjian\nmaillist@candle.pha.pa.us\n", "msg_date": "Sun, 4 Jan 1998 14:39:30 -0500 (EST)", "msg_from": "Bruce Momjian <maillist@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] include/config.h FOLLOWUP" } ]
[ { "msg_contents": "On Sun, 4 Jan 1998, Stan Brown wrote:\n\n> >\n> >\n> >I've just created a *very* simple script that creates a snapshot\n> >based on the current source tree. Nothing at all fancy, its just\n> >meant to give those without CVSup access to something to test and\n> >work with.\n> >\n> >It will get regenerated every Friday/Saturday night via cron\n> >\n> \n> \tAh. Mark, a bit more info please. Where will I find these snapshots.\n\n\tftp.postgresql.org/pub\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sun, 4 Jan 1998 17:51:57 -0400 (AST)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] New Snapshot(s)" } ]
[ { "msg_contents": "\nCan someone comment on this? For an SQL server in Canada, should we be using\nEuropean format? *raised eyebrow* I'm currently using the default (US) format\n\n\n------------------------------------\n\n> \tmmddyy is proper ISO/SQL format for north america...we went through a major\n> discussion as to the differences, because European dates are *totally* different :(\n\nYes canada is supposed to be the same as europe being date/month/year\nas in small > large while americans go month day year which is no\norder at all...\n\n", "msg_date": "Sun, 4 Jan 1998 20:28:10 -0400 (AST)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "date format: Canada same as European or US?" }, { "msg_contents": "On Sun, 4 Jan 1998, The Hermit Hacker wrote:\n\n> \n> Can someone comment on this? For an SQL server in Canada, should we be using\n> European format? *raised eyebrow* I'm currently using the default (US) format\n\nYes - Canada follows european dates, spelling, and measurement system.\nI guess it's a matter of choice though... I think how the US spell\n\"colour\" is weird for instance.... and reading their dates is a pain.\n(also I prefer tea with a little bit of milk and sugar... haven't a clue\nwhere that's from :)\n\nSo how do I setup postgres for Canada? *curious?*\n\n> ------------------------------------\n> \n> > \tmmddyy is proper ISO/SQL format for north america...we went through a major\n> > discussion as to the differences, because European dates are *totally* different :(\n> \n> Yes canada is supposed to be the same as europe being date/month/year\n> as in small > large while americans go month day year which is no\n> order at all...\n> \n\nG'day, eh? :)\n\t- Teunis ... trying to catch up on ~500 backlogged messages\n\t\t\there...\n\n", "msg_date": "Thu, 15 Jan 1998 15:45:33 -0700 (MST)", "msg_from": "teunis <teunis@mauve.computersupportcentre.com>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] date format: Canada same as European or US?" } ]
[ { "msg_contents": "> \n> So, is it lp_len or lp_offset that can be reduced by 2? I want to \n> experiment with this...\n> \n\nNeither...try grep'ing around for uses of lp_flags. I dug into this\nlast Dec/Jan...check the hackers digests from that time for any\nrelevent info. At that time, only two bits in lp_flags were in use.\nDon't know if any more are taken now or not.\n\nBoth lp_len and lp_offset should be the same, so if you take four bits\nfrom lp_flags (and give two apiece to lp_len & lp_offset), that would\nget you to a block size of 32k.\n\nNow that there're timely src snapshots available, I'm going to try to\nget back into coding (assuming the aix port still works. :)\n\ndarrenk\n", "msg_date": "Sun, 4 Jan 1998 20:45:30 -0500", "msg_from": "darrenk@insightdist.com (Darren King)", "msg_from_op": true, "msg_subject": "Re: [HACKERS] config.h/Followup FOLLOWUP" } ]
[ { "msg_contents": "I have added NOT NULL and DEFAULT indications to \\d:\n\ntest=> \\d testz\n\nTable = testz\n+----------------------------------+----------------------------------+-------+\n| Field | Type | Length|\n+----------------------------------+----------------------------------+-------+\n| x | int4 not null default '4' | 4 |\n+----------------------------------+----------------------------------+-------+\n\nSome people have asked for this on the questions list.\n\n-- \nBruce Momjian\nmaillist@candle.pha.pa.us\n", "msg_date": "Sun, 4 Jan 1998 21:14:37 -0500 (EST)", "msg_from": "Bruce Momjian <maillist@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "new \\d information" } ]
[ { "msg_contents": "Integration wrote:\n> \n> ps. why not allow for larger tuples in general? Do we take a speed hit?\n\nUsing large blocks is bad for performance: by increasing block size\nyou automatically decrease number of blocks in shared buffer pool -\nthis is bad for index scans and in multi-user environment!\nJust remember that Informix (and others) use 2K blocks.\n(Actually, I would like to have smaller blocks, but postgres lives\nover file system...)\n\nAs for having big tuples - someone said about multi-representation\nfeature of Illustra (automatically storing of big fields outside\nof tuple itself - in blobs, large objects, ...): looks very nice.\n\nVadim\n", "msg_date": "Mon, 05 Jan 1998 11:14:49 +0700", "msg_from": "\"Vadim B. Mikheev\" <vadim@sable.krasnoyarsk.su>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] include/config.h FOLLOWUP" }, { "msg_contents": "On Mon, 5 Jan 1998, Vadim B. Mikheev wrote:\n\n> Just remember that Informix (and others) use 2K blocks.\n\n\tSo we're 4x what the commercial ones are as of right now? \n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Mon, 5 Jan 1998 00:21:08 -0400 (AST)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] include/config.h FOLLOWUP" }, { "msg_contents": "> \n> On Mon, 5 Jan 1998, Vadim B. Mikheev wrote:\n> \n> > Just remember that Informix (and others) use 2K blocks.\n> \n> \tSo we're 4x what the commercial ones are as of right now? \n\nThat is because they do not use the file system, so they try to match\nthe raw disk block sizes, while we try to match the file system size.\n\n-- \nBruce Momjian\nmaillist@candle.pha.pa.us\n", "msg_date": "Sun, 4 Jan 1998 23:28:46 -0500 (EST)", "msg_from": "Bruce Momjian <maillist@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] include/config.h FOLLOWUP" }, { "msg_contents": "On Sun, 4 Jan 1998, Bruce Momjian wrote:\n\n> > \n> > On Mon, 5 Jan 1998, Vadim B. Mikheev wrote:\n> > \n> > > Just remember that Informix (and others) use 2K blocks.\n> > \n> > \tSo we're 4x what the commercial ones are as of right now? \n> \n> That is because they do not use the file system, so they try to match\n> the raw disk block sizes, while we try to match the file system size.\n\n\tIrrelevant to my question...our tuples...are they 4x the size of the\ncommercial vendors, or is Vadim talking about something altogether different?\n\n\tIf we are 4x their size, then I think this whole discussion is a joke since\nwe are already *way* better then \"the others\"\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Mon, 5 Jan 1998 00:54:25 -0400 (AST)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] include/config.h FOLLOWUP" }, { "msg_contents": "> \n> On Sun, 4 Jan 1998, Bruce Momjian wrote:\n> \n> > > \n> > > On Mon, 5 Jan 1998, Vadim B. Mikheev wrote:\n> > > \n> > > > Just remember that Informix (and others) use 2K blocks.\n> > > \n> > > \tSo we're 4x what the commercial ones are as of right now? 
\n> > \n> > That is because they do not use the file system, so they try to match\n> > the raw disk block sizes, while we try to match the file system size.\n> \n> \tIrrelevant to my question...our tuples...are they 4x the size of the\n> commercial vendors, or is Vadim talking about something altogether different?\n> \n> \tIf we are 4x their size, then I think this whole discussion is a joke since\n> we are already *way* better then \"the others\"\n\nThat's a good question. What is the maximum tuple size for Informix or\nOracle tuples?\n\n-- \nBruce Momjian\nmaillist@candle.pha.pa.us\n", "msg_date": "Mon, 5 Jan 1998 00:08:41 -0500 (EST)", "msg_from": "Bruce Momjian <maillist@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] include/config.h FOLLOWUP" } ]
[ { "msg_contents": "> On Mon, 5 Jan 1998, Vadim B. Mikheev wrote:\n> \n> > Just remember that Informix (and others) use 2K blocks.\n> \n> \tSo we're 4x what the commercial ones are as of right now? \n> \n\nDate: Sat, 14 Dec 1996 17:29:53 -0500\nFrom: aixssd!darrenk (Darren King)\nTo: abs.net!postgreSQL.org!hackers\nSubject: [HACKERS] The 8k block size.\n\n--- snip ---\n\n(All of this is taken from their respective web site docs)\n\n block size max tuple size\nIBM DB2 4K 4005 bytes\nSybase 2K 2016 bytes\nInformix 2K 32767 bytes\nOracle (left my Oracle books at home...oops)\n\n--- snip ---\n\nDarren darrenk@insightdist.com\n\n\n", "msg_date": "Sun, 4 Jan 1998 23:23:37 -0500", "msg_from": "darrenk@insightdist.com (Darren King)", "msg_from_op": true, "msg_subject": "Re: [HACKERS] include/config.h FOLLOWUP" }, { "msg_contents": "> \n> > On Mon, 5 Jan 1998, Vadim B. Mikheev wrote:\n> > \n> > > Just remember that Informix (and others) use 2K blocks.\n> > \n> > \tSo we're 4x what the commercial ones are as of right now? \n> > \n> \n> (All of this is taken from their respective web site docs)\n> \n> block size max tuple size\n> IBM DB2 4K 4005 bytes\n> Sybase 2K 2016 bytes\n> Informix 2K 32767 bytes\n> Oracle (left my Oracle books at home...oops)\n\nWow, I guess we are not as bad as I thought. If Peter gets large\nobjects working properly, we can close this issue.\n\n-- \nBruce Momjian\nmaillist@candle.pha.pa.us\n", "msg_date": "Mon, 5 Jan 1998 08:13:49 -0500 (EST)", "msg_from": "Bruce Momjian <maillist@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] include/config.h FOLLOWUP" } ]
[ { "msg_contents": "I have changed the WARN messages to ERROR and ABORT messages.\n\nI wanted to differentiate between errors the user caused, and are\nnormal, like mistyped field names, and more serious errors coming from\nthe backend routines.\n\nI have made all the elog's in the parser/ directory as ERRORS. All the\nothers are ABORT.\n\nDoes someone want to review all the elog's and submit a patch changing\nABORT to ERROR as appropriate?\n\n-- \nBruce Momjian\nmaillist@candle.pha.pa.us\n", "msg_date": "Sun, 4 Jan 1998 23:35:04 -0500 (EST)", "msg_from": "Bruce Momjian <maillist@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "ERROR/ABORT message" } ]
[ { "msg_contents": "I was thinking about subselects, and how to attach the two queries.\n\nWhat if the subquery makes a range table entry in the outer query, and\nthe query is set up like the UNION queries where we put the scans in a\nrow, but in the case we put them over/under each other.\n\nAnd we push a temp table into the catalog cache that represents the\nresult of the subquery, then we could join to it in the outer query as\nthough it was a real table.\n\nAlso, can't we do the correlated subqueries by adding the proper\ntarget/output columns to the subquery, and have the outer query\nreference those columns in the subquery range table entry.\n\nMaybe I can write up a sample of this? Vadim, would this help? Is this\nthe point we are stuck at?\n\n-- \nBruce Momjian\nmaillist@candle.pha.pa.us\n", "msg_date": "Mon, 5 Jan 1998 00:16:49 -0500 (EST)", "msg_from": "Bruce Momjian <maillist@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "subselect" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> I was thinking about subselects, and how to attach the two queries.\n> \n> What if the subquery makes a range table entry in the outer query, and\n> the query is set up like the UNION queries where we put the scans in a\n> row, but in the case we put them over/under each other.\n> \n> And we push a temp table into the catalog cache that represents the\n> result of the subquery, then we could join to it in the outer query as\n> though it was a real table.\n> \n> Also, can't we do the correlated subqueries by adding the proper\n> target/output columns to the subquery, and have the outer query\n> reference those columns in the subquery range table entry.\n\nYes, this is a way to handle subqueries by joining to temp table.\nAfter getting plan we could change temp table access path to\nnode material. On the other hand, it could be useful to let optimizer\nknow about cost of temp table creation (have to think more about it)...\nUnfortunately, not all subqueries can be handled by \"normal\" joins: NOT IN\nis one example of this - joining by <> will give us invalid results.\nSetting special NOT EQUAL flag is not enough: subquery plan must be\nalways inner one in this case. The same for handling ALL modifier.\nNote, that we generaly can't use aggregates here: we can't add MAX to \nsubquery in the case of > ALL (subquery), because of > ALL should return FALSE\nif subquery returns NULL(s) but aggregates don't take NULLs into account.\n\n> \n> Maybe I can write up a sample of this? Vadim, would this help? Is this\n> the point we are stuck at?\n\nPersonally, I was stuck by holydays -:)\nNow I can spend ~ 8 hours ~ each day for development...\n\nVadim\n", "msg_date": "Mon, 05 Jan 1998 19:35:59 +0700", "msg_from": "\"Vadim B. Mikheev\" <vadim@sable.krasnoyarsk.su>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] subselect" }, { "msg_contents": "\nVadim,\n\n Unfortunately, not all subqueries can be handled by \"normal\" joins: NOT IN\n is one example of this - joining by <> will give us invalid results.\n\nWhat is you approach towards this problem?\nI got an idea that one could reverse the order,\nthat is execute the outer first into a temptable\nand delete from that according to the result of the\nsubquery and then return it.\nProbably this is too raw and slow. 
;-)\n\n Personally, I was stuck by holydays -:)\n Now I can spend ~ 8 hours ~ each day for development...\n\nOh, isn't it christmas eve right now in Russia?\n\n best regards,\n-- \n---------------------------------------------\nG�ran Thyni, sysadm, JMS Bildbasen, Kiruna\n\n", "msg_date": "5 Jan 1998 13:28:25 -0000", "msg_from": "Goran Thyni <goran@bildbasen.se>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] subselect" }, { "msg_contents": "> Yes, this is a way to handle subqueries by joining to temp table.\n> After getting plan we could change temp table access path to\n> node material. On the other hand, it could be useful to let optimizer\n> know about cost of temp table creation (have to think more about it)...\n> Unfortunately, not all subqueries can be handled by \"normal\" joins: NOT IN\n> is one example of this - joining by <> will give us invalid results.\n> Setting special NOT EQUAL flag is not enough: subquery plan must be\n> always inner one in this case. The same for handling ALL modifier.\n> Note, that we generaly can't use aggregates here: we can't add MAX to \n> subquery in the case of > ALL (subquery), because of > ALL should return FALSE\n> if subquery returns NULL(s) but aggregates don't take NULLs into account.\n\nOK, here are my ideas. First, I think you have to handle subselects in\nthe outer node because a subquery could have its own subquery. Also, we\nnow have a field in Aggreg to all us to 'usenulls'.\n\nOK, here it is. I recommend we pass the outer and subquery through\nthe parser and optimizer separately.\n\nWe parse the subquery first. If the subquery is not correlated, it\nshould parse fine. If it is correlated, any columns we find in the\nsubquery that are not already in the FROM list, we add the table to the\nsubquery FROM list, and add the referenced column to the target list of\nthe subquery.\n\nWhen we are finished parsing the subquery, we create a catalog cache\nentry for it called 'sub1' and make its fields match the target\nlist of the subquery.\n\nIn the outer query, we add 'sub1' to its target list, and change\nthe subquery reference to point to the new range table. We also add\nWHERE clauses to do any correlated joins.\n\nHere is a simple example:\n\n\tselect *\n\tfrom taba\n\twhere col1 = (select col2\n\t\t from tabb)\n\nThis is not correlated, and the subquery parser easily. We create a\n'sub1' catalog cache entry, and add 'sub1' to the outer query FROM\nclause. We also replace 'col1 = (subquery)' with 'col1 = sub1.col2'.\n\nHere is a more complex correlated subquery:\n\n\tselect *\n\tfrom taba\n\twhere col1 = (select col2\n\t\t from tabb\n\t\t where taba.col3 = tabb.col4)\n\nHere we must add 'taba' to the subquery's FROM list, and add col3 to the\ntarget list of the subquery. 
After we parse the subquery, add 'sub1' to\nthe FROM list of the outer query, change 'col1 = (subquery)' to 'col1 =\nsub1.col2', and add to the outer WHERE clause 'AND taba.col3 = sub1.col3'.\nTHe optimizer will do the correlation for us.\n\nIn the optimizer, we can parse the subquery first, then the outer query,\nand then replace all 'sub1' references in the outer query to use the\nsubquery plan.\n\nI realize making merging the two plans and doing IN and NOT IN is the\nreal challenge, but I hoped this would give us a start.\n\nWhat do you think?\n\n-- \nBruce Momjian\nmaillist@candle.pha.pa.us\n", "msg_date": "Mon, 5 Jan 1998 10:28:48 -0500 (EST)", "msg_from": "Bruce Momjian <maillist@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] subselect" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> > always inner one in this case. The same for handling ALL modifier.\n> > Note, that we generaly can't use aggregates here: we can't add MAX to\n> > subquery in the case of > ALL (subquery), because of > ALL should return FALSE\n> > if subquery returns NULL(s) but aggregates don't take NULLs into account.\n> \n> OK, here are my ideas. First, I think you have to handle subselects in\n> the outer node because a subquery could have its own subquery. Also, we\n\nI hope that this is no matter: if results of subquery (with/without sub-subqueries)\nwill go into temp table then this table will be re-scanned for each outer tuple.\n\n> now have a field in Aggreg to all us to 'usenulls'.\n ^^^^^^^^\n This can't help:\n\nvac=> select * from x;\ny\n-\n1\n2\n3\n <<< this is NULL\n(4 rows)\n\nvac=> select max(y) from x;\nmax\n---\n 3\n\n==> we can't replace \n\nselect * from A where A.a > ALL (select y from x);\n ^^^^^^^^^^^^^^^\n (NULL will be returned and so A.a > ALL is FALSE - this is what \n Sybase does, is it right ?)\nwith\n\nselect * from A where A.a > (select max(y) from x);\n ^^^^^^^^^^^^^^^^^^^^\njust because of we lose knowledge about NULLs here.\n\nAlso, I would like to handle ANY and ALL modifiers for all bool\noperators, either built-in or user-defined, for all data types -\nisn't PostgreSQL OO-like RDBMS -:)\n\n> OK, here it is. I recommend we pass the outer and subquery through\n> the parser and optimizer separately.\n\nI don't like this. I would like to get parse-tree from parser for\nentire query and let optimizer (on upper level) decide how to rewrite\nparse-tree and what plans to produce and how these plans should be\nmerged. Note, that I don't object your methods below, but only where\nto place handling of this. I don't understand why should we add\nnew part to the system which will do optimizer' work (parse-tree --> \nexecution plan) and deal with optimizer nodes. Imho, upper optimizer\nlevel is nice place to do this.\n\n> \n> We parse the subquery first. If the subquery is not correlated, it\n> should parse fine. If it is correlated, any columns we find in the\n> subquery that are not already in the FROM list, we add the table to the\n> subquery FROM list, and add the referenced column to the target list of\n> the subquery.\n> \n> When we are finished parsing the subquery, we create a catalog cache\n> entry for it called 'sub1' and make its fields match the target\n> list of the subquery.\n> \n> In the outer query, we add 'sub1' to its target list, and change\n> the subquery reference to point to the new range table. 
We also add\n> WHERE clauses to do any correlated joins.\n...\n> Here is a more complex correlated subquery:\n> \n> select *\n> from taba\n> where col1 = (select col2\n> from tabb\n> where taba.col3 = tabb.col4)\n> \n> Here we must add 'taba' to the subquery's FROM list, and add col3 to the\n> target list of the subquery. After we parse the subquery, add 'sub1' to\n> the FROM list of the outer query, change 'col1 = (subquery)' to 'col1 =\n> sub1.col2', and add to the outer WHERE clause 'AND taba.col3 = sub1.col3'.\n> THe optimizer will do the correlation for us.\n> \n> In the optimizer, we can parse the subquery first, then the outer query,\n> and then replace all 'sub1' references in the outer query to use the\n> subquery plan.\n> \n> I realize making merging the two plans and doing IN and NOT IN is the\n ^^^^^^^^^^^^^^^^^^^^^\nThis is very easy to do! As I already said we have just change sub1\naccess path (SeqScan of sub1) with SeqScan of Material node with \nsubquery plan.\n\n> real challenge, but I hoped this would give us a start.\n\nDecision about how to record subquery stuff in to parse-tree\nwould be very good start -:)\n\nBTW, note that for _expression_ subqueries (which are introduced without\nIN, EXISTS, ALL, ANY - this follows Sybase' naming) - as in your examples - \nwe have to check that subquery returns single tuple...\n\nVadim\n", "msg_date": "Tue, 06 Jan 1998 02:55:57 +0700", "msg_from": "\"Vadim B. Mikheev\" <vadim@sable.krasnoyarsk.su>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] subselect" }, { "msg_contents": "> \n> Bruce Momjian wrote:\n> > \n> > > always inner one in this case. The same for handling ALL modifier.\n> > > Note, that we generaly can't use aggregates here: we can't add MAX to\n> > > subquery in the case of > ALL (subquery), because of > ALL should return FALSE\n> > > if subquery returns NULL(s) but aggregates don't take NULLs into account.\n> > \n> > OK, here are my ideas. First, I think you have to handle subselects in\n> > the outer node because a subquery could have its own subquery. Also, we\n> \n> I hope that this is no matter: if results of subquery (with/without sub-subqueries)\n> will go into temp table then this table will be re-scanned for each outer tuple.\n\nOK, sounds good.\n\n> \n> > now have a field in Aggreg to all us to 'usenulls'.\n> ^^^^^^^^\n> This can't help:\n> \n> vac=> select * from x;\n> y\n> -\n> 1\n> 2\n> 3\n> <<< this is NULL\n> (4 rows)\n> \n> vac=> select max(y) from x;\n> max\n> ---\n> 3\n> \n> ==> we can't replace \n> \n> select * from A where A.a > ALL (select y from x);\n> ^^^^^^^^^^^^^^^\n> (NULL will be returned and so A.a > ALL is FALSE - this is what \n> Sybase does, is it right ?)\n> with\n> \n> select * from A where A.a > (select max(y) from x);\n\nI agree. I don't see how we can ever replace an '> ALL (y)' with '> ALL\n(max(y))'. This sounds too detailed for the system to deal with. If\nthey do ALL, we have to implement ALL, without any use of aggregates to\ntry and second-guess their request.\n\n> ^^^^^^^^^^^^^^^^^^^^\n> just because of we lose knowledge about NULLs here.\n\nYep. And it is too much work. If they want to replace the query with\nmax(), let them do it, if not, we do what they requested.\n\n> \n> Also, I would like to handle ANY and ALL modifiers for all bool\n> operators, either built-in or user-defined, for all data types -\n> isn't PostgreSQL OO-like RDBMS -:)\n\nOK, sounds good to me.\n\n> \n> > OK, here it is. 
I recommend we pass the outer and subquery through\n> > the parser and optimizer separately.\n> \n> I don't like this. I would like to get parse-tree from parser for\n> entire query and let optimizer (on upper level) decide how to rewrite\n> parse-tree and what plans to produce and how these plans should be\n> merged. Note, that I don't object your methods below, but only where\n> to place handling of this. I don't understand why should we add\n> new part to the system which will do optimizer' work (parse-tree --> \n> execution plan) and deal with optimizer nodes. Imho, upper optimizer\n> level is nice place to do this.\n\nI am confused. Do you want one flat query and want to pass the whole\nthing into the optimizer? That brings up some questions:\n\nHow do we want to do this? Well, we could easily have the two queries\nshare the same range table by making the subquery have the proper\nalias's/refnames.\n\nHowever, how do we represent the join and correlated joins to the\nsubquery. We can do the correlated stuff by having the outer columns\nreference the inner queries range table entries that we added, but how\nto represent the subquery WHERE clause, and the join of the outer to\ninner queries?\n\nIn:\n\n\tselect *\n\tfrom taba\n\twhere col1 = (select col2\n\t\t from tabb\n\t\t where taba.col3 = tabb.col4)\n\nHow do we represent join of col1 to tabb.col2? I guess we have a new\nnode type for IN and NOT IN and ANY, and we put that operator in the\nparse grammar.\n\nSo I assume you are suggesting we flatten the query, to merge the range\ntables of the two queries, and the WHERE clauses of the two queries, add\nthe proper WHERE conditionals to join the two range tables for\ncorrelated queries, and have the IN, NOT IN, ALL nodes in the WHERE\nclause, and have the optimizer figure out how to handle the issues.\n\nHow do we handle aggregates in the subquery? Currently the optimizer\ndoes those last, but we must put them above the materialized node. And\nif we merge the outer and subquery to produce one flat query, how do we\ntell the optimizer to make sure the aggregate is in a node that can be\nmaterialized?\n\n---------------------------------------------------------------------------\n\nIf you don't want to flatten the outer query and subquery into one\nquery, I am really confused. There certainly will be stuff that needs\nto be put into the upper optimizer, to properly handle the two plans and\nmake sure they are merged into one plan.\n\nAre you suggesting we put the IN node in the upper optimizer, and the\ncorrelation stuff. That sounds good.\n\n> > I realize making merging the two plans and doing IN and NOT IN is the\n> ^^^^^^^^^^^^^^^^^^^^^\n> This is very easy to do! As I already said we have just change sub1\n> access path (SeqScan of sub1) with SeqScan of Material node with \n> subquery plan.\n\nGood. Makes sense. 
This is what I was suggesting.\n\n> \n> > real challenge, but I hoped this would give us a start.\n> \n> Decision about how to record subquery stuff in to parse-tree\n> would be very good start -:)\n> \n> BTW, note that for _expression_ subqueries (which are introduced without\n> IN, EXISTS, ALL, ANY - this follows Sybase' naming) - as in your examples - \n> we have to check that subquery returns single tuple...\n\nYes, I realize this.\n\n-- \nBruce Momjian\nmaillist@candle.pha.pa.us\n", "msg_date": "Mon, 5 Jan 1998 15:51:26 -0500 (EST)", "msg_from": "Bruce Momjian <maillist@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] subselect" }, { "msg_contents": "> > I am confused. Do you want one flat query and want to pass the whole\n> > thing into the optimizer? That brings up some questions:\n> \n> No. I just want to follow Tom's way: I would like to see new\n> SubSelect node as shortened version of struct Query (or use\n> Query structure for each subquery - no matter for me), some \n> subquery-related stuff added to Query (and SubSelect) to help\n> optimizer to start, and see\n\nOK, so you want the subquery to actually be INSIDE the outer query\nexpression. Do they share a common range table? If they don't, we\ncould very easily just fly through when processing the WHERE clause, and\nstart a new query using a new query structure for the subquery. Believe\nme, you don't want a separate SubQuery-type, just re-use Query for it. \nIt allows you to call all the normal query stuff with a consistent\nstructure.\n\nThe parser will need to know it is in a subquery, so it can add the\nproper target columns to the subquery, or are you going to do that in\nthe optimizer. You can do it in the optimizer, and join the range table\nreferences there too.\n\n> \n> typedef struct A_Expr\n> {\n> NodeTag type;\n> int oper; /* type of operation\n> * {OP,OR,AND,NOT,ISNULL,NOTNULL} */\n> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n> IN, NOT IN, ANY, ALL, EXISTS here,\n> \n> char *opname; /* name of operator/function */\n> Node *lexpr; /* left argument */\n> Node *rexpr; /* right argument */\n> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n> and SubSelect (Query) here (as possible case).\n> \n> One thought to follow this way: RULEs (and so - VIEWs) are handled by using\n> Query - how else can we implement VIEWs on selects with subqueries ?\n\nViews are stored as nodeout structures, and are merged into the query's\nfrom list, target list, and where clause. I am working out\nreadfunc,outfunc now to make sure they are up-to-date with all the\ncurrent fields.\n\n> \n> BTW, is\n> \n> select * from A where (select TRUE from B);\n> \n> valid syntax ?\n\nI don't think so.\n\n-- \nBruce Momjian\nmaillist@candle.pha.pa.us\n", "msg_date": "Mon, 5 Jan 1998 17:16:40 -0500 (EST)", "msg_from": "Bruce Momjian <maillist@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] subselect" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> > > OK, here it is. I recommend we pass the outer and subquery through\n> > > the parser and optimizer separately.\n> >\n> > I don't like this. I would like to get parse-tree from parser for\n> > entire query and let optimizer (on upper level) decide how to rewrite\n> > parse-tree and what plans to produce and how these plans should be\n> > merged. Note, that I don't object your methods below, but only where\n> > to place handling of this. 
I don't understand why should we add\n> > new part to the system which will do optimizer' work (parse-tree -->\n> > execution plan) and deal with optimizer nodes. Imho, upper optimizer\n> > level is nice place to do this.\n> \n> I am confused. Do you want one flat query and want to pass the whole\n> thing into the optimizer? That brings up some questions:\n\nNo. I just want to follow Tom's way: I would like to see new\nSubSelect node as shortened version of struct Query (or use\nQuery structure for each subquery - no matter for me), some \nsubquery-related stuff added to Query (and SubSelect) to help\noptimizer to start, and see\n\ntypedef struct A_Expr\n{\n NodeTag type;\n int oper; /* type of operation\n * {OP,OR,AND,NOT,ISNULL,NOTNULL} */\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n IN, NOT IN, ANY, ALL, EXISTS here,\n\n char *opname; /* name of operator/function */\n Node *lexpr; /* left argument */\n Node *rexpr; /* right argument */\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n and SubSelect (Query) here (as possible case).\n\nOne thought to follow this way: RULEs (and so - VIEWs) are handled by using\nQuery - how else can we implement VIEWs on selects with subqueries ?\n\nBTW, is\n\nselect * from A where (select TRUE from B);\n\nvalid syntax ?\n\nVadim\n", "msg_date": "Tue, 06 Jan 1998 05:18:11 +0700", "msg_from": "\"Vadim B. Mikheev\" <vadim@sable.krasnoyarsk.su>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] subselect" }, { "msg_contents": "Goran Thyni wrote:\n> \n> Vadim,\n> \n> Unfortunately, not all subqueries can be handled by \"normal\" joins: NOT IN\n> is one example of this - joining by <> will give us invalid results.\n> \n> What is you approach towards this problem?\n\nActually, this is problem of ALL modifier (NOT IN is _not_equal_ ALL)\nand so, we have to have not just NOT EQUAL flag but some ALL node\nwith modified operator.\n\nAfter that, one way is put subquery into inner plan of an join node\nto be sure that for an outer tuple all corresponding subquery tuples\nwill be tested with modified operator (this will require either\nchanging code of all join nodes or addition of new plan type - we'll see)\nand another way is ... suggested by you:\n\n> I got an idea that one could reverse the order,\n> that is execute the outer first into a temptable\n> and delete from that according to the result of the\n> subquery and then return it.\n> Probably this is too raw and slow. ;-)\n\nThis will be faster in some cases (when subquery returns many results\nand there are \"not so many\" results from outer query) - thanks for idea!\n\n> \n> Personally, I was stuck by holydays -:)\n> Now I can spend ~ 8 hours ~ each day for development...\n> \n> Oh, isn't it christmas eve right now in Russia?\n\nDue to historic reasons New Year is mu-u-u-uch popular\nholiday in Russia -:)\n\nVadim\n", "msg_date": "Tue, 06 Jan 1998 05:48:58 +0700", "msg_from": "\"Vadim B. Mikheev\" <vadim@sable.krasnoyarsk.su>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] subselect" }, { "msg_contents": "> No. I just want to follow Tom's way: I would like to see new\n> SubSelect node as shortened version of struct Query (or use\n> Query structure for each subquery - no matter for me), some \n> subquery-related stuff added to Query (and SubSelect) to help\n> optimizer to start, and see\n\nThis is fine. I thought it would be too much work for the optimizer to\npass a subquery inside the WHERE clause, but if you think you can handle\nit, great. 
I think it is more likely views will work with subqueries if\nwe do that too, and it is cleaner.\n\nI recommend adding a boolean flag to the rangetable entries to show if\nthe range was added automatically, meaning it refers to an outer query. \nAlso, we will need a flag in the Query structure to tell if it is a\nsubquery, and a pointer to the parent's range table to resolve\nreferences like:\n\n\tselect *\n\tfrom taba\n\twhere col1 = (select col2\n\t\t from tabb\n\t\t where col3 = tabb.col4)\n\nIn this case, the proper table for col3 can not be determined from the\nsubquery range table, so we must search the parent range table to add\nthe proper entry to the child. If we add target entries at the same\ntime in the parser, we should add a flag to the targetentry structure to\nidentify it as an entry that will have to have additional WHERE clauses\nadded to the parent to restrict the join, or we could add those entries\nin the parser, but at the time we are processing the subquery, we are\nalready inside the WHERE clause, so we must be careful.\n\n-- \nBruce Momjian\nmaillist@candle.pha.pa.us\n", "msg_date": "Mon, 5 Jan 1998 18:02:08 -0500 (EST)", "msg_from": "Bruce Momjian <maillist@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] subselect" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> > > I am confused. Do you want one flat query and want to pass the whole\n> > > thing into the optimizer? That brings up some questions:\n> >\n> > No. I just want to follow Tom's way: I would like to see new\n> > SubSelect node as shortened version of struct Query (or use\n> > Query structure for each subquery - no matter for me), some\n> > subquery-related stuff added to Query (and SubSelect) to help\n> > optimizer to start, and see\n> \n> OK, so you want the subquery to actually be INSIDE the outer query\n> expression. Do they share a common range table? If they don't, we\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\nNo.\n\n> could very easily just fly through when processing the WHERE clause, and\n> start a new query using a new query structure for the subquery. Believe\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n... and filling some subquery-related stuff in upper query structure -\nstill don't know what exactly this could be -:)\n\n> me, you don't want a separate SubQuery-type, just re-use Query for it.\n> It allows you to call all the normal query stuff with a consistent\n> structure.\n\nNo objections.\n\n> \n> The parser will need to know it is in a subquery, so it can add the\n> proper target columns to the subquery, or are you going to do that in\n\nI don't think that we need in it, but list of correlation clauses\ncould be good thing - all in all parser has to check all column \nreferences...\n\n> the optimizer. 
You can do it in the optimizer, and join the range table\n> references there too.\n\nYes.\n\n> > typedef struct A_Expr\n> > {\n> > NodeTag type;\n> > int oper; /* type of operation\n> > * {OP,OR,AND,NOT,ISNULL,NOTNULL} */\n> > ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n> > IN, NOT IN, ANY, ALL, EXISTS here,\n> >\n> > char *opname; /* name of operator/function */\n> > Node *lexpr; /* left argument */\n> > Node *rexpr; /* right argument */\n> > ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n> > and SubSelect (Query) here (as possible case).\n> >\n> > One thought to follow this way: RULEs (and so - VIEWs) are handled by using\n> > Query - how else can we implement VIEWs on selects with subqueries ?\n> \n> Views are stored as nodeout structures, and are merged into the query's\n> from list, target list, and where clause. I am working out\n> readfunc,outfunc now to make sure they are up-to-date with all the\n> current fields.\n\nNice! This stuff was out-of-date for too long time.\n\n> > BTW, is\n> >\n> > select * from A where (select TRUE from B);\n> >\n> > valid syntax ?\n> \n> I don't think so.\n\nAnd so, *rexpr can be of Query type only for oper \"in\" OP, IN, NOT IN,\nANY, ALL, EXISTS - well.\n\n(Time to sleep -:)\n\nVadim\n", "msg_date": "Tue, 06 Jan 1998 06:09:56 +0700", "msg_from": "\"Vadim B. Mikheev\" <vadim@sable.krasnoyarsk.su>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] subselect" }, { "msg_contents": "> BTW, note that for _expression_ subqueries (which are introduced without\n> IN, EXISTS, ALL, ANY - this follows Sybase' naming) - as in your examples -\n> we have to check that subquery returns single tuple...\n\nIt might be nice to have a tuple-counting operation or query node (is this the right\nterminology?) which could be used to help implement EXISTS. It might help to\nre-implement the count(*) function also.\n\n - Tom\n\n\n", "msg_date": "Tue, 06 Jan 1998 04:50:12 +0000", "msg_from": "\"Thomas G. Lockhart\" <lockhart@alumni.caltech.edu>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] subselect" }, { "msg_contents": "> \n> > BTW, note that for _expression_ subqueries (which are introduced without\n> > IN, EXISTS, ALL, ANY - this follows Sybase' naming) - as in your examples -\n> > we have to check that subquery returns single tuple...\n> \n> It might be nice to have a tuple-counting operation or query node (is this the right\n> terminology?) which could be used to help implement EXISTS. It might help to\n> re-implement the count(*) function also.\n\nIn the new code, count(*) picks a column from one of the tables to count\non.\n\n\n-- \nBruce Momjian\nmaillist@candle.pha.pa.us\n", "msg_date": "Tue, 6 Jan 1998 00:06:39 -0500 (EST)", "msg_from": "Bruce Momjian <maillist@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] subselect" } ]
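
To make the parse-tree shape discussed in the thread above a little more concrete, here is a rough sketch of an expression node that carries a subselect. It is only an illustration of the idea being debated, not the actual PostgreSQL headers: the names SubSelectExpr, SubOperType and the individual fields are invented for the example, while NodeTag, Node and Query are the existing node types the thread proposes to reuse.

    /* Hypothetical sketch: an A_Expr-like node whose right-hand side may
     * be a whole (sub)Query, for oper values IN, NOT IN, ANY, ALL, EXISTS,
     * or a bare expression subquery (which must return a single tuple).
     * Correlated column references would be resolved later, in the upper
     * optimizer, against the parent query's range table. */
    typedef enum SubOperType
    {
        SUBOP_EXPR,                 /* bare expression subquery          */
        SUBOP_IN, SUBOP_NOT_IN,
        SUBOP_ANY, SUBOP_ALL, SUBOP_EXISTS
    } SubOperType;

    typedef struct SubSelectExpr
    {
        NodeTag       type;
        SubOperType   subop;        /* how the subquery is introduced    */
        char         *opname;       /* comparison operator, e.g. "="     */
        Node         *lexpr;        /* outer-query expression            */
        struct Query *subquery;     /* the subselect, reusing Query      */
    } SubSelectExpr;
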
[ { "msg_contents": "Hi all,\n\nthe current snapshot reserves 'char' as keyword.\nCan anyone tell me the reason ?\n\nthanks\nEdmund\n-- \nEdmund Mergl mail: E.Mergl@bawue.de\nIm Haldenhau 9 http://www.bawue.de/~mergl\nD 70565 Stuttgart fon: +49 711 747503\nGermany gsm: +49 171 2645325\n", "msg_date": "Mon, 05 Jan 1998 07:07:37 +0100", "msg_from": "Edmund Mergl <E.Mergl@bawue.de>", "msg_from_op": true, "msg_subject": "why is char now a keyword" }, { "msg_contents": "Edmund Mergl wrote:\n\n> Hi all,\n>\n> the current snapshot reserves 'char' as keyword.\n> Can anyone tell me the reason ?\n\nAt least in part so that we can do the explicit parsing required to\nsupport SQL92 syntax elements such as \"character sets\" and \"collating\nsequences\". I'd like to get support for multiple character sets, but am\nimmersed in documentation so will not likely get to this for the next\nrelease. Have you encountered a problem with an existing database? In\nwhat context??\n\n - Tom\n\n", "msg_date": "Mon, 05 Jan 1998 07:56:07 +0000", "msg_from": "\"Thomas G. Lockhart\" <lockhart@alumni.caltech.edu>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] why is char now a keyword" } ]
[ { "msg_contents": "> > \n> > 2. This is more like a C issue rather than aix-specific. The aix compiler complains\n> > about assigning the (void)NULL to isnull in the heap_getattr macro. Changing the\n> > (void) to (bool) works and seems like it should be (bool) to match the type of isnull,\n> > shouldn't it?\n> > \n> > *** include/access/heapam.h.org\tSun Jan 4 23:52:05 1998\n> > --- include/access/heapam.h\tSun Jan 4 23:52:11 1998\n> > ***************\n> > *** 101,110 ****\n> > #define heap_getattr(tup, b, attnum, tupleDesc, isnull) \\\n> > \t(AssertMacro((tup) != NULL) ? \\\n> > \t\t((attnum) > (int) (tup)->t_natts) ? \\\n> > ! \t\t\t(((isnull) ? (*(isnull) = true) : (void)NULL), (Datum)NULL) : \\\n> > \t\t((attnum) > 0) ? \\\n> > \t\t\tfastgetattr((tup), (attnum), (tupleDesc), (isnull)) : \\\n> > ! \t\t(((isnull) ? (*(isnull) = false) : (void)NULL), heap_getsysattr((tup), (b), (attnum))) : \\\n> > \t(Datum)NULL)\n> > \n> > extern HeapAccessStatistics heap_access_stats;\t/* in stats.c */\n> > --- 101,110 ----\n> > #define heap_getattr(tup, b, attnum, tupleDesc, isnull) \\\n> > \t(AssertMacro((tup) != NULL) ? \\\n> > \t\t((attnum) > (int) (tup)->t_natts) ? \\\n> > ! \t\t\t(((isnull) ? (*(isnull) = true) : (bool)NULL), (Datum)NULL) : \\\n> > \t\t((attnum) > 0) ? \\\n> > \t\t\tfastgetattr((tup), (attnum), (tupleDesc), (isnull)) : \\\n> > ! \t\t(((isnull) ? (*(isnull) = false) : (bool)NULL), heap_getsysattr((tup), (b), (attnum))) : \\\n> > \t(Datum)NULL)\n> > \n> > extern HeapAccessStatistics heap_access_stats;\t/* in stats.c */\n> \n> We made if void so that we would stop getting gcc warnings about 'unused\n> left-hand side of conditional' messages. Does aix complain or stop. If\n> it just complains, I think we have to leave it alone, because everyone\n> else will complain about bool.\n\nBut this is then trying to assign a (void)NULL to isnull, which is a bool (really a char).\nIMHO gcc should complain. Aix gives a severe error since the types don't match.\n\nMaybe better to have a warning than fix it by causing an error. Gcc just happens to be in\na forgiving mood. What does the C standard say about casting (void) ptrs to other types?\n\nWhy not make this a _little_ more legible and compiler-friendly by making it into an\nif-then-else block? Is the ?: operator really saving any ops?\n\n---------\n\nRe: the StrNCpy macro...\n\nThe aix compiler complains about trying to assign a (void)NULL to (len > 0). Can this be\nfixed with another set of parens separating the returned dest from the ?: operator?\n\nLike...\n\n(((strncpy((dst),(src),(len)),(len > 0)) ? *((dst)+(len)-1)='\\0' : ((char)NULL)),(dst)))\n ^ ^\n \nThis gets the return value back doesn't it? And changing to a (char)NULL makes the\ncompiler happy again too. Is this acceptable?\n\ndarrenk\n", "msg_date": "Mon, 5 Jan 1998 09:13:30 -0500", "msg_from": "darrenk@insightdist.com (Darren King)", "msg_from_op": true, "msg_subject": "Re: [PORTS] (void)NULL in macros and aix" } ]
[ { "msg_contents": "\nOn Tue, 30 Dec 1997, Bruce Momjian wrote:\n\n> The following was sent to me. Does it fit our needs anywhere? Let's\n> discuss it.\n> > \n> > I wrote an indexed file system some time ago in ANSI C. I've compiled\n> > and used it on several platforms, but have never aspired to do very much\n> > with it. I'm not really the marketing type, and I don't want to compete\n> > with existing standards.\n> > \n> > I wonder if it could make any contribution to the PostgreSQL effort? \n> > Here are the pluses and minuses:\n\n[snip]\n\n> > Maybe it could do strange sorts or handle BLOBs. If you think it could\n> > make a contribution I'd be willing to learn and work on the appropriate\n> > code. You're welcome to a copy of it or any additional information you\n> > might want.\n\nBruce, while reading another thread about the tuple size, could this be\nutilised in building some form of MEMO field, getting round the 8k limit?\n\nWe have the existing large objects that are ideal for binary data, but for\ntextual data this could be of use. From the client's point of view, this\nis stored with the tuple, but in reality, the content is stored in a large\nobject.\n\n\nAlso, quite some time ago, someone did ask on how to do searches on large\nobjects (consisting of large unicode documents). The existing stuff\ndoesn't support this, but could it be done with this code?\n\nAnyhow, just a thought.\n\n-- \nPeter T Mount petermount@earthling.net or pmount@maidast.demon.co.uk\nMain Homepage: http://www.demon.co.uk/finder\nWork Homepage: http://www.maidstone.gov.uk Work EMail: peter@maidstone.gov.uk\n\n\n", "msg_date": "Mon, 5 Jan 1998 15:10:35 +0000 (GMT)", "msg_from": "Peter T Mount <psqlhack@maidast.demon.co.uk>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: development" }, { "msg_contents": "> Also, quite some time ago, someone did ask on how to do searches on large\n> objects (consisting of large unicode documents). The existing stuff\n> doesn't support this, but could it be done with this code?\n\nI think this is what Vadim was alluding to when he talked about Illustra\nhaving large objects that looked like columns in a table. Perhaps it\ncould be done easily by defining a function that takes a large object\nname stored in a table, and returns its value.\n\n-- \nBruce Momjian\nmaillist@candle.pha.pa.us\n", "msg_date": "Mon, 5 Jan 1998 10:41:41 -0500 (EST)", "msg_from": "Bruce Momjian <maillist@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: development" } ]
[ { "msg_contents": "Hello all,\n\nWARNING : It's a long mail, but please have patience and read it *all*\n\nI have reached a point in developing PgAccess when I discovered that\nsome functions in libpgtcl are bad implemented and even libpq does not\nhelp me a lot!\n\nWhat's the problem ! Is about working with large queries result, big\ntables with thousand of records that have to be processed in order to\nget a full report for example.\n\nGetting a query result from Tcl/Tk (pg_select function) uses PQexec.\nBut PQexec IS GETTING ALL THE RECORDS IN MEMORY and after that user can\nhandle query results.\nBut what if table has thousand records ? Probably I would need more than\n512 Mb of RAM in order to get a report finished.\n\nViorel Mitache from RENEL Braila, (mviorel@flex.ro, please cc him and me\nbecause we aren't on hacker list) proposed another sollution.\n\nWith some small changes in libpq-fe.h\n\n ( void (* callback)(PGresult *,void *ptr,int stat);\n void *usr_ptr;)\n\nand also in libpq to allow a newly defined function in libpgtcl\n(pg_loop) to initiate a query and then calling back a user defined\nfunction after every record fetched from the connection.\n\nIn order to do this, the connection is 'cloned' and on this new\nconnection the query is issued. For every record fetched, the C callback\nfunction is called, here the Tcl interpreted is invoked for the source\ninside the loop, then memory used by the record is release and the next\nrecord is ready to come.\nMore than that, after processing some records, user can choose to break\nthe loop (using break command in Tcl) that is actually breaking the\nconnection.\n\nWhat we achieve making this patches ?\n\nFirst of all the ability of sequential processing large tables.\nThen increasing performance due to parallel execution of receiving data\non the network and local processing. The backend process on the server\nis filling the communication channel with data and the local task is\nprocessing it as it comes.\nIn the old version, the local task has to wait until *all* data has\ncomed (buffered in memory if it was room enough) and then processing it.\n\nWhat I would ask from you?\n1) First of all, if my needs could be satisfied in other way with\ncurrent functions in libpq of libpgtcl. I can assure you that with\ncurrent libpgtcl is rather impossible. I am not sure if there is another\nmechanism using some subtle functions that I didn't know about them.\n2) Then, if you agree with the idea, to whom we must send more accurate\nthe changes that we would like to make in order to be analysed and\nchecked for further development of Pg.\n3) Is there any other normal mode to tell to the backend not to send any\nmore tuples instead of breaking the connection ?\n4) Even working in C, using PQexec , it's impossible to handle large\nquery results, am I true ?\n\nPlease cc to : mviorel@flex.ro and also teo@flex.ro\n\n-- \nConstantin Teodorescu\nFLEX Consulting Braila, ROMANIA\n", "msg_date": "Mon, 05 Jan 1998 20:43:45 +0200", "msg_from": "Constantin Teodorescu <teo@flex.ro>", "msg_from_op": true, "msg_subject": "I want to change libpq and libpgtcl for better handling of large\n\tquery results" }, { "msg_contents": "> Getting a query result from Tcl/Tk (pg_select function) uses PQexec.\n> But PQexec IS GETTING ALL THE RECORDS IN MEMORY and after that user can\n> handle query results.\n> But what if table has thousand records ? Probably I would need more than\n> 512 Mb of RAM in order to get a report finished.\n\nThis issue has come up before. 
The accepted solution is to open a\ncursor, and fetch whatever records you need. The backend still\ngenerates the full result, but the front end requests the records it\nwants.\n\nDoes that not work in your case?\n\n\n-- \nBruce Momjian\nmaillist@candle.pha.pa.us\n", "msg_date": "Mon, 5 Jan 1998 21:15:10 -0500 (EST)", "msg_from": "Bruce Momjian <maillist@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] I want to change libpq and libpgtcl for better handling\n\tof large query results" }, { "msg_contents": "On Mon, 5 Jan 1998, Constantin Teodorescu wrote:\n\n> In order to do this, the connection is 'cloned' and on this new\n> connection the query is issued. For every record fetched, the C callback\n> function is called, here the Tcl interpreted is invoked for the source\n> inside the loop, then memory used by the record is release and the next\n> record is ready to come.\n> More than that, after processing some records, user can choose to break\n> the loop (using break command in Tcl) that is actually breaking the\n> connection.\n> \n> What we achieve making this patches ?\n> \n> First of all the ability of sequential processing large tables.\n> Then increasing performance due to parallel execution of receiving data\n> on the network and local processing. The backend process on the server\n> is filling the communication channel with data and the local task is\n> processing it as it comes.\n> In the old version, the local task has to wait until *all* data has\n> comed (buffered in memory if it was room enough) and then processing it.\n> \n> What I would ask from you?\n> 1) First of all, if my needs could be satisfied in other way with\n> current functions in libpq of libpgtcl. I can assure you that with\n> current libpgtcl is rather impossible. I am not sure if there is another\n> mechanism using some subtle functions that I didn't know about them.\n\n\tBruce answered this one by asking about cursors...\n\n> 2) Then, if you agree with the idea, to whom we must send more accurate\n> the changes that we would like to make in order to be analysed and\n> checked for further development of Pg.\n\n\tHere, on this mailing list...\n\n\tNow, let's see if I understand what you are thinking of...\n\n\tBasically, by \"cloning\", you are effectively looking at implementing ftp's\nway of dealing with a connection, having one \"control\" channel, and one \"data\"\nchannel, is this right? So that the \"frontend\" has a means of sending a STOP\ncommand to the backend even while the backend is still sending the frontend\nthe data?\n\n\tNow, from reading Bruce's email before reading this, this doesn't get \naround the fact that the backend is still going to have to finish generating\na response to the query before it can send *any* data back, so, as Bruce has\nasked, don't cursors already provide what you are looking for? With cursors,\nas I understand it, you basically tell the backend to send forward X tuples at\na time and after that, if you want to break the connection, you just break \nthe connection. \n\n\tWith what you are proposing (again, if I'm understanding correctly), the\nfrontend would effectively accept X bytes of data (or X tuples) and then it\nwould have an opportunity to send back a STOP over a control channel...\n\n\tOversimplified, I know, but I'm a simple man *grin*\n\nMarc G. 
Fournier \nSystems Administrator @ hub.org \nprimary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Tue, 6 Jan 1998 01:45:58 -0400 (AST)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] I want to change libpq and libpgtcl for better handling\n\tof large query results" }, { "msg_contents": "The Hermit Hacker wrote:\n> \n> > What I would ask from you?\n> > 1) First of all, if my needs could be satisfied in other way with\n> > current functions in libpq of libpgtcl. I can assure you that with\n> > current libpgtcl is rather impossible. I am not sure if there is another\n> > mechanism using some subtle functions that I didn't know about them.\n> \n> Bruce answered this one by asking about cursors...\n\nYes. It's true. I have used cursors for speeding up opening tables in\nPgAccess fetching only the first 200 records from the table.\nBut for a 10 thousand record table I will send over the network 10\nthousand \"FETCH 1 IN CURSOR\" because in a report table I am processing\nrecords one by one.\nThe time for this kind of retrieval would be more than twice as in the\n'callback' mechanism.\n\nIf you think that is better to keep libpq and libpgtcl as they are, then\nI will use cursors.\nBut using the 'callback' method it would increase performance.\n\nI am waiting for the final resolution :-)\n\n> Basically, by \"cloning\", you are effectively looking at implementing ftp's\n> way of dealing with a connection, having one \"control\" channel, and one \"data\"\n> channel, is this right? So that the \"frontend\" has a means of sending a STOP\n> command to the backend even while the backend is still sending the frontend\n> the data?\n\nNot exactly. Looking from Tcl/Tk point of view, the mechanism is\ntransparent. I am using this structure :\n\npg_loop $database \"select * from sometable\" record {\n set something $record(somefield)\n}\n\nBut the new libpgtcl is opening a 'cloned' connection in order to :\n- send the query through it\n- receive the data from it\nI am not able to break the connection using commands send through the\n'original' one. The query is 'stopped' by breaking the connection.\nThat's why we needed another connection. Because there isn't (yet) a\nmechanism to tell the backend to abort transmission of the rest of the\nquery. I understand that the backend is not reading any more the socket\nin order to receive some CANCEL signal from the frontend. So, dropping\nthe rest of the query results isn't possible without a hard break of the\nconnection.\n\n\n-- \nConstantin Teodorescu\nFLEX Consulting Braila, ROMANIA\n", "msg_date": "Tue, 06 Jan 1998 09:32:26 +0200", "msg_from": "Constantin Teodorescu <teo@flex.ro>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] I want to change libpq and libpgtcl for better handling\n\tof large query results" }, { "msg_contents": "As far as I understood, this seems to be another solution to the older\nproblem of speeding up the browser display of large results. The first one\nconsisted on nonblocking exec/blocking fetchtuple in libpq (the patch is\nvery simple). But the main point is that I learned at that time that\nbackend send tuples as soon as it computes them. 
\n\nCan someone give an authorized answer?\n\nOn Tue, 6 Jan 1998, The Hermit Hacker wrote:\n\n> On Mon, 5 Jan 1998, Constantin Teodorescu wrote:\n...\n> \n> \tNow, from reading Bruce's email before reading this, this doesn't get \n> around the fact that the backend is still going to have to finish generating\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n> a response to the query before it can send *any* data back, so, as Bruce has\n...\n> \n> Marc G. Fournier \n> Systems Administrator @ hub.org \n> primary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org \n> \n\nPS. On the other hand, if someone is working on back/front protocol, could\nhe think about how difficult would be to have a async full duplex\nconnection?\n\nCostin Oproiu\n\n", "msg_date": "Tue, 6 Jan 1998 10:39:21 +0200 (EET)", "msg_from": "PostgreSQL <postgres@deuroconsult.ro>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] I want to change libpq and libpgtcl for better handling\n\tof large query results" }, { "msg_contents": "On Mon, 5 Jan 1998, Constantin Teodorescu wrote:\n\n> I have reached a point in developing PgAccess when I discovered that\n> some functions in libpgtcl are bad implemented and even libpq does not\n> help me a lot!\n> \n> What's the problem ! Is about working with large queries result, big\n> tables with thousand of records that have to be processed in order to\n> get a full report for example.\n\nIn the past, I've had a lot of people complaining about the performance\n(or lack of) when handling large results in JDBC.\n\n> Getting a query result from Tcl/Tk (pg_select function) uses PQexec.\n> But PQexec IS GETTING ALL THE RECORDS IN MEMORY and after that user can\n> handle query results.\n> But what if table has thousand records ? Probably I would need more than\n> 512 Mb of RAM in order to get a report finished.\n\nThe only solution I was able to give was for them to use cursors, and\nfetch the result in chunks.\n\n> With some small changes in libpq-fe.h\n> \n> ( void (* callback)(PGresult *,void *ptr,int stat);\n> void *usr_ptr;)\n> \n> and also in libpq to allow a newly defined function in libpgtcl\n> (pg_loop) to initiate a query and then calling back a user defined\n> function after every record fetched from the connection.\n> \n> In order to do this, the connection is 'cloned' and on this new\n> connection the query is issued. For every record fetched, the C callback\n> function is called, here the Tcl interpreted is invoked for the source\n> inside the loop, then memory used by the record is release and the next\n> record is ready to come.\n\nI understand the idea here as I've use this trick before with tcl, but\nthis could cause a problem with the other languages that we support. I\ndon't know how this would be done for Perl, but with Java, the JDBC spec\ndoesn't have this type of callback.\n\nSome time back (around v6.0), I did look at having a seperate thread on\nthe client, that read the results in the background, and the foreground\nthread would then get the results almost immediately. It would only wait,\nif it had read everything transfered so far, and (as JDBC cannot go back a\nrow in a ResultSet), the read rows are freed once used.\n\nAlthough the idea was sound, in practice, it didn't work well. Not every\nJVM implemented threading in the same way, so it locked up a lot. 
In the\nend, the idea was dropped.\n\n> More than that, after processing some records, user can choose to break\n> the loop (using break command in Tcl) that is actually breaking the\n> connection.\n\nWhat side effects could this have to the backend if the second connection\nis broken. I think the existing code would simply terminate.\n\n> What we achieve making this patches ?\n> \n> First of all the ability of sequential processing large tables.\n> Then increasing performance due to parallel execution of receiving data\n> on the network and local processing. The backend process on the server\n> is filling the communication channel with data and the local task is\n> processing it as it comes.\n> In the old version, the local task has to wait until *all* data has\n> comed (buffered in memory if it was room enough) and then processing it.\n\n> What I would ask from you?\n> 1) First of all, if my needs could be satisfied in other way with\n> current functions in libpq of libpgtcl. I can assure you that with\n> current libpgtcl is rather impossible. I am not sure if there is another\n> mechanism using some subtle functions that I didn't know about them.\n\nWe were talking about some changes to the protocol. Perhaps, we could do\nsomething like changing it so it sends the result in blocks of tuples,\nrather than everything in one block. Then, in between each packet, an ACK\nor CAN style packet could be sent to the backend, either asking for the\nnext, or canceling the results.\n\nAnother alternative is (as an option definable by the client at run time)\nto have results open another connection on a per-result basis (aka FTP).\nHowever, I can see a performance hit with the overhead involved in opening\na new connection every time.\n\nAlso, I can see a possible problem:-\n\nSay, a client has executed a query, which returns a large number of rows.\nWe have read in the first 100 rows. The backend still has the majority of\nthe result queued up behind it.\n\nNow in JDBC, we have getAsciiStream/getBinaryStream/getUnicodeStream which\nare the standard way of getting at BLOBS.\n\nIf one of the columns is a blob, and the client tries to read from the\nblob, it will fail, because we are not in the main loop in the backend\n(were still transfering a result, and BLOBS use fastpath).\n\nThere are ways around this, but things could get messy if were not\ncareful.\n\n> 3) Is there any other normal mode to tell to the backend not to send any\n> more tuples instead of breaking the connection ?\n\nApart from using cursors, not that I know of.\n\n> 4) Even working in C, using PQexec , it's impossible to handle large\n> query results, am I true ?\n\nMemory is the only limitation to this.\n\n> Please cc to : mviorel@flex.ro and also teo@flex.ro\n\nDone..\n\nIt would be interesting to see what the others think. 
Both TCL & Java are\nclose relatives, and Sun are working on a TCL extension to Java, so any\nchanges could (in the future) help both of us.\n\n-- \nPeter T Mount petermount@earthling.net or pmount@maidast.demon.co.uk\nMain Homepage: http://www.demon.co.uk/finder\nWork Homepage: http://www.maidstone.gov.uk Work EMail: peter@maidstone.gov.uk\n\n", "msg_date": "Tue, 6 Jan 1998 12:07:07 +0000 (GMT)", "msg_from": "Peter T Mount <psqlhack@maidast.demon.co.uk>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] I want to change libpq and libpgtcl for better handling\n\tof large query results" }, { "msg_contents": "> \n> The Hermit Hacker wrote:\n> > \n> > > What I would ask from you?\n> > > 1) First of all, if my needs could be satisfied in other way with\n> > > current functions in libpq of libpgtcl. I can assure you that with\n> > > current libpgtcl is rather impossible. I am not sure if there is another\n> > > mechanism using some subtle functions that I didn't know about them.\n> > \n> > Bruce answered this one by asking about cursors...\n> \n> Yes. It's true. I have used cursors for speeding up opening tables in\n> PgAccess fetching only the first 200 records from the table.\n> But for a 10 thousand record table I will send over the network 10\n> thousand \"FETCH 1 IN CURSOR\" because in a report table I am processing\n> records one by one.\n> The time for this kind of retrieval would be more than twice as in the\n> 'callback' mechanism.\n\nYou can tell fetch to give you as many records as you want, so you can\nread in 100-tuple blocks.\n\n> \n> If you think that is better to keep libpq and libpgtcl as they are, then\n> I will use cursors.\n> But using the 'callback' method it would increase performance.\n> \n> I am waiting for the final resolution :-)\n> \n> > Basically, by \"cloning\", you are effectively looking at implementing ftp's\n> > way of dealing with a connection, having one \"control\" channel, and one \"data\"\n> > channel, is this right? So that the \"frontend\" has a means of sending a STOP\n> > command to the backend even while the backend is still sending the frontend\n> > the data?\n> \n> Not exactly. Looking from Tcl/Tk point of view, the mechanism is\n> transparent. I am using this structure :\n> \n> pg_loop $database \"select * from sometable\" record {\n> set something $record(somefield)\n> }\n> \n> But the new libpgtcl is opening a 'cloned' connection in order to :\n> - send the query through it\n> - receive the data from it\n> I am not able to break the connection using commands send through the\n> 'original' one. The query is 'stopped' by breaking the connection.\n> That's why we needed another connection. Because there isn't (yet) a\n> mechanism to tell the backend to abort transmission of the rest of the\n> query. I understand that the backend is not reading any more the socket\n> in order to receive some CANCEL signal from the frontend. So, dropping\n> the rest of the query results isn't possible without a hard break of the\n> connection.\n\nWe have this on the TODO list. We could use the TCP/IP out-of-band\nconnection option to inform the backend to stop things, but no one has\nimplemented it yet. (For the new Unix domain sockets, we could use\nsignals.) 
Anyone want to tackle it?\n\nman send shows:\n\n The flags parameter may include one or more of the following:\n\n #define MSG_OOB 0x1 /* process out-of-band data */\n #define MSG_DONTROUTE 0x4 /* bypass routing, use direct interface */\n\n The flag MSG_OOB is used to send ``out-of-band'' data on sockets that\n support this notion (e.g. SOCK_STREAM); the underlying protocol must al-\n so support ``out-of-band'' data. MSG_DONTROUTE is usually used only by\n diagnostic or routing programs.\n\n\n-- \nBruce Momjian\nmaillist@candle.pha.pa.us\n", "msg_date": "Tue, 6 Jan 1998 10:13:01 -0500 (EST)", "msg_from": "Bruce Momjian <maillist@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] I want to change libpq and libpgtcl for better handling\n\tof large query results" }, { "msg_contents": "On Tue, 6 Jan 1998, Bruce Momjian wrote:\n\n> We have this on the TODO list. We could use the TCP/IP out-of-band\n> connection option to inform the backend to stop things, but no one has\n> implemented it yet. (For the new Unix domain sockets, we could use\n> signals.) Anyone want to tackle it?\n\nI'll have to check, but I'm not sure if OOB is possible with Java.\n\n-- \nPeter T Mount petermount@earthling.net or pmount@maidast.demon.co.uk\nMain Homepage: http://www.demon.co.uk/finder\nWork Homepage: http://www.maidstone.gov.uk Work EMail: peter@maidstone.gov.uk\n\n", "msg_date": "Tue, 6 Jan 1998 18:11:32 +0000 (GMT)", "msg_from": "Peter T Mount <psqlhack@maidast.demon.co.uk>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] I want to change libpq and libpgtcl for better handling\n\tof large query results" }, { "msg_contents": "Sorry, just to clear things:\n\n> We were talking about some changes to the protocol. Perhaps, we could do\n> something like changing it so it sends the result in blocks of tuples,\n> rather than everything in one block. Then, in between each packet, an ACK\n ^^^^^^^^^^^^^^^^^^^^^^^\n\nBackend sends tuples one by one - just after executor gets next tuple \nfrom upper plan, backend sends this tuple to client-side.\n\n> or CAN style packet could be sent to the backend, either asking for the\n> next, or canceling the results.\n\nVadim\n", "msg_date": "Wed, 07 Jan 1998 01:51:15 +0700", "msg_from": "\"Vadim B. Mikheev\" <vadim@sable.krasnoyarsk.su>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] I want to change libpq and libpgtcl for better handling\n\tof large query results" }, { "msg_contents": "On Wed, 7 Jan 1998, Vadim B. Mikheev wrote:\n\n> Sorry, just to clear things:\n> \n> > We were talking about some changes to the protocol. Perhaps, we could do\n> > something like changing it so it sends the result in blocks of tuples,\n> > rather than everything in one block. 
Then, in between each packet, an ACK\n> ^^^^^^^^^^^^^^^^^^^^^^^\n> \n> Backend sends tuples one by one - just after executor gets next tuple \n> from upper plan, backend sends this tuple to client-side.\n\nOops, of course it does, sorry ;-)\n\n> > or CAN style packet could be sent to the backend, either asking for the\n> > next, or canceling the results.\n\n-- \nPeter T Mount petermount@earthling.net or pmount@maidast.demon.co.uk\nMain Homepage: http://www.demon.co.uk/finder\nWork Homepage: http://www.maidstone.gov.uk Work EMail: peter@maidstone.gov.uk\n\n", "msg_date": "Tue, 6 Jan 1998 23:13:23 +0000 (GMT)", "msg_from": "Peter T Mount <psqlhack@maidast.demon.co.uk>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] I want to change libpq and libpgtcl for better handling\n\tof large query results" }, { "msg_contents": "Peter T Mount wrote:\n> \n> The only solution I was able to give was for them to use cursors, and\n> fetch the result in chunks.\n\nGot it!!!\n\nSeems everyone has 'voted' for using cursors.\n\nAs a matter of fact, I have tested both a \nBEGIN ; DECLARE CURSOR ; FETCH N; END;\nand a \nSELECT FROM \n\nBoth of them are locking for write the tables that they use, until end\nof processing.\n\nFetching records in chunks (100) would speed up a little the processing.\n\nBut I am still convinced that if frontend would be able to process\ntuples as soon as they come, the overall time of processing a big table\nwould be less.\nFetching in chunks, the frontend waits for the 100 records to come (time\nA) and then process them (time B). A and B cannot be overlapped.\n\nThanks a lot for helping me to decide. Reports in PgAccess will use\ncursors.\n\n-- \nConstantin Teodorescu\nFLEX Consulting Braila, ROMANIA\n", "msg_date": "Wed, 07 Jan 1998 10:14:37 +0200", "msg_from": "Constantin Teodorescu <teo@flex.ro>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] I want to change libpq and libpgtcl for better handling\n\tof large query results" } ]
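
For reference, the chunked-cursor approach the thread settles on looks like this from a libpq client. A minimal sketch only: error handling is left out, and sometable stands in for whatever table the report reads.

    #include <stdio.h>
    #include "libpq-fe.h"

    /* Walk a large result 100 tuples at a time instead of one big PQexec. */
    static void
    report_loop(PGconn *conn)
    {
        PGresult *res;
        int       i, ntups;

        PQclear(PQexec(conn, "BEGIN"));
        PQclear(PQexec(conn, "DECLARE c CURSOR FOR SELECT * FROM sometable"));

        for (;;)
        {
            res = PQexec(conn, "FETCH 100 IN c");
            ntups = PQntuples(res);
            for (i = 0; i < ntups; i++)
                printf("%s\n", PQgetvalue(res, i, 0));  /* process one row */
            PQclear(res);
            if (ntups == 0)
                break;                                  /* cursor exhausted */
        }

        PQclear(PQexec(conn, "CLOSE c"));
        PQclear(PQexec(conn, "END"));
    }

The memory high-water mark on the client then stays at one 100-tuple chunk, at the cost of one extra round trip per chunk.
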
[ { "msg_contents": "Gautam Thaker (609-338-3907) wrote:\n> \n> I did not see contrib/spi because I have just the binary .rpm\n> for linux. I guess I will have to go get the entire src dist.\n> and perhaps even make the system myself.\n\nI see your problem. It seems that spi.h (and trigger.h) should be \ninstalled into _pg_installation_dir_/include dir to go into rpm - thanks,\nhope to fix this for 6.3. \n\n...Also, contrib should be included into rpm...\n\n> \n> BTW, how can I tell which version I have on my system?\n\nOnly rpm' author knows -:)\n\nVadim\n", "msg_date": "Tue, 06 Jan 1998 03:59:56 +0700", "msg_from": "\"Vadim B. Mikheev\" <vadim@sable.krasnoyarsk.su>", "msg_from_op": true, "msg_subject": "Re: [QUESTIONS] where to find SPI support (and what is the latest\n\tversion?)" } ]
[ { "msg_contents": "Your name was selected from a list of consultants found on the search engines. It is a personal invitation mailed separately to your attention, NOT a \"bulk\" E-Mail proposal.. If it is inappropriate, please type \"remove\" under Subject : and reply.\n====================================================================\n \nHere is the Preplanning Checklist from our 2001 Innovation Seminar for your use.\n\n1. What is the purpose of my venture?\n2. What business function will it perform?\n3. What markets will it serve?\n4. What products will it sell?\n5. How is it unique from others in the same business?\n6. Do I plan to diversify into new markets?\n7. Do I plan to introduce new products?\n8. How will I accomplish this diversification?\n9. What resources will I set aside to diversify with?\n10. What policies will I follow in setting the tone of the business?\n11. Do I have the manpower needed to attain my objectives?\n12. Do I have the capital required to attain my objectives?\n13. Do I have the knowledge required to pursue these objectives?\n14. What changes in my organization will be required?\n15. Are my manufacturing costs in line?\n16. Are my merchandise purchasing costs satisfactory?\n17. Do I have an adequate return on my investment?\n18. Is my competitive position viable?\n19. Is the venture in a cyclical business?\n20. What can I do to reduce its cyclical?\n21. Does my projected income/earnings meet my objectives?\n\nThe seminar uses four strategic planning lists. I'll send you the next one in a few weeks, or you can E-Mail me for a copy. Ask for Product Positioning and Opportunity Analysis Strategies. When you are browsing, stop by http://www.chemmgrs.com and see our new book, 21st Century Consulting. We'll give you Chapter 1 free just for stopping by our Web Site, or E-Mail us with \"Chapter 1 free\" as the subject. \n\nRichard Montgomery, General Manager\nChem Mgrs, Ltd\n2751 W. N. Union St Suite #72\nMidland, MI 48642\n \n", "msg_date": "Mon, 5 Jan 1998 16:10:45 -0500 (EST)", "msg_from": "rmonty@chemmgrs.com", "msg_from_op": true, "msg_subject": "Strategic Planning for the year 2001 and\n\tbeyondpgsql-hackers@postgresql.orgpgsql-hackers@postgresql.org" } ]
[ { "msg_contents": "> Hi everybody:\n> I have the following information for you.\n> I installed Postgress 6.2.1 in the next platform:\n> Apollo 9000 Serie 800 with PA-RISC running HP-UX 10.20\n> - The version of PostgreSQL (v6.2.1, 6.1.1, beta 970703, etc.).\n> v6.2.1\n> - Your operating system (i.e. RedHat v4.0 Linux v2.0.26).\n> HP-UX 10.20\n> - Your hardware (SPARC, i486, etc.).\n> Precision Arquitecture-RISC (32 o 64 Mz ???)\n> - Install problems:\n> I had problems running HP's make so I need to install 'gmake'. By the\n> way, I couldn't find a program called gmake so I got 'make' instead.\n> I did ftp from prep.ai.mit.edu. File:/pub/gnu/make-3.76.1.tar.gz\n>\n> I hadn't any problems to install gnu make.\n>\n> As to the Postgress installation I had the following problems:\n> When I ran 'make all', I got a yacc error:\n> Too many states compiling gram.y. (I got a clue from the yacc compiler:\n> use Ns option)\n> Solution: I edited the Makefile.global file. I modified the line with\n> the YFLAGS variable (line 211), so I added the option Ns with a value of\n> 5000(The default for HP was 1000)\n> After this change The problem vanished but I found the next problem:The\n> size of look ahead tables was not big enough. (Default 650) So I\n> modified to 2500 with the Nl (En -el) option.\n> At last the line was modified in the following way:\n> Original line:\n> YFLAGS= -d\n> Modified line\n> YFLAGS= -d -Ns5000 -Nl2500\n>\n> After this I got the next fatal error:\n> cc -I../../include -W l,-E -Ae -DNOFIXADE -Dhpux -I.. -I.\n> include -c scan.c -o scan.o\n> cc: \"scan.c\", line 145: error 1000: Unexpected symbol: \"&\".\n>\n> The problem was very simple to solve. One comment was erronous written.\n> The '/' was missing. I just edited the file scan.c and everythig worked\n> fine.\n\nOh, I assumed that this was a comment from scan.l, but I'm now guessing\nthat this was a comment inserted by HP's lex program. Yes?\n\n> I ran the regress test and I could find some tests failed principally\n> due to the floating point precision.\n\nWhich is OK.\n\nGood information. Anyone interested in typing this up as a FAQ\n(doc/FAQ_HP)?\n\n - Tom\n\n", "msg_date": "Tue, 06 Jan 1998 01:55:02 +0000", "msg_from": "\"Thomas G. Lockhart\" <lockhart@alumni.caltech.edu>", "msg_from_op": true, "msg_subject": "Re: [PORTS] Postgress installation in HP-UX 10.20." } ]
[ { "msg_contents": "\nOracle has adjustable disk block sizes, tuneable per instance\n(can you change it after install?? can't remember). In any\ncase the default is 2k or 4k, and things almost always go\nfaster with larger block sizes. On one project we needed to\ngo fast and had disk space to burn, so we upped it to 16k.\nThis was all _without_ using raw devices.\n\nMy *gut* feeling is that the underlying block size is a trade-off,\nsmaller blocks are better for small transactions, bigger blocks\nare better for bulk load/extract operations, with a penalty for \nfinding a single row. Optimum depends on the application, but is\nsomewhere between 2 and 32 k.\n\nHow hard would it be for postgresql to support adjustable block sizes? \nJust wondering.\n\n-- cary\ncobrien@access.digex.net\n", "msg_date": "Mon, 5 Jan 1998 22:43:19 -0500 (EST)", "msg_from": "\"Cary B. O'Brien\" <cobrien@access.digex.net>", "msg_from_op": true, "msg_subject": "Block Sizes" } ]
[ { "msg_contents": "I believe I found a bug. If a user other than the postgres superuser is\ngiven permission to create databases, then he should be able to destroy\nthe databases he creates. Currently he can't, at least in version 6.2.1\ncomplied for SunOS 5.5. Only the poostgres superuser can delete\ndatabases. If otherusers try they get the following error message:\n\n\"WARN:pg_database: Permission denied.\ndestroydb: database destroy failed on tmpdb.\"\n\neventhough this user is the database admin for tmpdb as shown in the\npd_database table.\n", "msg_date": "Mon, 05 Jan 1998 20:05:30 -0800", "msg_from": "Kevin Witten <kwitten@qdt.com>", "msg_from_op": true, "msg_subject": "Postgres acl" }, { "msg_contents": "> I believe I found a bug. If a user other than the postgres superuser is\n> given permission to create databases, then he should be able to destroy\n> the databases he creates. Currently he can't, at least in version 6.2.1\n> complied for SunOS 5.5. Only the poostgres superuser can delete\n> databases. If otherusers try they get the following error message:\n>\n> \"WARN:pg_database: Permission denied.\n> destroydb: database destroy failed on tmpdb.\"\n>\n> eventhough this user is the database admin for tmpdb as shown in the\n> pd_database table.\n\nAt the moment, one requires \"create users\" privilege to destroy your own\ndatabase, but only \"create databases\" privilege to create one. I think\nthere is something about this on the ToDo list...\n\n - Tom\n\n", "msg_date": "Tue, 06 Jan 1998 04:36:06 +0000", "msg_from": "\"Thomas G. Lockhart\" <lockhart@alumni.caltech.edu>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Postgres acl" }, { "msg_contents": "> \n> I believe I found a bug. If a user other than the postgres superuser is\n> given permission to create databases, then he should be able to destroy\n> the databases he creates. Currently he can't, at least in version 6.2.1\n> complied for SunOS 5.5. Only the poostgres superuser can delete\n> databases. If otherusers try they get the following error message:\n> \n> \"WARN:pg_database: Permission denied.\n> destroydb: database destroy failed on tmpdb.\"\n> \n> eventhough this user is the database admin for tmpdb as shown in the\n> pd_database table.\n> \n> \n\nHere is the fix. This bug has been around for a while:\n\n---------------------------------------------------------------------------\n\n*** ./aclchk.c.orig\tTue Jan 6 00:10:25 1998\n--- ./aclchk.c\tTue Jan 6 00:18:40 1998\n***************\n*** 410,416 ****\n \t\t * pg_database table, there is still additional permissions\n \t\t * checking in dbcommands.c\n \t\t */\n! \t\tif (mode & ACL_AP)\n \t\t\treturn ACLCHECK_OK;\n \t}\n \n--- 410,416 ----\n \t\t * pg_database table, there is still additional permissions\n \t\t * checking in dbcommands.c\n \t\t */\n! \t\tif ((mode & ACL_WR) || (mode & ACL_AP))\n \t\t\treturn ACLCHECK_OK;\n \t}\n \n\n\n-- \nBruce Momjian\nmaillist@candle.pha.pa.us\n", "msg_date": "Tue, 6 Jan 1998 00:19:32 -0500 (EST)", "msg_from": "Bruce Momjian <maillist@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Postgres acl" } ]
[ { "msg_contents": "I would like to stand up as being very much in favor of two related things:\n\n(1) adjustable blocksizes\n(2) a larger MAX TUPLE size\n\nAs far as I can tell (and that ain't far), there would only be relatively \nminor changes that would have to be made to give the option of allowing \nthe user to select 2, 4, 8 or 16 as the blocksize. Concurrently, it \nwould seem wise to simply up the max tuple size to 32k. It seems to me \nunlikely that this would have a noticeable performance impact. In order \nto do this, we would need to know about the 32 bit ItemIdData structure in \n/storage/itemid.h (see my previous posts). It was recommended to me that \nlp_flags might still be only using 2 of the 6 bits allocated to it. If \nso, increasing lp_offset to 15 and lp_len to 15, i.e. 2^15 bits, i.e. \n32768 bytes max tuple size, would be possible! I think!\n\nJust my 2 cents.\n\nEddie\n", "msg_date": "Mon, 05 Jan 1998 23:08:10 -0500 (EST)", "msg_from": "Integration <abrams@philos.umass.edu>", "msg_from_op": true, "msg_subject": "My 2c on adjustable blocksizes" }, { "msg_contents": "> I would like to stand up as being very much in favor of two related things:\n>\n> (1) adjustable blocksizes\n> (2) a larger MAX TUPLE size\n>\n> As far as I can tell (and that ain't far), there would only be relatively\n> minor changes that would have to be made to give the option of allowing\n> the user to select 2, 4, 8 or 16 as the blocksize. Concurrently, it\n> would seem wise to simply up the max tuple size to 32k. It seems to me\n> unlikely that this would have a noticeable performance impact. In order\n> to do this, we would need to know about the 32 bit ItemIdData structure in\n> /storage/itemid.h (see my previous posts). It was recommended to me that\n> lp_flags might still be only using 2 of the 6 bits allocated to it. If\n> so, increasing lp_offset to 15 and lp_len to 15, i.e. 2^15 bits, i.e.\n> 32768 bytes max tuple size, would be possible! I think!\n\nIf someone came up with some clean patches to allow #define declarations for\nblock size and for tuple sizes, I'm sure they would be of interest. The\nongoing work being discussed for v6.3 would not conflict with those areas (I\nsuspect) so go to it!\n\nI have noticed some integer constants scattered around the code (in places\nwhere they don't belong) which are related to a maximum tuple size. For\nexample, there is an arbitrary 4096 byte limit on the size of a character\ncolumn, and the 4096 is hardcoded into the parser. That particular one would\nbe easy to change...\n\n - Tom\n\n", "msg_date": "Tue, 06 Jan 1998 04:43:26 +0000", "msg_from": "\"Thomas G. Lockhart\" <lockhart@alumni.caltech.edu>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] My 2c on adjustable blocksizes" } ]
[ { "msg_contents": "\nWell, I've found the problem that was breaking Large Objects and, although\nI have a fix, I still believe the cause is still there.\n\nIt wasn't a protocol problem after all, but a memory leak that was causing\nthe backend to throw a Segmentation Violation when an argument was being\nfree'd.\n\nFor example:\n\nRunning src/test/example/testlo2, it calls lo_import on a file, then\nlo_export to export the newly formed large object to another file.\n\nAnyhow, every time, on the last call to lo_write (when importing the\nfile), the backend seemed to just die. The only difference between this\ncall and the previous calls, is that the amount of data to write is\nsmaller. Even changing the block size didn't change this fact.\n\nAnyhow, I'm now trying to break it before posting the patch, but both\nlibpq & JDBC are running flawlessly.\n\nHopefully, the patch will be up later this afternoon.\n\n-- \nPeter T Mount petermount@earthling.net or pmount@maidast.demon.co.uk\nMain Homepage: http://www.demon.co.uk/finder\nWork Homepage: http://www.maidstone.gov.uk Work EMail: peter@maidstone.gov.uk\n\n", "msg_date": "Tue, 6 Jan 1998 12:16:08 +0000 (GMT)", "msg_from": "Peter T Mount <psqlhack@maidast.demon.co.uk>", "msg_from_op": true, "msg_subject": "Large objects fixed" } ]
[ { "msg_contents": "> \n> How hard would it be for postgresql to support adjustable block sizes? \n> Just wondering.\n> \n\nI can take a stab at this tonite after work now that the snapshot is there.\nStill have around some of the files/diffs from looking at this a year ago...\n\nI don't think it will be hard, just a few files with BLCKSZ/MAXBLCKSZ\nreferences to check for breakage. Appears that only one bit of lp_flags is\nbeing used too, so that would seem to allow up to 32k blocks.\n\nOther issue is the bit alignment in the ItemIdData structure. In the past,\nI've read that bit operations were slower than int ops. Is this the case?\n\nI want to check to see if the structure is only 32 bits and not being padded\nby the compiler. Worse to worse, make one field of 32 bits and make macros\nto access the three pieces or make lp_off & lp_len shorts and lp_flags a char.\n\nI can check the aix compiler, but what does gcc and other compilers do with\nbit field alignment?\n\n\ndarrenk\n", "msg_date": "Tue, 6 Jan 1998 08:51:45 -0500", "msg_from": "darrenk@insightdist.com (Darren King)", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Block Sizes" }, { "msg_contents": "> \n> > \n> > How hard would it be for postgresql to support adjustable block sizes? \n> > Just wondering.\n> > \n> \n> I can take a stab at this tonite after work now that the snapshot is there.\n> Still have around some of the files/diffs from looking at this a year ago...\n> \n> I don't think it will be hard, just a few files with BLCKSZ/MAXBLCKSZ\n> references to check for breakage. Appears that only one bit of lp_flags is\n> being used too, so that would seem to allow up to 32k blocks.\n> \n> Other issue is the bit alignment in the ItemIdData structure. In the past,\n> I've read that bit operations were slower than int ops. Is this the case?\n\nUsually, yes.\n\n> \n> I want to check to see if the structure is only 32 bits and not being padded\n> by the compiler. Worse to worse, make one field of 32 bits and make macros\n> to access the three pieces or make lp_off & lp_len shorts and lp_flags a char.\n> \n> I can check the aix compiler, but what does gcc and other compilers do with\n> bit field alignment?\n\nI don't know.\n\n-- \nBruce Momjian\nmaillist@candle.pha.pa.us\n", "msg_date": "Tue, 6 Jan 1998 10:18:18 -0500 (EST)", "msg_from": "Bruce Momjian <maillist@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Block Sizes" } ]
[ { "msg_contents": "> I've run the regression tests on today's source tree, and found lots of\n> ordering differences..., and one different result.\n>\n> The different result is in the \"select_distinct_on\" test; the original\n> result had 8 rows and the new result has 40 rows. However, I'm getting\n> myself confused on what the correct result _should_ be, since \"select\n> distinct on\" is not documented. For a query like:\n>\n> SELECT DISTINCT ON string4 two, string4, ten FROM temp;\n>\n> What is the \"ON string4 two\" clause saying? Anyway, the result is\n> different than before, so we would probably want to look at it. I'm away\n> 'til after the weekend, but can help after that.\n\nHi Bruce. Some of the \"order by\" clauses are currently broken in the\nregression tests (at least on my machine). Do you see this also? For\nexample, in the point test:\n\nQUERY: SET geqo TO 'off';\nQUERY: SELECT '' AS thirtysix, p1.f1 AS point1, p2.f1 AS point2, p1.f1 <->\np2.f1 AS dist\n FROM POINT_TBL p1, POINT_TBL p2\n ORDER BY dist, point1 using <<, point2 using <<;\nthirtysix|point1 |point2 | dist\n---------+----------+----------+----------------\n |(0,0) |(-10,0) | 10\n |(-10,0) |(-10,0) | 0\n |(-3,4) |(-10,0) |8.06225774829855\n ...\n\nAlso, some of Vadim's contrib stuff is broken since WARN is no longer\ndefined. I can post patches for that (there are two files affected) but\nsubstituted ERROR and am not certain whether that is the correct choice.\n\nLet me know if I can help with anything...\n\n - Tom\n\n", "msg_date": "Tue, 06 Jan 1998 15:04:18 +0000", "msg_from": "\"Thomas G. Lockhart\" <lockhart@alumni.caltech.edu>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Current regression tests" }, { "msg_contents": "> \n> > I've run the regression tests on today's source tree, and found lots of\n> > ordering differences..., and one different result.\n> >\n> > The different result is in the \"select_distinct_on\" test; the original\n> > result had 8 rows and the new result has 40 rows. However, I'm getting\n> > myself confused on what the correct result _should_ be, since \"select\n> > distinct on\" is not documented. For a query like:\n> >\n> > SELECT DISTINCT ON string4 two, string4, ten FROM temp;\n> >\n> > What is the \"ON string4 two\" clause saying? Anyway, the result is\n> > different than before, so we would probably want to look at it. I'm away\n> > 'til after the weekend, but can help after that.\n> \n> Hi Bruce. Some of the \"order by\" clauses are currently broken in the\n> regression tests (at least on my machine). Do you see this also? For\n> example, in the point test:\n> \n> QUERY: SET geqo TO 'off';\n> QUERY: SELECT '' AS thirtysix, p1.f1 AS point1, p2.f1 AS point2, p1.f1 <->\n> p2.f1 AS dist\n> FROM POINT_TBL p1, POINT_TBL p2\n> ORDER BY dist, point1 using <<, point2 using <<;\n> thirtysix|point1 |point2 | dist\n> ---------+----------+----------+----------------\n> |(0,0) |(-10,0) | 10\n> |(-10,0) |(-10,0) | 0\n> |(-3,4) |(-10,0) |8.06225774829855\n> ...\n> \n> Also, some of Vadim's contrib stuff is broken since WARN is no longer\n> defined. 
I can post patches for that (there are two files affected) but\n> substituted ERROR and am not certain whether that is the correct choice.\n> \n> Let me know if I can help with anything...\n\nI am starting to agree with Vadim that it is too much work to go through\nevery elog(), and doing it by directory or file is very imprecise, and\nmay cause confusion.\n\nShould I throw in the towel and make them all ERROR?\n\nI don't know anything that would cause the ORDER BY problems.\n\n-- \nBruce Momjian\nmaillist@candle.pha.pa.us\n", "msg_date": "Tue, 6 Jan 1998 10:55:12 -0500 (EST)", "msg_from": "Bruce Momjian <maillist@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Current regression tests" }, { "msg_contents": "On Tue, 6 Jan 1998, Bruce Momjian wrote:\n\n> Should I throw in the towel and make them all ERROR?\n\n\tI'm curious, but, again, how does everyone else handle this?\n\n", "msg_date": "Tue, 6 Jan 1998 11:46:56 -0500 (EST)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Current regression tests" } ]
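For readers following the WARN/ERROR/ABORT discussion, the change being debated comes down to which severity tag each elog() call carries. A hedged illustration of the proposed split, using messages that appear elsewhere in these threads (the final keyword names were still being decided at this point, and the variable in the second call is made up for the example):

    /* user/parser-level error: the offending statement is simply rejected */
    elog(ERROR, "pg_database: Permission denied.");

    /* internal failure deeper in the backend */
    elog(ABORT, "nodeRead: Bad type %d", nodetype);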
[ { "msg_contents": "", "msg_date": "Tue, 06 Jan 1998 15:53:35 +0000", "msg_from": "Tony Rios <tonester@ccom.net>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] database size" }, { "msg_contents": "Hi,\n\nI created a table with two columns of type int, and loaded about 300 K records\nin it. So, the total size of the table is approx. that of 600 K integers,\nroughly 2.4 MB.\nBut, the file corresponding to the table in pgsql/data/base directory\nhas a size of 19 MB. I was wondering if I have done something wrong in\nthe installation or usage, or is it the normal behavior ?\n\nAlso, I was trying to execute the query: \nselect item as item, count(*) as cnt into table C_temp \nfrom temp group by item;\n\nHere, temp is the name of the table which contains the data and item is an\ninteger attribute. While doing the sort for the group by, the size of one of\nthe temporary pg_psort relation grows to about 314 MB. The size of the temp \ntable is as mentioned above. If someone tried similar queries, could you\nplease tell me if this is normal. \nThe above query did not finish even after 2 hours. I am executing it on a \nSun Sparc 5 running Sun OS 5.5.\n\nThanks\n--shiby\n\n\n", "msg_date": "Tue, 06 Jan 1998 18:09:52 -0500", "msg_from": "Shiby Thomas <sthomas@cise.ufl.edu>", "msg_from_op": false, "msg_subject": "database size" }, { "msg_contents": "Shiby Thomas wrote:\n> \n> Hi,\n> \n> I created a table with two columns of type int, and loaded about 300 K records\n> in it. So, the total size of the table is approx. that of 600 K integers,\n> roughly 2.4 MB.\n> But, the file corresponding to the table in pgsql/data/base directory\n> has a size of 19 MB. I was wondering if I have done something wrong in\n> the installation or usage, or is it the normal behavior ?\n\nThis is OK. First thing - int is not 2 bytes long, it's 4 bytes long.\nUse int2 if you want so. Second - you have to add up other per-record\nstuff like oids and other internal attributes. \n\n> Also, I was trying to execute the query:\n> select item as item, count(*) as cnt into table C_temp\n> from temp group by item;\n> \n> Here, temp is the name of the table which contains the data and item is an\n> integer attribute. While doing the sort for the group by, the size of one of\n> the temporary pg_psort relation grows to about 314 MB. The size of the temp\n> table is as mentioned above. \n\nIt ain't good. Seems like the psort is very hungry. \n\nMike\n\n-- \nWWW: http://www.lodz.pdi.net/~mimo  tel: Int. Acc. Code + 48 42 148340\nadd: Michal Mosiewicz * Bugaj 66 m.54 * 95-200 Pabianice * POLAND\n", "msg_date": "Tue, 06 Jan 1998 23:37:43 +0000", "msg_from": "\"Michal Mosiewicz\" <mimo@lodz.pdi.net>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] database size" }, { "msg_contents": "On Tue, 6 Jan 1998, Shiby Thomas wrote:\n\n> Hi,\n> \n> I created a table with two columns of type int, and loaded about 300 K records\n> in it. So, the total size of the table is approx. that of 600 K integers,\n> roughly 2.4 MB.\n> But, the file corresponding to the table in pgsql/data/base directory\n> has a size of 19 MB. I was wondering if I have done something wrong in\n> the installation or usage, or is it the normal behavior ?\n> \n> Also, I was trying to execute the query: \n> select item as item, count(*) as cnt into table C_temp \n> from temp group by item;\n> \n> Here, temp is the name of the table which contains the data and item is an\n> integer attribute. 
While doing the sort for the group by, the size of one of\n> the temporary pg_psort relation grows to about 314 MB. The size of the temp \n> table is as mentioned above. If someone tried similar queries, could you\n> please tell me if this is normal. \n> The above query did not finish even after 2 hours. I am executing it on a \n> Sun Sparc 5 running Sun OS 5.5.\n\n\tWhat version of PostgreSQL are you running? *raised eyebrow*\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Tue, 6 Jan 1998 19:42:18 -0400 (AST)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] database size" }, { "msg_contents": "On Tue, 6 Jan 1998, Tony Rios wrote:\n\n> At 06:09 PM 1/6/98 -0500, Shiby Thomas wrote:\n> \n> >Hi,\n> \n> >\n> \n> >I created a table with two columns of type int, and loaded about 300 K records\n> \n> >in it. So, the total size of the table is approx. that of 600 K integers,\n> \n> >roughly 2.4 MB.\n> \n> >But, the file corresponding to the table in pgsql/data/base directory\n> \n> >has a size of 19 MB. I was wondering if I have done something wrong in\n> \n> >the installation or usage, or is it the normal behavior ?\n> \n> >\n> \n> \n> Just wondering.. did you happen to do an INSERT into the database,\n> \n> then delete some rows.. say 19megs worth, then re-add... From what I've\n> \n> seen msql db's will always be at least the size of the largest you've ever\n> \n> had the database before. It will over time, overrite existing deleted\n> \n> records, but it keeps the data still in there, just sets a delete flag.\n> \n> \n> If you really need to cut the size down, I've had to delete the database\n> \n> completely, then create another table from scratch. Not sure if there\n> \n> is a 'purge' type function available, but you have to be careful that\n> \n> nobody is accessing the db at that time, since it's very sensitive at\n> \n> that time.\n\n\tvacuum will clean out the deleted records and truncate the table...has\nbeen so since v6.1, I believe...\n\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Tue, 6 Jan 1998 20:18:19 -0400 (AST)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] database size" }, { "msg_contents": "\n=> \tWhat version of PostgreSQL are you running? *raised eyebrow*\n6.2.1. I haven't yet applied the patches(put in the PostgreSQL web page) \nthough.\n\n--shiby\n\n\n\n", "msg_date": "Tue, 06 Jan 1998 20:10:39 -0500", "msg_from": "Shiby Thomas <sthomas@cise.ufl.edu>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] database size " }, { "msg_contents": "> Just wondering.. did you happen to do an INSERT into the database,\n> \n> then delete some rows.. say 19megs worth, then re-add... From what I've\n> \n> seen msql db's will always be at least the size of the largest you've ever\n> \n> had the database before. It will over time, overrite existing deleted\n> \n> records, but it keeps the data still in there, just sets a delete flag.\n> \n> \n> If you really need to cut the size down, I've had to delete the database\n> \n> completely, then create another table from scratch. 
Not sure if there\n> \n> is a 'purge' type function available, but you have to be careful that\n> \n> nobody is accessing the db at that time, since it's very sensitive at\n> \n> that time.\n> \n\nThanks to Vadim, vacuum shrinks the size to the exact amount needed to\nstore the data. Also, the table is locked while vacuuming, so no one\ncan accidentally access it.\n\n-- \nBruce Momjian\nmaillist@candle.pha.pa.us\n", "msg_date": "Tue, 6 Jan 1998 20:16:34 -0500 (EST)", "msg_from": "Bruce Momjian <maillist@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] database size" } ]
[ { "msg_contents": "Forwarded message:\n> > I believe I found a bug. If a user other than the postgres superuser is\n> > given permission to create databases, then he should be able to destroy\n> > the databases he creates. Currently he can't, at least in version 6.2.1\n> > compiled for SunOS 5.5. Only the postgres superuser can delete\n> > databases. If other users try they get the following error message:\n> > \n> > \"WARN:pg_database: Permission denied.\n> > destroydb: database destroy failed on tmpdb.\"\n> > \n> > even though this user is the database admin for tmpdb as shown in the\n> > pg_database table.\n> > \n> > \n> \n> Here is the fix. This bug has been around for a while:\n> \n> ---------------------------------------------------------------------------\n> \n> *** ./aclchk.c.orig\tTue Jan 6 00:10:25 1998\n> --- ./aclchk.c\tTue Jan 6 00:18:40 1998\n> ***************\n> *** 410,416 ****\n> \t\t * pg_database table, there is still additional permissions\n> \t\t * checking in dbcommands.c\n> \t\t */\n> ! \t\tif (mode & ACL_AP)\n> \t\t\treturn ACLCHECK_OK;\n> \t}\n> \n> --- 410,416 ----\n> \t\t * pg_database table, there is still additional permissions\n> \t\t * checking in dbcommands.c\n> \t\t */\n> ! \t\tif ((mode & ACL_WR) || (mode & ACL_AP))\n> \t\t\treturn ACLCHECK_OK;\n> \t}\n\nI am now thinking about this patch, and I don't think I like it. The\noriginal code allowed APPEND-only for users who can create databases,\nbut no DELETE. The patch gives them DELETE permission, so they can\ndestroy their database, but they could issue the command:\n\n\tdelete from pg_database\n\nand destroy everyone's. 'drop database' does checks, but the acl check\nis done in the executor, and it doesn't know if the checks have been\nperformed or not.\n\nCan someone who has permission to create databases be trusted not to\ndelete others? If we say no, how do we make sure they can change\npg_database rows on only databases that they own?\n\n-- \nBruce Momjian\nmaillist@candle.pha.pa.us\n", "msg_date": "Tue, 6 Jan 1998 11:52:17 -0500 (EST)", "msg_from": "Bruce Momjian <maillist@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Postgres acl (fwd)" }, { "msg_contents": "On Tue, 6 Jan 1998, Bruce Momjian wrote:\n\n> Can someone who has permission to create databases be trusted not to\n> delete others? If we say no, how do we make sure they can change\n> pg_database rows on only databases that they own?\n\n\tdeleting a database is accomplished using 'drop database', no?\nCan the code for that not be modified to see whether the person dropping\nthe database is the person that owns it *or* pgsuperuser?\n\n\n", "msg_date": "Tue, 6 Jan 1998 12:11:19 -0500 (EST)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Postgres acl (fwd)" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> Forwarded message:\n> > > I believe I found a bug. If a user other than the postgres superuser is\n> > > given permission to create databases, then he should be able to destroy\n> > > the databases he creates. Currently he can't, at least in version 6.2.1\n> > > compiled for SunOS 5.5. Only the postgres superuser can delete\n> > > databases. If other users try they get the following error message:\n> > >\n> > > \"WARN:pg_database: Permission denied.\n> > > destroydb: database destroy failed on tmpdb.\"\n> > >\n> > > even though this user is the database admin for tmpdb as shown in the\n> > > pg_database table.\n> > >\n> > >\n> >\n> > Here is the fix. 
This bug has been around for a while:\n> >\n> > ---------------------------------------------------------------------------\n> >\n> > *** ./aclchk.c.orig Tue Jan 6 00:10:25 1998\n> > --- ./aclchk.c Tue Jan 6 00:18:40 1998\n> > ***************\n> > *** 410,416 ****\n> > * pg_database table, there is still additional permissions\n> > * checking in dbcommands.c\n> > */\n> > ! if (mode & ACL_AP)\n> > return ACLCHECK_OK;\n> > }\n> >\n> > --- 410,416 ----\n> > * pg_database table, there is still additional permissions\n> > * checking in dbcommands.c\n> > */\n> > ! if ((mode & ACL_WR) || (mode & ACL_AP))\n> > return ACLCHECK_OK;\n> > }\n> \n> I am now thinking about this patch, and I don't think I like it. The\n> original code allowed APPEND-only for users who can create databases,\n> but no DELETE. The patch gives them DELETE permission, so they can\n> destroy their database, but they could issue the command:\n> \n> delete from pg_database\n> \n> and destroy everyone's. 'drop database' does checks, but the acl check\n> is done in the executor, and it doesn't know if the checks have been\n> performed or not.\n> \n> Can someone who has permission to create databases be trusted not to\n> delete others? If we say no, how do we make sure they can change\n> pg_database rows on only databases that they own?\n> \n> --\n> Bruce Momjian\n> maillist@candle.pha.pa.us\n\n\nCan't you check to see if they own the database before you let them\ndelete the row in pg_database? If a row is deleted from pg_database, it\nis disallowed unless the userid is the same as the datdba field in that\nrow?\n", "msg_date": "Tue, 06 Jan 1998 10:01:03 -0800", "msg_from": "Kevin Witten <kwitten@qdt.com>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Postgres acl (fwd)" }, { "msg_contents": "> \n> On Tue, 6 Jan 1998, Bruce Momjian wrote:\n> \n> > Can someone who has permission to create databases be trusted not to\n> > delete others? If we say no, how do we make sure they can change\n> > pg_database rows on only databases that they own?\n> \n> \tdeleting a database is accomplished using 'drop database', no?\n> Can the code for that not be modified to see whether the person dropping\n> the database is the person that owns it *or* pgsuperuser?\n\nIt already does the check, but issues an SQL from the C code to delete\nfrom pg_database. 
I believe any user who can create a database can\n> issue the same SQL command from psql, bypassing the drop database\n> checks, no?\n\n\tOkay, I understand what you mean here...so I guess the next\nquestion is should system tables be directly modifyable by non-superuser?\n\n\tFor instance, we have a 'drop database' SQL command...can we\nrestrict 'delete from pg_database' to just superuser, while leaving 'drop\ndatabase' open to those with createdb privileges? Same with 'create\nuser', and, possible, a 'create group' command instead of 'insert into\npg_group'?\n\n\n", "msg_date": "Tue, 6 Jan 1998 13:47:17 -0500 (EST)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Postgres acl (fwd)" }, { "msg_contents": "> \n> On Tue, 6 Jan 1998, Bruce Momjian wrote:\n> \n> > > \n> > > On Tue, 6 Jan 1998, Bruce Momjian wrote:\n> > > \n> > > > Can someone who has permission to create databases be trusted not to\n> > > > delete others? If we say no, how do we make sure they can change\n> > > > pg_database rows on only databases that they own?\n> > > \n> > > \tdeleting a database is accomplished using 'drop database', no?\n> > > Can the code for that not be modified to see whether the person dropping\n> > > the database is the person that owns it *or* pgsuperuser?\n> > \n> > It already does the check, but issues an SQL from the C code to delete\n> > from pg_database. I believe any user who can create a database can\n> > issue the same SQL command from psql, bypassing the drop database\n> > checks, no?\n> \n> \tOkay, I understand what you mean here...so I guess the next\n> question is should system tables be directly modifyable by non-superuser?\n> \n> \tFor instance, we have a 'drop database' SQL command...can we\n> restrict 'delete from pg_database' to just superuser, while leaving 'drop\n> database' open to those with createdb privileges? Same with 'create\n> user', and, possible, a 'create group' command instead of 'insert into\n> pg_group'?\n\nYes, we must replace the SQL commands in commands/dbcommands.c with\nlower-level C table access routines so we do not have to go to the\nexecutor, where the access permissions are checked.\n\n-- \nBruce Momjian\nmaillist@candle.pha.pa.us\n", "msg_date": "Tue, 6 Jan 1998 14:21:32 -0500 (EST)", "msg_from": "Bruce Momjian <maillist@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Postgres acl (fwd)" } ]
[ { "msg_contents": "> > > > Can someone who has permission to create databases be trusted not to\n> > > > delete others? If we say no, how do we make sure they can change\n> > > > pg_database rows on only databases that they own?\n> > > \n> > > \tdeleting a database is accomplished using 'drop database', no?\n> > > Can the code for that not be modified to see whether the person dropping\n> > > the database is the person that owns it *or* pgsuperuser?\n> > \n> > It already does the check, but issues an SQL from the C code to delete\n> > from pg_database. I believe any user who can create a database can\n> > issue the same SQL command from psql, bypassing the drop database\n> > checks, no?\n> \n> \tOkay, I understand what you mean here...so I guess the next\n> question is should system tables be directly modifyable by non-superuser?\n> \n> \tFor instance, we have a 'drop database' SQL command...can we\n> restrict 'delete from pg_database' to just superuser, while leaving 'drop\n> database' open to those with createdb privileges? Same with 'create\n> user', and, possible, a 'create group' command instead of 'insert into\n> pg_group'?\n\nIMHO, the system tables should _never_ be directly modifiable by anyone\nother than the superuser/dba. The rest of the population should have to\nuse a command of some sort that can be grant/revoked by said superuser/dba.\n\ndarrenk\n", "msg_date": "Tue, 6 Jan 1998 14:20:15 -0500", "msg_from": "darrenk@insightdist.com (Darren King)", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Postgres acl (fwd)" }, { "msg_contents": "> > > > > Can someone who has permission to create databases be trusted not to\n> > > > > delete others? If we say no, how do we make sure they can change\n> > > > > pg_database rows on only databases that they own?\n> > > >\n> > > > deleting a database is accomplished using 'drop database', no?\n> > > > Can the code for that not be modified to see whether the person dropping\n> > > > the database is the person that owns it *or* pgsuperuser?\n> > >\n> > > It already does the check, but issues an SQL from the C code to delete\n> > > from pg_database. I believe any user who can create a database can\n> > > issue the same SQL command from psql, bypassing the drop database\n> > > checks, no?\n> >\n> > Okay, I understand what you mean here...so I guess the next\n> > question is should system tables be directly modifyable by non-superuser?\n> >\n> > For instance, we have a 'drop database' SQL command...can we\n> > restrict 'delete from pg_database' to just superuser, while leaving 'drop\n> > database' open to those with createdb privileges? Same with 'create\n> > user', and, possible, a 'create group' command instead of 'insert into\n> > pg_group'?\n>\n> IMHO, the system tables should _never_ be directly modifiable by anyone\n> other than the superuser/dba. The rest of the population should have to\n> use a command of some sort that can be grant/revoked by said superuser/dba.\n\nAre there any maintenance operations which require a \"delete from pg_xxx\"? If\nnot, then we could just modify the parser (or the executor?) to check the table\nname and not allow insert/delete from any table whose name starts with \"pg_\". Had\nto ask, although I'm sure this is too easy to actually work :)\n\n - Tom\n\n", "msg_date": "Wed, 07 Jan 1998 01:25:47 +0000", "msg_from": "\"Thomas G. Lockhart\" <lockhart@alumni.caltech.edu>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Postgres acl (fwd)" }, { "msg_contents": "On Wed, 7 Jan 1998, Thomas G. 
Lockhart wrote:\n\n> Are there any maintenance operations which require a \"delete from pg_xxx\"? If\n> not, then we could just modify the parser (or the executor?) to check the table\n> name and not allow insert/delete from any table whose name starts with \"pg_\". Had\n> to ask, although I'm sure this is too easy to actually work :)\n\n\tAs long as what you are suggesting doesn't break \"drop database\", \"drop\ntable\", \"drop view\"...I realize that this is obvious, but...\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Tue, 6 Jan 1998 22:18:01 -0400 (AST)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Postgres acl (fwd)" }, { "msg_contents": "> >\n> > IMHO, the system tables should _never_ be directly modifiable by anyone\n> > other than the superuser/dba. The rest of the population should have to\n> > use a command of some sort that can be grant/revoked by said superuser/dba.\n> \n> Are there any maintenance operations which require a \"delete from pg_xxx\"? If\n> not, then we could just modify the parser (or the executor?) to check the table\n> name and not allow insert/delete from any table whose name starts with \"pg_\". Had\n> to ask, although I'm sure this is too easy to actually work :)\n\nInteresting thought. Wonder if it would work?\n\n-- \nBruce Momjian\nmaillist@candle.pha.pa.us\n", "msg_date": "Tue, 6 Jan 1998 21:23:13 -0500 (EST)", "msg_from": "Bruce Momjian <maillist@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Postgres acl (fwd)" }, { "msg_contents": "> \n> On Wed, 7 Jan 1998, Thomas G. Lockhart wrote:\n> \n> > Are there any maintenance operations which require a \"delete from pg_xxx\"? If\n> > not, then we could just modify the parser (or the executor?) to check the table\n> > name and not allow insert/delete from any table whose name starts with \"pg_\". Had\n> > to ask, although I'm sure this is too easy to actually work :)\n> \n> \tAs long as what you are suggesting doesn't break \"drop database\", \"drop\n> table\", \"drop view\"...I realize that this is obvious, but...\n\nGood point. Yes it does. dbcommands.c and user.c both do direct calls\nto pg_exec to pass everything into the parser, optimizer, and executor. \n\nThe real fix is to do things like copy.c does, by directly calling the C\nroutines and making the desired changes there. Or to have some global\nflag that says \"Backend performed the rights test, let this SQL\nsucceed.\" That may be cleaner. Table access rights are tested in just\none function, I think.\n\nWe still have the pg_user.passwd problem, and pg_user is not readable by\ngeneral users. I can't think of a fix for this.\n\n-- \nBruce Momjian\nmaillist@candle.pha.pa.us\n", "msg_date": "Tue, 6 Jan 1998 21:27:44 -0500 (EST)", "msg_from": "Bruce Momjian <maillist@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Postgres acl (fwd)" } ]
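The name-based guard Thomas suggests in the thread above would be a very small piece of code. The snippet below is only a sketch of the idea, not code from the tree; superuser() and elog() are existing backend facilities, while relname simply stands for whatever relation the statement is trying to modify, and where exactly to hook the test (parser vs. executor) is precisely the open question.

    /* Hypothetical check: refuse direct INSERT/DELETE on system catalogs
     * (names starting with "pg_") unless the user is the Postgres superuser.
     * Internal commands such as DROP DATABASE would bypass this by calling
     * the lower-level heap routines directly, the way copy.c does. */
    if (strncmp(relname, "pg_", 3) == 0 && !superuser())
        elog(ERROR, "%s: system catalogs may only be modified by the superuser",
             relname);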
[ { "msg_contents": "> \n> Bruce,\n> \n> Just running the regression tests on the latest CVS on SPARC-Linux!!\n> \n> Appart from several other ordering and precision errors I'm seeing\n> errors in constraints tests due to output/constraints.source not\n> being updated for the new error messages.\n\nI have just fixed many of these WARN problems. I am looking at the new\nresults. The first problem:\n\t\n\t====== boolean ======\n\t166,168d165\n\t< |f |f \n\t< |f |f \n\t< |f |f \n\t170a168\n\t> |f |f \n\t173a172\n\t> |f |f \n\t176a176\n\t> |f |f \n\nis because the query has no ORDER BY.\n\nThe second problem looks serious:\n\nQUERY: SET geqo TO 'off';\nQUERY: SELECT '' AS thirtysix, p1.f1 AS point1, p2.f1 AS point2, p1.f1<-> p2.f$\n FROM POINT_TBL p1, POINT_TBL p2\n ORDER BY dist, point1 using <<, point2 using <<;\nthirtysix|point1 |point2 | dist\n---------+----------+----------+----------------\n |(10,10) |(-10,0) |22.3606797749979 \n |(0,0) |(-10,0) | 10\n\nThe 'dist' is not being ordered. \n\nIn geometry we have:\n\n104c103\n\t< |(0,0) |[(0,0),(6,6)] |(-0,0) \n\t---\n\t> |(0,0) |[(0,0),(6,6)] |(0,0) \n \nI am happy to see the -0 changed to zero, but this may be just on my\nplatform. Also:\n\n\t< |(-0,0),(-20,-20) \n\t---\n\t> |(0,0),(-20,-20) \n\t213c212\n\t< |(-0,2),(-14,0) \n\t---\n\t> |(0,2),(-14,0) \n\t221c220\n\t< |(14,-0),(0,-34) \n\t---\n\t> |(14,0),(0,-34) \n\t236c235\n\nWe also have broken sorting in timespan:\n\nQUERY: SELECT '' AS fortyfive, r1.*, r2.*\n FROM TIMESPAN_TBL r1, TIMESPAN_TBL r2\n WHERE r1.f1 > r2.f1\n ORDER BY r1.f1, r2.f1;\nfortyfive|f1 |f1\n---------+-----------------------------+-----------------------------\n |@ 6 years |@ 14 secs ago\n |@ 5 mons |@ 14 secs ago\n |@ 5 mons 12 hours |@ 14 secs ago\n \nHow long has this been broken? Any idea on a cause. Obviously it is a\nsorting issue, but where?\n\n-- \nBruce Momjian maillist@candle.pha.pa.us\n\n", "msg_date": "Tue, 6 Jan 1998 14:57:01 -0500 (EST)", "msg_from": "Bruce Momjian <maillist@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: consttraints.source" } ]
[ { "msg_contents": "\nThe Mailing List Archives are now available through your Web Browser at:\n\n\thttp://www.postgresql.org/mhonarc/pgsql-questions\n\t\t- all archives converted over\n\thttp://www.postgresql.org/mhonarc/pgsql-hackers\n\t\t- Oct to present converted so far\n\nWill most likely be integrating WebGlimpse in as the search engine once I've \nfully figured that thing out :)\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Tue, 6 Jan 1998 17:10:01 -0400 (AST)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "Mailing List Archives via MHonarc" } ]
[ { "msg_contents": "Hi,\n\nI'm seeing similar problems, mainly due to failure to sort correctly\neven though there is an \"order by\" clause.\n\nI did a few tests and found that the sort sort seemed to fail when there\nwere multiple columns in the \"order by\" clause. (Not conclusive)\n\nI don't know when it 1st appeared as I've been trying to compile on \nSPARC-Linux for the past few attempts and this is the 1st time I've\nhad a fully working package to run the regression tests on!!\n\nThanks,\nKeith.\n \n\nBruce Momjian <maillist@candle.pha.pa.us>\n> emkxp01@mtcc.demon.co.uk\n> > \n> > Bruce,\n> > \n> > Just running the regression tests on the latest CVS on SPARC-Linux!!\n> > \n> > Appart from several other ordering and precision errors I'm seeing\n> > errors in constraints tests due to output/constraints.source not\n> > being updated for the new error messages.\n> \n> I have just fixed many of these WARN problems. I am looking at the new\n> results. The first problem:\n> \t\n> \t====== boolean ======\n> \t166,168d165\n> \t< |f |f \n> \t< |f |f \n> \t< |f |f \n> \t170a168\n> \t> |f |f \n> \t173a172\n> \t> |f |f \n> \t176a176\n> \t> |f |f \n> \n> is because the query has no ORDER BY.\n> \n> The second problem looks serious:\n> \n> QUERY: SET geqo TO 'off';\n> QUERY: SELECT '' AS thirtysix, p1.f1 AS point1, p2.f1 AS point2, p1.f1<-> \np2.f$\n> FROM POINT_TBL p1, POINT_TBL p2\n> ORDER BY dist, point1 using <<, point2 using <<;\n> thirtysix|point1 |point2 | dist\n> ---------+----------+----------+----------------\n> |(10,10) |(-10,0) |22.3606797749979 \n> |(0,0) |(-10,0) | 10\n> \n> The 'dist' is not being ordered. \n> \n> In geometry we have:\n> \n> 104c103\n> \t< |(0,0) |[(0,0),(6,6)] |(-0,0) \n> \t---\n> \t> |(0,0) |[(0,0),(6,6)] |(0,0) \n> \n> I am happy to see the -0 changed to zero, but this may be just on my\n> platform. Also:\n> \n> \t< |(-0,0),(-20,-20) \n> \t---\n> \t> |(0,0),(-20,-20) \n> \t213c212\n> \t< |(-0,2),(-14,0) \n> \t---\n> \t> |(0,2),(-14,0) \n> \t221c220\n> \t< |(14,-0),(0,-34) \n> \t---\n> \t> |(14,0),(0,-34) \n> \t236c235\n> \n> We also have broken sorting in timespan:\n> \n> QUERY: SELECT '' AS fortyfive, r1.*, r2.*\n> FROM TIMESPAN_TBL r1, TIMESPAN_TBL r2\n> WHERE r1.f1 > r2.f1\n> ORDER BY r1.f1, r2.f1;\n> fortyfive|f1 |f1\n> ---------+-----------------------------+-----------------------------\n> |@ 6 years |@ 14 secs ago\n> |@ 5 mons |@ 14 secs ago\n> |@ 5 mons 12 hours |@ 14 secs ago\n> \n> How long has this been broken? Any idea on a cause. Obviously it is a\n> sorting issue, but where?\n> \n> -- \n> Bruce Momjian maillist@candle.pha.pa.us\n> \n\n", "msg_date": "Tue, 6 Jan 1998 21:11:53 +0000 (GMT)", "msg_from": "Keith Parks <emkxp01@mtcc.demon.co.uk>", "msg_from_op": true, "msg_subject": "Re: consttraints.source" }, { "msg_contents": "> \n> Hi,\n> \n> I'm seeing similar problems, mainly due to failure to sort correctly\n> even though there is an \"order by\" clause.\n> \n> I did a few tests and found that the sort sort seemed to fail when there\n> were multiple columns in the \"order by\" clause. (Not conclusive)\n\nThis is a huge help. I think I have found it. 
I just overhauled the\nreadfunc/outfunc code, so it was now very clear that was in the\nQuery.sortClause.\n\nYour hint that the it fails when there is more than one sort identifier\nwas the trick.\n\n> \n> I don't know when it 1st appeared as I've been trying to compile on \n> SPARC-Linux for the past few attempts and this is the 1st time I've\n> had a fully working package to run the regression tests on!!\n> \n> Thanks,\n> Keith.\n> \n> \n> Bruce Momjian <maillist@candle.pha.pa.us>\n> > emkxp01@mtcc.demon.co.uk\n> > > \n> > > Bruce,\n> > > \n> > > Just running the regression tests on the latest CVS on SPARC-Linux!!\n> > > \n> > > Appart from several other ordering and precision errors I'm seeing\n> > > errors in constraints tests due to output/constraints.source not\n> > > being updated for the new error messages.\n> > \n> > I have just fixed many of these WARN problems. I am looking at the new\n> > results. The first problem:\n> > \t\n> > \t====== boolean ======\n> > \t166,168d165\n> > \t< |f |f \n> > \t< |f |f \n> > \t< |f |f \n> > \t170a168\n> > \t> |f |f \n> > \t173a172\n> > \t> |f |f \n> > \t176a176\n> > \t> |f |f \n> > \n> > is because the query has no ORDER BY.\n> > \n> > The second problem looks serious:\n> > \n> > QUERY: SET geqo TO 'off';\n> > QUERY: SELECT '' AS thirtysix, p1.f1 AS point1, p2.f1 AS point2, p1.f1<-> \n> p2.f$\n> > FROM POINT_TBL p1, POINT_TBL p2\n> > ORDER BY dist, point1 using <<, point2 using <<;\n> > thirtysix|point1 |point2 | dist\n> > ---------+----------+----------+----------------\n> > |(10,10) |(-10,0) |22.3606797749979 \n> > |(0,0) |(-10,0) | 10\n> > \n> > The 'dist' is not being ordered. \n> > \n> > In geometry we have:\n> > \n> > 104c103\n> > \t< |(0,0) |[(0,0),(6,6)] |(-0,0) \n> > \t---\n> > \t> |(0,0) |[(0,0),(6,6)] |(0,0) \n> > \n> > I am happy to see the -0 changed to zero, but this may be just on my\n> > platform. Also:\n> > \n> > \t< |(-0,0),(-20,-20) \n> > \t---\n> > \t> |(0,0),(-20,-20) \n> > \t213c212\n> > \t< |(-0,2),(-14,0) \n> > \t---\n> > \t> |(0,2),(-14,0) \n> > \t221c220\n> > \t< |(14,-0),(0,-34) \n> > \t---\n> > \t> |(14,0),(0,-34) \n> > \t236c235\n> > \n> > We also have broken sorting in timespan:\n> > \n> > QUERY: SELECT '' AS fortyfive, r1.*, r2.*\n> > FROM TIMESPAN_TBL r1, TIMESPAN_TBL r2\n> > WHERE r1.f1 > r2.f1\n> > ORDER BY r1.f1, r2.f1;\n> > fortyfive|f1 |f1\n> > ---------+-----------------------------+-----------------------------\n> > |@ 6 years |@ 14 secs ago\n> > |@ 5 mons |@ 14 secs ago\n> > |@ 5 mons 12 hours |@ 14 secs ago\n> > \n> > How long has this been broken? Any idea on a cause. Obviously it is a\n> > sorting issue, but where?\n> > \n> > -- \n> > Bruce Momjian maillist@candle.pha.pa.us\n> > \n> \n> \n\n\n-- \nBruce Momjian\nmaillist@candle.pha.pa.us\n", "msg_date": "Tue, 6 Jan 1998 18:43:30 -0500 (EST)", "msg_from": "Bruce Momjian <maillist@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: consttraints.source" }, { "msg_contents": "> \n> Hi,\n> \n> I'm seeing similar problems, mainly due to failure to sort correctly\n> even though there is an \"order by\" clause.\n> \n> I did a few tests and found that the sort sort seemed to fail when there\n> were multiple columns in the \"order by\" clause. (Not conclusive)\n> \n> I don't know when it 1st appeared as I've been trying to compile on \n> SPARC-Linux for the past few attempts and this is the 1st time I've\n> had a fully working package to run the regression tests on!!\n\nSort is now fixed. 
When I added UNION, I needed to add UNIQUE from\noptimizer, so I added a SortClause node to the routine. Turns out it\nwas NULL'ing it for every sort field. Should work now.\n\n-- \nBruce Momjian\nmaillist@candle.pha.pa.us\n", "msg_date": "Tue, 6 Jan 1998 18:56:52 -0500 (EST)", "msg_from": "Bruce Momjian <maillist@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: consttraints.source" }, { "msg_contents": "> > I'm seeing similar problems, mainly due to failure to sort correctly\n> > even though there is an \"order by\" clause.\n> >\n> > I did a few tests and found that the sort sort seemed to fail when there\n> > were multiple columns in the \"order by\" clause. (Not conclusive)\n>\n> This is a huge help. I think I have found it. I just overhauled the\n> readfunc/outfunc code, so it was now very clear that was in the\n> Query.sortClause.\n>\n> Your hint that the it fails when there is more than one sort identifier\n> was the trick.\n\nAh, Keith beat me to the test :) fwiw, the problem was introduced between 971227\nand 980101...\n\n - Tom\n\n", "msg_date": "Wed, 07 Jan 1998 01:36:20 +0000", "msg_from": "\"Thomas G. Lockhart\" <lockhart@alumni.caltech.edu>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: consttraints.source" } ]
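For anyone wondering how an ORDER BY can work with one key and silently fail with several, the bug described above reduces to per-key state being reset inside the loop over sort columns. The program below is purely illustrative (it is not the optimizer code); the commented-out line shows the shape of the mistake.

    #include <stdio.h>

    #define MAXCOLS 8

    int
    main(void)
    {
        int     sortkeys[MAXCOLS];
        int     nkeys = 0;
        int     ncols = 3;
        int     col;

        for (col = 0; col < ncols; col++)
        {
            /* nkeys = 0;   <-- the bug: resetting here discards the keys
             *                  already built, so only the last ORDER BY
             *                  column ever takes effect                   */
            sortkeys[nkeys++] = col;
        }

        printf("sort keys kept: %d of %d\n", nkeys, ncols);
        return 0;
    }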
[ { "msg_contents": "> I created a table with two columns of type int, and loaded about 300 K records\n> in it. So, the total size of the table is approx. that of 600 K integers,\n> roughly 2.4 MB.\n> But, the file corresponding to the table in pgsql/data/base directory\n> has a size of 19 MB. I was wondering if I have done something wrong in\n> the installation or usage, or is it the normal behavior ?\n\n48 bytes + each row header (on my aix box..._your_ mileage may vary)\n 8 bytes + two int fields @ 4 bytes each\n 4 bytes + pointer on page to tuple\n-------- =\n60 bytes per tuple\n\n8192 / 60 give 136 tuples per page.\n\n300000 / 136 ... round up ... need 2206 pages which gives us ...\n\n2206 * 8192 = 18,071,532\n\nSo 19 MB is about right. And this is the best to be done, unless\nyou can make do with int2s which would optimally shrink the table\nsize to 16,834,560 bytes. Any nulls in there might add a few bytes\nper offending row too, but other than that, this should be considered\nnormal postgresql behavior.\n\n> ...\n> One massive sort file...\n> ...\n\nThis one I don't know if is \"normal\"...\n\n\nDarren aka darrenk@insightdist.com\n", "msg_date": "Tue, 6 Jan 1998 18:26:31 -0500", "msg_from": "darrenk@insightdist.com (Darren King)", "msg_from_op": true, "msg_subject": "Re: [HACKERS] database size" }, { "msg_contents": "On Tue, 6 Jan 1998, Darren King wrote:\n\n> 48 bytes + each row header (on my aix box..._your_ mileage may vary)\n> 8 bytes + two int fields @ 4 bytes each\n> 4 bytes + pointer on page to tuple\n> -------- =\n> 60 bytes per tuple\n> \n> 8192 / 60 give 136 tuples per page.\n> \n> 300000 / 136 ... round up ... need 2206 pages which gives us ...\n> \n> 2206 * 8192 = 18,071,532\n> \n> So 19 MB is about right. And this is the best to be done, unless\n> you can make do with int2s which would optimally shrink the table\n> size to 16,834,560 bytes. Any nulls in there might add a few bytes\n> per offending row too, but other than that, this should be considered\n> normal postgresql behavior.\n\n\tBruce...this would be *great* to have in the FAQ!! What we do need is\na section of the User Manual dealing with computing resources required for\na table, similar to this :)\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Tue, 6 Jan 1998 20:32:39 -0400 (AST)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] database size" }, { "msg_contents": "> \n> > I created a table with two columns of type int, and loaded about 300 K records\n> > in it. So, the total size of the table is approx. that of 600 K integers,\n> > roughly 2.4 MB.\n> > But, the file corresponding to the table in pgsql/data/base directory\n> > has a size of 19 MB. I was wondering if I have done something wrong in\n> > the installation or usage, or is it the normal behavior ?\n> \n> 48 bytes + each row header (on my aix box..._your_ mileage may vary)\n> 8 bytes + two int fields @ 4 bytes each\n> 4 bytes + pointer on page to tuple\n> -------- =\n> 60 bytes per tuple\n> \n> 8192 / 60 give 136 tuples per page.\n> \n> 300000 / 136 ... round up ... need 2206 pages which gives us ...\n> \n> 2206 * 8192 = 18,071,532\n> \n> So 19 MB is about right. And this is the best to be done, unless\n> you can make do with int2s which would optimally shrink the table\n> size to 16,834,560 bytes. 
Any nulls in there might add a few bytes\n> per offending row too, but other than that, this should be considered\n> normal postgresql behavior.\n\n\nNice math exercise.\n\nDoes anyone want to tell me the row overhead on commercial databases?\n\n-- \nBruce Momjian\nmaillist@candle.pha.pa.us\n", "msg_date": "Tue, 6 Jan 1998 20:18:26 -0500 (EST)", "msg_from": "Bruce Momjian <maillist@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] database size" }, { "msg_contents": "> \n> On Tue, 6 Jan 1998, Darren King wrote:\n> \n> > 48 bytes + each row header (on my aix box..._your_ mileage may vary)\n> > 8 bytes + two int fields @ 4 bytes each\n> > 4 bytes + pointer on page to tuple\n> > -------- =\n> > 60 bytes per tuple\n> > \n> > 8192 / 60 give 136 tuples per page.\n> > \n> > 300000 / 136 ... round up ... need 2206 pages which gives us ...\n> > \n> > 2206 * 8192 = 18,071,532\n> > \n> > So 19 MB is about right. And this is the best to be done, unless\n> > you can make do with int2s which would optimally shrink the table\n> > size to 16,834,560 bytes. Any nulls in there might add a few bytes\n> > per offending row too, but other than that, this should be considered\n> > normal postgresql behavior.\n> \n> \tBruce...this would be *great* to have in the FAQ!! What we do need is\n> a section of the User Manual dealing with computing resources required for\n> a table, similar to this :)\n\nAdded to FAQ.\n\n-- \nBruce Momjian\nmaillist@candle.pha.pa.us\n", "msg_date": "Tue, 6 Jan 1998 22:02:42 -0500 (EST)", "msg_from": "Bruce Momjian <maillist@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] database size" } ]
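The arithmetic in the thread above is easy to wrap in a few lines of C for estimating other table layouts. The constants are the ones quoted in the thread (48-byte tuple header on AIX, 4-byte line pointer, 8K block) and will vary by platform and version, so treat the result as an estimate; 2206 pages at 8192 bytes each works out to 18,071,552 bytes, i.e. the roughly 19 MB file observed.

    #include <stdio.h>
    #include <math.h>

    int
    main(void)
    {
        double  blcksz  = 8192;     /* BLCKSZ                                   */
        double  header  = 48;       /* per-tuple header, figure from the thread */
        double  lineptr = 4;        /* per-tuple pointer on the page            */
        double  data    = 8;        /* two int4 columns                         */
        double  ntuples = 300000;

        double  tupsize  = header + data + lineptr;         /* 60   */
        double  per_page = floor(blcksz / tupsize);          /* 136  */
        double  pages    = ceil(ntuples / per_page);         /* 2206 */

        printf("estimated table size: %.0f bytes\n", pages * blcksz);
        return 0;
    }

(Compile with -lm for floor/ceil.)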
[ { "msg_contents": "Hi All,\n\nI'm just investigating some regression failures on a SPARC-Linux\nbuild of PostgreSQL from the 7th Jan CVS tree.\n\nAlthough there are many failures I can't explain, the failure\nof the sanity_check VACUUM has attracted my interest.\n\nDoing a VACUUM on the regression database I'm getting:-\n\nregression=> vacuum;\nABORT: nodeRead: Bad type 0\nregression=>\n\nThe log shows:-\n\nDEBUG: Rel equipment_r: Pages 1: Changed 0, Reapped 0, Empty 0, New 0; Tup 4: \nVac 0, Crash 0, UnUsed 0, MinLen 62, MaxLen 75; Re-using: Free/Avail. Space 0/0; \nEndEmpty/Avail. Pages 0/0. Elapsed 0/0 sec.\nDEBUG: Rel iportaltest: Pages 1: Changed 0, Reapped 0, Empty 0, New 0; Tup 2: \nVac 0, Crash 0, UnUsed 0, MinLen 120, MaxLen 120; Re-using: Free/Avail. Space \n0/0; EndEmpty/Avail. Pages 0/0. Elapsed 0/0 sec.\nABORT: nodeRead: Bad type 0\n\n\n\nSo what comes next?\n\nIt looks like the Rel iportaltest vac'd OK.\n\nIf I VACUUM each relation individually everything seems to be OK.\n\nIf I try to vacuum a VIEW I get the same error.\n\nregression=> vacuum toyemp;\nABORT: nodeRead: Bad type 0\nregression=> \n\n\nAnyone have any insight into this?\n\nKeith.\n\n", "msg_date": "Wed, 7 Jan 1998 00:35:20 +0000 (GMT)", "msg_from": "Keith Parks <emkxp01@mtcc.demon.co.uk>", "msg_from_op": true, "msg_subject": "VACUUM error on CVS build 07-JAN-98" }, { "msg_contents": "> \n> Hi All,\n> \n> I'm just investigating some regression failures on a SPARC-Linux\n> build of PostgreSQL from the 7th Jan CVS tree.\n> \n> Although there are many failures I can't explain, the failure\n> of the sanity_check VACUUM has attracted my interest.\n> \n> Doing a VACUUM on the regression database I'm getting:-\n> \n> regression=> vacuum;\n> ABORT: nodeRead: Bad type 0\n> regression=>\n> \n> The log shows:-\n> \n> DEBUG: Rel equipment_r: Pages 1: Changed 0, Reapped 0, Empty 0, New 0; Tup 4: \n> Vac 0, Crash 0, UnUsed 0, MinLen 62, MaxLen 75; Re-using: Free/Avail. Space 0/0; \n> EndEmpty/Avail. Pages 0/0. Elapsed 0/0 sec.\n> DEBUG: Rel iportaltest: Pages 1: Changed 0, Reapped 0, Empty 0, New 0; Tup 2: \n> Vac 0, Crash 0, UnUsed 0, MinLen 120, MaxLen 120; Re-using: Free/Avail. Space \n> 0/0; EndEmpty/Avail. Pages 0/0. Elapsed 0/0 sec.\n> ABORT: nodeRead: Bad type 0\n> \n> \n> \n> So what comes next?\n> \n> It looks like the Rel iportaltest vac'd OK.\n> \n> If I VACUUM each relation individually everything seems to be OK.\n> \n> If I try to vacuum a VIEW I get the same error.\n> \n> regression=> vacuum toyemp;\n> ABORT: nodeRead: Bad type 0\n> regression=> \n> \n> \n> Anyone have any insight into this?\n> \n> Keith.\n> \n> \n> \n\nTry the newest version. I think I fixed it.\n\n-- \nBruce Momjian\nmaillist@candle.pha.pa.us\n", "msg_date": "Tue, 6 Jan 1998 21:16:20 -0500 (EST)", "msg_from": "Bruce Momjian <maillist@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] VACUUM error on CVS build 07-JAN-98" } ]
[ { "msg_contents": "Hi,\n\nAnyone have any ideas about this error in the horology regression test?\n\nThe platform is SPARC-Linux running the latest CVS build.\n\nKeith.\n\n\nQUERY: CREATE TABLE TEMP_DATETIME (f1 datetime);\nQUERY: INSERT INTO TEMP_DATETIME (f1)\n SELECT d1 FROM DATETIME_TBL\n WHERE d1 BETWEEN '13-jun-1957' AND '1-jan-1997'\n OR d1 BETWEEN '1-jan-1999' AND '1-jan-2010';\nABORT: floating point exception! The last floating point operation either \nexceeded legal ranges or was a divide by zero\nQUERY: SELECT '' AS ten, f1 AS datetime\n FROM TEMP_DATETIME\n ORDER BY datetime;\nten|datetime\n---+--------\n(0 rows) \n\n", "msg_date": "Wed, 7 Jan 1998 00:39:57 +0000 (GMT)", "msg_from": "Keith Parks <emkxp01@mtcc.demon.co.uk>", "msg_from_op": true, "msg_subject": "Another regression test failure." } ]
[ { "msg_contents": "> I can take a stab at this tonite after work now that the snapshot is there.\n> Still have around some of the files/diffs from looking at this a year ago...\n> \n> I don't think it will be hard, just a few files with BLCKSZ/MAXBLCKSZ\n> references to check for breakage. Appears that only one bit of lp_flags is\n> being used too, so that would seem to allow up to 32k blocks.\n\nI have finished \"fixing\" the code for this and have a test system of postgres\nrunning with 4k blocks right now. Tables appear to take about 10% less space.\nSimple btree indices are taking the same as with 8k blocks. Regression is\nrunning now and is going smoothly.\n\nNow for the question...\n\nIn backend/access/nbtree/nbtsort.c, ---> #define TAPEBLCKSZ (MAXBLCKSZ << 2)\n\nSo far MAXBLCKSZ has been equal to BLCKSZ. What effect will a MAXBLCKSZ=32768\nhave on these tape files? Should I leave it as MAXBLCKSZ this big or change\nthem to BLCKSZ to mirror the real block size being used?\n\n\n> I can check the aix compiler, but what does gcc and other compilers do with\n> bit field alignment?\n\nThe ibm compiler allocates the ItemIdData as four bytes. My C book says though\nthat the individual compiler is free to align bit fields however it chooses.\nThe bit-fields might not always be packed or allowed to cross integer boundaries.\n\ndarrenk\n", "msg_date": "Tue, 6 Jan 1998 19:52:42 -0500", "msg_from": "darrenk@insightdist.com (Darren King)", "msg_from_op": true, "msg_subject": "Tape files and MAXBLCKSZ vs. BLCKSZ" }, { "msg_contents": "> \n> > I can take a stab at this tonite after work now that the snapshot is there.\n> > Still have around some of the files/diffs from looking at this a year ago...\n> > \n> > I don't think it will be hard, just a few files with BLCKSZ/MAXBLCKSZ\n> > references to check for breakage. Appears that only one bit of lp_flags is\n> > being used too, so that would seem to allow up to 32k blocks.\n> \n> I have finished \"fixing\" the code for this and have a test system of postgres\n> running with 4k blocks right now. Tables appear to take about 10% less space.\n> Simple btree indices are taking the same as with 8k blocks. Regression is\n> running now and is going smoothly.\n> \n> Now for the question...\n> \n> In backend/access/nbtree/nbtsort.c, ---> #define TAPEBLCKSZ (MAXBLCKSZ << 2)\n> \n> So far MAXBLCKSZ has been equal to BLCKSZ. What effect will a MAXBLCKSZ=32768\n> have on these tape files? Should I leave it as MAXBLCKSZ this big or change\n> them to BLCKSZ to mirror the real block size being used?\n> \n\nI would keep it equal to BLCKSZ. I see no reason to make it different,\nunless the btree sorting is expecting to take 2x the block size. Vadim\nmay know.\n\n\n> \n> > I can check the aix compiler, but what does gcc and other compilers do with\n> > bit field alignment?\n> \n> The ibm compiler allocates the ItemIdData as four bytes. My C book says though\n> that the individual compiler is free to align bit fields however it chooses.\n> The bit-fields might not always be packed or allowed to cross integer boundaries.\n> \n> darrenk\n> \n> \n\n\n-- \nBruce Momjian\nmaillist@candle.pha.pa.us\n", "msg_date": "Tue, 6 Jan 1998 21:19:16 -0500 (EST)", "msg_from": "Bruce Momjian <maillist@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Tape files and MAXBLCKSZ vs. BLCKSZ" } ]
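To make the open question above concrete: the define quoted from nbtsort.c ties the sort-tape block size to MAXBLCKSZ, and the suggestion is to tie it to the configured block size instead. This is shown only to illustrate the two options; whether the 4x factor itself matters to the tape format is what still needs Vadim's confirmation.

    /* Currently in backend/access/nbtree/nbtsort.c (as quoted above):
     *
     *     #define TAPEBLCKSZ   (MAXBLCKSZ << 2)
     *
     * The alternative under discussion follows the configured block size,
     * so the tape block scales with whatever BLCKSZ the backend is built with:
     */
    #define TAPEBLCKSZ      (BLCKSZ << 2)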
[ { "msg_contents": "\nHi...\n\n\tI got ahold of Julie today (maintainer of PostODBC) about including the\nPostODBC stuff as part of the general distribution, so that we pretty much\nhad all the interfaces covered.\n\n\tJulie agreed, and uploaded a zip file of the current sources for me to\nintegrate into the source tree...which I did...and then *very* quickly\nundid...PostODBC falls under LGPL, and therefore can't be included as part of\nour source distribution without contaminating our code :(\n\n\tDoes anyone know of *any* way around this? Like, can a section of our\ndistribution contain software that falls under LGPL without it affecting *our*\ncopyright (Berkeley)? Or does it have to remain completely seperate? Its\neffectively a seperate package, but because its wrapped in our \"tar\" file\nfor distribution, how does that affect things?\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Tue, 6 Jan 1998 21:07:44 -0400 (AST)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "PostODBC..." }, { "msg_contents": "The Hermit Hacker wrote:\n\n> Hi...\n>\n> I got ahold of Julie today (maintainer of PostODBC) about including the\n> PostODBC stuff as part of the general distribution, so that we pretty much\n> had all the interfaces covered.\n>\n> Julie agreed, and uploaded a zip file of the current sources for me to\n> integrate into the source tree...which I did...and then *very* quickly\n> undid...PostODBC falls under LGPL, and therefore can't be included as part of\n> our source distribution without contaminating our code :(\n>\n> Does anyone know of *any* way around this? Like, can a section of our\n> distribution contain software that falls under LGPL without it affecting *our*\n> copyright (Berkeley)? Or does it have to remain completely seperate? Its\n> effectively a seperate package, but because its wrapped in our \"tar\" file\n> for distribution, how does that affect things?\n\nI'm no expert, but (for example) RedHat distributes Linux as well as commercial\nproducts on the same CDROM. There are separate licensing statements for each\ncategory of software. It would seem to be the same issue with us; we aren't\n_forcing_ someone to use both categories...\n\n - Tom\n\n", "msg_date": "Wed, 07 Jan 1998 02:02:54 +0000", "msg_from": "\"Thomas G. Lockhart\" <lockhart@alumni.caltech.edu>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] PostODBC..." }, { "msg_contents": "On Wed, 7 Jan 1998, Thomas G. Lockhart wrote:\n\n> I'm no expert, but (for example) RedHat distributes Linux as well as commercial\n> products on the same CDROM. There are separate licensing statements for each\n> category of software. It would seem to be the same issue with us; we aren't\n> _forcing_ someone to use both categories...\n\n\tRight, this I have no problems with...but, would that mean that we could\ndistribute it as PostODBC.tar.gz on the same CD as PostgreSQL-v6.3.tar.gz, or\nas part of the overall tar file? Where does the line get drawn? :(\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Tue, 6 Jan 1998 22:19:15 -0400 (AST)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] PostODBC..." } ]
[ { "msg_contents": "Forwarded message:\n> \n> Update of /usr/local/cvsroot/pgsql/src/interfaces/odbc/src/socket\n> In directory hub.org:/home/staff/scrappy/src/pgsql/src/interfaces/odbc/src/socket\n> \n> Removed Files:\n> \tcompat.h connect.h connectp.cpp errclass.cpp errclass.h \n> \tsockio.cpp sockio.h wrapper.cpp wrapper.h \n> Log Message:\n> \n> Can't include this...it falls under GPL...it will contaminate all the\n> other code :(\n\nCan't we just GPL that directory? We already distribute the source.\n\n-- \nBruce Momjian\nmaillist@candle.pha.pa.us\n", "msg_date": "Tue, 6 Jan 1998 21:20:31 -0500 (EST)", "msg_from": "Bruce Momjian <maillist@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "[COMMITTERS] 'pgsql/src/interfaces/odbc/src/socket compat.h connect.h\n\tconnectp.cpp errclass.cpp errclass.h sockio.cpp sockio.h wO (fwd)" }, { "msg_contents": "On Tue, 6 Jan 1998, Bruce Momjian wrote:\n\n> > Can't include this...it falls under GPL...it will contaminate all the\n> > other code :(\n> \n> Can't we just GPL that directory? We already distribute the source.\n\n\tThis is what I'm curious about...can the GPL be directory specific?\n\n\tWhat do you mean \"we already distribute the source\"? The source to what?\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Tue, 6 Jan 1998 22:37:39 -0400 (AST)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: [COMMITTERS] 'pgsql/src/interfaces/odbc/src/socket compat.h\n\tconnect.h connectp.cpp errclass.cpp errclass.h sockio.cpp\n\tsockio.h wO (fwd)" }, { "msg_contents": "> \n> On Tue, 6 Jan 1998, Bruce Momjian wrote:\n> \n> > > Can't include this...it falls under GPL...it will contaminate all the\n> > > other code :(\n> > \n> > Can't we just GPL that directory? We already distribute the source.\n> \n> \tThis is what I'm curious about...can the GPL be directory specific?\n> \n> \tWhat do you mean \"we already distribute the source\"? The source to what?\n\nThe Postodbc source is already distributed. It is not like we are\ngiving people a simple binary.\n\n-- \nBruce Momjian\nmaillist@candle.pha.pa.us\n", "msg_date": "Tue, 6 Jan 1998 22:10:17 -0500 (EST)", "msg_from": "Bruce Momjian <maillist@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: [COMMITTERS] 'pgsql/src/interfaces/odbc/src/socket compat.h\n\tconnect.h connectp.cpp errclass.cpp errclass.h sockio.cpp sockio" } ]
[ { "msg_contents": "Do we have code to preserve grants in pg_dump? I marked it in the TODO\nlist as completed, but I am not sure it was done.\n\n-- \nBruce Momjian\nmaillist@candle.pha.pa.us\n", "msg_date": "Tue, 6 Jan 1998 22:40:44 -0500 (EST)", "msg_from": "Bruce Momjian <maillist@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "pg_dump and groups" }, { "msg_contents": "On Tue, 6 Jan 1998, Bruce Momjian wrote:\n\n> Do we have code to preserve grants in pg_dump? I marked it in the TODO\n> list as completed, but I am not sure it was done.\n\n\tYup *nod*\n\nMarc G. Fournier                                \nSystems Administrator @ hub.org \nprimary: scrappy@hub.org           secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Tue, 6 Jan 1998 23:55:21 -0400 (AST)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] pg_dump and groups" }, { "msg_contents": "> \n> On Tue, 6 Jan 1998, Bruce Momjian wrote:\n> \n> > Do we have code to preserve grants in pg_dump? I marked it in the TODO\n> > list as completed, but I am not sure it was done.\n> \n> \tYup *nod*\n\n6.3, a \"no excuses\" release.\n\n-- \nBruce Momjian\nmaillist@candle.pha.pa.us\n", "msg_date": "Tue, 6 Jan 1998 23:01:37 -0500 (EST)", "msg_from": "Bruce Momjian <maillist@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] pg_dump and groups" } ]
[ { "msg_contents": "\nHi...\n\n\tWell, I decided to use WebGlimpse to provide a search engine in front of\nthe MHonarc archives...it's slower than I'd like, but that's what the upgrade\nis for (and many other things)...\n\n\tpgsql-hackers and pgsql-questions are currently searchable\n\n\tAnd, finally, you can access it all, until Neil gets hooks in place on the\nregular pages, at:\n\n\thttp://www.postgresql.org/mhonarc\n\n\nMarc G. Fournier                                \nSystems Administrator @ hub.org \nprimary: scrappy@hub.org           secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Wed, 7 Jan 1998 00:04:06 -0400 (AST)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "Search engine in place..." } ]
[ { "msg_contents": "I have fixed the Node 0 problem with views. It was added as part of my\nreadnode/outnode additions.\n\nAt this point, I think I am done with the readnode/outnode changes. \nShould make rewrite system a little more robust.\n\n-- \nBruce Momjian\nmaillist@candle.pha.pa.us\n", "msg_date": "Wed, 7 Jan 1998 03:07:32 -0500 (EST)", "msg_from": "Bruce Momjian <maillist@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "fix for views and outnodes" } ]
[ { "msg_contents": "Bruce Momjian <maillist@candle.pha.pa.us>\n> emkxp01@mtcc.demon.co.uk\n> > \n> > Hi,\n> > \n> > I'm seeing similar problems, mainly due to failure to sort correctly\n> > even though there is an \"order by\" clause.\n> > \n> > I did a few tests and found that the sort sort seemed to fail when there\n> > were multiple columns in the \"order by\" clause. (Not conclusive)\n> > \n> > I don't know when it 1st appeared as I've been trying to compile on \n> > SPARC-Linux for the past few attempts and this is the 1st time I've\n> > had a fully working package to run the regression tests on!!\n> \n> Sort is now fixed. When I added UNION, I needed to add UNIQUE from\n> optimizer, so I added a SortClause node to the routine. Turns out it\n> was NULL'ing it for every sort field. Should work now.\n\nThanks for the quick response, I'm building the new code now so will know\nin the morning how it stands.\n\nI'm building the whole thing with -O instead of -O2 to see if it helps\nwith some of the other errors I'm seeing. ( an old problem with gcc and\nthe SPARC processor makes me suspicious)\n\nLater...\n\nThe sorting problems are fixes but I'm still getting many fails.\n\nWill investigate...\n\nKeith.\n\n", "msg_date": "Wed, 7 Jan 1998 09:40:41 +0000 (GMT)", "msg_from": "Keith Parks <emkxp01@mtcc.demon.co.uk>", "msg_from_op": true, "msg_subject": "Re: consttraints.source" } ]
[ { "msg_contents": "> \tJulie agreed, and uploaded a zip file of the current sources for me to\n> integrate into the source tree...which I did...and then *very* quickly\n> undid...PostODBC falls under LGPL, and therefore can't be included as part of\n> our source distribution without contaminating our code :(\n> \n> \tDoes anyone know of *any* way around this? Like, can a section of our\n> distribution contain software that falls under LGPL without it affecting *our*\n> copyright (Berkeley)? Or does it have to remain completely seperate? Its\n> effectively a seperate package, but because its wrapped in our \"tar\" file\n> for distribution, how does that affect things?\n> \n> Marc G. Fournier \n> Systems Administrator @ hub.org \n> primary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org \n> \n> \nBy LGPL, I assume you mean the library version of GPL. I thought that the whole\npoint of the library version was that it didn't contaminate any other code.\nThat's how commercial products can release executables which are linked with\nthe GNU libc (or whatever) without the whole product falling under the GPL.\n\n\nAndrew\n\n----------------------------------------------------------------------------\nDr. Andrew C.R. Martin University College London\nEMAIL: (Work) martin@biochem.ucl.ac.uk (Home) andrew@stagleys.demon.co.uk\nURL: http://www.biochem.ucl.ac.uk/~martin\nTel: (Work) +44(0)171 419 3890 (Home) +44(0)1372 275775\n", "msg_date": "Wed, 7 Jan 1998 11:23:40 GMT", "msg_from": "Andrew Martin <martin@biochemistry.ucl.ac.uk>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] PostODBC..." } ]
[ { "msg_contents": "> > 48 bytes + each row header (on my aix box..._your_ mileage may vary)\n> > 8 bytes + two int fields @ 4 bytes each\n> > 4 bytes + pointer on page to tuple\n> > -------- =\n> > 60 bytes per tuple\n> > \n> > ...\n> \n> Nice math exercise.\n> \n> Does anyone want to tell me the row overhead on commercial databases?\n\nI've seen this for Oracle, but I _can't_ find it right now. I'll dig it\nup tonite...this is driving me nuts trying to remember where it is now.\n\nBut this I do have handy! It's an HTML page from IBM DB2 docs. A touch\nlong, but I found it to most interesting.\n\nIf there are any of the linked pages that someone else is interested in,\ncontact me and if I have it, I can send it to you off-list.\n\nDarren aka darrenk@insightdist.com\n\n<HTML>\n<HEAD>\n <TITLE>DB2 Administration Guide</TITLE>\n</HEAD>\n<BODY TEXT=\"#000000\" BGCOLOR=\"#FFFFFF\" LINK=\"#9900CC\" VLINK=\"#3366CC\" ALINK=\"#3399CC\">\n\n<H2><A NAME=\"HDRDBSIZE\"></A>Estimating Space Requirements for Tables</H2>\n\n<P>The following information provides a general rule for estimating the\nsize of a database: </P>\n\n<UL COMPACT>\n<LI><A HREF=\"#HDROPCAT\">&quot;System Catalog Tables&quot;</A> </LI>\n\n<LI><A HREF=\"#HDROPDAT\">&quot;User Table Data&quot;</A> </LI>\n\n<LI><A HREF=\"#HDROPLF\">&quot;Long Field Data&quot;</A> </LI>\n\n<LI><A HREF=\"#HDROPLOB\">&quot;Large Object (LOB) Data&quot;</A> </LI>\n\n<LI><A HREF=\"#HDROPINX\">&quot;Index Space&quot;</A> </LI>\n</UL>\n\n<P>After reading these sections, you should read <A HREF=\"sqld00025.html#HDRTBSPACE\">&quot;Designing\nand Choosing Table Spaces&quot;</A>. </P>\n\n<P>Information is not provided for the space required by such things as:\n</P>\n\n<UL COMPACT>\n<LI>The local database directory file </LI>\n\n<LI>The system database directory file </LI>\n\n<LI>The file management overhead required by the operating system, including:\n</LI>\n\n<UL COMPACT>\n<LI>file block size </LI>\n\n<LI>directory control space </LI>\n</UL>\n</UL>\n\n<P>Information such as row size and structure is precise. However, multiplication\nfactors for file overhead because of disk fragmentation, free space, and\nvariable length columns will vary in your own database since there is such\na wide range of possibilities for the column types and lengths of rows\nin a database. After initially estimating your database size, create a\ntest database and populate it with representative data. You will then find\na multiplication factor that is more accurate for your own particular database\ndesign. </P>\n\n<H3><A NAME=\"HDROPCAT\"></A>System Catalog Tables</H3>\n\n<P>When a database is initially created, system catalog tables are created.\nThese system tables will grow as user tables, views, indexes, authorizations,\nand packages are added to the database. Initially, they use approximately\n1600 KB of disk space. </P>\n\n<P>The amount of space allocated for the catalog tables depends on the\ntype of table space and the extent size for the table space. For example,\nif a DMS table space with an extent size of 32 is used, the catalog table\nspace will initially be allocated 20MB of space. For more information,\nsee <A HREF=\"sqld00025.html#HDRTBSPACE\">&quot;Designing and Choosing Table\nSpaces&quot;</A>. </P>\n\n<H3><A NAME=\"HDROPDAT\"></A>User Table Data</H3>\n\n<P>Table data is stored on 4KB pages. Each page contains 76 bytes of overhead\nfor the database manager. This leaves 4020 bytes to hold user data (or\nrows), although no row can exceed 4005 bytes in length. 
A row will <I>not</I>\nspan multiple pages. </P>\n\n<P>Note that the table data pages <B>do not</B> contain the data for columns\ndefined with LONG VARCHAR, LONG VARGRAPHIC, BLOB, CLOB, or DBCLOB data\ntypes. The rows in a table data page do, however, contain a descriptor\nof these columns. (See <A HREF=\"#HDROPLF\">&quot;Long Field Data&quot;</A>\nfor information about estimating the space required for the table objects\nthat will contain the data stored using these data types.) </P>\n\n<P>Rows are inserted into the table in a first-fit order. The file is searched\n(using a free space map) for the first available space that is large enough\nto hold the new row. When a row is updated, it is updated in place unless\nthere is insufficient room left on the 4KB page to contain it. If this\nis the case, a &quot;tombstone record&quot; is created in the original\nrow location which points to the new location in the table file of the\nupdated row. </P>\n\n<P>See <A HREF=\"#HDROPLF\">&quot;Long Field Data&quot;</A> for information\nabout how LONG VARCHAR, LONG VARGRAPHIC, BLOB, CLOB and DBCLOB data is\nstored and for estimating the space required to store these types of columns.\n</P>\n\n<P>For each user table in the database, the space needed is: </P>\n\n<PRE> (average row size + 8) * number of rows * 1.5\n</PRE>\n\n<P>The average row size is the sum of the average column sizes. For information\non the size of each column, see CREATE TABLE in the <A HREF=\"/data/db2/support/sqls00aa/sqls0.html\"><I>SQL\nReference</I>. </A></P>\n\n<P>The factor of &quot;1.5&quot; is for overhead such as page overhead\nand free space. </P>\n\n<H3><A NAME=\"HDROPLF\"></A>Long Field Data</H3>\n\n<P>If a table has LONG VARCHAR or LONG VARGRAPHIC data, in addition to\nthe byte count of 20 for the LONG VARCHAR or LONG VARGRAPHIC descriptor\n(in the table row), the data itself must be stored. Long field data is\nstored in a separate table object which is structured differently from\nthe other data types (see <A HREF=\"#HDROPDAT\">&quot;User Table Data&quot;</A>\nand <A HREF=\"#HDROPLOB\">&quot;Large Object (LOB) Data&quot;</A>). </P>\n\n<P>Data is stored in 32KB areas that are broken up into segments whose\nsizes are &quot;powers of two&quot; times 512 bytes. (Hence these segments\ncan be 512 bytes, 1024 bytes, 2048 bytes, and so on, up to 32KB.) </P>\n\n<P>They are stored in a fashion that enables free space to be reclaimed\neasily. Allocation and free space information is stored in 4KB allocation\npages, which appear infrequently throughout the object. </P>\n\n<P>The amount of unused space in the object depends on the size of the\nlong field data and whether this size is relatively constant across all\noccurrences of the data. For data entries larger than 255 bytes, this unused\nspace can be up to 50 percent of the size of the long field data. </P>\n\n<P>If character data is less than 4KB in length, the CHAR, GRAPHIC, VARCHAR,\nor VARGRAPHIC data types should be used instead of LONG VARCHAR or LONG\nVARGRAPHIC. </P>\n\n<H3><A NAME=\"HDROPLOB\"></A>Large Object (LOB) Data</H3>\n\n<P>If a table has BLOB, CLOB, or DBCLOB data, in addition to the byte count\n(between 72 and 280 bytes) for the BLOB, CLOB, or DBCLOB descriptor (in\nthe table row), the data itself must be stored. This data is stored in\ntwo separate table objects that are structured differently than other data\ntypes (see <A HREF=\"#HDROPDAT\">&quot;User Table Data&quot;</A>). 
</P>\n\n<P>To estimate the space required by large object data, you need to consider\nthe two table objects used to store data defined with these data types:\n</P>\n\n<UL>\n<LI><B>LOB Data Objects</B> </LI>\n\n<P>Data is stored in 64MB areas that are broken up into segments whose\nsizes are &quot;powers of two&quot; times 1024 bytes. (Hence these segments\ncan be 1024 bytes, 2048 bytes, 4096 bytes, and so on, up to 64MB.) </P>\n\n<P>To reduce the amount of disk space used by the LOB data, you can use\nthe COMPACT parameter on the <I>lob-options-clause</I> on the CREATE TABLE\nand ALTER TABLE statements. The COMPACT option minimizes the amount of\ndisk space required by allowing the LOB data to be split into smaller segments\nso that it will use the smallest amount of space possible. Without the\nCOMPACT option, the entire LOB value must contiguously fit into a single\nsegment. Appending to LOB values stored using the COMPACT option may result\nin slower performance compared to LOB values for which the COMPACT option\nis not specified. </P>\n\n<P>The amount of free space contained in LOB data objects will be influenced\nby the amount of update and delete activity, as well as the size of the\nLOB values being inserted. </P>\n\n<LI><B>LOB Allocation Objects</B> </LI>\n\n<P>Allocation and free space information is stored in 4KB allocation pages\nseparated from the actual data. The number of these 4KB pages is dependent\non the amount of data, including unused space, allocated for the large\nobject data. The overhead is calculated as follows: one 4KB pages for every\n64GB plus one 4KB page for every 8MB. </P>\n</UL>\n\n<P>If character data is less than 4KB in length, the CHAR, GRAPHIC, VARCHAR,\nor VARGRAPHIC data types should be used instead of BLOB, CLOB or DBCLOB.\n</P>\n\n<H3><A NAME=\"HDROPINX\"></A>Index Space</H3>\n\n<P>For each index, the space needed can be estimated as: </P>\n\n<PRE> (average index key size + 8) * number of rows * 2\n</PRE>\n\n<P>where: </P>\n\n<UL COMPACT>\n<LI>The &quot;average index key size&quot; is the byte count of each column\nin the index key. See the CREATE TABLE statement <A HREF=\"/data/db2/support/sqls00aa/sqls0.html\"><I>SQL\nReference</I> </A>for information on how to calculate the byte count for\ncolumns with different data types. (Note that to estimate the average column\nsize for VARCHAR and VARGRAPHIC columns, use an average of the current\ndata size, plus one byte. Do not use the maximum declared size.) </LI>\n\n<LI>The factor of 2 is for overhead, such as non-leaf pages and free space.\n</LI>\n</UL>\n\n<P><B>Note: </B></P>\n\n<BLOCKQUOTE>\n<P>For every column that allows nulls, add one extra byte for the null\nindicator. </P>\n</BLOCKQUOTE>\n\n<P>Temporary space is required when creating the index. The maximum amount\nof temporary space required during index creation can be estimated as:\n</P>\n\n<PRE> (average index key size + 8) * number of rows * 3.2\n</PRE>\n\n<P>where the factor of 3.2 is for index overhead as well as space required\nfor the sorting needed to create the index. </P>\n\n<P>\n<HR><B>[ <A HREF=\"sqld0.html#ToC\">Table of Contents</A>\n| <A HREF=\"sqld00022.html\">Previous Page</A> | <A HREF=\"sqld00024.html\">Next\nPage</A> ]</B> \n<HR></P>\n\n</BODY>\n</HTML>\n", "msg_date": "Wed, 7 Jan 1998 09:54:35 -0500", "msg_from": "darrenk@insightdist.com (Darren King)", "msg_from_op": true, "msg_subject": "Re: [HACKERS] database size" }, { "msg_contents": "> I've seen this for Oracle, but I _can't_ find it right now. 
I'll dig it\n> up tonite...this is driving me nuts trying to remember where it is now.\n> \n> But this I do have handy! It's an HTML page from IBM DB2 docs. A touch\n> long, but I found it to most interesting.\n> \n> If there are any of the linked pages that someone else is interested in,\n> contact me and if I have it, I can send it to you off-list.\n\nInteresting that they have \"tombstone\" records, which sounds like our\ntime travel that vacuum cleans up.\n\nThey recommend (rowsize+8) * 1.5.\n\nSounds like we are not too bad.\n\nI assume our index overhead is not as large as data rows, but still\nsignificant. I am adding a mention of it to the FAQ. That comes up\noften too.\n\n\tIndexes do not contain the same overhead, but do contain the\n\tdata that is being indexed, so they can be large also.\n\n-- \nBruce Momjian\nmaillist@candle.pha.pa.us\n", "msg_date": "Wed, 7 Jan 1998 12:18:40 -0500 (EST)", "msg_from": "Bruce Momjian <maillist@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] database size" } ]
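The sizing discussion above reduces to simple arithmetic, so here is a small C sketch that works the quoted rules of thumb through for a concrete, hypothetical table of 100,000 rows with two int4 columns. The constants and helper names are illustrative assumptions taken from the figures quoted in the thread (a ~48-byte Postgres tuple header plus a 4-byte line pointer, and DB2's "(average row size + 8) * rows * 1.5" rule); they are not actual code from either system, and real overhead varies by platform and version.

    #include <stdio.h>

    /* Rough, illustrative estimates only -- constants come from the thread above. */

    static double pg_table_bytes(double avg_data_bytes, double rows)
    {
        /* per tuple: ~48-byte header + 4-byte item pointer + the user data itself */
        return (48.0 + 4.0 + avg_data_bytes) * rows;
    }

    static double db2_table_bytes(double avg_row_bytes, double rows)
    {
        /* DB2 Administration Guide rule of thumb quoted above */
        return (avg_row_bytes + 8.0) * rows * 1.5;
    }

    static double db2_index_bytes(double avg_key_bytes, double rows)
    {
        /* DB2 index rule of thumb quoted above */
        return (avg_key_bytes + 8.0) * rows * 2.0;
    }

    int main(void)
    {
        double rows = 100000.0;   /* hypothetical table */
        double avg_row = 8.0;     /* two int4 columns, as in the example above */

        printf("Postgres-style estimate: %.0f bytes\n", pg_table_bytes(avg_row, rows));
        printf("DB2-style estimate:      %.0f bytes\n", db2_table_bytes(avg_row, rows));
        printf("DB2 index estimate (4-byte key): %.0f bytes\n",
               db2_index_bytes(4.0, rows));
        return 0;
    }

For the numbers used here, the Postgres-style estimate comes out to about 6 MB and the DB2-style estimate to about 2.4 MB, which matches the thread's point that fixed per-row overhead dominates for very narrow rows.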
[ { "msg_contents": "\nIn the next update to s_lock.c, would it be possible to add an\n#else to the #ifdef with nothing more than a ; in it? Like...\n\n#if defined (__alpha__) && defined(linux)\n\n... alpha linux code ...\n\n#else\n;\n#endif\n\nOr perhaps put a #include <stdio.h> outside the ifdef'd block?\n\nThe aix compiler requires there be _some_ sort of valid code left\nafter the pre-processor finishes with the file, and currently\nthere isn't, so my compile fails since the s_lock.c file's always\nin the make.\n\nOr could the #if be moved to the makefile to add s_lock.c to OBJS\nif defined(__alpha__) and defined(linux)?\n\n\nDarren aka darrenk@insightdist.com\n", "msg_date": "Wed, 7 Jan 1998 10:14:19 -0500", "msg_from": "darrenk@insightdist.com (Darren King)", "msg_from_op": true, "msg_subject": "Linux/Alpha's s_lock.c and other ports..." }, { "msg_contents": "\nThis was done...I didn't think the #if's shoudl be around the header\nfile stuff, so just moved them down a bit \n\n\nOn Wed, 7 Jan 1998, Darren King wrote:\n\n> \n> In the next update to s_lock.c, would it be possible to add an\n> #else to the #ifdef with nothing more than a ; in it? Like...\n> \n> #if defined (__alpha__) && defined(linux)\n> \n> ... alpha linux code ...\n> \n> #else\n> ;\n> #endif\n> \n> Or perhaps put a #include <stdio.h> outside the ifdef'd block?\n> \n> The aix compiler requires there be _some_ sort of valid code left\n> after the pre-processor finishes with the file, and currently\n> there isn't, so my compile fails since the s_lock.c file's always\n> in the make.\n> \n> Or could the #if be moved to the makefile to add s_lock.c to OBJS\n> if defined(__alpha__) and defined(linux)?\n> \n> \n> Darren aka darrenk@insightdist.com\n> \n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Wed, 7 Jan 1998 18:43:38 -0400 (AST)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Linux/Alpha's s_lock.c and other ports..." } ]
[ { "msg_contents": "Hi All,\n\nI suspect this is an O/S or platform problem but can anyone offer any\nsuggestions as to how I might locate the cause.\n\nDROP TABLE FLOAT8_TBL;\nDROP\n\nCREATE TABLE FLOAT8_TBL(f1 float8);\nCREATE\n\nINSERT INTO FLOAT8_TBL(f1) VALUES ('1.2345678901234e-200');\nINSERT 277993 1\n\nSELECT '' AS bad, : (f.f1) from FLOAT8_TBL f;\nABORT: floating point exception! The last floating point operation either \nexceeded legal ranges or was a divide by zero\n\n\nThe ABORT message comes from tcop.c when we are hit by a FPE signal by\nthe operating system.\n\n.....\n\nHere's some additional tests that seem to show the threshold.\n\npostgres=> CREATE TABLE FLOAT8_TBL(f1 float8);\nCREATE\npostgres=> INSERT INTO FLOAT8_TBL(f1) VALUES ('1.2345678901234e-150');\nINSERT 278057 1\npostgres=> SELECT '' AS bad, : (f.f1) from FLOAT8_TBL f;\nbad|?column?\n---+--------\n | 1\n(1 row)\n\npostgres=> INSERT INTO FLOAT8_TBL(f1) VALUES ('1.2345678901234e-151');\nINSERT 278058 1\npostgres=> SELECT '' AS bad, : (f.f1) from FLOAT8_TBL f;\nABORT: floating point exception! The last floating point operation either \nexceeded legal ranges or was a divide by zero\n\nKeith.\n\n", "msg_date": "Wed, 7 Jan 1998 16:11:09 +0000 (GMT)", "msg_from": "Keith Parks <emkxp01@mtcc.demon.co.uk>", "msg_from_op": true, "msg_subject": "Floating point exceptions." } ]