[ { "msg_contents": "Hello!\n\nI try to use libpq++ (linux: redhat4.2, postgresql-(devel)?-6.2 GNU c)\nAnd i have problem with linking:\n\n[root@3dom neko]# g++ -I/usr/include/postgres/ -lpq++ -o a a.cc \n/tmp/cca032541.o: In function `main':\n/tmp/cca032541.o(.text+0x22): undefined reference to PgConnection::Exec(char\nconst *)'\n...\nAnd more like this. But where is this famous exec?\n\n[root@3dom neko]# nm -C /usr/lib/libpq++.a | grep 'Exec(char'\n00000348 T PgConnection::Exec(char const *)\n U PgConnection::Exec(char const *)\n\nAll right. In libpq++.\nAnd I use this lib?\n[root@3dom neko]# mv /usr/lib/libpq++.a . \n[root@3dom neko]# g++ -I/usr/include/postgres/ -lpq -lpq++ -o e e.cc \nld: cannot open -lpq++: No such file or directory\nYes. We were use before this.\nAnd include-s is too works.\n\nMy first program:\n--------\tx\t--------\n#include <postgres/libpq++.h>\n#ifndef PGDATABASE_H\n main(){;};\n#else // PGDATABASE_H\n#include <iostream.h>\nmain(){\n ExecStatusType XStat;\n PgDatabase conn(\"tval\");\n XStat=conn.Exec(\"select oid from szak order by oid ;\");\n for (int i=0,max=conn.Tuples();max>i;i++) {\n cout << conn.GetValue(i,0) << endl;\n }\n}\n#endif // PGDATABASE_H\n--------\tx\t--------\n\nI mean, i try all what i can. But somewhere i must done a mistake. \nBUT where?!\n\nPlease help me...\n\nsprintf (\"`-''-/\").___..--''\"`-._ Error In\n(\"%|s\", `6_ 6 ) `-. ( ).`-.__.`) Loading Object\n\"Petike\" (_Y_.)' ._ ) `._ `. ``-..-' line:3\n/* Neko */ _..`--'_..-_/ /--'_.' 
,' Before /*Neko*/\n ); (il),-'' (li),' ((!.-'\n\n", "msg_date": "Thu, 22 Jan 1998 18:51:01 +0100 (NFT)", "msg_from": "\"Vazsonyi Peter[ke]\" <neko@kornel.szif.hu>", "msg_from_op": true, "msg_subject": "libpq++ linking" }, { "msg_contents": "On Thu, 22 Jan 1998, Vazsonyi Peter[ke] wrote:\n\n> Hello!\n> \n> I try to use libpq++ (linux: redhat4.2, postgresql-(devel)?-6.2 GNU c)\n> And i have problem with linking:\n> \n> [root@3dom neko]# g++ -I/usr/include/postgres/ -lpq++ -o a a.cc \n> /tmp/cca032541.o: In function `main':\n> /tmp/cca032541.o(.text+0x22): undefined reference to PgConnection::Exec(char\n> const *)'\n> ...\n> And more like this. But where is this famous exec?\n> \n> [root@3dom neko]# nm -C /usr/lib/libpq++.a | grep 'Exec(char'\n> 00000348 T PgConnection::Exec(char const *)\n> U PgConnection::Exec(char const *)\n> \n> All right. In libpq++.\n> And I use this lib?\n> [root@3dom neko]# mv /usr/lib/libpq++.a . \n> [root@3dom neko]# g++ -I/usr/include/postgres/ -lpq -lpq++ -o e e.cc \n> ld: cannot open -lpq++: No such file or directory\n> Yes. We were use before this.\n\nTry putting the -lpq++ *after* your input files, ie. last on the line.\n\nMaarten\n\n_____________________________________________________________________________\n| Maarten Boekhold, Faculty of Electrical Engineering TU Delft, NL |\n| Computer Architecture and Digital Technique section |\n| M.Boekhold@et.tudelft.nl |\n-----------------------------------------------------------------------------\n\n", "msg_date": "Thu, 22 Jan 1998 21:22:20 +0100 (MET)", "msg_from": "Maarten Boekhold <maartenb@dutepp0.et.tudelft.nl>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] libpq++ linking" } ]
[ { "msg_contents": "subscribe\n\n", "msg_date": "Thu, 22 Jan 1998 19:33:17 +0100", "msg_from": "Sergio Brandano <serbr@tin.it>", "msg_from_op": true, "msg_subject": "None" } ]
[ { "msg_contents": "Here is a new developer's FAQ. I have asked it to be added to our web\npage. I will add it to the distribution under /tools.\n\nComments? I want to add some stuff on Nodes, and palloc, and fopen().\n\n\n---------------------------------------------------------------------------\n\nDevelopers Frequently Asked Questions (FAQ) for PostgreSQL\n\nLast updated: Thu Jan 22 14:41:11 EST 1998\n\nCurrent maintainer: Bruce Momjian (maillist@candle.pha.pa.us)\n\nThe most recent version of this document can be viewed at the postgreSQL Web\nsite, http://postgreSQL.org.\n\n ------------------------------------------------------------------------\n\nQuestions answered:\n\n1) General questions\n\n1) What tools are available for developers?\n ------------------------------------------------------------------------\n\n1) What tools are available for developers?\n\nAside from the User documentation mentioned in the regular FAQ, there are\nseveral development tools available. First, all the files in the /tools\ndirectory are designed for developers.\n\n RELEASE_CHANGES changes we have to make for each release\n SQL_keywords standard SQL'92 keywords\n backend web flowchart of the backend directories\n ccsym find standard defines made by your compiler\n entab converts tabs to spaces, used by pgindent\n find_static finds functions that could be made static\n find_typedef get a list of typedefs in the source coe\n make_ctags make vi 'tags' file in each directory\n make_diff make *.orig and diffs of source\n make_etags make emacs 'etags' files\n make_keywords.README make comparison of our keywords and SQL'92\n make_mkid make mkid ID files\n mkldexport create AIX exports file\n pgindent indents C source files\n\nLet me note some of these. If you point your browser at the tools/backend\ndirectory, you will see all the backend components in a flow chart. You can\nclick on any one to see a description. 
If you then click on the directory\nname, you will be taken to the source directory, to browse the actual source\ncode behind it. We also have several README files in some source directories\nto describe the function of the module. The browser will display these when\nyou enter the directory also. The tools/backend directory is also contained\non our web page under the title Backend Flowchart.\n\nSecond, you really should have an editor that can handle tags, so you can\ntag a function call to see the function definition, and then tag inside that\nfunction to see an even lower-level function, and then back out twice to\nreturn to the original function. Most editors support this via tags or etags\nfiles.\n\nThird, you need to get mkid from ftp.postgresql.org. By running\ntools/make_mkid, an archive of source symbols can be created that can be\nrapidly queried like grep or edited.\n\nmake_diff has tools to create patch diff files that can be applied to the\ndistribution.\n\npgindent will format source files to match our standard format, which has\nfour-space tabs, and an indenting format specified by flags to the your\noperating system's utility indent.\n\n-- \nBruce Momjian\nmaillist@candle.pha.pa.us\n", "msg_date": "Thu, 22 Jan 1998 14:43:13 -0500 (EST)", "msg_from": "Bruce Momjian <maillist@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "New Developers FAQ" } ]
[ { "msg_contents": "Hi all,\n\nSince the \"fix\" for varchars in \"SELECT INTO\" I'm getting a core dump on\ninitdb when initializing the database.\n\nI've tracked this down to the \"REVOKE ALL ON pg_user FROM public\" step.\n\nLooking at the (not compiled with -g) core I see.\n(gdb seems to be broken on S/Linux as I can't get a backtrace)\n\nGDB 4.16 (sparc-unknown-linux), Copyright 1996 Free Software Foundation, Inc...\nCore was generated by `postgres -F -Q -D/usr/local/pgsql/data template1'.\nProgram terminated with signal 11, Segmentation fault.\nReading symbols from /lib/libdl.so.1.8.3...done.\nReading symbols from /lib/libm.so.5.0.6...done.\nReading symbols from /usr/lib/libreadline.so.2.0...done.\nReading symbols from /lib/libtermcap.so.2.0.8...done.\nReading symbols from /lib/libc.so.5.3.12...done.\nReading symbols from /lib/ld-linux.so.1...done.\n#0 0xc7f38 in OrderedElemGetBase ()\n(gdb) bt\n#0 0xc7f38 in OrderedElemGetBase ()\n(gdb) \n\nOrderedElemGetBase() seems to have something to do with extracting stuff :-)\nfrom structures so I wonder if somehow the new...\n\n#define USE_ATTTYPMOD(typeid) ((typeid) == BPCHAROID || (typeid) == \nVARCHAROID)\n\nin include/catalog/pg_type.h is somehow breaking some alignment rules\non SPARC architecture?\n\nI'm currently trying to build on SPARC-Solaris2.6 but it's a major struggle\ndue to the changes in \"dynloader\" and \"tas\". (configure no longer works)\n\nKeith.\n\n", "msg_date": "Thu, 22 Jan 1998 20:08:20 +0000 (GMT)", "msg_from": "Keith Parks <emkxp01@mtcc.demon.co.uk>", "msg_from_op": true, "msg_subject": "varchar() troubles. (S/Linux)" } ]
[ { "msg_contents": "\nHi...\n\n\tAs was previously announced, we are currently looking at upgrading\nthe server in all respects... well, totally replacing it is more like it. \nDouble the RAM, PII processor, 4x the disk space, increased bandwidth,\netc.\n\n\tWith this is mind, we are asking the PostgreSQL user community to\nhelp offset the costs of this upgrade. Nobody is *required* to contribute\n... the upgrade is going to happen regardless, as it is required, but I'd\nprefer not to have to pay for *all* of it out of my pocket.\n\n\tTowards this goal, starting with v6.3's release, we will be making\na CD distribution of PostgreSQL, as one means for users to be able to\ncontribute to the project.\n\n\tThere are two new pages on our web site that I ask/encourage\neveryone to read through: \n\n\thttp://www.postgresql.org/fund-raising.shtml\n\thttp://www.postgresql.org/cd-dist.shtml\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org \n\n\n", "msg_date": "Thu, 22 Jan 1998 19:48:06 -0400 (AST)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "Important Announcement" }, { "msg_contents": "Quoting The Hermit Hacker (scrappy@hub.org):\n> \n> \n> \tThere are two new pages on our web site that I ask/encourage\n> everyone to read through: \n> \n> \thttp://www.postgresql.org/fund-raising.shtml\n> \thttp://www.postgresql.org/cd-dist.shtml\n\n I see that the cdroms are $39.95CDN. Is that in Canidian dollars?\n Do you have a converted price for US dollars? How about the donations,\n will you convert US checks?\n\nj.heil\n> \n> Marc G. Fournier \n> Systems Administrator @ hub.org \n> primary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org \n> \n> \n\n-- \nJoseph A. 
Heil, Jr.\n\nSuperStat, Inc.\t\t\temail: heilja@superstat.com\n7500 Market Place Drive\t\tvoice: 612-943-8400\nEden Prairie, MN 55344\t\tfax: 612-943-8300\n\nKey fingerprint = 95 FC 3A F4 8A 10 05 85 3F 53 01 86 AD DB DB 51\n", "msg_date": "Thu, 22 Jan 1998 21:46:17 -0600", "msg_from": "Joseph Heil <heilja@real-time.com>", "msg_from_op": false, "msg_subject": "Re: [PORTS] Important Announcement" }, { "msg_contents": "On Thu, 22 Jan 1998, Joseph Heil wrote:\n\n> Quoting The Hermit Hacker (scrappy@hub.org):\n> > \n> > \n> > \tThere are two new pages on our web site that I ask/encourage\n> > everyone to read through: \n> > \n> > \thttp://www.postgresql.org/fund-raising.shtml\n> > \thttp://www.postgresql.org/cd-dist.shtml\n> \n> I see that the cdroms are $39.95CDN. Is that in Canidian dollars?\n> Do you have a converted price for US dollars?\n\n\tUnfortunately, with the way that the currencies fluctuate, it will\nvary greatly...but yes, it is in Canadian dollars. \n\n> How about the donations, will you convert US checks?\n\n\tYes, this is not a problem\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Fri, 23 Jan 1998 00:01:14 -0400 (AST)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: [PORTS] Important Announcement" }, { "msg_contents": "> On Thu, 22 Jan 1998, Joseph Heil wrote:\n> \n> > Quoting The Hermit Hacker (scrappy@hub.org):\n> > > \n> > > \n> > > \tThere are two new pages on our web site that I ask/encourage\n> > > everyone to read through: \n> > > \n> > > \thttp://www.postgresql.org/fund-raising.shtml\n> > > \thttp://www.postgresql.org/cd-dist.shtml\n> > \n> > I see that the cdroms are $39.95CDN. Is that in Canadian dollars?\n> > Do you have a converted price for US dollars?\n> \n> \tUnfortunately, with the way that the currencies fluctuate, it will\n> vary greatly...but yes, it is in Canadian dollars. 
\n> \n> > How about the donations, will you convert US checks?\n> \n\nOf course US$ accepted at par ;-) At $39.95US that upgrade will be paid \nfor in no time.\n\n> \tYes, this is not a problem\n> \n \n\n* Michael J. Rogan, Network Administrator, 905-624-3020 *\n* Mark IV Industries, F-P Electronics & I.V.H.S. Divisions *\n* mrogan@fpelectronics.com mrogan@ivhs.com *\n", "msg_date": "Fri, 23 Jan 1998 10:53:33 +0000", "msg_from": "\"Michael J. Rogan\" <mrogan@fpelectronics.com>", "msg_from_op": false, "msg_subject": "Re: [QUESTIONS] Re: [HACKERS] Re: [PORTS] Important Announcement" }, { "msg_contents": "On Fri, 23 Jan 1998, Michael J. Rogan wrote:\n\n> > On Thu, 22 Jan 1998, Joseph Heil wrote:\n> > \n> > > Quoting The Hermit Hacker (scrappy@hub.org):\n> > > > \n> > > > \n> > > > \tThere are two new pages on our web site that I ask/encourage\n> > > > everyone to read through: \n> > > > \n> > > > \thttp://www.postgresql.org/fund-raising.shtml\n> > > > \thttp://www.postgresql.org/cd-dist.shtml\n> > > \n> > > I see that the cdroms are $39.95CDN. Is that in Canadian dollars?\n> > > Do you have a converted price for US dollars?\n> > \n> > \tUnfortunately, with the way that the currencies fluctuate, it will\n> > vary greatly...but yes, it is in Canadian dollars. \n> > \n> > > How about the donations, will you convert US checks?\n> > \n> \n> Of course US$ accepted at par ;-) At $39.95US that upgrade will be paid \n> for in no time.\n\n\t*rofl* I don't think so. I put up a US price onto the web site of\n$29.95...plus I put a (+$6/yr shipping and handling), based on $2 per\nrelease, 3 releases per year, for shipping...\n\n\tOf course, if anyone out there with a little more distribution\nexperience then me feels that $2/cd is too high (or low?) please let me\nknow? 
\n\n", "msg_date": "Fri, 23 Jan 1998 14:56:11 -0500 (EST)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "Re: [QUESTIONS] Re: [HACKERS] Re: [PORTS] Important Announcement" } ]
[ { "msg_contents": "I have implemented LOCK tablename for 6.3. It just does a DELETE table\nWHERE false.\n\n-- \nBruce Momjian\nmaillist@candle.pha.pa.us\n", "msg_date": "Thu, 22 Jan 1998 19:09:05 -0500 (EST)", "msg_from": "Bruce Momjian <maillist@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "LOCK command" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> I have implemented LOCK tablename for 6.3. It just does a DELETE table\n> WHERE false.\n\nUnfortunately, this will be useful inside BEGIN/END only :(\n\nVadim\n", "msg_date": "Fri, 23 Jan 1998 13:55:28 +0700", "msg_from": "\"Vadim B. Mikheev\" <vadim@sable.krasnoyarsk.su>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] LOCK command" }, { "msg_contents": "> \n> Bruce Momjian wrote:\n> > \n> > I have implemented LOCK tablename for 6.3. It just does a DELETE table\n> > WHERE false.\n> \n> Unfortunately, this will be useful inside BEGIN/END only :(\n> \n> Vadim\n> \n> \n\nWell, if they issue a LOCK outside a transaction, it doesn't do\nanything. To be useful outside a transaction, we would have to have a\nLOCK/UNLOCK command, and I am not at that point yet.\n\n-- \nBruce Momjian\nmaillist@candle.pha.pa.us\n", "msg_date": "Fri, 23 Jan 1998 09:58:07 -0500 (EST)", "msg_from": "Bruce Momjian <maillist@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] LOCK command" } ]
[ { "msg_contents": "I am trying to change the lock manager so read locks will not be granted\nif there is a write-lock waiting for a lock. The following patch helps,\nbut is incomplete. Can someone figure out the right fix? I tried\nputting this code inside LockResolveConflicts(), but that didn't work\neither.\n\n---------------------------------------------------------------------------\n\n*** ./backend/storage/lmgr/lock.c.orig\tFri Jan 23 01:18:01 1998\n--- ./backend/storage/lmgr/lock.c\tFri Jan 23 01:18:53 1998\n***************\n*** 602,607 ****\n--- 602,620 ----\n \n \tstatus = LockResolveConflicts(ltable, lock, lockt, myXid);\n \n+ \t\t/* ------------------------\n+ \t\t * If someone with a greater priority is waiting for the lock,\n+ \t\t * do not continue and share the lock, even if we can. bjm\n+ \t\t * ------------------------\n+ \t\t */\n+ \t\tint\t\t\t\tmyprio = ltable->ctl->prio[lockt];\n+ \t\tPROC_QUEUE\t\t*waitQueue = &(lock->waitProcs);\n+ \t\tPROC\t\t\t*topproc = (PROC *) MAKE_PTR(waitQueue->links.prev);\n+ \n+ \t\tif (topproc && topproc->prio > myprio)\n+ \t\t\tstatus = STATUS_FOUND;\n+ \t}\n+ \n \tif (status == STATUS_OK)\n \t{\n \t\tGrantLock(lock, lockt);\n\n-- \nBruce Momjian\nmaillist@candle.pha.pa.us\n", "msg_date": "Fri, 23 Jan 1998 01:21:41 -0500 (EST)", "msg_from": "Bruce Momjian <maillist@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "locking change help" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> I am trying to change the lock manager so read locks will not be granted\n> if there is a write-lock waiting for a lock. The following patch helps,\n> but is incomplete. Can someone figure out the right fix? 
I tried\n> putting this code inside LockResolveConflicts(), but that didn't work\n> either.\n\nIs this to help writers from starving?\n\n/* m */\n", "msg_date": "Fri, 23 Jan 1998 11:25:36 +0100", "msg_from": "Mattias Kregert <matti@algonet.se>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] locking change help" }, { "msg_contents": "> \n> Bruce Momjian wrote:\n> > \n> > I am trying to change the lock manager so read locks will not be granted\n> > if there is a write-lock waiting for a lock. The following patch helps,\n> > but is incomplete. Can someone figure out the right fix? I tried\n> > putting this code inside LockResolveConflicts(), but that didn't work\n> > either.\n> \n> Is this to help writers from starving?\n\nYes.\n\n-- \nBruce Momjian\nmaillist@candle.pha.pa.us\n", "msg_date": "Fri, 23 Jan 1998 09:59:47 -0500 (EST)", "msg_from": "Bruce Momjian <maillist@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] locking change help" } ]
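The check the patch in this thread is reaching for — refuse to let a new reader share a lock when a higher-priority writer is already asleep on it — can be sketched in isolation. This is an illustrative toy, not PostgreSQL's lock table: the `WaitQueue` type, the `PRIO_*` constants, and `can_share_read_lock` are all invented names standing in for `ltable->ctl->prio[lockt]` and the `waitProcs` queue in the patch.

```c
#include <assert.h>

/* Illustrative only -- these are NOT PostgreSQL's structures or names.
 * Lock "priority" mirrors ltable->ctl->prio[] above: writers carry a
 * higher priority number than readers. */
#define PRIO_READ  1
#define PRIO_WRITE 2

typedef struct {
    int nwaiters;   /* processes sleeping on this lock */
    int top_prio;   /* priority of the longest-waiting process */
} WaitQueue;

/* Return 1 if a new reader may share the lock right now, 0 if it
 * should sleep because a higher-priority waiter (a writer) is queued. */
int can_share_read_lock(const WaitQueue *q)
{
    if (q->nwaiters > 0 && q->top_prio > PRIO_READ)
        return 0;   /* a writer is waiting: queue up behind it */
    return 1;
}
```

With that check in place, a stream of overlapping readers can no longer keep sharing the lock over the head of a sleeping writer, which is the starvation symptom this thread is about.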
[ { "msg_contents": "> \n> On Thu, 22 Jan 1998, Joseph Heil wrote:\n> \n> > Quoting The Hermit Hacker (scrappy@hub.org):\n> > > \n> > > \n> > > \tThere are two new pages on our web site that I ask/encourage\n> > > everyone to read through: \n> > > \n> > > \thttp://www.postgresql.org/fund-raising.shtml\n> > > \thttp://www.postgresql.org/cd-dist.shtml\n> > \n> > I see that the cdroms are $39.95CDN. Is that in Canidian dollars?\n> > Do you have a converted price for US dollars?\n> \n> \tUnfortunately, with the way that the currencies fluctuate, it will\n> vary greatly...but yes, it is in Canadian dollars. \n> \n> > How about the donations, will you convert US checks?\n> \n> \tYes, this is not a problem\n> \n> Marc G. Fournier \n> Systems Administrator @ hub.org \n> primary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org \n> \n> \nAccepting credit cards would be a good way to get international users\nto buy. For a small sum, it really is too much hassle to pay abroad\nby any other means...\n\nAndrew\n\n----------------------------------------------------------------------------\nDr. Andrew C.R. Martin University College London\nEMAIL: (Work) martin@biochem.ucl.ac.uk (Home) andrew@stagleys.demon.co.uk\nURL: http://www.biochem.ucl.ac.uk/~martin\nTel: (Work) +44(0)171 419 3890 (Home) +44(0)1372 275775\n", "msg_date": "Fri, 23 Jan 1998 11:42:55 GMT", "msg_from": "Andrew Martin <martin@biochemistry.ucl.ac.uk>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: [PORTS] Important Announcement" }, { "msg_contents": "> > \n> Accepting credit cards would be a good way to get international users\n> to buy. For a small sum, it really is too much hassle to pay abroad\n> by any other means...\n\n\tI really wish we could accept credit cards :( Up here, at least,\nit costs something like $10,000 upfront with the banks in order to even\nget 'online credit card authorization' :( And that is if they will even\ntalk to you in the first place. 
\n\n\tThings were alot more slack 10 years ago, when nobody really knew\nwhat the Internet was, but when it exploded, so did the banks up here,\nsine there were alot of 'fly by night' operations :(\n\n\tI've had one suggestion on how we might be able to get around\nthis, using some of the online distribution houses that are out\nthere...I'm still looking into it, but so far dont' like what I'm seeing\n:(\n\n\n\n", "msg_date": "Fri, 23 Jan 1998 08:19:22 -0500 (EST)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [PORTS] Important Announcement" } ]
[ { "msg_contents": "> > > psql .psqlrc file startup(Andrew)\n> > Unfortunately I simply don't have time to implement any of the nicer suggestions\n> > for how this should work. I wish I did...\n> \n> OK, what are we doing the .psqlrc. Can you send the old patch, or are\n> we dropping it for 6.3?\n> \n> -- \n> Bruce Momjian\n> maillist@candle.pha.pa.us\n> \n\nHere's the patch I supplied before:\n\n*** src/bin/psql/psql.c.orig Fri Jun 20 21:54:31 1997\n--- src/bin/psql/psql.c Fri Jun 20 22:23:49 1997\n***************\n*** 1598,1603 ****\n--- 1598,1605 ----\n bool singleSlashCmd = 0;\n int c;\n \n+ char *home = NULL; /* Used to store $HOME */\n+ \n memset(&settings, 0, sizeof settings);\n settings.opt.align = 1;\n settings.opt.header = 1;\n***************\n*** 1728,1733 ****\n--- 1730,1760 ----\n printf(\" type \\\\g or terminate with semicolon to execute query\\n\");\n printf(\" You are currently connected to the database: %s\\n\\n\", dbname);\n }\n+ \n+ /*\n+ * 20.06.97 ACRM See if we've got a /etc/psqlrc or .psqlrc file\n+ */\n+ if(!access(\"/etc/psqlrc\",R_OK))\n+ HandleSlashCmds(&settings, \"\\\\i /etc/psqlrc\", \"\");\n+ if((home = getenv(\"HOME\"))!=NULL) {\n+ char *psqlrc = NULL,\n+ *line = NULL;\n+ \n+ if((psqlrc = (char *)malloc(strlen(home) + 10))!=NULL) {\n+ sprintf(psqlrc, \"%s/.psqlrc\", home);\n+ if(!access(psqlrc, R_OK)) {\n+ if((line = (char *)malloc(strlen(psqlrc) + 5))!=NULL) {\n+ sprintf(line, \"\\\\i %s\", psqlrc);\n+ HandleSlashCmds(&settings, line, \"\");\n+ free(line);\n+ }\n+ }\n+ free(psqlrc);\n+ }\n+ }\n+ /* End of check for psqlrc files */\n+ \n+ \n if (qfilename || singleSlashCmd) {\n /*\n * read in a file full of queries instead of reading in queries\n\n\n\nAndrew\n\n----------------------------------------------------------------------------\nDr. Andrew C.R. 
Martin University College London\nEMAIL: (Work) martin@biochem.ucl.ac.uk (Home) andrew@stagleys.demon.co.uk\nURL: http://www.biochem.ucl.ac.uk/~martin\nTel: (Work) +44(0)171 419 3890 (Home) +44(0)1372 275775\n", "msg_date": "Fri, 23 Jan 1998 11:45:01 GMT", "msg_from": "Andrew Martin <martin@biochemistry.ucl.ac.uk>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Current open 6.3 issues" } ]
[ { "msg_contents": "> > > \n> > Accepting credit cards would be a good way to get international users\n> > to buy. For a small sum, it really is too much hassle to pay abroad\n> > by any other means...\n> \n> \tI really wish we could accept credit cards :( Up here, at least,\n> it costs something like $10,000 upfront with the banks in order to even\n> get 'online credit card authorization' :( And that is if they will even\n> talk to you in the first place. \n> \nEeek! That's a hell of a lot of money!!!!\n\nBut it wouldn't have to be online authorisation. It would be fine if you could\nsend your credit card number by post or 'phone or FAX. Do they charge equally\nrediculous sums for that?\n\nWhat about someone like Walnut Creek as a distributor? Or Prime Time Freeware?\n\n\nBest wishes,\n\nAndrew\n\n----------------------------------------------------------------------------\nDr. Andrew C.R. Martin University College London\nEMAIL: (Work) martin@biochem.ucl.ac.uk (Home) andrew@stagleys.demon.co.uk\nURL: http://www.biochem.ucl.ac.uk/~martin\nTel: (Work) +44(0)171 419 3890 (Home) +44(0)1372 275775\n", "msg_date": "Fri, 23 Jan 1998 17:19:33 GMT", "msg_from": "Andrew Martin <martin@biochemistry.ucl.ac.uk>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: [PORTS] Important Announcement" }, { "msg_contents": "On Fri, 23 Jan 1998, Andrew Martin wrote:\n\n> > > > \n> > > Accepting credit cards would be a good way to get international users\n> > > to buy. For a small sum, it really is too much hassle to pay abroad\n> > > by any other means...\n> > \n> > \tI really wish we could accept credit cards :( Up here, at least,\n> > it costs something like $10,000 upfront with the banks in order to even\n> > get 'online credit card authorization' :( And that is if they will even\n> > talk to you in the first place. \n> > \n> Eeek! 
That's a hell of a lot of money!!!!\n\n\tYup...and that was what they were pricing out 4 years ago...plus\nyou had to have some ridiculous amount of sales per month :(\n\n> But it wouldn't have to be online authorisation. It would be fine if you could\n> send your credit card number by post or 'phone or FAX. Do they charge equally\n> rediculous sums for that?\n\n\tSame...anything that doesn't have a card imprint pretty much\ncounts as \"online\" :(\n\n> What about someone like Walnut Creek as a distributor? Or Prime Time Freeware?\n\n\tI'm looking at what www.ncbuy.com does in partnership with a\ncompany in the US...if it goes, then we'll have credit card sales\ntoo...but so far, they are saying something like $10 off the top is\ntheirs, which doesn't leave that much after paying for the CDs\nthemselves...we'll see.\n\n\n", "msg_date": "Fri, 23 Jan 1998 12:26:52 -0500 (EST)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [PORTS] Important Announcement" } ]
[ { "msg_contents": "I am submitting this patch for people to review.\n\nIt fixes several problems in the lock manager. Let me show you how to\nreproduce them. Try this before applying the patch and you will see the\nold bugs.\n\nFirst, set up three psql sessions. Do a 'begin;' in each one to start a\ntransaction.\n\n---------------------------------------------------------------------------\n\nOK, pick a single table to use. In the first one, do an UPDATE, in the\nsecond psql session, do an UPDATE, in the third, do an UPDATE. Now, do\n'end;' in the first psql, and you will find the third finishes, even\nthough the second was first to request the lock.\n\n---------------------------------------------------------------------------\n\nOK, exit all your psql's, and start them again, with a 'begin;' for\neach.\n\nDo an UPDATE in the first, a SELECT in the second, and an UPDATE in the\nthird, in that order. Now do an 'end;' in the first. The second\ncompletes, even though the write should have higher priority over a read\nWHEN THEN ARE BOTH ASLEEP waiting for a lock.\n\n---------------------------------------------------------------------------\n\nOK, exit all your psql's, and start them again, with a 'begin;' for\neach.\n\nNow, do a SELECT in the first, and UPDATE in the second, and a SELECT in\nthe third. The third completes right away, even though it should wait\nbehind the higher-priority second UPDATE and not share the existing\nlock.\n\n---------------------------------------------------------------------------\n\nThe following patch fixes this. The proc.c priority queue condition was\nbackwards, and did not put newer requesters behind older requesters. \nThe lock.c change adds code to check the wait queue and not allow\nsharing a lock if someone of higher priority is waiting for it.\n\nThis should eliminate writer starvation in 6.3, and fix other problems\npeople might have been experiencing with this buggy behavior.\n\nThis will be in 6.3. 
I will apply it now.\n\nComments?\n\n---------------------------------------------------------------------------\n\n*** ./backend/storage/lmgr/lock.c.orig\tFri Jan 23 01:01:03 1998\n--- ./backend/storage/lmgr/lock.c\tFri Jan 23 15:59:06 1998\n***************\n*** 708,713 ****\n--- 708,727 ----\n \t\tresult->nHolding = 0;\n \t}\n \n+ \t{\n+ \t\t/* ------------------------\n+ \t\t * If someone with a greater priority is waiting for the lock,\n+ \t\t * do not continue and share the lock, even if we can. bjm\n+ \t\t * ------------------------\n+ \t\t */\n+ \t\tint\t\t\t\tmyprio = ltable->ctl->prio[lockt];\n+ \t\tPROC_QUEUE\t\t*waitQueue = &(lock->waitProcs);\n+ \t\tPROC\t\t\t*topproc = (PROC *) MAKE_PTR(waitQueue->links.prev);\n+ \n+ \t\tif (waitQueue->size && topproc->prio > myprio)\n+ \t\t\treturn STATUS_FOUND;\n+ \t}\n+ \n \t/* ----------------------------\n \t * first check for global conflicts: If no locks conflict\n \t * with mine, then I get the lock.\n*** ./backend/storage/lmgr/proc.c.orig\tFri Jan 23 14:34:27 1998\n--- ./backend/storage/lmgr/proc.c\tFri Jan 23 15:22:11 1998\n***************\n*** 469,475 ****\n \tproc = (PROC *) MAKE_PTR(queue->links.prev);\n \tfor (i = 0; i < queue->size; i++)\n \t{\n! \t\tif (proc->prio < prio)\n \t\t\tproc = (PROC *) MAKE_PTR(proc->links.prev);\n \t\telse\n \t\t\tbreak;\n--- 469,475 ----\n \tproc = (PROC *) MAKE_PTR(queue->links.prev);\n \tfor (i = 0; i < queue->size; i++)\n \t{\n! \t\tif (proc->prio >= prio)\n \t\t\tproc = (PROC *) MAKE_PTR(proc->links.prev);\n \t\telse\n \t\t\tbreak;\n\n-- \nBruce Momjian\nmaillist@candle.pha.pa.us\n", "msg_date": "Fri, 23 Jan 1998 16:09:57 -0500 (EST)", "msg_from": "Bruce Momjian <maillist@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Fix for many lock problems" } ]
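The proc.c half of the patch above is a one-character comparison change (`<` to `>=`) in the wait-queue insertion loop. The effect is easiest to see in a standalone sketch; the array-based queue below is invented for illustration and is not PostgreSQL's shared-memory linked list.

```c
#include <assert.h>

#define MAXQ 16

/* Toy wait queue: index 0 is the head (next process to be granted). */
typedef struct {
    int n;
    int prio[MAXQ];
    int id[MAXQ];
} Queue;

/* Insert behind every waiter of equal OR higher priority.  Using `>=`
 * (as in the fix) keeps equal-priority waiters in FIFO order; the old
 * `<` comparison let a newcomer be placed ahead of peers that were
 * already asleep, which is what the test scenarios above exposed. */
void queue_insert(Queue *q, int id, int prio)
{
    int i = 0;
    while (i < q->n && q->prio[i] >= prio)
        i++;
    for (int j = q->n; j > i; j--) {   /* shift the tail right */
        q->prio[j] = q->prio[j - 1];
        q->id[j] = q->id[j - 1];
    }
    q->prio[i] = prio;
    q->id[i] = id;
    q->n++;
}
```

Replaying the first scenario above with this insert order, two equal-priority UPDATE waiters come off the queue in the order they arrived, while a higher-priority request still moves ahead of both.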
[ { "msg_contents": "\n Hi again,\n\n It seems unfortunate, but if I can't efficiently (i.e. no copying of\nwhole table, no blocking and using index) browse the table, I might not\nbe able to \"sell\" postgres into the commercial environment. :-( RAIMA\nVelocis might win :-(\n\n Please please help me solve this or make workarounds or anything. I\nwould *really* like to see PosgreSQL to be playing against the Big\n(commercial) Boys !\n\n What the whole problem really reduces to, is to be able to get the next\n/ previous value in an index. If I can do that, I win. No SELECT (and thus\nno locking) and no huge copying !). It seems to me like something\nprimitive, so it should be easy. But it isn't obvious to me. Please help. \n\n Below is more description and questions.\n\nOn Thu, 22 Jan 1998, Dustin Sallings wrote:\n\n> > Well, first let the user look up the manufacturer's part, and then let\n> > him \"browse\" from there one (look at the next / previous ones), where\n> > \"next\" is defined as the one closest bigger than the current one (based on\n> > a sort attribute(s) on which I have an index).\n> \n> \tThat's easy in a normal application type place where you keep state, if\n> you're doing it as a web based application, it's a different story... Assuming\n> you're writing a standalone application, you just keep a cursor to what you're\n> looking at, and move it back and forth as needed, otherwise, otherwise, you're\n> going to have to do the whole query again AFAIK.\n\n a few questions :\n\n 0. having a value of a field on which there is an index, how can I do :\n a) current_pointer = some_function(\"value_I_have\");\n b) next_pointer = some_other_function(current_pointer);\n c) one_tupple = yet_another_function(next_pointer);\n If I can accomplish a,b,c, then I win and I don't have to do questions\n1..5 below. \n\n 1. 
I assume I have to say \"declare cursor mycursor for SELECT\nfield1,field2 FROM mytable ORDER BY one_of_fields_on_which_I_have_index\"\n Am I right ? Or can I do it some other way ?\n\n 2. Will not the step (1.) *copy* the whole 40MB table ? (that would\ntrash the system if several people at once want to \"browse\" through the\ntable.)\n\n 3. Will not the step (1.) read-lock *lock* the whole table ? (i.e.\nwhile somebody is \"browsing\", I need other people to be able to update the\ndatabase).\n\n 4. Will the step (1.) cause Postgres to use index for moving forth /\nback ? Because if it doesn't, it might take ages to either declare the\ncursor or to move forth / back.\n\n 5. If the step (1.) will not cause Postgres to use index, what will ?\n\n 6. How do I do it if I don't want to start \"browsing\" from the first\nrow ?\n\n Thanx,\n\n Jan\n\n -- Gospel of Jesus is the saving power of God for all who believe --\nJan Vicherek ## To some, nothing is impossible. ## www.ied.com/~honza\n >>> Free Software Union President ... www.fslu.org <<<\nInteractive Electronic Design Inc. -#- PGP: finger honza@ied.com\n\n", "msg_date": "Fri, 23 Jan 1998 16:50:29 -0500 (EST)", "msg_from": "Jan Vicherek <honza@ied.com>", "msg_from_op": true, "msg_subject": "Show stopper ? (was: Re: \"cruising\" or \"browsing\" through tables\n\tusing an index / ordering)" }, { "msg_contents": "On Fri, 23 Jan 1998, Jan Vicherek wrote:\n\n> Hi again,\n> \n> It seems unfortunate, but if I can't efficiently (i.e. no copying of\n> whole table, no blocking and using index) browse the table, I might not\n> be able to \"sell\" postgres into the commercial environment. :-( RAIMA\n> Velocis might win :-(\n> \n> Please please help me solve this or make workarounds or anything. I\n> would *really* like to see PosgreSQL to be playing against the Big\n> (commercial) Boys !\n\n\tI'm curious, but can the \"Big (commercial) Boys\" do this? If so,\ncan you please provide an example of which and how? 
Most of us here have\naccess to an one or the other (me, Oracle) to use as a sample system...if\nwe can prove that it does work on another system, then we have something\nto work with, but right now all I've seen is \"I wish I could do this\", and\nseveral examples on how to accomplish it using PostgreSQL, but that's\nit...\n\n> \n> What the whole problem really reduces to, is to be able to get the next\n> / previous value in an index. If I can do that, I win. No SELECT (and thus\n> no locking) and no huge copying !). It seems to me like something\n> primitive, so it should be easy. But it isn't obvious to me. Please help. \n\n\tIf there is no SELECT, how do you get data out of an SQL database?\n*raised eyebrow*\n\n> 0. having a value of a field on which there is an index, how can I do :\n> a) current_pointer = some_function(\"value_I_have\");\n> b) next_pointer = some_other_function(current_pointer);\n> c) one_tupple = yet_another_function(next_pointer);\n> If I can accomplish a,b,c, then I win and I don't have to do questions\n> 1..5 below. \n\n\tWhy not put a sequence field on the table so that you can do:\n\n\tselect * from table where rowid = n; -or-\n\tselect * from table where rowid = n - 1; -or-\n\tselect * from table where rowid = n + 1; -or-\n\tselect * from table where rowid >= n and rowid <= n+x;\n\n\tAnd create the index on rowid?\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Fri, 23 Jan 1998 18:20:57 -0400 (AST)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Show stopper ? (was: Re: \"cruising\" or \"browsing\"\n\tthrough tables using an index / ordering)" }, { "msg_contents": "On Fri, 23 Jan 1998, The Hermit Hacker wrote:\n\n> \tI'm curious, but can the \"Big (commercial) Boys\" do this? 
If so,\n> can you please provide an example of which and how?\n\n Hmm, well, the one we are switching from does this ;-) (Informix 3.3\nALL-II C interface). It's not SQL, tho.\n\n> Most of us here have\n> access to an one or the other (me, Oracle) to use as a sample system...if\n> we can prove that it does work on another system, then we have something\n> to work with\n\n> all I've seen is ... and several examples on how to accomplish it using\n> PostgreSQL, but that's it... \n\n Wait, have you seen here an example that accomplishes this which\nwouldn't need the whole table copied and wouldn't lock the table against\nupdates ?\n\n> > What the whole problem really reduces to, is to be able to get the next\n> > / previous value in an index. If I can do that, I win. No SELECT (and thus\n> > no locking) and no huge copying !). It seems to me like something\n> > primitive, so it should be easy. But it isn't obvious to me. Please help. \n> \n> \tIf there is no SELECT, how do you get data out of an SQL database?\n> *raised eyebrow*\n\n Sorry, I meant no SELECT that couln't be done through index and\nso wouldn't copy huge amounts of data into a result table. And wouldn't\nlock the whole table.\n\n> > 0. having a value of a field on which there is an index, how can I do :\n> > a) current_pointer = some_function(\"value_I_have\");\n> > b) next_pointer = some_other_function(current_pointer);\n> > c) one_tupple = yet_another_function(next_pointer);\n> > If I can accomplish a,b,c, then I win and I don't have to do questions\n> > 1..5 below. \n> \n> \tWhy not put a sequence field on the table so that you can do:\n> \tselect * from table where rowid = n; -or-\n> \tselect * from table where rowid = n - 1; -or-\n> \tselect * from table where rowid = n + 1; -or-\n> \tselect * from table where rowid >= n and rowid <= n+x;\n> \n> \tAnd create the index on rowid?\n\n Because I also need to be able to INSERT rows. 
That would require\nrenumeration of half the table (remember, it's 40MB, 400,000 rows) every\ntime I do an INSERT. \n\n I *still* think that there *has to* be a way to find a value that is\nimmediatelly next to one I have. This seems like such a primitive\noperation. Even the backend must be doing it on some operations, it would\nseem.\n\n Maybe even in SQL. Maybe something like (I'm not an SQL expert) : \"SELECT\nIndexField from MyTable where InxdexField > 'my_current_value' and\nIndexField < (\"all IndexFields that are bigger than the IndexField\nsearched for\")\n\n Important : I'm not looking for a \"pure SQL\" solution. I'm writing a C\nemulation library, so if it can be achieved via a call to a C Postgres\nfunction, it would be great.\n\n Thanx,\n\n Jan\n\n\n -- Gospel of Jesus is the saving power of God for all who believe --\nJan Vicherek ## To some, nothing is impossible. ## www.ied.com/~honza\n >>> Free Software Union President ... www.fslu.org <<<\nInteractive Electronic Design Inc. -#- PGP: finger honza@ied.com\n\n", "msg_date": "Fri, 23 Jan 1998 18:19:28 -0500 (EST)", "msg_from": "Jan Vicherek <honza@ied.com>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Show stopper ? (was: Re: \"cruising\" or \"browsing\"\n\tthrough tables using an index / ordering)" }, { "msg_contents": "On Fri, 23 Jan 1998, Jan Vicherek wrote:\n\n> On Fri, 23 Jan 1998, The Hermit Hacker wrote:\n> \n> > \tI'm curious, but can the \"Big (commercial) Boys\" do this? If so,\n> > can you please provide an example of which and how?\n> \n> Hmm, well, the one we are switching from does this ;-) (Informix 3.3\n> ALL-II C interface). It's not SQL, tho.\n\n\tOkay, well, ummm...now you are comparing apples to oranges\nthen...if you wanted a non-SQL engine to replace Informix, PostgreSQL\nisn't what you are looking for :(\n\n> > all I've seen is ... and several examples on how to accomplish it using\n> > PostgreSQL, but that's it... 
\n> \n> Wait, have you seen here an example that accomplishes this which\n> wouldn't need the whole table copied and wouldn't lock the table against\n> updates ?\n\n\tFirst off, *nothing* you are going to be able to accomplish in\n*any* SQL engine is going to prevent locking the table against\nupdates...the code that Bruce put in this afternoon for v6.3 is going to\nreduce the possibility of the lock causing a deadlock, but that is about\nit...the lock will still be created.\n\n> > \tWhy not put a sequence field on the table so that you can do:\n> > \tselect * from table where rowid = n; -or-\n> > \tselect * from table where rowid = n - 1; -or-\n> > \tselect * from table where rowid = n + 1; -or-\n> > \tselect * from table where rowid >= n and rowid <= n+x;\n> > \n> > \tAnd create the index on rowid?\n> \n> Because I also need to be able to INSERT rows. That would require\n> renumeration of half the table (remember, it's 40MB, 400,000 rows) every\n> time I do an INSERT. \n\n\tOkay, you are confusing INSERT then...INSERT in SQL just adds a\nrow, that's it...it doesn't perform any \"sorting\" on it...that's what the\nORDER BY command does...\n\n\t...but, I now understand what *you* mean by INSERT...\n\n> I *still* think that there *has to* be a way to find a value that is\n> immediatelly next to one I have. This seems like such a primitive\n> operation. Even the backend must be doing it on some operations, it would\n> seem.\n\n\tNot possible...INSERT into a table doesn't \"merge\" the record\nbetween its lower/higher bounds...it just adds it to the very end of the\ntable. And an index doesn't \"sort\" the data either...that is what the\nORDER BY clause is for...\n\n> Maybe even in SQL. 
Maybe something like (I'm not an SQL expert) : \"SELECT\n> IndexField from MyTable where InxdexField > 'my_current_value' and\n> IndexField < (\"all IndexFields that are bigger than the IndexField\n> searched for\")\n\n\tFrom your sample above, is your first SQL call going to pull out\nall 40MB of data in one select statement, then your second 40MB minus the\nfirst X records? \n\n\tWhat you want to do is:\n\nbegin\n declare mycursor cursor for select * from pg-user order by <somefield>;\n move $forward in FOO;\n fetch $retrieve in FOO;\n close foo;\nend;\n\n\tBasically, take your table, move to the $forward record in it,\ngrab the next $retrieve records and then release the table.\n\n\tYour first time through, $forward might just equal 0...but when\nyou run it the second time through, you pass it back a $forward value\nequal to $forward + $retrieve, so that you start at where you ended the\nfirst time. This is how I deal with one of my projects, where I have to\ndo exactly what you are looking for...\n\n\tAbout the only part of the SELECT above is the ORDER BY, and alot\nof that is as much a restriction on your hardware then anything...the\nmajor performance boost in an ORDER BY is memory..keeping it off the hard\ndrive.\n\t\n> Important : I'm not looking for a \"pure SQL\" solution. I'm writing a C\n> emulation library, so if it can be achieved via a call to a C Postgres\n> function, it would be great.\n\n\tYou'd be better off looking at something like GDBM (which, by the\nway, also creates a lock against updates while another is reading the\ndatabase)...unless I'm missing something, you aren't looking at doing\nanything that *requires* an SQL engine :(\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org \n\n\n", "msg_date": "Fri, 23 Jan 1998 21:32:03 -0400 (AST)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Show stopper ? 
(was: Re: \"cruising\" or \"browsing\"\n\tthrough tables using an index / ordering)" }, { "msg_contents": "\nOn Fri, 23 Jan 1998, The Hermit Hacker wrote:\n\n> On Fri, 23 Jan 1998, Jan Vicherek wrote:\n> \n> > On Fri, 23 Jan 1998, The Hermit Hacker wrote:\n> > \n> > > \tI'm curious, but can the \"Big (commercial) Boys\" do this? If so,\n> > > can you please provide an example of which and how?\n> > \n> > Hmm, well, the one we are switching from does this ;-) (Informix 3.3\n> > ALL-II C interface). It's not SQL, tho.\n> \n> \tOkay, well, ummm...now you are comparing apples to oranges\n> then...if you wanted a non-SQL engine to replace Informix, PostgreSQL\n> isn't what you are looking for :(\n\n Hmm, I thought that PostgreSQL would have two interfaces :\n\n an SQL interface (doing all the optimizing, joining and the complicated\nstuff) and\n a low(er)-level C/C++/whatever interface which would allow to use less\nof the intelligence of PostgreSQL and more of it's retireval system,\nincluding indexing.\n\n> > > all I've seen is ... and several examples on how to accomplish it using\n> > > PostgreSQL, but that's it... 
\n> > \n> > Wait, have you seen here an example that accomplishes this which\n> > wouldn't need the whole table copied and wouldn't lock the table against\n> > updates ?\n> \n> \tFirst off, *nothing* you are going to be able to accomplish in\n> *any* SQL engine is going to prevent locking the table against\n> updates...\n\n Sorry, I wasn't precise again : I mean I can't stop other people from\nupdating a table just because one guy is out there \"browsing\" through it.\n\n> > > \tWhy not put a sequence field on the table so that you can do:\n> > > \tselect * from table where rowid = n; -or-\n> > > \tselect * from table where rowid = n - 1; -or-\n> > > \tselect * from table where rowid = n + 1; -or-\n> > > \tselect * from table where rowid >= n and rowid <= n+x;\n> > > \n> > > \tAnd create the index on rowid?\n> > \n> > Because I also need to be able to INSERT rows. That would require\n> > renumeration of half the table (remember, it's 40MB, 400,000 rows) every\n> > time I do an INSERT. \n> \n> \tOkay, you are confusing INSERT then...INSERT in SQL just adds a\n> row, that's it...it doesn't perform any \"sorting\" on it...that's what the\n> ORDER BY command does...\n\n The INSERT causes all indices on that table to be updated.\n If I used a field in the table to serve as a pseudo-index (user-level\nindex, not system-level index), *then* that would require \"manual\"\nrenumeration of rowid field in half the table every time I do an INSERT. \n\n> \t...but, I now understand what *you* mean by INSERT...\n\n good :)\n\n> > I *still* think that there *has to* be a way to find a value that is\n> > immediatelly next to one I have. This seems like such a primitive\n> > operation. 
Even the backend must be doing it on some operations, it would\n> > seem.\n> \n> \tNot possible...\n\n Remeber that I have an index on the field that I'm looking for the\n\"next\" value.\n Is it not possible in such case ?\n Even by calling some lower-level C function to give me the next value ?\n\n> INSERT into a table doesn't \"merge\" the record\n> between its lower/higher bounds...it just adds it to the very end of the\n> table. And an index doesn't \"sort\" the data either...that is what the\n> ORDER BY clause is for...\n\n I guess I wasn't precise enough. I was aware of these two facts when I\nwas writing the previous msgs.\n Index doesn't \"sort\". Index gets sorted. But pseudo-index doesn't even\nget sorted, I would have to sort it \"manually\" after every INSERT I would\nmake.\n\n> > Maybe even in SQL. Maybe something like (I'm not an SQL expert) : \"SELECT\n> > IndexField from MyTable where InxdexField > 'my_current_value' and\n> > IndexField < (\"all IndexFields that are bigger than the IndexField\n> > searched for\")\n> \n> \tFrom your sample above, is your first SQL call going to pull out\n> all 40MB of data in one select statement, then your second 40MB minus the\n> first X records? 
\n\n Hmm, I could have hoped that the optimizer would recognize what I'm\ntrying to do and would only pull out that one record, but I guess not.\nNever mind this attempt then.\n\n> \tWhat you want to do is:\n> \n> begin\n> declare mycursor cursor for select * from pg-user order by <somefield>;\n> move $forward in FOO;\n> fetch $retrieve in FOO;\n> close foo;\n> end;\n\n (I guess you meant \"close mycursor\", not \"close foo\".)\n\n I've just tried this on half-the-size table, and it took 70 seconds :-(\n\nmeray=> begin;\nBEGIN\nmeray=> declare mycursor cursor for select * from master order by mpartno;\nSELECT\nmeray=> fetch forward 1 in mycursor;\nclose mycursor;\nend;\nmpartno |mlistprc|mwarcomp|mcost|msupercd|mbasecd|mflag|mpackqty|mdescrip\n-------------------+--------+--------+-----+--------+-------+-----+--------+----------------\nF 0 AVZK | 581| 418| 311|S | | 6| 0|SPARK PLUG\n(1 row)\n\nmeray=> close mycursor;\nCLOSE\nmeray=> end;\nEND\nmeray=> \\d masterind\n\nTable = masterind\n+----------------------------------+----------------------------------+-------+\n| Field | Type | Length|\n+----------------------------------+----------------------------------+-------+\n| mpartno | (bp)char | 19 |\n+----------------------------------+----------------------------------+-------+\n\n Since the ORDER BY field (mpartno) has an index on it, why wouldn't\nPostgres just take that index and go with it one-by-one ?! Such approach\nwouldn't take 70 seconds on idle P200+ 64MB, but would take 1/2 a second.\n\nIs such approach pricipally/fundamentally wrong, or is this correct\napproach, and postgres just needs some more code written for this thing to\nwork ? (I.e. would other things break if this was implemented ?) 
\n\n -- whereas single select (which uses index) takes half a second.\nmeray=> select * from master where mpartno = 'F 0 AVZK ';\nmpartno |mlistprc|mwarcomp|mcost|msupercd|mbasecd|mflag|mpackqty|mdescrip\n-------------------+--------+--------+-----+--------+-------+-----+--------+----------------\nF 0 AVZK | 581| 418| 311|S | | 6| 0|SPARK PLUG\nF 0 AVZK | 581| 418| 311|S | | 6| 0|SPARK PLUG \n(2 rows)\nmeray=> \n\n Now I want do see the row that is in the masterind after the first, in\nthis case it contains the same data.\n\n If we can get \"this new approach\" to work, \n\n> \tBasically, take your table, move to the $forward record in it,\n> grab the next $retrieve records and then release the table.\n\n But this creates 70-second lock on the table every time I want to see\nnext record :-(\n\n> > Important : I'm not looking for a \"pure SQL\" solution. I'm writing a C\n> > emulation library, so if it can be achieved via a call to a C Postgres\n> > function, it would be great.\n> \n> \tYou'd be better off looking at something like GDBM (which, by the\n> way, also creates a lock against updates while another is reading the\n> database)...unless I'm missing something, you aren't looking at doing\n> anything that *requires* an SQL engine :(\n\n This looks like a sad suggestion to me. I'll kick and scream just that\nI wouldn't have to go with Velocis RAIMA's lower-level C interface. (I'm\njust staring into its manual to figure out the sequence of C calls I would\nhave to make to accomplish this.)\n\n\n Thanx to all,\n\n Jan\n\n -- Gospel of Jesus is the saving power of God for all who believe --\nJan Vicherek ## To some, nothing is impossible. ## www.ied.com/~honza\n >>> Free Software Union President ... www.fslu.org <<<\nInteractive Electronic Design Inc. 
-#- PGP: finger honza@ied.com\n\n", "msg_date": "Fri, 23 Jan 1998 22:41:50 -0500 (EST)", "msg_from": "Jan Vicherek <honza@ied.com>", "msg_from_op": true, "msg_subject": "Attn PG gurus / coders : New approach for ORDER BY ? (was: Re: Show\n\tstopper ?)" }, { "msg_contents": "> \n> On Fri, 23 Jan 1998, The Hermit Hacker wrote:\n> \n> > \tI'm curious, but can the \"Big (commercial) Boys\" do this? If so,\n> > can you please provide an example of which and how?\n> \n> Hmm, well, the one we are switching from does this ;-) (Informix 3.3\n> ALL-II C interface). It's not SQL, tho.\n\nAh, the Informix ISAM interface that used to sit underneath the Informix\nSQL engine.\n\n> Important : I'm not looking for a \"pure SQL\" solution. I'm writing a C\n> emulation library, so if it can be achieved via a call to a C Postgres\n> function, it would be great.\n> \n\nYou can put an index on the table, and embed a function inside the\nengine to spin through the index, getting valid rows. The code is\nalready there to spin through the index, so you could just hook on to\nthe index, get the 'tid' and use that to get data from the physical\ntable. No locking, but you may get in trouble if other people are\nreading the index. Maybe a quick lock-unlock would do it for you. Not\neasy, because the engine really doesn't provide a user-friendly C\ninterface for this, but it could be done.\n\n-- \nBruce Momjian\nmaillist@candle.pha.pa.us\n", "msg_date": "Fri, 23 Jan 1998 22:50:30 -0500 (EST)", "msg_from": "Bruce Momjian <maillist@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Show stopper ? (was: Re: \"cruising\" or \"browsing\"\n\tthrough tables using an index / ordering)" }, { "msg_contents": "> \tYou'd be better off looking at something like GDBM (which, by the\n> way, also creates a lock against updates while another is reading the\n> database)...unless I'm missing something, you aren't looking at doing\n> anything that *requires* an SQL engine :(\n\nI agree.
GDBM is a fine system for such uses.\n\n-- \nBruce Momjian\nmaillist@candle.pha.pa.us\n", "msg_date": "Fri, 23 Jan 1998 22:53:27 -0500 (EST)", "msg_from": "Bruce Momjian <maillist@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [QUESTIONS] Re: [HACKERS] Show stopper ? (was: Re: \"cruising\" or\n\t\"browsing\" through tables using an index / ordering)" }, { "msg_contents": "On Fri, 23 Jan 1998, Bruce Momjian wrote:\n\n> > \tYou'd be better off looking at something like GDBM (which, by the\n> > way, also creates a lock against updates while another is reading the\n> > database)...unless I'm missing something, you aren't looking at doing\n> > anything that *requires* an SQL engine :(\n> \n> I agree. GDBM is a fine system for such uses.\n\n We would like to convert to something that has more perspective, like\nPostgreSQL does. We are not converting to PostgreSQL just to get this one\njob done, but to layout a good platform for the future ...\n\n Yes ! PostgreSQL is highly regarded !\n\n Jan\n\n -- Gospel of Jesus is the saving power of God for all who believe --\nJan Vicherek ## To some, nothing is impossible. ## www.ied.com/~honza\n >>> Free Software Union President ... www.fslu.org <<<\nInteractive Electronic Design Inc. -#- PGP: finger honza@ied.com\n\n", "msg_date": "Fri, 23 Jan 1998 23:08:32 -0500 (EST)", "msg_from": "Jan Vicherek <honza@ied.com>", "msg_from_op": true, "msg_subject": "Re: [QUESTIONS] Re: [HACKERS] Show stopper ? (was: Re: \"cruising\" or\n\t\"browsing\" through tables using an index / ordering)" }, { "msg_contents": "\n\nOn Fri, 23 Jan 1998, Jan Vicherek wrote:\n[big chop]\n\n> This looks like a sad suggestion to me. I'll kick and scream just that\n> I wouldn't have to go with Velocis RAIMA's lower-level C interface. (I'm\n> just staring into its manual to figure out the sequence of C calls I would\n> have to make to accomplish this.)\n\nIf it is still the same as when it was called DBVista... 
then\n\n d_keyfind(xx,x,x)\n while(db_status == S_OK)\n {\n d_recread(x,x,x,x);\n d_keynext(ssss);\n }\n(No error checking in that code...<g>)\n\n------------ end Raima note ------------------------------------\n\nBUT - the functionality you want could be a special case of the Postgres\nbackend and would benefit a very important real world scenario\n(scrolling inquiries)\n\nIF - declare cursor\n and select 'whole table'\n and 'order by a field which is a key'\n\nTreat as special - dont handle ANY data until the FETCH statement.\n IE. Defer the IO until the fetch happens - then access the row(s)\n via the index\n\nThis would give an enormous boost in effective performance with the common\napplication task of a scrolling inquiry against a large table.\nImplementing it would be transparent to current application code and would\noffer a competitive edge over other products without it.\n\nNormally I'm not one to support 'special case' coding tricks, but the\npublic impact on visible performance probably justifies looking into this \na bit further.\nI suspect the query processor can identify the situation. It would need to\nremember that it was special and leave enough control info to allow the\n'fetch' processing to respond to that special case.\n\nComments from the guru's who are up to their necks in the code ?\n\nRegards\nRon O'Hara\n\n> \n> \n> Thanx to all,\n> \n> Jan\n> \n> -- Gospel of Jesus is the saving power of God for all who believe --\n> Jan Vicherek ## To some, nothing is impossible. ## www.ied.com/~honza\n> >>> Free Software Union President ... www.fslu.org <<<\n> Interactive Electronic Design Inc. -#- PGP: finger honza@ied.com\n> \n> \n\n", "msg_date": "Sat, 24 Jan 1998 15:18:05 +1100 (EST)", "msg_from": "\"Ron O'Hara\" <rono@pen.sentuny.com.au>", "msg_from_op": false, "msg_subject": "Re: [QUESTIONS] Attn PG gurus / coders : New approach for ORDER BY ?\n\t(was: Re: Show stopper ?)" }, { "msg_contents": "\n Hey ! 
You sound like you know exactly what I mean ! ;-)\n\nOn Fri, 23 Jan 1998, Bruce Momjian wrote:\n\n> > Important : I'm not looking for a \"pure SQL\" solution. I'm writing a C\n> > emulation library, so if it can be achieved via a call to a C Postgres\n> > function, it would be great.\n> \n> You can put an index on the table, and embed a function inside the\n> engine to spin through the index, getting valid rows.\n\n Aha, this implies that in the index there are valid and non-valid rows.\nI guess those that are to be \"valid\" (no current transactions on that\nrow) and the non-valid \"those that are subject to an update lock /\ntransaction\".\n\n> The code is\n> already there to spin through the index, so you could just hook on to\n> the index, get the 'tid' and use that to get data from the physical\n> table. \n\n This sounds like what I meant. Would you have any pointers (to source\nfiles / functions I should read / understand / call ) ?\n\n\n> No locking, but you may get in trouble if other people are\n> reading the index. \n\n I don't see what trouble I could get into. Any hints appreciated.\n\n\n> Maybe a quick lock-unlock would do it for you. \n\n Do you mean lock-unlock on the table or on the index ?\n\n\n> Not easy, because the engine really doesn't provide a user-friendly C\n> interface for this, but it could be done. \n\n Can you point me to some of the user-nonfriendly files / functions I\nshould use then ?\n\n Thanx a bunch (I have to have the implementation by tomorrow 10am),\n\n Jan\n\n -- Gospel of Jesus is the saving power of God for all who believe --\nJan Vicherek ## To some, nothing is impossible. ## www.ied.com/~honza\n >>> Free Software Union President ... www.fslu.org <<<\nInteractive Electronic Design Inc. -#- PGP: finger honza@ied.com\n\n\n", "msg_date": "Fri, 23 Jan 1998 23:20:09 -0500 (EST)", "msg_from": "Jan Vicherek <honza@ied.com>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Show stopper ? 
(was: Re: \"cruising\" or \"browsing\"\n\tthrough tables using an index / ordering)" }, { "msg_contents": "\n\n Here is another exact perfect descrtiption of what I mean :\n (you don't have to read it again, it's only repost)\n\n Jan\n\nOn Sat, 24 Jan 1998, Ron O'Hara wrote:\n\n> On Fri, 23 Jan 1998, Jan Vicherek wrote:\n> [big chop]\n> \n> > This looks like a sad suggestion to me. I'll kick and scream just that\n> > I wouldn't have to go with Velocis RAIMA's lower-level C interface. (I'm\n> > just staring into its manual to figure out the sequence of C calls I would\n> > have to make to accomplish this.)\n> \n> If it is still the same as when it was called DBVista... then\n> \n> d_keyfind(xx,x,x)\n> while(db_status == S_OK)\n> {\n> d_recread(x,x,x,x);\n> d_keynext(ssss);\n> }\n> (No error checking in that code...<g>)\n> \n> ------------ end Raima note ------------------------------------\n\n Yea, it seems like that would be still the same ... thanx for saving me\nthe time ! :)\n\n v--- Yes, yes yes, this is exactly what I'm trying to accompliesh ---v\n> BUT - the functionality you want could be a special case of the Postgres\n> backend and would benefit a very important real world scenario\n> (scrolling inquiries)\n> \n> IF - declare cursor\n> and select 'whole table'\n> and 'order by a field which is a key'\n> \n> Treat as special - dont handle ANY data until the FETCH statement.\n> IE. 
Defer the IO until the fetch happens - then access the row(s)\n> via the index\n> \n> This would give an enormous boost in effective performance with the common\n> application task of a scrolling inquiry against a large table.\n> Implementing it would be transparent to current application code and would\n> offer a competitive edge over other products without it.\n\n yes yes yes\n\n> Normally I'm not one to support 'special case' coding tricks, but the\n> public impact on visible performance probably justifies looking into this \n> a bit further.\n> I suspect the query processor can identify the situation. It would need to\n> remember that it was special and leave enough control info to allow the\n> 'fetch' processing to respond to that special case.\n> \n> Comments from the guru's who are up to their necks in the code ?\n\n ( I am eager ;-) \n\n\n Jan\n\n -- Gospel of Jesus is the saving power of God for all who believe --\nJan Vicherek ## To some, nothing is impossible. ## www.ied.com/~honza\n >>> Free Software Union President ... www.fslu.org <<<\nInteractive Electronic Design Inc. -#- PGP: finger honza@ied.com\n\n", "msg_date": "Fri, 23 Jan 1998 23:24:00 -0500 (EST)", "msg_from": "Jan Vicherek <honza@ied.com>", "msg_from_op": true, "msg_subject": "Re: [QUESTIONS] Attn PG gurus / coders : New approach for ORDER BY ?\n\t(was: Re: Show stopper ?)" }, { "msg_contents": "\n\nOn Sat, 24 Jan 1998, Ron O'Hara wrote:\n\n[ description of spercial-case deleted ]\n\n> Comments from the guru's who are up to their necks in the code ?\n\n Hmm, I don't know how busy you developers are, so I don't know what\nshould I tell the customer tomorrow. Any proximity guesses ?\n\n Thanx,\n\n Jan\n\n", "msg_date": "Fri, 23 Jan 1998 23:26:31 -0500 (EST)", "msg_from": "Jan Vicherek <honza@ied.com>", "msg_from_op": true, "msg_subject": "Re: Attn PG gurus / coders : New approach for ORDER BY ?" 
}, { "msg_contents": "On Fri, 23 Jan 1998, Jan Vicherek wrote:\n\n> an SQL interface (doing all the optimizing, joining and the complicated\n> stuff) and\n> a low(er)-level C/C++/whatever interface which would allow to use less\n> of the intelligence of PostgreSQL and more of it's retireval system,\n> including indexing.\n\n\t*scratch head* indexing != sorting...indexing doesn't produce\nsorted data...indexing provides a means to very quickly retrieve\n*segments* of data from a large table of data...\n\n> > \tFirst off, *nothing* you are going to be able to accomplish in\n> > *any* SQL engine is going to prevent locking the table against\n> > updates...\n> \n> Sorry, I wasn't precise again : I mean I can't stop other people from\n> updating a table just because one guy is out there \"browsing\" through it.\n\n\tThe new code that Bruce put in *should* provide for better\nschedualing of the locks on a table. Bruce can explain it alot better\nthen me, but the way I understand it, before, if someone had a READ lock\non the table and someone came in looking for a WRITE lock, they got held\nout. If, while the READ lock was still in effect, someone else came in to\nREAD the table, they got in, effectively pushing the WRITE lock further\ndown the queue. With Bruce's patches, that WRITE lock will prevent\nanother READ from grabbing a \"lock\" until after the WRITE finishes. 
As I\nhaven't had a chance to look at it yet, I'm curious as to how often\nsomeone sees a deadlock with the new code.\n\n\tI'm running an radiusd accounting system on v6.2.1 that has one\ntable in it that is:\n\npostgres@zeus> ls -lt rad*\n-rw------- 1 postgres wheel 2678784 Jan 23 23:01 radlog_uniq_id\n-rw------- 1 postgres wheel 1564672 Jan 23 23:01 radlog_stop\n-rw------- 1 postgres wheel 2031616 Jan 23 23:01 radlog_start\n-rw------- 1 postgres wheel 6406144 Jan 23 23:01 radlog\n-rw------- 1 postgres wheel 1572864 Jan 23 23:01 radlog_userid\n-rw------- 1 postgres wheel 79519744 Jan 23 22:51 radhist\n-rw------- 1 postgres wheel 16818176 Jan 23 22:51 radhist_userid\n-rw------- 1 postgres wheel 16793600 Jan 23 22:51 radhist_uniq_id\n-rw------- 1 postgres wheel 11927552 Jan 23 22:51 radhist_stop\n-rw------- 1 postgres wheel 11919360 Jan 23 22:51 radhist_start\n\n\tThose are just two tables in it. The only problem that I've yet\nto have with it, as far as \"deadlocks\" are concerned (deadlocks mean that\none lock is in place while a write lock is trying to happen) is during a\nvacuum...and that's running on an old 486 with one hard drive and *just\nupgraded to* 64Meg of RAM.\n\n\tOh...and I'd say a good 75% of the activities on those above\ntables is multiple simultaneous updates...and the only thing that disrupts\nthat is the vacuum process...\n\n\tVadim...what is that humongous database you are running? And what\nare you running it on?\n\n\t\n> > > I *still* think that there *has to* be a way to find a value that is\n> > > immediatelly next to one I have. This seems like such a primitive\n> > > operation. 
Even the backend must be doing it on some operations, it would\n> > > seem.\n> > \n> > \tNot possible...\n> \n> Remeber that I have an index on the field that I'm looking for the\n> \"next\" value.\n> Is it not possible in such case ?\n\n\tNo, as the index isn't \"ordered\"...there is no logical next/prev\nrecord in an index\n\n> Even by calling some lower-level C function to give me the next value ?\n\n\tthere are no lower-level C functions...for that, you'd be looking\nat something like GDBM...\n\n> Index doesn't \"sort\". Index gets sorted. But pseudo-index doesn't even\n> get sorted, I would have to sort it \"manually\" after every INSERT I would\n> make.\n\n\tIndex gets sorted by...?\n\n> I've just tried this on half-the-size table, and it took 70 seconds :-(\n> \n> meray=> begin;\n> BEGIN\n> meray=> declare mycursor cursor for select * from master order by mpartno;\n> SELECT\n> meray=> fetch forward 1 in mycursor;\n> close mycursor;\n> end;\n> mpartno |mlistprc|mwarcomp|mcost|msupercd|mbasecd|mflag|mpackqty|mdescrip\n> -------------------+--------+--------+-----+--------+-------+-----+--------+----------------\n> F 0 AVZK | 581| 418| 311|S | | 6| 0|SPARK PLUG\n> (1 row)\n> \n> meray=> close mycursor;\n> CLOSE\n> meray=> end;\n> END\n> meray=> \\d masterind\n> \n> Table = masterind\n> +----------------------------------+----------------------------------+-------+\n> | Field | Type | Length|\n> +----------------------------------+----------------------------------+-------+\n> | mpartno | (bp)char | 19 |\n> +----------------------------------+----------------------------------+-------+\n> \n> Since the ORDER BY field (mpartno) has an index on it, why wouldn't\n> Postgres just take that index and go with it one-by-one ?! Such approach\n\n\tAn index isn't sorted. An index is only used with a WHERE clause\n(Bruce, I'm not misrepresenting that, am I?). 
A 'select * from table'\nwill only do a Sequential Scan through the table for the data it needs.\n\nacctng=> select count(start) from radhist;\n count\n------\n563258\n(1 row)\n\nacctng=> explain select * from radhist;\nNOTICE:QUERY PLAN:\n\nSeq Scan on radhist (cost=27022.38 size=537648 width=72)\n\nEXPLAIN\nacctng=> explain select * from radhist where userid = 'scrappy';\nNOTICE:QUERY PLAN:\n\nIndex Scan on radhist (cost=2.05 size=1 width=72)\n\nEXPLAIN\nacctng=> \n\n\n\tDamn, I wasn't thinking. Okay...an index *is* sorted, in a sense.\nYou index on *one* field of a table (with one exception that I know of,\nbut I'm not getting into that here). The index only makes use of that\nif/when you do a WHERE clause on that field.\n\nacctng=> explain select * from radhist where port = 1;\nNOTICE:QUERY PLAN:\n\nSeq Scan on radhist (cost=27022.38 size=1 width=72)\n\nEXPLAIN\n\n\tNotice above it does a Seq Scan? I don't have an index on port,\nsince I'll never do a WHERE \"search\" on it.\n\n\n> wouldn't take 70 seconds on idle P200+ 64MB, but would take 1/2 a second.\n\n\tHow are you starting up postmaster? Take a look at the man page\nfor postgres, which has an option:\n\n -S Specifies the amount of memory to be used by internal\n sorts before using disk files for sorting. This\n value is specified in 1k bytes, and defaults to 512.\n\n\tBy default, very little memory is \"reserved\" for sorting (ORDER\nBY)...try increasing that and see if your performance improves any. My\naccounting server has it set for 10240 (10Meg), since it too had 64Meg of\nRAM and nobody expect as an accounting/RDBMS server. I could probably\nincrease it past that, not sure...\n\t\n> Is such approach pricipally/fundamentally wrong, or is this correct\n> approach, and postgres just needs some more code written for this thing to\n> work ? (I.e. would other things break if this was implemented ?) 
\n\n\tSee my above explanation...Bruce or Vadim could probably explain\nit better, as they are more solid in the internals themselves, but you are\nmisunderstanding what an index is and what it is used for :(\n\n> -- whereas single select (which uses index) takes half a second.\n> meray=> select * from master where mpartno = 'F 0 AVZK ';\n> mpartno |mlistprc|mwarcomp|mcost|msupercd|mbasecd|mflag|mpackqty|mdescrip\n> -------------------+--------+--------+-----+--------+-------+-----+--------+----------------\n> F 0 AVZK | 581| 418| 311|S | | 6| 0|SPARK PLUG\n> F 0 AVZK | 581| 418| 311|S | | 6| 0|SPARK PLUG \n> (2 rows)\n> meray=> \n\n\tAssuming you have an index built on mpartno, then yes, this would\nuse the index because (and only because) of the WHERE clause. Try using\n'explain' at the beginning of your select statements...it will explain\n*what* it is going to do, but not actually do it.\n\n> Now I want do see the row that is in the masterind after the first, in\n> this case it contains the same data.\n> \n> If we can get \"this new approach\" to work, \n\n\tYou can't, unless there is some logical \"greater/less then\" for\nmpartno, such that you could do something like (syntax will be wrong, its\nthe concept that i'm showing):\n\nbegin;\ndeclare cursor mycursor for select where mpartno >= 'F 0 AVZK '\nORDER by mpartno;\nfetch forward 1 from mycursor;\nclose mycursor;\nend;\n\nThat would speed up the search (should?) because now you are finding, in\nthe index, all mpartno's that come after the one your stipulate, and then\nyou are grabbing the first one off the stack...I don't guarantee it will\nbe faster, because it also depends on how much data is returned.\n\nJust thought of it...after you load this table, you did do a 'vacuum\nanalyze' on it, right? 
Just to get the stats updated?\n\n> > \tBasically, take your table, move to the $forward record in it,\n> > grab the next $retrieve records and then release the table.\n> \n> But this creates 70-second lock on the table every time I want to see\n> next record :-(\n\n\tMost of that 70-seconds, I am guessing, was spent in the ORDER\nby...try increasing -S to as much as you dare and see if that helps any...\n\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sat, 24 Jan 1998 00:28:27 -0400 (AST)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: Attn PG gurus / coders : New approach for ORDER BY ? (was: Re:\n\tShow stopper ?)" }, { "msg_contents": "On Fri, 23 Jan 1998, Jan Vicherek wrote:\n\n> > You can put an index on the table, and embed a function inside the\n> > engine to spin through the index, getting valid rows.\n> \n> Aha, this implies that in the index there are valid and non-valid rows.\n> I guess those that are to be \"valid\" (no current transactions on that\n> row) and the non-valid \"those that are subject to an update lock /\n> transaction\".\n\t\n\tThere is no record level locking available at this point in time,\nonly table level. Bruce, just curious...this whole discussion...would it\nbe moot if we did have record level locking?\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sat, 24 Jan 1998 00:32:25 -0400 (AST)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Show stopper ? 
(was: Re: \"cruising\" or \"browsing\"\n\tthrough tables using an index / ordering)" }, { "msg_contents": "On Fri, 23 Jan 1998, Jan Vicherek wrote:\n\n> \n> \n> On Sat, 24 Jan 1998, Ron O'Hara wrote:\n> \n> [ description of spercial-case deleted ]\n> \n> > Comments from the guru's who are up to their necks in the code ?\n> \n> Hmm, I don't know how busy you developers are, so I don't know what\n> should I tell the customer tomorrow. Any proximity guesses ?\n\n\tIMHO...very little chance of this happening in the next several\nmonths...Bruce/Vadim might say differently, but somehow I doubt it :(\nThere are too many things currently on the go that are needed for 99% of\nthe users...\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sat, 24 Jan 1998 00:57:05 -0400 (AST)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Attn PG gurus / coders : New approach for ORDER BY\n ?" }, { "msg_contents": "\nKeep in mind that the entire database gets locked for as long as\nyou've got it open in read/write mode, regardless of if you've read or\nwritten.\n\nOn Fri, 23 January 1998, at 22:53:27, Bruce Momjian wrote:\n\n> > \tYou'd be better off looking at something like GDBM (which, by the\n> > way, also creates a lock against updates while another is reading the\n> > database)...unless I'm missing something, you aren't looking at doing\n> > anything that *requires* an SQL engine :(\n> \n> I agree. GDBM is a fine system for such uses.\n", "msg_date": "Sat, 24 Jan 1998 14:46:58 -0800", "msg_from": "Brett McCormick <brett@abraxas.scene.com>", "msg_from_op": false, "msg_subject": "Re: [QUESTIONS] Re: [HACKERS] Show stopper ? (was: Re: \"cruising\" or\n\t\"browsing\" through tables using an index / ordering)" } ]
[ { "msg_contents": "\nI have been chasing the grant/revoke problems (on Linux platforms) and\nhave had some success. There were two problems causing SIGSEGV's to\ncrash the backend. \n\nThe first problem was caused by a function trying to pass a string\ndirectly. This was fixed by returning the result of a strdup().\n\nThe second problem is in ./src/backend/parser/gram.y . The grant and\nrevoke statements are the only ones to use \"free()\". Somehow this is\ncausing SIGSEGV's and crashing the backend. I removed these from the\nsource and re-built everything: and it works now. But! I know absolutely\nnothing about yacc/bison and do not know the implications of removing\nthese statements from the source. \n\nIf everyone thinks it is OK, I will submit patches. If someone can look\nat the grant and revoke code in gram.y, I will submit the patches for\n./src/backend/utils/init/miscinit.c.\n\n\n-James\n\n \n\n", "msg_date": "Sat, 24 Jan 1998 01:55:47 -0500 (EST)", "msg_from": "James Hughes <jamesh@interpath.com>", "msg_from_op": true, "msg_subject": "Grant/Revoke problems" }, { "msg_contents": "> \n> \n> I have been chasing the grant/revoke problems (on Linux platforms) and\n> have had some success. There were two problems causing SIGSEGV's to\n> crash the backend. \n> \n> The first problem was caused by a function trying to pass a string\n> directly. This was fixed by returning the result of a strdup().\n> \n> The second problem is in ./src/backend/parser/gram.y . The grant and\n> revoke statements are the only ones to use \"free()\". Somehow this is\n> causing SIGSEGV's and crashing the backend. I removed these from the\n> source and re-built everything: and it works now. But! I know absolutely\n> nothing about yacc/bison and do not know the implications of removing\n> these statements from the source. \n> \n> If everyone thinks it is OK, I will submit patches. 
If someone can look\n> at the grant and revoke code in gram.y, I will submit the patches for\n> ./src/backend/utils/init/miscinit.c.\n\nThe free() in gram.y is clearly wrong. Please submit a patch. I have\nfixed some of these in 6.3, but not that one.\n\n-- \nBruce Momjian\nmaillist@candle.pha.pa.us\n", "msg_date": "Sat, 24 Jan 1998 14:39:11 -0500 (EST)", "msg_from": "Bruce Momjian <maillist@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Grant/Revoke problems" } ]
[ { "msg_contents": "The Hermit Hacker <scrappy@hub.org> wrote:\n\n> > Please please help me solve this or make workarounds or anything. I\n> > would *really* like to see PosgreSQL to be playing against the Big\n> > (commercial) Boys !\n>\n> I'm curious, but can the \"Big (commercial) Boys\" do this? If so,\n> can you please provide an example of which and how? Most of us here have\n> access to an one or the other (me, Oracle) to use as a sample system...if\n> we can prove that it does work on another system, then we have something\n> to work with, but right now all I've seen is \"I wish I could do this\", and\n> several examples on how to accomplish it using PostgreSQL, but that's\n> it...\n>\nThe main problem is that PostgreSQL does not use index for sorting and\nthus really does a\n\"copy\" of the whole table and then sorts it before it can use a few rows\nfrom the beginning.\n\nUsing indexes for sorting as well as selecting is on the TODO list, but\nseems to be not very high priority.\n\n>\n> > 0. 
having a value of a field on which there is an index, how can\n> I do :\n> > a) current_pointer = some_function(\"value_I_have\");\n> > b) next_pointer = some_other_function(current_pointer);\n> > c) one_tupple = yet_another_function(next_pointer);\n> > If I can accomplish a,b,c, then I win and I don't have to do\n> questions\n> > 1..5 below.\n>\n> Why not put a sequence field on the table so that you can do:\n>\n> select * from table where rowid = n; -or-\n> select * from table where rowid = n - 1; -or-\n> select * from table where rowid = n + 1; -or-\n> select * from table where rowid >= n and rowid <= n+x;\n>\n> And create the index on rowid?\n\nIt works no better than any other indexed field unless you disallow\ndeletes.\n\nif aggregates were able to use indexes you could do:\n\nselect min(n) from table where rowid >n;\n\nand then\n\nselect * from table where n = n_found by_last_previous_select;\n\nbut as they don't you would get very poor performance from the first\nselect;\n\nThis could be simulated by fetching only the first row from a cursor\nsorted on the field.\n\nSo the real solution would be to use indexes for sorting, maybe at first\nfor single field sorts.\n\nThen one could just do:\n\n--8<---------\n\ndeclare cursor part_cursor for\n select * from part_table\n where indexed_field > 'last_value'\n order by indexed_field ;\n\nfetch 10 from part_cursor;\n\nclose part_cursor;\n\n--8<---------\n\nfor moving backwards you would of course use '<' and 'desc' in the\nselect clause.\n\nUnfortunately it does not work nearly fast enough for big tables as\noften almost the whole table is copied and then sorted before you get\nyour few rows.\n\nEven more unfortunately client side development tools like Access or\nDelphi seem to rely on sorted queries using indexes for sorting and as a\nresult perform very poorly with PostgreSQL in their default modes.\n\nOTOH, it usually shows poor database design if you can't specify your\nseach criteria precisely enough to limit 
the number of rows to some\nmanageable level in interactive applications. It really is the task of\nthe database server to look up things. The poor user should not have to\nwade through zillions of records in a looking for the one she wants\n(even tho the you cand do it quite effectively using ISAM).\n\nOTOOH, it would be very hard for general client side tools to do without\nkeyed access, so addind using indexes for sorting should be given at\nleast some priority.\n\n----------------\nHannu Krosing\nTrust-O-Matic O�\n\n>\n\n", "msg_date": "Sat, 24 Jan 1998 09:36:41 +0200", "msg_from": "Hannu Krosing <hannu@trust.ee>", "msg_from_op": true, "msg_subject": "Re: Browsing the tables and why pgsql does not perform well" }, { "msg_contents": "> Even more unfortunately client side development tools like Access or\n> Delphi seem to rely on sorted queries using indexes for sorting and as a\n> result perform very poorly with PostgreSQL in their default modes.\n\nI find Delphi to be poor even on Informix because Delphi thinks it has\nan ISAM file behind each table, and when using an SQL engine, does tons\nof table rescans every time it wants to change data in a table because\nit doesn't have a physical row it can manage internally. 
\n\n-- \nBruce Momjian\nmaillist@candle.pha.pa.us\n", "msg_date": "Sat, 24 Jan 1998 15:25:08 -0500 (EST)", "msg_from": "Bruce Momjian <maillist@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Browsing the tables and why pgsql does not perform\n\twell" }, { "msg_contents": "On Sat, 24 January 1998, at 09:36:41, Hannu Krosing wrote:\n\n> The main problem is that PostgreSQL does not use index for sorting and\n> thus really does a\n> \"copy\" of the whole table and then sorts it before it can use a few rows\n> from the beginning.\n> \n> Using indexes for sorting as well as selecting is on the TODO list, but\n> seems to be not very high priority.\n\nIt doesn't seem like it would be very difficult -- I'd be happy to\ntackle it if given some pointers in the right direction (I am a newbie\npgsql-hacker and I'm looking for work! give me some!!)\n\n--brett\n\n> OTOOH, it would be very hard for general client side tools to do without\n> keyed access, so addind using indexes for sorting should be given at\n> least some priority.\n> \n> ----------------\n> Hannu Krosing\n> Trust-O-Matic O�\n> \n> >\n> \n", "msg_date": "Sat, 24 Jan 1998 15:02:09 -0800", "msg_from": "Brett McCormick <brett@abraxas.scene.com>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Browsing the tables and why pgsql does not perform\n\twell" }, { "msg_contents": "> \n> On Sat, 24 January 1998, at 09:36:41, Hannu Krosing wrote:\n> \n> > The main problem is that PostgreSQL does not use index for sorting and\n> > thus really does a\n> > \"copy\" of the whole table and then sorts it before it can use a few rows\n> > from the beginning.\n> > \n> > Using indexes for sorting as well as selecting is on the TODO list, but\n> > seems to be not very high priority.\n> \n> It doesn't seem like it would be very difficult -- I'd be happy to\n> tackle it if given some pointers in the right direction (I am a newbie\n> pgsql-hacker and I'm looking for work! 
give me some!!)\n> \n\nIt is so complicated, I can not even suggest where to start.\n\n-- \nBruce Momjian\nmaillist@candle.pha.pa.us\n", "msg_date": "Sat, 24 Jan 1998 19:26:48 -0500 (EST)", "msg_from": "Bruce Momjian <maillist@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Browsing the tables and why pgsql does not perform\n\twell" } ]
[ { "msg_contents": "I wrote :\n\n> The Hermit Hacker wrote:\n> > Jan Vicherek wrote:\n> > > Please please help me solve this or make workarounds or anything. I\n> > > would *really* like to see PosgreSQL to be playing against the Big\n> > > (commercial) Boys !\n> >\n> > I'm curious, but can the \"Big (commercial) Boys\" do this? If so,\n> > can you please provide an example of which and how?\n>\nThey optimise sorting (use indexes) as well and so make it doable using\ncursors\n\nI even think that Oracle can use indexes for some cases of count(*) and\nmin() and max().\n\n> Most of us here have\n> > access to an one or the other (me, Oracle) to use as a sample system...if\n> > we can prove that it does work on another system, then we have something\n> > to work with, but right now all I've seen is \"I wish I could do this\", and\n> > several examples on how to accomplish it using PostgreSQL, but that's\n> > it...\n> >\n> The main problem is that PostgreSQL does not use index for sorting and\n> thus really does a\n> \"copy\" of the whole table and then sorts it before it can use a few rows\n> from the beginning.\n>\n> Using indexes for sorting as well as selecting is on the TODO list, but\n> seems to be not very high priority.\n>\n> >\n> > > 0. 
having a value of a field on which there is an index, how can\n> > I do :\n> > > a) current_pointer = some_function(\"value_I_have\");\n> > > b) next_pointer = some_other_function(current_pointer);\n> > > c) one_tupple = yet_another_function(next_pointer);\n> > > If I can accomplish a,b,c, then I win and I don't have to do\n> > questions\n> > > 1..5 below.\n> >\n> > Why not put a sequence field on the table so that you can do:\n> >\n> > select * from table where rowid = n; -or-\n> > select * from table where rowid = n - 1; -or-\n> > select * from table where rowid = n + 1; -or-\n> > select * from table where rowid >= n and rowid <= n+x;\n> >\n> > And create the index on rowid?\n>\n> It works no better than any other indexed field unless you disallow\n> deletes.\n>\n> if aggregates were able to use indexes you could do:\n>\n> select min(n) from table where rowid >n;\n>\n> and then\n>\n> select * from table where n = n_found by_last_previous_select;\n>\n> but as they don't you would get very poor performance from the first\n> select;\n>\n> This could be simulated by fetching only the first row from a cursor\n> sorted on the field.\n>\n> So the real solution would be to use indexes for sorting, maybe at first\n> for single field sorts.\n>\n> Then one could just do:\n>\n> --8<---------\n> begin;\n> declare cursor part_cursor for\n> select * from part_table\n> where indexed_field > 'last_value'\n> order by indexed_field ;\n>\n> fetch 10 from part_cursor;\n>\n> close part_cursor;\n> end;\n> --8<---------\n>\n> for moving backwards you would of course use '<' and 'desc' in the\n> select clause.\n>\n> Unfortunately it does not work nearly fast enough for big tables as\n> often almost the whole table is copied and then sorted before you get\n> your few rows.\n>\nIt actually works for forward queries if you have a b-tree index on\nindexed_field and if you omit the 'order by'-clause, but for all the\nwrond reasons ;), i.e. 
the backend performes an index scan and so does\nthe right thing.\n\nI just tried it on a 2 780 000 record db (web access log, table size\n500MB) and both the cursor was created and data was returned\nimmediatedly (fetch 3 in my_cursor).\n\nI did:\n\nbegin;\n\ndeclare my_cursor cursor for select * from access_log where adate >\n'28/12/97';\n\nfetch 3 in mu_cursor;\n\nend;\n\n\nIt does not work for moving backwards as the backend does the index scan\nin forward direction regardless of the comparison being '>' or '<';\n\nI then tried adding the 'order by adate desc' to cursor definition. The\nresult was a complete disaster (barely avoided by manually killing the\npostgres processes), as 'fetch 3 in mycursor' run for about 10 min and\nin the process exhausted 2.3GB of disc space for pg_sort files.\n\n> Even more unfortunately client side development tools like Access or\n> Delphi seem to rely on sorted queries using indexes for sorting and as a\n> result perform very poorly with PostgreSQL in their default modes.\n>\n> OTOH, it usually shows poor database design if you can't specify your\n> seach criteria precisely enough to limit the number of rows to some\n> manageable level in interactive applications. It really is the task of\n> the database server to look up things. 
The poor user should not have to\n> wade through zillions of records in a looking for the one she wants\n> (even tho the you cand do it quite effectively using ISAM).\n>\n> OTOOH, it would be very hard for general client side tools to do without\n> keyed access, so addind using indexes for sorting should be given at\n> least some priority.\n>\nMaybe an quicker and easier but not 'future proof' (as it uses\nundocumented and possibly soon-to-change features) fix would be to\nreverse the index scan direction for '<' operator?\n\nVadim: how hard would implementing this proposal be?\n\n(as I think that making the optimiser also optimize ordering would be\nquite a big undertaking)\n\n-----------------------\nHannu Krosing\nTrust-O-Matic O�\n\n", "msg_date": "Sat, 24 Jan 1998 11:21:39 +0200", "msg_from": "Hannu Krosing <hannu@trust.ee>", "msg_from_op": true, "msg_subject": "Re: Browsing the tables / why pgsql does not perform well (with temp\n\tfix)" }, { "msg_contents": "On Sat, 24 Jan 1998, Hannu Krosing wrote:\n\n> > if aggregates were able to use indexes you could do:\n> >\n> > select min(n) from table where rowid >n;\n> >\n> > and then\n> >\n> > select * from table where n = n_found by_last_previous_select;\n> >\n> > but as they don't you would get very poor performance from the first\n> > select;\n\n This looks good. No transactions necessary, no locking, no mutliple rows\ncopying.\n\n how hard would it be to make aggregates able to use indexes ?\n\n Could I manage in a day ? 
(10 hours)\n\n\n> > This could be simulated by fetching only the first row from a cursor\n> > sorted on the field.\n> >\n> > So the real solution would be to use indexes for sorting, maybe at first\n> > for single field sorts.\n\n How many hours would that take to write ?\n\n> > Then one could just do:\n> >\n> > --8<---------\n> > begin;\n> > declare cursor part_cursor for\n> > select * from part_table\n> > where indexed_field > 'last_value'\n> > order by indexed_field ;\n> >\n> > fetch 10 from part_cursor;\n> >\n> > close part_cursor;\n> > end;\n> > --8<---------\n> >\n> > for moving backwards you would of course use '<' and 'desc' in the\n> > select clause.\n> >\n> > Unfortunately it does not work nearly fast enough for big tables as\n> > often almost the whole table is copied and then sorted before you get\n> > your few rows.\n\n After code that makes sorting use indices would fix this problem,\nright ?\n\n Jan\n\n -- Gospel of Jesus is the saving power of God for all who believe --\nJan Vicherek ## To some, nothing is impossible. ## www.ied.com/~honza\n >>> Free Software Union President ... www.fslu.org <<<\nInteractive Electronic Design Inc. -#- PGP: finger honza@ied.com\n\n", "msg_date": "Sat, 24 Jan 1998 20:11:10 -0500 (EST)", "msg_from": "Jan Vicherek <honza@ied.com>", "msg_from_op": false, "msg_subject": "Re: Browsing the tables / why pgsql does not perform well (with temp\n\tfix)" }, { "msg_contents": "On Sat, 24 Jan 1998, Jan Vicherek wrote:\n\n> On Sat, 24 Jan 1998, Hannu Krosing wrote:\n> \n> > > if aggregates were able to use indexes you could do:\n> > >\n> > > select min(n) from table where rowid >n;\n> > >\n> > > and then\n> > >\n> > > select * from table where n = n_found by_last_previous_select;\n> > >\n> > > but as they don't you would get very poor performance from the first\n> > > select;\n> \n> This looks good. 
No transactions necessary, no locking, no mutliple rows\n> copying.\n\n\tThe SELECT above will create a READ lock on the table, preventing\nUPDATES from happening for the duration of the SELECT. There is *no* way\nof getting around or away from this lock...\n\n> > > So the real solution would be to use indexes for sorting, maybe at first\n> > > for single field sorts.\n> \n> How many hours would that take to write ?\n\n\tAs Bruce said in another posting, he isn't even sure *where* to\nstart on this, so I'd say over a month, if not longer...over a month in\nthe sense that as of the end of this week, there will be no new\ndevelopments on the source tree, only stabilization and bug fixes...\n\n\n \n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sat, 24 Jan 1998 23:55:38 -0400 (AST)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: Browsing the tables / why pgsql does not perform well (with temp\n\tfix)" }, { "msg_contents": "\nOn Sat, 24 Jan 1998, The Hermit Hacker wrote:\n\n> On Sat, 24 Jan 1998, Jan Vicherek wrote:\n> \n> > On Sat, 24 Jan 1998, Hannu Krosing wrote:\n> > \n> > > > if aggregates were able to use indexes you could do:\n> > > >\n> > > > select min(n) from table where rowid >n;\n> > > >\n> > > > and then\n> > > >\n> > > > select * from table where n = n_found by_last_previous_select;\n> > > >\n> > > > but as they don't you would get very poor performance from the first\n> > > > select;\n> > \n> > This looks good. No transactions necessary, no locking, no mutliple rows\n> > copying.\n> \n> \tThe SELECT above will create a READ lock on the table, preventing\n> UPDATES from happening for the duration of the SELECT. 
There is *no* way\n> of getting around or away from this lock...\n\n Yes, you are correct.\n In addition, there will be no long-lasting \"begin - declare cursor -\nend\" statement, so the table will not get locked against updates for\nminutes / hours when a person wants to \"browse\" the table ...\n\n .. hmm , but there still may be copying of mutliple rows if the \"rowid\"\nfield is not unique.\n\n Again, accessing the \"next\" item in the index is the only solution. The\nnext item in index has a tid which points to next row in the browsed\ntable.\n\n Jan\n\n -- Gospel of Jesus is the saving power of God for all who believe --\nJan Vicherek ## To some, nothing is impossible. ## www.ied.com/~honza\n >>> Free Software Union President ... www.fslu.org <<<\nInteractive Electronic Design Inc. -#- PGP: finger honza@ied.com\n\n", "msg_date": "Sat, 24 Jan 1998 23:12:49 -0500 (EST)", "msg_from": "Jan Vicherek <honza@ied.com>", "msg_from_op": false, "msg_subject": "Re: Browsing the tables / why pgsql does not perform well (with temp\n\tfix)" }, { "msg_contents": "> > \n> > \tThe SELECT above will create a READ lock on the table, preventing\n> > UPDATES from happening for the duration of the SELECT. There is *no* way\n> > of getting around or away from this lock...\n> \n> Yes, you are correct.\n> In addition, there will be no long-lasting \"begin - declare cursor -\n> end\" statement, so the table will not get locked against updates for\n> minutes / hours when a person wants to \"browse\" the table ...\n\n\tActually, here i believe you are wrong. 
Bruce, please correct me\nif I'm wrong, but it would be faster for you to do the\nbegin;declare...;move...;fetch...;end; then doing a straight SELECT.\n\n\tI'm not *certain* about this, but the way I believe that it works\nis that if you do:\n\nbegin;\ndeclare cursor mycursor for select * from table order by field;\nmove forward 20;\nfetch 20;\nend;\n\n\tThe SELECT/ORDER BY is done in the backend, as is the MOVE/FETCH\nbefore returning any data to the front end. So, now you are returning\nlet's say 100 records to the front end, instead of the whole table. If\nyou do a SELECT, it will return *all* the records to the front end.\n\n\tSo, I would imagine that it would be slightly longer to SELECT all\nrecords and send them all to the front end then it would be to SELECT all\nrecords and just return the 100 that you want.\n\n\tBruce, is this a correct assessment?\n\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sun, 25 Jan 1998 00:51:59 -0400 (AST)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: Browsing the tables / why pgsql does not perform well (with temp\n\tfix)" }, { "msg_contents": "On Sun, 25 Jan 1998, The Hermit Hacker wrote:\n\n> > > \n> > > \tThe SELECT above will create a READ lock on the table, preventing\n> > > UPDATES from happening for the duration of the SELECT. 
There is *no* way\n> > > of getting around or away from this lock...\n> > \n> > Yes, you are correct.\n> > In addition, there will be no long-lasting \"begin - declare cursor -\n> > end\" statement, so the table will not get locked against updates for\n> > minutes / hours when a person wants to \"browse\" the table ...\n> \n> \tActually, here i believe you are wrong.\n\n If I was to select whole table at once, I would be wrong.\n However, with this \"browsing\" business I would only do (if aggregates\nused indeces) :\n\n CurrentValue = NextValue;\n SELECT min(IndexedField) as NextValue from MyTable where IndexedField > CurrentValue;\n SELECT * from MyTable where IndexedField = NextValue;\n <return * to user he he can look at it 5 hours>\n\n Since the above locks the table only during the SELECTs (which are very\nshort [ < .5 seconds ]), other people can update the table while one users\nstares at the results for 5 hours. The first SELECT is very short because\nwe assume that aggregates can use indeces. The second SELECT is very short\nbecause we have \"WHERE\" which uses only a field on which we have an index.\n\n After the 5 hours the user decides to see the next record, so we execute\nthose 2 SELECT statements again.\n\n The only problem with the above is that if IndexedField isn't unique, the\nSELECT will return multiple records instead of only one. (The user can\nsee only one record on the screan at a time.) So this would require the\ncode between the database and the application to handle multiple-row\nresults of the second SELECT statement. An unnecessary complication :\n\n This situation would be solved if I could retrieve a (valid) record\nbased not on the (non-unique) IndexedField, but based on the \"tid\" of the\nrecord. This cannot currently be done through SQL (as the FAQ says). 
\n\n<joke_that_might_become_a_reality>\n I'm tempted to implement an extension to the current postgres' SQL\nlanguage that would allow me to do :\n\n CREATE INDEX MyIndex ON MyTable (IndexField);\n SELECT FIRST(Tid) AS MyFirstTid FROM MyIndex WHERE IndexField = 'what_user_requests';\n SELECT * from MyTable where Tid = MyFirstTid;\n SELECT NEXT(Tid) AS MyNextTid FROM MyIndex WHERE Tid = MyFirstTid;\n\n</joke_that_might_become_a_reality>\n\n I'll ask on developers list whether this is technically possible, given\nthe access that the SQL processing routines have to the back-end index\nprocessing routines.\n\n> begin;\n> declare cursor mycursor for select * from table order by field;\n> move forward 20;\n> fetch 20;\n> end;\n\n the application would have to handle in this case multiple-row results\ncoming from the backend.\n\n\n Thanx for all your intpt,\n\n keep it coming,\n \n Jan\n\n -- Gospel of Jesus is the saving power of God for all who believe --\nJan Vicherek ## To some, nothing is impossible. ## www.ied.com/~honza\n >>> Free Software Union President ... www.fslu.org <<<\nInteractive Electronic Design Inc. -#- PGP: finger honza@ied.com\n\n", "msg_date": "Sun, 25 Jan 1998 00:38:01 -0500 (EST)", "msg_from": "Jan Vicherek <honza@ied.com>", "msg_from_op": false, "msg_subject": "Re: Browsing the tables / why pgsql does not perform well (with temp\n\tfix)" } ]
[ { "msg_contents": "Hi,\n\nI read this message on the debian development list. Thought it might be \nof interest to scrappy on the PostODBC thingie...\n\nMaarten\n\n_____________________________________________________________________________\n| Maarten Boekhold, Faculty of Electrical Engineering TU Delft, NL |\n| Computer Architecture and Digital Technique section |\n| M.Boekhold@et.tudelft.nl |\n-----------------------------------------------------------------------------\n\n---------- Forwarded message ----------\nDate: Fri, 23 Jan 1998 23:47:23 +0100\nFrom: David Frey <david@eos.lugs.ch>\nTo: debian-devel@lists.debian.org\nSubject: Re: Copyright question: GPL patches for non-GPL packages\n\nOn Thu, Jan 22 1998 13:18 +0100 Andreas Jellinghaus writes:\n> if you want to mix gpl'ed software with other software, that other\n> software's licence may not conflict with the gpl.\n> \n> for example, you can mix software with bsd style licencens (or x window\n> licence) with GPL'ed software, becuase these two licences do not\n> conflict. the mix will be under GPL'ed.\n[...]\n> example: this is allowed\n> bsd + gpl\n[...]\nI recall reading once ago, that the standard BSD license is incompatible\nwith the GPL because of point 4.:\n\n4. Neither the name of the University nor the names of its contributors\n may be used to endorse or promote products derived from this software\n without specific prior written permission.\n\nwhich is an additional restriction, which is not allowed under the GPL:\n\n 6. Each time you redistribute the Program (or any work based on the\nProgram), the recipient automatically receives a license from the\noriginal licensor to copy, distribute or modify the Program subject to\nthese terms and conditions. 
You may not impose any further\nrestrictions on the recipients' exercise of the rights granted herein.\nYou are not responsible for enforcing compliance by third parties to\nthis License.\n\nSo people who wanted to have both licenses applicable on their code,\ncancelled the fourth paragraph of the original BSD license...\n\nDavid\n\n\n--\nTO UNSUBSCRIBE FROM THIS MAILING LIST: e-mail the word \"unsubscribe\" to\ndebian-devel-request@lists.debian.org . \nTrouble? e-mail to templin@bucknell.edu .\n\n", "msg_date": "Sat, 24 Jan 1998 11:32:46 +0100 (MET)", "msg_from": "Maarten Boekhold <maartenb@dutepp0.et.tudelft.nl>", "msg_from_op": true, "msg_subject": "Re: Copyright question: GPL patches for non-GPL packages (fwd)" }, { "msg_contents": "On Sat, 24 Jan 1998, Maarten Boekhold wrote:\n\n> Hi,\n> \n> I read this message on the debian development list. Thought it might be \n> of interest to scrappy on the PostODBC thingie...\n\n\tDamn, to say I hate copyrights isn't saying enough :) Okay, I\nguess the first thing to note is that PostODBC actually falls under the\nLGPL vs the GPL, which appears to have slightly more lax restrictions on\nhow it gets included with other packages...\n\n\tNow, with that in mind, should we remove the PostODBC stuff from\nthe interfaces directory an dmove it to the contrib directory? Or remove\nit all together? 
Or leave it where it is?\n\n\n\n > > Maarten\n> \n> _____________________________________________________________________________\n> | Maarten Boekhold, Faculty of Electrical Engineering TU Delft, NL |\n> | Computer Architecture and Digital Technique section |\n> | M.Boekhold@et.tudelft.nl |\n> -----------------------------------------------------------------------------\n> \n> ---------- Forwarded message ----------\n> Date: Fri, 23 Jan 1998 23:47:23 +0100\n> From: David Frey <david@eos.lugs.ch>\n> To: debian-devel@lists.debian.org\n> Subject: Re: Copyright question: GPL patches for non-GPL packages\n> \n> On Thu, Jan 22 1998 13:18 +0100 Andreas Jellinghaus writes:\n> > if you want to mix gpl'ed software with other software, that other\n> > software's licence may not conflict with the gpl.\n> > \n> > for example, you can mix software with bsd style licencens (or x window\n> > licence) with GPL'ed software, becuase these two licences do not\n> > conflict. the mix will be under GPL'ed.\n> [...]\n> > example: this is allowed\n> > bsd + gpl\n> [...]\n> I recall reading once ago, that the standard BSD license is incompatible\n> with the GPL because of point 4.:\n> \n> 4. Neither the name of the University nor the names of its contributors\n> may be used to endorse or promote products derived from this software\n> without specific prior written permission.\n> \n> which is an additional restriction, which is not allowed under the GPL:\n> \n> 6. Each time you redistribute the Program (or any work based on the\n> Program), the recipient automatically receives a license from the\n> original licensor to copy, distribute or modify the Program subject to\n> these terms and conditions. 
You may not impose any further\n> restrictions on the recipients' exercise of the rights granted herein.\n> You are not responsible for enforcing compliance by third parties to\n> this License.\n> \n> So people who wanted to have both licenses applicable on their code,\n> cancelled the fourth paragraph of the original BSD license...\n> \n> David\n> \n> \n> --\n> TO UNSUBSCRIBE FROM THIS MAILING LIST: e-mail the word \"unsubscribe\" to\n> debian-devel-request@lists.debian.org . \n> Trouble? e-mail to templin@bucknell.edu .\n> \n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sat, 24 Jan 1998 23:50:38 -0400 (AST)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Copyright question: GPL patches for non-GPL\n\tpackages (fwd)" }, { "msg_contents": "> \n> On Sat, 24 Jan 1998, Maarten Boekhold wrote:\n> \n> > Hi,\n> > \n> > I read this message on the debian development list. Thought it might be \n> > of interest to scrappy on the PostODBC thingie...\n> \n> \tDamn, to say I hate copyrights isn't saying enough :) Okay, I\n> guess the first thing to note is that PostODBC actually falls under the\n> LGPL vs the GPL, which appears to have slightly more lax restrictions on\n> how it gets included with other packages...\n> \n> \tNow, with that in mind, should we remove the PostODBC stuff from\n> the interfaces directory an dmove it to the contrib directory? Or remove\n> it all together? Or leave it where it is?\n> \n\nLeave it. The posting talks about intermixing source code. In our\ncase, it is separate, and that is enough. 
BSDI ships GNU utilities, but\ndoes not have the entire OS under GPL, and that is GPL, not LGPL.\n\n-- \nBruce Momjian\nmaillist@candle.pha.pa.us\n", "msg_date": "Sat, 24 Jan 1998 23:20:32 -0500 (EST)", "msg_from": "Bruce Momjian <maillist@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Copyright question: GPL patches for non-GPL\n\tpackages (fwd)" }, { "msg_contents": "On Sat, 24 Jan 1998, Bruce Momjian wrote:\n\n> > \n> > On Sat, 24 Jan 1998, Maarten Boekhold wrote:\n> > \n> > > Hi,\n> > > \n> > > I read this message on the debian development list. Thought it might be \n> > > of interest to scrappy on the PostODBC thingie...\n> > \n> > \tDamn, to say I hate copyrights isn't saying enough :) Okay, I\n> > guess the first thing to note is that PostODBC actually falls under the\n> > LGPL vs the GPL, which appears to have slightly more lax restrictions on\n> > how it gets included with other packages...\n> > \n> > \tNow, with that in mind, should we remove the PostODBC stuff from\n> > the interfaces directory an dmove it to the contrib directory? Or remove\n> > it all together? Or leave it where it is?\n> > \n> \n> Leave it. The posting talks about intermixing source code. In our\n> case, it is separate, and that is enough. BSDI ships GNU utilities, but\n> does not have the entire OS under GPL, and that is GPL, not LGPL.\n\n\tTrue enough...FreeBSD ships a bunch of GPL stuff as well, but its\ncore kernel is still under Berkeley :)\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sun, 25 Jan 1998 01:06:33 -0400 (AST)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Copyright question: GPL patches for non-GPL\n\tpackages (fwd)" } ]
[ { "msg_contents": "\nI am still wondering why postmaster fork/exec instead of\njust forking when receiving a new connection.\n\nFork on modern unices (linux and (a think) *BSD) cost\nalmost nothing (in time and memory) thanks to COW (copy-on-write).\nExec in expensive as it breaks COW.\n\nI know this is not the time (have too wait 'til after 6.3),\nbut shouldn't this be on the ToDo-list.\n\n best regards,\n-- \n---------------------------------------------\nG�ran Thyni, sysadm, JMS Bildbasen, Kiruna\n\n", "msg_date": "24 Jan 1998 19:24:27 -0000", "msg_from": "Goran Thyni <goran@bildbasen.se>", "msg_from_op": true, "msg_subject": "fork/exec for backend" }, { "msg_contents": "> \n> \n> I am still wondering why postmaster fork/exec instead of\n> just forking when receiving a new connection.\n> \n> Fork on modern unices (linux and (a think) *BSD) cost\n> almost nothing (in time and memory) thanks to COW (copy-on-write).\n> Exec in expensive as it breaks COW.\n> \n> I know this is not the time (have too wait 'til after 6.3),\n> but shouldn't this be on the ToDo-list.\n\nIt was on my personal TODO. It is on the main one now:\n\n\t* remove fork()/exec() of backend and make it just fork()\n\nI had hoped to do this fir 6.3 as it will save 0.01 seconds on startup,\nbut no luck.\n\n-- \nBruce Momjian\nmaillist@candle.pha.pa.us\n", "msg_date": "Sat, 24 Jan 1998 15:45:48 -0500 (EST)", "msg_from": "Bruce Momjian <maillist@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] fork/exec for backend" }, { "msg_contents": "\nOn 24 Jan 1998, Goran Thyni wrote:\n\n> Fork on modern unices (linux and (a think) *BSD) cost\n> almost nothing (in time and memory) thanks to COW (copy-on-write).\n> Exec in expensive as it breaks COW.\n\n Not so. Modern Unixs will share executable address space between\nprocesses. So if you fork and exec 10 identical programs, they will share\nmost address space.\n\n If you want to speed this up, link postgresql static. 
This makes exec()\ncost almost nothing too. postgresql becomes its own best shared library.\n\n Again, this only applies to \"modern\" systems, but FreeBSD definitely has\nthis behaviour.\n\nTom\n\n", "msg_date": "Sat, 24 Jan 1998 13:35:33 -0800 (PST)", "msg_from": "Tom <tom@sdf.com>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] fork/exec for backend" }, { "msg_contents": "\n On 24 Jan 1998, Goran Thyni wrote:\n\n > Fork on modern unices (linux and (a think) *BSD) cost\n > almost nothing (in time and memory) thanks to COW (copy-on-write).\n > Exec in expensive as it breaks COW.\n\n Not so. Modern Unixs will share executable address space between\n processes. So if you fork and exec 10 identical programs, they will share\n most address space.\n\n1. Code is probably not shared between postmaster and postgres\n processes.\n\n2. Some inits may be done once (by postmaster) and not repeated\n by every child.\n\n3. (and most important) \n With no exec COW is in action, meaning:\n data pages in shared until changed.\n\nCOW is the key to how Linux can fork faster than most unices\nstarts a new thread. :-)\n\n Again, this only applies to \"modern\" systems, but FreeBSD definitely has\n this behaviour.\n\nI don't know if *BSD has COW, but if should think so.\n\n best regards,\n-- \n---------------------------------------------\nG�ran Thyni, sysadm, JMS Bildbasen, Kiruna\n\n", "msg_date": "24 Jan 1998 22:16:53 -0000", "msg_from": "Goran Thyni <goran@bildbasen.se>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] fork/exec for backend" }, { "msg_contents": "\nOn 24 Jan 1998, Goran Thyni wrote:\n\n> On 24 Jan 1998, Goran Thyni wrote:\n> \n> > Fork on modern unices (linux and (a think) *BSD) cost\n> > almost nothing (in time and memory) thanks to COW (copy-on-write).\n> > Exec in expensive as it breaks COW.\n> \n> Not so. Modern Unixs will share executable address space between\n> processes. 
So if you fork and exec 10 identical programs, they will share\n> most address space.\n> \n> 1. Code is probably not shared between postmaster and postgres\n> processes.\n\n A backend is execed for every connection. All the backends will share\ncode space.\n\n> 2. Some inits may be done once (by postmaster) and not repeated\n> by every child.\n\n Not relevant. I'm only concerned with the children.\n\n> 3. (and most important) \n> With no exec COW is in action, meaning:\n> data pages in shared until changed.\n> \n> COW is the key to how Linux can fork faster than most unices\n> starts a new thread. :-)\n\n COW is old news. Perhaps you can find some old SCO systems that don't\ndo COW :)\n\n> Again, this only applies to \"modern\" systems, but FreeBSD definitely has\n> this behaviour.\n> \n> I don't know if *BSD has COW, but if should think so.\n\n I'm not speaking just about COW, but about being able share code between\nseparately execed processes.\n\n> best regards,\n> -- \n> ---------------------------------------------\n> G�ran Thyni, sysadm, JMS Bildbasen, Kiruna\n\nTom\n\n", "msg_date": "Sat, 24 Jan 1998 14:53:22 -0800 (PST)", "msg_from": "Tom <tom@sdf.com>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] fork/exec for backend" }, { "msg_contents": "> \n> \n> On 24 Jan 1998, Goran Thyni wrote:\n> \n> > Fork on modern unices (linux and (a think) *BSD) cost\n> > almost nothing (in time and memory) thanks to COW (copy-on-write).\n> > Exec in expensive as it breaks COW.\n> \n> Not so. Modern Unixs will share executable address space between\n> processes. So if you fork and exec 10 identical programs, they will share\n> most address space.\n> \n> 1. Code is probably not shared between postmaster and postgres\n> processes.\n\n\n\nI think it is shared. postmaster is a symlink to postgres, so by the\ntime it gets to the kernel exec routines, both processes are mapped to\nthe same inode number.\n\n> \n> 2. 
Some inits may be done once (by postmaster) and not repeated\n> by every child.\n\nMaybe.\n\n> \n> 3. (and most important) \n> With no exec COW is in action, meaning:\n> data pages in shared until changed.\n\nThis would also prevent us from attaching to shared memory because it\nwould already be in the address space.\n\n> \n> COW is the key to how Linux can fork faster than most unices\n> starts a new thread. :-)\n\n\n> \n> Again, this only applies to \"modern\" systems, but FreeBSD definitely has\n> this behaviour.\n> \n> I don't know if *BSD has COW, but if should think so.\n\nAll modern Unixes have COW.\n\n-- \nBruce Momjian\nmaillist@candle.pha.pa.us\n", "msg_date": "Sat, 24 Jan 1998 19:21:55 -0500 (EST)", "msg_from": "Bruce Momjian <maillist@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] fork/exec for backend" }, { "msg_contents": "> \n> \n> On 24 Jan 1998, Goran Thyni wrote:\n> \n> > Fork on modern unices (linux and (a think) *BSD) cost\n> > almost nothing (in time and memory) thanks to COW (copy-on-write).\n> > Exec in expensive as it breaks COW.\n> \n> Not so. Modern Unixs will share executable address space between\n> processes. So if you fork and exec 10 identical programs, they will share\n> most address space.\n> \n> If you want to speed this up, link postgresql static. This makes exec()\n> cost almost nothing too. postgresql becomes its own best shared library.\n> \n> Again, this only applies to \"modern\" systems, but FreeBSD definitely has\n> this behaviour.\n\nThis is very OS-specific. SunOS-style shared libraries do have a\nnoticable overhead for each function call. 
In fact, even though these\nare part of BSD44 source, BSDI does not use them, and uses a more crude\nshared library jump table, similar to SVr3 shared libraries because of\nthe SunOS shared library overhead.\n\nI think FreeBSD and Lunix use SunOS style shared libraries, often called\ndynamic shared libraries because you can change the function while the\nbinary is running if you are realy careful.\n\n-- \nBruce Momjian\nmaillist@candle.pha.pa.us\n", "msg_date": "Sat, 24 Jan 1998 19:24:18 -0500 (EST)", "msg_from": "Bruce Momjian <maillist@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] fork/exec for backend" }, { "msg_contents": "\nOn Sat, 24 Jan 1998, Bruce Momjian wrote:\n\n> > On 24 Jan 1998, Goran Thyni wrote:\n> > \n> > > Fork on modern unices (linux and (a think) *BSD) cost\n> > > almost nothing (in time and memory) thanks to COW (copy-on-write).\n> > > Exec in expensive as it breaks COW.\n> > \n> > Not so. Modern Unixs will share executable address space between\n> > processes. So if you fork and exec 10 identical programs, they will share\n> > most address space.\n> > \n> > If you want to speed this up, link postgresql static. This makes exec()\n> > cost almost nothing too. postgresql becomes its own best shared library.\n> > \n> > Again, this only applies to \"modern\" systems, but FreeBSD definitely has\n> > this behaviour.\n> \n> This is very OS-specific. SunOS-style shared libraries do have a\n> noticable overhead for each function call. In fact, even though these\n> are part of BSD44 source, BSDI does not use them, and uses a more crude\n> shared library jump table, similar to SVr3 shared libraries because of\n> the SunOS shared library overhead.\n\n Regardless on the method used, the dynamic executables need to undergo a\nlink step during exec(). 
Linking static reduces that.\n\n> I think FreeBSD and Lunix use SunOS style shared libraries, often called\n> dynamic shared libraries because you can change the function while the\n> binary is running if you are realy careful.\n\n Linux uses ELF shared libraries. I don't know how those work.\n\n I don't FreeBSD to have a high call overhead for dynamic libs at all.\nStatic executables just start faster thats all.\n\n> -- \n> Bruce Momjian\n> maillist@candle.pha.pa.us\n> \n> \n\nTom\n\n", "msg_date": "Sat, 24 Jan 1998 16:51:11 -0800 (PST)", "msg_from": "Tom <tom@sdf.com>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] fork/exec for backend" }, { "msg_contents": "> Regardless on the method used, the dynamic executables need to undergo a\n> link step during exec(). Linking static reduces that.\n\nTrue, but the SVr3/BSDI shared libraries set up the jump table in the\nbinary at binary link time. It has to map into the shared library at\nexec time, but it is a single mapping per shared library, not a mapping\nper function or per function call.\n\n-- \nBruce Momjian\nmaillist@candle.pha.pa.us\n", "msg_date": "Sat, 24 Jan 1998 20:36:05 -0500 (EST)", "msg_from": "Bruce Momjian <maillist@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] fork/exec for backend" }, { "msg_contents": "\n> I'm not speaking just about COW, but about being able share code \n> between separately execed processes.\n\nThis we already have (as long as the OS is sane).\nI was think about reducing child startup time by\nnot breaking COW.\n\nI do not seem to be a big issue in normal usage,\nbut there are several situations where one will\ndo a : connect/simple query/disconnect sequence\nfor instance in simple CGI-queries where it could \nbe a noticable speedup.\n\n best regards,\n-- \n---------------------------------------------\nG�ran Thyni, sysadm, JMS Bildbasen, Kiruna\n\n", "msg_date": "25 Jan 1998 11:13:47 -0000", "msg_from": "Goran Thyni <goran@bildbasen.se>", 
"msg_from_op": true, "msg_subject": "Re: [HACKERS] fork/exec for backend" }, { "msg_contents": "\n> This would also prevent us from attaching to shared memory because it\n> would already be in the address space.\n\nWith no exec we could use mmap instead of shm*.\nHave to clock them to see which one is faster first.\nI think the mmap API is cleaner.\n\n regards,\n-- \n---------------------------------------------\nG�ran Thyni, sysadm, JMS Bildbasen, Kiruna\n\n", "msg_date": "25 Jan 1998 11:31:53 -0000", "msg_from": "Goran Thyni <goran@bildbasen.se>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] fork/exec for backend" }, { "msg_contents": "> \n> \n> > This would also prevent us from attaching to shared memory because it\n> > would already be in the address space.\n> \n> With no exec we could use mmap instead of shm*.\n> Have to clock them to see which one is faster first.\n> I think the mmap API is cleaner.\n\nYes, we really only need memory that is not going to be copy-on-write\nwhen you fork. Both types would do this, but since it would already be\nin the address space, there is no speed to measure. The postmaster is\nthe only one to do the actual operation.\n\n-- \nBruce Momjian\nmaillist@candle.pha.pa.us\n", "msg_date": "Sun, 25 Jan 1998 10:44:06 -0500 (EST)", "msg_from": "Bruce Momjian <maillist@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] fork/exec for backend" } ]
[ { "msg_contents": "I wrote a real small function to revoke update on a column. The function\ndoesn't do anything\nfancy like checking user ids.\n\nI copied most of it from the refint.c in the contrib directory.\n\nShould I post this somewhere? It really isn't very big.\n\nThanx\n--\n | Email - rick@rpacorp.com\nRick Poleshuck | Voice - (908) 653-1070 Fax - (908) 653-0265\n | Mail - RPA Corporation\n | - 308 Elizabeth Avenue, Cranford, New Jersey\n07016\n\n\n", "msg_date": "Sat, 24 Jan 1998 18:08:55 -0500", "msg_from": "Rick Poleshuck <rick@rpacorp.com>", "msg_from_op": true, "msg_subject": "Revoke update on Column" }, { "msg_contents": "Sure send it in, we can put it in contrib.\n\n> \n> I wrote a real small function to revoke update on a column. The function\n> doesn't do anything\n> fancy like checking user ids.\n> \n> I copied most of it from the refint.c in the contrib directory.\n> \n> Should I post this somewhere? It really isn't very big.\n> \n> Thanx\n> --\n> | Email - rick@rpacorp.com\n> Rick Poleshuck | Voice - (908) 653-1070 Fax - (908) 653-0265\n> | Mail - RPA Corporation\n> | - 308 Elizabeth Avenue, Cranford, New Jersey\n> 07016\n> \n> \n> \n> \n\n\n-- \nBruce Momjian\nmaillist@candle.pha.pa.us\n", "msg_date": "Sat, 24 Jan 1998 19:27:26 -0500 (EST)", "msg_from": "Bruce Momjian <maillist@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Revoke update on Column" } ]
[ { "msg_contents": "\n hi,\n\n I'm tempted to implement an extension to the current postgres' SQL\nlanguage that would allow me to do something like :\n\n CREATE INDEX MyIndex ON MyTable (IndexField);\n BEGIN;\n SELECT LAST(Tid) AS MyFirstTid FROM MyIndex WHERE IndexField = 'what_user_requests';\n SELECT * from MyTable where Tid = MyFirstTid;\n END;\n ... disconnect ...\n ... reconnect ...\n BEGIN;\n SELECT NEXT(Tid) AS MyNextTid FROM MyIndex WHERE Tid = MyFirstTid; \n SELECT * from MyTable where Tid = MyNextTid;\n END;\n ... disconnect ...\n ... etc ...\n\n The \"LAST(Tid)\" could also be \"FIRST(Tid)\". The FIRST/LAST keywords are\nthere so it knows which index pointer (TID) to pick when the index isn't\nunique and the 'what_user_requests' value happens to be in the index more\nthan once.\n\n The NEXT could also be PREVIOUS.\n\n Now I understand that the first and second SELECT would have to be in\none transaction, so that other users cannot invalidate on me (delete /\nupdate) the record that the TID points to. \n\n Also after I reconnect, it is not guaranteed that MyFirstTid is still a\nvalid Tid. If it is not valid, then the first SELECT should return 0 rows\nas result. If it is still valid, it should return exacly one row with the\nnext TID in the index.\n\n\n I would appreciate opinions on any of the following :\n\n o. is this a reasonable approach to solve my \"browsing\" problem ?\n\n o. what would be involved in implementing this ?\n\n o. given the current design, how many lines (estimate) of code would\nhave to be written to do this,\n\n o. given the current design, how many functions (estimate) would have to\nbe rewritten\n\n Is there anybody out there who would be willing to do this with me over\nthe next couple of days ? (My \"selling\" of postgres to my customer is\ncompletely lost yet.)\n\n Or is there anybody who could at least feed me with pointers when I get\nstuck ? (I.e. 
you would give me the direction(s), I would do the work.)\n\n\n Thanx,\n\n Jan\n\n\n -- Gospel of Jesus is the saving power of God for all who believe --\nJan Vicherek ## To some, nothing is impossible. ## www.ied.com/~honza\n >>> Free Software Union President ... www.fslu.org <<<\nInteractive Electronic Design Inc. -#- PGP: finger honza@ied.com\n\n", "msg_date": "Sun, 25 Jan 1998 01:00:59 -0500 (EST)", "msg_from": "Jan Vicherek <honza@ied.com>", "msg_from_op": true, "msg_subject": "extension to SQL to access TIDs from indices" } ]
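What the proposed FIRST/NEXT operators would have to compute can be sketched against a toy index of (key, tid) pairs kept in index order. This is a guess at the intended semantics, not the real btree code: duplicate keys yield one TID per NEXT call, and a TID that is no longer in the index falls out as "0 rows", as the author wants.

```python
# Toy model of FIRST(Tid)/NEXT(Tid) over a non-unique index.
import bisect

index = [(7, 101), (7, 205), (7, 342), (12, 17)]   # (IndexField, tid), sorted

def first_tid(index, key):
    """FIRST(Tid) WHERE IndexField = key, or None if the key is absent."""
    i = bisect.bisect_left(index, (key, -1))
    if i < len(index) and index[i][0] == key:
        return index[i][1]
    return None

def next_tid(index, tid):
    """NEXT(Tid) after the index entry holding tid, or None at the end
    (also None when tid is stale, i.e. no longer in the index)."""
    for i, (_, t) in enumerate(index):
        if t == tid:
            return index[i + 1][1] if i + 1 < len(index) else None
    return None
```

Walking `first_tid(index, 7)` and then repeated `next_tid` calls visits 101, 205, 342, then crosses into key 12's entry — one record per call, which is exactly the screen-at-a-time behaviour the SELECT-by-key approach could not give.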
[ { "msg_contents": "Hi...\n\n\tI added this to the contrib directory for the server, but am\nwondering if anyone feels that this just might be useful as a \"normal\"\ndatatype, instead of just in the contrib directory...\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org \n\n---------- Forwarded message ----------\nDate: Sun, 11 Jan 1998 21:24:40 +0100 (CET)\nFrom: Tom I Helbekkmo <tih@Hamartun.Priv.NO>\nTo: The Hermit Hacker <scrappy@hub.org>\nSubject: A small type extension example for the contrib directory\n\nHi!\n\nI figured the accompanying code might fit well in the contrib\ndirectory. I don't know who to send it to, though, so I'm just\nsending it to you. :-) From the README:\n\n| PostgreSQL type extensions for IP and MAC addresses.\n| \n| I needed to record IP and MAC level ethernet addresses in a data\n| base, and I really didn't want to store them as plain strings, with\n| no enforced error checking, so I put together the accompanying code\n| as my first experiment with adding a data type to PostgreSQL. I\n| then thought that this might be useful to others, both directly and\n| as a very simple example of how to do this sort of thing, so here\n| it is, in the hope that it will be useful.\n\n-tih\n-- \nPopularity is the hallmark of mediocrity. --Niles Crane, \"Frasier\"", "msg_date": "Sun, 25 Jan 1998 03:12:09 -0400 (AST)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "A small type extension example for the contrib directory (fwd)" }, { "msg_contents": "On Sun, 25 Jan 1998, The Hermit Hacker wrote:\n\n> \tI added this to the contrib directory for the server, but am\n> wondering if anyone feels that this just might be useful as a \"normal\"\n> datatype, instead of just in the contrib directory...\n\nI, naturally, like the idea. 
:-) However, there are problems with\nthe code I submitted -- as I said, it's my first shot at this:\n\n- since then, I've learned (hi, Bruce!) that commutativity is not what\nI thought it was, and that LIKE is not only not commutative, it's not\neven reflexive.\n\n- the Makefile is BSD-specific, and I don't know how to avoid this.\n\n- my LIKE operator for IP addresses fails. I've written about this to\nthe questions list, and I'm more and more certain that this has to be\na bug in PostgreSQL. (Yes, that's arrogant of me, but I can't for the\nlife of me see any other possibility. If someone could look at it\n(there's a sample \"test.sql\" that shows the problem, and I've left the\ndebug output in the C source), that would be great!)\n\n- I'm using 8 bytes of storage for each type, while I probably could\nget away with 5 for IP addresses and 6 for ethernet addresses. I just\ndidn't know for sure what the alignment aspects and so on were, and I\nhad the disk space to spare, so I never experimented to find out.\n\nWith all that said, I still think that this is both a useful data type\nextension and a neat, small example of how to do this. If it is to be\nused as a base for creating a new internal data type, though, it must\nbe looked at carefully by someone else; I am only an egg.\n\nI'd also like to be able to index on IP addresses, but the amount of\nhair needed to add this seems to be somewhat larger than what I have.\nIf someone wants to show me how to extend my code to allow this, I would\nvery much appreciate it.\n\nI'm attaching an updated version of the sample files. Most of the\noccurrences of \"commutator = ...\" are gone now, and there is a note in\nthe README to the effect that LIKE and NOT LIKE do not work.\n\n-tih\n-- \nPopularity is the hallmark of mediocrity. 
--Niles Crane, \"Frasier\"", "msg_date": "Sun, 25 Jan 1998 10:20:49 +0100 (CET)", "msg_from": "Tom I Helbekkmo <tih@Hamartun.Priv.NO>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] A small type extension example for the contrib\n\tdirectory (fwd)" }, { "msg_contents": "On Sun, 25 Jan 1998, The Hermit Hacker wrote:\n\n> \tI added this to the contrib directory for the server, but am\n> wondering if anyone feels that this just might be useful as a \"normal\"\n> datatype, instead of just in the contrib directory...\n\nPersonally, I would, as at work, I have a database that contains all of\nour machines on the network. It's a real pain handling IP & MAC addresses\nas strings.\n\nThe IP type would be more useful, as I'd thought that people would use it\nin things like web useage logs, etc.\n\n> | PostgreSQL type extensions for IP and MAC addresses.\n> | \n> | I needed to record IP and MAC level ethernet addresses in a data\n> | base, and I really didn't want to store them as plain strings, with\n> | no enforced error checking, so I put together the accompanying code\n> | as my first experiment with adding a data type to PostgreSQL. 
I\n> | then thought that this might be useful to others, both directly and\n> | as a very simple example of how to do this sort of thing, so here\n> | it is, in the hope that it will be useful.\n\n-- \nPeter T Mount petermount@earthling.net or pmount@maidast.demon.co.uk\nMain Homepage: http://www.demon.co.uk/finder\nWork Homepage: http://www.maidstone.gov.uk Work EMail: peter@maidstone.gov.uk\n\n", "msg_date": "Sun, 25 Jan 1998 10:51:20 +0000 (GMT)", "msg_from": "Peter T Mount <psqlhack@maidast.demon.co.uk>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] A small type extension example for the contrib\n\tdirectory (fwd)" }, { "msg_contents": ">\n>Hi...\n>\n>\tI added this to the contrib directory for the server, but am\n>wondering if anyone feels that this just might be useful as a \"normal\"\n>datatype, instead of just in the contrib directory...\n>\n>Marc G. Fournier \n>Systems Administrator @ hub.org \n>primary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org \n>\n\n\tI think this would be great as a standard tyoe. Many people wn't poke\n\taround n the source tree looking for neat things like this, but would\n\tgain even greater respect for Postgres if they found it in the default\n\tsystem. Remember we know have a lot of people installing RedHat inux,\n\tand being presented with Postgres as the default SQL server. \n\n\tI would like to see them impressed by the functionality. Many of these\n\tpeople will probably want to do table with network parmaters in them.\n\n\tI recomend making it a standard type.\n\n-- \nStan Brown stanb@netcom.com 770-996-6955\nFactory Automation Systems\nAtlanta Ga.\n-- \nLook, look, see Windows 95. Buy, lemmings, buy! \nPay no attention to that cliff ahead... Henry Spencer\n(c) 1998 Stan Brown. 
Redistribution via the Microsoft Network is prohibited.\n\n", "msg_date": "Sun, 25 Jan 1998 09:45:49 -0500 (EST)", "msg_from": "\"Stan Brown\" <stanb@awod.com>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] A small type extension example for the contrib\n\tdirectory (fwd)" }, { "msg_contents": "\nI think it's great for the distribution! speaking of which, I've\nwritten some more datetime routines which I think are pretty basic.\nOne just returns the datetime as the number of seconds since the epoch\n(why wasn't that possible?) and the other is an interface to the\nstrftime function call, to custom format date strings... what should\nthe function names be called? right now i've got datetime2seconds &\nstrfdatetime.. any other suggestions? and where should I submit the\ncode?\n\nOn Sun, 25 January 1998, at 03:12:09, The Hermit Hacker wrote:\n\n> Hi...\n> \n> \tI added this to the contrib directory for the server, but am\n> wondering if anyone feels that this just might be useful as a \"normal\"\n> datatype, instead of just in the contrib directory...\n> \n> Marc G. Fournier \n> Systems Administrator @ hub.org \n> primary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org \n> \n> ---------- Forwarded message ----------\n> Date: Sun, 11 Jan 1998 21:24:40 +0100 (CET)\n> From: Tom I Helbekkmo <tih@Hamartun.Priv.NO>\n> To: The Hermit Hacker <scrappy@hub.org>\n> Subject: A small type extension example for the contrib directory\n> \n> Hi!\n> \n> I figured the accompanying code might fit well in the contrib\n> directory. I don't know who to send it to, though, so I'm just\n> sending it to you. :-) From the README:\n> \n> | PostgreSQL type extensions for IP and MAC addresses.\n> | \n> | I needed to record IP and MAC level ethernet addresses in a data\n> | base, and I really didn't want to store them as plain strings, with\n> | no enforced error checking, so I put together the accompanying code\n> | as my first experiment with adding a data type to PostgreSQL. 
I\n> | then thought that this might be useful to others, both directly and\n> | as a very simple example of how to do this sort of thing, so here\n> | it is, in the hope that it will be useful.\n> \n> -tih\n> -- \n> Popularity is the hallmark of mediocrity. --Niles Crane, \"Frasier\"\n", "msg_date": "Sun, 25 Jan 1998 07:21:11 -0800", "msg_from": "Brett McCormick <brett@abraxas.scene.com>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] A small type extension example for the contrib\n\tdirectory (fwd)" }, { "msg_contents": "> \n> I figured the accompanying code might fit well in the contrib\n> directory. I don't know who to send it to, though, so I'm just\n> sending it to you. :-) From the README:\n\n/contrib looks like a good place. We need to re-emphasise contrib,\nrather than putting this in the backend.\n\n> \n> | PostgreSQL type extensions for IP and MAC addresses.\n> | \n> | I needed to record IP and MAC level ethernet addresses in a data\n> | base, and I really didn't want to store them as plain strings, with\n> | no enforced error checking, so I put together the accompanying code\n> | as my first experiment with adding a data type to PostgreSQL. I\n> | then thought that this might be useful to others, both directly and\n> | as a very simple example of how to do this sort of thing, so here\n> | it is, in the hope that it will be useful.\n> \n> -tih\n> -- \n> Popularity is the hallmark of mediocrity. 
--Niles Crane, \"Frasier\"\n> \n> --1426065427-653378052-884550280=:24931--\n> \n> \n\n\n-- \nBruce Momjian\nmaillist@candle.pha.pa.us\n", "msg_date": "Sun, 25 Jan 1998 10:40:21 -0500 (EST)", "msg_from": "Bruce Momjian <maillist@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] A small type extension example for the contrib\n\tdirectory (fwd)" }, { "msg_contents": "\n\nOn Sun, 25 Jan 1998, Peter T Mount wrote:\n\n> On Sun, 25 Jan 1998, The Hermit Hacker wrote:\n> \n> > \tI added this to the contrib directory for the server, but am\n> > wondering if anyone feels that this just might be useful as a \"normal\"\n> > datatype, instead of just in the contrib directory...\n> \n> Personally, I would, as at work, I have a database that contains all of\n> our machines on the network. It's a real pain handling IP & MAC addresses\n> as strings.\n> \n> The IP type would be more useful, as I'd thought that people would use it\n> in things like web useage logs, etc.\n> \n> > | PostgreSQL type extensions for IP and MAC addresses.\n> > | \n> > | I needed to record IP and MAC level ethernet addresses in a data\n> > | base, and I really didn't want to store them as plain strings, with\n> > | no enforced error checking, so I put together the accompanying code\n> > | as my first experiment with adding a data type to PostgreSQL. 
I\n> > | then thought that this might be useful to others, both directly and\n> > | as a very simple example of how to do this sort of thing, so here\n> > | it is, in the hope that it will be useful.\n> \n\nHmm...\n\n\tIt would be nice to see PgSQL integrated into Scotty/Tkined :)\n\n\n-James\n\n", "msg_date": "Sun, 25 Jan 1998 12:11:18 -0500 (EST)", "msg_from": "James Hughes <jamesh@interpath.com>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] A small type extension example for the contrib\n\tdirectory (fwd)" }, { "msg_contents": "Thus spake The Hermit Hacker\n> \tI added this to the contrib directory for the server, but am\n> wondering if anyone feels that this just might be useful as a \"normal\"\n> datatype, instead of just in the contrib directory...\n\nI have always thought that this would be useful. Perhaps we should have\na \"Would be nice if we had it\" list of datatypes to see if anyone would\nlike to implement some. How's this for starters?\n\nIP addresses\nMAC addresses\nPhone numbers (with international and area codes)\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n", "msg_date": "Sun, 25 Jan 1998 13:53:00 -0500 (EST)", "msg_from": "darcy@druid.net (D'Arcy J.M. Cain)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] A small type extension example for the contrib\n\tdirectory (fwd)" }, { "msg_contents": "> I added this to the contrib directory for the server, but am\n> wondering if anyone feels that this just might be useful as a \"normal\"\n> datatype, instead of just in the contrib directory...\n\nI second Bruce's point that contrib is a useful place and should be\nre-emphasized. In particular, it is a great place for new data types and\nfunctions to be implemented, even if they might end up in the backend in a future\nrelease. 
For example, contrib has code to implement 64-bit integers, and perhaps\nsomeday we will move them to the backend. In the meantime, you can install\nanything from contrib into template1 and your whole installation can then use it.\n\n - Tom\n\nIn hindsight, this thread is a _good_ case for contrib, since some of the\nfeatures of the candidate data type (e.g. \"like\" pattern matching) may not have\nbehaved properly and needed more work.\n\n", "msg_date": "Tue, 27 Jan 1998 15:22:27 +0000", "msg_from": "\"Thomas G. Lockhart\" <lockhart@alumni.caltech.edu>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] A small type extension example for the contrib\n\tdirectory (fwd)" }, { "msg_contents": "\nExcept, of course, if the database was created before the\nfunction/types were inserted into template1, correct?\n\nOn Tue, 27 January 1998, at 15:22:27, Thomas G. Lockhart wrote:\n\n> In the meantime, you can install\n> anything from contrib into template1 and your whole installation can then use it.\n> \n> - Tom\n> \n", "msg_date": "Tue, 27 Jan 1998 15:57:00 -0800", "msg_from": "Brett McCormick <brett@abraxas.scene.com>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] A small type extension example for the contrib\n\tdirectory (fwd)" } ]
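The type-extension pattern discussed in this thread pairs an input function (text to internal form) with an output function (internal form back to text). As a rough standalone sketch of that idea for an IP-address type, in plain C with no backend headers, and with all names (`ipaddr_in`, `ipaddr_out`, the packed-int representation) invented here rather than taken from the contrib code:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <assert.h>

/* Hypothetical internal form: four octets packed into an unsigned int.
 * The real contrib type defines its own struct; this is illustration only. */
typedef unsigned int ipaddr;

/* "Input function": dotted-quad text -> internal form; 0 on bad input. */
static int ipaddr_in(const char *str, ipaddr *out)
{
    unsigned int a, b, c, d;
    char extra;

    if (sscanf(str, "%u.%u.%u.%u%c", &a, &b, &c, &d, &extra) != 4)
        return 0;               /* too few octets or trailing junk */
    if (a > 255 || b > 255 || c > 255 || d > 255)
        return 0;               /* enforced error checking, per the README */
    *out = (a << 24) | (b << 16) | (c << 8) | d;
    return 1;
}

/* "Output function": internal form -> freshly allocated text. */
static char *ipaddr_out(ipaddr ip)
{
    char *result = malloc(16);  /* "255.255.255.255" plus NUL */

    sprintf(result, "%u.%u.%u.%u",
            (ip >> 24) & 255, (ip >> 16) & 255, (ip >> 8) & 255, ip & 255);
    return result;
}

/* Convenience check: does a string survive a parse/print round trip? */
static int ipaddr_roundtrip_ok(const char *str)
{
    ipaddr ip;
    char *text;
    int ok;

    if (!ipaddr_in(str, &ip))
        return 0;
    text = ipaddr_out(ip);
    ok = (strcmp(text, str) == 0);
    free(text);
    return ok;
}
```

In an actual backend type the same pair would be registered with CREATE FUNCTION and CREATE TYPE and would allocate with palloc rather than malloc; the point here is only the in/out symmetry and the validation that plain strings never gave.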
[ { "msg_contents": "The Hermit Hacker <scrappy@hub.org> wrote:\n\n> Now, with that in mind, should we remove the PostODBC stuff from\n> the interfaces directory an dmove it to the contrib directory? Or remove\n> it all together? Or leave it where it is?\n>\nProbably the safest way would be to put it in contrib, and maybe leave a\nREADME in interfaces/PostODBC informing people of its being in contrib.\n\nHannu\n\n", "msg_date": "Sun, 25 Jan 1998 18:05:34 +0200", "msg_from": "Hannu Krosing <hannu@trust.ee>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: Copyright question: GPL patches for non-GPL\n\tpackages (fwd)" }, { "msg_contents": "> \n> The Hermit Hacker <scrappy@hub.org> wrote:\n> \n> > Now, with that in mind, should we remove the PostODBC stuff from\n> > the interfaces directory an dmove it to the contrib directory? Or remove\n> > it all together? Or leave it where it is?\n> >\n> Probably the safest way would be to put it in contrib, and maybe leave a\n> README in interfaces/PostODBC informing people of its being in contrib.\n\nLeave it in interfaces.\n\n-- \nBruce Momjian\nmaillist@candle.pha.pa.us\n", "msg_date": "Sun, 25 Jan 1998 14:11:36 -0500 (EST)", "msg_from": "Bruce Momjian <maillist@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Copyright question: GPL patches for non-GPL\n\tpackages (fwd)" } ]
[ { "msg_contents": "\nhow do you find the elementType for the array? it seems to be passed\nto array_in & array_out what about to user defined functions? how\ndoes this work?\n\n--brett\n", "msg_date": "Sun, 25 Jan 1998 08:56:00 -0800", "msg_from": "Brett McCormick <brett@abraxas.scene.com>", "msg_from_op": true, "msg_subject": "user defined functions with array arguments" } ]
[ { "msg_contents": "\nokay, it's passed to me too the same way, as the second arg...\ni checked out pg_proc, sorry for the waste of bandwidth!\n", "msg_date": "Sun, 25 Jan 1998 08:57:56 -0800", "msg_from": "Brett McCormick <brett@abraxas.scene.com>", "msg_from_op": true, "msg_subject": "ignore array query" } ]
[ { "msg_contents": "\nsilly me, it turned out the function had two arguments, and the second\nargument just happened to coincide with the type of the array.\n\nWhen writing a C function to be dynamically loaded and called from\npostgres, how do you find out the base element type of an array that\nyou're accepting as an argument (getting called with). array_in/out\nseem to get passed this value, whereas my function just gets the\npointer without knowing what the underlying data is. do I have to\nlook this up once inside the function? or, if I know what I'm\ngetting, can I fudge it? (i.e. treat them as what I expect them to be\n(int4s) without regard for what they actually are). that doesn't\nsound so good to me.\n\nI'd appreciate any help!\n\n--brett\n", "msg_date": "Sun, 25 Jan 1998 11:10:21 -0800", "msg_from": "Brett McCormick <brett@abraxas.scene.com>", "msg_from_op": true, "msg_subject": "array questions still stands" }, { "msg_contents": "\nOkay, I suppose more obviously i've just got an array of integers (by\nvalue) @ ARR_DATA_PTR(array), so I don't have much to worry about.\n\nOn Sun, 25 January 1998, at 11:10:21, Brett McCormick wrote:\n\n> silly me, it turned out the function had two arguments, and the second\n> argument just happened to coincide with the type of the array.\n> \n> When writing a C function to be dynamically loaded and called from\n> postgres, how do you find out the base element type of an array that\n> you're accepting as an argument (getting called with). array_in/out\n> seem to get passed this value, whereas my function just gets the\n> pointer without knowing what the underlying data is. do I have to\n> look this up once inside the function? or, if I know what I'm\n> getting, can I fudge it? (i.e. treat them as what I expect them to be\n> (int4s) without regard for what they actually are). that doesn't\n> sound so good to me.\n> \n> I'd appreciate any help!\n> \n> --brett\n", "msg_date": "Sun, 25 Jan 1998 15:33:03 -0800", "msg_from": "Brett McCormick <brett@abraxas.scene.com>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] array questions still stands" } ]
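For anyone following Brett's question: once the element type is known to be int4 (a pass-by-value type), the region at ARR_DATA_PTR() really is just a C array of 4-byte integers. A standalone sketch of that "treat them as int4s" approach, in ordinary C without postgres.h, with the helper names invented here:

```c
#include <assert.h>

/* Sum a 1-D array of int4 elements.  In a real dynamically loaded
 * function, 'data' would come from ARR_DATA_PTR(array) and 'nelems'
 * from the array's dimension information (ARR_NDIM/ARR_DIMS macros). */
static int sum_int4_array(const int *data, int nelems)
{
    int i, total = 0;

    for (i = 0; i < nelems; i++)
        total += data[i];
    return total;
}

/* Stand-in for the data region the backend would hand us. */
static int demo_sum(void)
{
    int values[4] = {1, 2, 3, 4};   /* what ARR_DATA_PTR() would point at */

    return sum_int4_array(values, 4);
}
```

A safer function would first look up the array's element type in pg_type instead of assuming int4, which is exactly the concern raised in the message above.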
[ { "msg_contents": "I found this patch in my mailbox. Is there any intestest in this, or is\nit too site-specific?\n\n> \n> Eze Ogwuma writes:\n> > Bruce Momjian <maillist@candle.pha.pa.us> writes:\n> > > Can you be specific? Something I can add to the TODO list.\n> > \n> > Database based access for users so that each user can be giving access\n> > to a particular database only. More permissions for each databse user:\n> > Create, Drop, Select, Insert etc. Possibly table based\n> > authentification as well.\n> \n> I needed to do that for the web database that I'm setting up. We have\n> 20000 users and each (potentially) needs a separate database which is\n> only accessible to them. Rather than having 20000 lines in pg_hba.conf,\n> I've patched Postgres so that the special token \"%username\" in the\n> database field of pg_hba.conf allows access only to the username which\n> is connecting. (I chose the leading \"%\" so that it couldn't clash with\n> a real database name.) Since the patch is against 6.1 rather than\n> 6.2beta, I hadn't made it public. 
Here it is in case it's of interest.\n> \n> ----------------------------- cut here -----------------------------\n> --- postgresql-v6.1/src/include/libpq/hba.h.ORI\tWed Jul 30 18:05:12 1997\n> +++ postgresql-v6.1/src/include/libpq/hba.h\tWed Jul 30 18:05:37 1997\n> @@ -42,7 +42,7 @@\n> hba_recvauth(const Port *port, const char database[], const char user[],\n> const char DataDir[]);\n> void find_hba_entry(const char DataDir[], const struct in_addr ip_addr, \n> -\t\t const char database[], \n> +\t\t const char user[], const char database[], \n> \t\t bool *host_ok_p, enum Userauth *userauth_p, \n> \t\t char usermap_name[], bool find_password_entries);\n> \n> --- postgresql-v6.1/src/backend/libpq/hba.c.ORI\tWed Jul 30 18:05:47 1997\n> +++ postgresql-v6.1/src/backend/libpq/hba.c\tThu Jul 31 14:18:03 1997\n> @@ -144,8 +144,8 @@\n> \n> static void\n> process_hba_record(FILE *file, \n> - const struct in_addr ip_addr, const char database[],\n> - bool *matches_p, bool *error_p, \n> + const struct in_addr ip_addr, const char user[],\n> + const char database[], bool *matches_p, bool *error_p, \n> enum Userauth *userauth_p, char usermap_name[],\n> \t\t bool find_password_entries) {\n> /*---------------------------------------------------------------------------\n> @@ -173,7 +173,8 @@\n> if (buf[0] == '\\0') *matches_p = false;\n> else {\n> /* If this record isn't for our database, ignore it. 
*/\n> - if (strcmp(buf, database) != 0 && strcmp(buf, \"all\") != 0) {\n> + if (strcmp(buf, database) != 0 && strcmp(buf, \"all\") != 0\n> + && (strcmp(buf, \"%username\") != 0 || strcmp(user, database) != 0)) {\n> *matches_p = false;\n> read_through_eol(file);\n> } else {\n> @@ -235,7 +236,8 @@\n> \n> static void\n> process_open_config_file(FILE *file, \n> - const struct in_addr ip_addr, const char database[],\n> + const struct in_addr ip_addr,\n> + const char user[], const char database[],\n> bool *host_ok_p, enum Userauth *userauth_p, \n> char usermap_name[], bool find_password_entries) {\n> /*---------------------------------------------------------------------------\n> @@ -261,7 +263,7 @@\n> else {\n> if (c == '#') read_through_eol(file);\n> else {\n> - process_hba_record(file, ip_addr, database, \n> + process_hba_record(file, ip_addr, user, database, \n> &found_entry, &error, userauth_p, usermap_name,\n> \t\t\t find_password_entries);\n> }\n> @@ -277,7 +279,7 @@\n> \n> void\n> find_hba_entry(const char DataDir[], const struct in_addr ip_addr, \n> - const char database[],\n> + const char user[], const char database[],\n> bool *host_ok_p, enum Userauth *userauth_p, \n> char usermap_name[], bool find_password_entries) {\n> /*--------------------------------------------------------------------------\n> @@ -348,8 +350,8 @@\n> fputs(PQerrormsg, stderr);\n> pqdebug(\"%s\", PQerrormsg);\n> } else {\n> - process_open_config_file(file, ip_addr, database, host_ok_p, userauth_p,\n> - usermap_name, find_password_entries);\n> + process_open_config_file(file, ip_addr, user, database, host_ok_p,\n> + userauth_p, usermap_name, find_password_entries);\n> fclose(file);\n> }\n> free(conf_file);\n> @@ -719,7 +721,7 @@\n> /* Our eventual return value */\n> \n> \n> - find_hba_entry(DataDir, port->raddr.sin_addr, database, \n> + find_hba_entry(DataDir, port->raddr.sin_addr, user, database, \n> &host_ok, &userauth, usermap_name, \n> \t\t false /* don't find password entries of type 
'password' */);\n> \n> --- postgresql-v6.1/src/backend/libpq/password.c.ORI\tWed Jul 30 18:05:55 1997\n> +++ postgresql-v6.1/src/backend/libpq/password.c\tWed Jul 30 18:06:43 1997\n> @@ -23,7 +23,7 @@\n> char *p, *test_user, *test_pw;\n> char salt[3];\n> \n> - find_hba_entry(DataDir, port->raddr.sin_addr, database, \n> + find_hba_entry(DataDir, port->raddr.sin_addr, user, database, \n> \t\t &host_ok, &userauth, pw_file_name, true);\n> \n> if(!host_ok) {\n> ----------------------------- cut here -----------------------------\n> \n> --Malcolm\n> \n> -- \n> Malcolm Beattie <mbeattie@sable.ox.ac.uk>\n> Unix Systems Programmer\n> Oxford University Computing Services\n> \n> \n\n\n-- \nBruce Momjian\nmaillist@candle.pha.pa.us\n", "msg_date": "Sun, 25 Jan 1998 14:27:51 -0500 (EST)", "msg_from": "Bruce Momjian <maillist@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: [QUESTIONS] How is PostgreSQL doing?" }, { "msg_contents": "> \n> Eze Ogwuma writes:\n> > Bruce Momjian <maillist@candle.pha.pa.us> writes:\n> > > Can you be specific? Something I can add to the TODO list.\n> > \n> > Database based access for users so that each user can be giving access\n> > to a particular database only. More permissions for each databse user:\n> > Create, Drop, Select, Insert etc. Possibly table based\n> > authentification as well.\n> \n> I needed to do that for the web database that I'm setting up. We have\n> 20000 users and each (potentially) needs a separate database which is\n> only accessible to them. Rather than having 20000 lines in pg_hba.conf,\n> I've patched Postgres so that the special token \"%username\" in the\n> database field of pg_hba.conf allows access only to the username which\n> is connecting. (I chose the leading \"%\" so that it couldn't clash with\n> a real database name.) Since the patch is against 6.1 rather than\n> 6.2beta, I hadn't made it public. 
Here it is in case it's of interest.\n\nI have re-generated this patch for the current source, and changed\n'%username' to 'sameuser'. I added documentation in pg_hba.conf.\n\nPatch applied. This is a nice feature.\n\n-- \nBruce Momjian | 830 Blythe Avenue\nmaillist@candle.pha.pa.us | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Sat, 13 Jun 1998 00:26:51 -0400 (EDT)", "msg_from": "Bruce Momjian <maillist@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: [QUESTIONS] How is PostgreSQL doing?" } ]
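The matching rule the patch adds to process_hba_record() can be read in positive form as follows. This is an illustrative standalone sketch, not backend code: the function name is invented, and the special token was "%username" in Malcolm's 6.1 patch and became "sameuser" when Bruce merged it:

```c
#include <string.h>
#include <assert.h>

/* A pg_hba.conf record applies to a connection when its database field
 * names the database, is the wildcard "all", or is the special token
 * and the connecting user's name equals the database name.  This is the
 * logical complement of the non-match test in process_hba_record(). */
static int hba_record_applies(const char *field,
                              const char *user,
                              const char *database)
{
    if (strcmp(field, database) == 0)
        return 1;                      /* explicit database name */
    if (strcmp(field, "all") == 0)
        return 1;                      /* wildcard record */
    if (strcmp(field, "%username") == 0 && strcmp(user, database) == 0)
        return 1;                      /* per-user private database */
    return 0;
}
```

With one such record, 20000 users each reach only the database named after themselves, instead of needing 20000 lines in pg_hba.conf.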
[ { "msg_contents": "\nI'm stuck as to how to make this a parameter for the backend.\n\nThe postmaster could read a pg_blocksize file and then pass that,\nbut how could _that_ be set differently when calling the same\ncopy of initdb?\n\nI'd like to avoid having to set it in the Makefile in the way\nthat the NAMEDATALEN and OIDNAMELEN variables are set up.\n\n*small lightbulb on*\n\nWould it be acceptable for this to be in a file in the PGLIB\ndirectory? This could be read in by initdb and then passed along\nto the postgres calls. Does this seem reasonable and/or clean?\n\nOr is there a really obvious way that I'm just not seeing right now?\n\ndarrenk\n", "msg_date": "Sun, 25 Jan 1998 14:38:58 -0500", "msg_from": "darrenk@insightdist.com (Darren King)", "msg_from_op": true, "msg_subject": "Variable Block Size Dilemma" }, { "msg_contents": "> \n> \n> I'm stuck as to how to make this a parameter for the backend.\n> \n> The postmaster could read a pg_blocksize file and then pass that,\n> but how could _that_ be set differently when calling the same\n> copy of initdb?\n> \n> I'd like to avoid having to set it in the Makefile in the way\n> that the NAMEDATALEN and OIDNAMELEN variables are set up.\n> \n> *small lightbulb on*\n> \n> Would it be acceptable for this to be in a file in the PGLIB\n> directory? This could be read in by initdb and then passed along\n> to the postgres calls. Does this seem reasonable and/or clean?\n> \n> Or is there a really obvious way that I'm just not seeing right now?\n\nYou ask about the block size as part of initdb, and create a file in\n/pgsql/data as part of initdb's work. Then each postmaster reads the\nfile, and passes the value to each backend as a parameter.\n\n-- \nBruce Momjian\nmaillist@candle.pha.pa.us\n", "msg_date": "Sun, 25 Jan 1998 15:21:39 -0500 (EST)", "msg_from": "Bruce Momjian <maillist@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Variable Block Size Dilemma" } ]
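Bruce's suggestion above amounts to: initdb records the chosen block size in a small file under the data directory, and each postmaster reads it back at startup and passes the value to its backends. A minimal sketch in plain C; the file name, format, and fallback behavior are invented here for illustration:

```c
#include <stdio.h>
#include <assert.h>

/* What initdb would do: write the block size into the data directory. */
static int write_blocksize(const char *path, int blcksz)
{
    FILE *fp = fopen(path, "w");

    if (fp == NULL)
        return 0;
    fprintf(fp, "%d\n", blcksz);
    fclose(fp);
    return 1;
}

/* What the postmaster would do: read it back, falling back to the
 * compiled-in default if the file is absent (e.g. an older data dir). */
static int read_blocksize(const char *path, int fallback)
{
    FILE *fp = fopen(path, "r");
    int blcksz;

    if (fp == NULL)
        return fallback;
    if (fscanf(fp, "%d", &blcksz) != 1)
        blcksz = fallback;
    fclose(fp);
    return blcksz;
}
```

This keeps the value out of the Makefile (unlike NAMEDATALEN/OIDNAMELEN) while still letting the same initdb binary create clusters with different block sizes.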
[ { "msg_contents": "> \n> >\n> >> \tI think this would be great as a standard tyoe. Many people wn't poke\n> >> \taround n the source tree looking for neat things like this, but would\n> >> \tgain even greater respect for Postgres if they found it in the default\n> >> \tsystem. Remember we know have a lot of people installing RedHat inux,\n> >> \tand being presented with Postgres as the default SQL server. \n> >> \n> >> \tI would like to see them impressed by the functionality. Many of these\n> >> \tpeople will probably want to do table with network parmaters in them.\n> >> \n> >> \tI recomend making it a standard type.\n> >\n> >OK, I can be swayed to make it standard.\n> >\n> \n> \tGreat. While I don't believe in creeping featureisim, I do think the\n> \trazor to slice a feature against, is whehter it is usefuel in more than\n> \ta narrow band of specilized applications. For example I defnately think\n> \twe should allow tems in the ORDER BY clause that are not in the\n> \tSELECT.\n\nYes, I agree we should allow ORDER BY not in the select list, but\nbecause it is not part of the standard, it remains on the TODO list.\n\nI am even removing the '?' from the end of the item:\n\n\t* Remove restriction that ORDER BY field must be in SELECT list\n\n\n-- \nBruce Momjian\nmaillist@candle.pha.pa.us\n", "msg_date": "Sun, 25 Jan 1998 18:33:20 -0500 (EST)", "msg_from": "Bruce Momjian <maillist@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] A small type extension example for the contrib\n\tdirectory (fwd)" } ]
[ { "msg_contents": "I have added cash_words_out() to pg_proc, but when I use it it crashes\nthe backend. Can someone fix it and send in a patch?\n\n-- \nBruce Momjian\nmaillist@candle.pha.pa.us\n", "msg_date": "Sun, 25 Jan 1998 20:07:04 -0500 (EST)", "msg_from": "Bruce Momjian <maillist@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "cash_words_out" } ]
[ { "msg_contents": "\nSince I'm not doing anything fancy with them, and so that it isn't too\ndifficult for those submitting patches, I have changed the cron entry such\nthat it creates a snapshot every night at 3am (EST)...\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sun, 25 Jan 1998 23:20:28 -0400 (AST)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "snapshots..." } ]
[ { "msg_contents": "On Sun, 25 Jan 1998, Joe Hellerstein wrote:\n\n> Did it -- thanks for the heads up.\n\n\tGreat, thanks...\n\n> Let us know if you hear about any cool gist utilization.\n\n\tBest way for feedback...anyone out there using this? :)\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Mon, 26 Jan 1998 00:26:06 -0400 (AST)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "Re: References from http://GiST.CS.Berkeley.EDU:8000/gist/... " } ]
[ { "msg_contents": "Yes, Marc, I too am having problems since the new protocol cleanup went\nin too.\n\nIt seems there is a new local hba_conf type, and I added that, but now\nit core dumps. I am stumped.\n\nI am glad someone has gone through this code and has given it a good\ncleaning, though.\n\nI am sure it is something simply overlooked.\n\n-- \nBruce Momjian\nmaillist@candle.pha.pa.us\n", "msg_date": "Sun, 25 Jan 1998 23:40:56 -0500 (EST)", "msg_from": "Bruce Momjian <maillist@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "new cvsup" }, { "msg_contents": "Hello,\n\n After a successful compile of the latest snapshot on an Alpha/Linux\nmachine, the initdb program dumps core, with the following message [I\nhope this helps]:\n\n[postgres@ac6 bin]$ ./initdb\ninitdb: using /usr/local/pgsql/lib/local1_template1.bki.source as input\nto create the template database.\ninitdb: using /usr/local/pgsql/lib/global1.bki.source as input to create\nthe global classes.\ninitdb: using /usr/local/pgsql/lib/pg_hba.conf.sample as the host-based\nauthentication control file.\n\nWe are initializing the database system with username postgres\n(uid=501).\nThis user will own all the files and must also own the server process.\n\ninitdb: creating template database in\n/usr/local/pgsql/data/base/template1\nRunning: postgres -boot -C -F -D/usr/local/pgsql/data -Q template1\nERROR: BuildFuncTupleDesc: function mkoidname(opaque, opaque) does not\nexist\nERROR: BuildFuncTupleDesc: function mkoidname(opaque, opaque) does not\nexist\ninitdb: could not create template database\ninitdb: cleaning up by wiping out /usr/local/pgsql/data/base/template1\n\n\nThanks,\nEdwin S. Ramirez\n\n", "msg_date": "Mon, 26 Jan 1998 09:08:25 +0000", "msg_from": "\"Edwin S. Ramirez\" <ramirez@doc.mssm.edu>", "msg_from_op": false, "msg_subject": "Problem with initdb in lastest Snapshot" } ]
[ { "msg_contents": "Hi all,\n\nI have been scanning the thread from Jan Vicherek who is having heavy\nbrowsing activity blocking updates.\n\nBasically there should be no need for read locks in postgresql,\nsince the before images are in the table until the next vacuum.\nTherefore a scheme could be implemented, where readers\nalways read data as it was at the time of their begin work statement.\n(The Oracle way)\n\nAn other alternative would be to implement a:\nset isolation to {dirty|uncommited} read; \nThis enables readers willing to take the risc of reading uncommitted\ndata \nto read without a read lock. This is often very useful. (The Informix\nand DB/2 way)\n\nIn the long run I think one of the above should be implemented (I prefer\nthe second,\nas it is less resource consuming, and easier to handle (e.g. with\nindexes)).\n\nAndreas\n", "msg_date": "Mon, 26 Jan 1998 11:14:49 +0100", "msg_from": "Zeugswetter Andreas DBT <Andreas.Zeugswetter@telecom.at>", "msg_from_op": true, "msg_subject": "Isolation Levels (Re: show stopper)" } ]
[ { "msg_contents": "\n> On Sat, 24 Jan 1998, Maarten Boekhold wrote:\n> \n> > Hi,\n> > \n> > I read this message on the debian development list. Thought it might be \n> > of interest to scrappy on the PostODBC thingie...\n> \n> \tDamn, to say I hate copyrights isn't saying enough :) Okay, I\n> guess the first thing to note is that PostODBC actually falls under the\n> LGPL vs the GPL, which appears to have slightly more lax restrictions on\n> how it gets included with other packages...\n> \n> \tNow, with that in mind, should we remove the PostODBC stuff from\n> the interfaces directory an dmove it to the contrib directory? Or remove\n> it all together? Or leave it where it is?\n> \n> > > Maarten\n> > \n> > Subject: Re: Copyright question: GPL patches for non-GPL packages\n> > \n> > > if you want to mix gpl'ed software with other software, that other\n> > > software's licence may not conflict with the gpl.\n> > > \n> > > for example, you can mix software with bsd style licencens (or x window\n> > > licence) with GPL'ed software, becuase these two licences do not\n> > > conflict. the mix will be under GPL'ed.\n\nAs the preamble to the LGPL says, LGPL and GPL are completely different --- they\nshare the same intention (to let people use software), but are completely\ndifferent licences. Do not interpret anything from GPL as applying to LGPL or\nvice versa. Read the licence which applies and work out what it says.\n\n\nAndrew\n\n----------------------------------------------------------------------------\nDr. Andrew C.R. Martin University College London\nEMAIL: (Work) martin@biochem.ucl.ac.uk (Home) andrew@stagleys.demon.co.uk\nURL: http://www.biochem.ucl.ac.uk/~martin\nTel: (Work) +44(0)171 419 3890 (Home) +44(0)1372 275775\n", "msg_date": "Mon, 26 Jan 1998 10:31:19 GMT", "msg_from": "Andrew Martin <martin@biochemistry.ucl.ac.uk>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: Copyright question: GPL patches for non-GPL\n\tpackages (fwd)" } ]
[ { "msg_contents": "> I found this patch in my mailbox. Is there any intestest in this, or is\n> it too site-specific?\n> \n> > \n> > Eze Ogwuma writes:\n> > > Bruce Momjian <maillist@candle.pha.pa.us> writes:\n> > > > Can you be specific? Something I can add to the TODO list.\n> > > \n> > > Database based access for users so that each user can be giving access\n> > > to a particular database only. More permissions for each databse user:\n> > > Create, Drop, Select, Insert etc. Possibly table based\n> > > authentification as well.\n> > \n> > I needed to do that for the web database that I'm setting up. We have\n> > 20000 users and each (potentially) needs a separate database which is\n> > only accessible to them. Rather than having 20000 lines in pg_hba.conf,\n> > I've patched Postgres so that the special token \"%username\" in the\n> > database field of pg_hba.conf allows access only to the username which\n> > is connecting. (I chose the leading \"%\" so that it couldn't clash with\n> > a real database name.) Since the patch is against 6.1 rather than\n> > 6.2beta, I hadn't made it public. Here it is in case it's of interest.\n> > \n\nYes please! I'd like to see this...\n\n\nAndrew\n\n----------------------------------------------------------------------------\nDr. Andrew C.R. Martin    University College London\nEMAIL: (Work) martin@biochem.ucl.ac.uk (Home) andrew@stagleys.demon.co.uk\nURL: http://www.biochem.ucl.ac.uk/~martin\nTel: (Work) +44(0)171 419 3890 (Home) +44(0)1372 275775\n", "msg_date": "Mon, 26 Jan 1998 10:40:32 GMT", "msg_from": "Andrew Martin <martin@biochemistry.ucl.ac.uk>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: [QUESTIONS] How is PostgreSQL doing?" }, { "msg_contents": "> \n> > I found this patch in my mailbox. Is there any intestest in this, or is\n> > it too site-specific?\n> > \n> > > \n> > > Eze Ogwuma writes:\n> > > > Bruce Momjian <maillist@candle.pha.pa.us> writes:\n> > > > > Can you be specific? Something I can add to the TODO list.\n> > > > \n> > > > Database based access for users so that each user can be giving access\n> > > > to a particular database only. More permissions for each databse user:\n> > > > Create, Drop, Select, Insert etc. Possibly table based\n> > > > authentification as well.\n> > > \n> > > I needed to do that for the web database that I'm setting up. We have\n> > > 20000 users and each (potentially) needs a separate database which is\n> > > only accessible to them. Rather than having 20000 lines in pg_hba.conf,\n> > > I've patched Postgres so that the special token \"%username\" in the\n> > > database field of pg_hba.conf allows access only to the username which\n> > > is connecting. (I chose the leading \"%\" so that it couldn't clash with\n> > > a real database name.) Since the patch is against 6.1 rather than\n> > > 6.2beta, I hadn't made it public. Here it is in case it's of interest.\n> > > \n> \n> Yes please! I'd like to see this...\n\nI think it may already be there, but with no documentation in\npg_hba.conf:\n\nSee backend/libpq/hba.c:\n\n Special case: For usermap \"sameuser\", don't look in the usermap\n file. That's an implied map where \"pguser\" must be identical to\n \"ident_username\" in order to be authorized.\n\n\n-- \nBruce Momjian\nmaillist@candle.pha.pa.us\n", "msg_date": "Mon, 26 Jan 1998 10:08:07 -0500 (EST)", "msg_from": "Bruce Momjian <maillist@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [QUESTIONS] How is PostgreSQL doing?" } ]
[ { "msg_contents": "The following is Informix behavior:\n\ninformix@zeus:/usr/informix72> dbaccess - -\n> create database nulltest in datadbs;\nDatabase created.\n\n> create table t1 (a int4, b char(2), c char(2));\n 201: A syntax error has occurred.\nError in line 1\nNear character position 20\n\n> create table t1 (a int, b char(2), c char(2));\nTable created.\n\n> insert into t1 (a,c) values (1,'x');\n1 row(s) inserted.\n> insert into t1 (a,c) values (2,'x');\n1 row(s) inserted.\n> insert into t1 (a,c) values (3,'z');\n1 row(s) inserted.\n> insert into t1 (a,c) values (2,'x');\n1 row(s) inserted.\n> select * from t1;\n a b c\n 1 x\n 2 x\n 3 z\n 2 x\n\n4 row(s) retrieved.\n> select b,c,sum(a) from t1 group by b,c;\nb c (sum)\n\n x 5\n z 3\n\n2 row(s) retrieved.\n> select b,c,sum(a) from t1 group by b,c order by c;\nb c (sum)\n\n x 5\n z 3\n\n2 row(s) retrieved.\n>\n\n\nAndreas\n", "msg_date": "Mon, 26 Jan 1998 12:39:07 +0100", "msg_from": "Zeugswetter Andreas DBT <Andreas.Zeugswetter@telecom.at>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Group By, NULL values and inconsistent behaviour." } ]
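The Informix result above follows from grouping treating all NULLs in a column as belonging to one group, even though NULL = NULL is not true as a comparison. A small standalone C illustration (invented helper names, no backend code) that reproduces the two-row result for the sample table t1, where b is NULL in every row:

```c
#include <string.h>
#include <assert.h>

/* One row of the sample table t1; a NULL b is represented by a null
 * pointer, and c is assumed non-NULL for brevity. */
struct row { int a; const char *b; const char *c; };

/* Grouping comparison: two NULLs fall into the same group, unlike the
 * '=' operator, where NULL = NULL does not evaluate to true. */
static int keys_equal(const struct row *x, const struct row *y)
{
    if ((x->b == NULL) != (y->b == NULL))
        return 0;
    if (x->b != NULL && strcmp(x->b, y->b) != 0)
        return 0;
    return strcmp(x->c, y->c) == 0;
}

/* sum(a) over the rows whose (b, c) grouping key matches 'key'. */
static int group_sum(const struct row *rows, int n, const struct row *key)
{
    int i, total = 0;

    for (i = 0; i < n; i++)
        if (keys_equal(&rows[i], key))
            total += rows[i].a;
    return total;
}

/* The four inserted rows: (1,NULL,'x'), (2,NULL,'x'), (3,NULL,'z'),
 * (2,NULL,'x'); grouping by (b, c) reduces to grouping by c here. */
static int demo_group_sum(const char *c_key)
{
    static const struct row t1[] = {
        {1, NULL, "x"},
        {2, NULL, "x"},
        {3, NULL, "z"},
        {2, NULL, "x"},
    };
    struct row key;

    key.a = 0;
    key.b = NULL;
    key.c = c_key;
    return group_sum(t1, 4, &key);
}
```

With NULLs grouped together, the 'x' rows sum to 5 and the 'z' row to 3, matching Informix; keeping each NULL in its own group would instead give the three rows Darren reports from the current postgres code.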
[ { "msg_contents": ">> In the second query, the first two rows have been grouped, but\nshouldn't\n>> they not be since b is NULL? I thought that NULL != NULL?\n\nNote that:\nNULL <> NULL\t\tis false\nNULL = NULL\t\tis false\n\n> select * from t1 x, t1 y where x.b <> y.b;\n a b c a b c\nNo rows found.\n> select * from t1 x, t1 y where x.b = y.b;\n a b c a b c\nNo rows found.\n> select * from t1 x, t1 y where not x.b = y.b;\n a b c a b c\nNo rows found.\n> select * from t1 x, t1 y where not x.b <> y.b;\n a b c a b c\nNo rows found.\n> select * from t1 where a = b;\n a b c\nNo rows found.\n> select * from t1 where a <> b;\n a b c\nNo rows found.\n>\n\nThe false seems not to be commutative.\nFeel free to ask for more\nAndreas\n \n", "msg_date": "Mon, 26 Jan 1998 12:51:54 +0100", "msg_from": "Zeugswetter Andreas DBT <Andreas.Zeugswetter@telecom.at>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Group By, NULL values and inconsistent behaviour." } ]
[ { "msg_contents": "I wrote:\n>>> In the second query, the first two rows have been grouped, but\nshouldn't\n>>> they not be since b is NULL? I thought that NULL != NULL?\n>\n>Note that:\n>NULL <> NULL\t\tis false\n>NULL = NULL\t\t\tis false\n\nThis was wrong, I digged up some more docu and found:\n<something> op NULL\t\tis unknown note that there exists a\nthird boolean state (name it ?)\n`Unknown values occur when part of an expression that uses an arithmetic\noperator is null.`\nIf a whole where expression evaluates to unknown the row is not chosen.\nnot:\tt -> f, f -> t, ? -> ?\nand:\t? and t -> ?, ? and f -> f, ? and ? -> ?\nor:\tt or ? -> t, f or ? -> ?, ? or ? -> ?\n\nOrder by: `Null values are ordered as less than values that are not\nnull.` \nbut\n\tNULL {>|<|<=|>=} value\t\tis unknown\n\nAndreas \n\n", "msg_date": "Mon, 26 Jan 1998 14:15:58 +0100", "msg_from": "Zeugswetter Andreas DBT <Andreas.Zeugswetter@telecom.at>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] A small type extension example for the contrib dire\n\tctory (fwd)" } ]
[ { "msg_contents": "> \n> The following is Informix behavior:\n> \n> informix@zeus:/usr/informix72> dbaccess - -\n> > create database nulltest in datadbs;\n> Database created.\n> \n> > create table t1 (a int, b char(2), c char(2));\n> Table created.\n> \n> > insert into t1 (a,c) values (1,'x');\n> 1 row(s) inserted.\n> > insert into t1 (a,c) values (2,'x');\n> 1 row(s) inserted.\n> > insert into t1 (a,c) values (3,'z');\n> 1 row(s) inserted.\n> > insert into t1 (a,c) values (2,'x');\n> 1 row(s) inserted.\n> > select * from t1;\n> a b c\n> 1 x\n> 2 x\n> 3 z\n> 2 x\n> \n> 4 row(s) retrieved.\n> > select b,c,sum(a) from t1 group by b,c;\n> b c (sum)\n> \n> x 5\n> z 3\n> \n> 2 row(s) retrieved.\n\nHere is where postgres seems to differ. Seems postgres is missing\nan implicit sort so that the grouping is done properly.\n\nPostgres will return _three_ rows...\n\nb c (sum)\n x 3\n z 3\n x 2\n\n\n> > select b,c,sum(a) from t1 group by b,c order by c;\n> b c (sum)\n> \n> x 5\n> z 3\n> \n> 2 row(s) retrieved.\n\nEven with the order by, postgres still returns _three_ rows...\n\nb c (sum)\n x 3\n x 2\n z 3\n \nFor now, ignore the patch I sent. Appears from Andreas demo that the\ncurrent postgres code will follow the Informix style with regard to\ngrouping columns with NULL values. Now that I really think about it,\nit does make more sense.\n\nBut there is still a problem.\n\nDoes the SQL standard say anything about an implied sort when\ngrouping or is it up to the user to include an ORDER BY clause?\n\ndarrenk\n", "msg_date": "Mon, 26 Jan 1998 10:36:04 -0500", "msg_from": "darrenk@insightdist.com (Darren King)", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Group By, NULL values and inconsistent behaviour." }, { "msg_contents": "On Mon, 26 Jan 1998, Darren King wrote:\n\n>> The following is Informix behavior:\n>> \n>> > select b,c,sum(a) from t1 group by b,c;\n>> b c (sum)\n>> \n>> x 5\n>> z 3\n>> \n>> 2 row(s) retrieved.\n>\n>Here is where postgres seems to differ. 
Seems postgres is missing\n>an implicit sort so that the grouping is done properly.\n>\n>Postgres will return _three_ rows...\n>\n>b c (sum)\n> x 3\n> z 3\n> x 2\n\nI'm running the current cvs and it gives me this.\n\nselect b,c,sum(a) from t1 group by b,c;\nb|c |sum\n-+--+---\n |x | 1\n |x | 2\n |z | 3\n |x | 2\n(4 rows)\n\n>> > select b,c,sum(a) from t1 group by b,c order by c;\n>> b c (sum)\n>> \n>> x 5\n>> z 3\n>> \n>> 2 row(s) retrieved.\n>\n>Even with the order by, postgres still returns _three_ rows...\n>\n>b c (sum)\n> x 3\n> x 2\n> z 3\n\nselect b,c,sum(a) from t1 group by b,c order by c;\nb|c |sum\n-+--+---\n |x | 1\n |x | 2\n |x | 2\n |z | 3\n(4 rows) \n\n>For now, ignore the patch I sent. Appears from Andreas demo that the\n>current postgres code will follow the Informix style with regard to\n>grouping columns with NULL values. Now that I really think about it,\n>it does make more sense.\n\nI think I saw the patch committed this morning...?\n\n\nMike\n[ Michael J. Maravillo Philippines Online ]\n[ System Administrator PGP KeyID: 470AED9D InfoDyne, Incorporated ]\n[ http://www.philonline.com/~mmj/ (632) 890-0204 ]\n\n", "msg_date": "Tue, 27 Jan 1998 00:23:04 +0800 (GMT+0800)", "msg_from": "\"Michael J. Maravillo\" <mmj@philonline.com>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Group By, NULL values and inconsistent behaviour." }, { "msg_contents": "On Tue, 27 Jan 1998, Michael J. Maravillo wrote:\n\n> >For now, ignore the patch I sent. Appears from Andreas demo that the\n> >current postgres code will follow the Informix style with regard to\n> >grouping columns with NULL values. Now that I really think about it,\n> >it does make more sense.\n> \n> I think I saw the patch committed this morning...?\n\n\tYesterday evening, actually...should we back it out, or leave it\nas is? 
Is the old way more correct than the new?\n\n\n", "msg_date": "Mon, 26 Jan 1998 11:30:46 -0500 (EST)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Group By, NULL values and inconsistent behaviour." } ]
[ { "msg_contents": "OK, my deadlock code is ready for me to test, as soon as we can resolve\nthe protocol crashes that started yesterday. I assume the author been\nnotified?\n\n-- \nBruce Momjian\nmaillist@candle.pha.pa.us\n", "msg_date": "Mon, 26 Jan 1998 15:21:23 -0500 (EST)", "msg_from": "Bruce Momjian <maillist@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "deadlock ready" }, { "msg_contents": "On Mon, 26 Jan 1998, Bruce Momjian wrote:\n\n> OK, my deadlock code is ready for me to test, as soon as we can resolve\n> the protocol crashes that started yesterday. I assume the author been\n> notified?\n\n\tI sent the message off to Phil yesterday...\n\n\n", "msg_date": "Mon, 26 Jan 1998 15:38:30 -0500 (EST)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] deadlock ready" } ]
[ { "msg_contents": "Umm - hate to be annoying - but I haven't seen any comments about this bug\nfor a bit... Any news? :)\n(does varchar still corrupt tables?)\n\nG'day, eh? :)\n\t- Teunis\n\n", "msg_date": "Mon, 26 Jan 1998 13:24:42 -0700 (MST)", "msg_from": "teunis <teunis@mauve.computersupportcentre.com>", "msg_from_op": true, "msg_subject": "about that varchar() problem" }, { "msg_contents": "> \n> Umm - hate to be annoying - but I haven't seen any comments about this bug\n> for a bit... Any news? :)\n> (does varchar still corrupt tables?)\n> \n\nTotally fixed, and now variable-length storage size.\n\n-- \nBruce Momjian\nmaillist@candle.pha.pa.us\n", "msg_date": "Mon, 26 Jan 1998 16:00:54 -0500 (EST)", "msg_from": "Bruce Momjian <maillist@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] about that varchar() problem" } ]
[ { "msg_contents": "\nWhen the postmaster crashes, it leaves the /tmp/.s.pgsql file in /tmp.\nIs there a way to auto-remove it after a postmaster crash?\n\n-- \nBruce Momjian\nmaillist@candle.pha.pa.us\n", "msg_date": "Mon, 26 Jan 1998 16:28:21 -0500 (EST)", "msg_from": "Bruce Momjian <maillist@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "postmaster crash and .s.pgsql file" }, { "msg_contents": "On Mon, 26 Jan 1998, Bruce Momjian wrote:\n\n> \n> When the postmaster crashes, it leaves the /tmp/.s.pgsql file in /tmp.\n> Is there a way to auto-remove it after a postmaster crash?\n\n\tIf we wrote the process id to the file, if the file existed, we\ncould read the process id and do a 'kill(pid, 0)', it \"determines if a\nspecific process still exists\"...\n\n\tI'll try and look at it tonight, along with syslog() logging\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Mon, 26 Jan 1998 18:31:15 -0400 (AST)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] postmaster crash and .s.pgsql file" }, { "msg_contents": "On Mon, 26 Jan 1998, The Hermit Hacker wrote:\n\n> On Mon, 26 Jan 1998, Bruce Momjian wrote:\n> \n> > \n> > When the postmaster crashes, it leaves the /tmp/.s.pgsql file in /tmp.\n> > Is there a way to auto-remove it after a postmaster crash?\n> \n> \tIf we wrote the process id to the file, if the file existed, we\n> could read the process id and do a 'kill(pid, 0)', it \"determines if a\n> specific process still exists\"...\n> \n> \tI'll try and look at it tonight, along with syslog() logging\n\n\tOops...I screwed up *rofl* I wasn't thinking when I wrote\nthis...I was thining that /tmp/.s.PGSQL was a lock file...I forgot it was\na socket :( forget me thing about the kill :) I haven't got a clue how\nto detect whether it is active or not :(\n\n\n\nMarc G. 
Fournier \nSystems Administrator @ hub.org \nprimary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Mon, 26 Jan 1998 19:24:06 -0400 (AST)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] postmaster crash and .s.pgsql file" }, { "msg_contents": "\n\nOn Mon, 26 Jan 1998, Bruce Momjian wrote:\n\n: \n: When the postmaster crashes, it leaves the /tmp/.s.pgsql file in /tmp.\n: Is there a way to auto-remove it after a postmaster crash?\n: \n: -- \n: Bruce Momjian\n: maillist@candle.pha.pa.us\n: \n\nI found that when using \"-S\" with postmaster, the file doesn't get\ncreated at all. I will look at removing the file on startup when I'm in\nthere.\n\n\n\n-James\n\n", "msg_date": "Mon, 26 Jan 1998 18:32:13 -0500 (EST)", "msg_from": "James Hughes <jamesh@interpath.com>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] postmaster crash and .s.pgsql file" }, { "msg_contents": "Forget my complaint about local domain socket files not being removed. \nIt was a socket packet bug from someone else, and it is fixed now.\n\n\n-- \nBruce Momjian\nmaillist@candle.pha.pa.us\n", "msg_date": "Mon, 26 Jan 1998 23:27:30 -0500 (EST)", "msg_from": "Bruce Momjian <maillist@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "lock file" }, { "msg_contents": "\n On Mon, 26 Jan 1998, Bruce Momjian wrote:\n\n : \n : When the postmaster crashes, it leaves the /tmp/.s.pgsql file in /tmp.\n : Is there a way to auto-remove it after a postmaster crash?\n : \n : -- \n : Bruce Momjian\n : maillist@candle.pha.pa.us\n : \n\n I found that when using \"-S\" with postmaster, the file doesn't get\n created at all. 
\n\nI have submitted a patch for this before,\nI think it got applied, maybe it has been \n(accidentally) reverted since.\n(patch below)\n\nThe same goes for Bruce's problem with\nsocket name being 1 char short.\n\n I will look at removing the file on startup when I'm in there.\n\nDon't, it gets removed at shutdown except when crashing.\nRemoving at startup opens a whole new can of worms.\n(You must know that a postmaster is not already running.)\n\n regards,\n-- \n---------------------------------------------\nGöran Thyni, sysadm, JMS Bildbasen, Kiruna\n\n\n------------------ snip -----------------------------------\n\ndiff -c src/backend/postmaster/postmaster.c.orig src/backend/postmaster/postmaster.c\n*** /databaser/pg-sup/pgsql/src/backend/postmaster/postmaster.c.orig\tMon Jan 26 08:46:08 1998\n--- /databaser/pg-sup/pgsql/src/backend/postmaster/postmaster.c\tTue Jan 27 09:35:21 1998\n***************\n*** 482,488 ****\n \t{\n \t\tfprintf(stderr, \"%s: \", progname);\n \t\tperror(\"cannot disassociate from controlling TTY\");\n! \t\texit(1);\n \t}\n #endif\n \ti = open(NULL_DEV, O_RDWR);\n--- 482,488 ----\n \t{\n \t\tfprintf(stderr, \"%s: \", progname);\n \t\tperror(\"cannot disassociate from controlling TTY\");\n! \t\t_exit(1);\n \t}\n #endif\n \ti = open(NULL_DEV, O_RDWR);\n\n\n", "msg_date": "27 Jan 1998 08:37:04 -0000", "msg_from": "Goran Thyni <goran@bildbasen.se>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] postmaster crash and .s.pgsql file" }, { "msg_contents": "\nSorry,\nlast patch wrong.\nThis is the right one:\n\ndiff -c src/backend/postmaster/postmaster.c.orig src/backend/postmaster/postmaster.c\n*** src/backend/postmaster/postmaster.c.orig\tMon Jan 26 08:46:08 1998\n--- src/backend/postmaster/postmaster.c\tTue Jan 27 09:37:45 1998\n***************\n*** 473,479 ****\n \tint\t\t\ti;\n \n \tif (fork())\n! 
\t\texit(0);\n /* GH: If there's no setsid(), we hopefully don't need silent mode.\n * Until there's a better solution.\n */\n--- 473,479 ----\n \tint\t\t\ti;\n \n \tif (fork())\n! \t\t_exit(0);\n /* GH: If there's no setsid(), we hopefully don't need silent mode.\n * Until there's a better solution.\n */\n\n", "msg_date": "27 Jan 1998 08:39:42 -0000", "msg_from": "Goran Thyni <goran@bildbasen.se>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] postmaster crash and .s.pgsql file" }, { "msg_contents": "\n\nOn 27 Jan 1998, Goran Thyni wrote:\n\n: \n: On Mon, 26 Jan 1998, Bruce Momjian wrote:\n: \n: : \n: : When the postmaster crashes, it leaves the /tmp/.s.pgsql file in /tmp.\n: : Is there a way to auto-remove it after a postmaster crash?\n: : \n\n<snip>\n\n: I will look at removing the file on startup when I'm in there.\n: \n: Don't, it gets removed at shutdown except when crashing.\n: Removing at startup opens a whole new can of worms.\n: (You must no postmaster is not already running.)\n: \n\nHow about in postmaster.c (arround line 427), when starting up...\n\n\t1.) Check for the existence of a pid file.\n\n\t2.) If one is there, read the pid and see if a back end is alive.\n\n\t3.) If so, warn the user and exit.\n\n\t4.) If not, check for and cleanup any leftover files.\n\n\t5.) Continue with startup process.\n\n\t\t...wouldn't this work OK? \n\n\n\n-James\n\n", "msg_date": "Tue, 27 Jan 1998 06:14:14 -0500 (EST)", "msg_from": "James Hughes <jamesh@interpath.com>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] postmaster crash and .s.pgsql file" }, { "msg_contents": "> \n> \n> On Mon, 26 Jan 1998, Bruce Momjian wrote:\n> \n> : \n> : When the postmaster crashes, it leaves the /tmp/.s.pgsql file in /tmp.\n> : Is there a way to auto-remove it after a postmaster crash?\n> : \n> : -- \n> : Bruce Momjian\n> : maillist@candle.pha.pa.us\n> : \n> \n> I found that when using \"-S\" with postmaster, the file doesn't get\n> created at all. 
\n> \n> I have submitted a patch for this before,\n> I think a got applied, maybe it has been \n> (accidently) reverted since.\n> (patch below)\n> \n> The same goes for Bruce's problem with\n> socket name being 1 char short.\n> \n> I will look at removing the file on startup when I'm in there.\n> \n> Don't, it gets removed at shutdown except when crashing.\n> Removing at startup opens a whole new can of worms.\n> (You must no postmaster is not already running.)\n\nYes. All is working, thanks.\n\n-- \nBruce Momjian\nmaillist@candle.pha.pa.us\n", "msg_date": "Tue, 27 Jan 1998 07:21:12 -0500 (EST)", "msg_from": "Bruce Momjian <maillist@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] postmaster crash and .s.pgsql file" }, { "msg_contents": "\n : Removing at startup opens a whole new can of worms.\n : (You must no postmaster is not already running.)\n\n How about in postmaster.c (arround line 427), when starting up...\n\t 1.) Check for the existence of a pid file.\n\t 2.) If one is there, read the pid and see if a back end is alive.\n\t 3.) If so, warn the user and exit.\n\t 4.) If not, check for and cleanup any leftover files.\n\t 5.) Continue with startup process.\n\t\t ...wouldn't this work OK? 
\n\nIt would,\nbut it would be complex.\n\nThe easy way to check for a pid is \"kill -0 PID\".\nBut...\nIt only works if we are running under the same user \nas the server (or root).\n\nBesides,\n we do not know that a process running as that\npid really is a postmaster w/o checking the process\ntable - and that is \"portability hell\".\n\nBesides 2,\nhow do get the pid in the first place?\nIf you code it into the socket, like:\n/tmp/.s.PORTNR.PID\nhow would the clients find the socket to connect to?\n(they do not know the pid of the server)\n\nBetter leave it until after 6.3 (at least).\n\n best regards,\n-- \n---------------------------------------------\nG�ran Thyni, sysadm, JMS Bildbasen, Kiruna\n\n", "msg_date": "27 Jan 1998 14:26:39 -0000", "msg_from": "Goran Thyni <goran@bildbasen.se>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] postmaster crash and .s.pgsql file" }, { "msg_contents": "> \n> \n> Sorry,\n> last patch wrong.\n> This is the right one:\n\nApplied. I don't know what other patch you are mentioning.\n\n> \n> diff -c src/backend/postmaster/postmaster.c.orig src/backend/postmaster/postmaster.c\n> *** src/backend/postmaster/postmaster.c.orig\tMon Jan 26 08:46:08 1998\n> --- src/backend/postmaster/postmaster.c\tTue Jan 27 09:37:45 1998\n> ***************\n> *** 473,479 ****\n> \tint\t\t\ti;\n> \n> \tif (fork())\n> ! \t\texit(0);\n> /* GH: If there's no setsid(), we hopefully don't need silent mode.\n> * Until there's a better solution.\n> */\n> --- 473,479 ----\n> \tint\t\t\ti;\n> \n> \tif (fork())\n> ! 
\t\t_exit(0);\n> /* GH: If there's no setsid(), we hopefully don't need silent mode.\n> * Until there's a better solution.\n> */\n> \n> \n\n\n-- \nBruce Momjian\nmaillist@candle.pha.pa.us\n", "msg_date": "Tue, 27 Jan 1998 10:36:05 -0500 (EST)", "msg_from": "Bruce Momjian <maillist@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] postmaster crash and .s.pgsql file" }, { "msg_contents": "On Tue, 27 Jan 1998, James Hughes wrote:\n\n> On 27 Jan 1998, Goran Thyni wrote:\n> \n> : \n> : On Mon, 26 Jan 1998, Bruce Momjian wrote:\n> : \n> : : \n> : : When the postmaster crashes, it leaves the /tmp/.s.pgsql file in /tmp.\n> : : Is there a way to auto-remove it after a postmaster crash?\n> : : \n> \n> <snip>\n> \n> : I will look at removing the file on startup when I'm in there.\n> : \n> : Don't, it gets removed at shutdown except when crashing.\n> : Removing at startup opens a whole new can of worms.\n> : (You must no postmaster is not already running.)\n> : \n> \n> How about in postmaster.c (arround line 427), when starting up...\n> \n> \t1.) Check for the existence of a pid file.\n> \n> \t2.) If one is there, read the pid and see if a back end is alive.\n> \n> \t3.) If so, warn the user and exit.\n> \n> \t4.) If not, check for and cleanup any leftover files.\n> \n> \t5.) Continue with startup process.\n> \n> \t\t...wouldn't this work OK? \n\n\tA thought. Why not change the startup routine such that instead\nof creating /tmp/.s.PGSQL.5432, create a subdirectory that contains both\nthe socket (.socket) and the PID file? Given time, I could see us adding\nin some stats to the postmaster process, similar to named, where you\nSIGUSR2 the process and it dumps a status file and that too could get\ndumped there.\n\n\tJust a thought...\n\nMarc G. 
Fournier \nSystems Administrator @ hub.org \nprimary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Tue, 27 Jan 1998 23:31:14 -0400 (AST)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] postmaster crash and .s.pgsql file" }, { "msg_contents": "> \n> \tA thought. Why not change the startup routine such that instead\n> of creating /tmp/.s.PGSQL.5432, create a subdirectory that contains both\n> the socket (.socket) and the PID file? Given time, I could see us adding\n> in some stats to the postmaster process, similar to named, where you\n> SIGUSR2 the process and it dumps a status file and that too could get\n> dumped there.\n> \n> \tJust a thought...\n\nLet's see what people report. With the bug fixed, I never see leftover\n/tmp/.s.pgsql files.\n\n-- \nBruce Momjian\nmaillist@candle.pha.pa.us\n", "msg_date": "Tue, 27 Jan 1998 22:37:03 -0500 (EST)", "msg_from": "Bruce Momjian <maillist@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] postmaster crash and .s.pgsql file" }, { "msg_contents": "\n\nOn Tue, 27 Jan 1998, The Hermit Hacker wrote:\n\n: On Tue, 27 Jan 1998, James Hughes wrote:\n: \n: > On 27 Jan 1998, Goran Thyni wrote:\n: > \n: > : \n: > : On Mon, 26 Jan 1998, Bruce Momjian wrote:\n: > : \n: > : : \n: > : : When the postmaster crashes, it leaves the /tmp/.s.pgsql file in /tmp.\n: > : : Is there a way to auto-remove it after a postmaster crash?\n: > : : \n: > \n: > <snip>\n: > \n: > : I will look at removing the file on startup when I'm in there.\n: > : \n: > : Don't, it gets removed at shutdown except when crashing.\n: > : Removing at startup opens a whole new can of worms.\n: > : (You must no postmaster is not already running.)\n: > : \n: > \n: > How about in postmaster.c (arround line 427), when starting up...\n: > \n: > \t1.) Check for the existence of a pid file.\n: > \n: > \t2.) 
If one is there, read the pid and see if a back end is alive.\n: > \n: > \t3.) If so, warn the user and exit.\n: > \n: > \t4.) If not, check for and cleanup any leftover files.\n: > \n: > \t5.) Continue with startup process.\n: > \n: > \t\t...wouldn't this work OK? \n: \n: \tA thought. Why not change the startup routine such that instead\n: of creating /tmp/.s.PGSQL.5432, create a subdirectory that contains both\n: the socket (.socket) and the PID file? Given time, I could see us adding\n: in some stats to the postmaster process, similar to named, where you\n: SIGUSR2 the process and it dumps a status file and that too could get\n: dumped there.\n: \n: \tJust a thought...\n: \n\nI would opt for /var/run to store the pid files and have the name set to\npgsql.$PORT. A \".pgsql\" subdirectory in /tmp would be nice to store all\nthe sockets. You mentioned syslog capability in a previous message\nand maybe an rc file is needed too...\n\nI'm with Goran though, we should save these for one of the next\n(6.3.[1-3]) releases.\n\n\n\n-James\n\n", "msg_date": "Wed, 28 Jan 1998 08:24:40 -0500 (EST)", "msg_from": "James Hughes <jamesh@interpath.com>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] postmaster crash and .s.pgsql file" }, { "msg_contents": "On Wed, 28 Jan 1998, James Hughes wrote:\n\n> \n> I would opt for /var/run to store the pid files and have the name set to\n\n\tThat would assume that postmaster runs as root, which is not\nallowed...has to be in /tmp somewhere\n\n> pgsql.$PORT. A \".pgsql\" subdirectory in /tmp would be nice to store all\n> the sockets. You mentioned syslog capability in a previous message\n> and maybe an rc file is needed too...\n> \n> I'm with Goran though, we should save these for one of the next\n> (6.3.[1-3]) releases.\n\n\tMakes sense\n\nMarc G. 
Fournier \nSystems Administrator @ hub.org \nprimary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Wed, 28 Jan 1998 19:58:54 -0400 (AST)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] postmaster crash and .s.pgsql file" }, { "msg_contents": "\n > I would opt for /var/run to store the pid files and have the name set to\n\n\t That would assume that postmaster runs as root, which is not\n allowed...has to be in /tmp somewhere\n\nMaybe both should be under /usr/local/pgsql\nsomewhere, so they will not be removed by any \n'/tmp'-clean-up-scripts.\n\nMaybe we should move the socket there before 6.3.\nI do not have the necessary time right now,\nbut I will look into it unless someone beats \nme to the punch.\n\n best regards,\n-- \n---------------------------------------------\nGöran Thyni, sysadm, JMS Bildbasen, Kiruna\n\n", "msg_date": "29 Jan 1998 08:59:27 -0000", "msg_from": "Goran Thyni <goran@bildbasen.se>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] postmaster crash and .s.pgsql file" }, { "msg_contents": "On 29 Jan 1998, Goran Thyni wrote:\n\n> \n> > I would opt for /var/run to store the pid files and have the name set to\n> \n> \t That would assume that postmaster runs as root, which is not\n> allowed...has to be in /tmp somewhere\n> \n> Maybe both should be under /usr/local/pgsql\n> somewhere, so they will not be removed by any \n> '/tmp'-clean-up-scripts.\n\n\tI don't agree with this either.../tmp is world\nreadable/writable...what happens if 'joe blow user' decides for whatever\nreason to initdb a database in his personal directory space and run a\npostmaster process of his own? 
(S)he'd be running the same system binary,\njust under her own userid...\n\n\n", "msg_date": "Thu, 29 Jan 1998 08:03:45 -0500 (EST)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] postmaster crash and .s.pgsql file" }, { "msg_contents": " > I would opt for /var/run to store the pid files and have the name set to\n\n\t That would assume that postmaster runs as root, which is not\n allowed...has to be in /tmp somewhere\n\nNo. Make /var/run writable by some group (e.g., group pidlog) and put\npostgres (and other things like root or daemon or ..., whatever needs\nto log pid files) in that group.\n\n/var/run really is where a pid file should be. I submitted a patch\nthat would do this some time ago. I'll resend it if there is\ninterest.\n\nCheers,\nBrook\n", "msg_date": "Thu, 29 Jan 1998 08:06:57 -0700 (MST)", "msg_from": "Brook Milligan <brook@trillium.NMSU.Edu>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] postmaster crash and .s.pgsql file" }, { "msg_contents": "> \n> > I would opt for /var/run to store the pid files and have the name set to\n> \n> \t That would assume that postmaster runs as root, which is not\n> allowed...has to be in /tmp somewhere\n> \n> No. Make /var/run writable by some group (e.g., group pidlog) and put\n> postgres (and other things like root or daemon or ..., whatever needs\n> to log pid files) in that group.\n> \n> /var/run really is where a pid file should be. I submitted a patch\n> that would do this some time ago. I'll resend it if there is\n> interest.\n\nWe can't expect the user to be able to change /var/run permissions. \nMust be in pgsql/ or /tmp.\n\n-- \nBruce Momjian\nmaillist@candle.pha.pa.us\n", "msg_date": "Thu, 29 Jan 1998 11:50:48 -0500 (EST)", "msg_from": "Bruce Momjian <maillist@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] postmaster crash and .s.pgsql file" }, { "msg_contents": "> No. 
Make /var/run writable by some group (e.g., group pidlog) and put\n> postgres (and other things like root or daemon or ..., whatever needs\n> to log pid files) in that group.\n\n We can't expect the user to be able to change /var/run permissions. \n Must be in pgsql/ or /tmp.\n\nNo, \"normal\" users shouldn't be allowed to do so, obviously. But, are\nthere real systems in which a database maintainer (i.e., user\npostgres) cannot cooperate with the system admin (i.e., user root) to\naccomplish this? In practice, is it really envisioned that postgres\nshould be _so_ distinct from the system? For example, don't most\npeople run the postmaster from the system startup scripts, and isn't\nthat the same thing? How did those commands get inserted into the\nstartup scripts if not by root?\n\nCheers,\nBrook\n", "msg_date": "Thu, 29 Jan 1998 12:10:14 -0700 (MST)", "msg_from": "Brook Milligan <brook@trillium.NMSU.Edu>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] postmaster crash and .s.pgsql file" }, { "msg_contents": "On Thu, 29 Jan 1998, Brook Milligan wrote:\n\n> No, \"normal\" users shouldn't be allowed to do so, obviously. But, are\n> there real systems in which a database maintainer (i.e., user\n> postgres) cannot cooperate with the system admin (i.e., user root) to\n> accomplish this? In practice, is it really envisioned that postgres\n> should be _so_ distinct from the system? For example, don't most\n> people run the postmaster from the system startup scripts, and isn't\n> that the same thing? 
How did those commands get inserted into the\n> startup scripts if not by root?\n\n\tI do not feel that it is appropriate for a non-root program (which\nPostgreSQL is) to require a systems administrator to make permissions\nrelated changed to a directory for it to run, period.\n\n\n", "msg_date": "Thu, 29 Jan 1998 15:10:15 -0500 (EST)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] postmaster crash and .s.pgsql file" }, { "msg_contents": "On Thu, 29 Jan 1998, The Hermit Hacker wrote:\n\n> On Thu, 29 Jan 1998, Brook Milligan wrote:\n> \n> > No, \"normal\" users shouldn't be allowed to do so, obviously. But, are\n> > there real systems in which a database maintainer (i.e., user\n> > postgres) cannot cooperate with the system admin (i.e., user root) to\n> > accomplish this? In practice, is it really envisioned that postgres\n> > should be _so_ distinct from the system? For example, don't most\n> > people run the postmaster from the system startup scripts, and isn't\n> > that the same thing? How did those commands get inserted into the\n> > startup scripts if not by root?\n> \n> \tI do not feel that it is appropriate for a non-root program (which\n> PostgreSQL is) to require a systems administrator to make permissions\n> related changed to a directory for it to run, period.\n> \n> \n> \nSpeaking of feelings, I'm not especially happy about allowing any old\nuser to trash a key file because it's located in a globally writable\ndirectory.\n\nWould setting the sticky bit on the permissions of the /tmp directory\nhelp?\n\nMarc Zuckman\nmarc@fallon.classyad.com\n\n_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\n_ Visit The Home and Condo MarketPlace\t\t _\n_ http://www.ClassyAd.com\t\t\t _\n_\t\t\t\t\t\t\t _\n_ FREE basic property listings/advertisements and searches. 
_\n_\t\t\t\t\t\t\t _\n_ Try our premium, yet inexpensive services for a real\t _\n_ selling or buying edge!\t\t\t\t _\n_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\n\n", "msg_date": "Thu, 29 Jan 1998 15:58:21 -0500 (EST)", "msg_from": "Marc Howard Zuckman <marc@fallon.classyad.com>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] postmaster crash and .s.pgsql file" }, { "msg_contents": "On Thu, 29 Jan 1998, Marc Howard Zuckman wrote:\n\n> Would setting the sticky bit on the permissions of the /tmp directory\n> help?\n\n\tUmmmm...is it? *raised eyebrows* I just checked my FreeBSD boxes\nand my Solaris box at the office, and sticky bit is auto-set on those...\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Thu, 29 Jan 1998 17:20:53 -0400 (AST)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] postmaster crash and .s.pgsql file" }, { "msg_contents": "> \n> > No. Make /var/run writable by some group (e.g., group pidlog) and put\n> > postgres (and other things like root or daemon or ..., whatever needs\n> > to log pid files) in that group.\n> \n> We can't expect the user to be able to change /var/run permissions. \n> Must be in pgsql/ or /tmp.\n> \n> No, \"normal\" users shouldn't be allowed to do so, obviously. But, are\n> there real systems in which a database maintainer (i.e., user\n> postgres) cannot cooperate with the system admin (i.e., user root) to\n> accomplish this? In practice, is it really envisioned that postgres\n> should be _so_ distinct from the system? For example, don't most\n> people run the postmaster from the system startup scripts, and isn't\n> that the same thing? How did those commands get inserted into the\n> startup scripts if not by root?\n\nWell, we have to weigh the value of moving it to /var/run vs. 
the\nhardship for people who don't have root access. Even if only 5% don't\nhave root access, that is a lot of people.\n\n-- \nBruce Momjian\nmaillist@candle.pha.pa.us\n", "msg_date": "Thu, 29 Jan 1998 16:23:41 -0500 (EST)", "msg_from": "Bruce Momjian <maillist@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] postmaster crash and .s.pgsql file" }, { "msg_contents": "> \n> On Thu, 29 Jan 1998, The Hermit Hacker wrote:\n> \n> > On Thu, 29 Jan 1998, Brook Milligan wrote:\n> > \n> > > No, \"normal\" users shouldn't be allowed to do so, obviously. But, are\n> > > there real systems in which a database maintainer (i.e., user\n> > > postgres) cannot cooperate with the system admin (i.e., user root) to\n> > > accomplish this? In practice, is it really envisioned that postgres\n> > > should be _so_ distinct from the system? For example, don't most\n> > > people run the postmaster from the system startup scripts, and isn't\n> > > that the same thing? How did those commands get inserted into the\n> > > startup scripts if not by root?\n> > \n> > \tI do not feel that it is appropriate for a non-root program (which\n> > PostgreSQL is) to require a systems administrator to make permissions\n> > related changed to a directory for it to run, period.\n> > \n> > \n> > \n> Speaking of feelings, I'm not especially happy about allowing any old\n> user to trash a key file because it's located in a globally writable\n> directory.\n> \n> Would setting the sticky bit on the permissions of the /tmp directory\n> help?\n\nMost OS's or good administrators already have the sticky bit set on\n/tmp, or they should. 
If they don't, the PostgreSQL socket file is the\nleast of their worries.\n\n-- \nBruce Momjian\nmaillist@candle.pha.pa.us\n", "msg_date": "Thu, 29 Jan 1998 16:26:41 -0500 (EST)", "msg_from": "Bruce Momjian <maillist@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] postmaster crash and .s.pgsql file" }, { "msg_contents": "> > > > No, \"normal\" users shouldn't be allowed to do so, obviously. But, are\n> > > > there real systems in which a database maintainer (i.e., user\n> > > > postgres) cannot cooperate with the system admin (i.e., user root) to\n> > > > accomplish this? In practice, is it really envisioned that postgres\n> > > > should be _so_ distinct from the system? For example, don't most\n> > > > people run the postmaster from the system startup scripts, and isn't\n> > > > that the same thing? How did those commands get inserted into the\n> > > > startup scripts if not by root?\n> > >\n> > > I do not feel that it is appropriate for a non-root program (which\n> > > PostgreSQL is) to require a systems administrator to make permissions\n> > > related changed to a directory for it to run, period.\n\n> > >\n> > Speaking of feelings, I'm not especially happy about allowing any old\n> > user to trash a key file because it's located in a globally writable\n> > directory.\n\nCorrect me if I'm wrong (oh, why bother saying that? :), but aren't there two\nissues going on here? And, shouldn't all points raised above (and earlier) be\nconsidered in the solution?\n\nOne issue is that a location for sockets needs to be specified for _any_\nPostgres installation. 
This location is not exactly the same kind of thing as\nthe main Postgres installation tree.\n\nThe other issue is that there _may_ be a preferred location for this location\non some, most, or all Unix systems.\n\nIn either case, the location should be specified in Makefile.global, so that I\ncan override it in Makefile.custom, just like I do for defining POSTGRESDIR to\nallow me to work in /opt/postgres/... rather than the other possible preferred\nlocation(s).\n\nPerhaps the default location for an installation from source code should be\navailable without sysadmin intervention, which might suggest that it should be\nin the postgres owner's home directory tree or in /tmp. Packaged binary\ninstallations are likely to be installed by root into a dedicated Postgres\naccount.\n\nFor my installation, I'll install from source and go ahead and override the\ndefault to put it in /var/run or somewhere like that which is more secure; the\ninstallation instructions will tell me which is the best location to achieve\nmaximum security.\n\nOK?\n\n - Tom\n\n", "msg_date": "Fri, 30 Jan 1998 02:05:39 +0000", "msg_from": "\"Thomas G. Lockhart\" <lockhart@alumni.caltech.edu>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] postmaster crash and .s.pgsql file" }, { "msg_contents": "On Fri, 30 Jan 1998, Thomas G. Lockhart wrote:\n\n> For my installation, I'll install from source and go ahead and override the\n> default to put it in /var/run or somewhere like that which is more secure; the\n> installation instructions will tell me which is the best location to achieve\n> maximum security.\n> \n> OK?\n> \n\nThis is just too intelligent of a diatribe to listen to.\n\nMarc Zuckman\nmarc@fallon.classyad.com\n\n_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\n_ Visit The Home and Condo MarketPlace\t\t _\n_ http://www.ClassyAd.com\t\t\t _\n_\t\t\t\t\t\t\t _\n_ FREE basic property listings/advertisements and searches. 
_\n_\t\t\t\t\t\t\t _\n_ Try our premium, yet inexpensive services for a real\t _\n_ selling or buying edge!\t\t\t\t _\n_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\n\n", "msg_date": "Thu, 29 Jan 1998 22:37:27 -0500 (EST)", "msg_from": "Marc Howard Zuckman <marc@fallon.classyad.com>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] postmaster crash and .s.pgsql file" } ]
[ { "msg_contents": "> > >For now, ignore the patch I sent. Appears from Andreas demo that the\n> > >current postgres code will follow the Informix style with regard to\n> > >grouping columns with NULL values. Now that I really think about it,\n> > >it does make more sense.\n> > \n> > I think I saw the patch committed this morning...?\n> \n> \tYesterday evening, actually...should we back it out, or leave it\n> as is? Is the old way more corrrect the the new?\n\nMarc,\n\n>From the two responses demonstrating Informix and Sybase, I think the patch\nI sent should be backed out. I can live with NULL equaling NULL in the\nGROUP BY and ORDER BY clauses, but not in the WHERE clause, if everyone else\nis doing it that way.\n\ndarrenk\n", "msg_date": "Mon, 26 Jan 1998 16:43:38 -0500", "msg_from": "darrenk@insightdist.com (Darren King)", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Group By, NULL values and inconsistent behaviour." }, { "msg_contents": "> \n> > > >For now, ignore the patch I sent. Appears from Andreas demo that the\n> > > >current postgres code will follow the Informix style with regard to\n> > > >grouping columns with NULL values. Now that I really think about it,\n> > > >it does make more sense.\n> > > \n> > > I think I saw the patch committed this morning...?\n> > \n> > \tYesterday evening, actually...should we back it out, or leave it\n> > as is? Is the old way more corrrect the the new?\n> \n> Marc,\n> \n> >From the two responses demonstrating Informix and Sybase, I think the patch\n> I sent should be backed out. 
I can live with NULL equaling NULL in the\n> GROUP BY and ORDER BY clauses, but not in the WHERE clause, if everyone else\n> is doing it that way.\n> \n> darrenk\n> \n> \n\nYour patch backed out.\n\n-- \nBruce Momjian\nmaillist@candle.pha.pa.us\n", "msg_date": "Tue, 27 Jan 1998 10:39:34 -0500 (EST)", "msg_from": "Bruce Momjian <maillist@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Group By, NULL values and inconsistent behaviour." } ]
[ { "msg_contents": "I am missing the last digit of the port number on the socket name:\n\n\t/tmp/.s.PGSQL.543\n\nCan someone fix this?\n\n-- \nBruce Momjian\nmaillist@candle.pha.pa.us\n", "msg_date": "Mon, 26 Jan 1998 17:39:26 -0500 (EST)", "msg_from": "Bruce Momjian <maillist@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "socket name" }, { "msg_contents": "> \n> I am missing the last digit of the port number on the socket name:\n> \n> \t/tmp/.s.PGSQL.543\n> \n> Can someone fix this?\n\nI have fixed this.\n\n-- \nBruce Momjian\nmaillist@candle.pha.pa.us\n", "msg_date": "Mon, 26 Jan 1998 23:25:25 -0500 (EST)", "msg_from": "Bruce Momjian <maillist@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] socket name" } ]
[ { "msg_contents": "\nSr:\n \n I have tried to look at the postgreSQL www and contact some people but \ni don't \nhave full internet just e-mail so it's a little hard to search exactly what\ni'm looking for, please help me to find out about this if you can:\n\n\t1) There are primary keys, foreing keys and not null statements \nfor postgres?\n 2) There is possible to emulate this things with the rule system?\n\t3)How can i get a manual about the system rules for postgres?\n\t\t\tRegards\n\t\t\t\tOtto\n", "msg_date": "Mon, 26 Jan 1998 18:11:54 -0500 (EST)", "msg_from": "Otto Villarin <fax!ceniai.inf.cu!comuh.uh.cu!comuh.uh.cu!otto>", "msg_from_op": true, "msg_subject": "None" } ]
[ { "msg_contents": "\nHello,\n\nWe, at CORE SDI, have developed a protocol named PAO wich enables the\nverification of the authenticity of append only files.\n\nI am informing you of this because i belive you may find it extremelly\nuseful for your postgresql software.\n\nThe protocol is cryptographycally strong, and it is described (with a\nsample aplication) at www.core-sdi.com/ssyslog\n\nif you have any questions or comments dont hesitate to mail me.\n\nMaybe we can find a way to get this going togheter..\n\nLucio\n\n==============================[ CORE Seguridad de la Informacion S.A. ]=======\nLucio Torre Email : lucio@secnet.com\n \nAv. Santa Fe 2861 5to C TEL/FAX : +54-1-821-1030\nBuenos Aires, Argentina. CP 1425 Mensajeria: +54-1-317-4157\n==============================================================================\n\n", "msg_date": "Mon, 26 Jan 1998 21:09:48 -0700 (MST)", "msg_from": "\" Lucio Torre (CORE)\" <lucio@securenetworks.com>", "msg_from_op": true, "msg_subject": "PEO Protocol." } ]
[ { "msg_contents": "Unprivileged user wrote:\n> \n> Category : runtime: back-end: SQL\n> Severity : critical\n> \n> Summary: Result not \"GROUPED BY\" from SQL: \n> select im,bn,count(adr) FROM logtmp GROUP BY im,bn;\n\nI found from where it comes: nodeGroup.c' funcs assume that slots from\nsubplan (node Sort) have ttc_shouldFree = FALSE. This was right before\n6.2 and is right for in-memory sorts, but wrong for disk ones.\nNode Mergejoin uses some tricks to deal with this - #define MarkInnerTuple...\nAt the moment, I haven't time to fix this bug - will do this latter\nif no one else.\n\nVadim\n", "msg_date": "Tue, 27 Jan 1998 15:40:58 +0700", "msg_from": "\"Vadim B. Mikheev\" <vadim@sable.krasnoyarsk.su>", "msg_from_op": true, "msg_subject": "Re: [PORTS] Port Bug Report: Result not \"GROUPED BY\" from SQL: select\n\tim,bn,count(adr) FROM logtmp GROUP BY im,bn;" } ]
[ { "msg_contents": "\nThe following two queries & explains seem very strange. the two\nstatements differ only by the absence of usrexists = 't' in the where\nclause of the second, but it seems far less effecient (rather than\nmore?)\n\nAt first I thought my hash index on usrid should be used for the first\nexample, but obviously it can't use the index on both. Is it possible\nfor it to use an Index scan on the bigger table (in this case, user)?\n\nThe second one seems totally insane!\n\n--brett\n\nsas=> explain update user set usrfreextime = 't' from xtermaccess where usrexists = 't' and usrid = xtausrid ;\nNOTICE:QUERY PLAN:\n\nNested Loop (cost=701.57 size=1 width=225)\n -> Seq Scan on user (cost=700.52 size=1 width=217)\n -> Index Scan on xtermaccess (cost=1.05 size=4210 width=8)\n\nEXPLAIN\n\nsas=> explain update user set usrfreextime = 't' from xtermaccess where usrid = xtausrid ;\nNOTICE:QUERY PLAN:\n\nHash Join (cost=1206.89 size=5652 width=225)\n -> Seq Scan on user (cost=700.52 size=5652 width=217)\n -> Hash (cost=0.00 size=0 width=0)\n -> Seq Scan on xtermaccess (cost=178.93 size=4210 width=8)\n\nEXPLAIN\nsas=> \n", "msg_date": "Tue, 27 Jan 1998 00:44:50 -0800", "msg_from": "Brett McCormick <brett@abraxas.scene.com>", "msg_from_op": true, "msg_subject": "weird query plans involving hash [join] & no indicies" } ]
[ { "msg_contents": "On linux-elf:\n\n\npqcomm.c: In function `StreamServerPort':\npqcomm.c:605: structure has no member named `sun_len'\npqcomm.c: In function `StreamOpen':\npqcomm.c:760: structure has no member named `sun_len'\n___\n \nSY, Serj\n", "msg_date": "Tue, 27 Jan 1998 11:49:33 +0300", "msg_from": "Serj <fenix@am.ring.ru>", "msg_from_op": true, "msg_subject": "Snapshot 270198 compile error" }, { "msg_contents": "\n On linux-elf:\n pqcomm.c: In function `StreamServerPort':\n pqcomm.c:605: structure has no member named `sun_len'\n pqcomm.c: In function `StreamOpen':\n pqcomm.c:760: structure has no member named `sun_len'\n\nYes,\nthe sun_len member of struct sockaddr_un is BSD-specific.\nIt is not there in Linux, nor in a SVR4-system we run here.\n\nI had hard-coded the extra offset to 1 in the UNIXSOCK_PATH macro, \nmaybe 2 or 4 works OK on all systems.\n\n regards,\n-- \n---------------------------------------------\nG�ran Thyni, sysadm, JMS Bildbasen, Kiruna\n\n", "msg_date": "27 Jan 1998 10:08:33 -0000", "msg_from": "Goran Thyni <goran@bildbasen.se>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Snapshot 270198 compile error" }, { "msg_contents": "> \n> \n> On linux-elf:\n> pqcomm.c: In function `StreamServerPort':\n> pqcomm.c:605: structure has no member named `sun_len'\n> pqcomm.c: In function `StreamOpen':\n> pqcomm.c:760: structure has no member named `sun_len'\n> \n> Yes,\n> the sun_len member of struct sockaddr_un is BSD-specific.\n> It is not there in Linux, nor in a SVR4-system we run here.\n> \n> I had hard-coded the extra offset to 1 in the UNIXSOCK_PATH macro, \n> maybe 2 or 4 works OK on all systems.\n\nGee, thanks. This does help tremendously. The protocol overhaul person\ndid not have the sun_len field, thought the +1 as a mistake of counting\nthe trailing null and removed it. I will apply the following patch\nwhich should fix the situation. 
One problem is that under BSD, we never\nset the sun_len field, but it still seems to work, and I can't think of\na platform-safe way of doing the assignment, so I will leave it alone.\n\n---------------------------------------------------------------------------\n\n*** ./include/libpq/pqcomm.h.orig\tTue Jan 27 07:25:25 1998\n--- ./include/libpq/pqcomm.h\tTue Jan 27 07:36:34 1998\n***************\n*** 35,42 ****\n \n #define\tUNIXSOCK_PATH(sun,port) \\\n \t(sprintf((sun).sun_path, \"/tmp/.s.PGSQL.%d\", (port)) + \\\n! \t\tsizeof ((sun).sun_len) + sizeof ((sun).sun_family))\n! \n \n /*\n * These manipulate the frontend/backend protocol version number.\n--- 35,46 ----\n \n #define\tUNIXSOCK_PATH(sun,port) \\\n \t(sprintf((sun).sun_path, \"/tmp/.s.PGSQL.%d\", (port)) + \\\n! \t\t+ 1 + sizeof ((sun).sun_family))\n! /*\n! *\t\t+ 1 is for BSD-specific sizeof((sun).sun_len)\n! *\t\tWe never actually set sun_len, and I can't think of a\n! *\t\tplatform-safe way of doing it, but the code still works. bjm\n! 
*/\n \n /*\n * These manipulate the frontend/backend protocol version number.\n\n\n-- \nBruce Momjian\nmaillist@candle.pha.pa.us\n", "msg_date": "Tue, 27 Jan 1998 07:37:07 -0500 (EST)", "msg_from": "Bruce Momjian <maillist@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Snapshot 270198 compile error" }, { "msg_contents": "> \n> \n> On linux-elf:\n> pqcomm.c: In function `StreamServerPort':\n> pqcomm.c:605: structure has no member named `sun_len'\n> pqcomm.c: In function `StreamOpen':\n> pqcomm.c:760: structure has no member named `sun_len'\n> \n> Yes,\n> the sun_len member of struct sockaddr_un is BSD-specific.\n> It is not there in Linux, nor in a SVR4-system we run here.\n> \n> I had hard-coded the extra offset to 1 in the UNIXSOCK_PATH macro, \n> maybe 2 or 4 works OK on all systems.\n\nI have applied a patch to document the +1 in the socket length.\n\n-- \nBruce Momjian\nmaillist@candle.pha.pa.us\n", "msg_date": "Tue, 27 Jan 1998 10:35:28 -0500 (EST)", "msg_from": "Bruce Momjian <maillist@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Snapshot 270198 compile error" }, { "msg_contents": "Bruce Momjian wrote:\n\n> #define UNIXSOCK_PATH(sun,port) \\\n> (sprintf((sun).sun_path, \"/tmp/.s.PGSQL.%d\", (port)) + \\\n> ! + 1 + sizeof ((sun).sun_family))\n> ! /*\n> ! * + 1 is for BSD-specific sizeof((sun).sun_len)\n> ! * We never actually set sun_len, and I can't think of a\n> ! * platform-safe way of doing it, but the code still works. bjm\n> ! */\n\nI don't think this is going to work. On glibc2 you will end up with a\ntrailing '\\0' in the socket name. You won't be able to see it but I\nthink it will be there. 
Is the following version portable?\n\n#define\tUNIXSOCK_PATH(sun,port) \\\n\t(sprintf((sun).sun_path, \"/tmp/.s.PGSQL.%d\", (port)) + \\\n\t((char *)&(sun).sun_path[0] - (char *)&(sun)))\n\nPhil\n", "msg_date": "Tue, 27 Jan 1998 19:25:11 +0000", "msg_from": "Phil Thompson <phil@river-bank.demon.co.uk>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Snapshot 270198 compile error" }, { "msg_contents": "> \n> Bruce Momjian wrote:\n> \n> > #define UNIXSOCK_PATH(sun,port) \\\n> > (sprintf((sun).sun_path, \"/tmp/.s.PGSQL.%d\", (port)) + \\\n> > ! + 1 + sizeof ((sun).sun_family))\n> > ! /*\n> > ! * + 1 is for BSD-specific sizeof((sun).sun_len)\n> > ! * We never actually set sun_len, and I can't think of a\n> > ! * platform-safe way of doing it, but the code still works. bjm\n> > ! */\n\nOK, I am with you. Even better, let's use offset(). Takes care of\npossible OS padding between fields too:\n\n---------------------------------------------------------------------------\n\n\n*** ./include/libpq/pqcomm.h.orig\tTue Jan 27 14:28:27 1998\n--- ./include/libpq/pqcomm.h\tTue Jan 27 14:48:15 1998\n***************\n*** 35,44 ****\n \n #define\tUNIXSOCK_PATH(sun,port) \\\n \t(sprintf((sun).sun_path, \"/tmp/.s.PGSQL.%d\", (port)) + \\\n! \t\t+ 1 + sizeof ((sun).sun_family))\n /*\n! *\t\t+ 1 is for BSD-specific sizeof((sun).sun_len)\n! *\t\tWe never actually set sun_len, and I can't think of a\n *\t\tplatform-safe way of doing it, but the code still works. bjm\n */\n \n--- 35,44 ----\n \n #define\tUNIXSOCK_PATH(sun,port) \\\n \t(sprintf((sun).sun_path, \"/tmp/.s.PGSQL.%d\", (port)) + \\\n! \t\toffsetof(struct sockaddr_un, sun_path))\n /*\n! *\t\tWe do this because sun_len is in BSD's struct, while others don't.\n! *\t\tWe never actually set BSD's sun_len, and I can't think of a\n *\t\tplatform-safe way of doing it, but the code still works. 
bjm\n */\n \n\n\n-- \nBruce Momjian\nmaillist@candle.pha.pa.us\n", "msg_date": "Tue, 27 Jan 1998 14:50:01 -0500 (EST)", "msg_from": "Bruce Momjian <maillist@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Snapshot 270198 compile error" } ]
[ { "msg_contents": "> Does the SQL standard say anything about an implied sort when\n> grouping or is it up to the user to include an ORDER BY clause?\n> darrenk\n\nWithout order by the result set is never guaranteed to be ordered in a\nspecific way (standard speak). The order is dependent on the chosen\nquery path, which\nchanges from query to query.\n\nAndreas\n", "msg_date": "Tue, 27 Jan 1998 11:13:24 +0100", "msg_from": "Zeugswetter Andreas DBT <Andreas.Zeugswetter@telecom.at>", "msg_from_op": true, "msg_subject": "No: implied sort with group by" } ]
[ { "msg_contents": "> \n> > Does the SQL standard say anything about an implied sort when\n> > grouping or is it up to the user to include an ORDER BY clause?\n> > darrenk\n> \n> Without order by the result set is never guaranteed to be ordered in a\n> specific way (standard speak). The order is dependent on the chosen\n> query path, which changes from query to query.\n> \n> Andreas\n\nPostgres should then do an internal sort before grouping. In the second\nof your examples, I take the above to mean that either row could be\nreturned first.\n\nIn order to get that result set though, the data needs to be sorted before\ngetting to the group by node in the executor. The order of that internal\nsort is purely arbitrary, it just has to be done.\n\nThis is what I think is missing or broken right now.\n\n> > select * from t1;\n> a b c\n> 1 x\n> 2 x\n> 3 z\n> 2 x\n>\n> 4 row(s) retrieved.\n> > select b,c,sum(a) from t1 group by b,c;\n> b c (sum)\n>\n> x 5\n> z 3\n>\n> 2 row(s) retrieved.\n\ndarrenk\n\n\n", "msg_date": "Tue, 27 Jan 1998 08:43:29 -0500", "msg_from": "darrenk@insightdist.com (Darren King)", "msg_from_op": true, "msg_subject": "Re: [HACKERS] No: implied sort with group by" }, { "msg_contents": "> > > Does the SQL standard say anything about an implied sort when\n> > > grouping or is it up to the user to include an ORDER BY clause?\n\nUp to the user. SQL is a set-oriented language. The fact that many/most/all\nimplementations order results to then do grouping is an implementation\ndetail, not a language definition.\n\n\n> This is what I think is missing or broken right now.\n>\n> > > select * from t1;\n> > a b c\n> > 1 x\n> > 2 x\n> > 3 z\n> > 2 x\n> >\n> > 4 row(s) retrieved.\n> > > select b,c,sum(a) from t1 group by b,c;\n> > b c (sum)\n> >\n> > x 5\n> > z 3\n> >> 2 row(s) retrieved.\n\nSorry, I've lost the thread. What is broken? 
I get this same result, and\n(assuming that column \"b\" is full of nulls) I think this the correct result.\n\n - Tom\n\n", "msg_date": "Tue, 27 Jan 1998 16:44:55 +0000", "msg_from": "\"Thomas G. Lockhart\" <lockhart@alumni.caltech.edu>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] No: implied sort with group by" }, { "msg_contents": "> \n> > > > Does the SQL standard say anything about an implied sort when\n> > > > grouping or is it up to the user to include an ORDER BY clause?\n> \n> Up to the user. SQL is a set-oriented language. The fact that many/most/all\n> implementations order results to then do grouping is an implementation\n> detail, not a language definition.\n> \n> \n> > This is what I think is missing or broken right now.\n> >\n> > > > select * from t1;\n> > > a b c\n> > > 1 x\n> > > 2 x\n> > > 3 z\n> > > 2 x\n> > >\n> > > 4 row(s) retrieved.\n> > > > select b,c,sum(a) from t1 group by b,c;\n> > > b c (sum)\n> > >\n> > > x 5\n> > > z 3\n> > >> 2 row(s) retrieved.\n> \n> Sorry, I've lost the thread. What is broken? I get this same result, and\n> (assuming that column \"b\" is full of nulls) I think this the correct result.\n\nAt one point, it was thought that NULLs shouldn't be grouped, but I\nbacked out the patch. There is a problem with GROUP BY on large\ndatasets, and Vadim knows the cause, and will work on it later.\n\n-- \nBruce Momjian\nmaillist@candle.pha.pa.us\n", "msg_date": "Tue, 27 Jan 1998 11:54:00 -0500 (EST)", "msg_from": "Bruce Momjian <maillist@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] No: implied sort with group by" } ]
[ { "msg_contents": "In snapshot 270198 \"GROUP BY\" bug present too.\n\nTest on linux-elf (\"select a,b,count(*) from c group by a,b\");\n\n-- \nSY, Serj\n", "msg_date": "Tue, 27 Jan 1998 17:03:41 +0300", "msg_from": "Serj <fenix@am.ring.ru>", "msg_from_op": true, "msg_subject": "Group By bug in snapshot 270198" }, { "msg_contents": "Serj wrote:\n> \n> In snapshot 270198 \"GROUP BY\" bug present too.\n> \n> Test on linux-elf (\"select a,b,count(*) from c group by a,b\");\n\nWe know by what this bug is caused - will be fixed before 6.3 beta 2.\nThere will be also patch for 6.2.1\n\nAs \"workarround\" - try to increase -S max_sort_memory to prevent\ndisk sorting. Sorry.\n\nVadim\n", "msg_date": "Tue, 27 Jan 1998 21:57:26 +0700", "msg_from": "\"Vadim B. Mikheev\" <vadim@sable.krasnoyarsk.su>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Group By bug in snapshot 270198" } ]
[ { "msg_contents": "> > > I found this patch in my mailbox. Is there any intestest in this, or is\n> > > it too site-specific?\n> > > \n> > > > \n> > > > Eze Ogwuma writes:\n> > > > > Bruce Momjian <maillist@candle.pha.pa.us> writes:\n> > > > > > Can you be specific? Something I can add to the TODO list.\n> > > > > \n> > > > > Database based access for users so that each user can be giving access\n> > > > > to a particular database only. More permissions for each databse user:\n> > > > > Create, Drop, Select, Insert etc. Possibly table based\n> > > > > authentification as well.\n> > > > \n> > > > I needed to do that for the web database that I'm setting up. We have\n> > > > 20000 users and each (potentially) needs a separate database which is\n> > > > only accessible to them. Rather than having 20000 lines in pg_hba.conf,\n> > > > I've patched Postgres so that the special token \"%username\" in the\n\nSo someone wasted their time writing this patch, 'cos the facility wasn't\ndocumented properly ?????\n\n> > \n> > Yes please! I'd like to see this...\n> \n> I think it may already be there, but with no documentation in\n> pg_hba.conf:\n> \n> See backend/libpq/hba.c:\n> \n> Special case: For usermap \"sameuser\", don't look in the usermap\n> file. That's an implied map where \"pguser\" must be identical to\n> \"ident_username\" in order to be authorized.\nThe terminology isn't exactly clear :-) \n\nI hope this gets documented properly and comprehensibly!!!! I can't same\nI'm any wiser from reading that as to what one needs to do (though I guess\nI might be if I read it in conjunction with the hba instructions).\n\n\n<RANT ON>\nMight I ask again that people send patches in for the documentation WHENEVER\nthey add a new feature!\n\nThere is no point in adding new and wonderful things if users don't know\nthey exist!!!!! 
When someone ends up duplicating functionality 'cos they\ndon't know that a feature exists, that's even worse........\n<RANT OFF>\n\n\nAndrew\n\n----------------------------------------------------------------------------\nDr. Andrew C.R. Martin University College London\nEMAIL: (Work) martin@biochem.ucl.ac.uk (Home) andrew@stagleys.demon.co.uk\nURL: http://www.biochem.ucl.ac.uk/~martin\nTel: (Work) +44(0)171 419 3890 (Home) +44(0)1372 275775\n", "msg_date": "Tue, 27 Jan 1998 16:13:39 GMT", "msg_from": "Andrew Martin <martin@biochemistry.ucl.ac.uk>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: [QUESTIONS] How is PostgreSQL doing?" }, { "msg_contents": "> > > > > I've patched Postgres so that the special token \"%username\" in the\n> \n> So someone wasted their time writing this patch, 'cos the facility wasn't\n> documented properly ?????\n\nYep, that's what happened.\n\n> I hope this gets documented properly and comprehensibly!!!! I can't same\n> I'm any wiser from reading that as to what one needs to do (though I guess\n> I might be if I read it in conjunction with the hba instructions).\n\nPhil kindely just added several mentions to the pg_hba.conf file, with\nexamples of its use.\n\n# ident: Authentication is done by the ident server on the remote\n# host, via the ident (RFC 1413) protocol. AUTH_ARGUMENT, if\n# specified, is a map name to be found in the pg_ident.conf file.\n# That table maps from ident usernames to Postgres usernames. The\n# special map name \"sameuser\" indicates an implied map (not found\n# in pg_ident.conf) that maps every ident username to the identical\n# Postgres username.\n#\n\n> \n> \n> <RANT ON>\n> Might I ask again that people send patches in for the documentation WHENEVER\n> they add a new feature!\n> \n> There is no point in adding new and wonderful things if users don't know\n> they exist!!!!! 
When someone ends up duplicating functionality 'cos they\n> don't know that a feature exists, that's even worse........\n> <RANT OFF>\n\nI usually check before each release to be sure each new feature is\ndocumented, but in this case, there was no mention that the feature\nexisted.\n\nNever hurts to remind people to send manual page changes too, though\npeople are usually pretty good about it.\n\n-- \nBruce Momjian maillist@candle.pha.pa.us\n\n", "msg_date": "Tue, 27 Jan 1998 11:24:43 -0500 (EST)", "msg_from": "Bruce Momjian <maillist@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [QUESTIONS] How is PostgreSQL doing?" } ]
[ { "msg_contents": "> > Thus spake Adam Fenn\n> > > Where can I find some documentation on how to use the money data type in\n> > > pgsql?\n> > \n> > I never did get around to that. The code has some explanations. There\n> > isn't really much. You create a type as money and you can assign values\n> > such as '$123,456.78' (dollar sign optional) to it and it displays it\n\n\nARRRRGGHHHHHHHH!!!!!!!! Another example where a nice feature has been\nintroduced but not documented! THERE IS NO POINT in having features if\npeople don't know how to use them.\n\nIs this now documented????\n\n\nAndrew\n\n----------------------------------------------------------------------------\nDr. Andrew C.R. Martin University College London\nEMAIL: (Work) martin@biochem.ucl.ac.uk (Home) andrew@stagleys.demon.co.uk\nURL: http://www.biochem.ucl.ac.uk/~martin\nTel: (Work) +44(0)171 419 3890 (Home) +44(0)1372 275775\n", "msg_date": "Tue, 27 Jan 1998 16:31:03 GMT", "msg_from": "Andrew Martin <martin@biochemistry.ucl.ac.uk>", "msg_from_op": true, "msg_subject": "Re: Re: [PORTS] the 'money' type" }, { "msg_contents": "> > > > Where can I find some documentation on how to use the money data type in\n> > > > pgsql?\n> > >\n> > > I never did get around to that. The code has some explanations. There\n> > > isn't really much. You create a type as money and you can assign values\n> > > such as '$123,456.78' (dollar sign optional) to it and it displays it\n>\n> ARRRRGGHHHHHHHH!!!!!!!! Another example where a nice feature has been\n> introduced but not documented! THERE IS NO POINT in having features if\n> people don't know how to use them.\n>\n> Is this now documented????\n\nHi Andrew. I've added a section on data types, including boolean and money, to\nthe new SGML-based docs which should be available for v6.3. Looking forward to\nyour comments (and patches?) 
:)\n\n http://alumni.caltech.edu/~lockhart/postgres/doc/html/index.sgml\n\n - Tom\n\n", "msg_date": "Tue, 27 Jan 1998 17:03:31 +0000", "msg_from": "\"Thomas G. Lockhart\" <lockhart@alumni.caltech.edu>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Re: [PORTS] the 'money' type" }, { "msg_contents": "> this url is 404.. the html dir is empty..\n\nWell, not exactly, but I have sgml on the brain. The file type is\n\"html\", not \"sgml\".The URL is actually:\n\n http://alumni.caltech.edu/~lockhart/postgres/doc/html/index.html\n\nSorry for the typo.\n\n - Tom\n\n> > http://alumni.caltech.edu/~lockhart/postgres/doc/html/index.sgml\n\n\n\n", "msg_date": "Wed, 28 Jan 1998 07:47:21 +0000", "msg_from": "\"Thomas G. Lockhart\" <lockhart@alumni.caltech.edu>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Re: [PORTS] the 'money' type" } ]
[ { "msg_contents": "> > > This is what I think is missing or broken right now.\n> > >\n> > > > > select * from t1;\n> > > > a b c\n> > > > 1 x\n> > > > 2 x\n> > > > 3 z\n> > > > 2 x\n> > > >\n> > > > 4 row(s) retrieved.\n> > > > > select b,c,sum(a) from t1 group by b,c;\n> > > > b c (sum)\n> > > >\n> > > > x 5\n> > > > z 3\n> > > >> 2 row(s) retrieved.\n> > \n> > Sorry, I've lost the thread. What is broken? I get this same result, and\n> > (assuming that column \"b\" is full of nulls) I think this the correct result.\n> \n> At one point, it was thought that NULLs shouldn't be grouped, but I\n> backed out the patch. There is a problem with GROUP BY on large\n> datasets, and Vadim knows the cause, and will work on it later.\n\nDifferent from the grouping by NULLs issue...\n\nThe above results are from Sybase. If these same four rows are inserted into\npostgres, the second query will return three rows. Something like...\n\nb|c|sum(a)\n |x|3\n |z|3\n |x|2\n \nIt does this not because of the null values of column b, but because the data is\nnot sorted before getting to the group by node if the user does not explicitly put\nan order by in the query. IMHO, postgres should put an arbitrary sort node in the\ntree so that the data can be properly grouped as the group node iterates over it.\n\nAnd even if I put an \"order by c\" clause in there, I still get three rows, they're\njust properly sorted. 
:)\n\ndarrenk\n \n", "msg_date": "Tue, 27 Jan 1998 12:08:45 -0500", "msg_from": "darrenk@insightdist.com (Darren King)", "msg_from_op": true, "msg_subject": "Re: [HACKERS] No: implied sort with group by" }, { "msg_contents": "> > > > This is what I think is missing or broken right now.\n> > > >\n> > > > > > select * from t1;\n> > > > > a b c\n> > > > > 1 x\n> > > > > 2 x\n> > > > > 3 z\n> > > > > 2 x\n> > > > >\n> > > > > 4 row(s) retrieved.\n> > > > > > select b,c,sum(a) from t1 group by b,c;\n> > > > > b c (sum)\n> > > > >\n> > > > > x 5\n> > > > > z 3\n> > > > >> 2 row(s) retrieved.\n> > >\n> > > Sorry, I've lost the thread. What is broken? I get this same result, and\n> > > (assuming that column \"b\" is full of nulls) I think this the correct result.\n> >\n> > At one point, it was thought that NULLs shouldn't be grouped, but I\n> > backed out the patch. There is a problem with GROUP BY on large\n> > datasets, and Vadim knows the cause, and will work on it later.\n>\n> Different from the grouping by NULLs issue...\n>\n> The above results are from Sybase. If these same four rows are inserted into\n> postgres, the second query will return three rows. Something like...\n>\n> b|c|sum(a)\n> |x|3\n> |z|3\n> |x|2\n>\n> It does this not because of the null values of column b, but because the data is\n> not sorted before getting to the group by node if the user does not explicitly put\n> an order by in the query. IMHO, postgres should put an arbitrary sort node in the\n> tree so that the data can be properly grouped as the group node iterates over it.\n>\n> And even if I put an \"order by c\" clause in there, I still get three rows, they're\n> just properly sorted. :)\n\nNot necessarily true; as I said, I get the same result as above (with the 980112\nsource tree; have things changed since??). 
Perhaps you are running into the sorting\nproblem which seemed to be present on larger tables only?\n\n - Tom\n\npostgres=> select b,c,sum(a) from t1 group by b,c;\nb|c|sum\n-+-+---\n |x| 5\n |z| 3\n(2 rows)\n\npostgres=> select * from t1;\na|b|c\n-+-+-\n1| |x\n2| |x\n2| |x\n3| |z\n(4 rows)\n\n", "msg_date": "Wed, 28 Jan 1998 07:33:40 +0000", "msg_from": "\"Thomas G. Lockhart\" <lockhart@alumni.caltech.edu>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] No: implied sort with group by" } ]
[ { "msg_contents": "My new code now checks for any type of non-escalation deadlock, even if\nthree or more processes have deadlocks where any two of them does not\nrepresent a deadlock.\n\nThe one piece missing is to check for escalation deadlocks, where two\npeople get readlocks on the same table, and both then go for write-locks\non the same table.\n\n--\nBruce Momjian maillist@candle.pha.pa.us\n\n", "msg_date": "Tue, 27 Jan 1998 13:23:36 -0500 (EST)", "msg_from": "Bruce Momjian <maillist@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "deadlock detection" } ]
[ { "msg_contents": "Phil, I know I am hitting you with lots of problems, but I want to thank\nyou for overhauling our protocol. Several people have talked about\ndoing it, but you are the one who did it.\n\nThanks.\n\n-- \nBruce Momjian\nmaillist@candle.pha.pa.us\n", "msg_date": "Tue, 27 Jan 1998 14:57:10 -0500 (EST)", "msg_from": "Bruce Momjian <maillist@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Frontend/Backend Protocol Patch" } ]
[ { "msg_contents": "Any reason why the whole domain appears to be dead?\nAnything I can do to help?\n\n----------\n\n\nSincerely Yours, \n\nSimon Shapiro\nShimon@Simon-Shapiro.ORG Voice: 503.799.2313\n", "msg_date": "Tue, 27 Jan 1998 12:00:24 -0800 (PST)", "msg_from": "Simon Shapiro <shimon@simon-shapiro.org>", "msg_from_op": true, "msg_subject": "Domain Problem?" }, { "msg_contents": "On Tue, 27 Jan 1998, Simon Shapiro wrote:\n\n> Any reason why the whole domain appears to be dead?\n> Anything I can do to help?\n\nWithout getting into a very very large email...Trends was sharing a 3Mps\nlink with another company to UUnet...the other company went belly-up this\npast week, with UUnet pulling the link. Adrian (@Trends) spent today\ndoing up contracts and whatnot with UUnet, and now we are connected\ndirectly with them, instead of sharing. It took them a little longer then\nanticipated to get the link back up again, but considering that it went\ndown at ~noon, I think that Adrian did a fantastic job of things...\n\nIf UUnet goes belly up...well...is there a 'Net left? :)\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Tue, 27 Jan 1998 23:36:29 -0400 (AST)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Domain Problem?" }, { "msg_contents": "\nOn 28-Jan-98 The Hermit Hacker wrote:\n> On Tue, 27 Jan 1998, Simon Shapiro wrote:\n> \n>> Any reason why the whole domain appears to be dead?\n>> Anything I can do to help?\n> \n> Without getting into a very very large email...Trends was sharing a 3Mps\n> link with another company to UUnet...the other company went belly-up this\n> past week, with UUnet pulling the link. Adrian (@Trends) spent today\n> doing up contracts and whatnot with UUnet, and now we are connected\n> directly with them, instead of sharing. 
It took them a little longer\n> then\n> anticipated to get the link back up again, but considering that it went\n> down at ~noon, I think that Adrian did a fantastic job of things...\n\nTotally understandable. I just have some bandwidth I thought might be\nneeded as a backup. Thanx!\n\n\n> If UUnet goes belly up...well...is there a 'Net left? :)\n\nNow you tempt me... :-))\n\n----------\n\n\nSincerely Yours, \n\nSimon Shapiro\nShimon@Simon-Shapiro.ORG Voice: 503.799.2313\n", "msg_date": "Wed, 28 Jan 1998 00:37:29 -0800 (PST)", "msg_from": "Simon Shapiro <shimon@simon-shapiro.org>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Domain Problem?" }, { "msg_contents": "On Wed, 28 Jan 1998, Simon Shapiro wrote:\n\n> \n> On 28-Jan-98 The Hermit Hacker wrote:\n> > On Tue, 27 Jan 1998, Simon Shapiro wrote:\n> > \n> >> Any reason why the whole domain appears to be dead?\n> >> Anything I can do to help?\n> > \n> > Without getting into a very very large email...Trends was sharing a 3Mps\n> > link with another company to UUnet...the other company went belly-up this\n> > past week, with UUnet pulling the link. Adrian (@Trends) spent today\n> > doing up contracts and whatnot with UUnet, and now we are connected\n> > directly with them, instead of sharing. It took them a little longer\n> > then\n> > anticipated to get the link back up again, but considering that it went\n> > down at ~noon, I think that Adrian did a fantastic job of things...\n> \n> Totally understandable. I just have some bandwidth I thought might be\n> needed as a backup. Thanx!\n\n\tMore mirror sites are always welcome...especially for ftp\n\n", "msg_date": "Wed, 28 Jan 1998 07:35:09 -0500 (EST)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Domain Problem?" 
}, { "msg_contents": "\nOn 28-Jan-98 The Hermit Hacker wrote:\n \n...\n\n> More mirror sites are always welcome...especially for ftp\n\nOnce we relocate out equipment, I'll start a mirror for FTP and CVS.\nWill take a few days.\n\n----------\n\n\nSincerely Yours, \n\nSimon Shapiro\nShimon@Simon-Shapiro.ORG Voice: 503.799.2313\n", "msg_date": "Wed, 28 Jan 1998 08:50:51 -0800 (PST)", "msg_from": "Simon Shapiro <shimon@simon-shapiro.org>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Domain Problem?" } ]
[ { "msg_contents": "\nI notice that all the functions with the same name but different args\nare actually sql statements which SELECT the result of the function\ncall using a different (and unique) name..\n\nWouldn't this cause slowdowns? Shouldn't you be able to have a\ndifferent name for your function in pgsql than in the shared library,\nwithout having to resort to such hacks?\n\n--brett\n", "msg_date": "Tue, 27 Jan 1998 12:42:01 -0800", "msg_from": "Brett McCormick <brett@abraxas.scene.com>", "msg_from_op": true, "msg_subject": "functions with same name, different args" }, { "msg_contents": "> I notice that all the functions with the same name but different args\n> are actually sql statements which SELECT the result of the function\n> call using a different (and unique) name..\n>\n> Wouldn't this cause slowdowns? Shouldn't you be able to have a\n> different name for your function in pgsql than in the shared library,\n> without having to resort to such hacks?\n\nActually, we were pretty happy when Edmund Mergl found this mechanism.\nI've thought about making changes to allow compiled code to do the same\nthing, but we've had other more important issues to work on. Send\npatches if you want something different.\n\n", "msg_date": "Wed, 28 Jan 1998 07:36:41 +0000", "msg_from": "\"Thomas G. Lockhart\" <lockhart@alumni.caltech.edu>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] functions with same name, different args" }, { "msg_contents": "> \n> > I notice that all the functions with the same name but different args\n> > are actually sql statements which SELECT the result of the function\n> > call using a different (and unique) name..\n> >\n> > Wouldn't this cause slowdowns? 
Shouldn't you be able to have a\n> > different name for your function in pgsql than in the shared library,\n> > without having to resort to such hacks?\n> \n> Actually, we were pretty happy when Edmund Mergl found this mechanism.\n> I've thought about making changes to allow compiled code to do the same\n> thing, but we've had other more important issues to work on. Send\n> patches if you want something different.\n\nActually the problem was that SQL functions can compare args and call\nthe proper function, while C functions just get called without any arg\ncomparisons.\n\n\n-- \nBruce Momjian\nmaillist@candle.pha.pa.us\n", "msg_date": "Wed, 28 Jan 1998 11:34:23 -0500 (EST)", "msg_from": "Bruce Momjian <maillist@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] functions with same name, different args" } ]
[ { "msg_contents": "\nI was just using 30 days because that's what the previous fellow used..\nit works, but some of the time conversion routines are still messed up.\n\nselect date('now'::datetime),date(datetime(date('now'::datetime)));\n date| date\n----------+----------\n01-27-1998|01-26-1998\n(1 row)\n\nthose should be the same correct? it appears to get messed up in the\ndatetime(date) conversion..\n\n--brett\n\nOn Tue, 27 January 1998, at 08:21:22, Thomas G. Lockhart wrote:\n\n> btw, if you want to add a month, better to use\n> \n> postgres=> select date( cast 'now'::date as datetime + '1\n> month'::timespan);\n> date\n> ----------\n> 02-27-1998\n> (1 row)\n> \n> (The important thing is the '1 month' time span; ignore the casting\n> until v6.3 :)\n", "msg_date": "Tue, 27 Jan 1998 16:03:42 -0800", "msg_from": "Brett McCormick <brett@abraxas.scene.com>", "msg_from_op": true, "msg_subject": "Re: [QUESTIONS] select date('now'::datetime+'30 day'::timespan)" }, { "msg_contents": "> > > 6.2.1... Is there a particular patch I could apply to get this to\n> > > work? Oh! note that this may be an alpha only problem, as my dates\n> > > started off by *years*\n> >\n> > Looks like it is an alpha problem, since my patched v6.2.1 works\n> > correctly also. I can't remember if any of the patches available for\n> > v6.2.1 address a problem like this. Look at the readme on\n>\n> Yes, it is an alpha problem. I received the message below from\n> ebert@pe-muc.de which describes a fix that at least makes this stuff\n> work. I thought it would have been reported already to make 6.3. Anyway:\n\nAh. I knew about that one, but since you only had a one day offset in your\nexample and said that \"dates started off by *years*\" I thought you had the\nother stuff solved.\n\nWould someone who knows about the Alpha port generate a FAQ_DigitalUnix (or\nsomething with a better name) which summarizes these issues? 
We can include\nit in v6.3 and save others the troubles you have encountered.\n\nI'll be sure to get it into the distribution if someone writes it.\n\n - Tom\n\n> > MB> The initial configuration stage defines HAVE_INT_TIMEZONE in\n> > MB> config.h. However, under Digital UNIX, the symbol timezone is some\n> > MB> pointer picked up from somewhere. Interpreting that pointer as an\n> > MB> offset adds some ridiculous large number to the resulting times\n> > MB> resulting in the bogus datetime values. Rather than hack GNU\n> > MB> configure tests properly, I just changed the #define\n> > MB> HAVE_INT_TIMEZONE 1 line in include/config.h to /* #undef\n> > MB> HAVE_INT_TIMEZONE */ and verified that that fixed the problem. The\n> > MB> datetime.sql regression test still coughs up a few differences but\n> > MB> it's mostly a second or two of different rather than 21 years. I'll\n> > MB> assume that's not important unless I hear to the contrary.\n> >\n> > Editing config.h and undefining HAVE_INT_TIMEZONE solved the problem for\n> > me, too. (OSF/1 3.0 or 3.2, gcc-2.7.2, postgres-6.2.1). The only\n> > differences on the regression tests datetime, abstime, and tinterval\n> > concern mixing up PST and PDT. That is a lot better than the unpatched\n> > postgres 6.2.1.\n\n", "msg_date": "Thu, 29 Jan 1998 16:21:52 +0000", "msg_from": "\"Thomas G. Lockhart\" <lockhart@alumni.caltech.edu>", "msg_from_op": false, "msg_subject": "Re: [QUESTIONS] select date('now'::datetime+'30 day'::timespan)" } ]
[ { "msg_contents": "I have completed lock escalation detection.\n\nBoth types of locks are described in the new lock manual page, and the\ndeadlock message points them to the manual page now.\n\n-- \nBruce Momjian\nmaillist@candle.pha.pa.us\n", "msg_date": "Tue, 27 Jan 1998 21:30:53 -0500 (EST)", "msg_from": "Bruce Momjian <maillist@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "deadlocks" } ]
[ { "msg_contents": "darrenk wrote:\n> postgres should then do an internal sort before grouping. In the\nsecond\n> of your examples, I take the above to mean that either row could be\n> returned first.\n\nyes (standard speak)\n\n> In order to get that result set though, the data needs to be sorted\nbefore\n> getting to the group by node in the executor. The order of that\ninternal\n> sort is purely arbitrary, it just has to be done.\n\neither that or group the result set into an implicit temp table\ninternally.\nIf a compound index exists on b,c then an index path could be used\ninstead.\n(compound btree would also be good for order by, of course yall know ;-)\nAn auto index path (temp index is created on the fly and dropped after\nquery completion)\nmight also be considered. \n\nAndreas\n", "msg_date": "Wed, 28 Jan 1998 09:36:17 +0100", "msg_from": "Zeugswetter Andreas DBT <Andreas.Zeugswetter@telecom.at>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] No: implied sort with group by" } ]
[ { "msg_contents": "> Not necessarily true; as I said, I get the same result as above (with the 980112\n> source tree; have things changed since??). Perhaps you are running into the sorting\n> problem which seemed to be present on larger tables only?\n> \n> - Tom\n> \n> postgres=> select b,c,sum(a) from t1 group by b,c;\n> b|c|sum\n> -+-+---\n> |x| 5\n> |z| 3\n> (2 rows)\n> \n> postgres=> select * from t1;\n> a|b|c\n> -+-+-\n> 1| |x\n> 2| |x\n> 2| |x\n> 3| |z\n> (4 rows)\n\nHmmm...I have a snapshot from about ten days ago, I'll get something newer and\ntry this again. I've been putting off getting a new one until I get the block\nsize patch done. Annoying to put the changes back into a new src copy (but not\nas annoying as dealing with #(*&^! insurance companies claims departments).\n\nIs the order from the second query the order that the rows were inserted?\n\nDo you get the same results if you insert the (3,null,'z') second or third so\nthe rows are stored out of order? I was getting my bad results with this same\ndata, only four rows. I do have a problem with large groupings on two or more\ncolumns running out of memory, but not the problem that linux users are seeing.\n\ndarrenk\n", "msg_date": "Wed, 28 Jan 1998 09:02:34 -0500", "msg_from": "darrenk@insightdist.com (Darren King)", "msg_from_op": true, "msg_subject": "Re: [HACKERS] No: implied sort with group by" }, { "msg_contents": "> > Not necessarily true; as I said, I get the same result as above (with the 980112\n> > source tree; have things changed since??). 
Perhaps you are running into the sorting\n> > problem which seemed to be present on larger tables only?\n> >\n> > postgres=> select b,c,sum(a) from t1 group by b,c;\n> > b|c|sum\n> > -+-+---\n> > |x| 5\n> > |z| 3\n> > (2 rows)\n> >\n> > postgres=> select * from t1;\n> > a|b|c\n> > -+-+-\n> > 1| |x\n> > 2| |x\n> > 2| |x\n> > 3| |z\n> > (4 rows)\n>\n> Hmmm...I have a snapshot from about ten days ago\n\n> Is the order from the second query the order that the rows were inserted?\n>\n> Do you get the same results if you insert the (3,null,'z') second or third so\n> the rows are stored out of order? I was getting my bad results with this same\n> data, only four rows.\n\nOUCH! You are right, there is a problem with this simple test case:\n\npostgres=> select b,c,sum(a) from t1 group by b,c;\nb|c|sum\n-+-+---\n |x| 5\n |z| 3\n |x| 0\n(3 rows)\n\npostgres=> select * from t1;\na|b|c\n-+-+-\n1| |x\n2| |x\n2| |x\n3| |z\n0| |x\n(5 rows)\n\nI just inserted a single out-of-order row at the end of the table which, since the\ninteger value is zero, should have not affected the result. Sorry I didn't understand\nthe nature of the test case.\n\n - Tom\n\n", "msg_date": "Wed, 28 Jan 1998 16:22:28 +0000", "msg_from": "\"Thomas G. Lockhart\" <lockhart@alumni.caltech.edu>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] No: implied sort with group by" } ]
[ { "msg_contents": "> postgres=> select b,c,sum(a) from t1 group by b,c;\n> b|c|sum\n> -+-+---\n> |x| 5\n> |z| 3\n> |x| 0\n> (3 rows)\n> \n> postgres=> select * from t1;\n> a|b|c\n> -+-+-\n> 1| |x\n> 2| |x\n> 2| |x\n> 3| |z\n> 0| |x\n> (5 rows)\n> \n> I just inserted a single out-of-order row at the end of the table which, since the\n> integer value is zero, should have not affected the result. Sorry I didn't understand\n> the nature of the test case.\n\nThe order of the implicit sort would be arbitrary, but should first sort on\nany fields in a given ORDER BY to help speed things up later in the tree.\n\nWhat are the effects of sorted or partially sorted input data to the sort code?\n\nThe current group/aggregate code seems to just loop over the tuples as they are.\n\nI see two ways to fix the above, one w/minimal code, second w/more work, but\npotentially better speed for large queries.\n\n1. Put a sort node immediately before the group node, taking into account\nany user given ordering. Also make sure the optimizer is aware of this sort\nwhen calculating query costs.\n\n2. Instead of sorting the tuples before grouping, add a hashing system to\nthe group node so that the pre-sorting is not necessary.\n\nHmmm...is this a grouping problem or an aggregate problem? Or both? The first\nquery above should have the data sorted before aggregating, shouldn't it, or I\nam still missing a piece of this puzzle?\n\ndarrenk\n", "msg_date": "Wed, 28 Jan 1998 12:52:18 -0500", "msg_from": "darrenk@insightdist.com (Darren King)", "msg_from_op": true, "msg_subject": "Re: [HACKERS] No: implied sort with group by" }, { "msg_contents": "Darren King wrote:\n> \n\n[examples deleted]\n\n> I see two ways to fix the above, one w/minimal code, second w/more work, but\n> potentially better speed for large queries.\n> \n> 1. Put a sort node immediately before the group node, taking into account\n> any user given ordering. 
Also make sure the optimizer is aware of this sort\n> when calculating query costs.\n> \n> 2. Instead of sorting the tuples before grouping, add a hashing system to\n> the group node so that the pre-sorting is not necessary.\n> \n> Hmmm...is this a grouping problem or an aggregate problem? Or both? The first\n> query above should have the data sorted before aggregating, shouldn't it, or I\n> am still missing a piece of this puzzle?\n> \n> darrenk\n\nThe hash should work. If the hash key is built on the group-by items,\nthen any row with the same entries in these columns will get hashed to\nthe same result row. At this point, it should be fairly easy to\nperform aggregation (test and substitute for min and max, add for\nsum,avg, etc).\n\nOcie\n\n", "msg_date": "Wed, 28 Jan 1998 10:42:54 -0800 (PST)", "msg_from": "ocie@paracel.com", "msg_from_op": false, "msg_subject": "Re: [HACKERS] No: implied sort with group by" }, { "msg_contents": "> > postgres=> select b,c,sum(a) from t1 group by b,c;\n> > b|c|sum\n> > -+-+---\n> > |x| 5\n> > |z| 3\n> > |x| 0\n> > (3 rows)\n> >\n> > postgres=> select * from t1;\n> > a|b|c\n> > -+-+-\n> > 1| |x\n> > 2| |x\n> > 2| |x\n> > 3| |z\n> > 0| |x\n> > (5 rows)\n> >\n> > I just inserted a single out-of-order row at the end of the table which, since the\n> > integer value is zero, should have not affected the result. Sorry I didn't understand\n> > the nature of the test case.\n\n> Hmmm...is this a grouping problem or an aggregate problem? Or both? The first\n> query above should have the data sorted before aggregating, shouldn't it, or I\n> am still missing a piece of this puzzle?\n\nfwiw, I see the same incorrect behavior in v6.2.1p5.\n\n - Tom\n\n", "msg_date": "Thu, 29 Jan 1998 03:14:37 +0000", "msg_from": "\"Thomas G. Lockhart\" <lockhart@alumni.caltech.edu>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] No: implied sort with group by" } ]
[ { "msg_contents": "Here is the long-awaited 6.3 features list. Please send any comments to\nme, or a diff if you want to majorly re-order the items. There are just\ntoo many for me to order meaningfully. I even spellchecked it.\n\nAny items I forgot, please let me know. This is taken from the cvs\nlogs, and it sorted cronologically be category.\n\nAt 150, this is the largest number of items on any release ever. I have\nmarked subselects with a big <NOT DONE YET> to take the heat off of\nVadim.\n\nThis list is so big, maybe we need to distribute it in a PostgreSQL\ntable :-)\n\n---------------------------------------------------------------------------\n\n\nCHANGES IN THE 6.3 RELEASE\n--------------------------\n\nDevelopers who have claimed items are:\n-------------------------------------\n\t* Bruce is Bruce Momjian<maillist@candle.pha.pa.us>\n\t* Bryan is Bryan Henderson<bryanh@giraffe.netgate.net>\n\t* D'Arcy is D'Arcy J.M. Cain is darcy@druid.net\n\t* Dan is Dan McGuirk <mcguirk@indirect.com>\n\t* Darren is Darren King <darrenk@insightdist.com>\n\t* Goran is Goran Thyni is goran@bildbasen.se\n\t* Henry is Henry B. Hotz is hotz@jpl.nasa.gov\n\t* Igor is Igor <igor@sba.miami.edu>\n\t* Jan is Jan Wieck <wieck@sapserv.debis.de>\n\t* Jun is Jun Kuwamura <juk@rccm.co.jp>\n \t* Martin is Martin S. Utesch <utesch@aut.tu-freiberg.de>\n \t* Marc is Marc Fournier <scrappy@hub.org>\n\t* Paul is Paul M. Aoki <aoki@CS.Berkeley.EDU>\n\t* Patrick is Patrick van Kleef <pvk@pobox.com>\n\t* Peter is Peter T Mount <psqlhack@maidast.demon.co.uk>\n\t* Tatsuo is Tatsuo Ishii <t-ishii@sra.co.jp>\n\t* Thomas is Thomas Lockhart <tgl@mythos.jpl.nasa.gov>\n\t* Todd is Todd Brandys is <brandys@eng3.hep.uiuc.edu>\n\t* Vadim is \"Vadim B. 
Mikheev\" <vadim@sable.krasnoyarsk.su>\n\t* Vivek is Vivek Khera <khera@kci.kciLink.com>\n\nBug Fixes\n---------\nFix binary cursors broken by MOVE implementation(Vadim)\nFix for tcl library crash(Jan)\nFix for array handling, from Gerhard Hintermayer\nFix acl error, and remove duplicate pqtrace(Bruce)\nFix \\e for empty file(Bruce)\nFix for textcat on varchar() fields(Bruce)\nFix for DBT Sendproc (Zeugswetter Andres)\nFix vacuum analyze syntax problem(Bruce)\nFix for international identifiers(Tatsuo)\nFix aggregates on inherited tables(Bruce)\nFix substr() for out-of-bounds data\nFix for select 1=1 or 2=2, select 1=1 and 2=2, and select sum(2+2)(Bruce)\nFix notty output to show status result. -q option still turns it off(Bruce)\nFix for count(*), aggs with views and multiple tables and sum(3)(Bruce)\nFix cluster(Bruce)\nFix for PQtrace start/stop several times(Bruce)\nFix a variety of locking problems like newer lock waiters getting\n\tlock before older waiters, and having readlock people not share\n\tlocks if a writer is waiting for a lock, and waiting writers not\n\tgetting priority over waiting readers(Bruce)\nFix crashes in psql when executing queries from external files, \n\tJames Hughes <jamesh@interpath.com>\nFix problem with multiple order by columns, with the first one having\n\tNULL values, Jeroen van Vianen <jeroenv@design.nl>\n\n\nEnhancements\n------------\nAdd SQL92 \"constants\" CURRENT_DATE, CURRENT_TIME, CURRENT_TIMESTAMP, \n\tCURRENT_USER(Thomas)\nAdd syntax for primary and foreign keys(Thomas)\n<NOT DONE YET> Subselects with EXISTS, IN, ALL, ANY keywords (Vadim, Bruce, Thomas)\nSupport SQL92 syntax for IS TRUE/IS FALSE/IS NOT TRUE/IS NOT FALSE(Thomas)\nAdd SQL92 reserved words for primary and foreign keys(Thomas)\nAllow true/false, yes/no, 1/0. 
Throw elog warning if anything else(Thomas)\nAllow shorter strings, so \"t\", \"tr\", \"tru\" and \"true\" match \"true\".\nAdd conversions for int2 and int4, oid to and from text(Thomas)\nUse shared lock when building indices(Vadim)\nFree memory allocated for an user query inside transaction block after\n\tthis query is done, was turned off in <= 6.2.1(Vadim)\nNew CREATE PROCEDURAL LANGUAGE (Jan)\nAdd support for SQL92 delimited identifiers(Thomas)\nAdd support for SQL3 IS TRUE and IS FALSE(Thomas)\nAugment support for SQL92 SET TIME ZONE...(Thomas)\nGenerate error on large integer(Bruce)\nAdd initial backend support for SET/SHOW/RESET TIME ZONE uses TZ\n\tenvironment variable(Thomas)\nSupport SQL92 delimited identifiers by checking some attribute names\n\tfor mixed-case and surrounding with double quotes(Thomas)\nRename pg_dump -H option to -h(Bruce)\nAdd java passwords, European dates,from Peter T Mount\nUse indexes for LIKE and ~, !~ operations(Bruce)\nTime Travel removed(Vadim, Bruce)\nAdd paging for \\d and \\z, and fix \\i(Bruce)\nUpdate of contrib stuff(Massimo)\nAdd Unix domain socket support(Goran)\nSupport alternate database locations(Thomas)\nImplement CREATE DATABASE/WITH LOCATION(Thomas)\nImplement SET keyword = DEFAULT and SET TIME ZONE DEFAULT(Thomas)\nRe-enable JOIN= option in CREATE OPERATOR statement (Thomas)\nAllow more SQL and/or Postgres reserved words as column identifiers(Thomas)\nEnable SET value = DEFAULT by passing null parameter to parsers(Thomas)\nEnable SET TIME ZONE using TZ environment variable(Thomas)\nAdd PGTZ environment variable to initialization code(Thomas)\nAdd PGDATESTYLE environment variable(Thomas)\nRegression tests can now run with \"setenv PGTZ PST8PDT\"(Thomas)\nAdd pg_description table for info on tables, columns, operators, types, and\n\taggregates(Bruce)\nAdd new psql \\da, \\dd, \\df, \\do, \\dS, and \\dT commands(Bruce)\nAdd other initialization environment variables: PGCOSTHEAP, PGCOSTINDEX,\n\tPGRPLANS, 
PGGEQO(Thomas)\nRemove 16 char limit on system table/index names. Rename system indexes(Bruce)\nAdd UNION capability(Bruce)\nSupport SQL92 syntax for type coercion of strings, \"DATETIME 'now' (Thomas)\nImplement SQL92 binary and hexadecimal string decoding (b'10' and x'1F')(Thomas)\nAllow fractional values for delta times (e.g. '2.5 days')(Thomas)\nCheck valid numeric input more carefully for delta times(Thomas)\nImplement day of year as possible input to datetime_part()(Thomas)\nRemove archive stuff(Bruce)\nAdd SQL92-compliant syntax for constraints(Thomas)\nImplement PRIMARY KEY and UNIQUE clauses using indices(Thomas)\nAllow NOT NULL UNIQUE syntax (both were allowed individually before)(Thomas)\nAllow Postgres-style casting (\"::\") of non-constants(Thomas)\nAdd 'GERMAN' option to DateStyle(Thomas)\nSpecify hash table support functions for float8 and int4 rather than using\n\tbtree support functions(Thomas)\nAllow for a pg_password authentication database that is separate from\n\tthe system password file(Todd)\nDump ACLs, GRANT, REVOKE permissions, Matt(maycock@intelliquest.com)\nAllow logging of output to syslog or /tmp/postgres.log(Thomas)\nAllow multiple-argument functions in constraint clauses\nDefine text, varchar, and bpchar string length functions(Thomas)\nEnable timespan_finite() and text_timespan() routines (Thomas)\nDefine an \"ISO-style\" timespan output format with \"hh:mm:ss\" fields(Thomas)\nEnabled by DateStyle = USE_ISO_DATES(Thomas)\nFix Query handling for inheritance, and cost computations(Bruce)\nImplement CREATE TABLE ... 
AS SELECT(Thomas)\nAllow NOT, IS NULL, IS NOT NULL in constraints(Thomas)\nAdd UNIONs(Bruce)\nChange precedence for boolean operators to match expected behavior(Thomas)\nShow NOT NULL and DEFAULT in psql \\d table(Bruce)\nAllow varchar() to only store needed bytes on disk(Bruce)\nAdd UNION, GROUP, DISTINCT to INSERT(Bruce)\nFix for BLOBs(Peter)\nMega-Patch for JDBC...see README_6.3 for list of changes(Peter)\nAllow installation data block size and max tuple size configuration(Darren)\nRemove unused \"option\" from PQconnectdb()\nDBD::Pg can connect to unix sockets(Goran)\nNew PostgreSQL PL interface(Jan)\nNew LOCK command and lock manual page describing deadlocks(Bruce)\nAllow \\z to show sequences(Bruce)\nNew psql .psqlrc file startup(Andrew)\nNew types for IP and MAC addresses in /contrib, Tom I Helbekkmo\n\t<tih@Hamartun.Priv.NO>\nNew python interface (PyGreSQL 2.0)(D'Arcy)\nNew protocol has a version number, network byte order, pg_hba.conf\n\tenhanced and documented, many cleanups, Phil Thompson\n\t<phil@river-bank.demon.co.uk>\nReal deadlock detection, no more timeouts(Bruce)\n\n\nSource Tree Changes\n-------------------\nAdd new html development tools, and flow chart in /tools/backend\nFix for SCO compiles\nStratus computer port \"Gillies, Robert\" <GilliesR@Nichols.com>\nAdded support for shlib for BSD44_derived & i386_solaris\nMake configure more automated, (Brook Milligan)\nAdd script to check regression tests\nBreak parser functions into smaller files, group together(Bruce)\nRename heap_create to heap_create_and_catalog, rename heap_creatr\n\tto heap_create()(Bruce)\nSpark/Linux patch for locking, (Tom Szybist)\nRemove PORTNAME and reorganize port-specific stuff(Marc)\nAdd optimizer README file(Bruce)\nRemove some recursion in optimizer and clean up some code there(Bruce)\nFix for NetBSD locking(Henry)\nFix for libptcl make(Tatsuo)\nAIX patch(Darren)\nChange IS TRUE, IS FALSE, etc. 
to expressions using \"=\" rather than\n\tfunction calls to istrue() or isfalse() to allow optimization(Thomas)\nVarious fixes NetBSD/sparc related, Tom I Helbekkmo <tih@Hamartun.Priv.NO>\nNew alpha linux locking from Travis Melhiser <melhiser@viper.co.union.nc.us>\nMore alphal/linux s_lock changes, Ryan Kirkpatrick <rkirkpat@nag.cs.colorado.edu>\nChange elog(WARN) to elog(ERROR)\nShort little FAQ for FreeBSD(Marc)\nBring in the PostODBC source tree as part of our standard distribution(Marc)\nA minor patch for HP/UX 10 vs 9(Stan)\nNew pg_attribute.atttypmod for type-specific information like varchar\n\tlength(Bruce)\nUnixware patches, Billy G. Allie <Bill.Allie@mug.org>\nNew i386 'lock' for spin lock asm, Billy G. Allie <Bill.Allie@mug.org>\nSupport for multiplexed backends is removed\nStart an OpenBSD port\nStart an AUX port\nStart an Cygnus port\n\n-- \nBruce Momjian\nmaillist@candle.pha.pa.us\n", "msg_date": "Wed, 28 Jan 1998 23:58:15 -0500 (EST)", "msg_from": "Bruce Momjian <maillist@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "6.3 CHANGES list" } ]
[ { "msg_contents": "Now that we know the storage manager code that splits tables over 2GB\ninto separate files doesn't work(Irix), can we rip out that code and\njust use the OS code to access >2GB files as normal files. Now, most\nOS's can support 64-bit files and file sizes.\n\nBecause it is isolated in the storage manager, it should be easy.\n\n-- \nBruce Momjian\nmaillist@candle.pha.pa.us\n", "msg_date": "Thu, 29 Jan 1998 00:32:39 -0500 (EST)", "msg_from": "Bruce Momjian <maillist@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "tables >2GB" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> Now that we know the storage manager code that splits tables over 2GB\n> into separate files doesn't work(Irix), can we rip out that code and\n> just use the OS code to access >2GB files as normal files. Now, most\n> OS's can support 64-bit files and file sizes.\n> \n> Because it is isolated in the storage manager, it should be easy.\n\nSomeday we'll get TABLESPACEs and fixed multi-chunk code could\nallow to store chunks in different TABLESPACEs created on _different\ndisks_ - imho, ability to store a table on > 1 disk is good thing.\n\nAnd so, I would suggest just add elog(ERROR) to mdextend() now,\nwith recommendation to increase RELSEG_SIZE...\n\nVadim\n", "msg_date": "Thu, 29 Jan 1998 14:25:18 +0700", "msg_from": "\"Vadim B. Mikheev\" <vadim@sable.krasnoyarsk.su>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] tables >2GB" }, { "msg_contents": "> \n> Now that we know the storage manager code that splits tables over 2GB\n> into separate files doesn't work(Irix), can we rip out that code and\n> just use the OS code to access >2GB files as normal files. Now, most\n> OS's can support 64-bit files and file sizes.\n> \n> Because it is isolated in the storage manager, it should be easy.\n\nCan someone knowledgeable make a patch for this for our mega-patch?\n\n-- \nBruce Momjian | 830 Blythe Avenue\nmaillist@candle.pha.pa.us | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Thu, 19 Mar 1998 11:19:10 -0500 (EST)", "msg_from": "Bruce Momjian <maillist@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] tables >2GB" }, { "msg_contents": "On Thu, 19 Mar 1998, Bruce Momjian wrote:\n\n> > \n> > Now that we know the storage manager code that splits tables over 2GB\n> > into separate files doesn't work(Irix), can we rip out that code and\n> > just use the OS code to access >2GB files as normal files. Now, most\n> > OS's can support 64-bit files and file sizes.\n> > \n> > Because it is isolated in the storage manager, it should be easy.\n> \n> Can someone knowledgeable make a patch for this for our mega-patch?\n\n\t*Only* if its in before 9am AST (or is it ADT?) on Friday\nmorning...:)\n\n", "msg_date": "Thu, 19 Mar 1998 12:05:47 -0500 (EST)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] tables >2GB" }, { "msg_contents": "> > Now that we know the storage manager code that splits tables over 2GB\n> > into separate files doesn't work(Irix), can we rip out that code and\n> > just use the OS code to access >2GB files as normal files. Now, most\n> > OS's can support 64-bit files and file sizes.\n> > \n> > Because it is isolated in the storage manager, it should be easy.\n> \n> Can someone knowledgeable make a patch for this for our mega-patch?\n> \n\nThere are still quite a few OS's out there that do not support >2GB files\nyet. Even my beloved Linux (x86)...\n\nSo, how about we fix the storage manager instead?\n\nA neat thing that Illustra does is allow you to stripe a table across\nmultiple directories. You get big tables, easy storage management, and\na nice performance boost.\n\ncreate stripedir('stripe1', '/disk1/data/stripe1');\ncreate stripedir('stripe2', '/disk2/data/stripe2');\n\ncreate table giant_table (...) with (stripes 4, 'stripe1', 'stripe2');\n -- the '4' is the number of pages to interleave.\n\nThen the smgr just distributes the blocks alternately across the stripes.\n\nread_block(blockno, ...stripeinfo)\n{\n ...\n stripe = (blockno / stripe_interleave ) % number_of_stripes;\n stripe_block = blockno / number_of_stripes;\n\n fd = stripe_info->fd[stripe];\n lseek(fd, stripe_block * BLOCKSIZE, SEEK_SET);\n ...\n}\n\nAll vastly oversimplified of course....\n\n-dg\n\nDavid Gould dg@illustra.com 510.628.3783 or 510.305.9468 \nInformix Software (No, really) 300 Lakeside Drive Oakland, CA 94612\n - I realize now that irony has no place in business communications.\n\n \n \n", "msg_date": "Thu, 19 Mar 1998 12:09:43 -0800 (PST)", "msg_from": "dg@illustra.com (David Gould)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] tables >2GB" }, { "msg_contents": "\nDoes this act as a notice of volunteering to work on this aspect of the\ncode? *Grin*\n\n\nOn Thu, 19 Mar 1998, David Gould wrote:\n\n> > > Now that we know the storage manager code that splits tables over 2GB\n> > > into separate files doesn't work(Irix), can we rip out that code and\n> > > just use the OS code to access >2GB files as normal files. Now, most\n> > > OS's can support 64-bit files and file sizes.\n> > > \n> > > Because it is isolated in the storage manager, it should be easy.\n> > \n> > Can someone knowledgeable make a patch for this for our mega-patch?\n> > \n> \n> There are still quite a few OS's out there that do not support >2GB files\n> yet. Even my beloved Linux (x86)...\n> \n> So, how about we fix the storage manager instead?\n> \n> A neat thing that Illustra does is allow you to stripe a table across\n> multiple directories. You get big tables, easy storage management, and\n> a nice performance boost.\n> \n> create stripedir('stripe1', '/disk1/data/stripe1');\n> create stripedir('stripe2', '/disk2/data/stripe2');\n> \n> create table giant_table (...) with (stripes 4, 'stripe1', 'stripe2');\n> -- the '4' is the number of pages to interleave.\n> \n> Then the smgr just distributes the blocks alternately across the stripes.\n> \n> read_block(blockno, ...stripeinfo)\n> {\n> ...\n> stripe = (blockno / stripe_interleave ) % number_of_stripes;\n> stripe_block = blockno / number_of_stripes;\n> \n> fd = stripe_info->fd[stripe];\n> lseek(fd, stripe_block * BLOCKSIZE, SEEK_SET);\n> ...\n> }\n> \n> All vastly oversimplified of course....\n> \n> -dg\n> \n> David Gould dg@illustra.com 510.628.3783 or 510.305.9468 \n> Informix Software (No, really) 300 Lakeside Drive Oakland, CA 94612\n> - I realize now that irony has no place in business communications.\n> \n> \n> \n> \n\n", "msg_date": "Thu, 19 Mar 1998 15:14:24 -0500 (EST)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] tables >2GB" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> >\n> > Now that we know the storage manager code that splits tables over 2GB\n> > into separate files doesn't work(Irix), can we rip out that code and\n> > just use the OS code to access >2GB files as normal files. Now, most\n> > OS's can support 64-bit files and file sizes.\n> >\n> > Because it is isolated in the storage manager, it should be easy.\n> \n> Can someone knowledgeable make a patch for this for our mega-patch?\n\nBut, could it not be useful to be able to use multiple files per\ntable? Suppose someone wants to spread them out on different\ndisks to increase access performance?\n\nAnd what about tables over 2^64 bytes size? There will never be\ndisks of that size? Now, remember what people said about 2^32 byte\nfiles, and years after 1999, and 64k RAM, and about all inventions\nalready being invented, and... :)\n\n/* m */\n", "msg_date": "Fri, 20 Mar 1998 12:59:06 +0100", "msg_from": "Mattias Kregert <matti@algonet.se>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] tables >2GB" } ]
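David Gould's `read_block()` fragment in the thread above reduces striping to a couple of divisions and a modulo. As a standalone sketch of that arithmetic (function names invented here, not Illustra or PostgreSQL smgr code; his simplified `stripe_block = blockno / number_of_stripes` line corresponds to the interleave-1 case), the general interleaved mapping could look like:

```c
#include <assert.h>

/* Hypothetical helpers illustrating interleaved striping: `interleave`
 * consecutive blocks land on one stripe before moving to the next. */
int stripe_of(long blockno, int interleave, int nstripes)
{
    return (int) ((blockno / interleave) % nstripes);
}

long block_within_stripe(long blockno, int interleave, int nstripes)
{
    long chunk = blockno / interleave;      /* which interleave-sized run */
    return (chunk / nstripes) * interleave  /* runs already laid on this stripe */
         + blockno % interleave;            /* offset inside the current run */
}

/* With interleave 4 and 2 stripes: blocks 0-3 go to stripe 0 at
 * offsets 0-3, blocks 4-7 to stripe 1, blocks 8-11 back to stripe 0
 * at offsets 4-7, and so on. */
```

The resulting `(stripe, block_within_stripe)` pair is what the `fd[stripe]` lookup and `lseek()` in the quoted sketch would consume.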
[ { "msg_contents": " unsubscribe\n\n\n", "msg_date": "Thu, 29 Jan 1998 10:55:45 +0200", "msg_from": "\"Alexandr V. Goncharuk\" <sasha@eurocom.odtel.net>", "msg_from_op": true, "msg_subject": "(no subject)" } ]
[ { "msg_contents": "I think it would be nice to add time stamps to logging information.\n\nIgor Sysoev\n", "msg_date": "Thu, 29 Jan 1998 12:51:10 +0300", "msg_from": "\"Igor Sysoev\" <igor@nitek.ru>", "msg_from_op": true, "msg_subject": "time stamps in logging" }, { "msg_contents": "> \n> I think it would be nice to add time stamps to logging information.\n> \n> Igor Sysoev\n> \n> \n\nTry adding ELOG_TIMESTAMPS define to backend/utils/error/elog.c.\n\n-- \nBruce Momjian\nmaillist@candle.pha.pa.us\n", "msg_date": "Thu, 29 Jan 1998 08:33:37 -0500 (EST)", "msg_from": "Bruce Momjian <maillist@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] time stamps in logging" } ]
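For illustration only: the actual switch is the `ELOG_TIMESTAMPS` define in `backend/utils/error/elog.c` mentioned above. This standalone sketch (hypothetical function name, not the elog.c code) just shows the kind of prefix a timestamped log line gains; it takes the time as a parameter so the output is deterministic, where the real logger would use the current time:

```c
#include <stdio.h>
#include <string.h>
#include <time.h>

/* Hypothetical sketch: format "YYYYMMDD.HH:MM:SS message" into buf. */
void format_log_line(char *buf, size_t buflen, time_t t, const char *msg)
{
    char stamp[32];
    strftime(stamp, sizeof stamp, "%Y%m%d.%H:%M:%S", gmtime(&t));
    snprintf(buf, buflen, "%s %s", stamp, msg);
}
```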
[ { "msg_contents": "Ocie wrote:\n>> 2. Instead of sorting the tuples before grouping, add a hashing\nsystem to\n>> the group node so that the pre-sorting is not necessary.\n>The hash should work. If the hash key is built on the group-by items,\n>then any row with the same entries in these columns will get hashed to\n>the same result row. At this point, it should be fairly easy to\n>perform aggregation (test and substitute for min and max, add for\n>sum,avg, etc).\n\nHave been thinking about that too. Is each list in the current hash\nimplementation sorted ? \nCause else how do you know, that a certain value has not already been\nprocessed ?\nAnswer: keep a list of already processed groups in memory. Initialize it\nfor each new hash list.\n\nAndreas\n", "msg_date": "Thu, 29 Jan 1998 11:23:15 +0100", "msg_from": "Zeugswetter Andreas DBT <Andreas.Zeugswetter@telecom.at>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] No: implied sort with group by" } ]
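The accumulate-in-place idea in the message above (hash each row to its group's slot, then add for sum, test-and-substitute for min/max) can be made concrete. A toy sketch in C, assuming integer group keys and a fixed-size open-addressing table (all names invented here, nothing from the backend), showing why no pre-sort is needed:

```c
#include <assert.h>
#include <string.h>

#define NSLOTS 64  /* toy fixed-size table, linear probing */

typedef struct { int used; int key; long sum; } GroupSlot;

/* Aggregate sum(vals) grouped by keys into `table` in one pass. */
void hash_group_sum(const int *keys, const int *vals, int n, GroupSlot *table)
{
    memset(table, 0, NSLOTS * sizeof(GroupSlot));
    for (int i = 0; i < n; i++) {
        unsigned h = (unsigned) keys[i] % NSLOTS;
        while (table[h].used && table[h].key != keys[i])
            h = (h + 1) % NSLOTS;   /* probe past collisions */
        table[h].used = 1;
        table[h].key = keys[i];
        table[h].sum += vals[i];    /* accumulate; input order is irrelevant */
    }
}

/* Look up one group's accumulated sum (0 if the key never appeared). */
long group_sum(const GroupSlot *table, int key)
{
    unsigned h = (unsigned) key % NSLOTS;
    while (table[h].used) {
        if (table[h].key == key)
            return table[h].sum;
        h = (h + 1) % NSLOTS;
    }
    return 0;
}
```

Because each row updates its group's slot directly, a row arriving out of order (like the stray `0` row in the neighboring thread's test case) simply folds into the existing group instead of starting a new one.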
[ { "msg_contents": "> > > postgres=> select b,c,sum(a) from t1 group by b,c;\n> > > b|c|sum\n> > > -+-+---\n> > > |x| 5\n> > > |z| 3\n> > > |x| 0\n> > > (3 rows)\n> > >\n> > > postgres=> select * from t1;\n> > > a|b|c\n> > > -+-+-\n> > > 1| |x\n> > > 2| |x\n> > > 2| |x\n> > > 3| |z\n> > > 0| |x\n> > > (5 rows)\n> > >\n> > > I just inserted a single out-of-order row at the end of the table which, since the\n> > > integer value is zero, should have not affected the result. Sorry I didn't understand\n> > > the nature of the test case.\n> \n> > Hmmm...is this a grouping problem or an aggregate problem? Or both? The first\n> > query above should have the data sorted before aggregating, shouldn't it, or I\n> > am still missing a piece of this puzzle?\n> \n> fwiw, I see the same incorrect behavior in v6.2.1p5.\n> \n> - Tom\n> \n> \nAnd in v6.1. If b is a space (rather than a NULL), then the behaviour is correct\nso it must be a problem in grouping NULLs.\n\n\nAndrew\n\n\n----------------------------------------------------------------------------\nDr. Andrew C.R. Martin University College London\nEMAIL: (Work) martin@biochem.ucl.ac.uk (Home) andrew@stagleys.demon.co.uk\nURL: http://www.biochem.ucl.ac.uk/~martin\nTel: (Work) +44(0)171 419 3890 (Home) +44(0)1372 275775\n", "msg_date": "Thu, 29 Jan 1998 10:43:34 GMT", "msg_from": "Andrew Martin <martin@biochemistry.ucl.ac.uk>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] No: implied sort with group by" } ]
[ { "msg_contents": "> > I would opt for /var/run to store the pid files and have the name set to\n> \n> \t That would assume that postmaster runs as root, which is not\n> allowed...has to be in /tmp somewhere\n> \n> Maybe both should be under /usr/local/pgsql\nI assume you mean the root of the installation rather than specifically\n/usr/local/pgsql.\n\n> somewhere, so they will not be removed by any \n> '/tmp'-clean-up-scripts.\n> \nIn $PGDATA would seem as good as anywhere (maybe $PGDATA/.run or some such)\n\n/usr/local is mounted r/o on my system - $PGDATA lives elsewhere and is\nwritable.\n\nAndrew\n----------------------------------------------------------------------------\nDr. Andrew C.R. Martin University College London\nEMAIL: (Work) martin@biochem.ucl.ac.uk (Home) andrew@stagleys.demon.co.uk\nURL: http://www.biochem.ucl.ac.uk/~martin\nTel: (Work) +44(0)171 419 3890 (Home) +44(0)1372 275775\n", "msg_date": "Thu, 29 Jan 1998 10:49:50 GMT", "msg_from": "Andrew Martin <martin@biochemistry.ucl.ac.uk>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] postmaster crash and .s.pgsql file" }, { "msg_contents": "On Thu, 29 Jan 1998, Andrew Martin wrote:\n\n> > > I would opt for /var/run to store the pid files and have the name set to\n> > \n> > \t That would assume that postmaster runs as root, which is not\n> > allowed...has to be in /tmp somewhere\n> > \n> > Maybe both should be under /usr/local/pgsql\n> I assume you mean the root of the installation rather than specifically\n> /usr/local/pgsql.\n> \n> > somewhere, so they will not be removed by any \n> > '/tmp'-clean-up-scripts.\n> > \n> In $PGDATA would seem as good as anywhere (maybe $PGDATA/.run or some such)\n> \n> /usr/local is mounted r/o on my system - $PGDATA lives elsewhere and is\n> writable.\n\n\t$PGDATA is created 700...general users need to be able to read the\ndirectory in order to connect to the socket, so we'd have to lax up\nsecurity in order to accomplish this...\n\n\n", "msg_date": "Thu, 29 Jan 1998 08:05:36 -0500 (EST)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] postmaster crash and .s.pgsql file" }, { "msg_contents": "\n > > somewhere, so they will not be removed by any \n > > '/tmp'-clean-up-scripts.\n\nI am convinced.\nWe better leave the socket in /tmp .\nAny tmpwatch-scripts should not touch dot-files anyway.\n\n<EXPLAIN>\nA tmpwatch-script runs under cron and cleans out old file\nfrom /tmp.\ntmpwatch command (at least under RH5-Linux) does not touch \nspecial-files unless -a is specified.\n</EXPLAIN>\n\n\tterveiset,\n-- \n---------------------------------------------\nGöran Thyni, sysadm, JMS Bildbasen, Kiruna\n\n", "msg_date": "29 Jan 1998 16:31:15 -0000", "msg_from": "Goran Thyni <goran@bildbasen.se>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] postmaster crash and .s.pgsql file" } ]
[ { "msg_contents": "> > > > postgres=> select b,c,sum(a) from t1 group by b,c;\n> > > > b|c|sum\n> > > > -+-+---\n> > > > |x| 5\n> > > > |z| 3\n> > > > |x| 0\n> > > > (3 rows)\n> > > >\n> > > > postgres=> select * from t1;\n> > > > a|b|c\n> > > > -+-+-\n> > > > 1| |x\n> > > > 2| |x\n> > > > 2| |x\n> > > > 3| |z\n> > > > 0| |x\n> > > > (5 rows)\n> > > >\n> > > ...\n> > \n> And in v6.1. If b is a space (rather than a NULL), then the behaviour is correct\n> so it must be a problem in grouping NULLs.\n> \n\nexplain select b,c,sum(a) from foo group by b,c; -- gives...\n\nAggregate (cost=0.00 size=0 width=0)\n -> Group (cost=0.00 size=0 width=0)\n -> Sort (cost=0.00 size=0 width=0)\n -> Seq Scan on foo (cost=0.00 size=0 width=28)\n\nThere sort is there before the grouping operation, so this would seem to point to\nthe sort code incorrectly setting something when handling NULLs.\n\nThis doesn't seem like the same bug that Vadim found since a small data set such as\nthis one _shouldn't_ be going out to a tape file.\n\ndarrenk\n", "msg_date": "Thu, 29 Jan 1998 08:37:53 -0500", "msg_from": "darrenk@insightdist.com (Darren King)", "msg_from_op": true, "msg_subject": "Re: [HACKERS] No: implied sort with group by" }, { "msg_contents": "> > And in v6.1. If b is a space (rather than a NULL), then the behaviour is correct\n> > so it must be a problem in grouping NULLs.\n> > \n> \n> explain select b,c,sum(a) from foo group by b,c; -- gives...\n> \n> Aggregate (cost=0.00 size=0 width=0)\n> -> Group (cost=0.00 size=0 width=0)\n> -> Sort (cost=0.00 size=0 width=0)\n> -> Seq Scan on foo (cost=0.00 size=0 width=28)\n> \n> There sort is there before the grouping operation, so this would seem to point to\n> the sort code incorrectly setting something when handling NULLs.\n> \n> This doesn't seem like the same bug that Vadim found since a small data set such as\n> this one _shouldn't_ be going out to a tape file.\n\nWe have a NULL sort patch for psort in 6.3. Are you running the most\nrecent sources?\n\n-- \nBruce Momjian\nmaillist@candle.pha.pa.us\n", "msg_date": "Thu, 29 Jan 1998 09:51:04 -0500 (EST)", "msg_from": "Bruce Momjian <maillist@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] No: implied sort with group by" } ]
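The diagnosis above points at the sort feeding the Group node: grouping only works if the comparator gives NULLs one consistent position, so rows in the same group come out adjacent. A minimal illustration of that property (invented struct and function, not the psort.c patch itself), with NULLs comparing equal to each other and sorting last:

```c
#include <assert.h>
#include <stdlib.h>

/* Toy datum: a value plus a null flag, standing in for a sort key. */
typedef struct { int isnull; int val; } Datum1;

/* qsort comparator: NULL == NULL (so NULL groups stay together),
 * NULLs after all non-NULL values. */
int cmp_nulls_last(const void *a, const void *b)
{
    const Datum1 *x = a, *y = b;
    if (x->isnull && y->isnull) return 0;
    if (x->isnull) return 1;
    if (y->isnull) return -1;
    return (x->val > y->val) - (x->val < y->val);
}
```

If instead the comparator treated a NULL inconsistently, equal NULL groups would not end up adjacent, and a downstream Group node would emit the same group more than once, which matches the duplicated `|x|` rows in the thread's example output.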
[ { "msg_contents": "> On Thu, 29 Jan 1998, Andrew Martin wrote:\n> \n> > > > I would opt for /var/run to store the pid files and have the name set to\n> > > \n> > > \t That would assume that postmaster runs as root, which is not\n> > > allowed...has to be in /tmp somewhere\n> > > \n> > > Maybe both should be under /usr/local/pgsql\n> > I assume you mean the root of the installation rather than specifically\n> > /usr/local/pgsql.\n> > \n> > > somewhere, so they will not be removed by any \n> > > '/tmp'-clean-up-scripts.\n> > > \n> > In $PGDATA would seem as good as anywhere (maybe $PGDATA/.run or some such)\n> > \n> > /usr/local is mounted r/o on my system - $PGDATA lives elsewhere and is\n> > writable.\n> \n> \t$PGDATA is created 700...general users need to be able to read the\n> directory in order to connect to the socket, so we'd have to lax up\n> security in order to accomplish this...\n> \nOK, no problem, a subdirectory of $PGDATA which has world read permission\n\nAndrew\n\n----------------------------------------------------------------------------\nDr. Andrew C.R. Martin University College London\nEMAIL: (Work) martin@biochem.ucl.ac.uk (Home) andrew@stagleys.demon.co.uk\nURL: http://www.biochem.ucl.ac.uk/~martin\nTel: (Work) +44(0)171 419 3890 (Home) +44(0)1372 275775\n", "msg_date": "Thu, 29 Jan 1998 14:29:46 GMT", "msg_from": "Andrew Martin <martin@biochemistry.ucl.ac.uk>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] postmaster crash and .s.pgsql file" }, { "msg_contents": "On Thu, 29 Jan 1998, Andrew Martin wrote:\n\n> > On Thu, 29 Jan 1998, Andrew Martin wrote:\n> > \n> > > > > I would opt for /var/run to store the pid files and have the name set to\n> > > > \n> > > > \t That would assume that postmaster runs as root, which is not\n> > > > allowed...has to be in /tmp somewhere\n> > > > \n> > > > Maybe both should be under /usr/local/pgsql\n> > > I assume you mean the root of the installation rather than specifically\n> > > /usr/local/pgsql.\n> > > \n> > > > somewhere, so they will not be removed by any \n> > > > '/tmp'-clean-up-scripts.\n> > > > \n> > > In $PGDATA would seem as good as anywhere (maybe $PGDATA/.run or some such)\n> > > \n> > > /usr/local is mounted r/o on my system - $PGDATA lives elsewhere and is\n> > > writable.\n> > \n> > \t$PGDATA is created 700...general users need to be able to read the\n> > directory in order to connect to the socket, so we'd have to lax up\n> > security in order to accomplish this...\n> > \n> OK, no problem, a subdirectory of $PGDATA which has world read permission\n\n\tYou'd have to relax the 700 permissions on $PGDATA to get at\nanything under that directory, even if the subdirectory under it had 777\naccess to it...\n\n\n\tAnd, it also makes the assumption that you'll only ever have 1\npostmaster process running on a machine, or else you are now having to set\nthe PGDATA environment variable depending on which database you want to\nconnect to...:(\n\n", "msg_date": "Thu, 29 Jan 1998 09:52:27 -0500 (EST)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] postmaster crash and .s.pgsql file" }, { "msg_contents": "\n\nOn Thu, 29 Jan 1998, Andrew Martin wrote:\n\n: > On Thu, 29 Jan 1998, Andrew Martin wrote:\n: > \n: > > > > I would opt for /var/run to store the pid files and have the name set to\n: > > > \n: > > > \t That would assume that postmaster runs as root, which is not\n: > > > allowed...has to be in /tmp somewhere\n: > > > \n: > > > Maybe both should be under /usr/local/pgsql\n: > > I assume you mean the root of the installation rather than specifically\n: > > /usr/local/pgsql.\n: > > \n: > > > somewhere, so they will not be removed by any \n: > > > '/tmp'-clean-up-scripts.\n: > > > \n: > > In $PGDATA would seem as good as anywhere (maybe $PGDATA/.run or some such)\n: > > \n: > > /usr/local is mounted r/o on my system - $PGDATA lives elsewhere and is\n: > > writable.\n: > \n: > \t$PGDATA is created 700...general users need to be able to read the\n: > directory in order to connect to the socket, so we'd have to lax up\n: > security in order to accomplish this...\n: > \n: OK, no problem, a subdirectory of $PGDATA which has world read permission\n: \n\nI think $PGDATA would be a problem. I did an initdb to my directory and\nran a postmaster on port 5440 with $PGDATA pointing to my home\ndirectory. This postmaster never saw the main postmaster's $PGDATA\ndirectory. All postmasters should use the same location to get\ninformation about processes running. It wouldn't matter where, as long\nas the location is flexible and easily configured.\n\nAll this raises other questions for me. I believe Goran was right about\nthe can of worms :)\n\n\n\n-James\n\n", "msg_date": "Thu, 29 Jan 1998 09:57:01 -0500 (EST)", "msg_from": "James Hughes <jamesh@interpath.com>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] postmaster crash and .s.pgsql file" } ]
[ { "msg_contents": "unsubscribe\n\n", "msg_date": "Thu, 29 Jan 1998 16:57:52 +0200", "msg_from": "\"Alexander V. Goncharuk\" <sasha@EuroCom.OdTel.Net>", "msg_from_op": true, "msg_subject": "(no subject)" } ]
[ { "msg_contents": "\nCompile of this mornings snapshot fails in parse_node.h.\n\nThe typedef of ParseState is missing the variable name\nfor the first member...only has \"struct ParseState;\".\n\nAny quick fix?\n\ndarrenk\n", "msg_date": "Thu, 29 Jan 1998 10:38:05 -0500", "msg_from": "darrenk@insightdist.com (Darren King)", "msg_from_op": true, "msg_subject": "Error in nodes/parse_node.h" }, { "msg_contents": "> \n> \n> Compile of this mornings snapshot fails in parse_node.h.\n> \n> The typedef of ParseState is missing the variable name\n> for the first member...only has \"struct ParseState;\".\n> \n> Any quick fix?\n> \n\nJust remove the line. I thought I had removed it, but obviously not. I\nwill apply a patch. It was put in for subselects at one point, but I\nchanged my mind, but forgot to remove it.\n\n-- \nBruce Momjian\nmaillist@candle.pha.pa.us\n", "msg_date": "Thu, 29 Jan 1998 14:04:32 -0500 (EST)", "msg_from": "Bruce Momjian <maillist@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Error in nodes/parse_node.h" } ]
[ { "msg_contents": "> > > > In $PGDATA would seem as good as anywhere (maybe $PGDATA/.run or some such)\n> > > > \n> > > > /usr/local is mounted r/o on my system - $PGDATA lives elsewhere and is\n> > > > writable.\n> > > \n> > > \t$PGDATA is created 700...general users need to be able to read the\n> > > directory in order to connect to the socket, so we'd have to lax up\n> > > security in order to accomplish this...\n> > > \n> > OK, no problem, a subdirectory of $PGDATA which has world read permission\n> \n> \tYou'd have to relax the 700 permissions on $PGDATA to get at\n> anything under that directory, even if the subdirectory under it had 777\n> access to it...\nDuh... Wasn't thinking straight - it's been a bad week :-)\nYour right, you'd have to allow read access to the directory but not\nexecute.\n\n> \n> \n> \tAnd, it also makes the assumption that you'll only ever have 1\n> postmaster process running on a machine, or else you are now having to set\n> the PGDATA environment variable depending on which database you want to\n> connect to...:(\nAgreed, I was really just making the point that I didn't want anything\nto be written to the root directory of the pgsql installation...\n\n> \n\nAndrew\n\n----------------------------------------------------------------------------\nDr. Andrew C.R. Martin University College London\nEMAIL: (Work) martin@biochem.ucl.ac.uk (Home) andrew@stagleys.demon.co.uk\nURL: http://www.biochem.ucl.ac.uk/~martin\nTel: (Work) +44(0)171 419 3890 (Home) +44(0)1372 275775\n", "msg_date": "Thu, 29 Jan 1998 20:19:38 GMT", "msg_from": "Andrew Martin <martin@biochemistry.ucl.ac.uk>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] postmaster crash and .s.pgsql file" } ]
[ { "msg_contents": "> > I would opt for /var/run to store the pid files and have the name set to\n> \n> \t That would assume that postmaster runs as root, which is not\n> allowed...has to be in /tmp somewhere\n> \n> No. Make /var/run writable by some group (e.g., group pidlog) and put\n> postgres (and other things like root or daemon or ..., whatever needs\n> to log pid files) in that group.\n> \n> /var/run really is where a pid file should be. I submitted a patch\n> that would do this some time ago. I'll resend it if there is\n> interest.\n> \n> Cheers,\n> Brook\n> \nBAD idea as it means one needs to have root access to install PGSQL.\nThis has always been a strong point that an ordinary user could\ninstall it and play before having to convince system managers\nthat it should be installed globally.\n\nAndrew\n\n----------------------------------------------------------------------------\nDr. Andrew C.R. Martin University College London\nEMAIL: (Work) martin@biochem.ucl.ac.uk (Home) andrew@stagleys.demon.co.uk\nURL: http://www.biochem.ucl.ac.uk/~martin\nTel: (Work) +44(0)171 419 3890 (Home) +44(0)1372 275775\n", "msg_date": "Thu, 29 Jan 1998 20:22:57 GMT", "msg_from": "Andrew Martin <martin@biochemistry.ucl.ac.uk>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] postmaster crash and .s.pgsql file" } ]
[ { "msg_contents": "I am working on a patch to inline fastgetattr(), fastgetiattr(), and\ngetsysattr() calls. This will allow full inline access to the raw tuple\ndata without any function call overhead, which some one showed can read\n1 million calls in very trivial queries.\n\nMy previous inlining attempts inlined most of this, but these functions\nwhere the last part. I will have a patch tomorrow. I have also gotten\na much cleaner format for the macros.\n\n-- \nBruce Momjian\nmaillist@candle.pha.pa.us\n", "msg_date": "Fri, 30 Jan 1998 00:59:17 -0500 (EST)", "msg_from": "Bruce Momjian <maillist@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "inlining fastgetattr()" } ]
[ { "msg_contents": "Let me add, I am not inlining all the functions, but only the top part\nof them that deals with cachoffsets and nulls. These are the easy ones,\nand the ones that get used most often.\n-- \nBruce Momjian\nmaillist@candle.pha.pa.us\n", "msg_date": "Fri, 30 Jan 1998 01:01:41 -0500 (EST)", "msg_from": "Bruce Momjian <maillist@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "inlining" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> Let me add, I am not inlining all the functions, but only the top part\n> of them that deals with cachoffsets and nulls. These are the easy ones,\n> and the ones that get used most often.\n\nfastgetattr() is called from a HUNDREDS places - I'm not sure that\nthis is good idea.\n\nI suggest to inline _entire_ body of this func in the \nexecQual.c:ExecEvalVar() - Executor uses _only_ ExecEvalVar() to get\ndata from tuples.\n\n(We could #define FASTGETATTR macro and re-write fastgetattr() as just\nthis macro \"call\".)\n\nI don't know should we follow the same way for fastgetiattr() or not...\n\nVadim\n", "msg_date": "Fri, 30 Jan 1998 13:53:44 +0700", "msg_from": "\"Vadim B. Mikheev\" <vadim@sable.krasnoyarsk.su>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] inlining" }, { "msg_contents": "Sorry - this is with valid charset...\n\nVadim B. Mikheev wrote:\n> \n> Bruce Momjian wrote:\n> >\n> > Let me add, I am not inlining all the functions, but only the top part\n> > of them that deals with cachoffsets and nulls. These are the easy ones,\n> > and the ones that get used most often.\n> \n> fastgetattr() is called from a HUNDREDS places - I'm not sure that\n> this is good idea.\n> \n> I suggest to inline _entire_ body of this func in the\n> execQual.c:ExecEvalVar() - Executor uses _only_ ExecEvalVar() to get\n> data from tuples.\n> \n> (We could #define FASTGETATTR macro and re-write fastgetattr() as just\n> this macro \"call\".)\n> \n> I don't know should we follow the same way for fastgetiattr() or not...\n> \n> Vadim\n", "msg_date": "Fri, 30 Jan 1998 13:57:59 +0700", "msg_from": "\"Vadim B. Mikheev\" <vadim@sable.krasnoyarsk.su>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] inlining" }, { "msg_contents": "> \n> Bruce Momjian wrote:\n> > \n> > Let me add, I am not inlining all the functions, but only the top part\n> > of them that deals with cachoffsets and nulls. These are the easy ones,\n> > and the ones that get used most often.\n> \n> fastgetattr() is called from a HUNDREDS places - I'm not sure that\n> this is good idea.\n\nHere is the fastgetattr macro. Again, I just inlined the cacheoffset\nand null handlling at the top. Doesn't look like much code, though the\n?: macro format makes it look larger. What do you think?\n\nI did the same with fastgetiattr, which in fact just called\nindex_getattr, so that is gone now. For getsysattr, I made an array of\noffsetof(), and do a lookup into the array from heap_getattr, so that is\ngone too.\n\n---------------------------------------------------------------------------\n\n#define fastgetattr(tup, attnum, tupleDesc, isnull) \\n( \\n\tAssertMacro((attnum) > 0) ? \\n\t( \\n\t\t((isnull) ? (*(isnull) = false) : (dummyret)NULL), \\n\t\tHeapTupleNoNulls(tup) ? \\n\t\t( \\n\t\t\t((tupleDesc)->attrs[(attnum)-1]->attcacheoff > 0) ? \\n\t\t\t( \\n\t\t\t\t(Datum)fetchatt(&((tupleDesc)->attrs[(attnum)-1]), \\n\t\t\t \t (char *) (tup) + (tup)->t_hoff + (tupleDesc)->attrs[(attnum)-1]->attcacheoff) \\n\t\t\t) \\n\t\t\t: \\n\t\t\t( \\n\t\t\t\t((attnum)-1 > 0) ? \\n\t\t\t\t( \\n\t\t\t\t\t(Datum)fetchatt(&((tupleDesc)->attrs[0]), (char *) (tup) + (tup)->t_hoff) \\n\t\t\t\t) \\n\t\t\t\t: \\n\t\t\t\t( \\n\t\t\t\t\tnocachegetattr((tup), (attnum), (tupleDesc), (isnull)) \\n\t\t\t\t) \\n\t\t\t) \\n\t\t) \\n\t\t: \\n\t\t( \\n\t\t\tatt_isnull((attnum)-1, (tup)->t_bits) ? \\n\t\t\t( \\n\t\t\t\t((isnull) ? (*(isnull) = true) : (dummyret)NULL), \\n\t\t\t\t(Datum)NULL \\n\t\t\t) \\n\t\t\t: \\n\t\t\t( \\n\t\t\t\tnocachegetattr((tup), (attnum), (tupleDesc), (isnull)) \\n\t\t\t) \\n\t\t) \\n\t) \\n\t: \\n\t( \\n\t\t (Datum)NULL \\n\t) \\n)\n> \n> I suggest to inline _entire_ body of this func in the \n> execQual.c:ExecEvalVar() - Executor uses _only_ ExecEvalVar() to get\n> data from tuples.\n> \n> (We could #define FASTGETATTR macro and re-write fastgetattr() as just\n> this macro \"call\".)\n> \n> I don't know should we follow the same way for fastgetiattr() or not...\n> \n> Vadim\n> \n\n\n-- \nBruce Momjian\nmaillist@candle.pha.pa.us\n", "msg_date": "Fri, 30 Jan 1998 10:59:15 -0500 (EST)", "msg_from": "Bruce Momjian <maillist@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] inlining" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> >\n> > Bruce Momjian wrote:\n> > >\n> > > Let me add, I am not inlining all the functions, but only the top part\n> > > of them that deals with cachoffsets and nulls. These are the easy ones,\n> > > and the ones that get used most often.\n> >\n> > fastgetattr() is called from a HUNDREDS places - I'm not sure that\n> > this is good idea.\n> \n> Here is the fastgetattr macro. Again, I just inlined the cacheoffset\n> and null handlling at the top. Doesn't look like much code, though the\n> ?: macro format makes it look larger. What do you think?\n\nTry to gmake clean and gmake... Please compare old/new sizes for\ndebug version too.\n\nVadim\n", "msg_date": "Fri, 30 Jan 1998 23:19:59 +0700", "msg_from": "\"Vadim B. Mikheev\" <vadim@sable.krasnoyarsk.su>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] inlining" }, { "msg_contents": "> \n> Bruce Momjian wrote:\n> > \n> > >\n> > > Bruce Momjian wrote:\n> > > >\n> > > > Let me add, I am not inlining all the functions, but only the top part\n> > > > of them that deals with cachoffsets and nulls. These are the easy ones,\n> > > > and the ones that get used most often.\n> > >\n> > > fastgetattr() is called from a HUNDREDS places - I'm not sure that\n> > > this is good idea.\n> > \n> > Here is the fastgetattr macro. Again, I just inlined the cacheoffset\n> > and null handlling at the top. Doesn't look like much code, though the\n> > ?: macro format makes it look larger. What do you think?\n> \n> Try to gmake clean and gmake... Please compare old/new sizes for\n> debug version too.\n\nOK, here it is, 'size' with two regression run timings:\n\nOLD\n\ttext\tdata\tbss\tdec\thex\n\t831488\t155648\t201524\t1188660\t122334\n\t 151.12 real 4.66 user 8.52 sys\n\t 141.70 real 1.28 user 7.44 sys\n\nNEW\n\ttext\tdata\tbss\tdec\thex\n\t864256\t155648\t201548\t1221452\t12a34c\n\t 143.52 real 3.48 user 9.08 sys\n\t 146.10 real 1.34 user 7.44 sys\n\nThese numbers are with assert and -g on.\n\nInteresting that the 1st regression test is the greatest, and the 2nd is\nthe least, with the same no-inlining, but with standard optimizations.\n\nNow, my test of startup times shows it saves 0.015 seconds on a 0.10\nsecond test. This 0.015 is the equvalent to the fork() overhead time. \nThis speedup is reproducable.\n\nThe inlining is a 3% increase in size, but provides a 15% speed increase\non my startup test.\n\nLooks good to me. I am going to apply the patch, and let people tell me\nif they see a speedup worth a 3% binary size increase.\n\nThe only visible change is that heap_getattr() does not take a buffer\nparameter anymore, thanks to the removal of time travel.\n\nVadim, I will send you the patch separately to look at.\n\n-- \nBruce Momjian\nmaillist@candle.pha.pa.us\n\n\n", "msg_date": "Fri, 30 Jan 1998 23:33:25 -0500 (EST)", "msg_from": "Bruce Momjian <maillist@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] inlining" }, { "msg_contents": "> OLD\n> \ttext\tdata\tbss\tdec\thex\n> \t831488\t155648\t201524\t1188660\t122334\n> \t 151.12 real 4.66 user 8.52 sys\n> \t 141.70 real 1.28 user 7.44 sys\n> \n> NEW\n> \ttext\tdata\tbss\tdec\thex\n> \t864256\t155648\t201548\t1221452\t12a34c\n\nI have the new size down to 852000, which is only 2.5% increase.\n\n\n-- \nBruce Momjian\nmaillist@candle.pha.pa.us\n", "msg_date": "Sun, 1 Feb 1998 00:35:44 -0500 (EST)", "msg_from": "Bruce Momjian <maillist@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] inlining" }, { "msg_contents": "> \n> Bruce Momjian wrote:\n> > \n> > Let me add, I am not inlining all the functions, but only the top part\n> > of them that deals with cachoffsets and nulls. These are the easy ones,\n> > and the ones that get used most often.\n> \n> fastgetattr() is called from a HUNDREDS places - I'm not sure that\n> this is good idea.\n> \n> I suggest to inline _entire_ body of this func in the \n> execQual.c:ExecEvalVar() - Executor uses _only_ ExecEvalVar() to get\n> data from tuples.\n\nI don't think I can do that easily. Inlining the top of the the\nfunction that uses attcacheoff or gets NULL's is easy, but after that,\nlots of loops and stuff, which are hard to inline because you really\ncan't define your own variables inside a macro that returns a value.\n\nLet's see that profiling shows after my changes, and how many times\nnocache_getattr(), the new name for the remaining part of the function,\nactually has to be called.\n\nAlso, there is nocache_getiattr(), and get_sysattr() is gone. Just an\narray lookup for the offset now.\n\n\n-- \nBruce Momjian\nmaillist@candle.pha.pa.us\n", "msg_date": "Sun, 1 Feb 1998 13:41:25 -0500 (EST)", "msg_from": "Bruce Momjian <maillist@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] inlining" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> >\n> > Bruce Momjian wrote:\n> > >\n> > > Let me add, I am not inlining all the functions, but only the top part\n> > > of them that deals with cachoffsets and nulls. These are the easy ones,\n> > > and the ones that get used most often.\n> >\n> > fastgetattr() is called from a HUNDREDS places - I'm not sure that\n> > this is good idea.\n> >\n> > I suggest to inline _entire_ body of this func in the\n> > execQual.c:ExecEvalVar() - Executor uses _only_ ExecEvalVar() to get\n> > data from tuples.\n> \n> I don't think I can do that easily. Inlining the top of the the\n> function that uses attcacheoff or gets NULL's is easy, but after that,\n> lots of loops and stuff, which are hard to inline because you really\n> can't define your own variables inside a macro that returns a value.\n\nOk.\n\n> \n> Let's see that profiling shows after my changes, and how many times\n> nocache_getattr(), the new name for the remaining part of the function,\n> actually has to be called.\n\nOk.\n\n> \n> Also, there is nocache_getiattr(), and get_sysattr() is gone. Just an\n> array lookup for the offset now.\n\nNice.\n\nVadim\n", "msg_date": "Mon, 02 Feb 1998 02:33:45 +0700", "msg_from": "\"Vadim B. 
Mikheev\" <vadim@sable.krasnoyarsk.su>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] inlining" } ]
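The ?:-style macro quoted at the top of this thread is dense; the pattern it implements is easier to see in miniature. Below is a sketch with hypothetical names (`AttrDesc`, `FASTGETATTR`, `nocache_getattr` here are stand-ins, not the real heap-tuple code) showing the split Bruce describes: the cache-offset fast path inlined as an expression macro, with an out-of-line function call as the fallback.

```c
#include <assert.h>

typedef struct {
    int cacheoff;               /* attribute offset, or -1 if not cached yet */
} AttrDesc;

typedef struct {
    const char *data;           /* start of the tuple's data area */
} Tuple;

/* Slow path: walk the tuple to find the offset (stubbed out here),
 * then remember it in the cache for next time. */
static const char *
nocache_getattr(const Tuple *tup, AttrDesc *att)
{
    att->cacheoff = 4;          /* pretend the walk found offset 4 */
    return tup->data + att->cacheoff;
}

/* Inline fast path in the same ?: expression style as the quoted
 * fastgetattr() macro: use the cached offset when available, else
 * fall back to the out-of-line function. */
#define FASTGETATTR(tup, att) \
    ((att)->cacheoff >= 0 ? (tup)->data + (att)->cacheoff \
                          : nocache_getattr((tup), (att)))
```

This is the trade-off the size/timing numbers above are measuring: the common cached case costs one comparison and an add inline, while only the rare uncached case pays a function call.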
[ { "msg_contents": "> > GROUP BY bug of duplicates\t\t\n> > GROUP BY nulls bug\n> > ORDER BY nulls(Vadim?)\n> > many OR's exhaust optimizer memory(Vadim?)\n> > max tuple length settings(Darren & Peter)\n> > pg_user defaults to world-readable until passwords used(Todd)\n> > self-join optimizer bug\n> > subselects(Vadim)\n\nAs far as I know, none of these are ready for 6.3, which is due on\nSunday. What is the game plan, folks?\n\nAnd the tools/RELEASE_CHANGES files have not been updated. I\nwill do the FAQ, TODO, HISTORY files.\n\n-- \nBruce Momjian\nmaillist@candle.pha.pa.us\n", "msg_date": "Fri, 30 Jan 1998 01:24:46 -0500 (EST)", "msg_from": "Bruce Momjian <maillist@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Current open 6.3 issues (fwd)" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> > > GROUP BY bug of duplicates\n> > > GROUP BY nulls bug\n> > > ORDER BY nulls(Vadim?)\n> > > many OR's exhaust optimizer memory(Vadim?)\n> > > max tuple length settings(Darren & Peter)\n> > > pg_user defaults to world-readable until passwords used(Todd)\n> > > self-join optimizer bug\n ^^^^^^^^^^^^^^^^^^^^^^^\n\nTest case, please.\n\n> > > subselects(Vadim)\n> \n> As far as I know, none of these are ready for 6.3, which is due on\n ^^^ beta\n> Sunday. What is the game plan, folks?\n\nI'm working...\n\nVadim\n", "msg_date": "Fri, 30 Jan 1998 14:43:13 +0700", "msg_from": "\"Vadim B. 
Mikheev\" <vadim@sable.krasnoyarsk.su>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Current open 6.3 issues (fwd)" }, { "msg_contents": "On Fri, 30 Jan 1998, Bruce Momjian wrote:\n\n> > > GROUP BY bug of duplicates\t\t\n> > > GROUP BY nulls bug\n> > > ORDER BY nulls(Vadim?)\n> > > many OR's exhaust optimizer memory(Vadim?)\n> > > max tuple length settings(Darren & Peter)\n> > > pg_user defaults to world-readable until passwords used(Todd)\n> > > self-join optimizer bug\n> > > subselects(Vadim)\n> \n> As far as I know, none of these are ready for 6.3, which is due on\n> Sunday. What is the game plan, folks?\n\n\t\"bugs\" can be fixed up to RELEASE date, which covers 4 of the\nabove...\n\n> And the tools/RELEASE_CHANGES files have not been updated. I\n> will do the FAQ, TODO, HISTORY files.\n\n\tI believe I'm supposed to be handling the RELEASE_CHANGES\nfile...will take a look into that over the next week or so...its a month\nbefore the RELEASE :)\n\n\n", "msg_date": "Fri, 30 Jan 1998 08:11:52 -0500 (EST)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Current open 6.3 issues (fwd)" }, { "msg_contents": "On Fri, 30 Jan 1998, Vadim B. Mikheev wrote:\n\n> Bruce Momjian wrote:\n> > \n> > > > GROUP BY bug of duplicates\n> > > > GROUP BY nulls bug\n> > > > ORDER BY nulls(Vadim?)\n> > > > many OR's exhaust optimizer memory(Vadim?)\n> > > > max tuple length settings(Darren & Peter)\n> > > > pg_user defaults to world-readable until passwords used(Todd)\n> > > > self-join optimizer bug\n> ^^^^^^^^^^^^^^^^^^^^^^^\n> \n> Test case, please.\n\nI'm not sure if this is an example of the self-join bug mentioned above\nor just an example of an \"exceptional\" computing challenge. I repost\nthe example so that those of you who understand what's going on can\njudge for yourself. 
\n\n", "msg_date": "Fri, 30 Jan 1998 09:39:54 -0500 (EST)", "msg_from": "Marc Howard Zuckman <marc@fallon.classyad.com>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Current open 6.3 issues (fwd)" }, { "msg_contents": "> \n> On Fri, 30 Jan 1998, Bruce Momjian wrote:\n> \n> > > > GROUP BY bug of duplicates\t\t\n> > > > GROUP BY nulls bug\n> > > > ORDER BY nulls(Vadim?)\n> > > > many OR's exhaust optimizer memory(Vadim?)\n> > > > max tuple length settings(Darren & Peter)\n> > > > pg_user defaults to world-readable until passwords used(Todd)\n> > > > self-join optimizer bug\n> > > > subselects(Vadim)\n> > \n> > As far as I know, none of these are ready for 6.3, which is due on\n> > Sunday. What is the game plan, folks?\n> \n> \t\"bugs\" can be fixed up to RELEASE date, which covers 4 of the\n> above...\n> \n> > And the tools/RELEASE_CHANGES files have not been updated. I\n> > will do the FAQ, TODO, HISTORY files.\n> \n> \tI believe I'm supposed to be handling the RELEASE_CHANGES\n> file...will take a look into that over the next week or so...its a month\n> before the RELEASE :)\n\nThis is 6.3 beta. IMHO, the stuff has to say at least 6.3, if not 6.3\nbeta by Feb 1. Don't want beta users saying they have 6.2.1.\n\n-- \nBruce Momjian\nmaillist@candle.pha.pa.us\n\n", "msg_date": "Fri, 30 Jan 1998 14:06:50 -0500 (EST)", "msg_from": "Bruce Momjian <maillist@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Current open 6.3 issues (fwd)" } ]
[ { "msg_contents": "on i386 linux / glibc-2 with hba and locale support, authentication\nfails.\n\nIf I patch backend/libpq/auth.c to default as: (lines ~420-ish)\n\tauth_handler = NULL;\n+\t\tareq = AUTH_REQ_OK\n+\t\tauth_handler = handle_done_auth;\n\n\tswitch (port->auth_method)\n\nThe system works :)\n[regression tests that is]\n\nWell - it's a cheap hack..\nTime-related (all), triggers, select_distinct*, select_views, points and\ngeometry tests all fail BTW...\n\nOh - and the JDBC driver is _SERIOUSLY_ wedged as it's creating invalid\nstartup packets... who changed the startup?\n\nFor me this is an \"end of the world\" issue as I require JDBC to work - and\ncan't find the problem *sigh*.\n\nI'll try for one more day *grr*....\n\nG'day, eh? :)\n\t- Teunis\n\n\n", "msg_date": "Thu, 29 Jan 1998 23:50:40 -0700 (MST)", "msg_from": "teunis <teunis@mauve.computersupportcentre.com>", "msg_from_op": true, "msg_subject": "CVS tonight (10pm MST)" } ]
[ { "msg_contents": "Wow, this is really a fantastic list, of fixes and enhancements !\n\nIt sure is nice to see my name on the Bugs Fixed list! But I think,\nas I did not do any coding (yet ? I prblby only gave a hint ?) Bruce\nshould get the credit here:\n\n> Fix for DBT Sendproc (Zeugswetter Andres)\nshould probably be:\nFix for send/receiveProcedure argument order in TypeCreate (Bruce)\n\nAndreas\n", "msg_date": "Fri, 30 Jan 1998 11:42:03 +0100", "msg_from": "Zeugswetter Andreas DBT <Andreas.Zeugswetter@telecom.at>", "msg_from_op": true, "msg_subject": "Re: 6.3 CHANGES list" }, { "msg_contents": "> \n> Wow, this is really a fantastic list, of fixes and enhancements !\n> \n> It sure is nice to see my name on the Bugs Fixed list! But I think,\n> as I did not do any coding (yet ? I prblby only gave a hint ?) Bruce\n> should get the credit here:\n> \n> > Fix for DBT Sendproc (Zeugswetter Andres)\n> should probably be:\n> Fix for send/receiveProcedure argument order in TypeCreate (Bruce)\n> \n> Andreas\n> \n> \n\nI am going to keep your name on it. Finding the actual problem was over\nhalf the job of fixing it.\n\n-- \nBruce Momjian\nmaillist@candle.pha.pa.us\n", "msg_date": "Sun, 1 Feb 1998 13:36:32 -0500 (EST)", "msg_from": "Bruce Momjian <maillist@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: 6.3 CHANGES list" } ]
[ { "msg_contents": "Hi,\n\ni have just grabbed the 6.3 snapshot and tried to compile it on irix 6.2 -\nwithout success.\n\nThe following errors occurred:\n\n1) configure fails\n\n% ./configure \n[...]\nlinking ./backend/port/tas/i386_solaris.s to backend/port/tas.s\nlinking ./backend/port/dynloader/irix5.c to backend/port/dynloader.c\nconfigure: error: ./backend/port/dynloader/irix5.c: File not found\n\n\n2) compiling the backend fails:\nin src/backend the gcc call \ngcc -mabi=64 -o postgres access/SUBSYS.o bootstrap/SUBSYS.o\ncatalog/SUBSYS.o commands/SUBSYS.o executor/SUBSYS.o lib/SUBSYS.o\nlibpq/SUBSYS.o main/SUBSYS.o nodes/SUBSYS.o optimizer/SUBSYS.o\nparser/SUBSYS.o port/SUBSYS.o postmaster/SUBSYS.o regex/SUBSYS.o\nrewrite/SUBSYS.o storage/SUBSYS.o tcop/SUBSYS.o utils/SUBSYS.o\n../utils/version.o -64 -lPW -lcrypt -lm -lbsd -lreadline -lhistory\n-ltermcap -lcurses \n\nreturns \n[...]\nld64: ERROR 33: Unresolved text symbol \"S_INIT_LOCK\" -- 1st referenced by\nstorage/SUBSYS.o.\nld64: ERROR 33: Unresolved text symbol \"S_UNLOCK\" -- 1st referenced by\nstorage/SUBSYS.o.\nld64: ERROR 33: Unresolved text symbol \"S_LOCK\" -- 1st referenced by\nstorage/SUBSYS.o\n\n\nthese functions just seem to be implemented for linux/alpha ??\n\nbest wishes,\nFuad\n\n\n--\nFuad Abdallah\nMax-Planck-Institut fuer Zuechtungsforschung / ZWDV\nTel.: 0221/5062-739 / Priv: 0221/584563\n\n", "msg_date": "Fri, 30 Jan 1998 13:33:52 +0100 (MET)", "msg_from": "Fuad Abdallah <abdallah@mpiz-koeln.mpg.de>", "msg_from_op": true, "msg_subject": "snapshot won't compile on irix6.2" } ]
[ { "msg_contents": "\n[CC to hackers list].\n\n> \n> Hi!\n> \n> Do you still know me?\n> \n> I'm the one who promised to work on the rest of the ANSI SQL Standard for\n> postgresql.\n\nYep. Funny thing, we are preparing a 6.3 beta release for Feb 1, and I\njust yesterday removed you name from the top of the TODO list because I\nhadn't heard anything for a while.\n\n> \n> I know, I told you one year ago, that I would start my work immediately\n> but did nothing till now.\n> \n> Now here's my question:\n> \n> Are you still interested in my working on that?\n\nYes.\n\n> I have finished all other important things of my studies so I *really*\n> start on Monday working on postgresql.\n\nGreat.\n\n> Maybe you remember, that I wanted this to be my Master's Thesis.\n> Hope there is enough left for me!\n\nSure is. We really need someone who can focus on some big issues.\n\n> I'm really still interested in it! The reason why I did not start so far\n> is, that there have been a lot of examinations that have been very \n> important.\n> \n> \n> Hope you are not angry with me, this time I really want to do something\n> and I will try to finish this until summer (end of June), because I want\n> to finish my studies till that time.\n\nNot angry at all. People offer to help, but things come up. Any help\nis appreciated.\n\nHere is the developers FAQ. You will probably have to study the source\ncode for several weeks to get a clear idea of how everything works. We\ncan then get you started on items from the TODO list, if that is good\nfor you, or any ideas you have after looking at the code. 
I recommend\nyou get the most current snapshot from ftp.postgresql.org, and start\nlooking at it.\n\n---------------------------------------------------------------------------\n\nDevelopers Frequently Asked Questions (FAQ) for PostgreSQL\n\nLast updated: Thu Jan 22 15:08:43 EST 1998\n\nCurrent maintainer: Bruce Momjian (maillist@candle.pha.pa.us)\n\nThe most recent version of this document can be viewed at the postgreSQL Web\nsite, http://postgreSQL.org.\n\n ------------------------------------------------------------------------\n\nQuestions answered:\n\n1) General questions\n\n1) What tools are available for developers?\n2) What books are good for developers?\n3) Why do we use palloc() and pfree() to allocate memory?\n4) Why do we use Node and List to make data structures?\n ------------------------------------------------------------------------\n\n1) What tools are available for developers?\n\nAside from the User documentation mentioned in the regular FAQ, there are\nseveral development tools available. First, all the files in the /tools\ndirectory are designed for developers.\n\n RELEASE_CHANGES changes we have to make for each release\n SQL_keywords standard SQL'92 keywords\n backend web flowchart of the backend directories\n ccsym find standard defines made by your compiler\n entab converts tabs to spaces, used by pgindent\n find_static finds functions that could be made static\n find_typedef get a list of typedefs in the source code\n make_ctags make vi 'tags' file in each directory\n make_diff make *.orig and diffs of source\n make_etags make emacs 'etags' files\n make_keywords.README make comparison of our keywords and SQL'92\n make_mkid make mkid ID files\n mkldexport create AIX exports file\n pgindent indents C source files\n\nLet me note some of these. If you point your browser at the tools/backend\ndirectory, you will see all the backend components in a flow chart. You can\nclick on any one to see a description. 
If you then click on the directory\nname, you will be taken to the source directory, to browse the actual source\ncode behind it. We also have several README files in some source directories\nto describe the function of the module. The browser will display these when\nyou enter the directory also. The tools/backend directory is also contained\non our web page under the title Backend Flowchart.\n\nSecond, you really should have an editor that can handle tags, so you can\ntag a function call to see the function definition, and then tag inside that\nfunction to see an even lower-level function, and then back out twice to\nreturn to the original function. Most editors support this via tags or etags\nfiles.\n\nThird, you need to get mkid from ftp.postgresql.org. By running\ntools/make_mkid, an archive of source symbols can be created that can be\nrapidly queried like grep or edited.\n\nmake_diff has tools to create patch diff files that can be applied to the\ndistribution.\n\npgindent will format source files to match our standard format, which has\nfour-space tabs, and an indenting format specified by flags to the your\noperating system's utility indent.\n\n2) What books are good for developers?\n\nI have two good books, An Introduction to Database Systems, by C.J. Date,\nAddison, Wesley and A Guide to the SQL Standard, by C.J. Date, et. al,\nAddison, Wesley.\n\n3) Why do we use palloc() and pfree() to allocate memory?\n\npalloc() and pfree() are used in place of malloc() and free() because we\nautomatically free all memory allocated when a transaction completes. This\nmakes it easier to make sure we free memory that gets allocated in one\nplace, but only freed much later. 
There are several contexts that memory can\nbe allocated in, and this controls when the allocated memory is\nautomatically freed by the backend.\n\n4) Why do we use Node and List to make data structures?\n\nWe do this because this allows a consistent way to pass data inside the\nbackend in a flexible way. Every node has a NodeTag which specifies what\ntype of data is inside the Node. Lists are lists of Nodes. lfirst(),\nlnext(), and foreach() are used to get, skip, and traverse through Lists.\n\n-- \nBruce Momjian\nmaillist@candle.pha.pa.us\n", "msg_date": "Fri, 30 Jan 1998 12:09:37 -0500 (EST)", "msg_from": "Bruce Momjian <maillist@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Adding rest of SQL Standard" } ]
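The palloc()/pfree() answer in item 3 of the FAQ above can be sketched concretely. Here is a toy arena with invented names (`MemoryContext` and `ctx_alloc` below are only illustrative stand-ins, nothing like the backend's real mmgr structures), showing why context-wide release makes "allocated in one place, freed much later" safe:

```c
#include <assert.h>
#include <stdlib.h>

/* Toy "memory context": every allocation is chained onto the context,
 * so releasing the context frees everything in one call -- the property
 * the FAQ describes for per-transaction palloc()/pfree(). */
typedef struct Chunk {
    struct Chunk *next;
} Chunk;

typedef struct {
    Chunk *head;
    int    nalloc;
} MemoryContext;

static void *
ctx_alloc(MemoryContext *ctx, size_t size)
{
    Chunk *c = malloc(sizeof(Chunk) + size);
    c->next = ctx->head;        /* chain onto this context */
    ctx->head = c;
    ctx->nalloc++;
    return c + 1;               /* caller's memory follows the header */
}

static void
ctx_release(MemoryContext *ctx) /* e.g. at transaction end */
{
    while (ctx->head) {
        Chunk *next = ctx->head->next;
        free(ctx->head);
        ctx->head = next;
    }
    ctx->nalloc = 0;
}
```

Nothing allocated through the context can leak past release, which is why code that allocates deep inside the executor never needs a matching explicit free.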
[ { "msg_contents": "The gram.y file in the snapshot file dated 30-JAN-98 causes errors when \nprocessed with yacc on Unixware 2.1.2. The following patch will enable yacc \nto process the file, but I am unsure of the ramifications it will cause.\n\nYour comments please.\n\n--\n\n*** src/backend/parser/gram.y.orig\tSat Jan 31 00:55:17 1998\n--- src/backend/parser/gram.y\tSat Jan 31 01:44:07 1998\n***************\n*** 125,132 ****\n \t\tExplainStmt, VariableSetStmt, VariableShowStmt, VariableResetStmt,\n \t\tCreateUserStmt, AlterUserStmt, DropUserStmt\n \n- %type <rtstmt> \n- \n %type <str>\t\topt_database, location\n \n %type <pboolean> user_createdb_clause, user_createuser_clause\n--- 125,130 ----\n***************\n*** 261,267 ****\n \t\tSECOND_P, SELECT, SET, SUBSTRING,\n \t\tTABLE, TIME, TIMESTAMP, TO, TRAILING, TRANSACTION, TRIM,\n \t\tUNION, UNIQUE, UPDATE, USING,\n! \t\tVALUES, VARCHAR, VARYING, VERBOSE, VERSION, VIEW,\n \t\tWHERE, WITH, WORK, YEAR_P, ZONE\n \n /* Keywords (in SQL3 reserved words) */\n--- 259,265 ----\n \t\tSECOND_P, SELECT, SET, SUBSTRING,\n \t\tTABLE, TIME, TIMESTAMP, TO, TRAILING, TRANSACTION, TRIM,\n \t\tUNION, UNIQUE, UPDATE, USING,\n! \t\tVALUES, VARCHAR, VARYING, VIEW,\n \t\tWHERE, WITH, WORK, YEAR_P, ZONE\n \n /* Keywords (in SQL3 reserved words) */\n***************\n*** 3284,3291 ****\n \t\t\t\t\t\tmakeA_Expr(OP, \"<\", $1, $4),\n \t\t\t\t\t\tmakeA_Expr(OP, \">\", $1, $6));\n \t\t\t\t}\n! \t\t| a_expr IN { saved_In_Expr = lcons($1,saved_In_Expr); } '(' in_expr ')' { \nsaved_In_Expr = lnext(saved_In_Expr); }\n \t\t\t\t{\n \t\t\t\t\tif (nodeTag($5) == T_SubLink)\n \t\t\t\t\t{\n \t\t\t\t\t\t\tSubLink *n = (SubLink *)$5;\n--- 3282,3290 ----\n \t\t\t\t\t\tmakeA_Expr(OP, \"<\", $1, $4),\n \t\t\t\t\t\tmakeA_Expr(OP, \">\", $1, $6));\n \t\t\t\t}\n! 
\t\t| a_expr IN { saved_In_Expr = lcons($1,saved_In_Expr); } '(' in_expr ')'\n \t\t\t\t{\n+ \t\t\t\t\tsaved_In_Expr = lnext(saved_In_Expr);\n \t\t\t\t\tif (nodeTag($5) == T_SubLink)\n \t\t\t\t\t{\n \t\t\t\t\t\t\tSubLink *n = (SubLink *)$5;\n***************\n*** 3297,3304 ****\n \t\t\t\t\t}\n \t\t\t\t\telse\t$$ = $5;\n \t\t\t\t}\n! \t\t| a_expr NOT IN { saved_In_Expr = lcons($1,saved_In_Expr); } '(' \nnot_in_expr ')' { saved_In_Expr = lnext(saved_In_Expr); }\n \t\t\t\t{\n \t\t\t\t\tif (nodeTag($6) == T_SubLink)\n \t\t\t\t\t{\n \t\t\t\t\t\t\tSubLink *n = (SubLink *)$6;\n--- 3296,3304 ----\n \t\t\t\t\t}\n \t\t\t\t\telse\t$$ = $5;\n \t\t\t\t}\n! \t\t| a_expr NOT IN { saved_In_Expr = lcons($1,saved_In_Expr); } '(' \nnot_in_expr ')'\n \t\t\t\t{\n+ \t\t\t\t\tsaved_In_Expr = lnext(saved_In_Expr);\n \t\t\t\t\tif (nodeTag($6) == T_SubLink)\n \t\t\t\t\t{\n \t\t\t\t\t\t\tSubLink *n = (SubLink *)$6;\n\n-- \n____ | Billy G. Allie | Domain....: Bill.Allie@mug.org\n| /| | 7436 Hartwell | Compuserve: 76337,2061\n|-/-|----- | Dearborn, MI 48126| MSN.......: B_G_Allie@email.msn.com\n|/ |LLIE | (313) 582-1540 | \n\n\n", "msg_date": "Sat, 31 Jan 1998 03:12:35 -0500", "msg_from": "\"Billy G. Allie\" <Bill.Allie@mug.org>", "msg_from_op": true, "msg_subject": "gram.y has errors when processed by yacc (on Unixware 2.1.2)" }, { "msg_contents": "I liked all of this patch, and have applied it.\n> \n> The gram.y file in the snapshot file dated 30-JAN-98 causes errors when \n> processed with yacc on Unixware 2.1.2. The following patch will enable yacc \n> to process the file, but I am unsure of the ramifications it will cause.\n> \n> Your comments please.\n> \n> --\n> \n\n-- \nBruce Momjian\nmaillist@candle.pha.pa.us\n", "msg_date": "Sun, 1 Feb 1998 14:06:20 -0500 (EST)", "msg_from": "Bruce Momjian <maillist@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] gram.y has errors when processed by yacc (on Unixware\n\t2.1.2)" } ]
[ { "msg_contents": "I am experiencing problems with 'order by' clauses in the 30-JAN-98 snapshot on \na Unixware 2.1.2 i386 base system. Any more than one attribute in the order \nby seems to be ignored by the sort. (i.e., select * from testbga order by b \nusing >, a using <; will be sorted correctly with attribute b, but not \nattribute a).\n\nAny pointers as to where I can start looking for the problem?\n\nThanks.\n\n-- \n____ | Billy G. Allie | Domain....: Bill.Allie@mug.org\n| /| | 7436 Hartwell | Compuserve: 76337,2061\n|-/-|----- | Dearborn, MI 48126| MSN.......: B_G_Allie@email.msn.com\n|/ |LLIE | (313) 582-1540 | \n\n\n", "msg_date": "Sat, 31 Jan 1998 03:28:03 -0500", "msg_from": "\"Billy G. Allie\" <Bill.Allie@mug.org>", "msg_from_op": true, "msg_subject": "order by problems with 30-JAN-98 snapshot." } ]
[ { "msg_contents": "The following patches will bring the UNIVEL port in line with the new porting \nmodel used in postgreSQL 6.3\n\nNOTE: The src/backend/port/univel directory can be deleted.\n\n\n\n\n____ | Billy G. Allie | Domain....: Bill.Allie@mug.org\n| /| | 7436 Hartwell | Compuserve: 76337,2061\n|-/-|----- | Dearborn, MI 48126| MSN.......: B_G_Allie@email.msn.com\n|/ |LLIE | (313) 582-1540 |", "msg_date": "Sat, 31 Jan 1998 03:41:08 -0500", "msg_from": "\"Billy G. Allie\" <Bill.Allie@mug.org>", "msg_from_op": true, "msg_subject": "Patches to the UNIVEL port to fit the new porting model." } ]
[ { "msg_contents": "\nI introduced a problem with the mailing lists on Friday afternoon as a\nresult of upgrading perl for some anti-spam filters...I believe it to be\nfixed now, and this is acting as a test...\n\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sat, 31 Jan 1998 18:16:20 -0400 (AST)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "Problem fixed..." } ]
[ { "msg_contents": "\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sat, 31 Jan 1998 18:18:03 -0400 (AST)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "one problem at a time...just ignore" } ]
[ { "msg_contents": "\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sat, 31 Jan 1998 18:20:49 -0400 (AST)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "erk, damn upgrades..." } ]
[ { "msg_contents": "\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sat, 31 Jan 1998 18:22:43 -0400 (AST)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "last error cleaned out.." } ]
[ { "msg_contents": "\nDue to an upgrade in Perl on the main server affecting an older version of\nMajordomo, the lists have been effectively down for the past 24hrs...the\nproblems have been worked out and everything appears to be okay now...\n\nif you've posted anything since ~noon on Friday, please repost it...\n\nSorry for the inconvenience...\n\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sat, 31 Jan 1998 18:26:33 -0400 (AST)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "List problem from Friday Afternoon to 6:24pm Saturday" } ]
[ { "msg_contents": "Has anyone tested the regression tests recently. Many problems, I\nmaybe think the ORDER BY is broken, as a Unixware person reported. \nCan't beta release with this problem. Is it the new psort() NULL\nhandling, or something else?\n\n-- \nBruce Momjian\nmaillist@candle.pha.pa.us\n", "msg_date": "Sun, 1 Feb 1998 00:31:32 -0500 (EST)", "msg_from": "Bruce Momjian <maillist@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "regression tests" }, { "msg_contents": "On Sun, 1 Feb 1998, Bruce Momjian wrote:\n\n> Has anyone tested the regression tests recently. Many problems, I\n> maybe think the ORDER BY is broken, as a Unixware person reported. \n> Can't beta release with this problem. Is it the new psort() NULL\n> handling, or something else?\n\n\tI just got a chance to do it...the select_distinct test fails\nmiserably over here on:\n\nQUERY: SELECT DISTINCT two, string4, ten\n FROM temp\n ORDER BY two using <, string4 using <, ten using <;\n\nwith:\n\n two|string4|ten\n ---+-------+---\n 0|AAAAxx | 0\n 0|HHHHxx | 0\n 0|OOOOxx | 0\n 0|VVVVxx | 0\n+ 0|OOOOxx | 0\n+ 0|HHHHxx | 0\n+ 0|AAAAxx | 0\n+ 0|OOOOxx | 0\n+ 0|HHHHxx | 0\n+ 0|OOOOxx | 0\n+ 0|HHHHxx | 0\n+ 0|OOOOxx | 0\n+ 0|HHHHxx | 0\n+ 0|VVVVxx | 0\n+ 0|HHHHxx | 0\n+ 0|AAAAxx | 0\n<etc,etc>\n\n\t\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sun, 1 Feb 1998 07:37:03 -0400 (AST)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] regression tests" } ]
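For reference while chasing the DISTINCT/ORDER BY failure above: the invariant a multi-key sort has to preserve is that rows equal on all keys end up adjacent (otherwise DISTINCT cannot collapse them), and NULLs need an explicit, consistent position. A minimal two-key, NULL-last comparator sketch — `Row` and the field names are hypothetical, this is not the psort() code, only the contract it must satisfy:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

typedef struct {
    const char *key1;           /* NULL plays the role of SQL NULL */
    const char *key2;
} Row;

/* NULL-last comparison of a single key. Returning 0 only for genuinely
 * equal (or both-NULL) values is what keeps equal rows adjacent. */
static int
cmp_key(const char *a, const char *b)
{
    if (a == NULL && b == NULL)
        return 0;
    if (a == NULL)
        return 1;               /* NULLs sort last */
    if (b == NULL)
        return -1;
    return strcmp(a, b);
}

/* Multi-key comparator: fall through to the next key on a tie,
 * never stop early. */
static int
cmp_row(const void *pa, const void *pb)
{
    const Row *a = pa, *b = pb;
    int c = cmp_key(a->key1, b->key1);

    return c != 0 ? c : cmp_key(a->key2, b->key2);
}
```

If a comparator treats NULL as equal to everything, or skips later keys, equal rows scatter through the output exactly like the duplicated `OOOOxx`/`HHHHxx` rows in the regression diff.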
[ { "msg_contents": "\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sun, 1 Feb 1998 06:31:06 -0400 (AST)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "testing..." } ]
[ { "msg_contents": "\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sun, 1 Feb 1998 06:37:54 -0400 (AST)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "hrmmm...try again, testing, ignore" } ]
[ { "msg_contents": "\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sun, 1 Feb 1998 06:41:49 -0400 (AST)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "again, ignore" } ]
[ { "msg_contents": "\ntesting anti-spam filter\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sun, 1 Feb 1998 06:45:38 -0400 (AST)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "try again..." } ]
[ { "msg_contents": "\nI am getting the error...\n\n\tERROR: fmgr_info: function 0: cache lookup failed\n\n\t\t...after creating a database, creating tables - indexes\nand sequences, inserting data (with perl scripts) into 2 tables (570\nrecords in one and 100 records in another), using \"vacuum analyze\" on\nthe database then trying \"\\d <tablename>\" or \"\\dS\". Running \"vacuum\"\nalone is OK.\n\nI run the query from psql.c:601 on the psql command line and get the\nsame result. \n\nI tried the same sequence with a small test database and only a few\nrecords and there were no problems.\n\nI am still looking into this and would appreciate any pointers.\n\n\n\n-James\n\n", "msg_date": "Sun, 1 Feb 1998 08:22:02 -0500 (EST)", "msg_from": "James Hughes <jamesh@interpath.com>", "msg_from_op": true, "msg_subject": "VACUUM ANALYZE Problem" }, { "msg_contents": "James Hughes wrote:\n> \n> I am getting the error...\n> \n> ERROR: fmgr_info: function 0: cache lookup failed\n> \n> ...after creating a database, creating tables - indexes\n> and sequences, inserting data (with perl scripts) into 2 tables (570\n> records in one and 100 records in another), using \"vacuum analyze\" on\n> the database then trying \"\\d <tablename>\" or \"\\dS\". Running \"vacuum\"\n> alone is OK.\n> \n> I run the query from psql.c:601 on the psql command line and get the\n> same result.\n> \n> I tried the same sequence with a small test database and only a few\n> records and there were no problems.\n> \n> I am still looking into this and would appreciate any pointers.\n\nVersion ?\ngdb output ?\n\nVadim\n", "msg_date": "Sun, 01 Feb 1998 22:45:33 +0700", "msg_from": "\"Vadim B. Mikheev\" <vadim@sable.krasnoyarsk.su>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] VACUUM ANALYZE Problem" }, { "msg_contents": "\n\nOn Sun, 1 Feb 1998, Vadim B. 
Mikheev wrote:\n\n: James Hughes wrote:\n: > \n: > I am getting the error...\n: > \n: > ERROR: fmgr_info: function 0: cache lookup failed\n: > \n: > ...after creating a database, creating tables - indexes\n: > and sequences, inserting data (with perl scripts) into 2 tables (570\n: > records in one and 100 records in another), using \"vacuum analyze\" on\n: > the database then trying \"\\d <tablename>\" or \"\\dS\". Running \"vacuum\"\n: > alone is OK.\n: > \n: > I run the query from psql.c:601 on the psql command line and get the\n: > same result.\n: > \n: > I tried the same sequence with a small test database and only a few\n: > records and there were no problems.\n: > \n: > I am still looking into this and would appreciate any pointers.\n: \n: Version ?\n\n1-31 cvs tree\n\n: gdb output ?\n\nI'll see if I can narrow it down a bit. Might be larger than the sources\nat this point ;)\n\n\n-James\n\n\n", "msg_date": "Sun, 1 Feb 1998 12:41:11 -0500 (EST)", "msg_from": "James Hughes <jamesh@interpath.com>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] VACUUM ANALYZE Problem" }, { "msg_contents": "\nAfter poking arround some more, I found that the \"vacuum analyze\" is\ncausing problems with the \"<\" and \">\" operators. The \"> 0\" in the SELECT\nfor \"/d <table>\" and \"/dS\" commands in psql cause the error. \n\nI verified that any simple query using the \"<\" or \">\" operators fail \nwith the same message...\n\n\tERROR: fmgr_info: function 0: cache lookup failed\n\n \t\t\t...after using the \"vacuum analyse\" command.\nBut, only after vacuuming any relation that was created and populated by\nme. Vacumming system catalogs poses no problems.\n\nI did go back to 6.2.0. 
Found no problems there.\n\n\n-James\n\n", "msg_date": "Mon, 2 Feb 1998 23:17:48 -0500 (EST)", "msg_from": "James Hughes <jamesh@interpath.com>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] VACUUM ANALYZE Problem" }, { "msg_contents": "James Hughes wrote:\n> \n> After poking arround some more, I found that the \"vacuum analyze\" is\n> causing problems with the \"<\" and \">\" operators. The \"> 0\" in the SELECT\n> for \"/d <table>\" and \"/dS\" commands in psql cause the error.\n> \n> I verified that any simple query using the \"<\" or \">\" operators fail\n> with the same message...\n\nAnalyze uses oper(\"=\",...), oper(\"<\",...) and oper(\">\",...)...\nAre queries with \"=\" OK ?\n\n> \n> ERROR: fmgr_info: function 0: cache lookup failed\n> \n> ...after using the \"vacuum analyse\" command.\n> But, only after vacuuming any relation that was created and populated by\n> me. Vacumming system catalogs poses no problems.\n\nThere are comments into vc_updstats:\n\n /*\n * invalidating system relations confuses the function cache of\n * pg_operator and pg_opclass\n */\n if (!IsSystemRelationName(pgcform->relname.data))\n RelationInvalidateHeapTuple(rd, rtup);\n\n==> invalidation of user relation causes problems too, Bruce ?\n\nVadim\n", "msg_date": "Tue, 03 Feb 1998 12:09:27 +0700", "msg_from": "\"Vadim B. Mikheev\" <vadim@sable.krasnoyarsk.su>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] VACUUM ANALYZE Problem" }, { "msg_contents": "\n\nOn Tue, 3 Feb 1998, Vadim B. Mikheev wrote:\n\n: James Hughes wrote:\n: > \n: > After poking arround some more, I found that the \"vacuum analyze\" is\n: > causing problems with the \"<\" and \">\" operators. The \"> 0\" in the SELECT\n: > for \"/d <table>\" and \"/dS\" commands in psql cause the error.\n: > \n: > I verified that any simple query using the \"<\" or \">\" operators fail\n: > with the same message...\n: \n: Analyze uses oper(\"=\",...), oper(\"<\",...) 
and oper(\">\",...)...\n: Are queries with \"=\" OK ?\n:\n\nYes...\n\n\t\"=\" is OK,\n\t\"<>\" is OK,\n\t\"<\" is broken,\n\t\">\" is broken,\n\t\"<=\" is broken,\n\t\">=\" is broken\n\n\t\t...maybe others, I have no geometrical tables to test\nwith. I could use some of the code from the regression tests if needed.\n\n\n\n-James\n\n\n", "msg_date": "Tue, 3 Feb 1998 06:02:25 -0500 (EST)", "msg_from": "James Hughes <jamesh@interpath.com>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] VACUUM ANALYZE Problem" }, { "msg_contents": "James Hughes wrote:\n> \n> After poking arround some more, I found that the \"vacuum analyze\" is\n> causing problems with the \"<\" and \">\" operators. The \"> 0\" in the SELECT\n> for \"/d <table>\" and \"/dS\" commands in psql cause the error.\n> \n> I verified that any simple query using the \"<\" or \">\" operators fail\n> with the same message...\n> \n> ERROR: fmgr_info: function 0: cache lookup failed\n> \n> ...after using the \"vacuum analyse\" command.\n> But, only after vacuuming any relation that was created and populated by\n> me. Vacumming system catalogs poses no problems.\n\nWell, I found that this problem was caused by selfuncs.c:gethilokey():\n\n static ScanKeyData key[3] = {\n {0, Anum_pg_statistic_starelid, F_OIDEQ},\n {0, Anum_pg_statistic_staattnum, F_INT2EQ},\n {0, Anum_pg_statistic_staop, F_OIDEQ}\n\n: skankeys are hardcoded without call to ScanKeyEntryInitialize() =>\nwithout initialization of sk_func.fn_oid required, I assume, by\nnew PL support code. Patch for this place follows...\nOne should check all places where ScanKeyData is used.\nJan, could you do this ?\n\n(Oh, hell! I got this ERROR while testing subselect and spent so many time\nto fix this problem...)\n\nVadim", "msg_date": "Tue, 03 Feb 1998 18:09:07 +0700", "msg_from": "\"Vadim B. 
Mikheev\" <vadim@sable.krasnoyarsk.su>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] VACUUM ANALYZE Problem" }, { "msg_contents": "\n\nOn Tue, 3 Feb 1998, Vadim B. Mikheev wrote:\n\n: James Hughes wrote:\n: > \n: > After poking arround some more, I found that the \"vacuum analyze\" is\n: > causing problems with the \"<\" and \">\" operators. The \"> 0\" in the SELECT\n: > for \"/d <table>\" and \"/dS\" commands in psql cause the error.\n: > \n: > I verified that any simple query using the \"<\" or \">\" operators fail\n: > with the same message...\n: > \n: > ERROR: fmgr_info: function 0: cache lookup failed\n: > \n: > ...after using the \"vacuum analyse\" command.\n: > But, only after vacuuming any relation that was created and populated by\n: > me. Vacumming system catalogs poses no problems.\n: \n: Well, I found that this problem was caused by selfuncs.c:gethilokey():\n: \n: static ScanKeyData key[3] = {\n: {0, Anum_pg_statistic_starelid, F_OIDEQ},\n: {0, Anum_pg_statistic_staattnum, F_INT2EQ},\n: {0, Anum_pg_statistic_staop, F_OIDEQ}\n: \n: : skankeys are hardcoded without call to ScanKeyEntryInitialize() =>\n: without initialization of sk_func.fn_oid required, I assume, by\n: new PL support code. Patch for this place follows...\n: One should check all places where ScanKeyData is used.\n: Jan, could you do this ?\n: \n\nTHANKS! I'll patch my code and check the other instances.\n\n\n\n: (Oh, hell! I got this ERROR while testing subselect and spent so many time\n: to fix this problem...)\n: \n: Vadim\n\n\n\n-James\n\n", "msg_date": "Tue, 3 Feb 1998 09:32:59 -0500 (EST)", "msg_from": "James Hughes <jamesh@interpath.com>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] VACUUM ANALYZE Problem" }, { "msg_contents": "> \n> \n> After poking arround some more, I found that the \"vacuum analyze\" is\n> causing problems with the \"<\" and \">\" operators. The \"> 0\" in the SELECT\n> for \"/d <table>\" and \"/dS\" commands in psql cause the error. 
\n> \n> I verified that any simple query using the \"<\" or \">\" operators fail \n> with the same message...\n> \n> \tERROR: fmgr_info: function 0: cache lookup failed\n> \n> \t\t\t...after using the \"vacuum analyse\" command.\n> But, only after vacuuming any relation that was created and populated by\n> me. Vacumming system catalogs poses no problems.\n> \n> I did go back to 6.2.0. Found no problems there.\n\nGlad it wasn't my vacuum code. Lots of bad scan initializations. Who\nis working on that? I think someone volunteered.\n\n-- \nBruce Momjian\nmaillist@candle.pha.pa.us\n", "msg_date": "Tue, 3 Feb 1998 14:25:56 -0500 (EST)", "msg_from": "Bruce Momjian <maillist@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] VACUUM ANALYZE Problem" }, { "msg_contents": "> \n> James Hughes wrote:\n> > \n> > After poking arround some more, I found that the \"vacuum analyze\" is\n> > causing problems with the \"<\" and \">\" operators. The \"> 0\" in the SELECT\n> > for \"/d <table>\" and \"/dS\" commands in psql cause the error.\n> > \n> > I verified that any simple query using the \"<\" or \">\" operators fail\n> > with the same message...\n> \n> Analyze uses oper(\"=\",...), oper(\"<\",...) and oper(\">\",...)...\n> Are queries with \"=\" OK ?\n> \n> > \n> > ERROR: fmgr_info: function 0: cache lookup failed\n> > \n> > ...after using the \"vacuum analyse\" command.\n> > But, only after vacuuming any relation that was created and populated by\n> > me. Vacumming system catalogs poses no problems.\n> \n> There are comments into vc_updstats:\n> \n> /*\n> * invalidating system relations confuses the function cache of\n> * pg_operator and pg_opclass\n> */\n> if (!IsSystemRelationName(pgcform->relname.data))\n> RelationInvalidateHeapTuple(rd, rtup);\n> \n> ==> invalidation of user relation causes problems too, Bruce ?\n\nSo this is not a problem? 
right?\n\n-- \nBruce Momjian\nmaillist@candle.pha.pa.us\n", "msg_date": "Tue, 3 Feb 1998 14:26:13 -0500 (EST)", "msg_from": "Bruce Momjian <maillist@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] VACUUM ANALYZE Problem" }, { "msg_contents": "> \n> \n> \n> On Tue, 3 Feb 1998, Vadim B. Mikheev wrote:\n> \n> : James Hughes wrote:\n> : > \n> : > After poking arround some more, I found that the \"vacuum analyze\" is\n> : > causing problems with the \"<\" and \">\" operators. The \"> 0\" in the SELECT\n> : > for \"/d <table>\" and \"/dS\" commands in psql cause the error.\n> : > \n> : > I verified that any simple query using the \"<\" or \">\" operators fail\n> : > with the same message...\n> : > \n> : > ERROR: fmgr_info: function 0: cache lookup failed\n> : > \n> : > ...after using the \"vacuum analyse\" command.\n> : > But, only after vacuuming any relation that was created and populated by\n> : > me. Vacumming system catalogs poses no problems.\n> : \n> : Well, I found that this problem was caused by selfuncs.c:gethilokey():\n> : \n> : static ScanKeyData key[3] = {\n> : {0, Anum_pg_statistic_starelid, F_OIDEQ},\n> : {0, Anum_pg_statistic_staattnum, F_INT2EQ},\n> : {0, Anum_pg_statistic_staop, F_OIDEQ}\n> : \n> : : skankeys are hardcoded without call to ScanKeyEntryInitialize() =>\n> : without initialization of sk_func.fn_oid required, I assume, by\n> : new PL support code. Patch for this place follows...\n> : One should check all places where ScanKeyData is used.\n> : Jan, could you do this ?\n> : \n> \n> THANKS! 
I'll patch my code and check the other instances.\n\nJames, are you going to submit a patch for all the source code?\n\n-- \nBruce Momjian\nmaillist@candle.pha.pa.us\n", "msg_date": "Tue, 3 Feb 1998 14:28:26 -0500 (EST)", "msg_from": "Bruce Momjian <maillist@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] VACUUM ANALYZE Problem" }, { "msg_contents": "\n\nOn Tue, 3 Feb 1998, Bruce Momjian wrote:\n\n: > \n: > \n: > \n: > On Tue, 3 Feb 1998, Vadim B. Mikheev wrote:\n: > \n: > : James Hughes wrote:\n: > : > \n: > : > After poking arround some more, I found that the \"vacuum analyze\" is\n: > : > causing problems with the \"<\" and \">\" operators. The \"> 0\" in the SELECT\n: > : > for \"/d <table>\" and \"/dS\" commands in psql cause the error.\n: > : > \n: > : > I verified that any simple query using the \"<\" or \">\" operators fail\n: > : > with the same message...\n: > : > \n: > : > ERROR: fmgr_info: function 0: cache lookup failed\n: > : > \n: > : > ...after using the \"vacuum analyse\" command.\n: > : > But, only after vacuuming any relation that was created and populated by\n: > : > me. Vacumming system catalogs poses no problems.\n: > : \n: > : Well, I found that this problem was caused by selfuncs.c:gethilokey():\n: > : \n: > : static ScanKeyData key[3] = {\n: > : {0, Anum_pg_statistic_starelid, F_OIDEQ},\n: > : {0, Anum_pg_statistic_staattnum, F_INT2EQ},\n: > : {0, Anum_pg_statistic_staop, F_OIDEQ}\n: > : \n: > : : skankeys are hardcoded without call to ScanKeyEntryInitialize() =>\n: > : without initialization of sk_func.fn_oid required, I assume, by\n: > : new PL support code. Patch for this place follows...\n: > : One should check all places where ScanKeyData is used.\n: > : Jan, could you do this ?\n: > : \n: > \n: > THANKS! I'll patch my code and check the other instances.\n: \n: James, are you going to submit a patch for all the source code?\n: \n\nGo ahead with just Vadim's patch for now. It fixes the analyze problem. 
\nI am going out of town for a few days and won't have access to my Dev\nSystem until then. I'll work on it this weekend if it still needs doing.\n\n\n\n-James\n\n", "msg_date": "Tue, 3 Feb 1998 15:36:46 -0500 (EST)", "msg_from": "James Hughes <jamesh@interpath.com>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] VACUUM ANALYZE Problem" }, { "msg_contents": "OK, I have looked at this, but can't figure out how to fix the many\ninitializations of ScanKeyData. Can someone who understands this please\nsubmit a patch for all these initializations so we can stop these vacuum\nanalyze reports?\n\nVadim has found the problem, but we need someone to properly fix it.\n\n> James Hughes wrote:\n> > \n> > After poking arround some more, I found that the \"vacuum analyze\" is\n> > causing problems with the \"<\" and \">\" operators. The \"> 0\" in the SELECT\n> > for \"/d <table>\" and \"/dS\" commands in psql cause the error.\n> > \n> > I verified that any simple query using the \"<\" or \">\" operators fail\n> > with the same message...\n> > \n> > ERROR: fmgr_info: function 0: cache lookup failed\n> > \n> > ...after using the \"vacuum analyse\" command.\n> > But, only after vacuuming any relation that was created and populated by\n> > me. Vacumming system catalogs poses no problems.\n> \n> Well, I found that this problem was caused by selfuncs.c:gethilokey():\n> \n> static ScanKeyData key[3] = {\n> {0, Anum_pg_statistic_starelid, F_OIDEQ},\n> {0, Anum_pg_statistic_staattnum, F_INT2EQ},\n> {0, Anum_pg_statistic_staop, F_OIDEQ}\n> \n> : skankeys are hardcoded without call to ScanKeyEntryInitialize() =>\n> without initialization of sk_func.fn_oid required, I assume, by\n> new PL support code. Patch for this place follows...\n> One should check all places where ScanKeyData is used.\n> Jan, could you do this ?\n> \n> (Oh, hell! 
I got this ERROR while testing subselect and spent so many time\n> to fix this problem...)\n> \n> Vadim\n> --------------A99EE0A2D8F4D665C5BF3957\n> Content-Type: application/octet-stream; name=\"FFF\"\n> Content-Transfer-Encoding: base64\n> Content-Disposition: attachment; filename=\"FFF\"\n> \n> KioqIHNlbGZ1bmNzLmN+CU1vbiBGZWIgIDIgMTM6NTU6NDcgMTk5OAotLS0gc2VsZnVuY3Mu\n> YwlUdWUgRmViICAzIDE3OjM2OjAxIDE5OTgKKioqKioqKioqKioqKioqCioqKiAzMzcsMzQ1\n> ICoqKioKICAJcmVnaXN0ZXIgUmVsYXRpb24gcmRlc2M7CiAgCXJlZ2lzdGVyIEhlYXBTY2Fu\n> RGVzYyBzZGVzYzsKICAJc3RhdGljIFNjYW5LZXlEYXRhIGtleVszXSA9IHsKISAJCXswLCBB\n> bnVtX3BnX3N0YXRpc3RpY19zdGFyZWxpZCwgRl9PSURFUX0sCiEgCQl7MCwgQW51bV9wZ19z\n> dGF0aXN0aWNfc3RhYXR0bnVtLCBGX0lOVDJFUX0sCiEgCQl7MCwgQW51bV9wZ19zdGF0aXN0\n> aWNfc3Rhb3AsIEZfT0lERVF9CiAgCX07CiAgCWJvb2wJCWlzbnVsbDsKICAJSGVhcFR1cGxl\n> CXR1cGxlOwotLS0gMzM3LDM0NSAtLS0tCiAgCXJlZ2lzdGVyIFJlbGF0aW9uIHJkZXNjOwog\n> IAlyZWdpc3RlciBIZWFwU2NhbkRlc2Mgc2Rlc2M7CiAgCXN0YXRpYyBTY2FuS2V5RGF0YSBr\n> ZXlbM10gPSB7CiEgCQl7MCwgQW51bV9wZ19zdGF0aXN0aWNfc3RhcmVsaWQsIEZfT0lERVEs\n> IHswLCAwLCBGX09JREVRfX0sCiEgCQl7MCwgQW51bV9wZ19zdGF0aXN0aWNfc3RhYXR0bnVt\n> LCBGX0lOVDJFUSwgezAsIDAsIEZfSU5UMkVRfX0sCiEgCQl7MCwgQW51bV9wZ19zdGF0aXN0\n> aWNfc3Rhb3AsIEZfT0lERVEsIHswLCAwLCBGX09JREVRfX0KICAJfTsKICAJYm9vbAkJaXNu\n> dWxsOwogIAlIZWFwVHVwbGUJdHVwbGU7Cg==\n> --------------A99EE0A2D8F4D665C5BF3957--\n> \n> \n\n\n-- \nBruce Momjian\nmaillist@candle.pha.pa.us\n", "msg_date": "Wed, 4 Feb 1998 23:27:10 -0500 (EST)", "msg_from": "Bruce Momjian <maillist@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] VACUUM ANALYZE Problem" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> OK, I have looked at this, but can't figure out how to fix the many\n> initializations of ScanKeyData. 
Can someone who understands this please\n> submit a patch for all these initializations so we can stop these vacuum\n> analyze reports?\n> \n> Vadim has found the problem, but we need someone to properly fix it.\n\nJust apply my patch to stop \"analyze-problem\" reports (sorry, I didn't it).\nAs for other (possible) places, note that ScanKeyEntryInitialize()\ninitializes sk_func.fn_oid and so we have to worry about hard-coded\ninitializations only (when ScanKeyEntryInitialize() is not called).\n\nVadim\n", "msg_date": "Thu, 05 Feb 1998 11:58:15 +0700", "msg_from": "\"Vadim B. Mikheev\" <vadim@sable.krasnoyarsk.su>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] VACUUM ANALYZE Problem" }, { "msg_contents": "> \n> This is a multi-part message in MIME format.\n> --------------A99EE0A2D8F4D665C5BF3957\n> Content-Type: text/plain; charset=us-ascii\n> Content-Transfer-Encoding: 7bit\n> \n> James Hughes wrote:\n> > \n> > After poking arround some more, I found that the \"vacuum analyze\" is\n> > causing problems with the \"<\" and \">\" operators. The \"> 0\" in the SELECT\n> > for \"/d <table>\" and \"/dS\" commands in psql cause the error.\n> > \n> > I verified that any simple query using the \"<\" or \">\" operators fail\n> > with the same message...\n> > \n> > ERROR: fmgr_info: function 0: cache lookup failed\n> > \n> > ...after using the \"vacuum analyse\" command.\n> > But, only after vacuuming any relation that was created and populated by\n> > me. Vacumming system catalogs poses no problems.\n> \n> Well, I found that this problem was caused by selfuncs.c:gethilokey():\n> \n> static ScanKeyData key[3] = {\n> {0, Anum_pg_statistic_starelid, F_OIDEQ},\n> {0, Anum_pg_statistic_staattnum, F_INT2EQ},\n> {0, Anum_pg_statistic_staop, F_OIDEQ}\n> \n> : skankeys are hardcoded without call to ScanKeyEntryInitialize() =>\n> without initialization of sk_func.fn_oid required, I assume, by\n> new PL support code. 
Patch for this place follows...\n> One should check all places where ScanKeyData is used.\n> Jan, could you do this ?\n> \n> (Oh, hell! I got this ERROR while testing subselect and spent so many time\n> to fix this problem...)\n\nI assume we can consider this item closed.\n\n-- \nBruce Momjian | 830 Blythe Avenue\nmaillist@candle.pha.pa.us | Drexel Hill, Pennsylvania 19026\n + If your life is a hard drive, | (610) 353-9879(w)\n + Christ can be your backup. | (610) 853-3000(h)\n", "msg_date": "Sun, 15 Mar 1998 23:40:48 -0500 (EST)", "msg_from": "Bruce Momjian <maillist@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] VACUUM ANALYZE Problem" }, { "msg_contents": "\n\nOn Sun, 15 Mar 1998, Bruce Momjian wrote:\n\n: > \n: > This is a multi-part message in MIME format.\n: > --------------A99EE0A2D8F4D665C5BF3957\n: > Content-Type: text/plain; charset=us-ascii\n: > Content-Transfer-Encoding: 7bit\n: > \n: > James Hughes wrote:\n: > > \n: > > After poking arround some more, I found that the \"vacuum analyze\" is\n: > > causing problems with the \"<\" and \">\" operators. The \"> 0\" in the SELECT\n: > > for \"/d <table>\" and \"/dS\" commands in psql cause the error.\n: > > \n: > > I verified that any simple query using the \"<\" or \">\" operators fail\n: > > with the same message...\n: > > \n: > > ERROR: fmgr_info: function 0: cache lookup failed\n: > > \n: > > ...after using the \"vacuum analyse\" command.\n: > > But, only after vacuuming any relation that was created and populated by\n: > > me. 
Vacumming system catalogs poses no problems.\n: > \n: > Well, I found that this problem was caused by selfuncs.c:gethilokey():\n: > \n: > static ScanKeyData key[3] = {\n: > {0, Anum_pg_statistic_starelid, F_OIDEQ},\n: > {0, Anum_pg_statistic_staattnum, F_INT2EQ},\n: > {0, Anum_pg_statistic_staop, F_OIDEQ}\n: > \n: > : skankeys are hardcoded without call to ScanKeyEntryInitialize() =>\n: > without initialization of sk_func.fn_oid required, I assume, by\n: > new PL support code. Patch for this place follows...\n: > One should check all places where ScanKeyData is used.\n: > Jan, could you do this ?\n: > \n: > (Oh, hell! I got this ERROR while testing subselect and spent so many time\n: > to fix this problem...)\n: \n: I assume we can consider this item closed.\n: \n\nThe problem on my system was fixed by the patch. \n\n\n-James\n\n\n", "msg_date": "Thu, 19 Mar 1998 21:17:36 -0500 (EST)", "msg_from": "James Hughes <jamesh@interpath.com>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] VACUUM ANALYZE Problem" } ]
[ { "msg_contents": "\njust testing to make sure\n\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sun, 1 Feb 1998 17:25:48 -0400 (AST)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "bug fixed with mail2news gateway..." } ]
[ { "msg_contents": "> \n> On Sun, 1 Feb 1998, Peter T Mount wrote:\n> \n> > \n> > Has anyone seen the jdbc patch I posted earlier today?\n> > \n> > If not, I'll ftp it.\n> \n> \tI haven't seen it...send me a copy directly? The mailing lists\n> have '# of lines' limits to messages, which may have been where it got\n> lost...\n\nWonder what the limit is? Let me find out by trial and error... :-)\n\n-- \nBruce Momjian\nmaillist@candle.pha.pa.us\n", "msg_date": "Sun, 1 Feb 1998 17:12:33 -0500 (EST)", "msg_from": "Bruce Momjian <maillist@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: [PATCHES] Was the jdbc patch received?" }, { "msg_contents": "On Sun, 1 Feb 1998, Bruce Momjian wrote:\n\n> > \n> > On Sun, 1 Feb 1998, Peter T Mount wrote:\n> > \n> > > \n> > > Has anyone seen the jdbc patch I posted earlier today?\n> > > \n> > > If not, I'll ftp it.\n> > \n> > \tI haven't seen it...send me a copy directly? The mailing lists\n> > have '# of lines' limits to messages, which may have been where it got\n> > lost...\n> \n> Wonder what the limit is? Let me find out by trial and error... :-)\n\n\tI just increased it...when we didn't have the patches list, I had\nincreased hackers, but just switched it around:\n\n # maxlength [integer] (40000) <resend,digest>\n # The maximum size of an unapproved message in characters. When\n # used with digest, a new digest will be automatically generated if\n # the size of the digest exceeds this number of characters.\nmaxlength = 4000000\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sun, 1 Feb 1998 18:26:01 -0400 (AST)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] Was the jdbc patch received?" 
}, { "msg_contents": "> \tI just increased it...when we didn't have the patches list, I had\n> increased hackers, but just switched it around:\n> \n> # maxlength [integer] (40000) <resend,digest>\n> # The maximum size of an unapproved message in characters. When\n> # used with digest, a new digest will be automatically generated if\n> # the size of the digest exceeds this number of characters.\n> maxlength = 4000000\n\nGee, that is taking all the fun out of finding the limit myself. :-)\n\n-- \nBruce Momjian\nmaillist@candle.pha.pa.us\n", "msg_date": "Sun, 1 Feb 1998 17:37:53 -0500 (EST)", "msg_from": "Bruce Momjian <maillist@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: [PATCHES] Was the jdbc patch received?" }, { "msg_contents": "On Sun, 1 Feb 1998, Bruce Momjian wrote:\n\n> > \tI just increased it...when we didn't have the patches list, I had\n> > increased hackers, but just switched it around:\n> > \n> > # maxlength [integer] (40000) <resend,digest>\n> > # The maximum size of an unapproved message in characters. When\n> > # used with digest, a new digest will be automatically generated if\n> > # the size of the digest exceeds this number of characters.\n> > maxlength = 4000000\n> \n> Gee, that is taking all the fun out of finding the limit myself. :-)\n\n\tI could add a random number generator to the code from 1->4000000\nthat changes each time an email is sent, just so that it became a game of\n(excuse Vadim) Russian Roulette? :)\n\nMarc G. Fournier \nSystems Administrator @ hub.org \nprimary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sun, 1 Feb 1998 19:29:59 -0400 (AST)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] Was the jdbc patch received?" } ]
[ { "msg_contents": "I have fixed the regression test problem. The original psort() NULL\nhandling patch is attached. It made the while() into a for() which is\ngood, but he did not remove the increment at the bottom of the for(), so\nnkey was being incremented twice in the loop. Fixed it, and now\nregression looks good to me.\n\nI have updated the files in tools/RELEASE_CHANGES to say 6.3, so I think\nwe are ready for some announcement on 6.3 beta, Marc. I recommend you\nattach the HISTORY file for the release, so people know what is new.\n\n\n---------------------------------------------------------------------------\n\nIndex: psort.c\n===================================================================\nRCS file: /usr/local/cvsroot/pgsql/src/backend/utils/sort/psort.c,v\nretrieving revision 1.33\nretrieving revision 1.34\ndiff -c -r1.33 -r1.34\n*** psort.c\t1998/01/25 05:14:49\t1.33\n--- psort.c\t1998/01/25 05:18:34\t1.34\n***************\n*** 7,13 ****\n *\n *\n * IDENTIFICATION\n! *\t $Header: /usr/local/cvsroot/pgsql/src/backend/utils/sort/psort.c,v 1.33 1998/01/25 05:14:49 momjian Exp $\n *\n * NOTES\n *\t\tSorts the first relation into the second relation.\n--- 7,13 ----\n *\n *\n * IDENTIFICATION\n! *\t $Header: /usr/local/cvsroot/pgsql/src/backend/utils/sort/psort.c,v 1.34 1998/01/25 05:18:34 scrappy Exp $\n *\n * NOTES\n *\t\tSorts the first relation into the second relation.\n***************\n*** 1094,1100 ****\n int\t\tresult = 0;\n bool\tisnull1, isnull2;\n \n! while ( nkey < PsortNkeys && !result )\n {\n \t\tlattr = heap_getattr(*ltup, InvalidBuffer,\n \t\t\t\t PsortKeys[nkey].sk_attno, \n--- 1094,1100 ----\n int\t\tresult = 0;\n bool\tisnull1, isnull2;\n \n! for (nkey = 0; nkey < PsortNkeys && !result; nkey++ )\n {\n \t\tlattr = heap_getattr(*ltup, InvalidBuffer,\n \t\t\t\t PsortKeys[nkey].sk_attno, \n***************\n*** 1106,1119 ****\n \t\t\t\t &isnull2);\n \t\tif ( isnull1 )\n \t\t{\n! \t \tif ( isnull2 )\n! \t \t\treturn (0);\n! \t\t return(1);\n \t\t}\n \t\telse if ( isnull2 )\n! \t\t return (-1);\n \t\t\n! \t\tif (PsortKeys[nkey].sk_flags & SK_COMMUTE)\n \t\t{\n \t \tif (!(result = -(long) (*fmgr_faddr(&PsortKeys[nkey].sk_func)) (rattr, lattr)))\n \t\t\tresult = (long) (*fmgr_faddr(&PsortKeys[nkey].sk_func)) (lattr, rattr);\n--- 1106,1118 ----\n \t\t\t\t &isnull2);\n \t\tif ( isnull1 )\n \t\t{\n! \t\t\tif ( !isnull2 )\n! \t\t\t\tresult = 1;\n \t\t}\n \t\telse if ( isnull2 )\n! \t\t result = -1;\n \t\t\n! \t\telse if (PsortKeys[nkey].sk_flags & SK_COMMUTE)\n \t\t{\n \t \tif (!(result = -(long) (*fmgr_faddr(&PsortKeys[nkey].sk_func)) (rattr, lattr)))\n \t\t\tresult = (long) (*fmgr_faddr(&PsortKeys[nkey].sk_func)) (lattr, rattr);\n\n-- \nBruce Momjian\nmaillist@candle.pha.pa.us\n", "msg_date": "Sun, 1 Feb 1998 17:41:45 -0500 (EST)", "msg_from": "Bruce Momjian <maillist@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "psort fixed" }, { "msg_contents": "> I have fixed the regression test problem. The original psort() NULL\n> handling patch is attached. It made the while() into a for() which is\n> good, but he did not remove the increment at the bottom of the for(), so\n> nkey was being incremented twice in the loop. Fixed it, and now\n> regression looks good to me.\n\nWe be stylin' now... Nice.\n\n - Tom\n\n", "msg_date": "Mon, 02 Feb 1998 01:43:02 +0000", "msg_from": "\"Thomas G. Lockhart\" <lockhart@alumni.caltech.edu>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] psort fixed" } ]