[ { "msg_contents": "Folks, \n\nConsider this performance quandry brought to me by Elein, which I can replcate \nin 7.2.3 and in 7.4 devel:\n\ncase_clients is a medium-large table with about 110,000 rows. The field \ndate_resolved is a timestamp field which is indexed and allows nulls (in \nfact, is null for 40% of entries).\n\nFirst, as expected, a regular aggregate is slow:\n\njwnet=> explain analyze select max(date_resolved) from case_clients;\nNOTICE: QUERY PLAN:\n\nAggregate (cost=3076.10..3076.10 rows=1 width=4) (actual time=484.24..484.24 \nrows=1 loops=1)\n -> Seq Scan on case_clients (cost=0.00..2804.48 rows=108648 width=4) \n(actual time=0.08..379.81 rows=108648 loops=1)\nTotal runtime: 484.44 msec\n\n\nSo we use the workaround standard for PostgreSQL:\n\njwnet=> explain analyze select date_resolved from case_clients order by \ndate_resolved desc limit 1;\nNOTICE: QUERY PLAN:\n\nLimit (cost=0.00..1.50 rows=1 width=4) (actual time=0.22..0.23 rows=1 \nloops=1)\n -> Index Scan Backward using idx_caseclients_resolved on case_clients \n(cost=0.00..163420.59 rows=108648 width=4) (actual time=0.21..0.22 rows=2 \nloops=1)\nTotal runtime: 0.33 msec\n\n... which is fast, but returns NULL, since nulls sort to the bottom! So we \nadd IS NOT NULL:\n\njwnet=> explain analyze select date_resolved from case_clients where \ndate_resolved is not null order by date_resolved desc limit 1;\nNOTICE: QUERY PLAN:\n\nLimit (cost=0.00..4.06 rows=1 width=4) (actual time=219.63..219.64 rows=1 \nloops=1)\n -> Index Scan Backward using idx_caseclients_resolved on case_clients \n(cost=0.00..163420.59 rows=40272 width=4) (actual time=219.62..219.62 rows=2 \nloops=1)\nTotal runtime: 219.76 msec\n\nAieee! Almost as slow as the aggregate!\n\nNow, none of those times is huge on this test database, but on a larger \ndatabase (> 1million rows) the performance problem is much worse. For some \nreason, the backward index scan seems to have to transverse all of the NULLs \nbefore selecting a value. I find this peculiar, as I was under the \nimpression that NULLs were not indexed.\n\nWhat's going on here?\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Fri, 13 Dec 2002 11:55:51 -0800", "msg_from": "Josh Berkus <josh@agliodbs.com>", "msg_from_op": true, "msg_subject": "Odd Sort/Limit/Max Problem" }, { "msg_contents": "\nOn Fri, 13 Dec 2002, Josh Berkus wrote:\n\n> First, as expected, a regular aggregate is slow:\n\n> So we use the workaround standard for PostgreSQL:\n>\n> ... which is fast, but returns NULL, since nulls sort to the bottom! So we\n> add IS NOT NULL:\n>\n> jwnet=> explain analyze select date_resolved from case_clients where\n> date_resolved is not null order by date_resolved desc limit 1;\n> NOTICE: QUERY PLAN:\n>\n> Limit (cost=0.00..4.06 rows=1 width=4) (actual time=219.63..219.64 rows=1\n> loops=1)\n> -> Index Scan Backward using idx_caseclients_resolved on case_clients\n> (cost=0.00..163420.59 rows=40272 width=4) (actual time=219.62..219.62 rows=2\n> loops=1)\n> Total runtime: 219.76 msec\n>\n> Aieee! Almost as slow as the aggregate!\n\nI'd suggest trying a partial index on date_resolved where date_resolve is\nnot null. 
In my simple tests on about 200,000 rows of ints where 50% are\nnull that sort of index cut the runtime on my machine from 407.66 msec to\n0.15 msec.\n\n\n", "msg_date": "Fri, 13 Dec 2002 12:10:20 -0800 (PST)", "msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>", "msg_from_op": false, "msg_subject": "Re: Odd Sort/Limit/Max Problem" }, { "msg_contents": "Josh Berkus <josh@agliodbs.com> writes:\n> Now, none of those times is huge on this test database, but on a larger \n> database (> 1million rows) the performance problem is much worse. For some \n> reason, the backward index scan seems to have to transverse all of the NULLs \n> before selecting a value.\n\nCorrect. You lose, if there are a lot of nulls. Unfortunately, the\n\"IS NOT NULL\" clause isn't considered an indexable operator and so the\nindexscan has no idea that it shouldn't return the null rows. If it\ncould just traverse past them in the index, this example wouldn't be so\nbad, but it goes out and fetches the heap rows before discarding 'em :-(\n\n> I find this peculiar, as I was under the \n> impression that NULLs were not indexed.\n\nNot correct. btrees index NULLs, as they must do in order to have\ncorrect behavior for multicolumn indexes.\n\n\nI think it would work to instead do something like\n\nselect date_resolved from case_clients\nwhere date_resolved < 'infinity'\norder by date_resolved desc\nlimit 1;\n\nsince then the indexscan will get a qualifier condition that will allow\nit to discard the nulls. In fact, I think this will even prevent\nhaving to traverse past the nulls in the index --- the original form\nstarts the indexscan at the index end, but this should do a btree\ndescent search to exactly the place you want. Note that the\nwhere-clause has to match the scan direction (> or >= for ASC, < or <=\nfor DESC) so that it looks like a \"start here\" condition to btree.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 13 Dec 2002 15:24:23 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Odd Sort/Limit/Max Problem " }, { "msg_contents": "Tom Lane kirjutas L, 14.12.2002 kell 01:24:\n> Josh Berkus <josh@agliodbs.com> writes:\n> > Now, none of those times is huge on this test database, but on a larger \n> > database (> 1million rows) the performance problem is much worse. For some \n> > reason, the backward index scan seems to have to transverse all of the NULLs \n> > before selecting a value.\n> \n> Correct. You lose, if there are a lot of nulls. Unfortunately, the\n> \"IS NOT NULL\" clause isn't considered an indexable operator and so the\n> indexscan has no idea that it shouldn't return the null rows. If it\n> could just traverse past them in the index, this example wouldn't be so\n> bad, but it goes out and fetches the heap rows before discarding 'em :-(\n> \n> > I find this peculiar, as I was under the \n> > impression that NULLs were not indexed.\n> \n> Not correct. btrees index NULLs, as they must do in order to have\n> correct behavior for multicolumn indexes.\n\nI've heard this befoe, but this is something I've never understood - why\ndo you have to index _single_ null's in order to behave correctly for\nmulti-column index. \n\nIs it that postgres thinks that tuple of several nulls is the same as\nnull ? 
\n\nIs it just that nulls need to have an ordering and that this fact has\nsomehow leaked down to actually being stored in the index ?\n\nI don't have anything against nulls being indexed - in a table where\nnulls have about the same frequency as other values it may actually be\nuseful (if indexes were used to find IS NULL tuples)\n\n\n-- \nHannu Krosing <hannu@tm.ee>\n", "msg_date": "14 Dec 2002 03:22:08 +0500", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: Odd Sort/Limit/Max Problem" }, { "msg_contents": "Hannu Krosing <hannu@tm.ee> writes:\n> Tom Lane kirjutas L, 14.12.2002 kell 01:24:\n>> Not correct. btrees index NULLs, as they must do in order to have\n>> correct behavior for multicolumn indexes.\n\n> I've heard this befoe, but this is something I've never understood - why\n> do you have to index _single_ null's in order to behave correctly for\n> multi-column index. \n\nWell, you don't absolutely *have to* index individual nulls, but you do\nhave to support nulls in index entries.\n\nThe example that motivates this is\n\n\tcreate table foo (f1 int, f2 int);\n\tcreate index fooi on foo(f1,f2);\n\t... fill table ...\n\tselect * from foo where f1 = 42;\n\nThe planner is entitled to implement this as an indexscan using fooi's\nfirst column (and ignoring its lower-order column(s)). Now if fooi does\nnot index rows in which f2 is null, you lose, because it may omit rows\nwith f1 = 42 that should have been found by the indexscan. So it *has\nto* be able to store index entries like (42, NULL).\n\nFor btree's purposes, the easiest implementation is to say that NULL is\nan ordinary index entry with a definable sort position (which we chose\nto define as \"after all non-NULL values\"). There's no particular value\nin having a special case for all-NULL index entries, so we don't.\n\nGiST is not able to handle all-NULL index entries, so it uses the rule\n\"index all rows in which the first index column is not NULL\". This\nstill meets the planner's constraint because we never do an indexscan\nthat uses only lower-order index columns.\n\nhash and rtree don't support NULL index entries, but they don't support\nmulticolumn indexes either, so the constraint doesn't apply.\n\n> I don't have anything against nulls being indexed - in a table where\n> nulls have about the same frequency as other values it may actually be\n> useful (if indexes were used to find IS NULL tuples)\n\nAt least for btree, it would be nice to someday allow IS NULL as an\nindexable operator. I haven't thought very hard about how to do that;\nshoehorning it into the operator class structure looks like it'd be a\nhorrid mess, so it'd probably require some creative klugery :-(\n\n> Is it just that nulls need to have an ordering and that this fact has\n> somehow leaked down to actually being stored in the index ?\n\nNo, more the other way around: btree assigns an ordering to NULLs\nbecause it must do so in order to know where to put them in the index.\nThis is an artifact of btree that happens to \"leak upward\" ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 13 Dec 2002 19:03:55 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Odd Sort/Limit/Max Problem " } ]
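A minimal sketch of Tom's suggestion from the thread above, using only the table, column, and index names that appear in Josh's EXPLAIN output (the rest of the table definition is assumed). The point is that an indexable qual lets the backward scan descend straight to the largest non-NULL entry instead of walking past the NULLs stored at the high end of the btree:

    -- hypothetical, cut-down version of case_clients
    CREATE TABLE case_clients (
        client_id     serial PRIMARY KEY,
        date_resolved timestamp            -- NULL for ~40% of rows
    );
    CREATE INDEX idx_caseclients_resolved ON case_clients (date_resolved);

    -- slow in 7.2/7.3: max() never uses the index
    SELECT max(date_resolved) FROM case_clients;

    -- fast: the indexable qual skips the NULLs; note that the
    -- comparison direction has to match the DESC sort, as Tom says
    SELECT date_resolved
      FROM case_clients
     WHERE date_resolved < 'infinity'
     ORDER BY date_resolved DESC
     LIMIT 1;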
[ { "msg_contents": "Thank you for a good workaround.\n\nEven BETTER would be to fix the aggregates so workarounds wouldn't have to\nbe found.\n\nThanks again,\n\nL.\nOn Fri, 13 Dec 2002, Josh Berkus wrote:\n\n> \n> \n> ---------- Forwarded Message ----------\n> \n> Subject: Re: [PERFORM] Odd Sort/Limit/Max Problem\n> Date: Fri, 13 Dec 2002 12:10:20 -0800 (PST)\n> From: Stephan Szabo <sszabo@megazone23.bigpanda.com>\n> To: Josh Berkus <josh@agliodbs.com>\n> Cc: <pgsql-performance@postgresql.org>\n> \n> On Fri, 13 Dec 2002, Josh Berkus wrote:\n> \n> > First, as expected, a regular aggregate is slow:\n> \n> > So we use the workaround standard for PostgreSQL:\n> >\n> > ... which is fast, but returns NULL, since nulls sort to the bottom! So we\n> > add IS NOT NULL:\n> >\n> > jwnet=> explain analyze select date_resolved from case_clients where\n> > date_resolved is not null order by date_resolved desc limit 1;\n> > NOTICE: QUERY PLAN:\n> >\n> > Limit (cost=0.00..4.06 rows=1 width=4) (actual time=219.63..219.64 rows=1\n> > loops=1)\n> > -> Index Scan Backward using idx_caseclients_resolved on case_clients\n> > (cost=0.00..163420.59 rows=40272 width=4) (actual time=219.62..219.62 rows=2\n> > loops=1)\n> > Total runtime: 219.76 msec\n> >\n> > Aieee! Almost as slow as the aggregate!\n> \n> I'd suggest trying a partial index on date_resolved where date_resolve is\n> not null. In my simple tests on about 200,000 rows of ints where 50% are\n> null that sort of index cut the runtime on my machine from 407.66 msec to\n> 0.15 msec.\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> \n> \n> -------------------------------------------------------\n> \n> \n\n-- \nLaurette Cisneros\nThe Database Group\n(510) 420-3137\nNextBus Information Systems, Inc.\nwww.nextbus.com\n----------------------------------\nThere's more to life than just SQL.\n\n", "msg_date": "Fri, 13 Dec 2002 16:00:45 -0800 (PST)", "msg_from": "Laurette Cisneros <laurette@nextbus.com>", "msg_from_op": true, "msg_subject": "Re: Fwd: Re: [PERFORM] Odd Sort/Limit/Max Problem" } ]
[ { "msg_contents": "I have been trying tune joins against a view we use a lot for which\nthe optimizer generates very poor query plans when it uses the GEQO.\nThe long involved version (and more readable version) of the problem\nis here: http://xarg.net/writing/misc/GEQO\n\nI have tried doing a variety of explicit joins but generally end up\nwith something a lot poorer than the result from the exhaustive\nsearch. I am hoping someone has some advice on how to tackle this (my\ninclination is to turn of GEQO since we use this and similiarly\ncomplex views quite a lot and with a poor plan these queries are very\nslow, I would trade predictably slow query planning against\nunpredictably slow queries I guess).\n\n\nAnyway, Here is the view:\n\ncreate view cc_users as\nSELECT o.*, pa.*, pe.*, u.*, mr.member_state, mr.rel_id\n FROM acs_objects o, parties pa, persons pe, users u, group_member_map m, membership_rels mr\n WHERE o.object_id = pa.party_id\n and pa.party_id = pe.person_id\n and pe.person_id = u.user_id\n and u.user_id = m.member_id\n and m.group_id = acs__magic_object_id('registered_users')\n and m.rel_id = mr.rel_id\n and m.container_id = m.group_id;\n\n\nand here are the two query plans: \n\noatest=# set geqo_threshold to 11; explain analyze select * from cc_users u, forums_messages m where u.user_id = m.user_id and m.message_id = 55001;\nSET VARIABLE\nNOTICE: QUERY PLAN:\n\nNested Loop (cost=15202.01..19099.49 rows=1 width=1483) (actual time=6012.96..6054.26 rows=1 loops=1)\n -> Index Scan using forums_messages_pk on forums_messages m (cost=0.00..3.38 rows=1 width=983) (actual time=0.06..0.08 rows=1 loops=1)\n -> Materialize (cost=18571.15..18571.15 rows=41997 width=500) (actual time=5996.36..6009.62 rows=42002 loops=1)\n -> Hash Join (cost=15202.01..18571.15 rows=41997 width=500) (actual time=4558.36..5920.36 rows=42002 loops=1)\n -> Merge Join (cost=0.00..3089.82 rows=42002 width=354) (actual time=0.13..651.67 rows=42002 loops=1)\n -> Index Scan using parties_pk on parties pa (cost=0.00..992.58 rows=42018 width=146) (actual time=0.05..122.78 rows=42018 loops=1)\n -> Index Scan using users_pk on users u (cost=0.00..1362.17 rows=42002 width=208) (actual time=0.03..223.07 rows=42002 loops=1)\n -> Hash (cost=15097.01..15097.01 rows=41997 width=146) (actual time=4558.05..4558.05 rows=0 loops=1)\n -> Hash Join (cost=4639.30..15097.01 rows=41997 width=146) (actual time=1512.75..4445.08 rows=42002 loops=1)\n -> Seq Scan on acs_objects o (cost=0.00..8342.17 rows=318117 width=90) (actual time=0.03..1567.37 rows=318117 loops=1)\n -> Hash (cost=4534.30..4534.30 rows=41997 width=56) (actual time=1511.87..1511.87 rows=0 loops=1)\n -> Hash Join (cost=2951.31..4534.30 rows=41997 width=56) (actual time=857.33..1291.41 rows=42002 loops=1)\n -> Seq Scan on persons pe (cost=0.00..848.02 rows=42002 width=32) (actual time=0.01..73.65 rows=42002 loops=1)\n -> Hash (cost=2846.30..2846.30 rows=42004 width=24) (actual time=856.92..856.92 rows=0 loops=1)\n -> Hash Join (cost=1318.18..2846.30 rows=42004 width=24) (actual time=584.26..806.18 rows=42002 loops=1)\n -> Seq Scan on membership_rels mr (cost=0.00..688.04 rows=42004 width=16) (actual time=0.01..60.95 rows=42004 loops=1)\n -> Hash (cost=1213.16..1213.16 rows=42009 width=8) (actual time=583.69..583.69 rows=0 loops=1)\n -> Seq Scan on group_element_index (cost=0.00..1213.16 rows=42009 width=8) (actual time=0.05..430.06 rows=42002 loops=1)\nTotal runtime: 6064.47 msec\n\n------------------------------------------------------------\n\noatest=# set 
geqo_threshold to 15; explain analyze select * from cc_users u, forums_messages m where u.user_id = m.user_id and m.message_id = 55001;\nSET VARIABLE\nNOTICE: QUERY PLAN:\n\nNested Loop (cost=0.00..21.65 rows=1 width=1483) (actual time=0.42..0.44 rows=1 loops=1)\n -> Nested Loop (cost=0.00..18.62 rows=1 width=1451) (actual time=0.36..0.37 rows=1 loops=1)\n -> Nested Loop (cost=0.00..15.59 rows=1 width=1435) (actual time=0.30..0.32 rows=1 loops=1)\n -> Nested Loop (cost=0.00..12.54 rows=1 width=1289) (actual time=0.22..0.23 rows=1 loops=1)\n -> Nested Loop (cost=0.00..9.44 rows=1 width=1199) (actual time=0.17..0.18 rows=1 loops=1)\n -> Nested Loop (cost=0.00..6.41 rows=1 width=991) (actual time=0.12..0.13 rows=1 loops=1)\n -> Index Scan using forums_messages_pk on forums_messages m (cost=0.00..3.38 rows=1 width=983) (actual time=0.06..0.06 rows=1 loops=1)\n -> Index Scan using group_elem_idx_element_idx on group_element_index (cost=0.00..3.02 rows=1 width=8) (actual time=0.05..0.05 rows=1 loops=1)\n -> Index Scan using users_pk on users u (cost=0.00..3.02 rows=1 width=208) (actual time=0.03..0.03 rows=1 loops=1)\n -> Index Scan using acs_objects_pk on acs_objects o (cost=0.00..3.08 rows=1 width=90) (actual time=0.03..0.03 rows=1 loops=1)\n -> Index Scan using parties_pk on parties pa (cost=0.00..3.04 rows=1 width=146) (actual time=0.05..0.05 rows=1 loops=1)\n -> Index Scan using membership_rel_rel_id_pk on membership_rels mr (cost=0.00..3.01 rows=1 width=16) (actual time=0.02..0.02 rows=1 loops=1)\n -> Index Scan using persons_pk on persons pe (cost=0.00..3.01 rows=1 width=32) (actual time=0.03..0.03 rows=1 loops=1)\nTotal runtime: 1.01 msec\n\n", "msg_date": "Mon, 16 Dec 2002 13:55:29 -0500", "msg_from": "Jeff Davis <davis@netcomuk.co.uk>", "msg_from_op": true, "msg_subject": "Problem with GEQO when using views and nested selects" }, { "msg_contents": "Jeff Davis <davis@netcomuk.co.uk> writes:\n> I have been trying tune joins against a view we use a lot for which\n> the optimizer generates very poor query plans when it uses the GEQO.\n> The long involved version (and more readable version) of the problem\n> is here: http://xarg.net/writing/misc/GEQO\n\nThis is not actually using GEQO. The reason you are seeing an effect\nfrom raising geqo_threshold is that geqo_threshold determines whether\nor not the view will be flattened into the upper query. For this\nparticular query situation, flattening the view is essential (since you\ndon't want the thing to compute the whole view). The relevant source\ncode tidbit is\n\n /*\n * Yes, so do we want to merge it into parent? Always do\n * so if child has just one element (since that doesn't\n * make the parent's list any longer). Otherwise we have\n * to be careful about the increase in planning time\n * caused by combining the two join search spaces into\n * one. 
Our heuristic is to merge if the merge will\n * produce a join list no longer than GEQO_RELS/2.\n * (Perhaps need an additional user parameter?)\n */\n\nAFAICS, your only good solution is to make geqo_threshold at least 14,\nsince you want a 7-way join after flattening.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 16 Dec 2002 14:30:16 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Problem with GEQO when using views and nested selects " }, { "msg_contents": " Tom> Jeff Davis <davis@netcomuk.co.uk> writes:\n >> I have been trying tune joins against a view we use a lot for which\n >> the optimizer generates very poor query plans when it uses the GEQO.\n >> The long involved version (and more readable version) of the problem\n >> is here: http://xarg.net/writing/misc/GEQO\n\n Tom> This is not actually using GEQO. The reason you are seeing an effect\n Tom> from raising geqo_threshold is that geqo_threshold determines whether\n Tom> or not the view will be flattened into the upper query. For this\n Tom> particular query situation, flattening the view is essential (since you\n Tom> don't want the thing to compute the whole view). The relevant source\n Tom> code tidbit is\n\n Tom> /*\n Tom> * Yes, so do we want to merge it into parent? Always do\n Tom> * so if child has just one element (since that doesn't\n Tom> * make the parent's list any longer). Otherwise we have\n Tom> * to be careful about the increase in planning time\n Tom> * caused by combining the two join search spaces into\n Tom> * one. Our heuristic is to merge if the merge will\n Tom> * produce a join list no longer than GEQO_RELS/2.\n Tom> * (Perhaps need an additional user parameter?)\n Tom> */\n\n Tom> AFAICS, your only good solution is to make geqo_threshold at least 14,\n Tom> since you want a 7-way join after flattening.\n\nThanks very much. I have to admit it was all very mysterious to me\nand the only knobs I had seemed to indicate that the GEQO was the\nissue.\n\nI think having another user parameter as mentioned in the comment is a\ngood idea (although I see it's been discussed before), that or maybe\nsome better guidance on the actual interpretation of GEQO_THRESHOLD\n(the comment is hugely more illuminating than the documentation on\nthis point).\n\nNow that I understand what is going on, I know in our case this crops\nup a fair bit and no one had really figured ever figured out what was\ncausing views to work ok some of the time and then fall over in other\nqueries.\n\n\n\n\n", "msg_date": "Mon, 16 Dec 2002 16:47:17 -0500", "msg_from": "Jeff Davis <davis@netcomuk.co.uk>", "msg_from_op": true, "msg_subject": "Re: Problem with GEQO when using views and nested selects " } ]
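Per Tom's reply, the practical fix for Jeff's view is simply to raise geqo_threshold high enough (at least 14 for the 7-way join after flattening) that the subquery gets pulled up into the parent query. A sketch, with the query taken verbatim from the thread:

    -- per session
    SET geqo_threshold TO 14;

    -- or permanently in postgresql.conf:
    --   geqo_threshold = 14

    EXPLAIN ANALYZE
    SELECT *
      FROM cc_users u, forums_messages m
     WHERE u.user_id = m.user_id
       AND m.message_id = 55001;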
[ { "msg_contents": "Folks,\n\nI had a request from one of the SF-PUG members that I found\ninteresting. She suggested that we post the specs of some of the\nPostgreSQL servers that we administrate, their statistics, and some\ncomments on how they perform. I'll start it off with an example:\n\nSmall Cheap Workgroup Server\nAMD Athalon 700mhz\n256mb SDRAM\nDual 20gb 7200rpm IDE Drives (1 IBM, 1 Quantum)\n with OS, Apache, XLog on 1st drive,\n Postgres Database, Swap on 2nd drive\nRunning SuSE Linux 7.3\n Apache 1.3.x\n PHP 4.0.x\n PostgreSQL 7.1.3\n3-8 concurrent users on intranet application\nwith large transactions but low transaction frequency\n(est. 20-300 reads and 5-80 writes per hour)\non small database (< 20,000 records combined in main tables)\n\nPerformance assessment: Adequate, reasonably fast\non selects except aggregates, commits taking 5-20 seconds\nduring medium activity. Same system with a Celeron 500\npreviously demonstrated horrible performance (often > 45 seconds\non selects) on complex queries, such as one view with\ncustom aggregates.\n\n\n-Josh Berkus\n\n\n______AGLIO DATABASE SOLUTIONS___________________________\n Josh Berkus\n Complete information technology josh@agliodbs.com\n and data management solutions (415) 565-7293\n for law firms, small businesses fax 621-2533\n and non-profit organizations. San Francisco\n", "msg_date": "Mon, 16 Dec 2002 20:03:51 -0800", "msg_from": "\"Josh Berkus\" <josh@agliodbs.com>", "msg_from_op": true, "msg_subject": "Profiling" }, { "msg_contents": "Folks,\n\n> I had a request from one of the SF-PUG members that I found\n> interesting. She suggested that we post the specs of some of the\n> PostgreSQL servers that we administrate, their statistics, and some\n> comments on how they perform. I'll start it off with an example:\n> \n> Small Cheap Workgroup Server\n> AMD Athalon 700mhz\n> 256mb SDRAM\n> Dual 20gb 7200rpm IDE Drives (1 IBM, 1 Quantum)\n> with OS, Apache, XLog on 1st drive,\n> Postgres Database, Swap on 2nd drive\n> Running SuSE Linux 7.3\n> Apache 1.3.x\n> PHP 4.0.x\n> PostgreSQL 7.1.3\n> 3-8 concurrent users on intranet application\n> with large transactions but low transaction frequency\n> (est. 20-300 reads and 5-80 writes per hour)\n> on small database (< 20,000 records combined in main tables)\n> \n> Performance assessment: Adequate, reasonably fast\n> on selects except aggregates, commits taking 5-20 seconds\n> during medium activity. Same system with a Celeron 500\n> previously demonstrated horrible performance (often > 45 seconds\n> on selects) on complex queries, such as one view with\n> custom aggregates.\n\nOh, and I forgot:\n\nshared_buffers 4096\nsort_mem 2048\nwal_files 8\nwal_sync_method = fdatasync\n\n-Josh\n\n______AGLIO DATABASE SOLUTIONS___________________________\n Josh Berkus\n Complete information technology josh@agliodbs.com\n and data management solutions (415) 565-7293\n for law firms, small businesses fax 621-2533\n and non-profit organizations. San Francisco\n", "msg_date": "Mon, 16 Dec 2002 20:12:47 -0800", "msg_from": "\"Josh Berkus\" <josh@agliodbs.com>", "msg_from_op": true, "msg_subject": "Re: Profiling" }, { "msg_contents": "Josh Berkus wrote:\n> Folks,\n> \n> \n>>I had a request from one of the SF-PUG members that I found\n>>interesting. She suggested that we post the specs of some of the\n>>PostgreSQL servers that we administrate, their statistics, and some\n>>comments on how they perform. 
I'll start it off with an example:\n>>\n>>Small Cheap Workgroup Server\n>>AMD Athalon 700mhz\n>>256mb SDRAM\n>>Dual 20gb 7200rpm IDE Drives (1 IBM, 1 Quantum)\n>> with OS, Apache, XLog on 1st drive,\n>> Postgres Database, Swap on 2nd drive\n>>Running SuSE Linux 7.3\n>> Apache 1.3.x\n>> PHP 4.0.x\n>> PostgreSQL 7.1.3\n>>3-8 concurrent users on intranet application\n>>with large transactions but low transaction frequency\n>>(est. 20-300 reads and 5-80 writes per hour)\n>>on small database (< 20,000 records combined in main tables)\n>>\n>>Performance assessment: Adequate, reasonably fast\n>>on selects except aggregates, commits taking 5-20 seconds\n>>during medium activity. Same system with a Celeron 500\n>>previously demonstrated horrible performance (often > 45 seconds\n>>on selects) on complex queries, such as one view with\n>>custom aggregates.\n> \n> \n> Oh, and I forgot:\n> \n> shared_buffers 4096\n> sort_mem 2048\n> wal_files 8\n> wal_sync_method = fdatasync\n\nHi Josh,\n\nWant to CVS checkout the latest OSDB source code (http://www.sf.net/projects/osdb), generate say a 100MB database and do \na multiuser test of 20 or so users on it?\n\n:-)\n\nRegards and best wishes,\n\nJustin Clift\n\n\n> -Josh\n> \n> ______AGLIO DATABASE SOLUTIONS___________________________\n> Josh Berkus\n> Complete information technology josh@agliodbs.com\n> and data management solutions (415) 565-7293\n> for law firms, small businesses fax 621-2533\n> and non-profit organizations. San Francisco\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n- Indira Gandhi\n\n", "msg_date": "Tue, 17 Dec 2002 15:30:55 +1100", "msg_from": "Justin Clift <justin@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: Profiling" }, { "msg_contents": "On Tuesday 17 December 2002 09:33 am, you wrote:\n> Folks,\n>\n> I had a request from one of the SF-PUG members that I found\n> interesting. She suggested that we post the specs of some of the\n> PostgreSQL servers that we administrate, their statistics, and some\n> comments on how they perform. I'll start it off with an example:\n\nOK my take.\n\nP-II-450MHz/256MB/20GB IDE. Mandrake8.2, postgresql 7.2.x\n\nPGBench, with 10M records/10,000 transactions/10 users.\n\nThroughput 25tps.\n\nRest of the things were default. I am not too sure of details as this was more \nthan 4 months back and that machine is windows now.\n\nSame machine/Another benchmark\n\nBanking application simulation.\n\nShared buffers 14000\nNumber of records: 100 in one table, continously updated+log table continously \ninserted\nThroughput 200tps.\n\n HTH\n \n Shridhar\n\n", "msg_date": "Tue, 17 Dec 2002 12:27:07 +0530", "msg_from": "Shridhar Daithankar <shridhar_daithankar@persistent.co.in>", "msg_from_op": false, "msg_subject": "Re: Profiling" }, { "msg_contents": "On Tuesday, December 17, 2002, at 12:57 AM, Shridhar Daithankar wrote:\n\n> On Tuesday 17 December 2002 09:33 am, you wrote:\n>> Folks,\n>>\n>> I had a request from one of the SF-PUG members that I found\n>> interesting. She suggested that we post the specs of some of the\n>> PostgreSQL servers that we administrate, their statistics, and some\n>> comments on how they perform. 
I'll start it off with an example:\n>\n\nMy take:\n\nDual PIII-1.12Ghz, 3Gb, 5 x 36 RAID 5'ed with a spare, RedHat 7.2, Pg \n7.3\n\npgbench, default settings, 252tps inc. connex, 409tps excluding connex\n\nDay to day, runs a monitoring/historical analysis tool of my design \nwith gathers metrics from around 30 hosts (they report every 10 \nminutes, by their clock). Has 3,689,652 rows as of right now in the \n'metrics' table, which is indexed by timestamp.\n\nMy 'main' query is in the form of:\n\nSELECT timestamp, data FROM metrics WHERE resgroupid=? and hostid=? AND \ntimestamp BETWEEN ? AND ? ORDER BY timestamp\n\nIndex is on timestamp.\n\nThis query generally takes about half a second for 24 hours worth of \ndata. I just ran a 240 hour query on a test database with about 20,000 \nrows and the result too 2998ms.\n\nThings slowed to a crawl about 2 weeks ago, so I upgraded to 7.3 and \nsaw a huge improvement. I believe part of this might have been due to \nthe recreation of the database, similar to a CLUSTER. My performance \nis not degrading from a time perspective, but CPU usage is steadily \ndegrading. User time is steadily increasing over the last 240 hours, \nfrom 5% to 15%. Attached is output of my monitoring program (well, the \nnew improved Java version) showing the CPU performance over the last \n240 hours.\n\nshared_buffers = 98304\nsort_mem = 1600\nfsync = false\n\nEverything else is default, recommendations welcome. ;)\n\n\n\n\n\nCory 'G' Watson", "msg_date": "Tue, 17 Dec 2002 07:29:48 -0600", "msg_from": "Cory 'G' Watson <gphat@cafes.net>", "msg_from_op": false, "msg_subject": "Re: Profiling" }, { "msg_contents": "On Tuesday, December 17, 2002, at 12:57 AM, Shridhar Daithankar wrote:\n\n> On Tuesday 17 December 2002 09:33 am, you wrote:\n>> Folks,\n>>\n>> I had a request from one of the SF-PUG members that I found\n>> interesting. She suggested that we post the specs of some of the\n>> PostgreSQL servers that we administrate, their statistics, and some\n>> comments on how they perform. I'll start it off with an example:\n>\n\nMy take:\n\nDual PIII-1.12Ghz, 3Gb, 5 x 36 RAID 5'ed with a spare, RedHat 7.2, Pg \n7.3\n\npgbench, default settings, 252tps inc. connex, 409tps excluding connex\n\nDay to day, runs a monitoring/historical analysis tool of my design \nwith gathers metrics from around 30 hosts (they report every 10 \nminutes, by their clock). Has 3,689,652 rows as of right now in the \n'metrics' table, which is indexed by timestamp.\n\nMy 'main' query is in the form of:\n\nSELECT timestamp, data FROM metrics WHERE resgroupid=? and hostid=? AND \ntimestamp BETWEEN ? AND ? ORDER BY timestamp\n\nIndex is on timestamp.\n\nThis query generally takes about half a second for 24 hours worth of \ndata. I just ran a 240 hour query on a test database with about 20,000 \nrows and the result too 2998ms.\n\nThings slowed to a crawl about 2 weeks ago, so I upgraded to 7.3 and \nsaw a huge improvement. I believe part of this might have been due to \nthe recreation of the database, similar to a CLUSTER. My performance \nis not degrading from a time perspective, but CPU usage is steadily \ndegrading. User time is steadily increasing over the last 240 hours, \nfrom 5% to 15%. Attached is output of my monitoring program (well, the \nnew improved Java version) showing the CPU performance over the last \n240 hours.\n\nshared_buffers = 98304\nsort_mem = 1600\nfsync = false\n\nEverything else is default, recommendations welcome. 
;)\n\n\n\n\n\nCory 'G' Watson", "msg_date": "Tue, 17 Dec 2002 07:30:13 -0600", "msg_from": "Cory 'G' Watson <gphat@cafes.net>", "msg_from_op": false, "msg_subject": "Re: Profiling" }, { "msg_contents": "On Tuesday 17 December 2002 06:59 pm, you wrote:\n> shared_buffers = 98304\n> sort_mem = 1600\n> fsync = false\n>\n> Everything else is default, recommendations welcome. ;)\n\nWhat is the vacuum frequency?\n\n Shridhar\n\n", "msg_date": "Tue, 17 Dec 2002 19:05:41 +0530", "msg_from": "Shridhar Daithankar <shridhar_daithankar@persistent.co.in>", "msg_from_op": false, "msg_subject": "Re: Profiling" }, { "msg_contents": "\nOn Tuesday, December 17, 2002, at 07:35 AM, Shridhar Daithankar wrote:\n\n> On Tuesday 17 December 2002 06:59 pm, you wrote:\n>> shared_buffers = 98304\n>> sort_mem = 1600\n>> fsync = false\n>>\n>> Everything else is default, recommendations welcome. ;)\n>\n> What is the vacuum frequency?\n\nEvery morning. This db is almost exclusively INSERT and SELECT. Well, \nI take that back, a single table gets UPDATEs rather frequently. \nOtherwise, INSERT only.\n\nCory 'G' Watson\n\n", "msg_date": "Tue, 17 Dec 2002 07:43:28 -0600", "msg_from": "Cory 'G' Watson <gphat@cafes.net>", "msg_from_op": false, "msg_subject": "Re: Profiling" }, { "msg_contents": "On Tuesday 17 December 2002 07:13 pm, you wrote:\n> On Tuesday, December 17, 2002, at 07:35 AM, Shridhar Daithankar wrote:\n> > What is the vacuum frequency?\n>\n> Every morning. This db is almost exclusively INSERT and SELECT. Well,\n> I take that back, a single table gets UPDATEs rather frequently.\n> Otherwise, INSERT only.\n\ni recommend a vacuum analyze per 1000/2000 records for the table that gets \nupdated. It should boost the performance like anything..\n\n Shridhar\n", "msg_date": "Tue, 17 Dec 2002 19:19:33 +0530", "msg_from": "Shridhar Daithankar <shridhar_daithankar@persistent.co.in>", "msg_from_op": false, "msg_subject": "Re: Profiling" }, { "msg_contents": "\nOn Tuesday, December 17, 2002, at 07:49 AM, Shridhar Daithankar wrote:\n\n> On Tuesday 17 December 2002 07:13 pm, you wrote:\n>> On Tuesday, December 17, 2002, at 07:35 AM, Shridhar Daithankar wrote:\n>>> What is the vacuum frequency?\n>>\n>> Every morning. This db is almost exclusively INSERT and SELECT. \n>> Well,\n>> I take that back, a single table gets UPDATEs rather frequently.\n>> Otherwise, INSERT only.\n>\n> i recommend a vacuum analyze per 1000/2000 records for the table that \n> gets\n> updated. It should boost the performance like anything..\n\nBy my math, I'll need to vacuum once every hour or so. Cron, here I \ncome.\n\nvacuumdb --table cached_metrics loggerithim\n\nI assume I do not need a --analyze, since that table has no indexes. \nShould I vacuum the entire DB?\n\nAny other settings I should look at? Note that I'm not necessarily \nhaving any problems at present, but one can always tune. This DB is \nused with a web app (mod_perl/DBI) at the moment, but is moving to a \nJava Swing client, which will give me much more data about performance.\n\nCory 'G' Watson\n\n", "msg_date": "Tue, 17 Dec 2002 08:20:21 -0600", "msg_from": "Cory 'G' Watson <gphat@cafes.net>", "msg_from_op": false, "msg_subject": "Re: Profiling" }, { "msg_contents": "On 17 Dec 2002 at 8:20, Cory 'G' Watson wrote:\n> By my math, I'll need to vacuum once every hour or so. Cron, here I \n> come.\n> \n> vacuumdb --table cached_metrics loggerithim\n\nhttp://gborg.postgresql.org/project/pgavd/projdisplay.php\n\nYeah, yeah.. I wrote that..and use CVS as usual. 
No release as yet..\n\n \n> I assume I do not need a --analyze, since that table has no indexes. \n> Should I vacuum the entire DB?\n\nYou need analyse to keep vacuum non-locking I assume. And there is no need to \nvacuum entire DB.\n\nHTH\n\nBye\n Shridhar\n\n--\npaycheck:\tThe weekly $5.27 that remains after deductions for federal\t\nwithholding, state withholding, city withholding, FICA,\tmedical/dental, long-\nterm disability, unemployment insurance,\tChristmas Club, and payroll savings \nplan contributions.\n\n", "msg_date": "Tue, 17 Dec 2002 19:59:42 +0530", "msg_from": "\"Shridhar Daithankar\" <shridhar_daithankar@persistent.co.in>", "msg_from_op": false, "msg_subject": "Re: Profiling" }, { "msg_contents": "\"Cory 'G' Watson\" <gphat@cafes.net> writes:\n> I assume I do not need a --analyze, since that table has no indexes. \n\nWhether you need analyze or not has nothing to do with whether there\nare indexes. You probably don't need it once an hour, but maybe once\na day would be good.\n\n> Should I vacuum the entire DB?\n\nOverkill; just get the heavily-updated table(s). A DB-wide vacuum must\nbe done occasionally, but again once-a-day would be plenty.\n\n> Any other settings I should look at?\n\nFree space map (fsm) settings must be adequate to keep track of the free\nspace in your tables.\n\nHowever, all of this relates only to keeping performance good on the\ntable with lots of updates. If you are seeing progressive degradation\non a table that only gets INSERTs, then there's something else going on.\nAFAIR you didn't show us an EXPLAIN ANALYZE for the principal query?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 17 Dec 2002 09:49:16 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Profiling " }, { "msg_contents": "OK, my turn.\n\nWe have two main servers that are identical, one is the online server, the \nother is the hot spare. Their specs:\n\nDual PIII-750 1.5 Gig ram and dual 18 Gig 10krpm UW SCSI drives.\nOS on one drive, postgresql on the other.\n\nInteresting postgresql.conf entries:\n\nmax_connections = 128\nshared_buffers = 32768 \nmax_fsm_relations = 10000\nsort_mem = 2048\nvacuum_mem = 8192\ncpu_tuple_cost = 0.01\ncpu_index_tuple_cost = 0.0001\ncpu_operator_cost = 0.05\n\npgbench -c 4 -t 200 delivers about 240 tps.\n\nPerformance is outstanding. This machine runs apache, OpenLDAP, Real \nServer 1, as well as Postgresql. All non-database data is stored on a \nNAS, so the local drives are only used for swap and postgresql. Average \nload is about 40 to 200 reads per minute, with only a handful of writes \nper minute (1 to 5 max). Most data is loaded from nightly runs out of \nthe mainframe and a few other systems for things like company phonebook \nand ldap.\n\nMy test servers:\n\nServer A: Dual PPro 200 with 256 Meg RAM and 6x4Gig 10kRPM UW SCSI drives \n(3 quantum, 3 seagate) and 2x80Gig 7200 RPM IDE drives. \n\nData is generally stored on the pair of 80 gig drives, because the 4 gig \nscsis just aren't big enough. The 80 gig ides are setup as two 40 gig \nmirrors (i.e. they're split in half) with the other half used to store \nbackups and such.\n\nshared_buffers = 5000\n\npgbench -c 4 -t 200 yields about 80 tps. \n\nPerformance is actually quite good, and this is a box we bought in 1997. \n\nServer B: (My workstation) Celeron 1.1GHz, with 512 MEg RAM and a 40 gig \nIDE @7200 RPM, and a 17 Gig IDE @5400 RPM.\n\nshared_buffers = 4096\n\npgbench -c 4 -t 200 yields about 75 tps. Yes, my dual PPro 200 outruns \nthis box. 
But then again, my workstation has KDE up and running with \nMozilla, xmms mp3 player going, and a couple other programs running as \nwell. \n\nAll of these boxes are / were heavily tested before deployment, and we \nhave never had a problem with postgresql on any of them.\n\n", "msg_date": "Tue, 17 Dec 2002 09:18:02 -0700 (MST)", "msg_from": "\"scott.marlowe\" <scott.marlowe@ihs.com>", "msg_from_op": false, "msg_subject": "Re: Profiling" } ]
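A sketch of the maintenance routine Shridhar and Tom converge on for Cory's workload: frequent vacuums of the one table that takes constant UPDATEs, plus much less frequent analyze and database-wide passes. Only the table name cached_metrics comes from the thread; the frequencies are the ones suggested there:

    -- vacuum the hot table often (Cory works out to roughly once an hour)
    VACUUM cached_metrics;

    -- analyze it, and vacuum the rest of the database, about once a day
    VACUUM ANALYZE;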
[ { "msg_contents": "Hi\nFew days ago I read, that EXISTS is better than IN, but only if there \nare many records (how many?). I was wondering which one is better and \nwhen. Did anyone try to compare these queries doing the same work:\n\n- select * from some_table t\n where t.id [not] in (select id from filter);\n- select * from some_table t\n where [not] exists (select * from filter where id=t.id);\n- select * from some_table t\n left join filter f using (id)\n where f.id is [not] null;\n\nRegards,\nTomasz Myrta\n\n", "msg_date": "Thu, 19 Dec 2002 11:12:01 +0100", "msg_from": "Tomasz Myrta <jasiek@klaster.net>", "msg_from_op": true, "msg_subject": "EXISTS vs IN vs OUTER JOINS" }, { "msg_contents": "Tomasz,\n\n> Few days ago I read, that EXISTS is better than IN, but only if there\n> are many records (how many?). I was wondering which one is better and\n> when. Did anyone try to compare these queries doing the same work:\n> \n> - select * from some_table t\n> where t.id [not] in (select id from filter);\n> -select * from some_table t\n> where [not] exists (select * from filter where id=t.id);\n\nThe rule I use is: if I expect the sub-select to return more than 12\nrecords 20% or more of the time, use EXISTS. The speed gain for IN on\nsmall lists is not as dramatic as the speed loss for EXISTS on large\nlists.\n\nMore importantly, the difference between NOT IN and NOT EXISTS can be\nas much as 20:1 on large sub-selects, as opposed to IN and EXISTS,\nwhere I have rarely seen a difference of more than 3:1. As I\nunderstand it, this is because NOT EXISTS can use optimized join\nalgorithms to locate matching rows, whereas NOT IN must compare each\nrow against every possible matching value in the subselect.\n\nIt also makes a difference whether or not the referenced field(s) in\nthe subselect is indexed. EXISTS will often use an index to compare\nthe values in the master query to the sub-query. As far as I know, IN\ncan use an index to retrieve the subquery values, but not to sort or\ncompare them after they have been retreived into memory.\n\n> -select * from some_table t\n> left join filter f using (id)\n> where f.id is [not] null;\n\nThis will not get you the same result as the above. It will get you\nall rows from t+f where a record is present in f and has (or does not\nhave) a NULL value for f.id. While this type of query works in MS\nAccess, it will not work in SQL92/99-commpliant databases.\n\nIncidentally, the dramatic differences between IN and EXISTS are not\nonly a \"PostgreSQL Thing\". The same rules apply to MS SQL Server and\nSQL Anywhere, for the same reasons.\n\n-Josh Berkus\n\n", "msg_date": "Thu, 19 Dec 2002 09:15:36 -0800", "msg_from": "\"Josh Berkus\" <josh@agliodbs.com>", "msg_from_op": false, "msg_subject": "Re: EXISTS vs IN vs OUTER JOINS" }, { "msg_contents": "On Thu, Dec 19, 2002 at 09:15:36AM -0800, Josh Berkus wrote:\n> > -select * from some_table t\n> > left join filter f using (id)\n> > where f.id is [not] null;\n>\n> This will not get you the same result as the above. It will get you\n> all rows from t+f where a record is present in f and has (or does not\n> have) a NULL value for f.id. While this type of query works in MS\n> Access, it will not work in SQL92/99-commpliant databases.\n\nfilter_table does not contain null fields. 
It has only two states: it\nhas row\nor it hasn't row coresponding to row in some_table.\n\nAnd now, which one is better?\n\nTomasz Myrta\n", "msg_date": "Thu, 19 Dec 2002 18:27:33 +0100", "msg_from": "jasiek@klaster.net", "msg_from_op": false, "msg_subject": "Re: EXISTS vs IN vs OUTER JOINS" }, { "msg_contents": "Josh Berkus wrote:\n> where I have rarely seen a difference of more than 3:1. As I\n> understand it, this is because NOT EXISTS can use optimized join\n> algorithms to locate matching rows, whereas NOT IN must compare each\n> row against every possible matching value in the subselect.\n> \n> It also makes a difference whether or not the referenced field(s) in\n> the subselect is indexed. EXISTS will often use an index to compare\n> the values in the master query to the sub-query. As far as I know, IN\n> can use an index to retrieve the subquery values, but not to sort or\n> compare them after they have been retreived into memory.\n\nI wonder if \"[NOT] IN (subselect)\" could be improved with a hash table in \nsimilar fashion to the hash aggregate solution Tom recently implemented?\n\nJoe\n\n\n", "msg_date": "Thu, 19 Dec 2002 09:43:24 -0800", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: EXISTS vs IN vs OUTER JOINS" }, { "msg_contents": "\nTomasz,\n\n> > This will not get you the same result as the above. It will get you\n> > all rows from t+f where a record is present in f and has (or does not\n> > have) a NULL value for f.id. While this type of query works in MS\n> > Access, it will not work in SQL92/99-commpliant databases.\n> \n> filter_table does not contain null fields. It has only two states: it\n> has row\n> or it hasn't row coresponding to row in some_table.\n> \n> And now, which one is better?\n\nYou're not listening. I said that LEFT JOIN won't work. At all.\n\nPlease re-read the paragraph above, which explains why.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Thu, 19 Dec 2002 11:02:39 -0800", "msg_from": "Josh Berkus <josh@agliodbs.com>", "msg_from_op": false, "msg_subject": "Re: EXISTS vs IN vs OUTER JOINS" }, { "msg_contents": "On Thu, Dec 19, 2002 at 11:02:39AM -0800, Josh Berkus wrote:\n>\n> Tomasz,\n> You're not listening. I said that LEFT JOIN won't work. 
At all.\n>\n> Please re-read the paragraph above, which explains why.\nI read your mail once again, but I still don't understand what are you\ntalking about.\nI'll write example - maybe it will help us to understand each other.\n\n\nI have three tables: users, things and access_list\ncreate table users(\nuser_id integer primary key,\nusername varchar\n);\ninsert into users(1,'Tomasz');\n\ncreate table things(\nthing_id int4 primary key,\nthingname varchar\n);\ninsert into things(1,'thing1');\ninsert into things(2,'thing2');\ninsert into things(3,'thing3');\ninsert into things(4,'thing4');\ninsert into things(5,'thing5');\n\ncreate table access_list(\nuser_id int4 not null references users,\nthing_id int4 not null references things\n);\ninsert into access_list(1,1);\ninsert into access_list(1,4);\n\nSELECT u.username,t.thingname,al.thing_id \nfrom users u cross join things t \nleft join access_list al on (s.user_id=al.user_id and\nt.thing_id=al.thing_id)\n\nResult:\nusername thingname thing_id\nTomasz\t thing1\t 1\nTomasz\t thing2\t \nTomasz\t thing3\t \nTomasz\t thing4\t 4\nTomasz\t thing5\t 5\n\nNow if we add \"where al.user_id is null\" we get:\nTomasz\t thing2\t \nTomasz\t thing3\t \n\nOr if we add \"where al.user_id is not null\" we get:\n(the same result we have when using inner join)\nTomasz\t thing1\t 1\nTomasz\t thing4\t 4\nTomasz\t thing5\t 5\n\nI know this method will fail if we have not unique pairs in table\naccess_list, but in other case it looks ok.\nTomasz Myrta\n", "msg_date": "Thu, 19 Dec 2002 20:28:30 +0100", "msg_from": "jasiek@klaster.net", "msg_from_op": false, "msg_subject": "Re: EXISTS vs IN vs OUTER JOINS" }, { "msg_contents": "Joe Conway <mail@joeconway.com> writes:\n> I wonder if \"[NOT] IN (subselect)\" could be improved with a hash table in \n> similar fashion to the hash aggregate solution Tom recently implemented?\n\nIt's being worked on ;-)\n\nhttp://archives.postgresql.org/pgsql-hackers/2002-11/msg01055.php\n\nAssuming I get this done, the conventional wisdom that \"EXISTS\noutperforms IN\" will be stood on its head --- unless we add planner code\nto try to reverse-engineer an IN from an EXISTS, which is something I'm\nnot really eager to expend code and cycles on.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 19 Dec 2002 17:46:20 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: EXISTS vs IN vs OUTER JOINS " }, { "msg_contents": "Tom Lane wrote:\n> Joe Conway <mail@joeconway.com> writes:\n> > I wonder if \"[NOT] IN (subselect)\" could be improved with a hash table in \n> > similar fashion to the hash aggregate solution Tom recently implemented?\n> \n> It's being worked on ;-)\n> \n> http://archives.postgresql.org/pgsql-hackers/2002-11/msg01055.php\n> \n> Assuming I get this done, the conventional wisdom that \"EXISTS\n> outperforms IN\" will be stood on its head --- unless we add planner code\n> to try to reverse-engineer an IN from an EXISTS, which is something I'm\n> not really eager to expend code and cycles on.\n\nI am looking forward to removing _that_ FAQ item. :-)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 19 Dec 2002 17:52:24 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: EXISTS vs IN vs OUTER JOINS" }, { "msg_contents": "Tomasz,\n\n> I read your mail once again, but I still don't understand what are\n> you\n> talking about.\n> I'll write example - maybe it will help us to understand each other.\n\nHmmm ... you're right. Sorry for being dense. It shouldn't work, but\nit does. \n\nTom, Bruce:\n\nIf I run the query:\n\nSELECT t1.* \nFROM table1 t1\n LEFT JOIN table2 t2 ON t1.xid = t2.xid\nWHERE t2.label IS NULL\n\nI will get rows in t1 for which there is no row in t2. This does not\nseem SQL-spec to me; shouldn't I get only rows from t1 where a row\nexists in t2 and t2.label IS NULL?\n\n-Josh\n", "msg_date": "Thu, 19 Dec 2002 15:19:21 -0800", "msg_from": "\"Josh Berkus\" <josh@agliodbs.com>", "msg_from_op": false, "msg_subject": "Re: EXISTS vs IN vs OUTER JOINS" }, { "msg_contents": "\"Josh Berkus\" <josh@agliodbs.com> writes:\n> If I run the query:\n\n> SELECT t1.* \n> FROM table1 t1\n> LEFT JOIN table2 t2 ON t1.xid = t2.xid\n> WHERE t2.label IS NULL\n\n> I will get rows in t1 for which there is no row in t2. This does not\n> seem SQL-spec to me; shouldn't I get only rows from t1 where a row\n> exists in t2 and t2.label IS NULL?\n\nNo; that would be the behavior of an inner join, but you did a left\njoin. The above will give you t1 rows for which there is no match in t2\n(plus rows for which there is a match containing null in t2.label).\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 19 Dec 2002 19:02:13 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: EXISTS vs IN vs OUTER JOINS " }, { "msg_contents": "On Thu, 19 Dec 2002 15:19:21 -0800, \"Josh Berkus\" <josh@agliodbs.com>\nwrote:\n>SELECT t1.* \n>FROM table1 t1\n> LEFT JOIN table2 t2 ON t1.xid = t2.xid\n>WHERE t2.label IS NULL\n\nJosh, also note that Tomasz did something like\n\nSELECT t1.* \nFROM table1 t1\n LEFT JOIN table2 t2 ON t1.xid = t2.xid\nWHERE t2.xid IS NULL\n ^^^\nThis special trick guarantees that you get exactly the rows from t1\nnot having a matching row in t2, because a NULL xid in t2 would not\nsatisfy the condition t1.xid = t2.xid.\n\nServus\n Manfred\n", "msg_date": "Fri, 20 Dec 2002 09:53:56 +0100", "msg_from": "Manfred Koizar <mkoi-pg@aon.at>", "msg_from_op": false, "msg_subject": "Re: EXISTS vs IN vs OUTER JOINS" } ]
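Spelling out the three query shapes the thread compares, against Tomasz's own users/things/access_list example ("things user 1 has no access to"). These are restatements of what is discussed above; the NOT IN and NOT EXISTS forms assume an index on access_list(user_id, thing_id) if they are to perform well:

    -- NOT IN: simple, but slow on 7.2/7.3 for large subselects
    SELECT t.*
      FROM things t
     WHERE t.thing_id NOT IN (SELECT thing_id FROM access_list WHERE user_id = 1);

    -- NOT EXISTS: usually the faster form on these releases
    SELECT t.*
      FROM things t
     WHERE NOT EXISTS (SELECT 1
                         FROM access_list al
                        WHERE al.user_id = 1
                          AND al.thing_id = t.thing_id);

    -- LEFT JOIN anti-join: as Manfred notes, testing the join column
    -- itself (al.thing_id IS NULL) is what makes this form correct,
    -- and it assumes the (user_id, thing_id) pairs are unique
    SELECT t.*
      FROM things t
      LEFT JOIN access_list al
             ON al.user_id = 1 AND al.thing_id = t.thing_id
     WHERE al.thing_id IS NULL;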
[ { "msg_contents": "On Thu, 19 Dec 2002 14:10:58 -0500, george young <gry@ll.mit.edu>\nwrote:\n>with 4 billion(4e9) rows. I would guess to make wafer, die_row, etc. be of\n>type \"char\", probably testtype a char too with a separate testtype lookup table.\n>Even so, this will be a huge table.\n\nDon't know if you can store 0-127 in a \"char\" column ... Anyway, it\ndoesn't matter, if it does not cause the tuple size to cross a 4 byte\nboundary, because the tuple size will be rounded up to a multiple of\n4.\n \n>Questions: How much overhead will there be in the table in addition to the\n>9 bytes of data I see?\n\nThere is a page header (ca. 20 bytes) per page (8K by default). Then\nyou have a tuple header and 4 bytes ItemIdData per tuple.\n\nPG 7.2: Without NULLs a tuple header is 32 bytes, add 4 bytes for each\ntuple containing at least one NULL column.\n\nPG 7.3: 24 bytes tuple header (with and without NULLs, because you\nhave only 8 columns).\n\n>How big will the primary index on the first seven columns be?\n\nDon't know offhand. No time now to dig it out. Will answer tomorrow,\nif nobody else jumps in ...\n\nServus\n Manfred\n", "msg_date": "Thu, 19 Dec 2002 20:07:53 +0100", "msg_from": "Manfred Koizar <mkoi-pg@aon.at>", "msg_from_op": true, "msg_subject": "Re: 4G row table?" }, { "msg_contents": "[linux, 700MHz athlon, 512MB RAM, 700GB 10kRPM SCSI HW RAID, postgresql 7.2]\nWe're setting up a DB of IC test data, which is very simple and regular, but large.\nOne project (we get three or four per year) has ~4 giga bits, each specified by\na few small integer values, e.g.:\n Name Type Values\n ----------------------\n wafer int\t 1-50\n die_row int 2-7\n die_col int 2-7\n testtype string (~10 different short strings)\n vdd int 0-25\n bit_col int 0-127\n bit_row int 0-511\n value bit 0 or 1\n\nwith 4 billion(4e9) rows. I would guess to make wafer, die_row, etc. be of\ntype \"char\", probably testtype a char too with a separate testtype lookup table.\nEven so, this will be a huge table. \n\nQuestions: How much overhead will there be in the table in addition to the\n9 bytes of data I see? How big will the primary index on the first seven columns\nbe? Will this schema work at all? \n\nOf course, we could pack 128 bits into an 8 byte \"text\" field (or should we use bit(128)?),\nbut lose some ease of use, especially for naive (but important) users.\n\nComments, suggestions?\n\n-- George\n \n-- \n I cannot think why the whole bed of the ocean is\n not one solid mass of oysters, so prolific they seem. Ah,\n I am wandering! Strange how the brain controls the brain!\n\t-- Sherlock Holmes in \"The Dying Detective\"\n", "msg_date": "Thu, 19 Dec 2002 14:10:58 -0500", "msg_from": "george young <gry@ll.mit.edu>", "msg_from_op": false, "msg_subject": "4G row table?" }, { "msg_contents": "George,\n\n> [linux, 700MHz athlon, 512MB RAM, 700GB 10kRPM SCSI HW RAID, postgresql 7.2]\n\nWhat kind of RAID? How many drives? Will you be updating the data \nfrequently, or mostly just running reports on it?\n\nWith 4G rows, you will have *heavy* disk access, so the configuration and \nquality of your disk array is a big concern. 
You also might think about \nupping th ememory if you can.\n\n> We're setting up a DB of IC test data, which is very simple and regular, but \nlarge.\n> One project (we get three or four per year) has ~4 giga bits, each specified \nby\n> a few small integer values, e.g.:\n> Name Type Values\n> ----------------------\n> wafer int\t 1-50\n> die_row int 2-7\n> die_col int 2-7\n> testtype string (~10 different short strings)\n> vdd int 0-25\n> bit_col int 0-127\n> bit_row int 0-511\n> value bit 0 or 1\n> \n> with 4 billion(4e9) rows. I would guess to make wafer, die_row, etc. be of\n> type \"char\", probably testtype a char too with a separate testtype lookup \ntable.\n> Even so, this will be a huge table. \n\n1. Use INT2 and not INT for the INT values above. If you can hire a \nPostgreSQL hacker, have them design a new data type for you, an unsigned INT1 \nwhich will cut your storage space even further.\n\n2. Do not use CHAR for wafer & die-row. CHAR requries min 3bytes storage; \nINT2 is only 2 bytes.\n\n3. If you can use a lookup table for testtype, make it another INT2 and create \na numeric key for the lookup table.\n\n> Questions: How much overhead will there be in the table in addition to the\n> 9 bytes of data I see? \n\nThere's more than 9 bytes in the above. Count again.\n\n> How big will the primary index on the first seven columns\n> be? Will this schema work at all? \n\nAs large as the 7 columns themselves, plus a little more. I suggest creating \na surrogate key as an int8 sequence to refer to most rows. \n\n> Of course, we could pack 128 bits into an 8 byte \"text\" field (or should we \nuse bit(128)?),\n> but lose some ease of use, especially for naive (but important) users.\n\nThis is also unlikely to be more efficient due to the translation<->conversion \nprocess requried to access the data when you query.\n\n> Comments, suggestions?\n\nUnless you have a *really* good RAID array, expect slow performance on this \nhardware platform.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Thu, 19 Dec 2002 11:15:20 -0800", "msg_from": "Josh Berkus <josh@agliodbs.com>", "msg_from_op": false, "msg_subject": "Re: 4G row table?" }, { "msg_contents": "Josh:\n\nWhy do you say to expect slow performance on this hardware? Is there \nsomething specific about the configuration that worries you? Or, just \nlots of data in the database, so the data will be on disk and not in the \ncache (system or postgresql)? \n\nWhat do you classify as *slow*? Obviously, he is dependent on the I/O \nchannel given the size of the tables. So, good indexing will be \nrequired to help on the queries. No comments on the commit rate for \nthis data (I am guessing that it is slow, given the description of the \ndatabase), so that may or may not be an issue. \n\nDepending on the type of queries, perhaps clustering will help, along \nwith some good partitioning indexes. \n\nI just don't see the slow in the hardware. Of course, if he is \ntargeting lots of concurrent queries, better add some additional \nprocessors, or better yet, use ERSERVER and replicate the data to a farm \nof machines. [To avoid the I/O bottleneck of lots of concurrent queries \nagainst these large tables].\n\nI guess there are a lot of assumptions on the data's use to decide if \nthe hardware is adequate or not :-)\n\nCharlie\n\n\n\nJosh Berkus wrote:\n\n>George,\n> \n>\n>>[linux, 700MHz athlon, 512MB RAM, 700GB 10kRPM SCSI HW RAID, postgresql 7.2]\n>> \n>>\n>\n>What kind of RAID? How many drives? 
Will you be updating the data \n>frequently, or mostly just running reports on it?\n> \n>\n>Unless you have a *really* good RAID array, expect slow performance on this \n>hardware platform.\n>\n> \n>\n\n-- \n\n\nCharles H. Woloszynski\n\nClearMetrix, Inc.\n115 Research Drive\nBethlehem, PA 18015\n\ntel: 610-419-2210 x400\nfax: 240-371-3256\nweb: www.clearmetrix.com\n\n\n\n\n\n", "msg_date": "Thu, 19 Dec 2002 14:27:25 -0500", "msg_from": "\"Charles H. Woloszynski\" <chw@clearmetrix.com>", "msg_from_op": false, "msg_subject": "Re: 4G row table?" }, { "msg_contents": "On Thu, 2002-12-19 at 13:10, george young wrote:\n> [linux, 700MHz athlon, 512MB RAM, 700GB 10kRPM SCSI HW RAID, postgresql 7.2]\n> We're setting up a DB of IC test data, which is very simple and regular, but large.\n> One project (we get three or four per year) has ~4 giga bits, each specified by\n> a few small integer values, e.g.:\n> Name Type Values\n> ----------------------\n> wafer int\t 1-50\n> die_row int 2-7\n> die_col int 2-7\n> testtype string (~10 different short strings)\n> vdd int 0-25\n> bit_col int 0-127\n> bit_row int 0-511\n> value bit 0 or 1\n> \n> with 4 billion(4e9) rows. I would guess to make wafer, die_row, etc. be of\n> type \"char\", probably testtype a char too with a separate testtype lookup table.\n> Even so, this will be a huge table. \n\nHow many records per day will be inserted?\n\nWill they ever be updated?\n\nDo you have to have *ALL* 4 billion records in the same table at the\nsame time? As Josh Berkus mentioned, wafer thru bit_col can be\nconverted to INT2, if you make testtype use a lookup table; thus, each\ntuple could be shrunk to 20 bytes, plus 24 bytes per tuple (in v7.3)\nthat would make the table a minimum of 189 billion bytes, not\nincluding index!!!\n\nRethink your solution...\n\nOne possibility would to have a set of tables, with names like:\nTEST_DATA_200301\nTEST_DATA_200302\nTEST_DATA_200303\nTEST_DATA_200304\nTEST_DATA_200305\nTEST_DATA_200306\nTEST_DATA_200307\nTEST_DATA_<etc>\n\nThen, each month do \"CREATE VIEW TEST_DATA AS TEST_DATA_yyyymm\" for the\ncurrent month.\n\n\n> Questions: How much overhead will there be in the table in addition to the\n> 9 bytes of data I see? How big will the primary index on the first seven columns\n> be? Will this schema work at all? \n> \n> Of course, we could pack 128 bits into an 8 byte \"text\" field (or should we use bit(128)?),\n> but lose some ease of use, especially for naive (but important) users.\n> \n> Comments, suggestions?\n> \n> -- George\n> \n-- \n+---------------------------------------------------------------+\n| Ron Johnson, Jr. mailto:ron.l.johnson@cox.net |\n| Jefferson, LA USA http://members.cox.net/ron.l.johnson |\n| |\n| \"My advice to you is to get married: If you find a good wife, |\n| you will be happy; if not, you will become a philosopher.\" |\n| Socrates |\n+---------------------------------------------------------------+\n\n", "msg_date": "19 Dec 2002 13:36:36 -0600", "msg_from": "Ron Johnson <ron.l.johnson@cox.net>", "msg_from_op": false, "msg_subject": "Re: 4G row table?" }, { "msg_contents": "On Thu, 19 Dec 2002 14:10:58 -0500, george young <gry@ll.mit.edu>\nwrote:\n>with 4 billion(4e9) rows.\n>How big will the primary index on the first seven columns be?\n\nIf you manage to pack the key into 8 bytes (by using a custom 1 byte\ninteger datatype) and if there are no NULLs:\n\n 75 GB with a 100% fill factor,\n 114 GB with a 66% fill factor,\nrealistically something in between. 
Note that frequent updates can\ncause index growth.\n\n>Will this schema work at all? \n\nYou have a somewhat unusual identifier : payload ratio (8B : 1b). It\ndepends on the planned use, but I'm not sure if *any* database is the\nright solution. You have \"only\" 30670848000 (30G) possible different\nkey combinations, more than 1/8 of them (4G) are actually used. A\n7-dimensional array of double-bits (1 bit to indicate a valid value\nand 1 bit payload) would require not more than 8 GB.\n\nIf you plan to use a database because you have to answer ad-hoc\nqueries, you will almost certainly need additonal indices.\n\nServus\n Manfred\n", "msg_date": "Fri, 20 Dec 2002 11:27:18 +0100", "msg_from": "Manfred Koizar <mkoi-pg@aon.at>", "msg_from_op": true, "msg_subject": "Re: 4G row table?" }, { "msg_contents": "Charlie,\n\n> Why do you say to expect slow performance on this hardware? Is\n> there something specific about the configuration that worries you?\n> Or, just lots of data in the database, so the data will be on disk\n> and not in the cache (system or postgresql)? \n> What do you classify as *slow*? Obviously, he is dependent on the\n> I/O channel given the size of the tables. So, good indexing will be\n> required to help on the queries. No comments on the commit rate for\n> this data (I am guessing that it is slow, given the description of\n> the database), so that may or may not be an issue. \n> Depending on the type of queries, perhaps clustering will help, along\n> with some good partitioning indexes. \n> I just don't see the slow in the hardware. Of course, if he is\n> targeting lots of concurrent queries, better add some additional\n> processors, or better yet, use ERSERVER and replicate the data to a\n> farm of machines. [To avoid the I/O bottleneck of lots of concurrent\n> queries against these large tables].\n> \n> I guess there are a lot of assumptions on the data's use to decide if\n> the hardware is adequate or not :-)\n\nWell, slow is relative. It may be fast enough for him. Me, I'd be\nscreaming in frustration.\n\nTake, for example, an index scan on the primary key. Assuming that he\ncan get the primary key down to 12 bytes per node using custom data\ntypes, that's still:\n\n12bytes * 4,000,000,000 rows = 48 GB for the index\n\nAs you can see, it's utterly impossible for him to load even the\nprimary key index into his 512 MB of RAM (of which no more than 200mb\ncan go to Postgres anyway without risking conflicts over RAM). A\nSort-and-Limit on the primary key, for example, would require swapping\nthe index from RAM to swap space as much as 480 times! (though probably\nmore like 100 times on average)\n\nWith a slow RAID array and the hardware he described to us, this would\nmean, most likely, that a simple sort-and-limit on primary key query\ncould take hours to execute. Even with really fast disk access, we're\ntalking tens of minutes at least.\n\n-Josh Berkus\n\n\n\n", "msg_date": "Fri, 20 Dec 2002 09:01:28 -0800", "msg_from": "\"Josh Berkus\" <josh@agliodbs.com>", "msg_from_op": false, "msg_subject": "Re: 4G row table?" } ]
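A minimal sketch of the schema changes suggested in this thread: int2 columns throughout, a small lookup table for testtype, and one data table per project hidden behind a view so that no single table or index ever has to hold all 4e9 rows. All names below are invented for illustration, not taken from the poster's schema:

    CREATE TABLE testtype_lu (
        testtype_id int2 PRIMARY KEY,
        testtype    text NOT NULL          -- the ~10 short strings
    );

    CREATE TABLE test_data_proj1 (
        wafer       int2 NOT NULL,         -- 1-50
        die_row     int2 NOT NULL,         -- 2-7
        die_col     int2 NOT NULL,         -- 2-7
        testtype_id int2 NOT NULL REFERENCES testtype_lu,
        vdd         int2 NOT NULL,         -- 0-25
        bit_col     int2 NOT NULL,         -- 0-127
        bit_row     int2 NOT NULL,         -- 0-511
        value       boolean NOT NULL       -- the 0/1 payload
    );

    -- redefine the view as projects come and go
    CREATE VIEW test_data AS SELECT * FROM test_data_proj1;

Each int2 saves two bytes over a plain int, and the per-project split keeps the primary-key index small enough to have some hope of staying cached; whether it is worth the bookkeeping depends on how often queries cross project boundaries.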
[ { "msg_contents": "Hello,\n\nWe are considering switching our systems over from MySQL to Postgresql.\n\nSpeed is one of our major concerns, so before switching we've decided \nto perform some speed tests.\n From what I understand, Postgresql is NOT as fast as Mysql, but should \nbe close enough.\n\nWe've installed the software and have run some basic insert, index and \nquery tests that\nseem ridiculously slow. I can't help thinking that we are doing \nsomething wrong, or\ndon't have things configured for optimal performance.\n\nWe've performed these same tests on Mysql and then run dramatically \nfaster.\n\nHere's the initial performance test results and issues...\n\nTable configuration:\nspeedtest( prop1 integer, prop2 integer, prop3 integer, prop4 integer);\nindexes on each of the four individual property fields\n\nEach record consists of four random integers, uniformly distributed,\nbetween 0 and 1000. The integers are computed in the perl script\nused to populate the table, not using an SQL random() function.\n\nHardware configuration: P3-500, 384MB ram, *unloaded* system.\nSoftware configuration: Linux 2.4.20, reiserfs, standard slackware \ninstall.\n\nIssue #1: Speed of inserts is relatively slow. 100000 inserts is \ntaking\nroughly 10 minutes. This isn't EVIL, but mysql appears to be about\nten times faster here. Is there something we could do to the indexes\ndifferently? Disable transactions? Is there a more \"raw\" insert, which\nmay not set off triggers?\n\nIssue #2: It doesn't appear as though multiple indexes are being used.\nie: select count(*) from speedtest where (prop1 between 100 and 200)\nand( prop2 between 100 and 200) and (prop3 between 100 and 200)\nand (prop4 between 100 and 200) formulates a query plan that only\nuses one index. The following is pasted from the 'explain select' ---\n\n Aggregate (cost=17.16..17.16 rows=1 width=0)\n -> Index Scan using p4 on speedtest (cost=0.00..17.16 rows=1 \nwidth=0)\n Index Cond: ((prop4 >= 100) AND (prop4 <= 200))\n Filter: ((prop1 >= 100) AND (prop1 <= 200) AND (prop2 >= 100) \nAND\n(prop2 <= 200) AND (prop3 >= 100) AND (prop3 <= 200))\n(4 rows)\n\nIt appears as though the index on prop4 is being used to determine a \nsubset\nof records to fetch -- subsequently filtering them with the other \nconditions.\nUnfortunately, since the index condition matches 10% of the table (due \nto\nthe random uniform integers from 0-1000), this results in a large \nnumber of\nrecord fetches and examinations the db engine must make. This query \ntakes\nat least a second to execute, whereas we would like to be able to drop \nthis\ninto the sub-0.1 second range, and preferably into the millisecond \nrange.\nWhile this would run faster on the production machines than on my \nworkstation,\nit is still a fundamental flaw that multiple indexes aren't being \ncombined to\nrestrict the record set to fetch.\n\nOTOH, if we could do index combining, we could fetch 10% of 10% of 10%\nof the initial 10% of records... Resulting in a microscopic number of \nitems\nto retrieve and examine.\n\nCan anybody give me some ideas as to what I am doing wrong???\n\nThanks,\n\n-Noah\n\n", "msg_date": "Fri, 20 Dec 2002 17:57:28 -0500", "msg_from": "Noah Silverman <noah@allresearch.com>", "msg_from_op": true, "msg_subject": "Speed Question" }, { "msg_contents": "On Fri, 20 Dec 2002, Noah Silverman wrote:\n\n> Issue #1: Speed of inserts is relatively slow. 100000 inserts is \n> taking\n> roughly 10 minutes. 
This isn't EVIL, but mysql appears to be about\n> ten times faster here. Is there something we could do to the indexes\n> differently? Disable transactions? Is there a more \"raw\" insert, which\n> may not set off triggers?\n\nAre you doing these in a transaction? If not, then try adding a \nbegin;end; pair around your inserts. i.e.\n\nbegin;\ninsert 100000 rows\nend;\n\nthat should help.\n\n\nReading the rest of your message, it appears there are two issues here. \nOne is you might get some help from a multi-column index.\n\nFurther, have you run analyze on your database?\n\nHave you read the administrative docs yet? There's lots more good stuff \nin there too. These are the basics.\n\nThe other issue is the assumption that indexes are ALWAYS faster, which \nthey aren't. If the query planner thinks it's gonna grab some significant \nportion of a table, it will just grab the whole thing instead of using an \nindex, which makes a certain amount of sense. To reduce the likelihood of \nthe planner picking a sequential scan, change random_page_cost from the \ndefault 4 to something lower. A 1 means that the cost of grabbing a page \nrandomly is the same as grabbing it sequentially, which shouldn't be \npossible, but is, if the data is all in memory.\n\nNext, use EXPLAIN ANALYZE to get an output of both what the query planner \nTHOUGHT it was going to do, and what the query actually did, in terms of \ntime to execute.\n\nLet us know how it all turns out.\n\n", "msg_date": "Fri, 20 Dec 2002 16:23:59 -0700 (MST)", "msg_from": "\"scott.marlowe\" <scott.marlowe@ihs.com>", "msg_from_op": false, "msg_subject": "Re: Speed Question" }, { "msg_contents": "\"scott.marlowe\" <scott.marlowe@ihs.com> writes:\n> On Fri, 20 Dec 2002, Noah Silverman wrote:\n>> Issue #1: Speed of inserts is relatively slow. 100000 inserts is \n\n> Are you doing these in a transaction? If not, then try adding a \n> begin;end; pair around your inserts. i.e.\n\n> begin;\n> insert 100000 rows\n> end;\n\nOr use a COPY command instead of retail inserts. See also the tips at\nhttp://www.ca.postgresql.org/users-lounge/docs/7.3/postgres/populate.html\n\n> One is you might get some help from a multi-column index.\n\nYes, I'd recommend a multi-column index when no single column is\nparticularly selective.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 20 Dec 2002 18:50:25 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Speed Question " }, { "msg_contents": "Noah,\n\n> Speed is one of our major concerns, so before switching we've decided \n> to perform some speed tests.\n> From what I understand, Postgresql is NOT as fast as Mysql,\n\nThis is a PR myth spread by MySQL AB. The truth is:\n\n1) PostgreSQL, unconfigured and not optimized, is indeed slower than MySQL \nout-of-the-box. MySQL is meant to be idiot-proof; PostgreSQL is not, \nintentionally.\n\n2) Nobody has yet come up with a database benchmark that both MySQL AB and the \nPostgreSQL team are willing to accept; depending on whose benchmark you use, \neither could be faster -- and neither benchmark may approximate your setup.\n\n> We've installed the software and have run some basic insert, index and \n> query tests that\n> seem ridiculously slow. I can't help thinking that we are doing \n> something wrong, or\n> don't have things configured for optimal performance.\n\nAlmost undoubtedly. Have you modified the postgresql.conf file at all? \nWhere are your database files located on disk? 
How are you construting your \nqueries?\n\n> We've performed these same tests on Mysql and then run dramatically \n> faster.\n\nWithout transations? Sure. Turn off transaction logging, and PostgreSQL \nruns faster, too.\n\n> \n> Here's the initial performance test results and issues...\n> \n> Table configuration:\n> speedtest( prop1 integer, prop2 integer, prop3 integer, prop4 integer);\n> indexes on each of the four individual property fields\n> \n> Each record consists of four random integers, uniformly distributed,\n> between 0 and 1000. The integers are computed in the perl script\n> used to populate the table, not using an SQL random() function.\n> \n> Hardware configuration: P3-500, 384MB ram, *unloaded* system.\n> Software configuration: Linux 2.4.20, reiserfs, standard slackware \n> install.\n\nYou haven't mentioned your PostgreSQL memory settings, by which I assume that \nyou haven't configured them. This is very important.\n\n> Issue #1: Speed of inserts is relatively slow. 100000 inserts is \n> taking\n> roughly 10 minutes. This isn't EVIL, but mysql appears to be about\n> ten times faster here. Is there something we could do to the indexes\n> differently? Disable transactions? Is there a more \"raw\" insert, which\n> may not set off triggers?\n\nBundle them in a single transaction. Move pg_xlog to a seperate drive from \nthe database.\n\n> Issue #2: It doesn't appear as though multiple indexes are being used.\n> ie: select count(*) from speedtest where (prop1 between 100 and 200)\n> and( prop2 between 100 and 200) and (prop3 between 100 and 200)\n> and (prop4 between 100 and 200) formulates a query plan that only\n> uses one index. The following is pasted from the 'explain select' ---\n\nThat's correct; Postgres will only use a single index on this query. If you \nwant to reference all columns, create a multi-column index. Note that, \nhowever, Postgres is likely to reject the index as it is just as large as the \ntable. In this way, your test is insufficiently like real data.\n\nGood luck. Why not use the Open Database Benchmark for testing, instead of \ninventing your own?\n\n http://www.sf.net/projects/osdb\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Fri, 20 Dec 2002 15:59:23 -0800", "msg_from": "Josh Berkus <josh@agliodbs.com>", "msg_from_op": false, "msg_subject": "Re: Speed Question" } ]
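A short sketch that strings the suggestions above together: one transaction (or COPY) for the bulk load, a single multi-column index matching the four-way range filter, fresh statistics, and a lower random_page_cost before re-checking the plan. The data file path and index name are made up for the example:

    BEGIN;
    -- ... the 100000 INSERTs go here: one commit instead of 100000 ...
    COMMIT;

    -- or, usually much faster, bulk-load a tab-separated file
    COPY speedtest FROM '/tmp/speedtest.dat';   -- \copy in psql if not superuser

    CREATE INDEX speedtest_p1p2p3p4 ON speedtest (prop1, prop2, prop3, prop4);
    ANALYZE speedtest;

    SET random_page_cost = 2;    -- default is 4; lower values favour index scans
    EXPLAIN ANALYZE
    SELECT count(*) FROM speedtest
     WHERE prop1 BETWEEN 100 AND 200
       AND prop2 BETWEEN 100 AND 200
       AND prop3 BETWEEN 100 AND 200
       AND prop4 BETWEEN 100 AND 200;

With uniformly random data each range still matches about 10% of the table and only the leading index column narrows the scan, so on this particular test set a sequential scan may legitimately win; the follow-up thread below digs into that.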
[ { "msg_contents": "First: THANK YOU everyone for all your suggestions.\n\nI've discovered the \"copy from\" command and it helps a lot.\nRight now, we just ran a test on 1MM rows with 4 columns and it is very \nfast with a 4 column index. Works well.\n\nNow we are creating more of a real world example: 10MM rows with 32 \ncolumns of integers. I'm loading up the data now, and will create a \nmulti-column index(on all 32) after the data is loaded.\n\n From everyone's responses I understand that we really need to tune the \nsystem to get optimal performance. I would love to do this, but don't \nreally know where to start. Below are our system stats if anyone wants \nto suggest some settings:\n\n2x AMD 2100MP CPU\n2 GB RAM\nData - 350GB on a raid5 card\nNote: We will probably NEVER use transactions, so turning off that \nfeature would be fine if it would help, and we knew how.\n\nOur data is probably only going to take up 20% MAXIMUM of our RAID. \nSubsequently, we have no problem trading a little extra space for \nbetter performance.\n\nBTW - is there any kind of \"describe table\" and/or \"show index\" \nfunction if pgsql. I've gotten very used to them in Mysql, but they \ndon't work here. There must be some way. I've RTFM, but can't find \nanything. help.\n\nTHANKS AGAIN,\n\n-Noah\n\n\n", "msg_date": "Fri, 20 Dec 2002 19:10:49 -0500", "msg_from": "Noah Silverman <noah@allresearch.com>", "msg_from_op": true, "msg_subject": "Re: Speed Question" }, { "msg_contents": "> BTW - is there any kind of \"describe table\" and/or \"show index\"\n> function if pgsql. I've gotten very used to them in Mysql, but they\n> don't work here. There must be some way. I've RTFM, but can't find\n> anything. help.\n\nIn psql use \"\\d tablename\". do a \"\\?\" for a quick overview and \"man psql\"\nfor lots of stuff.\n\n-philip\n\n", "msg_date": "Fri, 20 Dec 2002 17:01:13 -0800 (PST)", "msg_from": "Philip Hallstrom <philip@adhesivemedia.com>", "msg_from_op": false, "msg_subject": "Re: Speed Question" }, { "msg_contents": "On Fri, 20 Dec 2002 19:10:49 -0500, Noah Silverman\n<noah@allresearch.com> wrote:\n>Now we are creating more of a real world example: 10MM rows with 32 \n>columns of integers. I'm loading up the data now, and will create a \n>multi-column index(on all 32) after the data is loaded.\n\nIf a table with a 32 column key and no dependent attributes is a real\nworld example, I'd like to see your use case ;-)\n\nAn index on c1, c2, ..., cn will only help, if your search criteria\ncontain (strict) conditions on the leading index columns, e.g.\n\tWHERE c1 = ... AND c2 = ... AND c3 BETWEEN ... AND ...\n\nIt won't help for \n\tWHERE c22 = ...\n\n> From everyone's responses I understand that we really need to tune [...]\n>2x AMD 2100MP CPU\n>2 GB RAM\n>Data - 350GB on a raid5 card\n\nIt all depends on your application, but looking at SHARED_BUFFERS,\nEFFECTIVE_CACHE_SIZE, SORT_MEM, MAX_FSM_RELATIONS, and MAX_FSM_PAGES\nmight be a good start. Later you might want to use CPU_*_COST,\nRANDOM_PAGE_COST, and various WAL settings to fine tune your system.\n\n>Note: We will probably NEVER use transactions,\n\nOh yes, you will. You have no other choice. 
If you don't enclose\n(several) statements between BEGIN and COMMIT, every statement is\nautomatically wrapped into its own transaction.\n\nIt helps performance and consistency, if *you* control transactions.\n\nServus\n Manfred\n", "msg_date": "Sat, 21 Dec 2002 13:21:31 +0100", "msg_from": "Manfred Koizar <mkoi-pg@aon.at>", "msg_from_op": false, "msg_subject": "Re: Speed Question" }, { "msg_contents": "Thanks for the help. We've been using MySQL for the last 4 years, so \nPgSQL is a whole new world for me. Lots to learn\n\nActually the \"real world\" test we are performing is an exact \nduplication of our intended use. Without divulging too many company \nsecrets, we create a 32 key profile of an object. We then have to be \nable to search the database to find \"similar\" objects. In reality, we \nwill probably have 20MM to 30MM rows in our table. I need to very \nquickly find the matching records on a \"test\" object.\n\nIf you're really curious as to more details, let me know (I don't want \nto bore the group with our specifics)\n\nSince this machine is solely a database server, I want to utilize a ton \nof RAM to help things along. Probably at lease 1.5 Gigs worth. I \nguess my next step is to try and figure out what all the various memory \nsettings are and where to set them.\n\nThanks,\n\n-N\n\n\nOn Saturday, December 21, 2002, at 07:21 AM, Manfred Koizar wrote:\n\n> On Fri, 20 Dec 2002 19:10:49 -0500, Noah Silverman\n> <noah@allresearch.com> wrote:\n>> Now we are creating more of a real world example: 10MM rows with 32\n>> columns of integers. I'm loading up the data now, and will create a\n>> multi-column index(on all 32) after the data is loaded.\n>\n> If a table with a 32 column key and no dependent attributes is a real\n> world example, I'd like to see your use case ;-)\n>\n> An index on c1, c2, ..., cn will only help, if your search criteria\n> contain (strict) conditions on the leading index columns, e.g.\n> \tWHERE c1 = ... AND c2 = ... AND c3 BETWEEN ... AND ...\n>\n> It won't help for\n> \tWHERE c22 = ...\n>\n>> From everyone's responses I understand that we really need to tune \n>> [...]\n>> 2x AMD 2100MP CPU\n>> 2 GB RAM\n>> Data - 350GB on a raid5 card\n>\n> It all depends on your application, but looking at SHARED_BUFFERS,\n> EFFECTIVE_CACHE_SIZE, SORT_MEM, MAX_FSM_RELATIONS, and MAX_FSM_PAGES\n> might be a good start. Later you might want to use CPU_*_COST,\n> RANDOM_PAGE_COST, and various WAL settings to fine tune your system.\n>\n>> Note: We will probably NEVER use transactions,\n>\n> Oh yes, you will. You have no other choice. If you don't enclose\n> (several) statements between BEGIN and COMMIT, every statement is\n> automatically wrapped into its own transaction.\n>\n> It helps performance and consistency, if *you* control transactions.\n>\n> Servus\n> Manfred\n>\n>\n\n", "msg_date": "Sat, 21 Dec 2002 13:46:05 -0500", "msg_from": "Noah Silverman <noah@allresearch.com>", "msg_from_op": true, "msg_subject": "Re: Speed Question" }, { "msg_contents": "On Sat, 21 Dec 2002 13:46:05 -0500, Noah Silverman\n<noah@allresearch.com> wrote:\n>Without divulging too many company \n>secrets, we create a 32 key profile of an object. We then have to be \n>able to search the database to find \"similar\" objects.\n\n... 
where \"similar\" means that the value of each attribute lies within\na small range around the value of the corresponding attribute of the\nreference object?\n\nI fear a multicolumn b-tree index is not the optimal solution to this\nproblem, unless you have some extremely selective attributes you can\nput at the start of the index. But then again I doubt that it makes\nsense to include even the last attribute (or the last few attributes)\ninto the index.\n\n>In reality, we \n>will probably have 20MM to 30MM rows in our table. I need to very \n>quickly find the matching records on a \"test\" object.\n\nThis seems to be a nice case for utilizing bitmaps for index scans.\nThus you would scan several single column indices and combine the\nbitmaps before accessing the heap tuples. This has been discussed on\n-hackers and I believe it is a todo item.\n\nI don't know, whether GiST or R-Tree could help. Is anybody listening\nwho knows?\n\n>If you're really curious as to more details, let me know (I don't want \n>to bore the group with our specifics)\n\nThe group is patient :-)\n\nServus\n Manfred\n", "msg_date": "Sat, 21 Dec 2002 21:02:39 +0100", "msg_from": "Manfred Koizar <mkoi-pg@aon.at>", "msg_from_op": false, "msg_subject": "Re: Speed Question" }, { "msg_contents": "You are correct. \"similar\" means within a small range.\n\nBelow is a sample query:\n\nselect count(*) from speedtest where (p1 between 209 and 309) and (p2 \nbetween 241 and 341) and (p3 between 172 and 272) and (p4 between 150 \nand 250) and (p5 between 242 and 342) and (p6 between 222 and 322) and \n(p7 between 158 and 258) and (p8 between 249 and 349) and (p9 between \n162 and 262) and (p10 between 189 and 289) and (p11 between 201 and \n301) and (p12 between 167 and 267) and (p13 between 167 and 267) and \n(p14 between 229 and 329) and (p15 between 235 and 335) and (p16 \nbetween 190 and 290) and (p17 between 240 and 340) and (p18 between 156 \nand 256) and (p19 between 150 and 250) and (p20 between 171 and 271) \nand (p21 between 241 and 341) and (p22 between 244 and 344) and (p23 \nbetween 219 and 319) and (p24 between 198 and 298) and (p25 between 196 \nand 296) and (p26 between 243 and 343) and (p27 between 160 and 260) \nand (p28 betw een 151 and 251) and (p29 between 226 and 326) and (p30 \nbetween 168 and 268) and (p31 between 153 and 253) and (p32 between \n218 and 318)\n\nCurrently, on an un-tuned installation, this query takes about 1 \nsecond. Much too slow for our needs. We need to be able to execute \nabout 30-50 per second.\n\n\nI'm not a database expert. There is probably a better way to do this, \nbut I have no idea how.\n\nThe general use of this table is as an index for document storage. \nWhen we come across a new document, we have to know if we already have \nsomething close to it. Exact checksums don't work because two \ndocuments with only a few different words are still \"the same\" for our \nintended use. We calculate 32 separate checksums on parts of each \ndocument. By storing all 32, we have a good representation of each \ndocument. A new document can then very quickly be checked against the \ntable to see if we already have something close to it.\n\nIf anybody has any better ideas, I would love to hear it...\n\n-N\n\n\nOn Saturday, December 21, 2002, at 03:02 PM, Manfred Koizar wrote:\n\n> On Sat, 21 Dec 2002 13:46:05 -0500, Noah Silverman\n> <noah@allresearch.com> wrote:\n>> Without divulging too many company\n>> secrets, we create a 32 key profile of an object. 
We then have to be\n>> able to search the database to find \"similar\" objects.\n>\n> ... where \"similar\" means that the value of each attribute lies within\n> a small range around the value of the corresponding attribute of the\n> reference object?\n>\n> I fear a multicolumn b-tree index is not the optimal solution to this\n> problem, unless you have some extremely selective attributes you can\n> put at the start of the index. But then again I doubt that it makes\n> sense to include even the last attribute (or the last few attributes)\n> into the index.\n>\n>> In reality, we\n>> will probably have 20MM to 30MM rows in our table. I need to very\n>> quickly find the matching records on a \"test\" object.\n>\n> This seems to be a nice case for utilizing bitmaps for index scans.\n> Thus you would scan several single column indices and combine the\n> bitmaps before accessing the heap tuples. This has been discussed on\n> -hackers and I believe it is a todo item.\n>\n> I don't know, whether GiST or R-Tree could help. Is anybody listening\n> who knows?\n>\n>> If you're really curious as to more details, let me know (I don't want\n>> to bore the group with our specifics)\n>\n> The group is patient :-)\n>\n> Servus\n> Manfred\n>\n>\n\n", "msg_date": "Sat, 21 Dec 2002 15:17:53 -0500", "msg_from": "Noah Silverman <noah@allresearch.com>", "msg_from_op": true, "msg_subject": "Re: Speed Question" }, { "msg_contents": "Manfred Koizar <mkoi-pg@aon.at> writes:\n> ... where \"similar\" means that the value of each attribute lies within\n> a small range around the value of the corresponding attribute of the\n> reference object?\n\n> I don't know, whether GiST or R-Tree could help.\n\nIf the problem is multidimensional range search then GIST might be just\nthe ticket. I am not sure if you'd need to do any coding though. It\nlooks like contrib/btree_gist provides the necessary operator class, but\nonly for int4 and timestamp datatypes.\n\nI think that our r-tree code is restricted to two-dimensional indexing,\nso it wouldn't help.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 21 Dec 2002 15:28:22 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Speed Question " }, { "msg_contents": "Does anyone know how/where I can find the contrib/btree_gist stuff and \nhow I use it, and are there docs for it.\n\nThanks,\n\n-N\n\n\nOn Saturday, December 21, 2002, at 03:28 PM, Tom Lane wrote:\n\n> Manfred Koizar <mkoi-pg@aon.at> writes:\n>> ... where \"similar\" means that the value of each attribute lies within\n>> a small range around the value of the corresponding attribute of the\n>> reference object?\n>\n>> I don't know, whether GiST or R-Tree could help.\n>\n> If the problem is multidimensional range search then GIST might be just\n> the ticket. I am not sure if you'd need to do any coding though. It\n> looks like contrib/btree_gist provides the necessary operator class, \n> but\n> only for int4 and timestamp datatypes.\n>\n> I think that our r-tree code is restricted to two-dimensional indexing,\n> so it wouldn't help.\n>\n> \t\t\tregards, tom lane\n>\n\n", "msg_date": "Mon, 23 Dec 2002 16:55:01 -0500", "msg_from": "Noah Silverman <noah@allresearch.com>", "msg_from_op": true, "msg_subject": "Re: Speed Question " } ]
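For the build question at the end of this thread: the contrib modules live in the contrib/ directory of the PostgreSQL source tarball (packagers usually ship them as a postgresql-contrib package). The usual sequence for this era is cd contrib/btree_gist; make; make install; then load the SQL script it installs into each database that needs it. A hedged sketch follows; the script path and the gist_int4_ops operator-class name are assumptions about the 7.3-era module and should be checked against its README:

    -- in psql, connected to the target database
    \i /usr/local/pgsql/share/contrib/btree_gist.sql

    -- a multi-column GiST index over a few of the checksum columns
    CREATE INDEX speedtest_gist ON speedtest
        USING gist (p1 gist_int4_ops, p2 gist_int4_ops,
                    p3 gist_int4_ops, p4 gist_int4_ops);

Whether covering more than a handful of the 32 columns this way actually helps is something only EXPLAIN ANALYZE against the real 20-30M row table will settle.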
[ { "msg_contents": "I was looking at some queries that appeared to be slower than I remembered\nthem being under 7.2 (which may be a wrong perception) and noticed\nthat a view wasn't being handled very efficiently.\n\nThe view is security view that is used to hide some fields in some records\nwhen displaying information on the web. The primary key is left alone\nthough. When this view is joined a plan is generated that applies\nthe field suppression for each row of the underlying table even though\nonly a few rows out of this view are going to be selected. It would see\nthat first looking for rows that will be used and only applying the\nchanges to rows that are going to be used would result in a significant\nspeed up.\n\nThe other thing that seemed odd is that the constant\n(select pord from priv where pname = 'web') subqueries weren't pulled\nout of the loop.\n\nI was able to get a 20% speed up by adding an index on gameid to crate\nand by disabling merge joins so that a has join was used instead.\nThe merge join estimate was about 20% low and the hash join estimate\nwas about 100% high resulting in the merge join getting picked.\n\nView:\ncreate view cname_web as select\n areaid,\n case when (select pord from priv where pname = 'web') >=\n (select pord from priv where pname = privacy) then\n lname else null end as lname,\n case when (select pord from priv where pname = 'web') >=\n (select pord from priv where pname = privacy) then\n fmname else null end as fmname,\n case when (select pord from priv where pname = 'web') >=\n (select pord from priv where pname = privacy) then\n aname else null end as aname,\n case when (select pord from priv where pname = 'web') >=\n (select pord from priv where pname = privacy) then\n gen else null end as gen,\n case when (select pord from priv where pname = 'web') >=\n (select pord from priv where pname = privacy) then\n genlab else null end as genlab,\n case when (select pord from priv where pname = 'web') >=\n (select pord from priv where pname = privacy) then\n touched else null end as touched\n from cname;\n\nQuery:\n\nexplain analyze select cname_web.areaid, lname, fmname, aname, coalesce(genlab, to_char(gen, 'FMRN')), rate, frq, opp, rmp, trn, to_char(crate.touched,'YYYY-MM-DD') from cname_web, crate where cname_web.areaid = crate.areaid and gameid = '776' and frq > 0 and crate.touched >= ((timestamp 'epoch' + '1040733601 second') + '2 year ago') order by rate desc, lower(lname), lower(coalesce((aname || ' ') || fmname, fmname, aname)), gen, genlab, cname_web.areaid;\n QUERY PLAN \n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=1308.35..1308.44 rows=39 width=203) (actual time=1152.67..1152.68 rows=25 loops=1)\n Sort Key: crate.rate, lower(cname_web.lname), lower(CASE WHEN (((cname_web.aname || ' '::text) || cname_web.fmname) IS NOT NULL) THEN ((cname_web.aname || ' '::text) || cname_web.fmname) WHEN (cname_web.fmname IS NOT NULL) THEN cname_web.fmname WHEN (cname_web.aname IS NOT NULL) THEN cname_web.aname ELSE NULL::text END), cname_web.gen, cname_web.genlab, cname_web.areaid\n -> Merge Join (cost=1270.71..1307.31 rows=39 width=203) (actual time=1120.23..1152.25 rows=25 loops=1)\n Merge Cond: 
(\"outer\".areaid = \"inner\".areaid)\n -> Sort (cost=681.95..699.97 rows=7208 width=63) (actual time=1079.55..1083.66 rows=7147 loops=1)\n Sort Key: cname_web.areaid\n -> Subquery Scan cname_web (cost=0.00..220.08 rows=7208 width=63) (actual time=0.40..843.48 rows=7208 loops=1)\n -> Seq Scan on cname (cost=0.00..220.08 rows=7208 width=63) (actual time=0.40..818.24 rows=7208 loops=1)\n InitPlan\n -> Seq Scan on priv (cost=0.00..1.09 rows=1 width=4) (actual time=0.04..0.05 rows=1 loops=1)\n Filter: (pname = 'web'::text)\n -> Seq Scan on priv (cost=0.00..1.09 rows=1 width=4) (actual time=0.01..0.02 rows=1 loops=1)\n Filter: (pname = 'web'::text)\n -> Seq Scan on priv (cost=0.00..1.09 rows=1 width=4) (actual time=0.01..0.02 rows=1 loops=1)\n Filter: (pname = 'web'::text)\n -> Seq Scan on priv (cost=0.00..1.09 rows=1 width=4) (actual time=0.01..0.02 rows=1 loops=1)\n Filter: (pname = 'web'::text)\n -> Seq Scan on priv (cost=0.00..1.09 rows=1 width=4) (actual time=0.01..0.02 rows=1 loops=1)\n Filter: (pname = 'web'::text)\n -> Seq Scan on priv (cost=0.00..1.09 rows=1 width=4) (actual time=0.01..0.02 rows=1 loops=1)\n Filter: (pname = 'web'::text)\n SubPlan\n -> Seq Scan on priv (cost=0.00..1.09 rows=1 width=4) (actual time=0.01..0.01 rows=1 loops=7208)\n Filter: (pname = $1)\n -> Seq Scan on priv (cost=0.00..1.09 rows=1 width=4) (actual time=0.01..0.01 rows=1 loops=7208)\n Filter: (pname = $1)\n -> Seq Scan on priv (cost=0.00..1.09 rows=1 width=4) (actual time=0.01..0.01 rows=1 loops=7208)\n Filter: (pname = $1)\n -> Seq Scan on priv (cost=0.00..1.09 rows=1 width=4) (actual time=0.01..0.01 rows=1 loops=7208)\n Filter: (pname = $1)\n -> Seq Scan on priv (cost=0.00..1.09 rows=1 width=4) (actual time=0.01..0.01 rows=1 loops=7208)\n Filter: (pname = $1)\n -> Seq Scan on priv (cost=0.00..1.09 rows=1 width=4) (actual time=0.01..0.01 rows=1 loops=7208)\n Filter: (pname = $1)\n -> Sort (cost=588.76..588.80 rows=16 width=39) (actual time=39.95..39.96 rows=25 loops=1)\n Sort Key: crate.areaid\n -> Seq Scan on crate (cost=0.00..588.45 rows=16 width=39) (actual time=3.14..39.58 rows=25 loops=1)\n Filter: ((gameid = '776'::text) AND (frq > 0) AND (touched >= '2000-12-24 12:40:01'::timestamp without time zone))\n Total runtime: 1155.29 msec\n(39 rows)\n\n", "msg_date": "Tue, 24 Dec 2002 14:16:38 -0600", "msg_from": "Bruno Wolff III <bruno@wolff.to>", "msg_from_op": true, "msg_subject": "View performance" }, { "msg_contents": "As a followup to this I rewrote the view as:\ncreate view cname_web as select\n a.areaid, b.lname, b.fmname, b.aname, b.gen, b.genlab, b.touched\n from cname a left join\n (select areaid, lname, fmname, aname, gen, genlab, touched, privacy\n from cname, priv\n where pname = privacy and\n pord <= (select pord from priv where pname = 'web')\n ) b\n using (areaid);\n\nAnd got the query down to about half the original time as shown here:\n QUERY PLAN \n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=970.71..970.74 rows=15 width=113) (actual time=550.82..550.83 rows=25 loops=1)\n Sort Key: crate.rate, lower(cname.lname), lower(CASE WHEN (((cname.aname || ' '::text) || cname.fmname) IS NOT NULL) THEN ((cname.aname || ' '::text) || cname.fmname) WHEN (cname.fmname IS NOT NULL) THEN cname.fmname WHEN 
(cname.aname IS NOT NULL) THEN cname.aname ELSE NULL::text END), cname.gen, cname.genlab, a.areaid\n InitPlan\n -> Seq Scan on priv (cost=0.00..1.09 rows=1 width=4) (actual time=0.02..0.03 rows=1 loops=1)\n Filter: (pname = 'web'::text)\n -> Merge Join (cost=484.88..970.41 rows=15 width=113) (actual time=361.92..550.53 rows=25 loops=1)\n Merge Cond: (\"outer\".areaid = \"inner\".areaid)\n -> Merge Join (cost=348.16..815.45 rows=7208 width=74) (actual time=358.29..520.50 rows=7147 loops=1)\n Merge Cond: (\"outer\".areaid = \"inner\".areaid)\n -> Index Scan using cname_pkey on cname a (cost=0.00..407.27 rows=7208 width=11) (actual time=0.03..26.59 rows=7147 loops=1)\n -> Sort (cost=348.16..354.17 rows=2403 width=63) (actual time=358.20..362.38 rows=7141 loops=1)\n Sort Key: cname.areaid\n -> Hash Join (cost=1.09..213.25 rows=2403 width=63) (actual time=0.35..94.32 rows=7202 loops=1)\n Hash Cond: (\"outer\".privacy = \"inner\".pname)\n -> Seq Scan on cname (cost=0.00..146.08 rows=7208 width=55) (actual time=0.01..33.41 rows=7208 loops=1)\n -> Hash (cost=1.09..1.09 rows=2 width=8) (actual time=0.07..0.07 rows=0 loops=1)\n -> Seq Scan on priv (cost=0.00..1.09 rows=2 width=8) (actual time=0.06..0.07 rows=2 loops=1)\n Filter: (pord <= $0)\n -> Sort (cost=136.72..136.76 rows=15 width=39) (actual time=0.95..0.96 rows=25 loops=1)\n Sort Key: crate.areaid\n -> Index Scan using crate_game on crate (cost=0.00..136.42 rows=15 width=39) (actual time=0.10..0.67 rows=25 loops=1)\n Index Cond: (gameid = '776'::text)\n Filter: ((frq > 0) AND (touched >= '2000-12-24 12:40:01'::timestamp without time zone))\n Total runtime: 553.17 msec\n(24 rows)\n\nOn Tue, Dec 24, 2002 at 14:16:38 -0600,\n Bruno Wolff III <bruno@wolff.to> wrote:\n> \n> View:\n> create view cname_web as select\n> areaid,\n> case when (select pord from priv where pname = 'web') >=\n> (select pord from priv where pname = privacy) then\n> lname else null end as lname,\n> case when (select pord from priv where pname = 'web') >=\n> (select pord from priv where pname = privacy) then\n> fmname else null end as fmname,\n> case when (select pord from priv where pname = 'web') >=\n> (select pord from priv where pname = privacy) then\n> aname else null end as aname,\n> case when (select pord from priv where pname = 'web') >=\n> (select pord from priv where pname = privacy) then\n> gen else null end as gen,\n> case when (select pord from priv where pname = 'web') >=\n> (select pord from priv where pname = privacy) then\n> genlab else null end as genlab,\n> case when (select pord from priv where pname = 'web') >=\n> (select pord from priv where pname = privacy) then\n> touched else null end as touched\n> from cname;\n> \n> Query:\n> \n> explain analyze select cname_web.areaid, lname, fmname, aname, coalesce(genlab, to_char(gen, 'FMRN')), rate, frq, opp, rmp, trn, to_char(crate.touched,'YYYY-MM-DD') from cname_web, crate where cname_web.areaid = crate.areaid and gameid = '776' and frq > 0 and crate.touched >= ((timestamp 'epoch' + '1040733601 second') + '2 year ago') order by rate desc, lower(lname), lower(coalesce((aname || ' ') || fmname, fmname, aname)), gen, genlab, cname_web.areaid;\n", "msg_date": "Tue, 24 Dec 2002 15:06:37 -0600", "msg_from": "Bruno Wolff III <bruno@wolff.to>", "msg_from_op": true, "msg_subject": "Re: View performance" }, { "msg_contents": "By disabling merge joins and using the updated view, I got the query down\nto about 25% of its original runtime.\nNote the query estimate is off by a factor of more than 10.\n QUERY PLAN 
\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=3271.35..3271.39 rows=15 width=113) (actual time=232.25..232.27 rows=25 loops=1)\n Sort Key: crate.rate, lower(cname.lname), lower(CASE WHEN (((cname.aname || ' '::text) || cname.fmname) IS NOT NULL) THEN ((cname.aname || ' '::text) || cname.fmname) WHEN (cname.fmname IS NOT NULL) THEN cname.fmname WHEN (cname.aname IS NOT NULL) THEN cname.aname ELSE NULL::text END), cname.gen, cname.genlab, a.areaid\n InitPlan\n -> Seq Scan on priv (cost=0.00..1.09 rows=1 width=4) (actual time=0.02..0.03 rows=1 loops=1)\n Filter: (pname = 'web'::text)\n -> Hash Join (cost=355.71..3271.05 rows=15 width=113) (actual time=106.82..231.97 rows=25 loops=1)\n Hash Cond: (\"outer\".areaid = \"inner\".areaid)\n -> Hash Join (cost=219.25..431.41 rows=7208 width=74) (actual time=103.86..222.00 rows=7208 loops=1)\n Hash Cond: (\"outer\".areaid = \"inner\".areaid)\n -> Seq Scan on cname a (cost=0.00..146.08 rows=7208 width=11) (actual time=0.01..16.23 rows=7208 loops=1)\n -> Hash (cost=213.25..213.25 rows=2403 width=63) (actual time=103.70..103.70 rows=0 loops=1)\n -> Hash Join (cost=1.09..213.25 rows=2403 width=63) (actual time=0.35..88.82 rows=7202 loops=1)\n Hash Cond: (\"outer\".privacy = \"inner\".pname)\n -> Seq Scan on cname (cost=0.00..146.08 rows=7208 width=55) (actual time=0.01..29.73 rows=7208 loops=1)\n -> Hash (cost=1.09..1.09 rows=2 width=8) (actual time=0.07..0.07 rows=0 loops=1)\n -> Seq Scan on priv (cost=0.00..1.09 rows=2 width=8) (actual time=0.06..0.07 rows=2 loops=1)\n Filter: (pord <= $0)\n -> Hash (cost=136.42..136.42 rows=15 width=39) (actual time=0.72..0.72 rows=0 loops=1)\n -> Index Scan using crate_game on crate (cost=0.00..136.42 rows=15 width=39) (actual time=0.10..0.66 rows=25 loops=1)\n Index Cond: (gameid = '776'::text)\n Filter: ((frq > 0) AND (touched >= '2000-12-24 12:40:01'::timestamp without time zone))\n Total runtime: 232.83 msec\n(22 rows)\n\n", "msg_date": "Tue, 24 Dec 2002 15:25:56 -0600", "msg_from": "Bruno Wolff III <bruno@wolff.to>", "msg_from_op": true, "msg_subject": "Re: View performance" }, { "msg_contents": "Bruno Wolff III <bruno@wolff.to> writes:\n> I was looking at some queries that appeared to be slower than I remembered\n> them being under 7.2 (which may be a wrong perception) and noticed\n> that a view wasn't being handled very efficiently.\n\nThe change in behavior from 7.2 is probably due to this patch:\n\n2002-12-05 16:46 tgl\n\n\t* src/backend/optimizer/plan/planner.c (REL7_3_STABLE): Avoid\n\tpulling up sublinks from a subselect's targetlist. Works around\n\tproblems that occur if sublink is referenced via a join alias\n\tvariable. Perhaps this can be improved later, but a simple and\n\tsafe fix is needed for 7.3.1.\n\nwhich means that views using subselects in their targetlists will not be\nflattened into the calling query in 7.3.1. 
This is not real desirable,\nbut I see no other short-term fix.\n\nIn the particular case, your view definition seemed mighty inefficient\nanyway (it must recompute the subselects for each column retrieved from\nthe view) so I think your rewrite is a good change.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 26 Dec 2002 14:42:39 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: View performance " }, { "msg_contents": "Bruno Wolff III <bruno@wolff.to> writes:\n> By disabling merge joins and using the updated view, I got the query down\n> to about 25% of its original runtime.\n> Note the query estimate is off by a factor of more than 10.\n\nThis seems to indicate some estimation problems in cost_hashjoin; the\nestimated cost for the hashjoin is evidently a lot higher than it should\nbe.\n\nAre you interested in digging into this; or could you send me a dump of\nthe tables used in the view and query, so I could look into it?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 26 Dec 2002 14:50:14 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: View performance " }, { "msg_contents": "On Thu, Dec 26, 2002 at 14:42:39 -0500,\n Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> which means that views using subselects in their targetlists will not be\n> flattened into the calling query in 7.3.1. This is not real desirable,\n> but I see no other short-term fix.\n\nThanks for the explaination.\n\n> In the particular case, your view definition seemed mighty inefficient\n> anyway (it must recompute the subselects for each column retrieved from\n> the view) so I think your rewrite is a good change.\n\nI was naively expecting that the planner would notice the common subexpressions\nand only compute them once.\n", "msg_date": "Thu, 26 Dec 2002 14:36:56 -0600", "msg_from": "Bruno Wolff III <bruno@wolff.to>", "msg_from_op": true, "msg_subject": "Re: View performance" }, { "msg_contents": "Bruno Wolff III <bruno@wolff.to> writes:\n> I was naively expecting that the planner would notice the common\n> subexpressions and only compute them once.\n\nThere isn't currently any code for detection of common subexpressions of\nany kind.\n\nMy gut feeling is that searching for common subexpressions would be a\nnet waste of cycles in the vast majority of queries. It'd be fairly\nexpensive (a naive implementation would be roughly O(N^2) in the number\nof expression nodes), with zero payback in very many cases.\n\nIt might be worth doing for very constrained classes of subexpressions.\nFor instance, I was just thinking about putting in some code to\nrecognize duplicate aggregates (eg, \"sum(foo)\" appearing twice in the\nsame query). nodeAgg.c could do this relatively cheaply, since it has\nto make a list of the aggregate expressions to be computed, anyway.\nI'm not sure about recognizing duplicated sub-SELECT expressions; it\ncould possibly be done but some thought would have to be given to\npreserving semantics.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 26 Dec 2002 15:45:55 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: View performance " }, { "msg_contents": "I wrote:\n>> This seems to indicate some estimation problems in cost_hashjoin; the\n>> estimated cost for the hashjoin is evidently a lot higher than it should\n>> be.\n\nThe answer is that estimate_hash_bucketsize() is producing a rather\nsilly result in this situation, viz. a bucketsize \"fraction\" that's well\nabove 1.0. 
I've applied the following band-aid patch to CVS tip, which\nperhaps you might like to use locally. But probably the long-range\nanswer is to rethink what that routine is doing --- its adjustment for\nskewed data distributions is perhaps not such a great idea.\n\n\t\t\tregards, tom lane\n\n\n*** src/backend/optimizer/path/costsize.c.orig\tFri Dec 13 19:17:55 2002\n--- src/backend/optimizer/path/costsize.c\tThu Dec 26 18:34:02 2002\n***************\n*** 1164,1169 ****\n--- 1164,1179 ----\n \tif (avgfreq > 0.0 && mcvfreq > avgfreq)\n \t\testfract *= mcvfreq / avgfreq;\n \n+ \t/*\n+ \t * Clamp bucketsize to sane range (the above adjustment could easily\n+ \t * produce an out-of-range result). We set the lower bound a little\n+ \t * above zero, since zero isn't a very sane result.\n+ \t */\n+ \tif (estfract < 1.0e-6)\n+ \t\testfract = 1.0e-6;\n+ \telse if (estfract > 1.0)\n+ \t\testfract = 1.0;\n+ \n \tReleaseSysCache(tuple);\n \n \treturn (Selectivity) estfract;\n", "msg_date": "Thu, 26 Dec 2002 18:43:39 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: View performance " } ]
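A cut-down way to reproduce the merge-join versus hash-join comparison in this thread without touching postgresql.conf: the planner's enable_* switches are per-session, so each method can be forced in turn and the actual times compared. The count(*) query is only a stand-in for the full report query above:

    SET enable_mergejoin TO off;
    EXPLAIN ANALYZE
    SELECT count(*) FROM cname_web JOIN crate USING (areaid)
     WHERE gameid = '776';
    RESET enable_mergejoin;

    SET enable_hashjoin TO off;
    EXPLAIN ANALYZE
    SELECT count(*) FROM cname_web JOIN crate USING (areaid)
     WHERE gameid = '776';
    RESET enable_hashjoin;

If the plan the planner refuses to pick keeps winning by a wide margin, the cost estimate (as with the bucketsize clamp above) is the thing to look at, rather than leaving enable_mergejoin off globally.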
[ { "msg_contents": "Hello, \n\nI'm facing to a performance problem, when I run PostgreSQL 7.\n2.1 + RedHat Linux 7.3 on a Xeon Dual CPU machine.\n\nCompared with a P4 2.4GHz machine, pgbench shows 1/10 on the \nXeon machine (detail is describled below), while hdparm \nshows 2 times faster.\n\nDisabling HTT had no effects. PostgreSQL 7.2.3 is also slow \non the machine.\n\nI'm grateful if any of you can give me an advice. \n\nthank you.\n\n\ndata ---------------\n\n[Hardware Profile]\nDELL PowerEdge 2600\n CPU Xeon 2GHz Dual\n Memory 2GB PC2100 ECC DDR266 SDRAM\n HDD 73GB 10,000rpm U320 SCSI\n\nDELL PowerEdge 600SC (for comparison)\n CPU Pentium 4 2.4GHz Single\n Memory 1GB 400MHz ECC DDR SDRAM \n HDD 80GB 7,200rpm EIDE\n\n[pgbench result]\n\nmultiplicity | 1 | 128\n--------------+-----+------\nPE 600 |259.2|178.2 [tps]\nPE 2600 | 22.9| 34.8 [tps]\n\n- wal_sync_method(wal_method.sh) fsync\n- 200,000 data items\n\n---\nYutaka Inada [Justsystem Corporation]\n", "msg_date": "Fri, 27 Dec 2002 16:03:48 +0900", "msg_from": "yutaka_inada@justsystem.co.jp", "msg_from_op": true, "msg_subject": "executing pgsql on Xeon-dual machine" }, { "msg_contents": "yutaka_inada@justsystem.co.jp writes:\n> Compared with a P4 2.4GHz machine, pgbench shows 1/10 on the \n> Xeon machine (detail is describled below), while hdparm \n> shows 2 times faster.\n\nIf you set fsync off, how do the pgbench results change?\n\n> [Hardware Profile]\n> DELL PowerEdge 2600\n> CPU Xeon 2GHz Dual\n> Memory 2GB PC2100 ECC DDR266 SDRAM\n> HDD 73GB 10,000rpm U320 SCSI\n\n> DELL PowerEdge 600SC (for comparison)\n> CPU Pentium 4 2.4GHz Single\n> Memory 1GB 400MHz ECC DDR SDRAM \n> HDD 80GB 7,200rpm EIDE\n\nI'm suspicious that the IDE drive may be configured to lie about write\ncompletion. If it reports write complete when it's really only buffered\nthe data in controller RAM, then you're effectively running with fsync\noff on the PE600.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 27 Dec 2002 13:35:58 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: executing pgsql on Xeon-dual machine " }, { "msg_contents": "Thank you, Tom,\n\nTom Lane <tgl@sss.pgh.pa.us>:\n> \n> If you set fsync off, how do the pgbench results change?\n\nI'll try it.\n\n---\nYutaka Inada [Justsystem Corporation]\n", "msg_date": "Mon, 6 Jan 2003 11:27:17 +0900", "msg_from": "yutaka_inada@justsystem.co.jp", "msg_from_op": true, "msg_subject": "Re: executing pgsql on Xeon-dual machine" } ]
[ { "msg_contents": "\nHi,\n As we can see all tuples of a child table when scanning parent table,\nI'm confused about if that means there are two copies that are stored on\ndisk, one is for child table, and the other for parent table? If so,\nI have to reconsider the size of my database.\n Thanks.\n\nLi Li\n\n", "msg_date": "Sun, 29 Dec 2002 14:09:10 -0800 (PST)", "msg_from": "li li <lili@cs.uoregon.edu>", "msg_from_op": true, "msg_subject": "A question about inheritance" }, { "msg_contents": "On Sun, 2002-12-29 at 22:09, li li wrote:\n> Hi,\n> As we can see all tuples of a child table when scanning parent table,\n> I'm confused about if that means there are two copies that are stored on\n> disk, one is for child table, and the other for parent table? If so,\n> I have to reconsider the size of my database.\n\nWhen you select from the parent table, all rows in its children are also\nselected unless you use the keyword ONLY.\n\nSo \n\n SELECT * FROM parent;\n\nwill show all rows of parent and children. But\n\n SELECT * FROM ONLY parent;\n\nwill show just the rows in the parent table.\n\n-- \nOliver Elphick Oliver.Elphick@lfix.co.uk\nIsle of Wight, UK http://www.lfix.co.uk/oliver\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"Ye have heard that it hath been said, Thou shalt love \n thy neighbour, and hate thine enemy. But I say unto \n you, Love your enemies, bless them that curse you, do \n good to them that hate you, and pray for them which \n despitefully use you, and persecute you;\" \n Matthew 5:43,44 \n\n", "msg_date": "31 Dec 2002 01:42:49 +0000", "msg_from": "Oliver Elphick <olly@lfix.co.uk>", "msg_from_op": false, "msg_subject": "Re: A question about inheritance" } ]
[ { "msg_contents": "Howdy.\n\nI've used PostgreSQL in the past on a small project,\nand I thought it was great.\n\nNow I'm trying to evaluate it as a possible\nreplacement for MS SQL Server.\n\nI have two issues:\n\n1. I have a homegrown Java migration tool I wrote that\nseems to work reasonably well, but I'm hoping to\nunderstand how to improve its performance.\n\n2. After migrating, I found pg_dump to be plenty\nquick, but psql < (to completely reload the database)\nto be very very slow during the COPY stage.\n\nNow for more detail. On problem 1., I have autocommit\noff, and I'm doing PreparedStatement.addBatch() and\nexecuteBatch(), and eventually, commit.\n\nI've been playing with the amount of rows I do before\nexecuteBatch(), and I seem to do best with 20,000 to\n50,000 rows in a batch. Some background: this is\nRedHat8.0 with all the latest RedHat patches, 1GB\nRAMBUS RAM, 2GHz P4, 40GB 7200RPM HD. Watching\ngkrellm and top, I see a good bit of CPU use by\npostmaster duing the addBatch()es, but then when\nexecuteBatch() comes, CPU goes almost totally idle,\nand disk starts churning. Somehow it seems the disk\nisn't being utilized to the fullest, but I'm just\nguessing.\n\nI'm wondering if there's some postmaster tuning I\nmight do to improve this.\n\nThen on problem 2., a pg_dump of the database takes\nabout 3 minutes, and creates a file of 192MB in size. \nThen I create testdb and do psql -e testdb\n<thedump.sql, and it creeps once it gets to the COPY\nsection. So far it's been running for 45 minutes,\nmostly on one table (the biggest table, which has\n1,090,000 rows or so). During this time, CPU use is\nvery low, and there's no net or lo traffic.\n\nIn contrast, using MSSQL's backup and restore\nfacilities, it takes about 15 second on a previous\ngeneration box (with SCSI though) to backup, and 45\nseconds to a minute to restore.\n\nSuggestions?\n\nThanks,\nMT\n\n__________________________________________________\nDo you Yahoo!?\nYahoo! Mail Plus - Powerful. Affordable. Sign up now.\nhttp://mailplus.yahoo.com\n", "msg_date": "Tue, 31 Dec 2002 14:14:34 -0800 (PST)", "msg_from": "Michael Teter <mt_pgsql@yahoo.com>", "msg_from_op": true, "msg_subject": "preliminary testing, two very slow situations..." }, { "msg_contents": "Michael Teter said:\n> I've used PostgreSQL in the past on a small project,\n> and I thought it was great.\n>\n> Now I'm trying to evaluate it as a possible\n> replacement for MS SQL Server.\n\n[ ... ]\n\nWhat version of PostgreSQL are you using?\n\nHave you made any changes to the default configuration parameters? If not,\nthat's probably the first thing to look at. Several settings (e.g.\nshared_buffers) are set to very conservative values by default. You can\nalso consider trading some reliability for better performance by disabling\nfsync.\n\nFor more info on configuration, see:\n\nhttp://www.ca.postgresql.org/users-lounge/docs/7.3/postgres/runtime-config.html\n\nAnother low-hanging fruit is kernel configuration. For example, what OS\nand kernel are you using? Have you enabled DMA? What filesystem are you\nusing?\n\nCheers,\n\nNeil\n\n\n", "msg_date": "Tue, 31 Dec 2002 18:05:27 -0500 (EST)", "msg_from": "\"Neil Conway\" <neilc@samurai.com>", "msg_from_op": false, "msg_subject": "Re: preliminary testing, two very slow situations..." 
}, { "msg_contents": "\"Neil Conway\" <neilc@samurai.com> writes:\n> Michael Teter said:\n>> Now I'm trying to evaluate it as a possible\n>> replacement for MS SQL Server.\n\n> What version of PostgreSQL are you using?\n> [suggestions for tuning]\n\nThe only reason I can think of for COPY to be as slow as Michael is\ndescribing is if it's checking foreign-key constraints (and even then\nit'd have to be using very inefficient plans for the check queries).\nSo we should ask not only about the PG version, but also about the\nexact table declarations involved.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 01 Jan 2003 16:01:13 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: preliminary testing, two very slow situations... " } ]
[ { "msg_contents": "Hi all,\n\nI've experienced very slow performance to add foreign key constraints using \nALTER TABLE ADD CONSTRAINT FOREIGN KEY ...\n\nAfter using COPY ... FROM to load the base tables, I started to build the \nreferential integrity between tables.\nI have 3 tables: T1 (6 million records), T2 (1.5 million records) and T3 (0.8 \nmillion records).\nOne of the RI - foreign key (T1 -> T2) constraint took about 70 hrs to build.\nThe other RI - foreign key (T1 -> T3) constraint took about 200 hrs and yet \ncompleted!! (compound foreign key)\n\nI tried to use small subset of the tables of T2 and T3 to do the testing.\nAn estimation show that it need about 960 hrs to build the RI - foreign key \nconstraints on table T1 -> T3 !!!\n\nI've read in the archives that some people suffered slow performance of this \nproblem in Aug 2000, but there was no further information about the solution.\n\nPlease anyone who has experience in this issues can give me some hint. \n\nThanks\n\nHans\n", "msg_date": "Wed, 1 Jan 2003 16:32:10 +1300", "msg_from": "Minghann Ho <Minghann.Ho@mcs.vuw.ac.nz>", "msg_from_op": true, "msg_subject": "alter table TBL add constraint TBL_FK foreign key ... very slow" }, { "msg_contents": "On Wed, 1 Jan 2003, Minghann Ho wrote:\n\n> I've experienced very slow performance to add foreign key constraints using\n> ALTER TABLE ADD CONSTRAINT FOREIGN KEY ...\n>\n> After using COPY ... FROM to load the base tables, I started to build the\n> referential integrity between tables.\n> I have 3 tables: T1 (6 million records), T2 (1.5 million records) and T3 (0.8\n> million records).\n> One of the RI - foreign key (T1 -> T2) constraint took about 70 hrs to build.\n> The other RI - foreign key (T1 -> T3) constraint took about 200 hrs and yet\n> completed!! (compound foreign key)\n>\n> I tried to use small subset of the tables of T2 and T3 to do the testing.\n> An estimation show that it need about 960 hrs to build the RI - foreign key\n> constraints on table T1 -> T3 !!!\n\nIt's running the constraint check for each row in the foreign key table.\nRather than using a call to the function and a select for each row, it\ncould probably be done in a single select with a not exists subselect, but\nthat hasn't been done yet. There's also been talk about allowing some\nmechanism to allow the avoidance of the create time check, but I don't\nthink any concensus was reached.\n\n", "msg_date": "Tue, 31 Dec 2002 19:38:54 -0800 (PST)", "msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>", "msg_from_op": false, "msg_subject": "Re: alter table TBL add constraint TBL_FK foreign key ..." }, { "msg_contents": "On Tue, 2002-12-31 at 21:32, Minghann Ho wrote:\n> Hi all,\n> \n> I've experienced very slow performance to add foreign key constraints using \n> ALTER TABLE ADD CONSTRAINT FOREIGN KEY ...\n> \n> After using COPY ... FROM to load the base tables, I started to build the \n> referential integrity between tables.\n> I have 3 tables: T1 (6 million records), T2 (1.5 million records) and T3 (0.8 \n> million records).\n> One of the RI - foreign key (T1 -> T2) constraint took about 70 hrs to build.\n> The other RI - foreign key (T1 -> T3) constraint took about 200 hrs and yet \n> completed!! 
(compound foreign key)\n> \n> I tried to use small subset of the tables of T2 and T3 to do the testing.\n> An estimation show that it need about 960 hrs to build the RI - foreign key \n> constraints on table T1 -> T3 !!!\n> \n> I've read in the archives that some people suffered slow performance of this \n> problem in Aug 2000, but there was no further information about the solution.\n> \n> Please anyone who has experience in this issues can give me some hint. \n\nSilly question: Are T2 & T3 compound-key indexed on the relevant foreign\nkey fields (in the exact order that they are mentioned in the ADD\nCONSTRAINT command)? Otherwise, for each record in T1, it is scanning\nT2 1.5M times (9E12 record reads!), with a similar formula for T1->T3.\n\n-- \n+------------------------------------------------------------+\n| Ron Johnson, Jr. mailto:ron.l.johnson@cox.net |\n| Jefferson, LA USA http://members.cox.net/ron.l.johnson |\n| |\n| \"Basically, I got on the plane with a bomb. Basically, I |\n| tried to ignite it. Basically, yeah, I intended to damage |\n| the plane.\" |\n| RICHARD REID, who tried to blow up American Airlines |\n| Flight 63 |\n+------------------------------------------------------------+\n\n", "msg_date": "01 Jan 2003 02:14:34 -0600", "msg_from": "Ron Johnson <ron.l.johnson@cox.net>", "msg_from_op": false, "msg_subject": "Re: alter table TBL add constraint TBL_FK foreign key" } ]
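A rough sketch of the preparation Ron is asking about, with placeholder key columns since the real ones were never posted. The points are that each referenced key should be a primary key (or at least carry a unique index) on exactly the columns named in the constraint, in the same order; that the referencing and referenced columns should have identical types; and that the tables should be ANALYZEd after the COPY so the per-row check queries can plan index probes rather than sequential scans:

-- Referenced keys: primary keys give the per-row checks an index to probe.
ALTER TABLE t2 ADD CONSTRAINT t2_pkey PRIMARY KEY (t2_id);
ALTER TABLE t3 ADD CONSTRAINT t3_pkey PRIMARY KEY (t3_id1, t3_id2);

-- Fresh statistics after the bulk load.
ANALYZE t1;
ANALYZE t2;
ANALYZE t3;

-- Now build the constraints.
ALTER TABLE t1 ADD CONSTRAINT t1_t2_fk
    FOREIGN KEY (t2_id) REFERENCES t2 (t2_id);
ALTER TABLE t1 ADD CONSTRAINT t1_t3_fk
    FOREIGN KEY (t3_id1, t3_id2) REFERENCES t3 (t3_id1, t3_id2);

On releases of this vintage a cross-type comparison such as int4 against int8 cannot use an index, so a type mismatch between T1's columns and the referenced keys is enough by itself to turn every check into a scan of T2 or T3.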
[ { "msg_contents": "Roman Fail wrote:\n> The same result columns and JOINS are performed all day with variations on the WHERE clause;\nAre there any where clauses which all of theses variation have?\nIf yes - query can be reordered to contain explicit joins for these clauses and \nto let Postgres to find best solution for other joins.\n\nI know, it is not best solution, but sometimes I prefer finding best join order by myself. \nI create then several views returning the same values, but manualy ordered for specific where clauses.\n\nTomasz Myrta\n\n\n", "msg_date": "Wed, 01 Jan 2003 10:59:06 +0100", "msg_from": "Tomasz Myrta <jasiek@klaster.net>", "msg_from_op": true, "msg_subject": "Re: 7.3.1 New install, large queries are slow" }, { "msg_contents": "Thanks to everyone for the quick replies! I'm sure that my lack of skill with SQL queries is the main problem. What's strange to me is how MSSQL takes my bad queries and makes them look good anyway. It must have a real smart planner.\r\n \r\nSeveral changes: shared_buffers = 131072, sort_mem = 32768, shmmax = 2097152000, shmall = 131072000. I couldn't find any info out there on the relationship between shmmax and shmall, so I just preserved the ratio from the RedHat defaults (1:16). As far as sort_mem goes, I expect to be running no more than 3 concurrent queries and they will all be just as complex as this one. Do you think sort_mem=32768 is a reasonable size? None of these changes seemed to help speed up things however.\r\n \r\nREINDEX INDEX batchdetail_ix_tranamount_idx; was executed successfully, although it took 15 minutes.\r\nANALYZE executed in 2 minutes, even though I increased default_statistics_target = 30. Should I increase it even more? I don't mind the extra overhead each night if it will make my queries faster. (Idiot check: I did actually stop and start the postmaster after changing all these settings).\r\n \r\nAndrew Sullivan wrote:\r\n>First, the performance of foreign keys is flat-out awful in Postgres.\r\n>I suggest avoiding them if you can.\r\n\r\nI don't have any problem getting rid of FKs, especially if it might actually help performance. The nightly data import is well-defined and should always observe referential integrity, so I guess the db doesn't really need to enforce it. In MSSQL, adding FKs was supposed to actually benefit SELECT performance. Is it pretty much universally accepted that I should drop all my foreign keys?\r\n\r\n>Second, ordering joins explicitly (with the JOIN keyword) constrains\r\n>the planner, and may select bad plan. The explain analyse output\r\n>was nice, but I didn't see the query, so I can't tell what the plan\r\n>maybe ought to be.\r\n\r\nI think this is the most likely problem. I've read through Chapter 10 of the 7.3 docs, but I still don't feel like I know what would be a good order. How do you learn this stuff anyway? Trial and error?\r\n\r\n>Third, I didn't see any suggestion that you'd moved the WAL onto its\r\n>own disk. That will mostly help when you are under write load;\r\n\r\nI don't think I'm going to bother with moving the WAL....the write load during the day is very, very light (when queries are run). Disk I/O is clearly not the limiting factor (yet!). \r\n \r\nSo here's the query, and another EXPLAIN ANALYZE to go with it (executed after all setting changes). The same result columns and JOINS are performed all day with variations on the WHERE clause; other possible search columns are the ones that are indexed (see below). 
The 4 tables that use LEFT JOIN only sometimes have matching records, hence the OUTER join.\r\n \r\nEXPLAIN ANALYZE\r\nSELECT b.batchdate, d.batchdetailid, t.bankno, d.trandate, d.tranamount, \r\nd.submitinterchange, d.authamount, d.authno, d.cardtypeid, d.mcccode, \r\nm.name AS merchantname, c.cardtype, m.merchid, \r\np1.localtaxamount, p1.productidentifier, dr.avsresponse, \r\ncr.checkoutdate, cr.noshowindicator, ck.checkingacctno, \r\nck.abaroutingno, ck.checkno \r\nFROM tranheader t \r\nINNER JOIN batchheader b ON t.tranheaderid = b.tranheaderid \r\nINNER JOIN merchants m ON m.merchantid = b.merchantid \r\nINNER JOIN batchdetail d ON d.batchid = b.batchid \r\nINNER JOIN cardtype c ON d.cardtypeid = c.cardtypeid \r\nLEFT JOIN purc1 p1 ON p1.batchdetailid = d.batchdetailid \r\nLEFT JOIN direct dr ON dr.batchdetailid = d.batchdetailid \r\nLEFT JOIN carrental cr ON cr.batchdetailid = d.batchdetailid \r\nLEFT JOIN checks ck ON ck.batchdetailid = d.batchdetailid \r\nWHERE t.clientid = 6 \r\nAND d.tranamount BETWEEN 500.0 AND 700.0 \r\nAND b.batchdate > '2002-12-15' \r\nAND m.merchid = '701252267' \r\nORDER BY b.batchdate DESC \r\nLIMIT 50\r\n\r\nLimit (cost=1829972.39..1829972.39 rows=1 width=285) (actual time=1556497.79..1556497.80 rows=5 loops=1)\r\n -> Sort (cost=1829972.39..1829972.39 rows=1 width=285) (actual time=1556497.78..1556497.79 rows=5 loops=1)\r\n Sort Key: b.batchdate\r\n -> Nested Loop (cost=1771874.32..1829972.38 rows=1 width=285) (actual time=1538783.03..1556486.64 rows=5 loops=1)\r\n Join Filter: (\"inner\".batchdetailid = \"outer\".batchdetailid)\r\n -> Nested Loop (cost=1771874.32..1829915.87 rows=1 width=247) (actual time=1538760.60..1556439.67 rows=5 loops=1)\r\n Join Filter: (\"inner\".batchdetailid = \"outer\".batchdetailid)\r\n -> Nested Loop (cost=1771874.32..1829915.86 rows=1 width=230) (actual time=1538760.55..1556439.50 rows=5 loops=1)\r\n Join Filter: (\"inner\".batchdetailid = \"outer\".batchdetailid)\r\n -> Nested Loop (cost=1771874.32..1829915.85 rows=1 width=221) (actual time=1538760.51..1556439.31 rows=5 loops=1)\r\n Join Filter: (\"inner\".batchdetailid = \"outer\".batchdetailid)\r\n -> Nested Loop (cost=1771874.32..1773863.81 rows=1 width=202) (actual time=1529153.84..1529329.65 rows=5 loops=1)\r\n Join Filter: (\"outer\".cardtypeid = \"inner\".cardtypeid)\r\n -> Merge Join (cost=1771874.32..1773862.58 rows=1 width=188) (actual time=1529142.55..1529317.99 rows=5 loops=1)\r\n Merge Cond: (\"outer\".batchid = \"inner\".batchid)\r\n -> Sort (cost=116058.42..116058.43 rows=3 width=118) (actual time=14184.11..14184.14 rows=17 loops=1)\r\n Sort Key: b.batchid\r\n -> Hash Join (cost=109143.44..116058.39 rows=3 width=118) (actual time=12398.29..14184.03 rows=17 loops=1)\r\n Hash Cond: (\"outer\".merchantid = \"inner\".merchantid)\r\n -> Merge Join (cost=109137.81..114572.94 rows=295957 width=40) (actual time=12359.75..13848.67 rows=213387 loops=1)\r\n Merge Cond: (\"outer\".tranheaderid = \"inner\".tranheaderid)\r\n -> Index Scan using tranheader_ix_tranheaderid_idx on tranheader t (cost=0.00..121.15 rows=1923 width=16) (actual time=0.17..10.91 rows=1923 loops=1)\r\n Filter: (clientid = 6)\r\n -> Sort (cost=109137.81..109942.73 rows=321966 width=24) (actual time=12317.83..12848.43 rows=329431 loops=1)\r\n Sort Key: b.tranheaderid\r\n -> Seq Scan on batchheader b (cost=0.00..79683.44 rows=321966 width=24) (actual time=29.93..10422.75 rows=329431 loops=1)\r\n Filter: (batchdate > '2002-12-15 00:00:00'::timestamp without time zone)\r\n -> Hash (cost=5.63..5.63 
rows=1 width=78) (actual time=21.06..21.06 rows=0 loops=1)\r\n -> Index Scan using merchants_ix_merchid_idx on merchants m (cost=0.00..5.63 rows=1 width=78) (actual time=21.05..21.05 rows=1 loops=1)\r\n Index Cond: (merchid = '701252267'::character varying)\r\n -> Sort (cost=1655815.90..1656810.15 rows=397698 width=70) (actual time=1513860.73..1514497.92 rows=368681 loops=1)\r\n Sort Key: d.batchid\r\n -> Index Scan using batchdetail_ix_tranamount_idx on batchdetail d (cost=0.00..1597522.38 rows=397698 width=70) (actual time=14.05..1505397.17 rows=370307 loops=1)\r\n Index Cond: ((tranamount >= 500.0) AND (tranamount <= 700.0))\r\n -> Seq Scan on cardtype c (cost=0.00..1.10 rows=10 width=14) (actual time=2.25..2.28 rows=10 loops=5)\r\n -> Seq Scan on purc1 p1 (cost=0.00..44285.35 rows=941335 width=19) (actual time=2.40..3812.43 rows=938770 loops=5)\r\n -> Seq Scan on direct dr (cost=0.00..0.00 rows=1 width=9) (actual time=0.00..0.00 rows=0 loops=5)\r\n -> Seq Scan on carrental cr (cost=0.00..0.00 rows=1 width=17) (actual time=0.00..0.00 rows=0 loops=5)\r\n -> Seq Scan on checks ck (cost=0.00..40.67 rows=1267 width=38) (actual time=0.50..7.05 rows=1267 loops=5)\r\nTotal runtime: 1556553.76 msec\r\n\r\n \r\nTomasz Myrta wrote:\r\n>Seq Scan on batchheader b (cost=0.00..79587.23 rows=308520 width=56)\r\n>Can you write what condition and indexes does batchheader have?\r\n \r\nbatchheader has 2.6 million records:\r\nCREATE TABLE public.batchheader (\r\n batchid int8 DEFAULT nextval('\"batchheader_batchid_key\"'::text) NOT NULL, \r\n line int4, \r\n tranheaderid int4, \r\n merchantid int4, \r\n batchdate timestamp, \r\n merchref char(16), \r\n carryindicator char(1), \r\n assocno varchar(6), \r\n merchbankno char(4), \r\n debitcredit char(1), \r\n achpostdate timestamp, \r\n trancode char(4), \r\n netdeposit numeric(18, 4), \r\n CONSTRAINT batchheader_ix_batchid_idx UNIQUE (batchid), \r\n CONSTRAINT batchheader_pkey PRIMARY KEY (batchid), \r\n CONSTRAINT fk_bh_th FOREIGN KEY (tranheaderid) REFERENCES tranheader (tranheaderid) ON DELETE RESTRICT ON UPDATE NO ACTION NOT DEFERRABLE INITIALLY IMMEDIATE\r\n) WITH OIDS;\r\nCREATE UNIQUE INDEX batchheader_ix_batchid_idx ON batchheader USING btree (batchid);\r\nCREATE INDEX batchheader_ix_batchdate_idx ON batchheader USING btree (batchdate);\r\nCREATE INDEX batchheader_ix_merchantid_idx ON batchheader USING btree (merchantid);\r\nCREATE INDEX batchheader_ix_merchref_idx ON batchheader USING btree (merchref);\r\nCREATE INDEX batchheader_ix_netdeposit_idx ON batchheader USING btree (netdeposit);\r\n\r\nAnd here's batchdetail too, just for kicks. 
23 million records.\r\nCREATE TABLE public.batchdetail (\r\n batchdetailid int8 DEFAULT nextval('public.batchdetail_batchdetailid_seq'::text) NOT NULL, \r\n line int4, \r\n batchid int4, \r\n merchno varchar(16), \r\n assocno varchar(6), \r\n refnumber char(23), \r\n trandate timestamp, \r\n tranamount numeric(18, 4), \r\n netdeposit numeric(18, 4), \r\n cardnocfb bytea, \r\n bestinterchange char(2), \r\n submitinterchange char(2), \r\n downgrader1 char(4), \r\n downgrader2 char(4), \r\n downgrader3_1 char(1), \r\n downgrader3_2 char(1), \r\n downgrader3_3 char(1), \r\n downgrader3_4 char(1), \r\n downgrader3_5 char(1), \r\n downgrader3_6 char(1), \r\n downgrader3_7 char(1), \r\n onlineentry char(1), \r\n achflag char(1), \r\n authsource char(1), \r\n cardholderidmeth char(1), \r\n catindicator char(1), \r\n reimbattribute char(1), \r\n motoindicator char(1), \r\n authcharind char(1), \r\n banknetrefno char(9), \r\n banknetauthdate char(6), \r\n draftaflag char(1), \r\n authcurrencycode char(3), \r\n authamount numeric(18, 4), \r\n validcode char(4), \r\n authresponsecode char(2), \r\n debitnetworkid char(3), \r\n switchsetindicator char(1), \r\n posentrymode char(2), \r\n debitcredit char(1), \r\n reversalflag char(1), \r\n merchantname varchar(25), \r\n authno char(6), \r\n rejectreason char(4), \r\n cardtypeid int4, \r\n currencycode char(3), \r\n origtranamount numeric(18, 4), \r\n foreigncard char(1), \r\n carryover char(1), \r\n extensionrecord char(2), \r\n mcccode char(4), \r\n terminalid char(8), \r\n submitinterchange3b char(3), \r\n purchaseid varchar(25), \r\n trancode char(4), \r\n CONSTRAINT batchdetail_pkey PRIMARY KEY (batchdetailid)\r\n) WITH OIDS;\r\nCREATE INDEX batchdetail_ix_authno_idx ON batchdetail USING btree (authno);\r\nCREATE INDEX batchdetail_ix_batchdetailid_idx ON batchdetail USING btree (batchdetailid);\r\nCREATE INDEX batchdetail_ix_cardnocfb_idx ON batchdetail USING btree (cardnocfb);\r\nCREATE INDEX batchdetail_ix_posentrymode_idx ON batchdetail USING btree (posentrymode);\r\nCREATE INDEX batchdetail_ix_submitinterchange3b_idx ON batchdetail USING btree (submitinterchange3b);\r\nCREATE INDEX batchdetail_ix_tranamount_idx ON batchdetail USING btree (tranamount);\r\n \r\nRoman Fail\r\nSr. Web Application Developer\r\nPOS Portal, Inc.\r\nSacramento, CA\r\n \r\n\r\n \r\n\r\n \r\n\r\n\r\n \r\n\r\n \r\n", "msg_date": "Wed, 15 Jan 2003 15:30:55 -0800", "msg_from": "\"Roman Fail\" <rfail@posportal.com>", "msg_from_op": false, "msg_subject": "Re: 7.3.1 New install, large queries are slow" }, { "msg_contents": "\n> So here's the query, and another EXPLAIN ANALYZE to go with it\n> (executed after all setting changes). 
The same result columns and\n> JOINS are performed all day with variations on the WHERE clause; other\n> possible search columns are the ones that are indexed (see below).\n> The 4 tables that use LEFT JOIN only sometimes have matching records,\n> hence the OUTER join.\n>\n> EXPLAIN ANALYZE\n> SELECT b.batchdate, d.batchdetailid, t.bankno, d.trandate, d.tranamount,\n> d.submitinterchange, d.authamount, d.authno, d.cardtypeid, d.mcccode,\n> m.name AS merchantname, c.cardtype, m.merchid,\n> p1.localtaxamount, p1.productidentifier, dr.avsresponse,\n> cr.checkoutdate, cr.noshowindicator, ck.checkingacctno,\n> ck.abaroutingno, ck.checkno\n> FROM tranheader t\n> INNER JOIN batchheader b ON t.tranheaderid = b.tranheaderid\n> INNER JOIN merchants m ON m.merchantid = b.merchantid\n> INNER JOIN batchdetail d ON d.batchid = b.batchid\n> INNER JOIN cardtype c ON d.cardtypeid = c.cardtypeid\n> LEFT JOIN purc1 p1 ON p1.batchdetailid = d.batchdetailid\n> LEFT JOIN direct dr ON dr.batchdetailid = d.batchdetailid\n> LEFT JOIN carrental cr ON cr.batchdetailid = d.batchdetailid\n> LEFT JOIN checks ck ON ck.batchdetailid = d.batchdetailid\n> WHERE t.clientid = 6\n> AND d.tranamount BETWEEN 500.0 AND 700.0\n> AND b.batchdate > '2002-12-15'\n> AND m.merchid = '701252267'\n> ORDER BY b.batchdate DESC\n> LIMIT 50\n\nWell, you might get a little help by replace the from with\n something like:\n\nFROM transheader t, batchheader b, merchants m, cardtype c,\nbatchdetail d\nLEFT JOIN purc1 p1 on p1.batchdetailid=d.batchdetailid\nLEFT JOIN direct dr ON dr.batchdetailid = d.batchdetailid\nLEFT JOIN carrental cr ON cr.batchdetailid = d.batchdetailid\nLEFT JOIN checks ck ON ck.batchdetailid = d.batchdetailid\n\nand adding\nAND t.tranheaderid=b.tranheaderid\nAND m.merchantid=b.merchantid\nAND d.batchid=b.batchid\nAND c.cardtypeid=d.cardtypeid\nto the WHERE conditions.\n\nThat should at least allow it to do some small reordering\nof the joins. I don't think that alone is going to do much,\nsince most of the time seems to be on the scan of d.\n\nWhat does vacuum verbose batchdetail give you (it'll give\nan idea of pages anyway)\n\n\n\n\n\n", "msg_date": "Wed, 15 Jan 2003 19:40:04 -0800 (PST)", "msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>", "msg_from_op": false, "msg_subject": "Re: 7.3.1 New install, large queries are slow" }, { "msg_contents": "\nOn Wed, 15 Jan 2003, Roman Fail wrote:\n\n> Thanks to everyone for the quick replies! I'm sure that my lack of\n> skill with SQL queries is the main problem. What's strange to me is\n> how MSSQL takes my bad queries and makes them look good anyway. It\n> must have a real smart planner.\n\nAs a followup, if you do\nset enable_indexscan=off;\nbefore running the explain analyze, what does that give you?\n\n", "msg_date": "Wed, 15 Jan 2003 19:46:16 -0800 (PST)", "msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>", "msg_from_op": false, "msg_subject": "Re: 7.3.1 New install, large queries are slow" }, { "msg_contents": "\"Roman Fail\" <rfail@posportal.com> writes:\n> Thanks to everyone for the quick replies! I'm sure that my lack of\n> skill with SQL queries is the main problem. What's strange to me is\n> how MSSQL takes my bad queries and makes them look good anyway. It\n> must have a real smart planner.\n\nI think more likely the issue is that your use of JOIN syntax is forcing\nPostgres into a bad plan. MSSQL probably doesn't assign any semantic\nsignificance to the use of \"a JOIN b\" syntax as opposed to \"FROM a, b\"\nsyntax. Postgres does. 
Whether this is a bug or a feature depends on\nyour point of view --- but there are folks out there who find it to be\na life-saver. You can find some explanations at\nhttp://www.ca.postgresql.org/users-lounge/docs/7.3/postgres/explicit-joins.html\n\n> Is it pretty much universally accepted that I should drop all my\n> foreign keys?\n\nNo. They don't have any effect on SELECT performance in Postgres.\nThey will impact update speed, but that's not your complaint (at the\nmoment). Don't throw away data integrity protection until you know\nyou need to.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 15 Jan 2003 23:35:00 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: 7.3.1 New install, large queries are slow " }, { "msg_contents": "Tom, Roman,\n\n> I think more likely the issue is that your use of JOIN syntax is\n> forcing\n> Postgres into a bad plan. MSSQL probably doesn't assign any semantic\n> significance to the use of \"a JOIN b\" syntax as opposed to \"FROM a,\n> b\"\n> syntax. \n\nThat's correct. MSSQL will reorder equijoins, even when explicitly\ndeclared.\n\nHey, Roman, how many records in BatchDetail, anyway?\n\nJosh Berkus\n", "msg_date": "Wed, 15 Jan 2003 20:41:11 -0800", "msg_from": "\"Josh Berkus\" <josh@agliodbs.com>", "msg_from_op": false, "msg_subject": "Re: 7.3.1 New install, large queries are slow " }, { "msg_contents": "Tom Lane wrote:\n> \"Roman Fail\" <rfail@posportal.com> writes:\n> > Thanks to everyone for the quick replies! I'm sure that my lack of\n> > skill with SQL queries is the main problem. What's strange to me is\n> > how MSSQL takes my bad queries and makes them look good anyway. It\n> > must have a real smart planner.\n> \n> I think more likely the issue is that your use of JOIN syntax is forcing\n> Postgres into a bad plan. MSSQL probably doesn't assign any semantic\n> significance to the use of \"a JOIN b\" syntax as opposed to \"FROM a, b\"\n> syntax. Postgres does. Whether this is a bug or a feature depends on\n> your point of view --- but there are folks out there who find it to be\n> a life-saver. \n\nSince it *does* depend on one's point of view, would it be possible to\nhave control over this implemented in a session-defined variable (with\nthe default in the GUC, of course)? I wouldn't be surprised if a lot\nof people get bitten by this.\n\n\n-- \nKevin Brown\t\t\t\t\t kevin@sysexperts.com\n", "msg_date": "Wed, 15 Jan 2003 20:48:47 -0800", "msg_from": "Kevin Brown <kevin@sysexperts.com>", "msg_from_op": false, "msg_subject": "Re: 7.3.1 New install, large queries are slow" }, { "msg_contents": "Kevin Brown <kevin@sysexperts.com> writes:\n> Tom Lane wrote:\n>> ... Whether this is a bug or a feature depends on\n>> your point of view --- but there are folks out there who find it to be\n>> a life-saver. \n\n> Since it *does* depend on one's point of view, would it be possible to\n> have control over this implemented in a session-defined variable (with\n> the default in the GUC, of course)?\n\nI have no objection to doing that --- anyone care to contribute code to\nmake it happen? (I think the trick would be to fold plain-JOIN jointree\nentries into FROM-list items in planner.c, somewhere near the code that\nhoists sub-SELECTs into the main join tree. 
But I haven't tried it, and\nhave no time to in the near future.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 16 Jan 2003 00:07:52 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: 7.3.1 New install, large queries are slow " }, { "msg_contents": "Tom Lane wrote:\n> Kevin Brown <kevin@sysexperts.com> writes:\n> > Tom Lane wrote:\n> >> ... Whether this is a bug or a feature depends on\n> >> your point of view --- but there are folks out there who find it to be\n> >> a life-saver. \n> \n> > Since it *does* depend on one's point of view, would it be possible to\n> > have control over this implemented in a session-defined variable (with\n> > the default in the GUC, of course)?\n> \n> I have no objection to doing that --- anyone care to contribute code to\n> make it happen? (I think the trick would be to fold plain-JOIN jointree\n> entries into FROM-list items in planner.c, somewhere near the code that\n> hoists sub-SELECTs into the main join tree. But I haven't tried it, and\n> have no time to in the near future.)\n\nI'm looking at the code now (the 7.2.3 code in particular, but I\nsuspect for this purpose the code is likely to be very similar to the\nCVS tip), but it's all completely new to me and the developer\ndocumentation isn't very revealing of the internals. The optimizer\ncode (I've been looking especially at make_jointree_rel() and\nmake_fromexpr_rel()) looks a bit tricky...it'll take me some time to\ncompletely wrap my brain around it. Any pointers to revealing\ndocumentation would be quite helpful!\n\n\n-- \nKevin Brown\t\t\t\t\t kevin@sysexperts.com\n", "msg_date": "Thu, 16 Jan 2003 02:40:25 -0800", "msg_from": "Kevin Brown <kevin@sysexperts.com>", "msg_from_op": false, "msg_subject": "Re: 7.3.1 New install, large queries are slow" }, { "msg_contents": "On Thu, 2003-01-16 at 03:40, Stephan Szabo wrote:\n> > So here's the query, and another EXPLAIN ANALYZE to go with it\n> > (executed after all setting changes). 
The same result columns and\n> > JOINS are performed all day with variations on the WHERE clause; other\n> > possible search columns are the ones that are indexed (see below).\n> > The 4 tables that use LEFT JOIN only sometimes have matching records,\n> > hence the OUTER join.\n> >\n> > EXPLAIN ANALYZE\n> > SELECT b.batchdate, d.batchdetailid, t.bankno, d.trandate, d.tranamount,\n> > d.submitinterchange, d.authamount, d.authno, d.cardtypeid, d.mcccode,\n> > m.name AS merchantname, c.cardtype, m.merchid,\n> > p1.localtaxamount, p1.productidentifier, dr.avsresponse,\n> > cr.checkoutdate, cr.noshowindicator, ck.checkingacctno,\n> > ck.abaroutingno, ck.checkno\n> > FROM tranheader t\n> > INNER JOIN batchheader b ON t.tranheaderid = b.tranheaderid\n> > INNER JOIN merchants m ON m.merchantid = b.merchantid\n> > INNER JOIN batchdetail d ON d.batchid = b.batchid\n> > INNER JOIN cardtype c ON d.cardtypeid = c.cardtypeid\n> > LEFT JOIN purc1 p1 ON p1.batchdetailid = d.batchdetailid\n> > LEFT JOIN direct dr ON dr.batchdetailid = d.batchdetailid\n> > LEFT JOIN carrental cr ON cr.batchdetailid = d.batchdetailid\n> > LEFT JOIN checks ck ON ck.batchdetailid = d.batchdetailid\n> > WHERE t.clientid = 6\n> > AND d.tranamount BETWEEN 500.0 AND 700.0\n\nHow much of data in d has tranamount BETWEEN 500.0 AND 700.0 ?\n\nDo you have an index on d.tranamount ?\n\n> > AND b.batchdate > '2002-12-15'\n\nagain - how much of b.batchdate > '2002-12-15' ?\n\nis there an index\n\n> > AND m.merchid = '701252267'\n\nditto\n\n> > ORDER BY b.batchdate DESC\n> > LIMIT 50\n\nthese two together make me think that perhaps \n\nb.batchdate between '2003-12-12' and '2002-12-15' \n\ncould be better at making the optimiser see that reverse index scan on\nb.batchdate would be the way to go.\n\n> Well, you might get a little help by replace the from with\n\n-- \nHannu Krosing <hannu@tm.ee>\n", "msg_date": "16 Jan 2003 12:54:58 +0000", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: 7.3.1 New install, large queries are slow" }, { "msg_contents": "On Wed, Jan 15, 2003 at 03:30:55PM -0800, Roman Fail wrote:\n> Thanks to everyone for the quick replies! I'm sure that my lack of skill with SQL queries is the main problem. What's strange to me is how MSSQL takes my bad queries and makes them look good anyway. It must have a real smart planner.\n> Andrew Sullivan wrote:\n> >First, the performance of foreign keys is flat-out awful in Postgres.\n> >I suggest avoiding them if you can.\n> \n> I don't have any problem getting rid of FKs, especially if it might\n> actually help performance. The nightly data import is well-defined\n\nSorry, I think I sent this too quickly. FKs make no difference to\nSELECT performance, so if you're not doing updates and the like at\nthe same time as the SELECTs, there's no advantage. So you should\nleave the FKs in place.\n\n> I think this is the most likely problem. I've read through Chapter\n> 10 of the 7.3 docs, but I still don't feel like I know what would\n> be a good order. How do you learn this stuff anyway? 
Trial and\n> error?\n\nSorry, but yes.\n\nA\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<andrew@libertyrms.info> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Thu, 16 Jan 2003 08:17:38 -0500", "msg_from": "Andrew Sullivan <andrew@libertyrms.info>", "msg_from_op": false, "msg_subject": "Re: 7.3.1 New install, large queries are slow" }, { "msg_contents": "Kevin Brown <kevin@sysexperts.com> writes:\n> I'm looking at the code now (the 7.2.3 code in particular, but I\n> suspect for this purpose the code is likely to be very similar to the\n> CVS tip), but it's all completely new to me and the developer\n> documentation isn't very revealing of the internals. The optimizer\n> code (I've been looking especially at make_jointree_rel() and\n> make_fromexpr_rel()) looks a bit tricky...it'll take me some time to\n> completely wrap my brain around it. Any pointers to revealing\n> documentation would be quite helpful!\n\nsrc/backend/optimizer/README is a good place to start.\n\nI'd recommend working with CVS tip; there is little point in doing any\nnontrivial development in the 7.2 branch. You'd have to port it forward\nanyway.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 16 Jan 2003 10:46:22 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: 7.3.1 New install, large queries are slow " }, { "msg_contents": "I was surprised to hear that JOIN syntax constrained the planner. We \nhave a policy of using JOIN syntax to describe the table relationships \nand where clauses to describe the selection process for our queries. It \nwas our understanding that the JOIN syntax was introduced to support \nthis approach, but not to contrain the planner. \n\nIs there any way to sell the planner to consider JOIN syntax as \nequivalent to WHERE clauses and to not use them to force the planner \ndown a specific path? Can we get that added as an option (and then made \navailable to use JDBC folks as a URL parameter). It would make my team \nvery happy :-).\n\n\nI think that making this an option will help all those migrating to \nPostgres who did not expect that JOINs forced the planner down specific \nplans. Is it possible/reasonable to add?\n\nCharlie\n\n\nTom Lane wrote:\n\n>\"Roman Fail\" <rfail@posportal.com> writes:\n> \n>\n>>Thanks to everyone for the quick replies! I'm sure that my lack of\n>>skill with SQL queries is the main problem. What's strange to me is\n>>how MSSQL takes my bad queries and makes them look good anyway. It\n>>must have a real smart planner.\n>> \n>>\n>\n>I think more likely the issue is that your use of JOIN syntax is forcing\n>Postgres into a bad plan. MSSQL probably doesn't assign any semantic\n>significance to the use of \"a JOIN b\" syntax as opposed to \"FROM a, b\"\n>syntax. Postgres does. Whether this is a bug or a feature depends on\n>your point of view --- but there are folks out there who find it to be\n>a life-saver. You can find some explanations at\n>http://www.ca.postgresql.org/users-lounge/docs/7.3/postgres/explicit-joins.html\n>\n> \n>\n>>Is it pretty much universally accepted that I should drop all my\n>>foreign keys?\n>> \n>>\n>\n>No. They don't have any effect on SELECT performance in Postgres.\n>They will impact update speed, but that's not your complaint (at the\n>moment). 
Don't throw away data integrity protection until you know\n>you need to.\n>\n>\t\t\tregards, tom lane\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 4: Don't 'kill -9' the postmaster\n> \n>\n\n-- \n\n\nCharles H. Woloszynski\n\nClearMetrix, Inc.\n115 Research Drive\nBethlehem, PA 18015\n\ntel: 610-419-2210 x400\nfax: 240-371-3256\nweb: www.clearmetrix.com\n\n\n\n\n\n", "msg_date": "Thu, 16 Jan 2003 10:53:08 -0500", "msg_from": "\"Charles H. Woloszynski\" <chw@clearmetrix.com>", "msg_from_op": false, "msg_subject": "Re: 7.3.1 New install, large queries are slow" }, { "msg_contents": "\nIs this a TODO item?\n\n---------------------------------------------------------------------------\n\nCharles H. Woloszynski wrote:\n> I was surprised to hear that JOIN syntax constrained the planner. We \n> have a policy of using JOIN syntax to describe the table relationships \n> and where clauses to describe the selection process for our queries. It \n> was our understanding that the JOIN syntax was introduced to support \n> this approach, but not to contrain the planner. \n> \n> Is there any way to sell the planner to consider JOIN syntax as \n> equivalent to WHERE clauses and to not use them to force the planner \n> down a specific path? Can we get that added as an option (and then made \n> available to use JDBC folks as a URL parameter). It would make my team \n> very happy :-).\n> \n> \n> I think that making this an option will help all those migrating to \n> Postgres who did not expect that JOINs forced the planner down specific \n> plans. Is it possible/reasonable to add?\n> \n> Charlie\n> \n> \n> Tom Lane wrote:\n> \n> >\"Roman Fail\" <rfail@posportal.com> writes:\n> > \n> >\n> >>Thanks to everyone for the quick replies! I'm sure that my lack of\n> >>skill with SQL queries is the main problem. What's strange to me is\n> >>how MSSQL takes my bad queries and makes them look good anyway. It\n> >>must have a real smart planner.\n> >> \n> >>\n> >\n> >I think more likely the issue is that your use of JOIN syntax is forcing\n> >Postgres into a bad plan. MSSQL probably doesn't assign any semantic\n> >significance to the use of \"a JOIN b\" syntax as opposed to \"FROM a, b\"\n> >syntax. Postgres does. Whether this is a bug or a feature depends on\n> >your point of view --- but there are folks out there who find it to be\n> >a life-saver. You can find some explanations at\n> >http://www.ca.postgresql.org/users-lounge/docs/7.3/postgres/explicit-joins.html\n> >\n> > \n> >\n> >>Is it pretty much universally accepted that I should drop all my\n> >>foreign keys?\n> >> \n> >>\n> >\n> >No. They don't have any effect on SELECT performance in Postgres.\n> >They will impact update speed, but that's not your complaint (at the\n> >moment). Don't throw away data integrity protection until you know\n> >you need to.\n> >\n> >\t\t\tregards, tom lane\n> >\n> >---------------------------(end of broadcast)---------------------------\n> >TIP 4: Don't 'kill -9' the postmaster\n> > \n> >\n> \n> -- \n> \n> \n> Charles H. 
Woloszynski\n> \n> ClearMetrix, Inc.\n> 115 Research Drive\n> Bethlehem, PA 18015\n> \n> tel: 610-419-2210 x400\n> fax: 240-371-3256\n> web: www.clearmetrix.com\n> \n> \n> \n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 16 Jan 2003 11:18:35 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: 7.3.1 New install, large queries are slow" }, { "msg_contents": "\nOn Wed, 15 Jan 2003, Roman Fail wrote:\n\nI just had new thoughts.\n\nIf you make an index on batchdetail(batchid)\ndoes that help?\n\nI realized that it was doing a merge join\nto join d and the (t,b,m) combination when it\nwas expecting 3 rows out of the latter, and\nbatchid is presumably fairly selective on\nthe batchdetail table, right? I'd have expected\na nested loop over the id column, but it\ndoesn't appear you have an index on it in\nbatchdetail.\n\nThen I realized that batchheader.batchid and\nbatchdetail.batchid don't even have the same\ntype, and that's probably something else you'd\nneed to fix.\n\n> batchheader has 2.6 million records:\n> CREATE TABLE public.batchheader (\n> batchid int8 DEFAULT nextval('\"batchheader_batchid_key\"'::text) NOT NULL,\n\n> And here's batchdetail too, just for kicks. 23 million records.\n> CREATE TABLE public.batchdetail (\n> batchid int4,\n\n\n\n\n", "msg_date": "Thu, 16 Jan 2003 10:43:02 -0800 (PST)", "msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>", "msg_from_op": false, "msg_subject": "Re: 7.3.1 New install, large queries are slow" }, { "msg_contents": "I'd love to see this as a TODO item, but I am hardly one to add to the \nlist...\n\nCharlie\n\n\nBruce Momjian wrote:\n\n>Is this a TODO item?\n>\n>---------------------------------------------------------------------------\n>\n>Charles H. Woloszynski wrote:\n> \n>\n>>I was surprised to hear that JOIN syntax constrained the planner. We \n>>have a policy of using JOIN syntax to describe the table relationships \n>>and where clauses to describe the selection process for our queries. It \n>>was our understanding that the JOIN syntax was introduced to support \n>>this approach, but not to contrain the planner. \n>>\n>>Is there any way to sell the planner to consider JOIN syntax as \n>>equivalent to WHERE clauses and to not use them to force the planner \n>>down a specific path? Can we get that added as an option (and then made \n>>available to use JDBC folks as a URL parameter). It would make my team \n>>very happy :-).\n>>\n>>\n>>I think that making this an option will help all those migrating to \n>>Postgres who did not expect that JOINs forced the planner down specific \n>>plans. Is it possible/reasonable to add?\n>>\n>>Charlie\n>>\n>>\n>>Tom Lane wrote:\n>>\n>> \n>>\n>>>\"Roman Fail\" <rfail@posportal.com> writes:\n>>> \n>>>\n>>> \n>>>\n>>>>Thanks to everyone for the quick replies! I'm sure that my lack of\n>>>>skill with SQL queries is the main problem. What's strange to me is\n>>>>how MSSQL takes my bad queries and makes them look good anyway. 
It\n>>>>must have a real smart planner.\n>>>> \n>>>>\n>>>> \n>>>>\n>>>I think more likely the issue is that your use of JOIN syntax is forcing\n>>>Postgres into a bad plan. MSSQL probably doesn't assign any semantic\n>>>significance to the use of \"a JOIN b\" syntax as opposed to \"FROM a, b\"\n>>>syntax. Postgres does. Whether this is a bug or a feature depends on\n>>>your point of view --- but there are folks out there who find it to be\n>>>a life-saver. You can find some explanations at\n>>>http://www.ca.postgresql.org/users-lounge/docs/7.3/postgres/explicit-joins.html\n>>>\n>>> \n>>>\n>>> \n>>>\n>>>>Is it pretty much universally accepted that I should drop all my\n>>>>foreign keys?\n>>>> \n>>>>\n>>>> \n>>>>\n>>>No. They don't have any effect on SELECT performance in Postgres.\n>>>They will impact update speed, but that's not your complaint (at the\n>>>moment). Don't throw away data integrity protection until you know\n>>>you need to.\n>>>\n>>>\t\t\tregards, tom lane\n>>>\n>>>---------------------------(end of broadcast)---------------------------\n>>>TIP 4: Don't 'kill -9' the postmaster\n>>> \n>>>\n>>> \n>>>\n>>-- \n>>\n>>\n>>Charles H. Woloszynski\n>>\n>>ClearMetrix, Inc.\n>>115 Research Drive\n>>Bethlehem, PA 18015\n>>\n>>tel: 610-419-2210 x400\n>>fax: 240-371-3256\n>>web: www.clearmetrix.com\n>>\n>>\n>>\n>>\n>>\n>>\n>>---------------------------(end of broadcast)---------------------------\n>>TIP 2: you can get off all lists at once with the unregister command\n>> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n>>\n>> \n>>\n>\n> \n>\n\n-- \n\n\nCharles H. Woloszynski\n\nClearMetrix, Inc.\n115 Research Drive\nBethlehem, PA 18015\n\ntel: 610-419-2210 x400\nfax: 240-371-3256\nweb: www.clearmetrix.com\n\n\n\n\n\n", "msg_date": "Fri, 17 Jan 2003 08:29:25 -0500", "msg_from": "\"Charles H. Woloszynski\" <chw@clearmetrix.com>", "msg_from_op": false, "msg_subject": "Re: 7.3.1 New install, large queries are slow" } ]
[ { "msg_contents": "On Tue, 31 Dec 2002 14:14:34 -0800 (PST)\nMichael Teter <mt_pgsql@yahoo.com> wrote:\n> I've used PostgreSQL in the past on a small project,\n> and I thought it was great.\n> \n> Now I'm trying to evaluate it as a possible\n> replacement for MS SQL Server.\n> \n> I have two issues:\n> \n> 1. I have a homegrown Java migration tool I wrote that\n> seems to work reasonably well, but I'm hoping to\n> understand how to improve its performance.\n> \n> 2. After migrating, I found pg_dump to be plenty\n> quick, but psql < (to completely reload the database)\n> to be very very slow during the COPY stage.\n\nI've found that \"psql -f myfile mydb\" is Much faster than\n\"psql mydb <myfile\". I'm not too sure why, but it's worth\na try. \n\n> Now for more detail. On problem 1., I have autocommit\n> off, and I'm doing PreparedStatement.addBatch() and\n> executeBatch(), and eventually, commit.\n> \n> I've been playing with the amount of rows I do before\n> executeBatch(), and I seem to do best with 20,000 to\n> 50,000 rows in a batch. Some background: this is\n> RedHat8.0 with all the latest RedHat patches, 1GB\n> RAMBUS RAM, 2GHz P4, 40GB 7200RPM HD. Watching\n> gkrellm and top, I see a good bit of CPU use by\n> postmaster duing the addBatch()es, but then when\n> executeBatch() comes, CPU goes almost totally idle,\n> and disk starts churning. Somehow it seems the disk\n> isn't being utilized to the fullest, but I'm just\n> guessing.\n> \n> I'm wondering if there's some postmaster tuning I\n> might do to improve this.\n> \n> Then on problem 2., a pg_dump of the database takes\n> about 3 minutes, and creates a file of 192MB in size. \n> Then I create testdb and do psql -e testdb\n> <thedump.sql, and it creeps once it gets to the COPY\n> section. So far it's been running for 45 minutes,\n> mostly on one table (the biggest table, which has\n> 1,090,000 rows or so). During this time, CPU use is\n> very low, and there's no net or lo traffic.\n> \n> In contrast, using MSSQL's backup and restore\n> facilities, it takes about 15 second on a previous\n> generation box (with SCSI though) to backup, and 45\n> seconds to a minute to restore.\n> \n> Suggestions?\n> \n> Thanks,\n> MT\n> \n> __________________________________________________\n> Do you Yahoo!?\n> Yahoo! Mail Plus - Powerful. Affordable. Sign up now.\n> http://mailplus.yahoo.com\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n\n\n-- \n I cannot think why the whole bed of the ocean is\n not one solid mass of oysters, so prolific they seem. Ah,\n I am wandering! Strange how the brain controls the brain!\n\t-- Sherlock Holmes in \"The Dying Detective\"\n\n\n-- \n I cannot think why the whole bed of the ocean is\n not one solid mass of oysters, so prolific they seem. Ah,\n I am wandering! Strange how the brain controls the brain!\n\t-- Sherlock Holmes in \"The Dying Detective\"\n", "msg_date": "Thu, 2 Jan 2003 10:57:27 -0500", "msg_from": "george young <gry@ll.mit.edu>", "msg_from_op": true, "msg_subject": "Re: preliminary testing, two very slow situations..." } ]
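Since the slow step is the COPY stage, one way to act on george's tip while also isolating where the time goes is to split the dump so the data load can be timed on its own. A sketch with made-up database and file names; pg_dump's -s and -a switches are schema-only and data-only respectively:

pg_dump -s sourcedb > schema.sql
pg_dump -a sourcedb > data.sql
createdb testdb
psql -f schema.sql testdb
time psql -f data.sql testdb    # the -f form george found much faster than "psql testdb < file"

If the timed step is still dominated by a single table, that points back to Tom's earlier question in this thread about exactly what constraints are declared on it.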
[ { "msg_contents": "\n Well, our current database server is getting tremendously loaded, and\nright now there isn't a clear-cut choice as to an upgrade path - at least\nnot within the commodity hardware market.\n\n The machine is a dual AthlonMP 2000, with 2 gigs of RAM. the loads on\nthe machine are getting out of hand, and performance is noticeably slowed.\n'top' shows the CPU's as being anywhere from 30% to 50% idle, with (on\naverage) 5-10 postmasters in the \"non-idle\" state. 'vmstat' shows bi/bo\npegged at zero (copious quantities of disk cache, fsync turned off),\ninterrupts fluctuating between 200 and 1,000 per second (avg. is approx\n400), context switches between 1300 and 4500 (avg. is approx 2300). I\nlogged some queries, and found that in an average second, the machine\nforks off 10 new backends, and responds to 50 selects and 3 updates.\n\n My feelings are that the machine is being swamped by both the number of\ncontext switches and the I/O, most likely the memory bandwidth. I'm\nworking on implementing some connection pooling to reduce the number of\nnew backends forked off, but there's not much I can do about the sheer\nvolume (or cost) of queries.\n\n Now, if quad-Hammers were here, I'd simply throw hardware at it.\nUnfortunately, they're not. So far, about the only commodity-level answer\nI can think of would be a dual P4 Xeon, with the 533 MHz bus, and\ndual-channel DDR memory. That would give each processor approximately\ndouble the memory bandwidth over what we're currently running.\n\n I'm fairly sure that would at least help lower the load, but I'm not\nsure by how much. If anyone has run testing under similar platforms, I'd\nlove to hear of the performance difference. If this is going to chop the\nloads in half, I'll do it. If it's only going to improve it by 10% or so,\nI'm not going to waste the money.\n\nSteve\n\n", "msg_date": "Thu, 2 Jan 2003 10:42:05 -0700", "msg_from": "\"Steve Wolfe\" <nw@codon.com>", "msg_from_op": true, "msg_subject": "Question on hardware & server capacity" }, { "msg_contents": "\"Steve Wolfe\" <nw@codon.com> writes:\n> I logged some queries, and found that in an average second, the machine\n> forks off 10 new backends, and responds to 50 selects and 3 updates.\n\nSo an average backend only processes ~ 5 queries before exiting?\n\n> My feelings are that the machine is being swamped by both the number of\n> context switches and the I/O, most likely the memory bandwidth.\n\nI think you're getting killed by the lack of connection pooling.\nLaunching a new backend is moderately expensive: there's not just the\nOS-level fork overhead, but significant cost to fill the catalog caches\nto useful levels, etc.\n\n7.3 has reduced some of those startup costs a little, so if you're still\non 7.2 then an update might help. But I'd strongly recommend getting\nconnection re-use in place before you go off and buy hardware.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 02 Jan 2003 14:57:34 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Question on hardware & server capacity " }, { "msg_contents": "> So an average backend only processes ~ 5 queries before exiting?\n>\n\n> 7.3 has reduced some of those startup costs a little, so if you're still\n> on 7.2 then an update might help. But I'd strongly recommend getting\n> connection re-use in place before you go off and buy hardware.\n\n I've been fooling around with some connection pooling, and it hasn't\nmake the sort of difference we're looking for. 
Going from 3 queries per\nback-end to 100 queries per backend made only about a 20% difference.\nWhile that's nothing to scoff at, we're looking for at least a 100%\nimprovement. Either way, the connection pooling WILL be put in place, but\nI'm certainly not counting on it preventing the need for a hardware\nupgrade.\n\nsteve\n\n", "msg_date": "Thu, 2 Jan 2003 13:59:19 -0700", "msg_from": "\"Steve Wolfe\" <nw@codon.com>", "msg_from_op": true, "msg_subject": "Re: Question on hardware & server capacity " }, { "msg_contents": "Steve Wolfe kirjutas N, 02.01.2003 kell 22:42:\n> Well, our current database server is getting tremendously loaded, and\n> right now there isn't a clear-cut choice as to an upgrade path - at least\n> not within the commodity hardware market.\n\nHave you optimized your queries to max ?\n\nOften one or two of the queries take most of resources and starve\nothers.\n\n> The machine is a dual AthlonMP 2000, with 2 gigs of RAM. the loads on\n> the machine are getting out of hand, and performance is noticeably slowed.\n> 'top' shows the CPU's as being anywhere from 30% to 50% idle, with (on\n> average) 5-10 postmasters in the \"non-idle\" state. 'vmstat' shows bi/bo\n> pegged at zero (copious quantities of disk cache, fsync turned off),\n\nCould there be some unnecessary trashing between OS and PG caches ? \nHow could this be detected ?\n\n> interrupts fluctuating between 200 and 1,000 per second (avg. is approx\n> 400), context switches between 1300 and 4500 (avg. is approx 2300). I\n> logged some queries, and found that in an average second, the machine\n> forks off 10 new backends, and responds to 50 selects and 3 updates.\n\nWhat are the average times for query responses ?\n\nWill running the same queries (the ones from the logs) serially run\nfaster/slower/at the same speed ?\n\nDo you have some triggers on updates - I have occasionally found them to\nbe real performance killers.\n\nAlso - if memory bandwidth is the issue, you could tweak the parameters\nso that PG will prefer index scans more often - there are rumors that\nunder heavy loads it is often better to use more index scans due to\npossible smaller memory/buffer use, even if they would be slower for\nonly one or two backends.\n\n> My feelings are that the machine is being swamped by both the number of\n> context switches and the I/O, most likely the memory bandwidth. I'm\n> working on implementing some connection pooling to reduce the number of\n> new backends forked off, but there's not much I can do about the sheer\n> volume (or cost) of queries.\n\nYou could try to replicate the updates (one master - multiple slaves)\nand distribute the selects. I guess this is what current postgreSQL\nstate-of-the-art already lets you do with reasonable effort.\n\n> Now, if quad-Hammers were here, I'd simply throw hardware at it.\n> Unfortunately, they're not.\n\nYes, it's BAD if your business grows faster than Moores law ;-p\n\n> So far, about the only commodity-level answer\n> I can think of would be a dual P4 Xeon, with the 533 MHz bus, and\n> dual-channel DDR memory. That would give each processor approximately\n> double the memory bandwidth over what we're currently running.\n> \n> I'm fairly sure that would at least help lower the load, but I'm not\n> sure by how much. If anyone has run testing under similar platforms, I'd\n> love to hear of the performance difference. \n\nHow big is the dataset ? 
What kinds of queries ?\n\nI could perhaps run some quick tests on quad Xeon 1.40GHz , 2GB before\nthis box goes to production sometime early next week. It is a RedHat\nAS2.1 box with rh-postgresql-7.2.3-1_as21.\n\n# hdparm -tT /dev/sda\n\n/dev/sda:\n Timing buffer-cache reads: 128 MB in 0.39 seconds =328.21 MB/sec\n Timing buffered disk reads: 64 MB in 1.97 seconds = 32.49 MB/sec\n\n> If this is going to chop the\n> loads in half, I'll do it. If it's only going to improve it by 10% or so,\n> I'm not going to waste the money.\n\n-- \nHannu Krosing <hannu@tm.ee>\n", "msg_date": "03 Jan 2003 03:07:34 +0500", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: Question on hardware & server capacity" }, { "msg_contents": "> Have you optimized your queries to max ?\n>\n> Often one or two of the queries take most of resources and starve\n> others.\n\n I did log a good number of queries and analyze them, and 69% of the\nqueries issued are from one particular application, and they consume 78%\nof the total \"cost\". The developper is looking into optimizations, but it\ndoesn't look like there's going to be any low-hanging fruit. It's simply\na complicated and frequently-used app.\n\n> Could there be some unnecessary trashing between OS and PG caches ?\n> How could this be detected ?\n\n The machine generally has a minimum of a hundred megs free, unused\nmemory, so I'm not terribly worried about memory thrashing. I've\nincreased the various tuneable parameters (buffer blocks, sort mem, etc.)\nto the point where performance increases stopped, then I doubled them all\nfor good measure. I've already decided that the next machine will have at\nleast 4 gigs of RAM, just because RAM's cheap, and having too much is a\nGood Thing.\n\n> Do you have some triggers on updates - I have occasionally found them to\n> be real performance killers.\n\n There are a few triggers, but not many - and the number of updates is\nextremely low relative to the number of inserts.\n\n> Yes, it's BAD if your business grows faster than Moores law ;-p\n\n .. unfortunately, that's been the case. Each year we've done slightly\nmore than double the traffic of the previous year - and at the same time,\nas we unify all of our various data sources, the new applications that we\ndevelop tend to make greater and greater demands on the database server.\nThere is always the option of the \"big iron\", but your\ncost-per-transaction shoots through the roof. Paying a 10x premium can\nreally hurt. : )\n\n> How big is the dataset ? What kinds of queries ?\n\n our ~postgres/data/base is currently 3.4 gigs.\n\n> I could perhaps run some quick tests on quad Xeon 1.40GHz , 2GB before\n> this box goes to production sometime early next week. It is a RedHat\n> AS2.1 box with rh-postgresql-7.2.3-1_as21.\n\n I'd appreciate that!\n\nsteve\n\n", "msg_date": "Fri, 3 Jan 2003 10:31:54 -0700", "msg_from": "\"Steve Wolfe\" <nw@codon.com>", "msg_from_op": true, "msg_subject": "Re: Question on hardware & server capacity" }, { "msg_contents": "On Fri, 3 Jan 2003, Steve Wolfe wrote:\n\n> > Have you optimized your queries to max ?\n> >\n> > Often one or two of the queries take most of resources and starve\n> > others.\n> \n> I did log a good number of queries and analyze them, and 69% of the\n> queries issued are from one particular application, and they consume 78%\n> of the total \"cost\". The developper is looking into optimizations, but it\n> doesn't look like there's going to be any low-hanging fruit. 
It's simply\n> a complicated and frequently-used app.\n> \n> > Could there be some unnecessary trashing between OS and PG caches ?\n> > How could this be detected ?\n> \n> The machine generally has a minimum of a hundred megs free, unused\n> memory, so I'm not terribly worried about memory thrashing. I've\n> increased the various tuneable parameters (buffer blocks, sort mem, etc.)\n> to the point where performance increases stopped, then I doubled them all\n> for good measure. I've already decided that the next machine will have at\n> least 4 gigs of RAM, just because RAM's cheap, and having too much is a\n> Good Thing.\n\nActually, free memory doesn't mean a whole lot. How much memory is being \nused as cache by the kernel? I've found that as long as the kernel is \ncaching more data than postgresql, performance is better than when \npostgresql starts using more memory than the OS. for example, on my boxes \nat work, we have 1.5 gigs ram, and 256 megs are allocated to pgsql as \nshared buffer. The Linux kernel on those boxes has 100 megs free mem and \n690 megs cached. The first time a heavy query runs there's a lag as the \ndataset is read into memory, but then subsequent queries fly.\n\nMy experience has been that under Liunx (2.4.9 kernel RH7.2) the file \nsystem caching is better performance wise for very large amounts of data \n(500 Megs or more) than the postgresql shared buffers are. I.e. it would \nseem that when Postgresql has a large amount of shared memory to keep \ntrack of, it's quicker to just issue a request to the OS if the data is in \nthe file cache than it is to look it up in postgresql's own shared memory \nbuffers. The knee for me is somewhere between 32 megs and 512 megs memory \nto postgresql and twice that on average or a little more to the kernel \nfile caches.\n\n> Yes, it's BAD if your business grows faster than Moores law ;-p\n> \n> .. unfortunately, that's been the case. Each year we've done slightly\n> more than double the traffic of the previous year - and at the same time,\n> as we unify all of our various data sources, the new applications that we\n> develop tend to make greater and greater demands on the database server.\n> There is always the option of the \"big iron\", but your\n> cost-per-transaction shoots through the roof. Paying a 10x premium can\n> really hurt. : )\n\nCan you distribute your dataset across multiple machines? or is it the \nkinda thing that all needs to be in one big machine?\n\nWell, good luck with all this.\n\n", "msg_date": "Fri, 3 Jan 2003 11:06:38 -0700 (MST)", "msg_from": "\"scott.marlowe\" <scott.marlowe@ihs.com>", "msg_from_op": false, "msg_subject": "Re: Question on hardware & server capacity" }, { "msg_contents": "> Actually, free memory doesn't mean a whole lot. How much memory is\nbeing\n> used as cache by the kernel?\n\n Generally, a gig or so.\n\n> Can you distribute your dataset across multiple machines? or is it the\n> kinda thing that all needs to be in one big machine?\n\n We're splitting the front-end across a number of machines, but all of\nthe various datasets are sufficiently intertwined that they all have to be\nin the same database. 
I'm going to fiddle around with some of the\navailable replication options and see if they're robust enough to put them\ninto production.\n\nsteve\n\n", "msg_date": "Fri, 3 Jan 2003 13:49:33 -0700", "msg_from": "\"Steve Wolfe\" <nw@codon.com>", "msg_from_op": true, "msg_subject": "Re: Question on hardware & server capacity" }, { "msg_contents": "\nSteve,\n\n> We're splitting the front-end across a number of machines, but all of\n> the various datasets are sufficiently intertwined that they all have to be\n> in the same database. I'm going to fiddle around with some of the\n> available replication options and see if they're robust enough to put them\n> into production.\n\n2 other suggestions:\n\n1. Both PostgreSQL Inc. and Command Prompt Inc. have some sort of pay-for HA \nsolution for Postgres. Paying them may end up being cheaper than \nimprovising this yourself.\n\n2. Zapatec Inc. has acheived impressive performance gains by putting the \ndatabase on a high-speed, HA gigabit NAS server and having a few \"client \nservers\" handle incoming queries. You may want to experiment along these \nlines.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Mon, 6 Jan 2003 11:14:50 -0800", "msg_from": "Josh Berkus <josh@agliodbs.com>", "msg_from_op": false, "msg_subject": "Re: Question on hardware & server capacity" } ]
[ { "msg_contents": "I have a query generated by an application (not mine, but there's \nnothing I can find that looks bad about the query itself) that takes an \nexcessive amount of time to return even though there are almost no rows \nin the schema yet. 3 secs may not seem to be much, but the query is run \nby a web-application for a page you have to go through quite \nfrequently, and it appears the query should be able to execute below 1 \nsec easily. I'm running Postgres 7.3.1 on Mac OSX.\n\nAfter having turned on several logging options, here is a pertinent \nexcerpt from the log that also shows the query. It seems the query \nplanner takes the whole time, not the actual execution. Does anyone \nhave an idea what's going on here, and what I could do to alleviate the \nproblem? (Just to mention, I've run the same with GEQO off and if \nanything it makes the timing worse.)\n\n2003-01-02 11:22:59 LOG: query: SELECT TW.WORKITEMKEY, \nTW.PACKAGESYNOPSYS, TW.PACKAGEDESCRIPTION, TW.BUILD,\nTW.LASTEDIT, TOW.LASTNAME AS LOWNER, TOW.FIRSTNAME AS FOWNER,\nTOR.LASTNAME AS LORIGINATOR, TOR.FIRSTNAME AS FORIGINATOR,\nTRE.LASTNAME AS LRESPONSIBLE, TRE.FIRSTNAME AS FRESPONSIBLE,\nTPRJC.LABEL AS PROJCATLABEL, TPRJ.LABEL AS PROJLABEL, TCL.LABEL AS \nREQCLASS,\nTW.CATEGORYKEY AS REQCATEGORY, TW.PRIORITYKEY AS REQPRIORITY,\nTW.SEVERITYKEY AS REQSEVERITY, TST.LABEL AS STATELABEL, TW.STATE,\nTST.STATEFLAG, TREL.LABEL AS RELEASELABEL, TW.ENDDATE\nFROM TWORKITEM TW, TPERSON TOW, TPERSON TOR, TPERSON TRE, TPROJECT TPRJ,\nTPROJCAT TPRJC, TCATEGORY TCAT, TCLASS TCL, TPRIORITY TPRIO, TSEVERITY \nTSEV,\nTSTATE TST, TRELEASE TREL\nWHERE (TW.OWNER = TOW.PKEY) AND (TW.ORIGINATOR = TOR.PKEY)\nAND (TW.RESPONSIBLE = TRE.PKEY) AND (TW.PROJCATKEY = TPRJC.PKEY)\nAND (TPRJ.PKEY = TPRJC.PROJKEY) AND (TW.CLASSKEY = TCL.PKEY)\nAND (TW.CATEGORYKEY = TCAT.PKEY) AND (TW.PRIORITYKEY = TPRIO.PKEY)\nAND (TW.SEVERITYKEY = TSEV.PKEY) AND (TST.PKEY = TW.STATE)\nAND (TREL.PKEY = TW.RELSCHEDULEDKEY)\n\n2003-01-02 11:23:02 LOG: PLANNER STATISTICS\n! system usage stats:\n! 2.730501 elapsed 1.400000 user 0.000000 system sec\n! [3.580000 user 0.000000 sys total]\n! 0/0 [0/0] filesystem blocks in/out\n! 0/0 [0/0] page faults/reclaims, 0 [0] swaps\n! 0 [0] signals rcvd, 0/0 [0/14] messages rcvd/sent\n! 0/0 [24/0] voluntary/involuntary context switches\n! buffer usage stats:\n! Shared blocks: 0 read, 0 written, buffer hit \nrate = 0.00%\n! Local blocks: 0 read, 0 written, buffer hit \nrate = 0.00%\n! Direct blocks: 0 read, 0 written\n2003-01-02 11:23:02 LOG: EXECUTOR STATISTICS\n! system usage stats:\n! 0.005024 elapsed 0.000000 user 0.000000 system sec\n! [3.580000 user 0.000000 sys total]\n! 0/0 [0/0] filesystem blocks in/out\n! 0/0 [0/0] page faults/reclaims, 0 [0] swaps\n! 0 [0] signals rcvd, 0/0 [0/14] messages rcvd/sent\n! 0/0 [24/0] voluntary/involuntary context switches\n! buffer usage stats:\n! Shared blocks: 0 read, 0 written, buffer hit \nrate = 100.00%\n! Local blocks: 0 read, 0 written, buffer hit \nrate = 0.00%\n! Direct blocks: 0 read, 0 written\n2003-01-02 11:23:02 LOG: duration: 2.740243 sec\n2003-01-02 11:23:02 LOG: QUERY STATISTICS\n! system usage stats:\n! 0.006432 elapsed 0.000000 user 0.000000 system sec\n! [3.580000 user 0.000000 sys total]\n! 0/0 [0/0] filesystem blocks in/out\n! 0/0 [0/0] page faults/reclaims, 0 [0] swaps\n! 0 [0] signals rcvd, 0/0 [0/14] messages rcvd/sent\n! 0/0 [24/0] voluntary/involuntary context switches\n! buffer usage stats:\n! Shared blocks: 0 read, 0 written, buffer hit \nrate = 100.00%\n! 
Local blocks: 0 read, 0 written, buffer hit \nrate = 0.00%\n! Direct blocks: 0 read, 0 written\n\n-- \n-------------------------------------------------------------\nHilmar Lapp email: lapp at gnf.org\nGNF, San Diego, Ca. 92121 phone: +1-858-812-1757\n-------------------------------------------------------------\n\n", "msg_date": "Thu, 2 Jan 2003 11:42:02 -0800", "msg_from": "Hilmar Lapp <hlapp@gmx.net>", "msg_from_op": true, "msg_subject": "join over 12 tables takes 3 secs to plan" }, { "msg_contents": "Hilmar Lapp <hlapp@gmx.net> writes:\n> I have a query generated by an application (not mine, but there's \n> nothing I can find that looks bad about the query itself) that takes an \n> excessive amount of time to return even though there are almost no rows \n> in the schema yet.\n\nRead\nhttp://www.ca.postgresql.org/users-lounge/docs/7.3/postgres/explicit-joins.html\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 02 Jan 2003 15:24:20 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: join over 12 tables takes 3 secs to plan " }, { "msg_contents": "Hilmar Lapp said:\n> I have a query generated by an application (not mine, but there's\n> nothing I can find that looks bad about the query itself) that takes an\n> excessive amount of time to return even though there are almost no rows\n> in the schema yet.\n\nYes -- an exhaustive search to determine the correct join order for a\nmultiple relation query is similar to solving the traveling salesman\nproblem (only more difficult, due to the availability of different join\nalgorithms, etc.). GEQO should be faster than the default optimizer for\nlarge queries involving large numbers of joins, but it's still going to\ntake a fair bit of time.\n\nIn other words, it's not a surprise that a 12-relation join takes a little\nwhile to plan.\n\n> I'm running Postgres 7.3.1 on Mac OSX.\n\nTom recently checked in some optimizations for GEQO in CVS HEAD, so you\ncould try using that (or at least testing it, so you have an idea of what\n7.4 will perform like).\n\nYou could also try using prepared queries.\n\nFinally, there are a bunch of GEQO tuning parameters that you might want\nto play with. They should allow you to reduce the planning time a bit, in\nexchange for possibly generating an inferior plan.\n\nCheers,\n\nNeil\n\n\n", "msg_date": "Thu, 2 Jan 2003 15:41:24 -0500 (EST)", "msg_from": "\"Neil Conway\" <neilc@samurai.com>", "msg_from_op": false, "msg_subject": "Re: join over 12 tables takes 3 secs to plan" }, { "msg_contents": "Thanks for the pointer Tom. The application that's generating those \nqueries is open source, so I could even go in and hack the query \ngenerating code accordingly, but I doubt I can spare that time. Given \nthe information in the document you pointed me at and Neil's email I \nassume there is no other immediate remedy.\n\nAs an added note, appreciating that query optimization is a difficult \nproblem, and I do think PostgreSQL is a great product. Having said \nthat, I've written 16-table joins for Oracle and always found them to \nplan within a second or two, so that's why I thought there's nothing \nspecial about the query I posted ... 
I'm not saying this to be bashful \nabout PostgreSQL, but rather to suggest that apparently there are ways \nto do it pretty fast.\n\nI'm only starting to use PostgreSQL and making experiences, so I'm \nasking for forgiveness what may occasionally seem to be ignorant ...\n\n\t-hilmar\n\nOn Thursday, January 2, 2003, at 12:24 PM, Tom Lane wrote:\n\n> Hilmar Lapp <hlapp@gmx.net> writes:\n>> I have a query generated by an application (not mine, but there's\n>> nothing I can find that looks bad about the query itself) that takes \n>> an\n>> excessive amount of time to return even though there are almost no \n>> rows\n>> in the schema yet.\n>\n> Read\n> http://www.ca.postgresql.org/users-lounge/docs/7.3/postgres/explicit- \n> joins.html\n>\n> \t\t\tregards, tom lane\n>\n>\n-- \n-------------------------------------------------------------\nHilmar Lapp email: lapp at gnf.org\nGNF, San Diego, Ca. 92121 phone: +1-858-812-1757\n-------------------------------------------------------------\n\n", "msg_date": "Thu, 2 Jan 2003 13:03:54 -0800", "msg_from": "Hilmar Lapp <hlapp@gmx.net>", "msg_from_op": true, "msg_subject": "Re: join over 12 tables takes 3 secs to plan " }, { "msg_contents": "\nOn Thursday, January 2, 2003, at 12:41 PM, Neil Conway wrote:\n\n>\n> Finally, there are a bunch of GEQO tuning parameters that you might \n> want\n> to play with. They should allow you to reduce the planning time a bit, \n> in\n> exchange for possibly generating an inferior plan.\n>\n>\n\nThanks for the tip. I have to admit that I have zero experience with \ntuning GAs. If anyone could provide a starter which parameters are best \nto start with? Or is it in the docs?\n\n\t-hilmar\n\n-- \n-------------------------------------------------------------\nHilmar Lapp email: lapp at gnf.org\nGNF, San Diego, Ca. 92121 phone: +1-858-812-1757\n-------------------------------------------------------------\n\n", "msg_date": "Thu, 2 Jan 2003 13:08:24 -0800", "msg_from": "Hilmar Lapp <hlapp@gmx.net>", "msg_from_op": true, "msg_subject": "Re: join over 12 tables takes 3 secs to plan" }, { "msg_contents": "Hilmar Lapp said:\n> Thanks for the tip. I have to admit that I have zero experience with\n> tuning GAs. If anyone could provide a starter which parameters are best\n> to start with? Or is it in the docs?\n\nhttp://www.ca.postgresql.org/users-lounge/docs/7.3/postgres/runtime-config.html\nlists the available options.\n\nI'd think that GEQO_EFFORT, GEQO_GENERATIONS, and GEQO_POOL_SIZE would be\nthe parameters that would effect performance the most.\n\nCheers,\n\nNeil\n\n\n", "msg_date": "Thu, 2 Jan 2003 16:11:34 -0500 (EST)", "msg_from": "\"Neil Conway\" <neilc@samurai.com>", "msg_from_op": false, "msg_subject": "Re: join over 12 tables takes 3 secs to plan" }, { "msg_contents": "Hilmar Lapp said:\n> As an added note, appreciating that query optimization is a difficult\n> problem, and I do think PostgreSQL is a great product. Having said\n> that, I've written 16-table joins for Oracle and always found them to\n> plan within a second or two, so that's why I thought there's nothing\n> special about the query I posted ... 
I'm not saying this to be bashful\n> about PostgreSQL, but rather to suggest that apparently there are ways\n> to do it pretty fast.\n\nI'm sure there is room for improvement -- either by adding additional\nheuristics to the default optimizer, by improving GEQO, or by implementing\nanother method for non-exhaustive search for large join queries (there are\nseveral ways to handle large join queries, only one of which uses a\ngenetic algorithm: see \"Query Optimization\" (Ioannidis, 1996) for a good\nintroductory survey).\n\nIf you'd like to take a shot at improving it, let me know if I can be of\nany assistance :-)\n\nCheers,\n\nNeil\n\n\n", "msg_date": "Thu, 2 Jan 2003 16:21:28 -0500 (EST)", "msg_from": "\"Neil Conway\" <neilc@samurai.com>", "msg_from_op": false, "msg_subject": "Re: join over 12 tables takes 3 secs to plan" }, { "msg_contents": "\nOn Thursday, January 2, 2003, at 01:21 PM, Neil Conway wrote:\n\n> If you'd like to take a shot at improving it, let me know if I can be \n> of\n> any assistance :-)\n>\n>\n\nWould be a very cool problem to work on once I enroll in a CS program \n:-)\n\n\t-hilmar\n\n-- \n-------------------------------------------------------------\nHilmar Lapp email: lapp at gnf.org\nGNF, San Diego, Ca. 92121 phone: +1-858-812-1757\n-------------------------------------------------------------\n\n", "msg_date": "Thu, 2 Jan 2003 13:29:39 -0800", "msg_from": "Hilmar Lapp <hlapp@gmx.net>", "msg_from_op": true, "msg_subject": "Re: join over 12 tables takes 3 secs to plan" }, { "msg_contents": "Hilmar Lapp wrote:\n> As an added note, appreciating that query optimization is a difficult \n> problem, and I do think PostgreSQL is a great product. Having said \n> that, I've written 16-table joins for Oracle and always found them to \n> plan within a second or two, so that's why I thought there's nothing \n> special about the query I posted ... I'm not saying this to be bashful \n> about PostgreSQL, but rather to suggest that apparently there are ways \n> to do it pretty fast.\n\nI could be wrong, but I believe Oracle uses its rule based optimizer by \ndefault, not its cost based optimizer. A rule based optimizer will be very \nquick all the time, but might not pick the best plan all the time, because it \ndoesn't consider the statistics of the data. Any idea which one you were using \nin your Oracle experience?\n\nJoe\n\n", "msg_date": "Thu, 02 Jan 2003 13:40:23 -0800", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: join over 12 tables takes 3 secs to plan" }, { "msg_contents": "\nOn Thursday, January 2, 2003, at 01:40 PM, Joe Conway wrote:\n\n> I could be wrong, but I believe Oracle uses its rule based optimizer \n> by default, not its cost based optimizer.\n\nThey changed it from 9i on. The cost-based is now the default. The \nrecent 16-table join example I was referring to was on the cost-based \noptimizer.\n\nThey actually did an amazing good job on the CBO, at least in my \nexperience. I caught it screwing up badly only once, only to realize \nthat I had forgotten to compute the statistics ... It also allows for \ndifferent plans depending on whether you want some rows fast and the \ntotal not necessarily as fast, or all rows as fast as possible. This \nalso caught me off-guard initially when I wanted to peek into the first \nrows returned and had to wait almost as long as the entire query to \nreturn. 
(optimizing for all rows is the default)\n\n> A rule based optimizer will be very quick all the time, but might not \n> pick the best plan all the time, because it doesn't consider the \n> statistics of the data.\n\nTrue. In a situation with not that many rows though even a sub-optimal \nplan that takes 10x longer to execute than the possibly best (e.g., 1s \nvs 0.1s), but plans 10x faster (e.g. 0.3s vs 3s), might still return \nsignificantly sooner. Especially if some of the tables have been cached \nin memory already ...\n\n\t-hilmar\n-- \n-------------------------------------------------------------\nHilmar Lapp email: lapp at gnf.org\nGNF, San Diego, Ca. 92121 phone: +1-858-812-1757\n-------------------------------------------------------------\n\n", "msg_date": "Thu, 2 Jan 2003 14:49:01 -0800", "msg_from": "Hilmar Lapp <hlapp@gmx.net>", "msg_from_op": true, "msg_subject": "Re: join over 12 tables takes 3 secs to plan" }, { "msg_contents": "On Thu, 2003-01-02 at 15:40, Joe Conway wrote:\n> Hilmar Lapp wrote:\n> > As an added note, appreciating that query optimization is a difficult \n> > problem, and I do think PostgreSQL is a great product. Having said \n> > that, I've written 16-table joins for Oracle and always found them to \n> > plan within a second or two, so that's why I thought there's nothing \n> > special about the query I posted ... I'm not saying this to be bashful \n> > about PostgreSQL, but rather to suggest that apparently there are ways \n> > to do it pretty fast.\n> \n> I could be wrong, but I believe Oracle uses its rule based optimizer by \n> default, not its cost based optimizer. A rule based optimizer will be very \n> quick all the time, but might not pick the best plan all the time, because it \n> doesn't consider the statistics of the data. Any idea which one you were using \n> in your Oracle experience?\n\nRemember also that the commercial RDMBSs have had many engineers working\nfor many years on these problems, whereas PostgreSQL hasn't...\n\nCould it be that PG isn't the proper tool for the job? Of course,\nat USD20K/cp, Oracle may be slightly out of budget.\n\n-- \n+------------------------------------------------------------+\n| Ron Johnson, Jr. mailto:ron.l.johnson@cox.net |\n| Jefferson, LA USA http://members.cox.net/ron.l.johnson |\n| |\n| \"Basically, I got on the plane with a bomb. Basically, I |\n| tried to ignite it. Basically, yeah, I intended to damage |\n| the plane.\" |\n| RICHARD REID, who tried to blow up American Airlines |\n| Flight 63 |\n+------------------------------------------------------------+\n\n", "msg_date": "02 Jan 2003 18:01:10 -0600", "msg_from": "Ron Johnson <ron.l.johnson@cox.net>", "msg_from_op": false, "msg_subject": "Re: join over 12 tables takes 3 secs to plan" }, { "msg_contents": "\nOn Thursday, January 2, 2003, at 04:01 PM, Ron Johnson wrote:\n\n>\n> Could it be that PG isn't the proper tool for the job? Of course,\n> at USD20K/cp, Oracle may be slightly out of budget.\n>\n>\n\nWe are in fact an Oracle shop, but the application I tried to get \nrunning (http://trackplus.sourceforge.net/) I wanted to run on an OSS \nRDBMS so that I could easily move it onto my laptop etc (BTW apparently \nit was primarily developed on InterBase/Firebird). Anyway, I was able \nto cut the planning time for those queries in half by setting \ngeqo_pool_size to 512. However, now it gets stuck for an excessive \namount of time after the issue update page and I have no idea what's \ngoing on, and I'm not in the mood to track it down. 
So finally I'm \ngiving up and I'm rolling it out on MySQL on which it is working fine, \neven though I don't like MySQL to say the least.\n\n\t-hilmar\n\n-- \n-------------------------------------------------------------\nHilmar Lapp email: lapp at gnf.org\nGNF, San Diego, Ca. 92121 phone: +1-858-812-1757\n-------------------------------------------------------------\n\n", "msg_date": "Thu, 2 Jan 2003 16:24:51 -0800", "msg_from": "Hilmar Lapp <hlapp@gmx.net>", "msg_from_op": true, "msg_subject": "Re: join over 12 tables takes 3 secs to plan" }, { "msg_contents": "\n\nHilmar Lapp wrote:\n\n> We are in fact an Oracle shop, but the application I tried to get \n> running (http://trackplus.sourceforge.net/) I wanted to run on an OSS \n> RDBMS so that I could easily move it onto my laptop etc (BTW \n> apparently it was primarily developed on InterBase/Firebird). Anyway, \n> I was able to cut the planning time for those queries in half by \n> setting geqo_pool_size to 512. However, now it gets stuck for an \n> excessive amount of time after the issue update page and I have no \n> idea what's going on, and I'm not in the mood to track it down. So \n> finally I'm giving up and I'm rolling it out on MySQL on which it is \n> working fine, even though I don't like MySQL to say the least.\n>\n> -hilmar\n>\nUhoh, did I just hear a gauntlet thrown down ... works well on MySQL but \nnot on PostgreSQL. If I can find the time, perhaps I can take a look at \nthe specific query(ies) and see what is missed in PostgreSQL that MySQL \nhas gotten right.\n\nIf only there were 48 hours in a day :-).\n\nCharlie\n\n-- \n\n\nCharles H. Woloszynski\n\nClearMetrix, Inc.\n115 Research Drive\nBethlehem, PA 18015\n\ntel: 610-419-2210 x400\nfax: 240-371-3256\nweb: www.clearmetrix.com\n\n\n\n\n\n", "msg_date": "Thu, 02 Jan 2003 19:36:12 -0500", "msg_from": "\"Charles H. Woloszynski\" <chw@clearmetrix.com>", "msg_from_op": false, "msg_subject": "Re: join over 12 tables takes 3 secs to plan" }, { "msg_contents": "On Thu, 2 Jan 2003, Hilmar Lapp wrote:\n\n> \n> On Thursday, January 2, 2003, at 04:01 PM, Ron Johnson wrote:\n> \n> >\n> > Could it be that PG isn't the proper tool for the job? Of course,\n> > at USD20K/cp, Oracle may be slightly out of budget.\n> >\n> >\n> \n> We are in fact an Oracle shop, but the application I tried to get \n> running (http://trackplus.sourceforge.net/) I wanted to run on an OSS \n> RDBMS so that I could easily move it onto my laptop etc (BTW apparently \n> it was primarily developed on InterBase/Firebird). Anyway, I was able \n> to cut the planning time for those queries in half by setting \n> geqo_pool_size to 512. However, now it gets stuck for an excessive \n> amount of time after the issue update page and I have no idea what's \n> going on, and I'm not in the mood to track it down. So finally I'm \n> giving up and I'm rolling it out on MySQL on which it is working fine, \n> even though I don't like MySQL to say the least.\n\nHave you tried it on firebird for linux? It's an actively developed rdbms \nthat's open source too. If this was developed for it, it might be a \nbetter fit to use that for now, and then learn postgresql under the less \nrigorous schedule of simply porting, not having to get a product out the \ndoor.\n\nIs an explicit join the answer here? i.e. will the number of rows we get \nfrom each table in a single query likely to never change? 
If so then you \ncould just make an explicit join and be done with it.\n\n", "msg_date": "Fri, 3 Jan 2003 10:01:29 -0700 (MST)", "msg_from": "\"scott.marlowe\" <scott.marlowe@ihs.com>", "msg_from_op": false, "msg_subject": "Re: join over 12 tables takes 3 secs to plan" }, { "msg_contents": "On Thu, 2 Jan 2003, Hilmar Lapp wrote:\n\n> I have a query generated by an application (not mine, but there's\n> nothing I can find that looks bad about the query itself) that takes an\n> excessive amount of time to return even though there are almost no rows\n> in the schema yet. 3 secs may not seem to be much, but the query is run\n> by a web-application for a page you have to go through quite\n> frequently, and it appears the query should be able to execute below 1\n> sec easily. I'm running Postgres 7.3.1 on Mac OSX.\n>\n\nHmm.. This won't fix the fact the planner takes three seconds, but since\nit is a web application have you tried using PREPARE/EXECUTE so it only\nneeds to be planned once? (Unless I am mistaken about what prepare/execute\nactually do) that way only the first visitor gets the hit..\n\n------------------------------------------------------------------------------\nJeff Trout <jeff@jefftrout.com> http://www.jefftrout.com/\n Ronald McDonald, with the help of cheese soup,\n controls America from a secret volkswagon hidden in the past\n-------------------------------------------------------------------------------\n\n\n", "msg_date": "Fri, 3 Jan 2003 12:12:44 -0500 (EST)", "msg_from": "Jeff <threshar@torgo.978.org>", "msg_from_op": false, "msg_subject": "Re: join over 12 tables takes 3 secs to plan" }, { "msg_contents": "\nOn Friday, January 3, 2003, at 09:12 AM, Jeff wrote:\n\n> Hmm.. This won't fix the fact the planner takes three seconds, but \n> since\n> it is a web application have you tried using PREPARE/EXECUTE so it only\n> needs to be planned once?\n\nInteresting point. I'd have to look into the source code whether the \nguy who wrote it actually uses JDBC PreparedStatements. I understand \nthat PostgreSQL from 7.3 onwards supports prepared statements (cool!). \nWould the JDBC driver accompanying the dist. exploit that feature for \nits PreparedStatement implementation?\n\n\t-hilmar\n-- \n-------------------------------------------------------------\nHilmar Lapp email: lapp at gnf.org\nGNF, San Diego, Ca. 92121 phone: +1-858-812-1757\n-------------------------------------------------------------\n\n", "msg_date": "Fri, 3 Jan 2003 11:38:50 -0800", "msg_from": "Hilmar Lapp <hlapp@gmx.net>", "msg_from_op": true, "msg_subject": "Re: join over 12 tables takes 3 secs to plan" }, { "msg_contents": "\nOn Friday, January 3, 2003, at 09:01 AM, scott.marlowe wrote:\n\n>\n> Have you tried it on firebird for linux? It's an actively developed \n> rdbms\n> that's open source too. If this was developed for it, it might be a\n> better fit to use that for now,\n\nProbably it would. But honestly I'm not that keen to install the 3rd \nOSS database (in addition to Oracle, MySQL, PostgreSQL), and my \nsysadmins probably wouldn't be cheerfully jumping either ...\n\n\n> and then learn postgresql under the less\n> rigorous schedule of simply porting, not having to get a product out \n> the\n> door.\n\nYes, so odd MySQL fit that bill for now ...\n\n>\n> Is an explicit join the answer here? i.e. will the number of rows we \n> get\n> from each table in a single query likely to never change? 
If so then \n> you\n> could just make an explicit join and be done with it.\n>\n\nProbably, even though the number of rows will change over time, but not \nby magnitudes. It's not an application of ours though, and since we're \na bioinformatics shop, I'm not that eager to spend time hacking a \nproject management system's query generation code.\n\nThanks for all the thoughts and comments from you and others though, I \nappreciate that.\n\n\t-hilmar\n-- \n-------------------------------------------------------------\nHilmar Lapp email: lapp at gnf.org\nGNF, San Diego, Ca. 92121 phone: +1-858-812-1757\n-------------------------------------------------------------\n\n", "msg_date": "Fri, 3 Jan 2003 14:09:02 -0800", "msg_from": "Hilmar Lapp <hlapp@gmx.net>", "msg_from_op": true, "msg_subject": "Re: join over 12 tables takes 3 secs to plan" }, { "msg_contents": "I have been asking (and learning) about this same thing on the \nPGSQL-JDBC mailing list. Apparently, there is a new driver for 7.3 that \ncan store the plan on the server (aka, preparing it on the server) and \nre-use it. However, you need to set the PreparedStatement to do this \nfor each statement. So, yes, you can retain the plan but it looks like \nyou need to do some work to make it stick. [Also, you need to retain \nthe PreparedStatement, it is not cached based based on the text of the\nstatement, but associated with the PreparedStatement itself].\n\nI think the functionality is starting to become real, but it looks like \nit is starting with some limitations that might restricts its use from \nbe maximally realized until 7.4 (or beyond).\n\nCharlie\n\n\n\n\nHilmar Lapp wrote:\n\n>\n> On Friday, January 3, 2003, at 09:12 AM, Jeff wrote:\n>\n>> Hmm.. This won't fix the fact the planner takes three seconds, but since\n>> it is a web application have you tried using PREPARE/EXECUTE so it only\n>> needs to be planned once?\n>\n>\n> Interesting point. I'd have to look into the source code whether the \n> guy who wrote it actually uses JDBC PreparedStatements. I understand \n> that PostgreSQL from 7.3 onwards supports prepared statements (cool!). \n> Would the JDBC driver accompanying the dist. exploit that feature for \n> its PreparedStatement implementation?\n>\n> -hilmar\n\n\n-- \n\n\nCharles H. Woloszynski\n\nClearMetrix, Inc.\n115 Research Drive\nBethlehem, PA 18015\n\ntel: 610-419-2210 x400\nfax: 240-371-3256\nweb: www.clearmetrix.com\n\n\n\n\n\n", "msg_date": "Fri, 03 Jan 2003 17:16:32 -0500", "msg_from": "\"Charles H. Woloszynski\" <chw@clearmetrix.com>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] join over 12 tables takes 3 secs to plan" }, { "msg_contents": "\nOn Friday, January 3, 2003, at 02:16 PM, Charles H. Woloszynski wrote:\n\n> Also, you need to retain the PreparedStatement, it is not cached based \n> based on the text of the\n> statement, but associated with the PreparedStatement itself\n\nI think that's normal. I don't recall the JDBC spec saying that you \nhave a chance the server will remember that you created a \nPreparedStatement for the same query text before. You have to cache the \nPreparedStatement object in your app, not the query string.\n\nBTW that's the same for perl/DBI. At least for Oracle.\n\n\t-hilmar\n-- \n-------------------------------------------------------------\nHilmar Lapp email: lapp at gnf.org\nGNF, San Diego, Ca. 
92121 phone: +1-858-812-1757\n-------------------------------------------------------------\n\n", "msg_date": "Fri, 3 Jan 2003 14:22:49 -0800", "msg_from": "Hilmar Lapp <hlapp@gmx.net>", "msg_from_op": true, "msg_subject": "Re: join over 12 tables takes 3 secs to plan" }, { "msg_contents": "On Fri, 2003-01-03 at 17:16, Charles H. Woloszynski wrote:\n> I think the functionality is starting to become real, but it looks like \n> it is starting with some limitations that might restricts its use from \n> be maximally realized until 7.4 (or beyond).\n\nSpecifically, which limitations in this feature would you like to see\ncorrected?\n\nCheers,\n\nNeil\n-- \nNeil Conway <neilc@samurai.com> || PGP Key ID: DB3C29FC\n\n\n\n", "msg_date": "11 Jan 2003 20:03:25 -0500", "msg_from": "Neil Conway <neilc@samurai.com>", "msg_from_op": false, "msg_subject": "Re: join over 12 tables takes 3 secs to plan" }, { "msg_contents": "Neil:\n\nI think that general use of this feature should be enabled using the \nURL, not with an API call. We use a JDBC connection pool and it will \nhelp tremendously to have the pool set to user server-side preparing \nwithout having to downcast the connection to a PG connection (which I \nthink is an issue because of the facade in our connection pool code). \n\nThe second item is that of compatibility. If the new code cannot handle \nall statements (eg. something with a semi in it) and disable the \ngeneration of a 'prepare' then we cannot count on the URL functionality. \n As I understand it, the programmer is required currently to \nenable/disable the server-side functionality by hand and only when the \nstatement to be prepared is not composite (statement1; statement2; \nstatement2).\n\nBut in our real-world application space, we use a connection pool with a \nfacade, so getting to the actual connection to enable this is \nproblematic (and forces postgresql-specific coding into our framework \nwhere it is not particularly welcome). If we overcame this issue, we \nwould then need to hand-manage the enable/disable to only be used when \nthe statement is appropriately formulated (e.g., no semicolons in the \nstatement).\n\nIf we could get URL enabling and auto-detection of statements that won't \nwork (and hence disable the enabled function for these functions), I \nthink we have a solution that can be deployed into 'generic' app server \nenvironments with just configuration changes. That is, an operations \nperson could enable this feature and monitor its impact on performance \nto see if/how it helps. That is a BIG win (at least to me) and a HUGE \nmarketing item. I'd love to test MySQL with some joins over JDBC with \nPostgreSQL with some joins using prepared statements and be able to \ndemonstrate the big improvement that this makes.\n\nAs I understand it, the functions I am waiting for are targeted into 7.4 \n(but I'd love to see them early and do some testing of those for the \ncommunity).\n\nCharlie\n\n\nNeil Conway wrote:\n\n>On Fri, 2003-01-03 at 17:16, Charles H. Woloszynski wrote:\n> \n>\n>>I think the functionality is starting to become real, but it looks like \n>>it is starting with some limitations that might restricts its use from \n>>be maximally realized until 7.4 (or beyond).\n>> \n>>\n>\n>Specifically, which limitations in this feature would you like to see\n>corrected?\n>\n>Cheers,\n>\n>Neil\n> \n>\n\n-- \n\n\nCharles H. 
Woloszynski\n\nClearMetrix, Inc.\n115 Research Drive\nBethlehem, PA 18015\n\ntel: 610-419-2210 x400\nfax: 240-371-3256\nweb: www.clearmetrix.com\n\n\n\n\n\n", "msg_date": "Sun, 12 Jan 2003 10:52:11 -0500", "msg_from": "\"Charles H. Woloszynski\" <chw@clearmetrix.com>", "msg_from_op": false, "msg_subject": "Re: join over 12 tables takes 3 secs to plan" }, { "msg_contents": "Neil:\n\nThanks for the feedback. I've attached my original text to this note \nand re-posted it back to pgsql-jdbc to make sure that they are aware of \nthem. I look forward to this new and improved server-side preparation.\n\nCharlie\n\nNeil Conway wrote:\n\n>On Sun, 2003-01-12 at 10:52, Charles H. Woloszynski wrote:\n> \n>\n>>As I understand it, the functions I am waiting for are targeted into 7.4 \n>>(but I'd love to see them early and do some testing of those for the \n>>community).\n>> \n>>\n>\n>Ok -- those are pretty much all features on the JDBC side of things (not\n>the backend implementation of PREPARE/EXECUTE). I'm not sure how much of\n>that is planned for 7.4: if you haven't talked to the JDBC guys about\n>it, they may not be aware of your comments.\n>\n>Cheers,\n>\n>Neil\n> \n>\nCharles H. Woloszynski wrote:\n\n> Neil:\n>\n> I think that general use of this feature should be enabled using the \n> URL, not with an API call. We use a JDBC connection pool and it will \n> help tremendously to have the pool set to user server-side preparing \n> without having to downcast the connection to a PG connection (which I \n> think is an issue because of the facade in our connection pool code). \n> The second item is that of compatibility. If the new code cannot \n> handle all statements (eg. something with a semi in it) and disable \n> the generation of a 'prepare' then we cannot count on the URL \n> functionality. As I understand it, the programmer is required \n> currently to enable/disable the server-side functionality by hand and \n> only when the statement to be prepared is not composite (statement1; \n> statement2; statement2).\n>\n> But in our real-world application space, we use a connection pool with \n> a facade, so getting to the actual connection to enable this is \n> problematic (and forces postgresql-specific coding into our framework \n> where it is not particularly welcome). If we overcame this issue, we \n> would then need to hand-manage the enable/disable to only be used when \n> the statement is appropriately formulated (e.g., no semicolons in the \n> statement).\n>\n> If we could get URL enabling and auto-detection of statements that \n> won't work (and hence disable the enabled function for these \n> functions), I think we have a solution that can be deployed into \n> 'generic' app server environments with just configuration changes. \n> That is, an operations person could enable this feature and monitor \n> its impact on performance to see if/how it helps. That is a BIG win \n> (at least to me) and a HUGE marketing item. I'd love to test MySQL \n> with some joins over JDBC with PostgreSQL with some joins using \n> prepared statements and be able to demonstrate the big improvement \n> that this makes.\n>\n> As I understand it, the functions I am waiting for are targeted into \n> 7.4 (but I'd love to see them early and do some testing of those for \n> the community).\n\n\n\n\n-- \n\n\nCharles H. 
Woloszynski\n\nClearMetrix, Inc.\n115 Research Drive\nBethlehem, PA 18015\n\ntel: 610-419-2210 x400\nfax: 240-371-3256\nweb: www.clearmetrix.com\n\n\n\n\n\n", "msg_date": "Mon, 13 Jan 2003 08:08:02 -0500", "msg_from": "\"Charles H. Woloszynski\" <chw@clearmetrix.com>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] join over 12 tables takes 3 secs to plan" } ]
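Two hedged sketches of the workarounds discussed in this thread, written against a trimmed-down, illustrative subset of the tables in the original twelve-way join. Explicit JOIN syntax fixes the join order, so the planner (GEQO or not) skips the search over join orderings; PREPARE/EXECUTE, new in 7.3, pays the planning cost once per session:

    -- Explicit join order (see the explicit-joins documentation page Tom cited):
    SELECT tw.workitemkey, tow.lastname AS lowner, tst.label AS statelabel
    FROM tworkitem tw
    JOIN tperson tow ON tow.pkey = tw.owner
    JOIN tperson tre ON tre.pkey = tw.responsible
    JOIN tstate  tst ON tst.pkey = tw.state;

    -- Server-side prepared statement (7.3): planned once, executed many times.
    PREPARE workitem_page(integer) AS
      SELECT tw.workitemkey, tow.lastname
      FROM tworkitem tw
      JOIN tperson tow ON tow.pkey = tw.owner
      WHERE tw.state = $1;

    EXECUTE workitem_page(1);

The prepared plan only survives for the connection that created it, so it helps a web application when connections (and, over JDBC, the PreparedStatement objects) are pooled and reused, which is what the JDBC subthread above is about.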
[ { "msg_contents": "Hi!\n I'm developing a small application which I'd like to be as fast as\n possible. The program receives an order by modem and has to answer with the\n products my enterprise will be able to send. The number of products could be\n as much as 300-400, and I don't want to force my clients to set a high\n time-out before the answer is sent.\n\nI also need to use transactions, as I start calculating before the whole\norder has been received, and if an error occurs everything has to be rolled\nback.\n\nUnder these circumstances, which way do you think would be faster?\n\n- Make a sequence for each product (we're talking about 20000 available\nproducts, so that is a lot, but it might give a really fast answer).\n\n- Use standard SQL queries: SELECT the product and, if there are enough\nunits, UPDATE to decrease the number of available ones. (I suppose this one\nis not very fast, as two queries need to be processed for each product.)\n\n- Use a CURSOR or something similar, which I'm not used to but have seen\n in the examples.\n\nShould I have the queries saved in the database to increase performance?\n\nI hope I explained well enough :-) Thanks in advance!\n", "msg_date": "Sat, 4 Jan 2003 14:31:51 +0100", "msg_from": "Albert Cervera Areny <albertca@jazzfree.com>", "msg_from_op": true, "msg_subject": "Fwd: Stock update like application" } ]
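A minimal sketch of the second option collapsed into one statement per order line, using a hypothetical products(product_id, units) table since the real schema is not shown. The conditional UPDATE only decrements stock when enough units remain, so no preceding SELECT is needed, and wrapping the whole order in a single transaction gives the rollback behaviour asked for:

    BEGIN;

    -- One statement per ordered product; refuses to drive stock negative.
    UPDATE products
       SET units = units - 5             -- 5 = quantity ordered on this line
     WHERE product_id = 1234
       AND units >= 5;

    -- The client checks the reported row count: 0 rows updated means there
    -- was not enough stock, and the application can ROLLBACK or flag a shortage.

    COMMIT;   -- or ROLLBACK if any line could not be satisfied

Storing the statement server-side (for example in a plpgsql function) mainly buys plan reuse within a connection; the next thread looks at exactly that trade-off.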
[ { "msg_contents": "Hi\n1. I have a plpgsql function containing only one query, which returns a single\nvalue. The query joins several tables. I have read that a plpgsql function saves\nthe execution plan of its queries for the lifetime of the database connection.\n\n2. Instead of this I can create a view returning one row. How does\npostgres work with views? When is the plan created? What happens\nto views which don't have explicit joins? What happens if I create an index\non the underlying tables after creating the view?\n\nWhich one is better, and when?\n\nRegards,\nTomasz Myrta\n\n", "msg_date": "Mon, 06 Jan 2003 12:03:19 +0100", "msg_from": "Tomasz Myrta <jasiek@klaster.net>", "msg_from_op": true, "msg_subject": "views vs pl/pgsql" } ]
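A short sketch of both alternatives over hypothetical tables a(id, val) and b(id, ts), to make the difference concrete. A view stores no plan: it is expanded by the rewriter and planned as part of every query that uses it, so it automatically benefits from indexes created later. A plpgsql function saves the plan of its embedded query the first time it is executed in a session, so a new index is only picked up by connections started afterwards:

    -- View: planned afresh with each query that references it.
    CREATE VIEW ab_joined AS
      SELECT a.id, a.val, b.ts
        FROM a JOIN b ON b.id = a.id;

    -- plpgsql: the plan of this SELECT is cached for the rest of the connection.
    CREATE FUNCTION ab_val(integer) RETURNS text AS '
    DECLARE
        result text;
    BEGIN
        SELECT INTO result a.val
          FROM a JOIN b ON b.id = a.id
         WHERE a.id = $1;
        RETURN result;
    END;
    ' LANGUAGE 'plpgsql';

For a query run many times on one connection the cached plan can win; when the schema, indexes or data distribution are still changing, the view's always-fresh plan is the safer choice.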
[ { "msg_contents": "> -----Original Message-----\n> From: Tom Lane [mailto:tgl@sss.pgh.pa.us] \n> Sent: Monday, January 06, 2003 7:30 PM\n> To: Dann Corbit\n> Cc: pgsql-hackers@postgresql.org; pgsql-general@postgresql.org\n> Subject: Re: [GENERAL] PostgreSQL and memory usage \n> \n> \n> \"Dann Corbit\" <DCorbit@connx.com> writes:\n> > I have a machine with 4 CPU's and 2 gigabytes of physical \n> ram. I would \n> > like to get PostgreSQL to use as much memory as possible. I can't \n> > seem to get PostgreSQL to use more than 100 megabytes or so.\n> \n> You should not assume that more is necessarily better.\n> \n> In many practical situations, it's better to leave the \n> majority of RAM free for kernel disk caching.\n\nIn any case, I would like to know what knobs and dials are available to\nturn and what each of them means.\nIn at least one instance, the whole database should fit into memory. I\nwould think that would be faster than any sort of kernel disk caching.\n", "msg_date": "Mon, 6 Jan 2003 19:32:52 -0800", "msg_from": "\"Dann Corbit\" <DCorbit@connx.com>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL and memory usage " }, { "msg_contents": "\nHi Dann, I took hackers out of the list as this isn't really a hacking \nissue, but I added in performance as this definitely applies there.\n\nThere are generally two areas of a database server you have to reconfigure \nto use that extra memory. The first is the kernel's shared memory \nsettings.\n\nOn a linux box that has sysconf installed this is quite easy. If it isn't \ninstalled, install it, as it's much easier to manipulate your kernel's \nsettings using sysctl than it is with editing rc.local.\n\nFirst, get root. Then, use 'sysctl -a|grep shm' to get a list of all the \nshared memory settings handled by sysctl. \n\nOn a default redhat install, we'll get something like this:\n\nkernel.shmmni = 4096\nkernel.shmall = 2097152\nkernel.shmmax = 33554432\n\nOn my bigger box, it's been setup to have this:\n\nkernel.shmmni = 4096\nkernel.shmall = 32000000\nkernel.shmmax = 256000000\n\nTo make changes that stick around, edit the /etc/sysctl.conf file to have \nlines that look kinda like those above. To make the changes to the \n/etc/sysctl.conf file take effect, use 'sysctl -p'.\n\nNext, as the postgres user, edit $PGDATA/postgresql.conf and increase the \nnumber of shared buffers. On most postgresql installations this number is \nmultiplied by 8k to get the amount of ram being allocated, since \npostgresql allocates share buffers in blocks the same size as what it uses \non the dataset. To allocate 256 Megs of buffers (that's what I use, seems \nlike a nice large chunk, but doesn't starve my other processes or system \nfile cache) set it to 32768.\n\nBe careful how big you make your sort size. 
I haven't seen a great \nincrease in speed on anything over 8 or 16 megs, while memory usage can \nskyrocket under heavy parallel load with lots of sorts, since sort memory \nis PER SORT maximum.\n\nThen do the old pg_ctl reload and you should be cooking with gas.\n\n", "msg_date": "Tue, 7 Jan 2003 09:55:30 -0700 (MST)", "msg_from": "\"scott.marlowe\" <scott.marlowe@ihs.com>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL and memory usage " }, { "msg_contents": "\"scott.marlowe\" <scott.marlowe@ihs.com> writes:\n> Then do the old pg_ctl reload and you should be cooking with gas.\n\nOne correction: altering the number of shared buffers requires an actual\npostmaster restart.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 07 Jan 2003 12:09:21 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL and memory usage " }, { "msg_contents": "To put this usage of shared buffers in perspective would you mind kindly\nlet us know your total amount of system ram? Without hearing what\npercentage of memory used as shared buffers (assuming is the primary\napplication being using here)\n\nI have always taken the 'more is better' approach with shared buffers but\nwould like to know what in terms of percentages other people are using. I\nhave been using 50% of system ram (2 out of 4 gigs) for shared buffers\n(and corresponding shmmax values) and it has been working great. I\nhaven't tweaked the kernel yet to get more than 2 gigs shmmax so I can't\nspeak for a setup using over 50%. I've been using between 256 and 512\nmegs sort memory which sounds like a little much from what I'm hearing\nhere.\n\nThanks\n\nFred\n\n>\n> Hi Dann, I took hackers out of the list as this isn't really a hacking\n> issue, but I added in performance as this definitely applies there.\n>\n> There are generally two areas of a database server you have to\n> reconfigure to use that extra memory. The first is the kernel's shared\n> memory settings.\n>\n> On a linux box that has sysconf installed this is quite easy. If it\n> isn't installed, install it, as it's much easier to manipulate your\n> kernel's settings using sysctl than it is with editing rc.local.\n>\n> First, get root. Then, use 'sysctl -a|grep shm' to get a list of all\n> the shared memory settings handled by sysctl.\n>\n> On a default redhat install, we'll get something like this:\n>\n> kernel.shmmni = 4096\n> kernel.shmall = 2097152\n> kernel.shmmax = 33554432\n>\n> On my bigger box, it's been setup to have this:\n>\n> kernel.shmmni = 4096\n> kernel.shmall = 32000000\n> kernel.shmmax = 256000000\n>\n> To make changes that stick around, edit the /etc/sysctl.conf file to\n> have lines that look kinda like those above. To make the changes to\n> the /etc/sysctl.conf file take effect, use 'sysctl -p'.\n>\n> Next, as the postgres user, edit $PGDATA/postgresql.conf and increase\n> the number of shared buffers. On most postgresql installations this\n> number is multiplied by 8k to get the amount of ram being allocated,\n> since\n> postgresql allocates share buffers in blocks the same size as what it\n> uses on the dataset. To allocate 256 Megs of buffers (that's what I\n> use, seems like a nice large chunk, but doesn't starve my other\n> processes or system file cache) set it to 32768.\n>\n> Be careful how big you make your sort size. 
I haven't seen a great\n> increase in speed on anything over 8 or 16 megs, while memory usage can\n> skyrocket under heavy parallel load with lots of sorts, since sort\n> memory is PER SORT maximum.\n>\n> Then do the old pg_ctl reload and you should be cooking with gas.\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n\n\n\n\n", "msg_date": "Tue, 7 Jan 2003 11:46:22 -0800 (PST)", "msg_from": "\"Fred Moyer\" <fred@digicamp.com>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] PostgreSQL and memory usage" }, { "msg_contents": "Oh yeah, sorry. My box has 1.5 gig ram, but it is an application server \nthat runs things other than just postgresql. It also runs:\n\nApache\nReal Server\nOpenLDAP\nSquid\nSamba\n\nwith all those services fired up and running, as well as postgresql with \n256 Megs of shared buffer, I have about 900 megs of cache and 100 megs \nfree ram. Since a lot of data is flying off the hard drives at any given \ntime, favoring one service (database) over the others makes little sense \nfor me, and I've found that there was little or no performance gain from \n256 Megs ram over say 128 meg or 64 meg.\n\nWe run about 50 databases averaging about 25megs each or so (backed up, \nit's about 50 to 75 Megs on the machine's hard drives) so there's no \nway for ALL the data to fit into memory.\n\nOn Tue, 7 Jan 2003, Fred Moyer wrote:\n\n> To put this usage of shared buffers in perspective would you mind kindly\n> let us know your total amount of system ram? Without hearing what\n> percentage of memory used as shared buffers (assuming is the primary\n> application being using here)\n> \n> I have always taken the 'more is better' approach with shared buffers but\n> would like to know what in terms of percentages other people are using. I\n> have been using 50% of system ram (2 out of 4 gigs) for shared buffers\n> (and corresponding shmmax values) and it has been working great. I\n> haven't tweaked the kernel yet to get more than 2 gigs shmmax so I can't\n> speak for a setup using over 50%. I've been using between 256 and 512\n> megs sort memory which sounds like a little much from what I'm hearing\n> here.\n\n", "msg_date": "Tue, 7 Jan 2003 16:16:09 -0700 (MST)", "msg_from": "\"scott.marlowe\" <scott.marlowe@ihs.com>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] PostgreSQL and memory usage" } ]
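A condensed, illustrative recap of the configuration steps walked through above; the numbers target 256 MB of shared buffers (32768 pages of 8 kB on a default build) and are examples rather than recommendations:

    # /etc/sysctl.conf -- the kernel ceiling must cover what Postgres will request
    kernel.shmmax = 300000000          # bytes; leaves headroom above the 256 MB of buffers

    # $PGDATA/postgresql.conf
    shared_buffers = 32768             # 32768 * 8 kB = 256 MB
    sort_mem = 8192                    # kB, and it is per sort, so keep it modest

    # Apply the kernel settings, then restart the postmaster -- as Tom notes,
    # changing shared_buffers needs a real restart, not just a reload:
    #   sysctl -p
    #   pg_ctl restart

Whether 256 MB, Fred's 50% of RAM, or something smaller wins depends mostly on how much work the kernel file cache is already doing, as discussed in the earlier hardware-capacity thread.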
[ { "msg_contents": "Achilleus Mantzios wrote:\n<cut>\n\n > it has indexes:\n > Indexes: noonf_date btree (report_date),\n > noonf_logno btree (log_no),\n > noonf_rotation btree (rotation text_ops),\n > noonf_vcode btree (v_code),\n > noonf_voyageno btree (voyage_no)\n >\n<cut>\n\n >\n > \n-------------------------------------------------------------------------------------------------------------------\n > Index Scan using noonf_date on noon (cost=0.00..4.46 rows=1 width=39)\n > (actual time=0.27..52.89 rows=259 loops=1)\n > Index Cond: ((report_date >= '2002-01-07'::date) AND (report_date <=\n > '2003-01-07'::date))\n > Filter: ((v_code = '4500'::character varying) AND (rotation = 'NOON\n > '::character varying))\n > Total runtime: 53.98 msec\n > (4 rows)\n<cut>\n\nMaybe it is not an answer to your question, but why don't you help \nPostgres by yourself?\nFor this kind of queries it's better to drop index on report_date - your \nreport period is one year and answer to this condition is 10% records (I \nsuppose)\nIt would be better to change 2 indexes on v_code and rotation into one \nindex based on both fields.\nWhat kind of queries do you have? How many records returns each \"where\" \ncondition? Use indexes on fields, on which condition result in smallest \namount of rows.\n\nRegards,\nTomasz Myrta\n\n", "msg_date": "Tue, 07 Jan 2003 13:00:12 +0100", "msg_from": "Tomasz Myrta <jasiek@klaster.net>", "msg_from_op": true, "msg_subject": "Re: [SQL] 7.3.1 index use / performance" }, { "msg_contents": "\nOn Tue, 7 Jan 2003, Achilleus Mantzios wrote:\n\n> i am just in the stage of having migrated my test system to 7.3.1\n> and i am experiencing some performance problems.\n>\n> i have a table \"noon\"\n> Table \"public.noon\"\n> Column | Type | Modifiers\n> ------------------------+------------------------+-----------\n> v_code | character varying(4) |\n> log_no | bigint |\n> report_date | date |\n> report_time | time without time zone |\n> voyage_no | integer |\n> charterer | character varying(12) |\n> port | character varying(24) |\n> duration | character varying(4) |\n> rotation | character varying(9) |\n> ......\n>\n> with a total of 278 columns.\n>\n> it has indexes:\n> Indexes: noonf_date btree (report_date),\n> noonf_logno btree (log_no),\n> noonf_rotation btree (rotation text_ops),\n> noonf_vcode btree (v_code),\n> noonf_voyageno btree (voyage_no)\n>\n> On the test 7.3.1 system (a FreeBSD 4.7-RELEASE-p2, Celeron 1.2GHz\n> 400Mb, with 168Mb for pgsql),\n> i get:\n> dynacom=# EXPLAIN ANALYZE select\n> FUELCONSUMPTION,rpm,Steam_Hours,voyage_activity,ldin from noon where\n> v_code='4500' and rotation='NOON ' and report_date between\n> '2002-01-07' and '2003-01-07';\n> QUERY PLAN\n>\n\n> -------------------------------------------------------------------------------------------------------------------\n> Index Scan using noonf_date on noon (cost=0.00..4.46 rows=1 width=39)\n> (actual time=0.27..52.89 rows=259 loops=1)\n\n\n> Index Scan using noonf_vcode on noon (cost=0.00..3122.88 rows=1\n> width=39) (actual time=0.16..13.92 rows=259 loops=1)\n\n\nWhat do the statistics for the three columns actually look like and what\nare the real distributions and counts like?\nGiven an estimated cost of around 4 for the first scan, my guess would be\nthat it's not expecting alot of rows between 2002-01-07 and 2003-01-07\nwhich would make that a reasonable plan.\n\n", "msg_date": "Tue, 7 Jan 2003 07:27:49 -0800 (PST)", "msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>", "msg_from_op": false, 
"msg_subject": "Re: [PERFORM] 7.3.1 index use / performance" }, { "msg_contents": "Hi,\n\ni am just in the stage of having migrated my test system to 7.3.1\nand i am experiencing some performance problems.\n\ni have a table \"noon\"\n Table \"public.noon\"\n Column | Type | Modifiers\n------------------------+------------------------+-----------\n v_code | character varying(4) |\n log_no | bigint |\n report_date | date |\n report_time | time without time zone |\n voyage_no | integer |\n charterer | character varying(12) |\n port | character varying(24) |\n duration | character varying(4) |\n rotation | character varying(9) |\n......\n\nwith a total of 278 columns.\n\nit has indexes:\nIndexes: noonf_date btree (report_date),\n noonf_logno btree (log_no),\n noonf_rotation btree (rotation text_ops),\n noonf_vcode btree (v_code),\n noonf_voyageno btree (voyage_no)\n\nOn the test 7.3.1 system (a FreeBSD 4.7-RELEASE-p2, Celeron 1.2GHz\n400Mb, with 168Mb for pgsql),\ni get:\ndynacom=# EXPLAIN ANALYZE select\nFUELCONSUMPTION,rpm,Steam_Hours,voyage_activity,ldin from noon where\nv_code='4500' and rotation='NOON ' and report_date between\n'2002-01-07' and '2003-01-07';\n QUERY PLAN\n\n-------------------------------------------------------------------------------------------------------------------\n Index Scan using noonf_date on noon (cost=0.00..4.46 rows=1 width=39)\n(actual time=0.27..52.89 rows=259 loops=1)\n Index Cond: ((report_date >= '2002-01-07'::date) AND (report_date <=\n'2003-01-07'::date))\n Filter: ((v_code = '4500'::character varying) AND (rotation = 'NOON\n'::character varying))\n Total runtime: 53.98 msec\n(4 rows)\n\nafter i drop the noonf_date index i actually get better performance\ncause the backend uses now the more appropriate index noonf_vcode :\n\ndynacom=# EXPLAIN ANALYZE select\nFUELCONSUMPTION,rpm,Steam_Hours,voyage_activity,ldin from noon where\nv_code='4500' and rotation='NOON ' and report_date between\n'2002-01-07' and '2003-01-07';\n QUERY PLAN\n\n-----------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using noonf_vcode on noon (cost=0.00..3122.88 rows=1\nwidth=39) (actual time=0.16..13.92 rows=259 loops=1)\n Index Cond: (v_code = '4500'::character varying)\n Filter: ((rotation = 'NOON '::character varying) AND (report_date\n>= '2002-01-07'::date) AND (report_date <= '2003-01-07'::date))\n Total runtime: 14.98 msec\n(4 rows)\n\nOn the pgsql 7.2.3 development system (a RH linux 2.4.7, PIII 1 GHz,\n1Mb, with 168M for pgsql), i always get the right index use:\n\ndynacom=# EXPLAIN ANALYZE select\nFUELCONSUMPTION,rpm,Steam_Hours,voyage_activity,ldin from noon where\nv_code='4500' and rotation='NOON ' and report_date between\n'2002-01-07' and '2003-01-07';\nNOTICE: QUERY PLAN:\n\nIndex Scan using noonf_vcode on noon (cost=0.00..3046.38 rows=39\nwidth=39) (actual time=0.09..8.55 rows=259 loops=1)\nTotal runtime: 8.86 msec\n\nEXPLAIN\n\nIs something i am missing??\nIs this reasonable behaviour??\n\nP.S.\nYes i have vaccumed analyzed both systems before the queries were issued.\n==================================================================\nAchilleus Mantzios\nS/W Engineer\nIT dept\nDynacom Tankers Mngmt\nNikis 4, Glyfada\nAthens 16610\nGreece\ntel: +30-10-8981112\nfax: +30-10-8981877\nemail: achill@matrix.gatewaynet.com\n mantzios@softlab.ece.ntua.gr\n\n\n", "msg_date": "Tue, 7 Jan 2003 13:39:57 -0200 (GMT+2)", "msg_from": "Achilleus Mantzios <achill@matrix.gatewaynet.com>", 
"msg_from_op": false, "msg_subject": "7.3.1 index use / performance" }, { "msg_contents": "On Tue, 7 Jan 2003, Tomasz Myrta wrote:\n\n> Maybe it is not an answer to your question, but why don't you help\n> Postgres by yourself?\n\nThanx,\n\ni dont think that the issue here is to help postgresql by myself.\nI can always stick to 7.2.3, or use indexes that 7.3.1 will\nacknowledge, like noonf_vcode_date on noon (v_code,report_date).\n(unfortunately when i create the above noonf_vcode_date index, it is only\nused until\nthe next vacuum analyze, hackers is this an issue too???),\nbut these options are not interesting from a postgresql perspective :)\n\n> For this kind of queries it's better to drop index on report_date - your\n> report period is one year and answer to this condition is 10% records (I\n> suppose)\n\nI cannot drop the index on the report_date since a lot of other queries\nneed it.\n\n> It would be better to change 2 indexes on v_code and rotation into one\n> index based on both fields.\n> What kind of queries do you have? How many records returns each \"where\"\n> condition? Use indexes on fields, on which condition result in smallest\n> amount of rows.\n>\n> Regards,\n> Tomasz Myrta\n>\n\n==================================================================\nAchilleus Mantzios\nS/W Engineer\nIT dept\nDynacom Tankers Mngmt\nNikis 4, Glyfada\nAthens 16610\nGreece\ntel: +30-10-8981112\nfax: +30-10-8981877\nemail: achill@matrix.gatewaynet.com\n mantzios@softlab.ece.ntua.gr\n\n", "msg_date": "Tue, 7 Jan 2003 14:21:19 -0200 (GMT+2)", "msg_from": "Achilleus Mantzios <achill@matrix.gatewaynet.com>", "msg_from_op": false, "msg_subject": "Re: [SQL] 7.3.1 index use / performance" }, { "msg_contents": "Achilleus Mantzios <achill@matrix.gatewaynet.com> writes:\n> About the stats on these 3 columns i get:\n\nDoes 7.2 generate the same stats? (minus the schemaname of course)\n\nAlso, I would like to see the results of these queries on both versions,\nso that we can see what the planner thinks the index selectivity is:\n\nEXPLAIN ANALYZE select * from noon where\nv_code='4500';\n\nEXPLAIN ANALYZE select * from noon where\nreport_date between '2002-01-07' and '2003-01-07';\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 07 Jan 2003 11:29:48 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [SQL] [PERFORM] 7.3.1 index use / performance " }, { "msg_contents": "Achilleus Mantzios <achill@matrix.gatewaynet.com> writes:\n>> Also, I would like to see the results of these queries on both versions,\n>> so that we can see what the planner thinks the index selectivity is:\n>> \n> [ data supplied ]\n\nThere is something really, really bizarre going on there. 
You have\n\ndynacom=# EXPLAIN ANALYZE select * from noon where report_date between '2002-01-07' and '2003-01-07';\n QUERY PLAN\n\n---------------------------------------------------------------------------------------------------------------------------------\n Index Scan using noonf_date on noon (cost=0.00..15919.50 rows=11139 width=1974) (actual time=2.05..13746.17 rows=7690 loops=1)\n Index Cond: ((report_date >= '2002-01-07'::date) AND (report_date <= '2003-01-07'::date))\n Total runtime: 13775.48 msec\n(3 rows)\n\nand from your earlier message\n\ndynacom=# EXPLAIN ANALYZE select FUELCONSUMPTION,rpm,Steam_Hours,voyage_activity,ldin from noon where\nv_code='4500' and rotation='NOON ' and report_date between '2002-01-07' and '2003-01-07';\n QUERY PLAN\n\n-------------------------------------------------------------------------------------------------------------------\n Index Scan using noonf_date on noon (cost=0.00..4.46 rows=1 width=39) (actual time=0.27..52.89 rows=259 loops=1)\n Index Cond: ((report_date >= '2002-01-07'::date) AND (report_date <= '2003-01-07'::date))\n Filter: ((v_code = '4500'::character varying) AND (rotation = 'NOON'::character varying))\n Total runtime: 53.98 msec\n(4 rows)\n\nThere is no way that adding the filter condition should have reduced the\nestimated runtime for this plan --- reducing the estimated number of\noutput rows, yes, but not the runtime. And in fact I can't duplicate\nthat when I try it here. I did this on 7.3.1:\n\nregression=# create table noon (v_code character varying(4) ,\nregression(# report_date date ,\nregression(# rotation character varying(9));\nCREATE TABLE\nregression=# create index noonf_date on noon(report_date);\nCREATE INDEX\nregression=# EXPLAIN select * from noon where report_date between\nregression-# '2002-01-07' and '2003-01-07';\n QUERY PLAN\n\n---------------------------------------------------------------------------------------------\n Index Scan using noonf_date on noon (cost=0.00..17.08 rows=5 width=25)\n Index Cond: ((report_date >= '2002-01-07'::date) AND (report_date <= '2003-01-07'::date))\n(2 rows)\n\nregression=# explain select * from noon where\nregression-# v_code='4500' and rotation='NOON ' and report_date between\nregression-# '2002-01-07' and '2003-01-07';\n QUERY PLAN\n\n--------------------------------------------------------------------------------\n------------------\n Index Scan using noonf_date on noon (cost=0.00..17.11 rows=1 width=25)\n Index Cond: ((report_date >= '2002-01-07'::date) AND (report_date <= '2003-01-07'::date))\n Filter: ((v_code = '4500'::character varying) AND (rotation = 'NOON '::character varying))\n(3 rows)\n\nNote that the cost went up, not down.\n\nI am wondering about a compiler bug, or some other peculiarity on your\nplatform. 
Can anyone else using FreeBSD try the above experiment and\nsee if they get different results from mine on 7.3.* (or CVS tip)?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 07 Jan 2003 12:44:16 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [SQL] [PERFORM] 7.3.1 index use / performance " }, { "msg_contents": "Achilleus Mantzios <achill@matrix.gatewaynet.com> writes:\n> Hi i had written a C function to easily convert an int4 to its\n> equivalent 1x1 int4[] array.\n\nDoes your function know about filling in the elemtype field that was\nrecently added to struct ArrayType?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 07 Jan 2003 13:22:15 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: 7.3.1 function problem: ERROR: cache lookup failed for type 0 " }, { "msg_contents": "Achilleus Mantzios wrote:\n> On Tue, 7 Jan 2003, Tom Lane wrote:\n>>Does your function know about filling in the elemtype field that was\n>>recently added to struct ArrayType?\n> \n> She has no clue :)\n> \n> Any pointers would be great.\n\nSee construct_array() in src/backend/utils/adt/arrayfuncs.c.\n\nHTH,\n\nJoe\n\n", "msg_date": "Tue, 07 Jan 2003 10:39:19 -0800", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: 7.3.1 function problem: ERROR: cache lookup failed" }, { "msg_contents": "Achilleus Mantzios <achill@matrix.gatewaynet.com> writes:\n> My case persists:\n> After clean install of the database, and after vacuum analyze,\n> i get\n\nUm ... is it persisting? That looks like it's correctly picked the\nvcode index this time. Strange behavior though. By \"clean install\"\ndo you mean you rebuilt Postgres, or just did dropdb/createdb/reload\ndata?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 07 Jan 2003 13:39:57 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [SQL] [PERFORM] 7.3.1 index use / performance " }, { "msg_contents": "> I am wondering about a compiler bug, or some other peculiarity on your\n> platform. Can anyone else using FreeBSD try the above experiment and\n> see if they get different results from mine on 7.3.* (or CVS tip)?\n\nOn FreeBSD 4.7 I received the exact same results as Tom using the\nstatements shown by Tom.\n\n-- \nRod Taylor <rbt@rbt.ca>\n\nPGP Key: http://www.rbt.ca/rbtpub.asc", "msg_date": "07 Jan 2003 13:45:14 -0500", "msg_from": "Rod Taylor <rbt@rbt.ca>", "msg_from_op": false, "msg_subject": "Re: [SQL] 7.3.1 index use / performance" }, { "msg_contents": "Rod Taylor <rbt@rbt.ca> writes:\n>> I am wondering about a compiler bug, or some other peculiarity on your\n>> platform. Can anyone else using FreeBSD try the above experiment and\n>> see if they get different results from mine on 7.3.* (or CVS tip)?\n\n> On FreeBSD 4.7 I received the exact same results as Tom using the\n> statements shown by Tom.\n\nOn looking at the code, I do see part of a possible mechanism for this\nbehavior: cost_index calculates the estimated cost for qual-clause\nevaluation like this:\n\n /*\n * Estimate CPU costs per tuple.\n *\n * Normally the indexquals will be removed from the list of restriction\n * clauses that we have to evaluate as qpquals, so we should subtract\n * their costs from baserestrictcost. 
XXX For a lossy index, not all\n * the quals will be removed and so we really shouldn't subtract their\n * costs; but detecting that seems more expensive than it's worth.\n * Also, if we are doing a join then some of the indexquals are join\n * clauses and shouldn't be subtracted. Rather than work out exactly\n * how much to subtract, we don't subtract anything.\n */\n cpu_per_tuple = cpu_tuple_cost + baserel->baserestrictcost;\n\n if (!is_injoin)\n cpu_per_tuple -= cost_qual_eval(indexQuals);\n\nIn theory, indexQuals will always be a subset of the qual list on which \nbaserestrictcost was computed, so we should always end up with a\ncpu_per_tuple value at least as large as cpu_tuple_cost. I am wondering\nif somehow in Achilleus's situation, cost_qual_eval() is producing a\nsilly result leading to negative cpu_per_tuple. I don't see how that\ncould happen though --- nor why it would happen on his machine and not\nother people's.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 07 Jan 2003 14:26:08 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [SQL] [PERFORM] 7.3.1 index use / performance " }, { "msg_contents": "On Tue, 7 Jan 2003, Stephan Szabo wrote:\n\n>\n> On Tue, 7 Jan 2003, Achilleus Mantzios wrote:\n>\n> > i am just in the stage of having migrated my test system to 7.3.1\n> > and i am experiencing some performance problems.\n> >\n> > i have a table \"noon\"\n> > Table \"public.noon\"\n> > Column | Type | Modifiers\n> > ------------------------+------------------------+-----------\n> > v_code | character varying(4) |\n> > log_no | bigint |\n> > report_date | date |\n> > report_time | time without time zone |\n> > voyage_no | integer |\n> > charterer | character varying(12) |\n> > port | character varying(24) |\n> > duration | character varying(4) |\n> > rotation | character varying(9) |\n> > ......\n> >\n> > with a total of 278 columns.\n> >\n> > it has indexes:\n> > Indexes: noonf_date btree (report_date),\n> > noonf_logno btree (log_no),\n> > noonf_rotation btree (rotation text_ops),\n> > noonf_vcode btree (v_code),\n> > noonf_voyageno btree (voyage_no)\n> >\n> > On the test 7.3.1 system (a FreeBSD 4.7-RELEASE-p2, Celeron 1.2GHz\n> > 400Mb, with 168Mb for pgsql),\n> > i get:\n> > dynacom=# EXPLAIN ANALYZE select\n> > FUELCONSUMPTION,rpm,Steam_Hours,voyage_activity,ldin from noon where\n> > v_code='4500' and rotation='NOON ' and report_date between\n> > '2002-01-07' and '2003-01-07';\n> > QUERY PLAN\n> >\n>\n> > -------------------------------------------------------------------------------------------------------------------\n> > Index Scan using noonf_date on noon (cost=0.00..4.46 rows=1 width=39)\n> > (actual time=0.27..52.89 rows=259 loops=1)\n>\n>\n> > Index Scan using noonf_vcode on noon (cost=0.00..3122.88 rows=1\n> > width=39) (actual time=0.16..13.92 rows=259 loops=1)\n>\n>\n> What do the statistics for the three columns actually look like and what\n> are the real distributions and counts like?\n\nThe two databases (test 7.3.1 and development 7.2.3) are identical\n(loaded from the same pg_dump).\n\nAbout the stats on these 3 columns i get: (see also attachment 1 to avoid\nidentation/wraparound problems)\n\n schemaname | tablename | attname | null_frac | avg_width | n_distinct | most_common_vals | most_common_freqs | histogram_bounds | 
correlation\n------------+-----------+-------------+-----------+-----------+------------+-----------------------------------------------------------------------------------------------------------------+------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------------------------------------------------------------------+-------------\n public | noon | v_code | 0 | 8 | 109 | {4630,4650,4690,4670,4520,4610,4550,4560,4620,4770} | {0.0283333,0.028,0.0256667,0.0243333,0.024,0.0236667,0.0233333,0.0233333,0.0226667,0.0226667} | {2070,3210,4330,4480,4570,4680,4751,4820,4870,4940,6020} | -0.249905\n public | noon | report_date | 0 | 4 | 3408 | {2001-11-14,1998-10-18,2000-04-03,2000-07-04,2000-12-20,2000-12-31,2001-01-12,2001-10-08,2001-12-25,1996-01-23} | {0.002,0.00166667,0.00166667,0.00166667,0.00166667,0.00166667,0.00166667,0.00166667,0.00166667,0.00133333} | {\"0001-12-11 BC\",1994-09-27,1996-03-26,1997-07-29,1998-08-26,1999-03-29,1999-11-30,2000-09-25,2001-05-25,2002-01-17,2002-12-31} | -0.812295\n public | noon | rotation | 0 | 13 | 6 | {\"NOON \",\"PORT LOG \",\"ARRIVAL \",DEPARTURE,\"SEA \",\"NEXT PORT\"} | {0.460333,0.268667,0.139,0.119667,0.007,0.00533333} | | 0.119698\n(3 rows)\n\n\nAbout distributions, i have:\n\ndynacom=# SELECT rotation,count(*) from noon group by rotation;\n rotation | count\n-----------+-------\n | 2\n 000000000 | 65\n ARRIVAL | 1\n ARRIVAL | 15471\n DEPARTURE | 15030\n NEXT PORT | 462\n NOON | 50874\n PORT LOG | 25688\n SEA | 1202\n(9 rows)\n\ndynacom=# SELECT v_code,count(*) from noon group by v_code;\n v_code | count\n--------+-------\n 0004 | 1\n 1030 | 1\n 2070 | 170\n 2080 | 718\n 2110 | 558\n 2220 | 351\n 2830 | 1373\n 2840 | 543\n 2860 | 407\n 2910 | 418\n 3010 | 352\n 3020 | 520\n 3060 | 61\n 3130 | 117\n 3140 | 1\n 3150 | 752\n 3160 | 811\n 3170 | 818\n 3180 | 1064\n 3190 | 640\n 3200 | 998\n 3210 | 1512\n 3220 | 595\n 3230 | 374\n 3240 | 514\n 3250 | 13\n 3260 | 132\n 3270 | 614\n 4010 | 413\n 4020 | 330\n 4040 | 728\n 4050 | 778\n 4060 | 476\n 4070 | 534\n 4310 | 759\n 4320 | 424\n 4330 | 549\n 4360 | 366\n 4370 | 334\n 4380 | 519\n 4410 | 839\n 4420 | 183\n 4421 | 590\n 4430 | 859\n 4450 | 205\n 4470 | 861\n 4480 | 766\n 4490 | 169\n 4500 | 792\n 4510 | 2116\n 4520 | 2954\n 4530 | 2142\n 4531 | 217\n 4540 | 2273\n 4550 | 2765\n 4560 | 2609\n 4570 | 2512\n 4580 | 1530\n 4590 | 1987\n 4600 | 308\n 4610 | 2726\n 4620 | 2698\n 4630 | 2813\n 4640 | 1733\n 4650 | 2655\n 4660 | 2139\n 4661 | 65\n 4670 | 2607\n 4680 | 1729\n 4690 | 2587\n 4700 | 2101\n 4710 | 1830\n 4720 | 1321\n 4730 | 1258\n 4740 | 1506\n 4750 | 1391\n 4751 | 640\n 4760 | 1517\n 4770 | 2286\n 4780 | 1353\n 4790 | 1209\n 4800 | 2414\n 4810 | 770\n 4820 | 1115\n 4830 | 1587\n 4840 | 983\n 4841 | 707\n 4850 | 1297\n 4860 | 375\n 4870 | 1440\n 4880 | 456\n 4881 | 742\n 4890 | 210\n 4891 | 45\n 4900 | 2\n 4910 | 1245\n 4920 | 414\n 4930 | 1130\n 4940 | 1268\n 4950 | 949\n 4960 | 836\n 4970 | 1008\n 4980 | 1239\n 5510 | 477\n 5520 | 380\n 5530 | 448\n 5540 | 470\n 5550 | 352\n 5560 | 148\n 5570 | 213\n 5580 | 109\n 5590 | 55\n 6010 | 246\n 6020 | 185\n 9180 | 1\n\n(Not all the above vessels are active or belong to me:) )\n\nThe distribution on the report_date has no probabilistic significance\nsince each report_date usually corresponds to one row.\nSo,\ndynacom=# SELECT count(*) from noon;\n count\n--------\n 108795\n(1 row)\n\ndynacom=#\n\nNow for the specific query the counts 
have as follows:\n\ndynacom=# select count(*) from noon where v_code='4500';\n count\n-------\n 792\n(1 row)\n\ndynacom=# select count(*) from noon where rotation='NOON ';\n count\n-------\n 50874\n(1 row)\n\ndynacom=# select count(*) from noon where report_date between '2002-01-07'\nand '2003-01-07';\n count\n-------\n 7690\n(1 row)\n\ndynacom=#\n\n> Given an estimated cost of around 4 for the first scan, my guess would be\n> that it's not expecting alot of rows between 2002-01-07 and 2003-01-07\n> which would make that a reasonable plan.\n>\n\nAs we see the rows returned for v_code='4500' (792) are much fewer than\nthe rows returned for the dates between '2002-01-07' and '2003-01-07'\n(7690).\n\nIs there a way to provide you with more information?\n\nAnd i must note that the two databases were worked on after a fresh\ncreatedb on both systems (and as i told they are identical).\nBut, for some reason the 7.2.3 *always* finds the best index to use :)\n\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n>\n\n==================================================================\nAchilleus Mantzios\nS/W Engineer\nIT dept\nDynacom Tankers Mngmt\nNikis 4, Glyfada\nAthens 16610\nGreece\ntel: +30-10-8981112\nfax: +30-10-8981877\nemail: achill@matrix.gatewaynet.com\n mantzios@softlab.ece.ntua.gr", "msg_date": "Tue, 7 Jan 2003 18:27:32 -0200 (GMT+2)", "msg_from": "Achilleus Mantzios <achill@matrix.gatewaynet.com>", "msg_from_op": false, "msg_subject": "Re: [SQL] [PERFORM] 7.3.1 index use / performance" }, { "msg_contents": "On Tue, 7 Jan 2003, Tom Lane wrote:\n\n> Achilleus Mantzios <achill@matrix.gatewaynet.com> writes:\n> > About the stats on these 3 columns i get:\n>\n> Does 7.2 generate the same stats? 
(minus the schemaname of course)\n\nNot absolutely but close:\n\n(See attachment)\n\n>\n> Also, I would like to see the results of these queries on both versions,\n> so that we can see what the planner thinks the index selectivity is:\n>\n> EXPLAIN ANALYZE select * from noon where\n> v_code='4500';\n>\n> EXPLAIN ANALYZE select * from noon where\n> report_date between '2002-01-07' and '2003-01-07';\n>\n\nOn 7.3.1 (On a FreeBSD)\n=======================\ndynacom=# EXPLAIN ANALYZE select * from noon where v_code='4500';\n QUERY PLAN\n\n-----------------------------------------------------------------------------------------------------------------------------\n Index Scan using noonf_vcode on noon (cost=0.00..3066.64 rows=829\nwidth=1974) (actual time=2.02..1421.14 rows=792 loops=1)\n Index Cond: (v_code = '4500'::character varying)\n Total runtime: 1424.82 msec\n(3 rows)\n\n\ndynacom=# EXPLAIN ANALYZE select * from noon where report_date between\n'2002-01-07' and '2003-01-07';\n QUERY PLAN\n\n---------------------------------------------------------------------------------------------------------------------------------\n Index Scan using noonf_date on noon (cost=0.00..15919.50 rows=11139\nwidth=1974) (actual time=2.05..13746.17 rows=7690 loops=1)\n Index Cond: ((report_date >= '2002-01-07'::date) AND (report_date <=\n'2003-01-07'::date))\n Total runtime: 13775.48 msec\n(3 rows)\n\nOn 7.2.3 (Linux)\n==================\ndynacom=# EXPLAIN ANALYZE select * from noon where v_code='4500';\nNOTICE: QUERY PLAN:\n\nIndex Scan using noonf_vcode on noon (cost=0.00..3043.45 rows=827\nwidth=1974) (actual time=19.59..927.06 rows=792 loops=1)\nTotal runtime: 928.86 msec\n\ndynacom=# EXPLAIN ANALYZE select * from noon where report_date between\n'2002-01-07' and '2003-01-07';\nNOTICE: QUERY PLAN:\n\nIndex Scan using noonf_date on noon (cost=0.00..16426.45 rows=11958\nwidth=1974) (actual time=29.64..8854.05 rows=7690 loops=1)\nTotal runtime: 8861.90 msec\n\nEXPLAIN\n\n> \t\t\tregards, tom lane\n>\n\n==================================================================\nAchilleus Mantzios\nS/W Engineer\nIT dept\nDynacom Tankers Mngmt\nNikis 4, Glyfada\nAthens 16610\nGreece\ntel: +30-10-8981112\nfax: +30-10-8981877\nemail: achill@matrix.gatewaynet.com\n mantzios@softlab.ece.ntua.gr", "msg_date": "Tue, 7 Jan 2003 19:05:22 -0200 (GMT+2)", "msg_from": "Achilleus Mantzios <achill@matrix.gatewaynet.com>", "msg_from_op": false, "msg_subject": "Re: [SQL] [PERFORM] 7.3.1 index use / performance " }, { "msg_contents": "\nHi i had written a C function to easily convert an int4 to its\nequivalent 1x1 int4[] array.\n\nIt worked fine under 7.1,7.2.\nNow under 7.3.1 i get the following message whenever i try to:\n\ndynacom=# select itoar(3126);\nERROR: cache lookup failed for type 0\n\nSurprisingly though when i do something like :\n\ndynacom=# select defid from machdefs where itoar(3126) ~ parents and\nlevel(parents) = 1 order by description,partno;\n defid\n-------\n 3137\n 3127\n 3130\n 3129\n 3133\n 3136\n 3135\n 3128\n 3131\n 3132\n 3134\n 3138\n(12 rows)\n\nit works fine, but then again when i try to EXPLAIN the above (successful)\nstatement i also get:\n\ndynacom=# EXPLAIN select defid from machdefs where itoar(3126) ~ parents\nand\nlevel(parents) = 1 order by description,partno;\nERROR: cache lookup failed for type 0\n\n\nAny clues of what could be wrong??\n\nThe definition of the function is:\n\nCREATE FUNCTION \"itoar\" (integer) RETURNS integer[] AS\n'$libdir/itoar', 'itoar' LANGUAGE 'c' WITH ( iscachable,isstrict 
);\n\nI also tried without the iscachable option with no luck\n(since it seems to complain about *type* 0)\n\n==================================================================\nAchilleus Mantzios\nS/W Engineer\nIT dept\nDynacom Tankers Mngmt\nNikis 4, Glyfada\nAthens 16610\nGreece\ntel: +30-10-8981112\nfax: +30-10-8981877\nemail: achill@matrix.gatewaynet.com\n mantzios@softlab.ece.ntua.gr\n\n", "msg_date": "Tue, 7 Jan 2003 19:55:33 -0200 (GMT+2)", "msg_from": "Achilleus Mantzios <achill@matrix.gatewaynet.com>", "msg_from_op": false, "msg_subject": "7.3.1 function problem: ERROR: cache lookup failed for type 0" }, { "msg_contents": "On Tue, 7 Jan 2003, Tom Lane wrote:\n\n> Achilleus Mantzios <achill@matrix.gatewaynet.com> writes:\n> > Hi i had written a C function to easily convert an int4 to its\n> > equivalent 1x1 int4[] array.\n>\n> Does your function know about filling in the elemtype field that was\n> recently added to struct ArrayType?\n\nShe has no clue :)\n\nAny pointers would be great.\nThanx Tom.\n\n>\n> \t\t\tregards, tom lane\n>\n\n==================================================================\nAchilleus Mantzios\nS/W Engineer\nIT dept\nDynacom Tankers Mngmt\nNikis 4, Glyfada\nAthens 16610\nGreece\ntel: +30-10-8981112\nfax: +30-10-8981877\nemail: achill@matrix.gatewaynet.com\n mantzios@softlab.ece.ntua.gr\n\n", "msg_date": "Tue, 7 Jan 2003 20:32:56 -0200 (GMT+2)", "msg_from": "Achilleus Mantzios <achill@matrix.gatewaynet.com>", "msg_from_op": false, "msg_subject": "Re: 7.3.1 function problem: ERROR: cache lookup failed" }, { "msg_contents": "On Tue, 7 Jan 2003, Tom Lane wrote:\n\n> There is no way that adding the filter condition should have reduced the\n> estimated runtime for this plan --- reducing the estimated number of\n> output rows, yes, but not the runtime. 
And in fact I can't duplicate\n\nMy case persists:\nAfter clean install of the database, and after vacuum analyze,\ni get\n\ndynacom=# EXPLAIN ANALYZE select\nFUELCONSUMPTION,rpm,Steam_Hours,voyage_activity,ldin from noon where\nreport_date between '2002-01-07' and '2003-01-07';\n QUERY PLAN\n\n-----------------------------------------------------------------------------------------------------------------------------\n Index Scan using noonf_date on noon (cost=0.00..16458.54 rows=10774\nwidth=39) (actual time=0.13..205.86 rows=7690 loops=1)\n Index Cond: ((report_date >= '2002-01-07'::date) AND (report_date <=\n'2003-01-07'::date))\n Total runtime: 233.22 msec\n\ndynacom=# EXPLAIN ANALYZE select\nFUELCONSUMPTION,rpm,Steam_Hours,voyage_activity,ldin from noon where\nreport_date between '2002-01-07' and '2003-01-07' and v_code='4500';\n QUERY PLAN\n\n------------------------------------------------------------------------------------------------------------------------\n Index Scan using noonf_vcode on noon (cost=0.00..3092.52 rows=83\nwidth=39) (actual time=0.15..15.08 rows=373 loops=1)\n Index Cond: (v_code = '4500'::character varying)\n Filter: ((report_date >= '2002-01-07'::date) AND (report_date <=\n'2003-01-07'::date))\n Total runtime: 16.56 msec\n(4 rows)\n\nI thought PostgreSQL in some sense (hub.org) used FreeBSD,\nis there any 4.7 FreeBSD server with pgsql 7.3.1 you could use?\n\n==================================================================\nAchilleus Mantzios\nS/W Engineer\nIT dept\nDynacom Tankers Mngmt\nNikis 4, Glyfada\nAthens 16610\nGreece\ntel: +30-10-8981112\nfax: +30-10-8981877\nemail: achill@matrix.gatewaynet.com\n mantzios@softlab.ece.ntua.gr\n\n", "msg_date": "Tue, 7 Jan 2003 20:38:17 -0200 (GMT+2)", "msg_from": "Achilleus Mantzios <achill@matrix.gatewaynet.com>", "msg_from_op": false, "msg_subject": "Re: [SQL] [PERFORM] 7.3.1 index use / performance " }, { "msg_contents": "On Tue, 7 Jan 2003, Tom Lane wrote:\n\n> Achilleus Mantzios <achill@matrix.gatewaynet.com> writes:\n> > My case persists:\n> > After clean install of the database, and after vacuum analyze,\n> > i get\n>\n> Um ... is it persisting? That looks like it's correctly picked the\n> vcode index this time. Strange behavior though. By \"clean install\"\n> do you mean you rebuilt Postgres, or just did dropdb/createdb/reload\n> data?\n\nJust dropdb/createdb/reload.\n\n>\n> \t\t\tregards, tom lane\n>\n\n==================================================================\nAchilleus Mantzios\nS/W Engineer\nIT dept\nDynacom Tankers Mngmt\nNikis 4, Glyfada\nAthens 16610\nGreece\ntel: +30-10-8981112\nfax: +30-10-8981877\nemail: achill@matrix.gatewaynet.com\n mantzios@softlab.ece.ntua.gr\n\n", "msg_date": "Tue, 7 Jan 2003 20:51:35 -0200 (GMT+2)", "msg_from": "Achilleus Mantzios <achill@matrix.gatewaynet.com>", "msg_from_op": false, "msg_subject": "Re: [SQL] [PERFORM] 7.3.1 index use / performance " }, { "msg_contents": "Just to close off the thread, here is the end-result of investigating\nAchilleus Mantzios' problem.\n\n------- Forwarded Message\n\nDate: Wed, 08 Jan 2003 11:54:36 -0500\nFrom: Tom Lane <tgl@sss.pgh.pa.us>\nTo: Achilleus Mantzios <achill@matrix.gatewaynet.com>\nSubject: Re: [SQL] [PERFORM] 7.3.1 index use / performance \n\nI believe I see what's going on. You have a number of silly outlier\nvalues in the report_date column --- quite a few instances of '10007-06-09'\nfor example. 
Depending on whether ANALYZE's random sample happens to\ninclude one of these, the histogram generated by ANALYZE might look like\nthis (it took about half a dozen tries with ANALYZE to get this result):\n\ndynacom=# analyze noon;\nANALYZE\ndynacom=# select histogram_bounds from pg_stats where attname = 'report_date';\n histogram_bounds\n-----------------------------------------------------------------------------------------------------------------------------\n {1969-06-26,1994-09-24,1996-04-05,1997-07-21,1998-08-27,1999-03-13,1999-11-11,2000-08-18,2001-04-18,2002-01-04,10007-06-09}\n(1 row)\n\nin which case we get this:\n\ndynacom=# EXPLAIN select * from noon where\ndynacom-# report_date between '2002-01-07' and '2003-01-07';\n QUERY PLAN\n---------------------------------------------------------------------------------------------\n Index Scan using noonf_date on noon (cost=0.00..4.08 rows=1 width=1975)\n Index Cond: ((report_date >= '2002-01-07'::date) AND (report_date <= '2003-01-07'::date))\n(2 rows)\n\nSeeing this histogram, the planner assumes that one-tenth of the table\nis uniformly distributed between 2002-01-04 and 10007-06-09, which leads\nit to the conclusion that the range between 2002-01-07 and 2003-01-07\nprobably contains only about one row, which causes it to prefer a scan\non report_date rather than on v_code.\n\nThe reason the problem comes and goes is that any given ANALYZE run\nmight or might not happen across one of the outliers. When it doesn't,\nyou get a histogram that leads to reasonably accurate estimates.\n\nThere are a couple of things you could do about this. One is to\nincrease the statistics target for report_date (see ALTER TABLE SET\nSTATISTICS) so that a finer-grained histogram is generated for the\nreport_date column. The other thing, which is more work but probably\nthe best answer in the long run, is to fix the outliers, which I imagine\nmust be incorrect entries.\n\nYou could perhaps put a constraint on report_date to prevent bogus\nentries from sneaking in in future.\n\nIt looks like increasing the stats target would be worth doing also,\nif you make many queries using ranges of report_date.\n\n\t\t\tregards, tom lane\n\n------- End of Forwarded Message\n", "msg_date": "Wed, 08 Jan 2003 12:32:33 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [SQL] [PERFORM] 7.3.1 index use / performance " }, { "msg_contents": "On Tue, 7 Jan 2003, Tom Lane wrote:\n\n> Rod Taylor <rbt@rbt.ca> writes:\n> >> I am wondering about a compiler bug, or some other peculiarity on your\n> >> platform. 
Can anyone else using FreeBSD try the above experiment and\n> >> see if they get different results from mine on 7.3.* (or CVS tip)?\n>\n> > On FreeBSD 4.7 I received the exact same results as Tom using the\n> > statements shown by Tom.\n>\n> On looking at the code, I do see part of a possible mechanism for this\n> behavior: cost_index calculates the estimated cost for qual-clause\n> evaluation like this:\n>\n\nThis bizarre index decreased cost (when adding conditions) behaviour maybe\nwas due to some vacuums.\n(i cant remember how many reloads and vacuums i did to the database\nin the period petween the two emails).\n\nHowever my linux machine with the same pgsql 7.3.1, with a full clean\ninstallation also gives the same symptoms:\nChoosing the slow index, and after some (random)\nvacuums choosing the right index, and then after some vacuums chooses the\nbad\nindex again.\n\n\n>\n> \t\t\tregards, tom lane\n>\n\n==================================================================\nAchilleus Mantzios\nS/W Engineer\nIT dept\nDynacom Tankers Mngmt\nNikis 4, Glyfada\nAthens 16610\nGreece\ntel: +30-10-8981112\nfax: +30-10-8981877\nemail: achill@matrix.gatewaynet.com\n mantzios@softlab.ece.ntua.gr\n\n", "msg_date": "Wed, 8 Jan 2003 15:53:54 -0200 (GMT+2)", "msg_from": "Achilleus Mantzios <achill@matrix.gatewaynet.com>", "msg_from_op": false, "msg_subject": "Re: [SQL] [PERFORM] 7.3.1 index use / performance " } ]
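A note on the remedies Tom Lane suggests in the forwarded diagnosis above: both can be expressed as plain SQL. The sketch below uses the noon table and report_date column from the thread, but the statistics target of 100 and the date bounds in the CHECK constraint are illustrative assumptions rather than values taken from the messages, and the constraint can only be added once the bogus outlier rows have been corrected.

    -- finer-grained histogram for report_date, then refresh the statistics
    ALTER TABLE noon ALTER COLUMN report_date SET STATISTICS 100;
    ANALYZE noon;

    -- keep future bogus dates (such as '10007-06-09') out; fix the existing
    -- outliers first, and pick whatever bounds are valid for the application
    ALTER TABLE noon ADD CONSTRAINT noon_report_date_sane
        CHECK (report_date BETWEEN '1990-01-01' AND '2010-12-31');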
[ { "msg_contents": "Hi all,\n\nHow is this path created without the (.profile)? \n\n$ echo $PATH\n/usr/local/bin:/usr/bin:/bin:/cygdrive/c/amtagent:/cygdrive/c/informix/bin:/cygd\nrive/c/winnt:/cygdrive/c/winnt/system:winnt/system32:/cygdrive/c/Windows:/cygdri\nve/c/Windows/command:C:jdk1.2.2/bin\n\nHow can I add this path to the above path? ~/cygwin/usr/bin/gcc-3.2.1\n\n\n\n", "msg_date": "Tue, 7 Jan 2003 15:29:54 -0500 ", "msg_from": "\"Claiborne, Aldemaco Earl (Al)\" <claiborne@lucent.com>", "msg_from_op": true, "msg_subject": "path " } ]
[ { "msg_contents": "Hello!\n\nI am quite new in the PostgreSQL performance business, done a few years Oracle \nstuff before. My ist question is the following:\n\nWe have three table, lets name them rk150, 151 and rk152. They all have a \ntimestamp and a order number in common but than different data after this. \nNow I need the data from all tables in one view for a given order number, so \nI created a view\n\ncreate view orderevents as\n select ts, aufnr from rk150\n union\n select ts, aufnr from rk151\n union\n select ts, aufnr from rk152;\n\nWhen I does a \"select * from orderevents where aufnr='1234'\" it takes over 14 \nseconds!\nThe problem is now that PostgreSQL first does the union with all the three \ntables and after this sorts out the right rows:\n\nSubquery Scan a (cost=54699.06..56622.18 rows=38462 width=20)\n -> Unique (cost=54699.06..56622.18 rows=38462 width=20)\n -> Sort (cost=54699.06..54699.06 rows=384624 width=20)\n -> Append (cost=0.00..10689.24 rows=384624 width=20)\n -> Subquery Scan *SELECT* 1\n (cost=0.00..8862.52 rows=314852 width=20)\n -> Seq Scan on rk150 \n (cost=0.00..8862.52 rows=314852 width=20)\n -> Subquery Scan *SELECT* 2 \n (cost=0.00..1208.58 rows=45858 width=20)\n -> Seq Scan on rk151 \n (cost=0.00..1208.58 rows=45858 width=20)\n -> Subquery Scan *SELECT* 3 \n (cost=0.00..618.14 rows=23914 width=20)\n -> Seq Scan on rk152 \n (cost=0.00..618.14 rows=23914 width=20)\n\nA better thing would it (Oracle does this and I think I have seen it on \nPostgreSQL before), that the where-clause is moved inside every select so we \nhave something like this (written by hand):\n\nselect * from (\n select zeit, aufnr from rk150 where aufnr='13153811'\n union\n select zeit, aufnr from rk151 where aufnr='13153811'\n union\n select zeit, aufnr from rk152 where aufnr='13153811')\n as A;\n\nThis takes less than 1 second because the nr of rows that have to be joined \nare only 45 (optimizer expects 4), not > 300.000:\n\nSubquery Scan a (cost=45.97..46.19 rows=4 width=20)\n -> Unique (cost=45.97..46.19 rows=4 width=20)\n -> Sort (cost=45.97..45.97 rows=45 width=20)\n -> Append (cost=0.00..44.74 rows=45 width=20)\n -> Subquery Scan *SELECT* 1 \n (cost=0.00..32.22 rows=31 width=20)\n -> Index Scan using rk150_uidx_aufnr on rk150 \n (cost=0.00..32.22 rows=31 width=20)\n -> Subquery Scan *SELECT* 2 \n (cost=0.00..7.67 rows=9 width=20)\n -> Index Scan using rk151_uidx_aufnr on rk151 \n (cost=0.00..7.67 rows=9 width=20)\n -> Subquery Scan *SELECT* 3 \n (cost=0.00..4.85 rows=5 width=20)\n -> Index Scan using rk152_uidx_aufnr on rk152 \n (cost=0.00..4.85 rows=5 width=20)\n\nMy question now: Is the optimizer able to move the where clause into unions? \nIf so, how I can get him to do it?\n\nThank you for the help in advance!\n\n-- \nDipl. Inform. 
Boris Klug, control IT GmbH, Germany\n", "msg_date": "Wed, 8 Jan 2003 14:25:48 +0100", "msg_from": "Boris Klug <boris.klug@control.de>", "msg_from_op": true, "msg_subject": "Unions and where optimisation" }, { "msg_contents": "Hannu Krosing wrote:\n\n>\n> try making the orderevents view like this:\n>\n> create view orderevents as\n> select rk.aufnr, sub.ts\n> from rk150 rk,\n> ( select ts from rk150 where aufnr = rk.aufr\n> union\n> select ts from rk151 where aufnr = rk.aufr\n> union\n> select ts from rk152 where aufnr = rk.aufr\n> ) as sub\n> ;\n>\n> this could/should force your desired behavior.\n>\n\nHannu, does it work?\nFew months ago I lost some time trying to create this kind of query and \nI always got error, that subselect doesn't knows anything about upper \n(outer?) table.\n\nIn this query you should get error:\n\"relation rk does not exist\".\n\nWhat version of postgres do you have?\nTomasz Myrta\n\n", "msg_date": "Wed, 08 Jan 2003 15:32:11 +0100", "msg_from": "Tomasz Myrta <jasiek@klaster.net>", "msg_from_op": false, "msg_subject": "Re: Unions and where optimisation" }, { "msg_contents": "\nHello!\n\n> Hannu, does it work?\n> Few months ago I lost some time trying to create this kind of query and\n> I always got error, that subselect doesn't knows anything about upper\n> (outer?) table.\n\nIt does not work on my PostgreSQL 7.2.x\n\nGet the same error like you: \"relation rk does not exist\"\n\nAlso the disadvantage of this solution is that the speed up is bound to \nqueries for the ordernr. If a statement has a where clause e.g. for a \ntimestamp, the view is still slow.\n\nDoes PostgreSQL not know how to move where clause inside each select in a \nunion?\n\n-- \nDipl. Inform. Boris Klug, control IT GmbH, Germany\n", "msg_date": "Wed, 8 Jan 2003 15:36:33 +0100", "msg_from": "Boris Klug <boris.klug@control.de>", "msg_from_op": true, "msg_subject": "Re: Unions and where optimisation" }, { "msg_contents": "Boris Klug wrote:\n\n> create view orderevents as\n> select ts, aufnr from rk150\n> union\n> select ts, aufnr from rk151\n> union\n> select ts, aufnr from rk152;\n\nI lost some time and I didn't find valid solution for this kind of query :-(\n\nI solved it (nice to hear about better solution) using table inheritance.\n\ncreate table rk_master(\nfields...\nfields...\n);\n\ncreate table rk150 () inherits rk_master;\ncreate table rk151 () inherits rk_master;\ncreate table rk152 () inherits rk_master;\n\nnow you can just create simple view:\nselect ts, aufnr from rk_master;\n\nRegards,\nTomasz Myrta\n\n", "msg_date": "Wed, 08 Jan 2003 15:40:43 +0100", "msg_from": "Tomasz Myrta <jasiek@klaster.net>", "msg_from_op": false, "msg_subject": "Re: Unions and where optimisation" }, { "msg_contents": "Boris Klug <boris.klug@control.de> wrote:\n\n> > Hannu, does it work?\n> > Few months ago I lost some time trying to create this kind of query and\n> > I always got error, that subselect doesn't knows anything about upper\n> > (outer?) table.\n>\n> It does not work on my PostgreSQL 7.2.x\n>\n> Get the same error like you: \"relation rk does not exist\"\n>\n> Also the disadvantage of this solution is that the speed up is bound to\n> queries for the ordernr. If a statement has a where clause e.g. for a\n> timestamp, the view is still slow.\n>\n> Does PostgreSQL not know how to move where clause inside each select in a\n> union?\n\nHi Boris,\n\nAs far as I know, this has first been \"fixed\" in 7.3. 
I think it was Tom who\nimproved the optimizer to push the where clause into the selects of a union\nview. I've done a test...\n\ncreate view test as\n select updated, invoice_id from invoice\n union all\n select updated, invoice_id from inv2\n union all\n select updated, invoice_id from inv3;\n\n... and it seems to work (postgresql 7.3 here):\n\nbilling=# explain select * from test where invoice_id = 111000;\n QUERY PLAN\n----------------------------------------------------------------------------\n----------------\n Subquery Scan test (cost=0.00..413.24 rows=114 width=12)\n -> Append (cost=0.00..413.24 rows=114 width=12)\n -> Subquery Scan \"*SELECT* 1\" (cost=0.00..6.00 rows=1 width=12)\n -> Index Scan using pk_invoice on invoice (cost=0.00..6.00\nrows=1 width=12)\n Index Cond: (invoice_id = 111000)\n -> Subquery Scan \"*SELECT* 2\" (cost=0.00..203.62 rows=57\nwidth=12)\n -> Index Scan using idx_inv2 on inv2 (cost=0.00..203.62\nrows=57 width=12)\n Index Cond: (invoice_id = 111000)\n -> Subquery Scan \"*SELECT* 3\" (cost=0.00..203.62 rows=57\nwidth=12)\n -> Index Scan using idx_inv3 on inv3 (cost=0.00..203.62\nrows=57 width=12)\n Index Cond: (invoice_id = 111000)\n(11 rows)\n\nI hope this is helps. Can you upgrade to 7.3.1? I really think the upgrade\nis worth the effort.\n\nBest Regards,\nMichael Paesold\n\n", "msg_date": "Wed, 8 Jan 2003 16:48:27 +0100", "msg_from": "\"Michael Paesold\" <mpaesold@gmx.at>", "msg_from_op": false, "msg_subject": "Re: Unions and where optimisation" }, { "msg_contents": "On Wed, 2003-01-08 at 13:25, Boris Klug wrote:\n> Hello!\n> \n> I am quite new in the PostgreSQL performance business, done a few years Oracle \n> stuff before. My ist question is the following:\n> \n> We have three table, lets name them rk150, 151 and rk152. They all have a \n> timestamp and a order number in common but than different data after this. \n> Now I need the data from all tables in one view for a given order number, so \n> I created a view\n> \n> create view orderevents as\n> select ts, aufnr from rk150\n> union\n> select ts, aufnr from rk151\n> union\n> select ts, aufnr from rk152;\n\ntry making the orderevents view like this:\n\ncreate view orderevents as\nselect rk.aufnr, sub.ts\n from rk150 rk,\n ( select ts from rk150 where aufnr = rk.aufr\n union\n select ts from rk151 where aufnr = rk.aufr\n union\n select ts from rk152 where aufnr = rk.aufr\n ) as sub\n;\n\nthis could/should force your desired behavior.\n\n> My question now: Is the optimizer able to move the where clause into unions? \n> If so, how I can get him to do it?\n> \n> Thank you for the help in advance!\n-- \nHannu Krosing <hannu@tm.ee>\n", "msg_date": "08 Jan 2003 16:02:15 +0000", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: Unions and where optimisation" }, { "msg_contents": "Hello!\n\n> As far as I know, this has first been \"fixed\" in 7.3. I think it was Tom\n> who improved the optimizer to push the where clause into the selects of a\n> union view. I've done a test...\n\nYes, I installed 7.3 and it works fine there. I think we will upgrade to 7.3.1 \nour development system soon.\n\nThank you!\n\n-- \nDipl. Inform. 
Boris Klug, control IT GmbH, Germany\n", "msg_date": "Wed, 8 Jan 2003 17:13:00 +0100", "msg_from": "Boris Klug <boris.klug@control.de>", "msg_from_op": true, "msg_subject": "Re: Unions and where optimisation" }, { "msg_contents": "On Wed, 2003-01-08 at 14:32, Tomasz Myrta wrote:\n> Hannu Krosing wrote:\n> \n> >\n> > try making the orderevents view like this:\n> >\n> > create view orderevents as\n> > select rk.aufnr, sub.ts\n> > from rk150 rk,\n> > ( select ts from rk150 where aufnr = rk.aufr\n> > union\n> > select ts from rk151 where aufnr = rk.aufr\n> > union\n> > select ts from rk152 where aufnr = rk.aufr\n> > ) as sub\n> > ;\n> >\n> > this could/should force your desired behavior.\n> >\n> \n> Hannu, does it work?\n\nNope! Sorry.\n\nSQL spec clearly states that subqueries in FROM clause must not see each\nother ;(\n\nIt would work in WITH part of the query, which will hopefully be\nimplemented in some future PG version, perhaps even 7.4 as WITH is the\nprerequisite for implementing SQL99 recursive queries, and RedHat has\nshown an strongish interest in implementing these.\n\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n-- \nHannu Krosing <hannu@tm.ee>\n", "msg_date": "08 Jan 2003 16:49:20 +0000", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: Unions and where optimisation" } ]
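To check which behaviour a given installation exhibits, Michael Paesold's test above can be repeated with the tables from this thread. This is only a sketch: it assumes UNION ALL is acceptable (it skips the duplicate-eliminating sort that the original UNION view pays for on every query), and the view is named orderevents_all here so it does not collide with the view already created earlier in the thread. On 7.3 the plan should show the aufnr condition as an Index Cond inside each arm of the union; on 7.2.x it will not.

    CREATE VIEW orderevents_all AS
        SELECT ts, aufnr FROM rk150
        UNION ALL
        SELECT ts, aufnr FROM rk151
        UNION ALL
        SELECT ts, aufnr FROM rk152;

    EXPLAIN SELECT * FROM orderevents_all WHERE aufnr = '13153811';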
[ { "msg_contents": "Hello to all list members:\n\nI'm looking for information about using postgresql in a cluster of servers\nwhere all real servers share a unique databases location outside them.\n\nThe question could be one of the follows?\n\n?Is the prostgresql prepared to synchronize simultaneous accesses to oneself\ndatabase among processes that run in different PC's?\n\nor\n\n?Was the postgresql database designed to allow simultaneous acceses of\nprocesses that run in different PC's allowing for own design the\nsincronizationof all the processes?\n\n\nThanks in advance for the attention\nEnediel\nLinux user 300141\n\nHappy who can penetrate the secret causes of the things\n�Use Linux!\n\n", "msg_date": "Wed, 8 Jan 2003 09:18:09 -0800", "msg_from": "\"enediel\" <enediel@com.ith.tur.cu>", "msg_from_op": true, "msg_subject": "postgresql in cluster of servers" }, { "msg_contents": "\nEnediel,\n\n> ?Was the postgresql database designed to allow simultaneous acceses of\n> processes that run in different PC's allowing for own design the\n> sincronizationof all the processes?\n\nNo, unless I'm really out of the loop. \n\nHowever, I believe that some/all of the commercial companies who offer \nPostgreSQL-based solutions offer extensions/versions of Postgres that do \nthis. I suggest that you contact:\n\nPostgreSQL Inc.\nCommand Prompt Inc.\nRed Hat (Red Hat Database)\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Wed, 8 Jan 2003 11:08:13 -0800", "msg_from": "Josh Berkus <josh@agliodbs.com>", "msg_from_op": false, "msg_subject": "Re: postgresql in cluster of servers" }, { "msg_contents": "On Wed, 2003-01-08 at 11:18, enediel wrote:\n> Hello to all list members:\n> \n> I'm looking for information about using postgresql in a cluster of servers\n> where all real servers share a unique databases location outside them.\n> \n> The question could be one of the follows?\n> \n> ?Is the prostgresql prepared to synchronize simultaneous accesses to oneself\n> database among processes that run in different PC's?\n> \n> or\n> \n> ?Was the postgresql database designed to allow simultaneous acceses of\n> processes that run in different PC's allowing for own design the\n> sincronizationof all the processes?\n\nTo clarify: do you mean \n\n(1) multiple copies of *the*same*database* sitting on many machines, \nand all of them synchronizing themselves?\n\n OR\n\n(2) multiple application server machines all hitting a single database\nsitting on a single database server machine?\n\n-- \n+------------------------------------------------------------+\n| Ron Johnson, Jr. mailto:ron.l.johnson@cox.net |\n| Jefferson, LA USA http://members.cox.net/ron.l.johnson |\n| |\n| \"Basically, I got on the plane with a bomb. Basically, I |\n| tried to ignite it. Basically, yeah, I intended to damage |\n| the plane.\" |\n| RICHARD REID, who tried to blow up American Airlines |\n| Flight 63 |\n+------------------------------------------------------------+\n\n", "msg_date": "08 Jan 2003 15:14:52 -0600", "msg_from": "Ron Johnson <ron.l.johnson@cox.net>", "msg_from_op": false, "msg_subject": "Re: postgresql in cluster of servers" } ]
[ { "msg_contents": "On Wed, 2003-01-08 at 19:49, enediel wrote:\n> Thanks for all answers:\n> Ron Johnson, I mean the second option thet you wrote:\n> \n> (2) multiple application server machines all hitting a single database\n> sitting on a single database server machine?\n\nOk, great. To the PG server, the app servers are db clients, just like\nany other client. Multi-user access, arranged so that users don't step\nover each other, is integrated deeply into the server.\n\nThus, PostgreSQL meets this qualification...\n\n> Happy who can penetrate the secret causes of the things\n> �Use Linux!\n> ----- Original Message -----\n> From: \"Ron Johnson\" <ron.l.johnson@cox.net>\n> To: \"postgresql\" <pgsql-performance@postgresql.org>\n> Sent: Wednesday, January 08, 2003 1:14 PM\n> Subject: Re: [PERFORM] postgresql in cluster of servers\n> \n> \n> > On Wed, 2003-01-08 at 11:18, enediel wrote:\n> > > Hello to all list members:\n> > >\n> > > I'm looking for information about using postgresql in a cluster of\n> servers\n> > > where all real servers share a unique databases location outside them.\n> > >\n> > > The question could be one of the follows?\n> > >\n> > > ?Is the prostgresql prepared to synchronize simultaneous accesses to\n> oneself\n> > > database among processes that run in different PC's?\n> > >\n> > > or\n> > >\n> > > ?Was the postgresql database designed to allow simultaneous acceses of\n> > > processes that run in different PC's allowing for own design the\n> > > sincronizationof all the processes?\n> >\n> > To clarify: do you mean\n> >\n> > (1) multiple copies of *the*same*database* sitting on many machines,\n> > and all of them synchronizing themselves?\n> >\n> > OR\n> >\n> > (2) multiple application server machines all hitting a single database\n> > sitting on a single database server machine?\n\n-- \n+------------------------------------------------------------+\n| Ron Johnson, Jr. mailto:ron.l.johnson@cox.net |\n| Jefferson, LA USA http://members.cox.net/ron.l.johnson |\n| |\n| \"Basically, I got on the plane with a bomb. Basically, I |\n| tried to ignite it. 
Basically, yeah, I intended to damage |\n| the plane.\" |\n| RICHARD REID, who tried to blow up American Airlines |\n| Flight 63 |\n+------------------------------------------------------------+\n\n", "msg_date": "08 Jan 2003 17:38:54 -0600", "msg_from": "Ron Johnson <ron.l.johnson@cox.net>", "msg_from_op": true, "msg_subject": "Re: Fw: postgresql in cluster of servers" }, { "msg_contents": "Thanks for all answers:\nRon Johnson, I mean the second option thet you wrote:\n\n(2) multiple application server machines all hitting a single database\nsitting on a single database server machine?\n\nGreetings\nEnediel\nLinux user 300141\n\nHappy who can penetrate the secret causes of the things\n�Use Linux!\n----- Original Message -----\nFrom: \"Ron Johnson\" <ron.l.johnson@cox.net>\nTo: \"postgresql\" <pgsql-performance@postgresql.org>\nSent: Wednesday, January 08, 2003 1:14 PM\nSubject: Re: [PERFORM] postgresql in cluster of servers\n\n\n> On Wed, 2003-01-08 at 11:18, enediel wrote:\n> > Hello to all list members:\n> >\n> > I'm looking for information about using postgresql in a cluster of\nservers\n> > where all real servers share a unique databases location outside them.\n> >\n> > The question could be one of the follows?\n> >\n> > ?Is the prostgresql prepared to synchronize simultaneous accesses to\noneself\n> > database among processes that run in different PC's?\n> >\n> > or\n> >\n> > ?Was the postgresql database designed to allow simultaneous acceses of\n> > processes that run in different PC's allowing for own design the\n> > sincronizationof all the processes?\n>\n> To clarify: do you mean\n>\n> (1) multiple copies of *the*same*database* sitting on many machines,\n> and all of them synchronizing themselves?\n>\n> OR\n>\n> (2) multiple application server machines all hitting a single database\n> sitting on a single database server machine?\n>\n> --\n> +------------------------------------------------------------+\n> | Ron Johnson, Jr. mailto:ron.l.johnson@cox.net |\n> | Jefferson, LA USA http://members.cox.net/ron.l.johnson |\n> | |\n> | \"Basically, I got on the plane with a bomb. Basically, I |\n> | tried to ignite it. Basically, yeah, I intended to damage |\n> | the plane.\" |\n> | RICHARD REID, who tried to blow up American Airlines |\n> | Flight 63 |\n> +------------------------------------------------------------+\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n>\n\n", "msg_date": "Wed, 8 Jan 2003 17:49:21 -0800", "msg_from": "\"enediel\" <enediel@com.ith.tur.cu>", "msg_from_op": false, "msg_subject": "Fw: postgresql in cluster of servers" }, { "msg_contents": "Hi,\n\nI think if we speak about single db server you will nead to ask\nthe app. 
server provider about clustering.\n\nI am working with jboss and it is supporting clustering.\n\n\nregards,\nivan.\n\nenediel wrote:\n\n> Thanks for all answers:\n> Ron Johnson, I mean the second option thet you wrote:\n>\n> (2) multiple application server machines all hitting a single database\n> sitting on a single database server machine?\n>\n> Greetings\n> Enediel\n> Linux user 300141\n>\n> Happy who can penetrate the secret causes of the things\n> �Use Linux!\n> ----- Original Message -----\n> From: \"Ron Johnson\" <ron.l.johnson@cox.net>\n> To: \"postgresql\" <pgsql-performance@postgresql.org>\n> Sent: Wednesday, January 08, 2003 1:14 PM\n> Subject: Re: [PERFORM] postgresql in cluster of servers\n>\n> > On Wed, 2003-01-08 at 11:18, enediel wrote:\n> > > Hello to all list members:\n> > >\n> > > I'm looking for information about using postgresql in a cluster of\n> servers\n> > > where all real servers share a unique databases location outside them.\n> > >\n> > > The question could be one of the follows?\n> > >\n> > > ?Is the prostgresql prepared to synchronize simultaneous accesses to\n> oneself\n> > > database among processes that run in different PC's?\n> > >\n> > > or\n> > >\n> > > ?Was the postgresql database designed to allow simultaneous acceses of\n> > > processes that run in different PC's allowing for own design the\n> > > sincronizationof all the processes?\n> >\n> > To clarify: do you mean\n> >\n> > (1) multiple copies of *the*same*database* sitting on many machines,\n> > and all of them synchronizing themselves?\n> >\n> > OR\n> >\n> > (2) multiple application server machines all hitting a single database\n> > sitting on a single database server machine?\n> >\n> > --\n> > +------------------------------------------------------------+\n> > | Ron Johnson, Jr. mailto:ron.l.johnson@cox.net |\n> > | Jefferson, LA USA http://members.cox.net/ron.l.johnson |\n> > | |\n> > | \"Basically, I got on the plane with a bomb. Basically, I |\n> > | tried to ignite it. Basically, yeah, I intended to damage |\n> > | the plane.\" |\n> > | RICHARD REID, who tried to blow up American Airlines |\n> > | Flight 63 |\n> > +------------------------------------------------------------+\n> >\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> >\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n\n\n\n", "msg_date": "Thu, 09 Jan 2003 17:22:17 +0100", "msg_from": "pginfo <pginfo@t1.unisoftbg.com>", "msg_from_op": false, "msg_subject": "Re: Fw: postgresql in cluster of servers" } ]
[ { "msg_contents": "On Thu, 9 Jan 2003, enediel wrote:\n\n> No, pginfo, suposse this example\n> \n> Web server or cluster web server, it's unimportant\n> |\n> Postgresql cluster server containing{\n> ...\n> Real server 1 running postgresql processes\n> Real server 2 running postgresql processes\n> ....\n> }\n> |\n> File server machine that contains all pg_databases\n> \n> Notice that the real servers don't have in their hard disk any database,\n> they could have a link to the hard disk in the File server machine.\n\nPostgresql cannot currently work this way. \n\nIt uses shared memory on a single image OS to maintain the database \ncoherently. when you cluster Postgresql across multiple machines, \ncurrently you have to have two seperate and independant instances which \nare synchronized by an external process of some sort.\n\nsince I/O is usually the single limiting factor, your suggested system \nwould likely be no faster than a single box with decent memory and CPUs in \nit.\n\n", "msg_date": "Thu, 9 Jan 2003 13:18:36 -0700 (MST)", "msg_from": "\"scott.marlowe\" <scott.marlowe@ihs.com>", "msg_from_op": true, "msg_subject": "Re: Fw: Fw: postgresql in cluster of servers" }, { "msg_contents": "No, pginfo, suposse this example\n\nWeb server or cluster web server, it's unimportant\n|\nPostgresql cluster server containing{\n ...\n Real server 1 running postgresql processes\n Real server 2 running postgresql processes\n ....\n}\n|\nFile server machine that contains all pg_databases\n\nNotice that the real servers don't have in their hard disk any database,\nthey could have a link to the hard disk in the File server machine.\n\nThe syncronization between postgresql processes that are running on\ndifferent machines and using the same database is my question.\n\nThanks for the answer\n\nGreetings\nEnediel\nLinux user 300141\n\nHappy who can penetrate the secret causes of the things\n�Use Linux!\n----- Original Message -----\nFrom: \"pginfo\" <pginfo@t1.unisoftbg.com>\nTo: \"enediel\" <enediel@com.ith.tur.cu>\nSent: Thursday, January 09, 2003 8:21 AM\nSubject: Re: Fw: [PERFORM] postgresql in cluster of servers\n\n\n> Hi,\n>\n> I think if we speak about single db server you will nead to ask\n> the app. 
server provider about clustering.\n>\n> I am working with jboss and it is supporting clustering.\n>\n>\n> regards,\n> ivan.\n>\n> enediel wrote:\n>\n> > Thanks for all answers:\n> > Ron Johnson, I mean the second option thet you wrote:\n> >\n> > (2) multiple application server machines all hitting a single database\n> > sitting on a single database server machine?\n> >\n> > Greetings\n> > Enediel\n> > Linux user 300141\n> >\n> > Happy who can penetrate the secret causes of the things\n> > �Use Linux!\n> > ----- Original Message -----\n> > From: \"Ron Johnson\" <ron.l.johnson@cox.net>\n> > To: \"postgresql\" <pgsql-performance@postgresql.org>\n> > Sent: Wednesday, January 08, 2003 1:14 PM\n> > Subject: Re: [PERFORM] postgresql in cluster of servers\n> >\n> > > On Wed, 2003-01-08 at 11:18, enediel wrote:\n> > > > Hello to all list members:\n> > > >\n> > > > I'm looking for information about using postgresql in a cluster of\n> > servers\n> > > > where all real servers share a unique databases location outside\nthem.\n> > > >\n> > > > The question could be one of the follows?\n> > > >\n> > > > ?Is the prostgresql prepared to synchronize simultaneous accesses to\n> > oneself\n> > > > database among processes that run in different PC's?\n> > > >\n> > > > or\n> > > >\n> > > > ?Was the postgresql database designed to allow simultaneous acceses\nof\n> > > > processes that run in different PC's allowing for own design the\n> > > > sincronizationof all the processes?\n> > >\n> > > To clarify: do you mean\n> > >\n> > > (1) multiple copies of *the*same*database* sitting on many machines,\n> > > and all of them synchronizing themselves?\n> > >\n> > > OR\n> > >\n> > > (2) multiple application server machines all hitting a single database\n> > > sitting on a single database server machine?\n> > >\n> > > --\n> > > +------------------------------------------------------------+\n> > > | Ron Johnson, Jr. mailto:ron.l.johnson@cox.net |\n> > > | Jefferson, LA USA http://members.cox.net/ron.l.johnson |\n> > > | |\n> > > | \"Basically, I got on the plane with a bomb. Basically, I |\n> > > | tried to ignite it. Basically, yeah, I intended to damage |\n> > > | the plane.\" |\n> > > | RICHARD REID, who tried to blow up American Airlines |\n> > > | Flight 63 |\n> > > +------------------------------------------------------------+\n> > >\n> > >\n> > > ---------------------------(end of\nbroadcast)---------------------------\n> > > TIP 1: subscribe and unsubscribe commands go to\nmajordomo@postgresql.org\n> > >\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 2: you can get off all lists at once with the unregister command\n> > (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n>\n\n", "msg_date": "Thu, 9 Jan 2003 12:41:44 -0800", "msg_from": "\"enediel\" <enediel@com.ith.tur.cu>", "msg_from_op": false, "msg_subject": "Fw: Fw: postgresql in cluster of servers" } ]
[ { "msg_contents": "Thanks for the answer scott.marlowe\nI'm agree with you about the limit of the I/O operations with this\nconfiguration.\n\nI'm just looking for a fault tolerance configuration in the databases\nserver, considering a very large databases, and a lot of users accesing to\nthem.\n\nI'll accept with pleasure any suggestions about this topic.\n\nGreetings\nEnediel\nLinux user 300141\n\nHappy who can penetrate the secret causes of the things\n�Use Linux!\n\n", "msg_date": "Thu, 9 Jan 2003 16:35:01 -0800", "msg_from": "\"enediel\" <enediel@com.ith.tur.cu>", "msg_from_op": true, "msg_subject": "Fw: Fw: Fw: postgresql in cluster of servers" } ]
[ { "msg_contents": "\nWhich query statement is better in terms of preformance ?\n\nselect ... from table1 where field1 in ('a', 'b', 'c')\n\nselect ... from table1 where field1='a' or field1='b' or field1='c'\n\nThanks.\n\n\n", "msg_date": "Sat, 11 Jan 2003 10:51:44 -0800", "msg_from": "Vernon Wu <vernonw@gatewaytech.com>", "msg_from_op": true, "msg_subject": "\"IN\" or \"=\" and \"OR\"" }, { "msg_contents": "Vernon Wu <vernonw@gatewaytech.com> writes:\n> Which query statement is better in terms of preformance ?\n> select ... from table1 where field1 in ('a', 'b', 'c')\n> select ... from table1 where field1='a' or field1='b' or field1='c'\n\nThere is no difference, other than the microseconds the parser spends\ntransforming form 1 into form 2 ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 11 Jan 2003 23:10:53 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: \"IN\" or \"=\" and \"OR\" " } ]
[ { "msg_contents": "Folks,\n\nCan someone give me a quick pointer on where the ANALYZE stats are kept in \n7.2.3? Thanks.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Mon, 13 Jan 2003 10:42:01 -0800", "msg_from": "Josh Berkus <josh@agliodbs.com>", "msg_from_op": true, "msg_subject": "Accessing ANALYZE stats" }, { "msg_contents": "On Mon, 13 Jan 2003, Josh Berkus wrote:\n\n> Can someone give me a quick pointer on where the ANALYZE stats are kept in\n> 7.2.3? Thanks.\n\nShould be in pg_stats I believe.\n\n", "msg_date": "Mon, 13 Jan 2003 10:46:10 -0800 (PST)", "msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>", "msg_from_op": false, "msg_subject": "Re: Accessing ANALYZE stats" } ]
[ { "msg_contents": " Tomasz,\n\n> What happens to view planning - is it performed \n> during view creation, or rather each time view is quered?\n\nEach time the view is executed. The only savings in running a view over a \nregular query is that the view will have taken care of some reference \nexpansion and JOIN explication during the CREATE process, but not planning. \nAlso, views can actually be slower if the view is complex enough that any \nquery-time parameters cannot be \"pushed down\" into the view.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Mon, 13 Jan 2003 10:44:50 -0800", "msg_from": "Josh Berkus <josh@agliodbs.com>", "msg_from_op": true, "msg_subject": "Re: complicated queries in pl/pgsql" }, { "msg_contents": "Hi\nSometimes my pl/pgsql functions have pretty complicated queries inside - \n most of their execution time takes postgresql query planning.\nI was wondering if I should change these queries into views? Does it \nspeed up function execution? Pl/pgsql saves execution plan for \nconnection lifetime. What happens to view planning - is it performed \nduring view creation, or rather each time view is quered?\n\nRegards,\nTomasz Myrta\n\n", "msg_date": "Tue, 14 Jan 2003 11:54:59 +0100", "msg_from": "Tomasz Myrta <jasiek@klaster.net>", "msg_from_op": false, "msg_subject": "complicated queries in pl/pgsql" }, { "msg_contents": "\nTomasz,\n\n> Thanks a lot.\n> I'm asking, because I use some queries which are easy to change into \n> views. Most of their execution time takes planning, they use 5-10 \n> explicit table joins with not too many rows in these tables and returns \n> few values.\n\nYou might want to investigate the new \"prepared query\" functionality in 7.3.1.\n\nI haven't used it yet, so I can't help much.\n\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Tue, 14 Jan 2003 12:36:15 -0800", "msg_from": "Josh Berkus <josh@agliodbs.com>", "msg_from_op": true, "msg_subject": "Re: complicated queries in pl/pgsql" }, { "msg_contents": "Josh Berkus wrote:\n\n > Tomasz,\n >\n >\n > >What happens to view planning - is it performed\n > >during view creation, or rather each time view is quered?\n >\n >\n > Each time the view is executed. The only savings in running a view \nover a\n > regular query is that the view will have taken care of some reference\n > expansion and JOIN explication during the CREATE process, but not \nplanning.\n > Also, views can actually be slower if the view is complex enough that \nany\n > query-time parameters cannot be \"pushed down\" into the view.\n\nThanks a lot.\nI'm asking, because I use some queries which are easy to change into \nviews. Most of their execution time takes planning, they use 5-10 \nexplicit table joins with not too many rows in these tables and returns \nfew values.\n\nNow I know, that queries inside pl/pgsql functions are better in these \nsituations:\n- complex queries whith deep parameters\n- execution several times during conection lifetime.\n\nCan anyone add something?\n\nTomasz Myrta\n\n", "msg_date": "Wed, 15 Jan 2003 20:12:28 +0100", "msg_from": "Tomasz Myrta <jasiek@klaster.net>", "msg_from_op": false, "msg_subject": "Re: complicated queries in pl/pgsql" } ]
[ { "msg_contents": "Short Version:\n\nI've read the idocs and Notes and Googled a fair amount, honest. :-)\n\nWhat's the most efficient way of determining the number of rows in a\ncursor's result set if you really *DO* need that? (Or, rather, if your\nclient specifically asked for features that require that.)\n\n\nLong Version:\nI'm not finding any way in the docs of asking a cursor how many rows total\nare in the result set, even if I do \"move 1000000 in foo\", knowing a\npriori that 1000000 is far more than could be returned.\n\nOracle docs seem to have a SQL.%ROWCOUNT which gives the answer, provided\none has moved beyond the last row... If I'm reading the Oracle docs\nright...\n\nAnyway. I could find nothing similar in PostgreSQL, even though it seems\nreasonable, even for a Portal, provided one is willing to do the \"move X\"\nfor X sufficiently high -- And, in fact, psql outputs the precise number\nof rows when I do that in the psql monitor, so at some level PostgreSQL\n\"knows\" the answer I want, but I can't get that \"MOVE XX\" output into PHP,\nas far as I can tell. (Can I?)\n\nI suppose I could, in theory, use PHP to fire up psql, but that's not\nexactly going to be efficient, much less pleasant. :-)\n\nUsing PHP, if it matters. I guess it does since maybe other APIs have\nsome way to access that number I want -- psql sure seems to print it out\nwhen one goes over the edge.\n\nGiven that the count(*) queries take just as long as the actual\ndata-retrieval queries, and that some of my queries take too long as it is\n(like, a minute for a 4-term full text search)...\n\nI've written and am about to benchmark a binary search using a bunch of\n\"move X\" \"fetch 1\" \"move backward 1\" \"move backward X\" and then using Ye\nOlde Low/High guessing game algorithm to find the number of rows, but I'm\nhoping for something better from the optimization experts.\n\nSorry this got a bit long, but I wanted to be clear about where I've been\nand gone, rather than leave you guessing. 
:-)\n\nHope I didn't miss some obvious solution/documentation \"out there\"...\n\n\n\n", "msg_date": "Mon, 13 Jan 2003 13:33:19 -0800 (PST)", "msg_from": "<typea@l-i-e.com>", "msg_from_op": true, "msg_subject": "Cursor rowcount" }, { "msg_contents": "<typea@l-i-e.com> writes:\n> I'm not finding any way in the docs of asking a cursor how many rows total\n> are in the result set, even if I do \"move 1000000 in foo\", knowing a\n> priori that 1000000 is far more than could be returned.\n\nregression=# begin;\nBEGIN\nregression=# declare c cursor for select * from int8_tbl;\nDECLARE CURSOR\nregression=# move all in c;\nMOVE 5 <-----------------------\nregression=# end;\nCOMMIT\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 13 Jan 2003 17:22:10 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Cursor rowcount " }, { "msg_contents": "> <typea@l-i-e.com> writes:\n>> I'm not finding any way in the docs of asking a cursor how many rows\n>> total are in the result set, even if I do \"move 1000000 in foo\",\n>> knowing a priori that 1000000 is far more than could be returned.\n>\n> regression=# begin;\n> BEGIN\n> regression=# declare c cursor for select * from int8_tbl;\n> DECLARE CURSOR\n> regression=# move all in c;\n> MOVE 5 <-----------------------\n> regression=# end;\n> COMMIT\n>\n> \t\t\tregards, tom lane\n\nYes, but as noted in my longer version, that number does not seem to \"come\nthrough\" the PHP API.\n\nI've tried calling just about every function I can in the PHP API in a\ntest script, and none of them give me that number.\n\nAt least, none that I can find...\n\n\n\n", "msg_date": "Mon, 13 Jan 2003 17:05:50 -0800 (PST)", "msg_from": "<typea@l-i-e.com>", "msg_from_op": true, "msg_subject": "Re: Cursor rowcount" }, { "msg_contents": "<typea@l-i-e.com> writes:\n> Yes, but as noted in my longer version, that number does not seem to \"come\n> through\" the PHP API.\n\nPerhaps not, but you'd have to ask the PHP folk about it. This question\nsurely doesn't belong on pgsql-performance ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 13 Jan 2003 23:19:40 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Cursor rowcount " }, { "msg_contents": "On Mon, 2003-01-13 at 22:19, Tom Lane wrote:\n> <typea@l-i-e.com> writes:\n> > Yes, but as noted in my longer version, that number does not seem to \"come\n> > through\" the PHP API.\n> \n> Perhaps not, but you'd have to ask the PHP folk about it. This question\n> surely doesn't belong on pgsql-performance ...\n\nWellllll, maybe it does, since the /performance/ of a SELECT COUNT(*)\nfollowed by a cursor certainly is lower than getting the count from\na system variable.\n\nBut still I agree, the PHP list seems more appropriate...\n\n-- \n+------------------------------------------------------------+\n| Ron Johnson, Jr. mailto:ron.l.johnson@cox.net |\n| Jefferson, LA USA http://members.cox.net/ron.l.johnson |\n| |\n| \"Basically, I got on the plane with a bomb. Basically, I |\n| tried to ignite it. Basically, yeah, I intended to damage |\n| the plane.\" |\n| RICHARD REID, who tried to blow up American Airlines |\n| Flight 63 |\n+------------------------------------------------------------+\n\n", "msg_date": "14 Jan 2003 03:12:28 -0600", "msg_from": "Ron Johnson <ron.l.johnson@cox.net>", "msg_from_op": false, "msg_subject": "Re: Cursor rowcount" } ]
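The MOVE ALL trick in Tom's reply can be wrapped up as follows; the table and cursor names are invented, and the cursor must live inside a transaction block:

  BEGIN;
  DECLARE c CURSOR FOR SELECT * FROM some_table;
  MOVE ALL IN c;            -- command tag comes back as 'MOVE n', where n is the total row count
  MOVE BACKWARD ALL IN c;   -- reposition before the first row so normal FETCHes still work
  FETCH 20 FROM c;
  COMMIT;

Whether the count in the MOVE command tag is visible from a client library depends on that library exposing the command status string; if it is not, the fallback is a separate SELECT count(*) with the same WHERE clause.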
[ { "msg_contents": "\nNoah,\n\n> Can someone give me a good description of what the various directories \n> and files actually are. I have RTFMed, but the descriptions there \n> don't seem to match what I have on my machine.\n\nWithin $PGDATA:\n\n/base is all database files unless you use WITH LOCATION\n/pg_clog is the Clog, which keeps a permanent count of transactions\n/pg_xlog is the transaction log (WAL)\n/global are a small number of relations, like pg_database or pg_user, which \nare available in all databases.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Mon, 13 Jan 2003 14:32:47 -0800", "msg_from": "Josh Berkus <josh@agliodbs.com>", "msg_from_op": true, "msg_subject": "Re: Multiple databases" }, { "msg_contents": "Hi,\n\nA quick configuration question for everybody...\n\nWhen we create more than one database with psql, it appears as if \neverything is thrown into the same directory.\nI can understand having all the tables and indexes for a given database \nin the same directory, but multiple databases?\n\nDo we need to configure something differently, or is this just how \npostgres works?\n\nThanks,\n\n-Noah\n\n", "msg_date": "Tue, 14 Jan 2003 15:51:16 -0500", "msg_from": "Noah Silverman <noah@allresearch.com>", "msg_from_op": false, "msg_subject": "Multiple databases" }, { "msg_contents": "> Do we need to configure something differently, or is this just how \n> postgres works?\n\nThats just how it works. Under 'base' there are a number of numbered\ndirectories which represent various databases.\n\nIf you really want, take a look at the \"WITH LOCATION\" option for create\ndatabase.\n\n\nCommand: CREATE DATABASE\nDescription: create a new database\nSyntax:\nCREATE DATABASE name\n [ [ WITH ] [ OWNER [=] dbowner ]\n [ LOCATION [=] 'dbpath' ]\n [ TEMPLATE [=] template ]\n [ ENCODING [=] encoding ] ]\n\n> Thanks,\n> \n> -Noah\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n-- \nRod Taylor <rbt@rbt.ca>\n\nPGP Key: http://www.rbt.ca/rbtpub.asc", "msg_date": "14 Jan 2003 15:54:54 -0500", "msg_from": "Rod Taylor <rbt@rbt.ca>", "msg_from_op": false, "msg_subject": "Re: Multiple databases" }, { "msg_contents": "Thanks,\n\nCan someone give me a good description of what the various directories \nand files actually are. I have RTFMed, but the descriptions there \ndon't seem to match what I have on my machine.\n\nThanks.\n\n\nOn Tuesday, January 14, 2003, at 03:54 PM, Rod Taylor wrote:\n\n>> Do we need to configure something differently, or is this just how\n>> postgres works?\n>\n> Thats just how it works. 
Under 'base' there are a number of numbered\n> directories which represent various databases.\n>\n> If you really want, take a look at the \"WITH LOCATION\" option for \n> create\n> database.\n>\n>\n> Command: CREATE DATABASE\n> Description: create a new database\n> Syntax:\n> CREATE DATABASE name\n> [ [ WITH ] [ OWNER [=] dbowner ]\n> [ LOCATION [=] 'dbpath' ]\n> [ TEMPLATE [=] template ]\n> [ ENCODING [=] encoding ] ]\n>\n>> Thanks,\n>>\n>> -Noah\n>>\n>>\n>> ---------------------------(end of \n>> broadcast)---------------------------\n>> TIP 3: if posting/reading through Usenet, please send an appropriate\n>> subscribe-nomail command to majordomo@postgresql.org so that your\n>> message can get through to the mailing list cleanly\n> -- \n> Rod Taylor <rbt@rbt.ca>\n>\n> PGP Key: http://www.rbt.ca/rbtpub.asc\n> <signature.asc>\n\n", "msg_date": "Tue, 14 Jan 2003 15:58:03 -0500", "msg_from": "Noah Silverman <noah@allresearch.com>", "msg_from_op": false, "msg_subject": "Re: Multiple databases" } ]
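To see which numbered directory under base/ belongs to which database, match on the database OID; the OID is a system column on pg_database and can be selected explicitly:

  SELECT oid, datname FROM pg_database ORDER BY datname;

Each oid maps to a directory $PGDATA/base/<oid>, and the numeric files inside it are that database's tables and indexes (pg_class.relfilenode holds the per-relation file name).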
[ { "msg_contents": "Hello,\n\nI'm having some serious performance issues with PostgreSQL on\nour newish SunFire 280R (1 900MHz ultrasparc III, 1 GB RAM). \nIt's painfully slow. It took me almost a week of tuning to get\nit in the range of an old Mac G3 laptop. Now, a few days later,\nafter tweaking every nearly every parameter (only noting\ndecreased performance on some) in /etc/system and\n$PGDATA/postgresql.conf it's about as fast as I can make it, but\nstill horribly slow. A few simple queries that take 1.5-7\nminutes on the G3 take 1-1.5 minutes on the Sun. A bulk load of\nroughly 2.4 GB database dump takes ~1 hour on each machine. It\ntook almost 2 hours on the Sun before I turned off fsync.\n\nWe have plans to add another CPU, RAM and another disk, which\nshould all help, but in its current state, I (and many others)\nwould think that it should run circles around the G3. I'm\nthinking that I'm missing something big and obvious because this\ncan't be right. Otherwise we might as well just get a bunch of\nibooks to run our databases - they're a lot smaller and much\nmore quiet.\n\nCan someone please point me in the right direction?\n\nThanks,\n\n-X\n\n__________________________________________________\nDo you Yahoo!?\nYahoo! Mail Plus - Powerful. Affordable. Sign up now.\nhttp://mailplus.yahoo.com\n", "msg_date": "Tue, 14 Jan 2003 07:00:08 -0800 (PST)", "msg_from": "CaptainX0r <captainx0r@yahoo.com>", "msg_from_op": true, "msg_subject": "Sun vs. Mac" }, { "msg_contents": "Hi CaptainXOr,\n\nWhich version of PostgreSQL, and which release of Solaris are you running?\n\nRegards and best wishes,\n\nJustin Clift\n\n\nCaptainX0r wrote:\n> Hello,\n> \n> I'm having some serious performance issues with PostgreSQL on\n> our newish SunFire 280R (1 900MHz ultrasparc III, 1 GB RAM). \n> It's painfully slow. It took me almost a week of tuning to get\n> it in the range of an old Mac G3 laptop. Now, a few days later,\n> after tweaking every nearly every parameter (only noting\n> decreased performance on some) in /etc/system and\n> $PGDATA/postgresql.conf it's about as fast as I can make it, but\n> still horribly slow. A few simple queries that take 1.5-7\n> minutes on the G3 take 1-1.5 minutes on the Sun. A bulk load of\n> roughly 2.4 GB database dump takes ~1 hour on each machine. It\n> took almost 2 hours on the Sun before I turned off fsync.\n> \n> We have plans to add another CPU, RAM and another disk, which\n> should all help, but in its current state, I (and many others)\n> would think that it should run circles around the G3. I'm\n> thinking that I'm missing something big and obvious because this\n> can't be right. Otherwise we might as well just get a bunch of\n> ibooks to run our databases - they're a lot smaller and much\n> more quiet.\n> \n> Can someone please point me in the right direction?\n> \n> Thanks,\n> \n> -X\n> \n> __________________________________________________\n> Do you Yahoo!?\n> Yahoo! Mail Plus - Powerful. Affordable. Sign up now.\n> http://mailplus.yahoo.com\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. 
He told me to try to be in the\nfirst group; there was less competition there.\"\n- Indira Gandhi\n\n", "msg_date": "Wed, 15 Jan 2003 01:39:26 +1030", "msg_from": "Justin Clift <justin@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: Sun vs. Mac" }, { "msg_contents": "On Tue, Jan 14, 2003 at 07:00:08AM -0800, CaptainX0r wrote:\n> Hello,\n> \n> I'm having some serious performance issues with PostgreSQL on\n> our newish SunFire 280R (1 900MHz ultrasparc III, 1 GB RAM). \n> It's painfully slow. It took me almost a week of tuning to get\n> it in the range of an old Mac G3 laptop. Now, a few days later,\n> after tweaking every nearly every parameter (only noting\n> decreased performance on some) in /etc/system and\n> $PGDATA/postgresql.conf it's about as fast as I can make it, but\n\nYou should tell us about what version of Solaris you're running, what\nversion of Postgres, and what options you have used. Did you split\nthe WAL onto its own filesystem? You'll get a big win that way. \nAlso, what fsync setting are you using (open_datasync is the fastest\nin my experience). Finally, the bottleneck on Solaris is both disk\nand process forking (fork() is notoriously slow on Solaris).\n\nAlso, certain sort routines are abysmal. Replace the\nSolaris-provided qsort().\n\nI have to say, however, that my experience indicates that Solaris is\nslower that the competition for Postgres. It still shouldn't be that\nbad.\n\nA\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<andrew@libertyrms.info> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Tue, 14 Jan 2003 10:10:54 -0500", "msg_from": "Andrew Sullivan <andrew@libertyrms.info>", "msg_from_op": false, "msg_subject": "Re: Sun vs. Mac" }, { "msg_contents": "Sorry - I meant to include that. I'm running PG 7.3.1 on\nSolaris 8.\n\nThanks,\n\n-X\n\n--- Justin Clift <justin@postgresql.org> wrote:\n> \n> Which version of PostgreSQL, and which release of Solaris are\n> you running?\n> \n> > \n> > I'm having some serious performance issues with PostgreSQL\n> on\n> > our newish SunFire 280R (1 900MHz ultrasparc III, 1 GB RAM).\n> \n> > It's painfully slow. It took me almost a week of tuning to\n> get\n> > it in the range of an old Mac G3 laptop. Now, a few days\n> later,\n> > after tweaking every nearly every parameter (only noting\n> > decreased performance on some) in /etc/system and\n> > $PGDATA/postgresql.conf it's about as fast as I can make it,\n> but\n> > still horribly slow. A few simple queries that take 1.5-7\n> > minutes on the G3 take 1-1.5 minutes on the Sun. A bulk\n> load of\n> > roughly 2.4 GB database dump takes ~1 hour on each machine. \n> It\n> > took almost 2 hours on the Sun before I turned off fsync.\n> > \n> > We have plans to add another CPU, RAM and another disk,\n> which\n> > should all help, but in its current state, I (and many\n> others)\n> > would think that it should run circles around the G3. I'm\n> > thinking that I'm missing something big and obvious because\n> this\n> > can't be right. Otherwise we might as well just get a bunch\n> of\n> > ibooks to run our databases - they're a lot smaller and much\n> > more quiet.\n> > \n> > Can someone please point me in the right direction?\n> > \n> > Thanks,\n> > \n> > -X\n> > \n\n\n__________________________________________________\nDo you Yahoo!?\nYahoo! Mail Plus - Powerful. Affordable. 
Sign up now.\nhttp://mailplus.yahoo.com\n", "msg_date": "Tue, 14 Jan 2003 07:18:23 -0800 (PST)", "msg_from": "CaptainX0r <captainx0r@yahoo.com>", "msg_from_op": true, "msg_subject": "Re: Sun vs. Mac" }, { "msg_contents": "All,\n\n> You should tell us about what version of Solaris you're\n> running, what\n> version of Postgres, and what options you have used. \n\nYou're right, sorry. PG 7.3.1 on Solaris 8. I've got the\ndefault recommended /etc/system but with shmmax cranked way up\nwhich seems to have helped. I don't have the system in front of\nme (and it's down, so I can't get to it), but from memory\nmax_connections was increased to 64, shared_buffers up to 65536,\nsort_mem and vacuum_mem were doubled, and I think that's it. I\nchanged every seemingly relevant one, and spent a lot of time on\nthe *cost section trying various factors of n*10 on each, with\nno joy.\n\n> Did you split\n> the WAL onto its own filesystem? You'll get a big win that\n> way. \n\nI have not. What exactly do you by \"own filesystem\"? Another\nfilesystem? I was planning on putting pg_xlog on the OS disk\nand moving $PGDATA off to a second disk. \n\n> Also, what fsync setting are you using (open_datasync is the\n> fastest in my experience). \n\nI've read that somewhere (maybe in the archives?) and I got no\nchange with any of them. But now I'm thinking back - do I need\nfsync=true for that to have an affect? I'm not worried about\nthe cons of having fsync=false at all - and I'm assuming that\nshould be better than true and open_datasync. Or am I confusing\nthings?\n\n> Also, certain sort routines are abysmal. Replace the\n> Solaris-provided qsort().\n\nI've read about this as well - but haven't even gotten that far\non the testing/configuring yet.\n \n> I have to say, however, that my experience indicates that\n> Solaris is\n> slower that the competition for Postgres. It still shouldn't\n> be that bad.\n\nI agree completely.\n\nThanks for your input,\n\n-X\n\n__________________________________________________\nDo you Yahoo!?\nYahoo! Mail Plus - Powerful. Affordable. Sign up now.\nhttp://mailplus.yahoo.com\n", "msg_date": "Tue, 14 Jan 2003 07:41:21 -0800 (PST)", "msg_from": "CaptainX0r <captainx0r@yahoo.com>", "msg_from_op": true, "msg_subject": "Re: Sun vs. Mac" }, { "msg_contents": "CaptainX0r <captainx0r@yahoo.com> writes:\n> I've read that somewhere (maybe in the archives?) and I got no\n> change with any of them. But now I'm thinking back - do I need\n> fsync=true for that to have an affect? I'm not worried about\n> the cons of having fsync=false at all - and I'm assuming that\n> should be better than true and open_datasync. \n\nYou are right that fsync_method is a no-op if you've got fsync turned\noff.\n\nLet me get this straight: the Sun is slower even with fsync off? That\nshoots down the first theory that I had, which was that the Sun's disk\ndrives were actually honoring fsync while the laptop's drive does not.\n(See archives for more discussion of that, but briefly: IDE drives are\ncommonly set up to claim write complete as soon as they've absorbed\ndata into their onboard buffers. SCSI drives usually tell the truth\nabout when they've completed a write.)\n\nAndrew Sullivan's nearby recommendation to replace qsort() is a good\none, but PG 7.3 is already configured to do that by default. 
(Look in\nsrc/Makefile.global to confirm that qsort.o is mentioned in LIBOBJS.)\n\nI'd suggest starting with some elementary measurements, for example\nlooking at I/O rates and CPU idle percentage while running the same\ntask on both Solaris and G3. That would at least give us a clue whether\nI/O or CPU is the bottleneck.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 14 Jan 2003 11:04:52 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Sun vs. Mac " }, { "msg_contents": "On Tue, Jan 14, 2003 at 07:41:21AM -0800, CaptainX0r wrote:\n> \n> You're right, sorry. PG 7.3.1 on Solaris 8. I've got the\n> default recommended /etc/system but with shmmax cranked way up\n\nOk, I have no experience with 7.3.1 in a production setting - we're\nusing 7.2. But here are some things.\n\n> which seems to have helped. I don't have the system in front of\n> me (and it's down, so I can't get to it), but from memory\n> max_connections was increased to 64, shared_buffers up to 65536,\n> sort_mem and vacuum_mem were doubled, and I think that's it. I\n> changed every seemingly relevant one, and spent a lot of time on\n\nYou'll need to increase the number of available semaphores more than\nlikely, if you add any connections. You do indeed need to fix\nshmmax, but if the postmaster starts, you're fine.\n\nI would worry slightly about sort_mem. I have managed to make\nSolaris boxes with _lots_ of memory start swapping by setting that\ntoo high (while experimenting). Look for problems in your I/O.\n\n> the *cost section trying various factors of n*10 on each, with\n> no joy.\n\nThese are fine-tuning knobs. You have a different problem :)\n\n> > Did you split\n> > the WAL onto its own filesystem? You'll get a big win that\n> > way. \n> \n> I have not. What exactly do you by \"own filesystem\"? Another\n> filesystem? I was planning on putting pg_xlog on the OS disk\n> and moving $PGDATA off to a second disk. \n\nThat's what you need. Without any doubt at all. The xlog on the\nsame UFS filesystem (and disk) as the rest of $PGDATA is a nightmare.\n\nInterestingly, by the way, there is practically _no difference_ if\nyou do this with an A5200 managed by Veritas. I have tried dozens of\nthings. It never matters. The array is too fast.\n\n> > Also, what fsync setting are you using (open_datasync is the\n> > fastest in my experience). \n> \n> I've read that somewhere (maybe in the archives?) and I got no\n> change with any of them. But now I'm thinking back - do I need\n> fsync=true for that to have an affect? I'm not worried about\n> the cons of having fsync=false at all - and I'm assuming that\n> should be better than true and open_datasync. Or am I confusing\n> things?\n\nYes, if you change the fsync method but have fsync turned off, it\nwill make no difference.\n\n> > Also, certain sort routines are abysmal. Replace the\n> > Solaris-provided qsort().\n> \n> I've read about this as well - but haven't even gotten that far\n> on the testing/configuring yet.\n\nIf you're doing any sorting that is not by an index, forget about it. \nChange it now. It's something like a multiple of 40 slower.\n\nA\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<andrew@libertyrms.info> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Tue, 14 Jan 2003 11:08:52 -0500", "msg_from": "Andrew Sullivan <andrew@libertyrms.info>", "msg_from_op": false, "msg_subject": "Re: Sun vs. 
Mac" }, { "msg_contents": "CaptainX0r writes:\n > I'm having some serious performance issues with PostgreSQL on\n > our newish SunFire 280R (1 900MHz ultrasparc III, 1 GB RAM). \n > It's painfully slow. \n\nWhat has PostgreSQL been compiled by? Personal past experience has\nshown the Sun Workshop C compiler to result in much better performance\ncompared to GCC...\n\nL.\n", "msg_date": "Tue, 14 Jan 2003 16:18:57 +0000", "msg_from": "Lee Kindness <lkindness@csl.co.uk>", "msg_from_op": false, "msg_subject": "Sun vs. Mac" }, { "msg_contents": "\n--- Lee Kindness <lkindness@csl.co.uk> wrote:\n> CaptainX0r writes:\n>> I'm having some serious performance issues with PostgreSQL on\n>> our newish SunFire 280R (1 900MHz ultrasparc III, 1 GB RAM). \n>> It's painfully slow. \n>\n> What has PostgreSQL been compiled by? Personal past experience\n> has\n> shown the Sun Workshop C compiler to result in much better\n> performance\n> compared to GCC...\n\nI used gcc - mostly because I have in the past, but also because\nI've read that it is \"the one to use\". Am I wrong on this one? \nI'm certainly willing to try the one from Sun Workshop.\n\nThanks for the input,\n\n-X\n\n__________________________________________________\nDo you Yahoo!?\nYahoo! Mail Plus - Powerful. Affordable. Sign up now.\nhttp://mailplus.yahoo.com\n", "msg_date": "Tue, 14 Jan 2003 08:29:57 -0800 (PST)", "msg_from": "CaptainX0r <captainx0r@yahoo.com>", "msg_from_op": true, "msg_subject": "Re: Sun vs. Mac" }, { "msg_contents": "> You'll need to increase the number of available semaphores\n> more than\n> likely, if you add any connections. You do indeed need to fix\n> shmmax, but if the postmaster starts, you're fine.\n\nThanks, I'll take a closer look at this.\n \n> That's what you need. Without any doubt at all. The xlog on\n> the same UFS filesystem (and disk) as the rest of $PGDATA is a\n> nightmare.\n\nThe disks are on order - but that can't be the only thing hold\nit up, can it? I've got to check out the IO, as you suggest.\n \n> > > Also, certain sort routines are abysmal. Replace the\n> > > Solaris-provided qsort().\n> > \n> > I've read about this as well - but haven't even gotten that\n> > far on the testing/configuring yet.\n> \n> If you're doing any sorting that is not by an index, forget\n> about it. \n> Change it now. It's something like a multiple of 40 slower.\n\nI double checked, and this was one of the reasons I was glad to\ntry 7.3 on Solaris - it's already got it built in.\n\nThanks much for the input,\n\n-X\n\n__________________________________________________\nDo you Yahoo!?\nYahoo! Mail Plus - Powerful. Affordable. Sign up now.\nhttp://mailplus.yahoo.com\n", "msg_date": "Tue, 14 Jan 2003 08:38:25 -0800 (PST)", "msg_from": "CaptainX0r <captainx0r@yahoo.com>", "msg_from_op": true, "msg_subject": "Re: Sun vs. Mac" }, { "msg_contents": "All,\n\n> Let me get this straight: the Sun is slower even with fsync\n> off? That\n\nCorrect. This really helped a lot, especially with the dump\nload, but I've clearly got some more work ahead of me.\n\n> Andrew Sullivan's nearby recommendation to replace qsort() is\n> a good\n> one, but PG 7.3 is already configured to do that by default. \n> (Look in\n> src/Makefile.global to confirm that qsort.o is mentioned in\n> LIBOBJS.)\n\nThanks for confirming. I've got LIBOBJS = isinf.o qsort.o\n\n> I'd suggest starting with some elementary measurements, for\n> example looking at I/O rates and CPU idle percentage while \n> running the same task on both Solaris and G3. 
That would at\n> least give us a clue whether I/O or CPU is the bottleneck.\n\nGood thoughts - I'm working on it right now, though I'm not\nreally sure how to check I/O rates....\n\nThanks much for the input,\n\n-X\n\n\n__________________________________________________\nDo you Yahoo!?\nYahoo! Mail Plus - Powerful. Affordable. Sign up now.\nhttp://mailplus.yahoo.com\n", "msg_date": "Tue, 14 Jan 2003 08:54:33 -0800 (PST)", "msg_from": "CaptainX0r <captainx0r@yahoo.com>", "msg_from_op": true, "msg_subject": "Re: Sun vs. Mac " }, { "msg_contents": "> I'd suggest starting with some elementary measurements, for\n> example looking at I/O rates and CPU idle percentage while \n> running the same task on both Solaris and G3. That would at \n> least give us a clue whether I/O or CPU is the bottleneck.\n\nWell, I've got the Sun box now, but I don't really have acces to\nthe G3. FWIW, top shows postgres slowly taking up all the CPU -\nover the course of a minute or so it gradually ramps up to\naround 90%. Once the query is complete, however, top shows the\nCPU ramping down slowly, ~1-2% per second over the next 2\nminutes which I find very strange. The CPU idle is 0% for the\nduration of the query, while the user state is around 100% for\nthe same period. This kind of makes me think top is wrong (100%\nidle and 75% postgres?)\n\niostat gives: (sorry for line wrap).\n\n# iostat -DcxnzP\n cpu\n us sy wt id\n 10 1 4 85\n extended device statistics \n r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b\ndevice\n 11.2 1.0 65.5 13.1 0.1 0.1 9.5 9.6 0 3\nc1t0d0s0\n 0.0 0.0 0.0 0.1 0.0 0.0 0.0 6.0 0 0\nc1t0d0s1\n 7.3 0.1 502.3 0.5 0.0 0.0 0.0 2.3 0 1\nc1t0d0s3\n 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.1 0 0\nhost:vold(pid313)\n\n\nThis doesn't really tell me much, except I'm guessing that PG is\nCPU bound?\n\n-X\n\n__________________________________________________\nDo you Yahoo!?\nYahoo! Mail Plus - Powerful. Affordable. Sign up now.\nhttp://mailplus.yahoo.com\n", "msg_date": "Tue, 14 Jan 2003 09:50:04 -0800 (PST)", "msg_from": "CaptainX0r <captainx0r@yahoo.com>", "msg_from_op": true, "msg_subject": "Re: Sun vs. Mac " }, { "msg_contents": "On Tue, Jan 14, 2003 at 09:50:04AM -0800, CaptainX0r wrote:\n> the G3. FWIW, top shows postgres slowly taking up all the CPU -\n\nSon't use top on Solaris 8. It's inaccurate, and it affects the\nresults itself. Use prstat instead.\n\n> This doesn't really tell me much, except I'm guessing that PG is\n> CPU bound?\n\nIt looks that way. I've had iostat show CPU-bound, however, when the\nproblem actually turned out to be memory contention. I think you may\nwant to have a poke with vmstat, and also have a look at the SE\ntoolkit.\n\nA\n\n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<andrew@libertyrms.info> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Tue, 14 Jan 2003 12:58:48 -0500", "msg_from": "Andrew Sullivan <andrew@libertyrms.info>", "msg_from_op": false, "msg_subject": "Re: Sun vs. 
Mac" }, { "msg_contents": "> could you post your $PGDATA/postgresql.conf for our viewing\n> pleasure ?\n\nmax_connections = 64\nshared_buffers = 65536 # 1/2 total RAM /8K\nsort_mem = 100000 # min 64, size in KB\ncheckpoint_timeout = 300 # range 30-3600, in seconds\nfsync = false\neffective_cache_size = 65536 # typically 8KB each\nlog_timestamp = true\nnotice, warning, error\nstats_command_string = true\nstats_row_level = true\nstats_block_level = true\nLC_MESSAGES = 'C'\nLC_MONETARY = 'C'\nLC_NUMERIC = 'C'\nLC_TIME = 'C'\n\nI've stripped out the default lines (grep -v ^#) comments and\nblank lines.\n\n> Another CPU will probably not help with bulk loads or other\n> single-user stuff.\n> \n[snip]\n> \n> For single-user tasks you will probably be better off by\n> getting a gray box with Athlon 2600+ with 3 Gigs of memory and\n\n> IDE disks and running Linux or *BSD .\n\nHannu brings up a good point - one that was debated before my\nattempts at making Solaris faster. If you were going to make a\nfast postgres server what would you use? Assuming you could\nafford a SunFire 280R (~$8k?), would that money be better spent\non a (say) Dell server running (say) linux? We're doing light\nmultiuser (I guess effectively single user) but at some point\n(years) this may grow considereably. I'm not particular to\nMacs, but I've got to say, that stock out the box, postgres\nloves it. That old G3 was faster than the Sun, and still is\nfaster than my (years newer) linux laptop (on which I've done no\nperformance tweaking). So maybe a dual G4 Xserver would scream?\n\nAny suggestions? It's still not too late for us to change our\nminds on this one.\n\nThanks much,\n\n-X\n\n\n__________________________________________________\nDo you Yahoo!?\nYahoo! Mail Plus - Powerful. Affordable. Sign up now.\nhttp://mailplus.yahoo.com\n", "msg_date": "Tue, 14 Jan 2003 10:10:54 -0800 (PST)", "msg_from": "CaptainX0r <captainx0r@yahoo.com>", "msg_from_op": true, "msg_subject": "Re: Sun vs. Mac - best Postgres platform?" }, { "msg_contents": "CaptainX0r <captainx0r@yahoo.com> writes:\n> Well, I've got the Sun box now, but I don't really have acces to\n> the G3. FWIW, top shows postgres slowly taking up all the CPU -\n> over the course of a minute or so it gradually ramps up to\n> around 90%. Once the query is complete, however, top shows the\n> CPU ramping down slowly, ~1-2% per second over the next 2\n> minutes which I find very strange.\n\nI believe top's percent-of-CPU numbers for individual processes are time\naverages over a minute or so, so the ramping effect is unsurprising.\n\n> This doesn't really tell me much, except I'm guessing that PG is\n> CPU bound?\n\nYup, that seems pretty clear. Next step is to find out what the heck\nit's doing. My instinct would be to use gprof. Recompile with\nprofiling enabled --- if you're using gcc, this should work\n\tcd postgres-distribution/src/backend\n\tmake clean\n\tmake PROFILE=-pg all\n\tmake install-bin\t-- may need to stop postmaster before install\nNext run some sample queries (put them all into one session). After\nquitting the session, find gmon.out in the $PGDATA/base/nnn/\nsubdirectory corresponding to your database, and feed it to gprof.\nThe results should show where the code hotspot is.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 14 Jan 2003 13:15:54 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Sun vs. 
Mac " }, { "msg_contents": "On Tue, 14 Jan 2003, CaptainX0r wrote:\n\n> Hello,\n> \n> I'm having some serious performance issues with PostgreSQL on\n> our newish SunFire 280R (1 900MHz ultrasparc III, 1 GB RAM). \n> It's painfully slow. It took me almost a week of tuning to get\n> it in the range of an old Mac G3 laptop. Now, a few days later,\n> after tweaking every nearly every parameter (only noting\n> decreased performance on some) in /etc/system and\n> $PGDATA/postgresql.conf it's about as fast as I can make it, but\n> still horribly slow. A few simple queries that take 1.5-7\n> minutes on the G3 take 1-1.5 minutes on the Sun. A bulk load of\n> roughly 2.4 GB database dump takes ~1 hour on each machine. It\n> took almost 2 hours on the Sun before I turned off fsync.\n\nJust for giggles, do you have a spare drive or something you can try \nloading debian or some other Sparc compatible linux distro and get some \nnumbers? My experience has been that on the same basic hardware, Linux \nruns postgresql about twice as fast as Solaris, and no amount of tweaking \nseemed to ever get postgresql up to the same performance on Solaris. It's \nso bad a Sparc 20 with 256 Meg ram and a 50 MHz 32 bit CPU running linux \nwas outrunning our Sun Ultra 1 with 512 Meg ram and a 150 MHz 64 bit CPU \nby about 50%. That was with the 2.0.x kernel for linux and Solaris 7 on \nthe Ultra I believe. Could have been older on the Solaris version, as I \nwasn't the SA on that box.\n\n", "msg_date": "Tue, 14 Jan 2003 11:26:09 -0700 (MST)", "msg_from": "\"scott.marlowe\" <scott.marlowe@ihs.com>", "msg_from_op": false, "msg_subject": "Re: Sun vs. Mac" }, { "msg_contents": "\n--- Andrew Sullivan <andrew@libertyrms.info> wrote:\n> On Tue, Jan 14, 2003 at 09:50:04AM -0800, CaptainX0r wrote:\n> > the G3. FWIW, top shows postgres slowly taking up all the\n> CPU -\n> \n> Son't use top on Solaris 8. It's inaccurate, and it affects\n> the results itself. Use prstat instead.\n\nThanks for the tip. Unfortunately it shows the same exact\nthing.\n\n> > This doesn't really tell me much, except I'm guessing that\n> PG is\n> > CPU bound?\n> \n> It looks that way. I've had iostat show CPU-bound, however,\n> when the problem actually turned out to be memory contention. \n\n> I think you may want to have a poke with vmstat, and also have\n\n> a look at the SE toolkit.\n\nI'll have a look at the SE toolkit - thanks.\n\nvmstat shows me this:\n# vmstat -s\n 0 swap ins\n 0 swap outs\n 0 pages swapped in\n 0 pages swapped out\n 125452 total address trans. 
faults taken\n 35245 page ins\n 60 page outs\n 194353 pages paged in\n 229 pages paged out\n 184621 total reclaims\n 184563 reclaims from free list\n 0 micro (hat) faults\n 125452 minor (as) faults\n 31764 major faults\n 10769 copy-on-write faults\n 80220 zero fill page faults\n 0 pages examined by the clock daemon\n 0 revolutions of the clock hand\n 170 pages freed by the clock daemon\n 601 forks\n 19 vforks\n 577 execs\n 370612 cpu context switches\n 1288431 device interrupts\n 148288 traps\n 1222653 system calls\n 294090 total name lookups (cache hits 48%)\n 43510 user cpu\n 4002 system cpu\n 480912 idle cpu\n 13805 wait cpu\n\n procs memory page disk \nfaults cpu\n r b w swap free re mf pi po fr de sr s6 sd -- -- in sy\n cs us sy id\n 0 0 0 815496 538976 31 21 261 0 0 0 0 0 12 0 0 136 209\n 65 7 1 92\n\nI've not much experience with this, it looks like there are\nconsiderably more page ins than outs as compared to our other\nsolaris boxen but otherwise pretty normal.\n\n-X\n\n__________________________________________________\nDo you Yahoo!?\nYahoo! Mail Plus - Powerful. Affordable. Sign up now.\nhttp://mailplus.yahoo.com\n", "msg_date": "Tue, 14 Jan 2003 10:27:36 -0800 (PST)", "msg_from": "CaptainX0r <captainx0r@yahoo.com>", "msg_from_op": true, "msg_subject": "Re: Sun vs. Mac" }, { "msg_contents": "On Tue, Jan 14, 2003 at 10:10:54AM -0800, CaptainX0r wrote:\n> > could you post your $PGDATA/postgresql.conf for our viewing\n> > pleasure ?\n> \n> max_connections = 64\n> shared_buffers = 65536 # 1/2 total RAM /8K\n> sort_mem = 100000 # min 64, size in KB\n ^^^^^^\nThere's your problem. Don't set that anywhere near that high. \nIf you run 2 queries that require sorting, _each sort_ can use up to\n100000 K. Which can chew up all your memory pretty fast.\n\n> effective_cache_size = 65536 # typically 8KB each\n\nWhat basis did you have to change this? Have you done work figuring\nout how big the kernel's disk cache is regularly on that system?\n\n> Hannu brings up a good point - one that was debated before my\n> attempts at making Solaris faster. If you were going to make a\n> fast postgres server what would you use? Assuming you could\n> afford a SunFire 280R (~$8k?), would that money be better spent\n> on a (say) Dell server running (say) linux? We're doing light\n\nI've been finding FreeBSD way faster than Linux. But yes.\n\nA\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<andrew@libertyrms.info> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Tue, 14 Jan 2003 13:30:46 -0500", "msg_from": "Andrew Sullivan <andrew@libertyrms.info>", "msg_from_op": false, "msg_subject": "Re: Sun vs. Mac - best Postgres platform?" }, { "msg_contents": "\n> > For single-user tasks you will probably be better off by\n> > getting a gray box with Athlon 2600+ with 3 Gigs of memory and\n> \n> > IDE disks and running Linux or *BSD .\n> \n\n> I'm not particular to\n> Macs, but I've got to say, that stock out the box, postgres\n> loves it. That old G3 was faster than the Sun, and still is\n> faster than my (years newer) linux laptop (on which I've done no\n> performance tweaking). So maybe a dual G4 Xserver would scream?\n> \n> Any suggestions? It's still not too late for us to change our\n> minds on this one.\n\nI can't recommend macs for either brute force speed or price/performance. 
\n\nMy current flock of machines are mostly OSX g4 boxes (single 400s and dual 800), with a couple of linux boxen thrown in for good measure.\n\nThe mac's biggest issues are: \n\n1) Tweakability - you've got one file system, and it doesn't really do useful mount options like noatime. \n2) There are bugs in mount and traversing symlinks that make it hard to move pg_xlog onto another file system and retain performance (in 10.1.5, I don't have test hardware with enough drives to test 10.2)\n3) vm_stat gives vm status, iostat gives nothing. Looks like this is working on my 10.2 laptop, but it's annoyed me for a while on 10.1.5\n4) SW raid is not that much of a help for speed.\n5) Bus bandwidth is a good factor behind x86 linux boxen. (DDR ram isn't really taken advantage of in current designs)\n\nHaving said that, I'm getting reasonable performance out of all the macs, in fact, I'm getting reasonably similar performance out of all of them desplite the 4x difference in processor power. And that's because they basically have the same low end disk system. \n\nI'm piecing together something that I hope will be faster out of a x86 box loaded with more drives in a sw raid mirroring setup. (tentative is 1x system+logs, mirror for pg_xlog, mirror for pg data)\n\nI'm planning on running some comparative benchmarks prior to going live, so I should be able to tell how much faster it is.\n\neric\n\n\n\n", "msg_date": "Tue, 14 Jan 2003 10:39:05 -0800", "msg_from": "eric soroos <eric-psql@soroos.net>", "msg_from_op": false, "msg_subject": "Re: Sun vs. Mac - best Postgres platform?" }, { "msg_contents": "Hi,\n \n> Just for giggles, do you have a spare drive or something you\n> can try \n> loading debian or some other Sparc compatible linux distro and\n> get some \n> numbers? My experience has been that on the same basic\n\nThis was on the todo list, \"just to see\", but I'm not sure how\nmuch time we want to spend trying a myriad of options when\nconcentrating on one should (maybe?) do the trick.\n\n> hardware, Linux runs postgresql about twice as fast as \n> Solaris, and no amount of tweaking seemed to ever get \n> postgresql up to the same performance on Solaris. It's \n> so bad a Sparc 20 with 256 Meg ram and a 50 MHz 32 bit CPU\n> running linux was outrunning our Sun Ultra 1 with 512 Meg ram\n\n> and a 150 MHz 64 bit CPU by about 50%. That was with the \n> 2.0.x kernel for linux and\n\nThis is not encouraging..... We may be revisiting the linux\noption.\n\nThanks much for the input,\n\n-X\n\n__________________________________________________\nDo you Yahoo!?\nYahoo! Mail Plus - Powerful. Affordable. Sign up now.\nhttp://mailplus.yahoo.com\n", "msg_date": "Tue, 14 Jan 2003 10:41:06 -0800 (PST)", "msg_from": "CaptainX0r <captainx0r@yahoo.com>", "msg_from_op": true, "msg_subject": "Re: Sun vs. Mac" }, { "msg_contents": "\n--- Andrew Sullivan <andrew@libertyrms.info> wrote:\n> On Tue, Jan 14, 2003 at 10:10:54AM -0800, CaptainX0r wrote:\n> > > could you post your $PGDATA/postgresql.conf for our\n> viewing\n> > > pleasure ?\n> > \n> > max_connections = 64\n> > shared_buffers = 65536 # 1/2 total RAM /8K\n> > sort_mem = 100000 # min 64, size in KB\n> ^^^^^^\n> There's your problem. Don't set that anywhere near that high.\n> If you run 2 queries that require sorting, _each sort_ can use\n> up to\n> 100000 K. Which can chew up all your memory pretty fast.\n\nI changed back to the default 1024, and down to the minimum, 64\n- no change. 
I think that was changed simultaneously with some\nother parameter (bad, I know) that actually had an affect. I\nguess I can remove it.\n \n> > effective_cache_size = 65536 # typically 8KB each\n> \n> What basis did you have to change this? Have you done work\n> figuring out how big the kernel's disk cache is regularly on \n> that system?\n\nI read somewhere that this should be set to half the system RAM\nsize, 64k*8k=512m = 1/2 of the 1 Gig RAM. I guess this is way\noff since you're saying that it's disk cache. This agrees with\nthe documentation. I can't really rely on the (precious little\nSolaris postgres) info I find on the net.... ;)\n\nUnfortunately, setting back to 1000 doesn't appear to help.\n \n> > Hannu brings up a good point - one that was debated before\n> > my attempts at making Solaris faster. If you were going to\n> make a\n> > fast postgres server what would you use? Assuming you could\n> > afford a SunFire 280R (~$8k?), would that money be better\n> > spent on a (say) Dell server running (say) linux? We're \n> > doing light\n> \n> I've been finding FreeBSD way faster than Linux. But yes.\n\nI like to hear this since I'm a big FreeBSD fan. So far I think\nI've understood this as: FreeBSD > Linux > OSX > Solaris.\n\nThanks much for the input,\n\n-X\n\n__________________________________________________\nDo you Yahoo!?\nYahoo! Mail Plus - Powerful. Affordable. Sign up now.\nhttp://mailplus.yahoo.com\n", "msg_date": "Tue, 14 Jan 2003 11:01:49 -0800 (PST)", "msg_from": "CaptainX0r <captainx0r@yahoo.com>", "msg_from_op": true, "msg_subject": "Re: Sun vs. Mac - best Postgres platform?" }, { "msg_contents": "On Tue, Jan 14, 2003 at 11:01:49AM -0800, CaptainX0r wrote:\n\n> I changed back to the default 1024, and down to the minimum, 64\n> - no change. I think that was changed simultaneously with some\n> other parameter (bad, I know) that actually had an affect. I\n> guess I can remove it.\n\nVery bad to change two things at once. You think it's saving you\ntime, but now . . . well, you already know what happens ;-) Anyway,\nyou _still_ shouldn't have it that high.\n\n> > > effective_cache_size = 65536 # typically 8KB each\n> \n> I read somewhere that this should be set to half the system RAM\n> size, 64k*8k=512m = 1/2 of the 1 Gig RAM. I guess this is way\n> off since you're saying that it's disk cache. This agrees with\n> the documentation. I can't really rely on the (precious little\n> Solaris postgres) info I find on the net.... ;)\n\nI think you should rely on the Postgres documentation, which has way\nfewer errors than just about any other technical documentation I've\never seen. Yes, it's disk cache.\n\nI wouldn't set _anything_ to half the system RAM. It'd be real nice\nif your disk cache was half your RAM, but I'd be amazed if anyone's\nsystem were that efficient.\n\nIt sounds like you need to follow Tom Lane's advice, though, and do\nsome profiling.\n\nA\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<andrew@libertyrms.info> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Tue, 14 Jan 2003 14:18:53 -0500", "msg_from": "Andrew Sullivan <andrew@libertyrms.info>", "msg_from_op": false, "msg_subject": "Re: Sun vs. Mac - best Postgres platform?" }, { "msg_contents": "On Tue, 2003-01-14 at 15:00, CaptainX0r wrote:\n> Hello,\n> \n> I'm having some serious performance issues with PostgreSQL on\n> our newish SunFire 280R (1 900MHz ultrasparc III, 1 GB RAM). \n> It's painfully slow. 
It took me almost a week of tuning to get\n> it in the range of an old Mac G3 laptop. Now, a few days later,\n> after tweaking every nearly every parameter (only noting\n> decreased performance on some) in /etc/system and\n> $PGDATA/postgresql.conf \n\ncould you post your $PGDATA/postgresql.conf for our viewing pleasure ?\n\n> it's about as fast as I can make it, but\n> still horribly slow. A few simple queries that take 1.5-7\n> minutes on the G3 take 1-1.5 minutes on the Sun. A bulk load of\n> roughly 2.4 GB database dump takes ~1 hour on each machine. It\n> took almost 2 hours on the Sun before I turned off fsync.\n> \n> We have plans to add another CPU, RAM and another disk, which\n> should all help,\n\nAnother CPU will probably not help with bulk loads or other single-user\nstuff.\n\n> but in its current state, I (and many others)\n> would think that it should run circles around the G3. I'm\n> thinking that I'm missing something big and obvious because this\n> can't be right. Otherwise we might as well just get a bunch of\n> ibooks to run our databases - they're a lot smaller and much\n> more quiet.\n\nFor single-user tasks you will probably be better off by getting a \ngray box with Athlon 2600+ with 3 Gigs of memory and IDE disks \nand running Linux or *BSD .\n\n> Can someone please point me in the right direction?\n> \n> Thanks,\n> \n> -X\n> \n> __________________________________________________\n> Do you Yahoo!?\n> Yahoo! Mail Plus - Powerful. Affordable. Sign up now.\n> http://mailplus.yahoo.com\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n-- \nHannu Krosing <hannu@tm.ee>\n", "msg_date": "14 Jan 2003 19:26:24 +0000", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: Sun vs. Mac" }, { "msg_contents": "> I believe top's percent-of-CPU numbers for individual\n> processes are time\n> averages over a minute or so, so the ramping effect is\n> unsurprising.\n\nThanks - this makes much more sense.\n\n> > This doesn't really tell me much, except I'm guessing that\n> > PG is CPU bound?\n> \n> Yup, that seems pretty clear. Next step is to find out what\n> the heck\n> it's doing. My instinct would be to use gprof. Recompile\n> with\n> profiling enabled --- if you're using gcc, this should work\n> \tcd postgres-distribution/src/backend\n> \tmake clean\n> \tmake PROFILE=-pg all\n> \tmake install-bin\t-- may need to stop postmaster \n> Next run some sample queries (put them all into one session). \n> After quitting the session, find gmon.out in the\n> $PGDATA/base/nnn/ subdirectory corresponding to your database,\n\n> and feed it to gprof.\n> The results should show where the code hotspot is.\n\nWell if that isn't a fancy bit of info.... Thanks!\n\ngprof says:\n\nFatal ELF error: can't read ehdr (Request error: class\nfile/memory mismatch)\n\nI'm guessing that's not what we're expecting... I'm using\n/usr/ccs/bin/gprof - maybe there's a better one?\n\n-X\n\n\n__________________________________________________\nDo you Yahoo!?\nYahoo! Mail Plus - Powerful. Affordable. Sign up now.\nhttp://mailplus.yahoo.com\n", "msg_date": "Tue, 14 Jan 2003 11:38:28 -0800 (PST)", "msg_from": "CaptainX0r <captainx0r@yahoo.com>", "msg_from_op": true, "msg_subject": "Re: Sun vs. Mac " }, { "msg_contents": "Andrew Sullivan wrote:\n<snip>\n > I've been finding FreeBSD way faster than Linux. 
But yes.\n\nOut of curiosity, have you been trying FreeBSD 4.7x or the developer \nreleases of 5.0? Apparently there are new kernel scheduling \nimprovements in FreeBSD 5.0 that will help certain types of tasks and \nmight boost our performance further.\n\nWould be interested in seeing if the profiling/optimisation options of \nGCC 3.2.x are useful as well.\n\n:-)\n\nRegards and best wishes,\n\nJustin Clift\n\n\n > A\n\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n- Indira Gandhi\n\n", "msg_date": "Wed, 15 Jan 2003 10:29:46 +1030", "msg_from": "Justin Clift <justin@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: Sun vs. Mac - best Postgres platform?" }, { "msg_contents": "On Wed, Jan 15, 2003 at 10:29:46AM +1030, Justin Clift wrote:\n> Andrew Sullivan wrote:\n> <snip>\n> > I've been finding FreeBSD way faster than Linux. But yes.\n> \n> Out of curiosity, have you been trying FreeBSD 4.7x or the developer \n\nJust 4.7x. And mostly for little jobs for myself, so I can't speak\nabout testing it in a production case. \n\nA\n\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<andrew@libertyrms.info> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Tue, 14 Jan 2003 19:02:01 -0500", "msg_from": "Andrew Sullivan <andrew@libertyrms.info>", "msg_from_op": false, "msg_subject": "Re: Sun vs. Mac - best Postgres platform?" }, { "msg_contents": "CaptainX0r <captainx0r@yahoo.com> writes:\n> gprof says:\n> Fatal ELF error: can't read ehdr (Request error: class\n> file/memory mismatch)\n\nHm, that's a new one on me. Just to eliminate the obvious: you did\nread the gprof man page? It typically needs both the pathname of the\npostgres executable and that of the gmon.out file.\n\nIf that's not it, I fear you need a gprof expert, which I ain't.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 15 Jan 2003 00:12:37 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Sun vs. Mac " }, { "msg_contents": "All,\n\n> Hm, that's a new one on me. Just to eliminate the obvious:\n> you did read the gprof man page? It typically needs both the \n> pathname of the postgres executable and that of the gmon.out \n\nI read it, but apparently not very well. It appears that as long as gmon.out\nis in the current dir, all that's needed is the name of the executeable (with\nfull path). The way it's formated I read it as all that's needed is the\nimage-file. 
Anyways...\n\nThere's a ton of output, so I'm picking what appear to be the highlights.\n\ngranularity: each sample hit covers 4 byte(s) for 0.00% of 705.17 seconds\n\n called/total parents \nindex %time self descendents called+self name index\n called/total children\n\n[1] 63.6 446.05 2.65 44386289+57463869 <cycle 1 as a whole> [1]\n 442.08 2.50 59491100+2566048020 <external> <cycle 1>\n[2]\n 2.91 0.00 23045572+366 _fini <cycle 1> [23]\n 0.57 0.00 17763681 _rl_input_available \n<cycle 1> [42]\n 0.11 0.15 478216+7 history_expand <cycle\n1> [54]\n 0.21 0.00 7 history_tokenize_internal \n<cycle 1> [56]\n 0.10 0.00 1071137 tilde_expand <cycle\n1> [68]\n 0.07 0.00 397 rl_gather_tyi <cycle\n1> [70]\n 0.00 0.00 31 qsort <cycle 1> [81]\n 0.00 0.00 17 rl_insert_close <cycle\n1> [82]\n-----------------------------------------------\n <spontaneous>\n[3] 32.3 0.65 227.47 rl_get_termcap [3]\n 226.13 1.34 22502064/44386289 <external> <cycle 1>\n[2]\n-----------------------------------------------\n <spontaneous>\n[4] 26.3 0.78 184.35 rl_stuff_char [4]\n 178.51 1.06 17763681/44386289 <external> <cycle 1>\n[2]\n 4.78 0.00 17763681/17763681 rl_clear_signals [18]\n-----------------------------------------------\n <spontaneous>\n[5] 15.8 111.61 0.00 rl_signal_handler [5]\n 0.00 0.00 1/44386289 <external> <cycle 1>\n[2]\n-----------------------------------------------\n <spontaneous>\n[6] 4.3 30.57 0.00 rl_sigwinch_handler [6]\n-----------------------------------------------\n\nAnd:\n\ngranularity: each sample hit covers 4 byte(s) for 0.00% of 705.17 seconds\n\n % cumulative self self total \n time seconds seconds calls ms/call ms/call name \n 15.8 111.61 111.61 rl_signal_handler [5]\n 4.3 142.18 30.57 rl_sigwinch_handler [6]\n 1.9 155.42 13.24 rl_set_sighandler [8]\n 1.9 168.52 13.10 rl_maybe_set_sighandler\n[9]\n 1.1 176.37 7.85 _rl_next_macro_key [11]\n 0.9 182.38 6.01 rl_read_key [12]\n 0.8 188.07 5.69 rl_backward_kill_line [13]\n 0.8 193.56 5.49 rl_unix_word_rubout [14]\n 0.8 198.91 5.35 _rl_pop_executing_macro\n[15]\n 0.7 203.73 4.82 _rl_fix_last_undo_of_type\n[17]\n 0.7 208.51 4.78 17763681 0.00 0.00 rl_clear_signals [18]\n 0.6 212.87 4.36 rl_modifying [19]\n 0.6 216.95 4.08 rl_begin_undo_group [20]\n 0.6 221.00 4.05 rl_tilde_expand [21]\n 0.4 223.98 2.98 region_kill_internal [22]\n 0.4 226.89 2.91 23045572 0.00 0.00 _fini <cycle 1> [23]\n\n\nSo. Any thoughts? This looks really useful in the hands of someone who knows\nwhat it all means. Looks like some signal handlers are using up most of the\ntime. Good? Bad? Am I reading that first part correctly in that a good part\nof the time spent is external to Postgres? This report also seems to verify\nthat qsort isn't a problem since it was the 81st index, with 31 calls (not\nmuch) and 0.00 self seconds.\n\nThanks much,\n\n-X\n\n\n__________________________________________________\nDo you Yahoo!?\nYahoo! Mail Plus - Powerful. Affordable. Sign up now.\nhttp://mailplus.yahoo.com\n", "msg_date": "Wed, 15 Jan 2003 09:22:50 -0800 (PST)", "msg_from": "CaptainX0r <captainx0r@yahoo.com>", "msg_from_op": true, "msg_subject": "Re: Sun vs. 
Mac - gprof output" }, { "msg_contents": "CaptainX0r <captainx0r@yahoo.com> writes:\n> % cumulative self self total \n> time seconds seconds calls ms/call ms/call name \n> 15.8 111.61 111.61 rl_signal_handler [5]\n> 4.3 142.18 30.57 rl_sigwinch_handler [6]\n> 1.9 155.42 13.24 rl_set_sighandler [8]\n> 1.9 168.52 13.10 rl_maybe_set_sighandler\n> [9]\n> 1.1 176.37 7.85 _rl_next_macro_key [11]\n> 0.9 182.38 6.01 rl_read_key [12]\n> 0.8 188.07 5.69 rl_backward_kill_line [13]\n\nAll of these names correspond to internal routines in libreadline.\n\nIt'd not be surprising for libreadline to suck a good deal of the\nruntime of psql ... but I don't believe the backend will call it at all.\nSo, either this trace is erroneous, or you profiled the wrong process\n(client instead of backend), or there's something truly weird going on.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 16 Jan 2003 01:11:48 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Sun vs. Mac - gprof output " }, { "msg_contents": "> It'd not be surprising for libreadline to suck a good deal of the\n> runtime of psql ... but I don't believe the backend will call it at all.\n> So, either this trace is erroneous, or you profiled the wrong process\n> (client instead of backend), or there's something truly weird going on.\n\nYou're right, I got the client, here's the backend:\n\n % cumulative self self total \n time seconds seconds calls ms/call ms/call name \n 23.4 125.31 125.31 internal_mcount [13]\n 21.2 239.07 113.76 79296415 0.00 0.00 ExecMakeFunctionResult \n<cycle 4> [14]\n 7.8 280.68 41.61 98971658 0.00 0.00 AllocSetReset [23]\n 6.8 317.13 36.45 193735603 0.00 0.00 ExecEvalVar [18]\n 5.2 345.21 28.08 280731963 0.00 0.00 ExecEvalExpr <cycle\n4> [15]\n 2.7 359.93 14.72 38140599 0.00 0.00 nocachegetattr [35]\n 2.7 374.28 14.35 320207 0.04 0.04 _read [38]\n 2.2 385.97 11.69 78969393 0.00 0.00 ExecQual <cycle 4> [34]\n 2.1 397.46 11.49 79296415 0.00 0.00 ExecEvalFuncArgs <cycle\n4> [42]\n 1.4 404.71 7.25 _mcount (6219)\n 1.3 411.73 7.02 11293115 0.00 0.00 heapgettup [31]\n 1.2 418.34 6.61 98975017 0.00 0.00 ExecClearTuple [43]\n 1.0 423.93 5.59 98971592 0.00 0.00 ExecStoreTuple [33]\n 0.9 428.87 4.94 197952332 0.00 0.00 MemoryContextSwitchTo\n[53]\n 0.9 433.53 4.66 7612547 0.00 0.00 heap_formtuple [39]\n 0.8 437.96 4.43 7609318 0.00 0.01 ExecScanHashBucket \n<cycle 4> [17]\n 0.8 442.34 4.38 8 547.50 547.50 .rem [55]\n 0.8 446.64 4.30 79296261 0.00 0.00 ExecEvalOper <cycle\n4> [56]\n\n\nI'm not sure what to make of this.\n\nThanks,\n\n-X\n\n\n__________________________________________________\nDo you Yahoo!?\nYahoo! Mail Plus - Powerful. Affordable. Sign up now.\nhttp://mailplus.yahoo.com\n", "msg_date": "Thu, 16 Jan 2003 05:45:02 -0800 (PST)", "msg_from": "CaptainX0r <captainx0r@yahoo.com>", "msg_from_op": true, "msg_subject": "Re: Sun vs. Mac - gprof output " } ]
[ { "msg_contents": "Roman,\n\nFirst, if this is a dedicated PostgreSQL server, you should try increasing \nyour shared_buffers to at least 512mb (65536) if not 1GB (double that) and \nadjust your shmmax and shmmall to match.\n\nSecond, you will probably want to increase your sort_mem as well. How much \ndepeneds on the number of concurrent queries you expect to be running and \ntheir relative complexity. Give me that information, and I'll offer you \nsome suggestions. Part of your slow query \n\nYour query problem is hopefully relatively easy. The following clause is 95% \nof your query time:\n\n> -> Index Scan using \nbatchdetail_ix_tranamount_idx on batchdetail d (cost=0.00..176768.18 \nrows=44010 width=293) (actual time=35.48..1104625.54 rows=370307 loops=1) \n> \n\nSee the actual time figures? This one clause is taking 1,104,590 msec! \n\nNow, why?\n\nWell, look at the cost estimate figures in contrast to the actual row count:\nestimate rows = 44,010\t\treal rows 370,307\nThat's off by a factor of 9. This index scan is obviously very cumbersome \nand is slowing the query down. Probably it should be using a seq scan \ninstead ... my guess is, you haven't run ANALYZE in a while and the incorrect \nrow estimate is causing the parser to choose a very slow index scan.\n\nTry running ANALYZE on your database and re-running the query. Also try \nusing REINDEX on batchdetail_ix_tranamount_idx .\n\nSecond, this clause near the bottom:\n\n -> Seq Scan on purc1 p1 (cost=0.00..44259.70 \nrows=938770 width=19) (actual time=98.09..4187.32 rows=938770 loops=5) \n\n... suggests that you could save an additional 4 seconds by figuring out a way \nfor the criteria on purc1 to use a relevant index -- but only after you've \nsolved the problem with batchdetail_ix_tranamount_idx.\n\nFinally, if you really want help, post the query.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Tue, 14 Jan 2003 11:32:32 -0800", "msg_from": "Josh Berkus <josh@agliodbs.com>", "msg_from_op": true, "msg_subject": "Re: 7.3.1 New install, large queries are slow" }, { "msg_contents": "I am trying to get a PostgreSQL server into production (moving from MSSQL2K) but having some serious performance issues. PostgreSQL is new to me, and I'm only just now comfortable with Linux. So far I've succesfully compiled postgres from source and migrated all the data from MSSQL. Postgres is primarily accessed using JDBC. \r\n \r\nI really want to use Postgres for production, but if I can't get better results out of it by the end of the week we are dropping it forever and going back to MSSQL despite the $$$. I'm basically at a point where I've got to find help from the list. Please help me make this server fly!\r\n \r\nI have a query that does many joins (including two very big tables) which is slow on Postgres. On PGSQL the query takes 19 minutes, but only 3 seconds on MSSQL. The two servers have the same indexes created (including primary key indexes). I finally gave up on creating all the foreign keys in Postgres - after 12 hours of 100% CPU. It's hard for me to believe that the hardware is the bottleneck - the $20k Postgres server far outclasses the MSSQL server (see below for stats). When I ran EXPLAIN ANALYZE for this query the CPU averaged 5%, sar -b shows about 6,000 block reads/sec, and vmstat had zero swapping. 
EXPLAIN results are below, I'm not sure how to interpret them.\r\n \r\nThe disks are not reading at max speed during the query - when I ran a VACUUM ANALYZE (after data migration), sar -b was consistently 100,000 block reads/sec. It does not seem like the hardware is holding back things here. I read something about 'fsync' recently, would changing that setting apply in this case?\r\n \r\nDATABASE:\r\n'tranheader' table has 2000 tuples, PK index\r\n'batchheader' table has 2.6 million tuples, 5 indexes, FK constraint to tranheader PK\r\n'batchdetail' table has 23 million tuples, 6 indexes, FK constraint to batcheader PK\r\n18 tables with <1000 tuples, most are child tables of batchdetail\r\nAll tables have a PK and are normalized \r\nLarge nightly INSERTs (~200000 tuples)\r\nComplex SELECTs all day long\r\nNo UPDATEs or DELETEs ever, at least until I run low on storage!\r\n \r\nI've learned as much as I can absorb from the online docs and archives about performance tuning. Based on my limited understanding, I've changed the following settings. I am totally open to any suggestions, including starting over with RAID, filesystems, PGSQL. I would almost consider FreeBSD if it helped a lot, but that would be a stretch given my time investment in Linux. This is a brand new machine, so bad hardware is a possibility - but I'm not sure how to go about determining that.\r\n \r\n*** /etc/sysctl.conf\r\nkernel.shmmni = 4096\r\nkernel.shmall = 32000000\r\nkernel.shmmax = 512000000 \r\n \r\n*** /usr/local/pgsql/data/postgresql.conf\r\ntcpip_socket=true\r\nshared_buffers = 32768\r\nmax_fsm_relations = 10000\r\nmax_fsm_pages = 2000000\r\nsort_mem = 8192\r\n \r\nPOSTGRESQL SYSTEM:\r\nRed Hat Linux 8.0, PostgreSQL 7.3.1 (dedicated, besides SSH daemon)\r\nDell PE6600 Dual Xeon MP 2.0GHz, 2MB L3 cache,HyperThreading enabled\r\n4.0 GB Physical RAM\r\n/dev/sda1: ext3 101MB /boot \r\n/dev/sda2: ext3 34GB / (sda is 2 disk RAID-1)\r\nnone : swap 1.8GB\r\n/dev/sdb1: ext3 104GB /usr/local/pgsql/data (sdb is 6 disk RAID-10)\r\nAll 8 drives are 36GB, 15k RPM, Ultra160 SCSI\r\nPERC3/DC 128MB RAID controller\r\n \r\nMSSQL SYSTEM:\r\nDell PE1650, Dual P3 1.1GHz, 1.5GB RAM\r\nSingle 18GB, 15k RPM SCSI drive (no RAID)\r\nWindows 2000 Server SP3, SQL Server 2000 SP2\r\n\r\nTIA,\r\nRoman Fail\r\nSr. 
Web Application Developer\r\nPOS Portal, Inc.\r\n\r\nEXPLAIN ANALYZE RESULTS:\r\nLimit (cost=370518.31..370518.31 rows=1 width=540) (actual time=1168722.18..1168722.20 rows=5 loops=1)\r\n -> Sort (cost=370518.31..370518.31 rows=1 width=540) (actual time=1168722.18..1168722.18 rows=5 loops=1)\r\n Sort Key: b.batchdate\r\n -> Nested Loop (cost=314181.17..370518.30 rows=1 width=540) (actual time=1148191.12..1168722.09 rows=5 loops=1)\r\n Join Filter: (\"inner\".batchdetailid = \"outer\".batchdetailid)\r\n -> Nested Loop (cost=314181.17..370461.79 rows=1 width=502) (actual time=1148167.55..1168671.80 rows=5 loops=1)\r\n Join Filter: (\"inner\".batchdetailid = \"outer\".batchdetailid)\r\n -> Nested Loop (cost=314181.17..370429.29 rows=1 width=485) (actual time=1148167.48..1168671.45 rows=5 loops=1)\r\n Join Filter: (\"inner\".batchdetailid = \"outer\".batchdetailid)\r\n -> Nested Loop (cost=314181.17..370396.79 rows=1 width=476) (actual time=1148167.41..1168671.08 rows=5 loops=1)\r\n Join Filter: (\"inner\".batchdetailid = \"outer\".batchdetailid)\r\n -> Nested Loop (cost=314181.17..314402.47 rows=1 width=457) (actual time=1139099.39..1139320.79 rows=5 loops=1)\r\n Join Filter: (\"outer\".cardtypeid = \"inner\".cardtypeid)\r\n -> Merge Join (cost=314181.17..314401.24 rows=1 width=443) (actual time=1138912.13..1139133.00 rows=5 loops=1)\r\n Merge Cond: (\"outer\".batchid = \"inner\".batchid)\r\n -> Sort (cost=127418.59..127418.59 rows=3 width=150) (actual time=9681.91..9681.93 rows=17 loops=1)\r\n Sort Key: b.batchid\r\n -> Hash Join (cost=120787.32..127418.56 rows=3 width=150) (actual time=7708.04..9681.83 rows=17 loops=1)\r\n Hash Cond: (\"outer\".merchantid = \"inner\".merchantid)\r\n -> Merge Join (cost=120781.58..125994.80 rows=283597 width=72) (actual time=7655.57..9320.49 rows=213387 loops=1)\r\n Merge Cond: (\"outer\".tranheaderid = \"inner\".tranheaderid)\r\n -> Index Scan using tranheader_ix_tranheaderid_idx on tranheader t (cost=0.00..121.15 rows=1923 width=16) (actual time=0.15..10.86 rows=1923 loops=1)\r\n Filter: (clientid = 6)\r\n -> Sort (cost=120781.58..121552.88 rows=308520 width=56) (actual time=7611.75..8162.81 rows=329431 loops=1)\r\n Sort Key: b.tranheaderid\r\n -> Seq Scan on batchheader b (cost=0.00..79587.23 rows=308520 width=56) (actual time=0.90..4186.30 rows=329431 loops=1)\r\n Filter: (batchdate > '2002-12-15 00:00:00'::timestamp without time zone)\r\n -> Hash (cost=5.74..5.74 rows=1 width=78) (actual time=31.39..31.39 rows=0 loops=1)\r\n -> Index Scan using merchants_ix_merchid_idx on merchants m (cost=0.00..5.74 rows=1 width=78) (actual time=31.38..31.38 rows=1 loops=1)\r\n Index Cond: (merchid = '701252267'::character varying)\r\n -> Sort (cost=186762.59..186872.62 rows=44010 width=293) (actual time=1127828.96..1128725.39 rows=368681 loops=1)\r\n Sort Key: d.batchid\r\n -> Index Scan using batchdetail_ix_tranamount_idx on batchdetail d (cost=0.00..176768.18 rows=44010 width=293) (actual time=35.48..1104625.54 rows=370307 loops=1)\r\n Index Cond: ((tranamount >= 500.0) AND (tranamount <= 700.0))\r\n -> Seq Scan on cardtype c (cost=0.00..1.10 rows=10 width=14) (actual time=37.44..37.47 rows=10 loops=5)\r\n -> Seq Scan on purc1 p1 (cost=0.00..44259.70 rows=938770 width=19) (actual time=98.09..4187.32 rows=938770 loops=5)\r\n -> Seq Scan on direct dr (cost=0.00..20.00 rows=1000 width=9) (actual time=0.00..0.00 rows=0 loops=5)\r\n -> Seq Scan on carrental cr (cost=0.00..20.00 rows=1000 width=17) (actual time=0.00..0.00 rows=0 loops=5)\r\n -> Seq Scan on checks ck 
(cost=0.00..40.67 rows=1267 width=38) (actual time=1.03..7.63 rows=1267 loops=5)\r\nTotal runtime: 1168881.12 msec\r\n", "msg_date": "Wed, 15 Jan 2003 10:00:04 -0800", "msg_from": "\"Roman Fail\" <rfail@posportal.com>", "msg_from_op": false, "msg_subject": "7.3.1 New install, large queries are slow" }, { "msg_contents": "Roman Fail wrote:\n<cut>\n\nEXPLAIN ANALYZE RESULTS:\nLimit (cost=370518.31..370518.31 rows=1 width=540) (actual time=1168722.18..1168722.20 rows=5 loops=1)\n -> Sort (cost=370518.31..370518.31 rows=1 width=540) (actual time=1168722.18..1168722.18 rows=5 loops=1)\n Sort Key: b.batchdate\n -> Nested Loop (cost=314181.17..370518.30 rows=1 width=540) (actual time=1148191.12..1168722.09 rows=5 loops=1)\n Join Filter: (\"inner\".batchdetailid = \"outer\".batchdetailid)\n -> Nested Loop (cost=314181.17..370461.79 rows=1 width=502) (actual time=1148167.55..1168671.80 rows=5 loops=1)\n Join Filter: (\"inner\".batchdetailid = \"outer\".batchdetailid)\n -> Nested Loop (cost=314181.17..370429.29 rows=1 width=485) (actual time=1148167.48..1168671.45 rows=5 loops=1)\n Join Filter: (\"inner\".batchdetailid = \"outer\".batchdetailid)\n -> Nested Loop (cost=314181.17..370396.79 rows=1 width=476) (actual time=1148167.41..1168671.08 rows=5 loops=1)\n Join Filter: (\"inner\".batchdetailid = \"outer\".batchdetailid)\n -> Nested Loop (cost=314181.17..314402.47 rows=1 width=457) (actual time=1139099.39..1139320.79 rows=5 loops=1)\n Join Filter: (\"outer\".cardtypeid = \"inner\".cardtypeid)\n -> Merge Join (cost=314181.17..314401.24 rows=1 width=443) (actual time=1138912.13..1139133.00 rows=5 loops=1)\n Merge Cond: (\"outer\".batchid = \"inner\".batchid)\n -> Sort (cost=127418.59..127418.59 rows=3 width=150) (actual time=9681.91..9681.93 rows=17 loops=1)\n Sort Key: b.batchid\n -> Hash Join (cost=120787.32..127418.56 rows=3 width=150) (actual time=7708.04..9681.83 rows=17 loops=1)\n Hash Cond: (\"outer\".merchantid = \"inner\".merchantid)\n -> Merge Join (cost=120781.58..125994.80 rows=283597 width=72) (actual time=7655.57..9320.49 rows=213387 loops=1)\n Merge Cond: (\"outer\".tranheaderid = \"inner\".tranheaderid)\n -> Index Scan using tranheader_ix_tranheaderid_idx on tranheader t (cost=0.00..121.15 rows=1923 width=16) (actual time=0.15..10.86 rows=1923 loops=1)\n Filter: (clientid = 6)\n -> Sort (cost=120781.58..121552.88 rows=308520 width=56) (actual time=7611.75..8162.81 rows=329431 loops=1)\n Sort Key: b.tranheaderid\n -> Seq Scan on batchheader b (cost=0.00..79587.23 rows=308520 width=56) (actual time=0.90..4186.30 rows=329431 loops=1)\n Filter: (batchdate > '2002-12-15 00:00:00'::timestamp without time zone)\n -> Hash (cost=5.74..5.74 rows=1 width=78) (actual time=31.39..31.39 rows=0 loops=1)\n -> Index Scan using merchants_ix_merchid_idx on merchants m (cost=0.00..5.74 rows=1 width=78) (actual time=31.38..31.38 rows=1 loops=1)\n Index Cond: (merchid = '701252267'::character varying)\n -> Sort (cost=186762.59..186872.62 rows=44010 width=293) (actual time=1127828.96..1128725.39 rows=368681 loops=1)\n Sort Key: d.batchid\n -> Index Scan using batchdetail_ix_tranamount_idx on batchdetail d (cost=0.00..176768.18 rows=44010 width=293) (actual time=35.48..1104625.54 rows=370307 loops=1)\n Index Cond: ((tranamount >= 500.0) AND (tranamount <= 700.0))\n -> Seq Scan on cardtype c (cost=0.00..1.10 rows=10 width=14) (actual time=37.44..37.47 rows=10 loops=5)\n -> Seq Scan on purc1 p1 (cost=0.00..44259.70 rows=938770 width=19) (actual time=98.09..4187.32 rows=938770 loops=5)\n -> Seq Scan on direct 
dr (cost=0.00..20.00 rows=1000 width=9) (actual time=0.00..0.00 rows=0 loops=5)\n -> Seq Scan on carrental cr (cost=0.00..20.00 rows=1000 width=17) (actual time=0.00..0.00 rows=0 loops=5)\n -> Seq Scan on checks ck (cost=0.00..40.67 rows=1267 width=38) (actual time=1.03..7.63 rows=1267 loops=5)\nTotal runtime: 1168881.12 msec\n<cut>\n\nIt looks like your execution time is not a hardware, but query problem.\nQuery nearly doesn't use indexes at all. You said, that that you have normalized database,\nso you should have a lot of explicit joins, which work pretty well on Postgresql.\n\nCan you add some examples of your queries? If it is difficult for you,\nat least create one example, when you get \"Join Filter\" on \"explain analyze\".\n\n From your analyze result:\nSeq Scan on batchheader b (cost=0.00..79587.23 rows=308520 width=56)\nCan you write what condition and indexes does batchheader have?\n\nRegards, \nTomasz Myrta\n\n", "msg_date": "Wed, 15 Jan 2003 20:31:34 +0100", "msg_from": "Tomasz Myrta <jasiek@klaster.net>", "msg_from_op": false, "msg_subject": "Re: 7.3.1 New install, large queries are slow" }, { "msg_contents": "\nOn Wed, 15 Jan 2003, Roman Fail wrote:\n\n> The disks are not reading at max speed during the query - when I ran a\n> VACUUM ANALYZE (after data migration), sar -b was consistently 100,000\n> block reads/sec. It does not seem like the hardware is holding back\n> things here. I read something about 'fsync' recently, would changing\n> that setting apply in this case?\n\nYou ran vacuum analyze, but some of the explain still looks suspiciously\nlike it's using default statistics (dr and cr for example, unless they\nreally do have 1000 rows).\n\nWhat are the actual query and table definitions for the query?\n\n", "msg_date": "Wed, 15 Jan 2003 11:44:01 -0800 (PST)", "msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>", "msg_from_op": false, "msg_subject": "Re: 7.3.1 New install, large queries are slow" }, { "msg_contents": "I didn't see the query itself in the message, but it looks to me like\nit's poorly formed. Could you send it? \n\nBy quick glance, either you're using a bunch of explicit joins that are\npoorly formed (you've made a bad choice in order) or those particular\nIDs are really popular. There are a number of sequential scans that\npossibly should be index scans.\n\n> EXPLAIN ANALYZE RESULTS:\n> Limit (cost=370518.31..370518.31 rows=1 width=540) (actual time=1168722.18..1168722.20 rows=5 loops=1)\n> -> Sort (cost=370518.31..370518.31 rows=1 width=540) (actual time=1168722.18..1168722.18 rows=5 loops=1)\n> Sort Key: b.batchdate\n\n-- \nRod Taylor <rbt@rbt.ca>\n\nPGP Key: http://www.rbt.ca/rbtpub.asc", "msg_date": "15 Jan 2003 14:53:11 -0500", "msg_from": "Rod Taylor <rbt@rbt.ca>", "msg_from_op": false, "msg_subject": "Re: 7.3.1 New install, large queries are slow" }, { "msg_contents": "On Wed, Jan 15, 2003 at 10:00:04AM -0800, Roman Fail wrote:\n> I have a query that does many joins (including two very big tables)\n> which is slow on Postgres. On PGSQL the query takes 19 minutes,\n\nThere are three things I can think of right off the bat. \n\nFirst, the performance of foreign keys is flat-out awful in Postgres. \nI suggest avoiding them if you can.\n\nSecond, ordering joins explicitly (with the JOIN keyword) constrains\nthe planner, and may select bad plan. The explain analyse output\nwas nice, but I didn't see the query, so I can't tell what the plan\nmaybe ought to be.\n\nThird, I didn't see any suggestion that you'd moved the WAL onto its\nown disk. 
That will mostly help when you are under write load; I\nguess it's not a problem here, but it's worth keeping in mind.\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<andrew@libertyrms.info> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Wed, 15 Jan 2003 15:10:12 -0500", "msg_from": "Andrew Sullivan <andrew@libertyrms.info>", "msg_from_op": false, "msg_subject": "Re: 7.3.1 New install, large queries are slow" }, { "msg_contents": "Roman Fail kirjutas K, 15.01.2003 kell 23:00:\n> I am trying to get a PostgreSQL server into production (moving from MSSQL2K) but having some serious performance issues. PostgreSQL is new to me, and I'm only just now comfortable with Linux. So far I've succesfully compiled postgres from source and migrated all the data from MSSQL. Postgres is primarily accessed using JDBC. \n> \n> I really want to use Postgres for production, but if I can't get better results out of it by the end of the week we are dropping it forever and going back to MSSQL despite the $$$. I'm basically at a point where I've got to find help from the list. Please help me make this server fly!\n> \n> I have a query that does many joins (including two very big tables) which is slow on Postgres. On PGSQL the query takes 19 minutes, but only 3 seconds on MSSQL. The two servers have the same indexes created (including primary key indexes). I finally gave up on creating all the foreign keys in Postgres - after 12 hours of 100% CPU. It's hard for me to believe that the hardware is the bottleneck - the $20k Postgres server far outclasses the MSSQL server (see below for stats). When I ran EXPLAIN ANALYZE for this query the CPU averaged 5%, sar -b shows about 6,000 block reads/sec, and vmstat had zero swapping. EXPLAIN results are below, I'm not sure how to interpret them.\n> \n\nTwo questions:\n\n1) Have you run analyze on this database (after loading the data ?)\n\n2) could you also post the actual query - it would make interpreting the\nEXPLAIN ANALYZE RESULTS easier.\n\n-- \nHannu Krosing <hannu@tm.ee>\n", "msg_date": "16 Jan 2003 01:42:59 +0500", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: 7.3.1 New install, large queries are slow" }, { "msg_contents": "Roman Fail wrote:\n> I am trying to get a PostgreSQL server into production (moving from\n> MSSQL2K) but having some serious performance issues. PostgreSQL is\n> new to me, and I'm only just now comfortable with Linux. So far\n> I've succesfully compiled postgres from source and migrated all the\n> data from MSSQL. Postgres is primarily accessed using JDBC.\n\n[...]\n\n> POSTGRESQL SYSTEM:\n> Red Hat Linux 8.0, PostgreSQL 7.3.1 (dedicated, besides SSH daemon)\n> Dell PE6600 Dual Xeon MP 2.0GHz, 2MB L3 cache,HyperThreading enabled\n> 4.0 GB Physical RAM\n> /dev/sda1: ext3 101MB /boot \n> /dev/sda2: ext3 34GB / (sda is 2 disk RAID-1)\n> none : swap 1.8GB\n> /dev/sdb1: ext3 104GB /usr/local/pgsql/data (sdb is 6 disk RAID-10)\n> All 8 drives are 36GB, 15k RPM, Ultra160 SCSI\n> PERC3/DC 128MB RAID controller\n\nExt3, huh? Ext3 is a journalling filesystem that is capable of\njournalling data as well as metadata. But if you mount it such that\nit journals data, writes will be significantly slower.\n\nThe default for ext3 is to do ordered writes: data is written before\nthe associated metadata transaction commits, but the data itself isn't\njournalled. 
But because PostgreSQL synchronously writes the\ntransaction log (using fsync() by default, if I'm not mistaken) and\nuses sync() during a savepoint, I would think that ordered writes at\nthe filesystem level would probably buy you very little in the way of\nadditional data integrity in the event of a crash.\n\nSo if I'm right about that, then you might consider using the\n\"data=writeback\" option to ext3 on the /usr/local/pgsql/data\nfilesystem. I'd recommend the default (\"data=ordered\") for everything\nelse.\n\n\nThat said, I doubt the above change will make the orders of magnitude\ndifference you're looking for. But every little bit helps...\n\nYou might also consider experimenting with different filesystems, but\nothers here may be able to chime in with better information on that.\n\n\nPeople, please correct me if I'm wrong in my analysis of PostgreSQL on\next3 above. If the database on an ext3 filesystem mounted in\nwriteback mode is subject to corruption upon a crash despite the\nefforts PostgreSQL makes to keep things sane, then writeback mode\nshouldn't be used! And clearly it shouldn't be used if it doesn't\nmake a significant performance difference.\n\n\n\n\n\n-- \nKevin Brown\t\t\t\t\t kevin@sysexperts.com\n", "msg_date": "Wed, 15 Jan 2003 18:05:27 -0800", "msg_from": "Kevin Brown <kevin@sysexperts.com>", "msg_from_op": false, "msg_subject": "Re: 7.3.1 New install, large queries are slow" } ]
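Tying the advice in this thread to concrete statements: shared_buffers is counted in 8 kB blocks by default, so the suggested 65536 corresponds to 512 MB. The two commands below are a minimal sketch of the maintenance steps recommended above, using the index name that appears in the EXPLAIN output; they are a restatement of the suggestions, not a guaranteed fix.

    -- refresh planner statistics for every table, then rebuild the index used by the slow scan
    ANALYZE;
    REINDEX INDEX batchdetail_ix_tranamount_idx;

Re-running EXPLAIN ANALYZE on the same query afterwards shows whether the row estimate for the tranamount range (44,010 estimated versus roughly 370,000 actual) has moved closer to reality.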
[ { "msg_contents": "In my current project, among over twenty main tables, three are master tables and the rest are their \nmultivalued detail tables. All of those tables have a field named userid, which is of varchar type.\n\nA selection statement, having one level of subquery, can involve up to twenty tables. After some query statement tuning \nand indexing (on all of the multivalued fields of all the detail tables), query performance improves spectacularly. The \nfollowing is the output of 'explain analyze' on a query involving ten tables.\n\nNOTICE: QUERY PLAN:\n\nResult (cost=0.00..23.42 rows=1 width=939) (actual time=28.00..28.00 rows=0 loops=1)\n. . .\nTotal runtime: 31.00 msec\n\nI would like to find any remaining performance improvement potential in terms of DB table design (indexing can always be done \nlater). The first thing I know I can do is to change the join key, userid, to a numeric type. Since implementing that change \nrequires some work on the system, I would like to know how significant a performance improvement it can bring.\n\nAlmost all fields of those tables are a single-digit character. I can guess that changing them to a number type could also \nimprove selection performance. My question again is how much that would gain. The application is a web application, and all \ndata on a page is a string, or text type. All number-type data has to be parsed from a string coming from the front end, and \nconverted back into a string going out to it, so that change will add some overhead in the application.\n\nThanks for your input.\n\nVernon \n", "msg_date": "Tue, 14 Jan 2003 11:42:07 -0800", "msg_from": "Vernon Wu <vernonw@gatewaytech.com>", "msg_from_op": true, "msg_subject": "How good I can get" } ]
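The change Vernon is weighing is easier to picture with a concrete shape. The sketch below is only an illustration of swapping a varchar join key for an integer surrogate key; the table and column names are hypothetical, and the thread itself offers no measurement of how much the switch buys.

    -- hypothetical master/detail pair joined on an integer surrogate key
    CREATE TABLE person (
        person_id   serial PRIMARY KEY,      -- 4-byte join key
        userid      varchar(32) UNIQUE       -- original natural key, kept for the application
    );

    CREATE TABLE hobby (
        person_id   integer NOT NULL REFERENCES person (person_id),
        hobby       varchar(64)
    );
    CREATE INDEX hobby_person_id_idx ON hobby (person_id);

    -- joins now compare integers rather than varchar values
    SELECT p.userid, h.hobby
      FROM person p
      JOIN hobby h ON h.person_id = p.person_id
     WHERE p.userid = 'someone';

The application can keep passing the varchar userid around; only the internal joins change to the integer key.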
[ { "msg_contents": "***********************\r\n> Josh Berkus wrote:\r\n> Hey, Roman, how many records in BatchDetail, anyway?\r\n\r\n23 million.\r\n \r\n***********************\r\n> Stephan Szabo wrote:\r\n> What does vacuum verbose batchdetail give you (it'll give an idea of pages anyway)\r\n\r\ntrans=# VACUUM VERBOSE batchdetail;\r\nINFO: --Relation public.batchdetail--\r\nINFO: Pages 1669047: Changed 0, Empty 0; Tup 23316674: Vac 0, Keep 0, UnUsed 0.\r\n Total CPU 85.36s/9.38u sec elapsed 253.38 sec.\r\nINFO: --Relation pg_toast.pg_toast_8604247--\r\nINFO: Pages 0: Changed 0, Empty 0; Tup 0: Vac 0, Keep 0, UnUsed 0.\r\n Total CPU 0.00s/0.00u sec elapsed 0.00 sec.\r\nVACUUM\r\ntrans=#\r\n\r\n***********************\r\nAt Stephan Szabo and Tom Lane's suggestion, I reorganized the query so the JOIN syntax was only used in the outer joins. This did not seem to help at all. Of note: during this query 'sar -b' showed a consistent 6000 blocks read/sec, CPU was about 2%.\r\n \r\nEXPLAIN ANALYZE\r\nSELECT b.batchdate, d.batchdetailid, t.bankno, d.trandate, d.tranamount,\r\nd.submitinterchange, d.authamount, d.authno, d.cardtypeid, d.mcccode,\r\nm.name AS merchantname, c.cardtype, m.merchid,\r\np1.localtaxamount, p1.productidentifier, dr.avsresponse,\r\ncr.checkoutdate, cr.noshowindicator, ck.checkingacctno,\r\nck.abaroutingno, ck.checkno\r\nFROM tranheader t, batchheader b, merchants m, cardtype c, batchdetail d\r\nLEFT JOIN purc1 p1 on p1.batchdetailid=d.batchdetailid\r\nLEFT JOIN direct dr ON dr.batchdetailid = d.batchdetailid\r\nLEFT JOIN carrental cr ON cr.batchdetailid = d.batchdetailid\r\nLEFT JOIN checks ck ON ck.batchdetailid = d.batchdetailid\r\nWHERE t.tranheaderid=b.tranheaderid\r\nAND m.merchantid=b.merchantid\r\nAND d.batchid=b.batchid\r\nAND c.cardtypeid=d.cardtypeid\r\nAND t.clientid = 6\r\nAND d.tranamount BETWEEN 500.0 AND 700.0\r\nAND b.batchdate > '2002-12-15'\r\nAND m.merchid = '701252267'\r\nORDER BY b.batchdate DESC\r\nLIMIT 50\r\nLimit (cost=1789105.21..1789105.22 rows=1 width=285) (actual time=1222029.59..1222029.61 rows=5 loops=1)\r\n -> Sort (cost=1789105.21..1789105.22 rows=1 width=285) (actual time=1222029.58..1222029.59 rows=5 loops=1)\r\n Sort Key: b.batchdate\r\n -> Nested Loop (cost=1787171.22..1789105.20 rows=1 width=285) (actual time=1221815.14..1222019.46 rows=5 loops=1)\r\n Join Filter: (\"inner\".tranheaderid = \"outer\".tranheaderid)\r\n -> Nested Loop (cost=1787171.22..1789026.02 rows=1 width=269) (actual time=1221809.33..1221978.62 rows=5 loops=1)\r\n Join Filter: (\"inner\".cardtypeid = \"outer\".cardtypeid)\r\n -> Merge Join (cost=1787171.22..1789024.79 rows=1 width=255) (actual time=1221802.47..1221971.48 rows=5 loops=1)\r\n Merge Cond: (\"outer\".batchid = \"inner\".batchid)\r\n -> Sort (cost=476.17..476.18 rows=4 width=102) (actual time=678.05..678.07 rows=17 loops=1)\r\n Sort Key: b.batchid\r\n -> Nested Loop (cost=0.00..476.14 rows=4 width=102) (actual time=161.62..677.95 rows=17 loops=1)\r\n -> Index Scan using merchants_ix_merchid_idx on merchants m (cost=0.00..5.65 rows=1 width=78) (actual time=13.87..13.88 rows=1 loops=1)\r\n Index Cond: (merchid = '701252267'::character varying)\r\n -> Index Scan using batchheader_ix_merchantid_idx on batchheader b (cost=0.00..470.30 rows=15 width=24) (actual time=147.72..663.94 rows=17 loops=1)\r\n Index Cond: (\"outer\".merchantid = b.merchantid)\r\n Filter: (batchdate > '2002-12-15 00:00:00'::timestamp without time zone)\r\n -> Sort (cost=1786695.05..1787621.82 rows=370710 width=153) (actual 
time=1220080.34..1220722.19 rows=368681 loops=1)\r\n Sort Key: d.batchid\r\n -> Merge Join (cost=1704191.25..1713674.49 rows=370710 width=153) (actual time=1200184.91..1213352.77 rows=370307 loops=1)\r\n Merge Cond: (\"outer\".batchdetailid = \"inner\".batchdetailid)\r\n -> Merge Join (cost=1704085.28..1712678.33 rows=370710 width=115) (actual time=1199705.71..1210336.37 rows=370307 loops=1)\r\n Merge Cond: (\"outer\".batchdetailid = \"inner\".batchdetailid)\r\n -> Merge Join (cost=1704085.27..1711751.54 rows=370710 width=98) (actual time=1199705.65..1208122.73 rows=370307 loops=1)\r\n Merge Cond: (\"outer\".batchdetailid = \"inner\".batchdetailid)\r\n -> Merge Join (cost=1704085.26..1710824.75 rows=370710 width=89) (actual time=1199705.55..1205977.76 rows=370307 loops=1)\r\n Merge Cond: (\"outer\".batchdetailid = \"inner\".batchdetailid)\r\n -> Sort (cost=1543119.01..1544045.79 rows=370710 width=70) (actual time=1181172.79..1181902.77 rows=370307 loops=1)\r\n Sort Key: d.batchdetailid\r\n -> Index Scan using batchdetail_ix_tranamount_idx on batchdetail d (cost=0.00..1489103.46 rows=370710 width=70) (actual time=14.45..1176074.90 rows=370307 loops=1)\r\n Index Cond: ((tranamount >= 500.0) AND (tranamount <= 700.0))\r\n -> Sort (cost=160966.25..163319.59 rows=941335 width=19) (actual time=18532.70..20074.09 rows=938770 loops=1)\r\n Sort Key: p1.batchdetailid\r\n -> Seq Scan on purc1 p1 (cost=0.00..44285.35 rows=941335 width=19) (actual time=9.44..9119.83 rows=938770 loops=1)\r\n -> Sort (cost=0.01..0.02 rows=1 width=9) (actual time=0.08..0.08 rows=0 loops=1)\r\n Sort Key: dr.batchdetailid\r\n -> Seq Scan on direct dr (cost=0.00..0.00 rows=1 width=9) (actual time=0.01..0.01 rows=0 loops=1)\r\n -> Sort (cost=0.01..0.02 rows=1 width=17) (actual time=0.04..0.04 rows=0 loops=1)\r\n Sort Key: cr.batchdetailid\r\n -> Seq Scan on carrental cr (cost=0.00..0.00 rows=1 width=17) (actual time=0.00..0.00 rows=0 loops=1)\r\n -> Sort (cost=105.97..109.13 rows=1267 width=38) (actual time=479.17..480.74 rows=1267 loops=1)\r\n Sort Key: ck.batchdetailid\r\n -> Seq Scan on checks ck (cost=0.00..40.67 rows=1267 width=38) (actual time=447.88..475.60 rows=1267 loops=1)\r\n -> Seq Scan on cardtype c (cost=0.00..1.10 rows=10 width=14) (actual time=1.37..1.39 rows=10 loops=5)\r\n -> Seq Scan on tranheader t (cost=0.00..55.15 rows=1923 width=16) (actual time=0.01..5.14 rows=1923 loops=5)\r\n Filter: (clientid = 6)\r\nTotal runtime: 1222157.28 msec\r\n\r\n***********************\r\nJust to see what would happen, I executed:\r\n ALTER TABLE batchdetail ALTER COLUMN tranamount SET STATISTICS 1000;\r\n ANALYZE;\r\nIt seemed to hurt performance if anything. But the EXPLAIN estimate for rows was much closer to the real value than it was previously.\r\n \r\n***********************\r\nIt seems to me that the big, big isolated problem is the index scan on batchdetail.tranamount. During this small query, 'sar -b' showed consistent 90,000 block reads/sec. (contrast with only 6,000 with larger query index scan). 'top' shows the CPU is at 20% user, 30% system the whole time (contrast with 2% total in larger query above). This results here still seem pretty bad (although not as bad as above), but I still don't know what is the bottleneck. 
And the strange sar stats are confusing me.\r\n \r\nEXPLAIN ANALYZE SELECT * FROM batchdetail WHERE tranamount BETWEEN 300 AND 499;\r\nSeq Scan on batchdetail (cost=0.00..2018797.11 rows=783291 width=440) (actual time=45.66..283926.58 rows=783687 loops=1)\r\n Filter: ((tranamount >= 300::numeric) AND (tranamount <= 499::numeric))\r\nTotal runtime: 285032.47 msec\r\n\r\n \r\n***********************\r\n> Stephan Szabo wrote:\r\n> As a followup, if you do set enable_indexscan=off;\r\n> before running the explain analyze, what does that give you?\r\n\r\nNow this is very interesting: 'sar -b' shows about 95,000 block reads/sec; CPU is at 20% user 30% system, vmstat shows no swapping, query takes only 5 minutes to execute (which is one-quarter of the time WITH the index scan!!!!). Obviously the execution plan is pretty different on this one (query is identical the larger one above).\r\n \r\nEXPLAIN ANALYZE\r\nSELECT b.batchdate, d.batchdetailid, t.bankno, d.trandate, d.tranamount,\r\nd.submitinterchange, d.authamount, d.authno, d.cardtypeid, d.mcccode,\r\nm.name AS merchantname, c.cardtype, m.merchid,\r\np1.localtaxamount, p1.productidentifier, dr.avsresponse,\r\ncr.checkoutdate, cr.noshowindicator, ck.checkingacctno,\r\nck.abaroutingno, ck.checkno\r\nFROM tranheader t, batchheader b, merchants m, cardtype c,\r\nbatchdetail d\r\nLEFT JOIN purc1 p1 on p1.batchdetailid=d.batchdetailid\r\nLEFT JOIN direct dr ON dr.batchdetailid = d.batchdetailid\r\nLEFT JOIN carrental cr ON cr.batchdetailid = d.batchdetailid\r\nLEFT JOIN checks ck ON ck.batchdetailid = d.batchdetailid\r\nWHERE t.tranheaderid=b.tranheaderid\r\nAND m.merchantid=b.merchantid\r\nAND d.batchid=b.batchid\r\nAND c.cardtypeid=d.cardtypeid\r\nAND t.clientid = 6\r\nAND d.tranamount BETWEEN 500.0 AND 700.0\r\nAND b.batchdate > '2002-12-15'\r\nAND m.merchid = '701252267'\r\nORDER BY b.batchdate DESC\r\nLIMIT 50\r\nLimit (cost=2321460.56..2321460.57 rows=1 width=285) (actual time=308194.57..308194.59 rows=5 loops=1)\r\n -> Sort (cost=2321460.56..2321460.57 rows=1 width=285) (actual time=308194.57..308194.58 rows=5 loops=1)\r\n Sort Key: b.batchdate\r\n -> Nested Loop (cost=2319526.57..2321460.55 rows=1 width=285) (actual time=307988.56..308194.46 rows=5 loops=1)\r\n Join Filter: (\"inner\".tranheaderid = \"outer\".tranheaderid)\r\n -> Nested Loop (cost=2319526.57..2321381.37 rows=1 width=269) (actual time=307982.80..308153.22 rows=5 loops=1)\r\n Join Filter: (\"inner\".cardtypeid = \"outer\".cardtypeid)\r\n -> Merge Join (cost=2319526.57..2321380.14 rows=1 width=255) (actual time=307982.69..308152.82 rows=5 loops=1)\r\n Merge Cond: (\"outer\".batchid = \"inner\".batchid)\r\n -> Sort (cost=2316388.70..2317315.47 rows=370710 width=153) (actual time=305976.74..306622.88 rows=368681 loops=1)\r\n Sort Key: d.batchid\r\n -> Merge Join (cost=2233884.90..2243368.15 rows=370710 width=153) (actual time=286452.12..299485.43 rows=370307 loops=1)\r\n Merge Cond: (\"outer\".batchdetailid = \"inner\".batchdetailid)\r\n -> Merge Join (cost=2233778.93..2242371.98 rows=370710 width=115) (actual time=286428.77..296939.66 rows=370307 loops=1)\r\n Merge Cond: (\"outer\".batchdetailid = \"inner\".batchdetailid)\r\n -> Merge Join (cost=2233778.92..2241445.19 rows=370710 width=98) (actual time=286428.72..294750.01 rows=370307 loops=1)\r\n Merge Cond: (\"outer\".batchdetailid = \"inner\".batchdetailid)\r\n -> Merge Join (cost=2233778.91..2240518.40 rows=370710 width=89) (actual time=286428.60..292606.56 rows=370307 loops=1)\r\n Merge Cond: (\"outer\".batchdetailid = 
\"inner\".batchdetailid)\r\n -> Sort (cost=2072812.66..2073739.44 rows=370710 width=70) (actual time=269738.34..270470.83 rows=370307 loops=1)\r\n Sort Key: d.batchdetailid\r\n -> Seq Scan on batchdetail d (cost=0.00..2018797.11 rows=370710 width=70) (actual time=41.66..266568.83 rows=370307 loops=1)\r\n Filter: ((tranamount >= 500.0) AND (tranamount <= 700.0))\r\n -> Sort (cost=160966.25..163319.59 rows=941335 width=19) (actual time=16690.20..18202.65 rows=938770 loops=1)\r\n Sort Key: p1.batchdetailid\r\n -> Seq Scan on purc1 p1 (cost=0.00..44285.35 rows=941335 width=19) (actual time=6.88..7779.31 rows=938770 loops=1)\r\n -> Sort (cost=0.01..0.02 rows=1 width=9) (actual time=0.10..0.10 rows=0 loops=1)\r\n Sort Key: dr.batchdetailid\r\n -> Seq Scan on direct dr (cost=0.00..0.00 rows=1 width=9) (actual time=0.00..0.00 rows=0 loops=1)\r\n -> Sort (cost=0.01..0.02 rows=1 width=17) (actual time=0.03..0.03 rows=0 loops=1)\r\n Sort Key: cr.batchdetailid\r\n -> Seq Scan on carrental cr (cost=0.00..0.00 rows=1 width=17) (actual time=0.00..0.00 rows=0 loops=1)\r\n -> Sort (cost=105.97..109.13 rows=1267 width=38) (actual time=23.32..24.89 rows=1267 loops=1)\r\n Sort Key: ck.batchdetailid\r\n -> Seq Scan on checks ck (cost=0.00..40.67 rows=1267 width=38) (actual time=6.51..19.59 rows=1267 loops=1)\r\n -> Sort (cost=3137.87..3137.88 rows=4 width=102) (actual time=954.18..954.20 rows=19 loops=1)\r\n Sort Key: b.batchid\r\n -> Nested Loop (cost=0.00..3137.84 rows=4 width=102) (actual time=236.26..954.04 rows=17 loops=1)\r\n -> Seq Scan on merchants m (cost=0.00..2667.35 rows=1 width=78) (actual time=2.48..227.71 rows=1 loops=1)\r\n Filter: (merchid = '701252267'::character varying)\r\n -> Index Scan using batchheader_ix_merchantid_idx on batchheader b (cost=0.00..470.30 rows=15 width=24) (actual time=233.75..726.22 rows=17 loops=1)\r\n Index Cond: (\"outer\".merchantid = b.merchantid)\r\n Filter: (batchdate > '2002-12-15 00:00:00'::timestamp without time zone)\r\n -> Seq Scan on cardtype c (cost=0.00..1.10 rows=10 width=14) (actual time=0.02..0.04 rows=10 loops=5)\r\n -> Seq Scan on tranheader t (cost=0.00..55.15 rows=1923 width=16) (actual time=0.01..5.21 rows=1923 loops=5)\r\n Filter: (clientid = 6)\r\nTotal runtime: 308323.60 msec\r\n\r\n***********************\r\nI hope we can come up with something soon.....it seems this index scan is a big part of the problem. I'm still really curious why the disk reads are so few with the index scan. Let's hope I can get it near the 3 second time for MSSQL by Friday!\r\n \r\nRoman Fail\r\n \r\n", "msg_date": "Thu, 16 Jan 2003 01:03:43 -0800", "msg_from": "\"Roman Fail\" <rfail@posportal.com>", "msg_from_op": true, "msg_subject": "Re: 7.3.1 New install, large queries are slow" }, { "msg_contents": "On Thu, 2003-01-16 at 03:03, Roman Fail wrote:\n> ***********************\n> > Josh Berkus wrote:\n> > Hey, Roman, how many records in BatchDetail, anyway?\n> \n> 23 million.\n\nWhat are the indexes on batchdetail?\n\nThere's one on batchid and a seperate one on tranamount?\n\nIf so, what about dropping them and create a single multi-segment\nindex on \"batchid, tranamount\". 
(A constraint can then enforce\nuniqueness on batchid.\n\n> ***********************\n> > Stephan Szabo wrote:\n> > What does vacuum verbose batchdetail give you (it'll give an idea of pages anyway)\n> \n> trans=# VACUUM VERBOSE batchdetail;\n> INFO: --Relation public.batchdetail--\n> INFO: Pages 1669047: Changed 0, Empty 0; Tup 23316674: Vac 0, Keep 0, UnUsed 0.\n> Total CPU 85.36s/9.38u sec elapsed 253.38 sec.\n> INFO: --Relation pg_toast.pg_toast_8604247--\n> INFO: Pages 0: Changed 0, Empty 0; Tup 0: Vac 0, Keep 0, UnUsed 0.\n> Total CPU 0.00s/0.00u sec elapsed 0.00 sec.\n> VACUUM\n> trans=#\n> \n> ***********************\n> At Stephan Szabo and Tom Lane's suggestion, I reorganized the query\n> so the JOIN syntax was only used in the outer joins. This did not\n> seem to help at all. Of note: during this query 'sar -b' showed a\n> consistent 6000 blocks read/sec, CPU was about 2%.\n> \n> EXPLAIN ANALYZE\n> SELECT b.batchdate, d.batchdetailid, t.bankno, d.trandate, d.tranamount,\n> d.submitinterchange, d.authamount, d.authno, d.cardtypeid, d.mcccode,\n> m.name AS merchantname, c.cardtype, m.merchid,\n> p1.localtaxamount, p1.productidentifier, dr.avsresponse,\n> cr.checkoutdate, cr.noshowindicator, ck.checkingacctno,\n> ck.abaroutingno, ck.checkno\n> FROM tranheader t, batchheader b, merchants m, cardtype c, batchdetail d\n> LEFT JOIN purc1 p1 on p1.batchdetailid=d.batchdetailid\n> LEFT JOIN direct dr ON dr.batchdetailid = d.batchdetailid\n> LEFT JOIN carrental cr ON cr.batchdetailid = d.batchdetailid\n> LEFT JOIN checks ck ON ck.batchdetailid = d.batchdetailid\n> WHERE t.tranheaderid=b.tranheaderid\n> AND m.merchantid=b.merchantid\n> AND d.batchid=b.batchid\n> AND c.cardtypeid=d.cardtypeid\n> AND t.clientid = 6\n> AND d.tranamount BETWEEN 500.0 AND 700.0\n> AND b.batchdate > '2002-12-15'\n> AND m.merchid = '701252267'\n> ORDER BY b.batchdate DESC\n> LIMIT 50\n> Limit (cost=1789105.21..1789105.22 rows=1 width=285) (actual time=1222029.59..1222029.61 rows=5 loops=1)\n> -> Sort (cost=1789105.21..1789105.22 rows=1 width=285) (actual time=1222029.58..1222029.59 rows=5 loops=1)\n> Sort Key: b.batchdate\n> -> Nested Loop (cost=1787171.22..1789105.20 rows=1 width=285) (actual time=1221815.14..1222019.46 rows=5 loops=1)\n> Join Filter: (\"inner\".tranheaderid = \"outer\".tranheaderid)\n> -> Nested Loop (cost=1787171.22..1789026.02 rows=1 width=269) (actual time=1221809.33..1221978.62 rows=5 loops=1)\n> Join Filter: (\"inner\".cardtypeid = \"outer\".cardtypeid)\n> -> Merge Join (cost=1787171.22..1789024.79 rows=1 width=255) (actual time=1221802.47..1221971.48 rows=5 loops=1)\n> Merge Cond: (\"outer\".batchid = \"inner\".batchid)\n> -> Sort (cost=476.17..476.18 rows=4 width=102) (actual time=678.05..678.07 rows=17 loops=1)\n> Sort Key: b.batchid\n> -> Nested Loop (cost=0.00..476.14 rows=4 width=102) (actual time=161.62..677.95 rows=17 loops=1)\n> -> Index Scan using merchants_ix_merchid_idx on merchants m (cost=0.00..5.65 rows=1 width=78) (actual time=13.87..13.88 rows=1 loops=1)\n> Index Cond: (merchid = '701252267'::character varying)\n> -> Index Scan using batchheader_ix_merchantid_idx on batchheader b (cost=0.00..470.30 rows=15 width=24) (actual time=147.72..663.94 rows=17 loops=1)\n> Index Cond: (\"outer\".merchantid = b.merchantid)\n> Filter: (batchdate > '2002-12-15 00:00:00'::timestamp without time zone)\n> -> Sort (cost=1786695.05..1787621.82 rows=370710 width=153) (actual time=1220080.34..1220722.19 rows=368681 loops=1)\n> Sort Key: d.batchid\n> -> Merge Join (cost=1704191.25..1713674.49 
rows=370710 width=153) (actual time=1200184.91..1213352.77 rows=370307 loops=1)\n> Merge Cond: (\"outer\".batchdetailid = \"inner\".batchdetailid)\n> -> Merge Join (cost=1704085.28..1712678.33 rows=370710 width=115) (actual time=1199705.71..1210336.37 rows=370307 loops=1)\n> Merge Cond: (\"outer\".batchdetailid = \"inner\".batchdetailid)\n> -> Merge Join (cost=1704085.27..1711751.54 rows=370710 width=98) (actual time=1199705.65..1208122.73 rows=370307 loops=1)\n> Merge Cond: (\"outer\".batchdetailid = \"inner\".batchdetailid)\n> -> Merge Join (cost=1704085.26..1710824.75 rows=370710 width=89) (actual time=1199705.55..1205977.76 rows=370307 loops=1)\n> Merge Cond: (\"outer\".batchdetailid = \"inner\".batchdetailid)\n> -> Sort (cost=1543119.01..1544045.79 rows=370710 width=70) (actual time=1181172.79..1181902.77 rows=370307 loops=1)\n> Sort Key: d.batchdetailid\n> -> Index Scan using batchdetail_ix_tranamount_idx on batchdetail d (cost=0.00..1489103.46 rows=370710 width=70) (actual time=14.45..1176074.90 rows=370307 loops=1)\n> Index Cond: ((tranamount >= 500.0) AND (tranamount <= 700.0))\n> -> Sort (cost=160966.25..163319.59 rows=941335 width=19) (actual time=18532.70..20074.09 rows=938770 loops=1)\n> Sort Key: p1.batchdetailid\n> -> Seq Scan on purc1 p1 (cost=0.00..44285.35 rows=941335 width=19) (actual time=9.44..9119.83 rows=938770 loops=1)\n> -> Sort (cost=0.01..0.02 rows=1 width=9) (actual time=0.08..0.08 rows=0 loops=1)\n> Sort Key: dr.batchdetailid\n> -> Seq Scan on direct dr (cost=0.00..0.00 rows=1 width=9) (actual time=0.01..0.01 rows=0 loops=1)\n> -> Sort (cost=0.01..0.02 rows=1 width=17) (actual time=0.04..0.04 rows=0 loops=1)\n> Sort Key: cr.batchdetailid\n> -> Seq Scan on carrental cr (cost=0.00..0.00 rows=1 width=17) (actual time=0.00..0.00 rows=0 loops=1)\n> -> Sort (cost=105.97..109.13 rows=1267 width=38) (actual time=479.17..480.74 rows=1267 loops=1)\n> Sort Key: ck.batchdetailid\n> -> Seq Scan on checks ck (cost=0.00..40.67 rows=1267 width=38) (actual time=447.88..475.60 rows=1267 loops=1)\n> -> Seq Scan on cardtype c (cost=0.00..1.10 rows=10 width=14) (actual time=1.37..1.39 rows=10 loops=5)\n> -> Seq Scan on tranheader t (cost=0.00..55.15 rows=1923 width=16) (actual time=0.01..5.14 rows=1923 loops=5)\n> Filter: (clientid = 6)\n> Total runtime: 1222157.28 msec\n> \n> ***********************\n> Just to see what would happen, I executed:\n> ALTER TABLE batchdetail ALTER COLUMN tranamount SET STATISTICS 1000;\n> ANALYZE;\n> It seemed to hurt performance if anything. But the EXPLAIN estimate \n> for rows was much closer to the real value than it was previously.\n> \n> ***********************\n> It seems to me that the big, big isolated problem is the index scan on\n> batchdetail.tranamount. During this small query, 'sar -b' showed\n> consistent 90,000 block reads/sec. (contrast with only 6,000 with\n> larger query index scan). 'top' shows the CPU is at 20% user, 30%\n> system the whole time (contrast with 2% total in larger query above). \n> This results here still seem pretty bad (although not as bad as\n> above), but I still don't know what is the bottleneck. 
And the\n> strange sar stats are confusing me.\n> \n> EXPLAIN ANALYZE SELECT * FROM batchdetail WHERE tranamount BETWEEN 300 AND 499;\n> Seq Scan on batchdetail (cost=0.00..2018797.11 rows=783291 width=440) (actual time=45.66..283926.58 rows=783687 loops=1)\n> Filter: ((tranamount >= 300::numeric) AND (tranamount <= 499::numeric))\n> Total runtime: 285032.47 msec\n> \n> \n> ***********************\n> > Stephan Szabo wrote:\n> > As a followup, if you do set enable_indexscan=off;\n> > before running the explain analyze, what does that give you?\n> \n> Now this is very interesting: 'sar -b' shows about 95,000 block\n> reads/sec; CPU is at 20% user 30% system, vmstat shows no swapping,\n> query takes only 5 minutes to execute (which is one-quarter of the\n> time WITH the index scan!!!!). Obviously the execution plan is pretty\n> different on this one (query is identical the larger one above).\n> \n> EXPLAIN ANALYZE\n> SELECT b.batchdate, d.batchdetailid, t.bankno, d.trandate, d.tranamount,\n> d.submitinterchange, d.authamount, d.authno, d.cardtypeid, d.mcccode,\n> m.name AS merchantname, c.cardtype, m.merchid,\n> p1.localtaxamount, p1.productidentifier, dr.avsresponse,\n> cr.checkoutdate, cr.noshowindicator, ck.checkingacctno,\n> ck.abaroutingno, ck.checkno\n> FROM tranheader t, batchheader b, merchants m, cardtype c,\n> batchdetail d\n> LEFT JOIN purc1 p1 on p1.batchdetailid=d.batchdetailid\n> LEFT JOIN direct dr ON dr.batchdetailid = d.batchdetailid\n> LEFT JOIN carrental cr ON cr.batchdetailid = d.batchdetailid\n> LEFT JOIN checks ck ON ck.batchdetailid = d.batchdetailid\n> WHERE t.tranheaderid=b.tranheaderid\n> AND m.merchantid=b.merchantid\n> AND d.batchid=b.batchid\n> AND c.cardtypeid=d.cardtypeid\n> AND t.clientid = 6\n> AND d.tranamount BETWEEN 500.0 AND 700.0\n> AND b.batchdate > '2002-12-15'\n> AND m.merchid = '701252267'\n> ORDER BY b.batchdate DESC\n> LIMIT 50\n> Limit (cost=2321460.56..2321460.57 rows=1 width=285) (actual time=308194.57..308194.59 rows=5 loops=1)\n> -> Sort (cost=2321460.56..2321460.57 rows=1 width=285) (actual time=308194.57..308194.58 rows=5 loops=1)\n> Sort Key: b.batchdate\n> -> Nested Loop (cost=2319526.57..2321460.55 rows=1 width=285) (actual time=307988.56..308194.46 rows=5 loops=1)\n> Join Filter: (\"inner\".tranheaderid = \"outer\".tranheaderid)\n> -> Nested Loop (cost=2319526.57..2321381.37 rows=1 width=269) (actual time=307982.80..308153.22 rows=5 loops=1)\n> Join Filter: (\"inner\".cardtypeid = \"outer\".cardtypeid)\n> -> Merge Join (cost=2319526.57..2321380.14 rows=1 width=255) (actual time=307982.69..308152.82 rows=5 loops=1)\n> Merge Cond: (\"outer\".batchid = \"inner\".batchid)\n> -> Sort (cost=2316388.70..2317315.47 rows=370710 width=153) (actual time=305976.74..306622.88 rows=368681 loops=1)\n> Sort Key: d.batchid\n> -> Merge Join (cost=2233884.90..2243368.15 rows=370710 width=153) (actual time=286452.12..299485.43 rows=370307 loops=1)\n> Merge Cond: (\"outer\".batchdetailid = \"inner\".batchdetailid)\n> -> Merge Join (cost=2233778.93..2242371.98 rows=370710 width=115) (actual time=286428.77..296939.66 rows=370307 loops=1)\n> Merge Cond: (\"outer\".batchdetailid = \"inner\".batchdetailid)\n> -> Merge Join (cost=2233778.92..2241445.19 rows=370710 width=98) (actual time=286428.72..294750.01 rows=370307 loops=1)\n> Merge Cond: (\"outer\".batchdetailid = \"inner\".batchdetailid)\n> -> Merge Join (cost=2233778.91..2240518.40 rows=370710 width=89) (actual time=286428.60..292606.56 rows=370307 loops=1)\n> Merge Cond: (\"outer\".batchdetailid = 
\"inner\".batchdetailid)\n> -> Sort (cost=2072812.66..2073739.44 rows=370710 width=70) (actual time=269738.34..270470.83 rows=370307 loops=1)\n> Sort Key: d.batchdetailid\n> -> Seq Scan on batchdetail d (cost=0.00..2018797.11 rows=370710 width=70) (actual time=41.66..266568.83 rows=370307 loops=1)\n> Filter: ((tranamount >= 500.0) AND (tranamount <= 700.0))\n> -> Sort (cost=160966.25..163319.59 rows=941335 width=19) (actual time=16690.20..18202.65 rows=938770 loops=1)\n> Sort Key: p1.batchdetailid\n> -> Seq Scan on purc1 p1 (cost=0.00..44285.35 rows=941335 width=19) (actual time=6.88..7779.31 rows=938770 loops=1)\n> -> Sort (cost=0.01..0.02 rows=1 width=9) (actual time=0.10..0.10 rows=0 loops=1)\n> Sort Key: dr.batchdetailid\n> -> Seq Scan on direct dr (cost=0.00..0.00 rows=1 width=9) (actual time=0.00..0.00 rows=0 loops=1)\n> -> Sort (cost=0.01..0.02 rows=1 width=17) (actual time=0.03..0.03 rows=0 loops=1)\n> Sort Key: cr.batchdetailid\n> -> Seq Scan on carrental cr (cost=0.00..0.00 rows=1 width=17) (actual time=0.00..0.00 rows=0 loops=1)\n> -> Sort (cost=105.97..109.13 rows=1267 width=38) (actual time=23.32..24.89 rows=1267 loops=1)\n> Sort Key: ck.batchdetailid\n> -> Seq Scan on checks ck (cost=0.00..40.67 rows=1267 width=38) (actual time=6.51..19.59 rows=1267 loops=1)\n> -> Sort (cost=3137.87..3137.88 rows=4 width=102) (actual time=954.18..954.20 rows=19 loops=1)\n> Sort Key: b.batchid\n> -> Nested Loop (cost=0.00..3137.84 rows=4 width=102) (actual time=236.26..954.04 rows=17 loops=1)\n> -> Seq Scan on merchants m (cost=0.00..2667.35 rows=1 width=78) (actual time=2.48..227.71 rows=1 loops=1)\n> Filter: (merchid = '701252267'::character varying)\n> -> Index Scan using batchheader_ix_merchantid_idx on batchheader b (cost=0.00..470.30 rows=15 width=24) (actual time=233.75..726.22 rows=17 loops=1)\n> Index Cond: (\"outer\".merchantid = b.merchantid)\n> Filter: (batchdate > '2002-12-15 00:00:00'::timestamp without time zone)\n> -> Seq Scan on cardtype c (cost=0.00..1.10 rows=10 width=14) (actual time=0.02..0.04 rows=10 loops=5)\n> -> Seq Scan on tranheader t (cost=0.00..55.15 rows=1923 width=16) (actual time=0.01..5.21 rows=1923 loops=5)\n> Filter: (clientid = 6)\n> Total runtime: 308323.60 msec\n> \n> ***********************\n> I hope we can come up with something soon.....it seems this index\n> scan is a big part of the problem. I'm still really curious why the\n> disk reads are so few with the index scan. Let's hope I can get it\n> near the 3 second time for MSSQL by Friday!\n\n-- \n+------------------------------------------------------------+\n| Ron Johnson, Jr. mailto:ron.l.johnson@cox.net |\n| Jefferson, LA USA http://members.cox.net/ron.l.johnson |\n| |\n| \"Basically, I got on the plane with a bomb. Basically, I |\n| tried to ignite it. 
Basically, yeah, I intended to damage |\n| the plane.\" |\n| RICHARD REID, who tried to blow up American Airlines |\n| Flight 63 |\n+------------------------------------------------------------+\n\n", "msg_date": "16 Jan 2003 06:29:43 -0600", "msg_from": "Ron Johnson <ron.l.johnson@cox.net>", "msg_from_op": false, "msg_subject": "Re: 7.3.1 New install, large queries are slow" }, { "msg_contents": "On Thu, 2003-01-16 at 07:29, Ron Johnson wrote:\n> On Thu, 2003-01-16 at 03:03, Roman Fail wrote:\n> > ***********************\n> > > Josh Berkus wrote:\n> > > Hey, Roman, how many records in BatchDetail, anyway?\n> > \n> > 23 million.\n> \n> What are the indexes on batchdetail?\n> \n> There's one on batchid and a seperate one on tranamount?\n> \n> If so, what about dropping them and create a single multi-segment\n> index on \"batchid, tranamount\". (A constraint can then enforce\n> uniqueness on batchid.\n\nThats a good step. Once done, CLUSTER by that index -- might buy 10 to\n20% extra.\n\n\n-- \nRod Taylor <rbt@rbt.ca>\n\nPGP Key: http://www.rbt.ca/rbtpub.asc", "msg_date": "16 Jan 2003 09:23:32 -0500", "msg_from": "Rod Taylor <rbt@rbt.ca>", "msg_from_op": false, "msg_subject": "Re: 7.3.1 New install, large queries are slow" }, { "msg_contents": "\"Roman Fail\" <rfail@posportal.com> writes:\n> SELECT ...\n> FROM tranheader t, batchheader b, merchants m, cardtype c, (batchdetail d\n> LEFT JOIN purc1 p1 on p1.batchdetailid=d.batchdetailid\n> LEFT JOIN direct dr ON dr.batchdetailid = d.batchdetailid\n> LEFT JOIN carrental cr ON cr.batchdetailid = d.batchdetailid\n> LEFT JOIN checks ck ON ck.batchdetailid = d.batchdetailid)\n> WHERE t.tranheaderid=b.tranheaderid\n> AND m.merchantid=b.merchantid\n> AND d.batchid=b.batchid\n> AND c.cardtypeid=d.cardtypeid\n> AND t.clientid = 6\n> AND d.tranamount BETWEEN 500.0 AND 700.0\n> AND b.batchdate > '2002-12-15'\n> AND m.merchid = '701252267'\n\nNo no no ... this is even worse than before. Your big tables are\nbatchdetail (d) and purc1 (p1). What you've got to do is arrange the\ncomputation so that those are trimmed to just the interesting records as\nsoon as possible. The constraint on d.tranamount helps, but after that\nyou proceed to join d to p1 *first*, before any of the other constraints\ncan be applied. That's a huge join that you then proceed to throw away\nmost of, as shown by the row counts in the EXPLAIN output.\n\nNote the parentheses I added above to show how the system interprets\nyour FROM clause. Since dr,cr,ck are contributing nothing to\nelimination of records, you really want them joined last, not first.\n\nWhat would probably work better is\n\nSELECT ...\nFROM\n (SELECT ...\n FROM tranheader t, batchheader b, merchants m, cardtype c, batchdetail d\n WHERE t.tranheaderid=b.tranheaderid\n AND m.merchantid=b.merchantid\n AND d.batchid=b.batchid\n AND c.cardtypeid=d.cardtypeid\n AND t.clientid = 6\n AND d.tranamount BETWEEN 500.0 AND 700.0\n AND b.batchdate > '2002-12-15'\n AND m.merchid = '701252267') ss\n LEFT JOIN purc1 p1 on p1.batchdetailid=ss.batchdetailid\n LEFT JOIN direct dr ON dr.batchdetailid = ss.batchdetailid\n LEFT JOIN carrental cr ON cr.batchdetailid = ss.batchdetailid\n LEFT JOIN checks ck ON ck.batchdetailid = ss.batchdetailid\n\nwhich lets the system get the useful restrictions applied before it has\nto finish expanding out the star query. 
Since cardtype isn't\ncontributing any restrictions, you might think about moving it into the\nLEFT JOIN series too (although I think the planner will choose to join\nit last in the subselect, anyway).\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 16 Jan 2003 10:13:37 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: 7.3.1 New install, large queries are slow " }, { "msg_contents": "\nOn Thu, 16 Jan 2003, Roman Fail wrote:\n\n> ***********************\n\nHmm, I wonder if maybe we're going about things backwards in this\ncase. Does the original database have something like EXPLAIN\nthat'll show what it's doing? Perhaps that'll give an idea.\n\n> > What does vacuum verbose batchdetail give you (it'll give an idea of pages anyway)\n>\n> trans=# VACUUM VERBOSE batchdetail;\n> INFO: --Relation public.batchdetail--\n> INFO: Pages 1669047: Changed 0, Empty 0; Tup 23316674: Vac 0, Keep 0, UnUsed 0.\n\nSo about 12 gigabytes of data, then?\n\n\n> It seems to me that the big, big isolated problem is the index scan on\n> batchdetail.tranamount. During this small query, 'sar -b' showed\n> consistent 90,000 block reads/sec. (contrast with only 6,000 with\n> larger query index scan). 'top' shows the CPU is at 20% user, 30%\n> system the whole time (contrast with 2% total in larger query above).\n\nNote that in this case below, you've gotten a sequence scan not an\nindex scan. (similar to setting enable_indexscan=off performance)\n\n> This results here still seem pretty bad (although not as bad as\n> above), but I still don't know what is the bottleneck. And the\n> strange sar stats are confusing me.\n>\n> EXPLAIN ANALYZE SELECT * FROM batchdetail WHERE tranamount BETWEEN 300 AND 499;\n> Seq Scan on batchdetail (cost=0.00..2018797.11 rows=783291 width=440) (actual time=45.66..283926.58 rows=783687 loops=1)\n> Filter: ((tranamount >= 300::numeric) AND (tranamount <= 499::numeric))\n> Total runtime: 285032.47 msec\n\nI'd assume that tranamount values are fairly randomly distributed\nthroughout the table, right? It takes about 5 minutes for the\nsystem to read the entire table and more for the index scan, so\nyou're probably reading most of the table randomly and the index\nas well.\n\nWhat values on batchdetail do you use in query where clauses\nregularly? It's possible that occasional clusters would help\nif this was the main field you filtered on. The cluster\nitself is time consuming, but it might help make the index\nscans actually read fewer pages.\n\n", "msg_date": "Thu, 16 Jan 2003 09:02:38 -0800 (PST)", "msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>", "msg_from_op": false, "msg_subject": "Re: 7.3.1 New install, large queries are slow" }, { "msg_contents": "Roman, Tom:\n\n> No no no ... this is even worse than before. Your big tables are\n> batchdetail (d) and purc1 (p1). What you've got to do is arrange the\n> computation so that those are trimmed to just the interesting records\n> as\n> soon as possible. 
\n\nWhen joining disproportionally large tables, I've also had some success\nwith the following method:\n\nSELECT b.batchdate, d.batchdetailid, t.bankno, d.trandate,\nd.tranamount, \nd.submitinterchange, d.authamount, d.authno, d.cardtypeid, d.mcccode, \nm.name AS merchantname, c.cardtype, m.merchid, \np1.localtaxamount, p1.productidentifier, dr.avsresponse, \ncr.checkoutdate, cr.noshowindicator, ck.checkingacctno, \nck.abaroutingno, ck.checkno \nFROM tranheader t \nJOIN batchheader b ON (t.tranheaderid = b.tranheaderid AND b.batchdate\n> '2002-12-15')\nJOIN merchants m ON (m.merchantid = b.merchantid AND mmerchid =\n'701252267')\nJOIN batchdetail d ON (d.batchid = b.batchid AND d.tranamount BETWEEN\n500 and 700)\nJOIN cardtype c ON d.cardtypeid = c.cardtypeid \nLEFT JOIN purc1 p1 ON p1.batchdetailid = d.batchdetailid \nLEFT JOIN direct dr ON dr.batchdetailid = d.batchdetailid \nLEFT JOIN carrental cr ON cr.batchdetailid = d.batchdetailid \nLEFT JOIN checks ck ON ck.batchdetailid = d.batchdetailid \nWHERE t.clientid = 6 \nAND d.tranamount BETWEEN 500.0 AND 700.0 \nAND b.batchdate > '2002-12-15' \nAND m.merchid = '701252267' \nORDER BY b.batchdate DESC \nLIMIT 50\n\nThis could be re-arranged some, but I think you get the idea ... I've\nbeen able, in some queries, to get the planner to use a better and\nfaster join strategy by repeating my WHERE conditions in the JOIN\ncriteria.\n\n-Josh\n\n", "msg_date": "Thu, 16 Jan 2003 09:16:33 -0800", "msg_from": "\"Josh Berkus\" <josh@agliodbs.com>", "msg_from_op": false, "msg_subject": "Re: 7.3.1 New install, large queries are slow " }, { "msg_contents": "Roman,\n\n> > Hey, Roman, how many records in BatchDetail, anyway?\n> \n> 23 million.\n\nAnd MSSQL is returning results in 3 seconds? I find that a bit hard\nto believe, unless this query is called repeatedly and that's the\nfigure for the last call, where the records are being cached. I'll\nhave to look at your hardware descriptions again.\n\n> It seems to me that the big, big isolated problem is the index scan\n> on batchdetail.tranamount. \n\nNope. This was a misimpression caused by batchdetail waiting for a\nbunch of other processes to complete. Sometimes the parallelizing\ngives me a wrong impression of what's holding up the query. Sorry if I\nconfused you.\n\n> I hope we can come up with something soon.....it seems this index\n> scan is a big part of the problem. I'm still really curious why the\n> disk reads are so few with the index scan. Let's hope I can get it\n> near the 3 second time for MSSQL by Friday!\n\nUm, Roman, keep in mind this is a mailing list. I'm sure that\neveryone here is happy to give you the tools to figure out how to fix\nthings, but only in a DIY fashion, and not on your schedule. \n\nIf you have a deadline, you'd better hire some paid query/database\ntuning help. DB Tuning experts .... whether on MSSQL or Postgres ...\nrun about $250/hour last I checked.\n\n-Josh Berkus\n", "msg_date": "Thu, 16 Jan 2003 09:28:28 -0800", "msg_from": "\"Josh Berkus\" <josh@agliodbs.com>", "msg_from_op": false, "msg_subject": "Re: 7.3.1 New install, large queries are slow" }, { "msg_contents": "\"Josh Berkus\" <josh@agliodbs.com> writes:\n> This could be re-arranged some, but I think you get the idea ... I've\n> been able, in some queries, to get the planner to use a better and\n> faster join strategy by repeating my WHERE conditions in the JOIN\n> criteria.\n\nHm. 
It shouldn't be necessary to do that --- the planner should be able\nto push down the WHERE conditions to the right place without that help.\n\nThe list of explicit JOINs as you have here is a good way to proceed\n*if* you write the JOINs in an appropriate order for implementation.\nI believe the problem with Roman's original query was that he listed\nthe JOINs in a bad order. Unfortunately I didn't keep a copy of that\nmessage, and the list archives seem to be a day or more behind...\nbut at least for these WHERE conditions, it looks like the best bet\nwould to join m to b (I'm assuming m.merchid is unique), then to t,\nthen to d, then add on the others.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 16 Jan 2003 12:30:09 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: 7.3.1 New install, large queries are slow " }, { "msg_contents": "\nOn Thu, 16 Jan 2003, Josh Berkus wrote:\n\n> Roman,\n>\n> > > Hey, Roman, how many records in BatchDetail, anyway?\n> >\n> > 23 million.\n>\n> And MSSQL is returning results in 3 seconds? I find that a bit hard\n> to believe, unless this query is called repeatedly and that's the\n> figure for the last call, where the records are being cached. I'll\n> have to look at your hardware descriptions again.\n>\n> > It seems to me that the big, big isolated problem is the index scan\n> > on batchdetail.tranamount.\n>\n> Nope. This was a misimpression caused by batchdetail waiting for a\n> bunch of other processes to complete. Sometimes the parallelizing\n> gives me a wrong impression of what's holding up the query. Sorry if I\n> confused you.\n\nI'm still not sure that it isn't a big part given that the time went down\nby a factor of about 4 when index scans were disabled and a sequence scan\nwas done and that a sequence scan over the table with no other tables\njoined looked to take about 5 minutes itself and the difference between\nthat seqscan and the big query was only about 20 seconds when\nenable_indexscan was off unless I'm misreading those results.\n\n", "msg_date": "Thu, 16 Jan 2003 09:47:02 -0800 (PST)", "msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>", "msg_from_op": false, "msg_subject": "Re: 7.3.1 New install, large queries are slow" }, { "msg_contents": "Tom,\n\n> The list of explicit JOINs as you have here is a good way to proceed\n> *if* you write the JOINs in an appropriate order for implementation.\n> I believe the problem with Roman's original query was that he listed\n> the JOINs in a bad order. Unfortunately I didn't keep a copy of that\n> message, and the list archives seem to be a day or more behind...\n> but at least for these WHERE conditions, it looks like the best bet\n> would to join m to b (I'm assuming m.merchid is unique), then to t,\n> then to d, then add on the others.\n\nI realize that I've contributed nothing other than bug reports to the\nparser design. But shouldn't Postgres, given a free hand, figure out\nthe above automatically? 
I'd be embarrassed if MS could one-up us in\nparser planning anywhere, theirs sucks on sub-selects ....\n\n-Josh Berkus\n", "msg_date": "Thu, 16 Jan 2003 09:52:47 -0800", "msg_from": "\"Josh Berkus\" <josh@agliodbs.com>", "msg_from_op": false, "msg_subject": "Re: 7.3.1 New install, large queries are slow " }, { "msg_contents": "\"Josh Berkus\" <josh@agliodbs.com> writes:\n>> but at least for these WHERE conditions, it looks like the best bet\n>> would to join m to b (I'm assuming m.merchid is unique), then to t,\n>> then to d, then add on the others.\n\n> I realize that I've contributed nothing other than bug reports to the\n> parser design. But shouldn't Postgres, given a free hand, figure out\n> the above automatically?\n\nI believe it will. So far I've not seen an EXPLAIN from a query that\nwas structured to give it a free hand.\n\nAs noted elsewhere, the fact that we allow JOIN syntax to constrain the\nplanner is a real pain if you are accustomed to databases that don't do\nthat. On the other hand, it's a real lifesaver for people who need to\npare the planning time for dozen-way joins; it was only a day or two\nback in this same mailing list that we last had a discussion about that\nend of the problem. So even though it started out as an implementation\nshortcut rather than an intended feature, I'm loath to just disable the\nbehavior entirely.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 16 Jan 2003 13:40:32 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: 7.3.1 New install, large queries are slow " } ]
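To make the join-order point above concrete, here is a rough sketch using the tables from that thread (column lists trimmed, literal values taken from the queries already posted). In the releases discussed here, explicit JOIN syntax fixes the join order, so the first form starts from the selective merchants lookup and works outward; the comma-separated form leaves the planner free to pick its own order. Either one can be run under EXPLAIN ANALYZE to see which order the planner actually uses.

-- Explicit JOINs: joined roughly in the order written.
SELECT d.batchdetailid, d.tranamount, b.batchdate, m.name
FROM merchants m
JOIN batchheader b ON b.merchantid = m.merchantid
JOIN tranheader t ON t.tranheaderid = b.tranheaderid
JOIN batchdetail d ON d.batchid = b.batchid
WHERE m.merchid = '701252267'
  AND t.clientid = 6
  AND b.batchdate > '2002-12-15'
  AND d.tranamount BETWEEN 500 AND 700;

-- "Free hand" form: same semantics, but the planner chooses the join order.
SELECT d.batchdetailid, d.tranamount, b.batchdate, m.name
FROM merchants m, batchheader b, tranheader t, batchdetail d
WHERE m.merchantid = b.merchantid
  AND t.tranheaderid = b.tranheaderid
  AND d.batchid = b.batchid
  AND m.merchid = '701252267'
  AND t.clientid = 6
  AND b.batchdate > '2002-12-15'
  AND d.tranamount BETWEEN 500 AND 700;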
[ { "msg_contents": "All,\n\nI just noted in another thread that use of foreign keys in postgres\nsignificantly hinders performance. I'm wondering what other aspects we should\ntake into consideration in the design of our database. We're coming from\nSybase and trying to design a more encompassing, modular, generic database that\nwon't take a serious performance hit under postgres.\n\nThanks,\n\n-X\n\n__________________________________________________\nDo you Yahoo!?\nYahoo! Mail Plus - Powerful. Affordable. Sign up now.\nhttp://mailplus.yahoo.com\n", "msg_date": "Thu, 16 Jan 2003 05:51:40 -0800 (PST)", "msg_from": "CaptainX0r <captainx0r@yahoo.com>", "msg_from_op": true, "msg_subject": "schema/db design wrt performance" }, { "msg_contents": "On Thu, Jan 16, 2003 at 05:51:40AM -0800, CaptainX0r wrote:\n> All,\n> \n> I just noted in another thread that use of foreign keys in postgres\n> significantly hinders performance. I'm wondering what other\n\nSince I think I'm the one responsible for this, I'd better say\nsomething clearer for the record.\n\nThe foreign keys implementation in PostgreSQL essentially uses SELECT\n. . . FOR UPDATE to ensure that referenced data doesn't go away while a\nreferencing datum is being inserted or altered.\n\nThe problem with this is that frequently-referenced data are\ntherefore effectively locked during the operation. Other writers\nwill block on the locked data until the first writer finishes.\n\nSo, for instance, consider two artificial-example tables:\n\ncreate table account (acct_id serial primary key);\n\ncreate table acct_activity (acct_id int references\naccount(acct_id), trans_on timestamp, val numeric(12,2));\n\nIf a user has multiple connections and charges things to the same\naccount in more than one connection at the same time, the\ntransactions will have to be processed, effectively, in series: each\none will have to wait for another to commit in order to complete.\n\nThis is just a performance bottleneck. But it gets worse. Suppose\nthe account table is like this:\n\ncreate table account (acct_id serial primary key, con_id int\nreferences contact(con_id));\n\ncreate table contact (con_id serial primary key, name text, address1\ntext [. . .]);\n\nNow, if another transaction is busy trying to delete a contact at the\nsame time the account table is being updated to reflect, say, a new\ncontact, you run the risk of deadlock.\n\nThe FK support in PostgreSQL is therefore mostly useful for\nlow-volume applications. It can be made to work under heavier load\nif you use it very carefully and program your application for it. \nBut I suggest avoiding it for heavy-duty use if you really can.\n\n> take into consideration in the design of our database. We're\n> coming from Sybase and trying to design a more encompassing,\n> modular, generic database that won't take a serious performance hit\n> under postgres.\n\nAvoid NOT IN. This is difficult, because the workaround in Postgres\n(NOT EXISTS) is frequently lousy on other systems. Apparently there\nis some fix for this contemplated for 7.4, but I've been really busy\nlately, so I haven't been following -hackers. 
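Just to show the shape of that rewrite, reusing the contact and account
tables from the example above (and keeping in mind that the two forms are
not strictly identical if the subquery can return NULLs):

-- Frequently slow in PostgreSQL when account is large:
SELECT c.con_id, c.name
FROM contact c
WHERE c.con_id NOT IN (SELECT a.con_id FROM account a);

-- The usual rewrite:
SELECT c.con_id, c.name
FROM contact c
WHERE NOT EXISTS (SELECT 1 FROM account a WHERE a.con_id = c.con_id);
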
Someone else can\nprobably say something more useful about it.\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<andrew@libertyrms.info> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Thu, 16 Jan 2003 09:20:05 -0500", "msg_from": "Andrew Sullivan <andrew@libertyrms.info>", "msg_from_op": false, "msg_subject": "Re: schema/db design wrt performance" }, { "msg_contents": "On Thu, 2003-01-16 at 08:20, Andrew Sullivan wrote:\n> On Thu, Jan 16, 2003 at 05:51:40AM -0800, CaptainX0r wrote:\n> > All,\n> > \n> > I just noted in another thread that use of foreign keys in postgres\n> > significantly hinders performance. I'm wondering what other\n> \n> Since I think I'm the one responsible for this, I'd better say\n> something clearer for the record.\n> \n> The foreign keys implementation in PostgreSQL essentially uses SELECT\n> . . . FOR UPDATE to ensure that referenced data doesn't go away while a\n> referencing datum is being inserted or altered.\n> \n> The problem with this is that frequently-referenced data are\n> therefore effectively locked during the operation. Other writers\n> will block on the locked data until the first writer finishes.\n> \n> So, for instance, consider two artificial-example tables:\n> \n> create table account (acct_id serial primary key);\n> \n> create table acct_activity (acct_id int references\n> account(acct_id), trans_on timestamp, val numeric(12,2));\n> \n> If a user has multiple connections and charges things to the same\n> account in more than one connection at the same time, the\n> transactions will have to be processed, effectively, in series: each\n> one will have to wait for another to commit in order to complete.\n\nThis is true even though the default transaction mode is\nREAD COMMITTED?\n\n-- \n+------------------------------------------------------------+\n| Ron Johnson, Jr. mailto:ron.l.johnson@cox.net |\n| Jefferson, LA USA http://members.cox.net/ron.l.johnson |\n| |\n| \"Basically, I got on the plane with a bomb. Basically, I |\n| tried to ignite it. Basically, yeah, I intended to damage |\n| the plane.\" |\n| RICHARD REID, who tried to blow up American Airlines |\n| Flight 63 |\n+------------------------------------------------------------+\n\n", "msg_date": "16 Jan 2003 08:34:38 -0600", "msg_from": "Ron Johnson <ron.l.johnson@cox.net>", "msg_from_op": false, "msg_subject": "Re: schema/db design wrt performance" }, { "msg_contents": "On Thu, Jan 16, 2003 at 08:34:38AM -0600, Ron Johnson wrote:\n> On Thu, 2003-01-16 at 08:20, Andrew Sullivan wrote:\n\n> > If a user has multiple connections and charges things to the same\n> > account in more than one connection at the same time, the\n> > transactions will have to be processed, effectively, in series: each\n> > one will have to wait for another to commit in order to complete.\n> \n> This is true even though the default transaction mode is\n> READ COMMITTED?\n\nYes. Remember, _both_ of these are doing SELECT. . .FOR UPDATE. \nWhich means they both try to lock the corresponding record. 
But they\ncan't _both_ lock the same record; that's what the lock prevents.\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<andrew@libertyrms.info> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Thu, 16 Jan 2003 10:39:33 -0500", "msg_from": "Andrew Sullivan <andrew@libertyrms.info>", "msg_from_op": false, "msg_subject": "Re: schema/db design wrt performance" }, { "msg_contents": "On Thu, 2003-01-16 at 09:39, Andrew Sullivan wrote:\n> On Thu, Jan 16, 2003 at 08:34:38AM -0600, Ron Johnson wrote:\n> > On Thu, 2003-01-16 at 08:20, Andrew Sullivan wrote:\n> \n> > > If a user has multiple connections and charges things to the same\n> > > account in more than one connection at the same time, the\n> > > transactions will have to be processed, effectively, in series: each\n> > > one will have to wait for another to commit in order to complete.\n> > \n> > This is true even though the default transaction mode is\n> > READ COMMITTED?\n> \n> Yes. Remember, _both_ of these are doing SELECT. . .FOR UPDATE. \n> Which means they both try to lock the corresponding record. But they\n> can't _both_ lock the same record; that's what the lock prevents.\n\nCould BEFORE INSERT|UPDATE|DELETE triggers perform the same\nfunctionality while touching only the desired records, thus \ndecreasing conflict?\n\n-- \n+------------------------------------------------------------+\n| Ron Johnson, Jr. mailto:ron.l.johnson@cox.net |\n| Jefferson, LA USA http://members.cox.net/ron.l.johnson |\n| |\n| \"Basically, I got on the plane with a bomb. Basically, I |\n| tried to ignite it. Basically, yeah, I intended to damage |\n| the plane.\" |\n| RICHARD REID, who tried to blow up American Airlines |\n| Flight 63 |\n+------------------------------------------------------------+\n\n", "msg_date": "16 Jan 2003 09:50:04 -0600", "msg_from": "Ron Johnson <ron.l.johnson@cox.net>", "msg_from_op": false, "msg_subject": "Re: schema/db design wrt performance" }, { "msg_contents": "\nOn 16 Jan 2003, Ron Johnson wrote:\n\n> On Thu, 2003-01-16 at 09:39, Andrew Sullivan wrote:\n> > On Thu, Jan 16, 2003 at 08:34:38AM -0600, Ron Johnson wrote:\n> > > On Thu, 2003-01-16 at 08:20, Andrew Sullivan wrote:\n> >\n> > > > If a user has multiple connections and charges things to the same\n> > > > account in more than one connection at the same time, the\n> > > > transactions will have to be processed, effectively, in series: each\n> > > > one will have to wait for another to commit in order to complete.\n> > >\n> > > This is true even though the default transaction mode is\n> > > READ COMMITTED?\n> >\n> > Yes. Remember, _both_ of these are doing SELECT. . .FOR UPDATE.\n> > Which means they both try to lock the corresponding record. 
But they\n> > can't _both_ lock the same record; that's what the lock prevents.\n>\n> Could BEFORE INSERT|UPDATE|DELETE triggers perform the same\n> functionality while touching only the desired records, thus\n> decreasing conflict?\n\nIt does limit it to the corresponding records, but if you\nsay insert a row pointing at customer 1, and in another transaction\ninsert a row pointing at customer 1, the second waits on the first.\n\n\n", "msg_date": "Thu, 16 Jan 2003 08:02:40 -0800 (PST)", "msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>", "msg_from_op": false, "msg_subject": "Re: schema/db design wrt performance" }, { "msg_contents": "On Thu, Jan 16, 2003 at 09:50:04AM -0600, Ron Johnson wrote:\n> \n> Could BEFORE INSERT|UPDATE|DELETE triggers perform the same\n> functionality while touching only the desired records, thus \n> decreasing conflict?\n\nYou can make the constraint DEFERRABLE INITIALY DEFERRED. It helps\nsomewhat. But the potential for deadlock, and the backing up, will\nstill happen to some degree. It's a well-known flaw in the FK\nsystem. I beleive the FK implementation was mostly intended as a\nproof of concept.\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<andrew@libertyrms.info> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Thu, 16 Jan 2003 11:05:22 -0500", "msg_from": "Andrew Sullivan <andrew@libertyrms.info>", "msg_from_op": false, "msg_subject": "Re: schema/db design wrt performance" }, { "msg_contents": "On Thu, 2003-01-16 at 10:02, Stephan Szabo wrote:\n> On 16 Jan 2003, Ron Johnson wrote:\n> \n> > On Thu, 2003-01-16 at 09:39, Andrew Sullivan wrote:\n> > > On Thu, Jan 16, 2003 at 08:34:38AM -0600, Ron Johnson wrote:\n> > > > On Thu, 2003-01-16 at 08:20, Andrew Sullivan wrote:\n> > >\n> > > > > If a user has multiple connections and charges things to the same\n> > > > > account in more than one connection at the same time, the\n> > > > > transactions will have to be processed, effectively, in series: each\n> > > > > one will have to wait for another to commit in order to complete.\n> > > >\n> > > > This is true even though the default transaction mode is\n> > > > READ COMMITTED?\n> > >\n> > > Yes. Remember, _both_ of these are doing SELECT. . .FOR UPDATE.\n> > > Which means they both try to lock the corresponding record. But they\n> > > can't _both_ lock the same record; that's what the lock prevents.\n> >\n> > Could BEFORE INSERT|UPDATE|DELETE triggers perform the same\n> > functionality while touching only the desired records, thus\n> > decreasing conflict?\n> \n> It does limit it to the corresponding records, but if you\n> say insert a row pointing at customer 1, and in another transaction\n> insert a row pointing at customer 1, the second waits on the first.\n\n2 points:\n\n1. Don't you *want* TXN2 to wait on TXN1?\n2. In an OLTP environment (heck, in *any* environment), the goal\n is to minimize txn length, so TXN2 shouldn't be waiting on\n TXN1 for more than a fraction of a second anyway.\n\nAm I missing something?\n\n-- \n+------------------------------------------------------------+\n| Ron Johnson, Jr. mailto:ron.l.johnson@cox.net |\n| Jefferson, LA USA http://members.cox.net/ron.l.johnson |\n| |\n| \"Basically, I got on the plane with a bomb. Basically, I |\n| tried to ignite it. 
Basically, yeah, I intended to damage |\n| the plane.\" |\n| RICHARD REID, who tried to blow up American Airlines |\n| Flight 63 |\n+------------------------------------------------------------+\n\n", "msg_date": "16 Jan 2003 10:25:36 -0600", "msg_from": "Ron Johnson <ron.l.johnson@cox.net>", "msg_from_op": false, "msg_subject": "Re: schema/db design wrt performance" }, { "msg_contents": "On 16 Jan 2003, Ron Johnson wrote:\n\n> On Thu, 2003-01-16 at 10:02, Stephan Szabo wrote:\n> > On 16 Jan 2003, Ron Johnson wrote:\n> >\n> > > On Thu, 2003-01-16 at 09:39, Andrew Sullivan wrote:\n> > > > On Thu, Jan 16, 2003 at 08:34:38AM -0600, Ron Johnson wrote:\n> > > > > On Thu, 2003-01-16 at 08:20, Andrew Sullivan wrote:\n> > > >\n> > > > > > If a user has multiple connections and charges things to the same\n> > > > > > account in more than one connection at the same time, the\n> > > > > > transactions will have to be processed, effectively, in series: each\n> > > > > > one will have to wait for another to commit in order to complete.\n> > > > >\n> > > > > This is true even though the default transaction mode is\n> > > > > READ COMMITTED?\n> > > >\n> > > > Yes. Remember, _both_ of these are doing SELECT. . .FOR UPDATE.\n> > > > Which means they both try to lock the corresponding record. But they\n> > > > can't _both_ lock the same record; that's what the lock prevents.\n> > >\n> > > Could BEFORE INSERT|UPDATE|DELETE triggers perform the same\n> > > functionality while touching only the desired records, thus\n> > > decreasing conflict?\n> >\n> > It does limit it to the corresponding records, but if you\n> > say insert a row pointing at customer 1, and in another transaction\n> > insert a row pointing at customer 1, the second waits on the first.\n>\n> 2 points:\n>\n> 1. Don't you *want* TXN2 to wait on TXN1?\n\nNot really. Maybe I was unclear though.\n\nGiven\ncreate table pktable(a int primary key);\ncreate table fktable(a int references pktable);\ninsert into pktable values (1);\n\nThe blocking would occur on:\nT1: begin;\nT2: begin;\nT1: insert into fktable values (1);\nT2: insert into fktable values (1);\n\nThis doesn't need to block. The reason for\nthe lock is to prevent someone from updating\nor deleting the row out of pktable, but it\nalso prevents this kind of thing. This becomes\nan issue if you say have tables that store mappings\nand a table that has an fk to that. You'll\nbe inserting lots of rows with say\ncustomertype=7 which points into a table with\ntypes and they'll block. Worse, if you say\ndo inserts with different customertypes in\ndifferent orders in two transactions you\ncan deadlock yourself.\n\n\n", "msg_date": "Thu, 16 Jan 2003 08:38:08 -0800 (PST)", "msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>", "msg_from_op": false, "msg_subject": "Re: schema/db design wrt performance" }, { "msg_contents": "On Thu, Jan 16, 2003 at 10:25:36AM -0600, Ron Johnson wrote:\n> \n> 2 points:\n> \n> 1. Don't you *want* TXN2 to wait on TXN1?\n\nNot really. You really just want a tag which prevents TXN2 from\ncommitting when its reference data might go away. So what you want\nis a lock which says \"don't delete, no matter what\", until TXN2\ncommits. Then TXN1 could fail or not, depending on what it's trying\nto do. The problem is that there isn't a lock of the right strength\nto do that.\n\n> 2. In an OLTP environment (heck, in *any* environment), the goal\n> is to minimize txn length, so TXN2 shouldn't be waiting on\n> TXN1 for more than a fraction of a second anyway.\n\nRight. 
But it's possible to have multiple REFERENCES constraints\nto the same table; that's why I picked an account table, for\ninstance, because you might have a large number of different kinds of\nthings that the same account can do. So while you're correct that\none wants to minimize txn length, it's also true that, when the\neffects are followed across a large system, you can easily start\ntripping over the FKs. The real problem, then, only shows up on a\nbusy system with a table which gets referenced a lot.\n\nI should note, by the way, that the tremendous performance\nimprovements available in 7.2.x have reduced the problem considerably\nfrom 7.1.x, at least in my experience.\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<andrew@libertyrms.info> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Thu, 16 Jan 2003 11:46:18 -0500", "msg_from": "Andrew Sullivan <andrew@libertyrms.info>", "msg_from_op": false, "msg_subject": "Re: schema/db design wrt performance" }, { "msg_contents": "On Thu, 2003-01-16 at 10:38, Stephan Szabo wrote:\n> On 16 Jan 2003, Ron Johnson wrote:\n> \n> > On Thu, 2003-01-16 at 10:02, Stephan Szabo wrote:\n> > > On 16 Jan 2003, Ron Johnson wrote:\n> > >\n> > > > On Thu, 2003-01-16 at 09:39, Andrew Sullivan wrote:\n> > > > > On Thu, Jan 16, 2003 at 08:34:38AM -0600, Ron Johnson wrote:\n> > > > > > On Thu, 2003-01-16 at 08:20, Andrew Sullivan wrote:\n> > > > >\n> > > > > > > If a user has multiple connections and charges things to the same\n> > > > > > > account in more than one connection at the same time, the\n> > > > > > > transactions will have to be processed, effectively, in series: each\n> > > > > > > one will have to wait for another to commit in order to complete.\n> > > > > >\n> > > > > > This is true even though the default transaction mode is\n> > > > > > READ COMMITTED?\n> > > > >\n> > > > > Yes. Remember, _both_ of these are doing SELECT. . .FOR UPDATE.\n> > > > > Which means they both try to lock the corresponding record. But they\n> > > > > can't _both_ lock the same record; that's what the lock prevents.\n> > > >\n> > > > Could BEFORE INSERT|UPDATE|DELETE triggers perform the same\n> > > > functionality while touching only the desired records, thus\n> > > > decreasing conflict?\n> > >\n> > > It does limit it to the corresponding records, but if you\n> > > say insert a row pointing at customer 1, and in another transaction\n> > > insert a row pointing at customer 1, the second waits on the first.\n> >\n> > 2 points:\n> >\n> > 1. Don't you *want* TXN2 to wait on TXN1?\n> \n> Not really. Maybe I was unclear though.\n> \n> Given\n> create table pktable(a int primary key);\n> create table fktable(a int references pktable);\n> insert into pktable values (1);\n> \n> The blocking would occur on:\n> T1: begin;\n> T2: begin;\n> T1: insert into fktable values (1);\n> T2: insert into fktable values (1);\n> \n> This doesn't need to block. The reason for\n> the lock is to prevent someone from updating\n> or deleting the row out of pktable, but it\n> also prevents this kind of thing. This becomes\n> an issue if you say have tables that store mappings\n> and a table that has an fk to that. You'll\n> be inserting lots of rows with say\n> customertype=7 which points into a table with\n> types and they'll block. 
Worse, if you say\n> do inserts with different customertypes in\n> different orders in two transactions you\n> can deadlock yourself.\n\nSo Postgres will think it's possible that I could modify the\nreference table that \"customertype=7\" refers to? If so, bummer.\n\nThe commercial RDBMS that I use (Rdb/VMS) allows one to specify\nthat certain tables are only for read access.\n\nFor example:\nSET TRANSACTION READ WRITE\n RESERVING T_MASTER, T_DETAIL FOR SHARED WRITE,\n T_MAPPING1, T_MAPPING2, T_MAPPING3 FOR SHARED READ;\n\nThus, only minimal locking is taken out on T_MAPPING1, T_MAPPING2\n& T_MAPPING3, but if I try to \"UPDATE T_MAPPING1\" or reference any\nother table, even in a SELECT statement, then the statement will \nfail.\n\nRdb also alows for exclusive write locks:\nSET TRANSACTION READ WRITE\n RESERVING T_MASTER, T_DETAIL FOR SHARED WRITE,\n T_MAPPING1, T_MAPPING2, T_MAPPING3 FOR SHARED READ,\n T_FOOBAR FOR EXCLUSIVE WRITE;\n\nThus, even though there is concurrent access to the other tables,\na table lock on T_FOOBAR is taken out. This cuts IO usage in 1/2,\nbut obviously must be used with great discretion.\n\n-- \n+------------------------------------------------------------+\n| Ron Johnson, Jr. mailto:ron.l.johnson@cox.net |\n| Jefferson, LA USA http://members.cox.net/ron.l.johnson |\n| |\n| \"Basically, I got on the plane with a bomb. Basically, I |\n| tried to ignite it. Basically, yeah, I intended to damage |\n| the plane.\" |\n| RICHARD REID, who tried to blow up American Airlines |\n| Flight 63 |\n+------------------------------------------------------------+\n\n", "msg_date": "16 Jan 2003 11:24:52 -0600", "msg_from": "Ron Johnson <ron.l.johnson@cox.net>", "msg_from_op": false, "msg_subject": "Re: schema/db design wrt performance" } ]
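To pull the pktable/fktable example above together, here is a rough two-session sketch of the blocking case and the deadlock case, plus a deferrable variant of the declaration; the interleaving shown is only illustrative.

-- Setup, as in the earlier example (plus a second referenced row):
--   CREATE TABLE pktable (a int PRIMARY KEY);
--   CREATE TABLE fktable (a int REFERENCES pktable);
--   INSERT INTO pktable VALUES (1);
--   INSERT INTO pktable VALUES (2);

-- Blocking:
-- session 1:  BEGIN;  INSERT INTO fktable VALUES (1);
-- session 2:  BEGIN;  INSERT INTO fktable VALUES (1);  -- waits on session 1's
--                                                      -- lock of pktable row 1
-- session 1:  COMMIT;                                  -- session 2 proceeds

-- Deadlock, with the referenced rows touched in opposite orders:
-- session 1:  BEGIN;  INSERT INTO fktable VALUES (1);
-- session 2:  BEGIN;  INSERT INTO fktable VALUES (2);
-- session 1:  INSERT INTO fktable VALUES (2);          -- waits on session 2
-- session 2:  INSERT INTO fktable VALUES (1);          -- waits on session 1: deadlock

-- A deferrable declaration (instead of the plain one above) postpones the
-- check, and its lock, until commit:
CREATE TABLE fktable (
    a int REFERENCES pktable DEFERRABLE INITIALLY DEFERRED
);

Deferring the check narrows the window during which the referenced row stays locked, but as noted in the thread it does not remove the contention or the deadlock risk entirely.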
[ { "msg_contents": "I think FK on every database (oracle and MSSQL too) may hit performance,\nbut only in DML (insert/update/delete). These are tradeoffs...\nreferencial integrity vs. problems with batchload for example.\nMy Oracle experience say when I need to do batchload, I disable\nconstraints and then apply and work over exceptions.\nIf you don't make referencial integrity on database maybe you need to do\nit on you application... and I think will be very painfull.\n\n--\nFernando O. Papa\nDBA\n \n\n> -----Mensaje original-----\n> De: CaptainX0r [mailto:captainx0r@yahoo.com] \n> Enviado el: jueves, 16 de enero de 2003 10:52\n> Para: pgsql-performance@postgresql.org\n> Asunto: [PERFORM] schema/db design wrt performance\n> \n> \n> All,\n> \n> I just noted in another thread that use of foreign keys in \n> postgres significantly hinders performance. I'm wondering \n> what other aspects we should take into consideration in the \n> design of our database. We're coming from Sybase and trying \n> to design a more encompassing, modular, generic database that \n> won't take a serious performance hit under postgres.\n> \n> Thanks,\n> \n> -X\n> \n> __________________________________________________\n> Do you Yahoo!?\n> Yahoo! Mail Plus - Powerful. Affordable. Sign up now. \nhttp://mailplus.yahoo.com\n\n---------------------------(end of broadcast)---------------------------\nTIP 5: Have you checked our extensive FAQ?\n\nhttp://www.postgresql.org/users-lounge/docs/faq.html\n", "msg_date": "Thu, 16 Jan 2003 11:07:25 -0300", "msg_from": "\"Fernando Papa\" <fpapa@claxson.com>", "msg_from_op": true, "msg_subject": "Re: schema/db design wrt performance" } ]
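PostgreSQL has no direct equivalent of disabling a constraint, so the closest analogue of that Oracle-style batch load is a drop-and-re-add, roughly like the sketch below (the constraint and file names are made up; the account/acct_activity tables are the ones from earlier in the thread):

-- Drop the FK before the bulk load; note that nothing enforces it while it
-- is gone, so the input data had better be clean.
ALTER TABLE acct_activity DROP CONSTRAINT acct_activity_acct_id_fkey;

COPY acct_activity FROM '/tmp/acct_activity.dat';

-- Re-adding the constraint re-checks every existing row, which is where
-- any violations surface.
ALTER TABLE acct_activity
    ADD CONSTRAINT acct_activity_acct_id_fkey
    FOREIGN KEY (acct_id) REFERENCES account (acct_id);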
[ { "msg_contents": "> Josh Berkus wrote:\r\n> And MSSQL is returning results in 3 seconds? I find that a bit hard\r\n> to believe, unless this query is called repeatedly and that's the\r\n> figure for the last call, where the records are being cached. I'll\r\n> have to look at your hardware descriptions again.\r\n\r\nHardware-wise, the Postgres server is a hot rod and MSSQL is a basic vanilla server. I changed all the WHERE clauses to radically different values and couldn't get it to take more than 5 seconds on MSSQL. Most of it's cost savings seems to come from some kind of \"Table Spool/Lazy Spool\" in it's execution plan, which looks to me like it only exists for the life of the query. You can read more about this at:\r\nhttp://msdn.microsoft.com/library/default.asp?url=/library/en-us/optimsql/odp_tun_1_1m7g.asp\r\nhttp://msdn.microsoft.com/library/default.asp?url=/library/en-us/optimsql/odp_tun_1_7rjg.asp\r\nMaybe there are some good ideas here for Postgres. Unfortunately, the MSSQL Execution Plan is displayed graphically, and I can't figure out a way to get it to text without typing it all. I could do some screen shots if you really want to see it.\r\n \r\n> Stephan Szabo wrote:\r\n> I'd assume that tranamount values are fairly randomly distributed\r\n> throughout the table, right? It takes about 5 minutes for the\r\n> system to read the entire table and more for the index scan, so\r\n> you're probably reading most of the table randomly and the index\r\n> as well.\r\n> What values on batchdetail do you use in query where clauses regularly? \r\n \r\nYes, tranamount values are randomly distributed. I don't understand why an index scan would be \"random\", isn't the whole point of an index to have an ordered reference into the data? batchdetail has 5 columns that can be in the WHERE clause, all of which are indexed. None is more likely than the other to be searched, so a clustered index doesn't make much sense to me. The whole thing needs to be fast.\r\n \r\n>> Nope. This was a misimpression caused by batchdetail waiting for a\r\n>> bunch of other processes to complete. Sometimes the parallelizing\r\n>> gives me a wrong impression of what's holding up the query. Sorry if I\r\n>> confused you.\r\n>\r\n>I'm still not sure that it isn't a big part given that the time went down\r\n>by a factor of about 4 when index scans were disabled and a sequence scan\r\n>was done and that a sequence scan over the table with no other tables\r\n>joined looked to take about 5 minutes itself and the difference between\r\n>that seqscan and the big query was only about 20 seconds when\r\n>enable_indexscan was off unless I'm misreading those results.\r\n\r\nYou are not misreading the results. There was a huge difference. Nobody has ever made note of it, but this still seems very odd to me:\r\n*** 'sar -b' during the query\r\nwith index scan: 6,000 block reads/sec\r\nwith seq scan: 95,000 block reads/sec\r\n \r\n \r\nTom, here is the EXPLAIN for your suggested version of the query with enable_indexscan=on. I performed the same query with enable_indexscan=off and total runtime was *much* better: 296174.60 msec. 
By the way, thank you for your detailed description of how the JOIN order affects the outcome - I understand much better now.\r\nEXPLAIN ANALYZE\r\nSELECT ss.batchdate, ss.batchdetailid, ss.bankno, ss.trandate, ss.tranamount,\r\nss.submitinterchange, ss.authamount, ss.authno, ss.cardtypeid, ss.mcccode,\r\nss.name AS merchantname, ss.cardtype, ss.merchid,\r\np1.localtaxamount, p1.productidentifier, dr.avsresponse,\r\ncr.checkoutdate, cr.noshowindicator, ck.checkingacctno,\r\nck.abaroutingno, ck.checkno\r\nFROM\r\n (SELECT b.batchdate, d.batchdetailid, t.bankno, d.trandate, d.tranamount,\r\n d.submitinterchange, d.authamount, d.authno, d.cardtypeid, d.mcccode,\r\n m.name, c.cardtype, m.merchid\r\n FROM tranheader t, batchheader b, merchants m, cardtype c, batchdetail d\r\n WHERE t.tranheaderid=b.tranheaderid\r\n AND m.merchantid=b.merchantid\r\n AND d.batchid=b.batchid\r\n AND c.cardtypeid=d.cardtypeid\r\n AND t.clientid = 6\r\n AND d.tranamount BETWEEN 500.0 AND 700.0\r\n AND b.batchdate > '2002-12-15'\r\n AND m.merchid = '701252267') ss\r\n LEFT JOIN purc1 p1 on p1.batchdetailid=ss.batchdetailid\r\n LEFT JOIN direct dr ON dr.batchdetailid = ss.batchdetailid\r\n LEFT JOIN carrental cr ON cr.batchdetailid = ss.batchdetailid\r\n LEFT JOIN checks ck ON ck.batchdetailid = ss.batchdetailid\r\nORDER BY ss.batchdate DESC\r\nLIMIT 50\r\nLimit (cost=1601637.75..1601637.75 rows=1 width=285) (actual time=1221606.41..1221606.42 rows=5 loops=1)\r\n -> Sort (cost=1601637.75..1601637.75 rows=1 width=285) (actual time=1221606.40..1221606.41 rows=5 loops=1)\r\n Sort Key: b.batchdate\r\n -> Nested Loop (cost=1543595.18..1601637.74 rows=1 width=285) (actual time=1204815.02..1221606.27 rows=5 loops=1)\r\n Join Filter: (\"inner\".batchdetailid = \"outer\".batchdetailid)\r\n -> Nested Loop (cost=1543595.18..1601581.23 rows=1 width=247) (actual time=1204792.38..1221560.42 rows=5 loops=1)\r\n Join Filter: (\"inner\".batchdetailid = \"outer\".batchdetailid)\r\n -> Nested Loop (cost=1543595.18..1601581.22 rows=1 width=230) (actual time=1204792.35..1221560.27 rows=5 loops=1)\r\n Join Filter: (\"inner\".batchdetailid = \"outer\".batchdetailid)\r\n -> Nested Loop (cost=1543595.18..1601581.21 rows=1 width=221) (actual time=1204792.31..1221560.09 rows=5 loops=1)\r\n Join Filter: (\"inner\".batchdetailid = \"outer\".batchdetailid)\r\n -> Nested Loop (cost=1543595.18..1545529.17 rows=1 width=202) (actual time=1195376.48..1195578.86 rows=5 loops=1)\r\n Join Filter: (\"inner\".tranheaderid = \"outer\".tranheaderid)\r\n -> Nested Loop (cost=1543595.18..1545449.98 rows=1 width=186) (actual time=1195370.72..1195536.53 rows=5 loops=1)\r\n Join Filter: (\"inner\".cardtypeid = \"outer\".cardtypeid)\r\n -> Merge Join (cost=1543595.18..1545448.76 rows=1 width=172) (actual time=1195311.88..1195477.32 rows=5 loops=1)\r\n Merge Cond: (\"outer\".batchid = \"inner\".batchid)\r\n -> Sort (cost=476.17..476.18 rows=4 width=102) (actual time=30.57..30.59 rows=17 loops=1)\r\n Sort Key: b.batchid\r\n -> Nested Loop (cost=0.00..476.14 rows=4 width=102) (actual time=25.21..30.47 rows=17 loops=1)\r\n -> Index Scan using merchants_ix_merchid_idx on merchants m (cost=0.00..5.65 rows=1 width=78) (actual time=23.81..23.82 rows=1 loops=1)\r\n Index Cond: (merchid = '701252267'::character varying)\r\n -> Index Scan using batchheader_ix_merchantid_idx on batchheader b (cost=0.00..470.30 rows=15 width=24) (actual time=1.38..6.55 rows=17 loops=1)\r\n Index Cond: (\"outer\".merchantid = b.merchantid)\r\n Filter: (batchdate > '2002-12-15 
00:00:00'::timestamp without time zone)\r\n -> Sort (cost=1543119.01..1544045.79 rows=370710 width=70) (actual time=1194260.51..1194892.79 rows=368681 loops=1)\r\n Sort Key: d.batchid\r\n -> Index Scan using batchdetail_ix_tranamount_idx on batchdetail d (cost=0.00..1489103.46 rows=370710 width=70) (actual time=5.26..1186051.44 rows=370307 loops=1)\r\n Index Cond: ((tranamount >= 500.0) AND (tranamount <= 700.0))\r\n -> Seq Scan on cardtype c (cost=0.00..1.10 rows=10 width=14) (actual time=11.77..11.79 rows=10 loops=5)\r\n -> Seq Scan on tranheader t (cost=0.00..55.15 rows=1923 width=16) (actual time=0.02..5.46 rows=1923 loops=5)\r\n Filter: (clientid = 6)\r\n -> Seq Scan on purc1 p1 (cost=0.00..44285.35 rows=941335 width=19) (actual time=10.79..3763.56 rows=938770 loops=5)\r\n -> Seq Scan on direct dr (cost=0.00..0.00 rows=1 width=9) (actual time=0.00..0.00 rows=0 loops=5)\r\n -> Seq Scan on carrental cr (cost=0.00..0.00 rows=1 width=17) (actual time=0.00..0.00 rows=0 loops=5)\r\n -> Seq Scan on checks ck (cost=0.00..40.67 rows=1267 width=38) (actual time=0.77..7.15 rows=1267 loops=5)\r\nTotal runtime: 1221645.52 msec\r\n\r\n \r\n> Tomasz Myrta wrote:\r\n> Are there any where clauses which all of theses variation have?\r\n\r\nYes.....WHERE clientid = ? will appear in every query. The others are present based on user input.\r\n\r\n \r\n> Ron Johnson wrote:\r\n> What are the indexes on batchdetail?\r\n> There's one on batchid and a seperate one on tranamount?\r\n> If so, what about dropping them and create a single multi-segment\r\n> index on \"batchid, tranamount\". (A constraint can then enforce\r\n> uniqueness on batchid.\r\n \r\nThere is no index on batchid, I think it is a good idea to create one. Stephan also suggested this. After I try the single batchid index, I might try to multi-segment index idea as well. I'll post results later today.\r\n \r\n> Stephan Szabo wrote:\r\n> Then I realized that batchheader.batchid and\r\n> batchdetail.batchid don't even have the same\r\n> type, and that's probably something else you'd\r\n> need to fix.\r\n\r\nYes, that's a mistake on my part....batchdetail(batchid) should be an int8. It looks to me like converting this datatype can't be done with a single ALTER TABLE ALTER COLUMN statement.....so I guess I'll work around it with an ADD, UPDATE, DROP, and RENAME.\r\n \r\n> Josh Berkus wrote:\r\n> Um, Roman, keep in mind this is a mailing list. I'm sure that\r\n> everyone here is happy to give you the tools to figure out how to fix\r\n> things, but only in a DIY fashion, and not on your schedule. \r\n\r\nI hate being defensive, but I don't remember saying that I expect anyone to fix my problems for me on my schedule. *I* hope that *I* can get this done by Friday, because otherwise my boss is going to tell me to dump Postgres and install MSSQL on the server. I only mention this fact because it's a blow against PostgreSQL's reputation if I have to give up. There is no pressure on you, and I apologize if something I said sounded like whining.\r\n \r\nI am VERY grateful for the time that all of you have given to this problem.\r\n \r\nRoman Fail\r\nSr. 
Web Application Programmer\r\nPOS Portal, Inc.\r\n \r\n", "msg_date": "Thu, 16 Jan 2003 11:22:03 -0800", "msg_from": "\"Roman Fail\" <rfail@posportal.com>", "msg_from_op": true, "msg_subject": "Re: 7.3.1 New install, large queries are slow" }, { "msg_contents": "\nOn Thu, 16 Jan 2003, Roman Fail wrote:\n\n> > Stephan Szabo wrote:\n> > I'd assume that tranamount values are fairly randomly distributed\n> > throughout the table, right? It takes about 5 minutes for the\n> > system to read the entire table and more for the index scan, so\n> > you're probably reading most of the table randomly and the index\n> > as well.\n> > What values on batchdetail do you use in query where clauses regularly?\n\n> Yes, tranamount values are randomly distributed. I don't understand\n> why an index scan would be \"random\", isn't the whole point of an index\n> to have an ordered reference into the data? batchdetail has 5 columns\n> that can be in the WHERE clause, all of which are indexed. None is\n> more likely than the other to be searched, so a clustered index\n> doesn't make much sense to me. The whole thing needs to be fast.\n\nYeah, in that case a clustered index doesn't help.\nIndexes give you an ordered way to find the rows that meet a condition,\nbut say you had three rows in your table in this order (note that this is\nan amazing oversimplification):\n(1,'a')\n(2,'b')\n(0,'c')\n\nAnd you want to scan the index from values with the first number between 0\nand 2. It reads the third row, then the first, then the second (to get\nthe letter associated). Between those reads, it's got to seek back and\nforth through the heap file and the order in which it hits them is pretty\nrandom seeming (to the kernel).\n\n> > Ron Johnson wrote:\n> > What are the indexes on batchdetail?\n> > There's one on batchid and a seperate one on tranamount?\n> > If so, what about dropping them and create a single multi-segment\n> > index on \"batchid, tranamount\". (A constraint can then enforce\n> > uniqueness on batchid.\n> There is no index on batchid, I think it is a good idea to create\n> one. Stephan also suggested this. After I try the single batchid\n> index, I might try to multi-segment index idea as well. I'll post\n> results later today.\n\nI think we may all have misread the index list to include an index on\nbatchid. Also you have two indexes on batchdetailid right now (primary key\nalso creates one) which added to the confusion.\n\n> > Stephan Szabo wrote:\n> > Then I realized that batchheader.batchid and\n> > batchdetail.batchid don't even have the same\n> > type, and that's probably something else you'd\n> > need to fix.\n>\n> Yes, that's a mistake on my part....batchdetail(batchid) should be an\n> int8. 
It looks to me like converting this datatype can't be done with\n> a single ALTER TABLE ALTER COLUMN statement.....so I guess I'll work\n> around it with an ADD, UPDATE, DROP, and RENAME.\n\nDon't forget to do a vacuum full in there as well.\n\n\n\n", "msg_date": "Thu, 16 Jan 2003 11:35:55 -0800 (PST)", "msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>", "msg_from_op": false, "msg_subject": "Re: 7.3.1 New install, large queries are slow" }, { "msg_contents": "\"Roman Fail\" <rfail@posportal.com> writes:\n> -> Merge Join (cost=1543595.18..1545448.76 rows=1 width=172) (actual time=1195311.88..1195477.32 rows=5 loops=1)\n> Merge Cond: (\"outer\".batchid = \"inner\".batchid)\n> -> Sort (cost=476.17..476.18 rows=4 width=102) (actual time=30.57..30.59 rows=17 loops=1)\n> Sort Key: b.batchid\n> -> Nested Loop (cost=0.00..476.14 rows=4 width=102) (actual time=25.21..30.47 rows=17 loops=1)\n> -> Index Scan using merchants_ix_merchid_idx on merchants m (cost=0.00..5.65 rows=1 width=78) (actual time=23.81..23.82 rows=1 loops=1)\n> Index Cond: (merchid = '701252267'::character varying)\n> -> Index Scan using batchheader_ix_merchantid_idx on batchheader b (cost=0.00..470.30 rows=15 width=24) (actual time=1.38..6.55 rows=17 loops=1)\n> Index Cond: (\"outer\".merchantid = b.merchantid)\n> Filter: (batchdate > '2002-12-15 00:00:00'::timestamp without time zone)\n> -> Sort (cost=1543119.01..1544045.79 rows=370710 width=70) (actual time=1194260.51..1194892.79 rows=368681 loops=1)\n> Sort Key: d.batchid\n> -> Index Scan using batchdetail_ix_tranamount_idx on batchdetail d (cost=0.00..1489103.46 rows=370710 width=70) (actual time=5.26..1186051.44 rows=370307 loops=1)\n> Index Cond: ((tranamount >= 500.0) AND (tranamount <= 700.0))\n\nThe expensive part of this is clearly the sort and merge of the rows\nextracted from batchdetail. The index on tranamount is not helping\nyou at all, because the condition (between 500 and 700) isn't very\nselective --- it picks up 370000 rows --- and since those rows are\ntotally randomly scattered in the table, you do a ton of random\nseeking. It's actually faster to scan the table linearly --- that's why\nenable_indexscan=off was faster.\n\nHowever, I'm wondering why the thing picked this plan, when it knew it\nwould get only a few rows out of the m/b join (estimate 4, actual 17,\nnot too bad). I would have expected it to use an inner indexscan on\nd.batchid. Either you've not got an index on d.batchid, or there's a\ndatatype mismatch that prevents the index from being used. What are the\ndatatypes of d.batchid and b.batchid, exactly? If they're not the same,\neither make them the same or add an explicit coercion to the query, like\n\tWHERE d.batchid = b.batchid::typeof_d_batchid\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 16 Jan 2003 14:46:33 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: 7.3.1 New install, large queries are slow " }, { "msg_contents": "Tom said:\n> datatype mismatch that prevents the index from being used. What are the\n> datatypes of d.batchid and b.batchid, exactly? If they're not the same,\n> either make them the same or add an explicit coercion to the query, like\n> \tWHERE d.batchid = b.batchid::typeof_d_batchid\n>\nIt can be source of problem. 
I found in one of Roman's mails that\nbatchid is declared as int8 in the master table and as int4 in the detail table.\nRegards,\nTomasz Myrta\n", "msg_date": "Thu, 16 Jan 2003 21:48:26 +0100", "msg_from": "jasiek@klaster.net", "msg_from_op": false, "msg_subject": "Re: 7.3.1 New install, large queries are slow" } ]
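For anyone hitting the same int8/int4 mismatch, the two ways out sketched in this thread look roughly like this (treat it as an outline rather than a recipe; the index name at the end is made up in the style of the others):

-- 1. Explicit coercion in the join, along the lines Tom suggests, so the
--    comparison is int4 = int4 and an index on batchdetail.batchid can be
--    considered:
SELECT d.batchdetailid
FROM batchheader b
JOIN batchdetail d ON d.batchid = b.batchid::int4
WHERE b.batchdate > '2002-12-15';

-- 2. Or make the column types match: the ADD/UPDATE/DROP/RENAME dance
--    mentioned above (int4 to int8), then a fresh index and a vacuum.
ALTER TABLE batchdetail ADD COLUMN batchid_new int8;
UPDATE batchdetail SET batchid_new = batchid;
ALTER TABLE batchdetail DROP COLUMN batchid;
ALTER TABLE batchdetail RENAME COLUMN batchid_new TO batchid;
CREATE INDEX batchdetail_ix_batchid_idx ON batchdetail (batchid);
VACUUM FULL ANALYZE batchdetail;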
[ { "msg_contents": "Tom and Tomasz:\r\nI have to change the datatype of batchdetail.batchid from int4 to int8. After over 4 hours, the UPDATE transfer from the old column to new has not yet completed. After that I still have to build a new index and run VACUUM FULL. When that is all done I'll re-run the various queries, including a specific small one that Josh requested. \r\n \r\nChad Thompson suggested that I add single quotes around the literals in the WHERE clause, which sounded like a great idea based on his experience. Unfortunately, it did not make the query any faster. But read on!\r\n \r\nFor kicks, I tried this simple query, which should happen in an instant. It is the first row in the table.\r\nEXPLAIN ANALYZE select batchdetailid from batchdetail where batchdetailid = 27321;\r\n Seq Scan on batchdetail (cost=0.00..1960485.43 rows=1 width=8) (actual time=17.58..264303.76 rows=1 loops=1)\r\n Filter: (batchdetailid = 27321)\r\n Total runtime: 264303.87 msec\r\nDoes it make sense to do a sequence scan when the primary key index is available? Even so, it's still a pretty horrible time given the hardware.\r\n \r\nHOWEVER.....look at this:\r\nEXPLAIN ANALYZE select batchdetailid from batchdetail where batchdetailid = 27321::bigint;\r\n Index Scan using batchdetail_pkey on batchdetail (cost=0.00..4.13 rows=1 width=8) (actual time=0.03..0.03 rows=1 loops=1)\r\n Index Cond: (batchdetailid = 27321::bigint)\r\n Total runtime: 0.07 msec\r\n \r\nIt sort of feels like a magic moment. I went back and looked through a lot of the JOIN columns and found that I was mixing int4 with int8 in a lot of them. All of these tables (except batchdetail) were migrated using pgAdminII's migration wizard, so I didn't really give a hard look at all the data types matching up since it has a nice data map (I used the defaults except for the money type). \r\n \r\nNow I think I'm just going to drop the entire database and reload the data from scratch, making sure that the data types are mapped exactly right. Correct me if I'm wrong, but int4 only ranges from negative 2 billion to positive 2 billion. All the primary keys for my tables would fit in this range with the exception of batchdetail, which could conceivably grow beyond 2 billion someday (although I'd be archiving a lot of it when it got that big). Maybe I just shouldn't worry about it for now and make everything int4 for simplicity.\r\n \r\nI doubt I will accomplish all this on Friday, but I'll give a full report once I get it all reloaded. \r\n \r\n> Stephan Szabo wrote:\r\n> Also you have two indexes on batchdetailid right now (primary key\r\n> also creates one) which added to the confusion.\r\n\r\nThe 7.3.1 docs for CREATE TABLE don't mention anything about automatic index creation for a PRIMARY KEY. I didn't see any PK indexes via pgAdminII, so I read this line from the docs and decided to create them separately.\r\n \"Technically, PRIMARY KEY is merely a combination of UNIQUE and NOT NULL\"\r\nHowever, this query proves you are right:\r\ntrans=# select relname, relpages, indisunique, indisprimary from pg_class, pg_index\r\ntrans-# where indexrelid in (37126739, 8604257) and pg_class.oid = pg_index.indexrelid;\r\n relname | relpages | indisunique | indisprimary\r\n----------------------------------+----------+-------------+--------------\r\n batchdetail_pkey | 121850 | t | t\r\n batchdetail_ix_batchdetailid_idx | 63934 | f | f\r\n \r\nAll other columns in the two tables are identical for these two indexes. 
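The cleanup itself amounts to something like this, one DROP INDEX per redundant index, using the names from the listing above:

DROP INDEX batchdetail_ix_batchdetailid_idx;  -- batchdetail_pkey already covers it
VACUUM FULL batchdetail;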
So now I've gone through and deleted all of these duplicate indexes I created (and then a VACUUM FULL). Perhaps an extra sentence in the docs might prevent someone else from making the same mistake as I?\r\n \r\n*** Current postgresql.conf settings:\r\ntcpip_socket=true\r\nshared_buffers = 131072\r\nmax_fsm_relations = 10000\r\nmax_fsm_pages = 2000000\r\nsort_mem = 32768\r\ndefault_statistics_target = 30\r\n\r\nThanks again for all your help!\r\n \r\nRoman Fail\r\nSr. Web Application Developer\r\nPOS Portal, Inc.\r\n \r\n", "msg_date": "Thu, 16 Jan 2003 21:54:39 -0800", "msg_from": "\"Roman Fail\" <rfail@posportal.com>", "msg_from_op": true, "msg_subject": "Re: 7.3.1 New install, large queries are slow" }, { "msg_contents": "\"Roman Fail\" <rfail@posportal.com> writes:\n> shared_buffers = 131072\n\nYipes! Try about a tenth that much. Or less.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 17 Jan 2003 01:06:10 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: 7.3.1 New install, large queries are slow " }, { "msg_contents": "\nOn Thu, 16 Jan 2003, Roman Fail wrote:\n\n> > Stephan Szabo wrote:\n> > Also you have two indexes on batchdetailid right now (primary key\n> > also creates one) which added to the confusion.\n>\n> The 7.3.1 docs for CREATE TABLE don't mention anything about automatic\n> index creation for a PRIMARY KEY. I didn't see any PK indexes via\n> pgAdminII, so I read this line from the docs and decided to create\n> them separately.\n> \"Technically, PRIMARY KEY is merely a combination of UNIQUE and NOT NULL\"\n\nRight, but the implementation of UNIQUE constraints in postgresql right\nnow is through a unique index. That's not necessarily a guarantee for\nthe future, but for right now you can rely on it.\n\n", "msg_date": "Thu, 16 Jan 2003 23:00:48 -0800 (PST)", "msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>", "msg_from_op": false, "msg_subject": "Re: 7.3.1 New install, large queries are slow" }, { "msg_contents": "On Thu, 16 Jan 2003, Roman Fail wrote:\n\n>\n> HOWEVER.....look at this:\n> EXPLAIN ANALYZE select batchdetailid from batchdetail where batchdetailid = 27321::bigint;\n> Index Scan using batchdetail_pkey on batchdetail (cost=0.00..4.13 rows=1 width=8) (actual time=0.03..0.03 rows=1 loops=1)\n> Index Cond: (batchdetailid = 27321::bigint)\n> Total runtime: 0.07 msec\n>\n\nWe had this happen to us - we had a serial8 column (int8) and our query\nwas straight forward where id = 12345; which ran craptacularly. After\nmuch head banging and cursing I had tried where id = '12345' and it\nmagically worked. I think the parser is interpreting a \"number\" to be an\nint4 instead of int8. 
(instead of quotes you can also cast via\n12345::int8 like you did)\n\nPerhaps this should go on the TODO - when one side is an int8 and the\nother is a literal number assume the number to be int8 instead of int4?\n\n------------------------------------------------------------------------------\nJeff Trout <jeff@jefftrout.com> http://www.jefftrout.com/\n Ronald McDonald, with the help of cheese soup,\n controls America from a secret volkswagon hidden in the past\n-------------------------------------------------------------------------------\n\n\n", "msg_date": "Fri, 17 Jan 2003 09:00:19 -0500 (EST)", "msg_from": "Jeff <threshar@torgo.978.org>", "msg_from_op": false, "msg_subject": "Re: 7.3.1 New install, large queries are slow" }, { "msg_contents": "Jeff <threshar@torgo.978.org> writes:\n> Perhaps this should go on the TODO - when one side is an int8 and the\n> other is a literal number assume the number to be int8 instead of int4?\n\nIt's been on TODO for so long that it's buried near the bottom.\n\n* Allow SELECT * FROM tab WHERE int2col = 4 to use int2col index, int8,\n float4, numeric/decimal too [optimizer]\n\nThis behavior interacts with enough other stuff that we can't just\nchange it willy-nilly. See many past discussions in the pghackers\narchives if you want details. A recent example of a promising-looking\nfix crashing and burning is\nhttp://fts.postgresql.org/db/mw/msg.html?mid=1357121\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 17 Jan 2003 10:11:17 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: 7.3.1 New install, large queries are slow " }, { "msg_contents": "On Fri, Jan 17, 2003 at 09:00:19AM -0500, Jeff wrote:\n> Perhaps this should go on the TODO - when one side is an int8 and the\n> other is a literal number assume the number to be int8 instead of int4?\n\nActually, this is a broader problem having to do with type coercion. \nThere are a couple of TODO items which refer to this, it looks to me,\nbut in any case there has been _plenty_ of discussion on -general and\n-hackers about what's wrong here.\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<andrew@libertyrms.info> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Fri, 17 Jan 2003 10:15:34 -0500", "msg_from": "Andrew Sullivan <andrew@libertyrms.info>", "msg_from_op": false, "msg_subject": "Re: 7.3.1 New install, large queries are slow" }, { "msg_contents": "Tom,\n\n> > shared_buffers = 131072\n> \n> Yipes! Try about a tenth that much. Or less.\n\nWhy? He has 4GB RAM on the machine.\n\n-Josh Berkus\n", "msg_date": "Fri, 17 Jan 2003 09:08:54 -0800", "msg_from": "\"Josh Berkus\" <josh@agliodbs.com>", "msg_from_op": false, "msg_subject": "Re: 7.3.1 New install, large queries are slow " }, { "msg_contents": "Hi,\n\nI have a challenging (for me) SQL question:\n\nTwo tables\n(Note: these are fictitious, the real tables actually make sense, so no \nneed to re-design our table structure)\n\nTable 1\nid | name | count\n------------------------\n1 | foo | 10\n1 | foo | 20\n2 | bar | 100\n\n\nTable 2\nid | f1 | f2 | t1ref\n-----------------------\n1 | 10 | 20 | 1\n2 | 50 | 40 | 2\n\n\nThe question:\n\nI want to do the following select:\nselect table2.f1, table1.name from table1,table2 where table1.id = \ntable 2.id and table2.id = 2;\n\nThe problem is that I really only need the name from table2 returned \nonce. With this query, I get two records back. Clearly this is \nbecause of the join that I am doing. 
Is there a different way to \nperform this join, so that I only get back ONE record from table1 that \nmatches?\n\nThanks,\n\n-Noah\n\n", "msg_date": "Fri, 17 Jan 2003 12:28:35 -0500", "msg_from": "Noah Silverman <noah@allresearch.com>", "msg_from_op": false, "msg_subject": "Strange Join question" }, { "msg_contents": "\"Josh Berkus\" <josh@agliodbs.com> writes:\n>>> shared_buffers = 131072\n>> \n>> Yipes! Try about a tenth that much. Or less.\n\n> Why? He has 4GB RAM on the machine.\n\nI think a gig of shared buffers is overkill no matter what.\n\nOne reason not to crank up shared_buffers \"just because you can\" is that\nthere are operations (such as CHECKPOINT) that have to scan through all\nthe buffers, linearly. I don't *think* any of these are in\nperformance-critical paths, but nonetheless you're wasting CPU. I trust\nthe kernel to manage a huge number of buffers efficiently more than I\ntrust Postgres.\n\nThere's another issue, which is somewhat platform-dependent; I'm not\nsure if it applies to whatever OS Roman is using. But if you have a\nmachine where SysV shared memory is not locked into RAM, then a huge\nshared buffer arena creates the probability that some of it will be\ntouched seldom enough that the kernel will decide to swap it out. When\nthat happens, you *really* pay through the nose --- a page that you\nmight have been able to get from kernel cache without incurring I/O will\nnow certainly cost you I/O to touch. It's even worse if the buffer\ncontained a dirty page at the time it was swapped out --- now that page\nis going to require being read back in and written out again, a net cost\nof three I/Os where there should have been one. Bottom line is that\nshared_buffers should be kept small enough that the space all looks like\na hot spot to the kernel's memory allocation manager.\n\nIn short, I believe in keeping shared_buffers relatively small --- one\nto ten thousand seems like the right ballpark --- and leaving the kernel\nto allocate the rest of RAM as kernel disk cache.\n\nI have been thinking recently about proposing that we change the factory\ndefault shared_buffers to 1000, which if this line of reasoning is\ncorrect would eliminate the need for average installations to tune it.\nThe reason the default is 64 is that on some older Unixen, the default\nSHMMAX is only one meg --- but it's been a long time since our default\nshared memory request was less than a meg anyway, because of bloat in\nother components of shared memory. It's probably time to change the\ndefault to something more reasonable from a performance standpoint, and\nput some burden on users of older Unixen to either reduce the setting\nor fix their SHMMAX parameter.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 17 Jan 2003 12:33:11 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: 7.3.1 New install, large queries are slow " }, { "msg_contents": "Tom Lane wrote:\n> \"Josh Berkus\" <josh@agliodbs.com> writes:\n> >>> shared_buffers = 131072\n> >> \n> >> Yipes! Try about a tenth that much. Or less.\n> \n> > Why? He has 4GB RAM on the machine.\n> \n> I think a gig of shared buffers is overkill no matter what.\n> \n> One reason not to crank up shared_buffers \"just because you can\" is that\n> there are operations (such as CHECKPOINT) that have to scan through all\n> the buffers, linearly. I don't *think* any of these are in\n> performance-critical paths, but nonetheless you're wasting CPU. 
I trust\n> the kernel to manage a huge number of buffers efficiently more than I\n> trust Postgres.\n> \n> There's another issue, which is somewhat platform-dependent; I'm not\n> sure if it applies to whatever OS Roman is using. But if you have a\n> machine where SysV shared memory is not locked into RAM, then a huge\n> shared buffer arena creates the probability that some of it will be\n> touched seldom enough that the kernel will decide to swap it out. When\n> that happens, you *really* pay through the nose --- a page that you\n> might have been able to get from kernel cache without incurring I/O will\n> now certainly cost you I/O to touch. It's even worse if the buffer\n> contained a dirty page at the time it was swapped out --- now that page\n> is going to require being read back in and written out again, a net cost\n> of three I/Os where there should have been one. Bottom line is that\n> shared_buffers should be kept small enough that the space all looks like\n> a hot spot to the kernel's memory allocation manager.\n\nJust as a data point, I believe other database systems recommend very\nlarge shared memory areas if a lot of data is being accessed. I seem to\nremember Informix doing that.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Fri, 17 Jan 2003 12:52:21 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: 7.3.1 New install, large queries are slow" }, { "msg_contents": "On Fri, Jan 17, 2003 at 12:33:11PM -0500, Tom Lane wrote:\n> One reason not to crank up shared_buffers \"just because you can\" is that\n> there are operations (such as CHECKPOINT) that have to scan through all\n> the buffers, linearly. I don't *think* any of these are in\n> performance-critical paths, but nonetheless you're wasting CPU. I trust\n> the kernel to manage a huge number of buffers efficiently more than I\n> trust Postgres.\n\nFor what it's worth, we have exactly that experience on our Sun\nE4500s. I had machines with 12 gig I was testing on, and I increased\nthe buffers to 2 Gig, because truss was showing us some sluggishness\nin the system was tripping on the system call to get a page. It was\nsatisifed right away by the kernel's cache, but the system call was\nstill the most expensive part of the operation.\n\nAfter we'd increased the shared buffers, however, performance\n_degraded_ considerably. It now spent all its time instead managing\nthe huge shared buffer, and the cost of that was much worse than the\ncost of the system call.\n\nSo it is extremely dependent on the efficiency of PostgreSQL's use of\nshared memory as compared to the efficiency of the system call.\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<andrew@libertyrms.info> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Fri, 17 Jan 2003 12:59:39 -0500", "msg_from": "Andrew Sullivan <andrew@libertyrms.info>", "msg_from_op": false, "msg_subject": "Re: 7.3.1 New install, large queries are slow" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Just as a data point, I believe other database systems recommend very\n> large shared memory areas if a lot of data is being accessed. 
I seem to\n> remember Informix doing that.\n\nYeah, but isn't that theory a hangover from pre-Unix operating systems?\nIn all modern Unixen, you can expect the kernel to make use of any spare\nRAM for disk buffer cache --- and that behavior makes it pointless for\nPostgres to try to do large amounts of its own buffering.\n\nHaving a page in our own buffer instead of kernel buffer saves a context\nswap to access the page, but it doesn't save I/O, so the benefit is a\nlot less than you might think. I think there's seriously diminishing\nreturns in pushing shared_buffers beyond a few thousand, and once you\nget to the point where it distorts the kernel's ability to manage\nmemory for processes, you're really shooting yourself in the foot.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 17 Jan 2003 13:01:09 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: 7.3.1 New install, large queries are slow " }, { "msg_contents": "On Fri, 17 Jan 2003, Tom Lane wrote:\n\n>\n> Yeah, but isn't that theory a hangover from pre-Unix operating systems?\n> In all modern Unixen, you can expect the kernel to make use of any spare\n> RAM for disk buffer cache --- and that behavior makes it pointless for\n> Postgres to try to do large amounts of its own buffering.\n>\n\nInformix, oracle, etc all do raw device access bypassing the kernels\nbuffering, etc. So they need heaping gobules of memory to do the same\nthing the kernel does.. but since they know the exact patterns of data and\nhow things will be done they can fine tune their buffer caches to get much\nbetter performance than the kernel (15-20% in informix's case) since the\nkernel needs to be a \"works good generally\"\n\nprobably the desire to crank that up stems from using those other db's I\nknow I used to do that with pgsql. (Ahh, I crank that setting up through\nthe roof on informix, I'll do the same with pg)\n\nperhaps a FAQ entry or comment in the shipped config about it?\nI think if people realize it isn't quite the same as what it does in\noracle/informix/etc then they'll be less inclined to cranking it.\n\n------------------------------------------------------------------------------\nJeff Trout <jeff@jefftrout.com> http://www.jefftrout.com/\n Ronald McDonald, with the help of cheese soup,\n controls America from a secret volkswagon hidden in the past\n-------------------------------------------------------------------------------\n\n\n", "msg_date": "Fri, 17 Jan 2003 13:39:41 -0500 (EST)", "msg_from": "Jeff <threshar@torgo.978.org>", "msg_from_op": false, "msg_subject": "Re: 7.3.1 New install, large queries are slow" }, { "msg_contents": "Jeff <threshar@torgo.978.org> writes:\n> On Fri, 17 Jan 2003, Tom Lane wrote:\n>> Yeah, but isn't that theory a hangover from pre-Unix operating systems?\n\n> Informix, oracle, etc all do raw device access bypassing the kernels\n> buffering, etc. So they need heaping gobules of memory to do the same\n> thing the kernel does..\n\nD'oh, I believe Jeff's put his finger on it. You need lotsa RAM if you\nare trying to bypass the OS. But Postgres would like to work with the\nOS, not bypass it.\n\n> but since they know the exact patterns of data and\n> how things will be done they can fine tune their buffer caches to get much\n> better performance than the kernel (15-20% in informix's case) since the\n> kernel needs to be a \"works good generally\"\n\nThey go to all that work for 15-20% ??? Remind me not to follow that\nprimrose path. 
I can think of lots of places where we can buy 20% for\nless work than implementing (and maintaining) our own raw-device access\nlayer.\n\n> perhaps a FAQ entry or comment in the shipped config about it?\n> I think if people realize it isn't quite the same as what it does in\n> oracle/informix/etc then they'll be less inclined to cranking it.\n\nGood thought. But we do need to set the default to something a tad\nmore realistic-for-2003 than 64 buffers ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 17 Jan 2003 23:49:31 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: 7.3.1 New install, large queries are slow " }, { "msg_contents": "On Thu, 16 Jan 2003, Roman Fail wrote:\n\n> It sort of feels like a magic moment. I went back and looked through a\n> lot of the JOIN columns and found that I was mixing int4 with int8 in a\n> lot of them. \n\nThere is note about it in the docs:\n\nhttp://www.postgresql.org/idocs/index.php?datatype.html#DATATYPE-INT\n\nI don't know if this is in a faq anywhere, but it should be. I myself have\nhelped a number of persons with this. Every once in a while there come\nsomeone in to the #postgresql irc channel with the exact same problem. \nUsually they leave the channel very happy, when their queries take less\nthen a second instead of minutes.\n\n-- \n/Dennis\n\n", "msg_date": "Sat, 18 Jan 2003 08:39:08 +0100 (CET)", "msg_from": "=?ISO-8859-1?Q?Dennis_Bj=F6rklund?= <db@zigo.dhs.org>", "msg_from_op": false, "msg_subject": "Re: 7.3.1 New install, large queries are slow" }, { "msg_contents": "On Fri, Jan 17, 2003 at 11:49:31PM -0500, Tom Lane wrote:\n> Jeff <threshar@torgo.978.org> writes:\n> > Informix, oracle, etc all do raw device access bypassing the kernels\n> > buffering, etc. So they need heaping gobules of memory to do the same\n> > thing the kernel does..\n> \n> D'oh, I believe Jeff's put his finger on it. You need lotsa RAM if you\n> are trying to bypass the OS. But Postgres would like to work with the\n> OS, not bypass it.\n\nOne of the interesting things I have been playing with on Solaris\nrecently is the various no-buffer settings you can give to the kernel\nfor filesystems. The idea is that you don't have the kernel do the\nbuffering, and you set your database's shared memory setting\n_reeeeal_ high. \n\nAs nearly as I can tell, there is again no benefit with PostgreSQL. \nI'd also be amazed if this approach is a win for other systems. But\na lot of DBAs seem to believe that they know better than their\ncomputers which tables are \"really\" accessed frequently. I think\nthey must be smarter than I am: I'd rather trust a system that was\ndesigned to track these things and change the tuning on the fly,\nmyself.\n\n(To be fair, there are some cases where you have an\ninfrequently-accessed table which nevertheless is required to be fast\nfor some reason or other, so you might want to force it to stay in\nmemory.)\n\nA\n\n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<andrew@libertyrms.info> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Sat, 18 Jan 2003 11:24:26 -0500", "msg_from": "Andrew Sullivan <andrew@libertyrms.info>", "msg_from_op": false, "msg_subject": "Re: 7.3.1 New install, large queries are slow" }, { "msg_contents": "On 17 Jan 2003 at 12:33, Tom Lane wrote:\n\n> \"Josh Berkus\" <josh@agliodbs.com> writes:\n> >>> shared_buffers = 131072\n> >> \n> >> Yipes! Try about a tenth that much. Or less.\n> \n> > Why? 
He has 4GB RAM on the machine.\n> \n> I think a gig of shared buffers is overkill no matter what.\n> \n> One reason not to crank up shared_buffers \"just because you can\" is that\n> there are operations (such as CHECKPOINT) that have to scan through all\n> the buffers, linearly. I don't *think* any of these are in\n> performance-critical paths, but nonetheless you're wasting CPU. I trust\n\nAssuming that one knows what he/she is doing, would it help in such cases i.e. \nthe linear search thing, to bump up page size to day 16K/32K?\n\nand that is also the only way to make postgresql use more than couple of gigs \nof RAM, isn't it?\n\nBye\n Shridhar\n\n--\nArithmetic:\tAn obscure art no longer practiced in the world's developed \ncountries.\n\n", "msg_date": "Mon, 20 Jan 2003 12:04:45 +0530", "msg_from": "\"Shridhar Daithankar\" <shridhar_daithankar@persistent.co.in>", "msg_from_op": false, "msg_subject": "Re: 7.3.1 New install, large queries are slow " }, { "msg_contents": "\"Shridhar Daithankar\" <shridhar_daithankar@persistent.co.in> writes:\n> On 17 Jan 2003 at 12:33, Tom Lane wrote:\n>> One reason not to crank up shared_buffers \"just because you can\" is that\n>> there are operations (such as CHECKPOINT) that have to scan through all\n>> the buffers, linearly. I don't *think* any of these are in\n>> performance-critical paths, but nonetheless you're wasting CPU. I trust\n\n> Assuming that one knows what he/she is doing, would it help in such cases i.e. \n> the linear search thing, to bump up page size to day 16K/32K?\n\nYou mean increase page size and decrease the number of buffers\nproportionately? It'd save on buffer-management overhead, but\nI wouldn't assume there'd be an overall performance gain. The\nsystem would have to do more I/O per page read or written; which\nmight be a wash for sequential scans, but I bet it would hurt for\nrandom access.\n\n> and that is also the only way to make postgresql use more than couple of gigs\n> of RAM, isn't it?\n\nIt seems quite unrelated. The size of our shared memory segment is\nlimited by INT_MAX --- chopping it up differently won't change that.\n\nIn any case, I think worrying because you can't push shared buffers\nabove two gigs is completely wrongheaded, for the reasons already\ndiscussed in this thread. The notion that Postgres can't use more\nthan two gig because its shared memory is limited to that is\n*definitely* wrongheaded. We can exploit however much memory your\nkernel can manage for kernel disk cache.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 20 Jan 2003 02:14:43 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: 7.3.1 New install, large queries are slow " }, { "msg_contents": "On 20 Jan 2003 at 2:14, Tom Lane wrote:\n\n> \"Shridhar Daithankar\" <shridhar_daithankar@persistent.co.in> writes:\n> > Assuming that one knows what he/she is doing, would it help in such cases i.e. \n> > the linear search thing, to bump up page size to day 16K/32K?\n> \n> You mean increase page size and decrease the number of buffers\n> proportionately? It'd save on buffer-management overhead, but\n> I wouldn't assume there'd be an overall performance gain. The\n> system would have to do more I/O per page read or written; which\n> might be a wash for sequential scans, but I bet it would hurt for\n> random access.\n\nRight. But it has its own applications. 
If I am saving huge data blocks like \nsay gene stuff, I might be better off living with a relatively bigger page \nfragmentation.\n \n> > and that is also the only way to make postgresql use more than couple of gigs\n> > of RAM, isn't it?\n> \n> It seems quite unrelated. The size of our shared memory segment is\n> limited by INT_MAX --- chopping it up differently won't change that.\n\nWell, if my page size is doubled, I can get double amount of shared buffers. \nThat was the logic nothing else.\n\n> In any case, I think worrying because you can't push shared buffers\n> above two gigs is completely wrongheaded, for the reasons already\n> discussed in this thread. The notion that Postgres can't use more\n> than two gig because its shared memory is limited to that is\n> *definitely* wrongheaded. We can exploit however much memory your\n> kernel can manage for kernel disk cache.\n\nWell, I agree completely. However there are folks and situation which demands \nthings because they can be done. This is just to check out the absolute limit \nwhat it can manage.\n\n\nBye\n Shridhar\n\n--\nBagdikian's Observation:\tTrying to be a first-rate reporter on the average \nAmerican newspaper\tis like trying to play Bach's \"St. Matthew Passion\" on a \nukelele.\n\n", "msg_date": "Mon, 20 Jan 2003 13:14:27 +0530", "msg_from": "\"Shridhar Daithankar\" <shridhar_daithankar@persistent.co.in>", "msg_from_op": false, "msg_subject": "Re: 7.3.1 New install, large queries are slow " }, { "msg_contents": "On Mon, 2003-01-20 at 01:14, Tom Lane wrote:\n> \"Shridhar Daithankar\" <shridhar_daithankar@persistent.co.in> writes:\n> > On 17 Jan 2003 at 12:33, Tom Lane wrote:\n[snip]\n> > and that is also the only way to make postgresql use more than couple of gigs\n> > of RAM, isn't it?\n> \n> It seems quite unrelated. The size of our shared memory segment is\n> limited by INT_MAX --- chopping it up differently won't change that.\n> \n> In any case, I think worrying because you can't push shared buffers\n> above two gigs is completely wrongheaded, for the reasons already\n> discussed in this thread. The notion that Postgres can't use more\n> than two gig because its shared memory is limited to that is\n> *definitely* wrongheaded. We can exploit however much memory your\n> kernel can manage for kernel disk cache.\n\nhttp://www.redhat.com/services/techsupport/production/GSS_caveat.html\n\"RAM Limitations on IA32\nRed Hat Linux releases based on the 2.4 kernel -- including Red Hat\nLinux 7.1, 7.2, 7.3 and Red Hat Linux Advanced Server 2.1 -- support\na maximum of 16GB of RAM.\"\n\nSo if I have some honking \"big\" Compaq Xeon SMP server w/ 16GB RAM,\nand top(1) shows that there is 8GB of buffers, then Pg will be happy\nas a pig in the mud?\n\n-- \n+------------------------------------------------------------+\n| Ron Johnson, Jr. mailto:ron.l.johnson@cox.net |\n| Jefferson, LA USA http://members.cox.net/ron.l.johnson |\n| |\n| \"Basically, I got on the plane with a bomb. Basically, I |\n| tried to ignite it. 
Basically, yeah, I intended to damage |\n| the plane.\" |\n| RICHARD REID, who tried to blow up American Airlines |\n| Flight 63 |\n+------------------------------------------------------------+\n\n", "msg_date": "20 Jan 2003 03:41:21 -0600", "msg_from": "Ron Johnson <ron.l.johnson@cox.net>", "msg_from_op": false, "msg_subject": "Very large caches (was Re: 7.3.1 New install, large queries are slow)" }, { "msg_contents": "Ron Johnson <ron.l.johnson@cox.net> writes:\n> So if I have some honking \"big\" Compaq Xeon SMP server w/ 16GB RAM,\n> and top(1) shows that there is 8GB of buffers, then Pg will be happy\n> as a pig in the mud?\n\nSounds good to me ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 20 Jan 2003 10:16:47 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Very large caches (was Re: 7.3.1 New install,\n\tlarge queries are slow)" }, { "msg_contents": "> >> Yeah, but isn't that theory a hangover from pre-Unix operating systems?\n> \n> > Informix, oracle, etc all do raw device access bypassing the kernels\n> > buffering, etc. So they need heaping gobules of memory to do the same\n> > thing the kernel does..\n> \n> D'oh, I believe Jeff's put his finger on it. You need lotsa RAM if you\n> are trying to bypass the OS. But Postgres would like to work with the\n> OS, not bypass it.\n> \n> > but since they know the exact patterns of data and\n> > how things will be done they can fine tune their buffer caches to get much\n> > better performance than the kernel (15-20% in informix's case) since the\n> > kernel needs to be a \"works good generally\"\n> \n> They go to all that work for 15-20% ??? Remind me not to follow that\n> primrose path. I can think of lots of places where we can buy 20% for\n> less work than implementing (and maintaining) our own raw-device access\n> layer.\n\nThis is related somewhat to the raw device access discussion. This is\na quote from Matt Dillion (FreeBSD VM guru) on the topic of disk\ncaches (Message-Id:\n<200301270657.h0R6v2qH071774@apollo.backplane.com>) and a few bits at\nthe end:\n\n\n### Begin quote\n Mmmmm. Basically what it comes down to is that without foreknowledge\n of the data locations being accessed, it is not possible for any \n cache algorithm to adapt to all the myriad ways data might be accessed.\n If you focus the cache on one methodology it will probably perform\n terribly when presented with some other methodology.\n\n What this means is that for the cases where you *KNOW* how a program\n intends to access a dataset larger then main memory, it is better\n to have the program explicitly cache/not-cache the data under program\n control rather then trying to force the system cache to adapt.\n\n I'll also be nice and decode some of Terry's Jargon for the rest of\n the readers.\n\n:will result in significant failure of random page replacement to\n:result in cache hits; likewise, going to 85% overage will practically\n:guarantee an almost 100% failure rate, as cyclical access with random\n:replacement is statistically more likely, in aggregate, to replace\n:the pages which are there longer (the probability is iterative and\n:additive: it's effectively a permutation).\n\n What Terry is saying is that if you have a dataset that is 2x\n the size of your cache, the cache hit rate on that data with random\n page replacement is NOT going to be 50%. 
This is because with random\n page replacement the likelihood of a piece of data being found in\n the cache depends on how long the data has been sitting in the cache.\n The longer the data has been sitting in the cache, the less likely you\n will find it when you need it (because it is more likely to have been\n replaced by the random replacement algorithm over time).\n\n So, again, the best caching methodology to use in the case where\n you *know* how much data you will be accessing and how you will access\n it is to build the caching directly into your program and not depend\n on system caching at all (for datasets which are far larger then\n main memory).\n\n This is true of all systems, not just BSD. This is one reason why\n databases do their own caching (another is so databases know when an\n access will require I/O for scheduling reasons, but that's a different\n story).\n\n The FreeBSD VM caching system does prevent one process from exhausting\n another process's cached data due to a sequential access, but the\n FreeBSD VM cache does not try to outsmart sequential data accesses to\n datasets which are larger then available cache space because it's an\n insanely difficult (impossible) problem to solve properly without\n foreknowledge of what data elements will be accessed when. \n\n This isn't to say that we can't improve on what we have now. \n I certainly think we can. But there is no magic bullet that will\n deal with every situation.\n\n\t\t\t\t\t\t-Matt\n### End quote\n\nSo if there really is only a 15-20% performance gain to be had from\nusing raw disk access, that 15-20% loss comes from not being able to\ntell the OS what to cache, what not to cache, and what order to have\nthe pages in... which only really matters if there is RAM available to\nthe kernel to cache, and that it is able to determine what is valuable\nto cache in the course of its operations. Predictive caching by the\nOS isn't done because it understands PostgreSQL, because it\nunderstands a generic algorithm for page hits/misses.\n\nWhat is interesting after reading this, however, is the prospect of a\n15-20% speed up on certain tables that we know are accessed frequently\nby implicitly specifying a set of data to be preferred in a user space\ncache. It's impossible for the OS to cache the pages that make the\nbiggest impact on user visible performance given the OS has no\nunderstanding of what pages make a big difference on user visible\nperformance, a user land database process, however, would.\n\nAs things stand, it's entirely possible for a set of large queries to\ncome through and wipe the kernel's cache that smaller queries were\nusing. Once a cache misses, the kernel then has to fetch the data\nagain which could slow down over all number of transactions per\nsecond. That said, this is something that an in-database scheduler\ncould avoid by placing a lower priority on larger, more complex\nqueries with the assumption being that having the smaller queries\ncontinue to process and get in/out is more important than shaving a\nfew seconds off of a larger query that would deplete the cache used by\nthe smaller queries. Oh to be a DBA and being able to make those\ndecisions instead of the kernel...\n\nHrm, so two ideas or questions come to mind:\n\n1) On some of my really large read only queries, it would be SUUUPER\n nice to be able to re-nice the process from SQL land to 5, 10, or\n even 20. 
IIRC, BSD's VM system is smart enough to prevent lower\n priority jobs from monopolizing the disk cache, which would let the\n smaller faster turn around queries, continue to exist with their\n data in the kernel's disk cache. (some kind of query complexity\n threshold that results in a reduction of priority or an explicit\n directive to run at a lower priority)\n\n2) Is there any way of specifying that a particular set of tables\n should be kept in RAM or some kind of write through cache? I know\n data is selected into a backend out of the catalogs, but would it\n be possible to have them kept in memory and only re-read on change\n with some kind of semaphore? Now that all system tables are in\n their own schemas (pg_catalog and pg_toast), would it be hard to\n set a flag on a change to those tables that would cause the\n postmaster, or children, to re-read then instead of rely on their\n cache? With copy-on-write forking, this could be pretty efficient\n if the postmaster did this and forked off a copy with the tables\n already in memory instead of on disk.\n\nJust a few ideas/ramblings, hope someone finds them interesting... the\nrenice function is one that I think I'll spend some time looking into\nhere shortly actually. -sc\n\n-- \nSean Chittenden\n", "msg_date": "Mon, 27 Jan 2003 00:17:45 -0800", "msg_from": "Sean Chittenden <sean@chittenden.org>", "msg_from_op": false, "msg_subject": "Re: 7.3.1 New install, large queries are slow" }, { "msg_contents": "On Mon, 27 Jan 2003, Sean Chittenden wrote:\n\n> The FreeBSD VM caching system does prevent one process from exhausting\n> another process's cached data due to a sequential access, but the\n> FreeBSD VM cache does not try to outsmart sequential data accesses to\n> datasets which are larger then available cache space because it's an\n> insanely difficult (impossible) problem to solve properly without\n> foreknowledge of what data elements will be accessed when.\n\nThis is not impossible; Solaris does just this. I'm a little short of\ntime right now, but I can probably dig up the paper on google if nobody\nelse finds it.\n\nAlso, it is not hard to give the OS foreknowledge of your access\npattern, if you use mmap. Just call madvise and use the MADV_RANDOM,\nMADV_SEQUENTIAL, MADV_WILLNEED and MADV_DONTNEED flags. (This is one\nof the reasons I think we might see a performance improvement from\nswitching from regular I/O to mmap I/O.)\n\nYou can go back through the archives and see a much fuller discussion of\nall of this.\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n", "msg_date": "Mon, 27 Jan 2003 19:34:42 +0900 (JST)", "msg_from": "Curt Sampson <cjs@cynic.net>", "msg_from_op": false, "msg_subject": "Re: 7.3.1 New install, large queries are slow" }, { "msg_contents": "On Mon, 2003-01-27 at 04:34, Curt Sampson wrote:\n> On Mon, 27 Jan 2003, Sean Chittenden wrote:\n> \n> > The FreeBSD VM caching system does prevent one process from exhausting\n> > another process's cached data due to a sequential access, but the\n> > FreeBSD VM cache does not try to outsmart sequential data accesses to\n> > datasets which are larger then available cache space because it's an\n> > insanely difficult (impossible) problem to solve properly without\n> > foreknowledge of what data elements will be accessed when.\n> \n> This is not impossible; Solaris does just this. I'm a little short of\n\nQuite. 
One way to do it is:\n- the OS notices that process X has been sequentially reading thru \n file Y for, say, 3 seconds.\n- the OS knows that X is currently at the mid-point of file Y\n- OS says, \"Hey, I think I'll be a bit more agressive about, when I\n have a bit of time, trying to read Y faster than X is requesting\n it\n\nIt wouldn't work well, though, in a client-server DB like Postgres,\nwhich, in a busy multi-user system, is constantly hitting different\nparts of different files.\n\nThe algorithm, though, is used in the RDBMS Rdb. It uses the algorithm\nabove, substituting \"process X\" for \"client X\", and passes the agressive\nreads of Y on to the OS. It's a big win when processing a complete\ntable, like during a CREATE INDEX, or \"SELECT foo, COUNT(*)\" where\nthere's no index on foo.\n\n-- \n+---------------------------------------------------------------+\n| Ron Johnson, Jr. mailto:ron.l.johnson@cox.net |\n| Jefferson, LA USA http://members.cox.net/ron.l.johnson |\n| |\n| \"Fear the Penguin!!\" |\n+---------------------------------------------------------------+\n\n", "msg_date": "27 Jan 2003 09:17:28 -0600", "msg_from": "Ron Johnson <ron.l.johnson@cox.net>", "msg_from_op": false, "msg_subject": "Re: 7.3.1 New install, large queries are slow" }, { "msg_contents": "\nDetecting sequential scan and increasing read-ahead is a standard OS\ncapability, and most/all do that already. Solaris has code to detect\nwhen a sequential scan is wiping the cache and adjusting the buffer\nfrees, called \"free-behind.\"\n\n---------------------------------------------------------------------------\n\nRon Johnson wrote:\n> On Mon, 2003-01-27 at 04:34, Curt Sampson wrote:\n> > On Mon, 27 Jan 2003, Sean Chittenden wrote:\n> > \n> > > The FreeBSD VM caching system does prevent one process from exhausting\n> > > another process's cached data due to a sequential access, but the\n> > > FreeBSD VM cache does not try to outsmart sequential data accesses to\n> > > datasets which are larger then available cache space because it's an\n> > > insanely difficult (impossible) problem to solve properly without\n> > > foreknowledge of what data elements will be accessed when.\n> > \n> > This is not impossible; Solaris does just this. I'm a little short of\n> \n> Quite. One way to do it is:\n> - the OS notices that process X has been sequentially reading thru \n> file Y for, say, 3 seconds.\n> - the OS knows that X is currently at the mid-point of file Y\n> - OS says, \"Hey, I think I'll be a bit more agressive about, when I\n> have a bit of time, trying to read Y faster than X is requesting\n> it\n> \n> It wouldn't work well, though, in a client-server DB like Postgres,\n> which, in a busy multi-user system, is constantly hitting different\n> parts of different files.\n> \n> The algorithm, though, is used in the RDBMS Rdb. It uses the algorithm\n> above, substituting \"process X\" for \"client X\", and passes the agressive\n> reads of Y on to the OS. It's a big win when processing a complete\n> table, like during a CREATE INDEX, or \"SELECT foo, COUNT(*)\" where\n> there's no index on foo.\n> \n> -- \n> +---------------------------------------------------------------+\n> | Ron Johnson, Jr. 
mailto:ron.l.johnson@cox.net |\n> | Jefferson, LA USA http://members.cox.net/ron.l.johnson |\n> | |\n> | \"Fear the Penguin!!\" |\n> +---------------------------------------------------------------+\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 27 Jan 2003 16:08:59 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: 7.3.1 New install, large queries are slow" }, { "msg_contents": "On Mon, 2003-01-27 at 15:08, Bruce Momjian wrote:\n> Detecting sequential scan and increasing read-ahead is a standard OS\n> capability, and most/all do that already. Solaris has code to detect\n> when a sequential scan is wiping the cache and adjusting the buffer\n> frees, called \"free-behind.\"\n\nAh, didn't know that.\n\n> ---------------------------------------------------------------------------\n> \n> Ron Johnson wrote:\n> > On Mon, 2003-01-27 at 04:34, Curt Sampson wrote:\n> > > On Mon, 27 Jan 2003, Sean Chittenden wrote:\n> > > \n> > > > The FreeBSD VM caching system does prevent one process from exhausting\n> > > > another process's cached data due to a sequential access, but the\n> > > > FreeBSD VM cache does not try to outsmart sequential data accesses to\n> > > > datasets which are larger then available cache space because it's an\n> > > > insanely difficult (impossible) problem to solve properly without\n> > > > foreknowledge of what data elements will be accessed when.\n> > > \n> > > This is not impossible; Solaris does just this. I'm a little short of\n> > \n> > Quite. One way to do it is:\n> > - the OS notices that process X has been sequentially reading thru \n> > file Y for, say, 3 seconds.\n> > - the OS knows that X is currently at the mid-point of file Y\n> > - OS says, \"Hey, I think I'll be a bit more agressive about, when I\n> > have a bit of time, trying to read Y faster than X is requesting\n> > it\n> > \n> > It wouldn't work well, though, in a client-server DB like Postgres,\n> > which, in a busy multi-user system, is constantly hitting different\n> > parts of different files.\n> > \n> > The algorithm, though, is used in the RDBMS Rdb. It uses the algorithm\n> > above, substituting \"process X\" for \"client X\", and passes the agressive\n> > reads of Y on to the OS. It's a big win when processing a complete\n> > table, like during a CREATE INDEX, or \"SELECT foo, COUNT(*)\" where\n> > there's no index on foo.\n\n-- \n+---------------------------------------------------------------+\n| Ron Johnson, Jr. 
mailto:ron.l.johnson@cox.net |\n| Jefferson, LA USA http://members.cox.net/ron.l.johnson |\n| |\n| \"Fear the Penguin!!\" |\n+---------------------------------------------------------------+\n\n", "msg_date": "27 Jan 2003 19:42:41 -0600", "msg_from": "Ron Johnson <ron.l.johnson@cox.net>", "msg_from_op": false, "msg_subject": "Re: 7.3.1 New install, large queries are slow" }, { "msg_contents": "On Tue, 27 Jan 2003, Ron Johnson wrote:\n\n> [read-ahead detection stuff deleted]\n>\n> It wouldn't work well, though, in a client-server DB like Postgres,\n> which, in a busy multi-user system, is constantly hitting different\n> parts of different files.\n\nIt works great. You just do it on a file-descriptor by file-descriptor\nbasis.\n\nUnfortunately, I don't know of any OSes that detect backwards scans.\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n", "msg_date": "Tue, 28 Jan 2003 19:54:50 +0900 (JST)", "msg_from": "Curt Sampson <cjs@cynic.net>", "msg_from_op": false, "msg_subject": "Re: 7.3.1 New install, large queries are slow" } ]
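A rough postgresql.conf sketch of the sizing advice in the thread above, for a 7.3 server with a few gigabytes of RAM. The specific numbers are illustrative assumptions, not values taken from the thread; the point is the rule of thumb of a few thousand shared buffers, with the bulk of RAM left to the kernel's disk cache.

shared_buffers = 4096           # about 32MB of 8kB buffers: "a few thousand", not 131072 (1GB)
effective_cache_size = 262144   # assumption: roughly 2GB of kernel disk cache, in 8kB pages,
                                # so the planner expects most reads to be served from cache

Settings like these take effect at server start (shared_buffers cannot be changed per session), and SHMMAX may need to be raised to match the shared memory request.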
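A small psql sketch of the int4-versus-int8 literal issue Dennis points to in this thread. The table is made up for illustration, and the commented plans assume it is populated and ANALYZEd; on a near-empty table the planner will choose a sequential scan regardless.

CREATE TABLE t8 (id INT8 PRIMARY KEY, payload TEXT);

EXPLAIN SELECT * FROM t8 WHERE id = 12345;        -- bare literal is int4: the int8 index is not used on 7.3
EXPLAIN SELECT * FROM t8 WHERE id = 12345::int8;  -- explicit cast: index scan
EXPLAIN SELECT * FROM t8 WHERE id = '12345';      -- quoted literal: also resolves to int8 and uses the index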
[ { "msg_contents": "Hi, everyone!\n\nI have a simple query which takes almost 3 seconds to complete, but\ndisabling sequence scans leads to a new plan using index. This second\nplan takes less than 1 millisecond to run.\n\nSo, I'd like to hear any comments and suggestions.\n\nDetails.\n\nCREATE TABLE MediumStats (\n year SMALLINT NOT NULL,\n month SMALLINT NOT NULL,\n day SMALLINT NOT NULL,\n hour SMALLINT NOT NULL,\n --- and then goes few data fields\n figureId INTEGER NOT NULL,\n typeId INTEGER NOT NULL\n PRIMARY KEY (figureId, typeId, year, month, day, hour)\n);\n\nCREATE FUNCTION indexHelper (INT2, INT2, INT2, INT2)\nRETURNS CHARACTER(10) AS '\nreturn sprintf(\"%d%02d%02d%02d\", @_);\n' LANGUAGE 'plperl' WITH (isCachable);\n\nCREATE INDEX timeIndex ON MediumStats (indexHelper(year,month,day,hour));\n\nand that is the query:\nSELECT * FROM MediumStats\nWHERE indexHelper(year,month,day,hour) < '2002121500'\nLIMIT 1;\n\nFirst, original plan:\nLimit (cost=0.00..0.09 rows=1 width=22) (actual time=2969.30..2969.30 rows=0 loops=1)\n -> Seq Scan on mediumstats (cost=0.00..1332.33 rows=15185 width=22) (actual time=2969.29..2969.29 rows=0 loops=1)\nTotal runtime: 2969.39 msec\n\nSecond plan, seq scans disabled:\n\nLimit (cost=0.00..0.19 rows=1 width=6) (actual time=0.43..0.43 rows=0 loops=1)\n -> Index Scan using timeindex on mediumstats (cost=0.00..2898.96 rows=15185 width=6) (actual time=0.42..0.42 rows=0 loops=1)\nTotal runtime: 0.54 msec\n\nTable MediumStats currently has 45000 rows, all rows belong to this\nmonth.\n\n", "msg_date": "Fri, 17 Jan 2003 16:48:00 +0500", "msg_from": "Timur Irmatov <thor@sarkor.com>", "msg_from_op": true, "msg_subject": "index usage" }, { "msg_contents": "Timur Irmatov <thor@sarkor.com> writes:\n> Limit (cost=0.00..0.19 rows=1 width=6) (actual time=0.43..0.43 rows=0 loops=1)\n> -> Index Scan using timeindex on mediumstats (cost=0.00..2898.96 rows=15185 width=6) (actual time=0.42..0.42 rows=0 loops=1)\n\nThe planner has absolutely no clue about the behavior of your function,\nand so its estimate of the number of rows matched is way off, leading to\na poor estimate of the cost of an indexscan. There is not much to be\ndone about this in the current system (though I've speculated about the\npossibility of computing statistics for functional indexes).\n\nJust out of curiosity, why don't you lose all this year/month/day stuff\nand use a timestamp column? Less space, more functionality.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 17 Jan 2003 09:57:04 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: index usage " }, { "msg_contents": "TL> Timur Irmatov <thor@sarkor.com> writes:\n>> Limit (cost=0.00..0.19 rows=1 width=6) (actual time=0.43..0.43 rows=0 loops=1)\n>> -> Index Scan using timeindex on mediumstats (cost=0.00..2898.96 rows=15185 width=6) (actual time=0.42..0.42 rows=0 loops=1)\n\nTL> The planner has absolutely no clue about the behavior of your function,\nTL> and so its estimate of the number of rows matched is way off, leading to\nTL> a poor estimate of the cost of an indexscan. There is not much to be\nTL> done about this in the current system (though I've speculated about the\nTL> possibility of computing statistics for functional indexes).\n\nyou're absolutely right.\nthanks.\n\nTL> Just out of curiosity, why don't you lose all this year/month/day stuff\nTL> and use a timestamp column? 
Less space, more functionality.\n\n:-)\nWell, I've a seen a lot of people on pgsql-general mailing list with\nproblems with dates, timestamps, and I was just scared of using\nPostreSQL date and time types and functions..\n\nMay be, I should just try it myself before doing it other way...\n\n", "msg_date": "Fri, 17 Jan 2003 20:08:14 +0500", "msg_from": "Timur Irmatov <thor@sarkor.com>", "msg_from_op": true, "msg_subject": "Re: index usage" }, { "msg_contents": "On Fri, Jan 17, 2003 at 08:08:14PM +0500, Timur Irmatov wrote:\n> Well, I've a seen a lot of people on pgsql-general mailing list with\n> problems with dates, timestamps, and I was just scared of using\n> PostreSQL date and time types and functions..\n\nWhat problems? The only problems I know of with datetime stuff are\non those machines with the utterly silly glibc hobbling, and even\nthat has been worked around in recent releases. I think the date and\ntime handling in PostgreSQL beats most systems. It just works, and\nhandles all the time-zone conversions for you and everything.\n\nA\n\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<andrew@libertyrms.info> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Fri, 17 Jan 2003 10:33:41 -0500", "msg_from": "Andrew Sullivan <andrew@libertyrms.info>", "msg_from_op": false, "msg_subject": "Re: index usage" } ]
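A sketch of the timestamp approach Tom suggests above, with names modelled on (but distinct from) Timur's table. A plain timestamp column carries real statistics, so the planner can estimate the selectivity of the range condition and pick the index scan without disabling sequence scans or going through the plperl helper.

CREATE TABLE mediumstats_ts (
    period   TIMESTAMP NOT NULL,   -- replaces year/month/day/hour
    -- ... data fields as in the original table ...
    figureid INTEGER   NOT NULL,
    typeid   INTEGER   NOT NULL,
    PRIMARY KEY (figureid, typeid, period)
);
CREATE INDEX mediumstats_ts_time ON mediumstats_ts (period);

SELECT * FROM mediumstats_ts
WHERE  period < '2002-12-15 00:00'
LIMIT  1;

After loading data, a VACUUM ANALYZE gives the planner the row estimates it needs to make that choice on its own.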
[ { "msg_contents": ">> HOWEVER.....look at this:\r\n>> EXPLAIN ANALYZE select batchdetailid from batchdetail where batchdetailid = 27321::bigint;\r\n>> Index Scan using batchdetail_pkey on batchdetail (cost=0.00..4.13 rows=1 width=8) (actual time=0.03..0.03 rows=1 loops=1)\r\n>> Index Cond: (batchdetailid = 27321::bigint)\r\n>> Total runtime: 0.07 msec\r\n\r\n> Jeff Trout wrote:\r\n> We had this happen to us - we had a serial8 column (int8) and our query\r\n> was straight forward where id = 12345; which ran craptacularly. After\r\n> much head banging and cursing I had tried where id = '12345' and it\r\n> magically worked. I think the parser is interpreting a \"number\" to be an\r\n> int4 instead of int8. (instead of quotes you can also cast via\r\n> 12345::int8 like you did)\r\n\r\n> Perhaps this should go on the TODO - when one side is an int8 and the\r\n> other is a literal number assume the number to be int8 instead of int4?\r\n\r\nIt seems to me that this should absolutely go on the TODO list. Why does the planner require an explicit cast when the implicit cast is so obvious? Does Oracle do this? I can assure you that MSSQL does not. \r\n \r\nIf getting more people to migrate to PostgreSQL is a major goal these days, it's got to be relatively easy. I think that almost everyone coming from a MSSQL or Access background is going to have big problems with this. And the other issue of the JOIN syntax constraining the planner - you've got to be able to turn that off too. I've been writing SQL queries for 10 years in FoxPro, Access, SQL Server, MySQL, and Sybase. I have never come across this very confusing \"feature\" until now. \r\n \r\nHow do we go about voting an issue onto the TODO list? These two get my vote for sure!\r\n \r\nRoman\r\n", "msg_date": "Fri, 17 Jan 2003 06:48:28 -0800", "msg_from": "\"Roman Fail\" <rfail@posportal.com>", "msg_from_op": true, "msg_subject": "Implicit casting and JOIN syntax constraints" }, { "msg_contents": "On Fri, Jan 17, 2003 at 06:48:28AM -0800, Roman Fail wrote:\n> It seems to me that this should absolutely go on the TODO list. \n> Why does the planner require an explicit cast when the implicit\n> cast is so obvious? Does Oracle do this? I can assure you that\n> MSSQL does not.\n\nThe reason it happens is because of the flexible datatype system in\nPostgreSQL. Because it's easy to add a datatype, you pay in other\nways. The problem is coming up with a nice, clean set of rules for\ncoercion. See the link that Tom Lane posted, and the thousands of\nother discussions around this in the archives. Yes, it's a pain. \nEveryone knows that. A complete solution is what's missing.\n\n> too. I've been writing SQL queries for 10 years in FoxPro, Access,\n> SQL Server, MySQL, and Sybase. I have never come across this very\n> confusing \"feature\" until now.\n\nWell, there are differences between every system. Indeed, the \"SQL\"\nof MySQL is so far from anything resembling the standard that one\ncould argue it doesn't comply at all. You're right that it means a\nsteep learning curve for some things, and the problems can be\nfrustrating. But that doesn't mean you want to throw the baby out\nwith the bathwater. The ability to give the planner hints through\nthe JOIN syntax is, frankly, a real help when you're faced with\ncertain kinds of performance problems. Some systems don't give you a\nknob to tune there at all. Is it different from other systems? \nSure. Is that automatically a reason to pitch the feature? No. 
\n(Further discussion of this probably belongs on -general, if\nanywhere, by the way.)\n\nA\n\n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<andrew@libertyrms.info> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Fri, 17 Jan 2003 10:41:10 -0500", "msg_from": "Andrew Sullivan <andrew@libertyrms.info>", "msg_from_op": false, "msg_subject": "Re: Implicit casting and JOIN syntax constraints" } ]
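On the JOIN-syntax point raised in this thread: in 7.3 the two spellings below return the same rows but are not the same to the planner. Table and column names are illustrative only.

-- comma-list form: the planner searches join orders freely
SELECT d.batchdetailid
FROM   tranheader t, batchheader h, batchdetail d
WHERE  t.tranheaderid = h.tranheaderid
AND    h.batchid      = d.batchid
AND    t.trandate     > '2002-12-15';

-- explicit JOIN form: on 7.3 the nesting written here is the join order the
-- planner uses, which is the planner "hint" Andrew refers to; reorder the
-- JOINs to change the plan
SELECT d.batchdetailid
FROM   tranheader t
JOIN   batchheader h ON t.tranheaderid = h.tranheaderid
JOIN   batchdetail d ON h.batchid      = d.batchid
WHERE  t.trandate > '2002-12-15';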
[ { "msg_contents": "\nCan you do\nselect table2.f1, table1.name\nfrom table1,table2\nwhere table1.id =\ntable 2.id and table2.id = 2\nGROUP BY table2.f1, table1.name;\n\n\nPatrick Hatcher\nMacys.Com\nLegacy Integration Developer\n415-422-1610 office\nHatcherPT - AIM\n\n\n\n\n \n Noah Silverman \n <noah@allresearch.com> To: pgsql-performance@postgresql.org \n Sent by: cc: \n pgsql-performance-owner@post Subject: [PERFORM] Strange Join question \n gresql.org \n \n \n 01/17/2003 09:28 AM \n \n \n\n\n\n\nHi,\n\nI have a challenging (for me) SQL question:\n\nTwo tables\n(Note: these are fictitious, the real tables actually make sense, so no\nneed to re-design our table structure)\n\nTable 1\nid | name | count\n------------------------\n1 | foo | 10\n1 | foo | 20\n2 | bar | 100\n\n\nTable 2\nid | f1 | f2 | t1ref\n-----------------------\n1 | 10 | 20 | 1\n2 | 50 | 40 | 2\n\n\nThe question:\n\nI want to do the following select:\nselect table2.f1, table1.name from table1,table2 where table1.id =\ntable 2.id and table2.id = 2;\n\nThe problem is that I really only need the name from table2 returned\nonce. With this query, I get two records back. Clearly this is\nbecause of the join that I am doing. Is there a different way to\nperform this join, so that I only get back ONE record from table1 that\nmatches?\n\nThanks,\n\n-Noah\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 2: you can get off all lists at once with the unregister command\n (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n\n\n\n\n", "msg_date": "Fri, 17 Jan 2003 09:38:42 -0800", "msg_from": "\"Patrick Hatcher\" <PHatcher@macys.com>", "msg_from_op": true, "msg_subject": "Re: Strange Join question" } ]
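Two ways to collapse the duplicate rows in Noah's example, shown here with table2.id = 1 since that is the id for which table1 holds two matching rows. SELECT DISTINCT works when the duplicates are identical in every selected column; if they differ elsewhere (count 10 versus 20), an aggregate in Patrick's GROUP BY form decides which value survives.

SELECT DISTINCT table2.f1, table1.name
FROM   table1, table2
WHERE  table1.id = table2.id
AND    table2.id = 1;

SELECT table2.f1, table1.name, max(table1."count") AS max_count
FROM   table1, table2
WHERE  table1.id = table2.id
AND    table2.id = 1
GROUP  BY table2.f1, table1.name;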
[ { "msg_contents": "I have a table which is rather wide (~800 columns) and consists of a few \ncolumns of identifying data (run time, channel and such) and up to several \nhundred columns of collected data (no, normalization does not suggest putting \ncollected data in another table - collected item 1 always corresponds to \ncollected item 1 but is completely different than item 3).\n\nMy test table is very short (62 rows) but in production would grow by several \nthousand rows per day. Unfortunately if my test data is correct, performance \non wide selects is so bad that it will render the system unusable.\n\nHere's the test. I have created two versions of the table - one stores the \ncollected data in an array of text and the other stores the data in \nindividual columns, no joins, no indexes. Times are averages of many runs - \nthe times varied very little and the data is small enough that I'm sure it \nwas served from RAM. Postgres CPU utilization observed on the longer runs was \n98-99%. Changing the output format didn't seem to change things significantly.\n\nTimes for selecting all the columns in the table:\nselect * from columnversion;\n8,000 ms\n\nselect * from arrayversion;\n110 ms\n\nselect * from arraytocolumnview (data in the array version but converted to \ncolumns in the view)\n10,000 ms\n\nTimes to select a single column in a table:\nselect runstarttime from columversion;\n32 ms\n\nselect runstarttime from arrayversion;\n6 ms\n\nSo the question is, does it seem reasonable that a query on fundamentally \nidentical data should take 70-90 times as long when displayed as individual \ncolumns vs. when output as a raw array and, more imporantly, what can I do to \nget acceptable performance on this query?\n\nCheers,\nSteve\n", "msg_date": "Fri, 17 Jan 2003 11:37:26 -0800", "msg_from": "Steve Crawford <scrawford@pinpointresearch.com>", "msg_from_op": true, "msg_subject": "Terrible performance on wide selects" }, { "msg_contents": "Steve Crawford <scrawford@pinpointresearch.com> writes:\n> So the question is, does it seem reasonable that a query on fundamentally \n> identical data should take 70-90 times as long when displayed as individual \n> columns vs. when output as a raw array and, more imporantly, what can I do to\n> get acceptable performance on this query?\n\nThere are undoubtedly some places that are O(N^2) in the number of\ntargetlist items. Feel free to do some profiling to identify them.\nIt probably won't be real hard to fix 'em once identified.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 17 Jan 2003 18:06:52 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Terrible performance on wide selects " }, { "msg_contents": "Steve Crawford sent me some profiling results for queries involving wide\ntuples (hundreds of columns).\n\n> Done, results attached. 
nocachegetattr seems to be the likely suspect.\n\nYipes, you can say that again.\n\n % cumulative self self total \n time seconds seconds calls ms/call ms/call name \n 93.38 26.81 26.81 885688 0.03 0.03 nocachegetattr\n\n 0.00 0.00 1/885688 heapgettup [159]\n 0.00 0.00 1/885688 CatalogCacheComputeTupleHashValue [248]\n 0.00 0.00 5/885688 SearchCatCache [22]\n 13.40 0.00 442840/885688 ExecEvalVar [20]\n 13.40 0.00 442841/885688 printtup [12]\n[11] 93.4 26.81 0.00 885688 nocachegetattr [11]\n\n\nHalf of the calls are coming from printtup(), which seems relatively easy\nto fix.\n\n\t/*\n\t * send the attributes of this tuple\n\t */\n\tfor (i = 0; i < natts; ++i)\n\t{\n\t\t...\n\t\torigattr = heap_getattr(tuple, i + 1, typeinfo, &isnull);\n\t\t...\n\t}\n\nThe trouble here is that in the presence of variable-width fields,\nheap_getattr requires a linear scan over the tuple --- and so the total\ntime spent in it is O(N^2) in the number of fields.\n\nWhat we could do is reinstitute heap_deformtuple() as the inverse of\nheap_formtuple() --- but make it extract Datums for all the columns in\na single pass over the tuple. This would reduce the time in printtup()\nfrom O(N^2) to O(N), which would pretty much wipe out that part of the\nproblem.\n\nThe other half of the calls are coming from ExecEvalVar, which is a\nharder problem to solve, since those calls are scattered all over the\nplace. It's harder to see how to get them to share work. Any ideas\nout there?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 22 Jan 2003 18:14:30 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Terrible performance on wide selects " } ]
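Steve's timings in this thread already point at the practical workaround while the O(N^2) target-list paths are being improved: the per-row cost tracks the number of output columns, so name only the columns a report actually needs. Column names other than runstarttime are illustrative.

-- narrow target list: his single-column case ran in about 32 ms
SELECT runstarttime, channel, item001, item017
FROM   columnversion;

-- full-width select: every one of the ~800 columns goes through the
-- per-attribute lookup seen in the printtup/ExecEvalVar profile (about 8000 ms)
SELECT * FROM columnversion;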
[ { "msg_contents": "Hi,\n\nWill there be any advantages to running Pg on a 64-bit CPU rather\nthan 32-bit?\n\nThe recent discussions in the \"7.3.1 New install, large queries are\nslow\" thread make me think not, since Pg says that the OS can manage\nbuffers better:\n<QUOTE From=\"Tom Lane <tgl@sss.pgh.pa.us>\">\nYeah, but isn't that theory a hangover from pre-Unix operating systems?\nIn all modern Unixen, you can expect the kernel to make use of any spare\nRAM for disk buffer cache --- and that behavior makes it pointless for\nPostgres to try to do large amounts of its own buffering.\n\nHaving a page in our own buffer instead of kernel buffer saves a context\nswap to access the page, but it doesn't save I/O, so the benefit is a\nlot less than you might think. I think there's seriously diminishing\nreturns in pushing shared_buffers beyond a few thousand, and once you\nget to the point where it distorts the kernel's ability to manage\nmemory for processes, you're really shooting yourself in the foot.\n</QUOTE>\n\nAlso, would int8 then become a more \"natural\" default integer, rather\nthan the int4 that all of us millions of i386, PPC & Sparc users use?\n\nThanks,\nRon\n-- \n+------------------------------------------------------------+\n| Ron Johnson, Jr. mailto:ron.l.johnson@cox.net |\n| Jefferson, LA USA http://members.cox.net/ron.l.johnson |\n| |\n| \"Basically, I got on the plane with a bomb. Basically, I |\n| tried to ignite it. Basically, yeah, I intended to damage |\n| the plane.\" |\n| RICHARD REID, who tried to blow up American Airlines |\n| Flight 63 |\n+------------------------------------------------------------+\n\n", "msg_date": "17 Jan 2003 23:29:25 -0600", "msg_from": "Ron Johnson <ron.l.johnson@cox.net>", "msg_from_op": true, "msg_subject": "x86-64 and PostgreSQL" }, { "msg_contents": "Ron Johnson <ron.l.johnson@cox.net> writes:\n> Will there be any advantages to running Pg on a 64-bit CPU rather\n> than 32-bit?\n\nNot so's you'd notice. PG is designed to be cross-platform, and at\nthe moment that means 32-bit-centric. There's been occasional talk\nof improving the performance of float8 and int8 types on 64-bit\nmachines, but so far it's only idle talk; and in any case I think\nthat performance improvements for those two datatypes wouldn't have\nmuch effect for average applications.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 18 Jan 2003 01:44:35 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: x86-64 and PostgreSQL " }, { "msg_contents": "On Sat, 2003-01-18 at 00:44, Tom Lane wrote:\n> Ron Johnson <ron.l.johnson@cox.net> writes:\n> > Will there be any advantages to running Pg on a 64-bit CPU rather\n> > than 32-bit?\n> \n> Not so's you'd notice. PG is designed to be cross-platform, and at\n> the moment that means 32-bit-centric. There's been occasional talk\n> of improving the performance of float8 and int8 types on 64-bit\n> machines, but so far it's only idle talk; and in any case I think\n> that performance improvements for those two datatypes wouldn't have\n> much effect for average applications.\n\nThat's kinda what I expected.\n\nThe ability to /relatively/ inexpensively get a box chock full of\nRAM couldn't hurt, though...\n\nPutting a 12GB database on a box with 8GB RAM has to make it run\npretty fast. (As long as you aren't joining mismatched types!!!)\n\n-- \n+------------------------------------------------------------+\n| Ron Johnson, Jr. 
mailto:ron.l.johnson@cox.net |\n| Jefferson, LA USA http://members.cox.net/ron.l.johnson |\n| |\n| \"Basically, I got on the plane with a bomb. Basically, I |\n| tried to ignite it. Basically, yeah, I intended to damage |\n| the plane.\" |\n| RICHARD REID, who tried to blow up American Airlines |\n| Flight 63 |\n+------------------------------------------------------------+\n\n", "msg_date": "18 Jan 2003 00:55:04 -0600", "msg_from": "Ron Johnson <ron.l.johnson@cox.net>", "msg_from_op": true, "msg_subject": "Re: x86-64 and PostgreSQL" }, { "msg_contents": "On Fri, Jan 17, 2003 at 11:29:25PM -0600, Ron Johnson wrote:\n> Hi,\n> \n> Will there be any advantages to running Pg on a 64-bit CPU rather\n> than 32-bit?\n\nIn my experience, not really. We use SPARCs under Solaris 7 and,\nnow, 8. We haven't found any terribly obvious advantages with the\nsystem compiled as a 64 bit app, but we _did_ find problems with the\n64 bit libraries combined with gcc. As a result of that and\npressures to get working on some other things, we stopped testing\nPostgres as a 64 bit app on Solaris. We haven't done any work on\nSolaris 8 with it, and that system is a little more mature in the\n64-bit-support department, so when I have a chance do to more\ninvestigation, I will.\n\n> Also, would int8 then become a more \"natural\" default integer, rather\n> than the int4 that all of us millions of i386, PPC & Sparc users use?\n\nI think the problem with int8s in a lot of cases has more to do with\ntyper coercion. So at least in systems < 7.4, I'm not sure you'll\nsee a big win, unless you make sure to cast everything correctly.\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<andrew@libertyrms.info> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Sat, 18 Jan 2003 11:39:35 -0500", "msg_from": "Andrew Sullivan <andrew@libertyrms.info>", "msg_from_op": false, "msg_subject": "Re: x86-64 and PostgreSQL" }, { "msg_contents": "On 18 Jan 2003 at 11:39, Andrew Sullivan wrote:\n> On Fri, Jan 17, 2003 at 11:29:25PM -0600, Ron Johnson wrote:\n> > Hi,\n> > \n> > Will there be any advantages to running Pg on a 64-bit CPU rather\n> > than 32-bit?\n> \n> In my experience, not really. We use SPARCs under Solaris 7 and,\n> now, 8. We haven't found any terribly obvious advantages with the\n> system compiled as a 64 bit app, but we _did_ find problems with the\n> 64 bit libraries combined with gcc. As a result of that and\n> pressures to get working on some other things, we stopped testing\n> Postgres as a 64 bit app on Solaris. We haven't done any work on\n> Solaris 8 with it, and that system is a little more mature in the\n> 64-bit-support department, so when I have a chance do to more\n> investigation, I will.\n\nI remember reading in one of the HP guides regarding 64 bit that 64 bit is a \ntool provided for applications. In general no app. should be 64 bit unless \nrequired. In fact they advice that fastest performance one can get is by \nrunning 32 bit app. 
on 64 bit machine because registers are wide and can be \nfilled in is less number of fetches.\n\nSounds reasonable to me.\n\n\n\nBye\n Shridhar\n\n--\nTip of the Day:\tNever fry bacon in the nude.\t[Correction: always fry bacon in \nthe nude; you'll learn not to burn it]\n\n", "msg_date": "Mon, 20 Jan 2003 11:59:59 +0530", "msg_from": "\"Shridhar Daithankar\" <shridhar_daithankar@persistent.co.in>", "msg_from_op": false, "msg_subject": "Re: x86-64 and PostgreSQL" }, { "msg_contents": "On Mon, 2003-01-20 at 00:29, Shridhar Daithankar wrote:\n> On 18 Jan 2003 at 11:39, Andrew Sullivan wrote:\n> > On Fri, Jan 17, 2003 at 11:29:25PM -0600, Ron Johnson wrote:\n> > > Hi,\n> > > \n> > > Will there be any advantages to running Pg on a 64-bit CPU rather\n> > > than 32-bit?\n> > \n> > In my experience, not really. We use SPARCs under Solaris 7 and,\n> > now, 8. We haven't found any terribly obvious advantages with the\n> > system compiled as a 64 bit app, but we _did_ find problems with the\n> > 64 bit libraries combined with gcc. As a result of that and\n> > pressures to get working on some other things, we stopped testing\n> > Postgres as a 64 bit app on Solaris. We haven't done any work on\n> > Solaris 8 with it, and that system is a little more mature in the\n> > 64-bit-support department, so when I have a chance do to more\n> > investigation, I will.\n> \n> I remember reading in one of the HP guides regarding 64 bit that 64 bit is a \n> tool provided for applications. In general no app. should be 64 bit unless \n> required. In fact they advice that fastest performance one can get is by \n> running 32 bit app. on 64 bit machine because registers are wide and can be \n> filled in is less number of fetches.\n> \n> Sounds reasonable to me.\n\nDou you, the programmer or SysAdmin, always know when 64 bits is\nneeded?\n\nTake, for simple example, a memcpy() of 1024 bytes. Most CPUs don't\nhave direct core-core copy instruction. (The RISC philosophy, after\nall, is load-and-store.) A 32-bit executable would need 1024/32 = 32\npairs of load-store operations, while a 64-bit executable would only\nneed 16. Yes, L1 & L2 caching would help some, but not if you are\nmoving huge amounts of data...\n\n-- \n+------------------------------------------------------------+\n| Ron Johnson, Jr. mailto:ron.l.johnson@cox.net |\n| Jefferson, LA USA http://members.cox.net/ron.l.johnson |\n| |\n| \"Basically, I got on the plane with a bomb. Basically, I |\n| tried to ignite it. Basically, yeah, I intended to damage |\n| the plane.\" |\n| RICHARD REID, who tried to blow up American Airlines |\n| Flight 63 |\n+------------------------------------------------------------+\n\n", "msg_date": "20 Jan 2003 03:52:15 -0600", "msg_from": "Ron Johnson <ron.l.johnson@cox.net>", "msg_from_op": true, "msg_subject": "Re: x86-64 and PostgreSQL" }, { "msg_contents": "On 20 Jan 2003 at 3:52, Ron Johnson wrote:\n\n> On Mon, 2003-01-20 at 00:29, Shridhar Daithankar wrote:\n> > I remember reading in one of the HP guides regarding 64 bit that 64 bit is a \n> > tool provided for applications. In general no app. should be 64 bit unless \n> > required. In fact they advice that fastest performance one can get is by \n> > running 32 bit app. on 64 bit machine because registers are wide and can be \n> > filled in is less number of fetches.\n> > \n> > Sounds reasonable to me.\n> \n> Dou you, the programmer or SysAdmin, always know when 64 bits is\n> needed?\n> \n> Take, for simple example, a memcpy() of 1024 bytes. 
Most CPUs don't\n> have direct core-core copy instruction. (The RISC philosophy, after\n> all, is load-and-store.) A 32-bit executable would need 1024/32 = 32\n> pairs of load-store operations, while a 64-bit executable would only\n> need 16. Yes, L1 & L2 caching would help some, but not if you are\n> moving huge amounts of data...\n\nWell, that wasn't intended application aera of that remark. I was more on the \nline of, I have 16GB data of double precision which I need to shuffle thr. once \nin a while, should I use 32 bit or 64 bit?\n\nSomething like that.. bit more macroscopic.\n\nI work on an application which is 32 bit on HP-UX 64 bit. It handles more than \n15GB of data at some sites pretty gracefully..No need to move to 64 bit as \nyet..\n\nBye\n Shridhar\n\n--\nKramer's Law:\tYou can never tell which way the train went by looking at the \ntracks.\n\n", "msg_date": "Mon, 20 Jan 2003 15:29:42 +0530", "msg_from": "\"Shridhar Daithankar\" <shridhar_daithankar@persistent.co.in>", "msg_from_op": false, "msg_subject": "Re: x86-64 and PostgreSQL" }, { "msg_contents": "On Mon, 2003-01-20 at 03:59, Shridhar Daithankar wrote:\n> On 20 Jan 2003 at 3:52, Ron Johnson wrote:\n> \n> > On Mon, 2003-01-20 at 00:29, Shridhar Daithankar wrote:\n> > > I remember reading in one of the HP guides regarding 64 bit that 64 bit is a \n> > > tool provided for applications. In general no app. should be 64 bit unless \n> > > required. In fact they advice that fastest performance one can get is by \n> > > running 32 bit app. on 64 bit machine because registers are wide and can be \n> > > filled in is less number of fetches.\n> > > \n> > > Sounds reasonable to me.\n> > \n> > Dou you, the programmer or SysAdmin, always know when 64 bits is\n> > needed?\n> > \n> > Take, for simple example, a memcpy() of 1024 bytes. Most CPUs don't\n> > have direct core-core copy instruction. (The RISC philosophy, after\n> > all, is load-and-store.) A 32-bit executable would need 1024/32 = 32\n> > pairs of load-store operations, while a 64-bit executable would only\n> > need 16. Yes, L1 & L2 caching would help some, but not if you are\n> > moving huge amounts of data...\n> \n> Well, that wasn't intended application aera of that remark. I was more on the \n> line of, I have 16GB data of double precision which I need to shuffle thr. once \n> in a while, should I use 32 bit or 64 bit?\n> \n> Something like that.. bit more macroscopic.\n> \n> I work on an application which is 32 bit on HP-UX 64 bit. It handles more than \n> 15GB of data at some sites pretty gracefully..No need to move to 64 bit as \n> yet..\n\nMaybe you wouldn't get a speed boost on HP-UX, but I bet you would on\nx86-64. Why? 64 bit programs get to use the new registers that AMD\ncreated just for 64 bit mode. Thus, the compiler should, hopefully,\nbe able to generate more efficient code.\n\nAlso, since (at least on the gcc-3.2 compiler) a \"long\" and \"int\" are\nstill 32 bits (64 bit scalars are of type \"long long\"), existing \nprograms (that all use \"long\" and \"int\") will still only fill up 1/2\nof each register (attaining the speed that HP alleges), but, as I\nmentioned above, would be able to use the extra registers if recompiled\ninto native 64-bit apps...\n\n$ cat test.c\n#include <stdio.h>\n#include <stdlib.h>\nint main (int argc, char **argv)\n{\n printf(\"%d %d %d\\n\", sizeof(long), \n sizeof(int), \n sizeof(long long));\n};\n$ gcc-3.2 test.c && ./a.out\n4 4 8\n\n-- \n+------------------------------------------------------------+\n| Ron Johnson, Jr. 
mailto:ron.l.johnson@cox.net |\n| Jefferson, LA USA http://members.cox.net/ron.l.johnson |\n| |\n| \"Basically, I got on the plane with a bomb. Basically, I |\n| tried to ignite it. Basically, yeah, I intended to damage |\n| the plane.\" |\n| RICHARD REID, who tried to blow up American Airlines |\n| Flight 63 |\n+------------------------------------------------------------+\n\n", "msg_date": "20 Jan 2003 06:06:04 -0600", "msg_from": "Ron Johnson <ron.l.johnson@cox.net>", "msg_from_op": true, "msg_subject": "Re: x86-64 and PostgreSQL" }, { "msg_contents": "On 20 Jan 2003 at 6:06, Ron Johnson wrote:\n\n> On Mon, 2003-01-20 at 03:59, Shridhar Daithankar wrote:\n> > On 20 Jan 2003 at 3:52, Ron Johnson wrote:\n> > I work on an application which is 32 bit on HP-UX 64 bit. It handles more than \n> > 15GB of data at some sites pretty gracefully..No need to move to 64 bit as \n> > yet..\n> \n> Maybe you wouldn't get a speed boost on HP-UX, but I bet you would on\n> x86-64. Why? 64 bit programs get to use the new registers that AMD\n> created just for 64 bit mode. Thus, the compiler should, hopefully,\n> be able to generate more efficient code.\n\nWell, that is not the issue exactly. The app. is commercial with oracle under \nit and it si going nowhere but oracle/HP-UX for its remaining lifecycle.. I was \njust quoting it as an example.\n \n> Also, since (at least on the gcc-3.2 compiler) a \"long\" and \"int\" are\n> still 32 bits (64 bit scalars are of type \"long long\"), existing \n> programs (that all use \"long\" and \"int\") will still only fill up 1/2\n> of each register (attaining the speed that HP alleges), but, as I\n> mentioned above, would be able to use the extra registers if recompiled\n> into native 64-bit apps...\n\nI am not too sure, but most 64bit migration guides talk of ILP paradigm that is \ninteger/long/pointer with later two going to 8 bits. If gcc puts long at 4 \nbytes on a 64 bit platform, it is wrong.\n\nBye\n Shridhar\n\n--\nPainting, n.:\tThe art of protecting flat surfaces from the weather, and\t\nexposing them to the critic.\t\t-- Ambrose Bierce\n\n", "msg_date": "Mon, 20 Jan 2003 17:41:59 +0530", "msg_from": "\"Shridhar Daithankar\" <shridhar_daithankar@persistent.co.in>", "msg_from_op": false, "msg_subject": "Re: x86-64 and PostgreSQL" }, { "msg_contents": ">>>>> \"Ron\" == Ron Johnson <ron.l.johnson@cox.net> writes:\n\nRon> Also, since (at least on the gcc-3.2 compiler) a \"long\" and \"int\"\nRon> are still 32 bits (64 bit scalars are of type \"long long\"),\n\nAccording to the draft x86-64 psABI�, which is to become the \nSystem V psABI for the x86-64 architecture, sizeof(long) == 8.\n\nThis does seem necessary as most code tends to presume that an\nunsigned long can hold a pointer....\n\n-JimC\n\n� http://www.x86-64.org/abi.pdf\n\n", "msg_date": "21 Jan 2003 03:21:03 -0500", "msg_from": "\"James H. Cloos Jr.\" <cloos@jhcloos.com>", "msg_from_op": false, "msg_subject": "Re: x86-64 and PostgreSQL" } ]
[ { "msg_contents": ">> It sort of feels like a magic moment. I went back and looked through a\r\n>> lot of the JOIN columns and found that I was mixing int4 with int8 in a\r\n>> lot of them.\r\n\r\n>There is note about it in the docs:\r\n>http://www.postgresql.org/idocs/index.php?datatype.html#DATATYPE-INT\r\n>\r\n>I don't know if this is in a faq anywhere, but it should be. I myself have\r\n>helped a number of persons with this. Every once in a while there come\r\n>someone in to the #postgresql irc channel with the exact same problem. \r\n>Usually they leave the channel very happy, when their queries take less\r\n>then a second instead of minutes.\r\n>\r\n>--\r\n>/Dennis\r\n\r\nI'm really surprised that this issue doesn't pop up all the time. As the community grows, I think it will start to. I came very, very close to dropping PostgreSQL entirely because of it. Hopefully the TODO issue on implicit type casting will move closer to the top of the hackers list. But I'm just a beggar so I won't pretend to be a chooser.\r\n \r\nBack to my original problems: I re-created everything from scratch and made sure there are no int8's in my entire database. I found a few more places that I could create useful indexes as well. I didn't get to test it over the weekend, but today I played with it for several hours and could not get the queries to perform much better than last week. I was about ready to give up, throw Postgres in the junk pile, and get out the MSSQL CD. \r\n \r\nLuckily, an unrelated post on one of the lists mentioned something about ANALYZE, and I realized that I had forgotten to run it after all the new data was imported (although I did remember a VACUUM FULL). After running ANALYZE, I started getting amazing results.....like a query that took 20 minutes last week was taking only 6 milliseconds now. That kicks the MSSQL server's ass all over the map (as I had originally expected it would!!!).\r\n \r\nSo things are working pretty good now....and it looks like the whole problem was the data type mismatch issue. I hate to point fingers, but the pgAdminII Migration Wizard forces all your primary keys to be int8 even if you set the Type Map to int4. The second time through I recognized this and did a pg_dump so I could switch everything to int4. Now I'm going to write some minor mods in my Java programs for PGSQL-syntax compatibility, and will hopefully have the PostgreSQL server in production shortly. \r\n \r\nTHANK YOU to everyone on pgsql-performance for all your help. You are the reason that I'll be a long term member of the Postgres community. I hope that I can assist someone else out in the future. \r\n \r\nRoman Fail\r\nSr. Web Application Developer\r\nPOS Portal, Inc.\r\n \r\n \r\n", "msg_date": "Mon, 20 Jan 2003 13:32:42 -0800", "msg_from": "\"Roman Fail\" <rfail@posportal.com>", "msg_from_op": true, "msg_subject": "Re: 7.3.1 New install, large queries are slow" }, { "msg_contents": "Roman, \n\n> I'm really surprised that this issue doesn't pop up all the time. As\n> the community grows, I think it will start to. \n\nActually, in the general sense of intelligent casting, the issue *does*\ncome up all the time. Unfortunately, this is one of those issues that\nrequires both an inspired solution and program-wide overhaul work to\nfix. 
In fact, in the FTP achives you can find an alternate version of\nPostgres (7.1 I think) where someone tried to fix the \"stupid casting\"\nissue and succeeded in making Postgres crash and burn instead.\n\n> Luckily, an unrelated post on one of the lists mentioned something\n> about ANALYZE, and I realized that I had forgotten to run it after\n> all the new data was imported (although I did remember a VACUUM\n> FULL). After running ANALYZE, I started getting amazing\n> results.....like a query that took 20 minutes last week was taking\n> only 6 milliseconds now. That kicks the MSSQL server's ass all over\n> the map (as I had originally expected it would!!!).\n\nThat's great!\n\n> So things are working pretty good now....and it looks like the whole\n> problem was the data type mismatch issue. I hate to point fingers,\n> but the pgAdminII Migration Wizard forces all your primary keys to be\n> int8 even if you set the Type Map to int4. \n\nSo? Send Dave Page (address at pgadmin.postgresql.org) a quick note\ndocumenting the problem. I'm sure he'll patch it, or at least fix it\nfor PGAdmin III.\n \n> THANK YOU to everyone on pgsql-performance for all your help. You\n> are the reason that I'll be a long term member of the Postgres\n> community. I hope that I can assist someone else out in the future.\n\nYou're welcome! If you can get your boss to authorize it, the\nAdvocacy page (advocacy.postgresql.org) could use some more business\ntestimonials. \n\n-Josh Berkus\n\n", "msg_date": "Mon, 20 Jan 2003 14:14:46 -0800", "msg_from": "\"Josh Berkus\" <josh@agliodbs.com>", "msg_from_op": false, "msg_subject": "Re: 7.3.1 New install, large queries are slow" } ]
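The two fixes that resolved the thread above -- keeping integer key types consistent (or casting literals to the column's type) and running ANALYZE after a bulk load -- can be seen directly with EXPLAIN. A minimal sketch, assuming a hypothetical table with an int8 (BIGINT) primary key like the ones the migration wizard produced; the table and column names here are illustrative, not taken from the poster's schema:

----------------------------------------------------------------------
-- Hypothetical table standing in for a migrated table with an int8 key.
CREATE TABLE example_batch (
    id     BIGINT PRIMARY KEY,
    amount NUMERIC(12,2)
);

-- On 7.2/7.3 a bare integer literal is typed int4, so the int8 index
-- is not considered and the planner falls back to a sequential scan:
EXPLAIN SELECT * FROM example_batch WHERE id = 12345;

-- Quoting or casting the literal lets it match the int8 index:
EXPLAIN SELECT * FROM example_batch WHERE id = '12345';
EXPLAIN SELECT * FROM example_batch WHERE id = 12345::bigint;

-- And refresh planner statistics after any large import:
ANALYZE example_batch;
----------------------------------------------------------------------

The same type-matching rule applies to join columns on these releases: joining an int4 column against an int8 column keeps the indexes from being used, which is why recreating the keys with matching types helped here.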
[ { "msg_contents": "> Jochem van Dieten wrote:\r\n> Just out of curiosity and for archiving purposes, could you post the new\r\n> EXPLAIN ANALYZE output to the list?\r\n\r\nTo reiterate, the batchdetail table is 24 million rows, batchheader is 2.7 million, and purc1 is 1 million. The rest are 2000 rows or less. I think having the 6-disk RAID-10 devoted to /usr/local/pgsql/data helps out a little here. I did try changing the WHERE clauses to radically different values and it was still just as fast. This is the original query I was working with (plus suggested modifications from the list):\r\n\r\nEXPLAIN ANALYZE\r\nSELECT ss.batchdate, ss.batchdetailid, ss.bankno, ss.trandate, ss.tranamount,\r\nss.submitinterchange, ss.authamount, ss.authno, ss.cardtypeid, ss.mcccode,\r\nss.name AS merchantname, ss.cardtype, ss.merchid,\r\np1.localtaxamount, p1.productidentifier, dr.avsresponse,\r\ncr.checkoutdate, cr.noshowindicator, ck.checkingacctno,\r\nck.abaroutingno, ck.checkno\r\nFROM\r\n (SELECT b.batchdate, d.batchdetailid, t.bankno, d.trandate, d.tranamount,\r\n d.submitinterchange, d.authamount, d.authno, d.cardtypeid, d.mcccode,\r\n m.name, c.cardtype, m.merchid\r\n FROM tranheader t, batchheader b, merchants m, cardtype c, batchdetail d\r\n WHERE t.tranheaderid=b.tranheaderid\r\n AND m.merchantid=b.merchantid\r\n AND d.batchid=b.batchid\r\n AND c.cardtypeid=d.cardtypeid\r\n AND t.clientid = 6\r\n AND d.tranamount BETWEEN 500.0 AND 700.0\r\n AND b.batchdate > '2002-12-15'\r\n AND m.merchid = '701252267') ss\r\n LEFT JOIN purc1 p1 on p1.batchdetailid=ss.batchdetailid\r\n LEFT JOIN direct dr ON dr.batchdetailid = ss.batchdetailid\r\n LEFT JOIN carrental cr ON cr.batchdetailid = ss.batchdetailid\r\n LEFT JOIN checks ck ON ck.batchdetailid = ss.batchdetailid\r\nORDER BY ss.batchdate DESC\r\nLIMIT 50\r\n\r\nLimit (cost=1351.93..1351.93 rows=1 width=261) (actual time=5.34..5.36 rows=8 loops=1)\r\n -> Sort (cost=1351.93..1351.93 rows=1 width=261) (actual time=5.33..5.34 rows=8 loops=1)\r\n Sort Key: b.batchdate\r\n -> Nested Loop (cost=0.01..1351.92 rows=1 width=261) (actual time=1.61..5.24 rows=8 loops=1)\r\n -> Hash Join (cost=0.01..1346.99 rows=1 width=223) (actual time=1.58..5.06 rows=8 loops=1)\r\n Hash Cond: (\"outer\".batchdetailid = \"inner\".batchdetailid)\r\n -> Hash Join (cost=0.00..1346.98 rows=1 width=210) (actual time=1.21..4.58 rows=8 loops=1)\r\n Hash Cond: (\"outer\".batchdetailid = \"inner\".batchdetailid)\r\n -> Nested Loop (cost=0.00..1346.97 rows=1 width=201) (actual time=0.82..4.05 rows=8 loops=1)\r\n -> Nested Loop (cost=0.00..1343.84 rows=1 width=182) (actual time=0.78..3.82 rows=8 loops=1)\r\n Join Filter: (\"inner\".cardtypeid = \"outer\".cardtypeid)\r\n -> Nested Loop (cost=0.00..1342.62 rows=1 width=172) (actual time=0.74..3.35 rows=8 loops=1)\r\n -> Nested Loop (cost=0.00..539.32 rows=4 width=106) (actual time=0.17..1.61 rows=26 loops=1)\r\n -> Nested Loop (cost=0.00..515.48 rows=5 width=94) (actual time=0.13..1.01 rows=26 loops=1)\r\n -> Index Scan using merchants_ix_merchid_idx on merchants m (cost=0.00..5.65 rows=1 width=78) (actual time=0.07..0.08 rows=1 loops=1)\r\n Index Cond: (merchid = '701252267'::character varying)\r\n -> Index Scan using batchheader_ix_merchantid_idx on batchheader b (cost=0.00..508.56 rows=20 width=16) (actual time=0.04..0.81 rows=26 loops=1)\r\n Index Cond: (\"outer\".merchantid = b.merchantid)\r\n Filter: (batchdate > '2002-12-15'::date)\r\n -> Index Scan using tranheader_pkey on tranheader t (cost=0.00..5.08 rows=1 width=12) (actual 
time=0.01..0.01 rows=1 loops=26)\r\n Index Cond: (t.tranheaderid = \"outer\".tranheaderid)\r\n Filter: (clientid = 6)\r\n -> Index Scan using batchdetail_ix_batchid_idx on batchdetail d (cost=0.00..186.81 rows=2 width=66) (actual time=0.05..0.06 rows=0 loops=26)\r\n Index Cond: (d.batchid = \"outer\".batchid)\r\n Filter: ((tranamount >= 500.0) AND (tranamount <= 700.0))\r\n -> Seq Scan on cardtype c (cost=0.00..1.10 rows=10 width=10) (actual time=0.00..0.03 rows=10 loops=8)\r\n -> Index Scan using purc1_ix_batchdetailid_idx on purc1 p1 (cost=0.00..3.12 rows=1 width=19) (actual time=0.01..0.01 rows=0 loops=8)\r\n Index Cond: (p1.batchdetailid = \"outer\".batchdetailid)\r\n -> Hash (cost=0.00..0.00 rows=1 width=9) (actual time=0.00..0.00 rows=0 loops=1)\r\n -> Seq Scan on direct dr (cost=0.00..0.00 rows=1 width=9) (actual time=0.00..0.00 rows=0 loops=1)\r\n -> Hash (cost=0.00..0.00 rows=1 width=13) (actual time=0.01..0.01 rows=0 loops=1)\r\n -> Seq Scan on carrental cr (cost=0.00..0.00 rows=1 width=13) (actual time=0.00..0.00 rows=0 loops=1)\r\n -> Index Scan using checks_ix_batchdetailid_idx on checks ck (cost=0.00..4.92 rows=1 width=38) (actual time=0.01..0.01 rows=0 loops=8)\r\n Index Cond: (ck.batchdetailid = \"outer\".batchdetailid)\r\nTotal runtime: 5.89 msec\r\n\r\n\r\n", "msg_date": "Mon, 20 Jan 2003 14:33:25 -0800", "msg_from": "\"Roman Fail\" <rfail@posportal.com>", "msg_from_op": true, "msg_subject": "Re: 7.3.1 New install, large queries are slow" } ]
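For readers trying to map the plan above back to the schema: every "Index Scan using ..." line names an index, and the "Index Cond" line under it shows the column being probed. A sketch of the index definitions implied by that output -- inferred from the index names and conditions in the plan, not copied from the poster's actual DDL:

----------------------------------------------------------------------
-- Inferred from the plan; the real definitions may differ.
CREATE INDEX merchants_ix_merchid_idx      ON merchants (merchid);
CREATE INDEX batchheader_ix_merchantid_idx ON batchheader (merchantid);
CREATE INDEX batchdetail_ix_batchid_idx    ON batchdetail (batchid);
CREATE INDEX purc1_ix_batchdetailid_idx    ON purc1 (batchdetailid);
CREATE INDEX checks_ix_batchdetailid_idx   ON checks (batchdetailid);
-- tranheader_pkey is simply the primary key index on tranheader(tranheaderid).
----------------------------------------------------------------------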
[ { "msg_contents": "subscribe\n\n", "msg_date": "Tue, 21 Jan 2003 09:40:13 +1000", "msg_from": "Rudi Starcevic <rudi@oasis.net.au>", "msg_from_op": true, "msg_subject": "subscribe" } ]
[ { "msg_contents": "I'm trying to get converted over to Postgres from Oracle (Postgres is\nbillions of times more straightforward and pragmatically clean than\nOracle), but I'm having some severe performance problems on what\nnaively appears to be a very straightforward dead-simple test.\n\nThe test is comprised of two parts: a write part which attempts to\naccumulate (sum) numbers by distinct keys, and a read part which\nsearches for keys in the database (some of which will be present, some\nof which will not). In a more realistic scenario, both will be\nhappening all of the time, but we can start off easy.\n\nHowever, performance is terrible: around 39 write transactions/second\nand 69 searches/second. Oracle, by comparison, writes at 314 and\nreads at 395--practically an order of magnitude better performance.\nBoth are using the same hardware (obviously not at the same time)\nwhich is a dual-processor AMD 2000+ with 3GB memory and both oracle\nand postgres loaded on a 105GB ``MD'' striped (no redundancy) 2 SCSI\ndisks running ext3 fs (no special flags) with Linux 2.4.18-10smp.\n\nI actually have seven different schemes for performing the writes\nusing Postgres:\n\n----------------------------------------------------------------------\n\"normal\" C libpq\t\t\t39 t/s\n\"normal\" Perl DBI\t\t\t39 t/s\n\"DBI Prepared Statement\" Perl DBI\t39 t/s\n\"Batching\" Perl DBI\t\t\t45 t/s\n\"arrays\" Perl DBI\t\t\t26 t/s\n\"server-side function\" Perl DBI\t\t39 t/s\n\"server-side trigger\" Perl DBI\t\t39 t/s\n\"normal\" Perl DBI read\t\t\t69 t/s\n\"normal\" Perl DBI for Oracle\t\t314 t/s\n\"normal\" Perl DBI read for Oracle\t395 t/s\n----------------------------------------------------------------------\n\nOnly batching had a statistically significant improvement, and it\nwasn't that major. I couldn't use true Postgres prepared statements\nsince you cannot determine the success/failure of the statements yet.\nI was planning on using arrays as well, but the additional 33%\nperformance impact is not amusing (though I suppose it is only an\nadditional 3% if you consider the 87% performance drop of Postgres\nfrom Oracle).\n\nI'll include all methods in the attached file, but since there was no\nsignificant difference, I'll concentrate on the basic one:\n\nExample table:\n----------------------------------------------------------------------\nCREATE TABLE test (\n val BIGINT PRIMARY KEY, # \"vals\" may be between 0 and 2^32-1\n accum INTEGER\n);\n----------------------------------------------------------------------\n\nBasic algorithm for writes\n----------------------------------------------------------------------\n while (<>)\n {\n chomp;\n @A = split;\n\n if (dosql($dbh, \"UPDATE test SET accum = accum + $A[1] WHERE val = '$A[0]';\",0) eq \"0E0\")\n {\n dosql($dbh, \"INSERT INTO test VALUES ( $A[0], $A[1] );\");\n }\n }\n----------------------------------------------------------------------\n\nBasic algorithm for reads\n----------------------------------------------------------------------\nwhile (<>)\n{\n chomp;\n @A = split;\n $sth = querysql($dbh,\"SELECT accum FROM test WHERE val = $A[0];\");\n $hit++ if ($sth && ($row = $sth->fetchrow_arrayref));\n $tot++;\n}\n----------------------------------------------------------------------\n\nWhat could be simpler.\n\nIn my randomly generated write data, I usually have about 18K inserts\nand 82K updates. 
In my randomly generated read data, I have 100K keys\nwhich will be found and 100K keys which will not be found.\n\nThe postgresql.conf file is default (my sysadmin nuked all of my\nchanges when he upgraded to 7.3.1--grr) and there are some shared\nmemory configs: kernel.sem = 250 32000 100 128, kernel.shmmax =\n2147483648, kernel.shmmni = 100, kernel.shmmax = 134217728 The\nWAL is not seperated (but see below).\n\nA \"vacuum analyze\" is performed between the write phase and the read\nphase. However, for your analysis pleasure, here are the results\nof a full verbose analyze and some explain results (both before and after).\n\n/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\\nseth=> explain update test set accum = accum + 53 where val = '5';\n QUERY PLAN \n-----------------------------------------------------\n Seq Scan on test (cost=0.00..0.00 rows=1 width=18)\n Filter: (val = 5::bigint)\n(2 rows)\nseth=> explain insert into test values (5, 53);\n QUERY PLAN \n------------------------------------------\n Result (cost=0.00..0.01 rows=1 width=0)\n(1 row)\nseth=> vacuum full verbose analyze test;\nINFO: --Relation public.test--\nINFO: Pages 541: Changed 2, reaped 540, Empty 0, New 0; Tup 18153: Vac 81847, Keep/VTL 0/0, UnUsed 0, MinLen 40, MaxLen 40; Re-using: Free/Avail. Space 3294932/3294932; EndEmpty/Avail. Pages 0/541.\n CPU 0.00s/0.03u sec elapsed 0.02 sec.\nINFO: Index test_pkey: Pages 355; Tuples 18153: Deleted 81847.\n CPU 0.03s/0.34u sec elapsed 0.65 sec.\nINFO: Rel test: Pages: 541 --> 99; Tuple(s) moved: 18123.\n CPU 1.01s/0.31u sec elapsed 9.65 sec.\nINFO: Index test_pkey: Pages 355; Tuples 18153: Deleted 18123.\n CPU 0.02s/0.06u sec elapsed 0.19 sec.\nINFO: Analyzing public.test\nVACUUM\n\nseth=> explain select accum from test where val = 5;\n QUERY PLAN \n----------------------------------------------------------------------\n Index Scan using test_pkey on test (cost=0.00..5.99 rows=1 width=4)\n Index Cond: (val = 5)\n(2 rows)\nseth=> explain update test set accum = accum + 53 where val = '5';\n QUERY PLAN \n-----------------------------------------------------------------------\n Index Scan using test_pkey on test (cost=0.00..5.99 rows=1 width=18)\n Index Cond: (val = 5::bigint)\n(2 rows)\nseth=> explain insert into test values (5, 53);\n QUERY PLAN \n------------------------------------------\n Result (cost=0.00..0.01 rows=1 width=0)\n(1 row)\n/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\\n\nI certainly understand that using an index scan might well speed\nthings up WRT the update policy, but considering the search\nperformance is post-analyze (pre-analyze it is even more deadly slow),\nI am dubious that doing it during the updates will get me within\nstriking distance of Oracle since read performance has got to be\nbetter than write performance, right?. This is also why I am dubious\nthat moving the WAL to another filesystem or futzing with the fsync\npolicy will do anything.\n\nI will include below a compressed tarball of the programs I used (and\nthe corresponding RUNME script) in case you wish to play along at\nhome. 
I don't claim they are pretty, BTW :-)\n\n -Seth Robertson", "msg_date": "Tue, 21 Jan 2003 16:20:49 -0500", "msg_from": "\"Seth Robertson\" <pgsql-performance@sysd.com>", "msg_from_op": true, "msg_subject": "Postgres 7.3.1 poor insert/update/search performance esp WRT Oracle" }, { "msg_contents": "\nOn Tue, 21 Jan 2003, Seth Robertson wrote:\n\n> The postgresql.conf file is default (my sysadmin nuked all of my\n> changes when he upgraded to 7.3.1--grr) and there are some shared\n> memory configs: kernel.sem = 250 32000 100 128, kernel.shmmax =\n> 2147483648, kernel.shmmni = 100, kernel.shmmax = 134217728 The\n> WAL is not seperated (but see below).\n\nYou almost certainly want to raise shared_buffers from the default (64?)\nto say 1k-10k. I'm not sure how much that'll help but it should help\nsome.\n\n> A \"vacuum analyze\" is performed between the write phase and the read\n> phase. However, for your analysis pleasure, here are the results\n> of a full verbose analyze and some explain results (both before and after).\n\nBTW: what does explain analyze (rather than plain explain) show?\n\n", "msg_date": "Tue, 21 Jan 2003 13:46:17 -0800 (PST)", "msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>", "msg_from_op": false, "msg_subject": "Re: Postgres 7.3.1 poor insert/update/search performance" }, { "msg_contents": "\nIn message <20030121134242.Q84028-100000@megazone23.bigpanda.com>, Stephan Szabo writes:\n\n On Tue, 21 Jan 2003, Seth Robertson wrote:\n \n > The postgresql.conf file is default (my sysadmin nuked all of my\n > changes when he upgraded to 7.3.1--grr) and there are some shared\n > memory configs: kernel.sem = 250 32000 100 128, kernel.shmmax =\n > 2147483648, kernel.shmmni = 100, kernel.shmmax = 134217728 The\n > WAL is not seperated (but see below).\n \n You almost certainly want to raise shared_buffers from the default (64?)\n to say 1k-10k. I'm not sure how much that'll help but it should help\n some.\n\nI'll try that and report back later, but I was under the (false?)\nimpression that it was primarily important when you had multiple\ndatabase connections using the same table.\n \n > A \"vacuum analyze\" is performed between the write phase and the\n > read phase. 
However, for your analysis pleasure, here are the\n > results of a full verbose analyze and some explain results (both\n > before and after).\n \n BTW: what does explain analyze (rather than plain explain) show?\n \n\n/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\\nseth=> explain analyze select accum from test where val = 5;\n QUERY PLAN \n----------------------------------------------------------------------------------------------\n Seq Scan on test (cost=0.00..0.00 rows=1 width=4) (actual time=94.55..94.55 rows=0 loops=1)\n Filter: (val = 5)\n Total runtime: 99.20 msec\n(3 rows)\n\nseth=> explain analyze update test set accum = accum + 53 where val = '5';\n QUERY PLAN \n-----------------------------------------------------------------------------------------------\n Seq Scan on test (cost=0.00..0.00 rows=1 width=18) (actual time=31.95..31.95 rows=0 loops=1)\n Filter: (val = 5::bigint)\n Total runtime: 32.04 msec\n(3 rows)\n\nseth=> explain analyze insert into test values (5, 53);\n QUERY PLAN \n----------------------------------------------------------------------------------\n Result (cost=0.00..0.01 rows=1 width=0) (actual time=0.01..0.01 rows=1 loops=1)\n Total runtime: 7.50 msec\n(2 rows)\n\nseth=> vacuum full verbose analyze test\nseth-> ;\nINFO: --Relation public.test--\nINFO: Pages 541: Changed 1, reaped 539, Empty 0, New 0; Tup 18071: Vac 81930, Keep/VTL 0/0, UnUsed 0, MinLen 40, MaxLen 40; Re-using: Free/Avail. Space 3298208/3298176; EndEmpty/Avail. Pages 0/540.\n CPU 0.03s/0.00u sec elapsed 0.02 sec.\nINFO: Index test_pkey: Pages 355; Tuples 18071: Deleted 81930.\n CPU 0.04s/0.41u sec elapsed 1.96 sec.\nINFO: Rel test: Pages: 541 --> 98; Tuple(s) moved: 18046.\n CPU 0.95s/0.42u sec elapsed 12.74 sec.\nINFO: Index test_pkey: Pages 355; Tuples 18071: Deleted 18046.\n CPU 0.02s/0.05u sec elapsed 0.31 sec.\nINFO: Analyzing public.test\nVACUUM\nseth=> explain analyze select accum from test where val = 5;\n QUERY PLAN \n-----------------------------------------------------------------------------------------------\n Seq Scan on test (cost=0.00..323.89 rows=1 width=4) (actual time=0.13..14.20 rows=1 loops=1)\n Filter: (val = 5)\n Total runtime: 14.26 msec\n(3 rows)\n\nseth=> explain analyze select accum from test where val = 2147483648;\n QUERY PLAN \n--------------------------------------------------------------------------------------------------------------\n Index Scan using test_pkey on test (cost=0.00..5.99 rows=1 width=4) (actual time=0.11..0.11 rows=0 loops=1)\n Index Cond: (val = 2147483648::bigint)\n Total runtime: 0.16 msec\n(3 rows)\n\nseth=> explain analyze update test set accum = accum + 53 where val = '5';\n QUERY PLAN \n---------------------------------------------------------------------------------------------------------------\n Index Scan using test_pkey on test (cost=0.00..5.99 rows=1 width=18) (actual time=0.24..0.24 rows=1 loops=1)\n Index Cond: (val = 5::bigint)\n Total runtime: 0.39 msec\n(3 rows)\n\nseth=> explain analyze insert into test values (6, 53);\n QUERY PLAN \n----------------------------------------------------------------------------------\n Result (cost=0.00..0.01 rows=1 width=0) (actual time=0.01..0.01 rows=1 loops=1)\n Total runtime: 0.08 msec\n(2 rows)\n\nseth=> explain analyze insert into test values (2147483647, 53);\n QUERY PLAN \n----------------------------------------------------------------------------------\n Result (cost=0.00..0.01 rows=1 width=0) (actual time=0.01..0.01 
rows=1 loops=1)\n Total runtime: 0.33 msec\n(2 rows)\n/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\/\\\n\n -Seth Robertson\n", "msg_date": "Tue, 21 Jan 2003 17:07:13 -0500", "msg_from": "Seth Robertson <pgsql-performance@sysd.com>", "msg_from_op": false, "msg_subject": "Re: Postgres 7.3.1 poor insert/update/search performance " }, { "msg_contents": "Seth Robertson <pgsql-performance@sysd.com> writes:\n> I'll try that and report back later, but I was under the (false?)\n> impression that it was primarily important when you had multiple\n> database connections using the same table.\n\nDefinitely false. shared_buffers needs to be 1000 or so for\nproduction-grade performance. There are varying schools of thought\nabout whether it's useful to raise it even higher, but in any case\n64 is just a toy-installation setting.\n\n> seth=> explain analyze select accum from test where val = 5;\n> QUERY PLAN \n> -----------------------------------------------------------------------------------------------\n> Seq Scan on test (cost=0.00..323.89 rows=1 width=4) (actual time=0.13..14.20 rows=1 loops=1)\n> Filter: (val = 5)\n> Total runtime: 14.26 msec\n> (3 rows)\n\n> seth=> explain analyze update test set accum = accum + 53 where val = '5';\n> QUERY PLAN \n> ---------------------------------------------------------------------------------------------------------------\n> Index Scan using test_pkey on test (cost=0.00..5.99 rows=1 width=18) (actual time=0.24..0.24 rows=1 loops=1)\n> Index Cond: (val = 5::bigint)\n> Total runtime: 0.39 msec\n> (3 rows)\n\nThe quotes are important when you are dealing with BIGINT indexes.\nYou won't get an indexscan if the constant looks like int4 rather than int8.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 21 Jan 2003 17:31:35 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Postgres 7.3.1 poor insert/update/search performance " }, { "msg_contents": "Tom and others:\n\nThere has been a lot of talk about shared memory size recently, along \nwith many conflicting statements from various people. Earlier threads \nsaid that setting the shared buffer to a high values (like 512MB on a \n2GB dedicated DB server) is not a good idea. A couple of reasons were \nmentioned. a) potential inefficiencies with the kernel and VM system \nb) modern kernels aggressive caching with all free memory and c) the \nshared memory stealing from memory the kernel would use to cache, etc.\n\nSo my question is: if the kernel is caching all this data, what's the \nbenefit of setting this to 1000 or higher? Why wouldn't i just set it \nto 0 if I believe my kernel is doing a good job.\n\n\n From all the discussion on this topic, it's still not clear to me how \nto calculate what value this should be set at and why. I've read these \ndocuments and others and have yet to find explanations and \nrecommendations that i can use.\n\nhttp://www.postgresql.org/docs/momjian/hw_performance.pdf\nhttp://www.postgresql.org/idocs/index.php?runtime-config.html\nhttp://www.postgresql.org/idocs/index.php?kernel-resources.html\nhttp://www.postgresql.org/idocs/index.php?performance-tips.html\nhttp://www.ca.postgresql.org/docs/momjian/hw_performance/node6.html\nhttp://www.ca.postgresql.org/docs/momjian/hw_performance/node5.html\nhttp://www.ca.postgresql.org/docs/faq-english.html#3.6\n\nThis is such a common topic, it would be nice to see a more definitive \nand comprehensive section in the docs for tuning. 
Google searches for \n\"shared_buffers site:www.postgresql.org\" and \"tuning \nsite:www.postgresql.org\" come up with little info.\n\nFYI: I've been running our database which is mostly read only with 1500 \nbuffers. On a whole, we see very little IO. postgresql performs \nmany many million queries a day, many simple, many complex. Though the \ndatabase is relatively small, around 3GB.\n\n--brian\n\nOn Tuesday, January 21, 2003, at 03:31 PM, Tom Lane wrote:\n\n> Seth Robertson <pgsql-performance@sysd.com> writes:\n>> I'll try that and report back later, but I was under the (false?)\n>> impression that it was primarily important when you had multiple\n>> database connections using the same table.\n>\n> Definitely false. shared_buffers needs to be 1000 or so for\n> production-grade performance. There are varying schools of thought\n> about whether it's useful to raise it even higher, but in any case\n> 64 is just a toy-installation setting.\n>\n>> seth=> explain analyze select accum from test where val = 5;\n>> QUERY PLAN\n>> ---------------------------------------------------------------------- \n>> -------------------------\n>> Seq Scan on test (cost=0.00..323.89 rows=1 width=4) (actual \n>> time=0.13..14.20 rows=1 loops=1)\n>> Filter: (val = 5)\n>> Total runtime: 14.26 msec\n>> (3 rows)\n>\n>> seth=> explain analyze update test set accum = accum + 53 where val = \n>> '5';\n>> QUERY PLAN\n>> ---------------------------------------------------------------------- \n>> -----------------------------------------\n>> Index Scan using test_pkey on test (cost=0.00..5.99 rows=1 \n>> width=18) (actual time=0.24..0.24 rows=1 loops=1)\n>> Index Cond: (val = 5::bigint)\n>> Total runtime: 0.39 msec\n>> (3 rows)\n>\n> The quotes are important when you are dealing with BIGINT indexes.\n> You won't get an indexscan if the constant looks like int4 rather than \n> int8.\n>\n> \t\t\tregards, tom lane\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to \n> majordomo@postgresql.org)\n\n", "msg_date": "Tue, 21 Jan 2003 18:44:57 -0700", "msg_from": "Brian Hirt <bhirt@mobygames.com>", "msg_from_op": false, "msg_subject": "Re: Postgres 7.3.1 poor insert/update/search performance " }, { "msg_contents": "\nIn message <13165.1043188295@sss.pgh.pa.us>, Tom Lane writes:\n\n Seth Robertson <pgsql-performance@sysd.com> writes:\n > I'll try that and report back later, but I was under the (false?)\n > impression that it was primarily important when you had multiple\n > database connections using the same table.\n\n Definitely false. shared_buffers needs to be 1000 or so for\n production-grade performance. There are varying schools of thought\n about whether it's useful to raise it even higher, but in any case\n 64 is just a toy-installation setting.\n\nIncreasing the setting to 4096 improved write performance by 20%.\nIncreasing the setting to 8192 had no additional effect. I could try\na few more probes if anyone cared.\n\n The quotes are important when you are dealing with BIGINT indexes.\n You won't get an indexscan if the constant looks like int4 rather\n than int8.\n\nYou are not kidding!!!! Changing this increased the search\nperformance to 2083 transactions/second. This is 30 times faster than\nbefore, and 5 times faster than Oracle! Go Tom Lane!!!\n\nUnfortunately, the update accidentally already used the quoting, so\nthis top did not directly help the write case. 
However, it did\ninspire me to check some other suggestions I have read since obviously\nperformance was to be had.\n\n----------------------------------------------------------------------\nOracle read performance: 395\nOriginal read performance: 69\nshared_buffer = 4096 118\n+ quoted where (WHERE val = '5') 2083\n----------------------------------------------------------------------\n\n----------------------------------------------------------------------\nOracle write performance: 314\nOriginal write performance: 39\nshared_buffer = 4096: 47\n+ Occassional (@ 10K & 60K vectors) vacuum analyze in bg: 121\n+ Periodic (every 10K vectors) vacuum analyze in background: 124\n+ wal_buffers = 24: 125\n+ wal_method = fdatasync 127\n+ wal_method = open_sync 248\n+ wal_method = open_datasync Not Supported\n+ fsync=false: 793\n----------------------------------------------------------------------\n\nJust to round out my report, using the fastest safe combination I was\nable to find (open_sync *is* safe, isn't it?), I reran all 7\nperformance tests to see if there was any different using the\ndifferent access methods:\n\n----------------------------------------------------------------------\n\"normal\" C libpq 256 t/s\n\"normal\" Perl DBI 251 t/s\n\"DBI Prepared Statement\" Perl DBI 254 t/s\n\"Batching\" Perl DBI 1149 t/s\n\"arrays\" Perl DBI 43 t/s\n\"server-side function\" Perl DBI 84 t/s\n\"server-side trigger\" Perl DBI 84 t/s\n\"normal\" Perl DBI read 1960 t/s\n\"normal\" Perl DBI for Oracle 314 t/s\n\"normal\" Perl DBI read for Oracle 395 t/s\n----------------------------------------------------------------------\n\nWith a batching update of 1149 transactions per second (2900%\nimprovement), I am willing to call it a day unless anyone else has any\nbrilliant ideas. However, it looks like my hope to use arrays is\ndoomed though, I'm not sure I can handle the performance penalty.\n\n -Seth Robertson\n", "msg_date": "Wed, 22 Jan 2003 02:19:45 -0500", "msg_from": "Seth Robertson <pgsql-performance@sysd.com>", "msg_from_op": false, "msg_subject": "Re: Postgres 7.3.1 poor insert/update/search performance" }, { "msg_contents": "On Tue, Jan 21, 2003 at 06:44:57PM -0700, Brian Hirt wrote:\n> \n> So my question is: if the kernel is caching all this data, what's the \n> benefit of setting this to 1000 or higher? Why wouldn't i just set it \n> to 0 if I believe my kernel is doing a good job.\n\nIf Postgres tries to fetch a bit of data which is in its own shared\nbuffer, it does not even need to make a system call in order to fetch\nit. The data fetch is extremely fast.\n\nThe problem is that managing that shared memory comes at some cost.\n\nIf the data is not in a shared buffer, then Postgres makes exactly\nthe same call, no matter what, to the OS kernel, asking for the data\nfrom disk. It might happen, however, that the kernel will have the\ndata in its disk cache, however. The total cost of the operation,\ntherefore, is much lower in case the data is in the kernel's disk\ncache than in the case where it is actually on the disk. It is\nnevertheless still higher (atomically speaking) than fetching the\ndata from Postgres's own shared buffer.\n\nSo the question is this: where is the \"sweet spot\" where it costs\nlittle enough for Postgres to manage the shared buffer that the\nreduced cost of a system call is worth it. (As you point out, this\ncaclulation is complicated by the potential to waste memory by\ncaching the data twice -- once in the shared buffer and once in the\ndisk cache. 
Some systems, like Solaris, allow you to turn off the\ndisk cache, so the problem may not be one you face.) The trouble is\nthat there is no agreement on the answer to that question, and\nprecious little evidence which seems to settle the question.\n\nThe only way to determine the best setting, then, is to use your\nsystem with more-or-less simulated production loads, and see where\nthe best setting lies. You have to do this over time, because\nsometimes inefficiencies turn up only after running for a while. In\nan experiment we tried, we used a 2G shared buffer on a 12G machine. \nIt looked brilliantly fast at first, but 48 hours later was\n_crawling_; that indicates a problem with shared-buffer management on\nthe part of Postgres, I guess, but it was hard to say more than that. \nWe ultimately settled on a value somewhere less than 1 G as\nappropriate for our use. But if I had to justify the number I picked\n(as opposed to one a few hundred higher or lower), I'd have a tough\ntime.\n\n> From all the discussion on this topic, it's still not clear to me how \n> to calculate what value this should be set at and why. I've read these \n> documents and others and have yet to find explanations and \n> recommendations that i can use.\n\nI'm afraid what I'm saying is that it's a bit of a black art. The\npg_autotune project is an attempt to help make this a little more\nscientific. It relies on pgbench, which has its own problems,\nhowever.\n\nHope that's helpful, but I fear it doesn't give you the answer you'd\nlike.\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<andrew@libertyrms.info> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Wed, 22 Jan 2003 07:05:24 -0500", "msg_from": "Andrew Sullivan <andrew@libertyrms.info>", "msg_from_op": false, "msg_subject": "Re: Postgres 7.3.1 poor insert/update/search performance" }, { "msg_contents": "Brian Hirt <bhirt@mobygames.com> writes:\n> So my question is: if the kernel is caching all this data, what's the \n> benefit of setting this to 1000 or higher? Why wouldn't i just set it \n> to 0 if I believe my kernel is doing a good job.\n\nWell, setting it to 0 won't work ;-). There's some minimum number of\nbuffers needed for PG to work at all; depending on complexity of your\nqueries and number of active backends it's probably around 25-100.\n(You'll get \"out of buffers\" failures if too few.) But more to the\npoint, when shared_buffers is too small you'll waste CPU cycles on\nunnecessary data transfers between kernel and user memory. It seems to\nbe pretty well established that 64 is too small for most applications.\nI'm not sure how much is enough, but I suspect that a few thousand is\nplenty to get past the knee of the performance curve in most scenarios.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 22 Jan 2003 11:09:58 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Postgres 7.3.1 poor insert/update/search performance " }, { "msg_contents": "\nSeth Robertson writes:\n\n \n However, it looks like my hope to use arrays is\n doomed though, I'm not sure I can handle the performance penalty.\n\nJust in case I get the person who implemented arrays annoyed or\nworried, I did not properly modify the \"array\" test and was vacuum'ing\nthe wrong table every 10000 vectors during the test. I realized that\nthis morning and the new array results are listed below. 
I also\nexperimented with batching read operations, and I was surprised to find\nthat this helps a great deal as well.\n\n----------------------------------------------------------------------\n\"normal\" C libpq 256 t/s\n\"normal\" Perl DBI 251 t/s\n\"DBI Prepared Statement\" Perl DBI 254 t/s\n\"Batching\" Perl DBI 1149 t/s\n\"arrays\" Perl DBI 250 t/s (*)\n\"arrays with batching\" Perl DBI 1020 t/s (*)\n\"server-side function\" Perl DBI 84 t/s\n\"server-side trigger\" Perl DBI 84 t/s\n\"normal\" Perl DBI read 1960 t/s\n\"batched\" Perl DBI read\t\t\t3076 t/s (*)\n\"array\" Perl DBI read 1754 t/s (*)\n\"batched array\" Perl DBI read 2702 t/s (*)\n\"normal\" Perl DBI for Oracle 314 t/s\n\"normal\" Perl DBI read for Oracle 395 t/s\n----------------------------------------------------------------------\n(*) New/updated from this morning\n\nThis brings array code to within 11% of the performance of batched\nnon-arrays, and close enough to be an option. I may well be doing\nsomething wrong with the server-side functions, but I don't see\nanything quite so obviously wrong.\n\n -Seth Robertson\n", "msg_date": "Wed, 22 Jan 2003 12:45:24 -0500", "msg_from": "Seth Robertson <pgsql-performance@sysd.com>", "msg_from_op": false, "msg_subject": "Re: Postgres 7.3.1 poor insert/update/search performance " }, { "msg_contents": "On Wed, 2003-01-22 at 07:05, Andrew Sullivan wrote:\n> (As you point out, this caclulation is complicated by the potential to\n> waste memory by caching the data twice\n\nIf we had a good buffer replacement algorithm (which we currently do\nnot), ISTM that hot pages retained in PostgreSQL's buffer cache would\nnever get loaded from the OS's IO cache, thus causing those pages to\neventually be evicted from the OS's cache. So the \"cache the data twice\"\nproblem doesn't apply in all circumstances.\n\n> Some systems, like Solaris, allow you to turn off the\n> disk cache, so the problem may not be one you face.)\n\nI think it would be interesting to investigate disabling the OS' cache\nfor all relation I/O (i.e. heap files, index files). That way we could\nboth improve performance (by moving all the caching into PostgreSQL's\ndomain, where there is more room for optimization), as well as make\nconfiguration simpler: in an ideal world, it would remove the need to\nconsider the OS' caching when configuring the amount of shared memory to\nallocate to PostgreSQL.\n\nCan this be done using O_DIRECT? If so, is it portable?\n\nBTW, if anyone has any information on actually *using* O_DIRECT, I'd be\ninterested in it. I tried to quickly hack PostgreSQL to use it, without\nsuccess...\n\nCheers,\n\nNeil\n-- \nNeil Conway <neilc@samurai.com> || PGP Key ID: DB3C29FC\n\n\n\n", "msg_date": "30 Jan 2003 18:18:54 -0500", "msg_from": "Neil Conway <neilc@samurai.com>", "msg_from_op": false, "msg_subject": "Re: Postgres 7.3.1 poor insert/update/search performance" }, { "msg_contents": "On Fri, 30 Jan 2003, Neil Conway wrote:\n\n> If we had a good buffer replacement algorithm (which we currently do\n> not), ISTM that hot pages retained in PostgreSQL's buffer cache would\n> never get loaded from the OS's IO cache, thus causing those pages to\n> eventually be evicted from the OS's cache. 
So the \"cache the data twice\"\n> problem doesn't apply in all circumstances.\n\nNo, but it does apply to every block at some point, since during the\ninitial load it's present in both caches, and it has to be flushed from\nthe OS's cache at some point.\n\n> > Some systems, like Solaris, allow you to turn off the\n> > disk cache, so the problem may not be one you face.)\n>\n> I think it would be interesting to investigate disabling the OS' cache\n> for all relation I/O (i.e. heap files, index files). That way we could\n> both improve performance (by moving all the caching into PostgreSQL's\n> domain, where there is more room for optimization)...\n\nI'm not so sure that there is that all much more room for optimization.\nBut take a look at what Solaris and FFS do now, and consider how much\nwork it would be to rewrite it, and then see if you even want to do that\nbefore adding stuff to improve performance.\n\n> , as well as make configuration simpler: in an ideal world, it would\n> remove the need to consider the OS' caching when configuring the\n> amount of shared memory to allocate to PostgreSQL.\n\nWe could do that much more simply by using mmap.\n\n> Can this be done using O_DIRECT?\n\nIt can, but you're doing to lose some of the advantages that you'd get\nfrom using raw devices instead. In particular, you have no way to know\nthe physical location of blocks on the disk, because those locations are\noften different from the location in the file.\n\n> If so, is it portable?\n\nO_DIRECT is not all that portable, I don't think. Certainly not as\nportable as mmap.\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n", "msg_date": "Fri, 31 Jan 2003 13:02:52 +0900 (JST)", "msg_from": "Curt Sampson <cjs@cynic.net>", "msg_from_op": false, "msg_subject": "Re: Postgres 7.3.1 poor insert/update/search performance" }, { "msg_contents": "Curt Sampson wrote:\n> > > Some systems, like Solaris, allow you to turn off the\n> > > disk cache, so the problem may not be one you face.)\n> >\n> > I think it would be interesting to investigate disabling the OS' cache\n> > for all relation I/O (i.e. heap files, index files). That way we could\n> > both improve performance (by moving all the caching into PostgreSQL's\n> > domain, where there is more room for optimization)...\n> \n> I'm not so sure that there is that all much more room for optimization.\n> But take a look at what Solaris and FFS do now, and consider how much\n> work it would be to rewrite it, and then see if you even want to do that\n> before adding stuff to improve performance.\n\nWe need free-behind for large sequential scans, like Solaris has. Do we\nhave LRU-2 or LRU-K now?\n\n> > If so, is it portable?\n> \n> O_DIRECT is not all that portable, I don't think. Certainly not as\n> portable as mmap.\n\nAs I remember, DIRECT doesn't return until the data hits the disk\n(because there is no OS cache), so if you want to write a page so you\ncan reused the buffer, DIRECT would be quite slow.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Sun, 2 Feb 2003 05:39:36 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Postgres 7.3.1 poor insert/update/search performance" }, { "msg_contents": "On Sun, 2003-02-02 at 05:39, Bruce Momjian wrote:\n> We need free-behind for large sequential scans, like Solaris has. Do we\n> have LRU-2 or LRU-K now?\n\nNo.\n\n> As I remember, DIRECT doesn't return until the data hits the disk\n> (because there is no OS cache), so if you want to write a page so you\n> can reused the buffer, DIRECT would be quite slow.\n\nWhy? If there is a finite amount of memory for doing buffering, the data\nneeds to be written to disk at *some* point, anyway. And if we didn't\nuse the OS cache, the size of the PostgreSQL shared buffer would be much\nlarger (I'd think 80% or more of the physical RAM in a typical high-end\nmachine, for dedicated PostgreSQL usage).\n\nOne possible problem would be the fact that it might mean that writing\nout dirty pages would become part of some key code paths in PostgreSQL\n(rather than assuming that the OS can write out dirty pages in the\nbackground, as it chooses to). But there are lots of ways to work around\nthis, notably by using a daemon to periodically write out some of the\npages in the background.\n\nCheers,\n\nNeil\n-- \nNeil Conway <neilc@samurai.com> || PGP Key ID: DB3C29FC\n\n\n\n", "msg_date": "02 Feb 2003 12:01:41 -0500", "msg_from": "Neil Conway <neilc@samurai.com>", "msg_from_op": false, "msg_subject": "Re: Postgres 7.3.1 poor insert/update/search performance" }, { "msg_contents": "On Mon, 2 Feb 2003, Neil Conway wrote:\n\n> > As I remember, DIRECT doesn't return until the data hits the disk\n> > (because there is no OS cache), so if you want to write a page so you\n> > can reused the buffer, DIRECT would be quite slow.\n> ...\n> One possible problem would be the fact that it might mean that writing\n> out dirty pages would become part of some key code paths in PostgreSQL\n> (rather than assuming that the OS can write out dirty pages in the\n> background, as it chooses to). But there are lots of ways to work around\n> this, notably by using a daemon to periodically write out some of the\n> pages in the background.\n\nIf you're doing blocking direct I/O, you really have to have and use\nwhat I guess I'd call \"scatter-scatter I/O\": you need to chose a large\nnumber of blocks scattered over various positions in all your open\nfiles, and be able to request a write for all of them at once.\n\nIf you write one by one, with each write going to disk before your\nrequest returns, you're going to be forcing the physical order of the\nwrites with no knowledge of where the blocks physically reside on the\ndisk, and you stand a snowball's chance in you-know-where of getting\na write ordering that will maximize your disk throughput.\n\nThis is why systems that use direct I/O, for the most part, use a raw\npartition and their own \"filesystem\" as well; you need to know the\nphysical layout of the blocks to create efficient write strategies.\n\n(MS SQL Server on Windows NT is a notable exception to this. They do,\nhowever, make you pre-create the data file in advance, and they suggest\ndoing it on an empty partition, which at the very least would get you\nlong stretches of the file in contiguous order. 
They may also be using\ntricks to make sure the file gets created in contiguous order, or they\nmay be able to get information from the OS about the physical block\nnumbers corresponding to logical block numbers in the file.)\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n", "msg_date": "Mon, 3 Feb 2003 02:14:51 +0900 (JST)", "msg_from": "Curt Sampson <cjs@cynic.net>", "msg_from_op": false, "msg_subject": "Re: Postgres 7.3.1 poor insert/update/search performance" }, { "msg_contents": "Curt,\n\n> (MS SQL Server on Windows NT is a notable exception to this. They do,\n> however, make you pre-create the data file in advance, and they suggest\n> doing it on an empty partition, which at the very least would get you\n> long stretches of the file in contiguous order. They may also be using\n> tricks to make sure the file gets created in contiguous order, or they\n> may be able to get information from the OS about the physical block\n> numbers corresponding to logical block numbers in the file.)\n\nMSSQL is, in fact, doing some kind of direct-block-addressing. If you \nattempt to move a partition on the disk containing MSSQL databases, SQL \nServer will crash on restart and be unrecoverable ... even if the other files \non that disk are fine. Nor can you back up MSSQL by using disk imaging, \nunless you can recover to an identical model disk/array.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Sun, 2 Feb 2003 11:31:30 -0800", "msg_from": "Josh Berkus <josh@agliodbs.com>", "msg_from_op": false, "msg_subject": "Re: Postgres 7.3.1 poor insert/update/search performance" }, { "msg_contents": "Neil Conway wrote:\n> On Sun, 2003-02-02 at 05:39, Bruce Momjian wrote:\n> > We need free-behind for large sequential scans, like Solaris has. Do we\n> > have LRU-2 or LRU-K now?\n> \n> No.\n> \n> > As I remember, DIRECT doesn't return until the data hits the disk\n> > (because there is no OS cache), so if you want to write a page so you\n> > can reused the buffer, DIRECT would be quite slow.\n> \n> Why? If there is a finite amount of memory for doing buffering, the data\n> needs to be written to disk at *some* point, anyway. And if we didn't\n> use the OS cache, the size of the PostgreSQL shared buffer would be much\n> larger (I'd think 80% or more of the physical RAM in a typical high-end\n> machine, for dedicated PostgreSQL usage).\n> \n> One possible problem would be the fact that it might mean that writing\n> out dirty pages would become part of some key code paths in PostgreSQL\n> (rather than assuming that the OS can write out dirty pages in the\n> background, as it chooses to). But there are lots of ways to work around\n> this, notably by using a daemon to periodically write out some of the\n> pages in the background.\n\nRight. This is what we _don't_ want to do. If we need a buffer, we\nneed it now. We can't wait for some other process to write the buffer\ndirectly to disk, nor do we want to group the writes somehow.\n\nAnd the other person mentioning we have to group writes again causes the\nsame issues --- we are bypassing the kernel buffers which know more than\nwe do. I can see advantage of preventing double buffering _quickly_\nbeing overtaken by the extra overhead of direct i/o.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Sun, 2 Feb 2003 19:41:00 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Postgres 7.3.1 poor insert/update/search performance" } ]
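Most of the write-side gains reported in the thread above came from configuration rather than SQL: shared_buffers (64 -> 4096), wal_buffers (24), wal_sync_method (open_sync) and, unsafely, fsync = false. These live in postgresql.conf and generally need a postmaster restart, but a 7.3 server will report what it is actually running with. A small sketch of the checks (the commented values are the ones tried in the thread, not general recommendations):

----------------------------------------------------------------------
SHOW shared_buffers;   -- raised from the 64-buffer default to 4096 here
SHOW wal_buffers;      -- 24 in the later tests
SHOW wal_sync_method;  -- open_sync gave the best safe write numbers here
SHOW fsync;            -- fsync = false was fastest but risks data loss on a crash
----------------------------------------------------------------------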
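The "batching" rows in the result tables above were produced on the client side (Perl DBI sending several statements per round trip). A closely related, plain-SQL way to cut per-statement commit overhead -- not necessarily what the poster's script did -- is to group many of the accumulate statements into one transaction, so the WAL is flushed once per batch instead of once per statement. A minimal sketch against the thread's "test" table, with made-up key values:

----------------------------------------------------------------------
BEGIN;
-- Keep the bigint literals quoted (or cast) so each UPDATE can use the index.
UPDATE test SET accum = accum + 53 WHERE val = '5';
UPDATE test SET accum = accum + 17 WHERE val = '9';
-- The client issues an INSERT only for keys whose UPDATE touched zero rows:
INSERT INTO test VALUES ('42', 7);
-- ... many more statements ...
COMMIT;
----------------------------------------------------------------------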
[ { "msg_contents": "Hi:\n\n\n Has anyone done performance comparison between\ntriggers/functions in C vs. PL/PGSQL?\n\n What are the drawbacks of functions written using\nC?\n\n\nludwig.\n\n\n\n__________________________________________________\nDo you Yahoo!?\nNew DSL Internet Access from SBC & Yahoo!\nhttp://sbc.yahoo.com\n", "msg_date": "Tue, 21 Jan 2003 18:50:39 -0800 (PST)", "msg_from": "Ludwig Lim <lud_nowhere_man@yahoo.com>", "msg_from_op": true, "msg_subject": "Performance between triggers/functions written in C and PL/PGSQL" }, { "msg_contents": "\nLudwig,\n\n> Has anyone done performance comparison between\n> triggers/functions in C vs. PL/PGSQL?\n\nOn simple ON UPDATE triggers that update an archive table, but are called many \ntimes per minute, about 20:1 in favor of C triggers. Partly that depends on \nwhether you load the C function as an external file or compile it into the \ndatabase. The latter is, of course, faster by far less flexible.\n\nPartly this is because C is fast, being a lower-level language. Partly this \nis because the PL/pgSQL parser is in *desperate* need of an overhaul, as it \nwas written in a hurry and has since suffered incremental development.\n\n> What are the drawbacks of functions written using\n> C?\n\nWriting C is harder. Gotta manage your own memory. Plus a badly written C \nfunction can easily crash Postgres, whereas that's much harder to do with \nPL/pgSQL.\n\nUsually I just write the original Trigger in PL/pgSQL, test & debug for data \nerrors, and then farm it out to a crack C programmer to convert to C.\n\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Wed, 22 Jan 2003 10:57:29 -0800", "msg_from": "Josh Berkus <josh@agliodbs.com>", "msg_from_op": false, "msg_subject": "Re: Performance between triggers/functions written in C and PL/PGSQL" } ]
[ { "msg_contents": "hello all,\n\nI am getting the following output from EXPLAIN, concerning a query with \njoins. The merge uses index scans but takes too long, in my opinion. The \nquery is in fact only a part (subquery) of another one, but it is the \nbottle neck.\n\nAs I am quite ignorant in optimizing queries, and I have no idea where \nto find documentation on the net on how to learn optimizing my queries, \nI am posting this here in hope someone will give me either tips how to \noptimize, or where to find some tutorial that could help me get along on \nmy own.\n\ndropping the \"DISTINCT\" has some effect, but I can't really do without.\n\nThank you\nChantal\n\n+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n\nrelate=# explain SELECT DISTINCT gene.gene_name, \ngene_occurrences_puid.puid FROM disease, gene, disease_occurrences_puid, \ngene_occurrences_puid WHERE \ndisease_occurrences_puid.puid=gene_occurrences_puid.puid AND \ndisease.disease_id=disease_occurrences_puid.disease_id AND \ngene.gene_id=gene_occurrences_puid.gene_id;\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------\n Unique (cost=426503.59..436839.47 rows=137812 width=41)\n -> Sort (cost=426503.59..429948.88 rows=1378118 width=41)\n Sort Key: gene.gene_name, gene_occurrences_puid.puid\n -> Hash Join (cost=67813.96..162375.07 rows=1378118 width=41)\n Hash Cond: (\"outer\".disease_id = \"inner\".disease_id)\n -> Merge Join (cost=63671.50..98237.36 rows=1378118 \nwidth=37)\n Merge Cond: (\"outer\".puid = \"inner\".puid)\n -> Index Scan using disease_occpd_puid_i on \ndisease_occurrences_puid (cost=0.00..14538.05 rows=471915 width=8)\n -> Sort (cost=63671.50..64519.87 rows=339347 \nwidth=29)\n Sort Key: gene_occurrences_puid.puid\n -> Merge Join (cost=0.00..22828.18 \nrows=339347 width=29)\n Merge Cond: (\"outer\".gene_id = \n\"inner\".gene_id)\n -> Index Scan using gene_pkey on gene \n (cost=0.00..7668.59 rows=218085 width=21)\n -> Index Scan using gene_id_puid_uni \non gene_occurrences_puid (cost=0.00..9525.57 rows=339347 width=8)\n -> Hash (cost=3167.97..3167.97 rows=164597 width=4)\n -> Seq Scan on disease (cost=0.00..3167.97 \nrows=164597 width=4)\n(16 rows)\n\n++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n\n", "msg_date": "Wed, 22 Jan 2003 11:30:32 +0100", "msg_from": "Chantal Ackermann <chantal.ackermann@biomax.de>", "msg_from_op": true, "msg_subject": "optimizing query" }, { "msg_contents": "\n(Replying to general and performance in a hope to move this\nto performance after a couple of replies).\n\nOn Wed, 22 Jan 2003, Chantal Ackermann wrote:\n\n> I am getting the following output from EXPLAIN, concerning a query with\n> joins. The merge uses index scans but takes too long, in my opinion. 
The\n> query is in fact only a part (subquery) of another one, but it is the\n> bottle neck.\n>\n> As I am quite ignorant in optimizing queries, and I have no idea where\n> to find documentation on the net on how to learn optimizing my queries,\n> I am posting this here in hope someone will give me either tips how to\n> optimize, or where to find some tutorial that could help me get along on\n> my own.\n>\n> dropping the \"DISTINCT\" has some effect, but I can't really do without.\n\nThe first thing is, have you done ANALYZE recently to make sure that the\nstatistics are correct and what does EXPLAIN ANALYZE give you (that will\nrun the query and give the estimate and actual). Also, if you haven't\nvacuumed recently, you may want to vacuum full.\n\nHow many rows are there on gene, disease and both occurrances tables?\nI'd wonder if perhaps using explicit sql join syntax (which postgres uses\nto constrain order) to join disease and disease_occurrences_puid before\njoining it to the other two would be better or worse in practice.\n\n", "msg_date": "Wed, 22 Jan 2003 08:20:15 -0800 (PST)", "msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>", "msg_from_op": false, "msg_subject": "Re: optimizing query" }, { "msg_contents": "hi Stephan,\n\nthank you for your reply.\n\nI ran vacuum analyze before calling explain. As this is a newly built \ndatabase where no rows have been deleted, yet, I thought vacuum full \nwould have no effect. In fact, BEFORE running vacuum full, the cost of \nthe query is estimates by explain analyze as 33 secs, and AFTER running \nit, the cost is estimate to be 43 secs??? (Hey, I want at least the 10 \nsecs back ;-) )\n\nI have just installed this database on a \"bigger\" (see the system info \nfurther down) machine, and I expected the queries would run _really_ \nfast. 
especially, as there is a lot more data to be inserted in the \noccurrences tables.\n\nThis is the row count of the tables and the output of explain analyze \nbefore and after running vacuum full (after that, I listed some system \nand postgresql information):\n\nrelate=# select count(*) from gene;\n count\n--------\n 218085\n(1 row)\n\nrelate=# select count(*) from disease;\n count\n--------\n 164597\n(1 row)\n\nrelate=# select count(*) from disease_occurrences_puid;\n count\n--------\n 471915\n(1 row)\n\nrelate=# select count(*) from gene_occurrences_puid;\n count\n--------\n 339347\n(1 row)\n\nrelate=# explain analyze SELECT DISTINCT gene.gene_name, \ngene_occurrences_puid.puid FROM gene, disease_occurrences_puid, \ngene_occurrences_puid WHERE\ndisease_occurrences_puid.puid=gene_occurrences_puid.puid AND \ngene.gene_id=gene_occurrences_puid.gene_id;\n \n QUERY PLAN\n\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Unique (cost=342175.89..352511.77 rows=137812 width=33) (actual \ntime=32112.66..33139.23 rows=219435 loops=1)\n -> Sort (cost=342175.89..345621.18 rows=1378118 width=33) (actual \ntime=32112.65..32616.14 rows=695158 loops=1)\n Sort Key: gene.gene_name, gene_occurrences_puid.puid\n -> Merge Join (cost=63671.50..98237.36 rows=1378118 \nwidth=33) (actual time=10061.83..17940.02 rows=695158 loops=1)\n Merge Cond: (\"outer\".puid = \"inner\".puid)\n -> Index Scan using disease_occpd_puid_i on \ndisease_occurrences_puid (cost=0.00..14538.05 rows=471915 width=4) \n(actual time=0.03..3917.99 rows=471915 loops=1)\n -> Sort (cost=63671.50..64519.87 rows=339347 width=29) \n(actual time=10061.69..10973.64 rows=815068 loops=1)\n Sort Key: gene_occurrences_puid.puid\n -> Merge Join (cost=0.00..22828.18 rows=339347 \nwidth=29) (actual time=0.21..3760.59 rows=339347 loops=1)\n Merge Cond: (\"outer\".gene_id = \"inner\".gene_id)\n -> Index Scan using gene_pkey on gene \n(cost=0.00..7668.59 rows=218085 width=21) (actual time=0.02..955.19 \nrows=218073 loops=1)\n -> Index Scan using gene_id_puid_uni on \ngene_occurrences_puid (cost=0.00..9525.57 rows=339347 width=8) (actual \ntime=0.02..1523.81 rows=339347 loops=1)\n Total runtime: 33244.81 msec\n(13 rows)\n\n++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\nAFTER\n\nrelate=# vacuum full verbose analyze;\n\nrelate=# explain analyze SELECT DISTINCT gene.gene_name, \ngene_occurrences_puid.puid FROM gene, disease_occurrences_puid, \ngene_occurrences_puid WHERE\ndisease_occurrences_puid.puid=gene_occurrences_puid.puid AND \ngene.gene_id=gene_occurrences_puid.gene_id;\n \n QUERY PLAN\n\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Unique (cost=359069.64..369948.41 rows=145050 width=33) (actual \ntime=42195.60..43229.04 rows=219435 loops=1)\n -> Sort (cost=359069.64..362695.90 rows=1450503 width=33) (actual \ntime=42195.59..42694.70 rows=695158 loops=1)\n Sort Key: gene.gene_name, gene_occurrences_puid.puid\n -> Merge Join (cost=63732.51..99264.24 rows=1450503 \nwidth=33) (actual time=13172.40..27973.79 rows=695158 loops=1)\n Merge Cond: (\"outer\".puid = \"inner\".puid)\n -> Index Scan using disease_occpd_puid_i on \ndisease_occurrences_puid (cost=0.00..14543.06 rows=471915 width=4) \n(actual time=36.50..10916.29 rows=471915 loops=1)\n -> Sort 
(cost=63732.51..64580.88 rows=339347 width=29) \n(actual time=13126.56..14048.38 rows=815068 loops=1)\n Sort Key: gene_occurrences_puid.puid\n -> Merge Join (cost=0.00..22889.19 rows=339347 \nwidth=29) (actual time=58.00..6775.55 rows=339347 loops=1)\n Merge Cond: (\"outer\".gene_id = \"inner\".gene_id)\n -> Index Scan using gene_pkey on gene \n(cost=0.00..7739.91 rows=218085 width=21) (actual time=29.00..3416.01 \nrows=218073\nloops=1)\n -> Index Scan using gene_id_puid_uni on \ngene_occurrences_puid (cost=0.00..9525.57 rows=339347 width=8) (actual \ntime=28.69..1936.83 rows=339347 loops=1)\n Total runtime: 43338.94 msec\n\n++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n\nPostgres Version: 7.3.1\nCPU: 1666.767 MHz\nRAM: 2070492 kB\nshmmax/shmall: 1048576000\n\npostgresql.conf:\nshared_buffers: 121600\nmax_connections: 64\nmax_fsm_relations = 200\nmax_fsm_pages = 40000\neffective_cache_size = 8000\n\n********************************************************************\n\nThank you again for your interest and help!\n\nChantal\n\n\nStephan Szabo wrote:\n> (Replying to general and performance in a hope to move this\n> to performance after a couple of replies).\n> \n> On Wed, 22 Jan 2003, Chantal Ackermann wrote:\n> \n> \n>>I am getting the following output from EXPLAIN, concerning a query with\n>>joins. The merge uses index scans but takes too long, in my opinion. The\n>>query is in fact only a part (subquery) of another one, but it is the\n>>bottle neck.\n>>\n>>As I am quite ignorant in optimizing queries, and I have no idea where\n>>to find documentation on the net on how to learn optimizing my queries,\n>>I am posting this here in hope someone will give me either tips how to\n>>optimize, or where to find some tutorial that could help me get along on\n>>my own.\n>>\n>>dropping the \"DISTINCT\" has some effect, but I can't really do without.\n> \n> \n> The first thing is, have you done ANALYZE recently to make sure that the\n> statistics are correct and what does EXPLAIN ANALYZE give you (that will\n> run the query and give the estimate and actual). Also, if you haven't\n> vacuumed recently, you may want to vacuum full.\n> \n> How many rows are there on gene, disease and both occurrances tables?\n> I'd wonder if perhaps using explicit sql join syntax (which postgres uses\n> to constrain order) to join disease and disease_occurrences_puid before\n> joining it to the other two would be better or worse in practice.\n> \n> \n\n", "msg_date": "Thu, 23 Jan 2003 10:16:01 +0100", "msg_from": "Chantal Ackermann <chantal.ackermann@biomax.de>", "msg_from_op": true, "msg_subject": "Re: optimizing query" }, { "msg_contents": "\nOn Thu, 23 Jan 2003, Chantal Ackermann wrote:\n\n> ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n>\n> Postgres Version: 7.3.1\n> CPU: 1666.767 MHz\n> RAM: 2070492 kB\n> shmmax/shmall: 1048576000\n>\n> postgresql.conf:\n> shared_buffers: 121600\n> max_connections: 64\n> max_fsm_relations = 200\n> max_fsm_pages = 40000\n> effective_cache_size = 8000\n>\n> ********************************************************************\n\nHmm, how about how many pages are in the various tables, (do a\nvacuum verbose <table> for the various tables and what is sort_mem\nset to? 
It's picking the index scan to get the tables in sorted\norder, but I wonder if that's really the best plan given it's getting\na large portion of the tables.\n\nHmm, what does it do if you set enable_indexscan=off; ?\n\n", "msg_date": "Thu, 23 Jan 2003 07:05:28 -0800 (PST)", "msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>", "msg_from_op": false, "msg_subject": "Re: optimizing query" }, { "msg_contents": "Chantal Ackermann <chantal.ackermann@biomax.de> writes:\n> Unique (cost=359069.64..369948.41 rows=145050 width=33) (actual \n> time=42195.60..43229.04 rows=219435 loops=1)\n> -> Sort (cost=359069.64..362695.90 rows=1450503 width=33) (actual \n> time=42195.59..42694.70 rows=695158 loops=1)\n> Sort Key: gene.gene_name, gene_occurrences_puid.puid\n> -> Merge Join (cost=63732.51..99264.24 rows=1450503 \n> width=33) (actual time=13172.40..27973.79 rows=695158 loops=1)\n> Merge Cond: (\"outer\".puid = \"inner\".puid)\n> -> Index Scan using disease_occpd_puid_i on \n> disease_occurrences_puid (cost=0.00..14543.06 rows=471915 width=4) \n> (actual time=36.50..10916.29 rows=471915 loops=1)\n> -> Sort (cost=63732.51..64580.88 rows=339347 width=29) \n> (actual time=13126.56..14048.38 rows=815068 loops=1)\n> Sort Key: gene_occurrences_puid.puid\n> -> Merge Join (cost=0.00..22889.19 rows=339347 \n> width=29) (actual time=58.00..6775.55 rows=339347 loops=1)\n> Merge Cond: (\"outer\".gene_id = \"inner\".gene_id)\n> -> Index Scan using gene_pkey on gene \n> (cost=0.00..7739.91 rows=218085 width=21) (actual time=29.00..3416.01 \n> rows=218073\n> loops=1)\n> -> Index Scan using gene_id_puid_uni on \n> gene_occurrences_puid (cost=0.00..9525.57 rows=339347 width=8) (actual \n> time=28.69..1936.83 rows=339347 loops=1)\n> Total runtime: 43338.94 msec\n\nSeems like most of the time is going into the sort steps.\n\n> postgresql.conf:\n> shared_buffers: 121600\n> max_connections: 64\n> max_fsm_relations = 200\n> max_fsm_pages = 40000\n> effective_cache_size = 8000\n\nTry increasing sort_mem.\n\nAlso, I'd back off on shared_buffers if I were you. There's no evidence\nthat values above a few thousand buy anything.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 23 Jan 2003 10:26:19 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] optimizing query " }, { "msg_contents": "hi Stephan, hi Tom,\n\nsort_mem was at its default: 1024. I increased it, and the query takes \neven longer (~ 36 secs). I tried two different values: 4096 and 8192, \nthis last time I reduced the shared_buffers to 25600 (--> ~ 37 secs).\nAnother point is: after a vacuum, the cost would slightly increase.\n\nwould it help to cluster the index? but as I am using several indexes I \nfind it difficult to decide on which index to cluster.\n\n(I paste the output from vacuum full verbose analyze)\n\nThanks!\nChantal\n\n\n++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n\nINFO: --Relation public.disease_occurrences_puid--\nINFO: Pages 2079: Changed 0, reaped 0, Empty 0, New 0; Tup 471915: Vac \n0, Keep/VTL 0/0, UnUsed 0, MinLen 32, MaxLen 32; Re-using: Free/Avail. \nSpace 648/648; EndEmpty/Avail. 
Pages 0/1.\n CPU 0.02s/0.05u sec elapsed 0.07 sec.\nINFO: Index disease_occpd_puid_i: Pages 1036; Tuples 471915.\n CPU 0.00s/0.03u sec elapsed 0.03 sec.\nINFO: Index disease_id_puid_uni: Pages 1297; Tuples 471915.\n CPU 0.03s/0.05u sec elapsed 0.23 sec.\nINFO: Rel disease_occurrences_puid: Pages: 2079 --> 2079; Tuple(s) \nmoved: 0.\n CPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: Analyzing public.disease_occurrences_puid\n\nINFO: --Relation public.gene_occurrences_puid--\nINFO: Pages 1495: Changed 0, reaped 0, Empty 0, New 0; Tup 339347: Vac \n0, Keep/VTL 0/0, UnUsed 0, MinLen 32, MaxLen 32; Re-using: Free/Avail. \nSpace 648/648; EndEmpty/Avail. Pages 0/1.\n CPU 0.01s/0.04u sec elapsed 0.05 sec.\nINFO: Index gene_occpd_puid_i: Pages 746; Tuples 339347.\n CPU 0.01s/0.02u sec elapsed 0.03 sec.\nINFO: Index gene_id_puid_uni: Pages 934; Tuples 339347.\n CPU 0.00s/0.02u sec elapsed 0.02 sec.\nINFO: Rel gene_occurrences_puid: Pages: 1495 --> 1495; Tuple(s) moved: 0.\n CPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: Analyzing public.gene_occurrences_puid\n\nINFO: --Relation public.disease--\nINFO: Pages 1522: Changed 0, reaped 0, Empty 0, New 0; Tup 164597: Vac \n0, Keep/VTL 0/0, UnUsed 0, MinLen 44, MaxLen 232; Re-using: Free/Avail. \nSpace 56920/38388; EndEmpty/Avail. Pages 0/603.\n CPU 0.00s/0.04u sec elapsed 0.04 sec.\nINFO: Index disease_name_i: Pages 1076; Tuples 164597.\n CPU 0.05s/0.02u sec elapsed 0.18 sec.\nINFO: Index disease_pkey: Pages 364; Tuples 164597.\n CPU 0.00s/0.00u sec elapsed 0.03 sec.\nINFO: Index disease_uni: Pages 1168; Tuples 164597.\n CPU 0.08s/0.04u sec elapsed 0.22 sec.\nINFO: Rel disease: Pages: 1522 --> 1521; Tuple(s) moved: 75.\n CPU 0.00s/0.03u sec elapsed 0.04 sec.\nINFO: Index disease_name_i: Pages 1077; Tuples 164597: Deleted 75.\n CPU 0.00s/0.03u sec elapsed 0.03 sec.\nINFO: Index disease_pkey: Pages 364; Tuples 164597: Deleted 75.\n CPU 0.01s/0.02u sec elapsed 0.02 sec.\nINFO: Index disease_uni: Pages 1168; Tuples 164597: Deleted 75.\n CPU 0.00s/0.03u sec elapsed 0.03 sec.\nINFO: --Relation pg_toast.pg_toast_7114632--\nINFO: Pages 0: Changed 0, reaped 0, Empty 0, New 0; Tup 0: Vac 0, \nKeep/VTL 0/0, UnUsed 0, MinLen 0, MaxLen 0; Re-using: Free/Avail. Space \n0/0; EndEmpty/Avail. Pages 0/0.\n CPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: Index pg_toast_7114632_index: Pages 1; Tuples 0.\n CPU 0.00s/0.00u sec elapsed 0.01 sec.\nINFO: Analyzing public.disease\n\nINFO: --Relation public.gene--\nINFO: Pages 1566: Changed 0, reaped 0, Empty 0, New 0; Tup 218085: Vac \n0, Keep/VTL 0/0, UnUsed 0, MinLen 44, MaxLen 348; Re-using: Free/Avail. \nSpace 48692/25408; EndEmpty/Avail. 
Pages 0/365.\n CPU 0.01s/0.04u sec elapsed 0.04 sec.\nINFO: Index gene_pkey: Pages 481; Tuples 218085.\n CPU 0.00s/0.01u sec elapsed 0.01 sec.\nINFO: Index gene_uni: Pages 1038; Tuples 218085.\n CPU 0.04s/0.01u sec elapsed 0.19 sec.\nINFO: Index gene_name_uni: Pages 917; Tuples 218085.\n CPU 0.06s/0.00u sec elapsed 0.15 sec.\nINFO: Rel gene: Pages: 1566 --> 1564; Tuple(s) moved: 230.\n CPU 0.01s/0.06u sec elapsed 0.11 sec.\nINFO: Index gene_pkey: Pages 482; Tuples 218085: Deleted 230.\n CPU 0.00s/0.03u sec elapsed 0.02 sec.\nINFO: Index gene_uni: Pages 1041; Tuples 218085: Deleted 230.\n CPU 0.00s/0.04u sec elapsed 0.03 sec.\nINFO: Index gene_name_uni: Pages 918; Tuples 218085: Deleted 230.\n CPU 0.00s/0.04u sec elapsed 0.03 sec.\nINFO: --Relation pg_toast.pg_toast_7114653--\nINFO: Pages 0: Changed 0, reaped 0, Empty 0, New 0; Tup 0: Vac 0, \nKeep/VTL 0/0, UnUsed 0, MinLen 0, MaxLen 0; Re-using: Free/Avail. Space \n0/0; EndEmpty/Avail. Pages 0/0.\n CPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: Index pg_toast_7114653_index: Pages 1; Tuples 0.\n CPU 0.00s/0.00u sec elapsed 0.01 sec.\nINFO: Analyzing public.gene\n\n++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n\n", "msg_date": "Thu, 23 Jan 2003 16:52:51 +0100", "msg_from": "Chantal Ackermann <chantal.ackermann@biomax.de>", "msg_from_op": true, "msg_subject": "Re: [PERFORM] optimizing query" }, { "msg_contents": "Tom Lane wrote:\n> > postgresql.conf:\n> > shared_buffers: 121600\n> > max_connections: 64\n> > max_fsm_relations = 200\n> > max_fsm_pages = 40000\n> > effective_cache_size = 8000\n> \n> Try increasing sort_mem.\n> \n> Also, I'd back off on shared_buffers if I were you. There's no evidence\n> that values above a few thousand buy anything.\n\nIncreasing shared_buffers above several thousand will only be a win if\nyour entire working set will fit in the larger buffer pool, but didn't\nin the previous size. If you working set is smaller or larger than\nthat, pushing it above several thousand isn't a win. Is that a more\ndefinitive answer?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Sun, 26 Jan 2003 19:54:25 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] optimizing query" } ]
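For reference, the explicit-JOIN form Stephan alludes to in his first reply (joining disease to disease_occurrences_puid before bringing in the other two tables, an order PostgreSQL of this vintage treats as fixed) would look roughly as follows for the original four-table query at the start of this thread. Whether it actually beats the planner's own choice has to be checked with EXPLAIN ANALYZE; this is only a sketch.

SELECT DISTINCT gene.gene_name, gene_occurrences_puid.puid
  FROM disease
  JOIN disease_occurrences_puid
       ON disease.disease_id = disease_occurrences_puid.disease_id
  JOIN gene_occurrences_puid
       ON disease_occurrences_puid.puid = gene_occurrences_puid.puid
  JOIN gene
       ON gene.gene_id = gene_occurrences_puid.gene_id;

sort_mem can also be raised for a single session rather than in postgresql.conf, for example SET sort_mem = 65536; (the value is in kilobytes, so that is 64 MB). Given that the inner sort is handling roughly 800,000 rows, values well above the 8 MB already tried may be needed before the sort stops spilling; that is conjecture to be verified, not a measured result.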
[ { "msg_contents": "I have a table that contains over 13 million rows. This query takes an\nextremely long time to return. I've vacuum full, analyzed, and re-indexed\nthe table. Still the results are the same. Any ideas?\nTIA\nPatrick\n\nmdc_oz=# explain analyze select wizard from search_log where wizard\n='Keyword' and sdate between '2002-12-01' and '2003-01-15';\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------\n Seq Scan on search_log (cost=0.00..609015.34 rows=3305729 width=10)\n(actual time=99833.83..162951.25 rows=3280573 loops=1)\n Filter: ((wizard = 'Keyword'::character varying) AND (sdate >\n= '2002-12-01'::date) AND (sdate <= '2003-01-15'::date))\n Total runtime: 174713.25 msec\n(3 rows)\n\nMy box I'm running PG on:\nDual 500 Mac OS X\n1g ram\nPg 7.3.0\n\nConf settings\nmax_connections = 200\nshared_buffers = 15200\n#max_fsm_relations = 100 # min 10, fsm is free space map, ~40 bytes\n#max_fsm_pages = 10000 # min 1000, fsm is free space map, ~6 bytes\n#max_locks_per_transaction = 64 # min 10\n#wal_buffers = 8 # min 4, typically 8KB each\n\n\n\n\nCREATE TABLE public.search_log (\n wizard varchar(50) NOT NULL,\n sub_wizard varchar(50),\n timestamp varchar(75),\n department int4,\n gender varchar(25),\n occasion varchar(50),\n age varchar(25),\n product_type varchar(2000),\n price_range varchar(1000),\n brand varchar(2000),\n keyword varchar(1000),\n result_count int4,\n html_count int4,\n fragrance_type varchar(50),\n frag_type varchar(50),\n frag_gender char(1),\n trip_length varchar(25),\n carry_on varchar(25),\n suiter varchar(25),\n expandable varchar(25),\n wheels varchar(25),\n style varchar(1000),\n heel_type varchar(25),\n option varchar(50),\n metal varchar(255),\n gem varchar(255),\n bra_size varchar(25),\n feature1 varchar(50),\n feature2 varchar(50),\n feature3 varchar(50),\n sdate date,\n stimestamp timestamptz,\n file_name text\n) WITH OIDS;\n\nCREATE INDEX date_idx ON search_log USING btree (sdate);\nCREATE INDEX slog_wizard_idx ON search_log USING btree (wizard);\n\n", "msg_date": "Wed, 22 Jan 2003 10:26:17 -0800", "msg_from": "\"Patrick Hatcher\" <PHatcher@macys.com>", "msg_from_op": true, "msg_subject": "Slow query on OS X box" }, { "msg_contents": "On Wed, 2003-01-22 at 13:26, Patrick Hatcher wrote:\n> I have a table that contains over 13 million rows. This query takes an\n> extremely long time to return. I've vacuum full, analyzed, and re-indexed\n> the table. Still the results are the same. Any ideas?\n\nYeah, you're pulling out 3.2 million rows from (possibly) a wide table\nbytewise. Do all of those fields actually have data? Thats always\ngoing to take a while -- and I find it hard to believe you're actually\ndoing something with all of those rows that runs regularly.\n\nIf every one of those rows was maxed out (ignoring the text field at the\nend) you could have ~ 15GB of data to pull out. Without knowing the\ntype of data actually in the table, I'm going to bet it's a harddrive\nlimitation.\n\nThe index on 'wizard' is next to useless as at least 1/4 of the data in\nthe table is under the same key. You might try a partial index on\n'wizard' (skip the value 'Keyword'). It won't help this query, but\nit'll help ones looking for values other than 'Keyword'.\n\nAnyway, you might try a CURSOR. Fetch rows out 5000 at a time, do some\nwork with them, then grab some more. 
This -- more or less -- will allow\nyou to process the rows received while awaiting the remaining lines to\nbe processed by the database. Depending on what you're doing with them\nit'll give a chance for the diskdrive to catch up. If the kernels smart\nit'll read ahead of the scan. This doesn't remove read time, but hides\nit while you're transferring the data out (from the db to your client)\nor processing it.\n\n> mdc_oz=# explain analyze select wizard from search_log where wizard\n> ='Keyword' and sdate between '2002-12-01' and '2003-01-15';\n> QUERY PLAN\n> -----------------------------------------------------------------------------------------------------------------------------\n> Seq Scan on search_log (cost=0.00..609015.34 rows=3305729 width=10)\n> (actual time=99833.83..162951.25 rows=3280573 loops=1)\n> Filter: ((wizard = 'Keyword'::character varying) AND (sdate >\n> = '2002-12-01'::date) AND (sdate <= '2003-01-15'::date))\n> Total runtime: 174713.25 msec\n> (3 rows)\n-- \nRod Taylor <rbt@rbt.ca>\n\nPGP Key: http://www.rbt.ca/rbtpub.asc", "msg_date": "22 Jan 2003 15:02:05 -0500", "msg_from": "Rod Taylor <rbt@rbt.ca>", "msg_from_op": false, "msg_subject": "Re: Slow query on OS X box" }, { "msg_contents": "What about creating a multi-segment index on wizard/sdate?\n\nOn a side note: that record is ~8KB long, which is kinda big. You could\nsplit those column into a seperate table (or tables), so that when you\nwant to query, say, gender, department & trip_length, you won't have to\nread in *so*much* extra data, slowing the query down.\n\nAlso, these column sizes seem kind excessive, and allow for bad data to \nseep in to the table:\n timestamp varchar(75),\n age varchar(25),\n metal varchar(255),\n gem varchar(255),\n bra_size varchar(25),\n\nOn Wed, 2003-01-22 at 12:26, Patrick Hatcher wrote:\n> I have a table that contains over 13 million rows. This query takes an\n> extremely long time to return. I've vacuum full, analyzed, and re-indexed\n> the table. Still the results are the same. 
Any ideas?\n> TIA\n> Patrick\n> \n> mdc_oz=# explain analyze select wizard from search_log where wizard\n> ='Keyword' and sdate between '2002-12-01' and '2003-01-15';\n> QUERY PLAN\n> -----------------------------------------------------------------------------\n> Seq Scan on search_log (cost=0.00..609015.34 rows=3305729 width=10)\n> (actual time=99833.83..162951.25 rows=3280573 loops=1)\n> Filter: ((wizard = 'Keyword'::character varying) AND (sdate >\n> = '2002-12-01'::date) AND (sdate <= '2003-01-15'::date))\n> Total runtime: 174713.25 msec\n> (3 rows)\n> \n> My box I'm running PG on:\n> Dual 500 Mac OS X\n> 1g ram\n> Pg 7.3.0\n> \n> Conf settings\n> max_connections = 200\n> shared_buffers = 15200\n> #max_fsm_relations = 100 # min 10, fsm is free space map, ~40 bytes\n> #max_fsm_pages = 10000 # min 1000, fsm is free space map, ~6 bytes\n> #max_locks_per_transaction = 64 # min 10\n> #wal_buffers = 8 # min 4, typically 8KB each\n> \n> \n> \n> \n> CREATE TABLE public.search_log (\n> wizard varchar(50) NOT NULL,\n> sub_wizard varchar(50),\n> timestamp varchar(75),\n> department int4,\n> gender varchar(25),\n> occasion varchar(50),\n> age varchar(25),\n> product_type varchar(2000),\n> price_range varchar(1000),\n> brand varchar(2000),\n> keyword varchar(1000),\n> result_count int4,\n> html_count int4,\n> fragrance_type varchar(50),\n> frag_type varchar(50),\n> frag_gender char(1),\n> trip_length varchar(25),\n> carry_on varchar(25),\n> suiter varchar(25),\n> expandable varchar(25),\n> wheels varchar(25),\n> style varchar(1000),\n> heel_type varchar(25),\n> option varchar(50),\n> metal varchar(255),\n> gem varchar(255),\n> bra_size varchar(25),\n> feature1 varchar(50),\n> feature2 varchar(50),\n> feature3 varchar(50),\n> sdate date,\n> stimestamp timestamptz,\n> file_name text\n> ) WITH OIDS;\n> \n> CREATE INDEX date_idx ON search_log USING btree (sdate);\n> CREATE INDEX slog_wizard_idx ON search_log USING btree (wizard);\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n-- \n+---------------------------------------------------------------+\n| Ron Johnson, Jr. mailto:ron.l.johnson@cox.net |\n| Jefferson, LA USA http://members.cox.net/ron.l.johnson |\n| |\n| \"My advice to you is to get married: If you find a good wife, |\n| you will be happy; if not, you will become a philosopher.\" |\n| Socrates |\n+---------------------------------------------------------------+\n\n", "msg_date": "22 Jan 2003 14:35:22 -0600", "msg_from": "Ron Johnson <ron.l.johnson@cox.net>", "msg_from_op": false, "msg_subject": "Re: Slow query on OS X box" }, { "msg_contents": "Patrick Hatcher wrote:\n\n>I have a table that contains over 13 million rows. This query takes an\n>extremely long time to return. I've vacuum full, analyzed, and re-indexed\n>the table. Still the results are the same. 
Any ideas?\n>TIA\n>Patrick\n>\n>mdc_oz=# explain analyze select wizard from search_log where wizard\n>='Keyword' and sdate between '2002-12-01' and '2003-01-15';\n> QUERY PLAN\n>-----------------------------------------------------------------------------------------------------------------------------\n> Seq Scan on search_log (cost=0.00..609015.34 rows=3305729 width=10)\n>(actual time=99833.83..162951.25 rows=3280573 loops=1)\n> Filter: ((wizard = 'Keyword'::character varying) AND (sdate >\n>= '2002-12-01'::date) AND (sdate <= '2003-01-15'::date))\n> Total runtime: 174713.25 msec\n>(3 rows)\n>\n>My box I'm running PG on:\n>Dual 500 Mac OS X\n>1g ram\n>Pg 7.3.0\n>\n>Conf settings\n>max_connections = 200\n>shared_buffers = 15200\n>#max_fsm_relations = 100 # min 10, fsm is free space map, ~40 bytes\n>#max_fsm_pages = 10000 # min 1000, fsm is free space map, ~6 bytes\n>#max_locks_per_transaction = 64 # min 10\n>#wal_buffers = 8 # min 4, typically 8KB each\n>\n>\n>\n>\n>CREATE TABLE public.search_log (\n> wizard varchar(50) NOT NULL,\n> sub_wizard varchar(50),\n> timestamp varchar(75),\n> department int4,\n> gender varchar(25),\n> occasion varchar(50),\n> age varchar(25),\n> product_type varchar(2000),\n> price_range varchar(1000),\n> brand varchar(2000),\n> keyword varchar(1000),\n> result_count int4,\n> html_count int4,\n> fragrance_type varchar(50),\n> frag_type varchar(50),\n> frag_gender char(1),\n> trip_length varchar(25),\n> carry_on varchar(25),\n> suiter varchar(25),\n> expandable varchar(25),\n> wheels varchar(25),\n> style varchar(1000),\n> heel_type varchar(25),\n> option varchar(50),\n> metal varchar(255),\n> gem varchar(255),\n> bra_size varchar(25),\n> feature1 varchar(50),\n> feature2 varchar(50),\n> feature3 varchar(50),\n> sdate date,\n> stimestamp timestamptz,\n> file_name text\n>) WITH OIDS;\n>\n>CREATE INDEX date_idx ON search_log USING btree (sdate);\n>CREATE INDEX slog_wizard_idx ON search_log USING btree (wizard);\n\nDid you try to change theses 2 indexes into 1?\nCREATE INDEX date_wizard_idx on search_log USING btree(wizard,sdate)\n\nHow selective are these fields:\n - if you ask about \n wizard=\"Keyword\", \n the answer is 0.1% or 5% or 50% of rows?\n - if you ask about \n sdate >= '2002-12-01'::date) AND (sdate <= '2003-01-15'::date)\n what is the answer?\n\nConsider creating table \"wizards\", and changing field \"wizard\" in table \"search_log\"\ninto integer field \"wizardid\". Searching by integer is faster than by varchar.\n\nRegards,\nTomasz Myrta\n\n", "msg_date": "Wed, 22 Jan 2003 21:58:54 +0100", "msg_from": "Tomasz Myrta <jasiek@klaster.net>", "msg_from_op": false, "msg_subject": "Re: Slow query on OS X box" }, { "msg_contents": "\"Patrick Hatcher\" <PHatcher@macys.com> writes:\n> I have a table that contains over 13 million rows. This query takes an\n> extremely long time to return. I've vacuum full, analyzed, and re-indexed\n> the table. Still the results are the same. 
Any ideas?\n\n> mdc_oz=# explain analyze select wizard from search_log where wizard\n> ='Keyword' and sdate between '2002-12-01' and '2003-01-15';\n> QUERY PLAN\n> -----------------------------------------------------------------------------------------------------------------------------\n> Seq Scan on search_log (cost=0.00..609015.34 rows=3305729 width=10)\n> (actual time=99833.83..162951.25 rows=3280573 loops=1)\n> Filter: ((wizard = 'Keyword'::character varying) AND (sdate >\n> = '2002-12-01'::date) AND (sdate <= '2003-01-15'::date))\n> Total runtime: 174713.25 msec\n> (3 rows)\n\nThis query is selecting 3280573 rows out of your 13 million. I'd say\nthe machine is doing the best it can. Returning 19000 rows per second\nis not all that shabby.\n\nPerhaps you should rethink what you're doing. Do you actually need to\nreturn 3 million rows to the client?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 22 Jan 2003 17:03:55 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Slow query on OS X box " } ]
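The index suggestions in this thread translate into statements like the ones below; whether they pay off depends on how selective the WHERE clause really is (as Tom notes, this particular query returns about a quarter of the table, so no index will rescue it). The cursor sketch follows Rod's batching idea; the cursor and index names are made up.

-- Combined two-column index covering both restriction columns:
CREATE INDEX slog_wizard_sdate_idx ON search_log (wizard, sdate);

-- Partial index that leaves out the very common 'Keyword' value:
CREATE INDEX slog_wizard_notkw_idx ON search_log (wizard)
    WHERE wizard <> 'Keyword';

-- Pulling the 3.2 million matching rows out in batches of 5000:
BEGIN;
DECLARE slog_cur CURSOR FOR
    SELECT wizard
      FROM search_log
     WHERE wizard = 'Keyword'
       AND sdate BETWEEN '2002-12-01' AND '2003-01-15';
FETCH 5000 FROM slog_cur;   -- repeat until no more rows come back
CLOSE slog_cur;
COMMIT;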
[ { "msg_contents": "Sorry I'm being really dense today. I didn't even notice the 3.2 million\nrow being returned. :(\n\nTo answer your question, no, all fields would not have data. The data we\nreceive is from a Web log file. It's parsed and then uploaded to this\ntable.\n\nI guess the bigger issue is that when trying to do aggregates, grouping by\nthe wizard field, it takes just as long.\n\nEx:\nmdc_oz=# explain analyze select wizard,count(wizard) from search_log where\nsdate\n between '2002-12-01' and '2003-01-15' group by wizard;\n QUERY\nPLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=1083300.43..1112411.55 rows=388148 width=10) (actual\ntime=229503.85..302617.75 rows=14 loops=1)\n -> Group (cost=1083300.43..1102707.84 rows=3881482 width=10) (actual\ntime=229503.60..286014.83 rows=3717161 loops=1)\n -> Sort (cost=1083300.43..1093004.14 rows=3881482 width=10)\n(actual time=229503.57..248415.81 rows=3717161 loops=1)\n Sort Key: wizard\n -> Seq Scan on search_log (cost=0.00..575217.57\nrows=3881482 width=10) (actual time=91235.76..157559.58 rows=3717161\nloops=1)\n Filter: ((sdate >= '2002-12-01'::date) AND (sdate\n<= '2003-01-15'::date))\n Total runtime: 302712.48 msec\n(7 rows)\n\nThanks again for the help\nPatrick Hatcher\n\n\n\n\n \n Rod Taylor \n <rbt@rbt.ca> To: Patrick Hatcher <PHatcher@macys.com> \n cc: Postgresql Performance <pgsql-performance@postgresql.org> \n 01/22/2003 Subject: Re: [PERFORM] Slow query on OS X box \n 12:02 PM \n \n \n\n\n\n\nOn Wed, 2003-01-22 at 13:26, Patrick Hatcher wrote:\n> I have a table that contains over 13 million rows. This query takes an\n> extremely long time to return. I've vacuum full, analyzed, and\nre-indexed\n> the table. Still the results are the same. Any ideas?\n\nYeah, you're pulling out 3.2 million rows from (possibly) a wide table\nbytewise. Do all of those fields actually have data? Thats always\ngoing to take a while -- and I find it hard to believe you're actually\ndoing something with all of those rows that runs regularly.\n\nIf every one of those rows was maxed out (ignoring the text field at the\nend) you could have ~ 15GB of data to pull out. Without knowing the\ntype of data actually in the table, I'm going to bet it's a harddrive\nlimitation.\n\nThe index on 'wizard' is next to useless as at least 1/4 of the data in\nthe table is under the same key. You might try a partial index on\n'wizard' (skip the value 'Keyword'). It won't help this query, but\nit'll help ones looking for values other than 'Keyword'.\n\nAnyway, you might try a CURSOR. Fetch rows out 5000 at a time, do some\nwork with them, then grab some more. This -- more or less -- will allow\nyou to process the rows received while awaiting the remaining lines to\nbe processed by the database. Depending on what you're doing with them\nit'll give a chance for the diskdrive to catch up. If the kernels smart\nit'll read ahead of the scan. 
This doesn't remove read time, but hides\nit while you're transferring the data out (from the db to your client)\nor processing it.\n\n> mdc_oz=# explain analyze select wizard from search_log where wizard\n> ='Keyword' and sdate between '2002-12-01' and '2003-01-15';\n> QUERY PLAN\n>\n-----------------------------------------------------------------------------------------------------------------------------\n\n> Seq Scan on search_log (cost=0.00..609015.34 rows=3305729 width=10)\n> (actual time=99833.83..162951.25 rows=3280573 loops=1)\n> Filter: ((wizard = 'Keyword'::character varying) AND (sdate >\n> = '2002-12-01'::date) AND (sdate <= '2003-01-15'::date))\n> Total runtime: 174713.25 msec\n> (3 rows)\n--\nRod Taylor <rbt@rbt.ca>\n\nPGP Key: http://www.rbt.ca/rbtpub.asc\n(See attached file: signature.asc)", "msg_date": "Wed, 22 Jan 2003 12:49:49 -0800", "msg_from": "\"Patrick Hatcher\" <PHatcher@macys.com>", "msg_from_op": true, "msg_subject": "Re: Slow query on OS X box" }, { "msg_contents": "Yup, since you still need to pull everything off the disk (the slowest\npart), which is quite a bit of data. You're simply dealing with a lot\nof data for a single query -- not much you can do.\n\nIs this a dedicated -- one client doing big selects like this?\n\nKnock your shared_buffers down to about 2000, bump your sort mem up to\naround 32MB (128MB or so if it's a dedicated box with a vast majority of\nqueries like the below).\n\n\nOkay, need to do something about the rest of the data. 13million * 2k is\na big number. Do you have a set of columns that are rarely used? If\nso, toss them into a separate table and link via a unique identifier\n(int4). It'll cost extra when you do hit them, but pulling out a few of\nthe large ones information wise would buy quite a bit.\n\nNow, wizard. For that particular query it would be best if entries were\nmade for all the values of wizard into a lookup table, and change\nsearch_log.wizard into a reference to that entry in the lookup. Index\nthe lookup table well (one in the wizard primary key -- int4, and a\nunique index on the 'wizard' varchar). Group by the number, join to the\nlookup table for the name.\n\nAny other values with highly repetitive data? Might want to consider\ndoing the same for them.\n\nIn search_log, index the numeric representation of 'wizard' (key from\nlookup table), but don't bother indexing numbers that occur regularly.\nLook up how to create a partial index. Ie. The value 'Keyword' could be\nskipped as it occurs once in four tuples -- too often for an index to be\nuseful.\n\n\nOn Wed, 2003-01-22 at 15:49, Patrick Hatcher wrote:\n> Sorry I'm being really dense today. I didn't even notice the 3.2 million\n> row being returned. :(\n> \n> To answer your question, no, all fields would not have data. The data we\n> receive is from a Web log file. 
It's parsed and then uploaded to this\n> table.\n> \n> I guess the bigger issue is that when trying to do aggregates, grouping by\n> the wizard field, it takes just as long.\n> \n> Ex:\n> mdc_oz=# explain analyze select wizard,count(wizard) from search_log where\n> sdate\n> between '2002-12-01' and '2003-01-15' group by wizard;\n> QUERY\n> PLAN\n> -----------------------------------------------------------------------------------------------------------------------------------------------\n> Aggregate (cost=1083300.43..1112411.55 rows=388148 width=10) (actual\n> time=229503.85..302617.75 rows=14 loops=1)\n> -> Group (cost=1083300.43..1102707.84 rows=3881482 width=10) (actual\n> time=229503.60..286014.83 rows=3717161 loops=1)\n> -> Sort (cost=1083300.43..1093004.14 rows=3881482 width=10)\n> (actual time=229503.57..248415.81 rows=3717161 loops=1)\n> Sort Key: wizard\n> -> Seq Scan on search_log (cost=0.00..575217.57\n> rows=3881482 width=10) (actual time=91235.76..157559.58 rows=3717161\n> loops=1)\n> Filter: ((sdate >= '2002-12-01'::date) AND (sdate\n> <= '2003-01-15'::date))\n> Total runtime: 302712.48 msec\n> (7 rows)\n\n> On Wed, 2003-01-22 at 13:26, Patrick Hatcher wrote:\n> > I have a table that contains over 13 million rows. This query takes an\n> > extremely long time to return. I've vacuum full, analyzed, and\n> re-indexed\n> > the table. Still the results are the same. Any ideas?\n\n> > mdc_oz=# explain analyze select wizard from search_log where wizard\n> > ='Keyword' and sdate between '2002-12-01' and '2003-01-15';\n> > QUERY PLAN\n> >\n> -----------------------------------------------------------------------------------------------------------------------------\n> \n> > Seq Scan on search_log (cost=0.00..609015.34 rows=3305729 width=10)\n> > (actual time=99833.83..162951.25 rows=3280573 loops=1)\n> > Filter: ((wizard = 'Keyword'::character varying) AND (sdate >\n> > = '2002-12-01'::date) AND (sdate <= '2003-01-15'::date))\n> > Total runtime: 174713.25 msec\n> > (3 rows)\n> --\n> Rod Taylor <rbt@rbt.ca>\n> \n> PGP Key: http://www.rbt.ca/rbtpub.asc\n> (See attached file: signature.asc)\n> \n> \n> ______________________________________________________________________\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n-- \nRod Taylor <rbt@rbt.ca>\n\nPGP Key: http://www.rbt.ca/rbtpub.asc", "msg_date": "22 Jan 2003 17:54:20 -0500", "msg_from": "Rod Taylor <rbt@rbt.ca>", "msg_from_op": false, "msg_subject": "Re: Slow query on OS X box" } ]
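Rod's lookup-table normalization could be set up along the lines below. The table name wizards, the serial key and the backfill steps are illustrative only; on a 13-million-row table the UPDATE is a heavy one-off operation, and a VACUUM afterwards would be in order.

CREATE TABLE wizards (
    wizard_id serial PRIMARY KEY,
    wizard    varchar(50) NOT NULL UNIQUE
);

INSERT INTO wizards (wizard)
    SELECT DISTINCT wizard FROM search_log;

ALTER TABLE search_log ADD COLUMN wizard_id integer;

UPDATE search_log
   SET wizard_id = w.wizard_id
  FROM wizards w
 WHERE w.wizard = search_log.wizard;

-- The aggregate then groups on the small integer and joins back for the name:
SELECT w.wizard, count(*)
  FROM search_log s
  JOIN wizards w ON w.wizard_id = s.wizard_id
 WHERE s.sdate BETWEEN '2002-12-01' AND '2003-01-15'
 GROUP BY w.wizard;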
[ { "msg_contents": "I have a database that makes fairly extensive use of table Inheritance.\n\nStructure is one parent table and 5 child tables as follows:\n\ntbl_objects (parent table)\n -> tbl_viewers\n -> tbl_documents\n -> tbl_icons\n -> tbl_massemails\n -> tbl_formats\n\nI have two questions:\n\nFirst, if I create an index on the parent table will queries to the\nchild tables use that index?\n\nSecondly, I tried to use explain to find out but I got very strange\nresults. It appears to read all the child tables even when you specify\nonly the parent table. In this case this appears to make the select do 6\nqueries instead of only 1. Obviously a huge performance hit. And none of\nthem uses the index though the table only has 420 rows at the moment so\nthat might be why its just doing a scan (though IMHO 'explain' should\nexplain that it isn't using the available index and why).\n\nI can't say that I'm reading these results properly but here they are:\n\n\"EXPLAIN select * from tbl_objects where id = 1;\"\n\nGives:\n\nNOTICE: QUERY PLAN:\n\nResult (cost=0.00..27.25 rows=6 width=138)\n -> Append (cost=0.00..27.25 rows=6 width=138)\n -> Seq Scan on tbl_objects (cost=0.00..12.24 rows=1 width=73)\n -> Seq Scan on tbl_viewers tbl_objects (cost=0.00..1.07 rows=1\nwidth=83)\n -> Seq Scan on tbl_documents tbl_objects (cost=0.00..11.56\nrows=1 width=78)\n -> Seq Scan on tbl_massemails tbl_objects (cost=0.00..0.00\nrows=1 width=138)\n -> Seq Scan on tbl_formats tbl_objects (cost=0.00..1.12 rows=1\nwidth=80)\n -> Seq Scan on tbl_icons tbl_objects (cost=0.00..1.25 rows=1\nwidth=89)\n\n\nCan anyone tell me if these results are making any sense and why\npostgres is doing 6 reads when I only need one?\n\nJohn Lange\n\n", "msg_date": "22 Jan 2003 15:44:42 -0600", "msg_from": "John Lange <lists@darkcore.net>", "msg_from_op": true, "msg_subject": "Query plan and Inheritance. Weird behavior" }, { "msg_contents": "\nOn 22 Jan 2003, John Lange wrote:\n\n> I have a database that makes fairly extensive use of table Inheritance.\n>\n> Structure is one parent table and 5 child tables as follows:\n>\n> tbl_objects (parent table)\n> -> tbl_viewers\n> -> tbl_documents\n> -> tbl_icons\n> -> tbl_massemails\n> -> tbl_formats\n>\n> I have two questions:\n>\n> First, if I create an index on the parent table will queries to the\n> child tables use that index?\n\nAFAIK no since indices aren't inherited.\n\n> Secondly, I tried to use explain to find out but I got very strange\n> results. It appears to read all the child tables even when you specify\n> only the parent table. In this case this appears to make the select do 6\n> queries instead of only 1. Obviously a huge performance hit. And none of\n> them uses the index though the table only has 420 rows at the moment so\n> that might be why its just doing a scan (though IMHO 'explain' should\n> explain that it isn't using the available index and why).\n\nIt seems reasonable to me since given the # of rows and the estimated\nrow width the table is probably only like 5 or 6 pages. Reading the index\nis unlikely to make life much better given an index read, seek in heap\nfile, read heap file page.\n\n> I can't say that I'm reading these results properly but here they are:\n>\n> \"EXPLAIN select * from tbl_objects where id = 1;\"\n\nThis gets any rows in tbl_objects that have id=1 and any rows in any\nsubtables that have id=1. 
Is that the intended effect?\n\n\n", "msg_date": "Wed, 22 Jan 2003 16:59:13 -0800 (PST)", "msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>", "msg_from_op": false, "msg_subject": "Re: Query plan and Inheritance. Weird behavior" }, { "msg_contents": "> This gets any rows in tbl_objects that have id=1 and any rows in any\n> subtables that have id=1. Is that the intended effect?\n\nIt is the intended result, but not the expected implementation.\n\nDoing more investigation I think I figured out why Postgres does what it\ndoes.\n\nCreating child tables by inheriting from another table doesn't really do\nwhat I consider to be 'true' inheritance, at least not in the way I\nexpected as a programmer.\n\nPostgres seems to create \"child\" tables by first fully duplicating the\nparent table and then adding the new columns to it. It then links the\ntables internally some how so that a query on a parent table also\nqueries the child tables.\n\nIHO this seems like inheritance by 'brute force' and a parent table that\nhas many children will cause a significant performance hit.\n\nWhen I say \"as a programmer\" what I mean is I had expected it to be done\nentirely the opposite way. In other words, child tables would simply be\nlinked internally to the parent table and a new table created which only\ncontains the new columns.\n\nIn this way the parent table would not need to know, nor would it care\nabout child tables in any way (just like inheritance in most programming\nlanguages). If done this way a select on a parent table would only\nrequire the retrieval of a single row and a select on a child table\nwould only require the retrieval of two rows (one in the child table and\none in the parent table).\n\nI don't pretend to know the intricacies of Postgres performance but this\nis the way I'm interpreting the data from the explains.\n\nAt this time, now that I (think I) understand how the inheritance is\nimplemented I'm considering abandoning it in Postgres and solving the\nissue entirely pragmatically.\n\nI hoping someone on the list will tell me where I'm going wrong here or\nwhat wrong assumptions I'm making.\n\nJohn Lange\n\nOn Wed, 2003-01-22 at 18:59, Stephan Szabo wrote:\n> \n> On 22 Jan 2003, John Lange wrote:\n> \n> > I have a database that makes fairly extensive use of table Inheritance.\n> >\n> > Structure is one parent table and 5 child tables as follows:\n> >\n> > tbl_objects (parent table)\n> > -> tbl_viewers\n> > -> tbl_documents\n> > -> tbl_icons\n> > -> tbl_massemails\n> > -> tbl_formats\n> >\n> > I have two questions:\n> >\n> > First, if I create an index on the parent table will queries to the\n> > child tables use that index?\n> \n> AFAIK no since indices aren't inherited.\n> \n> > Secondly, I tried to use explain to find out but I got very strange\n> > results. It appears to read all the child tables even when you specify\n> > only the parent table. In this case this appears to make the select do 6\n> > queries instead of only 1. Obviously a huge performance hit. And none of\n> > them uses the index though the table only has 420 rows at the moment so\n> > that might be why its just doing a scan (though IMHO 'explain' should\n> > explain that it isn't using the available index and why).\n> \n> It seems reasonable to me since given the # of rows and the estimated\n> row width the table is probably only like 5 or 6 pages. 
Reading the index\n> is unlikely to make life much better given an index read, seek in heap\n> file, read heap file page.\n> \n> > I can't say that I'm reading these results properly but here they are:\n> >\n> > \"EXPLAIN select * from tbl_objects where id = 1;\"\n> \n> This gets any rows in tbl_objects that have id=1 and any rows in any\n> subtables that have id=1. Is that the intended effect?\n> \n> \n\n", "msg_date": "22 Jan 2003 20:11:56 -0600", "msg_from": "John Lange <lists@darkcore.net>", "msg_from_op": true, "msg_subject": "Re: Query plan and Inheritance. Weird behavior" }, { "msg_contents": "\nOn 22 Jan 2003, John Lange wrote:\n\n> Creating child tables by inheriting from another table doesn't really do\n> what I consider to be 'true' inheritance, at least not in the way I\n> expected as a programmer.\n>\n> Postgres seems to create \"child\" tables by first fully duplicating the\n> parent table and then adding the new columns to it. It then links the\n> tables internally some how so that a query on a parent table also\n> queries the child tables.\n\nThat pretty much sums up my understanding of it.\n\n[snip]\n> In this way the parent table would not need to know, nor would it care\n> about child tables in any way (just like inheritance in most programming\n> languages). If done this way a select on a parent table would only\n> require the retrieval of a single row and a select on a child table\n> would only require the retrieval of two rows (one in the child table and\n> one in the parent table).\n\nAs opposed to needing one row from a select on a child table and\neffectively a union all when selecting from the parent. There are up and\ndown sides of both implementations, and I haven't played with it enough\nto speak meaningfully on it.\n\n> I don't pretend to know the intricacies of Postgres performance but this\n> is the way I'm interpreting the data from the explains.\n\nAs a side note, for a better understanding of timings, explain analyze is\nmuch better than plain explain which only gives the plan and estimates.\n\n", "msg_date": "Wed, 22 Jan 2003 18:45:18 -0800 (PST)", "msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>", "msg_from_op": false, "msg_subject": "Re: Query plan and Inheritance. Weird behavior" }, { "msg_contents": "> On 22 Jan 2003, John Lange wrote:\n>> In this way the parent table would not need to know, nor would it care\n>> about child tables in any way (just like inheritance in most programming\n>> languages). If done this way a select on a parent table would only\n>> require the retrieval of a single row and a select on a child table\n>> would only require the retrieval of two rows (one in the child table and\n>> one in the parent table).\n\nNo, it'd require the retrieval of N rows: you're failing to think about\nmultiple levels of inheritance or multi-parent inheritance, both of\nwhich are supported reasonably effectively by the current model.\nMy guess is that this scheme would crash and burn just on locking\nconsiderations. (When you want to update a child row, what locks do you\nhave to get in what order? 
With pieces of the row scattered through\nmany tables, it'd be pretty messy.)\n\nYou may care to look in the pghackers archives for prior discussions.\nThe variant scheme that's sounded most interesting to me so far is to\nstore *all* rows of an inheritance hierarchy in a single physical table.\nThis'd require giving up multiple inheritance, but few people seem to\nuse that, and the other benefits (like being able to enforce uniqueness\nconstraints over the whole hierarchy with just a standard unique index)\nseem worth it. No one's stepped up to bat to do the legwork on the idea\nyet, though. One bit that looks pretty tricky is ALTER TABLE ADD\nCOLUMN.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 23 Jan 2003 01:07:47 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Query plan and Inheritance. Weird behavior " }, { "msg_contents": "On Thu, 2003-01-23 at 00:07, Tom Lane wrote:\n> > On 22 Jan 2003, John Lange wrote:\n> >> In this way the parent table would not need to know, nor would it care\n> >> about child tables in any way (just like inheritance in most programming\n> >> languages). If done this way a select on a parent table would only\n> >> require the retrieval of a single row and a select on a child table\n> >> would only require the retrieval of two rows (one in the child table and\n> >> one in the parent table).\n> \n> No, it'd require the retrieval of N rows: you're failing to think about\n> multiple levels of inheritance or multi-parent inheritance, both of\n> which are supported reasonably effectively by the current model.\n\nLets not be too nit-picky here. In the case of multiple layers of\ninheritance you are still only selecting two rows (at a time), one\nchild, one parent. However if the parent also has a parent, then the\nprocess repeats, once for every layer.\n\nThis is entirely reasonable and efficient compared to the current model\nwhere a select on a parent table requires the same select to be executed\non EVERY child table. If it's a large expensive JOIN of some kind then\nthis is verging on un-workable. \n\n> My guess is that this scheme would crash and burn just on locking\n> considerations. (When you want to update a child row, what locks do you\n> have to get in what order? With pieces of the row scattered through\n> many tables, it'd be pretty messy.)\n\nYou lock the parent on down to the last child. I'm not a database\ndeveloper but that seems fairly straight forward?\n\nThe choice between the schema I've suggested and the way it is currently\nimplemented is a trade off between more efficient selects vs. more\nefficient updates. If you are selecting on the parent table more than\nupdating then my idea is vastly more efficient. If you INSERT a lot then\nthe current way is marginally better.\n\nWith apologies to the developers, I don't feel the current\nimplementation is really usable for the simple fact that expensive\noperations performed on the parent table causes them to be repeated for\nevery child table. And, as an added penalty, indexes on parent tables\nare NOT inherited to the children so the child operations can be even\nmore expensive.\n\nThis solution is not that large and I've already got 6 child tables. 
It\njust so happens that I do a LOT of selects on the parent so I'm going to\nhave to make a decision on where to go from here.\n\nSolving this programmatically is not really that hard but I've gone a\nways down this path now so I'm not anxious to redo the entire database\nschema since we do have customers already using this.\n\n> You may care to look in the pghackers archives for prior discussions.\n\nI will, thanks.\n\n> The variant scheme that's sounded most interesting to me so far is to\n> store *all* rows of an inheritance hierarchy in a single physical table.\n\nUnless I'm not understanding I don't think that works. In my case for\nexample, a single parent has 4-5 children so the only columns they have\nin common are the ones in the parent. Combining them all into a single\ntable means a big de-normalized table (loads of empty columns). If you\nare going to go this route then you might as well just do it. It doesn't\nneed to be implemented on the DBMS.\n\nRegards,\n\nJohn Lange\n\n> This'd require giving up multiple inheritance, but few people seem to\n> use that, and the other benefits (like being able to enforce uniqueness\n> constraints over the whole hierarchy with just a standard unique index)\n> seem worth it. No one's stepped up to bat to do the legwork on the idea\n> yet, though. One bit that looks pretty tricky is ALTER TABLE ADD\n> COLUMN.\n> \n> \t\t\tregards, tom lane\n\n", "msg_date": "23 Jan 2003 09:36:11 -0600", "msg_from": "John Lange <lists@darkcore.net>", "msg_from_op": true, "msg_subject": "Re: Query plan and Inheritance. Weird behavior" } ]
[ { "msg_contents": "There's been some recent discussion about the fact that Postgres treats\nexplicit JOIN syntax as constraining the actual join plan, cf\nhttp://www.ca.postgresql.org/users-lounge/docs/7.3/postgres/explicit-joins.html\n\nThis behavior was originally in there simply because of lack of time to\nconsider alternatives. I now realize that it wouldn't be hard to get\nthe planner to do better --- basically, preprocess_jointree just has to\nbe willing to fold JoinExpr-under-JoinExpr into a FromExpr when the\njoins are inner joins.\n\nBut in the meantime, some folks have found the present behavior to be\na feature rather than a bug, since it lets them control planning time\non many-table queries. If we are going to change it, I think we need\nsome way to accommodate both camps.\n\nWhat I've been toying with is inventing a GUC variable or two. I am\nthinking of defining a variable that sets the maximum size of a FromExpr\nthat preprocess_jointree is allowed to create by folding JoinExprs.\nIf this were set to 2, the behavior would be the same as before: no\ncollapsing of JoinExprs can occur. If it were set to a large number,\ninner JOIN syntax would never affect the planner at all. In practice\nit'd be smart to leave it at some value less than GEQO_THRESHOLD, so\nthat folding a large number of JOINs wouldn't leave you with a query\nthat takes a long time to plan or produces unpredictable plans.\n\nThere is already a need for a GUC variable to control the existing\nbehavior of preprocess_jointree: right now, it arbitrarily uses\nGEQO_THRESHOLD/2 as the limit for the size of a FromExpr that can be\nmade by collapsing FromExprs together. This ought to be a separately\nsettable parameter, I think.\n\nComments? In particular, can anyone think of pithy names for these\nvariables? The best I'd been able to come up with is MAX_JOIN_COLLAPSE\nand MAX_FROM_COLLAPSE, but neither of these exactly sing...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 22 Jan 2003 18:01:53 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Proposal: relaxing link between explicit JOINs and execution order" }, { "msg_contents": "> There's been some recent discussion about the fact that Postgres\n> treats explicit JOIN syntax as constraining the actual join plan, cf\n> http://www.ca.postgresql.org/users-lounge/docs/7.3/postgres/explicit-joins.html\n> \n> This behavior was originally in there simply because of lack of time\n> to consider alternatives. I now realize that it wouldn't be hard to\n> get the planner to do better --- basically, preprocess_jointree just\n> has to be willing to fold JoinExpr-under-JoinExpr into a FromExpr\n> when the joins are inner joins.\n> \n> But in the meantime, some folks have found the present behavior to be\n> a feature rather than a bug, since it lets them control planning time\n> on many-table queries. If we are going to change it, I think we need\n> some way to accommodate both camps.\n[snip]\n> Comments? In particular, can anyone think of pithy names for these\n> variables? The best I'd been able to come up with is\n> MAX_JOIN_COLLAPSE and MAX_FROM_COLLAPSE, but neither of these\n> exactly sing...\n\nHow about something that's runtime tunable via a SET/SHOW config var?\nThere are some queries that I have that I haven't spent any time\ntuning and would love to have the planner spend its CPU thinking about\nit instead of mine. 
Setting it to 2 by default, then on my tuned\nqueries, setting to something obscenely high so the planner won't muck\nwith what I know is fastest (or so I think at least).\n\nI know this is a can of worms, but what about piggy backing on an\nOracle notation and having an inline way of setting this inside of a\ncomment?\n\nSELECT /* +planner:collapse_tables=12 */ .... ?\n ^^^^^^^ ^^^^^^^^^^^^^^^ ^^^\n\t system variable value\n\n::shrug:: In brainstorm mode. Anyway, a few names:\n\nauto_order_join\nauto_order_join_max\nauto_reorder_table_limit\nauto_collapse_join\nauto_collapse_num_join\nauto_join_threshold\n\nWhen I'm thinking about what this variable will do for me as a DBA, I\nthink it will make the plan more intelligent by reordering the joins.\nMy $0.02. -sc\n\n-- \nSean Chittenden\n", "msg_date": "Wed, 22 Jan 2003 15:59:31 -0800", "msg_from": "Sean Chittenden <sean@chittenden.org>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Proposal: relaxing link between explicit JOINs and\n\texecution order" }, { "msg_contents": "Sean Chittenden <sean@chittenden.org> writes:\n> How about something that's runtime tunable via a SET/SHOW config var?\n\nEr, that's what I was talking about.\n\n> I know this is a can of worms, but what about piggy backing on an\n> Oracle notation and having an inline way of setting this inside of a\n> comment?\n\nI don't want to go there ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 22 Jan 2003 19:06:12 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: [PERFORM] Proposal: relaxing link between explicit JOINs and\n\texecution order" }, { "msg_contents": "Tom,\n\nI am very strongly in favor of this idea. I would personally prefer it if \nthe Join collapsing parmeter could be set at query time through a SET \nstatement, but will of course defer to the difficulty level in doing so.\n\n> Comments? In particular, can anyone think of pithy names for these\n> variables? The best I'd been able to come up with is MAX_JOIN_COLLAPSE\n> and MAX_FROM_COLLAPSE, but neither of these exactly sing...\n\nHow about:\nEXPLICIT_JOIN_MINIMUM\nand\nFROM_COLLAPSE_LIMIT\n\nJust to make the two params not sound so identical?\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Wed, 22 Jan 2003 16:17:41 -0800", "msg_from": "Josh Berkus <josh@agliodbs.com>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Proposal: relaxing link between explicit JOINs and\n\texecution order" }, { "msg_contents": "Josh Berkus <josh@agliodbs.com> writes:\n> I am very strongly in favor of this idea. I would personally prefer it if \n> the Join collapsing parmeter could be set at query time through a SET \n> statement, but will of course defer to the difficulty level in doing so.\n\nI guess I failed to make it clear that that's what I meant. GUC\nvariables are those things that you can set via SET, or in the\npostgresql.conf file, etc. These values would be just as manipulable\nas, say, ENABLE_SEQSCAN.\n\n> How about:\n> EXPLICIT_JOIN_MINIMUM\n> and\n> FROM_COLLAPSE_LIMIT\n\n> Just to make the two params not sound so identical?\n\nHmm. 
The two parameters would have closely related functions, so I'd\nsort of think that the names *should* be pretty similar.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 22 Jan 2003 19:21:24 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: [PERFORM] Proposal: relaxing link between explicit JOINs and\n\texecution order" }, { "msg_contents": "Josh Berkus <josh@agliodbs.com> writes:\n> How about:\n> EXPLICIT_JOIN_MINIMUM\n> and\n> FROM_COLLAPSE_LIMIT\n\nI've implemented this using FROM_COLLAPSE_LIMIT and JOIN_COLLAPSE_LIMIT\nas the variable names. It'd be easy enough to change if someone comes\nup with better names. You can read updated documentation at\nhttp://developer.postgresql.org/docs/postgres/explicit-joins.html\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 26 Jan 2003 00:27:51 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: [PERFORM] Proposal: relaxing link between explicit JOINs and\n\texecution order" } ]
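A short, hedged sketch of how the settings agreed on in the thread above are used in practice. The tables a, b and c are placeholders, and the exact defaults and threshold semantics in released servers may differ slightly from the proposal in the first message; on current servers, setting join_collapse_limit to 1 makes the planner honour the written JOIN order, while larger values let inner JOINs be folded back into the FROM list.

    -- Keep the written join order (the old "explicit JOIN constrains the plan" behaviour):
    SET join_collapse_limit = 1;
    EXPLAIN SELECT *
      FROM a JOIN b ON b.a_id = a.id
             JOIN c ON c.b_id = b.id;

    -- Let the planner reorder the inner joins, within the collapse limits:
    SET join_collapse_limit = 8;
    SET from_collapse_limit = 8;
    EXPLAIN SELECT *
      FROM a JOIN b ON b.a_id = a.id
             JOIN c ON c.b_id = b.id;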
[ { "msg_contents": "> -----Original Message-----\n> From: Tom Lane [mailto:tgl@sss.pgh.pa.us] \n> Sent: Wednesday, January 22, 2003 3:15 PM\n> To: Steve Crawford\n> Cc: pgsql-performance@postgreSQL.org; pgsql-hackers@postgreSQL.org\n> Subject: Re: [HACKERS] Terrible performance on wide selects \n> \n> \n> Steve Crawford sent me some profiling results for queries \n> involving wide tuples (hundreds of columns).\n> \n> > Done, results attached. nocachegetattr seems to be the \n> likely suspect.\n> \n> Yipes, you can say that again.\n> \n> % cumulative self self total \n> time seconds seconds calls ms/call ms/call name \n> 93.38 26.81 26.81 885688 0.03 0.03 nocachegetattr\n> \n> 0.00 0.00 1/885688 heapgettup [159]\n> 0.00 0.00 1/885688 \n> CatalogCacheComputeTupleHashValue [248]\n> 0.00 0.00 5/885688 SearchCatCache [22]\n> 13.40 0.00 442840/885688 ExecEvalVar [20]\n> 13.40 0.00 442841/885688 printtup [12]\n> [11] 93.4 26.81 0.00 885688 nocachegetattr [11]\n> \n> \n> Half of the calls are coming from printtup(), which seems \n> relatively easy to fix.\n> \n> \t/*\n> \t * send the attributes of this tuple\n> \t */\n> \tfor (i = 0; i < natts; ++i)\n> \t{\n> \t\t...\n> \t\torigattr = heap_getattr(tuple, i + 1, typeinfo, \n> &isnull);\n> \t\t...\n> \t}\n> \n> The trouble here is that in the presence of variable-width \n> fields, heap_getattr requires a linear scan over the tuple \n> --- and so the total time spent in it is O(N^2) in the number \n> of fields.\n> \n> What we could do is reinstitute heap_deformtuple() as the inverse of\n> heap_formtuple() --- but make it extract Datums for all the \n> columns in a single pass over the tuple. This would reduce \n> the time in printtup() from O(N^2) to O(N), which would \n> pretty much wipe out that part of the problem.\n> \n> The other half of the calls are coming from ExecEvalVar, \n> which is a harder problem to solve, since those calls are \n> scattered all over the place. It's harder to see how to get \n> them to share work. Any ideas out there?\n\nIs it possible that the needed information could be retrieved by\nquerying the system metadata to collect the column information?\n\nOnce the required tuple attributes are described, it could form a\nbinding list that allocates a buffer of sufficient size with pointers to\nthe required column start points.\n\nMaybe I don't really understand the problem, but it seems simple enough\nto do it once for the whole query.\n\nIf this is utter stupidity, please disregard and have a hearty laugh at\nmy expense.\n;-)\n", "msg_date": "Wed, 22 Jan 2003 15:56:55 -0800", "msg_from": "\"Dann Corbit\" <DCorbit@connx.com>", "msg_from_op": true, "msg_subject": "Re: Terrible performance on wide selects " }, { "msg_contents": "\"Dann Corbit\" <DCorbit@connx.com> writes:\n> Maybe I don't really understand the problem, but it seems simple enough\n> to do it once for the whole query.\n\nWe already do cache column offsets when they are fixed. 
The code that's\nthe problem executes when there's a variable-width column in the table\n--- which means that all columns to its right are not at fixed offsets,\nand have to be scanned for separately in each tuple, AFAICS.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 22 Jan 2003 19:04:02 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Terrible performance on wide selects " }, { "msg_contents": "Tom Lane kirjutas N, 23.01.2003 kell 02:04:\n> \"Dann Corbit\" <DCorbit@connx.com> writes:\n> > Maybe I don't really understand the problem, but it seems simple enough\n> > to do it once for the whole query.\n> \n> We already do cache column offsets when they are fixed. The code that's\n> the problem executes when there's a variable-width column in the table\n> --- which means that all columns to its right are not at fixed offsets,\n> and have to be scanned for separately in each tuple, AFAICS.\n\nNot only varlen columns, but also NULL columns forbid knowing the\noffsets beforehand.\n\n-- \nHannu Krosing <hannu@tm.ee>\n", "msg_date": "23 Jan 2003 12:30:48 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: Terrible performance on wide selects" }, { "msg_contents": ">>>Hannu Krosing said:\n > Tom Lane kirjutas N, 23.01.2003 kell 02:04:\n > > We already do cache column offsets when they are fixed. The code that's\n > > the problem executes when there's a variable-width column in the table\n > > --- which means that all columns to its right are not at fixed offsets,\n > > and have to be scanned for separately in each tuple, AFAICS.\n > \n > Not only varlen columns, but also NULL columns forbid knowing the\n > offsets beforehand.\n\nDoes this mean, that constructing tables where fixed length fields are \n'before' variable lenght fields and 'possibly null' fields might increase \nperformance?\n\nDaniel\n\n", "msg_date": "Thu, 23 Jan 2003 12:44:04 +0200", "msg_from": "Daniel Kalchev <daniel@digsys.bg>", "msg_from_op": false, "msg_subject": "Re: Terrible performance on wide selects " }, { "msg_contents": "Daniel Kalchev <daniel@digsys.bg> writes:\n> Does this mean, that constructing tables where fixed length fields are \n> 'before' variable lenght fields and 'possibly null' fields might increase \n> performance?\n\nThere'd have to be no nulls, period, to get any useful performance\ndifference --- but yes, in theory putting fixed-length columns before\nvariable-length ones is a win.\n\nI wouldn't bother going out to rearrange your schemas though ... at\nleast not before you do some tests to prove that it's worthwhile.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 23 Jan 2003 09:50:02 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Terrible performance on wide selects " }, { "msg_contents": "On Thu, 23 Jan 2003, Daniel Kalchev wrote:\n\n> Does this mean, that constructing tables where fixed length fields are\n> 'before' variable lenght fields and 'possibly null' fields might increase\n> performance?\n\nThis, I believe, is why DB2 always puts (in physical storage) all of the\nfixed-length fields before the variable-length fields.\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n", "msg_date": "Fri, 24 Jan 2003 00:50:27 +0900 (JST)", "msg_from": "Curt Sampson <cjs@cynic.net>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Terrible performance on wide selects " } ]
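A rough illustration of the column-ordering point discussed above. Both table definitions are invented; per the caveat in the thread, a NULL or an earlier variable-width column defeats offset caching for everything to its right, and the gain is usually small enough that it is worth benchmarking before rearranging an existing schema.

    -- Fixed-width, NOT NULL columns first: their offsets can be cached.
    CREATE TABLE layout_fixed_first (
        id       integer   NOT NULL,
        flag     boolean   NOT NULL,
        created  timestamp NOT NULL,
        label    varchar(100),
        note     text
    );

    -- Variable-width column first: every column after "note" has to be
    -- located by walking the tuple in each row.
    CREATE TABLE layout_varlena_first (
        note     text,
        id       integer   NOT NULL,
        flag     boolean   NOT NULL,
        label    varchar(100),
        created  timestamp NOT NULL
    );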
[ { "msg_contents": "Thanks everyone. I'll give your suggestions a try and report back.\n\nPatrick Hatcher\nMacys.Com\nLegacy Integration Developer\n415-422-1610 office\nHatcherPT - AIM\n\n\n\n\n \n Rod Taylor <rbt@rbt.ca> \n Sent by: To: Patrick Hatcher <PHatcher@macys.com> \n pgsql-performance-owner@post cc: Postgresql Performance <pgsql-performance@postgresql.org> \n gresql.org Subject: Re: [PERFORM] Slow query on OS X box \n \n \n 01/22/2003 02:54 PM \n \n \n\n\n\n\nYup, since you still need to pull everything off the disk (the slowest\npart), which is quite a bit of data. You're simply dealing with a lot\nof data for a single query -- not much you can do.\n\nIs this a dedicated -- one client doing big selects like this?\n\nKnock your shared_buffers down to about 2000, bump your sort mem up to\naround 32MB (128MB or so if it's a dedicated box with a vast majority of\nqueries like the below).\n\n\nOkay, need to do something about the rest of the data. 13million * 2k is\na big number. Do you have a set of columns that are rarely used? If\nso, toss them into a separate table and link via a unique identifier\n(int4). It'll cost extra when you do hit them, but pulling out a few of\nthe large ones information wise would buy quite a bit.\n\nNow, wizard. For that particular query it would be best if entries were\nmade for all the values of wizard into a lookup table, and change\nsearch_log.wizard into a reference to that entry in the lookup. Index\nthe lookup table well (one in the wizard primary key -- int4, and a\nunique index on the 'wizard' varchar). Group by the number, join to the\nlookup table for the name.\n\nAny other values with highly repetitive data? Might want to consider\ndoing the same for them.\n\nIn search_log, index the numeric representation of 'wizard' (key from\nlookup table), but don't bother indexing numbers that occur regularly.\nLook up how to create a partial index. Ie. The value 'Keyword' could be\nskipped as it occurs once in four tuples -- too often for an index to be\nuseful.\n\n\nOn Wed, 2003-01-22 at 15:49, Patrick Hatcher wrote:\n> Sorry I'm being really dense today. I didn't even notice the 3.2 million\n> row being returned. :(\n>\n> To answer your question, no, all fields would not have data. The data we\n> receive is from a Web log file. 
It's parsed and then uploaded to this\n> table.\n>\n> I guess the bigger issue is that when trying to do aggregates, grouping\nby\n> the wizard field, it takes just as long.\n>\n> Ex:\n> mdc_oz=# explain analyze select wizard,count(wizard) from search_log\nwhere\n> sdate\n> between '2002-12-01' and '2003-01-15' group by wizard;\n> QUERY\n> PLAN\n>\n-----------------------------------------------------------------------------------------------------------------------------------------------\n\n> Aggregate (cost=1083300.43..1112411.55 rows=388148 width=10) (actual\n> time=229503.85..302617.75 rows=14 loops=1)\n> -> Group (cost=1083300.43..1102707.84 rows=3881482 width=10) (actual\n> time=229503.60..286014.83 rows=3717161 loops=1)\n> -> Sort (cost=1083300.43..1093004.14 rows=3881482 width=10)\n> (actual time=229503.57..248415.81 rows=3717161 loops=1)\n> Sort Key: wizard\n> -> Seq Scan on search_log (cost=0.00..575217.57\n> rows=3881482 width=10) (actual time=91235.76..157559.58 rows=3717161\n> loops=1)\n> Filter: ((sdate >= '2002-12-01'::date) AND (sdate\n> <= '2003-01-15'::date))\n> Total runtime: 302712.48 msec\n> (7 rows)\n\n> On Wed, 2003-01-22 at 13:26, Patrick Hatcher wrote:\n> > I have a table that contains over 13 million rows. This query takes an\n> > extremely long time to return. I've vacuum full, analyzed, and\n> re-indexed\n> > the table. Still the results are the same. Any ideas?\n\n> > mdc_oz=# explain analyze select wizard from search_log where wizard\n> > ='Keyword' and sdate between '2002-12-01' and '2003-01-15';\n> > QUERY PLAN\n> >\n>\n-----------------------------------------------------------------------------------------------------------------------------\n\n>\n> > Seq Scan on search_log (cost=0.00..609015.34 rows=3305729 width=10)\n> > (actual time=99833.83..162951.25 rows=3280573 loops=1)\n> > Filter: ((wizard = 'Keyword'::character varying) AND (sdate >\n> > = '2002-12-01'::date) AND (sdate <= '2003-01-15'::date))\n> > Total runtime: 174713.25 msec\n> > (3 rows)\n> --\n> Rod Taylor <rbt@rbt.ca>\n>\n> PGP Key: http://www.rbt.ca/rbtpub.asc\n> (See attached file: signature.asc)\n>\n>\n> ______________________________________________________________________\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n--\nRod Taylor <rbt@rbt.ca>\n\nPGP Key: http://www.rbt.ca/rbtpub.asc\n(See attached file: signature.asc)", "msg_date": "Wed, 22 Jan 2003 16:05:39 -0800", "msg_from": "\"Patrick Hatcher\" <PHatcher@macys.com>", "msg_from_op": true, "msg_subject": "Re: Slow query on OS X box" } ]
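A sketch of the two suggestions above, a lookup table for the repetitive wizard values plus a partial index, using made-up object names. The real definition of search_log is not shown in the thread, so treat this as illustrative rather than a drop-in change; 'Keyword' is assumed to map to wizard_id = 1 here.

    CREATE TABLE wizard_lookup (
        wizard_id   serial PRIMARY KEY,
        wizard_name varchar NOT NULL UNIQUE
    );

    -- search_log would then carry the small integer key instead of the string:
    -- ALTER TABLE search_log ADD COLUMN wizard_id integer
    --     REFERENCES wizard_lookup (wizard_id);

    -- Partial index that leaves out the very frequent value, so the index
    -- stays small and selective:
    CREATE INDEX idx_search_log_wizard ON search_log (wizard_id)
        WHERE wizard_id <> 1;

    -- The aggregate then groups on the integer and joins back for the name:
    SELECT w.wizard_name, count(*) AS hits
      FROM search_log s
      JOIN wizard_lookup w ON w.wizard_id = s.wizard_id
     WHERE s.sdate BETWEEN '2002-12-01' AND '2003-01-15'
     GROUP BY w.wizard_name;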
[ { "msg_contents": "> -----Original Message-----\n> From: Tom Lane [mailto:tgl@sss.pgh.pa.us] \n> Sent: Wednesday, January 22, 2003 4:04 PM\n> To: Dann Corbit\n> Cc: Steve Crawford; pgsql-performance@postgreSQL.org; \n> pgsql-hackers@postgreSQL.org\n> Subject: Re: [HACKERS] Terrible performance on wide selects \n> \n> \n> \"Dann Corbit\" <DCorbit@connx.com> writes:\n> > Maybe I don't really understand the problem, but it seems simple \n> > enough to do it once for the whole query.\n> \n> We already do cache column offsets when they are fixed. The \n> code that's the problem executes when there's a \n> variable-width column in the table\n> --- which means that all columns to its right are not at \n> fixed offsets, and have to be scanned for separately in each \n> tuple, AFAICS.\n\nWhy not waste a bit of memory and make the row buffer the maximum\npossible length?\nE.g. for varchar(2000) allocate 2000 characters + size element and point\nto the start of that thing.\n\nIf we have 64K rows, even at that it is a pittance. If someone designs\n10,000 row tables, then it will allocate an annoyingly large block of\nmemory, but bad designs are always going to cause a fuss.\n", "msg_date": "Wed, 22 Jan 2003 16:06:26 -0800", "msg_from": "\"Dann Corbit\" <DCorbit@connx.com>", "msg_from_op": true, "msg_subject": "Re: Terrible performance on wide selects " }, { "msg_contents": "\"Dann Corbit\" <DCorbit@connx.com> writes:\n> Why not waste a bit of memory and make the row buffer the maximum\n> possible length?\n> E.g. for varchar(2000) allocate 2000 characters + size element and point\n> to the start of that thing.\n\nSurely you're not proposing that we store data on disk that way.\n\nThe real issue here is avoiding overhead while extracting columns out of\na stored tuple. We could perhaps use a different, less space-efficient\nformat for temporary tuples in memory than we do on disk, but I don't\nthink that will help a lot. The nature of O(N^2) bottlenecks is you\nhave to kill them all --- for example, if we fix printtup and don't do\nanything with ExecEvalVar, we can't do more than double the speed of\nSteve's example, so it'll still be slow. So we must have a solution for\nthe case where we are disassembling a stored tuple, anyway.\n\nI have been sitting here toying with a related idea, which is to use the\nheap_deformtuple code I suggested before to form an array of pointers to\nDatums in a specific tuple (we could probably use the TupleTableSlot\nmechanisms to manage the memory for these). Then subsequent accesses to\nindividual columns would just need an array-index operation, not a\nnocachegetattr call. The trick with that would be that if only a few\ncolumns are needed out of a row, it might be a net loss to compute the\nDatum values for all columns. How could we avoid slowing that case down\nwhile making the wide-tuple case faster?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 22 Jan 2003 19:18:07 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Terrible performance on wide selects " }, { "msg_contents": "Tom Lane kirjutas N, 23.01.2003 kell 02:18:\n> \"Dann Corbit\" <DCorbit@connx.com> writes:\n> > Why not waste a bit of memory and make the row buffer the maximum\n> > possible length?\n> > E.g. 
for varchar(2000) allocate 2000 characters + size element and point\n> > to the start of that thing.\n> \n> Surely you're not proposing that we store data on disk that way.\n> \n> The real issue here is avoiding overhead while extracting columns out of\n> a stored tuple. We could perhaps use a different, less space-efficient\n> format for temporary tuples in memory than we do on disk, but I don't\n> think that will help a lot. The nature of O(N^2) bottlenecks is you\n> have to kill them all --- for example, if we fix printtup and don't do\n> anything with ExecEvalVar, we can't do more than double the speed of\n> Steve's example, so it'll still be slow. So we must have a solution for\n> the case where we are disassembling a stored tuple, anyway.\n> \n> I have been sitting here toying with a related idea, which is to use the\n> heap_deformtuple code I suggested before to form an array of pointers to\n> Datums in a specific tuple (we could probably use the TupleTableSlot\n> mechanisms to manage the memory for these). Then subsequent accesses to\n> individual columns would just need an array-index operation, not a\n> nocachegetattr call. The trick with that would be that if only a few\n> columns are needed out of a row, it might be a net loss to compute the\n> Datum values for all columns. How could we avoid slowing that case down\n> while making the wide-tuple case faster?\n\nmake the pointer array incrementally for O(N) performance:\n\ni.e. for tuple with 100 cols, allocate an array of 100 pointers, plus\nkeep count of how many are actually valid,\n\nso the first call to get col[5] will fill first 5 positions in the array\nsave said nr 5 and then access tuple[ptrarray[5]]\n\nnext call to get col[75] will start form col[5] and fill up to col[75]\n\nnext call to col[76] will start form col[75] and fill up to col[76]\n\nnext call to col[60] will just get tuple[ptrarray[60]]\n\nthe above description assumes 1-based non-C arrays ;)\n\n-- \nHannu Krosing <hannu@tm.ee>\n", "msg_date": "23 Jan 2003 12:11:08 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: Terrible performance on wide selects" }, { "msg_contents": "Hannu Krosing kirjutas N, 23.01.2003 kell 12:11:\n\n> make the pointer array incrementally for O(N) performance:\n> \n> i.e. for tuple with 100 cols, allocate an array of 100 pointers, plus\n> keep count of how many are actually valid,\n\nAdditionally, this should also make repeted determining of NULL fields\nfaster - just put a NULL-pointer in and voila - no more bit-shifting and\nAND-ing to find out if the field is null.\n\nOne has to watch the NULL bitmap on fist pass anyway.\n\n> so the first call to get col[5] will fill first 5 positions in the array\n> save said nr 5 and then access tuple[ptrarray[5]]\n> \n> next call to get col[75] will start form col[5] and fill up to col[75]\n> \n> next call to col[76] will start form col[75] and fill up to col[76]\n> \n> next call to col[60] will just get tuple[ptrarray[60]]\n> \n> the above description assumes 1-based non-C arrays ;)\n-- \nHannu Krosing <hannu@tm.ee>\n", "msg_date": "23 Jan 2003 12:41:48 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: Terrible performance on wide selects" }, { "msg_contents": "Hannu Krosing <hannu@tm.ee> writes:\n>> i.e. 
for tuple with 100 cols, allocate an array of 100 pointers, plus\n>> keep count of how many are actually valid,\n\n> Additionally, this should also make repeted determining of NULL fields\n> faster - just put a NULL-pointer in and voila - no more bit-shifting and\n> AND-ing to find out if the field is null.\n\nRight, the output of the operation would be a pair of arrays: Datum\nvalues and is-null flags. (NULL pointers don't work for pass-by-value\ndatatypes.)\n\nI like the idea of keeping track of a last-known-column position and\nincrementally extending that as needed.\n\nI think the way to manage this is to add the overhead data (the output\narrays and last-column state) to TupleTableSlots. Then we'd have\na routine similar to heap_getattr except that it takes a TupleTableSlot\nand makes use of the extra state data. The infrastructure to manage\nthe state data is already in place: for example, ExecStoreTuple would\nreset the last-known-column to 0, ExecSetSlotDescriptor would be\nresponsible for allocating the output arrays using the natts value from\nthe provided tupdesc, etc.\n\nThis wouldn't help for accesses that are not in the context of a slot,\nbut certainly all the ones from ExecEvalVar are. The executor always\nworks with tuples stored in slots, so I think we could fix all the\nhigh-traffic cases this way.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 23 Jan 2003 09:41:25 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Terrible performance on wide selects " }, { "msg_contents": "\nAdded to TODO:\n\n\t* Cache last known per-tuple offsets to speed long tuple access\n\n\n---------------------------------------------------------------------------\n\nTom Lane wrote:\n> Hannu Krosing <hannu@tm.ee> writes:\n> >> i.e. for tuple with 100 cols, allocate an array of 100 pointers, plus\n> >> keep count of how many are actually valid,\n> \n> > Additionally, this should also make repeted determining of NULL fields\n> > faster - just put a NULL-pointer in and voila - no more bit-shifting and\n> > AND-ing to find out if the field is null.\n> \n> Right, the output of the operation would be a pair of arrays: Datum\n> values and is-null flags. (NULL pointers don't work for pass-by-value\n> datatypes.)\n> \n> I like the idea of keeping track of a last-known-column position and\n> incrementally extending that as needed.\n> \n> I think the way to manage this is to add the overhead data (the output\n> arrays and last-column state) to TupleTableSlots. Then we'd have\n> a routine similar to heap_getattr except that it takes a TupleTableSlot\n> and makes use of the extra state data. The infrastructure to manage\n> the state data is already in place: for example, ExecStoreTuple would\n> reset the last-known-column to 0, ExecSetSlotDescriptor would be\n> responsible for allocating the output arrays using the natts value from\n> the provided tupdesc, etc.\n> \n> This wouldn't help for accesses that are not in the context of a slot,\n> but certainly all the ones from ExecEvalVar are. 
The executor always\n> works with tuples stored in slots, so I think we could fix all the\n> high-traffic cases this way.\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Fri, 14 Feb 2003 08:11:37 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Terrible performance on wide selects" } ]
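A small synthetic way to observe the effect this thread describes, that columns to the right of a variable-width column must be located by walking each tuple. The table below is invented, generate_series() assumes a reasonably recent server, and with only ten trailing columns the difference is modest; the original report involved hundreds of columns.

    CREATE TABLE wide_demo (
        k    integer,
        pad  text,   -- variable width: columns after this have no fixed offset
        c1 integer, c2 integer, c3 integer, c4 integer, c5 integer,
        c6 integer, c7 integer, c8 integer, c9 integer, c10 integer
    );

    INSERT INTO wide_demo
    SELECT g, repeat('x', 40), g, g, g, g, g, g, g, g, g, g
      FROM generate_series(1, 100000) AS g;

    -- Asking only for the leading fixed-width column is cheap; asking for a
    -- column to the right of "pad", or for every column, pays the per-tuple
    -- extraction cost discussed above.
    EXPLAIN ANALYZE SELECT k   FROM wide_demo;
    EXPLAIN ANALYZE SELECT c10 FROM wide_demo;
    EXPLAIN ANALYZE SELECT *   FROM wide_demo;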
[ { "msg_contents": "> -----Original Message-----\n> From: Tom Lane [mailto:tgl@sss.pgh.pa.us] \n> Sent: Wednesday, January 22, 2003 4:18 PM\n> To: Dann Corbit\n> Cc: Steve Crawford; pgsql-performance@postgreSQL.org; \n> pgsql-hackers@postgreSQL.org\n> Subject: Re: [HACKERS] Terrible performance on wide selects \n> \n> \n> \"Dann Corbit\" <DCorbit@connx.com> writes:\n> > Why not waste a bit of memory and make the row buffer the maximum \n> > possible length? E.g. for varchar(2000) allocate 2000 characters + \n> > size element and point to the start of that thing.\n> \n> Surely you're not proposing that we store data on disk that way.\n> \n> The real issue here is avoiding overhead while extracting \n> columns out of a stored tuple. We could perhaps use a \n> different, less space-efficient format for temporary tuples \n> in memory than we do on disk, but I don't think that will \n> help a lot. The nature of O(N^2) bottlenecks is you have to \n> kill them all --- for example, if we fix printtup and don't \n> do anything with ExecEvalVar, we can't do more than double \n> the speed of Steve's example, so it'll still be slow. So we \n> must have a solution for the case where we are disassembling \n> a stored tuple, anyway.\n> \n> I have been sitting here toying with a related idea, which is \n> to use the heap_deformtuple code I suggested before to form \n> an array of pointers to Datums in a specific tuple (we could \n> probably use the TupleTableSlot mechanisms to manage the \n> memory for these). Then subsequent accesses to individual \n> columns would just need an array-index operation, not a \n> nocachegetattr call. The trick with that would be that if \n> only a few columns are needed out of a row, it might be a net \n> loss to compute the Datum values for all columns. How could \n> we avoid slowing that case down while making the wide-tuple \n> case faster?\n\nFor the disk case, why not have the start of the record contain an array\nof offsets to the start of the data for each column? It would only be\nnecessary to have a list for variable fields.\n\nSo (for instance) if you have 12 variable fields, you would store 12\nintegers at the start of the record.\n", "msg_date": "Wed, 22 Jan 2003 16:21:18 -0800", "msg_from": "\"Dann Corbit\" <DCorbit@connx.com>", "msg_from_op": true, "msg_subject": "Re: Terrible performance on wide selects " }, { "msg_contents": "\"Dann Corbit\" <DCorbit@connx.com> writes:\n> For the disk case, why not have the start of the record contain an array\n> of offsets to the start of the data for each column? It would only be\n> necessary to have a list for variable fields.\n\nNo, you'd need an entry for *every* column (or at least, every one to\nthe right of the first variable-width column or NULL). That's a lot of\noverhead, especially in comparison to datatypes like bool or int4 ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 22 Jan 2003 19:30:04 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Terrible performance on wide selects " } ]
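For a feel of the overhead objection raised just above, a quick back-of-the-envelope as a runnable query. The 2-byte-per-entry figure is an assumption about one possible offset width, not something established in the thread.

    SELECT 40 * 4                                AS int4_data_bytes,
           40 * 2                                AS offset_bytes,
           round(100.0 * (40 * 2) / (40 * 4), 1) AS overhead_pct;
    -- 160 bytes of column data vs 80 bytes of offsets: roughly 50% extra
    -- for a row of forty int4 columns, before any per-row header costs.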
[ { "msg_contents": "[snip]\n> So (for instance) if you have 12 variable fields, you would \n> store 12 integers at the start of the record.\n\nAdditionally, you could implicitly size the integers from the properties\nof the column. A varchar(255) would only need an unsigned char to store\nthe offset, but a varchar(80000) would require an unsigned int.\n", "msg_date": "Wed, 22 Jan 2003 16:22:51 -0800", "msg_from": "\"Dann Corbit\" <DCorbit@connx.com>", "msg_from_op": true, "msg_subject": "Re: Terrible performance on wide selects " }, { "msg_contents": "Dann Corbit kirjutas N, 23.01.2003 kell 02:22:\n> [snip]\n> > So (for instance) if you have 12 variable fields, you would \n> > store 12 integers at the start of the record.\n> \n> Additionally, you could implicitly size the integers from the properties\n> of the column. A varchar(255) would only need an unsigned char to store\n> the offset, but a varchar(80000) would require an unsigned int.\n\nI guess that the pointer could always be 16-bit, as the offset inside a\ntuple will never be more (other issues constrain max page size to 32K)\n\nvarchar(80000) will use TOAST (another file) anyway, but this will be\nhidden inside the field storage in the page)\n\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n-- \nHannu Krosing <hannu@tm.ee>\n", "msg_date": "23 Jan 2003 19:42:26 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: Terrible performance on wide selects" } ]
[ { "msg_contents": "[snip]\n> For the disk case, why not have the start of the record \n> contain an array of offsets to the start of the data for each \n> column? It would only be necessary to have a list for \n> variable fields.\n> \n> So (for instance) if you have 12 variable fields, you would \n> store 12 integers at the start of the record.\n\nYou have to store this information anyway (for variable length objects).\nBy storing it at the front of the record you would lose nothing (except\nthe logical coupling of an object with its length). But I would think\nthat it would not consume any additional storage.\n", "msg_date": "Wed, 22 Jan 2003 16:39:57 -0800", "msg_from": "\"Dann Corbit\" <DCorbit@connx.com>", "msg_from_op": true, "msg_subject": "Re: Terrible performance on wide selects " }, { "msg_contents": "Dann Corbit kirjutas N, 23.01.2003 kell 02:39:\n> [snip]\n> > For the disk case, why not have the start of the record \n> > contain an array of offsets to the start of the data for each \n> > column? It would only be necessary to have a list for \n> > variable fields.\n> > \n> > So (for instance) if you have 12 variable fields, you would \n> > store 12 integers at the start of the record.\n> \n> You have to store this information anyway (for variable length objects).\n> By storing it at the front of the record you would lose nothing (except\n> the logical coupling of an object with its length). But I would think\n> that it would not consume any additional storage.\n\nI don't think it will win much either (except for possible cache\nlocality with really huge page sizes), as the problem is _not_ scanning\nover big strings finding their end marker, but instead is chasing long\nchains of pointers.\n\nThere could be some merit in the idea of storing in the beginning of\ntuple all pointers starting with first varlen field (16 bit int should\nbe enough) \nso people can minimize the overhead by moving fixlen fields to the\nbeginning. once we have this setup, we no longer need the varlen fields\n/stored/ together with field data. \n\nthis adds complexity of converting form (len,data) to ptr,...,data) when\nconstructing the tuple\n\nas tuple (int,int,int,varchar,varchar)\n\nwhich is currently stored as\n\n(intdata1, intdata2, intdata3, (len4, vardata4), (len5,vardata5))\n\nshould be rewritten on storage to\n\n(ptr4,ptr5),(intdata1, intdata2, intdata3, vardata4,vardata5)\n\nbut it seems to solve the O(N) problem quite nicely (and forces no\nstorage growth for tuples with fixlen fields in the beginning of tuple)\n\nand we must also account for NULL fields in calculations .\n\n-- \nHannu Krosing <hannu@tm.ee>\n", "msg_date": "23 Jan 2003 12:28:21 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Terrible performance on wide selects" }, { "msg_contents": "Hannu Krosing <hannu@tm.ee> writes:\n> as tuple (int,int,int,varchar,varchar)\n> which is currently stored as\n> (intdata1, intdata2, intdata3, (len4, vardata4), (len5,vardata5))\n> should be rewritten on storage to\n> (ptr4,ptr5),(intdata1, intdata2, intdata3, vardata4,vardata5)\n\nI do not see that this buys anything at all. heap_getattr still has to\nmake essentially the same calculation as before to determine column\nlocations, namely adding up column widths. All you've done is move the\ndata that it has to fetch to make the calculation. 
If anything, this\nwill be slower not faster, because now heap_getattr has to keep track\nof two positions not one --- not just the next column offset, but also\nthe index of the next \"ptr\" to use. In the existing method it only\nneeds the column offset, because that's exactly where it can pick up\nthe next length from.\n\nBut the really serious objection is that the datatype functions that\naccess the data would now also need to be passed two pointers, since\nafter all they would like to know the length too. That breaks APIs\nfar and wide :-(\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 23 Jan 2003 09:46:50 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Terrible performance on wide selects " } ]
[ { "msg_contents": "Hi all,\n\nFirst, sorry for the long mail...\n\nI have a system with 7 Million of records in 600 tables.\nMy actual production machine is: P4 1.6G, 3 IDE 7200, 1GB PC133\nMy new machine production is: Dual Xeon 2.0G HT, 1GB DDR266 ECC\n3 SCSI with HW Raid 5\n\nThe postgresql.conf is the SAME in both systems and I test\nwith no other connections, only my local test.\n\nshared_buffers = 80000\neffective_cache_size = 60000\nrandom_page_cost = 2.5\ncpu_tuple_cost = 0.001\ncpu_index_tuple_cost = 0.0001\ncpu_operator_cost = 0.00025\n\nMy question is:\n\nIf I execute the same query executed a lot of times, the\nduration is praticaly the same in both systems ?\n\n1) ! 1.185424 elapsed 1.090000 user 0.100000 system sec\n2) ! 1.184415 elapsed 1.070000 user 0.120000 system sec\n3) ! 1.185209 elapsed 1.100000 user 0.080000 system sec\n\nIf the disks is not read directly, the system must find\nthe rows in RAM. If it find in RAM, why so diffrents machines\nhave the times of execution and why the times does not down ???\n\nThe variations of query show bellow have the times pratically\nequals and my system send thousands os this querys with a\nthousands of 1.18 seconds... :(\n\nVery thank�s\n\nAlexandre\n\n\nQuery:\n[postgres@host1 data]$ psql -c \"explain SELECT T2.fi15emp05,\nT2.fi15flagcf, T2.fi15codcf, T1.Fn06Emp07, T1.Fn06TipTit, T1.Fn06TitBan, \nT1.Fn06Conta1, T1.Fn06NumTit, T1.Fn06Desdob, T1.Fn05CodPre, T1.Fn06eCli1,\nT1.Fn06tCli1, T1.Fn06cCli1, T2.fi15nome FROM (FN06T T1 LEFT JOIN FI15T\nT2 ON T2.fi15emp05 = T1.Fn06eCli1 AND T2.fi15flagcf = T1.Fn06tCli1 AND\nT2.fi15codcf = T1.Fn06cCli1) WHERE ( T1.Fn06Emp07 = '1' AND\nT1.Fn06TipTit = 'R' ) AND ( T1.Fn06TitBan = '002021001525 \n' ) ORDER BY T1.Fn06Emp07, T1.Fn06TipTit, T1.Fn06NumTit, T1.Fn06Desdob,\nT1.Fn05CodPre, T1.Fn06eCli1, T1.Fn06tCli1, T1.Fn06cCli1\" Pro13Z\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=25875.53..25875.53 rows=15 width=155)\n Sort Key: t1.fn06emp07, t1.fn06tiptit, t1.fn06numtit, t1.fn06desdob,\nt1.fn05codpre, t1.fn06ecli1, t1.fn06tcli1, t1.fn06ccli1\n -> Nested Loop (cost=0.00..25875.50 rows=15 width=155)\n -> Seq Scan on fn06t t1 (cost=0.00..25808.30 rows=15 width=95)\n Filter: ((fn06emp07 = 1::smallint) AND (fn06tiptit =\n'R'::bpchar) AND (fn06titban = '002021001525 \n '::bpchar))\n -> Index Scan using fi15t_pkey on fi15t t2 (cost=0.00..4.33\nrows=1 width=60)\n Index Cond: ((t2.fi15emp05 = \"outer\".fn06ecli1) AND\n(t2.fi15flagcf = \"outer\".fn06tcli1) AND (t2.fi15codcf =\n\"outer\".fn06ccli1))\n(7 rows)\n\n*** AND FROM LOG when a execute the query:\n\n2003-01-23 00:09:49 [3372] LOG: duration: 1.285900 sec\n2003-01-23 00:09:49 [3372] LOG: QUERY STATISTICS\n! system usage stats:\n! 1.286001 elapsed 1.240000 user 0.040000 system sec\n! [1.250000 user 0.040000 sys total]\n! 0/0 [0/0] filesystem blocks in/out\n! 50526/130 [50693/372] page faults/reclaims, 0 [0] swaps\n! 0 [0] signals rcvd, 0/0 [0/0] messages rcvd/sent\n! 0/0 [0/0] voluntary/involuntary context switches\n! buffer usage stats:\n! Shared blocks: 0 read, 0 written, buffer hit\nrate = 100.00%\n! Local blocks: 0 read, 0 written, buffer hit\nrate = 0.00%\n! 
Direct blocks: 0 read, 0 written\n\n\n\n", "msg_date": "Thu, 23 Jan 2003 00:26:44 -0200 (BRST)", "msg_from": "\"alexandre :: aldeia digital\" <alepaes@aldeiadigital.com.br>", "msg_from_op": true, "msg_subject": "Same query, same performance" }, { "msg_contents": "alexandre :: aldeia digital wrote:\n\n>Hi all,\n>\n>First, sorry for the long mail...\n>\n>I have a system with 7 Million of records in 600 tables.\n>My actual production machine is: P4 1.6G, 3 IDE 7200, 1GB PC133\n>My new machine production is: Dual Xeon 2.0G HT, 1GB DDR266 ECC\n>3 SCSI with HW Raid 5\n>\n>The postgresql.conf is the SAME in both systems and I test\n>with no other connections, only my local test.\n>\n>shared_buffers = 80000\n>effective_cache_size = 60000\n>random_page_cost = 2.5\n>cpu_tuple_cost = 0.001\n>cpu_index_tuple_cost = 0.0001\n>cpu_operator_cost = 0.00025\n>\n>My question is:\n>\n>If I execute the same query executed a lot of times, the\n>duration is praticaly the same in both systems ?\n>\n>1) ! 1.185424 elapsed 1.090000 user 0.100000 system sec\n>2) ! 1.184415 elapsed 1.070000 user 0.120000 system sec\n>3) ! 1.185209 elapsed 1.100000 user 0.080000 system sec\n>\n>If the disks is not read directly, the system must find\n>the rows in RAM. If it find in RAM, why so diffrents machines\n>have the times of execution and why the times does not down ???\n\nHere is your problem:\n-> Seq Scan on fn06t t1 (cost=0.00..25808.30 rows=15 width=95)\n Filter: ((fn06emp07 = 1::smallint) AND (fn06tiptit =\n'R'::bpchar) AND (fn06titban = '002021001525 \n '::bpchar))\n\nProblably system has to read from disk whole table fn06t each time, beacuse it\ndoesn't use index scan.\n\nDo you have any indexes on table fn06t? How selective are conditions above\nHow big is this table? Can you use indexes on multiple fields on this table \n- it should help, because conditions above return only 15 rows?\n\nRegards,\nTomasz Myrta\n\n\n\n", "msg_date": "Thu, 23 Jan 2003 09:20:09 +0100", "msg_from": "Tomasz Myrta <jasiek@klaster.net>", "msg_from_op": false, "msg_subject": "Re: Same query, same performance" }, { "msg_contents": "Alexandre,\n\n> I have a system with 7 Million of records in 600 tables.\n> My actual production machine is: P4 1.6G, 3 IDE 7200, 1GB PC133\n> My new machine production is: Dual Xeon 2.0G HT, 1GB DDR266 ECC\n> 3 SCSI with HW Raid 5\n\nWell, first of all, those two systems are almost equivalent as far as\nPostgres is concerned for simple queries. The extra processor power\nwill only help you with very complex queries. 3-disk RAID 5 is no\nfaster ... and sometimes slower ... than IDE for database purposes.\n The only real boost to the Xeon is the faster RAM ... which may not\nhelp you if your drive array is the bottleneck.\n\n> \n> The postgresql.conf is the SAME in both systems and I test\n> with no other connections, only my local test.\n> \n> shared_buffers = 80000\n> effective_cache_size = 60000\n> random_page_cost = 2.5\n> cpu_tuple_cost = 0.001\n> cpu_index_tuple_cost = 0.0001\n> cpu_operator_cost = 0.00025\n\nNot that it affects the query below, but what about SORT_MEM?\n\n> If I execute the same query executed a lot of times, the\n> duration is praticaly the same in both systems ?\n> \n> 1) ! 1.185424 elapsed 1.090000 user 0.100000 system sec\n> 2) ! 1.184415 elapsed 1.070000 user 0.120000 system sec\n> 3) ! 1.185209 elapsed 1.100000 user 0.080000 system sec\n> \n> If the disks is not read directly, the system must find\n> the rows in RAM. 
If it find in RAM, why so diffrents machines\n> have the times of execution and why the times does not down ???\n\nI'm pretty sure that PostgreSQL always checks on disk, even when the\nsame query is run repeatedly. Tom?\n\n> [postgres@host1 data]$ psql -c \"explain SELECT T2.fi15emp05,\n> T2.fi15flagcf, T2.fi15codcf, T1.Fn06Emp07, T1.Fn06TipTit,\n> T1.Fn06TitBan, \n> T1.Fn06Conta1, T1.Fn06NumTit, T1.Fn06Desdob, T1.Fn05CodPre,\n> T1.Fn06eCli1,\n> T1.Fn06tCli1, T1.Fn06cCli1, T2.fi15nome FROM (FN06T T1 LEFT JOIN\n> FI15T\n> T2 ON T2.fi15emp05 = T1.Fn06eCli1 AND T2.fi15flagcf = T1.Fn06tCli1\n> AND\n> T2.fi15codcf = T1.Fn06cCli1) WHERE ( T1.Fn06Emp07 = '1' AND\n> T1.Fn06TipTit = 'R' ) AND ( T1.Fn06TitBan = '002021001525\n> \n> ' ) ORDER BY T1.Fn06Emp07, T1.Fn06TipTit, T1.Fn06NumTit,\n> T1.Fn06Desdob,\n> T1.Fn05CodPre, T1.Fn06eCli1, T1.Fn06tCli1, T1.Fn06cCli1\" Pro13Z\n\nActually, from your stats, Postgres is doing a pretty good job. 1.18\nseconds to return 15 rows from a 7 million row table searching on not\nIndexed columns? I don't think you have anything to complain about.\n\nIf you want less-than-1 second respose time: Add some indexes and keep\nthe tables VACUUMed so the indexes work. Particularly, add a\nmulti-column index on ( T1.Fn06Emp07, T1.Fn06TipTit, T1.Fn06TitBan )\n\nIf you want single-digit-msec response: Get a better disk set for\nPostgres: I recommend dual-channel RAID 1 (n addition to indexing).\n\n-Josh Berkus\n\n\n", "msg_date": "Thu, 23 Jan 2003 09:00:02 -0800", "msg_from": "\"Josh Berkus\" <josh@agliodbs.com>", "msg_from_op": false, "msg_subject": "Re: Same query, same performance" }, { "msg_contents": "Tomasz,\n\n>>1) ! 1.185424 elapsed 1.090000 user 0.100000 system sec\n>>2) ! 1.184415 elapsed 1.070000 user 0.120000 system sec\n>>3) ! 1.185209 elapsed 1.100000 user 0.080000 system sec\n>>\n>>If the disks is not read directly, the system must find\n>>the rows in RAM. If it find in RAM, why so diffrents machines\n>>have the times of execution and why the times does not down ???\n>\n> Here is your problem:\n> -> Seq Scan on fn06t t1 (cost=0.00..25808.30 rows=15 width=95)\n> Filter: ((fn06emp07 = 1::smallint) AND (fn06tiptit =\n> 'R'::bpchar) AND (fn06titban = '002021001525\n> '::bpchar))\n\nReally! I do not attemp that fn06t does not have an index\nwith fn06titban ... :)\n\nNow, tehe time of the querys are < 0.02 sec on P4\nand <0.05 on Xeon.\n\nVery Thank�s\n\nAlexandre,\n\n\n", "msg_date": "Thu, 23 Jan 2003 17:49:37 -0200 (BRST)", "msg_from": "\"alexandre :: aldeia digital\" <alepaes@aldeiadigital.com.br>", "msg_from_op": true, "msg_subject": "Re: Same query, same performance" }, { "msg_contents": "Josh,\n\n> Alexandre,\n>\n>> I have a system with 7 Million of records in 600 tables.\n>> My actual production machine is: P4 1.6G, 3 IDE 7200, 1GB PC133\n>> My new machine production is: Dual Xeon 2.0G HT, 1GB DDR266 ECC\n>> 3 SCSI with HW Raid 5\n>\n> Well, first of all, those two systems are almost equivalent as far as\n> Postgres is concerned for simple queries. The extra processor power\n> will only help you with very complex queries. 3-disk RAID 5 is no\n> faster ... and sometimes slower ... than IDE for database purposes.\n> The only real boost to the Xeon is the faster RAM ... 
which may not\n> help you if your drive array is the bottleneck.\n\nToday, I will add more one HD and I will make an RAID 10 ...\nIn next week i will report my tests to the list...\n\n>\n>>\n>> The postgresql.conf is the SAME in both systems and I test\n>> with no other connections, only my local test.\n>>\n>> shared_buffers = 80000\n>> effective_cache_size = 60000\n>> random_page_cost = 2.5\n>> cpu_tuple_cost = 0.001\n>> cpu_index_tuple_cost = 0.0001\n>> cpu_operator_cost = 0.00025\n>\n> Not that it affects the query below, but what about SORT_MEM?\n\nSort_mem = 32000\n\n> Actually, from your stats, Postgres is doing a pretty good job. 1.18\n> seconds to return 15 rows from a 7 million row table searching on not\n> Indexed columns? I don't think you have anything to complain about.\n\nThe table have 300000 tuples, the entire database have 7 million.\nTomazs answer the question: a missing index on fn06t ...\n\nBut the query time difference of the systems continue.\nI will change the discs and tell to list after...\n\nThank�s Josh,\n\n\nAlexandre\n\n\n\n\n\n", "msg_date": "Thu, 23 Jan 2003 18:31:03 -0200 (BRST)", "msg_from": "\"alexandre :: aldeia digital\" <alepaes@aldeiadigital.com.br>", "msg_from_op": true, "msg_subject": "Re: Same query, same performance" }, { "msg_contents": "\nShort summary:\n\n On a large tables, I think the \"correlation\" pg_stats field as calculated\n by \"vacuum analyze\" or \"analyze\" can mislead the optimizer.\n\n By forcing index scans on some queries shown below, some queries \n in my database speed up from 197 seconds to under 30 seconds.\n\n I'd like feedback on whether or not having a smarter \"analyze\" \n function (which I think I could write as a separate utility) would\n help me situations like this.\n\nLonger: \n\n In particular, if I have a large table t with columns 'a','b','c', etc,\n and I cluster the table as follows:\n\n create table t_ordered as select * from t order by a,b;\n vacuum analyze t_ordered;\n\n Column \"b\" will (correctly) get a very low \"correlation\" in\n the pg_stats table -- but I think the optimizer would do better\n assuming a high correlation because similar 'b' values are still\n grouped closely on the same disk pages.\n\n\n\n Below is a real-world example of this issue.\n\n The table \"fact\" is a large one (reltuples = 1e8, relpages = 1082385)\n and contains about 1 years worth of data. The data was loaded \n sequentialy (ordered by dat,tim).\n\n\tlogs=# \\d fact;\n\t\t\tTable \"fact\"\n\t Column | Type | Modifiers \n\t--------+------------------------+-----------\n\t dat | date | \n\t tim | time without time zone | \n\t ip_id | integer | \n\t bid_id | integer | \n\t req_id | integer | \n\t ref_id | integer | \n\t uag_id | integer | \n\tIndexes: i_fact_2__bid_id,\n\t\t i_fact_2__dat,\n\t\t i_fact_2__tim,\n\t\t i_fact_2__ip_id,\n\t\t i_fact_2__ref_id,\n\t\t i_fact_2__req_id\n\n\n With a table this large, each day's worth of data contains\n about 3000 pages; or conversely, each page contains only about\n a 30 second range of values for \"tim\".\n\n As shown in the queries below, the optimizer wanted to do\n a sequential scan when looking at a 10 minute part of the day.\n However also as shown, forcing an index scan did much better.\n\n I'm guessing this happened because the optimizer saw the\n horrible correlation, and decided it would have to read\n an enormous number of pages if it did an index scan. 
\n\n===========================================\n\nlogs=# select tablename,attname,n_distinct,correlation from pg_stats where tablename='fact';\n tablename | attname | n_distinct | correlation \n-----------+---------+------------+-------------\n fact | dat | 365 | 1\n fact | tim | 80989 | -0.00281447\n fact | ip_id | 44996 | 0.660689\n fact | bid_id | 742850 | 0.969026\n fact | req_id | 2778 | 0.67896\n fact | ref_id | 595 | 0.258023\n fact | uag_id | 633 | 0.234216\n(7 rows)\n\n\nlogs=# explain analyze select * from fact where tim<'00:10:00';\n QUERY PLAN \n------------------------------------------------------------------------------------------------------------------\n Seq Scan on fact (cost=0.00..1949838.40 rows=526340 width=32) (actual time=0.39..197447.50 rows=402929 loops=1)\n Filter: (tim < '00:10:00'::time without time zone)\n Total runtime: 197810.01 msec\n(3 rows)\n\nlogs=# explain analyze select * from fact where tim<'00:10:00';\n QUERY PLAN \n-------------------------------------------------------------------------------------------------------------------\n Seq Scan on fact (cost=0.00..1949838.40 rows=526340 width=32) (actual time=15.25..156705.76 rows=402929 loops=1)\n Filter: (tim < '00:10:00'::time without time zone)\n Total runtime: 157089.15 msec\n(3 rows)\n\nlogs=# set enable_seqscan = off;\nSET\nlogs=# explain analyze select * from fact where tim<'00:10:00';\n QUERY PLAN \n----------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using i__fact__tim on fact (cost=0.00..2110978.39 rows=526340 width=32) (actual time=104.41..23307.84 rows=402929 loops=1)\n Index Cond: (tim < '00:10:00'::time without time zone)\n Total runtime: 23660.95 msec\n(3 rows)\n\nlogs=# explain analyze select * from fact where tim<'00:10:00';\n QUERY PLAN \n-------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using i__fact__tim on fact (cost=0.00..2110978.39 rows=526340 width=32) (actual time=0.03..1477.35 rows=402929 loops=1)\n Index Cond: (tim < '00:10:00'::time without time zone)\n Total runtime: 1827.94 msec\n(3 rows)\n\n\n\nlogs=# \n\n*******************************************************************************\n*******************************************************************************\n\n\nSo two questions:\n\n a) Am I on to something.... or is something else the reason why\n the optimizer chose the much slower sequential scan?\n\n b) If I did write an \"analyze\" that tried to set \"correlation\" values\n that took into account such local grouping of data, would anyone\n be interested?\n\n\n Ron\n\n\n\n \n\n", "msg_date": "Thu, 23 Jan 2003 20:16:09 -0800 (PST)", "msg_from": "Ron Mayer <ron@intervideo.com>", "msg_from_op": false, "msg_subject": "Does \"correlation\" mislead the optimizer on large tables?" }, { "msg_contents": "Ron Mayer <ron@intervideo.com> writes:\n> On a large tables, I think the \"correlation\" pg_stats field as calculated\n> by \"vacuum analyze\" or \"analyze\" can mislead the optimizer.\n\nIf you look in the pghackers archives, you will find some discussion\nabout changing the equation that cost_index() uses to estimate the\nimpact of correlation on indexscan cost. The existing equation is\nad-hoc and surely wrong, but so far no one's proposed a replacement\nthat can be justified any better. 
If you've got such a replacement\nthen we're all ears...\n\n> In particular, if I have a large table t with columns 'a','b','c', etc,\n> and I cluster the table as follows:\n> create table t_ordered as select * from t order by a,b;\n> vacuum analyze t_ordered;\n> Column \"b\" will (correctly) get a very low \"correlation\" in\n> the pg_stats table -- but I think the optimizer would do better\n> assuming a high correlation because similar 'b' values are still\n> grouped closely on the same disk pages.\n\nHow would that be? They'll be separated by the stride of 'a'.\n\nIt seems likely to me that a one-dimensional correlation statistic may\nbe inadequate, but I haven't seen any proposals for better stats.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 24 Jan 2003 01:48:19 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Does \"correlation\" mislead the optimizer on large tables? " }, { "msg_contents": "On Fri, 24 Jan 2003, Tom Lane wrote:\n\n> Ron Mayer <ron@intervideo.com> writes:\n> > In particular, if I have a large table t with columns 'a','b','c', etc,\n> > and I cluster the table as follows:\n> > create table t_ordered as select * from t order by a,b;\n> > vacuum analyze t_ordered;\n> > Column \"b\" will (correctly) get a very low \"correlation\" in\n> > the pg_stats table -- but I think the optimizer would do better\n> > assuming a high correlation because similar 'b' values are still\n> > grouped closely on the same disk pages.\n>\n> How would that be? They'll be separated by the stride of 'a'.\n\nI think it's a clumping effect.\n\nFor example, I made a table (ordered) with 20 values of a, 50 values of b\n(each showing up in each a) and 100 values of c (not used, just means 100\nrows for each (a,b) combination. It's got 541 pages it looks like. Analyze\nsets the correlation to about 0.08 on the table and so a query like:\nselect * from test1 where b=1; prefers a sequence scan (1791 vs 2231)\nwhile the index scan actually performs about 5 times better.\n\nI guess the reason is that in general, the index scan *really* is reading\nsomething on the order of 40 pages rather than the much larger estimate\n(I'd guess something on the order of say 300-400? I'm not sure how to\nfind that except by trying to reverse engineer the estimate number),\nbecause pretty much each value of a will probably have 1 or 2 pages with\nb=1.\n\nI'm not really sure how to measure that, however.\n\n\n", "msg_date": "Fri, 24 Jan 2003 08:27:03 -0800 (PST)", "msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>", "msg_from_op": false, "msg_subject": "Re: Does \"correlation\" mislead the optimizer on large" }, { "msg_contents": "\nOn Fri, 24 Jan 2003, Tom Lane wrote:\n>\n> Ron Mayer <ron@intervideo.com> writes:\n> > On a large tables, I think the \"correlation\" pg_stats field as calculated\n> > by \"vacuum analyze\" or \"analyze\" can mislead the optimizer.\n> \n> If you look in the pghackers archives, you will find some discussion\n> about changing the equation that cost_index() uses to estimate the\n> impact of correlation on indexscan cost. The existing equation is\n> ad-hoc and surely wrong, but so far no one's proposed a replacement\n> that can be justified any better. If you've got such a replacement\n> then we're all ears...\n\nI've got a very slow one (full table scan perl script) that helps \nmy database... I don't know if it's a good general purpose solution.\n\nThat's why I'm asking if the concept is good here. 
:-)\n\n\n> > In particular, if I have a large table t with columns 'a','b','c', etc,\n> > and I cluster the table as follows:\n> > create table t_ordered as select * from t order by a,b;\n> > vacuum analyze t_ordered;\n> > Column \"b\" will (correctly) get a very low \"correlation\" in\n> > the pg_stats table -- but I think the optimizer would do better\n> > assuming a high correlation because similar 'b' values are still\n> > grouped closely on the same disk pages.\n> \n> How would that be? They'll be separated by the stride of 'a'.\n\n\nIn the case of date/time (for the queries I showed) the issue was\nthat 'a's were not at all unique so I had data like this:\n\n dat | time | value\n ------------|----------|--------------------------------\n 2002-01-01 | 00:00:00 | whatever\n 2002-01-01 | 00:00:00 |\n 2002-01-01 | 00:00:00 |\n 2002-01-01 | 00:00:01 |\n 2002-01-01 | 00:00:01 | [many pages of 12am]\n 2002-01-01 | 00:00:01 |\n 2002-01-01 | 00:00:01 |\n ... thousands more rows....\n 2002-01-01 | 00:00:59 | \n 2002-01-01 | 00:01:00 | [many pages of 1am] \n ... tens of thousands of rows.\n 2002-01-01 | 23:59:59 | \n 2002-01-01 | 23:59:59 |\n 2002-01-01 | 23:59:59 | [many pages of 11pm]\n 2002-01-02 | 00:00:00 | [many *MORE* pages of 12am]\n 2002-01-02 | 00:00:00 |\n 2002-01-02 | 00:00:00 | \n ... tens of thousands of rows...\n 2002-01-02 | 23:59:59 | [many pages of 11pm]\n 2002-01-03 | 00:00:00 | [many *MORE* pages of 12am]\n ... millions more rows ...\n\n\nA similar problem actually shows up again in the dimention tables\nof my database; where I bulk load many pages at a time (which can\neasily be ordered to give a good correlation for a single load) ... \nbut then the next week's data gets appended to the end.\n\n\tid | value\n ------|----------------------------------\n\t 1 | aalok mehta [many pages of all 'a's]\n\t 2 | aamir khan\n\t 3 | aaron beall\n\t | [...]\n 6234 | axel rose\n 6234 | austin wolf\n 6123 | barbara boxer [many pages of all 'b's]\n\t | [...]\n 123456 | young\n 123457 | zebra\n\t | [...data loaded later..]\n 123458 | aaron whatever [more pages of all 'a's]\n 123458 | aaron something else\n 123458 | aaron something else\n | [...]\n 512344 | zelany\n\n\nIn this case I get many clustered blocks of \"a\" values, but these\nclustered blocks happen at many different times across the table.\n\n\n> It seems likely to me that a one-dimensional correlation statistic may\n> be inadequate, but I haven't seen any proposals for better stats.\n\nThe idea is it walks the whole table and looks for more local\ncorrelations and replaces the correlation value with a \"good\"\nvalue if values \"close\" to each other on the disk are similar.\n\nThis way a single \"correlation\" value still works ... so I didn't\nhave to change the optimizer logic, just the \"analyze\" logic.\n\n\nBasically if data within each block is highly correlated, it doesn't\nmatter as much (yeah, I now the issue about sequential reads vs. random\nreads).\n\n\n Ron\n\n", "msg_date": "Fri, 24 Jan 2003 11:36:50 -0800 (PST)", "msg_from": "Ron Mayer <ron@intervideo.com>", "msg_from_op": false, "msg_subject": "Re: Does \"correlation\" mislead the optimizer on large" }, { "msg_contents": "\nOn Fri, 24 Jan 2003, Stephan Szabo wrote:\n>\n> I think it's a clumping effect.\n\n\nYup, I think that's exactly the effect.\n\nA proposal.... (yes I I'm volunteering if people point me in the right \ndirection)... 
would be to have a \"plugable\" set of analyze functions so that a \nhuge database that runs analyze infrequently could choose to have a very slow \nanalyze that might work better for it's data.\n\nI see no reason different analyze functions would to be compiled into\nthe source code ... but could probably exists as PL/pgSQL languages.\n\nThe one thing compiling it in would help with is to let me know \nthe exact number of tuples on each individual page, but I guess \nreltuples/relpages from pg_class is a good estimate.\n\n\n> For example, I made a table (ordered) with 20 values of a, 50 values of b\n> (each showing up in each a) and 100 values of c (not used, just means 100\n> rows for each (a,b) combination. It's got 541 pages it looks like. Analyze\n> sets the correlation to about 0.08 on the table and so a query like:\n> select * from test1 where b=1; prefers a sequence scan (1791 vs 2231)\n> while the index scan actually performs about 5 times better.\n\nThat sounds like the same situation I was in. If my logic is right, this \nmeans you had about 184 tuples/page (200*50*100/541), so it looks to me\nlike for each \"a\", you get half-a-page where \"b=1\".\n\nIf you had 'c' have 200 values, I think you'd get even a bigger speedup\nbecause half the page is still \"wasted\" with b=2 values.\n\nIf you had 'c' have 10000 values, I think you'd get even a slightly bigger \nspeedup because you'd have so many b=1 pages next to each other you'd\nbenefit from more sequential disk access.\n\n\n> I guess the reason is that in general, the index scan *really* is reading\n> something on the order of 40 pages rather than the much larger estimate\n> (I'd guess something on the order of say 300-400? I'm not sure how to\n> find that except by trying to reverse engineer the estimate number),\n\nOr by adding a printf()... I think it'd be in cost_index in costsize.c.\n\n> because pretty much each value of a will probably have 1 or 2 pages with\n> b=1.\n> \n> I'm not really sure how to measure that, however.\n\n\nAs I said... I'm happy to volunteer and experiment if people point\nme in a good direction.\n\n Ron\n\n\n", "msg_date": "Fri, 24 Jan 2003 12:04:12 -0800 (PST)", "msg_from": "Ron Mayer <ron@intervideo.com>", "msg_from_op": false, "msg_subject": "Re: Does \"correlation\" mislead the optimizer on large" }, { "msg_contents": "Ron Mayer <ron@intervideo.com> writes:\n> A proposal.... (yes I I'm volunteering if people point me in the right \n> direction)... would be to have a \"plugable\" set of analyze functions so that a \n> huge database that runs analyze infrequently could choose to have a very slow \n> analyze that might work better for it's data.\n\nI do not think ANALYZE is the problem here; at least, it's premature to\nworry about that end of things until you've defined (a) what's to be\nstored in pg_statistic, and (b) what computation the planner needs to\nmake to derive a cost estimate given the stats.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 24 Jan 2003 15:22:21 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Does \"correlation\" mislead the optimizer on large " }, { "msg_contents": "Hi!\n\nAnother fun question in our ongoing analysis on whether to switch from \nmysql to postgres. (Just as an update, Postgres has performed \nflawlessly on all of our stress tests so far.)\n\nWe have a situation where we will be creating two fairly large and \ncomplex databases with many tables (thousands) each. 
From what I \nunderstand, postgres keeps everything in one big data directory.\n\nWould there be an advantage to putting each of the two databases into a \nseparate directory and starting two instances of postgres? Is it \nbetter to just lump everything together.\n\nIn a perfect world, we would buy another database server and raid for \nthe second database, but being a small company, we just don't have the \nbudget right now. The raid on our current server is much bigger than we \nneed.\n\nThanks,\n\n-N\n\n", "msg_date": "Fri, 24 Jan 2003 16:55:45 -0500", "msg_from": "Noah Silverman <noah@allresearch.com>", "msg_from_op": false, "msg_subject": "Re: Does \"correlation\" mislead the optimizer on large " }, { "msg_contents": "Hi!\n\nAnother fun question in our ongoing analysis on whether to switch from \nmysql to postgres. (Just as an update, Postgres has performed \nflawlessly on all of our stress tests so far.)\n\nWe have a situation where we will be creating two fairly large and \ncomplex databases with many tables (thousands) each. From what I \nunderstand, postgres keeps everything in one big data directory.\n\nWould there be an advantage to putting each of the two databases into a \nseparate directory and starting two instances of postgres? Is it \nbetter to just lump everything together.\n\nIn a perfect world, we would buy another database server and raid for \nthe second database, but being a small company, we just don't have the \nbudget right now. The raid on our current server is much bigger than we \nneed.\n\nThanks,\n\n-N\n\n", "msg_date": "Fri, 24 Jan 2003 17:39:42 -0500", "msg_from": "Noah Silverman <noah@allresearch.com>", "msg_from_op": false, "msg_subject": "Multiple databases one directory " }, { "msg_contents": "\nOn Fri, 24 Jan 2003, Tom Lane wrote:\n>\n> Ron Mayer <ron@intervideo.com> writes:\n> > A proposal.... (yes I I'm volunteering if people point me in the right \n> > direction)...\n> \n> I do not think ANALYZE is the problem here; at least, it's premature to\n> worry about that end of things until you've defined (a) what's to be\n> stored in pg_statistic, and (b) what computation the planner needs to\n> make to derive a cost estimate given the stats.\n\nCool. Thanks for a good starting point. If I wanted to brainstorm\nfurther, should I do so here, or should I encourage interested people\nto take it off line with me (ron@intervideo.com) and I can post\na summary of the conversation?\n\n Ron\n\nFor those who do want to brainstorm with me, my starting point is this:\n\n With my particular table, I think the main issue is still that I have a \n lot of data that looks like:\n\n values: aaaaaaaaaaabbbbbbbbccccccccddddddddddaaaabbbbbbbccccccccddddd...\n disk page: |page 1|page 2|page 3|page 4|page 5|page 6|page 7|page 8|page 9|\n\n The problem I'm trying to address is that the current planner guesses \n that most of the pages will need to be read; however the local clustering\n means that in fact only a small subset need to be accessed. My first\n guess is that modifying the definition of \"correlation\" to account for\n page-sizes would be a good approach.\n\n I.e. 
Instead of the correlation across the whole table, for each row\n perform an auto-correlation \n (http://astronomy.swin.edu.au/~pbourke/analysis/correlate/)\n and keep only the values with a \"delay\" of less than 1 page-size.\n\nIf you want to share thoughts offline (ron@intervideo.com), I'll gladly\npost a summary of responses here to save the bandwidth of the group.\n\n\n\n", "msg_date": "Fri, 24 Jan 2003 15:09:19 -0800 (PST)", "msg_from": "Ron Mayer <ron@intervideo.com>", "msg_from_op": false, "msg_subject": "Re: Does \"correlation\" mislead the optimizer on large " }, { "msg_contents": "\nNoah,\n\n> Would there be an advantage to putting each of the two databases into a \n> separate directory and starting two instances of postgres? Is it \n> better to just lump everything together.\n\nYou can use the WITH LOCATION option in CREATE DATABASE to put the two \ndatabases into seperate directories *without* running two instances of \npostgres.\n\nFor that matter, the databases each have their own directories, by OID number.\n\nOf course, this only helps you if the seperate directories are on seperate \ndisks/arrays/channels. If everying is on the same disk or array, don't \nbother.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Fri, 24 Jan 2003 15:22:28 -0800", "msg_from": "Josh Berkus <josh@agliodbs.com>", "msg_from_op": false, "msg_subject": "Re: Multiple databases one directory" }, { "msg_contents": "I think my server crashed and then restarted itself. Does anybody know \nwhat all this means:\n\n2003-01-24 18:28:06 PANIC: link from \n/RAID/pgsql/pg_xlog/00000009000000BC to \n/RAID/pgsql/pg_xlog/00000009000000C4 (initialization of log file 9, \nsegment 196) failed: File exists\n2003-01-24 18:28:06 LOG: server process (pid 1574) was terminated by \nsignal 6\n2003-01-24 18:28:06 LOG: terminating any other active server processes\n2003-01-24 18:28:06 WARNING: Message from PostgreSQL backend:\n The Postmaster has informed me that some other backend\n died abnormally and possibly corrupted shared memory.\n I have rolled back the current transaction and am\n going to terminate your database system connection and exit.\n Please reconnect to the database system and repeat your query.\n2003-01-24 18:28:06 LOG: all server processes terminated; \nreinitializing shared memory and semaphores\n2003-01-24 18:28:06 LOG: database system was interrupted at 2003-01-24 \n18:28:06 EST\n2003-01-24 18:28:06 LOG: checkpoint record is at 9/C4574974\n2003-01-24 18:28:06 LOG: redo record is at 9/C200D144; undo record is \nat 0/0; shutdown FALSE\n2003-01-24 18:28:06 LOG: next transaction id: 5159292; next oid: \n50856954\n2003-01-24 18:28:06 LOG: database system was not properly shut down; \nautomatic recovery in progress\n2003-01-24 18:28:06 LOG: redo starts at 9/C200D144\n2003-01-24 18:28:13 LOG: ReadRecord: record with zero length at \n9/C4578CC0\n2003-01-24 18:28:13 LOG: redo done at 9/C4578C9C\n2003-01-24 18:29:02 LOG: recycled transaction log file 00000009000000C0\n2003-01-24 18:29:02 LOG: recycled transaction log file 00000009000000C1\n2003-01-24 18:29:02 LOG: recycled transaction log file 00000009000000BC\n2003-01-24 18:29:02 LOG: recycled transaction log file 00000009000000BD\n2003-01-24 18:29:02 LOG: recycled transaction log file 00000009000000BE\n2003-01-24 18:29:02 LOG: recycled transaction log file 00000009000000BF\n2003-01-24 18:29:02 LOG: database system is ready\n\n\n\n\n", "msg_date": "Fri, 24 Jan 2003 18:59:58 -0500", "msg_from": "Noah Silverman 
<noah@allresearch.com>", "msg_from_op": false, "msg_subject": "WEIRD CRASH?!?!" }, { "msg_contents": "Noah,\n\n> I think my server crashed and then restarted itself. Does anybody know \n> what all this means:\n> \n> 2003-01-24 18:28:06 PANIC: link from \n> /RAID/pgsql/pg_xlog/00000009000000BC to \n> /RAID/pgsql/pg_xlog/00000009000000C4 (initialization of log file 9, \n> segment 196) failed: File exists\n> 2003-01-24 18:28:06 LOG: server process (pid 1574) was terminated by \n> signal 6\n> 2003-01-24 18:28:06 LOG: terminating any other active server processes\n> 2003-01-24 18:28:06 WARNING: Message from PostgreSQL backend:\n> The Postmaster has informed me that some other backend\n> died abnormally and possibly corrupted shared memory.\n> I have rolled back the current transaction and am\n> going to terminate your database system connection and exit.\n> Please reconnect to the database system and repeat your query.\n\nThis means that somebody KILL -9'd a postgres process or the postmaster, and \nPostgres restarted in order to clear the shared buffers. If the database \nstarted up again, you are fine.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Fri, 24 Jan 2003 16:03:42 -0800", "msg_from": "Josh Berkus <josh@agliodbs.com>", "msg_from_op": false, "msg_subject": "Re: WEIRD CRASH?!?!" }, { "msg_contents": "Yes,\nbut I'm the only one logged into this box, and I didn't kill anything. \nIt appears to have died all by itself.\n\nThanks,\n\n-N\n\n\nOn Friday, January 24, 2003, at 07:03 PM, Josh Berkus wrote:\n\n> Noah,\n>\n>> I think my server crashed and then restarted itself. Does anybody \n>> know\n>> what all this means:\n>>\n>> 2003-01-24 18:28:06 PANIC: link from\n>> /RAID/pgsql/pg_xlog/00000009000000BC to\n>> /RAID/pgsql/pg_xlog/00000009000000C4 (initialization of log file 9,\n>> segment 196) failed: File exists\n>> 2003-01-24 18:28:06 LOG: server process (pid 1574) was terminated by\n>> signal 6\n>> 2003-01-24 18:28:06 LOG: terminating any other active server \n>> processes\n>> 2003-01-24 18:28:06 WARNING: Message from PostgreSQL backend:\n>> The Postmaster has informed me that some other backend\n>> died abnormally and possibly corrupted shared memory.\n>> I have rolled back the current transaction and am\n>> going to terminate your database system connection and exit.\n>> Please reconnect to the database system and repeat your \n>> query.\n>\n> This means that somebody KILL -9'd a postgres process or the \n> postmaster, and\n> Postgres restarted in order to clear the shared buffers. If the \n> database\n> started up again, you are fine.\n>\n> -- \n> -Josh Berkus\n> Aglio Database Solutions\n> San Francisco\n>\n>\n\n", "msg_date": "Fri, 24 Jan 2003 19:08:29 -0500", "msg_from": "Noah Silverman <noah@allresearch.com>", "msg_from_op": false, "msg_subject": "Re: WEIRD CRASH?!?!" }, { "msg_contents": "On Fri, 24 Jan 2003, Noah Silverman wrote:\n\n> Yes,\n> but I'm the only one logged into this box, and I didn't kill anything. \n> It appears to have died all by itself.\n> \n\nIt certainly sounds that way. Can you recreate the circumstances and make \nit happen reliably? If not, the likely it's just an isolated occurance \nand nothing to get too worried about. 
Your data is still coherent, that's \nwhy all the backends were forced to reset, to cleanse the buffers from \npossible corruption.\n\n\n", "msg_date": "Fri, 24 Jan 2003 17:13:49 -0700 (MST)", "msg_from": "\"scott.marlowe\" <scott.marlowe@ihs.com>", "msg_from_op": false, "msg_subject": "Re: WEIRD CRASH?!?!" }, { "msg_contents": "\nNoah,\n\n> but I'm the only one logged into this box, and I didn't kill anything. \n> It appears to have died all by itself.\n\nI'd check your disk array, then. It doesn't happen to be a Mylex, does it?\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Fri, 24 Jan 2003 16:15:24 -0800", "msg_from": "Josh Berkus <josh@agliodbs.com>", "msg_from_op": false, "msg_subject": "Re: WEIRD CRASH?!?!" }, { "msg_contents": "We are using a 3ware escalade on this box.\n\nOne clue.\n\nI actually moved the pg_xlog directory to another drive and then \nsymbolically linked it back to the data directory.\n\nAnother idea is that Linux killed one of the processes because postgres \nwas using up too much memory. I belive the part of the kernel is \ncalled \"oomkiller\". We're not sure if this happened, just a guess.\n\nThanks,\n\n-N\n\n\n\nOn Friday, January 24, 2003, at 07:15 PM, Josh Berkus wrote:\n\n>\n> Noah,\n>\n>> but I'm the only one logged into this box, and I didn't kill anything.\n>> It appears to have died all by itself.\n>\n> I'd check your disk array, then. It doesn't happen to be a Mylex, \n> does it?\n>\n> -- \n> -Josh Berkus\n> Aglio Database Solutions\n> San Francisco\n>\n>\n\n", "msg_date": "Fri, 24 Jan 2003 19:17:48 -0500", "msg_from": "Noah Silverman <noah@allresearch.com>", "msg_from_op": false, "msg_subject": "Re: WEIRD CRASH?!?!" }, { "msg_contents": "On Fri, 24 Jan 2003, Josh Berkus wrote:\n\n> Noah,\n>\n> > I think my server crashed and then restarted itself. Does anybody know\n> > what all this means:\n> >\n> > 2003-01-24 18:28:06 PANIC: link from\n> > /RAID/pgsql/pg_xlog/00000009000000BC to\n> > /RAID/pgsql/pg_xlog/00000009000000C4 (initialization of log file 9,\n> > segment 196) failed: File exists\n> > 2003-01-24 18:28:06 LOG: server process (pid 1574) was terminated by\n> > signal 6\n> > 2003-01-24 18:28:06 LOG: terminating any other active server processes\n> > 2003-01-24 18:28:06 WARNING: Message from PostgreSQL backend:\n> > The Postmaster has informed me that some other backend\n> > died abnormally and possibly corrupted shared memory.\n> > I have rolled back the current transaction and am\n> > going to terminate your database system connection and exit.\n> > Please reconnect to the database system and repeat your query.\n>\n> This means that somebody KILL -9'd a postgres process or the postmaster, and\n> Postgres restarted in order to clear the shared buffers. If the database\n> started up again, you are fine.\n\nActually, it looks like an abort() (signal 6) to me. Probably from the\nPANIC listed.\n\nThe question is why did it get confused and end up linking to a filename\nthat already existed?\n\n\n\n", "msg_date": "Fri, 24 Jan 2003 16:27:50 -0800 (PST)", "msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>", "msg_from_op": false, "msg_subject": "Re: WEIRD CRASH?!?!" }, { "msg_contents": "Noah Silverman <noah@allresearch.com> writes:\n> We have a situation where we will be creating two fairly large and \n> complex databases with many tables (thousands) each. From what I \n> understand, postgres keeps everything in one big data directory.\n\nYeah. 
You're kind of at the mercy of the operating system when you do\nthat: if it copes well with big directories, no problem, but if lookups\nin big directories are slow then you'll take a performance hit.\n\nThe first thing I'd ask is *why* you think you need thousands of\ntables. How will you keep track of them? Are there really thousands of\ndifferent table schemas? Maybe you can combine tables by introducing\nan extra key column.\n\nPerhaps a little bit of rethinking will yield a small design screaming\nto get out of this big one ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 24 Jan 2003 19:50:24 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Multiple databases one directory " }, { "msg_contents": "Thanks,\n\nWe're considering this.\n\nOn an unrelated note, it looks like our crash was due to running out of \nfile descriptors for the bash shell.\n\nLinux won't let me increase the limit for a user other than root. Does \nanyone know how to change this (We're running slackware)\n\n\nThanks,\n\n-N\n\n\nOn Friday, January 24, 2003, at 07:50 PM, Tom Lane wrote:\n\n> Noah Silverman <noah@allresearch.com> writes:\n>> We have a situation where we will be creating two fairly large and\n>> complex databases with many tables (thousands) each. From what I\n>> understand, postgres keeps everything in one big data directory.\n>\n> Yeah. You're kind of at the mercy of the operating system when you do\n> that: if it copes well with big directories, no problem, but if lookups\n> in big directories are slow then you'll take a performance hit.\n>\n> The first thing I'd ask is *why* you think you need thousands of\n> tables. How will you keep track of them? Are there really thousands \n> of\n> different table schemas? Maybe you can combine tables by introducing\n> an extra key column.\n>\n> Perhaps a little bit of rethinking will yield a small design screaming\n> to get out of this big one ...\n>\n> \t\t\tregards, tom lane\n>\n\n", "msg_date": "Fri, 24 Jan 2003 19:57:01 -0500", "msg_from": "Noah Silverman <noah@allresearch.com>", "msg_from_op": false, "msg_subject": "Re: Multiple databases one directory " }, { "msg_contents": "Noah Silverman <noah@allresearch.com> writes:\n> One clue.\n> I actually moved the pg_xlog directory to another drive and then \n> symbolically linked it back to the data directory.\n\nUh, did you have the postmaster shut down while you did that?\n\nThis looks like a collision between two processes both trying to create\nthe next segment of the xlog at about the same time. But there are\ninterlocks that are supposed to prevent that.\n\nI don't think you need to worry about the integrity of your data; the\npanic reset should put everything right. But I'd sure be interested\nif you can reproduce this problem.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 24 Jan 2003 20:04:07 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: WEIRD CRASH?!?! " }, { "msg_contents": "On Fri, 2003-01-24 at 19:17, Noah Silverman wrote:\n> We are using a 3ware escalade on this box.\n> \n> One clue.\n> \n> I actually moved the pg_xlog directory to another drive and then \n> symbolically linked it back to the data directory.\n\nYou shut it down first right?\n\n-- \nRod Taylor <rbt@rbt.ca>\n\nPGP Key: http://www.rbt.ca/rbtpub.asc", "msg_date": "24 Jan 2003 20:08:24 -0500", "msg_from": "Rod Taylor <rbt@rbt.ca>", "msg_from_op": false, "msg_subject": "Re: WEIRD CRASH?!?!" 
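For reference, a minimal sketch of the pg_xlog relocation being discussed above, with the postmaster stopped for the duration. The data directory path follows the log messages earlier in the thread (which suggest /RAID/pgsql is the data directory); "/otherdrive" is only a placeholder, and the exact commands are an assumption rather than anything quoted from the original posts:

    pg_ctl -D /RAID/pgsql stop                      # stop the postmaster first
    mv /RAID/pgsql/pg_xlog /otherdrive/pg_xlog      # move the WAL directory
    ln -s /otherdrive/pg_xlog /RAID/pgsql/pg_xlog   # symlink it back into place
    pg_ctl -D /RAID/pgsql start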
}, { "msg_contents": "Stephan Szabo <sszabo@megazone23.bigpanda.com> writes:\n> The question is why did it get confused and end up linking to a filename\n> that already existed?\n\nThe message comes from InstallXLogFileSegment(), which is careful to\nensure that the link() cannot fail, either by unlinking the previous\nfile, or searching for an unused name. But it failed anyway.\n\nIt seems to me that there are only two possible explanations: a race\ncondition (but holding ControlFileLock should prevent that) or \nBasicOpenFile() failed for a reason other than nonexistence of the file.\n\nHmm ... I wonder if Noah's machine could have been running out of kernel\nfile table slots, or something like that? It does seem that it'd be\nmore robust to use something like stat(2) to probe for an existing file.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 24 Jan 2003 20:12:06 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: WEIRD CRASH?!?! " }, { "msg_contents": "OF COURSE!\n\nIt actually looks like the problem was with file descriptors. Our \nshell only had 1024 set, and we also have mysql running and using up a \nbunch of those. We just upped to limit to 8000 to see it that would \ngive postgres more room to breathe.\n\n-N\n\nOn Friday, January 24, 2003, at 08:08 PM, Rod Taylor wrote:\n\n> On Fri, 2003-01-24 at 19:17, Noah Silverman wrote:\n>> We are using a 3ware escalade on this box.\n>>\n>> One clue.\n>>\n>> I actually moved the pg_xlog directory to another drive and then\n>> symbolically linked it back to the data directory.\n>\n> You shut it down first right?\n>\n> -- \n> Rod Taylor <rbt@rbt.ca>\n>\n> PGP Key: http://www.rbt.ca/rbtpub.asc\n> <signature.asc>\n\n", "msg_date": "Fri, 24 Jan 2003 20:14:09 -0500", "msg_from": "Noah Silverman <noah@allresearch.com>", "msg_from_op": false, "msg_subject": "Re: WEIRD CRASH?!?!" }, { "msg_contents": "On Fri, 2003-01-24 at 20:14, Noah Silverman wrote:\n> OF COURSE!\n\nSorry, but I've seen people try to do that stuff before.\n\n\n> On Friday, January 24, 2003, at 08:08 PM, Rod Taylor wrote:\n> \n> > On Fri, 2003-01-24 at 19:17, Noah Silverman wrote:\n> >> We are using a 3ware escalade on this box.\n> >>\n> >> One clue.\n> >>\n> >> I actually moved the pg_xlog directory to another drive and then\n> >> symbolically linked it back to the data directory.\n> >\n> > You shut it down first right?\n> >\n> > -- \n> > Rod Taylor <rbt@rbt.ca>\n> >\n> > PGP Key: http://www.rbt.ca/rbtpub.asc\n> > <signature.asc>\n-- \nRod Taylor <rbt@rbt.ca>\n\nPGP Key: http://www.rbt.ca/rbtpub.asc", "msg_date": "24 Jan 2003 20:20:11 -0500", "msg_from": "Rod Taylor <rbt@rbt.ca>", "msg_from_op": false, "msg_subject": "Re: WEIRD CRASH?!?!" }, { "msg_contents": "Noah Silverman <noah@allresearch.com> writes:\n> It actually looks like the problem was with file descriptors. Our \n> shell only had 1024 set, and we also have mysql running and using up a \n> bunch of those. We just upped to limit to 8000 to see it that would \n> give postgres more room to breathe.\n\nAh-hah. You might also want to set max_files_per_process (in\npostgresql.conf) to something small enough to ensure Postgres can't run\nyou out of descriptors. Linux has a bad habit of promising more than\nit can deliver when Postgres asks how many FDs are okay to use. 
The\nmax_files_per_process setting is useful to prevent Postgres from\nbelieving whatever fairy-tale sysconf(3) tells it.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 24 Jan 2003 20:38:44 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: WEIRD CRASH?!?! " }, { "msg_contents": "I wrote:\n> Hmm ... I wonder if Noah's machine could have been running out of kernel\n> file table slots, or something like that? It does seem that it'd be\n> more robust to use something like stat(2) to probe for an existing file.\n\nI've applied a patch to do it that way in CVS HEAD. After examining the\ncode further I'm inclined not to risk back-patching it into 7.3, though.\nxlog.c is full of open() calls that will elog(PANIC) if they fail, so\nI think there was only a very small window of opportunity for Noah to\nsee this failure and not another one. The patch thus probably\ncontributes little real gain in reliability.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 24 Jan 2003 22:10:27 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: WEIRD CRASH?!?! " }, { "msg_contents": "On Fri, Jan 24, 2003 at 07:08:29PM -0500, Noah Silverman wrote:\n> Yes,\n> but I'm the only one logged into this box, and I didn't kill anything. \n> It appears to have died all by itself.\n\nIs this on Linux, and were you short on memory? Linux, in a\ncompletely brain-dead design, runs around 'kill -9'-ing random\nprocesses when it starts to think the machine is going to exhaust its\nmemory (or at least it used to. I dunno if it still does).\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<andrew@libertyrms.info> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Sat, 25 Jan 2003 13:44:12 -0500", "msg_from": "Andrew Sullivan <andrew@libertyrms.info>", "msg_from_op": false, "msg_subject": "Re: WEIRD CRASH?!?!" } ]
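Tying together the file-descriptor advice in the thread above, here is a hedged sketch of the two knobs involved. The numbers are illustrative only; the idea is to raise the per-process limit in the environment that starts the postmaster, and to cap what each backend will try to open so that all backends together stay well under that limit:

    # in the shell or init script that starts the postmaster
    # (on PAM-based systems the limit can also be raised in /etc/security/limits.conf)
    ulimit -n 8192

    # in postgresql.conf
    max_files_per_process = 1000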
[ { "msg_contents": "Hi!\n\nI have a table with year, month, day and hour fields (all SMALLINT\ntype). Selecting one row from it takes about 40 msecs, and I am\nnow trying to use a DATE type instead of the first three fields.\n\nNow the select time has decreased to less than a millisecond, but I found that I\nmust use this form: hour=10::smallint instead of simple hour=10,\nbecause in the latter case PostgreSQL does a sequential scan.\n\nI've heard something about type coercion issues, so I just want to say\nthat it is very funny to see this sort of thing...\n\n", "msg_date": "Thu, 23 Jan 2003 13:14:25 +0500", "msg_from": "Timur Irmatov <thor@sarkor.com>", "msg_from_op": true, "msg_subject": "types & index usage" } ]
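To make the coercion issue above concrete, a small sketch against a made-up table of the same shape (table and index names are invented). On planners of this era an unadorned integer constant is typed as int4 and is not matched against a smallint index, while a cast or a quoted (initially untyped) literal is:

    -- hypothetical table matching the description in the post
    CREATE TABLE samples (d date, hour smallint);
    CREATE INDEX samples_hour_idx ON samples (hour);

    EXPLAIN SELECT * FROM samples WHERE hour = 10;            -- sequential scan
    EXPLAIN SELECT * FROM samples WHERE hour = 10::smallint;  -- index scan
    EXPLAIN SELECT * FROM samples WHERE hour = '10';          -- can also use the index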
[ { "msg_contents": "To preface my question, we are still in the process of evaluating postgres \nto determine if we want to switch our production environment over.\n\nI'm curious about where I can find documentation about crash recovery in \npostgres. In mysql, there is a nice table recovery utility (myisamchk). \nis there something similar in postgres? What do we do if a table or \ndatabase becomes corrupted? (I'm aware of backup techniques, but it isn't \nfeasible for some of our larger tables. We're already running on raid 5, \nbut can't do much more)\n\nThanks,\n\n-N\n", "msg_date": "Thu, 23 Jan 2003 22:32:58 -0500", "msg_from": "Noah Silverman <noah@allresearch.com>", "msg_from_op": true, "msg_subject": "Crash Recovery" }, { "msg_contents": "Noah Silverman <noah@allresearch.com> writes:\n> I'm curious about where I can find documentation about crash recovery in \n> postgres. In mysql, there is a nice table recovery utility (myisamchk). \n> is there something similar in postgres?\n\nThere are no automated recovery tools for Postgres, because there are\nno known failure modes that are systematic enough to allow automatic\nrecovery. We prefer to fix such bugs rather than patch around them.\n\nThere are some last-ditch tools for reconstructing indexes (REINDEX)\nand for throwing away the WAL log (pg_resetxlog) but I have not seen\nany recent cases where I would have felt that blind invocation of either\nwould be a good move.\n\n> What do we do if a table or \n> database becomes corrupted? (I'm aware of backup techniques, but it isn't \n> feasible for some of our larger tables.\n\nReconsider that. If your data center burns down tonight, what is your\nfallback? Ultimately, you *must* have a backup copy, or you're just not\ntaking the possibility of failure seriously.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 24 Jan 2003 01:29:35 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Crash Recovery " }, { "msg_contents": "On Thu, Jan 23, 2003 at 10:32:58PM -0500, Noah Silverman wrote:\n> To preface my question, we are still in the process of evaluating postgres \n> to determine if we want to switch our production environment over.\n> \n> I'm curious about where I can find documentation about crash recovery in \n> postgres. In mysql, there is a nice table recovery utility (myisamchk). \n\nIt recovers automatically. Make sure you run with fsync turned on. \nThat calls fsync on the WAL at the point of every COMMIT, and COMMIT\nisn't finished before the fsync returns. Then, in case of a crash,\nthe WAL just plays back and fixes up the data area.\n\n> is there something similar in postgres? What do we do if a table or \n> database becomes corrupted? (I'm aware of backup techniques, but it isn't \n\nI have never had a table become corrupted under Postgres. There have\nbeen some recent cases where people's bad hardware caused bad data to\nmake it into a table. Postgres's error reporting usually saves you\nthere, because you can go in and stomp on the bad tuple if need be. \nThere are some utilities to help in this; one of them, from Red Hat,\nallows you to look at the binary data in various formats (it's pretty\nslick). I believe it's available from sources.redhat.com/rhdb.\n\n> feasible for some of our larger tables. We're already running on raid 5, \n> but can't do much more)\n\nI suspect you can. First, are you using ECC memory in your\nproduction machines? If not, start doing so. Now. 
It is _the most\nimportant_ thing, aside from RAID, that you can do to protect your\ndata. Almost every problem of inconsistency I've seen on the lists\nin the past year and a bit has been to do with bad hardware --\nusually memory or disk controllers. (BTW, redundant disk\ncontrollers, and ones with some intelligence built in so that they\ncheck themsleves, are also mighty valuable here. But memory goes bad\nway more often.)\n\nAlso, I'm not sure just what you mean about backups \"not being\nfeasible\" for some of the larger tables, but you need to back up\ndaily. Since pg_dump takes a consistent snapshot, there's no data\ninconsistency trouble, and you can just start the backup and go away. \nIf the resulting files are too large, use split. And if the problem\nis space, well, disk is cheap these days, and so is tape, compared to\nhaving to re-get the data you lost.\n\nA\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<andrew@libertyrms.info> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Fri, 24 Jan 2003 08:22:19 -0500", "msg_from": "Andrew Sullivan <andrew@libertyrms.info>", "msg_from_op": false, "msg_subject": "Re: Crash Recovery" }, { "msg_contents": "On Thu, 2003-01-23 at 21:32, Noah Silverman wrote:\n> To preface my question, we are still in the process of evaluating postgres \n> to determine if we want to switch our production environment over.\n> \n> I'm curious about where I can find documentation about crash recovery in \n> postgres. In mysql, there is a nice table recovery utility (myisamchk). \n> is there something similar in postgres? What do we do if a table or \n> database becomes corrupted? (I'm aware of backup techniques, but it isn't \n> feasible for some of our larger tables. We're already running on raid 5, \n> but can't do much more)\n\nOf course it's feasible!! If corporations can backup terrabyte-sized\ndatabases, then you can backup your comparatively puny DB.\n\nIn fact, if your data is vital to your company, you *must* back it\nup. Otherwise, poof goes the company if the computer is destroyed.\n\nNow, it might cost some bucks to buy a tape drive, or a multi-loader,\nif you have *lots* of data, but it *can* be done...\n\nBtw, what happens if an obscure bug in the RAID controller shows is\nhead, and starts corrupting your data? A table recovery utility\nwouldn't do squat, then...\n\n-- \n+---------------------------------------------------------------+\n| Ron Johnson, Jr. mailto:ron.l.johnson@cox.net |\n| Jefferson, LA USA http://members.cox.net/ron.l.johnson |\n| |\n| \"Fear the Penguin!!\" |\n+---------------------------------------------------------------+\n\n", "msg_date": "24 Jan 2003 07:52:57 -0600", "msg_from": "Ron Johnson <ron.l.johnson@cox.net>", "msg_from_op": false, "msg_subject": "Re: Crash Recovery" }, { "msg_contents": "On Fri, 2003-01-24 at 07:22, Andrew Sullivan wrote:\n> On Thu, Jan 23, 2003 at 10:32:58PM -0500, Noah Silverman wrote:\n> > To preface my question, we are still in the process of evaluating postgres \n> > to determine if we want to switch our production environment over.\n> > \n> > I'm curious about where I can find documentation about crash recovery in \n> > postgres. In mysql, there is a nice table recovery utility (myisamchk). \n> \n> It recovers automatically. Make sure you run with fsync turned on. \n> That calls fsync on the WAL at the point of every COMMIT, and COMMIT\n> isn't finished before the fsync returns. 
Then, in case of a crash,\n> the WAL just plays back and fixes up the data area.\n\nOn commercial databases, there's a command to flush the roll-forward\nlogs to tape at intervals during the day.\n\nThus, if the disk(s) get corrupted, one can restore the database to\nnew disks, then apply the on-tape roll-forward logs to the database,\nand you'd have only lost a few hours of data, instead of however\nmany hours (or days) it's been since the last database backup.\n\nAlso, flushing them to tape (or a different partition) ensures that\nthey don't fill up the partition during a particularly intensive\nbatch job.\n\nAre there any FM's that explain how this works in Postgres?\n\nThanks,\nRon\n-- \n+---------------------------------------------------------------+\n| Ron Johnson, Jr. mailto:ron.l.johnson@cox.net |\n| Jefferson, LA USA http://members.cox.net/ron.l.johnson |\n| |\n| \"Fear the Penguin!!\" |\n+---------------------------------------------------------------+\n\n", "msg_date": "24 Jan 2003 08:12:19 -0600", "msg_from": "Ron Johnson <ron.l.johnson@cox.net>", "msg_from_op": false, "msg_subject": "Crash Recovery, pt 2" }, { "msg_contents": "\nSpeaking about daily backups... We are running into some serious\ntrouble with our backup policy.\n\nFirst (and less important), the size of our backups is increasing\na lot; yet information is not changing, only being added; so, the\nobvious question: is there a way to make incremental backup?\n\nAnd the second (and intriguing) problem: whenever I run pg_dump,\nmy system *freezes* until pg_dump finishes. When I say \"system\",\nI mean the software that is running and sending data to the PG\ndatabase. It just freezes, users are unable to connect during\nseveral minutes, and the ones already connected think the server\ndied, so they end up disconnecting after one or two minutes\nseeing that the server does not respond.\n\nIs this normal? Is there any way to avoid it? (I guess if I\nhave a solution to the first problem -- i.e., doing incremental\nbackups -- then that would solve this one, since it would only\n\"freeze\" the system for a few seconds, which wouldn't be that\nbad...)\n\nThanks for any comments!\n\nCarlos\n--\n\n\n", "msg_date": "Fri, 24 Jan 2003 10:16:42 -0500", "msg_from": "Carlos Moreno <moreno@mochima.com>", "msg_from_op": false, "msg_subject": "Having trouble with backups (was: Re: Crash Recovery)" }, { "msg_contents": "On Fri, Jan 24, 2003 at 08:12:19AM -0600, Ron Johnson wrote:\n\n> On commercial databases, there's a command to flush the roll-forward\n> logs to tape at intervals during the day.\n\n[. . .]\n\n> Are there any FM's that explain how this works in Postgres?\n\nNot yet, because you can't do it.\n\nThere is, I understand, some code currently being included in 7.4 to\ndo this. So that's when it'll happen. Look for \"point in time\nrecovery\" or \"PITR\" on the -hackers list to see the progress.\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<andrew@libertyrms.info> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Fri, 24 Jan 2003 10:48:34 -0500", "msg_from": "Andrew Sullivan <andrew@libertyrms.info>", "msg_from_op": false, "msg_subject": "Re: Crash Recovery, pt 2" }, { "msg_contents": "On Fri, 2003-01-24 at 10:16, Carlos Moreno wrote:\n> Speaking about daily backups... 
We are running into some serious\n> trouble with our backup policy.\n> \n> First (and less important), the size of our backups is increasing\n> a lot; yet information is not changing, only being added; so, the\n> obvious question: is there a way to make incremental backup?\n\nIncremental backups are coming. Some folks at RedHat are working on\nfinishing a PIT implementation, with with any luck 7.4 will do what you\nwant.\n\nFor the time being you might be able to cheat. If you're not touching\nthe old data, it should come out in roughly the same order every time.\n\nYou might be able to get away with doing a diff between the new backup\nand an older one, and simply store that. When restoring, you'll need to\npatch together the proper restore file.\n\n> And the second (and intriguing) problem: whenever I run pg_dump,\n> my system *freezes* until pg_dump finishes. When I say \"system\",\n\nNo, this isn't normal -- nor do I believe it. The only explanation would\nbe a hardware or operating system limitation. I.e. with heavy disk usage\nit used to be possible to peg the CPU -- making everything else CPU\nstarved, but the advent of DMA drives put an end to that.\n\nA pg_dump is not resource friendly, simply due to the quantity of\ninformation its dealing with. Are you dumping across a network? Perhaps\nthe NIC is maxed out.\n\n-- \nRod Taylor <rbt@rbt.ca>\n\nPGP Key: http://www.rbt.ca/rbtpub.asc", "msg_date": "24 Jan 2003 11:08:55 -0500", "msg_from": "Rod Taylor <rbt@rbt.ca>", "msg_from_op": false, "msg_subject": "Re: Having trouble with backups (was: Re: Crash Recovery)" }, { "msg_contents": "Carlos Moreno wrote:\n<snip>\n> And the second (and intriguing) problem: whenever I run pg_dump,\n> my system *freezes* until pg_dump finishes. When I say \"system\",\n> I mean the software that is running and sending data to the PG\n> database. It just freezes, users are unable to connect during\n> several minutes, and the ones already connected think the server\n> died, so they end up disconnecting after one or two minutes\n> seeing that the server does not respond.\n\nIs there any chance that you have hardware problems? For example a \ncouple of disk areas that are defective and the system is not happy \nabout, or maybe hard drive controller problems?\n\nWith PC's, this sort of thing generally seems to mean hardware problems \nof some sort that are being triggered by PostgreSQL having to run \nthrough the entire dataset. Could be caused by I/O load, could be \ncaused by hard drive errors, etc.\n\n?\n\n> Is this normal?\n\nNo.\n\nOut of curiosity, which operating system are you using?\n\n:-(\n\nRegards and best wishes,\n\nJustin Clift\n\n\n > Is there any way to avoid it? (I guess if I\n> have a solution to the first problem -- i.e., doing incremental\n> backups -- then that would solve this one, since it would only\n> \"freeze\" the system for a few seconds, which wouldn't be that\n> bad...)\n> \n> Thanks for any comments!\n> \n> Carlos\n> -- \n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. 
He told me to try to be in the\nfirst group; there was less competition there.\"\n- Indira Gandhi\n\n", "msg_date": "Sat, 25 Jan 2003 02:39:00 +1030", "msg_from": "Justin Clift <justin@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: Having trouble with backups (was: Re: Crash Recovery)" }, { "msg_contents": "On Fri, Jan 24, 2003 at 10:16:42AM -0500, Carlos Moreno wrote:\n> obvious question: is there a way to make incremental backup?\n\nNot really, at the moment. Sorry. It's supposed to be coming soon\n(see my other message about PITR).\n\n> my system *freezes* until pg_dump finishes. When I say \"system\",\n\n> Is this normal? Is there any way to avoid it? (I guess if I\n\nNo, it's not normal. I think some additional digging is needed. \n\nOne thing that is important is to make sure your pg_dump doesn't\ncause swapping on the machine. Causing swapping is easy if you have\nbeen too aggressive in shared-memory allocation for the postmaster,\nand your OS is careless about who gets to be a candidate for paging. \n(Solaris 7.1 without priority paging was subject to this problem, for\ninstance).\n\nA\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<andrew@libertyrms.info> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Fri, 24 Jan 2003 11:13:33 -0500", "msg_from": "Andrew Sullivan <andrew@libertyrms.info>", "msg_from_op": false, "msg_subject": "Re: Having trouble with backups (was: Re: Crash Recovery)" }, { "msg_contents": "On Fri, 2003-01-24 at 09:48, Andrew Sullivan wrote:\n> On Fri, Jan 24, 2003 at 08:12:19AM -0600, Ron Johnson wrote:\n> \n> > On commercial databases, there's a command to flush the roll-forward\n> > logs to tape at intervals during the day.\n> \n> [. . .]\n> \n> > Are there any FM's that explain how this works in Postgres?\n> \n> Not yet, because you can't do it.\n> \n> There is, I understand, some code currently being included in 7.4 to\n> do this. So that's when it'll happen. Look for \"point in time\n> recovery\" or \"PITR\" on the -hackers list to see the progress.\n\nGreat! That's a big step towards enterprise functiomality.\n\nAnother big step would be aggregate functions using indexes, but \nthat's been discussed before...\n\n-- \n+---------------------------------------------------------------+\n| Ron Johnson, Jr. mailto:ron.l.johnson@cox.net |\n| Jefferson, LA USA http://members.cox.net/ron.l.johnson |\n| |\n| \"Fear the Penguin!!\" |\n+---------------------------------------------------------------+\n\n", "msg_date": "24 Jan 2003 12:46:49 -0600", "msg_from": "Ron Johnson <ron.l.johnson@cox.net>", "msg_from_op": false, "msg_subject": "Re: Crash Recovery, pt 2" }, { "msg_contents": "Carlos Moreno <moreno@mochima.com> writes:\n> And the second (and intriguing) problem: whenever I run pg_dump,\n> my system *freezes* until pg_dump finishes. When I say \"system\",\n> I mean the software that is running and sending data to the PG\n> database.\n\nOther people have responded on the assumption that this is a performance\nproblem, but you should also consider the possibility that it's bad\ncoding of your application software. Does your app try to grab\nexclusive table locks? If so, it'll sit there waiting for the pg_dump\nto complete. pg_dump only takes ACCESS SHARE lock on the tables it's\nworking on, which is the weakest type of lock and does not conflict with\nmost database operations ... 
but it does conflict with ACCESS EXCLUSIVE\nlock requests.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 24 Jan 2003 16:01:41 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Having trouble with backups (was: Re: Crash Recovery) " }, { "msg_contents": "Tom Lane wrote:\n\n>Carlos Moreno <moreno@mochima.com> writes:\n>\n>>And the second (and intriguing) problem: whenever I run pg_dump,\n>>my system *freezes* until pg_dump finishes. When I say \"system\",\n>>I mean the software that is running and sending data to the PG\n>>database.\n>>\n>\n>Other people have responded on the assumption that this is a performance\n>problem, but you should also consider the possibility that it's bad\n>coding of your application software. Does your app try to grab\n>exclusive table locks? If so, it'll sit there waiting for the pg_dump\n>to complete.\n>\n\nThanks Tom and the others that have replied.\n\nOne quick question, Tom, before some general comments and\nreply to the other messages... Where would I specify any\nlocks the software wants to do? Is it something you do\nwhen connecting to the database, or when executing the\nquery?? (I ask this because, that I know, we're not doing\nany locks; but this may just be lack of knowledge on my\npart; I may be doing that without being aware of it)\n\n(I guess I'll check the docs, instead of asking you guys\nto do my homework! :-))\n\nAssuming that I indeed am not locking any tables, I tend to\nsuspect that it is a problem of excessive workload; I'd\nlike to doubt the possibility of defective hardware --\nit's a dedicated server hired from a company that I'd like\nto believe are serious guys :-) (Rackforce.com, in case\nsomeone wants to break some bad news to me :-O )\n\nThe server is a Dual Athlon 1.8GHz, with 1GB of RAM,\nrunning Linux 7.3, and approx. 250MB for shared buffers.\nI installed PostgreSQL from the sources (7.2.3). It's\nrunning nothing else (I mean, no apache, no public ftp\nor downloads), other than our application, that is.\n\n\"vmstat -n 1\" reports ZERO swaps (si and so columns)\nduring normal operation at peak times, and also during\npg_dump (CPU idle time typically is around 95%, maybe\ngoing down to70 or 80 at peak times, and drops to approx.\n40-60% during the time pg_dump is running -- would that\nbe high enough load to make the software slow down to\na crawl?).\n\nAnd no, as I said above, I don't think the software locks\nany tables -- in fact, if you ask me, I would say *there\nis* bad coding in the application, but precisely because\nthere are no locks, no transactions (I know, shame on me!\nThat's near the top in the list of most important things\nto do...), so that's why I was so reluctant to believe\nmy colleague when he insisted that the pg_dump's were\n\"freezing\" the application... I had to see it with my\nown eyes, and on two different occasions, to be convinced\n:-(\n\nIn case this tells you something... The size of the\nbackup files (in plain ASCII) are around 300MB (the\ncommand is \"nice pg_dump -c -f file.sql dbname\").\n\n\nAny further comments will be welcome and highly\nappreciated. But thank you all for the replies so\nfar! 
It gives me a good starting point to do some\ndigging.\n\nThanks,\n\nCarlos\n--\n\n\n", "msg_date": "Fri, 24 Jan 2003 19:29:52 -0500", "msg_from": "Carlos Moreno <moreno@mochima.com>", "msg_from_op": false, "msg_subject": "Re: Having trouble with backups (was: Re: Crash Recovery)" }, { "msg_contents": "Carlos Moreno wrote:\n> And no, as I said above, I don't think the software locks\n> any tables -- in fact, if you ask me, I would say *there\n> is* bad coding in the application, but precisely because\n> there are no locks, no transactions (I know, shame on me!\n> That's near the top in the list of most important things\n> to do...), so that's why I was so reluctant to believe\n> my colleague when he insisted that the pg_dump's were\n> \"freezing\" the application... I had to see it with my\n> own eyes, and on two different occasions, to be convinced\n> :-(\n> \n> In case this tells you something... The size of the\n> backup files (in plain ASCII) are around 300MB (the\n> command is \"nice pg_dump -c -f file.sql dbname\").\n\nOne thing you can do to help track this down is to place \n\n stats_command_string = on\n\nin your postgresql.conf and restart the database (it may be sufficient\nto tell the database to reread the config file via \"pg_ctl reload\").\nThen, when the backup is going, run the application.\n\nWhen it \"freezes\", connect to the database via psql as the user\npostgres and do a \"select * from pg_stat_activity\". You'll see the\nlist of connected processes and the current query being executed by\neach, if any.\n\nDo that multiple times and you should see the progress, if any, the\napplication is making in terms of database queries.\n\n\nHope this helps...\n\n\n-- \nKevin Brown\t\t\t\t\t kevin@sysexperts.com\n", "msg_date": "Fri, 24 Jan 2003 17:04:36 -0800", "msg_from": "Kevin Brown <kevin@sysexperts.com>", "msg_from_op": false, "msg_subject": "Re: Having trouble with backups (was: Re: Crash Recovery)" }, { "msg_contents": "Carlos Moreno <moreno@mochima.com> writes:\n> Tom Lane wrote:\n>> Other people have responded on the assumption that this is a performance\n>> problem, but you should also consider the possibility that it's bad\n>> coding of your application software. Does your app try to grab\n>> exclusive table locks?\n\n> One quick question, Tom, before some general comments and\n> reply to the other messages... Where would I specify any\n> locks the software wants to do?\n\nIf you are not issuing any explicit \"LOCK\" SQL commands, then you can\ndisregard my theory.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 24 Jan 2003 20:19:02 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Having trouble with backups (was: Re: Crash Recovery) " }, { "msg_contents": "I said:\n> Carlos Moreno <moreno@mochima.com> writes:\n>> One quick question, Tom, before some general comments and\n>> reply to the other messages... Where would I specify any\n>> locks the software wants to do?\n\n> If you are not issuing any explicit \"LOCK\" SQL commands, then you can\n> disregard my theory.\n\nActually, that's too simple. 
Are you creating and dropping tables,\nor issuing schema-change commands (such as ADD COLUMN or RENAME)?\nAll of those things take exclusive locks on the tables they modify.\nOrdinary SELECT/INSERT/UPDATE/DELETE operations can run in parallel with\npg_dump, but messing with the database structure is another story.\n\nI guess the real question here is whether your app is actually stopped\ndead (as it would be if waiting for a lock), or just slowed to a crawl\n(as a performance problem could do). I cannot tell if your \"frozen\"\ndescription is hyperbole or literal truth.\n\nOne thing that might help diagnose it is to look at the output of ps\nauxww (or ps -ef on SysV-ish platforms) to see what all the backends are\ncurrently doing while the problem exists.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 24 Jan 2003 20:29:41 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Having trouble with backups (was: Re: Crash Recovery) " }, { "msg_contents": "On Fri, 24 Jan 2003, Carlos Moreno wrote:\n\n\n> The server is a Dual Athlon 1.8GHz, with 1GB of RAM,\n> running Linux 7.3, and approx. 250MB for shared buffers.\n> ...\n> In case this tells you something... The size of the\n> backup files (in plain ASCII) are around 300MB (the\n> command is \"nice pg_dump -c -f file.sql dbname\").\n\nI was going to ask you to check your disk I/O statistics, but that tells\nme that disk I/O is probably not the problem. If the ASCII dump file\n(I assume by \"plain ASCII\" you mean uncompressed as well) is only 300\nMB, your database size is likely well under 100 MB. In which case the\nentire database ought to be residing in the buffer cache, and you should\nsee maximum CPU utilisation during the dump, and not too much disk\nI/O. (This is, however, assuming that that's the only database on your\nmachine. You don't have another 250 GB database that gets lots of random\naccess hiding there, do you? :-))\n\nOn a big machine like that, with such a small database, you should be\nable to do a dump in a couple of minutes with little noticable impact on\nthe performance of clients.\n\nI would probably start with carefully tracing what your clients are doing\nduring backup, and where they're blocking.\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n", "msg_date": "Sat, 25 Jan 2003 13:20:49 +0900 (JST)", "msg_from": "Curt Sampson <cjs@cynic.net>", "msg_from_op": false, "msg_subject": "Re: Having trouble with backups (was: Re: Crash Recovery)" }, { "msg_contents": "On Fri, Jan 24, 2003 at 08:29:41PM -0500, Tom Lane wrote:\n> auxww (or ps -ef on SysV-ish platforms) to see what all the backends are\n\nExcept Solaris, where ps -ef gives you no information at all. Use\n/usr/ucb/ps -auxww.\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<andrew@libertyrms.info> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Sat, 25 Jan 2003 10:20:49 -0500", "msg_from": "Andrew Sullivan <andrew@libertyrms.info>", "msg_from_op": false, "msg_subject": "Re: Having trouble with backups (was: Re: Crash Recovery)" }, { "msg_contents": "\nTom Lane wrote:\n\n>I said:\n>\n>>Carlos Moreno <moreno@mochima.com> writes:\n>>\n>>>One quick question, Tom, before some general comments and\n>>>reply to the other messages... 
Where would I specify any\n>>>locks the software wants to do?\n>>>\n>\n>>If you are not issuing any explicit \"LOCK\" SQL commands, then you can\n>>disregard my theory.\n>>\n\nWell, it was a good thing that you brought it to my attention.\nYes, two minutes after I wrote the message I found the docs\nthat told me it is an SQL command -- which means that I'm\npositively sure that I'm not doing any of those :-) I guess\na well-developed software could use some locks here and there,\nand the risk of making a mistake and \"over-blocking\" things is\nthere...\n\n\n\n>>Actually, that's too simple. Are you creating and dropping tables,\n>>or issuing schema-change commands (such as ADD COLUMN or RENAME)?\n>>All of those things take exclusive locks on the tables they modify.\n>>Ordinary SELECT/INSERT/UPDATE/DELETE operations can run in parallel with\n>>pg_dump, but messing with the database structure is another story.\n>>\n\n\nI do that (changing the database schema while the system is\nrunning) once in a while -- but not on a regular basis, and\ndefinitely never during the time a pg_dump is in progress\n(*that* would have scared me to death ;-))\n \n\n>\n>I guess the real question here is whether your app is actually stopped\n>dead (as it would be if waiting for a lock), or just slowed to a crawl\n>(as a performance problem could do). I cannot tell if your \"frozen\"\n>description is hyperbole or literal truth.\n>\n\nActually, you got me on that one... From the \"practical\" point of\nview, you could say it's literal truth (i.e., the system responsiveness\nfalls to ZERO). The system is an online multi-player game, where the\nmain things the database is doing is holding the users information\nto process the login authentications, and logging results and the\nprogress of games (to later -- offline -- compute statistics,\nrankings, etc.). Logging is done on a separate worker thread, so\nit shouldn't matter if that stops for a few minutes (the lists of\nSQL's pending to be executed would just grow during that time)...\n\nBut the thing is, when I run pg_dump, the games freeze, you are\nabsolutely unable to connect (the server does not respond, period),\nand the players that are in a game, playing, massively abandon\ngames, and you then see comments in the chat window that the\nserver went down, etc. (i.e., I take it the server stopped\nresponding to them and they abandoned thinking that the connection\nhad dropped, or that the server had died).\n\nNow, I guess a more specific answer to your question is important\n(i.e., is the above behaviour the result of the system slowing to\na crawl, or is it that the software just hung on a single db.Exec\nstatement in the main loop and no single line of code is being\nexecuted until the pg_dump finishes? -- according to the comments\nso far, I would say this last option is not possible), and I think\nI'll get such an answer when running some tests as suggested by you\nand others that replied.\n\n>One thing that might help diagnose it is to look at the output of ps\n>auxww (or ps -ef on SysV-ish platforms) to see what all the backends are\n>currently doing while the problem exists.\n>\n\nWe have done (IIRC) top (the command \"top\", that is), and yes, the\npostmaster process takes a lot of CPU... 
(not sure of the exact\nnumbers, but it was at the top).\n\nAnyway, thanks again guys for the valuable comments and ideas!!\n\nCarlos\n--\n \n\n\n\n", "msg_date": "Sat, 25 Jan 2003 10:22:51 -0500", "msg_from": "Carlos Moreno <moreno@mochima.com>", "msg_from_op": false, "msg_subject": "Re: Having trouble with backups (was: Re: Crash Recovery)" }, { "msg_contents": "On Sat, Jan 25, 2003 at 10:22:51AM -0500, Carlos Moreno wrote:\n\n> Well, it was a good thing that you brought it to my attention.\n> Yes, two minutes after I wrote the message I found the docs\n> that told me it is an SQL command -- which means that I'm\n> positively sure that I'm not doing any of those :-) I guess\n\nIf you have a lot of foreign keys and are doing long-running UPDATES\n(or other related things), I think you might also see the same thing. \nYou could spot this with ps -auxww (or friends) by looking for [some\ndb operation] waiting.\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<andrew@libertyrms.info> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Sat, 25 Jan 2003 12:17:10 -0500", "msg_from": "Andrew Sullivan <andrew@libertyrms.info>", "msg_from_op": false, "msg_subject": "Re: Having trouble with backups (was: Re: Crash Recovery)" } ]
[ { "msg_contents": "Folks,\n\nWhat mount options to people use for Ext3, particularly what do you set \"data \n= \" for a high-transaction database? I'm used to ReiserFS (\"noatime, \nnotail\") and am not really sure where to go with Ext3.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Fri, 24 Jan 2003 11:20:14 -0800", "msg_from": "Josh Berkus <josh@agliodbs.com>", "msg_from_op": true, "msg_subject": "Mount options for Ext3?" }, { "msg_contents": "Josh Berkus wrote:\n> Folks,\n> \n> What mount options to people use for Ext3, particularly what do you set \"data \n> = \" for a high-transaction database? I'm used to ReiserFS (\"noatime, \n> notail\") and am not really sure where to go with Ext3.\n\nFor ReiserFS, I can certainly understand using \"noatime\", but I'm not\nsure why you use \"notail\" except to allow LILO to operate properly on\nit.\n\nThe default for ext3 is to do ordered writes: data is written before\nthe associated metadata transaction commits, but the data itself isn't\njournalled. But because PostgreSQL synchronously writes the\ntransaction log (using fsync() by default, if I'm not mistaken) and\nuses sync() during a savepoint, I would think that ordered writes at\nthe filesystem level would probably buy you very little in the way of\nadditional data integrity in the event of a crash.\n\nSo if I'm right about that, then you might consider using the\n\"data=writeback\" option for the filesystem that contains the actual\ndata (usually /usr/local/pgsql/data), but I'd use the default\n(\"data=ordered\") at the very least (I suppose there's no harm in using\n\"data=journal\" if you're willing to put up with the performance hit,\nbut it's not clear to me what benefit, if any, there is) for\neverything else.\n\n\nI use ReiserFS also, so I'm basing the above on what knowledge I have\nof the ext3 filesystem and the way PostgreSQL writes data.\n\n\nThe more interesting question in my mind is: if you use PostgreSQL on\nan ext3 filesystem with \"data=ordered\" or \"data=journal\", can you get\naway with turning off PostgreSQL's fsync altogether and still get the\nsame kind of data integrity that you'd get with fsync enabled? If the\noperating system is able to guarantee data integrity, is it still\nnecessary to worry about it at the database level?\n\nI suspect the answer to that is that you can safely turn off fsync\nonly if the operating system will guarantee that write transactions\nfrom a process are actually committed in the order they arrive from\nthat process. Otherwise you'd have to worry about write transactions\nto the transaction log committing before the writes to the data files\nduring a savepoint, which would leave the overall database in an\ninconsistent state if the system were to crash after the transaction\nlog write (which marks the savepoint as completed) committed but\nbefore the data file writes committed. And my suspicion is that the\noperating system rarely makes any such guarantee, journalled\nfilesystem or not.\n\n\n\n-- \nKevin Brown\t\t\t\t\t kevin@sysexperts.com\n", "msg_date": "Fri, 24 Jan 2003 16:30:11 -0800", "msg_from": "Kevin Brown <kevin@sysexperts.com>", "msg_from_op": false, "msg_subject": "Re: Mount options for Ext3?" 
}, { "msg_contents": "Kevin Brown <kevin@sysexperts.com> writes:\n> I suspect the answer to that is that you can safely turn off fsync\n> only if the operating system will guarantee that write transactions\n> from a process are actually committed in the order they arrive from\n> that process.\n\nYeah. We use fsync partly so that when we tell a client a transaction\nis committed, it really is committed (ie, down to disk) --- but also\nas a means of controlling write order. I strongly doubt that any modern\nfilesystem will promise to execute writes exactly in the order issued,\nunless prodded by means such as fsync.\n\n> Otherwise you'd have to worry about write transactions\n> to the transaction log committing before the writes to the data files\n> during a savepoint,\n\nActually, the other way around is the problem. The WAL algorithm works\nso long as log writes hit disk before the data-file changes they\ndescribe (that's why it's called write *ahead* log).\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 24 Jan 2003 20:16:48 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Mount options for Ext3? " }, { "msg_contents": "Tom Lane wrote:\n> > Otherwise you'd have to worry about write transactions\n> > to the transaction log committing before the writes to the data files\n> > during a savepoint,\n> \n> Actually, the other way around is the problem. The WAL algorithm works\n> so long as log writes hit disk before the data-file changes they\n> describe (that's why it's called write *ahead* log).\n\nHmm...a case where the transaction data gets written to the files\nbefore the transaction itself even manages to get written to the log?\nTrue. But I was thinking about the following:\n\nI was presuming that when a savepoint occurs, a marker is written to\nthe log indicating which transactions had been committed to the data\nfiles, and that this marker was paid attention to during database\nstartup.\n\nSo suppose the marker makes it to the log but not all of the data the\nmarker refers to makes it to the data files. Then the system crashes.\n\nWhen the database starts back up, the savepoint marker in the\ntransaction log shows that the transactions had already been committed\nto disk. But because the OS wrote the requested data (including the\nsavepoint marker) out of order, the savepoint marker made it to the\ndisk before some of the data made it to the data files. And so, the\ndatabase is in an inconsistent state and it has no way to know about\nit.\n\nBut then, I guess the easy way around the above problem is to always\ncommit all the transactions in the log to disk when the database comes\nup, which renders the savepoint marker moot...and leads back to the\nscenario you were referring to...\n\nIf the savepoint only commits the older transactions in the log (and\nnot all of them) to disk, the possibility of the situation you're\nreferring would, I'd think, be reduced (possibly quite considerably).\n\n\n\n...or is my understanding of how all this works completely off?\n\n\n\n\n-- \nKevin Brown\t\t\t\t\t kevin@sysexperts.com\n", "msg_date": "Fri, 24 Jan 2003 18:11:59 -0800", "msg_from": "Kevin Brown <kevin@sysexperts.com>", "msg_from_op": false, "msg_subject": "Re: Mount options for Ext3?" 
}, { "msg_contents": "\nKevin,\n\n> So if I'm right about that, then you might consider using the\n> \"data=writeback\" option for the filesystem that contains the actual\n> data (usually /usr/local/pgsql/data), but I'd use the default\n> (\"data=ordered\") at the very least (I suppose there's no harm in using\n> \"data=journal\" if you're willing to put up with the performance hit,\n> but it's not clear to me what benefit, if any, there is) for\n> everything else.\n\nWell, the only reason I use Ext3 rather than Ext2 is to prevent fsck's on \nrestart after a crash. So I'm interested in the data option that gives the \nminimum performance hit, even if it means that I sacrifice some reliability. \nI'm running with fsynch on, and the DB is on a mirrored drive array, so I'm \nnot too worried about filesystem-level errors.\n\nSo would that be \"data=writeback\"?\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Fri, 24 Jan 2003 18:22:33 -0800", "msg_from": "Josh Berkus <josh@agliodbs.com>", "msg_from_op": true, "msg_subject": "Re: Mount options for Ext3?" }, { "msg_contents": "Josh Berkus wrote:\n> Well, the only reason I use Ext3 rather than Ext2 is to prevent fsck's on \n> restart after a crash. So I'm interested in the data option that gives the \n> minimum performance hit, even if it means that I sacrifice some reliability. \n> I'm running with fsynch on, and the DB is on a mirrored drive array, so I'm \n> not too worried about filesystem-level errors.\n> \n> So would that be \"data=writeback\"?\n\nYes. That should give almost the same semantics as ext2 does by\ndefault, except that metadata is journalled, so no fsck needed. :-)\n\nIn fact, I believe that's exactly how ReiserFS works, if I'm not\nmistaken (I saw someone claim that it does data journalling, but I've\nnever seen any references to how to get ReiserFS to journal data).\n\n\nBTW, why exactly are you running ext3? It has some nice journalling\nfeatures but it sounds like you don't want to use them. But at the\nsame time, it uses pre-allocated inodes just like ext2 does, so it's\npossible to run out of inodes on ext2/3 while AFAIK that's not\npossible under ReiserFS. That's not likely to be a problem unless\nyou're running a news server or something, though. :-)\n\nOn the other hand, ext3 with data=writeback will probably be faster\nthan ReiserFS for a number of things.\n\nNo idea how stable ext3 is versus ReiserFS...\n\n\n\n-- \nKevin Brown\t\t\t\t\t kevin@sysexperts.com\n", "msg_date": "Fri, 24 Jan 2003 18:50:08 -0800", "msg_from": "Kevin Brown <kevin@sysexperts.com>", "msg_from_op": false, "msg_subject": "Re: Mount options for Ext3?" }, { "msg_contents": "Kevin Brown <kevin@sysexperts.com> writes:\n> I was presuming that when a savepoint occurs, a marker is written to\n> the log indicating which transactions had been committed to the data\n> files, and that this marker was paid attention to during database\n> startup.\n\nNot quite. The marker says that all datafile updates described by\nlog entries before point X have been flushed to disk by the checkpoint\n--- and, therefore, if we need to restart we need only replay log\nentries occurring after the last checkpoint's point X.\n\nThis has nothing directly to do with which transactions are committed\nor not committed. 
If we based checkpoint behavior on that, we'd need\nto maintain an indefinitely large amount of WAL log to cope with\nlong-running transactions.\n\nThe actual checkpoint algorithm is\n\n\ttake note of current logical end of WAL (this will be point X)\n\twrite() all dirty buffers in shared buffer arena\n\tsync() to ensure that above writes, as well as previous ones,\n\t\tare on disk\n\tput checkpoint record referencing point X into WAL; write and\n\t\tfsync WAL\n\tupdate pg_control with new checkpoint record, fsync it\n\nSince pg_control is what's examined after restart, the checkpoint is\neffectively committed when the pg_control write hits disk. At any\ninstant before that, a crash would result in replaying from the\nprior checkpoint's point X. The algorithm is correct if and only if\nthe pg_control write hits disk after all the other writes mentioned.\n\nThe key assumption we are making about the filesystem's behavior is that\nwrites scheduled by the sync() will occur before the pg_control write\nthat's issued after it. People have occasionally faulted this algorithm\nby quoting the sync() man page, which saith (in the Gospel According To\nHP)\n\n The writing, although scheduled, is not necessarily complete upon\n return from sync.\n\nThis, however, is not a problem in itself. What we need to know is\nwhether the filesystem will allow writes issued after the sync() to\ncomplete before those \"scheduled\" by the sync().\n\n\n> So suppose the marker makes it to the log but not all of the data the\n> marker refers to makes it to the data files. Then the system crashes.\n\nI think that this analysis is not relevant to what we're doing.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 24 Jan 2003 21:58:55 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Mount options for Ext3? " }, { "msg_contents": "Tom Lane wrote:\n> Kevin Brown <kevin@sysexperts.com> writes:\n> > I was presuming that when a savepoint occurs, a marker is written to\n> > the log indicating which transactions had been committed to the data\n> > files, and that this marker was paid attention to during database\n> > startup.\n> \n> Not quite. The marker says that all datafile updates described by\n> log entries before point X have been flushed to disk by the checkpoint\n> --- and, therefore, if we need to restart we need only replay log\n> entries occurring after the last checkpoint's point X.\n> \n> This has nothing directly to do with which transactions are committed\n> or not committed. If we based checkpoint behavior on that, we'd need\n> to maintain an indefinitely large amount of WAL log to cope with\n> long-running transactions.\n\nAh. My apologies for my imprecise wording. I should have said\n\"...indicating which transactions had been written to the data files\"\ninstead of \"...had been committed to the data files\", and meant to say\n\"checkpoint\" but instead said \"savepoint\". I'll try to do better\nhere.\n\n> The actual checkpoint algorithm is\n> \n> \ttake note of current logical end of WAL (this will be point X)\n> \twrite() all dirty buffers in shared buffer arena\n> \tsync() to ensure that above writes, as well as previous ones,\n> \t\tare on disk\n> \tput checkpoint record referencing point X into WAL; write and\n> \t\tfsync WAL\n> \tupdate pg_control with new checkpoint record, fsync it\n> \n> Since pg_control is what's examined after restart, the checkpoint is\n> effectively committed when the pg_control write hits disk. 
At any\n> instant before that, a crash would result in replaying from the\n> prior checkpoint's point X. The algorithm is correct if and only if\n> the pg_control write hits disk after all the other writes mentioned.\n\n[...]\n\n> > So suppose the marker makes it to the log but not all of the data the\n> > marker refers to makes it to the data files. Then the system crashes.\n> \n> I think that this analysis is not relevant to what we're doing.\n\nAgreed. The context of that analysis is when synchronous writes by\nthe database are turned off and one is left to rely on the operating\nsystem to do the right thing. Clearly it doesn't apply when\nsynchronous writes are enabled. As long as only one process handles a\ncheckpoint, an operating system that guarantees that a process' writes\nare committed to disk in the same order that they were requested,\ncombined with a journalling filesystem that at least wrote all data\nprior to committing the associated metadata transactions, would be\nsufficient to guarantee the integrity of the database even if all\nsynchronous writes by the database were turned off. This would hold\neven if the operating system reordered writes from multiple processes.\nIt suggests an operating system feature that could be considered\nhighly desirable (and relates to the discussion elsewhere about\ntrading off shared buffers against OS file cache: it's often better to\nrely on the abilities of the OS rather than roll your own mechanism).\n\nOne question I have is: in the event of a crash, why not simply replay\nall the transactions found in the WAL? Is the startup time of the\ndatabase that badly affected if pg_control is ignored?\n\nIf there exists somewhere a reasonably succinct description of the\nreasoning behind the current transaction management scheme (including\nan analysis of the pros and cons), I'd love to read it and quit\nbugging you. :-)\n\n\n-- \nKevin Brown\t\t\t\t\t kevin@sysexperts.com\n", "msg_date": "Fri, 24 Jan 2003 20:13:19 -0800", "msg_from": "Kevin Brown <kevin@sysexperts.com>", "msg_from_op": false, "msg_subject": "Re: Mount options for Ext3?" }, { "msg_contents": "Kevin Brown <kevin@sysexperts.com> writes:\n> One question I have is: in the event of a crash, why not simply replay\n> all the transactions found in the WAL? Is the startup time of the\n> database that badly affected if pg_control is ignored?\n\nInteresting thought, indeed. Since we truncate the WAL after each\ncheckpoint, seems like this approach would no more than double the time\nfor restart. The win is it'd eliminate pg_control as a single point of\nfailure. It's always bothered me that we have to update pg_control on\nevery checkpoint --- it should be a write-pretty-darn-seldom file,\nconsidering how critical it is.\n\nI think we'd have to make some changes in the code for deleting old\nWAL segments --- right now it's not careful to delete them in order.\nBut surely that can be coped with.\n\nOTOH, this might just move the locus for fatal failures out of\npg_control and into the OS' algorithms for writing directory updates.\nWe would have no cross-check that the set of WAL file names visible in\npg_xlog is sensible or aligned with the true state of the datafile area.\nWe'd have to take it on faith that we should replay the visible files\nin their name order. 
This might mean we'd have to abandon the current\nhack of recycling xlog segments by renaming them --- which would be a\nnontrivial performance hit.\n\nComments anyone?\n\n> If there exists somewhere a reasonably succinct description of the\n> reasoning behind the current transaction management scheme (including\n> an analysis of the pros and cons), I'd love to read it and quit\n> bugging you. :-)\n\nNot that I know of. Would you care to prepare such a writeup? There\nis a lot of material in the source-code comments, but no coherent\npresentation.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 25 Jan 2003 00:40:33 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "WAL replay logic (was Re: [PERFORM] Mount options for Ext3?)" }, { "msg_contents": "On Sat, 25 Jan 2003, Tom Lane wrote:\n\n> We'd have to take it on faith that we should replay the visible files\n> in their name order.\n\nCouldn't you could just put timestamp information at the beginning if\neach file, (or perhaps use that of the first transaction), and read the\nbeginning of each file to find out what order to run them in. Perhaps\nyou could even check the last transaction in each file as well to see if\nthere are \"holes\" between the available logs.\n\n> This might mean we'd have to abandon the current\n> hack of recycling xlog segments by renaming them --- which would be a\n> nontrivial performance hit.\n\nRename and write a \"this is an empty logfile\" record at the beginning?\nThough I don't see how you could do this in an atomic manner.... Maybe if\nyou included the filename in the WAL file header, you'd see that if the name\ndoesn't match the header, it's a recycled file....\n\n(This response sent only to hackers.)\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n", "msg_date": "Sat, 25 Jan 2003 16:59:17 +0900 (JST)", "msg_from": "Curt Sampson <cjs@cynic.net>", "msg_from_op": false, "msg_subject": "Re: WAL replay logic (was Re: [PERFORM] Mount options for Ext3?)" }, { "msg_contents": "Tom Lane wrote:\n> Kevin Brown <kevin@sysexperts.com> writes:\n> > One question I have is: in the event of a crash, why not simply replay\n> > all the transactions found in the WAL? Is the startup time of the\n> > database that badly affected if pg_control is ignored?\n> \n> Interesting thought, indeed. Since we truncate the WAL after each\n> checkpoint, seems like this approach would no more than double the time\n> for restart. \n\nHmm...truncating the WAL after each checkpoint minimizes the amount of\ndisk space eaten by the WAL, but on the other hand keeping older\nsegments around buys you some safety in the event that things get\nreally hosed. But your later comments make it sound like the older\nWAL segments are kept around anyway, just rotated.\n\n> The win is it'd eliminate pg_control as a single point of\n> failure. It's always bothered me that we have to update pg_control on\n> every checkpoint --- it should be a write-pretty-darn-seldom file,\n> considering how critical it is.\n> \n> I think we'd have to make some changes in the code for deleting old\n> WAL segments --- right now it's not careful to delete them in order.\n> But surely that can be coped with.\n\nEven that might not be necessary. 
See below.\n\n> OTOH, this might just move the locus for fatal failures out of\n> pg_control and into the OS' algorithms for writing directory updates.\n> We would have no cross-check that the set of WAL file names visible in\n> pg_xlog is sensible or aligned with the true state of the datafile\n> area.\n\nWell, what we somehow need to guarantee is that there is always WAL\ndata that is older than the newest consistent data in the datafile\narea, right? Meaning that if the datafile area gets scribbled on in\nan inconsistent manner, you always have WAL data to fill in the gaps.\n\nRight now we do that by using fsync() and sync(). But I think it\nwould be highly desirable to be able to more or less guarantee\ndatabase consistency even if fsync were turned off. The price for\nthat might be too high, though.\n\n> We'd have to take it on faith that we should replay the visible files\n> in their name order. This might mean we'd have to abandon the current\n> hack of recycling xlog segments by renaming them --- which would be a\n> nontrivial performance hit.\n\nIt's probably a bad idea for the replay to be based on the filenames.\nInstead, it should probably be based strictly on the contents of the\nxlog segment files. Seems to me the beginning of each segment file\nshould have some kind of header information that makes it clear where\nin the scheme of things it belongs. Additionally, writing some sort\nof checksum, either at the beginning or the end, might not be a bad\nidea either (doesn't have to be a strict checksum, but it needs to be\nsomething that's reasonably likely to catch corruption within a\nsegment).\n\nDo that, and you don't have to worry about renaming xlog segments at\nall: you simply move on to the next logical segment in the list (a\nreplay just reads the header info for all the segments and orders the\nlist as it sees fit, and discards all segments prior to any gap it\nfinds. It may be that you simply have to bail out if you find a gap,\nthough). As long as the xlog segment checksum information is\nconsistent with the contents of the segment and as long as its\ntransactions pick up where the previous segment's left off (assuming\nit's not the first segment, of course), you can safely replay the\ntransactions it contains.\n\nI presume we're recycling xlog segments in order to avoid file\ncreation and unlink overhead? Otherwise you can simply create new\nsegments as needed and unlink old segments as policy dictates.\n\n> Comments anyone?\n> \n> > If there exists somewhere a reasonably succinct description of the\n> > reasoning behind the current transaction management scheme (including\n> > an analysis of the pros and cons), I'd love to read it and quit\n> > bugging you. :-)\n> \n> Not that I know of. Would you care to prepare such a writeup? There\n> is a lot of material in the source-code comments, but no coherent\n> presentation.\n\nBe happy to. Just point me to any non-obvious source files.\n\nThus far on my plate:\n\n 1. PID file locking for postmaster startup (doesn't strictly need\n\tto be the PID file but it may as well be, since we're already\n\tmessing with it anyway). I'm currently looking at how to do\n\tthe autoconf tests, since I've never developed using autoconf\n\tbefore.\n\n 2. Documenting the transaction management scheme.\n\nI was initially interested in implementing the explicit JOIN\nreordering but based on your recent comments I think you have a much\nbetter handle on that than I. 
I'll be very interested to see what you\ndo, to see if it's anything close to what I figure has to happen...\n\n\n-- \nKevin Brown\t\t\t\t\t kevin@sysexperts.com\n", "msg_date": "Sat, 25 Jan 2003 02:11:12 -0800", "msg_from": "Kevin Brown <kevin@sysexperts.com>", "msg_from_op": false, "msg_subject": "Re: WAL replay logic (was Re: [PERFORM] Mount options for Ext3?)" }, { "msg_contents": "Curt Sampson <cjs@cynic.net> writes:\n> On Sat, 25 Jan 2003, Tom Lane wrote:\n>> We'd have to take it on faith that we should replay the visible files\n>> in their name order.\n\n> Couldn't you could just put timestamp information at the beginning if\n> each file,\n\nGood thought --- there's already an xlp_pageaddr field on every page\nof WAL, and you could examine that to be sure it matches the file name.\nIf not, the file csn be ignored.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 25 Jan 2003 11:16:12 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: WAL replay logic (was Re: [PERFORM] Mount options for Ext3?) " }, { "msg_contents": "On 2003-01-24 21:58:55 -0500, Tom Lane wrote:\n> The key assumption we are making about the filesystem's behavior is that\n> writes scheduled by the sync() will occur before the pg_control write\n> that's issued after it. People have occasionally faulted this algorithm\n> by quoting the sync() man page, which saith (in the Gospel According To\n> HP)\n> \n> The writing, although scheduled, is not necessarily complete upon\n> return from sync.\n> \n> This, however, is not a problem in itself. What we need to know is\n> whether the filesystem will allow writes issued after the sync() to\n> complete before those \"scheduled\" by the sync().\n> \n\nCertain linux 2.4.* kernels (not sure which, newer ones don't seem to have \nit) have the following kernel config option:\n\nUse the NOOP Elevator (WARNING)\nCONFIG_BLK_DEV_ELEVATOR_NOOP\n If you are using a raid class top-level driver above the ATA/IDE core,\n one may find a performance boost by preventing a merging and re-sorting\n of the new requests.\n\n If unsure, say N.\n\nIf one were certain his OS wouldn't do any re-ordering of writes, would it be\nsafe to run with fsync = off? (not that I'm going to try this, but I'm just\ncurious)\n\n\nVincent van Leeuwen\nMedia Design\n", "msg_date": "Sun, 26 Jan 2003 00:21:54 +0100", "msg_from": "pgsql.spam@vinz.nl", "msg_from_op": false, "msg_subject": "Re: Mount options for Ext3?" }, { "msg_contents": "pgsql.spam@vinz.nl writes:\n> If one were certain his OS wouldn't do any re-ordering of writes, would it be\n> safe to run with fsync = off? (not that I'm going to try this, but I'm just\n> curious)\n\nI suppose so ... but if your OS doesn't do *any* re-ordering of writes,\nI'd say you need a better OS. Even in Postgres, we'd often like the OS\nto collapse multiple writes of the same disk page into one write. And\nwe certainly want the various writes forced by a sync() to be done with\nsome intelligence about disk layout, not blindly in order of issuance.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 26 Jan 2003 00:34:48 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Mount options for Ext3? " }, { "msg_contents": "On Sat, 2003-01-25 at 23:34, Tom Lane wrote:\n> pgsql.spam@vinz.nl writes:\n> > If one were certain his OS wouldn't do any re-ordering of writes, would it be\n> > safe to run with fsync = off? (not that I'm going to try this, but I'm just\n> > curious)\n> \n> I suppose so ... 
but if your OS doesn't do *any* re-ordering of writes,\n> I'd say you need a better OS. Even in Postgres, we'd often like the OS\n> to collapse multiple writes of the same disk page into one write. And\n> we certainly want the various writes forced by a sync() to be done with\n> some intelligence about disk layout, not blindly in order of issuance.\n\nAnd anyway, wouldn't SCSI's Tagged Command Queueing override it all,\nno matter if the OS did re-ordering or not?\n\nBut then, it really means it when it says that fsync() succeeds, so does\nTCQ matter in this case?\n\n-- \n+---------------------------------------------------------------+\n| Ron Johnson, Jr. mailto:ron.l.johnson@cox.net |\n| Jefferson, LA USA http://members.cox.net/ron.l.johnson |\n| |\n| \"Fear the Penguin!!\" |\n+---------------------------------------------------------------+\n\n", "msg_date": "26 Jan 2003 02:04:45 -0600", "msg_from": "Ron Johnson <ron.l.johnson@cox.net>", "msg_from_op": false, "msg_subject": "Re: Mount options for Ext3?" }, { "msg_contents": "\nKevin,\n\n> BTW, why exactly are you running ext3? It has some nice journalling\n> features but it sounds like you don't want to use them. \n\nBecause our RAID array controller, an Adaptec 2200S, is only compatible with \nRedHat 8.0, without some fancy device driver hacking. It certainly wasn't my \nfirst choice, I've been using Reiser for 4 years and am very happy with it.\n\nWarning to anyone following this thread: The web site info for the 2200S says \n\"Redhat and SuSE\", but drivers are only available for RedHat. Adaptec's \nLinux guru, Brian, has been unable to get the web site maintainers to correct \nthe information on the site.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Mon, 27 Jan 2003 11:23:58 -0800", "msg_from": "Josh Berkus <josh@agliodbs.com>", "msg_from_op": true, "msg_subject": "Re: Mount options for Ext3?" }, { "msg_contents": "On Mon, 2003-01-27 at 13:23, Josh Berkus wrote:\n> Kevin,\n> \n> > BTW, why exactly are you running ext3? It has some nice journalling\n> > features but it sounds like you don't want to use them. \n> \n> Because our RAID array controller, an Adaptec 2200S, is only compatible with \n> RedHat 8.0, without some fancy device driver hacking. It certainly wasn't my \n\nBinary-only, or OSS and just tuned to their kernels?\n\n-- \n+---------------------------------------------------------------+\n| Ron Johnson, Jr. mailto:ron.l.johnson@cox.net |\n| Jefferson, LA USA http://members.cox.net/ron.l.johnson |\n| |\n| \"Fear the Penguin!!\" |\n+---------------------------------------------------------------+\n\n", "msg_date": "27 Jan 2003 13:43:48 -0600", "msg_from": "Ron Johnson <ron.l.johnson@cox.net>", "msg_from_op": false, "msg_subject": "Re: Mount options for Ext3?" }, { "msg_contents": "\nLet me add that I have heard that on Linux XFS is better for PostgreSQL\nthan either ext3 or Reiser.\n\n---------------------------------------------------------------------------\n\nKevin Brown wrote:\n> Josh Berkus wrote:\n> > Well, the only reason I use Ext3 rather than Ext2 is to prevent fsck's on \n> > restart after a crash. So I'm interested in the data option that gives the \n> > minimum performance hit, even if it means that I sacrifice some reliability. \n> > I'm running with fsynch on, and the DB is on a mirrored drive array, so I'm \n> > not too worried about filesystem-level errors.\n> > \n> > So would that be \"data=writeback\"?\n> \n> Yes. 
That should give almost the same semantics as ext2 does by\n> default, except that metadata is journalled, so no fsck needed. :-)\n> \n> In fact, I believe that's exactly how ReiserFS works, if I'm not\n> mistaken (I saw someone claim that it does data journalling, but I've\n> never seen any references to how to get ReiserFS to journal data).\n> \n> \n> BTW, why exactly are you running ext3? It has some nice journalling\n> features but it sounds like you don't want to use them. But at the\n> same time, it uses pre-allocated inodes just like ext2 does, so it's\n> possible to run out of inodes on ext2/3 while AFAIK that's not\n> possible under ReiserFS. That's not likely to be a problem unless\n> you're running a news server or something, though. :-)\n> \n> On the other hand, ext3 with data=writeback will probably be faster\n> than ReiserFS for a number of things.\n> \n> No idea how stable ext3 is versus ReiserFS...\n> \n> \n> \n> -- \n> Kevin Brown\t\t\t\t\t kevin@sysexperts.com\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 27 Jan 2003 15:11:18 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Mount options for Ext3?" }, { "msg_contents": "\nIs there a TODO here? I like the idea of not writing pg_controldata, or\nat least allowing it not to be read, perhaps with a pg_resetxlog flag so\nwe can cleanly recover from a corrupt pg_controldata if the WAL files\nare OK.\n\nWe don't want to get rid of the WAL file rename optimization because\nthose are 16mb files and keeping them from checkpoint to checkpoint is\nprobably a win. I also like the idea of allowing something between our\n\"at the instant\" recovery, and no recovery with fsync off. A \"recover\nfrom last checkpooint time\" option would be really valuable for some.\n\n---------------------------------------------------------------------------\n\nKevin Brown wrote:\n> Tom Lane wrote:\n> > Kevin Brown <kevin@sysexperts.com> writes:\n> > > One question I have is: in the event of a crash, why not simply replay\n> > > all the transactions found in the WAL? Is the startup time of the\n> > > database that badly affected if pg_control is ignored?\n> > \n> > Interesting thought, indeed. Since we truncate the WAL after each\n> > checkpoint, seems like this approach would no more than double the time\n> > for restart. \n> \n> Hmm...truncating the WAL after each checkpoint minimizes the amount of\n> disk space eaten by the WAL, but on the other hand keeping older\n> segments around buys you some safety in the event that things get\n> really hosed. But your later comments make it sound like the older\n> WAL segments are kept around anyway, just rotated.\n> \n> > The win is it'd eliminate pg_control as a single point of\n> > failure. It's always bothered me that we have to update pg_control on\n> > every checkpoint --- it should be a write-pretty-darn-seldom file,\n> > considering how critical it is.\n> > \n> > I think we'd have to make some changes in the code for deleting old\n> > WAL segments --- right now it's not careful to delete them in order.\n> > But surely that can be coped with.\n> \n> Even that might not be necessary. 
See below.\n> \n> > OTOH, this might just move the locus for fatal failures out of\n> > pg_control and into the OS' algorithms for writing directory updates.\n> > We would have no cross-check that the set of WAL file names visible in\n> > pg_xlog is sensible or aligned with the true state of the datafile\n> > area.\n> \n> Well, what we somehow need to guarantee is that there is always WAL\n> data that is older than the newest consistent data in the datafile\n> area, right? Meaning that if the datafile area gets scribbled on in\n> an inconsistent manner, you always have WAL data to fill in the gaps.\n> \n> Right now we do that by using fsync() and sync(). But I think it\n> would be highly desirable to be able to more or less guarantee\n> database consistency even if fsync were turned off. The price for\n> that might be too high, though.\n> \n> > We'd have to take it on faith that we should replay the visible files\n> > in their name order. This might mean we'd have to abandon the current\n> > hack of recycling xlog segments by renaming them --- which would be a\n> > nontrivial performance hit.\n> \n> It's probably a bad idea for the replay to be based on the filenames.\n> Instead, it should probably be based strictly on the contents of the\n> xlog segment files. Seems to me the beginning of each segment file\n> should have some kind of header information that makes it clear where\n> in the scheme of things it belongs. Additionally, writing some sort\n> of checksum, either at the beginning or the end, might not be a bad\n> idea either (doesn't have to be a strict checksum, but it needs to be\n> something that's reasonably likely to catch corruption within a\n> segment).\n> \n> Do that, and you don't have to worry about renaming xlog segments at\n> all: you simply move on to the next logical segment in the list (a\n> replay just reads the header info for all the segments and orders the\n> list as it sees fit, and discards all segments prior to any gap it\n> finds. It may be that you simply have to bail out if you find a gap,\n> though). As long as the xlog segment checksum information is\n> consistent with the contents of the segment and as long as its\n> transactions pick up where the previous segment's left off (assuming\n> it's not the first segment, of course), you can safely replay the\n> transactions it contains.\n> \n> I presume we're recycling xlog segments in order to avoid file\n> creation and unlink overhead? Otherwise you can simply create new\n> segments as needed and unlink old segments as policy dictates.\n> \n> > Comments anyone?\n> > \n> > > If there exists somewhere a reasonably succinct description of the\n> > > reasoning behind the current transaction management scheme (including\n> > > an analysis of the pros and cons), I'd love to read it and quit\n> > > bugging you. :-)\n> > \n> > Not that I know of. Would you care to prepare such a writeup? There\n> > is a lot of material in the source-code comments, but no coherent\n> > presentation.\n> \n> Be happy to. Just point me to any non-obvious source files.\n> \n> Thus far on my plate:\n> \n> 1. PID file locking for postmaster startup (doesn't strictly need\n> \tto be the PID file but it may as well be, since we're already\n> \tmessing with it anyway). I'm currently looking at how to do\n> \tthe autoconf tests, since I've never developed using autoconf\n> \tbefore.\n> \n> 2. 
Documenting the transaction management scheme.\n> \n> I was initially interested in implementing the explicit JOIN\n> reordering but based on your recent comments I think you have a much\n> better handle on that than I. I'll be very interested to see what you\n> do, to see if it's anything close to what I figure has to happen...\n> \n> \n> -- \n> Kevin Brown\t\t\t\t\t kevin@sysexperts.com\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 27 Jan 2003 15:26:27 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: WAL replay logic (was Re: [PERFORM] Mount options for Ext3?)" }, { "msg_contents": "\nIs there a TODO here, like \"Allow recovery from corrupt pg_control via\nWAL\"?\n\n---------------------------------------------------------------------------\n\nKevin Brown wrote:\n> Tom Lane wrote:\n> > Kevin Brown <kevin@sysexperts.com> writes:\n> > > One question I have is: in the event of a crash, why not simply replay\n> > > all the transactions found in the WAL? Is the startup time of the\n> > > database that badly affected if pg_control is ignored?\n> > \n> > Interesting thought, indeed. Since we truncate the WAL after each\n> > checkpoint, seems like this approach would no more than double the time\n> > for restart. \n> \n> Hmm...truncating the WAL after each checkpoint minimizes the amount of\n> disk space eaten by the WAL, but on the other hand keeping older\n> segments around buys you some safety in the event that things get\n> really hosed. But your later comments make it sound like the older\n> WAL segments are kept around anyway, just rotated.\n> \n> > The win is it'd eliminate pg_control as a single point of\n> > failure. It's always bothered me that we have to update pg_control on\n> > every checkpoint --- it should be a write-pretty-darn-seldom file,\n> > considering how critical it is.\n> > \n> > I think we'd have to make some changes in the code for deleting old\n> > WAL segments --- right now it's not careful to delete them in order.\n> > But surely that can be coped with.\n> \n> Even that might not be necessary. See below.\n> \n> > OTOH, this might just move the locus for fatal failures out of\n> > pg_control and into the OS' algorithms for writing directory updates.\n> > We would have no cross-check that the set of WAL file names visible in\n> > pg_xlog is sensible or aligned with the true state of the datafile\n> > area.\n> \n> Well, what we somehow need to guarantee is that there is always WAL\n> data that is older than the newest consistent data in the datafile\n> area, right? Meaning that if the datafile area gets scribbled on in\n> an inconsistent manner, you always have WAL data to fill in the gaps.\n> \n> Right now we do that by using fsync() and sync(). But I think it\n> would be highly desirable to be able to more or less guarantee\n> database consistency even if fsync were turned off. The price for\n> that might be too high, though.\n> \n> > We'd have to take it on faith that we should replay the visible files\n> > in their name order. 
This might mean we'd have to abandon the current\n> > hack of recycling xlog segments by renaming them --- which would be a\n> > nontrivial performance hit.\n> \n> It's probably a bad idea for the replay to be based on the filenames.\n> Instead, it should probably be based strictly on the contents of the\n> xlog segment files. Seems to me the beginning of each segment file\n> should have some kind of header information that makes it clear where\n> in the scheme of things it belongs. Additionally, writing some sort\n> of checksum, either at the beginning or the end, might not be a bad\n> idea either (doesn't have to be a strict checksum, but it needs to be\n> something that's reasonably likely to catch corruption within a\n> segment).\n> \n> Do that, and you don't have to worry about renaming xlog segments at\n> all: you simply move on to the next logical segment in the list (a\n> replay just reads the header info for all the segments and orders the\n> list as it sees fit, and discards all segments prior to any gap it\n> finds. It may be that you simply have to bail out if you find a gap,\n> though). As long as the xlog segment checksum information is\n> consistent with the contents of the segment and as long as its\n> transactions pick up where the previous segment's left off (assuming\n> it's not the first segment, of course), you can safely replay the\n> transactions it contains.\n> \n> I presume we're recycling xlog segments in order to avoid file\n> creation and unlink overhead? Otherwise you can simply create new\n> segments as needed and unlink old segments as policy dictates.\n> \n> > Comments anyone?\n> > \n> > > If there exists somewhere a reasonably succinct description of the\n> > > reasoning behind the current transaction management scheme (including\n> > > an analysis of the pros and cons), I'd love to read it and quit\n> > > bugging you. :-)\n> > \n> > Not that I know of. Would you care to prepare such a writeup? There\n> > is a lot of material in the source-code comments, but no coherent\n> > presentation.\n> \n> Be happy to. Just point me to any non-obvious source files.\n> \n> Thus far on my plate:\n> \n> 1. PID file locking for postmaster startup (doesn't strictly need\n> \tto be the PID file but it may as well be, since we're already\n> \tmessing with it anyway). I'm currently looking at how to do\n> \tthe autoconf tests, since I've never developed using autoconf\n> \tbefore.\n> \n> 2. Documenting the transaction management scheme.\n> \n> I was initially interested in implementing the explicit JOIN\n> reordering but based on your recent comments I think you have a much\n> better handle on that than I. I'll be very interested to see what you\n> do, to see if it's anything close to what I figure has to happen...\n> \n> \n> -- \n> Kevin Brown\t\t\t\t\t kevin@sysexperts.com\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Fri, 14 Feb 2003 09:30:30 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: WAL replay logic (was Re: Mount options for Ext3?)" }, { "msg_contents": "On Fri, 14 Feb 2003, Bruce Momjian wrote:\n\n> Is there a TODO here, like \"Allow recovery from corrupt pg_control via\n> WAL\"?\n\nIsn't that already in section 12.2.1 of the documentation?\n\n Using pg_control to get the checkpoint position speeds up the\n recovery process, but to handle possible corruption of pg_control,\n we should actually implement the reading of existing log segments\n in reverse order -- newest to oldest -- in order to find the last\n checkpoint. This has not been implemented, yet.\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n", "msg_date": "Sat, 15 Feb 2003 17:29:21 +0900 (JST)", "msg_from": "Curt Sampson <cjs@cynic.net>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] WAL replay logic (was Re: Mount options for" }, { "msg_contents": "\nAdded to TODO:\n\n\t* Allow WAL information to recover corrupted pg_controldata\n\n---------------------------------------------------------------------------\n\nCurt Sampson wrote:\n> On Fri, 14 Feb 2003, Bruce Momjian wrote:\n> \n> > Is there a TODO here, like \"Allow recovery from corrupt pg_control via\n> > WAL\"?\n> \n> Isn't that already in section 12.2.1 of the documentation?\n> \n> Using pg_control to get the checkpoint position speeds up the\n> recovery process, but to handle possible corruption of pg_control,\n> we should actually implement the reading of existing log segments\n> in reverse order -- newest to oldest -- in order to find the last\n> checkpoint. This has not been implemented, yet.\n> \n> cjs\n> -- \n> Curt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n> Don't you know, in this new Dark Age, we're all light. --XTC\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Tue, 18 Feb 2003 00:15:43 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] WAL replay logic (was Re: Mount options for" }, { "msg_contents": "On Tue, 18 Feb 2003, Bruce Momjian wrote:\n\n>\n> Added to TODO:\n>\n> \t* Allow WAL information to recover corrupted pg_controldata\n>...\n> > Using pg_control to get the checkpoint position speeds up the\n> > recovery process, but to handle possible corruption of pg_control,\n> > we should actually implement the reading of existing log segments\n> > in reverse order -- newest to oldest -- in order to find the last\n> > checkpoint. This has not been implemented, yet.\n\nSo if you do this, do you still need to store that information in\npg_control at all?\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n", "msg_date": "Tue, 18 Feb 2003 14:26:46 +0900 (JST)", "msg_from": "Curt Sampson <cjs@cynic.net>", "msg_from_op": false, "msg_subject": "Re: WAL replay logic (was Re: [PERFORM] Mount options for" }, { "msg_contents": "\nUh, not sure. 
Does it guard against corrupt WAL records?\n\n---------------------------------------------------------------------------\n\nCurt Sampson wrote:\n> On Tue, 18 Feb 2003, Bruce Momjian wrote:\n> \n> >\n> > Added to TODO:\n> >\n> > \t* Allow WAL information to recover corrupted pg_controldata\n> >...\n> > > Using pg_control to get the checkpoint position speeds up the\n> > > recovery process, but to handle possible corruption of pg_control,\n> > > we should actually implement the reading of existing log segments\n> > > in reverse order -- newest to oldest -- in order to find the last\n> > > checkpoint. This has not been implemented, yet.\n> \n> So if you do this, do you still need to store that information in\n> pg_control at all?\n> \n> cjs\n> -- \n> Curt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n> Don't you know, in this new Dark Age, we're all light. --XTC\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Tue, 18 Feb 2003 14:08:16 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: WAL replay logic (was Re: [PERFORM] Mount options for" } ]
[ { "msg_contents": "Hi,\n\nWould LOCK TABLE ACCESS EXCLUSIVE MODE speed things up, when I have\na script that loads data by setting transactions, and then committing\nworks after every few thousand INSERTs?\n\nThanks,\nRon\n-- \n+---------------------------------------------------------------+\n| Ron Johnson, Jr. mailto:ron.l.johnson@cox.net |\n| Jefferson, LA USA http://members.cox.net/ron.l.johnson |\n| |\n| \"Fear the Penguin!!\" |\n+---------------------------------------------------------------+\n\n", "msg_date": "25 Jan 2003 12:57:33 -0600", "msg_from": "Ron Johnson <ron.l.johnson@cox.net>", "msg_from_op": true, "msg_subject": "LOCK TABLE & speeding up mass data loads" }, { "msg_contents": "On Sat, 2003-01-25 at 13:57, Ron Johnson wrote:\n> Hi,\n> \n> Would LOCK TABLE ACCESS EXCLUSIVE MODE speed things up, when I have\n> a script that loads data by setting transactions, and then committing\n> works after every few thousand INSERTs?\n\nIf you're the only person working on the database, then no. If you're\nfighting for resources with a bunch of other people -- then possibly,\nbut the others won't get anything done during this timeframe (of\ncourse).\n\nOh, and you're using COPY right?\n\n-- \nRod Taylor <rbt@rbt.ca>\n\nPGP Key: http://www.rbt.ca/rbtpub.asc", "msg_date": "25 Jan 2003 14:07:46 -0500", "msg_from": "Rod Taylor <rbt@rbt.ca>", "msg_from_op": false, "msg_subject": "Re: LOCK TABLE & speeding up mass data loads" }, { "msg_contents": "On Sat, 2003-01-25 at 13:07, Rod Taylor wrote:\n> On Sat, 2003-01-25 at 13:57, Ron Johnson wrote:\n> > Hi,\n> > \n> > Would LOCK TABLE ACCESS EXCLUSIVE MODE speed things up, when I have\n> > a script that loads data by setting transactions, and then committing\n> > works after every few thousand INSERTs?\n> \n> If you're the only person working on the database, then no. If you're\n> fighting for resources with a bunch of other people -- then possibly,\n> but the others won't get anything done during this timeframe (of\n> course).\n\nOk.\n\n> Oh, and you're using COPY right?\n\nNo. Too much data manipulation to do 1st. Also, by committing every\nX thousand rows, then if the process must be aborted, then there's\nno huge rollback, and the script can then skip to the last comitted\nrow and pick up from there.\n\n-- \n+---------------------------------------------------------------+\n| Ron Johnson, Jr. mailto:ron.l.johnson@cox.net |\n| Jefferson, LA USA http://members.cox.net/ron.l.johnson |\n| |\n| \"Fear the Penguin!!\" |\n+---------------------------------------------------------------+\n\n", "msg_date": "25 Jan 2003 13:19:09 -0600", "msg_from": "Ron Johnson <ron.l.johnson@cox.net>", "msg_from_op": true, "msg_subject": "Re: LOCK TABLE & speeding up mass data loads" }, { "msg_contents": "On Sun, 25 Jan 2003, Ron Johnson wrote:\n\n> > Oh, and you're using COPY right?\n>\n> No. Too much data manipulation to do 1st. Also, by committing every\n> X thousand rows, then if the process must be aborted, then there's\n> no huge rollback, and the script can then skip to the last comitted\n> row and pick up from there.\n\nI don't see how the amount of data manipulation makes a difference.\nWhere you now issue a BEGIN, issue a COPY instead. Where you now INSERT,\njust print the data for the columns, separated by tabs. Where you now\nissue a COMMIT, end the copy.\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. 
--XTC\n", "msg_date": "Mon, 27 Jan 2003 08:10:09 +0900 (JST)", "msg_from": "Curt Sampson <cjs@cynic.net>", "msg_from_op": false, "msg_subject": "Re: LOCK TABLE & speeding up mass data loads" }, { "msg_contents": "On Sun, 2003-01-26 at 17:10, Curt Sampson wrote:\n> On Sun, 25 Jan 2003, Ron Johnson wrote:\n> \n> > > Oh, and you're using COPY right?\n> >\n> > No. Too much data manipulation to do 1st. Also, by committing every\n> > X thousand rows, then if the process must be aborted, then there's\n> > no huge rollback, and the script can then skip to the last comitted\n> > row and pick up from there.\n> \n> I don't see how the amount of data manipulation makes a difference.\n> Where you now issue a BEGIN, issue a COPY instead. Where you now INSERT,\n> just print the data for the columns, separated by tabs. Where you now\n> issue a COMMIT, end the copy.\n\nYes, create an input file for COPY. Great idea. \n\nHowever, If I understand you correctly, then if I want to be able\nto not have to roll-back and re-run and complete COPY (which may\nentail millions of rows), then I'd have to have thousands of seperate\ninput files (which would get processed sequentially).\n\nHere's what I'd like to see:\nCOPY table [ ( column [, ...] ) ]\n FROM { 'filename' | stdin }\n [ [ WITH ] \n [ BINARY ] \n [ OIDS ]\n [ DELIMITER [ AS ] 'delimiter' ]\n [ NULL [ AS ] 'null string' ] ]\n [COMMIT EVERY ... ROWS WITH LOGGING] <<<<<<<<<<<<<\n [SKIP ... ROWS] <<<<<<<<<<<<<\n\nThis way, if I'm loading 25M rows, I can have it commit every, say,\n1000 rows, and if it pukes 1/2 way thru, then when I restart the \nCOPY, it can SKIP past what's already been loaded, and proceed apace.\n\n-- \n+---------------------------------------------------------------+\n| Ron Johnson, Jr. mailto:ron.l.johnson@cox.net |\n| Jefferson, LA USA http://members.cox.net/ron.l.johnson |\n| |\n| \"Fear the Penguin!!\" |\n+---------------------------------------------------------------+\n\n", "msg_date": "27 Jan 2003 03:08:20 -0600", "msg_from": "Ron Johnson <ron.l.johnson@cox.net>", "msg_from_op": true, "msg_subject": "Re: LOCK TABLE & speeding up mass data loads" }, { "msg_contents": "On 27 Jan 2003 at 3:08, Ron Johnson wrote:\n\n> Here's what I'd like to see:\n> COPY table [ ( column [, ...] ) ]\n> FROM { 'filename' | stdin }\n> [ [ WITH ] \n> [ BINARY ] \n> [ OIDS ]\n> [ DELIMITER [ AS ] 'delimiter' ]\n> [ NULL [ AS ] 'null string' ] ]\n> [COMMIT EVERY ... ROWS WITH LOGGING] <<<<<<<<<<<<<\n> [SKIP ... ROWS] <<<<<<<<<<<<<\n> \n> This way, if I'm loading 25M rows, I can have it commit every, say,\n> 1000 rows, and if it pukes 1/2 way thru, then when I restart the \n> COPY, it can SKIP past what's already been loaded, and proceed apace.\n\nIIRc, there is a hook to \\copy, not the postgreSQL command copy for how many \ntransactions you would like to see. I remember to have benchmarked that and \nconcluded that doing copy in one transaction is the fastest way of doing it.\n\nDOn't have a postgresql installation handy, me being in linux, but this is \ndefinitely possible..\n\nBye\n Shridhar\n\n--\nI still maintain the point that designing a monolithic kernel in 1991 is \nafundamental error. Be thankful you are not my student. 
You would not get \nahigh grade for such a design :-)(Andrew Tanenbaum to Linus Torvalds)\n\n", "msg_date": "Mon, 27 Jan 2003 15:15:07 +0530", "msg_from": "\"Shridhar Daithankar\" <shridhar_daithankar@persistent.co.in>", "msg_from_op": false, "msg_subject": "Re: LOCK TABLE & speeding up mass data loads" }, { "msg_contents": "On Mon, 2003-01-27 at 03:45, Shridhar Daithankar wrote:\n> On 27 Jan 2003 at 3:08, Ron Johnson wrote:\n> \n> > Here's what I'd like to see:\n> > COPY table [ ( column [, ...] ) ]\n> > FROM { 'filename' | stdin }\n> > [ [ WITH ] \n> > [ BINARY ] \n> > [ OIDS ]\n> > [ DELIMITER [ AS ] 'delimiter' ]\n> > [ NULL [ AS ] 'null string' ] ]\n> > [COMMIT EVERY ... ROWS WITH LOGGING] <<<<<<<<<<<<<\n> > [SKIP ... ROWS] <<<<<<<<<<<<<\n> > \n> > This way, if I'm loading 25M rows, I can have it commit every, say,\n> > 1000 rows, and if it pukes 1/2 way thru, then when I restart the \n> > COPY, it can SKIP past what's already been loaded, and proceed apace.\n> \n> IIRc, there is a hook to \\copy, not the postgreSQL command copy for how many \n\nI'll have to look into that.\n\n> transactions you would like to see. I remember to have benchmarked that and \n> concluded that doing copy in one transaction is the fastest way of doing it.\n\nBoy Scout motto: Be prepared!! (Serves me well as a DBA.)\n\nSo it takes a little longer. In case of failure, the time would be\nmore than made up. Also, wouldn't the WAL grow hugely if many millions\nof rows were inserted in one txn?\n\n-- \n+---------------------------------------------------------------+\n| Ron Johnson, Jr. mailto:ron.l.johnson@cox.net |\n| Jefferson, LA USA http://members.cox.net/ron.l.johnson |\n| |\n| \"Fear the Penguin!!\" |\n+---------------------------------------------------------------+\n\n", "msg_date": "27 Jan 2003 03:54:25 -0600", "msg_from": "Ron Johnson <ron.l.johnson@cox.net>", "msg_from_op": true, "msg_subject": "Re: LOCK TABLE & speeding up mass data loads" }, { "msg_contents": "On 27 Jan 2003 at 15:15, Shridhar Daithankar wrote:\n> IIRc, there is a hook to \\copy, not the postgreSQL command copy for how many \n> transactions you would like to see. I remember to have benchmarked that and \n> concluded that doing copy in one transaction is the fastest way of doing it.\n> \n> DOn't have a postgresql installation handy, me being in linux, but this is \n> definitely possible..\n\nI am sleeping. That should have read XP rather than linux.\n\nGrrr..\n\nBye\n Shridhar\n\n--\nLowery's Law:\tIf it jams -- force it. If it breaks, it needed replacing \nanyway.\n\n", "msg_date": "Mon, 27 Jan 2003 15:27:05 +0530", "msg_from": "\"Shridhar Daithankar\" <shridhar_daithankar@persistent.co.in>", "msg_from_op": false, "msg_subject": "Re: LOCK TABLE & speeding up mass data loads" }, { "msg_contents": "On 27 Jan 2003 at 3:54, Ron Johnson wrote:\n\n> On Mon, 2003-01-27 at 03:45, Shridhar Daithankar wrote:\n> > transactions you would like to see. I remember to have benchmarked that and \n> > concluded that doing copy in one transaction is the fastest way of doing it.\n> \n> Boy Scout motto: Be prepared!! (Serves me well as a DBA.)\n\nGoes for everything else as well..\n> \n> So it takes a little longer. In case of failure, the time would be\n> more than made up. Also, wouldn't the WAL grow hugely if many millions\n> of rows were inserted in one txn?\n\nNops.. If WAL starts recycling, postgresql should start flishing data from WAL \nto data files. \n\nAt any given moment, WAL will not exceed of what you have configured. 
They are \njust read ahead logs most of the times intended for crash recovery. \n(Consequently it does not help setting WAL bigger than required.)\n\nBye\n Shridhar\n\n--\nnominal egg:\tNew Yorkerese for expensive.\n\n", "msg_date": "Mon, 27 Jan 2003 15:30:39 +0530", "msg_from": "\"Shridhar Daithankar\" <shridhar_daithankar@persistent.co.in>", "msg_from_op": false, "msg_subject": "Re: LOCK TABLE & speeding up mass data loads" }, { "msg_contents": "On Mon, 27 Jan 2003, Ron Johnson wrote:\n\n> > I don't see how the amount of data manipulation makes a difference.\n> > Where you now issue a BEGIN, issue a COPY instead. Where you now INSERT,\n> > just print the data for the columns, separated by tabs. Where you now\n> > issue a COMMIT, end the copy.\n>\n> Yes, create an input file for COPY. Great idea.\n\nThat's not quite what I was thinking of. Don't create an input file,\njust send the commands directly to the server (if your API supports it).\nIf worst comes to worst, you could maybe open up a subprocess for a psql\nand write to its standard input.\n\n> However, If I understand you correctly, then if I want to be able\n> to not have to roll-back and re-run and complete COPY (which may\n> entail millions of rows), then I'd have to have thousands of seperate\n> input files (which would get processed sequentially).\n\nRight.\n\nBut you can probably commit much less often than 1000 rows. 10,000 or\n100,000 would probably be more practical.\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n", "msg_date": "Mon, 27 Jan 2003 19:23:10 +0900 (JST)", "msg_from": "Curt Sampson <cjs@cynic.net>", "msg_from_op": false, "msg_subject": "Re: LOCK TABLE & speeding up mass data loads" } ]
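To make the chunked-load idea above concrete, here is a minimal sketch of the approach Curt describes, with invented table and file names (logdata, chunk_*.tab). Outside an explicit BEGIN block each COPY statement commits on its own, so a failure part way through only costs the chunk that was in flight:

    COPY logdata FROM '/data/load/chunk_0001.tab';   -- first 100,000 rows
    COPY logdata FROM '/data/load/chunk_0002.tab';   -- next 100,000 rows
    -- ... and so on; after a crash, re-run only the chunks that did not finish.

    -- Server-side COPY reads files on the database server itself; from psql,
    -- the client-side variant Shridhar mentions avoids that requirement:
    --   \copy logdata from 'chunk_0002.tab'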
[ { "msg_contents": "Hi all,\n\nHave you got any data (ie in percentage) of around how much more CPU\nwork needed with the bigserial type in the queries?\n\nI have a log database with 100million records (the biggest table\ncontains 65million records) and I use bigserial data type as primary key\nnow. The primary key looks this way: YYYYMMDD1xxxxxxx where the first 8\nnumbers are the date, and the x's are the record sequence number on that\nday. This way the records are in ascendant order. Almost all of the\nqueries contains date constraints (PK like 'YYYYMMDD%'). I'd like to\nknow if I do it in a stupid way or not. I'm not a DBA expert so every\nidea are welcome. If you need more information about the\nhardware/software environment, the DB structure then I'll post them.\n\nThanks in advance for your help.\n\nGabor\n\n", "msg_date": "26 Jan 2003 22:24:41 +0100", "msg_from": "Medve Gabor <eire@enternet.hu>", "msg_from_op": true, "msg_subject": "bigserial vs serial - which one I'd have to use?" }, { "msg_contents": "On Sun, 2003-01-26 at 15:24, Medve Gabor wrote:\n> Hi all,\n> \n> Have you got any data (ie in percentage) of around how much more CPU\n> work needed with the bigserial type in the queries?\n> \n> I have a log database with 100million records (the biggest table\n> contains 65million records) and I use bigserial data type as primary key\n> now. The primary key looks this way: YYYYMMDD1xxxxxxx where the first 8\n> numbers are the date, and the x's are the record sequence number on that\n> day. This way the records are in ascendant order. Almost all of the\n> queries contains date constraints (PK like 'YYYYMMDD%'). I'd like to\n> know if I do it in a stupid way or not. I'm not a DBA expert so every\n> idea are welcome. If you need more information about the\n> hardware/software environment, the DB structure then I'll post them.\n\nI think you can only do LIKE queries on CHAR-type fields.\n\nBETWEEN ought to help you, though:\nSELECT * \nFROM foo \nwhere prim_key BETWEEN YYYYMMDD00000000 and YYYYMMDD999999999;\n\nAlternatively, if you really want to do 'YYYYMMDD%', you could create\na functional index on to_char(prim_key).\n\nLastly, you could create 2 fields and create a compound PK:\nPK_DATE DATE,\nPK_SERIAL BIGINT\n\nThen you could say:\nSELECT * \nFROM foo \nwhere pk_date = 'YYYY-MM-DD'\n\nOf course, then you'd be adding an extra 8 bytes to each column...\n\n-- \n+---------------------------------------------------------------+\n| Ron Johnson, Jr. mailto:ron.l.johnson@cox.net |\n| Jefferson, LA USA http://members.cox.net/ron.l.johnson |\n| |\n| \"Fear the Penguin!!\" |\n+---------------------------------------------------------------+\n\n", "msg_date": "27 Jan 2003 03:18:41 -0600", "msg_from": "Ron Johnson <ron.l.johnson@cox.net>", "msg_from_op": false, "msg_subject": "Re: bigserial vs serial - which one I'd have to use?" }, { "msg_contents": "\nMedve,\n\n> Have you got any data (ie in percentage) of around how much more CPU\n> work needed with the bigserial type in the queries?\n> \n> I have a log database with 100million records (the biggest table\n> contains 65million records) and I use bigserial data type as primary key\n> now. The primary key looks this way: YYYYMMDD1xxxxxxx where the first 8\n> numbers are the date, and the x's are the record sequence number on that\n> day. This way the records are in ascendant order. Almost all of the\n> queries contains date constraints (PK like 'YYYYMMDD%'). I'd like to\n> know if I do it in a stupid way or not. 
I'm not a DBA expert so every\n> idea are welcome. If you need more information about the\n> hardware/software environment, the DB structure then I'll post them.\n\nGiven that structure, I'd personally create a table with a 2-column primary \nkey, one column of type DATE and one SERIAL column. Alternately, if you find \nthe conversion of DATE to char for output purposes really slows things down, \none column of INT and one of SERIAL. Either way, the two columns together \nmake up the primary key.\n\nI would definitely suggest avoiting the temptation to do this as a single \ncolumn of type CHAR(). That would be vastly more costly than either \nstrategy mentioned above:\n\nDATE + SERIAL (INT) = 8 bytes\nINT + SERIAL (INT) = 8 bytes\nCHAR(16) = 18 bytes\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Mon, 27 Jan 2003 11:33:37 -0800", "msg_from": "Josh Berkus <josh@agliodbs.com>", "msg_from_op": false, "msg_subject": "Re: bigserial vs serial - which one I'd have to use?" }, { "msg_contents": "On Mon, 2003-01-27 at 13:33, Josh Berkus wrote:\n> Medve,\n> \n> > Have you got any data (ie in percentage) of around how much more CPU\n> > work needed with the bigserial type in the queries?\n> > \n> > I have a log database with 100million records (the biggest table\n> > contains 65million records) and I use bigserial data type as primary key\n> > now. The primary key looks this way: YYYYMMDD1xxxxxxx where the first 8\n> > numbers are the date, and the x's are the record sequence number on that\n> > day. This way the records are in ascendant order. Almost all of the\n> > queries contains date constraints (PK like 'YYYYMMDD%'). I'd like to\n> > know if I do it in a stupid way or not. I'm not a DBA expert so every\n> > idea are welcome. If you need more information about the\n> > hardware/software environment, the DB structure then I'll post them.\n> \n> Given that structure, I'd personally create a table with a 2-column primary \n> key, one column of type DATE and one SERIAL column. Alternately, if you find \n> the conversion of DATE to char for output purposes really slows things down, \n> one column of INT and one of SERIAL. Either way, the two columns together \n> make up the primary key.\n> \n> I would definitely suggest avoiting the temptation to do this as a single \n> column of type CHAR(). That would be vastly more costly than either \n> strategy mentioned above:\n> \n> DATE + SERIAL (INT) = 8 bytes\n\nAh, cool. I thought DATE was 8 bytes. Should have RTFM, of course.\n\n> INT + SERIAL (INT) = 8 bytes\n>\n> CHAR(16) = 18 bytes\n\n-- \n+---------------------------------------------------------------+\n| Ron Johnson, Jr. mailto:ron.l.johnson@cox.net |\n| Jefferson, LA USA http://members.cox.net/ron.l.johnson |\n| |\n| \"Fear the Penguin!!\" |\n+---------------------------------------------------------------+\n\n", "msg_date": "27 Jan 2003 13:55:04 -0600", "msg_from": "Ron Johnson <ron.l.johnson@cox.net>", "msg_from_op": false, "msg_subject": "Re: bigserial vs serial - which one I'd have to use?" } ]
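A sketch of the two-column primary key suggested above; the names logtable, log_date, log_seq and message are invented, and note that a SERIAL column draws from one global sequence rather than restarting each day the way the packed YYYYMMDD1xxxxxxx key does:

CREATE TABLE logtable (
    log_date date   NOT NULL,  -- the YYYYMMDD part of the old packed key
    log_seq  serial NOT NULL,  -- record sequence number (global, not per-day)
    message  text,
    PRIMARY KEY (log_date, log_seq)
);

-- "PK like 'YYYYMMDD%'" becomes a plain equality on the leading key column,
-- which the primary-key index can satisfy directly:
SELECT * FROM logtable WHERE log_date = '2003-01-27';

-- If the packed bigint key has to stay, a range test matching the
-- YYYYMMDD1xxxxxxx layout avoids LIKE on a numeric column:
--   SELECT * FROM oldlog WHERE id BETWEEN 2003012710000000 AND 2003012719999999;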
[ { "msg_contents": "Due to reasons that everyone can probably intuit, we are porting a large \nserver application from IBM Informix to PG. However, things that take \nmilliseconds in IFX are taking HOURS (not joking) in PG. I *think* I \nmay have come across some reasons why, but I would like to see if anyone \nelse has an opinion. I could not find anything relevant in docs (but if \nit is there, please point me to it).\n\nLet me give an example of one of the problems...\n\nI have a table that utilizes 2 foreign keys. It has 400000 records of \napproximately 512 bytes each (mostly text, except for the keys). When I \nrun a specific query on it, it takes 8000ms to complete, and it always \ndoes a full scan.\n\nI \"assumed\" that since I did not have to create an index on those \nforeign key fields in IFX, that I did not have to in PG. However, just \nfor kicks, I created an index on those 2 fields, and my query time \n(after the first, longer attempt, which I presume is from loading an \nindex) went from 8000ms to 100ms.\n\nSo, do we ALWAYS have to create indexes for foreign key fields in PG? \nDo the docs say this? (I couldn't find the info.)\n\nI will create other threads for my other issues.\n\nThanks!\n\n-- \nMatt Mello\n\n", "msg_date": "Mon, 27 Jan 2003 14:39:57 -0600", "msg_from": "Matt Mello <alien@spaceship.com>", "msg_from_op": true, "msg_subject": "Indexing foreign keys" }, { "msg_contents": "Yes, I had not only done a \"vacuum full analyze\" on the PG db once I \nstuffed it, but I also compared that with an IFX db that I had run \n\"update statistics high\" on. Things are much better with the FK indexes.\n\nDid the docs say to index those FK fields (is that standard in the DB \nindustry?), or was I just spoiled by IFX doing it for me? ;)\n\nThanks!\n\n\n\nChad Thompson wrote:\n> Make sure that you've run a vacuum and an analyze. There is also a\n> performance hit if the types of the fields or values are different. ie int\n> to int8\n\n-- \nMatt Mello\n\n\n", "msg_date": "Mon, 27 Jan 2003 14:56:38 -0600", "msg_from": "Matt Mello <alien@spaceship.com>", "msg_from_op": true, "msg_subject": "Re: Indexing foreign keys" }, { "msg_contents": "\nMatt,\n\n> Did the docs say to index those FK fields (is that standard in the DB \n> industry?), or was I just spoiled by IFX doing it for me? ;)\n\nIt's pretty standard in the DB industry.\n\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Mon, 27 Jan 2003 13:12:09 -0800", "msg_from": "Josh Berkus <josh@agliodbs.com>", "msg_from_op": false, "msg_subject": "Re: Indexing foreign keys" }, { "msg_contents": "On Mon, 2003-01-27 at 14:39, Matt Mello wrote:\n> Due to reasons that everyone can probably intuit, we are porting a large \n> server application from IBM Informix to PG. However, things that take \n> milliseconds in IFX are taking HOURS (not joking) in PG. I *think* I \n> may have come across some reasons why, but I would like to see if anyone \n> else has an opinion. I could not find anything relevant in docs (but if \n> it is there, please point me to it).\n> \n> Let me give an example of one of the problems...\n> \n> I have a table that utilizes 2 foreign keys. It has 400000 records of \n> approximately 512 bytes each (mostly text, except for the keys). When I \n> run a specific query on it, it takes 8000ms to complete, and it always \n> does a full scan.\n> \n> I \"assumed\" that since I did not have to create an index on those \n> foreign key fields in IFX, that I did not have to in PG. 
However, just \n> for kicks, I created an index on those 2 fields, and my query time \n> (after the first, longer attempt, which I presume is from loading an \n> index) went from 8000ms to 100ms.\n> \n> So, do we ALWAYS have to create indexes for foreign key fields in PG? \n> Do the docs say this? (I couldn't find the info.)\n\nWhen you say \"I created an index on those 2 fields\", so you mean on\nthe fields in the 400K row table, or on the keys in the \"fact tables\"\nthat the 400K row table?\n\nAlso, in IFX, could the creation of the foreign indexes have implicitly\ncreated indexes?\nThe reason I ask is that this is what happens in Pg when you create a\nPK.\n\n-- \n+---------------------------------------------------------------+\n| Ron Johnson, Jr. mailto:ron.l.johnson@cox.net |\n| Jefferson, LA USA http://members.cox.net/ron.l.johnson |\n| |\n| \"Fear the Penguin!!\" |\n+---------------------------------------------------------------+\n\n", "msg_date": "27 Jan 2003 15:13:32 -0600", "msg_from": "Ron Johnson <ron.l.johnson@cox.net>", "msg_from_op": false, "msg_subject": "Re: Indexing foreign keys" }, { "msg_contents": "On Mon, 27 Jan 2003, Matt Mello wrote:\n\n> Due to reasons that everyone can probably intuit, we are porting a large\n> server application from IBM Informix to PG. However, things that take\n> milliseconds in IFX are taking HOURS (not joking) in PG. I *think* I\n> may have come across some reasons why, but I would like to see if anyone\n> else has an opinion. I could not find anything relevant in docs (but if\n> it is there, please point me to it).\n>\n> Let me give an example of one of the problems...\n>\n> I have a table that utilizes 2 foreign keys. It has 400000 records of\n> approximately 512 bytes each (mostly text, except for the keys). When I\n> run a specific query on it, it takes 8000ms to complete, and it always\n> does a full scan.\n>\n> I \"assumed\" that since I did not have to create an index on those\n> foreign key fields in IFX, that I did not have to in PG. However, just\n> for kicks, I created an index on those 2 fields, and my query time\n> (after the first, longer attempt, which I presume is from loading an\n> index) went from 8000ms to 100ms.\n>\n> So, do we ALWAYS have to create indexes for foreign key fields in PG?\n> Do the docs say this? (I couldn't find the info.)\n\nYou don't always need to create them, because there are fk patterns where\nan index is counterproductive, but if you're not in one of those cases you\nshould create them. I'm not sure the docs actually say anything about\nthis however.\n\n\n\n", "msg_date": "Mon, 27 Jan 2003 13:16:49 -0800 (PST)", "msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>", "msg_from_op": false, "msg_subject": "Re: Indexing foreign keys" }, { "msg_contents": "Guys,\n\n> You don't always need to create them, because there are fk patterns where\n> an index is counterproductive, but if you're not in one of those cases you\n> should create them. 
I'm not sure the docs actually say anything about\n> this however.\n\nSee:\nhttp://techdocs.postgresql.org/techdocs/pgsqladventuresep2.php\nhttp://techdocs.postgresql.org/techdocs/pgsqladventuresep3.php\n\n(and yes, I know I need to finish this series ...)\n\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Mon, 27 Jan 2003 13:23:19 -0800", "msg_from": "Josh Berkus <josh@agliodbs.com>", "msg_from_op": false, "msg_subject": "Re: Indexing foreign keys" }, { "msg_contents": "Josh Berkus wrote:\n> Matt,\n>>Did the docs say to index those FK fields (is that standard in the DB \n>>industry?), or was I just spoiled by IFX doing it for me? ;)\n> It's pretty standard in the DB industry.\n\nI didn't know that, but I'm new to the DB field. I've gleaned quite a \nfew tips from this group, especially from responses to people with slow \nqueries/databases, but this is the first I've noticed it this tip. I'll \ntry it on my db too.\n\n\n-- \nRon St.Pierre\nSyscor R&D\ntel: 250-361-1681\nemail: rstpierre@syscor.com\n\n", "msg_date": "Mon, 27 Jan 2003 16:06:51 -0800", "msg_from": "\"Ron St.Pierre\" <rstpierre@syscor.com>", "msg_from_op": false, "msg_subject": "Re: Indexing foreign keys" }, { "msg_contents": "Ron Johnson wrote:\n> When you say \"I created an index on those 2 fields\", so you mean on\n> the fields in the 400K row table, or on the keys in the \"fact tables\"\n> that the 400K row table?\n> \n> Also, in IFX, could the creation of the foreign indexes have implicitly\n> created indexes?\n> The reason I ask is that this is what happens in Pg when you create a\n> PK.\n> \n\nThe 400K row table has 2 fields that are FK fields. The already-indexed \nPK fields that they reference are in another table. I just recently \nadded indexes to the 2 FK fields in the 400K row table to get the speed \nboost.\n\nYes. In IFX, when you create a FK, it seems to create indexes \nautomatically for you, just like PG does with PK's.\n\nIn fact, I can't imagine a situation where you would NOT want a FK \nindexed. I guess there must be one, or else I'm sure the developers \nwould have already added auto-creation of indexes to the FK creation, as \nwell.\n\n-- \nMatt Mello\n\n", "msg_date": "Mon, 27 Jan 2003 23:46:46 -0600", "msg_from": "Matt Mello <alien@spaceship.com>", "msg_from_op": true, "msg_subject": "Re: Indexing foreign keys" }, { "msg_contents": "> You don't always need to create them, because there are fk patterns where\n> an index is counterproductive, but if you're not in one of those cases you\n> should create them. I'm not sure the docs actually say anything about\n> this however.\n\nI would try to add a comment about this to the interactive docs if they \nweren't so far behind already (7.2.1). :\\\n\n-- \nMatt Mello\n\n", "msg_date": "Mon, 27 Jan 2003 23:51:31 -0600", "msg_from": "Matt Mello <alien@spaceship.com>", "msg_from_op": true, "msg_subject": "Re: Indexing foreign keys" }, { "msg_contents": "On Mon, 27 Jan 2003, Matt Mello wrote:\n\n> Yes. In IFX, when you create a FK, it seems to create indexes\n> automatically for you, just like PG does with PK's.\n>\n> In fact, I can't imagine a situation where you would NOT want a FK\n> indexed. I guess there must be one, or else I'm sure the developers\n> would have already added auto-creation of indexes to the FK creation, as\n> well.\n\nAny case where the pk table is small enough and the values are fairly\nevenly distributed so that the index isn't very selective. 
You end up not\nusing the index anyway because it's not selective and you pay the costs\ninvolved in keeping it up to date.\n\n", "msg_date": "Mon, 27 Jan 2003 23:49:52 -0800 (PST)", "msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>", "msg_from_op": false, "msg_subject": "Re: Indexing foreign keys" }, { "msg_contents": "On Mon, 2003-01-27 at 23:46, Matt Mello wrote:\n> Ron Johnson wrote:\n> > When you say \"I created an index on those 2 fields\", so you mean on\n> > the fields in the 400K row table, or on the keys in the \"fact tables\"\n> > that the 400K row table?\n> > \n> > Also, in IFX, could the creation of the foreign indexes have implicitly\n> > created indexes?\n> > The reason I ask is that this is what happens in Pg when you create a\n> > PK.\n> > \n> \n> The 400K row table has 2 fields that are FK fields. The already-indexed \n> PK fields that they reference are in another table. I just recently \n> added indexes to the 2 FK fields in the 400K row table to get the speed \n> boost.\n> \n> Yes. In IFX, when you create a FK, it seems to create indexes \n> automatically for you, just like PG does with PK's.\n> \n> In fact, I can't imagine a situation where you would NOT want a FK \n> indexed. I guess there must be one, or else I'm sure the developers \n> would have already added auto-creation of indexes to the FK creation, as \n> well.\n\nWhen I took my brain out of 1st gear, it was \"Doh!\": I realized that \nI was thinking backwards...\n\n-- \n+---------------------------------------------------------------+\n| Ron Johnson, Jr. mailto:ron.l.johnson@cox.net |\n| Jefferson, LA USA http://members.cox.net/ron.l.johnson |\n| |\n| \"Fear the Penguin!!\" |\n+---------------------------------------------------------------+\n\n", "msg_date": "28 Jan 2003 09:15:09 -0600", "msg_from": "Ron Johnson <ron.l.johnson@cox.net>", "msg_from_op": false, "msg_subject": "Re: Indexing foreign keys" }, { "msg_contents": "> > In fact, I can't imagine a situation where you would NOT want a FK\n> > indexed. I guess there must be one, or else I'm sure the developers\n> > would have already added auto-creation of indexes to the FK creation, as\n> > well.\n>\n> Any case where the pk table is small enough and the values are fairly\n> evenly distributed so that the index isn't very selective. You end up not\n> using the index anyway because it's not selective and you pay the costs\n> involved in keeping it up to date.\n\nOr you want an index on two or more fields that includes the FK as the\nprimary field. No sense in making two indexes.\n\nGreg\n\n", "msg_date": "Tue, 28 Jan 2003 16:23:08 -0500", "msg_from": "\"Gregory Wood\" <gregw@com-stock.com>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Indexing foreign keys" } ]
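A minimal sketch of the manual foreign-key indexing discussed in this thread, with invented table and column names; PostgreSQL builds an index automatically for a PRIMARY KEY or UNIQUE constraint, but not for the referencing columns of a FOREIGN KEY:

CREATE TABLE client (
    id   integer PRIMARY KEY,  -- indexed automatically
    name text
);

CREATE TABLE case_note (
    id        serial PRIMARY KEY,
    client_id integer REFERENCES client (id),  -- no index is created here
    body      text
);

-- Index the referencing column by hand so joins on it, and the referential
-- checks fired by UPDATE/DELETE on client, can avoid sequential scans:
CREATE INDEX case_note_client_id_idx ON case_note (client_id);

Per Stephan's caveat, skip such an index when the referenced table is small and its values are spread so evenly through the referencing table that the index would not be selective enough to get used.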
[ { "msg_contents": "What I'm doing on V7.3.1:\n\nselect t1.keycol, t2.keycol\n from tab1 t1\n LEFT join myview t2 on t1.keycol=t2.keycol\nwhere t1.keycol=1000001\n\nt1 has 100 rows, t2 has 4000, both with keycol as PK.\n\nthe view is created as\nCREATE VIEW myview AS SELECT keycol, 22::integer as calc_col FROM tab2\n\nThe query plan will show an ugly subquery scan on all tab2 rows. If the \nview is created without calculated columns, the query plan looks as \nexpected, showing an index scan on tab2 with the correct condition; an \ninner join will always be ok.\n\nIn real life, the view consists of a lot more tables, and the tables may \ncontain >1,000,000 rows so you may imagine the performance...\n\n\n", "msg_date": "Tue, 28 Jan 2003 17:50:58 +0100", "msg_from": "Andreas Pflug <Andreas.Pflug@web.de>", "msg_from_op": true, "msg_subject": "inefficient query plan with left joined view" }, { "msg_contents": "Andreas Pflug <Andreas.Pflug@web.de> writes:\n> What I'm doing on V7.3.1:\n> select t1.keycol, t2.keycol\n> from tab1 t1\n> LEFT join myview t2 on t1.keycol=t2.keycol\n> where t1.keycol=1000001\n\n> the view is created as\n> CREATE VIEW myview AS SELECT keycol, 22::integer as calc_col FROM tab2\n\nThe subquery isn't pulled up because it doesn't pass the\nhas_nullable_targetlist test in src/backend/optimizer/plan/planner.c.\nIf we did flatten it, then references to calc_col wouldn't correctly\ngo to NULL when the LEFT JOIN should make them do so --- they'd be\n22 all the time.\n\nAs the notes in that routine say, it could be made smarter: strict\nfunctions of nullable variables could be allowed. So if your real\nconcern is not '22' but something like 'othercol + 22' then this is\nfixable.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 28 Jan 2003 13:30:07 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: inefficient query plan with left joined view " }, { "msg_contents": "Tom Lane wrote:\n\n>The subquery isn't pulled up because it doesn't pass the\n>has_nullable_targetlist test in src/backend/optimizer/plan/planner.c.\n>If we did flatten it, then references to calc_col wouldn't correctly\n>go to NULL when the LEFT JOIN should make them do so --- they'd be\n>22 all the time.\n>\n>As the notes in that routine say, it could be made smarter: strict\n>functions of nullable variables could be allowed. So if your real\n>concern is not '22' but something like 'othercol + 22' then this is\n>fixable.\n>\n>\t\t\tregards, tom lane\n>\n> \n>\nTom,\n\nactually my views do use calculated columns (mostly concatenated strings, \ne.g. full name from title/1st/last name). As the example shows even \ncolumns that are never used will be taken into account when checking \nhas_nullable_targetlist. Unfortunately I have a lot of views containing \nviews which in turn contain more views, delivering a lot more columns than needed. \nBut they are checked anyway...\n\nI'd expect the parser to look at the join construction only to find out \nabout available data. Why should the selected (and even unselected) \ncolumns be evaluated if the join delivers no result? 
Maybe this can be \nachieved by checking only JOIN ON/WHERE columns with \nhas_nullable_targetlist?\n\n\n\nRegards,\nAndreas\n\n", "msg_date": "Tue, 28 Jan 2003 20:31:46 +0100", "msg_from": "Andreas Pflug <Andreas.Pflug@web.de>", "msg_from_op": true, "msg_subject": "Re: inefficient query plan with left joined view" }, { "msg_contents": "Andreas Pflug <Andreas.Pflug@web.de> writes:\n> As the example shows even \n> columns that are never used will be taken into account when checking \n> has_nullable_targetlist.\n\nIt's not really practical to do otherwise, as the code that needs to\ncheck this doesn't have access to a list of the columns actually used.\nEven if we kluged things up enough to make it possible to find that out,\nthat would merely mean that *some* of your queries wouldn't have a\nproblem.\n\nWhat about improving the intelligence of the nullability check --- or do\nyou have non-strict expressions in there?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 28 Jan 2003 17:41:00 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: inefficient query plan with left joined view " } ]
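A sketch of the strict versus non-strict distinction Tom describes, reusing the names from this thread plus a hypothetical tab2 column othercol; whether a given release can actually flatten the strict case is exactly the planner improvement being discussed, so this is illustration rather than a statement of current behaviour:

-- A bare constant is not strict: it would stay 22 instead of going to NULL
-- for unmatched rows of the LEFT JOIN, so the view's subquery cannot be
-- pulled up without changing the results.
CREATE VIEW myview_const AS
    SELECT keycol, 22::integer AS calc_col FROM tab2;

-- A strict expression (NULL in, NULL out) is the case Tom calls fixable:
-- flattening it would still yield NULL wherever the join finds no match.
CREATE VIEW myview_strict AS
    SELECT keycol, othercol + 22 AS calc_col FROM tab2;

SELECT t1.keycol, t2.calc_col
  FROM tab1 t1
  LEFT JOIN myview_strict t2 ON t1.keycol = t2.keycol
 WHERE t1.keycol = 1000001;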
[ { "msg_contents": "> I don't see how performance would be significantly better if you stored\n> the common columns of all rows (parent and children) in the parent\n> table, in contrast with how it is done now, storing entire rows of child\n> tables in the child table and omitting them from the parent table.\n\nWell there are a couple of points.\n\nFirstly, from the simple standpoint of database normalization you\nshouldn't have tables that have the same columns. The way it is\nimplemented, child tables are copies of parent tables.\n\nBut more importantly it is bad for performance because selecting from a\nparent table causes the same select to be done on all the child tables.\nIn my case selecting from the parent causes six selects to be done (one\nfor every child table).\n\nI would have assumed that child tables only contained the new columns\nunique to it, not duplicates of the columns already in the parent table.\n\nAn insert to a child table would actually cause two inserts to be done\n(assuming only one level of inheritance), one to the parent, and then\none to the child.\n\nTherefore, selects from the parent table would only require a single\nselect (because the common data is all stored in the parent table).\n\nSelects to a child would require two selects to get the entire row (one\nto the parent, one to the child). Similar to a view.\n\nAs I said previously, performance would depend on what operation you\nwere mostly doing.\n\nI think I have more or less covered this in my previous postings.\n\nJohn Lange\n\nOn Tue, 2003-01-28 at 17:52, Andras Kadinger wrote:\n> I see.\n> \n> I don't see how performance would be significantly better if you stored\n> the common columns of all rows (parent and children) in the parent\n> table, in contrast with how it is done now, storing entire rows of child\n> tables in the child table and omitting them from the parent table.\n> \n> Hmm, reviewing your posts to pgsql-performance, I must admit I cannot\n> really see what you feel you are losing performance-wise.\n> \n> As the discussion on pgsql-performance seems to have died off, would you\n> be willing to explain to me?\n> \n> Regards,\n> Andras\n> \n> John Lange wrote:\n> > \n> > No, the keyword ONLY will limit selects to that table ONLY. I need to\n> > return the rows which are common to all tables. Postgres is doing the\n> > work in the correct way, however, the issue is the underlaying design\n> > which is terribly inefficient requiring a separate table scan for every\n> > child table.\n> > \n> > Thanks for the suggestion.\n> > \n> > John Lange\n> > \n> > On Fri, 2003-01-24 at 14:30, Andras Kadinger wrote:\n> > > Hi John,\n> > >\n> > > Isn't the keyword ONLY is what you are after?\n> > >\n> > > \"EXPLAIN select * from tbl_objects where id = 1;\" - this will select\n> > > from table tbl_objects and all it's descendant tables.\n> > >\n> > > \"EXPLAIN select * from tbl_objects ONLY where id = 1;\" - this will\n> > > select from table tbl_objects only.\n> > >\n> > > Regards,\n> > > Andras Kadinger\n\n", "msg_date": "28 Jan 2003 21:21:41 -0600", "msg_from": "John Lange <lists@darkcore.net>", "msg_from_op": true, "msg_subject": "Re: Query plan and Inheritance. Weird behavior" }, { "msg_contents": "John Lange <lists@darkcore.net> writes:\n> Firstly, from the simple standpoint of database normalization you\n> shouldn't have tables that have the same columns. The way it is\n> implemented, child tables are copies of parent tables.\n\nThere is no copied data though. 
Or are you saying that if any table\nin the database has, say, a timestamp column, then it's a failure of\nnormalization for any other one to have a timestamp column? Don't think\nI buy that.\n\n> But more importantly it is bad for performance because selecting from a\n> parent table causes the same select to be done on all the child tables.\n\nSo? The same amount of data gets scanned either way. To the extent\nthat the planner fails to generate an optimal plan in such cases, we\nhave a performance problem --- but that's just an implementation\nshortcoming, not a fundamental limitation AFAICS.\n\nThe only real disadvantage I can see to the current storage scheme is\nthat we can't easily make an index that covers both a parent and all its\nchildren; the index would have to include a table pointer as well as a\nrow pointer. This creates problems for foreign keys and unique constraints.\nBut there is more than one way to attack that.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 29 Jan 2003 00:05:15 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Query plan and Inheritance. Weird behavior " }, { "msg_contents": "John Lange wrote:\n> \n> > I don't see how performance would be significantly better if you stored\n> > the common columns of all rows (parent and children) in the parent\n> > table, in contrast with how it is done now, storing entire rows of child\n> > tables in the child table and omitting them from the parent table.\n> \n> Well there are a couple of points.\n> \n> Firstly, from the simple standpoint of database normalization you\n> shouldn't have tables that have the same columns. The way it is\n> implemented, child tables are copies of parent tables.\n\nAs Tom pointed out, only the schema is copied, but not the data.\n\nThis has the following advantages:\n- if you select from child tables, PostgreSQL will only have to scan\nrows that belong to that child (and further down), and not all rows in\nall tables of the inheritance hierarchy; so if you have 100 million rows\nin the whole hierarchy, but only have say 1 million in the child you are\ncurrently interested in, you only have to scan those 1 million rows, and\nnot the whole 100 million.\n- all columns of rows are stored together, so to read a row only one\ndisk access is needed (your way it would probably need roughly one\nrandom disk access per each inheritance level upwards, both for\nreads/selects and writes/inserts/updates; with a sizable inheritance\nhierarchy this would be a considerable performance hit)\n- it doesn't really cost much in terms of disk space, only some\nbookkeeping information is needed\n\nI don't think inheritance really fits into 'database normalization'\nitself, but still there are cases where it is more convenient/efficient\nthan with traditional database normalization, where you would have to\neither go create completely separate tables for each type (and do UNIONs\nof SELECTs if you are interested in more than one child only), or what's\neven more cumbersome, create a table with common columns ('parent' here)\nand then go create children and children of children that each link\nupwards to their respective parents through some kind of key: in a\nselect, you would have to explicitly specify all tables upwards the\ninheritance hierarchy, and specify the respective joins for them.\n\nSo I think whether you should choose more traditional database\nnormalization or use inheritance depends on what you want to do.\n\n> But more importantly it is bad for 
performance because selecting from a\n> parent table causes the same select to be done on all the child tables.\n> In my case selecting from the parent causes six selects to be done (one\n> for every child table).\n\n'causes the same select to be done on all the child tables' - I don't\nagree with that, and I hope this is where the misunderstanding lies.\n\nConsider this:\n\nCREATE TABLE parent ( id integer NOT NULL, text text);\nCREATE TABLE child1 ( child1field text) INHERITS (parent);\nCREATE TABLE child2 ( child2field text) INHERITS (parent);\nCREATE TABLE child3 ( child3field text) INHERITS (parent);\nCREATE TABLE child4 ( child4field text) INHERITS (parent);\n\nCREATE TABLE othertable ( id integer NOT NULL, othertext text);\n\nALTER TABLE ONLY parent ADD CONSTRAINT parent_pkey PRIMARY KEY (id);\nALTER TABLE ONLY child1 ADD CONSTRAINT child1_pkey PRIMARY KEY (id);\nALTER TABLE ONLY child2 ADD CONSTRAINT child2_pkey PRIMARY KEY (id);\nALTER TABLE ONLY child3 ADD CONSTRAINT child3_pkey PRIMARY KEY (id);\nALTER TABLE ONLY child4 ADD CONSTRAINT child4_pkey PRIMARY KEY (id);\n\nALTER TABLE ONLY othertable ADD CONSTRAINT othertable_pkey PRIMARY KEY\n(id);\n\nThen I filled all tables with 10000 rows of synthetic data and ANALYZEd\njust to make sure the optimizer considers indexes to be valuable.\n\nFirst I tried this:\n\njohnlange=# explain select * from parent where id=13;\n QUERY\nPLAN \n----------------------------------------------------------------------------------------------\n Result (cost=0.00..15.07 rows=5 width=36)\n -> Append (cost=0.00..15.07 rows=5 width=36)\n -> Index Scan using parent_pkey on parent (cost=0.00..3.01\nrows=1 width=36)\n Index Cond: (id = 13)\n -> Index Scan using child1_pkey on child1 parent \n(cost=0.00..3.01 rows=1 width=36)\n Index Cond: (id = 13)\n -> Index Scan using child2_pkey on child2 parent \n(cost=0.00..3.01 rows=1 width=36)\n Index Cond: (id = 13)\n -> Index Scan using child3_pkey on child3 parent \n(cost=0.00..3.01 rows=1 width=36)\n Index Cond: (id = 13)\n -> Index Scan using child4_pkey on child4 parent \n(cost=0.00..3.01 rows=1 width=36)\n Index Cond: (id = 13)\n(12 rows)\n\nThe planner has rightly chosen to use indexes, and as a result the query\nwill be pretty fast.\n\nAlso, at first sight this might look like the multiple selects you\nmention, but actually it isn't; here's another example to show that:\n\ninh=# explain select * from parent natural join othertable where\nparent.id=13;\n QUERY\nPLAN \n----------------------------------------------------------------------------------------------\n Nested Loop (cost=0.00..30.20 rows=5 width=72)\n -> Append (cost=0.00..15.07 rows=5 width=36)\n -> Index Scan using parent_pkey on parent (cost=0.00..3.01\nrows=1 width=36)\n Index Cond: (id = 13)\n -> Index Scan using child1_pkey on child1 parent \n(cost=0.00..3.01 rows=1 width=36)\n Index Cond: (id = 13)\n -> Index Scan using child2_pkey on child2 parent \n(cost=0.00..3.01 rows=1 width=36)\n Index Cond: (id = 13)\n -> Index Scan using child3_pkey on child3 parent \n(cost=0.00..3.01 rows=1 width=36)\n Index Cond: (id = 13)\n -> Index Scan using child4_pkey on child4 parent \n(cost=0.00..3.01 rows=1 width=36)\n Index Cond: (id = 13)\n -> Index Scan using othertable_pkey on othertable (cost=0.00..3.01\nrows=1 width=36)\n Index Cond: (\"outer\".id = othertable.id)\n(14 rows)\n\nAs you can see, the planner decided to use the indexes on parent and\nchildren here too, retrieved and then collated the resulting rows first\nand only then performed the join 
against othertable.\n\nIn other words, it is not peforming 5 separate selects with their\nrespective joins; it collects all qualifying rows first from the\ninheritance hierarchy, and only then performs the join; so the extra\ncost compared to the non-inheriting case is pretty much only the added\ncost of consulting five indexes instead of just one - unless you have\ninheritance hierarchies consisting of several dozen tables or more (and\neven then) I don't think this added cost would be significant.\n\n> This is entirely reasonable and efficient compared to the current model\n> where a select on a parent table requires the same select to be executed\n> on EVERY child table. If it's a large expensive JOIN of some kind then\n> this is verging on un-workable. \n\nPlease show us a join that you would like to use and let us see how well\nthe planner handles it.\n\nRegards,\nAndras\n\nPS (John, don't look here :) ): I have found some queries with plans\nthat are less efficient than I think they could be.\n\nChanging the where clause in the above query to refer to othertable\ngives:\n\njohnlange=# explain select * from parent natural join othertable where\nothertable.id=13;\n QUERY\nPLAN \n-----------------------------------------------------------------------------------------------\n Hash Join (cost=3.02..978.08 rows=5 width=72)\n Hash Cond: (\"outer\".id = \"inner\".id)\n -> Append (cost=0.00..725.00 rows=50000 width=36)\n -> Seq Scan on parent (cost=0.00..145.00 rows=10000 width=36)\n -> Seq Scan on child1 parent (cost=0.00..145.00 rows=10000\nwidth=36)\n -> Seq Scan on child2 parent (cost=0.00..145.00 rows=10000\nwidth=36)\n -> Seq Scan on child3 parent (cost=0.00..145.00 rows=10000\nwidth=36)\n -> Seq Scan on child4 parent (cost=0.00..145.00 rows=10000\nwidth=36)\n -> Hash (cost=3.01..3.01 rows=1 width=36)\n -> Index Scan using othertable_pkey on othertable \n(cost=0.00..3.01 rows=1 width=36)\n Index Cond: (id = 13)\n(11 rows)\n\nWhile:\n\njohnlange=# explain select * from ONLY parent natural join othertable\nwhere othertable.id=13;\n QUERY\nPLAN \n-----------------------------------------------------------------------------------------\n Nested Loop (cost=0.00..6.04 rows=1 width=72)\n -> Index Scan using othertable_pkey on othertable (cost=0.00..3.01\nrows=1 width=36)\n Index Cond: (id = 13)\n -> Index Scan using parent_pkey on parent (cost=0.00..3.01 rows=1\nwidth=36)\n Index Cond: (parent.id = \"outer\".id)\n(5 rows)\n\nSimilarly, as a somewhat more real-life example:\n\njohnlange=# explain select * from parent natural join othertable where\nothertable.othertext='apple';\n QUERY\nPLAN \n--------------------------------------------------------------------------------------------------\n Hash Join (cost=131.37..1234.50 rows=250 width=72)\n Hash Cond: (\"outer\".id = \"inner\".id)\n -> Append (cost=0.00..725.00 rows=50000 width=36)\n -> Seq Scan on parent (cost=0.00..145.00 rows=10000 width=36)\n -> Seq Scan on child1 parent (cost=0.00..145.00 rows=10000\nwidth=36)\n -> Seq Scan on child2 parent (cost=0.00..145.00 rows=10000\nwidth=36)\n -> Seq Scan on child3 parent (cost=0.00..145.00 rows=10000\nwidth=36)\n -> Seq Scan on child4 parent (cost=0.00..145.00 rows=10000\nwidth=36)\n -> Hash (cost=131.25..131.25 rows=50 width=36)\n -> Index Scan using othertable_text on othertable \n(cost=0.00..131.25 rows=50 width=36)\n Index Cond: (othertext = 'alma'::text)\n(11 rows)\n\nWhat's more strange, that it still does it with enable_seqscan set to\noff:\n\njohnlange=# explain select * from parent 
natural join othertable where\nothertable.othertext='apple';\n QUERY\nPLAN \n--------------------------------------------------------------------------------------------------\n Hash Join (cost=100000131.37..500001234.50 rows=250 width=72)\n Hash Cond: (\"outer\".id = \"inner\".id)\n -> Append (cost=100000000.00..500000725.00 rows=50000 width=36)\n -> Seq Scan on parent (cost=100000000.00..100000145.00\nrows=10000 width=36)\n -> Seq Scan on child1 parent (cost=100000000.00..100000145.00\nrows=10000 width=36)\n -> Seq Scan on child2 parent (cost=100000000.00..100000145.00\nrows=10000 width=36)\n -> Seq Scan on child3 parent (cost=100000000.00..100000145.00\nrows=10000 width=36)\n -> Seq Scan on child4 parent (cost=100000000.00..100000145.00\nrows=10000 width=36)\n -> Hash (cost=131.25..131.25 rows=50 width=36)\n -> Index Scan using othertable_text on othertable \n(cost=0.00..131.25 rows=50 width=36)\n Index Cond: (othertext = 'apple'::text)\n(11 rows)\n\nWhile:\n\njohnlange=# explain select * from ONLY parent natural join othertable\nwhere othertable.othertext='apple';\n QUERY\nPLAN \n--------------------------------------------------------------------------------------------\n Nested Loop (cost=0.00..282.55 rows=50 width=72)\n -> Index Scan using othertable_text on othertable \n(cost=0.00..131.25 rows=50 width=36)\n Index Cond: (othertext = 'apple'::text)\n -> Index Scan using parent_pkey on parent (cost=0.00..3.01 rows=1\nwidth=36)\n Index Cond: (parent.id = \"outer\".id)\n(5 rows)\n\nIf I try to make it more efficient and get rid of the seq scans by\npushing the condition into a subselect, I get an even more interesting\nplan:\n\njohnlange=# explain select * from parent where id in (select id from\nothertable where othertext='alma');\n QUERY\nPLAN \n---------------------------------------------------------------------------------------------------------------\n Result (cost=0.00..6563171.43 rows=25000 width=36)\n -> Append (cost=0.00..6563171.43 rows=25000 width=36)\n -> Seq Scan on parent (cost=0.00..1312634.29 rows=5000\nwidth=36)\n Filter: (subplan)\n SubPlan\n -> Materialize (cost=131.25..131.25 rows=50 width=4)\n -> Index Scan using othertable_text on\nothertable (cost=0.00..131.25 rows=50 width=4)\n Index Cond: (othertext = 'alma'::text)\n -> Seq Scan on child1 parent (cost=0.00..1312634.29 rows=5000\nwidth=36)\n Filter: (subplan)\n SubPlan\n -> Materialize (cost=131.25..131.25 rows=50 width=4)\n -> Index Scan using othertable_text on\nothertable (cost=0.00..131.25 rows=50 width=4)\n Index Cond: (othertext = 'alma'::text)\n -> Seq Scan on child2 parent (cost=0.00..1312634.29 rows=5000\nwidth=36)\n Filter: (subplan)\n SubPlan\n -> Materialize (cost=131.25..131.25 rows=50 width=4)\n -> Index Scan using othertable_text on\nothertable (cost=0.00..131.25 rows=50 width=4)\n Index Cond: (othertext = 'alma'::text)\n -> Seq Scan on child3 parent (cost=0.00..1312634.29 rows=5000\nwidth=36)\n Filter: (subplan)\n SubPlan\n -> Materialize (cost=131.25..131.25 rows=50 width=4)\n -> Index Scan using othertable_text on\nothertable (cost=0.00..131.25 rows=50 width=4)\n Index Cond: (othertext = 'alma'::text)\n -> Seq Scan on child4 parent (cost=0.00..1312634.29 rows=5000\nwidth=36)\n Filter: (subplan)\n SubPlan\n -> Materialize (cost=131.25..131.25 rows=50 width=4)\n -> Index Scan using othertable_text on\nothertable (cost=0.00..131.25 rows=50 width=4)\n Index Cond: (othertext = 'alma'::text)\n(32 rows)\n\njohnlange=# select version();\n \nversion 
\n--------------------------------------------------------------------------------------------------------\n PostgreSQL 7.3.1 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 3.2.1\n(Mandrake Linux 9.1 3.2.1-2mdk)\n(1 row)\n", "msg_date": "Wed, 29 Jan 2003 08:52:08 +0100", "msg_from": "Andras Kadinger <bandit@surfnonstop.com>", "msg_from_op": false, "msg_subject": "Re: Query plan and Inheritance. Weird behavior" }, { "msg_contents": "On Wed, 2003-01-29 at 01:52, Andras Kadinger wrote:\n> John Lange wrote:\n> > \n> > > I don't see how performance would be significantly better if you stored\n> > > the common columns of all rows (parent and children) in the parent\n> > > table, in contrast with how it is done now, storing entire rows of child\n> > > tables in the child table and omitting them from the parent table.\n> > \n> > Well there are a couple of points.\n> > \n> > Firstly, from the simple standpoint of database normalization you\n> > shouldn't have tables that have the same columns. The way it is\n> > implemented, child tables are copies of parent tables.\n> \n> As Tom pointed out, only the schema is copied, but not the data.\n\nI guess you are right, strictly speaking this isn't a violation of\nnormalization since no data is duplicated.\n\n> This has the following advantages:\n> - if you select from child tables, PostgreSQL will only have to scan\n> rows that belong to that child (and further down), and not all rows in\n> all tables of the inheritance hierarchy; so if you have 100 million rows\n> in the whole hierarchy, but only have say 1 million in the child you are\n> currently interested in, you only have to scan those 1 million rows, and\n> not the whole 100 million.\n> - all columns of rows are stored together, so to read a row only one\n> disk access is needed (your way it would probably need roughly one\n> random disk access per each inheritance level upwards, both for\n> reads/selects and writes/inserts/updates; with a sizable inheritance\n> hierarchy this would be a considerable performance hit)\n> - it doesn't really cost much in terms of disk space, only some\n> bookkeeping information is needed\n> \n> I don't think inheritance really fits into 'database normalization'\n> itself, but still there are cases where it is more convenient/efficient\n> than with traditional database normalization, where you would have to\n> either go create completely separate tables for each type (and do UNIONs\n> of SELECTs if you are interested in more than one child only), or what's\n> even more cumbersome, create a table with common columns ('parent' here)\n> and then go create children and children of children that each link\n> upwards to their respective parents through some kind of key: in a\n> select, you would have to explicitly specify all tables upwards the\n> inheritance hierarchy, and specify the respective joins for them.\n> \n> So I think whether you should choose more traditional database\n> normalization or use inheritance depends on what you want to do.\n> \n> > But more importantly it is bad for performance because selecting from a\n> > parent table causes the same select to be done on all the child tables.\n> > In my case selecting from the parent causes six selects to be done (one\n> > for every child table).\n> \n> 'causes the same select to be done on all the child tables' - I don't\n> agree with that, and I hope this is where the misunderstanding lies.\n> \n> Consider this:\n> \n> CREATE TABLE parent ( id integer NOT NULL, text text);\n> CREATE TABLE child1 ( child1field 
text) INHERITS (parent);\n> CREATE TABLE child2 ( child2field text) INHERITS (parent);\n> CREATE TABLE child3 ( child3field text) INHERITS (parent);\n> CREATE TABLE child4 ( child4field text) INHERITS (parent);\n> \n> CREATE TABLE othertable ( id integer NOT NULL, othertext text);\n> \n> ALTER TABLE ONLY parent ADD CONSTRAINT parent_pkey PRIMARY KEY (id);\n> ALTER TABLE ONLY child1 ADD CONSTRAINT child1_pkey PRIMARY KEY (id);\n> ALTER TABLE ONLY child2 ADD CONSTRAINT child2_pkey PRIMARY KEY (id);\n> ALTER TABLE ONLY child3 ADD CONSTRAINT child3_pkey PRIMARY KEY (id);\n> ALTER TABLE ONLY child4 ADD CONSTRAINT child4_pkey PRIMARY KEY (id);\n> \n> ALTER TABLE ONLY othertable ADD CONSTRAINT othertable_pkey PRIMARY KEY\n> (id);\n> \n> Then I filled all tables with 10000 rows of synthetic data and ANALYZEd\n> just to make sure the optimizer considers indexes to be valuable.\n> \n> First I tried this:\n> \n> johnlange=# explain select * from parent where id=13;\n> QUERY\n> PLAN \n> ----------------------------------------------------------------------------------------------\n> Result (cost=0.00..15.07 rows=5 width=36)\n> -> Append (cost=0.00..15.07 rows=5 width=36)\n> -> Index Scan using parent_pkey on parent (cost=0.00..3.01\n> rows=1 width=36)\n> Index Cond: (id = 13)\n> -> Index Scan using child1_pkey on child1 parent \n> (cost=0.00..3.01 rows=1 width=36)\n> Index Cond: (id = 13)\n> -> Index Scan using child2_pkey on child2 parent \n> (cost=0.00..3.01 rows=1 width=36)\n> Index Cond: (id = 13)\n> -> Index Scan using child3_pkey on child3 parent \n> (cost=0.00..3.01 rows=1 width=36)\n> Index Cond: (id = 13)\n> -> Index Scan using child4_pkey on child4 parent \n> (cost=0.00..3.01 rows=1 width=36)\n> Index Cond: (id = 13)\n> (12 rows)\n> \n> The planner has rightly chosen to use indexes, and as a result the query\n> will be pretty fast.\n> \n> Also, at first sight this might look like the multiple selects you\n> mention, but actually it isn't; here's another example to show that:\n> \n> inh=# explain select * from parent natural join othertable where\n> parent.id=13;\n> QUERY\n> PLAN \n> ----------------------------------------------------------------------------------------------\n> Nested Loop (cost=0.00..30.20 rows=5 width=72)\n> -> Append (cost=0.00..15.07 rows=5 width=36)\n> -> Index Scan using parent_pkey on parent (cost=0.00..3.01\n> rows=1 width=36)\n> Index Cond: (id = 13)\n> -> Index Scan using child1_pkey on child1 parent \n> (cost=0.00..3.01 rows=1 width=36)\n> Index Cond: (id = 13)\n> -> Index Scan using child2_pkey on child2 parent \n> (cost=0.00..3.01 rows=1 width=36)\n> Index Cond: (id = 13)\n> -> Index Scan using child3_pkey on child3 parent \n> (cost=0.00..3.01 rows=1 width=36)\n> Index Cond: (id = 13)\n> -> Index Scan using child4_pkey on child4 parent \n> (cost=0.00..3.01 rows=1 width=36)\n> Index Cond: (id = 13)\n> -> Index Scan using othertable_pkey on othertable (cost=0.00..3.01\n> rows=1 width=36)\n> Index Cond: (\"outer\".id = othertable.id)\n> (14 rows)\n> \n> As you can see, the planner decided to use the indexes on parent and\n> children here too, retrieved and then collated the resulting rows first\n> and only then performed the join against othertable.\n> \n> In other words, it is not peforming 5 separate selects with their\n> respective joins; it collects all qualifying rows first from the\n> inheritance hierarchy, and only then performs the join; so the extra\n> cost compared to the non-inheriting case is pretty much only the added\n> cost of consulting five 
indexes instead of just one - unless you have\n> inheritance hierarchies consisting of several dozen tables or more (and\n> even then) I don't think this added cost would be significant.\n> \n> > This is entirely reasonable and efficient compared to the current model\n> > where a select on a parent table requires the same select to be executed\n> > on EVERY child table. If it's a large expensive JOIN of some kind then\n> > this is verging on un-workable. \n> \n> Please show us a join that you would like to use and let us see how well\n> the planner handles it.\n\nOk, your reply here is very informative. Firstly, I can see from your\nexample that I likely don't have my keys and constraints implemented\nproperly.\n\nHowever, the issue of indexes is not necessarily that relevant since you\nmay not be selecting rows based on columns that have indexes.\n\nSo the issue of indexes aside, I think some of the misunderstanding is\nrelated to my assumption that the appended operations are relatively\nmore expensive than scanning the same number of rows in a single select.\n\nHere is the way it looks on my system when I select a single object.\n\ndb_drs0001=> explain select * from tbl_objects where id = 1;\nNOTICE: QUERY PLAN:\n\nResult (cost=0.00..153.70 rows=6 width=111)\n -> Append (cost=0.00..153.70 rows=6 width=111)\n -> Seq Scan on tbl_objects (cost=0.00..144.35 rows=1 width=107)\n -> Seq Scan on tbl_viewers tbl_objects (cost=0.00..1.06 rows=1\nwidth=97)\n -> Seq Scan on tbl_documents tbl_objects (cost=0.00..4.91 rows=1\nwidth=111)\n -> Seq Scan on tbl_formats tbl_objects (cost=0.00..1.11 rows=1\nwidth=100)\n -> Seq Scan on tbl_massemails tbl_objects (cost=0.00..1.01 rows=1\nwidth=90)\n -> Seq Scan on tbl_icons tbl_objects (cost=0.00..1.25 rows=1\nwidth=110)\n\ndb_drs0001=> select version();\n version \n---------------------------------------------------------------\n PostgreSQL 7.2.1 on i686-pc-linux-gnu, compiled by GCC 2.95.3\n(1 row)\n\nThe only question here is, if a select requires the scanning of all rows\nto return a result set, is it dramatically less efficient to have it\nscanning 100,000 rows spread over 6 tables or to scan 100,000 rows in a\nsingle table?\n\n(At the moment I have no where near that amount of data. Side question,\nwhat technique do you use to generate data to fill your tables for\ntesting?)\n\nI'm now starting to see that it likely isn't that much different either\nway so the benefits of the way it's implemented probably out weigh the\nnegatives. Your end up scanning the same number of rows either way.\n\nOn the topic of proper indexes, if you would indulge me, can you show me\nwhere I have gone wrong in that regard? My biggest point of confusion\nhere is with regards to the sequences that are used in the parent table.\n\nHere is the schema as produced by pg_dump. The original create used the\nkeyword \"serial\" or \"bigserial\" as the case may be. 
I've edited some of\nthe columns out just to keep the example shorter:\n\nCREATE SEQUENCE \"tbl_objects_id_seq\" start 1 increment 1 maxvalue\n9223372036854775807 minvalue 1 cache 1;\n\nCREATE TABLE \"tbl_objects\" (\n \"id\" bigint DEFAULT nextval('\"tbl_objects_id_seq\"'::text) NOT NULL,\n \"name\" text DEFAULT '' NOT NULL,\n \"description\" text DEFAULT '' NOT NULL,\n \"status\" smallint DEFAULT '1' NOT NULL,\n \"class\" text\n);\n\nCREATE TABLE \"tbl_viewers\" (\n \"exec\" text DEFAULT '' NOT NULL )\nINHERITS (\"tbl_objects\");\n\nCREATE TABLE \"tbl_documents\" (\n \"filename\" text DEFAULT '' NOT NULL )\nINHERITS (\"tbl_objects\");\n\nCREATE TABLE \"tbl_massemails\" (\n \"from\" text DEFAULT '' NOT NULL,\n \"subject\" text DEFAULT '' NOT NULL,\n \"message\" text DEFAULT '' NOT NULL )\nINHERITS (\"tbl_objects\");\n\nCREATE TABLE \"tbl_icons\" (\n \"format_id\" bigint DEFAULT '0' NOT NULL )\nINHERITS (\"tbl_documents\");\n\nCREATE TABLE \"tbl_formats\" (\n \"viewer_id\" bigint DEFAULT '0' NOT NULL,\n \"extension\" text DEFAULT '' NOT NULL,\n \"contenttype\" text DEFAULT '' NOT NULL,\n \"upload_class\" text )\nINHERITS (\"tbl_objects\");\n\nCREATE UNIQUE INDEX tbl_objects_id_key ON tbl_objects USING btree (id);\n\nThanks very much for taking the time to look into this with me. It has\nbeen most informative.\n\nJohn Lange\n\n> \n> Regards,\n> Andras\n> \n> PS (John, don't look here :) ): I have found some queries with plans\n> that are less efficient than I think they could be.\n> \n> Changing the where clause in the above query to refer to othertable\n> gives:\n> \n> johnlange=# explain select * from parent natural join othertable where\n> othertable.id=13;\n> QUERY\n> PLAN \n> -----------------------------------------------------------------------------------------------\n> Hash Join (cost=3.02..978.08 rows=5 width=72)\n> Hash Cond: (\"outer\".id = \"inner\".id)\n> -> Append (cost=0.00..725.00 rows=50000 width=36)\n> -> Seq Scan on parent (cost=0.00..145.00 rows=10000 width=36)\n> -> Seq Scan on child1 parent (cost=0.00..145.00 rows=10000\n> width=36)\n> -> Seq Scan on child2 parent (cost=0.00..145.00 rows=10000\n> width=36)\n> -> Seq Scan on child3 parent (cost=0.00..145.00 rows=10000\n> width=36)\n> -> Seq Scan on child4 parent (cost=0.00..145.00 rows=10000\n> width=36)\n> -> Hash (cost=3.01..3.01 rows=1 width=36)\n> -> Index Scan using othertable_pkey on othertable \n> (cost=0.00..3.01 rows=1 width=36)\n> Index Cond: (id = 13)\n> (11 rows)\n> \n> While:\n> \n> johnlange=# explain select * from ONLY parent natural join othertable\n> where othertable.id=13;\n> QUERY\n> PLAN \n> -----------------------------------------------------------------------------------------\n> Nested Loop (cost=0.00..6.04 rows=1 width=72)\n> -> Index Scan using othertable_pkey on othertable (cost=0.00..3.01\n> rows=1 width=36)\n> Index Cond: (id = 13)\n> -> Index Scan using parent_pkey on parent (cost=0.00..3.01 rows=1\n> width=36)\n> Index Cond: (parent.id = \"outer\".id)\n> (5 rows)\n> \n> Similarly, as a somewhat more real-life example:\n> \n> johnlange=# explain select * from parent natural join othertable where\n> othertable.othertext='apple';\n> QUERY\n> PLAN \n> --------------------------------------------------------------------------------------------------\n> Hash Join (cost=131.37..1234.50 rows=250 width=72)\n> Hash Cond: (\"outer\".id = \"inner\".id)\n> -> Append (cost=0.00..725.00 rows=50000 width=36)\n> -> Seq Scan on parent (cost=0.00..145.00 rows=10000 width=36)\n> -> Seq Scan on child1 parent 
(cost=0.00..145.00 rows=10000\n> width=36)\n> -> Seq Scan on child2 parent (cost=0.00..145.00 rows=10000\n> width=36)\n> -> Seq Scan on child3 parent (cost=0.00..145.00 rows=10000\n> width=36)\n> -> Seq Scan on child4 parent (cost=0.00..145.00 rows=10000\n> width=36)\n> -> Hash (cost=131.25..131.25 rows=50 width=36)\n> -> Index Scan using othertable_text on othertable \n> (cost=0.00..131.25 rows=50 width=36)\n> Index Cond: (othertext = 'alma'::text)\n> (11 rows)\n> \n> What's more strange, that it still does it with enable_seqscan set to\n> off:\n> \n> johnlange=# explain select * from parent natural join othertable where\n> othertable.othertext='apple';\n> QUERY\n> PLAN \n> --------------------------------------------------------------------------------------------------\n> Hash Join (cost=100000131.37..500001234.50 rows=250 width=72)\n> Hash Cond: (\"outer\".id = \"inner\".id)\n> -> Append (cost=100000000.00..500000725.00 rows=50000 width=36)\n> -> Seq Scan on parent (cost=100000000.00..100000145.00\n> rows=10000 width=36)\n> -> Seq Scan on child1 parent (cost=100000000.00..100000145.00\n> rows=10000 width=36)\n> -> Seq Scan on child2 parent (cost=100000000.00..100000145.00\n> rows=10000 width=36)\n> -> Seq Scan on child3 parent (cost=100000000.00..100000145.00\n> rows=10000 width=36)\n> -> Seq Scan on child4 parent (cost=100000000.00..100000145.00\n> rows=10000 width=36)\n> -> Hash (cost=131.25..131.25 rows=50 width=36)\n> -> Index Scan using othertable_text on othertable \n> (cost=0.00..131.25 rows=50 width=36)\n> Index Cond: (othertext = 'apple'::text)\n> (11 rows)\n> \n> While:\n> \n> johnlange=# explain select * from ONLY parent natural join othertable\n> where othertable.othertext='apple';\n> QUERY\n> PLAN \n> --------------------------------------------------------------------------------------------\n> Nested Loop (cost=0.00..282.55 rows=50 width=72)\n> -> Index Scan using othertable_text on othertable \n> (cost=0.00..131.25 rows=50 width=36)\n> Index Cond: (othertext = 'apple'::text)\n> -> Index Scan using parent_pkey on parent (cost=0.00..3.01 rows=1\n> width=36)\n> Index Cond: (parent.id = \"outer\".id)\n> (5 rows)\n> \n> If I try to make it more efficient and get rid of the seq scans by\n> pushing the condition into a subselect, I get an even more interesting\n> plan:\n> \n> johnlange=# explain select * from parent where id in (select id from\n> othertable where othertext='alma');\n> QUERY\n> PLAN \n> ---------------------------------------------------------------------------------------------------------------\n> Result (cost=0.00..6563171.43 rows=25000 width=36)\n> -> Append (cost=0.00..6563171.43 rows=25000 width=36)\n> -> Seq Scan on parent (cost=0.00..1312634.29 rows=5000\n> width=36)\n> Filter: (subplan)\n> SubPlan\n> -> Materialize (cost=131.25..131.25 rows=50 width=4)\n> -> Index Scan using othertable_text on\n> othertable (cost=0.00..131.25 rows=50 width=4)\n> Index Cond: (othertext = 'alma'::text)\n> -> Seq Scan on child1 parent (cost=0.00..1312634.29 rows=5000\n> width=36)\n> Filter: (subplan)\n> SubPlan\n> -> Materialize (cost=131.25..131.25 rows=50 width=4)\n> -> Index Scan using othertable_text on\n> othertable (cost=0.00..131.25 rows=50 width=4)\n> Index Cond: (othertext = 'alma'::text)\n> -> Seq Scan on child2 parent (cost=0.00..1312634.29 rows=5000\n> width=36)\n> Filter: (subplan)\n> SubPlan\n> -> Materialize (cost=131.25..131.25 rows=50 width=4)\n> -> Index Scan using othertable_text on\n> othertable (cost=0.00..131.25 rows=50 width=4)\n> Index Cond: 
(othertext = 'alma'::text)\n> -> Seq Scan on child3 parent (cost=0.00..1312634.29 rows=5000\n> width=36)\n> Filter: (subplan)\n> SubPlan\n> -> Materialize (cost=131.25..131.25 rows=50 width=4)\n> -> Index Scan using othertable_text on\n> othertable (cost=0.00..131.25 rows=50 width=4)\n> Index Cond: (othertext = 'alma'::text)\n> -> Seq Scan on child4 parent (cost=0.00..1312634.29 rows=5000\n> width=36)\n> Filter: (subplan)\n> SubPlan\n> -> Materialize (cost=131.25..131.25 rows=50 width=4)\n> -> Index Scan using othertable_text on\n> othertable (cost=0.00..131.25 rows=50 width=4)\n> Index Cond: (othertext = 'alma'::text)\n> (32 rows)\n> \n> johnlange=# select version();\n> \n> version \n> --------------------------------------------------------------------------------------------------------\n> PostgreSQL 7.3.1 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 3.2.1\n> (Mandrake Linux 9.1 3.2.1-2mdk)\n> (1 row)\n\n", "msg_date": "29 Jan 2003 12:29:00 -0600", "msg_from": "John Lange <lists@darkcore.net>", "msg_from_op": true, "msg_subject": "Re: Query plan and Inheritance. Weird behavior" }, { "msg_contents": "John Lange wrote:\n> \n> On Wed, 2003-01-29 at 01:52, Andras Kadinger wrote:\n> > As Tom pointed out, only the schema is copied, but not the data.\n> \n> I guess you are right, strictly speaking this isn't a violation of\n> normalization since no data is duplicated.\n\n[...]\n\n> Ok, your reply here is very informative. Firstly, I can see from your\n> example that I likely don't have my keys and constraints implemented\n> properly.\n> \n> However, the issue of indexes is not necessarily that relevant since you\n> may not be selecting rows based on columns that have indexes.\n\nGranted, now I see that was not strictly related to your point, I just\nwanted to avoid most avoidable objections against performance of\ninheritance, and I wasn't 100% sure you seeing seq scans was not part of\nyou thinking performance of this method would be suboptimal so just to\nbe sure, I explicitly went for an example with indexes.\n\n> So the issue of indexes aside, I think some of the misunderstanding is\n> related to my assumption that the appended operations are relatively\n> more expensive than scanning the same number of rows in a single select.\n\nI see. Well, the Append step itself I suppose is not doing much else\nthan iterates over its subnodes and asks each of them to return their\nrows, and forwards the rows upwards to the rest of the plan as it\nreceives them - it doesn't buffer them, or collect them all before\nforwarding them upwards (I think the Materialize step that were to be\nseen in an example in my last PS is the one that does that). 
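A quick way to see this on the test tables above is to put the same 50000 rows into one flat table and compare the actual times (a rough sketch only - flat_all is just an illustrative name, not part of the schema used so far):

-- parent plus its four children hold 50000 rows in total
CREATE TABLE flat_all AS SELECT * FROM parent;

EXPLAIN ANALYZE SELECT count(*) FROM parent;    -- five seq scans under an Append
EXPLAIN ANALYZE SELECT count(*) FROM flat_all;  -- one seq scan over the same rows

The gap between the two actual times is roughly what the Append step and the extra relations cost.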
\n\nSo I don't think the Append incurs any significant costs much more than\na few CPU cycles for that iteration and row forwarding (pass of one\npointer to in-memory row I guess) steps - I think these are minuscule\ncompared to the cost of any disk I/O, and in most non-CPU-bound queries\nare hidden by disk throughput/latency anyway.\n\n> Here is the way it looks on my system when I select a single object.\n> \n> db_drs0001=> explain select * from tbl_objects where id = 1;\n> NOTICE: QUERY PLAN:\n> \n> Result (cost=0.00..153.70 rows=6 width=111)\n> -> Append (cost=0.00..153.70 rows=6 width=111)\n> -> Seq Scan on tbl_objects (cost=0.00..144.35 rows=1 width=107)\n> -> Seq Scan on tbl_viewers tbl_objects (cost=0.00..1.06 rows=1\n> width=97)\n> -> Seq Scan on tbl_documents tbl_objects (cost=0.00..4.91 rows=1\n> width=111)\n> -> Seq Scan on tbl_formats tbl_objects (cost=0.00..1.11 rows=1\n> width=100)\n> -> Seq Scan on tbl_massemails tbl_objects (cost=0.00..1.01 rows=1\n> width=90)\n> -> Seq Scan on tbl_icons tbl_objects (cost=0.00..1.25 rows=1\n> width=110)\n> \n> db_drs0001=> select version();\n> version\n> ---------------------------------------------------------------\n> PostgreSQL 7.2.1 on i686-pc-linux-gnu, compiled by GCC 2.95.3\n> (1 row)\n> \n> The only question here is, if a select requires the scanning of all rows\n> to return a result set, is it dramatically less efficient to have it\n> scanning 100,000 rows spread over 6 tables or to scan 100,000 rows in a\n> single table?\n\nI see. Well, the Append step itself I suppose isn't doing much more than\niterates over its subnodes (the seq scans in the case above) and asks\neach of them to return rows, and forwards the rows upwards to the rest\nof the plan as it receives them - it doesn't buffer them, or collect\nthem all before forwarding them upwards (I think the Materialize step\nthat were to be seen in an example in my last PS does that). So aside\nfor any costs of consulting indexes, I don't think the Append step -\nwhich is the added step when scanning multiple tables versus scanning\none table - incurs any significant costs much more than a few CPU cycles\nfor those iteration and row forwarding (pass of one pointer to in-memory\nrow I guess) steps - I think these are minuscule compared to the cost of\nany disk I/O, and in most non-CPU-bound queries are hidden by disk\nthroughput/latency anyway.\n\n> (At the moment I have no where near that amount of data. Side question,\n> what technique do you use to generate data to fill your tables for\n> testing?)\n\nFor this occasion I just went and created a dozen-line PHP script that\nsimply inserted 10000 rows with consecutive ids into each table.\n\nI suggest you to try to populate your test database with test data on\nthe order of your expected working data set and use EXPLAIN ANALYZE to\nmake estimates of expected performance of the database.\n\n> I'm now starting to see that it likely isn't that much different either\n> way so the benefits of the way it's implemented probably out weigh the\n> negatives. 
Your end up scanning the same number of rows either way.\n\nAside from extreme cases where child rows are considerably much more\nwider than parent rows and thus result in considerably more data needed\nto be read in case of a sequential scan, yes.\n\n> On the topic of proper indexes, if you would indulge me, can you show me\n> where I have gone wrong in that regard?\n\nHmm, I think you should only have gone and created indexes for child\ntables by hand, as indexes are not inherited.\n\nAlso, don't forget, PostgreSQL has an advanced query cost estimation\nsubsystem, which decides for or against using an index based on, among\nothers, statistics collected on distribution of values in the index to\ndetermine its selectivity (so don't forget to ANALYZE/VACUUM ANALYZE\nafter inserting/changing a lot of rows that significantly change\ndistribution of values - this includes initial table fillup), and also\nit accounts for costs of accessing index pages, so with less than say a\ncouple of thousand rows or with not very selective indexes it will\n(rightly) decide not to use the index but do a seq scan instead -\nprobably the reason for why you don't see an index scan on tbl_objects\nabove despite there being an index on the primary key.\n\n> My biggest point of confusion\n> here is with regards to the sequences that are used in the parent table.\n\nChild tables inherit the \"nextval('...')\" default value, so as a result\nthey will all draw from the same one sequence, which sequence exists\noutside of the tables; as a result as long as you use that default\nvalue, it is guaranteed that the column in question will have unique\nvalues among all tables parent and children; they won't be consecutive -\nbut that's not a drawback of inheritance either, as a sequence is not\nguaranteed to provide consecutive numbers with single tables either due\nto transaction concurrency (rolled back transactions don't 'put back'\nnumbers into the sequence, so in the case of rolled back transactions\nthere will be numbers drawn from the sequence that never actually get\ninto any table - this is nicely documented with sequences and\ntransactions).\n\n> Here is the schema as produced by pg_dump. The original create used the\n> keyword \"serial\" or \"bigserial\" as the case may be. 
I've edited some of\n> the columns out just to keep the example shorter:\n> \n> CREATE SEQUENCE \"tbl_objects_id_seq\" start 1 increment 1 maxvalue\n> 9223372036854775807 minvalue 1 cache 1;\n> \n> CREATE TABLE \"tbl_objects\" (\n> \"id\" bigint DEFAULT nextval('\"tbl_objects_id_seq\"'::text) NOT NULL,\n> \"name\" text DEFAULT '' NOT NULL,\n> \"description\" text DEFAULT '' NOT NULL,\n> \"status\" smallint DEFAULT '1' NOT NULL,\n> \"class\" text\n> );\n> \n> CREATE TABLE \"tbl_viewers\" (\n> \"exec\" text DEFAULT '' NOT NULL )\n> INHERITS (\"tbl_objects\");\n> \n> CREATE TABLE \"tbl_documents\" (\n> \"filename\" text DEFAULT '' NOT NULL )\n> INHERITS (\"tbl_objects\");\n> \n> CREATE TABLE \"tbl_massemails\" (\n> \"from\" text DEFAULT '' NOT NULL,\n> \"subject\" text DEFAULT '' NOT NULL,\n> \"message\" text DEFAULT '' NOT NULL )\n> INHERITS (\"tbl_objects\");\n> \n> CREATE TABLE \"tbl_icons\" (\n> \"format_id\" bigint DEFAULT '0' NOT NULL )\n> INHERITS (\"tbl_documents\");\n> \n> CREATE TABLE \"tbl_formats\" (\n> \"viewer_id\" bigint DEFAULT '0' NOT NULL,\n> \"extension\" text DEFAULT '' NOT NULL,\n> \"contenttype\" text DEFAULT '' NOT NULL,\n> \"upload_class\" text )\n> INHERITS (\"tbl_objects\");\n> \n> CREATE UNIQUE INDEX tbl_objects_id_key ON tbl_objects USING btree (id);\n\nHmm, I wonder whether you have a specific goal with or reason for\nexplicitly specifying NOT NULL and empty string ('') as default value\nfor all these text fields? If it's just because your frontend makes it\ninconvenient for you to treat a NULL as empty string, you might want to\nconsider allowing NULLs and using the coalesce() function in your select\n- this would incur a few CPU cycles per returned result row, but will\nspare you a few bytes in storage - I think 4 or 8 per column - for each\nNULL value. Whether this is worth it or not depends on the percentage of\nempty/NULL values in your data though.\n\n> Thanks very much for taking the time to look into this with me. It has\n> been most informative.\n\nYou're welcome!\n\nRegards,\nAndras\n", "msg_date": "Thu, 30 Jan 2003 02:02:49 +0100", "msg_from": "Andras Kadinger <bandit@surfnonstop.com>", "msg_from_op": false, "msg_subject": "Re: Query plan and Inheritance. Weird behavior" } ]
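A minimal sketch of the two follow-ups suggested at the end of this thread, against the schema John posted (the index names are only illustrative):

-- Indexes are not inherited, so each child table needs its own:
CREATE UNIQUE INDEX tbl_viewers_id_key    ON tbl_viewers    USING btree (id);
CREATE UNIQUE INDEX tbl_documents_id_key  ON tbl_documents  USING btree (id);
CREATE UNIQUE INDEX tbl_massemails_id_key ON tbl_massemails USING btree (id);
CREATE UNIQUE INDEX tbl_icons_id_key      ON tbl_icons      USING btree (id);
CREATE UNIQUE INDEX tbl_formats_id_key    ON tbl_formats    USING btree (id);

-- If the '' defaults are dropped in favour of NULL, coalesce() keeps the
-- output looking the same to the application:
SELECT id, name, coalesce(description, '') AS description
FROM tbl_objects
WHERE id = 1;

Note that unique indexes created per table this way only enforce uniqueness within each table, not across the whole hierarchy; as discussed above, cross-table uniqueness of id still relies on every table drawing from the one shared sequence.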
[ { "msg_contents": "TEXT vs \"char\" ... vs BOOLEAN\n\nI am porting from Informix to PG. In doing so, I had to pick some data \ntypes for fields, and began wondering about the performance of char/text \nfields with one character. For example, I have a field which has one of \nthe following values/states: {'A', 'D', 'F', 'U'}. Since CHAR(n), \nVARCHAR, and TEXT are all supposed to have the same performance \naccording to the docs, it seems that they will all perform the same. \nFor this reason, I did not squabble over which one of these to use. \nHowever, since \"char\" is implemented differently, I thought I would \ncompare it to one of the others. I chose to pit TEXT against \"char\".\n\nTest query = explain analyze select count(*) from table where onechar='D';\nTable size = 512 wide [mostly TEXT] * 400000 rows\nPerformance averages:\n \"char\" 44ms\n TEXT 63ms\n\nThis seems somewhat reasonable, and makes me want to use \"char\" for my \nsingle-char field. Does everyone else find this to be reasonable? Is \nthis pretty much the behavior I can expect on extraordinarily large \ntables, too? And, should I worry about things like the backend \ndevelopers removing \"char\" as a type later?\n\n--\n\nThis naturally led me to another question. How do TEXT, \"char\", and \nBOOLEAN compare for storing t/f values. The test results I saw were \nsurprising.\n\nTest query=\n \"char\"/TEXT: explain analyze select count(*) from table where bool='Y';\n boolean: explain analyze select count(*) from table where bool=true;\nTable size (see above)\nPerformance averages:\n TEXT 24ms\n BOOLEAN 28ms\n\"char\" 17ms\n\nWhy does boolean rate closer to TEXT than \"char\"? I would think that \nBOOLEANs would actually be stored like \"char\"s to prevent using the \nextra 4 bytes with TEXT types.\n\nBased on these results, I will probably store my booleans as \"char\" \ninstead of boolean. I don't use stored procedures with my application \nserver, so I should never need my booleans to be the BOOLEAN type. I \ncan convert faster in my own code.\n\n--\n\nNOTE: the above tests all had the same relative data in the different \nfields (what was in TEXT could be found in \"char\", etc.) and were all \nindexed equally.\n\n\nThanks!\n\n-- \nMatt Mello\n\n\n\n\n", "msg_date": "Wed, 29 Jan 2003 00:22:53 -0600", "msg_from": "Matt Mello <alien@spaceship.com>", "msg_from_op": true, "msg_subject": "1 char in the world" }, { "msg_contents": "Matt Mello wrote:\n<snip>\n> This naturally led me to another question. How do TEXT, \"char\", and \n> BOOLEAN compare for storing t/f values. The test results I saw were \n> surprising.\n> \n> Test query=\n> \"char\"/TEXT: explain analyze select count(*) from table where bool='Y';\n> boolean: explain analyze select count(*) from table where bool=true;\n> Table size (see above)\n> Performance averages:\n> TEXT 24ms\n> BOOLEAN 28ms\n> \"char\" 17ms\n\nHi Matt,\n\nThis is interesting. As a thought, would you be ok to run the same test \nusing int4 and int8 as well?\n\nThat would probably round out the test nicely.\n\n:)\n\nRegards and best wishes,\n\nJustin Clift\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. 
He told me to try to be in the\nfirst group; there was less competition there.\"\n- Indira Gandhi\n\n", "msg_date": "Wed, 29 Jan 2003 20:50:00 +1030", "msg_from": "Justin Clift <justin@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: 1 char in the world" }, { "msg_contents": "On Wed, 2003-01-29 at 06:22, Matt Mello wrote:\n> TEXT vs \"char\" ... vs BOOLEAN\n> \n> I am porting from Informix to PG. In doing so, I had to pick some data \n> types for fields, and began wondering about the performance of char/text \n> fields with one character. For example, I have a field which has one of \n> the following values/states: {'A', 'D', 'F', 'U'}. Since CHAR(n), \n> VARCHAR, and TEXT are all supposed to have the same performance \n> according to the docs, it seems that they will all perform the same. \n> For this reason, I did not squabble over which one of these to use. \n> However, since \"char\" is implemented differently, I thought I would \n> compare it to one of the others. I chose to pit TEXT against \"char\".\n> \n> Test query = explain analyze select count(*) from table where onechar='D';\n> Table size = 512 wide [mostly TEXT] * 400000 rows\n> Performance averages:\n> \"char\" 44ms\n> TEXT 63ms\n> \n> This seems somewhat reasonable, and makes me want to use \"char\" for my \n> single-char field. Does everyone else find this to be reasonable? Is \n> this pretty much the behavior I can expect on extraordinarily large \n> tables, too?\n\nThe actual compares will likely stay faster for char than for text.\n\nOTOH the actual storage of one-char datatype should not play so\nsignificant role for very large tables, even if this is the only field\nin that table, as most of the overhead will be in other places - storage\noverhead in page/tuple headers, performance in retrieving the\npages/tuples and cache lookups, etc.\n\nAlso, for very big tables you will most likely want to restrict selects\non other criteria than a 4-valued field, so that indexes could be used\nin retrieving data.\n\n> And, should I worry about things like the backend \n> developers removing \"char\" as a type later?\n> \n> --\n> \n> This naturally led me to another question. How do TEXT, \"char\", and \n> BOOLEAN compare for storing t/f values. The test results I saw were \n> surprising.\n> \n> Test query=\n> \"char\"/TEXT: explain analyze select count(*) from table where bool='Y';\n\nYou could also try just\n\nselect count(*) from table where bool;\n\n> boolean: explain analyze select count(*) from table where bool=true;\n> Table size (see above)\n> Performance averages:\n> TEXT 24ms\n> BOOLEAN 28ms\n> \"char\" 17ms\n> \n> Why does boolean rate closer to TEXT than \"char\"? I would think that \n> BOOLEANs would actually be stored like \"char\"s to prevent using the \n> extra 4 bytes with TEXT types.\n> \n> Based on these results, I will probably store my booleans as \"char\" \n> instead of boolean. I don't use stored procedures with my application \n> server, so I should never need my booleans to be the BOOLEAN type. I \n> can convert faster in my own code.\n> \n> --\n> \n> NOTE: the above tests all had the same relative data in the different \n> fields (what was in TEXT could be found in \"char\", etc.) 
and were all \n> indexed equally.\n\nDid you repeat the texts enough times to be sure that you get reliable\nresults ?\n\n> \n> Thanks!\n-- \nHannu Krosing <hannu@tm.ee>\n", "msg_date": "29 Jan 2003 12:18:20 +0000", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: 1 char in the world" }, { "msg_contents": "Matt Mello <alien@spaceship.com> writes:\n> Test query=\n> \"char\"/TEXT: explain analyze select count(*) from table where bool='Y';\n> boolean: explain analyze select count(*) from table where bool=true;\n> Table size (see above)\n> Performance averages:\n> TEXT 24ms\n> BOOLEAN 28ms\n> \"char\" 17ms\n\nI don't believe those numbers for a moment. All else being equal,\ncomparing a \"char\" field to a literal should be exactly the same speed\nas comparing a bool field to a literal (and if you'd just said \"where bool\",\nthe bool field would be faster). Both ought to be markedly faster than\ntext.\n\nLook for errors in your test procedure. One thing I'd particularly\nwonder about is whether the query plans are the same. In the absence of\nany VACUUM ANALYZE data, I'd fully expect the planner to pick a\ndifferent plan for a bool field than text/char --- because even without\nANALYZE data, it knows that a bool column has only two possible values.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 29 Jan 2003 10:56:27 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: 1 char in the world " }, { "msg_contents": "> OTOH the actual storage of one-char datatype should not play so\n> significant role for very large tables, even if this is the only field\n> in that table, as most of the overhead will be in other places - storage\n> overhead in page/tuple headers, performance in retrieving the\n> pages/tuples and cache lookups, etc.\n\nIs that true if I have a table that consists of lots of 1-char fields? \nFor example, if I have a table with 4 billion records, which consist of \n(20) 1-char fields each, then the storage for the data will be something \nlike 5 times as large if I use TEXT than if I use \"char\".\n\n> Also, for very big tables you will most likely want to restrict selects\n> on other criteria than a 4-valued field, so that indexes could be used\n> in retrieving data.\n\nI do. I was just using that query for this test only. I have some very \ncomplex queries that are constrained by many foriegn-key int4 fields, \nbut also a few of these 1-char fields.\n\n> You could also try just\n> \n> select count(*) from table where bool;\n> \n\nI will do this in a while and report to the list. I am going to try \nmake a reproducable test that anyone can do, to be sure my results are \n\"real\".\n\n> Did you repeat the texts enough times to be sure that you get reliable\n> results ?\n\nI think so. Not so much as hundreds of times, though.\n\n\n-- \nMatt Mello\n512-350-6900\n\n", "msg_date": "Wed, 29 Jan 2003 17:29:58 -0600", "msg_from": "Matt Mello <alien@spaceship.com>", "msg_from_op": true, "msg_subject": "Re: 1 char in the world" }, { "msg_contents": "Matt Mello <alien@spaceship.com> writes:\n> Is that true if I have a table that consists of lots of 1-char fields? 
\n> For example, if I have a table with 4 billion records, which consist of \n> (20) 1-char fields each, then the storage for the data will be something \n> like 5 times as large if I use TEXT than if I use \"char\".\n\nProbably more like 8 times as large, when you allow for alignment\npadding --- on most machines, TEXT fields will be aligned on 4-byte\nboundaries, so several TEXT fields in a row will take up 8 bytes apiece,\nvs one byte apiece for consecutive \"char\" or bool fields.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 29 Jan 2003 18:59:26 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: 1 char in the world " }, { "msg_contents": "Tom Lane wrote:\n> I don't believe those numbers for a moment. All else being equal,\n> comparing a \"char\" field to a literal should be exactly the same speed\n> as comparing a bool field to a literal (and if you'd just said \"where bool\",\n> the bool field would be faster). Both ought to be markedly faster than\n> text.\n> \n> Look for errors in your test procedure. One thing I'd particularly\n> wonder about is whether the query plans are the same. In the absence of\n> any VACUUM ANALYZE data, I'd fully expect the planner to pick a\n> different plan for a bool field than text/char --- because even without\n> ANALYZE data, it knows that a bool column has only two possible values.\n\nWell, the previous test was done on REAL data. Everything was indexed \nand vacuum analyzed as it should be.\n\nHowever, I generated some test data under \"controlled\" circumstances and \ndid get different results. Bear in mind, though, that the data is no \nlonger \"real\", and doesn't represent the system I am concerned about.\n\n[Someone requested some tests with int4/int8, too, so I included them, \nas well. However, I would never use 4 or 8 bytes to store one bit. \nSince a byte is platform-atomic, however, I will use a whole byte for a \nsingle bit, as bit packing is too expensive.]\n\ncreate table booltest (\n boo boolean,\n cha \"char\",\n txt text,\n in4 int4,\n in8 int8\n);\n\nInsert lots of data here, but stay consistent between fields. [If you \ninsert a TRUE into a boolean, put a 'Y' into a text or \"char\" field and \na 1 into an int type.] So, I basically had 2 different insert \nstatements (one for true and one for false), and I used a random number \ngenerator to get a good distribution of them.\n\ncreate index booidx on booltest(boo);\ncreate index chaidx on booltest(cha);\ncreate index txtidx on booltest(txt);\ncreate index in4idx on booltest(in4);\ncreate index in8idx on booltest(in8);\n\nvacuum full verbose analyze booltest;\nINFO: --Relation public.booltest--\nINFO: Pages 6897: Changed 0, reaped 0, Empty 0, New 0; Tup 1000000: Vac \n0, Keep/VTL 0/0, UnUsed 0, MinLen 52, MaxLen 52; Re-using: Free/Avail. \nSpace 362284/362284; EndEmpty/Avail. 
Pages 0/6897.\n CPU 0.53s/0.41u sec elapsed 18.69 sec.\nINFO: Index booidx: Pages 2193; Tuples 1000000.\n CPU 0.24s/0.11u sec elapsed 3.33 sec.\nINFO: Index chaidx: Pages 2193; Tuples 1000000.\n CPU 0.23s/0.19u sec elapsed 4.01 sec.\nINFO: Index txtidx: Pages 2745; Tuples 1000000.\n CPU 0.51s/0.14u sec elapsed 4.07 sec.\nINFO: Index in4idx: Pages 2193; Tuples 1000000.\n CPU 0.20s/0.17u sec elapsed 3.51 sec.\nINFO: Index in8idx: Pages 2745; Tuples 1000000.\n CPU 0.26s/0.04u sec elapsed 1.92 sec.\nINFO: Rel booltest: Pages: 6897 --> 6897; Tuple(s) moved: 0.\n CPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: --Relation pg_toast.pg_toast_4327226--\nINFO: Pages 0: Changed 0, reaped 0, Empty 0, New 0; Tup 0: Vac 0, \nKeep/VTL 0/0, UnUsed 0, MinLen 0, MaxLen 0; Re-using: Free/Avail. Space \n0/0; EndEmpty/Avail. Pages 0/0.\n CPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: Index pg_toast_4327226_index: Pages 1; Tuples 0.\n CPU 0.00s/0.00u sec elapsed 0.02 sec.\nINFO: Analyzing public.booltest\nVACUUM\n\n\nCount our test set:\nselect count(*) from booltest; [ALL] output:1000000\nselect count(*) from booltest where boo; [TRUES] output:498649\n\nTESTS .....\n\n1) INT8=1\nexplain analyze select count(*) from booltest where in8 = '1';\nAggregate (cost=1342272.26..1342272.26 rows=1 width=0) (actual \ntime=3434.37..3434.37 rows=1 loops=1)\n-> Index Scan using in8idx on booltest (cost=0.00..1340996.42 \nrows=510333 width=0) (actual time=6.96..2704.45 rows=498649 loops=1)\nIndex Cond: (in8 = 1::bigint)\nTotal runtime: 3434.50 msec\n\n2) INT4=1\nexplain analyze select count(*) from booltest where in4 = 1;\nAggregate (cost=1341990.26..1341990.26 rows=1 width=0) (actual \ntime=3219.24..3219.24 rows=1 loops=1)\n-> Index Scan using in4idx on booltest (cost=0.00..1340714.42 \nrows=510333 width=0) (actual time=12.92..2548.20 rows=498649 loops=1)\nIndex Cond: (in4 = 1)\nTotal runtime: 3219.35 msec\n\n3) TEXT='Y'\nexplain analyze select count(*) from booltest where txt = 'Y';\nAggregate (cost=1342272.26..1342272.26 rows=1 width=0) (actual \ntime=4820.06..4820.06 rows=1 loops=1)\n-> Index Scan using txtidx on booltest (cost=0.00..1340996.42 \nrows=510333 width=0) (actual time=15.83..4042.07 rows=498649 loops=1)\nIndex Cond: (txt = 'Y'::text)\nTotal runtime: 4820.18 msec\n\n4) BOOLEAN=true\nexplain analyze select count(*) from booltest where boo = true;\nAggregate (cost=1341990.26..1341990.26 rows=1 width=0) (actual \ntime=3437.30..3437.30 rows=1 loops=1)\n-> Index Scan using booidx on booltest (cost=0.00..1340714.42 \nrows=510333 width=0) (actual time=28.16..2751.38 rows=498649 loops=1)\nIndex Cond: (boo = true)\nTotal runtime: 3437.42 msec\n\n5) BOOLEAN [implied = true]\nexplain analyze select count(*) from booltest where boo;\nAggregate (cost=100018172.83..100018172.83 rows=1 width=0) (actual \ntime=2775.40..2775.40 rows=1 loops=1)\n-> Seq Scan on booltest (cost=100000000.00..100016897.00 rows=510333 \nwidth=0) (actual time=0.10..2138.11 rows=498649 loops=1)\nFilter: boo\nTotal runtime: 2775.50 msec\n\n6) \"char\"='Y'\nexplain analyze select count(*) from booltest where cha = 'Y';\nAggregate (cost=1341990.26..1341990.26 rows=1 width=0) (actual \ntime=3379.71..3379.71 rows=1 loops=1)\n-> Index Scan using chaidx on booltest (cost=0.00..1340714.42 \nrows=510333 width=0) (actual time=32.77..2695.77 rows=498649 loops=1)\nIndex Cond: (cha = 'Y'::\"char\")\nTotal runtime: 3379.82 msec\n\n\nAverage ms over 42 attempts per test, some one-after-the-other, others \nmixed with other queries, was:\n\n1) INT8=1 3229.76\n2) INT4=1 
3194.45\n3) TEXT='Y' 4799.23\n4) BOOLEAN=true 3283.30\n5) BOOLEAN 2801.83\n6) \"char\"='Y' 3290.15\n\nThe straight boolean test was the fastest at 2.8 secs, and the TEXT was \nthe slowest at 4.8 secs. Everything else settled in the same pot at \n3.25 secs.\n\nI wasn't too impressed with any of these times, actually, but I'm \nbearing in mind that we are talking about an aggregate, which I have \nlearned much about in the last few days from the mailing list, and which \nI expect to be slow in PG.\n\nSince straight-BOOLEAN [not adding \"= true\" to the condition] is about \n15% faster than \"char\", I will stick with BOOLEAN.\n\nMy immediate future plans also include recoding my system to never use \naggregates in \"live\" queries. I will be tracking statistics in real \ntime in statistics tables instead of hitting the database with an \naggregate. It is much cheaper to add a few milliseconds per insert than \nto slow my whole system down for several seconds during an aggregate query.\n\nIn fact, if you have a sizable table, and especially if you are running \nan OLTP server, unless you are manually investigating something in the \ntable, I recommend never using aggregates. Your SQL queries should be \naggregate-free for large tables, if possible.\n\nThanks!\n\n-- \nMatt Mello\n\n", "msg_date": "Sun, 02 Feb 2003 12:09:29 -0600", "msg_from": "alien <alien@spaceship.com>", "msg_from_op": false, "msg_subject": "Re: 1 char in the world" } ]
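A sketch of the running-statistics idea Matt describes at the end, applied to the booltest example (the counter table and its column names are only illustrative):

CREATE TABLE booltest_stats (boo boolean PRIMARY KEY, n bigint NOT NULL);

-- one-time seed from the existing data
INSERT INTO booltest_stats SELECT boo, count(*) FROM booltest GROUP BY boo;

-- the application bumps the counter in the same transaction as each insert
BEGIN;
INSERT INTO booltest (boo, cha, txt, in4, in8) VALUES (true, 'Y', 'Y', 1, 1);
UPDATE booltest_stats SET n = n + 1 WHERE boo = true;
COMMIT;

-- reporting then reads one row instead of scanning a million
SELECT n FROM booltest_stats WHERE boo = true;

This adds a few milliseconds and some update contention on the counter row at insert time, but keeps the count queries out of the multi-second range.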
[ { "msg_contents": "\nGreetings to all, \n\nI have found strange query execution plans with the\nsame version of\nPostgreSQL but on different types of server machines.\nHere are the details\nof the servers:\n\nServer 1:\nPentium III, 800 MHz, 64 MB of RAM\nRedHat Linux 7.2, Postgres ver 7.1\n \nServer 2:\nDual Pentium III, 1.3 GHz, 512 MB of RAM\nRedHat Linux 7.3 (SMP kernel), Postgres ver 7.1\n \nHere is the query I tried:\n--- query ---\nexplain\nselect bill.customer_no, bill.bill_no, bill.bill_date\n from bill, ( select customer_no, max(\nbill_date) as bill_date from\n bill group by customer_no) as t_bill where\n bill.customer_no = t_bill.customer_no and\n bill.bill_date = t_bill.bill_date order by\nbill.customer_no;\n--- query---\n\n\nResult on Server 1:\n---result---\nNOTICE: QUERY PLAN:\n\nMerge Join (cost=2436.88..2571.99 rows=671 width=44)\n -> Sort (cost=1178.15..1178.15 rows=8189 width=28)\n -> Seq Scan on bill (cost=0.00..645.89\nrows=8189 width=28)\n -> Sort (cost=1258.72..1258.72 rows=819 width=16)\n -> Subquery Scan t_bill \n(cost=1178.15..1219.10 rows=819 width=16)\n -> Aggregate (cost=1178.15..1219.10\nrows=819 width=16)\n -> Group (cost=1178.15..1198.63\nrows=8189 width=16)\n -> Sort \n(cost=1178.15..1178.15 rows=8189 width=16)\n -> Seq Scan on bill \n(cost=0.00..645.89 rows=8189 width=16)\n \nEXPLAIN\n---result--- \n\nResult on Server 2:\n---result---\nNOTICE: QUERY PLAN:\n \nSort (cost=0.04..0.04 rows=1 width=44)\n -> Nested Loop (cost=0.01..0.03 rows=1 width=44)\n -> Seq Scan on bill (cost=0.00..0.00 rows=1\nwidth=28)\n -> Subquery Scan t_bill (cost=0.01..0.02\nrows=1 width=16)\n -> Aggregate (cost=0.01..0.02 rows=1\nwidth=16)\n -> Group (cost=0.01..0.01 rows=1\nwidth=16)\n -> Sort (cost=0.01..0.01\nrows=1 width=16)\n -> Seq Scan on bill \n(cost=0.00..0.00 rows=1 width=16)\n \nEXPLAIN\n---result---\n \n \nCan someone help me to figure out why the query plans\ncome out differently\ndespite the fact that almost everything but the number\nof CPUs are same in\nboth the machines?\n\nAlso the dual processor machine is awfully slow when I\nexecute this query\nand the postmaster hogs the CPU (99.9%) for several\nminutes literally\nleaving that server unusable.\n \nthank you very much\n Anil\n\n\n__________________________________________________\nDo you Yahoo!?\nYahoo! Mail Plus - Powerful. Affordable. Sign up now.\nhttp://mailplus.yahoo.com\n", "msg_date": "Thu, 30 Jan 2003 00:25:18 -0800 (PST)", "msg_from": "Anil Kumar <techbreeze@yahoo.com>", "msg_from_op": true, "msg_subject": "Strangae Query Plans" }, { "msg_contents": "Hi,\n\nI got this solved. We ran \"vacuum\" with the --analyze flag on the\nsecond server. 
And now the query plan is same as the first one and\nit returns in a fraction of a second!\n\n Anil\n\n--- Anil Kumar <techbreeze@yahoo.com> wrote:\n> \n> Greetings to all, \n> \n> I have found strange query execution plans with the\n> same version of\n> PostgreSQL but on different types of server machines.\n> Here are the details\n> of the servers:\n> \n> Server 1:\n> Pentium III, 800 MHz, 64 MB of RAM\n> RedHat Linux 7.2, Postgres ver 7.1\n> \n> Server 2:\n> Dual Pentium III, 1.3 GHz, 512 MB of RAM\n> RedHat Linux 7.3 (SMP kernel), Postgres ver 7.1\n> \n> Here is the query I tried:\n> --- query ---\n> explain\n> select bill.customer_no, bill.bill_no, bill.bill_date\n> from bill, ( select customer_no, max(\n> bill_date) as bill_date from\n> bill group by customer_no) as t_bill where\n> bill.customer_no = t_bill.customer_no and\n> bill.bill_date = t_bill.bill_date order by\n> bill.customer_no;\n> --- query---\n> \n> \n> Result on Server 1:\n> ---result---\n> NOTICE: QUERY PLAN:\n> \n> Merge Join (cost=2436.88..2571.99 rows=671 width=44)\n> -> Sort (cost=1178.15..1178.15 rows=8189 width=28)\n> -> Seq Scan on bill (cost=0.00..645.89\n> rows=8189 width=28)\n> -> Sort (cost=1258.72..1258.72 rows=819 width=16)\n> -> Subquery Scan t_bill \n> (cost=1178.15..1219.10 rows=819 width=16)\n> -> Aggregate (cost=1178.15..1219.10\n> rows=819 width=16)\n> -> Group (cost=1178.15..1198.63\n> rows=8189 width=16)\n> -> Sort \n> (cost=1178.15..1178.15 rows=8189 width=16)\n> -> Seq Scan on bill \n> (cost=0.00..645.89 rows=8189 width=16)\n> \n> EXPLAIN\n> ---result--- \n> \n> Result on Server 2:\n> ---result---\n> NOTICE: QUERY PLAN:\n> \n> Sort (cost=0.04..0.04 rows=1 width=44)\n> -> Nested Loop (cost=0.01..0.03 rows=1 width=44)\n> -> Seq Scan on bill (cost=0.00..0.00 rows=1\n> width=28)\n> -> Subquery Scan t_bill (cost=0.01..0.02\n> rows=1 width=16)\n> -> Aggregate (cost=0.01..0.02 rows=1\n> width=16)\n> -> Group (cost=0.01..0.01 rows=1\n> width=16)\n> -> Sort (cost=0.01..0.01\n> rows=1 width=16)\n> -> Seq Scan on bill \n> (cost=0.00..0.00 rows=1 width=16)\n> \n> EXPLAIN\n> ---result---\n> \n> \n> Can someone help me to figure out why the query plans\n> come out differently\n> despite the fact that almost everything but the number\n> of CPUs are same in\n> both the machines?\n> \n> Also the dual processor machine is awfully slow when I\n> execute this query\n> and the postmaster hogs the CPU (99.9%) for several\n> minutes literally\n> leaving that server unusable.\n> \n> thank you very much\n> Anil\n> \n> \n> __________________________________________________\n> Do you Yahoo!?\n> Yahoo! Mail Plus - Powerful. Affordable. Sign up now.\n> http://mailplus.yahoo.com\n> \n> ---------------------------(end of\n> broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n\n\n__________________________________________________\nDo you Yahoo!?\nYahoo! Mail Plus - Powerful. Affordable. Sign up now.\nhttp://mailplus.yahoo.com\n", "msg_date": "Thu, 30 Jan 2003 01:08:59 -0800 (PST)", "msg_from": "Anil Kumar <techbreeze@yahoo.com>", "msg_from_op": true, "msg_subject": "Re: Strangae Query Plans" }, { "msg_contents": "\n\nyou could consider vacuuming thru a cron job daily..\nits good for db severs' health ;-)\n\n\nOn Thursday 30 January 2003 02:38 pm, Anil Kumar wrote:\n> Hi,\n>\n> I got this solved. We ran \"vacuum\" with the --analyze flag on the\n> second server. 
And now the query plan is same as the first one and\n> it returns in a fraction of a second!\n>\n> Anil\n>\n> --- Anil Kumar <techbreeze@yahoo.com> wrote:\n> > Greetings to all,\n> >\n> > I have found strange query execution plans with the\n> > same version of\n> > PostgreSQL but on different types of server machines.\n> > Here are the details\n> > of the servers:\n> >\n> > Server 1:\n> > Pentium III, 800 MHz, 64 MB of RAM\n> > RedHat Linux 7.2, Postgres ver 7.1\n> >\n> > Server 2:\n> > Dual Pentium III, 1.3 GHz, 512 MB of RAM\n> > RedHat Linux 7.3 (SMP kernel), Postgres ver 7.1\n> >\n> > Here is the query I tried:\n> > --- query ---\n> > explain\n> > select bill.customer_no, bill.bill_no, bill.bill_date\n> > from bill, ( select customer_no, max(\n> > bill_date) as bill_date from\n> > bill group by customer_no) as t_bill where\n> > bill.customer_no = t_bill.customer_no and\n> > bill.bill_date = t_bill.bill_date order by\n> > bill.customer_no;\n> > --- query---\n> >\n> >\n> > Result on Server 1:\n> > ---result---\n> > NOTICE: QUERY PLAN:\n> >\n> > Merge Join (cost=2436.88..2571.99 rows=671 width=44)\n> > -> Sort (cost=1178.15..1178.15 rows=8189 width=28)\n> > -> Seq Scan on bill (cost=0.00..645.89\n> > rows=8189 width=28)\n> > -> Sort (cost=1258.72..1258.72 rows=819 width=16)\n> > -> Subquery Scan t_bill\n> > (cost=1178.15..1219.10 rows=819 width=16)\n> > -> Aggregate (cost=1178.15..1219.10\n> > rows=819 width=16)\n> > -> Group (cost=1178.15..1198.63\n> > rows=8189 width=16)\n> > -> Sort\n> > (cost=1178.15..1178.15 rows=8189 width=16)\n> > -> Seq Scan on bill\n> > (cost=0.00..645.89 rows=8189 width=16)\n> >\n> > EXPLAIN\n> > ---result---\n> >\n> > Result on Server 2:\n> > ---result---\n> > NOTICE: QUERY PLAN:\n> >\n> > Sort (cost=0.04..0.04 rows=1 width=44)\n> > -> Nested Loop (cost=0.01..0.03 rows=1 width=44)\n> > -> Seq Scan on bill (cost=0.00..0.00 rows=1\n> > width=28)\n> > -> Subquery Scan t_bill (cost=0.01..0.02\n> > rows=1 width=16)\n> > -> Aggregate (cost=0.01..0.02 rows=1\n> > width=16)\n> > -> Group (cost=0.01..0.01 rows=1\n> > width=16)\n> > -> Sort (cost=0.01..0.01\n> > rows=1 width=16)\n> > -> Seq Scan on bill\n> > (cost=0.00..0.00 rows=1 width=16)\n> >\n> > EXPLAIN\n> > ---result---\n> >\n> >\n> > Can someone help me to figure out why the query plans\n> > come out differently\n> > despite the fact that almost everything but the number\n> > of CPUs are same in\n> > both the machines?\n> >\n> > Also the dual processor machine is awfully slow when I\n> > execute this query\n> > and the postmaster hogs the CPU (99.9%) for several\n> > minutes literally\n> > leaving that server unusable.\n> >\n> > thank you very much\n> > Anil\n> >\n> >\n> > __________________________________________________\n> > Do you Yahoo!?\n> > Yahoo! Mail Plus - Powerful. Affordable. Sign up now.\n> > http://mailplus.yahoo.com\n> >\n> > ---------------------------(end of\n> > broadcast)---------------------------\n> > TIP 6: Have you searched our list archives?\n> >\n> > http://archives.postgresql.org\n>\n> __________________________________________________\n> Do you Yahoo!?\n> Yahoo! Mail Plus - Powerful. Affordable. 
Sign up now.\n> http://mailplus.yahoo.com\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n\n-- \n\n\n--------------------------------------------\n Regds Mallah\nRajesh Kumar Mallah,\nProject Manager (Development)\nInfocom Network Limited, New Delhi\nphone: +91(11)26152172 (221) (L) 9811255597 (M)\nVisit http://www.trade-india.com ,\nIndia's Leading B2B eMarketplace.\n\n\n", "msg_date": "Thu, 30 Jan 2003 16:18:11 +0530", "msg_from": "\"Rajesh Kumar Mallah.\" <mallah@trade-india.com>", "msg_from_op": false, "msg_subject": "Re: Strangae Query Plans" } ]
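For reference, the fix in SQL form is simply to give the planner current statistics on the table being joined:

VACUUM ANALYZE bill;

The same report can also be written without the self-join, using PostgreSQL's DISTINCT ON extension - a sketch only, and note that it returns exactly one row per customer even when two bills share the latest bill_date, whereas the original join could return both:

SELECT DISTINCT ON (customer_no) customer_no, bill_no, bill_date
FROM bill
ORDER BY customer_no, bill_date DESC;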
[ { "msg_contents": "As we continue our evaluation of Postgres, another interesting topic \nhas come up that I want to run by the group.\n\nIn our current model, we have about 3,000 small tables that we use \ntrack data for our clients. Each table is an identical structure, and \nholds the data for one client.\n\nAnother idea that we are considering is one big table instead of 3,000 \nsmaller ones. We could simply add a numeric field to indicate which \nclient a particular record was for.\n\nEach table has between 500 and 50,000 records, so the big table could \nhave up to 10 million rows if we combined everything.\n\n\nA query on our current system is (for client #4)\n\nSelect (*) from client_4 where foo=2;\n\nA query from the new, proposed system would be\n\nSelect (*) from big_results where client=4 and foo=2.\n\nThe big questions is, WHICH WILL BE FASTER with Postgres. Is there any \nperformance improvement or cost to switching to this new structure.\n\n\nANY AND ALL FEEDBACK/OPINIONS ARE WELCOME!!\n\nThanks,\n\nNoah\n\n", "msg_date": "Thu, 30 Jan 2003 12:34:36 -0500", "msg_from": "Noah Silverman <noah@allresearch.com>", "msg_from_op": true, "msg_subject": "One large v. many small" }, { "msg_contents": "Noah,\n\n> As we continue our evaluation of Postgres, another interesting topic\n> has come up that I want to run by the group.\n>\n> In our current model, we have about 3,000 small tables that we use\n> track data for our clients. Each table is an identical structure, and\n> holds the data for one client.\n\nI'd list what's wrong with this structure, but frankly it would take me long \nenough that I'd need a consulting fee. Suffice it to say that the above is \na very, very bad (or at least antiquated) design idea and you need to \ntransition out of it as soon as possible.\n\n> Another idea that we are considering is one big table instead of 3,000\n> smaller ones. We could simply add a numeric field to indicate which\n> client a particular record was for.\n\nYes. Absolutely. Although I'd suggest an Integer field.\n\n> Each table has between 500 and 50,000 records, so the big table could\n> have up to 10 million rows if we combined everything.\n\nSure.\n\n> A query on our current system is (for client #4)\n>\n> Select (*) from client_4 where foo=2;\n>\n> A query from the new, proposed system would be\n>\n> Select (*) from big_results where client=4 and foo=2.\n>\n> The big questions is, WHICH WILL BE FASTER with Postgres. Is there any\n> performance improvement or cost to switching to this new structure.\n\nOh, no question query 1 will be faster ... FOR THAT QUERY. You are asking the \nwrong question.\n\nHowever, explain to me how, under the current system, you can find the client \nwho ordered $3000 worth of widgets on January 12th if you don't already know \nwho it is? I'm not sure a 3000-table UNION query is even *possible*.\n\nOr how about giving me the average number of customer transactions in a month, \nacross all clients?\n\n<rant>\n\nYou've enslaved your application design to performance considerations ... an \napproach which was valid in 1990, because processing power was so limited \nthen. But now that dual-processor servers with RAID can be had for less than \n$3000, there's simply no excuse for violating the principles of good \nrelational database design just to speed up a query. 
Buying more RAM is \nmuch cheaper than having an engineer spend 3 weeks fixing data integrity \nproblems.\n\nThe proper way to go about application design is to build your application on \npaper or in a modelling program according to the best principles of software \ndesign available, and *then* to discuss performance issues -- addressing them \n*first* by buying hardware, and only compromising your applcation design when \nno other alternative is available.\n\n</rant>\n\nI strongly suggest that you purchase Pascal's \"Practical Issues in Database \nDesign\" and give it a read.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Thu, 30 Jan 2003 09:56:56 -0800", "msg_from": "Josh Berkus <josh@agliodbs.com>", "msg_from_op": false, "msg_subject": "Re: One large v. many small" }, { "msg_contents": "On Thu, Jan 30, 2003 at 12:34:36PM -0500, Noah Silverman wrote:\n> Select (*) from client_4 where foo=2;\n> \n> A query from the new, proposed system would be\n> \n> Select (*) from big_results where client=4 and foo=2.\n> \n> The big questions is, WHICH WILL BE FASTER with Postgres. Is there any \n> performance improvement or cost to switching to this new structure.\n\nFaster overall, or faster for that operation? I can't prove it, but\nI suspect that the first one will return faster just because both the\nindex and the table itself is smaller.\n\nThe possibility is thatit will cause you problems overall, however,\nbecause of the large number of files you have to keep if you use 3000\ntables. This is dependent on your filesytem (and its\nimplementation). \n\nNote, too, that a lot of transactions frequently updating the table\nmight make a difference. A large number of dead tuples sitting on a\n10 million row table will make anything crawl.\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<andrew@libertyrms.info> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Thu, 30 Jan 2003 13:02:40 -0500", "msg_from": "Andrew Sullivan <andrew@libertyrms.info>", "msg_from_op": false, "msg_subject": "Re: One large v. many small" }, { "msg_contents": "OK,\n\nThanks for the quick responses.\n\nA bit more information.\n\nWe are in the business of gathering data for our clients. (We're a news \nservice). Subsequently, we do a lot of inserting and very rarely do \nany deleting. (We periodically clear out results that are over 6 months \nold.)\n\nOn a give day, we will insert around 100,000 records in total. \n(Currently split across all the client tables).\n\nA challenging part of the process is that we have to keep track of \nprevious content that may be similar. We CAN'T do this with a unique \nindex (don't ask, it would take too long to explain, but trust me, it \nisn't possible). So, we have to query the table first and then compare \nthe results of that query to what we are inserting. SO, we probably do \nclose to 1 million queries, but then only make about 100,000 inserts. \nThe basic flow is 1) our system finds something it likes, 2) query the \ntable to see if something similar already exists, 3) if nothing similar \nexists, insert.\n\nWhile all this is going on, our clients are accessing our online \nreporting system. This system makes a variety of count and record \nrequests from the database.\n\nAs I mentioned in our earlier post, we are attempting to decide if \nPostgres will run faster/better/ with one big table, or a bunch of \nsmaller ones. 
It really doesn't make much difference to us, we just \nwant whatever structure will be faster.\n\nThanks,\n\n-N\n\n", "msg_date": "Thu, 30 Jan 2003 13:24:38 -0500", "msg_from": "Noah Silverman <noah@allresearch.com>", "msg_from_op": true, "msg_subject": "Re: One large v. many small" }, { "msg_contents": "On Thu, 30 Jan 2003, Josh Berkus wrote:\n\n>\n> The proper way to go about application design is to build your application on\n> paper or in a modelling program according to the best principles of software\n> design available, and *then* to discuss performance issues -- addressing them\n> *first* by buying hardware, and only compromising your applcation design when\n> no other alternative is available.\n>\n\nApp design & performance go hand-in-hand. the trick is to balance them.\nWho wants a _wonderful_ design that runs like a piece of poo? in this\ncase I agree with you - not the best design around. buying hardware to\nfix speed problems is useful, but the software side should not be\nneglected - imagine this scenario using your\nmethods (with a wonderful pg performance problem in hand (unless you are\nrunning cvs))\n\nUser has a schema and writes a query along the lines of\n\nselect somevalue from sometable where othervalue not in (select badvalues\nfrom badvaluetable where id = 12345)\n\nwe all know this runs horrifically on postgres. using your method I should\ngo out, spend thousands on multi-processor boxes, raid, ram\n\nIf you do a little app tuning (maybe spend 10-30 minutes readig pgsql\narchives) you'll learn to rewrite it as an exists query and make it faster\nthan it ever could have been on the fast hardware. I just saved the\ncompany $10k too! (depends on if you consider that change a design\nchange).. some designs are fatally flawed from the start. but hey.. oh\nwell.\n\n'tis a fine line though.. balancing hardware vs software optimization.\n(I'm also guessing they are not constrained by things such as an embedded\nsystem too)\n\n\n------------------------------------------------------------------------------\nJeff Trout <jeff@jefftrout.com> http://www.jefftrout.com/\n Ronald McDonald, with the help of cheese soup,\n controls America from a secret volkswagon hidden in the past\n-------------------------------------------------------------------------------\n\n\n", "msg_date": "Thu, 30 Jan 2003 14:13:38 -0500 (EST)", "msg_from": "Jeff <threshar@torgo.978.org>", "msg_from_op": false, "msg_subject": "Re: One large v. many small" }, { "msg_contents": "I'm going to go against the grain here and say that if you already have\nall of the code and schema worked out, you probably should stick with\nthe many table design. While there are many reasons you'd be better off\nwith the one big table design, a speed increase really isn't one of\nthem. If you we're starting from scratch, or even had a slew of\ndevelopment work you we're planning to do, I'd probably recommend the\none big table approach, but if you don't have any bottlenecks in your\ncurrent system and the type of query you've given is typical of the\nmajority of what your application is doing, there's no sense redesigning\nyour application in the middle of a database switch. \n\nRobert Treat\n\nPS. 
Josh, are you referring to Pascal's \"Practical Issues In Database\nManagement\" book or does he have a different book out that I'm not aware\nof?\n\nOn Thu, 2003-01-30 at 13:24, Noah Silverman wrote:\n> OK,\n> \n> Thanks for the quick responses.\n> \n> A bit more information.\n> \n> We are in the business of gathering data for our clients. (We're a news \n> service). Subsequently, we do a lot of inserting and very rarely do \n> any deleting. (We periodically clear out results that are over 6 months \n> old.)\n> \n> On a give day, we will insert around 100,000 records in total. \n> (Currently split across all the client tables).\n> \n> A challenging part of the process is that we have to keep track of \n> previous content that may be similar. We CAN'T do this with a unique \n> index (don't ask, it would take too long to explain, but trust me, it \n> isn't possible). So, we have to query the table first and then compare \n> the results of that query to what we are inserting. SO, we probably do \n> close to 1 million queries, but then only make about 100,000 inserts. \n> The basic flow is 1) our system finds something it likes, 2) query the \n> table to see if something similar already exists, 3) if nothing similar \n> exists, insert.\n> \n> While all this is going on, our clients are accessing our online \n> reporting system. This system makes a variety of count and record \n> requests from the database.\n> \n> As I mentioned in our earlier post, we are attempting to decide if \n> Postgres will run faster/better/ with one big table, or a bunch of \n> smaller ones. It really doesn't make much difference to us, we just \n> want whatever structure will be faster.\n> \n> Thanks,\n> \n> -N\n>\n\n\n", "msg_date": "30 Jan 2003 14:14:01 -0500", "msg_from": "Robert Treat <xzilla@users.sourceforge.net>", "msg_from_op": false, "msg_subject": "Re: One large v. many small" }, { "msg_contents": "I have a database running on PostgresSQL w/ close to 7 million records in\none table and ~ 3 million in another, along w/ various smaller supportive\ntables.\nBefore I started here everything was run out of small tables, one for each\nclient, similar ( i think ) to what you are doing now.\nWe submit ~ 50 - 100K records each week. And before we moved to one table,\nour company had no idea of how it was doing on a daily, weekly or monthly\nbasis. Now that we have moved to one large structure, new ideas about\nreporting funtions and added services we can give to our clients are poping\nup all the time.\n\nThere are MANY benifts to following Josh's advice and putting all your\ninformation in one table. Others than those given, what if you wanted to\ngive an added service to your clients where they are made aware of similar\npostings by your other clients. Running this kind of report would be a\nnightmare in your current situation.\n\nAs far as performance goes, I am able to join these 2 tables along w/ others\nand get the information, counts etc., that I need, using some rather\ncomplicated queries, in about 2-3 seconds per query. While this may sound\nawful realize that Im running on a standard workstation PIII 700, and for\nthe money, Its a dream!\n\nMore importantly you need to realize, as my coworkers have now done, that\nanything that you can do w/ a small table, you can do w/ one big table and\nan extra line in the where clause (eg. 
Where client_id = 'blah' ).\nPostgresSQL has wonderful support and many excellent DBA's that if you post\na SQL problem they are very supportive in helping solve the problem.\n\nI hope this helps make your decision.\nThanks\nChad\n\n\n----- Original Message -----\nFrom: \"Noah Silverman\" <noah@allresearch.com>\nTo: <pgsql-performance@postgresql.org>\nCc: <pgsql-performance@postgresql.org>\nSent: Thursday, January 30, 2003 11:24 AM\nSubject: Re: [PERFORM] One large v. many small\n\n\n> OK,\n>\n> Thanks for the quick responses.\n>\n> A bit more information.\n>\n> We are in the business of gathering data for our clients. (We're a news\n> service). Subsequently, we do a lot of inserting and very rarely do\n> any deleting. (We periodically clear out results that are over 6 months\n> old.)\n>\n> On a give day, we will insert around 100,000 records in total.\n> (Currently split across all the client tables).\n>\n> A challenging part of the process is that we have to keep track of\n> previous content that may be similar. We CAN'T do this with a unique\n> index (don't ask, it would take too long to explain, but trust me, it\n> isn't possible). So, we have to query the table first and then compare\n> the results of that query to what we are inserting. SO, we probably do\n> close to 1 million queries, but then only make about 100,000 inserts.\n> The basic flow is 1) our system finds something it likes, 2) query the\n> table to see if something similar already exists, 3) if nothing similar\n> exists, insert.\n>\n> While all this is going on, our clients are accessing our online\n> reporting system. This system makes a variety of count and record\n> requests from the database.\n>\n> As I mentioned in our earlier post, we are attempting to decide if\n> Postgres will run faster/better/ with one big table, or a bunch of\n> smaller ones. It really doesn't make much difference to us, we just\n> want whatever structure will be faster.\n>\n> Thanks,\n>\n> -N\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n>\n\n", "msg_date": "Thu, 30 Jan 2003 12:43:09 -0700", "msg_from": "\"Chad Thompson\" <chad@weblinkservices.com>", "msg_from_op": false, "msg_subject": "Re: One large v. many small" }, { "msg_contents": "> imagine this scenario using your\n> methods (with a wonderful pg performance problem in hand (unless you are\n> running cvs))\n\n<snip>\n\n> If you do a little app tuning (maybe spend 10-30 minutes readig pgsql\n> archives) you'll learn to rewrite it as an exists query and make it faster\n> than it ever could have been on the fast hardware.\n\nYour example is invalid... you're talking about an implementation detail,\nnot an architectural design issue.\n\nI have to agree with the original point... normalize the database into the\nproper form, then denormalize as necessary to make things perform\nacceptably. In other words, do things the right way and then muck it up if\nyou have to.\n\nWhile you make an excellent point (i.e. you can't always through hardware,\nespecially excessive hardware at the problem), I would err on the side of\ndoing things the right way. It usually ends up making the software easier to\nmaintain and add to. 
A poor design to save a few thousand dollars on\nhardware now can cost many tens of thousands (or more) dollars on\nprogramming time down the road.\n\nI've seen entirely too many cases where people started thinking about\nperformance before they considered overall design. It almost always ends in\ndisaster (especially since hardware only gets faster over time and software\nonly gets more complex).\n\nGreg\n\n", "msg_date": "Thu, 30 Jan 2003 14:48:16 -0500", "msg_from": "\"Gregory Wood\" <gregw@com-stock.com>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] One large v. many small" }, { "msg_contents": "On Thu, 30 Jan 2003, Josh Berkus wrote:\n\n> > Another idea that we are considering is one big table instead of 3,000\n> > smaller ones. We could simply add a numeric field to indicate which\n> > client a particular record was for.\n>\n> Yes. Absolutely. Although I'd suggest an Integer field.\n\n From the description given in Noah's message, and also the one given in\nhis later message, I have little doubt that 3000 small tables are going\nto be significantly faster than one large table. If you don't believe\nme, work out just where the disk blocks are going to end up, and how\nmany blocks are going to have to be fetched for his typical query in\na semi-clustered or non-clustered table. (If postgres had clustered\nindexes a la MS SQL server, where the rows are physically stored in the\norder of the clustered index, it would be a different matter.)\n\n> However, explain to me how, under the current system, you can find the client\n> who ordered $3000 worth of widgets on January 12th if you don't already know\n> who it is?\n\nExplain to me why he has to do this.\n\nIt's all very nice to have a general system that can do well on all\nsorts of queries, but if you lose time on the queries you do do, in\norder to save time on queries you don't do, you're definitely not\ngetting the best performance out of the system.\n\n> I'm not sure a 3000-table UNION query is even *possible*.\n\nThis is not the only solution, either. You could simply just do 3000\nqueries. If this is something you execute only once a month, the making\nthat query three or four orders of magnitude more expensive might be a\nsmall price to pay for making cheaper the queries you run several times\nper second.\n\n> <rant>\n>\n> You've enslaved your application design to performance considerations ... an\n> approach which was valid in 1990, because processing power was so limited\n> then. But now that dual-processor servers with RAID can be had for less than\n> $3000, there's simply no excuse for violating the principles of good\n> relational database design just to speed up a query. Buying more RAM is\n> much cheaper than having an engineer spend 3 weeks fixing data integrity\n> problems.\n\n*Sigh.* Ok, my turn to rant.\n\nRAM is not cheap enough yet for me to buy several hundred gigabytes of\nit for typical applications, even if I could find a server that I could\nput it in. Disk performance is not growing the way CPU performance is.\nAnd three weeks of engineering time plus a ten thousand dollar server\nis, even at my top billing rate, still a heck of a lot cheaper than a\nquarter-million dollar server.\n\nApplying your strategy to all situations is not always going to produce\nthe most cost-effective solution. And for most businesses, that's what it's\nall about. 
They're not interested in the more \"thoretically pure\" way of\ndoing things except insofar as it makes them money.\n\nAs for the data integrity problems, I don't know where that came from. I\nthink that was made up out of whole cloth, because it didn't seem to me\nthat the original question involved any.\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n", "msg_date": "Fri, 31 Jan 2003 12:54:26 +0900 (JST)", "msg_from": "Curt Sampson <cjs@cynic.net>", "msg_from_op": false, "msg_subject": "Re: One large v. many small" }, { "msg_contents": "Noah,\n\nWell, there you have it: a unanimous consensus of opinion. You\nshould either combine all of your tables or not. But definitely one or\nthe other.\n\n<grin>\n\nHope you feel informed now.\n\n-Josh\n", "msg_date": "Thu, 30 Jan 2003 21:55:40 -0800", "msg_from": "\"Josh Berkus\" <josh@agliodbs.com>", "msg_from_op": false, "msg_subject": "Re: One large v. many small" }, { "msg_contents": "On Thursday 30 Jan 2003 11:54 pm, you wrote:\n> As I mentioned in our earlier post, we are attempting to decide if\n> Postgres will run faster/better/ with one big table, or a bunch of\n> smaller ones. It really doesn't make much difference to us, we just\n> want whatever structure will be faster.\n\nI would say create a big table with client id. Create a index on it and create \n3000 views. Of course you need to figure out SQL voodoo to insert into \npostgresql views using rules.\n\nBut that would save you from modifying your app. up and down. But there is \ngoing to be massive framgmentation. Consider clustering tables once in a \nwhile.\n\n HTH\n\n Shridhar\n", "msg_date": "Fri, 31 Jan 2003 11:57:40 +0530", "msg_from": "\"Shridhar Daithankar<shridhar_daithankar@persistent.co.in>\"\n\t<shridhar_daithankar@persistent.co.in>", "msg_from_op": false, "msg_subject": "Re: One large v. many small" }, { "msg_contents": "On Thu, 30 Jan 2003, Gregory Wood wrote:\n\n> While you make an excellent point (i.e. you can't always through hardware,\n> especially excessive hardware at the problem), I would err on the side of\n> doing things the right way. It usually ends up making the software easier to\n> maintain and add to. A poor design to save a few thousand dollars on\n> hardware now can cost many tens of thousands (or more) dollars on\n> programming time down the road.\n>\n\n\nfun story - I was part of a dot com and we had an informix database and\nthe schema was pretty \"good\" - ref integrity and \"logical layout\". What\nhappened\nwas our traffic started increasng dramatically. We ended up having to\ndisable all the ref integrity simply because it gave us a 50% boost. It\nwas unfortuate, but you have to do it. Sometimes you have to comprimise.\nAs for throwing hardware at it - it was already running on a $500k sun\nbox, an upgrade would have likely gone into the 7 digit range.\nNot to mention you don't exactly get a quick turnaround on\nhardware of that type.. a u10 perhaps, but not a big beefy box.\n(Eventually we did upgrade the db machine when we got another round of\nfunding)\n\nso after a week of engineering and futzing we had things under control..\n(db changes, massive app changes (agressive caching))\n\nYes it was horrid to throw out RI (which caused some minor issues\nlater) but when the business is riding on it.. you make it work any way\nyou can. 
In a perfect world I would have done it another way, but when\nthe site is down (read: your business is not running, you are losing large\namounts of money) you need to put on your fire fighter suit, not your lab\ncoat.\n\nI remember one time we were featured on CNBC and our traffic jumped by\n1000% (yes, 1000) - our poor machines were hosed. Now we did throw\nhardware at this problem (more frontends) however aquiring hardware in\ntime of crisis is not terribly easy (took 3 days in this case). So you\nhave to look at your other routes.\n\nsometimes your design or some implementation details will be comprimised..\nits a fact of business. If the best design always won then why don't I\nhave an alpha for my machine machine? they are the fastest, best cpu\naround. (I'll admit, alpha failed a lot because of marketing\nissues and cost) Business drives everything. I'd rather continue getting\na paycheck than having a wonderfully designed db that doesn't perform\nwell and is causing us to lose money.\n\nIf you have the ability (ie, you know your site is going to end up doing\n22M page views, or some other statistic like that) to see what things will\nbe like later and are not fighting a fire, design is wonderful. (Lets not\nforget time. I was just on a major project and they scheduled _3_ weeks\nof design & coding.. we were quite upset about that one.. and they\narranged it so the launch date was set in stone. man.. worked some\nlong nights..)\n\ngetting back to the original posters thing - why not just try a test to\nsee how things perform yourself? Try some tests with 3000 tables, and try\na test with 1 table with a client_id (int) field. Or as <insert name of\nperson who I forget> said, you could even make a boatload of views and\nchange your insertion logic..\n\nanyway, sorry if I flamed anybody or if they took it personally.\njust stating some experiences I've had.\n\n------------------------------------------------------------------------------\nJeff Trout <jeff@jefftrout.com> http://www.jefftrout.com/\n Ronald McDonald, with the help of cheese soup,\n controls America from a secret volkswagon hidden in the past\n-------------------------------------------------------------------------------\n\n\n", "msg_date": "Fri, 31 Jan 2003 08:01:24 -0500 (EST)", "msg_from": "Jeff <threshar@torgo.978.org>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] One large v. many small" }, { "msg_contents": "Curt Sampson wrote:\n> >From the description given in Noah's message, and also the \n> one given in his later message, I have little doubt that 3000\n> small tables are going to be significantly faster than one \n> large table. If you don't believe me, work out just where the \n> disk blocks are going to end up, and how many blocks are going\n> to have to be fetched for his typical query in a semi-clustered or \n> non-clustered table.\n\nYou may be right, Curt, but I've seen unintuitive results for this\nkind of thing in the past.\n\nDepending on the way the records are accessed and the cache size,\nthe exact opposite could be true. The index pages will most likely\nrarely be in memory when you have 3000 different tables. 
Meaning\nthat each search will require at least three or four index page\nretrievals plus the tuple page.\n\nSo what you might lose due to lack of clustering will be made up\nby the more efficient caching of the upper levels of the index\nbtree pages.\n\nCombine a multi-part index (on both client and foo, which order\nwould depend on the access required) that is clustered once a week\nor so using the admittedly non-optimal PostgreSQL CLUSTER command\nand I'll bet you can get equivalent or better performance with the\nsingle table with the concomitant advantages of much better\nreporting options.\n\nI've also seen many examples of linear algorithms in database\ndata dictionaries which would cause a 3000+ table database\nto perform poorly during the parsing/query optimization stage.\nI don't have any idea whether or not PostgreSQL suffers from this\nproblem.\n\nI don't think there is any substitute for just trying it out. It\nshouldn't be that hard to create a bunch of SQL statements that\nconcatenate the tables into one large one.\n\nTry the most common queries against both scenarios. You might be\nsurprised.\n\n- Curtis\n\n\n", "msg_date": "Fri, 31 Jan 2003 10:32:11 -0400", "msg_from": "\"Curtis Faith\" <curtis@galtcapital.com>", "msg_from_op": false, "msg_subject": "Re: One large v. many small" }, { "msg_contents": "> > While you make an excellent point (i.e. you can't always through\nhardware,\n> > especially excessive hardware at the problem), I would err on the side\nof\n> > doing things the right way. It usually ends up making the software\neasier to\n> > maintain and add to. A poor design to save a few thousand dollars on\n> > hardware now can cost many tens of thousands (or more) dollars on\n> > programming time down the road.\n> >\n>\n>\n> fun story - I was part of a dot com and we had an informix database and\n> the schema was pretty \"good\" - ref integrity and \"logical layout\". What\n> happened\n> was our traffic started increasng dramatically. We ended up having to\n> disable all the ref integrity simply because it gave us a 50% boost. It\n> was unfortuate, but you have to do it. Sometimes you have to comprimise.\n\nYou did what I was suggesting then... start with a good design and work your\nway backwards for the performance you needed and not the other way around.\nI've had to compromise all too often at my business (which upsets me more\nbecause it's often cost the business more in terms of customers and revenue\nin the long run, but they aren't my decisions to make), so I understand that\nnot everything is a matter of \"do it right\"... all too often it's a matter\nof \"get it done\".\n\n> As for throwing hardware at it - it was already running on a $500k sun\n> box, an upgrade would have likely gone into the 7 digit range.\n\nI don't envy you on that... as nice as it is to have that kind of a budget,\nthat adds a lot of pressure to \"make it work\".\n\n> Yes it was horrid to throw out RI (which caused some minor issues\n> later) but when the business is riding on it.. you make it work any way\n> you can. 
In a perfect world I would have done it another way, but when\n> the site is down (read: your business is not running, you are losing large\n> amounts of money) you need to put on your fire fighter suit, not your lab\n> coat.\n\nWell said.\n\n> anyway, sorry if I flamed anybody or if they took it personally.\n> just stating some experiences I've had.\n\nThe more experiences shared, the more well rounded the conclusions of the\nperson reading them.\n\nGreg\n\n", "msg_date": "Fri, 31 Jan 2003 12:12:18 -0500", "msg_from": "\"Gregory Wood\" <gregw@com-stock.com>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] One large v. many small" }, { "msg_contents": "Folks,\n\nMany, many replies on this topic:\n\nJeff:\n> App design & performance go hand-in-hand. the trick is to balance them.\n> Who wants a _wonderful_ design that runs like a piece of poo? in this\n<snip>\n>Select somevalue from sometable where othervalue not in (select badvalues\n> from badvaluetable where id = 12345)\n> we all know this runs horrifically on postgres. using your method I should\n> go out, spend thousands on multi-processor boxes, raid, ram\n\nSorry, no, Jeff. The above is what one calls a \"bad query\" and is not, \ntherefore, a performance vs. design issue: that query is bad design-wise, and \nbad performance-wise. Perhpas another example of your argument?\n\nSince you do not seem to have understood my argument, it is this: \nDesign changes, made for the sake of performance or rapid app building, which \ncompletely violate good RDBMS design and normalization principles, almost \nalways cost you more over the life of the application than you gain in \nperformance in the short term. \n\nCurt:\n> It's all very nice to have a general system that can do well on all\n> sorts of queries, but if you lose time on the queries you do do, in\n> order to save time on queries you don't do, you're definitely not\n> getting the best performance out of the system.\n\nThis is a good point; I tend to build for posterity because, so far, 90% of my \nclients who started out having me build a \"single-purpose\" database ended up \nexpanding the application to cover 2-10 additional needs, thus forcing me to \nclean up any design shortcuts I took with the original app. However, Noah \nmay have more control over his company than that.\n\n<and>\n> RAM is not cheap enough yet for me to buy several hundred gigabytes of\n> it for typical applications, even if I could find a server that I could\n> put it in. Disk performance is not growing the way CPU performance is.\n> And three weeks of engineering time plus a ten thousand dollar server\n> is, even at my top billing rate, still a heck of a lot cheaper than a\n> quarter-million dollar server.\n\nI was thinking more of the difference between a $3000 server and a $9000 \nserver myself; unless you're doing nuclear test modelling, I don't see any \nneed for a $250,000 server for anything. \nTo give an extreme example, I have a client who purchased a $150,000 \naccounting system that turned out to be badly designed, normalization-wise, \npartly because the accounting system engineers were focusing on 8-year-old \ntechnology with performance restrictions which were no longer really \napplicable (for example, they talked the client into buying a quad-processor \nserver and then wrote all of their middleware code on a platform that does \nnot do SMP). Over the last two years, they have paid my company $175,000 to \n\"fix\" this accounting database ... 
more, in fact, than I would have charged \nthem to write a better system from scratch.\n\n<and>\n> Applying your strategy to all situations is not always going to produce\n> the most cost-effective solution.\n\nThat's very true. In fact, that could be taken as a \"general truism\" ... no \none strategy applies to *all* situations.\n\n> PS. Josh, are you referring to Pascal's \"Practical Issues In Database\n> Management\" book or does he have a different book out that I'm not aware\n> of?\n\nYes, you're correct. Sorry!\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n\n", "msg_date": "Fri, 31 Jan 2003 09:34:28 -0800", "msg_from": "Josh Berkus <josh@agliodbs.com>", "msg_from_op": false, "msg_subject": "Re: One large v. many small" }, { "msg_contents": "On Fri, 31 Jan 2003, Curtis Faith wrote:\n\n> Depending on the way the records are accessed and the cache size,\n> the exact opposite could be true. The index pages will most likely\n> rarely be in memory when you have 3000 different tables. Meaning\n> that each search will require at least three or four index page\n> retrievals plus the tuple page.\n\nAssuming you're using indexes at all. If you're tending to use table\nscans, this doesn't matter.\n\n From Noah's description it seemed he was--he said that a particular data\nitem couldn't be the primary key, presumably because he couldn't index\nit reasonably. But this just my guess, not a fact.\n\n> Combine a multi-part index (on both client and foo, which order\n> would depend on the access required) that is clustered once a week\n> or so using the admittedly non-optimal PostgreSQL CLUSTER command\n> and I'll bet you can get equivalent or better performance...\n\nI would say that, just after a CLUSTER, you're likely to see better\nperformance because this would have the effect, on a FFS or similar\nfilesystem where you've got plenty of free space, of physically\nclustering data that would not have been clustered in the case of a lot\nof small tables that see a lot of appending evenly over all of them over\nthe course of time.\n\nSo the tradeoff there is really, can you afford the time for the CLUSTER?\n(In a system where you have a lot of maintenance time, probably. Though if\nit's a huge table, this might need an entire weekend. In a system that needs\nto be up 24/7, probably not, unless you have lots of spare I/O capacity.)\nJust out of curiousity, how does CLUSTER deal with updates to the table while\nthe CLUSTER command is running?\n\n> I don't think there is any substitute for just trying it out. It\n> shouldn't be that hard to create a bunch of SQL statements that\n> concatenate the tables into one large one.\n\nI entirely agree! There are too many unknowns here to do more than\nspeculate on this list.\n\nBut thanks for enlightening me on situations where one big table perform\nbetter.\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n", "msg_date": "Sat, 1 Feb 2003 14:07:00 +0900 (JST)", "msg_from": "Curt Sampson <cjs@cynic.net>", "msg_from_op": false, "msg_subject": "Re: One large v. many small" }, { "msg_contents": "Curt Sampson wrote:\n> So the tradeoff there is really, can you afford the time for the CLUSTER?\n> (In a system where you have a lot of maintenance time, probably. Though if\n> it's a huge table, this might need an entire weekend. 
In a system that needs\n> to be up 24/7, probably not, unless you have lots of spare I/O capacity.)\n> Just out of curiousity, how does CLUSTER deal with updates to the table while\n> the CLUSTER command is running?\n\nCLUSTER locks the table, so no updates can happen during a cluster.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Sun, 2 Feb 2003 05:16:00 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: One large v. many small" } ]
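To make the single-table approach from this thread concrete, here is a rough sketch; the table, column, and index names are made up for illustration, and the CLUSTER step is the periodic reordering Curtis suggests (keeping in mind Bruce's point that CLUSTER locks the table while it runs):

    -- one table for all clients instead of 3000 per-client tables,
    -- keyed by an integer client_id as Josh suggests
    CREATE TABLE client_data (
        client_id   integer NOT NULL,
        fetched_at  timestamp NOT NULL DEFAULT now(),
        content     text
    );

    -- multicolumn index so per-client lookups touch only that client's
    -- slice of the index
    CREATE INDEX client_data_client_idx ON client_data (client_id, fetched_at);

    -- periodic physical reordering (7.2/7.3 syntax: CLUSTER index ON table);
    -- run it in a maintenance window, since the table is locked for the duration
    CLUSTER client_data_client_idx ON client_data;

Whether this beats 3000 small tables still comes down to the kind of testing suggested above; the sketch only shows what the single-table variant would look like.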
[ { "msg_contents": "On Thu, 30 Jan 2003, Gregory Wood wrote:\n\n> While you make an excellent point (i.e. you can't always through hardware,\n> especially excessive hardware at the problem), I would err on the side of\n> doing things the right way. It usually ends up making the software easier to\n> maintain and add to. A poor design to save a few thousand dollars on\n> hardware now can cost many tens of thousands (or more) dollars on\n> programming time down the road.\n>\n\n\nfun story - I was part of a dot com and we had an informix database and\nthe schema was pretty \"good\" - ref integrity and \"logical layout\". What\nhappened\nwas our traffic started increasng dramatically. We ended up having to\ndisable all the ref integrity simply because it gave us a 50% boost. It\nwas unfortuate, but you have to do it. Sometimes you have to comprimise.\nAs for throwing hardware at it - it was already running on a $500k sun\nbox, an upgrade would have likely gone into the 7 digit range.\nNot to mention you don't exactly get a quick turnaround on\nhardware of that type.. a u10 perhaps, but not a big beefy box.\n(Eventually we did upgrade the db machine when we got another round of\nfunding)\n\nso after a week of engineering and futzing we had things under control..\n(db changes, massive app changes (agressive caching))\n\nYes it was horrid to throw out RI (which caused some minor issues\nlater) but when the business is riding on it.. you make it work any way\nyou can. In a perfect world I would have done it another way, but when\nthe site is down (read: your business is not running, you are losing large\namounts of money) you need to put on your fire fighter suit, not your lab\ncoat.\n\nI remember one time we were featured on CNBC and our traffic jumped by\n1000% (yes, 1000) - our poor machines were hosed. Now we did throw\nhardware at this problem (more frontends) however aquiring hardware in\ntime of crisis is not terribly easy (took 3 days in this case). So you\nhave to look at your other routes.\n\nsometimes your design or some implementation details will be comprimised..\nits a fact of business. If the best design always won then why don't I\nhave an alpha for my machine machine? they are the fastest, best cpu\naround. (I'll admit, alpha failed a lot because of marketing\nissues and cost) Business drives everything. I'd rather continue getting\na paycheck than having a wonderfully designed db that doesn't perform\nwell and is causing us to lose money.\n\nIf you have the ability (ie, you know your site is going to end up doing\n22M page views, or some other statistic like that) to see what things will\nbe like later and are not fighting a fire, design is wonderful. (Lets not\nforget time. I was just on a major project and they scheduled _3_ weeks\nof design & coding.. we were quite upset about that one.. and they\narranged it so the launch date was set in stone. man.. worked some\nlong nights..)\n\ngetting back to the original posters thing - why not just try a test to\nsee how things perform yourself? Try some tests with 3000 tables, and try\na test with 1 table with a client_id (int) field. 
Or as <insert name of\nperson who I forget> said, you could even make a boatload of views and\nchange your insertion logic..\n\nanyway, sorry if I flamed anybody or if they took it personally.\njust stating some experiences I've had.\n\n------------------------------------------------------------------------------\nJeff Trout <jeff@jefftrout.com> http://www.jefftrout.com/\n Ronald McDonald, with the help of cheese soup,\n controls America from a secret volkswagon hidden in the past\n-------------------------------------------------------------------------------\n\n\n\n", "msg_date": "Fri, 31 Jan 2003 13:19:36 -0500 (EST)", "msg_from": "Jeff <threshar@torgo.978.org>", "msg_from_op": true, "msg_subject": "Re: One large v. many small (fwd)" }, { "msg_contents": "\nJeff,\n\n> so after a week of engineering and futzing we had things under control..\n> (db changes, massive app changes (agressive caching))\n> \n> Yes it was horrid to throw out RI (which caused some minor issues\n> later) but when the business is riding on it.. you make it work any way\n> you can. In a perfect world I would have done it another way, but when\n> the site is down (read: your business is not running, you are losing large\n> amounts of money) you need to put on your fire fighter suit, not your lab\n> coat.\n\nActually, I'd say this is a great example of what I'm advocating. You \nstarted out with a \"correct\" design, from an RDBMS perspective, and \ncompromised on it only when the performance issues became insurmountable. \nThat sounds like a good approach to me.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Fri, 31 Jan 2003 10:44:00 -0800", "msg_from": "Josh Berkus <josh@agliodbs.com>", "msg_from_op": false, "msg_subject": "Re: One large v. many small (fwd)" } ]
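The "boatload of views" idea mentioned above can be sketched roughly as follows, reusing the hypothetical client_data table from the earlier note; the rewrite rule is the piece that lets existing per-client INSERT statements keep working against a view:

    -- per-client view, so queries written against a per-client table still work
    CREATE VIEW client_0042 AS
        SELECT fetched_at, content
        FROM client_data
        WHERE client_id = 42;

    -- redirect inserts on the view into the shared base table
    CREATE RULE client_0042_insert AS
        ON INSERT TO client_0042
        DO INSTEAD
        INSERT INTO client_data (client_id, fetched_at, content)
        VALUES (42, new.fetched_at, new.content);

The view and rule names here are hypothetical; in practice one pair per client would be generated by script.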
[ { "msg_contents": "I have a table which is very large (~65K rows). I have\na column in it which is indexed, and I wish to use for\na join. I'm finding that I'm using a sequential scan\nfor this when selecting a MIN.\n\nI've boiled this down to something like this:\n\n=> create table X( value int primary key );\n=> explain select min(value) from x;\n Aggregate (cost=22.50..22.50 rows=1 width=4)\n -> Seq Scan on x (cost=0.00..20.00 rows=1000 width=4)\n=> \\d x\n Table \"public.x\"\n Column | Type | Modifiers \n--------+---------+-----------\n value | integer | not null\nIndexes: x_pkey primary key btree (value)\n\nWhy wouldn't I be doing an index scan on this table?\n\n--don\n", "msg_date": "Fri, 31 Jan 2003 16:12:38 -0500", "msg_from": "Don Bowman <don@sandvine.com>", "msg_from_op": true, "msg_subject": "not using index for select min(...)" }, { "msg_contents": "On Fri, Jan 31, 2003 at 04:12:38PM -0500, Don Bowman wrote:\n> Why wouldn't I be doing an index scan on this table?\n\nBecause you're using the aggregate function min(). See\n\n<http://www.ca.postgresql.org/docs/faq-english.html#4.8>\n\nA \n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<andrew@libertyrms.info> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Fri, 31 Jan 2003 18:31:04 -0500", "msg_from": "Andrew Sullivan <andrew@libertyrms.info>", "msg_from_op": false, "msg_subject": "Re: not using index for select min(...)" }, { "msg_contents": "Don,\n\n> I have a table which is very large (~65K rows). I have\n> a column in it which is indexed, and I wish to use for\n> a join. I'm finding that I'm using a sequential scan\n> for this when selecting a MIN.\n\nDue to Postgres' system of extensible aggregates (i.e. you can write your own \naggregates), all aggregates will trigger a Seq Scan in a query. It's a \nknown drawrback that nobody has yet found a good way around.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Fri, 31 Jan 2003 15:31:12 -0800", "msg_from": "Josh Berkus <josh@agliodbs.com>", "msg_from_op": false, "msg_subject": "Re: not using index for select min(...)" }, { "msg_contents": "On Fri, Jan 31, 2003 at 16:12:38 -0500,\n Don Bowman <don@sandvine.com> wrote:\n> I have a table which is very large (~65K rows). I have\n> a column in it which is indexed, and I wish to use for\n> a join. I'm finding that I'm using a sequential scan\n> for this when selecting a MIN.\n> \n> I've boiled this down to something like this:\n> \n> => create table X( value int primary key );\n> => explain select min(value) from x;\n\nUse the following instead:\nselect value from x order by value limit 1;\n", "msg_date": "Fri, 31 Jan 2003 19:02:29 -0600", "msg_from": "Bruno Wolff III <bruno@wolff.to>", "msg_from_op": false, "msg_subject": "Re: not using index for select min(...)" }, { "msg_contents": "> > I have a table which is very large (~65K rows). I have\n> > a column in it which is indexed, and I wish to use for\n> > a join. I'm finding that I'm using a sequential scan\n> > for this when selecting a MIN.\n> \n> Due to Postgres' system of extensible aggregates (i.e. you can write\n> your own aggregates), all aggregates will trigger a Seq Scan in a\n> query. It's a known drawrback that nobody has yet found a good way\n> around.\n\nI've spent some time in the past thinking about this, and here's the\nbest idea that I can come up with:\n\nPart one: setup an ALTER TABLE directive that allows for the\naddition/removal of cached aggregates. 
Ex:\n\n ALTER TABLE tab1 ADD AGGREGATE CACHE ON count(*);\n ALTER TABLE tab1 ADD AGGREGATE CACHE ON sum(col2);\n ALTER TABLE tab1 ADD AGGREGATE CACHE ON sum(col2) WHERE col2 > 100;\n ALTER TABLE tab1 ADD AGGREGATE CACHE ON sum(col2) WHERE col2 <= 100;\n\n\nWhich would translate into some kind of action on a pg_aggregate_cache\ncatalog:\n\naggregate_cache_oid\t OID -- OID for the aggregate cache\naggregate_table_oid\t OID -- table OID\nins_aggfn_oid\t\t OID\t-- aggregate function id for inserts\nupd_aggfn_oid\t\t OID\t-- aggregate function id for updates\ndel_aggfn_oid\t\t OID\t-- aggregate function id for deletes\ncache_value\t\t INT\t-- the value of the cache\nprivate_data\t\t INT[4] -- temporary data space for needed\n\t\t\t\t -- data necessary to calculate cache_value\n\t\t\t\t -- four is just a guesstimate for how much\n\t\t\t\t -- space would be necessary to calculate\n\t\t\t\t -- the most complex of aggregates\nwhere_clause ??? -- I haven't the faintest idea how to\n\t\t\t\t-- express some kind of conditional like this\n\n\nPart two: setup a RULE or TRIGGER that runs on INSERT, UPDATE, or\nDELETE. For the count(*) exercise, the ON UPDATE would be a no-op.\nFor ON INSERT, the count(*) rule would have to do something like:\n\nUPDATE pg_catalog.pg_aggregate_cache SET cached_value = (cached_value + 1)\n WHERE aggregate_cache_oid = 1111111;\n\n\nFor the sum(col2) aggregate cache, the math is a little more complex,\nbut I think it's quite reasonable given that it obviates a full table\nscan. For an insert:\n\nUPDATE pg_catalog.pg_aggregate_cache SET cached_value =\n ((cached_value * private_data[0] + NEW.col2) / (private_data[0] + 1))\n WHERE aggregate_cache_oid = 1111112;\n\n\nNow, there are some obvious problems:\n\n1) avg requires a floating point return value, therefore an INT may\n not be an appropriate data type for cache_value or private_data.\n\n2) aggregate caching wouldn't speed up anything but full table\n aggregates or regions of a column that are frequently needed.\n\n3) all of the existing aggregates would have to be updated to include\n an insert, update, delete procedure (total of 60 aggregates, but\n only 7 by name).\n\n4) the planner would have to be taught how to use/return values from\n the cache.\n\n5) Each aggregate type makes use of the private_data column\n differently. It's up to the cached aggregate function authors to\n not jumble up their private data space.\n\n6) I don't know of a way to handle mixing of floating point numbers\n and integers. That said, there's some margin of error that could\n creep into the floating point calculations such as avg.\n\n\nAnd some benefits:\n\n1) You only get caching for aggregates that you frequently use\n (sum(col2), count(*), etc.).\n\n2) Aggregate function authors can write their own caching routines.\n\n3) For tens of millions of rows, it can be very time consuming to\n sum() fifty million rows, but it's easy to amortize the cost of\n updating the cache on insert, update, delete over the course of a\n month.\n\n4) If an aggregate cache definition isn't setup, it should be easy for\n the planner to fall back to a full table scan, as it currently is.\n\n\nThis definitely would be a performance boost and something that would\nonly be taken advantage of by DBAs that are intentionally performance\ntuning their database, but for those that do, it could be a massive\nwin. Thoughts? 
-sc\n\n-- \nSean Chittenden\n", "msg_date": "Fri, 31 Jan 2003 20:09:54 -0800", "msg_from": "Sean Chittenden <sean@chittenden.org>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] not using index for select min(...)" }, { "msg_contents": "Sean Chittenden <sean@chittenden.org> writes:\n> Now, there are some obvious problems:\n\nYou missed the real reason why this will never happen: it completely\nkills any prospect of concurrent updates. If transaction A has issued\nan update on some row, and gone and modified the relevant aggregate\ncache entries, what happens when transaction B wants to update another\nrow? It has to wait for A to commit or not, so it knows whether to\nbelieve A's changes to the aggregate cache entries.\n\nFor some aggregates you could imagine an 'undo' operator to allow\nA's updates to be retroactively removed even after B has applied its\nchanges. But that doesn't work very well in general. And in any case,\nyou'd have to provide serialization interlocks on physical access to\neach of the aggregate cache entries. That bottleneck applied to every\nupdate would be likely to negate any possible benefit from using the\ncached values.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 31 Jan 2003 23:35:32 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] not using index for select min(...) " }, { "msg_contents": "> > Now, there are some obvious problems:\n> \n> You missed the real reason why this will never happen: it completely\n> kills any prospect of concurrent updates. If transaction A has\n> issued an update on some row, and gone and modified the relevant\n> aggregate cache entries, what happens when transaction B wants to\n> update another row? It has to wait for A to commit or not, so it\n> knows whether to believe A's changes to the aggregate cache entries.\n\nI never claimed it was perfect, :) but it'd be is no worse than a\ntable lock. For the types of applications that this would be of\nbiggest use to, there would likely be more reads than writes and it\nwouldn't be as bad as one could imagine. A few examples:\n\n# No contension\nTransaction A begins\nTransaction A updates tab1\nTransaction B begins\nTransaction B updates tab1\nTransaction B commits\nTransaction A commits\n\n# contension\nTransaction A begins\nTransaction A updates tab1\nTransaction B begins\nTransaction B updates tab1\nTransaction A commits\nTransaction B commits\n\nThis is just about the only case that I can see where there would be\ncontension. In this case, transaction B would have to re-run its\ntrigger serially. In the worse case scenario:\n\nTransaction A begins\nTransaction A updates tab1\nTransaction B begins\nTransaction B updates tab1\nTransaction A commits\nTransaction B selects\nTransaction B updates tab1 again\nTransaction B commits\n\nIn my journals or books I haven't found any examples of a transaction\nbased cache that'd work any better than this. It ain't perfect, but,\nAFAICT, it's as good as it's going to get. The only thing that I\ncould think of that would add some efficiency in this case would be to\nhave transaction B read trough the committed changes from a log file.\nAfter a threshold, it could be more efficient than having transaction\nB re-run its queries.\n\nLike I said, it ain't perfect, but what would be a better solution?\n::shrug:: Even OODB's with stats agents have this problem (though\ntheir overhead for doing this kind of work is much much lower). 
-sc\n\n-- \nSean Chittenden\n", "msg_date": "Fri, 31 Jan 2003 21:09:35 -0800", "msg_from": "Sean Chittenden <sean@chittenden.org>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] not using index for select min(...)" }, { "msg_contents": "Tom Lane wrote:\n> Sean Chittenden <sean@chittenden.org> writes:\n> > Now, there are some obvious problems:\n> \n> You missed the real reason why this will never happen: it completely\n> kills any prospect of concurrent updates. If transaction A has issued\n> an update on some row, and gone and modified the relevant aggregate\n> cache entries, what happens when transaction B wants to update another\n> row? It has to wait for A to commit or not, so it knows whether to\n> believe A's changes to the aggregate cache entries.\n> \n> For some aggregates you could imagine an 'undo' operator to allow\n> A's updates to be retroactively removed even after B has applied its\n> changes. But that doesn't work very well in general. And in any case,\n> you'd have to provide serialization interlocks on physical access to\n> each of the aggregate cache entries. That bottleneck applied to every\n> update would be likely to negate any possible benefit from using the\n> cached values.\n\nHmm...any chance, then, of giving aggregate functions a means of\nasking which table(s) and column(s) the original query referred to so\nthat it could do proper optimization on its own? For instance, for a\n\"SELECT min(x) FROM mytable\" query, the min() function would be told\nupon asking that it's operating on column x of mytable, whereas it\nwould be told \"undefined\" for the column if the query were \"SELECT\nmin(x+y) FROM mytable\". In the former case, it would be able to do a\n\"SELECT x FROM mytable ORDER BY x LIMIT 1\" on its own, whereas in the\nlatter it would have no choice but to fetch the data to do its\ncalculation via the normal means.\n\nBut that may be more trouble than it's worth, if aggregate functions\naren't responsible for retrieving the values they're supposed to base\ntheir computations on, or if it's not possible to get the system to\nrefrain from prefetching data for the aggregate function until the\nfunction asks for it.\n\n-- \nKevin Brown\t\t\t\t\t kevin@sysexperts.com\n", "msg_date": "Sat, 1 Feb 2003 08:43:37 -0800", "msg_from": "Kevin Brown <kevin@sysexperts.com>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] not using index for select min(...)" }, { "msg_contents": "Kevin Brown <kevin@sysexperts.com> writes:\n> Hmm...any chance, then, of giving aggregate functions a means of\n> asking which table(s) and column(s) the original query referred to so\n> that it could do proper optimization on its own?\n\nYou can't usefully do that without altering the aggregate paradigm.\nIt won't help for min() to intuit the answer quickly if the query\nplan is going to insist on feeding every row to it anyway.\n\n> For instance, for a\n> \"SELECT min(x) FROM mytable\" query, the min() function would be told\n> upon asking that it's operating on column x of mytable, whereas it\n> would be told \"undefined\" for the column if the query were \"SELECT\n> min(x+y) FROM mytable\". In the former case, it would be able to do a\n> \"SELECT x FROM mytable ORDER BY x LIMIT 1\" on its own,\n\nDon't forget that it would also need to be aware of whether there were\nany WHERE clauses, joins, GROUP BY, perhaps other things I'm not\nthinking of.\n\nIn the end, the only reasonable way to handle this kind of thing is\nto teach the query planner about it. 
Considering the small number\nof cases that are usefully optimizable (basically only MIN and MAX\non a single table without any WHERE or GROUP clauses), and the ready\navailability of a SQL-level workaround, it strikes me as a very\nlow-priority TODO item.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 01 Feb 2003 12:03:32 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] not using index for select min(...) " }, { "msg_contents": "Sean,\n\n> I've spent some time in the past thinking about this, and here's the\n> best idea that I can come up with:\n>\n> Part one: setup an ALTER TABLE directive that allows for the\n> addition/removal of cached aggregates. Ex:\n\nActually, Joe Conway and I may be working on something like this for a client. \nJoe's idea is to use a hacked version of the statistics collector to cache \nselected aggregate values in memory. These aggregates would be \nnon-persistent, but the main concern for us is having aggregate values that \nare instantly accessable, and that don't increase the cost of INSERTS and \nUPDATES more than 10%.\n\nThis is to satisfy the needs of a particular client, though, so it may never \nmake it into the general PostgreSQL source. We'll post it somewhere if it \nworks, though.\n\nWe already implemented caching aggregates to tables, with is trivially easy to \ndo with triggers. The problem with this approach is the \nUPDATE/INSERT/DELETE overhead; even with an SPI-optimized C trigger, it's \ncosting us up to 40% additional time when under heavy write activity ... \nwhich is exactly when we can't afford delays.\n\nFor a database which has a low level of UPDATE activity, though, you can \nalready implement cached aggregates as tables without inventing any new \nPostgres extensions.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Sat, 1 Feb 2003 11:41:42 -0800", "msg_from": "Josh Berkus <josh@agliodbs.com>", "msg_from_op": false, "msg_subject": "Re: not using index for select min(...)" }, { "msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n> Kevin Brown <kevin@sysexperts.com> writes:\n> > Hmm...any chance, then, of giving aggregate functions a means of\n> > asking which table(s) and column(s) the original query referred to so\n> > that it could do proper optimization on its own?\n> \n> You can't usefully do that without altering the aggregate paradigm.\n> It won't help for min() to intuit the answer quickly if the query\n> plan is going to insist on feeding every row to it anyway.\n\nThat just means you need some way for aggregates to declare which records they\nneed. The only values that seem like they would be useful would be \"first\nrecord\" \"last record\" and \"all records\". Possibly something like \"all-nonnull\nrecords\" for things like count(), but that might be harder.\n\n> Don't forget that it would also need to be aware of whether there were\n> any WHERE clauses, joins, GROUP BY, perhaps other things I'm not\n> thinking of.\n> \n> In the end, the only reasonable way to handle this kind of thing is\n> to teach the query planner about it. Considering the small number\n> of cases that are usefully optimizable (basically only MIN and MAX\n> on a single table without any WHERE or GROUP clauses), and the ready\n> availability of a SQL-level workaround, it strikes me as a very\n> low-priority TODO item.\n\nAll true, but I wouldn't be so quick to dismiss it as low-priority. 
In my\nexperience I've seen the idiom \"select min(foo) from bar\" more times than I\ncan count. The frequency with which this question occurs here probably is\nindicative of how much people expect it to work. And it's probably used by a\nlot of multi-database applications and in a lot of auto-matically generated\ncode where it would be hard to hack in special purpose workarounds.\n\n-- \ngreg\n\n", "msg_date": "01 Feb 2003 15:21:24 -0500", "msg_from": "Greg Stark <gsstark@mit.edu>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] not using index for select min(...)" }, { "msg_contents": "On Sat, Feb 01, 2003 at 15:21:24 -0500,\n Greg Stark <gsstark@mit.edu> wrote:\n> Tom Lane <tgl@sss.pgh.pa.us> writes:\n> \n> That just means you need some way for aggregates to declare which records they\n> need. The only values that seem like they would be useful would be \"first\n> record\" \"last record\" and \"all records\". Possibly something like \"all-nonnull\n> records\" for things like count(), but that might be harder.\n\nI don't see how this is going to be all that useful for aggregates in general.\nmin and max are special and it is unlikely that you are going to get much\nspeed up for general aggregate functions. For the case where you really\nonly need to scan a part of the data (say skipping nulls when nearly all\nof the entries are null), a DBA can add an appropiate partial index and\nwhere clause. This will probably happen infrequently enough that adding\nspecial checks for this aren't going to pay off.\n\nFor min and max, it seems to me that putting special code to detect these\nfunctions and replace them with equivalent subselects in the case where\nan index exists (since a sort is worse than a linear scan) is a possible\nlong term solution to make porting easier.\n\nIn the short term education is the answer. At least the documentation of the\nmin and max functions and the FAQ, and the section with performance tips\nshould recommend the alternative form if there is an appropiate index.\n", "msg_date": "Sat, 1 Feb 2003 20:41:56 -0600", "msg_from": "Bruno Wolff III <bruno@wolff.to>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] not using index for select min(...)" } ]
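Two of the user-level workarounds discussed in this thread, written out against Don's example table x; the rowcount table, function, and trigger names are hypothetical, and the trigger sketch assumes plpgsql is installed and a 7.3-style RETURNS trigger declaration (older releases used RETURNS opaque):

    -- index-friendly rewrites of min()/max() on an indexed column
    SELECT value FROM x ORDER BY value ASC LIMIT 1;   -- instead of min(value)
    SELECT value FROM x ORDER BY value DESC LIMIT 1;  -- instead of max(value)

    -- a trigger-maintained cache for count(*), paying the cost at write time;
    -- every writer updates the same cache row, which is exactly the
    -- serialization point Tom describes above
    CREATE TABLE x_rowcount (total_rows bigint NOT NULL);
    INSERT INTO x_rowcount SELECT count(*) FROM x;

    CREATE FUNCTION x_rowcount_trig() RETURNS trigger AS '
    BEGIN
        IF TG_OP = ''INSERT'' THEN
            UPDATE x_rowcount SET total_rows = total_rows + 1;
        ELSE
            UPDATE x_rowcount SET total_rows = total_rows - 1;
        END IF;
        RETURN NULL;
    END;
    ' LANGUAGE 'plpgsql';

    CREATE TRIGGER x_rowcount_update AFTER INSERT OR DELETE ON x
        FOR EACH ROW EXECUTE PROCEDURE x_rowcount_trig();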
[ { "msg_contents": "The query\n SELECT DISTINCT keycol, 'constant' FROM myTable\nor\n SELECT DISTINCT keycol, NULL FROM myTable\n\nwill result in an error message (7.3.1)\n\nUnable to identify an ordering operator '<' for type \"unknown\"\nUse explicit ordering operator or modify query\n\nIf I use 'constant'::varchar or NULL::varchar everything's fine. \nUnfortunately, this SELECT DISTINCT will appear quite often in my app.\n\nI'd rather like PostgreSQL to use implicit type casting for such \nconstants. The final type chosen doesn't matter anyway and life would be \neasier.\n\n\n\n", "msg_date": "Tue, 04 Feb 2003 13:15:46 +0100", "msg_from": "Andreas Pflug <Andreas.Pflug@web.de>", "msg_from_op": true, "msg_subject": "SELECT DISTINCT is picky about constants" }, { "msg_contents": "On Tue, 2003-02-04 at 07:15, Andreas Pflug wrote:\n> The query\n> SELECT DISTINCT keycol, 'constant' FROM myTable\n> or\n> SELECT DISTINCT keycol, NULL FROM myTable\n> \n> will result in an error message (7.3.1)\n> \n> Unable to identify an ordering operator '<' for type \"unknown\"\n> Use explicit ordering operator or modify query\n> \n> If I use 'constant'::varchar or NULL::varchar everything's fine. \n> Unfortunately, this SELECT DISTINCT will appear quite often in my app.\n> \n> I'd rather like PostgreSQL to use implicit type casting for such \n> constants. The final type chosen doesn't matter anyway and life would be \n> easier.\n\nHow about:\n\nSELECT keycol, NULL FROM (SELECT DISTINCT keycol FROM myTable) AS tab;\n\nMight even be quicker as you won't have to do any comparisons against\nthe constant.\n\n-- \nRod Taylor <rbt@rbt.ca>\n\nPGP Key: http://www.rbt.ca/rbtpub.asc", "msg_date": "04 Feb 2003 09:29:13 -0500", "msg_from": "Rod Taylor <rbt@rbt.ca>", "msg_from_op": false, "msg_subject": "Re: SELECT DISTINCT is picky about constants" } ]
[ { "msg_contents": "subscribe\n\n", "msg_date": "Tue, 4 Feb 2003 16:48:53 +0200 (SAST)", "msg_from": "<jeandre@itvs.co.za>", "msg_from_op": true, "msg_subject": "subscribe" } ]
[ { "msg_contents": "While executing a lot of INSERTs and DELETEs, I had some performance \nproblems which seemed to result from missing foreign key indexes. \nUnfortunately, if doing an EXPLAIN DELETE myTab .... I'm getting only \nthe first stage of query plan, i.e. \"seq scan on myTab\". The database \naccesses behind the scene to check foreign key constraints don't show \nup, so there's no hint that an index might be missing.\n\nSince I got some highly referenced tables, deleting might be a lengthy \nprocess if any single row leads to a full table scan on dozens of other \nbig tables (example: deleting 4000 rows, duration 500 seconds -> 8rows/sec)\n\nBut this doesn't seem to be the whole truth. Actually, in the case \nstated above, all referencing tables (about 80) are empty. I performed \nthis on a rather small database, imagine what happens if the table has \n1.000.000 rows and referencing tables are filled too!\n\n", "msg_date": "Tue, 04 Feb 2003 16:11:22 +0100", "msg_from": "Andreas Pflug <Andreas.Pflug@web.de>", "msg_from_op": true, "msg_subject": "EXPLAIN not helpful for DELETE" } ]
[ { "msg_contents": "I've a new configuration for our web server\n\nProcessor\tProcesseur Intel Xeon 2.0 Ghz / 512 Ko de cache L2\nMemoiry\t1 Go DDR SDRAM\nDisk1\t18Go Ultra 3 (Ultra 160) SCSI 15 Ktpm \nDisk2\t18Go Hot Plug Ultra 3 (Ultra 160) SCSI 15 Ktpm \nDisk3\t18Go Hot Plug Ultra 3 (Ultra 160) SCSI 15 Ktpm \nDisk4\t18Go Hot Plug Ultra 3 (Ultra 160) SCSI 15 Ktpm \nDisk5\t36Go Hot Plug Ultra 3 (Ultra 160) SCSI 15 Ktpm\n\nI will install a Mandrake 8.2\n\nIt's an application server that runs things other than just postgresql. \nIt also runs: apache + Php , bigbrother, log analyser.\n\n\nAt the moment, on my old server, there's postgresql 7.2.3\n my database takes 469M and there are approximatively \n5 millions query per day\n\n\nwhat values should I use for\n\nlinux values:\nkernel.shmmni = 4096\nkernel.shmall = 32000000\nkernel.shmmax = 256000000\n\npostgresql values:\nshared_buffers\nmax_fsm_relations\nmax_fsm_pages\nwal_buffers\nwal_files\nsort_mem\nvacuum_mem\n\n\nany other advices are welcome\n\nthanks in advance\n\n\n\n---------------------\nPhilip johnson \n01 64 86 83 00\nhttp://www.atempo.com \n", "msg_date": "Wed, 5 Feb 2003 11:54:44 +0100", "msg_from": "\"philip johnson\" <philip.johnson@atempo.com>", "msg_from_op": true, "msg_subject": "how to configure my new server" }, { "msg_contents": "pgsql-performance-owner@postgresql.org wrote:\n> Objet : [PERFORM] how to configure my new server\n> Importance : Haute\n> \n> \n> I've a new configuration for our web server\n> \n> Processor\tProcesseur Intel Xeon 2.0 Ghz / 512 Ko de cache L2\n> Memoiry\t1 Go DDR SDRAM\n> Disk1\t18Go Ultra 3 (Ultra 160) SCSI 15 Ktpm\n> Disk2\t18Go Hot Plug Ultra 3 (Ultra 160) SCSI 15 Ktpm\n> Disk3\t18Go Hot Plug Ultra 3 (Ultra 160) SCSI 15 Ktpm\n> Disk4\t18Go Hot Plug Ultra 3 (Ultra 160) SCSI 15 Ktpm\n> Disk5\t36Go Hot Plug Ultra 3 (Ultra 160) SCSI 15 Ktpm\n> \n> I will install a Mandrake 8.2\n> \n> It's an application server that runs things other than just\n> postgresql. It also runs: apache + Php , bigbrother, log analyser.\n> \n> \n> At the moment, on my old server, there's postgresql 7.2.3\n> my database takes 469M and there are approximatively\n> 5 millions query per day\n> \n> \n> what values should I use for\n> \n> linux values:\n> kernel.shmmni = 4096\n> kernel.shmall = 32000000\n> kernel.shmmax = 256000000\n> \n> postgresql values:\n> shared_buffers\n> max_fsm_relations\n> max_fsm_pages\n> wal_buffers\n> wal_files\n> sort_mem\n> vacuum_mem\n> \n> \n> any other advices are welcome\n> \n> thanks in advance\n> \n> \n> \n> ---------------------\n> Philip johnson\n> 01 64 86 83 00\n> http://www.atempo.com\n> \n> ---------------------------(end of\n> broadcast)--------------------------- TIP 4: Don't 'kill -9' the\n> postmaster \n\nSomeone is able to help me ?\n", "msg_date": "Thu, 6 Feb 2003 17:25:38 +0100", "msg_from": "\"philip johnson\" <philip.johnson@atempo.com>", "msg_from_op": true, "msg_subject": "Re: how to configure my new server" }, { "msg_contents": "Phillip,\n\nFirst, a disclaimer: my advice is without warranty whatsoever. 
You want a \nwarranty, you gotta pay me.\n\n> I've a new configuration for our web server\n>\n> Processor\tProcesseur Intel Xeon 2.0 Ghz / 512 Ko de cache L2\n> Memoiry\t1 Go DDR SDRAM\n> Disk1\t18Go Ultra 3 (Ultra 160) SCSI 15 Ktpm\n> Disk2\t18Go Hot Plug Ultra 3 (Ultra 160) SCSI 15 Ktpm\n> Disk3\t18Go Hot Plug Ultra 3 (Ultra 160) SCSI 15 Ktpm\n> Disk4\t18Go Hot Plug Ultra 3 (Ultra 160) SCSI 15 Ktpm\n> Disk5\t36Go Hot Plug Ultra 3 (Ultra 160) SCSI 15 Ktpm\n\nNo RAID, though?\n\nThink carefully about which disks you put things on. Ideally, the OS, the \nweb files, the database files, the database log, and the swap partition will \nall be on seperate disks. With a large database you may even think about \nshifting individual tables or indexes to seperate disks.\n\n> linux values:\n> kernel.shmmni = 4096\n> kernel.shmall = 32000000\n> kernel.shmmax = 256000000\n\nThese are probably too high, but I'm ready to speak authoritatively on that.\n\n> postgresql values:\n> shared_buffers\n> max_fsm_relations\n> max_fsm_pages\n> wal_buffers\n> wal_files\n> sort_mem\n> vacuum_mem\n\nPlease visit the archives for this list. Setting those values is a topic of \ndiscussion for 50% of the threads, and there is yet no firm agreement on good \nvs. bad values.\n\nAlso, you need to ask youself more questions before you start setting values:\n\n1. How many queries does my database handle per second or minute?\n2. How big/complex are those queries?\n3. What is the ratio of database read activity vs. database writing activity?\n4. What large tables in my database get queried simultaneously/together?\n5. Are my database writes bundled into transactions, or seperate?\netc.\n\nSimply knowing the size of the database files isn't enough.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Thu, 6 Feb 2003 09:04:11 -0800", "msg_from": "Josh Berkus <josh@agliodbs.com>", "msg_from_op": false, "msg_subject": "Re: how to configure my new server" }, { "msg_contents": "pgsql-performance-owner@postgresql.org wrote:\n> Phillip,\n>\n> First, a disclaimer: my advice is without warranty whatsoever. You\n> want a warranty, you gotta pay me.\n>\n>> I've a new configuration for our web server\n>>\n>> Processor\tProcesseur Intel Xeon 2.0 Ghz / 512 Ko de cache L2\n>> Memoiry\t1 Go DDR SDRAM Disk1\t18Go Ultra 3 (Ultra 160) SCSI 15 Ktpm\n>> Disk2\t18Go Hot Plug Ultra 3 (Ultra 160) SCSI 15 Ktpm\n>> Disk3\t18Go Hot Plug Ultra 3 (Ultra 160) SCSI 15 Ktpm\n>> Disk4\t18Go Hot Plug Ultra 3 (Ultra 160) SCSI 15 Ktpm\n>> Disk5\t36Go Hot Plug Ultra 3 (Ultra 160) SCSI 15 Ktpm\n>\n> No RAID, though?\n\nYes no Raid, but will could change soon\n\n>\n> Think carefully about which disks you put things on. Ideally, the\n> OS, the web files, the database files, the database log, and the swap\n> partition will all be on seperate disks. With a large database you\n> may even think about shifting individual tables or indexes to\n> seperate disks.\n\nhow can I put indexes on a seperate disk ?\n\n>\n>> linux values:\n>> kernel.shmmni = 4096\n>> kernel.shmall = 32000000\n>> kernel.shmmax = 256000000\n>\n> These are probably too high, but I'm ready to speak authoritatively\n> on that.\nI took a look a the performance archive, and it's not possible to find\nreal info on how to set these 3 values.\n\n>\n>> postgresql values:\n>> shared_buffers\n>> max_fsm_relations\n>> max_fsm_pages\n>> wal_buffers\n>> wal_files\n>> sort_mem\n>> vacuum_mem\n>\n> Please visit the archives for this list. 
Setting those values is a\n> topic of discussion for 50% of the threads, and there is yet no firm\n> agreement on good vs. bad values.\n>\n\nI'm surprised that there's no spreadsheet to calculate those values.\nThere are many threads, but it seems that no one is able to find a rule\nto define values.\n\n\n> Also, you need to ask youself more questions before you start setting\n> values:\n>\n> 1. How many queries does my database handle per second or minute?\ncan't say now\n\n> 2. How big/complex are those queries?\nNot really complex and big as you can see\n\nSELECT qu_request.request_id, qu_request.type, qu_request_doc.ki_status,\nqu_request_doc.ki_subject, qu_request_doc.ki_description,\nqu_request_doc.ki_category, qu_request_doc.rn_description_us,\nqu_request_doc.rn_status_us, quad_config_nati.nati_version_extended\nFROM qu_request left join quad_config_nati on qu_request.quad_server_nati =\nquad_config_nati.nati_version\nleft join qu_request_doc on qu_request.request_id =\nqu_request_doc.request_id\nWHERE qu_request.request_id = '130239'\n\n\nselect sv_inquiry.inquiry_id, sv_inquiry.quad_account_inquiry_id\n,to_char(sv_inquiry.change_dt, 'YYYY-MM-DD HH24:MI') as change_dt ,\nto_char(sv_inquiry.closed_dt, 'YYYY-MM-DD HH24:MI') as closed_dt\n,sv_inquiry.state, sv_inquiry.priority, sv_inquiry.type,\naccount_contact.dear as contact , account_contact2.dear as contact2,\nsv_inquiry.action, sv_inquiry.activity ,\nsubstr(sv_inq_txt.inquiry_txt, 1, 120) as inquiry_txt from sv_inquiry left\njoin sv_inq_txt on sv_inquiry.inquiry_id = sv_inq_txt.inquiry_id\nleft join account_contact on sv_inquiry.account_contact_id =\naccount_contact.account_contact_id left join account_contact\naccount_contact2\non sv_inquiry.account_contact_id2 = account_contact2.account_contact_id\nwhere sv_inquiry.account_id=3441833 and\nsv_inquiry.state not in ('Closed', 'Classified') ORDER BY\nsv_inquiry.inquiry_id DESC\n\n\n> 3. What is the ratio of database read activity vs. database writing\n> activity?\nThere are more insert/update than read, because I'm doing table\nsynchronization\nfrom an SQL Server database. Every 5 minutes I'm looking for change in SQL\nServer\nDatabase.\nI've made some stats, and I found that without user acces, and only with the\nreplications\nI get 2 millions query per day\n\n> 4. What large tables in my database get queried simultaneously/together?\nwhy this questions ?\n\n> 5. Are my database writes bundled into transactions, or seperate?\nbundle in transactions\n\n> etc.\n>\n> Simply knowing the size of the database files isn't enough.\n\nis it better like this ?\n\n", "msg_date": "Thu, 6 Feb 2003 19:13:17 +0100", "msg_from": "\"philip johnson\" <philip.johnson@atempo.com>", "msg_from_op": true, "msg_subject": "Re: how to configure my new server" }, { "msg_contents": "I do not agree with the advice to dedicate one disc to every table.\n\nImagine 10 disks, 10 tables, and 10 users accessing the same table. This \nwould mean 1 disk really busy and 9 idling all the way. If you use a \nRAID array, the random access is hopefully split to all disks, giving a \nmuch better performance. There are suggestions about direct disk access \n(using O_DIRECT file flags), versus using the OS' caching mechanisms. \nThis is quite the same, in hardware. Today's hardware is designed to \nmake the best of it, give it a chance!\n\nPostgreSQL's transaction logs are probably being written sequentially. 
\nThis situation is different because the access pattern is predictable, \ndedicated disks might be useful since head movement is reduced to near \nzero, but if there's no high write volume it's wasted performance.\n\nIf your system is so small that web and database are running on the same \nmachine, you can consider page access being quite like table access, so \nI'd put it on the same big array. Your favorite *.html or *.php will be \ncached either.\n\nSwap space may be on a dedicated disk, but it's better for a server if \nswap is never used. Put enough RAM into that machine! Swapping is quite \na desaster on a server. So you could put swap just where you like it: if \nyour server is sane, it's never accessed under load.\n\nSame about OS files: is there really heavy traffic on them?\n\nThere's a white paper at www.microsoft.com about tuning MSSQL 7.0. If \nread carefully, some advice will be applicable to PostgreSQL too.\n\nSo as my general rule, valid for >99 % of users: use as much disks as \npossible in one big RAID array. Let the hardware do the data scattering, \nyou'll be better off. For a production system, you will need disk \nredundancy either (my experience says one failed disk per year for 20 in \nuse). Until your system is really heavily loaded, and you're using >10 \ndisks, don't think about dedicated disks. If you want extra performance \nfor no cost, you can put the most accessed partition on the outer \ncylinders of the disk array (probably corresponding to the outer \ncylinders of the disk) since throughput is highest there.\n\nRegards,\n\nAndreas\n\n\n\n\n\n\n\n\n", "msg_date": "Thu, 06 Feb 2003 22:43:23 +0100", "msg_from": "Andreas Pflug <Andreas.Pflug@web.de>", "msg_from_op": false, "msg_subject": "Re: how to configure my new server" }, { "msg_contents": "Andreas,\n\n> I do not agree with the advice to dedicate one disc to every table.\n\nNobody gave any such advice. What are you talking about?\n\n-Josh\n", "msg_date": "Thu, 06 Feb 2003 14:56:41 -0800", "msg_from": "\"Josh Berkus\" <josh@agliodbs.com>", "msg_from_op": false, "msg_subject": "Re: how to configure my new server" }, { "msg_contents": "On Thu, 06 Feb 2003 22:43:23 +0100 in message <3E42D6FB.9000605@web.de>, Andreas Pflug <Andreas.Pflug@web.de> wrote:\n> I do not agree with the advice to dedicate one disc to every table.\n> \n> Imagine 10 disks, 10 tables, and 10 users accessing the same table. This \n> would mean 1 disk really busy and 9 idling all the way. If you use a \n> RAID array, the random access is hopefully split to all disks, giving a \n> much better performance. There are suggestions about direct disk access \n> (using O_DIRECT file flags), versus using the OS' caching mechanisms. \n> This is quite the same, in hardware. Today's hardware is designed to \n> make the best of it, give it a chance!\n\nUnfortunately, today's hardware still has rotational latency. You aren't goign to get much more than 300 seeks per sec on the best single drive. Putting them together in a way that requires half to all of them to seek for a given read or write is a performance killer. The only way around this is high end raid cards with backup batteries and ram. \n\nI've been doing some tests using pgbench (which aren't written up yet) on the topic of low budget performance. 
So far, using linux 2.4.20 md software raid where applicable, I've seen against a baseline of one ide disk:\n\nrunning on a rocketraid card (kernel thinks it's scsi) is faster than onboard controllers \nmirrored is negligably slower\nstriped is much slower\nsplitting WAL and Data on two drives gives a 40+% speed boost\nhaving data in ram cache is good for ~ 100% speed boost. Essentially, disk activity goes from evenly split reading and writing to all writing\n\nThe only pg settings that show any correlation with pgbench performance are the # of WAL logs, generally corresponding to the interval between flushing wal logs to the data store. Buffers don't change much over a 64-8192 range, Sort mem doesn't change much. (Note that that may be due to the query types in this benchmark. My app certainly needs the sortmem)\n\nAs a somewhat on topic thought, it would be really neat to have a pci card that was one slot for ram, one for compact flash, a memory/ide controller and battery. Fill the ram and cf with identical sized units, and use it as a disk for WAL. if the power goes off, dump the ram to cf. Should be able to do thousands of writes per sec, effectivley moving the bottleneck somewhere else. It's probably $20 worth of chips for the board, but it would probably sell for thousands. \n\neric\n\n\n\n", "msg_date": "Thu, 06 Feb 2003 15:14:09 -0800", "msg_from": "eric soroos <eric-psql@soroos.net>", "msg_from_op": false, "msg_subject": "Re: how to configure my new server" }, { "msg_contents": "\nIn message <137343446.1167578047@[4.42.179.151]>, eric soroos writes:\n\n As a somewhat on topic thought, it would be really neat to have a\n pci card that was one slot for ram, one for compact flash, a\n memory/ide controller and battery. Fill the ram and cf with\n identical sized units, and use it as a disk for WAL. if the power\n goes off, dump the ram to cf. Should be able to do thousands of\n writes per sec, effectivley moving the bottleneck somewhere else.\n It's probably $20 worth of chips for the board, but it would\n probably sell for thousands.\n \nHow is this not provided by one of the many solid state disks?\n\nhttp://www.storagesearch.com/ssd.html\n\nI have never puchased one of these due to cost ($1 per MB or more) but\nI always assumed this was a direct fit. The solid state disk people\nclaim so as well on their marketing literature. One of the drives\nclaims 700MB/s bandwidth.\n\n -Seth Robertson\n seth@sysd.com\n", "msg_date": "Thu, 06 Feb 2003 19:07:30 -0500", "msg_from": "Seth Robertson <seth@sysd.com>", "msg_from_op": false, "msg_subject": "Re: how to configure my new server " }, { "msg_contents": "\nIn message <137343446.1167578047@[4.42.179.151]>, eric soroos writes:\n\n As a somewhat on topic thought, it would be really neat to have a\n pci card that was one slot for ram, one for compact flash, a\n memory/ide controller and battery. Fill the ram and cf with\n identical sized units, and use it as a disk for WAL. if the power\n goes off, dump the ram to cf. Should be able to do thousands of\n writes per sec, effectivley moving the bottleneck somewhere else.\n It's probably $20 worth of chips for the board, but it would\n probably sell for thousands.\n \nHow is this not provided by one of the many solid state disks?\n\nhttp://www.storagesearch.com/ssd.html\n\nI have never puchased one of these due to cost ($1 per MB or more) but\nI always assumed this was a direct fit. The solid state disk people\nclaim so as well on their marketing literature. 
One of the drives\nclaims 700MB/s bandwidth.\n\n -Seth Robertson\n pgsql-performance@sysd.com\n", "msg_date": "Thu, 06 Feb 2003 19:23:41 -0500", "msg_from": "Seth Robertson <pgsql-performance@sysd.com>", "msg_from_op": false, "msg_subject": "Re: how to configure my new server " }, { "msg_contents": "On Thu, 06 Feb 2003 19:07:30 -0500 in message <200302070007.h1707Us06536@winwood.sysdetect.com>, Seth Robertson <seth@sysd.com> wrote:\n> \n> In message <137343446.1167578047@[4.42.179.151]>, eric soroos writes:\n> \n> As a somewhat on topic thought, it would be really neat to have a\n> pci card that was one slot for ram, one for compact flash, a\n> memory/ide controller and battery. Fill the ram and cf with\n> identical sized units, and use it as a disk for WAL. if the power\n> goes off, dump the ram to cf. Should be able to do thousands of\n> writes per sec, effectivley moving the bottleneck somewhere else.\n> It's probably $20 worth of chips for the board, but it would\n> probably sell for thousands.\n> \n> How is this not provided by one of the many solid state disks?\n\n$20 worth of chips. The selling for thousands is what is provided by SSDs. A pci interface.\n\n> http://www.storagesearch.com/ssd.html\n\nSolid state disks are sold by companies targeting the military and don't have prices on the website. That scares me. \n\nThe pci board would need about the same circuitry as a north+southbridge on an everyday motherboard, current chip cost is in the tens of dollars. Assuming that the demand is low, call it $100 or so for a completed board. support pc-100 (much faster than a pci bus) and cf type 2 (3? allow microdrives) and you've got something like 512mb of storage for $300. Which is almost exactly my WAL size. Compared to a raid card + 2 ide drives, price is a wash and performance is limited by the pci bus. \n\nThere's one board like this w/o the flash, but it's something like $500-1000 depending on ultimate capacity, without ram. \n\n> I have never puchased one of these due to cost ($1 per MB or more) but\n> I always assumed this was a direct fit. The solid state disk people\n> claim so as well on their marketing literature. One of the drives\n> claims 700MB/s bandwidth.\n\nI was seeing prices start in the low thousands and go up from there. Out of my budget I'm afraid. I'm in the process of spending ~ $500 on a drive system. The desire is to switch from raid card + 4 ide drives to my wish card + a raid card + 2 drives. (note that I'm talking in mirrored drives). I'm guessing that I could move the bottleneck to either the processor or pci bus if these cards existed.\n\neric\n\n\n\n", "msg_date": "Thu, 06 Feb 2003 16:33:45 -0800", "msg_from": "eric soroos <eric-psql@soroos.net>", "msg_from_op": false, "msg_subject": "Re: how to configure my new server" }, { "msg_contents": "On Thu, 6 Feb 2003, eric soroos wrote:\n\n> running on a rocketraid card (kernel thinks it's scsi) is faster than\n> onboard controllers\n\nHow many transactions per second can you get on a single RAID or stripe\nsystem using this card?\n\nI found that the write performance for large writes on an Escalade 7850\nwas great, but I couldn't coax more than about 120 writes per second\nout of the thing in any configuration (even writing to separate disks\nin JBOD mode), which made it very disappointing for database use. 
(The\nindividual disks on the controller, modern IBM 60 GB IDEs, could do\nabout 90 writes per second.)\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n", "msg_date": "Fri, 7 Feb 2003 10:17:43 +0900 (JST)", "msg_from": "Curt Sampson <cjs@cynic.net>", "msg_from_op": false, "msg_subject": "Re: how to configure my new server" }, { "msg_contents": "On Fri, 7 Feb 2003 10:17:43 +0900 (JST) in message <Pine.NEB.4.51.0302071013180.7356@angelic.cynic.net>, Curt Sampson <cjs@cynic.net> wrote:\n> On Thu, 6 Feb 2003, eric soroos wrote:\n> \n> > running on a rocketraid card (kernel thinks it's scsi) is faster than\n> > onboard controllers\n> \n> How many transactions per second can you get on a single RAID or stripe\n> system using this card?\n\nMy current test setup is using a rocketraid 404 card, + 2 WD Caviar 80G SE (8 meg cache), and a 2 yr old 7200 rpm ide ibm on the mb controller channel as the os/log drive. The rocketraid is a 4 channel card, I'm going to fill the other two channels with mirror drives. I'm _not_ using hw mirroring, since I want the flexibility of sw mirroring for now. (I don't think their driver supports breaking and reestablishing the mirror with live drive usage)\n\nThis is on a single p3-733, 640 mb ram. Processor is apparently never redlined normally running at 50-75% as reported by vmstat. \n\n> I found that the write performance for large writes on an Escalade 7850\n> was great, but I couldn't coax more than about 120 writes per second\n> out of the thing in any configuration (even writing to separate disks\n> in JBOD mode), which made it very disappointing for database use. (The\n> individual disks on the controller, modern IBM 60 GB IDEs, could do\n> about 90 writes per second.)\n\nUsing a data/wal split, I see peaks around 135 t/s (500 transactions, 10 concurrent clients). Sustained (25000 transactions) that goes down to the 80 range due to the moving of data from the WAL to the data directories. these numbers are all with the kernel's data caches full of about 1/2 gig of data, I'm seeing 300 blks/s read and 3000 write \n\nPeaks for striped are @ 80, peaks for single are ~ 100, peaks for mirror are around 100. I'm curious if hw mirroring would help, as I am about a 4 disk raid 5. But I'm not likely to have the proper drives for that in time to do the testing, and I like the ablity to break the mirror for a backup. For comparison, that same system was doing 20-50 without the extra 512 stick of ram and on the internal single drive. \n\neric\n\n\n\n", "msg_date": "Thu, 06 Feb 2003 19:07:18 -0800", "msg_from": "eric soroos <eric-psql@soroos.net>", "msg_from_op": false, "msg_subject": "Re: how to configure my new server" }, { "msg_contents": "Hello @ all,\n\nJosh wrote:\n>> With a large database you may even think about \n>> shifting individual tables or indexes to seperate disks.\n\nOK, I admit it was a bit provoking. It was my intention to stir things up a little bit ;-) IMHO, thinking about locating data on dedicated files is a waste of time on small servers. Let the hardware do the job for you! It is \"good enough\". \n\n\nEric wrote:\n>>Unfortunately, today's hardware still has rotational latency. You aren't goign to get much \n>> more than 300 seeks per sec on the best single drive. Putting them together in a way that\n>> requires half to all of them to seek for a given read or write is a performance killer. 
\n>> The only way around this is high end raid cards with backup batteries and ram.\n\nYou're right, 300 seeks is best you can expect from a state-of-the-art HD. But the average disk request will certainly not be performed over several disks. Usual block size for RAID is 32kb or 64kb, while most requests will be only some kb (assuming you're not doing full table scans all the time). Thus, the usual request will require only one disk to be accessed on read. This way, a 10-disk array will be capable of up to 3000 requests/second (if the controller allows this).\n\nActually, I don't trust software RAID. If I'm talking about RAID, I mean mature RAID solutions, using SCSI or similar professional equipment. More RAM, ideally with backup power, is desirable. For small servers, a RAID controller < 1000 $ usually will do. IDE RAID, uhm eh... I never did like it, and I doubt that IDE RAID controller are doing a good job optimizing for this kind of traffic. IMHO, they are meant for workstation, leave them there. And remember, if we talk about access time, typical latency for a SCSI disk is half of fast IDE disks, giving double speed for typical DB access patterns. You may use IDE if speed means MB/s, but for us it's seeks/s.\n\nI don't think solid state disks are a way out (unless you don't know where to bury your money :-). Maybe the gurus can tell more about PostgreSQL's caching, but for my opinion if enough RAM is available after some time all of the DB should be in cache eliminating the need to access the disks for read access. For writing, which is typically less than 10 % of total load, an optimizing caching disk controller should be sufficient.\n\nAndreas\n\n\n", "msg_date": "Fri, 07 Feb 2003 04:23:34 +0100", "msg_from": "Andreas Pflug <Andreas.Pflug@web.de>", "msg_from_op": false, "msg_subject": "Re: how to configure my new server" }, { "msg_contents": "On Thu, 6 Feb 2003, eric soroos wrote:\n\n> > I found that the write performance for large writes on an Escalade\n> > 7850 was great, but I couldn't coax more than about 120 writes\n> > per second out of the thing in any configuration (even writing to\n> > separate disks in JBOD mode), which made it very disappointing for\n> > database use. (The individual disks on the controller, modern IBM 60\n> > GB IDEs, could do about 90 writes per second.)\n> ...\n> Peaks for striped are @ 80, peaks for single are ~ 100, peaks for\n> mirror are around 100. I'm curious if hw mirroring would help, as I am\n> about a 4 disk raid 5. But I'm not likely to have the proper drives\n> for that in time to do the testing, and I like the ablity to break the\n> mirror for a backup. For comparison, that same system was doing 20-50\n> without the extra 512 stick of ram and on the internal single drive.\n\nHm. That's still very low (about the same as a single modern IDE drive).\nI'm looking for an IDE RAID controller that would get me up into the\n300-500 reads/writes per second range, for 8K blocks. This should not\nbe a problem when doing striping across eight disks that are each\nindividually capable of about 90 random 8K reads/writes per second.\n(Depending on the size of the data area you're testing on, of course.)\n\nSee http://randread.sourceforge.net for some tools to help measure this.\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. 
--XTC\n", "msg_date": "Fri, 7 Feb 2003 13:34:32 +0900 (JST)", "msg_from": "Curt Sampson <cjs@cynic.net>", "msg_from_op": false, "msg_subject": "Re: how to configure my new server" }, { "msg_contents": "On Fri, 7 Feb 2003, Andreas Pflug wrote:\n\n> Actually, I don't trust software RAID. If I'm talking about RAID, I \n> mean mature RAID solutions, using SCSI or similar professional \n> equipment. \n\nFunny you should mention that. A buddy running a \"professional\" level \ncard had it mark two out of three drives in a RAID 5 bad and wouldn't let \nhim reinsert the drives no matter what. Had to revert to backups.\n\nI'll take Linux's built in kernel raid any day over most pro cards. I've \nbeen using it in production for about 3 years and it is very mature and \nstable, and lets you do what you want to do (which can be good if your \nsmart, but very bad if you do something dumb... :-)\n\nI've had good and bad experiences with pro grade RAID boxes and \ncontrollers, but I've honestly had nothing but good from linux's kernel \nlevel raid. Some early 2.0 stuff had some squirreliness that meant I had \nto actually reboot for some changes to take affect. Since the 2.2 kernel \ncame out the md driver has been rock solid. I've not played with the \nvolume manager yet, but I hear equally nice things about it.\n\nKeep in mind that a \"hardware raid card\" is nothing more than software \nraid burnt into ROM and stuffed on a dedicated card, there's no magic \npixie dust that decrees doing such makes it a better or more reliable \nsolution.\n\n", "msg_date": "Fri, 7 Feb 2003 09:13:17 -0700 (MST)", "msg_from": "\"scott.marlowe\" <scott.marlowe@ihs.com>", "msg_from_op": false, "msg_subject": "Re: how to configure my new server" }, { "msg_contents": "Folks,\n\tI'm going to be setting up a Linux software RAID this weekend, and in my\nresearch I cam across the following document:\nhttp://www.hpl.hp.com/techreports/2002/HPL-2002-352.html\n\tIt says that Linux software RAID is slower than XP Software RAID on the\nsame hardware. If this is the case, wouldn't it follow that hardware RAID\nhas a really good chance of beating Linux software RAID? Or does the\nproblem that affects the software raid affect all Linux disk IO? I'm not\nreally knowledgeable enough to tell.\nThanks,\nPeter Darley\n\n-----Original Message-----\nFrom: pgsql-performance-owner@postgresql.org\n[mailto:pgsql-performance-owner@postgresql.org]On Behalf Of\nscott.marlowe\nSent: Friday, February 07, 2003 8:13 AM\nTo: Andreas Pflug\nCc: pgsql-performance@postgresql.org\nSubject: Re: [PERFORM] how to configure my new server\n\n\nOn Fri, 7 Feb 2003, Andreas Pflug wrote:\n\n> Actually, I don't trust software RAID. If I'm talking about RAID, I\n> mean mature RAID solutions, using SCSI or similar professional\n> equipment.\n\nFunny you should mention that. A buddy running a \"professional\" level\ncard had it mark two out of three drives in a RAID 5 bad and wouldn't let\nhim reinsert the drives no matter what. Had to revert to backups.\n\nI'll take Linux's built in kernel raid any day over most pro cards. I've\nbeen using it in production for about 3 years and it is very mature and\nstable, and lets you do what you want to do (which can be good if your\nsmart, but very bad if you do something dumb... :-)\n\nI've had good and bad experiences with pro grade RAID boxes and\ncontrollers, but I've honestly had nothing but good from linux's kernel\nlevel raid. 
Some early 2.0 stuff had some squirreliness that meant I had\nto actually reboot for some changes to take affect. Since the 2.2 kernel\ncame out the md driver has been rock solid. I've not played with the\nvolume manager yet, but I hear equally nice things about it.\n\nKeep in mind that a \"hardware raid card\" is nothing more than software\nraid burnt into ROM and stuffed on a dedicated card, there's no magic\npixie dust that decrees doing such makes it a better or more reliable\nsolution.\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 5: Have you checked our extensive FAQ?\n\nhttp://www.postgresql.org/users-lounge/docs/faq.html\n\n", "msg_date": "Fri, 7 Feb 2003 08:47:01 -0800", "msg_from": "\"Peter Darley\" <pdarley@kinesis-cem.com>", "msg_from_op": false, "msg_subject": "Re: how to configure my new server" }, { "msg_contents": "Andreas,\n\n> Josh wrote:\n> >> With a large database you may even think about\n> >> shifting individual tables or indexes to seperate disks.\n>\n> OK, I admit it was a bit provoking. It was my intention to stir things up a\n> little bit ;-) IMHO, thinking about locating data on dedicated files is a\n> waste of time on small servers. Let the hardware do the job for you! It is\n> \"good enough\".\n\nAha, by \"large databases\" I mean \"several million records\". In the odd case \nwhere you have a database which has one or two tables which are larger than \nthe rest of the database combined, you can get a performance boost by putting \nthose tables, and/or their indexes, on a seperate spindle.\n\nFrankly, for small servers, a pair of mirrored IDE drives is adequate. And, \nof course, if you have a RAID 1+0 controller, that's better than trying to \ndirectly allocate different files to different disks ... except the WAL log.\n\n> I don't think solid state disks are a way out (unless you don't know where\n> to bury your money :-). Maybe the gurus can tell more about PostgreSQL's\n> caching, but for my opinion if enough RAM is available after some time all\n> of the DB should be in cache eliminating the need to access the disks for\n> read access. For writing, which is typically less than 10 % of total load,\n> an optimizing caching disk controller should be sufficient.\n\nDepends on the database. I've worked on DBs where writes were 40% of all \nqueries ... and 90% of the system resource load. For those databases, \nmoving the WAL log to a RAMdisk might mean a big boost.\n\nAlso, I'm currently working to figure out a solution for some 1U machines \nwhich don't have any *room* for an extra drive for WAL. A PCI ramdisk would \nbe just perfect ...\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Fri, 7 Feb 2003 09:16:27 -0800", "msg_from": "Josh Berkus <josh@agliodbs.com>", "msg_from_op": false, "msg_subject": "Re: how to configure my new server" }, { "msg_contents": "I've always had very good performance with Linux's kernel raid, though \nI've never compared it to Windows, just to hardware raid cards running in \nlinux.\n\nI can get aggregate reads of about 48 Megs a second on a pair of 10k 18 \ngig UW scsi drives in RAID1 config. I'm not saying there's no room for \nimprovement, but for what I use it for, it gives very good performance.\n\nSome hardware cards will certainly be faster than the linux kernel \nraid software, but it's not a given that any hardware card WILL be faster. 
\nI'm quite certain that you could outrun most older cards using 33 MHz I960 \nfor checksum calculations with a dual 2.4Ghz machine doing software.\n\nThe only way to be sure is to test it.\n\n On Fri, 7 Feb 2003, Peter Darley wrote:\n\n> Folks,\n> \tI'm going to be setting up a Linux software RAID this weekend, and in my\n> research I cam across the following document:\n> http://www.hpl.hp.com/techreports/2002/HPL-2002-352.html\n> \tIt says that Linux software RAID is slower than XP Software RAID on the\n> same hardware. If this is the case, wouldn't it follow that hardware RAID\n> has a really good chance of beating Linux software RAID? Or does the\n> problem that affects the software raid affect all Linux disk IO? I'm not\n> really knowledgeable enough to tell.\n> Thanks,\n> Peter Darley\n> \n> -----Original Message-----\n> From: pgsql-performance-owner@postgresql.org\n> [mailto:pgsql-performance-owner@postgresql.org]On Behalf Of\n> scott.marlowe\n> Sent: Friday, February 07, 2003 8:13 AM\n> To: Andreas Pflug\n> Cc: pgsql-performance@postgresql.org\n> Subject: Re: [PERFORM] how to configure my new server\n> \n> \n> On Fri, 7 Feb 2003, Andreas Pflug wrote:\n> \n> > Actually, I don't trust software RAID. If I'm talking about RAID, I\n> > mean mature RAID solutions, using SCSI or similar professional\n> > equipment.\n> \n> Funny you should mention that. A buddy running a \"professional\" level\n> card had it mark two out of three drives in a RAID 5 bad and wouldn't let\n> him reinsert the drives no matter what. Had to revert to backups.\n> \n> I'll take Linux's built in kernel raid any day over most pro cards. I've\n> been using it in production for about 3 years and it is very mature and\n> stable, and lets you do what you want to do (which can be good if your\n> smart, but very bad if you do something dumb... :-)\n> \n> I've had good and bad experiences with pro grade RAID boxes and\n> controllers, but I've honestly had nothing but good from linux's kernel\n> level raid. Some early 2.0 stuff had some squirreliness that meant I had\n> to actually reboot for some changes to take affect. Since the 2.2 kernel\n> came out the md driver has been rock solid. I've not played with the\n> volume manager yet, but I hear equally nice things about it.\n> \n> Keep in mind that a \"hardware raid card\" is nothing more than software\n> raid burnt into ROM and stuffed on a dedicated card, there's no magic\n> pixie dust that decrees doing such makes it a better or more reliable\n> solution.\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n> \n> \n\n", "msg_date": "Fri, 7 Feb 2003 10:33:48 -0700 (MST)", "msg_from": "\"scott.marlowe\" <scott.marlowe@ihs.com>", "msg_from_op": false, "msg_subject": "Re: how to configure my new server" }, { "msg_contents": "\n> > Peaks for striped are @ 80, peaks for single are ~ 100, peaks for\n> > mirror are around 100. I'm curious if hw mirroring would help, as I am\n> > about a 4 disk raid 5. But I'm not likely to have the proper drives\n> > for that in time to do the testing, and I like the ablity to break the\n> > mirror for a backup. For comparison, that same system was doing 20-50\n> > without the extra 512 stick of ram and on the internal single drive.\n\nUpon some further poking around, I have determined that there were procedural errors that make the data inconsistent. 
I believe that half of the ram cache may not have been in the state that I thought it was in for all of the tests. \n\n> Hm. That's still very low (about the same as a single modern IDE drive).\n> I'm looking for an IDE RAID controller that would get me up into the\n> 300-500 reads/writes per second range, for 8K blocks. This should not\n> be a problem when doing striping across eight disks that are each\n> individually capable of about 90 random 8K reads/writes per second.\n> (Depending on the size of the data area you're testing on, of course.)\n\nRunning some further tests, I'm seeing software striping in the 125 tps peak/100 sustained range, which is about what I'm getting from the split WAL/Data mode right now. I'm still seeing about 10% reads, so there's probably some more to be gained with additional system ram. \n\neric\n\n\n\n", "msg_date": "Fri, 07 Feb 2003 10:06:10 -0800", "msg_from": "eric soroos <eric-psql@soroos.net>", "msg_from_op": false, "msg_subject": "Re: how to configure my new server" }, { "msg_contents": "scott.marlowe wrote:\n\n>I can get aggregate reads of about 48 Megs a second on a pair of 10k 18 \n>gig UW scsi drives in RAID1 config. I'm not saying there's no room for \n>improvement, but for what I use it for, it gives very good performance.\n>\n> \n>\nScott,\n\nas most people talking about performance you mean throughput, but this \nis not the most important parameter for databases. Reading the comments \nof other users with software and IDE RAID, it seems to me that indeed \nthese solutions are only good at this discipline.\n\nAnother suggestion:\nYou're right, a hardware RAID controller is nothing but a stripped down \nsystem that does noting more than a software RAID would do either. But \nthis tends to be the discussion that Intels plays for years now. There \nwere times when Intel said \"don't need an intelligent graphics \ncontroller, just use a fast processor\". Well, development went another \ndirection, and it's good this way. Same with specialized controllers. \nThey will take burden from the central processing unit, which can \nconcentrate on the complicated things, not just getting some block from \ndisk. Look at most Intel based servers. Often, CPU Speed is less than \nworkstations CPUs, RAM technology one step behind. But they have \nsophisticated infrastructure for coprocessing. This is the way to speed \nthings up, not pumping up the CPU.\n\nIf you got two of three HDs bad in a RAID5 array, you're lost. That's \nthe case for all RAID5 solutions, because the redundancy is just one \ndisk. Better solutions will allow for spare disks that jump in as soon \nas one fails, hopefully it rebuilds before the next fails.\n\n\nRegards,\n\nAndreas\n\n\n\n\n", "msg_date": "Fri, 07 Feb 2003 19:42:48 +0100", "msg_from": "Andreas Pflug <Andreas.Pflug@web.de>", "msg_from_op": false, "msg_subject": "Re: how to configure my new server" }, { "msg_contents": "On Fri, 7 Feb 2003, Andreas Pflug wrote:\n\n> scott.marlowe wrote:\n> \n> >I can get aggregate reads of about 48 Megs a second on a pair of 10k 18 \n> >gig UW scsi drives in RAID1 config. I'm not saying there's no room for \n> >improvement, but for what I use it for, it gives very good performance.\n> >\n> > \n> >\n> Scott,\n> \n> as most people talking about performance you mean throughput, but this \n> is not the most important parameter for databases. 
Reading the comments \n> of other users with software and IDE RAID, it seems to me that indeed \n> these solutions are only good at this discipline.\n\nWell, I have run bonnie across it and several other options as well, and \nthe RAID cards I've test (Mega RAID 428 kinda stuff, i.e. 2 or 3 years \nold) were no better than Linux at any of the tests. In some cases much \nslower. \n\n> Another suggestion:\n> You're right, a hardware RAID controller is nothing but a stripped down \n> system that does noting more than a software RAID would do either. But \n> this tends to be the discussion that Intels plays for years now. There \n> were times when Intel said \"don't need an intelligent graphics \n> controller, just use a fast processor\". Well, development went another \n> direction, and it's good this way. Same with specialized controllers. \n> They will take burden from the central processing unit, which can \n> concentrate on the complicated things, not just getting some block from \n> disk. Look at most Intel based servers. Often, CPU Speed is less than \n> workstations CPUs, RAM technology one step behind. But they have \n> sophisticated infrastructure for coprocessing. This is the way to speed \n> things up, not pumping up the CPU.\n\nHey, I was an Amiga owner, I'm all in favor of moving off the CPU that you \ncan. But, that's only a win if you're on a machine that will be \nCPU/interrupt bound. If the machine sits at 99% idle with most of the \nwaiting being I/O, and it has 4 CPUs anyway, then you may or may not gain \nfrom moving the work onto another card. while SSL et. al. encryption is \nCPU intensive, but generally the XOring needed to be done for RAID \nchecksums is very simple to do quickly on modern architectures, so \nthere's no great gain the that department. I'd imagine the big gain \nwould come from on board battery backed up write through / or behind \ncache memory.\n\nI think the fastest solutions have always been the big outboard boxes with \nthe RAID built in, and the PCI cards tend to be also rans in comparison.\n\nBut the one point I'm sure we'll agree on in this is that until you test \nit with your workload, you won't really know which is better, if either.\n\n> If you got two of three HDs bad in a RAID5 array, you're lost. That's \n> the case for all RAID5 solutions, because the redundancy is just one \n> disk. Better solutions will allow for spare disks that jump in as soon \n> as one fails, hopefully it rebuilds before the next fails.\n\nThe problem was that all three drives were good. He moved the server, \ncable came half off, the card marked the drives as bad, and wouldn't \naccept them back until it had formatted them. This wasn't the first time \nI'd seen this kind of problem with RAID controllers either, as it had \nhappened to me in testing one a few years earlier. Which is one of the \nmany life experiences that makes me like backups so much. \n\n", "msg_date": "Fri, 7 Feb 2003 14:01:38 -0700 (MST)", "msg_from": "\"scott.marlowe\" <scott.marlowe@ihs.com>", "msg_from_op": false, "msg_subject": "Re: how to configure my new server" }, { "msg_contents": "scott.marlowe wrote:\n\n>The problem was that all three drives were good. He moved the server, \n>cable came half off, the card marked the drives as bad, and wouldn't \n>accept them back until it had formatted them. This wasn't the first time \n>I'd seen this kind of problem with RAID controllers either, as it had \n>happened to me in testing one a few years earlier. 
Which is one of the \n>many life experiences that makes me like backups so much. \n>\n> \n>\nOk, had this kind of problems too. Everytime you stop and start a \nserver, rest a second and say a little prayer :-) 80 % of HDs I've seen \ndying didn't start again after a regular maintenance power down. Or a \ncontroller finds some disk to be faulty for some nonreproduceable \nreason, until you kick the disk out of the array and rebuild it. Bad \nsurprise, if TWO disks are considered bad. Stop and start again, and say \na real GOOD prayer, then get out the backup tape...\n\nI had this especially with Adaptec controllers. I had to learn they \ncannot build reliable RAID, and they don't know what maintenance of \nproducts is either (three generations of AAA controllers in two years, \nall being incompatible, only partial supported for new OS. I'm cured!) \nSome years ago they bought PMD, which is a lot better.\n\nAndreas\n\n", "msg_date": "Sat, 08 Feb 2003 00:21:47 +0100", "msg_from": "Andreas Pflug <Andreas.Pflug@web.de>", "msg_from_op": false, "msg_subject": "Re: how to configure my new server" }, { "msg_contents": "pgsql-performance-owner@postgresql.org wrote:\n> Objet : Re: [PERFORM] how to configure my new server\n> \n> \n> pgsql-performance-owner@postgresql.org wrote:\n>> Phillip,\n>> \n>> First, a disclaimer: my advice is without warranty whatsoever. You\n>> want a warranty, you gotta pay me.\n>> \n>>> I've a new configuration for our web server\n>>> \n>>> Processor\tProcesseur Intel Xeon 2.0 Ghz / 512 Ko de cache L2\n>>> Memoiry\t1 Go DDR SDRAM Disk1\t18Go Ultra 3 (Ultra 160) SCSI 15 Ktpm\n>>> Disk2\t18Go Hot Plug Ultra 3 (Ultra 160) SCSI 15 Ktpm\n>>> Disk3\t18Go Hot Plug Ultra 3 (Ultra 160) SCSI 15 Ktpm\n>>> Disk4\t18Go Hot Plug Ultra 3 (Ultra 160) SCSI 15 Ktpm\n>>> Disk5\t36Go Hot Plug Ultra 3 (Ultra 160) SCSI 15 Ktpm\n>> \n>> No RAID, though?\n> \n> Yes no Raid, but will could change soon\n> \n>> \n>> Think carefully about which disks you put things on. Ideally, the\n>> OS, the web files, the database files, the database log, and the swap\n>> partition will all be on seperate disks. With a large database you\n>> may even think about shifting individual tables or indexes to\n>> seperate disks.\n> \n> how can I put indexes on a seperate disk ?\n> \n>> \n>>> linux values:\n>>> kernel.shmmni = 4096\n>>> kernel.shmall = 32000000\n>>> kernel.shmmax = 256000000\n>> \n>> These are probably too high, but I'm ready to speak authoritatively\n>> on that.\n> I took a look a the performance archive, and it's not possible to find\n> real info on how to set these 3 values.\n> \n>> \n>>> postgresql values:\n>>> shared_buffers\n>>> max_fsm_relations\n>>> max_fsm_pages\n>>> wal_buffers\n>>> wal_files\n>>> sort_mem\n>>> vacuum_mem\n>> \n>> Please visit the archives for this list. Setting those values is a\n>> topic of discussion for 50% of the threads, and there is yet no firm\n>> agreement on good vs. bad values.\n>> \n> \n> I'm surprised that there's no spreadsheet to calculate those values.\n> There are many threads, but it seems that no one is able to find a\n> rule to define values.\n> \n> \n>> Also, you need to ask youself more questions before you start\n>> setting values: \n>> \n>> 1. How many queries does my database handle per second or minute?\n>> can't say now \n> \n>> 2. 
How big/complex are those queries?\n> Not really complex and big as you can see\n> \n> SELECT qu_request.request_id, qu_request.type,\n> qu_request_doc.ki_status, qu_request_doc.ki_subject,\n> qu_request_doc.ki_description, qu_request_doc.ki_category,\n> qu_request_doc.rn_description_us, qu_request_doc.rn_status_us,\n> quad_config_nati.nati_version_extended FROM qu_request left join\n> quad_config_nati on qu_request.quad_server_nati =\n> quad_config_nati.nati_version left join qu_request_doc on\n> qu_request.request_id = qu_request_doc.request_id\n> WHERE qu_request.request_id = '130239'\n> \n> \n> select sv_inquiry.inquiry_id, sv_inquiry.quad_account_inquiry_id\n> ,to_char(sv_inquiry.change_dt, 'YYYY-MM-DD HH24:MI') as change_dt ,\n> to_char(sv_inquiry.closed_dt, 'YYYY-MM-DD HH24:MI') as closed_dt\n> ,sv_inquiry.state, sv_inquiry.priority, sv_inquiry.type,\n> account_contact.dear as contact , account_contact2.dear as contact2,\n> sv_inquiry.action, sv_inquiry.activity ,\n> substr(sv_inq_txt.inquiry_txt, 1, 120) as inquiry_txt from sv_inquiry\n> left join sv_inq_txt on sv_inquiry.inquiry_id = sv_inq_txt.inquiry_id\n> left join account_contact on sv_inquiry.account_contact_id =\n> account_contact.account_contact_id left join account_contact\n> account_contact2\n> on sv_inquiry.account_contact_id2 =\n> account_contact2.account_contact_id where\n> sv_inquiry.account_id=3441833 and sv_inquiry.state not in ('Closed',\n> 'Classified') ORDER BY sv_inquiry.inquiry_id DESC\n> \n> \n>> 3. What is the ratio of database read activity vs. database writing\n>> activity?\n> There are more insert/update than read, because I'm doing table\n> synchronization\n> from an SQL Server database. Every 5 minutes I'm looking for change\n> in SQL Server\n> Database.\n> I've made some stats, and I found that without user acces, and only\n> with the replications\n> I get 2 millions query per day\n> \n>> 4. What large tables in my database get queried\n>> simultaneously/together? why this questions ? \n> \n>> 5. Are my database writes bundled into transactions, or seperate?\n>> bundle in transactions \n> \n>> etc.\n>> \n>> Simply knowing the size of the database files isn't enough.\n> \n> is it better like this ?\n> \n> \n> ---------------------------(end of\n> broadcast)--------------------------- TIP 5: Have you checked our\n> extensive FAQ? \n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n\nsomeone could come back to first request ?\n", "msg_date": "Mon, 10 Feb 2003 12:18:35 +0100", "msg_from": "\"philip johnson\" <philip.johnson@atempo.com>", "msg_from_op": true, "msg_subject": "Re: how to configure my new server" }, { "msg_contents": "Philip,\n\n>\n> someone could come back to first request ?\n>\n\nInsistent, aren't you? ;-)\n\n> > Yes no Raid, but will could change soon\n\nAdding RAID 1+0 could simplify your job enormously. It would prevent you \nfrom having to figure out what to put on each disk. 
If it were my machine, \nand I knew that the database was more important than the other services, I'd \nbuild it like this:\n\nArray 1: Disk 1: 18Go Ultra 3 (Ultra 160) SCSI 15 Ktpm\n Disk2 : 18Go Hot Plug Ultra 3 (Ultra 160) SCSI 15 Ktpm\n\nContains: Linux, Apache, Swap\n\nArray 2: \n Di:sk3 18Go Hot Plug Ultra 3 (Ultra 160) SCSI 15 Ktpm\n Disk4 18Go Hot Plug Ultra 3 (Ultra 160) SCSI 15 Ktpm\n\nContains: PostgreSQL and databases\n\n Disk5 36Go Hot Plug Ultra 3 (Ultra 160) SCSI 15 Ktpm\n\nContains: Postgresql log, backup partition.\n\nAlternately:\nPut all of the above on one *smart* RAID5 controller, with on-controller \nmemory and battery. Might give you better performance considering your disk \nsetup.\n\n> > how can I put indexes on a seperate disk ?\n\nMove the index object (use the oid2name package in /contrib to find the index) \nto a different location, and symlink it back to its original location. Make \nsure that you REINDEX at maintainence time, and don't drop and re-create the \nindex, as that will have the effect of moving it back to the original \nlocation.\n\n> >>> linux values:\n> >>> kernel.shmmni = 4096\n> >>> kernel.shmall = 32000000\n> >>> kernel.shmmax = 256000000\n> > I took a look a the performance archive, and it's not possible to find\n> > real info on how to set these 3 values.\n\nYeah. Personally, I just raise them until I stop getting error messages from \nPostgres. Perhaps someone on the list could speak to the danger of setting \nany of these values too high?\n\n> > I'm surprised that there's no spreadsheet to calculate those values.\n> > There are many threads, but it seems that no one is able to find a\n> > rule to define values.\n\nThat's correct. There is no rule, because there are too many variables, and \nthe value of many of those variables is a matter of opinion. As an \n*abbreviated* list:\n1) Your processors and RAM; 2) Your drive setup and speed; 3) the frequency \nof data reads; 4) the frequency of data writes; 5) the average complexity \nof queries; 6) use of database procedures (functions) for DML; 7) your \nmaintainence plan (e.g. how often can you run VACUUM FULL?); 8) the expected \ndata population of tables (how many rows, how many tables); 9) your ability \nto program for indexed vs. non-indexed queries; 10) do you do mass data \nloads? ; 11) is the server being used for any other hihg-memory/networked \napplications? ; 12) the expected number of concurrent users; 13) use of large \nobjects and/or large text fields; etc.\n\nAs a result, a set of values that work really well for me might crash your \ndatabase. It's an interactive process. Justin Clift started a project to \ncreate an automated interactive postgresql.conf tuner, one that would \nrepeatedly test the speed of different queries against your database, \novernight while you sleep. However, he didn't get very far and I haven't \nhad time to help.\n\n> >> 1. How many queries does my database handle per second or minute?\n> >> can't say now\n\nThis has a big influence on your desired sort_mem and shared_buffer settings. \nMake some estimates.\n\n> >>\n> >> 2. How big/complex are those queries?\n> >\n> > Not really complex and big as you can see\n\nOK, so nothing that would require you to really jack up your sort or shared \nmemory beyond levels suggested by other factors. 
However, you don't say how \nmany rows these queries usually return, which has a substantial effect on \ndesired sort_mem.\n\nA good, if time-consuming, technique for setting sort_mem is to move it up and \ndown (from, say 512 to 4096) seeing at what level your biggest meanest \nqueries slow down noticably ... and then set it to one level just above that.\n\n> > There are more insert/update than read, because I'm doing table\n> > synchronization\n> > from an SQL Server database. Every 5 minutes I'm looking for change\n> > in SQL Server\n> > Database.\n> > I've made some stats, and I found that without user acces, and only\n> > with the replications\n> > I get 2 millions query per day\n\nIn that case, making sure that your WAL files (the pg_xlog directory) is \nlocated on a seperate drive which *does nothing else* during normal operation \nis your paramount concern for performance. You'll also need to carefully \nprune your indexes down to only the ones you really need to avoid slowing \nyour inserts and updates.\n\n> >> 4. What large tables in my database get queried\n> >> simultaneously/together? why this questions ?\n\nIf you're not using RAID, it would affect whether you should even consider \nmoving a particular table or index to a seperate drive. If you have two \ntables, each of which is 3 million records, and they are quried joined \ntogether in 50% of data reads, then one of those tables is a good candidate \nfor moving to another drive.\n\nGood luck!\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Mon, 10 Feb 2003 09:35:14 -0800", "msg_from": "Josh Berkus <josh@agliodbs.com>", "msg_from_op": false, "msg_subject": "Re: how to configure my new server" }, { "msg_contents": "pgsql-performance-owner@postgresql.org wrote:\n> Philip,\n> \n>> \n>> someone could come back to first request ?\n>> \n> \n> Insistent, aren't you? ;-)\n> \n>>> Yes no Raid, but will could change soon\n> \n> Adding RAID 1+0 could simplify your job enormously. It would\n> prevent you from having to figure out what to put on each disk. If\n> it were my machine, and I knew that the database was more important\n> than the other services, I'd build it like this:\n> \n> Array 1: Disk 1: 18Go Ultra 3 (Ultra 160) SCSI 15 Ktpm\n> Disk2 : 18Go Hot Plug Ultra 3 (Ultra 160) SCSI 15\n> Ktpm \n> \n> Contains: Linux, Apache, Swap\n> \n> Array 2:\n> Di:sk3 18Go Hot Plug Ultra 3 (Ultra 160) SCSI 15 Ktpm\n> Disk4 18Go Hot Plug Ultra 3 (Ultra 160) SCSI 15 Ktpm\n> \n> Contains: PostgreSQL and databases\n> \n> Disk5 36Go Hot Plug Ultra 3 (Ultra 160) SCSI 15 Ktpm\n> \n> Contains: Postgresql log, backup partition.\n> \n> Alternately:\n> Put all of the above on one *smart* RAID5 controller, with\n> on-controller memory and battery. Might give you better performance\n> considering your disk setup.\n> \n>>> how can I put indexes on a seperate disk ?\n> \n> Move the index object (use the oid2name package in /contrib to find\n> the index) to a different location, and symlink it back to its\n> original location. Make sure that you REINDEX at maintainence time,\n> and don't drop and re-create the index, as that will have the effect\n> of moving it back to the original location.\n> \n>>>>> linux values:\n>>>>> kernel.shmmni = 4096\n>>>>> kernel.shmall = 32000000\n>>>>> kernel.shmmax = 256000000\n>>> I took a look a the performance archive, and it's not possible to\n>>> find real info on how to set these 3 values.\n> \n> Yeah. Personally, I just raise them until I stop getting error\n> messages from Postgres. 
Perhaps someone on the list could speak to\n> the danger of setting any of these values too high?\n> \n>>> I'm surprised that there's no spreadsheet to calculate those values.\n>>> There are many threads, but it seems that no one is able to find a\n>>> rule to define values.\n> \n> That's correct. There is no rule, because there are too many\n> variables, and the value of many of those variables is a matter of\n> opinion. As an \n> *abbreviated* list:\n> 1) Your processors and RAM; 2) Your drive setup and speed; 3) the\n> frequency of data reads; 4) the frequency of data writes; 5) the\n> average complexity of queries; 6) use of database procedures\n> (functions) for DML; 7) your maintainence plan (e.g. how often can\n> you run VACUUM FULL?); 8) the expected data population of tables\n> (how many rows, how many tables); 9) your ability to program for\n> indexed vs. non-indexed queries; 10) do you do mass data loads? ; \n> 11) is the server being used for any other hihg-memory/networked\n> applications? ; 12) the expected number of concurrent users; 13) use\n> of large objects and/or large text fields; etc. \n> \n> As a result, a set of values that work really well for me might crash\n> your database. It's an interactive process. Justin Clift started\n> a project to create an automated interactive postgresql.conf tuner,\n> one that would repeatedly test the speed of different queries against\n> your database, overnight while you sleep. However, he didn't get\n> very far and I haven't had time to help.\n> \n>>>> 1. How many queries does my database handle per second or minute?\n>>>> can't say now\n> \n> This has a big influence on your desired sort_mem and shared_buffer\n> settings. Make some estimates.\n> \n>>>> \n>>>> 2. How big/complex are those queries?\n>>> \n>>> Not really complex and big as you can see\n> \n> OK, so nothing that would require you to really jack up your sort or\n> shared memory beyond levels suggested by other factors. However, you\n> don't say how many rows these queries usually return, which has a\n> substantial effect on desired sort_mem.\n> \n> A good, if time-consuming, technique for setting sort_mem is to move\n> it up and down (from, say 512 to 4096) seeing at what level your\n> biggest meanest queries slow down noticably ... and then set it to\n> one level just above that. \n> \n>>> There are more insert/update than read, because I'm doing table\n>>> synchronization from an SQL Server database. Every 5 minutes I'm\n>>> looking for change in SQL Server Database.\n>>> I've made some stats, and I found that without user acces, and only\n>>> with the replications I get 2 millions query per day\n> \n> In that case, making sure that your WAL files (the pg_xlog directory)\n> is located on a seperate drive which *does nothing else* during\n> normal operation is your paramount concern for performance. You'll\n> also need to carefully prune your indexes down to only the ones you\n> really need to avoid slowing your inserts and updates.\n> \n>>>> 4. What large tables in my database get queried\n>>>> simultaneously/together? why this questions ?\n> \n> If you're not using RAID, it would affect whether you should even\n> consider moving a particular table or index to a seperate drive. 
If\n> you have two tables, each of which is 3 million records, and they are\n> quried joined together in 50% of data reads, then one of those tables\n> is a good candidate for moving to another drive.\n> \n> Good luck!\n\nthanks very much\n", "msg_date": "Mon, 10 Feb 2003 19:02:21 +0100", "msg_from": "\"philip johnson\" <philip.johnson@atempo.com>", "msg_from_op": true, "msg_subject": "Re: how to configure my new server" }, { "msg_contents": "Josh Berkus wrote:\n> > > how can I put indexes on a seperate disk ?\n> \n> Move the index object (use the oid2name package in /contrib to find the index) \n> to a different location, and symlink it back to its original location. Make \n> sure that you REINDEX at maintainence time, and don't drop and re-create the \n> index, as that will have the effect of moving it back to the original \n> location.\n\nI believe reindex will create a new file and hence remove the symlink,\nat least in 7.3.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Wed, 12 Feb 2003 00:52:08 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: how to configure my new server" }, { "msg_contents": "Bruce Momjian wrote:\n> Josh Berkus wrote:\n> \n>>>>how can I put indexes on a seperate disk ?\n>>\n>>Move the index object (use the oid2name package in /contrib to find the index) \n>>to a different location, and symlink it back to its original location. Make \n>>sure that you REINDEX at maintainence time, and don't drop and re-create the \n>>index, as that will have the effect of moving it back to the original \n>>location.\n> \n> I believe reindex will create a new file and hence remove the symlink,\n> at least in 7.3.\n\nYep, it's a complete pain. You can't stabily have indexes moved to \ndifferent drives, or even different partitions on the same drive (i.e. \nfastest disk area), as they're recreated in the default data location at \nindex time.\n\nThis was one of the things I was hoping would _somehow_ be solved with \nnamespaces, as in high transaction volume environments it would be nice \nto have frequently used indexes [be practical] on separate drives from \nthe data. At present, that's not very workable.\n\nRegards and best wishes,\n\nJustin Clift\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n\n", "msg_date": "Wed, 12 Feb 2003 22:51:56 +1100", "msg_from": "Justin Clift <justin@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: how to configure my new server" } ]
[ { "msg_contents": "I am interested in hearing from anyone that has used Postgres with the\ndb files existing on a SAN, fibre channel or iSCSI. I would be\ninterested in what hardware you used and what the performance ended up\nlike.\n\nThanking you in advance,\n\nKeith Bottner\nkbottner@istation.com\n\n\"Vegetarian - that's an old Indian word meaning 'lousy hunter.'\" - Andy\nRooney\n\n", "msg_date": "Wed, 5 Feb 2003 10:16:28 -0600", "msg_from": "\"Keith Bottner\" <kbottner@istation.com>", "msg_from_op": true, "msg_subject": "Postgres and SAN performance?" }, { "msg_contents": "Keith,\n\n> I am interested in hearing from anyone that has used Postgres with the\n> db files existing on a SAN, fibre channel or iSCSI. I would be\n> interested in what hardware you used and what the performance ended up\n> like.\n\nI suggest that you contact Zapatec directly: www.zapatec.com.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Wed, 5 Feb 2003 10:01:07 -0800", "msg_from": "Josh Berkus <josh@agliodbs.com>", "msg_from_op": false, "msg_subject": "Re: Postgres and SAN performance?" } ]
[ { "msg_contents": "I come from a Sybase background and have just started working with \nPostgre. Are there any books or tutorials that cover general performance \nissues for beginners? I don't just want to start creating databases and \ntables or writing mad queries without really understanding what is happening\nin the background. I know quite a few tips and tricks for Sybase but I don't \nthink that much of it is relevant on Postgres. I would like to know how \nthe query optimizer works, i.e. does the order of tables in the from \nclause make a difference in speed and how does indexing work on Postgres? \nAny help pointing me in the right direction would be appreciated.\n\nMany Thanks\nJeandre\n\n", "msg_date": "Fri, 7 Feb 2003 10:11:46 +0200 (SAST)", "msg_from": "jeandre@itvs.co.za", "msg_from_op": true, "msg_subject": "Performance" }, { "msg_contents": "On Friday 07 February 2003 01:41 pm, you wrote:\n> I come from a Sybase background and have just started working with\n> Postgre. Are there any books or tutorials that cover general performance\n> issues for beginners? I don't just want to start creating databases and\n> tables or writing mad queries without really understanding what is\n> happening in the background. I know quite a few tips and tricks for Sybase\n> but I don't think that much of it is relevant on Postgres. I would like to\n> know how the query optimizer works, i.e. does the order of tables in the\n> from clause make a difference in speed and how does indexing work on\n> Postgres? Any help pointing me in the right direction would be appreciated.\n\nReligously go thr. admin guide. and then a quick look thr. all SQL commands. \nMay take a day or two but believe me, it is worth that.\n\n Shridhar\n", "msg_date": "Fri, 7 Feb 2003 14:10:51 +0530", "msg_from": "Shridhar Daithankar <shridhar_daithankar@persistent.co.in>", "msg_from_op": false, "msg_subject": "Re: Performance" }, { "msg_contents": "jeandre@itvs.co.za writes:\n> ... does the order of tables in the from \n> clause make a difference in speed\n\nNo, it does not; except perhaps in corner cases where two tables have\nexactly the same statistics, so that the planner has no basis for\nchoosing one over the other. (This scenario could happen if you've\nnever ANALYZEd either, for example.)\n\n> and how does indexing work on Postgres? \n\nUh, it indexes. What's your question exactly?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 07 Feb 2003 09:55:14 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Performance " }, { "msg_contents": "On Fri, 7 Feb 2003, Tom Lane wrote:\n\n> Uh, it indexes. What's your question exactly?\n\nIn the documentation I saw something about different index types. In \nSybase you create an index and that is it. How do you choose what type of \nindex to create on a column in Postgres?\n\n", "msg_date": "Fri, 7 Feb 2003 17:04:52 +0200 (SAST)", "msg_from": "jeandre@itvs.co.za", "msg_from_op": true, "msg_subject": "Re: Performance " }, { "msg_contents": "jeandre@itvs.co.za writes:\n> In the documentation I saw something about different index types. In \n> Sybase you create an index and that is it. How do you choose what type of \n> index to create on a column in Postgres?\n\nThere's an optional clause in the CREATE INDEX command --- I think\n\"USING access_method\", but check the man page.\n\nIn practice, 99.44% of indexes are the default btree type, so you\nusually don't need to think about it. 
I'd only use a non-btree index\nif I needed to index non-scalar data (arrays, geometric types, etc);\nthe GIST and RTREE index types are designed for those.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 07 Feb 2003 10:13:58 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Performance " }, { "msg_contents": "\nOn Fri, 7 Feb 2003, Tom Lane wrote:\n\n> In practice, 99.44% of indexes are the default btree type, so you\n> usually don't need to think about it.\n\nThanks for clearing that up, and everyone else that has responded. With \nthe help I am getting from you guys I should have a good clue of using the \ndatabase to it's utmost optimum very soon.\n\nregards\nJeandre\n\n", "msg_date": "Fri, 7 Feb 2003 17:38:26 +0200 (SAST)", "msg_from": "jeandre@itvs.co.za", "msg_from_op": true, "msg_subject": "Re: Performance " }, { "msg_contents": "Jean,\n\nAlso check out the many performance-related articles at:\n\nhttp://techdocs.postgresql.org\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Fri, 7 Feb 2003 09:20:07 -0800", "msg_from": "Josh Berkus <josh@agliodbs.com>", "msg_from_op": false, "msg_subject": "Re: Performance" } ]
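For the optional "USING access_method" clause Tom points to, a minimal sketch (table and column names are invented): btree is what you get without saying anything, and a non-default method only needs to be spelled out for data such as the geometric types.

-- Ordinary case: btree is the default access method.
CREATE INDEX orders_placed_at_idx ON orders (placed_at);

-- Explicit access method, e.g. rtree over a box column:
CREATE INDEX regions_bounds_idx ON regions USING rtree (bounds);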
[ { "msg_contents": "hi, i have to set up a postgres database on a\nsun enterprise 10000 machine running on solaris 8.\nhas anyone hints for tuning the database for me?\n\nregards\n-Chris\n-- \nIf Bill Gates had a penny for every time Windows crashed...\n..oh wait, he does.\n\n", "msg_date": "Fri, 07 Feb 2003 11:26:47 +0100", "msg_from": "Christian Toepp <christian.toepp@vhm.de>", "msg_from_op": true, "msg_subject": "postgres on solaris 8" }, { "msg_contents": "Christian Toepp wrote:\n> hi, i have to set up a postgres database on a\n> sun enterprise 10000 machine running on solaris 8.\n> has anyone hints for tuning the database for me?\n\nHi Chris,\n\nSure.\n\nWe need some info first:\n\n + How much memory is in the E10k?\n + What's the disk/array configuration of the E10k?\n + How many CPU's? (just out of curiosity)\n + Will the E10k be doing anything other than PostgreSQL?\n + Which version of PostgreSQL? (please say 7.3.2)\n + What is the expected workload of the E10k?\n\nSorry for the not-entirely-easy questions, it's just that this will give \nus a good understanding of your configuration and what to recommend.\n\nRegards and best wishes,\n\nJustin Clift\n\n\n> regards\n> -Chris\n\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n- Indira Gandhi\n\n", "msg_date": "Fri, 07 Feb 2003 21:29:16 +1030", "msg_from": "Justin Clift <justin@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: postgres on solaris 8" } ]
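One quick way to gather some of the answers Justin asks for, straight from psql on the E10k (these statements only report settings and change nothing; SHOW ALL lists everything at once):

SELECT version();      -- confirms the exact PostgreSQL release
SHOW shared_buffers;   -- current shared-memory sizing
SHOW sort_mem;         -- current per-sort memory allowance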
[ { "msg_contents": "When defining data types, which is better text, varchar or char? I heard \nthat on Postgres text is better, but I know on Sybase char is more efficient. \nCan someone please tell me whether this statement is true and if so why?\n\nMany Thanks\nJeandre\n\n", "msg_date": "Tue, 11 Feb 2003 13:10:12 +0200 (SAST)", "msg_from": "jeandre@itvs.co.za", "msg_from_op": true, "msg_subject": "None" }, { "msg_contents": "On Tue, Feb 11, 2003 at 01:10:12PM +0200, jeandre@itvs.co.za wrote:\n> When defining data types, which is better text, varchar or char? I heard \n> that on Postgres text is better, but I know on Sybase char is more efficient. \n> Can someone please tell me whether this statement is true and if so why?\n\nAvoid char(n) unless you absolutely know you have a constant-length\nfield. Even then, you may get surprises.\n\nFor practical purposes, text is probably your best bet. For\ncompatibility, there is varchar(), which is the same thing as text. \nIf you need to limit the size, use varchar(n). Be aware that it is\nslightly slower, so don't use it unless your model demands it.\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<andrew@libertyrms.info> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Tue, 11 Feb 2003 08:29:27 -0500", "msg_from": "Andrew Sullivan <andrew@libertyrms.info>", "msg_from_op": false, "msg_subject": "Re: " } ]
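A minimal sketch of the three options being compared, with an invented table purely for illustration:

-- illustration only
CREATE TABLE notes (
    code  char(8),       -- fixed width, blank-padded out to 8 characters
    title varchar(200),  -- variable width with an upper limit
    body  text           -- variable width, no declared limit
);

INSERT INTO notes VALUES ('A1', 'first note', 'any amount of text');

Unless the data model genuinely calls for a fixed-width or length-limited column, text is the simplest choice, which is what the advice above boils down to.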
[ { "msg_contents": "I've tested all the win32 versions of postgres I can get my hands on\n(cygwin and not), and my general feeling is that they have problems with\ninsert performance with fsync() turned on, probably the fault of the os.\nSelect performance is not so much affected.\n\nThis is easily solved with transactions and other such things. Also\nPostgres benefits from pl just like oracle.\n\nMay I make a suggestion that maybe it is time to start thinking about\ntuning the default config file, IMHO its just a little bit too\nconservative, and its hurting you in benchmarks being run by idiots, but\nits still bad publicity. Any real database admin would know his test\nare synthetic and not meaningful without having to look at the #s.\n\nThis is irritating me so much that I am going to put together a\nbenchmark of my own, a real world one, on (publicly available) real\nworld data. Mysql is a real dog in a lot of situations. The FCC\npublishes a database of wireless transmitters that has tables with 10\nmillion records in it. I'll pump that into pg, run some benchmarks,\nreal world queries, and we'll see who the faster database *really* is.\nThis is just a publicity issue, that's all. Its still annoying though.\n\nI'll even run an open challenge to database admin to beat query\nperformance of postgres in such datasets, complex multi table joins,\netc. I'll even throw out the whole table locking issue and analyze\nsingle user performance.\n\nMerlin \n\n\n\n_____________\nHow much of the performance difference is from the RDBMS, from the\nmiddleware, and from the quality of implementation in the middleware.\n\nWhile I'm not surprised that the the cygwin version of PostgreSQL is\nslow, those results don't tell me anything about the quality of the\nmiddleware interface between PHP and PostgreSQL. Does anyone know if we\ncan rule out some of the performance loss by pinning it to bad\nmiddleware implementation for PostgreSQL?\n\n\nRegards,\n\n-- \nGreg Copeland <greg@copelandconsulting.net>\nCopeland Computer Consulting\n\n\n\n", "msg_date": "Tue, 11 Feb 2003 10:44:07 -0500", "msg_from": "\"Merlin Moncure\" <merlin.moncure@rcsonline.com>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] PostgreSQL Benchmarks" }, { "msg_contents": "\"Merlin Moncure\" <merlin.moncure@rcsonline.com> writes:\n> May I make a suggestion that maybe it is time to start thinking about\n> tuning the default config file, IMHO its just a little bit too\n> conservative,\n\nIt's a lot too conservative. I've been thinking for awhile that we\nshould adjust the defaults.\n\nThe original motivation for setting shared_buffers = 64 was so that\nPostgres would start out-of-the-box on machines where SHMMAX is 1 meg\n(64 buffers = 1/2 meg, leaving 1/2 meg for our other shared data\nstructures). At one time SHMMAX=1M was a pretty common stock kernel\nsetting. But our other data structures blew past the 1/2 meg mark\nsome time ago; at default settings the shmem request is now close to\n1.5 meg. So people with SHMMAX=1M have already got to twiddle their\npostgresql.conf settings, or preferably learn how to increase SHMMAX.\nThat means there is *no* defensible reason anymore for defaulting to\n64 buffers. \n\nWe could retarget to try to stay under SHMMAX=4M, which I think is\nthe next boundary that's significant in terms of real-world platforms\n(isn't that the default SHMMAX on some BSDen?). 
That would allow us\n350 or so shared_buffers, which is better, but still not really a\nserious choice for production work.\n\nWhat I would really like to do is set the default shared_buffers to\n1000. That would be 8 meg worth of shared buffer space. Coupled with\nmore-realistic settings for FSM size, we'd probably be talking a shared\nmemory request approaching 16 meg. This is not enough RAM to bother\nany modern machine from a performance standpoint, but there are probably\nquite a few platforms out there that would need an increase in their\nstock SHMMAX kernel setting before they'd take it.\n\nSo what this comes down to is making it harder for people to get\nPostgres running for the first time, versus making it more likely that\nthey'll see decent performance when they do get it running.\n\nIt's worth noting that increasing SHMMAX is not nearly as painful as\nit was back when these decisions were taken. Most people have moved\nto platforms where it doesn't even take a kernel rebuild, and we've\nacquired documentation that tells how to do it on all(?) our supported\nplatforms. So I think it might be okay to expect people to do it.\n\nThe alternative approach is to leave the settings where they are, and\nto try to put more emphasis in the documentation on the fact that the\nfactory-default settings produce a toy configuration that you *must*\nadjust upward for decent performance. But we've not had a lot of\nsuccess spreading that word, I think. With SHMMMAX too small, you\ndo at least get a pretty specific error message telling you so.\n\nComments?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 11 Feb 2003 11:20:14 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Changing the default configuration (was Re: [HACKERS] PostgreSQL\n\tBenchmarks)" }, { "msg_contents": "On Tue, 2003-02-11 at 10:20, Tom Lane wrote:\n> \"Merlin Moncure\" <merlin.moncure@rcsonline.com> writes:\n> > May I make a suggestion that maybe it is time to start thinking about\n> > tuning the default config file, IMHO its just a little bit too\n> > conservative,\n> \n> It's a lot too conservative. I've been thinking for awhile that we\n> should adjust the defaults.\n> \n> The original motivation for setting shared_buffers = 64 was so that\n> Postgres would start out-of-the-box on machines where SHMMAX is 1 meg\n> (64 buffers = 1/2 meg, leaving 1/2 meg for our other shared data\n> structures). At one time SHMMAX=1M was a pretty common stock kernel\n> setting. But our other data structures blew past the 1/2 meg mark\n> some time ago; at default settings the shmem request is now close to\n> 1.5 meg. So people with SHMMAX=1M have already got to twiddle their\n> postgresql.conf settings, or preferably learn how to increase SHMMAX.\n> That means there is *no* defensible reason anymore for defaulting to\n> 64 buffers. \n> \n> We could retarget to try to stay under SHMMAX=4M, which I think is\n> the next boundary that's significant in terms of real-world platforms\n> (isn't that the default SHMMAX on some BSDen?). That would allow us\n> 350 or so shared_buffers, which is better, but still not really a\n> serious choice for production work.\n> \n> What I would really like to do is set the default shared_buffers to\n> 1000. That would be 8 meg worth of shared buffer space. Coupled with\n> more-realistic settings for FSM size, we'd probably be talking a shared\n> memory request approaching 16 meg. 
This is not enough RAM to bother\n> any modern machine from a performance standpoint, but there are probably\n> quite a few platforms out there that would need an increase in their\n> stock SHMMAX kernel setting before they'd take it.\n> \n> So what this comes down to is making it harder for people to get\n> Postgres running for the first time, versus making it more likely that\n> they'll see decent performance when they do get it running.\n> \n> It's worth noting that increasing SHMMAX is not nearly as painful as\n> it was back when these decisions were taken. Most people have moved\n> to platforms where it doesn't even take a kernel rebuild, and we've\n> acquired documentation that tells how to do it on all(?) our supported\n> platforms. So I think it might be okay to expect people to do it.\n> \n> The alternative approach is to leave the settings where they are, and\n> to try to put more emphasis in the documentation on the fact that the\n> factory-default settings produce a toy configuration that you *must*\n> adjust upward for decent performance. But we've not had a lot of\n> success spreading that word, I think. With SHMMMAX too small, you\n> do at least get a pretty specific error message telling you so.\n> \n> Comments?\n\nI'd personally rather have people stumble trying to get PostgreSQL\nrunning, up front, rather than allowing the lowest common denominator\nmore easily run PostgreSQL only to be disappointed with it and move on.\n\nAfter it's all said and done, I would rather someone simply say, \"it's\nbeyond my skill set\", and attempt to get help or walk away. That seems\nbetter than them being able to run it and say, \"it's a dog\", spreading\nword-of-mouth as such after they left PostgreSQL behind. Worse yet,\nthose that do walk away and claim it performs horribly are probably\ndoing more harm to the PostgreSQL community than expecting someone to be\nable to install software ever can.\n\nNutshell:\n\t\"Easy to install but is horribly slow.\"\n\n\t\tor\n\n\t\"Took a couple of minutes to configure and it rocks!\"\n\n\n\nSeems fairly cut-n-dry to me. ;)\n\n\nRegards,\n\n-- \nGreg Copeland <greg@copelandconsulting.net>\nCopeland Computer Consulting\n\n", "msg_date": "11 Feb 2003 10:42:54 -0600", "msg_from": "Greg Copeland <greg@CopelandConsulting.Net>", "msg_from_op": false, "msg_subject": "Re: Changing the default configuration (was Re: " }, { "msg_contents": "On Tue, Feb 11, 2003 at 11:20:14AM -0500, Tom Lane wrote:\n...\n> We could retarget to try to stay under SHMMAX=4M, which I think is\n> the next boundary that's significant in terms of real-world platforms\n> (isn't that the default SHMMAX on some BSDen?).\n...\n\nAssuming 1 page = 4k, and number of pages is correct in GENERIC kernel configs,\nSHMMAX=4M for NetBSD (8M for i386, x86_64)\n\nCheers,\n\nPatrick\n", "msg_date": "Tue, 11 Feb 2003 16:44:34 +0000", "msg_from": "Patrick Welche <prlw1@newn.cam.ac.uk>", "msg_from_op": false, "msg_subject": "Re: Changing the default configuration (was Re: [HACKERS] PostgreSQL\n\tBenchmarks)" }, { "msg_contents": ">Nutshell:\n>\t\"Easy to install but is horribly slow.\"\n>\n>\t\tor\n>\n>\t\"Took a couple of minutes to configure and it rocks!\"\n\nSince when is it easy to install on win32?\nThe easiest way I know of is through Cygwin, then you have to worry about\ninstalling the IPC service (an getting the right version too!) 
I've\ninstalled versions 6.1 to 7.1, but I almost gave up on the windows install.\nAt least in 6.x you had very comprehensive installation guide with a TOC.\n\nVersus the competition which are you going to choose if you're a wanna-be\nDBA? The one with all he hoops to jump through, or the one that comes with a\nsetup.exe?\n\nNow I actually am in support of making it more aggressive, but it should\nwait until we too have a setup.exe for the native windows port. (Changing it\non *n*x platforms is of little benefit because most benchmarks seem to run\nit on w32 anyway :-( )\n\nJust my $.02. I reserve the right to be wrong.\n-J\n\n", "msg_date": "Tue, 11 Feb 2003 12:03:13 -0500", "msg_from": "Jason Hihn <jhihn@paytimepayroll.com>", "msg_from_op": false, "msg_subject": "Re: Changing the default configuration (was Re: " }, { "msg_contents": "Tom Lane wrote:\n<snip>\n> What I would really like to do is set the default shared_buffers to\n> 1000. That would be 8 meg worth of shared buffer space. Coupled with\n> more-realistic settings for FSM size, we'd probably be talking a shared\n> memory request approaching 16 meg. This is not enough RAM to bother\n> any modern machine from a performance standpoint, but there are probably\n> quite a few platforms out there that would need an increase in their\n> stock SHMMAX kernel setting before they'd take it.\n<snip>\n\nTotally agree with this. We really, really, really, really need to get \nthe default to a point where we have _decent_ default performance.\n\n> The alternative approach is to leave the settings where they are, and\n> to try to put more emphasis in the documentation on the fact that the\n> factory-default settings produce a toy configuration that you *must*\n> adjust upward for decent performance. But we've not had a lot of\n> success spreading that word, I think. With SHMMMAX too small, you\n> do at least get a pretty specific error message telling you so.\n> \n> Comments?\n\nYep.\n\nHere's an *unfortunately very common* scenario, that again \nunfortunately, a _seemingly large_ amount of people fall for.\n\na) Someone decides to \"benchmark\" database XYZ vs PostgreSQL vs other \ndatabases\n\nb) Said benchmarking person knows very little about PostgreSQL, so they \ninstall the RPM's, packages, or whatever, and \"it works\". Then they run \nwhatever benchmark they've downloaded, or designed, or whatever\n\nc) PostgreSQL, being practically unconfigured, runs at the pace of a \nslow, mostly-disabled snail.\n\nd) Said benchmarking person gets better performance from the other \ndatabases (also set to their default settings) and thinks \"PostgreSQL \nhas lots of features, and it's free, but it's Too Slow\".\n\nYes, this kind of testing shouldn't even _pretend_ to have any real \nworld credibility.\n\ne) Said benchmarking person tells everyone they know, _and_ everyone \nthey meet about their results. Some of them even create nice looking or \nprofesional looking web pages about it.\n\nf) People who know even _less_ than the benchmarking person hear about \nthe test, or read the result, and don't know any better than to believe \nit at face value. So, they install whatever system was recommended.\n\ng) Over time, the benchmarking person gets the hang of their chosen \ndatabase more and writes further articles about it, and doesn't \ngenerally look any further afield than it for say... a couple of years. \n By this time, they've already influenced a couple of thousand people \nin the non-optimal direction.\n\nh) Arrgh. 
With better defaults, our next release would _appear_ to be a \nlot faster to quite a few people, just because they have no idea about \ntuning.\n\nSo, as sad as this scenario is, better defaults will probably encourage \na lot more newbies to get involved, and that'll eventually translate \ninto a lot more experienced users, and a few more coders to assist. ;-)\n\nPersonally I'd be a bunch happier if we set the buffers so high that we \ndefinitely have decent performance, and the people that want to run \nPostgreSQL are forced to make the choice of either:\n\n 1) Adjust their system settings to allow PostgreSQL to run properly, or\n\n 2) Manually adjust the PostgreSQL settings to run memory-constrained\n\nThis way, PostgreSQL either runs decently, or they are _aware_ that \nthey're limiting it. That should cut down on the false benchmarks \n(hopefully).\n\n:-)\n\nRegards and best wishes,\n\nJustin Clift\n\n> \t\t\tregards, tom lane\n\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n\n", "msg_date": "Wed, 12 Feb 2003 04:08:22 +1100", "msg_from": "Justin Clift <justin@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: Changing the default configuration (was Re: " }, { "msg_contents": "A quick-'n'-dirty first step would be more comments in postgresql.conf. Most \nof the lines are commented out which would imply \"use the default\" but the \ndefault is not shown. (I realize this has the difficulty of defaults that \nchange depending upon how PostgreSQL was configured/compiled but perhaps \npostgresql.conf could be built by the make process based on the configuration \noptions.)\n\nIf postgresql.conf were commented with recommendations it would probably be \nall I need though perhaps a recommendation to edit that file should be \ndisplayed at the conclusion of \"make install\".\n\nCheers,\nSteve\n\n\nOn Tuesday 11 February 2003 8:20 am, Tom Lane wrote:\n> \"Merlin Moncure\" <merlin.moncure@rcsonline.com> writes:\n> > May I make a suggestion that maybe it is time to start thinking about\n> > tuning the default config file, IMHO its just a little bit too\n> > conservative,\n>\n> It's a lot too conservative. I've been thinking for awhile that we\n> should adjust the defaults.\n>\n> The original motivation for setting shared_buffers = 64 was so that\n> Postgres would start out-of-the-box on machines where SHMMAX is 1 meg\n> (64 buffers = 1/2 meg, leaving 1/2 meg for our other shared data\n> structures). At one time SHMMAX=1M was a pretty common stock kernel\n> setting. But our other data structures blew past the 1/2 meg mark\n> some time ago; at default settings the shmem request is now close to\n> 1.5 meg. So people with SHMMAX=1M have already got to twiddle their\n> postgresql.conf settings, or preferably learn how to increase SHMMAX.\n> That means there is *no* defensible reason anymore for defaulting to\n> 64 buffers.\n>\n> We could retarget to try to stay under SHMMAX=4M, which I think is\n> the next boundary that's significant in terms of real-world platforms\n> (isn't that the default SHMMAX on some BSDen?). That would allow us\n> 350 or so shared_buffers, which is better, but still not really a\n> serious choice for production work.\n>\n> What I would really like to do is set the default shared_buffers to\n> 1000. That would be 8 meg worth of shared buffer space. 
Coupled with\n> more-realistic settings for FSM size, we'd probably be talking a shared\n> memory request approaching 16 meg. This is not enough RAM to bother\n> any modern machine from a performance standpoint, but there are probably\n> quite a few platforms out there that would need an increase in their\n> stock SHMMAX kernel setting before they'd take it.\n>\n> So what this comes down to is making it harder for people to get\n> Postgres running for the first time, versus making it more likely that\n> they'll see decent performance when they do get it running.\n>\n> It's worth noting that increasing SHMMAX is not nearly as painful as\n> it was back when these decisions were taken. Most people have moved\n> to platforms where it doesn't even take a kernel rebuild, and we've\n> acquired documentation that tells how to do it on all(?) our supported\n> platforms. So I think it might be okay to expect people to do it.\n>\n> The alternative approach is to leave the settings where they are, and\n> to try to put more emphasis in the documentation on the fact that the\n> factory-default settings produce a toy configuration that you *must*\n> adjust upward for decent performance. But we've not had a lot of\n> success spreading that word, I think. With SHMMMAX too small, you\n> do at least get a pretty specific error message telling you so.\n>\n> Comments?\n>\n> \t\t\tregards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n", "msg_date": "Tue, 11 Feb 2003 09:10:48 -0800", "msg_from": "Steve Crawford <scrawford@pinpointresearch.com>", "msg_from_op": false, "msg_subject": "Re: Changing the default configuration (was Re: [HACKERS] PostgreSQL\n\tBenchmarks)" }, { "msg_contents": "Tom Lane wrote:\n\n>\"Merlin Moncure\" <merlin.moncure@rcsonline.com> writes:\n> \n>\n>>May I make a suggestion that maybe it is time to start thinking about\n>>tuning the default config file, IMHO its just a little bit too\n>>conservative,\n>> \n>>\n>\n>It's a lot too conservative. I've been thinking for awhile that we\n>should adjust the defaults.\n>\n> \n>\nOne of the things I did on my Windows install was to have a number of \ndefault configuration files, postgresql.conf.small, \npostgresql.conf.medium, postgresql.conf.large.\n\nRather than choose one, in the \"initdb\" script, ask for or determine the \nmount of shared memory, memory, etc.\n\nAnother pet peeve I have is forcing the configuration files to be in the \ndatabase directory. We had this argument in 7.1 days, and I submitted a \npatch that allowed a configuration file to be specified as a command \nline parameter. One of the things that Oracle does better is separating \nthe \"configuration\" from the data.\n\nIt is an easy patch to allow PostgreSQL to use a separate configuration \ndirectory, and specify the data directory within the configuration file \n(The way any logical application works), and, NO, symlinks are not a \nsolution, they are a kludge.\n\n\n\n\n\n\n\n\nTom Lane wrote:\n\n\"Merlin Moncure\" <merlin.moncure@rcsonline.com> writes:\n \n\nMay I make a suggestion that maybe it is time to start thinking about\ntuning the default config file, IMHO its just a little bit too\nconservative,\n \n\n\nIt's a lot too conservative. 
I've been thinking for awhile that we\nshould adjust the defaults.\n\n \n\nOne of the things I did on my Windows install was to have a number of default\nconfiguration files, postgresql.conf.small, postgresql.conf.medium, postgresql.conf.large.\n\nRather than choose one, in the \"initdb\" script, ask for or determine the\nmount of shared memory, memory, etc.\n\nAnother pet peeve I have is forcing the configuration files to be in the\ndatabase directory. We had this argument in 7.1 days, and I submitted a patch\nthat allowed a configuration file to be specified as a command line parameter.\nOne of the things that Oracle does better is separating the \"configuration\"\nfrom the data. \n\nIt is an easy patch to allow PostgreSQL to use a separate configuration directory,\nand specify the data directory within the configuration file (The way any\nlogical application works), and, NO, symlinks are not a solution, they are\na kludge.", "msg_date": "Tue, 11 Feb 2003 12:12:04 -0500", "msg_from": "mlw <pgsql@mohawksoft.com>", "msg_from_op": false, "msg_subject": "Re: Changing the default configuration (was Re: " }, { "msg_contents": "Justin Clift <justin@postgresql.org> writes:\n> Personally I'd be a bunch happier if we set the buffers so high that we \n> definitely have decent performance, and the people that want to run \n> PostgreSQL are forced to make the choice of either:\n> 1) Adjust their system settings to allow PostgreSQL to run properly, or\n> 2) Manually adjust the PostgreSQL settings to run memory-constrained\n> This way, PostgreSQL either runs decently, or they are _aware_ that \n> they're limiting it.\n\nYeah, that is the subtext here. If you can't increase SHMMAX then you\ncan always trim the postgresql.conf parameters --- but theoretically,\nat least, you should then have a clue that you're running a\nbadly-configured setup ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 11 Feb 2003 12:18:10 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Changing the default configuration (was Re: [HACKERS] PostgreSQL\n\tBenchmarks)" }, { "msg_contents": "Tom, Justin,\n\n> > What I would really like to do is set the default shared_buffers to\n> > 1000. That would be 8 meg worth of shared buffer space. Coupled with\n> > more-realistic settings for FSM size, we'd probably be talking a shared\n> > memory request approaching 16 meg. This is not enough RAM to bother\n> > any modern machine from a performance standpoint, but there are probably\n> > quite a few platforms out there that would need an increase in their\n> > stock SHMMAX kernel setting before they'd take it.\n\nWhat if we supplied several sample .conf files, and let the user choose which \nto copy into the database directory? We could have a \"high read \nperformance\" profile, and a \"transaction database\" profile, and a \n\"workstation\" profile, and a \"low impact\" profile. We could even supply a \nPerl script that would adjust SHMMAX and SHMMALL on platforms where this can \nbe done from the command line.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Tue, 11 Feb 2003 09:18:48 -0800", "msg_from": "Josh Berkus <josh@agliodbs.com>", "msg_from_op": false, "msg_subject": "Re: Changing the default configuration (was Re: " }, { "msg_contents": "On Tue, 2003-02-11 at 12:10, Steve Crawford wrote:\n> A quick-'n'-dirty first step would be more comments in postgresql.conf. 
Most \n\nThis will not solve the issue with the large number of users who have no\ninterest in looking at the config file -- but are interested in\npublishing their results.\n\n-- \nRod Taylor <rbt@rbt.ca>\n\nPGP Key: http://www.rbt.ca/rbtpub.asc", "msg_date": "11 Feb 2003 12:21:13 -0500", "msg_from": "Rod Taylor <rbt@rbt.ca>", "msg_from_op": false, "msg_subject": "Re: Changing the default configuration (was Re: " }, { "msg_contents": "\n\nGreg Copeland wrote:\n\n> \n>\n>I'd personally rather have people stumble trying to get PostgreSQL\n>running, up front, rather than allowing the lowest common denominator\n>more easily run PostgreSQL only to be disappointed with it and move on.\n>\n>After it's all said and done, I would rather someone simply say, \"it's\n>beyond my skill set\", and attempt to get help or walk away. That seems\n>better than them being able to run it and say, \"it's a dog\", spreading\n>word-of-mouth as such after they left PostgreSQL behind. Worse yet,\n>those that do walk away and claim it performs horribly are probably\n>doing more harm to the PostgreSQL community than expecting someone to be\n>able to install software ever can.\n>\n<RANT>\n\nAnd that my friends is why PostgreSQL is still relatively obscure.\n\nThis attitude sucks. If you want a product to be used, you must put the \neffort into making it usable.\n\nIt is a no-brainer to make the default configuration file suitable for \nthe majority of users. It is lunacy to create a default configuration \nwhich provides poor performance for over 90% of the users, but which \nallows the lowest common denominator to work.\n\nA product must not perform poorly out of the box, period. A good product \nmanager would choose one of two possible configurations, (a) a high \nspeed fairly optimized system from the get-go, or (b) it does not run \nunless you create the configuration file. Option (c) out of the box it \nworks like crap, is not an option.\n\nThis is why open source gets such a bad reputation. Outright contempt \nfor the user who may not know the product as well as those developing \nit. This attitude really sucks and it turns people off. We want people \nto use PostgreSQL, to do that we must make PostgreSQL usable. Usability \nIS important.\n</RANT>\n\n\n", "msg_date": "Tue, 11 Feb 2003 12:23:42 -0500", "msg_from": "mlw <pgsql@mohawksoft.com>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Changing the default configuration (was Re: " }, { "msg_contents": "Josh Berkus <josh@agliodbs.com> writes:\n> What if we supplied several sample .conf files, and let the user choose which\n> to copy into the database directory? We could have a \"high read \n> performance\" profile, and a \"transaction database\" profile, and a \n> \"workstation\" profile, and a \"low impact\" profile.\n\nUh ... do we have a basis for recommending any particular sets of\nparameters for these different scenarios? This could be a good idea\nin the abstract, but I'm not sure I know enough to fill in the details.\n\nA lower-tech way to accomplish the same result is to document these\nalternatives in postgresql.conf comments and encourage people to review\nthat file, as Steve Crawford just suggested. 
But first we need the raw\nknowledge.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 11 Feb 2003 12:26:05 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Changing the default configuration (was Re: " }, { "msg_contents": "> What if we supplied several sample .conf files, and let the user choose\n> which to copy into the database directory? We could have a \"high read\n\nExactly my first thought when reading the proposal for a setting suited for \nperformance tests. \n\n> performance\" profile, and a \"transaction database\" profile, and a\n> \"workstation\" profile, and a \"low impact\" profile. We could even supply a\n\nAnd a .benchmark profile :-)\n\n> Perl script that would adjust SHMMAX and SHMMALL on platforms where this\n> can be done from the command line.\n\nOr maybe configuration could be adjusted with ./configure if SHMMAX can be \ndetermined at that point?\n\n-- \nKaare Rasmussen --Linux, spil,-- Tlf: 3816 2582\nKaki Data tshirts, merchandize Fax: 3816 2501\nHowitzvej 75 Åben 12.00-18.00 Email: kar@kakidata.dk\n2000 Frederiksberg Lørdag 12.00-16.00 Web: www.suse.dk\n", "msg_date": "Tue, 11 Feb 2003 18:26:46 +0100", "msg_from": "Kaare Rasmussen <kar@kakidata.dk>", "msg_from_op": false, "msg_subject": "Re: Changing the default configuration (was Re: [pgsql-advocacy]" }, { "msg_contents": "Josh Berkus wrote:\n> Tom, Justin,\n<snip>\n> \n> What if we supplied several sample .conf files, and let the user choose which \n> to copy into the database directory? We could have a \"high read \n> performance\" profile, and a \"transaction database\" profile, and a \n> \"workstation\" profile, and a \"low impact\" profile. We could even supply a \n> Perl script that would adjust SHMMAX and SHMMALL on platforms where this can \n> be done from the command line.\n\n\nThis might have value as the next step in the process of:\n\na) Are we going to have better defaults?\n\nor\n\nb) Let's stick with the current approach.\n\n\nIf we decide to go with better (changed) defaults, we may also be able \nto figure out a way of having profiles that could optionally be chosen from.\n\nAs a longer term thought, it would be nice if the profiles weren't just \nhard-coded example files, but more of:\n\npg_autotune --setprofile=xxx\n\nOr similar utility, and it did all the work. Named profiles being one \ncapability, and other tuning measurements (i.e. cpu costings, disk \nperformance profiles, etc) being the others.\n\nRegards and best wishes,\n\nJustin Clift\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n\n", "msg_date": "Wed, 12 Feb 2003 04:29:19 +1100", "msg_from": "Justin Clift <justin@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: Changing the default configuration (was Re: " }, { "msg_contents": "Tom Lane wrote:\n<snip>\n> Uh ... do we have a basis for recommending any particular sets of\n> parameters for these different scenarios? This could be a good idea\n> in the abstract, but I'm not sure I know enough to fill in the details.\n> \n> A lower-tech way to accomplish the same result is to document these\n> alternatives in postgresql.conf comments and encourage people to review\n> that file, as Steve Crawford just suggested. 
But first we need the raw\n> knowledge.\n\nWithout too much hacking around, you could pretty easily adapt the \npg_autotune code to do proper profiles of a system with different settings.\n\ni.e. increment one setting at a time, run pgbench on it with some decent \namount of transactions and users, stuff the results into a different \ndatabase. Aggregate data over time kind of thing. Let it run for a \nweek, etc.\n\nIf it's helpful, there's a 100% spare Althon 1.6Ghz box around with \n(choose your OS) + Adaptec 29160 + 512MB RAM + 2 x 9GB Seagate Cheetah \n10k rpm drives hanging around. No stress to set that up and let it run \nany long terms tests you'd like plus send back results.\n\nRegards and best wishes,\n\nJustin Clift\n\n> \t\t\tregards, tom lane\n\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n\n", "msg_date": "Wed, 12 Feb 2003 04:34:13 +1100", "msg_from": "Justin Clift <justin@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: Changing the default configuration (was Re: " }, { "msg_contents": "On Tue, 2003-02-11 at 11:23, mlw wrote:\n> Greg Copeland wrote:\n> \n> > \n> >\n> >I'd personally rather have people stumble trying to get PostgreSQL\n> >running, up front, rather than allowing the lowest common denominator\n> >more easily run PostgreSQL only to be disappointed with it and move on.\n> >\n> >After it's all said and done, I would rather someone simply say, \"it's\n> >beyond my skill set\", and attempt to get help or walk away. That seems\n> >better than them being able to run it and say, \"it's a dog\", spreading\n> >word-of-mouth as such after they left PostgreSQL behind. Worse yet,\n> >those that do walk away and claim it performs horribly are probably\n> >doing more harm to the PostgreSQL community than expecting someone to be\n> >able to install software ever can.\n> >\n> <RANT>\n> \n> And that my friends is why PostgreSQL is still relatively obscure.\n> \n> This attitude sucks. If you want a product to be used, you must put the \n> effort into making it usable.\n> \n\n\nAh..okay....\n\n\n> It is a no-brainer to make the default configuration file suitable for \n> the majority of users. It is lunacy to create a default configuration \n> which provides poor performance for over 90% of the users, but which \n> allows the lowest common denominator to work.\n> \n\nI think you read something into my email which I did not imply. I'm\ncertainly not advocating a default configuration file assuming 512M of\nshare memory or some such insane value.\n\nBasically, you're arguing that they should keep doing exactly what they\nare doing. It's currently known to be causing problems and propagating\nthe misconception that PostgreSQL is unable to perform under any\ncircumstance. I'm arguing that who cares if 5% of the potential user\nbase has to learn to properly install software. Either they'll read and\nlearn, ask for assistance, or walk away. All of which are better than\nJonny-come-lately offering up a meaningless benchmark which others are\nhappy to eat with rather large spoons.\n\n\n> A product must not perform poorly out of the box, period. A good product \n> manager would choose one of two possible configurations, (a) a high \n> speed fairly optimized system from the get-go, or (b) it does not run \n> unless you create the configuration file. 
Option (c) out of the box it \n> works like crap, is not an option.\n> \n\nThat's the problem. Option (c) is what we currently have. I'm amazed\nthat you even have a problem with option (a), as that's what I'm\nsuggesting. The problem is, potentially for some minority of users, it\nmay not run out of the box. As such, I'm more than happy with this\nsituation than 90% of the user base being stuck with a crappy default\nconfiguration.\n\nOddly enough, your option (b) is even worse than what you are ranting at\nme about. Go figure.\n\n> This is why open source gets such a bad reputation. Outright contempt \n> for the user who may not know the product as well as those developing \n> it. This attitude really sucks and it turns people off. We want people \n> to use PostgreSQL, to do that we must make PostgreSQL usable. Usability \n> IS important.\n> </RANT>\n\n\nThere is no contempt here. Clearly you've read your own bias into this\nthread. If you go back and re-read my posting, I think it's VERY clear\nthat it's entirely about usability.\n\n\nRegards,\n\n-- \nGreg Copeland <greg@copelandconsulting.net>\nCopeland Computer Consulting\n\n", "msg_date": "11 Feb 2003 11:36:17 -0600", "msg_from": "Greg Copeland <greg@CopelandConsulting.Net>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Changing the default configuration (was Re:" }, { "msg_contents": "FYI, my stock linux 2.4.19 gentoo kernel has:\nkernel.shmall = 2097152\nkernel.shmmax = 33554432\n\nsysctl -a\n\nSo it appears that linux at least is way above your 8 meg point, unless I\nam missing something.\n\n\n", "msg_date": "Tue, 11 Feb 2003 09:38:18 -0800 (PST)", "msg_from": "\"Jon Griffin\" <jon@jongriffin.com>", "msg_from_op": false, "msg_subject": "Re: Changing the default configuration (was Re: [pgsql-advocacy]\n\tPostgreSQL Benchmarks)" }, { "msg_contents": "Tom, Justin,\n\n> > Uh ... do we have a basis for recommending any particular sets of\n> > parameters for these different scenarios? This could be a good idea\n> > in the abstract, but I'm not sure I know enough to fill in the details.\n\nSure. \nMostly-Read database, few users, good hardware, complex queries:\n\t= High shared buffers and sort mem, high geqo and join collapse thresholds,\n\t\tmoderate fsm settings, defaults for WAL.\nSame as above with many users and simple queries (webserver) =\n\tsame as above, except lower sort mem and higher connection limit\nHigh-Transaction Database =\n\tModerate shared buffers and sort mem, high FSM settings, increase WAL files \nand buffers.\nWorkstation =\n\tModerate to low shared buffers and sort mem, moderate FSM, defaults for WAL, \netc.\nLow-Impact server = current defaults, more or less.\n\nWhile none of these settings will be *perfect* for anyone, they will be \nconsiderably better than what's shipping with postgresql. And, based on my \n\"Learning Perl\" knowledge, I'm pretty sure I could write the program. \n\nAll we'd need to do is argue out, on the PERFORMANCE list, what's a good value \nfor each profile. That's the tough part. The Perl script is easy.\n\n> > A lower-tech way to accomplish the same result is to document these\n> > alternatives in postgresql.conf comments and encourage people to review\n> > that file, as Steve Crawford just suggested. But first we need the raw\n> > knowledge.\n\nThat's also not a bad approach ... the CONF file should be more heavily \ncommented, period, regardless of what approach we take. 
I volunteer to work \non this with other participants.\n\n> Without too much hacking around, you could pretty easily adapt the\n> pg_autotune code to do proper profiles of a system with different settings.\n\nNo offense, Justin, but I don't know anyone else who's gotten your pg_autotune \nscript to run other than you. And pg_bench has not been useful performance \nmeasure for any real database server I have worked on so far.\n\nI'd be glad to help improve pg_autotune, with two caveats:\n1) We will still need to figure out the \"profiles\" above so that we have \ndecent starting values.\n2) I suggest that we do pg_autotune in Perl or Python or another higher-level \nlanguage. This would enable several performance buffs who don't do C to \ncontribute to it, and a performance-tuning script is a higher-level-language \nsort of function, anyway.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Tue, 11 Feb 2003 09:48:39 -0800", "msg_from": "Josh Berkus <josh@agliodbs.com>", "msg_from_op": false, "msg_subject": "Re: Changing the default configuration (was Re: " }, { "msg_contents": "Justin Clift <justin@postgresql.org> writes:\n> Tom Lane wrote:\n>> Uh ... do we have a basis for recommending any particular sets of\n>> parameters for these different scenarios? This could be a good idea\n>> in the abstract, but I'm not sure I know enough to fill in the details.\n\n> Without too much hacking around, you could pretty easily adapt the \n> pg_autotune code to do proper profiles of a system with different settings.\n\n> i.e. increment one setting at a time, run pgbench on it with some decent \n> amount of transactions and users, stuff the results into a different \n> database.\n\nIf I thought that pgbench was representative of anything, or even\ncapable of reliably producing repeatable numbers, then I might subscribe\nto results derived this way. But I have little or no confidence in\npgbench. Certainly I don't see how you'd use it to produce\nrecommendations for a range of application scenarios, when it's only\none very narrow scenario itself.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 11 Feb 2003 12:52:55 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Changing the default configuration (was Re: " }, { "msg_contents": "mlw <pgsql@mohawksoft.com> writes:\n> This attitude sucks. If you want a product to be used, you must put the \n> effort into making it usable.\n> [snip]\n\nAFAICT, you are flaming Greg for recommending the exact same thing you\nare recommending. Please calm down and read again.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 11 Feb 2003 12:54:37 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Changing the default configuration (was Re: " }, { "msg_contents": "\"Jon Griffin\" <jon@jongriffin.com> writes:\n> So it appears that linux at least is way above your 8 meg point, unless I\n> am missing something.\n\nYeah, AFAIK all recent Linuxen are well above the range of parameters\nthat I was suggesting (and even if they weren't, Linux is particularly\neasy to change the SHMMAX setting on). It's other Unixoid platforms\nthat are likely to have a problem. Particularly the ones where you\nhave to rebuild the kernel to change SHMMAX; people may be afraid to\ndo that.\n\nDoes anyone know whether cygwin has a setting comparable to SHMMAX,\nand if so what is its default value? 
How about the upcoming native\nWindows port --- any issues there?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 11 Feb 2003 13:01:13 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Changing the default configuration (was Re: [pgsql-advocacy]\n\tPostgreSQL Benchmarks)" }, { "msg_contents": "On Tue, 2003-02-11 at 12:08, Justin Clift wrote:\n> b) Said benchmarking person knows very little about PostgreSQL, so they \n> install the RPM's, packages, or whatever, and \"it works\". Then they run \n> whatever benchmark they've downloaded, or designed, or whatever\n> \n\nOut of curiosity, how feasible is it for the rpm/package/deb/exe\nmaintainers to modify their supplied postgresql.conf settings when\nbuilding said distribution? AFAIK the minimum default SHHMAX setting on\nRed Hat 8.0 is 32MB, seems like bumping shared buffers to work with that\namount would be acceptable inside the 8.0 rpm's.\n\nRobert Treat\n\n\n\n", "msg_date": "11 Feb 2003 13:03:45 -0500", "msg_from": "Robert Treat <xzilla@users.sourceforge.net>", "msg_from_op": false, "msg_subject": "Re: Changing the default configuration (was Re: [pgsql-advocacy]" }, { "msg_contents": "Apology\n\nAfter Mark calms down and, in fact, sees that Greg was saying the right \nthing after all, chagrin is the only word.\n\nI'm sorry.\n\n\nGreg Copeland wrote:\n\n>On Tue, 2003-02-11 at 11:23, mlw wrote:\n> \n>\n>>Greg Copeland wrote:\n>>\n>> \n>>\n>>> \n>>>\n>>>I'd personally rather have people stumble trying to get PostgreSQL\n>>>running, up front, rather than allowing the lowest common denominator\n>>>more easily run PostgreSQL only to be disappointed with it and move on.\n>>>\n>>>After it's all said and done, I would rather someone simply say, \"it's\n>>>beyond my skill set\", and attempt to get help or walk away. That seems\n>>>better than them being able to run it and say, \"it's a dog\", spreading\n>>>word-of-mouth as such after they left PostgreSQL behind. Worse yet,\n>>>those that do walk away and claim it performs horribly are probably\n>>>doing more harm to the PostgreSQL community than expecting someone to be\n>>>able to install software ever can.\n>>>\n>>> \n>>>\n>><RANT>\n>>\n>>And that my friends is why PostgreSQL is still relatively obscure.\n>>\n>>This attitude sucks. If you want a product to be used, you must put the \n>>effort into making it usable.\n>>\n>> \n>>\n>\n>\n>Ah..okay....\n>\n>\n> \n>\n>>It is a no-brainer to make the default configuration file suitable for \n>>the majority of users. It is lunacy to create a default configuration \n>>which provides poor performance for over 90% of the users, but which \n>>allows the lowest common denominator to work.\n>>\n>> \n>>\n>\n>I think you read something into my email which I did not imply. I'm\n>certainly not advocating a default configuration file assuming 512M of\n>share memory or some such insane value.\n>\n>Basically, you're arguing that they should keep doing exactly what they\n>are doing. It's currently known to be causing problems and propagating\n>the misconception that PostgreSQL is unable to perform under any\n>circumstance. I'm arguing that who cares if 5% of the potential user\n>base has to learn to properly install software. Either they'll read and\n>learn, ask for assistance, or walk away. All of which are better than\n>Jonny-come-lately offering up a meaningless benchmark which others are\n>happy to eat with rather large spoons.\n>\n>\n> \n>\n>>A product must not perform poorly out of the box, period. 
A good product \n>>manager would choose one of two possible configurations, (a) a high \n>>speed fairly optimized system from the get-go, or (b) it does not run \n>>unless you create the configuration file. Option (c) out of the box it \n>>works like crap, is not an option.\n>>\n>> \n>>\n>\n>That's the problem. Option (c) is what we currently have. I'm amazed\n>that you even have a problem with option (a), as that's what I'm\n>suggesting. The problem is, potentially for some minority of users, it\n>may not run out of the box. As such, I'm more than happy with this\n>situation than 90% of the user base being stuck with a crappy default\n>configuration.\n>\n>Oddly enough, your option (b) is even worse than what you are ranting at\n>me about. Go figure.\n>\n> \n>\n>>This is why open source gets such a bad reputation. Outright contempt \n>>for the user who may not know the product as well as those developing \n>>it. This attitude really sucks and it turns people off. We want people \n>>to use PostgreSQL, to do that we must make PostgreSQL usable. Usability \n>>IS important.\n>></RANT>\n>> \n>>\n>\n>\n>There is no contempt here. Clearly you've read your own bias into this\n>thread. If you go back and re-read my posting, I think it's VERY clear\n>that it's entirely about usability.\n>\n>\n>Regards,\n>\n> \n>\n\n\n\n\n\n\n\nApology\n\nAfter Mark calms down and, in fact, sees that Greg was saying the right thing\nafter all, chagrin is the only word.\n\nI'm sorry.\n\n\nGreg Copeland wrote:\n\nOn Tue, 2003-02-11 at 11:23, mlw wrote:\n \n\nGreg Copeland wrote:\n\n \n\n \n\nI'd personally rather have people stumble trying to get PostgreSQL\nrunning, up front, rather than allowing the lowest common denominator\nmore easily run PostgreSQL only to be disappointed with it and move on.\n\nAfter it's all said and done, I would rather someone simply say, \"it's\nbeyond my skill set\", and attempt to get help or walk away. That seems\nbetter than them being able to run it and say, \"it's a dog\", spreading\nword-of-mouth as such after they left PostgreSQL behind. Worse yet,\nthose that do walk away and claim it performs horribly are probably\ndoing more harm to the PostgreSQL community than expecting someone to be\nable to install software ever can.\n\n \n\n<RANT>\n\nAnd that my friends is why PostgreSQL is still relatively obscure.\n\nThis attitude sucks. If you want a product to be used, you must put the \neffort into making it usable.\n\n \n\n\n\nAh..okay....\n\n\n \n\nIt is a no-brainer to make the default configuration file suitable for \nthe majority of users. It is lunacy to create a default configuration \nwhich provides poor performance for over 90% of the users, but which \nallows the lowest common denominator to work.\n\n \n\n\nI think you read something into my email which I did not imply. I'm\ncertainly not advocating a default configuration file assuming 512M of\nshare memory or some such insane value.\n\nBasically, you're arguing that they should keep doing exactly what they\nare doing. It's currently known to be causing problems and propagating\nthe misconception that PostgreSQL is unable to perform under any\ncircumstance. I'm arguing that who cares if 5% of the potential user\nbase has to learn to properly install software. Either they'll read and\nlearn, ask for assistance, or walk away. 
All of which are better than\nJonny-come-lately offering up a meaningless benchmark which others are\nhappy to eat with rather large spoons.\n\n\n \n\nA product must not perform poorly out of the box, period. A good product \nmanager would choose one of two possible configurations, (a) a high \nspeed fairly optimized system from the get-go, or (b) it does not run \nunless you create the configuration file. Option (c) out of the box it \nworks like crap, is not an option.\n\n \n\n\nThat's the problem. Option (c) is what we currently have. I'm amazed\nthat you even have a problem with option (a), as that's what I'm\nsuggesting. The problem is, potentially for some minority of users, it\nmay not run out of the box. As such, I'm more than happy with this\nsituation than 90% of the user base being stuck with a crappy default\nconfiguration.\n\nOddly enough, your option (b) is even worse than what you are ranting at\nme about. Go figure.\n\n \n\nThis is why open source gets such a bad reputation. Outright contempt \nfor the user who may not know the product as well as those developing \nit. This attitude really sucks and it turns people off. We want people \nto use PostgreSQL, to do that we must make PostgreSQL usable. Usability \nIS important.\n</RANT>\n \n\n\n\nThere is no contempt here. Clearly you've read your own bias into this\nthread. If you go back and re-read my posting, I think it's VERY clear\nthat it's entirely about usability.\n\n\nRegards,", "msg_date": "Tue, 11 Feb 2003 13:27:19 -0500", "msg_from": "mlw <pgsql@mohawksoft.com>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Changing the default configuration (was Re:\t" }, { "msg_contents": "My other pet peeve is the default max connections setting. This should be \nhigher if possible, but of course, there's always the possibility of \nrunning out of file descriptors.\n\nApache has a default max children of 150, and if using PHP or another \nlanguage that runs as an apache module, it is quite possible to use up all \nthe pgsql backend slots before using up all the apache child slots.\n\nIs setting the max connections to something like 200 reasonable, or likely \nto cause too many problems?\n\n", "msg_date": "Tue, 11 Feb 2003 11:34:32 -0700 (MST)", "msg_from": "\"scott.marlowe\" <scott.marlowe@ihs.com>", "msg_from_op": false, "msg_subject": "Re: Changing the default configuration (was Re: [pgsql-advocacy]" }, { "msg_contents": "On Tue, 2003-02-11 at 13:01, Tom Lane wrote:\n> \"Jon Griffin\" <jon@jongriffin.com> writes:\n> > So it appears that linux at least is way above your 8 meg point, unless I\n> > am missing something.\n> \n> Yeah, AFAIK all recent Linuxen are well above the range of parameters\n> that I was suggesting (and even if they weren't, Linux is particularly\n> easy to change the SHMMAX setting on). It's other Unixoid platforms\n> that are likely to have a problem. Particularly the ones where you\n> have to rebuild the kernel to change SHMMAX; people may be afraid to\n> do that.\n\nThe issue as I see it is: \nBetter performing vs. More Compatible Out of the box Defaults.\n\nPerhaps a compromise (hack?):\nSet the default to some default value that performs well, a value we all\nagree is not too big (16M? 32M?). 
On startup, if the OS can't give us\nwhat we want, instead of failing, we can try again with a smaller\namount, perhaps half the default, if that fails try again with half\nuntil we reach some bottom threshold (1M?).\n\nThe argument against this might be: When I set shared_buffers=X, I want\nX shared buffers. I don't want it to fail silently and give me less than\nwhat I need / want. To address this we might want to add a guc option\nthat controls this behavior. So we ship postgresql.conf with 32M of\nshared memory and auto_shared_mem_reduction = true. With a comment that\nthe administrator might want to turn this off for production.\n\nThoughts? \n\nI think this will allow most uninformed users get decent performing\ndefaults as most systems will accommodate this larger value.\n\n", "msg_date": "11 Feb 2003 13:53:51 -0500", "msg_from": "\"Matthew T. O'Connor\" <matthew@zeut.net>", "msg_from_op": false, "msg_subject": "Re: Changing the default configuration (was Re: [pgsql-advocacy]" }, { "msg_contents": "\"scott.marlowe\" <scott.marlowe@ihs.com> writes:\n> Is setting the max connections to something like 200 reasonable, or likely \n> to cause too many problems?\n\nThat would likely run into number-of-semaphores limitations (SEMMNI,\nSEMMNS). We do not seem to have as good documentation about changing\nthat as we do about changing the SHMMAX setting, so I'm not sure I want\nto buy into the \"it's okay to expect people to fix this before they can\nstart Postgres the first time\" argument here.\n\nAlso, max-connections doesn't silently skew your testing: if you need\nto raise it, you *will* know it.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 11 Feb 2003 13:55:29 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Changing the default configuration (was Re: [pgsql-advocacy] " }, { "msg_contents": "\"Matthew T. O'Connor\" <matthew@zeut.net> writes:\n> ... So we ship postgresql.conf with 32M of\n> shared memory and auto_shared_mem_reduction = true. With a comment that\n> the administrator might want to turn this off for production.\n\nThis really doesn't address Justin's point about clueless benchmarkers,\nhowever. In fact I fear it would make that problem worse: if Joe Blow\nsays he got horrible performance, who knows whether he was running with\na reasonable number of buffers or not? Especially when you ask him\n\"did you have lots of shared buffers\" and he responds \"yes, of course,\nit says 32M right here\".\n\nWe've recently been moving away from the notion that it's okay to\nsilently lose functionality in order to run on a given system. For\nexample, if you want to install without readline, you now have to\nexplicitly tell configure that, because we heard \"why don't I have\nhistory in psql\" way too often from people who just ran configure\nand paid no attention to what it told them.\n\nI think that what this discussion is really leading up to is that we\nare going to decide to apply the same principle to performance. 
The\nout-of-the-box settings ought to give reasonable performance, and if\nyour system can't handle it, you should have to take explicit action\nto acknowledge the fact that you aren't going to get reasonable\nperformance.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 11 Feb 2003 14:06:32 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Changing the default configuration (was Re: [pgsql-advocacy]\n\tPostgreSQL Benchmarks)" }, { "msg_contents": "On Tue, 2003-02-11 at 12:55, Tom Lane wrote:\n> \"scott.marlowe\" <scott.marlowe@ihs.com> writes:\n> > Is setting the max connections to something like 200 reasonable, or likely \n> > to cause too many problems?\n> \n> That would likely run into number-of-semaphores limitations (SEMMNI,\n> SEMMNS). We do not seem to have as good documentation about changing\n> that as we do about changing the SHMMAX setting, so I'm not sure I want\n> to buy into the \"it's okay to expect people to fix this before they can\n> start Postgres the first time\" argument here.\n> \n> Also, max-connections doesn't silently skew your testing: if you need\n> to raise it, you *will* know it.\n> \n\nBesides, I'm not sure that it makes sense to let other product needs\ndictate the default configurations for this one. It would be one thing\nif the vast majority of people only used PostgreSQL with Apache. I know\nI'm using it in environments in which no way relate to the web. I'm\nthinking I'm not alone.\n\n\nRegards,\n\n-- \nGreg Copeland <greg@copelandconsulting.net>\nCopeland Computer Consulting\n\n", "msg_date": "11 Feb 2003 13:16:15 -0600", "msg_from": "Greg Copeland <greg@CopelandConsulting.Net>", "msg_from_op": false, "msg_subject": "Re: Changing the default configuration (was Re:" }, { "msg_contents": "Tom Lane wrote:\n\n> I think that what this discussion is really leading up to is that we\n> are going to decide to apply the same principle to performance. The\n> out-of-the-box settings ought to give reasonable performance, and if\n> your system can't handle it, you should have to take explicit action\n> to acknowledge the fact that you aren't going to get reasonable\n> performance.\n\nWhat I don't understand is why this is such a huge issue. Set it to a \nreasonable level (be it 4M or whatever the concensus is) & let the \npackagers worry about it if that's not appropriate. Isn't it their job \nto have a good out-of-the-package experience? Won't they have better \nknowledge of what the system limits are for the packages they develop \nfor? Worst case, couldn't they have a standard conf package & a special \n\"high-performance\" conf package in addition to all the base packages? \nAfter all, it's the users of the RPMs that are the real problem, not \nusually the people that compile it on their own. If you were having \nproblems with the \"compile-it-yourself\" audience, couldn't you just hit \nthem over the head three or four times (configure, install, initdb & \nfailed startup to name a few) reminding them to change it if it wasn't \nappropriate. What more can you really do? 
At some point, the end user \nhas to bear some responsibility...\n\n-- \n\nJeff Hoffmann\nPropertyKey.com\n\n", "msg_date": "Tue, 11 Feb 2003 13:36:05 -0600", "msg_from": "Jeff Hoffmann <jeff@propertykey.com>", "msg_from_op": false, "msg_subject": "Re: Changing the default configuration (was Re: [pgsql-advocacy]" }, { "msg_contents": "On Tue, 11 Feb 2003, Tom Lane wrote:\n\n> \"scott.marlowe\" <scott.marlowe@ihs.com> writes:\n> > Is setting the max connections to something like 200 reasonable, or likely \n> > to cause too many problems?\n> \n> That would likely run into number-of-semaphores limitations (SEMMNI,\n> SEMMNS). We do not seem to have as good documentation about changing\n> that as we do about changing the SHMMAX setting, so I'm not sure I want\n> to buy into the \"it's okay to expect people to fix this before they can\n> start Postgres the first time\" argument here.\n> \n> Also, max-connections doesn't silently skew your testing: if you need\n> to raise it, you *will* know it.\n\nTrue, but unfortunately, the time you usually learn that the first time is \nwhen your web server starts issuing error messages about not being able to \nconnect to the database. i.e. it fails at the worst possible time.\n\nOK. I just did some very simple testing in RH Linux 7.2 and here's what I \nfound about file handles: default max appears to be 8192 now, not 4096.\n\nWith max file handles set to 4096, I run out of handles when opening about \n450 or more simultaneous connections. At 8192, the default for RH72, I \npretty much run out of memory on a 512 Meg box and start swapping \nmassively long before I can exhaust the file handle pool.\n\nAt 200 connections, I use about half of all my file descriptors out of \n4096, which seems pretty safe to me.\n\nNote that setting the max connections to 200 in the conf does NOT result \nin huge allocations of file handles right away, but only while the \ndatabase is under load, so this leads us to the other possible problem, \nthat the database will exhaust file handles if we set this number too \nhigh, as opposed to not being able to connect because it's too low.\n\nI'm guessing that 200 or less is pretty safe on most modern flavors of \nUnix, but I'm not one of those folks who keeps the older flavors happy \nreally, so I can't speak for them.\n\nBack in the day, a P100 with 30 or 40 connections was a heavy load, \nnowadays, a typical workstation has 512 Meg ram or more, and a 1.5+GHz \nCPU, so I can see increasing this setting too. I'd rather the only issue \nfor the user be adjusting their kernel than having to up the connection \nlimit in postgresql. 
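(For example, on a reasonably recent Linux box both limits can be raised at runtime with sysctl; the numbers below are only placeholders, not recommendations:\n\n  # kernel-wide file handle limit\n  sysctl -w fs.file-max=16384\n  # SysV semaphore limits, in the order SEMMSL SEMMNS SEMOPM SEMMNI\n  sysctl -w kernel.sem=\"250 32000 32 128\"\n\nOther platforms have their own knobs, of course.)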
I can up the max file handles in Linux on the fly, \nwith no one noticing it; I have to stop and restart postgresql to make \nthe max backends take effect, so that's another reason not to have too low \na limit.\n\nIs there a place on the web somewhere that lists the default settings for \nmost major unixes for file handles, inodes, and shared memory?\n\n", "msg_date": "Tue, 11 Feb 2003 12:54:06 -0700 (MST)", "msg_from": "\"scott.marlowe\" <scott.marlowe@ihs.com>", "msg_from_op": false, "msg_subject": "Re: Changing the default configuration (was Re: [pgsql-advocacy]"\n},\n{\n"msg_contents": "On 11 Feb 2003, Greg Copeland wrote:\n\n> On Tue, 2003-02-11 at 12:55, Tom Lane wrote:\n> > \"scott.marlowe\" <scott.marlowe@ihs.com> writes:\n> > > Is setting the max connections to something like 200 reasonable, or likely \n> > > to cause too many problems?\n> > \n> > That would likely run into number-of-semaphores limitations (SEMMNI,\n> > SEMMNS). We do not seem to have as good documentation about changing\n> > that as we do about changing the SHMMAX setting, so I'm not sure I want\n> > to buy into the \"it's okay to expect people to fix this before they can\n> > start Postgres the first time\" argument here.\n> > \n> > Also, max-connections doesn't silently skew your testing: if you need\n> > to raise it, you *will* know it.\n> > \n> \n> Besides, I'm not sure that it makes sense to let other product needs\n> dictate the default configurations for this one. It would be one thing\n> if the vast majority of people only used PostgreSQL with Apache. I know\n> I'm using it in environments in which no way relate to the web. I'm\n> thinking I'm not alone.\n\nTrue, but even so, 32 max connections is a bit light. I have more \npgsql databases than that on my box now. My point in my previous answer \nto Tom was that you HAVE to shut down postgresql to change this. It \ndoesn't allocate tons of semaphores on startup, just when the child \nprocesses are spawned, and I'd rather have the user adjust their OS to \nmeet the higher need than have to shut down and restart postgresql as \nwell. This is one of the settings that make it feel like a \"toy\" when you \nfirst open it.\n\nHow many other high quality databases in the whole world restrict max \nconnections to 32? The original choice of 32 was made because of the original \nchoice of 64 shared memory blocks as the most we could hope for on common \nOS installs. Now that we're looking at cranking that up to 1000, \nshouldn't max connections get a look too?\n\nYou don't have to be using apache to need more than 32 simultaneous connections. \nHeck, how many postgresql databases do you figure are in production with \nthat setting still in there? My guess is not many.\n\nI'm not saying we should do this to make benchmarks better either; I'm \nsaying we should do it to improve the user experience. A limit of 32 \nconnects makes things tough for a beginning DBA: not only does he find out \nabout the problem the first time his database is under load, but then he \ncan't fix it without shutting down and restarting postgresql. If the max \nis set to 200 or 500 and he starts running out of semaphores, that's a \nproblem he can address while his database is still up and running in most \noperating systems, at least in the ones I use.\n\nSo, my main point is that any setting that requires you to shut down \npostgresql to make the change, we should pick a compromise value that \nmeans you never likely will have to shut down the database once you've \nstarted it up and it's under load. 
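To make that concrete, the sort of compromise I have in mind would look something like this in postgresql.conf (numbers pulled from this thread purely for illustration, not a tested recommendation):\n\n  max_connections = 128   # rather than 32\n  shared_buffers = 1000   # ~8 MB at the stock 8k block size, rather than 64\n  sort_mem = 8192         # in kB, i.e. 8 MB per sort\n\nThe point is just that the knee values should be found once and baked in, so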
shared buffers, max connects, etc... \nshould not need tweaking for 95% or more of the users if we can help it. \nIt would be nice if we could find a set of numbers that reduce the number \nof problems users have, so all I'm doing is looking for the sweetspot, \nwhich is NOT 32 max connections.\n\n", "msg_date": "Tue, 11 Feb 2003 13:10:17 -0700 (MST)", "msg_from": "\"scott.marlowe\" <scott.marlowe@ihs.com>", "msg_from_op": false, "msg_subject": "Re: Changing the default configuration (was Re:" }, { "msg_contents": "Tom Lane writes:\n\n> We could retarget to try to stay under SHMMAX=4M, which I think is\n> the next boundary that's significant in terms of real-world platforms\n> (isn't that the default SHMMAX on some BSDen?). That would allow us\n> 350 or so shared_buffers, which is better, but still not really a\n> serious choice for production work.\n\nWhat is a serious choice for production work? And what is the ideal\nchoice? The answer probably involves some variables, but maybe we should\nget values for those variables in each case and work from there.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Tue, 11 Feb 2003 22:13:37 +0100 (CET)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Changing the default configuration (was Re: " }, { "msg_contents": "On Tuesday 11 February 2003 13:03, Robert Treat wrote:\n> On Tue, 2003-02-11 at 12:08, Justin Clift wrote:\n> > b) Said benchmarking person knows very little about PostgreSQL, so they\n> > install the RPM's, packages, or whatever, and \"it works\". Then they run\n> > whatever benchmark they've downloaded, or designed, or whatever\n\n> Out of curiosity, how feasible is it for the rpm/package/deb/exe\n> maintainers to modify their supplied postgresql.conf settings when\n> building said distribution? AFAIK the minimum default SHHMAX setting on\n> Red Hat 8.0 is 32MB, seems like bumping shared buffers to work with that\n> amount would be acceptable inside the 8.0 rpm's.\n\nYes, this is easy to do. But what is a sane default? I can patch any file \nI'd like to, but my preference is to patch as little as possible, as I'm \ntrying to be generic here. I can't assume Red Hat 8 in the source RPM, and \nmy binaries are to be preferred only if the distributor doesn't have updated \nones.\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n\n", "msg_date": "Tue, 11 Feb 2003 16:53:39 -0500", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: Changing the default configuration (was Re: [pgsql-advocacy]" }, { "msg_contents": "> On Tue, 2003-02-11 at 10:20, Tom Lane wrote:\n> > \"Merlin Moncure\" <merlin.moncure@rcsonline.com> writes:\n> > > May I make a suggestion that maybe it is time to start thinking about\n> > > tuning the default config file, IMHO its just a little bit too\n> > > conservative,\n> >\n> > It's a lot too conservative. I've been thinking for awhile that we\n> > should adjust the defaults.\n> >\n> > The original motivation for setting shared_buffers = 64 was so that\n> > Postgres would start out-of-the-box on machines where SHMMAX is 1 meg\n> > (64 buffers = 1/2 meg, leaving 1/2 meg for our other shared data\n> > structures). At one time SHMMAX=1M was a pretty common stock kernel\n> > setting. But our other data structures blew past the 1/2 meg mark\n> > some time ago; at default settings the shmem request is now close to\n> > 1.5 meg. 
So people with SHMMAX=1M have already got to twiddle their\n> > postgresql.conf settings, or preferably learn how to increase SHMMAX.\n> > That means there is *no* defensible reason anymore for defaulting to\n> > 64 buffers.\n> >\n> > We could retarget to try to stay under SHMMAX=4M, which I think is\n> > the next boundary that's significant in terms of real-world platforms\n> > (isn't that the default SHMMAX on some BSDen?). That would allow us\n> > 350 or so shared_buffers, which is better, but still not really a\n> > serious choice for production work.\n> >\n> > What I would really like to do is set the default shared_buffers to\n> > 1000. That would be 8 meg worth of shared buffer space. Coupled with\n> > more-realistic settings for FSM size, we'd probably be talking a shared\n> > memory request approaching 16 meg. This is not enough RAM to bother\n> > any modern machine from a performance standpoint, but there are probably\n> > quite a few platforms out there that would need an increase in their\n> > stock SHMMAX kernel setting before they'd take it.\n> >\n> > So what this comes down to is making it harder for people to get\n> > Postgres running for the first time, versus making it more likely that\n> > they'll see decent performance when they do get it running.\n> >\n> > It's worth noting that increasing SHMMAX is not nearly as painful as\n> > it was back when these decisions were taken. Most people have moved\n> > to platforms where it doesn't even take a kernel rebuild, and we've\n> > acquired documentation that tells how to do it on all(?) our supported\n> > platforms. So I think it might be okay to expect people to do it.\n> >\n> > The alternative approach is to leave the settings where they are, and\n> > to try to put more emphasis in the documentation on the fact that the\n> > factory-default settings produce a toy configuration that you *must*\n> > adjust upward for decent performance. But we've not had a lot of\n> > success spreading that word, I think. With SHMMMAX too small, you\n> > do at least get a pretty specific error message telling you so.\n> >\n> > Comments?\n>\n> I'd personally rather have people stumble trying to get PostgreSQL\n> running, up front, rather than allowing the lowest common denominator\n> more easily run PostgreSQL only to be disappointed with it and move on.\n>\n> After it's all said and done, I would rather someone simply say, \"it's\n> beyond my skill set\", and attempt to get help or walk away. That seems\n> better than them being able to run it and say, \"it's a dog\", spreading\n> word-of-mouth as such after they left PostgreSQL behind. Worse yet,\n> those that do walk away and claim it performs horribly are probably\n> doing more harm to the PostgreSQL community than expecting someone to be\n> able to install software ever can.\n>\n> Nutshell:\n> \"Easy to install but is horribly slow.\"\n>\n> or\n>\n> \"Took a couple of minutes to configure and it rocks!\"\n>\n>\n>\n> Seems fairly cut-n-dry to me. ;)\n\nThe type of person who can't configure it or doesnt' think to try is\nprobably not doing a project that requires any serious performance. As long\nas you are running it on decent hardware postgres will run fantastic for\nanything but a very heavy load. I think there may be many people out there\nwho have little experience but want an RDBMS to manage their data. Those\npeople need something very, very easy. Look at the following that mysql\ngets despite how poor of a product it is. It's very, very easy. 
Mysql\nworks great for many peoples needs but then when they need to do something\nreal they need to move to a different database entirely. I think there is a\nhuge advantage to having a product that can be set up very quickly out of\nthe box. Those who need serious performance, hopefully for ther employers\nsake, will be more like to take a few minutes to do some quick performance\ntuning.\n\nRick Gigger\n\n", "msg_date": "Tue, 11 Feb 2003 17:25:29 -0700", "msg_from": "\"Rick Gigger\" <rick@alpinenetworking.com>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Changing the default configuration (was Re: " }, { "msg_contents": "On Tue, 11 Feb 2003, Tom Lane wrote:\n\n> It's a lot too conservative. I've been thinking for awhile that we\n> should adjust the defaults.\n\nSome of these issues could be made to Just Go Away with some code\nchanges. For example, using mmap rather than SysV shared memory\nwould automatically optimize your memory usage, and get rid of the\ndouble-buffering problem as well. If we could find a way to avoid using\nsemephores proportional to the number of connections we have, then you\nwouldn't have to worry about that configuration parameter, either.\n\nIn fact, some of this stuff might well improve our portability, too.\nFor example, mmap is a POSIX standard, whereas shmget is only an X/Open\nstandard. That makes me suspect that mmap is more widely available on\nnon-Unix platforms. (But I could be wrong.)\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n", "msg_date": "Wed, 12 Feb 2003 09:41:45 +0900 (JST)", "msg_from": "Curt Sampson <cjs@cynic.net>", "msg_from_op": false, "msg_subject": "Re: Changing the default configuration (was Re: " }, { "msg_contents": "On Tue, 11 Feb 2003, Rick Gigger wrote:\n\n> The type of person who can't configure it or doesnt' think to try is\n> probably not doing a project that requires any serious performance. As long\n> as you are running it on decent hardware postgres will run fantastic for\n> anything but a very heavy load. I think there may be many people out there\n> who have little experience but want an RDBMS to manage their data. Those\n> people need something very, very easy. Look at the following that mysql\n> gets despite how poor of a product it is. It's very, very easy. Mysql\n> works great for many peoples needs but then when they need to do something\n> real they need to move to a different database entirely. I think there is a\n> huge advantage to having a product that can be set up very quickly out of\n> the box. Those who need serious performance, hopefully for ther employers\n> sake, will be more like to take a few minutes to do some quick performance\n> tuning.\n\nVery good point. I'm pushing for changes that will NOT negatively impact \njoe beginner on the major platforms (Linux, BSD, Windows) in terms of \ninstall. 
I figure anyone installing on big iron already knows enough \nabout their OS that we don't have to worry about shared buffers being too big \nfor that machine.\n\nSo, a compromise of faster performance out of the box, with little or no \nnegative user impact, seems the sweet spot here.\n\nI'm thinking a good knee setting for each one, where not too much memory / \nsemaphores / file handles get gobbled up, but the database isn't pokey.\n\nThe poor performance of Postgresql in its current default configuration \nHAS cost us users, trust me; I know a few we've almost lost where I work \nwhom I converted after some quick tweaking of their database.\n\nIn its stock form Postgresql is very slow at large simple queries, like \nselect * from table1 t1 natural join table2 t2 where t1.field='a', where \nyou get back something like 10,000 rows. The real bottleneck here is \nsort_mem. A simple bump up to 8192 or so makes the database much more \nresponsive.\n\nIf we're looking at changing default settings for 7.4, then we should look \nat changing ALL of them that matter, since we'll have the most time to \nshake out problems if we do them early, and we won't have four or five \nrounds of setting different defaults over time and finding the limitations \nof the HOST OSes one at a time.\n\n", "msg_date": "Tue, 11 Feb 2003 17:42:06 -0700 (MST)", "msg_from": "\"scott.marlowe\" <scott.marlowe@ihs.com>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Changing the default configuration (was Re: "\n},\n{\n"msg_contents": "On Wed, 12 Feb 2003, Curt Sampson wrote:\n\n> On Tue, 11 Feb 2003, Tom Lane wrote:\n> \n> > It's a lot too conservative. I've been thinking for awhile that we\n> > should adjust the defaults.\n> \n> Some of these issues could be made to Just Go Away with some code\n> changes. For example, using mmap rather than SysV shared memory\n> would automatically optimize your memory usage, and get rid of the\n> double-buffering problem as well. If we could find a way to avoid using\n> semephores proportional to the number of connections we have, then you\n> wouldn't have to worry about that configuration parameter, either.\n> \n> In fact, some of this stuff might well improve our portability, too.\n> For example, mmap is a POSIX standard, whereas shmget is only an X/Open\n> standard. That makes me suspect that mmap is more widely available on\n> non-Unix platforms. (But I could be wrong.)\n\nI'll vote for mmap. I use the mm libs with apache/openldap/authldap and \nit is very fast and pretty common nowadays. It seems quite stable as \nwell.\n\n", "msg_date": "Tue, 11 Feb 2003 18:02:09 -0700 (MST)", "msg_from": "\"scott.marlowe\" <scott.marlowe@ihs.com>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Changing the default configuration (was Re: "\n},\n{\n"msg_contents": "> If I thought that pgbench was representative of anything, or even\n> capable of reliably producing repeatable numbers, then I might subscribe\n> to results derived this way. But I have little or no confidence in\n> pgbench. Certainly I don't see how you'd use it to produce\n> recommendations for a range of application scenarios, when it's only\n> one very narrow scenario itself.\n\nSigh. People always complain that \"pgbench does not reliably produce\nrepeatable numbers\" or something, and then say \"that's because pgbench's\ntransaction has too much contention on the branches table\". So I added the\n-N option to pgbench, which makes pgbench skip any UPDATE to\nthe branches table. 
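(For anyone who wants to try it, a typical run looks something like this -- the scaling factor and client counts are arbitrary, and the exact option spelling may vary between releases:\n\n  pgbench -i -s 10 test\n  pgbench -c 10 -t 1000 -N test\n\nThe first command initializes the test database at scaling factor 10; the second runs 10 clients for 1000 transactions each with the branches UPDATE skipped.)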
But still people continue to complian...\n\nThere should be many factors that would produce non-repeatable\nresults exist, for instance kenel buffer, PostgreSQL's buffer manager,\npgbench itself etc. etc... So far it seems no one has ever made clean\nexplanation why non-repeatable results happen...\n--\nTatsuo Ishii\n", "msg_date": "Wed, 12 Feb 2003 10:10:00 +0900 (JST)", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Changing the default configuration" }, { "msg_contents": "> My other pet peeve is the default max connections setting. This should be \n> higher if possible, but of course, there's always the possibility of \n> running out of file descriptors.\n> \n> Apache has a default max children of 150, and if using PHP or another \n> language that runs as an apache module, it is quite possible to use up all \n> the pgsql backend slots before using up all the apache child slots.\n> \n> Is setting the max connections to something like 200 reasonable, or likely \n> to cause too many problems?\n\nIt likely. First you will ran out kernel file descriptors. This could\nbe solved by increasing the kernel table or lowering\nmax_files_per_process, though. Second the total throughput will\nrapidly descrease if you don't have enough RAM and many\nCPUs. PostgreSQL can not handle many concurrent\nconnections/transactions effectively. I recommend to employ some kind\nof connection pooling software and lower the max connections.\n--\nTatsuo Ishii\n", "msg_date": "Wed, 12 Feb 2003 10:10:08 +0900 (JST)", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": false, "msg_subject": "Re: Changing the default configuration" }, { "msg_contents": "It's interesting that people focus on shared_buffers. From my\nexperience the most dominating parameter for performance is\nwal_sync_method. It sometimes makes ~20% performance difference. On\nthe otherhand, shared_buffers does very little for\nperformance. Moreover too many shared_buffers cause performance\ndegration! I guess this is due to the poor design of bufmgr. Until it\nis fixed, just increasing the number of shared_buffers a bit, say\n1024, is enough IMHO.\n--\nTatsuo Ishii\n", "msg_date": "Wed, 12 Feb 2003 10:10:26 +0900 (JST)", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": false, "msg_subject": "Re: Changing the default configuration" }, { "msg_contents": "On Wed, 12 Feb 2003, Tatsuo Ishii wrote:\n\n> > My other pet peeve is the default max connections setting. This should be \n> > higher if possible, but of course, there's always the possibility of \n> > running out of file descriptors.\n> > \n> > Apache has a default max children of 150, and if using PHP or another \n> > language that runs as an apache module, it is quite possible to use up all \n> > the pgsql backend slots before using up all the apache child slots.\n> > \n> > Is setting the max connections to something like 200 reasonable, or likely \n> > to cause too many problems?\n> \n> It likely. First you will ran out kernel file descriptors. This could\n> be solved by increasing the kernel table or lowering\n> max_files_per_process, though. Second the total throughput will\n> rapidly descrease if you don't have enough RAM and many\n> CPUs. PostgreSQL can not handle many concurrent\n> connections/transactions effectively. 
I recommend to employ some kind\n> of connection pooling software and lower the max connections.\n\nDon't know if you saw my other message, but increasing max connects to 200 \nused about 10% of all my semaphores and about 10% of my file handles. \nThat was while running pgbench to create 200 simo sessions.\n\nKeep in mind, on my fairly small intranet database server, I routinely \nhave >32 connections, most coming from outside my webserver. Probably no \nmore than 4 or 5 connects at a time come from there. These are all things \nlike Windows boxes with ODBC running access or something similar. Many of \nthe connections are idle 98% of the time, and use little or no real \nresources, even getting swapped out should the server need the spare \nmemory (it doesn't :-) that machine is set to 120 max simos if I remember \ncorrectly.\n\nwhile 200 may seem high, 32 definitely seems low. So, what IS a good \ncompromise? for this and ALL the other settings that should probably be a \nbit higher. I'm guessing sort_mem or 4 or 8 meg hits the knee for most \nfolks, and the max fsm settings tom has suggested make sense.\n\nWhat wal_sync method should we make default? Or should we pick one based \non the OS the user is running?\n\n\n", "msg_date": "Tue, 11 Feb 2003 18:23:28 -0700 (MST)", "msg_from": "\"scott.marlowe\" <scott.marlowe@ihs.com>", "msg_from_op": false, "msg_subject": "Re: Changing the default configuration" }, { "msg_contents": "On Tue, Feb 11, 2003 at 17:42:06 -0700,\n \"scott.marlowe\" <scott.marlowe@ihs.com> wrote:\n> \n> The poor performance of Postgresql in it's current default configuration \n> HAS cost us users, trust me, I know a few we've almost lost where I work \n> that I converted after some quick tweaking of their database.\n\nAbout two years ago I talked some people into trying it at work to\nuse with IMP/Horde which had been having some corruption problems\nwhile using MySQL (though it wasn't necessarily a problem with MySQL).\nI told them to be sure to use 7.1. When they tried it out it couldn't\nkeep up with the load. I asked the guys what they tried and found out\nthey couldn't find 7.1 rpms and didn't want to compile from source and\nso ended up using 7.0.?. Also as far as I could tell from talking to them,\nthey didn't do any tuning at all. They weren't interested in taking another\nlook at it after that. We are still using MySQL with that system today.\n\nOne of our DBAs is using it for some trial projects (including one for me)\neven though we have a site license for Oracle.\n", "msg_date": "Tue, 11 Feb 2003 19:37:48 -0600", "msg_from": "Bruno Wolff III <bruno@wolff.to>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Changing the default configuration (was Re: " }, { "msg_contents": "> > It likely. First you will ran out kernel file descriptors. This could\n> > be solved by increasing the kernel table or lowering\n> > max_files_per_process, though. Second the total throughput will\n> > rapidly descrease if you don't have enough RAM and many\n> > CPUs. PostgreSQL can not handle many concurrent\n> > connections/transactions effectively. I recommend to employ some kind\n> > of connection pooling software and lower the max connections.\n> \n> Don't know if you saw my other message, but increasing max connects to 200 \n> used about 10% of all my semaphores and about 10% of my file handles. \n> That was while running pgbench to create 200 simo sessions.\n\nI'm not talking about semaphores. 
The low usage of file\ndescriptors you see is just because pgbench uses very few tables.\n\n> Keep in mind, on my fairly small intranet database server, I routinely \n> have >32 connections, most coming from outside my webserver. Probably no \n> more than 4 or 5 connects at a time come from there. These are all things \n> like Windows boxes with ODBC running access or something similar. Many of \n> the connections are idle 98% of the time, and use little or no real \n> resources, even getting swapped out should the server need the spare \n> memory (it doesn't :-) that machine is set to 120 max simos if I remember \n> correctly.\n> \n> while 200 may seem high, 32 definitely seems low. So, what IS a good \n> compromise? for this and ALL the other settings that should probably be a \n> bit higher. I'm guessing sort_mem or 4 or 8 meg hits the knee for most \n> folks, and the max fsm settings tom has suggested make sense.\n\n32 is not too low if the kernel file descriptor limit is not\nincreased. Beware that running out of kernel file descriptors is a\nserious problem for the entire system, not only for PostgreSQL.\n\n> What wal_sync method should we make default? Or should we pick one based \n> on the OS the user is running?\n\nIt really depends on the OS or kernel version. I have seen open_sync work\nbest for certain versions of the Linux kernel, while fdatasync is better\nfor others. I'm not sure, but it could be possible that\nthe file system type might affect the wal_sync_method choice as well.\n--\nTatsuo Ishii\n", "msg_date": "Wed, 12 Feb 2003 11:00:00 +0900 (JST)", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": false, "msg_subject": "Re: Changing the default configuration"\n},\n{\n"msg_contents": "Why don't we include a postgresql.conf.recommended along with our\npostgresql.conf.sample? That shouldn't be too hard. We can just jack up\nthe shared buffers and wal buffers and everything - it doesn't matter if\nit's not perfect, but it will at least give people an idea of what needs to\nbe increased, etc. to get good results.\n\nI'm currently benchmarking our new DB server before we put it into\nproduction. I plan to publish the results from that shortly.\n\nRegards,\n\nChris\n\n> -----Original Message-----\n> From: pgsql-advocacy-owner@postgresql.org\n> [mailto:pgsql-advocacy-owner@postgresql.org]On Behalf Of Merlin Moncure\n> Sent: Tuesday, 11 February 2003 11:44 PM\n> To: Greg Copeland\n> Cc: PostgresSQL Hackers Mailing List; pgsql-advocacy@postgresql.org\n> Subject: Re: [pgsql-advocacy] [HACKERS] PostgreSQL Benchmarks\n>\n>\n> I've tested all the win32 versions of postgres I can get my hands on\n> (cygwin and not), and my general feeling is that they have problems with\n> insert performance with fsync() turned on, probably the fault of the os.\n> Select performance is not so much affected.\n>\n> This is easily solved with transactions and other such things. Also\n> Postgres benefits from pl just like oracle.\n>\n> May I make a suggestion that maybe it is time to start thinking about\n> tuning the default config file, IMHO its just a little bit too\n> conservative, and its hurting you in benchmarks being run by idiots, but\n> its still bad publicity. Any real database admin would know his test\n> are synthetic and not meaningful without having to look at the #s.\n>\n> This is irritating me so much that I am going to put together a\n> benchmark of my own, a real world one, on (publicly available) real\n> world data. Mysql is a real dog in a lot of situations. 
The FCC\n> publishes a database of wireless transmitters that has tables with 10\n> million records in it. I'll pump that into pg, run some benchmarks,\n> real world queries, and we'll see who the faster database *really* is.\n> This is just a publicity issue, that's all. Its still annoying though.\n>\n> I'll even run an open challenge to database admin to beat query\n> performance of postgres in such datasets, complex multi table joins,\n> etc. I'll even throw out the whole table locking issue and analyze\n> single user performance.\n>\n> Merlin\n>\n>\n>\n> _____________\n> How much of the performance difference is from the RDBMS, from the\n> middleware, and from the quality of implementation in the middleware.\n>\n> While I'm not surprised that the the cygwin version of PostgreSQL is\n> slow, those results don't tell me anything about the quality of the\n> middleware interface between PHP and PostgreSQL. Does anyone know if we\n> can rule out some of the performance loss by pinning it to bad\n> middleware implementation for PostgreSQL?\n>\n>\n> Regards,\n>\n> --\n> Greg Copeland <greg@copelandconsulting.net>\n> Copeland Computer Consulting\n>\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n>\n\n", "msg_date": "Wed, 12 Feb 2003 10:42:42 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] PostgreSQL Benchmarks" }, { "msg_contents": "> >After it's all said and done, I would rather someone simply say, \"it's\n> >beyond my skill set\", and attempt to get help or walk away. That seems\n> >better than them being able to run it and say, \"it's a dog\", spreading\n> >word-of-mouth as such after they left PostgreSQL behind. Worse yet,\n> >those that do walk away and claim it performs horribly are probably\n> >doing more harm to the PostgreSQL community than expecting someone to be\n> >able to install software ever can.\n> >\n> <RANT>\n>\n> And that my friends is why PostgreSQL is still relatively obscure.\n\nDude - I hang out on PHPBuilder's database forums and you wouldn't believe\nhow often the \"oh, don't use Postgres, it has a history of database\ncorruption problems\" thing is mentioned.\n\nChris\n\n\n", "msg_date": "Wed, 12 Feb 2003 10:47:12 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Changing the default configuration (was Re: " }, { "msg_contents": "\"scott.marlowe\" <scott.marlowe@ihs.com> writes:\n> ... The original choice of 32 was set because the original \n> choice of 64 shared memory blocks as the most we could hope for on common \n> OS installs. Now that we're looking at cranking that up to 1000, \n> shouldn't max connections get a look too?\n\nActually I think max-connections at 32 was set because of SEMMAX limits,\nand had only the most marginal connection to shared_buffers (anyone care\nto troll the archives to check?) But sure, let's take another look at\nthe realistic limits today.\n\n> ... If he starts running out of semaphores, that's a \n> problem he can address while his database is still up and running in most \n> operating systems, at least in the ones I use.\n\nBack in the day, this took a kernel rebuild and system reboot to fix.\nIf this has changed, great ... 
but on exactly which Unixen can you\nalter SEMMAX on the fly?\n\n> So, my main point is that any setting that requires you to shut down \n> postgresql to make the change, we should pick a compromise value that \n> means you never likely will have to shut down the database once you've \n> started it up and it's under load.\n\nWhen I started using Postgres, it did not allocate the max number of\nsemas it might need at startup, but was instead prone to fail when you\ntried to open the 17th or 33rd or so connection. It was universally\nagreed to be an improvement to refuse to start at all if we could not\nmeet the specified max_connections setting. I don't want to backtrack\nfrom that. If we can up the default max_connections setting, great ...\nbut let's not increase the odds of failing under load.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 11 Feb 2003 23:24:26 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Changing the default configuration (was Re: " }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Tom Lane writes:\n>> We could retarget to try to stay under SHMMAX=4M, which I think is\n>> the next boundary that's significant in terms of real-world platforms\n>> (isn't that the default SHMMAX on some BSDen?). That would allow us\n>> 350 or so shared_buffers, which is better, but still not really a\n>> serious choice for production work.\n\n> What is a serious choice for production work?\n\nWell, as I commented later in that mail, I feel that 1000 buffers is\na reasonable choice --- but I have to admit that I have no hard data\nto back up that feeling. Perhaps we should take this to the\npgsql-perform list and argue about reasonable choices.\n\nA separate line of investigation is \"what is the lowest common\ndenominator nowadays?\" I think we've established that SHMMAX=1M\nis obsolete, but what replaces it as the next LCD? 4M seems to be\ncorrect for some BSD flavors, and I can confirm that that's the\ncurrent default for Mac OS X --- any other comments?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 12 Feb 2003 00:27:31 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Changing the default configuration (was Re: " }, { "msg_contents": "> >> We could retarget to try to stay under SHMMAX=4M, which I think is\n> >> the next boundary that's significant in terms of real-world platforms\n> >> (isn't that the default SHMMAX on some BSDen?). That would allow us\n> >> 350 or so shared_buffers, which is better, but still not really a\n> >> serious choice for production work.\n>\n> > What is a serious choice for production work?\n>\n> Well, as I commented later in that mail, I feel that 1000 buffers is\n> a reasonable choice --- but I have to admit that I have no hard data\n> to back up that feeling. Perhaps we should take this to the\n> pgsql-perform list and argue about reasonable choices.\n\nDamn. 
Another list I have to subscribe to!\n\nThe results I just posted indicate that 1000 buffers is really quite bad\nperformance comaped to 4000, perhaps up to 100 TPS for selects and 30 TPS\nfor TPC-B.\n\nStill, that 1000 is in itself vastly better than 64!!\n\nChris\n\n\n", "msg_date": "Wed, 12 Feb 2003 13:32:41 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Changing the default configuration (was Re: " }, { "msg_contents": "> A separate line of investigation is \"what is the lowest common\n> denominator nowadays?\" I think we've established that SHMMAX=1M\n> is obsolete, but what replaces it as the next LCD? 4M seems to be\n> correct for some BSD flavors, and I can confirm that that's the\n> current default for Mac OS X --- any other comments?\n\nIt's 1025 * 4k pages on FreeBSD = 4MB\n\nChris\n\n", "msg_date": "Wed, 12 Feb 2003 13:33:52 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Changing the default configuration (was Re: " }, { "msg_contents": "On Tuesday 11 Feb 2003 10:56 pm, you wrote:\n> Josh Berkus <josh@agliodbs.com> writes:\n> > What if we supplied several sample .conf files, and let the user choose\n> > which to copy into the database directory? We could have a \"high read\n> > performance\" profile, and a \"transaction database\" profile, and a\n> > \"workstation\" profile, and a \"low impact\" profile.\n>\n> Uh ... do we have a basis for recommending any particular sets of\n> parameters for these different scenarios? This could be a good idea\n> in the abstract, but I'm not sure I know enough to fill in the details.\n\nLet's take very simple scenario to supply pre-configured postgresql.conf.\n\nAssume that SHMMAX=Total memory/2 and supply different config files for\n\n64MB/128Mb/256MB/512MB and above.\n\nIs it simple enough?\n\n Shridhar\n", "msg_date": "Wed, 12 Feb 2003 11:51:44 +0530", "msg_from": "\"Shridhar Daithankar<shridhar_daithankar@persistent.co.in>\"\n\t<shridhar_daithankar@persistent.co.in>", "msg_from_op": false, "msg_subject": "Re: Changing the default configuration (was Re: [pgsql-advocacy]" }, { "msg_contents": "On Tue, 2003-02-11 at 21:00, Tatsuo Ishii wrote:\n> > \n> > while 200 may seem high, 32 definitely seems low. So, what IS a good \n> > compromise? for this and ALL the other settings that should probably be a \n> > bit higher. I'm guessing sort_mem or 4 or 8 meg hits the knee for most \n> > folks, and the max fsm settings tom has suggested make sense.\n> \n> 32 is not too low if the kernel file descriptors is not\n> increased. Beware that running out of the kernel file descriptors is a\n> serious problem for the entire system, not only for PostgreSQL.\n> \n\nHad this happen at a previous employer, and it definitely is bad. I\nbelieve we had to do a reboot to clear it up. And we saw the problem a\ncouple of times since the sys admin wasn't able to deduce what had\nhappened the first time we got it. IIRC the problem hit somewhere around\n150 connections, so we ran with 128 max. I think this is a safe number\non most servers these days (running linux as least) though out of the\nbox I might be more inclined to limit it to 64. 
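(On Linux you can at least see how close you are getting: /proc/sys/fs/file-nr reports allocated, free, and maximum file handles, so a quick\n\n  cat /proc/sys/fs/file-nr\n\nbefore and during peak load tells you whether the kernel file table is about to fill up. Other platforms need their own checks, of course.)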
If you do hit a file\ndescriptor problem, *you are hosed*.\n\nRobert Treat\n\n\n", "msg_date": "12 Feb 2003 11:36:19 -0500", "msg_from": "Robert Treat <xzilla@users.sourceforge.net>", "msg_from_op": false, "msg_subject": "Re: Changing the default configuration" }, { "msg_contents": "On Tue, Feb 11, 2003 at 05:25:29PM -0700, Rick Gigger wrote:\n\n> The type of person who can't configure it or doesnt' think to try is\n> probably not doing a project that requires any serious performance.\n\nI have piles of email, have fielded thousands of phone calls, and\nhave had many conversations which prove that claim false. People\nthink that computers are magic. That they don't think the machines\nrequire a little bit of attention is nowise an indication that they\ndon't need the system to come with reasonable defaults.\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<andrew@libertyrms.info> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Wed, 12 Feb 2003 11:39:41 -0500", "msg_from": "Andrew Sullivan <andrew@libertyrms.info>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Changing the default configuration (was Re: " }, { "msg_contents": "On Wed, 2003-02-12 at 10:36, Robert Treat wrote:\n> On Tue, 2003-02-11 at 21:00, Tatsuo Ishii wrote:\n> > > \n> > > while 200 may seem high, 32 definitely seems low. So, what IS a good \n> > > compromise? for this and ALL the other settings that should probably be a \n> > > bit higher. I'm guessing sort_mem or 4 or 8 meg hits the knee for most \n> > > folks, and the max fsm settings tom has suggested make sense.\n> > \n> > 32 is not too low if the kernel file descriptors is not\n> > increased. Beware that running out of the kernel file descriptors is a\n> > serious problem for the entire system, not only for PostgreSQL.\n> > \n> \n> Had this happen at a previous employer, and it definitely is bad. I\n> believe we had to do a reboot to clear it up. And we saw the problem a\n> couple of times since the sys admin wasn't able to deduce what had\n> happened the first time we got it. IIRC the problem hit somewhere around\n> 150 connections, so we ran with 128 max. I think this is a safe number\n> on most servers these days (running linux as least) though out of the\n> box I might be more inclined to limit it to 64. If you do hit a file\n> descriptor problem, *you are hosed*.\n> \n\nThat does seem like a more reasonable upper limit. I would rather see\npeople have to knowingly increase the limit rather than bump into system\nupper limits and start scratching their heads trying to figure out what\nthe heck is going on.\n\n\n-- \nGreg Copeland <greg@copelandconsulting.net>\nCopeland Computer Consulting\n\n", "msg_date": "12 Feb 2003 10:43:05 -0600", "msg_from": "Greg Copeland <greg@CopelandConsulting.Net>", "msg_from_op": false, "msg_subject": "Re: Changing the default configuration" }, { "msg_contents": "On Wed, 2003-02-12 at 11:39, Andrew Sullivan wrote:\n> On Tue, Feb 11, 2003 at 05:25:29PM -0700, Rick Gigger wrote:\n> \n> > The type of person who can't configure it or doesnt' think to try is\n> > probably not doing a project that requires any serious performance.\n> \n> I have piles of email, have fielded thousands of phone calls, and\n> have had many conversations which prove that claim false. 
People\n\nBut IBM told me computers are self healing, so if there is a performance\nproblem should it just fix itself?\n\n-- \nRod Taylor <rbt@rbt.ca>\n\nPGP Key: http://www.rbt.ca/rbtpub.asc", "msg_date": "12 Feb 2003 11:43:15 -0500", "msg_from": "Rod Taylor <rbt@rbt.ca>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Changing the default configuration (was Re:" }, { "msg_contents": "Robert Treat <xzilla@users.sourceforge.net> writes:\n> Had this happen at a previous employer, and it definitely is bad. I\n> believe we had to do a reboot to clear it up. And we saw the problem a\n> couple of times since the sys admin wasn't able to deduce what had\n> happened the first time we got it. IIRC the problem hit somewhere around\n> 150 connections, so we ran with 128 max. I think this is a safe number\n> on most servers these days (running linux as least) though out of the\n> box I might be more inclined to limit it to 64. If you do hit a file\n> descriptor problem, *you are hosed*.\n\nIf you want to run lots of connections, it's a real good idea to set\nmax_files_per_process to positively ensure Postgres won't overflow\nyour kernel file table, ie, max_connections * max_files_per_process\nshould be less than the file table size.\n\nBefore about 7.2, we didn't have max_files_per_process, and would\nnaively believe whatever sysconf() told us was an okay number of files\nto open. Unfortunately, way too many kernels promise more than they\ncan deliver ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 12 Feb 2003 11:48:57 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Changing the default configuration " }, { "msg_contents": "On Tue, 11 Feb 2003, Tom Lane wrote:\n\n> \"scott.marlowe\" <scott.marlowe@ihs.com> writes:\n> > ... If he starts running out of semaphores, that's a \n> > problem he can address while his database is still up and running in most \n> > operating systems, at least in the ones I use.\n> \n> Back in the day, this took a kernel rebuild and system reboot to fix.\n> If this has changed, great ... but on exactly which Unixen can you\n> alter SEMMAX on the fly?\n\nTom, now you're making me all misty eyed for 14\" platter 10 Meg hard \ndrives and paper tape readers. :-)\n\nSeriously, I know Linux can change these on the fly, and I'm pretty sure \nSolaris can too. I haven't played with BSD for a while so can't speak \nabout that. Anyone else know?\n\n> > So, my main point is that any setting that requires you to shut down \n> > postgresql to make the change, we should pick a compromise value that \n> > means you never likely will have to shut down the database once you've \n> > started it up and it's under load.\n> \n> When I started using Postgres, it did not allocate the max number of\n> semas it might need at startup, but was instead prone to fail when you\n> tried to open the 17th or 33rd or so connection. It was universally\n> agreed to be an improvement to refuse to start at all if we could not\n> meet the specified max_connections setting. I don't want to backtrack\n> from that. If we can up the default max_connections setting, great ...\n> but let's not increase the odds of failing under load.\n\nI don't want to backtrack either, and I prefer that we now grab the \nsemaphores we need at startup.\n\nNote that on a stock RH 72 box, the max number of \nbackends you can startup before you exhaust semphores is 2047 backends, \nmore than I'd ever want to try and run on normal PC hardware. 
So, on a \nlinux box 150 to 200 max backends comes no where near exhausting \nsemaphores.\n\nI imagine that any \"joe average\" who doesn't really understand sysadmin \nduties that well and is trying for the first time to install Postgresql \nWILL be doing so on one of three general platforms, Linux, BSD, or \nWindows. As long as the initial settings use only 10% or so of the file \nhandle and / or semaphore resources on each of those systems, we're \nprobably safe.\n\n64 or 128 seems like a nice power of two number that is likely to keep us \nsafe on inital installs while forestalling problems with too many \nconnections.\n\nJust for score, here's the default max output of rh72's kernel config:\n\nkernel.sem = 250 32000 32 128\nfs.file-max = 8192\n\nNote that while older kernels needed to have max inodes bumped up as well, \nnowadays that doesn't seem to be a problem, or they just set it really \nhigh and I can't hit the ceiling on my workstation without swap storms.\n\nthe definitions of the kernel.sem line are:\n\nkernel.sem: max_sem_per_id max_sem_total max_ops_sem_call max_sem_ids\n\nI'll try to get FreeBSD running today and see what research I can find on \nit, but 5.0 is likely to be a whole new beast for me, so if someone can \ntell us what the maxes are by default on different BSDs and what the \nsettings are in postgresql that can exhaust them that gets us closer.\n\nLike I've said before, anyone running HPUX, Irix, Solaris, or any other \n\"Industrial Strength Unix\" should already know to increase all these \nthings and likely had to long before Postgresql showed up on their box, so \na setting that keeps pgsql from coming up won't be likely, and if it \nhappens, they'll most likely know how to handle it.\n\nBSD and Linux users are more likely to contain the group of folks who \ndon't know all this and don't ever want to (not that all BSD/Linux users \nare like that, just that the sub group mostly exists on those platforms, \nand windows as well.) So the default settings really probably should be \ndetermined, for the most part, by the needs of those users.\n\n", "msg_date": "Wed, 12 Feb 2003 11:26:49 -0700 (MST)", "msg_from": "\"scott.marlowe\" <scott.marlowe@ihs.com>", "msg_from_op": false, "msg_subject": "Re: Changing the default configuration (was Re: " }, { "msg_contents": "Oh, another setting that should be a \"default\" for most users is to initdb \nautomatically with locale of C. If they need a different locale, they \nshould have to pick it.\n\nThe performance of Postgresql with a locale other than C when doing like \nand such is a serious ding. I'd much rather have the user experience the \nfaster searches first, then get to test with other locales and see if \nperformance is good enough, than to start out slow and wonder why they \nneed to change their initdb settings to get decent performance on a where \nclause with like in it.\n\n", "msg_date": "Wed, 12 Feb 2003 11:30:13 -0700 (MST)", "msg_from": "\"scott.marlowe\" <scott.marlowe@ihs.com>", "msg_from_op": false, "msg_subject": "Re: Changing the default configuration" }, { "msg_contents": "Tom Lane writes:\n\n> Well, as I commented later in that mail, I feel that 1000 buffers is\n> a reasonable choice --- but I have to admit that I have no hard data\n> to back up that feeling.\n\nI know you like it in that range, and 4 or 8 MB of buffers by default\nshould not be a problem. 
But personally I think if the optimal buffer\nsize does not depend on both the physical RAM you want to dedicate to\nPostgreSQL and the nature and size of the database, then we have achieved\na medium revolution in computer science. ;-)\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Thu, 13 Feb 2003 00:52:25 +0100 (CET)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Changing the default configuration (was Re: " }, { "msg_contents": "> Had this happen at a previous employer, and it definitely is bad. I\n> believe we had to do a reboot to clear it up. And we saw the problem a\n> couple of times since the sys admin wasn't able to deduce what had\n> happened the first time we got it. IIRC the problem hit somewhere around\n> 150 connections, so we ran with 128 max. I think this is a safe number\n> on most servers these days (running linux as least) though out of the\n> box I might be more inclined to limit it to 64. If you do hit a file\n> descriptor problem, *you are hosed*.\n\nJust yesterday I managed to hose my new Postgres installation during a\nparticular benchmarking run. Postgres did restart itself nicely though. I\nhave no idea why that particular run caused problems when all other runs\nwith identical settings didn't. I checked the log and saw file descriptor\nprobs. I was doing 128 connections with 128 max connetions. This was the\nlog:\n\n> 2003-02-12 04:16:15 LOG: PGSTAT: cannot open temp stats file\n> /usr/local/pgsql/data/global/pgstat.tmp.41388: Too many open files in\n> system\n> 2003-02-12 04:16:15 LOG: PGSTAT: cannot open temp stats file\n> /usr/local/pgsql/data/global/pgstat.tmp.41388: Too many open files in\n> system\n> 2003-02-12 04:16:39 PANIC: could not open transaction-commit log\n> directory\n> (/usr/local/pgsql/data/pg_clog): Too many open files in system\n> 2003-02-12 04:16:39 LOG: statement: SET autocommit TO 'on';VACUUM\n> ANALYZE\n> 2003-02-12 04:16:39 LOG: PGSTAT: cannot open temp stats file\n> /usr/local/pgsql/data/global/pgstat.tmp.41388: Too many open files in\n> system\n\nThis was the MIB:\n\n> kern.maxfiles: 1064\n> kern.maxfilesperproc: 957\n\nThis was the solution:\n\n> sysctl -w kern.maxfiles=65536\n> sysctl -w kern.maxfilesperproc=8192\n>\n> .. and then stick\n>\n> kern.maxfiles=65536\n> kern.maxfilesperproc=8192\n>\n> in /etc/sysctl.conf so its set during a reboot.\n\nWhich just goes to highlight the importance of rigorously testing a\nproduction installation...\n\nChris\n\n\n\n\n", "msg_date": "Thu, 13 Feb 2003 09:43:23 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: Changing the default configuration" }, { "msg_contents": "> Seriously, I know Linux can change these on the fly, and I'm pretty sure \n> Solaris can too. I haven't played with BSD for a while so can't speak \n> about that. Anyone else know?\n\nYou cannot change SHMMAX on the fly on FreeBSD.\n\nChris\n\n", "msg_date": "Thu, 13 Feb 2003 09:47:28 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: Changing the default configuration (was Re: " }, { "msg_contents": "\n\n--On Thursday, February 13, 2003 09:47:28 +0800 Christopher Kings-Lynne \n<chriskl@familyhealth.com.au> wrote:\n\n>> Seriously, I know Linux can change these on the fly, and I'm pretty sure\n>> Solaris can too. I haven't played with BSD for a while so can't speak\n>> about that. 
Anyone else know?\n>\n> You cannot change SHMMAX on the fly on FreeBSD.\nYes you can, on recent 4-STABLE:\n\nPassword:\nlerlaptop# sysctl kern.ipc.shmmax=66000000\nkern.ipc.shmmax: 33554432 -> 66000000\nlerlaptop#uname -a\nFreeBSD lerlaptop.lerctr.org 4.7-STABLE FreeBSD 4.7-STABLE #38: Mon Feb 3 \n21:51:25 CST 2003 \nler@lerlaptop.lerctr.org:/usr/obj/usr/src/sys/LERLAPTOP i386\nlerlaptop#\n\n>\n> Chris\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n>\n\n\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n\n\n\n", "msg_date": "Wed, 12 Feb 2003 19:51:38 -0600", "msg_from": "Larry Rosenman <ler@lerctr.org>", "msg_from_op": false, "msg_subject": "Re: Changing the default configuration (was Re: " }, { "msg_contents": "Christopher Kings-Lynne wrote:\n> > Seriously, I know Linux can change these on the fly, and I'm pretty sure \n> > Solaris can too. I haven't played with BSD for a while so can't speak \n> > about that. Anyone else know?\n> \n> You cannot change SHMMAX on the fly on FreeBSD.\n\nAnd part of the reason is because some/most BSD's map the page tables\ninto physical RAM (kernel space) rather than use some shared page table\nmechanism. This is good because it prevents the shared memory from\nbeing swapped out (performance disaster).\n\nIt doesn't actually allocate RAM unless someone needs it, but it does\nlock the shared memory into a specific fixed location for all processes.\n\nThe more flexible approach is to make shared memory act just like the\nmemory of a user process, and have other user processes share those page\ntables, but that adds extra overhead and can cause the memory to behave\njust like user memory (swapable).\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Wed, 12 Feb 2003 21:36:45 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Changing the default configuration (was Re:" }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> I know you like it in that range, and 4 or 8 MB of buffers by default\n> should not be a problem. But personally I think if the optimal buffer\n> size does not depend on both the physical RAM you want to dedicate to\n> PostgreSQL and the nature and size of the database, then we have achieved\n> a medium revolution in computer science. ;-)\n\nBut this is not about \"optimal\" settings. This is about \"pretty good\"\nsettings. As long as we can get past the knee of the performance curve,\nI think we've done what should be expected of a default parameter set.\n\nI believe that 1000 buffers is enough to get past the knee in most\nscenarios. Again, I haven't got hard evidence, but that's my best\nguess.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 12 Feb 2003 22:08:26 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Changing the default configuration (was Re: " }, { "msg_contents": ">> You cannot change SHMMAX on the fly on FreeBSD.\n\nI think we suffered some topic drift here --- wasn't the last question\nabout whether SEMMAX can be increased on-the-fly? 
That wouldn't have\nanything to do with memory-mapping strategies...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 12 Feb 2003 22:18:35 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Changing the default configuration (was Re: " }, { "msg_contents": "On Wed, 12 Feb 2003, Bruce Momjian wrote:\n\n> Christopher Kings-Lynne wrote:\n>\n> > You cannot change SHMMAX on the fly on FreeBSD.\n>\n> And part of the reason is because some/most BSD's map the page tables\n> into physical RAM (kernel space) rather than use some shared page table\n> mechanism. This is good because it prevents the shared memory from\n> being swapped out (performance disaster).\n\nNot at all! In all the BSDs, as far as I'm aware, SysV shared memory is\njust normal mmap'd memory.\n\nFreeBSD offers a sysctl that lets you mlock() that memory, and that is\nhelpful only because postgres insists on taking data blocks that are\nalready in memory, fully sharable amongst all back ends and ready to be\nused, and making a copy of that data to be shared amongst all back ends.\n\n> It doesn't actually allocate RAM unless someone needs it, but it does\n> lock the shared memory into a specific fixed location for all processes.\n\nI don't believe that the shared memory is not locked to a specific VM\naddress for every process. There's certainly no reason it needs to be.\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n", "msg_date": "Thu, 13 Feb 2003 13:32:15 +0900 (JST)", "msg_from": "Curt Sampson <cjs@cynic.net>", "msg_from_op": false, "msg_subject": "Re: Changing the default configuration (was Re:" }, { "msg_contents": "\"scott.marlowe\" <scott.marlowe@ihs.com> writes:\n> Tom, now you're making me all misty eyed for 14\" platter 10 Meg hard \n> drives and paper tape readers. :-)\n\n<python> Och, we used to *dream* of 10 meg drives... </python>\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 13 Feb 2003 00:15:02 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Changing the default configuration (was Re: " }, { "msg_contents": "Peter Eisentraut wrote:\n> Tom Lane writes:\n> \n> > Well, as I commented later in that mail, I feel that 1000 buffers is\n> > a reasonable choice --- but I have to admit that I have no hard data\n> > to back up that feeling.\n> \n> I know you like it in that range, and 4 or 8 MB of buffers by default\n> should not be a problem. But personally I think if the optimal buffer\n> size does not depend on both the physical RAM you want to dedicate to\n> PostgreSQL and the nature and size of the database, then we have achieved\n> a medium revolution in computer science. ;-)\n\nI have thought about this and I have an idea. Basically, increasing the\ndefault values may get us closer, but it will discourage some to tweek,\nand it will cause problems with some OS's that have small SysV params.\n\nSo, my idea is to add a message at the end of initdb that states people\nshould run the pgtune script before running a production server.\n\nThe pgtune script will basically allow us to query the user, test the OS\nversion and perhaps parameters, and modify postgresql.conf with\nreasonable values. I think this is the only way to cleanly get folks\nclose to where they should be.\n\nFor example, we can ask them how many rows and tables they will be\nchanging, on average, between VACUUM runs. That will allow us set the\nFSM params. 
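A first cut could be nothing fancier than a small shell script; the sketch below is only meant to show the shape of the thing -- the questions are the ones described here, but the formulas are placeholders, not real tuning advice:\n\n  #!/bin/sh\n  # pg_tune sketch: ask a couple of questions, emit suggested settings\n  echo -n \"Rows updated or deleted between VACUUM runs (approx)? \"\n  read rows\n  echo -n \"MB of RAM to dedicate to PostgreSQL? \"\n  read ram_mb\n  echo \"# review these before copying them into postgresql.conf\"\n  echo \"max_fsm_pages = $rows\"\n  echo \"shared_buffers = $(( ram_mb * 1024 / 8 / 4 ))  # ~25% of RAM, in 8k pages\"\n\n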
We can ask them about using 25% of their RAM for shared\nbuffers. If they have other major apps running on the server or have\nsmall tables, we can make no changes. We can basically ask them\nquestions and use that info to set values.\n\nWe can even ask about sort usage maybe and set sort memory. We can even\ncontrol checkpoint_segments this way if they say they will have high\ndatabase write activity and don't worry about disk space usage. We may\neven be able to compute some random page cost estimate.\n\nSeems a script is going to be the best way to test values and assist\nfolks in making reasonable decisions about each parameter. Of course,\nthey can still edit the file, and we can ask them if they want\nassistance to set each parameter or leave it alone.\n\nI would restrict the script to only deal with tuning values, and tell\npeople they still need to review that file for other useful parameters.\n\nAnother option would be to make a big checklist or web page that asks\nsuch questions and computes proper values, but it seems a script would\nbe easiest. We can even support '?' which would explain why the\nquestion is being ask and how it affects the value.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 13 Feb 2003 01:12:17 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Changing the default configuration (was Re: " }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> So, my idea is to add a message at the end of initdb that states people\n> should run the pgtune script before running a production server.\n\nDo people read what initdb has to say?\n\nIIRC, the RPM install scripts hide initdb's output from the user\nentirely. I wouldn't put much faith in such a message as having any\nreal effect on people...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 13 Feb 2003 10:06:29 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Changing the default configuration (was Re: " }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > So, my idea is to add a message at the end of initdb that states people\n> > should run the pgtune script before running a production server.\n> \n> Do people read what initdb has to say?\n> \n> IIRC, the RPM install scripts hide initdb's output from the user\n> entirely. I wouldn't put much faith in such a message as having any\n> real effect on people...\n\nYes, that is a problem. We could show something in the server logs if\npg_tune hasn't been run. Not sure what else we can do, but it would\ngive folks a one-stop thing to run to deal with performance\nconfiguration.\n\nWe could prevent the postmaster from starting unless they run pg_tune or\nif they have modified postgresql.conf from the default. Of course,\nthat's pretty drastic.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 13 Feb 2003 12:10:56 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Changing the default configuration (was Re: " }, { "msg_contents": "\nI was speaking of the 4.4 BSD. 
FreeBSD has the merged VM, and I think\nNetBSD only recently did that. BSD/OS does do the locking by default\nand it maps into the kernel address space. I believe FreeBSD has a\nsysctl to control locking of SysV memory.\n\nOne advantage of having it all at the same VM address is that they can\nuse the same page tables for virtual address lookups.\n\n---------------------------------------------------------------------------\n\nCurt Sampson wrote:\n> On Wed, 12 Feb 2003, Bruce Momjian wrote:\n> \n> > Christopher Kings-Lynne wrote:\n> >\n> > > You cannot change SHMMAX on the fly on FreeBSD.\n> >\n> > And part of the reason is because some/most BSD's map the page tables\n> > into physical RAM (kernel space) rather than use some shared page table\n> > mechanism. This is good because it prevents the shared memory from\n> > being swapped out (performance disaster).\n> \n> Not at all! In all the BSDs, as far as I'm aware, SysV shared memory is\n> just normal mmap'd memory.\n> \n> FreeBSD offers a sysctl that lets you mlock() that memory, and that is\n> helpful only because postgres insists on taking data blocks that are\n> already in memory, fully sharable amongst all back ends and ready to be\n> used, and making a copy of that data to be shared amongst all back ends.\n> \n> > It doesn't actually allocate RAM unless someone needs it, but it does\n> > lock the shared memory into a specific fixed location for all processes.\n> \n> I don't believe that the shared memory is not locked to a specific VM\n> address for every process. There's certainly no reason it needs to be.\n> \n> cjs\n> -- \n> Curt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n> Don't you know, in this new Dark Age, we're all light. --XTC\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 13 Feb 2003 13:49:06 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Changing the default configuration (was Re:" }, { "msg_contents": "\n> On Wed, 12 Feb 2003, Bruce Momjian wrote:\n> >\n> > And part of the reason is because some/most BSD's map the page tables\n> > into physical RAM (kernel space) rather than use some shared page table\n> > mechanism. This is good because it prevents the shared memory from\n> > being swapped out (performance disaster).\n\nWell, it'll only be swapped out if it's not being used...\n\nIn any case you can use madvise() to try to avoid that, but it doesn't seem\nlikely to be a problem since they would probably be the most heavily used\npages in postgres.\n\n-- \ngreg\n\n", "msg_date": "13 Feb 2003 15:47:08 -0500", "msg_from": "Greg Stark <gsstark@mit.edu>", "msg_from_op": false, "msg_subject": "Re: Changing the default configuration (was Re:" }, { "msg_contents": "Josh Berkus wrote:\n> > > Uh ... do we have a basis for recommending any particular sets of\n> > > parameters for these different scenarios? This could be a good idea\n> > > in the abstract, but I'm not sure I know enough to fill in the details.\n> \n> Sure. 
\n> Mostly-Read database, few users, good hardware, complex queries:\n> \t= High shared buffers and sort mem, high geqo and join collapse thresholds,\n> \t\tmoderate fsm settings, defaults for WAL.\n> Same as above with many users and simple queries (webserver) =\n> \tsame as above, except lower sort mem and higher connection limit\n> High-Transaction Database =\n> \tModerate shared buffers and sort mem, high FSM settings, increase WAL files \n> and buffers.\n> Workstation =\n> \tModerate to low shared buffers and sort mem, moderate FSM, defaults for WAL, \n> etc.\n> Low-Impact server = current defaults, more or less.\n\nOkay, but there should probably be one more, called \"Benchmark\". The\nreal problem is what values to use for it. :-)\n\n\n\n-- \nKevin Brown\t\t\t\t\t kevin@sysexperts.com\n", "msg_date": "Thu, 13 Feb 2003 19:26:05 -0800", "msg_from": "Kevin Brown <kevin@sysexperts.com>", "msg_from_op": false, "msg_subject": "Re: Changing the default configuration (was Re: " }, { "msg_contents": "Tatsuo,\n\n> Sigh. People always complain \"pgbench does not reliably producing\n> repeatable numbers\" or something then say \"that's because pgbench's\n> transaction has too much contention on the branches table\". So I added\n> -N option to pgbench which makes pgbench not to do any UPDATE to\n> the branches table. But still people continue to complian...\n\nHey, pg_bench is a good start on a Postgres performance tester, and it's much, \nmuch better than what there was before you came along ... which was nothing. \nThank you again for contributing it.\n\npg_bench is, however, only a start on a performance tester, and we'd need to \nbuild it up before we could use it as the basis of a PG tuner.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Thu, 13 Feb 2003 20:00:35 -0800", "msg_from": "Josh Berkus <josh@agliodbs.com>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Changing the default configuration" }, { "msg_contents": "Bruce Momjian wrote:\n> We could prevent the postmaster from starting unless they run pg_tune or\n> if they have modified postgresql.conf from the default. Of course,\n> that's pretty drastic.\n\nIf you're going to do that, then you may as well make the defaults\nsomething that will perform reasonably well under the widest\ncircumstances possible and let the postmaster fail when it can't\nacquire the resources those defaults demand.\n\nWhat I'd do is go ahead and make the defaults something reasonable,\nand if the postmaster can't allocate, say, enough shared memory pages,\nthen it should issue an error message saying not only that it wasn't\nable to allocate enough shared memory, but also which parameter to\nchange and (if it's not too much trouble to implement) what it can be\nchanged to in order to get past that part of the initialization (this\nmeans that the postmaster has to figure out how much shared memory it\ncan actually allocate, via a binary search allocate/free method). It\nshould also warn that by lowering the value, the resulting performance\nmay be much less than satisfactory, and that the alternative (to\nincrease SHMMAX, in this example) should be used if good performance\nis desired.\n\nThat way, someone whose only concern is to make it work will be able\nto do so without having to do a lot of experimentation, and will get\nplenty of warning that the result isn't likely to work very well.\n\nAnd we end up getting better benchmarks in the cases where people\ndon't have to touch the default config. 
:-)\n\n\n-- \nKevin Brown\t\t\t\t\t kevin@sysexperts.com\n", "msg_date": "Thu, 13 Feb 2003 21:06:00 -0800", "msg_from": "Kevin Brown <kevin@sysexperts.com>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Changing the default configuration (was Re: " }, { "msg_contents": "Tom Lane wrote:\n> If I thought that pgbench was representative of anything, or even\n> capable of reliably producing repeatable numbers, then I might subscribe\n> to results derived this way. But I have little or no confidence in\n> pgbench. Certainly I don't see how you'd use it to produce\n> recommendations for a range of application scenarios, when it's only\n> one very narrow scenario itself.\n\nSo let's say you were designing a tool to help someone get reasonable\nperformance out of a PostgreSQL installation. What scenarios would\nyou include in such a tool, and what information would you want out of\nit?\n\nYou don't have any real confidence in pgbench. Fair enough. What\n*would* you have confidence in?\n\n\n-- \nKevin Brown\t\t\t\t\t kevin@sysexperts.com\n", "msg_date": "Thu, 13 Feb 2003 21:31:24 -0800", "msg_from": "Kevin Brown <kevin@sysexperts.com>", "msg_from_op": false, "msg_subject": "Tuning scenarios (was Changing the default configuration)" }, { "msg_contents": "Kevin Brown <kevin@sysexperts.com> writes:\n> You don't have any real confidence in pgbench. Fair enough. What\n> *would* you have confidence in?\n\nMeasurements on your actual application?\n\nIn fairness to pgbench, most of its problems come from people running\nit at tiny scale factors, where it reduces to an exercise in how many\nangels can dance on the same pin (or, how many backends can contend to\nupdate the same row). And in that regime it runs into two or three\ndifferent Postgres limitations that might or might not have any\nrelevance to your real-world application --- dead-index-row references\nused to be the worst, but I think probably aren't anymore in 7.3.\nBut those same limitations cause the results to be unstable from run\nto run, which is why I don't have a lot of faith in reports of pgbench\nnumbers. You need to work quite hard to get reproducible numbers out\nof it.\n\nNo, I don't have a better benchmark in my pocket :-(\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 14 Feb 2003 01:04:34 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Tuning scenarios (was Changing the default configuration) " }, { "msg_contents": "OK,\n\nHere's a stab at some extra conf files. Feel free to shoot them down.\n\nIf we can come up with at least _some_ alternative files that we can put\nsomewhere for them to see when postgres is installed, then at least people\ncan see what variables will affect what...\n\nI didn't see the point of a 'workstation' option, the default is fine for\nthat.\n\nChris\n\n\n> -----Original Message-----\n> From: pgsql-hackers-owner@postgresql.org\n> [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Kevin Brown\n> Sent: Friday, 14 February 2003 11:26 AM\n> To: PostgresSQL Hackers Mailing List; pgsql-advocacy@postgresql.org\n> Subject: Re: [HACKERS] Changing the default configuration (was Re:\n> [pgsql-advocacy]\n>\n>\n> Josh Berkus wrote:\n> > > > Uh ... do we have a basis for recommending any particular sets of\n> > > > parameters for these different scenarios? 
This could be a good idea\n> > > > in the abstract, but I'm not sure I know enough to fill in\n> the details.\n> >\n> > Sure.\n> > Mostly-Read database, few users, good hardware, complex queries:\n> > \t= High shared buffers and sort mem, high geqo and join\n> collapse thresholds,\n> > \t\tmoderate fsm settings, defaults for WAL.\n> > Same as above with many users and simple queries (webserver) =\n> > \tsame as above, except lower sort mem and higher connection limit\n> > High-Transaction Database =\n> > \tModerate shared buffers and sort mem, high FSM settings,\n> increase WAL files\n> > and buffers.\n> > Workstation =\n> > \tModerate to low shared buffers and sort mem, moderate FSM,\n> defaults for WAL,\n> > etc.\n> > Low-Impact server = current defaults, more or less.\n>\n> Okay, but there should probably be one more, called \"Benchmark\". The\n> real problem is what values to use for it. :-)\n>\n>\n>\n> --\n> Kevin Brown\t\t\t\t\t kevin@sysexperts.com\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n>", "msg_date": "Fri, 14 Feb 2003 14:12:50 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Offering tuned config files" }, { "msg_contents": "Tom Lane wrote:\n> Kevin Brown <kevin@sysexperts.com> writes:\n> > You don't have any real confidence in pgbench. Fair enough. What\n> > *would* you have confidence in?\n> \n> Measurements on your actual application?\n\nThat unfortunately doesn't help us a whole lot in figuring out\ndefaults that will perform reasonably well under broad conditions,\nunless there's some way to determine a reasonably consistent pattern\n(or set of patterns) amongst a lot of those applications.\n\n> In fairness to pgbench, most of its problems come from people running\n> it at tiny scale factors, where it reduces to an exercise in how many\n> angels can dance on the same pin (or, how many backends can contend to\n> update the same row). \n\nThis isn't easy to fix, but I don't think it's impossible either.\nIt's probably sufficient to make the defaults dependent on information\ngathered about the system. I'd think total system memory would be the\nprimary thing to consider, since most database engines are pretty fast\nonce all the data and indexes are cached. :-)\n\n> And in that regime it runs into two or three different Postgres\n> limitations that might or might not have any relevance to your\n> real-world application --- dead-index-row references used to be the\n> worst, but I think probably aren't anymore in 7.3. But those same\n> limitations cause the results to be unstable from run to run, which\n> is why I don't have a lot of faith in reports of pgbench numbers.\n> You need to work quite hard to get reproducible numbers out of it.\n\nThe interesting question is whether that's more an indictment of how\nPG does things or how pg_bench does things. I imagine it's probably\ndifficult to get truly reproducible numbers out of pretty much any\nbenchmark coupled with pretty much any database engine. There are\nsimply far too many parameters to tweak on any but the simplest\ndatabase engines, and we haven't even started talking about tuning the\nOS around the database...\n\nAnd benchmarks (as well as real-world applications) will always run\ninto limitations of the database (locking mechanisms, IPC limits,\netc.). 
In fact, that's another useful purpose: to see where the\nlimits of the database are.\n\nDespite the limits, it's probably better to have a benchmark that only\ngives you an order of magnitude idea of what to expect than to not\nhave anything at all. And thus we're more or less right back where we\nstarted: what kinds of benchmarking tests should go into a benchmark\nfor the purposes of tuning a database system (PG in particular but the\nanswer might easily apply to others as well) so that it will perform\ndecently, if not optimally, under the most likely loads?\n\nI think we might be able to come up with some reasonable answers to\nthat, as long as we don't expect too much out of the resulting\nbenchmark. The right people to ask are probably the people who are\nactually running production databases.\n\nAnyone wanna chime in here with some opinions and perspectives?\n\n\n\n\n\n-- \nKevin Brown\t\t\t\t\t kevin@sysexperts.com\n", "msg_date": "Fri, 14 Feb 2003 03:48:43 -0800", "msg_from": "Kevin Brown <kevin@sysexperts.com>", "msg_from_op": false, "msg_subject": "Re: Tuning scenarios (was Changing the default configuration)" }, { "msg_contents": "On Fri, 14 Feb 2003 14:12:50 +0800, \"Christopher Kings-Lynne\"\n<chriskl@familyhealth.com.au> wrote:\n>Here's a stab at some extra conf files. Feel free to shoot them down.\n\nNo intent to shoot anything down, just random thoughts:\n\neffective_cache_size = 20000 (~ 160 MB) should be more adequate for a\n256 MB machine than the extremely conservative default of 1000. I\nadmit that the effect of this change is hard to benchmark. A way too\nlow (or too high) setting may lead the planner to wrong conclusions.\n\nMore parameters affecting the planner:\n\t#cpu_tuple_cost = 0.01\n\t#cpu_index_tuple_cost = 0.001\n\t#cpu_operator_cost = 0.0025\n\nAre these still good defaults? I have no hard facts, but ISTM that\nCPU speed is increasing more rapidly than disk access speed.\n\nIn postgresql.conf.sample-writeheavy you have:\n\tcommit_delay = 10000\n\nIs this still needed with \"ganged WAL writes\"? Tom?\n\nServus\n Manfred\n", "msg_date": "Fri, 14 Feb 2003 12:58:57 +0100", "msg_from": "Manfred Koizar <mkoi-pg@aon.at>", "msg_from_op": false, "msg_subject": "Re: Offering tuned config files" }, { "msg_contents": "Kevin Brown <kevin@sysexperts.com> writes:\n> Tom Lane wrote:\n>> ... But those same\n>> limitations cause the results to be unstable from run to run, which\n>> is why I don't have a lot of faith in reports of pgbench numbers.\n>> You need to work quite hard to get reproducible numbers out of it.\n\n> The interesting question is whether that's more an indictment of how\n> PG does things or how pg_bench does things.\n\nI didn't draw a conclusion on that ;-). I merely pointed out that the\nnumbers are unstable, and therefore not to be trusted without a great\ndeal of context ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 14 Feb 2003 10:05:44 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Tuning scenarios (was Changing the default configuration) " }, { "msg_contents": "Manfred Koizar <mkoi-pg@aon.at> writes:\n> In postgresql.conf.sample-writeheavy you have:\n> \tcommit_delay = 10000\n> Is this still needed with \"ganged WAL writes\"? Tom?\n\nI doubt that the current options for grouped commits are worth anything\nat the moment. 
Chris, do you have any evidence backing up using\ncommit_delay with 7.3?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 14 Feb 2003 10:07:45 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Offering tuned config files " }, { "msg_contents": "On Fri, Feb 14, 2003 at 03:48:43AM -0800, Kevin Brown wrote:\n> That unfortunately doesn't help us a whole lot in figuring out\n> defaults that will perform reasonably well under broad conditions,\n> unless there's some way to determine a reasonably consistent pattern\n> (or set of patterns) amongst a lot of those applications.\n\nWhen moving to a new DB or DB box, we always run a series of\nbenchmarks to make sure there aren't any surprises\nperformance-wise. Our database activity, and thus our benchmarks, are\nbroken up into roughly three different patterns:\n\n1- Transaction processing: small number of arbitrary small\n(single-row) selects intermixed with large numbers of small inserts\nand updates.\n\n2- Reporting: large reads joining 6-12 tables, usually involving\ncalculations and/or aggregation.\n\n3- Application (object retrieval): large numbers of arbitrary,\nsingle-row selects and updates, with smaller numbers of single row\ninserts.\n\nWe use our own application code to do our benchmarks, so they're not\ngeneral enough for your use, but it might be worthwhile to profile\neach of those different patterns, or allow DB admins to limit it to a\nrelevant subset. Other patterns i can think of include logging (large\nnumber of single row inserts, no updates, occasional large, simple\n(1-3 table) selects), mining (complicated selects over 10 or more\ntables), automated (small inserts/updates, with triggers cascading\neverywhere), etc.\n\nThe problem becomes dealing with the large amounts of data necessary\nto frame all of these patterns. An additional wrinkle is accomodating\nboth columns with well-distributed data and columns that are top-heavy\nor which only have one of a small number of values. Plus indexed vs\nunindexed columns.\n\nOr, somewhat orthogonally, you could allow pgbench to take a workload\nof different sql statements (with frequencies), and execute those\nstatements instead of the built-in transaction. Then it would be easy\nenough to contribute a library of pattern workloads, or for the DBA to\nwrite one herself.\n\nJust my two cents.\n\n-johnnnnnnnnnn\n", "msg_date": "Fri, 14 Feb 2003 10:33:14 -0600", "msg_from": "johnnnnnn <john@phaedrusdeinus.org>", "msg_from_op": false, "msg_subject": "Re: Tuning scenarios (was Changing the default configuration)" }, { "msg_contents": "Kevin,\n\n> I think we might be able to come up with some reasonable answers to\n> that, as long as we don't expect too much out of the resulting\n> benchmark. The right people to ask are probably the people who are\n> actually running production databases.\n>\n> Anyone wanna chime in here with some opinions and perspectives?\n\n<grin> I thought you'd *never* ask.\n\n(for background: I'm a consultant, and I administrate 6 postgresql databases \nfor 5 different clients)\n\nFirst off, one can't do a useful performance test on the sort of random data \nwhich can be generated by a script. The only really useful tests come from \ntesting on a copy of the user's own database(s), or on a real database of \nsome sort. \n\nFor new installations, we'd need to make a copy of a public domain or OSS \ndatabase as a part of our performance testing tool. 
This database would need \nat least 10 tables, some of them quite large, with FK relationships.\n\nSecond, there are five kinds of query tests relevant to performmance:\n\nA) Rapid-fire simple select queries.\nB) Large complex select queries, combining at least 2 of: aggregates, \nsub-selects, unions, unindexed text searches, and outer joins.\nC) Rapid-fire small (<10 rows) update/insert/delete queries.\nD) Large update queries (> 10,000 rows, possibly in more than one table)\nE) Long-running PL procedures.\n\nTesting on these five types of operations give an all-around test of server \nperformance. Fortunately, for many installations, not all tests are \nrelevant; in fact, for many, only 2 of the 5 above are relevant. For \nexample, for a PHP-Nuke installation, you'd only need to test on A and C. As \nanother example, an OLAP reporting server would only need to test on B.\n\nUnfortunately, for any real production server, you need to test all the \ndifferent operations concurrently at the appropriate multi-user level. \nMeaning that for one of my servers (a company-wide calendaring tool) I'd need \nto run tests on A, B, C, and E all simultaneously ... for that matter, A and \nC by themselves would require multiple connections.\n\nSo, once again, if we're talking about a testing database, we would need \ntwenty examples of A and C, ten of each of B and D, and at least 3 of E that \nwe could run. For real production databases, the user could supply \"pools\" \nof the 5 types of operations from their real query base.\n\nThirdly, we're up against the problem that there are several factors which can \nhave a much more profound effect on database performance than *any* amount of \ntuning postgresql.conf, even given a particular hardware platform. In my \nexperience, these factors include (in no particular order): \n\t1) Location of the pg_xlog for heavy-update databases.\n\t2) Location of other files on multi-disk systems\n\t3) Choice of RAID and controller for RAID systems.\n\t4) Filesystem choice and parameters\n\t5) VACUUM/FULL/ANALYZE/REINDEX frequency and strategy\n\t6) Database schema design\n\t7) Indexing\nThus the user would have to be somehow informed that they need to examine all \nof the above, possibly before running the tuning utility.\n\nTherefore, any tuning utility would have to:\n1) Inform the user about the other factors affecting performance and notify \nthem that they have to deal with these.\n\n2) Ask the user for all of the following data:\n\ta) How much RAM does your system have?\n\tb) How many concurrent users, typically?\n\tc) How often do you run VACUUM/FULL/ANALYZE?\n\td) Which of the Five Basic Operations does your database perform frequently?\n\t (this question could be reduced to \"what kind of database do you have?\"\n\t\tweb site database = A and C\n\t\treporting database = A and B\n\t\ttransaction processing = A, C, D and possibly E etc.)\n\te) For each of the 5 operations, how many times per minute is it run?\n\tf) Do you care about crash recovery? (if not, we can turn off fsync)\n\tg) (for users testing on their own database) Please make a copy of your \ndatabase, and provide 5 pools of operation examples.\n\n3) The the script would need to draw random operations from the pool, with \noperation type randomly drawn weighted by relative frequency for that type of \noperation. 
Each operation would be timed and scores kept per type of \noperation.\n\n4) Based on the above scores, the tuning tool could adjust the following \nparameters:\n\tFor A) shared_buffers\n\tFor B) shared_buffers and sort_mem (and Tom's new JOIN COLLAPSE settings)\n\tFor C) and D) wal settings and FSM settings\n\tFor E) shared_buffers, wal, and FSM\n\n5) Then run 3) again.\n\nThe problem is that the above process becomes insurmountably complex when we \nare testing for several types of operations simultaneously. For example, if \noperation D is slow, we might dramatically increase FSM, but that could take \naway memory needed for op. B, making op. B run slower. So if we're running \nconcurrently, we could could find the adjustments made for each type of \noperation contradictory, and the script would be more likely to end up in an \nendless loop than at a balance. If we don't run the different types of \noperations simultaneously, then it's not a good test; the optimal settings \nfor op. B, for example, may make ops. A and C slow down and vice-versa.\n\nSo we'd actually need to run an optimization for each type of desired \noperation seperately, and then compare settings, adjust to a balance \n(weighted according to the expected relative frequency), and re-test \nconcurrently. Aieee!\n\nPersonally, I think this is a project in and of itself. GBorg, anyone?\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Fri, 14 Feb 2003 09:36:02 -0800", "msg_from": "Josh Berkus <josh@agliodbs.com>", "msg_from_op": false, "msg_subject": "Re: Tuning scenarios (was Changing the default configuration)" }, { "msg_contents": "Folks,\n\n I forgot one question:\n\n> 2) Ask the user for all of the following data:\n> \ta) How much RAM does your system have?\n> \tb) How many concurrent users, typically?\n> \tc) How often do you run VACUUM/FULL/ANALYZE?\n> \td) Which of the Five Basic Operations does your database perform\n> frequently? (this question could be reduced to \"what kind of database do\n> you have?\" web site database = A and C\n> \t\treporting database = A and B\n> \t\ttransaction processing = A, C, D and possibly E etc.)\n> \te) For each of the 5 operations, how many times per minute is it run?\n> \tf) Do you care about crash recovery? (if not, we can turn off fsync)\n> \tg) (for users testing on their own database) Please make a copy of your\n> database, and provide 5 pools of operation examples.\nh) (for users using the test database) How large do you expect your main \ntables to be in your database? (then the test database would need to have \nits tables trimmed to match this estimate)\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Fri, 14 Feb 2003 10:10:00 -0800", "msg_from": "Josh Berkus <josh@agliodbs.com>", "msg_from_op": false, "msg_subject": "Re: Tuning scenarios (was Changing the default configuration)" }, { "msg_contents": "Bruce Momjian writes:\n> Tom Lane wrote:\n> > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > > So, my idea is to add a message at the end of initdb that states people\n> > > should run the pgtune script before running a production server.\n> >\n> > Do people read what initdb has to say?\n> >\n> > IIRC, the RPM install scripts hide initdb's output from the user\n> > entirely. I wouldn't put much faith in such a message as having any\n> > real effect on people...\n>\n> Yes, that is a problem. We could show something in the server logs if\n> pg_tune hasn't been run. 
Not sure what else we can do, but it would\n> give folks a one-stop thing to run to deal with performance\n> configuration.\n>\n> We could prevent the postmaster from starting unless they run pg_tune or\n> if they have modified postgresql.conf from the default. Of course,\n> that's pretty drastic.\n\nI don't think, that's drastic, if it's done in a user friendy way ;-):\n\nI work with Postgresql for half a year now (and like it very much), but I must \nadmit, that it takes time to understand the various tuning parameters (what \nis not surprising, because you need understand to a certain degree, what's \ngoing on under the hood). Now think of the following reasoning:\n\n- If the resouces of a system (like shared mem, max open files etc.) are not \nknown, it's pretty difficult to set good default values. That's why it is so \ndifficult to ship Postgresql with a postgresql.conf file which works nicely \non all systems on this planet.\n- On the other hand, if the resouces of a system _are_ known, I bet the people \non this list can set much better default values than any newbie or a static \nout-of-the-box postgresql.conf.\n\nThus the know how which is somehow in the heads of the gurus should be \ncondensed into a tune program which can be run on a system to detect the \nsystem resources and which dumps a reasonable postgresql.conf. Those defaults \nwon't be perfect (because the application is not known yet) but much better \nthan the newbie or out-of-the-box settings.\n\nIf the tune program detects, that the system resouces are so limited, that it \nmakes basically no sense to run Postgresql there, it tells the user what the \noptions are: Increase the system resources (and how to do it if possible) or \n\"downtune\" the \"least reasonable\" postgresql.conf file by hand. Given the \nresources of average systems today, the chances are much higher, that users \nleave Postgresql because \"it's slower than other databases\" than that they \nget upset, because it doesn't start right away the first time.\n\nNow how to make sure, the tune program gets run before postmaster starts the \nfirst time? Prevent postmaster from starting, unless the tune program was run \nand fail with a clear error message. The message should state, that the tune \nprogram needs to be run first, why it needs to be run first and the command \nline showing how to do that.\n\nIf I think back, I would have been happy to see such a message, you just copy \nand paste the command to your shell, run the command and a few seconds later \nyou can restart postmaster with resonable settings. And the big distributors \nhave their own scipts anyway, so they can put it just before initdb.\n\nRegards,\n\n\tTilo\n", "msg_date": "Fri, 14 Feb 2003 22:55:51 +0100", "msg_from": "Tilo Schwarz <mail@tilo-schwarz.de>", "msg_from_op": false, "msg_subject": "Re: Changing the default configuration (was Re: [pgsql-advocacy]" }, { "msg_contents": "No, not really - I can do some more testing with pgbench to see what\nhappens though...I'll do it on monday\n\nChris\n\nOn Fri, 14 Feb 2003, Tom Lane wrote:\n\n> Manfred Koizar <mkoi-pg@aon.at> writes:\n> > In postgresql.conf.sample-writeheavy you have:\n> > \tcommit_delay = 10000\n> > Is this still needed with \"ganged WAL writes\"? Tom?\n>\n> I doubt that the current options for grouped commits are worth anything\n> at the moment. 
Chris, do you have any evidence backing up using\n> commit_delay with 7.3?\n>\n> \t\t\tregards, tom lane\n>\n\n", "msg_date": "Sat, 15 Feb 2003 22:15:01 +0800 (WST)", "msg_from": "Christopher Kings-Lynne <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: Offering tuned config files " }, { "msg_contents": "\nPeople seemed to like the idea:\n\n\tAdd a script to ask system configuration questions and tune\n\tpostgresql.conf.\n\n\n---------------------------------------------------------------------------\n\nBruce Momjian wrote:\n> Peter Eisentraut wrote:\n> > Tom Lane writes:\n> > \n> > > Well, as I commented later in that mail, I feel that 1000 buffers is\n> > > a reasonable choice --- but I have to admit that I have no hard data\n> > > to back up that feeling.\n> > \n> > I know you like it in that range, and 4 or 8 MB of buffers by default\n> > should not be a problem. But personally I think if the optimal buffer\n> > size does not depend on both the physical RAM you want to dedicate to\n> > PostgreSQL and the nature and size of the database, then we have achieved\n> > a medium revolution in computer science. ;-)\n> \n> I have thought about this and I have an idea. Basically, increasing the\n> default values may get us closer, but it will discourage some to tweek,\n> and it will cause problems with some OS's that have small SysV params.\n> \n> So, my idea is to add a message at the end of initdb that states people\n> should run the pgtune script before running a production server.\n> \n> The pgtune script will basically allow us to query the user, test the OS\n> version and perhaps parameters, and modify postgresql.conf with\n> reasonable values. I think this is the only way to cleanly get folks\n> close to where they should be.\n> \n> For example, we can ask them how many rows and tables they will be\n> changing, on average, between VACUUM runs. That will allow us set the\n> FSM params. We can ask them about using 25% of their RAM for shared\n> buffers. If they have other major apps running on the server or have\n> small tables, we can make no changes. We can basically ask them\n> questions and use that info to set values.\n> \n> We can even ask about sort usage maybe and set sort memory. We can even\n> control checkpoint_segments this way if they say they will have high\n> database write activity and don't worry about disk space usage. We may\n> even be able to compute some random page cost estimate.\n> \n> Seems a script is going to be the best way to test values and assist\n> folks in making reasonable decisions about each parameter. Of course,\n> they can still edit the file, and we can ask them if they want\n> assistance to set each parameter or leave it alone.\n> \n> I would restrict the script to only deal with tuning values, and tell\n> people they still need to review that file for other useful parameters.\n> \n> Another option would be to make a big checklist or web page that asks\n> such questions and computes proper values, but it seems a script would\n> be easiest. We can even support '?' which would explain why the\n> question is being ask and how it affects the value.\n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 359-1001\n> + If your life is a hard drive, | 13 Roberts Road\n> + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 17 Feb 2003 21:49:31 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Changing the default configuration (was Re: " }, { "msg_contents": "Bruce Momjian writes:\n> People seemed to like the idea:\n>\n> \tAdd a script to ask system configuration questions and tune\n> \tpostgresql.conf.\n>\n\nDefinitely! But it should have some sort of \"This is my first database \ninstallation\"-mode, which means, that either only some very basic questions \n(or none at all) are asked, or each question is followed by a reasonable \ndefault value and a \"if unsure, press <ENTER>\" message. Otherwise the first \ntime user might get scared of all those questions...\n\nTilo\n", "msg_date": "Wed, 19 Feb 2003 23:22:44 +0100", "msg_from": "Tilo Schwarz <mail@tilo-schwarz.de>", "msg_from_op": false, "msg_subject": "Re: Changing the default configuration (was Re: [pgsql-advocacy]" }, { "msg_contents": "\nRobert,\n\n> > \t1) Location of the pg_xlog for heavy-update databases.\n> \n> I see you put this up pretty high on the list. Do you feel this is the\n> most important thing you can do? For example, if you had a two drive\n> installation, would you load the OS and main database files on 1 disk\n> and put the pg_xlog on the second disk above all other configurations? \n\nYes, actually. On machines with 2 IDE disks, I've found that this can make \nas much as 30% difference in speed of serial/large UPDATE statements.\n\n> Ideally I recommend 3 disks, one for os, one for data, one for xlog; but\n> if you only had 2 would the added speed benefits be worth the additional\n> recovery complexity (if you data/xlog are on the same disk, you have 1\n> point of failure, one disk for backing up)\n\nOn the other hand, with the xlog on a seperate disk, the xlog and the database \ndisks are unlikely to fail at the same time. So I don't personally see it as \na recovery problem, but a benefit.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Thu, 20 Feb 2003 14:33:02 -0800", "msg_from": "Josh Berkus <josh@agliodbs.com>", "msg_from_op": false, "msg_subject": "Re: Tuning scenarios (was Changing the default configuration)" }, { "msg_contents": "On Thu, 2003-02-20 at 17:33, Josh Berkus wrote:\n> \n> Robert,\n> \n> > > \t1) Location of the pg_xlog for heavy-update databases.\n> > \n> > I see you put this up pretty high on the list. Do you feel this is the\n> > most important thing you can do? For example, if you had a two drive\n> > installation, would you load the OS and main database files on 1 disk\n> > and put the pg_xlog on the second disk above all other configurations? \n> \n> Yes, actually. On machines with 2 IDE disks, I've found that this can make \n> as much as 30% difference in speed of serial/large UPDATE statements.\n\nDo you know how well those numbers hold up under scsi and/ or raid based\nsystem? 
(I'd assume anyone doing serious work would run scsi)\n\n> \n> > Ideally I recommend 3 disks, one for os, one for data, one for xlog; but\n> > if you only had 2 would the added speed benefits be worth the additional\n> > recovery complexity (if you data/xlog are on the same disk, you have 1\n> > point of failure, one disk for backing up)\n> \n> On the other hand, with the xlog on a seperate disk, the xlog and the database \n> disks are unlikely to fail at the same time. So I don't personally see it as \n> a recovery problem, but a benefit.\n> \n\nok (playing a bit of devil's advocate here), but you have two possible\npoints of failure, the data disk and the xlog disk. If either one goes,\nyour in trouble. OTOH if you put the OS disk on one drive and it goes,\nyour database and xlog are still safe on the other drive. \n\nRobert Treat\n\n", "msg_date": "20 Feb 2003 18:35:44 -0500", "msg_from": "Robert Treat <xzilla@users.sourceforge.net>", "msg_from_op": false, "msg_subject": "Re: Tuning scenarios (was Changing the default" }, { "msg_contents": "On Thu, Feb 20, 2003 at 06:35:44PM -0500, Robert Treat wrote:\n> Do you know how well those numbers hold up under scsi and/ or raid based\n> system? (I'd assume anyone doing serious work would run scsi)\n\nOn some Sun E450s we have used, the machines are unusable for any\nload with xlog on the same disk (in the case I'm remembering, these\nare older 5400 RPM drives). Moving the xlog changed us for\n<hazymemory>something like 10tps to something like 30tps</hazymemory>\nin one seat-of-the-pants case. Sorry I can't be more specific.\n\n> ok (playing a bit of devil's advocate here), but you have two possible\n> points of failure, the data disk and the xlog disk. If either one goes,\n> your in trouble. OTOH if you put the OS disk on one drive and it goes,\n> your database and xlog are still safe on the other drive. \n\nIf you're really worried about that, use RAID 1+0.\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<andrew@libertyrms.info> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Thu, 20 Feb 2003 18:49:28 -0500", "msg_from": "Andrew Sullivan <andrew@libertyrms.info>", "msg_from_op": false, "msg_subject": "Re: Tuning scenarios (was Changing the default" }, { "msg_contents": "On Tue, 2003-02-11 at 20:10, Tatsuo Ishii wrote:\n> Sigh. People always complain \"pgbench does not reliably producing\n> repeatable numbers\" or something then say \"that's because pgbench's\n> transaction has too much contention on the branches table\". So I added\n> -N option to pgbench which makes pgbench not to do any UPDATE to\n> the branches table. But still people continue to complian...\n\nWhat exactly does the -N option do? I see no mention of it in the\nREADME.pgbench, which might be part of reason people \"continue to\ncomplain\".\n\n", "msg_date": "29 Apr 2003 00:41:38 -0400", "msg_from": "\"Matthew T. O'Connor\" <matthew@zeut.net>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Changing the default configuration" } ]
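The interactive tuning script discussed in this thread was only a proposal at this point. As a rough illustration of the approach, here is a minimal Python sketch along the lines Bruce Momjian and Josh Berkus describe: ask a few questions, then print suggested settings. The prompts, the workload presets, and the specific heuristics (roughly 25% of RAM for shared buffers, half of RAM as the planner's cache estimate, more checkpoint segments for write-heavy use) are illustrative assumptions, not the output of any shipped tool; the 7.3-era parameter names (sort_mem, checkpoint_segments) are kept to match the thread.

    #!/usr/bin/env python3
    # Hypothetical sketch of the proposed "pgtune"-style script.
    # All heuristics below are illustrative assumptions drawn from the
    # discussion above, not official recommendations.

    BLOCK_KB = 8  # shared_buffers and effective_cache_size count 8 kB pages

    def ask_int(prompt, default):
        reply = input("%s [%s]: " % (prompt, default)).strip()
        return int(reply) if reply else default

    def main():
        ram_mb = ask_int("Physical RAM in MB", 256)
        conns = ask_int("Typical number of concurrent connections", 32)
        workload = input("Workload (web / reporting / oltp / mixed) [mixed]: ").strip() or "mixed"

        # roughly 25% of RAM for shared buffers, expressed in 8 kB pages
        shared_buffers = (ram_mb * 1024 // 4) // BLOCK_KB

        # per-backend sort memory (kB): generous for reporting, small otherwise
        if workload == "reporting":
            sort_mem_kb = max(4096, (ram_mb * 1024 // 4) // max(conns, 1))
        else:
            sort_mem_kb = 1024

        # assume the OS caches about half of RAM; only used by the planner
        effective_cache = (ram_mb * 1024 // 2) // BLOCK_KB

        checkpoint_segments = 16 if workload == "oltp" else 3

        print("# suggested settings -- review before pasting into postgresql.conf")
        print("shared_buffers = %d" % shared_buffers)
        print("sort_mem = %d" % sort_mem_kb)
        print("effective_cache_size = %d" % effective_cache)
        print("max_connections = %d" % conns)
        print("checkpoint_segments = %d" % checkpoint_segments)

    if __name__ == "__main__":
        main()

A real version would also have to ask about vacuum frequency and table churn to size the FSM parameters, and warn about the non-postgresql.conf factors (disk layout, schema, indexing) listed earlier in the thread.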
[ { "msg_contents": "Hi Everyone,\n\nI've just spent the last day and a half trying to benchmark our new database\ninstallation to find a good value for wal_buffers. The quick answer - there\nisn't, just leave it on the default of 8.\n\nThe numbers just swing up and down so much it's impossible to say that one\nsetting is better than another. I've attached an openoffice doc with my old\nshared_buffers tests plus the wal_buffers tests. The wal results are a bit\ndeceptive as the results I've included are really what I consider the\n'average' results. Just occasionally, I'd get a spike that I could never\nrepeat...\n\nEven if you look at the attached charts and you think that 128 buffers are\nbetter than 8, think again - there's nothing in it. Next time I run that\nbenchmark it could be the same, lower or higher. And the difference between\nthe worst and best results is less than 3 TPS - ie. nothing.\n\nOne proof that has come out of this is that wal_buffers does not affect\nSELECT only performance in any way. So, for websites where the\nselect/update ratio is very high, wal_buffers is almost an irrelevant\noptimisation.\n\nEven massively heavy sites where you are getting write transactions\ncontinuously by 64 simultaneous people, I was unable to prove that any\nsetting other than the default helped. In this situation, probably the\ncommit_delay and commit_siblings variables will give you the best gains.\n\nI'm not sure what I could test next. Does FreeBSD support anything other\nthan fsync? eg. fdatasync, etc. I can't see it in the man pages...\n\nChris\n\nps. I don't think the attachments are too large, but if they annoy anyone,\ntell me. Also, I've cross posted to make sure people who read my previous\nbenchmark, see this one also.", "msg_date": "Thu, 13 Feb 2003 13:16:16 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "More benchmarking of wal_buffers" }, { "msg_contents": "Christopher Kings-Lynne wrote:\n> I'm not sure what I could test next. Does FreeBSD support anything other\n> than fsync? eg. fdatasync, etc. I can't see it in the man pages...\n\nYou are already getting the best default for your OS. It say 'fsync'\nfor default, but the comment says the default is OS-specific. The only\nthing you can compare there is open_fdatasync vs fdatasync.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 13 Feb 2003 00:30:07 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] More benchmarking of wal_buffers" }, { "msg_contents": "On Thu, 2003-02-13 at 00:16, Christopher Kings-Lynne wrote:\n> Even if you look at the attached charts and you think that 128 buffers are\n> better than 8, think again - there's nothing in it. Next time I run that\n> benchmark it could be the same, lower or higher. And the difference between\n> the worst and best results is less than 3 TPS - ie. nothing.\n\nOne could conclude that this a result of the irrelevancy of wal_buffers;\nanother possible conclusion is that the testing tool (pgbench) is not a\nparticularly good database benchmark, as it tends to be very difficult\nto use it to reproduceable results. 
Alternatively, it's possible that\nthe limited set of test-cases you've used doesn't happen to include any\ncircumstances in which wal_buffers is useful.\n\nWe definitely need some better benchmarking tools for PostgreSQL (and\nno, OSDB does not cut it, IMHO). I've been thinking of taking a look at\nimproving this, but I can't promise I'll get the time or inclination to\nactually do anything about it :-)\n\nCheers,\n\nNeil\n-- \nNeil Conway <neilc@samurai.com> || PGP Key ID: DB3C29FC\n\n\n\n", "msg_date": "13 Feb 2003 01:17:52 -0500", "msg_from": "Neil Conway <neilc@samurai.com>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] More benchmarking of wal_buffers" }, { "msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> I've just spent the last day and a half trying to benchmark our new database\n> installation to find a good value for wal_buffers. The quick answer - there\n> isn't, just leave it on the default of 8.\n\nI don't think this is based on a useful test for wal_buffers. The\nwal_buffers setting only has to be large enough for the maximum amount\nof WAL log data that your system emits between commits, because a commit\n(from anyone) is going to flush the WAL data to disk (for everyone).\nSo a benchmark based on short transactions is just not going to show\nany benefit to increasing the setting.\n\nBenchmarking, say, the speed of massive COPY IN operations might show\nsome advantage to larger wal_buffers. Although I'm not real sure that\nit'll make any difference for any single-backend test. It's really just\nthe case where you have concurrent transactions that all make lots of\nupdates before committing that's likely to show a win.\n\n> One proof that has come out of this is that wal_buffers does not affect\n> SELECT only performance in any way.\n\nCoulda told you that without testing ;-). Read-only transactions emit\nno WAL entries.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 13 Feb 2003 21:29:17 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] More benchmarking of wal_buffers " }, { "msg_contents": "> I don't think this is based on a useful test for wal_buffers. The\n> wal_buffers setting only has to be large enough for the maximum amount\n> of WAL log data that your system emits between commits, because a commit\n> (from anyone) is going to flush the WAL data to disk (for everyone).\n> So a benchmark based on short transactions is just not going to show\n> any benefit to increasing the setting.\n\nYes, I guess the TPC-B test does many, very short transactions. Each\ntransaction bascially comprises a single update, so I guess it wouldn't\nreally test it.\n\n> > One proof that has come out of this is that wal_buffers does not affect\n> > SELECT only performance in any way.\n>\n> Coulda told you that without testing ;-). Read-only transactions emit\n> no WAL entries.\n\nI knew that as well, that's why I said \"proof\" ;)\n\nChris\n\n", "msg_date": "Fri, 14 Feb 2003 11:04:20 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] More benchmarking of wal_buffers " }, { "msg_contents": "> I don't think this is based on a useful test for wal_buffers. 
The\n> wal_buffers setting only has to be large enough for the maximum amount\n> of WAL log data that your system emits between commits, because a commit\n> (from anyone) is going to flush the WAL data to disk (for everyone).\n> So a benchmark based on short transactions is just not going to show\n> any benefit to increasing the setting.\n\nHere's a question then - what is the _drawback_ to having 1024 wal_buffers\nas opposed to 8?\n\nChris\n\n", "msg_date": "Fri, 14 Feb 2003 11:05:04 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] More benchmarking of wal_buffers " }, { "msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> Here's a question then - what is the _drawback_ to having 1024 wal_buffers\n> as opposed to 8?\n\nWaste of RAM? You'd be better off leaving that 8 meg available for use\nas general-purpose buffers ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 13 Feb 2003 22:10:35 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] More benchmarking of wal_buffers " }, { "msg_contents": "> \"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> > Here's a question then - what is the _drawback_ to having 1024\n> wal_buffers\n> > as opposed to 8?\n>\n> Waste of RAM? You'd be better off leaving that 8 meg available for use\n> as general-purpose buffers ...\n\nWhat I mean is say you have an enterprise server doing heaps of transactions\nwith lots of work. If you have scads of RAM, could you just shove up\nwal_buffers really high and assume it will improve performance?\n\nChris\n\n", "msg_date": "Fri, 14 Feb 2003 11:16:04 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] More benchmarking of wal_buffers " }, { "msg_contents": "Tom Lane wrote:\n> \"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> > I've just spent the last day and a half trying to benchmark our new database\n> > installation to find a good value for wal_buffers. The quick answer - there\n> > isn't, just leave it on the default of 8.\n> \n> I don't think this is based on a useful test for wal_buffers. The\n> wal_buffers setting only has to be large enough for the maximum amount\n> of WAL log data that your system emits between commits, because a commit\n> (from anyone) is going to flush the WAL data to disk (for everyone).\n\nWhat happens when the only transaction running emits more WAL log data\nthan wal_buffers can handle? A flush happens when the WAL buffers\nfill up (that's what I'd expect)? Didn't find much in the\ndocumentation about it...\n\n\n-- \nKevin Brown\t\t\t\t\t kevin@sysexperts.com\n", "msg_date": "Thu, 13 Feb 2003 19:46:46 -0800", "msg_from": "Kevin Brown <kevin@sysexperts.com>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] [HACKERS] More benchmarking of wal_buffers" }, { "msg_contents": "Kevin Brown <kevin@sysexperts.com> writes:\n> What happens when the only transaction running emits more WAL log data\n> than wal_buffers can handle? A flush happens when the WAL buffers\n> fill up (that's what I'd expect)? Didn't find much in the\n> documentation about it...\n\nA write, not a flush (ie, we don't force an fsync). Also, I think it\nwrites only a few blocks, not all the available data. 
Don't recall the\ndetails on that.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 13 Feb 2003 23:14:49 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] [HACKERS] More benchmarking of wal_buffers " }, { "msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> What I mean is say you have an enterprise server doing heaps of transactions\n> with lots of work. If you have scads of RAM, could you just shove up\n> wal_buffers really high and assume it will improve performance?\n\nThere is no such thing as infinite RAM (or if there is, you paid *way*\ntoo much for your database server). My feeling is that it's a bad\nidea to put more than you absolutely have to into single-use buffers.\nMulti-purpose buffers are usually a better use of RAM.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 13 Feb 2003 23:23:44 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] More benchmarking of wal_buffers " }, { "msg_contents": "On Thu, 13 Feb 2003, Tom Lane wrote:\n\n> \"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> > What I mean is say you have an enterprise server doing heaps of transactions\n> > with lots of work. If you have scads of RAM, could you just shove up\n> > wal_buffers really high and assume it will improve performance?\n>\n> There is no such thing as infinite RAM (or if there is, you paid *way*\n> too much for your database server). My feeling is that it's a bad\n> idea to put more than you absolutely have to into single-use buffers.\n> Multi-purpose buffers are usually a better use of RAM.\n\nWell, yes, but he was talking about 8 MB of WAL buffers. On a machine\nwith, say, 2 GB of RAM, that's an insignificant amount (0.4% of your\nmemory), and so I would say that it basically can't hurt at all. If your\nlog is on the same disk as your data, the larger writes when doing a big\ntransaction, such as a copy, might be a noticable win, in fact.\n\n(I was about to say that it would seem odd that someone would spend that\nmuch on RAM and not splurge on an extra pair of disks to separate the\nWAL log, but then I realized that we're only talking about $300 or so\nworth of RAM....)\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n", "msg_date": "Sat, 15 Feb 2003 17:36:39 +0900 (JST)", "msg_from": "Curt Sampson <cjs@cynic.net>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] More benchmarking of wal_buffers " } ]
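Tom Lane's point above -- a commit from any backend flushes WAL for everyone, so wal_buffers only helps when a lot of WAL accumulates between commits -- can be seen with a crude timing script instead of pgbench's one-update transactions. The following is a minimal Python sketch under assumed conditions: psycopg2 and a reasonably current server, a throwaway wal_demo table and a placeholder DSN. It contrasts committing after every row with committing once for the whole batch, which is the difference that dominates short-transaction benchmarks regardless of the wal_buffers setting.

    #!/usr/bin/env python3
    # Rough timing sketch: per-row commits force a WAL flush on every row,
    # while a single big transaction flushes once. Table name and DSN are
    # placeholders; psycopg2 is assumed.

    import time
    import psycopg2

    DSN = "dbname=test"   # placeholder connection string
    ROWS = 5000

    def setup(conn):
        with conn.cursor() as cur:
            cur.execute("DROP TABLE IF EXISTS wal_demo")
            cur.execute("CREATE TABLE wal_demo (id integer, note text)")
        conn.commit()

    def one_commit_per_row(conn):
        start = time.time()
        with conn.cursor() as cur:
            for i in range(ROWS):
                cur.execute("INSERT INTO wal_demo VALUES (%s, %s)", (i, "x" * 100))
                conn.commit()          # forces a WAL flush for every row
        return time.time() - start

    def single_transaction(conn):
        start = time.time()
        with conn.cursor() as cur:
            for i in range(ROWS):
                cur.execute("INSERT INTO wal_demo VALUES (%s, %s)", (i, "x" * 100))
        conn.commit()                  # one WAL flush for the whole batch
        return time.time() - start

    if __name__ == "__main__":
        conn = psycopg2.connect(DSN)
        setup(conn)
        print("commit per row:     %.2f s" % one_commit_per_row(conn))
        print("single transaction: %.2f s" % single_transaction(conn))
        conn.close()

Only the second pattern accumulates enough WAL between commits for wal_buffers to matter at all, which is consistent with the benchmark results reported at the start of this thread.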
[ { "msg_contents": "Darryl A. J. Staflund wrote:\n\n>Hi Everyone,\n>\n>I am developing a JBoss 3.0.x application using PostgreSQL 7.2.3 as a\n>back-end database and Solaris 2.8 (SPARC) as my deployment OS. In this\n>application, I am using an EJB technology called Container Managed\n>Persistence (CMP 2.0) to manage data persistence for me. Instead of\n>writing and submitting my own queries to the PostgreSQL database, JBoss is\n>doing this for me.\n>\n>Although this works well for the most part, the insertion of many records\n>within the context of a single transaction can take a very long time to\n>complete. Inserting 800 records, for instance, can take upward of a\n>minute to finish - even though the database is fully indexed and records\n>consist of no more than a string field and several foreign key integer\n>values.\n>\n>I think I've tracked the problem down to the way in which PostgreSQL\n>manages transactions. Although on the Java side of things I perform all\n>my insertions and updates within the context of a single transaction,\n>PostgreSQL seems to treat each individual query as a separate transaction\n>and this is slowing down performance immensely. Here is a sample of my\n>PostgreSQL logging output:\n> \n>\n[...]\n\n\nI think the problem isn't PostgreSQL. This is the JBoss-CMP. Take a look \non EJB Benchmark from urbancode \n(http://www.urbancode.com/projects/ejbbenchmark/default.jsp).\n\n\nBest Regards,\nRafal\n\n", "msg_date": "Thu, 13 Feb 2003 10:22:20 +0100", "msg_from": "Rafal Kedziorski <rafcio@polonium.de>", "msg_from_op": true, "msg_subject": "Re: JBoss CMP Performance Problems with PostgreSQL 7.2.3" }, { "msg_contents": "\n\nRafal Kedziorski wrote:\n\n> Darryl A. J. Staflund wrote:\n>\n> >Hi Everyone,\n> >\n> >I am developing a JBoss 3.0.x application using PostgreSQL 7.2.3 as a\n> >back-end database and Solaris 2.8 (SPARC) as my deployment OS. In this\n> >application, I am using an EJB technology called Container Managed\n> >Persistence (CMP 2.0) to manage data persistence for me. Instead of\n> >writing and submitting my own queries to the PostgreSQL database, JBoss is\n> >doing this for me.\n> >\n> >Although this works well for the most part, the insertion of many records\n> >within the context of a single transaction can take a very long time to\n> >complete. Inserting 800 records, for instance, can take upward of a\n> >minute to finish - even though the database is fully indexed and records\n> >consist of no more than a string field and several foreign key integer\n> >values.\n> >\n> >I think I've tracked the problem down to the way in which PostgreSQL\n> >manages transactions. Although on the Java side of things I perform all\n> >my insertions and updates within the context of a single transaction,\n> >PostgreSQL seems to treat each individual query as a separate transaction\n> >and this is slowing down performance immensely. Here is a sample of my\n> >PostgreSQL logging output:\n> >\n> >\n> [...]\n>\n> I think the problem isn't PostgreSQL. This is the JBoss-CMP. 
Take a look\n> on EJB Benchmark from urbancode\n> (http://www.urbancode.com/projects/ejbbenchmark/default.jsp).\n>\n> Best Regards,\n> Rafal\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n\n I think the problem is not in the jboss.\nI am using pg + jboss from a long time and if you know how to wirk with it the\ncombination is excelent.\nThe main problem in this case is CMP and also EntityBeans.\nBy CMP jboss will try to insert this 800 records separate.\nIn this case pg will be slow.\n\nI never got good results by using EB and CMP.\nIf you will to have working produkt use BMP.\nregards,\nivan.\n\n", "msg_date": "Thu, 13 Feb 2003 11:01:31 +0100", "msg_from": "pginfo <pginfo@t1.unisoftbg.com>", "msg_from_op": false, "msg_subject": "Re: JBoss CMP Performance Problems with PostgreSQL 7.2.3" }, { "msg_contents": "You may want to look at this tool as well:\n\nhttp://hibernate.bluemars.net/1.html\n\nOn Thursday 13 February 2003 3:01 am, pginfo wrote:\n> Rafal Kedziorski wrote:\n> > Darryl A. J. Staflund wrote:\n> > >Hi Everyone,\n> > >\n> > >I am developing a JBoss 3.0.x application using PostgreSQL 7.2.3 as a\n> > >back-end database and Solaris 2.8 (SPARC) as my deployment OS. In this\n> > >application, I am using an EJB technology called Container Managed\n> > >Persistence (CMP 2.0) to manage data persistence for me. Instead of\n> > >writing and submitting my own queries to the PostgreSQL database, JBoss\n> > > is doing this for me.\n> > >\n> > >Although this works well for the most part, the insertion of many\n> > > records within the context of a single transaction can take a very long\n> > > time to complete. Inserting 800 records, for instance, can take upward\n> > > of a minute to finish - even though the database is fully indexed and\n> > > records consist of no more than a string field and several foreign key\n> > > integer values.\n> > >\n> > >I think I've tracked the problem down to the way in which PostgreSQL\n> > >manages transactions. Although on the Java side of things I perform all\n> > >my insertions and updates within the context of a single transaction,\n> > >PostgreSQL seems to treat each individual query as a separate\n> > > transaction and this is slowing down performance immensely. Here is a\n> > > sample of my PostgreSQL logging output:\n> >\n> > [...]\n> >\n> > I think the problem isn't PostgreSQL. This is the JBoss-CMP. 
Take a look\n> > on EJB Benchmark from urbancode\n> > (http://www.urbancode.com/projects/ejbbenchmark/default.jsp).\n> >\n> > Best Regards,\n> > Rafal\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n>\n> I think the problem is not in the jboss.\n> I am using pg + jboss from a long time and if you know how to wirk with it\n> the combination is excelent.\n> The main problem in this case is CMP and also EntityBeans.\n> By CMP jboss will try to insert this 800 records separate.\n> In this case pg will be slow.\n>\n> I never got good results by using EB and CMP.\n> If you will to have working produkt use BMP.\n> regards,\n> ivan.\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/users-lounge/docs/faq.html\n\n-- \nNick Pavlica\nEchoStar Communications\nCAS-Engineering\n(307)633-5237\n", "msg_date": "Thu, 13 Feb 2003 16:19:12 -0700", "msg_from": "Nick Pavlica <nick.pavlica@echostar.com>", "msg_from_op": false, "msg_subject": "Re: JBoss CMP Performance Problems with PostgreSQL 7.2.3" } ]
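Purely to illustrate the point made in this thread, here is the SQL-level difference between committing each container-generated insert separately and batching them in one explicit transaction. The table and column names below are invented for the example, not taken from the application discussed above:

  BEGIN;
  INSERT INTO case_record (case_id, note) VALUES (1, 'first row');
  INSERT INTO case_record (case_id, note) VALUES (1, 'second row');
  -- ... the remaining several hundred single-row INSERTs ...
  COMMIT;

With the explicit BEGIN/COMMIT the backend commits (and flushes WAL) once for the whole batch instead of once per statement, which removes a per-row commit from the picture.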
[ { "msg_contents": "hello all,\n\nI am still fiddling around with my \"big\" database.\n\nSystem:\nRAM: 2GB\nCPU: 1,6 MHz (cache: 256 Kb)\nsingle disc: 120 GB :-(\n\nI have a query that joins to relatively large tables (10 - 15 Mio rows), \nor part of these tables (explain analyze: rows=46849) respectively.\n\nlong story short:\n\nallover cost estimated in pages by explain is:\ncost=6926.59..6926.60\n\nactual time is from explain analyze is:\nactual time=275461.91..275462.44\n\nmost of it is consumed by a nested loop (surprise!)\nthis is the respective output:\n\nSort Key: disease.disease_name, disease_occurrences.sentence_id\n-> Nested Loop (cost=0.00..6922.38 rows=98 width=64) (actual \ntime=61.49..275047.46 rows=18910 loops=1)\n -> Nested Loop (cost=0.00..6333.23 rows=98 width=28) (actual \ntime=61.42..274313.87 rows=18910 loops=1)\n -> Nested Loop (cost=0.00..5894.04 rows=64 width=16) (actual \ntime=32.00..120617.26 rows=46849 loops=1)\n\nI tried to tweak the conf settings, but I think I already reached quite \na good value concerning shared buffers and sort mem. the database is \nvacuum full analyzed. indexes seem fine.\n\ncould one of you smart guys point me into a direction I might not have \nconsidered? - I know that the hardware is the minimum. nevertheless - if \nyou have suggestions what exactely to add to the hardware to boost the \ndatabase up (more RAM or more discs - even a RAID) - this would be a \ngood argument for my boss.\n\nThank you a lot\nChantal\n\n", "msg_date": "Thu, 13 Feb 2003 19:06:03 +0100", "msg_from": "Chantal Ackermann <chantal.ackermann@biomax.de>", "msg_from_op": true, "msg_subject": "cost and actual time" }, { "msg_contents": "Chantal, \n\n> Sort Key: disease.disease_name, disease_occurrences.sentence_id\n> -> Nested Loop (cost=0.00..6922.38 rows=98 width=64) (actual\n> time=61.49..275047.46 rows=18910 loops=1)\n> -> Nested Loop (cost=0.00..6333.23 rows=98 width=28) (actual \n> time=61.42..274313.87 rows=18910 loops=1)\n> -> Nested Loop (cost=0.00..5894.04 rows=64 width=16) (actual \n> time=32.00..120617.26 rows=46849 loops=1)\n> \n> I tried to tweak the conf settings, but I think I already reached\n> quite a good value concerning shared buffers and sort mem. the\n> database is vacuum full analyzed. indexes seem fine.\n\nYou *sure* that you've vacuum analyzed recently? The planner above is\nchoosing a bad plan because its row estimates are way off ... if the\nsubquery was actually returning 98 rows, the plan above would make\nsense ... but with 18,000 rows being returned, a Nested Loop is\nsuicidal.\n\nPerhaps you could post the full text of the query? If some of your\ncriteria are coming from volatile functions, then that could explain\nwhy the planner is so far off ...\n\n-Josh Berkus\n", "msg_date": "Thu, 13 Feb 2003 12:08:22 -0800", "msg_from": "\"Josh Berkus\" <josh@agliodbs.com>", "msg_from_op": false, "msg_subject": "Re: cost and actual time" }, { "msg_contents": "hello Josh,\n\nthank you for your fast answer. (I had some days off.)\n\nThis posting is quite long, I apologize. But I wanted to provide enough \ninformation to outline the problem.\n\nI did some vacuums analyze on all 4 tables concerned (gene, disease, \ngene_occurrences, disease_occurrences) to be sure the planner is up to \ndate - but that did not minimize the difference between the estimation \nof resulting rows and the actual result.\n\nI changed the settings for default_statistics_target to 1000 (default \n10). 
The estimation goes up to 102 rows which is little more than \nbefore, and still far away from the actual result.\n\nThe effective_cache_size is at 80000.\n\nTo be sure I didn't change it to be worse, I checked again with the \ndefault_statistics_target set to 10 and a cache size of 1000 (ran vacuum \nafterwards). the estimation is at 92 rows. so there's not a really big \ndifference.\n\nI wonder, if I can set some geqo or planner settings in the \npostgresql.conf file to make the planner estimate better? The database \nis exclusively for reading, so it's ok if the time for analyzing the \ntables increases.\n\nThe query I am testing with is:\n\nEXPLAIN ANALYZE\nSELECT tmp.disease_name, count(tmp.disease_name) AS cnt\n\tFROM (SELECT DISTINCT disease.disease_name, \ndisease_occurrences.sentence_id FROM gene, disease, gene_occurrences, \ndisease_occurrences\n\t\tWHERE gene.gene_name='igg'\n\t\tAND gene_occurrences.sentence_id=disease_occurrences.sentence_id\n\t\tAND gene.gene_id=gene_occurrences.gene_id\n\t\tAND disease.disease_id=disease_occurrences.disease_id) AS tmp\nGROUP BY tmp.disease_name\nORDER BY cnt DESC;\n\nRow counts are:\n'gene': 164657\n'disease': 129923\n'gene_occurrences': 10484141\n'disease_occurrences': 15079045\n\nthe gene_id for 'igg' occurres 110637 times in gene_occurrences, it is \nthe most frequent.\n\nthe index on gene_occurences(sentence_id) and \ndisease_occurrences(sentence_id) is clustered.\n\nI have an alternative query which I am testing to see whether it is \nbetter than the first one:\n\n\nexplain analyze SELECT disease.disease_name, count(disease.disease_name) \nAS cnt FROM\n ((SELECT gene_occurrences.sentence_id FROM gene_occurrences\n WHERE gene_occurrences.gene_id=(SELECT gene.gene_id FROM gene \nWHERE gene.gene_name='igg')) AS tmp\n JOIN disease_occurrences USING (sentence_id)) as tmp2\n NATURAL JOIN disease\nGROUP BY disease.disease_name\nORDER BY cnt DESC;\n\nthe cost estimated by the planner is much higher, thus I thought this \nquery is worse than the first. 
However - maybe it's just more accurate?\n\nthis is its explain-output (default settings for \ndefault_statistics_target while 'vacuum analyze'):\n\n---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=126690.02..126691.47 rows=581 width=57) (actual \ntime=8066.05..8067.05 rows=3364 loops=1)\n Sort Key: count(disease.disease_name)\n InitPlan\n -> Index Scan using gene_uni on gene (cost=0.00..5.26 rows=1 \nwidth=4) (actual time=0.19..0.20 rows=1 loops=1)\n Index Cond: (gene_name = 'igg'::text)\n -> Aggregate (cost=126619.79..126663.35 rows=581 width=57) (actual \ntime=7894.33..8055.43 rows=3364 loops=1)\n -> Group (cost=126619.79..126648.83 rows=5809 width=57) \n(actual time=7894.31..8020.00 rows=30513 loops=1)\n -> Sort (cost=126619.79..126634.31 rows=5809 width=57) \n(actual time=7894.30..7910.08 rows=30513 loops=1)\n Sort Key: disease.disease_name\n -> Merge Join (cost=119314.93..126256.64 \nrows=5809 width=57) (actual time=6723.92..7732.94 rows=30513 loops=1)\n Merge Cond: (\"outer\".disease_id = \n\"inner\".disease_id)\n -> Index Scan using disease_pkey on disease \n (cost=0.00..6519.14 rows=129923 width=37) (actual time=0.04..742.20 \nrows=129872 loops=1)\n -> Sort (cost=119314.93..119329.45 \nrows=5809 width=20) (actual time=6723.74..6740.24 rows=30513 loops=1)\n Sort Key: disease_occurrences.disease_id\n -> Nested Loop (cost=0.00..118951.78 \nrows=5809 width=20) (actual time=1.19..6558.67 rows=30513 loops=1)\n -> Index Scan using \ngene_occ_id_i on gene_occurrences (cost=0.00..15700.31 rows=4039 \nwidth=8) (actual time=0.36..1404.64 rows=110637 loops=1)\n Index Cond: (gene_id = $0)\n -> Index Scan using \ndisease_occ_uni on disease_occurrences (cost=0.00..25.47 rows=8 \nwidth=12) (actual time=0.04..0.04 rows=0 loops=110637)\n Index Cond: \n(\"outer\".sentence_id = disease_occurrences.sentence_id)\n Total runtime: 8086.87 msec\n(20 rows)\n\n\nstrangely, the estimation is far worse after running vacuum analyze \nagain with a default_statistics_target of 1000:\n\n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=12521.37..12521.61 rows=96 width=57) (actual \ntime=124967.47..124968.47 rows=3364 loops=1)\n Sort Key: count(disease.disease_name)\n InitPlan\n -> Index Scan using gene_uni on gene (cost=0.00..5.26 rows=1 \nwidth=4) (actual time=20.27..20.28 rows=1 loops=1)\n Index Cond: (gene_name = 'igg'::text)\n -> Aggregate (cost=12510.99..12518.20 rows=96 width=57) (actual \ntime=124788.71..124956.77 rows=3364 loops=1)\n -> Group (cost=12510.99..12515.80 rows=961 width=57) (actual \ntime=124788.68..124920.10 rows=30513 loops=1)\n -> Sort (cost=12510.99..12513.39 rows=961 width=57) \n(actual time=124788.66..124804.74 rows=30513 loops=1)\n Sort Key: disease.disease_name\n -> Nested Loop (cost=0.00..12463.35 rows=961 \nwidth=57) (actual time=164.11..124529.76 rows=30513 loops=1)\n -> Nested Loop (cost=0.00..6671.06 \nrows=961 width=20) (actual time=148.34..120295.52 rows=30513 loops=1)\n -> Index Scan using gene_occ_id_i on \ngene_occurrences (cost=0.00..2407.09 rows=602 width=8) (actual \ntime=20.63..1613.99 rows=110637 loops=1)\n Index Cond: (gene_id = $0)\n -> Index Scan using disease_occ_uni \non disease_occurrences (cost=0.00..7.06 rows=2 width=12) (actual \ntime=1.07..1.07 rows=0 loops=110637)\n Index Cond: 
(\"outer\".sentence_id \n= disease_occurrences.sentence_id)\n -> Index Scan using disease_pkey on disease \n (cost=0.00..6.01 rows=1 width=37) (actual time=0.13..0.13 rows=1 \nloops=30513)\n Index Cond: (\"outer\".disease_id = \ndisease.disease_id)\n Total runtime: 124981.15 msec\n(18 rows)\n\nThere again is the estimation of 961 rows and the decision to choose a \nNested Loop while the actual result includes 30513 rows.\n\n\nThank you for taking the time to read my postings!\nChantal\n\n\n\nJosh Berkus wrote:\n> Chantal, \n> \n> \n>>Sort Key: disease.disease_name, disease_occurrences.sentence_id\n>>-> Nested Loop (cost=0.00..6922.38 rows=98 width=64) (actual\n>>time=61.49..275047.46 rows=18910 loops=1)\n>> -> Nested Loop (cost=0.00..6333.23 rows=98 width=28) (actual \n>>time=61.42..274313.87 rows=18910 loops=1)\n>> -> Nested Loop (cost=0.00..5894.04 rows=64 width=16) (actual \n>>time=32.00..120617.26 rows=46849 loops=1)\n>>\n>>I tried to tweak the conf settings, but I think I already reached\n>>quite a good value concerning shared buffers and sort mem. the\n>>database is vacuum full analyzed. indexes seem fine.\n> \n> \n> You *sure* that you've vacuum analyzed recently? The planner above is\n> choosing a bad plan because its row estimates are way off ... if the\n> subquery was actually returning 98 rows, the plan above would make\n> sense ... but with 18,000 rows being returned, a Nested Loop is\n> suicidal.\n> \n> Perhaps you could post the full text of the query? If some of your\n> criteria are coming from volatile functions, then that could explain\n> why the planner is so far off ...\n> \n> -Josh Berkus\n> \n> \n\n", "msg_date": "Mon, 17 Feb 2003 11:51:56 +0100", "msg_from": "Chantal Ackermann <chantal.ackermann@biomax.de>", "msg_from_op": true, "msg_subject": "Re: cost and actual time" }, { "msg_contents": "Chantal Ackermann <chantal.ackermann@biomax.de> writes:\n> the gene_id for 'igg' occurres 110637 times in gene_occurrences, it is \n> the most frequent.\n\nI think the problem here is that the planner doesn't know that (and\nprobably can't without some kind of cross-table statistics apparatus).\nIt's generating a plan based on the average frequency of gene_ids, which\nis a loser for this outlier.\n\nProbably the most convenient way to do better is to structure things so\nthat the reduction from gene name to gene_id is done before the planner\nstarts to develop a plan. Instead of joining to gene, consider this:\n\ncreate function get_gene_id (text) returns int as -- adjust types as needed\n'select gene_id from gene where gene_name = $1' language sql\nimmutable strict; -- in 7.2, instead say \"with (isCachable, isStrict)\"\n\nEXPLAIN ANALYZE\nSELECT tmp.disease_name, count(tmp.disease_name) AS cnt\n\tFROM (SELECT DISTINCT disease.disease_name, \ndisease_occurrences.sentence_id FROM disease, gene_occurrences, \ndisease_occurrences\n\t\tWHERE\n\t\tgene_occurrences.sentence_id=disease_occurrences.sentence_id\n\t\tAND get_gene_id('igg')=gene_occurrences.gene_id\n\t\tAND disease.disease_id=disease_occurrences.disease_id) AS tmp\nGROUP BY tmp.disease_name\nORDER BY cnt DESC;\n\nNow get_gene_id() isn't really immutable (unless you never change the\ngene table) but you have to lie and pretend that it is, so that the\nfunction call will be constant-folded during planner startup. 
The\nplanner will then see something like gene_occurrences.gene_id = 42\nand it will have a much better shot at determining the number of rows\nthis matches.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 17 Feb 2003 11:21:56 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: cost and actual time " }, { "msg_contents": "On Mon, 17 Feb 2003 11:51:56 +0100, Chantal Ackermann\n<chantal.ackermann@biomax.de> wrote:\n>the gene_id for 'igg' occurres 110637 times in gene_occurrences, it is \n>the most frequent.\n\nChantal, could you try\n\nEXPLAIN ANALYZE\nSELECT tmp.disease_name, count(tmp.disease_name) AS cnt\n FROM (SELECT DISTINCT dd.disease_name, d_o.sentence_id \n FROM disease d,\n gene_occurrences g_o, \n disease_occurrences d_o\n WHERE g_o.sentence_id=d_o.sentence_id\n AND g_o.gene_id=4711\n AND d.disease_id=d_o.disease_id) AS tmp\n GROUP BY tmp.disease_name\n ORDER BY cnt DESC;\n\nreplacing 4711 by the result of\n\tSELECT gene_id FROM gene WHERE gene_name='igg'\n\nand show us the plan you get?\n\nServus\n Manfred\n", "msg_date": "Mon, 17 Feb 2003 17:49:16 +0100", "msg_from": "Manfred Koizar <mkoi-pg@aon.at>", "msg_from_op": false, "msg_subject": "Re: cost and actual time" }, { "msg_contents": "hello Manfred, hello Josh, hello Tom,\n\nI followed your advices. Using the new query with explicit joins, \ncombined with the function that retrieves the gene_id, the estimated row \ncount is now far more realistic. Still, the same query using different \ngene names takes sometimes less than a second, sometimes several \nminutes, obviously due to (not) caching. In the resulting query plan, \nthere are still a Seq Scan, a Nested Loop and a Hash Join that take up \nmost of the cost.\n\n+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\n\nIn details:\n\nI have created the following function:\nCREATE OR REPLACE FUNCTION get_gene_id(TEXT) RETURNS INT AS\n 'SELECT gene_id FROM gene WHERE gene_name = $1'\nLANGUAGE SQL\nIMMUTABLE STRICT;\n\nThen I ran some queries with explain/explain analyze. For example:\n\n1. the old query, leaving out the table gene and setting \ngene_occurrences.gene_id to a certain gene_id, or the function \nget_gene_id, respectively. 
(This is the query you suggested, Manfred.)\n\nEXPLAIN ANALYZE\n SELECT tmp.disease_name, count(tmp.disease_name) AS cnt\n FROM\n (SELECT DISTINCT disease.disease_name,\n disease_occurrences.sentence_id\n FROM disease, gene_occurrences, disease_occurrences\n WHERE gene_occurrences.sentence_id=disease_occurrences.sentence_id\n AND gene_occurrences.gene_id=get_gene_id('igm')\n AND disease.disease_id=disease_occurrences.disease_id) AS tmp\nGROUP BY tmp.disease_name ORDER BY cnt DESC;\n\n'igm' occures about 54.000 times in gene_occurrences.\n \n QUERY PLAN\n\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=308065.26..308069.67 rows=882 width=57) (actual \ntime=53107.46..53108.13 rows=2326 loops=1)\n Sort Key: count(disease_name)\n -> Aggregate (cost=307846.61..307978.94 rows=882 width=57) (actual \ntime=53011.97..53100.58 rows=2326 loops=1)\n -> Group (cost=307846.61..307934.83 rows=8822 width=57) \n(actual time=53011.94..53079.74 rows=16711 loops=1)\n -> Sort (cost=307846.61..307890.72 rows=8822 width=57) \n(actual time=53011.93..53020.32 rows=16711 loops=1)\n Sort Key: disease_name\n -> Subquery Scan tmp (cost=305367.08..306690.35 \nrows=8822 width=57) (actual time=52877.87..52958.72 rows=16711 loops=1)\n -> Unique (cost=305367.08..306690.35 \nrows=8822 width=57) (actual time=52877.85..52915.20 rows=16711 loops=1)\n -> Sort (cost=305367.08..305808.17 \nrows=88218 width=57) (actual time=52877.85..52886.47 rows=16711 loops=1)\n Sort Key: disease.disease_name, \ndisease_occurrences.sentence_id\n -> Hash Join \n(cost=4610.17..290873.90 rows=88218 width=57) (actual \ntime=388.53..52752.92 rows=16711 loops=1)\n Hash Cond: \n(\"outer\".disease_id = \"inner\".disease_id)\n -> Nested Loop \n(cost=0.00..282735.01 rows=88218 width=20) (actual time=0.25..52184.26 \nrows=16711 loops=1)\n -> Index Scan using \ngene_occ_id_i on gene_occurrences (cost=0.00..57778.26 rows=54692 \nwidth=8) (actual time=0.07..17455.52 rows=54677 loops=1)\n Index Cond: \n(gene_id = 70764)\n -> Index Scan using \ndisease_occ_uni on disease_occurrences (cost=0.00..4.09 rows=2 \nwidth=12) (actual time=0.63..0.63 rows=0 loops=54677)\n Index Cond: \n(\"outer\".sentence_id = disease_occurrences.sentence_id)\n -> Hash \n(cost=2500.23..2500.23 rows=129923 width=37) (actual time=387.45..387.45 \nrows=0 loops=1)\n -> Seq Scan on \ndisease (cost=0.00..2500.23 rows=129923 width=37) (actual \ntime=0.02..207.71 rows=129923 loops=1)\n Total runtime: 53118.81 msec\n(20 rows)\n\nWhat takes up most of the runtime the Nested Loop (for the join of \ndisease and disease_occurrences, or rather for joining both occurrences \ntables? I'm not sure which rows belong together in the explain output).\n\nThe cost for 'igg' is higher:\nestimation of pages by explain: 584729.76.\nactual runtime: 693210.99 msec.\nThe query plan is the same. The Nested Loop takes up most of the runtime \n(-> Nested Loop (cost=0.00..538119.44 rows=176211 width=20) (actual \ntime=0.28..691474.74 rows=30513 loops=1))\n\n\n\n2. 
The new query, same changes (gene left out, subselect replaced with \nget_gene_id):\n\nEXPLAIN ANALYZE\n SELECT disease.disease_name, count(disease.disease_name) AS cnt\n FROM\n ((SELECT gene_occurrences.sentence_id\n FROM gene_occurrences\n WHERE gene_occurrences.gene_id=get_gene_id('csf')) AS tmp\n JOIN disease_occurrences USING (sentence_id)) as tmp2\n NATURAL JOIN disease\nGROUP BY disease.disease_name\nORDER BY cnt DESC;\n\n'csf' occurres about 55.000 times in gene_occurrences.\n \n QUERY PLAN\n\n-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=323834.95..323881.54 rows=9318 width=57) (actual \ntime=146975.89..146976.60 rows=2383 loops=1)\n Sort Key: count(disease.disease_name)\n -> Aggregate (cost=321208.63..322606.31 rows=9318 width=57) \n(actual time=146840.89..146968.58 rows=2383 loops=1)\n -> Group (cost=321208.63..322140.42 rows=93179 width=57) \n(actual time=146840.87..146941.60 rows=24059 loops=1)\n -> Sort (cost=321208.63..321674.53 rows=93179 \nwidth=57) (actual time=146840.85..146852.92 rows=24059 loops=1)\n Sort Key: disease.disease_name\n -> Hash Join (cost=4485.78..305826.96 rows=93179 \nwidth=57) (actual time=544.79..146651.05 rows=24059 loops=1)\n Hash Cond: (\"outer\".disease_id = \n\"inner\".disease_id)\n -> Nested Loop (cost=0.00..297614.04 \nrows=93179 width=20) (actual time=105.85..145936.47 rows=24059 loops=1)\n -> Index Scan using gene_occ_id_i on \ngene_occurrences (cost=0.00..60007.96 rows=57768 width=8) (actual \ntime=41.86..47116.68 rows=55752 loops=1)\n Index Cond: (gene_id = 29877)\n -> Index Scan using disease_occ_uni \non disease_occurrences (cost=0.00..4.09 rows=2 width=12) (actual \ntime=1.76..1.77 rows=0 loops=55752)\n Index Cond: (\"outer\".sentence_id \n= disease_occurrences.sentence_id)\n -> Hash (cost=2500.23..2500.23 rows=129923 \nwidth=37) (actual time=438.16..438.16 rows=0 loops=1)\n -> Seq Scan on disease \n(cost=0.00..2500.23 rows=129923 width=37) (actual time=0.02..236.78 \nrows=129923 loops=1)\n Total runtime: 146986.12 msec\n(16 rows)\n\nThis query is obviously not as good as the old one, though I don't \nunderstand where the explicit joins are worse than what the optimizer \nchoses. 
There is still the Nested Loop that takes up the biggest part.\n\nWhen I set enable_nestloop to false, explain outputs this plan:\n \nQUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=2146887.90..2146934.49 rows=9318 width=57)\n Sort Key: count(disease.disease_name)\n -> Aggregate (cost=2144261.59..2145659.27 rows=9318 width=57)\n -> Group (cost=2144261.59..2145193.37 rows=93179 width=57)\n -> Sort (cost=2144261.59..2144727.48 rows=93179 width=57)\n Sort Key: disease.disease_name\n -> Merge Join (cost=2122513.18..2128879.92 \nrows=93179 width=57)\n Merge Cond: (\"outer\".disease_id = \n\"inner\".disease_id)\n -> Index Scan using disease_pkey on disease \n (cost=0.00..3388.03 rows=129923 width=37)\n -> Sort (cost=2122513.18..2122979.08 \nrows=93179 width=20)\n Sort Key: disease_occurrences.disease_id\n -> Merge Join \n(cost=69145.63..2107131.52 rows=93179 width=20)\n Merge Cond: (\"outer\".sentence_id \n= \"inner\".sentence_id)\n -> Index Scan using \ndisease_occ_uni on disease_occurrences (cost=0.00..1960817.45 \nrows=15079045 width=12)\n -> Sort \n(cost=69145.63..69434.47 rows=57768 width=8)\n Sort Key: \ngene_occurrences.sentence_id\n -> Index Scan using \ngene_occ_id_i on gene_occurrences (cost=0.00..60007.96 rows=57768 width=8)\n Index Cond: (gene_id \n= 29877)\n(18 rows)\n\nMost of the runtime is used up by the index scan to join the occurrences \ntables, while the index scan for joining diesease and \ndisease_occurrences is fast.\n\n\n\n\nAt the moment my settings concering the query planner are:\n\neffective_cache_size = 80000 # typically 8KB each, default 1000\nrandom_page_cost = 1.5 # units are one sequential page fetch cost\ncpu_tuple_cost = 0.01 # (same), default 0.01\ncpu_index_tuple_cost = 0.00001 # (same), default 0.001\ncpu_operator_cost = 0.005 # (same), default 0.0025\n\n\n\nI am still having problems to read the output of explain, especially to \nknow which rows belong together, and what strategy applies to which \njoin. Is there some documentation that describes the format of explain, \nother than the one in the main manual that comes with the postgres \ninstallation? Just some short explanation or example on how to interpret \nindents and arrows.\n\nThank you for your help!\nChantal\n\n", "msg_date": "Tue, 18 Feb 2003 11:28:40 +0100", "msg_from": "Chantal Ackermann <chantal.ackermann@biomax.de>", "msg_from_op": true, "msg_subject": "Re: cost and actual time" }, { "msg_contents": "On Tue, 18 Feb 2003 11:28:40 +0100, Chantal Ackermann\n<chantal.ackermann@biomax.de> wrote:\n>1. the old query, leaving out the table gene and setting \n>gene_occurrences.gene_id to a certain gene_id, or the function \n>get_gene_id, respectively. (This is the query you suggested, Manfred.)\n\nThis was Tom's suggestion. I might have ended up there in a day or\ntwo :-)\n\n>What takes up most of the runtime the Nested Loop (for the join of \n>disease and disease_occurrences, or rather for joining both occurrences \n>tables? I'm not sure which rows belong together in the explain output).\n\n... for joining both occurrences: The \"-> Nested Loop\" takes two\ntables (the \"-> Index Scans\") as input and produces one table as\noutput which is again used as input for the \"-> Hash Join\" above it.\n\n>2. 
The new query, same changes (gene left out, subselect replaced with \n>get_gene_id):\n>\n>EXPLAIN ANALYZE\n> SELECT disease.disease_name, count(disease.disease_name) AS cnt\n> FROM\n> ((SELECT gene_occurrences.sentence_id\n> FROM gene_occurrences\n> WHERE gene_occurrences.gene_id=get_gene_id('csf')) AS tmp\n> JOIN disease_occurrences USING (sentence_id)) as tmp2\n> NATURAL JOIN disease\n>GROUP BY disease.disease_name\n>ORDER BY cnt DESC;\n\nThere is no DISTINCT here. This is equvalent to your first query, iff\nthe following unique constraints are true:\n\t(gene_id, sentence_id) in gene_occurrences\n\t(disease_id, sentence_id) in disease_occurrences\n\t(disease_id) in disease\n\nIf they are, you don't need a sub-select (unless I'm missing\nsomething, please double-check):\n\nEXPLAIN ANALYZE\n SELECT disease.disease_name, count(*) AS cnt\n FROM disease, gene_occurrences, disease_occurrences\n WHERE gene_occurrences.sentence_id=disease_occurrences.sentence_id\n AND gene_occurrences.gene_id=get_gene_id('igm')\n AND disease.disease_id=disease_occurrences.disease_id\n GROUP BY tmp.disease_name\n ORDER BY cnt DESC;\n\nAnyway, your problem boils down to\n\nEXPLAIN ANALYZE\n\tSELECT d.disease_id, d.sentence_id\n\t FROM gene_occurrences g, disease_occurrences d\n\t WHERE g.sentence_id = d.sentence_id\n\t AND g.gene_id = 'some constant value';\n\nPlay with enable_xxxx to find out which join method provides the best\nperformance for various gene_ids. Then we can start to fiddle with\nrun-time parameters to help the optimizer choose the right plan.\n\n>Most of the runtime is used up by the index scan to join the occurrences \n>tables [...]\n>\n>At the moment my settings concering the query planner are:\n>\n>effective_cache_size = 80000 # typically 8KB each, default 1000\n>random_page_cost = 1.5 # units are one sequential page fetch cost\n\nUsually you set a low random_page_cost value (the default is 4) if you\nwant to favour index scans where the optimizer tends to use sequential\nscans. Was this your intention?\n\n>cpu_tuple_cost = 0.01 # (same), default 0.01\n>cpu_index_tuple_cost = 0.00001 # (same), default 0.001\n>cpu_operator_cost = 0.005 # (same), default 0.0025\n\nJust out of curiosity: Are these settings based on prior experience?\n\nServus\n Manfred\n", "msg_date": "Tue, 18 Feb 2003 18:31:48 +0100", "msg_from": "Manfred Koizar <mkoi-pg@aon.at>", "msg_from_op": false, "msg_subject": "Re: cost and actual time" }, { "msg_contents": "hello Manfred,\n\n> ... for joining both occurrences: The \"-> Nested Loop\" takes two\n> tables (the \"-> Index Scans\") as input and produces one table as\n> output which is again used as input for the \"-> Hash Join\" above it.\n\nas I am testing with the most frequent gene names (= the gene_ids that \nare the most frequent in the occurrences tables) this is a very \nexpensive join. whenever I try a less frequent gene_id the runtime is \nshorter (though I haven't tested especially with less frequent gene_ids, \nyet. my focus is on making the searches for the most frequent genes \nfaster as these are probably the ones that are searched for a lot.)\n\n> There is no DISTINCT here. This is equvalent to your first query, iff\n> the following unique constraints are true:\n> \t(gene_id, sentence_id) in gene_occurrences\n> \t(disease_id, sentence_id) in disease_occurrences\n> \t(disease_id) in disease\n> \n> If they are, you don't need a sub-select (unless I'm missing\n> something, please double-check):\n\nyeah, I noticed the difference between the two queries. 
actually, I am \nafraid of dropping the distinct cause I had results with duplicate rows \n(though I shall recheck when this is really the case). These are the \ntable declarations and constraints:\n\nrelate=# \\d gene\n Table \"public.gene\"\n Column | Type | Modifiers\n-------------+---------+-----------\n gene_id | integer | not null\n gene_name | text | not null\n gene_syn_id | integer | not null\nIndexes: gene_pkey primary key btree (gene_id),\n gene_name_uni unique btree (gene_name),\n gene_uni unique btree (gene_name, gene_syn_id),\n gene_syn_idx btree (gene_syn_id)\n\n(disease looks the same)\n\nrelate_01=# \\d gene_occurrences\n Table \"public.gene_occurrences\"\n Column | Type | Modifiers\n-------------+---------+-----------\n sentence_id | bigint | not null\n gene_id | integer | not null\n puid | integer | not null\nIndexes: gene_occ_uni unique btree (sentence_id, gene_id),\n gene_occ_id_i btree (gene_id)\n\nrelate_01=# \\d disease_occurrences\nTable \"public.disease_occurrences\"\n Column | Type | Modifiers\n-------------+---------+-----------\n sentence_id | bigint | not null\n disease_id | integer | not null\n puid | integer | not null\nIndexes: disease_occ_uni unique btree (sentence_id, disease_id),\n disease_occ_id_i btree (disease_id)\n\nsentence_id and gene/disease_id are connected in a n:m relation.\nas sentence_id is the primary key of a table with more than 50 million \nrows, we decided not to use a serial as primary key but to use a unique \ncombination of two existing values. as this combination is to long for \nan ordinary int, we have to use bigint as type. is the join therefore \nsuch expensive?\n\nwe had a primary key occurrence_id on the occurrences tables but we \nnoticed that we don't use it, so we didn't recreate it in the new \ndatabase. is it possible that the postgres could work with it internally?\n\n> Play with enable_xxxx to find out which join method provides the best\n> performance for various gene_ids. Then we can start to fiddle with\n> run-time parameters to help the optimizer choose the right plan.\n\nthis would be VERY helpful! :-)\n\nI played around and this is the result:\n\nEXPLAIN ANALYZE\nSELECT d.disease_id, d.sentence_id\n FROM gene_occurrences g, disease_occurrences d\n WHERE g.sentence_id = d.sentence_id\n AND g.gene_id = get_gene_id([different very frequent gene names]);\n\nchoice of the planner: Nested Loop\n Total runtime: 53508.86 msec\n\nset enable_nextloop to false;\nMerge Join: Total runtime: 113066.81 msec\n\nset enable_mergejoin to false;\nHash Join: Total runtime: 439344.44 msec\n\ndisabling the hash join results again in a Nested Loop with very high \ncost but low runtime - I'm not sure if the latter is the consequence of \ncaching. 
I changed the gene name at every run to avoid the caching.\n\nSo the Nested Loop is obiously the best way to go?\n\nFor comparison: a less frequent gene (occurres 6717 times in \ngene_occurrences)\noutputs the following query plan:\n\n \nQUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------- \nNested Loop (cost=0.00..41658.69 rows=12119 width=20) (actual \ntime=87.01..19076.62 rows=1371 loops=1)\n -> Index Scan using gene_occ_id_i on gene_occurrences g \n(cost=0.00..10754.08 rows=7514 width=8) (actual time=35.89..10149.14 \nrows=6717 loops=1)\n Index Cond: (gene_id = 16338)\n -> Index Scan using disease_occ_uni on disease_occurrences d \n(cost=0.00..4.09 rows=2 width=12) (actual time=1.32..1.32 rows=0 loops=6717)\n Index Cond: (\"outer\".sentence_id = d.sentence_id)\n Total runtime: 19078.48 msec\n\n> Usually you set a low random_page_cost value (the default is 4) if you\n> want to favour index scans where the optimizer tends to use sequential\n> scans. Was this your intention?\n\nNo, not really. I found a posting in the archives where one would \nsuggest reducing this parameter, so I tried it. I don't think it had any \nperceptiple effect.\n\n>>cpu_tuple_cost = 0.01 # (same), default 0.01\n>>cpu_index_tuple_cost = 0.00001 # (same), default 0.001\n>>cpu_operator_cost = 0.005 # (same), default 0.0025\n> \n> \n> Just out of curiosity: Are these settings based on prior experience?\n\nNope. Same as above. I changed these variables only two days ago for \nwhat I recall. Untill than I had them at their default.\n\nRegards,\nChantal\n\n", "msg_date": "Wed, 19 Feb 2003 10:38:54 +0100", "msg_from": "Chantal Ackermann <chantal.ackermann@biomax.de>", "msg_from_op": true, "msg_subject": "Re: cost and actual time" }, { "msg_contents": "Chantal,\n\nI'm short of time right now. So just a few quick notes and a request\nfor more information. Next round tomorrow ...\n\nOn Wed, 19 Feb 2003 10:38:54 +0100, Chantal Ackermann\n<chantal.ackermann@biomax.de> wrote:\n>yeah, I noticed the difference between the two queries. actually, I am \n>afraid of dropping the distinct cause I had results with duplicate rows \n>(though I shall recheck when this is really the case). These are the \n>table declarations and constraints: [...]\n\nAFAICS there's no way to get duplicates, so no need for DISTINCT.\n\n> we have to use bigint as type. is the join therefore \n>such expensive?\n\nNo.\n\n>we had a primary key occurrence_id on the occurrences tables but we \n>noticed that we don't use it, so we didn't recreate it in the new \n>database. is it possible that the postgres could work with it internally?\n\nNo.\n\n> Nested Loop Total runtime: 53508.86 msec\n>Merge Join: Total runtime: 113066.81 msec\n>Hash Join: Total runtime: 439344.44 msec\n>I changed the gene name at every run to avoid the caching.\n\nYou can't compare the runtimes unless you query for the same data.\nEither run each query twice to make sure everything is cached or do\nsomething like\n\ttar cf /dev/null /some/big/directory\nbefore each query to empty your disk cache. BTW, what is your\nshared_buffers setting.\n\n>disabling the hash join results again in a Nested Loop with very high \n>cost but low runtime - I'm not sure if the latter is the consequence of \n>caching. \n\nPlease send EXPLAIN ANALYZE results for all queries. 
Send it to me\noff-list if you think its too long.\n\nServus\n Manfred\n", "msg_date": "Wed, 19 Feb 2003 11:34:03 +0100", "msg_from": "Manfred Koizar <mkoi-pg@aon.at>", "msg_from_op": false, "msg_subject": "Re: cost and actual time" }, { "msg_contents": "On Wed, 19 Feb 2003 10:38:54 +0100, Chantal Ackermann\n<chantal.ackermann@biomax.de> wrote:\n>Nested Loop: 53508.86 msec\n>Merge Join: 113066.81 msec\n>Hash Join: 439344.44 msec\n\nChantal,\n\nyou might have reached the limit of what Postgres (or any other\ndatabase?) can do for you with these data structures. Time for\nsomething completely different: Try calculating the counts in\nadvance.\n\n CREATE TABLE occ_stat (\n did INT NOT NULL,\n gid INT NOT NULL,\n cnt INT NOT NULL\n ) WITHOUT OIDS;\n\n CREATE INDEX occ_stat_dg ON occ_stat(did, gid);\n CREATE INDEX occ_stat_gd ON occ_stat(gid, did);\n\nThere is *no* UNIQUE constraint on (did, gid). You get the numbers\nyou're after by\n SELECT did, sum(cnt) AS cnt\n FROM occ_stat\n WHERE gid = 'whatever'\n GROUP BY did\n ORDER BY cnt DESC;\n\nocc_stat is initially loaded by\n\n INSERT INTO occ_stat\n SELECT did, gid, count(*)\n FROM g_o INNER JOIN d_o ON (g_o.sid = d_o.sid)\n GROUP BY did, gid;\n\nDoing it in chunks\n WHERE sid BETWEEN a::bigint AND b::bigint\nmight be faster.\n\nYou have to block any INSERT/UPDATE/DELETE activity on d_o and g_o\nwhile you do the initial load. If it takes too long, see below for\nhow to do it in the background; hopefully the load task will catch up\nsome day :-)\n\nKeeping occ_stat current:\n\n CREATE RULE d_o_i AS ON INSERT\n TO d_o DO (\n INSERT INTO occ_stat\n SELECT NEW.did, g_o.gid, 1\n FROM g_o\n WHERE g_o.sid = NEW.sid);\n\n CREATE RULE d_o_d AS ON DELETE\n TO d_o DO (\n INSERT INTO occ_stat\n SELECT OLD.did, g_o.gid, -1\n FROM g_o\n WHERE g_o.sid = OLD.sid);\n\nOn UPDATE do both. Create a set of similar rules for g_o.\n\nThese rules will create a lot of duplicates on (did, gid) in occ_stat.\nUpdating existing rows and inserting only new combinations might seem\nobvious, but this method has concurrency problems (cf. the thread\n\"Hard problem with concurrency\" on -hackers). So occ_stat calls for\nreorganisation from time to time:\n\n BEGIN;\n SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;\n CREATE TEMP TABLE t (did INT, gid INT, cnt INT) WITHOUT OIDS;\n \n INSERT INTO t\n SELECT did, gid, sum(cnt)\n FROM occ_stat\n GROUP BY did, gid\n HAVING count(*) > 1;\n \n DELETE FROM occ_stat\n WHERE t.did = occ_stat.did\n AND t.gid = occ_stat.gid;\n \n INSERT INTO occ_stat SELECT * FROM t;\n \n DROP TABLE t;\n COMMIT;\n VACUUM ANALYZE occ_stat; -- very important!!\n\nNow this should work, but the rules could kill INSERT/UPDATE/DELETE\nperformance. Depending on your rate of modifications you might be\nforced to push the statistics calculation to the background.\n\n CREATE TABLE d_o_change (\n sid BIGINT NOT NULL,\n did INT NOT NULL,\n cnt INT NOT NULL\n ) WITHOUT OIDS;\n\n ... ON INSERT TO d_o DO (\n INSERT INTO d_o_change VALUES (NEW.sid, NEW.did, 1));\n\n ... ON DELETE TO d_o DO (\n INSERT INTO d_o_change VALUES (OLD.sid, OLD.did, -1));\n\n ... 
ON UPDATE TO d_o\n WHERE OLD.sid != NEW.sid OR OLD.did != NEW.did\n DO both\n\nAnd the same for g_o.\n\nYou need a task that periodically scans [dg]_o_change and does ...\n\n BEGIN;\n SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;\n SELECT <any row (or some rows) from x_o_change>;\n INSERT INTO occ_stat <see above>;\n DELETE <the selected row(s) from x_o_change>;\n COMMIT;\n\nDon't forget to VACUUM!\n\nIf you invest a little more work, I guess you can combine the\nreorganisation into the loader task ...\n\nI have no idea whether this approach is better than what you have now.\nWith a high INSERT/UPDATE/DELETE rate it may lead to a complete\nperformance disaster. You have to try ...\n\nServus\n Manfred\n", "msg_date": "Thu, 20 Feb 2003 11:00:19 +0100", "msg_from": "Manfred Koizar <mkoi-pg@aon.at>", "msg_from_op": false, "msg_subject": "Re: cost and actual time" }, { "msg_contents": "I nominate Manfred for support response award of the week!\n\nChris\n\n----- Original Message -----\nFrom: \"Manfred Koizar\" <mkoi-pg@aon.at>\nTo: \"Chantal Ackermann\" <chantal.ackermann@biomax.de>\nCc: \"Josh Berkus\" <josh@agliodbs.com>; <pgsql-performance@postgresql.org>;\n<tgl@sss.pgh.pa.us>\nSent: Thursday, February 20, 2003 6:00 PM\nSubject: Re: [PERFORM] cost and actual time\n\n\n> On Wed, 19 Feb 2003 10:38:54 +0100, Chantal Ackermann\n> <chantal.ackermann@biomax.de> wrote:\n> >Nested Loop: 53508.86 msec\n> >Merge Join: 113066.81 msec\n> >Hash Join: 439344.44 msec\n>\n> Chantal,\n>\n> you might have reached the limit of what Postgres (or any other\n> database?) can do for you with these data structures. Time for\n> something completely different: Try calculating the counts in\n> advance.\n>\n> CREATE TABLE occ_stat (\n> did INT NOT NULL,\n> gid INT NOT NULL,\n> cnt INT NOT NULL\n> ) WITHOUT OIDS;\n>\n> CREATE INDEX occ_stat_dg ON occ_stat(did, gid);\n> CREATE INDEX occ_stat_gd ON occ_stat(gid, did);\n>\n> There is *no* UNIQUE constraint on (did, gid). You get the numbers\n> you're after by\n> SELECT did, sum(cnt) AS cnt\n> FROM occ_stat\n> WHERE gid = 'whatever'\n> GROUP BY did\n> ORDER BY cnt DESC;\n>\n> occ_stat is initially loaded by\n>\n> INSERT INTO occ_stat\n> SELECT did, gid, count(*)\n> FROM g_o INNER JOIN d_o ON (g_o.sid = d_o.sid)\n> GROUP BY did, gid;\n>\n> Doing it in chunks\n> WHERE sid BETWEEN a::bigint AND b::bigint\n> might be faster.\n>\n> You have to block any INSERT/UPDATE/DELETE activity on d_o and g_o\n> while you do the initial load. If it takes too long, see below for\n> how to do it in the background; hopefully the load task will catch up\n> some day :-)\n>\n> Keeping occ_stat current:\n>\n> CREATE RULE d_o_i AS ON INSERT\n> TO d_o DO (\n> INSERT INTO occ_stat\n> SELECT NEW.did, g_o.gid, 1\n> FROM g_o\n> WHERE g_o.sid = NEW.sid);\n>\n> CREATE RULE d_o_d AS ON DELETE\n> TO d_o DO (\n> INSERT INTO occ_stat\n> SELECT OLD.did, g_o.gid, -1\n> FROM g_o\n> WHERE g_o.sid = OLD.sid);\n>\n> On UPDATE do both. Create a set of similar rules for g_o.\n>\n> These rules will create a lot of duplicates on (did, gid) in occ_stat.\n> Updating existing rows and inserting only new combinations might seem\n> obvious, but this method has concurrency problems (cf. the thread\n> \"Hard problem with concurrency\" on -hackers). 
So occ_stat calls for\n> reorganisation from time to time:\n>\n> BEGIN;\n> SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;\n> CREATE TEMP TABLE t (did INT, gid INT, cnt INT) WITHOUT OIDS;\n>\n> INSERT INTO t\n> SELECT did, gid, sum(cnt)\n> FROM occ_stat\n> GROUP BY did, gid\n> HAVING count(*) > 1;\n>\n> DELETE FROM occ_stat\n> WHERE t.did = occ_stat.did\n> AND t.gid = occ_stat.gid;\n>\n> INSERT INTO occ_stat SELECT * FROM t;\n>\n> DROP TABLE t;\n> COMMIT;\n> VACUUM ANALYZE occ_stat; -- very important!!\n>\n> Now this should work, but the rules could kill INSERT/UPDATE/DELETE\n> performance. Depending on your rate of modifications you might be\n> forced to push the statistics calculation to the background.\n>\n> CREATE TABLE d_o_change (\n> sid BIGINT NOT NULL,\n> did INT NOT NULL,\n> cnt INT NOT NULL\n> ) WITHOUT OIDS;\n>\n> ... ON INSERT TO d_o DO (\n> INSERT INTO d_o_change VALUES (NEW.sid, NEW.did, 1));\n>\n> ... ON DELETE TO d_o DO (\n> INSERT INTO d_o_change VALUES (OLD.sid, OLD.did, -1));\n>\n> ... ON UPDATE TO d_o\n> WHERE OLD.sid != NEW.sid OR OLD.did != NEW.did\n> DO both\n>\n> And the same for g_o.\n>\n> You need a task that periodically scans [dg]_o_change and does ...\n>\n> BEGIN;\n> SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;\n> SELECT <any row (or some rows) from x_o_change>;\n> INSERT INTO occ_stat <see above>;\n> DELETE <the selected row(s) from x_o_change>;\n> COMMIT;\n>\n> Don't forget to VACUUM!\n>\n> If you invest a little more work, I guess you can combine the\n> reorganisation into the loader task ...\n>\n> I have no idea whether this approach is better than what you have now.\n> With a high INSERT/UPDATE/DELETE rate it may lead to a complete\n> performance disaster. You have to try ...\n>\n> Servus\n> Manfred\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n>\n\n", "msg_date": "Fri, 21 Feb 2003 09:45:24 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: cost and actual time" } ]
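One further option that fits the discussion above, sketched only as an example: rather than raising default_statistics_target for the whole database, the statistics target can be raised just for the join columns the planner misestimates. The table and column names come from the schema shown earlier in the thread; the target of 100 is arbitrary:

  ALTER TABLE gene_occurrences ALTER COLUMN gene_id SET STATISTICS 100;
  ALTER TABLE disease_occurrences ALTER COLUMN disease_id SET STATISTICS 100;
  ANALYZE gene_occurrences;
  ANALYZE disease_occurrences;

A fresh ANALYZE is needed afterwards, since SET STATISTICS by itself only changes how much data the next ANALYZE collects.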
[ { "msg_contents": "\"Darryl A. J. Staflund\" <darryl.staflund@shaw.ca> writes:\n> MIAGCSqGSIb3DQEHAqCAMIACAQExCzAJBgUrDgMCGgUAMIAGCSqGSIb3DQEH\n> AaCAJIAEggX+Q29udGVudC1UeXBlOiB0ZXh0L3BsYWluOw0KCWNoYXJzZXQ9\n> IlVTLUFTQ0lJIg0KQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdA0K\n> DQpIaSBldmVyeW9uZSwNCg0KVGhhbmtzIGZvciB5b3VyIGNvbW1lbnRzIGFu\n> ZCBsaW5rcyB0byB0aGUgdmFyaW91cyB0b29scy4gIEkgd2lsbCBtYWtlIHN1\n> cmUNCnRvIGludmVzdGlnYXRlIHRoZW0hDQoNCk5vdyBvbnRvIHRoaXMgbGV0\n> dGVyLi4uDQoNCk9uIHRoZSBhc3N1bXB0aW9uIHRoYXQgUG9zdGdyZVNRTCB3\n> [etc]\n\nI might answer this if it weren't so unremittingly nastily encoded...\n*please* turn off whatever the heck that is.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 13 Feb 2003 23:18:52 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: JBoss CMP Performance Problems with PostgreSQL 7.2.3 " }, { "msg_contents": "Darryl,\n\nYour last e-mail got mangled in transit. I think you'll want to re-send it so \nyou get responses from others. You said:\n\n\"On the assumption that PostgreSQL was auto-committing each of the 1400\nqueries wrapped within the a BEGIN; and COMMIT;, I asked my DBA to disable\nauto-commit so that I could see if there was any performance improvement.\"\n\nEr, no, Darryl, you mis-understood. CMP is Auto-Committing. PostgreSQL is \nperforming normally. Your problem is with CMP, not Postgres. \n\n\" I was told he couldn't do that in our current version of PostgreSQL\n(7.2.3?) but he did turn fsync off in order to achieve a similar effect.\"\n\nWhich is fine up until you have an unexpected power-out, at which point you'd \nbetter have good backups.\n\n\"What a difference. Whereas before it took at least a minute to insert\n1400 queries into the database, with fsync turned off it only took 12\nseconds. If anything, this shows me that JBoss' implementation, although\nslow in comparison to other CMP implementations, is not the primary source\nof the performance problems. Also, I'm inclined to believe that\nPostgreSQL is auto-committing each query within the transaction even\nthough it shouldn't. Otherwise, why would turning fsync off make such a\nbig performance difference?\"\n\nPerhaps because CMP is sending a \"commit\" statement after each query to \nPostgreSQL?\n\n\"So the questions I have now is -- is this improvement in performance\nevidence of my assumption that PostgreSQL is auto-committing each of the\n1400 queries within the transaction (the logs sure suggest this.) And if\nso, why is it auto-committing these queries? I thought PostgreSQL\nsuspended auto-commit after encountering a BEGIN; statement.\"\n\nSee above. You need to examine your tools better, and to read the e-mails \nwhich respond to you. Another poster already gave you a link to a bug report \nmentioning that this is a problem with CMP.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Thu, 13 Feb 2003 20:20:38 -0800", "msg_from": "Josh Berkus <josh@agliodbs.com>", "msg_from_op": false, "msg_subject": "Re: JBoss CMP Performance Problems with PostgreSQL 7.2.3" }, { "msg_contents": "Tom Lane wrote:\n> \"Darryl A. J. 
Staflund\" <darryl.staflund@shaw.ca> writes:\n> > MIAGCSqGSIb3DQEHAqCAMIACAQExCzAJBgUrDgMCGgUAMIAGCSqGSIb3DQEH\n> > AaCAJIAEggX+Q29udGVudC1UeXBlOiB0ZXh0L3BsYWluOw0KCWNoYXJzZXQ9\n> > IlVTLUFTQ0lJIg0KQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdA0K\n> > DQpIaSBldmVyeW9uZSwNCg0KVGhhbmtzIGZvciB5b3VyIGNvbW1lbnRzIGFu\n> > ZCBsaW5rcyB0byB0aGUgdmFyaW91cyB0b29scy4gIEkgd2lsbCBtYWtlIHN1\n> > cmUNCnRvIGludmVzdGlnYXRlIHRoZW0hDQoNCk5vdyBvbnRvIHRoaXMgbGV0\n> > dGVyLi4uDQoNCk9uIHRoZSBhc3N1bXB0aW9uIHRoYXQgUG9zdGdyZVNRTCB3\n> > [etc]\n> \n> I might answer this if it weren't so unremittingly nastily encoded...\n> *please* turn off whatever the heck that is.\n\nTom, the answer is obvious:\n\n\tljasdfJOJAAJDJFoaijwsdojoAIJSDOIJAsdoFIJSoiJFOIJASDOFIJOLJAASDFaoiojfos\n\n:-)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Fri, 14 Feb 2003 07:35:46 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: JBoss CMP Performance Problems with PostgreSQL 7.2.3" } ]
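The fsync experiment in the thread above trades crash safety for speed. As an illustrative sketch only, not a recommendation: the setting lives in postgresql.conf, and the value the running server is using can be checked from psql:

  -- postgresql.conf:
  --   fsync = false   -- commits no longer wait for WAL to reach disk;
  --                   -- fast, but a power loss can lose or corrupt committed data
  -- Check the live value:
  SHOW fsync;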
[ { "msg_contents": "Hello,\n\nI am going to do my best to describe this problem, but the description\nmay be quite long. Also, this is my first post to this list, so I miss\nimportant details please let me know. All of the following analysis was\ndone on my P4 laptop with 0.5 Gig ram and postgresql 7.3 installed from\nRPM for RedHat linux 7.3\n\nI have a database with two large tables that I need to do a join on. \nHere is the query that is causing the problems:\n\nselect distinct f.name,fl.nbeg,fl.nend,fl.strand,f.type_id,f.feature_id\n from feature f, featureloc fl\n where\n fl.srcfeature_id = 1 and\n ((fl.strand=1 and fl.nbeg <= 393164 and fl.nend >= 390956) OR\n (fl.strand=-1 and fl.nend <= 393164 and fl.nbeg >= 390956)) and\n f.feature_id = fl.feature_id\n\nThe feature table has 3,685,287 rows and featureloc has 3,803,762 rows. \nHere are all of the relevant indexes on these tables:\n\n Index \"feature_pkey\"\n Column | Type \n------------+---------\n feature_id | integer\nunique btree (primary key)\n\nIndex \"featureloc_idx1\"\n Column | Type \n------------+---------\n feature_id | integer\nbtree\n \n Index \"featureloc_idx2\"\n Column | Type \n---------------+---------\n srcfeature_id | integer\nbtree\n\nIndex \"featureloc_src_strand_beg_end\"\n Column | Type \n---------------+----------\n srcfeature_id | integer\n strand | smallint\n nbeg | integer\n nend | integer\nbtree\n\nIn a naive database (ie, no ANALYZE has ever been run), the above query\nruns in about 3 seconds after the first time it has been run (due to\ncaching?). Here is the output of EXPLAIN on that query:\n\nUnique (cost=75513.46..75513.48 rows=1 width=167)\n -> Sort (cost=75513.46..75513.46 rows=1 width=167)\n -> Nested Loop (cost=0.00..75513.45 rows=1 width=167)\n -> Index Scan using featureloc_idx2 on featureloc fl \n(cost=0.00..75508.43 rows=1 width=14)\n -> Index Scan using feature_pkey on feature f \n(cost=0.00..5.01 rows=1 width=153)\n\nNotice that for featureloc it is using featureloc_idx2, which is the\nindex on srcfeature_id. Ideally, it would be using\nfeatureloc_src_strand_beg_end, but this is not bad. In fact, if I drop\nfeatureloc_idx2 (not something I can do in a production database), it\ndoes use that index and cuts the query runtime in half.\n\nNow comes the really bizarre part: if I run VACUUM ANALYZE, the\nperformance becomes truly awful. Instead of using an index for\nfeatureloc, it now does a seq scan, causing the runtime to go up to\nabout 30 seconds. Here is the output of explain now:\n\nUnique (cost=344377.70..344759.85 rows=2548 width=47)\n -> Sort (cost=344377.70..344377.70 rows=25477 width=47)\n -> Nested Loop (cost=0.00..342053.97 rows=25477 width=47)\n -> Seq Scan on featureloc fl (cost=0.00..261709.31\nrows=25477 width=14)\n -> Index Scan using feature_pkey on feature f \n(cost=0.00..3.14 rows=1 width=33)\n\nIf I try to force it to use an index by setting enable_seqscan=0, it\nthen uses featureloc_idx1, the index on feature_id. This has slightly\nworse performance than the seq scan, at just over 30 seconds. 
Here is\nthe output of explain for this case:\n\nUnique (cost=356513.46..356895.61 rows=2548 width=47)\n -> Sort (cost=356513.46..356513.46 rows=25477 width=47)\n -> Nested Loop (cost=0.00..354189.73 rows=25477 width=47)\n -> Index Scan using featureloc_idx1 on featureloc fl \n(cost=0.00..273845.08 rows=25477 width=14)\n -> Index Scan using feature_pkey on feature f \n(cost=0.00..3.14 rows=1 width=33)\n\nNow, if I drop featureloc_idx1 (again, not something I can do in\nproduction) and still disallow seq scans, it uses featureloc_idx2, as it\ndid with the naive database above, but the performance is much worse,\nrunning about 24 seconds for the query. Here is the output of explain:\n\nUnique (cost=1310195.21..1310577.36 rows=2548 width=47)\n -> Sort (cost=1310195.21..1310195.21 rows=25477 width=47)\n -> Nested Loop (cost=0.00..1307871.48 rows=25477 width=47)\n -> Index Scan using featureloc_idx2 on featureloc fl \n(cost=0.00..1227526.82 rows=25477 width=14)\n -> Index Scan using feature_pkey on feature f \n(cost=0.00..3.14 rows=1 width=33)\n\nFinally, if I drop featureloc_idx2, it uses the right index, and the\nruntime gets back to about 2 seconds, but the database is unusable for\nother queries because I dropped the other indexes and disallowed seq\nscans. Here is the output for explain in this case:\n\nUnique (cost=1414516.98..1414899.13 rows=2548 width=47)\n -> Sort (cost=1414516.98..1414516.98 rows=25477 width=47)\n -> Nested Loop (cost=0.00..1412193.25 rows=25477 width=47)\n -> Index Scan using featureloc_src_strand_beg_end on\nfeatureloc fl (cost=0.00..1331848.60 rows=25477 width=14)\n -> Index Scan using feature_pkey on feature f \n(cost=0.00..3.14 rows=1 width=33)\n\nNow the question: can anybody tell me why the query planner does such a\nbad job figuring out how to run this query after I run VACUUM ANALYZE,\nand can you give me any advice on making this arrangement work without\nforcing postgres' hand by limiting its options until it gets it right?\n\nThank you very much for reading down to here,\nScott\n\n-- \n------------------------------------------------------------------------\nScott Cain, Ph. D. cain@cshl.org\nGMOD Coordinator (http://www.gmod.org/) 216-392-3087\nCold Spring Harbor Laboratory\n\n", "msg_date": "14 Feb 2003 11:44:00 -0500", "msg_from": "Scott Cain <cain@cshl.org>", "msg_from_op": true, "msg_subject": "performace problem after VACUUM ANALYZE" }, { "msg_contents": "Scott Cain <cain@cshl.org> writes:\n> [ much stuff ]\n\nCould we see EXPLAIN ANALYZE, not just EXPLAIN, output for all these\nalternatives? Your question boils down to \"why is the planner\nmisestimating these queries\" ... which is a difficult question to\nanswer when given only the estimates and not the reality.\n\nA suggestion though is that you might need to raise the statistics\ntarget on the indexed columns, so that ANALYZE will collect\nfiner-grained statistics. (See ALTER TABLE ... SET STATISTICS.)\nTry booting it up to 100 (from the default 10), re-analyze, and\nthen see if/how the plans change.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 14 Feb 2003 12:00:18 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: performace problem after VACUUM ANALYZE " }, { "msg_contents": "Tom,\n\nSorry about that: I'll try to briefly give the information you are\nlooking for. 
I've read the docs on ALTER TABLE, but it is not clear to\nme what columns I should change STATISTICS on, or should I just do it on\nall of the columns for which indexes exist?\n\nHere's the query again:\n\nselect distinct f.name,fl.nbeg,fl.nend,fl.strand,f.type_id,f.feature_id\n from feature f, featureloc fl\n where\n fl.srcfeature_id = 1 and\n ((fl.strand=1 and fl.nbeg <= 393164 and fl.nend >= 390956) OR\n (fl.strand=-1 and fl.nend <= 393164 and fl.nbeg >= 390956)) and\n f.feature_id = fl.feature_id\n\n--------------------------------------------------------------------------\n\nNaive database:\n\nUnique (cost=75513.46..75513.48 rows=1 width=167) (actual\ntime=22815.25..22815.93 rows=179 loops=1)\n -> Sort (cost=75513.46..75513.46 rows=1 width=167) (actual\ntime=22815.24..22815.43 rows=186 loops=1)\n -> Nested Loop (cost=0.00..75513.45 rows=1 width=167) (actual\ntime=2471.25..22814.01 rows=186 loops=1)\n -> Index Scan using featureloc_idx2 on featureloc fl \n(cost=0.00..75508.43 rows=1 width=14) (actual time=2463.83..22796.50\nrows=186 loops=1)\n -> Index Scan using feature_pkey on feature f \n(cost=0.00..5.01 rows=1 width=153) (actual time=0.08..0.08 rows=1\nloops=186)\nTotal runtime: 22816.63 msec\n--------------------------------------------------------------------------\n\nNaive database after featureloc_idx2 dropped:\n\nUnique (cost=75545.46..75545.48 rows=1 width=167) (actual\ntime=5232.36..5234.51 rows=179 loops=1)\n -> Sort (cost=75545.46..75545.46 rows=1 width=167) (actual\ntime=5232.35..5232.54 rows=186 loops=1)\n -> Nested Loop (cost=0.00..75545.45 rows=1 width=167) (actual\ntime=291.46..5220.69 rows=186 loops=1)\n -> Index Scan using featureloc_src_strand_beg_end on\nfeatureloc fl (cost=0.00..75540.43 rows=1 width=14) (actual\ntime=291.30..5214.46 rows=186 loops=1)\n -> Index Scan using feature_pkey on feature f \n(cost=0.00..5.01 rows=1 width=153) (actual time=0.02..0.03 rows=1\nloops=186)\nTotal runtime: 5234.89 msec\n--------------------------------------------------------------------------\n\nDatabase after VACUUM ANALYZE was run:\n\nUnique (cost=344377.70..344759.85 rows=2548 width=47) (actual\ntime=26466.82..26467.51 rows=179 loops=1)\n -> Sort (cost=344377.70..344377.70 rows=25477 width=47) (actual\ntime=26466.82..26467.01 rows=186 loops=1)\n -> Nested Loop (cost=0.00..342053.97 rows=25477 width=47)\n(actual time=262.66..26465.63 rows=186 loops=1)\n -> Seq Scan on featureloc fl (cost=0.00..261709.31\nrows=25477 width=14) (actual time=118.62..26006.05 rows=186 loops=1)\n -> Index Scan using feature_pkey on feature f \n(cost=0.00..3.14 rows=1 width=33) (actual time=2.45..2.46 rows=1\nloops=186)\nTotal runtime: 26467.85 msec\n--------------------------------------------------------------------------\n\nAfter disallowing seqscans (set enable_seqscan=0):\n\nUnique (cost=356513.46..356895.61 rows=2548 width=47) (actual\ntime=27494.62..27495.34 rows=179 loops=1)\n -> Sort (cost=356513.46..356513.46 rows=25477 width=47) (actual\ntime=27494.61..27494.83 rows=186 loops=1)\n -> Nested Loop (cost=0.00..354189.73 rows=25477 width=47)\n(actual time=198.88..27493.48 rows=186 loops=1)\n -> Index Scan using featureloc_idx1 on featureloc fl \n(cost=0.00..273845.08 rows=25477 width=14) (actual time=129.30..27280.95\nrows=186 loops=1)\n -> Index Scan using feature_pkey on feature f \n(cost=0.00..3.14 rows=1 width=33) (actual time=1.13..1.13 rows=1\nloops=186)\nTotal runtime: 27495.66 msec\n--------------------------------------------------------------------------\n\nAfter dropping 
featureloc_idx1:\n\nUnique (cost=1310195.21..1310577.36 rows=2548 width=47) (actual\ntime=21692.69..21693.37 rows=179 loops=1)\n -> Sort (cost=1310195.21..1310195.21 rows=25477 width=47) (actual\ntime=21692.69..21692.88 rows=186 loops=1)\n -> Nested Loop (cost=0.00..1307871.48 rows=25477 width=47)\n(actual time=2197.65..21691.39 rows=186 loops=1)\n -> Index Scan using featureloc_idx2 on featureloc fl \n(cost=0.00..1227526.82 rows=25477 width=14) (actual\ntime=2197.49..21618.89 rows=186 loops=1)\n -> Index Scan using feature_pkey on feature f \n(cost=0.00..3.14 rows=1 width=33) (actual time=0.37..0.38 rows=1\nloops=186)\nTotal runtime: 21693.72 msec\n--------------------------------------------------------------------------\n\nAfter dropping featureloc_idx2:\n\nUnique (cost=1414516.98..1414899.13 rows=2548 width=47) (actual\ntime=1669.17..1669.86 rows=179 loops=1)\n -> Sort (cost=1414516.98..1414516.98 rows=25477 width=47) (actual\ntime=1669.17..1669.36 rows=186 loops=1)\n -> Nested Loop (cost=0.00..1412193.25 rows=25477 width=47)\n(actual time=122.69..1668.08 rows=186 loops=1)\n -> Index Scan using featureloc_src_strand_beg_end on\nfeatureloc fl (cost=0.00..1331848.60 rows=25477 width=14) (actual\ntime=122.51..1661.81 rows=186 loops=1)\n -> Index Scan using feature_pkey on feature f \n(cost=0.00..3.14 rows=1 width=33) (actual time=0.02..0.03 rows=1\nloops=186)\nTotal runtime: 1670.20 msec\n\n\nOn Fri, 2003-02-14 at 12:00, Tom Lane wrote:\n> Scott Cain <cain@cshl.org> writes:\n> > [ much stuff ]\n> \n> Could we see EXPLAIN ANALYZE, not just EXPLAIN, output for all these\n> alternatives? Your question boils down to \"why is the planner\n> misestimating these queries\" ... which is a difficult question to\n> answer when given only the estimates and not the reality.\n> \n> A suggestion though is that you might need to raise the statistics\n> target on the indexed columns, so that ANALYZE will collect\n> finer-grained statistics. (See ALTER TABLE ... SET STATISTICS.)\n> Try booting it up to 100 (from the default 10), re-analyze, and\n> then see if/how the plans change.\n> \n> \t\t\tregards, tom lane\n-- \n------------------------------------------------------------------------\nScott Cain, Ph. D. cain@cshl.org\nGMOD Coordinator (http://www.gmod.org/) 216-392-3087\nCold Spring Harbor Laboratory\n\n", "msg_date": "14 Feb 2003 12:29:36 -0500", "msg_from": "Scott Cain <cain@cshl.org>", "msg_from_op": true, "msg_subject": "Re: performace problem after VACUUM ANALYZE" }, { "msg_contents": "An update: I ran alter table as suggested, ie,\n\nalter table featureloc alter srcfeature_id set statistics 100;\n\non each column in the table, running vacuum analyze and explain analyze\non the query in between each alter to see if it made any difference. It\ndid not. 
Postgres still instists on doing a seq scan on featureloc:\n\nUnique (cost=336831.46..337179.45 rows=2320 width=47) (actual\ntime=27219.62..27220.30 rows=179 loops=1)\n -> Sort (cost=336831.46..336831.46 rows=23200 width=47) (actual\ntime=27219.61..27219.80 rows=186 loops=1)\n -> Nested Loop (cost=0.00..334732.77 rows=23200 width=47)\n(actual time=1003.04..27217.99 rows=186 loops=1)\n -> Seq Scan on featureloc fl (cost=0.00..261709.31\nrows=23200 width=14) (actual time=814.68..26094.18 rows=186 loops=1)\n -> Index Scan using feature_pkey on feature f \n(cost=0.00..3.14 rows=1 width=33) (actual time=6.03..6.03 rows=1\nloops=186)\nTotal runtime: 27220.63 msec\n\n\nOn Fri, 2003-02-14 at 12:29, Scott Cain wrote:\n> Tom,\n> \n> Sorry about that: I'll try to briefly give the information you are\n> looking for. I've read the docs on ALTER TABLE, but it is not clear to\n> me what columns I should change STATISTICS on, or should I just do it on\n> all of the columns for which indexes exist?\n> \n> Here's the query again:\n> \n> select distinct f.name,fl.nbeg,fl.nend,fl.strand,f.type_id,f.feature_id\n> from feature f, featureloc fl\n> where\n> fl.srcfeature_id = 1 and\n> ((fl.strand=1 and fl.nbeg <= 393164 and fl.nend >= 390956) OR\n> (fl.strand=-1 and fl.nend <= 393164 and fl.nbeg >= 390956)) and\n> f.feature_id = fl.feature_id\n> \n> --------------------------------------------------------------------------\n> \n> Naive database:\n> \n> Unique (cost=75513.46..75513.48 rows=1 width=167) (actual\n> time=22815.25..22815.93 rows=179 loops=1)\n> -> Sort (cost=75513.46..75513.46 rows=1 width=167) (actual\n> time=22815.24..22815.43 rows=186 loops=1)\n> -> Nested Loop (cost=0.00..75513.45 rows=1 width=167) (actual\n> time=2471.25..22814.01 rows=186 loops=1)\n> -> Index Scan using featureloc_idx2 on featureloc fl \n> (cost=0.00..75508.43 rows=1 width=14) (actual time=2463.83..22796.50\n> rows=186 loops=1)\n> -> Index Scan using feature_pkey on feature f \n> (cost=0.00..5.01 rows=1 width=153) (actual time=0.08..0.08 rows=1\n> loops=186)\n> Total runtime: 22816.63 msec\n> --------------------------------------------------------------------------\n> \n> Naive database after featureloc_idx2 dropped:\n> \n> Unique (cost=75545.46..75545.48 rows=1 width=167) (actual\n> time=5232.36..5234.51 rows=179 loops=1)\n> -> Sort (cost=75545.46..75545.46 rows=1 width=167) (actual\n> time=5232.35..5232.54 rows=186 loops=1)\n> -> Nested Loop (cost=0.00..75545.45 rows=1 width=167) (actual\n> time=291.46..5220.69 rows=186 loops=1)\n> -> Index Scan using featureloc_src_strand_beg_end on\n> featureloc fl (cost=0.00..75540.43 rows=1 width=14) (actual\n> time=291.30..5214.46 rows=186 loops=1)\n> -> Index Scan using feature_pkey on feature f \n> (cost=0.00..5.01 rows=1 width=153) (actual time=0.02..0.03 rows=1\n> loops=186)\n> Total runtime: 5234.89 msec\n> --------------------------------------------------------------------------\n> \n> Database after VACUUM ANALYZE was run:\n> \n> Unique (cost=344377.70..344759.85 rows=2548 width=47) (actual\n> time=26466.82..26467.51 rows=179 loops=1)\n> -> Sort (cost=344377.70..344377.70 rows=25477 width=47) (actual\n> time=26466.82..26467.01 rows=186 loops=1)\n> -> Nested Loop (cost=0.00..342053.97 rows=25477 width=47)\n> (actual time=262.66..26465.63 rows=186 loops=1)\n> -> Seq Scan on featureloc fl (cost=0.00..261709.31\n> rows=25477 width=14) (actual time=118.62..26006.05 rows=186 loops=1)\n> -> Index Scan using feature_pkey on feature f \n> (cost=0.00..3.14 rows=1 width=33) (actual 
time=2.45..2.46 rows=1\n> loops=186)\n> Total runtime: 26467.85 msec\n> --------------------------------------------------------------------------\n> \n> After disallowing seqscans (set enable_seqscan=0):\n> \n> Unique (cost=356513.46..356895.61 rows=2548 width=47) (actual\n> time=27494.62..27495.34 rows=179 loops=1)\n> -> Sort (cost=356513.46..356513.46 rows=25477 width=47) (actual\n> time=27494.61..27494.83 rows=186 loops=1)\n> -> Nested Loop (cost=0.00..354189.73 rows=25477 width=47)\n> (actual time=198.88..27493.48 rows=186 loops=1)\n> -> Index Scan using featureloc_idx1 on featureloc fl \n> (cost=0.00..273845.08 rows=25477 width=14) (actual time=129.30..27280.95\n> rows=186 loops=1)\n> -> Index Scan using feature_pkey on feature f \n> (cost=0.00..3.14 rows=1 width=33) (actual time=1.13..1.13 rows=1\n> loops=186)\n> Total runtime: 27495.66 msec\n> --------------------------------------------------------------------------\n> \n> After dropping featureloc_idx1:\n> \n> Unique (cost=1310195.21..1310577.36 rows=2548 width=47) (actual\n> time=21692.69..21693.37 rows=179 loops=1)\n> -> Sort (cost=1310195.21..1310195.21 rows=25477 width=47) (actual\n> time=21692.69..21692.88 rows=186 loops=1)\n> -> Nested Loop (cost=0.00..1307871.48 rows=25477 width=47)\n> (actual time=2197.65..21691.39 rows=186 loops=1)\n> -> Index Scan using featureloc_idx2 on featureloc fl \n> (cost=0.00..1227526.82 rows=25477 width=14) (actual\n> time=2197.49..21618.89 rows=186 loops=1)\n> -> Index Scan using feature_pkey on feature f \n> (cost=0.00..3.14 rows=1 width=33) (actual time=0.37..0.38 rows=1\n> loops=186)\n> Total runtime: 21693.72 msec\n> --------------------------------------------------------------------------\n> \n> After dropping featureloc_idx2:\n> \n> Unique (cost=1414516.98..1414899.13 rows=2548 width=47) (actual\n> time=1669.17..1669.86 rows=179 loops=1)\n> -> Sort (cost=1414516.98..1414516.98 rows=25477 width=47) (actual\n> time=1669.17..1669.36 rows=186 loops=1)\n> -> Nested Loop (cost=0.00..1412193.25 rows=25477 width=47)\n> (actual time=122.69..1668.08 rows=186 loops=1)\n> -> Index Scan using featureloc_src_strand_beg_end on\n> featureloc fl (cost=0.00..1331848.60 rows=25477 width=14) (actual\n> time=122.51..1661.81 rows=186 loops=1)\n> -> Index Scan using feature_pkey on feature f \n> (cost=0.00..3.14 rows=1 width=33) (actual time=0.02..0.03 rows=1\n> loops=186)\n> Total runtime: 1670.20 msec\n> \n> \n> On Fri, 2003-02-14 at 12:00, Tom Lane wrote:\n> > Scott Cain <cain@cshl.org> writes:\n> > > [ much stuff ]\n> > \n> > Could we see EXPLAIN ANALYZE, not just EXPLAIN, output for all these\n> > alternatives? Your question boils down to \"why is the planner\n> > misestimating these queries\" ... which is a difficult question to\n> > answer when given only the estimates and not the reality.\n> > \n> > A suggestion though is that you might need to raise the statistics\n> > target on the indexed columns, so that ANALYZE will collect\n> > finer-grained statistics. (See ALTER TABLE ... SET STATISTICS.)\n> > Try booting it up to 100 (from the default 10), re-analyze, and\n> > then see if/how the plans change.\n> > \n> > \t\t\tregards, tom lane\n-- \n------------------------------------------------------------------------\nScott Cain, Ph. D. 
cain@cshl.org\nGMOD Coordinator (http://www.gmod.org/) 216-392-3087\nCold Spring Harbor Laboratory\n\n", "msg_date": "14 Feb 2003 14:22:51 -0500", "msg_from": "Scott Cain <cain@cshl.org>", "msg_from_op": true, "msg_subject": "Re: [Gmod-schema] Re: performace problem after VACUUM" }, { "msg_contents": "\nPG really does not do the right thing as I mentioned earlier for joining \ntables. To force not to use seqscan, it still does not use right index \n(srcfeature_id, nbeg, nend) and performance is even worse.\n\nc_gadfly3=# \\d fl_src_b_e_key;\nIndex \"public.fl_src_b_e_key\"\n Column | Type\n---------------+---------\n srcfeature_id | integer\n nbeg | integer\n nend | integer\nbtree, for table \"public.featureloc\"\n\nc_gadfly3=# explain analyze select * from featureloc fl, feature f where \nf.feature_id = fl.feature_id and srcfeature_id=1 and (nbeg >= 1000 and \nnend <= 2000);\n \nQUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------\n Merge Join (cost=115700.27..232516.85 rows=69007 width=445) (actual \ntime=12342.97..12461.23 rows=32 loops=1)\n Merge Cond: (\"outer\".feature_id = \"inner\".feature_id)\n -> Index Scan using feature_pkey on feature f \n(cost=0.00..110535.53 rows=2060653 width=361) (actual time=17.85..490.86 \nrows=28341 loops=1)\n -> Sort (cost=115700.27..115872.79 rows=69007 width=84) (actual \ntime=11944.23..11944.25 rows=32 loops=1)\n Sort Key: fl.feature_id\n -> Seq Scan on featureloc fl (cost=0.00..107580.43 \nrows=69007 width=84) (actual time=375.85..11944.10 rows=32 loops=1)\n Filter: ((srcfeature_id = 1) AND (nbeg >= 1000) AND \n(nend <= 2000))\n Total runtime: 12461.37 msec\n(8 rows)\n\nc_gadfly3=#\nc_gadfly3=# set enable_seqscan=0;\nSET\nc_gadfly3=# explain analyze select * from featureloc fl, feature f where \nf.feature_id = fl.feature_id and srcfeature_id=1 and (nbeg >= 1000 and \nnend <= 2000);\n \nQUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------\n Merge Join (cost=0.00..236345.49 rows=69007 width=445) (actual \ntime=721.75..26078.64 rows=32 loops=1)\n Merge Cond: (\"outer\".feature_id = \"inner\".feature_id)\n -> Index Scan using fl_feature_id_key on featureloc fl \n(cost=0.00..119701.43 rows=69007 width=84) (actual time=549.14..25854.12 \nrows=32 loops=1)\n Filter: ((srcfeature_id = 1) AND (nbeg >= 1000) AND (nend <= \n2000))\n -> Index Scan using feature_pkey on feature f \n(cost=0.00..110535.53 rows=2060653 width=361) (actual time=50.95..200.37 \nrows=28342 loops=1)\n Total runtime: 26078.80 msec\n(6 rows)\n\n\n\n\nScott Cain wrote:\n\n> An update: I ran alter table as suggested, ie,\n>\n> alter table featureloc alter srcfeature_id set statistics 100;\n>\n> on each column in the table, running vacuum analyze and explain analyze\n> on the query in between each alter to see if it made any difference. It\n> did not. 
Postgres still instists on doing a seq scan on featureloc:\n>\n> Unique (cost=336831.46..337179.45 rows=2320 width=47) (actual\n> time=27219.62..27220.30 rows=179 loops=1)\n> -> Sort (cost=336831.46..336831.46 rows=23200 width=47) (actual\n> time=27219.61..27219.80 rows=186 loops=1)\n> -> Nested Loop (cost=0.00..334732.77 rows=23200 width=47)\n> (actual time=1003.04..27217.99 rows=186 loops=1)\n> -> Seq Scan on featureloc fl (cost=0.00..261709.31\n> rows=23200 width=14) (actual time=814.68..26094.18 rows=186 loops=1)\n> -> Index Scan using feature_pkey on feature f\n> (cost=0.00..3.14 rows=1 width=33) (actual time=6.03..6.03 rows=1\n> loops=186)\n> Total runtime: 27220.63 msec\n>\n>\n> On Fri, 2003-02-14 at 12:29, Scott Cain wrote:\n>\n> >Tom,\n> >\n> >Sorry about that: I'll try to briefly give the information you are\n> >looking for. I've read the docs on ALTER TABLE, but it is not clear to\n> >me what columns I should change STATISTICS on, or should I just do it on\n> >all of the columns for which indexes exist?\n> >\n> >Here's the query again:\n> >\n> >select distinct f.name,fl.nbeg,fl.nend,fl.strand,f.type_id,f.feature_id\n> > from feature f, featureloc fl\n> > where\n> > fl.srcfeature_id = 1 and\n> > ((fl.strand=1 and fl.nbeg <= 393164 and fl.nend >= 390956) OR\n> > (fl.strand=-1 and fl.nend <= 393164 and fl.nbeg >= 390956)) and\n> > f.feature_id = fl.feature_id\n> >\n> >--------------------------------------------------------------------------\n> >\n> >Naive database:\n> >\n> >Unique (cost=75513.46..75513.48 rows=1 width=167) (actual\n> >time=22815.25..22815.93 rows=179 loops=1)\n> > -> Sort (cost=75513.46..75513.46 rows=1 width=167) (actual\n> >time=22815.24..22815.43 rows=186 loops=1)\n> > -> Nested Loop (cost=0.00..75513.45 rows=1 width=167) (actual\n> >time=2471.25..22814.01 rows=186 loops=1)\n> > -> Index Scan using featureloc_idx2 on featureloc fl\n> >(cost=0.00..75508.43 rows=1 width=14) (actual time=2463.83..22796.50\n> >rows=186 loops=1)\n> > -> Index Scan using feature_pkey on feature f\n> >(cost=0.00..5.01 rows=1 width=153) (actual time=0.08..0.08 rows=1\n> >loops=186)\n> >Total runtime: 22816.63 msec\n> >--------------------------------------------------------------------------\n> >\n> >Naive database after featureloc_idx2 dropped:\n> >\n> >Unique (cost=75545.46..75545.48 rows=1 width=167) (actual\n> >time=5232.36..5234.51 rows=179 loops=1)\n> > -> Sort (cost=75545.46..75545.46 rows=1 width=167) (actual\n> >time=5232.35..5232.54 rows=186 loops=1)\n> > -> Nested Loop (cost=0.00..75545.45 rows=1 width=167) (actual\n> >time=291.46..5220.69 rows=186 loops=1)\n> > -> Index Scan using featureloc_src_strand_beg_end on\n> >featureloc fl (cost=0.00..75540.43 rows=1 width=14) (actual\n> >time=291.30..5214.46 rows=186 loops=1)\n> > -> Index Scan using feature_pkey on feature f\n> >(cost=0.00..5.01 rows=1 width=153) (actual time=0.02..0.03 rows=1\n> >loops=186)\n> >Total runtime: 5234.89 msec\n> >--------------------------------------------------------------------------\n> >\n> >Database after VACUUM ANALYZE was run:\n> >\n> >Unique (cost=344377.70..344759.85 rows=2548 width=47) (actual\n> >time=26466.82..26467.51 rows=179 loops=1)\n> > -> Sort (cost=344377.70..344377.70 rows=25477 width=47) (actual\n> >time=26466.82..26467.01 rows=186 loops=1)\n> > -> Nested Loop (cost=0.00..342053.97 rows=25477 width=47)\n> >(actual time=262.66..26465.63 rows=186 loops=1)\n> > -> Seq Scan on featureloc fl (cost=0.00..261709.31\n> >rows=25477 width=14) (actual time=118.62..26006.05 rows=186 
loops=1)\n> > -> Index Scan using feature_pkey on feature f\n> >(cost=0.00..3.14 rows=1 width=33) (actual time=2.45..2.46 rows=1\n> >loops=186)\n> >Total runtime: 26467.85 msec\n> >--------------------------------------------------------------------------\n> >\n> >After disallowing seqscans (set enable_seqscan=0):\n> >\n> >Unique (cost=356513.46..356895.61 rows=2548 width=47) (actual\n> >time=27494.62..27495.34 rows=179 loops=1)\n> > -> Sort (cost=356513.46..356513.46 rows=25477 width=47) (actual\n> >time=27494.61..27494.83 rows=186 loops=1)\n> > -> Nested Loop (cost=0.00..354189.73 rows=25477 width=47)\n> >(actual time=198.88..27493.48 rows=186 loops=1)\n> > -> Index Scan using featureloc_idx1 on featureloc fl\n> >(cost=0.00..273845.08 rows=25477 width=14) (actual time=129.30..27280.95\n> >rows=186 loops=1)\n> > -> Index Scan using feature_pkey on feature f\n> >(cost=0.00..3.14 rows=1 width=33) (actual time=1.13..1.13 rows=1\n> >loops=186)\n> >Total runtime: 27495.66 msec\n> >--------------------------------------------------------------------------\n> >\n> >After dropping featureloc_idx1:\n> >\n> >Unique (cost=1310195.21..1310577.36 rows=2548 width=47) (actual\n> >time=21692.69..21693.37 rows=179 loops=1)\n> > -> Sort (cost=1310195.21..1310195.21 rows=25477 width=47) (actual\n> >time=21692.69..21692.88 rows=186 loops=1)\n> > -> Nested Loop (cost=0.00..1307871.48 rows=25477 width=47)\n> >(actual time=2197.65..21691.39 rows=186 loops=1)\n> > -> Index Scan using featureloc_idx2 on featureloc fl\n> >(cost=0.00..1227526.82 rows=25477 width=14) (actual\n> >time=2197.49..21618.89 rows=186 loops=1)\n> > -> Index Scan using feature_pkey on feature f\n> >(cost=0.00..3.14 rows=1 width=33) (actual time=0.37..0.38 rows=1\n> >loops=186)\n> >Total runtime: 21693.72 msec\n> >--------------------------------------------------------------------------\n> >\n> >After dropping featureloc_idx2:\n> >\n> >Unique (cost=1414516.98..1414899.13 rows=2548 width=47) (actual\n> >time=1669.17..1669.86 rows=179 loops=1)\n> > -> Sort (cost=1414516.98..1414516.98 rows=25477 width=47) (actual\n> >time=1669.17..1669.36 rows=186 loops=1)\n> > -> Nested Loop (cost=0.00..1412193.25 rows=25477 width=47)\n> >(actual time=122.69..1668.08 rows=186 loops=1)\n> > -> Index Scan using featureloc_src_strand_beg_end on\n> >featureloc fl (cost=0.00..1331848.60 rows=25477 width=14) (actual\n> >time=122.51..1661.81 rows=186 loops=1)\n> > -> Index Scan using feature_pkey on feature f\n> >(cost=0.00..3.14 rows=1 width=33) (actual time=0.02..0.03 rows=1\n> >loops=186)\n> >Total runtime: 1670.20 msec\n> >\n> >\n> >On Fri, 2003-02-14 at 12:00, Tom Lane wrote:\n> >\n> >>Scott Cain writes:\n> >>\n> >>>[ much stuff ]\n> >>\n> >>Could we see EXPLAIN ANALYZE, not just EXPLAIN, output for all these\n> >>alternatives? Your question boils down to \"why is the planner\n> >>misestimating these queries\" ... which is a difficult question to\n> >>answer when given only the estimates and not the reality.\n> >>\n> >>A suggestion though is that you might need to raise the statistics\n> >>target on the indexed columns, so that ANALYZE will collect\n> >>finer-grained statistics. (See ALTER TABLE ... 
SET STATISTICS.)\n> >>Try booting it up to 100 (from the default 10), re-analyze, and\n> >>then see if/how the plans change.\n> >>\n> >>\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 14 Feb 2003 13:04:05 -0800", "msg_from": "ShengQiang Shu <sshu@fruitfly.org>", "msg_from_op": false, "msg_subject": "Re: [Gmod-schema] Re: performace problem after VACUUM\tANALYZE" }, { "msg_contents": "Scott Cain <cain@cshl.org> writes:\n> Here is the query that is causing the problems:\n\n> select distinct f.name,fl.nbeg,fl.nend,fl.strand,f.type_id,f.feature_id\n> from feature f, featureloc fl\n> where\n> fl.srcfeature_id = 1 and\n> ((fl.strand=1 and fl.nbeg <= 393164 and fl.nend >= 390956) OR\n> (fl.strand=-1 and fl.nend <= 393164 and fl.nbeg >= 390956)) and\n> f.feature_id = fl.feature_id\n\n> [ and the index he'd like it to use is ]\n\n> Index \"featureloc_src_strand_beg_end\"\n> Column | Type \n> ---------------+----------\n> srcfeature_id | integer\n> strand | smallint\n> nbeg | integer\n> nend | integer\n> btree\n\nAfter fooling with this I see a couple of problems. One is the\nsame old cross-datatype-comparison issue that so frequently bites\npeople: \"1\" and \"-1\" are integer constants, and comparing them to\na smallint column isn't an indexable operation. You need casts.\n(Or, forget the \"optimization\" of storing strand as a smallint.\nDepending on your row layout, it's quite likely saving you no space\nanyway.)\n\nProblem two is that the optimizer isn't smart enough to recognize that a\nquery condition laid out in this form should be processed as two\nindexscans --- it would possibly have gotten it right if the first index\ncolumn had been inside the OR, but not this way. The upshot is that\nwhen you force it to use index featureloc_src_strand_beg_end, it's\nactually only using the srcfeature_id column of the index --- which is\nslow of course, and also explains why the optimizer doesn't find that\noption very attractive.\n\nI had some success in getting current sources to generate a desirable\nplan by doing this:\n\nregression=# explain select distinct *\nregression-# from feature f join featureloc fl on (f.feature_id = fl.feature_id) where\nregression-# ((fl.srcfeature_id = 1 and fl.strand=1::int2 and fl.nbeg <= 393164 and fl.nend >= 390956) OR\nregression(# (fl.srcfeature_id = 1 and fl.strand=-1::int2 and fl.nend <= 393164 and fl.nbeg >= 390956));\n\n Unique (cost=34.79..34.85 rows=5 width=50)\n -> Sort (cost=34.79..34.80 rows=5 width=50)\n Sort Key: f.name, fl.nbeg, fl.nend, fl.strand\n -> Hash Join (cost=9.68..34.73 rows=5 width=50)\n Hash Cond: (\"outer\".feature_id = \"inner\".feature_id)\n -> Seq Scan on feature f (cost=0.00..20.00 rows=1000 width=36)\n -> Hash (cost=9.68..9.68 rows=1 width=14)\n -> Index Scan using featureloc_src_strand_beg_end, featureloc_src_strand_beg_end on featureloc fl (cost=0.00..9.68 rows=1 width=14)\n Index Cond: (((srcfeature_id = 1) AND (strand = 1::smallint) AND (nbeg <= 393164) AND (nend >= 390956)) OR ((srcfeature_id = 1) AND (strand = -1::smallint) AND (nbeg >= 390956) AND (nend <= 393164)))\n Filter: (((srcfeature_id = 1) AND (strand = 1::smallint) AND (nbeg <= 393164) AND (nend >= 390956)) OR ((srcfeature_id = 1) AND (strand = -1::smallint) AND (nend <= 393164) AND (nbeg >= 390956)))\n(10 rows)\n\nShoving the join condition into an explicit JOIN clause is a hack, but\nit nicely does the job of keeping the WHERE clause as a pristine\nOR-of-ANDs structure, so that the optimizer can hardly fail to notice\nthat that's the preferred canonical form.\n\nI 
would strongly recommend an upgrade to PG 7.3, both on general\nprinciples and because you can actually see what the indexscan condition\nis in EXPLAIN's output. Before 7.3 you had to grovel through EXPLAIN\nVERBOSE to be sure what was really happening with a multicolumn index.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 14 Feb 2003 18:19:17 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: performace problem after VACUUM ANALYZE " } ]
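For readers working through this thread on a released version: Tom gets the two-branch indexscan above out of CVS-tip sources, and another way to hand an older planner (7.2/7.3) two simple AND-only conditions is to split the OR into a UNION by hand. The sketch below is untested and simply restates the query from the thread with the ::int2 casts Tom recommends; table, column and index names are the ones used above.

select f.name, fl.nbeg, fl.nend, fl.strand, f.type_id, f.feature_id
  from feature f, featureloc fl
 where fl.srcfeature_id = 1
   and fl.strand = 1::int2          -- strand is smallint, so cast the constant
   and fl.nbeg <= 393164 and fl.nend >= 390956
   and f.feature_id = fl.feature_id
union
select f.name, fl.nbeg, fl.nend, fl.strand, f.type_id, f.feature_id
  from feature f, featureloc fl
 where fl.srcfeature_id = 1
   and fl.strand = -1::int2
   and fl.nend <= 393164 and fl.nbeg >= 390956
   and f.feature_id = fl.feature_id;

UNION (rather than UNION ALL) keeps the duplicate elimination that the original SELECT DISTINCT provided, and each branch on its own is a plain conjunction that can match all four columns of featureloc_src_strand_beg_end.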
[ { "msg_contents": "Tom,\n Thanks for your help in this. We have some flexibility in the schema\ndesign; we are going through our first PostGres performance testing now.\nAm I correct in interpreting your comments as saying you believe that\nif we could lose the OR and the strand constraint PG would probably\nuse the index properly? There is an alternative representation on the table\nthat would do that.\n\nCheers, -Stan\n\nIn a message dated 2/14/2003 6:22:15 PM Eastern Standard Time, \ntgl@sss.pgh.pa.us writes:\n\n> Subj: [Gmod-schema] Re: [PERFORM] performace problem after VACUUM ANALYZE \n> Date: 2/14/2003 6:22:15 PM Eastern Standard Time\n> From: <A HREF=\"mailto:tgl@sss.pgh.pa.us\">tgl@sss.pgh.pa.us</A>\n> To: <A HREF=\"mailto:cain@cshl.org\">cain@cshl.org</A>\n> CC: <A HREF=\"mailto:pgsql-performance@postgresql.org\">pgsql-performance@postgresql.org</A>, <A HREF=\"mailto:gmod-schema@lists.sourceforge.net\">gmod-schema@lists.sourceforge.net</A>\n> Sent from the Internet \n> \n> \n> \n> Scott Cain <cain@cshl.org> writes:\n> >Here is the query that is causing the problems:\n> \n> >select distinct f.name,fl.nbeg,fl.nend,fl.strand,f.type_id,f.feature_id\n> > from feature f, featureloc fl\n> > where\n> > fl.srcfeature_id = 1 and\n> > ((fl.strand=1 and fl.nbeg <= 393164 and fl.nend >= 390956) OR\n> > (fl.strand=-1 and fl.nend <= 393164 and fl.nbeg >= 390956)) and\n> > f.feature_id = fl.feature_id\n> \n> >[ and the index he'd like it to use is ]\n> \n> >Index \"featureloc_src_strand_beg_end\"\n> > Column | Type \n> >---------------+----------\n> > srcfeature_id | integer\n> > strand | smallint\n> > nbeg | integer\n> > nend | integer\n> >btree\n> \n> After fooling with this I see a couple of problems. One is the\n> same old cross-datatype-comparison issue that so frequently bites\n> people: \"1\" and \"-1\" are integer constants, and comparing them to\n> a smallint column isn't an indexable operation. You need casts.\n> (Or, forget the \"optimization\" of storing strand as a smallint.\n> Depending on your row layout, it's quite likely saving you no space\n> anyway.)\n> \n> Problem two is that the optimizer isn't smart enough to recognize that a\n> query condition laid out in this form should be processed as two\n> indexscans --- it would possibly have gotten it right if the first index\n> column had been inside the OR, but not this way. 
The upshot is that\n> when you force it to use index featureloc_src_strand_beg_end, it's\n> actually only using the srcfeature_id column of the index --- which is\n> slow of course, and also explains why the optimizer doesn't find that\n> option very attractive.\n> \n> I had some success in getting current sources to generate a desirable\n> plan by doing this:\n> \n> regression=# explain select distinct *\n> regression-# from feature f join featureloc fl on (f.feature_id = \n> fl.feature_id) where\n> regression-# ((fl.srcfeature_id = 1 and fl.strand=1::int2 and fl.nbeg <= \n> 393164 and fl.nend >= 390956) OR\n> regression(# (fl.srcfeature_id = 1 and fl.strand=-1::int2 and fl.nend <= \n> 393164 and fl.nbeg >= 390956));\n> \n> Unique (cost=34.79..34.85 rows=5 width=50)\n> -> Sort (cost=34.79..34.80 rows=5 width=50)\n> Sort Key: f.name, fl.nbeg, fl.nend, fl.strand\n> -> Hash Join (cost=9.68..34.73 rows=5 width=50)\n> Hash Cond: (\"outer\".feature_id = \"inner\".feature_id)\n> -> Seq Scan on feature f (cost=0.00..20.00 rows=1000 width=36)\n> -> Hash (cost=9.68..9.68 rows=1 width=14)\n> -> Index Scan using featureloc_src_strand_beg_end, \n> featureloc_src_strand_beg_end on featureloc fl (cost=0.00..9.68 rows=1 \n> width=14)\n> Index Cond: (((srcfeature_id = 1) AND (strand = 1::smallint) \n> AND (nbeg <= 393164) AND (nend >= 390956)) OR ((srcfeature_id = 1) AND \n> (strand = -1::smallint) AND (nbeg >= 390956) AND (nend <= 393164)))\n> Filter: (((srcfeature_id = 1) AND (strand = 1::smallint) AND \n> (nbeg <= 393164) AND (nend >= 390956)) OR ((srcfeature_id = 1) AND (strand \n> = -1::smallint) AND (nend <= 393164) AND (nbeg >= 390956)))\n> (10 rows)\n> \n> Shoving the join condition into an explicit JOIN clause is a hack, but\n> it nicely does the job of keeping the WHERE clause as a pristine\n> OR-of-ANDs structure, so that the optimizer can hardly fail to notice\n> that that's the preferred canonical form.\n> \n> I would strongly recommend an upgrade to PG 7.3, both on general\n> principles and because you can actually see what the indexscan condition\n> is in EXPLAIN's output. Before 7.3 you had to grovel through EXPLAIN\n> VERBOSE to be sure what was really happening with a multicolumn index.\n> \n> regards, tom lane\n> \n> \n> -------------------------------------------------------\n> This SF.NET email is sponsored by: FREE SSL Guide from Thawte\n> are you planning your Web Server Security? Click here to get a FREE\n> Thawte SSL guide and find the answers to all your SSL security issues.\n> http://ads.sourceforge.net/cgi-bin/redirect.pl?thaw0026en\n> _______________________________________________\n> Gmod-schema mailing list\n> Gmod-schema@lists.sourceforge.net\n> https://lists.sourceforge.net/lists/listinfo/gmod-schema\n> \n\n\nTom,\n       Thanks for your help in this. We have some flexibility in the schema\ndesign; we are going through our first PostGres performance testing now.\nAm I correct in interpreting your comments as saying you believe that\nif we could lose the OR and the strand constraint PG would probably\nuse the index properly? 
There is an alternative representation on the table\nthat would do that.\n\nCheers, -Stan\n\nIn a message dated 2/14/2003 6:22:15 PM Eastern Standard Time, tgl@sss.pgh.pa.us writes:\n\nSubj: [Gmod-schema] Re: [PERFORM] performace problem after VACUUM ANALYZE \n Date: 2/14/2003 6:22:15 PM Eastern Standard Time\n From: tgl@sss.pgh.pa.us\n To: cain@cshl.org\n CC: pgsql-performance@postgresql.org, gmod-schema@lists.sourceforge.net\nSent from the Internet \n\n\n\nScott Cain <cain@cshl.org> writes:\n>Here is the query that is causing the problems:\n\n>select distinct f.name,fl.nbeg,fl.nend,fl.strand,f.type_id,f.feature_id\n>   from feature f, featureloc fl\n>   where\n>    fl.srcfeature_id = 1 and\n>    ((fl.strand=1  and fl.nbeg <= 393164 and fl.nend >= 390956) OR\n>    (fl.strand=-1 and fl.nend <= 393164 and fl.nbeg >= 390956)) and\n>    f.feature_id  = fl.feature_id\n\n>[ and the index he'd like it to use is ]\n\n>Index \"featureloc_src_strand_beg_end\"\n>   Column    |   Type   \n>---------------+----------\n> srcfeature_id | integer\n> strand     | smallint\n> nbeg      | integer\n> nend      | integer\n>btree\n\nAfter fooling with this I see a couple of problems.  One is the\nsame old cross-datatype-comparison issue that so frequently bites\npeople: \"1\" and \"-1\" are integer constants, and comparing them to\na smallint column isn't an indexable operation.  You need casts.\n(Or, forget the \"optimization\" of storing strand as a smallint.\nDepending on your row layout, it's quite likely saving you no space\nanyway.)\n\nProblem two is that the optimizer isn't smart enough to recognize that a\nquery condition laid out in this form should be processed as two\nindexscans --- it would possibly have gotten it right if the first index\ncolumn had been inside the OR, but not this way.  
The upshot is that\nwhen you force it to use index featureloc_src_strand_beg_end, it's\nactually only using the srcfeature_id column of the index --- which is\nslow of course, and also explains why the optimizer doesn't find that\noption very attractive.\n\nI had some success in getting current sources to generate a desirable\nplan by doing this:\n\nregression=# explain select distinct *\nregression-#  from feature f join featureloc fl on (f.feature_id  = fl.feature_id) where\nregression-#   ((fl.srcfeature_id = 1 and fl.strand=1::int2  and fl.nbeg <= 393164 and fl.nend >= 390956) OR\nregression(#   (fl.srcfeature_id = 1 and fl.strand=-1::int2 and fl.nend <= 393164 and fl.nbeg >= 390956));\n\nUnique  (cost=34.79..34.85 rows=5 width=50)\n  ->  Sort  (cost=34.79..34.80 rows=5 width=50)\n     Sort Key: f.name, fl.nbeg, fl.nend, fl.strand\n     ->  Hash Join  (cost=9.68..34.73 rows=5 width=50)\n        Hash Cond: (\"outer\".feature_id = \"inner\".feature_id)\n        ->  Seq Scan on feature f  (cost=0.00..20.00 rows=1000 width=36)\n        ->  Hash  (cost=9.68..9.68 rows=1 width=14)\n           ->  Index Scan using featureloc_src_strand_beg_end, featureloc_src_strand_beg_end on featureloc fl  (cost=0.00..9.68 rows=1 width=14)\n              Index Cond: (((srcfeature_id = 1) AND (strand = 1::smallint) AND (nbeg <= 393164) AND (nend >= 390956)) OR ((srcfeature_id = 1) AND (strand = -1::smallint) AND (nbeg >= 390956) AND (nend <= 393164)))\n              Filter: (((srcfeature_id = 1) AND (strand = 1::smallint) AND (nbeg <= 393164) AND (nend >= 390956)) OR ((srcfeature_id = 1) AND (strand = -1::smallint) AND (nend <= 393164) AND (nbeg >= 390956)))\n(10 rows)\n\nShoving the join condition into an explicit JOIN clause is a hack, but\nit nicely does the job of keeping the WHERE clause as a pristine\nOR-of-ANDs structure, so that the optimizer can hardly fail to notice\nthat that's the preferred canonical form.\n\nI would strongly recommend an upgrade to PG 7.3, both on general\nprinciples and because you can actually see what the indexscan condition\nis in EXPLAIN's output.  Before 7.3 you had to grovel through EXPLAIN\nVERBOSE to be sure what was really happening with a multicolumn index.\n\n      regards, tom lane\n\n\n-------------------------------------------------------\nThis SF.NET email is sponsored by: FREE  SSL Guide from Thawte\nare you planning your Web Server Security? Click here to get a FREE\nThawte SSL guide and find the answers to all your  SSL security issues.\nhttp://ads.sourceforge.net/cgi-bin/redirect.pl?thaw0026en\n_______________________________________________\nGmod-schema mailing list\nGmod-schema@lists.sourceforge.net\nhttps://lists.sourceforge.net/lists/listinfo/gmod-schema", "msg_date": "Fri, 14 Feb 2003 19:07:49 EST", "msg_from": "SLetovsky@aol.com", "msg_from_op": true, "msg_subject": "Re: [Gmod-schema] Re: performace problem after VACUUM ANALYZE" }, { "msg_contents": "SLetovsky@aol.com writes:\n> Am I correct in interpreting your comments as saying you believe that\n> if we could lose the OR and the strand constraint PG would probably\n> use the index properly?\n\nNo, I said I thought it could do it without that ;-). But yes, you'd\nhave a much less fragile query if you could lose the OR condition.\n\nHave you looked into using a UNION ALL instead of OR to merge the two\nsets of results? 
It sounds grotty, but might be faster...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 14 Feb 2003 19:11:52 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [Gmod-schema] Re: performace problem after VACUUM ANALYZE " }, { "msg_contents": "Hello Tom,\n\nHere's the short answer: I've got it working much faster now (>100 msec\nfor the query by explain analyze).\n\nHere's the long answer: I reworked the table, horribly denormalizing\nit. I changed the coordinate system, so that start is always less than\nend, regardless of strand. Here is the original query:\n\nselect distinct f.name,fl.nbeg,fl.nend,fl.strand,f.type_id,f.feature_id\n from feature f, featureloc fl\n where\n fl.srcfeature_id = 1 and\n ((fl.strand=1 and fl.nbeg <= 393164 and fl.nend >= 390956) OR\n (fl.strand=-1 and fl.nend <= 393164 and fl.nbeg >= 390956)) and\n f.feature_id = fl.feature_id\n\nand here is the equivalent query in the new coordinate system:\n\n select distinct f.name,fl.nbeg,fl.nend,fl.strand,f.type_id,f.feature_id\n from feature f, featureloc fl\n where\n fl.srcfeature_id = 1 and\n f.feature_id = fl.feature_id and\n fl.max >= 390956 and\n fl.min <= 393164\n\nNotice that it is MUCH simpler, and the query planner uses exactly the\nindexes I want, and as noted above, runs much faster. Of course, this\nalso means that I have to rewrite my database adaptor, but it shouldn't\nbe too bad.\n\nFor those on the GMOD list, here is how I changed the table:\n\nalter table featureloc add column min int;\nalter table featureloc add column max int;\nupdate featureloc set min=nbeg where strand=1;\nupdate featureloc set max=nend where strand=1;\nupdate featureloc set max=nbeg where strand=-1;\nupdate featureloc set min=nend where strand=-1;\nupdate featureloc set min=nbeg where (strand=0 or strand is null) and nbeg<nend;\nupdate featureloc set max=nend where (strand=0 or strand is null) and nbeg<nend;\nupdate featureloc set min=nend where (strand=0 or strand is null) and nbeg>nend;\nupdate featureloc set max=nbeg where (strand=0 or strand is null) and nbeg>nend;\ncreate index featureloc_src_min_max on featureloc (srcfeature_id,min,max);\nselect count(*) from featureloc where min is null and nbeg is not null;\n\nThe last select is just a test to make sure I didn't miss anything, and\nit did return zero. Also, it doesn't appear that there are any features\nthat are strandless. I found that a little surprising, but included\nthose updates for completeness.\n\nTom, thank you much for your help. Hopefully, I will get the group to\nbuy into this schema change, and life will be good.\n\nScott\n\nOn Fri, 2003-02-14 at 19:11, Tom Lane wrote:\n> SLetovsky@aol.com writes:\n> > Am I correct in interpreting your comments as saying you believe that\n> > if we could lose the OR and the strand constraint PG would probably\n> > use the index properly?\n> \n> No, I said I thought it could do it without that ;-). But yes, you'd\n> have a much less fragile query if you could lose the OR condition.\n> \n> Have you looked into using a UNION ALL instead of OR to merge the two\n> sets of results? It sounds grotty, but might be faster...\n> \n> \t\t\tregards, tom lane\n-- \n------------------------------------------------------------------------\nScott Cain, Ph. D. 
cain@cshl.org\nGMOD Coordinator (http://www.gmod.org/) 216-392-3087\nCold Spring Harbor Laboratory\n\n", "msg_date": "15 Feb 2003 15:36:31 -0500", "msg_from": "Scott Cain <cain@cshl.org>", "msg_from_op": false, "msg_subject": "Re: [Gmod-schema] Re: performace problem after VACUUM" } ]
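One loose end in the schema change above: the new min and max columns are filled in by one-off UPDATEs, so anything that later inserts or edits featureloc rows has to maintain them as well. A trigger along the following lines would keep them in step automatically. This is only an illustrative sketch, not part of Scott's posted change: it assumes PostgreSQL 7.3 with plpgsql installed, non-null nbeg/nend values, and the column names from his message; the function and trigger names are made up.

-- recompute the denormalized min/max from nbeg/nend on every write
create or replace function featureloc_set_minmax() returns trigger as '
begin
    if new.nbeg <= new.nend then
        new.min := new.nbeg;
        new.max := new.nend;
    else
        new.min := new.nend;
        new.max := new.nbeg;
    end if;
    return new;
end;
' language plpgsql;

create trigger featureloc_minmax
    before insert or update on featureloc
    for each row execute procedure featureloc_set_minmax();

(On releases before 7.3 the function would be declared returns opaque instead of returns trigger.)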
[ { "msg_contents": "Hello,\n\nWhile testing multi-select views I found some problems. Here are details. I have 3 tables and I created a view on them:\n\ncreate view view123 as\nselect key, value from tab1 where key=1\nunion all\nselect key, value from tab2 where key=2\nunion all\nselect key, value from tab3 where key=3;\n\nWhen querying with no conditions, I get plan:\n\ntest_db=# explain analyze select key, value from view123;\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------\n Subquery Scan view123 (cost=0.00..3.19 rows=15 width=11) (actual time=0.15..1.00 rows=15 loops=1)\n -> Append (cost=0.00..3.19 rows=15 width=11) (actual time=0.14..0.80 rows=15 loops=1)\n -> Subquery Scan \"*SELECT* 1\" (cost=0.00..1.06 rows=5 width=11) (actual time=0.13..0.30 rows=5 loops=1)\n -> Seq Scan on tab1 (cost=0.00..1.06 rows=5 width=11) (actual time=0.11..0.22 rows=5 loops=1)\n Filter: (\"key\" = 1)\n -> Subquery Scan \"*SELECT* 2\" (cost=0.00..1.06 rows=5 width=11) (actual time=0.07..0.22 rows=5 loops=1)\n -> Seq Scan on tab2 (cost=0.00..1.06 rows=5 width=11) (actual time=0.05..0.15 rows=5 loops=1)\n Filter: (\"key\" = 2)\n -> Subquery Scan \"*SELECT* 3\" (cost=0.00..1.06 rows=5 width=11) (actual time=0.06..0.22 rows=5 loops=1)\n -> Seq Scan on tab3 (cost=0.00..1.06 rows=5 width=11) (actual time=0.05..0.15 rows=5 loops=1)\n Filter: (\"key\" = 3)\n Total runtime: 1.57 msec\n(12 rows)\n\nBut with \"key = 3\":\n\ntest_db# explain analyze select key, value from view123 where key=3;\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------\n Subquery Scan view123 (cost=0.00..3.22 rows=7 width=11) (actual time=0.40..0.65 rows=5 loops=1)\n -> Append (cost=0.00..3.22 rows=7 width=11) (actual time=0.38..0.58 rows=5 loops=1)\n -> Subquery Scan \"*SELECT* 1\" (cost=0.00..1.07 rows=1 width=11) (actual time=0.18..0.18 rows=0 loops=1)\n -> Seq Scan on tab1 (cost=0.00..1.07 rows=1 width=11) (actual time=0.17..0.17 rows=0 loops=1)\n Filter: ((\"key\" = 1) AND (\"key\" = 3))\n -> Subquery Scan \"*SELECT* 2\" (cost=0.00..1.07 rows=1 width=11) (actual time=0.11..0.11 rows=0 loops=1)\n -> Seq Scan on tab2 (cost=0.00..1.07 rows=1 width=11) (actual time=0.11..0.11 rows=0 loops=1)\n Filter: ((\"key\" = 2) AND (\"key\" = 3))\n -> Subquery Scan \"*SELECT* 3\" (cost=0.00..1.07 rows=5 width=11) (actual time=0.08..0.25 rows=5 loops=1)\n -> Seq Scan on tab3 (cost=0.00..1.07 rows=5 width=11) (actual time=0.06..0.18 rows=5 loops=1)\n Filter: ((\"key\" = 3) AND (\"key\" = 3))\n Total runtime: 1.22 msec\n(12 rows)\n\nI would expect, that false filters, like ((\"key\" = 1) AND (\"key\" = 3)) will make table full scan unnecessary. 
So I expected plan like:\n\ntest_db# explain analyze select key, value from view123 where key=3;\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------\n Subquery Scan view123 (cost=0.00..3.22 rows=7 width=11) (actual time=0.40..0.65 rows=5 loops=1)\n -> Append (cost=0.00..3.22 rows=7 width=11) (actual time=0.38..0.58 rows=5 loops=1)\n -> Subquery Scan \"*SELECT* 1\" (cost=0.00..1.07 rows=1 width=11) (actual time=0.18..0.18 rows=0 loops=1)\n -> Result (cost=0.00..0.00 rows=0 width=11) (actual time=0.01..0.01 rows=0 loops=1)\n ^^^^^^^^^^^ my change\n Filter: ((\"key\" = 1) AND (\"key\" = 3)) [always false]\n ^^^^^^^^^^^ my change\n -> Subquery Scan \"*SELECT* 2\" (cost=0.00..1.07 rows=1 width=11) (actual time=0.11..0.11 rows=0 loops=1)\n -> Result (cost=0.00..0.00 rows=0 width=11) (actual time=0.01..0.01 rows=0 loops=1)\n ^^^^^^^^^^^ my change\n Filter: ((\"key\" = 2) AND (\"key\" = 3)) [always false]\n ^^^^^^^^^^^ my change\n -> Subquery Scan \"*SELECT* 3\" (cost=0.00..1.07 rows=5 width=11) (actual time=0.08..0.25 rows=5 loops=1)\n -> Seq Scan on tab3 (cost=0.00..1.07 rows=5 width=11) (actual time=0.06..0.18 rows=5 loops=1)\n Filter: ((\"key\" = 3) AND (\"key\" = 3))\n Total runtime: 1.22 msec\n(12 rows)\n\nNo \"Seq Scan\" on tables where filter is false.\n\nI realize that's how it works now, but:\n\na) is there any way to avoid such scans?\nb) is it possible (or in TODO) to optimize for such cases?\n\nRegards,\n\nMariusz Czułada\n", "msg_date": "Sun, 16 Feb 2003 00:48:13 +0100", "msg_from": "Mariusz =?iso-8859-2?q?Czu=B3ada?= <manieq@idea.net.pl>", "msg_from_op": true, "msg_subject": "Views with unions" }, { "msg_contents": "Mariusz,\n\n> While testing multi-select views I found some problems. Here are details. I\n> have 3 tables and I created a view on them:\n\nWhat version of PostgreSQL are you using? UNION views optimized extremely \npoorly through 7.2.4; things have been improved in 7.3\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Sat, 15 Feb 2003 19:54:33 -0800", "msg_from": "Josh Berkus <josh@agliodbs.com>", "msg_from_op": false, "msg_subject": "Re: Views with unions" }, { "msg_contents": "Hi,\n\nDnia nie 16. lutego 2003 04:54, Josh Berkus napisał:\n>\n> What version of PostgreSQL are you using? UNION views optimized extremely\n> poorly through 7.2.4; things have been improved in 7.3\n\nPostgreSQL 7.3 on sparc-sun-solaris2.9, compiled by GCC gcc (GCC) 3.2 (self \ncompiled on SunBlade 100).\n\nMariusz\n", "msg_date": "Sun, 16 Feb 2003 09:19:06 +0100", "msg_from": "Mariusz =?iso-8859-2?q?Czu=B3ada?= <manieq@idea.net.pl>", "msg_from_op": true, "msg_subject": "Re: Views with unions" }, { "msg_contents": "On Sat, 15 Feb 2003, Josh Berkus wrote:\n\n> Mariusz,\n>\n> > While testing multi-select views I found some problems. Here are details. I\n> > have 3 tables and I created a view on them:\n>\n> What version of PostgreSQL are you using? UNION views optimized extremely\n> poorly through 7.2.4; things have been improved in 7.3\n\nYeah, but I think what he's hoping is that it'll notice that\n\"key=1 and key=3\" would be noticed as a false condition so that it doesn't\nscan those tables since a row presumably can't satisify both. The question\nwould be, is the expense of checking the condition for all queries\ngreater than the potential gain for these sorts of queries. 
In addition,\nyou'd have to be careful to make it work correctly with operator\noverloading, since someone could make operators whose semantics in\ncross-datatype comparisons are wierd.\n\n", "msg_date": "Sun, 16 Feb 2003 10:05:34 -0800 (PST)", "msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>", "msg_from_op": false, "msg_subject": "Re: Views with unions" }, { "msg_contents": "Stephan Szabo <sszabo@megazone23.bigpanda.com> writes:\n> Yeah, but I think what he's hoping is that it'll notice that\n> \"key=1 and key=3\" would be noticed as a false condition so that it doesn't\n> scan those tables since a row presumably can't satisify both. The question\n> would be, is the expense of checking the condition for all queries\n> greater than the potential gain for these sorts of queries.\n\nYes, this is the key point: we won't put in an optimization that wins on\na small class of queries unless there is no material cost added for\nplanning cases where it doesn't apply.\n\n> In addition, you'd have to be careful to make it work correctly with\n> operator overloading, since someone could make operators whose\n> semantics in cross-datatype comparisons are wierd.\n\nIn practice we would restrict such deductions to mergejoinable =\noperators, which are sufficiently semantics-constrained that I think\nyou can treat equality at face value.\n\nActually, in CVS tip we are on the hairy edge of being able to do this:\ngenerate_implied_equalities() actually detects that the given conditions\nimply that two constants are equal. But it doesn't do anything with the\nknowledge, because I couldn't figure out just what to do --- it's not\nalways correct to add a \"WHERE false\" constraint to the top level, but\nwhere exactly do we add it? Exactly which relations are guaranteed to\nproduce zero rows in such a case? (When there are outer joins in the\npicture, zero rows out of some relations doesn't mean zero rows out\noverall.) And how do we exploit that knowledge once we've got it?\nIt'd be a pretty considerable amount of work to optimize a plan tree\nfully for this sort of thing (eg, suppressing unnecessary joins), and\nI doubt it's worth the trouble.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 16 Feb 2003 13:51:18 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Views with unions " }, { "msg_contents": "Dnia nie 16. lutego 2003 19:51, Tom Lane napisał:\n> Stephan Szabo <sszabo@megazone23.bigpanda.com> writes:\n> > Yeah, but I think what he's hoping is that it'll notice that\n> > \"key=1 and key=3\" would be noticed as a false condition so that it\n> > doesn't scan those tables since a row presumably can't satisify both. The\n\nYes, that is what I expected.\n\n>\n> Yes, this is the key point: we won't put in an optimization that wins on\n> a small class of queries unless there is no material cost added for\n> planning cases where it doesn't apply.\n>\n> > In addition, you'd have to be careful to make it work correctly with\n> > operator overloading, since someone could make operators whose\n> > semantics in cross-datatype comparisons are wierd.\n>\n> It'd be a pretty considerable amount of work to optimize a plan tree\n> fully for this sort of thing (eg, suppressing unnecessary joins), and\n> I doubt it's worth the trouble.\n\nOk, perhaps I should give some explaination about my case.\n\nWe are gathering lots of log data in a few tables. Each table grows by some \n300.000...500.000 rows a day. 
With average row size of 100 bytes we get up to \n50MB of data per day. Keeping data for 1 year only gives us some 18GB per \ntable. Also, in each table there is a field with very low cardinality (5..20 \nunique values). This field appears in most of our queries to the table, in \n'where' clause (mostly key_field = 5, some times key_field in (1,2,3)).\n\nWhat I was thinking of is to implement some kind of horizontal table \npartitioning. I wanted to split physical storage of data to few smaller \ntables. In my case it could be come 12 subtables, 1..2 GB each. Now, with \n'union-all' view (and lots of rules, of course) I could simultate partitioned \ntable as Oracle implements it. IMHO while querying this view (supertable) for \none or few 'key_field' values it should be much faster for scan 5 GB of 3 \npartitions (subtables) than 18GB for one big table.\n\nI realize it is not the only solution. Perhaps it could be implemented by a \nfunction taking key_filed value and returning all rows from proper table \n(p[lus functions for insert/update/delete). Perhaps application code (log \nfeeder and report module) could be recoded to know about splitted tables. \nStill I think it is 'elegant' and clear. \n\nI wait for your comments,\n\nMariusz Czulada\n", "msg_date": "Sun, 16 Feb 2003 21:27:31 +0100", "msg_from": "Mariusz =?iso-8859-2?q?Czu=B3ada?= <manieq@idea.net.pl>", "msg_from_op": true, "msg_subject": "Re: Views with unions" } ]
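For anyone sketching the same arrangement, the "lots of rules" part of the partitioned-view idea, written against the tab1/tab2/tab3 and view123 example from the top of the thread, would look roughly like this. It is an untested sketch with made-up rule names, and, as discussed above, a SELECT through the view in 7.3 still touches every branch, so reports that know the key value are better pointed at the matching subtable directly.

-- the unconditional rule stops the original insert against the view
-- from being attempted; the conditional rules route each row to its
-- subtable based on the key value
create rule view123_ins as on insert to view123
    do instead nothing;

create rule view123_ins_1 as on insert to view123
    where new.key = 1
    do insert into tab1 values (new.key, new.value);

create rule view123_ins_2 as on insert to view123
    where new.key = 2
    do insert into tab2 values (new.key, new.value);

create rule view123_ins_3 as on insert to view123
    where new.key = 3
    do insert into tab3 values (new.key, new.value);

Note that a row whose key matches none of the conditions is silently dropped under this scheme, so the log feeder has to guarantee the key set or an extra catch-all rule is needed.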
[ { "msg_contents": "Hi,\n\nI have following tables:\n\nwith id as number(20,0):\nCREATE TABLE public.firm (\n firm_id numeric(20, 0) NOT NULL,\n name varchar(40) NOT NULL,\n CONSTRAINT firm_pkey PRIMARY KEY (firm_id)\n)\n\nwith id as int8:\n\nCREATE TABLE public.firmint8 (\n firmint8_id int8 NOT NULL,\n name varchar(40) NOT NULL,\n CONSTRAINT firmint8_pkey PRIMARY KEY (firmint8_id)\n)\n\nmy system:\n- dual PIII 800 MHz with 640 MB RAM\n- cygwin\n- PostgreSQL 7.3.1 (default configuration after install thru cygwin)\n- J2SE 1.4.1_01\n- JDBC driver for J2SE 1.4.1_01 and J2SE 1.3.1_06\n\nI get very bad performance inserting 1000 simple values in the tables \ndefined above. I'm using PreparedStatement without Batch.\n\nwith J2SE 1.4.1_01 it need:\n\njava db.InsertFirmSQLNumber\nInsertFirmSQLNumber() needed 74438 for creating 1000 entries\nInsertFirmSQLNumber() needed 53140 for creating 1000 entries\n\njava db.InsertFirmSQLInt8\nInsertFirmSQLInt8() needed 44531 for creating 1000 entries\nInsertFirmSQLInt8() needed 63500 for creating 1000 entries\nInsertFirmSQLInt8() needed 70578 for creating 1000 entries\nInsertFirmSQLInt8() needed 68375 for creating 1000 entries\nInsertFirmSQLInt8() needed 80234 for creating 1000 entries\n\n\nwith J2SE 1.3.1_06 it need:\n\njava db.InsertFirmSQLNumber\nInsertFirmSQLNumber() needed 40093 for creating 1000 entries\nInsertFirmSQLNumber() needed 39016 for creating 1000 entries\nInsertFirmSQLNumber() needed 39579 for creating 1000 entries\n\njava db.InsertFirmSQLInt8\nInsertFirmSQLInt8() needed 75437 for creating 1000 entries\nInsertFirmSQLInt8() needed 39156 for creating 1000 entries\nInsertFirmSQLInt8() needed 41421 for creating 1000 entries\nInsertFirmSQLInt8() needed 41156 for creating 1000 entries\n\n\nand there is the Java code:\n\n DriverManager.registerDriver(new org.postgresql.Driver());\n Connection conn = DriverManager.getConnection(db, dbuser, dbpassword);\n PreparedStatement pstmt = null;\n ResultSet rs = null;\n\n if (conn != null) {\n String query = \"insert into firm values(?,?)\";\n pstmt = conn.prepareStatement(query);\n\n long start = System.currentTimeMillis();\n for (int i = 0; i < N; i++) {\n pstmt.setLong(1, getUniquelongID());\n pstmt.setString(2, \"\" + i);\n pstmt.executeUpdate();\n }\n long end = System.currentTimeMillis() - start;\n\n System.out.println(\"InsertFirmSQLInt8() needed \" + end + \" for \ncreating \" + N + \" entries\");\n }\n\n closeConnections(conn, pstmt, rs);\n }\n\nIs this a JDBC driver or PostgreSQL configuration problem? Or is the \nperformance normal?\n\n\nBest Regards,\nRafal \n\n", "msg_date": "Mon, 17 Feb 2003 00:03:01 +0100", "msg_from": "Rafal Kedziorski <rafcio@polonium.de>", "msg_from_op": true, "msg_subject": "Good performance?" }, { "msg_contents": "> ....\n> pstmt.setLong(1, getUniquelongID());\n>....\n\nWhat is getUniquelongID()? Can you post the code for that? I would\nsuspect that might be your problem.\n\nYour results point to something being wrong somewhere. Just yesterday I\nwas doing some benchmarking of my own, and using code similar to yours I\nwas inserting 10000 records in about 23 seconds.\n\njohn\n\n", "msg_date": "Sun, 16 Feb 2003 18:23:23 -0500", "msg_from": "\"John Cavacas\" <oogly@rogers.com>", "msg_from_op": false, "msg_subject": "Re: Good performance?" }, { "msg_contents": "At 18:23 16.02.2003 -0500, John Cavacas wrote:\n> > ....\n> > pstmt.setLong(1, getUniquelongID());\n> >....\n>\n>What is getUniquelongID()? Can you post the code for that? 
I would\n>suspect that might be your problem.\n\nhere is the code.\n\nprivate final static long getUniquelongID() {\n return (System.currentTimeMillis() * 1000 + (long) (100000 * \nMath.random()));\n}\n\nbut this routine is very fast. for computing 100.000 values she need 6-7 \nseconds.\n\n>Your results point to something being wrong somewhere. Just yesterday I\n>was doing some benchmarking of my own, and using code similar to yours I\n>was inserting 10000 records in about 23 seconds.\n>\n>john\n>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 3: if posting/reading through Usenet, please send an appropriate\n>subscribe-nomail command to majordomo@postgresql.org so that your\n>message can get through to the mailing list cleanly\n\n", "msg_date": "Mon, 17 Feb 2003 01:23:45 +0100", "msg_from": "Rafal Kedziorski <rafcio@polonium.de>", "msg_from_op": true, "msg_subject": "Re: Good performance?" }, { "msg_contents": "Rafal,\n\nPerformance of postgres running under cygwin isn't great. Can you try \nthe same test on a different platform? It also looks like you are \nrunning in autocommit mode. You should see a significant performance \nimprovement if you batch your commits in say groups of 1000 inserts per \ncommit.\n\nthanks,\n--Barry\n\nRafal Kedziorski wrote:\n> Hi,\n> \n> I have following tables:\n> \n> with id as number(20,0):\n> CREATE TABLE public.firm (\n> firm_id numeric(20, 0) NOT NULL,\n> name varchar(40) NOT NULL,\n> CONSTRAINT firm_pkey PRIMARY KEY (firm_id)\n> )\n> \n> with id as int8:\n> \n> CREATE TABLE public.firmint8 (\n> firmint8_id int8 NOT NULL,\n> name varchar(40) NOT NULL,\n> CONSTRAINT firmint8_pkey PRIMARY KEY (firmint8_id)\n> )\n> \n> my system:\n> - dual PIII 800 MHz with 640 MB RAM\n> - cygwin\n> - PostgreSQL 7.3.1 (default configuration after install thru cygwin)\n> - J2SE 1.4.1_01\n> - JDBC driver for J2SE 1.4.1_01 and J2SE 1.3.1_06\n> \n> I get very bad performance inserting 1000 simple values in the tables \n> defined above. 
I'm using PreparedStatement without Batch.\n> \n> with J2SE 1.4.1_01 it need:\n> \n> java db.InsertFirmSQLNumber\n> InsertFirmSQLNumber() needed 74438 for creating 1000 entries\n> InsertFirmSQLNumber() needed 53140 for creating 1000 entries\n> \n> java db.InsertFirmSQLInt8\n> InsertFirmSQLInt8() needed 44531 for creating 1000 entries\n> InsertFirmSQLInt8() needed 63500 for creating 1000 entries\n> InsertFirmSQLInt8() needed 70578 for creating 1000 entries\n> InsertFirmSQLInt8() needed 68375 for creating 1000 entries\n> InsertFirmSQLInt8() needed 80234 for creating 1000 entries\n> \n> \n> with J2SE 1.3.1_06 it need:\n> \n> java db.InsertFirmSQLNumber\n> InsertFirmSQLNumber() needed 40093 for creating 1000 entries\n> InsertFirmSQLNumber() needed 39016 for creating 1000 entries\n> InsertFirmSQLNumber() needed 39579 for creating 1000 entries\n> \n> java db.InsertFirmSQLInt8\n> InsertFirmSQLInt8() needed 75437 for creating 1000 entries\n> InsertFirmSQLInt8() needed 39156 for creating 1000 entries\n> InsertFirmSQLInt8() needed 41421 for creating 1000 entries\n> InsertFirmSQLInt8() needed 41156 for creating 1000 entries\n> \n> \n> and there is the Java code:\n> \n> DriverManager.registerDriver(new org.postgresql.Driver());\n> Connection conn = DriverManager.getConnection(db, dbuser, \n> dbpassword);\n> PreparedStatement pstmt = null;\n> ResultSet rs = null;\n> \n> if (conn != null) {\n> String query = \"insert into firm values(?,?)\";\n> pstmt = conn.prepareStatement(query);\n> \n> long start = System.currentTimeMillis();\n> for (int i = 0; i < N; i++) {\n> pstmt.setLong(1, getUniquelongID());\n> pstmt.setString(2, \"\" + i);\n> pstmt.executeUpdate();\n> }\n> long end = System.currentTimeMillis() - start;\n> \n> System.out.println(\"InsertFirmSQLInt8() needed \" + end + \" \n> for creating \" + N + \" entries\");\n> }\n> \n> closeConnections(conn, pstmt, rs);\n> }\n> \n> Is this a JDBC driver or PostgreSQL configuration problem? Or is the \n> performance normal?\n> \n> \n> Best Regards,\n> Rafal\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n> \n\n\n", "msg_date": "Sun, 16 Feb 2003 20:45:01 -0800", "msg_from": "Barry Lind <blind@xythos.com>", "msg_from_op": false, "msg_subject": "Re: Good performance?" }, { "msg_contents": "hi,\n\nBarry Lind wrote:\n\n> Rafal,\n>\n> Performance of postgres running under cygwin isn't great. Can you try \n> the same test on a different platform? It also looks like you are \n> running in autocommit mode. You should see a significant performance \n> improvement if you batch your commits in say groups of 1000 inserts \n> per commit. \n\nafter set autocommit false, I need 0,9 - 1,04 seconds for insert 1000 \nnew entries into my table. is this normal, that autocommit false is \n40-50 times slower?\n\n\nRafal\n\n> thanks,\n> --Barry\n\n\n\n", "msg_date": "Mon, 17 Feb 2003 10:12:45 +0100", "msg_from": "Rafal Kedziorski <rafcio@polonium.de>", "msg_from_op": true, "msg_subject": "Re: Good performance?" }, { "msg_contents": "Rafal Kedziorski wrote:\n<cut>\n> after set autocommit false, I need 0,9 - 1,04 seconds for insert 1000 \n> new entries into my table. is this normal, that autocommit false is \n> 40-50 times slower?\n> \n> \n> Rafal\nIt is possible when you have \"fsync=false\" in your postgresql.conf. 
\n(don't change it if you don't have to).\nRegards,\nTomasz Myrta\n\n", "msg_date": "Mon, 17 Feb 2003 11:58:57 +0100", "msg_from": "Tomasz Myrta <jasiek@klaster.net>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Good performance?" }, { "msg_contents": "Tomasz Myrta wrote:\n\n> Rafal Kedziorski wrote:\n> <cut>\n>\n>> after set autocommit false, I need 0,9 - 1,04 seconds for insert 1000 \n>> new entries into my table. is this normal, that autocommit false is \n>> 40-50 times slower?\n>>\n>>\n>> Rafal\n>\n> It is possible when you have \"fsync=false\" in your postgresql.conf. \n> (don't change it if you don't have to). \n\nfsync is:\n#fsync = true\n\nbut there are my new start options: postmaster -i -o -F -D ...\n\nafter set fsync false I get this Performance for creating new entries \nwith entity beans:\n\nneeded 9223 for creating 1000 entries\n\ninstead of about 50.000 milliseconds. it's possible to make it faster?\n\n\nRafal\n\n> Regards,\n\n> Tomasz Myrta\n\n\n", "msg_date": "Mon, 17 Feb 2003 12:25:22 +0100", "msg_from": "Rafal Kedziorski <rafcio@polonium.de>", "msg_from_op": true, "msg_subject": "Re: [PERFORM] Good performance?" }, { "msg_contents": "Rafal Kedziorski wrote:\n<snip>\n> instead of about 50.000 milliseconds. it's possible to make it faster?\n\nHi Rafal,\n\nHave you tuned the memory settings of PostgreSQL yet?\n\nRegards and best wishes,\n\nJustin Clift\n\n> Rafal\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n- Indira Gandhi\n\n", "msg_date": "Mon, 17 Feb 2003 22:57:37 +1030", "msg_from": "Justin Clift <justin@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: [JDBC] Good performance?" }, { "msg_contents": "Justin Clift wrote:\n\n> Rafal Kedziorski wrote:\n> <snip>\n>\n>> instead of about 50.000 milliseconds. it's possible to make it faster?\n>\n>\n> Hi Rafal,\n>\n> Have you tuned the memory settings of PostgreSQL yet? \n\nI'm working on it.\n\n\nRafal\n\n> Regards and best wishes,\n>\n> Justin Clift\n\n\n", "msg_date": "Mon, 17 Feb 2003 13:43:37 +0100", "msg_from": "Rafal Kedziorski <rafcio@polonium.de>", "msg_from_op": true, "msg_subject": "Re: [PERFORM] Good performance?" }, { "msg_contents": "Rafal,\n\nI would expect things to be slower with a commit after each insert, \nsince it is the commit that forces the data to be written to disk. \nHowever 50x seems a bit much and I think is due to cygwin performance.\n\nI ran your code on my laptop running RH7.3 and get the following results:\n\nRunning with autocommit on:\nInsertFirmSQLInt8() needed 5129 for creating 1000 entries\nInsertFirmSQLInt8() needed 5417 for creating 1000 entries\nInsertFirmSQLInt8() needed 4976 for creating 1000 entries\nInsertFirmSQLInt8() needed 4162 for creating 1000 entries\n\nRunning with autocommit off:\nInsertFirmSQLInt8() needed 1250 for creating 1000 entries\nInsertFirmSQLInt8() needed 932 for creating 1000 entries\nInsertFirmSQLInt8() needed 1000 for creating 1000 entries\nInsertFirmSQLInt8() needed 1321 for creating 1000 entries\nInsertFirmSQLInt8() needed 1248 for creating 1000 entries\n\nOn linux I see about a 5x slowdown which is more in line with what I \nwould expect.\n\nthanks,\n--Barry\n\n\nRafal Kedziorski wrote:\n> hi,\n> \n> Barry Lind wrote:\n> \n>> Rafal,\n>>\n>> Performance of postgres running under cygwin isn't great. Can you try \n>> the same test on a different platform? 
It also looks like you are \n>> running in autocommit mode. You should see a significant performance \n>> improvement if you batch your commits in say groups of 1000 inserts \n>> per commit. \n> \n> \n> after set autocommit false, I need 0,9 - 1,04 seconds for insert 1000 \n> new entries into my table. is this normal, that autocommit false is \n> 40-50 times slower?\n> \n> \n> Rafal\n> \n>> thanks,\n>> --Barry\n> \n> \n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> \n\n\n", "msg_date": "Mon, 17 Feb 2003 09:50:26 -0800", "msg_from": "Barry Lind <blind@xythos.com>", "msg_from_op": false, "msg_subject": "Re: Good performance?" }, { "msg_contents": "Rafal, Tomasz,\n\n> It is possible when you have \"fsync=false\" in your postgresql.conf. \n> (don't change it if you don't have to).\n\nYou should NOT turn off fsync unless you know what you are doing. With fsync \noff, your database can be unrecoverably corrupted after an unexpected \npower-out, and you will be forced to restore from your last backup.\n\n-- \n-Josh Berkus\n Aglio Database Solutions\n San Francisco\n\n", "msg_date": "Mon, 17 Feb 2003 12:03:58 -0800", "msg_from": "Josh Berkus <josh@agliodbs.com>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Good performance?" } ]
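The thread above converges on two changes: turn off autocommit so the 1000 rows share one transaction, and batch the statements so they are not shipped one round trip at a time. A minimal JDBC sketch of that combination, not code taken from the thread itself: the connection URL, credentials, table layout and row count are placeholders standing in for whatever the real schema uses.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class BatchedInsertSketch {
    public static void main(String[] args) throws Exception {
        Class.forName("org.postgresql.Driver");
        // Placeholder connection details; substitute the real host, database and credentials.
        Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost/test", "user", "password");
        conn.setAutoCommit(false);               // one transaction for the whole load

        PreparedStatement pstmt = conn.prepareStatement(
                "insert into firmint8 values (?, ?)");

        final int N = 1000;
        for (int i = 0; i < N; i++) {
            pstmt.setLong(1, i);                 // any unique id scheme works for the test
            pstmt.setString(2, "name-" + i);
            pstmt.addBatch();                    // queue the row instead of executing it immediately
        }
        pstmt.executeBatch();                    // ship all queued rows together
        conn.commit();                           // a single commit (one WAL flush) for all N rows

        pstmt.close();
        conn.close();
    }
}

On a non-cygwin platform a loop of roughly this shape is what lies behind the one-second-per-1000-rows figures quoted in the thread above; the exact numbers will of course depend on fsync settings and hardware.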
[ { "msg_contents": "Ryan,\n\nI am crossing this discussion to the PGSQL-PERFORMANCE list, which is the \nproper place for it. Anyone interested, please follow us there!\n\n>>>Ryan Bradetich said:\n > the table would look like:\n > 1 | Mon Feb 17 00:34:24 MST 2003 | p101 | user x has an invalid shell.\n > 1 | Mon Feb 17 00:34:24 MST 2003 | p101 | user y has an invalid shell.\n > 1 | Mon Feb 17 00:34:24 MST 2003 | p101 | user y has expired password.\n > 2 | Mon Feb 17 00:34:24 MST 2003 | f101 | file /foo has improper owner.\n > etc...\n > \n > So I do not need the anomaly to be part of the index, I only need it to \n > \n > I agree with you, that I would not normally add the anomally to the\n > index, except for the unique row requirement. Thinking about it now,\n > maybe I should guarentee unique rows via a check constraint...\n > \n > Thanks for making me think about this in a different way!\n\nFirst off, I'm not clear on why a duplicate anominaly would be necessarily \ninvalid, given the above. Not useful, certainly, but legitimate real data.\n\nI realize that you have performance considerations to work with. However, I \nmust point out that your database design is not *at all* normalized ... and \nthat normalizing it might solve some of your indexing problems.\n\nA normalized structure would look something like:\n\nTABLE operations \n\tid serial not null primary key,\n\thost_id int not null,\n\ttimeoccurred timestamp not null default now(),\n\tcategory varchar(5) not null,\n\tconstraint operations_unq unique (host_id, timeoccurred, category)\n\nTABLE anominalies\n\tid serial not null primary key,\n\toperation_id int not null references operations(id) on delete cascade,\n\tanominaly text\n\nThis structure would have two advantages for you:\n1) the operations table would be *much* smaller than what you have now, as you \nwould not be repeating rows for each anominaly. \n2) In neither table would you be including the anominaly text in an index ... \nthus reducing index size tremendously.\n\nAs a warning, though: you may find that for insert speed the referential \nintegrity key is better maintained at the middleware layer. We've had some \ncomplaints about slow FK enforcement in Postgres.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Tue, 18 Feb 2003 09:37:20 -0800", "msg_from": "Josh Berkus <josh@agliodbs.com>", "msg_from_op": true, "msg_subject": "Re: Questions about indexes?" 
}, { "msg_contents": "Does there currently exist any kind of script that can be run on\nPostgres to conduct a complete feature coverage test with varying\ndataset sizes to compare performance between functionality changes?\n\nThe interest of course is to get a baseline of performance and then to\nsee how manipulation of internal algorithms or vacuum frequency or WAL\nlogs being place on a separate physical disk affect the performance of\nthe various features with various dataset sizes.\n\nIf not, how many people would be interested in such a script being\nwritten?\n\nKeith Bottner\nkbottner@istation.com\n\n\"Vegetarian - that's an old Indian word meaning 'lousy hunter.'\" - Andy\nRooney\n\n", "msg_date": "Tue, 18 Feb 2003 13:48:19 -0600", "msg_from": "\"Keith Bottner\" <kbottner@istation.com>", "msg_from_op": false, "msg_subject": "Performance Baseline Script" }, { "msg_contents": "Keith,\n\n> Does there currently exist any kind of script that can be run on\n> Postgres to conduct a complete feature coverage test with varying\n> dataset sizes to compare performance between functionality changes?\n\nNo.\n\n> If not, how many people would be interested in such a script being\n> written?\n\n\nJust about everyon on this list, as well as Advocacy and Hackers, to\njudge by the current long-running thread on the topic, which has\nmeandered across several lists.\n\n-Josh Berkus\n", "msg_date": "Tue, 18 Feb 2003 12:43:48 -0800", "msg_from": "\"Josh Berkus\" <josh@agliodbs.com>", "msg_from_op": false, "msg_subject": "Re: Performance Baseline Script" }, { "msg_contents": "Josh,\n\nPosting to the performance list as requested :) The reason I orgionally\nposted to the hackers list was I was curious about the contents of the\nindex and how they worked.... but now this thread is more about\nperformance, so this list is more appropriate.\n\n\nOn Tue, 2003-02-18 at 10:37, Josh Berkus wrote:\n> Ryan,\n> \n> I am crossing this discussion to the PGSQL-PERFORMANCE list, which is the \n> proper place for it. Anyone interested, please follow us there!\n> \n> >>>Ryan Bradetich said:\n> > the table would look like:\n> > 1 | Mon Feb 17 00:34:24 MST 2003 | p101 | user x has an invalid shell.\n> > 1 | Mon Feb 17 00:34:24 MST 2003 | p101 | user y has an invalid shell.\n> > 1 | Mon Feb 17 00:34:24 MST 2003 | p101 | user y has expired password.\n> > 2 | Mon Feb 17 00:34:24 MST 2003 | f101 | file /foo has improper owner.\n> > etc...\n> > \n> > So I do not need the anomaly to be part of the index, I only need it to \n> > \n> > I agree with you, that I would not normally add the anomally to the\n> > index, except for the unique row requirement. Thinking about it now,\n> > maybe I should guarentee unique rows via a check constraint...\n> > \n> > Thanks for making me think about this in a different way!\n> \n> First off, I'm not clear on why a duplicate anominaly would be necessarily \n> invalid, given the above. Not useful, certainly, but legitimate real data.\n\nDuplicate anomalies are not invalid, they are only invalid if they are\nfor the same system, same category, from the same compliancy report.\n\nie. 
This is bad:\n1 | Mon Feb 17 00:34:24 MST 2003 | p101 | user x has an invalid shell.\n1 | Mon Feb 17 00:34:24 MST 2003 | p101 | user x has an invalid shell.\n\nThis is ok:\n1 | Mon Feb 17 00:34:24 MST 2003 | p101 | user x has an invalid shell.\n1 | Mon Feb 17 00:34:24 MST 2003 | p101 | user y has an invalid shell.\n\nThe only reason the duplicate date would occur is if the same compliancy\nreport was entered into the database twice. (ie. The host did not\ngenerate a new compliancy report, or a bug in the data stuffing script,\netc). Daniel Kalchev from the pgsql-hackers list had a good idea that I\nam investigating, which is to have the archive script be responsible for\npreventing duplicate entries into the database, thus the requirement for\nan index to do this is eliminated.\n\nThe whole reason I decided to not allow duplicte entries into the\narchitve table is so when I generate reports, I do not have to put the\ndistinct qualifier on the queries which eliminates the sort and speeds\nup the queries. The entire purpose of the index was to maintain good\ndata integrity in the archive tables for reporting purposes. If I can\nenforce the data integrity another way (ie, data stuffer scripts, index\non md5 hash of the data, etc) then I can drop this huge index and be\nhappy :)\n\n> I realize that you have performance considerations to work with. However, I \n> must point out that your database design is not *at all* normalized ... and \n> that normalizing it might solve some of your indexing problems.\n\nPlease do point out these design errors! I am always interested in\nlearning more about normialization since I do not have any formal DBA\ntraining, and most of my knowledge is from reading books, mailing lists,\nand experimenting :)\n\n\n> A normalized structure would look something like:\n> \n> TABLE operations \n> \tid serial not null primary key,\n> \thost_id int not null,\n> \ttimeoccurred timestamp not null default now(),\n> \tcategory varchar(5) not null,\n> \tconstraint operations_unq unique (host_id, timeoccurred, category)\n> \n> TABLE anominalies\n> \tid serial not null primary key,\n> \toperation_id int not null references operations(id) on delete cascade,\n> \tanominaly text\n> \n> This structure would have two advantages for you:\n> 1) the operations table would be *much* smaller than what you have now, as you \n> would not be repeating rows for each anominaly.\n\nI agree the operations table would be smaller, but you have also added\nsome data by breaking it up into 2 tables. You have an oid (8 bytes) +\noperations.id (4 bytes) + anomalies.id (4 bytes) + operation_id (4\nbytes) + tuple overhead[2] (36 bytes).\n\nThe anomalies.id and operation_id will be duplicated for all \n~85 Millon rows[1], but we did remove the host_id (4 bytes), timestamp\n(8 bytes) and category (~5 bytes) ... for a savings of 9 bytes / row.\n\nMy quick estimation shows this saves ~ 730 MB in the table and the index\nfor a combined total of 1.46 GB (The reason the index savings in the\nanomalies is not more is explain further in response to point 2.).\n\nSo to gain any space savings from breaking the tables up, the total size\nof the operations table + primary index + operatoins_unq index < 1.46\nGB.\n\nThe operations table contains oid (8 bytes) + host_id (4 bytes) +\ntimestamp (8 bytes) + category (~5 bytes) + tuple overhead[2] (36\nbytes). 
Also since every category is duplicated in either the primary\nindex or the operations_unq index, the index sizes will be approximately\nthe size of the main table.\n\nSo 730 MB / (61 bytes) == ~ 12.5 Million rows.\n\nA quick calculation of hosts * categories * data points show that we\ncould have a maximum of ~ 12 million entries[3] in the operation table,\nso this would save some space :)\n\n\n> 2) In neither table would you be including the anominaly text in an index ... \n> thus reducing index size tremendously.\n\nUnless I impliment Daniel's method of verifying the uniqness at the data\ninsertion point, I will still need to guarentee the anomaly is unique\nfor the given operation_id. If I mis-read the table schema, would you\nplease point it out to me .. I'm probably being dense :)\n\nAlso, I do not understand why the anomalies table need the id key for\nthe primary key. Wouldn't the operation_id and the anomaly form the\nprimary key? We could save 8 bytes (4 from table + 4 from index) * ~85\nMillion rows by removing this column.\n\n\n> As a warning, though: you may find that for insert speed the referential \n> integrity key is better maintained at the middleware layer. We've had some \n> complaints about slow FK enforcement in Postgres.\n\n\nThanks, I will keep this in mind. Although the inserts are usually done\nin a batch job ... so interactive speed is generally not an issue ...\nbut faster is always better :)\n\nAlso I am curious ... With the table more nomalized, I now need to\nperform a join when selecting data.... I realize there will be fewer \npages to read from disk (which is good!) when doing this join, but I\nwill interested to see how the join affects the interactive performance\nof the queries.... something to test :)\n\n\nThanks for looking at this, Josh, and providing input. Hopefully by\nexplaining my figuring and thinking you can see what am I looking at ...\nand point out additional flaws in my methods :) Unfortunately I still\ndo not think I can remove the anomaly field from the index yet, even by\nnormalizing the tables like you did.\n\nI think Daniel has the correct answer by moving the unique constraint\ncheck out into the stuffer script, or by performing some method of\nindexing on a hash as I proposed at the beginning of the thread.\n\nI have figured out a way to reduce my md5 index size in 1/2 again, and\nhave deveoped a method for dealing with collisions too. I am planning\non running some bench marks against the current method, with the tables\nnormalized as Josh suggested, and using the md5 hash I am working on. My\nbenchmarks will be fairly simple ... average time to insert X number of\nvalid inserts and average time to insert X number of duplicate inserts\nalong with disk space usage. If anyone is interested I am willing to\npost the results to this list ... 
and if anyone has some other benchmark\nsuggestions they would like to see, feel free to let me know.\n\nThanks much for all the help and insight!\n\n- Ryan\n\n[1] \tselect count(anomaly) from history;\n\t count\n\t ----------\n \t 85221799\n\n[2] \tI grabed the 36 bytes overhead / tuple from this old FAQ I found\n\tat http://www.guides.sk/postgresql_docs/faq-english.html\n\tI did not look at what the current value is today.\n\n[3] \tThis was a rough calculation of a maxium, I do not believe we \tare\nat this maxium, so the space savings is most likely larger.\n\n-- \nRyan Bradetich <rbradetich@uswest.net>\n\n", "msg_date": "19 Feb 2003 01:10:53 -0700", "msg_from": "Ryan Bradetich <rbradetich@uswest.net>", "msg_from_op": false, "msg_subject": "Re: Questions about indexes?" }, { "msg_contents": "\"Josh Berkus\" <josh@agliodbs.com> writes:\n> Keith,\n>> Does there currently exist any kind of script that can be run on\n>> Postgres to conduct a complete feature coverage test with varying\n>> dataset sizes to compare performance between functionality changes?\n\n> No.\n\nIf you squint properly, OSDB (http://osdb.sourceforge.net/) might be\nthought to do this, or at least be a starting point for it.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 19 Feb 2003 10:03:41 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Performance Baseline Script " }, { "msg_contents": "Tom Lane wrote:\n> \"Josh Berkus\" <josh@agliodbs.com> writes:\n> \n>>Keith,\n>>\n>>>Does there currently exist any kind of script that can be run on\n>>>Postgres to conduct a complete feature coverage test with varying\n>>>dataset sizes to compare performance between functionality changes?\n> \n>>No.\n> \n> If you squint properly, OSDB (http://osdb.sourceforge.net/) might be\n> thought to do this, or at least be a starting point for it.\n\nAs a side thought, just today found out that the TPC organisation \nprovides all kinds of code freely for building/running the TPC-x tests. \n Didn't know that before.\n\nSure, we can't submit results yet, but we might at least be able to run \nthe tests and see if anything interesting turns up. People have talked \nabout the TPC tests before, but not sure if anyone has really looked at \nmaking them runnable on PostgreSQL for everyone yet.\n\nRegards and best wishes,\n\nJustin Clift\n\n\n> \t\t\tregards, tom lane\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n- Indira Gandhi\n\n", "msg_date": "Thu, 20 Feb 2003 01:45:25 +1030", "msg_from": "Justin Clift <justin@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: Performance Baseline Script" }, { "msg_contents": "On Thu, Feb 20, 2003 at 01:45:25AM +1030, Justin Clift wrote:\n\n> As a side thought, just today found out that the TPC organisation \n> provides all kinds of code freely for building/running the TPC-x tests. \n\nYes, but be careful what you mean there.\n\nIt is _not_ a TPC test unless it is run under tightly-controlled and\n-audited conditions as specified by TPC. And that effectively means\nyou have to pay them. In other words, it's not a TPC test unless you\ncan get it published by them.\n\nThat doesn't mean you can't run a test \"based on the documents\nprovided by the TPC for test definition _x_\". 
Just make sure you\nwalk on the correct side of the intellectual property line, or you'll\nbe hearing from their lawyers.\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<andrew@libertyrms.info> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Wed, 19 Feb 2003 10:42:51 -0500", "msg_from": "Andrew Sullivan <andrew@libertyrms.info>", "msg_from_op": false, "msg_subject": "Re: Performance Baseline Script" }, { "msg_contents": "Ryan,\n\n> Posting to the performance list as requested :) The reason I orgionally\n> posted to the hackers list was I was curious about the contents of the\n> index and how they worked.... but now this thread is more about\n> performance, so this list is more appropriate.\n\n*shrug* I just figured that you didn't know about the performance list ... \nalso, I'm doing my bit to reduce traffic on -hackers.\n\n> Duplicate anomalies are not invalid, they are only invalid if they are\n> for the same system, same category, from the same compliancy report.\n>\n> ie. This is bad:\n> 1 | Mon Feb 17 00:34:24 MST 2003 | p101 | user x has an invalid shell.\n> 1 | Mon Feb 17 00:34:24 MST 2003 | p101 | user x has an invalid shell.\n>\n> This is ok:\n> 1 | Mon Feb 17 00:34:24 MST 2003 | p101 | user x has an invalid shell.\n> 1 | Mon Feb 17 00:34:24 MST 2003 | p101 | user y has an invalid shell.\n\nOK. Given the necessity of verifying anominaly uniqueness, my suggestions \nbelow change somewhat.\n\n> Please do point out these design errors! I am always interested in\n> learning more about normialization since I do not have any formal DBA\n> training, and most of my knowledge is from reading books, mailing lists,\n> and experimenting :)\n\n\"Practical Issues in Database Management\" and \"Database Design for Mere \nMortals\" are useful. Me, I learned through 5 years of doing the wrong thing \nand finding out why it was wrong ...\n\n> I agree the operations table would be smaller, but you have also added\n> some data by breaking it up into 2 tables. You have an oid (8 bytes) +\n> operations.id (4 bytes) + anomalies.id (4 bytes) + operation_id (4\n> bytes) + tuple overhead[2] (36 bytes).\n\nYes, and given your level of traffic, you might have to use 8byte id fields. \nBut if disk space is your main issue, then I'd suggest swaping the category \ncode to a \"categories\" table, allowing you to use an int4 or even and int2 as \nthe category_id in the Operations table. This would save you 6-8 bytes per \nrow in Operations.\n\n> > 2) In neither table would you be including the anominaly text in an index\n> > ... thus reducing index size tremendously.\n>\n> Unless I impliment Daniel's method of verifying the uniqness at the data\n> insertion point, I will still need to guarentee the anomaly is unique\n> for the given operation_id. If I mis-read the table schema, would you\n> please point it out to me .. I'm probably being dense :)\n>\n> Also, I do not understand why the anomalies table need the id key for\n> the primary key. Wouldn't the operation_id and the anomaly form the\n> primary key? We could save 8 bytes (4 from table + 4 from index) * ~85\n> Million rows by removing this column.\n\nAs I said above, I didn't understand why you needed to check anominaly \nuniqueness. Given that you do, I'd suggest that you do the above.\n\nOf course, you also have another option. You could check uniqueness on the \noperation_id and the md5 of the anominaly field. 
This would be somewhat \nawkward, and would still require that you have a seperate primary key for the \nanominalies table. But the difference between an index on an up-to-1024 \ncharacter field and a md5 string might make it worth it, particularly when it \ncomes to inserting new rows.\n\nIn other words, test:\n1) drop the anominaly_id as suggested, above.\n2) adding an anominaly_md5 column to the anominalies table.\n3) make operation_id, anominaly_md5 your primary key\n4) write a BEFORE trigger that caclulates the md5 of any incoming anominalies \nand adds it to that column.\n\nIt's worth testing, since a unique index on a 1024-character field for 85 \nmillion records could be very slow.\n\n> Thanks, I will keep this in mind. Although the inserts are usually done\n> in a batch job ... so interactive speed is generally not an issue ...\n> but faster is always better :)\n\nIn a transaction, I hope.\n\n> Also I am curious ... With the table more nomalized, I now need to\n> perform a join when selecting data.... I realize there will be fewer\n> pages to read from disk (which is good!) when doing this join, but I\n> will interested to see how the join affects the interactive performance\n> of the queries.... something to test :)\n\nI'll look forward to seeing the results of your test.\n\n> If anyone is interested I am willing to\n> post the results to this list ... and if anyone has some other benchmark\n> suggestions they would like to see, feel free to let me know.\n\nWe're always interested in benchmarks.\n\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Wed, 19 Feb 2003 09:50:42 -0800", "msg_from": "Josh Berkus <josh@agliodbs.com>", "msg_from_op": true, "msg_subject": "Re: Questions about indexes?" }, { "msg_contents": "Andrew Sullivan wrote:\n> On Thu, Feb 20, 2003 at 01:45:25AM +1030, Justin Clift wrote:\n> \n>>As a side thought, just today found out that the TPC organisation \n>>provides all kinds of code freely for building/running the TPC-x tests. \n> \n> Yes, but be careful what you mean there.\n> \n> It is _not_ a TPC test unless it is run under tightly-controlled and\n> -audited conditions as specified by TPC. And that effectively means\n> you have to pay them. In other words, it's not a TPC test unless you\n> can get it published by them.\n> \n> That doesn't mean you can't run a test \"based on the documents\n> provided by the TPC for test definition _x_\". Just make sure you\n> walk on the correct side of the intellectual property line, or you'll\n> be hearing from their lawyers.\n\nGood point.\n\nWhat I was thinking about was that we could likely get the \"test suite\" \nof code that the TPC publishes and ensure that it works with PostgreSQL.\n\nThe reason I'm thinking of is that if any of the existing PostgreSQL \nsupport companies (or an alliance of them), decided to become a member \nof the TPC in order to submit results then the difficuly of entry would \nbe that bit lower, and there would be people around at that stage who'd \nalready gained good experience with the test suite(s).\n\n:-)\n\nRegards and best wishes,\n\nJustin Clift\n\n> A\n\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. 
He told me to try to be in the\nfirst group; there was less competition there.\"\n- Indira Gandhi\n\n\n", "msg_date": "Thu, 20 Feb 2003 12:47:32 +1030", "msg_from": "Justin Clift <justin@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: Performance Baseline Script" }, { "msg_contents": "On Wed, 19 Feb 2003, Ryan Bradetich wrote:\n\n> 1 | Mon Feb 17 00:34:24 MST 2003 | p101 | user x has an invalid shell.\n> 1 | Mon Feb 17 00:34:24 MST 2003 | p101 | user y has an invalid shell.\n> 1 | Mon Feb 17 00:34:24 MST 2003 | p101 | user y has expired password.\n> 2 | Mon Feb 17 00:34:24 MST 2003 | f101 | file /foo has improper owner.\n\nIf you're going to normalize this a bit, you should start looking at\nthe data that are repeated and trying to get rid of the repititions.\nFirst of all, the timestamp is repeated a lot, you might move that to a\nseparate table and just use a key into that table. But you might even\ndo better with multiple columns: combine the timestamp and host ID into\none table to get a \"host report instance\" and replace both those columns\nwith just that. If host-id/timestamp/category triplets are frequently\nrepeated, you might even consider combining the three into another\ntable, and just using an ID from that table with each anomaly.\n\nBut the biggest space and time savings would come from normalizing your\nanomalys themselves, because there's a huge amount repeated there. If you're\nable to change the format to something like:\n\n invalid shell for user: x\n invalid shell for user: y\n expired password for user: y\n improper owner for file: /foo\n\nYou can split those error messages off into another table:\n\n anomaly_id | anomaly\n -----------+------------------------------------------------\n 1 | invalid shell for user\n\t 2 | expired password for user\n\t 3 | improper owner for file\n\nAnd now your main table looks like this:\n\n host_id | timestamp | ctgr | anomaly_id | datum\n --------+------------------------------+------+------------+------\n 1 | Mon Feb 17 00:34:24 MST 2003 | p101 | 1 | x\n 1 | Mon Feb 17 00:34:24 MST 2003 | p101 | 1 | y\n 1 | Mon Feb 17 00:34:24 MST 2003 | p101 | 2 | y\n 2 | Mon Feb 17 00:34:24 MST 2003 | f101 | 3 | /foo\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n", "msg_date": "Thu, 20 Feb 2003 20:30:50 +0900 (JST)", "msg_from": "Curt Sampson <cjs@cynic.net>", "msg_from_op": false, "msg_subject": "Re: Questions about indexes?" } ]
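Several of the ideas in the thread above (Ryan's md5 index, Daniel's loader-side duplicate check, Josh's trigger sketch) boil down to the same move: derive a short, fixed-width fingerprint of the anomaly text and let that carry the uniqueness check instead of the up-to-1 kB string itself. A hedged illustration of the fingerprint step as it might look in a loader program; the anomaly_md5 column and the surrounding schema are hypothetical, not something defined in the thread.

import java.security.MessageDigest;

public class AnomalyDigest {

    // Returns a 32-character hex MD5 of the anomaly text. A unique index on,
    // say, (operation_id, anomaly_md5) then covers a short fixed-width value no
    // matter how long the anomaly string is. Collisions are possible in
    // principle, so a loader that cares can re-check the full text on a match.
    public static String md5Hex(String anomaly) throws Exception {
        MessageDigest md = MessageDigest.getInstance("MD5");
        byte[] digest = md.digest(anomaly.getBytes("UTF-8"));
        StringBuffer hex = new StringBuffer(32);
        for (int i = 0; i < digest.length; i++) {
            int b = digest[i] & 0xff;
            if (b < 16) {
                hex.append('0');                 // pad so every byte is two hex digits
            }
            hex.append(Integer.toHexString(b));
        }
        return hex.toString();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(md5Hex("user x has an invalid shell."));
    }
}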
[ { "msg_contents": "Folks,\n\nI have a new system with an Adaptec 2200S RAID controller. I've been\ntesting some massive data transformations on it, and data write speed\nseems to top out at 3mb/second .... which seems slow to me for this\nkind of hardware. As it is, I have RAM and CPU sitting idle while they\nwait for the disks to finish.\n\nThoughts, anyone?\n\n-Josh\n", "msg_date": "Tue, 18 Feb 2003 13:45:28 -0800", "msg_from": "\"Josh Berkus\" <josh@agliodbs.com>", "msg_from_op": true, "msg_subject": "Data write speed" }, { "msg_contents": "Josh Berkus kirjutas T, 18.02.2003 kell 23:45:\n> Folks,\n> \n> I have a new system with an Adaptec 2200S RAID controller. \n\nRAID what (0,1,1+0,5,...) ?\n\n> I've been\n> testing some massive data transformations on it, and data write speed\n> seems to top out at 3mb/second .... which seems slow to me for this\n> kind of hardware. As it is, I have RAM and CPU sitting idle while they\n> wait for the disks to finish.\n> \n> Thoughts, anyone?\n\nHow have you tested it ?\n\nWhat OS ?\n\n---------------\nHannu\n\n\n", "msg_date": "19 Feb 2003 00:22:33 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: Data write speed" }, { "msg_contents": "Josh Berkus wrote:\n> Folks,\n> \n> I have a new system with an Adaptec 2200S RAID controller. I've been\n> testing some massive data transformations on it, and data write speed\n> seems to top out at 3mb/second .... which seems slow to me for this\n> kind of hardware. As it is, I have RAM and CPU sitting idle while they\n> wait for the disks to finish.\n> \n> Thoughts, anyone?\n\nThat does seem low. What rate do you get with software RAID (of the\nsame type, of course) to the same disks (might have to be through a\nstandard SCSI controller to be meaningful) with roughly the same\ndisks/channel distribution?\n\n\nMy experience with hardware RAID (at least with the hardware available\na few years ago) versus software RAID is that software RAID is almost\nalways going to be faster because RAID speed seems to be very\ndependent on the speed of the RAID controller's CPU. And computer\nsystems usually have a processor that's significantly faster than the\nprocessor on a hardware RAID controller. It's rare that an\napplication will be as CPU intensive as it is I/O intensive (in\nparticular, there are relatively few applications that will be burning\nCPU at the same time they're waiting for I/O to complete), so the\nfaster you can get your I/O completed, the higher the overall\nthroughput will be even if you have to use some CPU to do the I/O.\n\nThat may have changed some since CPUs now are much faster than they\nused to be, even on hardware RAID controllers, but to me that just\nmeans that you can build a larger RAID system before saturating the\nCPU.\n\n\nThe Adaptec 2200S has a 100MHz CPU. That's pretty weak. The typical\nmodern desktop system has a factor of 20 more CPU power than that. 
A\nsoftware RAID setup would have no trouble blowing the 2200S out of the\nwater, especially if the OS is able to make use of features such as\ntagged queueing.\n\nSince the 2200S has a JBOD mode, you might consider testing a software\nRAID setup across that, just to see how much of a difference doing the\nRAID calculations on the host system makes.\n\n\n\n\n-- \nKevin Brown\t\t\t\t\t kevin@sysexperts.com\n", "msg_date": "Tue, 18 Feb 2003 14:27:22 -0800", "msg_from": "Kevin Brown <kevin@sysexperts.com>", "msg_from_op": false, "msg_subject": "Re: Data write speed" }, { "msg_contents": "Kevin,\n\n> That does seem low. What rate do you get with software RAID (of the\n> same type, of course) to the same disks (might have to be through a\n> standard SCSI controller to be meaningful) with roughly the same\n> disks/channel distribution?\n\nUnfortunately, I can't do this without blowing away all of my current\nsoftware setup. I noticed the performance problem *after* installing\nLinux, PostgreSQL, Apache, PHP, pam_auth, NIS, and my database ....\nand, of course, customized builds of the above.\n \n> The Adaptec 2200S has a 100MHz CPU. That's pretty weak. The typical\n> modern desktop system has a factor of 20 more CPU power than that. A\n> software RAID setup would have no trouble blowing the 2200S out of\n> the\n> water, especially if the OS is able to make use of features such as\n> tagged queueing.\n> \n> Since the 2200S has a JBOD mode, you might consider testing a\n> software\n> RAID setup across that, just to see how much of a difference doing\n> the\n> RAID calculations on the host system makes.\n\nThis isn't something I can do without blowing away my current software\nsetup, hey?\n\n-Josh\n", "msg_date": "Tue, 18 Feb 2003 15:51:32 -0800", "msg_from": "\"Josh Berkus\" <josh@agliodbs.com>", "msg_from_op": true, "msg_subject": "Re: Data write speed" }, { "msg_contents": "Josh Berkus wrote:\n<snip>\n> This isn't something I can do without blowing away my current software\n> setup, hey?\n\nHi Josh,\n\nAny chance that it's a hardware problem causing a SCSI level slowdown?\n\ni.e. termination accidentally turned on for a device that's not at the \nend of a SCSI chain, or perhaps using a cable that's not of the right \nspec? Stuff that will still let it work, but at a reduced rate. Maybe \neven a setting in the Adaptec controller's BIOS? Might be worth looking \nat dmesg and seeing what it reports the SCSI interface speed to be.\n\n:-)\n\nRegards and best wishes,\n\nJustin Clift\n\n\n> -Josh\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n- Indira Gandhi\n\n", "msg_date": "Wed, 19 Feb 2003 11:51:41 +1030", "msg_from": "Justin Clift <justin@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: Data write speed" } ]
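One quick way to separate "the array is slow" from "PostgreSQL is slow" is to measure raw sequential write speed on the same volume outside the database. A rough sketch of that check, with the path and sizes as placeholders; a purpose-built tool such as bonnie or a plain dd run does the same job.

import java.io.File;
import java.io.FileOutputStream;

public class RawWriteProbe {
    public static void main(String[] args) throws Exception {
        // Placeholder path; point it at the volume behind the RAID controller.
        File target = new File("/mnt/raid/write_probe.tmp");
        byte[] block = new byte[1024 * 1024];    // 1 MB of zeroes per write
        int blocks = 256;                        // 256 MB total

        long start = System.currentTimeMillis();
        FileOutputStream out = new FileOutputStream(target);
        for (int i = 0; i < blocks; i++) {
            out.write(block);
        }
        out.getFD().sync();   // flush the OS cache to the device (a drive's own write cache may still buffer)
        out.close();
        long elapsed = System.currentTimeMillis() - start;

        System.out.println(blocks + " MB in " + elapsed + " ms = "
                + (blocks * 1000.0 / elapsed) + " MB/s");
        target.delete();
    }
}

If the raw figure comes out far above 3 MB/s, the bottleneck is more likely in the database or filesystem configuration than in the controller itself.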
The admins locally don't like what I used to do the test\nwith -- they don't want to turn off write-caching for other reasons.\n\nOn Fri, 2003-02-21 at 03:52, Matthias Meixner wrote:\n> Mario Weilguni wrote:\n> >>>So the question is: has anybody verified, that the log is written to disk\n> >>>before returning from commit?\n> >>\n> >>Some (or all?) IDE disks are known to lie: they report success as\n> >>soon as the data have reached the drive's RAM.\n> > \n> > \n> > under linux, hdparm -W can turn off the write cache of IDE disk, maybe you\n> > should try with write-caching turned off.\n> \n> Yes, that made a big difference. Latency went up to 25-95ms.\n> \n> Regards,\n> \n> \tMatthias Meixner\n-- \nRod Taylor <rbt@rbt.ca>\n\nPGP Key: http://www.rbt.ca/rbtpub.asc", "msg_date": "06 Mar 2003 18:35:56 -0500", "msg_from": "Rod Taylor <rbt@rbt.ca>", "msg_from_op": false, "msg_subject": "Re: Write ahead logging" }, { "msg_contents": "Rod Taylor wrote:\n> What were you using to measure the latency. Don't suppose you could\n> send it over. The admins locally don't like what I used to do the test\n> with -- they don't want to turn off write-caching for other reasons.\n\nI am doing an insert of few bytes, so that the amount of data does not\nsignificantly affect the measured time. And for measuring time without\ncache, I temporarily switched it off.\n\nThat was the code used (nothing unusual):\n\n#include \"timeval.h\"\n\nmain()\n{\n Timeval start,end;\nEXEC SQL WHENEVER sqlerror sqlprint;\nEXEC SQL WHENEVER not found sqlprint;\n\nEXEC SQL CONNECT TO user@localhost;\n start=Timeval::Time();\nEXEC SQL BEGIN;\nEXEC SQL INSERT INTO test values ('qwertz');\nEXEC SQL COMMIT;\n end=Timeval::Time();\n end-=start;\n printf(\"time: %d.%06d\\n\",end.tv_sec,end.tv_usec);\n}\n\n\n- Matthias Meixner\n\n-- \nMatthias Meixner meixner@informatik.tu-darmstadt.de\nTechnische Universität Darmstadt\nDatenbanken und Verteilte Systeme Telefon (+49) 6151 16 6232\nWilhelminenstraße 7, D-64283 Darmstadt, Germany Fax (+49) 6151 16 6229\n\n", "msg_date": "Fri, 07 Mar 2003 09:44:22 +0100", "msg_from": "Matthias Meixner <meixner@dvs1.informatik.tu-darmstadt.de>", "msg_from_op": true, "msg_subject": "Re: Write ahead logging" } ]
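A minimal way to repeat the commit-latency check above without the ecpg harness, sketched in plain SQL; the table name commit_test is invented for illustration, and the COMMIT has to be timed from the client (for example with psql's \timing, where available). With fsync on and an honest disk, a single-row COMMIT cannot routinely return faster than the drive's rotational delay; if it does, a write cache is still absorbing the flush, which is what hdparm -W 0 switches off on Linux as discussed above.

SHOW fsync;                                -- commits are only durable if this is on

CREATE TABLE commit_test (payload text);

BEGIN;
INSERT INTO commit_test VALUES ('qwertz');
COMMIT;                                    -- time this statement from the client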
[ { "msg_contents": "Hello,\n\nWe would like to use PostgreSQL for one project.\nThis project would need to handle some of 150000 - 200000 (thousands records) per day.\n\nIs there somebody having experience with Postgresql in this kind of environment.\nCan anybody advice us regarding specific PostgreSQL issues for such kind of datamanipulation ?\n\nBest Regards,\nJaki", "msg_date": "Fri, 21 Feb 2003 09:48:16 +0200", "msg_from": "\"Jakab Laszlo\" <jakablaszlo@sofasoft.ro>", "msg_from_op": true, "msg_subject": "performance issues for processing more then 150000 records / day." }, { "msg_contents": "Hi Jakab,\n\nI don't see why postgresql wouldn't be able to handle that - it's a relatively small amount of data.\n\nWhen you say 150000 records/day, do you mean:\n\n1. 150000 inserts/day (ie. database grows constantly, quickly)\n2. 150000 updates/deletes/day\n3. 150000 transactions/select/day\n\nIf you give us a little more information, we'll be able to help you tune your postgres for your application.\n\nRegards,\n\nChris\n\n ----- Original Message ----- \n From: Jakab Laszlo \n To: pgsql-performance@postgresql.org \n Sent: Friday, February 21, 2003 3:48 PM\n Subject: [PERFORM] performance issues for processing more then 150000 records / day.\n\n\n Hello,\n\n We would like to use PostgreSQL for one project.\n This project would need to handle some of 150000 - 200000 (thousands records) per day.\n\n Is there somebody having experience with Postgresql in this kind of environment.\n Can anybody advice us regarding specific PostgreSQL issues for such kind of datamanipulation ?\n\n Best Regards,\n Jaki", "msg_date": "Fri, 21 Feb 2003 16:04:52 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: performance issues for processing more then 150000 records / day." }, { "msg_contents": "Hello Chris,\n\nI mean here 150000 inserts/day ( quickly grows constantly )... - with transactions and on the same table ... maybe after one significant amount of time we can move the records of one year to one archive table ...\nAnd some 2-3 millions of selects / day ... \n\nI would like to know also some hardware related advice.\n\nthanks,\nJaki\n ----- Original Message ----- \n From: Christopher Kings-Lynne \n To: Jakab Laszlo ; pgsql-performance@postgresql.org \n Sent: Friday, February 21, 2003 10:04 AM\n Subject: Re: [PERFORM] performance issues for processing more then 150000 records / day.\n\n\n Hi Jakab,\n \n I don't see why postgresql wouldn't be able to handle that - it's a relatively small amount of data.\n \n When you say 150000 records/day, do you mean:\n \n 1. 150000 inserts/day (ie. database grows constantly, quickly)\n 2. 150000 updates/deletes/day\n 3. 150000 transactions/select/day\n \n If you give us a little more information, we'll be able to help you tune your postgres for your application.\n \n Regards,\n \n Chris\n\n ----- Original Message ----- \n From: Jakab Laszlo \n To: pgsql-performance@postgresql.org \n Sent: Friday, February 21, 2003 3:48 PM\n Subject: [PERFORM] performance issues for processing more then 150000 records / day.\n\n\n Hello,\n\n We would like to use PostgreSQL for one project.\n This project would need to handle some of 150000 - 200000 (thousands records) per day.\n\n Is there somebody having experience with Postgresql in this kind of environment.\n Can anybody advice us regarding specific PostgreSQL issues for such kind of datamanipulation ?\n\n Best Regards,\n Jaki", "msg_date": "Fri, 21 Feb 2003 10:33:42 +0200", "msg_from": "\"Jakab Laszlo\" <jakablaszlo@sofasoft.ro>", "msg_from_op": true, "msg_subject": "Re: performance issues for processing more then 150000 records / day." 
}, { "msg_contents": "On 21 Feb 2003 at 10:33, Jakab Laszlo wrote:\n\n> \n> Hello Chris,\n> \n> I mean here 150000 inserts/day ( quickly grows constantly )... - with \n> transactions and on the same table ... maybe after onesignificant amount of \n> time we can move the records of one year to one archivetable ...\n> Andsome 2-3 millions of selects / day ... \n> \n> I would like to know also some hardware related advice.\n\nUse a 64 bit machine with SCSI, preferrably RAID to start with. You can search \nlist archives for similar problems and solutions.\n\nHTH\n\nBye\n Shridhar\n\n--\nFog Lamps, n.:\tExcessively (often obnoxiously) bright lamps mounted on the \nfronts\tof automobiles; used on dry, clear nights to indicate that the\tdriver's \nbrain is in a fog. See also \"Idiot Lights\".\n\n", "msg_date": "Fri, 21 Feb 2003 14:12:25 +0530", "msg_from": "\"Shridhar Daithankar\" <shridhar_daithankar@persistent.co.in>", "msg_from_op": false, "msg_subject": "Re: performance issues for processing more then 150000 records / day." }, { "msg_contents": "Jakab Laszlo wrote:\n> Hello Chris,\n> \n> I mean here 150000 inserts/day ( quickly grows constantly )... - with \n> transactions and on the same table ... maybe after one significant \n> amount of time we can move the records of one year to one archive table ...\n> And some 2-3 millions of selects / day ...\n\nThat's no problem at all, depending on:\n\n + How complex are the queries you're intending on running?\n\n + How will the data be spread between the tables?\n\n + The amount of data per row also makes a difference, if it is \nextremely large.\n\n\n> I would like to know also some hardware related advice.\n\nYou're almost definitely going to be needing a SCSI or better RAID \narray, plus a server with quite a few GB's of ECC memory.\n\nIf you need to get specific about hardware to the point of knowing \nexactly what you're needing, you're likely best to pay a good PostgreSQL \nconsultant to study your proposal in depth.\n\nHope this helps.\n\nRegards and best wishes,\n\nJustin Clift\n\n\n> thanks,\n> Jaki\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n- Indira Gandhi\n\n", "msg_date": "Fri, 21 Feb 2003 19:46:18 +1030", "msg_from": "Justin Clift <justin@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: performance issues for processing more then 150000" }, { "msg_contents": "\n> Jakab Laszlo wrote:\n> > Hello Chris,\n> >\n> > I mean here 150000 inserts/day ( quickly grows constantly )... - with\n> > transactions and on the same table ... maybe after one significant\n> > amount of time we can move the records of one year to one archive table\n...\n> > And some 2-3 millions of selects / day ...\n>\n> That's no problem at all, depending on:\n>\n> + How complex are the queries you're intending on running?\n> + How will the data be spread between the tables?\n>\n> + The amount of data per row also makes a difference, if it is\n> extremely large.\n>\n\nThe main information is a barcode. The big table will have aprox 3 fields\n(just id type fields).\nNot so complex there will be mainly about 50 tables with all the info\ntotally normalized and with some around 10-20 tables which make the links\nbetween them. 
Usually the join is starting from one much smaller table and\nwith left join goes through the big table (barcodes) and maybe also on other\ntables.\nAnd then this table with barcode (this is the big one with 150000\nrecords/day) linked to other smaller tables. ( 1 bigger with approx 7000\ninserts/day).\n\n> > I would like to know also some hardware related advice.\n>\n> You're almost definitely going to be needing a SCSI or better RAID\n> array, plus a server with quite a few GB's of ECC memory.\n\nUnfortunately the hardware budget should be kept as low as possible.\nI was thinking is there could be reliable solution based on dual processor\nand ATA 133 raid mirroring normally with some gigs of memory.\n\nThanks,\nJaki\n\n\n", "msg_date": "Fri, 21 Feb 2003 11:59:24 +0200", "msg_from": "\"Jakab Laszlo\" <jakablaszlo@sofasoft.ro>", "msg_from_op": true, "msg_subject": "Re: performance issues for processing more then 150000" }, { "msg_contents": "On 21 Feb 2003 at 11:59, Jakab Laszlo wrote:\n> Unfortunately the hardware budget should be kept as low as possible.\n> I was thinking is there could be reliable solution based on dual processor\n> and ATA 133 raid mirroring normally with some gigs of memory.\n\nGigs of memory are not as important as how much bandwidth you have. For this \nkind of database, a gig or two would not make as much difference as much \nfaster disks would do.\n\nIf you are hell bent on budget, I suggest you write a custom layer that \nconsolidates query results from two boxes and throw two Intel boxes at it. \nEssentially partitioning the data.\n\nIf your queries are simple enough to split and consolidate, try it. It might \ngive you the best performance.\n\n\nBye\n Shridhar\n\n--\nBlutarsky's Axiom:\tNothing is impossible for the man who will not listen to \nreason.\n\n", "msg_date": "Fri, 21 Feb 2003 16:02:26 +0530", "msg_from": "\"Shridhar Daithankar\" <shridhar_daithankar@persistent.co.in>", "msg_from_op": false, "msg_subject": "Re: performance issues for processing more then 150000" }, { "msg_contents": "Jakab,\n\nSome simple tips, which is what I think you were really asking for:\n\nAre you adding the records in a single overnight or downtime load batch? If \nso, the fastest way by far is to:\n1) disable all triggers and constraints on the table temporarily, and some or \nall of the indexes\n2) put all the data into a COPY file (tab-delimited text; see COPY docs)\n3) load the data through a COPY statement\n4) re-enable the triggers and constraints and re-build the indexes\nThe degree to which you need to do 1) and 4) depends on how much punishment \nyour system can take; start out by dropping and rebuilding just the triggers \nand move up from there until the load finishes in a satisfactory time.\n\nIf the records are being added on a continuous basis and not in a big batch, \nthen take the following precautions:\n1) put as many inserts as possible into transaction batches\n2) minimize your use of constraints, triggers, and indexes on the tables being \nloaded\n3) consider using a \"buffer table\" to hold records about to be loaded while \ndata integrity checks and similar are performed.\n\n> Unfortunately the hardware budget should be kept as low as possible.\n> I was thinking is there could be reliable solution based on dual processor\n> and ATA 133 raid mirroring normally with some gigs of memory.\n\n1 gig of RAM may be plenty. Your main bottleneck will be your disk channel. 
\nIf I were setting up your server, I might do something like:\n\n1) buy a motherboard with 4 ATA controllers.\n2) put disks like:\n\tchannel 0: 1 matched pair of disks\n\tchannel 1 + 2: 1 matched quartet of disks\n\tchannel 3: single ATA disk\n\t(for PostgreSQL, more, smaller disks are almost always better than a few big \nones.) (alternatively, set up everything in one big RAID 5 array with at least 6 \ndisks. There is argument about which is better)\n3) Format the above as a RAID 1 pair on channel 0 and a RAID 1+0 double pair \non channels 1+2 using Linux software RAID\n4) Put Linux OS + swap on channel 0. Put the database on channel 1+2. Put \nthe pg_xlog (the transaction log) on channel 3. Make sure to use a version \nof Linux with kernel 2.4.19 or greater!\n\nThat's just one configuration of several possible, of course, but may serve \nyour purposes admirably and relatively cheaply.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Sun, 23 Feb 2003 12:44:06 -0800", "msg_from": "Josh Berkus <josh@agliodbs.com>", "msg_from_op": false, "msg_subject": "Re: performance issues for processing more then 150000" } ]
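To make the batch-load recipe above concrete, here is a hedged sketch; the table, column, index, and file names (barcodes, code, idx_barcodes_code, /tmp/barcodes.copy) are invented for illustration, and a server-side COPY FROM a file needs superuser rights (psql's \copy is the unprivileged equivalent).

-- 1) drop (or disable) triggers and secondary indexes before the load
DROP INDEX idx_barcodes_code;

-- 2) and 3) load the tab-delimited dump file with a single COPY statement
COPY barcodes FROM '/tmp/barcodes.copy';

-- 4) rebuild the index and refresh planner statistics afterwards
CREATE INDEX idx_barcodes_code ON barcodes (code);
ANALYZE barcodes;

For the continuous-load case the same idea applies in miniature: wrap every thousand or so INSERTs in one BEGIN/COMMIT instead of committing each row.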
[ { "msg_contents": "I've use PostgreSQL for some pretty amazing things, and was a proponent\nof using it here. I set up 7.3.1 on my development RedHat 8 box, and it\nwas fine. I need to load about 700,000 rows into one table, the rows\nonly having 6 columns, and the load on my box happens in just a couple\nof minutes (there's some calculation while the data is loading, and that\ntime was acceptable to me).\n\nMy box, however, isn't a production server, so I attempted to create the\ndatabase again on a Sun Blade:\n\nSunOS trident 5.8 Generic_108528-17 sun4u sparc SUNW,UltraAX-i2\nStatus of processor 0 as of: 02/21/03 10:10:10\n Processor has been on-line since 01/13/03 13:53:51.\n The sparcv9 processor operates at 500 MHz,\n and has a sparcv9 floating point processor.\n\nIt isn't the world's fastest box, but should be fast enough for this.\n\nHowever...\n\nIt took almost 2 days to load the table on this box. The postgresql\nprocess was constantly eating up all of the CPU time it could get, while\nloading, at times, less than 1 row a second.\n\nNow, I KNOW something is wrong, and it probably isn't postgres. :)\n\nI turned off the index on the table (there was only one, on three\nfields) and made the whole load happen in a transaction. The 2 day\nresult was after those changes were made.\n\nI compiled postgres myself, using the gcc 3.2.2 package from sunfreeware.\n\nAny thoughts? I know that postgres should be more than capable of\nloading this data quickly on this box...I figured it could possibly take\ntwice as long...and I thought my problem would be i/o related, not CPU.\n\nThanks!\n\nKevin\n\n\n", "msg_date": "Fri, 21 Feb 2003 10:32:37 -0500", "msg_from": "Kevin White <kwhite@digital-ics.com>", "msg_from_op": true, "msg_subject": "Really bad insert performance: what did I do wrong?" }, { "msg_contents": "Kevin White <kwhite@digital-ics.com> writes:\n> My box, however, isn't a production server, so I attempted to create the\n> database again on a Sun Blade:\n> ...\n> It took almost 2 days to load the table on this box.\n\nYipes. We found awhile ago that Solaris' standard qsort() really sucks,\nbut 7.3 should work around that --- and I don't think qsort would be\ninvoked during data load anyway.\n\nDo you want to rebuild Postgres with profiling enabled, and get a gprof\ntrace so we can see where the time is going? You don't need to run it\nfor two days --- a few minutes' worth of runtime should be plenty.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 21 Feb 2003 10:45:03 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Really bad insert performance: what did I do wrong? " }, { "msg_contents": " > Yipes. We found awhile ago that Solaris' standard qsort() really sucks,\n> but 7.3 should work around that --- and I don't think qsort would be\n> invoked during data load anyway.\n> \n> Do you want to rebuild Postgres with profiling enabled, and get a gprof\n> trace so we can see where the time is going? You don't need to run it\n> for two days --- a few minutes' worth of runtime should be plenty.\n\nGreat...I'd love to...as I've never had a problem with Postgres like \nthis, I didn't even know where to start...I'll look for how to do that.\n\nKevin\n\n\n", "msg_date": "Fri, 21 Feb 2003 10:47:18 -0500", "msg_from": "Kevin White <kwhite@digital-ics.com>", "msg_from_op": true, "msg_subject": "Re: Really bad insert performance: what did I do wrong?" }, { "msg_contents": "On Friday 21 Feb 2003 9:02 pm, you wrote:\n> Any thoughts? 
I know that postgres should be more than capable of\n> loading this data quickly on this box...I figured it could possibly take\n> twice as long...and I thought my problem would be i/o related, not CPU.\n\nFirst, check vmstat or similar on SunOS. See what is maxing out. Second tunr \npostgresql trace on and see where it is specnding most of the CPU.\n\nNeedless to say, did you tune shared buffers?\n\n Shridhar\n", "msg_date": "Fri, 21 Feb 2003 21:21:55 +0530", "msg_from": "\"Shridhar Daithankar<shridhar_daithankar@persistent.co.in>\"\n\t<shridhar_daithankar@persistent.co.in>", "msg_from_op": false, "msg_subject": "Re: Really bad insert performance: what did I do wrong?" }, { "msg_contents": "On Fri, Feb 21, 2003 at 10:32:37AM -0500, Kevin White wrote:\n> I've use PostgreSQL for some pretty amazing things, and was a proponent\n> of using it here. I set up 7.3.1 on my development RedHat 8 box, and it\n> was fine. I need to load about 700,000 rows into one table, the rows\n> only having 6 columns, and the load on my box happens in just a couple\n> of minutes (there's some calculation while the data is loading, and that\n> time was acceptable to me).\n> \n> My box, however, isn't a production server, so I attempted to create the\n> database again on a Sun Blade:\n> \n> SunOS trident 5.8 Generic_108528-17 sun4u sparc SUNW,UltraAX-i2\n> Status of processor 0 as of: 02/21/03 10:10:10\n> Processor has been on-line since 01/13/03 13:53:51.\n> The sparcv9 processor operates at 500 MHz,\n> and has a sparcv9 floating point processor.\n> \n> It isn't the world's fastest box, but should be fast enough for this.\n\n\nWhat's the disk subsystem? Is fsync turned on in both cases? And is\nyour IDE drive lying to you about what it's doing.\n\nMy experiences in moving from a Linux box to a low-end Sun is pretty\nsimilar. The problem usually turns out to be a combination of\noverhead on fsync (which shows up as processor load instead of i/o,\noddly); and memory contention, especially in case there are too-large\nnumbers of shared buffers (on a 16 Gig box, we find that 2 Gig of\nshared buffers is too large -- the shared memory management is\ncrummy).\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<andrew@libertyrms.info> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Fri, 21 Feb 2003 11:59:17 -0500", "msg_from": "Andrew Sullivan <andrew@libertyrms.info>", "msg_from_op": false, "msg_subject": "Re: Really bad insert performance: what did I do wrong?" }, { "msg_contents": "Andrew Sullivan wrote:\n> What's the disk subsystem? Is fsync turned on in both cases? And is\n> your IDE drive lying to you about what it's doing.\n\nIt is IDE. How do I turn fsync on or off? (All I can find in the man \nis a function call to fsync...is there something else?)\n\n> My experiences in moving from a Linux box to a low-end Sun is pretty\n> similar. The problem usually turns out to be a combination of\n> overhead on fsync (which shows up as processor load instead of i/o,\n> oddly); and memory contention, especially in case there are too-large\n> numbers of shared buffers \n\nThis box only has 1 gig, and I've only set up 200 shared buffers...at \nthis point, it is only me hitting it. Increasing the shared buffers \nmight help, but I haven't yet found the info I need to do that \nintelligently.\n\nShridhar Daithankar wrote:\n > First, check vmstat or similar on SunOS. See what is maxing out. 
\nSecond tunr\n > postgresql trace on and see where it is specnding most of the CPU.\n\nDo you mean turning on the statistics generators, or gprof?\n\n > Needless to say, did you tune shared buffers?\n\nLike I mentioned above, I haven't yet found good info on what to do to \nactually tune shared buffers...I know how to change them, but that's \nabout it. I'll poke around some more.\n\n\nTom Lane wrote:\n > You should be able to find details in the archives, but the key point\n > is to do\n > \tcd .../src/backend\n > \tgmake clean\n > \tgmake PROFILE=\"-pg\" all\n > to build a profile-enabled backend. You may need a more complex\n > incantation than -pg on Solaris, but it works on other platforms.\n\nI did this, but my gmon.out doesn't appear to have much data from the \nactual child postgres process, just the parent. I might be wrong, and \nI'm letting some stats generate.\n\nHowever, to everyone, I DID find a problem in my code that was making it \ntake super forever long....the code doesn't just insert. It is also \nwritten to do updates if it needs to, and because of what I'm doing, I \nend up doing selects back against the table during the load to find \npreviously loaded rows. In this case, there aren't any, because the \ntable's been trunced, but...having turned the indexes off, those selects \nwere really really slow. Using the stats tools told me that.\n\nSo, that may have been a large part of my problem. I'm still seeing the \nprocess just SIT there, though, for seconds at a time, so there's \nprobably something else that I can fix. Maybe I should just randomly \ntry a larger buffers setting...\n\nBeing able to analyze the output of gmon would be nice, but as I said \nbefore, it doesn't appear to actually change much...\n\nRight now, the load has been running continuously for several minutes. \nIt is 12:20pm, but the time on the gmon.out file is 12:18pm. I should \nbe able to let the load finish while I'm at lunch, and I might be able \nto get something out of gmon when it is done...maybe writing to gmon \njust doesn't happen constantly.\n\nThanks all...\n\nKevin\n\n", "msg_date": "Fri, 21 Feb 2003 12:21:38 -0500", "msg_from": "Kevin White <kwhite@digital-ics.com>", "msg_from_op": true, "msg_subject": "Re: Really bad insert performance: what did I do wrong?" }, { "msg_contents": "On Fri, Feb 21, 2003 at 12:21:38PM -0500, Kevin White wrote:\n> Andrew Sullivan wrote:\n> > What's the disk subsystem? Is fsync turned on in both cases? And is\n> > your IDE drive lying to you about what it's doing.\n> \n> It is IDE. How do I turn fsync on or off? (All I can find in the man \n> is a function call to fsync...is there something else?)\n\nBy default, Postgres calls fsync() at every COMMIT. Lots of IDE\ndrives lie about whether the fsync() succeeded, so you get better\nperformance than you do with SCSI drives; but you're not really\ngetting that performance, because the fsync isn't effectve.\n\nOn Linux, I think you can use hdparm to fix this. I believe the\nwrite caching is turned off under Solaris, but I'm not sure.\n\nAnyway, you can adjust your postgresql.conf file to turn off fsync.\n\n> This box only has 1 gig, and I've only set up 200 shared buffers...at \n> this point, it is only me hitting it. 
Increasing the shared buffers \n> might help, but I haven't yet found the info I need to do that \n> intelligently.\n\nFor stright inserts, it shouldn't matter, and that seems low enough\nthat it shouldn't be a problem.\n\nYou should put WAL on another disk, as well, if you can.\n\nAlso, try using truss to see what the backend is doing. (truss -c\ngives you a count and gives some time info.)\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<andrew@libertyrms.info> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Fri, 21 Feb 2003 12:37:19 -0500", "msg_from": "Andrew Sullivan <andrew@libertyrms.info>", "msg_from_op": false, "msg_subject": "Re: Really bad insert performance: what did I do wrong?" }, { "msg_contents": "OK, I'm gonna make a couple of observations here that may help out.\n\n1: sun's performance on IDE hardware is abysmal. Both Solaris X86 and \nSolaris Sparc are utter dogs at IDE, even when you do get DMA and prefetch \nsetup and running. Linux or BSD are much much better at handling IDE \ninterfaces.\n\n2: Postgresql under Solaris on Sparc is about 1/2 as fast as Postgresql \nunder Linux on Sparc, all other things being equal. On 32 bith Sparc the \nchasm widens even more.\n\n3: Inserting ALL 700,000 rows in one transaction is probably not optimal. \nTry putting a test in every 1,000 or 10,000 rows to toss a \"commit;begin;\" \npair at the database while loading. Inserting all 700,000 rows at once \nmeans postgresql can't recycle the transaction logs, so you'll have \n700,000 rows worth of data in the transaction logs waiting for you to \ncommit at the end. That's a fair bit of space, and a large set of files \nto keep track of. My experience has been that anything over 1,000 inserts \nin a transaction gains little.\n\n4: If you want to make sure you don't insert any duplicates, it's \nprobably faster to use a unique multi-column key on all your columns \n(there's a limit on columns in an index depending on which flavor of \npostgresql you are running, but I think it's 16 on 7.2 and before and 32 \non 7.3 and up. I could be off by a factor of two there.\n\n\n", "msg_date": "Fri, 21 Feb 2003 10:55:41 -0700 (MST)", "msg_from": "\"scott.marlowe\" <scott.marlowe@ihs.com>", "msg_from_op": false, "msg_subject": "Re: Really bad insert performance: what did I do wrong?" }, { "msg_contents": "> 1: sun's performance on IDE hardware is abysmal. \n\nOK, good to know. This box won't always be the production server: an \nx86 Dell server with Linux will happen...thanks for the info.\n\n> 2: Postgresql under Solaris on Sparc is about 1/2 as fast as Postgresql \n> under Linux on Sparc, all other things being equal. On 32 bith Sparc the \n> chasm widens even more.\n\nWow...\n\n> 3: Inserting ALL 700,000 rows in one transaction is probably not optimal. \n> Try putting a test in every 1,000 or 10,000 rows to toss a \"commit;begin;\" \n> pair at the database while loading. 
\n\nMy original test was 700,000 at once...for today's, I realized that was, \num, dumb :) so I fixed it...it commits every 1,000 now...\n\n> 4: If you want to make sure you don't insert any duplicates, it's \n> probably faster to use a unique multi-column key on all your columns \n\nThe problem isn't inserting the dupes, but at times I need to update the \ndata, rather than load a new batch of it...and the rows have a \"rank\" \n(by price)...so when one group of say 5 gets an updated row, it could \nchange the rank of the other 4 so all 5 need updated...so I have to do \nthe select first to find the other values so I can calculate the rank.\n\nIn THIS specific case, with the table empty, I don't need to do that, \nbut the code had been changed to do it, because normally, my table won't \nbe empty. This is just the initial setup of a new server...\n\nThanks for all the help...\n\nIt looks like the load finished...I might try turning the sync off.\n\nKevin\n\n", "msg_date": "Fri, 21 Feb 2003 14:18:12 -0500", "msg_from": "Kevin White <kwhite@digital-ics.com>", "msg_from_op": true, "msg_subject": "Re: Really bad insert performance: what did I do wrong?" }, { "msg_contents": "Kevin White <kwhite@digital-ics.com> writes:\n> I did this, but my gmon.out doesn't appear to have much data from the \n> actual child postgres process, just the parent.\n\nAre you looking in the right place? Child processes will dump gmon.out\ninto $PGDATA/base/yourdbnum/, which is not where the parent process\ndoes.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 21 Feb 2003 21:24:39 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Really bad insert performance: what did I do wrong? " }, { "msg_contents": "\"scott.marlowe\" <scott.marlowe@ihs.com> writes:\n> 3: Inserting ALL 700,000 rows in one transaction is probably not optimal. \n> Try putting a test in every 1,000 or 10,000 rows to toss a \"commit;begin;\" \n> pair at the database while loading. Inserting all 700,000 rows at once \n> means postgresql can't recycle the transaction logs, so you'll have \n> 700,000 rows worth of data in the transaction logs waiting for you to \n> commit at the end.\n\nThat was true in 7.1.0, but we got rid of that behavior *very* quickly\n(by 7.1.3, according to the release notes). Long transactions do not\ncurrently stress the WAL storage any more than the same amount of work\nin short transactions.\n\nWhich is not to say that there's anything wrong with divvying the work\ninto 1000-row-or-so transactions. I agree that that's enough to push\nthe per-transaction overhead down into the noise.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 21 Feb 2003 23:23:47 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Really bad insert performance: what did I do wrong? " }, { "msg_contents": "Does anyone know if there is a fast way to find out how many records are \nin a table?\n\n\"Select count(*) from table\" is very slow.\n\nI would think that PG would keep up with the number of undeleted rows on \na realtime basis. Is that the case? If so, how would I query it?\n\nHope this is the correct list for this question.\n\nThanks!\n\n-- \nMatt Mello\n\n\n", "msg_date": "Tue, 25 Feb 2003 16:44:13 -0600", "msg_from": "Matt Mello <alien@spaceship.com>", "msg_from_op": false, "msg_subject": "Faster 'select count(*) from table' ?" 
}, { "msg_contents": "On Tue, 25 Feb 2003, Matt Mello wrote:\n\n> Does anyone know if there is a fast way to find out how many records are \n> in a table?\n> \n> \"Select count(*) from table\" is very slow.\n> \n> I would think that PG would keep up with the number of undeleted rows on \n> a realtime basis. Is that the case? If so, how would I query it?\n\nSorry, it doesn't, and it's one of the areas that having an MVCC style \ndatabase costs you. Also, if postgresql kept up with this automatically, \nit would have an overhead for each table, but how often do you use it on \nALL your tables? Most the time, folks use count(*) on a few tables only, \nand it would be a waste to have a seperate counting mechanism for all \ntables when you'd only need it for a few.\n\nThe general mailing list has several postings in the last 12 months about \nhow to setup a trigger to a single row table that keeps the current \ncount(*) of the master table.\n\nIf you need a rough count, you can get one from the statistics gathered by \nanalyze in the pg_* tables.\n\n", "msg_date": "Tue, 25 Feb 2003 16:25:26 -0700 (MST)", "msg_from": "\"scott.marlowe\" <scott.marlowe@ihs.com>", "msg_from_op": false, "msg_subject": "Re: Faster 'select count(*) from table' ?" } ]