question_id int64 4 6.31M | answer_id int64 7 6.31M | title stringlengths 9 150 | question_body stringlengths 0 28.8k | answer_body stringlengths 60 27.2k | question_text stringlengths 40 28.9k | combined_text stringlengths 124 39.6k | tags listlengths 1 6 | question_score int64 0 26.3k | answer_score int64 0 28.8k | view_count int64 15 14M | answer_count int64 0 182 | favorite_count int64 0 32 | question_creation_date stringdate 2008-07-31 21:42:52 2011-06-10 18:12:18 | answer_creation_date stringdate 2008-07-31 22:17:57 2011-06-10 18:14:17 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
4 | 7 | How to convert Decimal to Double in C#? | I want to assign the decimal variable "trans" to the double variable "this.Opacity". decimal trans = trackBar1.Value / 5000; this.Opacity = trans; When I build the app it gives the following error: Cannot implicitly convert type decimal to double | An explicit cast to double like this isn't necessary: double trans = (double) trackBar1.Value / 5000.0; Identifying the constant as 5000.0 (or as 5000d ) is sufficient: double trans = trackBar1.Value / 5000.0; double trans = trackBar1.Value / 5000d; | How to convert Decimal to Double in C#? I want to assign the decimal variable "trans" to the double variable "this.Opacity". decimal trans = trackBar1.Value / 5000; this.Opacity = trans; When I build the app it gives the following error: Cannot implicitly convert type decimal to double | TITLE:
How to convert Decimal to Double in C#?
QUESTION:
I want to assign the decimal variable "trans" to the double variable "this.Opacity". decimal trans = trackBar1.Value / 5000; this.Opacity = trans; When I build the app it gives the following error: Cannot implicitly convert type decimal to double
ANSWER:
An explicit cast to double like this isn't necessary: double trans = (double) trackBar1.Value / 5000.0; Identifying the constant as 5000.0 (or as 5000d) is sufficient: double trans = trackBar1.Value / 5000.0; double trans = trackBar1.Value / 5000d; | [
"c#",
"floating-point",
"type-conversion",
"double",
"decimal"
] | 806 | 529 | 76,733 | 13 | 0 | 2008-07-31T21:42:52.667000 | 2008-07-31T22:17:57.883000 |
6 | 31 | Why did the width collapse in the percentage width child element in an absolutely positioned parent on Internet Explorer 7? | I have an absolutely positioned div containing several children, one of which is a relatively positioned div. When I use a percentage-based width on the child div, it collapses to 0 width on IE7, but not on Firefox or Safari. If I use pixel width, it works. If the parent is relatively positioned, the percentage width on the child works. Is there something I'm missing here? Is there an easy fix for this besides the pixel-based width on the child? Is there an area of the CSS specification that covers this? | The parent div needs to have a defined width, either in pixels or as a percentage. In Internet Explorer 7, the parent div needs a defined width for child percentage div s to work correctly. | Why did the width collapse in the percentage width child element in an absolutely positioned parent on Internet Explorer 7? I have an absolutely positioned div containing several children, one of which is a relatively positioned div. When I use a percentage-based width on the child div, it collapses to 0 width on IE7, but not on Firefox or Safari. If I use pixel width, it works. If the parent is relatively positioned, the percentage width on the child works. Is there something I'm missing here? Is there an easy fix for this besides the pixel-based width on the child? Is there an area of the CSS specification that covers this? | TITLE:
Why did the width collapse in the percentage width child element in an absolutely positioned parent on Internet Explorer 7?
QUESTION:
I have an absolutely positioned div containing several children, one of which is a relatively positioned div. When I use a percentage-based width on the child div, it collapses to 0 width on IE7, but not on Firefox or Safari. If I use pixel width, it works. If the parent is relatively positioned, the percentage width on the child works. Is there something I'm missing here? Is there an easy fix for this besides the pixel-based width on the child? Is there an area of the CSS specification that covers this?
ANSWER:
The parent div needs to have a defined width, either in pixels or as a percentage. In Internet Explorer 7, percentage-width child divs only resolve correctly when the parent has a defined width.
"html",
"css",
"internet-explorer-7"
] | 320 | 160 | 24,503 | 7 | 0 | 2008-07-31T22:08:08.620000 | 2008-08-01T12:22:51.593000 |
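A minimal reduction of the fix described above (the pixel values and inline styles are made up for illustration): giving the absolutely positioned parent an explicit width lets IE7 resolve the child's percentage against it.

```html
<!-- Hypothetical reduction: the parent is absolutely positioned.
     Without the explicit width, IE7 collapses the child to 0 wide;
     with it, the child's 50% resolves to 200px in IE7 as well. -->
<div style="position: absolute; top: 0; left: 0; width: 400px;">
  <div style="position: relative; width: 50%;">
    child content
  </div>
</div>
```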
9 | 1,404 | How do I calculate someone's age based on a DateTime type birthday? | Given a DateTime representing a person's birthday, how do I calculate their age in years? | An easy-to-understand and simple solution: // Save today's date. var today = DateTime.Today;
// Calculate the age. var age = today.Year - birthdate.Year;
// Go back to the year in which the person was born in case of a leap year if (birthdate.Date > today.AddYears(-age)) age--; However, this assumes you are looking for the western idea of the age and not using East Asian reckoning. | How do I calculate someone's age based on a DateTime type birthday? Given a DateTime representing a person's birthday, how do I calculate their age in years? | TITLE:
How do I calculate someone's age based on a DateTime type birthday?
QUESTION:
Given a DateTime representing a person's birthday, how do I calculate their age in years?
ANSWER:
An easy-to-understand and simple solution: // Save today's date. var today = DateTime.Today;
// Calculate the age. var age = today.Year - birthdate.Year;
// Go back to the year in which the person was born in case of a leap year if (birthdate.Date > today.AddYears(-age)) age--; However, this assumes you are looking for the western idea of the age and not using East Asian reckoning. | [
"c#",
".net",
"datetime"
] | 2,248 | 2,417 | 827,598 | 75 | 0 | 2008-07-31T23:40:59.743000 | 2008-08-04T16:50:06.170000 |
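The same algorithm ports directly to other languages. A quick Python sketch of the answer's approach, for illustration (the function name is mine, not from the answer):

```python
from datetime import date

def age_in_years(birthdate: date, today: date) -> int:
    """Western-style age: years since birth, minus one if the
    birthday hasn't occurred yet this calendar year."""
    age = today.year - birthdate.year
    if (today.month, today.day) < (birthdate.month, birthdate.day):
        age -= 1
    return age
```

Comparing (month, day) tuples plays the same role as the answer's today.AddYears(-age) check, including the leap-year case: someone born on 29 February 2000 is still 20 on 28 February 2021.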
11 | 1,248 | Calculate relative time in C# | Given a specific DateTime value, how do I display relative time, like: 2 hours ago 3 days ago a month ago | Jeff, your code is nice but could be clearer with constants (as suggested in Code Complete). const int SECOND = 1; const int MINUTE = 60 * SECOND; const int HOUR = 60 * MINUTE; const int DAY = 24 * HOUR; const int MONTH = 30 * DAY;
var ts = new TimeSpan(DateTime.UtcNow.Ticks - yourDate.Ticks); double delta = Math.Abs(ts.TotalSeconds);
if (delta < 1 * MINUTE) return ts.Seconds == 1? "one second ago": ts.Seconds + " seconds ago";
if (delta < 2 * MINUTE) return "a minute ago";
if (delta < 45 * MINUTE) return ts.Minutes + " minutes ago";
if (delta < 90 * MINUTE) return "an hour ago";
if (delta < 24 * HOUR) return ts.Hours + " hours ago";
if (delta < 48 * HOUR) return "yesterday";
if (delta < 30 * DAY) return ts.Days + " days ago";
if (delta < 12 * MONTH) { int months = Convert.ToInt32(Math.Floor((double)ts.Days / 30)); return months <= 1? "one month ago": months + " months ago"; } else { int years = Convert.ToInt32(Math.Floor((double)ts.Days / 365)); return years <= 1? "one year ago": years + " years ago"; } | Calculate relative time in C# Given a specific DateTime value, how do I display relative time, like: 2 hours ago 3 days ago a month ago | TITLE:
Calculate relative time in C#
QUESTION:
Given a specific DateTime value, how do I display relative time, like: 2 hours ago 3 days ago a month ago
ANSWER:
Jeff, your code is nice but could be clearer with constants (as suggested in Code Complete). const int SECOND = 1; const int MINUTE = 60 * SECOND; const int HOUR = 60 * MINUTE; const int DAY = 24 * HOUR; const int MONTH = 30 * DAY;
var ts = new TimeSpan(DateTime.UtcNow.Ticks - yourDate.Ticks); double delta = Math.Abs(ts.TotalSeconds);
if (delta < 1 * MINUTE) return ts.Seconds == 1? "one second ago": ts.Seconds + " seconds ago";
if (delta < 2 * MINUTE) return "a minute ago";
if (delta < 45 * MINUTE) return ts.Minutes + " minutes ago";
if (delta < 90 * MINUTE) return "an hour ago";
if (delta < 24 * HOUR) return ts.Hours + " hours ago";
if (delta < 48 * HOUR) return "yesterday";
if (delta < 30 * DAY) return ts.Days + " days ago";
if (delta < 12 * MONTH) { int months = Convert.ToInt32(Math.Floor((double)ts.Days / 30)); return months <= 1? "one month ago": months + " months ago"; } else { int years = Convert.ToInt32(Math.Floor((double)ts.Days / 365)); return years <= 1? "one year ago": years + " years ago"; } | [
"c#",
"datetime",
"time",
"datediff",
"relative-time-span"
] | 1,653 | 1,070 | 202,800 | 42 | 0 | 2008-07-31T23:55:37.967000 | 2008-08-04T13:57:26.097000 |
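For comparison, the same threshold logic can be written as a plain function of an elapsed-second count. This is a hedged Python port of the answer's C# (names are mine), useful for checking the boundary cases:

```python
MINUTE, HOUR, DAY = 60, 60 * 60, 24 * 60 * 60
MONTH = 30 * DAY  # approximation, as in the answer

def relative_time(delta_seconds: float) -> str:
    """Map an elapsed-time delta (in seconds) to a human phrase."""
    s = abs(delta_seconds)
    if s < MINUTE:
        return "one second ago" if int(s) == 1 else f"{int(s)} seconds ago"
    if s < 2 * MINUTE:
        return "a minute ago"
    if s < 45 * MINUTE:
        return f"{int(s // MINUTE)} minutes ago"
    if s < 90 * MINUTE:
        return "an hour ago"
    if s < 24 * HOUR:
        return f"{int(s // HOUR)} hours ago"
    if s < 48 * HOUR:
        return "yesterday"
    if s < 30 * DAY:
        return f"{int(s // DAY)} days ago"
    if s < 12 * MONTH:
        months = int(s // MONTH)
        return "one month ago" if months <= 1 else f"{months} months ago"
    years = int(s // (365 * DAY))
    return "one year ago" if years <= 1 else f"{years} years ago"
```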
16 | 12,446 | Filling a DataSet or a DataTable from a LINQ query result set | How do you expose a LINQ query as an ASMX web service? Usually, from the business tier, I can return a typed DataSet or a DataTable which can be serialized for transport over ASMX. How can I do the same for a LINQ query? Is there a way to populate a typed DataSet or a DataTable via a LINQ query? public static MyDataTable CallMySproc() { string conn = "...";
MyDatabaseDataContext db = new MyDatabaseDataContext(conn); MyDataTable dt = new MyDataTable();
// execute a sproc via LINQ var query = from dr in db.MySproc().AsEnumerable() select dr;
// copy LINQ query resultset into a DataTable -this does not work! dt = query.CopyToDataTable();
return dt; } How could I put the result set of a LINQ query into a DataSet or a DataTable? Alternatively, can the LINQ query be serializable so that I can expose it as an ASMX web service? | As mentioned in the question, IEnumerable<DataRow> has a CopyToDataTable method: IEnumerable<DataRow> query = from order in orders.AsEnumerable() where order.Field<DateTime>("OrderDate") > new DateTime(2001, 8, 1) select order;
// Create a table from the query. DataTable boundTable = query.CopyToDataTable<DataRow>(); Why won't that work for you? | Filling a DataSet or a DataTable from a LINQ query result set How do you expose a LINQ query as an ASMX web service? Usually, from the business tier, I can return a typed DataSet or a DataTable which can be serialized for transport over ASMX. How can I do the same for a LINQ query? Is there a way to populate a typed DataSet or a DataTable via a LINQ query? public static MyDataTable CallMySproc() { string conn = "...";
MyDatabaseDataContext db = new MyDatabaseDataContext(conn); MyDataTable dt = new MyDataTable();
// execute a sproc via LINQ var query = from dr in db.MySproc().AsEnumerable() select dr;
// copy LINQ query resultset into a DataTable -this does not work! dt = query.CopyToDataTable();
return dt; } How could I put the result set of a LINQ query into a DataSet or a DataTable? Alternatively, can the LINQ query be serializable so that I can expose it as an ASMX web service? | TITLE:
Filling a DataSet or a DataTable from a LINQ query result set
QUESTION:
How do you expose a LINQ query as an ASMX web service? Usually, from the business tier, I can return a typed DataSet or a DataTable which can be serialized for transport over ASMX. How can I do the same for a LINQ query? Is there a way to populate a typed DataSet or a DataTable via a LINQ query? public static MyDataTable CallMySproc() { string conn = "...";
MyDatabaseDataContext db = new MyDatabaseDataContext(conn); MyDataTable dt = new MyDataTable();
// execute a sproc via LINQ var query = from dr in db.MySproc().AsEnumerable() select dr;
// copy LINQ query resultset into a DataTable -this does not work! dt = query.CopyToDataTable();
return dt; } How could I put the result set of a LINQ query into a DataSet or a DataTable? Alternatively, can the LINQ query be serializable so that I can expose it as an ASMX web service?
ANSWER:
As mentioned in the question, IEnumerable<DataRow> has a CopyToDataTable method: IEnumerable<DataRow> query = from order in orders.AsEnumerable() where order.Field<DateTime>("OrderDate") > new DateTime(2001, 8, 1) select order;
// Create a table from the query. DataTable boundTable = query.CopyToDataTable<DataRow>(); Why won't that work for you? | [
"c#",
"linq",
"web-services",
".net-3.5"
] | 159 | 99 | 88,393 | 7 | 0 | 2008-08-01T04:59:33.643000 | 2008-08-15T16:27:31.600000 |
17 | 26 | Binary Data in MySQL | How do I store binary data in MySQL? | The answer by phpguy is correct but I think there is a lot of confusion in the additional details there. The basic answer is in a BLOB data type / attribute domain. BLOB is short for Binary Large Object and that column data type is specific for handling binary data. See the relevant manual page for MySQL. | Binary Data in MySQL How do I store binary data in MySQL? | TITLE:
Binary Data in MySQL
QUESTION:
How do I store binary data in MySQL?
ANSWER:
The answer by phpguy is correct but I think there is a lot of confusion in the additional details there. The basic answer is in a BLOB data type / attribute domain. BLOB is short for Binary Large Object and that column data type is specific for handling binary data. See the relevant manual page for MySQL. | [
"mysql",
"database",
"binary-data",
"data-storage"
] | 199 | 143 | 87,599 | 9 | 0 | 2008-08-01T05:09:55.993000 | 2008-08-01T12:16:22.167000 |
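As a concrete illustration (the table and column names are hypothetical), a schema using the BLOB family might look like the following. The four variants differ only in maximum size: TINYBLOB (255 B), BLOB (64 KB), MEDIUMBLOB (16 MB), LONGBLOB (4 GB).

```sql
-- Hypothetical table for storing uploaded files.
CREATE TABLE file_store (
    id        INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    filename  VARCHAR(255) NOT NULL,
    mime_type VARCHAR(100) NOT NULL,
    data      MEDIUMBLOB   NOT NULL   -- raw bytes, up to 16 MB
);

-- From client code, always send the bytes through a parameterized
-- query rather than splicing them into the SQL string:
-- INSERT INTO file_store (filename, mime_type, data) VALUES (?, ?, ?);
```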
19 | 531 | What is the fastest way to get the value of π? | I'm looking for the fastest way to obtain the value of π, as a personal challenge. More specifically, I'm using ways that don't involve using #define constants like M_PI, or hard-coding the number in. The program below tests the various ways I know of. The inline assembly version is, in theory, the fastest option, though clearly not portable. I've included it as a baseline to compare against the other versions. In my tests, with built-ins, the 4 * atan(1) version is fastest on GCC 4.2, because it auto-folds the atan(1) into a constant. With -fno-builtin specified, the atan2(0, -1) version is fastest. Here's the main testing program ( pitimes.c ): #include <math.h> #include <stdio.h> #include <time.h> #define ITERS 10000000 #define TESTWITH(x) { \ diff = 0.0; \ time1 = clock(); \ for (i = 0; i < ITERS; ++i) \ diff += (x) - M_PI; \ time2 = clock(); \ printf("%s\t=> %e, time => %f\n", #x, diff, diffclock(time2, time1)); \ }
static inline double diffclock(clock_t time1, clock_t time0) { return (double) (time1 - time0) / CLOCKS_PER_SEC; }
int main() { int i; clock_t time1, time2; double diff;
/* Warmup. The atan2 case catches GCC's atan folding (which would * optimise the ``4 * atan(1) - M_PI'' to a no-op), if -fno-builtin * is not used. */ TESTWITH(4 * atan(1)) TESTWITH(4 * atan2(1, 1))
#if defined(__GNUC__) && (defined(__i386__) || defined(__amd64__)) extern double fldpi(); TESTWITH(fldpi()) #endif
/* Actual tests start here. */ TESTWITH(atan2(0, -1)) TESTWITH(acos(-1)) TESTWITH(2 * asin(1)) TESTWITH(4 * atan2(1, 1)) TESTWITH(4 * atan(1))
return 0; } And the inline assembly stuff ( fldpi.c ) that will only work for x86 and x64 systems: double fldpi() { double pi; asm("fldpi": "=t" (pi)); return pi; } And a build script that builds all the configurations I'm testing ( build.sh ): #!/bin/sh gcc -O3 -Wall -c -m32 -o fldpi-32.o fldpi.c gcc -O3 -Wall -c -m64 -o fldpi-64.o fldpi.c
gcc -O3 -Wall -ffast-math -m32 -o pitimes1-32 pitimes.c fldpi-32.o gcc -O3 -Wall -m32 -o pitimes2-32 pitimes.c fldpi-32.o -lm gcc -O3 -Wall -fno-builtin -m32 -o pitimes3-32 pitimes.c fldpi-32.o -lm gcc -O3 -Wall -ffast-math -m64 -o pitimes1-64 pitimes.c fldpi-64.o -lm gcc -O3 -Wall -m64 -o pitimes2-64 pitimes.c fldpi-64.o -lm gcc -O3 -Wall -fno-builtin -m64 -o pitimes3-64 pitimes.c fldpi-64.o -lm Apart from testing between various compiler flags (I've compared 32-bit against 64-bit too because the optimizations are different), I've also tried switching the order of the tests around. But still, the atan2(0, -1) version still comes out on top every time. | The Monte Carlo method, as mentioned, applies some great concepts but it is, clearly, not the fastest, not by a long shot, not by any reasonable measure. Also, it all depends on what kind of accuracy you are looking for. The fastest π I know of is the one with the digits hard coded. Looking at Pi and Pi[PDF], there are a lot of formulae. Here is a method that converges quickly — about 14 digits per iteration. PiFast, the current fastest application, uses this formula with the FFT. I'll just write the formula, since the code is straightforward. This formula was almost found by Ramanujan and discovered by Chudnovsky. It is actually how he calculated several billion digits of the number — so it isn't a method to disregard. The formula will overflow quickly and, since we are dividing factorials, it would be advantageous then to delay such calculations to remove terms. where, Below is the Brent–Salamin algorithm. Wikipedia mentions that when a and b are "close enough" then (a + b)² / 4t will be an approximation of π. I'm not sure what "close enough" means, but from my tests, one iteration got 2 digits, two got 7, and three had 15, of course this is with doubles, so it might have an error based on its representation and the true calculation could be more accurate. 
let pi_2 iters = let rec loop_ a b t p i = if i = 0 then a,b,t,p else let a_n = (a +. b) /. 2.0 and b_n = sqrt (a*.b) and p_n = 2.0 *. p in let t_n = t -. (p *. (a -. a_n) *. (a -. a_n)) in loop_ a_n b_n t_n p_n (i - 1) in let a,b,t,p = loop_ (1.0) (1.0 /. (sqrt 2.0)) (1.0/.4.0) (1.0) iters in (a +. b) *. (a +. b) /. (4.0 *. t) Lastly, how about some pi golf (800 digits)? 160 characters! int a=10000,b,c=2800,d,e,f[2801],g;main(){for(;b-c;)f[b++]=a/5;for(;d=0,g=c*2;c-=14,printf("%.4d",e+d/a),e=d%a)for(b=c;d+=f[b]*a,f[b]=d%--g,d/=g--,--b;d*=b);} | What is the fastest way to get the value of π? I'm looking for the fastest way to obtain the value of π, as a personal challenge. More specifically, I'm using ways that don't involve using #define constants like M_PI, or hard-coding the number in. The program below tests the various ways I know of. The inline assembly version is, in theory, the fastest option, though clearly not portable. I've included it as a baseline to compare against the other versions. In my tests, with built-ins, the 4 * atan(1) version is fastest on GCC 4.2, because it auto-folds the atan(1) into a constant. With -fno-builtin specified, the atan2(0, -1) version is fastest. Here's the main testing program ( pitimes.c ): #include #include #include #define ITERS 10000000 #define TESTWITH(x) { \ diff = 0.0; \ time1 = clock(); \ for (i = 0; i < ITERS; ++i) \ diff += (x) - M_PI; \ time2 = clock(); \ printf("%s\t=> %e, time => %f\n", #x, diff, diffclock(time2, time1)); \ }
static inline double diffclock(clock_t time1, clock_t time0) { return (double) (time1 - time0) / CLOCKS_PER_SEC; }
int main() { int i; clock_t time1, time2; double diff;
/* Warmup. The atan2 case catches GCC's atan folding (which would * optimise the ``4 * atan(1) - M_PI'' to a no-op), if -fno-builtin * is not used. */ TESTWITH(4 * atan(1)) TESTWITH(4 * atan2(1, 1))
#if defined(__GNUC__) && (defined(__i386__) || defined(__amd64__)) extern double fldpi(); TESTWITH(fldpi()) #endif
/* Actual tests start here. */ TESTWITH(atan2(0, -1)) TESTWITH(acos(-1)) TESTWITH(2 * asin(1)) TESTWITH(4 * atan2(1, 1)) TESTWITH(4 * atan(1))
return 0; } And the inline assembly stuff ( fldpi.c ) that will only work for x86 and x64 systems: double fldpi() { double pi; asm("fldpi": "=t" (pi)); return pi; } And a build script that builds all the configurations I'm testing ( build.sh ): #!/bin/sh gcc -O3 -Wall -c -m32 -o fldpi-32.o fldpi.c gcc -O3 -Wall -c -m64 -o fldpi-64.o fldpi.c
gcc -O3 -Wall -ffast-math -m32 -o pitimes1-32 pitimes.c fldpi-32.o gcc -O3 -Wall -m32 -o pitimes2-32 pitimes.c fldpi-32.o -lm gcc -O3 -Wall -fno-builtin -m32 -o pitimes3-32 pitimes.c fldpi-32.o -lm gcc -O3 -Wall -ffast-math -m64 -o pitimes1-64 pitimes.c fldpi-64.o -lm gcc -O3 -Wall -m64 -o pitimes2-64 pitimes.c fldpi-64.o -lm gcc -O3 -Wall -fno-builtin -m64 -o pitimes3-64 pitimes.c fldpi-64.o -lm Apart from testing between various compiler flags (I've compared 32-bit against 64-bit too because the optimizations are different), I've also tried switching the order of the tests around. But still, the atan2(0, -1) version still comes out on top every time. | TITLE:
What is the fastest way to get the value of π?
QUESTION:
I'm looking for the fastest way to obtain the value of π, as a personal challenge. More specifically, I'm using ways that don't involve using #define constants like M_PI, or hard-coding the number in. The program below tests the various ways I know of. The inline assembly version is, in theory, the fastest option, though clearly not portable. I've included it as a baseline to compare against the other versions. In my tests, with built-ins, the 4 * atan(1) version is fastest on GCC 4.2, because it auto-folds the atan(1) into a constant. With -fno-builtin specified, the atan2(0, -1) version is fastest. Here's the main testing program ( pitimes.c ): #include <math.h> #include <stdio.h> #include <time.h> #define ITERS 10000000 #define TESTWITH(x) { \ diff = 0.0; \ time1 = clock(); \ for (i = 0; i < ITERS; ++i) \ diff += (x) - M_PI; \ time2 = clock(); \ printf("%s\t=> %e, time => %f\n", #x, diff, diffclock(time2, time1)); \ }
static inline double diffclock(clock_t time1, clock_t time0) { return (double) (time1 - time0) / CLOCKS_PER_SEC; }
int main() { int i; clock_t time1, time2; double diff;
/* Warmup. The atan2 case catches GCC's atan folding (which would * optimise the ``4 * atan(1) - M_PI'' to a no-op), if -fno-builtin * is not used. */ TESTWITH(4 * atan(1)) TESTWITH(4 * atan2(1, 1))
#if defined(__GNUC__) && (defined(__i386__) || defined(__amd64__)) extern double fldpi(); TESTWITH(fldpi()) #endif
/* Actual tests start here. */ TESTWITH(atan2(0, -1)) TESTWITH(acos(-1)) TESTWITH(2 * asin(1)) TESTWITH(4 * atan2(1, 1)) TESTWITH(4 * atan(1))
return 0; } And the inline assembly stuff ( fldpi.c ) that will only work for x86 and x64 systems: double fldpi() { double pi; asm("fldpi": "=t" (pi)); return pi; } And a build script that builds all the configurations I'm testing ( build.sh ): #!/bin/sh gcc -O3 -Wall -c -m32 -o fldpi-32.o fldpi.c gcc -O3 -Wall -c -m64 -o fldpi-64.o fldpi.c
gcc -O3 -Wall -ffast-math -m32 -o pitimes1-32 pitimes.c fldpi-32.o gcc -O3 -Wall -m32 -o pitimes2-32 pitimes.c fldpi-32.o -lm gcc -O3 -Wall -fno-builtin -m32 -o pitimes3-32 pitimes.c fldpi-32.o -lm gcc -O3 -Wall -ffast-math -m64 -o pitimes1-64 pitimes.c fldpi-64.o -lm gcc -O3 -Wall -m64 -o pitimes2-64 pitimes.c fldpi-64.o -lm gcc -O3 -Wall -fno-builtin -m64 -o pitimes3-64 pitimes.c fldpi-64.o -lm Apart from testing between various compiler flags (I've compared 32-bit against 64-bit too because the optimizations are different), I've also tried switching the order of the tests around. But still, the atan2(0, -1) version still comes out on top every time.
ANSWER:
The Monte Carlo method, as mentioned, applies some great concepts but it is, clearly, not the fastest, not by a long shot, not by any reasonable measure. Also, it all depends on what kind of accuracy you are looking for. The fastest π I know of is the one with the digits hard coded. Looking at Pi and Pi[PDF], there are a lot of formulae. Here is a method that converges quickly — about 14 digits per iteration. PiFast, the current fastest application, uses this formula with the FFT. I'll just write the formula, since the code is straightforward. This formula was almost found by Ramanujan and discovered by Chudnovsky. It is actually how he calculated several billion digits of the number — so it isn't a method to disregard. The formula will overflow quickly and, since we are dividing factorials, it would be advantageous then to delay such calculations to remove terms. where, Below is the Brent–Salamin algorithm. Wikipedia mentions that when a and b are "close enough" then (a + b)² / 4t will be an approximation of π. I'm not sure what "close enough" means, but from my tests, one iteration got 2 digits, two got 7, and three had 15, of course this is with doubles, so it might have an error based on its representation and the true calculation could be more accurate. let pi_2 iters = let rec loop_ a b t p i = if i = 0 then a,b,t,p else let a_n = (a +. b) /. 2.0 and b_n = sqrt (a*.b) and p_n = 2.0 *. p in let t_n = t -. (p *. (a -. a_n) *. (a -. a_n)) in loop_ a_n b_n t_n p_n (i - 1) in let a,b,t,p = loop_ (1.0) (1.0 /. (sqrt 2.0)) (1.0/.4.0) (1.0) iters in (a +. b) *. (a +. b) /. (4.0 *. t) Lastly, how about some pi golf (800 digits)? 160 characters! int a=10000,b,c=2800,d,e,f[2801],g;main(){for(;b-c;)f[b++]=a/5;for(;d=0,g=c*2;c-=14,printf("%.4d",e+d/a),e=d%a)for(b=c;d+=f[b]*a,f[b]=d%--g,d/=g--,--b;d*=b);} | [
"performance",
"algorithm",
"language-agnostic",
"unix",
"pi"
] | 352 | 223 | 70,352 | 23 | 0 | 2008-08-01T05:21:22.257000 | 2008-08-02T18:22:52.690000 |
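The answer's OCaml version of the Brent–Salamin (arithmetic–geometric mean) iteration translates line for line. A Python sketch for illustration (the function name is mine):

```python
import math

def brent_salamin(iterations: int) -> float:
    """Approximate pi via the AGM iteration described above.
    Convergence is quadratic: roughly 2, 7, then 15 correct digits."""
    a, b, t, p = 1.0, 1.0 / math.sqrt(2.0), 0.25, 1.0
    for _ in range(iterations):
        a_next = (a + b) / 2.0
        b = math.sqrt(a * b)
        t -= p * (a - a_next) ** 2
        p *= 2.0
        a = a_next
    return (a + b) ** 2 / (4.0 * t)
```

With IEEE doubles the result saturates at machine precision after three iterations, matching the digit counts quoted in the answer.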
24 | 49 | Throw an error preventing a table update in a MySQL trigger | If I have a trigger before the update on a table, how can I throw an error that prevents the update on that table? | Here is one hack that may work. It isn't clean, but it does the job: essentially, you just try to update a column that doesn't exist, which raises a runtime error inside the trigger and aborts the update. | Throw an error preventing a table update in a MySQL trigger If I have a trigger before the update on a table, how can I throw an error that prevents the update on that table? | TITLE:
Throw an error preventing a table update in a MySQL trigger
QUESTION:
If I have a trigger before the update on a table, how can I throw an error that prevents the update on that table?
ANSWER:
Here is one hack that may work. It isn't clean, but it does the job: essentially, you just try to update a column that doesn't exist, which raises a runtime error inside the trigger and aborts the update.
"mysql",
"database",
"triggers"
] | 195 | 67 | 105,586 | 7 | 0 | 2008-08-01T12:12:19.350000 | 2008-08-01T13:02:51.900000 |
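The code for the hack itself did not make it into the answer, but the idea is to reference something that does not exist so the statement fails at runtime. On MySQL 5.5 and later there is a supported way to do the same thing with SIGNAL; a sketch (the table name and condition are hypothetical):

```sql
DELIMITER //
CREATE TRIGGER accounts_before_update
BEFORE UPDATE ON accounts            -- hypothetical table
FOR EACH ROW
BEGIN
    IF NEW.balance < 0 THEN          -- hypothetical business rule
        -- Raising an error in a BEFORE trigger aborts the UPDATE.
        SIGNAL SQLSTATE '45000'
            SET MESSAGE_TEXT = 'update rejected: balance may not go negative';
    END IF;
END//
DELIMITER ;
```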
25 | 1,443,907 | How to use the C socket API in C++ on z/OS | I'm having issues getting the C sockets API to work properly in C++ on z/OS. Although I am including sys/socket.h, I still get compile time errors telling me that AF_INET is not defined. Am I missing something obvious, or is this related to the fact that being on z/OS makes my problems much more complicated? I discovered that there is a #ifdef that I'm hitting. Apparently, z/OS isn't happy unless I define which "type" of sockets I'm using with: #define _OE_SOCKETS Now, I personally have no idea what this _OE_SOCKETS is actually for, so if any z/OS sockets programmers are out there (all 3 of you), perhaps you could give me a rundown of how this all works? Test App: #include <sys/socket.h> int main() { return AF_INET; } Compile/Link Output: cxx -Wc,xplink -Wl,xplink -o inet_test inet.C
"./inet.C", line 5.16: CCN5274 (S) The name lookup for "AF_INET" did not find a declaration. CCN0797(I) Compilation failed for file ./inet.C. Object file not created. A check of sys/socket.h does include the definition I need, and as far as I can tell, it is not being blocked by any #ifdef statements. I have however noticed it contains the following: #ifdef __cplusplus extern "C" { #endif which encapsulates basically the whole file? Not sure if it matters. | Keep a copy of the IBM manuals handy: z/OS V1R11.0 XL C/C++ Programming Guide z/OS V1R11.0 XL C/C++ Run-Time Library Reference The IBM publications are generally very good, but you need to get used to their format, as well as knowing where to look for an answer. You'll find quite often that a feature that you want to use is guarded by a "feature test macro". You should ask your friendly system programmer to install the XL C/C++ Run-Time Library Reference: Man Pages on your system. Then you can do things like "man connect" to pull up the man page for the socket connect() API. When I do that, this is what I see: FORMAT X/Open #define _XOPEN_SOURCE_EXTENDED 1 #include <sys/socket.h> int connect(int socket, const struct sockaddr *address, socklen_t address_len); Berkeley Sockets #define _OE_SOCKETS #include <sys/types.h> #include <sys/socket.h> int connect(int socket, struct sockaddr *address, int address_len); | How to use the C socket API in C++ on z/OS I'm having issues getting the C sockets API to work properly in C++ on z/OS. Although I am including sys/socket.h, I still get compile time errors telling me that AF_INET is not defined. Am I missing something obvious, or is this related to the fact that being on z/OS makes my problems much more complicated? I discovered that there is a #ifdef that I'm hitting. 
Apparently, z/OS isn't happy unless I define which "type" of sockets I'm using with: #define _OE_SOCKETS Now, I personally have no idea what this _OE_SOCKETS is actually for, so if any z/OS sockets programmers are out there (all 3 of you), perhaps you could give me a rundown of how this all works? Test App: #include <sys/socket.h> int main() { return AF_INET; } Compile/Link Output: cxx -Wc,xplink -Wl,xplink -o inet_test inet.C
"./inet.C", line 5.16: CCN5274 (S) The name lookup for "AF_INET" did not find a declaration. CCN0797(I) Compilation failed for file ./inet.C. Object file not created. A check of sys/socket.h does include the definition I need, and as far as I can tell, it is not being blocked by any #ifdef statements. I have however noticed it contains the following: #ifdef __cplusplus extern "C" { #endif which encapsulates basically the whole file? Not sure if it matters. | TITLE:
How to use the C socket API in C++ on z/OS
QUESTION:
I'm having issues getting the C sockets API to work properly in C++ on z/OS. Although I am including sys/socket.h, I still get compile time errors telling me that AF_INET is not defined. Am I missing something obvious, or is this related to the fact that being on z/OS makes my problems much more complicated? I discovered that there is a #ifdef that I'm hitting. Apparently, z/OS isn't happy unless I define which "type" of sockets I'm using with: #define _OE_SOCKETS Now, I personally have no idea what this _OE_SOCKETS is actually for, so if any z/OS sockets programmers are out there (all 3 of you), perhaps you could give me a rundown of how this all works? Test App: #include <sys/socket.h> int main() { return AF_INET; } Compile/Link Output: cxx -Wc,xplink -Wl,xplink -o inet_test inet.C
"./inet.C", line 5.16: CCN5274 (S) The name lookup for "AF_INET" did not find a declaration. CCN0797(I) Compilation failed for file ./inet.C. Object file not created. A check of sys/socket.h does include the definition I need, and as far as I can tell, it is not being blocked by any #ifdef statements. I have however noticed it contains the following: #ifdef __cplusplus extern "C" { #endif which encapsulates basically the whole file? Not sure if it matters.
ANSWER:
Keep a copy of the IBM manuals handy: z/OS V1R11.0 XL C/C++ Programming Guide z/OS V1R11.0 XL C/C++ Run-Time Library Reference The IBM publications are generally very good, but you need to get used to their format, as well as knowing where to look for an answer. You'll find quite often that a feature that you want to use is guarded by a "feature test macro". You should ask your friendly system programmer to install the XL C/C++ Run-Time Library Reference: Man Pages on your system. Then you can do things like "man connect" to pull up the man page for the socket connect() API. When I do that, this is what I see: FORMAT X/Open #define _XOPEN_SOURCE_EXTENDED 1 #include <sys/socket.h> int connect(int socket, const struct sockaddr *address, socklen_t address_len); Berkeley Sockets #define _OE_SOCKETS #include <sys/types.h> #include <sys/socket.h> int connect(int socket, struct sockaddr *address, int address_len);
"c++",
"c",
"sockets",
"mainframe",
"zos"
] | 176 | 97 | 16,412 | 9 | 0 | 2008-08-01T12:13:50.207000 | 2009-09-18T11:17:01.933000 |
36 | 352 | Check for changes to an SQL Server table? | How can I monitor an SQL Server database for changes to a table without using triggers or modifying the structure of the database in any way? My preferred programming environment is.NET and C#. I'd like to be able to support any SQL Server 2000 SP4 or newer. My application is a bolt-on data visualization for another company's product. Our customer base is in the thousands, so I don't want to have to put in requirements that we modify the third-party vendor's table at every installation. By "changes to a table" I mean changes to table data, not changes to table structure. Ultimately, I would like the change to trigger an event in my application, instead of having to check for changes at an interval. The best course of action given my requirements (no triggers or schema modification, SQL Server 2000 and 2005) seems to be to use the BINARY_CHECKSUM function in T-SQL. The way I plan to implement is this: Every X seconds run the following query: SELECT CHECKSUM_AGG(BINARY_CHECKSUM(*)) FROM sample_table WITH (NOLOCK); And compare that against the stored value. If the value has changed, go through the table row by row using the query: SELECT row_id, BINARY_CHECKSUM(*) FROM sample_table WITH (NOLOCK); And compare the returned checksums against stored values. | Take a look at the CHECKSUM command: SELECT CHECKSUM_AGG(BINARY_CHECKSUM(*)) FROM sample_table WITH (NOLOCK); That will return the same number each time it's run as long as the table contents haven't changed. See my post on this for more information: CHECKSUM Here's how I used it to rebuild cache dependencies when tables changed: ASP.NET 1.1 database cache dependency (without triggers) | Check for changes to an SQL Server table? How can I monitor an SQL Server database for changes to a table without using triggers or modifying the structure of the database in any way? My preferred programming environment is.NET and C#. 
I'd like to be able to support any SQL Server 2000 SP4 or newer. My application is a bolt-on data visualization for another company's product. Our customer base is in the thousands, so I don't want to have to put in requirements that we modify the third-party vendor's table at every installation. By "changes to a table" I mean changes to table data, not changes to table structure. Ultimately, I would like the change to trigger an event in my application, instead of having to check for changes at an interval. The best course of action given my requirements (no triggers or schema modification, SQL Server 2000 and 2005) seems to be to use the BINARY_CHECKSUM function in T-SQL. The way I plan to implement is this: Every X seconds run the following query: SELECT CHECKSUM_AGG(BINARY_CHECKSUM(*)) FROM sample_table WITH (NOLOCK); And compare that against the stored value. If the value has changed, go through the table row by row using the query: SELECT row_id, BINARY_CHECKSUM(*) FROM sample_table WITH (NOLOCK); And compare the returned checksums against stored values. | TITLE:
Check for changes to an SQL Server table?
QUESTION:
How can I monitor an SQL Server database for changes to a table without using triggers or modifying the structure of the database in any way? My preferred programming environment is .NET and C#. I'd like to be able to support any SQL Server 2000 SP4 or newer. My application is a bolt-on data visualization for another company's product. Our customer base is in the thousands, so I don't want to have to put in requirements that we modify the third-party vendor's table at every installation. By "changes to a table" I mean changes to table data, not changes to table structure. Ultimately, I would like the change to trigger an event in my application, instead of having to check for changes at an interval. The best course of action given my requirements (no triggers or schema modification, SQL Server 2000 and 2005) seems to be to use the BINARY_CHECKSUM function in T-SQL. The way I plan to implement is this: Every X seconds run the following query: SELECT CHECKSUM_AGG(BINARY_CHECKSUM(*)) FROM sample_table WITH (NOLOCK); And compare that against the stored value. If the value has changed, go through the table row by row using the query: SELECT row_id, BINARY_CHECKSUM(*) FROM sample_table WITH (NOLOCK); And compare the returned checksums against stored values.
ANSWER:
Take a look at the CHECKSUM command: SELECT CHECKSUM_AGG(BINARY_CHECKSUM(*)) FROM sample_table WITH (NOLOCK); That will return the same number each time it's run as long as the table contents haven't changed. See my post on this for more information: CHECKSUM Here's how I used it to rebuild cache dependencies when tables changed: ASP.NET 1.1 database cache dependency (without triggers) | [
"sql",
"sql-server",
"datatable",
"rdbms"
] | 153 | 101 | 78,517 | 9 | 0 | 2008-08-01T12:35:56.917000 | 2008-08-02T05:20:22.397000 |
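The polling scheme in the row above (a cheap aggregate checksum first, then per-row checksums only when the aggregate differs) can be sketched outside of T-SQL. The following is a hypothetical Python sketch of the same two-stage idea, using MD5 hashes in place of BINARY_CHECKSUM and CHECKSUM_AGG; the names and data shapes are illustrative, not part of the original answer.

```python
import hashlib

def row_checksum(row):
    # Stand-in for BINARY_CHECKSUM(*): hash every column of one row.
    return hashlib.md5(repr(row).encode()).hexdigest()

def table_checksum(rows):
    # Stand-in for CHECKSUM_AGG(...): an order-insensitive aggregate of row checksums.
    agg = 0
    for r in rows:
        agg ^= int(row_checksum(r), 16)
    return agg

def changed_rows(rows, stored):
    # Second stage: runs only when the aggregate differs; returns ids whose checksum moved.
    return [rid for rid, *rest in rows if stored.get(rid) != row_checksum((rid, *rest))]

# Illustrative state kept between polls: rows as (row_id, data) tuples.
rows = [(1, "alpha"), (2, "beta")]
stored = {r[0]: row_checksum(r) for r in rows}
baseline = table_checksum(rows)
```

As with CHECKSUM_AGG itself, an order-insensitive aggregate can collide, so a matching value does not absolutely guarantee an unchanged table.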
39 | 45 | Reliable timer in a console application | I am aware that in .NET there are three timer types (see Comparing the Timer Classes in the .NET Framework Class Library ). I have chosen a threaded timer as the other types can drift if the main thread is busy, and I need this to be reliable. The way this timer works is that the control of the timer is put on another thread, so it can always tick along with the work being completed on the parent thread when it is not busy. The issue with this timer in a console application is that while the timer is ticking along on another thread, the main thread is not doing anything, so the application closes. I tried adding a while true loop, but then the main thread is too busy when the timer does go off. | You can use something like Console.ReadLine() to block the main thread, so other background threads (like timer threads) will still work. You may also use an AutoResetEvent to block the execution, then (when you need to) you can call the Set() method on that AutoResetEvent object to release the main thread. Also ensure that your reference to the Timer object doesn't go out of scope and get garbage collected. | Reliable timer in a console application I am aware that in .NET there are three timer types (see Comparing the Timer Classes in the .NET Framework Class Library ). I have chosen a threaded timer as the other types can drift if the main thread is busy, and I need this to be reliable. The way this timer works is that the control of the timer is put on another thread, so it can always tick along with the work being completed on the parent thread when it is not busy. The issue with this timer in a console application is that while the timer is ticking along on another thread, the main thread is not doing anything, so the application closes. I tried adding a while true loop, but then the main thread is too busy when the timer does go off. | TITLE:
Reliable timer in a console application
QUESTION:
I am aware that in .NET there are three timer types (see Comparing the Timer Classes in the .NET Framework Class Library ). I have chosen a threaded timer as the other types can drift if the main thread is busy, and I need this to be reliable. The way this timer works is that the control of the timer is put on another thread, so it can always tick along with the work being completed on the parent thread when it is not busy. The issue with this timer in a console application is that while the timer is ticking along on another thread, the main thread is not doing anything, so the application closes. I tried adding a while true loop, but then the main thread is too busy when the timer does go off.
ANSWER:
You can use something like Console.ReadLine() to block the main thread, so other background threads (like timer threads) will still work. You may also use an AutoResetEvent to block the execution, then (when you need to) you can call the Set() method on that AutoResetEvent object to release the main thread. Also ensure that your reference to the Timer object doesn't go out of scope and get garbage collected.
"c#",
".net",
"vb.net",
"timer"
] | 114 | 63 | 8,237 | 3 | 0 | 2008-08-01T12:43:11.503000 | 2008-08-01T12:56:37.920000 |
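The answer above can be illustrated outside C#. Below is a minimal Python sketch of the same pattern: a background timer ticks while the main thread blocks on an event object (playing the role of the AutoResetEvent) instead of spinning in a busy loop. All names and intervals here are hypothetical.

```python
import threading

ticks = 0
done = threading.Event()          # plays the role of the AutoResetEvent

def on_tick():
    global ticks
    ticks += 1
    if ticks >= 3:
        done.set()                # release the blocked main thread
    else:
        threading.Timer(0.01, on_tick).start()   # re-arm the one-shot timer

timer = threading.Timer(0.01, on_tick)  # keep a named reference, per the answer's last point
timer.start()
done.wait(timeout=5)              # main thread blocks here instead of in a busy while-true loop
```

The main thread consumes no CPU while waiting, yet the process stays alive until the background work decides to release it.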
42 | 77 | Best way to allow plugins for a PHP application | I am starting a new web application in PHP and this time around I want to create something that people can extend by using a plugin interface. How does one go about writing 'hooks' into their code so that plugins can attach to specific events? | You could use an Observer pattern. A simple functional way to accomplish this: Output: This is my CRAZY application 4 + 5 = 9 4 * 5 = 20 Notes: For this example source code, you must declare all your plugins before the actual source code that you want to be extendable. I've included an example of how to handle single or multiple values being passed to the plugin. The hardest part of this is writing the actual documentation which lists what arguments get passed to each hook. This is just one method of accomplishing a plugin system in PHP. There are better alternatives, I suggest you check out the WordPress Documentation for more information. | Best way to allow plugins for a PHP application I am starting a new web application in PHP and this time around I want to create something that people can extend by using a plugin interface. How does one go about writing 'hooks' into their code so that plugins can attach to specific events? | TITLE:
Best way to allow plugins for a PHP application
QUESTION:
I am starting a new web application in PHP and this time around I want to create something that people can extend by using a plugin interface. How does one go about writing 'hooks' into their code so that plugins can attach to specific events?
ANSWER:
You could use an Observer pattern. A simple functional way to accomplish this: Output: This is my CRAZY application 4 + 5 = 9 4 * 5 = 20 Notes: For this example source code, you must declare all your plugins before the actual source code that you want to be extendable. I've included an example of how to handle single or multiple values being passed to the plugin. The hardest part of this is writing the actual documentation which lists what arguments get passed to each hook. This is just one method of accomplishing a plugin system in PHP. There are better alternatives, I suggest you check out the WordPress Documentation for more information. | [
"php",
"plugins",
"architecture",
"hook"
] | 293 | 168 | 40,824 | 8 | 0 | 2008-08-01T12:50:18.587000 | 2008-08-01T13:46:00.097000 |
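The hook mechanism the answer describes (and whose sample output it quotes) can be sketched in a few lines. This is a hypothetical Python version of the register-then-fire observer idea, not the original PHP code; it reproduces the 4 + 5 = 9 and 4 * 5 = 20 results from the quoted output.

```python
from collections import defaultdict

_hooks = defaultdict(list)

def add_action(hook, callback):
    # Plugins register callbacks before the host application fires the hook.
    _hooks[hook].append(callback)

def do_action(hook, *args):
    # The host application calls this at each extension point it exposes.
    return [cb(*args) for cb in _hooks[hook]]

# Two "plugins" attach to a hook the application exposes:
add_action("numbers", lambda a, b: a + b)
add_action("numbers", lambda a, b: a * b)
```

As the answer notes, plugins must be registered before the host code fires the hook, and the real documentation burden is specifying which arguments each hook passes.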
48 | 31,910 | Multiple submit buttons in an HTML form | Let's say you create a wizard in an HTML form. One button goes back, and one goes forward. Since the back button appears first in the markup when you press Enter, it will use that button to submit the form. Example: I would like to get to decide which button is used to submit the form when a user presses Enter. That way, when you press Enter the wizard will move to the next page, not the previous. Do you have to use tabindex to do this? | I'm just doing the trick of float ing the buttons to the right. This way the Prev button is left of the Next button, but the Next comes first in the HTML structure:.f { float: right; }.clr { clear: both; } Benefits over other suggestions: no JavaScript code, accessible, and both buttons remain type="submit". | Multiple submit buttons in an HTML form Let's say you create a wizard in an HTML form. One button goes back, and one goes forward. Since the back button appears first in the markup when you press Enter, it will use that button to submit the form. Example: I would like to get to decide which button is used to submit the form when a user presses Enter. That way, when you press Enter the wizard will move to the next page, not the previous. Do you have to use tabindex to do this? | TITLE:
Multiple submit buttons in an HTML form
QUESTION:
Let's say you create a wizard in an HTML form. One button goes back, and one goes forward. Since the back button appears first in the markup when you press Enter, it will use that button to submit the form. Example: I would like to get to decide which button is used to submit the form when a user presses Enter. That way, when you press Enter the wizard will move to the next page, not the previous. Do you have to use tabindex to do this?
ANSWER:
I'm just doing the trick of floating the buttons to the right. This way the Prev button is left of the Next button, but the Next comes first in the HTML structure: .f { float: right; } .clr { clear: both; } Benefits over other suggestions: no JavaScript code, accessible, and both buttons remain type="submit".
"html",
"forms",
"form-submit",
"submit-button"
] | 291 | 152 | 257,419 | 28 | 0 | 2008-08-01T13:01:17.303000 | 2008-08-28T09:34:40.143000 |
59 | 43,110 | How do I get a distinct, ordered list of names from a DataTable using LINQ? | I have a DataTable with a Name column. I want to generate a collection of the unique names ordered alphabetically. The following query ignores the order by clause. var names = (from DataRow dr in dataTable.Rows orderby (string)dr["Name"] select (string)dr["Name"]).Distinct(); Why does the orderby not get enforced? | To make it more readable and maintainable, you can also split it up into multiple LINQ statements. First, select your data into a new list, let's call it x1, do a projection if desired Next, create a distinct list, from x1 into x2, using whatever distinction you require Finally, create an ordered list, from x2 into x3, sorting by whatever you desire | How do I get a distinct, ordered list of names from a DataTable using LINQ? I have a DataTable with a Name column. I want to generate a collection of the unique names ordered alphabetically. The following query ignores the order by clause. var names = (from DataRow dr in dataTable.Rows orderby (string)dr["Name"] select (string)dr["Name"]).Distinct(); Why does the orderby not get enforced? | TITLE:
How do I get a distinct, ordered list of names from a DataTable using LINQ?
QUESTION:
I have a DataTable with a Name column. I want to generate a collection of the unique names ordered alphabetically. The following query ignores the order by clause. var names = (from DataRow dr in dataTable.Rows orderby (string)dr["Name"] select (string)dr["Name"]).Distinct(); Why does the orderby not get enforced?
ANSWER:
To make it more readable and maintainable, you can also split it up into multiple LINQ statements. First, select your data into a new list, let's call it x1, and do a projection if desired. Next, create a distinct list, from x1 into x2, using whatever distinction you require. Finally, create an ordered list, from x2 into x3, sorting by whatever you desire.
"c#",
"linq",
".net-3.5"
] | 117 | 40 | 27,375 | 7 | 0 | 2008-08-01T13:14:33.797000 | 2008-09-04T02:57:27.263000 |
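The staging advice above translates directly to other languages. A hypothetical Python sketch of the same three stages (project, make distinct, then order the distinct result) looks like this; the point is that deduplication happens before the final sort, so the sort cannot be discarded the way the orderby-before-Distinct was.

```python
names = ["Charlie", "alice", "Bob", "alice", "Charlie"]

# Stage 1: project the column of interest.
projected = [n for n in names]
# Stage 2: deduplicate, like Distinct(); this step does not preserve any earlier ordering.
distinct = set(projected)
# Stage 3: order the already-distinct result, so the sort is the last thing applied.
ordered = sorted(distinct, key=str.lower)
```

Ordering last is the whole fix: any sort applied before the deduplication step is free to be ignored.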
66 | 4,521 | Paging a collection with LINQ | How do you page through a collection in LINQ given that you have a startIndex and a count? | A few months back I wrote a blog post about Fluent Interfaces and LINQ which used an Extension Method on IQueryable and another class to provide the following natural way of paginating a LINQ collection. var query = from i in ideas select i; var pagedCollection = query.InPagesOf(10); var pageOfIdeas = pagedCollection.Page(2); You can get the code from the MSDN Code Gallery Page: Pipelines, Filters, Fluent API and LINQ to SQL. | Paging a collection with LINQ How do you page through a collection in LINQ given that you have a startIndex and a count? | TITLE:
Paging a collection with LINQ
QUESTION:
How do you page through a collection in LINQ given that you have a startIndex and a count?
ANSWER:
A few months back I wrote a blog post about Fluent Interfaces and LINQ which used an Extension Method on IQueryable and another class to provide the following natural way of paginating a LINQ collection. var query = from i in ideas select i; var pagedCollection = query.InPagesOf(10); var pageOfIdeas = pagedCollection.Page(2); You can get the code from the MSDN Code Gallery Page: Pipelines, Filters, Fluent API and LINQ to SQL. | [
"linq",
".net-3.5"
] | 89 | 45 | 6,204 | 4 | 0 | 2008-08-01T13:20:46.890000 | 2008-08-07T08:22:27.440000 |
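The usual LINQ composition for this, query.Skip(startIndex).Take(count), has a direct analogue in other languages. Here is a hypothetical Python sketch using itertools.islice; the names are illustrative.

```python
from itertools import islice

def page(items, start_index, count):
    # Equivalent of items.Skip(start_index).Take(count); works on any iterable.
    return list(islice(items, start_index, start_index + count))

ideas = range(1, 26)                  # 25 items
second_page = page(ideas, 10, 10)     # items 11..20
```

Like Take, this sketch simply returns fewer items when the last page is short, and an empty list past the end.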
79 | 62,853 | Getting Subclipse in Aptana to work with the newest release of Subversion | The version of Subclipse (1.2.4) currently available through Aptana's automatic Plugins Manager does not work with the newest version of Subversion. I see on the Subclipse website however that they have 1.4.2 out for Eclipse. So I added a new remote update site to my Update manager. When I tried to install it, it told me I needed Mylyn 3.0.0. So after much searching I found Mylyn 3.0.0 and added another new remote update site to my update manager. Then when I tried to install that, it told me I needed org.eclipse.ui 3.3.0 or equivalent. Looking at the configuration details for Aptana, it looks like it is built against eclipse 3.2.2. Does anyone know if there is a way to upgrade the version of Eclipse Aptana that is built against to 3.3.0? Or if there is some other way to get Subclipse to work with the very newest version of Subversion? I know this isn't necessarily a "programming" question, but I hope it's ok since it's highly relevant to the programming experience. | Subclipse does not require Mylyn, but the update site includes a plugin that integrates Mylyn and Subclipse. This is intended for people that use Mylyn. In your case, you would want to just de-select Mylyn in the update dialog. Subclipse also requires Subversion 1.5 and the corresponding version of the JavaHL native libraries. I have written the start of an FAQ to help people understand JavaHL and how to get it. See: http://desktop-eclipse.open.collab.net/wiki/JavaHL | Getting Subclipse in Aptana to work with the newest release of Subversion The version of Subclipse (1.2.4) currently available through Aptana's automatic Plugins Manager does not work with the newest version of Subversion. I see on the Subclipse website however that they have 1.4.2 out for Eclipse. So I added a new remote update site to my Update manager. When I tried to install it, it told me I needed Mylyn 3.0.0. 
So after much searching I found Mylyn 3.0.0 and added another new remote update site to my update manager. Then when I tried to install that, it told me I needed org.eclipse.ui 3.3.0 or equivalent. Looking at the configuration details for Aptana, it looks like it is built against eclipse 3.2.2. Does anyone know if there is a way to upgrade the version of Eclipse Aptana that is built against to 3.3.0? Or if there is some other way to get Subclipse to work with the very newest version of Subversion? I know this isn't necessarily a "programming" question, but I hope it's ok since it's highly relevant to the programming experience. | TITLE:
Getting Subclipse in Aptana to work with the newest release of Subversion
QUESTION:
The version of Subclipse (1.2.4) currently available through Aptana's automatic Plugins Manager does not work with the newest version of Subversion. I see on the Subclipse website however that they have 1.4.2 out for Eclipse. So I added a new remote update site to my Update manager. When I tried to install it, it told me I needed Mylyn 3.0.0. So after much searching I found Mylyn 3.0.0 and added another new remote update site to my update manager. Then when I tried to install that, it told me I needed org.eclipse.ui 3.3.0 or equivalent. Looking at the configuration details for Aptana, it looks like it is built against eclipse 3.2.2. Does anyone know if there is a way to upgrade the version of Eclipse Aptana that is built against to 3.3.0? Or if there is some other way to get Subclipse to work with the very newest version of Subversion? I know this isn't necessarily a "programming" question, but I hope it's ok since it's highly relevant to the programming experience.
ANSWER:
Subclipse does not require Mylyn, but the update site includes a plugin that integrates Mylyn and Subclipse. This is intended for people that use Mylyn. In your case, you would want to just de-select Mylyn in the update dialog. Subclipse also requires Subversion 1.5 and the corresponding version of the JavaHL native libraries. I have written the start of an FAQ to help people understand JavaHL and how to get it. See: http://desktop-eclipse.open.collab.net/wiki/JavaHL | [
"eclipse",
"svn",
"aptana",
"subclipse"
] | 50 | 18 | 11,457 | 4 | 0 | 2008-08-01T13:56:33.837000 | 2008-09-15T13:26:34.350000 |
80 | 124 | SQLStatement.execute() - multiple queries in one statement | I've written a database generation script in SQL and want to execute it in my Adobe AIR application: Create Table tRole ( roleID integer Primary Key,roleName varchar(40) ); Create Table tFile ( fileID integer Primary Key,fileName varchar(50),fileDescription varchar(500),thumbnailID integer,fileFormatID integer,categoryID integer,isFavorite boolean,dateAdded date,globalAccessCount integer,lastAccessTime date,downloadComplete boolean,isNew boolean,isSpotlight boolean,duration varchar(30) ); Create Table tCategory ( categoryID integer Primary Key,categoryName varchar(50),parent_categoryID integer );... I execute this in Adobe AIR using the following methods: public static function RunSqlFromFile(fileName:String):void { var file:File = File.applicationDirectory.resolvePath(fileName); var stream:FileStream = new FileStream(); stream.open(file, FileMode.READ) var strSql:String = stream.readUTFBytes(stream.bytesAvailable); NonQuery(strSql); }
public static function NonQuery(strSQL:String):void { var sqlConnection:SQLConnection = new SQLConnection(); sqlConnection.open(File.applicationStorageDirectory.resolvePath(DBPATH)); var sqlStatement:SQLStatement = new SQLStatement(); sqlStatement.text = strSQL; sqlStatement.sqlConnection = sqlConnection; try { sqlStatement.execute(); } catch (error:SQLError) { Alert.show(error.toString()); } } No errors are generated, however only tRole exists. It seems that it only looks at the first query (up to the semicolon- if I remove it, the query fails). Is there a way to call multiple queries in one statement? | I wound up using this. It is a kind of a hack, but it actually works pretty well. The only thing is you have to be very careful with your semicolons.: D var strSql:String = stream.readUTFBytes(stream.bytesAvailable); var i:Number = 0; var strSqlSplit:Array = strSql.split(";"); for (i = 0; i < strSqlSplit.length; i++){ NonQuery(strSqlSplit[i].toString()); } | SQLStatement.execute() - multiple queries in one statement I've written a database generation script in SQL and want to execute it in my Adobe AIR application: Create Table tRole ( roleID integer Primary Key,roleName varchar(40) ); Create Table tFile ( fileID integer Primary Key,fileName varchar(50),fileDescription varchar(500),thumbnailID integer,fileFormatID integer,categoryID integer,isFavorite boolean,dateAdded date,globalAccessCount integer,lastAccessTime date,downloadComplete boolean,isNew boolean,isSpotlight boolean,duration varchar(30) ); Create Table tCategory ( categoryID integer Primary Key,categoryName varchar(50),parent_categoryID integer );... I execute this in Adobe AIR using the following methods: public static function RunSqlFromFile(fileName:String):void { var file:File = File.applicationDirectory.resolvePath(fileName); var stream:FileStream = new FileStream(); stream.open(file, FileMode.READ) var strSql:String = stream.readUTFBytes(stream.bytesAvailable); NonQuery(strSql); }
public static function NonQuery(strSQL:String):void { var sqlConnection:SQLConnection = new SQLConnection(); sqlConnection.open(File.applicationStorageDirectory.resolvePath(DBPATH)); var sqlStatement:SQLStatement = new SQLStatement(); sqlStatement.text = strSQL; sqlStatement.sqlConnection = sqlConnection; try { sqlStatement.execute(); } catch (error:SQLError) { Alert.show(error.toString()); } } No errors are generated, however only tRole exists. It seems that it only looks at the first query (up to the semicolon- if I remove it, the query fails). Is there a way to call multiple queries in one statement? | TITLE:
SQLStatement.execute() - multiple queries in one statement
QUESTION:
I've written a database generation script in SQL and want to execute it in my Adobe AIR application: Create Table tRole ( roleID integer Primary Key,roleName varchar(40) ); Create Table tFile ( fileID integer Primary Key,fileName varchar(50),fileDescription varchar(500),thumbnailID integer,fileFormatID integer,categoryID integer,isFavorite boolean,dateAdded date,globalAccessCount integer,lastAccessTime date,downloadComplete boolean,isNew boolean,isSpotlight boolean,duration varchar(30) ); Create Table tCategory ( categoryID integer Primary Key,categoryName varchar(50),parent_categoryID integer );... I execute this in Adobe AIR using the following methods: public static function RunSqlFromFile(fileName:String):void { var file:File = File.applicationDirectory.resolvePath(fileName); var stream:FileStream = new FileStream(); stream.open(file, FileMode.READ) var strSql:String = stream.readUTFBytes(stream.bytesAvailable); NonQuery(strSql); }
public static function NonQuery(strSQL:String):void { var sqlConnection:SQLConnection = new SQLConnection(); sqlConnection.open(File.applicationStorageDirectory.resolvePath(DBPATH)); var sqlStatement:SQLStatement = new SQLStatement(); sqlStatement.text = strSQL; sqlStatement.sqlConnection = sqlConnection; try { sqlStatement.execute(); } catch (error:SQLError) { Alert.show(error.toString()); } } No errors are generated, however only tRole exists. It seems that it only looks at the first query (up to the semicolon- if I remove it, the query fails). Is there a way to call multiple queries in one statement?
ANSWER:
I wound up using this. It is kind of a hack, but it actually works pretty well. The only thing is you have to be very careful with your semicolons. :D var strSql:String = stream.readUTFBytes(stream.bytesAvailable); var i:Number = 0; var strSqlSplit:Array = strSql.split(";"); for (i = 0; i < strSqlSplit.length; i++){ NonQuery(strSqlSplit[i].toString()); }
"apache-flex",
"actionscript-3",
"air"
] | 54 | 20 | 6,683 | 3 | 0 | 2008-08-01T13:57:07.033000 | 2008-08-01T16:09:47.687000 |
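Adobe AIR's embedded database is SQLite, and the split-on-semicolon loop above has a close analogue in Python's sqlite3 module, whose executescript() accepts a whole multi-statement script in one call. A hypothetical sketch, with the table definitions shortened from the question:

```python
import sqlite3

script = """
CREATE TABLE tRole (roleID INTEGER PRIMARY KEY, roleName VARCHAR(40));
CREATE TABLE tCategory (categoryID INTEGER PRIMARY KEY, categoryName VARCHAR(50));
"""

conn = sqlite3.connect(":memory:")
# executescript() runs every ';'-separated statement in one call -- the job
# that the answer's loop over strSql.split(";") does by hand in ActionScript.
conn.executescript(script)
tables = sorted(r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table'"))
```

Note that naively splitting on ";" (as in the hack above) breaks as soon as a semicolon appears inside a string literal or a trigger body, which is why a script-aware entry point is preferable when one exists.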
85 | 154 | Flat file databases | What are the best practices around creating flat file database structures in PHP? A lot of more matured PHP flat file frameworks out there which I attempt to implement SQL-like query syntax which is over the top for my purposes in most cases. (I would just use a database at that point). Are there any elegant tricks out there to get good performance and features with a small code overhead? | Well, what is the nature of the flat databases. Are they large or small. Is it simple arrays with arrays in them? if its something simple say userprofiles built as such: $user = array("name" => "bob", "age" => 20, "websites" => array("example.com","bob.example.com","bob2.example.com"), "and_one" => "more"); and to save or update the db record for that user. $dir = "../userdata/"; //make sure to put it bellow what the server can reach. file_put_contents($dir.$user['name'],serialize($user)); and to load the record for the user function &get_user($name){ return unserialize(file_get_contents("../userdata/".$name)); } but again this implementation will vary on the application and nature of the database you need. | Flat file databases What are the best practices around creating flat file database structures in PHP? A lot of more matured PHP flat file frameworks out there which I attempt to implement SQL-like query syntax which is over the top for my purposes in most cases. (I would just use a database at that point). Are there any elegant tricks out there to get good performance and features with a small code overhead? | TITLE:
Flat file databases
QUESTION:
What are the best practices around creating flat file database structures in PHP? A lot of more matured PHP flat file frameworks out there which I attempt to implement SQL-like query syntax which is over the top for my purposes in most cases. (I would just use a database at that point). Are there any elegant tricks out there to get good performance and features with a small code overhead?
ANSWER:
Well, what is the nature of the flat database? Is it large or small? Is it simple arrays with arrays in them? If it's something simple, say user profiles, built as such: $user = array("name" => "bob", "age" => 20, "websites" => array("example.com","bob.example.com","bob2.example.com"), "and_one" => "more"); then to save or update the db record for that user: $dir = "../userdata/"; //make sure to put it below what the server can reach. file_put_contents($dir.$user['name'],serialize($user)); and to load the record for the user: function &get_user($name){ return unserialize(file_get_contents("../userdata/".$name)); } But again, this implementation will vary with the application and nature of the database you need.
"php",
"sql",
"database",
"flat-file"
] | 132 | 81 | 69,859 | 11 | 0 | 2008-08-01T14:19:52.510000 | 2008-08-01T17:45:06.513000 |
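The serialize()/file_put_contents() approach above maps onto any language with a serializer. Here is a hypothetical Python sketch of the same one-file-per-record layout, using JSON in place of PHP's serialize(); directory and field names are illustrative.

```python
import json
import os
import tempfile

DATA_DIR = tempfile.mkdtemp()   # in real use: a directory *outside* the web root

def save_user(user):
    # One file per record, keyed by name -- the same shape as the PHP example.
    # A real implementation must sanitize the name before using it as a filename.
    with open(os.path.join(DATA_DIR, user["name"] + ".json"), "w") as f:
        json.dump(user, f)

def get_user(name):
    with open(os.path.join(DATA_DIR, name + ".json")) as f:
        return json.load(f)

save_user({"name": "bob", "age": 20, "websites": ["example.com", "bob.example.com"]})
```

JSON also sidesteps a hazard of the original: unserializing attacker-controlled data is unsafe, while parsing JSON is not.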
88 | 98 | Is gettimeofday() guaranteed to be of microsecond resolution? | I am porting a game, that was originally written for the Win32 API, to Linux (well, porting the OS X port of the Win32 port to Linux). I have implemented QueryPerformanceCounter by giving the uSeconds since the process start up: BOOL QueryPerformanceCounter(LARGE_INTEGER* performanceCount) { gettimeofday(&currentTimeVal, NULL); performanceCount->QuadPart = (currentTimeVal.tv_sec - startTimeVal.tv_sec); performanceCount->QuadPart *= (1000 * 1000); performanceCount->QuadPart += (currentTimeVal.tv_usec - startTimeVal.tv_usec);
return true; } This, coupled with QueryPerformanceFrequency() giving a constant 1000000 as the frequency, works well on my machine, and gives me a 64-bit variable that contains uSeconds since the program's start-up. So is this portable? I don't want to discover it works differently if the kernel was compiled in a certain way or anything like that. I am fine with it being non-portable to something other than Linux, however. | Maybe. But you have bigger problems. gettimeofday() can result in incorrect timings if there are processes on your system that change the timer (i.e., ntpd). On a "normal" Linux, though, I believe the resolution of gettimeofday() is 10us. It can jump forward and backward in time, consequently, based on the processes running on your system. This effectively makes the answer to your question no. You should look into clock_gettime(CLOCK_MONOTONIC) for timing intervals. It suffers from far fewer issues due to things like multi-core systems and external clock settings. Also, look into the clock_getres() function. | Is gettimeofday() guaranteed to be of microsecond resolution? I am porting a game, that was originally written for the Win32 API, to Linux (well, porting the OS X port of the Win32 port to Linux). I have implemented QueryPerformanceCounter by giving the uSeconds since the process start up: BOOL QueryPerformanceCounter(LARGE_INTEGER* performanceCount) { gettimeofday(&currentTimeVal, NULL); performanceCount->QuadPart = (currentTimeVal.tv_sec - startTimeVal.tv_sec); performanceCount->QuadPart *= (1000 * 1000); performanceCount->QuadPart += (currentTimeVal.tv_usec - startTimeVal.tv_usec);
return true; } This, coupled with QueryPerformanceFrequency() giving a constant 1000000 as the frequency, works well on my machine, and gives me a 64-bit variable that contains uSeconds since the program's start-up. So is this portable? I don't want to discover it works differently if the kernel was compiled in a certain way or anything like that. I am fine with it being non-portable to something other than Linux, however. | TITLE:
Is gettimeofday() guaranteed to be of microsecond resolution?
QUESTION:
I am porting a game, that was originally written for the Win32 API, to Linux (well, porting the OS X port of the Win32 port to Linux). I have implemented QueryPerformanceCounter by giving the uSeconds since the process start up: BOOL QueryPerformanceCounter(LARGE_INTEGER* performanceCount) { gettimeofday(&currentTimeVal, NULL); performanceCount->QuadPart = (currentTimeVal.tv_sec - startTimeVal.tv_sec); performanceCount->QuadPart *= (1000 * 1000); performanceCount->QuadPart += (currentTimeVal.tv_usec - startTimeVal.tv_usec);
return true; } This, coupled with QueryPerformanceFrequency() giving a constant 1000000 as the frequency, works well on my machine, and gives me a 64-bit variable that contains uSeconds since the program's start-up. So is this portable? I don't want to discover it works differently if the kernel was compiled in a certain way or anything like that. I am fine with it being non-portable to something other than Linux, however.
ANSWER:
Maybe. But you have bigger problems. gettimeofday() can result in incorrect timings if there are processes on your system that change the timer (i.e., ntpd). On a "normal" Linux, though, I believe the resolution of gettimeofday() is 10us. It can jump forward and backward in time, consequently, based on the processes running on your system. This effectively makes the answer to your question no. You should look into clock_gettime(CLOCK_MONOTONIC) for timing intervals. It suffers from far fewer issues due to things like multi-core systems and external clock settings. Also, look into the clock_getres() function.
"linux",
"winapi",
"visual-c++",
"unix",
"timer"
] | 106 | 63 | 45,129 | 10 | 0 | 2008-08-01T14:36:18.640000 | 2008-08-01T14:53:47.497000 |
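The recommendation to prefer clock_gettime(CLOCK_MONOTONIC) over gettimeofday() for measuring intervals can be demonstrated from Python, which exposes the monotonic clock as time.monotonic() and clock_getres()-style metadata via time.get_clock_info(). A minimal sketch:

```python
import time

# time.monotonic() wraps a CLOCK_MONOTONIC-style clock; unlike
# gettimeofday()-style wall time, it cannot jump when ntpd steps the clock.
info = time.get_clock_info("monotonic")  # advertised resolution, like clock_getres()

t0 = time.monotonic()
t1 = time.monotonic()
elapsed = t1 - t0                        # guaranteed non-negative for a monotonic clock
```

The actual resolution reported here varies by platform, which is exactly the answer's point: microsecond fields do not guarantee microsecond resolution.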
90 | 1,466,832 | How do you branch and merge with TortoiseSVN? | How do you branch and merge with Apache Subversion using the TortoiseSVN client? | My easy click-by-click instructions ( specific to TortoiseSVN ) are in Stack Overflow question What is the simplest way to do branching and merging using TortoiseSVN?. | How do you branch and merge with TortoiseSVN? How do you branch and merge with Apache Subversion using the TortoiseSVN client? | TITLE:
How do you branch and merge with TortoiseSVN?
QUESTION:
How do you branch and merge with Apache Subversion using the TortoiseSVN client?
ANSWER:
My easy click-by-click instructions (specific to TortoiseSVN) are in Stack Overflow question What is the simplest way to do branching and merging using TortoiseSVN?. | [
"svn",
"tortoisesvn",
"branch",
"branching-and-merging"
] | 163 | 23 | 56,424 | 3 | 0 | 2008-08-01T14:41:24.277000 | 2009-09-23T15:40:46.943000 |
104 | 112 | Anatomy of a "Memory Leak" | In.NET perspective: What is a memory leak? How can you determine whether your application leaks? What are the effects? How can you prevent a memory leak? If your application has memory leak, does it go away when the process exits or is killed? Or do memory leaks in your application affect other processes on the system even after process completion? And what about unmanaged code accessed via COM Interop and/or P/Invoke? | The best explanation I've seen is in Chapter 7 of the free Foundations of Programming e-book. Basically, in.NET a memory leak occurs when referenced objects are rooted and thus cannot be garbage collected. This occurs accidentally when you hold on to references beyond the intended scope. You'll know that you have leaks when you start getting OutOfMemoryExceptions or your memory usage goes up beyond what you'd expect ( PerfMon has nice memory counters). Understanding.NET 's memory model is your best way of avoiding it. Specifically, understanding how the garbage collector works and how references work — again, I refer you to chapter 7 of the e-book. Also, be mindful of common pitfalls, probably the most common being events. If object A is registered to an event on object B, then object A will stick around until object B disappears because B holds a reference to A. The solution is to unregister your events when you're done. Of course, a good memory profile will let you see your object graphs and explore the nesting/referencing of your objects to see where references are coming from and what root object is responsible ( red-gate ants profile, JetBrains dotMemory, memprofiler are really good choices, or you can use the text-only WinDbg and SOS, but I'd strongly recommend a commercial/visual product unless you're a real guru). I believe unmanaged code is subject to its typical memory leaks, except that shared references are managed by the garbage collector. I could be wrong about this last point. 
| Anatomy of a "Memory Leak" In.NET perspective: What is a memory leak? How can you determine whether your application leaks? What are the effects? How can you prevent a memory leak? If your application has memory leak, does it go away when the process exits or is killed? Or do memory leaks in your application affect other processes on the system even after process completion? And what about unmanaged code accessed via COM Interop and/or P/Invoke? | TITLE:
Anatomy of a "Memory Leak"
QUESTION:
From a .NET perspective: What is a memory leak? How can you determine whether your application leaks? What are the effects? How can you prevent a memory leak? If your application has a memory leak, does it go away when the process exits or is killed? Or do memory leaks in your application affect other processes on the system even after process completion? And what about unmanaged code accessed via COM Interop and/or P/Invoke?
ANSWER:
The best explanation I've seen is in Chapter 7 of the free Foundations of Programming e-book. Basically, in .NET a memory leak occurs when referenced objects are rooted and thus cannot be garbage collected. This occurs accidentally when you hold on to references beyond the intended scope. You'll know that you have leaks when you start getting OutOfMemoryExceptions or your memory usage goes up beyond what you'd expect (PerfMon has nice memory counters). Understanding .NET's memory model is your best way of avoiding it. Specifically, understanding how the garbage collector works and how references work (again, I refer you to chapter 7 of the e-book). Also, be mindful of common pitfalls, probably the most common being events. If object A is registered to an event on object B, then object A will stick around until object B disappears because B holds a reference to A. The solution is to unregister your events when you're done. Of course, a good memory profiler will let you see your object graphs and explore the nesting/referencing of your objects to see where references are coming from and what root object is responsible (Red Gate ANTS Profiler, JetBrains dotMemory, memprofiler are really good choices, or you can use the text-only WinDbg and SOS, but I'd strongly recommend a commercial/visual product unless you're a real guru). I believe unmanaged code is subject to its typical memory leaks, except that shared references are managed by the garbage collector. I could be wrong about this last point.
".net",
"performance",
"memory-leaks",
"com-interop"
] | 181 | 112 | 15,291 | 15 | 0 | 2008-08-01T15:12:34.837000 | 2008-08-01T15:28:12.543000 |
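The event-subscription leak the answer describes can be sketched outside .NET too. In this Python analogy (Button, Listener, and the method names are all hypothetical, and CPython's collector only stands in for the .NET one), the subscriber list on "object B" keeps "object A" alive until A explicitly unregisters:

```python
import gc
import weakref

class Button:
    """Stands in for 'object B': it holds strong references to handlers."""
    def __init__(self):
        self._handlers = []

    def subscribe(self, handler):
        self._handlers.append(handler)

    def unsubscribe(self, handler):
        self._handlers.remove(handler)

class Listener:
    """Stands in for 'object A': it registers one of its own methods."""
    def __init__(self, button):
        self.button = button
        button.subscribe(self.on_click)   # bound method -> strong ref to self

    def on_click(self):
        pass

    def dispose(self):
        # The fix the answer describes: unregister when you're done.
        self.button.unsubscribe(self.on_click)

button = Button()
listener = Listener(button)
probe = weakref.ref(listener)

del listener
gc.collect()
leaked_while_subscribed = probe() is not None   # B still roots A

probe().dispose()
gc.collect()
freed_after_unsubscribe = probe() is None       # A can now be collected

print(leaked_while_subscribed, freed_after_unsubscribe)   # True True
```

The weak reference acts as the "probe" a memory profiler would give you: while the handler is subscribed, dropping the last external reference does not free the listener.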
108 | 111 | Best Subversion clients for Windows Vista (64bit) | I've been using TortoiseSVN in a Windows environment for quite some time. It seems very feature-complete and nicely integrated into the Windows shell, and more importantly, it's fairly painless to teach to colleagues with little or no experience with source control. However, since we have moved to Windows Vista 64bit, Tortoise has been very buggy and has seemed to cause lots of explorer.exe abnormalities and crashes. This has happened both with older versions of the software and the latest version (1.5.1 build 13563). I was curious if anyone has suggestions for other Subversion clients that will run on Windows (specifically Vista 64bit). Developers here use a variety of text editors so using Visual Studio or Dreamweaver for SVN is not ideal. I have heard great things about Cornerstone, and would love something similar for Windows if it exists. I'm correlating the Vista/explorer problems with Tortoise because they normally occur when I'm using the functionality in Tortoise. Sometimes bringing up the "merge" screen will cause the GUI to start acting very strange and eventually hang or crash. I did not see 1.5.2 -- I'm installing now, maybe that will fix some of my issues. | I have been using the 64Bit version of TortoiseSVN for ages and I have never had issues with it on Windows 64Bit or Vista 64Bit. I am currently not aware of any other similiar SVN clients that do work on Vista. Is it possible the problem could lie within the configuration of TortoiseSVN or even the installation of Vista? Is the problem occurring on Vista native or SP 1? | Best Subversion clients for Windows Vista (64bit) I've been using TortoiseSVN in a Windows environment for quite some time. It seems very feature-complete and nicely integrated into the Windows shell, and more importantly, it's fairly painless to teach to colleagues with little or no experience with source control. 
However, since we have moved to Windows Vista 64bit, Tortoise has been very buggy and has seemed to cause lots of explorer.exe abnormalities and crashes. This has happened both with older versions of the software and the latest version (1.5.1 build 13563). I was curious if anyone has suggestions for other Subversion clients that will run on Windows (specifically Vista 64bit). Developers here use a variety of text editors so using Visual Studio or Dreamweaver for SVN is not ideal. I have heard great things about Cornerstone, and would love something similar for Windows if it exists. I'm correlating the Vista/explorer problems with Tortoise because they normally occur when I'm using the functionality in Tortoise. Sometimes bringing up the "merge" screen will cause the GUI to start acting very strange and eventually hang or crash. I did not see 1.5.2 -- I'm installing now, maybe that will fix some of my issues. | TITLE:
Best Subversion clients for Windows Vista (64bit)
QUESTION:
I've been using TortoiseSVN in a Windows environment for quite some time. It seems very feature-complete and nicely integrated into the Windows shell, and more importantly, it's fairly painless to teach to colleagues with little or no experience with source control. However, since we have moved to Windows Vista 64bit, Tortoise has been very buggy and has seemed to cause lots of explorer.exe abnormalities and crashes. This has happened both with older versions of the software and the latest version (1.5.1 build 13563). I was curious if anyone has suggestions for other Subversion clients that will run on Windows (specifically Vista 64bit). Developers here use a variety of text editors so using Visual Studio or Dreamweaver for SVN is not ideal. I have heard great things about Cornerstone, and would love something similar for Windows if it exists. I'm correlating the Vista/explorer problems with Tortoise because they normally occur when I'm using the functionality in Tortoise. Sometimes bringing up the "merge" screen will cause the GUI to start acting very strange and eventually hang or crash. I did not see 1.5.2 -- I'm installing now, maybe that will fix some of my issues.
ANSWER:
I have been using the 64Bit version of TortoiseSVN for ages and I have never had issues with it on Windows 64Bit or Vista 64Bit. I am currently not aware of any other similar SVN clients that do work on Vista. Is it possible the problem could lie within the configuration of TortoiseSVN or even the installation of Vista? Is the problem occurring on Vista native or SP1? | [
"windows",
"svn",
"64-bit"
] | 52 | 39 | 16,513 | 8 | 0 | 2008-08-01T15:22:29.467000 | 2008-08-01T15:27:23.093000 |
109 | 2,585 | Decoding T-SQL CAST in C#/VB.NET | Recently our site has been deluged with the resurgence of the Asprox botnet SQL injection attack. Without going into details, the attack attempts to execute SQL code by encoding the T-SQL commands in an ASCII encoded BINARY string. It looks something like this: DECLARE%20@S%20NVARCHAR(4000);SET%20@S=CAST(0x44004500...06F007200%20AS%20NVARCHAR(4000));EXEC(@S);-- I was able to decode this in SQL, but I was a little wary of doing this since I didn't know exactly what was happening at the time. I tried to write a simple decode tool, so I could decode this type of text without even touching SQL Server. The main part I need to be decoded is: CAST(0x44004500...06F007200 AS NVARCHAR(4000)) I've tried all of the following commands with no luck: txtDecodedText.Text = System.Web.HttpUtility.UrlDecode(txtURLText.Text); txtDecodedText.Text = Encoding.ASCII.GetString(Encoding.ASCII.GetBytes(txtURLText.Text)); txtDecodedText.Text = Encoding.Unicode.GetString(Encoding.Unicode.GetBytes(txtURLText.Text)); txtDecodedText.Text = Encoding.ASCII.GetString(Encoding.Unicode.GetBytes(txtURLText.Text)); txtDecodedText.Text = Encoding.Unicode.GetString(Convert.FromBase64String(txtURLText.Text)); What is the proper way to translate this encoding without using SQL Server? Is it possible? I'll take VB.NET code since I'm familiar with that too. Okay, I'm sure I'm missing something here, so here's where I'm at. Since my input is a basic string, I started with just a snippet of the encoded portion - 4445434C41 (which translates to DECLA) - and the first attempt was to do this... txtDecodedText.Text = Encoding.UTF8.GetString(Encoding.UTF8.GetBytes(txtURL.Text));...and all it did was return the exact same thing that I put in since it converted each character into is a byte. 
I realized that I need to parse every two characters into a byte manually since I don't know of any methods yet that will do that, so now my little decoder looks something like this: while (!boolIsDone) { bytURLChar = byte.Parse(txtURLText.Text.Substring(intParseIndex, 2)); bytURL[intURLIndex] = bytURLChar; intParseIndex += 2; intURLIndex++;
if (txtURLText.Text.Length - intParseIndex < 2) { boolIsDone = true; } }
txtDecodedText.Text = Encoding.UTF8.GetString(bytURL); Things look good for the first couple of pairs, but then the loop balks when it gets to the "4C" pair and says that the string is in the incorrect format. Interestingly enough, when I step through the debugger and to the GetString method on the byte array that I was able to parse up to that point, I get ",-+" as the result. How do I figure out what I'm missing - do I need to do a "direct cast" for each byte instead of attempting to parse it? | I went back to Michael's post, did some more poking and realized that I did need to do a double conversion, and eventually worked out this little nugget: Convert.ToString(Convert.ToChar(Int32.Parse(EncodedString.Substring(intParseIndex, 2), System.Globalization.NumberStyles.HexNumber))); From there I simply made a loop to go through all the characters 2 by 2 and get them "hexified" and then translated to a string. To Nick, and anybody else interested, I went ahead and posted my little application over in CodePlex. Feel free to use/modify as you need. | Decoding T-SQL CAST in C#/VB.NET Recently our site has been deluged with the resurgence of the Asprox botnet SQL injection attack. Without going into details, the attack attempts to execute SQL code by encoding the T-SQL commands in an ASCII encoded BINARY string. It looks something like this: DECLARE%20@S%20NVARCHAR(4000);SET%20@S=CAST(0x44004500...06F007200%20AS%20NVARCHAR(4000));EXEC(@S);-- I was able to decode this in SQL, but I was a little wary of doing this since I didn't know exactly what was happening at the time. I tried to write a simple decode tool, so I could decode this type of text without even touching SQL Server. 
The main part I need to be decoded is: CAST(0x44004500...06F007200 AS NVARCHAR(4000)) I've tried all of the following commands with no luck: txtDecodedText.Text = System.Web.HttpUtility.UrlDecode(txtURLText.Text); txtDecodedText.Text = Encoding.ASCII.GetString(Encoding.ASCII.GetBytes(txtURLText.Text)); txtDecodedText.Text = Encoding.Unicode.GetString(Encoding.Unicode.GetBytes(txtURLText.Text)); txtDecodedText.Text = Encoding.ASCII.GetString(Encoding.Unicode.GetBytes(txtURLText.Text)); txtDecodedText.Text = Encoding.Unicode.GetString(Convert.FromBase64String(txtURLText.Text)); What is the proper way to translate this encoding without using SQL Server? Is it possible? I'll take VB.NET code since I'm familiar with that too. Okay, I'm sure I'm missing something here, so here's where I'm at. Since my input is a basic string, I started with just a snippet of the encoded portion - 4445434C41 (which translates to DECLA) - and the first attempt was to do this... txtDecodedText.Text = Encoding.UTF8.GetString(Encoding.UTF8.GetBytes(txtURL.Text));...and all it did was return the exact same thing that I put in since it converted each character into is a byte. I realized that I need to parse every two characters into a byte manually since I don't know of any methods yet that will do that, so now my little decoder looks something like this: while (!boolIsDone) { bytURLChar = byte.Parse(txtURLText.Text.Substring(intParseIndex, 2)); bytURL[intURLIndex] = bytURLChar; intParseIndex += 2; intURLIndex++;
if (txtURLText.Text.Length - intParseIndex < 2) { boolIsDone = true; } }
txtDecodedText.Text = Encoding.UTF8.GetString(bytURL); Things look good for the first couple of pairs, but then the loop balks when it gets to the "4C" pair and says that the string is in the incorrect format. Interestingly enough, when I step through the debugger and to the GetString method on the byte array that I was able to parse up to that point, I get ",-+" as the result. How do I figure out what I'm missing - do I need to do a "direct cast" for each byte instead of attempting to parse it? | TITLE:
Decoding T-SQL CAST in C#/VB.NET
QUESTION:
Recently our site has been deluged with the resurgence of the Asprox botnet SQL injection attack. Without going into details, the attack attempts to execute SQL code by encoding the T-SQL commands in an ASCII encoded BINARY string. It looks something like this: DECLARE%20@S%20NVARCHAR(4000);SET%20@S=CAST(0x44004500...06F007200%20AS%20NVARCHAR(4000));EXEC(@S);-- I was able to decode this in SQL, but I was a little wary of doing this since I didn't know exactly what was happening at the time. I tried to write a simple decode tool, so I could decode this type of text without even touching SQL Server. The main part I need to be decoded is: CAST(0x44004500...06F007200 AS NVARCHAR(4000)) I've tried all of the following commands with no luck: txtDecodedText.Text = System.Web.HttpUtility.UrlDecode(txtURLText.Text); txtDecodedText.Text = Encoding.ASCII.GetString(Encoding.ASCII.GetBytes(txtURLText.Text)); txtDecodedText.Text = Encoding.Unicode.GetString(Encoding.Unicode.GetBytes(txtURLText.Text)); txtDecodedText.Text = Encoding.ASCII.GetString(Encoding.Unicode.GetBytes(txtURLText.Text)); txtDecodedText.Text = Encoding.Unicode.GetString(Convert.FromBase64String(txtURLText.Text)); What is the proper way to translate this encoding without using SQL Server? Is it possible? I'll take VB.NET code since I'm familiar with that too. Okay, I'm sure I'm missing something here, so here's where I'm at. Since my input is a basic string, I started with just a snippet of the encoded portion - 4445434C41 (which translates to DECLA) - and the first attempt was to do this... txtDecodedText.Text = Encoding.UTF8.GetString(Encoding.UTF8.GetBytes(txtURL.Text));...and all it did was return the exact same thing that I put in since it converted each character into is a byte. 
I realized that I need to parse every two characters into a byte manually since I don't know of any methods yet that will do that, so now my little decoder looks something like this: while (!boolIsDone) { bytURLChar = byte.Parse(txtURLText.Text.Substring(intParseIndex, 2)); bytURL[intURLIndex] = bytURLChar; intParseIndex += 2; intURLIndex++;
if (txtURLText.Text.Length - intParseIndex < 2) { boolIsDone = true; } }
txtDecodedText.Text = Encoding.UTF8.GetString(bytURL); Things look good for the first couple of pairs, but then the loop balks when it gets to the "4C" pair and says that the string is in the incorrect format. Interestingly enough, when I step through the debugger and to the GetString method on the byte array that I was able to parse up to that point, I get ",-+" as the result. How do I figure out what I'm missing - do I need to do a "direct cast" for each byte instead of attempting to parse it?
ANSWER:
I went back to Michael's post, did some more poking and realized that I did need to do a double conversion, and eventually worked out this little nugget: Convert.ToString(Convert.ToChar(Int32.Parse(EncodedString.Substring(intParseIndex, 2), System.Globalization.NumberStyles.HexNumber))); From there I simply made a loop to go through all the characters 2 by 2 and get them "hexified" and then translated to a string. To Nick, and anybody else interested, I went ahead and posted my little application over in CodePlex. Feel free to use/modify as you need. | [
"c#",
"sql",
"vb.net",
"ascii",
"hex"
] | 68 | 24 | 6,199 | 2 | 0 | 2008-08-01T15:23:05.190000 | 2008-08-05T17:02:30.870000 |
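A sketch of the same double conversion in Python, assuming (as the question's 0x4400... payload suggests) that the NVARCHAR bytes are UTF-16LE; the function name decode_cast_payload is made up for illustration. Like the answer's Int32.Parse(..., NumberStyles.HexNumber), each two-character pair has to be parsed as hex, which is why plain byte.Parse choked on the "4C" pair:

```python
def decode_cast_payload(hexstr: str, encoding: str = "utf-16-le") -> str:
    """Parse a hex string two characters at a time into bytes, then decode.

    NVARCHAR payloads in these injection strings are UTF-16LE, which is
    why byte-at-a-time ASCII decoding in the question produced garbage.
    """
    if hexstr.lower().startswith("0x"):
        hexstr = hexstr[2:]
    raw = bytes(int(hexstr[i:i + 2], 16)          # hex parse, not decimal
                for i in range(0, len(hexstr), 2))
    return raw.decode(encoding)

# 0x4400450043004C004100 is "DECLA" encoded as UTF-16LE (NVARCHAR)
print(decode_cast_payload("0x4400450043004C004100"))        # DECLA
# The questioner's snippet was the single-byte form of the same letters:
print(decode_cast_payload("4445434C41", encoding="ascii"))  # DECLA
```

This reproduces the decode entirely outside SQL Server, which was the original goal.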
123 | 183 | Java lib or app to convert CSV to XML file? | Is there an existing application or library in Java which will allow me to convert a CSV data file to XML file? The XML tags would be provided through possibly the first row containing column headings. | Maybe this might help: JSefa You can read CSV file with this tool and serialize it to XML. | Java lib or app to convert CSV to XML file? Is there an existing application or library in Java which will allow me to convert a CSV data file to XML file? The XML tags would be provided through possibly the first row containing column headings. | TITLE:
Java lib or app to convert CSV to XML file?
QUESTION:
Is there an existing application or library in Java which will allow me to convert a CSV data file to XML file? The XML tags would be provided through possibly the first row containing column headings.
ANSWER:
Maybe this might help: JSefa. You can read a CSV file with this tool and serialize it to XML. | [
"java",
"xml",
"csv",
"data-conversion"
] | 121 | 70 | 81,172 | 16 | 0 | 2008-08-01T16:08:52.353000 | 2008-08-01T18:51:12.090000 |
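For readers without JSefa, a minimal stdlib-only sketch of the same idea is below: the first CSV row supplies the XML tag names, as the question asks. It assumes the headers are valid XML element names, and the root/row tag names are arbitrary choices of this sketch:

```python
import csv
import io
import xml.etree.ElementTree as ET

def csv_to_xml(csv_text: str, root_tag: str = "rows", row_tag: str = "row") -> str:
    """Convert CSV text to XML, using the header row as element names."""
    reader = csv.DictReader(io.StringIO(csv_text))   # header row -> field names
    root = ET.Element(root_tag)
    for record in reader:
        row = ET.SubElement(root, row_tag)
        for column, value in record.items():
            ET.SubElement(row, column).text = value
    return ET.tostring(root, encoding="unicode")

sample = "name,age\nAda,36\nAlan,41\n"
print(csv_to_xml(sample))
# <rows><row><name>Ada</name><age>36</age></row>...</rows>
```

A library like JSefa adds the schema/annotation layer on top; this is only the bare transformation.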
126 | 127 | How would you access Object properties from within an object method? | What is the "purist" or "correct" way to access an object's properties from within an object method that is not a getter/setter method? I know that from outside of the object you should use a getter/setter, but from within would you just do: Java: String property = this.property; PHP: $property = $this->property; or would you do: Java: String property = this.getProperty(); PHP: $property = $this->getProperty(); Forgive me if my Java is a little off, it's been a year since I programmed in Java... EDIT: It seems people are assuming I am talking about private or protected variables/properties only. When I learned OO I was taught to use getters/setters for every single property even if it was public (and actually I was told never to make any variable/property public). So, I may be starting off from a false assumption from the get go. It appears that people answering this question are maybe saying that you should have public properties and that those don't need getters and setters, which goes against what I was taught, and what I was talking about, although maybe that needs to be discussed as well. That's probably a good topic for a different question though... | This has religious war potential, but it seems to me that if you're using a getter/setter, you should use it internally as well - using both will lead to maintenance problems down the road (e.g. somebody adds code to a setter that needs to run every time that property is set, and the property is being set internally w/o that setter being called). | How would you access Object properties from within an object method? What is the "purist" or "correct" way to access an object's properties from within an object method that is not a getter/setter method? 
I know that from outside of the object you should use a getter/setter, but from within would you just do: Java: String property = this.property; PHP: $property = $this->property; or would you do: Java: String property = this.getProperty(); PHP: $property = $this->getProperty(); Forgive me if my Java is a little off, it's been a year since I programmed in Java... EDIT: It seems people are assuming I am talking about private or protected variables/properties only. When I learned OO I was taught to use getters/setters for every single property even if it was public (and actually I was told never to make any variable/property public). So, I may be starting off from a false assumption from the get go. It appears that people answering this question are maybe saying that you should have public properties and that those don't need getters and setters, which goes against what I was taught, and what I was talking about, although maybe that needs to be discussed as well. That's probably a good topic for a different question though... | TITLE:
How would you access Object properties from within an object method?
QUESTION:
What is the "purist" or "correct" way to access an object's properties from within an object method that is not a getter/setter method? I know that from outside of the object you should use a getter/setter, but from within would you just do: Java: String property = this.property; PHP: $property = $this->property; or would you do: Java: String property = this.getProperty(); PHP: $property = $this->getProperty(); Forgive me if my Java is a little off, it's been a year since I programmed in Java... EDIT: It seems people are assuming I am talking about private or protected variables/properties only. When I learned OO I was taught to use getters/setters for every single property even if it was public (and actually I was told never to make any variable/property public). So, I may be starting off from a false assumption from the get go. It appears that people answering this question are maybe saying that you should have public properties and that those don't need getters and setters, which goes against what I was taught, and what I was talking about, although maybe that needs to be discussed as well. That's probably a good topic for a different question though...
ANSWER:
This has religious war potential, but it seems to me that if you're using a getter/setter, you should use it internally as well - using both will lead to maintenance problems down the road (e.g. somebody adds code to a setter that needs to run every time that property is set, and the property is being set internally w/o that setter being called). | [
"java",
"php",
"oop",
"theory"
] | 106 | 66 | 26,730 | 18 | 0 | 2008-08-01T16:10:30.337000 | 2008-08-01T16:13:47.600000 |
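The maintenance hazard the answer warns about can be sketched with a property whose setter later grows a side effect; all names here are hypothetical, and Python properties merely stand in for Java/PHP getters and setters:

```python
class Account:
    """Sketch: a setter later gains logic that raw field access skips."""
    def __init__(self):
        self._balance = 0
        self.audit_log = []

    @property
    def balance(self):
        return self._balance

    @balance.setter
    def balance(self, value):
        # Side effect added later: every assignment must be audited.
        self.audit_log.append(value)
        self._balance = value

    def deposit_via_setter(self, amount):
        self.balance = self._balance + amount    # audit runs

    def deposit_via_field(self, amount):
        self._balance = self._balance + amount   # audit silently skipped

acct = Account()
acct.deposit_via_setter(10)
acct.deposit_via_field(5)
print(acct.balance, acct.audit_log)   # 15 [10] -- one deposit escaped the audit
```

This is the exact failure mode described above: internal code that bypasses the setter keeps working until the setter acquires behavior the class now depends on.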
134 | 206 | XSD DataSets and ignoring foreign keys | I have a pretty standard table set-up in a current application using the.NET XSD DataSet and TableAdapter features. My contracts table consists of some standard contract information, with a column for the primary department. This column is a foreign key to my Departments table, where I store the basic department name, id, notes. This is all setup and functioning in my SQL Server. When I use the XSD tool, I can drag both tables in at once and it auto detects/creates the foreign key I have between these two tables. This works great when I'm on my main page and am viewing contract data. However, when I go to my administrative page to modify the department data, I typically do something like this: Dim dtDepartment As New DepartmentDataTable() Dim taDepartment As New DepartmentTableAdapter()
taDepartment.Fill(dtDepartment) However, at this point an exception is thrown saying to the effect that there is a foreign key reference broken here, I'm guessing since I don't have the Contract DataTable filled. How can I fix this problem? I know I can simply remove the foreign key from the XSD to make things work fine, but having the additional integrity check there and having the XSD schema match the SQL schema in the database is nice. | You can try turning Check-constraints off on the DataSet (it's in its properties), or altering the properties of that relationship, and change the key to a simple reference - up to you. | XSD DataSets and ignoring foreign keys I have a pretty standard table set-up in a current application using the.NET XSD DataSet and TableAdapter features. My contracts table consists of some standard contract information, with a column for the primary department. This column is a foreign key to my Departments table, where I store the basic department name, id, notes. This is all setup and functioning in my SQL Server. When I use the XSD tool, I can drag both tables in at once and it auto detects/creates the foreign key I have between these two tables. This works great when I'm on my main page and am viewing contract data. However, when I go to my administrative page to modify the department data, I typically do something like this: Dim dtDepartment As New DepartmentDataTable() Dim taDepartment As New DepartmentTableAdapter()
taDepartment.Fill(dtDepartment) However, at this point an exception is thrown saying to the effect that there is a foreign key reference broken here, I'm guessing since I don't have the Contract DataTable filled. How can I fix this problem? I know I can simply remove the foreign key from the XSD to make things work fine, but having the additional integrity check there and having the XSD schema match the SQL schema in the database is nice. | TITLE:
XSD DataSets and ignoring foreign keys
QUESTION:
I have a pretty standard table set-up in a current application using the.NET XSD DataSet and TableAdapter features. My contracts table consists of some standard contract information, with a column for the primary department. This column is a foreign key to my Departments table, where I store the basic department name, id, notes. This is all setup and functioning in my SQL Server. When I use the XSD tool, I can drag both tables in at once and it auto detects/creates the foreign key I have between these two tables. This works great when I'm on my main page and am viewing contract data. However, when I go to my administrative page to modify the department data, I typically do something like this: Dim dtDepartment As New DepartmentDataTable() Dim taDepartment As New DepartmentTableAdapter()
taDepartment.Fill(dtDepartment) However, at this point an exception is thrown saying to the effect that there is a foreign key reference broken here, I'm guessing since I don't have the Contract DataTable filled. How can I fix this problem? I know I can simply remove the foreign key from the XSD to make things work fine, but having the additional integrity check there and having the XSD schema match the SQL schema in the database is nice.
ANSWER:
You can try turning check constraints off on the DataSet (it's in its properties), or altering the properties of that relationship and changing the key to a simple reference - up to you.
".net",
"database",
"xsd"
] | 39 | 13 | 1,519 | 1 | 0 | 2008-08-01T16:33:38.183000 | 2008-08-01T19:52:14.227000 |
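The trade-off in the answer (fill one table alone vs. keep the integrity check) can be illustrated outside the .NET DataSet API. As a loose analogy only, not the DataSet API itself, SQLite exposes foreign-key enforcement as a connection-level switch, similar in spirit to toggling a DataSet's constraint checking:

```python
import sqlite3

# isolation_level=None puts the connection in autocommit mode so the
# PRAGMA statements below actually take effect between statements.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.executescript("""
    CREATE TABLE department (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE contract (
        id INTEGER PRIMARY KEY,
        dept_id INTEGER REFERENCES department(id)
    );
""")

conn.execute("PRAGMA foreign_keys = ON")       # enforcement on
try:
    conn.execute("INSERT INTO contract VALUES (1, 99)")  # no department 99
    blocked = False
except sqlite3.IntegrityError:
    blocked = True                             # orphan row rejected

conn.execute("PRAGMA foreign_keys = OFF")      # enforcement off
conn.execute("INSERT INTO contract VALUES (2, 99)")      # accepted now

print(blocked)   # True
```

With enforcement on you get the integrity guarantee but cannot load "child" rows whose parents are absent; switching it off is the moral equivalent of the answer's suggestion to relax the DataSet's check constraints while working with one table in isolation.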
146 | 152 | How do I track file downloads | I have a website that plays mp3s in a flash player. If a user clicks 'play' the flash player automatically downloads an mp3 and starts playing it. Is there an easy way to track how many times a particular song clip (or any binary file) has been downloaded? Is the play link a link to the actual mp3 file or to some javascript code that pops up a player? If the latter, you can easily add your own logging code in there to track the number of hits to it. If the former, you'll need something that can track the web server log itself and make that distinction. My hosting plan comes with Webalizer, which does this nicely. It's a javascript code so that answers that. However, it would be nice to know how to track downloads using the other method (without switching hosts). | The funny thing is I wrote a php media gallery for all my musics 2 days ago. I had a similar problem. I'm using http://musicplayer.sourceforge.net/ for the player. And the playlist is built via php. All music requests go to a script called xfer.php?file=WHATEVER $filename = base64_url_decode($_REQUEST['file']); header("Cache-Control: public"); header('Content-disposition: attachment; filename='.basename($filename)); header("Content-Transfer-Encoding: binary"); header('Content-Length: '. filesize($filename));
// Put either file counting code here, either a db or static files // readfile($filename); //and spit the user the file
function base64_url_decode($input) { return base64_decode(strtr($input, '-_,', '+/=')); } And when you call files use something like: function base64_url_encode($input) { return strtr(base64_encode($input), '+/=', '-_,'); } http://us.php.net/manual/en/function.base64-encode.php If you are using some JavaScript or a flash player (JW player for example) that requires the actual link of an mp3 file or whatever, you can append the text "&type=.mp3" so the final link becomes something like: "www.example.com/xfer.php?file=34842ffjfjxfh&type=.mp3". That way it looks like it ends with an mp3 extension without affecting the file link. | How do I track file downloads I have a website that plays mp3s in a flash player. If a user clicks 'play' the flash player automatically downloads an mp3 and starts playing it. Is there an easy way to track how many times a particular song clip (or any binary file) has been downloaded? Is the play link a link to the actual mp3 file or to some javascript code that pops up a player? If the latter, you can easily add your own logging code in there to track the number of hits to it. If the former, you'll need something that can track the web server log itself and make that distinction. My hosting plan comes with Webalizer, which does this nicely. It's a javascript code so that answers that. However, it would be nice to know how to track downloads using the other method (without switching hosts). | TITLE:
How do I track file downloads
QUESTION:
I have a website that plays mp3s in a flash player. If a user clicks 'play' the flash player automatically downloads an mp3 and starts playing it. Is there an easy way to track how many times a particular song clip (or any binary file) has been downloaded? Is the play link a link to the actual mp3 file or to some javascript code that pops up a player? If the latter, you can easily add your own logging code in there to track the number of hits to it. If the former, you'll need something that can track the web server log itself and make that distinction. My hosting plan comes with Webalizer, which does this nicely. It's a javascript code so that answers that. However, it would be nice to know how to track downloads using the other method (without switching hosts).
ANSWER:
The funny thing is I wrote a php media gallery for all my music two days ago. I had a similar problem. I'm using http://musicplayer.sourceforge.net/ for the player. And the playlist is built via php. All music requests go to a script called xfer.php?file=WHATEVER $filename = base64_url_decode($_REQUEST['file']); header("Cache-Control: public"); header('Content-disposition: attachment; filename='.basename($filename)); header("Content-Transfer-Encoding: binary"); header('Content-Length: '. filesize($filename));
// Put your file-counting code here (either a db hit or a static file) readfile($filename); // and spit the user the file
function base64_url_decode($input) { return base64_decode(strtr($input, '-_,', '+/=')); } And when you call files use something like: function base64_url_encode($input) { return strtr(base64_encode($input), '+/=', '-_,'); } http://us.php.net/manual/en/function.base64-encode.php If you are using some JavaScript or a flash player (JW player for example) that requires the actual link of an mp3 file or whatever, you can append the text "&type=.mp3" so the final link becomes something like: "www.example.com/xfer.php?file=34842ffjfjxfh&type=.mp3". That way it looks like it ends with an mp3 extension without affecting the file link. | [
"php",
"apache",
"logging",
"download",
"analytics"
] | 90 | 42 | 21,872 | 8 | 0 | 2008-08-01T17:14:58.337000 | 2008-08-01T17:33:58.750000 |
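The xfer.php trick above rests on a URL-safe base64 round trip. Here is a Python sketch of the same idea — the helper names and the '-_,' ↔ '+/=' character mapping are taken from the PHP in the answer, while the sample path is a made-up placeholder:

```python
import base64

# Character mapping from the answer's PHP strtr calls: '+', '/', '='
# are awkward in query strings, so they are swapped for '-', '_', ','.
_TO_URL = str.maketrans("+/=", "-_,")
_FROM_URL = str.maketrans("-_,", "+/=")

def base64_url_encode(path: str) -> str:
    """Encode a file path so it can ride safely in ?file=..."""
    return base64.b64encode(path.encode()).decode().translate(_TO_URL)

def base64_url_decode(token: str) -> str:
    """Reverse base64_url_encode back to the original path."""
    return base64.b64decode(token.translate(_FROM_URL).encode()).decode()
```

Python's standard library also ships base64.urlsafe_b64encode, which uses '-' and '_' but keeps '=' padding; the strtr-style mapping above additionally hides the padding character.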
163 | 170 | How do I sync the SVN revision number with my ASP.NET web site? | Stack Overflow has a subversion version number at the bottom: svn revision: 679 I want to use such automatic versioning with my.NET Web Site/Application, Windows Forms, WPD projects/solutions. How do I implement this? | Looks like Jeff is using CruiseControl.NET based on some leafing through the podcast transcripts. This seems to have automated deployment capabilities from source control to production. Might this be where the insertion is happening? | How do I sync the SVN revision number with my ASP.NET web site? Stack Overflow has a subversion version number at the bottom: svn revision: 679 I want to use such automatic versioning with my.NET Web Site/Application, Windows Forms, WPD projects/solutions. How do I implement this? | TITLE:
How do I sync the SVN revision number with my ASP.NET web site?
QUESTION:
Stack Overflow has a subversion version number at the bottom: svn revision: 679 I want to use such automatic versioning with my.NET Web Site/Application, Windows Forms, WPD projects/solutions. How do I implement this?
ANSWER:
Looks like Jeff is using CruiseControl.NET based on some leafing through the podcast transcripts. This seems to have automated deployment capabilities from source control to production. Might this be where the insertion is happening? | [
".net",
"asp.net",
"svn",
"versioning"
] | 101 | 32 | 9,524 | 7 | 0 | 2008-08-01T18:00:13.830000 | 2008-08-01T18:24:30.550000 |
164 | 699 | Embedding Windows Media Player for all browsers | Edit: This question was written in 2008, which was like 3 internet ages ago. If this question is still relevant to your environment, please accept my condolences. Everyone else should convert into a format supported by your browsers (That would be H.264 if Internet Explorer is needed, and probably AV1, VP8/VP9 if not) and use the <video> element. We are using WMV videos on an internal site, and we are embedding them into web sites. This works quite well on Internet Explorer, but not on Firefox. I've found ways to make it work in Firefox, but then it stops working in Internet Explorer. We do not want to use Silverlight just yet, especially since we cannot be sure that all clients will be running Windows XP with Windows Media Player installed. Is there some sort of Universal Code that embeds WMP into both Internet Explorer and Firefox, or do we need to implement some user-agent-detection and deliver different HTML for different browsers? | The following works for me in Firefox and Internet Explorer: | Embedding Windows Media Player for all browsers Edit: This question was written in 2008, which was like 3 internet ages ago. If this question is still relevant to your environment, please accept my condolences. Everyone else should convert into a format supported by your browsers (That would be H.264 if Internet Explorer is needed, and probably AV1, VP8/VP9 if not) and use the <video> element. We are using WMV videos on an internal site, and we are embedding them into web sites. This works quite well on Internet Explorer, but not on Firefox. I've found ways to make it work in Firefox, but then it stops working in Internet Explorer. We do not want to use Silverlight just yet, especially since we cannot be sure that all clients will be running Windows XP with Windows Media Player installed. 
Is there some sort of Universal Code that embeds WMP into both Internet Explorer and Firefox, or do we need to implement some user-agent-detection and deliver different HTML for different browsers? | TITLE:
Embedding Windows Media Player for all browsers
QUESTION:
Edit: This question was written in 2008, which was like 3 internet ages ago. If this question is still relevant to your environment, please accept my condolences. Everyone else should convert into a format supported by your browsers (That would be H.264 if Internet Explorer is needed, and probably AV1, VP8/VP9 if not) and use the <video> element. We are using WMV videos on an internal site, and we are embedding them into web sites. This works quite well on Internet Explorer, but not on Firefox. I've found ways to make it work in Firefox, but then it stops working in Internet Explorer. We do not want to use Silverlight just yet, especially since we cannot be sure that all clients will be running Windows XP with Windows Media Player installed. Is there some sort of Universal Code that embeds WMP into both Internet Explorer and Firefox, or do we need to implement some user-agent-detection and deliver different HTML for different browsers?
ANSWER:
The following works for me in Firefox and Internet Explorer: | [
"windows",
"embed",
"media"
] | 64 | 47 | 177,305 | 10 | 0 | 2008-08-01T18:02:22.797000 | 2008-08-03T14:27:20.967000 |
173 | 516 | How to do version control for SQL Server database? | I want to get my databases under version control. I'll always want to have at least some data in there (as alumb mentions: user types and administrators). I'll also often want a large collection of generated test data for performance measurements. How would I apply version control to my database? | Martin Fowler wrote my favorite article on the subject, http://martinfowler.com/articles/evodb.html. I choose not to put schema dumps under version control as alumb and others suggest because I want an easy way to upgrade my production database. For a web application where I'll have a single production database instance, I use two techniques: Database Upgrade Scripts A sequence of database upgrade scripts that contain the DDL necessary to move the schema from version N to N+1. (These go in your version control system.) A _version_history_ table, something like create table VersionHistory ( Version int primary key, UpgradeStart datetime not null, UpgradeEnd datetime ); gets a new entry every time an upgrade script runs which corresponds to the new version. This ensures that it's easy to see what version of the database schema exists and that database upgrade scripts are run only once. Again, these are not database dumps. Rather, each script represents the changes necessary to move from one version to the next. They're the scripts that you apply to your production database to "upgrade" it. Developer Sandbox Synchronization A script to backup, sanitize, and shrink a production database. Run this after each upgrade to the production DB. A script to restore (and tweak, if necessary) the backup on a developer's workstation. Each developer runs this script after each upgrade to the production DB. A caveat: My automated tests run against a schema-correct but empty database, so this advice will not perfectly suit your needs. | How to do version control for SQL Server database? 
I want to get my databases under version control. I'll always want to have at least some data in there (as alumb mentions: user types and administrators). I'll also often want a large collection of generated test data for performance measurements. How would I apply version control to my database? | TITLE:
How to do version control for SQL Server database?
QUESTION:
I want to get my databases under version control. I'll always want to have at least some data in there (as alumb mentions: user types and administrators). I'll also often want a large collection of generated test data for performance measurements. How would I apply version control to my database?
ANSWER:
Martin Fowler wrote my favorite article on the subject, http://martinfowler.com/articles/evodb.html. I choose not to put schema dumps under version control as alumb and others suggest because I want an easy way to upgrade my production database. For a web application where I'll have a single production database instance, I use two techniques: Database Upgrade Scripts A sequence of database upgrade scripts that contain the DDL necessary to move the schema from version N to N+1. (These go in your version control system.) A _version_history_ table, something like create table VersionHistory ( Version int primary key, UpgradeStart datetime not null, UpgradeEnd datetime ); gets a new entry every time an upgrade script runs which corresponds to the new version. This ensures that it's easy to see what version of the database schema exists and that database upgrade scripts are run only once. Again, these are not database dumps. Rather, each script represents the changes necessary to move from one version to the next. They're the scripts that you apply to your production database to "upgrade" it. Developer Sandbox Synchronization A script to backup, sanitize, and shrink a production database. Run this after each upgrade to the production DB. A script to restore (and tweak, if necessary) the backup on a developer's workstation. Each developer runs this script after each upgrade to the production DB. A caveat: My automated tests run against a schema-correct but empty database, so this advice will not perfectly suit your needs.
"sql-server",
"database",
"svn",
"version-control"
] | 341 | 191 | 124,650 | 21 | 0 | 2008-08-01T18:33:08.333000 | 2008-08-02T17:33:54.927000 |
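The upgrade-script pattern in the answer — numbered DDL scripts plus a version-history table recording which ones have already run — can be sketched in a few lines. This is an illustrative Python/SQLite version, not the answer's SQL Server tooling; the table name follows the answer, the history columns are simplified, and the script bodies are placeholders standing in for numbered .sql files kept under version control:

```python
import sqlite3

# Hypothetical upgrade scripts, keyed by the version they produce.
UPGRADE_SCRIPTS = {
    1: "CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)",
    2: "ALTER TABLE users ADD COLUMN email TEXT",
}

def upgrade(conn: sqlite3.Connection) -> int:
    """Apply any pending upgrade scripts; return the resulting version."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS VersionHistory ("
        " Version INTEGER PRIMARY KEY,"
        " UpgradeEnd TEXT DEFAULT CURRENT_TIMESTAMP)"
    )
    (current,) = conn.execute(
        "SELECT COALESCE(MAX(Version), 0) FROM VersionHistory"
    ).fetchone()
    # Apply only the scripts the database has not seen yet, in order.
    for version in sorted(v for v in UPGRADE_SCRIPTS if v > current):
        conn.execute(UPGRADE_SCRIPTS[version])
        conn.execute("INSERT INTO VersionHistory (Version) VALUES (?)", (version,))
        current = version
    conn.commit()
    return current
```

Running upgrade twice is safe: the VersionHistory check makes each script apply exactly once, which is the property the answer's design is after.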
176 | 207 | error_log per Virtual Host? | On one Linux Server running Apache and PHP 5, we have multiple Virtual Hosts with separate log files. We cannot seem to separate the php error_log between virtual hosts. Overriding this setting in the <VirtualHost> section of the httpd.conf does not seem to do anything. Is there a way to have separate php error_logs for each Virtual Host? | To set the Apache (not the PHP) log, the easiest way to do this would be to add an ErrorLog directive inside each <VirtualHost> block: # Stuff, # More Stuff, ErrorLog /path/where/you/want/the/error.log If there is no leading "/" it is assumed to be relative. Apache Error Log Page | error_log per Virtual Host? On one Linux Server running Apache and PHP 5, we have multiple Virtual Hosts with separate log files. We cannot seem to separate the php error_log between virtual hosts. Overriding this setting in the <VirtualHost> section of the httpd.conf does not seem to do anything. Is there a way to have separate php error_logs for each Virtual Host? | TITLE:
error_log per Virtual Host?
QUESTION:
On one Linux Server running Apache and PHP 5, we have multiple Virtual Hosts with separate log files. We cannot seem to separate the php error_log between virtual hosts. Overriding this setting in the <VirtualHost> section of the httpd.conf does not seem to do anything. Is there a way to have separate php error_logs for each Virtual Host?
ANSWER:
To set the Apache (not the PHP) log, the easiest way to do this would be to add an ErrorLog directive inside each <VirtualHost> block: # Stuff, # More Stuff, ErrorLog /path/where/you/want/the/error.log If there is no leading "/" it is assumed to be relative. Apache Error Log Page
"linux",
"apache",
"virtualhost"
] | 124 | 89 | 120,269 | 11 | 0 | 2008-08-01T18:37:40.150000 | 2008-08-01T19:58:45.837000 |
180 | 539 | Function for creating color wheels | This is something I've pseudo-solved many times and have never quite found a solution for. The problem is to come up with a way to generate N colors, that are as distinguishable as possible where N is a parameter. | My first thought on this is "how to generate N vectors in a space that maximize distance from each other." You can see that the RGB (or any other scale you use that forms a basis in color space) are just vectors. Take a look at Random Point Picking. Once you have a set of vectors that are maximized apart, you can save them in a hash table or something for later, and just perform random rotations on them to get all the colors you desire that are maximally apart from each other! Thinking about this problem more, it would be better to map the colors in a linear manner, possibly (0,0,0) → (255,255,255) lexicographically, and then distribute them evenly. I really don't know how well this will work, but it should since, let us say: n = 10 we know we have 16777216 colors (256^3). We can use Buckles Algorithm 515 to find the lexicographically indexed color.. You'll probably have to edit the algorithm to avoid overflow and probably add some minor speed improvements. | Function for creating color wheels This is something I've pseudo-solved many times and have never quite found a solution for. The problem is to come up with a way to generate N colors, that are as distinguishable as possible where N is a parameter. | TITLE:
Function for creating color wheels
QUESTION:
This is something I've pseudo-solved many times and have never quite found a solution for. The problem is to come up with a way to generate N colors, that are as distinguishable as possible where N is a parameter.
ANSWER:
My first thought on this is "how to generate N vectors in a space that maximize distance from each other." You can see that the RGB (or any other scale you use that forms a basis in color space) are just vectors. Take a look at Random Point Picking. Once you have a set of vectors that are maximized apart, you can save them in a hash table or something for later, and just perform random rotations on them to get all the colors you desire that are maximally apart from each other! Thinking about this problem more, it would be better to map the colors in a linear manner, possibly (0,0,0) → (255,255,255) lexicographically, and then distribute them evenly. I really don't know how well this will work, but it should since, let us say: n = 10 we know we have 16777216 colors (256^3). We can use Buckles Algorithm 515 to find the lexicographically indexed color.. You'll probably have to edit the algorithm to avoid overflow and probably add some minor speed improvements. | [
"algorithm",
"language-agnostic",
"colors",
"color-space"
] | 71 | 28 | 19,598 | 8 | 0 | 2008-08-01T18:42:19.343000 | 2008-08-02T19:03:52.170000 |
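A cheap approximation of "N colors as far apart as possible" is to space hues evenly around the HSV color wheel — simpler than the random-vector or lexicographic approaches the answer describes, and often good enough for small N. A hedged Python sketch using only the standard library:

```python
import colorsys

def color_wheel(n: int) -> list:
    """Return n RGB tuples spread evenly around the hue circle.

    Maximizing distance along the hue axis only; saturation and value
    are pinned at 1.0, so all colors come out fully saturated.
    """
    colors = []
    for i in range(n):
        # Hue walks 0 .. (n-1)/n; conversion yields floats in [0, 1].
        r, g, b = colorsys.hsv_to_rgb(i / n, 1.0, 1.0)
        colors.append((round(r * 255), round(g * 255), round(b * 255)))
    return colors
```

For large N the hues start to crowd, which is where the answer's idea of also spreading points through the full 3D color space pays off.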
192 | 258 | Floating Point Number parsing: Is there a Catch All algorithm? | One of the fun parts of multi-cultural programming is number formats. Americans use 10,000.50 Germans use 10.000,50 French use 10 000,50 My first approach would be to take the string, parse it backwards until I encounter a separator and use this as my decimal separator. There is an obvious flaw with that: 10.000 would be interpreted as 10. Another approach: if the string contains 2 different non-numeric characters, use the last one as the decimal separator and discard the others. If I only have one, check if it occurs more than once and discard it if it does. If it only appears once, check if it has 3 digits after it. If yes, discard it, otherwise, use it as decimal separator. The obvious "best solution" would be to detect the User's culture or Browser, but that does not work if you have a Frenchman using an en-US Windows/Browser. Does the .NET Framework contain some mythical black magic floating point parser that is better than Double.(Try)Parse() in trying to auto-detect the number format? | I think the best you can do in this case is to take their input and then show them what you think they meant. If they disagree, show them the format you're expecting and get them to enter it again. | Floating Point Number parsing: Is there a Catch All algorithm? One of the fun parts of multi-cultural programming is number formats. Americans use 10,000.50 Germans use 10.000,50 French use 10 000,50 My first approach would be to take the string, parse it backwards until I encounter a separator and use this as my decimal separator. There is an obvious flaw with that: 10.000 would be interpreted as 10. Another approach: if the string contains 2 different non-numeric characters, use the last one as the decimal separator and discard the others. If I only have one, check if it occurs more than once and discard it if it does. If it only appears once, check if it has 3 digits after it. 
If yes, discard it, otherwise, use it as decimal separator. The obvious "best solution" would be to detect the User's culture or Browser, but that does not work if you have a Frenchman using an en-US Windows/Browser. Does the .NET Framework contain some mythical black magic floating point parser that is better than Double.(Try)Parse() in trying to auto-detect the number format? | TITLE:
Floating Point Number parsing: Is there a Catch All algorithm?
QUESTION:
One of the fun parts of multi-cultural programming is number formats. Americans use 10,000.50 Germans use 10.000,50 French use 10 000,50 My first approach would be to take the string, parse it backwards until I encounter a separator and use this as my decimal separator. There is an obvious flaw with that: 10.000 would be interpreted as 10. Another approach: if the string contains 2 different non-numeric characters, use the last one as the decimal separator and discard the others. If I only have one, check if it occurs more than once and discard it if it does. If it only appears once, check if it has 3 digits after it. If yes, discard it, otherwise, use it as decimal separator. The obvious "best solution" would be to detect the User's culture or Browser, but that does not work if you have a Frenchman using an en-US Windows/Browser. Does the .NET Framework contain some mythical black magic floating point parser that is better than Double.(Try)Parse() in trying to auto-detect the number format?
ANSWER:
I think the best you can do in this case is to take their input and then show them what you think they meant. If they disagree, show them the format you're expecting and get them to enter it again. | [
"c#",
".net",
"asp.net",
"internationalization",
"globalization"
] | 72 | 32 | 3,583 | 4 | 0 | 2008-08-01T19:23:13.117000 | 2008-08-01T23:17:53.657000 |
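The heuristic the question describes — two distinct separators means the last one is the decimal; a single separator occurring once with exactly three trailing digits is assumed to be a thousands separator — is straightforward to write down. A Python sketch; it inherits the question's own caveat that '10.000' is read as ten thousand, and it assumes unsigned input (no leading minus):

```python
def parse_multicultural(text: str) -> float:
    """Best-effort parse of '10,000.50', '10.000,50', '10 000,50'."""
    seps = [ch for ch in text if not ch.isdigit()]
    unique = set(seps)
    if not unique:
        return float(text)
    if len(unique) >= 2:
        # Two different separators: the last one is the decimal point.
        decimal = seps[-1]
    else:
        sep = unique.pop()
        after = text.rsplit(sep, 1)[1]
        # A repeated separator, or a lone one followed by exactly 3
        # digits, is taken as a thousands separator (the '10.000' case).
        decimal = None if (seps.count(sep) > 1 or len(after) == 3) else sep
    digits = "".join(ch if ch.isdigit() else ("." if ch == decimal else "")
                     for ch in text)
    return float(digits)
```

As the answer points out, any such guess should be echoed back to the user for confirmation rather than trusted silently.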
194 | 197 | Upgrading SQL Server 6.5 | Yes, I know. The existence of a running copy of SQL Server 6.5 in 2008 is absurd. That stipulated, what is the best way to migrate from 6.5 to 2005? Is there any direct path? Most of the documentation I've found deals with upgrading 6.5 to 7. Should I forget about the native SQL Server upgrade utilities, script out all of the objects and data, and try to recreate from scratch? I was going to attempt the upgrade this weekend, but server issues pushed it back till next. So, any ideas would be welcomed during the course of the week. Update. This is how I ended up doing it: Back up the database in question and Master on 6.5. Execute SQL Server 2000 's instcat.sql against 6.5 's Master. This allows SQL Server 2000 's OLEDB provider to connect to 6.5. Use SQL Server 2000 's standalone "Import and Export Data" to create a DTS package, using OLEDB to connect to 6.5. This successfully copied all 6.5 's tables to a new 2005 database (also using OLEDB ). Use 6.5 's Enterprise Manager to script out all of the database's indexes and triggers to a.sql file. Execute that.sql file against the new copy of the database, in 2005's Management Studio. Use 6.5's Enterprise Manager to script out all of the stored procedures. Execute that.sql file against the 2005 database. Several dozen sprocs had issues making them incompatible with 2005. Mainly non-ANSI joins and quoted identifier issues. Corrected all of those issues and re-executed the.sql file. Recreated the 6.5 's logins in 2005 and gave them appropriate permissions. There was a bit of rinse/repeat when correcting the stored procedures (there were hundreds of them to correct), but the upgrade went great otherwise. Being able to use Management Studio instead of Query Analyzer and Enterprise Manager 6.5 is such an amazing difference. A few report queries that took 20-30 seconds on the 6.5 database are now running in 1-2 seconds, without any modification, new indexes, or anything. 
I didn't expect that kind of immediate improvement. | Hey, I'm still stuck in that camp too. The third party application we have to support is FINALLY going to 2K5, so we're almost out of the wood. But I feel your pain 8^D That said, from everything I heard from our DBA, the key is to convert the database to 8.0 format first, and then go to 2005. I believe they used the built in migration/upgrade tools for this. There are some big steps between 6.5 and 8.0 that are better solved there than going from 6.5 to 2005 directly. Your BIGGEST pain, if you didn't know already, is that DTS is gone in favor of SSIS. There is a shell type module that will run your existing DTS packages, but you're going to want to manually recreate them all in SSIS. Ease of this will depend on the complexity of the packages themselves, but I've done a few at work so far and they've been pretty smooth. | Upgrading SQL Server 6.5 Yes, I know. The existence of a running copy of SQL Server 6.5 in 2008 is absurd. That stipulated, what is the best way to migrate from 6.5 to 2005? Is there any direct path? Most of the documentation I've found deals with upgrading 6.5 to 7. Should I forget about the native SQL Server upgrade utilities, script out all of the objects and data, and try to recreate from scratch? I was going to attempt the upgrade this weekend, but server issues pushed it back till next. So, any ideas would be welcomed during the course of the week. Update. This is how I ended up doing it: Back up the database in question and Master on 6.5. Execute SQL Server 2000 's instcat.sql against 6.5 's Master. This allows SQL Server 2000 's OLEDB provider to connect to 6.5. Use SQL Server 2000 's standalone "Import and Export Data" to create a DTS package, using OLEDB to connect to 6.5. This successfully copied all 6.5 's tables to a new 2005 database (also using OLEDB ). Use 6.5 's Enterprise Manager to script out all of the database's indexes and triggers to a.sql file. 
Execute that .sql file against the new copy of the database, in 2005's Management Studio. Use 6.5's Enterprise Manager to script out all of the stored procedures. Execute that .sql file against the 2005 database. Several dozen sprocs had issues making them incompatible with 2005. Mainly non-ANSI joins and quoted identifier issues. Corrected all of those issues and re-executed the .sql file. Recreated the 6.5's logins in 2005 and gave them appropriate permissions. There was a bit of rinse/repeat when correcting the stored procedures (there were hundreds of them to correct), but the upgrade went great otherwise. Being able to use Management Studio instead of Query Analyzer and Enterprise Manager 6.5 is such an amazing difference. A few report queries that took 20-30 seconds on the 6.5 database are now running in 1-2 seconds, without any modification, new indexes, or anything. I didn't expect that kind of immediate improvement. | TITLE:
Upgrading SQL Server 6.5
QUESTION:
Yes, I know. The existence of a running copy of SQL Server 6.5 in 2008 is absurd. That stipulated, what is the best way to migrate from 6.5 to 2005? Is there any direct path? Most of the documentation I've found deals with upgrading 6.5 to 7. Should I forget about the native SQL Server upgrade utilities, script out all of the objects and data, and try to recreate from scratch? I was going to attempt the upgrade this weekend, but server issues pushed it back till next. So, any ideas would be welcomed during the course of the week. Update. This is how I ended up doing it: Back up the database in question and Master on 6.5. Execute SQL Server 2000's instcat.sql against 6.5's Master. This allows SQL Server 2000's OLEDB provider to connect to 6.5. Use SQL Server 2000's standalone "Import and Export Data" to create a DTS package, using OLEDB to connect to 6.5. This successfully copied all 6.5's tables to a new 2005 database (also using OLEDB). Use 6.5's Enterprise Manager to script out all of the database's indexes and triggers to a .sql file. Execute that .sql file against the new copy of the database, in 2005's Management Studio. Use 6.5's Enterprise Manager to script out all of the stored procedures. Execute that .sql file against the 2005 database. Several dozen sprocs had issues making them incompatible with 2005. Mainly non-ANSI joins and quoted identifier issues. Corrected all of those issues and re-executed the .sql file. Recreated the 6.5's logins in 2005 and gave them appropriate permissions. There was a bit of rinse/repeat when correcting the stored procedures (there were hundreds of them to correct), but the upgrade went great otherwise. Being able to use Management Studio instead of Query Analyzer and Enterprise Manager 6.5 is such an amazing difference. A few report queries that took 20-30 seconds on the 6.5 database are now running in 1-2 seconds, without any modification, new indexes, or anything. I didn't expect that kind of immediate improvement.
ANSWER:
Hey, I'm still stuck in that camp too. The third party application we have to support is FINALLY going to 2K5, so we're almost out of the wood. But I feel your pain 8^D That said, from everything I heard from our DBA, the key is to convert the database to 8.0 format first, and then go to 2005. I believe they used the built in migration/upgrade tools for this. There are some big steps between 6.5 and 8.0 that are better solved there than going from 6.5 to 2005 directly. Your BIGGEST pain, if you didn't know already, is that DTS is gone in favor of SSIS. There is a shell type module that will run your existing DTS packages, but you're going to want to manually recreate them all in SSIS. Ease of this will depend on the complexity of the packages themselves, but I've done a few at work so far and they've been pretty smooth. | [
"sql-server",
"migration"
] | 40 | 11 | 4,760 | 4 | 0 | 2008-08-01T19:26:37.883000 | 2008-08-01T19:34:49.067000 |
227 | 2,943,318 | Create a tag cloud by wrapping array values in h1 through h6 tags based on a mapping array of weights | I have the following arrays: $artist = ["the roots", "michael jackson", "billy idol", "more", "and more", "and_YET_MORE"]; $count = [5, 3, 9, 1, 1, 3]; I want to generate a tag cloud that will have artists with a higher number in $count enclosed in h6 tags and the lowest enclosed h1 tags. | You will want to add a logarithmic function to it too. (taken from tagadelic, my Drupal module to create tag clouds http://drupal.org/project/tagadelic ): db_query('SELECT COUNT(*) AS count, id, name FROM... ORDER BY count DESC');
$steps = 6; $tags = array(); $min = 1e9; $max = -1e9;
while ($tag = db_fetch_object($result)) { $tag->number_of_posts = $tag->count; #sets the amount of items a certain tag has attached to it $tag->count = log($tag->count); $min = min($min, $tag->count); $max = max($max, $tag->count); $tags[$tag->tid] = $tag; } // Note: we need to ensure the range is slightly too large to make sure even // the largest element is rounded down. $range = max(.01, $max - $min) * 1.0001;
foreach ($tags as $key => $value) { $tags[$key]->weight = 1 + floor($steps * ($value->count - $min) / $range); } Then in your view or template: foreach ($tags as $tag) { $output .= "<h{$tag->weight}>{$tag->name}</h{$tag->weight}>"; }
Create a tag cloud by wrapping array values in h1 through h6 tags based on a mapping array of weights
QUESTION:
I have the following arrays: $artist = ["the roots", "michael jackson", "billy idol", "more", "and more", "and_YET_MORE"]; $count = [5, 3, 9, 1, 1, 3]; I want to generate a tag cloud that will have artists with a higher number in $count enclosed in h6 tags and the lowest enclosed h1 tags.
ANSWER:
You will want to add a logarithmic function to it too. (taken from tagadelic, my Drupal module to create tag clouds http://drupal.org/project/tagadelic ): db_query('SELECT COUNT(*) AS count, id, name FROM... ORDER BY count DESC');
$steps = 6; $tags = array(); $min = 1e9; $max = -1e9;
while ($tag = db_fetch_object($result)) { $tag->number_of_posts = $tag->count; #sets the amount of items a certain tag has attached to it $tag->count = log($tag->count); $min = min($min, $tag->count); $max = max($max, $tag->count); $tags[$tag->tid] = $tag; } // Note: we need to ensure the range is slightly too large to make sure even // the largest element is rounded down. $range = max(.01, $max - $min) * 1.0001;
foreach ($tags as $key => $value) { $tags[$key]->weight = 1 + floor($steps * ($value->count - $min) / $range); } Then in your view or template: foreach ($tags as $tag) { $output .= "<h{$tag->weight}>{$tag->name}</h{$tag->weight}>"; }
"php",
"arrays",
"mapping",
"tag-cloud"
] | 79 | 43 | 9,776 | 9 | 0 | 2008-08-01T21:40:55.833000 | 2010-05-31T11:53:57.940000 |
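The Drupal snippet's weighting — take the log of each count, then bucket the log range into $steps bins — translates directly to other languages. A Python sketch using the same formula (the 1.0001 range padding mirrors the PHP comment about making sure even the largest element is rounded down; counts are assumed to be positive):

```python
import math

def tag_weights(counts: dict, steps: int = 6) -> dict:
    """Map raw tag counts to 1..steps weights on a log scale.

    Weight 1 corresponds to h1 (rarest tags), weight `steps` to h6
    (most used), matching the question's desired h1-h6 wrapping.
    """
    logs = {tag: math.log(count) for tag, count in counts.items()}
    lo, hi = min(logs.values()), max(logs.values())
    # Pad the range slightly so the largest value still floors to steps-1.
    span = max(0.01, hi - lo) * 1.0001
    return {tag: 1 + math.floor(steps * (v - lo) / span)
            for tag, v in logs.items()}
```

With the question's counts, "billy idol" (9) lands at weight 6 and the single-count artists at weight 1, so wrapping each name in an h{weight} tag gives the requested cloud.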
231 | 1,812 | Register Windows program with the mailto protocol programmatically | How do I make it so mailto: links will be registered with my program? How would I then handle that event in my program? Most of the solutions I found from a quick Google search are how to do this manually, but I need to do this automatically for users of my program if they click a button, such as "set as default email client". #Edit: Removed reference to the Delphi Programming Language, because the answer is independent of your language. | @Dillie-O: Your answer put me in the right direction (I should have expected it to just be a registry change) and I got this working. But I'm going to mark this as the answer because I'm going to put some additional information that I found while working on this. The solution to this question really doesn't depend on what programming language you're using, as long as there's some way to modify Windows registry settings. Finally, here's the answer: To associate a program with the mailto protocol for all users on a computer, change the HKEY_CLASSES_ROOT\mailto\shell\open\command Default value to: " Your program's executable " "%1" To associate a program with the mailto protocol for the current user, change the HKEY_CURRENT_USER\Software\Classes\mailto\shell\open\command Default value to: " Your program's executable " "%1" The %1 will be replaced with the entire mailto URL. For example, given the link: Email me The following will be executed: " Your program's executable " "mailto:user@example.com" Update (via comment by shellscape): As of Windows 8, this method no longer works as expected. Win8 enforces the following key: HKEY_CURRENT_USER\Software\Microsoft\Windows\Shell\Associations\URLAssociations\MAILTO\UserChoice for which the ProgID of the selected app is hashed and can't be forged. It's a royal PITA. | Register Windows program with the mailto protocol programmatically How do I make it so mailto: links will be registered with my program? 
How would I then handle that event in my program? Most of the solutions I found from a quick Google search are how to do this manually, but I need to do this automatically for users of my program if they click a button, such as "set as default email client". #Edit: Removed reference to the Delphi Programming Language, because the answer is independent of your language. | TITLE:
Register Windows program with the mailto protocol programmatically
QUESTION:
How do I make it so mailto: links will be registered with my program? How would I then handle that event in my program? Most of the solutions I found from a quick Google search are how to do this manually, but I need to do this automatically for users of my program if they click a button, such as "set as default email client". #Edit: Removed reference to the Delphi Programming Language, because the answer is independent of your language.
ANSWER:
@Dillie-O: Your answer put me in the right direction (I should have expected it to just be a registry change) and I got this working. But I'm going to mark this as the answer because I'm going to put some additional information that I found while working on this. The solution to this question really doesn't depend on what programming language you're using, as long as there's some way to modify Windows registry settings. Finally, here's the answer: To associate a program with the mailto protocol for all users on a computer, change the HKEY_CLASSES_ROOT\mailto\shell\open\command Default value to: " Your program's executable " "%1" To associate a program with the mailto protocol for the current user, change the HKEY_CURRENT_USER\Software\Classes\mailto\shell\open\command Default value to: " Your program's executable " "%1" The %1 will be replaced with the entire mailto URL. For example, given the link: Email me The following will be executed: " Your program's executable " "mailto:user@example.com" Update (via comment by shellscape): As of Windows 8, this method no longer works as expected. Win8 enforces the following key: HKEY_CURRENT_USER\Software\Microsoft\Windows\Shell\Associations\URLAssociations\MAILTO\UserChoice for which the ProgID of the selected app is hashed and can't be forged. It's a royal PITA. | [
"windows",
"mailto"
] | 43 | 19 | 31,416 | 2 | 0 | 2008-08-01T22:08:14.453000 | 2008-08-05T01:49:25.940000 |
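The answer above notes that the shell substitutes the full mailto URL for `%1`, so the registered program receives something like `mailto:user@example.com` as its first argument. A minimal Python sketch of parsing that argument on the receiving side (the helper name is hypothetical, not from the answer):

```python
from urllib.parse import urlsplit, parse_qs

def parse_mailto(url):
    """Split a mailto: URL into recipient and header fields (subject, body, ...)."""
    parts = urlsplit(url)
    if parts.scheme != "mailto":
        raise ValueError("not a mailto URL: %r" % url)
    headers = parse_qs(parts.query)
    # For mailto URLs the recipient lands in .path, the headers in .query
    return {"to": parts.path, **{k: v[0] for k, v in headers.items()}}

print(parse_mailto("mailto:user@example.com?subject=Hi"))
```

The same split would apply in any language; the point is that the whole URL, scheme included, arrives as one argv entry.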
234 | 12,202 | SQL Server 2005 implementation of MySQL REPLACE INTO? | MySQL has this incredibly useful yet proprietary REPLACE INTO SQL Command. Can this easily be emulated in SQL Server 2005? Starting a new Transaction, doing a Select() and then either UPDATE or INSERT and COMMIT is always a little bit of a pain, especially when doing it in the application and therefore always keeping 2 versions of the statement. I wonder if there is an easy and universal way to implement such a function into SQL Server 2005? | This is something that annoys me about MSSQL ( rant on my blog ). I wish MSSQL supported upsert. @Dillie-O's code is a good way in older SQL versions (+1 vote), but it still is basically two IO operations (the exists and then the update or insert ) There's a slightly better way on this post, basically: --try an update update tablename set field1 = 'new value', field2 = 'different value',... where idfield = 7
--insert if failed if @@rowcount = 0 and @@error = 0 insert into tablename ( idfield, field1, field2,... ) values ( 7, 'value one', 'another value',... ) This reduces it to one IO operation if it's an update, or two if an insert. MS Sql2008 introduces merge from the SQL:2003 standard: merge tablename as target using (values ('new value', 'different value')) as source (field1, field2) on target.idfield = 7 when matched then update set field1 = source.field1, field2 = source.field2,... when not matched then insert ( idfield, field1, field2,... ) values ( 7, source.field1, source.field2,... ) Now it's really just one IO operation, but awful code:-(
SQL Server 2005 implementation of MySQL REPLACE INTO?
QUESTION:
MySQL has this incredibly useful yet proprietary REPLACE INTO SQL Command. Can this easily be emulated in SQL Server 2005? Starting a new Transaction, doing a Select() and then either UPDATE or INSERT and COMMIT is always a little bit of a pain, especially when doing it in the application and therefore always keeping 2 versions of the statement. I wonder if there is an easy and universal way to implement such a function into SQL Server 2005?
ANSWER:
This is something that annoys me about MSSQL ( rant on my blog ). I wish MSSQL supported upsert. @Dillie-O's code is a good way in older SQL versions (+1 vote), but it still is basically two IO operations (the exists and then the update or insert ) There's a slightly better way on this post, basically: --try an update update tablename set field1 = 'new value', field2 = 'different value',... where idfield = 7
--insert if failed if @@rowcount = 0 and @@error = 0 insert into tablename ( idfield, field1, field2,... ) values ( 7, 'value one', 'another value',... ) This reduces it to one IO operation if it's an update, or two if an insert. MS Sql2008 introduces merge from the SQL:2003 standard: merge tablename as target using (values ('new value', 'different value')) as source (field1, field2) on target.idfield = 7 when matched then update set field1 = source.field1, field2 = source.field2,... when not matched then insert ( idfield, field1, field2,... ) values ( 7, source.field1, source.field2,... ) Now it's really just one IO operation, but awful code:-(
"mysql",
"sql-server",
"sql-server-2005"
] | 89 | 63 | 51,546 | 4 | 0 | 2008-08-01T22:13:34.100000 | 2008-08-15T12:50:22.350000 |
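The answer's "try an update, insert if `@@rowcount` is 0" pattern is language-independent. A small Python/SQLite sketch of the same rowcount trick (SQLite stands in for SQL Server here purely for illustration; table and column names are made up):

```python
import sqlite3

def upsert(conn, idfield, field1):
    # Try the update first; fall back to insert when no row matched --
    # the same rowcount trick the answer uses for SQL Server.
    cur = conn.execute("UPDATE t SET field1 = ? WHERE id = ?", (field1, idfield))
    if cur.rowcount == 0:
        conn.execute("INSERT INTO t (id, field1) VALUES (?, ?)", (idfield, field1))

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, field1 TEXT)")
upsert(conn, 7, "value one")   # no row with id 7 yet: falls through to insert
upsert(conn, 7, "new value")   # row exists now: single update, no insert
print(conn.execute("SELECT field1 FROM t WHERE id = 7").fetchone())
```

As the answer says, this costs one statement for the update case and two for the insert case, versus the three of select-then-update-or-insert.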
246 | 176,741 | Create a SQLite database based on an XSD Data Set | Does anybody know if there is a way to create an SQLite database based on an XSD DataSet? In the past, I've just used a basic SQLite manager, but I want to fuse things a bit more with my.NET development if possible. | I suspect in the general case this is hard; XML Schema allows for some quite bizarre type constructions. I'm not sure how you would do substitution groups, or extensions to restrictions of types for example. However it should be possible to knock something together quite quickly (especially mapping from the classes in System.Xml.Schema) that works for 90% of schemas (i.e. sequence and choice elements with a few simple data types). | Create a SQLite database based on an XSD Data Set Does anybody know if there is a way to create an SQLite database based on an XSD DataSet? In the past, I've just used a basic SQLite manager, but I want to fuse things a bit more with my.NET development if possible. | TITLE:
Create a SQLite database based on an XSD Data Set
QUESTION:
Does anybody know if there is a way to create an SQLite database based on an XSD DataSet? In the past, I've just used a basic SQLite manager, but I want to fuse things a bit more with my .NET development if possible.
ANSWER:
I suspect in the general case this is hard; XML Schema allows for some quite bizarre type constructions. I'm not sure how you would do substitution groups, or extensions to restrictions of types for example. However it should be possible to knock something together quite quickly (especially mapping from the classes in System.Xml.Schema) that works for 90% of schemas (i.e. sequence and choice elements with a few simple data types). | [
".net",
"database",
"sqlite"
] | 45 | 19 | 3,283 | 3 | 0 | 2008-08-01T22:34:42.853000 | 2008-10-06T23:58:50.823000 |
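The answer's "90% of schemas" caveat can be made concrete: for plain sequences of simply typed elements, a naive mapping to columns is only a few lines. A heavily simplified Python sketch (the type map and helper are hypothetical; it ignores substitution groups, restrictions, nesting, and everything else the answer warns about):

```python
import xml.etree.ElementTree as ET

XS = "{http://www.w3.org/2001/XMLSchema}"
# Illustrative subset of XSD built-in types -> SQLite storage classes
TYPE_MAP = {"xs:string": "TEXT", "xs:int": "INTEGER", "xs:decimal": "REAL"}

def table_from_xsd(xsd_text, table):
    """Emit a CREATE TABLE for the simply typed elements in a schema."""
    root = ET.fromstring(xsd_text)
    cols = []
    for el in root.iter(XS + "element"):
        t = el.get("type")
        if el.get("name") and t in TYPE_MAP:
            cols.append("%s %s" % (el.get("name"), TYPE_MAP[t]))
    return "CREATE TABLE %s (%s)" % (table, ", ".join(cols))

xsd = """<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="word" type="xs:string"/>
  <xs:element name="freq" type="xs:int"/>
</xs:schema>"""
print(table_from_xsd(xsd, "words"))
```

In .NET the equivalent walk would go through the System.Xml.Schema classes the answer mentions rather than raw XML parsing.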
260 | 307 | Adding scripting functionality to .NET applications | I have a little game written in C#. It uses a database as back-end. It's a trading card game, and I wanted to implement the function of the cards as a script. What I mean is that I essentially have an interface, ICard, which a card class implements ( public class Card056: ICard ) and which contains a function that is called by the game. Now, to make the thing maintainable/moddable, I would like to have the class for each card as source code in the database and essentially compile it on first use. So when I have to add/change a card, I'll just add it to the database and tell my application to refresh, without needing any assembly deployment (especially since we would be talking about 1 assembly per card which means hundreds of assemblies). Is that possible? Register a class from a source file and then instantiate it, etc. ICard Cards[current] = new MyGame.CardLibrary.Card056(); Cards[current].OnEnterPlay(ref currentGameState); The language is C# but extra bonus if it's possible to write the script in any.NET language. | Oleg Shilo's C# Script solution (at The Code Project ) really is a great introduction to providing script abilities in your application. A different approach would be to consider a language that is specifically built for scripting, such as IronRuby, IronPython, or Lua. IronPython and IronRuby are both available today. For a guide to embedding IronPython read How to embed IronPython script support in your existing app in 10 easy steps. Lua is a scripting language commonly used in games. There is a Lua compiler for.NET, available from CodePlex -- http://www.codeplex.com/Nua That codebase is a great read if you want to learn about building a compiler in.NET. A different angle altogether is to try PowerShell. 
There are numerous examples of embedding PowerShell into an application -- here's a thorough project on the topic: Powershell Tunnel | Adding scripting functionality to .NET applications I have a little game written in C#. It uses a database as back-end. It's a trading card game, and I wanted to implement the function of the cards as a script. What I mean is that I essentially have an interface, ICard, which a card class implements ( public class Card056: ICard ) and which contains a function that is called by the game. Now, to make the thing maintainable/moddable, I would like to have the class for each card as source code in the database and essentially compile it on first use. So when I have to add/change a card, I'll just add it to the database and tell my application to refresh, without needing any assembly deployment (especially since we would be talking about 1 assembly per card which means hundreds of assemblies). Is that possible? Register a class from a source file and then instantiate it, etc. ICard Cards[current] = new MyGame.CardLibrary.Card056(); Cards[current].OnEnterPlay(ref currentGameState); The language is C# but extra bonus if it's possible to write the script in any.NET language. | TITLE:
Adding scripting functionality to .NET applications
QUESTION:
I have a little game written in C#. It uses a database as back-end. It's a trading card game, and I wanted to implement the function of the cards as a script. What I mean is that I essentially have an interface, ICard, which a card class implements ( public class Card056: ICard ) and which contains a function that is called by the game. Now, to make the thing maintainable/moddable, I would like to have the class for each card as source code in the database and essentially compile it on first use. So when I have to add/change a card, I'll just add it to the database and tell my application to refresh, without needing any assembly deployment (especially since we would be talking about 1 assembly per card which means hundreds of assemblies). Is that possible? Register a class from a source file and then instantiate it, etc. ICard Cards[current] = new MyGame.CardLibrary.Card056(); Cards[current].OnEnterPlay(ref currentGameState); The language is C# but extra bonus if it's possible to write the script in any .NET language.
ANSWER:
Oleg Shilo's C# Script solution (at The Code Project ) really is a great introduction to providing script abilities in your application. A different approach would be to consider a language that is specifically built for scripting, such as IronRuby, IronPython, or Lua. IronPython and IronRuby are both available today. For a guide to embedding IronPython read How to embed IronPython script support in your existing app in 10 easy steps. Lua is a scripting language commonly used in games. There is a Lua compiler for .NET, available from CodePlex -- http://www.codeplex.com/Nua That codebase is a great read if you want to learn about building a compiler in .NET. A different angle altogether is to try PowerShell. There are numerous examples of embedding PowerShell into an application -- here's a thorough project on the topic: Powershell Tunnel
"c#",
".net",
"scripting",
"compiler-construction"
] | 83 | 44 | 22,879 | 9 | 0 | 2008-08-01T23:22:08.983000 | 2008-08-02T01:49:46.220000 |
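The compile-on-first-use idea in the question (source stored in the database, compiled once, instance cached) is easy to sketch in a dynamic language. A Python analogue, purely to show the shape of the mechanism -- in C# you would use CodeDOM or one of the engines from the answer; all names here are hypothetical:

```python
# Card source as it might be stored in the database.
CARD_SOURCE = """
class Card056(ICard):
    def on_enter_play(self, state):
        state["health"] += 3
"""

class ICard:
    def on_enter_play(self, state):
        raise NotImplementedError

_cache = {}

def load_card(name, source):
    # Compile on first use and cache the class, as the question proposes;
    # later calls skip straight to instantiation.
    if name not in _cache:
        ns = {"ICard": ICard}
        exec(compile(source, name, "exec"), ns)
        _cache[name] = ns[name]
    return _cache[name]()

state = {"health": 10}
load_card("Card056", CARD_SOURCE).on_enter_play(state)
print(state)
```

The cache is what avoids the "hundreds of assemblies" problem: each card's source is compiled exactly once per process, then reused.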
263 | 607 | GTK implementation of MessageBox | I have been trying to implement Win32's MessageBox using GTK. The app uses SDL/OpenGL, so this isn't a GTK app. I handle the initialization ( gtk_init ) sort of stuff inside the MessageBox function as follows: int MessageBox(HWND hwnd, const char* text, const char* caption, UINT type) { GtkWidget *window = NULL; GtkWidget *dialog = NULL;
gtk_init(&gtkArgc, &gtkArgv); window = gtk_window_new(GTK_WINDOW_TOPLEVEL); g_signal_connect(G_OBJECT(window), "delete_event", G_CALLBACK(delete_event), NULL); g_signal_connect(G_OBJECT(window), "destroy", G_CALLBACK(destroy), NULL); // gcallback calls gtk_main_quit() gtk_init_add((GtkFunction)gcallback, NULL);
if (type & MB_YESNO) { dialog = gtk_message_dialog_new(GTK_WINDOW(window), GTK_DIALOG_DESTROY_WITH_PARENT, GTK_MESSAGE_QUESTION, GTK_BUTTONS_YES_NO, text); } else { dialog = gtk_message_dialog_new(GTK_WINDOW(window), GTK_DIALOG_DESTROY_WITH_PARENT, GTK_MESSAGE_INFO, GTK_BUTTONS_OK, text); }
gtk_window_set_title(GTK_WINDOW(dialog), caption); gint result = gtk_dialog_run(GTK_DIALOG(dialog));
gtk_main();
gtk_widget_destroy(dialog);
if (type & MB_YESNO) { switch (result) { default: case GTK_RESPONSE_DELETE_EVENT: case GTK_RESPONSE_NO: return IDNO; break; case GTK_RESPONSE_YES: return IDYES; break; } }
return IDOK; } Now, I am by no means an experienced GTK programmer, and I realize that I'm probably doing something horribly wrong. However, my problem is that the last dialog popped up with this function staying around until the process exits. Any ideas? | Hmm, ok. I'd suggest code like this, then: typedef struct { int type; int result; } DialogData;
static gboolean display_dialog(gpointer user_data) { DialogData *dialog_data = user_data; GtkWidget *dialog;
if (dialog_data->type & MB_YESNO) dialog = gtk_message_dialog_new(...); else dialog = gtk_message_dialog_new(...);
// Set title, etc.
dialog_data->result = gtk_dialog_run(...);
gtk_main_quit(); // Quits the main loop run in MessageBox()
return FALSE; }
int MessageBox(...) { DialogData dialog_data;
dialog_data.type = type;
gtk_idle_add(display_dialog, &dialog_data);
gtk_main();
// Do stuff based on dialog_data.result } The struct is required because you need to pass around a couple pieces of data. The gtk_idle_add() call adds a method to be run when the main loop is running and idle, and the FALSE return value from the display_dialog() call means that it's only run once. After we get the result from the dialog, we quit the main loop. That'll cause the gtk_main() in your main MessageBox() method to return, and you'll be able to access the result from there. | GTK implementation of MessageBox I have been trying to implement Win32's MessageBox using GTK. The app uses SDL/OpenGL, so this isn't a GTK app. I handle the initialization ( gtk_init ) sort of stuff inside the MessageBox function as follows: int MessageBox(HWND hwnd, const char* text, const char* caption, UINT type) { GtkWidget *window = NULL; GtkWidget *dialog = NULL;
gtk_init(&gtkArgc, &gtkArgv); window = gtk_window_new(GTK_WINDOW_TOPLEVEL); g_signal_connect(G_OBJECT(window), "delete_event", G_CALLBACK(delete_event), NULL); g_signal_connect(G_OBJECT(window), "destroy", G_CALLBACK(destroy), NULL); // gcallback calls gtk_main_quit() gtk_init_add((GtkFunction)gcallback, NULL);
if (type & MB_YESNO) { dialog = gtk_message_dialog_new(GTK_WINDOW(window), GTK_DIALOG_DESTROY_WITH_PARENT, GTK_MESSAGE_QUESTION, GTK_BUTTONS_YES_NO, text); } else { dialog = gtk_message_dialog_new(GTK_WINDOW(window), GTK_DIALOG_DESTROY_WITH_PARENT, GTK_MESSAGE_INFO, GTK_BUTTONS_OK, text); }
gtk_window_set_title(GTK_WINDOW(dialog), caption); gint result = gtk_dialog_run(GTK_DIALOG(dialog));
gtk_main();
gtk_widget_destroy(dialog);
if (type & MB_YESNO) { switch (result) { default: case GTK_RESPONSE_DELETE_EVENT: case GTK_RESPONSE_NO: return IDNO; break; case GTK_RESPONSE_YES: return IDYES; break; } }
return IDOK; } Now, I am by no means an experienced GTK programmer, and I realize that I'm probably doing something horribly wrong. However, my problem is that the last dialog popped up with this function staying around until the process exits. Any ideas? | TITLE:
GTK implementation of MessageBox
QUESTION:
I have been trying to implement Win32's MessageBox using GTK. The app uses SDL/OpenGL, so this isn't a GTK app. I handle the initialization ( gtk_init ) sort of stuff inside the MessageBox function as follows: int MessageBox(HWND hwnd, const char* text, const char* caption, UINT type) { GtkWidget *window = NULL; GtkWidget *dialog = NULL;
gtk_init(&gtkArgc, &gtkArgv); window = gtk_window_new(GTK_WINDOW_TOPLEVEL); g_signal_connect(G_OBJECT(window), "delete_event", G_CALLBACK(delete_event), NULL); g_signal_connect(G_OBJECT(window), "destroy", G_CALLBACK(destroy), NULL); // gcallback calls gtk_main_quit() gtk_init_add((GtkFunction)gcallback, NULL);
if (type & MB_YESNO) { dialog = gtk_message_dialog_new(GTK_WINDOW(window), GTK_DIALOG_DESTROY_WITH_PARENT, GTK_MESSAGE_QUESTION, GTK_BUTTONS_YES_NO, text); } else { dialog = gtk_message_dialog_new(GTK_WINDOW(window), GTK_DIALOG_DESTROY_WITH_PARENT, GTK_MESSAGE_INFO, GTK_BUTTONS_OK, text); }
gtk_window_set_title(GTK_WINDOW(dialog), caption); gint result = gtk_dialog_run(GTK_DIALOG(dialog));
gtk_main();
gtk_widget_destroy(dialog);
if (type & MB_YESNO) { switch (result) { default: case GTK_RESPONSE_DELETE_EVENT: case GTK_RESPONSE_NO: return IDNO; break; case GTK_RESPONSE_YES: return IDYES; break; } }
return IDOK; } Now, I am by no means an experienced GTK programmer, and I realize that I'm probably doing something horribly wrong. However, my problem is that the last dialog popped up with this function staying around until the process exits. Any ideas?
ANSWER:
Hmm, ok. I'd suggest code like this, then: typedef struct { int type; int result; } DialogData;
static gboolean display_dialog(gpointer user_data) { DialogData *dialog_data = user_data; GtkWidget *dialog;
if (dialog_data->type & MB_YESNO) dialog = gtk_message_dialog_new(...); else dialog = gtk_message_dialog_new(...);
// Set title, etc.
dialog_data->result = gtk_dialog_run(...);
gtk_main_quit(); // Quits the main loop run in MessageBox()
return FALSE; }
int MessageBox(...) { DialogData dialog_data;
dialog_data.type = type;
gtk_idle_add(display_dialog, &dialog_data);
gtk_main();
// Do stuff based on dialog_data.result } The struct is required because you need to pass around a couple pieces of data. The gtk_idle_add() call adds a method to be run when the main loop is running and idle, and the FALSE return value from the display_dialog() call means that it's only run once. After we get the result from the dialog, we quit the main loop. That'll cause the gtk_main() in your main MessageBox() method to return, and you'll be able to access the result from there. | [
"c",
"linux",
"gtk",
"x11"
] | 42 | 18 | 11,832 | 3 | 0 | 2008-08-01T23:27:24.993000 | 2008-08-03T02:30:05.907000 |
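The answer's pattern hinges on one contract: an idle callback that returns FALSE runs exactly once, and calling quit inside it unblocks the `gtk_main()` in `MessageBox()`. A toy Python event loop (not GTK, purely illustrative) showing that run-once contract:

```python
def run_main_loop(idle_callbacks):
    """Tiny stand-in for gtk_main(): run idle callbacks, dropping any that
    return False, until one of them calls quit."""
    running = [True]
    def quit_loop():
        running[0] = False
    while running[0] and idle_callbacks:
        cb = idle_callbacks.pop(0)
        if cb(quit_loop):          # True would mean "call me again when idle"
            idle_callbacks.append(cb)

result = {}

def display_dialog(quit_loop):
    result["answer"] = "yes"       # stands in for gtk_dialog_run()
    quit_loop()                    # stands in for gtk_main_quit()
    return False                   # run once, like the answer's callback

run_main_loop([display_dialog])    # stands in for the gtk_main() in MessageBox()
print(result)
```

After the loop returns, the caller reads the dialog result out of the shared struct, exactly as `MessageBox()` reads `dialog_data.result` in the answer.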
265 | 266 | Best Practice: Collaborative Environment, Bin Directory, SVN | What are the best practices for checking in BIN directories in a collaborative development environment using SVN? Should project level references be excluded from checkin? Is it easier to just add all bin directories? I develop a lot of DotNetNuke sites and it seems that in a multi-developer environment, it's always a huge task to get the environment setup correctly. The ultimate goal (of course) is to have a new developer checkout the trunk from SVN, restore the DNN database and have it all just 'work'... | Any assemblies that are expected to be in the GAC should stay in the GAC. This includes System.web.dll or any other 3rd party dll that you'll deploy to the GAC in production. This means a new developer would have to install these assemblies. All other 3rd party assemblies should be references through a relative path. My typical structure is: -Project --Project.sln --References ---StructureMap.dll ---NUnit.dll ---System.Web.Mvc.dll --Project.Web ---Project.Web.Proj ---Project.Web.Proj files --Project ---Project.Proj ---Project.Proj files Project.Web and Project reference the assemblies in the root/References folder relatively. These.dlls are checked into subversion. Aside from that, */bin */bin/* obj should be in your global ignore path. With this setup, all references to assemblies are either through the GAC (so should work across all computers), or relative to each project within your solution. | Best Practice: Collaborative Environment, Bin Directory, SVN What are the best practices for checking in BIN directories in a collaborative development environment using SVN? Should project level references be excluded from checkin? Is it easier to just add all bin directories? I develop a lot of DotNetNuke sites and it seems that in a multi-developer environment, it's always a huge task to get the environment setup correctly. 
The ultimate goal (of course) is to have a new developer checkout the trunk from SVN, restore the DNN database and have it all just 'work'... | TITLE:
Best Practice: Collaborative Environment, Bin Directory, SVN
QUESTION:
What are the best practices for checking in BIN directories in a collaborative development environment using SVN? Should project level references be excluded from checkin? Is it easier to just add all bin directories? I develop a lot of DotNetNuke sites and it seems that in a multi-developer environment, it's always a huge task to get the environment setup correctly. The ultimate goal (of course) is to have a new developer checkout the trunk from SVN, restore the DNN database and have it all just 'work'...
ANSWER:
Any assemblies that are expected to be in the GAC should stay in the GAC. This includes System.web.dll or any other 3rd party dll that you'll deploy to the GAC in production. This means a new developer would have to install these assemblies. All other 3rd party assemblies should be references through a relative path. My typical structure is: -Project --Project.sln --References ---StructureMap.dll ---NUnit.dll ---System.Web.Mvc.dll --Project.Web ---Project.Web.Proj ---Project.Web.Proj files --Project ---Project.Proj ---Project.Proj files Project.Web and Project reference the assemblies in the root/References folder relatively. These.dlls are checked into subversion. Aside from that, */bin */bin/* obj should be in your global ignore path. With this setup, all references to assemblies are either through the GAC (so should work across all computers), or relative to each project within your solution. | [
"svn",
"collaboration"
] | 38 | 19 | 3,296 | 5 | 0 | 2008-08-01T23:29:32.853000 | 2008-08-01T23:40:28.463000 |
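The suggested global ignore entries `*/bin */bin/* obj` are simple globs. A quick Python check of what those patterns match (`fnmatch` is only an approximation of Subversion's matching, but close enough for these patterns):

```python
from fnmatch import fnmatch

IGNORE = ["*/bin", "*/bin/*", "obj"]

def ignored(path):
    # A path is ignored if any pattern from the global ignore list matches it
    return any(fnmatch(path, pat) for pat in IGNORE)

print(ignored("Project.Web/bin"))          # bin directory itself
print(ignored("Project.Web/bin/Foo.dll"))  # build output inside it
print(ignored("References/NUnit.dll"))     # checked-in 3rd party dll
```

The checked-in `References` folder from the answer's layout is untouched by these patterns, which is the point: third-party assemblies stay in version control while build output does not.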
289 | 298 | How do you sort a dictionary by value? | I often have to sort a dictionary (consisting of keys & values) by value. For example, I have a hash of words and respective frequencies that I want to order by frequency. There is a SortedList which is good for a single value (say frequency), that I want to map back to the word. SortedDictionary orders by key, not value. Some resort to a custom class, but is there a cleaner way? | Use: using System.Linq.Enumerable;... List<KeyValuePair<string, string>> myList = aDictionary.ToList();
myList.Sort( delegate(KeyValuePair<string, string> pair1, KeyValuePair<string, string> pair2) { return pair1.Value.CompareTo(pair2.Value); } ); Since you're targeting .NET 2.0 or above, you can simplify this into lambda syntax -- it's equivalent, but shorter. If you're targeting .NET 2.0 you can only use this syntax if you're using the compiler from Visual Studio 2008 (or above). var myList = aDictionary.ToList();
myList.Sort((pair1,pair2) => pair1.Value.CompareTo(pair2.Value)); | How do you sort a dictionary by value? I often have to sort a dictionary (consisting of keys & values) by value. For example, I have a hash of words and respective frequencies that I want to order by frequency. There is a SortedList which is good for a single value (say frequency), that I want to map back to the word. SortedDictionary orders by key, not value. Some resort to a custom class, but is there a cleaner way? | TITLE:
How do you sort a dictionary by value?
QUESTION:
I often have to sort a dictionary (consisting of keys & values) by value. For example, I have a hash of words and respective frequencies that I want to order by frequency. There is a SortedList which is good for a single value (say frequency), that I want to map back to the word. SortedDictionary orders by key, not value. Some resort to a custom class, but is there a cleaner way?
ANSWER:
Use: using System.Linq.Enumerable;... List<KeyValuePair<string, string>> myList = aDictionary.ToList();
myList.Sort( delegate(KeyValuePair<string, string> pair1, KeyValuePair<string, string> pair2) { return pair1.Value.CompareTo(pair2.Value); } ); Since you're targeting .NET 2.0 or above, you can simplify this into lambda syntax -- it's equivalent, but shorter. If you're targeting .NET 2.0 you can only use this syntax if you're using the compiler from Visual Studio 2008 (or above). var myList = aDictionary.ToList();
myList.Sort((pair1,pair2) => pair1.Value.CompareTo(pair2.Value)); | [
"c#",
".net",
"sorting",
"dictionary"
] | 955 | 572 | 825,359 | 21 | 0 | 2008-08-02T00:40:58.200000 | 2008-08-02T01:15:42.123000 |
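For comparison, the same word-frequency sort from the question is a one-liner in Python, with the same shape as the C# lambda version (sort the key/value pairs on the value):

```python
freq = {"the": 52, "sort": 7, "value": 31}

# Sort the (word, frequency) pairs by frequency, ascending --
# the direct analogue of myList.Sort((p1, p2) => p1.Value.CompareTo(p2.Value))
by_value = sorted(freq.items(), key=lambda pair: pair[1])
print(by_value)
```

Either way the result is a list of pairs, not a dictionary, since a plain hash has no inherent order to preserve.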
308 | 360 | Is there a version control system for database structure changes? | I often run into the following problem. I work on some changes to a project that require new tables or columns in the database. I make the database modifications and continue my work. Usually, I remember to write down the changes so that they can be replicated on the live system. However, I don't always remember what I've changed and I don't always remember to write it down. So, I make a push to the live system and get a big, obvious error that there is no NewColumnX, ugh. Regardless of the fact that this may not be the best practice for this situation, is there a version control system for databases? I don't care about the specific database technology. I just want to know if one exists. If it happens to work with MS SQL Server, then great. | In Ruby on Rails, there's a concept of a migration -- a quick script to change the database. You generate a migration file, which has rules to increase the db version (such as adding a column) and rules to downgrade the version (such as removing a column). Each migration is numbered, and a table keeps track of your current db version. To migrate up, you run a command called "db:migrate" which looks at your version and applies the needed scripts. You can migrate down in a similar way. The migration scripts themselves are kept in a version control system -- whenever you change the database you check in a new script, and any developer can apply it to bring their local db to the latest version. | Is there a version control system for database structure changes? I often run into the following problem. I work on some changes to a project that require new tables or columns in the database. I make the database modifications and continue my work. Usually, I remember to write down the changes so that they can be replicated on the live system. However, I don't always remember what I've changed and I don't always remember to write it down. 
So, I make a push to the live system and get a big, obvious error that there is no NewColumnX, ugh. Regardless of the fact that this may not be the best practice for this situation, is there a version control system for databases? I don't care about the specific database technology. I just want to know if one exists. If it happens to work with MS SQL Server, then great. | TITLE:
Is there a version control system for database structure changes?
QUESTION:
I often run into the following problem. I work on some changes to a project that require new tables or columns in the database. I make the database modifications and continue my work. Usually, I remember to write down the changes so that they can be replicated on the live system. However, I don't always remember what I've changed and I don't always remember to write it down. So, I make a push to the live system and get a big, obvious error that there is no NewColumnX, ugh. Regardless of the fact that this may not be the best practice for this situation, is there a version control system for databases? I don't care about the specific database technology. I just want to know if one exists. If it happens to work with MS SQL Server, then great.
ANSWER:
In Ruby on Rails, there's a concept of a migration -- a quick script to change the database. You generate a migration file, which has rules to increase the db version (such as adding a column) and rules to downgrade the version (such as removing a column). Each migration is numbered, and a table keeps track of your current db version. To migrate up, you run a command called "db:migrate" which looks at your version and applies the needed scripts. You can migrate down in a similar way. The migration scripts themselves are kept in a version control system -- whenever you change the database you check in a new script, and any developer can apply it to bring their local db to the latest version. | [
"sql",
"database",
"oracle",
"version-control"
] | 133 | 66 | 38,383 | 22 | 0 | 2008-08-02T01:52:54.653000 | 2008-08-02T06:23:33.737000 |
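The Rails-style scheme the answer describes -- numbered migration scripts plus a table tracking the current version -- fits in a few lines. A minimal Python/SQLite sketch (migration contents and table names are made up; real tools also handle the downgrade direction):

```python
import sqlite3

# Numbered migrations, checked into version control in the real workflow.
MIGRATIONS = {
    1: "CREATE TABLE words (word TEXT)",
    2: "ALTER TABLE words ADD COLUMN freq INTEGER",
}

def migrate(conn):
    """Apply every migration newer than the recorded version, in order."""
    conn.execute("CREATE TABLE IF NOT EXISTS schema_version (version INTEGER)")
    row = conn.execute("SELECT MAX(version) FROM schema_version").fetchone()
    current = row[0] or 0
    for version in sorted(v for v in MIGRATIONS if v > current):
        conn.execute(MIGRATIONS[version])
        conn.execute("INSERT INTO schema_version VALUES (?)", (version,))
    return max(MIGRATIONS, default=current)

conn = sqlite3.connect(":memory:")
migrate(conn)
cols = [c[1] for c in conn.execute("PRAGMA table_info(words)")]
print(cols)
```

Because applied versions are recorded, running the migrator again is a no-op, which is what lets every developer (and the live system) converge on the same schema from any starting version.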
328 | 7,488 | PHP Session Security | What are some guidelines for maintaining responsible session security with PHP? There's information all over the web and it's about time it all landed in one place! | There are a couple of things to do in order to keep your session secure: Use SSL when authenticating users or performing sensitive operations. Regenerate the session id whenever the security level changes (such as logging in). You can even regenerate the session id every request if you wish. Have sessions time out Don't use register globals Store authentication details on the server. That is, don't send details such as username in the cookie. Check the $_SERVER['HTTP_USER_AGENT']. This adds a small barrier to session hijacking. You can also check the IP address. But this causes problems for users that have changing IP address due to load balancing on multiple internet connections etc (which is the case in our environment here). Lock down access to the sessions on the file system or use custom session handling For sensitive operations consider requiring logged in users to provide their authenication details again | PHP Session Security What are some guidelines for maintaining responsible session security with PHP? There's information all over the web and it's about time it all landed in one place! | TITLE:
PHP Session Security
QUESTION:
What are some guidelines for maintaining responsible session security with PHP? There's information all over the web and it's about time it all landed in one place!
ANSWER:
There are a couple of things to do in order to keep your session secure: Use SSL when authenticating users or performing sensitive operations. Regenerate the session id whenever the security level changes (such as logging in). You can even regenerate the session id every request if you wish. Have sessions time out. Don't use register globals. Store authentication details on the server. That is, don't send details such as username in the cookie. Check the $_SERVER['HTTP_USER_AGENT']. This adds a small barrier to session hijacking. You can also check the IP address. But this causes problems for users that have changing IP address due to load balancing on multiple internet connections etc (which is the case in our environment here). Lock down access to the sessions on the file system or use custom session handling. For sensitive operations consider requiring logged in users to provide their authentication details again
"security",
"php"
] | 125 | 88 | 85,917 | 13 | 0 | 2008-08-02T02:41:34.493000 | 2008-08-11T02:38:06.737000 |
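Several of the points in this answer (regenerate the session id on a security-level change, keep auth details server-side) are language-agnostic. A minimal sketch of the regenerate-on-login idea in Python — the class, field names, and token length here are illustrative assumptions, not PHP's actual session machinery:

```python
import secrets

class SessionStore:
    """Toy in-memory session store illustrating id regeneration on login."""

    def __init__(self):
        self._sessions = {}

    def create(self):
        # A fresh, unguessable session id for an anonymous visitor.
        sid = secrets.token_hex(16)
        self._sessions[sid] = {}
        return sid

    def regenerate(self, old_sid):
        # Move the session data to a new id and invalidate the old one,
        # so a fixated or sniffed pre-login id is useless after login.
        data = self._sessions.pop(old_sid)
        new_sid = secrets.token_hex(16)
        self._sessions[new_sid] = data
        return new_sid

store = SessionStore()
sid = store.create()
store._sessions[sid]["user"] = "alice"  # security level changes: user logs in
sid2 = store.regenerate(sid)
print(sid != sid2, store._sessions[sid2]["user"])  # True alice
```

In PHP itself this is what `session_regenerate_id(true)` does: the `true` argument deletes the old session file as well as issuing a new id.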
330 | 332 | Should I use nested classes in this case? | I am working on a collection of classes used for video playback and recording. I have one main class which acts like the public interface, with methods like play(), stop(), pause(), record() etc... Then I have workhorse classes that do the video decoding and video encoding. I just learned about the existence of nested classes in C++, and I'm curious to know what programmers think about using them. I am a little wary and not really sure what the benefits/drawbacks are, but they seem (according to the book I'm reading) to be used in cases such as mine. The book suggests that in a scenario like mine, a good solution would be to nest the workhorse classes inside the interface class, so there are no separate files for classes the client is not meant to use and to avoid any possible naming conflicts. I don't know about these justifications. Nested classes are a new concept to me. Just want to see what programmers think about the issue. | I would be a bit reluctant to use nested classes here. What if you created an abstract base class for a "multimedia driver" to handle the back-end stuff (workhorse), and a separate class for the front-end work? The front-end class could take a pointer/reference to an implemented driver class (for the appropriate media type and situation) and perform the abstract operations on the workhorse structure. My philosophy would be to go ahead and make both structures accessible to the client in a polished way, just under the assumption they would be used in tandem. I would reference something like a QTextDocument in Qt. You provide a direct interface to the bare metal data handling, but pass the authority along to an object like a QTextEdit to do the manipulation. | Should I use nested classes in this case? I am working on a collection of classes used for video playback and recording. 
I have one main class which acts like the public interface, with methods like play(), stop(), pause(), record() etc... Then I have workhorse classes that do the video decoding and video encoding. I just learned about the existence of nested classes in C++, and I'm curious to know what programmers think about using them. I am a little wary and not really sure what the benefits/drawbacks are, but they seem (according to the book I'm reading) to be used in cases such as mine. The book suggests that in a scenario like mine, a good solution would be to nest the workhorse classes inside the interface class, so there are no separate files for classes the client is not meant to use and to avoid any possible naming conflicts. I don't know about these justifications. Nested classes are a new concept to me. Just want to see what programmers think about the issue. | TITLE:
Should I use nested classes in this case?
QUESTION:
I am working on a collection of classes used for video playback and recording. I have one main class which acts like the public interface, with methods like play(), stop(), pause(), record() etc... Then I have workhorse classes that do the video decoding and video encoding. I just learned about the existence of nested classes in C++, and I'm curious to know what programmers think about using them. I am a little wary and not really sure what the benefits/drawbacks are, but they seem (according to the book I'm reading) to be used in cases such as mine. The book suggests that in a scenario like mine, a good solution would be to nest the workhorse classes inside the interface class, so there are no separate files for classes the client is not meant to use and to avoid any possible naming conflicts. I don't know about these justifications. Nested classes are a new concept to me. Just want to see what programmers think about the issue.
ANSWER:
I would be a bit reluctant to use nested classes here. What if you created an abstract base class for a "multimedia driver" to handle the back-end stuff (workhorse), and a separate class for the front-end work? The front-end class could take a pointer/reference to an implemented driver class (for the appropriate media type and situation) and perform the abstract operations on the workhorse structure. My philosophy would be to go ahead and make both structures accessible to the client in a polished way, just under the assumption they would be used in tandem. I would reference something like a QTextDocument in Qt. You provide a direct interface to the bare metal data handling, but pass the authority along to an object like a QTextEdit to do the manipulation. | [
"c++",
"class",
"oop",
"inner-classes"
] | 58 | 30 | 5,019 | 10 | 0 | 2008-08-02T02:51:36.470000 | 2008-08-02T03:00:24.613000 |
336 | 339 | When to use unsigned values over signed ones? | When is it appropriate to use an unsigned variable over a signed one? What about in a for loop? I hear a lot of opinions about this and I wanted to see if there was anything resembling a consensus. for (unsigned int i = 0; i < someThing.length(); i++) { SomeThing var = someThing.at(i); // You get the idea. } I know Java doesn't have unsigned values, and that must have been a conscious decision on Sun Microsystems' part. | I was glad to find a good conversation on this subject, as I hadn't really given it much thought before. In summary, signed is a good general choice - even when you're dead sure all the numbers are positive - if you're going to do arithmetic on the variable (like in a typical for loop case). unsigned starts to make more sense when: You're going to do bitwise things like masks, or You're desperate to take advantage of the sign bit for that extra positive range. Personally, I like signed because I don't trust myself to stay consistent and avoid mixing the two types (like the article warns against). | When to use unsigned values over signed ones? When is it appropriate to use an unsigned variable over a signed one? What about in a for loop? I hear a lot of opinions about this and I wanted to see if there was anything resembling a consensus. for (unsigned int i = 0; i < someThing.length(); i++) { SomeThing var = someThing.at(i); // You get the idea. } I know Java doesn't have unsigned values, and that must have been a conscious decision on Sun Microsystems' part. | TITLE:
When to use unsigned values over signed ones?
QUESTION:
When is it appropriate to use an unsigned variable over a signed one? What about in a for loop? I hear a lot of opinions about this and I wanted to see if there was anything resembling a consensus. for (unsigned int i = 0; i < someThing.length(); i++) { SomeThing var = someThing.at(i); // You get the idea. } I know Java doesn't have unsigned values, and that must have been a conscious decision on Sun Microsystems' part.
ANSWER:
I was glad to find a good conversation on this subject, as I hadn't really given it much thought before. In summary, signed is a good general choice - even when you're dead sure all the numbers are positive - if you're going to do arithmetic on the variable (like in a typical for loop case). unsigned starts to make more sense when: You're going to do bitwise things like masks, or You're desperate to take advantage of the sign bit for that extra positive range. Personally, I like signed because I don't trust myself to stay consistent and avoid mixing the two types (like the article warns against). | [
"language-agnostic",
"types"
] | 93 | 82 | 29,448 | 5 | 0 | 2008-08-02T03:34:44.763000 | 2008-08-02T03:49:21.987000 |
337 | 342 | XML Processing in Python | I am about to build a piece of a project that will need to construct and post an XML document to a web service and I'd like to do it in Python, as a means to expand my skills in it. Unfortunately, whilst I know the XML model fairly well in .NET, I'm uncertain what the pros and cons are of the XML models in Python. Anyone have experience doing XML processing in Python? Where would you suggest I start? The XML files I'll be building will be fairly simple. | Personally, I've played with several of the built-in options on an XML-heavy project and have settled on pulldom as the best choice for less complex documents. Especially for small simple stuff, I like the event-driven theory of parsing rather than setting up a whole slew of callbacks for a relatively simple structure. Here is a good quick discussion of how to use the API. What I like: you can handle the parsing in a for loop rather than using callbacks. You also delay full parsing (the "pull" part) and only get additional detail when you call expandNode(). This satisfies my general requirement for "responsible" efficiency without sacrificing ease of use and simplicity. | XML Processing in Python I am about to build a piece of a project that will need to construct and post an XML document to a web service and I'd like to do it in Python, as a means to expand my skills in it. Unfortunately, whilst I know the XML model fairly well in .NET, I'm uncertain what the pros and cons are of the XML models in Python. Anyone have experience doing XML processing in Python? Where would you suggest I start? The XML files I'll be building will be fairly simple. | TITLE:
XML Processing in Python
QUESTION:
I am about to build a piece of a project that will need to construct and post an XML document to a web service and I'd like to do it in Python, as a means to expand my skills in it. Unfortunately, whilst I know the XML model fairly well in .NET, I'm uncertain what the pros and cons are of the XML models in Python. Anyone have experience doing XML processing in Python? Where would you suggest I start? The XML files I'll be building will be fairly simple.
ANSWER:
Personally, I've played with several of the built-in options on an XML-heavy project and have settled on pulldom as the best choice for less complex documents. Especially for small simple stuff, I like the event-driven theory of parsing rather than setting up a whole slew of callbacks for a relatively simple structure. Here is a good quick discussion of how to use the API. What I like: you can handle the parsing in a for loop rather than using callbacks. You also delay full parsing (the "pull" part) and only get additional detail when you call expandNode(). This satisfies my general requirement for "responsible" efficiency without sacrificing ease of use and simplicity. | [
"python",
"xml"
] | 82 | 36 | 10,595 | 12 | 0 | 2008-08-02T03:35:55.697000 | 2008-08-02T04:01:34.600000 |
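The for-loop-plus-expandNode() style this answer praises looks roughly like the sketch below; the document content and tag names are invented for the example:

```python
from io import StringIO
from xml.dom import pulldom

xml = "<items><item id='1'>alpha</item><item id='2'>beta</item></items>"

names = []
events = pulldom.parse(StringIO(xml))
for event, node in events:
    if event == pulldom.START_ELEMENT and node.tagName == "item":
        # Only now is this element's subtree fully parsed ("pulled in").
        events.expandNode(node)
        names.append(node.firstChild.data)

print(names)  # ['alpha', 'beta']
```

The stream stays shallow until expandNode() is called, which is exactly the deferred-parsing behaviour described above; for building (rather than reading) the simple documents the question mentions, xml.etree.ElementTree is the other obvious standard-library choice.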
361 | 362 | Generate list of all possible permutations of a string | How would I go about generating a list of all possible permutations of a string between x and y characters in length, containing a variable list of characters. Any language would work, but it should be portable. | There are several ways to do this. Common methods use recursion, memoization, or dynamic programming. The basic idea is that you produce a list of all strings of length 1, then in each iteration, for all strings produced in the last iteration, add that string concatenated with each character in the string individually. (the variable index in the code below keeps track of the start of the last and the next iteration) Some pseudocode: list = originalString.split('') index = (0,0) list = [""] for iteration n in 1 to y: index = (index[1], len(list)) for string s in list.subset(index[0] to end): for character c in originalString: list.add(s + c) you'd then need to remove all strings less than x in length, they'll be the first (x-1) * len(originalString) entries in the list. | Generate list of all possible permutations of a string How would I go about generating a list of all possible permutations of a string between x and y characters in length, containing a variable list of characters. Any language would work, but it should be portable. | TITLE:
Generate list of all possible permutations of a string
QUESTION:
How would I go about generating a list of all possible permutations of a string between x and y characters in length, containing a variable list of characters. Any language would work, but it should be portable.
ANSWER:
There are several ways to do this. Common methods use recursion, memoization, or dynamic programming. The basic idea is that you produce a list of all strings of length 1, then in each iteration, for all strings produced in the last iteration, add that string concatenated with each character in the string individually. (the variable index in the code below keeps track of the start of the last and the next iteration) Some pseudocode: list = originalString.split('') index = (0,0) list = [""] for iteration n in 1 to y: index = (index[1], len(list)) for string s in list.subset(index[0] to end): for character c in originalString: list.add(s + c) you'd then need to remove all strings less than x in length, they'll be the first (x-1) * len(originalString) entries in the list. | [
"string",
"language-agnostic",
"cross-platform",
"permutation",
"combinatorics"
] | 166 | 69 | 214,067 | 36 | 0 | 2008-08-02T06:57:57.957000 | 2008-08-02T07:48:07.607000 |
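Since the pseudocode in this answer appends every character of the alphabet at each step, it actually enumerates all strings with repetition (the cartesian product) rather than permutations in the strict sense. In Python that is exactly itertools.product; a sketch, with a toy alphabet and the function name being my own:

```python
from itertools import product

def strings_between(chars, x, y):
    """All strings over `chars` (repetition allowed) of length x through y."""
    result = []
    for n in range(x, y + 1):
        # product(chars, repeat=n) yields every length-n tuple of characters.
        result.extend("".join(p) for p in product(chars, repeat=n))
    return result

print(strings_between("ab", 1, 2))  # ['a', 'b', 'aa', 'ab', 'ba', 'bb']
```

Generating only lengths x through y also sidesteps the answer's final trimming step, since nothing shorter than x is ever produced. For true permutations (each character used at most once), itertools.permutations is the drop-in replacement.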
371 | 396 | How do you make sure email you send programmatically is not automatically marked as spam? | This is a tricky one and I've always relied on techniques, such as permission-based emails (i.e. only sending to people you have permission to send to) and not using blatantly spamish terminology. Of late, some of the emails I send out programmatically have started being shuffled into people's spam folder automatically and I'm wondering what I can do about it. This is despite the fact that these particular emails are not ones that humans would mark as spam, specifically, they are emails that contain license keys that people have paid good money for, so I don't think they're going to consider them spam I figure this is a big topic in which I am essentially an ignorant simpleton. | Use email authentication methods, such as SPF, and DKIM to prove that your emails and your domain name belong together, and to prevent spoofing of your domain name. The SPF website includes a wizard to generate the DNS information for your site. Check your reverse DNS to make sure the IP address of your mail server points to the domain name that you use for sending mail. Make sure that the IP-address that you're using is not on a blacklist Make sure that the reply-to address is a valid, existing address. Use the full, real name of the addressee in the To field, not just the email-address (e.g. "John Smith" ). Monitor your abuse accounts, such as abuse@yourdomain.example and postmaster@yourdomain.example. That means - make sure that these accounts exist, read what's sent to them, and act on complaints. Finally, make it really easy to unsubscribe. Otherwise, your users will unsubscribe by pressing the spam button, and that will affect your reputation. That said, getting Hotmail to accept your emails remains a black art. | How do you make sure email you send programmatically is not automatically marked as spam? 
This is a tricky one and I've always relied on techniques, such as permission-based emails (i.e. only sending to people you have permission to send to) and not using blatantly spamish terminology. Of late, some of the emails I send out programmatically have started being shuffled into people's spam folder automatically and I'm wondering what I can do about it. This is despite the fact that these particular emails are not ones that humans would mark as spam, specifically, they are emails that contain license keys that people have paid good money for, so I don't think they're going to consider them spam I figure this is a big topic in which I am essentially an ignorant simpleton. | TITLE:
How do you make sure email you send programmatically is not automatically marked as spam?
QUESTION:
This is a tricky one and I've always relied on techniques, such as permission-based emails (i.e. only sending to people you have permission to send to) and not using blatantly spamish terminology. Of late, some of the emails I send out programmatically have started being shuffled into people's spam folder automatically and I'm wondering what I can do about it. This is despite the fact that these particular emails are not ones that humans would mark as spam, specifically, they are emails that contain license keys that people have paid good money for, so I don't think they're going to consider them spam I figure this is a big topic in which I am essentially an ignorant simpleton.
ANSWER:
Use email authentication methods, such as SPF, and DKIM to prove that your emails and your domain name belong together, and to prevent spoofing of your domain name. The SPF website includes a wizard to generate the DNS information for your site. Check your reverse DNS to make sure the IP address of your mail server points to the domain name that you use for sending mail. Make sure that the IP-address that you're using is not on a blacklist Make sure that the reply-to address is a valid, existing address. Use the full, real name of the addressee in the To field, not just the email-address (e.g. "John Smith" ). Monitor your abuse accounts, such as abuse@yourdomain.example and postmaster@yourdomain.example. That means - make sure that these accounts exist, read what's sent to them, and act on complaints. Finally, make it really easy to unsubscribe. Otherwise, your users will unsubscribe by pressing the spam button, and that will affect your reputation. That said, getting Hotmail to accept your emails remains a black art. | [
"email",
"email-spam"
] | 464 | 354 | 119,005 | 24 | 0 | 2008-08-02T08:19:18.260000 | 2008-08-02T10:21:54.300000 |
382 | 384 | What is the meaning of the type safety warning in certain Java generics casts? | What is the meaning of the Java warning? Type safety: The cast from Object to List is actually checking against the erased type List I get this warning when I try to cast an Object to a type with generic information, such as in the following code: Object object = getMyList(); List list = (List ) object; | This warning is there because Java is not actually storing type information at run-time in an object that uses generics. Thus, if object is actually a List, there will be no ClassCastException at run-time until an item is accessed from the list that doesn't match the generic type defined in the variable. This can cause further complications if items are added to the list, with this incorrect generic type information. Any code still holding a reference to the list but with the correct generic type information will now have an inconsistent list. To remove the warning, try: List list = (List ) object; However, note that you will not be able to use certain methods such as add because the compiler doesn't know if you are trying to add an object of incorrect type. The above will work in a lot of situations, but if you have to use add, or some similarly restricted method, you will just have to suffer the yellow underline in Eclipse (or a @SuppressWarnings annotation). | What is the meaning of the type safety warning in certain Java generics casts? What is the meaning of the Java warning? Type safety: The cast from Object to List is actually checking against the erased type List I get this warning when I try to cast an Object to a type with generic information, such as in the following code: Object object = getMyList(); List list = (List ) object; | TITLE:
What is the meaning of the type safety warning in certain Java generics casts?
QUESTION:
What is the meaning of the Java warning? Type safety: The cast from Object to List is actually checking against the erased type List I get this warning when I try to cast an Object to a type with generic information, such as in the following code: Object object = getMyList(); List list = (List ) object;
ANSWER:
This warning is there because Java is not actually storing type information at run-time in an object that uses generics. Thus, if object is actually a List, there will be no ClassCastException at run-time until an item is accessed from the list that doesn't match the generic type defined in the variable. This can cause further complications if items are added to the list, with this incorrect generic type information. Any code still holding a reference to the list but with the correct generic type information will now have an inconsistent list. To remove the warning, try: List list = (List ) object; However, note that you will not be able to use certain methods such as add because the compiler doesn't know if you are trying to add an object of incorrect type. The above will work in a lot of situations, but if you have to use add, or some similarly restricted method, you will just have to suffer the yellow underline in Eclipse (or a @SuppressWarnings annotation). | [
"java",
"generics",
"warnings",
"casting",
"type-safety"
] | 81 | 53 | 10,291 | 1 | 0 | 2008-08-02T08:58:27.540000 | 2008-08-02T08:58:48.430000 |
402 | 2,530,953 | iPhone app in landscape mode, 2008 systems | Please note that this question is from 2008 and now is of only historic interest. What's the best way to create an iPhone application that runs in landscape mode from the start, regardless of the position of the device? Both programmatically and using the Interface Builder. | Historic answer only. Spectacularly out of date. Please note that this answer is now hugely out of date. This answer is only a historical curiosity. Exciting news! As discovered by Andrew below, this problem has been fixed by Apple in 4.0+. It would appear it is NO longer necessary to force the size of the view on every view, and the specific serious problem of landscape "only working the first time" has been resolved. As of April 2011, it is not possible to test or even build anything below 4.0, so the question is purely a historic curiosity. It's incredible how much trouble it caused developers for so long! Here is the original discussion and solution. This is utterly irrelevant now, as these systems are not even operable. It is EXTREMELY DIFFICULT to make this work fully -- there are at least three problems/bugs at play. try this.. interface builder landscape design Note in particular that where it says "and you need to use shouldAutorotateToInterfaceOrientation properly everywhere" it means everywhere, all your fullscreen views. Hope it helps in this nightmare! An important reminder of the ADDITIONAL well-known problem at hand here: if you are trying to swap between MORE THAN ONE view (all landscape), IT SIMPLY DOES NOT WORK. It is essential to remember this or you will waste days on the problem. It is literally NOT POSSIBLE. It is the biggest open, known, bug on the iOS platform. There is literally no way to make the hardware make the second view you load, be landscape.
The annoying but simple workaround, and what you must do, is have a trivial master UIViewController that does nothing but sit there and let you swap between your views. In other words, in iOS because of a major known bug: [window addSubview:happyThing.view]; [window makeKeyAndVisible]; You can do that only once. Later, if you try to remove happyThing.view, and instead put in there newThing.view, IT DOES NOT WORK - AND THAT'S THAT. The machine will never rotate the view to landscape. There is no trick fix, even Apple cannot make it work. The workaround you must adopt is having an overall UIViewController that simply sits there and just holds your various views (happyThing, newThing, etc). Hope it helps! | iPhone app in landscape mode, 2008 systems Please note that this question is from 2008 and now is of only historic interest. What's the best way to create an iPhone application that runs in landscape mode from the start, regardless of the position of the device? Both programmatically and using the Interface Builder. | TITLE:
iPhone app in landscape mode, 2008 systems
QUESTION:
Please note that this question is from 2008 and now is of only historic interest. What's the best way to create an iPhone application that runs in landscape mode from the start, regardless of the position of the device? Both programmatically and using the Interface Builder.
ANSWER:
Historic answer only. Spectacularly out of date. Please note that this answer is now hugely out of date. This answer is only a historical curiosity. Exciting news! As discovered by Andrew below, this problem has been fixed by Apple in 4.0+. It would appear it is NO longer necessary to force the size of the view on every view, and the specific serious problem of landscape "only working the first time" has been resolved. As of April 2011, it is not possible to test or even build anything below 4.0, so the question is purely a historic curiosity. It's incredible how much trouble it caused developers for so long! Here is the original discussion and solution. This is utterly irrelevant now, as these systems are not even operable. It is EXTREMELY DIFFICULT to make this work fully -- there are at least three problems/bugs at play. try this.. interface builder landscape design Note in particular that where it says "and you need to use shouldAutorotateToInterfaceOrientation properly everywhere" it means everywhere, all your fullscreen views. Hope it helps in this nightmare! An important reminder of the ADDITIONAL well-known problem at hand here: if you are trying to swap between MORE THAN ONE view (all landscape), IT SIMPLY DOES NOT WORK. It is essential to remember this or you will waste days on the problem. It is literally NOT POSSIBLE. It is the biggest open, known, bug on the iOS platform. There is literally no way to make the hardware make the second view you load, be landscape. In other words, in iOS because of a major known bug: [window addSubview:happyThing.view]; [window makeKeyAndVisible]; You can do that only once. Later, if you try to remove happyThing.view, and instead put in there newThing.view, IT DOES NOT WORK - AND THAT'S THAT. The machine will never rotate the view to landscape.
There is no trick fix, even Apple cannot make it work. The workaround you must adopt is having an overall UIViewController that simply sits there and just holds your various views (happyThing, newThing, etc). Hope it helps! | [
"ios",
"objective-c",
"landscape"
] | 100 | 47 | 67,593 | 8 | 0 | 2008-08-02T10:47:08.460000 | 2010-03-27T21:13:17.970000 |
419 | 17,396 | Unload a COM control when working in VB6 IDE | Part of my everyday work is maintaining and extending legacy VB6 applications. A common engine is written in C/C++ and VB6 uses these functions in order to improve performance. When it comes to asynchronous programming, a C interface is not enough and we rely on COM controls to fire events to VB6. My problem is that when I register the control in VB6, VB loads this control in memory and does not unload it until I quit the VB6 IDE. As the control is loaded the whole time, I am unable to recompile it in VC6, because the DLL file is locked. A solution I found is not to enable the control in VB but use the CreateObject() with the full name of my control. The problem then is that I must declare my control as an Object because VB6 knows nothing of the interface I am using and I do not have access to IntelliSense, which is a pain. Any idea how I can tell VB6 to unload controls after quitting the application or directly in the IDE? | I'm pretty sure there's no good way to force VB6 to unload the control. Here's what I do... instead of running Visual C and Visual Basic side-by-side, run VB6 under VC: Load up VC Open the project containing your COM objects Edit, change, etc. In VC, set the Output Executable to be VB6.EXE with appropriate command-line arguments to load the VB6 workspace Now just hit F5 to launch the VB6 IDE and load your VB6 project When you want to change the COM code again, exit VB6.EXE, make your changes, and hit F5 again. As long as you save your workspace VB6 will remember what windows you had open and all your project settings. Advantages of this method: You can set breakpoints in the COM object and debug it using a full source debugger You can happily debug in C and VB at the same time Whenever VB6 is running it always has the latest version of the COM DLLs | Unload a COM control when working in VB6 IDE Part of my everyday work is maintaining and extending legacy VB6 applications. 
A common engine is written in C/C++ and VB6 uses these functions in order to improve performance. When it comes to asynchronous programming, a C interface is not enough and we rely on COM controls to fire events to VB6. My problem is that when I register the control in VB6, VB loads this control in memory and does not unload it until I quit the VB6 IDE. As the control is loaded the whole time, I am unable to recompile it in VC6, because the DLL file is locked. A solution I found is not to enable the control in VB but use the CreateObject() with the full name of my control. The problem then is that I must declare my control as an Object because VB6 knows nothing of the interface I am using and I do not have access to IntelliSense, which is a pain. Any idea how I can tell VB6 to unload controls after quitting the application or directly in the IDE? | TITLE:
Unload a COM control when working in VB6 IDE
QUESTION:
Part of my everyday work is maintaining and extending legacy VB6 applications. A common engine is written in C/C++ and VB6 uses these functions in order to improve performance. When it comes to asynchronous programming, a C interface is not enough and we rely on COM controls to fire events to VB6. My problem is that when I register the control in VB6, VB loads this control in memory and does not unload it until I quit the VB6 IDE. As the control is loaded the whole time, I am unable to recompile it in VC6, because the DLL file is locked. A solution I found is not to enable the control in VB but use the CreateObject() with the full name of my control. The problem then is that I must declare my control as an Object because VB6 knows nothing of the interface I am using and I do not have access to IntelliSense, which is a pain. Any idea how I can tell VB6 to unload controls after quitting the application or directly in the IDE?
ANSWER:
I'm pretty sure there's no good way to force VB6 to unload the control. Here's what I do... instead of running Visual C and Visual Basic side-by-side, run VB6 under VC: Load up VC Open the project containing your COM objects Edit, change, etc. In VC, set the Output Executable to be VB6.EXE with appropriate command-line arguments to load the VB6 workspace Now just hit F5 to launch the VB6 IDE and load your VB6 project When you want to change the COM code again, exit VB6.EXE, make your changes, and hit F5 again. As long as you save your workspace VB6 will remember what windows you had open and all your project settings. Advantages of this method: You can set breakpoints in the COM object and debug it using a full source debugger You can happily debug in C and VB at the same time Whenever VB6 is running it always has the latest version of the COM DLLs | [
"com",
"vb6"
] | 37 | 27 | 3,067 | 1 | 0 | 2008-08-02T11:52:01.543000 | 2008-08-20T03:58:09.347000 |
438 | 77,397 | Implementation of "Remember me" in a Rails application | My Rails-app has a sign in box with a "remember me" checkbox. Users who check that box should remain logged in even after closing their browser. I'm keeping track of whether users are logged in by storing their id in the user's session. But sessions are implemented in Rails as session cookies, which are not persistent. I can make them persistent: class ApplicationController < ActionController::Base before_filter :update_session_expiration_date
private
def update_session_expiration_date options = ActionController::Base.session_options unless options[:session_expires] options[:session_expires] = 1.year.from_now end end But that seems like a hack, which is surprising for such common functionality. Is there any better way? Edit Gareth's answer is pretty good, but I would still like an answer from someone familiar with Rails 2 (because of its unique CookieSessionStore ). | I have spent a while thinking about this and came to some conclusions. Rails session cookies are tamper-proof by default, so you really don't have to worry about a cookie being modified on the client end. Here is what I've done: Session cookie is set to be long-lived (6 months or so) Inside the session store An 'expires on' date that is set to login + 24 hours user id Authenticated = true so I can allow for anonymous user sessions (not dangerous because of the cookie tamper protection) I add a before_filter in the Application Controller that checks the 'expires on' part of the session. When the user checks the "Remember Me" box, I just set the session[:expireson] date to be login + 2 weeks. No one can steal the cookie and stay logged in forever or masquerade as another user because the rails session cookie is tamper-proof. | Implementation of "Remember me" in a Rails application My Rails-app has a sign in box with a "remember me" checkbox. Users who check that box should remain logged in even after closing their browser. I'm keeping track of whether users are logged in by storing their id in the user's session. But sessions are implemented in Rails as session cookies, which are not persistent. I can make them persistent: class ApplicationController < ActionController::Base before_filter :update_session_expiration_date
private
def update_session_expiration_date options = ActionController::Base.session_options unless options[:session_expires] options[:session_expires] = 1.year.from_now end end end But that seems like a hack, which is surprising for such common functionality. Is there any better way? Edit Gareth's answer is pretty good, but I would still like an answer from someone familiar with Rails 2 (because of it's unique CookieSessionStore ). | TITLE:
Implementation of "Remember me" in a Rails application
QUESTION:
My Rails-app has a sign in box with a "remember me" checkbox. Users who check that box should remain logged in even after closing their browser. I'm keeping track of whether users are logged in by storing their id in the user's session. But sessions are implemented in Rails as session cookies, which are not persistent. I can make them persistent: class ApplicationController < ActionController::Base before_filter :update_session_expiration_date
private
def update_session_expiration_date options = ActionController::Base.session_options unless options[:session_expires] options[:session_expires] = 1.year.from_now end end end But that seems like a hack, which is surprising for such common functionality. Is there any better way? Edit Gareth's answer is pretty good, but I would still like an answer from someone familiar with Rails 2 (because of its unique CookieSessionStore).
ANSWER:
I have spent a while thinking about this and came to some conclusions. Rails session cookies are tamper-proof by default, so you really don't have to worry about a cookie being modified on the client end. Here is what I've done: the session cookie is set to be long-lived (6 months or so); inside the session store I keep an 'expires on' date that is set to login + 24 hours, the user id, and Authenticated = true so I can allow for anonymous user sessions (not dangerous because of the cookie tamper protection). I add a before_filter in the Application Controller that checks the 'expires on' part of the session. When the user checks the "Remember Me" box, I just set the session[:expireson] date to be login + 2 weeks. No one can steal the cookie and stay logged in forever or masquerade as another user because the Rails session cookie is tamper-proof. | [
"ruby-on-rails",
"ruby",
"http"
] | 57 | 13 | 12,127 | 7 | 0 | 2008-08-02T12:56:58.590000 | 2008-09-16T21:37:36.440000 |
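A language-agnostic sketch of the accepted answer's core idea, a tamper-evident value that carries its own 'expires on' date, can be written with nothing but an HMAC (Python here for illustration; the secret key, field layout and lifetimes are placeholders, not Rails' actual CookieSessionStore format):

```python
import hashlib
import hmac
import time

SECRET_KEY = b"change-me"  # hypothetical server-side secret, never sent to clients

def make_remember_token(user_id: int, lifetime_seconds: int) -> str:
    """Build a tamper-evident 'remember me' value: payload plus HMAC signature."""
    expires_on = int(time.time()) + lifetime_seconds
    payload = f"{user_id}:{expires_on}"
    sig = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}:{sig}"

def read_remember_token(token: str):
    """Return the user id if the signature is valid and not expired, else None."""
    try:
        user_id, expires_on, sig = token.rsplit(":", 2)
    except ValueError:
        return None
    payload = f"{user_id}:{expires_on}"
    expected = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None  # payload was tampered with
    if int(expires_on) < time.time():
        return None  # expired
    return int(user_id)
```

Rails' cookie session store performs essentially this signing step over the whole session hash, which is why the answer can safely trust an expiry date held by the client.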
469 | 3,040 | How can I find the full path to a font from its display name on a Mac? | I am using the Photoshop's javascript API to find the fonts in a given PSD. Given a font name returned by the API, I want to find the actual physical font file that font name corresponds to on the disc. This is all happening in a python program running on OSX so I guess I'm looking for one of: Some Photoshop javascript A Python function An OSX API that I can call from python | Unfortunately the only API that isn't deprecated is located in the ApplicationServices framework, which doesn't have a bridge support file, and thus isn't available in the bridge. If you're wanting to use ctypes, you can use ATSFontGetFileReference after looking up the ATSFontRef. Cocoa doesn't have any native support, at least as of 10.5, for getting the location of a font. | How can I find the full path to a font from its display name on a Mac? I am using the Photoshop's javascript API to find the fonts in a given PSD. Given a font name returned by the API, I want to find the actual physical font file that font name corresponds to on the disc. This is all happening in a python program running on OSX so I guess I'm looking for one of: Some Photoshop javascript A Python function An OSX API that I can call from python | TITLE:
How can I find the full path to a font from its display name on a Mac?
QUESTION:
I am using Photoshop's JavaScript API to find the fonts in a given PSD. Given a font name returned by the API, I want to find the actual physical font file that font name corresponds to on disk. This is all happening in a Python program running on OS X, so I guess I'm looking for one of: some Photoshop JavaScript, a Python function, or an OS X API that I can call from Python.
ANSWER:
Unfortunately the only API that isn't deprecated is located in the ApplicationServices framework, which doesn't have a bridge support file, and thus isn't available in the bridge. If you want to use ctypes, you can use ATSFontGetFileReference after looking up the ATSFontRef. Cocoa doesn't have any native support, at least as of 10.5, for getting the location of a font. | [
"python",
"macos",
"fonts",
"photoshop"
] | 48 | 22 | 5,094 | 6 | 0 | 2008-08-02T15:11:16.430000 | 2008-08-06T03:01:23.890000 |
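Since the ATSFontGetFileReference route requires ctypes against a deprecated framework, one crude, purely illustrative fallback is to scan the usual font directories for a file whose name matches the display name. This is a sketch under the assumption that the file name resembles the display name, which often fails in practice (that mismatch is exactly why the ATS/Core Text APIs exist):

```python
import os

# Typical macOS font locations; adjust per platform.
FONT_DIRS = ["/Library/Fonts", "/System/Library/Fonts",
             os.path.expanduser("~/Library/Fonts")]

def find_font_file(display_name: str, search_dirs=FONT_DIRS):
    """Best-effort lookup: return the first font file whose base name
    matches the display name, comparing case- and space-insensitively."""
    wanted = display_name.lower().replace(" ", "")
    for root_dir in search_dirs:
        if not os.path.isdir(root_dir):
            continue
        for dirpath, _dirnames, filenames in os.walk(root_dir):
            for name in filenames:
                base, ext = os.path.splitext(name)
                if ext.lower() in (".ttf", ".otf", ".ttc", ".dfont") \
                        and base.lower().replace(" ", "") == wanted:
                    return os.path.join(dirpath, name)
    return None
```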
470 | 473 | Homegrown consumption of web services | I've been writing a few web services for a.net app, now I'm ready to consume them. I've seen numerous examples where there is homegrown code for consuming the service as opposed to using the auto generated methods that Visual Studio creates when adding the web reference. Is there some advantages to this? | No, what you're doing is fine. Don't let those people confuse you. If you've written the web services with.net then the reference proxies generated by.net are going to be quite suitable. The situation you describe (where you are both producer and consumer) is the ideal situation. If you need to connect to a web services that is unknown at compile time, then you would want a more dynamic approach, where you deduce the 'shape' of the web service. But start by using the auto generated proxy class, and don't worry about it until you hit a limitation. And when you do -- come back to stack overflow;-) | Homegrown consumption of web services I've been writing a few web services for a.net app, now I'm ready to consume them. I've seen numerous examples where there is homegrown code for consuming the service as opposed to using the auto generated methods that Visual Studio creates when adding the web reference. Is there some advantages to this? | TITLE:
Homegrown consumption of web services
QUESTION:
I've been writing a few web services for a .NET app, and now I'm ready to consume them. I've seen numerous examples where there is homegrown code for consuming the service as opposed to using the auto-generated methods that Visual Studio creates when adding the web reference. Are there advantages to this?
ANSWER:
No, what you're doing is fine. Don't let those people confuse you. If you've written the web services with .NET, then the reference proxies generated by .NET are going to be quite suitable. The situation you describe (where you are both producer and consumer) is the ideal situation. If you need to connect to a web service that is unknown at compile time, then you would want a more dynamic approach, where you deduce the 'shape' of the web service. But start by using the auto-generated proxy class, and don't worry about it until you hit a limitation. And when you do -- come back to Stack Overflow ;-) | [
".net",
"web-services"
] | 24 | 11 | 789 | 1 | 0 | 2008-08-02T15:11:47.523000 | 2008-08-02T15:33:13.390000 |
482 | 509 | WinForms ComboBox data binding gotcha | Assume you are doing something like the following: List<string> myitems = new List<string> { "Item 1", "Item 2", "Item 3" };
ComboBox box = new ComboBox(); box.DataSource = myitems;
ComboBox box2 = new ComboBox(); box2.DataSource = myitems So now we have 2 combo boxes bound to that array, and everything works fine. But when you change the value of one combo box, it changes BOTH combo boxes to the one you just selected. Now, I know that Arrays are always passed by reference (learned that when i learned C:D), but why on earth would the combo boxes change together? I don't believe the combo box control is modifying the collection at all. As a workaround, don't this would achieve the functionality that is expected/desired ComboBox box = new ComboBox(); box.DataSource = myitems.ToArray(); | This has to do with how data bindings are set up in the dotnet framework, especially the BindingContext. On a high level it means that if you haven't specified otherwise each form and all the controls of the form share the same BindingContext. When you are setting the DataSource property the ComboBox will use the BindingContext to get a ConcurrenyMangager that wraps the list. The ConcurrenyManager keeps track of such things as the current selected position in the list. When you set the DataSource of the second ComboBox it will use the same BindingContext (the forms) which will yield a reference to the same ConcurrencyManager as above used to set up the data bindings. To get a more detailed explanation see BindingContext. | WinForms ComboBox data binding gotcha Assume you are doing something like the following List myitems = new List { "Item 1", "Item 2", "Item 3" };
ComboBox box = new ComboBox(); box.DataSource = myitems;
ComboBox box2 = new ComboBox(); box2.DataSource = myitems So now we have 2 combo boxes bound to that array, and everything works fine. But when you change the value of one combo box, it changes BOTH combo boxes to the one you just selected. Now, I know that Arrays are always passed by reference (learned that when i learned C:D), but why on earth would the combo boxes change together? I don't believe the combo box control is modifying the collection at all. As a workaround, don't this would achieve the functionality that is expected/desired ComboBox box = new ComboBox(); box.DataSource = myitems.ToArray(); | TITLE:
WinForms ComboBox data binding gotcha
QUESTION:
Assume you are doing something like the following: List<string> myitems = new List<string> { "Item 1", "Item 2", "Item 3" };
ComboBox box = new ComboBox(); box.DataSource = myitems;
ComboBox box2 = new ComboBox(); box2.DataSource = myitems; So now we have 2 combo boxes bound to that array, and everything works fine. But when you change the value of one combo box, it changes BOTH combo boxes to the one you just selected. Now, I know that arrays are always passed by reference (learned that when I learned C :D), but why on earth would the combo boxes change together? I don't believe the combo box control is modifying the collection at all. As a workaround, wouldn't this achieve the functionality that is expected/desired? ComboBox box = new ComboBox(); box.DataSource = myitems.ToArray();
ANSWER:
This has to do with how data bindings are set up in the .NET Framework, especially the BindingContext. At a high level it means that, if you haven't specified otherwise, each form and all the controls of the form share the same BindingContext. When you are setting the DataSource property, the ComboBox will use the BindingContext to get a CurrencyManager that wraps the list. The CurrencyManager keeps track of such things as the currently selected position in the list. When you set the DataSource of the second ComboBox, it will use the same BindingContext (the form's), which will yield a reference to the same CurrencyManager that was used to set up the first binding. To get a more detailed explanation, see BindingContext. | [
"c#",
"winforms",
"data-binding"
] | 57 | 39 | 20,990 | 2 | 0 | 2008-08-02T16:09:56.780000 | 2008-08-02T17:18:12.680000 |
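The pass-by-reference half of the question, two consumers bound to one mutable list versus each getting its own copy via ToArray(), is easy to see outside WinForms too. An illustrative Python sketch (note that the real ComboBox behaviour additionally involves the shared BindingContext described in the answer, not just the shared list):

```python
items = ["Item 1", "Item 2", "Item 3"]

# Two "combo boxes" bound to the same list object: one reference, shared state.
box1_source = items
box2_source = items
box1_source.append("Item 4")          # mutate through one binding...
assert box2_source[-1] == "Item 4"    # ...and the other binding sees it too

# The ToArray()-style workaround: give each binding its own copy.
box1_copy = list(items)
box2_copy = list(items)
box1_copy.append("Item 5")
assert "Item 5" not in box2_copy      # copies are independent
```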
502 | 7,090 | Get a preview JPEG of a PDF on Windows? | I have a cross-platform (Python) application which needs to generate a JPEG preview of the first page of a PDF. On the Mac I am spawning sips. Is there something similarly simple I can do on Windows? | ImageMagick delegates the PDF->bitmap conversion to GhostScript anyway, so here's a command you can use (it's based on the actual command listed by the ps:alpha delegate in ImageMagick, just adjusted to use JPEG as output): gs -q -dQUIET -dPARANOIDSAFER -dBATCH -dNOPAUSE -dNOPROMPT \ -dMaxBitmap=500000000 -dLastPage=1 -dAlignToPixels=0 -dGridFitTT=0 \ -sDEVICE=jpeg -dTextAlphaBits=4 -dGraphicsAlphaBits=4 -r72x72 \ -sOutputFile=$OUTPUT -f$INPUT where $OUTPUT and $INPUT are the output and input filenames. Adjust the 72x72 to whatever resolution you need. (Obviously, strip out the backslashes if you're writing out the whole command as one line.) This is good for two reasons: You don't need to have ImageMagick installed anymore. Not that I have anything against ImageMagick (I love it to bits), but I believe in simple solutions. ImageMagick does a two-step conversion. First PDF->PPM, then PPM->JPEG. This way, the conversion is one-step. Other things to consider: with the files I've tested, PNG compresses better than JPEG. If you want to use PNG, change the -sDEVICE=jpeg to -sDEVICE=png16m. | Get a preview JPEG of a PDF on Windows? I have a cross-platform (Python) application which needs to generate a JPEG preview of the first page of a PDF. On the Mac I am spawning sips. Is there something similarly simple I can do on Windows? | TITLE:
Get a preview JPEG of a PDF on Windows?
QUESTION:
I have a cross-platform (Python) application which needs to generate a JPEG preview of the first page of a PDF. On the Mac I am spawning sips. Is there something similarly simple I can do on Windows?
ANSWER:
ImageMagick delegates the PDF->bitmap conversion to GhostScript anyway, so here's a command you can use (it's based on the actual command listed by the ps:alpha delegate in ImageMagick, just adjusted to use JPEG as output): gs -q -dQUIET -dPARANOIDSAFER -dBATCH -dNOPAUSE -dNOPROMPT \ -dMaxBitmap=500000000 -dLastPage=1 -dAlignToPixels=0 -dGridFitTT=0 \ -sDEVICE=jpeg -dTextAlphaBits=4 -dGraphicsAlphaBits=4 -r72x72 \ -sOutputFile=$OUTPUT -f$INPUT where $OUTPUT and $INPUT are the output and input filenames. Adjust the 72x72 to whatever resolution you need. (Obviously, strip out the backslashes if you're writing out the whole command as one line.) This is good for two reasons: You don't need to have ImageMagick installed anymore. Not that I have anything against ImageMagick (I love it to bits), but I believe in simple solutions. ImageMagick does a two-step conversion. First PDF->PPM, then PPM->JPEG. This way, the conversion is one-step. Other things to consider: with the files I've tested, PNG compresses better than JPEG. If you want to use PNG, change the -sDEVICE=jpeg to -sDEVICE=png16m. | [
"python",
"windows",
"image",
"pdf"
] | 59 | 44 | 18,039 | 3 | 0 | 2008-08-02T17:01:58.500000 | 2008-08-10T08:08:33.543000 |
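From the Python side, the answer's Ghostscript invocation can be wrapped with subprocess. This sketch mirrors the flags above; the paths are placeholders, and on Windows the executable is typically gswin32c.exe or gswin64c.exe rather than gs:

```python
import subprocess

def gs_preview_command(pdf_path, jpeg_path, dpi=72, gs_binary="gs"):
    """Build the Ghostscript argv for a first-page JPEG preview,
    mirroring the flags in the answer above."""
    return [
        gs_binary, "-q", "-dQUIET", "-dPARANOIDSAFER", "-dBATCH",
        "-dNOPAUSE", "-dNOPROMPT", "-dMaxBitmap=500000000",
        "-dLastPage=1", "-dAlignToPixels=0", "-dGridFitTT=0",
        "-sDEVICE=jpeg", "-dTextAlphaBits=4", "-dGraphicsAlphaBits=4",
        f"-r{dpi}x{dpi}", f"-sOutputFile={jpeg_path}", f"-f{pdf_path}",
    ]

def render_preview(pdf_path, jpeg_path, dpi=72, gs_binary="gs"):
    """Run Ghostscript; raises CalledProcessError if conversion fails."""
    subprocess.run(gs_preview_command(pdf_path, jpeg_path, dpi, gs_binary),
                   check=True)
```

As the answer notes, switching `-sDEVICE=jpeg` to `-sDEVICE=png16m` gives PNG output with the same command shape.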
514 | 519 | Frequent SystemExit in Ruby when making HTTP calls | I have a Ruby on Rails Website that makes HTTP calls to an external Web Service. About once a day I get a SystemExit (stacktrace below) error email where a call to the service has failed. If I then try the exact same query on my site moments later it works fine. It's been happening since the site went live and I've had no luck tracking down what causes it. Ruby is version 1.8.6 and rails is version 1.2.6. Anyone else have this problem? This is the error and stacktrace. A SystemExit occurred /usr/local/lib/ruby/gems/1.8/gems/rails-1.2.6/lib/fcgi_handler.rb:116:in exit' /usr/local/lib/ruby/gems/1.8/gems/rails-1.2.6/lib/fcgi_handler.rb:116:in exit_now_handler' /usr/local/lib/ruby/gems/1.8/gems/activesupport-1.4.4/lib/active_support/inflector.rb:250:in to_proc' /usr/local/lib/ruby/1.8/net/protocol.rb:133:in call' /usr/local/lib/ruby/1.8/net/protocol.rb:133:in sysread' /usr/local/lib/ruby/1.8/net/protocol.rb:133:in rbuf_fill' /usr/local/lib/ruby/1.8/timeout.rb:56:in timeout' /usr/local/lib/ruby/1.8/timeout.rb:76:in timeout' /usr/local/lib/ruby/1.8/net/protocol.rb:132:in rbuf_fill' /usr/local/lib/ruby/1.8/net/protocol.rb:116:in readuntil' /usr/local/lib/ruby/1.8/net/protocol.rb:126:in readline' /usr/local/lib/ruby/1.8/net/http.rb:2017:in read_status_line' /usr/local/lib/ruby/1.8/net/http.rb:2006:in read_new' /usr/local/lib/ruby/1.8/net/http.rb:1047:in request' /usr/local/lib/ruby/1.8/net/http.rb:945:in request_get' /usr/local/lib/ruby/1.8/net/http.rb:380:in get_response' /usr/local/lib/ruby/1.8/net/http.rb:543:in start' /usr/local/lib/ruby/1.8/net/http.rb:379:in get_response' | Using fcgi with Ruby is known to be very buggy. Practically everybody has moved to Mongrel for this reason, and I recommend you do the same. | Frequent SystemExit in Ruby when making HTTP calls I have a Ruby on Rails Website that makes HTTP calls to an external Web Service. 
About once a day I get a SystemExit (stacktrace below) error email where a call to the service has failed. If I then try the exact same query on my site moments later it works fine. It's been happening since the site went live and I've had no luck tracking down what causes it. Ruby is version 1.8.6 and rails is version 1.2.6. Anyone else have this problem? This is the error and stacktrace. A SystemExit occurred /usr/local/lib/ruby/gems/1.8/gems/rails-1.2.6/lib/fcgi_handler.rb:116:in exit' /usr/local/lib/ruby/gems/1.8/gems/rails-1.2.6/lib/fcgi_handler.rb:116:in exit_now_handler' /usr/local/lib/ruby/gems/1.8/gems/activesupport-1.4.4/lib/active_support/inflector.rb:250:in to_proc' /usr/local/lib/ruby/1.8/net/protocol.rb:133:in call' /usr/local/lib/ruby/1.8/net/protocol.rb:133:in sysread' /usr/local/lib/ruby/1.8/net/protocol.rb:133:in rbuf_fill' /usr/local/lib/ruby/1.8/timeout.rb:56:in timeout' /usr/local/lib/ruby/1.8/timeout.rb:76:in timeout' /usr/local/lib/ruby/1.8/net/protocol.rb:132:in rbuf_fill' /usr/local/lib/ruby/1.8/net/protocol.rb:116:in readuntil' /usr/local/lib/ruby/1.8/net/protocol.rb:126:in readline' /usr/local/lib/ruby/1.8/net/http.rb:2017:in read_status_line' /usr/local/lib/ruby/1.8/net/http.rb:2006:in read_new' /usr/local/lib/ruby/1.8/net/http.rb:1047:in request' /usr/local/lib/ruby/1.8/net/http.rb:945:in request_get' /usr/local/lib/ruby/1.8/net/http.rb:380:in get_response' /usr/local/lib/ruby/1.8/net/http.rb:543:in start' /usr/local/lib/ruby/1.8/net/http.rb:379:in get_response' | TITLE:
Frequent SystemExit in Ruby when making HTTP calls
QUESTION:
I have a Ruby on Rails Website that makes HTTP calls to an external Web Service. About once a day I get a SystemExit (stacktrace below) error email where a call to the service has failed. If I then try the exact same query on my site moments later it works fine. It's been happening since the site went live and I've had no luck tracking down what causes it. Ruby is version 1.8.6 and rails is version 1.2.6. Anyone else have this problem? This is the error and stacktrace. A SystemExit occurred /usr/local/lib/ruby/gems/1.8/gems/rails-1.2.6/lib/fcgi_handler.rb:116:in exit' /usr/local/lib/ruby/gems/1.8/gems/rails-1.2.6/lib/fcgi_handler.rb:116:in exit_now_handler' /usr/local/lib/ruby/gems/1.8/gems/activesupport-1.4.4/lib/active_support/inflector.rb:250:in to_proc' /usr/local/lib/ruby/1.8/net/protocol.rb:133:in call' /usr/local/lib/ruby/1.8/net/protocol.rb:133:in sysread' /usr/local/lib/ruby/1.8/net/protocol.rb:133:in rbuf_fill' /usr/local/lib/ruby/1.8/timeout.rb:56:in timeout' /usr/local/lib/ruby/1.8/timeout.rb:76:in timeout' /usr/local/lib/ruby/1.8/net/protocol.rb:132:in rbuf_fill' /usr/local/lib/ruby/1.8/net/protocol.rb:116:in readuntil' /usr/local/lib/ruby/1.8/net/protocol.rb:126:in readline' /usr/local/lib/ruby/1.8/net/http.rb:2017:in read_status_line' /usr/local/lib/ruby/1.8/net/http.rb:2006:in read_new' /usr/local/lib/ruby/1.8/net/http.rb:1047:in request' /usr/local/lib/ruby/1.8/net/http.rb:945:in request_get' /usr/local/lib/ruby/1.8/net/http.rb:380:in get_response' /usr/local/lib/ruby/1.8/net/http.rb:543:in start' /usr/local/lib/ruby/1.8/net/http.rb:379:in get_response'
ANSWER:
Using fcgi with Ruby is known to be very buggy. Practically everybody has moved to Mongrel for this reason, and I recommend you do the same. | [
"ruby-on-rails",
"ruby",
"crash"
] | 24 | 9 | 2,335 | 4 | 0 | 2008-08-02T17:26:39.793000 | 2008-08-02T17:50:34.987000 |
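Independent of the fcgi fix, a once-a-day transient failure against an external service is also a classic candidate for a small retry wrapper. A generic Python sketch (the attempt count, delay and exception types caught are all placeholders to tune for the real service):

```python
import time

def with_retries(func, attempts=3, delay_seconds=1.0, retry_on=(Exception,)):
    """Call func(); on a transient failure, sleep and retry a bounded
    number of times before letting the final exception propagate."""
    for attempt in range(1, attempts + 1):
        try:
            return func()
        except retry_on:
            if attempt == attempts:
                raise
            time.sleep(delay_seconds)
```

In Ruby the same shape is a begin/rescue block with a bounded retry counter around the Net::HTTP call.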
535 | 541 | Continuous Integration System for a Python Codebase | I am starting to work on a hobby project with a Python codebase and I would like to set up some form of continuous integration (i.e. running a battery of test-cases each time a check-in is made and sending nag e-mails to responsible persons when the tests fail) similar to CruiseControl or TeamCity. I realize I could do this with hooks in most VCSes, but that requires that the tests run on the same machine as the version control server, which isn't as elegant as I would like. Does anyone have any suggestions for a small, user-friendly, open-source continuous integration system suitable for a Python codebase? | We run Buildbot - Trac at work. I haven't used it too much since my codebase isn't part of the release cycle yet. But we run the tests on different environments (OSX/Linux/Win) and it sends emails — and it's written in Python. | Continuous Integration System for a Python Codebase I am starting to work on a hobby project with a Python codebase and I would like to set up some form of continuous integration (i.e. running a battery of test-cases each time a check-in is made and sending nag e-mails to responsible persons when the tests fail) similar to CruiseControl or TeamCity. I realize I could do this with hooks in most VCSes, but that requires that the tests run on the same machine as the version control server, which isn't as elegant as I would like. Does anyone have any suggestions for a small, user-friendly, open-source continuous integration system suitable for a Python codebase? | TITLE:
Continuous Integration System for a Python Codebase
QUESTION:
I am starting to work on a hobby project with a Python codebase and I would like to set up some form of continuous integration (i.e. running a battery of test-cases each time a check-in is made and sending nag e-mails to responsible persons when the tests fail) similar to CruiseControl or TeamCity. I realize I could do this with hooks in most VCSes, but that requires that the tests run on the same machine as the version control server, which isn't as elegant as I would like. Does anyone have any suggestions for a small, user-friendly, open-source continuous integration system suitable for a Python codebase?
ANSWER:
We run Buildbot - Trac at work. I haven't used it too much since my codebase isn't part of the release cycle yet. But we run the tests on different environments (OSX/Linux/Win) and it sends emails — and it's written in Python. | [
"python",
"continuous-integration",
"extreme-programming"
] | 69 | 33 | 9,653 | 7 | 0 | 2008-08-02T18:43:54.787000 | 2008-08-02T19:06:40.667000 |
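For scale, the "run the tests, send nag e-mails" loop the question describes can be prototyped with the standard library before committing to Buildbot or another CI server. A minimal sketch; the addresses, SMTP host and test directory are placeholders, and a real CI system adds scheduling, isolation and build history on top of this:

```python
import smtplib
import subprocess
from email.message import EmailMessage

def run_tests(test_dir="tests"):
    """Run the unittest suite; return (passed, combined output)."""
    proc = subprocess.run(
        ["python", "-m", "unittest", "discover", "-s", test_dir],
        capture_output=True, text=True)
    return proc.returncode == 0, proc.stdout + proc.stderr

def build_nag_email(output, to_addr="dev-team@example.com"):
    """Build (but don't send) the failure notification."""
    msg = EmailMessage()
    msg["Subject"] = "CI: test suite FAILED"
    msg["From"] = "ci@example.com"
    msg["To"] = to_addr
    msg.set_content(output)
    return msg

def check_in_hook():
    """Call this from a VCS post-commit hook: test, then nag on failure."""
    passed, output = run_tests()
    if not passed:
        with smtplib.SMTP("localhost") as smtp:
            smtp.send_message(build_nag_email(output))
```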
549 | 477,578 | The definitive guide to form-based website authentication | Moderator note: This question is not a good fit for our question and answer format with the topicality rules which currently apply for Stack Overflow. We normally use a "historical lock" for such questions where the content still has value. However, the answers on this question are actively maintained and a historical lock doesn't permit editing of the answers. As such, a "wiki answer" lock has been applied to allow the answers to be edited. You should assume the topicality issues which are normally handled by a historical lock are present (i.e. this question not a good example of an on-topic question for Stack Overflow). Form-based authentication for websites We believe that Stack Overflow should not just be a resource for very specific technical questions, but also for general guidelines on how to solve variations on common problems. "Form based authentication for websites" should be a fine topic for such an experiment. It should include topics such as: How to log in How to log out How to remain logged in Managing cookies (including recommended settings) SSL/HTTPS encryption How to store passwords Using secret questions Forgotten username/password functionality Use of nonces to prevent cross-site request forgeries (CSRF) OpenID "Remember me" checkbox Browser autocompletion of usernames and passwords Secret URLs (public URL protected by digest) Checking password strength E-mail validation and much more about form based authentication... It should not include things like: Roles and authorization HTTP basic authentication Please help us by: Suggesting subtopics Submitting good articles about this subject Editing the official answer | PART I: How To Log In We'll assume you already know how to build a login+password HTML form which POSTs the values to a script on the server side for authentication. 
The sections below will deal with patterns for sound practical auth, and how to avoid the most common security pitfalls. To HTTPS or not to HTTPS? Unless the connection is already secure (that is, tunneled through HTTPS using SSL/TLS), your login form values will be sent in cleartext, which allows anyone eavesdropping on the line between browser and web server will be able to read logins as they pass through. This type of wiretapping is done routinely by governments, but in general, we won't address 'owned' wires other than to say this: Just use HTTPS. In essence, the only practical way to protect against wiretapping/packet sniffing during login is by using HTTPS or another certificate-based encryption scheme (for example, TLS ) or a proven & tested challenge-response scheme (for example, the Diffie-Hellman -based SRP). Any other method can be easily circumvented by an eavesdropping attacker. Of course, if you are willing to get a little bit impractical, you could also employ some form of two-factor authentication scheme (e.g. the Google Authenticator app, a physical 'cold war style' codebook, or an RSA key generator dongle). If applied correctly, this could work even with an unsecured connection, but it's hard to imagine that a dev would be willing to implement two-factor auth but not SSL. (Do not) Roll-your-own JavaScript encryption/hashing Given the perceived (though now avoidable ) cost and technical difficulty of setting up an SSL certificate on your website, some developers are tempted to roll their own in-browser hashing or encryption schemes in order to avoid passing cleartext logins over an unsecured wire. 
While this is a noble thought, it is essentially useless (and can be a security flaw ) unless it is combined with one of the above - that is, either securing the line with strong encryption or using a tried-and-tested challenge-response mechanism (if you don't know what that is, just know that it is one of the most difficult to prove, most difficult to design, and most difficult to implement concepts in digital security). While it is true that hashing the password can be effective against password disclosure, it is vulnerable to replay attacks, Man-In-The-Middle attacks / hijackings (if an attacker can inject a few bytes into your unsecured HTML page before it reaches your browser, they can simply comment out the hashing in the JavaScript), or brute-force attacks (since you are handing the attacker both username, salt and hashed password). CAPTCHAS against humanity CAPTCHA is meant to thwart one specific category of attack: automated dictionary/brute force trial-and-error with no human operator. There is no doubt that this is a real threat, however, there are ways of dealing with it seamlessly that don't require a CAPTCHA, specifically properly designed server-side login throttling schemes - we'll discuss those later. Know that CAPTCHA implementations are not created alike; they often aren't human-solvable, most of them are actually ineffective against bots, all of them are ineffective against cheap third-world labor (according to OWASP, the current sweatshop rate is $12 per 500 tests), and some implementations may be technically illegal in some countries (see OWASP Authentication Cheat Sheet ). If you must use a CAPTCHA, use Google's reCAPTCHA, since it is OCR-hard by definition (since it uses already OCR-misclassified book scans) and tries very hard to be user-friendly. Personally, I tend to find CAPTCHAS annoying, and use them only as a last resort when a user has failed to log in a number of times and throttling delays are maxed out. 
This will happen rarely enough to be acceptable, and it strengthens the system as a whole. Storing Passwords / Verifying logins This may finally be common knowledge after all the highly-publicized hacks and user data leaks we've seen in recent years, but it has to be said: Do not store passwords in cleartext in your database. User databases are routinely hacked, leaked or gleaned through SQL injection, and if you are storing raw, plaintext passwords, that is instant game over for your login security. So if you can't store the password, how do you check that the login+password combination POSTed from the login form is correct? The answer is hashing using a key derivation function. Whenever a new user is created or a password is changed, you take the password and run it through a KDF, such as Argon2, bcrypt, scrypt or PBKDF2, turning the cleartext password ("correcthorsebatterystaple") into a long, random-looking string, which is a lot safer to store in your database. To verify a login, you run the same hash function on the entered password, this time passing in the salt and compare the resulting hash string to the value stored in your database. Argon2, bcrypt and scrypt store the salt with the hash already. Check out this article on sec.stackexchange for more detailed information. The reason a salt is used is that hashing in itself is not sufficient -- you'll want to add a so-called 'salt' to protect the hash against rainbow tables. A salt effectively prevents two passwords that exactly match from being stored as the same hash value, preventing the whole database being scanned in one run if an attacker is executing a password guessing attack. A cryptographic hash should not be used for password storage because user-selected passwords are not strong enough (i.e. do not usually contain enough entropy) and a password guessing attack could be completed in a relatively short time by an attacker with access to the hashes. 
TITLE:
The definitive guide to form-based website authentication
QUESTION:
Moderator note: This question is not a good fit for our question and answer format with the topicality rules which currently apply for Stack Overflow. We normally use a "historical lock" for such questions where the content still has value. However, the answers on this question are actively maintained, and a historical lock doesn't permit editing of the answers. As such, a "wiki answer" lock has been applied to allow the answers to be edited. You should assume the topicality issues which are normally handled by a historical lock are present (i.e. this question is not a good example of an on-topic question for Stack Overflow).

Form-based authentication for websites

We believe that Stack Overflow should not just be a resource for very specific technical questions, but also for general guidelines on how to solve variations on common problems. "Form-based authentication for websites" should be a fine topic for such an experiment.

It should include topics such as:

- How to log in
- How to log out
- How to remain logged in
- Managing cookies (including recommended settings)
- SSL/HTTPS encryption
- How to store passwords
- Using secret questions
- Forgotten username/password functionality
- Use of nonces to prevent cross-site request forgeries (CSRF)
- OpenID
- "Remember me" checkbox
- Browser autocompletion of usernames and passwords
- Secret URLs (public URL protected by digest)
- Checking password strength
- E-mail validation
- and much more about form-based authentication...

It should not include things like:

- Roles and authorization
- HTTP basic authentication

Please help us by:

- Suggesting subtopics
- Submitting good articles about this subject
- Editing the official answer
ANSWER:
PART I: How To Log In

We'll assume you already know how to build a login+password HTML form which POSTs the values to a script on the server side for authentication. The sections below will deal with patterns for sound practical auth, and how to avoid the most common security pitfalls.

To HTTPS or not to HTTPS?

Unless the connection is already secure (that is, tunneled through HTTPS using SSL/TLS), your login form values will be sent in cleartext, and anyone eavesdropping on the line between browser and web server will be able to read logins as they pass through. This type of wiretapping is done routinely by governments, but in general, we won't address 'owned' wires other than to say this: Just use HTTPS.

In essence, the only practical way to protect against wiretapping/packet sniffing during login is by using HTTPS or another certificate-based encryption scheme (for example, TLS) or a proven & tested challenge-response scheme (for example, the Diffie-Hellman-based SRP). Any other method can be easily circumvented by an eavesdropping attacker.

Of course, if you are willing to get a little bit impractical, you could also employ some form of two-factor authentication scheme (e.g. the Google Authenticator app, a physical 'cold war style' codebook, or an RSA key generator dongle). If applied correctly, this could work even with an unsecured connection, but it's hard to imagine that a dev would be willing to implement two-factor auth but not SSL.

(Do not) Roll-your-own JavaScript encryption/hashing

Given the perceived (though now avoidable) cost and technical difficulty of setting up an SSL certificate on your website, some developers are tempted to roll their own in-browser hashing or encryption schemes in order to avoid passing cleartext logins over an unsecured wire.
While this is a noble thought, it is essentially useless (and can be a security flaw) unless it is combined with one of the above - that is, either securing the line with strong encryption or using a tried-and-tested challenge-response mechanism (if you don't know what that is, just know that it is one of the most difficult to prove, most difficult to design, and most difficult to implement concepts in digital security).

While it is true that hashing the password can be effective against password disclosure, it is vulnerable to replay attacks, Man-In-The-Middle attacks/hijackings (if an attacker can inject a few bytes into your unsecured HTML page before it reaches your browser, they can simply comment out the hashing in the JavaScript), and brute-force attacks (since you are handing the attacker the username, salt and hashed password).

CAPTCHAS against humanity

CAPTCHA is meant to thwart one specific category of attack: automated dictionary/brute force trial-and-error with no human operator. There is no doubt that this is a real threat; however, there are ways of dealing with it seamlessly that don't require a CAPTCHA, specifically properly designed server-side login throttling schemes - we'll discuss those later.

Know that CAPTCHA implementations are not created equal; they often aren't human-solvable, most of them are actually ineffective against bots, all of them are ineffective against cheap third-world labor (according to OWASP, the current sweatshop rate is $12 per 500 tests), and some implementations may be technically illegal in some countries (see the OWASP Authentication Cheat Sheet).

If you must use a CAPTCHA, use Google's reCAPTCHA, since it is OCR-hard by definition (since it uses already OCR-misclassified book scans) and tries very hard to be user-friendly.

Personally, I tend to find CAPTCHAS annoying, and use them only as a last resort when a user has failed to log in a number of times and throttling delays are maxed out.
This will happen rarely enough to be acceptable, and it strengthens the system as a whole.

Storing Passwords / Verifying logins

This may finally be common knowledge after all the highly-publicized hacks and user data leaks we've seen in recent years, but it has to be said: Do not store passwords in cleartext in your database. User databases are routinely hacked, leaked or gleaned through SQL injection, and if you are storing raw, plaintext passwords, that is instant game over for your login security.

So if you can't store the password, how do you check that the login+password combination POSTed from the login form is correct? The answer is hashing using a key derivation function (KDF). Whenever a new user is created or a password is changed, you take the password and run it through a KDF, such as Argon2, bcrypt, scrypt or PBKDF2, turning the cleartext password ("correcthorsebatterystaple") into a long, random-looking string, which is a lot safer to store in your database. To verify a login, you run the same hash function on the entered password, this time passing in the salt, and compare the resulting hash string to the value stored in your database. Argon2, bcrypt and scrypt already store the salt together with the hash. Check out this article on sec.stackexchange for more detailed information.

The reason a salt is used is that hashing in itself is not sufficient -- you'll want to add a so-called 'salt' to protect the hash against rainbow tables. A salt effectively prevents two identical passwords from being stored as the same hash value, and prevents the whole database from being scanned in one run if an attacker is executing a password guessing attack.

A plain cryptographic hash should not be used for password storage because user-selected passwords are not strong enough (i.e. do not usually contain enough entropy), and a password guessing attack could be completed in a relatively short time by an attacker with access to the hashes.
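As a minimal sketch of this hash-and-verify flow, here is the idea using PBKDF2-HMAC-SHA256 from Python's standard library (helper names are mine, and the iteration count is illustrative; a production system would more likely reach for Argon2 or bcrypt via a dedicated library):

```python
import hashlib
import hmac
import os

def hash_password(password: str, iterations: int = 100_000) -> tuple[bytes, bytes]:
    """Derive a slow, salted hash; store (salt, hash), never the cleartext password."""
    salt = os.urandom(16)  # a unique random salt per password defeats rainbow tables
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

def verify_password(password: str, salt: bytes, stored: bytes,
                    iterations: int = 100_000) -> bool:
    """Re-run the same KDF with the stored salt and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, stored)
```

Note that `hmac.compare_digest` is used instead of `==` so the comparison time does not leak how many leading bytes matched.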
This is why KDFs are used - these effectively "stretch the key", which means that every password guess an attacker makes costs multiple repetitions of the hash algorithm, for example 10,000 of them, making each guess 10,000 times slower for the attacker.

Session data - "You are logged in as Spiderman69"

Once the server has verified the login and password against your user database and found a match, the system needs a way to remember that the browser has been authenticated. This fact should only ever be stored server side in the session data.

If you are unfamiliar with session data, here's how it works: a single randomly-generated string is stored in an expiring cookie and used to reference a collection of data - the session data - which is stored on the server. If you are using an MVC framework, this is undoubtedly handled already.

If at all possible, make sure the session cookie has the Secure and HttpOnly flags set when sent to the browser. The HttpOnly flag provides some protection against the cookie being read through an XSS attack. The Secure flag ensures that the cookie is only sent back via HTTPS, and therefore protects against network sniffing attacks. The value of the cookie should not be predictable. Where a cookie referencing a non-existent session is presented, its value should be replaced immediately to prevent session fixation.

Session state can also be maintained on the client side, using techniques like JWT (JSON Web Token).

PART II: How To Remain Logged In - The Infamous "Remember Me" Checkbox

Persistent Login Cookies ("remember me" functionality) are a danger zone; on the one hand, they are entirely as safe as conventional logins when users understand how to handle them; on the other hand, they are an enormous security risk in the hands of careless users, who may use them on public computers and forget to log out, and who may not know what browser cookies are or how to delete them.
Personally, I like persistent logins for the websites I visit on a regular basis, but I know how to handle them safely. If you are positive that your users know the same, you can use persistent logins with a clean conscience. If not - well, then you may subscribe to the philosophy that users who are careless with their login credentials brought it upon themselves if they get hacked. It's not like we go to our users' houses and tear off all those facepalm-inducing Post-It notes with passwords they have lined up on the edge of their monitors, either.

Of course, some systems can't afford to have any accounts hacked; for such systems, there is no way you can justify having persistent logins.

If you DO decide to implement persistent login cookies, this is how you do it:

First, take some time to read Paragon Initiative's article on the subject. You'll need to get a bunch of elements right, and the article does a great job of explaining each.

And just to reiterate one of the most common pitfalls, DO NOT STORE THE PERSISTENT LOGIN COOKIE (TOKEN) IN YOUR DATABASE, ONLY A HASH OF IT! The login token is Password Equivalent, so if an attacker got their hands on your database, they could use the tokens to log in to any account, just as if they were cleartext login-password combinations. Therefore, use hashing (according to https://security.stackexchange.com/a/63438/5002 a weak hash will do just fine for this purpose) when storing persistent login tokens.

PART III: Using Secret Questions

Don't implement 'secret questions'. The 'secret questions' feature is a security anti-pattern. Read the paper from link number 4 on the MUST-READ list. You can ask Sarah Palin about that one, after her Yahoo! email account got hacked during a previous presidential campaign because the answer to her security question was... "Wasilla High School"!
Even with user-specified questions, it is highly likely that most users will choose either:

- A 'standard' secret question like mother's maiden name or favorite pet
- A simple piece of trivia that anyone could lift from their blog, LinkedIn profile, or similar
- Any question that is easier to answer than guessing their password. Which, for any decent password, is every question you can imagine

In conclusion, security questions are inherently insecure in virtually all their forms and variations, and should not be employed in an authentication scheme for any reason.

The true reason why security questions even exist in the wild is that they conveniently save the cost of a few support calls from users who can't access their email to get to a reactivation code. This comes at the expense of security and Sarah Palin's reputation. Worth it? Probably not.

PART IV: Forgotten Password Functionality

I already mentioned why you should never use security questions for handling forgotten/lost user passwords; it also goes without saying that you should never e-mail users their actual passwords. There are at least two more all-too-common pitfalls to avoid in this field:

Don't reset a forgotten password to an autogenerated strong password - such passwords are notoriously hard to remember, which means the user must either change it or write it down - say, on a bright yellow Post-It on the edge of their monitor. Instead of setting a new password, just let users pick a new one right away - which is what they want to do anyway. (An exception to this might be if the users are universally using a password manager to store/manage passwords that would normally be impossible to remember without writing them down.)

Always hash the lost password code/token in the database. AGAIN, this code is another example of a Password Equivalent, so it MUST be hashed in case an attacker got their hands on your database.
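Both the persistent login token from Part II and the lost password code here can be handled the same way. A minimal sketch of the issue-and-verify flow (hypothetical helper names, Python standard library only):

```python
import hashlib
import hmac
import secrets

def issue_token() -> tuple[str, str]:
    """Create a long random token plus the hash of it that goes in the database.

    Only the hash is stored; a fast hash is acceptable here because the token
    itself is long and random, unlike a user-chosen password.
    """
    token = secrets.token_urlsafe(32)  # ~256 bits; sent to the user, then discarded
    token_hash = hashlib.sha256(token.encode()).hexdigest()
    return token, token_hash

def check_token(presented: str, stored_hash: str) -> bool:
    """Hash the presented token and compare against the stored hash in constant time."""
    candidate = hashlib.sha256(presented.encode()).hexdigest()
    return hmac.compare_digest(candidate, stored_hash)
```

Because `secrets.token_urlsafe(32)` draws from a cryptographically secure source, the token is unguessable, and a database leak exposes only the SHA-256 hash, which cannot be reversed into a working token.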
When a lost password code is requested, send the plaintext code to the user's email address, then hash it, save the hash in your database -- and throw away the original. Just like a password or a persistent login token.

A final note: always make sure your interface for entering the 'lost password code' is at least as secure as your login form itself, or an attacker will simply use this to gain access instead. Making sure you generate very long 'lost password codes' (for example, 16 case-sensitive alphanumeric characters) is a good start, but consider adding the same throttling scheme that you use for the login form itself.

PART V: Checking Password Strength

First, you'll want to read this small article for a reality check: The 500 most common passwords

Okay, so maybe the list isn't the canonical list of most common passwords on any system anywhere ever, but it's a good indication of how poorly people will choose their passwords when there is no enforced policy in place. Plus, the list looks frighteningly close to home when you compare it to publicly available analyses of recently stolen passwords.

So: with no minimum password strength requirements, 2% of users use one of the top 20 most common passwords. Meaning: if an attacker gets just 20 attempts, 1 in 50 accounts on your website will be crackable.

Thwarting this requires calculating the entropy of a password and then applying a threshold. The National Institute of Standards and Technology (NIST) Special Publication 800-63 has a set of very good suggestions. That, when combined with a dictionary and keyboard layout analysis (for example, 'qwertyuiop' is a bad password), can reject 99% of all poorly selected passwords at a level of 18 bits of entropy.

Simply calculating password strength and showing a visual strength meter to a user is good, but insufficient. Unless it is enforced, a lot of users will most likely ignore it.
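The entropy-threshold idea above can be illustrated with a deliberately naive charset-size estimate (helper names are hypothetical; a real checker would layer the dictionary and keyboard-layout analysis on top of this):

```python
import math
import string

def naive_entropy_bits(password: str) -> float:
    """Crude upper bound: length * log2(size of the character pools used)."""
    pool = 0
    if any(c in string.ascii_lowercase for c in password):
        pool += 26
    if any(c in string.ascii_uppercase for c in password):
        pool += 26
    if any(c in string.digits for c in password):
        pool += 10
    if any(c in string.punctuation for c in password):
        pool += len(string.punctuation)  # 32 printable ASCII symbols
    return len(password) * math.log2(pool) if pool else 0.0

def strong_enough(password: str, threshold_bits: float = 18.0) -> bool:
    """Reject passwords below the entropy threshold instead of merely warning."""
    return naive_entropy_bits(password) >= threshold_bits
```

Note the caveat built into the name: this estimate rates "password" at roughly 38 bits, which is exactly why the dictionary check is needed in addition to the arithmetic.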
And for a refreshing take on user-friendliness of high-entropy passwords, Randall Munroe's Password Strength xkcd is highly recommended.

Utilize Troy Hunt's Have I Been Pwned API to check users' passwords against passwords compromised in public data breaches.

PART VI: Much More - Or: Preventing Rapid-Fire Login Attempts

First, have a look at the numbers: Password Recovery Speeds - How long will your password stand up

If you don't have the time to look through the tables in that link, here's the list of them:

- It takes virtually no time to crack a weak password, even if you're cracking it with an abacus
- It takes virtually no time to crack an alphanumeric 9-character password if it is case insensitive
- It takes virtually no time to crack an intricate, symbols-and-letters-and-numbers, upper-and-lowercase password if it is less than 8 characters long (a desktop PC can search the entire keyspace up to 7 characters in a matter of days or even hours)
- It would, however, take an inordinate amount of time to crack even a 6-character password, if you were limited to one attempt per second!

So what can we learn from these numbers? Well, lots, but we can focus on the most important part: the fact that preventing large numbers of rapid-fire successive login attempts (i.e. the brute force attack) really isn't that difficult. But preventing it right isn't as easy as it seems.
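The arithmetic behind those claims is easy to check yourself (the guess rates below are illustrative assumptions, not measurements from the linked tables):

```python
def seconds_to_exhaust(charset_size: int, length: int,
                       guesses_per_second: float) -> float:
    """Worst-case time to search the full keyspace for a given password class."""
    return charset_size ** length / guesses_per_second

# 7-char mixed-case alphanumeric keyspace against an assumed offline rig
# doing 10 billion guesses per second: minutes, not years.
offline = seconds_to_exhaust(62, 7, 1e10)

# A mere 6-char lowercase password against an online form throttled to
# one attempt per second: on the order of a decade.
online = seconds_to_exhaust(26, 6, 1.0)
```

The gap between the two numbers is the whole argument for throttling: the same passwords that fall instantly offline become impractical targets when the attacker is rate-limited.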
Generally speaking, you have three choices that are all effective against brute-force attacks (and dictionary attacks, but since you are already employing a strong password policy, they shouldn't be an issue):

- Present a CAPTCHA after N failed attempts (annoying as hell and often ineffective -- but I'm repeating myself here)
- Locking accounts and requiring email verification after N failed attempts (this is a DoS attack waiting to happen)
- And finally, login throttling: that is, setting a time delay between attempts after N failed attempts (yes, DoS attacks are still possible, but at least they are far less likely and a lot more complicated to pull off)

Best practice #1: A short time delay that increases with the number of failed attempts, like:

- 1 failed attempt = no delay
- 2 failed attempts = 2 sec delay
- 3 failed attempts = 4 sec delay
- 4 failed attempts = 8 sec delay
- 5 failed attempts = 16 sec delay
- etc.

DoS attacking this scheme would be very impractical, since each lockout time is slightly larger than the sum of the previous lockout times.

To clarify: The delay is not a delay before returning the response to the browser. It is more like a timeout or refractory period during which login attempts to a specific account or from a specific IP address will not be accepted or evaluated at all. That is, correct credentials will not return a successful login, and incorrect credentials will not trigger a delay increase.

Best practice #2: A medium length time delay that goes into effect after N failed attempts, like:

- 1-4 failed attempts = no delay
- 5 failed attempts = 15-30 min delay

DoS attacking this scheme would be quite impractical, but certainly doable. Also, it might be relevant to note that such a long delay can be very annoying for a legitimate user. Forgetful users will dislike you.
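The increasing-delay schedule from best practice #1 above is a one-liner; this sketch adds an upper bound, which is my addition rather than part of the schedule:

```python
def throttle_delay(failed_attempts: int, cap_seconds: float = 60.0) -> float:
    """Seconds of refractory period during which further attempts are not evaluated.

    Matches the schedule above: 1 -> 0, 2 -> 2, 3 -> 4, 4 -> 8, 5 -> 16, ...
    then capped so a sustained attack cannot lock an account out indefinitely.
    """
    if failed_attempts <= 1:
        return 0.0
    return min(2.0 ** (failed_attempts - 1), cap_seconds)
```

The counter of failed attempts, and the timestamp of the last one, would live alongside the account record (or per source IP); the login handler simply refuses to evaluate credentials until the delay has elapsed.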
Best practice #3: Combining the two approaches - either a fixed, short time delay that goes into effect after N failed attempts, like:

- 1-4 failed attempts = no delay
- 5+ failed attempts = 20 sec delay

Or, an increasing delay with a fixed upper bound, like:

- 1 failed attempt = 5 sec delay
- 2 failed attempts = 15 sec delay
- 3+ failed attempts = 45 sec delay

This final scheme was taken from the OWASP best-practices suggestions (link 1 on the MUST-READ list) and should be considered best practice, even if it is admittedly on the restrictive side.

As a rule of thumb, however, I would say: the stronger your password policy is, the less you have to bug users with delays. If you require strong (case-sensitive alphanumerics + required numbers and symbols) 9+ character passwords, you could give the users 2-4 non-delayed password attempts before activating the throttling.

DoS attacking this final login throttling scheme would be very impractical. And as a final touch, always allow persistent (cookie) logins (and/or a CAPTCHA-verified login form) to pass through, so legitimate users won't even be delayed while the attack is in progress. That way, the very impractical DoS attack becomes an extremely impractical attack.

Additionally, it makes sense to do more aggressive throttling on admin accounts, since those are the most attractive entry points.

PART VII: Distributed Brute Force Attacks

Just as an aside, more advanced attackers will try to circumvent login throttling by 'spreading their activities':

- Distributing the attempts across a botnet to prevent IP address flagging
- Rather than picking one user and trying the 50,000 most common passwords (which they can't, because of our throttling), they will pick THE most common password and try it against 50,000 users instead.
That way, not only do they get around maximum-attempts measures like CAPTCHAs and login throttling, their chance of success increases as well, since the number 1 most common password is far more likely than number 49.995 Spacing the login requests for each user account, say, 30 seconds apart, to sneak under the radar Here, the best practice would be logging the number of failed logins, system-wide, and using a running average of your site's bad-login frequency as the basis for an upper limit that you then impose on all users. Too abstract? Let me rephrase: Say your site has had an average of 120 bad logins per day over the past 3 months. Using that (running average), your system might set the global limit to 3 times that -- i.e. 360 failed attempts over a 24-hour period. Then, if the total number of failed attempts across all accounts exceeds that number within one day (or even better, monitor the rate of acceleration and trigger on a calculated threshold), it activates system-wide login throttling - meaning short delays for ALL users (still, with the exception of cookie logins and/or backup CAPTCHA logins). I also posted a question with more details and a really good discussion of how to avoid tricky pitfalls in fending off distributed brute force attacks. PART VIII: Two-Factor Authentication and Authentication Providers Credentials can be compromised, whether by exploits, passwords being written down and lost, laptops with keys being stolen, or users entering logins into phishing sites. Logins can be further protected with two-factor authentication, which uses out-of-band factors such as single-use codes received from a phone call, SMS message, app, or dongle. Several providers offer two-factor authentication services. Authentication can be completely delegated to a single-sign-on service, where another provider handles collecting credentials. This pushes the problem to a trusted third party. 
Google and Twitter both provide standards-based SSO services, while Facebook provides a similar proprietary solution. MUST-READ LINKS About Web Authentication OWASP Guide To Authentication / OWASP Authentication Cheat Sheet Dos and Don’ts of Client Authentication on the Web (very readable MIT research paper) Wikipedia: HTTP cookie Personal knowledge questions for fallback authentication: Security questions in the era of Facebook (very readable Berkeley research paper) | [
"forms",
"http",
"security",
"authentication",
"language-agnostic"
] | 5,518 | 3,962 | 685,030 | 11 | 0 | 2008-08-02T19:51:50.250000 | 2009-01-25T11:27:46.093000 |
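The increasing-delay schedule from "best practice #1" (and the capped variant from "best practice #3") in the login-throttling answer above is easy to express in code. This is a minimal sketch of the idea, not code from the answer itself; the function names are my own, and a real implementation would persist the per-account/per-IP failure count in a shared datastore rather than passing it around as a parameter.

```python
def throttle_delay(failed_attempts, cap=None):
    """Seconds a login attempt must wait under the increasing-delay scheme:
    no delay on the first failure, then 2, 4, 8, 16, ... seconds."""
    if failed_attempts <= 1:
        return 0
    delay = 2 ** (failed_attempts - 1)
    # Optional fixed upper bound, as in the OWASP-style "best practice #3".
    return min(delay, cap) if cap is not None else delay

def allowed_to_attempt(failed_attempts, now, last_failure_time):
    """Reject the attempt outright while the refractory period is running --
    per the answer, the delay is a lockout window, not a slow HTTP response."""
    return (now - last_failure_time) >= throttle_delay(failed_attempts)
```

Note that during the window neither correct nor incorrect credentials are evaluated, which is what keeps the scheme from leaking timing information or escalating its own counters.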
561 | 794 | How to use combinations of sets as test data | I would like to test a function with a tuple from a set of fringe cases and normal values. For example, while testing a function which returns true whenever given three lengths that form a valid triangle, I would have specific cases, negative / small / large numbers, values close-to being overflowed, etc.; what is more, main aim is to generate combinations of these values, with or without repetition, in order to get a set of test data. (inf,0,-1), (5,10,1000), (10,5,5), (0,-1,5), (1000,inf,inf),... As a note: I actually know the answer to this, but it might be helpful for others, and a challenge for people here! --will post my answer later on. | Absolutely, especially dealing with lots of these permutations/combinations I can definitely see that the first pass would be an issue. Interesting implementation in python, though I wrote a nice one in C and Ocaml based on "Algorithm 515" (see below). He wrote his in Fortran as it was common back then for all the "Algorithm XX" papers, well, that assembly or c. I had to re-write it and make some small improvements to work with arrays not ranges of numbers. This one does random access, I'm still working on getting some nice implementations of the ones mentioned in Knuth 4th volume fascicle 2. I'll an explanation of how this works to the reader. Though if someone is curious, I wouldn't object to writing something up. /** [combination c n p x] * get the [x]th lexicographically ordered set of [p] elements in [n] * output is in [c], and should be sizeof(int)*[p] */ void combination(int* c,int n,int p, int x){ int i,r,k = 0; for(i=0;i ~"Algorithm 515: Generation of a Vector from the Lexicographical Index"; Buckles, B. P., and Lybanon, M. ACM Transactions on Mathematical Software, Vol. 3, No. 2, June 1977. | How to use combinations of sets as test data I would like to test a function with a tuple from a set of fringe cases and normal values. 
For example, while testing a function which returns true whenever given three lengths that form a valid triangle, I would have specific cases, negative / small / large numbers, values close-to being overflowed, etc.; what is more, main aim is to generate combinations of these values, with or without repetition, in order to get a set of test data. (inf,0,-1), (5,10,1000), (10,5,5), (0,-1,5), (1000,inf,inf),... As a note: I actually know the answer to this, but it might be helpful for others, and a challenge for people here! --will post my answer later on. | TITLE:
How to use combinations of sets as test data
QUESTION:
I would like to test a function with a tuple from a set of fringe cases and normal values. For example, while testing a function which returns true whenever given three lengths that form a valid triangle, I would have specific cases, negative / small / large numbers, values close-to being overflowed, etc.; what is more, main aim is to generate combinations of these values, with or without repetition, in order to get a set of test data. (inf,0,-1), (5,10,1000), (10,5,5), (0,-1,5), (1000,inf,inf),... As a note: I actually know the answer to this, but it might be helpful for others, and a challenge for people here! --will post my answer later on.
ANSWER:
Absolutely, especially dealing with lots of these permutations/combinations I can definitely see that the first pass would be an issue. Interesting implementation in Python, though I wrote a nice one in C and OCaml based on "Algorithm 515" (see below). He wrote his in Fortran as it was common back then for all the "Algorithm XX" papers, well, that, assembly, or C. I had to re-write it and make some small improvements to work with arrays, not ranges of numbers. This one does random access, I'm still working on getting some nice implementations of the ones mentioned in Knuth 4th volume fascicle 2. I'll leave the explanation of how this works to the reader. Though if someone is curious, I wouldn't object to writing something up. /** [combination c n p x] * get the [x]th lexicographically ordered set of [p] elements in [n] * output is in [c], and should be sizeof(int)*[p] */ void combination(int* c,int n,int p, int x){ int i,r,k = 0; for(i=0;i ~"Algorithm 515: Generation of a Vector from the Lexicographical Index"; Buckles, B. P., and Lybanon, M. ACM Transactions on Mathematical Software, Vol. 3, No. 2, June 1977. | [
"unit-testing",
"language-agnostic",
"testing"
] | 29 | 18 | 11,470 | 5 | 0 | 2008-08-02T21:34:24.223000 | 2008-08-03T19:06:00.327000 |
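The C snippet in the answer above was truncated during extraction, but the idea behind Algorithm 515 -- computing the x-th lexicographic p-combination of n items directly, without enumerating its predecessors -- can be sketched in Python. This is my own port of the idea, not the answer's original code; it uses `math.comb` (Python 3.8+) for the binomial coefficients.

```python
from math import comb

def combination(n, p, x):
    """Return the x-th (1-based) lexicographic p-element combination of the
    numbers 1..n, by random access rather than enumeration."""
    c = []
    k = 0          # combinations skipped so far
    prev = 0
    for i in range(p - 1):
        ci = prev
        while True:
            ci += 1
            # Number of combinations whose slot i holds ci.
            r = comb(n - ci, p - (i + 1))
            k += r
            if k >= x:
                break
        k -= r
        c.append(ci)
        prev = ci
    c.append(prev + x - k)  # the last element follows from the remainder
    return c
```

For the exhaustive test-data generation the question asks about, the standard library's `itertools.combinations` and `itertools.product` are usually enough; a random-access function like this only matters when you want the x-th combination without generating the rest.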
564 | 566 | What is the difference between an int and an Integer in Java and C#? | I was reading More Joel on Software when I came across Joel Spolsky saying something about a particular type of programmer knowing the difference between an int and an Integer in Java/C# (Object-Oriented Programming Languages). So, what is the difference? | In Java, the 'int' type is a primitive, whereas the 'Integer' type is an object. In C#, the 'int' type is the same as System.Int32 and is a value type (ie more like the java 'int'). An integer (just like any other value types) can be boxed ("wrapped") into an object. The differences between objects and primitives are somewhat beyond the scope of this question, but to summarize: Objects provide facilities for polymorphism, are passed by reference (or more accurately have references passed by value), and are allocated from the heap. Conversely, primitives are immutable types that are passed by value and are often allocated from the stack. | What is the difference between an int and an Integer in Java and C#? I was reading More Joel on Software when I came across Joel Spolsky saying something about a particular type of programmer knowing the difference between an int and an Integer in Java/C# (Object-Oriented Programming Languages). So, what is the difference? | TITLE:
What is the difference between an int and an Integer in Java and C#?
QUESTION:
I was reading More Joel on Software when I came across Joel Spolsky saying something about a particular type of programmer knowing the difference between an int and an Integer in Java/C# (Object-Oriented Programming Languages). So, what is the difference?
ANSWER:
In Java, the 'int' type is a primitive, whereas the 'Integer' type is an object. In C#, the 'int' type is the same as System.Int32 and is a value type (i.e. more like the Java 'int'). An integer (just like any other value type) can be boxed ("wrapped") into an object. The differences between objects and primitives are somewhat beyond the scope of this question, but to summarize: Objects provide facilities for polymorphism, are passed by reference (or more accurately have references passed by value), and are allocated from the heap. Conversely, primitives are immutable types that are passed by value and are often allocated from the stack. | [
"c#",
"java",
"integer",
"int"
] | 271 | 252 | 249,050 | 26 | 0 | 2008-08-02T21:47:34.820000 | 2008-08-02T21:55:45.477000 |
580 | 586 | Deploying SQL Server Databases from Test to Live | I wonder how you guys manage deployment of a database between 2 SQL Servers, specifically SQL Server 2005. Now, there is a development and a live one. As this should be part of a buildscript (standard windows batch, even do with current complexity of those scripts, i might switch to PowerShell or so later), Enterprise Manager/Management Studio Express do not count. Would you just copy the.mdf File and attach it? I am always a bit careful when working with binary data, as this seems to be a compatiblity issue (even though development and live should run the same version of the server at all time). Or - given the lack of "EXPLAIN CREATE TABLE" in T-SQL - do you do something that exports an existing database into SQL-Scripts which you can run on the target server? If yes, is there a tool that can automatically dump a given Database into SQL Queries and that runs off the command line? (Again, Enterprise Manager/Management Studio Express do not count). And lastly - given the fact that the live database already contains data, the deployment may not involve creating all tables but rather checking the difference in structure and ALTER TABLE the live ones instead, which may also need data verification/conversion when existing fields change. Now, i hear a lot of great stuff about the Red Gate products, but for hobby projects, the price is a bit steep. So, what are you using to automatically deploy SQL Server Databases from Test to Live? | I've taken to hand-coding all of my DDL (creates/alter/delete) statements, adding them to my.sln as text files, and using normal versioning (using subversion, but any revision control should work). This way, I not only get the benefit of versioning, but updating live from dev/stage is the same process for code and database - tags, branches and so on work all the same. Otherwise, I agree redgate is expensive if you don't have a company buying it for you. 
If you can get a company to buy it for you though, it really is worth it! | Deploying SQL Server Databases from Test to Live I wonder how you guys manage deployment of a database between 2 SQL Servers, specifically SQL Server 2005. Now, there is a development and a live one. As this should be part of a buildscript (standard windows batch, even do with current complexity of those scripts, i might switch to PowerShell or so later), Enterprise Manager/Management Studio Express do not count. Would you just copy the.mdf File and attach it? I am always a bit careful when working with binary data, as this seems to be a compatiblity issue (even though development and live should run the same version of the server at all time). Or - given the lack of "EXPLAIN CREATE TABLE" in T-SQL - do you do something that exports an existing database into SQL-Scripts which you can run on the target server? If yes, is there a tool that can automatically dump a given Database into SQL Queries and that runs off the command line? (Again, Enterprise Manager/Management Studio Express do not count). And lastly - given the fact that the live database already contains data, the deployment may not involve creating all tables but rather checking the difference in structure and ALTER TABLE the live ones instead, which may also need data verification/conversion when existing fields change. Now, i hear a lot of great stuff about the Red Gate products, but for hobby projects, the price is a bit steep. So, what are you using to automatically deploy SQL Server Databases from Test to Live? | TITLE:
Deploying SQL Server Databases from Test to Live
QUESTION:
I wonder how you guys manage deployment of a database between 2 SQL Servers, specifically SQL Server 2005. Now, there is a development and a live one. As this should be part of a buildscript (standard Windows batch, even though with current complexity of those scripts, I might switch to PowerShell or so later), Enterprise Manager/Management Studio Express do not count. Would you just copy the .mdf file and attach it? I am always a bit careful when working with binary data, as this seems to be a compatibility issue (even though development and live should run the same version of the server at all times). Or - given the lack of "EXPLAIN CREATE TABLE" in T-SQL - do you do something that exports an existing database into SQL-Scripts which you can run on the target server? If yes, is there a tool that can automatically dump a given Database into SQL Queries and that runs off the command line? (Again, Enterprise Manager/Management Studio Express do not count). And lastly - given the fact that the live database already contains data, the deployment may not involve creating all tables but rather checking the difference in structure and ALTER TABLE the live ones instead, which may also need data verification/conversion when existing fields change. Now, I hear a lot of great stuff about the Red Gate products, but for hobby projects, the price is a bit steep. So, what are you using to automatically deploy SQL Server Databases from Test to Live?
ANSWER:
I've taken to hand-coding all of my DDL (creates/alter/delete) statements, adding them to my .sln as text files, and using normal versioning (using subversion, but any revision control should work). This way, I not only get the benefit of versioning, but updating live from dev/stage is the same process for code and database - tags, branches and so on work all the same. Otherwise, I agree Red Gate is expensive if you don't have a company buying it for you. If you can get a company to buy it for you though, it really is worth it! | [
"sql-server",
"sql-server-2005",
"deployment",
"release-management"
] | 30 | 19 | 8,360 | 14 | 0 | 2008-08-02T23:30:59.090000 | 2008-08-02T23:51:09.410000 |
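The hand-coded-DDL-under-version-control approach above pairs naturally with a small migration runner that applies the scripts in order and records which ones have already run. This is a hypothetical sketch -- the table name, function name, and use of sqlite3 (for a runnable demonstration) are my own choices, not anything from the answer:

```python
import sqlite3

def apply_migrations(conn, migrations):
    """Apply (name, sql) pairs in order, skipping any already recorded."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS schema_migrations (name TEXT PRIMARY KEY)"
    )
    applied = {row[0] for row in conn.execute("SELECT name FROM schema_migrations")}
    for name, sql in migrations:
        if name in applied:
            continue  # already deployed against this database
        conn.executescript(sql)
        conn.execute("INSERT INTO schema_migrations (name) VALUES (?)", (name,))
    conn.commit()

conn = sqlite3.connect(":memory:")
migrations = [
    ("001_create_people", "CREATE TABLE people (id INTEGER PRIMARY KEY, name TEXT)"),
    ("002_add_email", "ALTER TABLE people ADD COLUMN email TEXT"),
]
apply_migrations(conn, migrations)
apply_migrations(conn, migrations)  # re-running applies nothing twice
cols = [r[1] for r in conn.execute("PRAGMA table_info(people)")]
```

Because deployment is just "run the runner against the target server", dev-to-live promotion becomes the same operation as checking out a tag of the code.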
588 | 633 | Best way to access Exchange using PHP? | I'm writing a CMS application in PHP and one of the requirements is that it must be able to interface with the customer's Exchange server. I've written up this functionality a few times before and have always used WebDAV to do it, but now I'm leaning away from that. I will be running the site on IIS OR Apache (no preference) on Windows server 2008. A few things I would need to do include adding contacts to a given user's address book, sending emails as a given user and running reports on contacts for a user. All of this is pretty easy to do with WebDAV, but if there is a better way that doesn't require any functionality that is likely to be deprecated any time soon. Any ideas? Update: Justin, I love the idea of using com objects, I just worry about maintaining a 3rd product to make everything work... John, I can write a web service in C# to interface with for these functions and access it with my PHP app, but it's also a little bit out of the way. So far, I'm not 100% convinced that either of these is better than WebDAV... Can anyone show me where I'm being silly? | Update as of 2020: Over a decade since this question and things have moved on. Microsft now has a Rest API that will allow you to easily access this data. Original Answer I have not used PHP to do this but have experience in using C# to achieve the same thing. The Outlook API is a way of automating Outlook rather than connecting to Exchange directly. I have previously taken this approach in a C# application and it does work although can be buggy. If you wish to connect directly to the Exchange server you will need to research extended MAPI. In the past I used this wrapper MAPIEx: Extended MAPI Wrapper. It is a C# project but I believe you can use some.NET code on a PHP5 Windows server. Alternatively it has a C++ core DLL that you may be a able to use. I have found it to be very good and there are some good example applications. 
Sorry for the delay no current way to keep track of posts yet. I do agree adding more layer on to your application and relying on 3rd party code can be scary (and rightfully so.) Today I read another interesting post tagged up as MAPI that is on a different subject. The key thing here though is that it has linked to this important MS article. I have been unaware of the issues until now on using managed code to interface to MAPI although the C++ code in the component should be unaffected by this error as it is unmanaged. This blog entry also suggests other ways to connect to MAPI/Exchange server. In this case due to these new facts https://www.php.net/imap may be the answer as suggested by the other user. | Best way to access Exchange using PHP? I'm writing a CMS application in PHP and one of the requirements is that it must be able to interface with the customer's Exchange server. I've written up this functionality a few times before and have always used WebDAV to do it, but now I'm leaning away from that. I will be running the site on IIS OR Apache (no preference) on Windows server 2008. A few things I would need to do include adding contacts to a given user's address book, sending emails as a given user and running reports on contacts for a user. All of this is pretty easy to do with WebDAV, but if there is a better way that doesn't require any functionality that is likely to be deprecated any time soon. Any ideas? Update: Justin, I love the idea of using com objects, I just worry about maintaining a 3rd product to make everything work... John, I can write a web service in C# to interface with for these functions and access it with my PHP app, but it's also a little bit out of the way. So far, I'm not 100% convinced that either of these is better than WebDAV... Can anyone show me where I'm being silly? | TITLE:
Best way to access Exchange using PHP?
QUESTION:
I'm writing a CMS application in PHP and one of the requirements is that it must be able to interface with the customer's Exchange server. I've written up this functionality a few times before and have always used WebDAV to do it, but now I'm leaning away from that. I will be running the site on IIS OR Apache (no preference) on Windows server 2008. A few things I would need to do include adding contacts to a given user's address book, sending emails as a given user and running reports on contacts for a user. All of this is pretty easy to do with WebDAV, but if there is a better way that doesn't require any functionality that is likely to be deprecated any time soon. Any ideas? Update: Justin, I love the idea of using com objects, I just worry about maintaining a 3rd product to make everything work... John, I can write a web service in C# to interface with for these functions and access it with my PHP app, but it's also a little bit out of the way. So far, I'm not 100% convinced that either of these is better than WebDAV... Can anyone show me where I'm being silly?
ANSWER:
Update as of 2020: Over a decade since this question and things have moved on. Microsoft now has a REST API that will allow you to easily access this data. Original Answer I have not used PHP to do this but have experience in using C# to achieve the same thing. The Outlook API is a way of automating Outlook rather than connecting to Exchange directly. I have previously taken this approach in a C# application and it does work, although it can be buggy. If you wish to connect directly to the Exchange server you will need to research extended MAPI. In the past I used this wrapper MAPIEx: Extended MAPI Wrapper. It is a C# project but I believe you can use some .NET code on a PHP5 Windows server. Alternatively it has a C++ core DLL that you may be able to use. I have found it to be very good and there are some good example applications. Sorry for the delay - no current way to keep track of posts yet. I do agree adding more layers on to your application and relying on 3rd party code can be scary (and rightfully so.) Today I read another interesting post tagged up as MAPI that is on a different subject. The key thing here though is that it has linked to this important MS article. I have been unaware of the issues until now on using managed code to interface to MAPI although the C++ code in the component should be unaffected by this error as it is unmanaged. This blog entry also suggests other ways to connect to MAPI/Exchange server. In this case due to these new facts https://www.php.net/imap may be the answer as suggested by the other user. | [
"php",
"windows",
"exchange-server",
"webdav",
"mapi"
] | 58 | 24 | 60,403 | 9 | 0 | 2008-08-03T00:03:58.510000 | 2008-08-03T07:50:37.617000 |
594 | 595 | cx_Oracle: How do I iterate over a result set? | There are several ways to iterate over a result set. What are the tradeoff of each? | The canonical way is to use the built-in cursor iterator. curs.execute('select * from people') for row in curs: print row You can use fetchall() to get all rows at once. for row in curs.fetchall(): print row It can be convenient to use this to create a Python list containing the values returned: curs.execute('select first_name from people') names = [row[0] for row in curs.fetchall()] This can be useful for smaller result sets, but can have bad side effects if the result set is large. You have to wait for the entire result set to be returned to your client process. You may eat up a lot of memory in your client to hold the built-up list. It may take a while for Python to construct and deconstruct the list which you are going to immediately discard anyways. If you know there's a single row being returned in the result set you can call fetchone() to get the single row. curs.execute('select max(x) from t') maxValue = curs.fetchone()[0] Finally, you can loop over the result set fetching one row at a time. In general, there's no particular advantage in doing this over using the iterator. row = curs.fetchone() while row: print row row = curs.fetchone() | cx_Oracle: How do I iterate over a result set? There are several ways to iterate over a result set. What are the tradeoff of each? | TITLE:
cx_Oracle: How do I iterate over a result set?
QUESTION:
There are several ways to iterate over a result set. What are the tradeoff of each?
ANSWER:
The canonical way is to use the built-in cursor iterator.

curs.execute('select * from people')
for row in curs:
    print row

You can use fetchall() to get all rows at once.

for row in curs.fetchall():
    print row

It can be convenient to use this to create a Python list containing the values returned:

curs.execute('select first_name from people')
names = [row[0] for row in curs.fetchall()]

This can be useful for smaller result sets, but can have bad side effects if the result set is large. You have to wait for the entire result set to be returned to your client process. You may eat up a lot of memory in your client to hold the built-up list. It may take a while for Python to construct and deconstruct the list which you are going to immediately discard anyways. If you know there's a single row being returned in the result set you can call fetchone() to get the single row.

curs.execute('select max(x) from t')
maxValue = curs.fetchone()[0]

Finally, you can loop over the result set fetching one row at a time. In general, there's no particular advantage in doing this over using the iterator.

row = curs.fetchone()
while row:
    print row
    row = curs.fetchone() | [
"python",
"sql",
"database",
"oracle",
"cx-oracle"
] | 55 | 56 | 60,015 | 3 | 0 | 2008-08-03T01:15:08.507000 | 2008-08-03T01:17:36.393000 |
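The fetch styles in the answer are part of the Python DB-API (PEP 249), so they can be demonstrated without an Oracle server: the standard library's sqlite3 module exposes the same cursor interface that cx_Oracle implements. A self-contained (Python 3) version of the three patterns:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
curs = conn.cursor()
curs.execute("CREATE TABLE people (first_name TEXT)")
curs.executemany("INSERT INTO people VALUES (?)", [("Ann",), ("Bob",), ("Cid",)])

# 1. The cursor iterator: rows are fetched lazily, one at a time.
curs.execute("SELECT first_name FROM people")
iterated = [row[0] for row in curs]

# 2. fetchall(): pulls the whole result set into a client-side list at once.
curs.execute("SELECT first_name FROM people")
all_rows = [row[0] for row in curs.fetchall()]

# 3. fetchone(): for queries known to return a single row.
curs.execute("SELECT count(*) FROM people")
count = curs.fetchone()[0]
```

The trade-off described in the answer shows up in pattern 2: fetchall() materializes every row in client memory, while the iterator streams them.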
601 | 602 | Robust Random Number Generation | I'm looking for a performant, reasonably robust RNG using no special hardware. It can use mathematical methods (Mersenne Twister, etc), it can "collect entropy" from the machine, whatever. On Linux/etc we have a drand48() which generates 48 random bits. I'd like a similar function/class for C++ or C# which can generate more than 32 bits of randomness and which low-order bits are equally as random as high-order bits. It doesn't have to be cryptographically secure but it must not use or be based on the C-language rand() or.NET System.Random. Any source code, links to sources, etc. would be appreciated! Failing that, what TYPE of RNG should I be looking for? | For C++, Boost.Random is probably what you're looking for. It has support for MT (among many other algorithms), and can collect entropy via the nondet_random class. Check it out!:-) | Robust Random Number Generation I'm looking for a performant, reasonably robust RNG using no special hardware. It can use mathematical methods (Mersenne Twister, etc), it can "collect entropy" from the machine, whatever. On Linux/etc we have a drand48() which generates 48 random bits. I'd like a similar function/class for C++ or C# which can generate more than 32 bits of randomness and which low-order bits are equally as random as high-order bits. It doesn't have to be cryptographically secure but it must not use or be based on the C-language rand() or.NET System.Random. Any source code, links to sources, etc. would be appreciated! Failing that, what TYPE of RNG should I be looking for? | TITLE:
Robust Random Number Generation
QUESTION:
I'm looking for a performant, reasonably robust RNG using no special hardware. It can use mathematical methods (Mersenne Twister, etc), it can "collect entropy" from the machine, whatever. On Linux/etc we have a drand48() which generates 48 random bits. I'd like a similar function/class for C++ or C# which can generate more than 32 bits of randomness and whose low-order bits are equally as random as high-order bits. It doesn't have to be cryptographically secure but it must not use or be based on the C-language rand() or .NET System.Random. Any source code, links to sources, etc. would be appreciated! Failing that, what TYPE of RNG should I be looking for?
ANSWER:
For C++, Boost.Random is probably what you're looking for. It has support for MT (among many other algorithms), and can collect entropy via the nondet_random class. Check it out!:-) | [
"c#",
"c++",
"random",
"mersenne-twister"
] | 42 | 31 | 2,145 | 5 | 0 | 2008-08-03T02:05:59.620000 | 2008-08-03T02:18:57.987000 |
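Although the question asks about C++/C#, the stated requirements -- more than 32 random bits, good low-order bits, not based on libc rand() -- map directly onto Python's standard library, which makes for a compact illustration of the two RNG types the answer mentions: a Mersenne Twister PRNG and an entropy-collecting (nondeterministic) generator. A sketch of both:

```python
import random

# Mersenne Twister (Python's default PRNG): seedable and reproducible,
# with equally good low- and high-order bits at any requested width.
mt = random.Random(12345)        # fixed seed for reproducibility
bits48 = mt.getrandbits(48)      # 48 random bits, analogous to drand48()

# OS-provided entropy (like Boost's nondet_random): not seedable,
# slower, but suitable when unpredictability matters.
sysrand = random.SystemRandom()
secure48 = sysrand.getrandbits(48)
```

The same split exists in the C++ world the answer points at: Boost.Random (and later std::mt19937 / std::random_device) pairs a deterministic engine with a nondeterministic entropy source.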
622 | 623 | Most efficient code for the first 10000 prime numbers? | I want to print the first 10000 prime numbers. Can anyone give me the most efficient code for this? Clarifications: It does not matter if your code is inefficient for n >10000. The size of the code does not matter. You cannot just hard code the values in any manner. | The Sieve of Atkin is probably what you're looking for, its upper bound running time is O(N/log log N). If you only run the numbers 1 more and 1 less than the multiples of 6, it could be even faster, as all prime numbers above 3 are 1 away from some multiple of six. Resource for my statement | Most efficient code for the first 10000 prime numbers? I want to print the first 10000 prime numbers. Can anyone give me the most efficient code for this? Clarifications: It does not matter if your code is inefficient for n >10000. The size of the code does not matter. You cannot just hard code the values in any manner. | TITLE:
Most efficient code for the first 10000 prime numbers?
QUESTION:
I want to print the first 10000 prime numbers. Can anyone give me the most efficient code for this? Clarifications: It does not matter if your code is inefficient for n >10000. The size of the code does not matter. You cannot just hard code the values in any manner.
ANSWER:
The Sieve of Atkin is probably what you're looking for, its upper bound running time is O(N/log log N). If you only run the numbers 1 more and 1 less than the multiples of 6, it could be even faster, as all prime numbers above 3 are 1 away from some multiple of six. Resource for my statement | [
"performance",
"algorithm",
"primes"
] | 65 | 50 | 72,844 | 31 | 0 | 2008-08-03T05:45:21.423000 | 2008-08-03T06:03:35.973000 |
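For comparison with the Sieve of Atkin suggested above, the simpler Sieve of Eratosthenes (O(N log log N)) is more than fast enough for the first 10000 primes: the 10000th prime is 104729, so sieving a little past that suffices. A minimal version -- the bound of 110000 is my own choice, not anything from the answer:

```python
def first_n_primes(n, limit=110000):
    """Sieve of Eratosthenes up to `limit`, returning the first n primes.
    The caller must pick `limit` past the n-th prime (104729 for n=10000)."""
    sieve = bytearray([1]) * (limit + 1)
    sieve[0] = sieve[1] = 0
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            # Knock out every multiple of i starting at i*i.
            sieve[i * i :: i] = bytearray(len(sieve[i * i :: i]))
    primes = [i for i in range(limit + 1) if sieve[i]]
    return primes[:n]

primes = first_n_primes(10000)
```

The 6k±1 trick mentioned in the answer (all primes above 3 sit next to a multiple of six) can roughly halve the candidate set, but at this size the plain sieve already finishes in milliseconds.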
626 | 723 | When to use lambda, when to use Proc.new? | In Ruby 1.8, there are subtle differences between proc/lambda on the one hand, and Proc.new on the other. What are those differences? Can you give guidelines on how to decide which one to choose? In Ruby 1.9, proc and lambda are different. What's the deal? | Another important but subtle difference between procs created with lambda and procs created with Proc.new is how they handle the return statement: In a lambda -created proc, the return statement returns only from the proc itself In a Proc.new -created proc, the return statement is a little more surprising: it returns control not just from the proc, but also from the method enclosing the proc! Here's lambda -created proc's return in action. It behaves in a way that you probably expect: def whowouldwin
mylambda = lambda {return "Freddy"} mylambda.call
# mylambda gets called and returns "Freddy", and execution # continues on the next line
return "Jason"
end
whowouldwin #=> "Jason" Now here's a Proc.new -created proc's return doing the same thing. You're about to see one of those cases where Ruby breaks the much-vaunted Principle of Least Surprise: def whowouldwin2
myproc = Proc.new {return "Freddy"} myproc.call
# myproc gets called and returns "Freddy", # but also returns control from whowhouldwin2! # The line below *never* gets executed.
return "Jason"
end
whowouldwin2 #=> "Freddy" Thanks to this surprising behavior (as well as less typing), I tend to favor using lambda over Proc.new when making procs. | When to use lambda, when to use Proc.new? In Ruby 1.8, there are subtle differences between proc/lambda on the one hand, and Proc.new on the other. What are those differences? Can you give guidelines on how to decide which one to choose? In Ruby 1.9, proc and lambda are different. What's the deal? | TITLE:
When to use lambda, when to use Proc.new?
QUESTION:
In Ruby 1.8, there are subtle differences between proc/lambda on the one hand, and Proc.new on the other. What are those differences? Can you give guidelines on how to decide which one to choose? In Ruby 1.9, proc and lambda are different. What's the deal?
ANSWER:
Another important but subtle difference between procs created with lambda and procs created with Proc.new is how they handle the return statement: In a lambda -created proc, the return statement returns only from the proc itself In a Proc.new -created proc, the return statement is a little more surprising: it returns control not just from the proc, but also from the method enclosing the proc! Here's lambda -created proc's return in action. It behaves in a way that you probably expect: def whowouldwin
mylambda = lambda {return "Freddy"} mylambda.call
# mylambda gets called and returns "Freddy", and execution # continues on the next line
return "Jason"
end
whowouldwin #=> "Jason" Now here's a Proc.new -created proc's return doing the same thing. You're about to see one of those cases where Ruby breaks the much-vaunted Principle of Least Surprise: def whowouldwin2
myproc = Proc.new {return "Freddy"} myproc.call
# myproc gets called and returns "Freddy", # but also returns control from whowouldwin2! # The line below *never* gets executed.
return "Jason"
end
whowouldwin2 #=> "Freddy" Thanks to this surprising behavior (as well as less typing), I tend to favor using lambda over Proc.new when making procs. | [
"ruby",
"lambda",
"proc"
] | 345 | 384 | 83,758 | 14 | 0 | 2008-08-03T06:40:54.120000 | 2008-08-03T15:21:52.760000 |
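A cross-language aside, not part of the original answer: Python offers no analogue of Proc.new's method-escaping return, so a nested Python function always behaves like the lambda case described above. The function names below simply mirror the answer's Ruby example.

```python
# Python analogue of the answer's Ruby examples (names mirror the answer).
# A `return` inside a nested function exits only that function, like a
# lambda-created proc; Python has no counterpart to Proc.new's behavior
# of also returning from the enclosing method.
def whowouldwin():
    def myproc():
        return "Freddy"  # returns from myproc only
    myproc()
    return "Jason"       # always reached

print(whowouldwin())  # Jason
```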
644 | 665 | Swap unique indexed column values in database | I have a database table and one of the fields (not the primary key) has a unique index on it. Now I want to swap values under this column for two rows. How could this be done? Two hacks I know are: Delete both rows and re-insert them. Update rows with some other value and swap and then update to actual value. But I don't want to go for these as they do not seem to be the appropriate solution to the problem. Could anyone help me out? | I think you should go for solution 2. There is no 'swap' function in any SQL variant I know of. If you need to do this regularly, I suggest solution 1, depending on how other parts of the software are using this data. You can have locking issues if you're not careful. But in short: there is no other solution than the ones you provided. | Swap unique indexed column values in database I have a database table and one of the fields (not the primary key) has a unique index on it. Now I want to swap values under this column for two rows. How could this be done? Two hacks I know are: Delete both rows and re-insert them. Update rows with some other value and swap and then update to actual value. But I don't want to go for these as they do not seem to be the appropriate solution to the problem. Could anyone help me out? | TITLE:
Swap unique indexed column values in database
QUESTION:
I have a database table and one of the fields (not the primary key) has a unique index on it. Now I want to swap values under this column for two rows. How could this be done? Two hacks I know are: Delete both rows and re-insert them. Update rows with some other value and swap and then update to actual value. But I don't want to go for these as they do not seem to be the appropriate solution to the problem. Could anyone help me out?
ANSWER:
I think you should go for solution 2. There is no 'swap' function in any SQL variant I know of. If you need to do this regularly, I suggest solution 1, depending on how other parts of the software are using this data. You can have locking issues if you're not careful. But in short: there is no other solution than the ones you provided. | [
"sql",
"database"
] | 73 | 17 | 19,618 | 12 | 0 | 2008-08-03T09:55:26.257000 | 2008-08-03T12:26:35.843000 |
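The "update with some other value and swap" hack (solution 2) can be sketched concretely. This is an illustrative sketch only: the table, column, and placeholder names are invented, and it uses Python's bundled sqlite3 in place of the asker's unspecified database.

```python
import sqlite3

# Illustrative schema: 'code' carries the unique index (all names invented).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, code TEXT UNIQUE)")
conn.executemany("INSERT INTO items VALUES (?, ?)", [(1, "A"), (2, "B")])

def swap_codes(conn, id1, id2):
    """Swap the unique 'code' values of two rows without ever violating
    the unique index: park one value on a placeholder first."""
    get = lambda i: conn.execute(
        "SELECT code FROM items WHERE id = ?", (i,)).fetchone()[0]
    v1, v2 = get(id1), get(id2)
    with conn:  # one transaction, so readers never see the placeholder
        conn.execute("UPDATE items SET code = '__swap_tmp__' WHERE id = ?", (id1,))
        conn.execute("UPDATE items SET code = ? WHERE id = ?", (v1, id2))
        conn.execute("UPDATE items SET code = ? WHERE id = ?", (v2, id1))

swap_codes(conn, 1, 2)
print(dict(conn.execute("SELECT id, code FROM items")))  # {1: 'B', 2: 'A'}
```

If two swaps can run concurrently they could collide on the fixed placeholder, which echoes the answer's warning about locking issues; in a real system the placeholder should be made unique per transaction, for example by embedding the row key.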
650 | 655 | Automatically update version number | I would like the version property of my application to be incremented for each build but I'm not sure on how to enable this functionality in Visual Studio (2005/2008). I have tried to specify the AssemblyVersion as 1.0.* but it doesn't get me exactly what I want. I'm also using a settings file and in earlier attempts when the assembly version changed my settings got reset to the default since the application looked for the settings file in another directory. I would like to be able to display a version number in the form of 1.1.38 so when a user finds a problem I can log the version they are using as well as tell them to upgrade if they have an old release. A short explanation of how the versioning works would also be appreciated. When does the build and revision number get incremented? | With the "Built in" stuff, you can't, as using 1.0.* or 1.0.0.* will replace the revision and build numbers with a coded date/timestamp, which is usually also a good way. For more info, see the Assembly Linker Documentation in the /v tag. As for automatically incrementing numbers, use the AssemblyInfo Task: AssemblyInfo Task This can be configured to automatically increment the build number. There are 2 Gotchas: Each of the 4 numbers in the Version string is limited to 65535. This is a Windows Limitation and unlikely to get fixed. Why are build numbers limited to 65535? 
Using it with Subversion requires a small change: Using MSBuild to generate assembly version info at build time (including SubVersion fix) Retrieving the Version number is then quite easy: Version v = Assembly.GetExecutingAssembly().GetName().Version; string About = string.Format(CultureInfo.InvariantCulture, @"YourApp Version {0}.{1}.{2} (r{3})", v.Major, v.Minor, v.Build, v.Revision); And, to clarify: In .NET, or at least in C#, the build is actually the THIRD number, not the fourth one as some people (for example Delphi Developers who are used to Major.Minor.Release.Build) might expect. In .NET, it's Major.Minor.Build.Revision. | Automatically update version number I would like the version property of my application to be incremented for each build but I'm not sure on how to enable this functionality in Visual Studio (2005/2008). I have tried to specify the AssemblyVersion as 1.0.* but it doesn't get me exactly what I want. I'm also using a settings file and in earlier attempts when the assembly version changed my settings got reset to the default since the application looked for the settings file in another directory. I would like to be able to display a version number in the form of 1.1.38 so when a user finds a problem I can log the version they are using as well as tell them to upgrade if they have an old release. A short explanation of how the versioning works would also be appreciated. When does the build and revision number get incremented? | TITLE:
Automatically update version number
QUESTION:
I would like the version property of my application to be incremented for each build but I'm not sure on how to enable this functionality in Visual Studio (2005/2008). I have tried to specify the AssemblyVersion as 1.0.* but it doesn't get me exactly what I want. I'm also using a settings file and in earlier attempts when the assembly version changed my settings got reset to the default since the application looked for the settings file in another directory. I would like to be able to display a version number in the form of 1.1.38 so when a user finds a problem I can log the version they are using as well as tell them to upgrade if they have an old release. A short explanation of how the versioning works would also be appreciated. When does the build and revision number get incremented?
ANSWER:
With the "Built in" stuff, you can't, as using 1.0.* or 1.0.0.* will replace the revision and build numbers with a coded date/timestamp, which is usually also a good way. For more info, see the Assembly Linker Documentation in the /v tag. As for automatically incrementing numbers, use the AssemblyInfo Task: AssemblyInfo Task This can be configured to automatically increment the build number. There are 2 Gotchas: Each of the 4 numbers in the Version string is limited to 65535. This is a Windows Limitation and unlikely to get fixed. Why are build numbers limited to 65535? Using it with Subversion requires a small change: Using MSBuild to generate assembly version info at build time (including SubVersion fix) Retrieving the Version number is then quite easy: Version v = Assembly.GetExecutingAssembly().GetName().Version; string About = string.Format(CultureInfo.InvariantCulture, @"YourApp Version {0}.{1}.{2} (r{3})", v.Major, v.Minor, v.Build, v.Revision); And, to clarify: In .NET, or at least in C#, the build is actually the THIRD number, not the fourth one as some people (for example Delphi Developers who are used to Major.Minor.Release.Build) might expect. In .NET, it's Major.Minor.Build.Revision. | [
"c#",
"visual-studio",
"versioning"
] | 115 | 99 | 83,634 | 8 | 0 | 2008-08-03T11:12:52.463000 | 2008-08-03T11:41:38.490000 |
651 | 725 | Checklist for IIS 6/ASP.NET Windows Authentication? | I've been having trouble getting my ASP.NET application to automatically log users into the Intranet site I'm building. No matter the googling or the experimentation I applied, there is always a login box displayed by IE7. I've got Windows authentication mode set in the Web.config, disabled anonymous access and configured the correct default domain in IIS, but it's still asking the user to log in and, more annoyingly, the user is required to provide the domain too ( DOMAIN\auser ), which is causing problems with non-technical visitors. Thank Zeus for password remembering functionality. I'm not the network administrator so it's possible that something about Active Directory is set up incorrectly, or it could just be me missing something very simple. Please note that I don't want to impersonate the user, I just need to know that the IPrincipal.Name property matches that of a valid record in my user database, hence authenticating the user to my application. To this end, it would be very useful to have a checklist of all configuration requirements for AD, ASP.NET and IIS to work together in this manner as a reference for debugging and hopefully reducing some user friction. | It sounds like you've covered all the server-side bases--maybe it's a client issue? I assume your users have integrated authentication enabled in IE7? (Tools -> Internet Options -> Advanced -> Security). This is enabled by default. Also, is your site correctly recognized by IE7 as being in the Local Intranet zone? The IE7 default is to allow automatic logon only in that zone, so users would be prompted if IE thinks your site is on the internet. I believe using a hostname with a dot in it causes IE to place the site into the Internet zone. | Checklist for IIS 6/ASP.NET Windows Authentication? I've been having trouble getting my ASP.NET application to automatically log users into the Intranet site I'm building. 
No matter the googling or the experimentation I applied, there is always a login box displayed by IE7. I've got Windows authentication mode set in the Web.config, disabled anonymous access and configured the correct default domain in IIS, but it's still asking the user to log in and, more annoyingly, the user is required to provide the domain too ( DOMAIN\auser ), which is causing problems with non-technical visitors. Thank Zeus for password remembering functionality. I'm not the network administrator so it's possible that something about Active Directory is set up incorrectly, or it could just be me missing something very simple. Please note that I don't want to impersonate the user, I just need to know that the IPrincipal.Name property matches that of a valid record in my user database, hence authenticating the user to my application. To this end, it would be very useful to have a checklist of all configuration requirements for AD, ASP.NET and IIS to work together in this manner as a reference for debugging and hopefully reducing some user friction. | TITLE:
Checklist for IIS 6/ASP.NET Windows Authentication?
QUESTION:
I've been having trouble getting my ASP.NET application to automatically log users into the Intranet site I'm building. No matter the googling or the experimentation I applied, there is always a login box displayed by IE7. I've got Windows authentication mode set in the Web.config, disabled anonymous access and configured the correct default domain in IIS, but it's still asking the user to log in and, more annoyingly, the user is required to provide the domain too ( DOMAIN\auser ), which is causing problems with non-technical visitors. Thank Zeus for password remembering functionality. I'm not the network administrator so it's possible that something about Active Directory is set up incorrectly, or it could just be me missing something very simple. Please note that I don't want to impersonate the user, I just need to know that the IPrincipal.Name property matches that of a valid record in my user database, hence authenticating the user to my application. To this end, it would be very useful to have a checklist of all configuration requirements for AD, ASP.NET and IIS to work together in this manner as a reference for debugging and hopefully reducing some user friction.
ANSWER:
It sounds like you've covered all the server-side bases--maybe it's a client issue? I assume your users have integrated authentication enabled in IE7? (Tools -> Internet Options -> Advanced -> Security). This is enabled by default. Also, is your site correctly recognized by IE7 as being in the Local Intranet zone? The IE7 default is to allow automatic logon only in that zone, so users would be prompted if IE thinks your site is on the internet. I believe using a hostname with a dot in it causes IE to place the site into the Internet zone. | [
"asp.net",
"iis",
"authentication",
"active-directory"
] | 34 | 20 | 6,880 | 3 | 0 | 2008-08-03T11:21:54.520000 | 2008-08-03T15:24:38.290000 |
657 | 669 | Encrypting Passwords | What is the fastest, yet secure way to encrypt passwords (in PHP preferably), and for whichever method you choose, is it portable? In other words, if I later migrate my website to a different server, will my passwords continue to work? The method I am using now, as I was told, is dependent on the exact versions of the libraries installed on the server. | If you are choosing an encryption method for your login system then speed is not your friend. Jeff had a to-and-fro with Thomas Ptacek about passwords and the conclusion was that you should use the slowest, most secure encryption method you can afford to. From Thomas Ptacek's blog: Speed is exactly what you don’t want in a password hash function. Modern password schemes are attacked with incremental password crackers. Incremental crackers don’t precalculate all possible cracked passwords. They consider each password hash individually, and they feed their dictionary through the password hash function the same way your PHP login page would. Rainbow table crackers like Ophcrack use space to attack passwords; incremental crackers like John the Ripper, Crack, and LC5 work with time: statistics and compute. The password attack game is scored in time taken to crack password X. With rainbow tables, that time depends on how big your table needs to be and how fast you can search it. With incremental crackers, the time depends on how fast you can make the password hash function run. The better you can optimize your password hash function, the faster your password hash function gets, the weaker your scheme is. MD5 and SHA1, even conventional block ciphers like DES, are designed to be fast. MD5, SHA1, and DES are weak password hashes. On modern CPUs, raw crypto building blocks like DES and MD5 can be bitsliced, vectorized, and parallelized to make password searches lightning fast. Game-over FPGA implementations cost only hundreds of dollars.
| Encrypting Passwords What is the fastest, yet secure way to encrypt passwords (in PHP preferably), and for whichever method you choose, is it portable? In other words, if I later migrate my website to a different server, will my passwords continue to work? The method I am using now, as I was told, is dependent on the exact versions of the libraries installed on the server. | TITLE:
Encrypting Passwords
QUESTION:
What is the fastest, yet secure way to encrypt passwords (in PHP preferably), and for whichever method you choose, is it portable? In other words, if I later migrate my website to a different server, will my passwords continue to work? The method I am using now, as I was told, is dependent on the exact versions of the libraries installed on the server.
ANSWER:
If you are choosing an encryption method for your login system then speed is not your friend. Jeff had a to-and-fro with Thomas Ptacek about passwords and the conclusion was that you should use the slowest, most secure encryption method you can afford to. From Thomas Ptacek's blog: Speed is exactly what you don’t want in a password hash function. Modern password schemes are attacked with incremental password crackers. Incremental crackers don’t precalculate all possible cracked passwords. They consider each password hash individually, and they feed their dictionary through the password hash function the same way your PHP login page would. Rainbow table crackers like Ophcrack use space to attack passwords; incremental crackers like John the Ripper, Crack, and LC5 work with time: statistics and compute. The password attack game is scored in time taken to crack password X. With rainbow tables, that time depends on how big your table needs to be and how fast you can search it. With incremental crackers, the time depends on how fast you can make the password hash function run. The better you can optimize your password hash function, the faster your password hash function gets, the weaker your scheme is. MD5 and SHA1, even conventional block ciphers like DES, are designed to be fast. MD5, SHA1, and DES are weak password hashes. On modern CPUs, raw crypto building blocks like DES and MD5 can be bitsliced, vectorized, and parallelized to make password searches lightning fast. Game-over FPGA implementations cost only hundreds of dollars. | [
"php",
"encryption",
"passwords"
] | 38 | 33 | 6,243 | 8 | 0 | 2008-08-03T11:50:33.137000 | 2008-08-03T12:48:36.657000 |
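As an illustrative aside that is not from the original answer (which predates these APIs being widely available), the "deliberately slow, salted hash" property argued for above is exactly what a key-derivation function such as PBKDF2 provides; the iteration count below is an arbitrary example value, to be tuned against your own hardware.

```python
import hashlib
import hmac
import os

ITERATIONS = 200_000  # arbitrary example; raise it until hashing is "slow enough"

def hash_password(password):
    """Salted PBKDF2-HMAC-SHA256; returns (salt, digest) for storage."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password, salt, digest):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("wrong guess", salt, digest))                   # False
```

Because the salt and iteration count travel with the stored digest rather than depending on a particular library version, this also speaks to the asker's portability concern when migrating servers.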
683 | 57,833 | Using 'in' to match an attribute of Python objects in an array | I don't remember whether I was dreaming or not but I seem to recall there being a function which allowed something like, foo in iter_attr(array of python objects, attribute name) I've looked over the docs but this kind of thing doesn't fall under any obvious listed headers | Using a list comprehension would build a temporary list, which could eat all your memory if the sequence being searched is large. Even if the sequence is not large, building the list means iterating over the whole of the sequence before in could start its search. The temporary list can be avoided by using a generator expression: foo = 12 foo in (obj.id for obj in bar) Now, as long as an object with obj.id == 12 appears near the start of bar, the search will be fast, even if bar is infinitely long. As @Matt suggested, it's a good idea to use hasattr if any of the objects in bar can be missing an id attribute: foo = 12 foo in (obj.id for obj in bar if hasattr(obj, 'id')) | Using 'in' to match an attribute of Python objects in an array I don't remember whether I was dreaming or not but I seem to recall there being a function which allowed something like, foo in iter_attr(array of python objects, attribute name) I've looked over the docs but this kind of thing doesn't fall under any obvious listed headers | TITLE:
Using 'in' to match an attribute of Python objects in an array
QUESTION:
I don't remember whether I was dreaming or not but I seem to recall there being a function which allowed something like, foo in iter_attr(array of python objects, attribute name) I've looked over the docs but this kind of thing doesn't fall under any obvious listed headers
ANSWER:
Using a list comprehension would build a temporary list, which could eat all your memory if the sequence being searched is large. Even if the sequence is not large, building the list means iterating over the whole of the sequence before in could start its search. The temporary list can be avoided by using a generator expression: foo = 12 foo in (obj.id for obj in bar) Now, as long as an object with obj.id == 12 appears near the start of bar, the search will be fast, even if bar is infinitely long. As @Matt suggested, it's a good idea to use hasattr if any of the objects in bar can be missing an id attribute: foo = 12 foo in (obj.id for obj in bar if hasattr(obj, 'id')) | [
"python",
"arrays",
"iteration"
] | 58 | 49 | 15,131 | 8 | 0 | 2008-08-03T13:19:16.983000 | 2008-09-11T22:42:14.047000 |
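The answer's generator-expression idiom can be run end to end. The Obj class and the sample bar list below are invented stand-ins for the question's "array of python objects".

```python
# Invented stand-ins for the question's objects; only the .id attribute matters.
class Obj:
    def __init__(self, id):
        self.id = id

bar = [Obj(3), Obj(12), Obj(7), object()]  # the last item has no .id

foo = 12
# Generator expression: no temporary list is built, the scan stops at the
# first match, and the hasattr() filter skips objects without the attribute.
found = foo in (obj.id for obj in bar if hasattr(obj, "id"))
print(found)  # True
```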
696 | 704 | Connect PHP to IBM i (AS/400) | I've got an upcoming project wherein I will need to connect our website ( PHP5/Apache 1.3/OpenBSD 4.1 ) to our back-end system running on an iSeries with OS400 V5R3 so that I can access some tables stored there. I've done some checking around but am running into some roadblocks. From what I've seen the DB2 extensions and DB2 software from IBM only run under Linux. I've tried compiling the extensions with all the software from IBM and even tried their precompiled ibm_db2 extension with no luck. IBM only supports Linux so I turned on the Linux emulation in the kernel but that didn't seem to help anything. If anyone has run across getting everything to run natively under OpenBSD that would be great, but what I think I may have to do is to set up a second server running CentOS with DB2 installed (most likely via ZendCore for IBM since it seems to do all this for me) and the driver so that I can set up a small transaction server that I can post against and get a JSON representation of the DB2 data that I need. Does the second option seem overkill or does anyone else have any better ideas? | Have you looked at connecting to the server using unixODBC? If I remember correctly it has support for IBM DB2 and compiles on OpenBSD. Check out http://www.php.net/odbc for more information regarding the PHP side. If you can't get that to work, the option to set up a web service on a Linux server may be all you can do. | Connect PHP to IBM i (AS/400) I've got an upcoming project wherein I will need to connect our website ( PHP5/Apache 1.3/OpenBSD 4.1 ) to our back-end system running on an iSeries with OS400 V5R3 so that I can access some tables stored there. I've done some checking around but am running into some roadblocks. From what I've seen the DB2 extensions and DB2 software from IBM only run under Linux. I've tried compiling the extensions with all the software from IBM and even tried their precompiled ibm_db2 extension with no luck.
IBM only supports Linux so I turned on the Linux emulation in the kernel but that didn't seem to help anything. If anyone has run across getting everything to run natively under OpenBSD that would be great, but what I think I may have to do is to set up a second server running CentOS with DB2 installed (most likely via ZendCore for IBM since it seems to do all this for me) and the driver so that I can set up a small transaction server that I can post against and get a JSON representation of the DB2 data that I need. Does the second option seem overkill or does anyone else have any better ideas? | TITLE:
Connect PHP to IBM i (AS/400)
QUESTION:
I've got an upcoming project wherein I will need to connect our website ( PHP5/Apache 1.3/OpenBSD 4.1 ) to our back-end system running on an iSeries with OS400 V5R3 so that I can access some tables stored there. I've done some checking around but am running into some roadblocks. From what I've seen the DB2 extensions and DB2 software from IBM only run under Linux. I've tried compiling the extensions with all the software from IBM and even tried their precompiled ibm_db2 extension with no luck. IBM only supports Linux so I turned on the Linux emulation in the kernel but that didn't seem to help anything. If anyone has run across getting everything to run natively under OpenBSD that would be great, but what I think I may have to do is to set up a second server running CentOS with DB2 installed (most likely via ZendCore for IBM since it seems to do all this for me) and the driver so that I can set up a small transaction server that I can post against and get a JSON representation of the DB2 data that I need. Does the second option seem overkill or does anyone else have any better ideas?
ANSWER:
Have you looked at connecting to the server using unixODBC? If I remember correctly it has support for IBM DB2 and compiles on OpenBSD. Check out http://www.php.net/odbc for more information regarding the PHP side. If you can't get that to work, the option to set up a web service on a Linux server may be all you can do. | [
"php",
"database",
"odbc",
"db2",
"ibm-midrange"
] | 37 | 18 | 9,109 | 8 | 0 | 2008-08-03T14:03:28.830000 | 2008-08-03T14:39:09.710000 |
709 | 713 | .NET testing framework advice | I'm looking to introduce a unit testing framework into the mix at my job. We're using Visual Studio 2005 (though we may be moving to 2008 within the next six months) and work primarily in C#. If the framework has some kind of IDE integration that would be best, but I'm open to frameworks that don't have integration but are still relatively simple to get set up. I'm going to get resistance to it one way or another, so if I can make sure what I'm pushing isn't a pain in the neck, that would help my case. The obvious choice from the research I've done so far points to NUnit, but I'd like to get the impressions of someone who's actually used it before recommending it to my team. Has anyone out there used NUnit? If so, are there any pitfalls or limitations of which I should be aware? Are there other good options out there? If so, and you've used both NUnit and the alternative, I'd greatly appreciate an idea of the strengths and weaknesses of each. | I think NUnit is your best bet. With TestDriven.NET, you get great integration within Visual Studio. (ReSharper also has a unit test runner if you're using it). NUnit is simple to use and follows an established paradigm. You'll also find plenty of projects, tutorials, and guides using it which always helps. Your other main choice is probably MbUnit, which is more and more positioning itself as the BDD framework of choice (in conjunction with Gallio ). | .NET testing framework advice I'm looking to introduce a unit testing framework into the mix at my job. We're using Visual Studio 2005 (though we may be moving to 2008 within the next six months) and work primarily in C#. If the framework has some kind of IDE integration that would be best, but I'm open to frameworks that don't have integration but are still relatively simple to get set up. I'm going to get resistance to it one way or another, so if I can make sure what I'm pushing isn't a pain in the neck, that would help my case.
The obvious choice from the research I've done so far points to NUnit, but I'd like to get the impressions of someone who's actually used it before recommending it to my team. Has anyone out there used NUnit? If so, are there any pitfalls or limitations of which I should be aware? Are there other good options out there? If so, and you've used both NUnit and the alternative, I'd greatly appreciate an idea of the strengths and weaknesses of each. | TITLE:
.NET testing framework advice
QUESTION:
I'm looking to introduce a unit testing framework into the mix at my job. We're using Visual Studio 2005 (though we may be moving to 2008 within the next six months) and work primarily in C#. If the framework has some kind of IDE integration that would be best, but I'm open to frameworks that don't have integration but are still relatively simple to get set up. I'm going to get resistance to it one way or another, so if I can make sure what I'm pushing isn't a pain in the neck, that would help my case. The obvious choice from the research I've done so far points to NUnit, but I'd like to get the impressions of someone who's actually used it before recommending it to my team. Has anyone out there used NUnit? If so, are there any pitfalls or limitations of which I should be aware? Are there other good options out there? If so, and you've used both NUnit and the alternative, I'd greatly appreciate an idea of the strengths and weaknesses of each.
ANSWER:
I think NUnit is your best bet. With TestDriven.NET, you get great integration within Visual Studio. (ReSharper also has a unit test runner if you're using it). NUnit is simple to use and follows an established paradigm. You'll also find plenty of projects, tutorials, and guides using it which always helps. Your other main choice is probably MbUnit, which is more and more positioning itself as the BDD framework of choice (in conjunction with Gallio ). | [
"c#",
".net",
"visual-studio",
"unit-testing",
"nunit"
] | 55 | 44 | 6,777 | 10 | 0 | 2008-08-03T14:53:53.550000 | 2008-08-03T14:59:20.993000 |
717 | 799 | Why doesn't VFP .NET OLEdb provider work in 64 bit Windows? | I wrote a Windows service using VB that reads some legacy data from Visual FoxPro databases to be inserted in SQL 2005. The problem is this used to run fine on Windows Server 2003 32-bit, but the client recently moved to Windows 2003 64-bit and now the service won't work. I'm getting a message that the VFP.NET OLEdb provider is not found. I researched and everything seems to point out that there is no solution. Any help, please... | Have you tried changing the target CPU to x86 instead of "Any CPU" in the advanced compiler options? I know that this solves some problems with other OLEDB providers by forcing the use of the 32-bit version. | Why doesn't VFP .NET OLEdb provider work in 64 bit Windows? I wrote a Windows service using VB that reads some legacy data from Visual FoxPro databases to be inserted in SQL 2005. The problem is this used to run fine on Windows Server 2003 32-bit, but the client recently moved to Windows 2003 64-bit and now the service won't work. I'm getting a message that the VFP.NET OLEdb provider is not found. I researched and everything seems to point out that there is no solution. Any help, please... | TITLE:
Why doesn't VFP .NET OLEdb provider work in 64 bit Windows?
QUESTION:
I wrote a Windows service using VB that reads some legacy data from Visual FoxPro databases to be inserted in SQL 2005. The problem is this used to run fine on Windows Server 2003 32-bit, but the client recently moved to Windows 2003 64-bit and now the service won't work. I'm getting a message that the VFP.NET OLEdb provider is not found. I researched and everything seems to point out that there is no solution. Any help, please...
ANSWER:
Have you tried changing the target CPU to x86 instead of "Any CPU" in the advanced compiler options? I know that this solves some problems with other OLEDB providers by forcing the use of the 32-bit version. | [
".net",
"sql-server-2005",
"oledb",
"legacy",
"visual-foxpro"
] | 31 | 15 | 11,995 | 3 | 0 | 2008-08-03T15:07:10.073000 | 2008-08-03T19:42:03.687000 |
742 | 33,957 | Class views in Django | Django view points to a function, which can be a problem if you want to change only a bit of functionality. Yes, I could have a million keyword arguments and even more if statements in the function, but I was thinking more of an object oriented approach. For example, I have a page that displays a user. This page is very similar to the page that displays a group, but it's still not so similar to just use another data model. Group also has members etc... One way would be to point views to class methods and then extend that class. Has anyone tried this approach or has any other idea? | I've created and used my own generic view classes, defining __call__ so an instance of the class is callable. I really like it; while Django's generic views allow some customization through keyword arguments, OO generic views (if their behavior is split into a number of separate methods) can have much more fine-grained customization via subclassing, which lets me repeat myself a lot less. (I get tired of rewriting the same create/update view logic anytime I need to tweak something Django's generic views don't quite allow). I've posted some code at djangosnippets.org. The only real downside I see is the proliferation of internal method calls, which may impact performance somewhat. I don't think this is much of a concern; it's rare that Python code execution would be your performance bottleneck in a web app. UPDATE: Django's own generic views are now class-based. UPDATE: FWIW, I've changed my opinion on class-based views since this answer was written. After having used them extensively on a couple of projects, I feel they tend to lead to code that is satisfyingly DRY to write, but very hard to read and maintain later, because functionality is spread across so many different places, and subclasses are so dependent on every implementation detail of the superclasses and mixins.
I now feel that TemplateResponse and view decorators are a better answer for decomposing view code. | Class views in Django Django view points to a function, which can be a problem if you want to change only a bit of functionality. Yes, I could have million keyword arguments and even more if statements in the function, but I was thinking more of an object oriented approach. For example, I have a page that displays a user. This page is very similar to page that displays a group, but it's still not so similar to just use another data model. Group also has members etc... One way would be to point views to class methods and then extend that class. Has anyone tried this approach or has any other idea? | TITLE:
Class views in Django
QUESTION:
Django view points to a function, which can be a problem if you want to change only a bit of functionality. Yes, I could have a million keyword arguments and even more if statements in the function, but I was thinking more of an object-oriented approach. For example, I have a page that displays a user. This page is very similar to a page that displays a group, but it's still not so similar to just use another data model. Group also has members etc... One way would be to point views to class methods and then extend that class. Has anyone tried this approach or has any other idea?
ANSWER:
I've created and used my own generic view classes, defining __call__ so an instance of the class is callable. I really like it; while Django's generic views allow some customization through keyword arguments, OO generic views (if their behavior is split into a number of separate methods) can have much more fine-grained customization via subclassing, which lets me repeat myself a lot less. (I get tired of rewriting the same create/update view logic anytime I need to tweak something Django's generic views don't quite allow). I've posted some code at djangosnippets.org. The only real downside I see is the proliferation of internal method calls, which may impact performance somewhat. I don't think this is much of a concern; it's rare that Python code execution would be your performance bottleneck in a web app. UPDATE: Django's own generic views are now class-based. UPDATE: FWIW, I've changed my opinion on class-based views since this answer was written. After having used them extensively on a couple of projects, I feel they tend to lead to code that is satisfyingly DRY to write, but very hard to read and maintain later, because functionality is spread across so many different places, and subclasses are so dependent on every implementation detail of the superclasses and mixins. I now feel that TemplateResponse and view decorators is a better answer for decomposing view code. | [
"python",
"django",
"view",
"oop"
] | 62 | 46 | 15,428 | 9 | 0 | 2008-08-03T15:55:28.633000 | 2008-08-29T04:29:22.053000 |
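The `__call__` pattern the answer describes can be sketched without any Django dependency. The class and hook names below (`DetailView`, `get_object`, `get_context`) are illustrative stand-ins, not Django APIs; the point is that behaviour split into small methods is customised by subclassing rather than by keyword arguments:

```python
# Framework-agnostic sketch of the "callable class as view" pattern:
# an instance defines __call__, so the URLconf can point at UserView().
class DetailView:
    template = "detail.html"

    def __call__(self, request, obj_id):
        obj = self.get_object(obj_id)
        context = self.get_context(obj)
        return self.render(context)

    def get_object(self, obj_id):
        # stand-in for a real database lookup
        return {"id": obj_id}

    def get_context(self, obj):
        return {"object": obj}

    def render(self, context):
        # stand-in for template rendering
        return f"render({self.template}, {context['object']['id']})"


class UserView(DetailView):
    template = "user_detail.html"

    def get_context(self, obj):
        # override just one step of the behaviour
        ctx = super().get_context(obj)
        ctx["is_user"] = True
        return ctx


view = UserView()               # the URLconf would map a pattern to this instance
print(view("fake-request", 7))  # -> render(user_detail.html, 7)
```

Because each step is its own method, a subclass tweaks only the step it cares about, which is the fine-grained customisation the answer contrasts with keyword-argument-driven generic views.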
752 | 755 | How to create a new object instance from a Type | One may not always know the Type of an object at compile-time, but may need to create an instance of the Type. How do you get a new object instance from a Type? | The Activator class within the root System namespace is pretty powerful. There are a lot of overloads for passing parameters to the constructor and such. Check out the documentation at: http://msdn.microsoft.com/en-us/library/system.activator.createinstance.aspx or (new path) https://learn.microsoft.com/en-us/dotnet/api/system.activator.createinstance Here are some simple examples: ObjectType instance = (ObjectType)Activator.CreateInstance(objectType);
ObjectType instance = (ObjectType)Activator.CreateInstance("MyAssembly","MyNamespace.ObjectType"); | How to create a new object instance from a Type One may not always know the Type of an object at compile-time, but may need to create an instance of the Type. How do you get a new object instance from a Type? | TITLE:
How to create a new object instance from a Type
QUESTION:
One may not always know the Type of an object at compile-time, but may need to create an instance of the Type. How do you get a new object instance from a Type?
ANSWER:
The Activator class within the root System namespace is pretty powerful. There are a lot of overloads for passing parameters to the constructor and such. Check out the documentation at: http://msdn.microsoft.com/en-us/library/system.activator.createinstance.aspx or (new path) https://learn.microsoft.com/en-us/dotnet/api/system.activator.createinstance Here are some simple examples: ObjectType instance = (ObjectType)Activator.CreateInstance(objectType);
ObjectType instance = (ObjectType)Activator.CreateInstance("MyAssembly","MyNamespace.ObjectType"); | [
"c#",
".net",
"performance",
"reflection",
"types"
] | 926 | 1,057 | 744,888 | 11 | 0 | 2008-08-03T16:29:03.607000 | 2008-08-03T16:35:34.233000 |
761 | 762 | Localising date format descriptors | What is the best way to localise a date format descriptor? As anyone from a culture which does not use the mm/dd/yyyy format knows, it is annoying to have to enter dates in this format. The.NET framework provides some very good localisation support, so it's trivial to parse dates according to the users culture, but you often want to also display a helpful hint as to the format required (especially to distinguish between yy and yyyy which is interchangeable in most cultures). What is the best way to do this in a way that make sense to most users (e.g. dd/M/yyy is confusing because of the change in case and the switching between one and two letters). | Here is my current method. Any suggestions? Regex singleMToDoubleRegex = new Regex("(? | Localising date format descriptors What is the best way to localise a date format descriptor? As anyone from a culture which does not use the mm/dd/yyyy format knows, it is annoying to have to enter dates in this format. The.NET framework provides some very good localisation support, so it's trivial to parse dates according to the users culture, but you often want to also display a helpful hint as to the format required (especially to distinguish between yy and yyyy which is interchangeable in most cultures). What is the best way to do this in a way that make sense to most users (e.g. dd/M/yyy is confusing because of the change in case and the switching between one and two letters). | TITLE:
Localising date format descriptors
QUESTION:
What is the best way to localise a date format descriptor? As anyone from a culture which does not use the mm/dd/yyyy format knows, it is annoying to have to enter dates in this format. The .NET framework provides some very good localisation support, so it's trivial to parse dates according to the user's culture, but you often want to also display a helpful hint as to the format required (especially to distinguish between yy and yyyy which is interchangeable in most cultures). What is the best way to do this in a way that makes sense to most users (e.g. dd/M/yyy is confusing because of the change in case and the switching between one and two letters).
ANSWER:
Here is my current method. Any suggestions? Regex singleMToDoubleRegex = new Regex("(? | [
".net",
"internationalization",
"date",
"globalization"
] | 22 | 2 | 1,340 | 7 | 0 | 2008-08-03T17:30:20.473000 | 2008-08-03T17:31:31.313000 |
766 | 1,619 | Python and MySQL | I can get Python to work with Postgresql but I cannot get it to work with MySQL. The main problem is that on the shared hosting account I have I do not have the ability to install things such as Django or PySQL, I generally fail when installing them on my computer so maybe it's good I can't install on the host. I found bpgsql really good because it does not require an install, it's a single file that I can look at, read and then call the functions of. Does anybody know of something like this for MySQL? | MySQLdb is what I have used before. If you host is using Python version 2.5 or higher, support for sqlite3 databases is built in (sqlite allows you to have a relational database that is simply a file in your filesystem). But buyer beware, sqlite is not suited for production, so it may depend what you are trying to do with it. Another option may be to call your host and complain, or change hosts. Honestly these days, any self respecting web host that supports python and mysql ought to have MySQLdb pre installed. | Python and MySQL I can get Python to work with Postgresql but I cannot get it to work with MySQL. The main problem is that on the shared hosting account I have I do not have the ability to install things such as Django or PySQL, I generally fail when installing them on my computer so maybe it's good I can't install on the host. I found bpgsql really good because it does not require an install, it's a single file that I can look at, read and then call the functions of. Does anybody know of something like this for MySQL? | TITLE:
Python and MySQL
QUESTION:
I can get Python to work with Postgresql but I cannot get it to work with MySQL. The main problem is that on the shared hosting account I have I do not have the ability to install things such as Django or PySQL, I generally fail when installing them on my computer so maybe it's good I can't install on the host. I found bpgsql really good because it does not require an install, it's a single file that I can look at, read and then call the functions of. Does anybody know of something like this for MySQL?
ANSWER:
MySQLdb is what I have used before. If your host is using Python version 2.5 or higher, support for sqlite3 databases is built in (sqlite allows you to have a relational database that is simply a file in your filesystem). But buyer beware, sqlite is not suited for production, so it may depend on what you are trying to do with it. Another option may be to call your host and complain, or change hosts. Honestly these days, any self-respecting web host that supports python and mysql ought to have MySQLdb pre-installed. | [
"python",
"mysql",
"postgresql",
"bpgsql"
] | 45 | 27 | 4,434 | 6 | 0 | 2008-08-03T17:44:07.450000 | 2008-08-04T21:54:11.857000 |
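The sqlite3 fallback the answer mentions needs no extra install, which is exactly the constraint in the question. A minimal round trip (shown in modern Python 3 syntax) looks like:

```python
# sqlite3 ships with Python (2.5+), so it works on a restrictive shared
# host with no install step.  An in-memory database keeps the demo
# self-contained; a file path would persist to disk instead.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES (?)", ("alice",))  # parameterised
conn.commit()

rows = conn.execute("SELECT id, name FROM users").fetchall()
print(rows)   # -> [(1, 'alice')]
conn.close()
```

As the answer warns, this is fine for light use but not a drop-in substitute for MySQL in production.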
769 | 781 | Solving a linear equation | I need to programmatically solve a system of linear equations in C, Objective C, or (if needed) C++. Here's an example of the equations: -44.3940 = a * 50.0 + b * 37.0 + tx -45.3049 = a * 43.0 + b * 39.0 + tx -44.9594 = a * 52.0 + b * 41.0 + tx From this, I'd like to get the best approximation for a, b, and tx. | Cramer's Rule and Gaussian Elimination are two good, general-purpose algorithms (also see Simultaneous Linear Equations ). If you're looking for code, check out GiNaC, Maxima, and SymbolicC++ (depending on your licensing requirements, of course). EDIT: I know you're working in C land, but I also have to put in a good word for SymPy (a computer algebra system in Python). You can learn a lot from its algorithms (if you can read a bit of python). Also, it's under the new BSD license, while most of the free math packages are GPL. | Solving a linear equation I need to programmatically solve a system of linear equations in C, Objective C, or (if needed) C++. Here's an example of the equations: -44.3940 = a * 50.0 + b * 37.0 + tx -45.3049 = a * 43.0 + b * 39.0 + tx -44.9594 = a * 52.0 + b * 41.0 + tx From this, I'd like to get the best approximation for a, b, and tx. | TITLE:
Solving a linear equation
QUESTION:
I need to programmatically solve a system of linear equations in C, Objective C, or (if needed) C++. Here's an example of the equations: -44.3940 = a * 50.0 + b * 37.0 + tx -45.3049 = a * 43.0 + b * 39.0 + tx -44.9594 = a * 52.0 + b * 41.0 + tx From this, I'd like to get the best approximation for a, b, and tx.
ANSWER:
Cramer's Rule and Gaussian Elimination are two good, general-purpose algorithms (also see Simultaneous Linear Equations ). If you're looking for code, check out GiNaC, Maxima, and SymbolicC++ (depending on your licensing requirements, of course). EDIT: I know you're working in C land, but I also have to put in a good word for SymPy (a computer algebra system in Python). You can learn a lot from its algorithms (if you can read a bit of python). Also, it's under the new BSD license, while most of the free math packages are GPL. | [
"math",
"linear-algebra",
"system",
"linear-equation"
] | 48 | 20 | 45,885 | 11 | 0 | 2008-08-03T18:14:24.267000 | 2008-08-03T18:37:24.003000 |
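Applying Cramer's Rule (one of the algorithms the answer names) to the question's system can be sketched in pure Python; `det3` and `cramer3` are hypothetical helper names, and for anything larger or overdetermined a library routine (Gaussian elimination, least squares) would be the better choice:

```python
# Cramer's Rule for a 3x3 system: each unknown is the determinant of the
# coefficient matrix with one column replaced by the right-hand side,
# divided by the determinant of the coefficient matrix itself.

def det3(m):
    """Determinant of a 3x3 matrix given as a sequence of rows."""
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def cramer3(coeffs, rhs):
    """Solve coeffs * x = rhs for a uniquely determined 3x3 system."""
    d = det3(coeffs)
    if d == 0:
        raise ValueError("singular system: no unique solution")
    cols = list(zip(*coeffs))            # columns of the matrix
    out = []
    for j in range(3):
        replaced = list(cols)
        replaced[j] = rhs                # swap column j for the RHS
        out.append(det3(list(zip(*replaced))) / d)
    return out

# The question's three equations in a, b, tx:
coeffs = [[50.0, 37.0, 1.0],
          [43.0, 39.0, 1.0],
          [52.0, 41.0, 1.0]]
rhs = [-44.3940, -45.3049, -44.9594]
a, b, tx = cramer3(coeffs, rhs)
print(a, b, tx)
```

With three equations and three unknowns the system here is exactly determined, so "best approximation" and exact solution coincide; with more equations than unknowns you would need a least-squares fit instead.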
773 | 7,286 | How do I use itertools.groupby()? | I haven't been able to find an understandable explanation of how to actually use Python's itertools.groupby() function. What I'm trying to do is this: Take a list - in this case, the children of an objectified lxml element Divide it into groups based on some criteria Then later iterate over each of these groups separately. I've reviewed the documentation, but I've had trouble trying to apply them beyond a simple list of numbers. So, how do I use of itertools.groupby()? Is there another technique I should be using? Pointers to good "prerequisite" reading would also be appreciated. | IMPORTANT NOTE: You may have to sort your data first. The part I didn't get is that in the example construction groups = [] uniquekeys = [] for k, g in groupby(data, keyfunc): groups.append(list(g)) # Store group iterator as a list uniquekeys.append(k) k is the current grouping key, and g is an iterator that you can use to iterate over the group defined by that grouping key. In other words, the groupby iterator itself returns iterators. Here's an example of that, using clearer variable names: from itertools import groupby
things = [("animal", "bear"), ("animal", "duck"), ("plant", "cactus"), ("vehicle", "speed boat"), ("vehicle", "school bus")]
for key, group in groupby(things, lambda x: x[0]): for thing in group: print("A %s is a %s." % (thing[1], key)) print("") This will give you the output: A bear is a animal. A duck is a animal. A cactus is a plant. A speed boat is a vehicle. A school bus is a vehicle. In this example, things is a list of tuples where the first item in each tuple is the group the second item belongs to. The groupby() function takes two arguments: (1) the data to group and (2) the function to group it with. Here, lambda x: x[0] tells groupby() to use the first item in each tuple as the grouping key. In the above for statement, groupby returns three (key, group iterator) pairs - once for each unique key. You can use the returned iterator to iterate over each individual item in that group. Here's a slightly different example with the same data, using a list comprehension: for key, group in groupby(things, lambda x: x[0]): listOfThings = " and ".join([thing[1] for thing in group]) print(key + "s: " + listOfThings + ".") This will give you the output: animals: bear and duck. plants: cactus. vehicles: speed boat and school bus. | How do I use itertools.groupby()? I haven't been able to find an understandable explanation of how to actually use Python's itertools.groupby() function. What I'm trying to do is this: Take a list - in this case, the children of an objectified lxml element Divide it into groups based on some criteria Then later iterate over each of these groups separately. I've reviewed the documentation, but I've had trouble trying to apply them beyond a simple list of numbers. So, how do I use of itertools.groupby()? Is there another technique I should be using? Pointers to good "prerequisite" reading would also be appreciated. | TITLE:
How do I use itertools.groupby()?
QUESTION:
I haven't been able to find an understandable explanation of how to actually use Python's itertools.groupby() function. What I'm trying to do is this: Take a list - in this case, the children of an objectified lxml element Divide it into groups based on some criteria Then later iterate over each of these groups separately. I've reviewed the documentation, but I've had trouble trying to apply them beyond a simple list of numbers. So, how do I use itertools.groupby()? Is there another technique I should be using? Pointers to good "prerequisite" reading would also be appreciated.
ANSWER:
IMPORTANT NOTE: You may have to sort your data first. The part I didn't get is that in the example construction groups = [] uniquekeys = [] for k, g in groupby(data, keyfunc): groups.append(list(g)) # Store group iterator as a list uniquekeys.append(k) k is the current grouping key, and g is an iterator that you can use to iterate over the group defined by that grouping key. In other words, the groupby iterator itself returns iterators. Here's an example of that, using clearer variable names: from itertools import groupby
things = [("animal", "bear"), ("animal", "duck"), ("plant", "cactus"), ("vehicle", "speed boat"), ("vehicle", "school bus")]
for key, group in groupby(things, lambda x: x[0]): for thing in group: print("A %s is a %s." % (thing[1], key)) print("") This will give you the output: A bear is a animal. A duck is a animal. A cactus is a plant. A speed boat is a vehicle. A school bus is a vehicle. In this example, things is a list of tuples where the first item in each tuple is the group the second item belongs to. The groupby() function takes two arguments: (1) the data to group and (2) the function to group it with. Here, lambda x: x[0] tells groupby() to use the first item in each tuple as the grouping key. In the above for statement, groupby returns three (key, group iterator) pairs - once for each unique key. You can use the returned iterator to iterate over each individual item in that group. Here's a slightly different example with the same data, using a list comprehension: for key, group in groupby(things, lambda x: x[0]): listOfThings = " and ".join([thing[1] for thing in group]) print(key + "s: " + listOfThings + ".") This will give you the output: animals: bear and duck. plants: cactus. vehicles: speed boat and school bus. | [
"python",
"python-itertools"
] | 694 | 867 | 462,776 | 15 | 0 | 2008-08-03T18:27:09.687000 | 2008-08-10T18:45:32.430000 |
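The "sort your data first" warning at the top of the answer is worth a demonstration: groupby() only merges *consecutive* items, so unsorted input yields the same key more than once.

```python
# Why you may have to sort first: groupby() groups only adjacent items.
from itertools import groupby

data = [("a", 1), ("b", 2), ("a", 3)]          # the "a" items are not adjacent

unsorted_keys = [k for k, _ in groupby(data, key=lambda t: t[0])]
print(unsorted_keys)   # -> ['a', 'b', 'a']    ("a" appears twice)

data.sort(key=lambda t: t[0])                  # sort by the same key function
sorted_keys = [k for k, _ in groupby(data, key=lambda t: t[0])]
print(sorted_keys)     # -> ['a', 'b']
```

Using the same key function for both the sort and the grouping is what guarantees one group per unique key.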
805 | 894 | ASP, need to use SFTP | This is ASP classic, not.Net. We have to get a way to SFTP into a server to upload and download a couple of files, kicked off by a user. What have other people used to do SFTP in ASP classic? Not necessarily opposed to purchasing a control. | If you have the ability to use WScript.Shell then you can just execute pscp.exe from the Putty package. Obviously this is less then ideal but it will get the job done and let you use SCP/SFTP in classic ASP. | ASP, need to use SFTP This is ASP classic, not.Net. We have to get a way to SFTP into a server to upload and download a couple of files, kicked off by a user. What have other people used to do SFTP in ASP classic? Not necessarily opposed to purchasing a control. | TITLE:
ASP, need to use SFTP
QUESTION:
This is ASP classic, not .Net. We have to get a way to SFTP into a server to upload and download a couple of files, kicked off by a user. What have other people used to do SFTP in ASP classic? Not necessarily opposed to purchasing a control.
ANSWER:
If you have the ability to use WScript.Shell then you can just execute pscp.exe from the Putty package. Obviously this is less than ideal but it will get the job done and let you use SCP/SFTP in classic ASP. | [
"asp-classic",
"sftp"
] | 17 | 8 | 2,854 | 6 | 0 | 2008-08-03T20:11:26.043000 | 2008-08-03T23:52:47.360000 |
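The same shell-out approach, transliterated to Python for illustration: the sketch only *builds* the pscp command line (the host, user, and paths are made up), and `subprocess.run(cmd)` would actually execute it. The `-sftp` and `-pw` options are the documented PuTTY pscp switches for forcing the SFTP protocol and supplying a password.

```python
# Build (but do not run) a pscp.exe command line, mirroring the
# WScript.Shell technique from the answer.
import subprocess  # shown for the real call; unused in the dry run

def pscp_upload_cmd(local_path, user, host, remote_path, password):
    # -sftp forces the SFTP protocol; -pw supplies the password
    return ["pscp", "-sftp", "-pw", password,
            local_path, f"{user}@{host}:{remote_path}"]

cmd = pscp_upload_cmd("report.csv", "deploy", "example.com",
                      "/incoming/report.csv", "s3cret")
print(" ".join(cmd))
# To execute for real: subprocess.run(cmd, check=True)
```

As with the ASP original, passing a password on the command line is visible to other local processes; key-based authentication via Pageant is the safer variant where available.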
810 | 820 | Visual Studio Setup Project - Per User Registry Settings | I'm trying to maintain a Setup Project in Visual Studio 2003 (yes, it's a legacy application). The problem we have at the moment is that we need to write registry entries to HKCU for every user on the computer. They need to be in the HKCU rather than HKLM because they are the default user settings, and they do change per user. My feeling is that This isn't possible This isn't something the installer should be doing, but something the application should be doing (after all what happens when a user profile is created after the install?). With that in mind, I still want to change as little as possible in the application, so my question is, is it possible to add registry entries for every user in a Visual Studio 2003 setup project? And, at the moment the project lists five registry root keys ( HKEY_CLASSES_ROOT, HKEY_CURRENT_USER, HKEY_LOCAL_MACHINE, HKEY_USERS, and User/Machine Hive). I don't really know anything about the Users root key, and haven't seen User/Machine Hive. Can anyone enlighten me on what they are? Perhaps they could solve my problem above. | First: Yes, this is something that belongs in the Application for the exact reson you specified: What happens after new user profiles are created? Sure, if you're using a domain it's possible to have some stuff put in the registry on creation, but this is not really a use case. The Application should check if there are seetings and use the default settings if not. That being said, it IS possible to change other users Keys through the HKEY_USERS Hive. I have no experience with the Visual Studio 2003 Setup Project, so here is a bit of (totally unrelated) VBScript code that might just give you an idea where to look: const HKEY_USERS = &H80000003 strComputer = "." 
Set objReg=GetObject("winmgmts:{impersonationLevel=impersonate}!\\" & strComputer & "\root\default:StdRegProv") strKeyPath = "" objReg.EnumKey HKEY_USERS, strKeyPath, arrSubKeys strKeyPath = "\Software\Microsoft\Windows\CurrentVersion\WinTrust\Trust Providers\Software Publishing" For Each subkey In arrSubKeys objReg.SetDWORDValue HKEY_USERS, subkey & strKeyPath, "State", 146944 Next (Code Courtesy of Jeroen Ritmeijer ) | Visual Studio Setup Project - Per User Registry Settings I'm trying to maintain a Setup Project in Visual Studio 2003 (yes, it's a legacy application). The problem we have at the moment is that we need to write registry entries to HKCU for every user on the computer. They need to be in the HKCU rather than HKLM because they are the default user settings, and they do change per user. My feeling is that This isn't possible This isn't something the installer should be doing, but something the application should be doing (after all what happens when a user profile is created after the install?). With that in mind, I still want to change as little as possible in the application, so my question is, is it possible to add registry entries for every user in a Visual Studio 2003 setup project? And, at the moment the project lists five registry root keys ( HKEY_CLASSES_ROOT, HKEY_CURRENT_USER, HKEY_LOCAL_MACHINE, HKEY_USERS, and User/Machine Hive). I don't really know anything about the Users root key, and haven't seen User/Machine Hive. Can anyone enlighten me on what they are? Perhaps they could solve my problem above. | TITLE:
Visual Studio Setup Project - Per User Registry Settings
QUESTION:
I'm trying to maintain a Setup Project in Visual Studio 2003 (yes, it's a legacy application). The problem we have at the moment is that we need to write registry entries to HKCU for every user on the computer. They need to be in the HKCU rather than HKLM because they are the default user settings, and they do change per user. My feeling is that This isn't possible This isn't something the installer should be doing, but something the application should be doing (after all what happens when a user profile is created after the install?). With that in mind, I still want to change as little as possible in the application, so my question is, is it possible to add registry entries for every user in a Visual Studio 2003 setup project? And, at the moment the project lists five registry root keys ( HKEY_CLASSES_ROOT, HKEY_CURRENT_USER, HKEY_LOCAL_MACHINE, HKEY_USERS, and User/Machine Hive). I don't really know anything about the Users root key, and haven't seen User/Machine Hive. Can anyone enlighten me on what they are? Perhaps they could solve my problem above.
ANSWER:
First: Yes, this is something that belongs in the Application for the exact reason you specified: What happens after new user profiles are created? Sure, if you're using a domain it's possible to have some stuff put in the registry on creation, but this is not really a use case. The Application should check if there are settings and use the default settings if not. That being said, it IS possible to change other users' Keys through the HKEY_USERS Hive. I have no experience with the Visual Studio 2003 Setup Project, so here is a bit of (totally unrelated) VBScript code that might just give you an idea where to look: const HKEY_USERS = &H80000003 strComputer = "." Set objReg=GetObject("winmgmts:{impersonationLevel=impersonate}!\\" & strComputer & "\root\default:StdRegProv") strKeyPath = "" objReg.EnumKey HKEY_USERS, strKeyPath, arrSubKeys strKeyPath = "\Software\Microsoft\Windows\CurrentVersion\WinTrust\Trust Providers\Software Publishing" For Each subkey In arrSubKeys objReg.SetDWORDValue HKEY_USERS, subkey & strKeyPath, "State", 146944 Next (Code Courtesy of Jeroen Ritmeijer ) | [
"windows",
"visual-studio",
"registry",
"installation"
] | 21 | 6 | 7,824 | 4 | 0 | 2008-08-03T20:35:01.987000 | 2008-08-03T20:48:47.263000 |
826 | 97,294 | Efficiently get sorted sums of a sorted list | You have an ascending list of numbers, what is the most efficient algorithm you can think of to get the ascending list of sums of every two numbers in that list. Duplicates in the resulting list are irrelevant, you can remove them or avoid them if you like. To be clear, I'm interested in the algorithm. Feel free to post code in any language and paradigm that you like. | Edit as of 2018: You should probably stop reading this. (But I can't delete it as it is accepted.) If you write out the sums like this: 1 4 5 6 8 9 --------------- 2 5 6 7 9 10 8 9 10 12 13 10 11 13 14 12 14 15 16 17 18 You'll notice that since M[i,j] <= M[i,j+1] and M[i,j] <= M[i+1,j], then you only need to examine the top left "corners" and choose the lowest one. e.g. only 1 top left corner, pick 2 only 1, pick 5 6 or 8, pick 6 7 or 8, pick 7 9 or 8, pick 8 9 or 9, pick both:) 10 or 10 or 10, pick all 12 or 11, pick 11 12 or 12, pick both 13 or 13, pick both 14 or 14, pick both 15 or 16, pick 15 only 1, pick 16 only 1, pick 17 only 1, pick 18 Of course, when you have lots of top left corners then this solution devolves. I'm pretty sure this problem is Ω(n²), because you have to calculate the sums for each M[i,j] -- unless someone has a better algorithm for the summation:) | Efficiently get sorted sums of a sorted list You have an ascending list of numbers, what is the most efficient algorithm you can think of to get the ascending list of sums of every two numbers in that list. Duplicates in the resulting list are irrelevant, you can remove them or avoid them if you like. To be clear, I'm interested in the algorithm. Feel free to post code in any language and paradigm that you like. | TITLE:
Efficiently get sorted sums of a sorted list
QUESTION:
You have an ascending list of numbers, what is the most efficient algorithm you can think of to get the ascending list of sums of every two numbers in that list. Duplicates in the resulting list are irrelevant, you can remove them or avoid them if you like. To be clear, I'm interested in the algorithm. Feel free to post code in any language and paradigm that you like.
ANSWER:
Edit as of 2018: You should probably stop reading this. (But I can't delete it as it is accepted.) If you write out the sums like this: 1 4 5 6 8 9 --------------- 2 5 6 7 9 10 8 9 10 12 13 10 11 13 14 12 14 15 16 17 18 You'll notice that since M[i,j] <= M[i,j+1] and M[i,j] <= M[i+1,j], then you only need to examine the top left "corners" and choose the lowest one. e.g. only 1 top left corner, pick 2 only 1, pick 5 6 or 8, pick 6 7 or 8, pick 7 9 or 8, pick 8 9 or 9, pick both:) 10 or 10 or 10, pick all 12 or 11, pick 11 12 or 12, pick both 13 or 13, pick both 14 or 14, pick both 15 or 16, pick 15 only 1, pick 16 only 1, pick 17 only 1, pick 18 Of course, when you have lots of top left corners then this solution devolves. I'm pretty sure this problem is Ω(n²), because you have to calculate the sums for each M[i,j] -- unless someone has a better algorithm for the summation:) | [
"algorithm",
"language-agnostic"
] | 20 | 13 | 4,508 | 8 | 0 | 2008-08-03T21:08:54.977000 | 2008-09-18T21:41:13.387000 |
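The matrix observation in the answer suggests a k-way merge: each row i of the implicit matrix (the sums xs[i] + xs[j] for j >= i) is already sorted, so a heap over the row fronts emits all pair sums in ascending order while holding only O(n) entries instead of materialising all n(n+1)/2 sums at once. A sketch:

```python
# Heap-based merge of the sorted rows of the pair-sum matrix.
# Total work is still O(n^2 log n) because every sum must be produced,
# but memory beyond the output is O(n).
import heapq

def sorted_pair_sums(xs):
    """Yield xs[i] + xs[j] for all i <= j, in ascending order (xs sorted)."""
    if not xs:
        return
    # one heap entry per row: (current sum, row index i, column index j)
    heap = [(xs[i] + xs[i], i, i) for i in range(len(xs))]
    heapq.heapify(heap)
    while heap:
        s, i, j = heapq.heappop(heap)
        yield s
        if j + 1 < len(xs):                      # advance along row i
            heapq.heappush(heap, (xs[i] + xs[j + 1], i, j + 1))

print(list(sorted_pair_sums([1, 4, 5, 6, 8, 9]))[:6])   # -> [2, 5, 6, 7, 8, 9]
```

Wrapping the generator in something that skips repeats (or feeding it to a set afterwards) handles the duplicates the question says are irrelevant.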
832 | 837 | How do I most elegantly express left join with aggregate SQL as LINQ query | SQL: SELECT u.id, u.name, isnull(MAX(h.dateCol), '1900-01-01') dateColWithDefault FROM universe u LEFT JOIN history h ON u.id=h.id AND h.dateCol | A solution, albeit one that defers handling of the null value to the code, could be: DateTime yesterday = DateTime.Now.Date.AddDays(-1); var collection= from u in db.Universe select new { u.id, u.name, MaxDate =(DateTime?) ( from h in db.History where u.Id == h.Id && h.dateCol < yesterday select h.dateCol ).Max() }; This does not produce exactly the same SQL, but does provide the same logical result. Translating "complex" SQL queries to LINQ is not always straightforward. | How do I most elegantly express left join with aggregate SQL as LINQ query SQL: SELECT u.id, u.name, isnull(MAX(h.dateCol), '1900-01-01') dateColWithDefault FROM universe u LEFT JOIN history h ON u.id=h.id AND h.dateCol | TITLE:
How do I most elegantly express left join with aggregate SQL as LINQ query
QUESTION:
SQL: SELECT u.id, u.name, isnull(MAX(h.dateCol), '1900-01-01') dateColWithDefault FROM universe u LEFT JOIN history h ON u.id=h.id AND h.dateCol
ANSWER:
A solution, albeit one that defers handling of the null value to the code, could be: DateTime yesterday = DateTime.Now.Date.AddDays(-1); var collection= from u in db.Universe select new { u.id, u.name, MaxDate =(DateTime?) ( from h in db.History where u.Id == h.Id && h.dateCol < yesterday select h.dateCol ).Max() }; This does not produce exactly the same SQL, but does provide the same logical result. Translating "complex" SQL queries to LINQ is not always straightforward. | [
"c#",
"linq",
"left-join"
] | 24 | 10 | 1,912 | 4 | 0 | 2008-08-03T21:22:52.717000 | 2008-08-03T21:31:17.863000 |
833 | 838 | Editing database records by multiple users | I have designed database tables (normalised, on an MS SQL server) and created a standalone windows front end for an application that will be used by a handful of users to add and edit information. We will add a web interface to allow searching accross our production area at a later date. I am concerned that if two users start editing the same record then the last to commit the update would be the 'winner' and important information may be lost. A number of solutions come to mind but I'm not sure if I am going to create a bigger headache. Do nothing and hope that two users are never going to be editing the same record at the same time. - Might never happed but what if it does? Editing routine could store a copy of the original data as well as the updates and then compare when the user has finished editing. If they differ show user and comfirm update - Would require two copies of data to be stored. Add last updated DATETIME column and check it matches when we update, if not then show differences. - requires new column in each of the relevant tables. Create an editing table that registers when users start editing a record that will be checked and prevent other users from editing same record. - would require carful thought of program flow to prevent deadlocks and records becoming locked if a user crashes out of the program. Are there any better solutions or should I go for one of these? | If you expect infrequent collisions, Optimistic Concurrency is probably your best bet. Scott Mitchell wrote a comprehensive tutorial on implementing that pattern: Implementing Optimistic Concurrency | Editing database records by multiple users I have designed database tables (normalised, on an MS SQL server) and created a standalone windows front end for an application that will be used by a handful of users to add and edit information. We will add a web interface to allow searching accross our production area at a later date. 
I am concerned that if two users start editing the same record then the last to commit the update would be the 'winner' and important information may be lost. A number of solutions come to mind but I'm not sure if I am going to create a bigger headache. Do nothing and hope that two users are never going to be editing the same record at the same time. - Might never happed but what if it does? Editing routine could store a copy of the original data as well as the updates and then compare when the user has finished editing. If they differ show user and comfirm update - Would require two copies of data to be stored. Add last updated DATETIME column and check it matches when we update, if not then show differences. - requires new column in each of the relevant tables. Create an editing table that registers when users start editing a record that will be checked and prevent other users from editing same record. - would require carful thought of program flow to prevent deadlocks and records becoming locked if a user crashes out of the program. Are there any better solutions or should I go for one of these? | TITLE:
Editing database records by multiple users
QUESTION:
I have designed database tables (normalised, on an MS SQL server) and created a standalone windows front end for an application that will be used by a handful of users to add and edit information. We will add a web interface to allow searching across our production area at a later date. I am concerned that if two users start editing the same record then the last to commit the update would be the 'winner' and important information may be lost. A number of solutions come to mind but I'm not sure if I am going to create a bigger headache. Do nothing and hope that two users are never going to be editing the same record at the same time. - Might never happen but what if it does? Editing routine could store a copy of the original data as well as the updates and then compare when the user has finished editing. If they differ show the user and confirm the update - Would require two copies of data to be stored. Add last updated DATETIME column and check it matches when we update, if not then show differences. - requires new column in each of the relevant tables. Create an editing table that registers when users start editing a record that will be checked and prevent other users from editing the same record. - would require careful thought of program flow to prevent deadlocks and records becoming locked if a user crashes out of the program. Are there any better solutions or should I go for one of these?
ANSWER:
If you expect infrequent collisions, Optimistic Concurrency is probably your best bet. Scott Mitchell wrote a comprehensive tutorial on implementing that pattern: Implementing Optimistic Concurrency | [
"sql-server",
"database"
] | 34 | 16 | 28,606 | 8 | 0 | 2008-08-03T21:23:41.077000 | 2008-08-03T21:31:40.187000 |
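The optimistic-concurrency pattern the answer recommends (and which the asker's last-updated-DATETIME idea approximates) can be sketched in a few lines. This is a hypothetical illustration using Python's sqlite3 with an explicit version counter rather than the tutorial's SQL Server code; the table and column names are invented.

```python
import sqlite3

def update_record(conn, record_id, new_name, expected_version):
    # The update only applies if the row still carries the version we read
    # earlier; a concurrent editor's commit bumps the version and makes our
    # WHERE clause match nothing.
    cur = conn.execute(
        "UPDATE items SET name = ?, version = version + 1 "
        "WHERE id = ? AND version = ?",
        (new_name, record_id, expected_version),
    )
    return cur.rowcount == 1  # False: someone else won; show a diff instead

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT, version INTEGER)")
conn.execute("INSERT INTO items VALUES (1, 'widget', 0)")

# Two users both read the row at version 0; the first commit succeeds...
assert update_record(conn, 1, "widget-a", 0) is True
# ...and the second is rejected instead of silently overwriting the first.
assert update_record(conn, 1, "widget-b", 0) is False
```

In SQL Server the same effect is commonly achieved with a rowversion/timestamp column compared in the UPDATE's WHERE clause, which is essentially the check sketched above.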
835 | 1,023 | CruiseControl.net, msbuild, /p:OutputPath and CCNetArtifactDirectory | I'm trying to set up CruiseControl.net at the moment. So far it works nice, but I have a Problem with the MSBuild Task. According to the Documentation, it passes CCNetArtifactDirectory to MSBuild. But how do I use it? I tried this: /noconsolelogger /p:OutputPath=$(CCNetArtifactDirectory)\test But that does not work. In fact, it kills the service with this error: ThoughtWorks.CruiseControl.Core.Config.Preprocessor.EvaluationException: Reference to unknown symbol CCNetArtifactDirectory Documentation is rather sparse, and Google mainly offers modifying the .sln Project file, which is what I want to avoid in order to be able to manually build this project later - I would really prefer /p:OutputPath. | The CCNetArtifactDirectory is passed to MSBuild by default, so you don't need to worry about it. MSBuild will place the build output in the "bin location" relevant to the working directory that you have specified. c:\WINDOWS\Microsoft.NET\Framework\v3.5\MSBuild.exe C:\data\projects\FooSolution\FooSolution.sln /noconsolelogger /p:Configuration=Debug So in the above example your build output will be put in C:\data\projects\FooSolution\[ProjectName]\bin\Debug. Should you want to output to a different location you may want to look at the <buildpublisher> tag in CCNET. C:\data\projects\FooSolution\FooProject\bin\Debug C:\published\FooSolution\ false This will allow you to publish your output to a different location. | CruiseControl.net, msbuild, /p:OutputPath and CCNetArtifactDirectory I'm trying to set up CruiseControl.net at the moment. So far it works nice, but I have a Problem with the MSBuild Task. According to the Documentation, it passes CCNetArtifactDirectory to MSBuild. But how do I use it? I tried this: /noconsolelogger /p:OutputPath=$(CCNetArtifactDirectory)\test But that does not work. 
In fact, it kills the service with this error: ThoughtWorks.CruiseControl.Core.Config.Preprocessor.EvaluationException: Reference to unknown symbol CCNetArtifactDirectory Documentation is rather sparse, and Google mainly offers modifying the .sln Project file, which is what I want to avoid in order to be able to manually build this project later - I would really prefer /p:OutputPath. | TITLE:
CruiseControl.net, msbuild, /p:OutputPath and CCNetArtifactDirectory
QUESTION:
I'm trying to set up CruiseControl.net at the moment. So far it works nice, but I have a Problem with the MSBuild Task. According to the Documentation, it passes CCNetArtifactDirectory to MSBuild. But how do I use it? I tried this: /noconsolelogger /p:OutputPath=$(CCNetArtifactDirectory)\test But that does not work. In fact, it kills the service with this error: ThoughtWorks.CruiseControl.Core.Config.Preprocessor.EvaluationException: Reference to unknown symbol CCNetArtifactDirectory Documentation is rather sparse, and Google mainly offers modifying the .sln Project file, which is what I want to avoid in order to be able to manually build this project later - I would really prefer /p:OutputPath.
ANSWER:
The CCNetArtifactDirectory is passed to MSBuild by default, so you don't need to worry about it. MSBuild will place the build output in the "bin location" relevant to the working directory that you have specified. c:\WINDOWS\Microsoft.NET\Framework\v3.5\MSBuild.exe C:\data\projects\FooSolution\FooSolution.sln /noconsolelogger /p:Configuration=Debug So in the above example your build output will be put in C:\data\projects\FooSolution\[ProjectName]\bin\Debug. Should you want to output to a different location you may want to look at the <buildpublisher> tag in CCNET. C:\data\projects\FooSolution\FooProject\bin\Debug C:\published\FooSolution\ false This will allow you to publish your output to a different location.
"msbuild",
"cruisecontrol.net"
] | 17 | 6 | 6,616 | 3 | 0 | 2008-08-03T21:25:09.763000 | 2008-08-04T04:45:12.497000 |
845 | 849 | How to detect which one of the defined fonts was used in a web page? | Suppose I have the following CSS rule in my page: body { font-family: Calibri, Trebuchet MS, Helvetica, sans-serif; } How could I detect which one of the defined fonts was used in the user's browser? For people wondering why I want to do this: the font I'm detecting contains glyphs that are not available in other fonts. If the user does not have the font, then I want it to display a link asking the user to download that font (so they can use my web application with the correct font). Currently, I am displaying the download font link for all users. I want to only display this for people who do not have the correct font installed. | I've seen it done in a kind of iffy, but pretty reliable way. Basically, an element is set to use a specific font and a string is set to that element. If the font set for the element does not exist, it takes the font of the parent element. So, what they do is measure the width of the rendered string. If it matches what they expected for the desired font as opposed to the derived font, it's present. This won't work for monospaced fonts. Here's where it came from: Javascript/CSS Font Detector (ajaxian.com; 12 Mar 2007) | How to detect which one of the defined fonts was used in a web page? Suppose I have the following CSS rule in my page: body { font-family: Calibri, Trebuchet MS, Helvetica, sans-serif; } How could I detect which one of the defined fonts was used in the user's browser? For people wondering why I want to do this: the font I'm detecting contains glyphs that are not available in other fonts. If the user does not have the font, then I want it to display a link asking the user to download that font (so they can use my web application with the correct font). Currently, I am displaying the download font link for all users. I want to only display this for people who do not have the correct font installed. | TITLE:
How to detect which one of the defined fonts was used in a web page?
QUESTION:
Suppose I have the following CSS rule in my page: body { font-family: Calibri, Trebuchet MS, Helvetica, sans-serif; } How could I detect which one of the defined fonts was used in the user's browser? For people wondering why I want to do this: the font I'm detecting contains glyphs that are not available in other fonts. If the user does not have the font, then I want it to display a link asking the user to download that font (so they can use my web application with the correct font). Currently, I am displaying the download font link for all users. I want to only display this for people who do not have the correct font installed.
ANSWER:
I've seen it done in a kind of iffy, but pretty reliable way. Basically, an element is set to use a specific font and a string is set to that element. If the font set for the element does not exist, it takes the font of the parent element. So, what they do is measure the width of the rendered string. If it matches what they expected for the desired font as opposed to the derived font, it's present. This won't work for monospaced fonts. Here's where it came from: Javascript/CSS Font Detector (ajaxian.com; 12 Mar 2007) | [
"javascript",
"html",
"css",
"fonts"
] | 157 | 82 | 55,224 | 10 | 0 | 2008-08-03T21:42:37.903000 | 2008-08-03T21:51:44.360000 |
855 | 858 | .Net XML comment into API Documentation | Is there an easy way to produce MSDN-style documentation from the Visual Studio XML output? I'm not patient enough to set up a good xslt for it because I know I'm not the first person to cross this bridge. Also, I tried setting up sandcastle recently, but it really made my eyes cross. Either I was missing something important in the process or it is just way too involved. I know somebody out there has a really nice dead-simple solution. I'm reiterating here because I think my formatting made that paragraph non-inviting to read: I gave sandcastle a try but had a really hard time getting it set up. What I really have in mind is something much simpler. That is, unless I just don't understand the sandcastle process. It seemed like an awful lot of extra baggage to me just to produce something nice for the testers to work with. | You're looking for Sandcastle Project Page: Sandcastle Releases Blog: Sandcastle Blog NDoc Code Documentation Generator for .NET used to be the tool of choice, but support has all but stopped. | .Net XML comment into API Documentation Is there an easy way to produce MSDN-style documentation from the Visual Studio XML output? I'm not patient enough to set up a good xslt for it because I know I'm not the first person to cross this bridge. Also, I tried setting up sandcastle recently, but it really made my eyes cross. Either I was missing something important in the process or it is just way too involved. I know somebody out there has a really nice dead-simple solution. I'm reiterating here because I think my formatting made that paragraph non-inviting to read: I gave sandcastle a try but had a really hard time getting it set up. What I really have in mind is something much simpler. That is, unless I just don't understand the sandcastle process. It seemed like an awful lot of extra baggage to me just to produce something nice for the testers to work with. | TITLE:
.Net XML comment into API Documentation
QUESTION:
Is there an easy way to produce MSDN-style documentation from the Visual Studio XML output? I'm not patient enough to set up a good xslt for it because I know I'm not the first person to cross this bridge. Also, I tried setting up sandcastle recently, but it really made my eyes cross. Either I was missing something important in the process or it is just way too involved. I know somebody out there has a really nice dead-simple solution. I'm reiterating here because I think my formatting made that paragraph non-inviting to read: I gave sandcastle a try but had a really hard time getting it set up. What I really have in mind is something much simpler. That is, unless I just don't understand the sandcastle process. It seemed like an awful lot of extra baggage to me just to produce something nice for the testers to work with.
ANSWER:
You're looking for Sandcastle Project Page: Sandcastle Releases Blog: Sandcastle Blog NDoc Code Documentation Generator for .NET used to be the tool of choice, but support has all but stopped.
"visual-studio",
"xslt",
"documentation",
"sandcastle",
"xml-comments"
] | 21 | 12 | 3,497 | 6 | 0 | 2008-08-03T22:03:37.567000 | 2008-08-03T22:12:52.820000 |
871 | 875 | Why is Git better than Subversion? | I've been using Subversion for a few years and after using SourceSafe, I just love Subversion. Combined with TortoiseSVN, I can't really imagine how it could be any better. Yet there's a growing number of developers claiming that Subversion has problems and that we should be moving to the new breed of distributed version control systems, such as Git. How does Git improve upon Subversion? | Git is not better than Subversion. But it is also not worse. It's different. The key difference is that it is decentralized. Imagine you are a developer on the road, you develop on your laptop and you want to have source control so that you can go back 3 hours. With Subversion, you have a Problem: The SVN Repository may be in a location you can't reach (in your company, and you don't have internet at the moment), you cannot commit. If you want to make a copy of your code, you have to literally copy/paste it. With Git, you do not have this problem. Your local copy is a repository, and you can commit to it and get all benefits of source control. When you regain connectivity to the main repository, you can commit against it. This looks good at first, but just keep in mind the added complexity to this approach. Git seems to be the "new, shiny, cool" thing. It's by no means bad (there is a reason Linus wrote it for the Linux Kernel development after all), but I feel that many people jump on the "Distributed Source Control" train just because it's new and is written by Linus Torvalds, without actually knowing why/if it's better. Subversion has Problems, but so does Git, Mercurial, CVS, TFS or whatever. Edit: So this answer is now a year old and still generates many upvotes, so I thought I'll add some more explanations. In the last year since writing this, Git has gained a lot of momentum and support, particularly since sites like GitHub really took off. I'm using both Git and Subversion nowadays and I'd like to share some personal insight. 
First of all, Git can be really confusing at first when working decentralized. What is a remote? and How to properly set up the initial repository? are two questions that come up at the beginning, especially compared to SVN's simple "svnadmin create", Git's "git init" can take the parameters --bare and --shared which seems to be the "proper" way to set up a centralized repository. There are reasons for this, but it adds complexity. The documentation of the "checkout" command is very confusing to people changing over - the "proper" way seems to be "git clone", while "git checkout" seems to switch branches. Git REALLY shines when you are decentralized. I have a server at home and a Laptop on the road, and SVN simply doesn't work well here. With SVN, I can't have local source control if I'm not connected to the repository (Yes, I know about SVK or about ways to copy the repo). With Git, that's the default mode anyway. It's an extra command though (git commit commits locally, whereas git push origin master pushes the master branch to the remote named "origin"). As said above: Git adds complexity. Two modes of creating repositories, checkout vs. clone, commit vs. push... You have to know which commands work locally and which work with "the server" (I'm assuming most people still like a central "master-repository"). Also, the tooling is still insufficient, at least on Windows. Yes, there is a Visual Studio AddIn, but I still use git bash with msysgit. SVN has the advantage that it's MUCH simpler to learn: There is your repository, all changes go towards it, if you know how to create, commit and checkout and you're ready to go and can pick up stuff like branching, update etc. later on. Git has the advantage that it's MUCH better suited if some developers are not always connected to the master repository. Also, it's much faster than SVN. 
And from what I hear, branching and merging support is a lot better (which is to be expected, as these are the core reasons it was written). This also explains why it gains so much buzz on the Internet, as Git is perfectly suited for Open Source projects: Just Fork it, commit your changes to your own Fork, and then ask the original project maintainer to pull your changes. With Git, this just works. Really, try it on Github, it's magic. What I also see are Git-SVN Bridges: The central repository is a Subversion repo, but developers locally work with Git and the bridge then pushes their changes to SVN. But even with this lengthy addition, I still stand by my core message: Git is not better or worse, it's just different. If you have the need for "Offline Source Control" and the willingness to spend some extra time learning it, it's fantastic. But if you have a strictly centralized Source Control and/or are struggling to introduce Source Control in the first place because your co-workers are not interested, then the simplicity and excellent tooling (at least on Windows) of SVN shine. | Why is Git better than Subversion? I've been using Subversion for a few years and after using SourceSafe, I just love Subversion. Combined with TortoiseSVN, I can't really imagine how it could be any better. Yet there's a growing number of developers claiming that Subversion has problems and that we should be moving to the new breed of distributed version control systems, such as Git. How does Git improve upon Subversion? | TITLE:
Why is Git better than Subversion?
QUESTION:
I've been using Subversion for a few years and after using SourceSafe, I just love Subversion. Combined with TortoiseSVN, I can't really imagine how it could be any better. Yet there's a growing number of developers claiming that Subversion has problems and that we should be moving to the new breed of distributed version control systems, such as Git. How does Git improve upon Subversion?
ANSWER:
Git is not better than Subversion. But it is also not worse. It's different. The key difference is that it is decentralized. Imagine you are a developer on the road, you develop on your laptop and you want to have source control so that you can go back 3 hours. With Subversion, you have a Problem: The SVN Repository may be in a location you can't reach (in your company, and you don't have internet at the moment), you cannot commit. If you want to make a copy of your code, you have to literally copy/paste it. With Git, you do not have this problem. Your local copy is a repository, and you can commit to it and get all benefits of source control. When you regain connectivity to the main repository, you can commit against it. This looks good at first, but just keep in mind the added complexity to this approach. Git seems to be the "new, shiny, cool" thing. It's by no means bad (there is a reason Linus wrote it for the Linux Kernel development after all), but I feel that many people jump on the "Distributed Source Control" train just because it's new and is written by Linus Torvalds, without actually knowing why/if it's better. Subversion has Problems, but so does Git, Mercurial, CVS, TFS or whatever. Edit: So this answer is now a year old and still generates many upvotes, so I thought I'll add some more explanations. In the last year since writing this, Git has gained a lot of momentum and support, particularly since sites like GitHub really took off. I'm using both Git and Subversion nowadays and I'd like to share some personal insight. First of all, Git can be really confusing at first when working decentralized. What is a remote? and How to properly set up the initial repository? are two questions that come up at the beginning, especially compared to SVN's simple "svnadmin create", Git's "git init" can take the parameters --bare and --shared which seems to be the "proper" way to set up a centralized repository. There are reasons for this, but it adds complexity. 
The documentation of the "checkout" command is very confusing to people changing over - the "proper" way seems to be "git clone", while "git checkout" seems to switch branches. Git REALLY shines when you are decentralized. I have a server at home and a Laptop on the road, and SVN simply doesn't work well here. With SVN, I can't have local source control if I'm not connected to the repository (Yes, I know about SVK or about ways to copy the repo). With Git, that's the default mode anyway. It's an extra command though (git commit commits locally, whereas git push origin master pushes the master branch to the remote named "origin"). As said above: Git adds complexity. Two modes of creating repositories, checkout vs. clone, commit vs. push... You have to know which commands work locally and which work with "the server" (I'm assuming most people still like a central "master-repository"). Also, the tooling is still insufficient, at least on Windows. Yes, there is a Visual Studio AddIn, but I still use git bash with msysgit. SVN has the advantage that it's MUCH simpler to learn: There is your repository, all changes go towards it, if you know how to create, commit and checkout and you're ready to go and can pick up stuff like branching, update etc. later on. Git has the advantage that it's MUCH better suited if some developers are not always connected to the master repository. Also, it's much faster than SVN. And from what I hear, branching and merging support is a lot better (which is to be expected, as these are the core reasons it was written). This also explains why it gains so much buzz on the Internet, as Git is perfectly suited for Open Source projects: Just Fork it, commit your changes to your own Fork, and then ask the original project maintainer to pull your changes. With Git, this just works. Really, try it on Github, it's magic. 
What I also see are Git-SVN Bridges: The central repository is a Subversion repo, but developers locally work with Git and the bridge then pushes their changes to SVN. But even with this lengthy addition, I still stand by my core message: Git is not better or worse, it's just different. If you have the need for "Offline Source Control" and the willingness to spend some extra time learning it, it's fantastic. But if you have a strictly centralized Source Control and/or are struggling to introduce Source Control in the first place because your co-workers are not interested, then the simplicity and excellent tooling (at least on Windows) of SVN shine. | [
"svn",
"git"
] | 393 | 547 | 925,767 | 30 | 0 | 2008-08-03T22:38:29.040000 | 2008-08-03T22:45:44.533000 |
879 | 885 | Are PHP Variables passed by value or by reference? | Are PHP variables passed by value or by reference? | It's by value according to the PHP Documentation. By default, function arguments are passed by value (so that if the value of the argument within the function is changed, it does not get changed outside of the function). To allow a function to modify its arguments, they must be passed by reference. To have an argument to a function always passed by reference, prepend an ampersand ( & ) to the argument name in the function definition. | Are PHP Variables passed by value or by reference? Are PHP variables passed by value or by reference? | TITLE:
Are PHP Variables passed by value or by reference?
QUESTION:
Are PHP variables passed by value or by reference?
ANSWER:
It's by value according to the PHP Documentation. By default, function arguments are passed by value (so that if the value of the argument within the function is changed, it does not get changed outside of the function). To allow a function to modify its arguments, they must be passed by reference. To have an argument to a function always passed by reference, prepend an ampersand ( & ) to the argument name in the function definition. | [
"php",
"variables",
"pass-by-reference",
"pass-by-value"
] | 314 | 377 | 223,799 | 16 | 0 | 2008-08-03T22:51:41.447000 | 2008-08-03T23:06:56.423000 |
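The default behaviour the PHP documentation describes (reassigning an argument inside a function leaves the caller's variable untouched) can be illustrated outside PHP as well. A small sketch in Python; note that only the rebinding case carries over, since PHP's explicit &-references and its copy-on-assignment array semantics have no direct Python counterpart.

```python
def increment(n):
    # Rebinding the parameter affects only the local name, just as a PHP
    # function's by-value parameter is a copy of the caller's variable.
    n = n + 1
    return n

x = 5
assert increment(x) == 6
assert x == 5  # the caller's variable is unchanged
```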
888 | 890 | How do you debug PHP scripts? | How do you debug PHP scripts? I am aware of basic debugging such as using the Error Reporting. The breakpoint debugging in PHPEclipse is also quite useful. What is the best (in terms of fast and easy) way to debug in phpStorm or any other IDE? | Try Eclipse PDT to set up an Eclipse environment that has debugging features like you mentioned. The ability to step into the code is a much better way to debug than the old method of var_dump and print at various points to see where your flow goes wrong. When all else fails though and all I have is SSH and vim I still var_dump() / die() to find where the code goes south. | How do you debug PHP scripts? How do you debug PHP scripts? I am aware of basic debugging such as using the Error Reporting. The breakpoint debugging in PHPEclipse is also quite useful. What is the best (in terms of fast and easy) way to debug in phpStorm or any other IDE? | TITLE:
How do you debug PHP scripts?
QUESTION:
How do you debug PHP scripts? I am aware of basic debugging such as using the Error Reporting. The breakpoint debugging in PHPEclipse is also quite useful. What is the best (in terms of fast and easy) way to debug in phpStorm or any other IDE?
ANSWER:
Try Eclipse PDT to set up an Eclipse environment that has debugging features like you mentioned. The ability to step into the code is a much better way to debug than the old method of var_dump and print at various points to see where your flow goes wrong. When all else fails though and all I have is SSH and vim I still var_dump() / die() to find where the code goes south.
"php",
"eclipse",
"debugging",
"phpstorm",
"xdebug"
] | 402 | 145 | 357,674 | 30 | 0 | 2008-08-03T23:18:21.627000 | 2008-08-03T23:28:39.890000 |
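The answerer's last-resort technique, var_dump() / die(), is just "print the value's type and structure, then stop". The printing half can be sketched in Python; debug_dump is an invented helper name, not a real library function.

```python
import pprint

def debug_dump(value):
    # Rough stand-in for PHP's var_dump(): report the type alongside a
    # readable rendering of the structure (the "die" half would be sys.exit).
    return f"{type(value).__name__}: {pprint.pformat(value)}"

assert debug_dump([1, {"a": 2}]) == "list: [1, {'a': 2}]"
```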
904 | 911 | How to break word after special character like Hyphens (-) | Given a relatively simple CSS: div {
width: 150px;
} 12333-2333-233-23339392-332332323 How do I make it so that the string stays constrained to the width of 150, and wraps to a new line on the hyphen? | Replace your hyphens with this: &shy; It's called a "soft" hyphen. div {
width: 150px;
} 12333&shy;2333&shy;233&shy;23339392&shy;332332323 | How to break word after special character like Hyphens (-) Given a relatively simple CSS: div {
width: 150px;
} 12333-2333-233-23339392-332332323 How do I make it so that the string stays constrained to the width of 150, and wraps to a new line on the hyphen? | TITLE:
How to break word after special character like Hyphens (-)
QUESTION:
Given a relatively simple CSS: div {
width: 150px;
} 12333-2333-233-23339392-332332323 How do I make it so that the string stays constrained to the width of 150, and wraps to a new line on the hyphen?
ANSWER:
Replace your hyphens with this: &shy; It's called a "soft" hyphen. div {
width: 150px;
} 12333&shy;2333&shy;233&shy;23339392&shy;332332323 | [
"html",
"css",
"text"
] | 79 | 75 | 22,538 | 11 | 0 | 2008-08-04T00:17:34.690000 | 2008-08-04T00:25:11.227000 |
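The character the answer refers to is the soft hyphen, written &shy; in HTML (Unicode U+00AD): browsers treat it as an invisible break opportunity and only render a visible hyphen when the line actually wraps there. A quick look at the character itself, in Python:

```python
SOFT_HYPHEN = "\u00ad"  # the character behind the &shy; entity

number = "12333-2333-233-23339392-332332323"
# Swap the visible hyphens for soft hyphens, as the answer suggests:
breakable = number.replace("-", SOFT_HYPHEN)

assert "-" not in breakable
# When the line does not wrap, the soft hyphens stay invisible, so the digits
# appear to run together:
assert breakable.replace(SOFT_HYPHEN, "") == "12333233323323339392332332323"
```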
905 | 942 | Client collation and SQL Server 2005 | We're upgrading an existing program from Win2k/SQL Server 2k to Windows 2003 and SQL Server 2005 as well as purchasing a new program that also uses 2k3/2k5. The vendor says that for us to host both databases we need to get the Enterprise version because the software's clients use different collation for the connections and only Enterprise supports this. I cannot find anything on MS's site to support this and, honestly, don't want to pay the extra for Enterprise if the Standard edition works. Am I missing some not talked about feature of SQL Server or is this, as I suspect, a vendor trying to upsell me? | All editions of SQL Server 2000/2005/2008 support having multiple databases, each using their own collation sequence. You don't need the Enterprise version. When you have a database that uses a collation sequence that is different from the default collation for the database server, you will need to take some extra precautions if you use temporary tables and/or table variables. Temp tables/variables live in the tempdb database, which uses the collation sequence used by the master database. Just remember to use " COLLATE database_default " when defining character fields in the temp tables/variables. I blogged about that not too long ago, if you want some more details. | Client collation and SQL Server 2005 We're upgrading an existing program from Win2k/SQL Server 2k to Windows 2003 and SQL Server 2005 as well as purchasing a new program that also uses 2k3/2k5. The vendor says that for us to host both databases we need to get the Enterprise version because the software's clients use different collation for the connections and only Enterprise supports this. I cannot find anything on MS's site to support this and, honestly, don't want to pay the extra for Enterprise if the Standard edition works. Am I missing some not talked about feature of SQL Server or is this, as I suspect, a vendor trying to upsell me? | TITLE:
Client collation and SQL Server 2005
QUESTION:
We're upgrading an existing program from Win2k/SQL Server 2k to Windows 2003 and SQL Server 2005 as well as purchasing a new program that also uses 2k3/2k5. The vendor says that for us to host both databases we need to get the Enterprise version because the software's clients use different collation for the connections and only Enterprise supports this. I cannot find anything on MS's site to support this and, honestly, don't want to pay the extra for Enterprise if the Standard edition works. Am I missing some not talked about feature of SQL Server or is this, as I suspect, a vendor trying to upsell me?
ANSWER:
All editions of SQL Server 2000/2005/2008 support having multiple databases, each using their own collation sequence. You don't need the Enterprise version. When you have a database that uses a collation sequence that is different from the default collation for the database server, you will need to take some extra precautions if you use temporary tables and/or table variables. Temp tables/variables live in the tempdb database, which uses the collation sequence used by the master database. Just remember to use " COLLATE database_default " when defining character fields in the temp tables/variables. I blogged about that not too long ago, if you want some more details.
"sql-server",
"sql-server-2005",
"windows-server-2003"
] | 15 | 7 | 868 | 1 | 0 | 2008-08-04T00:17:55.767000 | 2008-08-04T01:13:06.247000 |
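The answer's underlying point, that a collation is attached to a database or column and comparisons follow it, can be sketched with sqlite3's built-in collations. This is an analogy, not SQL Server: COLLATE database_default has no direct SQLite equivalent, and the table and column names here are invented.

```python
import sqlite3

db = sqlite3.connect(":memory:")
# One case-sensitive (binary) column and one case-insensitive column:
db.execute("CREATE TABLE t (name_cs TEXT COLLATE BINARY, name_ci TEXT COLLATE NOCASE)")
db.execute("INSERT INTO t VALUES ('Apple', 'Apple')")

# Each comparison follows the collation declared on its column:
cs = db.execute("SELECT COUNT(*) FROM t WHERE name_cs = 'apple'").fetchone()[0]
ci = db.execute("SELECT COUNT(*) FROM t WHERE name_ci = 'apple'").fetchone()[0]
assert cs == 0  # binary collation: 'Apple' != 'apple'
assert ci == 1  # case-insensitive collation: matches
```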
930 | 951 | How do I connect to a database and loop over a recordset in C#? | What's the simplest way to connect and query a database for a set of records in C#? | @Goyuix -- that's excellent for something written from memory. Tested it here -- found the connection wasn't opened. Otherwise very nice. using System.Data.OleDb;...
using (OleDbConnection conn = new OleDbConnection()) { conn.ConnectionString = "Provider=sqloledb;Data Source=yourServername\\yourInstance;Initial Catalog=databaseName;Integrated Security=SSPI;";
using (OleDbCommand cmd = new OleDbCommand()) { conn.Open(); cmd.Connection = conn; cmd.CommandText = "Select * from yourTable";
using (OleDbDataReader dr = cmd.ExecuteReader()) { while (dr.Read()) { Console.WriteLine(dr["columnName"]); } } } } | How do I connect to a database and loop over a recordset in C#? What's the simplest way to connect and query a database for a set of records in C#? | TITLE:
How do I connect to a database and loop over a recordset in C#?
QUESTION:
What's the simplest way to connect and query a database for a set of records in C#?
ANSWER:
@Goyuix -- that's excellent for something written from memory. Tested it here -- found the connection wasn't opened. Otherwise very nice. using System.Data.OleDb;...
using (OleDbConnection conn = new OleDbConnection()) { conn.ConnectionString = "Provider=sqloledb;Data Source=yourServername\\yourInstance;Initial Catalog=databaseName;Integrated Security=SSPI;";
using (OleDbCommand cmd = new OleDbCommand()) { conn.Open(); cmd.Connection = conn; cmd.CommandText = "Select * from yourTable";
using (OleDbDataReader dr = cmd.ExecuteReader()) { while (dr.Read()) { Console.WriteLine(dr["columnName"]); } } } } | [
"c#",
"database",
"loops",
"connection"
] | 49 | 35 | 32,655 | 8 | 0 | 2008-08-04T00:47:25.143000 | 2008-08-04T01:31:31.157000 |
935 | 938 | String literals and escape characters in postgresql | Attempting to insert an escape character into a table results in a warning. For example: create table EscapeTest (text varchar(50));
insert into EscapeTest (text) values ('This is the first part \n And this is the second'); Produces the warning: WARNING: nonstandard use of escape in a string literal ( Using PSQL 8.2 ) Anyone know how to get around this? | Partially. The text is inserted, but the warning is still generated. I found a discussion that indicated the text needed to be preceded with 'E', as such: insert into EscapeTest (text) values (E'This is the first part \n And this is the second'); This suppressed the warning, but the text was still not being returned correctly. When I added the additional slash as Michael suggested, it worked. As such: insert into EscapeTest (text) values (E'This is the first part \\n And this is the second'); | String literals and escape characters in postgresql Attempting to insert an escape character into a table results in a warning. For example: create table EscapeTest (text varchar(50));
insert into EscapeTest (text) values ('This is the first part \n And this is the second'); Produces the warning: WARNING: nonstandard use of escape in a string literal ( Using PSQL 8.2 ) Anyone know how to get around this? | TITLE:
String literals and escape characters in postgresql
QUESTION:
Attempting to insert an escape character into a table results in a warning. For example: create table EscapeTest (text varchar(50));
insert into EscapeTest (text) values ('This is the first part \n And this is the second'); Produces the warning: WARNING: nonstandard use of escape in a string literal ( Using PSQL 8.2 ) Anyone know how to get around this?
ANSWER:
Partially. The text is inserted, but the warning is still generated. I found a discussion that indicated the text needed to be preceded with 'E', as such: insert into EscapeTest (text) values (E'This is the first part \n And this is the second'); This suppressed the warning, but the text was still not being returned correctly. When I added the additional slash as Michael suggested, it worked. As such: insert into EscapeTest (text) values (E'This is the first part \\n And this is the second'); | [
"string",
"postgresql",
"escaping"
] | 142 | 158 | 383,391 | 5 | 0 | 2008-08-04T01:00:24.837000 | 2008-08-04T01:07:03.233000 |
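A related way to sidestep escape-string headaches entirely (not mentioned in the answer above) is to pass values as query parameters, so the driver handles quoting and no backslash doubling is needed. A minimal sketch using Python's built-in sqlite3 module as a stand-in for a real PostgreSQL connection; the same idea applies to PostgreSQL drivers such as psycopg2, which use `%s` placeholders instead of `?`:

```python
import sqlite3

# An in-memory database stands in for the real server in this sketch.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE EscapeTest (text VARCHAR(50))")

# The backslash-n below is stored literally; no escape-string warnings
# and no doubling of backslashes required.
value = "This is the first part \\n And this is the second"
conn.execute("INSERT INTO EscapeTest (text) VALUES (?)", (value,))

stored = conn.execute("SELECT text FROM EscapeTest").fetchone()[0]
print(stored == value)  # True: the value round-trips unchanged
```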
944 | 1,016 | Unhandled Exception Handler in .NET 1.1 | I'm maintaining a .NET 1.1 application and one of the things I've been tasked with is making sure the user doesn't see any unfriendly error notifications. I've added handlers to Application.ThreadException and AppDomain.CurrentDomain.UnhandledException, which do get called. My problem is that the standard CLR error dialog is still displayed (before the exception handler is called). Jeff talks about this problem on his blog here and here. But there's no solution. So what is the standard way in .NET 1.1 to handle uncaught exceptions and display a friendly dialog box? Jeff's response was marked as the correct answer because the link he provided has the most complete information on how to do what's required. | Oh, in Windows Forms you definitely should be able to get it to work. The only thing you have to watch out for is things happening on different threads. I have an old Code Project article here which should help: User Friendly Exception Handling | Unhandled Exception Handler in .NET 1.1 I'm maintaining a .NET 1.1 application and one of the things I've been tasked with is making sure the user doesn't see any unfriendly error notifications. I've added handlers to Application.ThreadException and AppDomain.CurrentDomain.UnhandledException, which do get called. My problem is that the standard CLR error dialog is still displayed (before the exception handler is called). Jeff talks about this problem on his blog here and here. But there's no solution. So what is the standard way in .NET 1.1 to handle uncaught exceptions and display a friendly dialog box? Jeff's response was marked as the correct answer because the link he provided has the most complete information on how to do what's required. | TITLE:
Unhandled Exception Handler in .NET 1.1
QUESTION:
I'm maintaining a .NET 1.1 application and one of the things I've been tasked with is making sure the user doesn't see any unfriendly error notifications. I've added handlers to Application.ThreadException and AppDomain.CurrentDomain.UnhandledException, which do get called. My problem is that the standard CLR error dialog is still displayed (before the exception handler is called). Jeff talks about this problem on his blog here and here. But there's no solution. So what is the standard way in .NET 1.1 to handle uncaught exceptions and display a friendly dialog box? Jeff's response was marked as the correct answer because the link he provided has the most complete information on how to do what's required.
ANSWER:
Oh, in Windows Forms you definitely should be able to get it to work. The only thing you have to watch out for is things happening on different threads. I have an old Code Project article here which should help: User Friendly Exception Handling | [
"c#",
".net",
"exception"
] | 31 | 13 | 5,152 | 6 | 0 | 2008-08-04T01:15:38.067000 | 2008-08-04T04:31:02.557000 |
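The two handlers the question mentions are typically wired up as below. This is a hedged C# sketch for a .NET 1.1 WinForms app, not code from the original post: `MainForm` and the message text are placeholders, and, as the question notes, the CLR may still show its own dialog for non-UI-thread exceptions before `UnhandledException` fires.

```csharp
using System;
using System.Threading;
using System.Windows.Forms;

public class Program
{
    [STAThread]
    public static void Main()
    {
        // UI-thread exceptions are routed here instead of the CLR dialog.
        Application.ThreadException +=
            new ThreadExceptionEventHandler(OnThreadException);
        // Exceptions from other threads; in .NET 1.1 this is notification-only.
        AppDomain.CurrentDomain.UnhandledException +=
            new UnhandledExceptionEventHandler(OnUnhandledException);
        Application.Run(new MainForm());  // MainForm is a placeholder
    }

    private static void OnThreadException(object sender, ThreadExceptionEventArgs e)
    {
        MessageBox.Show("Sorry, something went wrong: " + e.Exception.Message,
                        "Error", MessageBoxButtons.OK, MessageBoxIcon.Error);
    }

    private static void OnUnhandledException(object sender, UnhandledExceptionEventArgs e)
    {
        MessageBox.Show("Fatal error: " + e.ExceptionObject.ToString(),
                        "Error", MessageBoxButtons.OK, MessageBoxIcon.Error);
    }
}
```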
947 | 1,028 | How to use a mutex in Visual Basic | I have imported the kernel32 library. So, I have the createMutex function available but I am not quite sure of the various parameters and return values. This is classic Visual Basic, not Visual Basic.NET but I can probably work with either language in the form of an answer. | The VB code looks something like this: hMutex = CreateMutex(ByVal 0&, 1, ByVal 0&) The first parameter is a pointer to a SECURITY_ATTRIBUTES structure. If you don't know what it is, you don't need it. Pass NULL (0). The second parameter is TRUE (non-zero, or 1) if the calling thread should take ownership of the mutex. FALSE otherwise. The third parameter is the mutex name and may be NULL (0), as shown. If you need a named mutex, pass the name (anything unique) in. Not sure whether the VB wrapper marshals the length-prefixed VB string type ( BSTR ) over to a null-terminated Ascii/Unicode string; if not, you'll need to do that and numerous examples are out there. Good luck! | How to use a mutex in Visual Basic I have imported the kernel32 library. So, I have the createMutex function available but I am not quite sure of the various parameters and return values. This is classic Visual Basic, not Visual Basic.NET but I can probably work with either language in the form of an answer. | TITLE:
How to use a mutex in Visual Basic
QUESTION:
I have imported the kernel32 library. So, I have the createMutex function available but I am not quite sure of the various parameters and return values. This is classic Visual Basic, not Visual Basic.NET but I can probably work with either language in the form of an answer.
ANSWER:
The VB code looks something like this: hMutex = CreateMutex(ByVal 0&, 1, ByVal 0&) The first parameter is a pointer to a SECURITY_ATTRIBUTES structure. If you don't know what it is, you don't need it. Pass NULL (0). The second parameter is TRUE (non-zero, or 1) if the calling thread should take ownership of the mutex. FALSE otherwise. The third parameter is the mutex name and may be NULL (0), as shown. If you need a named mutex, pass the name (anything unique) in. Not sure whether the VB wrapper marshals the length-prefixed VB string type ( BSTR ) over to a null-terminated Ascii/Unicode string; if not, you'll need to do that and numerous examples are out there. Good luck! | [
"vb6",
"mutex"
] | 27 | 8 | 9,182 | 3 | 0 | 2008-08-04T01:22:49.830000 | 2008-08-04T04:58:40.300000 |
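The answer above assumes CreateMutex has already been declared. A hedged VB6 sketch of how that declaration and a simple use might look; the mutex name is a placeholder, and the ANSI entry point (CreateMutexA) is assumed since classic VB Declare strings marshal as null-terminated ANSI:

```vb
' Hypothetical declarations; the SECURITY_ATTRIBUTES argument is skipped by
' passing ByVal 0& (NULL), as the answer above suggests.
Private Declare Function CreateMutex Lib "kernel32" Alias "CreateMutexA" _
    (ByVal lpMutexAttributes As Long, ByVal bInitialOwner As Long, _
     ByVal lpName As String) As Long
Private Declare Function ReleaseMutex Lib "kernel32" (ByVal hMutex As Long) As Long
Private Declare Function CloseHandle Lib "kernel32" (ByVal hObject As Long) As Long

Dim hMutex As Long
' Named mutex, owned by the caller on creation (second argument = 1 = TRUE):
hMutex = CreateMutex(0&, 1, "MyUniqueMutexName")
' ... protected work happens here ...
ReleaseMutex hMutex
CloseHandle hMutex
```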
972 | 2,982 | Adding a method to an existing object instance in Python | How do I add a method to an existing object (i.e., not in the class definition) in Python? I understand that it's not generally considered good practice to do so, except in some cases. | In Python, there is a difference between functions and bound methods. >>> def foo():... print "foo"... >>> class A:... def bar( self ):... print "bar"... >>> a = A() >>> foo <function foo at 0x...> >>> a.bar <bound method A.bar of <__main__.A instance at 0x...>> >>> Bound methods have been "bound" (how descriptive) to an instance, and that instance will be passed as the first argument whenever the method is called. Callables that are attributes of a class (as opposed to an instance) are still unbound, though, so you can modify the class definition whenever you want: >>> def fooFighters( self ):... print "fooFighters"... >>> A.fooFighters = fooFighters >>> a2 = A() >>> a2.fooFighters <bound method A.fooFighters of <__main__.A instance at 0x...>> >>> a2.fooFighters() fooFighters Previously defined instances are updated as well (as long as they haven't overridden the attribute themselves): >>> a.fooFighters() fooFighters The problem comes when you want to attach a method to a single instance: >>> def barFighters( self ):... print "barFighters"... >>> a.barFighters = barFighters >>> a.barFighters() Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: barFighters() takes exactly 1 argument (0 given) The function is not automatically bound when it's attached directly to an instance: >>> a.barFighters <function barFighters at 0x...> To bind it, we can use the MethodType function in the types module: >>> import types >>> a.barFighters = types.MethodType( barFighters, a ) >>> a.barFighters <bound method A.barFighters of <__main__.A instance at 0x...>> >>> a.barFighters() barFighters This time other instances of the class have not been affected: >>> a2.barFighters() Traceback (most recent call last): File "<stdin>", line 1, in <module> AttributeError: A instance has no attribute 'barFighters' More information can be found by reading about descriptors and metaclass programming.
| Adding a method to an existing object instance in Python How do I add a method to an existing object (i.e., not in the class definition) in Python? I understand that it's not generally considered good practice to do so, except in some cases. | TITLE:
Adding a method to an existing object instance in Python
QUESTION:
How do I add a method to an existing object (i.e., not in the class definition) in Python? I understand that it's not generally considered good practice to do so, except in some cases.
ANSWER:
In Python, there is a difference between functions and bound methods. >>> def foo():... print "foo"... >>> class A:... def bar( self ):... print "bar"... >>> a = A() >>> foo <function foo at 0x...> >>> a.bar <bound method A.bar of <__main__.A instance at 0x...>> >>> Bound methods have been "bound" (how descriptive) to an instance, and that instance will be passed as the first argument whenever the method is called. Callables that are attributes of a class (as opposed to an instance) are still unbound, though, so you can modify the class definition whenever you want: >>> def fooFighters( self ):... print "fooFighters"... >>> A.fooFighters = fooFighters >>> a2 = A() >>> a2.fooFighters <bound method A.fooFighters of <__main__.A instance at 0x...>> >>> a2.fooFighters() fooFighters Previously defined instances are updated as well (as long as they haven't overridden the attribute themselves): >>> a.fooFighters() fooFighters The problem comes when you want to attach a method to a single instance: >>> def barFighters( self ):... print "barFighters"... >>> a.barFighters = barFighters >>> a.barFighters() Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: barFighters() takes exactly 1 argument (0 given) The function is not automatically bound when it's attached directly to an instance: >>> a.barFighters <function barFighters at 0x...> To bind it, we can use the MethodType function in the types module: >>> import types >>> a.barFighters = types.MethodType( barFighters, a ) >>> a.barFighters <bound method A.barFighters of <__main__.A instance at 0x...>> >>> a.barFighters() barFighters This time other instances of the class have not been affected: >>> a2.barFighters() Traceback (most recent call last): File "<stdin>", line 1, in <module> AttributeError: A instance has no attribute 'barFighters' More information can be found by reading about descriptors and metaclass programming. | [
"python",
"oop",
"methods",
"monkeypatching"
] | 855 | 1,171 | 385,254 | 19 | 0 | 2008-08-04T02:17:51.780000 | 2008-08-06T00:33:35.063000 |
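The transcript above is Python 2; the same per-instance binding works unchanged in Python 3 (only the repr strings differ). A runnable sketch of the answer's core point, with snake_case names substituted for the original barFighters example:

```python
import types

class A:
    pass

def bar_fighters(self):
    return "barFighters"

a = A()
a2 = A()

# Bind the function to this one instance only.
a.bar_fighters = types.MethodType(bar_fighters, a)
print(a.bar_fighters())             # barFighters

# Other instances of the class are unaffected.
print(hasattr(a2, "bar_fighters"))  # False

# Attaching to the class instead binds it for every instance.
A.bar_fighters = bar_fighters
print(a2.bar_fighters())            # barFighters
```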