question_id int64 4 6.31M | answer_id int64 7 6.31M | title stringlengths 9 150 | question_body stringlengths 0 28.8k | answer_body stringlengths 60 27.2k | question_text stringlengths 40 28.9k | combined_text stringlengths 124 39.6k | tags listlengths 1 6 | question_score int64 0 26.3k | answer_score int64 0 28.8k | view_count int64 15 14M | answer_count int64 0 182 | favorite_count int64 0 32 | question_creation_date stringdate 2008-07-31 21:42:52 2011-06-10 18:12:18 | answer_creation_date stringdate 2008-07-31 22:17:57 2011-06-10 18:14:17 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
50,571 | 727,993 | Windows XP support for Remote NDIS | I'm looking at developing a device which will need to support Ethernet over USB (hosted in Linux, XP, and Vista). As I understand it, Vista and Linux support the industry-standard USB CDC. However, in classic Windows style, XP only supports its own Remote NDIS. So, now I'm thinking of just bowing down and doing it over RNDIS, as opposed to rolling my own CDC driver for XP. I've been reading some older documentation that says even XP is pretty buggy with NDIS (surprise!). Does anyone have experience with XP's RNDIS drivers? Are they safe for product development? Any insight would be much appreciated. | After doing my own research and testing, I found that a single NDIS device works reasonably well. However, if you need to support multiple NDIS devices at all, you are out of luck. My system became extremely unstable and was essentially unusable. This was very reproducible. I would not recommend NDIS in any type of multiple-device scenario. | Windows XP support for Remote NDIS I'm looking at developing a device which will need to support Ethernet over USB (hosted in Linux, XP, and Vista). As I understand it, Vista and Linux support the industry-standard USB CDC. However, in classic Windows style, XP only supports its own Remote NDIS. So, now I'm thinking of just bowing down and doing it over RNDIS, as opposed to rolling my own CDC driver for XP. I've been reading some older documentation that says even XP is pretty buggy with NDIS (surprise!). Does anyone have experience with XP's RNDIS drivers? Are they safe for product development? Any insight would be much appreciated. | TITLE:
Windows XP support for Remote NDIS
QUESTION:
I'm looking at developing a device which will need to support Ethernet over USB (hosted in Linux, XP, and Vista). As I understand it, Vista and Linux support the industry-standard USB CDC. However, in classic Windows style, XP only supports its own Remote NDIS. So, now I'm thinking of just bowing down and doing it over RNDIS, as opposed to rolling my own CDC driver for XP. I've been reading some older documentation that says even XP is pretty buggy with NDIS (surprise!). Does anyone have experience with XP's RNDIS drivers? Are they safe for product development? Any insight would be much appreciated.
ANSWER:
After doing my own research and testing, I found that a single NDIS device works reasonably well. However, if you need to support multiple NDIS devices at all, you are out of luck. My system became extremely unstable and was essentially unusable. This was very reproducible. I would not recommend NDIS in any type of multiple-device scenario. | [
"windows",
"embedded",
"usb",
"drivers",
"ndis"
] | 1 | 0 | 3,019 | 4 | 0 | 2008-09-08T20:18:40.207000 | 2009-04-07T23:32:23.843000 |
50,579 | 50,593 | __doPostBack not rendering on postback | I'm having a strange problem. I have to use GetPostBackEventReference to force a postback, but it only works the first time; after the first postback, the .NET function is not rendered... any ideas? This is what I'm missing after the postback: | The first thing I would look at is whether you have any ASP controls (such as a LinkButton or ComboBox, controls that don't normally generate a submit but require a postback) being displayed on the page. The __doPostBack function will only be put into the page if ASP.NET thinks that one of your controls requires it. If you aren't using one of those, you can use Page.ClientScript.GetPostBackClientHyperlink(controlName, "") to add the function to your page. | __doPostBack not rendering on postback I'm having a strange problem. I have to use GetPostBackEventReference to force a postback, but it only works the first time; after the first postback, the .NET function is not rendered... any ideas? This is what I'm missing after the postback: | TITLE:
__doPostBack not rendering on postback
QUESTION:
I'm having a strange problem. I have to use GetPostBackEventReference to force a postback, but it only works the first time; after the first postback, the .NET function is not rendered... any ideas? This is what I'm missing after the postback:
ANSWER:
The first thing I would look at is whether you have any ASP controls (such as a LinkButton or ComboBox, controls that don't normally generate a submit but require a postback) being displayed on the page. The __doPostBack function will only be put into the page if ASP.NET thinks that one of your controls requires it. If you aren't using one of those, you can use Page.ClientScript.GetPostBackClientHyperlink(controlName, "") to add the function to your page.
"asp.net",
"javascript",
"postback"
] | 3 | 3 | 3,950 | 2 | 0 | 2008-09-08T20:22:48.433000 | 2008-09-08T20:28:16.750000 |
50,585 | 1,839,458 | How do you capture mouse events in FF, over Shockwave Object | How do you capture the mouse events, move and click over top of a Shockwave Director Object (not flash) in Firefox, via JavaScript. The code works in IE but not in FF. The script works on the document body of both IE and Moz, but mouse events do not fire when mouse is over a shockwave director object embed. Update: function displaycoordIE(){ window.status=event.clientX+": " + event.clientY; } function displaycoordNS(e){ window.status=e.clientX+": " + e.clientY; } function displaycoordMoz(e) { window.alert(e.clientX+": " + e.clientY); }
document.onmousemove = displaycoordIE; document.onmousemove = displaycoordNS; document.onclick = displaycoordMoz; Just a side note, I have also tried using an addEventListener to "mousemove". | You could also catch the mouse event within Director (that never fails) and then call your JS functions from there, using gotoNetPage "javascript:function('" & argument & "')" e.g.: on mouseDown me gotoNetPage "javascript:function('" & argument & "')" end The mouse move detection is a little bit trickier, as there is no such event in Lingo, but you can use: property pMouseLock
on beginsprite pMouseLock = _mouse.mouseLock end on exitFrame if _mouse.mouseLock <> pMouseLock then gotoNetPage "javascript:function('" & argument & "')" pMouseLock = _mouse.mouseLock end if end regards | How do you capture mouse events in FF, over Shockwave Object How do you capture the mouse events, move and click over top of a Shockwave Director Object (not flash) in Firefox, via JavaScript. The code works in IE but not in FF. The script works on the document body of both IE and Moz, but mouse events do not fire when mouse is over a shockwave director object embed. Update: function displaycoordIE(){ window.status=event.clientX+": " + event.clientY; } function displaycoordNS(e){ window.status=e.clientX+": " + e.clientY; } function displaycoordMoz(e) { window.alert(e.clientX+": " + e.clientY); }
document.onmousemove = displaycoordIE; document.onmousemove = displaycoordNS; document.onclick = displaycoordMoz; Just a side note, I have also tried using an addEventListener to "mousemove". | TITLE:
How do you capture mouse events in FF, over Shockwave Object
QUESTION:
How do you capture the mouse events, move and click over top of a Shockwave Director Object (not flash) in Firefox, via JavaScript. The code works in IE but not in FF. The script works on the document body of both IE and Moz, but mouse events do not fire when mouse is over a shockwave director object embed. Update: function displaycoordIE(){ window.status=event.clientX+": " + event.clientY; } function displaycoordNS(e){ window.status=e.clientX+": " + e.clientY; } function displaycoordMoz(e) { window.alert(e.clientX+": " + e.clientY); }
document.onmousemove = displaycoordIE; document.onmousemove = displaycoordNS; document.onclick = displaycoordMoz; Just a side note, I have also tried using an addEventListener to "mousemove".
ANSWER:
You could also catch the mouse event within Director (that never fails) and then call your JS functions from there, using gotoNetPage "javascript:function('" & argument & "')" e.g.: on mouseDown me gotoNetPage "javascript:function('" & argument & "')" end The mouse move detection is a little bit trickier, as there is no such event in Lingo, but you can use: property pMouseLock
on beginsprite pMouseLock = _mouse.mouseLock end on exitFrame if _mouse.mouseLock <> pMouseLock then gotoNetPage "javascript:function('" & argument & "')" pMouseLock = _mouse.mouseLock end if end regards | [
"javascript",
"firefox",
"shockwave"
] | 1 | 2 | 1,813 | 2 | 0 | 2008-09-08T20:25:30.517000 | 2009-12-03T11:51:22.140000 |
50,605 | 50,632 | Signed to unsigned conversion in C - is it always safe? | Suppose I have the following C code. unsigned int u = 1234; int i = -5678;
unsigned int result = u + i; What implicit conversions are going on here, and is this code safe for all values of u and i? (Safe, in the sense that even though result in this example will overflow to some huge positive number, I could cast it back to an int and get the real result.) | Short Answer Your i will be converted to an unsigned integer by adding UINT_MAX + 1, then the addition will be carried out with the unsigned values, resulting in a large result (depending on the values of u and i ). Long Answer According to the C99 Standard: 6.3.1.8 Usual arithmetic conversions If both operands have the same type, then no further conversion is needed. Otherwise, if both operands have signed integer types or both have unsigned integer types, the operand with the type of lesser integer conversion rank is converted to the type of the operand with greater rank. Otherwise, if the operand that has unsigned integer type has rank greater or equal to the rank of the type of the other operand, then the operand with signed integer type is converted to the type of the operand with unsigned integer type. Otherwise, if the type of the operand with signed integer type can represent all of the values of the type of the operand with unsigned integer type, then the operand with unsigned integer type is converted to the type of the operand with signed integer type. Otherwise, both operands are converted to the unsigned integer type corresponding to the type of the operand with signed integer type. In your case, we have one unsigned int ( u ) and signed int ( i ). Referring to (3) above, since both operands have the same rank, your i will need to be converted to an unsigned integer. 6.3.1.3 Signed and unsigned integers When a value with integer type is converted to another integer type other than _Bool, if the value can be represented by the new type, it is unchanged. 
Otherwise, if the new type is unsigned, the value is converted by repeatedly adding or subtracting one more than the maximum value that can be represented in the new type until the value is in the range of the new type. Otherwise, the new type is signed and the value cannot be represented in it; either the result is implementation-defined or an implementation-defined signal is raised. Now we need to refer to (2) above. Your i will be converted to an unsigned value by adding UINT_MAX + 1. So the result will depend on how UINT_MAX is defined on your implementation. It will be large, but it will not overflow, because: 6.2.5 (9) A computation involving unsigned operands can never overflow, because a result that cannot be represented by the resulting unsigned integer type is reduced modulo the number that is one greater than the largest value that can be represented by the resulting type. Bonus: Arithmetic Conversion Semi-WTF #include <stdio.h> int main(void) { unsigned int plus_one = 1; int minus_one = -1;
if(plus_one < minus_one) printf("1 < -1"); else printf("boring");
return 0; } You can use this link to try this online: https://repl.it/repls/QuickWhimsicalBytes Bonus: Arithmetic Conversion Side Effect Arithmetic conversion rules can be used to get the value of UINT_MAX by initializing an unsigned value to -1, i.e.: unsigned int umax = -1; // umax set to UINT_MAX This is guaranteed to be portable regardless of the signed number representation of the system because of the conversion rules described above. See this SO question for more information: Is it safe to use -1 to set all bits to true? | Signed to unsigned conversion in C - is it always safe? Suppose I have the following C code. unsigned int u = 1234; int i = -5678;
unsigned int result = u + i; What implicit conversions are going on here, and is this code safe for all values of u and i? (Safe, in the sense that even though result in this example will overflow to some huge positive number, I could cast it back to an int and get the real result.) | TITLE:
Signed to unsigned conversion in C - is it always safe?
QUESTION:
Suppose I have the following C code. unsigned int u = 1234; int i = -5678;
unsigned int result = u + i; What implicit conversions are going on here, and is this code safe for all values of u and i? (Safe, in the sense that even though result in this example will overflow to some huge positive number, I could cast it back to an int and get the real result.)
ANSWER:
Short Answer Your i will be converted to an unsigned integer by adding UINT_MAX + 1, then the addition will be carried out with the unsigned values, resulting in a large result (depending on the values of u and i ). Long Answer According to the C99 Standard: 6.3.1.8 Usual arithmetic conversions If both operands have the same type, then no further conversion is needed. Otherwise, if both operands have signed integer types or both have unsigned integer types, the operand with the type of lesser integer conversion rank is converted to the type of the operand with greater rank. Otherwise, if the operand that has unsigned integer type has rank greater or equal to the rank of the type of the other operand, then the operand with signed integer type is converted to the type of the operand with unsigned integer type. Otherwise, if the type of the operand with signed integer type can represent all of the values of the type of the operand with unsigned integer type, then the operand with unsigned integer type is converted to the type of the operand with signed integer type. Otherwise, both operands are converted to the unsigned integer type corresponding to the type of the operand with signed integer type. In your case, we have one unsigned int ( u ) and signed int ( i ). Referring to (3) above, since both operands have the same rank, your i will need to be converted to an unsigned integer. 6.3.1.3 Signed and unsigned integers When a value with integer type is converted to another integer type other than _Bool, if the value can be represented by the new type, it is unchanged. Otherwise, if the new type is unsigned, the value is converted by repeatedly adding or subtracting one more than the maximum value that can be represented in the new type until the value is in the range of the new type. Otherwise, the new type is signed and the value cannot be represented in it; either the result is implementation-defined or an implementation-defined signal is raised. 
Now we need to refer to (2) above. Your i will be converted to an unsigned value by adding UINT_MAX + 1. So the result will depend on how UINT_MAX is defined on your implementation. It will be large, but it will not overflow, because: 6.2.5 (9) A computation involving unsigned operands can never overflow, because a result that cannot be represented by the resulting unsigned integer type is reduced modulo the number that is one greater than the largest value that can be represented by the resulting type. Bonus: Arithmetic Conversion Semi-WTF #include <stdio.h> int main(void) { unsigned int plus_one = 1; int minus_one = -1;
if(plus_one < minus_one) printf("1 < -1"); else printf("boring");
return 0; } You can use this link to try this online: https://repl.it/repls/QuickWhimsicalBytes Bonus: Arithmetic Conversion Side Effect Arithmetic conversion rules can be used to get the value of UINT_MAX by initializing an unsigned value to -1, i.e.: unsigned int umax = -1; // umax set to UINT_MAX This is guaranteed to be portable regardless of the signed number representation of the system because of the conversion rules described above. See this SO question for more information: Is it safe to use -1 to set all bits to true? | [
"c",
"type-conversion"
] | 166 | 265 | 221,413 | 8 | 0 | 2008-09-08T20:36:45.107000 | 2008-09-08T20:44:26.790000 |
50,606 | 50,642 | Wordpress Category Template Question | I am looking at using a custom template for a set of categories. Is it possible to use a category template (like category-4.php) on a parent category and have the children use that template as well? So based on the answer so far, is there a way to accomplish this? I want to add text and images to all categories within a parent category. | From the documentation it does not appear to be possible without actually adding several category template files (unless you custom program it). I run Wordpress, and I have only seen it accomplished category by category. http://codex.wordpress.org/Category_Templates | Wordpress Category Template Question I am looking at using a custom template for a set of categories. Is it possible to use a category template (like category-4.php) on a parent category and have the children use that template as well? So based on the answer so far, is there a way to accomplish this? I want to add text and images to all categories within a parent category. | TITLE:
Wordpress Category Template Question
QUESTION:
I am looking at using a custom template for a set of categories. Is it possible to use a category template (like category-4.php) on a parent category and have the children use that template as well? So based on the answer so far, is there a way to accomplish this? I want to add text and images to all categories within a parent category.
ANSWER:
From the documentation it does not appear to be possible without actually adding several category template files (unless you custom program it). I run Wordpress, and I have only seen it accomplished category by category. http://codex.wordpress.org/Category_Templates | [
"wordpress",
"templates"
] | 0 | 2 | 330 | 1 | 0 | 2008-09-08T20:36:45.590000 | 2008-09-08T20:48:06.110000 |
50,618 | 50,627 | What is the point of the finally block? | Syntax aside, what is the difference between try { } catch() { } finally { x = 3; } and try { } catch() { }
x = 3; edit: in .NET 2.0? so try { throw something maybe x = 3 } catch (...) { x = 3 } is behaviourally equivalent? | It depends on the language, as there might be some slight semantic differences, but the idea is that the finally block will (almost) always execute, even if the code in the try block threw an exception. In the second example, if the code in the catch block returns or quits, the x = 3 will not be executed. In the first it will. On the .NET platform, in some cases the execution of the finally block won't occur: security exceptions, thread suspensions, computer shutdown :), etc. | What is the point of the finally block? Syntax aside, what is the difference between try { } catch() { } finally { x = 3; } and try { } catch() { }
x = 3; edit: in .NET 2.0? so try { throw something maybe x = 3 } catch (...) { x = 3 } is behaviourally equivalent? | TITLE:
What is the point of the finally block?
QUESTION:
Syntax aside, what is the difference between try { } catch() { } finally { x = 3; } and try { } catch() { }
x = 3; edit: in .NET 2.0? so try { throw something maybe x = 3 } catch (...) { x = 3 } is behaviourally equivalent?
ANSWER:
It depends on the language, as there might be some slight semantic differences, but the idea is that the finally block will (almost) always execute, even if the code in the try block threw an exception. In the second example, if the code in the catch block returns or quits, the x = 3 will not be executed. In the first it will. On the .NET platform, in some cases the execution of the finally block won't occur: security exceptions, thread suspensions, computer shutdown :), etc.
".net",
"design-patterns",
"exception",
"try-catch-finally"
] | 35 | 36 | 5,967 | 16 | 0 | 2008-09-08T20:41:49.307000 | 2008-09-08T20:43:21.817000 |
50,702 | 823,467 | linq - how do you do a query for items in one query source that are not in another one? | If I have 2 query sources how do I find ones that are in one that are not in the other? example of join to find items in both: var results = from item1 in qs1.Items join item2 in qs2 on item1.field1 equals item2.field2 select item1; So what would the linq code be to return the items in qs1 that are not in qs2? | Darren Kopp's answer: var excluded = items1.Except(items2); is the best solution from a performance perspective. (NB: This is true for at least regular LINQ; perhaps LINQ to SQL changes things, as per Marco Russo's blog post. However, I'd imagine that in the "worst case" Darren Kopp's method will return at least the speed of Russo's method even in a LINQ to SQL environment.) As a quick example, try this in LINQPad: void Main() { Random rand = new Random(); int n = 100000; var randomSeq = Enumerable.Repeat(0, n).Select(i => rand.Next()); var randomFilter = Enumerable.Repeat(0, n).Select(i => rand.Next());
/* Method 1: Bramha Ghosh's/Marco Russo's method */ (from el1 in randomSeq where !(from el2 in randomFilter select el2).Contains(el1) select el1).Dump("Result");
/* Method 2: Darren Kopp's method */ randomSeq.Except(randomFilter).Dump("Result"); } Try commenting one of the two methods out at a time and try out the performance for different values of n. My experience (on my Core 2 Duo Laptop) seems to suggest: n = 100. Method 1 takes about 0.05 seconds, Method 2 takes about 0.05 seconds n = 1,000. Method 1 takes about 0.6 seconds, Method 2 takes about 0.4 seconds n = 10,000. Method 1 takes about 2.5 seconds, Method 2 takes about 0.425 seconds n = 100,000. Method 1 takes about 20 seconds, Method 2 takes about 0.45 seconds n = 1,000,000. Method 1 takes about 3 minutes 25 seconds, Method 2 takes about 1.3 seconds Method 2 (Darren Kopp's answer) is clearly faster. The speed decrease for Method 2 for larger n is most likely due to the creation of the random data (feel free to put in a DateTime diff to confirm this) whereas Method 1 clearly has algorithmic complexity issues (and just by looking you can see it is at least O(N^2) as for each number in the first collection it is comparing against the entire second collection). Conclusion: Use Darren Kopp's answer of LINQ's 'Except' method | linq - how do you do a query for items in one query source that are not in another one? If I have 2 query sources how do I find ones that are in one that are not in the other? example of join to find items in both: var results = from item1 in qs1.Items join item2 in qs2 on item1.field1 equals item2.field2 select item1; So what would the linq code be to return the items in qs1 that are not in qs2? | TITLE:
linq - how do you do a query for items in one query source that are not in another one?
QUESTION:
If I have 2 query sources how do I find ones that are in one that are not in the other? example of join to find items in both: var results = from item1 in qs1.Items join item2 in qs2 on item1.field1 equals item2.field2 select item1; So what would the linq code be to return the items in qs1 that are not in qs2?
ANSWER:
Darren Kopp's answer: var excluded = items1.Except(items2); is the best solution from a performance perspective. (NB: This is true for at least regular LINQ; perhaps LINQ to SQL changes things, as per Marco Russo's blog post. However, I'd imagine that in the "worst case" Darren Kopp's method will return at least the speed of Russo's method even in a LINQ to SQL environment.) As a quick example, try this in LINQPad: void Main() { Random rand = new Random(); int n = 100000; var randomSeq = Enumerable.Repeat(0, n).Select(i => rand.Next()); var randomFilter = Enumerable.Repeat(0, n).Select(i => rand.Next());
/* Method 1: Bramha Ghosh's/Marco Russo's method */ (from el1 in randomSeq where !(from el2 in randomFilter select el2).Contains(el1) select el1).Dump("Result");
/* Method 2: Darren Kopp's method */ randomSeq.Except(randomFilter).Dump("Result"); } Try commenting one of the two methods out at a time and try out the performance for different values of n. My experience (on my Core 2 Duo Laptop) seems to suggest: n = 100. Method 1 takes about 0.05 seconds, Method 2 takes about 0.05 seconds n = 1,000. Method 1 takes about 0.6 seconds, Method 2 takes about 0.4 seconds n = 10,000. Method 1 takes about 2.5 seconds, Method 2 takes about 0.425 seconds n = 100,000. Method 1 takes about 20 seconds, Method 2 takes about 0.45 seconds n = 1,000,000. Method 1 takes about 3 minutes 25 seconds, Method 2 takes about 1.3 seconds Method 2 (Darren Kopp's answer) is clearly faster. The speed decrease for Method 2 for larger n is most likely due to the creation of the random data (feel free to put in a DateTime diff to confirm this) whereas Method 1 clearly has algorithmic complexity issues (and just by looking you can see it is at least O(N^2) as for each number in the first collection it is comparing against the entire second collection). Conclusion: Use Darren Kopp's answer of LINQ's 'Except' method | [
"c#",
".net",
"linq",
".net-3.5"
] | 4 | 3 | 1,265 | 5 | 0 | 2008-09-08T21:10:49.923000 | 2009-05-05T05:25:46.113000 |
50,723 | 50,752 | Rss feed for game programmer? | I was browsing this thread, which has good recommendations but is a bit too general for me. So, if anyone has a collection of nice game programming feeds, please share them. :) (both general and specific topics are welcome) | Here are two I've used: DirectX forum feed and Summary of interesting resources | Rss feed for game programmer? I was browsing this thread, which has good recommendations but is a bit too general for me. So, if anyone has a collection of nice game programming feeds, please share them. :) (both general and specific topics are welcome) | TITLE:
Rss feed for game programmer?
QUESTION:
I was browsing this thread, which has good recommendations but is a bit too general for me. So, if anyone has a collection of nice game programming feeds, please share them. :) (both general and specific topics are welcome)
ANSWER:
Here are two I've used: DirectX forum feed and Summary of interesting resources
"graphics",
"rss",
"artificial-intelligence",
"feed"
] | 1 | 1 | 602 | 4 | 0 | 2008-09-08T21:22:57.273000 | 2008-09-08T21:37:45.850000 |
50,737 | 50,826 | Can I have TortoiseSVN auto-add files? | Is there a way to have TortoiseSVN (or any other tool) auto-add any new .cs files I create within a directory to my working copy so I don't have to remember which files I created at the end of the day? | I would probably make a batch file, something like this (untested): dir /b /S *.cs > allcsfiles.txt svn add --targets allcsfiles.txt I believe svn won't mind you trying to add files which are already versioned. Anyway, that's probably about as automatic as you will easily get. | Can I have TortoiseSVN auto-add files? Is there a way to have TortoiseSVN (or any other tool) auto-add any new .cs files I create within a directory to my working copy so I don't have to remember which files I created at the end of the day? | TITLE:
Can I have TortoiseSVN auto-add files?
QUESTION:
Is there a way to have TortoiseSVN (or any other tool) auto-add any new .cs files I create within a directory to my working copy so I don't have to remember which files I created at the end of the day?
ANSWER:
I would probably make a batch file, something like this (untested): dir /b /S *.cs > allcsfiles.txt svn add --targets allcsfiles.txt I believe svn won't mind you trying to add files which are already versioned. Anyway, that's probably about as automatic as you will easily get.
"svn",
"tortoisesvn"
] | 12 | 3 | 6,579 | 4 | 0 | 2008-09-08T21:29:38.850000 | 2008-09-08T22:16:26.867000 |
50,744 | 50,800 | Wait until file is unlocked in .NET | What's the simplest way of blocking a thread until a file has been unlocked and is accessible for reading and renaming? For example, is there a WaitOnFile() somewhere in the .NET Framework? I have a service that uses a FileSystemWatcher to look for files that are to be transmitted to an FTP site, but the file created event fires before the other process has finished writing the file. The ideal solution would have a timeout period so the thread doesn't hang forever before giving up. Edit: After trying out some of the solutions below, I ended up changing the system so that all files were written to Path.GetTempFileName(), then a File.Move() was performed to the final location. As soon as the FileSystemWatcher event fired, the file was already complete. | This was the answer I gave on a related question: /// <summary> /// Blocks until the file is not locked any more. /// </summary> /// <returns></returns> bool WaitForFile(string fullPath) { int numTries = 0; while (true) { ++numTries; try { // Attempt to open the file exclusively. using (FileStream fs = new FileStream(fullPath, FileMode.Open, FileAccess.ReadWrite, FileShare.None, 100)) { fs.ReadByte();
// If we got this far the file is ready break; } } catch (Exception ex) { Log.LogWarning( "WaitForFile {0} failed to get an exclusive lock: {1}", fullPath, ex.ToString());
if (numTries > 10) { Log.LogWarning( "WaitForFile {0} giving up after 10 tries", fullPath); return false; }
// Wait for the lock to be released System.Threading.Thread.Sleep(500); } }
Log.LogTrace("WaitForFile {0} returning true after {1} tries", fullPath, numTries); return true; } | Wait until file is unlocked in .NET What's the simplest way of blocking a thread until a file has been unlocked and is accessible for reading and renaming? For example, is there a WaitOnFile() somewhere in the .NET Framework? I have a service that uses a FileSystemWatcher to look for files that are to be transmitted to an FTP site, but the file created event fires before the other process has finished writing the file. The ideal solution would have a timeout period so the thread doesn't hang forever before giving up. Edit: After trying out some of the solutions below, I ended up changing the system so that all files were written to Path.GetTempFileName(), then a File.Move() was performed to the final location. As soon as the FileSystemWatcher event fired, the file was already complete. | TITLE:
Wait until file is unlocked in .NET
QUESTION:
What's the simplest way of blocking a thread until a file has been unlocked and is accessible for reading and renaming? For example, is there a WaitOnFile() somewhere in the .NET Framework? I have a service that uses a FileSystemWatcher to look for files that are to be transmitted to an FTP site, but the file created event fires before the other process has finished writing the file. The ideal solution would have a timeout period so the thread doesn't hang forever before giving up. Edit: After trying out some of the solutions below, I ended up changing the system so that all files were written to Path.GetTempFileName(), then a File.Move() was performed to the final location. As soon as the FileSystemWatcher event fired, the file was already complete.
ANSWER:
This was the answer I gave on a related question: /// <summary> /// Blocks until the file is not locked any more. /// </summary> /// <returns></returns> bool WaitForFile(string fullPath) { int numTries = 0; while (true) { ++numTries; try { // Attempt to open the file exclusively. using (FileStream fs = new FileStream(fullPath, FileMode.Open, FileAccess.ReadWrite, FileShare.None, 100)) { fs.ReadByte();
// If we got this far the file is ready break; } } catch (Exception ex) { Log.LogWarning( "WaitForFile {0} failed to get an exclusive lock: {1}", fullPath, ex.ToString());
if (numTries > 10) { Log.LogWarning( "WaitForFile {0} giving up after 10 tries", fullPath); return false; }
// Wait for the lock to be released System.Threading.Thread.Sleep(500); } }
Log.LogTrace("WaitForFile {0} returning true after {1} tries", fullPath, numTries); return true; } | [
".net",
".net-3.5",
"file"
] | 119 | 47 | 95,257 | 16 | 0 | 2008-09-08T21:33:09.620000 | 2008-09-08T21:59:11.790000 |
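The retry-until-unlocked pattern in the accepted answer above is not C#-specific. A minimal Python sketch of the same idea (illustrative only — the function name and the `max_tries`/`delay` defaults are mine, and on POSIX systems an open does not fail just because another process has the file open, so the lock detection below really only matches Windows-style sharing violations):

```python
import time


def wait_for_file(path, max_tries=10, delay=0.5):
    """Block until `path` can be opened for exclusive-style access, or give up.

    Mirrors the C# WaitForFile above: returns True once an open
    succeeds, False after max_tries failed attempts.
    """
    for _ in range(max_tries):
        try:
            # 'r+b' requires the file to exist and be writable; on
            # Windows this raises while another process holds the file
            # without sharing.
            with open(path, "r+b"):
                return True
        except OSError:
            time.sleep(delay)
    return False
```

As in the C# version, the caller decides what to do on a False (timeout) result.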
50,747 | 50,812 | Response.StatusCode and Internet Explorer - Display custom message? | I am implementing a HttpRequestValidationException in my Application_Error Handler, and if possible, I want to display a custom message. Now, I'm thinking about the StatusCode. In my current example, it sends a 200, which I think should not be done. I would like to send the (IMHO) more appropriate 400 Bad Request instead. However, at the same time, I would like to use Response.Write to enter a custom message. Firefox displays it properly, but IE7 gives me the Default unhelpful Internet Explorer Error Page. On one side, I guess that Internet Explorer just assumes that everything <> 200 is simply not having any "good" content, and the RFC is not really clear here. So I just wonder, is sending a HTTP 200 for an Error Page caused by a HttpRequestValidationException good practice or not? Are there good alternatives? | An HTTP 200 Response Code does not indicate an error. It indicates that everything was OK. You should not use a 200 response code for an error. Internet Explorer shows its "Friendly Errors" page if the response is less than 512 bytes. Here's more on this issue: http://weblogs.asp.net/scottgu/archive/2006/04/09/442332.aspx, | Response.StatusCode and Internet Explorer - Display custom message? I am implementing a HttpRequestValidationException in my Application_Error Handler, and if possible, I want to display a custom message. Now, I'm thinking about the StatusCode. In my current example, it sends a 200, which I think should not be done. I would like to send the (IMHO) more appropriate 400 Bad Request instead. However, at the same time, I would like to use Response.Write to enter a custom message. Firefox displays it properly, but IE7 gives me the Default unhelpful Internet Explorer Error Page. On one side, I guess that Internet Explorer just assumes that everything <> 200 is simply not having any "good" content, and the RFC is not really clear here. 
So I just wonder, is sending an HTTP 200 for an Error Page caused by an HttpRequestValidationException good practice or not? Are there good alternatives? | TITLE:
Response.StatusCode and Internet Explorer - Display custom message?
QUESTION:
I am implementing a HttpRequestValidationException in my Application_Error Handler, and if possible, I want to display a custom message. Now, I'm thinking about the StatusCode. In my current example, it sends a 200, which I think should not be done. I would like to send the (IMHO) more appropriate 400 Bad Request instead. However, at the same time, I would like to use Response.Write to enter a custom message. Firefox displays it properly, but IE7 gives me the Default unhelpful Internet Explorer Error Page. On one side, I guess that Internet Explorer just assumes that everything <> 200 is simply not having any "good" content, and the RFC is not really clear here. So I just wonder, is sending a HTTP 200 for an Error Page caused by a HttpRequestValidationException good practice or not? Are there good alternatives?
ANSWER:
An HTTP 200 Response Code does not indicate an error. It indicates that everything was OK. You should not use a 200 response code for an error. Internet Explorer shows its "Friendly Errors" page if the response is less than 512 bytes. Here's more on this issue: http://weblogs.asp.net/scottgu/archive/2006/04/09/442332.aspx | [
"asp.net",
"http",
"internet-explorer-7"
] | 1 | 4 | 1,414 | 3 | 0 | 2008-09-08T21:34:16.310000 | 2008-09-08T22:05:02.570000 |
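The 512-byte threshold in the answer above suggests a simple workaround: keep the real status code (400) but pad the error body past 512 bytes so IE renders it instead of its "Friendly Errors" page. A framework-agnostic sketch (the helper name and filler format are mine, not from the answer):

```python
def pad_error_body(html, min_bytes=512, encoding="utf-8"):
    """Pad an error page so IE's 'friendly errors' page is bypassed.

    IE replaces non-200 responses smaller than ~512 bytes with its own
    error page, so we append an HTML comment as harmless filler.
    """
    body = html.encode(encoding)
    if len(body) < min_bytes:
        filler = b"<!-- " + b"." * (min_bytes - len(body)) + b" -->"
        body += filler
    return body
```

In ASP.NET terms, you would Response.Write the padded body while still setting StatusCode = 400.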
50,762 | 50,821 | Determine Disk Geometry on Windows | I need to programmatically determine how many sectors, heads, and cylinders are on a physical disk from Windows XP. Does anyone know the API for determining this? Where might Windows expose this information? | Use DeviceIoControl with control code IOCTL_DISK_GET_DRIVE_GEOMETRY or IOCTL_DISK_GET_DRIVE_GEOMETRY_EX. There's sample code in MSDN to do this here. | Determine Disk Geometry on Windows I need to programmatically determine how many sectors, heads, and cylinders are on a physical disk from Windows XP. Does anyone know the API for determining this? Where might Windows expose this information? | TITLE:
Determine Disk Geometry on Windows
QUESTION:
I need to programmatically determine how many sectors, heads, and cylinders are on a physical disk from Windows XP. Does anyone know the API for determining this? Where might Windows expose this information?
ANSWER:
Use DeviceIoControl with control code IOCTL_DISK_GET_DRIVE_GEOMETRY or IOCTL_DISK_GET_DRIVE_GEOMETRY_EX. There's sample code in MSDN to do this here. | [
"windows",
"winapi",
"hard-drive"
] | 5 | 6 | 3,057 | 3 | 0 | 2008-09-08T21:41:57.820000 | 2008-09-08T22:12:22.787000 |
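DeviceIoControl with IOCTL_DISK_GET_DRIVE_GEOMETRY fills a DISK_GEOMETRY structure; a sketch of decoding that output buffer (the field layout below — a 64-bit Cylinders value followed by four 32-bit fields — follows the Windows SDK definition, but treat this as illustrative rather than a complete Windows program, since the actual DeviceIoControl call is omitted):

```python
import struct


def parse_disk_geometry(buf):
    """Decode a DISK_GEOMETRY buffer as returned by DeviceIoControl.

    Little-endian layout: LARGE_INTEGER Cylinders, MEDIA_TYPE MediaType
    (a 4-byte enum), DWORD TracksPerCylinder (heads), DWORD
    SectorsPerTrack, DWORD BytesPerSector.
    """
    cylinders, _media, heads, spt, bps = struct.unpack_from("<qLLLL", buf)
    return {
        "cylinders": cylinders,
        "heads": heads,
        "sectors_per_track": spt,
        "bytes_per_sector": bps,
        "total_bytes": cylinders * heads * spt * bps,
    }
```

The `total_bytes` product is the classic CHS capacity formula, which is what the question is after.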
50,771 | 51,268 | ASP.NET AJAX Load Balancing Issues | This would be a question for anyone who has code in the App_Code folder and uses a hardware load balancer. It's true the hardware load balancer could be set to sticky sessions to solve the issue, but in a perfect world, I would like the feature turned off. When a file is in the App_Code folder and the site is not pre-compiled, IIS will generate random file names for these files. server1 "/ajax/SomeControl, App_Code.tjazq3hb.ashx" server2 "/ajax/SomeControl, App_Code.wzp3akyu.ashx" So when a user posts the page and gets transferred to the other server nothing works. Does anyone have a solution for this? I could change to a pre-compiled web-site, but we would lose the ability for our QA department to just promote the changed files. | You could move whatever is in your app_code to an external class library if your QA dept can promote that entire library. I think you are stuck with sticky sessions if you can't find a convenient or tolerable way to switch to a pre-compiled site. | ASP.NET AJAX Load Balancing Issues This would be a question for anyone who has code in the App_Code folder and uses a hardware load balancer. It's true the hardware load balancer could be set to sticky sessions to solve the issue, but in a perfect world, I would like the feature turned off. When a file is in the App_Code folder and the site is not pre-compiled, IIS will generate random file names for these files. server1 "/ajax/SomeControl, App_Code.tjazq3hb.ashx" server2 "/ajax/SomeControl, App_Code.wzp3akyu.ashx" So when a user posts the page and gets transferred to the other server nothing works. Does anyone have a solution for this? I could change to a pre-compiled web-site, but we would lose the ability for our QA department to just promote the changed files. | TITLE:
ASP.NET AJAX Load Balancing Issues
QUESTION:
This would be a question for anyone who has code in the App_Code folder and uses a hardware load balancer. It's true the hardware load balancer could be set to sticky sessions to solve the issue, but in a perfect world, I would like the feature turned off. When a file is in the App_Code folder and the site is not pre-compiled, IIS will generate random file names for these files. server1 "/ajax/SomeControl, App_Code.tjazq3hb.ashx" server2 "/ajax/SomeControl, App_Code.wzp3akyu.ashx" So when a user posts the page and gets transferred to the other server nothing works. Does anyone have a solution for this? I could change to a pre-compiled web-site, but we would lose the ability for our QA department to just promote the changed files.
ANSWER:
You could move whatever is in your app_code to an external class library if your QA dept can promote that entire library. I think you are stuck with sticky sessions if you can't find a convenient or tolerable way to switch to a pre-compiled site. | [
"asp.net",
"load-balancing"
] | 1 | 0 | 4,720 | 8 | 0 | 2008-09-08T21:46:25.393000 | 2008-09-09T05:53:57.203000 |
50,786 | 153,206 | How do I get ms-access to connect to ms-sql as a different user? | How do I get ms-access to connect (through ODBC) to an ms-sql database as a different user than their Active Directory ID? I don't want to specify an account in the ODBC connection, I want to do it on the ms-access side to hide it from my users. Doing it in the ODBC connection would put me right back into the original situation I'm trying to avoid. Yes, this relates to a previous question: http://www.stackoverflow.com/questions/50164/ | I think you can get this to work the way you want it to if you use an "ODBC DSN-less connection". If you need to, keep your ODBC DSN's on your users' machines using Windows authentication. Give your users read-only access to your database. (If they create a new mdb file and link the tables they'll only be able to read the data.) Create a SQL Login which has read/write permission to your database. Write a VBA routine which loops over your linked tables and resets the connection to use your SQL Login, but be sure to use the "DSN-less" syntax. "ODBC;Driver={SQL Native Client};" & "Server=MyServerName;" & _ "Database=myDatabaseName;" & _ "Uid=myUsername;" & _ "Pwd=myPassword" Call this routine as part of your startup code. A couple of notes about this approach: Access seems to have an issue with the connection info once you change from Read/Write to Read Only and try going back to Read/Write without closing and re-opening the database (mde/mdb) file. If you can change this once at startup to Read/Write and not change it during the session this solution should work. By using a DSN-less connection you are able to hide the credentials from the user in code (assuming you're giving them an mde file you should be ok). Normally hard-coding connection strings isn't a good idea, but since you're dealing with an in-house app you should be ok with this approach. | How do I get ms-access to connect to ms-sql as a different user? 
How do I get ms-access to connect (through ODBC) to an ms-sql database as a different user than their Active Directory ID? I don't want to specify an account in the ODBC connection, I want to do it on the ms-access side to hide it from my users. Doing it in the ODBC connection would put me right back into the original situation I'm trying to avoid. Yes, this relates to a previous question: http://www.stackoverflow.com/questions/50164/ | TITLE:
How do I get ms-access to connect to ms-sql as a different user?
QUESTION:
How do I get ms-access to connect (through ODBC) to an ms-sql database as a different user than their Active Directory ID? I don't want to specify an account in the ODBC connection, I want to do it on the ms-access side to hide it from my users. Doing it in the ODBC connection would put me right back into the original situation I'm trying to avoid. Yes, this relates to a previous question: http://www.stackoverflow.com/questions/50164/
ANSWER:
I think you can get this to work the way you want it to if you use an "ODBC DSN-less connection". If you need to, keep your ODBC DSN's on your users' machines using Windows authentication. Give your users read-only access to your database. (If they create a new mdb file and link the tables they'll only be able to read the data.) Create a SQL Login which has read/write permission to your database. Write a VBA routine which loops over your linked tables and resets the connection to use your SQL Login, but be sure to use the "DSN-less" syntax. "ODBC;Driver={SQL Native Client};" & "Server=MyServerName;" & _ "Database=myDatabaseName;" & _ "Uid=myUsername;" & _ "Pwd=myPassword" Call this routine as part of your startup code. A couple of notes about this approach: Access seems to have an issue with the connection info once you change from Read/Write to Read Only and try going back to Read/Write without closing and re-opening the database (mde/mdb) file. If you can change this once at startup to Read/Write and not change it during the session this solution should work. By using a DSN-less connection you are able to hide the credentials from the user in code (assuming you're giving them an mde file you should be ok). Normally hard-coding connection strings isn't a good idea, but since you're dealing with an in-house app you should be ok with this approach. | [
"sql-server",
"ms-access",
"odbc"
] | 3 | 5 | 12,874 | 6 | 0 | 2008-09-08T21:51:02.403000 | 2008-09-30T14:14:16.913000 |
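The DSN-less connection string in the answer above is plain concatenation; a small sketch of building it programmatically (Python here purely for illustration — in Access itself this would be VBA looping over the TableDefs collection, and the default driver name is the one assumed in the answer):

```python
def dsnless_connect_string(server, database, uid, pwd,
                           driver="SQL Native Client"):
    """Build an ODBC DSN-less connect string like the VBA example.

    Keeping the credentials inside compiled code (the answer's .mde
    scenario) is what hides them from end users.
    """
    return (
        "ODBC;"
        f"Driver={{{driver}}};"
        f"Server={server};"
        f"Database={database};"
        f"Uid={uid};"
        f"Pwd={pwd}"
    )
```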
50,799 | 156,000 | What version of TinyMCE will work in Drupal 5 with google chrome? | My drupal site (internal) will not display the TinyMCE editor when using Google Chrome browser. Has anyone gotten TinyMCE to work with Chrome in Drupal 5? | Just Refresh the edit page and the editor will render in Chrome. | What version of TinyMCE will work in Drupal 5 with google chrome? My drupal site (internal) will not display the TinyMCE editor when using Google Chrome browser. Has anyone gotten TinyMCE to work with Chrome in Drupal 5? | TITLE:
What version of TinyMCE will work in Drupal 5 with google chrome?
QUESTION:
My drupal site (internal) will not display the TinyMCE editor when using Google Chrome browser. Has anyone gotten TinyMCE to work with Chrome in Drupal 5?
ANSWER:
Just refresh the edit page and the editor will render in Chrome. | [
"drupal",
"tinymce",
"drupal-5"
] | 2 | 0 | 812 | 3 | 0 | 2008-09-08T21:59:07.733000 | 2008-10-01T02:30:46.223000 |
50,819 | 90,165 | #line and jump to line | Do any editors honor C #line directives with regards to goto line features? Context: I'm working on a code generator and need to jump to a line of the output but the line is specified relative to the #line directives I'm adding. I can drop them but then finding the input line is an even worse pain | If the editor is scriptable it should be possible to write a script to do the navigation. There might even be a Vim or Emacs script that already does something similar. FWIW, when I was writing a lot of Bison/Flex I wrote a Zeus Lua macro script that attempted to do something similar (i.e. move from the input file to the corresponding line of the output file by searching for the #line marker). For anyone that might be interested, here is that particular macro script. | #line and jump to line Do any editors honor C #line directives with regards to goto line features? Context: I'm working on a code generator and need to jump to a line of the output but the line is specified relative to the #line directives I'm adding. I can drop them but then finding the input line is an even worse pain | TITLE:
#line and jump to line
QUESTION:
Do any editors honor C #line directives with regards to goto line features? Context: I'm working on a code generator and need to jump to a line of the output but the line is specified relative to the #line directives I'm adding. I can drop them but then finding the input line is an even worse pain
ANSWER:
If the editor is scriptable it should be possible to write a script to do the navigation. There might even be a Vim or Emacs script that already does something similar. FWIW, when I was writing a lot of Bison/Flex I wrote a Zeus Lua macro script that attempted to do something similar (i.e. move from the input file to the corresponding line of the output file by searching for the #line marker). For anyone that might be interested, here is that particular macro script. | [
"preprocessor",
"text-editor"
] | 2 | 2 | 331 | 4 | 0 | 2008-09-08T22:10:00.343000 | 2008-09-18T05:02:10.060000 |
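The answer's suggestion — script the editor — boils down to scanning the generated file for `#line` markers and offsetting from the nearest one. A hedged sketch of that mapping logic (the function is mine; the directive format follows the C preprocessor's `#line N "file"` convention):

```python
import re

LINE_RE = re.compile(r'^\s*#line\s+(\d+)(?:\s+"([^"]*)")?')


def map_generated_line(generated_lines, target):
    """Map a 1-based line number in generated output back to its source.

    Tracks the most recent #line directive at or above `target` and
    offsets from it — the same bookkeeping an editor macro would do.
    Returns (source_file, source_line) or None if no directive applies.
    """
    src_file, directive_line, directive_no = None, None, None
    for i, text in enumerate(generated_lines[:target], start=1):
        m = LINE_RE.match(text)
        if m:
            directive_line = i
            directive_no = int(m.group(1))
            if m.group(2):
                src_file = m.group(2)
    if directive_line is None:
        return None
    # The directive names the source line of the *next* generated line.
    return src_file, directive_no + (target - directive_line - 1)
```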
50,822 | 51,104 | Message passing in a plug-in framework | First off, there's a bit of background to this issue available on my blog: http://www.codebork.com/coding/2008/06/25/message-passing-a-plug-framework.html http://www.codebork.com/coding/2008/07/31/message-passing-2.html I'm aware that the descriptions aren't hugely clear, so I'll try to summarise what I'm attempting as best I can here. The application is a personal finance program. Further background on the framework itself is available at the end of this post. There are a number of different types of plug-in that the framework can handle (e.g., accounts, export, reporting, etc.). However, I'm focussing on one particular class of plug-in, so-called data plug-ins, as it is this class that is causing me problems. I have one class of data plug-in for accounts, one for transactions, etc. I'm midway through a vast re-factoring that has left me with the following architecture for data plug-ins: The data plug-in object (implementing initialisation, installation and plug-in metadata) [implements IDataPlugin ] The data object (such as an account) [implements, e.g., IAccount ] A factory to create instances of the data object [implements, e.g., IAccountFactory ] Previously the data object and the plug-in object were combined into one, but this meant that a new transaction plug-in had to be instantiated for each transaction recorded in the account which caused a number of problems. Unfortunately, that re-factoring has broken my message passing. The data object implements INotifyPropertyChanged, and so I've hit a new problem, and one that I'm not sure how to work around: the plug-in object is registering events with the message broker, but it's the data objects that actually fire the events. This means that the subscribing plug-in currently has to subscribe to each created account, transaction, etc.! This is clearly not scalable. 
As far as I can tell at the moment I have two possible solutions: Make the data plug-in object a go-between for the data-objects and message broker, possibly batching change notifications. I don't like this because it adds another layer of complexity to the messaging system that I feel I should be able to do without. Junk the current event-based implementation and use something else that's more easily manageable (in-memory WCF?!). So I guess I'm really asking: How would you solve this problem? What potential solutions do you think I've overlooked? Is my approach even vaguely on-track/sensible?! :-) As you will be able to tell from the dates of the blog posts, some variant of this problem has been taxing me for quite a long time now! As such, any and all responses will be greatly appreciated. The background to the framework itself is as follows: My plug-in framework consists of three main components: a plug-in broker, a preferences manager and a message broker. The plug-in broker does the bread-and-butter plug-in stuff: discovering and creating plug-ins. The preferences manager manages user preferences for the framework and individual plug-ins, such as which plug-ins are enabled, where data should be saved, etc. Communication is via publish/subscribe, with the message broker sitting in the middle, gathering all published message types and managing subscriptions. The publish/subscribe is currently implemented via the .NET INotifyPropertyChanged interface, which provides one event called PropertyChanged; the message broker builds a list of all plug-ins implementing INotifyPropertyChanged and subscribes other plug-ins to this event. The purpose of the message passing is to allow the account and transaction plug-ins to notify the storage plug-ins that data has changed so that it may be saved. 
| This is my understanding of your question: You have a plugin object that may have to listen for events on x data objects - you don't want to subscribe to the event on each data object though. I'm assuming that several plugins may want to listen to events on the same data object. You could create a session type object. Each plugin listens for events on the session object. The data object no longer raises the event - it calls the session object to raise the event (one of the parameters would have to be the data object raising the event). That means that your plugins only have to subscribe to one event, but they get the event from all data objects. On the other hand, if only one plugin will ever listen to a data object at a time, why not just have the data object call the plugin directly? | Message passing in a plug-in framework First off, there's a bit of background to this issue available on my blog: http://www.codebork.com/coding/2008/06/25/message-passing-a-plug-framework.html http://www.codebork.com/coding/2008/07/31/message-passing-2.html I'm aware that the descriptions aren't hugely clear, so I'll try to summarise what I'm attempting as best I can here. The application is a personal finance program. Further background on the framework itself is available at the end of this post. There are a number of different types of plug-in that the framework can handle (e.g., accounts, export, reporting, etc.). However, I'm focussing on one particular class of plug-in, so-called data plug-ins, as it is this class that is causing me problems. I have one class of data plug-in for accounts, one for transactions, etc. 
I'm midway through a vast re-factoring that has left me with the following architecture for data plug-ins: The data plug-in object (implementing initialisation, installation and plug-in metadata) [implements IDataPlugin ] The data object (such as an account) [implements, e.g., IAccount ] A factory to create instances of the data object [implements, e.g., IAccountFactory ] Previously the data object and the plug-in object were combined into one, but this meant that a new transaction plug-in had to be instantiated for each transaction recorded in the account which caused a number of problems. Unfortunately, that re-factoring has broken my message passing. The data object implements INotifyPropertyChanged, and so I've hit a new problem, and one that I'm not sure how to work around: the plug-in object is registering events with the message broker, but it's the data objects that actually fire the events. This means that the subscribing plug-in currently has to subscribe to each created account, transaction, etc.! This is clearly not scalable. As far as I can tell at the moment I have two possible solutions: Make the data plug-in object a go-between for the data-objects and message broker, possibly batching change notifications. I don't like this because it adds another layer of complexity to the messaging system that I feel I should be able to do without. Junk the current event-based implementation and use something else that's more easily manageable (in-memory WCF?!). So I guess I'm really asking: How would you solve this problem? What potential solutions do you think I've overlooked? Is my approach even vaguely on-track/sensible?! :-) As you will be able to tell from the dates of the blog posts, some variant of this problem has been taxing me for quite a long time now! As such, any and all responses will be greatly appreciated. 
The background to the framework itself is as follows: My plug-in framework consists of three main components: a plug-in broker, a preferences manager and a message broker. The plug-in broker does the bread-and-butter plug-in stuff: discovering and creating plug-ins. The preferences manager manages user preferences for the framework and individual plug-ins, such as which plug-ins are enabled, where data should be saved, etc. Communication is via publish/subscribe, with the message broker sitting in the middle, gathering all published message types and managing subscriptions. The publish/subscribe is currently implemented via the .NET INotifyPropertyChanged interface, which provides one event called PropertyChanged; the message broker builds a list of all plug-ins implementing INotifyPropertyChanged and subscribes other plug-ins to this event. The purpose of the message passing is to allow the account and transaction plug-ins to notify the storage plug-ins that data has changed so that it may be saved. | TITLE:
Message passing in a plug-in framework
QUESTION:
First off, there's a bit of background to this issue available on my blog: http://www.codebork.com/coding/2008/06/25/message-passing-a-plug-framework.html http://www.codebork.com/coding/2008/07/31/message-passing-2.html I'm aware that the descriptions aren't hugely clear, so I'll try to summarise what I'm attempting as best I can here. The application is a personal finance program. Further background on the framework itself is available at the end of this post. There are a number of different types of plug-in that the framework can handle (e.g., accounts, export, reporting, etc.). However, I'm focussing on one particular class of plug-in, so-called data plug-ins, as it is this class that is causing me problems. I have one class of data plug-in for accounts, one for transactions, etc. I'm midway through a vast re-factoring that has left me with the following architecture for data plug-ins: The data plug-in object (implementing initialisation, installation and plug-in metadata) [implements IDataPlugin ] The data object (such as an account) [implements, e.g., IAccount ] A factory to create instances of the data object [implements, e.g., IAccountFactory ] Previously the data object and the plug-in object were combined into one, but this meant that a new transaction plug-in had to be instantiated for each transaction recorded in the account which caused a number of problems. Unfortunately, that re-factoring has broken my message passing. The data object implements INotifyPropertyChanged, and so I've hit a new problem, and one that I'm not sure how to work around: the plug-in object is registering events with the message broker, but it's the data objects that actually fire the events. This means that the subscribing plug-in currently has to subscribe to each created account, transaction, etc.! This is clearly not scalable. 
As far as I can tell at the moment I have two possible solutions: Make the data plug-in object a go-between for the data-objects and message broker, possibly batching change notifications. I don't like this because it adds another layer of complexity to the messaging system that I feel I should be able to do without. Junk the current event-based implementation and use something else that's more easily manageable (in-memory WCF?!). So I guess I'm really asking: How would you solve this problem? What potential solutions do you think I've overlooked? Is my approach even vaguely on-track/sensible?! :-) As you will be able to tell from the dates of the blog posts, some variant of this problem has been taxing me for quite a long time now! As such, any and all responses will be greatly appreciated. The background to the framework itself is as follows: My plug-in framework consists of three main components: a plug-in broker, a preferences manager and a message broker. The plug-in broker does the bread-and-butter plug-in stuff: discovering and creating plug-ins. The preferences manager manages user preferences for the framework and individual plug-ins, such as which plug-ins are enabled, where data should be saved, etc. Communication is via publish/subscribe, with the message broker sitting in the middle, gathering all published message types and managing subscriptions. The publish/subscribe is currently implemented via the .NET INotifyPropertyChanged interface, which provides one event called PropertyChanged; the message broker builds a list of all plug-ins implementing INotifyPropertyChanged and subscribes other plug-ins to this event. The purpose of the message passing is to allow the account and transaction plug-ins to notify the storage plug-ins that data has changed so that it may be saved.
ANSWER:
This is my understanding of your question: You have a plugin object that may have to listen for events on x data objects - you don't want to subscribe to the event on each data object though. I'm assuming that several plugins may want to listen to events on the same data object. You could create a session type object. Each plugin listens for events on the session object. The data object no longer raises the event - it calls the session object to raise the event (one of the parameters would have to be the data object raising the event). That means that your plugins only have to subscribe to one event, but they get the event from all data objects. On the other hand, if only one plugin will ever listen to a data object at a time, why not just have the data object call the plugin directly? | [
"c#",
"plugins",
"message-passing"
] | 2 | 3 | 1,180 | 3 | 0 | 2008-09-08T22:13:27.807000 | 2008-09-09T01:59:07.703000 |
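The "session type object" the answer describes is essentially an event aggregator: every data object publishes through one shared hub, passing itself as the sender, and each plugin subscribes exactly once. A minimal sketch of that pattern (all names here are mine, not from the question's framework):

```python
class Session:
    """Central hub: plugins subscribe once; data objects publish
    through it, passing themselves as the sender."""

    def __init__(self):
        self._subscribers = []

    def subscribe(self, handler):
        self._subscribers.append(handler)

    def publish(self, sender, property_name):
        for handler in self._subscribers:
            handler(sender, property_name)


class Account:
    """Data object: raises change notifications via the session
    instead of exposing its own per-instance event."""

    def __init__(self, session, name):
        self._session = session
        self.name = name
        self._balance = 0

    @property
    def balance(self):
        return self._balance

    @balance.setter
    def balance(self, value):
        self._balance = value
        self._session.publish(self, "balance")
```

A storage plugin subscribes to the session once and still hears about changes from every account and transaction created later, which is exactly the scalability problem the question raises.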
50,824 | 50,832 | Can XML comments go anywhere? | I wrote a simple tool to generate a DBUnit XML dataset using queries that the user enters. I want to include each query entered in the XML as a comment, but the DBUnit API to generate the XML file doesn't support inserting the comment where I would like it (above the data it generates), so I am resorting to putting the comment with ALL queries either at the top or bottom. So my question: is it valid XML to place it at either location? For example, above the XML Declaration:... Or below the root node:... I plan to initially try above the XML Declaration, but I have doubts on if that is valid XML, despite the claim from wikipedia: Comments can be placed anywhere in the tree, including in the text if the content of the element is text or #PCDATA. I plan to post back if this works, but it would be nice to know if it is an official XML standard. UPDATE: See my response below for the result of my test. | According to the XML specification, a well-formed XML document is: document ::= prolog element Misc* where prolog is prolog ::= XMLDecl? Misc* (doctypedecl Misc*)? and Misc is Misc ::= Comment | PI | S and XMLDecl ::= '<?xml' VersionInfo EncodingDecl? SDDecl? S? '?>' which means that, if you want a comment to be the very first thing in the document, you cannot have an XML declaration (the declaration, when present, must come first). You can, however, have comments after the declaration and outside the document element, either at the top or the bottom of the document, because Misc* can contain comments. The specification agrees with Wikipedia on comments: 2.5 Comments [Definition: Comments may appear anywhere in a document outside other markup; in addition, they may appear within the document type declaration at places allowed by the grammar. They are not part of the document's character data; an XML processor MAY, but need not, make it possible for an application to retrieve the text of comments. For compatibility, the string "--" (double-hyphen) MUST NOT occur within comments.] 
Parameter entity references MUST NOT be recognized within comments. All of this together means that you can put comments anywhere that's not inside other markup, except that you cannot have an XML declaration if you lead with a comment. However, while in theory theory agrees with practice, in practice it doesn't, so I'd be curious to see how your experiment works out. | Can XML comments go anywhere? I wrote a simple tool to generate a DBUnit XML dataset using queries that the user enters. I want to include each query entered in the XML as a comment, but the DBUnit API to generate the XML file doesn't support inserting the comment where I would like it (above the data it generates), so I am resorting to putting the comment with ALL queries either at the top or bottom. So my question: is it valid XML to place it at either location? For example, above the XML Declaration:... Or below the root node:... I plan to initially try above the XML Declaration, but I have doubts on if that is valid XML, despite the claim from wikipedia: Comments can be placed anywhere in the tree, including in the text if the content of the element is text or #PCDATA. I plan to post back if this works, but it would be nice to know if it is an official XML standard. UPDATE: See my response below for the result of my test. | TITLE:
Can XML comments go anywhere?
QUESTION:
I wrote a simple tool to generate a DBUnit XML dataset using queries that the user enters. I want to include each query entered in the XML as a comment, but the DBUnit API to generate the XML file doesn't support inserting the comment where I would like it (above the data it generates), so I am resorting to putting the comment with ALL queries either at the top or bottom. So my question: is it valid XML to place it at either location? For example, above the XML Declaration:... Or below the root node:... I plan to initially try above the XML Declaration, but I have doubts on if that is valid XML, despite the claim from wikipedia: Comments can be placed anywhere in the tree, including in the text if the content of the element is text or #PCDATA. I plan to post back if this works, but it would be nice to know if it is an official XML standard. UPDATE: See my response below for the result of my test.
ANSWER:
According to the XML specification, a well-formed XML document is: document ::= prolog element Misc* where prolog is prolog ::= XMLDecl? Misc* (doctypedecl Misc*)? and Misc is Misc ::= Comment | PI | S and XMLDecl ::= '<?xml' VersionInfo EncodingDecl? SDDecl? S? '?>' which means that, if you want to have comments at the very top, you cannot have an XML declaration. You can, however, have comments after the declaration and outside the document element, either at the top or the bottom of the document, because Misc* can contain comments. The specification agrees with Wikipedia on comments: 2.5 Comments [Definition: Comments may appear anywhere in a document outside other markup; in addition, they may appear within the document type declaration at places allowed by the grammar. They are not part of the document's character data; an XML processor MAY, but need not, make it possible for an application to retrieve the text of comments. For compatibility, the string "--" (double-hyphen) MUST NOT occur within comments.] Parameter entity references MUST NOT be recognized within comments. All of this together means that you can put comments anywhere that's not inside other markup, except that you cannot have an XML declaration if you lead with a comment. However, while in theory, theory agrees with practice, in practice it doesn't, so I'd be curious to see how your experiment works out. | [
"xml",
"comments"
] | 20 | 17 | 7,930 | 4 | 0 | 2008-09-08T22:14:32.407000 | 2008-09-08T22:19:32.740000 |
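The placement rules described in the answer above can be checked directly; a small sketch (not part of the original Q&A) using Python's standard xml.etree — comments parse fine before or after the document element, but nothing may precede the XML declaration:

```python
import xml.etree.ElementTree as ET

def parses(doc):
    """Return True if the parser accepts the document."""
    try:
        ET.fromstring(doc)
        return True
    except ET.ParseError:
        return False

# Comments are allowed before or after the document element...
assert parses("<!-- queries here --><dataset><row/></dataset>")
assert parses("<dataset><row/></dataset><!-- queries here -->")

# ...and after an XML declaration, but never before it.
assert parses("<?xml version='1.0'?><!-- ok --><dataset/>")
assert not parses("<!-- bad --><?xml version='1.0'?><dataset/>")
```

This matches the questioner's later finding: a comment above the XML declaration is rejected, while one below the root node is fine.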
50,829 | 50,861 | How to log mallocs | This is a bit hypothetical and grossly simplified but... Assume a program that will be calling functions written by third parties. These parties can be assumed to be non-hostile but can't be assumed to be "competent". Each function will take some arguments, have side effects and return a value. They have no state while they are not running. The objective is to ensure they can't cause memory leaks by logging all mallocs (and the like) and then freeing everything after the function exits. Is this possible? Is this practical? p.s. The important part to me is ensuring that no allocations persist so ways to remove memory leaks without doing that are not useful to me. | You don't specify the operating system or environment, this answer assumes Linux, glibc, and C. You can set __malloc_hook, __free_hook, and __realloc_hook to point to functions which will be called from malloc(), realloc(), and free() respectively. There is a __malloc_hook manpage showing the prototypes. You can add track allocations in these hooks, then return to let glibc handle the memory allocation/deallocation. It sounds like you want to free any live allocations when the third-party function returns. There are ways to have gcc automatically insert calls at every function entrance and exit using -finstrument-functions, but I think that would be inelegant for what you are trying to do. Can you have your own code call a function in your memory-tracking library after calling one of these third-party functions? You could then check if there are any allocations which the third-party function did not already free. | How to log mallocs This is a bit hypothetical and grossly simplified but... Assume a program that will be calling functions written by third parties. These parties can be assumed to be non-hostile but can't be assumed to be "competent". Each function will take some arguments, have side effects and return a value. 
They have no state while they are not running. The objective is to ensure they can't cause memory leaks by logging all mallocs (and the like) and then freeing everything after the function exits. Is this possible? Is this practical? p.s. The important part to me is ensuring that no allocations persist so ways to remove memory leaks without doing that are not useful to me. | TITLE:
How to log mallocs
QUESTION:
This is a bit hypothetical and grossly simplified but... Assume a program that will be calling functions written by third parties. These parties can be assumed to be non-hostile but can't be assumed to be "competent". Each function will take some arguments, have side effects and return a value. They have no state while they are not running. The objective is to ensure they can't cause memory leaks by logging all mallocs (and the like) and then freeing everything after the function exits. Is this possible? Is this practical? p.s. The important part to me is ensuring that no allocations persist so ways to remove memory leaks without doing that are not useful to me.
ANSWER:
You don't specify the operating system or environment; this answer assumes Linux, glibc, and C. You can set __malloc_hook, __free_hook, and __realloc_hook to point to functions which will be called from malloc(), realloc(), and free() respectively. There is a __malloc_hook manpage showing the prototypes. You can track allocations in these hooks, then return to let glibc handle the memory allocation/deallocation. It sounds like you want to free any live allocations when the third-party function returns. There are ways to have gcc automatically insert calls at every function entrance and exit using -finstrument-functions, but I think that would be inelegant for what you are trying to do. Can you have your own code call a function in your memory-tracking library after calling one of these third-party functions? You could then check if there are any allocations which the third-party function did not already free. | [
"memory",
"memory-management",
"malloc"
] | 6 | 6 | 2,298 | 11 | 0 | 2008-09-08T22:18:27.717000 | 2008-09-08T22:33:16.187000 |
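The hook mechanism above is C/glibc-specific, but the bookkeeping it enables is language-neutral. A minimal sketch in Python (the allocator class, the integer block handles, and the sloppy_third_party function are all invented for illustration): record every allocation in the hook, and reclaim whatever the third-party function left live when it returns.

```python
class TrackingAllocator:
    """Toy stand-in for the __malloc_hook / __free_hook bookkeeping."""

    def __init__(self):
        self.live = {}       # block handle -> size of outstanding blocks
        self._next = 0

    def malloc(self, size):  # what the malloc hook would record
        self._next += 1
        self.live[self._next] = size
        return self._next

    def free(self, block):   # what the free hook would erase
        self.live.pop(block, None)

    def reclaim_leaks(self):
        """Free (here: forget) everything still outstanding; return it."""
        leaked = dict(self.live)
        self.live.clear()
        return leaked

def sloppy_third_party(alloc):
    a = alloc.malloc(64)
    alloc.malloc(128)        # never freed by the third party
    alloc.free(a)

alloc = TrackingAllocator()
sloppy_third_party(alloc)
assert alloc.reclaim_leaks() == {2: 128}  # the 128-byte block leaked
assert alloc.live == {}
```

In the real C version, reclaim_leaks() would walk the recorded pointers and call the original free() on each one after the third-party function returns.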
50,845 | 285,923 | How to click on an AutoCompleteExtender with Watin | For my acceptance testing I'm writing text into the auto complete extender and I need to click on the populated list. In order to populate the list I have to use AppendText instead of TypeText, otherwise the textbox looses focus before the list is populated. Now my problem is when I try to click on the populated list. I've tried searching the UL element and clicking on it; but it's not firing the click event on the list. Then I tried to search the list by tagname and value: Element element = Browser.Element(Find.By("tagname", "li") && Find.ByValue("lookupString")); but it's not finding it, has anyone been able to do what I'm trying to do? | The shorter version of that is: string lookupString = "string in list"; Element list = Browser.Element("li", Find.ByText(new Regex(lookupString))); list.MouseDown(); Regexs will do a partial match so you don't need to specify.* either side and use string.Format. This assumes however that the lookupString doesn't contain any characters special to Regexs, they'd need to be escaped. | How to click on an AutoCompleteExtender with Watin For my acceptance testing I'm writing text into the auto complete extender and I need to click on the populated list. In order to populate the list I have to use AppendText instead of TypeText, otherwise the textbox looses focus before the list is populated. Now my problem is when I try to click on the populated list. I've tried searching the UL element and clicking on it; but it's not firing the click event on the list. Then I tried to search the list by tagname and value: Element element = Browser.Element(Find.By("tagname", "li") && Find.ByValue("lookupString")); but it's not finding it, has anyone been able to do what I'm trying to do? | TITLE:
How to click on an AutoCompleteExtender with Watin
QUESTION:
For my acceptance testing I'm writing text into the auto complete extender and I need to click on the populated list. In order to populate the list I have to use AppendText instead of TypeText, otherwise the textbox loses focus before the list is populated. Now my problem is when I try to click on the populated list. I've tried searching the UL element and clicking on it, but it's not firing the click event on the list. Then I tried to search the list by tagname and value: Element element = Browser.Element(Find.By("tagname", "li") && Find.ByValue("lookupString")); but it's not finding it. Has anyone been able to do what I'm trying to do?
ANSWER:
The shorter version of that is: string lookupString = "string in list"; Element list = Browser.Element("li", Find.ByText(new Regex(lookupString))); list.MouseDown(); Regexes will do a partial match, so you don't need to specify .* on either side or use string.Format. This assumes, however, that the lookupString doesn't contain any characters special to regexes; those would need to be escaped. | [
"watin",
"autocompleteextender"
] | 0 | 1 | 1,258 | 2 | 0 | 2008-09-08T22:25:48.153000 | 2008-11-13T00:29:52.953000 |
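The partial-match and escaping caveats in the answer above hold for regex engines generally, not just .NET's; a quick illustration in Python (the search strings are made up):

```python
import re

# A regex search matches anywhere in the text, so no leading or
# trailing .* is needed around the lookup string.
lookup = "string in list"
assert re.search(lookup, "some longer string in list item text")

# Characters special to regexes must be escaped before building
# the pattern, exactly as the answer warns.
lookup = "price (USD) + tax"
pattern = re.escape(lookup)
assert re.search(pattern, "total: price (USD) + tax due")
```

Without the escaping step, a lookup string like "price (USD) + tax" would be treated as grouping and repetition operators rather than literal text.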
50,863 | 50,874 | What role to give a sql login in order to create a database and additional logins | What role should I give a sql login if I need the login to be able to create a database, and create additional logins and add users based on those logins to the database i created? This is sql 2005. | There is no fixed database role that includes these permissions. You'll have to create a role and assign the permissions individually. CREATE ROLE db_creator GRANT CREATE DATABASE TO db_creator GRANT ALTER ANY LOGIN TO db_creator GRANT ALTER ANY USER TO db_creator | What role to give a sql login in order to create a database and additional logins What role should I give a sql login if I need the login to be able to create a database, and create additional logins and add users based on those logins to the database i created? This is sql 2005. | TITLE:
What role to give a sql login in order to create a database and additional logins
QUESTION:
What role should I give a SQL login if I need the login to be able to create a database, and create additional logins and add users based on those logins to the database I created? This is SQL Server 2005.
ANSWER:
There is no fixed database role that includes these permissions. You'll have to create a role and assign the permissions individually: CREATE ROLE db_creator; GRANT CREATE DATABASE TO db_creator; GRANT ALTER ANY LOGIN TO db_creator; GRANT ALTER ANY USER TO db_creator | [
"sql-server-2005"
] | 2 | 1 | 367 | 1 | 0 | 2008-09-08T22:34:41.657000 | 2008-09-08T22:42:51.387000 |
50,890 | 50,896 | Average User Download Speeds | Any ideas what the average user's download speed is? I'm working on a site that streams video and am trying to figure out what an average download speed as to determine quality. I know i might be comparing apples with oranges but I'm just looking for something to get a basis for where to start. | Speedtest.net has a lot of stats broken down by country, region, city and ISP. Not sure about accuracy, since it's only based on the people using their "bandwidth measurement" service. | Average User Download Speeds Any ideas what the average user's download speed is? I'm working on a site that streams video and am trying to figure out what an average download speed as to determine quality. I know i might be comparing apples with oranges but I'm just looking for something to get a basis for where to start. | TITLE:
Average User Download Speeds
QUESTION:
Any ideas what the average user's download speed is? I'm working on a site that streams video and am trying to figure out what an average download speed is, so as to determine quality. I know I might be comparing apples with oranges, but I'm just looking for something to get a basis for where to start.
ANSWER:
Speedtest.net has a lot of stats broken down by country, region, city and ISP. Not sure about accuracy, since it's only based on the people using their "bandwidth measurement" service. | [
"performance",
"download",
"average"
] | 3 | 6 | 1,024 | 9 | 0 | 2008-09-08T22:48:43.497000 | 2008-09-08T22:53:47.773000 |
50,898 | 52,099 | Nesting a GridView within Repeater | I have a scenario wherein, for example, I need to repeat a list of US states and display a table of cities and city populations after the name of each state. The design requirement dictates that every outer repetition must be the name of a state followed by a table of cities, and that requirement cannot be changed at this time. Are there disadvantages to nesting a GridView within a Repeater and then binding each repeated GridView during the Repeater's ItemDataBound event? What are some alternative solutions? | The best solution I was able to come up with was to nest the GridView in the Repeater. Then I bound each repeated GridView during the Repeater's ItemDataBound event. I turned off their ViewStates, of course, as they weren't required. | Nesting a GridView within Repeater I have a scenario wherein, for example, I need to repeat a list of US states and display a table of cities and city populations after the name of each state. The design requirement dictates that every outer repetition must be the name of a state followed by a table of cities, and that requirement cannot be changed at this time. Are there disadvantages to nesting a GridView within a Repeater and then binding each repeated GridView during the Repeater's ItemDataBound event? What are some alternative solutions? | TITLE:
Nesting a GridView within Repeater
QUESTION:
I have a scenario wherein, for example, I need to repeat a list of US states and display a table of cities and city populations after the name of each state. The design requirement dictates that every outer repetition must be the name of a state followed by a table of cities, and that requirement cannot be changed at this time. Are there disadvantages to nesting a GridView within a Repeater and then binding each repeated GridView during the Repeater's ItemDataBound event? What are some alternative solutions?
ANSWER:
The best solution I was able to come up with was to nest the GridView in the Repeater. Then I bound each repeated GridView during the Repeater's ItemDataBound event. I turned off their ViewStates, of course, as they weren't required. | [
"asp.net",
"data-binding",
"gridview",
"repeater"
] | 3 | 1 | 7,306 | 4 | 0 | 2008-09-08T22:56:22.230000 | 2008-09-09T15:03:04.073000 |
50,900 | 50,966 | Best way to detect a release build from a debug build? .net | So I have about 10 short css files that I use with mvc app. There are like error.css login.css etc... Just some really short css files that make updating and editing easy (At least for me). What I want is something that will optimize the if else branch and not incorporate it within the final bits. I want to do something like this if(Debug.Mode){ } else { } I'll have a msbuild task that will combine all the css files, minimize them and all that good stuff. I just need to know if there is a way to remove the if else branch in the final bits. | Specifically, like this in C#: #if (DEBUG) Debug Stuff #endif C# has the following preprocessor directives: #if #else #elif // Else If #endif #define #undef // Undefine #warning // Causes the preprocessor to fire warning #error // Causes the preprocessor to fire a fatal error #line // Lets the preprocessor know where this source line came from #region // Codefolding #endregion | Best way to detect a release build from a debug build? .net So I have about 10 short css files that I use with mvc app. There are like error.css login.css etc... Just some really short css files that make updating and editing easy (At least for me). What I want is something that will optimize the if else branch and not incorporate it within the final bits. I want to do something like this if(Debug.Mode){ } else { } I'll have a msbuild task that will combine all the css files, minimize them and all that good stuff. I just need to know if there is a way to remove the if else branch in the final bits. | TITLE:
Best way to detect a release build from a debug build? .net
QUESTION:
So I have about 10 short css files that I use with mvc app. There are like error.css login.css etc... Just some really short css files that make updating and editing easy (At least for me). What I want is something that will optimize the if else branch and not incorporate it within the final bits. I want to do something like this if(Debug.Mode){ } else { } I'll have a msbuild task that will combine all the css files, minimize them and all that good stuff. I just need to know if there is a way to remove the if else branch in the final bits.
ANSWER:
Specifically, like this in C#:
#if (DEBUG)
Debug Stuff
#endif
C# has the following preprocessor directives:
#if
#else
#elif // else if
#endif
#define
#undef // undefine
#warning // causes the preprocessor to fire a warning
#error // causes the preprocessor to fire a fatal error
#line // lets the preprocessor know where this source line came from
#region // code folding
#endregion | [
".net",
"debugging",
"release"
] | 9 | 28 | 13,025 | 5 | 0 | 2008-09-08T22:57:20.463000 | 2008-09-08T23:40:44.790000 |
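For comparison (not part of the original answer), Python's closest analog to C#'s #if DEBUG is the built-in __debug__ constant, which is True unless the interpreter is started with -O; like the C# directive, the compiler strips the dead `if __debug__:` branch under -O. The function below is an invented example of the questioner's CSS-combining scenario.

```python
def stylesheet_mode():
    # True on a normal run; False (and compiled away) under `python -O`.
    if __debug__:
        return "individual css files"
    return "combined, minified css"

# Assuming the interpreter was not started with -O:
assert stylesheet_mode() == "individual css files"
```

Run the same script with `python -O` and the debug branch disappears from the bytecode, much like an #if DEBUG block excluded from a release build.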
50,914 | 50,975 | Search by hash? | I had the idea of a search engine that would index web items like other search engines do now but would only store the file's title, url and a hash of the contents. This way it would be easy to find items on the web if you already had them and didn't know where they came from or wanted to know all the places that something appeared. More useful for non textual items like images, executables and archives. I was wondering if there is already something similar? | Check out the wikipedia page on locality sensitive hashing. There's also a good page hosted by a research on MIT. In general, there are several flavors available: hashes for strings (such as simhash ), sets or 0/1 features (such as min-wise hashes ), and for real vectors. The main trick for numerical hashes is basically dimension reduction, so far. For strings, the idea is to come up with a representation that's robust in the face of minor edits. I'm also doing a little research in this field, although I guess stackoverflow might not be the right place for nascent work. | Search by hash? I had the idea of a search engine that would index web items like other search engines do now but would only store the file's title, url and a hash of the contents. This way it would be easy to find items on the web if you already had them and didn't know where they came from or wanted to know all the places that something appeared. More useful for non textual items like images, executables and archives. I was wondering if there is already something similar? | TITLE:
Search by hash?
QUESTION:
I had the idea of a search engine that would index web items like other search engines do now but would only store the file's title, url and a hash of the contents. This way it would be easy to find items on the web if you already had them and didn't know where they came from or wanted to know all the places that something appeared. More useful for non textual items like images, executables and archives. I was wondering if there is already something similar?
ANSWER:
Check out the Wikipedia page on locality-sensitive hashing. There's also a good page hosted by a researcher at MIT. In general, there are several flavors available: hashes for strings (such as simhash), sets or 0/1 features (such as min-wise hashes), and for real vectors. The main trick for numerical hashes is basically dimension reduction, so far. For strings, the idea is to come up with a representation that's robust in the face of minor edits. I'm also doing a little research in this field, although I guess stackoverflow might not be the right place for nascent work. | [
"search",
"hash",
"utilities"
] | 5 | 5 | 6,886 | 5 | 0 | 2008-09-08T23:06:34.013000 | 2008-09-08T23:50:30.447000 |
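Locality-sensitive hashing covers the near-duplicate case; the exact-duplicate half of the question's idea — store only title, URL, and a hash of the contents — is a plain dictionary lookup. A sketch (titles, URLs, and payload bytes are invented):

```python
import hashlib

def fingerprint(contents: bytes) -> str:
    """Content hash used as the index key."""
    return hashlib.sha256(contents).hexdigest()

index = {}  # hash of contents -> list of (title, url)

def add_item(title, url, contents):
    index.setdefault(fingerprint(contents), []).append((title, url))

payload = b"\x89PNG...same bytes, two locations"
add_item("kitten.png", "http://a.example/kitten.png", payload)
add_item("cat.png", "http://b.example/img/cat.png", payload)

# Given a local copy of the file, find everywhere it appeared.
hits = index.get(fingerprint(payload), [])
assert [u for _, u in hits] == ["http://a.example/kitten.png",
                                "http://b.example/img/cat.png"]
```

This finds byte-identical copies only; a recompressed or resized image gets a different hash, which is exactly the gap the LSH techniques in the answer address.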
50,923 | 50,929 | How do you create a weak reference to an object in Python? | How do you create a weak reference to an object in Python? | >>> import weakref >>> class Object:... pass... >>> o = Object() >>> r = weakref.ref(o) >>> # if the reference is still active, r() will be o, otherwise None >>> do_something_with_o(r()) See the wearkref module docs for more details. You can also use weakref.proxy to create an object that proxies o. Will throw ReferenceError if used when the referent is no longer referenced. | How do you create a weak reference to an object in Python? How do you create a weak reference to an object in Python? | TITLE:
How do you create a weak reference to an object in Python?
QUESTION:
How do you create a weak reference to an object in Python?
ANSWER:
>>> import weakref
>>> class Object:
...     pass
...
>>> o = Object()
>>> r = weakref.ref(o)
>>> # if the reference is still active, r() will be o, otherwise None
>>> do_something_with_o(r())
See the weakref module docs for more details. You can also use weakref.proxy to create an object that proxies o. It will throw ReferenceError if used when the referent is no longer referenced. | [
"python",
"weak-references"
] | 11 | 14 | 921 | 1 | 0 | 2008-09-08T23:10:53.037000 | 2008-09-08T23:13:16.527000 |
50,931 | 50,948 | Redirecting non-www URL to www using .htaccess | I'm using Helicon's ISAPI Rewrite 3, which basically enables.htaccess in IIS. I need to redirect a non-www URL to the www version, i.e. example.com should redirect to www.example.com. I used the following rule from the examples but it affects subdomains: RewriteCond %{HTTPS} (on)? RewriteCond %{HTTP:Host} ^(?!www\.)(.+)$ [NC] RewriteCond %{REQUEST_URI} (.+) RewriteRule.? http(?%1s)://www.%2%3 [R=301,L] This works for most part, but is also redirect sub.example.com to www.sub.example.com. How can I rewrite the above rule so that subdomains do not get redirected? | Append the following RewriteCond: RewriteCond %{HTTP:Host} ^[^.]+\.[a-z]{2,5}$ [NC] That way it'll only apply the rule to nondottedsomething.uptofiveletters as you can see, subdomain.domain.com will not match the condition and thus will not be rewritten. You can change [a-z]{2,5} for a stricter tld matching regex, as well as placing all the constraints for allowed chars in domain names (as [^.]+ is more permissive than strictly necessary). All in all I think in this case that wouldn't be necessary. EDIT: sadie spotted a flaw on the regex, changed the first part of it from [^.] to [^.]+ | Redirecting non-www URL to www using .htaccess I'm using Helicon's ISAPI Rewrite 3, which basically enables.htaccess in IIS. I need to redirect a non-www URL to the www version, i.e. example.com should redirect to www.example.com. I used the following rule from the examples but it affects subdomains: RewriteCond %{HTTPS} (on)? RewriteCond %{HTTP:Host} ^(?!www\.)(.+)$ [NC] RewriteCond %{REQUEST_URI} (.+) RewriteRule.? http(?%1s)://www.%2%3 [R=301,L] This works for most part, but is also redirect sub.example.com to www.sub.example.com. How can I rewrite the above rule so that subdomains do not get redirected? | TITLE:
Redirecting non-www URL to www using .htaccess
QUESTION:
I'm using Helicon's ISAPI Rewrite 3, which basically enables .htaccess in IIS. I need to redirect a non-www URL to the www version, i.e. example.com should redirect to www.example.com. I used the following rule from the examples but it affects subdomains: RewriteCond %{HTTPS} (on)? RewriteCond %{HTTP:Host} ^(?!www\.)(.+)$ [NC] RewriteCond %{REQUEST_URI} (.+) RewriteRule .? http(?%1s)://www.%2%3 [R=301,L] This works for the most part, but it also redirects sub.example.com to www.sub.example.com. How can I rewrite the above rule so that subdomains do not get redirected?
ANSWER:
Append the following RewriteCond: RewriteCond %{HTTP:Host} ^[^.]+\.[a-z]{2,5}$ [NC] That way it'll only apply the rule to nondottedsomething.uptofiveletters; as you can see, subdomain.domain.com will not match the condition and thus will not be rewritten. You can change [a-z]{2,5} for a stricter TLD-matching regex, as well as placing all the constraints for allowed chars in domain names (as [^.]+ is more permissive than strictly necessary). All in all I think in this case that wouldn't be necessary. EDIT: sadie spotted a flaw in the regex; changed the first part of it from [^.] to [^.]+ | [
".htaccess",
"isapi-rewrite"
] | 6 | 3 | 2,954 | 7 | 0 | 2008-09-08T23:14:45.117000 | 2008-09-08T23:28:15.617000 |
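The host pattern in the added RewriteCond can be sanity-checked outside Apache; Python's re syntax is close enough to the PCRE used by mod_rewrite for this pattern (re.IGNORECASE plays the role of the [NC] flag):

```python
import re

# Host pattern from the answer: one dot-free label, a dot,
# then a 2-5 letter TLD.
host = re.compile(r"^[^.]+\.[a-z]{2,5}$", re.IGNORECASE)

assert host.match("example.com")          # bare domain: gets rewritten
assert host.match("Example.ORG")          # [NC] / case-insensitive
assert not host.match("sub.example.com")  # subdomains are left alone
assert not host.match("www.example.com")  # extra label, so no match
```

Because [^.]+ cannot cross a dot, any host with more than one dot fails the condition, which is exactly why subdomains escape the redirect.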
50,945 | 51,031 | Can you do a partial checkout with Subversion? | If I had 20 directories under trunk/ with lots of files in each and only needed 3 of those directories, would it be possible to do a Subversion checkout with only those 3 directories under trunk? | Subversion 1.5 introduces sparse checkouts which may be something you might find useful. From the documentation:... sparse directories (or shallow checkouts )... allows you to easily check out a working copy—or a portion of a working copy—more shallowly than full recursion, with the freedom to bring in previously ignored files and subdirectories at a later time. | Can you do a partial checkout with Subversion? If I had 20 directories under trunk/ with lots of files in each and only needed 3 of those directories, would it be possible to do a Subversion checkout with only those 3 directories under trunk? | TITLE:
Can you do a partial checkout with Subversion?
QUESTION:
If I had 20 directories under trunk/ with lots of files in each and only needed 3 of those directories, would it be possible to do a Subversion checkout with only those 3 directories under trunk?
ANSWER:
Subversion 1.5 introduces sparse checkouts, which may be something you might find useful. From the documentation: ...sparse directories (or shallow checkouts)... allows you to easily check out a working copy—or a portion of a working copy—more shallowly than full recursion, with the freedom to bring in previously ignored files and subdirectories at a later time. | [
"svn",
"sparse-checkout"
] | 177 | 82 | 127,156 | 8 | 0 | 2008-09-08T23:26:12.953000 | 2008-09-09T00:48:08.403000 |
50,965 | 50,968 | Will random data appended to a JPG make it unusable? | So, to simplify my life I want to be able to append from 1 to 7 additional characters on the end of some jpg images my program is processing*. These are dummy padding (fillers, etc - probably all 0x00) just to make the file size a multiple of 8 bytes for block encryption. Having tried this out with a few programs, it appears they are fine with the additional characters, which occur after the FF D9 that specifies the end of the image - so it appears that the file format is well defined enough that the 'corruption' I'm adding at the end shouldn't matter. I can always post process the files later if needed, but my preference is to do the simplest thing possible - which is to let them remain (I'm decrypting other file types and they won't mind, so having a special case is annoying). I figure with all the talk of Steganography hullaballo years ago, someone has some input here... (encryption processing by 8 byte blocks, I don't want to save pre-encrypted file size, so append 0x00 to input data, and leave them there after decoding) | No, you can add bits to the end of a jpg file, without making it unusable. The heading of the jpg file tells how to read it, so the program reading it will stop at the end of the jpg data. In fact, people have hidden zip files inside jpg files by appending the zip data to the end of the jpg data. Because of the way these formats are structured, the resulting file is valid in either format. | Will random data appended to a JPG make it unusable? So, to simplify my life I want to be able to append from 1 to 7 additional characters on the end of some jpg images my program is processing*. These are dummy padding (fillers, etc - probably all 0x00) just to make the file size a multiple of 8 bytes for block encryption. 
Having tried this out with a few programs, it appears they are fine with the additional characters, which occur after the FF D9 that specifies the end of the image - so it appears that the file format is well defined enough that the 'corruption' I'm adding at the end shouldn't matter. I can always post process the files later if needed, but my preference is to do the simplest thing possible - which is to let them remain (I'm decrypting other file types and they won't mind, so having a special case is annoying). I figure with all the talk of Steganography hullaballo years ago, someone has some input here... (encryption processing by 8 byte blocks, I don't want to save pre-encrypted file size, so append 0x00 to input data, and leave them there after decoding) | TITLE:
Will random data appended to a JPG make it unusable?
QUESTION:
So, to simplify my life I want to be able to append from 1 to 7 additional characters on the end of some jpg images my program is processing*. These are dummy padding (fillers, etc - probably all 0x00) just to make the file size a multiple of 8 bytes for block encryption. Having tried this out with a few programs, it appears they are fine with the additional characters, which occur after the FF D9 that specifies the end of the image - so it appears that the file format is well defined enough that the 'corruption' I'm adding at the end shouldn't matter. I can always post-process the files later if needed, but my preference is to do the simplest thing possible - which is to let them remain (I'm decrypting other file types and they won't mind, so having a special case is annoying). I figure with all the talk of the Steganography hullabaloo years ago, someone has some input here... (* encryption processing by 8-byte blocks; I don't want to save the pre-encrypted file size, so I append 0x00 to the input data and leave it there after decoding)
ANSWER:
No, you can add bits to the end of a jpg file, without making it unusable. The heading of the jpg file tells how to read it, so the program reading it will stop at the end of the jpg data. In fact, people have hidden zip files inside jpg files by appending the zip data to the end of the jpg data. Because of the way these formats are structured, the resulting file is valid in either format. | [
"jpeg",
"file-format",
"steganography"
] | 25 | 25 | 4,350 | 4 | 0 | 2008-09-08T23:39:52.063000 | 2008-09-08T23:43:45.627000 |
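The questioner's padding scheme can be sketched in a few lines of Python (the "jpeg" here is toy bytes, not a real image, and nothing below parses JPEG structure): pad with 0x00 to an 8-byte multiple, and note that everything after the FF D9 end-of-image marker leaves the image stream untouched.

```python
def pad_to_block(data: bytes, block: int = 8) -> bytes:
    """Append 0x00 bytes until len(data) is a multiple of block."""
    remainder = len(data) % block
    if remainder:
        data += b"\x00" * (block - remainder)
    return data

# Stand-in for a JPEG: SOI marker, payload, EOI marker (FF D9).
jpeg = b"\xff\xd8" + b"image payload" + b"\xff\xd9"   # 17 bytes

padded = pad_to_block(jpeg)
assert len(padded) % 8 == 0
assert padded.startswith(jpeg)       # original stream is untouched
assert b"\xff\xd9" in padded         # EOI marker still terminates the image
assert padded[len(jpeg):] == b"\x00" * (len(padded) - len(jpeg))
```

A decoder that stops at EOI, as the answer describes, never sees the trailing zeros; the same property is what lets zip archives hide after JPEG data.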
50,983 | 50,991 | Are there any languages that implement generics _well_? | I liked the discussion at Differences in Generics, and was wondering whether there were any languages that used this feature particularly well. I really dislike Java's List for a List of things that are Liskov-substitutable for Foo. Why can't List cover that? And honestly, Comparable? I also can't remember for the life of my why you should never return an Array of generics: public T[] getAll () {... } I never liked templates in C++, but that was mostly because none of the compilers could ever spit out a remotely meaningful error message for them. One time I actually did a make realclean && make 17 times to get something to compile; I never did figure out why the 17th time was the charm. So, who actually likes using generics in their pet language? | Haskell implements type-constructor parameterisation (generics, or parametric polymorphism) quite well. So does Scala (although it needs a bit of hand-holding sometimes). Both of these languages have higher-kinded types (a.k.a. abstract type constructors, or type-constructor polymorphism, or higher-order polymorphism). See here: Generics of a Higher Kind | Are there any languages that implement generics _well_? I liked the discussion at Differences in Generics, and was wondering whether there were any languages that used this feature particularly well. I really dislike Java's List for a List of things that are Liskov-substitutable for Foo. Why can't List cover that? And honestly, Comparable? I also can't remember for the life of my why you should never return an Array of generics: public T[] getAll () {... } I never liked templates in C++, but that was mostly because none of the compilers could ever spit out a remotely meaningful error message for them. One time I actually did a make realclean && make 17 times to get something to compile; I never did figure out why the 17th time was the charm. 
So, who actually likes using generics in their pet language? | TITLE:
Are there any languages that implement generics _well_?
QUESTION:
I liked the discussion at Differences in Generics, and was wondering whether there were any languages that used this feature particularly well. I really dislike Java's List for a List of things that are Liskov-substitutable for Foo. Why can't List cover that? And honestly, Comparable? I also can't remember for the life of me why you should never return an Array of generics: public T[] getAll () {... } I never liked templates in C++, but that was mostly because none of the compilers could ever spit out a remotely meaningful error message for them. One time I actually did a make realclean && make 17 times to get something to compile; I never did figure out why the 17th time was the charm. So, who actually likes using generics in their pet language?
ANSWER:
Haskell implements type-constructor parameterisation (generics, or parametric polymorphism) quite well. So does Scala (although it needs a bit of hand-holding sometimes). Both of these languages have higher-kinded types (a.k.a. abstract type constructors, or type-constructor polymorphism, or higher-order polymorphism). See here: Generics of a Higher Kind | [
"language-agnostic",
"generics"
] | 7 | 14 | 587 | 6 | 0 | 2008-09-08T23:58:17.307000 | 2008-09-09T00:02:12.603000 |
50,995 | 51,025 | Using strings in XML Literals | I'm a C# developer who's fumbling in the first VB code he's written since VB6, so if I am asking a rather obvious question, please forgive me. I decided to experiment with XML Literals to generate some XML code for me, instead of using XMLDocument I have 2 questions, the second regarding a workaround due to my inability to figure out the first. 1: Ideal solution I have a Dictionary of ElementName, ElementValue whose KeyValue pairs I was looping over in the hope of generating the values dynamically, but the following syntax dies a horrible death Dim xConnections As XElement For Each connection As Connection In connections.AsList For Each kvp As KeyValuePair(Of String, String) In connection.DecompiledElements xConnections = <<%= kvp.Key %>><%= kvp.Value %><\<%=kvp.Key %>> Next Next I have vague memories of the T4 syntax (the <%=%> syntax) being able to handle more complex operations (rather than direct assignment to the <%= ) and a 'Response.Write' like object to write output to, but I can't remember the details. 2: Cludgy workaround Instead I thought of building a StringBuilder object and assigning its.ToString to the XElement, but that also failed with a conversion error. I would prefer to continue using my key value pair concept in example one above, as I feel cludging together a string as in example 2 above is rather nasty, and I really should go back to using XMLDocument if instead. | If I understand correctly what you are trying to do, you can use the StringBuilder. Use the StringBuilder.Append method and append the XmlElement 'OuterXml' property. For example: sb.Append(xmlElement.OuterXml) | Using strings in XML Literals I'm a C# developer who's fumbling in the first VB code he's written since VB6, so if I am asking a rather obvious question, please forgive me. 
I decided to experiment with XML Literals to generate some XML code for me, instead of using XMLDocument I have 2 questions, the second regarding a workaround due to my inability to figure out the first. 1: Ideal solution I have a Dictionary of ElementName, ElementValue whose KeyValue pairs I was looping over in the hope of generating the values dynamically, but the following syntax dies a horrible death Dim xConnections As XElement For Each connection As Connection In connections.AsList For Each kvp As KeyValuePair(Of String, String) In connection.DecompiledElements xConnections = <<%= kvp.Key %>><%= kvp.Value %><\<%=kvp.Key %>> Next Next I have vague memories of the T4 syntax (the <%=%> syntax) being able to handle more complex operations (rather than direct assignment to the <%= ) and a 'Response.Write' like object to write output to, but I can't remember the details. 2: Cludgy workaround Instead I thought of building a StringBuilder object and assigning its.ToString to the XElement, but that also failed with a conversion error. I would prefer to continue using my key value pair concept in example one above, as I feel cludging together a string as in example 2 above is rather nasty, and I really should go back to using XMLDocument if instead. | TITLE:
Using strings in XML Literals
QUESTION:
I'm a C# developer who's fumbling in the first VB code he's written since VB6, so if I am asking a rather obvious question, please forgive me. I decided to experiment with XML Literals to generate some XML code for me, instead of using XMLDocument I have 2 questions, the second regarding a workaround due to my inability to figure out the first. 1: Ideal solution I have a Dictionary of ElementName, ElementValue whose KeyValue pairs I was looping over in the hope of generating the values dynamically, but the following syntax dies a horrible death Dim xConnections As XElement For Each connection As Connection In connections.AsList For Each kvp As KeyValuePair(Of String, String) In connection.DecompiledElements xConnections = <<%= kvp.Key %>><%= kvp.Value %><\<%=kvp.Key %>> Next Next I have vague memories of the T4 syntax (the <%=%> syntax) being able to handle more complex operations (rather than direct assignment to the <%= ) and a 'Response.Write' like object to write output to, but I can't remember the details. 2: Cludgy workaround Instead I thought of building a StringBuilder object and assigning its .ToString to the XElement, but that also failed with a conversion error. I would prefer to continue using my key value pair concept in example one above, as I feel cludging together a string as in example 2 above is rather nasty, and I really should go back to using XMLDocument instead.
ANSWER:
If I understand correctly what you are trying to do, you can use the StringBuilder. Use the StringBuilder.Append method and append the XmlElement 'OuterXml' property. For example: sb.Append(xmlElement.OuterXml) | [
"xml",
"vb.net",
"xml-literals"
] | 3 | 0 | 2,096 | 4 | 0 | 2008-09-09T00:05:57.587000 | 2008-09-09T00:44:06.277000 |
51,002 | 51,024 | Java Desktop application framework | I am working on designing and building a desktop application. I am thinking about using eclipse or netbeans for the base of this application. However, I have never built on either of these platforms. I am personally leaning to using netbeans because it seams like that platform is an easer learning curve. But, I wanted to ask people that have actually build on these platforms before which one is easier to use? My personal definition of easer is as follows: Easy to get started with Consistent and logical API Good documentation Easy to build and deploy Thanks very much, Josh | I can't say enough about the Eclipse RCP platform. I would recommend it for any Java desktop app development. It's free, has great tutorials, and allows cross-platform development of rich desktop applications that use native OS windowing toolkit, meaning that your application will look native in any platform. It also has a clean API that stays out of your way, and makes deploying to any platform a piece of cake. If your interested check out this book: http://www.amazon.com/Eclipse-Rich-Client-Platform-Applications/dp/0321334612 | Java Desktop application framework I am working on designing and building a desktop application. I am thinking about using eclipse or netbeans for the base of this application. However, I have never built on either of these platforms. I am personally leaning to using netbeans because it seams like that platform is an easer learning curve. But, I wanted to ask people that have actually build on these platforms before which one is easier to use? My personal definition of easer is as follows: Easy to get started with Consistent and logical API Good documentation Easy to build and deploy Thanks very much, Josh | TITLE:
Java Desktop application framework
QUESTION:
I am working on designing and building a desktop application. I am thinking about using Eclipse or NetBeans for the base of this application. However, I have never built on either of these platforms. I am personally leaning toward using NetBeans because it seems like that platform has an easier learning curve. But, I wanted to ask people that have actually built on these platforms before: which one is easier to use? My personal definition of easier is as follows: Easy to get started with Consistent and logical API Good documentation Easy to build and deploy Thanks very much, Josh
ANSWER:
I can't say enough about the Eclipse RCP platform. I would recommend it for any Java desktop app development. It's free, has great tutorials, and allows cross-platform development of rich desktop applications that use the native OS windowing toolkit, meaning that your application will look native on any platform. It also has a clean API that stays out of your way, and makes deploying to any platform a piece of cake. If you're interested, check out this book: http://www.amazon.com/Eclipse-Rich-Client-Platform-Applications/dp/0321334612 | [
"java",
"eclipse",
"netbeans",
"desktop",
"platform"
] | 18 | 6 | 19,894 | 12 | 0 | 2008-09-09T00:11:34.030000 | 2008-09-09T00:42:40.477000 |
51,007 | 51,310 | Lack of operator overloading in ActionScript 3.0 | One of the things I miss the most in ActionScript is the lack of operator overloading, in particular ==. I kind of work around this issue by adding a "Compare" method to my classes, but that doesn't help in many cases, like when you want to use things like the built in Dictionary. Is there a good way to work around this problem? | Nope. But it doesn't hurt to add equals methods to your own classes. I try to never use == when comparing objects (the same goes for ===, which is the same thing for objects) since it only checks identity. Sadly all the collections in Flash and Flex assume that identity is the only measure of equality that is needed. There are hints in Flex that someone wanted to alleviate this problem at one time, but it seems like it was abandoned: there is an interface called IUID, and it is mentioned in the Flex Developer's Guide, but it is not used anywhere. Not even the collections in Flex use it to determine equality. And since you are asking for a solution for Flash, it may not have helped you anyway. I've written some more about this (in the context of Flex) on my blog: Is there no equality?. | Lack of operator overloading in ActionScript 3.0 One of the things I miss the most in ActionScript is the lack of operator overloading, in particular ==. I kind of work around this issue by adding a "Compare" method to my classes, but that doesn't help in many cases, like when you want to use things like the built in Dictionary. Is there a good way to work around this problem? | TITLE:
Lack of operator overloading in ActionScript 3.0
QUESTION:
One of the things I miss the most in ActionScript is the lack of operator overloading, in particular ==. I kind of work around this issue by adding a "Compare" method to my classes, but that doesn't help in many cases, like when you want to use things like the built in Dictionary. Is there a good way to work around this problem?
ANSWER:
Nope. But it doesn't hurt to add equals methods to your own classes. I try to never use == when comparing objects (the same goes for ===, which is the same thing for objects) since it only checks identity. Sadly all the collections in Flash and Flex assume that identity is the only measure of equality that is needed. There are hints in Flex that someone wanted to alleviate this problem at one time, but it seems like it was abandoned: there is an interface called IUID, and it is mentioned in the Flex Developer's Guide, but it is not used anywhere. Not even the collections in Flex use it to determine equality. And since you are asking for a solution for Flash, it may not have helped you anyway. I've written some more about this (in the context of Flex) on my blog: Is there no equality?. | [
"flash",
"actionscript-3"
] | 15 | 12 | 7,398 | 2 | 0 | 2008-09-09T00:23:28.447000 | 2008-09-09T06:52:00.353000 |
51,011 | 51,057 | best tool to reverse-engineer a WinXP PS/2 touchpad driver? | I have a PS/2 touchpad which I would like to write a driver for (I'm just a web guy so this is unfamiliar territory to me). The touchpad comes with a Windows XP driver, which apparently sends messages to enable/disable tap-to-click. I'm trying to find out what message it is sending but I'm not sure how to start. Would software like "Syser Debugger" work? I want to intercept outgoing messages being sent to the PS/2 bus. | IDA Pro won't be much use to you if you want to find out what 'messages' are being sent. You should realise that this is a very big step up for most web developers, but you already knew that? I would start by deciding if you really need to work at the driver-level, often this is the Kernel level. The user mode level may be where you want to look first. Use a tool like WinSpy or other Windows debug tool to find out what messages are getting passed around by your driver software, and the mouse configuration applet in control panel. You can use the Windows API function called SendMessage() to send your messages to the application from user mode. Your first stop for device driver development should be the Windows DDK docs and OSR Online. | best tool to reverse-engineer a WinXP PS/2 touchpad driver? I have a PS/2 touchpad which I would like to write a driver for (I'm just a web guy so this is unfamiliar territory to me). The touchpad comes with a Windows XP driver, which apparently sends messages to enable/disable tap-to-click. I'm trying to find out what message it is sending but I'm not sure how to start. Would software like "Syser Debugger" work? I want to intercept outgoing messages being sent to the PS/2 bus. | TITLE:
best tool to reverse-engineer a WinXP PS/2 touchpad driver?
QUESTION:
I have a PS/2 touchpad which I would like to write a driver for (I'm just a web guy so this is unfamiliar territory to me). The touchpad comes with a Windows XP driver, which apparently sends messages to enable/disable tap-to-click. I'm trying to find out what message it is sending but I'm not sure how to start. Would software like "Syser Debugger" work? I want to intercept outgoing messages being sent to the PS/2 bus.
ANSWER:
IDA Pro won't be much use to you if you want to find out what 'messages' are being sent. You should realise that this is a very big step up for most web developers, but you already knew that? I would start by deciding if you really need to work at the driver-level, often this is the Kernel level. The user mode level may be where you want to look first. Use a tool like WinSpy or other Windows debug tool to find out what messages are getting passed around by your driver software, and the mouse configuration applet in control panel. You can use the Windows API function called SendMessage() to send your messages to the application from user mode. Your first stop for device driver development should be the Windows DDK docs and OSR Online. | [
"windows-xp",
"reverse-engineering",
"drivers",
"black-box"
] | 2 | 4 | 1,938 | 3 | 0 | 2008-09-09T00:31:42.250000 | 2008-09-09T01:07:09.730000 |
51,019 | 51,058 | What does it mean when a PostgreSQL process is "idle in transaction"? | What does it mean when a PostgreSQL process is "idle in transaction"? On a server that I'm looking at, the output of "ps ax | grep postgres" I see 9 PostgreSQL processes that look like the following: postgres: user db 127.0.0.1(55658) idle in transaction Does this mean that some of the processes are hung, waiting for a transaction to be committed? Any pointers to relevant documentation are appreciated. | The PostgreSQL manual indicates that this means the transaction is open (inside BEGIN) and idle. It's most likely a user connected using the monitor who is thinking or typing. I have plenty of those on my system, too. If you're using Slony for replication, however, the Slony-I FAQ suggests idle in transaction may mean that the network connection was terminated abruptly. Check out the discussion in that FAQ for more details. | What does it mean when a PostgreSQL process is "idle in transaction"? What does it mean when a PostgreSQL process is "idle in transaction"? On a server that I'm looking at, the output of "ps ax | grep postgres" I see 9 PostgreSQL processes that look like the following: postgres: user db 127.0.0.1(55658) idle in transaction Does this mean that some of the processes are hung, waiting for a transaction to be committed? Any pointers to relevant documentation are appreciated. | TITLE:
What does it mean when a PostgreSQL process is "idle in transaction"?
QUESTION:
What does it mean when a PostgreSQL process is "idle in transaction"? On a server that I'm looking at, the output of "ps ax | grep postgres" I see 9 PostgreSQL processes that look like the following: postgres: user db 127.0.0.1(55658) idle in transaction Does this mean that some of the processes are hung, waiting for a transaction to be committed? Any pointers to relevant documentation are appreciated.
ANSWER:
The PostgreSQL manual indicates that this means the transaction is open (inside BEGIN) and idle. It's most likely a user connected using the monitor who is thinking or typing. I have plenty of those on my system, too. If you're using Slony for replication, however, the Slony-I FAQ suggests idle in transaction may mean that the network connection was terminated abruptly. Check out the discussion in that FAQ for more details. | [
"postgresql"
] | 114 | 77 | 194,986 | 2 | 0 | 2008-09-09T00:40:53.413000 | 2008-09-09T01:07:21.900000 |
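For context, on modern PostgreSQL (9.2 and later) such sessions can also be found from SQL rather than ps; this is a generic monitoring sketch using the standard pg_stat_activity view, not part of the original answer:

```sql
-- Lists backends currently sitting in an open, idle transaction.
-- (On releases older than 9.2 the marker appears in current_query
-- instead of a dedicated state column.)
SELECT pid, usename, state, xact_start, query_start
FROM pg_stat_activity
WHERE state = 'idle in transaction';
```

Long-lived rows here are the usual suspects when locks are stuck or VACUUM cannot reclaim space.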
51,021 | 51,037 | What is the difference between Raising Exceptions vs Throwing Exceptions in Ruby? | Ruby has two different exceptions mechanisms: Throw/Catch and Raise/Rescue. Why do we have two? When should you use one and not the other? | I think http://hasno.info/ruby-gotchas-and-caveats has a decent explanation of the difference: catch/throw are not the same as raise/rescue. catch/throw allows you to quickly exit blocks back to a point where a catch is defined for a specific symbol, raise rescue is the real exception handling stuff involving the Exception object. | What is the difference between Raising Exceptions vs Throwing Exceptions in Ruby? Ruby has two different exceptions mechanisms: Throw/Catch and Raise/Rescue. Why do we have two? When should you use one and not the other? | TITLE:
What is the difference between Raising Exceptions vs Throwing Exceptions in Ruby?
QUESTION:
Ruby has two different exceptions mechanisms: Throw/Catch and Raise/Rescue. Why do we have two? When should you use one and not the other?
ANSWER:
I think http://hasno.info/ruby-gotchas-and-caveats has a decent explanation of the difference: catch/throw are not the same as raise/rescue. catch/throw allows you to quickly exit blocks back to a point where a catch is defined for a specific symbol, raise rescue is the real exception handling stuff involving the Exception object. | [
"ruby",
"exception"
] | 203 | 116 | 62,028 | 3 | 0 | 2008-09-09T00:41:03.070000 | 2008-09-09T00:50:02.753000 |
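The contrast in the answer can be sketched in a few lines of Ruby (the symbol name and messages below are illustrative):

```ruby
# catch/throw: a labelled non-local exit; no Exception object is created
result = catch(:done) do
  10.times do |i|
    throw :done, i if i == 3   # unwinds straight back to catch(:done)
  end
  :never_reached
end

# raise/rescue: real exception handling built on Exception objects
message = begin
  raise ArgumentError, "bad input"
rescue ArgumentError => e
  e.message
end

puts result   # 3
puts message  # bad input
```

catch/throw is control flow (like a multi-level break), while raise/rescue carries a full Exception object with a message and backtrace.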
51,032 | 51,080 | Is there a difference between foo(void) and foo() in C++ or C? | Consider these two function definitions: void foo() { }
void foo(void) { } Is there any difference between these two? If not, why is the void argument there? Aesthetic reasons? | Historical note: this answer applies to C17 and older editions. C23 and later editions treat void foo() differently. In C: void foo() means "a function foo taking an unspecified number of arguments of unspecified type" void foo(void) means "a function foo taking no arguments" In C++: void foo() means "a function foo taking no arguments" void foo(void) means "a function foo taking no arguments" By writing foo(void), therefore, we achieve the same interpretation across both languages and make our headers multilingual (though we usually need to do some more things to the headers to make them truly cross-language; namely, wrap them in an extern "C" if we're compiling C++). | Is there a difference between foo(void) and foo() in C++ or C? Consider these two function definitions: void foo() { }
void foo(void) { } Is there any difference between these two? If not, why is the void argument there? Aesthetic reasons? | TITLE:
Is there a difference between foo(void) and foo() in C++ or C?
QUESTION:
Consider these two function definitions: void foo() { }
void foo(void) { } Is there any difference between these two? If not, why is the void argument there? Aesthetic reasons?
ANSWER:
Historical note: this answer applies to C17 and older editions. C23 and later editions treat void foo() differently. In C: void foo() means "a function foo taking an unspecified number of arguments of unspecified type" void foo(void) means "a function foo taking no arguments" In C++: void foo() means "a function foo taking no arguments" void foo(void) means "a function foo taking no arguments" By writing foo(void), therefore, we achieve the same interpretation across both languages and make our headers multilingual (though we usually need to do some more things to the headers to make them truly cross-language; namely, wrap them in an extern "C" if we're compiling C++). | [
"c++",
"c",
"arguments",
"function-declaration"
] | 286 | 360 | 79,890 | 5 | 0 | 2008-09-09T00:48:23.630000 | 2008-09-09T01:34:28.060000 |
51,054 | 51,069 | Batch file to delete files older than N days | I am looking for a way to delete all files older than 7 days in a batch file. I've searched around the web, and found some examples with hundreds of lines of code, and others that required installing extra command line utilities to accomplish the task. Similar things can be done in BASH in just a couple lines of code. It seems that something at least remotely easy could be done for batch files in Windows. I'm looking for a solution that works in a standard Windows command prompt, without any extra utilities. Please no PowerShell or Cygwin either. | Enjoy: forfiles -p "C:\what\ever" -s -m *.* -d -c "cmd /c del @path" See forfiles documentation for more details. For more goodies, refer to An A-Z Index of the Windows XP command line. If you don't have forfiles installed on your machine, copy it from any Windows Server 2003 to your Windows XP machine at %WinDir%\system32\. This is possible since the EXE is fully compatible between Windows Server 2003 and Windows XP. Later versions of Windows and Windows Server have it installed by default. For Windows 7 and newer (including Windows 10): The syntax has changed a little. Therefore the updated command is: forfiles /p "C:\what\ever" /s /m *.* /D - /C "cmd /c del @path" | Batch file to delete files older than N days I am looking for a way to delete all files older than 7 days in a batch file. I've searched around the web, and found some examples with hundreds of lines of code, and others that required installing extra command line utilities to accomplish the task. Similar things can be done in BASH in just a couple lines of code. It seems that something at least remotely easy could be done for batch files in Windows. I'm looking for a solution that works in a standard Windows command prompt, without any extra utilities. Please no PowerShell or Cygwin either. | TITLE:
Batch file to delete files older than N days
QUESTION:
I am looking for a way to delete all files older than 7 days in a batch file. I've searched around the web, and found some examples with hundreds of lines of code, and others that required installing extra command line utilities to accomplish the task. Similar things can be done in BASH in just a couple lines of code. It seems that something at least remotely easy could be done for batch files in Windows. I'm looking for a solution that works in a standard Windows command prompt, without any extra utilities. Please no PowerShell or Cygwin either.
ANSWER:
Enjoy: forfiles -p "C:\what\ever" -s -m *.* -d -7 -c "cmd /c del @path" See forfiles documentation for more details. For more goodies, refer to An A-Z Index of the Windows XP command line. If you don't have forfiles installed on your machine, copy it from any Windows Server 2003 to your Windows XP machine at %WinDir%\system32\. This is possible since the EXE is fully compatible between Windows Server 2003 and Windows XP. Later versions of Windows and Windows Server have it installed by default. For Windows 7 and newer (including Windows 10): The syntax has changed a little. Therefore the updated command is: forfiles /p "C:\what\ever" /s /m *.* /D -7 /C "cmd /c del @path" | [
"windows",
"date",
"batch-file",
"file-io",
"cmd"
] | 755 | 1,185 | 1,366,622 | 25 | 0 | 2008-09-09T01:05:28.707000 | 2008-09-09T01:18:27.380000 |
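On the bash side the question alludes to, the same job really is a couple of lines with GNU find; a throwaway sketch (directory and file names invented for the demo):

```shell
# Throwaway directory for the demo
dir=$(mktemp -d)
touch -d "10 days ago" "$dir/old.log"   # mtime well past the cutoff
touch "$dir/new.log"                    # mtime is now

# Delete regular files last modified more than 7 days ago
find "$dir" -type f -mtime +7 -delete

ls "$dir"   # only new.log remains
```

This is the Unix analogue of the forfiles `/D -7` selector: `-mtime +7` matches files whose modification time is strictly more than 7 days old.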
51,088 | 789,776 | Plugin for R# similar to CodeRush "statement highlight" | See here http://www.hanselman.com/blog/InSearchOfThePerfectMonospacedProgrammersFontInconsolata.aspx - for want of a better description - the statement block highlighting - eg in the pics on the link the "statement blocks" are grouped with a vertical line. I understand this is a feature of CodeRush - does R# have either anything similar, or a plugin to do the same? | I use the latest version of ReSharper that is currently available — ReSharper 4.5 — but unfortunately I don't believe there is any feature for drawing a vertical line between matching braces, as in the screen-shots you referenced. The feature I find useful, which Ben mentioned, is the matching brace highlighting, however this only takes effect when your cursor is adjacent to an opening or closing brace. | Plugin for R# similar to CodeRush "statement highlight" See here http://www.hanselman.com/blog/InSearchOfThePerfectMonospacedProgrammersFontInconsolata.aspx - for want of a better description - the statement block highlighting - eg in the pics on the link the "statement blocks" are grouped with a vertical line. I understand this is a feature of CodeRush - does R# have either anything similar, or a plugin to do the same? | TITLE:
Plugin for R# similar to CodeRush "statement highlight"
QUESTION:
See here http://www.hanselman.com/blog/InSearchOfThePerfectMonospacedProgrammersFontInconsolata.aspx - for want of a better description - the statement block highlighting - eg in the pics on the link the "statement blocks" are grouped with a vertical line. I understand this is a feature of CodeRush - does R# have either anything similar, or a plugin to do the same?
ANSWER:
I use the latest version of ReSharper that is currently available — ReSharper 4.5 — but unfortunately I don't believe there is any feature for drawing a vertical line between matching braces, as in the screen-shots you referenced. The feature I find useful, which Ben mentioned, is the matching brace highlighting, however this only takes effect when your cursor is adjacent to an opening or closing brace. | [
"resharper",
"codehighlighter"
] | 2 | 1 | 1,438 | 3 | 0 | 2008-09-09T01:45:24.383000 | 2009-04-25T21:37:52.337000 |
51,092 | 51,103 | SQL: aggregate function and group by | Consider the Oracle emp table. I'd like to get the employees with the top salary with department = 20 and job = clerk. Also assume that there is no "empno" column, and that the primary key involves a number of columns. You can do this with: select * from scott.emp where deptno = 20 and job = 'CLERK' and sal = (select max(sal) from scott.emp where deptno = 20 and job = 'CLERK') This works, but I have to duplicate the test deptno = 20 and job = 'CLERK', which I would like to avoid. Is there a more elegant way to write this, maybe using a group by? BTW, if this matters, I am using Oracle. | The following is slightly over-engineered, but is a good SQL pattern for "top x" queries. SELECT * FROM scott.emp WHERE (deptno,job,sal) IN (SELECT deptno, job, max(sal) FROM scott.emp WHERE deptno = 20 and job = 'CLERK' GROUP BY deptno, job ) Also note that this will work in Oracle and Postgress (i think) but not MS SQL. For something similar in MS SQL see question SQL Query to get latest price | SQL: aggregate function and group by Consider the Oracle emp table. I'd like to get the employees with the top salary with department = 20 and job = clerk. Also assume that there is no "empno" column, and that the primary key involves a number of columns. You can do this with: select * from scott.emp where deptno = 20 and job = 'CLERK' and sal = (select max(sal) from scott.emp where deptno = 20 and job = 'CLERK') This works, but I have to duplicate the test deptno = 20 and job = 'CLERK', which I would like to avoid. Is there a more elegant way to write this, maybe using a group by? BTW, if this matters, I am using Oracle. | TITLE:
SQL: aggregate function and group by
QUESTION:
Consider the Oracle emp table. I'd like to get the employees with the top salary with department = 20 and job = clerk. Also assume that there is no "empno" column, and that the primary key involves a number of columns. You can do this with: select * from scott.emp where deptno = 20 and job = 'CLERK' and sal = (select max(sal) from scott.emp where deptno = 20 and job = 'CLERK') This works, but I have to duplicate the test deptno = 20 and job = 'CLERK', which I would like to avoid. Is there a more elegant way to write this, maybe using a group by? BTW, if this matters, I am using Oracle.
ANSWER:
The following is slightly over-engineered, but is a good SQL pattern for "top x" queries. SELECT * FROM scott.emp WHERE (deptno,job,sal) IN (SELECT deptno, job, max(sal) FROM scott.emp WHERE deptno = 20 and job = 'CLERK' GROUP BY deptno, job ) Also note that this will work in Oracle and Postgres (I think) but not MS SQL. For something similar in MS SQL see question SQL Query to get latest price | [
"sql",
"oracle",
"aggregate"
] | 3 | 3 | 2,401 | 6 | 0 | 2008-09-09T01:49:32.093000 | 2008-09-09T01:58:32.883000 |
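The same "top x per group" shape can be exercised against a toy table; sketched here in SQLite via Python, using a JOIN against the grouped MAX rather than the tuple IN (which Oracle accepts but some engines do not). Names and salaries are invented:

```python
import sqlite3

# Toy stand-in for scott.emp
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (ename TEXT, deptno INT, job TEXT, sal INT)")
conn.executemany("INSERT INTO emp VALUES (?, ?, ?, ?)", [
    ("SMITH", 20, "CLERK", 800),
    ("ADAMS", 20, "CLERK", 1100),
    ("JAMES", 30, "CLERK", 950),
    ("FORD",  20, "ANALYST", 3000),
])

# Top-salary clerks in dept 20: the filter appears only once, in the subquery.
rows = conn.execute("""
    SELECT e.ename, e.deptno, e.job, e.sal
    FROM emp e
    JOIN (SELECT deptno, job, MAX(sal) AS sal
          FROM emp
          WHERE deptno = 20 AND job = 'CLERK'
          GROUP BY deptno, job) m
      ON e.deptno = m.deptno AND e.job = m.job AND e.sal = m.sal
""").fetchall()

print(rows)  # [('ADAMS', 20, 'CLERK', 1100)]
```

The grouped subquery returns one (deptno, job, max-sal) row, and the join pulls back every employee row that ties for that maximum, so duplicated predicates in the outer query are avoided.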
51,094 | 52,640 | Payment Processors - What do I need to know if I want to accept credit cards on my website? | This question talks about different payment processors and what they cost, but I'm looking for the answer to what do I need to do if I want to accept credit card payments? Assume I need to store credit card numbers for customers, so that the obvious solution of relying on the credit card processor to do the heavy lifting is not available. PCI Data Security, which is apparently the standard for storing credit card info, has a bunch of general requirements, but how does one implement them? And what about the vendors, like Visa, who have their own best practices? Do I need to have keyfob access to the machine? What about physically protecting it from hackers in the building? Or even what if someone got their hands on the backup files with the sql server data files on it? What about backups? Are there other physical copies of that data around? Tip: If you get a merchant account, you should negotiate that they charge you "interchange-plus" instead of tiered pricing. With tiered pricing, they will charge you different rates based on what type of Visa/MC is used -- ie. they charge you more for cards with big rewards attached to them. Interchange plus billing means you only pay the processor what Visa/MC charges them, plus a flat fee. (Amex and Discover charge their own rates directly to merchants, so this doesn't apply to those cards. You'll find Amex rates to be in the 3% range and Discover could be as low as 1%. Visa/MC is in the 2% range). This service is supposed to do the negotiation for you (I haven't used it, this is not an ad, and I'm not affiliated with the website, but this service is greatly needed.) This blog post gives a complete rundown of handling credit cards (specifically for the UK). Perhaps I phrased the question wrong, but I'm looking for tips like these: Use SecurID or eToken to add an additional password layer to the physical box. 
Make sure the box is in a room with a physical lock or keycode combination. | I went through this process not too long ago with a company I worked for, and I plan on going through it again soon with my own business. If you have some network technical knowledge, it really isn't that bad. Otherwise you will be better off using Paypal or another type of service. The process starts by getting a merchant account set up and tied to your bank account. You may want to check with your bank, because a lot of major banks provide merchant services. You may be able to get deals because you are already a customer of theirs, but if not, then you can shop around. If you plan on accepting Discover or American Express, those will be separate, because they provide the merchant services for their cards themselves; there is no getting around this. There are other special cases also. This is an application process, so be prepared. Next you will want to purchase an SSL certificate that you can use for securing your communications when the credit card info is transmitted over public networks. There are plenty of vendors, but my rule of thumb is to pick a well-known brand: the better known they are, the more likely your customer has heard of them. Next you will want to find a payment gateway to use with your site. This can be optional depending on how big you are, but the majority of the time it won't be; you will need one. The payment gateway vendors provide the Internet gateway API that you will communicate with. Most vendors provide HTTP or TCP/IP communication with their API. They will process the credit card information on your behalf. Two vendors are Authorize.Net and PayFlow Pro. The link I provide below has some more information on other vendors. Now what? For starters, there are guidelines on what your application has to adhere to for transmitting the transactions. 
During the process of getting everything set up, someone will look at your site or application and make sure you are adhering to the guidelines, like using SSL and having terms of use and policy documentation on what the information the user gives you is used for. Don't steal this from another site. Come up with your own; hire a lawyer if you need to. Most of these things fall under the PCI Data Security link Michael provided in his question. If you plan on storing the credit card numbers, then you had better be prepared to put some security measures in place internally to protect the info. Make sure the server the information is stored on is only accessible to members who need to have access. Like any good security, you do things in layers. The more layers you put in place the better. If you want, you can use key fob type security, like SecurID or eToken, to protect the room the server is in. If you can't afford the key fob route, then use the two-key method: allow a person who has access to the room to sign out a key, which goes along with a key they already carry. They will need both keys to access the room. Next you protect the communication to the server with policies. My policy is that the only thing communicating with it over the network is the application, and that information is encrypted. The server should not be accessible in any other form. For backups, I use TrueCrypt to encrypt the volumes the backups will be saved to. Any time the data is removed or stored somewhere else, you again use TrueCrypt to encrypt the volume the data is on. Basically, wherever the data is, it needs to be encrypted. Make sure all processes for getting at the data carry audit trails: use logs for access to the server room, use cameras if you can, etc. Another measure is to encrypt the credit card information in the database. This makes sure that the data can only be viewed in your application, where you can enforce who sees the information. 
I use pfSense for my firewall. I run it off a compact flash card and have two servers set up; one is a failover for redundancy. I found this blog post by Rick Strahl which helped tremendously in understanding e-commerce and what it takes to accept credit cards through a web application. Well, this turned out to be a long answer. I hope these tips help. | Payment Processors - What do I need to know if I want to accept credit cards on my website? This question talks about different payment processors and what they cost, but I'm looking for the answer to what do I need to do if I want to accept credit card payments? Assume I need to store credit card numbers for customers, so that the obvious solution of relying on the credit card processor to do the heavy lifting is not available. PCI Data Security, which is apparently the standard for storing credit card info, has a bunch of general requirements, but how does one implement them? And what about the vendors, like Visa, who have their own best practices? Do I need to have keyfob access to the machine? What about physically protecting it from hackers in the building? Or even what if someone got their hands on the backup files with the sql server data files on it? What about backups? Are there other physical copies of that data around? Tip: If you get a merchant account, you should negotiate that they charge you "interchange-plus" instead of tiered pricing. With tiered pricing, they will charge you different rates based on what type of Visa/MC is used -- ie. they charge you more for cards with big rewards attached to them. Interchange plus billing means you only pay the processor what Visa/MC charges them, plus a flat fee. (Amex and Discover charge their own rates directly to merchants, so this doesn't apply to those cards. You'll find Amex rates to be in the 3% range and Discover could be as low as 1%. Visa/MC is in the 2% range).
This service is supposed to do the negotiation for you (I haven't used it, this is not an ad, and I'm not affiliated with the website, but this service is greatly needed.) This blog post gives a complete rundown of handling credit cards (specifically for the UK). Perhaps I phrased the question wrong, but I'm looking for tips like these: Use SecurID or eToken to add an additional password layer to the physical box. Make sure the box is in a room with a physical lock or keycode combination. | TITLE:
Payment Processors - What do I need to know if I want to accept credit cards on my website?
QUESTION:
This question talks about different payment processors and what they cost, but I'm looking for the answer to what do I need to do if I want to accept credit card payments? Assume I need to store credit card numbers for customers, so that the obvious solution of relying on the credit card processor to do the heavy lifting is not available. PCI Data Security, which is apparently the standard for storing credit card info, has a bunch of general requirements, but how does one implement them? And what about the vendors, like Visa, who have their own best practices? Do I need to have keyfob access to the machine? What about physically protecting it from hackers in the building? Or even what if someone got their hands on the backup files with the sql server data files on it? What about backups? Are there other physical copies of that data around? Tip: If you get a merchant account, you should negotiate that they charge you "interchange-plus" instead of tiered pricing. With tiered pricing, they will charge you different rates based on what type of Visa/MC is used -- ie. they charge you more for cards with big rewards attached to them. Interchange plus billing means you only pay the processor what Visa/MC charges them, plus a flat fee. (Amex and Discover charge their own rates directly to merchants, so this doesn't apply to those cards. You'll find Amex rates to be in the 3% range and Discover could be as low as 1%. Visa/MC is in the 2% range). This service is supposed to do the negotiation for you (I haven't used it, this is not an ad, and I'm not affiliated with the website, but this service is greatly needed.) This blog post gives a complete rundown of handling credit cards (specifically for the UK). Perhaps I phrased the question wrong, but I'm looking for tips like these: Use SecurID or eToken to add an additional password layer to the physical box. Make sure the box is in a room with a physical lock or keycode combination.
ANSWER:
I went through this process not too long ago with a company I worked for, and I plan on going through it again soon with my own business. If you have some network technical knowledge, it really isn't that bad. Otherwise you will be better off using Paypal or another type of service. The process starts by getting a merchant account set up and tied to your bank account. You may want to check with your bank, because a lot of major banks provide merchant services. You may be able to get deals because you are already a customer of theirs, but if not, then you can shop around. If you plan on accepting Discover or American Express, those will be separate, because they provide the merchant services for their cards themselves; there is no getting around this. There are other special cases also. This is an application process, so be prepared. Next you will want to purchase an SSL certificate that you can use for securing your communications when the credit card info is transmitted over public networks. There are plenty of vendors, but my rule of thumb is to pick a well-known brand: the better known they are, the more likely your customer has heard of them. Next you will want to find a payment gateway to use with your site. This can be optional depending on how big you are, but the majority of the time it won't be; you will need one. The payment gateway vendors provide the Internet gateway API that you will communicate with. Most vendors provide HTTP or TCP/IP communication with their API. They will process the credit card information on your behalf. Two vendors are Authorize.Net and PayFlow Pro. The link I provide below has some more information on other vendors. Now what? For starters, there are guidelines on what your application has to adhere to for transmitting the transactions. 
During the process of getting everything set up, someone will look at your site or application and make sure you are adhering to the guidelines, like using SSL and having terms of use and policy documentation on what the information the user gives you is used for. Don't steal this from another site. Come up with your own; hire a lawyer if you need to. Most of these things fall under the PCI Data Security link Michael provided in his question. If you plan on storing the credit card numbers, then you had better be prepared to put some security measures in place internally to protect the info. Make sure the server the information is stored on is only accessible to members who need to have access. Like any good security, you do things in layers. The more layers you put in place the better. If you want, you can use key fob type security, like SecurID or eToken, to protect the room the server is in. If you can't afford the key fob route, then use the two-key method: allow a person who has access to the room to sign out a key, which goes along with a key they already carry. They will need both keys to access the room. Next you protect the communication to the server with policies. My policy is that the only thing communicating with it over the network is the application, and that information is encrypted. The server should not be accessible in any other form. For backups, I use TrueCrypt to encrypt the volumes the backups will be saved to. Any time the data is removed or stored somewhere else, you again use TrueCrypt to encrypt the volume the data is on. Basically, wherever the data is, it needs to be encrypted. Make sure all processes for getting at the data carry audit trails: use logs for access to the server room, use cameras if you can, etc. Another measure is to encrypt the credit card information in the database. This makes sure that the data can only be viewed in your application, where you can enforce who sees the information. 
I use pfSense for my firewall. I run it off a compact flash card and have two servers set up; one is a failover for redundancy. I found this blog post by Rick Strahl which helped tremendously in understanding e-commerce and what it takes to accept credit cards through a web application. Well, this turned out to be a long answer. I hope these tips help. | [
"security",
"e-commerce",
"pci-dss"
] | 261 | 238 | 26,186 | 9 | 0 | 2008-09-09T01:50:45.913000 | 2008-09-09T19:06:50.930000 |
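The pricing tip in the question above can be made concrete with a little arithmetic. The sketch below compares a tiered rate against interchange-plus pricing for a single sale; every rate here is an illustrative assumption in the spirit of the rough figures quoted in the post (Visa/MC around 2%), not real published pricing from any processor.

```python
# Rough comparison of tiered vs. interchange-plus pricing for one sale.
# All rates are illustrative assumptions, not actual published rates.

def tiered_fee(amount, tier_rate):
    """Tiered pricing: the processor charges a blended rate per card tier."""
    return amount * tier_rate

def interchange_plus_fee(amount, interchange_rate, processor_markup):
    """Interchange-plus: the raw Visa/MC interchange rate plus a flat markup."""
    return amount * (interchange_rate + processor_markup)

sale = 100.00
# A rewards card might be binned into a 2.9% tier...
tiered = tiered_fee(sale, 0.029)
# ...while interchange-plus might be ~2.0% interchange + 0.3% markup.
int_plus = interchange_plus_fee(sale, 0.020, 0.003)

print(f"tiered: ${tiered:.2f}, interchange-plus: ${int_plus:.2f}")
# tiered: $2.90, interchange-plus: $2.30
```

The point of the tip is visible in the numbers: under tiered pricing the processor's bin for a rewards card can quietly absorb the difference, while interchange-plus exposes the markup you are actually paying.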
51,098 | 51,111 | Modifying a spreadsheet using a VB macro | I have two spreadsheets... when one gets modified in a certain way I want to have a macro run that modifies the second in an appropriate manner. I've already isolated the event I need to act on (the modification of any cell in a particular column), I just can't seem to find any concrete information on accessing and modifying another spreadsheet (this spreadsheet is located on a different LAN share also... the user has access to both, though). Any help would be great. References on how to do this or something similar are just as good as concrete code samples. | In Excel, you would likely just write code to open the other workbook, modify it and then save the data. See this tutorial for more info. It should look something like: Dim xl: Set xl = CreateObject("Excel.Application") Dim wb: Set wb = xl.Workbooks.Open("\\the\share\file.xls")
Dim ws: Set ws = wb.Worksheets(1) ws.Cells(1, 1).Value = "New Value" wb.Save
xl.Quit | Modifying a spreadsheet using a VB macro I have two spreadsheets... when one gets modified in a certain way I want to have a macro run that modifies the second in an appropriate manner. I've already isolated the event I need to act on (the modification of any cell in a particular column), I just can't seem to find any concrete information on accessing and modifying another spreadsheet (this spreadsheet is located on a different LAN share also... the user has access to both, though). Any help would be great. References on how to do this or something similar are just as good as concrete code samples. | TITLE:
Modifying a spreadsheet using a VB macro
QUESTION:
I have two spreadsheets... when one gets modified in a certain way I want to have a macro run that modifies the second in an appropriate manner. I've already isolated the event I need to act on (the modification of any cell in a particular column), I just can't seem to find any concrete information on accessing and modifying another spreadsheet (this spreadsheet is located on a different LAN share also... the user has access to both, though). Any help would be great. References on how to do this or something similar are just as good as concrete code samples.
ANSWER:
In Excel, you would likely just write code to open the other workbook, modify it and then save the data. See this tutorial for more info. It should look something like: Dim xl: Set xl = CreateObject("Excel.Application") Dim wb: Set wb = xl.Workbooks.Open("\\the\share\file.xls")
Dim ws: Set ws = wb.Worksheets(1) ws.Cells(1, 1).Value = "New Value" wb.Save
xl.Quit | [
"excel",
"vba"
] | 2 | 5 | 1,180 | 4 | 0 | 2008-09-09T01:56:09.503000 | 2008-09-09T02:07:50.927000 |
51,108 | 51,166 | Spartan Programming | I really enjoyed Jeff's post on Spartan Programming. I agree that code like that is a joy to read. Unfortunately, I'm not so sure it would necessarily be a joy to work with. For years I have read about and adhered to the "one-expression-per-line" practice. I have fought the good fight and held my ground when many programming books countered this advice with example code like: while (bytes = read(...)) {... }
while (GetMessage(...)) {... } Recently, I've advocated one expression per line for more practical reasons - debugging and production support. Getting a log file from production that claims a NullPointer exception at "line 65" which reads: ObjectA a = getTheUser(session.getState().getAccount().getAccountNumber()); is frustrating and entirely avoidable. Short of grabbing an expert with the code that can choose the "most likely" object that was null... this is a real practical pain. One expression per line also helps out quite a bit while stepping through code. I practice this with the assumption that most modern compilers can optimize away all the superfluous temp objects I've just created... I try to be neat - but cluttering my code with explicit objects sure feels laborious at times. It does not generally make the code easier to browse - but it really has come in handy when tracing things down in production or stepping through my or someone else's code. What style do you advocate and can you rationalize it in a practical sense? | In The Pragmatic Programmer, Hunt and Thomas talk about a principle they call the Law of Demeter; it focuses on the coupling of functions to modules other than their own. By never allowing a function to reach a 3rd level in its coupling, you significantly reduce the number of errors and increase the maintainability of the code. So: ObjectA a = getTheUser(session.getState().getAccount().getAccountNumber()); is close to a felony because we are 4 objects down the rat hole. That means to change something in one of those objects, I have to know that you called this whole stack right here in this very method. What a pain. Better: Account.getUser(); Note this runs counter to the expressive forms of programming that are now really popular with mocking software. The trade-off there is that you have a tightly coupled interface anyway, and the expressive syntax just makes it easier to use. 
| Spartan Programming I really enjoyed Jeff's post on Spartan Programming. I agree that code like that is a joy to read. Unfortunately, I'm not so sure it would necessarily be a joy to work with. For years I have read about and adhered to the "one-expression-per-line" practice. I have fought the good fight and held my ground when many programming books countered this advice with example code like: while (bytes = read(...)) {... }
while (GetMessage(...)) {... } Recently, I've advocated one expression per line for more practical reasons - debugging and production support. Getting a log file from production that claims a NullPointer exception at "line 65" which reads: ObjectA a = getTheUser(session.getState().getAccount().getAccountNumber()); is frustrating and entirely avoidable. Short of grabbing an expert with the code that can choose the "most likely" object that was null... this is a real practical pain. One expression per line also helps out quite a bit while stepping through code. I practice this with the assumption that most modern compilers can optimize away all the superfluous temp objects I've just created... I try to be neat - but cluttering my code with explicit objects sure feels laborious at times. It does not generally make the code easier to browse - but it really has come in handy when tracing things down in production or stepping through my or someone else's code. What style do you advocate and can you rationalize it in a practical sense? | TITLE:
Spartan Programming
QUESTION:
I really enjoyed Jeff's post on Spartan Programming. I agree that code like that is a joy to read. Unfortunately, I'm not so sure it would necessarily be a joy to work with. For years I have read about and adhered to the "one-expression-per-line" practice. I have fought the good fight and held my ground when many programming books countered this advice with example code like: while (bytes = read(...)) {... }
while (GetMessage(...)) {... } Recently, I've advocated one expression per line for more practical reasons - debugging and production support. Getting a log file from production that claims a NullPointer exception at "line 65" which reads: ObjectA a = getTheUser(session.getState().getAccount().getAccountNumber()); is frustrating and entirely avoidable. Short of grabbing an expert with the code that can choose the "most likely" object that was null... this is a real practical pain. One expression per line also helps out quite a bit while stepping through code. I practice this with the assumption that most modern compilers can optimize away all the superfluous temp objects I've just created... I try to be neat - but cluttering my code with explicit objects sure feels laborious at times. It does not generally make the code easier to browse - but it really has come in handy when tracing things down in production or stepping through my or someone else's code. What style do you advocate and can you rationalize it in a practical sense?
ANSWER:
In The Pragmatic Programmer, Hunt and Thomas talk about a principle they call the Law of Demeter; it focuses on the coupling of functions to modules other than their own. By never allowing a function to reach a 3rd level in its coupling, you significantly reduce the number of errors and increase the maintainability of the code. So: ObjectA a = getTheUser(session.getState().getAccount().getAccountNumber()); is close to a felony because we are 4 objects down the rat hole. That means to change something in one of those objects, I have to know that you called this whole stack right here in this very method. What a pain. Better: Account.getUser(); Note this runs counter to the expressive forms of programming that are now really popular with mocking software. The trade-off there is that you have a tightly coupled interface anyway, and the expressive syntax just makes it easier to use. | [
"language-agnostic",
"styles",
"spartan"
] | 10 | 7 | 2,497 | 8 | 0 | 2008-09-09T02:05:36.627000 | 2008-09-09T03:26:01.690000 |
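The Law of Demeter point in the answer above can be sketched in a few lines. This is a language-agnostic illustration in Python; the Session/State/Account classes and field names are hypothetical stand-ins mirroring the getTheUser(...) chain from the post, not anyone's actual code.

```python
# Sketch of the Law of Demeter: a deep call chain vs. a delegating method.
# All class and field names here are hypothetical, for illustration only.

class Account:
    def __init__(self, number, user):
        self.number = number
        self._user = user

    def get_user(self):
        return self._user

class State:
    def __init__(self, account):
        self._account = account

    def get_account(self):
        return self._account

class Session:
    def __init__(self, state):
        self._state = state

    def get_state(self):
        return self._state

    # Demeter-friendly: the class that owns the chain walks it,
    # so callers only ever reach one level deep.
    def get_user(self):
        return self._state.get_account().get_user()

session = Session(State(Account("12345", "alice")))

# Violation: four objects down the rat hole -- if any link is None,
# the resulting error points at one line with four suspects.
deep = session.get_state().get_account().get_user()

# Better: one hop, and the coupling lives in exactly one place.
shallow = session.get_user()

assert deep == shallow == "alice"
```

If the internal structure later changes (say, State goes away), only Session.get_user needs updating, not every caller that spelled out the chain.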
51,110 | 51,116 | Get MIME type of a local file in PHP5 without a PECL extension? | mime_content_type() is deprecated. How can I find the MIME type of a local file using PHP5 but without using this deprecated method or the PECL fileinfo extension? Edit: That's what I was afraid of. It's unfortunate that they deprecated a built-in function in favour of one that requires an extension that isn't always available. | If you can't use the fileinfo extension, and you don't want to use mime_content_type, your options are limited. Most likely you'll need to do a lookup based on the file extension. mime_content_type did something a bit more intelligent and actually looked for special data in the file to determine the mime type. | Get MIME type of a local file in PHP5 without a PECL extension? mime_content_type() is deprecated. How can I find the MIME type of a local file using PHP5 but without using this deprecated method or the PECL fileinfo extension? Edit: That's what I was afraid of. It's unfortunate that they deprecated a built-in function in favour of one that requires an extension that isn't always available. | TITLE:
Get MIME type of a local file in PHP5 without a PECL extension?
QUESTION:
mime_content_type() is deprecated. How can I find the MIME type of a local file using PHP5 but without using this deprecated method or the PECL fileinfo extension? Edit: That's what I was afraid of. It's unfortunate that they deprecated a built-in function in favour of one that requires an extension that isn't always available.
ANSWER:
If you can't use the fileinfo extension, and you don't want to use mime_content_type, your options are limited. Most likely you'll need to do a lookup based on the file extension. mime_content_type did something a bit more intelligent and actually looked for special data in the file to determine the mime type. | [
"php",
"mime"
] | 0 | 1 | 2,002 | 2 | 0 | 2008-09-09T02:07:26.437000 | 2008-09-09T02:12:32.617000 |
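The extension-based lookup suggested in the answer above is easy to sketch. Since the answer is about PHP but gives no code, here is an illustrative, language-agnostic version in Python; the tiny extension map is an assumption for demonstration (a real table, such as Apache's mime.types, has hundreds of entries), and Python's standard mimetypes module implements the same idea.

```python
import os

# Minimal illustrative extension-to-MIME map; a production table
# would be far larger (see e.g. Apache's mime.types file).
MIME_BY_EXT = {
    ".html": "text/html",
    ".jpg": "image/jpeg",
    ".png": "image/png",
    ".pdf": "application/pdf",
}

def guess_mime(path, default="application/octet-stream"):
    """Guess a MIME type from the file extension alone.

    Unlike mime_content_type(), this never inspects the file's
    contents, so a renamed file will be misidentified."""
    _, ext = os.path.splitext(path.lower())
    return MIME_BY_EXT.get(ext, default)

print(guess_mime("photo.JPG"))    # image/jpeg
print(guess_mime("unknown.xyz"))  # application/octet-stream
```

This illustrates the trade-off the answer describes: an extension lookup is cheap and portable, but only content sniffing (what fileinfo does) can catch a mislabeled file.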
51,114 | 52,109 | ASP.NET 3.5 Without Microsoft SQL Server - What do I lose? | I was just assigned to do a CMS using ASP.net 3.5 and MySQL. I am kind of new to ASP.NET development (quite sufficient with C#) and I am wondering what major ASP.NET and general .NET features I am losing when I don't have the option to use Microsoft SQL Server. I know already from quick Googling that I lose LINQ (and I was really looking forward to using this to build my model layer!), but I am not sure what other handy features I will lose. Since I've been relying on ASP.net tutorials which assume that you use MS SQL Server, I feel a chunk of my ASP.net knowledge just became invalid. Thanks! | You do not lose LINQ, you lose LINQ to SQL. LINQ itself is more generic, as it can be used on anything that implements IQueryable. You lose the SqlDataSource, not a big deal. You lose some of the integration the Server Explorer does for you with SQL Server, again not a big deal. As far as I'm concerned you don't lose anything very important, and you shouldn't be losing any of your .NET knowledge. Most examples use SQL Server as a default, but they can easily be changed to use another database. Also, there are a few open source .NET CMS packages out there already that use MySQL; take a look at Cuyahoga. | ASP.NET 3.5 Without Microsoft SQL Server - What do I lose? I was just assigned to do a CMS using ASP.net 3.5 and MySQL. I am kind of new to ASP.NET development (quite sufficient with C#) and I am wondering what major ASP.NET and general .NET features I am losing when I don't have the option to use Microsoft SQL Server. I know already from quick Googling that I lose LINQ (and I was really looking forward to using this to build my model layer!), but I am not sure what other handy features I will lose. Since I've been relying on ASP.net tutorials which assume that you use MS SQL Server, I feel a chunk of my ASP.net knowledge just became invalid. Thanks! | TITLE:
ASP.NET 3.5 Without Microsoft SQL Server - What do I lose?
QUESTION:
I was just assigned to do a CMS using ASP.net 3.5 and MySQL. I am kind of new to ASP.NET development (quite sufficient with C#) and I am wondering what major ASP.NET and general .NET features I am losing when I don't have the option to use Microsoft SQL Server. I know already from quick Googling that I lose LINQ (and I was really looking forward to using this to build my model layer!), but I am not sure what other handy features I will lose. Since I've been relying on ASP.net tutorials which assume that you use MS SQL Server, I feel a chunk of my ASP.net knowledge just became invalid. Thanks!
ANSWER:
You do not lose LINQ, you lose LINQ to SQL. LINQ itself is more generic, as it can be used on anything that implements IQueryable. You lose the SqlDataSource, not a big deal. You lose some of the integration the Server Explorer does for you with SQL Server, again not a big deal. As far as I'm concerned you don't lose anything very important, and you shouldn't be losing any of your .NET knowledge. Most examples use SQL Server as a default, but they can easily be changed to use another database. Also, there are a few open source .NET CMS packages out there already that use MySQL; take a look at Cuyahoga. | [
"asp.net",
"mysql",
".net-3.5"
] | 2 | 2 | 960 | 4 | 0 | 2008-09-09T02:09:25.760000 | 2008-09-09T15:08:28.477000 |
51,124 | 51,131 | Usefulness of SQL Server "with encryption" statement | Recently a friend and I were talking about securing stored procedure code in a SQL Server database. From distant memory, I'm pretty certain that "with encryption" is incredibly easily broken in all versions of SQL Server; however, he said it has been greatly improved in SQL 2005. As a result I have not seriously considered it as a security option in any systems I have ever worked on. So in what scenarios could "with encryption" be used, and when should it be avoided at all costs? | It can be used to hide your code from casual observers, but as you say: it's easily circumvented. It really can't be any other way, since the server needs to decrypt the code to execute it. It's DRM, basically, and fails for the same reason as all the other DRM does - you can't simultaneously hide the data, and allow it to be accessed. | Usefulness of SQL Server "with encryption" statement Recently a friend and I were talking about securing stored procedure code in a SQL Server database. From distant memory, I'm pretty certain that "with encryption" is incredibly easily broken in all versions of SQL Server; however, he said it has been greatly improved in SQL 2005. As a result I have not seriously considered it as a security option in any systems I have ever worked on. So in what scenarios could "with encryption" be used, and when should it be avoided at all costs? | TITLE:
Usefulness of SQL Server "with encryption" statement
QUESTION:
Recently a friend and I were talking about securing stored procedure code in a SQL Server database. From distant memory, I'm pretty certain that "with encryption" is incredibly easily broken in all versions of SQL Server; however, he said it has been greatly improved in SQL 2005. As a result I have not seriously considered it as a security option in any systems I have ever worked on. So in what scenarios could "with encryption" be used, and when should it be avoided at all costs?
ANSWER:
It can be used to hide your code from casual observers, but as you say: it's easily circumvented. It really can't be any other way, since the server needs to decrypt the code to execute it. It's DRM, basically, and fails for the same reason as all the other DRM does - you can't simultaneously hide the data, and allow it to be accessed. | [
"sql-server",
"encryption"
] | 5 | 6 | 984 | 3 | 0 | 2008-09-09T02:16:44.510000 | 2008-09-09T02:24:29.607000 |
51,126 | 51,128 | How do you Modify TextBox Control Tab Stops | When you use a Windows Forms TextBox, the default number of tab stops (spaces) is 8. How do you modify this? | First add the following namespace using System.Runtime.InteropServices; Then add the following after the class declaration: private const int EM_SETTABSTOPS = 0x00CB; [DllImport("User32.dll", CharSet = CharSet.Auto)] public static extern IntPtr SendMessage(IntPtr h, int msg, int wParam, int [] lParam); Then add the following to the Form_Load event: // define value of the Tab indent int[] stops = {16}; // change the indent SendMessage(this.textBox1.Handle, EM_SETTABSTOPS, 1, stops); | How do you Modify TextBox Control Tab Stops When you use a Windows Forms TextBox, the default number of tab stops (spaces) is 8. How do you modify this? | TITLE:
How do you Modify TextBox Control Tab Stops
QUESTION:
When you use a Windows Forms TextBox, the default number of tab stops (spaces) is 8. How do you modify this?
ANSWER:
First add the following namespace using System.Runtime.InteropServices; Then add the following after the class declaration: private const int EM_SETTABSTOPS = 0x00CB; [DllImport("User32.dll", CharSet = CharSet.Auto)] public static extern IntPtr SendMessage(IntPtr h, int msg, int wParam, int [] lParam); Then add the following to the Form_Load event: // define value of the Tab indent int[] stops = {16}; // change the indent SendMessage(this.textBox1.Handle, EM_SETTABSTOPS, 1, stops); | [
"c#",
".net",
"winforms"
] | 1 | 4 | 1,681 | 1 | 0 | 2008-09-09T02:21:26.823000 | 2008-09-09T02:23:01.937000 |
51,129 | 51,225 | How do I get a value from an XML web service in C#? | In C#, if I need to open an HTTP connection, download XML and get one value from the result, how would I do that? For consistency, imagine the webservice is at www.webservice.com and that if you pass it the POST argument fXML=1 it gives you back somevalue. I'd like it to spit out "somevalue". | I use this code and it works great: System.Xml.XmlDocument xd = new System.Xml.XmlDocument(); xd.Load("http://www.webservice.com/webservice?fXML=1"); string xPath = "/xml/somekey"; // this node's inner text contains "somevalue" return xd.SelectSingleNode(xPath).InnerText; EDIT: I just realized you're talking about a webservice and not just plain XML. In your Visual Studio Solution, try right clicking on References in Solution Explorer and choose "Add a Web Reference". A dialog will appear asking for a URL, you can just paste it in: " http://www.webservice.com/webservice.asmx ". VS will autogenerate all the helpers you need. Then you can just call: com.webservice.www.WebService ws = new com.webservice.www.WebService(); // this assumes your web method takes in the fXML as an integer attribute return ws.SomeWebMethod(1); | How do I get a value from an XML web service in C#? In C#, if I need to open an HTTP connection, download XML and get one value from the result, how would I do that? For consistency, imagine the webservice is at www.webservice.com and that if you pass it the POST argument fXML=1 it gives you back somevalue. I'd like it to spit out "somevalue". | TITLE:
How do I get a value from an XML web service in C#?
QUESTION:
In C#, if I need to open an HTTP connection, download XML and get one value from the result, how would I do that? For consistency, imagine the webservice is at www.webservice.com and that if you pass it the POST argument fXML=1 it gives you back somevalue. I'd like it to spit out "somevalue".
ANSWER:
I use this code and it works great: System.Xml.XmlDocument xd = new System.Xml.XmlDocument(); xd.Load("http://www.webservice.com/webservice?fXML=1"); string xPath = "/xml/somekey"; // this node's inner text contains "somevalue" return xd.SelectSingleNode(xPath).InnerText; EDIT: I just realized you're talking about a webservice and not just plain XML. In your Visual Studio Solution, try right clicking on References in Solution Explorer and choose "Add a Web Reference". A dialog will appear asking for a URL, you can just paste it in: " http://www.webservice.com/webservice.asmx ". VS will autogenerate all the helpers you need. Then you can just call: com.webservice.www.WebService ws = new com.webservice.www.WebService(); // this assumes your web method takes in the fXML as an integer attribute return ws.SomeWebMethod(1); | [
"c#",
"xml",
"web-services"
] | 5 | 3 | 4,368 | 4 | 0 | 2008-09-09T02:24:08.283000 | 2008-09-09T04:31:03.610000 |
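A runnable sketch of the XPath idea from the answer above, translated to Python's standard library (the payload string and element names are illustrative stand-ins for the hypothetical service's response):

```python
import xml.etree.ElementTree as ET

def extract_value(xml_text, path):
    """Parse an XML payload and return the text of the first node at `path`."""
    root = ET.fromstring(xml_text)
    # Unlike the C# XPath "/xml/somekey", ElementTree's root element *is* <xml>,
    # so the search path is relative to it.
    node = root.find(path)
    return node.text if node is not None else None

# A stand-in for the body the service would return.
payload = "<xml><somekey>somevalue</somekey></xml>"
print(extract_value(payload, "somekey"))  # somevalue
```

In a real program the payload would come from an HTTP request rather than a literal string.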
51,148 | 51,149 | How do I find out if a process is already running using c#? | I have a C# winforms application that needs to start an external exe from time to time, but I do not wish to start another process if one is already running, but rather switch to it. So how in C# would I do this in the example below? using System.Diagnostics;...
Process foo = new Process();
foo.StartInfo.FileName = @"C:\bar\foo.exe"; foo.StartInfo.Arguments = "Username Password";
bool isRunning = //TODO: Check to see if process foo.exe is already running
if (isRunning) { //TODO: Switch to foo.exe process } else { foo.Start(); } | This should do it for ya. Check Processes //Namespaces we need to use using System.Diagnostics;
public bool IsProcessOpen(string name) { //here we're going to get a list of all running processes on //the computer foreach (Process clsProcess in Process.GetProcesses()) { //now we're going to see if any of the running processes //match the currently running processes. Be sure not to //add the .exe to the name you provide, i.e.: NOTEPAD, //not NOTEPAD.EXE or false is always returned even if //notepad is running. //Remember, if you have the process running more than once, //say IE open 4 times the loop the way it is now will close all 4, //if you want it to just close the first one it finds //then add a return; after the Kill if (clsProcess.ProcessName.Contains(name)) { //if the process is found to be running then we //return a true return true; } } //otherwise we return a false return false; } | How do I find out if a process is already running using c#? I have a C# winforms application that needs to start an external exe from time to time, but I do not wish to start another process if one is already running, but rather switch to it. So how in C# would I do this in the example below? using System.Diagnostics;...
Process foo = new Process();
foo.StartInfo.FileName = @"C:\bar\foo.exe"; foo.StartInfo.Arguments = "Username Password";
bool isRunning = //TODO: Check to see if process foo.exe is already running
if (isRunning) { //TODO: Switch to foo.exe process } else { foo.Start(); } | TITLE:
How do I find out if a process is already running using c#?
QUESTION:
I have a C# winforms application that needs to start an external exe from time to time, but I do not wish to start another process if one is already running, but rather switch to it. So how in C# would I do this in the example below? using System.Diagnostics;...
Process foo = new Process();
foo.StartInfo.FileName = @"C:\bar\foo.exe"; foo.StartInfo.Arguments = "Username Password";
bool isRunning = //TODO: Check to see if process foo.exe is already running
if (isRunning) { //TODO: Switch to foo.exe process } else { foo.Start(); }
ANSWER:
This should do it for ya. Check Processes //Namespaces we need to use using System.Diagnostics;
public bool IsProcessOpen(string name) { //here we're going to get a list of all running processes on //the computer foreach (Process clsProcess in Process.GetProcesses()) { //now we're going to see if any of the running processes //match the currently running processes. Be sure not to //add the .exe to the name you provide, i.e.: NOTEPAD, //not NOTEPAD.EXE or false is always returned even if //notepad is running. //Remember, if you have the process running more than once, //say IE open 4 times the loop the way it is now will close all 4, //if you want it to just close the first one it finds //then add a return; after the Kill if (clsProcess.ProcessName.Contains(name)) { //if the process is found to be running then we //return a true return true; } } //otherwise we return a false return false; } | [
"c#",
"winforms",
"external-process"
] | 29 | 39 | 56,615 | 10 | 0 | 2008-09-09T02:54:06.637000 | 2008-09-09T02:58:09.653000 |
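The matching logic in the answer above can be sketched in Python. This is a loose analogue only: the C# version enumerates live processes via System.Diagnostics, while this sketch takes a process-name snapshot as an argument (in real code that list could come from a library such as psutil, which is not part of the standard library):

```python
def is_process_open(name, running_names):
    """Return True if any running process name contains `name`.

    `running_names` stands in for a live process listing; like the C# Contains
    check, the substring match here is case-sensitive.
    """
    return any(name in proc for proc in running_names)

procs = ["explorer", "foo", "notepad"]  # hypothetical snapshot
print(is_process_open("foo", procs))    # True
print(is_process_open("bar", procs))    # False
```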
51,156 | 51,159 | Is there a way of selecting the last item of a list with CSS? | Say I have a list as follows: item1 item2 item3 Is there a CSS selector that will allow me to directly select the last item of a list? In this case item 3. Cheers! | Not that I'm aware of. The traditional solution is to tag the first & last items with class="first" & class="last" so you can identify them. The CSS pseudo-class first-child will get you the first item but not all browsers support it. CSS3 will have last-child too (this is currently supported by Firefox, Safari but not IE 6/7/beta 8) | Is there a way of selecting the last item of a list with CSS? Say I have a list as follows: item1 item2 item3 Is there a CSS selector that will allow me to directly select the last item of a list? In this case item 3. Cheers! | TITLE:
Is there a way of selecting the last item of a list with CSS?
QUESTION:
Say I have a list as follows: item1 item2 item3 Is there a CSS selector that will allow me to directly select the last item of a list? In this case item 3. Cheers!
ANSWER:
Not that I'm aware of. The traditional solution is to tag the first & last items with class="first" & class="last" so you can identify them. The CSS pseudo-class first-child will get you the first item but not all browsers support it. CSS3 will have last-child too (this is currently supported by Firefox, Safari but not IE 6/7/beta 8) | [
"css",
"css-selectors"
] | 11 | 15 | 12,412 | 3 | 0 | 2008-09-09T03:11:59.463000 | 2008-09-09T03:14:58.100000 |
51,165 | 51,169 | How to sort strings in JavaScript | I have a list of objects I wish to sort based on a field attr of type string. I tried using - list.sort(function (a, b) { return a.attr - b.attr }) but found that - doesn't appear to work with strings in JavaScript. How can I sort a list of objects based on an attribute with type string? | Use String.prototype.localeCompare as per your example: list.sort(function (a, b) { return ('' + a.attr).localeCompare(b.attr); }) We force a.attr to be a string to avoid exceptions. localeCompare has been supported since Internet Explorer 6 and Firefox 1. You may also see the following code used that doesn't respect a locale: if (item1.attr < item2.attr) return -1; if ( item1.attr > item2.attr) return 1; return 0; | How to sort strings in JavaScript I have a list of objects I wish to sort based on a field attr of type string. I tried using - list.sort(function (a, b) { return a.attr - b.attr }) but found that - doesn't appear to work with strings in JavaScript. How can I sort a list of objects based on an attribute with type string? | TITLE:
How to sort strings in JavaScript
QUESTION:
I have a list of objects I wish to sort based on a field attr of type string. I tried using - list.sort(function (a, b) { return a.attr - b.attr }) but found that - doesn't appear to work with strings in JavaScript. How can I sort a list of objects based on an attribute with type string?
ANSWER:
Use String.prototype.localeCompare as per your example: list.sort(function (a, b) { return ('' + a.attr).localeCompare(b.attr); }) We force a.attr to be a string to avoid exceptions. localeCompare has been supported since Internet Explorer 6 and Firefox 1. You may also see the following code used that doesn't respect a locale: if (item1.attr < item2.attr) return -1; if ( item1.attr > item2.attr) return 1; return 0; | [
"javascript",
"string"
] | 538 | 924 | 649,063 | 16 | 0 | 2008-09-09T03:25:15.343000 | 2008-09-09T03:29:17.300000 |
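The fallback comparator at the end of the answer above (the three-way `<`/`>` check that doesn't respect a locale) can be sketched in Python; the `attr` field name mirrors the question, and the item values are illustrative:

```python
from functools import cmp_to_key

def compare_attr(a, b):
    # Locale-unaware three-way comparison, mirroring the JS fallback.
    if a["attr"] < b["attr"]:
        return -1
    if a["attr"] > b["attr"]:
        return 1
    return 0

items = [{"attr": "pear"}, {"attr": "apple"}, {"attr": "fig"}]
items.sort(key=cmp_to_key(compare_attr))
print([i["attr"] for i in items])  # ['apple', 'fig', 'pear']
```

In Python the simpler idiom is `items.sort(key=lambda i: i["attr"])`; the explicit comparator is shown only to match the shape of the JavaScript code.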
51,176 | 51,196 | Separating concerns with Linq To SQL and DTO's | I recently started a new webforms project and decided to separate the business classes from any DBML references. My business layer classes instead access discrete Data layer methods and are returned collections of DTO's. So the data layer might project DTO's like the following: (from c in dataContext.Customers where c.Active == true select new DTO.Customer { CustomerID = c.CustomerID, Name = c.CustomerName,... }).ToList() Although building the DTO objects adds work, this feels like a better approach to a tight binding between Business & Data layers and means I can test the Business layer without a database being present. My question is, is this good practice?, Is there a way of generating the DTO's (maybe via SQLMetal), and what other problems might I strike as the project progresses. | I don't know if it's best practice but I have written similar code in the not so recent past because I too felt that I could improve the separation of concerns by using my own classes instead of the LINQ-designer-generated ones within my application. You may want to consider just returning an IQueryable instead of an IList from your data-access method. Since IQueryable inherits from IEnumerable the rest of your app should be able to deal with it quite well. You can also convert it to a List when you really need to. The advantage of this is that you can dynamically modify your query quite easily and minimize the amount of data returned from SQL Server. E.g. if your method signature is IQueryable<Customer> GetCustomers() you could get a single customer by calling GetCustomers().Where(c => c.CustomerID == 101).Single(); In this example only one record would be returned from the database whereas I imagine currently your code would return either all customers or you'd be required to write separate methods (and thus very repetitive code) to cater for all the different things you may want to filter by. 
| Separating concerns with Linq To SQL and DTO's I recently started a new webforms project and decided to separate the business classes from any DBML references. My business layer classes instead access discrete Data layer methods and are returned collections of DTO's. So the data layer might project DTO's like the following: (from c in dataContext.Customers where c.Active == true select new DTO.Customer { CustomerID = c.CustomerID, Name = c.CustomerName,... }).ToList() Although building the DTO objects adds work, this feels like a better approach to a tight binding between Business & Data layers and means I can test the Business layer without a database being present. My question is, is this good practice?, Is there a way of generating the DTO's (maybe via SQLMetal), and what other problems might I strike as the project progresses. | TITLE:
Separating concerns with Linq To SQL and DTO's
QUESTION:
I recently started a new webforms project and decided to separate the business classes from any DBML references. My business layer classes instead access discrete Data layer methods and are returned collections of DTO's. So the data layer might project DTO's like the following: (from c in dataContext.Customers where c.Active == true select new DTO.Customer { CustomerID = c.CustomerID, Name = c.CustomerName,... }).ToList() Although building the DTO objects adds work, this feels like a better approach to a tight binding between Business & Data layers and means I can test the Business layer without a database being present. My question is, is this good practice?, Is there a way of generating the DTO's (maybe via SQLMetal), and what other problems might I strike as the project progresses.
ANSWER:
I don't know if it's best practice but I have written similar code in the not so recent past because I too felt that I could improve the separation of concerns by using my own classes instead of the LINQ-designer-generated ones within my application. You may want to consider just returning an IQueryable instead of an IList from your data-access method. Since IQueryable inherits from IEnumerable the rest of your app should be able to deal with it quite well. You can also convert it to a List when you really need to. The advantage of this is that you can dynamically modify your query quite easily and minimize the amount of data returned from SQL Server. E.g. if your method signature is IQueryable<Customer> GetCustomers() you could get a single customer by calling GetCustomers().Where(c => c.CustomerID == 101).Single(); In this example only one record would be returned from the database whereas I imagine currently your code would return either all customers or you'd be required to write separate methods (and thus very repetitive code) to cater for all the different things you may want to filter by. | [
"c#",
"linq",
"dto-mapping"
] | 11 | 5 | 2,967 | 2 | 0 | 2008-09-09T03:40:22.373000 | 2008-09-09T03:54:51.950000 |
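The DTO-projection pattern discussed above can be sketched in Python. This is only a loose analogue: the deferred execution of C#'s IQueryable is mimicked here with a generator, the row dictionaries stand in for database records, and all field names are taken from the question's example:

```python
from dataclasses import dataclass

@dataclass
class CustomerDTO:
    customer_id: int
    name: str

def get_customers(rows):
    """Lazily project raw rows into DTOs; nothing is materialized until iterated,
    loosely like returning IQueryable instead of a List."""
    return (CustomerDTO(r["CustomerID"], r["CustomerName"])
            for r in rows if r["Active"])

rows = [
    {"CustomerID": 101, "CustomerName": "Acme", "Active": True},
    {"CustomerID": 102, "CustomerName": "Globex", "Active": False},
]
active = list(get_customers(rows))
print(active)  # [CustomerDTO(customer_id=101, name='Acme')]
```

The business layer only ever sees `CustomerDTO`, so it can be tested against in-memory rows with no database present, which is the separation the question is after.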
51,180 | 51,194 | How do I stop Visual Studio from automatically inserting asterisk during a block comment? | I'm tearing my hair out with this one. If I start a block comment /* in VS.NET 2005+ then carriage return, Visual Studio insists that I have another asterisk *. I know there's an option to turn this off but I just can't find it. Anyone know how to turn this feature off? | Update: this setting was changed in VS 2015 update 2. See this answer. This post addresses your question. The gist of it is: Text Editor > C# > Advanced > Generate XML documentation comments for /// | How do I stop Visual Studio from automatically inserting asterisk during a block comment? I'm tearing my hair out with this one. If I start a block comment /* in VS.NET 2005+ then carriage return, Visual Studio insists that I have another asterisk *. I know there's an option to turn this off but I just can't find it. Anyone know how to turn this feature off? | TITLE:
How do I stop Visual Studio from automatically inserting asterisk during a block comment?
QUESTION:
I'm tearing my hair out with this one. If I start a block comment /* in VS.NET 2005+ then carriage return, Visual Studio insists that I have another asterisk *. I know there's an option to turn this off but I just can't find it. Anyone know how to turn this feature off?
ANSWER:
Update: this setting was changed in VS 2015 update 2. See this answer. This post addresses your question. The gist of it is: Text Editor > C# > Advanced > Generate XML documentation comments for /// | [
"visual-studio"
] | 40 | 25 | 4,485 | 3 | 0 | 2008-09-09T03:41:23.480000 | 2008-09-09T03:51:15.343000 |
51,185 | 4,717,855 | Are JavaScript strings immutable? Do I need a "string builder" in JavaScript? | Does javascript use immutable or mutable strings? Do I need a "string builder"? | They are immutable. You cannot change a character within a string with something like var myString = "abbdef"; myString[2] = 'c'. The string manipulation methods such as trim, slice return new strings. In the same way, if you have two references to the same string, modifying one doesn't affect the other let a = b = "hello"; a = a + " world"; // b is not affected Myth Debunking - String concatenation is NOT slow I've always heard what Ash mentioned in his answer (that using Array.join is faster for concatenation) so I wanted to test out the different methods of concatenating strings and abstracting the fastest way into a StringBuilder. I wrote some tests to see if this is true (it isn't!). This was what I believed would be the fastest way, avoiding push and using an array to store the strings to then join them in the end. class StringBuilderArrayIndex { array = []; index = 0; append(str) { this.array[this.index++] = str } toString() { return this.array.join('') } } Some benchmarks Read the test cases in the snippet below Run the snippet Press the benchmark button to run the tests and see results I've created two types of tests Using Array indexing to avoid Array.push, then using Array.join Straight string concatenation For each of those tests, I looped appending a constant value and a random string; Findings Nowadays, all evergreen browsers handle string concatenation better, at least twice as fast. 
i-12600k (added by Alexander Nenashev) Chrome/117 -------------------------------------------------------------------- Same string 'Hot diggity dizzle' string concatenation 1.0x | x100000 224 232 254 266 275 array push & join 3.2x | x100000 722 753 757 762 763 Random strings string concatenation 1.0x | x100000 261 268 270 273 279 array push & join 5.4x | x10000 142 147 148 155 166 -------------------------------------------------------------------- https://github.com/silentmantra/benchmark Firefox/118 -------------------------------------------------------------------- Same string 'Hot diggity dizzle' string concatenation 1.0x | x100000 304 335 353 358 370 array push & join 9.5x | x10000 289 300 301 306 309 Random strings string concatenation 1.0x | x100000 334 337 345 349 377 array push & join 5.1x | x10000 169 176 176 176 180 -------------------------------------------------------------------- https://github.com/silentmantra/benchmark Results below on a 2.4 GHz 8-Core i9 Mac on Oct 2023 Chrome -------------------------------------------------------------------- Same string 'Hot diggity dizzle' string concatenation 1.0x | x100000 574 592 594 607 613 array push & join 2.7x | x10000 156 157 159 164 165 Random strings string concatenation 1.0x | x100000 657 663 669 675 680 array push & join 4.3x | x10000 283 285 295 298 311 -------------------------------------------------------------------- https://github.com/silentmantra/benchmark Firefox -------------------------------------------------------------------- Same string 'Hot diggity dizzle' string concatenation 1.0x | x100000 546 648 659 663 677 array push & join 5.8x | x10000 314 320 326 331 335 Random strings string concatenation 1.0x | x100000 647 739 764 765 804 array push & join 2.9x | x10000 187 188 199 219 231 -------------------------------------------------------------------- https://github.com/silentmantra/benchmark Brave -------------------------------------------------------------------- Same string 'Hot 
diggity dizzle' string concatenation 1.0x | x100000 566 571 572 579 600 array push & join 2.5x | x10000 144 145 159 162 166 Random strings string concatenation 1.0x | x100000 649 658 659 663 669 array push & join 4.4x | x10000 285 285 290 292 300 -------------------------------------------------------------------- https://github.com/silentmantra/benchmark ` Safari -------------------------------------------------------------------- Same string 'Hot diggity dizzle' string concatenation 1.0x | x10000 76 77 77 79 82 array push & join 2.2x | x10000 168 168 174 178 186 Random strings string concatenation 1.0x | x100000 878 884 889 892 903 array push & join 2.3x | x10000 199 200 202 202 204 -------------------------------------------------------------------- https://github.com/silentmantra/benchmark ` Opera -------------------------------------------------------------------- Same string 'Hot diggity dizzle' string concatenation 1.0x | x100000 577 579 581 584 608 array push & join 2.7x | x10000 157 162 165 166 171 Random strings string concatenation 1.0x | x100000 688 694 740 750 781 array push & join 4.2x | x10000 291 315 316 317 379 -------------------------------------------------------------------- https://github.com/silentmantra/benchmark | Are JavaScript strings immutable? Do I need a "string builder" in JavaScript? Does javascript use immutable or mutable strings? Do I need a "string builder"? | TITLE:
Are JavaScript strings immutable? Do I need a "string builder" in JavaScript?
QUESTION:
Does javascript use immutable or mutable strings? Do I need a "string builder"?
ANSWER:
They are immutable. You cannot change a character within a string with something like var myString = "abbdef"; myString[2] = 'c'. The string manipulation methods such as trim, slice return new strings. In the same way, if you have two references to the same string, modifying one doesn't affect the other let a = b = "hello"; a = a + " world"; // b is not affected Myth Debunking - String concatenation is NOT slow I've always heard what Ash mentioned in his answer (that using Array.join is faster for concatenation) so I wanted to test out the different methods of concatenating strings and abstracting the fastest way into a StringBuilder. I wrote some tests to see if this is true (it isn't!). This was what I believed would be the fastest way, avoiding push and using an array to store the strings to then join them in the end. class StringBuilderArrayIndex { array = []; index = 0; append(str) { this.array[this.index++] = str } toString() { return this.array.join('') } } Some benchmarks Read the test cases in the snippet below Run the snippet Press the benchmark button to run the tests and see results I've created two types of tests Using Array indexing to avoid Array.push, then using Array.join Straight string concatenation For each of those tests, I looped appending a constant value and a random string; Findings Nowadays, all evergreen browsers handle string concatenation better, at least twice as fast. 
i-12600k (added by Alexander Nenashev) Chrome/117 -------------------------------------------------------------------- Same string 'Hot diggity dizzle' string concatenation 1.0x | x100000 224 232 254 266 275 array push & join 3.2x | x100000 722 753 757 762 763 Random strings string concatenation 1.0x | x100000 261 268 270 273 279 array push & join 5.4x | x10000 142 147 148 155 166 -------------------------------------------------------------------- https://github.com/silentmantra/benchmark Firefox/118 -------------------------------------------------------------------- Same string 'Hot diggity dizzle' string concatenation 1.0x | x100000 304 335 353 358 370 array push & join 9.5x | x10000 289 300 301 306 309 Random strings string concatenation 1.0x | x100000 334 337 345 349 377 array push & join 5.1x | x10000 169 176 176 176 180 -------------------------------------------------------------------- https://github.com/silentmantra/benchmark Results below on a 2.4 GHz 8-Core i9 Mac on Oct 2023 Chrome -------------------------------------------------------------------- Same string 'Hot diggity dizzle' string concatenation 1.0x | x100000 574 592 594 607 613 array push & join 2.7x | x10000 156 157 159 164 165 Random strings string concatenation 1.0x | x100000 657 663 669 675 680 array push & join 4.3x | x10000 283 285 295 298 311 -------------------------------------------------------------------- https://github.com/silentmantra/benchmark Firefox -------------------------------------------------------------------- Same string 'Hot diggity dizzle' string concatenation 1.0x | x100000 546 648 659 663 677 array push & join 5.8x | x10000 314 320 326 331 335 Random strings string concatenation 1.0x | x100000 647 739 764 765 804 array push & join 2.9x | x10000 187 188 199 219 231 -------------------------------------------------------------------- https://github.com/silentmantra/benchmark Brave -------------------------------------------------------------------- Same string 'Hot 
diggity dizzle' string concatenation 1.0x | x100000 566 571 572 579 600 array push & join 2.5x | x10000 144 145 159 162 166 Random strings string concatenation 1.0x | x100000 649 658 659 663 669 array push & join 4.4x | x10000 285 285 290 292 300 -------------------------------------------------------------------- https://github.com/silentmantra/benchmark ` Safari -------------------------------------------------------------------- Same string 'Hot diggity dizzle' string concatenation 1.0x | x10000 76 77 77 79 82 array push & join 2.2x | x10000 168 168 174 178 186 Random strings string concatenation 1.0x | x100000 878 884 889 892 903 array push & join 2.3x | x10000 199 200 202 202 204 -------------------------------------------------------------------- https://github.com/silentmantra/benchmark ` Opera -------------------------------------------------------------------- Same string 'Hot diggity dizzle' string concatenation 1.0x | x100000 577 579 581 584 608 array push & join 2.7x | x10000 157 162 165 166 171 Random strings string concatenation 1.0x | x100000 688 694 740 750 781 array push & join 4.2x | x10000 291 315 316 317 379 -------------------------------------------------------------------- https://github.com/silentmantra/benchmark | [
"javascript",
"string"
] | 302 | 355 | 103,984 | 10 | 0 | 2008-09-09T03:44:54.110000 | 2011-01-17T21:03:35.843000 |
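The two points in the answer above (strings are immutable; concatenation and join both work, so measure before assuming one is faster) can be illustrated in Python, where strings are immutable too; the example strings are illustrative:

```python
s = "abbdef"
try:
    s[2] = "c"                   # item assignment on a string raises TypeError
except TypeError:
    s = s[:2] + "c" + s[3:]      # build a new string instead
print(s)  # abcdef

# Concatenation vs join - both produce the same result; relative speed
# depends on the runtime, which is the answer's point about benchmarking.
parts = ["Hot", " ", "diggity"]
by_concat = ""
for p in parts:
    by_concat += p               # each += creates a new string object
by_join = "".join(parts)
print(by_concat == by_join)  # True
```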
51,195 | 51,205 | How to get hashes out of arrays in Perl? | I want to write a little "DBQuery" function in perl so I can have one-liners which send an SQL statement and receive back an array of hashes, i.e. a recordset. However, I'm running into an issue with Perl syntax (and probably some odd pointer/reference issue) which is preventing me from packing out the information from the hash that I'm getting from the database. The sample code below demonstrates the issue. I can get the data "Jim" out of a hash inside an array with this syntax: print $records[$index]{'firstName'} returns "Jim" but if I copy the hash record in the array to its own hash variable first, then I strangely can't access the data anymore in that hash: %row = $records[$index]; $row{'firstName'}; returns "" (blank) Here is the full sample code showing the problem. Any help is appreciated: my @records = ( {'id' => 1, 'firstName' => 'Jim'}, {'id' => 2, 'firstName' => 'Joe'} ); my @records2 = ();
$numberOfRecords = scalar(@records); print "number of records: ". $numberOfRecords. "\n"; for(my $index=0; $index < $numberOfRecords; $index++) {
#works print 'you can print the records like this: '. $records[$index]{'firstName'}. "\n";
#does NOT work %row = $records[$index]; print 'but not like this: '. $row{'firstName'}. "\n";
} | The nested data structure contains a hash reference, not a hash. # Will work (the -> dereferences the reference) $row = $records[$index]; print "This will work: ", $row->{firstName}, "\n";
# This will also work, by promoting the hash reference into a hash %row = %{ $records[$index] }; print "This will work: ", $row{firstName}, "\n"; If you're ever presented with a deep Perl data structure, you may profit from printing it using Data::Dumper in human-readable (and Perl-parsable) form. | How to get hashes out of arrays in Perl? I want to write a little "DBQuery" function in perl so I can have one-liners which send an SQL statement and receive back an array of hashes, i.e. a recordset. However, I'm running into an issue with Perl syntax (and probably some odd pointer/reference issue) which is preventing me from packing out the information from the hash that I'm getting from the database. The sample code below demonstrates the issue. I can get the data "Jim" out of a hash inside an array with this syntax: print $records[$index]{'firstName'} returns "Jim" but if I copy the hash record in the array to its own hash variable first, then I strangely can't access the data anymore in that hash: %row = $records[$index]; $row{'firstName'}; returns "" (blank) Here is the full sample code showing the problem. Any help is appreciated: my @records = ( {'id' => 1, 'firstName' => 'Jim'}, {'id' => 2, 'firstName' => 'Joe'} ); my @records2 = ();
$numberOfRecords = scalar(@records); print "number of records: ". $numberOfRecords. "\n"; for(my $index=0; $index < $numberOfRecords; $index++) {
#works print 'you can print the records like this: '. $records[$index]{'firstName'}. "\n";
#does NOT work %row = $records[$index]; print 'but not like this: '. $row{'firstName'}. "\n";
} | TITLE:
How to get hashes out of arrays in Perl?
QUESTION:
I want to write a little "DBQuery" function in perl so I can have one-liners which send an SQL statement and receive back and an array of hashes, i.e. a recordset. However, I'm running into an issue with Perl syntax (and probably some odd pointer/reference issue) which is preventing me from packing out the information from the hash that I'm getting from the database. The sample code below demonstrates the issue. I can get the data "Jim" out of a hash inside an array with this syntax: print $records[$index]{'firstName'} returns "Jim" but if I copy the hash record in the array to its own hash variable first, then I strangely can't access the data anymore in that hash: %row = $records[$index]; $row{'firstName'}; returns "" (blank) Here is the full sample code showing the problem. Any help is appreciated: my @records = ( {'id' => 1, 'firstName' => 'Jim'}, {'id' => 2, 'firstName' => 'Joe'} ); my @records2 = ();
$numberOfRecords = scalar(@records); print "number of records: ". $numberOfRecords. "\n"; for(my $index=0; $index < $numberOfRecords; $index++) {
#works print 'you can print the records like this: '. $records[$index]{'firstName'}. "\n";
#does NOT work %row = $records[$index]; print 'but not like this: '. $row{'firstName'}. "\n";
}
ANSWER:
The nested data structure contains a hash reference, not a hash. # Will work (the -> dereferences the reference) $row = $records[$index]; print "This will work: ", $row->{firstName}, "\n";
# This will also work, by promoting the hash reference into a hash %row = %{ $records[$index] }; print "This will work: ", $row{firstName}, "\n"; If you're ever presented with a deep Perl data structure, you may profit from printing it using Data::Dumper in human-readable (and Perl-parsable) form. | [
"arrays",
"perl",
"hash",
"perl-data-structures"
] | 14 | 23 | 12,143 | 6 | 0 | 2008-09-09T03:51:31.927000 | 2008-09-09T04:03:09.247000 |
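The reference-vs-copy distinction at the heart of the answer above has a direct Python analogue, since every element of a Python list is a reference (the record data is taken from the question's sample):

```python
records = [{"id": 1, "firstName": "Jim"}, {"id": 2, "firstName": "Joe"}]

# A reference to the dict, like the answer's $row = $records[$index].
row = records[0]
print(row["firstName"])  # Jim

# An actual copy, like the answer's %row = %{ $records[$index] };
# mutating it leaves the original list element untouched.
copy_of_row = dict(records[0])
copy_of_row["firstName"] = "Jimmy"
print(records[0]["firstName"])  # Jim
```

In Perl the bug came from treating a hash *reference* as a hash; in Python the equivalent pitfall runs the other way, mutating a shared reference when a copy was intended.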
51,202 | 51,350 | Can a STP template be hidden from subsite creation page? | When a template is added using the add-template stsadm command, it becomes available to everyone when creating a subsite. Is there any way to make it only available when a site collection is being created? | Go to site actions -> Site Settings -> view all site settings -> site templates and page layouts and remove the site template from the list of allowed items. Gary Lapointe may also have made an stsadm extension for it; check stsadm.blogspot.com Mauro Masucci http://www.brantas.co.uk | Can a STP template be hidden from subsite creation page? When a template is added using the add-template stsadm command, it becomes available to everyone when creating a subsite. Is there any way to make it only available when a site collection is being created? | TITLE:
Can a STP template be hidden from subsite creation page?
QUESTION:
When a template is added using the add-template stsadm command, it becomes available to everyone when creating a subsite. Is there any way to make it only available when a site collection is being created?
ANSWER:
Go to site actions -> Site Settings -> view all site settings -> site templates and page layouts and remove the site template from the list of allowed items. Gary Lapointe may also have made an stsadm extension for it; check stsadm.blogspot.com Mauro Masucci http://www.brantas.co.uk | [
"sharepoint",
"templates",
"moss"
] | 0 | 2 | 1,024 | 4 | 0 | 2008-09-09T03:58:06.920000 | 2008-09-09T07:27:49.410000 |
51,212 | 51,239 | How to write a download progress indicator in Python? | I am writing a little application to download files over http (as, for example, described here ). I also want to include a little download progress indicator showing the percentage of the download progress. Here is what I came up with: sys.stdout.write(rem_file + "...") urllib.urlretrieve(rem_file, loc_file, reporthook=dlProgress)
def dlProgress(count, blockSize, totalSize): percent = int(count*blockSize*100/totalSize) sys.stdout.write("%2d%%" % percent) sys.stdout.write("\b\b\b") sys.stdout.flush() Output: MyFileName... 9% Any other ideas or recommendations to do this? One thing that's somewhat annoying is the blinking cursor in the terminal on the first digit of the percentage. Is there a way to prevent this? Is there a way to hide the cursor? EDIT: Here a better alternative using a global variable for the filename in dlProgress and the '\r' code: global rem_file # global variable to be used in dlProgress
urllib.urlretrieve(rem_file, loc_file, reporthook=dlProgress)
def dlProgress(count, blockSize, totalSize): percent = int(count*blockSize*100/totalSize) sys.stdout.write("\r" + rem_file + "...%d%%" % percent) sys.stdout.flush() Output: MyFileName...9% And the cursor shows up at the END of the line. Much better. | There's a text progress bar library for python at http://pypi.python.org/pypi/progressbar/2.2 that you might find useful: This library provides a text mode progressbar. This is typically used to display the progress of a long running operation, providing a visual clue that processing is underway. The ProgressBar class manages the progress, and the format of the line is given by a number of widgets. A widget is an object that may display differently depending on the state of the progress. There are three types of widget: - a string, which always shows itself; - a ProgressBarWidget, which may return a different value every time its update method is called; and - a ProgressBarWidgetHFill, which is like ProgressBarWidget, except it expands to fill the remaining width of the line. The progressbar module is very easy to use, yet very powerful. And automatically supports features like auto-resizing when available. | How to write a download progress indicator in Python? I am writing a little application to download files over http (as, for example, described here ). I also want to include a little download progress indicator showing the percentage of the download progress. Here is what I came up with: sys.stdout.write(rem_file + "...") urllib.urlretrieve(rem_file, loc_file, reporthook=dlProgress)
def dlProgress(count, blockSize, totalSize): percent = int(count*blockSize*100/totalSize) sys.stdout.write("%2d%%" % percent) sys.stdout.write("\b\b\b") sys.stdout.flush() Output: MyFileName... 9% Any other ideas or recommendations to do this? One thing that's somewhat annoying is the blinking cursor in the terminal on the first digit of the percentage. Is there a way to prevent this? Is there a way to hide the cursor? EDIT: Here a better alternative using a global variable for the filename in dlProgress and the '\r' code: global rem_file # global variable to be used in dlProgress
urllib.urlretrieve(rem_file, loc_file, reporthook=dlProgress)
def dlProgress(count, blockSize, totalSize): percent = int(count*blockSize*100/totalSize) sys.stdout.write("\r" + rem_file + "...%d%%" % percent) sys.stdout.flush() Output: MyFileName...9% And the cursor shows up at the END of the line. Much better. | TITLE:
How to write a download progress indicator in Python?
QUESTION:
I am writing a little application to download files over http (as, for example, described here ). I also want to include a little download progress indicator showing the percentage of the download progress. Here is what I came up with: sys.stdout.write(rem_file + "...") urllib.urlretrieve(rem_file, loc_file, reporthook=dlProgress)
def dlProgress(count, blockSize, totalSize): percent = int(count*blockSize*100/totalSize) sys.stdout.write("%2d%%" % percent) sys.stdout.write("\b\b\b") sys.stdout.flush() Output: MyFileName... 9% Any other ideas or recommendations to do this? One thing that's somewhat annoying is the blinking cursor in the terminal on the first digit of the percentage. Is there a way to prevent this? Is there a way to hide the cursor? EDIT: Here a better alternative using a global variable for the filename in dlProgress and the '\r' code: global rem_file # global variable to be used in dlProgress
urllib.urlretrieve(rem_file, loc_file, reporthook=dlProgress)
def dlProgress(count, blockSize, totalSize): percent = int(count*blockSize*100/totalSize) sys.stdout.write("\r" + rem_file + "...%d%%" % percent) sys.stdout.flush() Output: MyFileName...9% And the cursor shows up at the END of the line. Much better.
ANSWER:
There's a text progress bar library for python at http://pypi.python.org/pypi/progressbar/2.2 that you might find useful: This library provides a text mode progressbar. This is typically used to display the progress of a long running operation, providing a visual clue that processing is underway. The ProgressBar class manages the progress, and the format of the line is given by a number of widgets. A widget is an object that may display differently depending on the state of the progress. There are three types of widget: - a string, which always shows itself; - a ProgressBarWidget, which may return a different value every time its update method is called; and - a ProgressBarWidgetHFill, which is like ProgressBarWidget, except it expands to fill the remaining width of the line. The progressbar module is very easy to use, yet very powerful. And automatically supports features like auto-resizing when available.
"python",
"http"
] | 59 | 18 | 32,713 | 10 | 0 | 2008-09-09T04:09:15.503000 | 2008-09-09T04:48:28.203000 |
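The '\r' trick in the row above is easy to verify without a real download if the line formatting is pulled out of the callback. A minimal Python 3 sketch follows; the helper names `progress_line` and `dl_progress` are invented here, and the callback signature is the one `urllib.request.urlretrieve` expects for its `reporthook`:

```python
import sys

def progress_line(filename, count, block_size, total_size):
    """Build the status line; the leading carriage return overwrites the previous one."""
    percent = min(int(count * block_size * 100 / total_size), 100)
    return "\r%s...%d%%" % (filename, percent)

def dl_progress(count, block_size, total_size, filename="MyFileName"):
    """reporthook-style callback for urllib.request.urlretrieve."""
    sys.stdout.write(progress_line(filename, count, block_size, total_size))
    sys.stdout.flush()
```

Separating the pure formatting from the `sys.stdout` write makes the percentage math trivially testable, and the `min(..., 100)` clamp keeps the last block from printing more than 100%.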
51,224 | 51,231 | Regular Expression to match valid dates | I'm trying to write a regular expression that validates a date. The regex needs to match the following M/D/YYYY MM/DD/YYYY Single digit months can start with a leading zero (eg: 03/12/2008) Single digit days can start with a leading zero (eg: 3/02/2008) CANNOT include February 30 or February 31 (eg: 2/31/2008) So far I have ^(([1-9]|1[012])[-/.]([1-9]|[12][0-9]|3[01])[-/.](19|20)\d\d)|((1[012]|0[1-9])(3[01]|2\d|1\d|0[1-9])(19|20)\d\d)|((1[012]|0[1-9])[-/.](3[01]|2\d|1\d|0[1-9])[-/.](19|20)\d\d)$ This matches properly EXCEPT it still includes 2/30/2008 & 2/31/2008. Does anyone have a better suggestion? Edit: I found the answer on RegExLib ^((((0[13578])|([13578])|(1[02]))[\/](([1-9])|([0-2][0-9])|(3[01])))|(((0[469])|([469])|(11))[\/](([1-9])|([0-2][0-9])|(30)))|((2|02)[\/](([1-9])|([0-2][0-9]))))[\/]\d{4}$|^\d{4}$ It matches all valid months that follow the MM/DD/YYYY format. Thanks everyone for the help. | This is not an appropriate use of regular expressions. You'd be better off using [0-9]{2}/[0-9]{2}/[0-9]{4} and then checking ranges in a higher-level language. | Regular Expression to match valid dates I'm trying to write a regular expression that validates a date. The regex needs to match the following M/D/YYYY MM/DD/YYYY Single digit months can start with a leading zero (eg: 03/12/2008) Single digit days can start with a leading zero (eg: 3/02/2008) CANNOT include February 30 or February 31 (eg: 2/31/2008) So far I have ^(([1-9]|1[012])[-/.]([1-9]|[12][0-9]|3[01])[-/.](19|20)\d\d)|((1[012]|0[1-9])(3[01]|2\d|1\d|0[1-9])(19|20)\d\d)|((1[012]|0[1-9])[-/.](3[01]|2\d|1\d|0[1-9])[-/.](19|20)\d\d)$ This matches properly EXCEPT it still includes 2/30/2008 & 2/31/2008. Does anyone have a better suggestion? 
Edit: I found the answer on RegExLib ^((((0[13578])|([13578])|(1[02]))[\/](([1-9])|([0-2][0-9])|(3[01])))|(((0[469])|([469])|(11))[\/](([1-9])|([0-2][0-9])|(30)))|((2|02)[\/](([1-9])|([0-2][0-9]))))[\/]\d{4}$|^\d{4}$ It matches all valid months that follow the MM/DD/YYYY format. Thanks everyone for the help. | TITLE:
Regular Expression to match valid dates
QUESTION:
I'm trying to write a regular expression that validates a date. The regex needs to match the following M/D/YYYY MM/DD/YYYY Single digit months can start with a leading zero (eg: 03/12/2008) Single digit days can start with a leading zero (eg: 3/02/2008) CANNOT include February 30 or February 31 (eg: 2/31/2008) So far I have ^(([1-9]|1[012])[-/.]([1-9]|[12][0-9]|3[01])[-/.](19|20)\d\d)|((1[012]|0[1-9])(3[01]|2\d|1\d|0[1-9])(19|20)\d\d)|((1[012]|0[1-9])[-/.](3[01]|2\d|1\d|0[1-9])[-/.](19|20)\d\d)$ This matches properly EXCEPT it still includes 2/30/2008 & 2/31/2008. Does anyone have a better suggestion? Edit: I found the answer on RegExLib ^((((0[13578])|([13578])|(1[02]))[\/](([1-9])|([0-2][0-9])|(3[01])))|(((0[469])|([469])|(11))[\/](([1-9])|([0-2][0-9])|(30)))|((2|02)[\/](([1-9])|([0-2][0-9]))))[\/]\d{4}$|^\d{4}$ It matches all valid months that follow the MM/DD/YYYY format. Thanks everyone for the help.
ANSWER:
This is not an appropriate use of regular expressions. You'd be better off using [0-9]{2}/[0-9]{2}/[0-9]{4} and then checking ranges in a higher-level language. | [
"regex",
"date"
] | 80 | 159 | 320,296 | 17 | 0 | 2008-09-09T04:30:42.957000 | 2008-09-09T04:37:15.310000 |
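The accepted answer's advice — keep the regex loose and do the calendar math in a higher-level language — can be sketched in Python: a shape-only regex rejects obvious garbage, then `datetime.strptime` handles month lengths and leap years (so 2/30 and 2/31 are rejected for free). The function name `is_valid_mdy` is invented here:

```python
import re
from datetime import datetime

# Only checks the shape M/D/YYYY or MM/DD/YYYY; ranges are left to strptime.
DATE_SHAPE = re.compile(r"^\d{1,2}/\d{1,2}/\d{4}$")

def is_valid_mdy(text):
    """Cheap shape check with a regex, then let datetime do the calendar math."""
    if not DATE_SHAPE.match(text):
        return False
    try:
        datetime.strptime(text, "%m/%d/%Y")
        return True
    except ValueError:
        return False
```

This accepts leading-zero and single-digit months/days alike, and correctly allows 2/29 only in leap years — something the pure-regex approach in the question cannot express.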
51,233 | 51,240 | How can I retrieve the page title of a webpage using Python? | How can I retrieve the page title of a webpage (title html tag) using Python? | I'll always use lxml for such tasks. You could use beautifulsoup as well. import lxml.html t = lxml.html.parse(url) print(t.find(".//title").text) EDIT based on comment: from urllib2 import urlopen from lxml.html import parse
url = "https://www.google.com" page = urlopen(url) p = parse(page) print(p.find(".//title").text) | How can I retrieve the page title of a webpage using Python? How can I retrieve the page title of a webpage (title html tag) using Python? | TITLE:
How can I retrieve the page title of a webpage using Python?
QUESTION:
How can I retrieve the page title of a webpage (title html tag) using Python?
ANSWER:
I'll always use lxml for such tasks. You could use beautifulsoup as well. import lxml.html t = lxml.html.parse(url) print(t.find(".//title").text) EDIT based on comment: from urllib2 import urlopen from lxml.html import parse
url = "https://www.google.com" page = urlopen(url) p = parse(page) print(p.find(".//title").text) | [
"python",
"html"
] | 89 | 70 | 114,029 | 12 | 0 | 2008-09-09T04:38:55.527000 | 2008-09-09T04:49:38.463000 |
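If lxml isn't available, the same title extraction can be done with Python's standard-library `html.parser`. This is an alternative sketch, not what the answer above uses, and `TitleParser`/`page_title` are names invented here:

```python
from html.parser import HTMLParser

class TitleParser(HTMLParser):
    """Collect the text inside the first <title> element."""
    def __init__(self):
        super().__init__()
        self._in_title = False
        self.title = None

    def handle_starttag(self, tag, attrs):
        if tag == "title" and self.title is None:
            self._in_title = True

    def handle_data(self, data):
        if self._in_title:
            self.title = data.strip()
            self._in_title = False

def page_title(html):
    parser = TitleParser()
    parser.feed(html)
    return parser.title  # None if the page has no <title>
```

For a live page you would feed it the decoded body from `urllib.request.urlopen(url)`; the sketch stays offline so it is easy to test.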
51,238 | 58,277 | How do I conditionally set a column to its default value with MySqlParameter? | I have a table in a MySql database that stores user accounts. One of the columns, expires, stores an expiration date but defaults to NULL. I need to be able to remove an expiration date and set it back to the default value. Currently, all of my CRUD routines are written using MySqlCommand with parameters. Can this be done directly with a MySqlParameter, or do I have to create an alternate command object to handle this eventuality? | The problem was DBNull, doing: command.Parameters.AddWithValue("@parameter", null); compiles OK. | How do I conditionally set a column to its default value with MySqlParameter? I have a table in a MySql database that stores user accounts. One of the columns, expires, stores an expiration date but defaults to NULL. I need to be able to remove an expiration date and set it back to the default value. Currently, all of my CRUD routines are written using MySqlCommand with parameters. Can this be done directly with a MySqlParameter, or do I have to create an alternate command object to handle this eventuality? | TITLE:
How do I conditionally set a column to its default value with MySqlParameter?
QUESTION:
I have a table in a MySql database that stores user accounts. One of the columns, expires, stores an expiration date but defaults to NULL. I need to be able to remove an expiration date and set it back to the default value. Currently, all of my CRUD routines are written using MySqlCommand with parameters. Can this be done directly with a MySqlParameter, or do I have to create an alternate command object to handle this eventuality?
ANSWER:
The problem was DBNull: doing command.Parameters.AddWithValue("@parameter", null); compiles OK.
"c#",
".net",
"mysql",
"sql",
"mysql-parameter"
] | 2 | 3 | 1,426 | 3 | 0 | 2008-09-09T04:47:33.070000 | 2008-09-12T04:02:21.970000 |
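The answer's point — a null parameter value maps to SQL NULL, while the column DEFAULT only applies when the column is omitted — can be illustrated with Python's built-in `sqlite3` driver as a stand-in for MySqlCommand. This is an analogy, not the actual ADO.NET API:

```python
import sqlite3

# Stand-in schema: expires defaults to NULL, like the accounts table described.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, expires TEXT DEFAULT NULL)")
conn.execute("INSERT INTO accounts (expires) VALUES (?)", ("2008-12-31",))
before = conn.execute("SELECT expires FROM accounts WHERE id = 1").fetchone()

# Binding None sends SQL NULL -- the analogue of passing a null parameter value
# in ADO.NET. This clears the expiration, putting the column back to its default.
conn.execute("UPDATE accounts SET expires = ? WHERE id = 1", (None,))
row = conn.execute("SELECT expires FROM accounts WHERE id = 1").fetchone()
```

Because the default here is itself NULL, a plain parameterized UPDATE with a null value is enough; no alternate command object is needed.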
51,256 | 51,260 | How to upload a file to a WCF Service? | I've build a WCF Service to accept a file and write it to disk. The front-end consists of a page with SWFUpload which is handling the upload on the client side. Apparently, SWFUpload posts the data with a Content Type of: multipart/form-data. I would think this ok but on the Service side I get an error. The error is "ProtocolException" expecting text/xml. I've tried different message encodings in the bindings but nothing seems to work. How can I get this file uploaded using multipart/form-data? | @jdiaz, @JasonS is right, to upload file you need to transfer it as a byte stream. You need to use WCF streaming. For example on how to upload file via WCF see an article from http://kjellsj.blogspot.com | How to upload a file to a WCF Service? I've build a WCF Service to accept a file and write it to disk. The front-end consists of a page with SWFUpload which is handling the upload on the client side. Apparently, SWFUpload posts the data with a Content Type of: multipart/form-data. I would think this ok but on the Service side I get an error. The error is "ProtocolException" expecting text/xml. I've tried different message encodings in the bindings but nothing seems to work. How can I get this file uploaded using multipart/form-data? | TITLE:
How to upload a file to a WCF Service?
QUESTION:
I've built a WCF Service to accept a file and write it to disk. The front-end consists of a page with SWFUpload which is handling the upload on the client side. Apparently, SWFUpload posts the data with a Content Type of: multipart/form-data. I would think this is ok, but on the Service side I get an error. The error is "ProtocolException" expecting text/xml. I've tried different message encodings in the bindings but nothing seems to work. How can I get this file uploaded using multipart/form-data?
ANSWER:
@jdiaz, @JasonS is right: to upload a file you need to transfer it as a byte stream. You need to use WCF streaming. For an example of how to upload a file via WCF, see an article from http://kjellsj.blogspot.com
"wcf",
"web-services"
] | 20 | 9 | 50,806 | 4 | 0 | 2008-09-09T05:27:00.137000 | 2008-09-09T05:40:05.457000 |
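The streaming advice above boils down to moving bytes in fixed-size chunks instead of buffering the whole upload in memory. A language-neutral sketch of that loop in Python (`stream_copy` is a hypothetical helper, not part of WCF or SWFUpload):

```python
def stream_copy(src, dst, chunk_size=64 * 1024):
    """Copy a file-like object in fixed-size chunks so memory use stays flat."""
    total = 0
    while True:
        chunk = src.read(chunk_size)
        if not chunk:  # empty read signals end of stream
            break
        dst.write(chunk)
        total += len(chunk)
    return total
```

The same shape appears on the WCF side: the service method receives a `Stream` and copies it to disk chunk by chunk, which is what lets uploads exceed the message-size limits of buffered transfer.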
51,264 | 51,285 | What table/view do you query against to select all the table names in a schema in Oracle? | What object do you query against to select all the table names in a schema in Oracle? | To see all the tables you have access to select table_name from all_tables where owner=' '; To select all tables for the current logged in schema (eg, your tables) select table_name from user_tables; | What table/view do you query against to select all the table names in a schema in Oracle? What object do you query against to select all the table names in a schema in Oracle? | TITLE:
What table/view do you query against to select all the table names in a schema in Oracle?
QUESTION:
What object do you query against to select all the table names in a schema in Oracle?
ANSWER:
To see all the tables you have access to select table_name from all_tables where owner=' '; To select all tables for the current logged in schema (eg, your tables) select table_name from user_tables; | [
"sql",
"oracle"
] | 3 | 5 | 1,767 | 3 | 0 | 2008-09-09T05:46:34.037000 | 2008-09-09T06:16:42.140000 |
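Other engines expose the same idea through their own catalog views. As a runnable point of comparison, here is a Python/SQLite sketch of "list my tables" — Oracle itself would use `user_tables` as in the answer; SQLite's `sqlite_master` is only an analogue:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (id INTEGER)")
conn.execute("CREATE TABLE departments (id INTEGER)")

# sqlite_master plays the role of Oracle's user_tables for the current database.
names = [row[0] for row in conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table' ORDER BY name")]
```

In Oracle the equivalent query would be `SELECT table_name FROM user_tables` for your own schema, or `all_tables` filtered by `owner` for tables you can access.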
51,266 | 51,427 | High availability and scalable platform for Java/C++ on Solaris | I have an application that's a mix of Java and C++ on Solaris. The Java aspects of the code run the web UI and establish state on the devices that we're talking to, and the C++ code does the real-time crunching of data coming back from the devices. Shared memory is used to pass device state and context information from the Java code through to the C++ code. The Java code uses a PostgreSQL database to persist its state. We're running into some pretty severe performance bottlenecks, and right now the only way we can scale is to increase memory and CPU counts. We're stuck on the one physical box due to the shared memory design. The really big hit here is being taken by the C++ code. The web interface is fairly lightly used to configure the devices; where we're really struggling is to handle the data volumes that the devices deliver once configured. Every piece of data we get back from the device has an identifier in it which points back to the device context, and we need to look that up. Right now there's a series of shared memory objects that are maintained by the Java/UI code and referred to by the C++ code, and that's the bottleneck. Because of that architecture we cannot move the C++ data handling off to another machine. We need to be able to scale out so that various subsets of devices can be handled by different machines, but then we lose the ability to do that context lookup, and that's the problem I'm trying to resolve: how to offload the real-time data processing to other boxes while still being able to refer to the device context. I should note we have no control over the protocol used by the devices themselves, and there is no possible chance that situation will change. We know we need to move away from this to be able to scale out by adding more machines to the cluster, and I'm in the early stages of working out exactly how we'll do this. 
Right now I'm looking at Terracotta as a way of scaling out the Java code, but I haven't got as far as working out how to scale out the C++ to match. As well as scaling for performance we need to consider high availability as well. The application needs to be available pretty much the whole time -- not absolutely 100%, which isn't cost effective, but we need to do a reasonable job of surviving a machine outage. If you had to undertake the task I've been given, what would you do? EDIT: Based on the data provided by @john channing, i'm looking at both GigaSpaces and Gemstone. Oracle Coherence and IBM ObjectGrid appear to be java-only. | The first thing I would do is construct a model of the system to map the data flow and try to understand precisely where the bottleneck lies. If you can model your system as a pipeline, then you should be able to use the theory of constraints (most of the literature is about optimising business processes but it applies equally to software) to continuously improve performance and eliminate the bottleneck. Next I would collect some hard empirical data that accurately characterises the performance of your system. It is something of a cliché that you cannot manage what you cannot measure, but I have seen many people attempt to optimise a software system based on hunches and fail miserably. Then I would use the Pareto Principle (80/20 rule) to choose the small number of things that will produce the biggest gains and focus only on those. To scale a Java application horizontally, I have used Oracle Coherence extensively. Although some dismiss it as a very expensive distributed hashtable, the functionality is much richer than that and you can, for example, directly access data in the cache from C++ code. Other alternatives for horizontally scaling your Java code would be Giga Spaces, IBM Object Grid or Gemstone Gemfire. 
If your C++ code is stateless and is used purely for number crunching, you could look at distributing the process using ICE Grid which has bindings for all of the languages you are using. | High availability and scalable platform for Java/C++ on Solaris I have an application that's a mix of Java and C++ on Solaris. The Java aspects of the code run the web UI and establish state on the devices that we're talking to, and the C++ code does the real-time crunching of data coming back from the devices. Shared memory is used to pass device state and context information from the Java code through to the C++ code. The Java code uses a PostgreSQL database to persist its state. We're running into some pretty severe performance bottlenecks, and right now the only way we can scale is to increase memory and CPU counts. We're stuck on the one physical box due to the shared memory design. The really big hit here is being taken by the C++ code. The web interface is fairly lightly used to configure the devices; where we're really struggling is to handle the data volumes that the devices deliver once configured. Every piece of data we get back from the device has an identifier in it which points back to the device context, and we need to look that up. Right now there's a series of shared memory objects that are maintained by the Java/UI code and referred to by the C++ code, and that's the bottleneck. Because of that architecture we cannot move the C++ data handling off to another machine. We need to be able to scale out so that various subsets of devices can be handled by different machines, but then we lose the ability to do that context lookup, and that's the problem I'm trying to resolve: how to offload the real-time data processing to other boxes while still being able to refer to the device context. I should note we have no control over the protocol used by the devices themselves, and there is no possible chance that situation will change. 
We know we need to move away from this to be able to scale out by adding more machines to the cluster, and I'm in the early stages of working out exactly how we'll do this. Right now I'm looking at Terracotta as a way of scaling out the Java code, but I haven't got as far as working out how to scale out the C++ to match. As well as scaling for performance we need to consider high availability as well. The application needs to be available pretty much the whole time -- not absolutely 100%, which isn't cost effective, but we need to do a reasonable job of surviving a machine outage. If you had to undertake the task I've been given, what would you do? EDIT: Based on the data provided by @john channing, i'm looking at both GigaSpaces and Gemstone. Oracle Coherence and IBM ObjectGrid appear to be java-only. | TITLE:
High availability and scalable platform for Java/C++ on Solaris
QUESTION:
I have an application that's a mix of Java and C++ on Solaris. The Java aspects of the code run the web UI and establish state on the devices that we're talking to, and the C++ code does the real-time crunching of data coming back from the devices. Shared memory is used to pass device state and context information from the Java code through to the C++ code. The Java code uses a PostgreSQL database to persist its state. We're running into some pretty severe performance bottlenecks, and right now the only way we can scale is to increase memory and CPU counts. We're stuck on the one physical box due to the shared memory design. The really big hit here is being taken by the C++ code. The web interface is fairly lightly used to configure the devices; where we're really struggling is to handle the data volumes that the devices deliver once configured. Every piece of data we get back from the device has an identifier in it which points back to the device context, and we need to look that up. Right now there's a series of shared memory objects that are maintained by the Java/UI code and referred to by the C++ code, and that's the bottleneck. Because of that architecture we cannot move the C++ data handling off to another machine. We need to be able to scale out so that various subsets of devices can be handled by different machines, but then we lose the ability to do that context lookup, and that's the problem I'm trying to resolve: how to offload the real-time data processing to other boxes while still being able to refer to the device context. I should note we have no control over the protocol used by the devices themselves, and there is no possible chance that situation will change. We know we need to move away from this to be able to scale out by adding more machines to the cluster, and I'm in the early stages of working out exactly how we'll do this. 
Right now I'm looking at Terracotta as a way of scaling out the Java code, but I haven't got as far as working out how to scale out the C++ to match. As well as scaling for performance we need to consider high availability as well. The application needs to be available pretty much the whole time -- not absolutely 100%, which isn't cost effective, but we need to do a reasonable job of surviving a machine outage. If you had to undertake the task I've been given, what would you do? EDIT: Based on the data provided by @john channing, i'm looking at both GigaSpaces and Gemstone. Oracle Coherence and IBM ObjectGrid appear to be java-only.
ANSWER:
The first thing I would do is construct a model of the system to map the data flow and try to understand precisely where the bottleneck lies. If you can model your system as a pipeline, then you should be able to use the theory of constraints (most of the literature is about optimising business processes but it applies equally to software) to continuously improve performance and eliminate the bottleneck. Next I would collect some hard empirical data that accurately characterises the performance of your system. It is something of a cliché that you cannot manage what you cannot measure, but I have seen many people attempt to optimise a software system based on hunches and fail miserably. Then I would use the Pareto Principle (80/20 rule) to choose the small number of things that will produce the biggest gains and focus only on those. To scale a Java application horizontally, I have used Oracle Coherence extensively. Although some dismiss it as a very expensive distributed hashtable, the functionality is much richer than that and you can, for example, directly access data in the cache from C++ code. Other alternatives for horizontally scaling your Java code would be Giga Spaces, IBM Object Grid or Gemstone Gemfire. If your C++ code is stateless and is used purely for number crunching, you could look at distributing the process using ICE Grid which has bindings for all of the languages you are using. | [
"java",
"c++",
"scalability",
"solaris",
"high-availability"
] | 9 | 5 | 1,530 | 3 | 0 | 2008-09-09T05:52:23.680000 | 2008-09-09T09:00:54.337000 |
51,276 | 51,373 | Performance gain in compiling java to native code? | Is there any performance to be gained these days from compiling java to native code, or do modern hotspot compilers end up doing this over time anyway? | There was a similar discussion here recently, for the question What are advantages of bytecode over native code?. You can find interesting answers in that thread. | Performance gain in compiling java to native code? Is there any performance to be gained these days from compiling java to native code, or do modern hotspot compilers end up doing this over time anyway? | TITLE:
Performance gain in compiling java to native code?
QUESTION:
Is there any performance to be gained these days from compiling java to native code, or do modern hotspot compilers end up doing this over time anyway?
ANSWER:
There was a similar discussion here recently, for the question What are advantages of bytecode over native code?. You can find interesting answers in that thread. | [
"java",
"performance"
] | 12 | 4 | 9,409 | 5 | 0 | 2008-09-09T06:05:02.870000 | 2008-09-09T08:08:20 |
51,279 | 51,284 | Strategy for single sign on with legacy applications | I'm wondering what strategies people use for reduced sign on with legacy applications and how effective they have found them? We have an ASP.Net based intranet and own a lot of the legacy applications, but not all. We also have BizTalk and are considering the use of it's SSO engine too. | A good compromise between effort/rework and the convenience of single sign on is to continue to maintain a list of users, privileges, roles etc in the legacy app. Make the changes necessary to automatically log the user into your application based on their user account (usually their Windows or network account). I'm currently running a couple of applications that use this method of sign on, and it makes them seem more integrated even though they aren't. Another advantage we've found is that it stops people from sharing passwords to legacy applications. They're much less likely to hand out an admin password that also gives others access to their email or payroll details! | Strategy for single sign on with legacy applications I'm wondering what strategies people use for reduced sign on with legacy applications and how effective they have found them? We have an ASP.Net based intranet and own a lot of the legacy applications, but not all. We also have BizTalk and are considering the use of it's SSO engine too. | TITLE:
Strategy for single sign on with legacy applications
QUESTION:
I'm wondering what strategies people use for reduced sign on with legacy applications and how effective they have found them? We have an ASP.Net based intranet and own a lot of the legacy applications, but not all. We also have BizTalk and are considering the use of it's SSO engine too.
ANSWER:
A good compromise between effort/rework and the convenience of single sign on is to continue to maintain a list of users, privileges, roles etc in the legacy app. Make the changes necessary to automatically log the user into your application based on their user account (usually their Windows or network account). I'm currently running a couple of applications that use this method of sign on, and it makes them seem more integrated even though they aren't. Another advantage we've found is that it stops people from sharing passwords to legacy applications. They're much less likely to hand out an admin password that also gives others access to their email or payroll details! | [
"integration",
"single-sign-on",
"legacy"
] | 3 | 2 | 3,116 | 4 | 0 | 2008-09-09T06:05:57.377000 | 2008-09-09T06:12:55.037000 |
51,283 | 51,287 | Access to restricted URI denied code: 1012 | How do you get around this Ajax cross site scripting problem on FireFox 3? | If you're using jQuery it has a callback function to overcome this: http://docs.jquery.com/Ajax/jQuery.ajax#options As of jQuery 1.2, you can load JSON data located on another domain if you specify a JSONP callback, which can be done like so: "myurl?callback=?". jQuery automatically replaces the? with the correct method name to call, calling your specified callback. Or, if you set the dataType to "jsonp" a callback will be automatically added to your Ajax request. Alternatively you could make your ajax request to a server-side script which does the cross-domain call for you, then passes the data back to your script | Access to restricted URI denied code: 1012 How do you get around this Ajax cross site scripting problem on FireFox 3? | TITLE:
Access to restricted URI denied code: 1012
QUESTION:
How do you get around this Ajax cross site scripting problem on FireFox 3?
ANSWER:
If you're using jQuery it has a callback function to overcome this: http://docs.jquery.com/Ajax/jQuery.ajax#options As of jQuery 1.2, you can load JSON data located on another domain if you specify a JSONP callback, which can be done like so: "myurl?callback=?". jQuery automatically replaces the ? with the correct method name to call, calling your specified callback. Or, if you set the dataType to "jsonp" a callback will be automatically added to your Ajax request. Alternatively you could make your ajax request to a server-side script which does the cross-domain call for you, then passes the data back to your script
"javascript",
"jquery",
"ajax",
"xss"
] | 28 | 20 | 47,810 | 5 | 0 | 2008-09-09T06:12:10.450000 | 2008-09-09T06:19:47.997000 |
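On the server side, honoring a `?callback=?` request just means wrapping the JSON body in a call to the function name the client supplied. A minimal Python sketch of that response formatting (`jsonp_response` is an invented name; a real endpoint would also set the `application/javascript` content type and sanitize the callback name):

```python
import json

def jsonp_response(callback, payload):
    """Wrap a JSON payload in the callback the client asked for (?callback=...)."""
    return "%s(%s);" % (callback, json.dumps(payload))
```

The browser then executes the returned script, which invokes the named callback with the data — sidestepping the same-origin restriction that blocks a plain XMLHttpRequest.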
51,288 | 51,295 | How to keep Stored Procedures and other scripts in SVN/Other repository? | Can anyone provide some real examples as to how best to keep script files for views, stored procedures and functions in a SVN (or other) repository. Obviously one solution is to have the script files for all the different components in a directory or more somewhere and simply using TortoiseSVN or the like to keep them in SVN, Then whenever a change is to be made I load the script up in Management Studio etc. I don't really want this. What I'd really prefer is some kind of batch script that I can run periodically (nightly?) that would export all the stored procedures / views etc that had changed in a given timeframe and then commit them to SVN. Ideas? | Sounds like you're not wanting to use Revision Control properly, to me. Obviously one solution is to have the script files for all the different components in a directory or more somewhere and simply using TortoiseSVN or the like to keep them in SVN This is what should be done. You would have your local copy you are working on (Developing new, Tweaking old, etc) and as single components/procedures/etc get finished, you would commit them individually until you have to start the process over. Committing half-done code just because it's been 'X' time since it was last committed is sloppy and guaranteed to cause anyone else using the repository grief. | How to keep Stored Procedures and other scripts in SVN/Other repository? Can anyone provide some real examples as to how best to keep script files for views, stored procedures and functions in a SVN (or other) repository. Obviously one solution is to have the script files for all the different components in a directory or more somewhere and simply using TortoiseSVN or the like to keep them in SVN, Then whenever a change is to be made I load the script up in Management Studio etc. I don't really want this. 
What I'd really prefer is some kind of batch script that I can run periodically (nightly?) that would export all the stored procedures / views etc that had changed in a given timeframe and then commit them to SVN. Ideas? | TITLE:
How to keep Stored Procedures and other scripts in SVN/Other repository?
QUESTION:
Can anyone provide some real examples as to how best to keep script files for views, stored procedures and functions in a SVN (or other) repository. Obviously one solution is to have the script files for all the different components in a directory or more somewhere and simply using TortoiseSVN or the like to keep them in SVN, Then whenever a change is to be made I load the script up in Management Studio etc. I don't really want this. What I'd really prefer is some kind of batch script that I can run periodically (nightly?) that would export all the stored procedures / views etc that had changed in a given timeframe and then commit them to SVN. Ideas?
ANSWER:
Sounds to me like you're not wanting to use Revision Control properly. "Obviously one solution is to have the script files for all the different components in a directory or more somewhere and simply using TortoiseSVN or the like to keep them in SVN": this is what should be done. You would have your local copy you are working on (Developing new, Tweaking old, etc) and as single components/procedures/etc get finished, you would commit them individually until you have to start the process over. Committing half-done code just because it's been 'X' time since it was last committed is sloppy and guaranteed to cause anyone else using the repository grief.
"sql-server",
"svn",
"tortoisesvn",
"repository"
] | 13 | 10 | 5,343 | 10 | 0 | 2008-09-09T06:19:51.337000 | 2008-09-09T06:31:14.600000 |
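The nightly export the questioner asks for reduces to: pull each object's definition from the database catalog, write it to its own `.sql` file, leave unchanged files untouched, and commit the rest. Below is a minimal, database-agnostic sketch of the write-and-skip-unchanged step — the catalog query itself varies by DBMS, so definitions arrive here as a plain dict, and `export_changed` and the file layout are illustrative names of my own choosing, not part of any existing tool.

```python
import os

def export_changed(defs, out_dir):
    """Write each object definition to <out_dir>/<name>.sql, touching only
    files whose content actually changed, so a follow-up `svn commit`
    picks up real changes only. Returns the names that were (re)written."""
    os.makedirs(out_dir, exist_ok=True)
    changed = []
    for name, body in sorted(defs.items()):
        path = os.path.join(out_dir, name + ".sql")
        if os.path.exists(path):
            with open(path, "r", encoding="utf-8") as f:
                if f.read() == body:
                    continue  # unchanged: nothing for SVN to see
        with open(path, "w", encoding="utf-8") as f:
            f.write(body)
        changed.append(name)
    return changed
```

A scheduled job would populate the dict from the catalog (on SQL Server, for instance, by querying the module definitions), call this, then shell out to `svn add`/`svn commit` for the returned names.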
51,289 | 69,165 | Pass Silverlight type to Microsoft AJAX and pass parameter validation | I'm working on a Silverlight application where I want to take advantage of the Microsoft ASP.NET AJAX Client library. I'm calling the library using the HTML Bridge that is part of Silverlight 2. Silverlight got great support for passing types between JavaScript and Managed Code, but now I've bumped against a problem. Microsoft ASP.NET AJAX Client Libraries includes a "type system", and one of the things the framework does is validating that the parameters is of correct type. The specific function I'm calling is the Sys.Application.addHistoryPoint, and the validation code looks like this: var e = Function.validateParams(arguments, [ {name: "state", type: Object}, {name: "title", type: String, mayBeNull: true, optional: true} ]); I've tried passing all kinds of CLR types as the state parameter (C# structs, [ScriptableTypes], Dictionary types etc. And every time I get the error: "Sys.ArgumentTypeException: Object of type 'Function' cannot be converted to type 'Object' This error is obviously coming from the parameter validation... But WHY does ASP.NET AJAX think my types are Functions? Does anyone understand the type validation in MS AJAX? I know I can do workarounds like calling HtmlPage.Window.Eval("...") and pass my JS integration as strings, but I don't want to do that. I want to pass a real.NET type as the state parameter. | I found a pretty good overview of this here, but even that overview seemed to cover every scenario except the one you mention. I'm wondering if this can't be done because javascript objects really are functions (more or less). What if you wrote a wrapper function that could create the state object using a string? | Pass Silverlight type to Microsoft AJAX and pass parameter validation I'm working on a Silverlight application where I want to take advantage of the Microsoft ASP.NET AJAX Client library. 
I'm calling the library using the HTML Bridge that is part of Silverlight 2. Silverlight got great support for passing types between JavaScript and Managed Code, but now I've bumped against a problem. Microsoft ASP.NET AJAX Client Libraries includes a "type system", and one of the things the framework does is validating that the parameters are of the correct type. The specific function I'm calling is the Sys.Application.addHistoryPoint, and the validation code looks like this: var e = Function.validateParams(arguments, [ {name: "state", type: Object}, {name: "title", type: String, mayBeNull: true, optional: true} ]); I've tried passing all kinds of CLR types as the state parameter (C# structs, [ScriptableTypes], Dictionary types, etc.). And every time I get the error: "Sys.ArgumentTypeException: Object of type 'Function' cannot be converted to type 'Object'". This error is obviously coming from the parameter validation... But WHY does ASP.NET AJAX think my types are Functions? Does anyone understand the type validation in MS AJAX? I know I can do workarounds like calling HtmlPage.Window.Eval("...") and pass my JS integration as strings, but I don't want to do that. I want to pass a real .NET type as the state parameter. | TITLE:
Pass Silverlight type to Microsoft AJAX and pass parameter validation
QUESTION:
I'm working on a Silverlight application where I want to take advantage of the Microsoft ASP.NET AJAX Client library. I'm calling the library using the HTML Bridge that is part of Silverlight 2. Silverlight got great support for passing types between JavaScript and Managed Code, but now I've bumped against a problem. Microsoft ASP.NET AJAX Client Libraries includes a "type system", and one of the things the framework does is validating that the parameters are of the correct type. The specific function I'm calling is the Sys.Application.addHistoryPoint, and the validation code looks like this: var e = Function.validateParams(arguments, [ {name: "state", type: Object}, {name: "title", type: String, mayBeNull: true, optional: true} ]); I've tried passing all kinds of CLR types as the state parameter (C# structs, [ScriptableTypes], Dictionary types, etc.). And every time I get the error: "Sys.ArgumentTypeException: Object of type 'Function' cannot be converted to type 'Object'". This error is obviously coming from the parameter validation... But WHY does ASP.NET AJAX think my types are Functions? Does anyone understand the type validation in MS AJAX? I know I can do workarounds like calling HtmlPage.Window.Eval("...") and pass my JS integration as strings, but I don't want to do that. I want to pass a real .NET type as the state parameter.
ANSWER:
I found a pretty good overview of this here, but even that overview seemed to cover every scenario except the one you mention. I'm wondering if this can't be done because javascript objects really are functions (more or less). What if you wrote a wrapper function that could create the state object using a string? | [
"javascript",
"ajax",
"silverlight",
"asp.net-ajax",
"htmlbridge"
] | 1 | 1 | 875 | 1 | 0 | 2008-09-09T06:20:44.610000 | 2008-09-16T03:27:09.337000 |
51,296 | 160,334 | Checking network status and controlling PPP in a program | So I'm running PPP under linux with a cellular modem. The program I'm writing needs to know if the link is active before sending any data. What are my options to check if the link is available if it routes to a server I control (it doesn't go to the internet as I said earlier) Also, what is the best way to restart it - I'd like to have program control over when it starts and stops, so I assume an init.d isn't appropriate. Using system() doesn't seem to give a PID, are there other options besides fork() and the gaggle of exec??() calls? C on Linux on ARM (custom distribution using buildroot). | You can use the ip-up script functionality of pppd to have it execute a program when the IP interface is up and ready. Details are in the pppd(8) man page - search for "ip-up". To restart pppd, use the linkname parameter to set a known pidfile name ( /var/run/ppp-name.pid ). Also use the persist option. When you want to restart pppd, send the SIGHUP signal. Again, this is described in the man page. | Checking network status and controlling PPP in a program So I'm running PPP under linux with a cellular modem. The program I'm writing needs to know if the link is active before sending any data. What are my options to check if the link is available if it routes to a server I control (it doesn't go to the internet as I said earlier) Also, what is the best way to restart it - I'd like to have program control over when it starts and stops, so I assume an init.d isn't appropriate. Using system() doesn't seem to give a PID, are there other options besides fork() and the gaggle of exec??() calls? C on Linux on ARM (custom distribution using buildroot). | TITLE:
Checking network status and controlling PPP in a program
QUESTION:
So I'm running PPP under linux with a cellular modem. The program I'm writing needs to know if the link is active before sending any data. What are my options to check if the link is available if it routes to a server I control (it doesn't go to the internet as I said earlier) Also, what is the best way to restart it - I'd like to have program control over when it starts and stops, so I assume an init.d isn't appropriate. Using system() doesn't seem to give a PID, are there other options besides fork() and the gaggle of exec??() calls? C on Linux on ARM (custom distribution using buildroot).
ANSWER:
You can use the ip-up script functionality of pppd to have it execute a program when the IP interface is up and ready. Details are in the pppd(8) man page - search for "ip-up". To restart pppd, use the linkname parameter to set a known pidfile name ( /var/run/ppp-name.pid ). Also use the persist option. When you want to restart pppd, send the SIGHUP signal. Again, this is described in the man page. | [
"c",
"linux",
"networking",
"embedded",
"ppp"
] | 3 | 6 | 8,192 | 2 | 0 | 2008-09-09T06:32:42.430000 | 2008-10-02T00:11:32.820000 |
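The restart half of this answer is only two steps: read the pidfile that the `linkname` option produces, then send SIGHUP to that PID (with `persist` set, pppd re-establishes the link instead of exiting). The question is about C on Linux, but the steps are language-neutral; this sketch assumes the conventional `/var/run/ppp-<name>.pid` layout described in the man page, and the helper names are my own.

```python
import os
import signal

def read_pppd_pid(pidfile):
    """pppd's pidfile carries the PID on its first line
    (a later line may name the ppp interface)."""
    with open(pidfile, "r", encoding="utf-8") as f:
        return int(f.readline().split()[0])

def restart_pppd(pidfile="/var/run/ppp-cell.pid"):
    """With the 'persist' option, SIGHUP makes pppd tear down
    and redial the link rather than terminate."""
    os.kill(read_pppd_pid(pidfile), signal.SIGHUP)
```

The same pair of calls translates directly to C as `fopen`/`fscanf` on the pidfile followed by `kill(pid, SIGHUP)`, which avoids the `system()`/`fork()` question entirely once pppd is already running under `persist`.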
51,311 | 51,504 | What is the quickest way to a very simple blog? | I am about to start a new project and would like to document its development in a very simple blog. My requirements are: self-hosted on my Gentoo-based LAMP stack (that seems to rule out blogger) Integration in a django based website (as in www.myproject.com/about, www.myproject.com/blog etc rather than www.myproject.com and a totally different site at blog.myproject.com) very little or no learning curve that's specific to the blog engine (don't want to learn an API just to blog, but having to get deeper into Django to be able to roll my own would be OK) According to the answers so far, there is a chance that this excludes Wordpress Should I a) install blog engine X (please specify X) b) use django to hand-roll a way to post new entries and a page on my website to display the posts in descending chronological order | If you're the perfectionist kind, roll your own. It isn't that hard You learn something useful You'll get exactly what you want and need Be warned that you may run into a quagmire fighting comment spam, fixing security holes, etc. But it'll probably be a fun project. If you are the practical type and ready to face some integration pain, use an existing engine like WadcomBlog (Python) or PyBlosxom, or something completely different like MovableType or WordPress. Here's a simple Django blog example to get you started. Some pros and cons of rolling your blog engine this article by Phil Haack. Jeff Croft apparently rolled his own as well. | What is the quickest way to a very simple blog? I am about to start a new project and would like to document its development in a very simple blog. 
My requirements are: self-hosted on my Gentoo-based LAMP stack (that seems to rule out blogger) Integration in a django based website (as in www.myproject.com/about, www.myproject.com/blog etc rather than www.myproject.com and a totally different site at blog.myproject.com) very little or no learning curve that's specific to the blog engine (don't want to learn an API just to blog, but having to get deeper into Django to be able to roll my own would be OK) According to the answers so far, there is a chance that this excludes Wordpress Should I a) install blog engine X (please specify X) b) use django to hand-roll a way to post new entries and a page on my website to display the posts in descending chronological order | TITLE:
What is the quickest way to a very simple blog?
QUESTION:
I am about to start a new project and would like to document its development in a very simple blog. My requirements are: self-hosted on my Gentoo-based LAMP stack (that seems to rule out blogger) Integration in a django based website (as in www.myproject.com/about, www.myproject.com/blog etc rather than www.myproject.com and a totally different site at blog.myproject.com) very little or no learning curve that's specific to the blog engine (don't want to learn an API just to blog, but having to get deeper into Django to be able to roll my own would be OK) According to the answers so far, there is a chance that this excludes Wordpress. Should I a) install blog engine X (please specify X) b) use django to hand-roll a way to post new entries and a page on my website to display the posts in descending chronological order
ANSWER:
If you're the perfectionist kind, roll your own. It isn't that hard; you learn something useful, and you'll get exactly what you want and need. Be warned that you may run into a quagmire fighting comment spam, fixing security holes, etc. But it'll probably be a fun project. If you are the practical type and ready to face some integration pain, use an existing engine like WadcomBlog (Python) or PyBlosxom, or something completely different like MovableType or WordPress. Here's a simple Django blog example to get you started. Some pros and cons of rolling your own blog engine are covered in this article by Phil Haack. Jeff Croft apparently rolled his own as well.
"django",
"blogs"
] | 7 | 13 | 1,546 | 9 | 0 | 2008-09-09T06:54:32.937000 | 2008-09-09T10:07:18.387000 |
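Option (b) really is very little code: a post record and a reverse-chronological listing. A framework-free sketch of that core follows — in Django this would be a model plus an ordered queryset, but the shape is the same; the `Post`/`recent_posts` names are illustrative, not from any blog engine.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Post:
    title: str
    body: str
    posted: datetime

def recent_posts(posts, limit=10):
    """Newest first, as the blog index page would display them."""
    return sorted(posts, key=lambda p: p.posted, reverse=True)[:limit]
```

Everything else (a form to add entries, a template to render the list) hangs off these two pieces.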
51,320 | 51,331 | Find all drive letters in Java | For a project I'm working on. I need to look for an executable on the filesystem. For UNIX derivatives, I assume the user has the file in the mighty $PATH variable, but there is no such thing on Windows. I can safely assume the file is at most 2 levels deep into the filesystem, but I don't know on what drive it will be. I have to try all drives, but I can't figure out how to list all available drives (which have a letter assigned to it). Any help? EDIT: I know there is a %PATH% variable, but it is not as integrated as in UNIX systems. For instance, the application I'm looking for is OpenOffice. Such software would not be in %PATH%, typically. | http://docs.oracle.com/javase/7/docs/api/java/io/File.html#listRoots() File[] roots = File.listRoots(); for(int i = 0; i < roots.length; i++) System.out.println("Root["+i+"]:" + roots[i]); google: list drives java, first hit:-) | Find all drive letters in Java For a project I'm working on. I need to look for an executable on the filesystem. For UNIX derivatives, I assume the user has the file in the mighty $PATH variable, but there is no such thing on Windows. I can safely assume the file is at most 2 levels deep into the filesystem, but I don't know on what drive it will be. I have to try all drives, but I can't figure out how to list all available drives (which have a letter assigned to it). Any help? EDIT: I know there is a %PATH% variable, but it is not as integrated as in UNIX systems. For instance, the application I'm looking for is OpenOffice. Such software would not be in %PATH%, typically. | TITLE:
Find all drive letters in Java
QUESTION:
For a project I'm working on, I need to look for an executable on the filesystem. For UNIX derivatives, I assume the user has the file in the mighty $PATH variable, but there is no such thing on Windows. I can safely assume the file is at most 2 levels deep into the filesystem, but I don't know on what drive it will be. I have to try all drives, but I can't figure out how to list all available drives (which have a letter assigned to them). Any help? EDIT: I know there is a %PATH% variable, but it is not as integrated as in UNIX systems. For instance, the application I'm looking for is OpenOffice. Such software would not be in %PATH%, typically.
ANSWER:
http://docs.oracle.com/javase/7/docs/api/java/io/File.html#listRoots() File[] roots = File.listRoots(); for(int i = 0; i < roots.length; i++) System.out.println("Root["+i+"]:" + roots[i]); google: list drives java, first hit:-) | [
"java",
"windows"
] | 20 | 39 | 24,004 | 5 | 0 | 2008-09-09T07:01:14.577000 | 2008-09-09T07:08:30.783000 |
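`File.listRoots()` is the right Java answer; what it hides is just a probe of `A:\` through `Z:\`. For comparison, here is that brute-force probe sketched in Python, with the existence check injectable so it can be exercised off-Windows — the function name is mine. From each returned root, the 2-levels-deep executable search is then an ordinary directory walk.

```python
import os
import string

def drive_letters(exists=os.path.exists):
    """Probe A:\\ .. Z:\\ and keep the roots that actually exist."""
    return [letter + ":\\" for letter in string.ascii_uppercase
            if exists(letter + ":\\")]
```

On a real Windows box you would call it with the default predicate; in a test you pass a fake.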
51,338 | 237,100 | How do I implement license management for on-site installation of webapps (preferably cross-platform)? | I have a web application running on a Gentoo-based LAMP stack. My customers buy the software as a service and I host everything. However, there is some demand for on-site deployment inside the clients' own networks. Currently, because I host the system, there is no built-in license management in the app. I bill based on user accounts and data capacity (it's a processing and analysis app for metering data) and I just set up whatever the client pays for and the client can't setup those things himself. Even without on-site installation, that should be changed for better scalability anyway. I am looking for a license managment framework and/or typical approaches that you have implemented yourselves or have seen to work well elsewhere. My requirements are: "safe enough" rather than "military grade" very much non-obtrusive prevent the owner of a license from running the system in multiple plants when he has only licensed one make the number of user accounts and the data capacity both reasonably tamper-proof and easy to up- / downgrade work without an Internet connection (having a completely self-contained system would be the main point of opting for the on-site solution), though it might be acceptable if there has to be a temporary connection during installation For some of the on-site scenarios, there would be a requirement for some particular OS, typically some version of Windows Server, but various Linux distros are getting more popular as well, especially in the public sector. From a user's point of view, I am quite satisfied with the license management in FogBugz, it seems Joel Spolsky is satisfied with it from a vendor's point of view, and it is cross-platform, so it would make a great reference of what I'm aiming at. | Don't. 
Every hour that you spend writing a license key system is an hour that you are not spending fixing bugs or adding features. By writing a license management system, you are spending resources in order to reduce the value of your product! Copyright your code, have a lawyer and be ready to prosecute anyone who violates your copyright, and call it a day. | How do I implement license management for on-site installation of webapps (preferably cross-platform)? I have a web application running on a Gentoo-based LAMP stack. My customers buy the software as a service and I host everything. However, there is some demand for on-site deployment inside the clients' own networks. Currently, because I host the system, there is no built-in license management in the app. I bill based on user accounts and data capacity (it's a processing and analysis app for metering data) and I just set up whatever the client pays for and the client can't setup those things himself. Even without on-site installation, that should be changed for better scalability anyway. I am looking for a license managment framework and/or typical approaches that you have implemented yourselves or have seen to work well elsewhere. My requirements are: "safe enough" rather than "military grade" very much non-obtrusive prevent the owner of a license from running the system in multiple plants when he has only licensed one make the number of user accounts and the data capacity both reasonably tamper-proof and easy to up- / downgrade work without an Internet connection (having a completely self-contained system would be the main point of opting for the on-site solution), though it might be acceptable if there has to be a temporary connection during installation For some of the on-site scenarios, there would be a requirement for some particular OS, typically some version of Windows Server, but various Linux distros are getting more popular as well, especially in the public sector. 
From a user's point of view, I am quite satisfied with the license management in FogBugz, it seems Joel Spolsky is satisfied with it from a vendor's point of view, and it is cross-platform, so it would make a great reference of what I'm aiming at. | TITLE:
How do I implement license management for on-site installation of webapps (preferably cross-platform)?
QUESTION:
I have a web application running on a Gentoo-based LAMP stack. My customers buy the software as a service and I host everything. However, there is some demand for on-site deployment inside the clients' own networks. Currently, because I host the system, there is no built-in license management in the app. I bill based on user accounts and data capacity (it's a processing and analysis app for metering data) and I just set up whatever the client pays for and the client can't setup those things himself. Even without on-site installation, that should be changed for better scalability anyway. I am looking for a license management framework and/or typical approaches that you have implemented yourselves or have seen to work well elsewhere. My requirements are: "safe enough" rather than "military grade" very much non-obtrusive prevent the owner of a license from running the system in multiple plants when he has only licensed one make the number of user accounts and the data capacity both reasonably tamper-proof and easy to up- / downgrade work without an Internet connection (having a completely self-contained system would be the main point of opting for the on-site solution), though it might be acceptable if there has to be a temporary connection during installation For some of the on-site scenarios, there would be a requirement for some particular OS, typically some version of Windows Server, but various Linux distros are getting more popular as well, especially in the public sector. From a user's point of view, I am quite satisfied with the license management in FogBugz, it seems Joel Spolsky is satisfied with it from a vendor's point of view, and it is cross-platform, so it would make a great reference of what I'm aiming at.
ANSWER:
Don't. Every hour that you spend writing a license key system is an hour that you are not spending fixing bugs or adding features. By writing a license management system, you are spending resources in order to reduce the value of your product! Copyright your code, have a lawyer and be ready to prosecute anyone who violates your copyright, and call it a day. | [
"web-applications",
"licensing"
] | 6 | 3 | 4,082 | 3 | 0 | 2008-09-09T07:20:11.363000 | 2008-10-25T22:11:13.993000 |
51,339 | 51,400 | How can you handle an IN sub-query with LINQ to SQL? | I'm a bit stuck on this. Basically I want to do something like the following SQL query in LINQ to SQL: SELECT f.* FROM Foo f WHERE f.FooId IN ( SELECT fb.FooId FROM FooBar fb WHERE fb.BarId = 1000 ) Any help would be gratefully received. | Have a look at this article. Basically, if you want to get the equivalent of IN, you need to construct an inner query first, and then use the Contains() method. Here's my attempt at translating: var innerQuery = from fb in FooBar where fb.BarId == 1000 select fb.FooId; var result = from f in Foo where innerQuery.Contains(f.FooId) select f; | How can you handle an IN sub-query with LINQ to SQL? I'm a bit stuck on this. Basically I want to do something like the following SQL query in LINQ to SQL: SELECT f.* FROM Foo f WHERE f.FooId IN ( SELECT fb.FooId FROM FooBar fb WHERE fb.BarId = 1000 ) Any help would be gratefully received. | TITLE:
How can you handle an IN sub-query with LINQ to SQL?
QUESTION:
I'm a bit stuck on this. Basically I want to do something like the following SQL query in LINQ to SQL: SELECT f.* FROM Foo f WHERE f.FooId IN ( SELECT fb.FooId FROM FooBar fb WHERE fb.BarId = 1000 ) Any help would be gratefully received.
ANSWER:
Have a look at this article. Basically, if you want to get the equivalent of IN, you need to construct an inner query first, and then use the Contains() method. Here's my attempt at translating: var innerQuery = from fb in FooBar where fb.BarId == 1000 select fb.FooId; var result = from f in Foo where innerQuery.Contains(f.FooId) select f;
"sql",
"linq",
"linq-to-sql"
] | 56 | 65 | 78,147 | 9 | 0 | 2008-09-09T07:20:51.330000 | 2008-09-09T08:39:47.273000 |
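The shape of the translation — build the set of inner ids first, then filter the outer sequence with a membership test — is easy to see outside LINQ too. The same query over plain in-memory records looks like this (LINQ to SQL additionally folds the Contains() into a single SQL IN; this sketch only mirrors the logic, and the function name is mine):

```python
def foos_with_bar(foos, foobars, bar_id=1000):
    """SELECT f.* FROM Foo f WHERE f.FooId IN
       (SELECT fb.FooId FROM FooBar fb WHERE fb.BarId = <bar_id>)"""
    inner = {fb["FooId"] for fb in foobars if fb["BarId"] == bar_id}  # inner query
    return [f for f in foos if f["FooId"] in inner]                   # Contains()
```

Using a set for the inner ids keeps the membership test O(1) per outer row.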
51,342 | 51,358 | Implementing user defined display order UI | i have a list of products that are being displayed in particular order. store admin can reassign the display order, by moving the "hot" items to the top of the list. what's the best way of implementing the admin functionality UI [asp.net C#]? Products table has a [displayOrder(int)] field which determines the display order. i'm looking for something intuitive and simple. thank you. p.s. i guess i didn't make myself clear, i'm looking for UI advice more than anything. SOLUTION: ReorderList worked out great, this article helped too. Also, make sure OldValuesParameterFormatString="{0}" in your DataSource. | using AJAX you could implement a Reorder list control you can find more information here http://www.asp.net/AJAX/AjaxControlToolkit/Samples/ReorderList/ReorderList.aspx Mauro http://www.brantas.co.uk | Implementing user defined display order UI i have a list of products that are being displayed in particular order. store admin can reassign the display order, by moving the "hot" items to the top of the list. what's the best way of implementing the admin functionality UI [asp.net C#]? Products table has a [displayOrder(int)] field which determines the display order. i'm looking for something intuitive and simple. thank you. p.s. i guess i didn't make myself clear, i'm looking for UI advice more than anything. SOLUTION: ReorderList worked out great, this article helped too. Also, make sure OldValuesParameterFormatString="{0}" in your DataSource. | TITLE:
Implementing user defined display order UI
QUESTION:
i have a list of products that are being displayed in particular order. store admin can reassign the display order, by moving the "hot" items to the top of the list. what's the best way of implementing the admin functionality UI [asp.net C#]? Products table has a [displayOrder(int)] field which determines the display order. i'm looking for something intuitive and simple. thank you. p.s. i guess i didn't make myself clear, i'm looking for UI advice more than anything. SOLUTION: ReorderList worked out great, this article helped too. Also, make sure OldValuesParameterFormatString="{0}" in your DataSource.
ANSWER:
using AJAX you could implement a Reorder list control. You can find more information here: http://www.asp.net/AJAX/AjaxControlToolkit/Samples/ReorderList/ReorderList.aspx Mauro http://www.brantas.co.uk
"c#",
"asp.net",
"visual-studio-2005"
] | 2 | 2 | 709 | 4 | 0 | 2008-09-09T07:24:12.167000 | 2008-09-09T07:40:54.793000 |
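Whatever UI is chosen — the accepted ReorderList or hand-rolled up/down arrows — the server side ends up doing the same thing: take the new visual order and rewrite the displayOrder column as a dense 1..n sequence. A storage-agnostic sketch of the "move hot item to top" case (names are mine; persisting the returned mapping is an UPDATE per product):

```python
def move_to_top(product_ids, hot_id):
    """Given the current display order of product ids, move hot_id first
    and return the renumbered {product_id: displayOrder} mapping."""
    reordered = [hot_id] + [p for p in product_ids if p != hot_id]
    return {pid: pos for pos, pid in enumerate(reordered, start=1)}
```

Renumbering the whole sequence on every move keeps displayOrder gap-free, which makes the ORDER BY displayOrder listing trivial.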
51,349 | 51,386 | How do you start Knowledge Transfer? | Do you use a formal event to get people talking in your IT department? Like a monthly meetup in a social place, a internal wiki/chat space or just a regular "information market" with some presentations about technology or projects made by your staff for your staff? Do you invite Sales people to participate or is it a closed event for programmers only? How do you get people to participate in these events? Do you allow them to spent work time on knowledge transfer? Or do you understand it as an integral part of the work time? I wonder how to monitor the progress of knowledge transfer itself. How do you spot critical one-person spots of failure in your projects? There are several methods to avoid it, like staff swapping or the "fifo" attempt on bug fixing. Note: Ok, this is a very very noisy question and I hope to fix it after a few comments. Sorry for the mixup. edit: My personal experience is that there is a very high barrier for people to start contributing. It looks like they won't put in the (minimal) extra time to edit our wiki, or spend the hour in the afternoon to talk about technology topics with the developing staff. It's like people don't like our wiki, our document management system or the meeting. Maybe it's because it's all free-to-use and not forced by the management. But I don't like to force people into it - but is it the right way? One example: Our wiki holds pages about projects, telling who worked on it to get a first contact in case of questions. But nobody besides a colleague and me is creating this pages... | Knowledge Transfer and Knowledge Management have one drawback. They seem to cost an aweful lot: if everybody knows what I know, am I still needed? All the time I use to bring others up to speed, what do I gain from it? The best way to go about this is to be an example. 
Share your knowledge; in a wiki, blog about it, talk about it, make it easily accessible, and talk about the benefits you have from that: less people come to interrupt and ask you stuff, as they can get an answer easily without even getting up. And show them that you are still there. This with all the other things mentioned will actually win out. One more thing: one of my employers kept on paying me 1/3 of my salary for another year after I left (on my own initiative), just to keep my knowledge-base up and running. Did he have to? No, it was his property anyway. But it motivated people still working for him to share their knowledge. | How do you start Knowledge Transfer? Do you use a formal event to get people talking in your IT department? Like a monthly meetup in a social place, an internal wiki/chat space or just a regular "information market" with some presentations about technology or projects made by your staff for your staff? Do you invite Sales people to participate or is it a closed event for programmers only? How do you get people to participate in these events? Do you allow them to spend work time on knowledge transfer? Or do you understand it as an integral part of the work time? I wonder how to monitor the progress of knowledge transfer itself. How do you spot critical one-person spots of failure in your projects? There are several methods to avoid it, like staff swapping or the "fifo" attempt on bug fixing. Note: Ok, this is a very very noisy question and I hope to fix it after a few comments. Sorry for the mixup. edit: My personal experience is that there is a very high barrier for people to start contributing. It looks like they won't put in the (minimal) extra time to edit our wiki, or spend the hour in the afternoon to talk about technology topics with the developing staff. It's like people don't like our wiki, our document management system or the meeting. Maybe it's because it's all free-to-use and not forced by the management.
But I don't like to force people into it - but is it the right way? One example: Our wiki holds pages about projects, telling who worked on it to get a first contact in case of questions. But nobody besides a colleague and me is creating these pages... | TITLE:
How do you start Knowledge Transfer?
QUESTION:
Do you use a formal event to get people talking in your IT department? Like a monthly meetup in a social place, an internal wiki/chat space or just a regular "information market" with some presentations about technology or projects made by your staff for your staff? Do you invite Sales people to participate or is it a closed event for programmers only? How do you get people to participate in these events? Do you allow them to spend work time on knowledge transfer? Or do you understand it as an integral part of the work time? I wonder how to monitor the progress of knowledge transfer itself. How do you spot critical one-person spots of failure in your projects? There are several methods to avoid it, like staff swapping or the "fifo" attempt on bug fixing. Note: Ok, this is a very very noisy question and I hope to fix it after a few comments. Sorry for the mixup. edit: My personal experience is that there is a very high barrier for people to start contributing. It looks like they won't put in the (minimal) extra time to edit our wiki, or spend the hour in the afternoon to talk about technology topics with the developing staff. It's like people don't like our wiki, our document management system or the meeting. Maybe it's because it's all free-to-use and not forced by the management. But I don't like to force people into it - but is it the right way? One example: Our wiki holds pages about projects, telling who worked on it to get a first contact in case of questions. But nobody besides a colleague and me is creating these pages...
ANSWER:
Knowledge Transfer and Knowledge Management have one drawback. They seem to cost an awful lot: if everybody knows what I know, am I still needed? All the time I use to bring others up to speed, what do I gain from it? The best way to go about this is to be an example. Share your knowledge; in a wiki, blog about it, talk about it, make it easily accessible, and talk about the benefits you have from that: fewer people come to interrupt and ask you stuff, as they can get an answer easily without even getting up. And show them that you are still there. This, with all the other things mentioned, will actually win out. One more thing: one of my employers kept on paying me 1/3 of my salary for another year after I left (on my own initiative), just to keep my knowledge-base up and running. Did he have to? No, it was his property anyway. But it motivated people still working for him to share their knowledge. | [
"knowledge-management"
] | 14 | 4 | 5,015 | 8 | 0 | 2008-09-09T07:27:39.957000 | 2008-09-09T08:27:00.580000 |
51,352 | 52,597 | How to show a spinner while loading an image via JavaScript | I'm currently working on a web application which has a page which displays a single chart (a.png image). On another part of this page there are a set of links which, when clicked, the entire page reloads and looks exactly the same as before except for the chart in the middle of the page. What I want to do is when a link is clicked on a page just the chart on the page is changed. This will speed things up tremendously as the page is roughly 100kb large, and don't really want to reload the entire page just to display this. I've been doing this via JavaScript, which works so far, using the following code document.getElementById('chart').src = '/charts/10.png'; The problem is that when the user clicks on the link, it may take a couple of seconds before the chart changes. This makes the user think that their click hasn't done anything, or that the system is slow to respond. What I want to happen is display a spinner / throbber / status indicator, in place of where the image is while it is loading, so when the user clicks the link they know at least the system has taken their input and is doing something about it. I've tried a few suggestions, even using a psudo time out to show a spinner, and then flick back to the image. A good suggestion I've had is to use the following Which would be ideal, except the spinner is significantly smaller than the chart which is being displayed. Any other ideas? | I've used something like this to preload an image and then automatically call back to my javascript when the image is finished loading. You want to check complete before you setup the callback because the image may already be cached and it may not call your callback. function PreloadImage(imgSrc, callback){ var objImagePreloader = new Image();
objImagePreloader.src = imgSrc; if(objImagePreloader.complete){ callback(); objImagePreloader.onload=function(){}; } else{ objImagePreloader.onload = function() { callback(); // clear onLoad, IE behaves irratically with animated gifs otherwise objImagePreloader.onload=function(){}; } } } | How to show a spinner while loading an image via JavaScript I'm currently working on a web application which has a page which displays a single chart (a.png image). On another part of this page there are a set of links which, when clicked, the entire page reloads and looks exactly the same as before except for the chart in the middle of the page. What I want to do is when a link is clicked on a page just the chart on the page is changed. This will speed things up tremendously as the page is roughly 100kb large, and don't really want to reload the entire page just to display this. I've been doing this via JavaScript, which works so far, using the following code document.getElementById('chart').src = '/charts/10.png'; The problem is that when the user clicks on the link, it may take a couple of seconds before the chart changes. This makes the user think that their click hasn't done anything, or that the system is slow to respond. What I want to happen is display a spinner / throbber / status indicator, in place of where the image is while it is loading, so when the user clicks the link they know at least the system has taken their input and is doing something about it. I've tried a few suggestions, even using a psudo time out to show a spinner, and then flick back to the image. A good suggestion I've had is to use the following Which would be ideal, except the spinner is significantly smaller than the chart which is being displayed. Any other ideas? | TITLE:
How to show a spinner while loading an image via JavaScript
QUESTION:
I'm currently working on a web application which has a page which displays a single chart (a .png image). On another part of this page there are a set of links which, when clicked, the entire page reloads and looks exactly the same as before except for the chart in the middle of the page. What I want to do is when a link is clicked on a page just the chart on the page is changed. This will speed things up tremendously as the page is roughly 100kb large, and I don't really want to reload the entire page just to display this. I've been doing this via JavaScript, which works so far, using the following code document.getElementById('chart').src = '/charts/10.png'; The problem is that when the user clicks on the link, it may take a couple of seconds before the chart changes. This makes the user think that their click hasn't done anything, or that the system is slow to respond. What I want to happen is display a spinner / throbber / status indicator, in place of where the image is while it is loading, so when the user clicks the link they know at least the system has taken their input and is doing something about it. I've tried a few suggestions, even using a pseudo time out to show a spinner, and then flick back to the image. A good suggestion I've had is to use the following Which would be ideal, except the spinner is significantly smaller than the chart which is being displayed. Any other ideas?
ANSWER:
I've used something like this to preload an image and then automatically call back to my javascript when the image is finished loading. You want to check complete before you set up the callback because the image may already be cached and it may not call your callback.

function PreloadImage(imgSrc, callback){
    var objImagePreloader = new Image();
    objImagePreloader.src = imgSrc;
    if(objImagePreloader.complete){
        callback();
        objImagePreloader.onload=function(){};
    }
    else{
        objImagePreloader.onload = function() {
            callback();
            // clear onLoad, IE behaves erratically with animated gifs otherwise
            objImagePreloader.onload=function(){};
        }
    }
} | [
"javascript",
"jquery",
"dom"
] | 57 | 41 | 105,175 | 13 | 0 | 2008-09-09T07:33:36.790000 | 2008-09-09T18:51:31.887000 |
51,363 | 51,391 | How does the Licenses.licx based .Net component licensing model work? | I've encountered multiple third party.Net component-vendors that use a licensing scheme. On an evaluation copy, the components show up with a nag-screen or watermark or some such indicator. On a licensed machine, a Licenses.licx is created - with what appears to be just the assembly full name/identifiers. This file has to be included when the client assembly is built. How does this model work? Both from component-vendors' and users' perspective. What is the.licx file used for? Should it be checked in? We've had a number of issues with the wrong/right.licx file being checked in and what not | Almost everything about.Net licensing is explained here. No need to rewrite, I think. It is better to exclude license files from project in source control, if you can. Otherwise, editing visual components may be pain in the ass. Also, storing license files in source control repository is not a need. Hope this helps. | How does the Licenses.licx based .Net component licensing model work? I've encountered multiple third party.Net component-vendors that use a licensing scheme. On an evaluation copy, the components show up with a nag-screen or watermark or some such indicator. On a licensed machine, a Licenses.licx is created - with what appears to be just the assembly full name/identifiers. This file has to be included when the client assembly is built. How does this model work? Both from component-vendors' and users' perspective. What is the.licx file used for? Should it be checked in? We've had a number of issues with the wrong/right.licx file being checked in and what not | TITLE:
How does the Licenses.licx based .Net component licensing model work?
QUESTION:
I've encountered multiple third-party .Net component vendors that use a licensing scheme. On an evaluation copy, the components show up with a nag-screen or watermark or some such indicator. On a licensed machine, a Licenses.licx is created - with what appears to be just the assembly full name/identifiers. This file has to be included when the client assembly is built. How does this model work? Both from the component vendors' and the users' perspective. What is the .licx file used for? Should it be checked in? We've had a number of issues with the wrong/right .licx file being checked in and whatnot
ANSWER:
Almost everything about .Net licensing is explained here. No need to rewrite, I think. It is better to exclude license files from the project in source control, if you can. Otherwise, editing visual components may be a pain in the ass. Also, storing license files in the source control repository is not necessary. Hope this helps. | [
".net"
] | 53 | 14 | 33,326 | 3 | 0 | 2008-09-09T07:49:56.720000 | 2008-09-09T08:32:22.063000 |
51,380 | 51,393 | When can DataInputStream.skipBytes(n) not skip n bytes? | The Sun Documentation for DataInput.skipBytes states that it "makes an attempt to skip over n bytes of data from the input stream, discarding the skipped bytes. However, it may skip over some smaller number of bytes, possibly zero. This may result from any of a number of conditions; reaching end of file before n bytes have been skipped is only one possibility." Other than reaching end of file, why might skipBytes() not skip the right number of bytes? (The DataInputStream I am using will either be wrapping a FileInputStream or a PipedInputStream.) If I definitely want to skip n bytes and throw an EOFException if this causes me to go to the end of the file, should I use readFully() and ignore the resulting byte array? Or is there a better way? | 1) There might not be that much data available to read (the other end of the pipe might not have sent that much data yet), and the implementing class might be non-blocking (i.e. it will just return what it can, rather than waiting for enough data to fulfil the request). I don't know if any implementations actually behave in this way, however, but the interface is designed to permit it. Another option is simply that the file gets closed part-way through the read. 2) Either readFully() (which will always wait for enough input or else fail) or call skipBytes() in a loop. I think the former is probably better, unless the array is truly vast. | When can DataInputStream.skipBytes(n) not skip n bytes? The Sun Documentation for DataInput.skipBytes states that it "makes an attempt to skip over n bytes of data from the input stream, discarding the skipped bytes. However, it may skip over some smaller number of bytes, possibly zero. This may result from any of a number of conditions; reaching end of file before n bytes have been skipped is only one possibility." Other than reaching end of file, why might skipBytes() not skip the right number of bytes? 
(The DataInputStream I am using will either be wrapping a FileInputStream or a PipedInputStream.) If I definitely want to skip n bytes and throw an EOFException if this causes me to go to the end of the file, should I use readFully() and ignore the resulting byte array? Or is there a better way? | TITLE:
When can DataInputStream.skipBytes(n) not skip n bytes?
QUESTION:
The Sun Documentation for DataInput.skipBytes states that it "makes an attempt to skip over n bytes of data from the input stream, discarding the skipped bytes. However, it may skip over some smaller number of bytes, possibly zero. This may result from any of a number of conditions; reaching end of file before n bytes have been skipped is only one possibility." Other than reaching end of file, why might skipBytes() not skip the right number of bytes? (The DataInputStream I am using will either be wrapping a FileInputStream or a PipedInputStream.) If I definitely want to skip n bytes and throw an EOFException if this causes me to go to the end of the file, should I use readFully() and ignore the resulting byte array? Or is there a better way?
ANSWER:
1) There might not be that much data available to read (the other end of the pipe might not have sent that much data yet), and the implementing class might be non-blocking (i.e. it will just return what it can, rather than waiting for enough data to fulfil the request). I don't know whether any implementations actually behave in this way, but the interface is designed to permit it. Another option is simply that the file gets closed part-way through the read. 2) Either readFully() (which will always wait for enough input or else fail) or call skipBytes() in a loop. I think the former is probably better, unless the array is truly vast. | [
"io",
"java"
] | 11 | 5 | 3,220 | 5 | 0 | 2008-09-09T08:19:01.533000 | 2008-09-09T08:33:02.903000 |
51,390 | 51,399 | Where did all the java applets go? | When java was young, people were excited about writing applets. They were cool and popular, for a little while. Now, I never see them anymore. Instead we have flash, javascript, and a plethora of other web app-building technologies. Why don't sites use java applets anymore? I'm also curious: historically, why do you think this occurred? What could have been done differently to keep Java applets alive? | I think Java applets were overshadowed by Flash and ActionScript (pun unintended), being much easier to use for what Java Applets were being used at the time (animations + stateful applications). Flash's success in this respect in turn owes to its much smaller file sizes, as well as benefiting from the Sun vs. Microsoft suit that resulted in Microsoft removing the MSJVM from Internet Explorer, at a time of Netscape's demise and IE's heavy dominance. | Where did all the java applets go? When java was young, people were excited about writing applets. They were cool and popular, for a little while. Now, I never see them anymore. Instead we have flash, javascript, and a plethora of other web app-building technologies. Why don't sites use java applets anymore? I'm also curious: historically, why do you think this occurred? What could have been done differently to keep Java applets alive? | TITLE:
Where did all the java applets go?
QUESTION:
When Java was young, people were excited about writing applets. They were cool and popular, for a little while. Now, I never see them anymore. Instead we have Flash, JavaScript, and a plethora of other web app-building technologies. Why don't sites use Java applets anymore? I'm also curious: historically, why do you think this occurred? What could have been done differently to keep Java applets alive?
ANSWER:
I think Java applets were overshadowed by Flash and ActionScript (pun unintended), being much easier to use for what Java Applets were being used at the time (animations + stateful applications). Flash's success in this respect in turn owes to its much smaller file sizes, as well as benefiting from the Sun vs. Microsoft suit that resulted in Microsoft removing the MSJVM from Internet Explorer, at a time of Netscape's demise and IE's heavy dominance. | [
"java",
"applet"
] | 56 | 36 | 13,567 | 15 | 0 | 2008-09-09T08:30:38.120000 | 2008-09-09T08:39:12.530000 |
51,407 | 51,433 | Can you load a .Net form as a control? | I want to load a desktop application, via reflection, as a Control inside another application. The application I'm reflecting is a legacy one - I can't make changes to it. I can dynamically access the Form, but can't load it as a Control. In.Net Form expands on Control, and I can assign the reflected Form as a Control, but it throws a run-time exception. Forms cannot be loaded as controls. Is there any way to convert the form to a control? | Yes, this works just fine. I'm working on a.NET app right now that loads forms into a panel on a host form. The relevant snippet: // setup the new form form.TopLevel = false; form.FormBorderStyle = FormBorderStyle.None; form.Dock = DockStyle.Fill; form.Show ( );
// add to the panel's list of child controls panelFormHost.Controls.Add ( form ); | Can you load a .Net form as a control? I want to load a desktop application, via reflection, as a Control inside another application. The application I'm reflecting is a legacy one - I can't make changes to it. I can dynamically access the Form, but can't load it as a Control. In.Net Form expands on Control, and I can assign the reflected Form as a Control, but it throws a run-time exception. Forms cannot be loaded as controls. Is there any way to convert the form to a control? | TITLE:
Can you load a .Net form as a control?
QUESTION:
I want to load a desktop application, via reflection, as a Control inside another application. The application I'm reflecting is a legacy one - I can't make changes to it. I can dynamically access the Form, but can't load it as a Control. In .Net, Form extends Control, and I can assign the reflected Form as a Control, but it throws a run-time exception. Forms cannot be loaded as controls. Is there any way to convert the form to a control?
ANSWER:
Yes, this works just fine. I'm working on a .NET app right now that loads forms into a panel on a host form. The relevant snippet:

// setup the new form
form.TopLevel = false;
form.FormBorderStyle = FormBorderStyle.None;
form.Dock = DockStyle.Fill;
form.Show();

// add to the panel's list of child controls
panelFormHost.Controls.Add(form); | [
".net",
"winforms"
] | 7 | 10 | 1,146 | 3 | 0 | 2008-09-09T08:45:43.607000 | 2008-09-09T09:06:08.193000 |
51,412 | 51,415 | Passing on named variable arguments in python | Say I have the following methods: def methodA(arg, **kwargs): pass
def methodB(arg, *args, **kwargs): pass In methodA I wish to call methodB, passing on the kwargs. However, it seems that if I define methodA as follows, the second argument will be passed on as positional rather than named variable arguments. def methodA(arg, **kwargs): methodB("argvalue", kwargs) How do I make sure that the **kwargs in methodA gets passed as **kwargs to methodB? | Put the asterisks before the kwargs variable. This makes Python pass the variable (which is assumed to be a dictionary) as keyword arguments. methodB("argvalue", **kwargs) | Passing on named variable arguments in python Say I have the following methods: def methodA(arg, **kwargs): pass
def methodB(arg, *args, **kwargs):
    pass

In methodA I wish to call methodB, passing on the kwargs. However, it seems that if I define methodA as follows, the second argument will be passed on as positional rather than named variable arguments.

def methodA(arg, **kwargs):
    methodB("argvalue", kwargs)

How do I make sure that the **kwargs in methodA gets passed as **kwargs to methodB? | TITLE:
Passing on named variable arguments in python
QUESTION:
Say I have the following methods:

def methodA(arg, **kwargs):
    pass

def methodB(arg, *args, **kwargs):
    pass

In methodA I wish to call methodB, passing on the kwargs. However, it seems that if I define methodA as follows, the second argument will be passed on as positional rather than named variable arguments.

def methodA(arg, **kwargs):
    methodB("argvalue", kwargs)

How do I make sure that the **kwargs in methodA gets passed as **kwargs to methodB?
ANSWER:
Put the asterisks before the kwargs variable. This makes Python pass the variable (which is assumed to be a dictionary) as keyword arguments.

methodB("argvalue", **kwargs) | [
"python",
"variadic-functions"
] | 13 | 34 | 10,663 | 3 | 0 | 2008-09-09T08:46:51.920000 | 2008-09-09T08:50:34.143000 |
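The accepted fix above is easy to verify end to end; a small runnable sketch reusing the question's own methodA/methodB names:

```python
def methodB(arg, *args, **kwargs):
    return (arg, args, kwargs)

def methodA(arg, **kwargs):
    # the double asterisks re-expand the dict into keyword arguments
    return methodB("argvalue", **kwargs)

# kwargs captured by methodA arrive in methodB as keyword arguments again
assert methodA("x", a=1, b=2) == ("argvalue", (), {"a": 1, "b": 2})
```

Without the asterisks, methodB would instead receive the whole dict as a single positional argument inside *args, which is exactly the behavior the question describes.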
51,420 | 51,426 | How to get the base 10 logarithm of a Fixnum in Ruby? | I want to get the base 10 logarithm of a Fixnum using Ruby, but found that n.log or n.log10 are not defined. Math::log is defined but uses a different base than 10. What is the easiest way to get the base 10 logarithm of a Fixnum? | There is Math::log10(n). And there is also a property of logarithms: log_x(y) = log(y)/log(x) | How to get the base 10 logarithm of a Fixnum in Ruby? I want to get the base 10 logarithm of a Fixnum using Ruby, but found that n.log or n.log10 are not defined. Math::log is defined but uses a different base than 10. What is the easiest way to get the base 10 logarithm of a Fixnum? | TITLE:
How to get the base 10 logarithm of a Fixnum in Ruby?
QUESTION:
I want to get the base 10 logarithm of a Fixnum using Ruby, but found that n.log or n.log10 are not defined. Math::log is defined but uses a different base than 10. What is the easiest way to get the base 10 logarithm of a Fixnum?
ANSWER:
There is Math::log10(n). And there is also a property of logarithms: log_x(y) = log(y)/log(x) | [
"ruby",
"math",
"logarithm"
] | 3 | 10 | 2,913 | 3 | 0 | 2008-09-09T08:56:23.287000 | 2008-09-09T09:00:35.553000 |
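The change-of-base identity in the answer is easy to sanity-check. A quick sketch — in Python rather than Ruby, but the math is identical, and Python's math.log also happens to accept an explicit base:

```python
import math

# direct base-10 logarithm
assert math.isclose(math.log10(1000), 3.0)

# change-of-base identity: log_x(y) = log(y) / log(x)
y, x = 1000, 10
assert math.isclose(math.log(y) / math.log(x), math.log10(y))

# many stdlibs also take the base as a second argument
assert math.isclose(math.log(1000, 10), 3.0)
```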
51,436 | 51,460 | How to host licensed .Net controls in unmanaged C++ app? | I need to host and run managed controls inside of a purely unmanaged C++ app. How to do this? To run unlicensed controls is typically simple: if (SUCCEEDED(ClrCreateManagedInstance(type, iid, &obj))) { // do something with obj } When using a licensed control however, we need to somehow embed a.licx file into the project (ref application licensing ). In an unmanaged C++ app, the requisite glue does not seem to work. The lc.exe tool is supposed to be able to embed the license as an assembly resource but either we were not waving the correct invocation, or it failed silently. Any help would be appreciated. | The answer depends on the particular component you're using. Contact your component help desk OR read up the documentation on what it takes to deploy their component. Basically component developers are free to implement licensing as they deem fit. With the.licx file the component needs to be able to do whatever the developer wished via GetKey and IsValidKey (explained in the link you posted). So if GetKey checks for a.licx file in the component directory - you just need to make sure its there. AFAIK the client assembly doesn't need to do anything except instantiate the control. Also if you post the name of the component and the lc.exe command you're using, people could take a look.. | How to host licensed .Net controls in unmanaged C++ app? I need to host and run managed controls inside of a purely unmanaged C++ app. How to do this? To run unlicensed controls is typically simple: if (SUCCEEDED(ClrCreateManagedInstance(type, iid, &obj))) { // do something with obj } When using a licensed control however, we need to somehow embed a.licx file into the project (ref application licensing ). In an unmanaged C++ app, the requisite glue does not seem to work. 
The lc.exe tool is supposed to be able to embed the license as an assembly resource, but either we were not using the correct invocation, or it failed silently. Any help would be appreciated. | TITLE:
How to host licensed .Net controls in unmanaged C++ app?
QUESTION:
I need to host and run managed controls inside of a purely unmanaged C++ app. How to do this? To run unlicensed controls is typically simple:

if (SUCCEEDED(ClrCreateManagedInstance(type, iid, &obj))) {
    // do something with obj
}

When using a licensed control however, we need to somehow embed a .licx file into the project (ref application licensing ). In an unmanaged C++ app, the requisite glue does not seem to work. The lc.exe tool is supposed to be able to embed the license as an assembly resource, but either we were not using the correct invocation, or it failed silently. Any help would be appreciated.
ANSWER:
The answer depends on the particular component you're using. Contact your component help desk OR read up on the documentation on what it takes to deploy their component. Basically component developers are free to implement licensing as they deem fit. With the .licx file the component needs to be able to do whatever the developer wished via GetKey and IsValidKey (explained in the link you posted). So if GetKey checks for a .licx file in the component directory - you just need to make sure it's there. AFAIK the client assembly doesn't need to do anything except instantiate the control. Also if you post the name of the component and the lc.exe command you're using, people could take a look. | [
"c++",
".net",
"unmanaged"
] | 3 | 1 | 542 | 1 | 0 | 2008-09-09T09:08:55.847000 | 2008-09-09T09:30:04.687000 |
51,452 | 51,543 | Best practice for Java IPC | What is the best method for inter-process communication in a multithreaded Java app? It should be performant (so no JMS, please), easy to implement, and reliable, so that objects & data can be bound to one thread only. Any ideas welcome! | Assuming the scenario is 1 JVM with multiple threads, then indeed java.util.concurrent is the place to look, specifically the various Queue implementations. However, an abstraction on top of that may be nice, and there Jetlang looks very interesting: lightweight Java message passing. | Best practice for Java IPC What is the best method for inter-process communication in a multithreaded Java app? It should be performant (so no JMS, please), easy to implement, and reliable, so that objects & data can be bound to one thread only. Any ideas welcome! | TITLE:
Best practice for Java IPC
QUESTION:
What is the best method for inter-process communication in a multithreaded Java app? It should be performant (so no JMS, please), easy to implement, and reliable, so that objects & data can be bound to one thread only. Any ideas welcome!
ANSWER:
Assuming the scenario is 1 JVM with multiple threads, then indeed java.util.concurrent is the place to look, specifically the various Queue implementations. However, an abstraction on top of that may be nice, and there Jetlang looks very interesting: lightweight Java message passing. | [
"java",
"multithreading"
] | 14 | 9 | 17,633 | 5 | 0 | 2008-09-09T09:20:37.393000 | 2008-09-09T10:29:51.393000 |
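The "bind data to one thread, communicate via queues" idea maps onto any language with thread-safe queues. A minimal analogue of the java.util.concurrent Queue pattern, sketched here in Python (the worker/sentinel names are illustrative, not from the thread):

```python
import queue
import threading

q = queue.Queue()  # thread-safe FIFO, the analogue of a java.util.concurrent queue

def worker(results):
    # all mutation of `results` happens on this one thread only
    while True:
        item = q.get()
        if item is None:  # sentinel meaning "no more work"
            break
        results.append(item * 2)

results = []
t = threading.Thread(target=worker, args=(results,))
t.start()
for i in range(3):
    q.put(i)   # hand objects off to the owning thread
q.put(None)
t.join()
assert results == [0, 2, 4]
```

Because only the worker thread ever touches `results`, no extra locking is needed — the queue is the single synchronization point.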
51,470 | 93,633 | How do I reset a sequence in Oracle? | In PostgreSQL, I can do something like this: ALTER SEQUENCE serial RESTART WITH 0; Is there an Oracle equivalent? | Here is a good procedure for resetting any sequence to 0 from Oracle guru Tom Kyte. Great discussion on the pros and cons in the links below too.

tkyte@TKYTE901.US.ORACLE.COM> create or replace procedure reset_seq( p_seq_name in varchar2 )
is
    l_val number;
begin
    execute immediate 'select ' || p_seq_name || '.nextval from dual' INTO l_val;
    execute immediate 'alter sequence ' || p_seq_name || ' increment by -' || l_val || ' minvalue 0';
    execute immediate 'select ' || p_seq_name || '.nextval from dual' INTO l_val;
    execute immediate 'alter sequence ' || p_seq_name || ' increment by 1 minvalue 0';
end;
/

From this page: Dynamic SQL to reset sequence value. Another good discussion is also here: How to reset sequences? | How do I reset a sequence in Oracle? In PostgreSQL, I can do something like this: ALTER SEQUENCE serial RESTART WITH 0; Is there an Oracle equivalent? | TITLE:
How do I reset a sequence in Oracle?
QUESTION:
In PostgreSQL, I can do something like this: ALTER SEQUENCE serial RESTART WITH 0; Is there an Oracle equivalent?
ANSWER:
Here is a good procedure for resetting any sequence to 0 from Oracle guru Tom Kyte. Great discussion on the pros and cons in the links below too.

create or replace procedure reset_seq( p_seq_name in varchar2 )
is
    l_val number;
begin
    execute immediate 'select ' || p_seq_name || '.nextval from dual' INTO l_val;
    execute immediate 'alter sequence ' || p_seq_name || ' increment by -' || l_val || ' minvalue 0';
    execute immediate 'select ' || p_seq_name || '.nextval from dual' INTO l_val;
    execute immediate 'alter sequence ' || p_seq_name || ' increment by 1 minvalue 0';
end;
/

From this page: Dynamic SQL to reset sequence value. Another good discussion is also here: How to reset sequences? | [
"sql",
"database",
"oracle",
"sequence"
] | 205 | 164 | 735,375 | 18 | 0 | 2008-09-09T09:36:39.320000 | 2008-09-18T15:34:52.270000 |
51,492 | 2,567,924 | What usable alternatives to XML syntax do you know? | For me usable means that: it's being used in real-wold it has tools support. (at least some simple editor) it has human readable syntax (no angle brackets please) Also I want it to be as close to XML as possible, i.e. there must be support for attributes as well as for properties. So, no YAML please. Currently, only one matching language comes to my mind - JSON. Do you know any other alternatives? | YAML is a 100% superset of JSON, so it doesn't make sense to reject YAML and then consider JSON instead. YAML does everything JSON does, but YAML gives so much more too (like references). I can't think of anything XML can do that YAML can't, except to validate a document with a DTD, which in my experience has never been worth the overhead. But YAML is so much faster and easier to type and read than XML. As for attributes or properties, if you think about it, they don't truly "add" anything... it's just a notational shortcut to write something as an attribute of the node instead of putting it in its own child node. But if you like that convenience, you can often emulate it with YAML's inline lists/hashes. Eg: # YAML Director: name: Spielberg Movies: - Movie: {title: E.T., year: 1975} - Movie: {title: Jaws, year: 1982} For me, the luxury of not having to write each node tag twice, combined with the freedom from all the angle-bracket litter makes YAML a preferred choice. I also actually like the lack of formal tag attributes, as that always seemed to me like a gray area of XML that needlessly introduced two sets of syntax (both when writing and traversing) for essentially the same concept. YAML does away with that confusion altogether. | What usable alternatives to XML syntax do you know? For me usable means that: it's being used in real-wold it has tools support. 
(at least some simple editor) it has human readable syntax (no angle brackets please) Also I want it to be as close to XML as possible, i.e. there must be support for attributes as well as for properties. So, no YAML please. Currently, only one matching language comes to my mind - JSON. Do you know any other alternatives? | TITLE:
What usable alternatives to XML syntax do you know?
QUESTION:
For me usable means that: it's being used in the real world; it has tools support (at least some simple editor); it has human-readable syntax (no angle brackets please). Also I want it to be as close to XML as possible, i.e. there must be support for attributes as well as for properties. So, no YAML please. Currently, only one matching language comes to my mind - JSON. Do you know any other alternatives?
ANSWER:
YAML is a 100% superset of JSON, so it doesn't make sense to reject YAML and then consider JSON instead. YAML does everything JSON does, but YAML gives so much more too (like references). I can't think of anything XML can do that YAML can't, except to validate a document with a DTD, which in my experience has never been worth the overhead. But YAML is so much faster and easier to type and read than XML. As for attributes or properties, if you think about it, they don't truly "add" anything... it's just a notational shortcut to write something as an attribute of the node instead of putting it in its own child node. But if you like that convenience, you can often emulate it with YAML's inline lists/hashes. E.g.:

# YAML
Director:
  name: Spielberg
  Movies:
    - Movie: {title: E.T., year: 1975}
    - Movie: {title: Jaws, year: 1982}

For me, the luxury of not having to write each node tag twice, combined with the freedom from all the angle-bracket litter makes YAML a preferred choice. I also actually like the lack of formal tag attributes, as that always seemed to me like a gray area of XML that needlessly introduced two sets of syntax (both when writing and traversing) for essentially the same concept. YAML does away with that confusion altogether. | [
"xml",
"syntax",
"yaml"
] | 29 | 57 | 31,103 | 14 | 0 | 2008-09-09T09:54:41.383000 | 2010-04-02T16:53:50.263000 |
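The answer's point that JSON documents are also valid YAML (JSON is a subset of YAML 1.2 flow syntax) and that attributes can be emulated with nested mappings can be sketched with Python's stdlib json module. The Director/Movies structure follows the answer; the concrete values are illustrative.

```python
import json

# The answer's flow-style YAML example, written as plain Python dicts.
# Serialized as JSON this is also parseable by any YAML 1.2 parser,
# since JSON documents are a subset of YAML.
director = {
    "Director": {
        "name": "Spielberg",
        "Movies": [
            {"Movie": {"title": "E.T.", "year": 1982}},
            {"Movie": {"title": "Jaws", "year": 1975}},
        ],
    }
}

text = json.dumps(director, indent=2)  # serialize: no closing tags, no angle brackets
back = json.loads(text)                # parse it back

assert back == director                # lossless round trip
print(back["Director"]["Movies"][0]["Movie"]["title"])
```

Note how what XML would express as attributes (`<Movie title="E.T." year="1982"/>`) simply becomes keys in a nested mapping, with no second syntax to learn.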
51,499 | 62,714 | Are there any good automated frameworks for applying coding standards in Perl? | One I am aware of is Perl::Critic And my googling has resulted in no results on multiple attempts so far.:-( Does anyone have any recommendations here? Any resources to configure Perl::Critic as per our coding standards and run it on code base would be appreciated. | In terms of setting up a profile, have you tried perlcritic --profile-proto? This will emit to stdout all of your installed policies with all their options with descriptions of both, including their default values, in perlcriticrc format. Save and edit to match what you want. Whenever you upgrade Perl::Critic, you may want to run this command again and do a diff with your current perlcriticrc so you can see any changes to existing policies and pick up any new ones. In terms of running perlcritic regularly, set up a Test::Perl::Critic test along with the rest of your tests. This is good for new code. For your existing code, use Test::Perl::Critic::Progressive instead. T::P::C::Progressive will succeed the first time you run it, but will save counts on the number of violations; thereafter, T::P::C::Progressive will complain if any of the counts go up. One thing to look out for is when you revert changes in your source control system. (You are using one, aren't you?) Say I check in a change and run tests and my changes reduce the number of P::C violations. Later, it turns out my change was bad, so I revert to the old code. The T::P::C::Progressive test will fail due to the reduced counts. The easiest thing to do at this point is to just delete the history file (default location t/.perlcritic-history) and run again. It should reproduce your old counts and you can write new stuff to bring them down again. Perl::Critic has a lot of policies that ship with it, but there are a bunch of add-on distributions of policies. Have a look at Task::Perl::Critic and Task::Perl::Critic::IncludingOptionalDependencies. 
You don't need to have a single perlcriticrc handle all your code. Create separate perlcriticrc files for each set of files you want to test and then a separate test that points to each one. For an example, have a look at the author tests for P::C itself at http://perlcritic.tigris.org/source/browse/perlcritic/trunk/Perl-Critic/xt/author/. When author tests are run, there's a test that runs over all the code of P::C, a second test that applies additional rules just on the policies, and a third one that criticizes P::C's tests. I personally think that everyone should run at the "brutal" severity level, but knock out the policies that they don't agree with. Perl::Critic isn't entirely self compliant; even the P::C developers don't agree with everything Conway says. Look at the perlcriticrc files used on Perl::Critic itself and search the Perl::Critic code for instances of "## no critic"; I count 143 at present. (Yes, I'm one of the Perl::Critic developers.) | Are there any good automated frameworks for applying coding standards in Perl? One I am aware of is Perl::Critic And my googling has resulted in no results on multiple attempts so far.:-( Does anyone have any recommendations here? Any resources to configure Perl::Critic as per our coding standards and run it on code base would be appreciated. | TITLE:
Are there any good automated frameworks for applying coding standards in Perl?
QUESTION:
One I am aware of is Perl::Critic And my googling has resulted in no results on multiple attempts so far.:-( Does anyone have any recommendations here? Any resources to configure Perl::Critic as per our coding standards and run it on code base would be appreciated.
ANSWER:
In terms of setting up a profile, have you tried perlcritic --profile-proto? This will emit to stdout all of your installed policies with all their options with descriptions of both, including their default values, in perlcriticrc format. Save and edit to match what you want. Whenever you upgrade Perl::Critic, you may want to run this command again and do a diff with your current perlcriticrc so you can see any changes to existing policies and pick up any new ones. In terms of running perlcritic regularly, set up a Test::Perl::Critic test along with the rest of your tests. This is good for new code. For your existing code, use Test::Perl::Critic::Progressive instead. T::P::C::Progressive will succeed the first time you run it, but will save counts on the number of violations; thereafter, T::P::C::Progressive will complain if any of the counts go up. One thing to look out for is when you revert changes in your source control system. (You are using one, aren't you?) Say I check in a change and run tests and my changes reduce the number of P::C violations. Later, it turns out my change was bad, so I revert to the old code. The T::P::C::Progressive test will fail due to the reduced counts. The easiest thing to do at this point is to just delete the history file (default location t/.perlcritic-history) and run again. It should reproduce your old counts and you can write new stuff to bring them down again. Perl::Critic has a lot of policies that ship with it, but there are a bunch of add-on distributions of policies. Have a look at Task::Perl::Critic and Task::Perl::Critic::IncludingOptionalDependencies. You don't need to have a single perlcriticrc handle all your code. Create separate perlcriticrc files for each set of files you want to test and then a separate test that points to each one. For an example, have a look at the author tests for P::C itself at http://perlcritic.tigris.org/source/browse/perlcritic/trunk/Perl-Critic/xt/author/. 
When author tests are run, there's a test that runs over all the code of P::C, a second test that applies additional rules just on the policies, and a third one that criticizes P::C's tests. I personally think that everyone should run at the "brutal" severity level, but knock out the policies that they don't agree with. Perl::Critic isn't entirely self compliant; even the P::C developers don't agree with everything Conway says. Look at the perlcriticrc files used on Perl::Critic itself and search the Perl::Critic code for instances of "## no critic"; I count 143 at present. (Yes, I'm one of the Perl::Critic developers.) | [
"perl",
"frameworks",
"coding-style",
"perl-critic"
] | 8 | 12 | 1,038 | 7 | 0 | 2008-09-09T10:01:06.640000 | 2008-09-15T13:12:17.130000 |
51,501 | 202,989 | Where to find gdb for an arm target running on linux 2.6.21.5-cfs-v19 | I have a ARM11 based hardware board which runs a Linux kernel ver. 2.6.21.5-cfs-v19. I have my application running on this ARM-LINUX board. To do source level debugging, of my application I used to use gdb from command prompt of the linux board, that was with some earlier version of the board linux version. With this version 2.6.21.5-cfs-v19, I dont have a related version of gdb for the board. I tried to look around to get gdb for this version of linux and arm port of it, but without much success. Can anyone point me where I can get either an independant executable for gdb for above mentioned os and board configuration or source for the same, which I may try compiling for that target. -AD. | Sometime ago I published an article about cross-target debugging with GDB and GDBServer. The target processor there is PPC7450, but it's rather detailed, so maybe you'll find it useful. | Where to find gdb for an arm target running on linux 2.6.21.5-cfs-v19 I have a ARM11 based hardware board which runs a Linux kernel ver. 2.6.21.5-cfs-v19. I have my application running on this ARM-LINUX board. To do source level debugging, of my application I used to use gdb from command prompt of the linux board, that was with some earlier version of the board linux version. With this version 2.6.21.5-cfs-v19, I dont have a related version of gdb for the board. I tried to look around to get gdb for this version of linux and arm port of it, but without much success. Can anyone point me where I can get either an independant executable for gdb for above mentioned os and board configuration or source for the same, which I may try compiling for that target. -AD. | TITLE:
Where to find gdb for an arm target running on linux 2.6.21.5-cfs-v19
QUESTION:
I have an ARM11-based hardware board which runs a Linux kernel ver. 2.6.21.5-cfs-v19. I have my application running on this ARM-Linux board. To do source-level debugging of my application I used to use gdb from the command prompt of the Linux board; that was with some earlier version of the board's Linux. With this version 2.6.21.5-cfs-v19, I don't have a matching version of gdb for the board. I tried to look around to get gdb for this version of Linux and an ARM port of it, but without much success. Can anyone point me to where I can get either an independent executable of gdb for the above-mentioned OS and board configuration, or source for the same, which I may try compiling for that target? -AD.
ANSWER:
Some time ago I published an article about cross-target debugging with GDB and GDBServer. The target processor there is PPC7450, but it's rather detailed, so maybe you'll find it useful. | [
"linux",
"gdb",
"arm"
] | 0 | 2 | 3,430 | 2 | 0 | 2008-09-09T10:04:03.467000 | 2008-10-14T21:50:41.860000 |
51,502 | 52,090 | Improving Python readability? | I've been really enjoying Python programming lately. I come from a background of a strong love for C-based coding, where everything is perhaps more complicated than it should be (but puts hair on your chest, at least). So switching from C to Python for more complex things that don't require tons of speed has been more of a boon than a bane in writing projects. However, coming from this land of brackets and parentheses and structs as far as the naked eye can see, I come across a small problem: I find Python difficult to read. For example, the following block of text is hard for me to decipher unless I stare at it (which I dislike doing): if foo: bar = baz while bar not biz: bar = i_am_going_to_find_you_biz_i_swear_on_my_life()
did_i_not_warn_you_biz() my_father_is_avenged() The problem occurs at the end of that if block: all the tabbing and then suddenly returning to a jarring block feels almost disturbing. As a solution, I've started coding my Python like this: if foo: bar = baz while bar not biz: bar = i_am_going_to_find_you_biz_i_swear_on_my_life() #-- while -- #-- if --
did_i_not_warn_you_biz() my_father_is_avenged() And this, for some odd reason, makes me more able to read my own code. But I'm curious: has anyone else with my strange problem found easier ways to make their tabbed-out code more readable? I'd love to find out if there's a better way to do this before this becomes a huge habit for me. | Part of learning a new programming language is learning to read code in that language. A crutch like this may make it easier to read your own code, but it's going to impede the process of learning how to read anyone else's Python code. I really think you'd be better off getting rid of the end of block comments and getting used to normal Python. | Improving Python readability? I've been really enjoying Python programming lately. I come from a background of a strong love for C-based coding, where everything is perhaps more complicated than it should be (but puts hair on your chest, at least). So switching from C to Python for more complex things that don't require tons of speed has been more of a boon than a bane in writing projects. However, coming from this land of brackets and parentheses and structs as far as the naked eye can see, I come across a small problem: I find Python difficult to read. For example, the following block of text is hard for me to decipher unless I stare at it (which I dislike doing): if foo: bar = baz while bar not biz: bar = i_am_going_to_find_you_biz_i_swear_on_my_life()
did_i_not_warn_you_biz() my_father_is_avenged() The problem occurs at the end of that if block: all the tabbing and then suddenly returning to a jarring block feels almost disturbing. As a solution, I've started coding my Python like this: if foo: bar = baz while bar not biz: bar = i_am_going_to_find_you_biz_i_swear_on_my_life() #-- while -- #-- if --
did_i_not_warn_you_biz() my_father_is_avenged() And this, for some odd reason, makes me more able to read my own code. But I'm curious: has anyone else with my strange problem found easier ways to make their tabbed-out code more readable? I'd love to find out if there's a better way to do this before this becomes a huge habit for me. | TITLE:
Improving Python readability?
QUESTION:
I've been really enjoying Python programming lately. I come from a background of a strong love for C-based coding, where everything is perhaps more complicated than it should be (but puts hair on your chest, at least). So switching from C to Python for more complex things that don't require tons of speed has been more of a boon than a bane in writing projects. However, coming from this land of brackets and parentheses and structs as far as the naked eye can see, I come across a small problem: I find Python difficult to read. For example, the following block of text is hard for me to decipher unless I stare at it (which I dislike doing): if foo: bar = baz while bar not biz: bar = i_am_going_to_find_you_biz_i_swear_on_my_life()
did_i_not_warn_you_biz() my_father_is_avenged() The problem occurs at the end of that if block: all the tabbing and then suddenly returning to a jarring block feels almost disturbing. As a solution, I've started coding my Python like this: if foo: bar = baz while bar not biz: bar = i_am_going_to_find_you_biz_i_swear_on_my_life() #-- while -- #-- if --
did_i_not_warn_you_biz() my_father_is_avenged() And this, for some odd reason, makes me more able to read my own code. But I'm curious: has anyone else with my strange problem found easier ways to make their tabbed-out code more readable? I'd love to find out if there's a better way to do this before this becomes a huge habit for me.
ANSWER:
Part of learning a new programming language is learning to read code in that language. A crutch like this may make it easier to read your own code, but it's going to impede the process of learning how to read anyone else's Python code. I really think you'd be better off getting rid of the end of block comments and getting used to normal Python. | [
"python",
"readability"
] | 4 | 24 | 3,066 | 7 | 0 | 2008-09-09T10:05:36.770000 | 2008-09-09T14:58:52.277000 |
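The answer's advice (learn to read normal Python instead of adding end-of-block comments) usually works in practice because idiomatic Python keeps blocks short, typically by extracting nested loops into functions so the dedent is obvious at a glance. A hypothetical sketch of the question's snippet, restructured that way (`hunt_biz` and `find_biz` are stand-ins for the question's names):

```python
def find_biz():
    """Stand-in for i_am_going_to_find_you_biz_i_swear_on_my_life()."""
    return "biz"

def hunt_biz(bar, biz):
    """The while-loop from the question, extracted into its own function."""
    while bar != biz:
        bar = find_biz()
    return bar

foo = True
if foo:
    bar = hunt_biz("baz", "biz")

# Back at the top level: with only one indentation step per construct,
# the end of each block is visible without any "#-- end if --" markers.
print(bar)
```

With blocks this small, the jarring return to the left margin that the question describes never spans more than a few lines.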
51,520 | 51,523 | How to get an absolute file path in Python | Given a path such as "mydir/myfile.txt", how do I find the file's absolute path in Python? E.g. on Windows, I might end up with: "C:/example/cwd/mydir/myfile.txt" | >>> import os >>> os.path.abspath("mydir/myfile.txt") 'C:/example/cwd/mydir/myfile.txt' Also works if it is already an absolute path: >>> import os >>> os.path.abspath("C:/example/cwd/mydir/myfile.txt") 'C:/example/cwd/mydir/myfile.txt' | How to get an absolute file path in Python Given a path such as "mydir/myfile.txt", how do I find the file's absolute path in Python? E.g. on Windows, I might end up with: "C:/example/cwd/mydir/myfile.txt" | TITLE:
How to get an absolute file path in Python
QUESTION:
Given a path such as "mydir/myfile.txt", how do I find the file's absolute path in Python? E.g. on Windows, I might end up with: "C:/example/cwd/mydir/myfile.txt"
ANSWER:
>>> import os >>> os.path.abspath("mydir/myfile.txt") 'C:/example/cwd/mydir/myfile.txt' Also works if it is already an absolute path: >>> import os >>> os.path.abspath("C:/example/cwd/mydir/myfile.txt") 'C:/example/cwd/mydir/myfile.txt' | [
"python",
"path",
"relative-path",
"absolute-path"
] | 1,043 | 1,503 | 1,471,137 | 11 | 0 | 2008-09-09T10:19:32.727000 | 2008-09-09T10:21:03.987000 |
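The behaviour behind the accepted answer can be checked directly: per the Python docs, `os.path.abspath(p)` is equivalent to `os.path.normpath(os.path.join(os.getcwd(), p))`, i.e. it resolves the path against the current working directory and collapses "." and ".." components, without touching the filesystem.

```python
import os
import os.path

# A relative path with a redundant "../" hop (illustrative names).
rel = os.path.join("mydir", "..", "mydir", "myfile.txt")

# abspath = join with the cwd, then normalize.
expected = os.path.normpath(os.path.join(os.getcwd(), rel))

assert os.path.abspath(rel) == expected
assert os.path.isabs(os.path.abspath(rel))  # the result is always absolute
print(os.path.abspath(rel))
```

Because no filesystem access happens, this works even if `mydir/myfile.txt` does not exist; symlinks are not resolved (use `os.path.realpath` for that).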
51,526 | 51,537 | Changing the value of an element in a list of structs | I have a list of structs and I want to change one element. For example: MyList.Add(new MyStruct("john"); MyList.Add(new MyStruct("peter"); Now I want to change one element: MyList[1].Name = "bob" However, whenever I try and do this I get the following error: Cannot modify the return value of System.Collections.Generic.List.this[int]‘ because it is not a variable If I use a list of classes, the problem doesn't occur. I guess the answer has to do with structs being a value type. So, if I have a list of structs should I treat them as read-only? If I need to change elements in a list then I should use classes and not structs? | MyList[1] = new MyStruct("bob"); structs in C# should almost always be designed to be immutable (that is, have no way to change their internal state once they have been created). In your case, what you want to do is to replace the entire struct in specified array index, not to try to change just a single property or field. | Changing the value of an element in a list of structs I have a list of structs and I want to change one element. For example: MyList.Add(new MyStruct("john"); MyList.Add(new MyStruct("peter"); Now I want to change one element: MyList[1].Name = "bob" However, whenever I try and do this I get the following error: Cannot modify the return value of System.Collections.Generic.List.this[int]‘ because it is not a variable If I use a list of classes, the problem doesn't occur. I guess the answer has to do with structs being a value type. So, if I have a list of structs should I treat them as read-only? If I need to change elements in a list then I should use classes and not structs? | TITLE:
Changing the value of an element in a list of structs
QUESTION:
I have a list of structs and I want to change one element. For example: MyList.Add(new MyStruct("john")); MyList.Add(new MyStruct("peter")); Now I want to change one element: MyList[1].Name = "bob"; However, whenever I try to do this I get the following error: Cannot modify the return value of 'System.Collections.Generic.List<MyStruct>.this[int]' because it is not a variable. If I use a list of classes, the problem doesn't occur. I guess the answer has to do with structs being a value type. So, if I have a list of structs should I treat them as read-only? If I need to change elements in a list, should I use classes and not structs?
ANSWER:
MyList[1] = new MyStruct("bob"); structs in C# should almost always be designed to be immutable (that is, have no way to change their internal state once they have been created). In your case, what you want to do is to replace the entire struct in specified array index, not to try to change just a single property or field. | [
"c#",
"struct",
"value-type"
] | 84 | 44 | 116,932 | 8 | 0 | 2008-09-09T10:23:29.503000 | 2008-09-09T10:27:56.700000 |
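The replace-the-whole-value pattern from the answer can be illustrated outside C# as well. The sketch below is a Python analogue, not C#: a frozen dataclass plays the role of the immutable struct, so an element in a list must be swapped wholesale rather than mutated through the indexer (the names mirror the question's `MyStruct`).

```python
from dataclasses import dataclass, replace

# Analogue of an immutable C# struct: instances cannot be modified in place.
@dataclass(frozen=True)
class MyStruct:
    name: str

people = [MyStruct("john"), MyStruct("peter")]

# people[1].name = "bob"   # would raise FrozenInstanceError, the analogue of
#                          # the C# "Cannot modify the return value" error.

# Instead, build a new value and assign it into the slot, exactly like
# MyList[1] = new MyStruct("bob") in the answer.
people[1] = replace(people[1], name="bob")

assert people[1].name == "bob"
print(people)
```

The design point carries over: when the elements are immutable values, "changing" one always means constructing a replacement and storing it back at the index.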
51,530 | 51,547 | How do you treat legacy code (and data)? | I am currently in the process of restructuring my local Subversion repository by adding some new projects and merging legacy code and data from a couple of older repositories into it. When I have done this in the past I have usually put the legacy code in a dedicated "legacy" folder, as not to "disturb" the new and "well-structured" code tree. However, in the spirit of refactoring I feel this is somewhat wrong. In theory, the legacy code will be refactored over time and moved to its new location, but in practice this rarely happens. How do you treat your legacy code? As much as I feel tempted to tuck away old sins in the "legacy" folder, never to look at it again, on some level I hope that by forcing it to live among the more "healthy" inhabitants in the repository, maybe the legacy code will have a better chance of getting well some day? (Yeah, we all know we shouldn't rewrite stuff, but this is my "fun" repository, not my business projects...) Update I am not worried about the technical aspects of keeping track of various versions. I know how to use tags and branches for that. This is more of a psychological aspect, as I prefer to have a "neat" structure in the repository, which makes navigating it much easier—for humans. | All code becomes 'legacy' one day, why seperate it at all? Source control is by project/branch or project/platform/branch and that type of hierarchy. Who cares how long in the tooth it is? | How do you treat legacy code (and data)? I am currently in the process of restructuring my local Subversion repository by adding some new projects and merging legacy code and data from a couple of older repositories into it. When I have done this in the past I have usually put the legacy code in a dedicated "legacy" folder, as not to "disturb" the new and "well-structured" code tree. However, in the spirit of refactoring I feel this is somewhat wrong. 
In theory, the legacy code will be refactored over time and moved to its new location, but in practice this rarely happens. How do you treat your legacy code? As much as I feel tempted to tuck away old sins in the "legacy" folder, never to look at it again, on some level I hope that by forcing it to live among the more "healthy" inhabitants in the repository, maybe the legacy code will have a better chance of getting well some day? (Yeah, we all know we shouldn't rewrite stuff, but this is my "fun" repository, not my business projects...) Update I am not worried about the technical aspects of keeping track of various versions. I know how to use tags and branches for that. This is more of a psychological aspect, as I prefer to have a "neat" structure in the repository, which makes navigating it much easier—for humans. | TITLE:
How do you treat legacy code (and data)?
QUESTION:
I am currently in the process of restructuring my local Subversion repository by adding some new projects and merging legacy code and data from a couple of older repositories into it. When I have done this in the past I have usually put the legacy code in a dedicated "legacy" folder, as not to "disturb" the new and "well-structured" code tree. However, in the spirit of refactoring I feel this is somewhat wrong. In theory, the legacy code will be refactored over time and moved to its new location, but in practice this rarely happens. How do you treat your legacy code? As much as I feel tempted to tuck away old sins in the "legacy" folder, never to look at it again, on some level I hope that by forcing it to live among the more "healthy" inhabitants in the repository, maybe the legacy code will have a better chance of getting well some day? (Yeah, we all know we shouldn't rewrite stuff, but this is my "fun" repository, not my business projects...) Update I am not worried about the technical aspects of keeping track of various versions. I know how to use tags and branches for that. This is more of a psychological aspect, as I prefer to have a "neat" structure in the repository, which makes navigating it much easier—for humans.
ANSWER:
All code becomes 'legacy' one day, so why separate it at all? Source control is by project/branch or project/platform/branch and that type of hierarchy. Who cares how long in the tooth it is? | [
"svn",
"version-control",
"project-management",
"legacy"
] | 0 | 4 | 559 | 5 | 0 | 2008-09-09T10:24:44.710000 | 2008-09-09T10:31:39.327000 |
51,540 | 51,554 | Determine how much memory a class uses? | I am trying to find a way to determine at run-time how much memory a given class is using in.NET. Using Marshal.SizeOf() is out, as it only works on value types. Is there a way to check exactly how much memory a class uses? | I've only recently started looking into this type of thing, but i have found that memory profilers can give quite detailed information regarding instances of objects within your application. Here are a couple that are worth trying: ANTS Profiler.NET Memory Profiler | Determine how much memory a class uses? I am trying to find a way to determine at run-time how much memory a given class is using in.NET. Using Marshal.SizeOf() is out, as it only works on value types. Is there a way to check exactly how much memory a class uses? | TITLE:
Determine how much memory a class uses?
QUESTION:
I am trying to find a way to determine at run-time how much memory a given class is using in .NET. Using Marshal.SizeOf() is out, as it only works on value types. Is there a way to check exactly how much memory a class uses?
ANSWER:
I've only recently started looking into this type of thing, but I have found that memory profilers can give quite detailed information regarding instances of objects within your application. Here are a couple that are worth trying: ANTS Profiler and .NET Memory Profiler. | [
".net",
"memory"
] | 8 | 5 | 658 | 3 | 0 | 2008-09-09T10:28:47.667000 | 2008-09-09T10:34:01.040000 |
51,553 | 52,006 | Why are SQL aggregate functions so much slower than Python and Java (or Poor Man's OLAP) | I need a real DBA's opinion. Postgres 8.3 takes 200 ms to execute this query on my Macbook Pro while Java and Python perform the same calculation in under 20 ms (350,000 rows): SELECT count(id), avg(a), avg(b), avg(c), avg(d) FROM tuples; Is this normal behaviour when using a SQL database? The schema (the table holds responses to a survey): CREATE TABLE tuples (id integer primary key, a integer, b integer, c integer, d integer);
\copy tuples from '350,000 responses.csv' delimiter as ',' I wrote some tests in Java and Python for context and they crush SQL (except for pure Python): java 1.5 threads ~ 7 ms java 1.5 ~ 10 ms python 2.5 numpy ~ 18 ms python 2.5 ~ 370 ms Even sqlite3 is competitive with Postgres despite it assuming all columns are strings (for contrast: even just switching to numeric columns instead of integers in Postgres results in a 10x slowdown) Tunings I've tried without success include (blindly following some web advice): increased the shared memory available to Postgres to 256MB increased the working memory to 2MB disabled connection and statement logging used a stored procedure via CREATE FUNCTION... LANGUAGE SQL So my question is, is my experience here normal, and is this what I can expect when using a SQL database? I can understand that ACID must come with costs, but this is kind of crazy in my opinion. I'm not asking for realtime game speed, but since Java can process millions of doubles in under 20 ms, I feel a bit jealous. Is there a better way to do simple OLAP on the cheap (both in terms of money and server complexity)? I've looked into Mondrian and Pig + Hadoop but I'm not super excited about maintaining yet another server application and not sure if they would even help. No, the Python code and Java code do all the work in-house so to speak. I just generate 4 arrays with 350,000 random values each, then take the average. I don't include the generation in the timings, only the averaging step. The Java threads timing uses 4 threads (one per array average), overkill but it's definitely the fastest. The sqlite3 timing is driven by the Python program and is running from disk (not :memory:) I realize Postgres is doing much more behind the scenes, but most of that work doesn't matter to me since this is read-only data. The Postgres query doesn't change timing on subsequent runs. I've rerun the Python tests to include spooling it off the disk. 
The timing slows down considerably to nearly 4 secs. But I'm guessing that Python's file handling code is pretty much in C (though maybe not the csv lib?) so this indicates to me that Postgres isn't streaming from the disk either (or that you are correct and I should bow down before whoever wrote their storage layer!) | Postgres is doing a lot more than it looks like (maintaining data consistency, for a start!) If the values don't have to be 100% spot on, or if the table is updated rarely but you are running this calculation often, you might want to look into Materialized Views to speed it up. (Note, I have not used materialized views in Postgres; they look a little hacky, but might suit your situation.) Materialized Views Also consider the overhead of actually connecting to the server and the round trip required to send the request to the server and back. I'd consider 200ms for something like this to be pretty good. A quick test on my Oracle server, with the same table structure, about 500k rows and no indexes, takes about 1 - 1.5 seconds, which is almost all just Oracle sucking the data off disk. The real question is, is 200ms fast enough? -------------- More -------------------- I was interested in solving this using materialized views, since I've never really played with them. This is in Oracle. First I created an MV which refreshes every minute. create materialized view mv_so_x build immediate refresh complete START WITH SYSDATE NEXT SYSDATE + 1/24/60 as select count(*),avg(a),avg(b),avg(c),avg(d) from so_x; While it's refreshing, there are no rows returned SQL> select * from mv_so_x;
no rows selected
Elapsed: 00:00:00.00 Once it refreshes, its MUCH faster than doing the raw query SQL> select count(*),avg(a),avg(b),avg(c),avg(d) from so_x;
COUNT(*) AVG(A) AVG(B) AVG(C) AVG(D) ---------- ---------- ---------- ---------- ---------- 1899459 7495.38839 22.2905454 5.00276131 2.13432836
Elapsed: 00:00:05.74 SQL> select * from mv_so_x;
COUNT(*) AVG(A) AVG(B) AVG(C) AVG(D) ---------- ---------- ---------- ---------- ---------- 1899459 7495.38839 22.2905454 5.00276131 2.13432836
Elapsed: 00:00:00.00 SQL> If we insert into the base table, the result is not immediately viewable view the MV. SQL> insert into so_x values (1,2,3,4,5);
1 row created.
Elapsed: 00:00:00.00 SQL> commit;
Commit complete.
Elapsed: 00:00:00.00 SQL> select * from mv_so_x;
COUNT(*) AVG(A) AVG(B) AVG(C) AVG(D) ---------- ---------- ---------- ---------- ---------- 1899459 7495.38839 22.2905454 5.00276131 2.13432836
Elapsed: 00:00:00.00 SQL> But wait a minute or so, and the MV will update behind the scenes, and the result is returned fast as you could want. SQL> /
COUNT(*) AVG(A) AVG(B) AVG(C) AVG(D) ---------- ---------- ---------- ---------- ---------- 1899460 7495.35823 22.2905352 5.00276078 2.17647059
Elapsed: 00:00:00.00 SQL> This isn't ideal. For a start, it's not realtime; inserts/updates will not be immediately visible. Also, you've got a query running to update the MV whether you need it or not (this can be tuned to whatever time frame, or run on demand). But this does show how much faster an MV can make things seem to the end user, if you can live with values which aren't quite up-to-the-second accurate. | Why are SQL aggregate functions so much slower than Python and Java (or Poor Man's OLAP) I need a real DBA's opinion. Postgres 8.3 takes 200 ms to execute this query on my Macbook Pro while Java and Python perform the same calculation in under 20 ms (350,000 rows): SELECT count(id), avg(a), avg(b), avg(c), avg(d) FROM tuples; Is this normal behaviour when using a SQL database? The schema (the table holds responses to a survey): CREATE TABLE tuples (id integer primary key, a integer, b integer, c integer, d integer);
\copy tuples from '350,000 responses.csv' delimiter as ',' I wrote some tests in Java and Python for context and they crush SQL (except for pure python): java 1.5 threads ~ 7 ms java 1.5 ~ 10 ms python 2.5 numpy ~ 18 ms python 2.5 ~ 370 ms Even sqlite3 is competitive with Postgres despite it assumping all columns are strings (for contrast: even using just switching to numeric columns instead of integers in Postgres results in 10x slowdown) Tunings i've tried without success include (blindly following some web advice): increased the shared memory available to Postgres to 256MB increased the working memory to 2MB disabled connection and statement logging used a stored procedure via CREATE FUNCTION... LANGUAGE SQL So my question is, is my experience here normal, and this is what I can expect when using a SQL database? I can understand that ACID must come with costs, but this is kind of crazy in my opinion. I'm not asking for realtime game speed, but since Java can process millions of doubles in under 20 ms, I feel a bit jealous. Is there a better way to do simple OLAP on the cheap (both in terms of money and server complexity)? I've looked into Mondrian and Pig + Hadoop but not super excited about maintaining yet another server application and not sure if they would even help. No the Python code and Java code do all the work in house so to speak. I just generate 4 arrays with 350,000 random values each, then take the average. I don't include the generation in the timings, only the averaging step. The java threads timing uses 4 threads (one per array average), overkill but it's definitely the fastest. The sqlite3 timing is driven by the Python program and is running from disk (not:memory:) I realize Postgres is doing much more behind the scenes, but most of that work doesn't matter to me since this is read only data. The Postgres query doesn't change timing on subsequent runs. I've rerun the Python tests to include spooling it off the disk. 
The timing slows down considerably to nearly 4 secs. But I'm guessing that Python's file handling code is pretty much in C (though maybe not the csv lib?) so this indicates to me that Postgres isn't streaming from the disk either (or that you are correct and I should bow down before whoever wrote their storage layer!) | TITLE:
Why are SQL aggregate functions so much slower than Python and Java (or Poor Man's OLAP)
QUESTION:
I need a real DBA's opinion. Postgres 8.3 takes 200 ms to execute this query on my Macbook Pro while Java and Python perform the same calculation in under 20 ms (350,000 rows): SELECT count(id), avg(a), avg(b), avg(c), avg(d) FROM tuples; Is this normal behaviour when using a SQL database? The schema (the table holds responses to a survey): CREATE TABLE tuples (id integer primary key, a integer, b integer, c integer, d integer);
\copy tuples from '350,000 responses.csv' delimiter as ',' I wrote some tests in Java and Python for context and they crush SQL (except for pure python): java 1.5 threads ~ 7 ms java 1.5 ~ 10 ms python 2.5 numpy ~ 18 ms python 2.5 ~ 370 ms Even sqlite3 is competitive with Postgres despite it assuming all columns are strings (for contrast: even just switching to numeric columns instead of integers in Postgres results in a 10x slowdown) Tunings I've tried without success include (blindly following some web advice): increased the shared memory available to Postgres to 256MB increased the working memory to 2MB disabled connection and statement logging used a stored procedure via CREATE FUNCTION... LANGUAGE SQL So my question is, is my experience here normal, and is this what I can expect when using a SQL database? I can understand that ACID must come with costs, but this is kind of crazy in my opinion. I'm not asking for realtime game speed, but since Java can process millions of doubles in under 20 ms, I feel a bit jealous. Is there a better way to do simple OLAP on the cheap (both in terms of money and server complexity)? I've looked into Mondrian and Pig + Hadoop but not super excited about maintaining yet another server application and not sure if they would even help. No, the Python code and Java code do all the work in-house, so to speak. I just generate 4 arrays with 350,000 random values each, then take the average. I don't include the generation in the timings, only the averaging step. The Java threads timing uses 4 threads (one per array average), overkill but it's definitely the fastest. The sqlite3 timing is driven by the Python program and is running from disk (not :memory:) I realize Postgres is doing much more behind the scenes, but most of that work doesn't matter to me since this is read-only data. The Postgres query doesn't change timing on subsequent runs. I've rerun the Python tests to include spooling it off the disk. 
The timing slows down considerably to nearly 4 secs. But I'm guessing that Python's file handling code is pretty much in C (though maybe not the csv lib?) so this indicates to me that Postgres isn't streaming from the disk either (or that you are correct and I should bow down before whoever wrote their storage layer!)
ANSWER:
Postgres is doing a lot more than it looks like (maintaining data consistency for a start!) If the values don't have to be 100% spot on, or if the table is updated rarely, but you are running this calculation often, you might want to look into Materialized Views to speed it up. (Note, I have not used materialized views in Postgres, they look a little hacky, but might suit your situation). Materialized Views Also consider the overhead of actually connecting to the server and the round trip required to send the request to the server and back. I'd consider 200ms for something like this to be pretty good. A quick test on my Oracle server, the same table structure with about 500k rows and no indexes, takes about 1 - 1.5 seconds, which is almost all just Oracle sucking the data off disk. The real question is, is 200ms fast enough? -------------- More -------------------- I was interested in solving this using materialized views, since I've never really played with them. This is in Oracle. First I created a MV which refreshes every minute. create materialized view mv_so_x build immediate refresh complete START WITH SYSDATE NEXT SYSDATE + 1/24/60 as select count(*),avg(a),avg(b),avg(c),avg(d) from so_x; While it's refreshing, there are no rows returned SQL> select * from mv_so_x;
no rows selected
Elapsed: 00:00:00.00 Once it refreshes, it's MUCH faster than doing the raw query SQL> select count(*),avg(a),avg(b),avg(c),avg(d) from so_x;
COUNT(*) AVG(A) AVG(B) AVG(C) AVG(D) ---------- ---------- ---------- ---------- ---------- 1899459 7495.38839 22.2905454 5.00276131 2.13432836
Elapsed: 00:00:05.74 SQL> select * from mv_so_x;
COUNT(*) AVG(A) AVG(B) AVG(C) AVG(D) ---------- ---------- ---------- ---------- ---------- 1899459 7495.38839 22.2905454 5.00276131 2.13432836
Elapsed: 00:00:00.00 SQL> If we insert into the base table, the result is not immediately viewable via the MV. SQL> insert into so_x values (1,2,3,4,5);
1 row created.
Elapsed: 00:00:00.00 SQL> commit;
Commit complete.
Elapsed: 00:00:00.00 SQL> select * from mv_so_x;
COUNT(*) AVG(A) AVG(B) AVG(C) AVG(D) ---------- ---------- ---------- ---------- ---------- 1899459 7495.38839 22.2905454 5.00276131 2.13432836
Elapsed: 00:00:00.00 SQL> But wait a minute or so, and the MV will update behind the scenes, and the result is returned as fast as you could want. SQL> /
COUNT(*) AVG(A) AVG(B) AVG(C) AVG(D) ---------- ---------- ---------- ---------- ---------- 1899460 7495.35823 22.2905352 5.00276078 2.17647059
Elapsed: 00:00:00.00 SQL> This isn't ideal. For a start, it's not realtime; inserts/updates will not be immediately visible. Also, you've got a query running to update the MV whether you need it or not (this can be tuned to whatever time frame, or on demand). But this does show how much faster an MV can make it seem to the end user, if you can live with values which aren't quite up-to-the-second accurate. | [
"python",
"sql",
"optimization",
"aggregate",
"olap"
] | 16 | 12 | 7,656 | 10 | 0 | 2008-09-09T10:33:39.620000 | 2008-09-09T14:26:28.297000 |
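The asker's in-memory benchmark (generate arrays of ints, average them, time only the averaging step) can be sketched in plain Java. This is an illustrative reconstruction under stated assumptions, not the asker's actual benchmark code; the class name and the value range passed to `nextInt` are made up:

```java
import java.util.Random;

public class InMemoryAggregate {
    // count(id), avg(a)... reduced to one column: sum in a long to avoid
    // overflow for 350,000 ints, then divide once at the end.
    public static double mean(int[] column) {
        long sum = 0;
        for (int v : column) {
            sum += v;
        }
        return (double) sum / column.length;
    }

    public static void main(String[] args) {
        Random rnd = new Random(42);
        int n = 350_000;
        int[] a = new int[n];
        for (int i = 0; i < n; i++) {
            a[i] = rnd.nextInt(10_000); // hypothetical survey-response range
        }

        // Time only the averaging step, as the asker describes.
        long start = System.nanoTime();
        double avg = mean(a);
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.println("count=" + n + " avg=" + avg + " elapsed=" + elapsedMs + "ms");
    }
}
```

A loop like this touches only contiguous memory with no parsing, locking, or MVCC visibility checks, which is why it lands in the single-digit-millisecond range the question reports.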
51,561 | 51,566 | How do I automate finding unused #include directives? | Typically when writing new code you discover that you are missing a #include because the file doesn't compile. Simple enough, you add the required #include. But later you refactor the code somehow and now a couple of #include directives are no longer needed. How do I discover which ones are no longer needed? Of course I can manually remove some or all #include lines and add them back until the file compiles again, but this isn't really feasible in a large project with thousands of files. Are there any tools available that will help automate this task? | You can use PC-Lint/FlexeLint to do that. Unusually there isn't a free OS version of the tool available. You can remove #includes by passing by reference instead of passing by value and forward declaring. This is because the compiler doesn't need to know the size of the object at compile time. This will require a large amount of manual work on your behalf however. The good thing is it will reduce your compile times. | How do I automate finding unused #include directives? Typically when writing new code you discover that you are missing a #include because the file doesn't compile. Simple enough, you add the required #include. But later you refactor the code somehow and now a couple of #include directives are no longer needed. How do I discover which ones are no longer needed? Of course I can manually remove some or all #include lines and add them back until the file compiles again, but this isn't really feasible in a large project with thousands of files. Are there any tools available that will help automate this task? | TITLE:
How do I automate finding unused #include directives?
QUESTION:
Typically when writing new code you discover that you are missing a #include because the file doesn't compile. Simple enough, you add the required #include. But later you refactor the code somehow and now a couple of #include directives are no longer needed. How do I discover which ones are no longer needed? Of course I can manually remove some or all #include lines and add them back until the file compiles again, but this isn't really feasible in a large project with thousands of files. Are there any tools available that will help automate this task?
ANSWER:
You can use PC-Lint/FlexeLint to do that. Unusually there isn't a free OS version of the tool available. You can remove #includes by passing by reference instead of passing by value and forward declaring. This is because the compiler doesn't need to know the size of the object at compile time. This will require a large amount of manual work on your behalf however. The good thing is it will reduce your compile times. | [
"c"
] | 28 | 5 | 4,002 | 5 | 0 | 2008-09-09T10:38:25.507000 | 2008-09-09T10:41:43.563000 |
51,574 | 52,691 | Good Java graph algorithm library? | Has anyone had good experiences with any Java libraries for Graph algorithms. I've tried JGraph and found it ok, and there are a lot of different ones in google. Are there any that people are actually using successfully in production code or would recommend? To clarify, I'm not looking for a library that produces graphs/charts, I'm looking for one that helps with Graph algorithms, eg minimum spanning tree, Kruskal's algorithm Nodes, Edges, etc. Ideally one with some good algorithms/data structures in a nice Java OO API. | If you were using JGraph, you should give a try to JGraphT which is designed for algorithms. One of its features is visualization using the JGraph library. It's still developed, but pretty stable. I analyzed the complexity of JGraphT algorithms some time ago. Some of them aren't the quickest, but if you're going to implement them on your own and need to display your graph, then it might be the best choice. I really liked using its API, when I quickly had to write an app that was working on graph and displaying it later. | Good Java graph algorithm library? Has anyone had good experiences with any Java libraries for Graph algorithms. I've tried JGraph and found it ok, and there are a lot of different ones in google. Are there any that people are actually using successfully in production code or would recommend? To clarify, I'm not looking for a library that produces graphs/charts, I'm looking for one that helps with Graph algorithms, eg minimum spanning tree, Kruskal's algorithm Nodes, Edges, etc. Ideally one with some good algorithms/data structures in a nice Java OO API. | TITLE:
Good Java graph algorithm library?
QUESTION:
Has anyone had good experiences with any Java libraries for Graph algorithms. I've tried JGraph and found it ok, and there are a lot of different ones in google. Are there any that people are actually using successfully in production code or would recommend? To clarify, I'm not looking for a library that produces graphs/charts, I'm looking for one that helps with Graph algorithms, eg minimum spanning tree, Kruskal's algorithm Nodes, Edges, etc. Ideally one with some good algorithms/data structures in a nice Java OO API.
ANSWER:
If you were using JGraph, you should give a try to JGraphT which is designed for algorithms. One of its features is visualization using the JGraph library. It's still developed, but pretty stable. I analyzed the complexity of JGraphT algorithms some time ago. Some of them aren't the quickest, but if you're going to implement them on your own and need to display your graph, then it might be the best choice. I really liked using its API, when I quickly had to write an app that was working on graph and displaying it later. | [
"java",
"algorithm",
"graph"
] | 237 | 114 | 219,683 | 18 | 0 | 2008-09-09T10:47:34.047000 | 2008-09-09T19:28:21.427000 |
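The question above names minimum spanning trees and Kruskal's algorithm as the target use case, and the answer recommends JGraphT, which ships such algorithms. As a library-free illustration of what the recommended kind of algorithm does, here is a hedged plain-Java Kruskal sketch; it is not JGraphT's actual API, and all names are hypothetical:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class Kruskal {
    public static class Edge {
        public final int u, v, weight;
        public Edge(int u, int v, int weight) {
            this.u = u;
            this.v = v;
            this.weight = weight;
        }
    }

    // Union-find "find" with path halving.
    private static int find(int[] parent, int x) {
        while (parent[x] != x) {
            parent[x] = parent[parent[x]];
            x = parent[x];
        }
        return x;
    }

    // Kruskal's algorithm: sort edges by weight, then greedily keep any edge
    // that connects two previously separate components. Vertices are 0..n-1;
    // the graph is assumed undirected and connected.
    public static List<Edge> minimumSpanningTree(int n, List<Edge> edges) {
        List<Edge> sorted = new ArrayList<>(edges);
        sorted.sort(Comparator.comparingInt(e -> e.weight));
        int[] parent = new int[n];
        for (int i = 0; i < n; i++) parent[i] = i;
        List<Edge> mst = new ArrayList<>();
        for (Edge e : sorted) {
            int ru = find(parent, e.u);
            int rv = find(parent, e.v);
            if (ru != rv) {              // edge joins two components: keep it
                parent[ru] = rv;
                mst.add(e);
                if (mst.size() == n - 1) break;
            }
        }
        return mst;
    }

    public static void main(String[] args) {
        List<Edge> edges = new ArrayList<>();
        edges.add(new Edge(0, 1, 4));
        edges.add(new Edge(1, 2, 3));
        edges.add(new Edge(0, 2, 5));
        int total = 0;
        for (Edge e : minimumSpanningTree(3, edges)) total += e.weight;
        System.out.println("MST weight: " + total); // prints 7 (edges 3 and 4)
    }
}
```

A library such as JGraphT wraps exactly this shape of logic behind reusable graph, vertex, and edge abstractions, which is why the answer suggests it over hand-rolled code.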
51,582 | 51,623 | Java Generics: Comparing the class of Object o to <E> | Let's say I have the following class: public class Test<E> { public boolean sameClassAs(Object o) { // TODO help! } } How would I check that o is the same class as E? Test<String> test = new Test<String>(); test.sameClassAs("a string"); // returns true; test.sameClassAs(4); // returns false; I can't change the method signature from (Object o) as I'm overriding a superclass and so don't get to choose my method signature. I would also rather not go down the road of attempting a cast and then catching the resulting exception if it fails. | An instance of Test<E> has no information as to what E is at runtime. So, you need to pass a Class<E> to the constructor of Test. public class Test<E> { private final Class<E> clazz; public Test(Class<E> clazz) { if (clazz == null) { throw new NullPointerException(); } this.clazz = clazz; } // To make things easier on clients: public static <T> Test<T> create(Class<T> clazz) { return new Test<T>(clazz); } public boolean sameClassAs(Object o) { return o != null && o.getClass() == clazz; } } If you want an "instanceof" relationship, use Class.isAssignableFrom instead of the Class comparison. Note, E will need to be a non-generic type, for the same reason Test<E> needs the Class object. For examples in the Java API, see java.util.Collections.checkedSet and similar. | Java Generics: Comparing the class of Object o to <E> Let's say I have the following class: public class Test<E> { public boolean sameClassAs(Object o) { // TODO help! } } How would I check that o is the same class as E? Test<String> test = new Test<String>(); test.sameClassAs("a string"); // returns true; test.sameClassAs(4); // returns false; I can't change the method signature from (Object o) as I'm overriding a superclass and so don't get to choose my method signature. I would also rather not go down the road of attempting a cast and then catching the resulting exception if it fails. | TITLE:
Java Generics: Comparing the class of Object o to <E>
QUESTION:
Let's say I have the following class: public class Test<E> { public boolean sameClassAs(Object o) { // TODO help! } } How would I check that o is the same class as E? Test<String> test = new Test<String>(); test.sameClassAs("a string"); // returns true; test.sameClassAs(4); // returns false; I can't change the method signature from (Object o) as I'm overriding a superclass and so don't get to choose my method signature. I would also rather not go down the road of attempting a cast and then catching the resulting exception if it fails.
ANSWER:
An instance of Test<E> has no information as to what E is at runtime. So, you need to pass a Class<E> to the constructor of Test. public class Test<E> { private final Class<E> clazz; public Test(Class<E> clazz) { if (clazz == null) { throw new NullPointerException(); } this.clazz = clazz; } // To make things easier on clients: public static <T> Test<T> create(Class<T> clazz) { return new Test<T>(clazz); } public boolean sameClassAs(Object o) { return o != null && o.getClass() == clazz; } } If you want an "instanceof" relationship, use Class.isAssignableFrom instead of the Class comparison. Note, E will need to be a non-generic type, for the same reason Test<E> needs the Class object. For examples in the Java API, see java.util.Collections.checkedSet and similar.
"java",
"generics"
] | 27 | 27 | 41,495 | 4 | 0 | 2008-09-09T10:54:10.537000 | 2008-09-09T11:16:30.907000 |
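The answer's class-token technique, assembled into a self-contained runnable sketch. The generic parameters follow the answer's description (formatting stripped them from the row text), and the demo values come from the question:

```java
public class Test<E> {
    private final Class<E> clazz;

    public Test(Class<E> clazz) {
        if (clazz == null) {
            throw new NullPointerException();
        }
        this.clazz = clazz;
    }

    // Convenience factory so callers can write Test.create(String.class).
    public static <T> Test<T> create(Class<T> clazz) {
        return new Test<T>(clazz);
    }

    // True only when o's runtime class is exactly the stored class token.
    // For an instanceof-style check, use clazz.isAssignableFrom(o.getClass()).
    public boolean sameClassAs(Object o) {
        return o != null && o.getClass() == clazz;
    }

    public static void main(String[] args) {
        Test<String> test = Test.create(String.class);
        System.out.println(test.sameClassAs("a string")); // true
        System.out.println(test.sameClassAs(4));          // false (4 boxes to Integer)
    }
}
```

Because generics are erased at runtime, the `Class<E>` token passed to the constructor is the only way an instance can know what E was.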
51,584 | 51,596 | Twitter for work updates | If you are sending work/progress reports to the project lead on a daily or weekly basis, I wondered if you would consider using Twitter or similar services for these updates. Say if you're working remotely or with a distributed team and the project lead has a hard time getting an overview about the topics people are working on, and where the issues/time consumers are, would you set up some private accounts (or even a private company-internal service) to broadcast progress updates to your colleagues? edit Thanks for the link to those products, but do you already use one of them in your company too? For real-life professional use? | Try Laconica: An open source Twitter-like system you could run on your own servers. | Twitter for work updates If you are sending work/progress reports to the project lead on a daily or weekly basis, I wondered if you would consider using Twitter or similar services for these updates. Say if you're working remotely or with a distributed team and the project lead has a hard time getting an overview about the topics people are working on, and where the issues/time consumers are, would you set up some private accounts (or even a private company-internal service) to broadcast progress updates to your colleagues? edit Thanks for the link to those products, but do you already use one of them in your company too? For real-life professional use? | TITLE:
Twitter for work updates
QUESTION:
If you are sending work/progress reports to the project lead on a daily or weekly basis, I wondered if you would consider using Twitter or similar services for these updates. Say if you're working remotely or with a distributed team and the project lead has a hard time getting an overview about the topics people are working on, and where the issues/time consumers are, would you set up some private accounts (or even a private company-internal service) to broadcast progress updates to your colleagues? edit Thanks for the link to those products, but do you already use one of them in your company too? For real-life professional use?
ANSWER:
Try Laconica: An open source Twitter-like system you could run on your own servers. | [
"project-management",
"project-planning",
"knowledge-management"
] | 7 | 15 | 610 | 11 | 0 | 2008-09-09T10:55:04.437000 | 2008-09-09T11:01:32.330000 |
51,586 | 51,621 | Accessing non-generic members of a generic object | Is there a way to collect (e.g. in a List) multiple 'generic' objects that don't share a common super class? If so, how can I access their common properties? For example: class MyObject<T> { public T Value { get; set; } public string Name { get; set; }
public MyObject(string name, T value) { Name = name; Value = value; } }
var fst = new MyObject<int>("fst", 42); var snd = new MyObject<bool>("snd", true);
List<MyObject<?>> list = new List<MyObject<?>>(){fst, snd};
foreach (MyObject<?> o in list) Console.WriteLine(o.Name); Obviously, this is pseudo code, this doesn't work. Also I don't need to access the .Value property (since that wouldn't be type-safe). EDIT: Now that I've been thinking about this, it would be possible to use sub-classes for this. However, I think that would mean I'd have to write a new subclass for every new type. @ Grzenio Yes, that exactly answered my question. Of course, now I need to duplicate the entire shared interface, but that's not a big problem. I should have thought of that... @ aku You are right about the duck typing. I wouldn't expect two completely random types of objects to be accessible. But I thought generic objects would share some kind of common interface, since they are exactly the same, apart from the type they are parametrized by. Apparently, this is not the case automatically. | I don't think it is possible in C#, because MyObject is not a baseclass of MyObject<T>. What I usually do is to define an interface (a 'normal' one, not generic) and make MyObject<T> implement that interface, e.g. interface INamedObject { string Name {get;} } and then you can use the interface: List<INamedObject> list = new List<INamedObject>(){fst, snd};
foreach (INamedObject o in list) Console.WriteLine(o.Name); Did it answer your question? | Accessing non-generic members of a generic object Is there a way to collect (e.g. in a List) multiple 'generic' objects that don't share a common super class? If so, how can I access their common properties? For example: class MyObject<T> { public T Value { get; set; } public string Name { get; set; }
public MyObject(string name, T value) { Name = name; Value = value; } }
var fst = new MyObject<int>("fst", 42); var snd = new MyObject<bool>("snd", true);
List<MyObject<?>> list = new List<MyObject<?>>(){fst, snd};
foreach (MyObject<?> o in list) Console.WriteLine(o.Name); Obviously, this is pseudo code, this doesn't work. Also I don't need to access the .Value property (since that wouldn't be type-safe). EDIT: Now that I've been thinking about this, it would be possible to use sub-classes for this. However, I think that would mean I'd have to write a new subclass for every new type. @ Grzenio Yes, that exactly answered my question. Of course, now I need to duplicate the entire shared interface, but that's not a big problem. I should have thought of that... @ aku You are right about the duck typing. I wouldn't expect two completely random types of objects to be accessible. But I thought generic objects would share some kind of common interface, since they are exactly the same, apart from the type they are parametrized by. Apparently, this is not the case automatically. | TITLE:
Accessing non-generic members of a generic object
QUESTION:
Is there a way to collect (e.g. in a List) multiple 'generic' objects that don't share a common super class? If so, how can I access their common properties? For example: class MyObject<T> { public T Value { get; set; } public string Name { get; set; }
public MyObject(string name, T value) { Name = name; Value = value; } }
var fst = new MyObject<int>("fst", 42); var snd = new MyObject<bool>("snd", true);
List<MyObject<?>> list = new List<MyObject<?>>(){fst, snd};
foreach (MyObject<?> o in list) Console.WriteLine(o.Name); Obviously, this is pseudo code, this doesn't work. Also I don't need to access the .Value property (since that wouldn't be type-safe). EDIT: Now that I've been thinking about this, it would be possible to use sub-classes for this. However, I think that would mean I'd have to write a new subclass for every new type. @ Grzenio Yes, that exactly answered my question. Of course, now I need to duplicate the entire shared interface, but that's not a big problem. I should have thought of that... @ aku You are right about the duck typing. I wouldn't expect two completely random types of objects to be accessible. But I thought generic objects would share some kind of common interface, since they are exactly the same, apart from the type they are parametrized by. Apparently, this is not the case automatically.
ANSWER:
I don't think it is possible in C#, because MyObject is not a baseclass of MyObject<T>. What I usually do is to define an interface (a 'normal' one, not generic) and make MyObject<T> implement that interface, e.g. interface INamedObject { string Name {get;} } and then you can use the interface: List<INamedObject> list = new List<INamedObject>(){fst, snd};
foreach (INamedObject o in list) Console.WriteLine(o.Name); Did it answer your question? | [
"c#",
".net",
"generics"
] | 3 | 7 | 1,249 | 3 | 0 | 2008-09-09T10:56:45.103000 | 2008-09-09T11:13:54.483000 |
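The answer's non-generic-interface pattern is written in C#, but the same idea carries over directly to Java. A hedged Java rendering for illustration; the outer class and method names are hypothetical, with the interface and field names borrowed from the answer:

```java
import java.util.ArrayList;
import java.util.List;

public class NamedObjects {
    // Non-generic interface exposing the members shared by every MyObject<T>,
    // mirroring the answer's C# INamedObject.
    public interface NamedObject {
        String getName();
    }

    // The generic class from the question, translated to Java for illustration.
    public static class MyObject<T> implements NamedObject {
        private final String name;
        private final T value;

        public MyObject(String name, T value) {
            this.name = name;
            this.value = value;
        }

        public String getName() { return name; }
        public T getValue() { return value; }
    }

    // Differently-parameterized MyObjects can share one list via the interface.
    public static List<String> names(List<NamedObject> objects) {
        List<String> result = new ArrayList<>();
        for (NamedObject o : objects) {
            result.add(o.getName());
        }
        return result;
    }

    public static void main(String[] args) {
        List<NamedObject> list = new ArrayList<>();
        list.add(new MyObject<Integer>("fst", 42));
        list.add(new MyObject<Boolean>("snd", true));
        for (String n : names(list)) {
            System.out.println(n); // prints fst, then snd
        }
    }
}
```

The price of this design, as the asker notes, is that every shared member must be duplicated on the interface; the gain is that the list's element type is a single non-generic supertype.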
51,592 | 51,602 | Is there a need to destroy char * = "string" or char * = new char[6]? | I assume that char* = "string" is the same as char* = new char[6]. I believe these strings are created on the heap instead of the stack. So do I need to destroy them or free their memory when I'm done using them or do they get destroyed by themselves? | No. You only need to manually free strings when you manually allocate the memory yourself using the malloc function (in C) or the new operator (in C++). If you do not use malloc or new, then the char* or string will be created on the stack or as a compile-time constant. | Is there a need to destroy char * = "string" or char * = new char[6]? I assume that char* = "string" is the same as char* = new char[6]. I believe these strings are created on the heap instead of the stack. So do I need to destroy them or free their memory when I'm done using them or do they get destroyed by themselves? | TITLE:
Is there a need to destroy char * = "string" or char * = new char[6]?
QUESTION:
I assume that char* = "string" is the same as char* = new char[6]. I believe these strings are created on the heap instead of the stack. So do I need to destroy them or free their memory when I'm done using them or do they get destroyed by themselves?
ANSWER:
No. You only need to manually free strings when you manually allocate the memory yourself using the malloc function (in C) or the new operator (in C++). If you do not use malloc or new, then the char* or string will be created on the stack or as a compile-time constant. | [
"c++",
"memory"
] | 20 | 36 | 13,667 | 8 | 0 | 2008-09-09T10:59:38.620000 | 2008-09-09T11:02:44.480000 |