Dataset schema (column: dtype, min – max):
- question_id: int64, 4 – 6.31M
- answer_id: int64, 7 – 6.31M
- title: string, lengths 9 – 150
- question_body: string, lengths 0 – 28.8k
- answer_body: string, lengths 60 – 27.2k
- question_text: string, lengths 40 – 28.9k
- combined_text: string, lengths 124 – 39.6k
- tags: list, lengths 1 – 6
- question_score: int64, 0 – 26.3k
- answer_score: int64, 0 – 28.8k
- view_count: int64, 15 – 14M
- answer_count: int64, 0 – 182
- favorite_count: int64, 0 – 32
- question_creation_date: date string, 2008-07-31 21:42:52 – 2011-06-10 18:12:18
- answer_creation_date: date string, 2008-07-31 22:17:57 – 2011-06-10 18:14:17
question_id: 9,702 | answer_id: 9,857
Default database IDs; system and user values
As part of our current database work, we are looking at dealing with the process of updating databases. A point which has been brought up recurrently is that of dealing with system vs. user values; in our project, user and system values are stored together. For example... We have a list of templates: 1, 2, 3. These are mapped in the app to an enum (1, 2, 3). Then a user comes in and adds... 4... and... 5. Then we issue an upgrade and insert, as part of our upgrade scripts... [6]. THEN we find a bug in the new system template and need to update it... The problem is how? We cannot update the record using ID 6 (as it may have been inserted as 9, or 999, so we have to identify the record using some other mechanism). So, we've come to two possible solutions for this. In the red corner (speed)... We simply start user IDs at 5000 (or some other value) and test data at 10000 (or some other value). This would allow us to make modifications to system values and test them up to the lower limit of the next ID range. Advantage: quick and easy to implement. Disadvantage: we could run out of values if we don't choose a big enough range! In the blue corner (scalability)... We store system and user data separately, use GUIDs as IDs, and merge the two lists using a view. Advantage: scalable, with no limits with regard to DB size. Disadvantage: more complicated to implement (many-to-one updatable views, etc.). I plump squarely for the first option, but I'm looking for some ammo to back me up! Does anyone have any thoughts on these approaches, or even one(s) that we've missed?
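The "red corner" scheme above can be sketched roughly in Python. This is only an illustration of the reserved-range idea, not anyone's actual implementation; the 5000/10000 boundaries are the question's own example values, and the function name is invented:

```python
# Sketch of the reserved-ID-range scheme: classify a record by the
# range its ID falls in. Boundaries follow the question's examples
# (system < 5000, user 5000-9999, test >= 10000); real limits would
# be a design decision.

SYSTEM_MAX = 5000
TEST_MIN = 10000

def id_kind(record_id):
    """Classify a record ID by its reserved range."""
    if record_id < SYSTEM_MAX:
        return "system"
    if record_id < TEST_MIN:
        return "user"
    return "test"
```

The disadvantage the question names shows up directly: once system inserts reach SYSTEM_MAX, the classification silently breaks, so the range has to be sized generously up front.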
I have never had problems (performance or development - TDD and unit testing included) using GUIDs as the IDs for my databases, and I've worked on some pretty big ones. Have a look here, here and here if you want to find out more about using GUIDs (and the potential gotchas involved) as your primary keys - but I can't recommend it highly enough, since moving data around safely and DB synchronisation become as easy as brushing your teeth in the morning :-) For your question above, I would recommend either a third column (if possible) that indicates whether the template is user or system based, or, at the very least, generating GUIDs for system templates as you insert them and keeping a list of those on hand, so that if you need to update a template you can just target that same GUID in your DEV, UAT and/or PRODUCTION databases without fear of overwriting other templates. The third column would come in handy, though, for selecting all system or user templates at will, without the need to separate them into two tables (which is overkill IMHO). I hope that helps, Rob G
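As a minimal sketch of the answer's GUID-plus-flag suggestion, here it is in Python using the standard uuid module. A dict stands in for the real table, and all names (add_template, is_system, the template names) are invented for the example:

```python
import uuid

# Each template row gets a GUID key plus an is_system flag, per the
# answer's suggestion. The dict stands in for a real database table.
templates = {}

def add_template(name, is_system):
    key = str(uuid.uuid4())  # stable across DEV/UAT/PROD if recorded
    templates[key] = {"name": name, "is_system": is_system}
    return key

# System template inserted by an upgrade script, GUID kept on hand:
system_key = add_template("invoice", is_system=True)
# User-added template:
add_template("my custom letter", is_system=False)

# A later fix script targets the recorded GUID directly, with no risk
# of colliding with whatever IDs users have created in the meantime:
templates[system_key]["name"] = "invoice v2"
```

The is_system flag is what lets you select all system or all user templates without splitting them into two tables.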
tags: [ "database" ]
question_score: 1 | answer_score: 1
view_count: 994 | answer_count: 6 | favorite_count: 0
question_creation_date: 2008-08-13T12:23:37.863000 | answer_creation_date: 2008-08-13T14:06:58.013000
question_id: 9,704 | answer_id: 9,708
How to prevent the mouse cursor from being hidden after calling CComboBox::ShowDropDown?
In my MFC application, when I call CComboBox::ShowDropDown(), the mouse cursor is hidden until interaction with the combo box completes (when the combo box loses focus.) It doesn't reappear when the mouse is moved, like it does with edit boxes. How can I keep the mouse cursor from being hidden?
Call SetCursor(LoadCursor(NULL, IDC_ARROW)); immediately after the ShowDropDown() call.
tags: [ "mfc", "mouse", "visibility", "ccombobox" ]
question_score: 2 | answer_score: 3
view_count: 2,328 | answer_count: 1 | favorite_count: 0
question_creation_date: 2008-08-13T12:28:57.057000 | answer_creation_date: 2008-08-13T12:31:26.023000
question_id: 9,729 | answer_id: 22,058
How to extend project properties page?
Is it possible to add a custom tab to a project properties page in the Visual Studio 2008? What I want to do is to be able to add a custom tab to properties page for the projects created from default project templates (WPF Application, WPF custom controls library, etc).
Keith, I'm working on a VS add-in for WPF application localization. I want to be able to manage project-specific settings via the "project properties" page. I did some research, and it seems that it is not possible to extend existing projects in this way.
tags: [ "visual-studio", "visual-studio-2008", "vsx", "extensibility" ]
question_score: 4 | answer_score: 3
view_count: 2,481 | answer_count: 2 | favorite_count: 0
question_creation_date: 2008-08-13T12:50:10.557000 | answer_creation_date: 2008-08-22T10:50:17.123000
question_id: 9,734 | answer_id: 9,747
C#.Net case-insensitive string
Why does C#.Net allow the declaration of the string object to be case-insensitive? String sHello = "Hello"; string sHello = "Hello"; Both the lower-case and upper-case S of the word "string" are acceptable, and this seems to be the only type that allows this. Can anyone explain why?
string is a language keyword, while System.String is the type it aliases. Both compile to exactly the same thing. Similarly:
int is System.Int32
long is System.Int64
float is System.Single
double is System.Double
char is System.Char
byte is System.Byte
short is System.Int16
ushort is System.UInt16
uint is System.UInt32
ulong is System.UInt64
I think in most cases this is about code legibility - all the basic system value types have aliases; I think the lower-case string might just be for consistency.
tags: [ "c#", ".net" ]
question_score: 7 | answer_score: 21
view_count: 2,778 | answer_count: 7 | favorite_count: 0
question_creation_date: 2008-08-13T12:53:20.550000 | answer_creation_date: 2008-08-13T12:57:34.520000
question_id: 9,751 | answer_id: 15,415
Sending emails without looking like spam
I want to send an update by email every week. But I'm afraid that if too many emails are sent, they will be marked as spam. Do any of you have experience sending many emails (thousands) weekly? What techniques do you use?
"A good answer for this question would be a real gold mine for a motivated spammer :)" Not really - as you'll see in that other thread, answers center on showing that you are the authoritative sender of the email, and on various aspects that are useless to spammers and useful to non-spammers who send a lot of email.
tags: [ "asp.net", "email" ]
question_score: 5 | answer_score: 2
view_count: 2,527 | answer_count: 1 | favorite_count: 0
question_creation_date: 2008-08-13T13:01:08.023000 | answer_creation_date: 2008-08-19T00:12:23.860000
question_id: 9,783 | answer_id: 9,798
Oracle XE or SQL Server Express
I'm starting a new project here (Windows Forms). What's the best option today for a small (free as in beer) DBMS? I've used SQL Server Express on past projects, but time and time again I hear people saying that the product from Oracle is faster and more powerful. It will be used in a small company (around 20 users) and will not reach the 4 GB limit any time soon :) I don't want to start a flame war on my first post, so please point me to some link showing a good (and current) comparison between the two products, if possible. PS: I've heard about IBM DB2 Express too, but I couldn't find any information about it. (Marketing material from IBM doesn't count :) )
Sorry, no link, but one piece of advice. Because we support both Oracle and SQL Server, I know that getting fixes for the 'normal' Oracle database is not something I would call fun. You have to pay for it, and if you have no tool which applies the updates to your Oracle system for you, it's a pain in the a.., if you ask me. Check out how Oracle XE is supported with updates/fixes. I don't know - I only use the 'normal' Oracle (Developer) database.
tags: [ "sql-server", "database", "oracle-xe" ]
question_score: 8 | answer_score: 3
view_count: 9,632 | answer_count: 9 | favorite_count: 0
question_creation_date: 2008-08-13T13:18:30.950000 | answer_creation_date: 2008-08-13T13:32:06.430000
question_id: 9,805 | answer_id: 9,851
Calculate DateTime Weeks into Rows
I am currently writing a small calendar in ASP.Net C#. Currently, to produce the rows of the weeks, I do the following for loop: var iWeeks = 6; for (int w = 0; w < iWeeks; w++) { This works fine; however, some months will only have 5 weeks, and in some rare cases, 4. How can I calculate the number of rows that will be required for a particular month? This is an example of what I am creating: As you can see, for the above month there are only 5 rows required. However, take this month (August 2008), which started on a Saturday and ends on a Monday in the 6th week/row. Image found on Google
Here is the method that does it:

public int GetWeekRows(int year, int month)
{
    DateTime firstDayOfMonth = new DateTime(year, month, 1);
    DateTime lastDayOfMonth = new DateTime(year, month, 1).AddMonths(1).AddDays(-1);
    System.Globalization.Calendar calendar = System.Threading.Thread.CurrentThread.CurrentCulture.Calendar;
    int lastWeek = calendar.GetWeekOfYear(lastDayOfMonth, System.Globalization.CalendarWeekRule.FirstFourDayWeek, DayOfWeek.Monday);
    int firstWeek = calendar.GetWeekOfYear(firstDayOfMonth, System.Globalization.CalendarWeekRule.FirstFourDayWeek, DayOfWeek.Monday);
    return lastWeek - firstWeek + 1;
}

You can customize the calendar week rule by modifying the System.Globalization.CalendarWeekRule.FirstFourDayWeek part. I hope the code is self-explanatory.
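The same "how many week rows does this month need" calculation can be cross-checked with Python's standard calendar module, which builds the month grid directly. This is a sketch, not the C# answer's method; note the answer depends on which day starts the week, just as the C# version depends on its week rule:

```python
import calendar

def week_rows(year, month, firstweekday=calendar.SUNDAY):
    """Number of week rows needed to render a month grid.

    monthdayscalendar returns one list per displayed week, so its
    length is exactly the row count the calendar needs.
    """
    cal = calendar.Calendar(firstweekday=firstweekday)
    return len(cal.monthdayscalendar(year, month))
```

For August 2008 (the question's example), Sunday-first weeks need 6 rows while Monday-first weeks need only 5, which is why a hard-coded iWeeks = 6 sometimes wastes a row.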
tags: [ "c#", "asp.net" ]
question_score: 8 | answer_score: 6
view_count: 5,890 | answer_count: 9 | favorite_count: 0
question_creation_date: 2008-08-13T13:38:56.043000 | answer_creation_date: 2008-08-13T14:03:39.223000
question_id: 9,831 | answer_id: 12,910
Mootools: Drag & Drop problems
I've asked this question on the forums on the Mootools website, and one person said that my class selection was corrupted before an admin came along and changed my post status to invalid. Needless to say, this did not help much. I then posted to a Google group for Mootools with no response. My question is: why don't the 'enter', 'leave' and 'drop' events fire for my '.drop' elements? The events for the .drag elements are working. Untitled Page
OK, it looks like there are a couple of issues here. As far as I can tell, there is no such thing as a "droppable" in Mootools. This means your events like 'enter', 'leave' and 'drop' won't work. (These are events on the drag object.) If you change those names to events that elements in Mootools have (as in, DOM events), your code works perfectly. For instance, if you change 'enter' and 'leave' to 'mouseover' and 'mouseout', your events fire with no problem. (Opera 9.51 on Windows Vista) This appears to be the relevant line in the documentation, which states to use DOM events: http://docs.mootools.net/Element/Element.Event#Element:addEvents Also on that page is a link to the events that regular elements can have: http://www.w3schools.com/html/html_eventattributes.asp However, the advice "TG in SD" gave you in the nabble forums is probably best. If you can, don't bother using these events. Put whatever it is you need to do in the draggable object, and save yourself all this hassle.
tags: [ "javascript", "drag-and-drop", "mootools" ]
question_score: 6 | answer_score: 3
view_count: 3,000 | answer_count: 2 | favorite_count: 0
question_creation_date: 2008-08-13T13:49:51.963000 | answer_creation_date: 2008-08-16T00:13:57.603000
question_id: 9,836 | answer_id: 9,848
Developer Friendly ERP
My company is currently using Sage MAS as its ERP system. While integrating our shopping cart with it is not going to be impossible, it uses COM and has its own challenges. I was wondering if there was a more developer-friendly ERP out there. I have looked into Microsoft Dynamics, but getting information on ERP systems that isn't a bunch of business jargon is next to impossible. I will be using C# 3.whatever and .NET MVC.
MS Dynamics is a very cool app. V3 was fully web-serviced; V4, I assume, even more so - all actions are exposed as web services. There is a big license hit on MS CRM due to "internet" licensing. We use CRM v3 in a totally .NET SOA here and it's great. You should have no problems doing the integration - security aside =>
tags: [ "c#", "erp" ]
question_score: 5 | answer_score: 2
view_count: 3,248 | answer_count: 3 | favorite_count: 0
question_creation_date: 2008-08-13T13:51:48.450000 | answer_creation_date: 2008-08-13T14:01:50.647000
question_id: 9,846 | answer_id: 10,301
Performance critical GUI application (windows,linux)
I've been tasked with updating a series of applications which are performance-critical VB.NET apps that essentially just monitor and return networking statistics. I've only got three requirements: convert it to C#, make it fast, and make it stable. One caveat is that we "may" migrate from a .NET platform to Linux "soon". I will be responsible for maintaining these apps in the future, so I'd like to do this right. I have decided to refactor these apps according to the MVP pattern so that I can properly unit test the hell out of this bad boy. But I was also thinking, since I am using MVP, that I could do the computationally expensive stuff in native C/C++ code while the GUI would be done with .NET forms, or Qt, or whatever. Questions: does it make sense to do a GUI in WinForms but the expensive stuff in native, unmanaged C/C++? Any recommendations for a good cross-platform windowing kit that would fit the scenario described above?
First off, I would put some time into trying out a few VB.NET to C# converters. You're basically porting syntax, and there's no reason to do that by hand if you don't have to. Sure, you might have to clean up what comes out of the converter, but that's way better than a by-hand conversion. Now, as for your questions: 1) Does it make sense to do a GUI in WinForms but the expensive stuff in native, unmanaged C/C++? Not yet. Wait until you've done the conversion, and then find out where you're actually spending your time. There's no reason to jump into mixing C/C++ with C# until you find out that it's necessary. You may find that dropping into unsafe C# is sufficient. Even that may be unnecessary; you might just need to optimize algorithms. Find out what your bottlenecks are and then decide how to fix them. 2) Any recommendations for a good cross-platform windowing kit that would fit the scenario described above? I'd be looking into Mono for sure. That's really the best you can do if you're going with C#. It's pretty much either Mono or another rewrite in another language when/if you move to Linux.
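The "find your bottlenecks before rewriting anything in native code" advice applies in any language. As a hedged illustration of what that measurement step looks like, here is a sketch using Python's built-in cProfile (the workload function is invented for the example and stands in for the app's hot path):

```python
import cProfile
import io
import pstats

def workload():
    # Stand-in for the app's hot path; profile the real code instead.
    return sum(i * i for i in range(100_000))

profiler = cProfile.Profile()
profiler.enable()
workload()
profiler.disable()

# Report the top entries by cumulative time; the functions at the top
# are the only ones worth considering for a native-code rewrite.
out = io.StringIO()
pstats.Stats(profiler, stream=out).sort_stats("cumulative").print_stats(5)
report = out.getvalue()
```

Only after a report like this shows a genuine hot spot does the C/C++ (or unsafe C#) question from point 1 even arise.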
[ ".net", "windows", "linux", "user-interface" ]
5
5
3,503
12
0
2008-08-13T14:00:55.280000
2008-08-13T19:47:04.350000
9,877
9,903
Personal Website Construction
I'm currently trying to build a personal website to create a presence on the web for myself. My plan is to include content such as my resume, any projects that I have done on my own, links to open source projects that I have contributed to, and so on. However, I'm not sure which approach would be better from the perspective of "advertising" myself, since that's what this site does, especially since I am a software developer. Should I use an out-of-the-box system and extend it as needed, with available modules and custom modules where needed, or should I custom-build the site and all of its features as I need them? Does a custom site look better in the eyes of a potential employer who might visit it?
I've toyed with this idea in the past but I don't think it's really a good idea, for a number of reasons. Firstly, there are a number of places that can take care of most of this without you needing to do the work or maintenance. Just signing up for a LinkedIn account, for example, will get most of your needs catered for in this regard. You can create your resume and bio information there and make it publicly viewable. The other issue with your "own site" is that if you don't update it often, the information gets stale, and worse yet, people have no reason to go back because "nothing has changed" - and that's not much of an advert for you, is it? Now that I've said all that, I'll make another recommendation: why not start a blog instead? If you've got decent experience, why not share it? I'd be willing to bet that this will be the best advert for your skills because: It's always updated (if you post often). It's not like you're looking for work doing it - but your (future) employer, or their developers, will check it out anyway to get a better insight into your character. Putting something on your resume doesn't mean you can do it. I'm not saying that you'd lie about your skills :-), but there's no argument about your ability when you're writing articles about the stuff, getting comments and feedback, and better yet, learning EVEN MORE about your passions. Best of all, you can run your blog from your chosen domain and also point to your resume stored on LinkedIn. Just an idea... That's my two pennies' worth - hope it helps you come to a decision!
[ "content-management-system" ]
33
27
4,230
11
0
2008-08-13T14:22:20.947000
2008-08-13T14:42:06.430000
9,882
12,466
Looking for algorithm that reverses the sprintf() function output
I am working on a project that requires the parsing of log files. I am looking for a fast algorithm that would take groups of messages like this: The temperature at P1 is 35F. The temperature at P1 is 40F. The temperature at P3 is 35F. Logger stopped. Logger started. The temperature at P1 is 40F. and put out something in the form of a printf(): "The temperature at P%d is %dF.", Int1, Int2 {(1, 35), (1, 40), (3, 35), (1, 40)} The algorithm needs to be generic enough to recognize almost any data load in message groups. I tried searching for this kind of technology, but I don't even know the correct terms to search for.
Overview: A naïve algorithm keeps track of the frequency of words in a per-column manner, where one can assume that each line can be separated into columns with a delimiter. Example input: The dog jumped over the moon The cat jumped over the moon The moon jumped over the moon The car jumped over the moon Frequencies: Column 1: {The: 4} Column 2: {car: 1, cat: 1, dog: 1, moon: 1} Column 3: {jumped: 4} Column 4: {over: 4} Column 5: {the: 4} Column 6: {moon: 4} We could partition these frequency lists further by grouping based on the total number of fields, but in this simple and convenient example we are only working with a fixed number of fields (6). The next step is to iterate through the lines which generated these frequency lists, so let's take the first example. The: meets some hand-wavy criteria and the algorithm decides it must be static. dog: doesn't appear to be static based on the rest of the frequency list, and thus it must be dynamic as opposed to static text. We loop through a few pre-defined regular expressions and come up with /[a-z]+/i. over: same deal as #1; it's static, so leave as is. the: same deal as #1; it's static, so leave as is. moon: same deal as #1; it's static, so leave as is. Thus, just from going over the first line, we can put together the following regular expression: /The ([a-z]+?) jumped over the moon/ Considerations: Obviously one can choose to scan part or the whole document for the first pass, as long as one is confident the frequency lists will be a sufficient sampling of the entire data. False positives may creep into the results, and it will be up to the filtering algorithm (hand-waving) to provide the best threshold between static and dynamic fields, or some human post-processing. The overall idea is probably a good one, but the actual implementation will definitely weigh in on the speed and efficiency of this algorithm.
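The per-column frequency idea above can be sketched in a few lines of Python. The 0.9 static threshold is an arbitrary stand-in for the "hand-wavy criteria", and real log lines would first need to be grouped by field count, as noted:

```python
from collections import Counter
import re

def infer_template(lines, static_threshold=0.9):
    """Guess a printf-style template from messages with the same word count.
    A column whose most common token covers >= static_threshold of the lines
    is treated as static text; anything else becomes a capture group."""
    rows = [line.split() for line in lines]
    assert len({len(r) for r in rows}) == 1, "group lines by field count first"
    pattern_parts = []
    for col in zip(*rows):  # iterate column by column
        token, count = Counter(col).most_common(1)[0]
        if count / len(col) >= static_threshold:
            pattern_parts.append(re.escape(token))  # static field: match literally
        else:
            # pick a capture class: all-numeric columns get \d+, else a word
            cls = r"(\d+)" if all(t.isdigit() for t in col) else r"(\w+)"
            pattern_parts.append(cls)
    return re.compile(" ".join(pattern_parts))
```

On the moon example this yields a pattern equivalent to `The (\w+) jumped over the moon`, which then extracts the dynamic field from each line.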
[ "algorithm", "parsing", "logging", "printf", "normalizing" ]
5
6
1,828
10
0
2008-08-13T14:25:58.133000
2008-08-15T16:45:12.813000
9,905
20,225
Check for hung Office process when using Office Automation
Is there a way to check to see if a Microsoft Office process (i.e. Word, Excel) has hung when using Office Automation? Additionally, if the process is hung, is there a way to terminate it?
Let me start off by saying that I don't recommend doing this in a service on a server, but I'll do my best to answer the questions. Running as a service makes it difficult to clean up. For example, will what you have running as a service survive killing a hung Word or Excel? You may be in a position to have to kill the service. Will your service stop if Word or Excel is in this state? One problem with trying to test if it is hung is that your test could cause a new instance of Word to start up and work, while the one that the service is running would still be hung. The best way to determine if it's hung is to ask it to do what it is supposed to be doing and check for the results. I would need to know more about what it is actually doing. Here are some commands to use in a batch file for cleaning up (both should be in the path): sc stop servicename - stops the service named servicename; sc start servicename - starts the service named servicename; sc query servicename - queries the status of servicename; taskkill /F /IM excel.exe - terminates all instances of excel.exe
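The cleanup commands above can be wrapped in a small sketch that builds them as argument lists ready to hand to subprocess.run(); the service name here is a hypothetical placeholder:

```python
import subprocess

def cleanup_commands(service_name, office_exe):
    """Build the cleanup steps from the answer as argument lists: stop the
    service, force-kill any hung Office instances, then restart the service."""
    return [
        ["sc", "stop", service_name],
        ["taskkill", "/F", "/IM", office_exe],
        ["sc", "start", service_name],
    ]

def run_cleanup(service_name, office_exe):
    # Would actually execute the commands on a Windows host; shown for shape.
    for cmd in cleanup_commands(service_name, office_exe):
        subprocess.run(cmd, check=False)
```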
[ "language-agnostic", "ms-office", "office-automation" ]
3
2
1,252
3
0
2008-08-13T14:42:52.887000
2008-08-21T15:24:36.957000
9,926
20,736
Analyzing Multithreaded Programs
We have a codebase that is several years old, and all the original developers are long gone. It uses many, many threads, but with no apparent design or common architectural principles. Every developer had his own style of multithreaded programming, so some threads communicate with one another using queues, some lock data with mutexes, some lock with semaphores, some use operating-system IPC mechanisms for intra-process communications. There is no design documentation, and comments are sparse. It's a mess, and it seems that whenever we try to refactor the code or add new functionality, we introduce deadlocks or other problems. So, does anyone know of any tools or techniques that would help to analyze and document all the interactions between threads? FWIW, the codebase is C++ on Linux, but I'd be interested to hear about tools for other environments. Update I appreciate the responses received so far, but I was hoping for something more sophisticated or systematic than advice that is essentially "add log messages, figure out what's going on, and fix it." There are lots of tools out there for analyzing and documenting control-flow in single-threaded programs; is there nothing available for multi-threaded programs? See also Debugging multithreaded applications
Invest in a copy of Intel's VTune and its thread profiling tools. It will give you both a system-level and a source-level view of thread behaviour. It's certainly not going to auto-document the thing for you, but it should be a real help in at least visualising what is happening in different circumstances. I think there is a trial version that you can download, so it may be worth giving that a go. I've only used the Windows version, but looking at the VTune webpage it also has a Linux version.
[ "multithreading", "concurrency" ]
7
6
2,989
7
0
2008-08-13T15:09:51.757000
2008-08-21T18:38:14.323000
9,932
9,957
What is a good way to open large files across a WAN?
I have an application deployed into multiple zones and there are some issues with opening larger documents (20-50MB) across the WAN. Currently the documents are stored in Zone 1 (Americas) with a link to the docs stored in the database. I have heard some things about BLOBs in Oracle and storing binary data in MS SQL Server 2005, and then perhaps copying the database to the other zones. Any other suggestions, or good results with one of the described options?
Your best option here may be caching the document in the requesting zone the first time it is requested, and pinging the source document's last-modified date each time the cached document is requested, in order to determine whether it needs to be refreshed. In this case you're only requesting a small piece of information (a date) across the WAN most of the times the document is accessed. This works best for a subset of documents that are frequently requested. If you have a large set of documents, each infrequently requested by a disparate group, then you may want to look into replicating the documents to each of your zones each time the master is updated. This may best be accomplished by storing the document as binary data in your master database and having the slaves pull from the master.
[ "sql-server", "database", "oracle" ]
1
2
415
2
0
2008-08-13T15:16:50.713000
2008-08-13T15:37:44.697000
9,938
9,972
Generic iterator
I am trying to find a generic way of accessing a set of containers. I have a standard vector and list in addition to another custom list. The custom list defines an iterator; class Iterator : public std::iterator { //... } Iterator begin() { return (Iterator(root)); } Iterator end() { return (Iterator(NULL)); } with the appropriate operators overloaded. Ideally, I would like to do this; class Foo { public: Foo() { std::list<int> x; std::vector<int> y; custom_list<int> z; iter = x.begin(); // OR iter = y.begin(); // OR iter = z.begin(); //... }; private: std::iterator iter; }; But obviously these are all iterators of different types. I can assume all the containers are of the same type, however. Is there an elegant way to solve this problem?
Here are some articles you might find of interest: "Giving STL Iterators a Base Class", "Type Erasure for C++ Iterators", and the any_iterator class reference.
[ "c++", "generics", "iteration", "containers" ]
34
22
19,966
3
0
2008-08-13T15:22:47.847000
2008-08-13T15:47:35.707000
9,974
9,996
Query times out from web app but runs fine from management studio
This is a question I asked on another forum which received some decent answers, but I wanted to see if anyone here has more insight. The problem is that you have one of your pages in a web application timing out when it gets to a stored procedure call, so you use SQL Profiler, or your application trace logs, to find the query, and you paste it into Management Studio to figure out why it's running slow. But you run it from there and it just blazes along, returning in less than a second each time. My particular case was using ASP.NET 2.0 and SQL Server 2005, but I think the problem could apply to any RDBMS.
This is what I've learned so far from my research. .NET sends in connection settings that are not the same as what you get when you log in to Management Studio. Here is what you see if you sniff the connection with SQL Profiler: -- network protocol: TCP/IP set quoted_identifier off set arithabort off set numeric_roundabort off set ansi_warnings on set ansi_padding on set ansi_nulls off set concat_null_yields_null on set cursor_close_on_commit off set implicit_transactions off set language us_english set dateformat mdy set datefirst 7 set transaction isolation level read committed I am now pasting those settings in above every query that I run when logged in to SQL Server, to make sure the settings are the same. For this case, I tried each setting individually, after disconnecting and reconnecting, and found that changing arithabort from off to on reduced the problem query from 90 seconds to 1 second. The most probable explanation is related to parameter sniffing, which is a technique SQL Server uses to pick what it thinks is the most effective query plan. When you change one of the connection settings, the query optimizer might choose a different plan, and in this case it apparently chose a bad one. But I'm not totally convinced of this. I have tried comparing the actual query plans after changing this setting and I have yet to see the diff show any changes. Is there something else about the arithabort setting that might cause a query to run slowly in some cases? The solution seemed simple: just put set arithabort on at the top of the stored procedure. But this could lead to the opposite problem: change the query parameters and suddenly it runs faster with 'off' than with 'on'. For the time being I am running the procedure 'with recompile' to make sure the plan gets regenerated each time. It's OK for this particular report, since it takes maybe a second to recompile, and this isn't too noticeable on a report that takes 1-10 seconds to return (it's a monster). But it's not an option for other queries that run much more frequently and need to return as quickly as possible, in just a few milliseconds.
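The "paste the settings above every query" workaround described here can be sketched as a tiny helper that prefixes a batch with explicit SET options; the stored procedure name below is a hypothetical placeholder, and this only equalizes session settings, not the underlying plan-cache behaviour:

```python
def with_session_settings(sql, settings=("ARITHABORT ON",)):
    """Prefix a T-SQL batch with explicit SET options so the application's
    connection compiles against the same plan-affecting settings used when
    testing interactively in Management Studio."""
    prefix = "\n".join(f"SET {setting};" for setting in settings)
    return f"{prefix}\n{sql}"

# Hypothetical stored procedure name, for illustration only.
batch = with_session_settings("EXEC dbo.MonsterReport @From = '2008-01-01';")
```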
[ "sql", "sql-server", "sql-server-2005" ]
33
31
19,392
8
0
2008-08-13T15:49:41.317000
2008-08-13T16:01:51.730000
9,977
10,045
What do you look for from a User Group?
I'm in the process of starting a User Group in my area related to .NET development. The format of the community will be the average free food, presentation, and then maybe free swag giveaway. What would you, as a member of a user community, look for in order to keep you coming back month to month?
It's true that some of the talks out there are very rudimentary; unfortunately, sometimes the bulk of your crowd may need that. I consider myself a novice in a lot of fields, but I've attended talks that I thought were beneath me and still people were asking very basic questions. Perhaps it would be worth having a bi-monthly user group, one week for entry level and one week for advanced. It doesn't necessarily have to mean twice the work; if you can get someone to help you coordinate, a lot of the work will overlap. On the other hand you might just need to feel out the members of the group and see what their average skill level is and play to that.
What do you look for from a User Group? I'm in the process of starting a User Group in my area related to .NET development. The format of the community will be the average free food, presentation, and then maybe free swag giveaway. What would you, as a member of a user community, look for in order to keep you coming back month to month?
TITLE: What do you look for from a User Group? QUESTION: I'm in the process of starting a User Group in my area related to .NET development. The format of the community will be the average free food, presentation, and then maybe free swag giveaway. What would you, as a member of a user community, look for in order to keep you coming back month to month? ANSWER: It's true that some of the talks out there are very rudimentary; unfortunately, sometimes the bulk of your crowd may need that. I consider myself a novice in a lot of fields, but I've attended talks that I thought were beneath me and still people were asking very basic questions. Perhaps it would be worth having a bi-monthly user group, one week for entry level and one week for advanced. It doesn't necessarily have to mean twice the work; if you can get someone to help you coordinate, a lot of the work will overlap. On the other hand you might just need to feel out the members of the group and see what their average skill level is and play to that.
[ ".net" ]
1
4
409
3
0
2008-08-13T15:50:56.510000
2008-08-13T16:30:28.303000
10,006
10,162
What is the easiest way to add compression to WCF in Silverlight?
I have a silverlight 2 beta 2 application that accesses a WCF web service. Because of this, it currently can only use basicHttp binding. The webservice will return fairly large amounts of XML data. This seems fairly wasteful from a bandwidth usage standpoint as the response, if zipped, would be smaller by a factor of 5 (I actually pasted the response into a txt file and zipped it). The request does have the "Accept-Encoding: gzip, deflate" - Is there any way to have the WCF service gzip (or otherwise compress) the response? I did find this link but it sure seems a bit complex for functionality that should be handled out-of-the-box IMHO. OK - at first I marked the solution using the System.IO.Compression as the answer as I could never "seem" to get the IIS7 dynamic compression to work. Well, as it turns out: Dynamic Compression on IIS7 was working all along. It is just that Nikhil's Web Developer Helper plugin for IE did not show it working. My guess is that since SL hands the web service call off to the browser, that the browser handles it "under the covers" and Nikhil's tool never sees the compressed response. I was able to confirm this by using Fiddler which monitors traffic external to the browser application. In Fiddler, the response was, in fact, gzip compressed!! The other problem with the System.IO.Compression solution is that System.IO.Compression does not exist in the Silverlight CLR. So from my perspective, the EASIEST way to enable WCF compression in Silverlight is to enable Dynamic Compression in IIS7 and write no code at all.
If you are using IIS7, take a look at the Compression Module. This allows you to configure compression for HTTP requests to your server.
What is the easiest way to add compression to WCF in Silverlight? I have a silverlight 2 beta 2 application that accesses a WCF web service. Because of this, it currently can only use basicHttp binding. The webservice will return fairly large amounts of XML data. This seems fairly wasteful from a bandwidth usage standpoint as the response, if zipped, would be smaller by a factor of 5 (I actually pasted the response into a txt file and zipped it). The request does have the "Accept-Encoding: gzip, deflate" - Is there any way to have the WCF service gzip (or otherwise compress) the response? I did find this link but it sure seems a bit complex for functionality that should be handled out-of-the-box IMHO. OK - at first I marked the solution using the System.IO.Compression as the answer as I could never "seem" to get the IIS7 dynamic compression to work. Well, as it turns out: Dynamic Compression on IIS7 was working all along. It is just that Nikhil's Web Developer Helper plugin for IE did not show it working. My guess is that since SL hands the web service call off to the browser, that the browser handles it "under the covers" and Nikhil's tool never sees the compressed response. I was able to confirm this by using Fiddler which monitors traffic external to the browser application. In Fiddler, the response was, in fact, gzip compressed!! The other problem with the System.IO.Compression solution is that System.IO.Compression does not exist in the Silverlight CLR. So from my perspective, the EASIEST way to enable WCF compression in Silverlight is to enable Dynamic Compression in IIS7 and write no code at all.
TITLE: What is the easiest way to add compression to WCF in Silverlight? QUESTION: I have a silverlight 2 beta 2 application that accesses a WCF web service. Because of this, it currently can only use basicHttp binding. The webservice will return fairly large amounts of XML data. This seems fairly wasteful from a bandwidth usage standpoint as the response, if zipped, would be smaller by a factor of 5 (I actually pasted the response into a txt file and zipped it). The request does have the "Accept-Encoding: gzip, deflate" - Is there any way to have the WCF service gzip (or otherwise compress) the response? I did find this link but it sure seems a bit complex for functionality that should be handled out-of-the-box IMHO. OK - at first I marked the solution using the System.IO.Compression as the answer as I could never "seem" to get the IIS7 dynamic compression to work. Well, as it turns out: Dynamic Compression on IIS7 was working all along. It is just that Nikhil's Web Developer Helper plugin for IE did not show it working. My guess is that since SL hands the web service call off to the browser, that the browser handles it "under the covers" and Nikhil's tool never sees the compressed response. I was able to confirm this by using Fiddler which monitors traffic external to the browser application. In Fiddler, the response was, in fact, gzip compressed!! The other problem with the System.IO.Compression solution is that System.IO.Compression does not exist in the Silverlight CLR. So from my perspective, the EASIEST way to enable WCF compression in Silverlight is to enable Dynamic Compression in IIS7 and write no code at all. ANSWER: If you are using IIS7, take a look at the Compression Module. This allows you to configure compression for HTTP requests to your server.
[ "wcf", "silverlight", "compression", "gzip" ]
24
10
19,691
4
0
2008-08-13T16:08:43.680000
2008-08-13T18:22:03.273000
10,038
10,054
Animation in javascript, a starting point
I understand how JS is run and I think I understand most of the DOM but I've no idea about animation. Does anybody here know of a good guide that can explain to me how it's done in Javascript? In addition, should I even consider Javascript for animation? Should I instead be looking to learn flash?
Avoid Flash; it's a horrible requirement, uncrawlable by Google, unsupported by a bunch of browsers and systems (e.g. iPhone) and most importantly: it forces you to reinvent web standards (e.g. scroll bars and whatnot). JavaScript, on the other hand, is easier to maintain and code for in the noscript case. Try scriptaculous for your animations; here's a quickie 3-line tutorial so you can see it working, here's a more complete tutorial, and here's the scriptaculous wiki. Note that there are a gazillion JS animation libraries, some really good; jQuery comes to mind. Usually they're just a script tag and an onclick event to set up. Good luck! /mp
Animation in javascript, a starting point I understand how JS is run and I think I understand most of the DOM but I've no idea about animation. Does anybody here know of a good guide that can explain to me how it's done in Javascript? In addition, should I even consider Javascript for animation? Should I instead be looking to learn flash?
TITLE: Animation in javascript, a starting point QUESTION: I understand how JS is run and I think I understand most of the DOM but I've no idea about animation. Does anybody here know of a good guide that can explain to me how it's done in Javascript? In addition, should I even consider Javascript for animation? Should I instead be looking to learn flash? ANSWER: Avoid Flash; it's a horrible requirement, uncrawlable by Google, unsupported by a bunch of browsers and systems (e.g. iPhone) and most importantly: it forces you to reinvent web standards (e.g. scroll bars and whatnot). JavaScript, on the other hand, is easier to maintain and code for in the noscript case. Try scriptaculous for your animations; here's a quickie 3-line tutorial so you can see it working, here's a more complete tutorial, and here's the scriptaculous wiki. Note that there are a gazillion JS animation libraries, some really good; jQuery comes to mind. Usually they're just a script tag and an onclick event to set up. Good luck! /mp
[ "javascript", "animation" ]
9
4
1,378
4
0
2008-08-13T16:26:10.463000
2008-08-13T16:36:44.780000
10,042
25,311
How do I create a Linked List Data Structure in Java?
What's the best way to make a linked list in Java?
The obvious solution to developers familiar with Java is to use the LinkedList class already provided in java.util. Say, however, you wanted to make your own implementation for some reason. Here is a quick example of a linked list that inserts a new link at the beginning of the list, deletes from the beginning of the list and loops through the list to print the links contained in it. Enhancements to this implementation include making it a double-linked list, adding methods to insert and delete from the middle or end, and adding get and sort methods as well. Note: In the example, the Link object doesn't actually contain another Link object - nextLink is actually only a reference to another link. class Link { public int data1; public double data2; public Link nextLink; //Link constructor public Link(int d1, double d2) { data1 = d1; data2 = d2; } //Print Link data public void printLink() { System.out.print("{" + data1 + ", " + data2 + "} "); } } class LinkList { private Link first; //LinkList constructor public LinkList() { first = null; } //Returns true if list is empty public boolean isEmpty() { return first == null; } //Inserts a new Link at the first of the list public void insert(int d1, double d2) { Link link = new Link(d1, d2); link.nextLink = first; first = link; } //Deletes the link at the first of the list public Link delete() { Link temp = first; if(first == null){ return null; //throw new NoSuchElementException(); // this is the better way.
} first = first.nextLink; return temp; } //Prints list data public void printList() { Link currentLink = first; System.out.print("List: "); while(currentLink != null) { currentLink.printLink(); currentLink = currentLink.nextLink; } System.out.println(""); } } class LinkListTest { public static void main(String[] args) { LinkList list = new LinkList(); list.insert(1, 1.01); list.insert(2, 2.02); list.insert(3, 3.03); list.insert(4, 4.04); list.insert(5, 5.05); list.printList(); while(!list.isEmpty()) { Link deletedLink = list.delete(); System.out.print("deleted: "); deletedLink.printLink(); System.out.println(""); } list.printList(); } }
How do I create a Linked List Data Structure in Java? What's the best way to make a linked list in Java?
TITLE: How do I create a Linked List Data Structure in Java? QUESTION: What's the best way to make a linked list in Java? ANSWER: The obvious solution to developers familiar with Java is to use the LinkedList class already provided in java.util. Say, however, you wanted to make your own implementation for some reason. Here is a quick example of a linked list that inserts a new link at the beginning of the list, deletes from the beginning of the list and loops through the list to print the links contained in it. Enhancements to this implementation include making it a double-linked list, adding methods to insert and delete from the middle or end, and adding get and sort methods as well. Note: In the example, the Link object doesn't actually contain another Link object - nextLink is actually only a reference to another link. class Link { public int data1; public double data2; public Link nextLink; //Link constructor public Link(int d1, double d2) { data1 = d1; data2 = d2; } //Print Link data public void printLink() { System.out.print("{" + data1 + ", " + data2 + "} "); } } class LinkList { private Link first; //LinkList constructor public LinkList() { first = null; } //Returns true if list is empty public boolean isEmpty() { return first == null; } //Inserts a new Link at the first of the list public void insert(int d1, double d2) { Link link = new Link(d1, d2); link.nextLink = first; first = link; } //Deletes the link at the first of the list public Link delete() { Link temp = first; if(first == null){ return null; //throw new NoSuchElementException(); // this is the better way.
} first = first.nextLink; return temp; } //Prints list data public void printList() { Link currentLink = first; System.out.print("List: "); while(currentLink != null) { currentLink.printLink(); currentLink = currentLink.nextLink; } System.out.println(""); } } class LinkListTest { public static void main(String[] args) { LinkList list = new LinkList(); list.insert(1, 1.01); list.insert(2, 2.02); list.insert(3, 3.03); list.insert(4, 4.04); list.insert(5, 5.05); list.printList(); while(!list.isEmpty()) { Link deletedLink = list.delete(); System.out.print("deleted: "); deletedLink.printLink(); System.out.println(""); } list.printList(); } }
[ "java", "data-structures", "linked-list" ]
136
220
427,911
6
0
2008-08-13T16:29:22.063000
2008-08-24T20:03:56.310000
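The head-insert / head-delete pattern in the Java answer above translates almost line-for-line to other languages. Here is a minimal Python sketch of the same structure; the class and method names mirror the Java example but are otherwise hypothetical:

```python
class Link:
    """One node: two data fields and a reference to the next node."""
    def __init__(self, d1, d2):
        self.data1 = d1
        self.data2 = d2
        self.next_link = None  # like Java's nextLink: only a reference, not a contained object


class LinkList:
    """Singly linked list with insertion and deletion at the head."""
    def __init__(self):
        self.first = None

    def is_empty(self):
        return self.first is None

    def insert(self, d1, d2):
        link = Link(d1, d2)
        link.next_link = self.first  # new node points at the old head
        self.first = link            # new node becomes the head

    def delete(self):
        if self.first is None:
            return None  # or raise, as the Java comment suggests
        temp = self.first
        self.first = self.first.next_link
        return temp

    def to_list(self):
        """Walk the chain and collect (data1, data2) pairs, like printList()."""
        out, current = [], self.first
        while current is not None:
            out.append((current.data1, current.data2))
            current = current.next_link
        return out
```

As in the Java version, insert() prepends, so the list comes back in reverse insertion order, and delete() pops the most recently inserted node.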
10,059
10,555
Removing icon from Windows title bars without ditching close button or system menu?
I'm developing an MFC application and I've recently been searching for a good method to remove the icon from a Windows title bar, but retain the close button. The two obvious candidate solutions are turning off the system menu style or using the tool window style, but I would prefer not to disable the system menu or use the shrunken tool window title bar. Many MFC applications have this functionality, so I wonder: am I missing some standard way of doing this?
You can use WM_NCRBUTTONDOWN to detect if the user has right-clicked on your caption and then bring up the system menu.
Removing icon from Windows title bars without ditching close button or system menu? I'm developing an MFC application and I've recently been searching for a good method to remove the icon from a Windows title bar, but retain the close button. The two obvious candidate solutions are turning off the system menu style or using the tool window style, but I would prefer not to disable the system menu or use the shrunken tool window title bar. Many MFC applications have this functionality, so I wonder: am I missing some standard way of doing this?
TITLE: Removing icon from Windows title bars without ditching close button or system menu? QUESTION: I'm developing an MFC application and I've recently been searching for a good method to remove the icon from a Windows title bar, but retain the close button. The two obvious candidate solutions are turning off the system menu style or using the tool window style, but I would prefer not to disable the system menu or use the shrunken tool window title bar. Many MFC applications have this functionality, so I wonder: am I missing some standard way of doing this? ANSWER: You can use WM_NCRBUTTONDOWN to detect if the user has right-clicked on your caption and then bring up the system menu.
[ "c++", "windows", "user-interface", "mfc" ]
7
2
4,184
6
0
2008-08-13T16:42:40.570000
2008-08-14T01:28:13.567000
10,071
10,157
Compact Framework/Threading - MessageBox displays over other controls after option is chosen
I'm working on an app that grabs and installs a bunch of updates off an external server, and need some help with threading. The user follows this process: Clicks button Method checks for updates, count is returned. If greater than 0, then ask the user if they want to install using MessageBox.Show(). If yes, it runs through a loop and calls BeginInvoke() on the run() method of each update to run it in the background. My update class has some events that are used to update a progress bar etc. The progress bar updates are fine, but the MessageBox is not fully cleared from the screen because the update loop starts right after the user clicks yes (see screenshot below). What should I do to make the messagebox disappear instantly before the update loop starts? Should I be using Threads instead of BeginInvoke()? Should I be doing the initial update check on a separate thread and calling MessageBox.Show() from that thread? Code // Button clicked event handler code... DialogResult dlgRes = MessageBox.Show( string.Format("There are {0} updates available.\n\nInstall these now?", um2.Updates.Count), "Updates Available", MessageBoxButtons.YesNo, MessageBoxIcon.Question, MessageBoxDefaultButton.Button2 ); if (dlgRes == DialogResult.Yes) { ProcessAllUpdates(um2); } // Processes a bunch of items in a loop private void ProcessAllUpdates(UpdateManager2 um2) { for (int i = 0; i < um2.Updates.Count; i++) { Update2 update = um2.Updates[i]; ProcessSingleUpdate(update); int percentComplete = Utilities.CalculatePercentCompleted(i, um2.Updates.Count); UpdateOverallProgress(percentComplete); } } // Process a single update with IAsyncResult private void ProcessSingleUpdate(Update2 update) { update.Action.OnStart += Action_OnStart; update.Action.OnProgress += Action_OnProgress; update.Action.OnCompletion += Action_OnCompletion; //synchronous //update.Action.Run(); // async IAsyncResult ar = this.BeginInvoke((MethodInvoker)delegate() { update.Action.Run(); }); } Screenshot
Your UI isn't updating because all the work is happening in the user interface thread. Your call to: this.BeginInvoke((MethodInvoker)delegate() {update.Action.Run(); }) is saying invoke update.Action.Run() on the thread that created "this" (your form), which is the user interface thread. Application.DoEvents() will indeed give the UI thread the chance to redraw the screen, but I'd be tempted to create a new delegate, and call BeginInvoke on that. This will execute the update.Action.Run() function on a separate thread allocated from the thread pool. You can then keep checking the IAsyncResult until the update is complete, querying the update object for its progress after every check (because you can't have the other thread update the progress bar/UI), then calling Application.DoEvents(). You are also supposed to call EndInvoke() afterwards, otherwise you may end up leaking resources. I would also be tempted to put a cancel button on the progress dialog, and add a timeout, otherwise if the update gets stuck (or takes too long) then your application will have locked up forever.
Compact Framework/Threading - MessageBox displays over other controls after option is chosen I'm working on an app that grabs and installs a bunch of updates off an external server, and need some help with threading. The user follows this process: Clicks button Method checks for updates, count is returned. If greater than 0, then ask the user if they want to install using MessageBox.Show(). If yes, it runs through a loop and calls BeginInvoke() on the run() method of each update to run it in the background. My update class has some events that are used to update a progress bar etc. The progress bar updates are fine, but the MessageBox is not fully cleared from the screen because the update loop starts right after the user clicks yes (see screenshot below). What should I do to make the messagebox disappear instantly before the update loop starts? Should I be using Threads instead of BeginInvoke()? Should I be doing the initial update check on a separate thread and calling MessageBox.Show() from that thread? Code // Button clicked event handler code...
DialogResult dlgRes = MessageBox.Show( string.Format("There are {0} updates available.\n\nInstall these now?", um2.Updates.Count), "Updates Available", MessageBoxButtons.YesNo, MessageBoxIcon.Question, MessageBoxDefaultButton.Button2 ); if (dlgRes == DialogResult.Yes) { ProcessAllUpdates(um2); } // Processes a bunch of items in a loop private void ProcessAllUpdates(UpdateManager2 um2) { for (int i = 0; i < um2.Updates.Count; i++) { Update2 update = um2.Updates[i]; ProcessSingleUpdate(update); int percentComplete = Utilities.CalculatePercentCompleted(i, um2.Updates.Count); UpdateOverallProgress(percentComplete); } } // Process a single update with IAsyncResult private void ProcessSingleUpdate(Update2 update) { update.Action.OnStart += Action_OnStart; update.Action.OnProgress += Action_OnProgress; update.Action.OnCompletion += Action_OnCompletion; //synchronous //update.Action.Run(); // async IAsyncResult ar = this.BeginInvoke((MethodInvoker)delegate() { update.Action.Run(); }); } Screenshot
TITLE: Compact Framework/Threading - MessageBox displays over other controls after option is chosen QUESTION: I'm working on an app that grabs and installs a bunch of updates off an external server, and need some help with threading. The user follows this process: Clicks button Method checks for updates, count is returned. If greater than 0, then ask the user if they want to install using MessageBox.Show(). If yes, it runs through a loop and calls BeginInvoke() on the run() method of each update to run it in the background. My update class has some events that are used to update a progress bar etc. The progress bar updates are fine, but the MessageBox is not fully cleared from the screen because the update loop starts right after the user clicks yes (see screenshot below). What should I do to make the messagebox disappear instantly before the update loop starts? Should I be using Threads instead of BeginInvoke()? Should I be doing the initial update check on a separate thread and calling MessageBox.Show() from that thread? Code // Button clicked event handler code...
DialogResult dlgRes = MessageBox.Show( string.Format("There are {0} updates available.\n\nInstall these now?", um2.Updates.Count), "Updates Available", MessageBoxButtons.YesNo, MessageBoxIcon.Question, MessageBoxDefaultButton.Button2 ); if (dlgRes == DialogResult.Yes) { ProcessAllUpdates(um2); } // Processes a bunch of items in a loop private void ProcessAllUpdates(UpdateManager2 um2) { for (int i = 0; i < um2.Updates.Count; i++) { Update2 update = um2.Updates[i]; ProcessSingleUpdate(update); int percentComplete = Utilities.CalculatePercentCompleted(i, um2.Updates.Count); UpdateOverallProgress(percentComplete); } } // Process a single update with IAsyncResult private void ProcessSingleUpdate(Update2 update) { update.Action.OnStart += Action_OnStart; update.Action.OnProgress += Action_OnProgress; update.Action.OnCompletion += Action_OnCompletion; //synchronous //update.Action.Run(); // async IAsyncResult ar = this.BeginInvoke((MethodInvoker)delegate() { update.Action.Run(); }); } Screenshot ANSWER: Your UI isn't updating because all the work is happening in the user interface thread. Your call to: this.BeginInvoke((MethodInvoker)delegate() {update.Action.Run(); }) is saying invoke update.Action.Run() on the thread that created "this" (your form), which is the user interface thread. Application.DoEvents() will indeed give the UI thread the chance to redraw the screen, but I'd be tempted to create a new delegate, and call BeginInvoke on that. This will execute the update.Action.Run() function on a separate thread allocated from the thread pool. You can then keep checking the IAsyncResult until the update is complete, querying the update object for its progress after every check (because you can't have the other thread update the progress bar/UI), then calling Application.DoEvents().
You are also supposed to call EndInvoke() afterwards, otherwise you may end up leaking resources. I would also be tempted to put a cancel button on the progress dialog, and add a timeout, otherwise if the update gets stuck (or takes too long) then your application will have locked up forever.
[ "c#", "winforms", "multithreading", "compact-framework" ]
4
6
4,563
3
0
2008-08-13T16:55:51.387000
2008-08-13T18:17:23.483000
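The fix the answer above suggests — submit the work to a pool thread instead of the UI thread, poll the IAsyncResult, and pump events while waiting — can be sketched outside of WinForms too. The following Python sketch uses concurrent.futures as a stand-in; run_update and pump_events are hypothetical placeholders for update.Action.Run() and Application.DoEvents():

```python
import concurrent.futures
import time

def run_update(update_id):
    """Stand-in for update.Action.Run(): slow work that must not block the UI thread."""
    time.sleep(0.05)
    return "update %d done" % update_id

def process_all_updates(update_ids, pump_events):
    """Run each update on a pool thread, polling so the 'UI' thread stays responsive.

    pump_events plays the role of Application.DoEvents(): it is called while
    waiting, which is where a real UI thread would repaint and handle input.
    """
    results = []
    with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
        for uid in update_ids:
            future = pool.submit(run_update, uid)  # like BeginInvoke on a new delegate
            while not future.done():               # like polling IAsyncResult.IsCompleted
                pump_events()
                time.sleep(0.01)
            results.append(future.result())        # like EndInvoke: collect the result, surface errors
    return results
```

The key point mirrors the answer: the slow call runs on a worker thread, and the "UI" thread only polls and pumps, so it never blocks long enough to leave a stale MessageBox on screen. Collecting future.result() at the end plays the role of EndInvoke().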
10,083
10,107
better command for Windows?
While I grew up using MSWindows, I transitioned to my much-loved Mac years ago. I don't want to start a flame war here on operating systems. I do, however, want a terminal a little closer to what I'm used to. I'm not asking for full POSIX support - I don't have the patience to install Cygwin - but I miss tabbed terminals, being able to easily cut and paste, and my good friends ls, mkdir, rm, et al. (For these last ones, I could always put .bat files on my path, but that's going to get old fast.) Anybody have a terminal application for MSWindows XP?
Some more options: MSYS: a Minimal SYStem providing a POSIX compatible Bourne shell environment, with a small collection of UNIX command line tools. Primarily developed as a means to execute the configure scripts and Makefiles used to build Open Source software, but also useful as a general purpose command line interface to replace Windows cmd.exe. GNU utilities for Win32: ports of common GNU utilities to native Win32. In this context, native means the executables depend only on the Microsoft C-runtime (msvcrt.dll) and not an emulation layer like that provided by Cygwin tools.
better command for Windows? While I grew up using MSWindows, I transitioned to my much-loved Mac years ago. I don't want to start a flame war here on operating systems. I do, however, want a terminal a little closer to what I'm used to. I'm not asking for full POSIX support - I don't have the patience to install Cygwin - but I miss tabbed terminals, being able to easily cut and paste, and my good friends ls, mkdir, rm, et al. (For these last ones, I could always put .bat files on my path, but that's going to get old fast.) Anybody have a terminal application for MSWindows XP?
TITLE: better command for Windows? QUESTION: While I grew up using MSWindows, I transitioned to my much-loved Mac years ago. I don't want to start a flame war here on operating systems. I do, however, want a terminal a little closer to what I'm used to. I'm not asking for full POSIX support - I don't have the patience to install Cygwin - but I miss tabbed terminals, being able to easily cut and paste, and my good friends ls, mkdir, rm, et al. (For these last ones, I could always put .bat files on my path, but that's going to get old fast.) Anybody have a terminal application for MSWindows XP? ANSWER: Some more options: MSYS: a Minimal SYStem providing a POSIX compatible Bourne shell environment, with a small collection of UNIX command line tools. Primarily developed as a means to execute the configure scripts and Makefiles used to build Open Source software, but also useful as a general purpose command line interface to replace Windows cmd.exe. GNU utilities for Win32: ports of common GNU utilities to native Win32. In this context, native means the executables depend only on the Microsoft C-runtime (msvcrt.dll) and not an emulation layer like that provided by Cygwin tools.
[ "windows", "macos", "command-line", "terminal" ]
13
6
6,553
10
0
2008-08-13T17:06:04.127000
2008-08-13T17:27:16.440000
10,088
10,180
Where is TFS work item help text displayed?
I'm creating some custom work items in TFS and the helptext field seems handy but I don't see where it is being displayed in Team Explorer or Team System Web Access. Where is this information displayed?
When you hover over the type of the field. For instance create a new bug and hover over the "rank" and you should see "Stack rank used to prioritize work"
Where is TFS work item help text displayed? I'm creating some custom work items in TFS and the helptext field seems handy but I don't see where it is being displayed in Team Explorer or Team System Web Access. Where is this information displayed?
TITLE: Where is TFS work item help text displayed? QUESTION: I'm creating some custom work items in TFS and the helptext field seems handy but I don't see where it is being displayed in Team Explorer or Team System Web Access. Where is this information displayed? ANSWER: When you hover over the type of the field. For instance create a new bug and hover over the "rank" and you should see "Stack rank used to prioritize work"
[ "visual-studio", "tfs" ]
3
2
591
1
0
2008-08-13T17:09:26.210000
2008-08-13T18:30:52.723000
10,098
10,192
Best mock framework that can do both WebForms and MVC?
I'm getting into more of a TDD workflow, and have a mix of MVC and asp.net Web Forms apps. MOQ is recommended for MVC. I've used Rhino for Web Forms. Does anyone have a best practice for having 1 framework mock for both?
Rhino's latest release includes much of the sweet sweet 3.5 love that MoQ has. I'm a fan of MoQ, so that's what I'm using. But I also have Rhino, in case it does something that MoQ doesn't do. TL;DR: MoQ it baby.
Best mock framework that can do both WebForms and MVC? I'm getting into more of a TDD workflow, and have a mix of MVC and asp.net Web Forms apps. MOQ is recommended for MVC. I've used Rhino for Web Forms. Does anyone have a best practice for having 1 framework mock for both?
TITLE: Best mock framework that can do both WebForms and MVC? QUESTION: I'm getting into more of a TDD workflow, and have a mix of MVC and asp.net Web Forms apps. MOQ is recommended for MVC. I've used Rhino for Web Forms. Does anyone have a best practice for having 1 framework mock for both? ANSWER: Rhino's latest release includes much of the sweet sweet 3.5 love that MoQ has. I'm a fan of MoQ, so that's what I'm using. But I also have Rhino, in case it does something that MoQ doesn't do. TL;DR: MoQ it baby.
[ "c#", "asp.net", "asp.net-mvc", "testing" ]
2
1
3,601
5
0
2008-08-13T17:16:33.857000
2008-08-13T18:38:46.733000
10,123
10,129
How do I treat an integer as an array of bytes in Python?
I'm trying to decode the result of the Python os.wait() function. This returns, according to the Python docs: a tuple containing its pid and exit status indication: a 16-bit number, whose low byte is the signal number that killed the process, and whose high byte is the exit status (if the signal number is zero); the high bit of the low byte is set if a core file was produced. How do I decode the exit status indication (which is an integer) to obtain the high and low byte? To be specific, how do I implement the decode function used in the following code snippet: (pid,status) = os.wait() (exitstatus, signum) = decode(status)
This will do what you want: signum = status & 0xff exitstatus = (status & 0xff00) >> 8
[ "python" ]
15
14
5,096
7
0
2008-08-13T17:46:41.940000
2008-08-13T17:56:34.167000
10,149
10,161
How do I bind a regular expression to a key combination in emacs?
For context, I am something of an emacs newbie. I haven't used it for very long, but have been using it more and more (I like it a lot). Also I'm comfortable with lisp, but not super familiar with elisp.

What I need to do is bind a regular expression to a keyboard combination because I use this particular regex so often. What I've been doing:

    M-C-s ^.*Table\(\(.*\n\)*?GO\)

Note, I used newline above, but I've found that for isearch-forward-regexp, you really need to replace the \n in the regular expression with the result of C-q C-j. This inserts a literal newline (without ending the command), enabling me to put a newline into the expression and match across lines.

How can I bind this to a key combination? I vaguely understand that I need to create an elisp function which executes isearch-forward-regexp with the expression, but I'm fuzzy on the details. I've searched google and found most documentation to be a tad confusing.

Mike Stone had the best answer so far -- not exactly what I was looking for, but it worked for what I needed.

Edit: this sort of worked, but after storing the macro, when I went back to use it later, I couldn't use it with C-x e. (i.e., if I reboot emacs and then type M-x macro-name, and then C-x e, I get a message in the minibuffer like 'no last kbd macro' or something similar.)

@Mike Stone - Thanks for the information. I tried creating a macro like so:

    C-x( M-C-s ^.*Table\(\(.*C-q C-j\)*?GO\) C-x)

This created my macro, but when I executed it I didn't get the same highlighting that I ordinarily get when I use isearch-forward-regexp. Instead it just jumped to the end of the next match of the expression. So that doesn't really work for what I need. Any ideas?

Edit: It looks like I can use macros to do what I want, I just have to think outside the box of isearch-forward-regexp. I'll try what you suggested.
You can use macros: just do C-x ( then do everything for the macro, then C-x ) to end the macro; after that, C-x e will execute the last defined macro. Then you can name it using M-x name-last-kbd-macro, which lets you assign a name to it that you can then invoke with M-x TESTIT. Store the definition using M-x insert-kbd-macro, which will put the macro into your current buffer, and then you can save it in your .emacs file.

Example:

    C-x( abc *return* C-x)

Will define a macro to type "abc" and press return.

    C-x e e e

Executes the above macro immediately, 3 times (the first e executes it, then the following two e's execute it twice more).

    M-x name-last-kbd-macro testit

Names the macro "testit".

    M-x testit

Executes the just-named macro (prints "abc" then return).

    M-x insert-kbd-macro

Puts the following in your current buffer:

    (fset 'testit [?a ?b ?c return])

Which can then be saved in your .emacs file to use the named macro over and over again after restarting emacs.
[ "regex", "emacs", "lisp" ]
6
6
1,882
4
0
2008-08-13T18:15:09.340000
2008-08-13T18:21:06.830000
10,190
10,194
How to return a page of results from SQL?
Many applications have grids that display data from a database table one page at a time. Many of them also let the user pick the number of records per page, sort by any column, and navigate back and forth through the results. What's a good algorithm to implement this pattern without bringing the entire table to the client and then filtering the data on the client? How do you bring just the records you want to display to the user? Does LINQ simplify the solution?
On MS SQL Server 2005 and above, ROW_NUMBER() seems to work. T-SQL: Paging with ROW_NUMBER()

    DECLARE @PageNum AS INT;
    DECLARE @PageSize AS INT;
    SET @PageNum = 2;
    SET @PageSize = 10;

    WITH OrdersRN AS
    (
        SELECT ROW_NUMBER() OVER(ORDER BY OrderDate, OrderID) AS RowNum,
               OrderID, OrderDate, CustomerID, EmployeeID
        FROM dbo.Orders
    )
    SELECT *
    FROM OrdersRN
    WHERE RowNum BETWEEN (@PageNum - 1) * @PageSize + 1
                     AND @PageNum * @PageSize
    ORDER BY OrderDate, OrderID;
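The WHERE clause above is just window arithmetic over the generated row numbers. The same bounds calculation can be sketched in a few lines of Python (function name is illustrative):

```python
def page_bounds(page_num, page_size):
    """Return the (first, last) 1-based row numbers for a page,
    matching the BETWEEN clause in the ROW_NUMBER() query."""
    first = (page_num - 1) * page_size + 1
    last = page_num * page_size
    return first, last

print(page_bounds(2, 10))  # (11, 20) -- page 2 of size 10 shows rows 11..20
print(page_bounds(1, 10))  # (1, 10)
```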
[ ".net", "sql", "linq", "pagination" ]
11
11
20,141
8
0
2008-08-13T18:38:02.740000
2008-08-13T18:40:32.220000
10,205
10,262
Delimited string parsing?
I'm looking at parsing a delimited string, something on the order of a,b,c. But this is a very simple example, and parsing delimited data can get complex; for instance 1,"Your simple algorithm, it fails",True would blow your naive string.Split implementation to bits. Is there anything I can freely use/steal/copy and paste that offers a relatively bulletproof solution to parsing delimited text? .NET, plox.

Update: I decided to go with the TextFieldParser, which is part of VB.NET's pile of goodies hidden away in Microsoft.VisualBasic.DLL.
I use this to read from a file:

    string filename = @textBox1.Text;
    string[] fields;
    string[] delimiter = new string[] {"|"};
    using (Microsoft.VisualBasic.FileIO.TextFieldParser parser =
           new Microsoft.VisualBasic.FileIO.TextFieldParser(filename))
    {
        parser.Delimiters = delimiter;
        parser.HasFieldsEnclosedInQuotes = false;
        while (!parser.EndOfData)
        {
            fields = parser.ReadFields();
            // Do what you need
        }
    }

I am sure someone here can transform this to parse a string that is in memory.
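For comparison, the quoted-field handling that TextFieldParser provides (when HasFieldsEnclosedInQuotes is on) is the same problem the csv module in Python's standard library solves. A rough sketch parsing the question's tricky line:

```python
import csv
import io

line = '1,"Your simple algorithm, it fails",True'

# csv.reader understands quoted fields, so the embedded comma
# inside the quotes does not split the field.
fields = next(csv.reader(io.StringIO(line)))
print(fields)  # ['1', 'Your simple algorithm, it fails', 'True']
```

The same reader also works over an in-memory list of lines, which is the string-in-memory case the answer leaves as an exercise.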
[ "c#", "text-parsing", "csv" ]
2
4
4,366
9
0
2008-08-13T18:46:44.517000
2008-08-13T19:18:26.730000
10,228
10,321
Run PHPUnit Tests in Certain Order
Is there a way to get the tests inside of a TestCase to run in a certain order? For example, I want to separate the life cycle of an object from creation to use to destruction but I need to make sure that the object is set up first before I run the other tests.
Maybe there is a design problem in your tests. Usually each test must not depend on any other tests, so they can run in any order. Each test needs to instantiate and destroy everything it needs to run; that would be the perfect approach. You should never share objects and state between tests. Can you be more specific about why you need the same object for N tests?
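The per-test setup the answer recommends is the same pattern in any xUnit-style framework. A sketch in Python's unittest (class and method names are illustrative): setUp runs before every test, so each one gets a fresh object and the order never matters.

```python
import unittest

class Widget:
    """Illustrative object with a create -> use lifecycle."""
    def __init__(self):
        self.used = False

    def use(self):
        self.used = True

class WidgetLifecycleTest(unittest.TestCase):
    def setUp(self):
        # Runs before *every* test method: each test gets its own
        # fresh Widget, so no state leaks between tests.
        self.widget = Widget()

    def test_starts_unused(self):
        self.assertFalse(self.widget.used)

    def test_use_marks_used(self):
        self.widget.use()
        self.assertTrue(self.widget.used)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(WidgetLifecycleTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # True
```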
[ "php", "unit-testing", "phpunit" ]
82
62
45,304
9
0
2008-08-13T19:02:23.660000
2008-08-13T20:13:42.830000
10,229
10,484
Multithreading Design Best Practice
Consider this problem: I have a program which should fetch (let's say) 100 records from a database, and then for each one it should get updated information from a web service. There are two ways to introduce parallelism in this scenario: I start each request to the web service on a new Thread. The number of simultaneous threads is controlled by some external parameter (or dynamically adjusted somehow). I create smaller batches (let's say of 10 records each) and launch each batch on a separate thread (so taking our example, 10 threads). Which is a better approach, and why do you think so?
Option 3 is the best: use async IO.

Unless your request processing is complex and heavy, your program is going to spend 99% of its time waiting for the HTTP requests. This is exactly what async IO is designed for - let the Windows networking stack (or the .NET framework or whatever) worry about all the waiting, and just use a single thread to dispatch and 'pick up' the results.

Unfortunately the .NET framework makes it a right pain in the ass. It's easier if you're just using raw sockets or the Win32 API. Here's a (tested!) example using C# 3 anyway:

    using System.Net; // need this somewhere

    // need to declare a class so we can cast our state object back out
    class RequestState
    {
        public WebRequest Request { get; set; }
    }

    static void Main(string[] args)
    {
        // stupid cast necessary to create the request
        HttpWebRequest request =
            WebRequest.Create("http://www.stackoverflow.com") as HttpWebRequest;

        request.BeginGetResponse(
            /* callback to be invoked when finished */
            (asyncResult) => {
                // fetch the request object out of the AsyncState
                var state = (RequestState)asyncResult.AsyncState;
                var webResponse =
                    state.Request.EndGetResponse(asyncResult) as HttpWebResponse;

                // there we go
                Debug.Assert(webResponse.StatusCode == HttpStatusCode.OK);
                Console.WriteLine("Got Response from server: " + webResponse.Server);
            },
            /* pass the request through to our callback */
            new RequestState { Request = request });

        // blah
        Console.WriteLine("Waiting for response. Press a key to quit");
        Console.ReadKey();
    }

EDIT: In the case of .NET, the 'completion callback' actually gets fired in a ThreadPool thread, not in your main thread, so you will still need to lock any shared resources, but it still saves you all the trouble of managing threads.
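The "one thread dispatches everything, the waiting is overlapped" idea is language-independent. A minimal sketch in Python's asyncio, with a sleep standing in for the web-service call (no real network, names are illustrative): all 100 simulated requests are in flight concurrently on a single thread.

```python
import asyncio

async def fetch(record_id):
    # Stand-in for a web-service call: while this "request" is
    # waiting, the single thread is free to run all the others.
    await asyncio.sleep(0.01)
    return f"updated-{record_id}"

async def main():
    records = range(100)
    # Dispatch all 100 "requests" at once and pick up the results;
    # gather preserves the input order.
    return await asyncio.gather(*(fetch(r) for r in records))

results = asyncio.run(main())
print(len(results), results[0])  # 100 updated-0
```

Total elapsed time is roughly one sleep interval, not 100, which is the whole point of overlapping the waits instead of spawning a thread per request.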
[ ".net", "multithreading" ]
3
6
732
4
0
2008-08-13T19:03:21.427000
2008-08-13T23:35:50.270000
10,230
10,233
Checking for string contents? string Length Vs Empty String
Which is more efficient for the compiler and the best practice for checking whether a string is blank? Checking whether the length of the string == 0 Checking whether the string is empty (strVar == "") Also, does the answer depend on language?
Yes, it depends on language, since string storage differs between languages. Pascal-type strings: Length = 0. C-style strings: s[0] == 0. .NET: String.IsNullOrEmpty. Etc.
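For a concrete case: in Python the two checks from the question agree for any non-None string, and the idiomatic form is a third option, the truthiness test.

```python
s = ""

# All three are equivalent for a string that is not None:
print(len(s) == 0)  # True -- length check
print(s == "")      # True -- comparison against the empty literal
print(not s)        # True -- idiomatic Python truthiness test
```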
[ "string", "optimization", "language-agnostic", "compiler-construction" ]
20
19
20,287
14
0
2008-08-13T19:03:27.683000
2008-08-13T19:05:42.357000
10,260
10,811
What are the useful new ASP.NET features in the .NET Framework 3.5?
I've kept up to date with new features in the C# language as it's moved from version 1 through version 3. I haven't done such a good job keeping up to date with ASP.NET. I feel like some of the post version 1 features are not so good (e.g. the AJAX framework) or are just not that useful to me (e.g. the membership framework). Can anyone recommend any new killer ASP.NET features that might have gone unnoticed?
For ASP.NET, you have a lot of improvements:

- split view (code and design)
- faster switching between code and design view
- embedded master pages (one master page in another)
- javascript debugging

Anyway, most of the useful stuff is really in the meat of the language, and for .NET 3.5 the new language features for C# 3.0 will be (and yes, I find ALL of them useful):

- anonymous objects
- automatic properties
- object initializers
- collection initializers (inline initialization for collections)
- implicit typing (var keyword)
- lambda expressions
- LINQ
- extension methods

I might have forgotten a few, but I think this is about most of the new cool and useful stuff.
[ "asp.net" ]
7
8
637
13
0
2008-08-13T19:18:07.667000
2008-08-14T08:39:11.060000
10,274
10,287
When should I not use the ThreadPool in .Net?
When should I not use the ThreadPool in .NET? It looks like the best option is to use a ThreadPool, in which case, why is it not the only option? What are your experiences around this?
The only reason why I wouldn't use the ThreadPool for cheap multithreading is if I need to:

- interact with the method while it's running (e.g., to kill it)
- run code on an STA thread (this happened to me)
- keep the thread alive after my application has died (ThreadPool threads are background threads)
- change the priority of the thread (we cannot change the priority of threads in the ThreadPool, which is Normal by default)

P.S.: The MSDN article "The Managed Thread Pool" contains a section titled "When Not to Use Thread Pool Threads", with a very similar but slightly more complete list of possible reasons for not using the thread pool.

There are lots of reasons why you would need to skip the ThreadPool, but if you don't know them then the ThreadPool should be good enough for you.

Alternatively, look at the new Parallel Extensions Framework, which has some neat stuff in there that may suit your needs without having to use the ThreadPool.
[ "c#", ".net", "multithreading", "design-decisions" ]
40
18
17,943
9
0
2008-08-13T19:24:22.337000
2008-08-13T19:32:25.253000
10,300
45,655
Validating posted form data in the ASP.NET MVC framework
I've been playing around with the ASP.NET MVC Framework and the one thing that's really confusing me is how I'm meant to do server side validation of posted form data. I presume I don't post back to the same URL, but if I don't, how do I redisplay the form with the entered data and error messages? Also, where should the validation logic go? In the model or the controller? This seems to be one of the few areas where web forms are much stronger (I miss the validation controls).
You might want to take a look at ScottGu's latest post on ASP.NET MVC Preview 5. It walks through a validation sample that is very interesting: http://weblogs.asp.net/scottgu/archive/2008/09/02/asp-net-mvc-preview-5-and-form-posting-scenarios.aspx
[ "asp.net-mvc", "validation" ]
10
4
6,236
6
0
2008-08-13T19:45:33.917000
2008-09-05T12:29:11.297000
10,308
10,461
Graphical representation of SVN branch/merge activity
Are you aware of any tool that creates diagrams showing the branch/merge activity in an SVN repository? We've all seen these diagrams in various tutorials. Some good, some not so good. Can they be created automatically (or maybe with a little prodding -- you might have to tell it what your branching philosophy is: dev-test-prod, branch-per-release, etc.)? I'm looking at the TortoiseSVN Revision Graph right now, but it has more detail than I want and the wrong layout.

Orion, thanks for the response. I guess since branching and merging are more a convention for managing files in a repository than a built-in feature of SVN, it would be pretty tough. I'll stick with the poorly-drawn diagram at the top of the whiteboard in our team's office.
Prior to SVN 1.5 (which has been out all of a month or so), it didn't track merges at all, so the bits where branches 'reconnect' to the trunk are impossible for it to do anyway.
[ "svn" ]
32
2
26,768
4
0
2008-08-13T19:55:06.393000
2008-08-13T23:08:59.777000
10,309
10,338
What's the best way of finding ALL your memory when developing on the Compact Framework?
I've used the CF Remote Performance Monitor, however this seems to only track memory initialised in the managed world as opposed to the unmanaged world. Well, I can only presume this as the numbers listed in the profiler are way short of the maximum allowed (32mb on CE 5). Profiling a particular app with the RPM showed me that the total usage of all the caches only manages to get to about 12mb and then slowly shrinks as (I assume) something unmanaged starts to claim more memory. The memory slider in System also shows that the device is very short on memory. If I kill the process the slider shows all the memory coming back. So it must (?) be this managed process that is swallowing the memory. Is there any simple(ish?) fashion how one can track unmanaged memory usage in some way that might enable me to match it up with the corresponding P/Invoke calls? EDIT: To all you re-taggers it isn't .NET, tagging the question like this confuses things. It's .NETCF / Compact Framework. I know they appear to be similar but they're different because .NET rocks whereas CF is basically just a wrapper around NotImplementedException.
Try enabling Interop logging. Also, if you have access to the code of the native dll you are using, check this out: http://msdn.microsoft.com/en-us/netframework/bb630228.aspx
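The idea behind interop logging is simply to record every native call so that memory growth can be correlated with call sites afterwards. A generic sketch of that call-logging pattern (not CF-specific; `fake_pinvoke_alloc` is a made-up stand-in for a real P/Invoke call), in Python:

```python
import functools

CALL_LOG = []  # one (name, args) entry per native call, for later correlation

def log_native_call(func):
    """Record each call so memory growth can be matched to call sites."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        CALL_LOG.append((func.__name__, args))
        return func(*args, **kwargs)
    return wrapper

@log_native_call
def fake_pinvoke_alloc(size):
    # Stand-in for a P/Invoke call that allocates unmanaged memory.
    return bytearray(size)
```

With every boundary crossing logged, a sudden jump in unmanaged usage can be lined up against the most recent entries in the log.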
[ "compact-framework", "windows-ce" ]
6
3
1,016
3
0
2008-08-13T19:56:52.707000
2008-08-13T20:33:46.253000
10,313
30,296
Can I copy files to a Network Place from a script or the command line?
Is it possible, in Windows XP, to copy files to a Network Place from the command line, a batch file or, even better, a PowerShell script? What sent me down this road of research was trying to publish files to a WSS 3.0 document library from a user's machine. I can't map a drive to the library in question because the WSS site is only available to authenticate via NTLM on a port other than 80 or 443. I suppose I could alternately use the WSS web services to push the files out, but I'm really curious about the answer to this question now.
From what I'm seeing, it seems that it's not possible to directly access/manipulate a Network Place from the command line, be it in PowerShell or the plain ol' command prompt. @slipsec (#13899): Thanks for the suggestion, but PowerShell doesn't support the port number in the destination path any more than the net use command does. So the best thing I can figure to do in my particular situation is bite the bullet and buy something like WebDrive to enable drive mapping via WebDAV on non-standard ports, or do some different configuration (e.g. separate web app with a different IP address and host headers) in SharePoint to expose the site via NTLM on a standard port.
[ "powershell", "scripting", "command-line", "moss", "wss" ]
12
1
58,615
9
0
2008-08-13T20:02:06.533000
2008-08-27T14:35:37.993000
10,314
11,994
How do you name your instance/param values?
Being new to Objective-C (but a long term C/++) programmer I'm looking for advice/recommendations on naming conventions for variables. My personal preference would be to utilize a prefix for instance variables both for clarity within functions and to prevent shadowing of function parameters. However I'm a fan of properties which rules out prefixes (unless you also prefix your property names, which doesn't work too well and looks daft). Similarly I could use the "self.variable" convention, but only if I make EVERYTHING a property. So given the code below what's your preferred naming style for instance/function variables? And if you don't bother, how do you deal with shadowing on function params? @interface GridItem: NSObject { CGRect _rect;... } @end -(void) initFromRect:(CGRect)rect { _rect = rect;... } Cheers!
Most Cocoa projects use underbar as a non-IBOutlet instance variable prefix, and use no prefix for IBOutlet instance variables. The reason I don't use underbars for IBOutlet instance variables is that when a nib file is loaded, if you have a setter method for a connected outlet, that setter will be called. However this mechanism does not use Key-Value Coding, so an IBOutlet whose name is prefixed with an underbar (e.g. _myField) will not be set unless the setter is named exactly like the outlet (e.g. set_myField:), which is non-standard and gross. Also, be aware that using properties like self.myProp is not the same as accessing instance variables. You are sending a message when you use a property, just like if you used bracket notation like [self myProp]. All properties do is give you a concise syntax for specifying both the getter and setter in a single line, and allow you to synthesize their implementation; they do not actually short-circuit the message dispatch mechanism. If you want to access an instance variable directly but prefix it with self you need to treat self as a pointer, like self->myProp which really is a C-style field access. Finally, never use Hungarian notation when writing Cocoa code, and shy away from other prefixes like "f" and "m_" — that will mark the code as having been written by someone who doesn't "get it" and will cause it to be viewed with suspicion by other Cocoa developers. In general, follow the advice in the Coding Guidelines for Cocoa document at the Apple Developer Connection, and other developers will be able to pick up and understand your code, and your code will work well with all of the Cocoa features that use runtime introspection.
Here's what a window controller class might look like, using my conventions: // EmployeeWindowController.h #import <Cocoa/Cocoa.h> @interface EmployeeWindowController: NSWindowController { @private // model object this window is presenting Employee *_employee; // outlets connected to views in the window IBOutlet NSTextField *nameField; IBOutlet NSTextField *titleField; } - (id)initWithEmployee:(Employee *)employee; @property(readwrite, retain) Employee *employee; @end // EmployeeWindowController.m #import "EmployeeWindowController.h" @implementation EmployeeWindowController @synthesize employee = _employee; - (id)initWithEmployee:(Employee *)employee { if (self = [super initWithWindowNibName:@"Employee"]) { _employee = [employee retain]; } return self; } - (void)dealloc { [_employee release]; [super dealloc]; } - (void)windowDidLoad { // populates the window's controls, not necessary if using bindings [nameField setStringValue:self.employee.name]; [titleField setStringValue:self.employee.title]; } @end You'll see that I'm using the instance variable that references an Employee directly in my -init and -dealloc method, while I'm using the property in other methods. That's generally a good pattern with properties: Only ever touch the underlying instance variable for a property in initializers, in -dealloc, and in the getter and setter for the property.
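The distinction being made here — accessor for general use, direct backing-field access only in construction and teardown — exists in other languages too. A Python sketch of the same pattern (an analogy, not Cocoa; the counter is only there to prove the accessor really is a method call):

```python
class Employee:
    def __init__(self, name):
        # Initializers touch the backing field directly,
        # mirroring direct ivar access in -init / -dealloc.
        self._name = name
        self.accessor_calls = 0

    @property
    def name(self):
        # Every read of self.name dispatches through this method,
        # just as self.myProp sends a message in Objective-C.
        self.accessor_calls += 1
        return self._name
```

Going through the property keeps any side effects (observation, lazy setup, counting here) intact, which is exactly why it should be bypassed only where those side effects are unwanted.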
[ "objective-c", "cocoa" ]
4
15
3,113
10
0
2008-08-13T20:03:53.503000
2008-08-15T05:32:40.293000
10,323
10,350
Why doesn't Ruby have a real StringBuffer or StringIO?
I recently read a nice post on using StringIO in Ruby. What the author doesn't mention, though, is that StringIO is just an "I." There's no "O." You can't do this, for example: s = StringIO.new s << 'foo' s << 'bar' s.to_s # => should be "foo\nbar" # => really is '' Ruby really needs a StringBuffer just like the one Java has. StringBuffers serve two important purposes. First, they let you test the output half of what Ruby's StringIO does. Second, they are useful for building up long strings from small parts -- something that Joel reminds us over and over again is otherwise very very slow. Is there a good replacement? It's true that Strings in Ruby are mutable, but that doesn't mean we should always rely on that functionality. If stuff is large, the performance and memory requirements of this, for example, is really bad. result = stuff.map(&:to_s).join(' ') The "correct" way to do this in Java is: result = StringBuffer.new("") for(String s: stuff) { result.append(s); } Though my Java is a bit rusty.
I looked at the ruby documentation for StringIO, and it looks like what you want is StringIO#string, not StringIO#to_s Thus, change your code to: s = StringIO.new s << 'foo' s << 'bar' s.string
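Python's io.StringIO makes the very same distinction the answer points out in Ruby — the object's string representation versus its accumulated contents — so the comparison may help if you know Python:

```python
import io

s = io.StringIO()
s.write("foo")
s.write("bar")

# str(s) just describes the buffer object; getvalue() returns what
# was written to it -- the same split as StringIO#to_s vs
# StringIO#string in Ruby.
contents = s.getvalue()
```

In both languages the buffer works fine for output; the trap is only that the default to-string conversion is not the method that extracts the contents.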
[ "ruby", "string", "io", "buffer" ]
57
128
31,690
5
0
2008-08-13T20:19:09.807000
2008-08-13T20:41:29.153000
10,349
10,359
In WinForms, why can't you update UI controls from other threads?
I'm sure there is a good (or at least decent) reason for this. What is it?
Because you can easily end up with a deadlock (among other issues). For example, your secondary thread could be trying to update the UI control, but the UI control will be waiting for a resource locked by the secondary thread to be released, so both threads end up waiting for each other to finish. As others have commented this situation is not unique to UI code, but is particularly common. In other languages such as C++ you are free to try and do this (without an exception being thrown as in WinForms), but your application may freeze and stop responding should a deadlock occur. Incidentally, you can easily tell the UI thread that you want to update a control, just create a delegate, then call the (asynchronous) BeginInvoke method on that control passing it your delegate. E.g. myControl.BeginInvoke(myControl.UpdateFunction); This is the equivalent to doing a C++/MFC PostMessage from a worker thread
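The BeginInvoke idea — marshalling all updates onto the one thread that owns the UI, instead of mutating it from workers — can be sketched generically with a queue that only the owning thread drains. A Python analogy (the "UI" is just a list here, purely for illustration):

```python
import queue
import threading

ui_queue = queue.Queue()
results = []  # stands in for UI state that only one thread may touch

def ui_thread():
    # Only this thread mutates the "UI"; everyone else posts work
    # items to the queue, analogous to Control.BeginInvoke.
    while True:
        item = ui_queue.get()
        if item is None:  # shutdown sentinel
            break
        results.append(item())

def worker():
    # A worker never touches `results` directly; it posts a delegate.
    ui_queue.put(lambda: "label updated")

t = threading.Thread(target=ui_thread)
t.start()
w = threading.Thread(target=worker)
w.start()
w.join()
ui_queue.put(None)  # tell the UI loop to exit
t.join()
```

Because every mutation funnels through one thread, no locking of the UI state is needed and the lock-ordering deadlock described above cannot occur.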
[ "winforms", "multithreading" ]
18
8
8,290
7
0
2008-08-13T20:41:24.927000
2008-08-13T20:47:12.760000
10,366
11,197
Style display not working in Firefox, Opera, Safari - (IE7 is OK)
I have an absolutely positioned div that I want to show when the user clicks a link. The onclick of the link calls a js function that sets the display of the div to block (also tried: "", inline, table-cell, inline-table, etc). This works great in IE7, not at all in every other browser I've tried (FF2, FF3, Opera 9.5, Safari). I've tried adding alerts before and after the call, and they show that the display has changed from none to block but the div does not display. I can get the div to display in FF3 if I change the display value using Firebug's HTML inspector (but not by running javascript through Firebug's console) - so I know it's not just showing up off-screen, etc. I've tried everything I can think of, including: Using a different doctype (XHTML 1, HTML 4, etc) Using visibility visible/hidden instead of display block/none Using inline javascript instead of a function call Testing from different machines Any ideas about what could cause this?
Since setting the properties with javascript never seemed to work, but setting using Firebug's inspect did, I started to suspect that the javascript ID selector was broken - maybe there were multiple items in the DOM with the same ID? The source didn't show that there were, but looping through all divs using javascript I found that that was the case. Here's the function I ended up using to show the popup: function openPopup(popupID) { var divs = getObjectsByTagAndClass('div', 'popupDiv'); if (divs != undefined && divs != null) { for (var i = 0; i < divs.length; i++) { if (divs[i].id == popupID) divs[i].style.display = 'block'; } } } (utility function getObjectsByTagAndClass not listed) Ideally I'll find out why the same item is being inserted multiple times, but I don't have control over the rendering platform, just its inputs. So when debugging issues like this, remember to check for duplicate IDs in the DOM, which can break getElementById. To everyone who answered, thanks for your help!
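The underlying lesson — getElementById silently returns only one of several elements when IDs collide — means a duplicate-ID scan is worth automating. A language-neutral sketch of that check, in Python over a list of (tag, id) pairs (hypothetical data standing in for a parsed DOM):

```python
from collections import Counter

def find_duplicate_ids(elements):
    """Return IDs that occur more than once in a list of (tag, id)
    pairs -- the ones that would make getElementById unreliable."""
    counts = Counter(el_id for _, el_id in elements if el_id)
    return sorted(el_id for el_id, n in counts.items() if n > 1)
```

Running a check like this against the live DOM (rather than the page source, which hid the duplicates here) is what exposed the problem.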
[ "javascript", "html", "css" ]
15
7
44,719
9
0
2008-08-13T20:59:54.360000
2008-08-14T15:23:10.273000
10,412
10,423
How can a Word document be created in C#?
I have a project where I would like to generate a report export in MS Word format. The report will include images/graphs, tables, and text. What is the best way to do this? Third party tools? What are your experiences?
The answer is going to depend slightly upon if the application is running on a server or if it is running on the client machine. If you are running on a server then you are going to want to use one of the XML based office generation formats as there are known issues when using Office Automation on a server. However, if you are working on the client machine then you have a choice of either using Office Automation or using the Office Open XML format (see links below), which is supported by Microsoft Office 2000 and up either natively or through service packs. One drawback to this though is that you might not be able to embed some kinds of graphs or images that you wish to show. The best way to go about things will all depend slightly upon how much time you have to invest in development. If you go the route of Office Automation there are quite a few good tutorials out there that can be found via Google and is fairly simple to learn. However, the Open Office XML format is fairly new so you might find the learning curve to be a bit higher. Office Open XML Information Office Open XML - http://en.wikipedia.org/wiki/Office_Open_XML OpenXML Developer - http://openxmldeveloper.org/default.aspx Introducing the Office (2007) Open XML File Formats - http://msdn.microsoft.com/en-us/library/aa338205.aspx
[ "c#", ".net", "ms-word", "openxml" ]
74
49
82,427
17
0
2008-08-13T22:07:45.173000
2008-08-13T22:19:48.597000
10,435
267,243
Impersonation in IIS 7.0
I have a website that works correctly under IIS 6.0: It authenticates users with windows credentials, and then when talking to the service that hits the DB, it passes the credentials. In IIS 7.0, the same config settings do not pass the credentials, and the DB gets hit with NT AUTHORITY\ANONYMOUS. Is there something I'm missing? I've turned ANONYMOUS access off in my IIS 7.0 website, but I can't get the thing to work. These are the settings that I'm using on both IIS 6.0 and 7.0: What changed from 6.0 to 7.0?
There have been changes between IIS 7.0 and IIS 6.0. I found one blog post that might actually help you (click here to see it). Are you running your application in Integrated Mode or in Classic Mode? From what I saw, setting the Impersonate attribute to true should show you a 500 error with the following error message: Internal Server Error. This is HTTP Error 500.19: The requested page cannot be accessed because the related configuration data for the page is invalid. Here is the workaround that is proposed: Workaround: 1) If your application does not rely on impersonating the requesting user in the BeginRequest and AuthenticateRequest stages (the only stages where impersonation is not possible in Integrated mode), ignore this error by adding the following to your application’s web.config: /> 2) If your application does rely on impersonation in BeginRequest and AuthenticateRequest, or you are not sure, move to classic mode. I hope that helps you understand how IIS 7.0 now works.
Impersonation in IIS 7.0 I have a website that works correctly under IIS 6.0: It authenticates users with windows credentials, and then when talking to the service that hits the DB, it passes the credentials. In IIS 7.0, the same config settings do not pass the credentials, and the DB gets hit with NT AUTHORITY\ANONYMOUS. Is there something I'm missing? I've turned ANONYMOUS access off in my IIS 7.0 website, but I can't get the thing to work. These are the settings that I'm using on both IIS 6.0 and 7.0: What changed from 6.0 to 7.0?
TITLE: Impersonation in IIS 7.0 QUESTION: I have a website that works correctly under IIS 6.0: It authenticates users with windows credentials, and then when talking to the service that hits the DB, it passes the credentials. In IIS 7.0, the same config settings do not pass the credentials, and the DB gets hit with NT AUTHORITY\ANONYMOUS. Is there something I'm missing? I've turned ANONYMOUS access off in my IIS 7.0 website, but I can't get the thing to work. These are the settings that I'm using on both IIS 6.0 and 7.0: What changed from 6.0 to 7.0? ANSWER: There have been changes between IIS 7.0 and IIS 6.0. I found one blog post that might actually help you (click here to see it). Are you running your application in Integrated Mode or in Classic Mode? From what I saw, setting the Impersonate attribute to true should show you a 500 error with the following error message: Internal Server Error. This is HTTP Error 500.19: The requested page cannot be accessed because the related configuration data for the page is invalid. Here is the workaround that is proposed: Workaround: 1) If your application does not rely on impersonating the requesting user in the BeginRequest and AuthenticateRequest stages (the only stages where impersonation is not possible in Integrated mode), ignore this error by adding the following to your application’s web.config: /> 2) If your application does rely on impersonation in BeginRequest and AuthenticateRequest, or you are not sure, move to classic mode. I hope that helps you understand how IIS 7.0 now works.
[ "configuration", "iis-7", "impersonation" ]
7
8
14,440
4
0
2008-08-13T22:31:17.153000
2008-11-06T00:13:52.377000
10,456
10,463
HowTo Disable WebBrowser 'Click Sound' in your app only
The 'click sound' in question is actually a system wide preference, so I only want it to be disabled when my application has focus and then re-enable when the application closes/loses focus. Originally, I wanted to ask this question here on stackoverflow, but I was not yet in the beta. So, after googling for the answer and finding only a little bit of information on it I came up with the following and decided to post it here now that I'm in the beta. using System; using Microsoft.Win32; namespace HowTo { class WebClickSound { /// /// Enables or disables the web browser navigating click sound. /// public static bool Enabled { get { RegistryKey key = Registry.CurrentUser.OpenSubKey(@"AppEvents\Schemes\Apps\Explorer\Navigating\.Current"); string keyValue = (string)key.GetValue(null); return String.IsNullOrEmpty(keyValue) == false && keyValue!= "\"\""; } set { string keyValue; if (value) { keyValue = "%SystemRoot%\\Media\\"; if (Environment.OSVersion.Version.Major == 5 && Environment.OSVersion.Version.Minor > 0) { // XP keyValue += "Windows XP Start.wav"; } else if (Environment.OSVersion.Version.Major == 6) { // Vista keyValue += "Windows Navigation Start.wav"; } else { // Don't know the file name so I won't be able to re-enable it return; } } else { keyValue = "\"\""; } // Open and set the key that points to the file RegistryKey key = Registry.CurrentUser.OpenSubKey(@"AppEvents\Schemes\Apps\Explorer\Navigating\.Current", true); key.SetValue(null, keyValue, RegistryValueKind.ExpandString); isEnabled = value; } } } } Then in the main form we use the above code in these 3 events: Activated Deactivated FormClosing private void Form1_Activated(object sender, EventArgs e) { // Disable the sound when the program has focus WebClickSound.Enabled = false; } private void Form1_Deactivate(object sender, EventArgs e) { // Enable the sound when the program is out of focus WebClickSound.Enabled = true; } private void Form1_FormClosing(object sender, FormClosingEventArgs e) { // 
Enable the sound on app exit WebClickSound.Enabled = true; } The one problem I see currently is if the program crashes they won't have the click sound until they re-launch my application, but they wouldn't know to do that. What do you guys think? Is this a good solution? What improvements can be made?
I've noticed that if you use WebBrowser.Document.Write rather than WebBrowser.DocumentText then the click sound doesn't happen. So instead of this: webBrowser1.DocumentText = " Hello, world! "; try this: webBrowser1.Document.OpenNew(true); webBrowser1.Document.Write(" Hello, world! ");
HowTo Disable WebBrowser 'Click Sound' in your app only The 'click sound' in question is actually a system wide preference, so I only want it to be disabled when my application has focus and then re-enable when the application closes/loses focus. Originally, I wanted to ask this question here on stackoverflow, but I was not yet in the beta. So, after googling for the answer and finding only a little bit of information on it I came up with the following and decided to post it here now that I'm in the beta. using System; using Microsoft.Win32; namespace HowTo { class WebClickSound { /// /// Enables or disables the web browser navigating click sound. /// public static bool Enabled { get { RegistryKey key = Registry.CurrentUser.OpenSubKey(@"AppEvents\Schemes\Apps\Explorer\Navigating\.Current"); string keyValue = (string)key.GetValue(null); return String.IsNullOrEmpty(keyValue) == false && keyValue!= "\"\""; } set { string keyValue; if (value) { keyValue = "%SystemRoot%\\Media\\"; if (Environment.OSVersion.Version.Major == 5 && Environment.OSVersion.Version.Minor > 0) { // XP keyValue += "Windows XP Start.wav"; } else if (Environment.OSVersion.Version.Major == 6) { // Vista keyValue += "Windows Navigation Start.wav"; } else { // Don't know the file name so I won't be able to re-enable it return; } } else { keyValue = "\"\""; } // Open and set the key that points to the file RegistryKey key = Registry.CurrentUser.OpenSubKey(@"AppEvents\Schemes\Apps\Explorer\Navigating\.Current", true); key.SetValue(null, keyValue, RegistryValueKind.ExpandString); isEnabled = value; } } } } Then in the main form we use the above code in these 3 events: Activated Deactivated FormClosing private void Form1_Activated(object sender, EventArgs e) { // Disable the sound when the program has focus WebClickSound.Enabled = false; } private void Form1_Deactivate(object sender, EventArgs e) { // Enable the sound when the program is out of focus WebClickSound.Enabled = true; } private void 
Form1_FormClosing(object sender, FormClosingEventArgs e) { // Enable the sound on app exit WebClickSound.Enabled = true; } The one problem I see currently is if the program crashes they won't have the click sound until they re-launch my application, but they wouldn't know to do that. What do you guys think? Is this a good solution? What improvements can be made?
TITLE: HowTo Disable WebBrowser 'Click Sound' in your app only QUESTION: The 'click sound' in question is actually a system wide preference, so I only want it to be disabled when my application has focus and then re-enable when the application closes/loses focus. Originally, I wanted to ask this question here on stackoverflow, but I was not yet in the beta. So, after googling for the answer and finding only a little bit of information on it I came up with the following and decided to post it here now that I'm in the beta. using System; using Microsoft.Win32; namespace HowTo { class WebClickSound { /// /// Enables or disables the web browser navigating click sound. /// public static bool Enabled { get { RegistryKey key = Registry.CurrentUser.OpenSubKey(@"AppEvents\Schemes\Apps\Explorer\Navigating\.Current"); string keyValue = (string)key.GetValue(null); return String.IsNullOrEmpty(keyValue) == false && keyValue!= "\"\""; } set { string keyValue; if (value) { keyValue = "%SystemRoot%\\Media\\"; if (Environment.OSVersion.Version.Major == 5 && Environment.OSVersion.Version.Minor > 0) { // XP keyValue += "Windows XP Start.wav"; } else if (Environment.OSVersion.Version.Major == 6) { // Vista keyValue += "Windows Navigation Start.wav"; } else { // Don't know the file name so I won't be able to re-enable it return; } } else { keyValue = "\"\""; } // Open and set the key that points to the file RegistryKey key = Registry.CurrentUser.OpenSubKey(@"AppEvents\Schemes\Apps\Explorer\Navigating\.Current", true); key.SetValue(null, keyValue, RegistryValueKind.ExpandString); isEnabled = value; } } } } Then in the main form we use the above code in these 3 events: Activated Deactivated FormClosing private void Form1_Activated(object sender, EventArgs e) { // Disable the sound when the program has focus WebClickSound.Enabled = false; } private void Form1_Deactivate(object sender, EventArgs e) { // Enable the sound when the program is out of focus WebClickSound.Enabled = true; } 
private void Form1_FormClosing(object sender, FormClosingEventArgs e) { // Enable the sound on app exit WebClickSound.Enabled = true; } The one problem I see currently is if the program crashes they won't have the click sound until they re-launch my application, but they wouldn't know to do that. What do you guys think? Is this a good solution? What improvements can be made? ANSWER: I've noticed that if you use WebBrowser.Document.Write rather than WebBrowser.DocumentText then the click sound doesn't happen. So instead of this: webBrowser1.DocumentText = " Hello, world! "; try this: webBrowser1.Document.OpenNew(true); webBrowser1.Document.Write(" Hello, world! ");
[ "c#", ".net", "winforms" ]
28
12
14,862
5
0
2008-08-13T23:01:01.477000
2008-08-13T23:10:16.810000
10,475
10,479
Touch Typing Software recommendations
Since the keyboard is the interface we use to the computer, I've always thought touch typing should be something I should learn, but I've always been, well, lazy is the word. So, can anyone recommend any good touch typing software? It's easy enough to google, but I'd like to hear recommendations.
Typing of the Dead! It's a good few years old so you may have to hunt around, but it's a lot of fun and as well as the main game there are numerous minigames to practice specific areas you may be weak on.
Touch Typing Software recommendations Since the keyboard is the interface we use to the computer, I've always thought touch typing should be something I should learn, but I've always been, well, lazy is the word. So, can anyone recommend any good touch typing software? It's easy enough to google, but I'd like to hear recommendations.
TITLE: Touch Typing Software recommendations QUESTION: Since the keyboard is the interface we use to the computer, I've always thought touch typing should be something I should learn, but I've always been, well, lazy is the word. So, can anyone recommend any good touch typing software? It's easy enough to google, but I'd like to hear recommendations. ANSWER: Typing of the Dead! It's a good few years old so you may have to hunt around, but it's a lot of fun and as well as the main game there are numerous minigames to practice specific areas you may be weak on.
[ "editor", "touch-typing" ]
26
28
8,117
14
0
2008-08-13T23:29:38.707000
2008-08-13T23:32:27.560000
10,477
10,538
Equidistant points across Bezier curves
Currently, I'm attempting to make multiple beziers have equidistant points. I'm using cubic interpolation to find the points, but because of the way beziers work, some areas are denser than others, which proves unsuitable for texture mapping because of the variable distance. Is there a way to find points on a bezier by distance rather than by percentage? Furthermore, is it possible to extend this to multiple connected curves?
The distance between P_0 and P_3 (in cubic form) is straightforward, but I think you knew that. Distance on a curve is just arc length: \int_{t_0}^{t_1} |P'(t)| \, dt, where P'(t) = [x', y', z'] = [dx(t)/dt, dy(t)/dt, dz(t)/dt]. Probably, you'd have t_0 = 0, t_1 = 1.0, and dz(t) = 0 (2d plane).
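In practice the arc-length integral above is inverted numerically: sample the curve finely, accumulate chord lengths into a table of s(t), then interpolate backwards to find the t that corresponds to each equally spaced distance. A pure-Python sketch of that idea (function names are illustrative, not from any library):

```python
import math

def cubic_bezier(p0, p1, p2, p3, t):
    """Evaluate a cubic Bezier at parameter t; points are (x, y) tuples."""
    u = 1.0 - t
    return tuple(u**3 * a + 3*u**2*t * b + 3*u*t**2 * c + t**3 * d
                 for a, b, c, d in zip(p0, p1, p2, p3))

def equidistant_points(p0, p1, p2, p3, n_points, n_samples=1000):
    """Return n_points (>= 2) spaced approximately equally by arc length."""
    ts = [i / (n_samples - 1) for i in range(n_samples)]
    pts = [cubic_bezier(p0, p1, p2, p3, t) for t in ts]
    # Cumulative chord-length table: a numeric stand-in for the
    # integral of |P'(t)| dt.
    arc = [0.0]
    for a, b in zip(pts, pts[1:]):
        arc.append(arc[-1] + math.dist(a, b))
    total = arc[-1]
    out, j = [], 0
    for k in range(n_points):
        target = total * k / (n_points - 1)
        while j + 2 < len(arc) and arc[j + 1] < target:
            j += 1                            # advance the table pointer
        span = arc[j + 1] - arc[j]
        frac = (target - arc[j]) / span if span > 0.0 else 0.0
        t = ts[j] + frac * (ts[j + 1] - ts[j])  # invert s(t) by lerp
        out.append(cubic_bezier(p0, p1, p2, p3, t))
    return out
```

For multiple connected curves, the same table inversion works: concatenate each curve's chord lengths into one running total and remember which curve each table entry came from.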
Equidistant points across Bezier curves Currently, I'm attempting to make multiple beziers have equidistant points. I'm using cubic interpolation to find the points, but because of the way beziers work, some areas are denser than others, which proves unsuitable for texture mapping because of the variable distance. Is there a way to find points on a bezier by distance rather than by percentage? Furthermore, is it possible to extend this to multiple connected curves?
TITLE: Equidistant points across Bezier curves QUESTION: Currently, I'm attempting to make multiple beziers have equidistant points. I'm using cubic interpolation to find the points, but because of the way beziers work, some areas are denser than others, which proves unsuitable for texture mapping because of the variable distance. Is there a way to find points on a bezier by distance rather than by percentage? Furthermore, is it possible to extend this to multiple connected curves? ANSWER: The distance between P_0 and P_3 (in cubic form) is straightforward, but I think you knew that. Distance on a curve is just arc length: \int_{t_0}^{t_1} |P'(t)| \, dt, where P'(t) = [x', y', z'] = [dx(t)/dt, dy(t)/dt, dz(t)/dt]. Probably, you'd have t_0 = 0, t_1 = 1.0, and dz(t) = 0 (2d plane).
[ "graphics", "interpolation", "bezier", "curve" ]
9
3
5,139
3
0
2008-08-13T23:31:32.697000
2008-08-14T00:47:05.103000
10,478
10,502
Getting selected text in a browser, cross-platform
One of the things I'd like to do in my browser-based application is allow the user to select some text (not in a, just plain ol' text!), and have my application pop up a small toolbar that then can interact with the text (in my case, add annotations). I've found a lot of stuff on google that seems to be focused on writing WYSIWYG editors, but that isn't what I want, and most of it worked in IE but not in FF2 or 3. Ideally, I'd like some function that can return the currently selected text in the browser window that works in IE7 (and 6 if possible), Firefox 2 & 3 and Safari 2. If it works in Opera, that'd be a bonus, but it's not a requirement. Anyone have a function that does this? Or an idea of where to start?
Have a look at jQuery and the wrapSelection plugin. It may be what you are looking for.
Getting selected text in a browser, cross-platform One of the things I'd like to do in my browser-based application is allow the user to select some text (not in a, just plain ol' text!), and have my application pop up a small toolbar that then can interact with the text (in my case, add annotations). I've found a lot of stuff on google that seems to be focused on writing WYSIWYG editors, but that isn't what I want, and most of it worked in IE but not in FF2 or 3. Ideally, I'd like some function that can return the currently selected text in the browser window that works in IE7 (and 6 if possible), Firefox 2 & 3 and Safari 2. If it works in Opera, that'd be a bonus, but it's not a requirement. Anyone have a function that does this? Or an idea of where to start?
TITLE: Getting selected text in a browser, cross-platform QUESTION: One of the things I'd like to do in my browser-based application is allow the user to select some text (not in a, just plain ol' text!), and have my application pop up a small toolbar that then can interact with the text (in my case, add annotations). I've found a lot of stuff on google that seems to be focused on writing WYSIWYG editors, but that isn't what I want, and most of it worked in IE but not in FF2 or 3. Ideally, I'd like some function that can return the currently selected text in the browser window that works in IE7 (and 6 if possible), Firefox 2 & 3 and Safari 2. If it works in Opera, that'd be a bonus, but it's not a requirement. Anyone have a function that does this? Or an idea of where to start? ANSWER: Have a look at jQuery and the wrapSelection plugin. It may be what you are looking for.
[ "javascript", "browser" ]
14
8
9,918
6
0
2008-08-13T23:32:09.370000
2008-08-13T23:52:35.400000
10,486
10,513
Unit test execution speed (how many tests per second?)
What kind of execution rate do you aim for with your unit tests (# test per second)? How long is too long for an individual unit test? I'd be interested in knowing if people have any specific thresholds for determining whether their tests are too slow, or is it just when the friction of a long running test suite gets the better of you? Finally, when you do decide the tests need to run faster, what techniques do you use to speed up your tests? Note: integration tests are obviously a different matter again. We are strictly talking unit tests that need to be run as frequently as possible. Response roundup: Thanks for the great responses so far. Most advice seems to be don't worry about the speed -- concentrate on quality and just selectively run them if they are too slow. Answers with specific numbers have included aiming for <10ms up to 0.5 and 1 second per test, or just keeping the entire suite of commonly run tests under 10 seconds. Not sure whether it's right to mark one as an "accepted answer" when they're all helpful:)
All unit tests should run in under a second (that is, all unit tests combined should run in 1 second). Now I'm sure this has practical limits, but I've had a project with 1,000 tests that run this fast on a laptop. You'll really want this speed so your developers don't dread refactoring some core part of the model (i.e., Lemme go get some coffee while I run these tests...10 minutes later he comes back). This requirement also forces you to design your application correctly. It means that your domain model is pure and contains zero references to any type of persistence (File I/O, Database, etc). Unit tests are all about testing those business relationships. Now that doesn't mean you ignore testing your database or persistence. But these issues are now isolated behind repositories that can be separately tested with integration tests located in a separate project. You run your unit tests constantly when writing domain code and then run your integration tests once on check-in.
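The "pure domain model behind repositories" separation described in the answer can be sketched as follows (a generic illustration; OrderService, Order and InMemoryOrderRepo are invented names, not from any real project):

```python
class Order:
    """A plain domain object: no persistence, no I/O."""
    def __init__(self, customer_id, amount):
        self.customer_id = customer_id
        self.amount = amount


class OrderService:
    """Domain logic depends only on a repository abstraction."""
    def __init__(self, repo):
        self.repo = repo

    def total_for_customer(self, customer_id):
        # Pure business rule: sum the customer's order amounts.
        return sum(o.amount for o in self.repo.orders_for(customer_id))


class InMemoryOrderRepo:
    """Fake repository: unit tests run in microseconds, no DB or files."""
    def __init__(self, orders):
        self._orders = list(orders)

    def orders_for(self, customer_id):
        return [o for o in self._orders if o.customer_id == customer_id]


# A unit test exercises the domain rule through the fake.
repo = InMemoryOrderRepo([Order(1, 10), Order(1, 5), Order(2, 99)])
assert OrderService(repo).total_for_customer(1) == 15
```

The real, database-backed repository implements the same orders_for contract and is exercised only by the slower integration-test project.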
Unit test execution speed (how many tests per second?) What kind of execution rate do you aim for with your unit tests (# test per second)? How long is too long for an individual unit test? I'd be interested in knowing if people have any specific thresholds for determining whether their tests are too slow, or is it just when the friction of a long running test suite gets the better of you? Finally, when you do decide the tests need to run faster, what techniques do you use to speed up your tests? Note: integration tests are obviously a different matter again. We are strictly talking unit tests that need to be run as frequently as possible. Response roundup: Thanks for the great responses so far. Most advice seems to be don't worry about the speed -- concentrate on quality and just selectively run them if they are too slow. Answers with specific numbers have included aiming for <10ms up to 0.5 and 1 second per test, or just keeping the entire suite of commonly run tests under 10 seconds. Not sure whether it's right to mark one as an "accepted answer" when they're all helpful:)
TITLE: Unit test execution speed (how many tests per second?) QUESTION: What kind of execution rate do you aim for with your unit tests (# test per second)? How long is too long for an individual unit test? I'd be interested in knowing if people have any specific thresholds for determining whether their tests are too slow, or is it just when the friction of a long running test suite gets the better of you? Finally, when you do decide the tests need to run faster, what techniques do you use to speed up your tests? Note: integration tests are obviously a different matter again. We are strictly talking unit tests that need to be run as frequently as possible. Response roundup: Thanks for the great responses so far. Most advice seems to be don't worry about the speed -- concentrate on quality and just selectively run them if they are too slow. Answers with specific numbers have included aiming for <10ms up to 0.5 and 1 second per test, or just keeping the entire suite of commonly run tests under 10 seconds. Not sure whether it's right to mark one as an "accepted answer" when they're all helpful:) ANSWER: All unit tests should run in under a second (that is, all unit tests combined should run in 1 second). Now I'm sure this has practical limits, but I've had a project with 1,000 tests that run this fast on a laptop. You'll really want this speed so your developers don't dread refactoring some core part of the model (i.e., Lemme go get some coffee while I run these tests...10 minutes later he comes back). This requirement also forces you to design your application correctly. It means that your domain model is pure and contains zero references to any type of persistence (File I/O, Database, etc). Unit tests are all about testing those business relationships. Now that doesn't mean you ignore testing your database or persistence. But these issues are now isolated behind repositories that can be separately tested with integration tests located in a separate project.
You run your unit tests constantly when writing domain code and then run your integration tests once on check-in.
[ "unit-testing", "performance" ]
27
30
19,020
10
0
2008-08-13T23:36:28.130000
2008-08-14T00:08:30.467000
10,499
29,807
Oracle - What TNS Names file am I using?
Sometimes I get Oracle connection problems because I can't figure out which tnsnames.ora file my database client is using. What's the best way to figure this out? ++happy for various platform solutions.
Oracle provides a utility called tnsping: R:\>tnsping someconnection TNS Ping Utility for 32-bit Windows: Version 9.0.1.3.1 - Production on 27-AUG-2008 10:38:07 Copyright (c) 1997 Oracle Corporation. All rights reserved. Used parameter files: C:\Oracle92\network\ADMIN\sqlnet.ora C:\Oracle92\network\ADMIN\tnsnames.ora TNS-03505: Failed to resolve name R:\> R:\>tnsping entpr01 TNS Ping Utility for 32-bit Windows: Version 9.0.1.3.1 - Production on 27-AUG-2008 10:39:22 Copyright (c) 1997 Oracle Corporation. All rights reserved. Used parameter files: C:\Oracle92\network\ADMIN\sqlnet.ora C:\Oracle92\network\ADMIN\tnsnames.ora Used TNSNAMES adapter to resolve the alias Attempting to contact (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (COMMUNITY = **) (PROTOCOL = TCP) (Host = ****) (Port = 1521))) (CONNECT_DATA = (SID = ENTPR01))) OK (40 msec) R:\> This should show which file you're using. The utility sits in the Oracle bin directory.
Oracle - What TNS Names file am I using? Sometimes I get Oracle connection problems because I can't figure out which tnsnames.ora file my database client is using. What's the best way to figure this out? ++happy for various platform solutions.
TITLE: Oracle - What TNS Names file am I using? QUESTION: Sometimes I get Oracle connection problems because I can't figure out which tnsnames.ora file my database client is using. What's the best way to figure this out? ++happy for various platform solutions. ANSWER: Oracle provides a utility called tnsping: R:\>tnsping someconnection TNS Ping Utility for 32-bit Windows: Version 9.0.1.3.1 - Production on 27-AUG-2008 10:38:07 Copyright (c) 1997 Oracle Corporation. All rights reserved. Used parameter files: C:\Oracle92\network\ADMIN\sqlnet.ora C:\Oracle92\network\ADMIN\tnsnames.ora TNS-03505: Failed to resolve name R:\> R:\>tnsping entpr01 TNS Ping Utility for 32-bit Windows: Version 9.0.1.3.1 - Production on 27-AUG-2008 10:39:22 Copyright (c) 1997 Oracle Corporation. All rights reserved. Used parameter files: C:\Oracle92\network\ADMIN\sqlnet.ora C:\Oracle92\network\ADMIN\tnsnames.ora Used TNSNAMES adapter to resolve the alias Attempting to contact (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (COMMUNITY = **) (PROTOCOL = TCP) (Host = ****) (Port = 1521))) (CONNECT_DATA = (SID = ENTPR01))) OK (40 msec) R:\> This should show which file you're using. The utility sits in the Oracle bin directory.
[ "oracle", "connection", "tnsnames", "tns" ]
60
88
238,802
11
0
2008-08-13T23:49:15.623000
2008-08-27T09:41:36.483000
10,506
10,549
Weird DB2 issue with DBUnit
I am having a strange DB2 issue when I run DBUnit tests. My DBUnit tests are highly customized, but I don't think it is the issue. When I run the tests, I get a failure: SQLCODE: -1084, SQLSTATE: 57019 which translates to SQL1084C Shared memory segments cannot be allocated. It sounds like a weird memory issue, though here's the big strange thing. If I ssh to the test database server, then go in to db2 and do "connect to MY_DB", the tests start succeeding! This seems to have no relation to the supposed memory error that is being reported. I have 2 tests, and the first one actually succeeds, the second one is the one that fails. However, it fails in the DBUnit setup code, when it is obtaining the connection to the DB server to load my xml dataset. Any ideas what might be going on?
Well, I think I fixed it by doing the following: db2stop force db2start At least, things seem to be working now.....
Weird DB2 issue with DBUnit I am having a strange DB2 issue when I run DBUnit tests. My DBUnit tests are highly customized, but I don't think it is the issue. When I run the tests, I get a failure: SQLCODE: -1084, SQLSTATE: 57019 which translates to SQL1084C Shared memory segments cannot be allocated. It sounds like a weird memory issue, though here's the big strange thing. If I ssh to the test database server, then go in to db2 and do "connect to MY_DB", the tests start succeeding! This seems to have no relation to the supposed memory error that is being reported. I have 2 tests, and the first one actually succeeds, the second one is the one that fails. However, it fails in the DBUnit setup code, when it is obtaining the connection to the DB server to load my xml dataset. Any ideas what might be going on?
TITLE: Weird DB2 issue with DBUnit QUESTION: I am having a strange DB2 issue when I run DBUnit tests. My DBUnit tests are highly customized, but I don't think it is the issue. When I run the tests, I get a failure: SQLCODE: -1084, SQLSTATE: 57019 which translates to SQL1084C Shared memory segments cannot be allocated. It sounds like a weird memory issue, though here's the big strange thing. If I ssh to the test database server, then go in to db2 and do "connect to MY_DB", the tests start succeeding! This seems to have no relation to the supposed memory error that is being reported. I have 2 tests, and the first one actually succeeds, the second one is the one that fails. However, it fails in the DBUnit setup code, when it is obtaining the connection to the DB server to load my xml dataset. Any ideas what might be going on? ANSWER: Well, I think I fixed it by doing the following: db2stop force db2start At least, things seem to be working now.....
[ "java", "db2", "dbunit", "db2-luw" ]
3
2
1,858
2
0
2008-08-13T23:57:01.567000
2008-08-14T01:05:31.087000
10,515
10,519
What do I need to run PHP applications on IIS?
Having been a PHP developer on LAMP servers for quite a while, is there anything that I will need to take into consideration while preparing an application for IIS on Windows?
Make sure you get the FastCGI extension for IIS 6.0 or IIS 7.0. It is the single most important thing you can have when running PHP under IIS. Also, this article should get you set up: http://learn.iis.net/page.aspx/247/using-fastcgi-to-host-php-applications-on-iis-60/ Everything beyond this is simple: MySQL and whatnot.
What do I need to run PHP applications on IIS? Having been a PHP developer on LAMP servers for quite a while, is there anything that I will need to take into consideration while preparing an application for IIS on Windows?
TITLE: What do I need to run PHP applications on IIS? QUESTION: Having been a PHP developer on LAMP servers for quite a while, is there anything that I will need to take into consideration while preparing an application for IIS on Windows? ANSWER: Make sure you get the FastCGI extension for IIS 6.0 or IIS 7.0. It is the single most important thing you can have when running PHP under IIS. Also, this article should get you set up: http://learn.iis.net/page.aspx/247/using-fastcgi-to-host-php-applications-on-iis-60/ Everything beyond this is simple: MySQL and whatnot.
[ "php", "windows", "iis", "portability", "lamp" ]
9
9
2,031
7
0
2008-08-14T00:12:18.980000
2008-08-14T00:17:03.463000
10,524
718,366
Expression.Invoke in Entity Framework?
The Entity Framework does not support the Expression.Invoke operator. You receive the following exception when trying to use it: "The LINQ expression node type 'Invoke' is not supported in LINQ to Entities." Has anyone got a workaround for this missing functionality? I would like to use the PredicateBuilder detailed here in an Entity Framework context. Edit 1 @marxidad - I like your suggestion, however it does baffle me somewhat. Can you give some further advice on your proposed solution? Edit 2 @marxidad - Thanks for the clarification.
PredicateBuilder and LINQKit now support Entity Framework. Sorry, guys, for not doing this earlier!
Expression.Invoke in Entity Framework? The Entity Framework does not support the Expression.Invoke operator. You receive the following exception when trying to use it: "The LINQ expression node type 'Invoke' is not supported in LINQ to Entities." Has anyone got a workaround for this missing functionality? I would like to use the PredicateBuilder detailed here in an Entity Framework context. Edit 1 @marxidad - I like your suggestion, however it does baffle me somewhat. Can you give some further advice on your proposed solution? Edit 2 @marxidad - Thanks for the clarification.
TITLE: Expression.Invoke in Entity Framework? QUESTION: The Entity Framework does not support the Expression.Invoke operator. You receive the following exception when trying to use it: "The LINQ expression node type 'Invoke' is not supported in LINQ to Entities. Has anyone got a workaround for this missing functionality? I would like to use the PredicateBuilder detailed here in an Entity Framework context. Edit 1 @marxidad - I like your suggestion, however it does baffle me somewhat. Can you give some further advice on your proposed solution? Edit 2 @marxidad - Thanks for the clarification. ANSWER: PredicateBuilder and LINQKit now support Entity Framework. Sorry, guys, for not doing this earlier!
[ ".net", "linq", "entity-framework", "linq-to-entities" ]
29
31
15,365
2
0
2008-08-14T00:24:43.320000
2009-04-05T04:56:19.300000
10,532
10,542
What's the preferred way to connect to a postgresql database from PHP?
I've been using PHP & MySQL for ages and am about to start using PostgreSQL instead. What's the preferred method? Is it via the PDO objects or is there something better?
PDO objects are the new hotness. I'd recommend that as long as you can ensure that your target platform will always be running PHP 5.2+. There are many other database abstraction layers that support PostgreSQL that are compatible with older versions of PHP; I'd recommend ADODB. You should really be using PDO or a different abstraction layer even for your MySQL work; that way you won't have this problem again!
What's the preferred way to connect to a postgresql database from PHP? I've been using PHP & MySQL for ages and am about to start using PostgreSQL instead. What's the preferred method? Is it via the PDO objects or is there something better?
TITLE: What's the preferred way to connect to a postgresql database from PHP? QUESTION: I've been using PHP & MySQL for ages and am about to start using PostgreSQL instead. What's the preferred method? Is it via the PDO objects or is there something better? ANSWER: PDO objects are the new hotness. I'd recommend that as long as you can ensure that your target platform will always be running PHP 5.2+. There are many other database abstraction layers that support PostgreSQL that are compatible with older versions of PHP; I'd recommend ADODB. You should really be using PDO or a different abstraction layer even for your MySQL work; that way you won't have this problem again!
[ "php", "postgresql" ]
8
5
882
6
0
2008-08-14T00:36:20.287000
2008-08-14T00:54:51.030000
10,533
10,573
Parsing attributes with regex in Perl
Here's a problem I ran into recently. I have attribute strings of the form "x=1 and y=abc and z=c4g and..." Some attributes have numeric values, some have alpha values, some have mixed, some have dates, etc. Every string is supposed to have " x=someval and y=anotherval " at the beginning, but some don't. I have three things I need to do. Validate the strings to be certain that they have x and y. Actually parse the values for x and y. Get the rest of the string. Given the example at the top, this would result in the following variables: $x = 1; $y = "abc"; $remainder = "z=c4g and..." My question is: Is there a (reasonably) simple way to parse these and validate with a single regular expression? i.e.: if ($str =~ /someexpression/) { $x = $1; $y = $2; $remainder = $3; } Note that the string may consist of only x and y attributes. This is a valid string. I'll post my solution as an answer, but it doesn't meet my single-regex preference.
I'm not the best at regular expressions, but this seems pretty close to what you're looking for: /x=(.+) and y=([^ ]+)( and (.*))?/ Except you use $1, $2, and $4. In use: my @strs = ("x=1 and y=abc and z=c4g and w=v4l", "x=yes and y=no", "z=nox and w=noy"); foreach (@strs) { if ($_ =~ /x=(.+) and y=([^ ]+)( and (.*))?/) { $x = $1; $y = $2; $remainder = $4; print "x: $x; y: $y; remainder: $remainder\n"; } else { print "Failed.\n"; } } Output: x: 1; y: abc; remainder: z=c4g and w=v4l x: yes; y: no; remainder: Failed. This of course leaves out plenty of error checking, and I don't know everything about your inputs, but this seems to work.
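The accepted pattern is plain enough to try outside Perl too. Here is a sketch of the same regex in Python, assuming the behaviour described above; the helper name `parse_attrs` is mine, and note that in Python the optional third group comes back as `None` rather than an empty string:

```python
import re

# Same pattern as the Perl answer; group 4 is the remainder because
# group 3 is the optional " and ..." wrapper.
PATTERN = re.compile(r"x=(.+) and y=([^ ]+)( and (.*))?")

def parse_attrs(s):
    m = PATTERN.search(s)
    if not m:
        return None
    return m.group(1), m.group(2), m.group(4) or ""

print(parse_attrs("x=1 and y=abc and z=c4g and w=v4l"))
# → ('1', 'abc', 'z=c4g and w=v4l')
print(parse_attrs("z=nox and w=noy"))
# → None
```

As in the Perl version, a string with only x and y attributes is accepted and yields an empty remainder.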
Parsing attributes with regex in Perl Here's a problem I ran into recently. I have attributes strings of the form "x=1 and y=abc and z=c4g and..." Some attributes have numeric values, some have alpha values, some have mixed, some have dates, etc. Every string is supposed to have " x=someval and y=anotherval " at the beginning, but some don't. I have three things I need to do. Validate the strings to be certain that they have x and y. Actually parse the values for x and y. Get the rest of the string. Given the example at the top, this would result in the following variables: $x = 1; $y = "abc"; $remainder = "z=c4g and..." My question is: Is there a (reasonably) simple way to parse these and validate with a single regular expression? i.e.: if ($str =~ /someexpression/) { $x = $1; $y = $2; $remainder = $3; } Note that the string may consist of only x and y attributes. This is a valid string. I'll post my solution as an answer, but it doesn't meet my single-regex preference.
TITLE: Parsing attributes with regex in Perl QUESTION: Here's a problem I ran into recently. I have attributes strings of the form "x=1 and y=abc and z=c4g and..." Some attributes have numeric values, some have alpha values, some have mixed, some have dates, etc. Every string is supposed to have " x=someval and y=anotherval " at the beginning, but some don't. I have three things I need to do. Validate the strings to be certain that they have x and y. Actually parse the values for x and y. Get the rest of the string. Given the example at the top, this would result in the following variables: $x = 1; $y = "abc"; $remainder = "z=c4g and..." My question is: Is there a (reasonably) simple way to parse these and validate with a single regular expression? i.e.: if ($str =~ /someexpression/) { $x = $1; $y = $2; $remainder = $3; } Note that the string may consist of only x and y attributes. This is a valid string. I'll post my solution as an answer, but it doesn't meet my single-regex preference. ANSWER: I'm not the best at regular expressions, but this seems pretty close to what you're looking for: /x=(.+) and y=([^ ]+)( and (.*))?/ Except you use $1, $2, and $4. In use: my @strs = ("x=1 and y=abc and z=c4g and w=v4l", "x=yes and y=no", "z=nox and w=noy"); foreach (@strs) { if ($_ =~ /x=(.+) and y=([^ ]+)( and (.*))?/) { $x = $1; $y = $2; $remainder = $4; print "x: $x; y: $y; remainder: $remainder\n"; } else { print "Failed.\n"; } } Output: x: 1; y: abc; remainder: z=c4g and w=v4l x: yes; y: no; remainder: Failed. This of course leaves out plenty of error checking, and I don't know everything about your inputs, but this seems to work.
[ "regex", "perl" ]
2
1
4,316
5
0
2008-08-14T00:40:26.753000
2008-08-14T01:56:53.480000
10,564
773,973
How can I set up an editor to work with Git on Windows?
I'm trying out Git on Windows. I got to the point of trying "git commit" and I got this error: Terminal is dumb but no VISUAL nor EDITOR defined. Please supply the message using either -m or -F option. So I figured out I need to have an environment variable called EDITOR. No problem. I set it to point to Notepad. That worked, almost. The default commit message opens in Notepad. But Notepad doesn't support bare line feeds. I went out and got Notepad++, but I can't figure out how to get Notepad++ set up as the %EDITOR% in such a way that it works with Git as expected. I'm not married to Notepad++. At this point I don't mind what editor I use. I just want to be able to type commit messages in an editor rather than the command line (with -m ). Those of you using Git on Windows: What tool do you use to edit your commit messages, and what did you have to do to make it work?
Update September 2015 (6 years later) The last release of git-for-Windows (2.5.3) now includes: By configuring git config core.editor notepad, users can now use notepad.exe as their default editor. Configuring git config format.commitMessageColumns 72 will be picked up by the notepad wrapper and line-wrap the commit message after the user edits it. See commit 69b301b by Johannes Schindelin ( dscho ). And Git 2.16 (Q1 2018) will show a message to tell the user that it is waiting for the user to finish editing when spawning an editor, in case the editor opens to a hidden window or somewhere obscure and the user gets lost. See commit abfb04d (07 Dec 2017), and commit a64f213 (29 Nov 2017) by Lars Schneider ( larsxschneider ). Helped-by: Junio C Hamano ( gitster ). (Merged by Junio C Hamano -- gitster -- in commit 0c69a13, 19 Dec 2017) launch_editor(): indicate that Git waits for user input When a graphical GIT_EDITOR is spawned by a Git command that opens and waits for user input (e.g. " git rebase -i "), then the editor window might be obscured by other windows. The user might be left staring at the original Git terminal window without even realizing that s/he needs to interact with another window before Git can proceed. To this user Git appears hanging. 
Print a message that Git is waiting for editor input in the original terminal and get rid of it when the editor returns, if the terminal supports erasing the last line Original answer I just tested it with git version 1.6.2.msysgit.0.186.gf7512 and Notepad++ 5.3.1 I prefer to not have to set an EDITOR variable, so I tried: git config --global core.editor "\"c:\Program Files\Notepad++\notepad++.exe\"" # or git config --global core.editor "\"c:\Program Files\Notepad++\notepad++.exe\" %*" That always gives: C:\prog\git>git config --global --edit "c:\Program Files\Notepad++\notepad++.exe" %*: c:\Program Files\Notepad++\notepad++.exe: command not found error: There was a problem with the editor '"c:\Program Files\Notepad++\notepad++.exe" %*'. If I define an npp.bat including: "c:\Program Files\Notepad++\notepad++.exe" %* and I type: C:\prog\git>git config --global core.editor C:\prog\git\npp.bat It just works from the DOS session, but not from the git shell. (note that with the core.editor configuration mechanism, a script with " start /WAIT... " in it would not work, but only open a new DOS window) Bennett's answer mentions the possibility of avoiding a script by referencing the program itself directly between single quotes. Note the direction of the slashes! Use / NOT \ to separate folders in the path name! git config --global core.editor \ "'C:/Program Files/Notepad++/notepad++.exe' -multiInst -notabbar -nosession -noPlugin" Or if you are in a 64-bit system: git config --global core.editor \ "'C:/Program Files (x86)/Notepad++/notepad++.exe' -multiInst -notabbar -nosession -noPlugin" But I prefer using a script (see below): that way I can play with different paths or different options without having to register a git config again. The actual solution (with a script) was to realize that: what you refer to in the config file is actually a shell ( /bin/sh ) script, not a DOS script. 
So what does work is: C:\prog\git>git config --global core.editor C:/prog/git/npp.bat with C:/prog/git/npp.bat: #!/bin/sh "c:/Program Files/Notepad++/notepad++.exe" -multiInst "$*" or #!/bin/sh "c:/Program Files/Notepad++/notepad++.exe" -multiInst -notabbar -nosession -noPlugin "$*" With that setting, I can do ' git config --global --edit ' from DOS or Git Shell, or I can do ' git rebase -i... ' from DOS or Git Shell. Both commands will trigger a new instance of notepad++ (hence the '-multiInst' option), and wait for that instance to be closed before going on. Note that I use only '/', not '\'. And I installed msysgit using option 2. (Add the git\bin directory to the PATH environment variable, but without overriding some built-in Windows tools) The fact that the notepad++ wrapper is called .bat is not important. It would be better to name it 'npp.sh' and to put it in the [git]\cmd directory though (or in any directory referenced by your PATH environment variable). See also: How do I view ‘git diff’ output with visual diff program? for the general theory How do I setup DiffMerge with msysgit / gitk? for another example of external tool (DiffMerge, and WinMerge) lightfire228 adds in the comments: For anyone having an issue where N++ just opens a blank file, and git doesn't take your commit message, see " Aborting commit due to empty message ": change your .bat or .sh file to say: ". That will tell notepad++ to open the temp commit file, rather than a blank new one.
How can I set up an editor to work with Git on Windows? I'm trying out Git on Windows. I got to the point of trying "git commit" and I got this error: Terminal is dumb but no VISUAL nor EDITOR defined. Please supply the message using either -m or -F option. So I figured out I need to have an environment variable called EDITOR. No problem. I set it to point to Notepad. That worked, almost. The default commit message opens in Notepad. But Notepad doesn't support bare line feeds. I went out and got Notepad++, but I can't figure out how to get Notepad++ set up as the %EDITOR% in such a way that it works with Git as expected. I'm not married to Notepad++. At this point I don't mind what editor I use. I just want to be able to type commit messages in an editor rather than the command line (with -m ). Those of you using Git on Windows: What tool do you use to edit your commit messages, and what did you have to do to make it work?
TITLE: How can I set up an editor to work with Git on Windows? QUESTION: I'm trying out Git on Windows. I got to the point of trying "git commit" and I got this error: Terminal is dumb but no VISUAL nor EDITOR defined. Please supply the message using either -m or -F option. So I figured out I need to have an environment variable called EDITOR. No problem. I set it to point to Notepad. That worked, almost. The default commit message opens in Notepad. But Notepad doesn't support bare line feeds. I went out and got Notepad++, but I can't figure out how to get Notepad++ set up as the %EDITOR% in such a way that it works with Git as expected. I'm not married to Notepad++. At this point I don't mind what editor I use. I just want to be able to type commit messages in an editor rather than the command line (with -m ). Those of you using Git on Windows: What tool do you use to edit your commit messages, and what did you have to do to make it work? ANSWER: Update September 2015 (6 years later) The last release of git-for-Windows (2.5.3) now includes: By configuring git config core.editor notepad, users can now use notepad.exe as their default editor. Configuring git config format.commitMessageColumns 72 will be picked up by the notepad wrapper and line-wrap the commit message after the user edits it. See commit 69b301b by Johannes Schindelin ( dscho ). And Git 2.16 (Q1 2018) will show a message to tell the user that it is waiting for the user to finish editing when spawning an editor, in case the editor opens to a hidden window or somewhere obscure and the user gets lost. See commit abfb04d (07 Dec 2017), and commit a64f213 (29 Nov 2017) by Lars Schneider ( larsxschneider ). Helped-by: Junio C Hamano ( gitster ). (Merged by Junio C Hamano -- gitster -- in commit 0c69a13, 19 Dec 2017) launch_editor(): indicate that Git waits for user input When a graphical GIT_EDITOR is spawned by a Git command that opens and waits for user input (e.g. 
" git rebase -i "), then the editor window might be obscured by other windows. The user might be left staring at the original Git terminal window without even realizing that s/he needs to interact with another window before Git can proceed. To this user Git appears hanging. Print a message that Git is waiting for editor input in the original terminal and get rid of it when the editor returns, if the terminal supports erasing the last line Original answer I just tested it with git version 1.6.2.msysgit.0.186.gf7512 and Notepad++5.3.1 I prefer to not have to set an EDITOR variable, so I tried: git config --global core.editor "\"c:\Program Files\Notepad++\notepad++.exe\"" # or git config --global core.editor "\"c:\Program Files\Notepad++\notepad++.exe\" %*" That always gives: C:\prog\git>git config --global --edit "c:\Program Files\Notepad++\notepad++.exe" %*: c:\Program Files\Notepad++\notepad++.exe: command not found error: There was a problem with the editor '"c:\Program Files\Notepad++\notepad++.exe" %*'. If I define a npp.bat including: "c:\Program Files\Notepad++\notepad++.exe" %* and I type: C:\prog\git>git config --global core.editor C:\prog\git\npp.bat It just works from the DOS session, but not from the git shell. (not that with the core.editor configuration mechanism, a script with " start /WAIT... " in it would not work, but only open a new DOS window) Bennett's answer mentions the possibility to avoid adding a script, but to reference directly the program itself between simple quotes. Note the direction of the slashes! Use / NOT \ to separate folders in the path name! 
git config --global core.editor \ "'C:/Program Files/Notepad++/notepad++.exe' -multiInst -notabbar -nosession -noPlugin" Or if you are in a 64 bit system: git config --global core.editor \ "'C:/Program Files (x86)/Notepad++/notepad++.exe' -multiInst -notabbar -nosession -noPlugin" But I prefer using a script (see below): that way I can play with different paths or different options without having to register again a git config. The actual solution (with a script) was to realize that: what you refer to in the config file is actually a shell ( /bin/sh ) script, not a DOS script. So what does work is: C:\prog\git>git config --global core.editor C:/prog/git/npp.bat with C:/prog/git/npp.bat: #!/bin/sh "c:/Program Files/Notepad++/notepad++.exe" -multiInst "$*" or #!/bin/sh "c:/Program Files/Notepad++/notepad++.exe" -multiInst -notabbar -nosession -noPlugin "$*" With that setting, I can do ' git config --global --edit ' from DOS or Git Shell, or I can do ' git rebase -i... ' from DOS or Git Shell. Bot commands will trigger a new instance of notepad++ (hence the -multiInst ' option), and wait for that instance to be closed before going on. Note that I use only '/', not \ '. And I installed msysgit using option 2. (Add the git\bin directory to the PATH environment variable, but without overriding some built-in windows tools) The fact that the notepad++ wrapper is called.bat is not important. It would be better to name it 'npp.sh' and to put it in the [git]\cmd directory though (or in any directory referenced by your PATH environment variable). See also: How do I view ‘git diff’ output with visual diff program? for the general theory How do I setup DiffMerge with msysgit / gitk? for another example of external tool (DiffMerge, and WinMerge) lightfire228 adds in the comments: For anyone having an issue where N++ just opens a blank file, and git doesn't take your commit message, see " Aborting commit due to empty message ": change your.bat or.sh file to say: ". 
That will tell notepad++ to open the temp commit file, rather than a blank new one.
[ "windows", "git", "cygwin", "editor" ]
655
639
428,890
37
0
2008-08-14T01:43:04.270000
2009-04-21T18:34:54.007000
10,580
10,587
What is the difference between Early and Late Binding?
What is the difference between early and late binding?
The short answer is that early (or static) binding refers to compile-time binding and late (or dynamic) binding refers to runtime binding (for example, when you use reflection).
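The contrast can be sketched in a few lines of Python; the class and method names are invented for the demo, and since Python itself binds late almost everywhere, treat the "early" half as an analogy for what a statically-typed compiler would resolve at compile time:

```python
class Greeter:
    def hello(self):
        return "hello"

g = Greeter()

# "Early" binding: the call target is spelled out in the source, so a
# static compiler could resolve it before the program ever runs.
direct = g.hello()

# Late binding: the method is looked up by name at runtime (reflection),
# so the target is only known while the program is running.
method_name = "hello"
reflected = getattr(g, method_name)()

assert direct == reflected == "hello"
```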
What is the difference between Early and Late Binding? What is the difference between early and late binding?
TITLE: What is the difference between Early and Late Binding? QUESTION: What is the difference between early and late binding? ANSWER: The short answer is that early (or static) binding refers to compile time binding and late (or dynamic) binding refers to runtime binding (for example when you use reflection).
[ "binding", "dynamic-binding", "static-binding" ]
107
69
121,655
8
0
2008-08-14T02:10:53.873000
2008-08-14T02:24:04.047000
10,586
10,713
Small modification to an XML document using StAX
I'm currently trying to read in an XML file, make some minor changes (alter the value of some attributes), and write it back out again. I had intended to use a StAX parser ( javax.xml.stream.XMLStreamReader ) to read in each event, see if it was one I wanted to change, and then pass it straight on to the StAX writer ( javax.xml.stream.XMLStreamWriter ) if no changes were required. Unfortunately, that doesn't look to be so simple - the writer has no way to take an event type and a parser object, only methods like writeAttribute and writeStartElement. Obviously I could write a big switch statement with a case for every possible type of element which can occur in an XML document, and just write it back out again, but it seems like a lot of trouble for something which seems like it should be simple. Is there something I'm missing that makes it easy to write out a very similar XML document to the one you read in with StAX?
StAX works pretty well and is very fast. I used it in a project to parse XML files of up to 20MB. I don't have a thorough analysis, but it was definitely faster than SAX. As for your question: the difference between streaming and event handling, AFAIK, is control. With the streaming API you can walk through your document step by step and get the contents you want, whereas with the event-based API you can only handle the events you are interested in.
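For the original problem (copy a document while tweaking one attribute), StAX's event layer ( XMLEventReader / XMLEventWriter ) can pass events through unchanged with writer.add(event), which avoids the big switch statement the asker feared. A sketch, with invented element and attribute names ("item", "status") standing in for whatever you actually need to change:

```java
import java.io.StringReader;
import java.io.StringWriter;
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import javax.xml.stream.XMLEventFactory;
import javax.xml.stream.XMLEventReader;
import javax.xml.stream.XMLEventWriter;
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLOutputFactory;
import javax.xml.stream.XMLStreamException;
import javax.xml.stream.events.Attribute;
import javax.xml.stream.events.StartElement;
import javax.xml.stream.events.XMLEvent;

public class StaxAttributeRewriter {

    // Copy the document event by event; only start-elements named "item"
    // are rebuilt (with the "status" attribute changed to "done"), every
    // other event is forwarded untouched.
    public static String rewrite(String xml) {
        try {
            XMLEventFactory events = XMLEventFactory.newInstance();
            XMLEventReader reader =
                    XMLInputFactory.newInstance().createXMLEventReader(new StringReader(xml));
            StringWriter out = new StringWriter();
            XMLEventWriter writer =
                    XMLOutputFactory.newInstance().createXMLEventWriter(out);
            while (reader.hasNext()) {
                XMLEvent event = reader.nextEvent();
                if (event.isStartElement()
                        && event.asStartElement().getName().getLocalPart().equals("item")) {
                    StartElement start = event.asStartElement();
                    List<Attribute> attrs = new ArrayList<>();
                    Iterator<Attribute> it = start.getAttributes();
                    while (it.hasNext()) {
                        Attribute a = it.next();
                        attrs.add(a.getName().getLocalPart().equals("status")
                                ? events.createAttribute(a.getName(), "done")  // the change
                                : a);
                    }
                    writer.add(events.createStartElement(
                            start.getName(), attrs.iterator(), start.getNamespaces()));
                } else {
                    writer.add(event);  // everything else passes straight through
                }
            }
            writer.close();
            return out.toString();
        } catch (XMLStreamException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(rewrite("<root><item status=\"open\">x</item></root>"));
    }
}
```

The cursor-level XMLStreamReader/XMLStreamWriter pair doesn't offer this pass-through, which is exactly the gap the asker hit.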
Small modification to an XML document using StAX I'm currently trying to read in an XML file, make some minor changes (alter the value of some attributes), and write it back out again. I have intended to use a StAX parser ( javax.xml.stream.XMLStreamReader ) to read in each event, see if it was one I wanted to change, and then pass it straight on to the StAX writer ( javax.xml.stream.XMLStreamReader ) if no changes were required. Unfortunately, that doesn't look to be so simple - The writer has no way to take an event type and a parser object, only methods like writeAttribute and writeStartElement. Obviously I could write a big switch statement with a case for every possible type of element which can occur in an XML document, and just write it back out again, but it seems like a lot of trouble for something which seems like it should be simple. Is there something I'm missing that makes it easy to write out a very similar XML document to the one you read in with StAX?
TITLE: Small modification to an XML document using StAX QUESTION: I'm currently trying to read in an XML file, make some minor changes (alter the value of some attributes), and write it back out again. I have intended to use a StAX parser ( javax.xml.stream.XMLStreamReader ) to read in each event, see if it was one I wanted to change, and then pass it straight on to the StAX writer ( javax.xml.stream.XMLStreamReader ) if no changes were required. Unfortunately, that doesn't look to be so simple - The writer has no way to take an event type and a parser object, only methods like writeAttribute and writeStartElement. Obviously I could write a big switch statement with a case for every possible type of element which can occur in an XML document, and just write it back out again, but it seems like a lot of trouble for something which seems like it should be simple. Is there something I'm missing that makes it easy to write out a very similar XML document to the one you read in with StAX? ANSWER: StAX works pretty well and is very fast. I used it in a project to parse XML files which are up to 20MB. I don't have a thorough analysis, but it was definitely faster than SAX. As for your question: The difference between streaming and event-handling, AFAIK is control. With the streaming API you can walk through your document step by step and get the contents you want. Whereas the event-based API you can only handle what you are interested in.
[ "java", "xml", "stax" ]
6
3
4,545
3
0
2008-08-14T02:22:37.920000
2008-08-14T05:52:27.730000
10,595
11,958
How do I write those cool command line code generators (rails, sproutcore, webgen)
I've grokked the code for all the aforementioned apps and I still can't find a straightforward way to create a static directory structure from a single command.
Check out RubiGen. You can also view a presentation by Dr Nic.
How do I write those cool command line code generators (rails, sproutcore, webgen) I've grokked the code for all the aforementioned apps and I still can't find a straightforward way to create a static directory structure from a single command.
TITLE: How do I write those cool command line code generators (rails, sproutcore, webgen) QUESTION: I've grokked the code for all the aforementioned apps and I still can't find a straightforward way to create a static directory structure from a single command. ANSWER: Check out rubigen. You can also view a presentation by dr nic.
[ "ruby" ]
0
5
655
2
0
2008-08-14T02:39:38.917000
2008-08-15T04:24:56.700000
10,599
11,467
Web page field validation
I need to validate a date/time field on a webpage but want to do it without reloading the page and would like 'instant' feedback for the users. What's the best/easiest solution? BTW: easiest scores 65% of total points. Edit: What if best was 65% of total points?
Looks like there's a great video about this: the ASP.NET AJAX Control Toolkit provides the MaskedEdit control and the MaskedEditValidator control, which work great. Not easy for beginners, but VERY good, with instant feedback. Thanks for all the answers though! Unfortunately I can't accept this answer.
Web page field validation I need to validate a date/time field on a webpage but want it to do it without reloading the page and would like 'instant' feedback for the users. What's the best/easiest solution. BTW: easiest scores 65% of total points Edit: What if best was 65% of total points?
TITLE: Web page field validation QUESTION: I need to validate a date/time field on a webpage but want it to do it without reloading the page and would like 'instant' feedback for the users. What's the best/easiest solution. BTW: easiest scores 65% of total points Edit: What if best was 65% of total points? ANSWER: Looks like there's a great video about the ASP.NET AJAX Control Toolkit provides the MaskedEdit control and the MaskedEditValidator control that works great. Not easy for beginners but VERY good and instant feedback. Thanks for all the answers though! asp.net Unfortunately I can't accept this answer.
[ "html" ]
2
0
703
10
0
2008-08-14T02:50:26.210000
2008-08-14T18:33:33.213000
10,604
10,617
Automatically measure all SQL queries
In Maybe Normalizing Isn't Normal Jeff Atwood says, "You're automatically measuring all the queries that flow through your software, right?" I'm not but I'd like to. Some features of the application in question: ASP.NET a data access layer which depends on the MS Enterprise Library Data Access Application Block MS SQL Server
In addition to Brad's mention of SQL Profiler, if you want to do this in code, then all your database calls need to be funnelled through a common library. You insert the timing code there, and voila, you know how long every query in your system takes. A single point of entry to the database is a fairly standard feature of any ORM or database layer -- or at least it has been in any project I've worked on so far!
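A minimal sketch of that single-point-of-entry idea, using sqlite3 as a stand-in database and an in-memory list as a stand-in for real logging (both are assumptions for the demo, not part of the answer's Enterprise Library setup):

```python
import sqlite3
import time

# Every query goes through run_query(), so the timing code lives in
# exactly one place; swap the list for your real logging sink.
query_log = []

def run_query(conn, sql, params=()):
    start = time.perf_counter()
    rows = conn.execute(sql, params).fetchall()
    query_log.append((sql, time.perf_counter() - start))
    return rows

conn = sqlite3.connect(":memory:")
run_query(conn, "CREATE TABLE person (name TEXT, age INTEGER)")
run_query(conn, "INSERT INTO person VALUES (?, ?)", ("alice", 30))
rows = run_query(conn, "SELECT name FROM person WHERE age > ?", (20,))
print(rows)  # [('alice',)]
print(len(query_log))  # one timed entry per query that ran
```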
Automatically measure all SQL queries In Maybe Normalizing Isn't Normal Jeff Atwood says, "You're automatically measuring all the queries that flow through your software, right?" I'm not but I'd like to. Some features of the application in question: ASP.NET a data access layer which depends on the MS Enterprise Library Data Access Application Block MS SQL Server
TITLE: Automatically measure all SQL queries QUESTION: In Maybe Normalizing Isn't Normal Jeff Atwood says, "You're automatically measuring all the queries that flow through your software, right?" I'm not but I'd like to. Some features of the application in question: ASP.NET a data access layer which depends on the MS Enterprise Library Data Access Application Block MS SQL Server ANSWER: In addition to Brad's mention of SQL Profiler, if you want to do this in code, then all your database calls need to be funnelled through a common library. You insert the timing code there, and voila, you know how long every query in your system takes. A single point of entry to the database is a fairly standard feature of any ORM or database layer -- or at least it has been in any project I've worked on so far!
[ "sql-server", "optimization" ]
5
6
766
6
0
2008-08-14T02:57:18.507000
2008-08-14T03:13:56.770000
10,616
10,623
Differences between MySQL and SQL Server
I'm an ASP.NET developer who has used Microsoft SQL Server for all my database needs (both at work and for personal projects). I am considering trying out the LAMP stack for some of my personal projects. What are some of the main differences between MySQL and SQL Server? Is using stored procedures a common practice in MySQL? Any advice or resources you'd recommend to help me with the switch? To those who have experience with both, are there any missing features from MySQL?
One thing you have to watch out for is the fairly severe differences in the way SQL Server and MySQL implement the SQL syntax. Here's a nice Comparison of Different SQL Implementations. For example, take a look at the top-n section. In MySQL: SELECT age FROM person ORDER BY age ASC LIMIT 1 OFFSET 2 In SQL Server (T-SQL): SELECT TOP 3 WITH TIES * FROM person ORDER BY age ASC
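If you want to experiment with the MySQL-style half of that comparison without installing a server, SQLite happens to use the same LIMIT/OFFSET syntax, so Python's standard library is enough:

```python
import sqlite3

# SQLite accepts the MySQL-style LIMIT/OFFSET clause shown in the answer.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE person (age INTEGER)")
conn.executemany("INSERT INTO person VALUES (?)", [(40,), (25,), (31,), (19,)])

# Skip the two youngest rows, return the next one: the third-youngest age.
rows = conn.execute(
    "SELECT age FROM person ORDER BY age ASC LIMIT 1 OFFSET 2"
).fetchall()
print(rows)  # [(31,)]
```

T-SQL's TOP clause (and, in later versions, OFFSET ... FETCH) would be needed for the same query against SQL Server.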
Differences between MySQL and SQL Server I'm an ASP.NET developer who has used Microsoft SQL Server for all my database needs (both at work and for personal projects). I am considering trying out the LAMP stack for some of my personal projects. What are some of the main differences between MySQL and SQL Server? Is using stored procedures a common practice in MySQL? Any advice or resources you'd recommend to help me with the switch? To those who have experience with both, are there any missing features from MySQL?
TITLE: Differences between MySQL and SQL Server QUESTION: I'm an ASP.NET developer who has used Microsoft SQL Server for all my database needs (both at work and for personal projects). I am considering trying out the LAMP stack for some of my personal projects. What are some of the main differences between MySQL and SQL Server? Is using stored procedures a common practice in MySQL? Any advice or resources you'd recommend to help me with the switch? To those who have experience with both, are there any missing features from MySQL? ANSWER: One thing you have to watch out for is the fairly severe differences in the way SQL Server and MySQL implement the SQL syntax. Here's a nice Comparison of Different SQL Implementations. For example, take a look at the top-n section. In MySQL: SELECT age FROM person ORDER BY age ASC LIMIT 1 OFFSET 2 In SQL Server (T-SQL): SELECT TOP 3 WITH TIES * FROM person ORDER BY age ASC
[ "mysql", "sql-server", "t-sql" ]
150
143
334,059
12
0
2008-08-14T03:13:26.633000
2008-08-14T03:17:18.153000
10,634
10,737
Should I switch from nant to msbuild?
I currently use nant, ccnet (cruise control), svn, mbunit. I use msbuild to do my sln build just because it was simpler to shell out. Are there any merits to switching my whole build script to MSBuild? I need to be able to run tests, watir style tests, xcopy deploy. Is this easier? Update: Any compelling features that would cause me to shift from nant to msbuild?
I like MSBuild. One reason is that .csproj files are MSBuild files, and building in VS is just like building at the command line. Another reason is the good support from TeamCity, which is the CI server I've been using. If you start using MSBuild, and you want to do more custom things in your build process, get the MSBuild Community Tasks. They give you a bunch of nice extra tasks. I haven't used NAnt for several years now, and I haven't regretted it. Also, as Ruben mentions, there are the SDC Tasks on CodePlex. For even more fun, there is the MSBuild Extension Pack on CodePlex, which includes a Twitter task.
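For the asker's specific needs (build the sln, run tests, xcopy deploy), a hypothetical MSBuild project file might look like the sketch below. The MSBuild and Exec tasks are real built-ins; the solution name, paths, and the MbUnit console command line are placeholders you'd adapt to your own layout:

```xml
<!-- build.proj: names like MyApp.sln and the deploy share are placeholders. -->
<Project DefaultTargets="Build;Test;Deploy"
         xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <Target Name="Build">
    <MSBuild Projects="MyApp.sln" Properties="Configuration=Release" />
  </Target>
  <Target Name="Test" DependsOnTargets="Build">
    <!-- Shell out to the test runner, much like NAnt's exec task;
         the exact runner arguments are illustrative. -->
    <Exec Command="MbUnit.Cons.exe MyApp.Tests.dll" />
  </Target>
  <Target Name="Deploy" DependsOnTargets="Test">
    <Exec Command="xcopy /E /Y MyApp\bin\Release\* \\server\share\MyApp\" />
  </Target>
</Project>
```

Run it with msbuild build.proj; any failing Exec task stops the chain, which gives you the same gating behaviour a NAnt script would.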
[ "msbuild", "build-process", "build-automation", "nant" ]
28
31
6,168
18
0
2008-08-14T03:36:54.790000
2008-08-14T06:14:26.970000
10,635
10,638
Why are my PowerShell scripts not running?
I wrote a simple batch file as a PowerShell script, and I am getting errors when it runs. It's in a scripts directory in my path. This is the error I get: Cannot be loaded because the execution of scripts is disabled on this system. Please see "get-help about-signing". I looked in the help, but it's less than helpful.
It could be PowerShell's default security level, which (IIRC) will only run signed scripts. Try typing this:

    Set-ExecutionPolicy RemoteSigned

That will tell PowerShell to allow local (that is, on a local drive) unsigned scripts to run. Then try executing your script again.
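A cautious sequence is to check the current policy before changing it (the script name below is a placeholder; note that on newer Windows versions, changing the machine-wide policy requires an elevated prompt):

```powershell
Get-ExecutionPolicy              # often "Restricted" on a fresh install
Set-ExecutionPolicy RemoteSigned # allow local unsigned scripts to run
.\myscript.ps1                   # placeholder for your own script
```

`RemoteSigned` still requires downloaded scripts to be signed, which is a reasonable middle ground between `Restricted` and `Unrestricted`.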
[ "powershell" ]
116
114
252,297
11
0
2008-08-14T03:39:17.610000
2008-08-14T03:41:33.003000
10,644
10,655
Any decent C# profilers out there?
I need a C# profiler. Although I'm not averse to paying for one, something which is free or at least with a trial version would be ideal since it takes time to raise a purchase order. Any recommendations?
You can try the following:

- NProf (free, but kinda old)
- ProfileSharp (open source)
- .NET Memory Profiler (really good for memory leaks; there's a trial version)

Edit: NProf has been replaced with SlimTune and works with .NET 4.0 applications.
[ "c#", ".net", "profiling", "profiler" ]
83
35
42,000
19
0
2008-08-14T03:52:55.553000
2008-08-14T04:01:56.410000
10,661
11,290
Does the OutputCacheFilter in the Microsoft MVC Preview 4 actually save on action invocations?
We deployed a live, fresh, swanky site using preview 3 including rigorous stress testing. Page Output caching was our saviour and afforded us the ability to meet our performance contractual requirements. My question is, is there a difference between action OutputCacheFilter and page output caching? Is the action output caching faster than page output caching?
Internally, the OutputCacheAttribute (aka output cache filter) uses the same internal mechanism as page output caching (aka the @OutputCache directive). Therefore, it's not any faster than page output caching. However, with MVC, you really can't use page output caching via the @OutputCache directive, because we render the view (aka page) after the action runs, so you would gain very little benefit. With the output cache filter, it does the correct thing and does not execute the action code if the result is in the output cache. Hope that helps. :)
[ "c#", "asp.net", "asp.net-mvc", "outputcache" ]
6
3
350
2
0
2008-08-14T04:16:18.610000
2008-08-14T16:32:22.693000
10,670
10,714
Asynchronous Remoting calls
We have a remoting singleton server running in a separate windows service (let's call her RemotingService). The clients of the RemotingService are ASP.NET instances (many many). Currently, the clients make remoting calls to RemotingService and block while the call is serviced. However, the remoting service is getting complicated enough (with more RPC calls and complex algorithms) that the asp.net worker threads are blocked for a significantly long time (4-5 seconds). According to this msdn article, doing this will not scale well because an asp.net worker thread is blocked for each remoting RPC. It advises switching to async handlers to free up asp.net worker threads. The purpose of an asynchronous handler is to free up an ASP.NET thread pool thread to service additional requests while the handler is processing the original request. This seems fine, except the remoting call still takes up a thread from the thread pool. Is this the same thread pool as the asp.net worker threads? How should I go about turning my remoting singleton server into an async system such that I free up my asp.net worker threads? I've probably missed out some important information, please let me know if there is anything else you need to know to answer the question.
The idea behind using the ThreadPool is that through it you can control the number of synchronous threads, and if those get too many, the thread pool automatically makes newer threads wait. The ASP.NET worker thread (AFAIK) doesn't come from the thread pool and shouldn't get affected by your call to the remoting service (unless this is a very slow processor and your remoting function is very CPU intensive, in which case everything on your computer will be affected). You could always host the remoting service on a different physical server. In that case, your asp.net worker thread will be totally independent of your remoting call (if the remoting call is called on a separate thread, that is).
[ "c#", ".net-3.5", ".net-2.0", "remoting", "rpc" ]
9
0
2,506
1
0
2008-08-14T04:35:16.810000
2008-08-14T05:53:40.397000
10,680
10,686
Is there a difference between the on_exit() and atexit() functions?
Is there any difference between int on_exit(void (*function)(int, void *), void *arg); and int atexit(void (*function)(void)); other than the fact that the function used by on_exit gets the exit status? That is, if I don't care about the exit status, is there any reason to use one or the other? Edit: Many of the answers warned against on_exit because it's non-standard. If I'm developing an app that is for internal corporate use and guaranteed to run on specific configurations, should I worry about this?
According to this link I found, it seems there are a few differences. on_exit will let you pass in an argument that is passed to the on_exit function when it is called... which might let you set up some pointers to do some cleanup work when it is time to exit. Furthermore, it appears that on_exit was a SunOS-specific function that may not be available on all platforms... so you may want to stick with atexit, despite it being more restrictive.
[ "c", "linux" ]
24
15
13,749
5
0
2008-08-14T04:53:45.620000
2008-08-14T05:03:17.707000
10,727
10,818
What is a "reasonable" length of time to keep a SQL cursor open?
In your applications, what's a "long time" to keep a transaction open before committing or rolling back? Minutes? Seconds? Hours? and on which database?
@lomaxx, @ChanChan: to the best of my knowledge cursors are only a problem on SQL Server and Sybase (T-SQL variants). If your database of choice is Oracle, then cursors are your friend. I've seen a number of cases where the use of cursors has actually improved performance. Cursors are an incredibly useful mechanism and tbh, saying things like "if you use a cursor we fire you" is a little ridiculous. Having said that, you only want to keep a cursor open for the absolute minimum that is required. Specifying a maximum time would be arbitrary and pointless without understanding the problem domain.
[ "sql", "cursors" ]
3
3
1,057
5
0
2008-08-14T06:07:53.347000
2008-08-14T08:59:38.023000
10,731
10,820
Best way to store a database password in a startup script / config file?
So our web server apps need to connect to the database, and some other apps have startup scripts that execute at boot time. What's the best way to store the name/password for these applications, in terms of:

- security, e.g. perhaps we don't want sysadmins to know the database password
- maintainability, e.g. making the configuration easy to change when the password changes, etc.

Both Windows and Linux solutions appreciated!
The best way to secure your password is to stop using one. Use a trusted connection: How To: Connect to SQL Server Using Windows Authentication in ASP.NET 2.0. Then you have nothing to hide: publish your web.config and source to the world, and they still can't hit your database. If that won't work for you, use the built-in configuration encryption system in ASP.NET.
[ "sql-server", "security", "passwords" ]
8
10
6,280
7
0
2008-08-14T06:10:25.013000
2008-08-14T09:03:47.627000
10,793
12,055
Catching unhandled exceptions in ASP.NET UserControls
I'm dynamically loading user controls and adding them to the Controls collection of the web form. I'd like to hide user controls if they cause an unhandled exception while rendering. So, I tried hooking the Error event of each UserControl, but it seems that this event never fires for UserControls as it does for the Page class. Did some googling around and it doesn't seem promising. Any ideas here?
mmilic, following on from your response to my previous idea.. No additional logic required! That's the point, you're doing nothing to the classes in question, just wrapping them in some instantiation bubble-wrap! :)

OK, I was going to just bullet point, but I wanted to see this work for myself, so I cobbled together some very rough code; the concept is there and it seems to work. APOLOGIES FOR THE LONG POST

The SafeLoader

This will basically be the "bubble" I mentioned.. It will get the control's HTML, catching any errors that occur during rendering.

    public class SafeLoader
    {
        public static string LoadControl(Control ctl)
        {
            // In terms of what we could do here, it's down
            // to you; I will just return some basic HTML saying
            // I screwed up.
            try
            {
                // Get the control's HTML (which may throw)
                // and store it in our own writer, away from the
                // actual live page.
                StringWriter writer = new StringWriter();
                HtmlTextWriter htmlWriter = new HtmlTextWriter(writer);
                ctl.RenderControl(htmlWriter);
                return writer.GetStringBuilder().ToString();
            }
            catch (Exception)
            {
                string ctlType = ctl.GetType().Name;
                return "Rob + Controls = FAIL (" + ctlType + " rendering failed) Sad face :(";
            }
        }
    }

And Some Controls..

OK, I just mocked together two controls here; one will throw, the other will render junk. Point here, I don't give a crap. These will be replaced with your custom controls..

BadControl

    public class BadControl : WebControl
    {
        protected override void Render(HtmlTextWriter writer)
        {
            throw new ApplicationException("Rob can't program controls");
        }
    }

GoodControl

    public class GoodControl : WebControl
    {
        protected override void Render(HtmlTextWriter writer)
        {
            writer.Write("Holy crap this control works");
        }
    }

The Page

OK, so let's look at the "test" page.. Here I simply instantiate the controls, grab their HTML and output it. I will follow with thoughts on designer support etc..

Page Code-Behind

    protected void Page_Load(object sender, EventArgs e)
    {
        // Create some controls (BadControl will throw)
        string badHtml = SafeLoader.LoadControl(new BadControl());
        Response.Write(badHtml);

        string goodHtml = SafeLoader.LoadControl(new GoodControl());
        Response.Write(goodHtml);
    }

Thoughts

OK, I know what you are thinking, "these controls are instantiated programmatically, what about designer support? I spent freaking hours getting these controls nice for the designer, now you're messing with my mojo". OK, so I haven't really tested this yet (probably will do in a min!) but the idea here is to override the CreateChildControls method for the page, take the instance of each control added on the form, and run it through the SafeLoader. If the code passes, you can add it to the Controls collection as normal; if not, then you can create erroneous literals or something, up to you my friend.

Finally..

Again, sorry for the long post, but I wanted to get the code here so we can discuss this :) I hope this helps demonstrate my idea :)

Update

Tested by chucking a control in on the designer and overriding the CreateChildControls method with this; works fine, may need some clean-up to make things better looking, but I'll leave that to you ;)

    protected override void CreateChildControls()
    {
        // Pass each control through the Loader to check
        // it's not lame
        foreach (Control ctl in Controls)
        {
            string s = SafeLoader.LoadControl(ctl);

            // If it's bad, smack it downnnn!
            if (s == string.Empty)
            {
                ctl.Visible = false; // Prevent rendering
                string ctlType = ctl.GetType().Name;
                Response.Write("Problem Occurred Rendering " + ctlType + " '" + ctl.ID + "'.");
            }
        }
    }

Enjoy!
[ "asp.net", "exception", "user-controls" ]
5
12
4,933
7
0
2008-08-14T08:06:59.457000
2008-08-15T07:49:57.927000
10,808
3,867,490
Ruby mixins and calling super methods
Ok, so I've been refactoring my code in my little Rails app in an effort to remove duplication, and in general make my life easier (as I like an easy life). Part of this refactoring has been to move code that's common to two of my models to a module that I can include where I need it. So far, so good. Looks like it's going to work out, but I've just hit a problem that I'm not sure how to get around. The module (which I've called sendable) is just going to be the code that handles faxing, e-mailing, or printing a PDF of the document. So, for example, I have a purchase order, and I have Internal Sales Orders (imaginatively abbreviated to ISO). The problem I've struck is that I want some variables initialised (initialized for people who don't spell correctly :P ) after the object is loaded, so I've been using the after_initialize hook. No problem... until I start adding some more mixins. The problem I have is that I can have an after_initialize in any one of my mixins, so I need to include a super call at the start to make sure the other mixin after_initialize calls get called. Which is great, until I end up calling super and I get an error because there is no super to call. Here's a little example, in case I haven't been confusing enough:

    class Iso < ActiveRecord::Base
      include Shared::TracksSerialNumberExtension
      include Shared::OrderLines
      extend Shared::Filtered
      include Sendable::Model

      validates_presence_of :customer
      validates_associated :lines

      owned_by :customer
      order_lines :despatched        # Mixin
      tracks_serial_numbers :items   # Mixin
      sendable :customer             # Mixin

      attr_accessor :address

      def initialize(params = nil)
        super
        self.created_at ||= Time.now.to_date
      end
    end

So, if each one of the mixins has an after_initialize call with a super call, how can I stop that last super call from raising the error? How can I test that the super method exists before I call it?
You can use this:

    super if defined?(super)

Here is an example:

    class A
    end

    class B < A
      def t
        super if defined?(super)
        puts "Hi from B"
      end
    end

    B.new.t
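Applied to the question's scenario, each mixin can guard its own hook the same way, and the chain then works however many modules are mixed in (module and method names below are made up for illustration, not the actual Shared::/Sendable:: modules from the question):

```ruby
# Two illustrative mixins, each guarding its super call so the chain
# survives no matter which other mixins are (or aren't) included.
module Trackable
  def after_initialize
    super if defined?(super)  # continue the chain if anything is below us
    @tracked = true
  end
end

module Sendable
  def after_initialize
    super if defined?(super)
    @sendable = true
  end
end

class Order
  include Trackable
  include Sendable  # looked up first; its super finds Trackable's hook

  attr_reader :tracked, :sendable

  def initialize
    after_initialize
  end
end

order = Order.new
puts order.tracked && order.sendable  # prints true: both hooks ran
```

Sendable's version is found first (last module included wins in the lookup order), its guarded super reaches Trackable's version, and Trackable's guard stops the chain cleanly because nothing further defines after_initialize.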
Ruby mixins and calling super methods Ok, so I've been refactoring my code in my little Rails app in an effort to remove duplication, and in general make my life easier (as I like an easy life). Part of this refactoring, has been to move code that's common to two of my models to a module that I can include where I need it. So far, so good. Looks like it's going to work out, but I've just hit a problem that I'm not sure how to get around. The module (which I've called sendable), is just going to be the code that handles faxing, e-mailing, or printing a PDF of the document. So, for example, I have a purchase order, and I have Internal Sales Orders (imaginatively abbreviated to ISO). The problem I've struck, is that I want some variables initialised (initialized for people who don't spell correctly:P ) after the object is loaded, so I've been using the after_initialize hook. No problem... until I start adding some more mixins. The problem I have, is that I can have an after_initialize in any one of my mixins, so I need to include a super call at the start to make sure the other mixin after_initialize calls get called. Which is great, until I end up calling super and I get an error because there is no super to call. Here's a little example, in case I haven't been confusing enough: class Iso < ActiveRecord::Base include Shared::TracksSerialNumberExtension include Shared::OrderLines extend Shared::Filtered include Sendable::Model validates_presence_of:customer validates_associated:lines owned_by:customer order_lines:despatched # Mixin tracks_serial_numbers:items # Mixin sendable:customer # Mixin attr_accessor:address def initialize( params = nil ) super self.created_at ||= Time.now.to_date end end So, if each one of the mixins have an after_initialize call, with a super call, how can I stop that last super call from raising the error? How can I test that the super method exists before I call it?
TITLE: Ruby mixins and calling super methods QUESTION: Ok, so I've been refactoring my code in my little Rails app in an effort to remove duplication, and in general make my life easier (as I like an easy life). Part of this refactoring has been to move code that's common to two of my models to a module that I can include where I need it. So far, so good. Looks like it's going to work out, but I've just hit a problem that I'm not sure how to get around. The module (which I've called sendable) is just going to be the code that handles faxing, e-mailing, or printing a PDF of the document. So, for example, I have a purchase order, and I have Internal Sales Orders (imaginatively abbreviated to ISO). The problem I've struck is that I want some variables initialised (initialized for people who don't spell correctly :P) after the object is loaded, so I've been using the after_initialize hook. No problem... until I start adding some more mixins. The problem I have is that I can have an after_initialize in any one of my mixins, so I need to include a super call at the start to make sure the other mixin after_initialize calls get called. Which is great, until I end up calling super and I get an error because there is no super to call. Here's a little example, in case I haven't been confusing enough:

class Iso < ActiveRecord::Base
  include Shared::TracksSerialNumberExtension
  include Shared::OrderLines
  extend Shared::Filtered
  include Sendable::Model

  validates_presence_of :customer
  validates_associated :lines

  owned_by :customer

  order_lines :despatched        # Mixin
  tracks_serial_numbers :items   # Mixin
  sendable :customer             # Mixin

  attr_accessor :address

  def initialize( params = nil )
    super
    self.created_at ||= Time.now.to_date
  end
end

So, if each one of the mixins has an after_initialize call, with a super call, how can I stop that last super call from raising the error? How can I test that the super method exists before I call it?
ANSWER: You can use this:

super if defined?(super)

Here is an example:

class A
end

class B < A
  def t
    super if defined?(super)
    puts "Hi from B"
  end
end

B.new.t
[ "ruby-on-rails", "ruby" ]
35
42
30,778
5
0
2008-08-14T08:30:27.570000
2010-10-05T20:34:17.547000
10,810
12,106
Why do I get the error "Unable to update the password" when calling AzMan?
I'm doing an authorization check from a WinForms application with the help of the AzMan authorization provider from Enterprise Library and am receiving the following error: Unable to update the password. The value provided as the current password is incorrect. (Exception from HRESULT: 0x8007052B) (Microsoft.Practices.EnterpriseLibrary.Security.AzMan) Unable to update the password. The value provided as the current password is incorrect. (Exception from HRESULT: 0x8007052B) (Microsoft.Interop.Security.AzRoles) The AzMan store is hosted in ADAM on another computer in the same domain. Other computers and users do not have this problem. The user making the call has read access to both ADAM and the AzMan store. The computer running the WinForms app and the computer running ADAM are both on Windows XP SP2. I've had access problems with AzMan before that I've resolved, but this is a new one... What am I missing?
I found out from the event log that there was a security issue with the user making the call to AzMan from a remote computer. The user did not belong to the local Users group on the computer running ADAM/AzMan. When I corrected that, everything worked again.
Why do I get the error "Unable to update the password" when calling AzMan? I'm doing an authorization check from a WinForms application with the help of the AzMan authorization provider from Enterprise Library and am receiving the following error: Unable to update the password. The value provided as the current password is incorrect. (Exception from HRESULT: 0x8007052B) (Microsoft.Practices.EnterpriseLibrary.Security.AzMan) Unable to update the password. The value provided as the current password is incorrect. (Exception from HRESULT: 0x8007052B) (Microsoft.Interop.Security.AzRoles) The AzMan store is hosted in ADAM on another computer in the same domain. Other computers and users do not have this problem. The user making the call has read access to both ADAM and the AzMan store. The computer running the WinForms app and the computer running ADAM are both on Windows XP SP2. I've had access problems with AzMan before that I've resolved, but this is a new one... What am I missing?
TITLE: Why do I get the error "Unable to update the password" when calling AzMan? QUESTION: I'm doing an authorization check from a WinForms application with the help of the AzMan authorization provider from Enterprise Library and am receiving the following error: Unable to update the password. The value provided as the current password is incorrect. (Exception from HRESULT: 0x8007052B) (Microsoft.Practices.EnterpriseLibrary.Security.AzMan) Unable to update the password. The value provided as the current password is incorrect. (Exception from HRESULT: 0x8007052B) (Microsoft.Interop.Security.AzRoles) The AzMan store is hosted in ADAM on another computer in the same domain. Other computers and users do not have this problem. The user making the call has read access to both ADAM and the AzMan store. The computer running the WinForms app and the computer running ADAM are both on Windows XP SP2. I've had access problems with AzMan before that I've resolved, but this is a new one... What am I missing? ANSWER: I found out from the event log that there was a security issue with the user making the call to AzMan from a remote computer. The user did not belong to the local Users group on the computer running ADAM/AzMan. When I corrected that, everything worked again.
[ "c#", "winforms", "authorization", "adam", "azman" ]
4
2
3,023
2
0
2008-08-14T08:38:24.760000
2008-08-15T09:44:36.773000
10,819
2,663,232
SQL, Auxiliary table of numbers
For certain types of sql queries, an auxiliary table of numbers can be very useful. It may be created as a table with as many rows as you need for a particular task or as a user defined function that returns the number of rows required in each query. What is the optimal way to create such a function?
Heh... sorry I'm so late responding to an old post. And, yeah, I had to respond because the most popular answer (at the time, the Recursive CTE answer with the link to 14 different methods) on this thread is, ummm... performance-challenged at best. First, the article with the 14 different solutions is fine for seeing the different methods of creating a Numbers/Tally table on the fly but as pointed out in the article and in the cited thread, there's a very important quote... "suggestions regarding efficiency and performance are often subjective. Regardless of how a query is being used, the physical implementation determines the efficiency of a query. Therefore, rather than relying on biased guidelines, it is imperative that you test the query and determine which one performs better." Ironically, the article itself contains many subjective statements and "biased guidelines" such as "a recursive CTE can generate a number listing pretty efficiently" and "This is an efficient method of using WHILE loop from a newsgroup posting by Itzik Ben-Gen" (which I'm sure he posted just for comparison purposes). C'mon folks... Just mentioning Itzik's good name may lead some poor slob into actually using that horrible method. The author should practice what (s)he preaches and should do a little performance testing before making such ridiculously incorrect statements, especially in the face of any scalability. With the thought of actually doing some testing before making any subjective claims about what any code does or what someone "likes", here's some code you can do your own testing with. Set up Profiler for the SPID you're running the test from and check it out for yourself... just do a "Search'n'Replace" of the number 1000000 for your "favorite" number and see...
--===== Test for 1000000 rows ================================== GO --===== Traditional RECURSIVE CTE method WITH Tally (N) AS ( SELECT 1 UNION ALL SELECT 1 + N FROM Tally WHERE N < 1000000 ) SELECT N INTO #Tally1 FROM Tally OPTION (MAXRECURSION 0); GO --===== Traditional WHILE LOOP method CREATE TABLE #Tally2 (N INT); SET NOCOUNT ON; DECLARE @Index INT; SET @Index = 1; WHILE @Index <= 1000000 BEGIN INSERT #Tally2 (N) VALUES (@Index); SET @Index = @Index + 1; END; GO --===== Traditional CROSS JOIN table method SELECT TOP (1000000) ROW_NUMBER() OVER (ORDER BY (SELECT 1)) AS N INTO #Tally3 FROM Master.sys.All_Columns ac1 CROSS JOIN Master.sys.ALL_Columns ac2; GO --===== Itzik's CROSS JOINED CTE method WITH E00(N) AS (SELECT 1 UNION ALL SELECT 1), E02(N) AS (SELECT 1 FROM E00 a, E00 b), E04(N) AS (SELECT 1 FROM E02 a, E02 b), E08(N) AS (SELECT 1 FROM E04 a, E04 b), E16(N) AS (SELECT 1 FROM E08 a, E08 b), E32(N) AS (SELECT 1 FROM E16 a, E16 b), cteTally(N) AS (SELECT ROW_NUMBER() OVER (ORDER BY N) FROM E32) SELECT N INTO #Tally4 FROM cteTally WHERE N <= 1000000; GO --===== Housekeeping DROP TABLE #Tally1, #Tally2, #Tally3, #Tally4; GO While we're at it, here's the numbers I get from SQL Profiler for the values of 100, 1000, 10000, 100000, and 1000000... 
SPID TextData Dur(ms) CPU Reads Writes ---- ---------------------------------------- ------- ----- ------- ------ 51 --===== Test for 100 rows ============== 8 0 0 0 51 --===== Traditional RECURSIVE CTE method 16 0 868 0 51 --===== Traditional WHILE LOOP method CR 73 16 175 2 51 --===== Traditional CROSS JOIN table met 11 0 80 0 51 --===== Itzik's CROSS JOINED CTE method 6 0 63 0 51 --===== Housekeeping DROP TABLE #Tally 35 31 401 0 51 --===== Test for 1000 rows ============= 0 0 0 0 51 --===== Traditional RECURSIVE CTE method 47 47 8074 0 51 --===== Traditional WHILE LOOP method CR 80 78 1085 0 51 --===== Traditional CROSS JOIN table met 5 0 98 0 51 --===== Itzik's CROSS JOINED CTE method 2 0 83 0 51 --===== Housekeeping DROP TABLE #Tally 6 15 426 0 51 --===== Test for 10000 rows ============ 0 0 0 0 51 --===== Traditional RECURSIVE CTE method 434 344 80230 10 51 --===== Traditional WHILE LOOP method CR 671 563 10240 9 51 --===== Traditional CROSS JOIN table met 25 31 302 15 51 --===== Itzik's CROSS JOINED CTE method 24 0 192 15 51 --===== Housekeeping DROP TABLE #Tally 7 15 531 0 51 --===== Test for 100000 rows =========== 0 0 0 0 51 --===== Traditional RECURSIVE CTE method 4143 3813 800260 154 51 --===== Traditional WHILE LOOP method CR 5820 5547 101380 161 51 --===== Traditional CROSS JOIN table met 160 140 479 211 51 --===== Itzik's CROSS JOINED CTE method 153 141 276 204 51 --===== Housekeeping DROP TABLE #Tally 10 15 761 0 51 --===== Test for 1000000 rows ========== 0 0 0 0 51 --===== Traditional RECURSIVE CTE method 41349 37437 8001048 1601 51 --===== Traditional WHILE LOOP method CR 59138 56141 1012785 1682 51 --===== Traditional CROSS JOIN table met 1224 1219 2429 2101 51 --===== Itzik's CROSS JOINED CTE method 1448 1328 1217 2095 51 --===== Housekeeping DROP TABLE #Tally 8 0 415 0 As you can see, the Recursive CTE method is the second worst only to the While Loop for Duration and CPU and has 8 times the memory pressure in the form of logical reads than 
the While Loop. It's RBAR on steroids and should be avoided, at all cost, for any single-row calculations just as a While Loop should be avoided. There are places where recursion is quite valuable but this ISN'T one of them. As a sidebar, Mr. Denny is absolutely spot on... a correctly sized permanent Numbers or Tally table is the way to go for most things. What does correctly sized mean? Well, most people use a Tally table to generate dates or to do splits on VARCHAR(8000). If you create an 11,000 row Tally table with the correct clustered index on "N", you'll have enough rows to create more than 30 years worth of dates (I work with mortgages a fair bit so 30 years is a key number for me) and certainly enough to handle a VARCHAR(8000) split. Why is "right sizing" so important? If the Tally table is used a lot, it easily fits in cache which makes it blazingly fast without much pressure on memory at all. Last but not least, everyone knows that if you create a permanent Tally table, it doesn't much matter which method you use to build it because 1) it's only going to be made once and 2) if it's something like an 11,000 row table, all of the methods are going to run "good enough". So why all the indignation on my part about which method to use??? The answer is that some poor guy/gal who doesn't know any better and just needs to get his or her job done might see something like the Recursive CTE method and decide to use it for something much larger and much more frequently used than building a permanent Tally table and I'm trying to protect those people, the servers their code runs on, and the company that owns the data on those servers. Yeah... it's that big a deal. It should be for everyone else, as well. Teach the right way to do things instead of "good enough". Do some testing before posting or using something from a post or book...
the life you save may, in fact, be your own especially if you think a recursive CTE is the way to go for something like this.;-) Thanks for listening...
SQL, Auxiliary table of numbers For certain types of sql queries, an auxiliary table of numbers can be very useful. It may be created as a table with as many rows as you need for a particular task or as a user defined function that returns the number of rows required in each query. What is the optimal way to create such a function?
TITLE: SQL, Auxiliary table of numbers QUESTION: For certain types of sql queries, an auxiliary table of numbers can be very useful. It may be created as a table with as many rows as you need for a particular task or as a user defined function that returns the number of rows required in each query. What is the optimal way to create such a function? ANSWER: Heh... sorry I'm so late responding to an old post. And, yeah, I had to respond because the most popular answer (at the time, the Recursive CTE answer with the link to 14 different methods) on this thread is, ummm... performance-challenged at best. First, the article with the 14 different solutions is fine for seeing the different methods of creating a Numbers/Tally table on the fly but as pointed out in the article and in the cited thread, there's a very important quote... "suggestions regarding efficiency and performance are often subjective. Regardless of how a query is being used, the physical implementation determines the efficiency of a query. Therefore, rather than relying on biased guidelines, it is imperative that you test the query and determine which one performs better." Ironically, the article itself contains many subjective statements and "biased guidelines" such as "a recursive CTE can generate a number listing pretty efficiently" and "This is an efficient method of using WHILE loop from a newsgroup posting by Itzik Ben-Gen" (which I'm sure he posted just for comparison purposes). C'mon folks... Just mentioning Itzik's good name may lead some poor slob into actually using that horrible method. The author should practice what (s)he preaches and should do a little performance testing before making such ridiculously incorrect statements, especially in the face of any scalability. With the thought of actually doing some testing before making any subjective claims about what any code does or what someone "likes", here's some code you can do your own testing with.
Setup profiler for the SPID you're running the test from and check it out for yourself... just do a "Search'n'Replace" of the number 1000000 for your "favorite" number and see... --===== Test for 1000000 rows ================================== GO --===== Traditional RECURSIVE CTE method WITH Tally (N) AS ( SELECT 1 UNION ALL SELECT 1 + N FROM Tally WHERE N < 1000000 ) SELECT N INTO #Tally1 FROM Tally OPTION (MAXRECURSION 0); GO --===== Traditional WHILE LOOP method CREATE TABLE #Tally2 (N INT); SET NOCOUNT ON; DECLARE @Index INT; SET @Index = 1; WHILE @Index <= 1000000 BEGIN INSERT #Tally2 (N) VALUES (@Index); SET @Index = @Index + 1; END; GO --===== Traditional CROSS JOIN table method SELECT TOP (1000000) ROW_NUMBER() OVER (ORDER BY (SELECT 1)) AS N INTO #Tally3 FROM Master.sys.All_Columns ac1 CROSS JOIN Master.sys.ALL_Columns ac2; GO --===== Itzik's CROSS JOINED CTE method WITH E00(N) AS (SELECT 1 UNION ALL SELECT 1), E02(N) AS (SELECT 1 FROM E00 a, E00 b), E04(N) AS (SELECT 1 FROM E02 a, E02 b), E08(N) AS (SELECT 1 FROM E04 a, E04 b), E16(N) AS (SELECT 1 FROM E08 a, E08 b), E32(N) AS (SELECT 1 FROM E16 a, E16 b), cteTally(N) AS (SELECT ROW_NUMBER() OVER (ORDER BY N) FROM E32) SELECT N INTO #Tally4 FROM cteTally WHERE N <= 1000000; GO --===== Housekeeping DROP TABLE #Tally1, #Tally2, #Tally3, #Tally4; GO While we're at it, here's the numbers I get from SQL Profiler for the values of 100, 1000, 10000, 100000, and 1000000... 
SPID TextData Dur(ms) CPU Reads Writes ---- ---------------------------------------- ------- ----- ------- ------ 51 --===== Test for 100 rows ============== 8 0 0 0 51 --===== Traditional RECURSIVE CTE method 16 0 868 0 51 --===== Traditional WHILE LOOP method CR 73 16 175 2 51 --===== Traditional CROSS JOIN table met 11 0 80 0 51 --===== Itzik's CROSS JOINED CTE method 6 0 63 0 51 --===== Housekeeping DROP TABLE #Tally 35 31 401 0 51 --===== Test for 1000 rows ============= 0 0 0 0 51 --===== Traditional RECURSIVE CTE method 47 47 8074 0 51 --===== Traditional WHILE LOOP method CR 80 78 1085 0 51 --===== Traditional CROSS JOIN table met 5 0 98 0 51 --===== Itzik's CROSS JOINED CTE method 2 0 83 0 51 --===== Housekeeping DROP TABLE #Tally 6 15 426 0 51 --===== Test for 10000 rows ============ 0 0 0 0 51 --===== Traditional RECURSIVE CTE method 434 344 80230 10 51 --===== Traditional WHILE LOOP method CR 671 563 10240 9 51 --===== Traditional CROSS JOIN table met 25 31 302 15 51 --===== Itzik's CROSS JOINED CTE method 24 0 192 15 51 --===== Housekeeping DROP TABLE #Tally 7 15 531 0 51 --===== Test for 100000 rows =========== 0 0 0 0 51 --===== Traditional RECURSIVE CTE method 4143 3813 800260 154 51 --===== Traditional WHILE LOOP method CR 5820 5547 101380 161 51 --===== Traditional CROSS JOIN table met 160 140 479 211 51 --===== Itzik's CROSS JOINED CTE method 153 141 276 204 51 --===== Housekeeping DROP TABLE #Tally 10 15 761 0 51 --===== Test for 1000000 rows ========== 0 0 0 0 51 --===== Traditional RECURSIVE CTE method 41349 37437 8001048 1601 51 --===== Traditional WHILE LOOP method CR 59138 56141 1012785 1682 51 --===== Traditional CROSS JOIN table met 1224 1219 2429 2101 51 --===== Itzik's CROSS JOINED CTE method 1448 1328 1217 2095 51 --===== Housekeeping DROP TABLE #Tally 8 0 415 0 As you can see, the Recursive CTE method is the second worst only to the While Loop for Duration and CPU and has 8 times the memory pressure in the form of logical reads than 
the While Loop. It's RBAR on steroids and should be avoided, at all cost, for any single-row calculations just as a While Loop should be avoided. There are places where recursion is quite valuable but this ISN'T one of them. As a sidebar, Mr. Denny is absolutely spot on... a correctly sized permanent Numbers or Tally table is the way to go for most things. What does correctly sized mean? Well, most people use a Tally table to generate dates or to do splits on VARCHAR(8000). If you create an 11,000 row Tally table with the correct clustered index on "N", you'll have enough rows to create more than 30 years worth of dates (I work with mortgages a fair bit so 30 years is a key number for me) and certainly enough to handle a VARCHAR(8000) split. Why is "right sizing" so important? If the Tally table is used a lot, it easily fits in cache which makes it blazingly fast without much pressure on memory at all. Last but not least, everyone knows that if you create a permanent Tally table, it doesn't much matter which method you use to build it because 1) it's only going to be made once and 2) if it's something like an 11,000 row table, all of the methods are going to run "good enough". So why all the indignation on my part about which method to use??? The answer is that some poor guy/gal who doesn't know any better and just needs to get his or her job done might see something like the Recursive CTE method and decide to use it for something much larger and much more frequently used than building a permanent Tally table and I'm trying to protect those people, the servers their code runs on, and the company that owns the data on those servers. Yeah... it's that big a deal. It should be for everyone else, as well. Teach the right way to do things instead of "good enough". Do some testing before posting or using something from a post or book...
the life you save may, in fact, be your own especially if you think a recursive CTE is the way to go for something like this.;-) Thanks for listening...
[ "sql", "sql-server" ]
67
117
26,118
8
0
2008-08-14T09:01:37.663000
2010-04-18T17:44:31.607000
10,822
10,824
Fast database access test from .NET
What would be a very fast way to determine if your connection string lets you connect to a database? Normally a connection attempt keeps the user waiting a long time before notifying them that the attempt was futile anyway.
You haven't mentioned what database you are connecting to. In SQL Server 2005, from .NET, you can specify a connection timeout in your connection string like so: server=;database=;uid=;password=;Connect Timeout=3 This will try to connect to the server and if it doesn't do so in three seconds, it will throw a timeout error.
Fast database access test from .NET What would be a very fast way to determine if your connection string lets you connect to a database? Normally a connection attempt keeps the user waiting a long time before notifying them that the attempt was futile anyway.
TITLE: Fast database access test from .NET QUESTION: What would be a very fast way to determine if your connection string lets you connect to a database? Normally a connection attempt keeps the user waiting a long time before notifying them that the attempt was futile anyway. ANSWER: You haven't mentioned what database you are connecting to. In SQL Server 2005, from .NET, you can specify a connection timeout in your connection string like so: server=;database=;uid=;password=;Connect Timeout=3 This will try to connect to the server and if it doesn't do so in three seconds, it will throw a timeout error.
[ ".net", "database", "connection", "connection-string" ]
5
11
1,173
2
0
2008-08-14T09:09:52.607000
2008-08-14T09:13:45.797000
10,825
10,826
Compare a date string to datetime in SQL Server?
In SQL Server I have a DATETIME column which includes a time element. Example: '14 AUG 2008 14:23:019' What is the best method to only select the records for a particular day, ignoring the time part? Example: (Not safe, as it does not match the time part and returns no rows) DECLARE @p_date DATETIME SET @p_date = CONVERT( DATETIME, '14 AUG 2008', 106 ) SELECT * FROM table1 WHERE column_datetime = @p_date Note: Given this site is also about jotting down notes and techniques you pick up and then forget, I'm going to post my own answer to this question as DATETIME stuff in MSSQL is probably the topic I look up most in SQL BOL. Update Clarified example to be more specific. Edit Sorry, but I've had to down-mod WRONG answers (answers that return wrong results). @Jorrit: WHERE (date>'20080813' AND date<'20080815') will return the 13th and the 14th. @wearejimbo: Close, but no cigar! badge awarded to you. You missed out records written at 14/08/2008 23:59:001 to 23:59:999 (i.e. Less than 1 second before midnight.)
Technique 1: DECLARE @p_date DATETIME SET @p_date = CONVERT( DATETIME, '14 AUG 2008', 106 ) SELECT * FROM table1 WHERE column_datetime >= @p_date AND column_datetime < DATEADD(d, 1, @p_date) The advantage of this is that it will use any index on 'column_datetime' if it exists.
Compare a date string to datetime in SQL Server? In SQL Server I have a DATETIME column which includes a time element. Example: '14 AUG 2008 14:23:019' What is the best method to only select the records for a particular day, ignoring the time part? Example: (Not safe, as it does not match the time part and returns no rows) DECLARE @p_date DATETIME SET @p_date = CONVERT( DATETIME, '14 AUG 2008', 106 ) SELECT * FROM table1 WHERE column_datetime = @p_date Note: Given this site is also about jotting down notes and techniques you pick up and then forget, I'm going to post my own answer to this question as DATETIME stuff in MSSQL is probably the topic I look up most in SQL BOL. Update Clarified example to be more specific. Edit Sorry, but I've had to down-mod WRONG answers (answers that return wrong results). @Jorrit: WHERE (date>'20080813' AND date<'20080815') will return the 13th and the 14th. @wearejimbo: Close, but no cigar! badge awarded to you. You missed out records written at 14/08/2008 23:59:001 to 23:59:999 (i.e. Less than 1 second before midnight.)
TITLE: Compare a date string to datetime in SQL Server? QUESTION: In SQL Server I have a DATETIME column which includes a time element. Example: '14 AUG 2008 14:23:019' What is the best method to only select the records for a particular day, ignoring the time part? Example: (Not safe, as it does not match the time part and returns no rows) DECLARE @p_date DATETIME SET @p_date = CONVERT( DATETIME, '14 AUG 2008', 106 ) SELECT * FROM table1 WHERE column_datetime = @p_date Note: Given this site is also about jotting down notes and techniques you pick up and then forget, I'm going to post my own answer to this question as DATETIME stuff in MSSQL is probably the topic I look up most in SQL BOL. Update Clarified example to be more specific. Edit Sorry, but I've had to down-mod WRONG answers (answers that return wrong results). @Jorrit: WHERE (date>'20080813' AND date<'20080815') will return the 13th and the 14th. @wearejimbo: Close, but no cigar! badge awarded to you. You missed out records written at 14/08/2008 23:59:001 to 23:59:999 (i.e. Less than 1 second before midnight.) ANSWER: Technique 1: DECLARE @p_date DATETIME SET @p_date = CONVERT( DATETIME, '14 AUG 2008', 106 ) SELECT * FROM table1 WHERE column_datetime >= @p_date AND column_datetime < DATEADD(d, 1, @p_date) The advantage of this is that it will use any index on 'column_datetime' if it exists.
[ "sql-server", "database", "t-sql", "datetime" ]
53
40
270,044
18
0
2008-08-14T09:14:07.683000
2008-08-14T09:16:03.530000
10,855
11,593
LINQ query on a DataTable
I'm trying to perform a LINQ query on a DataTable object and bizarrely I am finding that performing such queries on DataTables is not straightforward. For example: var results = from myRow in myDataTable where results.Field("RowNo") == 1 select results; This is not allowed. How do I get something like this working? I'm amazed that LINQ queries are not allowed on DataTables!
You can't query against the DataTable's Rows collection, since DataRowCollection doesn't implement IEnumerable<T>. You need to use the AsEnumerable() extension for DataTable. Like so: var results = from myRow in myDataTable.AsEnumerable() where myRow.Field<int>("RowNo") == 1 select myRow; And as @Keith says, you'll need to add a reference to System.Data.DataSetExtensions. AsEnumerable() returns IEnumerable<DataRow>. If you need to convert IEnumerable<DataRow> to a DataTable, use the CopyToDataTable() extension. Below is the query with a lambda expression: var result = myDataTable.AsEnumerable().Where(myRow => myRow.Field<int>("RowNo") == 1);
LINQ query on a DataTable I'm trying to perform a LINQ query on a DataTable object and bizarrely I am finding that performing such queries on DataTables is not straightforward. For example: var results = from myRow in myDataTable where results.Field("RowNo") == 1 select results; This is not allowed. How do I get something like this working? I'm amazed that LINQ queries are not allowed on DataTables!
TITLE: LINQ query on a DataTable QUESTION: I'm trying to perform a LINQ query on a DataTable object and bizarrely I am finding that performing such queries on DataTables is not straightforward. For example: var results = from myRow in myDataTable where results.Field("RowNo") == 1 select results; This is not allowed. How do I get something like this working? I'm amazed that LINQ queries are not allowed on DataTables! ANSWER: You can't query against the DataTable's Rows collection, since DataRowCollection doesn't implement IEnumerable<T>. You need to use the AsEnumerable() extension for DataTable. Like so: var results = from myRow in myDataTable.AsEnumerable() where myRow.Field<int>("RowNo") == 1 select myRow; And as @Keith says, you'll need to add a reference to System.Data.DataSetExtensions. AsEnumerable() returns IEnumerable<DataRow>. If you need to convert IEnumerable<DataRow> to a DataTable, use the CopyToDataTable() extension. Below is the query with a lambda expression: var result = myDataTable.AsEnumerable().Where(myRow => myRow.Field<int>("RowNo") == 1);
[ "c#", ".net", "linq", "datatable", ".net-3.5" ]
1,148
1,413
1,138,519
22
0
2008-08-14T10:08:27.123000
2008-08-14T19:45:01.383000
10,860
10,875
Data Layer Best Practices
I am in the middle of a "discussion" with a colleague about the best way to implement the data layer in a new application. One viewpoint is that the data layer should be aware of business objects (our own classes that represent an entity), and be able to work with that object natively. The opposing viewpoint is that the data layer should be object-agnostic, and purely handle simple data types (strings, bools, dates, etc.) I can see that both approaches may be valid, but my own viewpoint is that I prefer the former. That way, if the data storage medium changes, the business layer doesn't (necessarily) have to change to accommodate the new data layer. It would therefore be a trivial thing to change from a SQL data store to a serialized xml filesystem store. My colleague's point of view is that the data layer shouldn't have to know about object definitions, and that as long as the data is passed about appropriately, that is enough. Now, I know that this is one of those questions that has the potential to start a religious war, but I'd appreciate any feedback from the community on how you approach such things. TIA
It really depends on your view of the world - I used to be in the uncoupled camp. The DAL was only there to supply data to the BAL - end of story. With emerging technologies such as Linq to SQL and Entity Framework becoming a bit more popular, the line between DAL and BAL has been blurred a bit. In L2S especially your DAL is quite tightly coupled to the business objects, as the object model has a 1-1 mapping to your database fields. Like anything in software development there is no right or wrong answer. You need to understand your requirements and future requirements and work from there. I would no more use a Ferrari on the Dakar rally than I would a Range Rover on a track day.
Data Layer Best Practices
[ ".net", "n-tier-architecture" ]
9
5
3,543
6
0
2008-08-14T10:23:05.657000
2008-08-14T10:38:04.677000
10,872
10,882
How to encourage someone to learn programming?
I have a friend that has a little bit of a holiday coming up and they want ideas on what they should do during the holiday. I plan to suggest programming to them; what are the pros and cons that I need to mention? I'll add to the list below as people reply, I apologise if I duplicate any entries.

Pros I have so far:
- Minimal money requirement (they already have a computer)
- Will help them to think in new ways
- (Rob Cooper) Great challenge, every day really is a fresh challenge in some way, shape or form. Not many jobs can truly offer that.
- (Rob Cooper) I like the way it makes me think.. I look at EVERYTHING more logically as my skills improve.. This helps with general living as well as programming.
- (Rob Cooper) Money is/can be pretty good.
- (Rob Cooper) It's a pretty portable trade.. With collaboration tech as it is, you can pretty much work anywhere in the world so long as you have an Internet connection.
- (Rob Cooper) It's an exciting industry to work in, there's massive amounts of tech to work and play with!
- (Quarrelsome) Jetpacks. Programming is Technology and the more time we spend with technology the closer we get to having Jetpacks. (Teifion: This is a really cool analogy!)
- (Saj) Profitable way of Exercising Brain Muscles.
- (Saj) It makes you look brilliant to some audience.
- (Saj) Makes you tech-smart.
- (Saj) Makes you eligible to the future world.
- (Saj) It's easy, fun, not in a math way..
- (kiwiBastard) If the person likes problem solving then there is no better example than programming.
- (kiwiBastard) Brilliant sense of achievement when you can interact with something you have designed and coded
- (kiwiBastard) Great way to meet chicks/chaps - erm, maybe not that one (Teifion: I dunno where you do programming but I want to come visit some time)
- (epatel) Learning how to program is like learning spell casting at Hogwarts. The computer will be your servant forever...

Cons I have so far:
- Can be frustrating when it's not working
- Not physical exercise
- (Rob Cooper) There are a lot of people doing it just for the money. They have no love for the craft and just appear lazy, annoying and sometimes it can really grind my gears seeing an industry and workforce I enjoy so much being diluted with crap. Which can often reflect badly on all of us.
- (Rob Cooper) Not so sure about the initial cost.. Yeah you can get started with Java or something at low cost, but for me, locally, the vast demand is for .NET developers, which can be costly getting up and running with. However, this is rapidly becoming less of an issue with the amount of work put in by MS releasing pretty damn good Express editions of their main development product line.
- (Rob Cooper) It's a lifelong career.. I truly feel you never really become a "master" by nature of the industry; if you stop for 1-2 years, you're behind the times.. Some people do not like the pace.
- (Rob Cooper) Some geeks can be hard to work with.. While I think the general geek movement is really changing for the better, you will always have the classic "I am more intelligent than you" geeks that can really just be a pain in the ass for all!
- (Saj) Can cause virtual damage.
- (Saj) Can make one throw their computer away.
- (Saj) Can make one only virtually available to the world.
I do it for the ladies :D

Seriously though, for me:

Pros:
- Great challenge, every day really is a fresh challenge in some way, shape or form. Not many jobs can truly offer that.
- I like the way it makes me think.. I look at EVERYTHING more logically as my skills improve.. This helps with general living as well as programming.
- Money is/can be pretty good.
- It's a pretty portable trade.. With collaboration tech as it is, you can pretty much work anywhere in the world so long as you have an Internet connection.
- It's an exciting industry to work in, there's massive amounts of tech to work and play with!

Cons (some of these can easily be pros too):
- There are a lot of people doing it just for the money. They have no love for the craft and just appear lazy, annoying and sometimes it can really grind my gears seeing an industry and workforce I enjoy so much being diluted with crap. Which can often reflect badly on all of us.
- Not so sure about the initial cost.. Yeah you can get started with Java or something at low cost, but for me, locally, the vast demand is for .NET developers, which can be costly getting up and running with. However, this is rapidly becoming less of an issue with the amount of work put in by MS releasing pretty damn good Express editions of their main development product line.
- It's a lifelong career.. I truly feel you never really become a "master" by nature of the industry; if you stop for 1-2 years, you're behind the times.. Some people do not like the pace.
- Some geeks can be hard to work with.. While I think the general geek movement is really changing for the better, you will always have the classic "I am more intelligent than you" geeks that can really just be a pain in the ass for all!
[ "language-agnostic" ]
3
8
1,159
8
0
2008-08-14T10:34:05.073000
2008-08-14T10:50:21.420000
10,877
486,662
How can you customize the numbers in an ordered list?
How can I left-align the numbers in an ordered list?

1. an item
// skip some items for brevity
9. another item
10. notice the 1 is under the 9, and the item contents also line up

Change the character after the number in an ordered list?

1) an item

Also, is there a CSS solution to change from numbers to alphabetic/roman lists instead of using the type attribute on the ol element? I am mostly interested in answers that work on Firefox 3.
This is the solution I have working in Firefox 3, Opera and Google Chrome. The list still displays in IE7 (but without the close bracket and left-aligned numbers):

ol {
    counter-reset: item;
    margin-left: 0;
    padding-left: 0;
}

li {
    display: block;
    margin-bottom: .5em;
    margin-left: 2em;
}

li::before {
    display: inline-block;
    content: counter(item) ") ";
    counter-increment: item;
    width: 2em;
    margin-left: -2em;
}

(Demo list renders as: One, Two, Three, Four, Five, Six, Seven, Eight, Nine Items, Ten Items.)

EDIT: Included multiple-line fix by strager.

"Also is there a CSS solution to change from numbers to alphabetic/roman lists instead of using the type attribute on the ol element."

Refer to the list-style-type CSS property. Or, when using counters, the second argument accepts a list-style-type value. For example, the following will use upper roman:

li::before {
    content: counter(item, upper-roman) ") ";
    counter-increment: item;
    /*... */
[ "html", "css", "html-lists", "roman-numerals" ]
137
121
148,615
17
0
2008-08-14T10:42:32.013000
2009-01-28T06:31:49.953000
10,880
10,890
Any good advice on using emacs for C++ project?
I'm looking for a good article on using emacs as C/C++ IDE. Something like Steve Yegge's "Effective emacs".
No specific article, really, but I've found EmacsWiki to be full of useful information. Consider checking out these entries: CPlusPlus as a starting point for many C++-related articles, and CppTemplate to define a template that can give you a good skeleton when you start new files
[ "c++", "emacs" ]
39
16
23,094
5
0
2008-08-14T10:42:59.627000
2008-08-14T11:04:13.680000
10,891
10,914
Best way to connect to SQL Server with MFC
Soooo, I am starting this new job soon where most of the code is legacy MFC. The end goal is to convert/rewrite to C#. I'm a little rusty with MFC so writing some code so I can try and spark my memory. Anyway, was wondering the best way to connect to a SS2005 database with MFC. Back in the day CDatabase via ODBC was the way to go, has that changed in the last few years? I see you can use ATL and ADO, but looked a little, well verbose, for my liking?
The ATL consumer templates for OLE DB are an option - start here. The ClassWizard is still there to assist you so the verbosity isn't too much of a hurdle at first. Very soon you will need to hand-code though. There is a lot of careful twiddling, for example ensuring that your command string has exactly the right number of ? marks corresponding to the COLUMN_ENTRYs for an accessor. Then you'll probably have a million CopyToCommandFromObject and CopyToObjectFromCommand methods. This app doesn't have any data access yet and you're going to be adding it? If so, I would seriously consider implementing a modern DAL (ADO.NET, LINQ if you're lucky enough to be on 2008) in a separate managed assembly and doing some interop.
[ "sql-server", "mfc" ]
4
2
3,836
1
0
2008-08-14T11:05:21.743000
2008-08-14T11:49:25.883000
10,898
21,550
How to set up a DB2 linked server on a 64-bit SQL Server 2005?
I need to create a linked server to a DB2 database on a mainframe. Has anyone done this successfully on a 64-bit version of SQL Server 2005? If so, which provider and settings were used? It's important that the linked server work whether we are using a Windows authenticated account to login to SQL Server or a SQL Server login. It's also important that both the 4-part name and OPENQUERY query methods are functional. We have one set up on a SQL Server 2000 machine that works well, but it uses a provider that's not available for 64-bit SS 2005.
We had this same issue with a production system late last year (Sept 2007) and the official word from our Microsoft contact was that they had a 64-bit OLE DB driver to connect to ASI/DB2, but it was in BETA at the time. Not sure when it will be out of beta, but that was the news as of last year. We decided to move the production server onto a 32-bit machine since we were not comfortable using beta drivers on production systems. I know this doesn't answer your question but it hopefully gives you some insight.
[ "sql-server", "db2" ]
5
1
2,158
3
0
2008-08-14T11:18:39.113000
2008-08-22T01:01:54.557000
10,901
11,004
Future proofing a large UI Application - MFC with 2008 Feature pack, or C# and Winforms?
My company has developed a long-standing product using MFC in Visual C++ as the de facto standard for UI development. Our codebase contains a LOT of legacy/archaic code which must be kept operational. Some of this code is older than me (originally written in the late 70s) and some members of our team are still on Visual Studio 6. However, a conclusion has thankfully been reached internally that our product is looking somewhat antiquated compared to our competitors', and that something needs to be done. I am currently working on a new area of the UI which is quite separate from the rest of the product. I have therefore been given the chance to try out 'new' technology stacks as a sort of proving ground before the long process of moving over the rest of the UI begins. I have been using C# with Windows Forms and the .NET framework for a while in my spare time and enjoy it, but am somewhat worried about the headaches caused by interop. While this particular branch of the UI won't require much interop with the legacy C++ codebase, I can foresee this becoming an issue in the future. The alternative is just to continue with MFC, but try and take advantage of the new feature pack that shipped with VS2008. This I guess is the easiest option, but I worry about longevity and not taking advantage of the goodness that is .NET... So, which do I pick? We're a small team so my recommendation will quite probably be accepted as a future direction for our development - I want to get it right. Is MFC dead? Is C#/WinForms the way forward? Is there anything else I'm totally missing? Help greatly appreciated!
I'm a developer on an app that has a ton of legacy MFC code, and we have all of your same concerns. A big driver for our strategy was to eliminate as much risk and uncertainty as we could, which meant avoiding The Big Rewrite. As we all know, TBR fails most of the time. So we chose an incremental approach that allows us to preserve modules that won't be changing in the current release, writing new features managed, and porting features that are getting enhancements to managed. You can do this several ways:

1. Host WPF content on your MFC views (see here)
2. For MFC MDI apps, create a new WinForms framework and host your MFC MDI views (see here)
3. Host WinForms user controls in MFC Dialogs and Views (see here)

The problem with adopting WPF (option 1) is that it will require you to rewrite all of your UI at once, otherwise it'll look pretty schizophrenic. The second approach looks viable but very complicated. The third approach is the one we selected and it's been working very well. It allows you to selectively refresh areas of your app while maintaining overall consistency and not touching things that aren't broken. The Visual C++ 2008 Feature Pack looks interesting, I haven't played with it though. Seems like it might help with your issue of an outdated look. If the "ribbon" would be too jarring for your users you could look at third-party MFC and/or WinForms control vendors. My overall recommendation is that interop + incremental change is definitely preferable to sweeping changes. After reading your follow-up, I can definitely confirm that the productivity gains of the framework vastly outweigh the investment in learning it. Nobody on our team had used C# at the start of this effort and now we all prefer it.
[ "c#", "c++", "winforms", "mfc", "user-interface" ]
12
9
4,251
6
0
2008-08-14T11:26:02.733000
2008-08-14T13:32:29.810000
10,905
10,907
Can you use generic forms in C#?
You should be able to create a generic form: public partial class MyGenericForm<T> : Form where T : class { /* form code */ public List<T> TypedList { get; set; } } This is valid C# and compiles. However, the designer won't work, and the form will throw a runtime exception if you have any images, stating that it cannot find the resource. I think this is because the Windows Forms designer assumes that the resources will be stored under the simple type's name.
Yes you can! Here's a blog post I made a while ago with the trick: Designing Generic Forms Edit: Looks like you're already doing it this way. This method works fine so I wouldn't consider it too hacky.
Can you use generic forms in C#? You should be able to create a generic form: public partial class MyGenericForm<T> : Form where T : class { /* form code */ public List<T> TypedList { get; set; } } This is valid C# and compiles. However, the designer won't work, and the form will throw a runtime exception if you have any images, stating that it cannot find the resource. I think this is because the Windows Forms designer assumes that the resources will be stored under the simple type's name.
TITLE: Can you use generic forms in C#? QUESTION: You should be able to create a generic form: public partial class MyGenericForm<T> : Form where T : class { /* form code */ public List<T> TypedList { get; set; } } This is valid C# and compiles. However, the designer won't work, and the form will throw a runtime exception if you have any images, stating that it cannot find the resource. I think this is because the Windows Forms designer assumes that the resources will be stored under the simple type's name. ANSWER: Yes you can! Here's a blog post I made a while ago with the trick: Designing Generic Forms. Edit: Looks like you're already doing it this way. This method works fine, so I wouldn't consider it too hacky.
[ "c#", ".net", "winforms" ]
33
22
14,418
4
0
2008-08-14T11:30:54.970000
2008-08-14T11:33:21.377000
10,915
11,164
Best way to model Many-To-One Relationships in NHibernate When Dealing With a Legacy DB?
Warning - I am very new to NHibernate. I know this question seems simple - and I'm sure there's a simple answer, but I've been spinning my wheels for some time on this one. I am dealing with a legacy db which really can't be altered structurally. I have a details table which lists payment plans that have been accepted by a customer. Each payment plan has an ID which links back to a reference table to get the plan's terms, conditions, etc. In my object model, I have an AcceptedPlan class, and a Plan class. Originally, I used a many-to-one relationship from the detail table back to the ref table to model this relationship in NHibernate. I also created a one-to-many relationship going in the opposite direction from the Plan class over to the AcceptedPlan class. This was fine while I was simply reading data. I could go to my Plan object, which was a property of my AcceptedPlan class to read the plan's details. My problem arose when I had to start inserting new rows to the details table. From my reading, it seems the only way to create a new child object is to add it to the parent object and then save the session. But I don't want to have to create a new parent Plan object every time I want to create a new detail record. This seems like unnecessary overhead. Does anyone know if I am going about this in the wrong way?
I'd steer away from having child objects contain their logical parent; it can get very messy and very recursive pretty quickly when you do that. I'd take a look at how you're intending to use the domain model before you do that sort of thing. You can easily still have the ID references in the tables and just leave them unmapped. Here are two example mappings that might nudge you in the right direction; I've had to ad-lib table names etc., but it could possibly help. I'd probably also suggest mapping the StatusId to an enumeration. Pay attention to the way the bag effectively maps the details table into a collection.
Best way to model Many-To-One Relationships in NHibernate When Dealing With a Legacy DB? Warning - I am very new to NHibernate. I know this question seems simple - and I'm sure there's a simple answer, but I've been spinning my wheels for some time on this one. I am dealing with a legacy db which really can't be altered structurally. I have a details table which lists payment plans that have been accepted by a customer. Each payment plan has an ID which links back to a reference table to get the plan's terms, conditions, etc. In my object model, I have an AcceptedPlan class, and a Plan class. Originally, I used a many-to-one relationship from the detail table back to the ref table to model this relationship in NHibernate. I also created a one-to-many relationship going in the opposite direction from the Plan class over to the AcceptedPlan class. This was fine while I was simply reading data. I could go to my Plan object, which was a property of my AcceptedPlan class to read the plan's details. My problem arose when I had to start inserting new rows to the details table. From my reading, it seems the only way to create a new child object is to add it to the parent object and then save the session. But I don't want to have to create a new parent Plan object every time I want to create a new detail record. This seems like unnecessary overhead. Does anyone know if I am going about this in the wrong way?
TITLE: Best way to model Many-To-One Relationships in NHibernate When Dealing With a Legacy DB? QUESTION: Warning - I am very new to NHibernate. I know this question seems simple - and I'm sure there's a simple answer, but I've been spinning my wheels for some time on this one. I am dealing with a legacy db which really can't be altered structurally. I have a details table which lists payment plans that have been accepted by a customer. Each payment plan has an ID which links back to a reference table to get the plan's terms, conditions, etc. In my object model, I have an AcceptedPlan class, and a Plan class. Originally, I used a many-to-one relationship from the detail table back to the ref table to model this relationship in NHibernate. I also created a one-to-many relationship going in the opposite direction from the Plan class over to the AcceptedPlan class. This was fine while I was simply reading data. I could go to my Plan object, which was a property of my AcceptedPlan class to read the plan's details. My problem arose when I had to start inserting new rows to the details table. From my reading, it seems the only way to create a new child object is to add it to the parent object and then save the session. But I don't want to have to create a new parent Plan object every time I want to create a new detail record. This seems like unnecessary overhead. Does anyone know if I am going about this in the wrong way? ANSWER: I'd steer away from having child objects contain their logical parent; it can get very messy and very recursive pretty quickly when you do that. I'd take a look at how you're intending to use the domain model before you do that sort of thing. You can easily still have the ID references in the tables and just leave them unmapped. Here are two example mappings that might nudge you in the right direction; I've had to ad-lib table names etc., but it could possibly help. I'd probably also suggest mapping the StatusId to an enumeration. Pay attention to the way the bag effectively maps the details table into a collection.
[ "c#", "nhibernate" ]
3
3
14,538
6
0
2008-08-14T11:56:29.550000
2008-08-14T15:04:15.240000
10,919
10,922
How do you measure SQL Fill Factor value
Usually when I'm creating indexes on tables, I generally guess what the Fill Factor should be based on an educated guess of how the table will be used (many reads or many writes). Is there a more scientific way to determine a more accurate Fill Factor value?
You could try running a big list of realistic operations and looking at IO queues for the different actions. There are a lot of variables that govern it, such as the size of each row and the number of writes vs reads. Basically: high fill factor = quicker read, low = quicker write. However it's not quite that simple, as almost all writes will be to a subset of rows that need to be looked up first. For instance: set a fill factor to 10% and each single-row update will take 10 times as long to find the row it's changing, even though a page split would then be very unlikely. Generally you see fill factors 70% (very high write) to 95% (very high read). It's a bit of an art form. I find that a good way of thinking of fill factors is as pages in an address book - the more tightly you pack the addresses the harder it is to change them, but the slimmer the book. I think I explained it better on my blog.
How do you measure SQL Fill Factor value Usually when I'm creating indexes on tables, I generally guess what the Fill Factor should be based on an educated guess of how the table will be used (many reads or many writes). Is there a more scientific way to determine a more accurate Fill Factor value?
TITLE: How do you measure SQL Fill Factor value QUESTION: Usually when I'm creating indexes on tables, I generally guess what the Fill Factor should be based on an educated guess of how the table will be used (many reads or many writes). Is there a more scientific way to determine a more accurate Fill Factor value? ANSWER: You could try running a big list of realistic operations and looking at IO queues for the different actions. There are a lot of variables that govern it, such as the size of each row and the number of writes vs reads. Basically: high fill factor = quicker read, low = quicker write. However it's not quite that simple, as almost all writes will be to a subset of rows that need to be looked up first. For instance: set a fill factor to 10% and each single-row update will take 10 times as long to find the row it's changing, even though a page split would then be very unlikely. Generally you see fill factors 70% (very high write) to 95% (very high read). It's a bit of an art form. I find that a good way of thinking of fill factors is as pages in an address book - the more tightly you pack the addresses the harder it is to change them, but the slimmer the book. I think I explained it better on my blog.
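The read/write trade-off described in the answer is easier to see with rough arithmetic. A back-of-envelope sketch in Python, assuming SQL Server's 8 KB page and a purely hypothetical fixed row size (real rows vary, and this ignores page header overhead):

```python
PAGE_BYTES = 8192   # SQL Server page size
ROW_BYTES = 100     # hypothetical fixed row size, for illustration only

def rows_per_page(fill_factor_pct):
    """Rows initially packed into each leaf page at a given fill factor."""
    return int(PAGE_BYTES * fill_factor_pct / 100) // ROW_BYTES

def free_slots(fill_factor_pct):
    """Row slots left free per page to absorb inserts before a page split."""
    return rows_per_page(100) - rows_per_page(fill_factor_pct)

# High fill factor: dense pages, fewer reads per scan, little insert headroom.
# Low fill factor: sparser pages, more reads, lots of room before splits.
for ff in (95, 70):
    print(f"fill factor {ff}%: {rows_per_page(ff)} rows/page, "
          f"{free_slots(ff)} free slots")
```

The numbers make the answer's rule of thumb concrete: at 95% nearly every page is full (fast scans, early splits), while at 70% roughly a quarter of each page is insert headroom.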
[ "sql-server", "fillfactor" ]
8
12
2,499
2
0
2008-08-14T12:07:27.750000
2008-08-14T12:16:03.770000
10,933
10,988
Can you recommend a good .NET web-based repository browser for SVN?
We have an SVN repository running on a Windows server, and I want to link internal documentation, feature changes, bugs and so on to code changes. We've found WebSVN to be amazingly slow - the repository is too large for it (I think). The team using it is primarily coding in C#, and while some have experience with other languages I'd really like a tool anyone on the team can maintain. Most of the tools I've seen are based on PHP, Java, Python, etc. All languages the team could learn, but I'd rather something that uses the skills we already have. Can you recommend a good web-based repository browser for SVN, ideally one that uses ASP.NET, SQL Server and that runs on IIS?
Have a look at http://warehouseapp.com It's a Mongrel/Ruby/MySQL stack (should work on Windows though) but I'm looking to avoid installing MySQL and Ruby on the server. I know (I'm also using a C# stack myself), but the self-hosted web-based SVN client market is such a small niche that even an offering in a different language could be considered good enough. MySQL doesn't bite, and installation of Ruby is pretty much an xcopy command. I understand why you don't want to spoil your server with additional software though, but if you are OK to host your SVN repositories with a third party, you get a nice web-based interface without maintenance hassles. I'm using http://unfuddled.com (they also have a basic API to hook up to if needed). Not to promote reinventing the wheel, but I originally wrote my own web SVN browser by using the svn log --xml command This is actually a good idea. I'm also parsing some XML-formatted output during my automated build process, but creating our own full-blown SVN browser is kind of overkill, because now you have to maintain not just one primary project, but also the tool. But then again, we, programmers, love to create tools that will make working on our primary projects easier. An ASP.NET SVN browser sounds like a promising open-source idea; anybody willing to start work on it? I would contribute.
Can you recommend a good .NET web-based repository browser for SVN? We have an SVN repository running on a Windows server, and I want to link internal documentation, feature changes, bugs and so on to code changes. We've found WebSVN to be amazingly slow - the repository is too large for it (I think). The team using it is primarily coding in C#, and while some have experience with other languages I'd really like a tool anyone on the team can maintain. Most of the tools I've seen are based on PHP, Java, Python, etc. All languages the team could learn, but I'd rather something that uses the skills we already have. Can you recommend a good web-based repository browser for SVN, ideally one that uses ASP.NET, SQL Server and that runs on IIS?
TITLE: Can you recommend a good .NET web-based repository browser for SVN? QUESTION: We have an SVN repository running on a Windows server, and I want to link internal documentation, feature changes, bugs and so on to code changes. We've found WebSVN to be amazingly slow - the repository is too large for it (I think). The team using it is primarily coding in C#, and while some have experience with other languages I'd really like a tool anyone on the team can maintain. Most of the tools I've seen are based on PHP, Java, Python, etc. All languages the team could learn, but I'd rather something that uses the skills we already have. Can you recommend a good web-based repository browser for SVN, ideally one that uses ASP.NET, SQL Server and that runs on IIS? ANSWER: Have a look at http://warehouseapp.com It's a Mongrel/Ruby/MySQL stack (should work on Windows though) but I'm looking to avoid installing MySQL and Ruby on the server. I know (I'm also using a C# stack myself), but the self-hosted web-based SVN client market is such a small niche that even an offering in a different language could be considered good enough. MySQL doesn't bite, and installation of Ruby is pretty much an xcopy command. I understand why you don't want to spoil your server with additional software though, but if you are OK to host your SVN repositories with a third party, you get a nice web-based interface without maintenance hassles. I'm using http://unfuddled.com (they also have a basic API to hook up to if needed). Not to promote reinventing the wheel, but I originally wrote my own web SVN browser by using the svn log --xml command This is actually a good idea. I'm also parsing some XML-formatted output during my automated build process, but creating our own full-blown SVN browser is kind of overkill, because now you have to maintain not just one primary project, but also the tool. But then again, we, programmers, love to create tools that will make working on our primary projects easier. An ASP.NET SVN browser sounds like a promising open-source idea; anybody willing to start work on it? I would contribute.
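The svn log --xml approach mentioned in the answer is straightforward to build on: the command emits one logentry element per revision. A minimal parsing sketch in Python, using a made-up sample of that output (in practice you would capture it from the svn client via subprocess):

```python
import xml.etree.ElementTree as ET

# Hypothetical sample of `svn log --xml` output; in a real browser you would
# run subprocess.run(["svn", "log", "--xml", repo_url], capture_output=True).
SAMPLE = """\
<log>
  <logentry revision="123">
    <author>alice</author>
    <date>2008-08-14T12:37:21.013000Z</date>
    <msg>Link bug #42 to this change</msg>
  </logentry>
  <logentry revision="124">
    <author>bob</author>
    <date>2008-08-14T13:20:05.280000Z</date>
    <msg>Refactor upload handler</msg>
  </logentry>
</log>
"""

def parse_svn_log(xml_text):
    """Turn svn log XML into a list of {revision, author, msg} dicts."""
    root = ET.fromstring(xml_text)
    return [
        {
            "revision": int(entry.get("revision")),
            "author": entry.findtext("author"),
            "msg": entry.findtext("msg"),
        }
        for entry in root.iter("logentry")
    ]

for entry in parse_svn_log(SAMPLE):
    print(entry["revision"], entry["author"], entry["msg"])
```

From a list of dicts like this, rendering a repository history page in any web stack is mostly templating.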
[ ".net", "asp.net", "svn" ]
10
3
4,679
6
0
2008-08-14T12:37:21.013000
2008-08-14T13:20:05.280000
10,935
10,969
How to trace COM objects exceptions?
I have a DLL with some COM objects. Sometimes these objects crash and register an error event in the Windows Event Log with lots of hexadecimal information. I have no clue why these crashes happen. So, how can I trace those COM objects' exceptions?
The first step is to look up the fail code's hex value (e.g. E_FAIL, 0x80004005). I've had really good luck with posting that value into Google to get a sense of what the error code means. Then I just use trial and error to try to isolate the location in the code that's failing, and the root cause of the failure.
How to trace COM objects exceptions? I have a DLL with some COM objects. Sometimes these objects crash and register an error event in the Windows Event Log with lots of hexadecimal information. I have no clue why these crashes happen. So, how can I trace those COM objects' exceptions?
TITLE: How to trace COM objects exceptions? QUESTION: I have a DLL with some COM objects. Sometimes these objects crash and register an error event in the Windows Event Log with lots of hexadecimal information. I have no clue why these crashes happen. So, how can I trace those COM objects' exceptions? ANSWER: The first step is to look up the fail code's hex value (e.g. E_FAIL, 0x80004005). I've had really good luck with posting that value into Google to get a sense of what the error code means. Then I just use trial and error to try to isolate the location in the code that's failing, and the root cause of the failure.
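One practical wrinkle when reading those Event Log entries: failure codes are often displayed as large signed decimal numbers rather than the unsigned hex HRESULTs (such as E_FAIL, 0x80004005) you would search for. A small Python helper to convert them (the sample value below is just an illustration):

```python
def to_hresult(code):
    """Convert a 32-bit error code (possibly logged as a signed decimal
    number) to the canonical unsigned HRESULT hex form."""
    return f"0x{code & 0xFFFFFFFF:08X}"

# E_FAIL, when interpreted as a signed 32-bit integer, prints as -2147467259.
print(to_hresult(-2147467259))  # 0x80004005
print(to_hresult(0x80004005))   # already unsigned: unchanged
```

The masked hex string is what you paste into a search engine or look up against winerror.h.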
[ "exception", "com", "dll", "trace" ]
0
2
666
4
0
2008-08-14T12:41:43.190000
2008-08-14T13:03:10.447000
10,980
16,662
IKVM and Licensing
I have been looking into IKVMing Apache's FOP project to use with our .NET app. It's a commercial product, and looking into licensing, IKVM runs into some sticky areas because of its use of GNU Classpath. From what I've seen, no one can say for sure if this stuff can be used in a commercial product. Has anyone used IKVM, or an IKVM'd product, in a commercial product? Here's what I've found so far: the IKVM license page, which notes that one dll contains code from other projects, their license GPLv2 + Classpath Exception; Saxon for .NET is generated with IKVM, but released under the Apache license... Anyone have experience with this?
There are multiple issues here, as IKVM is currently being transitioned away from the GNU Classpath system to Sun's OpenJDK. Both are licensed as GPL+Exceptions to state explicitly that applications which merely use the OpenJDK libraries will not be considered derived works. Generally speaking, applications which rely upon components with defined specs such as this do not fall under the GPL anyway. For example, linking against public POSIX APIs does not trigger GPL reliance in a Linux application, despite the kernel being GPL. A similar principle will usually (the details can be tricky) apply to replacing Sun's Java with a FOSS/GPL implementation.
IKVM and Licensing I have been looking into IKVMing Apache's FOP project to use with our .NET app. It's a commercial product, and looking into licensing, IKVM runs into some sticky areas because of its use of GNU Classpath. From what I've seen, no one can say for sure if this stuff can be used in a commercial product. Has anyone used IKVM, or an IKVM'd product, in a commercial product? Here's what I've found so far: the IKVM license page, which notes that one dll contains code from other projects, their license GPLv2 + Classpath Exception; Saxon for .NET is generated with IKVM, but released under the Apache license... Anyone have experience with this?
TITLE: IKVM and Licensing QUESTION: I have been looking into IKVMing Apache's FOP project to use with our .NET app. It's a commercial product, and looking into licensing, IKVM runs into some sticky areas because of its use of GNU Classpath. From what I've seen, no one can say for sure if this stuff can be used in a commercial product. Has anyone used IKVM, or an IKVM'd product, in a commercial product? Here's what I've found so far: the IKVM license page, which notes that one dll contains code from other projects, their license GPLv2 + Classpath Exception; Saxon for .NET is generated with IKVM, but released under the Apache license... Anyone have experience with this? ANSWER: There are multiple issues here, as IKVM is currently being transitioned away from the GNU Classpath system to Sun's OpenJDK. Both are licensed as GPL+Exceptions to state explicitly that applications which merely use the OpenJDK libraries will not be considered derived works. Generally speaking, applications which rely upon components with defined specs such as this do not fall under the GPL anyway. For example, linking against public POSIX APIs does not trigger GPL reliance in a Linux application, despite the kernel being GPL. A similar principle will usually (the details can be tricky) apply to replacing Sun's Java with a FOSS/GPL implementation.
[ "java", ".net", "ikvm", "gnu-classpath" ]
10
6
2,201
3
0
2008-08-14T13:16:37.580000
2008-08-19T18:15:58.570000
10,985
11,299
How to prevent an object being created on the heap?
Does anyone know how I can, in platform-independent C++ code prevent an object from being created on the heap? That is, for a class "Foo", I want to prevent users from doing this: Foo *ptr = new Foo; and only allow them to do this: Foo myfooObject; Does anyone have any ideas? Cheers,
Nick's answer is a good starting point, but incomplete, as you actually need to overload: private: void* operator new(size_t); // standard new void* operator new(size_t, void*); // placement new void* operator new[](size_t); // array new void* operator new[](size_t, void*); // placement array new (Good coding practice would suggest you should also overload the delete and delete[] operators -- I would, but since they're not going to get called it isn't really necessary.) Pauldoo is also correct that this doesn't survive aggregating on Foo, although it does survive inheriting from Foo. You could do some template meta-programming magic to HELP prevent this, but it would not be immune to "evil users" and thus is probably not worth the complication. Documentation of how it should be used, and code review to ensure it is used properly, are the only ~100% way.
How to prevent an object being created on the heap? Does anyone know how I can, in platform-independent C++ code prevent an object from being created on the heap? That is, for a class "Foo", I want to prevent users from doing this: Foo *ptr = new Foo; and only allow them to do this: Foo myfooObject; Does anyone have any ideas? Cheers,
TITLE: How to prevent an object being created on the heap? QUESTION: Does anyone know how I can, in platform-independent C++ code prevent an object from being created on the heap? That is, for a class "Foo", I want to prevent users from doing this: Foo *ptr = new Foo; and only allow them to do this: Foo myfooObject; Does anyone have any ideas? Cheers, ANSWER: Nick's answer is a good starting point, but incomplete, as you actually need to overload: private: void* operator new(size_t); // standard new void* operator new(size_t, void*); // placement new void* operator new[](size_t); // array new void* operator new[](size_t, void*); // placement array new (Good coding practice would suggest you should also overload the delete and delete[] operators -- I would, but since they're not going to get called it isn't really necessary.) Pauldoo is also correct that this doesn't survive aggregating on Foo, although it does survive inheriting from Foo. You could do some template meta-programming magic to HELP prevent this, but it would not be immune to "evil users" and thus is probably not worth the complication. Documentation of how it should be used, and code review to ensure it is used properly, are the only ~100% way.
[ "c++", "stack", "heap-memory" ]
32
29
10,580
9
0
2008-08-14T13:19:08.073000
2008-08-14T16:36:41.233000
10,990
11,005
What are the proper permissions for an upload folder with PHP/Apache?
Sorry for the basic question - I'm a .NET developer and don't have much experience with LAMP setups. I have a PHP site that will allow uploads to a specific folder. I have been told that this folder needs to be owned by the webserver user for the upload process to work, so I created the folder and then set permissions as such: chown apache:apache -R uploads/ chmod 755 -R uploads/ The only problem now is that the FTP user can not modify the uploaded files at all. Is there a permission setting that will allow me to still upload files and then modify them later as a user other than the webserver user?
You can create a new group with both the apache user and FTP user as members and then make the permission on the upload folder 775. This should give both the apache and FTP users the ability to write to the files in the folder but keep everyone else from modifying them.
What are the proper permissions for an upload folder with PHP/Apache? Sorry for the basic question - I'm a .NET developer and don't have much experience with LAMP setups. I have a PHP site that will allow uploads to a specific folder. I have been told that this folder needs to be owned by the webserver user for the upload process to work, so I created the folder and then set permissions as such: chown apache:apache -R uploads/ chmod 755 -R uploads/ The only problem now is that the FTP user can not modify the uploaded files at all. Is there a permission setting that will allow me to still upload files and then modify them later as a user other than the webserver user?
TITLE: What are the proper permissions for an upload folder with PHP/Apache? QUESTION: Sorry for the basic question - I'm a .NET developer and don't have much experience with LAMP setups. I have a PHP site that will allow uploads to a specific folder. I have been told that this folder needs to be owned by the webserver user for the upload process to work, so I created the folder and then set permissions as such: chown apache:apache -R uploads/ chmod 755 -R uploads/ The only problem now is that the FTP user can not modify the uploaded files at all. Is there a permission setting that will allow me to still upload files and then modify them later as a user other than the webserver user? ANSWER: You can create a new group with both the apache user and FTP user as members and then make the permission on the upload folder 775. This should give both the apache and FTP users the ability to write to the files in the folder but keep everyone else from modifying them.
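As a sanity check on the suggested mode, the individual bits of 775 can be inspected with Python's stat module (illustrative only; this touches no real server):

```python
import stat

mode = 0o775  # rwx for owner and group, r-x for everyone else

# Group write is the bit that lets both the apache user and the FTP user,
# once they share a group, modify files in the upload folder.
assert mode & stat.S_IWGRP

# "Others" still have no write bit, so unrelated users cannot modify files.
assert not mode & stat.S_IWOTH

# Rendered the way `ls -l` would show it for a directory:
print(stat.filemode(mode | stat.S_IFDIR))  # drwxrwxr-x
```

The same module is handy for verifying what os.chmod actually applied after the fact.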
[ "php", "apache", "upload" ]
60
48
123,060
8
0
2008-08-14T13:22:25.743000
2008-08-14T13:32:55.473000
10,999
11,034
MS Team Foundation Server in distributed environments - hints tips tricks needed
Is anyone out there using Team Foundation Server within a team that is geographically distributed? We're in the UK, trying to work with a team in Australia, and we're finding it quite tough. Our main two issues are: Things are being checked out to us without us asking on a get latest. Even when using a proxy, most things take a while to happen. Lots of really annoying little things like this are hardening our arteries, stopping us from delivering code, and frankly creating a user experience akin to pushing golden syrup up a sand dune. Is anyone out there actually using TFS in this manner, on a daily basis, with (relative) success? If so, do you have any hints, tips, tricks or gotchas that would be worth knowing? P.S. Upgrading to CruiseControl.NET is not an option.
Definitely upgrade to TFS 2008 and Visual Studio 2008, as it is the "v2" version of Team System in every way. Fixes lots of small and medium sized problems. As for "things being randomly checked out" this is almost always due to Visual Studio deciding to edit files on your behalf. Try getting latest from the Team Explorer, with nothing open in Visual Studio, and see if that behavior persists. I bet it won't! Multiple TFS servers is a bad idea. Make sure your proxy is configured correctly, as it caches repeated GETs. That said, TFS is a server connected model, so it'll always be a bit slower than true "offline" source control systems. Also, if you could edit your question to contain more specific complaints or details, that would help -- right now it's awfully vague, so I can't answer very well.
MS Team Foundation Server in distributed environments - hints tips tricks needed Is anyone out there using Team Foundation Server within a team that is geographically distributed? We're in the UK, trying to work with a team in Australia, and we're finding it quite tough. Our main two issues are: Things are being checked out to us without us asking on a get latest. Even when using a proxy, most things take a while to happen. Lots of really annoying little things like this are hardening our arteries, stopping us from delivering code, and frankly creating a user experience akin to pushing golden syrup up a sand dune. Is anyone out there actually using TFS in this manner, on a daily basis, with (relative) success? If so, do you have any hints, tips, tricks or gotchas that would be worth knowing? P.S. Upgrading to CruiseControl.NET is not an option.
TITLE: MS Team Foundation Server in distributed environments - hints tips tricks needed QUESTION: Is anyone out there using Team Foundation Server within a team that is geographically distributed? We're in the UK, trying to work with a team in Australia, and we're finding it quite tough. Our main two issues are: Things are being checked out to us without us asking on a get latest. Even when using a proxy, most things take a while to happen. Lots of really annoying little things like this are hardening our arteries, stopping us from delivering code, and frankly creating a user experience akin to pushing golden syrup up a sand dune. Is anyone out there actually using TFS in this manner, on a daily basis, with (relative) success? If so, do you have any hints, tips, tricks or gotchas that would be worth knowing? P.S. Upgrading to CruiseControl.NET is not an option. ANSWER: Definitely upgrade to TFS 2008 and Visual Studio 2008, as it is the "v2" version of Team System in every way. Fixes lots of small and medium sized problems. As for "things being randomly checked out" this is almost always due to Visual Studio deciding to edit files on your behalf. Try getting latest from the Team Explorer, with nothing open in Visual Studio, and see if that behavior persists. I bet it won't! Multiple TFS servers is a bad idea. Make sure your proxy is configured correctly, as it caches repeated GETs. That said, TFS is a server connected model, so it'll always be a bit slower than true "offline" source control systems. Also, if you could edit your question to contain more specific complaints or details, that would help -- right now it's awfully vague, so I can't answer very well.
[ "visual-studio", "tfs" ]
5
2
1,998
3
0
2008-08-14T13:27:40.523000
2008-08-14T13:48:02.523000
11,028
11,044
What are the advantages of explicit Join Transitive Closure in SQL?
When I'm joining three or more tables together by a common column, I'd write my query like this: SELECT * FROM a, b, c WHERE a.id = b.id AND b.id = c.id A colleague recently asked me why I didn't do explicit Join Transitive Closure in my queries like this: SELECT * FROM a, b, c WHERE a.id = b.id AND b.id = c.id AND c.id = a.id Are there really any advantages to this? Surely the optimiser can infer this for itself? edit: I know it's evil syntax, but it's a quick and dirty example of legitimate legacy code +1 @ Stu for cleaning it up
You don't need to do this in today's database engines, but there was a time when things like that would give the query optimizer more hints as to possible index paths and thus to speedier results. These days that entire comma-join syntax is going out anyway.
What are the advantages of explicit Join Transitive Closure in SQL? When I'm joining three or more tables together by a common column, I'd write my query like this: SELECT * FROM a, b, c WHERE a.id = b.id AND b.id = c.id A colleague recently asked me why I didn't do explicit Join Transitive Closure in my queries like this: SELECT * FROM a, b, c WHERE a.id = b.id AND b.id = c.id AND c.id = a.id Are there really any advantages to this? Surely the optimiser can infer this for itself? edit: I know it's evil syntax, but it's a quick and dirty example of legitimate legacy code +1 @ Stu for cleaning it up
TITLE: What are the advantages of explicit Join Transitive Closure in SQL? QUESTION: When I'm joining three or more tables together by a common column, I'd write my query like this: SELECT * FROM a, b, c WHERE a.id = b.id AND b.id = c.id A colleague recently asked me why I didn't do explicit Join Transitive Closure in my queries like this: SELECT * FROM a, b, c WHERE a.id = b.id AND b.id = c.id AND c.id = a.id Are there really any advantages to this? Surely the optimiser can infer this for itself? edit: I know it's evil syntax, but it's a quick and dirty example of legitimate legacy code +1 @ Stu for cleaning it up ANSWER: You don't need to do this in today's database engines, but there was a time when things like that would give the query optimizer more hints as to possible index paths and thus to speedier results. These days that entire comma-join syntax is going out anyway.
[ "sql" ]
4
4
3,086
8
0
2008-08-14T13:44:43.357000
2008-08-14T13:53:33.963000
11,043
11,053
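The answer's point, that the extra c.id = a.id predicate is logically implied by the first two and adds nothing to the result set, is easy to check empirically. A minimal sketch using Python's sqlite3 module (the table contents are invented for illustration; the original question names only the tables a, b, c and the id column):

```python
import sqlite3

# In-memory database with three tables sharing a common id column.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE a (id INTEGER, av TEXT);
    CREATE TABLE b (id INTEGER, bv TEXT);
    CREATE TABLE c (id INTEGER, cv TEXT);
    INSERT INTO a VALUES (1, 'a1'), (2, 'a2');
    INSERT INTO b VALUES (1, 'b1'), (2, 'b2');
    INSERT INTO c VALUES (1, 'c1');
""")

# The original two-predicate join.
plain = conn.execute(
    "SELECT * FROM a, b, c WHERE a.id = b.id AND b.id = c.id"
).fetchall()

# The same join with the redundant transitive predicate added.
closed = conn.execute(
    "SELECT * FROM a, b, c WHERE a.id = b.id AND b.id = c.id AND c.id = a.id"
).fetchall()

# The extra predicate is implied by the first two, so the results match.
assert plain == closed
print(plain)  # [(1, 'a1', 1, 'b1', 1, 'c1')]
```

For what it's worth, the same join is usually written today with explicit ANSI join syntax instead of either comma form: SELECT * FROM a JOIN b ON a.id = b.id JOIN c ON b.id = c.id.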
SQL Table Aliases - Good or Bad?
What are the pros and cons of using table aliases in SQL? I personally try to avoid them, as I think they make the code less readable (especially when reading through large where/and statements), but I'd be interested in hearing any counter-points to this. When is it generally a good idea to use table aliases, and do you have any preferred formats?
Table aliases are a necessary evil when dealing with highly normalized schemas. For example, and I'm not the architect on this DB so bear with me, it can take 7 joins in order to get a clean and complete record back which includes a person's name, address, phone number and company affiliation. Rather than the somewhat standard single character aliases, I tend to favor short word aliases so the above example's SQL ends up looking like: select person.FirstName,person.LastName,addr.StreetAddress,addr.City,addr.State,addr.Zip,phone.PhoneNumber,company.CompanyName from tblPeople person left outer join tblAffiliations affl on affl.personID = person.personID left outer join tblCompany company on company.companyID = affl.companyID... etc
SQL Table Aliases - Good or Bad? What are the pros and cons of using table aliases in SQL? I personally try to avoid them, as I think they make the code less readable (especially when reading through large where/and statements), but I'd be interested in hearing any counter-points to this. When is it generally a good idea to use table aliases, and do you have any preferred formats?
TITLE: SQL Table Aliases - Good or Bad? QUESTION: What are the pros and cons of using table aliases in SQL? I personally try to avoid them, as I think they make the code less readable (especially when reading through large where/and statements), but I'd be interested in hearing any counter-points to this. When is it generally a good idea to use table aliases, and do you have any preferred formats? ANSWER: Table aliases are a necessary evil when dealing with highly normalized schemas. For example, and I'm not the architect on this DB so bear with me, it can take 7 joins in order to get a clean and complete record back which includes a person's name, address, phone number and company affiliation. Rather than the somewhat standard single character aliases, I tend to favor short word aliases so the above example's SQL ends up looking like: select person.FirstName,person.LastName,addr.StreetAddress,addr.City,addr.State,addr.Zip,phone.PhoneNumber,company.CompanyName from tblPeople person left outer join tblAffiliations affl on affl.personID = person.personID left outer join tblCompany company on company.companyID = affl.companyID... etc
[ "sql", "alias" ]
46
31
40,931
17
0
2008-08-14T13:53:33.467000
2008-08-14T13:56:45.207000
11,045
12,018
Refresh Excel VBA Function Results
How can I get a user-defined function to re-evaluate itself based on changed data in the spreadsheet? I tried F9 and Shift + F9. The only thing that seems to work is editing the cell with the function call and then pressing Enter.
You should use Application.Volatile at the top of your function: Function doubleMe(d) Application.Volatile doubleMe = d * 2 End Function It will then reevaluate whenever the workbook changes (if your calculation is set to automatic).
Refresh Excel VBA Function Results How can I get a user-defined function to re-evaluate itself based on changed data in the spreadsheet? I tried F9 and Shift + F9. The only thing that seems to work is editing the cell with the function call and then pressing Enter.
TITLE: Refresh Excel VBA Function Results QUESTION: How can I get a user-defined function to re-evaluate itself based on changed data in the spreadsheet? I tried F9 and Shift + F9. The only thing that seems to work is editing the cell with the function call and then pressing Enter. ANSWER: You should use Application.Volatile at the top of your function: Function doubleMe(d) Application.Volatile doubleMe = d * 2 End Function It will then reevaluate whenever the workbook changes (if your calculation is set to automatic).
[ "excel", "vba", "user-defined-functions" ]
80
138
166,890
9
0
2008-08-14T13:54:43.947000
2008-08-15T06:18:51.387000
11,055
11,080
How big would such a database be?
I'm trying to figure out how big a certain database would be (it hasn't been created yet). I know how many rows and what the tables will be. Is there a feature in Oracle that will tell me the size of such a theoretical database? Is there a known math formula I can use? I know there is a feature to determine the size of an existing database, but I want to know how big it will be before I create it.
You can work from the sizes of the data types of the columns in a table, which gives you a rough estimate of the size of a row in that table. Then, for each of the n tables, multiply that row size by the expected number of rows, and sum the results to get an estimate of the database as a whole. Long-handed, I know, but this is how I normally do it.
How big would such a database be? I'm trying to figure out how big a certain database would be (it hasn't been created yet). I know how many rows and what the tables will be. Is there a feature in Oracle that will tell me the size of such a theoretical database? Is there a known math formula I can use? I know there is a feature to determine the size of an existing database, but I want to know how big it will be before I create it.
TITLE: How big would such a database be? QUESTION: I'm trying to figure out how big a certain database would be (it hasn't been created yet). I know how many rows and what the tables will be. Is there a feature in Oracle that will tell me the size of such a theoretical database? Is there a known math formula I can use? I know there is a feature to determine the size of an existing database, but I want to know how big it will be before I create it. ANSWER: You can work from the sizes of the data types of the columns in a table, which gives you a rough estimate of the size of a row in that table. Then, for each of the n tables, multiply that row size by the expected number of rows, and sum the results to get an estimate of the database as a whole. Long-handed, I know, but this is how I normally do it.
[ "database", "oracle" ]
4
5
453
5
0
2008-08-14T13:57:20.227000
2008-08-14T14:07:27.307000
11,060
11,074
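The procedure the answer describes is straightforward arithmetic. A minimal sketch in Python (the tables, column byte sizes, and row counts below are invented for illustration, and a real Oracle database adds block, header, and index overhead on top of the raw row data):

```python
# Rough database-size estimate: sum over tables of (row size * row count).
# The column byte sizes here are illustrative approximations, not Oracle's
# exact on-disk representation.
tables = {
    # table: ([bytes per column, ...], expected rows)
    "customers": ([4, 50, 50, 8], 100_000),    # id, name, email, created
    "orders":    ([4, 4, 8, 2],   1_000_000),  # id, customer_id, total, status
}

def estimate_bytes(tables):
    total = 0
    for name, (col_sizes, rows) in tables.items():
        row_size = sum(col_sizes)  # rough size of one row in this table
        total += row_size * rows
    return total

size = estimate_bytes(tables)
print(f"~{size / 1024 / 1024:.1f} MiB")  # ~27.8 MiB for these made-up figures
```

Treat the result as a lower bound and add a generous margin for indexes and storage overhead.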
How should I unit test a code-generator?
This is a difficult and open-ended question I know, but I thought I'd throw it to the floor and see if anyone had any interesting suggestions. I have developed a code-generator that takes our python interface to our C++ code (generated via SWIG) and generates code needed to expose this as WebServices. When I developed this code I did it using TDD, but I've found my tests to be brittle as hell. Because each test essentially wanted to verify that for a given bit of input code (which happens to be a C++ header) I'd get a given bit of outputted code, I wrote a small engine that reads test definitions from XML input files and generates test cases from these expectations. The problem is I dread going in to modify the code at all. That and the fact that the unit tests themselves are a: complex, and b: brittle. So I'm trying to think of alternative approaches to this problem, and it strikes me I'm perhaps tackling it the wrong way. Maybe I need to focus more on the outcome, i.e. does the code I generate actually run and do what I want it to, rather than does the code look the way I want it to. Has anyone got any experiences of something similar to this they would care to share?
I started writing up a summary of my experience with my own code generator, then went back and re-read your question and found you had already touched upon the same issues yourself: focus on the execution results instead of the code layout/look. The problem is that this is hard to test: the generated code might not be suited to actually run in the environment of the unit test system, and how do you encode the expected results? I've found that you need to break down the code generator into smaller pieces and unit test those. Unit testing a full code generator is more like integration testing than unit testing if you ask me.
How should I unit test a code-generator? This is a difficult and open-ended question I know, but I thought I'd throw it to the floor and see if anyone had any interesting suggestions. I have developed a code-generator that takes our python interface to our C++ code (generated via SWIG) and generates code needed to expose this as WebServices. When I developed this code I did it using TDD, but I've found my tests to be brittle as hell. Because each test essentially wanted to verify that for a given bit of input code (which happens to be a C++ header) I'd get a given bit of outputted code, I wrote a small engine that reads test definitions from XML input files and generates test cases from these expectations. The problem is I dread going in to modify the code at all. That and the fact that the unit tests themselves are a: complex, and b: brittle. So I'm trying to think of alternative approaches to this problem, and it strikes me I'm perhaps tackling it the wrong way. Maybe I need to focus more on the outcome, i.e. does the code I generate actually run and do what I want it to, rather than does the code look the way I want it to. Has anyone got any experiences of something similar to this they would care to share?
TITLE: How should I unit test a code-generator? QUESTION: This is a difficult and open-ended question I know, but I thought I'd throw it to the floor and see if anyone had any interesting suggestions. I have developed a code-generator that takes our python interface to our C++ code (generated via SWIG) and generates code needed to expose this as WebServices. When I developed this code I did it using TDD, but I've found my tests to be brittle as hell. Because each test essentially wanted to verify that for a given bit of input code (which happens to be a C++ header) I'd get a given bit of outputted code, I wrote a small engine that reads test definitions from XML input files and generates test cases from these expectations. The problem is I dread going in to modify the code at all. That and the fact that the unit tests themselves are a: complex, and b: brittle. So I'm trying to think of alternative approaches to this problem, and it strikes me I'm perhaps tackling it the wrong way. Maybe I need to focus more on the outcome, i.e. does the code I generate actually run and do what I want it to, rather than does the code look the way I want it to. Has anyone got any experiences of something similar to this they would care to share? ANSWER: I started writing up a summary of my experience with my own code generator, then went back and re-read your question and found you had already touched upon the same issues yourself: focus on the execution results instead of the code layout/look. The problem is that this is hard to test: the generated code might not be suited to actually run in the environment of the unit test system, and how do you encode the expected results? I've found that you need to break down the code generator into smaller pieces and unit test those. Unit testing a full code generator is more like integration testing than unit testing if you ask me.
[ "c++", "python", "unit-testing", "code-generation", "swig" ]
29
14
8,403
8
0
2008-08-14T13:59:21.533000
2008-08-14T14:04:25.800000
11,085
64,160
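The answer's suggestion of breaking the generator into small pieces and unit testing those, rather than diffing whole output files, might look like the following sketch. The wrapper_name helper is hypothetical, not from the original question; the point is that a small, pure piece of the generator can be tested directly, so the tests don't break when unrelated parts of the emitted code change:

```python
import unittest

def wrapper_name(cpp_name: str) -> str:
    """Hypothetical generator piece: map a C++ method name such as
    'GetUserProfile' to a web-service-style name 'get_user_profile'."""
    out = []
    for i, ch in enumerate(cpp_name):
        if ch.isupper() and i > 0:
            out.append("_")
        out.append(ch.lower())
    return "".join(out)

class WrapperNameTest(unittest.TestCase):
    # Each test pins down one behaviour of one small piece, instead of
    # asserting on the full text of a generated source file.
    def test_camel_case_split(self):
        self.assertEqual(wrapper_name("GetUserProfile"), "get_user_profile")

    def test_single_word(self):
        self.assertEqual(wrapper_name("Reset"), "reset")

# Run the tests programmatically rather than via unittest.main(),
# so the module can also be imported without side effects.
result = unittest.TextTestRunner().run(
    unittest.defaultTestLoader.loadTestsFromTestCase(WrapperNameTest)
)
assert result.wasSuccessful()
```

Whole-output comparisons can then be kept as a handful of coarse integration tests, as the answer suggests.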
How can I determine the type of a blessed reference in Perl?
In Perl, an object is just a reference to any of the basic Perl data types that has been blessed into a particular class. When you use the ref() function on an unblessed reference, you are told what data type the reference points to. However, when you call ref() on a blessed reference, you are returned the name of the package that reference has been blessed into. I want to know the actual underlying type of the blessed reference. How can I determine this?
Scalar::Util::reftype() is the cleanest solution. The Scalar::Util module was added to the Perl core in version 5.7 but is available for older versions (5.004 or later) from CPAN. You can also probe with UNIVERSAL::isa(): $x->isa('HASH') # if $x is known to be an object UNIVERSAL::isa($x, 'HASH') # if $x might not be an object or reference Obviously, you'd also have to check for ARRAY and SCALAR types. The UNIVERSAL module (which serves as the base class for all objects) has been part of the core since Perl 5.003. Another way -- easy but a little dirty -- is to stringify the reference. Assuming that the class hasn't overloaded stringification you'll get back something resembling Class=HASH(0x1234ABCD), which you can parse to extract the underlying data type (note the list-context assignment, so the capture is assigned to $type rather than the match's true/false result): my ($type) = $object =~ /=(.+)\(0x[0-9a-f]+\)$/i;
How can I determine the type of a blessed reference in Perl? In Perl, an object is just a reference to any of the basic Perl data types that has been blessed into a particular class. When you use the ref() function on an unblessed reference, you are told what data type the reference points to. However, when you call ref() on a blessed reference, you are returned the name of the package that reference has been blessed into. I want to know the actual underlying type of the blessed reference. How can I determine this?
TITLE: How can I determine the type of a blessed reference in Perl? QUESTION: In Perl, an object is just a reference to any of the basic Perl data types that has been blessed into a particular class. When you use the ref() function on an unblessed reference, you are told what data type the reference points to. However, when you call ref() on a blessed reference, you are returned the name of the package that reference has been blessed into. I want to know the actual underlying type of the blessed reference. How can I determine this? ANSWER: Scalar::Util::reftype() is the cleanest solution. The Scalar::Util module was added to the Perl core in version 5.7 but is available for older versions (5.004 or later) from CPAN. You can also probe with UNIVERSAL::isa(): $x->isa('HASH') # if $x is known to be an object UNIVERSAL::isa($x, 'HASH') # if $x might not be an object or reference Obviously, you'd also have to check for ARRAY and SCALAR types. The UNIVERSAL module (which serves as the base class for all objects) has been part of the core since Perl 5.003. Another way -- easy but a little dirty -- is to stringify the reference. Assuming that the class hasn't overloaded stringification you'll get back something resembling Class=HASH(0x1234ABCD), which you can parse to extract the underlying data type (note the list-context assignment, so the capture is assigned to $type rather than the match's true/false result): my ($type) = $object =~ /=(.+)\(0x[0-9a-f]+\)$/i;
[ "perl", "reference", "types" ]
20
22
12,669
3
0
2008-08-14T14:11:42.913000
2008-09-15T15:54:08.240000