question_id int64 4 6.31M | answer_id int64 7 6.31M | title stringlengths 9 150 | question_body stringlengths 0 28.8k | answer_body stringlengths 60 27.2k | question_text stringlengths 40 28.9k | combined_text stringlengths 124 39.6k | tags listlengths 1 6 | question_score int64 0 26.3k | answer_score int64 0 28.8k | view_count int64 15 14M | answer_count int64 0 182 | favorite_count int64 0 32 | question_creation_date stringdate 2008-07-31 21:42:52 2011-06-10 18:12:18 | answer_creation_date stringdate 2008-07-31 22:17:57 2011-06-10 18:14:17 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
3,470 | 3,473 | How do I Transform Sql Columns into Rows? | I have a very simple problem which requires a very quick and simple solution in SQL Server 2005. I have a table with x Columns. I want to be able to select one row from the table and then transform the columns into rows. TableA Column1, Column2, Column3 SQL Statement to ruturn ResultA Value of Column1 Value of Column2 Value of Column3 @Kevin: I've had a google search on the topic but alot of the example where overly complex for my example, are you able to help further? @Mario: The solution I am creating has 10 columns which stores the values 0 to 6 and I must work out how many columns have the value 3 or more. So I thought about creating a query to turn that into rows and then using the generated table in a subquery to say count the number of rows with Column >= 3 | You should take a look at the UNPIVOT clause. Update1: GateKiller, strangely enough I read an article (about something unrelated) about it this morning and I'm trying to jog my memory where I saw it again, had some decent looking examples too. It'll come back to me I'm sure. Update2: Found it: http://weblogs.sqlteam.com/jeffs/archive/2008/04/23/unpivot.aspx | How do I Transform Sql Columns into Rows? I have a very simple problem which requires a very quick and simple solution in SQL Server 2005. I have a table with x Columns. I want to be able to select one row from the table and then transform the columns into rows. TableA Column1, Column2, Column3 SQL Statement to ruturn ResultA Value of Column1 Value of Column2 Value of Column3 @Kevin: I've had a google search on the topic but alot of the example where overly complex for my example, are you able to help further? @Mario: The solution I am creating has 10 columns which stores the values 0 to 6 and I must work out how many columns have the value 3 or more. So I thought about creating a query to turn that into rows and then using the generated table in a subquery to say count the number of rows with Column >= 3 | TITLE:
How do I Transform Sql Columns into Rows?
QUESTION:
I have a very simple problem which requires a very quick and simple solution in SQL Server 2005. I have a table with x columns. I want to be able to select one row from the table and then transform the columns into rows. TableA Column1, Column2, Column3 SQL statement to return ResultA Value of Column1 Value of Column2 Value of Column3 @Kevin: I've had a Google search on the topic, but a lot of the examples were overly complex for my case; are you able to help further? @Mario: The solution I am creating has 10 columns which store the values 0 to 6, and I must work out how many columns have the value 3 or more. So I thought about creating a query to turn that into rows and then using the generated table in a subquery to count the number of rows with Column >= 3
ANSWER:
You should take a look at the UNPIVOT clause. Update1: GateKiller, strangely enough I read an article (about something unrelated) about it this morning and I'm trying to jog my memory where I saw it again, had some decent looking examples too. It'll come back to me I'm sure. Update2: Found it: http://weblogs.sqlteam.com/jeffs/archive/2008/04/23/unpivot.aspx | [
"sql-server",
"sql-server-2005"
] | 17 | 6 | 6,631 | 6 | 0 | 2008-08-06T14:36:56.640000 | 2008-08-06T14:39:09.457000 |
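The accepted answer above only names the UNPIVOT clause and its supporting link is long dead, so here is a minimal sketch of what UNPIVOT looks like for the asker's scenario (ten columns holding 0 to 6, counting how many are 3 or more). The table name `Scores`, the column names and the `Id` filter are illustrative assumptions, not taken from the question.

```sql
-- Hypothetical table: Scores(Id, Col1 ... Col10), each column holding a value 0-6.
-- UNPIVOT turns the ten columns of one selected row into ten rows,
-- which can then be filtered and counted.
SELECT COUNT(*) AS ColumnsAtLeastThree
FROM
(
    SELECT Col1, Col2, Col3, Col4, Col5, Col6, Col7, Col8, Col9, Col10
    FROM Scores
    WHERE Id = 1              -- the single row being inspected
) AS src
UNPIVOT
(
    ColValue FOR ColName IN (Col1, Col2, Col3, Col4, Col5, Col6, Col7, Col8, Col9, Col10)
) AS unpvt
WHERE ColValue >= 3;
```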
3,486 | 3,827 | Block user access to internals of a site using HTTP_REFERER | I have control over the HttpServer but not over the ApplicationServer or the Java Applications sitting there but I need to block direct access to certain pages on those applications. Precisely, I don't want users automating access to forms issuing direct GET/POST HTTP requests to the appropriate servlet. So, I decided to block users based on the value of HTTP_REFERER. After all, if the user is navigating inside the site, it will have an appropriate HTTP_REFERER. Well, that was what I thought. I implemented a rewrite rule in the.htaccess file that says: RewriteEngine on
# Options +FollowSymlinks RewriteCond %{HTTP_REFERER}!^http://mywebaddress(.cl)?/.* [NC] RewriteRule (servlet1|servlet2)/.+\?.+ - [F] I expected to forbid access to users that didn't navigate the site but issue direct GET requests to the "servlet1" or "servlet2" servlets using querystrings. But my expectations ended abruptly because the regular expression (servlet1|servlet2)/.+\?.+ didn't worked at all. I was really disappointed when I changed that expression to (servlet1|servlet2)/.+ and it worked so well that my users were blocked no matter if they navigated the site or not. So, my question is: How do I can accomplish this thing of not allowing "robots" with direct access to certain pages if I have no access/privileges/time to modify the application? | I'm not sure if I can solve this in one go, but we can go back and forth as necessary. First, I want to repeat what I think you are saying and make sure I'm clear. You want to disallow requests to servlet1 and servlet2 is the request doesn't have the proper referer and it does have a query string? I'm not sure I understand (servlet1|servlet2)/.+\?.+ because it looks like you are requiring a file under servlet1 and 2. I think maybe you are combining PATH_INFO (before the "?") with a GET query string (after the "?"). It appears that the PATH_INFO part will work but the GET query test will not. I made a quick test on my server using script1.cgi and script2.cgi and the following rules worked to accomplish what you are asking for. They are obviously edited a little to match my environment: RewriteCond %{HTTP_REFERER}!^http://(www.)?example.(com|org) [NC] RewriteCond %{QUERY_STRING} ^.+$ RewriteRule ^(script1|script2)\.cgi - [F] The above caught all wrong-referer requests to script1.cgi and script2.cgi that tried to submit data using a query string. However, you can also submit data using a path_info and by posting data. I used this form to protect against any of the three methods being used with incorrect referer: RewriteCond %{HTTP_REFERER}!^http://(www.)?example.(com|org) [NC] RewriteCond %{QUERY_STRING} ^.+$ [OR] RewriteCond %{REQUEST_METHOD} ^POST$ [OR] RewriteCond %{PATH_INFO} ^.+$ RewriteRule ^(script1|script2)\.cgi - [F] Based on the example you were trying to get working, I think this is what you want: RewriteCond %{HTTP_REFERER}!^http://mywebaddress(.cl)?/.* [NC] RewriteCond %{QUERY_STRING} ^.+$ [OR] RewriteCond %{REQUEST_METHOD} ^POST$ [OR] RewriteCond %{PATH_INFO} ^.+$ RewriteRule (servlet1|servlet2)\b - [F] Hopefully this at least gets you closer to your goal. Please let us know how it works, I'm interested in your problem. (BTW, I agree that referer blocking is poor security, but I also understand that relaity forces imperfect and partial solutions sometimes, which you seem to already acknowledge.) | Block user access to internals of a site using HTTP_REFERER I have control over the HttpServer but not over the ApplicationServer or the Java Applications sitting there but I need to block direct access to certain pages on those applications. Precisely, I don't want users automating access to forms issuing direct GET/POST HTTP requests to the appropriate servlet. So, I decided to block users based on the value of HTTP_REFERER. After all, if the user is navigating inside the site, it will have an appropriate HTTP_REFERER. Well, that was what I thought. I implemented a rewrite rule in the.htaccess file that says: RewriteEngine on
# Options +FollowSymlinks RewriteCond %{HTTP_REFERER}!^http://mywebaddress(.cl)?/.* [NC] RewriteRule (servlet1|servlet2)/.+\?.+ - [F] I expected to forbid access to users that didn't navigate the site but issue direct GET requests to the "servlet1" or "servlet2" servlets using querystrings. But my expectations ended abruptly because the regular expression (servlet1|servlet2)/.+\?.+ didn't worked at all. I was really disappointed when I changed that expression to (servlet1|servlet2)/.+ and it worked so well that my users were blocked no matter if they navigated the site or not. So, my question is: How do I can accomplish this thing of not allowing "robots" with direct access to certain pages if I have no access/privileges/time to modify the application? | TITLE:
Block user access to internals of a site using HTTP_REFERER
QUESTION:
I have control over the HttpServer but not over the ApplicationServer or the Java applications sitting there, but I need to block direct access to certain pages on those applications. Precisely, I don't want users automating access to forms by issuing direct GET/POST HTTP requests to the appropriate servlet. So, I decided to block users based on the value of HTTP_REFERER. After all, if the user is navigating inside the site, it will have an appropriate HTTP_REFERER. Well, that was what I thought. I implemented a rewrite rule in the .htaccess file that says: RewriteEngine on
# Options +FollowSymlinks RewriteCond %{HTTP_REFERER}!^http://mywebaddress(.cl)?/.* [NC] RewriteRule (servlet1|servlet2)/.+\?.+ - [F] I expected to forbid access to users that didn't navigate the site but issued direct GET requests to the "servlet1" or "servlet2" servlets using query strings. But my expectations ended abruptly because the regular expression (servlet1|servlet2)/.+\?.+ didn't work at all. I was really disappointed when I changed that expression to (servlet1|servlet2)/.+ and it worked so well that my users were blocked whether or not they had navigated the site. So, my question is: how can I prevent "robots" from directly accessing certain pages when I have no access/privileges/time to modify the application?
ANSWER:
I'm not sure if I can solve this in one go, but we can go back and forth as necessary. First, I want to repeat what I think you are saying and make sure I'm clear. You want to disallow requests to servlet1 and servlet2 is the request doesn't have the proper referer and it does have a query string? I'm not sure I understand (servlet1|servlet2)/.+\?.+ because it looks like you are requiring a file under servlet1 and 2. I think maybe you are combining PATH_INFO (before the "?") with a GET query string (after the "?"). It appears that the PATH_INFO part will work but the GET query test will not. I made a quick test on my server using script1.cgi and script2.cgi and the following rules worked to accomplish what you are asking for. They are obviously edited a little to match my environment: RewriteCond %{HTTP_REFERER}!^http://(www.)?example.(com|org) [NC] RewriteCond %{QUERY_STRING} ^.+$ RewriteRule ^(script1|script2)\.cgi - [F] The above caught all wrong-referer requests to script1.cgi and script2.cgi that tried to submit data using a query string. However, you can also submit data using a path_info and by posting data. I used this form to protect against any of the three methods being used with incorrect referer: RewriteCond %{HTTP_REFERER}!^http://(www.)?example.(com|org) [NC] RewriteCond %{QUERY_STRING} ^.+$ [OR] RewriteCond %{REQUEST_METHOD} ^POST$ [OR] RewriteCond %{PATH_INFO} ^.+$ RewriteRule ^(script1|script2)\.cgi - [F] Based on the example you were trying to get working, I think this is what you want: RewriteCond %{HTTP_REFERER}!^http://mywebaddress(.cl)?/.* [NC] RewriteCond %{QUERY_STRING} ^.+$ [OR] RewriteCond %{REQUEST_METHOD} ^POST$ [OR] RewriteCond %{PATH_INFO} ^.+$ RewriteRule (servlet1|servlet2)\b - [F] Hopefully this at least gets you closer to your goal. Please let us know how it works, I'm interested in your problem. (BTW, I agree that referer blocking is poor security, but I also understand that relaity forces imperfect and partial solutions sometimes, which you seem to already acknowledge.) | [
"security",
"apache",
"mod-rewrite",
"http-referer"
] | 5 | 2 | 2,433 | 9 | 0 | 2008-08-06T14:44:09.137000 | 2008-08-06T18:41:13.057000 |
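The rewrite rules in the answer above arrive with their line breaks stripped, so here they are reassembled as they would sit in an .htaccess file. This is only a restatement of the answer's final suggestion (keeping the asker's hostname), not a tested configuration.

```apache
RewriteEngine On

# Forbid requests to servlet1/servlet2 that lack a same-site referer and
# try to pass data via a query string, a POST body, or extra path info.
RewriteCond %{HTTP_REFERER} !^http://mywebaddress(.cl)?/.* [NC]
RewriteCond %{QUERY_STRING} ^.+$ [OR]
RewriteCond %{REQUEST_METHOD} ^POST$ [OR]
RewriteCond %{PATH_INFO} ^.+$
RewriteRule (servlet1|servlet2)\b - [F]
```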
3,510 | 3,511 | What does the term "BODMAS" mean? | What is BODMAS and why is it useful in programming? | http://www.easymaths.com/What_on_earth_is_Bodmas.htm: What do you think the answer to 2 + 3 x 5 is? Is it (2 + 3) x 5 = 5 x 5 = 25? or 2 + (3 x 5) = 2 + 15 = 17? BODMAS can come to the rescue and give us rules to follow so that we always get the right answer: (B)rackets (O)rder (D)ivision (M)ultiplication (A)ddition (S)ubtraction According to BODMAS, multiplication should always be done before addition, therefore 17 is actually the correct answer according to BODMAS and will also be the answer which your calculator will give if you type in 2 + 3 x 5. Why it is useful in programming? No idea, but i assume it's because you can get rid of some brackets? I am a quite defensive programmer, so my lines can look like this: result = (((i + 4) - (a + b)) * MAGIC_NUMBER) - ANOTHER_MAGIC_NUMBER; with BODMAS you can make this a bit clearer: result = (i + 4 - (a + b)) * MAGIC_NUMBER - ANOTHER_MAGIC_NUMBER; I think i'd still use the first variant - more brackets, but that way i do not have to learn yet another rule and i run into less risk of forgetting it and causing those weird hard to debug errors? Just guessing at that part though. Mike Stone EDIT: Fixed math as Gaius points out | What does the term "BODMAS" mean? What is BODMAS and why is it useful in programming? | TITLE:
What does the term "BODMAS" mean?
QUESTION:
What is BODMAS and why is it useful in programming?
ANSWER:
http://www.easymaths.com/What_on_earth_is_Bodmas.htm: What do you think the answer to 2 + 3 x 5 is? Is it (2 + 3) x 5 = 5 x 5 = 25? or 2 + (3 x 5) = 2 + 15 = 17? BODMAS can come to the rescue and give us rules to follow so that we always get the right answer: (B)rackets (O)rder (D)ivision (M)ultiplication (A)ddition (S)ubtraction According to BODMAS, multiplication should always be done before addition, therefore 17 is actually the correct answer according to BODMAS and will also be the answer which your calculator will give if you type in 2 + 3 x 5. Why it is useful in programming? No idea, but i assume it's because you can get rid of some brackets? I am a quite defensive programmer, so my lines can look like this: result = (((i + 4) - (a + b)) * MAGIC_NUMBER) - ANOTHER_MAGIC_NUMBER; with BODMAS you can make this a bit clearer: result = (i + 4 - (a + b)) * MAGIC_NUMBER - ANOTHER_MAGIC_NUMBER; I think i'd still use the first variant - more brackets, but that way i do not have to learn yet another rule and i run into less risk of forgetting it and causing those weird hard to debug errors? Just guessing at that part though. Mike Stone EDIT: Fixed math as Gaius points out | [
"math",
"operator-precedence"
] | 16 | 20 | 12,958 | 7 | 0 | 2008-08-06T14:54:52.360000 | 2008-08-06T14:57:01.403000 |
3,512 | 3,552 | Searching directories for tons of files? | I'm using MSVE, and I have my own tiles I'm displaying in layers on top. Problem is, there's a ton of them, and they're on a network server. In certain directories, there are something on the order of 30,000+ files. Initially I called Directory.GetFiles, but once I started testing in a pseudo-real environment, it timed out. What's the best way to programatically list, and iterate through, this many files? Edit: My coworker suggested using the MS indexing service. Has anyone tried this approach, and (how) has it worked? | I've worked on a SAN system in the past with telephony audio recordings which had issues with numbers of files in a single folder - that system became unusable somewhere near 5,000 (on Windows 2000 Advanced Server with an application in C#.Net 1.1)- the only sensible solution that we came up with was to change the folder structure so that there were a more reasonable number of files. Interestingly Explorer would also time out! The convention we came up with was a structure that broke the structure up in years, months and days - but that will depend upon your system and whether you can control the directory structure... | Searching directories for tons of files? I'm using MSVE, and I have my own tiles I'm displaying in layers on top. Problem is, there's a ton of them, and they're on a network server. In certain directories, there are something on the order of 30,000+ files. Initially I called Directory.GetFiles, but once I started testing in a pseudo-real environment, it timed out. What's the best way to programatically list, and iterate through, this many files? Edit: My coworker suggested using the MS indexing service. Has anyone tried this approach, and (how) has it worked? | TITLE:
Searching directories for tons of files?
QUESTION:
I'm using MSVE, and I have my own tiles I'm displaying in layers on top. Problem is, there's a ton of them, and they're on a network server. In certain directories, there is something on the order of 30,000+ files. Initially I called Directory.GetFiles, but once I started testing in a pseudo-real environment, it timed out. What's the best way to programmatically list, and iterate through, this many files? Edit: My coworker suggested using the MS indexing service. Has anyone tried this approach, and (how) has it worked?
ANSWER:
I've worked on a SAN system in the past with telephony audio recordings which had issues with numbers of files in a single folder - that system became unusable somewhere near 5,000 (on Windows 2000 Advanced Server with an application in C#.Net 1.1)- the only sensible solution that we came up with was to change the folder structure so that there were a more reasonable number of files. Interestingly Explorer would also time out! The convention we came up with was a structure that broke the structure up in years, months and days - but that will depend upon your system and whether you can control the directory structure... | [
"c#",
"directory",
"file-management"
] | 15 | 6 | 1,067 | 5 | 0 | 2008-08-06T14:57:16.597000 | 2008-08-06T15:35:30.203000 |
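The answer above addresses the problem by restructuring folders; as a complement, here is a sketch of enumerating a huge directory without materialising the whole file list at once. It assumes a newer runtime than the question's era, since Directory.EnumerateFiles only exists from .NET 4.0 onward, and the UNC path and file pattern are made up.

```csharp
using System;
using System.IO;

class TileLister
{
    static void Main()
    {
        // Hypothetical UNC path to the tile store.
        string root = @"\\server\tiles";

        int count = 0;
        // EnumerateFiles yields paths lazily as the directory is read,
        // so iteration can start before the full 30,000+ listing completes.
        foreach (string path in Directory.EnumerateFiles(root, "*.png", SearchOption.AllDirectories))
        {
            count++;
            // process each tile here
        }
        Console.WriteLine("Found {0} tiles", count);
    }
}
```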
3,530 | 3,543 | How do I run Rake tasks within a Ruby script? | I have a Rakefile with a Rake task that I would normally call from the command line: rake blog:post Title I'd like to write a Ruby script that calls that Rake task multiple times, but the only solution I see is shelling out using `` (backticks) or system. What's the right way to do this? | from timocracy.com: require 'rake'
def capture_stdout
  s = StringIO.new
  oldstdout = $stdout
  $stdout = s
  yield
  s.string
ensure
  $stdout = oldstdout
end
Rake.application.rake_require 'metric_fetcher', ['../../lib/tasks'] results = capture_stdout {Rake.application['metric_fetcher'].invoke} | How do I run Rake tasks within a Ruby script? I have a Rakefile with a Rake task that I would normally call from the command line: rake blog:post Title I'd like to write a Ruby script that calls that Rake task multiple times, but the only solution I see is shelling out using `` (backticks) or system. What's the right way to do this? | TITLE:
How do I run Rake tasks within a Ruby script?
QUESTION:
I have a Rakefile with a Rake task that I would normally call from the command line: rake blog:post Title I'd like to write a Ruby script that calls that Rake task multiple times, but the only solution I see is shelling out using `` (backticks) or system. What's the right way to do this?
ANSWER:
from timocracy.com: require 'rake'
def capture_stdout
  s = StringIO.new
  oldstdout = $stdout
  $stdout = s
  yield
  s.string
ensure
  $stdout = oldstdout
end
Rake.application.rake_require 'metric_fetcher', ['../../lib/tasks'] results = capture_stdout {Rake.application['metric_fetcher'].invoke} | [
"ruby",
"rake",
"command-line-interface"
] | 61 | 45 | 31,858 | 4 | 0 | 2008-08-06T15:15:54.143000 | 2008-08-06T15:24:00.787000 |
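The Ruby snippet in the answer above loses its formatting in this dump, and the question specifically asks to run the task multiple times, which a plain invoke will not do. Below is a sketch of loading a Rakefile and re-invoking a task; the task name blog:post, the titles, and the assumption that the task takes the title as a task argument are all illustrative.

```ruby
require 'rake'

# Load the project's Rakefile so its tasks become defined.
# Assumes the script runs from the directory containing the Rakefile.
app = Rake.application
app.init
app.load_rakefile

['First post', 'Second post', 'Third post'].each do |title|
  task = app['blog:post']
  task.invoke(title)   # assumes blog:post accepts the title as a task argument
  task.reenable        # a Rake task runs only once per invoke unless re-enabled
end
```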
3,544 | 3,561 | What is the best way to deploy a VB.NET application? | Generally when I use ClickOnce when I build a VB.NET program but it has a few downsides. I've never really used anything else, so I'm not sure what my options are. Downsides to ClickOnce: Consists of multiple files - Seems easier to distribute one file than manageing a bunch of file and the downloader to download those files. You have to build it again for CD installations (for when the end user dosn't have internet) Program does not end up in Program Files - It ends up hidden away in some application catch folder, making it much harder to shortcut to. Pros to ClickOnce: It works. Magically. And it's built into VisualStudio 2008 express. Makes it easy to upgrade the application. Does Windows Installer do these things as well? I know it dosen't have any of the ClickOnce cons, but It would be nice to know if it also has the ClickOnce pros. Update: I ended up using Wix 2 (Wix 3 was available but at the time I did the project, no one had a competent tutorial). It was nice because it supported the three things I (eventually) needed. An optional start-up-with-windows shortcut, a start-up-when-the-installer-is-done option, and three paragraphs of text that my boss thinks will keep uses from clicking the wrong option. | Have you seen WiX yet? http://wix.sourceforge.net/ It builds windows installers using an XML file and has additional libraries to use if you want to fancify your installers and the like. I'll admit the learning curve for me was medium-high in getting things started, but afterwards I was able to build a second installer without any hassles. It will handle updates and other items if you so desire, and you can apply folder permissions and the like to the installers. It also gives you greater control on where exactly you want to install files and is compatible with all the standardized Windows folder conventions, so you can specify "PROGRAM_DATA" or something to that effect and the installer knows to put it in C:\Documents and Settings\All Users\Application Data or C:\ProgramData depending on if you're running XP or Vista. The rumor is that Office 2007 and Visual Studio 2008 used WiX to create their installer, but I haven't been able to verify that anywhere. I do believe is is developed by some Microsoft folks on the inside. | What is the best way to deploy a VB.NET application? Generally when I use ClickOnce when I build a VB.NET program but it has a few downsides. I've never really used anything else, so I'm not sure what my options are. Downsides to ClickOnce: Consists of multiple files - Seems easier to distribute one file than manageing a bunch of file and the downloader to download those files. You have to build it again for CD installations (for when the end user dosn't have internet) Program does not end up in Program Files - It ends up hidden away in some application catch folder, making it much harder to shortcut to. Pros to ClickOnce: It works. Magically. And it's built into VisualStudio 2008 express. Makes it easy to upgrade the application. Does Windows Installer do these things as well? I know it dosen't have any of the ClickOnce cons, but It would be nice to know if it also has the ClickOnce pros. Update: I ended up using Wix 2 (Wix 3 was available but at the time I did the project, no one had a competent tutorial). It was nice because it supported the three things I (eventually) needed. 
An optional start-up-with-Windows shortcut, a start-up-when-the-installer-is-done option, and three paragraphs of text that my boss thinks will keep users from clicking the wrong option. | TITLE:
What is the best way to deploy a VB.NET application?
QUESTION:
Generally I use ClickOnce when I build a VB.NET program, but it has a few downsides. I've never really used anything else, so I'm not sure what my options are. Downsides to ClickOnce: Consists of multiple files - it seems easier to distribute one file than to manage a bunch of files and the downloader to download those files. You have to build it again for CD installations (for when the end user doesn't have internet). Program does not end up in Program Files - it ends up hidden away in some application cache folder, making it much harder to shortcut to. Pros to ClickOnce: It works. Magically. And it's built into Visual Studio 2008 Express. Makes it easy to upgrade the application. Does Windows Installer do these things as well? I know it doesn't have any of the ClickOnce cons, but it would be nice to know if it also has the ClickOnce pros. Update: I ended up using WiX 2 (WiX 3 was available, but at the time I did the project no one had a competent tutorial). It was nice because it supported the three things I (eventually) needed: an optional start-up-with-Windows shortcut, a start-up-when-the-installer-is-done option, and three paragraphs of text that my boss thinks will keep users from clicking the wrong option.
ANSWER:
Have you seen WiX yet? http://wix.sourceforge.net/ It builds Windows installers using an XML file and has additional libraries to use if you want to fancify your installers and the like. I'll admit the learning curve for me was medium-high in getting things started, but afterwards I was able to build a second installer without any hassles. It will handle updates and other items if you so desire, and you can apply folder permissions and the like to the installers. It also gives you greater control over where exactly you want to install files and is compatible with all the standardized Windows folder conventions, so you can specify "PROGRAM_DATA" or something to that effect and the installer knows to put it in C:\Documents and Settings\All Users\Application Data or C:\ProgramData depending on whether you're running XP or Vista. The rumor is that Office 2007 and Visual Studio 2008 used WiX to create their installers, but I haven't been able to verify that anywhere. I do believe it is developed by some Microsoft folks on the inside. | [
"vb.net",
"visual-studio",
"installation",
"clickonce"
] | 8 | 6 | 13,149 | 5 | 0 | 2008-08-06T15:25:51.237000 | 2008-08-06T15:38:52.287000 |
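For readers who have never seen WiX source, here is a rough sketch of the kind of .wxs file the answer above is describing: a WiX 3-style installer that places a single executable under Program Files. The GUIDs, names and paths are placeholders (generate real GUIDs before building), and a real installer would add shortcuts, upgrade logic and UI on top of this.

```xml
<?xml version="1.0" encoding="utf-8"?>
<Wix xmlns="http://schemas.microsoft.com/wix/2006/wi">
  <Product Id="*" Name="MyVbApp" Language="1033" Version="1.0.0.0"
           Manufacturer="Example Co" UpgradeCode="PUT-GUID-HERE">
    <Package InstallerVersion="200" Compressed="yes" />
    <Media Id="1" Cabinet="product.cab" EmbedCab="yes" />

    <Directory Id="TARGETDIR" Name="SourceDir">
      <Directory Id="ProgramFilesFolder">
        <Directory Id="INSTALLFOLDER" Name="MyVbApp">
          <Component Id="MainExecutable" Guid="PUT-GUID-HERE">
            <!-- The application binary; KeyPath lets Windows Installer track it for repair/uninstall -->
            <File Id="MyVbAppExe" Source="MyVbApp.exe" KeyPath="yes" />
          </Component>
        </Directory>
      </Directory>
    </Directory>

    <Feature Id="Complete" Level="1">
      <ComponentRef Id="MainExecutable" />
    </Feature>
  </Product>
</Wix>
```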
3,567 | 3,578 | SQL query for a database scheme | In SQL Server how do you query a database to bring back all the tables that have a field of a specific name? | The following query will bring back a unique list of tables where Column_Name is equal to the column you are looking for: SELECT Table_Name FROM INFORMATION_SCHEMA.COLUMNS WHERE Column_Name = 'Desired_Column_Name' GROUP BY Table_Name | SQL query for a database scheme In SQL Server how do you query a database to bring back all the tables that have a field of a specific name? | TITLE:
SQL query for a database scheme
QUESTION:
In SQL Server how do you query a database to bring back all the tables that have a field of a specific name?
ANSWER:
The following query will bring back a unique list of tables where Column_Name is equal to the column you are looking for: SELECT Table_Name FROM INFORMATION_SCHEMA.COLUMNS WHERE Column_Name = 'Desired_Column_Name' GROUP BY Table_Name | [
"sql",
"sql-server"
] | 7 | 7 | 1,384 | 3 | 0 | 2008-08-06T15:41:13.827000 | 2008-08-06T15:46:09.777000 |
3,585 | 3,601 | Create a new Ruby on Rails application using MySQL instead of SQLite | I want to create my Rails application with MySQL, because I like it so much. How can I do that in the latest version of Rails instead of the default SQLite? | If you already have a rails project, change the adapter in the config/database.yml file to mysql and make sure you specify a valid username and password, and optionally, a socket: development: adapter: mysql2 database: db_name_dev username: koploper password: host: localhost socket: /tmp/mysql.sock Next, make sure you edit your Gemfile to include the mysql2 or activerecord-jdbcmysql-adapter (if using jruby). | Create a new Ruby on Rails application using MySQL instead of SQLite I want to create my Rails application with MySQL, because I like it so much. How can I do that in the latest version of Rails instead of the default SQLite? | TITLE:
Create a new Ruby on Rails application using MySQL instead of SQLite
QUESTION:
I want to create my Rails application with MySQL, because I like it so much. How can I do that in the latest version of Rails instead of the default SQLite?
ANSWER:
If you already have a Rails project, change the adapter in the config/database.yml file to mysql2 and make sure you specify a valid username and password, and optionally, a socket: development: adapter: mysql2 database: db_name_dev username: koploper password: host: localhost socket: /tmp/mysql.sock Next, make sure you edit your Gemfile to include the mysql2 or activerecord-jdbcmysql-adapter gem (if using JRuby). | [
"mysql",
"ruby-on-rails",
"ruby"
] | 135 | 144 | 160,255 | 19 | 0 | 2008-08-06T15:50:49.813000 | 2008-08-06T16:04:05.043000 |
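The answer above covers converting an existing project, but the question asks how to create a new application on MySQL from the start. On Rails 3 and later the generator accepts a database flag; the application name below is made up.

```bash
# Generate a new application preconfigured for MySQL instead of SQLite
rails new blog -d mysql

cd blog

# The generated Gemfile references the mysql2 gem and config/database.yml
# already uses the mysql2 adapter; after filling in credentials,
# create the development and test databases:
rake db:create
```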
3,589 | 3,609 | Backup SQL Schema Only? | I need to create a backup of a SQL Server 2005 Database that's only the structure...no records, just the schema. Is there any way to do this? EDIT: I'm trying to create a backup file to use with old processes, so a script wouldn't work for my purposes, sorry | Use a 3 step process: Generate a script from the working database Create a new database from that script Create a backup of the new database | Backup SQL Schema Only? I need to create a backup of a SQL Server 2005 Database that's only the structure...no records, just the schema. Is there any way to do this? EDIT: I'm trying to create a backup file to use with old processes, so a script wouldn't work for my purposes, sorry | TITLE:
Backup SQL Schema Only?
QUESTION:
I need to create a backup of a SQL Server 2005 Database that's only the structure...no records, just the schema. Is there any way to do this? EDIT: I'm trying to create a backup file to use with old processes, so a script wouldn't work for my purposes, sorry
ANSWER:
Use a 3 step process: Generate a script from the working database Create a new database from that script Create a backup of the new database | [
"sql",
"sql-server",
"oracle",
"sql-server-2005",
"backup"
] | 22 | 11 | 23,760 | 5 | 0 | 2008-08-06T15:53:51.647000 | 2008-08-06T16:12:44.213000 |
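A sketch of how the answer's three steps might look in practice on SQL Server 2005: script the schema with Management Studio's Generate Scripts wizard, apply it to a fresh database, then back that database up. The database and file names are placeholders.

```sql
-- Step 2: create an empty database and run the generated schema script against it
CREATE DATABASE MySchemaOnly;
GO
-- (execute the script produced by the Generate Scripts wizard here)

-- Step 3: back up the structure-only database
BACKUP DATABASE MySchemaOnly
TO DISK = 'C:\Backups\MySchemaOnly.bak'
WITH INIT;
GO
```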
3,607 | 370,131 | Integrating Fogbugz with TortoiseSVN with no URL/Subversion backend | I've got TotroiseSVN installed and have a majority of my repositories checking in and out from C:\subversion\ and a couple checking in and out from a network share (I forgot about this when I originally posted this question). This means that I don't have a "subversion" server per-se. How do I integrate TortoiseSVN and Fogbugz? Edit: inserted italics | I've been investigating this issue and have managed to get it working. There are a couple of minor problems but they can be worked-around. There are 3 distinct parts to this problem, as follows: The TortoiseSVN part - getting TortoiseSVN to insert the Bugid and hyperlink in the svn log The FogBugz part - getting FogBugz to insert the SVN info and corresponding links The WebSVN part - ensuring the links from FogBugz actually work Instructions for part 1 are in another answer, although it actually does more than required. The stuff about the hooks is actually for part 2, and as is pointed out - it doesn't work "out of the box" Just to confirm, we are looking at using TortoiseSVN WITHOUT an SVN server (ie. file-based repositories) I'm accessing the repositories using UNC paths, but it also works for local drives or mapped drives. All of this works with TortoiseSVN v1.5.3 and SVN Server v1.5.2 (You need to install SVN Server because part 2 needs svnlook.exe which is in the server package. You don't actually configure it to work as an SVN Server) It may even be possible to just copy svnlook.exe from another computer and put it somewhere in your path. Part 1 - TortoiseSVN Creating the TortoiseSVN properties is all that is required in order to get the links in the SVN log. Previous instructions work fine, I'll quote them here for convenience: Configure the Properties Right click on the root directory of the checked out project you want to work with. Select "TortoiseSVN -> Properties" Add five property value pairs by clicking "New..." and inserting the following in "Property Name" and "Property Value" respectively: (make sure you tick "Apply property recursively" for each one) bugtraq:label BugzID: bugtraq:message BugzID: %BUGID% bugtraq:number true bugtraq:url http://[your fogbugz URL here]/default.asp?%BUGID% bugtraq:warnifnoissue false Click "OK" As Jeff says, you'll need to do that for each working copy, so follow his instructions for migrating the properties. That's it. TortoiseSVN will now add a link to the corresponding FogBugz bugID when you commit. If that's all you want, you can stop here. Part 2 - FogBugz For this to work we need to set up the hook scripts. Basically the batch file is called after each commit, and this in turn calls the VBS script which does the submission to FogBugz. The VBS script actually works fine in this situation so we don't need to modify it. The problem is that the batch file is written to work as a server hook, but we need a client hook. SVN server calls the post-commit hook with these parameters: TortoiseSVN calls the post-commit hook with these parameters: So that's why it doesn't work - the parameters are wrong. We need to amend the batch file so it passes the correct parameters to the VBS script. You'll notice that TSVN doesn't pass the repository path, which is a problem, but it does work in the following circumstances: The repository name and working copy name are the same You do the commit at the root of the working copy, not a subfolder. I'm going to see if I can fix this problem and will post back here if I do. 
Here's my amended batch file which does work (please excuse the excessive comments...) You'll need to set the hook and repository directories to match your setup. rem @echo off rem SubVersion -> FogBugz post-commit hook file rem Put this into the Hooks directory in your subversion repository rem along with the logBugDataSVN.vbs file
rem TSVN calls this with args rem The ones we're interested in are and which are %4 and %6
rem YOU NEED TO EDIT THE LINE WHICH SETS RepoRoot TO POINT AT THE DIRECTORY rem THAT CONTAINS YOUR REPOSITORIES AND ALSO YOU MUST SET THE HOOKS DIRECTORY
setlocal
rem debugging rem echo %1 %2 %3 %4 %5 %6 > c:\temp\test.txt
rem Set Hooks directory location (no trailing slash) set HooksDir=\\myserver\svn\hooks
rem Set Repo Root location (ie. the directory containing all the repos) rem (no trailing slash) set RepoRoot=\\myserver\svn
rem Build full repo location set Repo=%RepoRoot%\%~n6
rem debugging rem echo %Repo% >> c:\temp\test.txt
rem Grab the last two digits of the revision number rem and append them to the log of svn changes rem to avoid simultaneous commit scenarios causing overwrites set ChangeFileSuffix=%~4 set LogSvnChangeFile=svn%ChangeFileSuffix:~-2,2%.txt
set LogBugDataScript=logBugDataSVN.vbs set ScriptCommand=cscript
rem Could remove the need for svnlook on the client since TSVN rem provides as parameters the info we need to call the script. rem However, it's in a slightly different format than the script is expecting rem for parsing, therefore we would have to amend the script too, so I won't bother. rem @echo on svnlook changed -r %4 %Repo% > %temp%\%LogSvnChangeFile% svnlook log -r %4 %Repo% | %ScriptCommand% %HooksDir%\%LogBugDataScript% %4 %temp%\%LogSvnChangeFile% %~n6
del %temp%\%LogSvnChangeFile% endlocal I'm going to assume the repositories are at \\myserver\svn\ and working copies are all under `C:\Projects\ Go into your FogBugz account and click Extras -> Configure Source Control Integration Download the VBScript file for Subversion (don't bother with the batch file) Create a folder to store the hook scripts. I put it in the same folder as my repositories. eg. \\myserver\svn\hooks\ Rename VBscript to remove the.safe at the end of the filename. Save my version of the batch file in your hooks directory, as post-commit-tsvn.bat Right click on any directory. Select "TortoiseSVN > Settings" (in the right click menu from the last step) Select "Hook Scripts" Click "Add" and set the properties as follows: Hook Type: Post-Commit Hook Working Copy Path: C:\Projects (or whatever your root directory for all of your projects is.) Command Line To Execute: \\myserver\svn\hooks\post-commit-tsvn.bat (this needs to point to wherever you put your hooks directory in step 3) Tick "Wait for the script to finish" Click OK twice. Next time you commit and enter a Bugid, it will be submitted to FogBugz. The links won't work but at least the revision info is there and you can manually look up the log in TortoiseSVN. NOTE: You'll notice that the repository root is hard-coded into the batch file. As a result, if you check out from repositories that don't have the same root (eg. one on local drive and one on network) then you'll need to use 2 batch files and 2 corresponding entries under Hook Scripts in the TSVN settings. The way to do this would be to have 2 separate Working Copy trees - one for each repository root. Part 3 - WebSVN Errr, I haven't done this:-) From reading the WebSVN docs, it seems that WebSVN doesn't actually integrate with the SVN server, it just behaves like any other SVN client but presents a web interface. In theory then it should work fine with a file-based repository. I haven't tried it though. | Integrating Fogbugz with TortoiseSVN with no URL/Subversion backend I've got TotroiseSVN installed and have a majority of my repositories checking in and out from C:\subversion\ and a couple checking in and out from a network share (I forgot about this when I originally posted this question). This means that I don't have a "subversion" server per-se. How do I integrate TortoiseSVN and Fogbugz? Edit: inserted italics | TITLE:
Integrating Fogbugz with TortoiseSVN with no URL/Subversion backend
QUESTION:
I've got TortoiseSVN installed and have a majority of my repositories checking in and out from C:\subversion\, and a couple checking in and out from a network share (I forgot about this when I originally posted this question). This means that I don't have a "subversion" server per se. How do I integrate TortoiseSVN and FogBugz? Edit: inserted italics
ANSWER:
I've been investigating this issue and have managed to get it working. There are a couple of minor problems but they can be worked-around. There are 3 distinct parts to this problem, as follows: The TortoiseSVN part - getting TortoiseSVN to insert the Bugid and hyperlink in the svn log The FogBugz part - getting FogBugz to insert the SVN info and corresponding links The WebSVN part - ensuring the links from FogBugz actually work Instructions for part 1 are in another answer, although it actually does more than required. The stuff about the hooks is actually for part 2, and as is pointed out - it doesn't work "out of the box" Just to confirm, we are looking at using TortoiseSVN WITHOUT an SVN server (ie. file-based repositories) I'm accessing the repositories using UNC paths, but it also works for local drives or mapped drives. All of this works with TortoiseSVN v1.5.3 and SVN Server v1.5.2 (You need to install SVN Server because part 2 needs svnlook.exe which is in the server package. You don't actually configure it to work as an SVN Server) It may even be possible to just copy svnlook.exe from another computer and put it somewhere in your path. Part 1 - TortoiseSVN Creating the TortoiseSVN properties is all that is required in order to get the links in the SVN log. Previous instructions work fine, I'll quote them here for convenience: Configure the Properties Right click on the root directory of the checked out project you want to work with. Select "TortoiseSVN -> Properties" Add five property value pairs by clicking "New..." and inserting the following in "Property Name" and "Property Value" respectively: (make sure you tick "Apply property recursively" for each one) bugtraq:label BugzID: bugtraq:message BugzID: %BUGID% bugtraq:number true bugtraq:url http://[your fogbugz URL here]/default.asp?%BUGID% bugtraq:warnifnoissue false Click "OK" As Jeff says, you'll need to do that for each working copy, so follow his instructions for migrating the properties. That's it. TortoiseSVN will now add a link to the corresponding FogBugz bugID when you commit. If that's all you want, you can stop here. Part 2 - FogBugz For this to work we need to set up the hook scripts. Basically the batch file is called after each commit, and this in turn calls the VBS script which does the submission to FogBugz. The VBS script actually works fine in this situation so we don't need to modify it. The problem is that the batch file is written to work as a server hook, but we need a client hook. SVN server calls the post-commit hook with these parameters: TortoiseSVN calls the post-commit hook with these parameters: So that's why it doesn't work - the parameters are wrong. We need to amend the batch file so it passes the correct parameters to the VBS script. You'll notice that TSVN doesn't pass the repository path, which is a problem, but it does work in the following circumstances: The repository name and working copy name are the same You do the commit at the root of the working copy, not a subfolder. I'm going to see if I can fix this problem and will post back here if I do. Here's my amended batch file which does work (please excuse the excessive comments...) You'll need to set the hook and repository directories to match your setup. rem @echo off rem SubVersion -> FogBugz post-commit hook file rem Put this into the Hooks directory in your subversion repository rem along with the logBugDataSVN.vbs file
rem TSVN calls this with args rem The ones we're interested in are and which are %4 and %6
rem YOU NEED TO EDIT THE LINE WHICH SETS RepoRoot TO POINT AT THE DIRECTORY rem THAT CONTAINS YOUR REPOSITORIES AND ALSO YOU MUST SET THE HOOKS DIRECTORY
setlocal
rem debugging rem echo %1 %2 %3 %4 %5 %6 > c:\temp\test.txt
rem Set Hooks directory location (no trailing slash) set HooksDir=\\myserver\svn\hooks
rem Set Repo Root location (ie. the directory containing all the repos) rem (no trailing slash) set RepoRoot=\\myserver\svn
rem Build full repo location set Repo=%RepoRoot%\%~n6
rem debugging rem echo %Repo% >> c:\temp\test.txt
rem Grab the last two digits of the revision number rem and append them to the log of svn changes rem to avoid simultaneous commit scenarios causing overwrites set ChangeFileSuffix=%~4 set LogSvnChangeFile=svn%ChangeFileSuffix:~-2,2%.txt
set LogBugDataScript=logBugDataSVN.vbs set ScriptCommand=cscript
rem Could remove the need for svnlook on the client since TSVN rem provides as parameters the info we need to call the script. rem However, it's in a slightly different format than the script is expecting rem for parsing, therefore we would have to amend the script too, so I won't bother. rem @echo on svnlook changed -r %4 %Repo% > %temp%\%LogSvnChangeFile% svnlook log -r %4 %Repo% | %ScriptCommand% %HooksDir%\%LogBugDataScript% %4 %temp%\%LogSvnChangeFile% %~n6
del %temp%\%LogSvnChangeFile% endlocal I'm going to assume the repositories are at \\myserver\svn\ and working copies are all under `C:\Projects\ Go into your FogBugz account and click Extras -> Configure Source Control Integration Download the VBScript file for Subversion (don't bother with the batch file) Create a folder to store the hook scripts. I put it in the same folder as my repositories. eg. \\myserver\svn\hooks\ Rename VBscript to remove the.safe at the end of the filename. Save my version of the batch file in your hooks directory, as post-commit-tsvn.bat Right click on any directory. Select "TortoiseSVN > Settings" (in the right click menu from the last step) Select "Hook Scripts" Click "Add" and set the properties as follows: Hook Type: Post-Commit Hook Working Copy Path: C:\Projects (or whatever your root directory for all of your projects is.) Command Line To Execute: \\myserver\svn\hooks\post-commit-tsvn.bat (this needs to point to wherever you put your hooks directory in step 3) Tick "Wait for the script to finish" Click OK twice. Next time you commit and enter a Bugid, it will be submitted to FogBugz. The links won't work but at least the revision info is there and you can manually look up the log in TortoiseSVN. NOTE: You'll notice that the repository root is hard-coded into the batch file. As a result, if you check out from repositories that don't have the same root (eg. one on local drive and one on network) then you'll need to use 2 batch files and 2 corresponding entries under Hook Scripts in the TSVN settings. The way to do this would be to have 2 separate Working Copy trees - one for each repository root. Part 3 - WebSVN Errr, I haven't done this:-) From reading the WebSVN docs, it seems that WebSVN doesn't actually integrate with the SVN server, it just behaves like any other SVN client but presents a web interface. In theory then it should work fine with a file-based repository. I haven't tried it though. | [
"svn",
"tortoisesvn",
"integration",
"fogbugz"
] | 16 | 18 | 5,302 | 5 | 0 | 2008-08-06T16:10:44.313000 | 2008-12-16T00:03:30.553000 |
3,611 | 3,618 | PHP Error - Uploading a file | I'm trying to write some PHP to upload a file to a folder on my webserver. Here's what I have: Dump Upload Upload a File Select the File: I'm getting these errors: Warning: move_uploaded_file(./test.txt) [function.move-uploaded-file]: failed to open stream: Permission denied in E:\inetpub\vhosts\mywebsite.com\httpdocs\dump\upload.php on line 3 Warning: move_uploaded_file() [function.move-uploaded-file]: Unable to move 'C:\WINDOWS\Temp\phpA30E.tmp' to './test.txt' in E:\inetpub\vhosts\mywebsite.com\httpdocs\dump\upload.php on line 3 Warning: Cannot modify header information - headers already sent by (output started at E:\inetpub\vhosts\mywebsite.com\httpdocs\dump\upload.php:3) in E:\inetpub\vhosts\mywebsite.com\httpdocs\dump\upload.php on line 4 PHP version 4.4.7 Running IIS on a Windows box. This particular file/folder has 777 permissions. Any ideas? | As it's Windows, there is no real 777. If you're using chmod, check the Windows-related comments. Check that the IIS Account can access (read, write, modify) these two folders: E:\inetpub\vhosts\mywebsite.com\httpdocs\dump\ C:\WINDOWS\Temp\ | PHP Error - Uploading a file I'm trying to write some PHP to upload a file to a folder on my webserver. Here's what I have: Dump Upload Upload a File Select the File: I'm getting these errors: Warning: move_uploaded_file(./test.txt) [function.move-uploaded-file]: failed to open stream: Permission denied in E:\inetpub\vhosts\mywebsite.com\httpdocs\dump\upload.php on line 3 Warning: move_uploaded_file() [function.move-uploaded-file]: Unable to move 'C:\WINDOWS\Temp\phpA30E.tmp' to './test.txt' in E:\inetpub\vhosts\mywebsite.com\httpdocs\dump\upload.php on line 3 Warning: Cannot modify header information - headers already sent by (output started at E:\inetpub\vhosts\mywebsite.com\httpdocs\dump\upload.php:3) in E:\inetpub\vhosts\mywebsite.com\httpdocs\dump\upload.php on line 4 PHP version 4.4.7 Running IIS on a Windows box. This particular file/folder has 777 permissions. Any ideas? | TITLE:
PHP Error - Uploading a file
QUESTION:
I'm trying to write some PHP to upload a file to a folder on my webserver. Here's what I have: Dump Upload Upload a File Select the File: I'm getting these errors: Warning: move_uploaded_file(./test.txt) [function.move-uploaded-file]: failed to open stream: Permission denied in E:\inetpub\vhosts\mywebsite.com\httpdocs\dump\upload.php on line 3 Warning: move_uploaded_file() [function.move-uploaded-file]: Unable to move 'C:\WINDOWS\Temp\phpA30E.tmp' to './test.txt' in E:\inetpub\vhosts\mywebsite.com\httpdocs\dump\upload.php on line 3 Warning: Cannot modify header information - headers already sent by (output started at E:\inetpub\vhosts\mywebsite.com\httpdocs\dump\upload.php:3) in E:\inetpub\vhosts\mywebsite.com\httpdocs\dump\upload.php on line 4 PHP version 4.4.7 Running IIS on a Windows box. This particular file/folder has 777 permissions. Any ideas?
ANSWER:
As it's Windows, there is no real 777. If you're using chmod, check the Windows-related comments. Check that the IIS Account can access (read, write, modify) these two folders: E:\inetpub\vhosts\mywebsite.com\httpdocs\dump\ C:\WINDOWS\Temp\ | [
"php",
"iis",
"upload"
] | 18 | 9 | 14,534 | 8 | 0 | 2008-08-06T16:13:02.250000 | 2008-08-06T16:16:01.667000 |
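The question's form and PHP source were stripped from this dump, so for context here is a minimal sketch of the kind of upload handler that produces those exact warnings when the target directory is not writable. The field name, target folder and redirect are illustrative, and it sticks to PHP 4-compatible calls since the asker is on PHP 4.4.7.

```php
<?php
// upload.php - assumes the form posts a file in a field named "userfile"
$targetDir  = dirname(__FILE__) . '/uploads';   // must be writable by the IIS user running PHP
$targetPath = $targetDir . '/' . basename($_FILES['userfile']['name']);

if (move_uploaded_file($_FILES['userfile']['tmp_name'], $targetPath)) {
    // The redirect only works if nothing has been output yet;
    // otherwise PHP raises "headers already sent".
    header('Location: success.php');
    exit;
}

echo 'Upload failed - check permissions on ' . $targetDir;
```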
3,625 | 102,129 | What's the Developer Express equivalent of System.Windows.Forms.LinkButton? | I can't seem to find Developer Express' version of the LinkButton. (The Windows Forms linkbutton, not the ASP.NET linkbutton.) HyperLinkEdit doesn't seem to be what I'm looking for since it looks like a TextEdit/TextBox. Anyone know what their version of it is? I'm using the latest DevX controls: 8.2.1. | The control is called the HyperLinkEdit. You have to adjust the properties to get it to behave like the System.Windows.Forms control like so: control.BorderStyle = BorderStyles.NoBorder; control.Properties.Appearance.BackColor = Color.Transparent; control.Properties.AppearanceFocused.BackColor = Color.Transparent; control.Properties.ReadOnly = true; | What's the Developer Express equivalent of System.Windows.Forms.LinkButton? I can't seem to find Developer Express' version of the LinkButton. (The Windows Forms linkbutton, not the ASP.NET linkbutton.) HyperLinkEdit doesn't seem to be what I'm looking for since it looks like a TextEdit/TextBox. Anyone know what their version of it is? I'm using the latest DevX controls: 8.2.1. | TITLE:
What's the Developer Express equivalent of System.Windows.Forms.LinkButton?
QUESTION:
I can't seem to find Developer Express' version of the LinkButton. (The Windows Forms linkbutton, not the ASP.NET linkbutton.) HyperLinkEdit doesn't seem to be what I'm looking for since it looks like a TextEdit/TextBox. Anyone know what their version of it is? I'm using the latest DevX controls: 8.2.1.
ANSWER:
The control is called the HyperLinkEdit. You have to adjust the properties to get it to behave like the System.Windows.Forms control like so: control.BorderStyle = BorderStyles.NoBorder; control.Properties.Appearance.BackColor = Color.Transparent; control.Properties.AppearanceFocused.BackColor = Color.Transparent; control.Properties.ReadOnly = true; | [
"devexpress"
] | 3 | 3 | 1,538 | 2 | 0 | 2008-08-06T16:20:06.290000 | 2008-09-19T14:12:29.407000 |
3,654 | 3,655 | HTML version choice | When developing a new web based application which version of html should you aim for? EDIT: cool I was just attempting to get a feel from others I tend to use XHTML 1.0 Strict in my own work and Transitional when others are involved in the content creation. I marked the first XHTML 1.0 Transitional post as the 'correct answer' but believe strongly that all the answers given at that point where equally valid. | I'd shoot for XHTML Transitional 1.0. There are still a few nuances out there that don't like XHTML strict, and most editors I've seen now will give you the proper nudges to make sure that things are done right. | HTML version choice When developing a new web based application which version of html should you aim for? EDIT: cool I was just attempting to get a feel from others I tend to use XHTML 1.0 Strict in my own work and Transitional when others are involved in the content creation. I marked the first XHTML 1.0 Transitional post as the 'correct answer' but believe strongly that all the answers given at that point where equally valid. | TITLE:
HTML version choice
QUESTION:
When developing a new web-based application, which version of HTML should you aim for? EDIT: Cool, I was just attempting to get a feel from others. I tend to use XHTML 1.0 Strict in my own work and Transitional when others are involved in the content creation. I marked the first XHTML 1.0 Transitional post as the 'correct answer' but believe strongly that all the answers given at that point were equally valid.
ANSWER:
I'd shoot for XHTML Transitional 1.0. There are still a few nuances out there that don't like XHTML strict, and most editors I've seen now will give you the proper nudges to make sure that things are done right. | [
"html",
"xml",
"xhtml"
] | 23 | 9 | 1,629 | 14 | 0 | 2008-08-06T16:40:23.130000 | 2008-08-06T16:43:00.410000 |
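For reference alongside the answer above, this is the standard document type declaration and root element an XHTML 1.0 Transitional page starts with; it is generic boilerplate rather than anything specific to the question.

```html
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
    "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
  <head>
    <title>Example page</title>
  </head>
  <body>
    <p>Content goes here.</p>
  </body>
</html>
```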
3,667 | 29,213 | What is your favorite web app deployment workflow with SVN? | We are currently using a somewhat complicated deployment setup that involves a remote SVN server, 3 SVN branches for DEV, STAGE, and PROD, promoting code between them through patches, etc. I wonder what do you use for deployment in a small dev team situation? | trunk for development, and a branch (production) for the production stuff. On my local machine, I have a VirtualHost that points to the trunk branch, to test my changes. Any commit to trunk triggers a commit hook that does an svn export and sync to the online server's dev URL - so if the site is stackoverflow.com then this hook automatically updates dev.stackoverflow.com Then I use svnmerge to merge selected patches from trunk to production in my local checkouts. I have a VirtualHost again on my local machine pointing to the production branch. When I commit the merged changes to the production branch, again an SVN export hook updates the production (live) export and the site is live! | What is your favorite web app deployment workflow with SVN? We are currently using a somewhat complicated deployment setup that involves a remote SVN server, 3 SVN branches for DEV, STAGE, and PROD, promoting code between them through patches, etc. I wonder what do you use for deployment in a small dev team situation? | TITLE:
What is your favorite web app deployment workflow with SVN?
QUESTION:
We are currently using a somewhat complicated deployment setup that involves a remote SVN server, 3 SVN branches for DEV, STAGE, and PROD, promoting code between them through patches, etc. I wonder what do you use for deployment in a small dev team situation?
ANSWER:
trunk for development, and a branch (production) for the production stuff. On my local machine, I have a VirtualHost that points to the trunk branch, to test my changes. Any commit to trunk triggers a commit hook that does an svn export and sync to the online server's dev URL - so if the site is stackoverflow.com then this hook automatically updates dev.stackoverflow.com Then I use svnmerge to merge selected patches from trunk to production in my local checkouts. I have a VirtualHost again on my local machine pointing to the production branch. When I commit the merged changes to the production branch, again an SVN export hook updates the production (live) export and the site is live! | [
"svn",
"deployment"
] | 15 | 15 | 3,337 | 11 | 0 | 2008-08-06T16:48:24.127000 | 2008-08-26T23:45:56.213000 |
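The answer above relies on a post-commit hook that exports trunk to the dev site; here is a sketch of what such a hook might look like on a Unix-style Subversion server. The paths, document root and repository layout are assumptions, and a file-based or Windows setup would need a different script.

```bash
#!/bin/sh
# post-commit hook: Subversion passes the repository path and the new revision.
REPOS="$1"
REV="$2"

# Export the latest trunk into the dev vhost's document root.
# --force allows exporting over the previous copy in place.
/usr/bin/svn export --force "file://$REPOS/trunk" /var/www/dev.example.com >/dev/null 2>&1
```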
3,682 | 155,901 | Distribution of table in time | I have a MySQL table with approximately 3000 rows per user. One of the columns is a datetime field, which is mutable, so the rows aren't in chronological order. I'd like to visualize the time distribution in a chart, so I need a number of individual datapoints. 20 datapoints would be enough. I could do this: select timefield from entries where uid =? order by timefield; and look at every 150th row. Or I could do 20 separate queries and use limit 1 and offset. But there must be a more efficient solution... | Michal Sznajder almost had it, but you can't use column aliases in a WHERE clause in SQL. So you have to wrap it as a derived table. I tried this and it returns 20 rows: SELECT * FROM ( SELECT @rownum:=@rownum+1 AS rownum, e.* FROM (SELECT @rownum:= 0) r, entries e) AS e2 WHERE uid =? AND rownum % 150 = 0; | Distribution of table in time I have a MySQL table with approximately 3000 rows per user. One of the columns is a datetime field, which is mutable, so the rows aren't in chronological order. I'd like to visualize the time distribution in a chart, so I need a number of individual datapoints. 20 datapoints would be enough. I could do this: select timefield from entries where uid =? order by timefield; and look at every 150th row. Or I could do 20 separate queries and use limit 1 and offset. But there must be a more efficient solution... | TITLE:
Distribution of table in time
QUESTION:
I have a MySQL table with approximately 3000 rows per user. One of the columns is a datetime field, which is mutable, so the rows aren't in chronological order. I'd like to visualize the time distribution in a chart, so I need a number of individual datapoints. 20 datapoints would be enough. I could do this: select timefield from entries where uid =? order by timefield; and look at every 150th row. Or I could do 20 separate queries and use limit 1 and offset. But there must be a more efficient solution...
ANSWER:
Michal Sznajder almost had it, but you can't use column aliases in a WHERE clause in SQL. So you have to wrap it as a derived table. I tried this and it returns 20 rows: SELECT * FROM ( SELECT @rownum:=@rownum+1 AS rownum, e.* FROM (SELECT @rownum:= 0) r, entries e) AS e2 WHERE uid =? AND rownum % 150 = 0; | [
"sql",
"mysql"
] | 12 | 6 | 1,934 | 7 | 0 | 2008-08-06T16:58:34.153000 | 2008-10-01T01:49:27.897000 |
3,713 | 3,777 | Call ASP.NET function from JavaScript | I'm writing a web page in ASP.NET. I have some JavaScript code, and I have a submit button with a click event. Is it possible to call a method I created in ASP with JavaScript's click event? | Well, if you don't want to do it using Ajax or any other way and just want a normal ASP.NET postback to happen, here is how you do it (without using any other libraries): It is a little tricky though...:) i. In your code file (assuming you are using C# and.NET 2.0 or later) add the following Interface to your Page class to make it look like public partial class Default: System.Web.UI.Page, IPostBackEventHandler{} ii. This should add (using Tab - Tab ) this function to your code file: public void RaisePostBackEvent(string eventArgument) { } iii. In your onclick event in JavaScript, write the following code: var pageId = '<%= Page.ClientID %>'; __doPostBack(pageId, argumentString); This will call the 'RaisePostBackEvent' method in your code file with the 'eventArgument' as the 'argumentString' you passed from the JavaScript. Now, you can call any other event you like. P.S: That is 'underscore-underscore-doPostBack'... And, there should be no space in that sequence... Somehow the WMD does not allow me to write to underscores followed by a character! | Call ASP.NET function from JavaScript I'm writing a web page in ASP.NET. I have some JavaScript code, and I have a submit button with a click event. Is it possible to call a method I created in ASP with JavaScript's click event? | TITLE:
Call ASP.NET function from JavaScript
QUESTION:
I'm writing a web page in ASP.NET. I have some JavaScript code, and I have a submit button with a click event. Is it possible to call a method I created in ASP with JavaScript's click event?
ANSWER:
Well, if you don't want to do it using Ajax or any other way and just want a normal ASP.NET postback to happen, here is how you do it (without using any other libraries): It is a little tricky though...:) i. In your code file (assuming you are using C# and.NET 2.0 or later) add the following Interface to your Page class to make it look like public partial class Default: System.Web.UI.Page, IPostBackEventHandler{} ii. This should add (using Tab - Tab ) this function to your code file: public void RaisePostBackEvent(string eventArgument) { } iii. In your onclick event in JavaScript, write the following code: var pageId = '<%= Page.ClientID %>'; __doPostBack(pageId, argumentString); This will call the 'RaisePostBackEvent' method in your code file with the 'eventArgument' as the 'argumentString' you passed from the JavaScript. Now, you can call any other event you like. P.S: That is 'underscore-underscore-doPostBack'... And, there should be no space in that sequence... Somehow the WMD does not allow me to write two underscores followed by a character! | [
"javascript",
"c#",
"asp.net",
"onclick"
] | 147 | 99 | 311,462 | 20 | 0 | 2008-08-06T17:16:36.630000 | 2008-08-06T18:04:25.270000 |
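To make the postback answer above easier to lift into a project, here is a consolidated sketch of its three steps. It follows the answer's own approach (IPostBackEventHandler plus __doPostBack); the page class name Default, the button id, and the argument string are illustrative assumptions rather than anything from the original post.

```csharp
// Code-behind (Default.aspx.cs) -- steps i and ii of the answer:
// the page implements IPostBackEventHandler so __doPostBack can reach it.
using System;
using System.Web.UI;

public partial class Default : Page, IPostBackEventHandler
{
    // Step ii: invoked by the page framework when the client calls __doPostBack
    // targeting this page; eventArgument is the string passed from JavaScript.
    public void RaisePostBackEvent(string eventArgument)
    {
        // Illustrative only: react to the argument sent from the client.
        Response.Write("Server-side handler received: " + eventArgument);
    }
}

/* Step iii, client side (in the .aspx markup) -- per the answer, note the two
   leading underscores and no spaces in __doPostBack:

   <input type="button" id="btnSend" value="Send"
          onclick="__doPostBack('<%= Page.ClientID %>', 'hello-from-js');" />
*/
```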
3,725 | 3,776 | How to create a tree-view preferences dialog type of interface in C#? | I'm writing an application that is basically just a preferences dialog, much like the tree-view preferences dialog that Visual Studio itself uses. The function of the application is simply a pass-through for data from a serial device to a file. It performs many, many transformations on the data before writing it to the file, so the GUI for the application is simply all the settings that dictate what those transformations should be. What's the best way to go about designing/coding a tree-view preferences dialog? The way I've been going about it is building the main window with a docked tree control on the left. Then I have been creating container controls that correspond to each node of the tree. When a node is selected, the app brings that node's corresponding container control to the front, moves it to the right position, and maximizes it in the main window. This seems really, really clunky while designing it. It basically means I have tons of container controls beyond the edge of the main window during design time that I have to keep scrolling the main window over to in order to work with them. I don't know if this totally makes sense the way I'm writing this, but maybe this visual for what I'm talking about will make more sense: Basically I have to work with this huge form, with container controls all over the place, and then do a bunch of run-time reformatting to make it all work. This seems like a lot of extra work. Am I doing this in a totally stupid way? Is there some "obvious" easier way of doing this that I'm missing? | A tidier way is to create separate forms for each 'pane' and, in each form constructor, set this.TopLevel = false; this.FormBorderStyle = FormBorderStyle.None; this.Dock = DockStyle.Fill; That way, each of these forms can be laid out in its own designer, instantiated one or more times at runtime, and added to the empty area like a normal control. Perhaps the main form could use a SplitContainer with a static TreeView in one panel, and space to add these forms in the other. Once they are added, they could be flipped through using Hide/Show or BringToFront/SendToBack methods. SeparateForm f = new SeparateForm(); MainFormSplitContainer.Panel2.Controls.Add(f); f.Show(); | How to create a tree-view preferences dialog type of interface in C#? I'm writing an application that is basically just a preferences dialog, much like the tree-view preferences dialog that Visual Studio itself uses. The function of the application is simply a pass-through for data from a serial device to a file. It performs many, many transformations on the data before writing it to the file, so the GUI for the application is simply all the settings that dictate what those transformations should be. What's the best way to go about designing/coding a tree-view preferences dialog? The way I've been going about it is building the main window with a docked tree control on the left. Then I have been creating container controls that correspond to each node of the tree. When a node is selected, the app brings that node's corresponding container control to the front, moves it to the right position, and maximizes it in the main window. This seems really, really clunky while designing it. It basically means I have tons of container controls beyond the edge of the main window during design time that I have to keep scrolling the main window over to in order to work with them. 
I don't know if this totally makes sense the way I'm writing this, but maybe this visual for what I'm talking about will make more sense: Basically I have to work with this huge form, with container controls all over the place, and then do a bunch of run-time reformatting to make it all work. This seems like a lot of extra work. Am I doing this in a totally stupid way? Is there some "obvious" easier way of doing this that I'm missing? | TITLE:
How to create a tree-view preferences dialog type of interface in C#?
QUESTION:
I'm writing an application that is basically just a preferences dialog, much like the tree-view preferences dialog that Visual Studio itself uses. The function of the application is simply a pass-through for data from a serial device to a file. It performs many, many transformations on the data before writing it to the file, so the GUI for the application is simply all the settings that dictate what those transformations should be. What's the best way to go about designing/coding a tree-view preferences dialog? The way I've been going about it is building the main window with a docked tree control on the left. Then I have been creating container controls that correspond to each node of the tree. When a node is selected, the app brings that node's corresponding container control to the front, moves it to the right position, and maximizes it in the main window. This seems really, really clunky while designing it. It basically means I have tons of container controls beyond the edge of the main window during design time that I have to keep scrolling the main window over to in order to work with them. I don't know if this totally makes sense the way I'm writing this, but maybe this visual for what I'm talking about will make more sense: Basically I have to work with this huge form, with container controls all over the place, and then do a bunch of run-time reformatting to make it all work. This seems like a lot of extra work. Am I doing this in a totally stupid way? Is there some "obvious" easier way of doing this that I'm missing?
ANSWER:
A tidier way is to create separate forms for each 'pane' and, in each form constructor, set this.TopLevel = false; this.FormBorderStyle = FormBorderStyle.None; this.Dock = DockStyle.Fill; That way, each of these forms can be laid out in its own designer, instantiated one or more times at runtime, and added to the empty area like a normal control. Perhaps the main form could use a SplitContainer with a static TreeView in one panel, and space to add these forms in the other. Once they are added, they could be flipped through using Hide/Show or BringToFront/SendToBack methods. SeparateForm f = new SeparateForm(); MainFormSplitContainer.Panel2.Controls.Add(f); f.Show(); | [
"c#",
"user-interface"
] | 18 | 12 | 4,491 | 3 | 0 | 2008-08-06T17:22:27.350000 | 2008-08-06T18:02:31.480000 |
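A minimal sketch of the hosted-forms layout described in the answer above: a SplitContainer with a TreeView on the left and one borderless child Form per preferences pane on the right, flipped with BringToFront. All type and node names here (PreferencesForm, GeneralPane, NetworkPane) are invented for the example; in practice each pane form would come from its own designer, as the answer suggests.

```csharp
using System;
using System.Collections.Generic;
using System.Windows.Forms;

public class PreferencesForm : Form
{
    private readonly SplitContainer split = new SplitContainer { Dock = DockStyle.Fill };
    private readonly TreeView tree = new TreeView { Dock = DockStyle.Fill };
    private readonly Dictionary<string, Form> panes = new Dictionary<string, Form>();

    public PreferencesForm()
    {
        Controls.Add(split);
        split.Panel1.Controls.Add(tree);

        // Each pane is a separate Form, embedded as a child control
        // exactly as the answer describes.
        AddPane("General", new GeneralPane());
        AddPane("Network", new NetworkPane());

        tree.AfterSelect += (s, e) =>
        {
            Form pane;
            if (panes.TryGetValue(e.Node.Text, out pane))
                pane.BringToFront();   // flip panes by z-order
        };
    }

    private void AddPane(string name, Form pane)
    {
        pane.TopLevel = false;                       // host the form as a control
        pane.FormBorderStyle = FormBorderStyle.None;
        pane.Dock = DockStyle.Fill;
        split.Panel2.Controls.Add(pane);
        pane.Show();
        panes[name] = pane;
        tree.Nodes.Add(name);
    }
}

// Placeholder pane forms; in practice these would be designer-generated.
public class GeneralPane : Form { }
public class NetworkPane : Form { }
```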
3,781 | 3,782 | Prototyping a GUI with a customer | When prototyping initial GUI functionality with a customer is it better to use a pen/paper drawing or to mock something up using a tool and show them that? The argument against a tool generated design being that the customer can sometimes focus on the low-level specifics of the mock-up rather than taking a higher level functional view of the GUI overall. | Always start with paper or paper-like mock-ups first. You do not want to fall into a trap of giving the impression of completeness when the back-end is completely hollow. A polished prototype or pixel-perfect example puts too much emphasis on the design. With an obvious sketch, you have a better shot of discussing desired functionality and content rather than colors, photos, and other stylistic matters. There will be time for that discussion later in the project. Jeff discusses paper prototyping in his Coding Horror article UI-First Software Development Click the "Watch a video!" link at twitter.com to see an interesting take on the idea from Common Craft. | Prototyping a GUI with a customer When prototyping initial GUI functionality with a customer is it better to use a pen/paper drawing or to mock something up using a tool and show them that? The argument against a tool generated design being that the customer can sometimes focus on the low-level specifics of the mock-up rather than taking a higher level functional view of the GUI overall. | TITLE:
Prototyping a GUI with a customer
QUESTION:
When prototyping initial GUI functionality with a customer is it better to use a pen/paper drawing or to mock something up using a tool and show them that? The argument against a tool generated design being that the customer can sometimes focus on the low-level specifics of the mock-up rather than taking a higher level functional view of the GUI overall.
ANSWER:
Always start with paper or paper-like mock-ups first. You do not want to fall into a trap of giving the impression of completeness when the back-end is completely hollow. A polished prototype or pixel-perfect example puts too much emphasis on the design. With an obvious sketch, you have a better shot of discussing desired functionality and content rather than colors, photos, and other stylistic matters. There will be time for that discussion later in the project. Jeff discusses paper prototyping in his Coding Horror article UI-First Software Development Click the "Watch a video!" link at twitter.com to see an interesting take on the idea from Common Craft. | [
"user-interface",
"prototyping"
] | 17 | 16 | 2,842 | 12 | 0 | 2008-08-06T18:10:23.477000 | 2008-08-06T18:10:56.317000 |
3,790 | 3,833 | Is there a WMI Redistributable Package? | I've been working on a project that accesses the WMI to get information about the software installed on a user's machine. We've been querying Win32_Product only to find that it doesn't exist in 64-bit versions of Windows because it's an "optional component". I know there are a lot of really good alternatives to querying the WMI for this information, but I've got a bit of a vested interest in finding out how well this is going to work out. What I want to know is if there's some kind of redistributable that can be packaged with our software to allow 64-bit users to get the WMI Installer Provider put onto their machines? Right now, they have to install it manually and the installation requires they have their Windows disc handy. Edit: You didn't mention for what OS, but the WMI Redistributable Components version 1.0 definitely exists. For Operation System, we've been using.NET 3.5 so we need packages that will work on XP64 and 64bit versions of Windows Vista. | You didn't mention for what OS, but the WMI Redistributable Components version 1.0 definitely exists. For Windows Server 2003, the WMI SDK and redistributables are part of the Server SDK I believe that the same is true for the Server 2008 SDK | Is there a WMI Redistributable Package? I've been working on a project that accesses the WMI to get information about the software installed on a user's machine. We've been querying Win32_Product only to find that it doesn't exist in 64-bit versions of Windows because it's an "optional component". I know there are a lot of really good alternatives to querying the WMI for this information, but I've got a bit of a vested interest in finding out how well this is going to work out. What I want to know is if there's some kind of redistributable that can be packaged with our software to allow 64-bit users to get the WMI Installer Provider put onto their machines? Right now, they have to install it manually and the installation requires they have their Windows disc handy. Edit: You didn't mention for what OS, but the WMI Redistributable Components version 1.0 definitely exists. For Operation System, we've been using.NET 3.5 so we need packages that will work on XP64 and 64bit versions of Windows Vista. | TITLE:
Is there a WMI Redistributable Package?
QUESTION:
I've been working on a project that accesses the WMI to get information about the software installed on a user's machine. We've been querying Win32_Product only to find that it doesn't exist in 64-bit versions of Windows because it's an "optional component". I know there are a lot of really good alternatives to querying the WMI for this information, but I've got a bit of a vested interest in finding out how well this is going to work out. What I want to know is if there's some kind of redistributable that can be packaged with our software to allow 64-bit users to get the WMI Installer Provider put onto their machines? Right now, they have to install it manually and the installation requires they have their Windows disc handy. Edit: You didn't mention for what OS, but the WMI Redistributable Components version 1.0 definitely exists. For Operation System, we've been using.NET 3.5 so we need packages that will work on XP64 and 64bit versions of Windows Vista.
ANSWER:
You didn't mention for what OS, but the WMI Redistributable Components version 1.0 definitely exists. For Windows Server 2003, the WMI SDK and redistributables are part of the Server SDK I believe that the same is true for the Server 2008 SDK | [
"windows",
"64-bit",
"wmi"
] | 4 | 2 | 2,165 | 2 | 0 | 2008-08-06T18:15:43.357000 | 2008-08-06T18:44:55.373000 |
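For context, this is roughly the kind of Win32_Product query the question describes. It is a sketch only: it needs a reference to System.Management and, as the question explains, it fails on 64-bit machines until the optional WMI Installer Provider is installed.

```csharp
using System;
using System.Management; // add a reference to System.Management.dll

class InstalledSoftwareLister
{
    static void Main()
    {
        // Win32_Product is backed by the WMI Installer Provider, which is an
        // optional component on 64-bit Windows of this era.
        var searcher = new ManagementObjectSearcher(
            "SELECT Name, Version FROM Win32_Product");

        foreach (ManagementObject product in searcher.Get())
        {
            Console.WriteLine("{0} {1}", product["Name"], product["Version"]);
        }
    }
}
```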
3,793 | 1,704,579 | Best way to get InnerXml of an XElement? | What's the best way to get the contents of the mixed body element in the code below? The element might contain either XHTML or text, but I just want its contents in string form. The XmlElement type has the InnerXml property which is exactly what I'm after. The code as written almost does what I want, but includes the surrounding... element, which I don't want. XDocument doc = XDocument.Load(new StreamReader(s)); var templates = from t in doc.Descendants("template") where t.Attribute("name").Value == templateName select new { Subject = t.Element("subject").Value, Body = t.Element("body").ToString() }; | I wanted to see which of these suggested solutions performed best, so I ran some comparative tests. Out of interest, I also compared the LINQ methods to the plain old System.Xml method suggested by Greg. The variation was interesting and not what I expected, with the slowest methods being more than 3 times slower than the fastest. The results ordered by fastest to slowest: CreateReader - Instance Hunter (0.113 seconds) Plain old System.Xml - Greg Hurlman (0.134 seconds) Aggregate with string concatenation - Mike Powell (0.324 seconds) StringBuilder - Vin (0.333 seconds) String.Join on array - Terry (0.360 seconds) String.Concat on array - Marcin Kosieradzki (0.364) Method I used a single XML document with 20 identical nodes (called 'hint'): Thinking of using a fake address? Please don't. If we can't verify your address we might just have to reject your application. The numbers shown as seconds above are the result of extracting the "inner XML" of the 20 nodes, 1000 times in a row, and taking the average (mean) of 5 runs. I didn't include the time it took to load and parse the XML into an XmlDocument (for the System.Xml method) or XDocument (for all the others). The LINQ algorithms I used were: (C# - all take an XElement "parent" and return the inner XML string) CreateReader: var reader = parent.CreateReader(); reader.MoveToContent();
return reader.ReadInnerXml(); Aggregate with string concatenation: return parent.Nodes().Aggregate("", (b, node) => b += node.ToString()); StringBuilder: StringBuilder sb = new StringBuilder();
foreach(var node in parent.Nodes()) { sb.Append(node.ToString()); }
return sb.ToString(); String.Join on array: return String.Join("", parent.Nodes().Select(x => x.ToString()).ToArray()); String.Concat on array: return String.Concat(parent.Nodes().Select(x => x.ToString()).ToArray()); I haven't shown the "Plain old System.Xml" algorithm here as it's just calling.InnerXml on nodes. Conclusion If performance is important (e.g. lots of XML, parsed frequently), I'd use Daniel's CreateReader method every time. If you're just doing a few queries, you might want to use Mike's more concise Aggregate method. If you're using XML on large elements with lots of nodes (maybe 100's), you'd probably start to see the benefit of using StringBuilder over the Aggregate method, but not over CreateReader. I don't think the Join and Concat methods would ever be more efficient in these conditions because of the penalty of converting a large list to a large array (even obvious here with smaller lists). | Best way to get InnerXml of an XElement? What's the best way to get the contents of the mixed body element in the code below? The element might contain either XHTML or text, but I just want its contents in string form. The XmlElement type has the InnerXml property which is exactly what I'm after. The code as written almost does what I want, but includes the surrounding... element, which I don't want. XDocument doc = XDocument.Load(new StreamReader(s)); var templates = from t in doc.Descendants("template") where t.Attribute("name").Value == templateName select new { Subject = t.Element("subject").Value, Body = t.Element("body").ToString() }; | TITLE:
Best way to get InnerXml of an XElement?
QUESTION:
What's the best way to get the contents of the mixed body element in the code below? The element might contain either XHTML or text, but I just want its contents in string form. The XmlElement type has the InnerXml property which is exactly what I'm after. The code as written almost does what I want, but includes the surrounding... element, which I don't want. XDocument doc = XDocument.Load(new StreamReader(s)); var templates = from t in doc.Descendants("template") where t.Attribute("name").Value == templateName select new { Subject = t.Element("subject").Value, Body = t.Element("body").ToString() };
ANSWER:
I wanted to see which of these suggested solutions performed best, so I ran some comparative tests. Out of interest, I also compared the LINQ methods to the plain old System.Xml method suggested by Greg. The variation was interesting and not what I expected, with the slowest methods being more than 3 times slower than the fastest. The results ordered by fastest to slowest: CreateReader - Instance Hunter (0.113 seconds) Plain old System.Xml - Greg Hurlman (0.134 seconds) Aggregate with string concatenation - Mike Powell (0.324 seconds) StringBuilder - Vin (0.333 seconds) String.Join on array - Terry (0.360 seconds) String.Concat on array - Marcin Kosieradzki (0.364) Method I used a single XML document with 20 identical nodes (called 'hint'): Thinking of using a fake address? Please don't. If we can't verify your address we might just have to reject your application. The numbers shown as seconds above are the result of extracting the "inner XML" of the 20 nodes, 1000 times in a row, and taking the average (mean) of 5 runs. I didn't include the time it took to load and parse the XML into an XmlDocument (for the System.Xml method) or XDocument (for all the others). The LINQ algorithms I used were: (C# - all take an XElement "parent" and return the inner XML string) CreateReader: var reader = parent.CreateReader(); reader.MoveToContent();
return reader.ReadInnerXml(); Aggregate with string concatenation: return parent.Nodes().Aggregate("", (b, node) => b += node.ToString()); StringBuilder: StringBuilder sb = new StringBuilder();
foreach(var node in parent.Nodes()) { sb.Append(node.ToString()); }
return sb.ToString(); String.Join on array: return String.Join("", parent.Nodes().Select(x => x.ToString()).ToArray()); String.Concat on array: return String.Concat(parent.Nodes().Select(x => x.ToString()).ToArray()); I haven't shown the "Plain old System.Xml" algorithm here as it's just calling.InnerXml on nodes. Conclusion If performance is important (e.g. lots of XML, parsed frequently), I'd use Daniel's CreateReader method every time. If you're just doing a few queries, you might want to use Mike's more concise Aggregate method. If you're using XML on large elements with lots of nodes (maybe 100's), you'd probably start to see the benefit of using StringBuilder over the Aggregate method, but not over CreateReader. I don't think the Join and Concat methods would ever be more efficient in these conditions because of the penalty of converting a large list to a large array (even obvious here with smaller lists). | [
".net",
"xml",
"xelement",
"innerxml"
] | 154 | 213 | 90,225 | 14 | 0 | 2008-08-06T18:16:55.853000 | 2009-11-09T23:08:14.530000 |
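The CreateReader technique this answer recommends is easy to package as a small extension method; the sketch below simply restates the answer's two-line snippet with the using directives and wrapper it needs (the method name InnerXml is my own choice, not from the post).

```csharp
using System.Xml;
using System.Xml.Linq;

public static class XElementExtensions
{
    // Returns the markup inside 'parent' without the surrounding element tags,
    // i.e. the equivalent of XmlElement.InnerXml for an XElement.
    public static string InnerXml(this XElement parent)
    {
        using (XmlReader reader = parent.CreateReader())
        {
            reader.MoveToContent();
            return reader.ReadInnerXml();
        }
    }
}

// Usage, following the question's query:
//   Body = t.Element("body").InnerXml()
```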
3,801 | 3,824 | More vs. Faster Cores on a Webserver | The discussion of Dual vs. Quadcore is as old as the Quadcores itself and the answer is usually "it depends on your scenario". So here the scenario is a Web Server (Windows 2003 (not sure if x32 or x64), 4 GB RAM, IIS, ASP.net 3.0). My impression is that the CPU in a Webserver does not need to be THAT fast because requests are usually rather lightweight, so having more (slower) cores should be a better choice as we got many small requests. But since I do not have much experience with IIS load balancing and since I don't want to spend a lot of money only to find out I've made the wrong choice, can someone who has a bit more experience comment on whether or not More Slower or Fewer Faster cores is better? | For something like a webserver, dividing up the tasks of handling each connection is (relatively) easy. I say it's safe to say that web servers is one of the most common (and ironed out) uses of parallel code. And since you are able to split up much of the processing into multiple discrete threads, more cores actually does benefit you. This is one of the big reasons why shared hosting is even possible. If server software like IIS and Apache couldn't run requests in parallel it would mean that every page request would have to be dished out in a queue fashion...likely making load times unbearably slow. This also why high end server Operating Systems like Windows 2008 Server Enterprise support something like 64 cores and 2TB of RAM. These are applications that can actually take advantage of that many cores. Also, since each request is likely has low CPU load, you can probably (for some applications) get away with more slower cores. But obviously having each core faster can mean being able to get each task done quicker and, in theory, handle more tasks and more server requests. | More vs. Faster Cores on a Webserver The discussion of Dual vs. Quadcore is as old as the Quadcores itself and the answer is usually "it depends on your scenario". So here the scenario is a Web Server (Windows 2003 (not sure if x32 or x64), 4 GB RAM, IIS, ASP.net 3.0). My impression is that the CPU in a Webserver does not need to be THAT fast because requests are usually rather lightweight, so having more (slower) cores should be a better choice as we got many small requests. But since I do not have much experience with IIS load balancing and since I don't want to spend a lot of money only to find out I've made the wrong choice, can someone who has a bit more experience comment on whether or not More Slower or Fewer Faster cores is better? | TITLE:
More vs. Faster Cores on a Webserver
QUESTION:
The discussion of Dual vs. Quadcore is as old as the Quadcores itself and the answer is usually "it depends on your scenario". So here the scenario is a Web Server (Windows 2003 (not sure if x32 or x64), 4 GB RAM, IIS, ASP.net 3.0). My impression is that the CPU in a Webserver does not need to be THAT fast because requests are usually rather lightweight, so having more (slower) cores should be a better choice as we got many small requests. But since I do not have much experience with IIS load balancing and since I don't want to spend a lot of money only to find out I've made the wrong choice, can someone who has a bit more experience comment on whether or not More Slower or Fewer Faster cores is better?
ANSWER:
For something like a webserver, dividing up the tasks of handling each connection is (relatively) easy. It's safe to say that web serving is one of the most common (and most ironed-out) uses of parallel code. And since you are able to split up much of the processing into multiple discrete threads, more cores actually do benefit you. This is one of the big reasons why shared hosting is even possible. If server software like IIS and Apache couldn't run requests in parallel it would mean that every page request would have to be dished out in a queue fashion...likely making load times unbearably slow. This is also why high-end server operating systems like Windows 2008 Server Enterprise support something like 64 cores and 2TB of RAM. These are applications that can actually take advantage of that many cores. Also, since each request likely has a low CPU load, you can probably (for some applications) get away with more, slower cores. But obviously having each core faster can mean being able to get each task done quicker and, in theory, handle more tasks and more server requests. | [
"asp.net",
"windows",
"iis",
"hardware"
] | 11 | 16 | 3,631 | 4 | 0 | 2008-08-06T18:28:04.987000 | 2008-08-06T18:40:08.393000 |
3,802 | 3,874 | How do you typeset code elements in normal text? | What is the best way to typeset a function with arguments for readibility, brevity, and accuracy? I tend to put empty parentheses after the function name like func(), even if there are actually arguments for the function. I have trouble including the arguments and still feeling like the paragraph is readable. Any thoughts on best practices for this? | I usually take that approach, but if I feel like it's going to cause confusion, I'll use ellipses like: myFunction(...) I guess if I were good, I would use those any time I was omitting parameters from a function in text. | How do you typeset code elements in normal text? What is the best way to typeset a function with arguments for readibility, brevity, and accuracy? I tend to put empty parentheses after the function name like func(), even if there are actually arguments for the function. I have trouble including the arguments and still feeling like the paragraph is readable. Any thoughts on best practices for this? | TITLE:
How do you typeset code elements in normal text?
QUESTION:
What is the best way to typeset a function with arguments for readability, brevity, and accuracy? I tend to put empty parentheses after the function name like func(), even if there are actually arguments for the function. I have trouble including the arguments and still feeling like the paragraph is readable. Any thoughts on best practices for this?
ANSWER:
I usually take that approach, but if I feel like it's going to cause confusion, I'll use ellipses like: myFunction(...) I guess if I were good, I would use those any time I was omitting parameters from a function in text. | [
"language-agnostic",
"format"
] | 8 | 3 | 454 | 2 | 0 | 2008-08-06T18:28:38.573000 | 2008-08-06T19:16:06.403000 |
3,809 | 3,812 | Setup Visual Studio 2005 to print line numbers | How can I get line numbers to print in Visual Studio 2005 when printing code listings? | There is an option in the Print Dialog to do the same (in VS 2005 and 2008 atleast)! | Setup Visual Studio 2005 to print line numbers How can I get line numbers to print in Visual Studio 2005 when printing code listings? | TITLE:
Setup Visual Studio 2005 to print line numbers
QUESTION:
How can I get line numbers to print in Visual Studio 2005 when printing code listings?
ANSWER:
There is an option in the Print Dialog to do the same (in VS 2005 and 2008 at least)! | [
"visual-studio",
"visual-studio-2005",
"line-numbers"
] | 11 | 5 | 1,487 | 2 | 0 | 2008-08-06T18:32:37.957000 | 2008-08-06T18:35:41.400000 |
3,823 | 3,848 | Suggestions for implementing audit tables in SQL Server? | One simple method I've used in the past is basically just creating a second table whose structure mirrors the one I want to audit, and then create an update/delete trigger on the main table. Before a record is updated/deleted, the current state is saved to the audit table via the trigger. While effective, the data in the audit table is not the most useful or simple to report off of. I'm wondering if anyone has a better method for auditing data changes? There shouldn't be too many updates of these records, but it is highly sensitive information, so it is important to the customer that all changes are audited and easily reported on. | How much writing vs. reading of this table(s) do you expect? I've used a single audit table, with columns for Table, Column, OldValue, NewValue, User, and ChangeDateTime - generic enough to work with any other changes in the DB, and while a LOT of data got written to that table, reports on that data were sparse enough that they could be run at low-use periods of the day. Added: If the amount of data vs. reporting is a concern, the audit table could be replicated to a read-only database server, allowing you to run reports whenever necessary without bogging down the master server from doing their work. | Suggestions for implementing audit tables in SQL Server? One simple method I've used in the past is basically just creating a second table whose structure mirrors the one I want to audit, and then create an update/delete trigger on the main table. Before a record is updated/deleted, the current state is saved to the audit table via the trigger. While effective, the data in the audit table is not the most useful or simple to report off of. I'm wondering if anyone has a better method for auditing data changes? There shouldn't be too many updates of these records, but it is highly sensitive information, so it is important to the customer that all changes are audited and easily reported on. | TITLE:
Suggestions for implementing audit tables in SQL Server?
QUESTION:
One simple method I've used in the past is basically just creating a second table whose structure mirrors the one I want to audit, and then create an update/delete trigger on the main table. Before a record is updated/deleted, the current state is saved to the audit table via the trigger. While effective, the data in the audit table is not the most useful or simple to report off of. I'm wondering if anyone has a better method for auditing data changes? There shouldn't be too many updates of these records, but it is highly sensitive information, so it is important to the customer that all changes are audited and easily reported on.
ANSWER:
How much writing vs. reading of this table(s) do you expect? I've used a single audit table, with columns for Table, Column, OldValue, NewValue, User, and ChangeDateTime - generic enough to work with any other changes in the DB, and while a LOT of data got written to that table, reports on that data were sparse enough that they could be run at low-use periods of the day. Added: If the amount of data vs. reporting is a concern, the audit table could be replicated to a read-only database server, allowing you to run reports whenever necessary without bogging down the master server from doing their work. | [
"sql",
"sql-server",
"database",
"audit"
] | 32 | 19 | 14,393 | 6 | 0 | 2008-08-06T18:39:33.560000 | 2008-08-06T18:51:48.793000 |
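To make the single-audit-table idea above concrete, here is a hedged T-SQL sketch (this row's own dialect): one generic log table plus a per-table trigger feeding it. The table and column names (dbo.Customer, CustomerName, CustomerId) are invented, and a real generic solution would emit one log row per changed column rather than hard-coding a single column as shown.

```sql
-- Generic audit table along the lines the answer describes.
CREATE TABLE dbo.AuditLog (
    AuditId        INT IDENTITY(1,1) PRIMARY KEY,
    TableName      SYSNAME        NOT NULL,
    ColumnName     SYSNAME        NOT NULL,
    OldValue       NVARCHAR(MAX)  NULL,
    NewValue       NVARCHAR(MAX)  NULL,
    ChangedBy      SYSNAME        NOT NULL DEFAULT SUSER_SNAME(),
    ChangeDateTime DATETIME       NOT NULL DEFAULT GETDATE()
);
GO

-- One trigger per audited table keeps the example simple;
-- dbo.Customer and CustomerName are invented names.
CREATE TRIGGER dbo.trg_Customer_Audit
ON dbo.Customer
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    INSERT INTO dbo.AuditLog (TableName, ColumnName, OldValue, NewValue)
    SELECT 'Customer', 'CustomerName', d.CustomerName, i.CustomerName
    FROM   deleted d
    JOIN   inserted i ON i.CustomerId = d.CustomerId
    WHERE  ISNULL(d.CustomerName, '') <> ISNULL(i.CustomerName, '');
END;
```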
3,839 | 3,860 | How do I Concatenate entire result sets in MySQL? | I'm trying out the following query: SELECT A,B,C FROM table WHERE field LIKE 'query%' UNION SELECT A,B,C FROM table WHERE field LIKE '%query' UNION SELECT A,B,C FROM table WHERE field LIKE '%query%' GROUP BY B ORDER BY B ASC LIMIT 5 That's three queries stuck together, kinda sorta. However, the result set that comes back reflects results from query #3 before the results from query #1 which is undesired. Is there any way to prioritize these so that results come as all for query #1, then all for query #2 then all for query #3? I don't want to do this in PHP just yet (not to mention having to control for results that showed up in the first query not to show in the second and so forth). | Maybe you should try including a fourth column, stating the table it came from, and then order and group by it: SELECT A,B,C, "query 1" as origin FROM table WHERE field LIKE 'query%' UNION SELECT A,B,C, "query 2" as origin FROM table WHERE field LIKE '%query' UNION SELECT A,B,C, "query 3" as origin FROM table WHERE field LIKE '%query%' GROUP BY origin, B ORDER BY origin, B ASC LIMIT 5 | How do I Concatenate entire result sets in MySQL? I'm trying out the following query: SELECT A,B,C FROM table WHERE field LIKE 'query%' UNION SELECT A,B,C FROM table WHERE field LIKE '%query' UNION SELECT A,B,C FROM table WHERE field LIKE '%query%' GROUP BY B ORDER BY B ASC LIMIT 5 That's three queries stuck together, kinda sorta. However, the result set that comes back reflects results from query #3 before the results from query #1 which is undesired. Is there any way to prioritize these so that results come as all for query #1, then all for query #2 then all for query #3? I don't want to do this in PHP just yet (not to mention having to control for results that showed up in the first query not to show in the second and so forth). | TITLE:
How do I Concatenate entire result sets in MySQL?
QUESTION:
I'm trying out the following query: SELECT A,B,C FROM table WHERE field LIKE 'query%' UNION SELECT A,B,C FROM table WHERE field LIKE '%query' UNION SELECT A,B,C FROM table WHERE field LIKE '%query%' GROUP BY B ORDER BY B ASC LIMIT 5 That's three queries stuck together, kinda sorta. However, the result set that comes back reflects results from query #3 before the results from query #1 which is undesired. Is there any way to prioritize these so that results come as all for query #1, then all for query #2 then all for query #3? I don't want to do this in PHP just yet (not to mention having to control for results that showed up in the first query not to show in the second and so forth).
ANSWER:
Maybe you should try including a fourth column, stating the table it came from, and then order and group by it: SELECT A,B,C, "query 1" as origin FROM table WHERE field LIKE 'query%' UNION SELECT A,B,C, "query 2" as origin FROM table WHERE field LIKE '%query' UNION SELECT A,B,C, "query 3" as origin FROM table WHERE field LIKE '%query%' GROUP BY origin, B ORDER BY origin, B ASC LIMIT 5 | [
"mysql",
"sql",
"union"
] | 21 | 16 | 12,896 | 8 | 0 | 2008-08-06T18:47:24.137000 | 2008-08-06T18:59:36.467000 |
3,856 | 3,880 | Adobe Flex component events | I wrote a component that displays a filename, a thumbnail and has a button to load/play the file. The component is databound to a repeater. How can I make it so that the button event fires to the main application and tells it which file to play? | On your custom component you can listen to the button click event and then generate a custom event that holds information about the file you want to play. You can then set the bubbles property to true on the event and dispatch the custom event from your custom component. The bubbles property will make your event float up the display list and reach your main application. Now on your main application you can listen to that event and play the correct file. Hope this helps. | Adobe Flex component events I wrote a component that displays a filename, a thumbnail and has a button to load/play the file. The component is databound to a repeater. How can I make it so that the button event fires to the main application and tells it which file to play? | TITLE:
Adobe Flex component events
QUESTION:
I wrote a component that displays a filename, a thumbnail and has a button to load/play the file. The component is databound to a repeater. How can I make it so that the button event fires to the main application and tells it which file to play?
ANSWER:
On your custom component you can listen to the button click event and then generate a custom event that holds information about the file you want to play. You can then set the bubbles property to true on the event and dispatch the custom event from your custom component. The bubbles property will make your event float up the display list and reach your main application. Now on your main application you can listen to that event and play the correct file. Hope this helps. | [
"apache-flex",
"actionscript-3"
] | 7 | 1 | 3,427 | 2 | 0 | 2008-08-06T18:57:38.313000 | 2008-08-06T19:25:37.400000 |
3,859 | 3,865 | What is the difference between all the different types of version control? | After being told by at least 10 people on SO that version control was a good thing even if it's just me I now have a followup question. What is the difference between all the different types of version control and is there a guide that anybody knows of for version control that's very simple and easy to understand? | Eric Sink has a good overview of source control. There are also some existing questions here on SO. | What is the difference between all the different types of version control? After being told by at least 10 people on SO that version control was a good thing even if it's just me I now have a followup question. What is the difference between all the different types of version control and is there a guide that anybody knows of for version control that's very simple and easy to understand? | TITLE:
What is the difference between all the different types of version control?
QUESTION:
After being told by at least 10 people on SO that version control was a good thing even if it's just me I now have a followup question. What is the difference between all the different types of version control and is there a guide that anybody knows of for version control that's very simple and easy to understand?
ANSWER:
Eric Sink has a good overview of source control. There are also some existing questions here on SO. | [
"version-control",
"language-agnostic"
] | 22 | 15 | 13,984 | 13 | 0 | 2008-08-06T18:58:53.707000 | 2008-08-06T19:02:28.303000 |
3,867 | 3,941 | Automated release script and Visual Studio Setup projects | I think most people here understand the importance of fully automated builds. The problem is one of our project is now using an integrated Visual Studio Setup project (vdproj) and has recently been ported to Visual Studio 2008. Unfortunatly, those won't build in MSBuild and calling devenv.exe /build on 2008 just crashes, apparently it does that on all multi core computer (!!!). So now I have the choice to either rollback to.Net 2.0 and 2005 or simply ditch Visual Studio deployement, but first, I'd like a second opinion. Anyone knows of another automated way to build a.vdproj that will not require us to open the IDE and click on stuff? WiX was what I had in mind when saying we would ditch vdproj. Do you have any experience with it, good things, caveat? | The low cost solution is to switch to using ClickOnce, which you can automate using MSBuild. But if you still need to create a Windows Installer package, you will need to convert your project to WiX (pretty straight foward) and build that with your solution. This will get you started: Automate Releases With MSBuild And Windows Installer XML | Automated release script and Visual Studio Setup projects I think most people here understand the importance of fully automated builds. The problem is one of our project is now using an integrated Visual Studio Setup project (vdproj) and has recently been ported to Visual Studio 2008. Unfortunatly, those won't build in MSBuild and calling devenv.exe /build on 2008 just crashes, apparently it does that on all multi core computer (!!!). So now I have the choice to either rollback to.Net 2.0 and 2005 or simply ditch Visual Studio deployement, but first, I'd like a second opinion. Anyone knows of another automated way to build a.vdproj that will not require us to open the IDE and click on stuff? WiX was what I had in mind when saying we would ditch vdproj. Do you have any experience with it, good things, caveat? | TITLE:
Automated release script and Visual Studio Setup projects
QUESTION:
I think most people here understand the importance of fully automated builds. The problem is that one of our projects is now using an integrated Visual Studio Setup project (vdproj) and has recently been ported to Visual Studio 2008. Unfortunately, those won't build in MSBuild and calling devenv.exe /build on 2008 just crashes; apparently it does that on all multi-core computers (!!!). So now I have the choice to either roll back to .Net 2.0 and 2005 or simply ditch Visual Studio deployment, but first, I'd like a second opinion. Does anyone know of another automated way to build a .vdproj that will not require us to open the IDE and click on stuff? WiX was what I had in mind when saying we would ditch vdproj. Do you have any experience with it, good things, caveats?
ANSWER:
The low-cost solution is to switch to using ClickOnce, which you can automate using MSBuild. But if you still need to create a Windows Installer package, you will need to convert your project to WiX (pretty straightforward) and build that with your solution. This will get you started: Automate Releases With MSBuild And Windows Installer XML | [
"visual-studio",
"msbuild",
"wix",
"build-automation",
"vdproj"
] | 11 | 6 | 4,764 | 3 | 0 | 2008-08-06T19:04:51.823000 | 2008-08-06T20:25:24.673000 |
3,868 | 3,958 | How can I turn a string of HTML into a DOM object in a Firefox extension? | I'm downloading a web page (tag soup HTML) with XMLHttpRequest and I want to take the output and turn it into a DOM object that I can then run XPATH queries on. How do I convert from a string into DOM object? It appears that the general solution is to create a hidden iframe and throw the contents of the string into that. There has been talk of updating DOMParser to support text/html but as of Firefox 3.0.1 you still get an NS_ERROR_NOT_IMPLEMENTED if you try. Is there any option besides using the hidden iframe trick? And if not, what is the best way to do the iframe trick so that your code works outside the context of any currently open tabs (so that closing tabs won't screw up the code, etc)? This is an example of why I'm looking for a solution other than the iframe hack, if I have to write all that code to have a robust solution, then I'd rather keep looking for something else. | Ajaxian actually had a post on inserting / retrieving html from an iframe today. You can probably use the js snippet they have posted there. As for handling closing of a browser / tab, you can attach to the onbeforeunload ( http://msdn.microsoft.com/en-us/library/ms536907(VS.85).aspx ) event and do whatever you need to do. | How can I turn a string of HTML into a DOM object in a Firefox extension? I'm downloading a web page (tag soup HTML) with XMLHttpRequest and I want to take the output and turn it into a DOM object that I can then run XPATH queries on. How do I convert from a string into DOM object? It appears that the general solution is to create a hidden iframe and throw the contents of the string into that. There has been talk of updating DOMParser to support text/html but as of Firefox 3.0.1 you still get an NS_ERROR_NOT_IMPLEMENTED if you try. Is there any option besides using the hidden iframe trick? And if not, what is the best way to do the iframe trick so that your code works outside the context of any currently open tabs (so that closing tabs won't screw up the code, etc)? This is an example of why I'm looking for a solution other than the iframe hack, if I have to write all that code to have a robust solution, then I'd rather keep looking for something else. | TITLE:
How can I turn a string of HTML into a DOM object in a Firefox extension?
QUESTION:
I'm downloading a web page (tag soup HTML) with XMLHttpRequest and I want to take the output and turn it into a DOM object that I can then run XPATH queries on. How do I convert from a string into DOM object? It appears that the general solution is to create a hidden iframe and throw the contents of the string into that. There has been talk of updating DOMParser to support text/html but as of Firefox 3.0.1 you still get an NS_ERROR_NOT_IMPLEMENTED if you try. Is there any option besides using the hidden iframe trick? And if not, what is the best way to do the iframe trick so that your code works outside the context of any currently open tabs (so that closing tabs won't screw up the code, etc)? This is an example of why I'm looking for a solution other than the iframe hack, if I have to write all that code to have a robust solution, then I'd rather keep looking for something else.
ANSWER:
Ajaxian actually had a post on inserting / retrieving html from an iframe today. You can probably use the js snippet they have posted there. As for handling closing of a browser / tab, you can attach to the onbeforeunload ( http://msdn.microsoft.com/en-us/library/ms536907(VS.85).aspx ) event and do whatever you need to do. | [
"javascript",
"firefox",
"dom"
] | 26 | 10 | 3,646 | 5 | 0 | 2008-08-06T19:08:19.290000 | 2008-08-06T20:37:32.260000 |
3,881 | 3,886 | IllegalArgumentException or NullPointerException for a null parameter? | I have a simple setter method for a property and null is not appropriate for this particular property. I have always been torn in this situation: should I throw an IllegalArgumentException, or a NullPointerException? From the javadocs, both seem appropriate. Is there some kind of an understood standard? Or is this just one of those things that you should do whatever you prefer and both are really correct? | It seems like an IllegalArgumentException is called for if you don't want null to be an allowed value, and the NullPointerException would be thrown if you were trying to use a variable that turns out to be null. | IllegalArgumentException or NullPointerException for a null parameter? I have a simple setter method for a property and null is not appropriate for this particular property. I have always been torn in this situation: should I throw an IllegalArgumentException, or a NullPointerException? From the javadocs, both seem appropriate. Is there some kind of an understood standard? Or is this just one of those things that you should do whatever you prefer and both are really correct? | TITLE:
IllegalArgumentException or NullPointerException for a null parameter?
QUESTION:
I have a simple setter method for a property and null is not appropriate for this particular property. I have always been torn in this situation: should I throw an IllegalArgumentException, or a NullPointerException? From the javadocs, both seem appropriate. Is there some kind of an understood standard? Or is this just one of those things that you should do whatever you prefer and both are really correct?
ANSWER:
It seems like an IllegalArgumentException is called for if you don't want null to be an allowed value, and the NullPointerException would be thrown if you were trying to use a variable that turns out to be null. | [
"java",
"exception",
"null",
"nullpointerexception",
"illegalargumentexception"
] | 578 | 315 | 205,983 | 26 | 0 | 2008-08-06T19:26:30.627000 | 2008-08-06T19:29:24.427000 |
3,903 | 3,911 | Is this a good way to determine OS Architecture? | Since the WMI class Win32_OperatingSystem only includes OSArchitecture in Windows Vista, I quickly wrote up a method using the registry to try and determine whether or not the current system is a 32 or 64bit system. private Boolean is64BitOperatingSystem() { RegistryKey localEnvironment = Registry.LocalMachine.OpenSubKey("SYSTEM\\CurrentControlSet\\Control\\Session Manager\\Environment"); String processorArchitecture = (String) localEnvironment.GetValue("PROCESSOR_ARCHITECTURE");
if (processorArchitecture.Equals("x86")) { return false; } else { return true; } } It's worked out pretty well for us so far, but I'm not sure how much I like looking through the registry. Is this a pretty standard practice or is there a better method? Edit: Wow, that code looks a lot prettier in the preview. I'll consider linking to a pastebin or something, next time. | Take a look at Raymond Chens solution: How to detect programmatically whether you are running on 64-bit Windows and here's the PINVOKE for.NET: IsWow64Process (kernel32) Update: I'd take issue with checking for 'x86'. Who's to say what intel's or AMD's next 32 bit processor may be designated as. The probability is low but it is a risk. You should ask the OS to determine this via the correct API's, not by querying what could be a OS version/platform specific value that may be considered opaque to the outside world. Ask yourself the questions, 1 - is the registry entry concerned properly documented by MS, 2 - If it is do they provide a definitive list of possible values that is guaranteed to permit you as a developer to make the informed decision between whether you are running 32 bit or 64 bit. If the answer is no, then call the API's, yeah it's a but more long winded but it is documented and definitive. | Is this a good way to determine OS Architecture? Since the WMI class Win32_OperatingSystem only includes OSArchitecture in Windows Vista, I quickly wrote up a method using the registry to try and determine whether or not the current system is a 32 or 64bit system. private Boolean is64BitOperatingSystem() { RegistryKey localEnvironment = Registry.LocalMachine.OpenSubKey("SYSTEM\\CurrentControlSet\\Control\\Session Manager\\Environment"); String processorArchitecture = (String) localEnvironment.GetValue("PROCESSOR_ARCHITECTURE");
if (processorArchitecture.Equals("x86")) { return false; } else { return true; } } It's worked out pretty well for us so far, but I'm not sure how much I like looking through the registry. Is this a pretty standard practice or is there a better method? Edit: Wow, that code looks a lot prettier in the preview. I'll consider linking to a pastebin or something, next time. | TITLE:
Is this a good way to determine OS Architecture?
QUESTION:
Since the WMI class Win32_OperatingSystem only includes OSArchitecture in Windows Vista, I quickly wrote up a method using the registry to try and determine whether or not the current system is a 32 or 64bit system. private Boolean is64BitOperatingSystem() { RegistryKey localEnvironment = Registry.LocalMachine.OpenSubKey("SYSTEM\\CurrentControlSet\\Control\\Session Manager\\Environment"); String processorArchitecture = (String) localEnvironment.GetValue("PROCESSOR_ARCHITECTURE");
if (processorArchitecture.Equals("x86")) { return false; } else { return true; } } It's worked out pretty well for us so far, but I'm not sure how much I like looking through the registry. Is this a pretty standard practice or is there a better method? Edit: Wow, that code looks a lot prettier in the preview. I'll consider linking to a pastebin or something, next time.
ANSWER:
Take a look at Raymond Chen's solution: How to detect programmatically whether you are running on 64-bit Windows and here's the P/Invoke for .NET: IsWow64Process (kernel32) Update: I'd take issue with checking for 'x86'. Who's to say what Intel's or AMD's next 32-bit processor may be designated as? The probability is low, but it is a risk. You should ask the OS to determine this via the correct APIs, not by querying what could be an OS version/platform-specific value that may be considered opaque to the outside world. Ask yourself two questions: 1 - is the registry entry concerned properly documented by MS? 2 - If it is, do they provide a definitive list of possible values that is guaranteed to permit you as a developer to make an informed decision about whether you are running 32-bit or 64-bit? If the answer is no, then call the APIs; yes, it's a bit more long-winded, but it is documented and definitive. | [
"c#",
"windows",
"registry"
] | 22 | 8 | 6,010 | 4 | 0 | 2008-08-06T19:41:59.813000 | 2008-08-06T19:49:35.727000 |
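Below is a rough sketch of the IsWow64Process P/Invoke route the answer above points to. It assumes the kernel32 signature the answer references and only answers "is this 32-bit process running under a 64-bit OS", which is the common case for a 32-bit app; a 64-bit process can simply check IntPtr.Size.

```csharp
using System;
using System.Runtime.InteropServices;

static class OsArchitecture
{
    [DllImport("kernel32.dll", SetLastError = true)]
    [return: MarshalAs(UnmanagedType.Bool)]
    private static extern bool IsWow64Process(IntPtr hProcess,
        [MarshalAs(UnmanagedType.Bool)] out bool wow64Process);

    public static bool Is64BitOperatingSystem()
    {
        // A 64-bit process can only be running on a 64-bit OS.
        if (IntPtr.Size == 8)
            return true;

        // 32-bit process: ask whether it is running under WOW64.
        // IsWow64Process is missing on very old systems (pre-XP SP2), so a
        // production version should probe for it with GetProcAddress first.
        bool isWow64;
        return IsWow64Process(System.Diagnostics.Process.GetCurrentProcess().Handle, out isWow64)
               && isWow64;
    }
}
```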
3,927 | 100,490 | What Are Some Good .NET Profilers? | What profilers have you used when working with.net programs, and which would you particularly recommend? | I have used JetBrains dotTrace and Redgate ANTS extensively. They are fairly similar in features and price. They both offer useful performance profiling and quite basic memory profiling. dotTrace integrates with Resharper, which is really convenient, as you can profile the performance of a unit test with one click from the IDE. However, dotTrace often seems to give spurious results (e.g. saying that a method took several years to run) I prefer the way that ANTS presents the profiling results. It shows you the source code and to the left of each line tells you how long it took to run. dotTrace just has a tree view. EQATEC profiler is quite basic and requires you to compile special instrumented versions of your assemblies which can then be run in the EQATEC profiler. It is, however, free. Overall I prefer ANTS for performance profiling, although if you use Resharper then the integration of dotTrace is a killer feature and means it beats ANTS in usability. The free Microsoft CLR Profiler (.Net framework 2.0 /.Net Framework 4.0 ) is all you need for.NET memory profiling. 2011 Update: The Scitech memory profiler has quite a basic UI but lots of useful information, including some information on unmanaged memory which dotTrace and ANTS lack - you might find it useful if you are doing COM interop, but I have yet to find any profiler that makes COM memory issues easy to diagnose - you usually have to break out windbg.exe. The ANTS profiler has come on in leaps and bounds in the last few years, and its memory profiler has some truly useful features which now pushed it ahead of dotTrace as a package in my estimation. I'm lucky enough to have licenses for both, but if you are going to buy one.Net profiler for both performance and memory, make it ANTS. | What Are Some Good .NET Profilers? What profilers have you used when working with.net programs, and which would you particularly recommend? | TITLE:
What Are Some Good .NET Profilers?
QUESTION:
What profilers have you used when working with.net programs, and which would you particularly recommend?
ANSWER:
I have used JetBrains dotTrace and Redgate ANTS extensively. They are fairly similar in features and price. They both offer useful performance profiling and quite basic memory profiling. dotTrace integrates with Resharper, which is really convenient, as you can profile the performance of a unit test with one click from the IDE. However, dotTrace often seems to give spurious results (e.g. saying that a method took several years to run) I prefer the way that ANTS presents the profiling results. It shows you the source code and to the left of each line tells you how long it took to run. dotTrace just has a tree view. EQATEC profiler is quite basic and requires you to compile special instrumented versions of your assemblies which can then be run in the EQATEC profiler. It is, however, free. Overall I prefer ANTS for performance profiling, although if you use Resharper then the integration of dotTrace is a killer feature and means it beats ANTS in usability. The free Microsoft CLR Profiler (.Net framework 2.0 /.Net Framework 4.0 ) is all you need for.NET memory profiling. 2011 Update: The Scitech memory profiler has quite a basic UI but lots of useful information, including some information on unmanaged memory which dotTrace and ANTS lack - you might find it useful if you are doing COM interop, but I have yet to find any profiler that makes COM memory issues easy to diagnose - you usually have to break out windbg.exe. The ANTS profiler has come on in leaps and bounds in the last few years, and its memory profiler has some truly useful features which now pushed it ahead of dotTrace as a package in my estimation. I'm lucky enough to have licenses for both, but if you are going to buy one.Net profiler for both performance and memory, make it ANTS. | [
"c#",
".net",
"profiling",
"profiler"
] | 373 | 284 | 339,312 | 30 | 0 | 2008-08-06T20:14:57.173000 | 2008-09-19T08:29:08.040000 |
3,942 | 4,238 | What's the best way to find long-running code in a Windows Forms Application | I inherited a Windows Forms app written in VB.Net. Certain parts of the app run dreadfully slow. What's the easiest way to find which parts of the code are holding things up? I'm looking for a way to quickly find the slowest subroutines and tackle them first in an attempt to speed up the app. I know that there are several code profiler products available for purchase which will show how long each subroutine takes, but I was hoping to find a free solution. | I appreciate the desire to find free software. However, in this case, I would strongly recommend looking at all options, including commercial products. I tried to play with nProf (which is at version 0.1 I think) and didn't have much luck. Even so, performance profiling an application is a subtle business and is best approached using a powerful, flexible tool. Unless you are working for free, I strongly believe the time you will save using a professional product will far outweigh the cost of a license. And of course, if you are only wanting to profile a single application, each commercial package has a 15 or 30 day trial, more than enough time to pinpoint any issues in an existing application. And if you need profiling support for more than just the one-off project, you're better buying a full strength tool anyway. We use the ANTS profiler from RedGate and have been very happy with it. I have also used.NET Memory Profiler with excellent results. The cool thing about.NET Memory Profiler is that it can attach to and profile running production applications, which really saved our butts when we had a memory leak in production we couldn't reproduce in our test lab. The JetBrains folks have a profiler as well called dotTrace which I haven't tried, but I have to believe that if it comes from the JetBrains shop it is probably top notch as well. Anyway, my advice is this: try to fix your app within the free trial window of one or an aggregated combination of the three of them (minimum of 45 days free use) and if that isn't enough time, pick your favorite and spring for one of them. You won't be sorry. | What's the best way to find long-running code in a Windows Forms Application I inherited a Windows Forms app written in VB.Net. Certain parts of the app run dreadfully slow. What's the easiest way to find which parts of the code are holding things up? I'm looking for a way to quickly find the slowest subroutines and tackle them first in an attempt to speed up the app. I know that there are several code profiler products available for purchase which will show how long each subroutine takes, but I was hoping to find a free solution. | TITLE:
What's the best way to find long-running code in a Windows Forms Application
QUESTION:
I inherited a Windows Forms app written in VB.Net. Certain parts of the app run dreadfully slow. What's the easiest way to find which parts of the code are holding things up? I'm looking for a way to quickly find the slowest subroutines and tackle them first in an attempt to speed up the app. I know that there are several code profiler products available for purchase which will show how long each subroutine takes, but I was hoping to find a free solution.
ANSWER:
I appreciate the desire to find free software. However, in this case, I would strongly recommend looking at all options, including commercial products. I tried to play with nProf (which is at version 0.1 I think) and didn't have much luck. Even so, performance profiling an application is a subtle business and is best approached using a powerful, flexible tool. Unless you are working for free, I strongly believe the time you will save using a professional product will far outweigh the cost of a license. And of course, if you are only wanting to profile a single application, each commercial package has a 15- or 30-day trial, more than enough time to pinpoint any issues in an existing application. And if you need profiling support for more than just the one-off project, you're better off buying a full-strength tool anyway. We use the ANTS profiler from RedGate and have been very happy with it. I have also used .NET Memory Profiler with excellent results. The cool thing about .NET Memory Profiler is that it can attach to and profile running production applications, which really saved our butts when we had a memory leak in production we couldn't reproduce in our test lab. The JetBrains folks have a profiler as well, called dotTrace, which I haven't tried, but I have to believe that if it comes from the JetBrains shop it is probably top notch as well. Anyway, my advice is this: try to fix your app within the free trial window of one or an aggregated combination of the three of them (minimum of 45 days free use) and if that isn't enough time, pick your favorite and spring for one of them. You won't be sorry. | [
".net",
"vb.net"
] | 8 | 4 | 1,063 | 4 | 0 | 2008-08-06T20:26:21.923000 | 2008-08-07T01:01:16.177000 |
3,975 | 5,907 | How do I know which SQL Server 2005 index recommendations to implement, if any? | We're in the process of upgrading one of our SQL Server instances from 2000 to 2005. I installed the performance dashboard ( http://www.microsoft.com/downloads/details.aspx?FamilyId=1d3a4a0d-7e0c-4730-8204-e419218c1efc&displaylang=en ) for access to some high level reporting. One of the reports shows missing (recommended) indexes. I think it's based on some system view that is maintained by the query optimizer. My question is what is the best way to determine when to take an index recommendation. I know that it doesn't make sense to apply all of the optimizer's suggestions. I see a lot of advice that basically says to try the index and to keep it if performance improves and to drop it if performances degrades or stays the same. I wondering if there is a better way to make the decision and what best practices exist on this subject. | First thing to be aware of: When you upgrade from 2000 to 2005 (by using detach and attach) make sure that you: Set compability to 90 Rebuild the indexes Run update statistics with full scan If you don't do this you will get suboptimal plans. IF the table is mostly write you want as few indexes as possible IF the table is used for a lot of read queries you have to make sure that the WHERE clause is covered by indexes. | How do I know which SQL Server 2005 index recommendations to implement, if any? We're in the process of upgrading one of our SQL Server instances from 2000 to 2005. I installed the performance dashboard ( http://www.microsoft.com/downloads/details.aspx?FamilyId=1d3a4a0d-7e0c-4730-8204-e419218c1efc&displaylang=en ) for access to some high level reporting. One of the reports shows missing (recommended) indexes. I think it's based on some system view that is maintained by the query optimizer. My question is what is the best way to determine when to take an index recommendation. I know that it doesn't make sense to apply all of the optimizer's suggestions. I see a lot of advice that basically says to try the index and to keep it if performance improves and to drop it if performances degrades or stays the same. I wondering if there is a better way to make the decision and what best practices exist on this subject. | TITLE:
How do I know which SQL Server 2005 index recommendations to implement, if any?
QUESTION:
We're in the process of upgrading one of our SQL Server instances from 2000 to 2005. I installed the performance dashboard ( http://www.microsoft.com/downloads/details.aspx?FamilyId=1d3a4a0d-7e0c-4730-8204-e419218c1efc&displaylang=en ) for access to some high level reporting. One of the reports shows missing (recommended) indexes. I think it's based on some system view that is maintained by the query optimizer. My question is: what is the best way to determine when to take an index recommendation? I know that it doesn't make sense to apply all of the optimizer's suggestions. I see a lot of advice that basically says to try the index and to keep it if performance improves and to drop it if performance degrades or stays the same. I'm wondering if there is a better way to make the decision and what best practices exist on this subject.
ANSWER:
First thing to be aware of: when you upgrade from 2000 to 2005 (by using detach and attach), make sure that you: set compatibility to 90, rebuild the indexes, and run update statistics with full scan. If you don't do this you will get suboptimal plans. If the table is mostly writes, you want as few indexes as possible. If the table is used for a lot of read queries, you have to make sure that the WHERE clause is covered by indexes. | [
"sql-server",
"sql-server-2005"
] | 9 | 4 | 1,388 | 3 | 0 | 2008-08-06T20:59:58.190000 | 2008-08-08T13:32:30.550000 |
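The upgrade steps and the missing-index report discussed in the record above can both be expressed in T-SQL. This is a minimal sketch rather than the answer's own script: MyDatabase and dbo.MyTable are placeholder names, and the missing-index DMV results are only candidates that still need the judgment the answer calls for.

```sql
-- Hypothetical post-upgrade steps for a database moved from SQL Server 2000 to 2005.
EXEC sp_dbcmptlevel 'MyDatabase', 90;          -- set compatibility level to 90
ALTER INDEX ALL ON dbo.MyTable REBUILD;        -- rebuild the indexes
UPDATE STATISTICS dbo.MyTable WITH FULLSCAN;   -- update statistics with a full scan

-- The "missing index" suggestions come from DMVs like these; treat them as candidates only.
SELECT d.[statement] AS table_name,
       d.equality_columns,
       d.inequality_columns,
       d.included_columns,
       s.user_seeks,
       s.avg_user_impact
FROM sys.dm_db_missing_index_details AS d
JOIN sys.dm_db_missing_index_groups AS g ON g.index_handle = d.index_handle
JOIN sys.dm_db_missing_index_group_stats AS s ON s.group_handle = g.index_group_handle
ORDER BY s.user_seeks * s.avg_user_impact DESC;
```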
3,978 | 3,998 | Multi-Paradigm Languages | In a language such as (since I'm working in it now) PHP, which supports procedural and object-oriented paradigms. Is there a good rule of thumb for determining which paradigm best suits a new project? If not, how can you make the decision? | It all depends on the problem you're trying to solve. Obviously you can solve any problem in either style (procedural or OO), but you usually can figure out in the planning stages before you start writing code which style suits you better. Some people like to write up use cases and if they see a lot of the same nouns showing up over and over again (e.g., a person withdraws money from the bank), then they go the OO route and use the nouns as their objects. Conversely, if you don't see a lot of nouns and there's really more verbs going on, then procedural or functional may be the way to go. Steve Yegge has a great but long post as usual that touches on this from a different perspective that you may find helpful as well. | Multi-Paradigm Languages In a language such as (since I'm working in it now) PHP, which supports procedural and object-oriented paradigms. Is there a good rule of thumb for determining which paradigm best suits a new project? If not, how can you make the decision? | TITLE:
Multi-Paradigm Languages
QUESTION:
In a language such as PHP (since I'm working in it now), which supports both procedural and object-oriented paradigms, is there a good rule of thumb for determining which paradigm best suits a new project? If not, how can you make the decision?
ANSWER:
It all depends on the problem you're trying to solve. Obviously you can solve any problem in either style (procedural or OO), but you usually can figure out in the planning stages before you start writing code which style suits you better. Some people like to write up use cases and if they see a lot of the same nouns showing up over and over again (e.g., a person withdraws money from the bank), then they go the OO route and use the nouns as their objects. Conversely, if you don't see a lot of nouns and there's really more verbs going on, then procedural or functional may be the way to go. Steve Yegge has a great but long post as usual that touches on this from a different perspective that you may find helpful as well. | [
"php",
"oop",
"paradigms",
"procedural"
] | 22 | 11 | 2,133 | 2 | 0 | 2008-08-06T21:02:16.207000 | 2008-08-06T21:15:35.150000 |
3,984 | 418,605 | TestDriven.NET is not running my SetUp methods for MbUnit | I've created some MbUnit Test Fixtures that have SetUp methods marked with the SetUp attribute. These methods run before the tests just fine using the MbUnit GUI, the console runner, and the ReSharper MbUnit plugin. However, when I run the tests with TestDriven.NET it does not run the SetUp methods at all. Does anyone know if this is a bug with TestDriven.NET or if I have something setup wrong? | No longer an issue with recent versions of Gallio since v3.0.4. Just make sure to use the 64-bit installer. | TestDriven.NET is not running my SetUp methods for MbUnit I've created some MbUnit Test Fixtures that have SetUp methods marked with the SetUp attribute. These methods run before the tests just fine using the MbUnit GUI, the console runner, and the ReSharper MbUnit plugin. However, when I run the tests with TestDriven.NET it does not run the SetUp methods at all. Does anyone know if this is a bug with TestDriven.NET or if I have something setup wrong? | TITLE:
TestDriven.NET is not running my SetUp methods for MbUnit
QUESTION:
I've created some MbUnit Test Fixtures that have SetUp methods marked with the SetUp attribute. These methods run before the tests just fine using the MbUnit GUI, the console runner, and the ReSharper MbUnit plugin. However, when I run the tests with TestDriven.NET it does not run the SetUp methods at all. Does anyone know if this is a bug with TestDriven.NET or if I have something set up wrong?
ANSWER:
No longer an issue with recent versions of Gallio since v3.0.4. Just make sure to use the 64-bit installer. | [
".net",
"visual-studio",
"tdd",
"mbunit",
"testdriven.net"
] | 21 | 5 | 1,549 | 4 | 0 | 2008-08-06T21:06:32.797000 | 2009-01-06T23:25:47.280000 |
3,996 | 4,043 | How do I configure a Vista Ultimate (64bit) account so it can access a SMB share on OSX? | I have Windows File sharing enabled on an OS X 10.4 computer. It's accessible via \rudy\myshare for all the Windows users on the network, except for one guy running Vista Ultimate 64-bit edition. All the other users are running Vista or XP, all 32-bit. All the workgroup information is the same, all login with the same username/password. The Vista 64 guy can see the Mac on the network, but his login is rejected every time. Now, I imagine that Vista Ultimate is has something configured differently to the Business version and XP but I don't really know where to look. Any ideas? | Try changing the local security policy on that Vista box for "Local Policies\Security Options\Network Security: LAN manager authentication level" from “Send NTLMv2 response only” to “Send LM & NTLM - use NTLMv2 session security if negotiated”. | How do I configure a Vista Ultimate (64bit) account so it can access a SMB share on OSX? I have Windows File sharing enabled on an OS X 10.4 computer. It's accessible via \rudy\myshare for all the Windows users on the network, except for one guy running Vista Ultimate 64-bit edition. All the other users are running Vista or XP, all 32-bit. All the workgroup information is the same, all login with the same username/password. The Vista 64 guy can see the Mac on the network, but his login is rejected every time. Now, I imagine that Vista Ultimate is has something configured differently to the Business version and XP but I don't really know where to look. Any ideas? | TITLE:
How do I configure a Vista Ultimate (64bit) account so it can access a SMB share on OSX?
QUESTION:
I have Windows File sharing enabled on an OS X 10.4 computer. It's accessible via \\rudy\myshare for all the Windows users on the network, except for one guy running Vista Ultimate 64-bit edition. All the other users are running Vista or XP, all 32-bit. All the workgroup information is the same, all login with the same username/password. The Vista 64 guy can see the Mac on the network, but his login is rejected every time. Now, I imagine that Vista Ultimate has something configured differently to the Business version and XP, but I don't really know where to look. Any ideas?
ANSWER:
Try changing the local security policy on that Vista box for "Local Policies\Security Options\Network Security: LAN manager authentication level" from “Send NTLMv2 response only” to “Send LM & NTLM - use NTLMv2 session security if negotiated”. | [
"macos",
"windows-vista",
"smb"
] | 10 | 3 | 1,101 | 2 | 0 | 2008-08-06T21:14:57.507000 | 2008-08-06T21:50:49.837000 |
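For reference, the policy named in the answer above is commonly described as being backed by the LmCompatibilityLevel registry value, where a value of 1 corresponds to "Send LM & NTLM - use NTLMv2 session security if negotiated". This is an illustrative assumption added here, not part of the original answer; verify the value name on your own system before relying on it.

```bat
rem Hypothetical registry equivalent of the Local Security Policy change
rem (run from an elevated command prompt; back up the key first).
reg add "HKLM\SYSTEM\CurrentControlSet\Control\Lsa" /v LmCompatibilityLevel /t REG_DWORD /d 1 /f
```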
4,034 | 4,346 | Multiple languages in an ASP.NET MVC application? | What is the best way to support multiple languages for the interface in an ASP.NET MVC application? I've seen people use resource files for other applications. Is this still the best way? | If you're using the default view engines, then local resources work in the views. However, if you need to grab resource strings within a controller action, you can't get local resources, and have to use global resources. This makes sense when you think about it because local resources are local to an aspx page and in the controller, you haven't even selected your view. | Multiple languages in an ASP.NET MVC application? What is the best way to support multiple languages for the interface in an ASP.NET MVC application? I've seen people use resource files for other applications. Is this still the best way? | TITLE:
Multiple languages in an ASP.NET MVC application?
QUESTION:
What is the best way to support multiple languages for the interface in an ASP.NET MVC application? I've seen people use resource files for other applications. Is this still the best way?
ANSWER:
If you're using the default view engines, then local resources work in the views. However, if you need to grab resource strings within a controller action, you can't get local resources, and have to use global resources. This makes sense when you think about it because local resources are local to an aspx page and in the controller, you haven't even selected your view. | [
"asp.net-mvc",
"internationalization",
"multilingual"
] | 70 | 43 | 42,299 | 6 | 0 | 2008-08-06T21:43:33.923000 | 2008-08-07T03:04:04.320000 |
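To illustrate the split the answer above describes, here is a minimal hypothetical sketch: the controller reads a string from a global resource file, while views keep using local resources through the default view engine. The resource names ("Strings", "WelcomeMessage") are made up for the example and are not taken from the answer.

```csharp
// Hypothetical controller using a *global* resource, since local resources are not
// available before a view has been selected.
using System.Web.Mvc;

public class HomeController : Controller
{
    public ActionResult Index()
    {
        // Looks up App_GlobalResources/Strings.resx (or a culture-specific variant).
        var message = HttpContext.GetGlobalResourceObject("Strings", "WelcomeMessage") as string;
        ViewData["Message"] = message ?? "Welcome";
        return View();
    }
}
```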
4,046 | 4,902 | Example of a build.xml for an EAR that deploys in WebSphere 6 | I'm trying to convince my providers to use ANT instead of Rational Application Development so anyone can recompile, recheck, redeploy the solution anyplace, anytime, anyhow.:P I started a build.xml for a project that generates a JAR file but stopped there and I need real examples to compare notes. My good friends! I don't have anyone close to chat about this! This is my build.xml so far. (*) I edited my question based in the suggestion of to use pastebin.ca | My Environment: Fedora 8; WAS 6.1 (as installed with Rational Application Developer 7) The documentation is very poor in this area and there is a dearth of practical examples. Using the WebSphere Application Server (WAS) Ant tasks To run as described here, you need to run them from your server profile bin directory using the ws_ant.sh or ws_ant.bat commands. Script for listing installed apps. Example run from: /opt/IBM/SDP70/runtimes/base_v61/profiles/AppSrv01/bin Command:./ws_ant.sh -buildfile ~/IBM/rationalsdp7.0/workspace/mywebappDeploy/applist.xml A Deployment Script Build/Deploy an EAR to WebSphere Application Server 6.1 Notes: You can only run this once! You cannot install if the app name is in use - see other tasks like wsUninstallApp It probably won't start the app either You need to run this on the server and the script is quite fragile Alternatives I would probably use Java Management Extensions (JMX). You could write a file-upload servlet that accepts an EAR and uses the deployment MBeans to deploy the EAR on the server. You would just POST the file over HTTP. This would avoid any WAS API dependencies on your dev/build machine and could be independent of any one project. | Example of a build.xml for an EAR that deploys in WebSphere 6 I'm trying to convince my providers to use ANT instead of Rational Application Development so anyone can recompile, recheck, redeploy the solution anyplace, anytime, anyhow.:P I started a build.xml for a project that generates a JAR file but stopped there and I need real examples to compare notes. My good friends! I don't have anyone close to chat about this! This is my build.xml so far. (*) I edited my question based in the suggestion of to use pastebin.ca | TITLE:
Example of a build.xml for an EAR that deploys in WebSphere 6
QUESTION:
I'm trying to convince my providers to use ANT instead of Rational Application Development so anyone can recompile, recheck, redeploy the solution anyplace, anytime, anyhow.:P I started a build.xml for a project that generates a JAR file but stopped there and I need real examples to compare notes. My good friends! I don't have anyone close to chat about this! This is my build.xml so far. (*) I edited my question based in the suggestion of to use pastebin.ca
ANSWER:
My Environment: Fedora 8; WAS 6.1 (as installed with Rational Application Developer 7) The documentation is very poor in this area and there is a dearth of practical examples. Using the WebSphere Application Server (WAS) Ant tasks To run as described here, you need to run them from your server profile bin directory using the ws_ant.sh or ws_ant.bat commands. Script for listing installed apps. Example run from: /opt/IBM/SDP70/runtimes/base_v61/profiles/AppSrv01/bin Command:./ws_ant.sh -buildfile ~/IBM/rationalsdp7.0/workspace/mywebappDeploy/applist.xml A Deployment Script Build/Deploy an EAR to WebSphere Application Server 6.1 Notes: You can only run this once! You cannot install if the app name is in use - see other tasks like wsUninstallApp It probably won't start the app either You need to run this on the server and the script is quite fragile Alternatives I would probably use Java Management Extensions (JMX). You could write a file-upload servlet that accepts an EAR and uses the deployment MBeans to deploy the EAR on the server. You would just POST the file over HTTP. This would avoid any WAS API dependencies on your dev/build machine and could be independent of any one project. | [
"jakarta-ee",
"deployment",
"ant",
"websphere",
"ear"
] | 22 | 14 | 24,714 | 4 | 0 | 2008-08-06T21:55:24.257000 | 2008-08-07T16:16:00.010000 |
4,051 | 4,192 | Passing multidimensional arrays as function arguments in C | In C can I pass a multidimensional array to a function as a single argument when I don't know what the dimensions of the array are going to be? Besides, my multidimensional array may contain types other than strings. | You can do this with any data type. Simply make it a pointer-to-pointer: typedef struct { int myint; char* mystring; } data;
data** array; But don't forget you still have to malloc the variable, and it does get a bit complex: //initialize int x,y,w,h; w = 10; //width of array h = 20; //height of array
//malloc the 'y' dimension array = malloc(sizeof(data*) * h);
//iterate over 'y' dimension for(y=0;y<h;y++){ //malloc the 'x' dimension for this row and initialize each element array[y] = malloc(sizeof(data) * w); for(x=0;x<w;x++){ array[y][x].myint = 6; } } The code to deallocate the structure looks similar - don't forget to call free() on everything you malloced! (Also, in robust applications you should check the return of malloc().) Now let's say you want to pass this to a function. You can still use the double pointer, because you probably want to do manipulations on the data structure, not the pointer to pointers of data structures: int whatsMyInt(data** arrayPtr, int x, int y){ return arrayPtr[y][x].myint; } Call this function with: printf("My int is %d.\n", whatsMyInt(array, 2, 4)); Output: My int is 6. | Passing multidimensional arrays as function arguments in C In C can I pass a multidimensional array to a function as a single argument when I don't know what the dimensions of the array are going to be? Besides, my multidimensional array may contain types other than strings. | TITLE:
Passing multidimensional arrays as function arguments in C
QUESTION:
In C can I pass a multidimensional array to a function as a single argument when I don't know what the dimensions of the array are going to be? Besides, my multidimensional array may contain types other than strings.
ANSWER:
You can do this with any data type. Simply make it a pointer-to-pointer: typedef struct { int myint; char* mystring; } data;
data** array; But don't forget you still have to malloc the variable, and it does get a bit complex: //initialize int x,y,w,h; w = 10; //width of array h = 20; //height of array
//malloc the 'y' dimension array = malloc(sizeof(data*) * h);
//iterate over 'y' dimension for(y=0;y<h;y++){ //malloc the 'x' dimension for this row and initialize each element array[y] = malloc(sizeof(data) * w); for(x=0;x<w;x++){ array[y][x].myint = 6; } } The code to deallocate the structure looks similar - don't forget to call free() on everything you malloced! (Also, in robust applications you should check the return of malloc().) Now let's say you want to pass this to a function. You can still use the double pointer, because you probably want to do manipulations on the data structure, not the pointer to pointers of data structures: int whatsMyInt(data** arrayPtr, int x, int y){ return arrayPtr[y][x].myint; } Call this function with: printf("My int is %d.\n", whatsMyInt(array, 2, 4)); Output: My int is 6. | [
"c",
"function",
"multidimensional-array"
] | 55 | 23 | 82,524 | 5 | 0 | 2008-08-06T22:01:25.463000 | 2008-08-07T00:34:21.097000 |
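The answer above says the deallocation code "looks similar" but does not show it. This is a minimal sketch of what that cleanup could look like, assuming the same array, w, and h variables and that nothing else was allocated per element; if each element owns heap memory (e.g. mystring), free that first inside the loop.

```c
/* Free the structure built above: release each row, then the row-pointer array. */
for (y = 0; y < h; y++) {
    /* free per-element allocations (such as array[y][x].mystring) here if you made any */
    free(array[y]);
}
free(array);
array = NULL; /* avoid reusing a dangling pointer */
```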
4,052 | 4,139 | How to enable Full-text Indexing in SQL Server 2005 Express? | I am trying to enable Full-text indexing in SQL Server 2005 Express. I am running this on my laptop with Vista Ultimate. I understand that the standard version of SQL Server Express does not have full-text indexing. I have already downloaded and installed "Microsoft SQL Server 2005 Express Edition with Advanced Services Service Pack 2" ( download ). I have also ensured that both the "SQL Server (instance)" and "SQL Server FullText Search (instance)" services are running on the same account which is "Network Service". I have also selected the option to "Use full-text indexing" in the Database Properties > Files area. I can run the sql query "SELECT fulltextserviceproperty('IsFulltextInstalled');" and return 1. The problem I am having is that when I have my table open in design view and select "Manage FullText Index"; the full-text index window displays the message... "Creation of the full-text index is not available. Check that you have the correct permissions or that full-text catalogs are defined." Any ideas on what to check or where to go next? | sp_fulltext_database 'enable'
CREATE FULLTEXT CATALOG [myFullText] WITH ACCENT_SENSITIVITY = ON
CREATE FULLTEXT INDEX ON [dbo].[tblName] KEY INDEX [PK_something] ON [myFullText] WITH CHANGE_TRACKING AUTO ALTER FULLTEXT INDEX ON [dbo].[otherTable] ADD ([Text]) ALTER FULLTEXT INDEX ON [dbo].[teyOtherTable] ENABLE | How to enable Full-text Indexing in SQL Server 2005 Express? I am trying to enable Full-text indexing in SQL Server 2005 Express. I am running this on my laptop with Vista Ultimate. I understand that the standard version of SQL Server Express does not have full-text indexing. I have already downloaded and installed "Microsoft SQL Server 2005 Express Edition with Advanced Services Service Pack 2" ( download ). I have also ensured that both the "SQL Server (instance)" and "SQL Server FullText Search (instance)" services are running on the same account which is "Network Service". I have also selected the option to "Use full-text indexing" in the Database Properties > Files area. I can run the sql query "SELECT fulltextserviceproperty('IsFulltextInstalled');" and return 1. The problem I am having is that when I have my table open in design view and select "Manage FullText Index"; the full-text index window displays the message... "Creation of the full-text index is not available. Check that you have the correct permissions or that full-text catalogs are defined." Any ideas on what to check or where to go next? | TITLE:
How to enable Full-text Indexing in SQL Server 2005 Express?
QUESTION:
I am trying to enable Full-text indexing in SQL Server 2005 Express. I am running this on my laptop with Vista Ultimate. I understand that the standard version of SQL Server Express does not have full-text indexing. I have already downloaded and installed "Microsoft SQL Server 2005 Express Edition with Advanced Services Service Pack 2" ( download ). I have also ensured that both the "SQL Server (instance)" and "SQL Server FullText Search (instance)" services are running on the same account which is "Network Service". I have also selected the option to "Use full-text indexing" in the Database Properties > Files area. I can run the sql query "SELECT fulltextserviceproperty('IsFulltextInstalled');" and return 1. The problem I am having is that when I have my table open in design view and select "Manage FullText Index"; the full-text index window displays the message... "Creation of the full-text index is not available. Check that you have the correct permissions or that full-text catalogs are defined." Any ideas on what to check or where to go next?
ANSWER:
sp_fulltext_database 'enable'
CREATE FULLTEXT CATALOG [myFullText] WITH ACCENT_SENSITIVITY = ON
CREATE FULLTEXT INDEX ON [dbo].[tblName] KEY INDEX [PK_something] ON [myFullText] WITH CHANGE_TRACKING AUTO ALTER FULLTEXT INDEX ON [dbo].[otherTable] ADD ([Text]) ALTER FULLTEXT INDEX ON [dbo].[teyOtherTable] ENABLE | [
"sql-server",
"sql-server-2005",
"full-text-search"
] | 18 | 14 | 16,716 | 3 | 0 | 2008-08-06T22:02:33.993000 | 2008-08-06T23:46:30.210000 |
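Once the catalog and index created in the answer above are populated, full-text predicates become available. A minimal hypothetical usage query follows; the table and column names simply mirror the placeholders already used in the answer.

```sql
-- Assumes the [Text] column of dbo.tblName is covered by the full-text index created above.
SELECT *
FROM dbo.tblName
WHERE CONTAINS([Text], 'searchword');
```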
4,062 | 4,222 | What methods of caching, other than to file or database, are available? | Currently I know of only two ways to cache data (I use PHP but I assume that the same will apply to most languages). Save the cache to a file Save the cache to a large DB field Are there any other (perhaps better) ways of caching or is it really just this simple? | Maybe you want to explicit more precisely what you want to cache. You have all this opportunities to cache: Accessing the Data Base where you cache the data first correctly tuning your RDBMS, then using a layer to delegate the decision to detect multiple queries for the same data (with AdoDB for example.) Extracting calculations from loops in the code so you don't compute the same value multiple times. Here your third way: storing results in the session for the user. Precompiling the PHP code with an extension like APC Cache. This way you don't have to compile the same PHP code for every request. The page sent to the user making sure you're setting the right META tags (do a good thing for the world and don't use ETL at least absolutly necessary); or maybe making dynamic pages completely static (having a batch process that generates.html pages); or by using a proxy cache like Squid. Prefetching and by this I refer all those opportunities you have to improve the user experience just by doing things while the user don't look your way. For example, preloading IMG tags in the HTML file, tunning the RDBMS for prefectching, precomputing results storing complex computations in the database, etc. From my experience, I'd bet you that your code can be improved a lot before we start to talk about caching things. Consider, for example, how well structured is the navigation of your site and how well you control the user experience. Then check your code with a tool like XDebug. Verify also how well are you making your SQL queries and how well are you indexing your tables. Then check your code again to look for opportunities to apply the rule "read many times but write just once" Use a simple tool like YSlow to hint other simple things to improve. Check your code again looking for opportunities to put logic in the browser (via JavaScript) | What methods of caching, other than to file or database, are available? Currently I know of only two ways to cache data (I use PHP but I assume that the same will apply to most languages). Save the cache to a file Save the cache to a large DB field Are there any other (perhaps better) ways of caching or is it really just this simple? | TITLE:
What methods of caching, other than to file or database, are available?
QUESTION:
Currently I know of only two ways to cache data (I use PHP but I assume that the same will apply to most languages). Save the cache to a file Save the cache to a large DB field Are there any other (perhaps better) ways of caching or is it really just this simple?
ANSWER:
Maybe you want to state more precisely what you want to cache. You have all these opportunities to cache: Accessing the database, where you cache the data by first correctly tuning your RDBMS, then using a layer to delegate the decision to detect multiple queries for the same data (with AdoDB, for example). Extracting calculations from loops in the code so you don't compute the same value multiple times. Here is your third way: storing results in the session for the user. Precompiling the PHP code with an extension like APC Cache, so you don't have to compile the same PHP code for every request. The page sent to the user: making sure you're setting the right META tags (do a good thing for the world and don't use ETL unless absolutely necessary); or maybe making dynamic pages completely static (having a batch process that generates .html pages); or using a proxy cache like Squid. Prefetching, by which I mean all those opportunities you have to improve the user experience just by doing things while the user isn't looking. For example, preloading IMG tags in the HTML file, tuning the RDBMS for prefetching, precomputing results, storing complex computations in the database, etc. From my experience, I'd bet that your code can be improved a lot before we start to talk about caching things. Consider, for example, how well structured the navigation of your site is and how well you control the user experience. Then check your code with a tool like XDebug. Also verify how well you are writing your SQL queries and how well you are indexing your tables. Then check your code again to look for opportunities to apply the rule "read many times but write just once". Use a simple tool like YSlow to hint at other simple things to improve. Check your code again looking for opportunities to put logic in the browser (via JavaScript). | [
"language-agnostic",
"caching"
] | 13 | 2 | 717 | 4 | 0 | 2008-08-06T22:21:55.373000 | 2008-08-07T00:46:58.467000 |
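As a concrete illustration of the "store results in the session" option mentioned in the answer above, here is a minimal PHP sketch; compute_report_data() and the key names are hypothetical placeholders for whatever expensive work your application actually does.

```php
<?php
// Cache an expensive result in the user's session so later requests reuse it.
session_start();

if (!isset($_SESSION['report_data'])) {
    // Computed once per session; compute_report_data() is an assumed expensive function.
    $_SESSION['report_data'] = compute_report_data($_SESSION['user_id']);
}

$data = $_SESSION['report_data'];
```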
4,072 | 13,738 | SVN merge merged extra stuff | I just did a merge using something like: svn merge -r 67212:67213 https://my.svn.repository/trunk. I only had 2 files, one of which is a simple ChangeLog. Rather than just merging my ChangeLog changes, it actually pulled mine plus some previous ones that were not in the destination ChangeLog. I noticed there was a conflict when I executed --dry-run, so I updated ChangeLog, and there was still a conflict (and I saw the conflict when I did the actual merge). I then later diffed on the file I was merging from: svn diff -r 67212:67213 ChangeLog And I see just the changes I had made, so I know that extra changes didn't get in there somehow. This makes me worried that merge is not actually just taking what I changed, which is what I would have expected. Can anybody explain what happened? UPDATE: In response to NilObject: So, I have 2 files changed, only ChangeLog is relevant, the other merged fine. When I go to my regular trunk checkout, I do the diff command above and see: Index: ChangeLog =================================================================== --- ChangeLog (revision 67212) +++ ChangeLog (revision 67213) @@ -1,3 +1,7 @@ +2008-08-06 Mike Stone + + * changed_file: Details. + 2008-08-06 Someone Else * theirChanges: Details. After my merge of the previous changes, the diff of ChangeLog looks like this: Index: ChangeLog =================================================================== --- ChangeLog (revision 67215) +++ ChangeLog (working copy) @@ -1,3 +1,14 @@ +<<<<<<<.working +======= +2008-08-06 Mike Stone + + * changed_file: Details. + +2008-08-06 Someone Else + + * theirChanges: Details. + +>>>>>>>.merge-right.r67213 2008-08-05 Someone Else2 * olderChange: Details. Note that the entry that was incorrectly pulled in was not in the file I am merging it to, but yet it was not one of my changes and shouldn't have been merged anyways. It was easy to fix (remove the extra lines that weren't part of my changes), but it still makes me worry about merging automatically in SVN. | This only happens with conflicts - basically svn tried to merge the change in, but (roughly speaking) saw the change as: Add 2008-08-06 Mike Stone * changed_file: Details. before 2008-08-06 Someone Else And it couldn't find the Someone Else line while doing the merge, so chucked that bit in for context when putting in the conflict. If it was a non-conflicting merge only the changes you expected would have been applied. | SVN merge merged extra stuff I just did a merge using something like: svn merge -r 67212:67213 https://my.svn.repository/trunk. I only had 2 files, one of which is a simple ChangeLog. Rather than just merging my ChangeLog changes, it actually pulled mine plus some previous ones that were not in the destination ChangeLog. I noticed there was a conflict when I executed --dry-run, so I updated ChangeLog, and there was still a conflict (and I saw the conflict when I did the actual merge). I then later diffed on the file I was merging from: svn diff -r 67212:67213 ChangeLog And I see just the changes I had made, so I know that extra changes didn't get in there somehow. This makes me worried that merge is not actually just taking what I changed, which is what I would have expected. Can anybody explain what happened? UPDATE: In response to NilObject: So, I have 2 files changed, only ChangeLog is relevant, the other merged fine. 
When I go to my regular trunk checkout, I do the diff command above and see: Index: ChangeLog =================================================================== --- ChangeLog (revision 67212) +++ ChangeLog (revision 67213) @@ -1,3 +1,7 @@ +2008-08-06 Mike Stone + + * changed_file: Details. + 2008-08-06 Someone Else * theirChanges: Details. After my merge of the previous changes, the diff of ChangeLog looks like this: Index: ChangeLog =================================================================== --- ChangeLog (revision 67215) +++ ChangeLog (working copy) @@ -1,3 +1,14 @@ +<<<<<<<.working +======= +2008-08-06 Mike Stone + + * changed_file: Details. + +2008-08-06 Someone Else + + * theirChanges: Details. + +>>>>>>>.merge-right.r67213 2008-08-05 Someone Else2 * olderChange: Details. Note that the entry that was incorrectly pulled in was not in the file I am merging it to, but yet it was not one of my changes and shouldn't have been merged anyways. It was easy to fix (remove the extra lines that weren't part of my changes), but it still makes me worry about merging automatically in SVN. | TITLE:
SVN merge merged extra stuff
QUESTION:
I just did a merge using something like: svn merge -r 67212:67213 https://my.svn.repository/trunk. I only had 2 files, one of which is a simple ChangeLog. Rather than just merging my ChangeLog changes, it actually pulled mine plus some previous ones that were not in the destination ChangeLog. I noticed there was a conflict when I executed --dry-run, so I updated ChangeLog, and there was still a conflict (and I saw the conflict when I did the actual merge). I then later diffed on the file I was merging from: svn diff -r 67212:67213 ChangeLog And I see just the changes I had made, so I know that extra changes didn't get in there somehow. This makes me worried that merge is not actually just taking what I changed, which is what I would have expected. Can anybody explain what happened? UPDATE: In response to NilObject: So, I have 2 files changed, only ChangeLog is relevant, the other merged fine. When I go to my regular trunk checkout, I do the diff command above and see: Index: ChangeLog =================================================================== --- ChangeLog (revision 67212) +++ ChangeLog (revision 67213) @@ -1,3 +1,7 @@ +2008-08-06 Mike Stone + + * changed_file: Details. + 2008-08-06 Someone Else * theirChanges: Details. After my merge of the previous changes, the diff of ChangeLog looks like this: Index: ChangeLog =================================================================== --- ChangeLog (revision 67215) +++ ChangeLog (working copy) @@ -1,3 +1,14 @@ +<<<<<<<.working +======= +2008-08-06 Mike Stone + + * changed_file: Details. + +2008-08-06 Someone Else + + * theirChanges: Details. + +>>>>>>>.merge-right.r67213 2008-08-05 Someone Else2 * olderChange: Details. Note that the entry that was incorrectly pulled in was not in the file I am merging it to, but yet it was not one of my changes and shouldn't have been merged anyways. It was easy to fix (remove the extra lines that weren't part of my changes), but it still makes me worry about merging automatically in SVN.
ANSWER:
This only happens with conflicts - basically svn tried to merge the change in, but (roughly speaking) saw the change as: Add 2008-08-06 Mike Stone * changed_file: Details. before 2008-08-06 Someone Else And it couldn't find the Someone Else line while doing the merge, so chucked that bit in for context when putting in the conflict. If it was a non-conflicting merge only the changes you expected would have been applied. | [
"svn",
"merge"
] | 13 | 2 | 1,202 | 2 | 0 | 2008-08-06T22:30:26.387000 | 2008-08-17T17:17:46.193000 |
4,080 | 79,845 | What code analysis tools do you use for your Java projects? | What code analysis tools do you use on your Java projects? I am interested in all kinds static code analysis tools (FindBugs, PMD, and any others) code coverage tools (Cobertura, Emma, and any others) any other instrumentation-based tools anything else, if I'm missing something If applicable, also state what build tools you use and how well these tools integrate with both your IDEs and build tools. If a tool is only available a specific way (as an IDE plugin, or, say, a build tool plugin) that information is also worth noting. | For static analysis tools I often use CPD, PMD, FindBugs, and Checkstyle. CPD is the PMD "Copy/Paste Detector" tool. I was using PMD for a little while before I noticed the "Finding Duplicated Code" link on the PMD web page. I'd like to point out that these tools can sometimes be extended beyond their "out-of-the-box" set of rules. And not just because they're open source so that you can rewrite them. Some of these tools come with applications or "hooks" that allow them to be extended. For example, PMD comes with the "designer" tool that allows you to create new rules. Also, Checkstyle has the DescendantToken check that has properties that allow for substantial customization. I integrate these tools with an Ant-based build. You can follow the link to see my commented configuration. In addition to the simple integration into the build, I find it helpful to configure the tools to be somewhat "integrated" in a couple of other ways. Namely, report generation and warning suppression uniformity. I'd like to add these aspects to this discussion (which should probably have the "static-analysis" tag also): how are folks configuring these tools to create a "unified" solution? (I've asked this question separately here ) First, for warning reports, I transform the output so that each warning has the simple format: /absolute-path/filename:line-number:column-number: warning(tool-name): message This is often called the "Emacs format," but even if you aren't using Emacs, it's a reasonable format for homogenizing reports. For example: /project/src/com/example/Foo.java:425:9: warning(Checkstyle):Missing a Javadoc comment. My warning format transformations are done by my Ant script with Ant filterchains. The second "integration" that I do is for warning suppression. By default, each tool supports comments or an annotation (or both) that you can place in your code to silence a warning that you want to ignore. But these various warning suppression requests do not have a consistent look which seems somewhat silly. When you're suppressing a warning, you're suppressing a warning, so why not always write " SuppressWarning?" For example, PMD's default configuration suppresses warning generation on lines of code with the string " NOPMD " in a comment. Also, PMD supports Java's @SuppressWarnings annotation. I configure PMD to use comments containing " SuppressWarning(PMD. " instead of NOPMD so that PMD suppressions look alike. I fill in the particular rule that is violated when using the comment style suppression: // SuppressWarnings(PMD.PreserveStackTrace) justification: (false positive) exceptions are chained Only the " SuppressWarnings(PMD. 
" part is significant for a comment, but it is consistent with PMD's support for the @SuppressWarning annotation which does recognize individual rule violations by name: @SuppressWarnings("PMD.CompareObjectsWithEquals") // justification: identity comparision intended Similarly, Checkstyle suppresses warning generation between pairs of comments (no annotation support is provided). By default, comments to turn Checkstyle off and on contain the strings CHECKSTYLE:OFF and CHECKSTYLE:ON, respectively. Changing this configuration (with Checkstyle's "SuppressionCommentFilter") to use the strings " BEGIN SuppressWarnings(CheckStyle. " and " END SuppressWarnings(CheckStyle. " makes the controls look more like PMD: // BEGIN SuppressWarnings(Checkstyle.HiddenField) justification: "Effective Java," 2nd ed., Bloch, Item 2 // END SuppressWarnings(Checkstyle.HiddenField) With Checkstyle comments, the particular check violation ( HiddenField ) is significant because each check has its own " BEGIN/END " comment pair. FindBugs also supports warning generation suppression with a @SuppressWarnings annotation, so no further configuration is required to achieve some level of uniformity with other tools. Unfortunately, Findbugs has to support a custom @SuppressWarnings annotation because the built-in Java @SuppressWarnings annotation has a SOURCE retention policy which is not strong enough to retain the annotation in the class file where FindBugs needs it. I fully qualify FindBugs warnings suppressions to avoid clashing with Java's @SuppressWarnings annotation: @edu.umd.cs.findbugs.annotations.SuppressWarnings("UWF_FIELD_NOT_INITIALIZED_IN_CONSTRUCTOR") These techniques makes things look reasonably consistent across tools. Note that having each warning suppression contain the string " SuppressWarnings " makes it easy to run a simple search to find all instances for all tools over an entire code base. | What code analysis tools do you use for your Java projects? What code analysis tools do you use on your Java projects? I am interested in all kinds static code analysis tools (FindBugs, PMD, and any others) code coverage tools (Cobertura, Emma, and any others) any other instrumentation-based tools anything else, if I'm missing something If applicable, also state what build tools you use and how well these tools integrate with both your IDEs and build tools. If a tool is only available a specific way (as an IDE plugin, or, say, a build tool plugin) that information is also worth noting. | TITLE:
What code analysis tools do you use for your Java projects?
QUESTION:
What code analysis tools do you use on your Java projects? I am interested in all kinds: static code analysis tools (FindBugs, PMD, and any others), code coverage tools (Cobertura, Emma, and any others), any other instrumentation-based tools, and anything else, if I'm missing something. If applicable, also state what build tools you use and how well these tools integrate with both your IDEs and build tools. If a tool is only available in a specific way (as an IDE plugin, or, say, a build tool plugin), that information is also worth noting.
ANSWER:
For static analysis tools I often use CPD, PMD, FindBugs, and Checkstyle. CPD is the PMD "Copy/Paste Detector" tool. I was using PMD for a little while before I noticed the "Finding Duplicated Code" link on the PMD web page. I'd like to point out that these tools can sometimes be extended beyond their "out-of-the-box" set of rules. And not just because they're open source so that you can rewrite them. Some of these tools come with applications or "hooks" that allow them to be extended. For example, PMD comes with the "designer" tool that allows you to create new rules. Also, Checkstyle has the DescendantToken check that has properties that allow for substantial customization. I integrate these tools with an Ant-based build. You can follow the link to see my commented configuration. In addition to the simple integration into the build, I find it helpful to configure the tools to be somewhat "integrated" in a couple of other ways. Namely, report generation and warning suppression uniformity. I'd like to add these aspects to this discussion (which should probably have the "static-analysis" tag also): how are folks configuring these tools to create a "unified" solution? (I've asked this question separately here ) First, for warning reports, I transform the output so that each warning has the simple format: /absolute-path/filename:line-number:column-number: warning(tool-name): message This is often called the "Emacs format," but even if you aren't using Emacs, it's a reasonable format for homogenizing reports. For example: /project/src/com/example/Foo.java:425:9: warning(Checkstyle):Missing a Javadoc comment. My warning format transformations are done by my Ant script with Ant filterchains. The second "integration" that I do is for warning suppression. By default, each tool supports comments or an annotation (or both) that you can place in your code to silence a warning that you want to ignore. But these various warning suppression requests do not have a consistent look which seems somewhat silly. When you're suppressing a warning, you're suppressing a warning, so why not always write " SuppressWarning?" For example, PMD's default configuration suppresses warning generation on lines of code with the string " NOPMD " in a comment. Also, PMD supports Java's @SuppressWarnings annotation. I configure PMD to use comments containing " SuppressWarning(PMD. " instead of NOPMD so that PMD suppressions look alike. I fill in the particular rule that is violated when using the comment style suppression: // SuppressWarnings(PMD.PreserveStackTrace) justification: (false positive) exceptions are chained Only the " SuppressWarnings(PMD. " part is significant for a comment, but it is consistent with PMD's support for the @SuppressWarning annotation which does recognize individual rule violations by name: @SuppressWarnings("PMD.CompareObjectsWithEquals") // justification: identity comparision intended Similarly, Checkstyle suppresses warning generation between pairs of comments (no annotation support is provided). By default, comments to turn Checkstyle off and on contain the strings CHECKSTYLE:OFF and CHECKSTYLE:ON, respectively. Changing this configuration (with Checkstyle's "SuppressionCommentFilter") to use the strings " BEGIN SuppressWarnings(CheckStyle. " and " END SuppressWarnings(CheckStyle. 
" makes the controls look more like PMD: // BEGIN SuppressWarnings(Checkstyle.HiddenField) justification: "Effective Java," 2nd ed., Bloch, Item 2 // END SuppressWarnings(Checkstyle.HiddenField) With Checkstyle comments, the particular check violation ( HiddenField ) is significant because each check has its own " BEGIN/END " comment pair. FindBugs also supports warning generation suppression with a @SuppressWarnings annotation, so no further configuration is required to achieve some level of uniformity with other tools. Unfortunately, Findbugs has to support a custom @SuppressWarnings annotation because the built-in Java @SuppressWarnings annotation has a SOURCE retention policy which is not strong enough to retain the annotation in the class file where FindBugs needs it. I fully qualify FindBugs warnings suppressions to avoid clashing with Java's @SuppressWarnings annotation: @edu.umd.cs.findbugs.annotations.SuppressWarnings("UWF_FIELD_NOT_INITIALIZED_IN_CONSTRUCTOR") These techniques makes things look reasonably consistent across tools. Note that having each warning suppression contain the string " SuppressWarnings " makes it easy to run a simple search to find all instances for all tools over an entire code base. | [
"java",
"code-coverage",
"static-analysis"
] | 118 | 72 | 42,369 | 12 | 0 | 2008-08-06T22:45:27.543000 | 2008-09-17T04:02:23.067000 |
4,110 | 4,126 | What program can I use to generate diagrams of SQL view/table structure? | I've been tasked with redesigning part of a ms-sql database structure which currently involves a lot of views, some of which contain joins to other views. Anyway, I wonder if anyone here could recommend a utility to automatically generate diagrams to help me visualise the whole structure. What's the best program you've used for such problems? | I am a big fan of Embarcadero's ER/Studio. It is very powerful and produces excellent on-screen as well as printed results. They have a free trial as well, so you should be able to get in and give it a shot without too much strife. Good luck! | What program can I use to generate diagrams of SQL view/table structure? I've been tasked with redesigning part of a ms-sql database structure which currently involves a lot of views, some of which contain joins to other views. Anyway, I wonder if anyone here could recommend a utility to automatically generate diagrams to help me visualise the whole structure. What's the best program you've used for such problems? | TITLE:
What program can I use to generate diagrams of SQL view/table structure?
QUESTION:
I've been tasked with redesigning part of a ms-sql database structure which currently involves a lot of views, some of which contain joins to other views. Anyway, I wonder if anyone here could recommend a utility to automatically generate diagrams to help me visualise the whole structure. What's the best program you've used for such problems?
ANSWER:
I am a big fan of Embarcadero's ER/Studio. It is very powerful and produces excellent on-screen as well as printed results. They have a free trial as well, so you should be able to get in and give it a shot without too much strife. Good luck! | [
"sql",
"sql-server",
"database",
"diagram"
] | 15 | 4 | 4,865 | 5 | 0 | 2008-08-06T23:19:50.500000 | 2008-08-06T23:36:58.547000 |
4,138 | 4,140 | SVN Client Ignore Pattern for VB.NET Solutions | What is the best SVN Ignore Pattern should TortoiseSVN have for a VB.NET solution? | this is what I use for C# w/resharper, should work just the same with vb.net: build deploy */bin */bin/* obj *.dll *.pdb *.user *.suo _ReSharper* *.resharper* bin | SVN Client Ignore Pattern for VB.NET Solutions What is the best SVN Ignore Pattern should TortoiseSVN have for a VB.NET solution? | TITLE:
SVN Client Ignore Pattern for VB.NET Solutions
QUESTION:
What is the best SVN ignore pattern that TortoiseSVN should have for a VB.NET solution?
ANSWER:
this is what I use for C# w/resharper, should work just the same with vb.net: build deploy */bin */bin/* obj *.dll *.pdb *.user *.suo _ReSharper* *.resharper* bin | [
"vb.net",
"svn",
"tortoisesvn"
] | 17 | 17 | 1,974 | 2 | 0 | 2008-08-06T23:46:24.673000 | 2008-08-06T23:48:46.353000 |
4,149 | 154,588 | How do I use Java to read from a file that is actively being written to? | I have an application that writes information to file. This information is used post-execution to determine pass/failure/correctness of the application. I'd like to be able to read the file as it is being written so that I can do these pass/failure/correctness checks in real time. I assume it is possible to do this, but what are the gotcha's involved when using Java? If the reading catches up to the writing, will it just wait for more writes up until the file is closed, or will the read throw an exception at this point? If the latter, what do I do then? My intuition is currently pushing me towards BufferedStreams. Is this the way to go? | Could not get the example to work using FileChannel.read(ByteBuffer) because it isn't a blocking read. Did however get the code below to work: boolean running = true; BufferedInputStream reader = new BufferedInputStream(new FileInputStream( "out.txt" ) );
public void run() { while( running ) { if( reader.available() > 0 ) { System.out.print( (char)reader.read() ); } else { try { sleep( 500 ); } catch( InterruptedException ex ) { running = false; } } } } Of course the same thing would work as a timer instead of a thread, but I leave that up to the programmer. I'm still looking for a better way, but this works for me for now. Oh, and I'll caveat this with: I'm using 1.4.2. Yes I know I'm in the stone ages still. | How do I use Java to read from a file that is actively being written to? I have an application that writes information to file. This information is used post-execution to determine pass/failure/correctness of the application. I'd like to be able to read the file as it is being written so that I can do these pass/failure/correctness checks in real time. I assume it is possible to do this, but what are the gotcha's involved when using Java? If the reading catches up to the writing, will it just wait for more writes up until the file is closed, or will the read throw an exception at this point? If the latter, what do I do then? My intuition is currently pushing me towards BufferedStreams. Is this the way to go? | TITLE:
How do I use Java to read from a file that is actively being written to?
QUESTION:
I have an application that writes information to a file. This information is used post-execution to determine pass/failure/correctness of the application. I'd like to be able to read the file as it is being written so that I can do these pass/failure/correctness checks in real time. I assume it is possible to do this, but what are the gotchas involved when using Java? If the reading catches up to the writing, will it just wait for more writes up until the file is closed, or will the read throw an exception at this point? If the latter, what do I do then? My intuition is currently pushing me towards BufferedStreams. Is this the way to go?
ANSWER:
Could not get the example to work using FileChannel.read(ByteBuffer) because it isn't a blocking read. Did however get the code below to work: boolean running = true; BufferedInputStream reader = new BufferedInputStream(new FileInputStream( "out.txt" ) );
public void run() { while( running ) { if( reader.available() > 0 ) { System.out.print( (char)reader.read() ); } else { try { sleep( 500 ); } catch( InterruptedException ex ) { running = false; } } } } Of course the same thing would work as a timer instead of a thread, but I leave that up to the programmer. I'm still looking for a better way, but this works for me for now. Oh, and I'll caveat this with: I'm using 1.4.2. Yes I know I'm in the stone ages still. | [
"java",
"file",
"file-io"
] | 109 | 46 | 61,485 | 9 | 0 | 2008-08-06T23:57:10.173000 | 2008-09-30T19:32:24.637000 |
4,157 | 4,220 | ConfigurationManager.AppSettings Performance Concerns | I plan to be storing all my config settings in my application's app.config section (using the ConfigurationManager.AppSettings class). As the user changes settings using the app's UI (clicking checkboxes, choosing radio buttons, etc.), I plan to be writing those changes out to the AppSettings. At the same time, while the program is running I plan to be accessing the AppSettings constantly from a process that will be constantly processing data. Changes to settings via the UI need to affect the data processing in real-time, which is why the process will be accessing the AppSettings constantly. Is this a good idea with regard to performance? Using AppSettings is supposed to be "the right way" to store and access configuration settings when writing.Net apps, but I worry that this method wasn't intended for a constant load (at least in terms of settings being constantly read). If anyone has experience with this, I would greatly appreciate the input. Update: I should probably clarify a few points. This is not a web application, so connecting a database to the application might be overkill simply for storing configuration settings. This is a Windows Forms application. According to the MSDN documention, the ConfigurationManager is for storing not just application level settings, but user settings as well. (Especially important if, for instance, the application is installed as a partial-trust application.) Update 2: I accepted lomaxx's answer because Properties does indeed look like a good solution, without having to add any additional layers to my application (such as a database). When using Properties, it already does all the caching that others suggested. This means any changes and subsequent reads are all done in memory, making it extremely fast. Properties only writes the changes to disk when you explicitly tell it to. This means I can make changes to the config settings on-the-fly at run time and then only do a final save out to disk when the program exits. Just to verify it would actually be able to handle the load I need, I did some testing on my laptop and was able to do 750,000 reads and 7,500 writes per second using Properties. That is so far above and beyond what my application will ever even come close to needing that I feel quite safe in using Properties without impacting performance. | since you're using a winforms app, if it's in.net 2.0 there's actually a user settings system (called Properties) that is designed for this purpose. This article on MSDN has a pretty good introduction into this If you're still worried about performance then take a look at SQL Compact Edition which is similar to SQLite but is the Microsoft offering which I've found plays very nicely with winforms and there's even the ability to make it work with Linq | ConfigurationManager.AppSettings Performance Concerns I plan to be storing all my config settings in my application's app.config section (using the ConfigurationManager.AppSettings class). As the user changes settings using the app's UI (clicking checkboxes, choosing radio buttons, etc.), I plan to be writing those changes out to the AppSettings. At the same time, while the program is running I plan to be accessing the AppSettings constantly from a process that will be constantly processing data. Changes to settings via the UI need to affect the data processing in real-time, which is why the process will be accessing the AppSettings constantly. 
Is this a good idea with regard to performance? Using AppSettings is supposed to be "the right way" to store and access configuration settings when writing.Net apps, but I worry that this method wasn't intended for a constant load (at least in terms of settings being constantly read). If anyone has experience with this, I would greatly appreciate the input. Update: I should probably clarify a few points. This is not a web application, so connecting a database to the application might be overkill simply for storing configuration settings. This is a Windows Forms application. According to the MSDN documention, the ConfigurationManager is for storing not just application level settings, but user settings as well. (Especially important if, for instance, the application is installed as a partial-trust application.) Update 2: I accepted lomaxx's answer because Properties does indeed look like a good solution, without having to add any additional layers to my application (such as a database). When using Properties, it already does all the caching that others suggested. This means any changes and subsequent reads are all done in memory, making it extremely fast. Properties only writes the changes to disk when you explicitly tell it to. This means I can make changes to the config settings on-the-fly at run time and then only do a final save out to disk when the program exits. Just to verify it would actually be able to handle the load I need, I did some testing on my laptop and was able to do 750,000 reads and 7,500 writes per second using Properties. That is so far above and beyond what my application will ever even come close to needing that I feel quite safe in using Properties without impacting performance. | TITLE:
ConfigurationManager.AppSettings Performance Concerns
QUESTION:
I plan to be storing all my config settings in my application's app.config section (using the ConfigurationManager.AppSettings class). As the user changes settings using the app's UI (clicking checkboxes, choosing radio buttons, etc.), I plan to be writing those changes out to the AppSettings. At the same time, while the program is running I plan to be accessing the AppSettings constantly from a process that will be constantly processing data. Changes to settings via the UI need to affect the data processing in real-time, which is why the process will be accessing the AppSettings constantly. Is this a good idea with regard to performance? Using AppSettings is supposed to be "the right way" to store and access configuration settings when writing .NET apps, but I worry that this method wasn't intended for a constant load (at least in terms of settings being constantly read). If anyone has experience with this, I would greatly appreciate the input. Update: I should probably clarify a few points. This is not a web application, so connecting a database to the application might be overkill simply for storing configuration settings. This is a Windows Forms application. According to the MSDN documentation, the ConfigurationManager is for storing not just application level settings, but user settings as well. (Especially important if, for instance, the application is installed as a partial-trust application.) Update 2: I accepted lomaxx's answer because Properties does indeed look like a good solution, without having to add any additional layers to my application (such as a database). When using Properties, it already does all the caching that others suggested. This means any changes and subsequent reads are all done in memory, making it extremely fast. Properties only writes the changes to disk when you explicitly tell it to. This means I can make changes to the config settings on-the-fly at run time and then only do a final save out to disk when the program exits. Just to verify it would actually be able to handle the load I need, I did some testing on my laptop and was able to do 750,000 reads and 7,500 writes per second using Properties. That is so far above and beyond what my application will ever even come close to needing that I feel quite safe in using Properties without impacting performance.
ANSWER:
Since you're using a WinForms app, if it's in .NET 2.0 there's actually a user settings system (called Properties) that is designed for this purpose. This article on MSDN has a pretty good introduction to it. If you're still worried about performance, then take a look at SQL Compact Edition, which is similar to SQLite but is the Microsoft offering; I've found it plays very nicely with WinForms, and there's even the ability to make it work with LINQ. | [
"c#",
".net",
"performance",
"configuration",
"properties"
] | 27 | 10 | 6,942 | 8 | 0 | 2008-08-07T00:12:55.663000 | 2008-08-07T00:45:37.393000 |
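A minimal sketch of the Properties approach the accepted answer describes, for readers who want to see its shape. It assumes a user-scoped setting named RefreshIntervalSeconds (an invented name) has been added in the project's Settings designer, which is what generates the Properties.Settings class used below; reads and writes stay in memory, and Save() persists to the user.config file.

```csharp
// Sketch only: assumes a user-scoped int setting named "RefreshIntervalSeconds"
// was defined in the project's Settings designer, which generates the
// Properties.Settings class referenced below.
using System;

namespace SettingsDemo
{
    internal static class Program
    {
        private static void Main()
        {
            // Reads come straight from the in-memory settings object.
            int interval = Properties.Settings.Default.RefreshIntervalSeconds;
            Console.WriteLine("Current interval: {0}", interval);

            // Writes also stay in memory until Save() is called.
            Properties.Settings.Default.RefreshIntervalSeconds = interval + 5;

            // Persist to the user.config file once, e.g. when the app exits.
            Properties.Settings.Default.Save();
        }
    }
}
```

Because nothing touches disk until Save(), polling a setting from the data-processing loop remains an in-memory read, which is consistent with the throughput figures quoted in the question's second update.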
4,164 | 4,209 | What is a good barebones CMS or framework? | I'm about to start a project for a customer who wants CMS-like functionality. They want users to be able to log in, modify a profile, and a basic forum. They also wish to be able to submit things to a front page. Is there a framework or barebones CMS that I could expand on or tailor to my needs? I don't need anything as feature-rich or fancy as Drupal or Joomla. I would actually prefer a framework as opposed to a pre-packaged CMS. I am confident I could code all this from scratch, but would prefer not to, as something like a framework would significantly cut down on my time spent coding, and more on design and layout. Edit: I should have been more specific. I'm looking for a Content Management System that will be run on a Debian server. So no.net preferably. I think i may end up going with Drupal, and only adding modules that I need. Turbogears looks a bit daunting, and i'm still not quite sure what it does after it's 20 minute intro video... TinyCMS doesn't look like it's been touched since... 2000?!? | if you are looking.net you can take a look at umbraco, haven't done much with it (company i work for wanted much more functionality so went with something else) but it seemed lightweight. Edit: if the customer wants a tiny CMS with a forum, I would still probably just go Drupal with phpBB or simple machines forum, almost positive they can share logins. Plus tomorrow the customer is going to want more and Drupal might save you some work there. | What is a good barebones CMS or framework? I'm about to start a project for a customer who wants CMS-like functionality. They want users to be able to log in, modify a profile, and a basic forum. They also wish to be able to submit things to a front page. Is there a framework or barebones CMS that I could expand on or tailor to my needs? I don't need anything as feature-rich or fancy as Drupal or Joomla. I would actually prefer a framework as opposed to a pre-packaged CMS. I am confident I could code all this from scratch, but would prefer not to, as something like a framework would significantly cut down on my time spent coding, and more on design and layout. Edit: I should have been more specific. I'm looking for a Content Management System that will be run on a Debian server. So no.net preferably. I think i may end up going with Drupal, and only adding modules that I need. Turbogears looks a bit daunting, and i'm still not quite sure what it does after it's 20 minute intro video... TinyCMS doesn't look like it's been touched since... 2000?!? | TITLE:
What is a good barebones CMS or framework?
QUESTION:
I'm about to start a project for a customer who wants CMS-like functionality. They want users to be able to log in and modify a profile, and they want a basic forum. They also wish to be able to submit things to a front page. Is there a framework or barebones CMS that I could expand on or tailor to my needs? I don't need anything as feature-rich or fancy as Drupal or Joomla. I would actually prefer a framework as opposed to a pre-packaged CMS. I am confident I could code all this from scratch, but would prefer not to, as something like a framework would significantly cut down on my time spent coding and leave more for design and layout. Edit: I should have been more specific. I'm looking for a Content Management System that will be run on a Debian server. So no .NET, preferably. I think I may end up going with Drupal, and only adding modules that I need. Turbogears looks a bit daunting, and I'm still not quite sure what it does after its 20-minute intro video... TinyCMS doesn't look like it's been touched since... 2000?!?
ANSWER:
If you are looking at .NET you can take a look at Umbraco; I haven't done much with it (the company I work for wanted much more functionality, so we went with something else), but it seemed lightweight. Edit: if the customer wants a tiny CMS with a forum, I would still probably just go with Drupal plus phpBB or Simple Machines Forum; I'm almost positive they can share logins. Plus, tomorrow the customer is going to want more, and Drupal might save you some work there. | [
"frameworks",
"content-management-system"
] | 16 | 0 | 7,297 | 17 | 0 | 2008-08-07T00:20:03.607000 | 2008-08-07T00:39:23.070000 |
4,168 | 4,577 | Graph serialization | I'm looking for a simple algorithm to 'serialize' a directed graph. In particular I've got a set of files with interdependencies on their execution order, and I want to find the correct order at compile time. I know it must be a fairly common thing to do - compilers do it all the time - but my google-fu has been weak today. What's the 'go-to' algorithm for this? | Topological Sort (From Wikipedia): In graph theory, a topological sort or topological ordering of a directed acyclic graph (DAG) is a linear ordering of its nodes in which each node comes before all nodes to which it has outbound edges. Every DAG has one or more topological sorts. Pseudo code: L ← Empty list where we put the sorted elements Q ← Set of all nodes with no incoming edges while Q is non-empty do remove a node n from Q insert n into L for each node m with an edge e from n to m do remove edge e from the graph if m has no other incoming edges then insert m into Q if graph has edges then output error message (graph has a cycle) else output message (proposed topologically sorted order: L) | Graph serialization I'm looking for a simple algorithm to 'serialize' a directed graph. In particular I've got a set of files with interdependencies on their execution order, and I want to find the correct order at compile time. I know it must be a fairly common thing to do - compilers do it all the time - but my google-fu has been weak today. What's the 'go-to' algorithm for this? | TITLE:
Graph serialization
QUESTION:
I'm looking for a simple algorithm to 'serialize' a directed graph. In particular I've got a set of files with interdependencies on their execution order, and I want to find the correct order at compile time. I know it must be a fairly common thing to do - compilers do it all the time - but my google-fu has been weak today. What's the 'go-to' algorithm for this?
ANSWER:
Topological Sort (From Wikipedia): In graph theory, a topological sort or topological ordering of a directed acyclic graph (DAG) is a linear ordering of its nodes in which each node comes before all nodes to which it has outbound edges. Every DAG has one or more topological sorts. Pseudo code: L ← Empty list where we put the sorted elements Q ← Set of all nodes with no incoming edges while Q is non-empty do remove a node n from Q insert n into L for each node m with an edge e from n to m do remove edge e from the graph if m has no other incoming edges then insert m into Q if graph has edges then output error message (graph has a cycle) else output message (proposed topologically sorted order: L) | [
"algorithm",
"sorting",
"graph-algorithm",
"directed-graph"
] | 48 | 67 | 51,571 | 4 | 0 | 2008-08-07T00:22:54.007000 | 2008-08-07T10:53:31.237000 |
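The pseudocode above is Kahn's algorithm; a compact C# sketch of it follows for anyone who wants something executable. The dictionary-of-adjacency-lists representation and the string node ids are assumptions made for this example (the original answer prescribes no particular representation), and every node is expected to appear as a key even if it has no outgoing edges.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Illustrative sketch of the pseudocode above (Kahn's algorithm).
// Assumption: every node appears as a key in the graph, possibly with an empty list.
internal static class TopoSort
{
    // graph: node -> list of nodes it points to (its dependents).
    public static List<string> Sort(Dictionary<string, List<string>> graph)
    {
        // Count incoming edges for every node.
        var inDegree = graph.Keys.ToDictionary(n => n, n => 0);
        foreach (var targets in graph.Values)
            foreach (var t in targets)
                inDegree[t]++;

        // Q: all nodes with no incoming edges.
        var queue = new Queue<string>(inDegree.Where(kv => kv.Value == 0).Select(kv => kv.Key));
        var sorted = new List<string>();            // L in the pseudocode

        while (queue.Count > 0)
        {
            string n = queue.Dequeue();
            sorted.Add(n);
            foreach (string m in graph[n])           // remove edge n -> m
                if (--inDegree[m] == 0)
                    queue.Enqueue(m);
        }

        if (sorted.Count != graph.Count)
            throw new InvalidOperationException("Graph has a cycle.");
        return sorted;
    }
}
```

For the file-dependency use case in the question, each edge would point from a file to the files that must run after it, so the returned list is a valid execution order.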
4,170 | 4,204 | How to learn ADO.NET | I need to learn ADO.NET to build applications based on MS Office. I have read a good deal about ADO.NET in the MSDN Library, but everything seems rather messy to me. What are the basics one must figure out when using ADO.NET? I think a few key words will suffice to let me organize my learning. | There are three key components (assuming you're using SQL Server): SqlConnection SqlCommand SqlDataReader (if you're using something else, replace Sql with "Something", like MySqlConnection, OracleCommand ) Everything else is just built on top of that. Example 1: using (SqlConnection connection = new SqlConnection("CONNECTION STRING")) using (SqlCommand command = new SqlCommand()) { command.CommandText = "SELECT Name FROM Users WHERE Status = @OnlineStatus"; command.Connection = connection; command.Parameters.Add("@OnlineStatus", SqlDbType.Int).Value = 1; //replace with enum connection.Open();
using (SqlDataReader dr = command.ExecuteReader()) { List<string> onlineUsers = new List<string>();
while (dr.Read()) { onlineUsers.Add(dr.GetString(0)); } } } Example 2: using (SqlConnection connection = new SqlConnection("CONNECTION STRING")) using (SqlCommand command = new SqlCommand()) { command.CommandText = "DELETE FROM Users WHERE Email = @Email"; command.Connection = connection; command.Parameters.Add("@Email", SqlDbType.VarChar, 100).Value = "user@host.com"; connection.Open(); command.ExecuteNonQuery(); } | How to learn ADO.NET I need to learn ADO.NET to build applications based on MS Office. I have read a good deal about ADO.NET in the MSDN Library, but everything seems rather messy to me. What are the basics one must figure out when using ADO.NET? I think a few key words will suffice to let me organize my learning. | TITLE:
How to learn ADO.NET
QUESTION:
I need to learn ADO.NET to build applications based on MS Office. I have read a good deal about ADO.NET in the MSDN Library, but everything seems rather messy to me. What are the basics one must figure out when using ADO.NET? I think a few key words will suffice to let me organize my learning.
ANSWER:
There are three key components (assuming you're using SQL Server): SqlConnection SqlCommand SqlDataReader (if you're using something else, replace Sql with "Something", like MySqlConnection, OracleCommand ) Everything else is just built on top of that. Example 1: using (SqlConnection connection = new SqlConnection("CONNECTION STRING")) using (SqlCommand command = new SqlCommand()) { command.CommandText = "SELECT Name FROM Users WHERE Status = @OnlineStatus"; command.Connection = connection; command.Parameters.Add("@OnlineStatus", SqlDbType.Int).Value = 1; //replace with enum connection.Open();
using (SqlDataReader dr = command.ExecuteReader()) { List<string> onlineUsers = new List<string>();
while (dr.Read()) { onlineUsers.Add(dr.GetString(0)); } } } Example 2: using (SqlConnection connection = new SqlConnection("CONNECTION STRING")) using (SqlCommand command = new SqlCommand()) { command.CommandText = "DELETE FROM Users WHERE Email = @Email"; command.Connection = connection; command.Parameters.Add("@Email", SqlDbType.VarChar, 100).Value = "user@host.com"; connection.Open(); command.ExecuteNonQuery(); } | [
"ado.net"
] | 17 | 6 | 1,216 | 2 | 0 | 2008-08-07T00:25:03.457000 | 2008-08-07T00:37:04.727000 |
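As a small aside to the answer's point that everything else is built on top of those three classes, the sketch below shows one of those higher layers: SqlDataAdapter filling a disconnected DataTable. The connection string, query and column names are placeholders, not part of the original answer.

```csharp
// Sketch of a higher-level piece built on the core classes above:
// SqlDataAdapter filling a disconnected DataTable. The connection string and
// table/column names are placeholders.
using System;
using System.Data;
using System.Data.SqlClient;

internal static class AdapterExample
{
    private static void Main()
    {
        var table = new DataTable();
        using (var connection = new SqlConnection("CONNECTION STRING"))
        using (var adapter = new SqlDataAdapter("SELECT Name, Status FROM Users", connection))
        {
            adapter.Fill(table);   // opens and closes the connection itself
        }

        foreach (DataRow row in table.Rows)
            Console.WriteLine("{0} ({1})", row["Name"], row["Status"]);
    }
}
```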
4,208 | 4,332 | Windows Equivalent of 'nice' | Is there a Windows equivalent of the Unix command, nice? I'm specifically looking for something I can use at the command line, and not the "Set Priority" menu from the task manager. My attempts at finding this on Google have been thwarted by those who can't come up with better adjectives. | If you want to set priority when launching a process you could use the built-in START command: START ["title"] [/Dpath] [/I] [/MIN] [/MAX] [/SEPARATE | /SHARED] [/LOW | /NORMAL | /HIGH | /REALTIME | /ABOVENORMAL | /BELOWNORMAL] [/WAIT] [/B] [command/program] [parameters] Use the low through belownormal options to set priority of the launched command/program. Seems like the most straightforward solution. No downloads or script writing. The other solutions probably work on already running procs though. | Windows Equivalent of 'nice' Is there a Windows equivalent of the Unix command, nice? I'm specifically looking for something I can use at the command line, and not the "Set Priority" menu from the task manager. My attempts at finding this on Google have been thwarted by those who can't come up with better adjectives. | TITLE:
Windows Equivalent of 'nice'
QUESTION:
Is there a Windows equivalent of the Unix command, nice? I'm specifically looking for something I can use at the command line, and not the "Set Priority" menu from the task manager. My attempts at finding this on Google have been thwarted by those who can't come up with better adjectives.
ANSWER:
If you want to set priority when launching a process you could use the built-in START command: START ["title"] [/Dpath] [/I] [/MIN] [/MAX] [/SEPARATE | /SHARED] [/LOW | /NORMAL | /HIGH | /REALTIME | /ABOVENORMAL | /BELOWNORMAL] [/WAIT] [/B] [command/program] [parameters] Use the low through belownormal options to set priority of the launched command/program. Seems like the most straightforward solution. No downloads or script writing. The other solutions probably work on already running procs though. | [
"windows",
"unix",
"process-management"
] | 80 | 71 | 38,492 | 4 | 0 | 2008-08-07T00:39:17.453000 | 2008-08-07T02:49:21.107000 |
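As an aside for anyone who needs the same effect from code rather than from a batch file: a rough C# counterpart of START /BELOWNORMAL is to adjust Process.PriorityClass after launching. The notepad.exe target below is just a placeholder.

```csharp
// Rough programmatic equivalent of START /BELOWNORMAL, offered as an aside;
// the notepad.exe target is only a placeholder.
using System;
using System.Diagnostics;

internal static class NiceLike
{
    private static void Main()
    {
        Process p = Process.Start("notepad.exe");
        // Priority can only be set once the process object exists.
        p.PriorityClass = ProcessPriorityClass.BelowNormal;
        Console.WriteLine("Started PID {0} at {1} priority.", p.Id, p.PriorityClass);
    }
}
```

The same PriorityClass property can also be set on an already running process obtained via Process.GetProcessById or Process.GetProcessesByName, which is closer to what the other suggestions in that thread cover.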
4,219 | 4,228 | SVN vs. Team Foundation Server | A few months back my team switched our source control over to Apache Subversion from Visual SourceSafe, and we haven't been happier. Recently I've been looking at Team Foundation Server, and at least on the surface, it seems very impressive. There is some great integration with Visual Studio, and lots of great tools for DBAs, testers, project managers, etc. The most obvious difference between these two products is price. It's hard to beat Apache Subversion (free). Team Foundation Server is quite expensive, so the extra features would really have to kick Subversion in the pants. Does anyone have practical experience with both? How do they compare? Is Team Foundation Server actually worth the expense? | I joined an Open Source project over at CodePlex, recently. They use TFS for their source control and I have to say that it's absolutely magnificent. I'm incredibly impressed with it, so far. I'm a huge fan of the IDE integration and how easy it is to branch and tag your code. Adding a solution to source control is something like two clicks, if you've already got everything configured properly. Now. Is it worth the hefty price tag? I don't think so. The benefit to working on projects at CodePlex is it lets me get the experience with TFS that I need, in the event that I have to use it somewhere later. If you want good IDE integration for your Source Control, go grab VisualSVN integration package. It's a much, much cheaper investment to get a lot of the same features (free on non-domain computers BTW). | SVN vs. Team Foundation Server A few months back my team switched our source control over to Apache Subversion from Visual SourceSafe, and we haven't been happier. Recently I've been looking at Team Foundation Server, and at least on the surface, it seems very impressive. There is some great integration with Visual Studio, and lots of great tools for DBAs, testers, project managers, etc. The most obvious difference between these two products is price. It's hard to beat Apache Subversion (free). Team Foundation Server is quite expensive, so the extra features would really have to kick Subversion in the pants. Does anyone have practical experience with both? How do they compare? Is Team Foundation Server actually worth the expense? | TITLE:
SVN vs. Team Foundation Server
QUESTION:
A few months back my team switched our source control over to Apache Subversion from Visual SourceSafe, and we haven't been happier. Recently I've been looking at Team Foundation Server, and at least on the surface, it seems very impressive. There is some great integration with Visual Studio, and lots of great tools for DBAs, testers, project managers, etc. The most obvious difference between these two products is price. It's hard to beat Apache Subversion (free). Team Foundation Server is quite expensive, so the extra features would really have to kick Subversion in the pants. Does anyone have practical experience with both? How do they compare? Is Team Foundation Server actually worth the expense?
ANSWER:
I joined an Open Source project over at CodePlex, recently. They use TFS for their source control and I have to say that it's absolutely magnificent. I'm incredibly impressed with it, so far. I'm a huge fan of the IDE integration and how easy it is to branch and tag your code. Adding a solution to source control is something like two clicks, if you've already got everything configured properly. Now. Is it worth the hefty price tag? I don't think so. The benefit to working on projects at CodePlex is it lets me get the experience with TFS that I need, in the event that I have to use it somewhere later. If you want good IDE integration for your Source Control, go grab VisualSVN integration package. It's a much, much cheaper investment to get a lot of the same features (free on non-domain computers BTW). | [
"svn",
"tfs"
] | 77 | 46 | 56,008 | 26 | 0 | 2008-08-07T00:43:33.700000 | 2008-08-07T00:52:27.130000 |
4,225 | 4,281 | Territory Map Generation | Is there a trivial, or at least moderately straight-forward way to generate territory maps (e.g. Risk)? I have looked in the past and the best I could find were vague references to Voronoi diagrams. An example of a Voronoi diagram is this:. These hold promise, but I guess i haven't seen any straight-forward ways of rendering these, let alone holding them in some form of data structure to treat each territory as an object. Another approach that holds promise is flood fill, but again I'm unsure on the best way to start with this approach. | The best reference I've seen on them is Computational Geometry: Algorithms and Applications, which covers Voronoi diagrams, Delaunay triangulations (similar to Voronoi diagrams and each can be converted into the other), and other similar data structures. They talk about all the data structures you need but they don't give you the code necessary to implement it (which may be a good exercise). In terms of code, an Amazon search shows the book Computational Geometry in C, which presumably comes with the code (although since you're stuck in C, you'd mind as well get the other one and implement it in whatever language you want). I also don't have any experience with this book, only the first. Sorry to have only books to recommend! The only decent online resource I've seen on them are the two Wikipedia articles, which doesn't really tell you implementation details. This link may be helpful though. | Territory Map Generation Is there a trivial, or at least moderately straight-forward way to generate territory maps (e.g. Risk)? I have looked in the past and the best I could find were vague references to Voronoi diagrams. An example of a Voronoi diagram is this:. These hold promise, but I guess i haven't seen any straight-forward ways of rendering these, let alone holding them in some form of data structure to treat each territory as an object. Another approach that holds promise is flood fill, but again I'm unsure on the best way to start with this approach. | TITLE:
Territory Map Generation
QUESTION:
Is there a trivial, or at least moderately straightforward, way to generate territory maps (e.g. Risk)? I have looked in the past and the best I could find were vague references to Voronoi diagrams. Voronoi diagrams hold promise, but I guess I haven't seen any straightforward ways of rendering these, let alone holding them in some form of data structure to treat each territory as an object. Another approach that holds promise is flood fill, but again I'm unsure of the best way to start with this approach.
ANSWER:
The best reference I've seen on them is Computational Geometry: Algorithms and Applications, which covers Voronoi diagrams, Delaunay triangulations (similar to Voronoi diagrams and each can be converted into the other), and other similar data structures. They talk about all the data structures you need but they don't give you the code necessary to implement it (which may be a good exercise). In terms of code, an Amazon search shows the book Computational Geometry in C, which presumably comes with the code (although since you're stuck in C, you might as well get the other one and implement it in whatever language you want). I also don't have any experience with this book, only the first. Sorry to have only books to recommend! The only decent online resources I've seen on them are the two Wikipedia articles, which don't really tell you implementation details. This link may be helpful though. | [
"language-agnostic",
"maps",
"voronoi"
] | 17 | 7 | 1,400 | 4 | 0 | 2008-08-07T00:48:04.953000 | 2008-08-07T01:47:54.133000 |
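To make the Voronoi idea a little more concrete than the book pointers above: the naive sketch below assigns every cell of a small grid to its nearest seed point, so each seed's cells form one territory. The grid size, seed count and random placement are assumptions for the demo; a serious implementation would use the data structures from the recommended texts (Fortune's algorithm or a Delaunay triangulation) rather than this brute-force scan.

```csharp
// Naive illustration of Voronoi-based territories: every grid cell is assigned
// to its nearest seed point, so each seed's cells form one territory.
// Grid size, seed count and placement are assumptions made for the demo.
using System;

internal static class TerritoryDemo
{
    private static void Main()
    {
        const int width = 40, height = 20, seeds = 6;
        var rng = new Random(1);
        var sx = new int[seeds];
        var sy = new int[seeds];
        for (int i = 0; i < seeds; i++) { sx[i] = rng.Next(width); sy[i] = rng.Next(height); }

        for (int y = 0; y < height; y++)
        {
            for (int x = 0; x < width; x++)
            {
                int best = 0, bestDist = int.MaxValue;
                for (int i = 0; i < seeds; i++)
                {
                    int dx = x - sx[i], dy = y - sy[i];
                    int d = dx * dx + dy * dy;           // squared Euclidean distance
                    if (d < bestDist) { bestDist = d; best = i; }
                }
                Console.Write((char)('A' + best));       // territory id as a letter
            }
            Console.WriteLine();
        }
    }
}
```

Each letter region in the output is effectively one Voronoi cell, which can then be stored as an object (list of cells plus adjacency) for game logic.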
4,227 | 4,735,712 | Accessing a Dictionary.Keys Key through a numeric index | I'm using a Dictionary where the int is a count of the key. Now, I need to access the last-inserted Key inside the Dictionary, but I do not know the name of it. The obvious attempt: int LastCount = mydict[mydict.keys[mydict.keys.Count]]; does not work, because Dictionary.Keys does not implement a []-indexer. I just wonder if there is any similar class? I thought about using a Stack, but that only stores a string. I could now create my own struct and then use a Stack, but I wonder if there is another alternative, essentially a Dictionary that implements an []-indexer on the Keys? | As @Falanwe points out in a comment, doing something like this is incorrect: int LastCount = mydict.Keys.ElementAt(mydict.Count -1); You should not depend on the order of keys in a Dictionary. If you need ordering, you should use an OrderedDictionary, as suggested in this answer. The other answers on this page are interesting as well. | Accessing a Dictionary.Keys Key through a numeric index I'm using a Dictionary where the int is a count of the key. Now, I need to access the last-inserted Key inside the Dictionary, but I do not know the name of it. The obvious attempt: int LastCount = mydict[mydict.keys[mydict.keys.Count]]; does not work, because Dictionary.Keys does not implement a []-indexer. I just wonder if there is any similar class? I thought about using a Stack, but that only stores a string. I could now create my own struct and then use a Stack, but I wonder if there is another alternative, essentially a Dictionary that implements an []-indexer on the Keys? | TITLE:
Accessing a Dictionary.Keys Key through a numeric index
QUESTION:
I'm using a Dictionary<string, int> where the int is a count of the key. Now, I need to access the last-inserted Key inside the Dictionary, but I do not know the name of it. The obvious attempt: int LastCount = mydict[mydict.keys[mydict.keys.Count]]; does not work, because Dictionary.Keys does not implement a []-indexer. I just wonder if there is any similar class? I thought about using a Stack, but that only stores a string. I could now create my own struct and then use a Stack, but I wonder if there is another alternative, essentially a Dictionary that implements an []-indexer on the Keys?
ANSWER:
As @Falanwe points out in a comment, doing something like this is incorrect: int LastCount = mydict.Keys.ElementAt(mydict.Count -1); You should not depend on the order of keys in a Dictionary. If you need ordering, you should use an OrderedDictionary, as suggested in this answer. The other answers on this page are interesting as well. | [
"c#",
".net",
"dictionary"
] | 167 | 233 | 280,163 | 15 | 0 | 2008-08-07T00:51:21.720000 | 2011-01-19T13:21:27.887000 |
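A short sketch of the two safer routes implied by the accepted answer: either track the last-inserted key yourself alongside an ordinary Dictionary, or use OrderedDictionary, whose integer indexer reflects insertion order. The keys and values below are invented for the example.

```csharp
// Sketch of the two approaches hinted at above; key/value contents are made up.
using System;
using System.Collections.Generic;
using System.Collections.Specialized;

internal static class LastKeyDemo
{
    private static void Main()
    {
        // Option 1: plain Dictionary plus an explicitly tracked last key.
        var counts = new Dictionary<string, int>();
        string lastKey = null;
        foreach (string key in new[] { "alpha", "beta", "gamma" })
        {
            counts[key] = key.Length;
            lastKey = key;
        }
        Console.WriteLine("Last inserted: {0} = {1}", lastKey, counts[lastKey]);

        // Option 2: OrderedDictionary preserves insertion order and supports [index].
        var ordered = new OrderedDictionary();
        ordered.Add("alpha", 5);
        ordered.Add("beta", 4);
        ordered.Add("gamma", 5);
        Console.WriteLine("Last inserted value: {0}", ordered[ordered.Count - 1]);
    }
}
```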
4,230 | 4,244 | The Difference Between a DataGrid and a GridView in ASP.NET? | I've been doing ASP.NET development for a little while now, and I've used both the GridView and the DataGrid controls before for various things, but I never could find a really good reason to use one or the other. I'd like to know: What is the difference between these 2 ASP.NET controls? What are the advantages or disadvantages of both? Is one any faster? Newer? Easier to maintain? The intellisense summary for the controls doesn't seem to describe any difference between the two. They both can view, edit, and sort data and automatically generate columns at runtime. Edit: Visual Studio 2008 no longer lists DataGrid as an available control in the toolbox. It is still available (for legacy support I assume) if you type it in by hand though. | DataGrid was an ASP.NET 1.1 control, still supported. GridView arrived in 2.0, made certain tasks simpler added different databinding features: This link has a comparison of DataGrid and GridView features - https://msdn.microsoft.com/en-us/library/05yye6k9(v=vs.100).aspx | The Difference Between a DataGrid and a GridView in ASP.NET? I've been doing ASP.NET development for a little while now, and I've used both the GridView and the DataGrid controls before for various things, but I never could find a really good reason to use one or the other. I'd like to know: What is the difference between these 2 ASP.NET controls? What are the advantages or disadvantages of both? Is one any faster? Newer? Easier to maintain? The intellisense summary for the controls doesn't seem to describe any difference between the two. They both can view, edit, and sort data and automatically generate columns at runtime. Edit: Visual Studio 2008 no longer lists DataGrid as an available control in the toolbox. It is still available (for legacy support I assume) if you type it in by hand though. | TITLE:
The Difference Between a DataGrid and a GridView in ASP.NET?
QUESTION:
I've been doing ASP.NET development for a little while now, and I've used both the GridView and the DataGrid controls before for various things, but I never could find a really good reason to use one or the other. I'd like to know: What is the difference between these 2 ASP.NET controls? What are the advantages or disadvantages of both? Is one any faster? Newer? Easier to maintain? The intellisense summary for the controls doesn't seem to describe any difference between the two. They both can view, edit, and sort data and automatically generate columns at runtime. Edit: Visual Studio 2008 no longer lists DataGrid as an available control in the toolbox. It is still available (for legacy support I assume) if you type it in by hand though.
ANSWER:
DataGrid was an ASP.NET 1.1 control, and it is still supported. GridView arrived in 2.0; it made certain tasks simpler and added different databinding features. This link has a comparison of DataGrid and GridView features - https://msdn.microsoft.com/en-us/library/05yye6k9(v=vs.100).aspx | [
"asp.net"
] | 53 | 47 | 79,726 | 9 | 0 | 2008-08-07T00:54:31.883000 | 2008-08-07T01:06:22.687000 |
4,234 | 4,260 | What to use for Messaging with C# | So my company stores alot of data in a foxpro database and trying to get around the performance hit of touching it directly I was thinking of messaging anything that can be done asynchronously for a snappier user experience. I started looking at ActiveMQ but don't know how well C# will hook with it. Wanting to hear what all of you guys think. edit: It is going to be a web application. Anything touching this foxpro is kinda slow (probably because the person who set it up 10 years ago messed it all to hell, some of the table files are incredibly large). We replicate the foxpro to sql nightly and most of our data reads are ok being a day old so we are focusing on the writes. plus the write affects a critical part of the user experience (purchasing), we store it in sql and then just message to have it put into foxpro when it can. I wish we could just get rid of the foxpro, unfortunately the company doesn't want to get rid of a very old piece of software they bought that depends on it. | ActiveMQ works well with C# using the Spring.NET integrations and NMS. A post with some links to get you started in that direction is here. Also consider using MSMQ (The System.Messaging namespace) or a.NET based asynchronous messaging solution, with some options here. | What to use for Messaging with C# So my company stores alot of data in a foxpro database and trying to get around the performance hit of touching it directly I was thinking of messaging anything that can be done asynchronously for a snappier user experience. I started looking at ActiveMQ but don't know how well C# will hook with it. Wanting to hear what all of you guys think. edit: It is going to be a web application. Anything touching this foxpro is kinda slow (probably because the person who set it up 10 years ago messed it all to hell, some of the table files are incredibly large). We replicate the foxpro to sql nightly and most of our data reads are ok being a day old so we are focusing on the writes. plus the write affects a critical part of the user experience (purchasing), we store it in sql and then just message to have it put into foxpro when it can. I wish we could just get rid of the foxpro, unfortunately the company doesn't want to get rid of a very old piece of software they bought that depends on it. | TITLE:
What to use for Messaging with C#
QUESTION:
So my company stores a lot of data in a FoxPro database and, trying to get around the performance hit of touching it directly, I was thinking of messaging anything that can be done asynchronously for a snappier user experience. I started looking at ActiveMQ but don't know how well C# will hook with it. Wanting to hear what all of you guys think. Edit: It is going to be a web application. Anything touching this FoxPro is kind of slow (probably because the person who set it up 10 years ago messed it all to hell, and some of the table files are incredibly large). We replicate the FoxPro to SQL nightly, and most of our data reads are OK being a day old, so we are focusing on the writes. Plus, the writes affect a critical part of the user experience (purchasing); we store it in SQL and then just send a message to have it put into FoxPro when it can be. I wish we could just get rid of the FoxPro; unfortunately, the company doesn't want to get rid of a very old piece of software they bought that depends on it.
ANSWER:
ActiveMQ works well with C# using the Spring.NET integrations and NMS. A post with some links to get you started in that direction is here. Also consider using MSMQ (the System.Messaging namespace) or a .NET-based asynchronous messaging solution, with some options here. | [
"c#",
"messaging"
] | 14 | 8 | 7,065 | 7 | 0 | 2008-08-07T00:56:41.880000 | 2008-08-07T01:20:13.890000 |
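A minimal sketch of the MSMQ option mentioned in the answer (the System.Messaging namespace), showing the queue-the-write idea from the question: the web tier drops a message describing the pending FoxPro write and a background consumer applies it later. The queue path, message body and label are assumptions for the example, and it presumes MSMQ is installed and a reference to System.Messaging.dll is added.

```csharp
// Minimal System.Messaging (MSMQ) sketch of "queue the write, apply it later".
// The queue path and message contents are assumptions made for this example.
using System;
using System.Messaging;

internal static class WriteQueueDemo
{
    private const string QueuePath = @".\Private$\foxproWrites";

    private static void Main()
    {
        if (!MessageQueue.Exists(QueuePath))
            MessageQueue.Create(QueuePath);

        using (var queue = new MessageQueue(QueuePath))
        {
            queue.Formatter = new XmlMessageFormatter(new[] { typeof(string) });

            // Producer side: the web app drops the pending write and moves on.
            queue.Send("INSERT purchase #12345", "purchase");

            // Consumer side: a background service drains the queue and touches FoxPro.
            Message msg = queue.Receive(TimeSpan.FromSeconds(5));
            Console.WriteLine("Would apply to FoxPro: {0}", (string)msg.Body);
        }
    }
}
```

In practice the producer and consumer would live in separate processes; putting them in one Main here just keeps the sketch self-contained.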
4,242 | 4,247 | Why doesn't Java autoboxing extend to method invocations of methods of the autoboxed types? | I want to convert a primitive to a string, and I tried: myInt.toString(); This fails with the error: int cannot be dereferenced Now, I get that primitives are not reference types (ie, not an Object) and so cannot have methods. However, Java 5 introduced autoboxing and unboxing (a la C#... which I never liked in C#, but that's beside the point). So with autoboxing, I would expect the above to convert myInt to an Integer and then call toString() on that. Furthermore, I believe C# allows such a call, unless I remember incorrectly. Is this just an unfortunate shortcoming of Java's autoboxing/unboxing specification, or is there a good reason for this? | Java autoboxing/unboxing doesn't go to the extent to allow you to dereference a primitive, so your compiler prevents it. Your compiler still knows myInt as a primitive. There's a paper about this issue at jcp.org. Autoboxing is mainly useful during assignment or parameter passing -- allowing you to pass a primitive as an object (or vice versa), or assign a primitive to an object (or vice versa). So unfortunately, you would have to do it like this: (kudos Patrick, I switched to your way) Integer.toString(myInt); | Why doesn't Java autoboxing extend to method invocations of methods of the autoboxed types? I want to convert a primitive to a string, and I tried: myInt.toString(); This fails with the error: int cannot be dereferenced Now, I get that primitives are not reference types (ie, not an Object) and so cannot have methods. However, Java 5 introduced autoboxing and unboxing (a la C#... which I never liked in C#, but that's beside the point). So with autoboxing, I would expect the above to convert myInt to an Integer and then call toString() on that. Furthermore, I believe C# allows such a call, unless I remember incorrectly. Is this just an unfortunate shortcoming of Java's autoboxing/unboxing specification, or is there a good reason for this? | TITLE:
Why doesn't Java autoboxing extend to method invocations of methods of the autoboxed types?
QUESTION:
I want to convert a primitive to a string, and I tried: myInt.toString(); This fails with the error: int cannot be dereferenced Now, I get that primitives are not reference types (ie, not an Object) and so cannot have methods. However, Java 5 introduced autoboxing and unboxing (a la C#... which I never liked in C#, but that's beside the point). So with autoboxing, I would expect the above to convert myInt to an Integer and then call toString() on that. Furthermore, I believe C# allows such a call, unless I remember incorrectly. Is this just an unfortunate shortcoming of Java's autoboxing/unboxing specification, or is there a good reason for this?
ANSWER:
Java autoboxing/unboxing doesn't go to the extent to allow you to dereference a primitive, so your compiler prevents it. Your compiler still knows myInt as a primitive. There's a paper about this issue at jcp.org. Autoboxing is mainly useful during assignment or parameter passing -- allowing you to pass a primitive as an object (or vice versa), or assign a primitive to an object (or vice versa). So unfortunately, you would have to do it like this: (kudos Patrick, I switched to your way) Integer.toString(myInt); | [
"java",
"autoboxing"
] | 53 | 47 | 7,101 | 8 | 0 | 2008-08-07T01:05:15.420000 | 2008-08-07T01:09:17.547000 |
4,246 | 4,271 | What is best practice for FTP from a SQL Server 2005 stored procedure? | What is the best method for executing FTP commands from a SQL Server stored procedure? we currently use something like this: EXEC master..xp_cmdshell 'ftp -n -s:d:\ftp\ftpscript.xmt 172.1.1.1' The problem is that the command seems to succeed even if the FTP ended in error. Also, the use of xp_cmdshell requires special permissions and may leave room for security issues. | If you're running SQL 2005 you could do this in a CLR integration assembly and use the FTP classes in the System.Net namespace to build a simple FTP client. You'd benefit from being able to trap and handle exceptions and reduce the security risk of having to use xp_cmdshell. Just some thoughts. | What is best practice for FTP from a SQL Server 2005 stored procedure? What is the best method for executing FTP commands from a SQL Server stored procedure? we currently use something like this: EXEC master..xp_cmdshell 'ftp -n -s:d:\ftp\ftpscript.xmt 172.1.1.1' The problem is that the command seems to succeed even if the FTP ended in error. Also, the use of xp_cmdshell requires special permissions and may leave room for security issues. | TITLE:
What is best practice for FTP from a SQL Server 2005 stored procedure?
QUESTION:
What is the best method for executing FTP commands from a SQL Server stored procedure? we currently use something like this: EXEC master..xp_cmdshell 'ftp -n -s:d:\ftp\ftpscript.xmt 172.1.1.1' The problem is that the command seems to succeed even if the FTP ended in error. Also, the use of xp_cmdshell requires special permissions and may leave room for security issues.
ANSWER:
If you're running SQL 2005 you could do this in a CLR integration assembly and use the FTP classes in the System.Net namespace to build a simple FTP client. You'd benefit from being able to trap and handle exceptions and reduce the security risk of having to use xp_cmdshell. Just some thoughts. | [
"sql-server",
"ftp"
] | 15 | 5 | 6,792 | 3 | 0 | 2008-08-07T01:09:04.820000 | 2008-08-07T01:29:43.200000 |
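A sketch of what the suggested CLR-integration route might look like, using FtpWebRequest from System.Net and exposed as a SQL CLR stored procedure. The method name, parameters and URI are invented for the example, and the assembly would need at least EXTERNAL_ACCESS permission to read files and open network connections. The practical benefit over the ftp.exe script is that a failed transfer surfaces as an exception instead of silently succeeding.

```csharp
// Sketch of the CLR-integration route suggested above: an FTP upload written
// with System.Net and exposed as a SQL CLR stored procedure. Host, credentials
// and file paths are placeholders.
using System;
using System.IO;
using System.Net;
using Microsoft.SqlServer.Server;

public static class FtpProcedures
{
    [SqlProcedure]
    public static void UploadFile(string localPath, string ftpUri, string user, string password)
    {
        // e.g. ftpUri = "ftp://172.1.1.1/outbox/report.txt"
        var request = (FtpWebRequest)WebRequest.Create(ftpUri);
        request.Method = WebRequestMethods.Ftp.UploadFile;
        request.Credentials = new NetworkCredential(user, password);

        byte[] bytes = File.ReadAllBytes(localPath);
        using (Stream stream = request.GetRequestStream())
            stream.Write(bytes, 0, bytes.Length);

        using (var response = (FtpWebResponse)request.GetResponse())
        {
            // Any FTP-level failure throws a WebException rather than passing silently.
            SqlContext.Pipe.Send("FTP status: " + response.StatusDescription);
        }
    }
}
```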
4,287 | 4,304 | How Do You Determine The PID of the Parent of a Process | I have a process in erlang that is supposed to do something immediately after spawn, then send the result back to the parent when it is finished. How do I figure out the PID of the process that spawned it? | You should pass self() to the child as one of the arguments to the entry function. spawn_link(?MODULE, child, [self()]). | How Do You Determine The PID of the Parent of a Process I have a process in erlang that is supposed to do something immediately after spawn, then send the result back to the parent when it is finished. How do I figure out the PID of the process that spawned it? | TITLE:
How Do You Determine The PID of the Parent of a Process
QUESTION:
I have a process in erlang that is supposed to do something immediately after spawn, then send the result back to the parent when it is finished. How do I figure out the PID of the process that spawned it?
ANSWER:
You should pass self() to the child as one of the arguments to the entry function. spawn_link(?MODULE, child, [self()]). | [
"erlang"
] | 16 | 15 | 3,539 | 4 | 0 | 2008-08-07T01:54:12.107000 | 2008-08-07T02:27:32.740000 |
4,303 | 4,310 | Why should I practice Test Driven Development and how should I start? | Lots of people talk about writing tests for their code before they start writing their code. This practice is generally known as Test Driven Development or TDD for short. What benefits do I gain from writing software this way? How do I get started with this practice? | There are a lot of benefits: You get immediate feedback on if your code is working, so you can find bugs faster By seeing the test go from red to green, you know that you have both a working regression test, and working code You gain confidence to refactor existing code, which means you can clean up code without worrying what it might break At the end you have a suite of regression tests that can be run during automated builds to give you greater confidence that your codebase is solid The best way to start is to just start. There is a great book by Kent Beck all about Test Driven Development. Just start with new code, don't worry about old code... whenever you feel you need to refactor some code, write a test for the existing functionality, then refactor it and make sure the tests stay green. Also, read this great article. | Why should I practice Test Driven Development and how should I start? Lots of people talk about writing tests for their code before they start writing their code. This practice is generally known as Test Driven Development or TDD for short. What benefits do I gain from writing software this way? How do I get started with this practice? | TITLE:
Why should I practice Test Driven Development and how should I start?
QUESTION:
Lots of people talk about writing tests for their code before they start writing their code. This practice is generally known as Test Driven Development or TDD for short. What benefits do I gain from writing software this way? How do I get started with this practice?
ANSWER:
There are a lot of benefits: You get immediate feedback on if your code is working, so you can find bugs faster By seeing the test go from red to green, you know that you have both a working regression test, and working code You gain confidence to refactor existing code, which means you can clean up code without worrying what it might break At the end you have a suite of regression tests that can be run during automated builds to give you greater confidence that your codebase is solid The best way to start is to just start. There is a great book by Kent Beck all about Test Driven Development. Just start with new code, don't worry about old code... whenever you feel you need to refactor some code, write a test for the existing functionality, then refactor it and make sure the tests stay green. Also, read this great article. | [
"testing",
"tdd"
] | 60 | 37 | 8,400 | 4 | 0 | 2008-08-07T02:27:03.397000 | 2008-08-07T02:33:10.437000 |
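For readers who have never seen the red-to-green loop in practice, here is a deliberately tiny illustration. NUnit is used purely as an example framework (the answer does not prescribe one): the test is written first and fails until the Add method is implemented, then passes.

```csharp
// Tiny illustration of the red-green loop described above, using NUnit purely
// as an example framework. The test is written first and fails ("red") until
// the Add method below is implemented, at which point it passes ("green").
using NUnit.Framework;

public class Calculator
{
    public int Add(int a, int b)
    {
        return a + b;   // the minimal code needed to make the test pass
    }
}

[TestFixture]
public class CalculatorTests
{
    [Test]
    public void Add_TwoNumbers_ReturnsSum()
    {
        var calculator = new Calculator();
        Assert.AreEqual(5, calculator.Add(2, 3));
    }
}
```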
4,306 | 4,311 | What is the best way to create a sparse array in C++? | I am working on a project that requires the manipulation of enormous matrices, specifically pyramidal summation for a copula calculation. In short, I need to keep track of a relatively small number of values (usually a value of 1, and in rare cases more than 1) in a sea of zeros in the matrix (multidimensional array). A sparse array allows the user to store a small number of values, and assume all undefined records to be a preset value. Since it is not physically possible to store all values in memory, I need to store only a few non-zero elements. This could be several million entries. Speed is a huge priority, and I would also like to dynamically choose the number of variables in the class at runtime. I currently work on a system that uses a binary search tree (b-tree) to store entries. Does anyone know of a better system? | For C++, a map works well. Several million objects won't be a problem. 10 million items took about 4.4 seconds and about 57 meg on my computer. My test application is as follows: #include #include #include class triple { public: int x; int y; int z; bool operator<(const triple &other) const { if (x < other.x) return true; if (other.x < x) return false; if (y < other.y) return true; if (other.y < y) return false; return z < other.z; } };
int main(int, char**) { std::map<triple, int> data; triple point; int i;
for (i = 0; i < 10000000; ++i) { point.x = rand(); point.y = rand(); point.z = rand(); //printf("%d %d %d %d\n", i, point.x, point.y, point.z); data[point] = i; } return 0; } Now to dynamically choose the number of variables, the easiest solution is to represent index as a string, and then use string as a key for the map. For instance, an item located at [23][55] can be represented via a "23,55" string. We can also extend this solution for higher dimensions; such as for three dimensions an arbitrary index will look like "34,45,56". A simple implementation of this technique is as follows: std::map<std::string, int> data; char ix[100];
sprintf(ix, "%d,%d", x, y); // 2 vars data[ix] = i;
sprintf(ix, "%d,%d,%d", x, y, z); // 3 vars data[ix] = i; | What is the best way to create a sparse array in C++? I am working on a project that requires the manipulation of enormous matrices, specifically pyramidal summation for a copula calculation. In short, I need to keep track of a relatively small number of values (usually a value of 1, and in rare cases more than 1) in a sea of zeros in the matrix (multidimensional array). A sparse array allows the user to store a small number of values, and assume all undefined records to be a preset value. Since it is not physically possible to store all values in memory, I need to store only a few non-zero elements. This could be several million entries. Speed is a huge priority, and I would also like to dynamically choose the number of variables in the class at runtime. I currently work on a system that uses a binary search tree (b-tree) to store entries. Does anyone know of a better system? | TITLE:
What is the best way to create a sparse array in C++?
QUESTION:
I am working on a project that requires the manipulation of enormous matrices, specifically pyramidal summation for a copula calculation. In short, I need to keep track of a relatively small number of values (usually a value of 1, and in rare cases more than 1) in a sea of zeros in the matrix (multidimensional array). A sparse array allows the user to store a small number of values, and assume all undefined records to be a preset value. Since it is not physically possible to store all values in memory, I need to store only a few non-zero elements. This could be several million entries. Speed is a huge priority, and I would also like to dynamically choose the number of variables in the class at runtime. I currently work on a system that uses a binary search tree (b-tree) to store entries. Does anyone know of a better system?
ANSWER:
For C++, a map works well. Several million objects won't be a problem. 10 million items took about 4.4 seconds and about 57 meg on my computer. My test application is as follows: #include <cstdio> #include <cstdlib> #include <map> class triple { public: int x; int y; int z; bool operator<(const triple &other) const { if (x < other.x) return true; if (other.x < x) return false; if (y < other.y) return true; if (other.y < y) return false; return z < other.z; } };
int main(int, char**) { std::map<triple, int> data; triple point; int i;
for (i = 0; i < 10000000; ++i) { point.x = rand(); point.y = rand(); point.z = rand(); //printf("%d %d %d %d\n", i, point.x, point.y, point.z); data[point] = i; } return 0; } Now to dynamically choose the number of variables, the easiest solution is to represent index as a string, and then use string as a key for the map. For instance, an item located at [23][55] can be represented via a "23,55" string. We can also extend this solution for higher dimensions; such as for three dimensions an arbitrary index will look like "34,45,56". A simple implementation of this technique is as follows: std::map<std::string, int> data; char ix[100];
sprintf(ix, "%d,%d", x, y); // 2 vars data[ix] = i;
sprintf(ix, "%d,%d,%d", x, y, z); // 3 vars data[ix] = i; | [
"c++",
"oop",
"data-structures",
"hash",
"maps"
] | 63 | 33 | 62,005 | 11 | 0 | 2008-08-07T02:29:58.180000 | 2008-08-07T02:33:16.973000 |
4,314 | 4,322 | What is Object Mocking and when do I need it? | Many people use Mock Objects when they are writing unit tests. What is a Mock Object? Why would I ever need one? Do I need a Mock Object Framework? | Object Mocking is used to keep dependencies out of your unit test. Sometimes you'll have a test like "SelectPerson" which will select a person from the database and return a Person object. To do this, you would normally need a dependency on the database, however with object mocking you can simulate the interaction with the database with a mock framework, so it might return a dataset which looks like one returned from the database and you can then test your code to ensure that it handles translating a dataset to a person object, rather than using it to test that a connection to the database exists. | What is Object Mocking and when do I need it? Many people use Mock Objects when they are writing unit tests. What is a Mock Object? Why would I ever need one? Do I need a Mock Object Framework? | TITLE:
What is Object Mocking and when do I need it?
QUESTION:
Many people use Mock Objects when they are writing unit tests. What is a Mock Object? Why would I ever need one? Do I need a Mock Object Framework?
ANSWER:
Object Mocking is used to keep dependencies out of your unit test. Sometimes you'll have a test like "SelectPerson" which will select a person from the database and return a Person object. To do this, you would normally need a dependency on the database, however with object mocking you can simulate the interaction with the database with a mock framework, so it might return a dataset which looks like one returned from the database and you can then test your code to ensure that it handles translating a dataset to a person object, rather than using it to test that a connection to the database exists. | [
"testing",
"mocking"
] | 45 | 36 | 11,657 | 9 | 0 | 2008-08-07T02:37:07.807000 | 2008-08-07T02:43:59.340000 |
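A hand-rolled illustration of the SelectPerson scenario in the answer: the database access sits behind an interface, and the test substitutes a fake implementation that returns canned data, so the test exercises the data-to-Person translation without any database. Every type and member name here is invented for the example; in practice a mock framework (Moq, Rhino Mocks, etc.) can generate the fake for you.

```csharp
// Hand-rolled mock example; all type and member names are invented.
using NUnit.Framework;

public class Person
{
    public string Name { get; set; }
}

public interface IPersonData
{
    // In production this would hit the database.
    string FetchName(int personId);
}

public class PersonService
{
    private readonly IPersonData _data;
    public PersonService(IPersonData data) { _data = data; }

    public Person SelectPerson(int personId)
    {
        return new Person { Name = _data.FetchName(personId) };
    }
}

// The mock object: no database involved, just canned data.
public class FakePersonData : IPersonData
{
    public string FetchName(int personId) { return "Ada Lovelace"; }
}

[TestFixture]
public class PersonServiceTests
{
    [Test]
    public void SelectPerson_TranslatesDataIntoPersonObject()
    {
        var service = new PersonService(new FakePersonData());
        Assert.AreEqual("Ada Lovelace", service.SelectPerson(1).Name);
    }
}
```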
4,335 | 13,097 | High availability | Is there anyway to configure a WCF service with a failover endpoint if the primary endpoint dies? Kind of like being able to specify a failover server in a SQL cluster. Specifically I am using the TCP/IP binding for speed, but on the rare occurrence that the machine is not available I would like to redirect traffic to the failover server. Not too bothered about losing messages. I'd just prefer not to write the code to handle re-routing. | You need to use a layer 4 load balancer in front of the two endpoints. Prob best to stick with a dedicated piece of hardware. | High availability Is there anyway to configure a WCF service with a failover endpoint if the primary endpoint dies? Kind of like being able to specify a failover server in a SQL cluster. Specifically I am using the TCP/IP binding for speed, but on the rare occurrence that the machine is not available I would like to redirect traffic to the failover server. Not too bothered about losing messages. I'd just prefer not to write the code to handle re-routing. | TITLE:
High availability
QUESTION:
Is there any way to configure a WCF service with a failover endpoint if the primary endpoint dies? Kind of like being able to specify a failover server in a SQL cluster. Specifically, I am using the TCP/IP binding for speed, but on the rare occurrence that the machine is not available I would like to redirect traffic to the failover server. Not too bothered about losing messages. I'd just prefer not to write the code to handle re-routing.
ANSWER:
You need to use a layer 4 load balancer in front of the two endpoints. Probably best to stick with a dedicated piece of hardware.
"c#",
".net",
"wcf",
"soa"
] | 20 | 10 | 3,771 | 4 | 0 | 2008-08-07T02:50:13.320000 | 2008-08-16T11:38:55.397000 |
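The answer above recommends solving this in hardware. Purely for illustration, if you did decide to handle failover in client code, a rough sketch might try a secondary endpoint when the primary fails; the contract and addresses below are made up:

using System;
using System.ServiceModel;

[ServiceContract]
public interface IOrderService
{
    [OperationContract]
    int GetOrderCount();
}

public static class FailoverClient
{
    // Both addresses are placeholders; real ones would come from configuration.
    static readonly string[] Endpoints =
    {
        "net.tcp://primary:9000/orders",
        "net.tcp://standby:9000/orders"
    };

    public static int GetOrderCount()
    {
        foreach (string address in Endpoints)
        {
            var factory = new ChannelFactory<IOrderService>(
                new NetTcpBinding(), new EndpointAddress(address));
            try
            {
                IOrderService channel = factory.CreateChannel();
                return channel.GetOrderCount();   // success -- use this endpoint
                // (cleanup of the channel/factory is omitted for brevity)
            }
            catch (CommunicationException)
            {
                factory.Abort();                  // primary unreachable -- try the next endpoint
            }
        }
        throw new InvalidOperationException("No endpoint available.");
    }
}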
4,347 | 4,597 | Where is a good place to start programming GUIs for windows? | I have experience writing console and network client/server applications in C and C++, but I know next to nothing about using the win32 visual API, MFC, Qt, wxWidgets, etc. Where is a good place to start, and what method should I specialize in, so as to be future ready and robust? | This is a rather broad question, as programming GUI applications in Windows can be done in so many ways. There are two main parts to developing any GUI app: the language and the API/framework. Considering you're interested in learning to build Windows GUI apps, the language isn't really a point of focus for you. Hence, you should pick a language you already know and work with a framework or API that can be harnessed by your chosen language. If you want to use C you're pretty much restricted to dealing with the Win32 API yourself, in which case reading Petzold or Richter would be great places to start. The Win32 API can be quite daunting, but it's well worth the effort to learn (imho). There are plenty of tutorials on Win32 on the web, and there's always MSDN, with a complete reference/guide to the Win32 API. Make sure you cover not just the API, but other areas such as resources/dialogs as they are building blocks for your Win32 application. If you want to use C++ you have all of the options that you have when using C plus a few others. I'd recommend going with the Win32 API directly, and then moving on to a known framework such as MFC, Qt, wxWindows or GTK so that you can spend less time working with boilerplate code and instead focus on writing your application logic. The last 3 options I just listed have the added benefit of being cross-platform, so you don't have to worry too much about platform-specific issues. Given that you said you want to work with Windows, I'll assume you're keen to focus on that rather than cross-platform -- so go with MFC, but spend some time with the Win32 API first to get familiar with some of the concepts. When dealing with MFC and the Win32 API, it's a good idea to try and get a solid understanding of the terminology prior to writing code. For example, you need to understand what the message pump is, and how it works. You need to know about concepts such as " owner-drawn controls", and subclassing. When you understand these things (and more), you'll find it easier to work with MFC because it uses similar terminology in its class interfaces (eg. you need to know what "translate messages" means before you can understand how and when to use PreTranslateMessage ). You could also use Managed C++ to write.NET GUI applications, but I've read in a few places that Managed C++ wasn't really intended to be used in this manner. Instead it should be used as a gateway between native/unmanaged code and managed code. If you're using.NET it's best to use a.NET language such as VB.NET or C# to build your GUIs. So if you are going to use.NET, you currently have the choice of the WinForms library, or WPF. I personally feel that you'd be wasting time learning to build WinForms applications given that WPF is designed to replace it. Over time WPF will become more prevelant and Winforms will most likely die off. WPF has a much richer API set, and doesn't suffer from many of the limitations that Winforms does. If you do choose this route, however, you'll no doubt have to learn XAML, which is a markup language that drives WPF applications. This technology is coming of age, and there are many great places to learn about it. 
First, there are sites such as LearnWPF, and DrWPF which have some really great articles. Secondly, there are plenty of quality books on the topic. So, to sum up, once you've picked your language and tech, the path is actually quite easy. Just pick up a book or two, read some blogs, get into some code samples.. and most importantly... write code. Keep writing, keep making mistakes, and keep learning from them. As a final note... In other words, Silverlight. If you don't want to go the MS route you might give Adobe's Flash/Flex a look see. Both Silverlight and Flash/Flex build RIA's. Which I think is where we are headed. They days of Office like apps are numbered I don't agree at all. Silverlight is not the same as WPF. Silverlight is web-specific, and only has a subset of WPF's features. Given that the question asks for Windows GUI apps, Flash/Flex Rich Internet Apps are not really a fitting suggestion. I also don't agree that the days of Rich Client Applications (such as office) are numbered at all. I hope that helps. Good luck:) | Where is a good place to start programming GUIs for windows? I have experience writing console and network client/server applications in C and C++, but I know next to nothing about using the win32 visual API, MFC, Qt, wxWidgets, etc. Where is a good place to start, and what method should I specialize in, so as to be future ready and robust? | TITLE:
Where is a good place to start programming GUIs for windows?
QUESTION:
I have experience writing console and network client/server applications in C and C++, but I know next to nothing about using the win32 visual API, MFC, Qt, wxWidgets, etc. Where is a good place to start, and what method should I specialize in, so as to be future ready and robust?
ANSWER:
This is a rather broad question, as programming GUI applications in Windows can be done in so many ways. There are two main parts to developing any GUI app: the language and the API/framework. Considering you're interested in learning to build Windows GUI apps, the language isn't really a point of focus for you. Hence, you should pick a language you already know and work with a framework or API that can be harnessed by your chosen language. If you want to use C you're pretty much restricted to dealing with the Win32 API yourself, in which case reading Petzold or Richter would be great places to start. The Win32 API can be quite daunting, but it's well worth the effort to learn (imho). There are plenty of tutorials on Win32 on the web, and there's always MSDN, with a complete reference/guide to the Win32 API. Make sure you cover not just the API, but other areas such as resources/dialogs as they are building blocks for your Win32 application. If you want to use C++ you have all of the options that you have when using C plus a few others. I'd recommend going with the Win32 API directly, and then moving on to a known framework such as MFC, Qt, wxWindows or GTK so that you can spend less time working with boilerplate code and instead focus on writing your application logic. The last 3 options I just listed have the added benefit of being cross-platform, so you don't have to worry too much about platform-specific issues. Given that you said you want to work with Windows, I'll assume you're keen to focus on that rather than cross-platform -- so go with MFC, but spend some time with the Win32 API first to get familiar with some of the concepts. When dealing with MFC and the Win32 API, it's a good idea to try and get a solid understanding of the terminology prior to writing code. For example, you need to understand what the message pump is, and how it works. You need to know about concepts such as "owner-drawn controls", and subclassing. When you understand these things (and more), you'll find it easier to work with MFC because it uses similar terminology in its class interfaces (e.g. you need to know what "translate messages" means before you can understand how and when to use PreTranslateMessage). You could also use Managed C++ to write .NET GUI applications, but I've read in a few places that Managed C++ wasn't really intended to be used in this manner. Instead it should be used as a gateway between native/unmanaged code and managed code. If you're using .NET it's best to use a .NET language such as VB.NET or C# to build your GUIs. So if you are going to use .NET, you currently have the choice of the WinForms library, or WPF. I personally feel that you'd be wasting time learning to build WinForms applications given that WPF is designed to replace it. Over time WPF will become more prevalent and WinForms will most likely die off. WPF has a much richer API set, and doesn't suffer from many of the limitations that WinForms does. If you do choose this route, however, you'll no doubt have to learn XAML, which is a markup language that drives WPF applications. This technology is coming of age, and there are many great places to learn about it. First, there are sites such as LearnWPF and DrWPF, which have some really great articles. Secondly, there are plenty of quality books on the topic. So, to sum up, once you've picked your language and tech, the path is actually quite easy. Just pick up a book or two, read some blogs, get into some code samples... and most importantly... write code.
Keep writing, keep making mistakes, and keep learning from them. As a final note, regarding the suggestion "In other words, Silverlight. If you don't want to go the MS route you might give Adobe's Flash/Flex a look-see. Both Silverlight and Flash/Flex build RIAs, which I think is where we are headed. The days of Office-like apps are numbered": I don't agree at all. Silverlight is not the same as WPF. Silverlight is web-specific, and only has a subset of WPF's features. Given that the question asks for Windows GUI apps, Flash/Flex Rich Internet Apps are not really a fitting suggestion. I also don't agree that the days of Rich Client Applications (such as Office) are numbered at all. I hope that helps. Good luck :)
"winapi",
"qt",
"mfc"
] | 26 | 43 | 4,457 | 9 | 0 | 2008-08-07T03:06:19.870000 | 2008-08-07T11:24:54.997000 |
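To anchor the terminology used in the answer above (WinMain, window procedure, message pump), here is a minimal Win32 skeleton in C. It is a bare-bones sketch, not a recommended application structure:

#include <windows.h>

/* Minimal Win32 skeleton: register a window class, create a window,
   and run the message pump the answer refers to. */
static LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    switch (msg)
    {
    case WM_DESTROY:
        PostQuitMessage(0);
        return 0;
    default:
        return DefWindowProc(hwnd, msg, wParam, lParam);
    }
}

int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrev, LPSTR lpCmdLine, int nCmdShow)
{
    WNDCLASS wc = {0};
    HWND hwnd;
    MSG msg;

    wc.lpfnWndProc   = WndProc;
    wc.hInstance     = hInstance;
    wc.lpszClassName = TEXT("MinimalWindow");
    wc.hCursor       = LoadCursor(NULL, IDC_ARROW);
    wc.hbrBackground = (HBRUSH)(COLOR_WINDOW + 1);
    RegisterClass(&wc);

    hwnd = CreateWindow(TEXT("MinimalWindow"), TEXT("Hello Win32"),
                        WS_OVERLAPPEDWINDOW, CW_USEDEFAULT, CW_USEDEFAULT,
                        400, 300, NULL, NULL, hInstance, NULL);
    ShowWindow(hwnd, nCmdShow);

    while (GetMessage(&msg, NULL, 0, 0) > 0)   /* the message pump */
    {
        TranslateMessage(&msg);
        DispatchMessage(&msg);
    }
    return (int)msg.wParam;
}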
4,363 | 4,386 | What is the best way to do unit testing for ASP.NET 2.0 web pages? | Any suggestions? Using visual studio in C#. Are there any specific tools to use or methods to approach this? Update: Sorry, I should have been a little more specific. I am using ASP.Net 2.0 and was looking more for a tool like jUnit for Java. I took a look at NUnit and NUnitAsp and that looks very promising. And I didn't even know that Visual Studio Pro has a testing suite, so I'll look at all of these options (I've just started using Visual Studio/Asp.net/C# this summer). | Boy, that's a pretty general question. I'll do my best, but be prepared to see me miss by a mile. Assumptions You are using ASP.NET, not plain ASP You don't really want to test your web pages, but the logic behind them. Unit testing the actual.ASPX pages is rather painful, but there are frameworks out there to do it. NUnitAsp is one. The first thing to do is to organize (or plan) your code so that it can be tested. The two most popular design patterns for this at the time seem to be MVP and MVC. Both separate the logic of the application away from the view so that you can test the logic without the view (web pages) getting in your way. Either MVP or MVC will be effective. MVC has the advantage of having a Microsoft framework almost ready to go. Once you've selected a framework pattern that encourages testability, you need to use a unit testing tool. NUnit is a good starting point. Visual Studio Professional has a testing suite built it, but NUnit + TestDrive.NET also works in the IDE. That's sort of a shotgun blast of information. I hope some if it hits. The Pragmatic Bookshelf has a good book covering the topic. | What is the best way to do unit testing for ASP.NET 2.0 web pages? Any suggestions? Using visual studio in C#. Are there any specific tools to use or methods to approach this? Update: Sorry, I should have been a little more specific. I am using ASP.Net 2.0 and was looking more for a tool like jUnit for Java. I took a look at NUnit and NUnitAsp and that looks very promising. And I didn't even know that Visual Studio Pro has a testing suite, so I'll look at all of these options (I've just started using Visual Studio/Asp.net/C# this summer). | TITLE:
What is the best way to do unit testing for ASP.NET 2.0 web pages?
QUESTION:
Any suggestions? Using visual studio in C#. Are there any specific tools to use or methods to approach this? Update: Sorry, I should have been a little more specific. I am using ASP.Net 2.0 and was looking more for a tool like jUnit for Java. I took a look at NUnit and NUnitAsp and that looks very promising. And I didn't even know that Visual Studio Pro has a testing suite, so I'll look at all of these options (I've just started using Visual Studio/Asp.net/C# this summer).
ANSWER:
Boy, that's a pretty general question. I'll do my best, but be prepared to see me miss by a mile. Assumptions: you are using ASP.NET, not plain ASP, and you don't really want to test your web pages, but the logic behind them. Unit testing the actual .ASPX pages is rather painful, but there are frameworks out there to do it. NUnitAsp is one. The first thing to do is to organize (or plan) your code so that it can be tested. The two most popular design patterns for this at the time seem to be MVP and MVC. Both separate the logic of the application away from the view so that you can test the logic without the view (web pages) getting in your way. Either MVP or MVC will be effective. MVC has the advantage of having a Microsoft framework almost ready to go. Once you've selected a framework pattern that encourages testability, you need to use a unit testing tool. NUnit is a good starting point. Visual Studio Professional has a testing suite built in, but NUnit + TestDriven.NET also works in the IDE. That's sort of a shotgun blast of information. I hope some of it hits. The Pragmatic Bookshelf has a good book covering the topic.
"c#",
"asp.net",
"visual-studio",
"unit-testing"
] | 41 | 34 | 10,933 | 7 | 0 | 2008-08-07T03:23:45.983000 | 2008-08-07T04:23:38 |
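A minimal NUnit sketch of the approach described above: logic is pulled out of the page so it can be tested directly. The PriceCalculator class is invented for the example:

using NUnit.Framework;

// Logic pulled out of the code-behind so it can be tested without a web server.
public class PriceCalculator
{
    public decimal ApplyDiscount(decimal price, int quantity)
    {
        return quantity >= 10 ? price * 0.9m : price;
    }
}

[TestFixture]
public class PriceCalculatorTests
{
    [Test]
    public void BulkOrdersGetTenPercentOff()
    {
        var calc = new PriceCalculator();
        Assert.AreEqual(90m, calc.ApplyDiscount(100m, 10));
    }

    [Test]
    public void SmallOrdersPayFullPrice()
    {
        var calc = new PriceCalculator();
        Assert.AreEqual(100m, calc.ApplyDiscount(100m, 1));
    }
}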
4,369 | 4,385 | How to include PHP files that require an absolute path? | I have a directory structure like the following; script.php inc/include1.php inc/include2.php objects/object1.php objects/object2.php soap/soap.php Now, I use those objects in both script.php and /soap/soap.php, I could move them, but I want the directory structure like that for a specific reason. When executing script.php the include path is inc/include.php and when executing /soap/soap.php it's../inc, absolute paths work, /mnt/webdev/[project name]/inc/include1.php... But it's an ugly solution if I ever want to move the directory to a different location. So is there a way to use relative paths, or a way to programmatically generate the "/mnt/webdev/[project name]/"? | This should work $root = realpath($_SERVER["DOCUMENT_ROOT"]);
include "$root/inc/include1.php"; Edit: added imporvement by aussieviking | How to include PHP files that require an absolute path? I have a directory structure like the following; script.php inc/include1.php inc/include2.php objects/object1.php objects/object2.php soap/soap.php Now, I use those objects in both script.php and /soap/soap.php, I could move them, but I want the directory structure like that for a specific reason. When executing script.php the include path is inc/include.php and when executing /soap/soap.php it's../inc, absolute paths work, /mnt/webdev/[project name]/inc/include1.php... But it's an ugly solution if I ever want to move the directory to a different location. So is there a way to use relative paths, or a way to programmatically generate the "/mnt/webdev/[project name]/"? | TITLE:
How to include PHP files that require an absolute path?
QUESTION:
I have a directory structure like the following: script.php inc/include1.php inc/include2.php objects/object1.php objects/object2.php soap/soap.php Now, I use those objects in both script.php and /soap/soap.php, I could move them, but I want the directory structure like that for a specific reason. When executing script.php the include path is inc/include.php and when executing /soap/soap.php it's ../inc, absolute paths work, /mnt/webdev/[project name]/inc/include1.php... But it's an ugly solution if I ever want to move the directory to a different location. So is there a way to use relative paths, or a way to programmatically generate the "/mnt/webdev/[project name]/"?
ANSWER:
This should work $root = realpath($_SERVER["DOCUMENT_ROOT"]);
include "$root/inc/include1.php"; Edit: added imporvement by aussieviking | [
"php"
] | 99 | 157 | 145,225 | 11 | 0 | 2008-08-07T03:46:47.713000 | 2008-08-07T04:20:47.647000 |
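A common variation on the answer above is to compute the project root once and include everything relative to it; dirname(__FILE__) works even when DOCUMENT_ROOT does not point at the project. The config.php layout below is an assumption based on the question's directory structure:

<?php
// config.php (lives in the project root; the layout is assumed from the question)
define('PROJECT_ROOT', dirname(__FILE__)); // __DIR__ on PHP 5.3+

// soap/soap.php (any script, however deep, first pulls in config.php relative to itself)
require_once dirname(__FILE__) . '/../config.php';
require_once PROJECT_ROOT . '/inc/include1.php';
require_once PROJECT_ROOT . '/objects/object1.php';
?>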
4,371 | 4,376 | How do I retrieve my MySQL username and password? | I lost my MySQL username and password. How do I retrieve it? | Stop the MySQL process. Start the MySQL process with the --skip-grant-tables option. Start the MySQL console client with the -u root option. List all the users; SELECT * FROM mysql.user; Reset password; UPDATE mysql.user SET Password=PASSWORD('[password]') WHERE User='[username]'; But DO NOT FORGET to Stop the MySQL process Start the MySQL Process normally (i.e. without the --skip-grant-tables option) when you are finished. Otherwise, your database's security could be compromised. | How do I retrieve my MySQL username and password? I lost my MySQL username and password. How do I retrieve it? | TITLE:
How do I retrieve my MySQL username and password?
QUESTION:
I lost my MySQL username and password. How do I retrieve it?
ANSWER:
Stop the MySQL process. Start the MySQL process with the --skip-grant-tables option. Start the MySQL console client with the -u root option. List all the users: SELECT * FROM mysql.user; Reset the password: UPDATE mysql.user SET Password=PASSWORD('[password]') WHERE User='[username]'; But DO NOT FORGET to stop the MySQL process and start it normally (i.e. without the --skip-grant-tables option) when you are finished. Otherwise, your database's security could be compromised.
"mysql",
"mysql-workbench"
] | 192 | 204 | 870,872 | 10 | 0 | 2008-08-07T03:54:14.593000 | 2008-08-07T04:02:17.090000 |
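The steps in the answer above, written out as a rough shell/SQL session. Service commands and file locations vary by platform, and on newer MySQL versions the password statement differs, so treat this as a sketch only:

# 1. Stop the server, then restart it without grant checks (commands vary by platform):
/etc/init.d/mysql stop
mysqld_safe --skip-grant-tables &

# 2. Connect as root (no password is required while grants are skipped):
mysql -u root

-- 3. Inside the MySQL console:
SELECT User, Host FROM mysql.user;
UPDATE mysql.user SET Password = PASSWORD('new-password') WHERE User = 'root';
FLUSH PRIVILEGES;
EXIT;

# 4. Shut down the unprotected instance and start MySQL normally again:
mysqladmin -u root -p shutdown
/etc/init.d/mysql start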
4,393 | 4,401 | Drop all tables whose names begin with a certain string | How can I drop all tables whose names begin with a given string? I think this can be done with some dynamic SQL and the INFORMATION_SCHEMA tables. | You may need to modify the query to include the owner if there's more than one in the database. DECLARE @cmd varchar(4000) DECLARE cmds CURSOR FOR SELECT 'drop table [' + Table_Name + ']' FROM INFORMATION_SCHEMA.TABLES WHERE Table_Name LIKE 'prefix%'
OPEN cmds WHILE 1 = 1 BEGIN FETCH cmds INTO @cmd IF @@fetch_status!= 0 BREAK EXEC(@cmd) END CLOSE cmds; DEALLOCATE cmds This is cleaner than using a two-step approach of generate script plus run. But one advantage of the script generation is that it gives you the chance to review the entirety of what's going to be run before it's actually run. I know that if I were going to do this against a production database, I'd be as careful as possible. Edit Code sample fixed. | Drop all tables whose names begin with a certain string How can I drop all tables whose names begin with a given string? I think this can be done with some dynamic SQL and the INFORMATION_SCHEMA tables. | TITLE:
Drop all tables whose names begin with a certain string
QUESTION:
How can I drop all tables whose names begin with a given string? I think this can be done with some dynamic SQL and the INFORMATION_SCHEMA tables.
ANSWER:
You may need to modify the query to include the owner if there's more than one in the database. DECLARE @cmd varchar(4000) DECLARE cmds CURSOR FOR SELECT 'drop table [' + Table_Name + ']' FROM INFORMATION_SCHEMA.TABLES WHERE Table_Name LIKE 'prefix%'
OPEN cmds WHILE 1 = 1 BEGIN FETCH cmds INTO @cmd IF @@fetch_status != 0 BREAK EXEC(@cmd) END CLOSE cmds; DEALLOCATE cmds This is cleaner than using a two-step approach of generate script plus run. But one advantage of the script generation is that it gives you the chance to review the entirety of what's going to be run before it's actually run. I know that if I were going to do this against a production database, I'd be as careful as possible. Edit: Code sample fixed.
"sql",
"sql-server",
"dynamic-sql"
] | 181 | 181 | 276,753 | 18 | 0 | 2008-08-07T04:41:37.713000 | 2008-08-07T04:53:29.857000 |
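The "generate a script, review it, then run it" alternative mentioned in the answer can be as simple as the following T-SQL sketch; adjust the prefix and schema handling to your database:

-- Step 1: generate the DROP statements so you can inspect them first.
SELECT 'DROP TABLE [' + TABLE_SCHEMA + '].[' + TABLE_NAME + '];'
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_TYPE = 'BASE TABLE'
  AND TABLE_NAME LIKE 'prefix%';

-- Step 2: copy the rows you are happy with into a new query window and execute them.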
4,416 | 4,427 | Where can I get the Windows Workflow "wca.exe" application? | I am walking through the MS Press Windows Workflow Step-by-Step book and in chapter 8 it mentions a tool with the filename "wca.exe". This is supposed to be able to generate workflow communication helper classes based on an interface you provide it. I can't find that file. I thought it would be in the latest.NET 3.5 SDK, but I just downloaded and fully installed, and it's not there. Also, some MSDN forum posts had links posted that just go to 404s. So, where can I find wca.exe? | Should be part of the.NET 3 SDK (and later version as well). If you've already installed this, the path might look something like C:\Program Files\Microsoft SDKs\Windows\v6.0\Bin\wca.exe More info on Guy Burstein's blog. | Where can I get the Windows Workflow "wca.exe" application? I am walking through the MS Press Windows Workflow Step-by-Step book and in chapter 8 it mentions a tool with the filename "wca.exe". This is supposed to be able to generate workflow communication helper classes based on an interface you provide it. I can't find that file. I thought it would be in the latest.NET 3.5 SDK, but I just downloaded and fully installed, and it's not there. Also, some MSDN forum posts had links posted that just go to 404s. So, where can I find wca.exe? | TITLE:
Where can I get the Windows Workflow "wca.exe" application?
QUESTION:
I am walking through the MS Press Windows Workflow Step-by-Step book and in chapter 8 it mentions a tool with the filename "wca.exe". This is supposed to be able to generate workflow communication helper classes based on an interface you provide it. I can't find that file. I thought it would be in the latest .NET 3.5 SDK, but I just downloaded and fully installed it, and it's not there. Also, some MSDN forum posts had links posted that just go to 404s. So, where can I find wca.exe?
ANSWER:
Should be part of the .NET 3 SDK (and later versions as well). If you've already installed this, the path might look something like C:\Program Files\Microsoft SDKs\Windows\v6.0\Bin\wca.exe. More info on Guy Burstein's blog.
".net-3.5"
] | 5 | 1 | 904 | 2 | 0 | 2008-08-07T05:13:31.020000 | 2008-08-07T05:40:18.733000 |
4,418 | 4,431 | How do I update Ruby Gems from behind a Proxy (ISA-NTLM) | The firewall I'm behind is running Microsoft ISA server in NTLM-only mode. Hash anyone have success getting their Ruby gems to install/update via Ruby SSPI gem or other method?... or am I just being lazy? Note: rubysspi-1.2.4 does not work. This also works for "igem", part of the IronRuby project | I wasn't able to get mine working from the command-line switch but I have been able to do it just by setting my HTTP_PROXY environment variable. (Note that case seems to be important). I have a batch file that has a line like this in it: SET HTTP_PROXY=http://%USER%:%PASSWORD%@%SERVER%:%PORT% I set the four referenced variables before I get to this line obviously. As an example if my username is "wolfbyte", my password is "secret" and my proxy is called "pigsy" and operates on port 8080: SET HTTP_PROXY=http://wolfbyte:secret@pigsy:8080 You might want to be careful how you manage that because it stores your password in plain text in the machine's session but I don't think it should be too much of an issue. | How do I update Ruby Gems from behind a Proxy (ISA-NTLM) The firewall I'm behind is running Microsoft ISA server in NTLM-only mode. Hash anyone have success getting their Ruby gems to install/update via Ruby SSPI gem or other method?... or am I just being lazy? Note: rubysspi-1.2.4 does not work. This also works for "igem", part of the IronRuby project | TITLE:
How do I update Ruby Gems from behind a Proxy (ISA-NTLM)
QUESTION:
The firewall I'm behind is running Microsoft ISA Server in NTLM-only mode. Has anyone had success getting their Ruby gems to install/update via the Ruby SSPI gem or another method?... or am I just being lazy? Note: rubysspi-1.2.4 does not work. This also works for "igem", part of the IronRuby project.
ANSWER:
I wasn't able to get mine working from the command-line switch but I have been able to do it just by setting my HTTP_PROXY environment variable. (Note that case seems to be important). I have a batch file that has a line like this in it: SET HTTP_PROXY=http://%USER%:%PASSWORD%@%SERVER%:%PORT% I set the four referenced variables before I get to this line obviously. As an example if my username is "wolfbyte", my password is "secret" and my proxy is called "pigsy" and operates on port 8080: SET HTTP_PROXY=http://wolfbyte:secret@pigsy:8080 You might want to be careful how you manage that because it stores your password in plain text in the machine's session but I don't think it should be too much of an issue. | [
"ruby",
"proxy",
"rubygems",
"ironruby"
] | 240 | 218 | 267,382 | 20 | 0 | 2008-08-07T05:21:16.807000 | 2008-08-07T05:49:00.557000 |
4,430 | 12,585 | How to easily consume a web service from PHP | Is there available any tool for PHP which can be used to generate code for consuming a web service based on its WSDL? Something comparable to clicking "Add Web Reference" in Visual Studio or the Eclipse plugin which does the same thing for Java. | I've had great success with wsdl2php. It will automatically create wrapper classes for all objects and methods used in your web service. | How to easily consume a web service from PHP Is there available any tool for PHP which can be used to generate code for consuming a web service based on its WSDL? Something comparable to clicking "Add Web Reference" in Visual Studio or the Eclipse plugin which does the same thing for Java. | TITLE:
How to easily consume a web service from PHP
QUESTION:
Is there any tool available for PHP which can be used to generate code for consuming a web service based on its WSDL? Something comparable to clicking "Add Web Reference" in Visual Studio, or the Eclipse plugin which does the same thing for Java.
ANSWER:
I've had great success with wsdl2php. It will automatically create wrapper classes for all objects and methods used in your web service. | [
"php",
"web-services",
"visual-studio",
"wsdl"
] | 62 | 21 | 174,079 | 6 | 0 | 2008-08-07T05:48:33.570000 | 2008-08-15T18:36:14.227000 |
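wsdl2php builds its wrapper classes on top of PHP's built-in SoapClient, so even without the generator a raw call looks roughly like this. The service URL and the GetQuote operation are placeholders:

<?php
// Raw SoapClient usage; wsdl2php essentially wraps calls like these in typed classes.
$client = new SoapClient('http://example.com/service?wsdl');

// Inspect what the WSDL exposes:
print_r($client->__getFunctions());

// Call an operation (name and parameters depend on the actual WSDL):
$result = $client->GetQuote(array('symbol' => 'MSFT'));
var_dump($result);
?>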
4,432 | 4,441 | CSV string handling | Typical way of creating a CSV string (pseudocode): Create a CSV container object (like a StringBuilder in C#). Loop through the strings you want to add appending a comma after each one. After the loop, remove that last superfluous comma. Code sample: public string ReturnAsCSV(ContactList contactList) { StringBuilder sb = new StringBuilder(); foreach (Contact c in contactList) { sb.Append(c.Name + ","); }
sb.Remove(sb.Length - 1, 1); //sb.Replace(",", "", sb.Length - 1, 1)
return sb.ToString(); } I like the idea of adding the comma by checking if the container is empty, but doesn't that mean more processing as it needs to check the length of the string on each occurrence? I feel that there should be an easier/cleaner/more efficient way of removing that last comma. Any ideas? | You could use LINQ to Objects: string [] strings = contactList.Select(c => c.Name).ToArray(); string csv = string.Join(",", strings); Obviously that could all be done in one line, but it's a bit clearer on two. | CSV string handling Typical way of creating a CSV string (pseudocode): Create a CSV container object (like a StringBuilder in C#). Loop through the strings you want to add appending a comma after each one. After the loop, remove that last superfluous comma. Code sample: public string ReturnAsCSV(ContactList contactList) { StringBuilder sb = new StringBuilder(); foreach (Contact c in contactList) { sb.Append(c.Name + ","); }
sb.Remove(sb.Length - 1, 1); //sb.Replace(",", "", sb.Length - 1, 1)
return sb.ToString(); } I like the idea of adding the comma by checking if the container is empty, but doesn't that mean more processing as it needs to check the length of the string on each occurrence? I feel that there should be an easier/cleaner/more efficient way of removing that last comma. Any ideas? | TITLE:
CSV string handling
QUESTION:
Typical way of creating a CSV string (pseudocode): Create a CSV container object (like a StringBuilder in C#). Loop through the strings you want to add appending a comma after each one. After the loop, remove that last superfluous comma. Code sample: public string ReturnAsCSV(ContactList contactList) { StringBuilder sb = new StringBuilder(); foreach (Contact c in contactList) { sb.Append(c.Name + ","); }
sb.Remove(sb.Length - 1, 1); //sb.Replace(",", "", sb.Length - 1, 1)
return sb.ToString(); } I like the idea of adding the comma by checking if the container is empty, but doesn't that mean more processing as it needs to check the length of the string on each occurrence? I feel that there should be an easier/cleaner/more efficient way of removing that last comma. Any ideas?
ANSWER:
You could use LINQ to Objects: string[] strings = contactList.Select(c => c.Name).ToArray(); string csv = string.Join(",", strings); Obviously that could all be done in one line, but it's a bit clearer on two.
"c#",
"csv"
] | 21 | 21 | 9,598 | 13 | 0 | 2008-08-07T05:49:04.253000 | 2008-08-07T05:56:15.957000 |
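Putting the answer's suggestion back into the shape of the question's method, a sketch might look like this; it assumes a Contact type with a Name property (standing in for the question's) and an enumerable source, and ignores CSV quoting/escaping:

using System.Collections.Generic;
using System.Linq;

// Stand-in for the question's Contact type.
public class Contact { public string Name { get; set; } }

public class ContactCsvWriter
{
    // string.Join only inserts separators between items, so there is no trailing
    // comma to trim; real CSV output would still need quoting/escaping of names.
    public string ReturnAsCSV(IEnumerable<Contact> contactList)
    {
        return string.Join(",", contactList.Select(c => c.Name).ToArray());
    }
}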
4,434 | 4,443 | Can I configure Visual Studio NOT to change StartUp Project every time I open a file from one of the projects? | Let's say that there is a solution that contains two projects (Project1 and Project2). Project1 is set as a StartUp Project (its name is displayed in a bold font). I double-click some file in Project2 to open it. The file opens, but something else happens too - Project2 gets set as a StartUp Project. I tried to find an option in configuration to change it, but I found none. Can this feature (though it's more like a bug to me) be disabled? | The way to select a startup project is described in Sara Ford's blog "Visual Studio Tip of the Day " (highly recommended). She has a post there about setting up StartUp projects. Essentially there are 2 ways, the easiest one being right-clicking on the desired project, and choosing "Set As StartUp Project". That prevents other projects from becoming the StartUp project, even if you click on one their files. | Can I configure Visual Studio NOT to change StartUp Project every time I open a file from one of the projects? Let's say that there is a solution that contains two projects (Project1 and Project2). Project1 is set as a StartUp Project (its name is displayed in a bold font). I double-click some file in Project2 to open it. The file opens, but something else happens too - Project2 gets set as a StartUp Project. I tried to find an option in configuration to change it, but I found none. Can this feature (though it's more like a bug to me) be disabled? | TITLE:
Can I configure Visual Studio NOT to change StartUp Project every time I open a file from one of the projects?
QUESTION:
Let's say that there is a solution that contains two projects (Project1 and Project2). Project1 is set as a StartUp Project (its name is displayed in a bold font). I double-click some file in Project2 to open it. The file opens, but something else happens too - Project2 gets set as a StartUp Project. I tried to find an option in configuration to change it, but I found none. Can this feature (though it's more like a bug to me) be disabled?
ANSWER:
The way to select a startup project is described in Sara Ford's blog "Visual Studio Tip of the Day" (highly recommended). She has a post there about setting up StartUp projects. Essentially there are 2 ways, the easiest one being right-clicking on the desired project and choosing "Set As StartUp Project". That prevents other projects from becoming the StartUp project, even if you click on one of their files.
".net",
"visual-studio",
"ide"
] | 23 | 19 | 10,141 | 5 | 0 | 2008-08-07T05:50:57.013000 | 2008-08-07T05:58:25.810000 |
4,458 | 7,070 | Domain Specific Language resources | I was just listening to some older.Net Rocks! episodes, and I found #329 on DSLs to be interesting. My problem is that I can't find any good online resources for people trying to learn this technology. I get the basics of the creating new designers, but the MS docs on the T4 engine used by the DSL tools and then how to integrate the templates with the DSL models are lacking. Does anyone know of some good introductory resources for the MS DSL tools? | The architects of the DSL Tools team wrote a book, Domain-Specific Development with Visual Studio DSL Tools. The book's website has some other links and resources. | Domain Specific Language resources I was just listening to some older.Net Rocks! episodes, and I found #329 on DSLs to be interesting. My problem is that I can't find any good online resources for people trying to learn this technology. I get the basics of the creating new designers, but the MS docs on the T4 engine used by the DSL tools and then how to integrate the templates with the DSL models are lacking. Does anyone know of some good introductory resources for the MS DSL tools? | TITLE:
Domain Specific Language resources
QUESTION:
I was just listening to some older .NET Rocks! episodes, and I found #329 on DSLs to be interesting. My problem is that I can't find any good online resources for people trying to learn this technology. I get the basics of creating new designers, but the MS docs on the T4 engine used by the DSL tools, and then how to integrate the templates with the DSL models, are lacking. Does anyone know of some good introductory resources for the MS DSL tools?
ANSWER:
The architects of the DSL Tools team wrote a book, Domain-Specific Development with Visual Studio DSL Tools. The book's website has some other links and resources. | [
"t4",
"dsl",
"vsx"
] | 12 | 5 | 1,517 | 12 | 0 | 2008-08-07T06:24:33.870000 | 2008-08-10T06:59:42.607000 |
4,506 | 4,527 | How to know when to send a 304 Not Modified response | I'm writing a resource handling method where I control access to various files, and I'd like to be able to make use of the browser's cache. My question is two-fold: Which are the definitive HTTP headers that I need to check in order to know for sure whether I should send a 304 response, and what am I looking for when I do check them? Additionally, are there any headers that I need to send when I initially send the file (like 'Last-Modified') as a 200 response? Some psuedo-code would probably be the most useful answer. What about the cache-control header? Can the various possible values of that affect what you send to the client (namely max-age) or should only if-modified-since be obeyed? | Here's how I implemented it. The code has been working for a bit more than a year and with multiple browsers, so I think it's pretty reliable. This is based on RFC 2616 and by observing what and when the various browsers were sending. Here's the pseudocode: server_etag = gen_etag_for_this_file(myfile) etag_from_browser = get_header("Etag")
if etag_from_browser does not exist: etag_from_browser = get_header("If-None-Match") if the browser has quoted the etag: strip the quotes (e.g. "foo" --> foo)
set server_etag into http header
if etag_from_browser matches server_etag send 304 return code to browser Here's a snippet of my server logic that handles this. /* the client should set either Etag or If-None-Match */ /* some clients quote the parm, strip quotes if so */ mketag(etag, &sb);
etagin = apr_table_get(r->headers_in, "Etag"); if (etagin == NULL) etagin = apr_table_get(r->headers_in, "If-None-Match"); if (etag!= NULL && etag[0] == '"') { int sl; sl = strlen(etag); memmove(etag, etag+1, sl+1); etag[sl-2] = 0; logit(2,"etag=:%s:",etag); }... apr_table_add(r->headers_out, "ETag", etag);... if (etagin!= NULL && strcmp(etagin, etag) == 0) { /* if the etag matches, we return a 304 */ rc = HTTP_NOT_MODIFIED; } If you want some help with etag generation post another question and I'll dig out some code that does that as well. HTH! | How to know when to send a 304 Not Modified response I'm writing a resource handling method where I control access to various files, and I'd like to be able to make use of the browser's cache. My question is two-fold: Which are the definitive HTTP headers that I need to check in order to know for sure whether I should send a 304 response, and what am I looking for when I do check them? Additionally, are there any headers that I need to send when I initially send the file (like 'Last-Modified') as a 200 response? Some psuedo-code would probably be the most useful answer. What about the cache-control header? Can the various possible values of that affect what you send to the client (namely max-age) or should only if-modified-since be obeyed? | TITLE:
How to know when to send a 304 Not Modified response
QUESTION:
I'm writing a resource handling method where I control access to various files, and I'd like to be able to make use of the browser's cache. My question is two-fold: Which are the definitive HTTP headers that I need to check in order to know for sure whether I should send a 304 response, and what am I looking for when I do check them? Additionally, are there any headers that I need to send when I initially send the file (like 'Last-Modified') as a 200 response? Some pseudo-code would probably be the most useful answer. What about the cache-control header? Can the various possible values of that affect what you send to the client (namely max-age) or should only if-modified-since be obeyed?
ANSWER:
Here's how I implemented it. The code has been working for a bit more than a year and with multiple browsers, so I think it's pretty reliable. This is based on RFC 2616 and by observing what and when the various browsers were sending. Here's the pseudocode: server_etag = gen_etag_for_this_file(myfile) etag_from_browser = get_header("Etag")
if etag_from_browser does not exist: etag_from_browser = get_header("If-None-Match") if the browser has quoted the etag: strip the quotes (e.g. "foo" --> foo)
set server_etag into http header
if etag_from_browser matches server_etag send 304 return code to browser Here's a snippet of my server logic that handles this. /* the client should set either Etag or If-None-Match */ /* some clients quote the parm, strip quotes if so */ mketag(etag, &sb);
etagin = apr_table_get(r->headers_in, "Etag"); if (etagin == NULL) etagin = apr_table_get(r->headers_in, "If-None-Match"); if (etag != NULL && etag[0] == '"') { int sl; sl = strlen(etag); memmove(etag, etag+1, sl+1); etag[sl-2] = 0; logit(2,"etag=:%s:",etag); }... apr_table_add(r->headers_out, "ETag", etag);... if (etagin != NULL && strcmp(etagin, etag) == 0) { /* if the etag matches, we return a 304 */ rc = HTTP_NOT_MODIFIED; } If you want some help with etag generation, post another question and I'll dig out some code that does that as well. HTH!
"language-agnostic",
"http"
] | 12 | 8 | 5,252 | 5 | 0 | 2008-08-07T07:54:37.013000 | 2008-08-07T08:30:16.113000 |
4,508 | 38,383 | MAPI and managed code experiences? | Using MAPI functions from within managed code is officially unsupported. Apparently, MAPI uses its own memory management and it crashes and burns within managed code (see here and here ) All I want to do is launch the default e-mail client with subject, body, AND one or more attachments. So I've been looking into MAPISendDocuments and it seems to work. But I haven't been able to gather courage to actually use the function in production code. Has anybody used this function a lot? Do you have any horror stories? PS. No, I won't shellExecute Outlook.exe with command line arguments for attachments. PPS. Attachment support is a requirement, so Mailto: solutions do not cut it for me. | Have a separate helper EXE that takes command-line params (or pipe to its StandardInput) that does what is required and call that from your main app. This keeps the MAPI stuff outside of your main app's process space. OK, you're still mixing MAPI and.NET but in a very short-lived process. The assumption is that MAPI and the CLR start causing issues with longer-running processes. We use Dmitry Streblechenko's superb Redemption Data Objects library which allows us to write such "shim" code in JScript and invoke that, which keeps the CLR and MAPI worlds in separate processes, but in a supported fashion. @Chris Fournier re. writing an unmanaged DLL. This won't work because the issue is mixing MAPI and managed code in the same process. | MAPI and managed code experiences? Using MAPI functions from within managed code is officially unsupported. Apparently, MAPI uses its own memory management and it crashes and burns within managed code (see here and here ) All I want to do is launch the default e-mail client with subject, body, AND one or more attachments. So I've been looking into MAPISendDocuments and it seems to work. But I haven't been able to gather courage to actually use the function in production code. Has anybody used this function a lot? Do you have any horror stories? PS. No, I won't shellExecute Outlook.exe with command line arguments for attachments. PPS. Attachment support is a requirement, so Mailto: solutions do not cut it for me. | TITLE:
MAPI and managed code experiences?
QUESTION:
Using MAPI functions from within managed code is officially unsupported. Apparently, MAPI uses its own memory management and it crashes and burns within managed code (see here and here ) All I want to do is launch the default e-mail client with subject, body, AND one or more attachments. So I've been looking into MAPISendDocuments and it seems to work. But I haven't been able to gather courage to actually use the function in production code. Has anybody used this function a lot? Do you have any horror stories? PS. No, I won't shellExecute Outlook.exe with command line arguments for attachments. PPS. Attachment support is a requirement, so Mailto: solutions do not cut it for me.
ANSWER:
Have a separate helper EXE that takes command-line params (or pipe to its StandardInput) that does what is required and call that from your main app. This keeps the MAPI stuff outside of your main app's process space. OK, you're still mixing MAPI and.NET but in a very short-lived process. The assumption is that MAPI and the CLR start causing issues with longer-running processes. We use Dmitry Streblechenko's superb Redemption Data Objects library which allows us to write such "shim" code in JScript and invoke that, which keeps the CLR and MAPI worlds in separate processes, but in a supported fashion. @Chris Fournier re. writing an unmanaged DLL. This won't work because the issue is mixing MAPI and managed code in the same process. | [
".net",
"email",
"pinvoke",
"mapi"
] | 13 | 8 | 4,341 | 8 | 0 | 2008-08-07T07:56:24.327000 | 2008-09-01T20:26:46.720000 |
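A sketch of the helper-EXE approach from the answer above, as seen from the managed side. The helper name and argument format are invented; the point is only that MAPI runs in a separate, short-lived process:

using System.Diagnostics;

// The main .NET app never touches MAPI; it just shells out to a tiny native helper.
public static class MailHelper
{
    public static void SendWithAttachment(string subject, string body, string attachmentPath)
    {
        var psi = new ProcessStartInfo
        {
            FileName = "MapiSendHelper.exe",   // hypothetical unmanaged helper
            Arguments = string.Format("\"{0}\" \"{1}\" \"{2}\"", subject, body, attachmentPath),
            UseShellExecute = false
        };

        using (Process p = Process.Start(psi))
        {
            p.WaitForExit();   // MAPI lives and dies inside the short-lived helper process
        }
    }
}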
4,519 | 4,691 | Using Xming X Window Server over a VPN | I have the Xming X Window Server installed on a laptop running Windows XP to connect to some UNIX development servers. It works fine when I connect directly to the company network in the office. However, it does not work when I connect to the network remotely over a VPN. When I start Xming when connected remotely none of my terminal Windows are displayed. I think it may have something to do with the DISPLAY environment variable not being set correctly to the IP address of the laptop when it is connected. I've noticed that when I do an ipconfig whilst connected remotely that my laptop has two IP addresses, the one assigned to it from the company network and the local IP address I've set up for it on my "local network" from my modem/router. Are there some configuration changes I need to make in Xming to support its use through the VPN? | Chances are it's either X authentication, the X server binding to an interface, or your DISPLAY variable. I don't use Xming myself but there are some general phenomenon to check for. One test you can do to manually verify the DISPLAY variable is correct is: Start your VPN. Run ipconfig to be sure you have the two IP addresses you mentioned (your local IP and your VPN IP). Start Xming. Run 'netstat -n' to see how it's binding to the interface. You should see something that either says localIP:6000 or VPNIP:6000. It may not be 6000 but chances are it will be something like that. If there's no VPNIP:6000 it may be binding only to your localIP or even 127.0.0.1. That will probably not work over the VPN. Check if there are some Xming settings to make it bind to other or all interfaces. If you see VPNIP:6000 or something similar, take note of what it says and remote shell into your UNIX host (hopefully something like ssh, if not whatever you have to get a text terminal). On the UNIX terminal type 'echo $DISPLAY'. If there is nothing displayed try 'export DISPLAY=VPNIP:0.0' where VPNIP is your VPN IP address and 0.0 is the port you saw in step 3 minus 6000 with.0 at the end (i.e. 6000 = 0.0, 6010 = 10.0). On the UNIX host run something like 'xclock' or 'xterm' to see if it runs. The error message should be informative. It will tell you that it either couldn't connect to the host (a connectivity problem) or authentication failed (you'll need to coordinate Xauth on your host and local machine or Xhosts on your local machine). Opening Xhosts (with + for all hosts or something similar) isn't too bad if you have a locally protected network and you're going over a VPN. Hopefully this will get you started tracking down the problem. Another option that is often useful as it works over a VPN or simple ssh connectivity is ssh tunneling or X11 forwarding over ssh. This simulates connectivity to the X server on your local box by redirecting a port on your UNIX host to the local port on your X server box. Your display will typically be something like localhost:10.0 for the local 6010 port. X can be ornery to set up but it usually works great once you get the hang of it. | Using Xming X Window Server over a VPN I have the Xming X Window Server installed on a laptop running Windows XP to connect to some UNIX development servers. It works fine when I connect directly to the company network in the office. However, it does not work when I connect to the network remotely over a VPN. When I start Xming when connected remotely none of my terminal Windows are displayed. 
I think it may have something to do with the DISPLAY environment variable not being set correctly to the IP address of the laptop when it is connected. I've noticed that when I do an ipconfig whilst connected remotely that my laptop has two IP addresses, the one assigned to it from the company network and the local IP address I've set up for it on my "local network" from my modem/router. Are there some configuration changes I need to make in Xming to support its use through the VPN? | TITLE:
Using Xming X Window Server over a VPN
QUESTION:
I have the Xming X Window Server installed on a laptop running Windows XP to connect to some UNIX development servers. It works fine when I connect directly to the company network in the office. However, it does not work when I connect to the network remotely over a VPN. When I start Xming when connected remotely none of my terminal Windows are displayed. I think it may have something to do with the DISPLAY environment variable not being set correctly to the IP address of the laptop when it is connected. I've noticed that when I do an ipconfig whilst connected remotely that my laptop has two IP addresses, the one assigned to it from the company network and the local IP address I've set up for it on my "local network" from my modem/router. Are there some configuration changes I need to make in Xming to support its use through the VPN?
ANSWER:
Chances are it's either X authentication, the X server binding to an interface, or your DISPLAY variable. I don't use Xming myself but there are some general phenomenon to check for. One test you can do to manually verify the DISPLAY variable is correct is: Start your VPN. Run ipconfig to be sure you have the two IP addresses you mentioned (your local IP and your VPN IP). Start Xming. Run 'netstat -n' to see how it's binding to the interface. You should see something that either says localIP:6000 or VPNIP:6000. It may not be 6000 but chances are it will be something like that. If there's no VPNIP:6000 it may be binding only to your localIP or even 127.0.0.1. That will probably not work over the VPN. Check if there are some Xming settings to make it bind to other or all interfaces. If you see VPNIP:6000 or something similar, take note of what it says and remote shell into your UNIX host (hopefully something like ssh, if not whatever you have to get a text terminal). On the UNIX terminal type 'echo $DISPLAY'. If there is nothing displayed try 'export DISPLAY=VPNIP:0.0' where VPNIP is your VPN IP address and 0.0 is the port you saw in step 3 minus 6000 with.0 at the end (i.e. 6000 = 0.0, 6010 = 10.0). On the UNIX host run something like 'xclock' or 'xterm' to see if it runs. The error message should be informative. It will tell you that it either couldn't connect to the host (a connectivity problem) or authentication failed (you'll need to coordinate Xauth on your host and local machine or Xhosts on your local machine). Opening Xhosts (with + for all hosts or something similar) isn't too bad if you have a locally protected network and you're going over a VPN. Hopefully this will get you started tracking down the problem. Another option that is often useful as it works over a VPN or simple ssh connectivity is ssh tunneling or X11 forwarding over ssh. This simulates connectivity to the X server on your local box by redirecting a port on your UNIX host to the local port on your X server box. Your display will typically be something like localhost:10.0 for the local 6010 port. X can be ornery to set up but it usually works great once you get the hang of it. | [
"unix",
"vpn",
"xming"
] | 5 | 6 | 41,201 | 9 | 0 | 2008-08-07T08:20:47.100000 | 2008-08-07T13:11:03.067000 |
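The manual DISPLAY test and the ssh X11-forwarding alternative from the answer, condensed into a short shell sketch; the IP address and host names are placeholders:

# Manual check: point DISPLAY at the VPN-side address of the laptop running Xming.
export DISPLAY=10.8.0.5:0.0   # substitute your actual VPN IP
xclock                        # should appear on the Windows desktop if X traffic gets through

# Alternative: let ssh tunnel X11 instead of exporting DISPLAY yourself.
ssh -X devuser@unixhost       # use -Y for trusted forwarding if -X is too restrictive
echo $DISPLAY                 # typically something like localhost:10.0
xterm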
4,529 | 4,547 | SQL Server 2005 and 2008 on same developer machine? | Has anyone tried installing SQL Server 2008 Developer on a machine that already has 2005 Developer installed? I am unsure if I should do this, and I need to keep 2005 on this machine for the foreseeable future in order to test our application easily. Since I sometimes need to take backup files of databases and make available for other people in the company I cannot just replace 2005 with 2008 as I suspect (but do not know) that the databases aren't 100% backwards compatible. What kind of issues would arise? Do I need to install the new version with an instance name, will that work? Can I use a different port number to distinguish them? I found this entry on technet: http://forums.microsoft.com/TechNet/ShowPost.aspx?PostID=3496209&SiteID=17 It doesn't say more than just yes you can do this and I kinda suspected that this was doable anyway, but I need to know if there are anything I need to know before I start installing. Anyone? | Yes this is possible. You will have to create a named instance not used by another version of SQL Server as per the previous answer and version 3.5 of.Net installed. Works great!! Here the list of prerequisites:.NET Framework 3.5 SP1 Windows Installer 4.5 Windows PowerShell 1.0 | SQL Server 2005 and 2008 on same developer machine? Has anyone tried installing SQL Server 2008 Developer on a machine that already has 2005 Developer installed? I am unsure if I should do this, and I need to keep 2005 on this machine for the foreseeable future in order to test our application easily. Since I sometimes need to take backup files of databases and make available for other people in the company I cannot just replace 2005 with 2008 as I suspect (but do not know) that the databases aren't 100% backwards compatible. What kind of issues would arise? Do I need to install the new version with an instance name, will that work? Can I use a different port number to distinguish them? I found this entry on technet: http://forums.microsoft.com/TechNet/ShowPost.aspx?PostID=3496209&SiteID=17 It doesn't say more than just yes you can do this and I kinda suspected that this was doable anyway, but I need to know if there are anything I need to know before I start installing. Anyone? | TITLE:
SQL Server 2005 and 2008 on same developer machine?
QUESTION:
Has anyone tried installing SQL Server 2008 Developer on a machine that already has 2005 Developer installed? I am unsure if I should do this, and I need to keep 2005 on this machine for the foreseeable future in order to test our application easily. Since I sometimes need to take backup files of databases and make available for other people in the company I cannot just replace 2005 with 2008 as I suspect (but do not know) that the databases aren't 100% backwards compatible. What kind of issues would arise? Do I need to install the new version with an instance name, will that work? Can I use a different port number to distinguish them? I found this entry on technet: http://forums.microsoft.com/TechNet/ShowPost.aspx?PostID=3496209&SiteID=17 It doesn't say more than just yes you can do this and I kinda suspected that this was doable anyway, but I need to know if there are anything I need to know before I start installing. Anyone?
ANSWER:
Yes, this is possible. You will have to create a named instance not used by another version of SQL Server, as per the previous answer, and have version 3.5 of .NET installed. Works great!! Here's the list of prerequisites: .NET Framework 3.5 SP1, Windows Installer 4.5, Windows PowerShell 1.0.
"sql-server-2005",
"sql-server-2008",
"installation"
] | 21 | 23 | 25,930 | 7 | 0 | 2008-08-07T08:35:30.753000 | 2008-08-07T09:30:58.253000 |
4,533 | 4,540 | HTTP: Generating ETag Header | How do I generate an ETag HTTP header for a resource file? | An etag is an arbitrary string that the server sends to the client that the client will send back to the server the next time the file is requested. The etag should be computable on the server based on the file. Sort of like a checksum, but you might not want to checksum every file sending it out. server client
<------------- request file foo
file foo etag: "xyz" -------->
<------------- request file foo etag: "xyz" (what the server just sent)
(the etag is the same, so the server can send a 304) I built up a string in the format "datestamp-file size-file inode number". So, if a file is changed on the server after it has been served out to the client, the newly regenerated etag won't match if the client re-requests it. char *mketag(char *s, struct stat *sb) { sprintf(s, "%d-%d-%d", sb->st_mtime, sb->st_size, sb->st_ino); return s; } | HTTP: Generating ETag Header How do I generate an ETag HTTP header for a resource file? | TITLE:
HTTP: Generating ETag Header
QUESTION:
How do I generate an ETag HTTP header for a resource file?
ANSWER:
An etag is an arbitrary string that the server sends to the client that the client will send back to the server the next time the file is requested. The etag should be computable on the server based on the file. Sort of like a checksum, but you might not want to checksum every file sending it out. server client
<------------- request file foo
file foo etag: "xyz" -------->
<------------- request file foo etag: "xyz" (what the server just sent)
(the etag is the same, so the server can send a 304) I built up a string in the format "datestamp-file size-file inode number". So, if a file is changed on the server after it has been served out to the client, the newly regenerated etag won't match if the client re-requests it. char *mketag(char *s, struct stat *sb) { sprintf(s, "%d-%d-%d", sb->st_mtime, sb->st_size, sb->st_ino); return s; } | [
"language-agnostic",
"http",
"webserver",
"header",
"etag"
] | 32 | 17 | 29,169 | 7 | 0 | 2008-08-07T08:45:07.300000 | 2008-08-07T08:57:37.993000 |
4,541 | 106,093 | Simple MOLAP solution | To analyze lots of text logs I did some hackery that looks like this: Locally import logs into Access Reprocess Cube link to previous mdb in Analisis Service 2000 (yes it is 2k) Use Excel to visualize Cube (it is not big - up to milions raw entries) My hackery is a succes and more people are demanding an access to my Tool. As you see I see more automating and easier deployment. Do you now some tools/libraries that would give me the same but with easier deployment? Kind of embedded OLAP service? Edit: I heard of Mondrian but we don't do much with Java. Have you seen something similiar done for.Net/Win32? Comercial is also OK. | You could also try the other free open source OLAP server, PALO from Jedox (www.palo.net) | Simple MOLAP solution To analyze lots of text logs I did some hackery that looks like this: Locally import logs into Access Reprocess Cube link to previous mdb in Analisis Service 2000 (yes it is 2k) Use Excel to visualize Cube (it is not big - up to milions raw entries) My hackery is a succes and more people are demanding an access to my Tool. As you see I see more automating and easier deployment. Do you now some tools/libraries that would give me the same but with easier deployment? Kind of embedded OLAP service? Edit: I heard of Mondrian but we don't do much with Java. Have you seen something similiar done for.Net/Win32? Comercial is also OK. | TITLE:
Simple MOLAP solution
QUESTION:
To analyze lots of text logs I did some hackery that looks like this: Locally import logs into Access Reprocess Cube link to previous mdb in Analysis Services 2000 (yes it is 2k) Use Excel to visualize Cube (it is not big - up to millions of raw entries) My hackery is a success and more people are demanding access to my tool. As you can see, I want more automation and easier deployment. Do you know of some tools/libraries that would give me the same but with easier deployment? Kind of an embedded OLAP service? Edit: I heard of Mondrian but we don't do much with Java. Have you seen something similar done for .Net/Win32? Commercial is also OK.
ANSWER:
You could also try the other free open source OLAP server, PALO from Jedox (www.palo.net) | [
"database",
"logging",
"text-files",
"olap"
] | 5 | 3 | 1,565 | 4 | 0 | 2008-08-07T08:58:18.460000 | 2008-09-19T22:16:28.277000 |
4,544 | 6,996 | Http Auth in a Firefox 3 bookmarklet | I'm trying to create a bookmarklet for posting del.icio.us bookmarks to a separate account. I tested it from the command line like: wget -O - --no-check-certificate \ "https://seconduser:thepassword@api.del.icio.us/v1/posts/add?url=http://seet.dk&description=test" This works great. I then wanted to create a bookmarklet in my firefox. I googled and found bits and pieces and ended up with: javascript:void( open('https://seconduser:password@api.del.icio.us/v1/posts/add?url=' +encodeURIComponent(location.href) +'&description='+encodeURIComponent(document.title), 'delicious','toolbar=no,width=500,height=250' ) ); But all that happens is that I get this from del.icio.us: If I then go to the address bar and press enter, it changes to: Any ideas how to get it to work directly from the bookmarks? | Can you sniff the traffic to find what's actually being sent? Is it sending any auth data at all and it's incorrect or being presented in a form the server doesn't like, or is it never being sent by firefox at all? | Http Auth in a Firefox 3 bookmarklet I'm trying to create a bookmarklet for posting del.icio.us bookmarks to a separate account. I tested it from the command line like: wget -O - --no-check-certificate \ "https://seconduser:thepassword@api.del.icio.us/v1/posts/add?url=http://seet.dk&description=test" This works great. I then wanted to create a bookmarklet in my firefox. I googled and found bits and pieces and ended up with: javascript:void( open('https://seconduser:password@api.del.icio.us/v1/posts/add?url=' +encodeURIComponent(location.href) +'&description='+encodeURIComponent(document.title), 'delicious','toolbar=no,width=500,height=250' ) ); But all that happens is that I get this from del.icio.us: If I then go to the address bar and press enter, it changes to: Any ideas how to get it to work directly from the bookmarks? | TITLE:
Http Auth in a Firefox 3 bookmarklet
QUESTION:
I'm trying to create a bookmarklet for posting del.icio.us bookmarks to a separate account. I tested it from the command line like: wget -O - --no-check-certificate \ "https://seconduser:thepassword@api.del.icio.us/v1/posts/add?url=http://seet.dk&description=test" This works great. I then wanted to create a bookmarklet in my firefox. I googled and found bits and pieces and ended up with: javascript:void( open('https://seconduser:password@api.del.icio.us/v1/posts/add?url=' +encodeURIComponent(location.href) +'&description='+encodeURIComponent(document.title), 'delicious','toolbar=no,width=500,height=250' ) ); But all that happens is that I get this from del.icio.us: If I then go to the address bar and press enter, it changes to: Any ideas how to get it to work directly from the bookmarks?
ANSWER:
Can you sniff the traffic to find what's actually being sent? Is it sending any auth data at all and it's incorrect or being presented in a form the server doesn't like, or is it never being sent by firefox at all? | [
"javascript",
"firefox",
"delicious-api"
] | 18 | 4 | 1,480 | 4 | 0 | 2008-08-07T09:08:52.260000 | 2008-08-10T02:04:32.530000 |
4,545 | 4,549 | What was the <XMP> tag used for? | Does anyone remember the XMP tag? What was it used for and why was it deprecated? | A quick Google search on W3C reveals that XMP was introduced for displaying preformatted text in HTML 3.2 and earlier. When W3C deprecated the XMP tag, it suggested using the PRE tag as a preferred alternative. Update: http://www.w3.org/TR/REC-html32#xmp, http://www.w3.org/MarkUp/html-spec/html-spec_5.html#SEC5.5.2.1 | What was the <XMP> tag used for? Does anyone remember the XMP tag? What was it used for and why was it deprecated? | TITLE:
What was the <XMP> tag used for?
QUESTION:
Does anyone remember the XMP tag? What was it used for and why was it deprecated?
ANSWER:
A quick Google search on W3C reveals that XMP was introduced for displaying preformatted text in HTML 3.2 and earlier. When W3C deprecated the XMP tag, it suggested using the PRE tag as a preferred alternative. Update: http://www.w3.org/TR/REC-html32#xmp, http://www.w3.org/MarkUp/html-spec/html-spec_5.html#SEC5.5.2.1 | [
"html",
"tags"
] | 81 | 37 | 54,165 | 8 | 0 | 2008-08-07T09:21:41.563000 | 2008-08-07T09:43:28.353000 |
4,556 | 4,562 | DataTable Loop Performance Comparison | Which of the following has the best performance? I have seen method two implemented in JavaScript with huge performance gains, however, I was unable to measure any gain in C# and was wondering if the compiler already does method 2 even when written like method 1. The theory behind method 2 is that the code doesn't have to access DataTable.Rows.Count on every iteration, it can simple access the int c. Method 1 for (int i = 0; i < DataTable.Rows.Count; i++) { // Do Something } Method 2 for (int i = 0, c = DataTable.Rows.Count; i < c; i++) { // Do Something } | No, it can't do that since there is no way to express constant over time for a value. If the compiler should be able to do that, there would have to be a guarantee from the code returning the value that the value is constant, and for the duration of the loop won't change. But, in this case, you're free to add new rows to the data table as part of your loop, and thus it's up to you to make that guarantee, in the way you have done it. So in short, the compiler will not do that optimization if the end-index is anything other than a variable. In the case of a variable, where the compiler can just look at the loop-code and see that this particular variable is not changed, it might do that and load the value into a register before starting the loop, but any performance gain from this would most likely be negligible, unless your loop body is empty. Conclusion: If you know, or is willing to accept, that the end loop index is constant for the duration of the loop, place it into a variable. Edit: Re-read your post, and yes, you might see negligible performance gains for your two cases as well, because the JITter optimizes the code. The JITter might optimize your end-index read into a direct access to the variable inside the data table that contains the row count, and a memory read isn't all that expensive anyway. If, on the other hand, reading that property was a very expensive operation, you'd see a more noticable difference. | DataTable Loop Performance Comparison Which of the following has the best performance? I have seen method two implemented in JavaScript with huge performance gains, however, I was unable to measure any gain in C# and was wondering if the compiler already does method 2 even when written like method 1. The theory behind method 2 is that the code doesn't have to access DataTable.Rows.Count on every iteration, it can simple access the int c. Method 1 for (int i = 0; i < DataTable.Rows.Count; i++) { // Do Something } Method 2 for (int i = 0, c = DataTable.Rows.Count; i < c; i++) { // Do Something } | TITLE:
DataTable Loop Performance Comparison
QUESTION:
Which of the following has the best performance? I have seen method two implemented in JavaScript with huge performance gains, however, I was unable to measure any gain in C# and was wondering if the compiler already does method 2 even when written like method 1. The theory behind method 2 is that the code doesn't have to access DataTable.Rows.Count on every iteration, it can simply access the int c. Method 1 for (int i = 0; i < DataTable.Rows.Count; i++) { // Do Something } Method 2 for (int i = 0, c = DataTable.Rows.Count; i < c; i++) { // Do Something }
ANSWER:
No, it can't do that since there is no way to express constant over time for a value. If the compiler should be able to do that, there would have to be a guarantee from the code returning the value that the value is constant, and for the duration of the loop won't change. But, in this case, you're free to add new rows to the data table as part of your loop, and thus it's up to you to make that guarantee, in the way you have done it. So in short, the compiler will not do that optimization if the end-index is anything other than a variable. In the case of a variable, where the compiler can just look at the loop-code and see that this particular variable is not changed, it might do that and load the value into a register before starting the loop, but any performance gain from this would most likely be negligible, unless your loop body is empty. Conclusion: If you know, or is willing to accept, that the end loop index is constant for the duration of the loop, place it into a variable. Edit: Re-read your post, and yes, you might see negligible performance gains for your two cases as well, because the JITter optimizes the code. The JITter might optimize your end-index read into a direct access to the variable inside the data table that contains the row count, and a memory read isn't all that expensive anyway. If, on the other hand, reading that property was a very expensive operation, you'd see a more noticable difference. | [
"c#",
"performance",
"loops"
] | 13 | 6 | 8,063 | 1 | 0 | 2008-08-07T10:01:04.823000 | 2008-08-07T10:09:58.703000 |
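One way to check the "negligible difference" claim on your own hardware is to time both loop shapes; the sketch below builds a throwaway DataTable purely so the sample is self-contained, and the row count and tick output are illustrative rather than meaningful benchmarks.

using System;
using System.Data;
using System.Diagnostics;

class LoopTiming
{
    static void Main()
    {
        DataTable table = new DataTable();
        table.Columns.Add("x", typeof(int));
        for (int i = 0; i < 200000; i++) table.Rows.Add(i);

        // Method 1: read Rows.Count on every iteration.
        Stopwatch sw = Stopwatch.StartNew();
        int sum = 0;
        for (int i = 0; i < table.Rows.Count; i++) sum += i;
        sw.Stop();
        Console.WriteLine("Rows.Count read each pass:  {0} ticks", sw.ElapsedTicks);

        // Method 2: hoist the count into a local first.
        sw = Stopwatch.StartNew();
        sum = 0;
        for (int i = 0, c = table.Rows.Count; i < c; i++) sum += i;
        sw.Stop();
        Console.WriteLine("Count hoisted into a local: {0} ticks", sw.ElapsedTicks);
    }
}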
4,610 | 4,615 | How can I create Prototype Methods (like JavaScript) in C#.Net? | How is it possible to make prototype methods in C#.Net? In JavaScript, I can do the following to create a trim method for the string object: String.prototype.trim = function() { return this.replace(/^\s+|\s+$/g,""); } How can I go about doing this in C#.Net? | You can't dynamically add methods to existing objects or classes in.NET, except by changing the source for that class. You can, however, in C# 3.0, use extension methods, which look like new methods, but are compile-time magic. To do this for your code: public static class StringExtensions { public static String trim(this String s) { return s.Trim(); } } To use it: String s = " Test "; s = s.trim(); This looks like a new method, but will compile the exact same way as this code: String s = " Test "; s = StringExtensions.trim(s); What exactly are you trying to accomplish? Perhaps there are better ways of doing what you want? | How can I create Prototype Methods (like JavaScript) in C#.Net? How is it possible to make prototype methods in C#.Net? In JavaScript, I can do the following to create a trim method for the string object: String.prototype.trim = function() { return this.replace(/^\s+|\s+$/g,""); } How can I go about doing this in C#.Net? | TITLE:
How can I create Prototype Methods (like JavaScript) in C#.Net?
QUESTION:
How is it possible to make prototype methods in C#.Net? In JavaScript, I can do the following to create a trim method for the string object: String.prototype.trim = function() { return this.replace(/^\s+|\s+$/g,""); } How can I go about doing this in C#.Net?
ANSWER:
You can't dynamically add methods to existing objects or classes in.NET, except by changing the source for that class. You can, however, in C# 3.0, use extension methods, which look like new methods, but are compile-time magic. To do this for your code: public static class StringExtensions { public static String trim(this String s) { return s.Trim(); } } To use it: String s = " Test "; s = s.trim(); This looks like a new method, but will compile the exact same way as this code: String s = " Test "; s = StringExtensions.trim(s); What exactly are you trying to accomplish? Perhaps there are better ways of doing what you want? | [
"c#",
".net"
] | 20 | 22 | 11,030 | 4 | 0 | 2008-08-07T12:00:50.540000 | 2008-08-07T12:04:11.907000 |
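If the goal is to mirror the JavaScript regex literally rather than call the built-in String.Trim, the same extension-method mechanism shown in the answer works; the class and method names below are placeholders, and Trim remains the idiomatic choice.

using System.Text.RegularExpressions;

public static class JsStyleStringExtensions
{
    // Port of the question's /^\s+|\s+$/g replacement as a C# 3.0 extension method.
    public static string TrimLikeJs(this string s)
    {
        return Regex.Replace(s, @"^\s+|\s+$", "");
    }
}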
4,612 | 29,283 | CSharpCodeProvider Compilation Performance | Is CompileAssemblyFromDom faster than CompileAssemblyFromSource? It should be as it presumably bypasses the compiler front-end. | CompileAssemblyFromDom compiles to a.cs file which is then run through the normal C# compiler. Example: using System; using System.Collections.Generic; using System.Linq; using System.Text; using Microsoft.CSharp; using System.CodeDom; using System.IO; using System.CodeDom.Compiler; using System.Reflection;
namespace CodeDomQuestion { class Program {
private static void Main(string[] args) { Program p = new Program(); p.dotest("C:\\fs.exe"); }
public void dotest(string outputname) { CSharpCodeProvider cscProvider = new CSharpCodeProvider(); CompilerParameters cp = new CompilerParameters(); cp.MainClass = null; cp.GenerateExecutable = true; cp.OutputAssembly = outputname;
CodeNamespace ns = new CodeNamespace("StackOverflowd");
CodeTypeDeclaration type = new CodeTypeDeclaration(); type.IsClass = true; type.Name = "MainClass"; type.TypeAttributes = TypeAttributes.Public;
ns.Types.Add(type);
CodeMemberMethod cmm = new CodeMemberMethod(); cmm.Attributes = MemberAttributes.Static; cmm.Name = "Main"; cmm.Statements.Add(new CodeSnippetExpression("System.Console.WriteLine('f'zxcvv)")); type.Members.Add(cmm);
CodeCompileUnit ccu = new CodeCompileUnit(); ccu.Namespaces.Add(ns);
CompilerResults results = cscProvider.CompileAssemblyFromDom(cp, ccu);
foreach (CompilerError err in results.Errors) Console.WriteLine(err.ErrorText + " - " + err.FileName + ":" + err.Line);
Console.WriteLine(); } } } which shows errors in a (now nonexistent) temp file: ) expected - c:\Documents and Settings\jacob\Local Settings\Temp\x59n9yb-.0.cs:17; expected - c:\Documents and Settings\jacob\Local Settings\Temp\x59n9yb-.0.cs:17 Invalid expression term ')' - c:\Documents and Settings\jacob\Local Settings\Tem p\x59n9yb-.0.cs:17 So I guess the answer is "no" | CSharpCodeProvider Compilation Performance Is CompileAssemblyFromDom faster than CompileAssemblyFromSource? It should be as it presumably bypasses the compiler front-end. | TITLE:
CSharpCodeProvider Compilation Performance
QUESTION:
Is CompileAssemblyFromDom faster than CompileAssemblyFromSource? It should be as it presumably bypasses the compiler front-end.
ANSWER:
CompileAssemblyFromDom compiles to a.cs file which is then run through the normal C# compiler. Example: using System; using System.Collections.Generic; using System.Linq; using System.Text; using Microsoft.CSharp; using System.CodeDom; using System.IO; using System.CodeDom.Compiler; using System.Reflection;
namespace CodeDomQuestion { class Program {
private static void Main(string[] args) { Program p = new Program(); p.dotest("C:\\fs.exe"); }
public void dotest(string outputname) { CSharpCodeProvider cscProvider = new CSharpCodeProvider(); CompilerParameters cp = new CompilerParameters(); cp.MainClass = null; cp.GenerateExecutable = true; cp.OutputAssembly = outputname;
CodeNamespace ns = new CodeNamespace("StackOverflowd");
CodeTypeDeclaration type = new CodeTypeDeclaration(); type.IsClass = true; type.Name = "MainClass"; type.TypeAttributes = TypeAttributes.Public;
ns.Types.Add(type);
CodeMemberMethod cmm = new CodeMemberMethod(); cmm.Attributes = MemberAttributes.Static; cmm.Name = "Main"; cmm.Statements.Add(new CodeSnippetExpression("System.Console.WriteLine('f'zxcvv)")); type.Members.Add(cmm);
CodeCompileUnit ccu = new CodeCompileUnit(); ccu.Namespaces.Add(ns);
CompilerResults results = cscProvider.CompileAssemblyFromDom(cp, ccu);
foreach (CompilerError err in results.Errors) Console.WriteLine(err.ErrorText + " - " + err.FileName + ":" + err.Line);
Console.WriteLine(); } } } which shows errors in a (now nonexistent) temp file: ) expected - c:\Documents and Settings\jacob\Local Settings\Temp\x59n9yb-.0.cs:17; expected - c:\Documents and Settings\jacob\Local Settings\Temp\x59n9yb-.0.cs:17 Invalid expression term ')' - c:\Documents and Settings\jacob\Local Settings\Tem p\x59n9yb-.0.cs:17 So I guess the answer is "no" | [
"c#",
"performance",
"compiler-construction"
] | 20 | 9 | 3,326 | 2 | 0 | 2008-08-07T12:01:14.503000 | 2008-08-27T01:03:17.197000 |
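For a side-by-side feel of the two entry points being compared in the question, this is roughly what the CompileAssemblyFromSource path looks like; the source string and output path are invented for the example and, as the answer concludes, both paths end up in the same C# compiler.

using System;
using Microsoft.CSharp;
using System.CodeDom.Compiler;

class FromSourceExample
{
    static void Main()
    {
        CSharpCodeProvider provider = new CSharpCodeProvider();
        CompilerParameters cp = new CompilerParameters();
        cp.GenerateExecutable = true;
        cp.OutputAssembly = "C:\\fs2.exe";

        // Hand the compiler a plain source string instead of a CodeDom graph.
        string source =
            "class MainClass { public static void Main() { System.Console.WriteLine(\"f\"); } }";

        CompilerResults results = provider.CompileAssemblyFromSource(cp, source);
        foreach (CompilerError err in results.Errors)
            Console.WriteLine(err.ErrorText + " - line " + err.Line);
    }
}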
4,617 | 4,712 | What is a good Mercurial usage pattern for this setup? | We've got two developers on the same closed (ugh, stupid gov) network, Another developer a couple minutes drive down the road, and a fourth developer half-way across the country. E-Mail, ftp, and removal media are all possible methods of transfer for the people not on the same network. I am one of the two closed network developers, consider us the "master" location. What is the best Mercurial setup/pattern for group? What is the best way to trasmit changes to/from the remote developers? As I am in charge, I figured that I would have to keep at least one master repo with another local repo in which I can develop. Each other person should just need a clone of the master. Is this right? I guess this also makes me responsible for the merging? As you can see, I'm still trying to wrap my head around distributed version control. I don't think there is any other way to do this with the connectivity situation. | The users outside the network can make patches, and/or use email to send the updates to the main repo or someone, like yourself to merge them. The other internal people can have local copies, like yourself and do merges --but if you are having these out of network patches, it might be better that one person deal with them so nobody gets confused, but that's something you'd have to consider yourself. Syncing the other way, you'd create a patch, and them email or get a flash drive to the remote developers to patch their system. You're going to need some good communication in the team man, I am thankful I'm not in your shoes. Those are my only suggestions --well, the obvious, get them a VPN connection! I'd love to hear how it goes, what plans stabilize into a weekly groove, et cetera. | What is a good Mercurial usage pattern for this setup? We've got two developers on the same closed (ugh, stupid gov) network, Another developer a couple minutes drive down the road, and a fourth developer half-way across the country. E-Mail, ftp, and removal media are all possible methods of transfer for the people not on the same network. I am one of the two closed network developers, consider us the "master" location. What is the best Mercurial setup/pattern for group? What is the best way to trasmit changes to/from the remote developers? As I am in charge, I figured that I would have to keep at least one master repo with another local repo in which I can develop. Each other person should just need a clone of the master. Is this right? I guess this also makes me responsible for the merging? As you can see, I'm still trying to wrap my head around distributed version control. I don't think there is any other way to do this with the connectivity situation. | TITLE:
What is a good Mercurial usage pattern for this setup?
QUESTION:
We've got two developers on the same closed (ugh, stupid gov) network, another developer a couple minutes' drive down the road, and a fourth developer half-way across the country. E-Mail, ftp, and removable media are all possible methods of transfer for the people not on the same network. I am one of the two closed network developers, consider us the "master" location. What is the best Mercurial setup/pattern for the group? What is the best way to transmit changes to/from the remote developers? As I am in charge, I figured that I would have to keep at least one master repo with another local repo in which I can develop. Each other person should just need a clone of the master. Is this right? I guess this also makes me responsible for the merging? As you can see, I'm still trying to wrap my head around distributed version control. I don't think there is any other way to do this with the connectivity situation.
ANSWER:
The users outside the network can make patches, and/or use email to send the updates to the main repo or someone, like yourself, to merge them. The other internal people can have local copies, like yourself, and do merges --but if you are handling these out-of-network patches, it might be better that one person deal with them so nobody gets confused, but that's something you'd have to consider yourself. Syncing the other way, you'd create a patch, and then email it or get a flash drive to the remote developers to patch their system. You're going to need some good communication in the team, man; I am thankful I'm not in your shoes. Those are my only suggestions --well, the obvious, get them a VPN connection! I'd love to hear how it goes, what plans stabilize into a weekly groove, et cetera.
"version-control",
"mercurial",
"dvcs"
] | 14 | 1 | 1,180 | 3 | 0 | 2008-08-07T12:05:48.410000 | 2008-08-07T13:37:30.300000 |
4,622 | 4,626 | SQL Case Expression Syntax? | What is the complete and correct syntax for the SQL Case expression? | The complete syntax depends on the database engine you're working with: For SQL Server: CASE case-expression WHEN when-expression-1 THEN value-1 [ WHEN when-expression-n THEN value-n... ] [ ELSE else-value ] END or: CASE WHEN boolean-when-expression-1 THEN value-1 [ WHEN boolean-when-expression-n THEN value-n... ] [ ELSE else-value ] END expressions, etc: case-expression - something that produces a value when-expression-x - something that is compared against the case-expression value-1 - the result of the CASE statement if: the when-expression == case-expression OR the boolean-when-expression == TRUE boolean-when-exp.. - something that produces a TRUE/FALSE answer Link: CASE (Transact-SQL) Also note that the ordering of the WHEN statements is important. You can easily write multiple WHEN clauses that overlap, and the first one that matches is used. Note: If no ELSE clause is specified, and no matching WHEN-condition is found, the value of the CASE expression will be NULL. | SQL Case Expression Syntax? What is the complete and correct syntax for the SQL Case expression? | TITLE:
SQL Case Expression Syntax?
QUESTION:
What is the complete and correct syntax for the SQL Case expression?
ANSWER:
The complete syntax depends on the database engine you're working with: For SQL Server: CASE case-expression WHEN when-expression-1 THEN value-1 [ WHEN when-expression-n THEN value-n... ] [ ELSE else-value ] END or: CASE WHEN boolean-when-expression-1 THEN value-1 [ WHEN boolean-when-expression-n THEN value-n... ] [ ELSE else-value ] END expressions, etc: case-expression - something that produces a value when-expression-x - something that is compared against the case-expression value-1 - the result of the CASE statement if: the when-expression == case-expression OR the boolean-when-expression == TRUE boolean-when-exp.. - something that produces a TRUE/FALSE answer Link: CASE (Transact-SQL) Also note that the ordering of the WHEN statements is important. You can easily write multiple WHEN clauses that overlap, and the first one that matches is used. Note: If no ELSE clause is specified, and no matching WHEN-condition is found, the value of the CASE expression will be NULL. | [
"sql"
] | 63 | 82 | 174,529 | 7 | 0 | 2008-08-07T12:13:01.390000 | 2008-08-07T12:20:22.827000 |
4,627 | 4,649 | Upgrade to ASP.NET 3.x | I am currently aware that ASP.NET 2.0 is out and about and that there are 3.x versions of the.Net Framework. Is it possible to upgrade my ASP.NET web server to version 3.x of the.Net Framework? I have tried this, however, when selecting which version of the.Net framwork to use in IIS (the ASP.NET Tab), only version 1.1 and 2.0 show. Is there a work around? | if I install 3.5 and have IIS setup to use 2.0. I will be able to use 3.5 features? Yes, that is correct. You have IIS set to 2.0 for both 2.0 and 3.5 sites, as they both run on the same CLR. 3.5 uses a different compile method than 2.0. This is declared in the web.config for the site. See this post for more details on this. But the setup in IIS for both 3.5 and 2.0 ASP.net sites is identical. | Upgrade to ASP.NET 3.x I am currently aware that ASP.NET 2.0 is out and about and that there are 3.x versions of the.Net Framework. Is it possible to upgrade my ASP.NET web server to version 3.x of the.Net Framework? I have tried this, however, when selecting which version of the.Net framwork to use in IIS (the ASP.NET Tab), only version 1.1 and 2.0 show. Is there a work around? | TITLE:
Upgrade to ASP.NET 3.x
QUESTION:
I am currently aware that ASP.NET 2.0 is out and about and that there are 3.x versions of the .Net Framework. Is it possible to upgrade my ASP.NET web server to version 3.x of the .Net Framework? I have tried this, however, when selecting which version of the .Net Framework to use in IIS (the ASP.NET tab), only versions 1.1 and 2.0 show. Is there a workaround?
ANSWER:
if I install 3.5 and have IIS setup to use 2.0. I will be able to use 3.5 features? Yes, that is correct. You have IIS set to 2.0 for both 2.0 and 3.5 sites, as they both run on the same CLR. 3.5 uses a different compile method than 2.0. This is declared in the web.config for the site. See this post for more details on this. But the setup in IIS for both 3.5 and 2.0 ASP.net sites is identical. | [
"asp.net",
".net-3.5"
] | 13 | 5 | 1,424 | 6 | 0 | 2008-08-07T12:21:25.070000 | 2008-08-07T12:41:49.897000 |
4,629 | 4,637 | How can I evaluate C# code dynamically? | I can do an eval("something()"); to execute the code dynamically in JavaScript. Is there a way for me to do the same thing in C#? An example of what I am trying to do is: I have an integer variable (say i ) and I have multiple properties by the names: "Property1", "Property2", "Property3", etc. Now, I want to perform some operations on the " Property i " property depending on the value of i. This is really simple with Javascript. Is there any way to do this with C#? | DISCLAIMER: This answer was written back in 2008. The landscape has changed drastically since then. Look at the other answers on this page, especially the one detailing Microsoft.CodeAnalysis.CSharp.Scripting. Rest of answer will be left as it was originally posted but is no longer accurate. Unfortunately, C# isn't a dynamic language like that. What you can do, however, is to create a C# source code file, full with class and everything, and run it through the CodeDom provider for C# and compile it into an assembly, and then execute it. This forum post on MSDN contains an answer with some example code down the page somewhat: create a anonymous method from a string? I would hardly say this is a very good solution, but it is possible anyway. What kind of code are you going to expect in that string? If it is a minor subset of valid code, for instance just math expressions, it might be that other alternatives exists. Edit: Well, that teaches me to read the questions thoroughly first. Yes, reflection would be able to give you some help here. If you split the string by the; first, to get individual properties, you can use the following code to get a PropertyInfo object for a particular property for a class, and then use that object to manipulate a particular object. String propName = "Text"; PropertyInfo pi = someObject.GetType().GetProperty(propName); pi.SetValue(someObject, "New Value", new Object[0]); Link: PropertyInfo.SetValue Method | How can I evaluate C# code dynamically? I can do an eval("something()"); to execute the code dynamically in JavaScript. Is there a way for me to do the same thing in C#? An example of what I am trying to do is: I have an integer variable (say i ) and I have multiple properties by the names: "Property1", "Property2", "Property3", etc. Now, I want to perform some operations on the " Property i " property depending on the value of i. This is really simple with Javascript. Is there any way to do this with C#? | TITLE:
How can I evaluate C# code dynamically?
QUESTION:
I can do an eval("something()"); to execute the code dynamically in JavaScript. Is there a way for me to do the same thing in C#? An example of what I am trying to do is: I have an integer variable (say i ) and I have multiple properties by the names: "Property1", "Property2", "Property3", etc. Now, I want to perform some operations on the " Property i " property depending on the value of i. This is really simple with Javascript. Is there any way to do this with C#?
ANSWER:
DISCLAIMER: This answer was written back in 2008. The landscape has changed drastically since then. Look at the other answers on this page, especially the one detailing Microsoft.CodeAnalysis.CSharp.Scripting. Rest of answer will be left as it was originally posted but is no longer accurate. Unfortunately, C# isn't a dynamic language like that. What you can do, however, is to create a C# source code file, full with class and everything, and run it through the CodeDom provider for C# and compile it into an assembly, and then execute it. This forum post on MSDN contains an answer with some example code down the page somewhat: create a anonymous method from a string? I would hardly say this is a very good solution, but it is possible anyway. What kind of code are you going to expect in that string? If it is a minor subset of valid code, for instance just math expressions, it might be that other alternatives exists. Edit: Well, that teaches me to read the questions thoroughly first. Yes, reflection would be able to give you some help here. If you split the string by the; first, to get individual properties, you can use the following code to get a PropertyInfo object for a particular property for a class, and then use that object to manipulate a particular object. String propName = "Text"; PropertyInfo pi = someObject.GetType().GetProperty(propName); pi.SetValue(someObject, "New Value", new Object[0]); Link: PropertyInfo.SetValue Method | [
"c#",
"reflection",
"properties",
"c#-2.0"
] | 114 | 51 | 83,535 | 16 | 0 | 2008-08-07T12:26:46.917000 | 2008-08-07T12:31:18.530000 |
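Applied to the questioner's scenario, the PropertyInfo approach from the answer might look like the sketch below; the Scores class, its three properties, and the sample values are stand-ins for the real ten-property type, and C# 3.0 auto-properties are used only for brevity.

using System;
using System.Reflection;

class Scores
{
    public int Property1 { get; set; }
    public int Property2 { get; set; }
    public int Property3 { get; set; }
}

class Program
{
    static void Main()
    {
        Scores s = new Scores { Property1 = 4, Property2 = 1, Property3 = 5 };

        // Count how many PropertyN values are 3 or more, picking each property by name.
        int count = 0;
        for (int i = 1; i <= 3; i++)
        {
            PropertyInfo pi = typeof(Scores).GetProperty("Property" + i);
            int value = (int)pi.GetValue(s, null);
            if (value >= 3) count++;
        }
        Console.WriteLine(count); // prints 2
    }
}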
4,630 | 478,658 | How can I Java webstart multiple, dependent, native libraries? | Example: I have two shared objects (same should apply to.dlls). The first shared object is from a third-party library, we'll call it libA.so. I have wrapped some of this with JNI and created my own library, libB.so. Now libB depends on libA. When webstarting, both libraries are places in some webstart working area. My java code attempts to load libB. At this point the system loader will attempt to load libA which is not in the system library path (java.library.path won't help this). The end result is that libB has an unsatisfied link and cannot be used. I have tried loading libA before libB, but that still does not work. Seems the OS wants to do that loading for me. Is there any way I can make this work other than statically compiling? | Static compilation proved to be the only way to webstart multiple dependent native libraries. | How can I Java webstart multiple, dependent, native libraries? Example: I have two shared objects (same should apply to.dlls). The first shared object is from a third-party library, we'll call it libA.so. I have wrapped some of this with JNI and created my own library, libB.so. Now libB depends on libA. When webstarting, both libraries are places in some webstart working area. My java code attempts to load libB. At this point the system loader will attempt to load libA which is not in the system library path (java.library.path won't help this). The end result is that libB has an unsatisfied link and cannot be used. I have tried loading libA before libB, but that still does not work. Seems the OS wants to do that loading for me. Is there any way I can make this work other than statically compiling? | TITLE:
How can I Java webstart multiple, dependent, native libraries?
QUESTION:
Example: I have two shared objects (the same should apply to .dlls). The first shared object is from a third-party library, we'll call it libA.so. I have wrapped some of this with JNI and created my own library, libB.so. Now libB depends on libA. When webstarting, both libraries are placed in some webstart working area. My Java code attempts to load libB. At this point the system loader will attempt to load libA which is not in the system library path (java.library.path won't help this). The end result is that libB has an unsatisfied link and cannot be used. I have tried loading libA before libB, but that still does not work. Seems the OS wants to do that loading for me. Is there any way I can make this work other than statically compiling?
ANSWER:
Static compilation proved to be the only way to webstart multiple dependent native libraries. | [
"java",
"java-native-interface",
"java-web-start"
] | 17 | 5 | 2,743 | 3 | 0 | 2008-08-07T12:26:50.707000 | 2009-01-26T01:47:13.227000 |
4,638 | 4,650 | How do you create your own moniker (URL Protocol) on Windows systems? | How do you create your own custom moniker (or URL Protocol) on Windows systems? Examples: http: mailto: service: | Take a look at Creating and Using URL Monikers, About Asynchronous Pluggable Protocols and Registering an Application to a URL Protocol from MSDN | How do you create your own moniker (URL Protocol) on Windows systems? How do you create your own custom moniker (or URL Protocol) on Windows systems? Examples: http: mailto: service: | TITLE:
How do you create your own moniker (URL Protocol) on Windows systems?
QUESTION:
How do you create your own custom moniker (or URL Protocol) on Windows systems? Examples: http: mailto: service:
ANSWER:
Take a look at Creating and Using URL Monikers, About Asynchronous Pluggable Protocols and Registering an Application to a URL Protocol from MSDN | [
"windows",
"winapi",
"moniker"
] | 13 | 4 | 4,616 | 3 | 0 | 2008-08-07T12:31:42.413000 | 2008-08-07T12:42:06.683000 |
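The "Registering an Application to a URL Protocol" article linked above boils down to a few registry keys under HKEY_CLASSES_ROOT; a hedged C# sketch of that recipe follows. The protocol name "myapp" and the executable path are placeholders, and writing to HKCR normally requires administrative rights.

using Microsoft.Win32;

class RegisterMyProtocol
{
    static void Main()
    {
        // Creates HKEY_CLASSES_ROOT\myapp so that myapp: URLs launch the handler below.
        using (RegistryKey key = Registry.ClassesRoot.CreateSubKey("myapp"))
        {
            key.SetValue("", "URL:MyApp Protocol");
            key.SetValue("URL Protocol", "");
            using (RegistryKey cmd = key.CreateSubKey(@"shell\open\command"))
            {
                cmd.SetValue("", "\"C:\\Program Files\\MyApp\\MyApp.exe\" \"%1\"");
            }
        }
    }
}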
4,661 | 4,777 | How do I use more than one OpenID? | I have more than one OpenID as I have tried out numerous. As people take up OpenID different suppliers are going to emerge I may want to switch provinders. As all IDs are me, and all are authenticated against the same email address, shouldn't I be able to log into stack overflow with any of them and be able to hit the same account? | I think each site that implements OpenID would have to build their software to allow multiple entries for your OpenID credentials. However, just because a site doesn't allow you to create multiple entries doesn't mean you can't swap out OpenID suppliers. How to turn your blog into an OpenID STEP 1: Get an OpenID. There a lots of servers and services out there you can use. I use http://www.myopenid.com STEP 2: Add these two lines to your blog's main template in-between the tags at the top of your template. Most all blog engines support editing your template so this should be an easy and very possible thing to do. Example: This will let you use your domain/blog as your OpenID. Credits to Scott Hanselman and Simon Willison for these simple instructions. Switch Your Supplier Now that your OpenID points to your blog, you can update your link rel href's to point to a new supplier and all the places that you've tied your blog's OpenID will use the new supplier. | How do I use more than one OpenID? I have more than one OpenID as I have tried out numerous. As people take up OpenID different suppliers are going to emerge I may want to switch provinders. As all IDs are me, and all are authenticated against the same email address, shouldn't I be able to log into stack overflow with any of them and be able to hit the same account? | TITLE:
How do I use more than one OpenID?
QUESTION:
I have more than one OpenID as I have tried out numerous providers. As people take up OpenID, different suppliers are going to emerge and I may want to switch providers. As all IDs are me, and all are authenticated against the same email address, shouldn't I be able to log into Stack Overflow with any of them and be able to hit the same account?
ANSWER:
I think each site that implements OpenID would have to build their software to allow multiple entries for your OpenID credentials. However, just because a site doesn't allow you to create multiple entries doesn't mean you can't swap out OpenID suppliers. How to turn your blog into an OpenID STEP 1: Get an OpenID. There a lots of servers and services out there you can use. I use http://www.myopenid.com STEP 2: Add these two lines to your blog's main template in-between the tags at the top of your template. Most all blog engines support editing your template so this should be an easy and very possible thing to do. Example: This will let you use your domain/blog as your OpenID. Credits to Scott Hanselman and Simon Willison for these simple instructions. Switch Your Supplier Now that your OpenID points to your blog, you can update your link rel href's to point to a new supplier and all the places that you've tied your blog's OpenID will use the new supplier. | [
"openid"
] | 16 | 23 | 1,935 | 6 | 0 | 2008-08-07T12:51:38.910000 | 2008-08-07T14:36:06.890000 |
4,664 | 4,672 | Should the folders in a solution match the namespace? | Should the folders in a solution match the namespace? In one of my teams projects, we have a class library that has many sub-folders in the project. Project Name and Namespace: MyCompany.Project.Section. Within this project, there are several folders that match the namespace section: Folder Vehicles has classes in the MyCompany.Project.Section.Vehicles namespace Folder Clothing has classes in the MyCompany.Project.Section.Clothing namespace etc. Inside this same project, is another rogue folder Folder BusinessObjects has classes in the MyCompany.Project.Section namespace There are a few cases like this where folders are made for "organizational convenience". My question is: What's the standard? In class libraries do the folders usually match the namespace structure or is it a mixed bag? | Also, note that if you use the built-in templates to add classes to a folder, it will by default be put in a namespace that reflects the folder hierarchy. The classes will be easier to find and that alone should be reasons good enough. The rules we follow are: Project/assembly name is the same as the root namespace, except for the.dll ending Only exception to the above rule is a project with a.Core ending, the.Core is stripped off Folders equals namespaces One type per file (class, struct, enum, delegate, etc.) makes it easy to find the right file | Should the folders in a solution match the namespace? Should the folders in a solution match the namespace? In one of my teams projects, we have a class library that has many sub-folders in the project. Project Name and Namespace: MyCompany.Project.Section. Within this project, there are several folders that match the namespace section: Folder Vehicles has classes in the MyCompany.Project.Section.Vehicles namespace Folder Clothing has classes in the MyCompany.Project.Section.Clothing namespace etc. Inside this same project, is another rogue folder Folder BusinessObjects has classes in the MyCompany.Project.Section namespace There are a few cases like this where folders are made for "organizational convenience". My question is: What's the standard? In class libraries do the folders usually match the namespace structure or is it a mixed bag? | TITLE:
Should the folders in a solution match the namespace?
QUESTION:
Should the folders in a solution match the namespace? In one of my teams projects, we have a class library that has many sub-folders in the project. Project Name and Namespace: MyCompany.Project.Section. Within this project, there are several folders that match the namespace section: Folder Vehicles has classes in the MyCompany.Project.Section.Vehicles namespace Folder Clothing has classes in the MyCompany.Project.Section.Clothing namespace etc. Inside this same project, is another rogue folder Folder BusinessObjects has classes in the MyCompany.Project.Section namespace There are a few cases like this where folders are made for "organizational convenience". My question is: What's the standard? In class libraries do the folders usually match the namespace structure or is it a mixed bag?
ANSWER:
Also, note that if you use the built-in templates to add classes to a folder, it will by default be put in a namespace that reflects the folder hierarchy. The classes will be easier to find and that alone should be reason enough. The rules we follow are: Project/assembly name is the same as the root namespace, except for the .dll ending The only exception to the above rule is a project with a .Core ending, where the .Core is stripped off Folders equal namespaces One type per file (class, struct, enum, delegate, etc.) makes it easy to find the right file
"c#",
".net",
"namespaces"
] | 166 | 89 | 63,962 | 7 | 0 | 2008-08-07T12:53:19.217000 | 2008-08-07T12:58:21.267000 |
4,665 | 4,687 | Verifying files for testing | I was working with quality yesterday doing some formal testing. In their procedure they were verifying all files on the test machine were pulled from the release. The way they were verifying these files were the same was by checking the size and the date/time stamp windows put on them in Windows Explorer. These happened to be off for another reason which I was able to find out why. Is this a valid way to verify a file is the same? I didn't think so and started to argue, but I am younger here so thought I shouldn't push it too far. I wanted to argue they should do a binary compare on the file to verify its contents are exact. In my experience time/date stamps and size attributes don't always act as expected. Any thoughts??? | The only 100% way to figure out if two files are equal is to do a binary comparison of the two. If you can live with the risk of false positives (ie. two files which aren't 100% identical but your code says they are), then the digest and checksum algorithms can be used to lessen the work, particularly if the files lives on two different machines with less than optimal bandwidth so that a binary comparison is infeasible. The digest and checksum algorithms all have chances of false positives, but the exact chance varies with the algorithm. General rule is that the more crypto-made it is, and the more bits it outputs, the less chance of a false positive. Even the CRC-32 algorithm is fairly good to use and it should be easy to find code examples on the internet that implements it. If you only do a size/timestamp comparison then I'm sorry to say that this is easy to circumvent and won't actually give you much of a certainty that the files are the same or different. It depends though, if you know that in your world, timestamps are kept, and only changed when the file is modified, then you can use it, otherwise it holds no guarantee. | Verifying files for testing I was working with quality yesterday doing some formal testing. In their procedure they were verifying all files on the test machine were pulled from the release. The way they were verifying these files were the same was by checking the size and the date/time stamp windows put on them in Windows Explorer. These happened to be off for another reason which I was able to find out why. Is this a valid way to verify a file is the same? I didn't think so and started to argue, but I am younger here so thought I shouldn't push it too far. I wanted to argue they should do a binary compare on the file to verify its contents are exact. In my experience time/date stamps and size attributes don't always act as expected. Any thoughts??? | TITLE:
Verifying files for testing
QUESTION:
I was working with quality yesterday doing some formal testing. In their procedure they were verifying all files on the test machine were pulled from the release. The way they were verifying these files were the same was by checking the size and the date/time stamp windows put on them in Windows Explorer. These happened to be off for another reason which I was able to find out why. Is this a valid way to verify a file is the same? I didn't think so and started to argue, but I am younger here so thought I shouldn't push it too far. I wanted to argue they should do a binary compare on the file to verify its contents are exact. In my experience time/date stamps and size attributes don't always act as expected. Any thoughts???
ANSWER:
The only 100% way to figure out if two files are equal is to do a binary comparison of the two. If you can live with the risk of false positives (ie. two files which aren't 100% identical but your code says they are), then the digest and checksum algorithms can be used to lessen the work, particularly if the files lives on two different machines with less than optimal bandwidth so that a binary comparison is infeasible. The digest and checksum algorithms all have chances of false positives, but the exact chance varies with the algorithm. General rule is that the more crypto-made it is, and the more bits it outputs, the less chance of a false positive. Even the CRC-32 algorithm is fairly good to use and it should be easy to find code examples on the internet that implements it. If you only do a size/timestamp comparison then I'm sorry to say that this is easy to circumvent and won't actually give you much of a certainty that the files are the same or different. It depends though, if you know that in your world, timestamps are kept, and only changed when the file is modified, then you can use it, otherwise it holds no guarantee. | [
"windows",
"testing"
] | 7 | 3 | 824 | 5 | 0 | 2008-08-07T12:54:51.057000 | 2008-08-07T13:07:12.747000 |
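A small C# sketch of the digest idea from the answer: hash both files and compare the digests. SHA-256 is used here simply because it ships with the framework (the answer's CRC-32 would work the same way but with a higher false-positive chance), and the two paths are placeholders. A straight byte-for-byte comparison avoids even the residual hash-collision risk, at the cost of reading both files in full.

using System;
using System.IO;
using System.Security.Cryptography;

class FileDigestCompare
{
    static bool SameContents(string pathA, string pathB)
    {
        using (SHA256 sha = SHA256.Create())
        {
            byte[] hashA, hashB;
            using (FileStream a = File.OpenRead(pathA)) hashA = sha.ComputeHash(a);
            using (FileStream b = File.OpenRead(pathB)) hashB = sha.ComputeHash(b);
            return BitConverter.ToString(hashA) == BitConverter.ToString(hashB);
        }
    }

    static void Main()
    {
        Console.WriteLine(SameContents(@"C:\release\app.dll", @"C:\test\app.dll"));
    }
}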
4,677 | 5,453 | How do I create a Class using the Singleton Design Pattern in Ruby? | The singleton pattern is a design pattern that is used to restrict instantiation of a class to one object instance. Although I know how to code the singleton pattern in C++ and Java, I was wondering if anyone know how to implement it in Ruby? | Actually, the above answer was not completely correct. require 'singleton'
class Example include Singleton end You also need to include the require 'singleton' statement. | How do I create a Class using the Singleton Design Pattern in Ruby? The singleton pattern is a design pattern that is used to restrict instantiation of a class to one object instance. Although I know how to code the singleton pattern in C++ and Java, I was wondering if anyone know how to implement it in Ruby? | TITLE:
How do I create a Class using the Singleton Design Pattern in Ruby?
QUESTION:
The singleton pattern is a design pattern that is used to restrict instantiation of a class to one object instance. Although I know how to code the singleton pattern in C++ and Java, I was wondering if anyone know how to implement it in Ruby?
ANSWER:
Actually, the above answer was not completely correct. require 'singleton'
class Example include Singleton end You also need to include the require 'singleton' statement. | [
"ruby",
"design-patterns",
"singleton"
] | 11 | 10 | 1,640 | 3 | 0 | 2008-08-07T13:00:07.690000 | 2008-08-07T22:52:25.653000 |
4,684 | 4,706 | Automating VMWare or VirtualPC | I'm currently experimenting with build script, and since I have an ASP.net Web Part under source control, my build script should do that at the end: Grab the "naked" Windows 2003 IIS VMWare or Virtual PC Image from the Network Boot it up Copy the Files from the Build Folder to the Server Install it Do whatever else is needed I have never tried automating a Virtual Machine, but I saw that both VMWare and Virtual Server offer automation facilities. While I cannot use Virtual Server (Windows XP Home:-(), Virtual PC works. Does anyone here have experience with either VMWare Server or Virtual PC 2007 SP1 in terms of automation? Which one is better suited (I run windows, so the Platform-independence of VMWare does not count) and easier to automate? | With VMWare, there is the Virtual Machine Automation APIs (VIX API). You can find the reference guide here. It works with VMWare Server and WorkStation, but AFAIK it's not available for ESX Server. From the main page for VIX: The VIX API allows you to write scripts and programs that automate virtual machine operations. The API is high-level, easy to use, and practical for both script writers and application programmers. It runs on VMware Server and Workstation products, both Windows and Linux. Bindings are provided for C, Perl, and COM (Visual Basic, VBscript, C#). | Automating VMWare or VirtualPC I'm currently experimenting with build script, and since I have an ASP.net Web Part under source control, my build script should do that at the end: Grab the "naked" Windows 2003 IIS VMWare or Virtual PC Image from the Network Boot it up Copy the Files from the Build Folder to the Server Install it Do whatever else is needed I have never tried automating a Virtual Machine, but I saw that both VMWare and Virtual Server offer automation facilities. While I cannot use Virtual Server (Windows XP Home:-(), Virtual PC works. Does anyone here have experience with either VMWare Server or Virtual PC 2007 SP1 in terms of automation? Which one is better suited (I run windows, so the Platform-independence of VMWare does not count) and easier to automate? | TITLE:
Automating VMWare or VirtualPC
QUESTION:
I'm currently experimenting with build script, and since I have an ASP.net Web Part under source control, my build script should do that at the end: Grab the "naked" Windows 2003 IIS VMWare or Virtual PC Image from the Network Boot it up Copy the Files from the Build Folder to the Server Install it Do whatever else is needed I have never tried automating a Virtual Machine, but I saw that both VMWare and Virtual Server offer automation facilities. While I cannot use Virtual Server (Windows XP Home:-(), Virtual PC works. Does anyone here have experience with either VMWare Server or Virtual PC 2007 SP1 in terms of automation? Which one is better suited (I run windows, so the Platform-independence of VMWare does not count) and easier to automate?
ANSWER:
With VMWare, there is the Virtual Machine Automation APIs (VIX API). You can find the reference guide here. It works with VMWare Server and WorkStation, but AFAIK it's not available for ESX Server. From the main page for VIX: The VIX API allows you to write scripts and programs that automate virtual machine operations. The API is high-level, easy to use, and practical for both script writers and application programmers. It runs on VMware Server and Workstation products, both Windows and Linux. Bindings are provided for C, Perl, and COM (Visual Basic, VBscript, C#). | [
"vmware",
"virtualization"
] | 21 | 21 | 3,408 | 5 | 0 | 2008-08-07T13:05:38.040000 | 2008-08-07T13:31:29.733000 |
4,689 | 4,704 | Recommended Fonts for Programming? | What fonts do you use for programming, and for what language/IDE? I use Consolas for all my Visual Studio work, any other recommendations? | Either Consolas (download) or Andale Mono (download). I mostly use Andale Mono. I wrote an article about programming fonts a long time ago, I think Consolas wasn't even out yet. http://www.deadprogrammer.com/photos/fonts.gif I find that typing Illegal1 = O0 is a good test of suitability. | Recommended Fonts for Programming? What fonts do you use for programming, and for what language/IDE? I use Consolas for all my Visual Studio work, any other recommendations? | TITLE:
Recommended Fonts for Programming?
QUESTION:
What fonts do you use for programming, and for what language/IDE? I use Consolas for all my Visual Studio work, any other recommendations?
ANSWER:
Either Consolas (download) or Andale Mono (download). I mostly use Andale Mono. I wrote an article about programming fonts a long time ago, I think Consolas wasn't even out yet. http://www.deadprogrammer.com/photos/fonts.gif I find that typing Illegal1 = O0 is a good test of suitability. | [
"fonts",
"development-environment"
] | 182 | 196 | 253,030 | 114 | 0 | 2008-08-07T13:08:44.070000 | 2008-08-07T13:28:17.040000 |
4,724 | 4,755 | Why should I learn Lisp? | I really feel that I should learn Lisp and there are plenty of good resources out there to help me do it. I'm not put off by the complicated syntax, but where in "traditional commercial programming" would I find places it would make sense to use it instead of a procedural language. Is there a commercial killer-app out there that's been written in Lisp? | One of the main uses for Lisp is in Artificial Intelligence. A friend of mine at college took a graduate AI course and for his main project he wrote a " Lights Out " solver in Lisp. Multiple versions of his program utilized slightly different AI routines and testing on 40 or so computers yielded some pretty neat results (I wish it was online somewhere for me to link to, but I don't think it is). Two semesters ago I used Scheme (a language based on Lisp) to write an interactive program that simulated Abbott and Costello's "Who's on First" routine. Input from the user was matched against some pretty complicated data structures (resembling maps in other languages, but much more flexible) to choose what an appropriate response would be. I also wrote a routine to solve a 3x3 slide puzzle (an algorithm which could easily be extended to larger slide puzzles). In summary, learning Lisp (or Scheme) may not yield many practical applications beyond AI but it is an extremely valuable learning experience, as many others have stated. Programming in a functional language like Lisp will also help you think recursively (if you've had trouble with recursion in other languages, this could be a great help). | Why should I learn Lisp? I really feel that I should learn Lisp and there are plenty of good resources out there to help me do it. I'm not put off by the complicated syntax, but where in "traditional commercial programming" would I find places it would make sense to use it instead of a procedural language. Is there a commercial killer-app out there that's been written in Lisp? | TITLE:
Why should I learn Lisp?
QUESTION:
I really feel that I should learn Lisp and there are plenty of good resources out there to help me do it. I'm not put off by the complicated syntax, but where in "traditional commercial programming" would I find places it would make sense to use it instead of a procedural language. Is there a commercial killer-app out there that's been written in Lisp?
ANSWER:
One of the main uses for Lisp is in Artificial Intelligence. A friend of mine at college took a graduate AI course and for his main project he wrote a " Lights Out " solver in Lisp. Multiple versions of his program utilized slightly different AI routines and testing on 40 or so computers yielded some pretty neat results (I wish it was online somewhere for me to link to, but I don't think it is). Two semesters ago I used Scheme (a language based on Lisp) to write an interactive program that simulated Abbott and Costello's "Who's on First" routine. Input from the user was matched against some pretty complicated data structures (resembling maps in other languages, but much more flexible) to choose what an appropriate response would be. I also wrote a routine to solve a 3x3 slide puzzle (an algorithm which could easily be extended to larger slide puzzles). In summary, learning Lisp (or Scheme) may not yield many practical applications beyond AI but it is an extremely valuable learning experience, as many others have stated. Programming in a functional language like Lisp will also help you think recursively (if you've had trouble with recursion in other languages, this could be a great help). | [
"functional-programming",
"lisp"
] | 133 | 59 | 61,098 | 29 | 0 | 2008-08-07T13:54:03.137000 | 2008-08-07T14:22:36.603000 |
4,738 | 4,746 | Using ConfigurationManager to load config from an arbitrary location | I'm developing a data access component that will be used in a website that contains a mix of classic ASP and ASP.NET pages, and need a good way to manage its configuration settings. I'd like to use a custom ConfigurationSection, and for the ASP.NET pages this works great. But when the component is called via COM interop from a classic ASP page, the component isn't running in the context of an ASP.NET request and therefore has no knowledge of web.config. Is there a way to tell the ConfigurationManager to just load the configuration from an arbitrary path (e.g...\web.config if my assembly is in the /bin folder)? If there is then I'm thinking my component can fall back to that if the default ConfigurationManager.GetSection returns null for my custom section. Any other approaches to this would be welcome! | Try this: System.Configuration.ConfigurationFileMap fileMap = new ConfigurationFileMap(strConfigPath); //Path to your config file System.Configuration.Configuration configuration = System.Configuration.ConfigurationManager.OpenMappedMachineConfiguration(fileMap); | Using ConfigurationManager to load config from an arbitrary location I'm developing a data access component that will be used in a website that contains a mix of classic ASP and ASP.NET pages, and need a good way to manage its configuration settings. I'd like to use a custom ConfigurationSection, and for the ASP.NET pages this works great. But when the component is called via COM interop from a classic ASP page, the component isn't running in the context of an ASP.NET request and therefore has no knowledge of web.config. Is there a way to tell the ConfigurationManager to just load the configuration from an arbitrary path (e.g...\web.config if my assembly is in the /bin folder)? If there is then I'm thinking my component can fall back to that if the default ConfigurationManager.GetSection returns null for my custom section. Any other approaches to this would be welcome! | TITLE:
Using ConfigurationManager to load config from an arbitrary location
QUESTION:
I'm developing a data access component that will be used in a website that contains a mix of classic ASP and ASP.NET pages, and need a good way to manage its configuration settings. I'd like to use a custom ConfigurationSection, and for the ASP.NET pages this works great. But when the component is called via COM interop from a classic ASP page, the component isn't running in the context of an ASP.NET request and therefore has no knowledge of web.config. Is there a way to tell the ConfigurationManager to just load the configuration from an arbitrary path (e.g...\web.config if my assembly is in the /bin folder)? If there is then I'm thinking my component can fall back to that if the default ConfigurationManager.GetSection returns null for my custom section. Any other approaches to this would be welcome!
ANSWER:
Try this: System.Configuration.ConfigurationFileMap fileMap = new ConfigurationFileMap(strConfigPath); //Path to your config file System.Configuration.Configuration configuration = System.Configuration.ConfigurationManager.OpenMappedMachineConfiguration(fileMap); | [
"asp.net",
"configuration",
"asp-classic"
] | 135 | 131 | 114,438 | 9 | 0 | 2008-08-07T14:07:21.033000 | 2008-08-07T14:14:44.187000 |
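A closely related option to the machine-level mapping shown in the answer is ExeConfigurationFileMap, which maps an arbitrary config file at the exe/web level; the path and section name below are placeholders, and the project needs a reference to System.Configuration.dll.

using System;
using System.Configuration;

class LoadArbitraryConfig
{
    static void Main()
    {
        ExeConfigurationFileMap fileMap = new ExeConfigurationFileMap();
        fileMap.ExeConfigFilename = @"C:\inetpub\wwwroot\mysite\web.config";

        Configuration config =
            ConfigurationManager.OpenMappedExeConfiguration(fileMap, ConfigurationUserLevel.None);

        // Null here means the named custom section is not declared in that file.
        ConfigurationSection section = config.GetSection("mySettings");
        Console.WriteLine(section != null ? "custom section found" : "custom section missing");
    }
}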