content: string (lengths 86 to 88.9k)
title: string (lengths 0 to 150)
question: string (lengths 1 to 35.8k)
answers: sequence
answers_scores: sequence
non_answers: sequence
non_answers_scores: sequence
tags: sequence
name: string (lengths 30 to 130)
Q: How to set up a crontab to execute at a specific time How can I set up my crontab to execute X script at 11:59PM every day without emailing me or creating any logs? Right now my crontab looks something like this @daily /path/to/script.sh A: When you do crontab -e, try this: 59 23 * * * /usr/sbin/myscript > /dev/null That means: at minute 59 of hour 23, on every day of the month (*), every month, and every weekday, execute myscript. See man crontab for some more info and examples. A: Following up on svrist's answer, depending on your shell, the 2>&1 should go after > /dev/null or you will still see the output from stderr. The following will silence both stdout and stderr: 59 23 * * * /usr/sbin/myscript > /dev/null 2>&1 The following silences stdout, but stderr will still appear (via stdout): 59 23 * * * /usr/sbin/myscript 2>&1 > /dev/null The Advanced Bash Scripting Guide's chapter on I/O redirection is a good reference--search for 2>&1 to see a couple of examples. A: With the above answer you will still receive email containing any text written to stderr. Some people redirect that away too, and make sure that the script writes a log instead. ... 2>&1 ....
How to set up a crontab to execute at a specific time
How can I set up my crontab to execute X script at 11:59PM every day without emailing me or creating any logs? Right now my crontab looks something like this @daily /path/to/script.sh
[ "When you do crontab -e, try this:\n59 23 * * * /usr/sbin/myscript > /dev/null\n\nThat means: At 59 Minutes and 23 Hours on every day (*) on every month on every weekday, execute myscript.\nSee man crontab for some more info and examples.\n", "Following up on svrist's answer, depending on your shell, the 2>&1 should go after > /dev/null or you will still see the output from stderr.\nThe following will silence both stdout and stderr:\n59 23 * * * /usr/sbin/myscript > /dev/null 2>&1\n\nThe following silences stdout, but stderr will still appear (via stdout):\n59 23 * * * /usr/sbin/myscript 2>&1 > /dev/null\n\nThe Advanced Bash Scripting Guide's chapter on IO redirection is a good reference--search for 2>&1 to see a couple of examples.\n", "You will with the above response receive email with any text written to stderr. Some people redirect that away too, and make sure that the script writes a log instead.\n... 2>&1 ....\n\n" ]
[ 10, 6, 5 ]
[]
[]
[ "cron", "settings" ]
stackoverflow_0000003136_cron_settings.txt
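A quick way to check which stream a script writes to -- and therefore what the redirections above will silence -- is a stub program in place of myscript. A minimal C# sketch (the class name and messages are placeholders):

using System;

class Myscript
{
    static void Main()
    {
        Console.Out.WriteLine("normal output");  // silenced by > /dev/null
        Console.Error.WriteLine("error output"); // still mailed by cron unless 2>&1 follows the redirect
    }
}

Scheduled with > /dev/null 2>&1 it should produce no mail; with the 2>&1 removed, only the stderr line should be mailed.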
Q: Get a preview JPEG of a PDF on Windows? I have a cross-platform (Python) application which needs to generate a JPEG preview of the first page of a PDF. On the Mac I am spawning sips. Is there something similarly simple I can do on Windows? A: ImageMagick delegates the PDF->bitmap conversion to GhostScript anyway, so here's a command you can use (it's based on the actual command listed by the ps:alpha delegate in ImageMagick, just adjusted to use JPEG as output): gs -q -dQUIET -dPARANOIDSAFER -dBATCH -dNOPAUSE -dNOPROMPT \ -dMaxBitmap=500000000 -dLastPage=1 -dAlignToPixels=0 -dGridFitTT=0 \ -sDEVICE=jpeg -dTextAlphaBits=4 -dGraphicsAlphaBits=4 -r72x72 \ -sOutputFile=$OUTPUT -f$INPUT where $OUTPUT and $INPUT are the output and input filenames. Adjust the 72x72 to whatever resolution you need. (Obviously, strip out the backslashes if you're writing out the whole command as one line.) This is good for two reasons: You don't need to have ImageMagick installed anymore. Not that I have anything against ImageMagick (I love it to bits), but I believe in simple solutions. ImageMagick does a two-step conversion. First PDF->PPM, then PPM->JPEG. This way, the conversion is one-step. Other things to consider: with the files I've tested, PNG compresses better than JPEG. If you want to use PNG, change the -sDEVICE=jpeg to -sDEVICE=png16m. A: You can use ImageMagick's convert utility for this, see some examples in http://studio.imagemagick.org/pipermail/magick-users/2002-May/002636.html : Convert taxes.pdf taxes.jpg Will convert a two page PDF file into [2] jpeg files: taxes.jpg.0, taxes.jpg.1 I can also convert these JPEGS to a thumbnail as follows: convert -size 120x120 taxes.jpg.0 -geometry 120x120 +profile '*' thumbnail.jpg I can even convert the PDF directly to a jpeg thumbnail as follows: convert -size 120x120 taxes.pdf -geometry 120x120 +profile '*' thumbnail.jpg This will result in a thumbnail.jpg.0 and thumbnail.jpg.1 for the two pages. A: Is the PC likely to have Acrobat installed? I think Acrobat installs a shell extension so previews of the first page of a PDF document appear in Windows Explorer's thumbnail view. You can get thumbnails yourself via the IExtractImage COM API, which you'll need to wrap. VBAccelerator has an example in C# that you could port to Python.
Get a preview JPEG of a PDF on Windows?
I have a cross-platform (Python) application which needs to generate a JPEG preview of the first page of a PDF. On the Mac I am spawning sips. Is there something similarly simple I can do on Windows?
[ "ImageMagick delegates the PDF->bitmap conversion to GhostScript anyway, so here's a command you can use (it's based on the actual command listed by the ps:alpha delegate in ImageMagick, just adjusted to use JPEG as output):\ngs -q -dQUIET -dPARANOIDSAFER -dBATCH -dNOPAUSE -dNOPROMPT \\\n-dMaxBitmap=500000000 -dLastPage=1 -dAlignToPixels=0 -dGridFitTT=0 \\\n-sDEVICE=jpeg -dTextAlphaBits=4 -dGraphicsAlphaBits=4 -r72x72 \\\n-sOutputFile=$OUTPUT -f$INPUT\n\nwhere $OUTPUT and $INPUT are the output and input filenames. Adjust the 72x72 to whatever resolution you need. (Obviously, strip out the backslashes if you're writing out the whole command as one line.)\nThis is good for two reasons:\n\nYou don't need to have ImageMagick installed anymore. Not that I have anything against ImageMagick (I love it to bits), but I believe in simple solutions.\nImageMagick does a two-step conversion. First PDF->PPM, then PPM->JPEG. This way, the conversion is one-step.\n\nOther things to consider: with the files I've tested, PNG compresses better than JPEG. If you want to use PNG, change the -sDEVICE=jpeg to -sDEVICE=png16m.\n", "You can use ImageMagick's convert utility for this, see some examples in http://studio.imagemagick.org/pipermail/magick-users/2002-May/002636.html\n:\n\nConvert taxes.pdf taxes.jpg \n\nWill convert a two page PDF file into [2] jpeg files: taxes.jpg.0,\n taxes.jpg.1\nI can also convert these JPEGS to a thumbnail as follows:\nconvert -size 120x120 taxes.jpg.0 -geometry 120x120 +profile '*' thumbnail.jpg\n\nI can even convert the PDF directly to a jpeg thumbnail as follows:\nconvert -size 120x120 taxes.pdf -geometry 120x120 +profile '*' thumbnail.jpg\n\nThis will result in a thumbnail.jpg.0 and thumbnail.jpg.1 for the two\n pages.\n\n", "Is the PC likely to have Acrobat installed? I think Acrobat installs a shell extension so previews of the first page of a PDF document appear in Windows Explorer's thumbnail view. You can get thumbnails yourself via the IExtractImage COM API, which you'll need to wrap. VBAccelerator has an example in C# that you could port to Python.\n" ]
[ 44, 16, 5 ]
[]
[]
[ "image", "pdf", "python", "windows" ]
stackoverflow_0000000502_image_pdf_python_windows.txt
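If the gs command above needs to be launched from code rather than a shell, a sketch along these lines works (shown in C#; the file names are placeholders, and a Python caller would pass the same argument list to subprocess):

using System.Diagnostics;

class PdfPreview
{
    static void Main()
    {
        // Same Ghostscript flags as in the answer: first page only, 72 dpi, JPEG device.
        var info = new ProcessStartInfo
        {
            FileName = "gswin64c.exe", // plain "gs" on non-Windows systems
            Arguments = "-q -dQUIET -dPARANOIDSAFER -dBATCH -dNOPAUSE -dNOPROMPT " +
                        "-dMaxBitmap=500000000 -dLastPage=1 -dAlignToPixels=0 -dGridFitTT=0 " +
                        "-sDEVICE=jpeg -dTextAlphaBits=4 -dGraphicsAlphaBits=4 -r72x72 " +
                        "-sOutputFile=preview.jpg -finput.pdf",
            UseShellExecute = false
        };
        using (var gs = Process.Start(info))
        {
            gs.WaitForExit(); // a non-zero ExitCode indicates the conversion failed
        }
    }
}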
Q: User authentication on Resin webserver I'm currently tasked with replacing an Apache + Resin Java web server with a Resin-only configuration. Currently in our apache setup, we use .htaccess and a small user database to provide passwording at a directory level. Does anyone know the equivalent setup when using just Resin as the web server? A: See "Authentication on Resin" The closest you'll come to .htaccess with apache is configuring for Basic auth using an XmlAuthenticator. You'll need to convert your htpasswd file into the XML format that resin uses. Follow the "Quick Start" section and you'll get what you want.
User authentication on Resin webserver
I'm currently tasked with replacing an Apache + Resin Java web server with a Resin-only configuration. Currently in our apache setup, we use .htaccess and a small user database to provide passwording at a directory level. Does anyone know the equivalent setup when using just Resin as the web server?
[ "See \"Authentication on Resin\"\nThe closest you'll come to .htaccess with apache is configuring for Basic auth using an XmlAuthenticator. You'll need to convert your htpasswd file into the XML format that resin uses.\nFollow the \"Quick Start\" section and you'll get what you want.\n" ]
[ 1 ]
[]
[]
[ "apache", "caucho", "configuration", "resin", "webserver" ]
stackoverflow_0000007214_apache_caucho_configuration_resin_webserver.txt
Q: Animation in .NET What is a good way to perform animation using .NET? I would prefer not to use Flash if possible, so am looking for suggestions of ways which will work to implement different types of animation on a new site I am producing. The new site is for a magician, so I want to provide animated buttons (cards turning over, etc.) and also embed video. Is it possible to do this without using Flash or is this the only real solution? I would like to keep it as cross-platform and standard as possible. A: Silverlight springs to mind as an obvious choice if you want to do animation using .NET on the web. It may not cover all platforms but will work in IE and Firefox and on the Mac. A: Have a look at the jQuery cross-browser JavaScript library for animation (it is what is used on Stack Overflow). The reference for it can be found at http://visualjquery.com/1.1.2.html. Unfortunately, without Flash, Silverlight or another plug-in, cross-system video support is limited. A: JavaScript is probably the way to go if you want to avoid Flash. Check this: http://www.webreference.com/programming/javascript/java_anim/ It won't work for embedded video, though, so you're stuck with Flash for that (or Silverlight, or QuickTime). A: Silverlight is the answer, and Moonlight will be the Linux equivalent, available shortly. We have done some beta testing on Moonlight and found it fairly stable with most of the Silverlight work we do.
Animation in .NET
What is a good way to perform animation using .NET? I would prefer not to use Flash if possible, so am looking for suggestions of ways which will work to implement different types of animation on a new site I am producing. The new site is for a magician, so I want to provide animated buttons (Cards turning over, etc.) and also embed video. Is it possible to do this without using Flash or is this the only real solution? I would like to keep it as cross-platform and standard as possible.
[ "Silverlight springs to mind as an obvious choice if you want to do animation using .NET on the web. It may not cover all platforms but will work in IE and FireFox and on the Mac.\n", "Have a look at the jQuery cross browser JavaScript library for animation (it is what is used on Stack Overflow). The reference for it can be found at http://visualjquery.com/1.1.2.html.\nUnfortunately without Flash, Silverlight or another plug-in cross system video support is limited.\n", "JavaScript is probably the way to go if you want to avoid Flash. Check this: http://www.webreference.com/programming/javascript/java_anim/\nIt won't work for embedded video, though, so you're stuck with Flash for that (or Silverlight, or QuickTime).\n", "Silverlight is the answer and Moonlight will be the linux equivalent and available shortly. We have done some beta testing on moonlight and found it fairly stable at with most of the Silverlight work we do.\n" ]
[ 4, 2, 0, 0 ]
[]
[]
[ ".net", "animation" ]
stackoverflow_0000007180_.net_animation.txt
Q: Visual Studio - new "default" property values for inherited controls I'm looking for help setting a new default property value for an inherited control in Visual Studio: class NewCombo : System.Windows.Forms.ComboBox { public NewCombo() { DropDownItems = 50; } } The problem is that the base class property DropDownItems has a 'default' attribute set on it that is a different value (not 50). As a result, when I drag the control onto a form, the designer file gets an explicit mycontrol.DropDownItems = 50; line. At first, this doesn't matter. But if later I change my inherited class to DropDownItems = 45; in the constructor, this does not affect any of the controls on any form since all those designer files still have the value 50 hard-coded in them. And the whole point was to have the value set in one place so I can deal with the customer changing his mind. Obviously, if I were creating my own custom property in the subclass, I could give it its own designer default attribute of whatever I wanted. But here I'm wanting to change the default values of properties in the base. Is there any way to apply Visual Studio attributes to a base class member? Or is there some other workaround to get the result I want? A: In your derived class you need to either override (or shadow using new) the property in question and then re-apply the default value attribute.
Visual Studio - new "default" property values for inherited controls
I'm looking for help setting a new default property value for an inherited control in Visual Studio: class NewCombo : System.Windows.Forms.ComboBox { public NewCombo() { DropDownItems = 50; } } The problem is that the base class property DropDownItems has a 'default' attribute set on it that is a different value (not 50). As a result, when I drag the control onto a form, the designer file gets an explicit mycontrol.DropDownItems = 50; line. At first, this doesn't matter. But if later I change my inherited class to DropDownItems = 45; in the constructor, this does not affect any of the controls on any form since all those designer files still have the value 50 hard-coded in them. And the whole point was to have the value set in one place so I can deal with the customer changing his mind. Obviously, if I were creating my own custom property in the subclass, I could give it its own designer default attribute of whatever I wanted. But here I'm wanting to change the default values of properties in the base. Is there any way to apply Visual Studio attributes to a base class member? Or is there some other workaround to get the result I want?
[ "In your derived class you need to either override (or shadow using new) the property in question and then re-apply the default value attribute.\n" ]
[ 5 ]
[]
[]
[ ".net", "c#", "vb.net", "visual_studio" ]
stackoverflow_0000007367_.net_c#_vb.net_visual_studio.txt
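Spelled out, the shadowing approach from the answer looks like the sketch below. It uses MaxDropDownItems, the standard WinForms ComboBox property (the question's DropDownItems is assumed to refer to the same thing); the designer only serializes values that differ from DefaultValue, so forms pick up later constructor changes automatically:

using System.ComponentModel;
using System.Windows.Forms;

class NewCombo : ComboBox
{
    public NewCombo()
    {
        MaxDropDownItems = 50;
    }

    // Shadow the base property and re-apply the designer default so the
    // designer no longer writes an explicit "= 50" into each form.
    [DefaultValue(50)]
    public new int MaxDropDownItems
    {
        get { return base.MaxDropDownItems; }
        set { base.MaxDropDownItems = value; }
    }
}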
Q: Change the width of a scrollbar Is it possible to change the width of a scroll bar on a form? This app is for a touch screen and it is a bit too narrow. A: This is a Windows Forms application? I was able to make a very fat and thick scrollbar by adjusting the "Width" property of my scroll bar control. Is your scroll bar something you have programmatic access to (i.e. it is a control you added to the form)? A: The width of the scrollbars is controlled by Windows. You can adjust the scrollbar width in Display Properties and it will affect all windows on the terminal.
Change the width of a scrollbar
Is it possible to change the width of a scroll bar on a form? This app is for a touch screen and it is a bit too narrow.
[ "This is a Windows Forms application? I was able to make a very fat and thick scrollbar by adjusting the \"Width\" property of my scroll bar control. \n\nIs your scroll bar something you have programmatic access to (i.e. it is a control you added to the form)?\n", "The width of the scrollbars is controlled by Windows. You can adjust the scrollbar width in Display Properties and it will affect all windows on the terminal.\n" ]
[ 4, 3 ]
[]
[]
[ "scrollbar", "vb.net" ]
stackoverflow_0000007224_scrollbar_vb.net.txt
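For a scrollbar you add yourself, the first answer comes down to one property; a minimal sketch (40 is an arbitrary touch-friendly width):

using System.Windows.Forms;

// Inside the form's constructor, after InitializeComponent():
var bar = new VScrollBar();
bar.Width = 40; // compare SystemInformation.VerticalScrollBarWidth for the system default
bar.Dock = DockStyle.Right;
Controls.Add(bar);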
Q: E-mail Notifications In a .net system I'm building, there is a need for automated e-mail notifications. These should be editable by an admin. What's the easiest way to do this? SQL table and WYSIWYG for editing? The queue is a great idea. I've been throwing around that type of process for a while with my old company. A: From a high level, yes. :D The main thing is some place to store the templates. A database is a great option if you're already using one; otherwise, file systems work fine. WYSIWYG editors (such as FCKeditor) work well and give you some good options regarding the features that you allow. Some sort of token replacement system is also a good idea if you need it. For example, if someone puts %FIRSTNAME% in the email template, the code that generates the email can do some simple pattern matching to replace known tokens with other known values that may be dynamic based on user or other circumstances. A: I am thinking that if these are automated notifications, then this means they are probably going out as a result of some type of event in your software. If this is a web-based app, and you are going to have a number of these being sent out, then consider implementing an email queue rather than sending out an email on every event. A component can query the queue periodically and send out any pending items. A: Are you just talking about the interface and storage, or the implementation of sending the emails as well? Yes, a SQL table with FROM, TO, Subject, Body should work for storage and, heck, a textbox or even maybe a RichText box should work for editing. Or is this a web interface? For actually sending it, check out the System.Web.Mail namespace, it's pretty self-explanatory and easy to use :) A: Adam Haile writes: check out the System.Web.Mail namespace By which you mean System.Net.Mail in .Net 2.0 and above :) A: How about using the new Workflow components in .NET 3.0 (and 3.5)? That is what we use in combination with templates in my current project. The templates have the basic format and the tokens are replaced with user information.
E-mail Notifications
In a .net system I'm building, there is a need for automated e-mail notifications. These should be editable by an admin. What's the easiest way to do this? SQL table and WYSIWYG for editing? The queue is a great idea. I've been throwing around that type of process for a while with my old company.
[ "From a high level, yes. :D The main thing is some place to store the templates. A database is a great option unless you're not already using one, then file systems work fine.\nWSIWIG editors (such as fckeditor) work well and give you some good options regarding the features that you allow.\nSome sort of token replacement system is also a good idea if you need it. For example, if someone puts %FIRSTNAME% in the email template, the code that generates the email can do some simple pattern matching to replace known tokens with other known values that may be dynamic based on user or other circumstances.\n", "I am thinking that if these are automated notifications, then this means they are probably going out as a result of some type of event in your software. If this is a web based app, and you are going to have a number of these being sent out, then consider implementing an email queue rather than sending out an email on every event.\nA component can query the queue periodically and send out any pending items.\n", "Are you just talking about the interface and storage, or the implementation of sending the emails as well?\nYes, a SQL table with FROM, TO, Subject, Body should work for storage and, heck, a textbox or even maybe a RichText box should work for editing.\nOr is this a web interface?\nFor actually sending it, check out the System.Web.Mail namespace, it's pretty self explanatory and easy to use :)\n", "\nAdam Haile writes:\n\ncheck out the System.Web.Mail namespace\n\n\nBy which you mean System.Net.Mail in .Net 2.0 and above :)\n", "How about using the new Workflow components in .NET 3.0 (and 3.5)? That is what we use in combination with templates in my current project. The templates have the basic format and the tokens are replaced with user information.\n" ]
[ 3, 1, 0, 0, 0 ]
[]
[]
[ ".net", "email" ]
stackoverflow_0000006210_.net_email.txt
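The token-replacement idea from the first answer can be a few lines; in this sketch the %TOKEN% convention is taken from the answer, and the dictionary contents are placeholders:

using System.Collections.Generic;
using System.Text.RegularExpressions;

static class TemplateFiller
{
    // Replaces %TOKEN% markers with known values; unknown tokens are left intact.
    public static string Fill(string template, IDictionary<string, string> values)
    {
        return Regex.Replace(template, "%([A-Z]+)%", m =>
            values.TryGetValue(m.Groups[1].Value, out var v) ? v : m.Value);
    }
}

For example, Fill("Dear %FIRSTNAME%, ...", new Dictionary<string, string> { ["FIRSTNAME"] = "Ada" }) yields "Dear Ada, ...".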
Q: What is best practice for FTP from a SQL Server 2005 stored procedure? What is the best method for executing FTP commands from a SQL Server stored procedure? We currently use something like this: EXEC master..xp_cmdshell 'ftp -n -s:d:\ftp\ftpscript.xmt 172.1.1.1' The problem is that the command seems to succeed even if the FTP ended in error. Also, the use of xp_cmdshell requires special permissions and may leave room for security issues.
What is best practice for FTP from a SQL Server 2005 stored procedure?
What is the best method for executing FTP commands from a SQL Server stored procedure? We currently use something like this: EXEC master..xp_cmdshell 'ftp -n -s:d:\ftp\ftpscript.xmt 172.1.1.1' The problem is that the command seems to succeed even if the FTP ended in error. Also, the use of xp_cmdshell requires special permissions and may leave room for security issues.
[ "If you're running SQL 2005 you could do this in a CLR integration assembly and use the FTP classes in the System.Net namespace to build a simple FTP client.\nYou'd benefit from being able to trap and handle exceptions and reduce the security risk of having to use xp_cmdshell.\nJust some thoughts.\n", "Another possibility is to use DTS or Integration Services (DTS for SQL Server 7 or 2000, SSIS for 2005 or higher). Both are from Microsoft, included in the Sql Server installation (in Standard edition at least) and have an FTP task and are designed for import/export jobs from Sql Server.\n", "If you need to do FTP from within the database, then I would go with a .NET assembly as Kevin suggested. That would provide the most control over the process, plus you would be able to log meaningful error messages to a table for reporting.\nAnother option would be to write a command line app that read the database for commands to run. You could then define a scheduled task to call that command line app every minutes or whatever the polling period needed to be. That would be more secure than enabling CLR support in the database server.\n" ]
[ 5, 3, 2 ]
[]
[]
[ "ftp", "sql_server" ]
stackoverflow_0000004246_ftp_sql_server.txt
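A sketch of the System.Net client code the first answer has in mind, using FtpWebRequest (the address, credentials and file names are placeholders; hosting this in a CLR procedure additionally requires deploying the assembly with EXTERNAL_ACCESS permission):

using System;
using System.IO;
using System.Net;

static class FtpClientSketch
{
    public static void Upload(string localPath, string remoteUri)
    {
        var request = (FtpWebRequest)WebRequest.Create(remoteUri); // e.g. "ftp://172.1.1.1/out.xmt"
        request.Method = WebRequestMethods.Ftp.UploadFile;
        request.Credentials = new NetworkCredential("user", "password");

        byte[] data = File.ReadAllBytes(localPath);
        using (Stream stream = request.GetRequestStream())
        {
            stream.Write(data, 0, data.Length);
        }

        // Unlike xp_cmdshell, a failed transfer surfaces as a WebException
        // that the stored procedure can catch and log.
        using (var response = (FtpWebResponse)request.GetResponse())
        {
            Console.WriteLine(response.StatusDescription);
        }
    }
}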
Q: Test serialization encoding What is the best way to verify/test that a text string is serialized to a byte array with a certain encoding? In my case, I want to verify that an XML structure is serialized to a byte array with the UTF-8 encoding, which has a variable character length. As an example, my current ugly procedure is to inject a character known to require two bytes into the structure before serializing, then replacing the two-byte character with an ASCII character and comparing the serialized array lengths. This should yield two serialized arrays where the array containing the two-byte characters should have length +1. Bonus points if the solution is elegant in Java. I can't think of any elegant way to search for a byte sequence in a byte array. (That could be used to search for a known byte sequence representing the desired character representation in UTF-8.) A: Perhaps you could deserialise the byte array using a known encoding and ensure that (a) it doesn't throw any exceptions, and (b) deserialises to the original string. It seems that from your description of the scenario, you may not have the original string readily available. Might there be a way to create it? A: That's good. You're right, I don't have the original string, since I'm testing a module that creates an XML document as a byte array. I didn't think about deserializing to a String with an expected encoding. That will do the trick.
Test serialization encoding
What is the best way to verify/test that a text string is serialized to a byte array with a certain encoding? In my case, I want to verify that an XML structure is serialized to a byte array with the UTF-8 encoding, which has a variable character length. As an example, my current ugly procedure is to inject a character known to require two bytes into the structure before serializing, then replacing the two-byte character with an ASCII character and comparing the serialized array lengths. This should yield two serialized arrays where the array containing the two-byte characters should have length +1. Bonus points if the solution is elegant in Java. I can't think of any elegant way to search for a byte sequence in a byte array. (That could be used to search for a known byte sequence representing the desired character representation in UTF-8.)
[ "Perhaps you could deserialise the byte array using a known encoding and ensure that (a) it doesn't throw any exceptions, and (b) deserialises to the original string. It seems that from your description of the scenario, you may not have the original string readily available. Might there be a way to create it?\n", "That's good.\nYou're right, I don't have the original string, since I'm testing a module that creates an XML document as a byte array. I didn't think about deserializing to a String with an expected encoding. That will do the trick.\n" ]
[ 2, 0 ]
[]
[]
[ "encoding", "java", "serialization", "string", "xml" ]
stackoverflow_0000007681_encoding_java_serialization_string_xml.txt
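The round-trip check suggested in the answers, sketched in C# (the question asks for Java, where the strict equivalent uses a CharsetDecoder configured to report malformed input): decode with a UTF-8 decoder that rejects invalid byte sequences, then compare with the expected string.

using System.Text;

static class EncodingCheck
{
    // Strict decoder: invalid UTF-8 byte sequences throw instead of being
    // silently replaced with U+FFFD.
    static readonly Encoding StrictUtf8 =
        new UTF8Encoding(encoderShouldEmitUTF8Identifier: false, throwOnInvalidBytes: true);

    public static bool RoundTrips(byte[] serialized, string expected)
    {
        return StrictUtf8.GetString(serialized) == expected;
    }
}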
Q: Default Form Button in Firefox I am building a server control that will search our db and return results. The server control contains an ASP:Panel. I have set the default button on the panel equal to my button id and have set the form default button equal to my button id. On the Panel: MyPanel.DefaultButton = SearchButton.ID On the Control: Me.Page.Form.DefaultButton = SearchButton.UniqueID Works fine in IE & Safari: I can type a search term and hit the enter key and it searches fine. If I do it in Firefox I get an alert box saying "Object reference not set to an instance of an object." Anyone run across this before? A: Is SearchButton a LinkButton? If so, the JavaScript that is written to the browser doesn't work properly. Here is a good blog post explaining the issue and how to solve it: Using Panel.DefaultButton property with LinkButton control in ASP.NET A: It turns out this resolved my issue: SearchButton.UseSubmitBehavior = False A: I might be wrong and this might not make a difference, but have you tried: Me.Page.Form.DefaultButton = SearchButton.ID instead of Me.Page.Form.DefaultButton = SearchButton.UniqueID
Default Form Button in Firefox
I am building a server control that will search our db and return results. The server control contains an ASP:Panel. I have set the default button on the panel equal to my button id and have set the form default button equal to my button id. On the Panel: MyPanel.DefaultButton = SearchButton.ID On the Control: Me.Page.Form.DefaultButton = SearchButton.UniqueID Works fine in IE & Safari: I can type a search term and hit the enter key and it searches fine. If I do it in Firefox I get an alert box saying "Object reference not set to an instance of an object." Anyone run across this before?
[ "Is SearchButton a LinkButton? If so, the javascript that is written to the browser doesn't work properly.\nHere is a good blog post explaining the issue and how to solve it: \nUsing Panel.DefaultButton property with LinkButton control in ASP.NET\n", "Ends up this resolved my issue:\n SearchButton.UseSubmitBehavior = False\n\n", "I might be wrong and this might not make a difference but have you tried:\nMe.Page.Form.DefaultButton = SearchButton.ID\n\ninstead of\nMe.Page.Form.DefaultButton = SearchButton.UniqueID\n\n" ]
[ 3, 2, 0 ]
[]
[]
[ "asp.net", "vb.net" ]
stackoverflow_0000006076_asp.net_vb.net.txt
Q: Eclipse on win64 Is anyone successfully using the latest 64-bit Ganymede release of Eclipse on Windows XP or Vista 64-bit? Currently I run the normal Eclipse 3.4 distribution on a 32bit JDK and launch & compile my apps with a 64bit JDK. Our previous experience has been that the 64bit Eclipse distro is unstable for us, so I'm curious if anyone is using it successfully. We are using JDK 1.6.0_05. A: I'm using Eclipse with a 64bit VM. However, I have to use Java 1.5, because with Java 1.6, even 1.6.0_10ea, Eclipse crashed when changing the .classpath file. On Linux I had the same problems and could only get the 64bit Eclipse to work with 64bit Java 1.5. The problem seems to be with just-in-time compilation, since with the VM parameter -Xint Eclipse works -- but this is not a solution, because then it's slow. Edit: With 1.6.0_11 it seems to work. 1.6_10 final might work as well, as mentioned in the comment, but I've not tested that. A: I've been successfully using it on Vista x64 for some light Java work. Nothing too involved and no extra plugins, but basic Java coding has been working without any issues. I'm using the 3.4M7 build but it looks like the 3.4 stable build supports Vista x64 now.
Eclipse on win64
Is anyone successfully using the latest 64-bit Ganymede release of Eclipse on Windows XP or Vista 64-bit? Currently I run the normal Eclipse 3.4 distribution on a 32bit JDK and launch & compile my apps with a 64bit JDK. Our previous experience has been that the 64bit Eclipse distro is unstable for us, so I'm curious if anyone is using it successfully. We are using JDK 1.6.0_05.
[ "I'm using Eclipse with a 64bit VM. However I have to use Java 1.5, because with Java 1.6, even 1.6.0_10ea, Eclipse crashed when changing the .classpath-file. On Linux I had the same problems and could only get the 64bit Eclipse to work with 64bit Java 1.5.\nThe problem seems to be with the just in time compilation, since with vmparam -Xint eclipse works -- but this is not a sollution, because it's slow then.\nEdit:\nWith 1.6.0_11 it seems to work. \n1.6_10 final might work as well as mentioned in the comment, but I've not tested that.\n", "I've been successfully using it on Vista x64 for some light Java work. Nothing too involved and no extra plugins, but basic Java coding has been working without any issues. I'm using the 3.4M7 build but it looks like the 3.4 stable build supports Vista x64 now.\n" ]
[ 7, 1 ]
[]
[]
[ "eclipse", "eclipse_3.4", "ganymede", "java" ]
stackoverflow_0000006222_eclipse_eclipse_3.4_ganymede_java.txt
Q: interrogating table lock schemes in T-SQL Is there some means of querying the system tables to establish which tables are using what locking schemes? I took a look at the columns in sysobjects but nothing jumped out. A: aargh, just being an idiot: SELECT name, lockscheme(name) FROM sysobjects WHERE type="U" ORDER BY name A: take a look at the syslockinfo and syslocks system tables you can also run the sp_lock proc
interrogating table lock schemes in T-SQL
Is there some means of querying the system tables to establish which tables are using what locking schemes? I took a look at the columns in sysobjects but nothing jumped out.
[ "aargh, just being an idiot:\nSELECT name, lockscheme(name)\nFROM sysobjects\nWHERE type=\"U\"\nORDER BY name\n\n", "take a look at the syslockinfo and syslocks system tables\nyou can also run the sp_lock proc\n" ]
[ 1, 0 ]
[]
[]
[ "sysobjects", "tsql" ]
stackoverflow_0000007933_sysobjects_tsql.txt
Q: .NET Interfaces Over the past few years I've changed from having a long flowing page of controls that I hid/showed to using a lot of user controls. I've always had a bit of a discussion between co-workers on best practices. Should you have properties that you populate, or use parameterized subs to load the information in your controls? Part of my fear of using parameters is the fact that I cannot make sure that everything will be populated. What are the basic rules for using interfaces? I've never created one. And should I try this, or stay with a 'sub load'? A: I'm not sure if interfaces are going to help you a lot here. My understanding is that you are breaking a page down into a set of "composite" user controls that contain other controls, and you want to decide whether to use properties for setting values. I guess this really depends on how the user controls are designed and whether they are being dynamically added to a page etc. (one possible scenario). I have a personal preference for specifying stuff in a constructor or using a factory method to create controls. I assume responsibility at creation for making sure that everything is set. My experience with properties is that I'll sometimes forget to set something and not realize my mistake. Your point about setting properties or using a sub, and everything being populated doesn't make a lot of sense to me. If you have some sort of dependency and need something else to be loaded then this could happen irrespective of whether it's a property or sub. I would refer to any book on VB.NET/C#/OOP to see the syntax for interfaces. Interfaces basically describe a contract for a class. If you have classes A and B and both implement an interface called ITime then both will provide all of the methods defined on ITime. They can still add their own methods but they must at minimum include an implementation of ITime's methods (e.g. we might have GetDate(), GetCurrentTime() as methods on ITime). An interface doesn't tell class A or B how those methods should work - just their name, parameters and return type. Look up inheritance in an OOP book for more information on how interface inheritance is different from implementation inheritance.
.NET Interfaces
Over the past few years I've changed from having a long flowing page of controls that I hid/showed to using a lot of user controls. I've always had a bit of a discussion between co-workers on best practices. Should you have properties that you populate, or use parameterized subs to load the information in your controls? Part of my fear of using parameters is the fact that I cannot make sure that everything will be populated. What are the basic rules for using interfaces? I've never created one. And should I try this, or stay with a 'sub load'?
[ "I'm not sure if interfaces are going to help you a lot here. My understanding is that you are breaking a page down into set of \"composite\" user controls that contain other controls, and you want to decide whether to use properties for setting values. \nI guess this really depends on how the user controls are designed and whether they are being dynamically added to a page etc (one possible scenario). I have a personal preference for specifying stuff in a constructor or using a factory method to create controls. I assume responsibility at creation for making sure that everything is set. My experience with properties is that I'll sometimes forget to set something and not realize my mistake. Your point about setting properties or using a sub, and everything being populated doesn't make a lot of sense to me. If you have some sort of dependency and need something else to be loaded then this could happen irrespective of whether it's a property or sub.\nI would refer to any book on VB.NET/C#/OOP to see the syntax for interfaces. Interfaces basically describe a contract for a class. If you have class A and B and both implement an interface called ITime then both will provide all of the methods defined on ITime. They can still add their own methods but they must at minimum include an implementation of ITime's methods (eg. we might have GetDate(), GetCurrentTime() as methods on ITime). An interface doesn't tell class A or B how those methods should work - just their name, parameters and return type. Lookup inheritance in an OOP book for more information on how interfaces inheritance is different from implementation inheritance.\n" ]
[ 1 ]
[]
[]
[ ".net", "interface", "user_controls" ]
stackoverflow_0000008066_.net_interface_user_controls.txt
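Written out, the ITime contract described in the answer looks like this (the return types are assumptions; the answer only names the methods):

using System;

interface ITime
{
    DateTime GetDate();
    DateTime GetCurrentTime();
}

class ClassA : ITime
{
    public DateTime GetDate() { return DateTime.Today; }
    public DateTime GetCurrentTime() { return DateTime.Now; }
}

// ClassB may implement the same two methods completely differently;
// any code written against ITime accepts either class.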
Q: How do I find the high water mark (for sessions) on Oracle 9i How can I find the high water mark (the historical maximum number of concurrent users) in an Oracle database (9i)? A: This should do the trick: SELECT sessions_highwater FROM v$license; A: select max_utilization from v$resource_limit where resource_name = 'sessions'; A good overview of Oracle system views can be found here.
How do I find the high water mark (for sessions) on Oracle 9i
How can I find the high water mark (the historical maximum number of concurrent users) in an Oracle database (9i)?
[ "This should do the trick:\nSELECT sessions_highwater FROM v$license;\n\n", "select max_utilization from v$resource_limit where resource_name = 'sessions';\n\nA good overview of Oracle system views can be found here.\n" ]
[ 5, 1 ]
[]
[]
[ "oracle", "oracle9i", "session", "sql" ]
stackoverflow_0000008145_oracle_oracle9i_session_sql.txt
Q: MySQL replication: if I don't specify any databases, will log_bin log EVERYTHING? I'm setting up replication for a server which runs a bunch of databases (one per client) and plan on adding more all the time. In my.cnf, instead of having: binlog-do-db = databasename 1 binlog-do-db = databasename 2 binlog-do-db = databasename 3 ... binlog-do-db = databasename n can I rather just have binlog-ignore-db = mysql binlog-ignore-db = informationschema (and no database to log specified) and assume that everything else is logged? EDIT: Actually, if I remove all my binlog-do-db entries, it seemingly logs everything (as you see the binary log file change position when you move the database), but on the slave server, nothing gets picked up! (Perhaps this is the case to use replicate-do-db? That would kill the idea; I guess I can't have MySQL automagically detect which databases to replicate.) A: That looks correct: http://dev.mysql.com/doc/refman/5.0/en/binary-log.html#option_mysqld_binlog-ignore-db. According to that reference: There are some --binlog-ignore-db rules. Does the default database match any of the --binlog-ignore-db rules? Yes: Do not write the statement, and exit. No: Write the query and exit. Since you only have ignore commands, all queries will be written to the log as long as the default (active) database doesn't match one of the ignored databases.
MySQL replication: if I don't specify any databases, will log_bin log EVERYTHING?
I'm setting up replication for a server which runs a bunch of databases (one per client) and plan on adding more all the time. In my.cnf, instead of having: binlog-do-db = databasename 1 binlog-do-db = databasename 2 binlog-do-db = databasename 3 ... binlog-do-db = databasename n can I rather just have binlog-ignore-db = mysql binlog-ignore-db = informationschema (and no database to log specified) and assume that everything else is logged? EDIT: Actually, if I remove all my binlog-do-db entries, it seemingly logs everything (as you see the binary log file change position when you move the database), but on the slave server, nothing gets picked up! (Perhaps this is the case to use replicate-do-db? That would kill the idea; I guess I can't have MySQL automagically detect which databases to replicate.)
[ "That looks correct: http://dev.mysql.com/doc/refman/5.0/en/binary-log.html#option_mysqld_binlog-ignore-db.\nAccording to that reference:\n\nThere are some --binlog-ignore-db\n rules. Does the default database match\n any of the --binlog-ignore-db rules?\n\nYes: Do not write the statement, and exit.\nNo: Write the query and exit.\n\n\nSince you only have ignore commands, all queries will be written to the log as long as the default (active) database doesn't match one of the ignored databases. \n" ]
[ 12 ]
[]
[]
[ "mysql", "replication" ]
stackoverflow_0000008166_mysql_replication.txt
Q: Remove Quotes and Commas from a String in MySQL I'm importing some data from a CSV file, and numbers that are larger than 1000 get turned into 1,100 etc. What's a good way to remove both the quotes and the comma from this so I can put it into an int field? Edit: The data is actually already in a MySQL table, so I need to be able to do this using SQL. Sorry for the mixup. A: My guess here is that because the data was able to import, the field is actually a varchar or some other character field, because importing to a numeric field might have failed. Here is a test case I ran as a pure MySQL solution. The table is just a single column (alpha) that is a varchar. mysql> desc t; +-------+-------------+------+-----+---------+-------+ | Field | Type | Null | Key | Default | Extra | +-------+-------------+------+-----+---------+-------+ | alpha | varchar(15) | YES | | NULL | | +-------+-------------+------+-----+---------+-------+ Add a record mysql> insert into t values('"1,000,000"'); Query OK, 1 row affected (0.00 sec) mysql> select * from t; +-------------+ | alpha | +-------------+ | "1,000,000" | +-------------+ Update statement. mysql> update t set alpha = replace( replace(alpha, ',', ''), '"', '' ); Query OK, 1 row affected (0.00 sec) Rows matched: 1 Changed: 1 Warnings: 0 mysql> select * from t; +---------+ | alpha | +---------+ | 1000000 | +---------+ So in the end the statement I used was: UPDATE table SET field_name = replace( replace(field_name, ',', ''), '"', '' ); I looked at the MySQL Documentation and it didn't look like I could do a regular expression find and replace. Although you could, like Eldila, use a regular expression for the find and then an alternative solution for the replace. Also be careful with s/"(\d+),(\d+)"/$1$2/ because what if the number has more than just a single comma, for instance "1,000,000"; you're going to want to do a global replace (in Perl that is s///g). But even with a global replace the replacement starts where you last left off (unless Perl is different), and would miss every other comma-separated group. A possible solution would be to make the first (\d+) optional, like so: s/(\d+)?,(\d+)/$1$2/g, and in this case I would need a second find and replace to strip the quotes. Here are some Ruby examples of the regular expressions acting on just the string "1,000,000"; notice there are NO double quotes inside the string, this is just a string of the number itself. >> "1,000,000".sub( /(\d+),(\d+)/, '\1\2' ) # => "1000,000" >> "1,000,000".gsub( /(\d+),(\d+)/, '\1\2' ) # => "1000,000" >> "1,000,000".gsub( /(\d+)?,(\d+)/, '\1\2' ) # => "1000000" >> "1,000,000".gsub( /[,"]/, '' ) # => "1000000" >> "1,000,000".gsub( /[^0-9]/, '' ) # => "1000000" A: Here is a good case for regular expressions. You can run a find and replace on the data either before you import (easier) or later on if the SQL import accepted those characters (not nearly as easy). But in either case, you have any number of methods to do a find and replace, be it editors, scripting languages, GUI programs, etc. Remember that you're going to want to find and replace all of the bad characters. A typical regular expression to find the comma and quotes (assuming just double quotes) is: (Blacklist) /[,"]/ Or, if you think something might change in the future, this regular expression matches anything except a number or decimal point. (Whitelist) /[^0-9\.]/ What has been discussed by the people above is that we don't know all of the data in your CSV file. It sounds like you want to remove the commas and quotes from all of the numbers in the CSV file. But because we don't know what else is in the CSV file, we want to make sure that we don't corrupt other data. Just blindly doing a find/replace could affect other portions of the file. A: You could use this Perl command. perl -lne 's/[,|"]//; print' file.txt > newfile.txt You may need to play around with it a bit, but it should do the trick. A: Here's the PHP way: $stripped = str_replace(array(',', '"'), '', $value); Link to W3Schools page A: My command does remove all ',' and '"'. In order to convert the string "1,000" more strictly, you will need the following command. perl -lne 's/"(\d+),(\d+)"/$1$2/; print' file.txt > newfile.txt A: Actually nlucaroni, your case isn't quite right. Your example doesn't include double-quotes, so id,age,name,... 1,23,phil, won't match my regex. It requires the format "XXX,XXX". I can't think of an example of when it will match incorrectly. All of the following examples won't include the delimiter in the regex: "111,111",234 234,"111,111" "111,111","111,111" Please let me know if you can think of a counter-example. Cheers! A: The solution to the changed question is basically the same. You will have to run a select query with the regex where clause. Something like Select * FROM SOMETABLE WHERE SOMEFIELD REGEXP '"(\d+),(\d+)"' For each of these rows, you want to do the following regex substitution s/"(\d+),(\d+)"/$1$2/ and then update the field with the new value. Please take Joseph Pecoraro's advice seriously and have a backup before doing mass changes to any files or databases. Because whenever you do regex, you can seriously mess up data if there are cases that you have missed.
Remove Quotes and Commas from a String in MySQL
I'm importing some data from a CSV file, and numbers that are larger than 1000 get turned into 1,100 etc. What's a good way to remove both the quotes and the comma from this so I can put it into an int field? Edit: The data is actually already in a MySQL table, so I need to be able to do this using SQL. Sorry for the mixup.
[ "My guess here is that because the data was able to import that the field is actually a varchar or some character field, because importing to a numeric field might have failed. Here was a test case I ran purely a MySQL, SQL solution.\n\nThe table is just a single column (alpha) that is a varchar.\nmysql> desc t;\n\n+-------+-------------+------+-----+---------+-------+\n| Field | Type | Null | Key | Default | Extra |\n+-------+-------------+------+-----+---------+-------+\n| alpha | varchar(15) | YES | | NULL | | \n+-------+-------------+------+-----+---------+-------+\n\nAdd a record\nmysql> insert into t values('\"1,000,000\"');\nQuery OK, 1 row affected (0.00 sec)\n\nmysql> select * from t;\n\n+-------------+\n| alpha |\n+-------------+\n| \"1,000,000\" | \n+-------------+\n\nUpdate statement.\nmysql> update t set alpha = replace( replace(alpha, ',', ''), '\"', '' );\nQuery OK, 1 row affected (0.00 sec)\nRows matched: 1 Changed: 1 Warnings: 0\n\nmysql> select * from t;\n\n+---------+\n| alpha |\n+---------+\n| 1000000 | \n+---------+\n\n\nSo in the end the statement I used was:\nUPDATE table\n SET field_name = replace( replace(field_name, ',', ''), '\"', '' );\n\nI looked at the MySQL Documentation and it didn't look like I could do the regular expressions find and replace. Although you could, like Eldila, use a regular expression for a find and then an alternative solution for replace.\n\nAlso be careful with s/\"(\\d+),(\\d+)\"/$1$2/ because what if the number has more then just a single comma, for instance \"1,000,000\" you're going to want to do a global replace (in perl that is s///g). But even with a global replace the replacement starts where you last left off (unless perl is different), and would miss the every other comma separated group. A possible solution would be to make the first (\\d+) optional like so s/(\\d+)?,(\\d+)/$1$2/g and in this case I would need a second find and replace to strip the quotes.\nHere are some ruby examples of the regular expressions acting on just the string \"1,000,000\", notice there are NOT double quote inside the string, this is just a string of the number itself.\n>> \"1,000,000\".sub( /(\\d+),(\\d+)/, '\\1\\2' )\n# => \"1000,000\" \n>> \"1,000,000\".gsub( /(\\d+),(\\d+)/, '\\1\\2' )\n# => \"1000,000\" \n>> \"1,000,000\".gsub( /(\\d+)?,(\\d+)/, '\\1\\2' )\n# => \"1000000\" \n>> \"1,000,000\".gsub( /[,\"]/, '' )\n# => \"1000000\" \n>> \"1,000,000\".gsub( /[^0-9]/, '' )\n# => \"1000000\"\n\n", "Here is a good case for regular expressions. You can run a find and replace on the data either before you import (easier) or later on if the SQL import accepted those characters (not nearly as easy). But in either case, you have any number of methods to do a find and replace, be it editors, scripting languages, GUI programs, etc. Remember that you're going to want to find and replace all of the bad characters.\nA typical regular expression to find the comma and quotes (assuming just double quotes) is: (Blacklist)\n/[,\"]/\n\nOr, if you find something might change in the future, this regular expression, matches anything except a number or decimal point. (Whitelist)\n/[^0-9\\.]/\n\nWhat has been discussed by the people above is that we don't know all of the data in your CSV file. It sounds like you want to remove the commas and quotes from all of the numbers in the CSV file. But because we don't know what else is in the CSV file we want to make sure that we don't corrupt other data. 
Just blindly doing a find/replace could affect other portions of the file.\n", "You could use this perl command.\nPerl -lne 's/[,|\"]//; print' file.txt > newfile.txt\n\nYou may need to play around with it a bit, but it should do the trick.\n", "Here's the PHP way:\n$stripped = str_replace(array(',', '\"'), '', $value);\n\nLink to W3Schools page\n", "My command does remove all ',' and '\"'.\nIn order to convert the sting \"1,000\" more strictly, you will need the following command.\nPerl -lne 's/\"(\\d+),(\\d+)\"/$1$2/; print' file.txt > newfile.txt\n\n", "Actually nlucaroni, your case isn't quite right. Your example doesn't include double-quotes, so\nid,age,name,...\n1,23,phil,\n\nwon't match my regex. It requires the format \"XXX,XXX\". I can't think of an example of when it will match incorrectly.\nAll the following example won't include the deliminator in the regex:\n\n\"111,111\",234\n234,\"111,111\"\n\"111,111\",\"111,111\"\n\n\nPlease let me know if you can think of a counter-example.\nCheers!\n", "The solution to the changed question is basically the same.\nYou will have to run select query with the regex where clause.\nSomthing like\nSelect *\n FROM SOMETABLE\n WHERE SOMEFIELD REGEXP '\"(\\d+),(\\d+)\"'\n\nForeach of these rows, you want to do the following regex substitution s/\"(\\d+),(\\d+)\"/$1$2/ and then update the field with the new value.\nPlease Joseph Pecoraro seriously and have a backup before doing mass changes to any files or databases. Because whenever you do regex, you can seriously mess up data if there are cases that you have missed.\n" ]
[ 17, 2, 0, 0, 0, 0, 0 ]
[ "Daniel's and Eldila's answer have one problem: They remove all quotes and commas in the whole file.\nWhat I usually do when I have to do something like this is to first replace all separating quotes and (usually) semicolons by tabs. \n\nSearch: \";\"\nReplace: \\t\n\nSince I know in which column my affected values will be I then do another search and replace:\n\nSearch: ^([\\t]+)\\t([\\t]+)\\t([0-9]+),([0-9]+)\\t\nReplace: \\1\\t\\2\\t\\3\\4\\t\n\n... given the value with the comma is in the third column.\nYou need to start with an \"^\" to make sure that it starts at the beginning of a line. Then you repeat ([0-9]+)\\t as often as there are columns that you just want to leave as they are.\n([0-9]+),([0-9]+) searches for values where there is a number, then a comma and then another number.\nIn the replace string we use \\1 and \\2 to just keep the values from the edited line, separating them with \\t (tab). Then we put \\3\\4 (no tab between) to put the two components of the number without the comma right after each other. All values after that will be left alone.\nIf you need your file to have semicolon to separate the elements, you then can go on and replace the tabs with semicolons. However then - if you leave out the quotes - you'll have to make sure that the text values do not contain any semicolons themselves. That's why I prefer to use TAB as column separator.\nI usually do that in an ordinary text editor (EditPlus) that supports RegExp, but the same regexps can be used in any programming language.\n" ]
[ -1 ]
[ "mysql", "regex", "string" ]
stackoverflow_0000007917_mysql_regex_string.txt
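The alternating-group pitfall described in the first answer is easy to reproduce; this small C# check (same patterns, different regex engine) shows why the strip-every-non-digit approach is the safer one:

using System;
using System.Text.RegularExpressions;

class CommaDemo
{
    static void Main()
    {
        const string s = "1,000,000";
        // Matches cannot overlap, so every other comma-separated group is missed:
        Console.WriteLine(Regex.Replace(s, @"(\d+),(\d+)", "$1$2")); // prints 1000,000
        // Removing everything that is not a digit handles any number of commas:
        Console.WriteLine(Regex.Replace(s, @"[^0-9]", ""));          // prints 1000000
    }
}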
Q: ASP.NET Display SVN Revision Number I see in the Stack Overflow footer that the SVN Revision number is displayed. Is this automated and if so, how does one implement it in ASP.NET? (Solutions in other languages are acceptable) A: Make sure that the file has svn:keywords "Rev Id" and then put $Rev$ somewhere in there. See this question and the answers to it. A: in our continuous integration setup we use SVNRevisionLabeller and pass the variables from this to MSBuild to use when creating the compiled website dll. It's then available to .NET using GetCurrentAssembly() in the final build. A: In my rails app I have a secret action which literally does this: render :text => `svn info #{RAILS_ROOT}` This is the equivalent of Process.Start( "svn info..." ) if you're only familiar with .NET) If I'm wondering if the guy who manages the servers has updated the site recently, I can hit this URL, and have a look.
ASP.NET Display SVN Revision Number
I see in the Stack Overflow footer that the SVN Revision number is displayed. Is this automated and if so, how does one implement it in ASP.NET? (Solutions in other languages are acceptable)
[ "Make sure that the file has svn:keywords \"Rev Id\" and then put $Rev$ somewhere in there.\nSee this question and the answers to it.\n", "in our continuous integration setup we use SVNRevisionLabeller and pass the variables from this to MSBuild to use when creating the compiled website dll. It's then available to .NET using GetCurrentAssembly() in the final build.\n", "In my rails app I have a secret action which literally does this:\nrender :text => `svn info #{RAILS_ROOT}`\n\nThis is the equivalent of Process.Start( \"svn info...\" ) if you're only familiar with .NET)\nIf I'm wondering if the guy who manages the servers has updated the site recently, I can hit this URL, and have a look.\n" ]
[ 6, 0, 0 ]
[]
[]
[ "asp.net", "svn" ]
stackoverflow_0000002308_asp.net_svn.txt
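One way to surface the expanded keyword from the first answer in a page footer is a hypothetical helper like this (it assumes the source file has svn:keywords "Rev" set, so that Subversion rewrites the literal in the working copy):

using System.Text.RegularExpressions;

static class RevisionInfo
{
    // Subversion expands this to something like "$Rev: 1234 $".
    const string Raw = "$Rev$";

    // Empty string until the keyword has actually been expanded.
    public static string Number
    {
        get { return Regex.Match(Raw, @"\d+").Value; }
    }
}

A footer can then render it with, for example, <%= RevisionInfo.Number %>.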
Q: How do you get a custom id to render using HtmlHelper in MVC Using preview 4 of ASP.NET MVC Code like: <%= Html.CheckBox( "myCheckBox", "Click Here", "True", false ) %> only outputs: <input type="checkbox" value="True" name="myCheckBox" /> There is a name there for the form post back but no id for javascript or labels :-( I was hoping that changing it to: Html.CheckBox( "myCheckBox", "Click Here", "True", false, new { id="myCheckBox" } ) would work - but instead I get an exception: System.ArgumentException: An item with the same key has already been added. As if there was already an id somewhere in a collection somewhere - I'm stumped! The full exception for anyone interested follows (hey - wouldn't it be nice to attach files in here): System.ArgumentException: An item with the same key has already been added. at System.ThrowHelper.ThrowArgumentException(ExceptionResource resource) at System.Collections.Generic.Dictionary`2.Insert(TKey key, TValue value, Boolean add) at System.Web.Routing.RouteValueDictionary.Add(String key, Object value) at System.Web.Mvc.TagBuilder2.CreateInputTag(HtmlInputType inputType, String name, RouteValueDictionary attributes) at System.Web.Mvc.CheckBoxBuilder.CheckBox(String htmlName, RouteValueDictionary htmlAttributes) at System.Web.Mvc.CheckBoxBuilder.CheckBox(String htmlName, String text, String value, Boolean isChecked, RouteValueDictionary htmlAttributes) at System.Web.Mvc.CheckBoxExtensions.CheckBox(HtmlHelper helper, String htmlName, String text, String value, Boolean isChecked, Object htmlAttributes) at ASP.views_account_termsandconditions_ascx.__Render__control1(HtmlTextWriter __w, Control parameterContainer) in c:\dev\myProject\Views\Account\Edit.ascx:line 108 A: Try this: <%= Html.CheckBox("myCheckbox", "Click here", "True", false, new {_id ="test" })%> For any keyword you can use an underscore before the name of the attribute. Instead of class you use _class. Since class is a keyword in C#, and also the name of the attribute in HTML. Now, "id" isn't a keyword in C#, but perhaps it is in another .NET language that they want to support. From what I can tell, it's not a keyword in VB.NET, F#, or Ruby so maybe it is a mistake that they force you to use an underscore with it. A: Apparently this is a bug. Because they are adding it to potential rendering values, they just forgot to include it. I would recommend creating a bug on codeplex, and download the source and modify it for your needs.
How do you get a custom id to render using HtmlHelper in MVC
Using preview 4 of ASP.NET MVC Code like: <%= Html.CheckBox( "myCheckBox", "Click Here", "True", false ) %> only outputs: <input type="checkbox" value="True" name="myCheckBox" /> There is a name there for the form post back but no id for javascript or labels :-( I was hoping that changing it to: Html.CheckBox( "myCheckBox", "Click Here", "True", false, new { id="myCheckBox" } ) would work - but instead I get an exception: System.ArgumentException: An item with the same key has already been added. As if there was already an id somewhere in a collection somewhere - I'm stumped! The full exception for anyone interested follows (hey - wouldn't it be nice to attach files in here): System.ArgumentException: An item with the same key has already been added. at System.ThrowHelper.ThrowArgumentException(ExceptionResource resource) at System.Collections.Generic.Dictionary`2.Insert(TKey key, TValue value, Boolean add) at System.Web.Routing.RouteValueDictionary.Add(String key, Object value) at System.Web.Mvc.TagBuilder2.CreateInputTag(HtmlInputType inputType, String name, RouteValueDictionary attributes) at System.Web.Mvc.CheckBoxBuilder.CheckBox(String htmlName, RouteValueDictionary htmlAttributes) at System.Web.Mvc.CheckBoxBuilder.CheckBox(String htmlName, String text, String value, Boolean isChecked, RouteValueDictionary htmlAttributes) at System.Web.Mvc.CheckBoxExtensions.CheckBox(HtmlHelper helper, String htmlName, String text, String value, Boolean isChecked, Object htmlAttributes) at ASP.views_account_termsandconditions_ascx.__Render__control1(HtmlTextWriter __w, Control parameterContainer) in c:\dev\myProject\Views\Account\Edit.ascx:line 108
[ "Try this: \n<%= Html.CheckBox(\"myCheckbox\", \"Click here\", \"True\", false, new {_id =\"test\" })%>\n\nFor any keyword you can use an underscore before the name of the attribute. Instead of class you use _class. Since class is a keyword in C#, and also the name of the attribute in HTML. Now, \"id\" isn't a keyword in C#, but perhaps it is in another .NET language that they want to support. From what I can tell, it's not a keyword in VB.NET, F#, or Ruby so maybe it is a mistake that they force you to use an underscore with it.\n", "Apparently this is a bug. Because they are adding it to potential rendering values, they just forgot to include it. I would recommend creating a bug on codeplex, and download the source and modify it for your needs.\n" ]
[ 5, 0 ]
[]
[]
[ "asp.net_mvc", "html_helper" ]
stackoverflow_0000008147_asp.net_mvc_html_helper.txt
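A short sketch of the underscore workaround from the first answer, using the preview-4 helper signature shown in the question; the rendered output line below is an expectation based on that answer, not something taken from the thread.

<%= Html.CheckBox("myCheckBox", "Click Here", "True", false,
                  new { _id = "myCheckBox", _class = "styled" }) %>

<%-- expected rendering: the leading underscore is stripped from the attribute name --%>
<input type="checkbox" id="myCheckBox" class="styled" value="True" name="myCheckBox" />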
Q: Getting started with a custom JXTA PeerGroup I have been working with JXTA 2.3 for the last year or so for a peer-to-peer computing platform I am developing. I am migrating to JXTA 2.5 and in the process I am trying to clean up a lot of my use of JXTA. For the most part, I approached JXTA with a just make it work attitude. I used it to jumpstart creating and managing my peer-to-peer overlay network and providing basic communication services. I would like to use it in a more JXTA way since I am making changes to move to 2.5 anyway. My first step would be a basic creation of a custom PeerGroup. I see some new mechanisms that are using the META-INF.services infrastructure of Java. Should I be listing a related PeerGroup implementing object here with a GUID in net.jxta.platform.Module? As I understand it, if I do this, when a group with a spec ID matching the GUID is encountered and joined or created it should automatically use the matching object. I should be able to just manually tie a PeerGroup object to the group but this new method using META-INF seems to be a lot easier to manage. Does anyone have any pointers or examples of using this infrastructure for PeerGroup implementation? Also, some general information on the META-INF.services mechanism in Java would be helpful. A: The META-INF.services stuff is known by its class name in the API: ServiceLoader. A Google search for ServiceLoader yields some information. I am not really familiar with it, but sometimes it's all about knowing the right search keywords.
Getting started with a custom JXTA PeerGroup
I have been working with JXTA 2.3 for the last year or so for a peer-to-peer computing platform I am developing. I am migrating to JXTA 2.5 and in the process I am trying to clean up a lot of my use of JXTA. For the most part, I approached JXTA with a just make it work attitude. I used it to jumpstart creating and managing my peer-to-peer overlay network and providing basic communication services. I would like to use it in a more JXTA way since I am making changes to move to 2.5 anyway. My first step would be a basic creation of a custom PeerGroup. I see some new mechanisms that are using the META-INF.services infrastructure of Java. Should I be listing a related PeerGroup implementing object here with a GUID in net.jxta.platform.Module? As I understand it, if I do this, when a group with a spec ID matching the GUID is encountered and joined or created it should automatically use the matching object. I should be able to just manually tie a PeerGroup object to the group but this new method using META-INF seems to be a lot easier to manage. Does anyone have any pointers or examples of using this infrastructure for PeerGroup implementation? Also, some general information on the META-INF.services mechanism in Java would be helpful.
[ "The META-INF.services stuff is known by its class name in the API: ServiceLoader. A Google search for ServiceLoader yields some information.\nI am not really familiar with it, but sometimes it's all about knowing the right search keywords.\n" ]
[ 6 ]
[]
[]
[ "java", "jxta", "p2p" ]
stackoverflow_0000002931_java_jxta_p2p.txt
Q: Remove the bar at the top of Loginview for formatting I'm making a webform using a LoginView, the problem is that because the control includes a grey bar telling you what type of control it is it throws off correctly formatting the page (it has LoginView1 at the top). Is there a way to hide this on the LoginView, as the contentPlaceholder does an excellent job of this? I've found that you can remove the ID, but that seems like a hack as it stops programmatic access A: I may have misunderstood your question but.... The 'grey bar telling you what type of control it is' only shows up if you are looking at the page in 'design view' in your IDE (are you using Visual Studio?). Once you run the page this label is not visible. It is very common for pages that have dynamic/server-side content to 'not look right' when you are looking at them in 'design view'. Little things like the label/grey bar you are talking about are just there to help you work on the page when it is not populated with the dynamic content. As a result of this, I find that 99.9% of the time I use 'source view' in my IDE because as your page content becomes more dynamic, the 'design view' becomes more useless. A: I don't know that there is a property to control this (can't find one on MSDN), but I'd think you could just iterate through the Controls property of the LoginView and hide that panel/label/whatever.
Remove the bar at the top of Loginview for formatting
I'm making a webform using a LoginView, the problem is that because the control includes a grey bar telling you what type of control it is it throws off correctly formatting the page (it has LoginView1 at the top). Is there a way to hide this on the LoginView, as the contentPlaceholder does an excellent job of this? I've found that you can remove the ID, but that seems like a hack as it stops programmatic access
[ "I may have misunderstood your question but.... \nThe 'grey bar telling you what type of control it is' only shows up if you are looking at the page in 'design view' in your IDE (are you using Visual Studio?).\nOnce you run the page this label is not visible. \nIt is very common for pages that have dynamic/server-side content to 'not look right' when you are looking at them in 'design view'. \nLittle things like the label/grey bar you are talking about are just there to help you work on the page when it is not populated with the dynamic content. \nAs a result of this, I find that 99.9% of the time I use 'source view' in my IDE because as your page content becomes more dynamic, the 'design view' becomes more useless.\n", "I don't know that there is a property to control this (can't find one on MSDN), but I'd think you could just iterate through the Controls property of the LoginView and hide that panel/label/whatever.\n" ]
[ 3, 0 ]
[]
[]
[ "asp.net", "webforms" ]
stackoverflow_0000007873_asp.net_webforms.txt
Q: I can't get my debugger to stop breaking on first-chance exceptions I'm using Visual C++ 2003 to debug a program remotely via TCP/IP. I had set the Win32 exception c0000005, "Access violation," to break into the debugger when thrown. Then, I set it back to "Use parent setting." The setting for the parent, Win32 Exceptions, is to continue when the exception is thrown. Now, when I debug the program, it breaks each time that exception is thrown, forcing me to click Continue to let it keep debugging. How do I get it to stop breaking like this? A: Is this an exception that your code would actually handle if you weren't running in the debugger? A: I'd like to support Will Dean's answer An access violation sounds like an actual bug in your code. It's not something I'd expect the underlying C/C++ Runtime to be throwing and catching internally. The 'first-chance-exceptions' feature is so you can intercept things which get 'caught' in code, using the debugger, and have a look. If there's nothing 'catching' that exception (which makes sense, why on earth would you catch and ignore access violations?), then it will trigger the debugger regardless of what options you may have set. A: Ctrl+Alt+E (or Debug\Exceptions) From there you can select which exceptions break.
I can't get my debugger to stop breaking on first-chance exceptions
I'm using Visual C++ 2003 to debug a program remotely via TCP/IP. I had set the Win32 exception c0000005, "Access violation," to break into the debugger when thrown. Then, I set it back to "Use parent setting." The setting for the parent, Win32 Exceptions, is to continue when the exception is thrown. Now, when I debug the program, it breaks each time that exception is thrown, forcing me to click Continue to let it keep debugging. How do I get it to stop breaking like this?
[ "Is this an exception that your code would actually handle if you weren't running in the debugger?\n", "I'd like to support Will Dean's answer\nAn access violation sounds like an actual bug in your code. It's not something I'd expect the underlying C/++ Runtime to be throwing and catching internally.\nThe 'first-chance-exceptions' feature is so you can intercept things which get 'caught' in code, using the debugger, and have a look. If there's nothing 'catching' that exception (which makes sense, why on earth would you catch and ignore access violations?), then it will trigger the debugger regardless of what options you may have set.\n", "Ctrl+Alt+E (or Debug\\Exceptions)\nFrom there you can select which exceptions break.\n" ]
[ 5, 5, 1 ]
[]
[]
[ "c++", "debugging", "first_chance_exception", "visual_studio", "visual_studio_2003" ]
stackoverflow_0000008263_c++_debugging_first_chance_exception_visual_studio_visual_studio_2003.txt
Q: How to programmatically iterate datagrid rows? I'm suddenly back to WinForms, after years of web development, and am having trouble with something that should be simple. I have an ArrayList of business objects bound to a Windows Forms DataGrid. I'd like the user to be able to edit the cells, and when finished, press a Save button. At that point I'd like to iterate all the rows and columns in the DataGrid to find any changes, and save them to the database. But I can't find a way to access the DataGrid rows. I'll also want to validate individual cells in real time, as they are edited, but I'm pretty sure that can be done. (Maybe not with an ArrayList as the DataSource?) But as for iterating the DataGrid, I'm quite surprised it doesn't seem possible. Must I really stuff my business objects' data into datatables in order to use the datagrid? A: foreach(var row in DataGrid1.Rows) { DoStuff(row); } //Or --------------------------------------------- foreach(DataGridRow row in DataGrid1.Rows) { DoStuff(row); } //Or --------------------------------------------- for(int i = 0; i < DataGrid1.Rows.Count; i++) { DoStuff(DataGrid1.Rows[i]); } A: object cell = myDataGrid[row, col]; A: Is there anything about WinForms 3.0 that is so much better than in 1.1 I don't know about 3.0, but you can write code in VS 2008 which runs on the .NET 2.0 framework. (So, you get to use the latest C# language, but you can only use the 2.0 libraries) This gets you Generics (List<DataRow> instead of those GodAwful ArrayLists) and a ton of other stuff, you'll literally end up writing 3x less code.
How to programmatically iterate datagrid rows?
I'm suddenly back to WinForms, after years of web development, and am having trouble with something that should be simple. I have an ArrayList of business objects bound to a Windows Forms DataGrid. I'd like the user to be able to edit the cells, and when finished, press a Save button. At that point I'd like to iterate all the rows and columns in the DataGrid to find any changes, and save them to the database. But I can't find a way to access the DataGrid rows. I'll also want to validate individual cells in real time, as they are edited, but I'm pretty sure that can be done. (Maybe not with an ArrayList as the DataSource?) But as for iterating the DataGrid, I'm quite surprised it doesn't seem possible. Must I really stuff my business objects' data into datatables in order to use the datagrid?
[ "foreach(var row in DataGrid1.Rows)\n{\n DoStuff(row);\n}\n//Or --------------------------------------------- \nforeach(DataGridRow row in DataGrid1.Rows)\n{\n DoStuff(row);\n}\n//Or ---------------------------------------------\nfor(int i = 0; i< DataGrid1.Rows.Count - 1; i++)\n{\n DoStuff(DataGrid1.Rows[i]);\n}\n\n", "object cell = myDataGrid[row, col];\n\n", "\nIs there anything about WinForms 3.0 that is so much better than in 1.1\n\nI don't know about 3.0, but you can write code in VS 2008 which runs on the .NET 2.0 framework. (So, you get to use the latest C# language, but you can only use the 2.0 libraries)\nThis gets you Generics (List<DataRow> instead of those GodAwful ArrayLists) and a ton of other stuff, you'll literally end up writing 3x less code.\n" ]
[ 5, 1, 0 ]
[ "Aha, I was really just testing everyone once again! :) The real answer is, you rarely need to iterate the datagrid. Because even when binding to an ArrayList, the binding is 2 way. Still, it is handy to know how to itereate the grid directly, it can save a few lines of code now and then. \nBut NotMyself and Orion gave the better answers: Convince the stakeholders to move up to a higher version of C#, to save development costs and increase maintainability and extensability.\n" ]
[ -2 ]
[ "winforms" ]
stackoverflow_0000006430_winforms.txt
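A sketch of a save loop over individual cells, assuming a DataGridView-style grid (its cell indexer takes the column index first, then the row; the classic 1.x DataGrid only exposes the myDataGrid[row, col] indexer shown in the second answer). The bounds avoid the off-by-one in a "Count - 1" loop, which would skip the last row.

for (int row = 0; row < dataGridView1.Rows.Count; row++)
{
    for (int col = 0; col < dataGridView1.Columns.Count; col++)
    {
        object value = dataGridView1[col, row].Value; // column, then row
        // compare against the bound business object and persist changes here
    }
}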
Q: What client(s) should be targeted in implementing an ICalendar export for events? http://en.wikipedia.org/wiki/ICalendar I'm working to implement an export feature for events. The link above lists tons of clients that support the ICalendar standard, but the "three big ones" I can see are Apple's iCal, Microsoft's Outlook, and Google's Gmail. I'm starting to get the feeling that each of these clients implements different parts of the "standard", and I'm unsure of what pieces of information we should be trying to export from the application so that someone can put it on their calendar (especially around recurrence). For example, from what I understand Outlook doesn't support hourly recurrence. Could any of you provide guidance of the "happy medium" here from a features implementation standpoint? Secondary question, if we decide to cut features from the export (such as hourly recurrence) because it isn't supported in Outlook, should we support it in the application as well? (it is a general purpose event scheduling application, with no business specific use in mind...so we really are looking for the happy medium). A: I have to say that I don't use the hourly recurrence feature; really, how many people have events that repeat in the same day? I could see it if someone, however, was to schedule when they needed to take a particular medicine at recurring times throughout the day. I would say support full features in the application itself, but provide a warning when they go to export the calendar that all event details may not work as expected, or find a way to export in a different manner for Outlook alone that does provide the hourly recurrence feature. A: I use iCal in Lightning (Thunderbird) and Rainlendar. I have used Calendaring software for years (decades) and have never had a need for repeating events within the same day. It is simple to add additional daily repeating events in the same day if it is really needed.
What client(s) should be targeted in implementing an ICalendar export for events?
http://en.wikipedia.org/wiki/ICalendar I'm working to implement an export feature for events. The link above lists tons of clients that support the ICalendar standard, but the "three big ones" I can see are Apple's iCal, Microsoft's Outlook, and Google's Gmail. I'm starting to get the feeling that each of these clients implements different parts of the "standard", and I'm unsure of what pieces of information we should be trying to export from the application so that someone can put it on their calendar (especially around recurrence). For example, from what I understand Outlook doesn't support hourly recurrence. Could any of you provide guidance of the "happy medium" here from a features implementation standpoint? Secondary question, if we decide to cut features from the export (such as hourly recurrence) because it isn't supported in Outlook, should we support it in the application as well? (it is a general purpose event scheduling application, with no business specific use in mind...so we really are looking for the happy medium).
[ "I have to say that I don't use the hourly recurrence feature as really how many people have events that repeat in the same day? I could see if someone however was to schedule when they needed to take a particular medicine at recurring times throughout the day.\nI would say support full features in the application itself, but provide a warning when they go to export the calendar that all event details may not work as expected or find a way to export in a different manner for Outlook alone that does provide the hourly recurrence feature.\n", "I use iCal in Lightning (Thunderbird) and Rainlendar.\nI have used Calendaring software for years (decades) and have never had a need for repeating events within the same day. It is simple to add additional daily repeating events in the same day if it is really needed.\n" ]
[ 2, 0 ]
[]
[]
[ "gmail", "icalendar", "outlook", "recurrence" ]
stackoverflow_0000006378_gmail_icalendar_outlook_recurrence.txt
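For the export itself, a hedged C# sketch of a minimal single-event .ics covering the fields the major clients share (UID, DTSTAMP, DTSTART/DTEND, SUMMARY); recurrence (RRULE) is deliberately left out since that is where client support diverges. Assumes using System; the PRODID value is a placeholder.

static string BuildIcs(string uid, string summary, DateTime startUtc, DateTime endUtc)
{
    const string fmt = "yyyyMMdd'T'HHmmss'Z'"; // UTC timestamps per RFC 2445
    return "BEGIN:VCALENDAR\r\nVERSION:2.0\r\nPRODID:-//MyApp//EN\r\n" +
           "BEGIN:VEVENT\r\n" +
           "UID:" + uid + "\r\n" +
           "DTSTAMP:" + DateTime.UtcNow.ToString(fmt) + "\r\n" +
           "DTSTART:" + startUtc.ToString(fmt) + "\r\n" +
           "DTEND:" + endUtc.ToString(fmt) + "\r\n" +
           "SUMMARY:" + summary + "\r\n" +
           "END:VEVENT\r\nEND:VCALENDAR\r\n";
}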
Q: Visual Studio refactoring: Remove method Is there any Visual Studio Add-In that can do the remove method refactoring? Suppose you have the following method: Result DoSomething(parameters) { return ComputeResult(parameters); } Or the variant where Result is void. The purpose of the refactoring is to replace all the calls to DoSomething with calls to ComputeResult or the expression that uses the parameters if ComputeResult is not a method call. A: If I understand the question, then Resharper calls this 'inline method' - Ctrl - R + I A: When it comes to refactoring like that, try out ReSharper. Just right click on the method name, click "Find usages", and refactor until it cannot find any references. And as dlamblin mentioned, the newest version of ReSharper has the possibility to inline a method. That should do just what you need. A: I would do it the simplest way: rename ComputeResult method to ComputeResultX rename DoSomething method to ComputeResult remove DoSomething method (which is now ComputeResult) rename ComputeResultX method back to ComputeResult Maybe VS will show some conflict because of the last rename, but ignore it. By "rename" I mean: overwrite the name of the method and after it use the dropdown (Shift+Alt+F10) and select "rename". It will replace all occurrences with the new name. A: There are a few products available to add extra refactoring options to Visual Studio 2005 & 2008, a few of the better ones are Refactor! Pro and Resharper. As far as remove method, there is a description in the canonical Refactoring book about how to do this incrementally. Personally, I follow a pattern something along these lines (assume that compiling and running unit tests occurs between each step): Create the new method Remove the body of the old method, change it to call the new method Search for all references to the old method (right click the method name and select "Find all References"), change them to calls to the new method Mark the old method as [Obsolete] (calls to it will now show up as warnings during the build) Delete the old method A: You can also right click the method name and click "Find all References" in Visual Studio. I personally would just do a CTRL + SHIFT + H to Find & Replace A: ReSharper is definitely the VS 2008 plug-in to have for refactoring. However it does not do this form of refactoring in one step; you will have to Refactor->rename DoSomething to ComputeResult and ignore the conflict with the real ComputeResult. Then delete the definition which was DoSomething. It's almost one step. However, maybe it can do it in one step, if I read that correctly.
Visual Studio refactoring: Remove method
Is there any Visual Studio Add-In that can do the remove method refactoring? Suppose you have the following method: Result DoSomething(parameters) { return ComputeResult(parameters); } Or the variant where Result is void. The purpose of the refactoring is to replace all the calls to DoSomething with calls to ComputeResult or the expression that uses the parameters if ComputeResult is not a method call.
[ "If I understand the question, then Resharper calls this 'inline method' - Ctrl - R + I\n", "When it comes to refactoring like that, try out ReSharper. \nJust right click on the method name, click \"Find usages\", and refactor until it cannot find any references.\nAnd as dlamblin mentioned, the newest version of ReSharper has the possibility to inline a method. That should do just what you need.\n", "I would do it the simpliest way:\n\nrename ComputeResult method to ComputeResultX\nrename DoSomething method to ComputeResult\nremove DoSomething method (which is now ComputeResult)\nrename ComputeResultX method back to ComputeResult\n\nMaybe VS will show some conflict because of the last rename, but ignore it.\nBy \"rename\" I mean: overwrite the name of the method and after it use the dropdown (Shift+Alt+F10) and select \"rename\". It will replace all occurences with the new name.\n", "There are a few products available to add extra refactoring options to Visual Studio 2005 & 2008, a few of the better ones are Refactor! Pro and Resharper.\nAs far as remove method, there is a description in the canonical Refactoring book about how to do this incrementally.\nPersonally, I follow a pattern something along these lines (assume that compiling and running unit tests occurs between each step):\n\nCreate the new method\nRemove the body of the old method, change it to call the new method\nSearch for all references to the old method (right click the method name and select \"Find all Reference\"), change them to calls to the new method\nMark the old method as [Obsolete] (calls to it will now show up as warnings during the build)\nDelete the old method\n\n", "You can also right click the method name and click \"Find all References\" in Visual Studio.\nI personally would just do a CTRL + SHIFT + H to Find & Replace\n", "ReSharper is definitely the VS 2008 plug in to have for refactoring. However it does not do this form of refactoring in one step; you will have to Refactor->rename DoSomething to ComputeResult and ignore the conflict with the real ComputeResult. Then delete the definition which was DoSomething. It's almost one step.\nHowever maybe it can do it one step. If I read that correctly.\n" ]
[ 6, 1, 1, 1, 0, 0 ]
[]
[]
[ "methods", "refactoring", "visual_studio" ]
stackoverflow_0000008549_methods_refactoring_visual_studio.txt
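What ReSharper's "inline method" amounts to on the question's own example, sketched by hand:

// Before: DoSomething only forwards to ComputeResult.
Result DoSomething(parameters) { return ComputeResult(parameters); }
Result r = DoSomething(parameters);   // at each call site

// After inlining: every call site uses the body directly,
// and DoSomething itself is deleted.
Result r = ComputeResult(parameters);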
Q: Connection Pooling in .NET/SQL Server? Is it necessary or advantageous to write custom connection pooling code when developing applications in .NET with an SQL Server database? I know that ADO.NET gives you the option to enable/disable connection pooling -- does that mean that it's built into the framework and I don't need to worry about it? Why do people talk about writing their own connection pooling software and how is this different than what's built into ADO.NET? A: The connection pooling built-in to ADO.Net is robust and mature. I would recommend against attempting to write your own version. A: I'm no real expert on this matter, but I know ADO.NET has its own connection pooling system, and as long as I've been using it it's been faultless. My reaction would be that there's no point in reinventing the wheel... Just make sure you close your connections when you're finished with them and everything will be fine! I hope someone else can give you some more firm answers! A: My understanding is that the connection pooling is automatically handled for you when using the SqlConnection object. This is purposefully designed to work with MSSQL and will ensure connections are pooled efficiently. You just need to be sure you close them when you are finished with them (and ensure they are disposed of). I have never heard of people needing to roll their own myself. But I admit my experience is kind of limited there. A: With the advent of ADO.Net and the newer versions of SQL Server, connection pooling is handled on two layers: first through ADO.Net itself and secondly by SQL Server 2005/2008 directly, eliminating the need for custom connection pooling. I have been informed that similar support is being planned or has been implemented in Oracle and MySQL, out of interest.
Connection Pooling in .NET/SQL Server?
Is it necessary or advantageous to write custom connection pooling code when developing applications in .NET with an SQL Server database? I know that ADO.NET gives you the option to enable/disable connection pooling -- does that mean that it's built into the framework and I don't need to worry about it? Why do people talk about writing their own connection pooling software and how is this different than what's built into ADO.NET?
[ "The connection pooling built-in to ADO.Net is robust and mature. I would recommend against attempting to write your own version.\n", "I'm no real expert on this matter, but I know ADO.NET has its own connection pooling system, and as long as I've been using it it's been faultless.\nMy reaction would be that there's no point in reinventing the wheel... Just make sure you close your connections when you're finished with them and everything will be fine!\nI hope someone else can give you some more firm anwers!\n", "My understanding is that the connection pooling is automatically handled for you when using the SqlConnection object. This is purposefully designed to work with MSSQL and will ensure connections are pooled efficiently. You just need to be sure you close them when you are finished with them (and ensure they are disposed of).\nI have never heard of people needing to roll their own myself. But I admit my experience is kind of limited there.\n", "With the advent of ADO.Net and the newer version of SQL connection pooling is handled on two layers, first through ADO.Net itself and secondly by SQL Server 2005/2008 directly, eliminating the need for custom connection pooling. \nI have been informed that similar support are being planned or have been implemented in Oracle and MySQL out of interest.\n" ]
[ 15, 3, 2, 1 ]
[ "Well, it is going to go away as the answer to all these questions will be LINQ. Incidentally, we have never needed custom connection pooling for any of our applications, so I am not sure what all the noise is about.\n" ]
[ -2 ]
[ ".net", "c#", "connection_pooling", "sql_server" ]
stackoverflow_0000008223_.net_c#_connection_pooling_sql_server.txt
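A sketch of leaning on the built-in pool: connections with identical connection strings share a pool, and Close/Dispose returns the physical connection to it rather than tearing it down. The pool-size keywords are optional and shown only for illustration; the database name is a placeholder.

using System.Data.SqlClient;

string connStr = "Data Source=.;Initial Catalog=MyDb;Integrated Security=SSPI;" +
                 "Pooling=true;Min Pool Size=0;Max Pool Size=100";

using (SqlConnection conn = new SqlConnection(connStr))
using (SqlCommand cmd = new SqlCommand("SELECT 1", conn))
{
    conn.Open();          // checks a connection out of the pool
    cmd.ExecuteScalar();
}                         // Dispose hands it back; no custom pooling code needed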
Q: Watch for change in ip address status Is there a way to watch for changes in the ip-address much the same as it is possible to watch for changes to files using the FileSystemWatcher? I'm connecting to a machine via tcp/ip but it takes a while until it gives me an ip-address. I would like to dim out the connect button until I have a valid ip-address. A: Check NetworkChange class. It raises an event when a network address changes.
Watch for change in ip address status
Is there a way to watch for changes in the ip-address much the same as it is possible to watch for changes to files using the FileSystemWatcher? I'm connecting to a machine via tcp/ip but it takes a while until it gives me an ip-address. I would like to dim out the connect button until I have a valid ip-address.
[ "Check NetworkChange class. It raises an event when a network address changes.\n" ]
[ 6 ]
[]
[]
[ ".net", "windows" ]
stackoverflow_0000008585_.net_windows.txt
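A minimal sketch of the NetworkChange approach; connectButton is a placeholder for your own control (an assumption, not from the thread), and in WinForms the handler fires on a worker thread, so marshal back to the UI thread before touching it. Assumes using System.Windows.Forms for MethodInvoker.

using System.Net.NetworkInformation;

NetworkChange.NetworkAddressChanged += delegate
{
    bool available = NetworkInterface.GetIsNetworkAvailable();
    connectButton.BeginInvoke((MethodInvoker)delegate
    {
        connectButton.Enabled = available; // dim/undim the Connect button
    });
};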
Q: Closet server versus Colo? As a programmer I need a place to store my stuff. I've been running a server in my parents' closet for a long time, but I recently came across a decent 2U server. I have no experience dealing with hosting companies, beyond the very cheap stuff, and I'm wondering what I should look for in a colo or if I should just keep my closet server. A: There are three major factors here. Cost. The colo will obviously be more expensive than sticking a server in your parents' closet. Quality. The colo should be a lot more reliable than the server in your parents' closet. They aren't as likely to go down when there's a power surge. They should provide some support if things do go wrong on their end. They will also likely give you better bandwidth. Convenience. It is a lot easier to fix a broken box when you can walk over to it and plug up a monitor. Going to the colo to troubleshoot is probably not going to be convenient, if it's even possible. Transferring files from your laptop to the server in the closet is also going to be a lot faster than transferring over the Internet. On the other hand, if it's your box in the closet, you have to deal with the hardware problems, so it can balance out. Personally, I pay for a (shared) server. I find that having someone else handle the server is worth it. Uploading large files can get really frustrating, but having to maintain an extra box in the closet is too much hassle for me. You really have to decide what you value most. Is it worth the extra money to you to have a more reliable, more hands-off server? A: If it were my source control server, I would not want to a) pay the added cost, or b) have to drive to the colo because I can't connect to my repository. A: I'd absolutely go for the server that's located under my roof, as long as I don't need it to be connected to the internet with a static IP. Why: It's a target for hackers, as soon as it is reachable from the net Any problems with the machine? I'd rather walk to the closet instead of calling a hotline - and probably pay for the service Connection speed (from me to the server) I can turn it off as long as I don't need it. Less power consumption, which is better for the environment. The hosting of a machine costs money all the time. Even when you don't need it.
Closet server versus Colo?
As a programmer I need a place to store my stuff. I've been running a server in my parents' closet for a long time, but I recently came across a decent 2U server. I have no experience dealing with hosting companies, beyond the very cheap stuff, and I'm wondering what I should look for in a colo or if I should just keep my closet server.
[ "There are three major factors here.\n\nCost. The colo will obviously be more expensive than sticking a server in your parents' closet.\nQuality. The colo should be a lot more reliable than the server in your parents' closet. They aren't as likely to go down when there's a power surge. They should provide some support if things do go wrong on their end. They will also likely give you better bandwidth.\nConvenience. It is a lot easier to fix a broken box when you can walk over to it and plug up a monitor. Going to the colo to troubleshoot is probably not going to be convenient, if it's even possible. Transferring files from your laptop to the server in the closet is also going to be a lot faster than transferring over the Internet. On the other hand, if it's your box in the closet, you have to deal with the hardware problems, so it can balance out.\n\nPersonally, I pay for a (shared) server. I find that having someone else handle the server is worth it. Uploading large files can get really frustrating, but having to maintain an extra box in the closet is too much hassle for me.\nYou really have to decide what you value most. Is it worth the extra money to you to have a more reliable, more hands-off server?\n", "If it were my source control server, I would not want to a) pay the added cost, or b)have to drive to the colo because I can't connect to my repository.\n", "I'd absolutely go for the server that's located under my roof, as long as I don't need it to be connected to the internet with a static IP.\nWhy:\n\nIt's a target for hackers, as soon as it is reachable from the net\nAny problems with the machine? I'd rather walk to the closet instead of calling a hotline - and probably pay for the service\nConnection speed (from me to the server)\nI can turn it off as long as I don't need it. Less power consuption, which is better for the environment.\nThe hosting of a machine costs money all the time. Even when you don't need it.\n\n" ]
[ 3, 1, 0 ]
[]
[]
[ "hardware", "storage" ]
stackoverflow_0000008545_hardware_storage.txt
Q: Numerical formatting using String.Format Are there any codes that allow for numerical formatting of data when using string.format? A: Loads, stick string.Format into Google :-) A quite good tutorial is at iduno A: Yes, you could format it this way: string.Format("Format number to: {0:#.00}", number); string.Format("Format date to: {0:MM/dd/yyyy}", date); A: There are a number. This MS site is probably the best place to look A: Here is another very good reference that complements what Keith mentioned. http://www.scribd.com/doc/2547864/msnetformattingstrings A: As Keith said above. The most common one I use is currency: String.Format("{0:c}", 12000); Which would output £12,000.00
Numerical formatting using String.Format
Are there any codes that allow for numerical formatting of data when using string.format?
[ "Loads, stick string.Format into Google :-)\nA quite good tutorial is at iduno\n", "Yes, you could format it this way:\nstring.Format(\"Format number to: {0 : #.00}\", number);\nstring.Format(\"Format date to: {0 : MM/dd/yyyy}\", date);\n\n", "There are a number. This MS site is probably the best place to look\n", "Here is another very good reference that compliments what Keith mentioned.\nhttp://www.scribd.com/doc/2547864/msnetformattingstrings \n", "As Keith said above. The most common one I use is currency:\nString.Format(\"{0:c}\", 12000);\n\nWhich would output £12,000.00\n" ]
[ 6, 4, 2, 2, 1 ]
[]
[]
[ ".net", "formatting", "numeric" ]
stackoverflow_0000008653_.net_formatting_numeric.txt
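A few more of the common specifiers as a quick sketch. Note there must be no spaces inside the braces: a format item like {0 : #.00} will not parse as intended; write {0:#.00}.

string.Format("{0:c}", 12000);          // currency, culture-dependent
string.Format("{0:n2}", 1234.5);        // grouped number -> 1,234.50
string.Format("{0:p1}", 0.125);         // percent (value is multiplied by 100)
string.Format("{0:x}", 255);            // hexadecimal -> ff
string.Format("{0:000000}", 42);        // zero-padded -> 000042
string.Format("{0:MM/dd/yyyy HH:mm}", DateTime.Now); // custom date/time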
Q: How can I empty the recycle bin for all users from a Windows service application in c# I'm looking for a c# snippet which I can insert in a Windows service. The code must empty the recycle bin for all users on the computer. I have previously tried using SHEmptyRecycleBin (ref http://www.codeproject.com/KB/cs/Empty_Recycle_Bin.aspx) however the code doesn't work when run from a Windows service as the service is running with local system privileges. A: Hopefully you can't. A service running as the local machine should not be clearing my Recycle bin, ever. You could promote the service to run as an Admin account then it would have the right (and be a security risk), but why do you want to do this? It sounds like the sort of thing Viruses try to do. A: I think doing something like this is against Microsoft recommended practices. What are you trying to do that requires emptying the Recycle Bin from a Windows service? A: First, have you tried running the service on an interactive user account? Maybe SHEmptyRecycleBin requires an interactive user even though it doesn't necessarily display a Window. Second, I'm not sure it's a good idea to delete other users' stuff but I guess you have a very good reason?
How can I empty the recycle bin for all users from a Windows service application in c#
I'm looking for a c# snippet which I can insert in a Windows service. The code must empty the recycle bin for all users on the computer. I have previously tried using SHEmptyRecycleBin (ref http://www.codeproject.com/KB/cs/Empty_Recycle_Bin.aspx) however the code doesn't work when run from a Windows service as the service is running with local system privileges.
[ "Hopefully you can't.\nA service running as the local machine should not be clearing my Recycle bin, ever.\nYou could promote the service to run as an Admin account then it would have the right (and be a security risk), but why do you want to do this? It sounds like the sort of think Viruses try to do.\n", "I think doing something like this is against Microsoft recommended practices. What are you trying to do that requires emptying the Recycle Bin from a Windows service?\n", "First, have you tried running the service on an interactive user account? Maybe SHEmptyRecycleBin requires an interactive user even though it doesn't necessarily display a Window.\nSecond, I'm not sure it's a good idea to delete other users' stuff but I guess you have a very good reason?\n" ]
[ 4, 2, 1 ]
[]
[]
[ "c#", "recycle_bin" ]
stackoverflow_0000008648_c#_recycle_bin.txt
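For reference, a hedged sketch of the P/Invoke the linked article wraps (the members belong inside a class). Note, per the answers, that under the LocalSystem service account this touches only that account's own profile, not interactive users' bins.

using System;
using System.Runtime.InteropServices;

const uint SHERB_NOCONFIRMATION = 0x1;
const uint SHERB_NOPROGRESSUI   = 0x2;
const uint SHERB_NOSOUND        = 0x4;

[DllImport("shell32.dll", CharSet = CharSet.Unicode)]
static extern int SHEmptyRecycleBin(IntPtr hWnd, string pszRootPath, uint dwFlags);

// null root path = the bins on all drives for the *calling* user
SHEmptyRecycleBin(IntPtr.Zero, null,
    SHERB_NOCONFIRMATION | SHERB_NOPROGRESSUI | SHERB_NOSOUND);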
Q: Code crash in MS Visual Studio 2005 in RELEASE configuration I have a workspace for running an H.263 Video Encoder in a loop for 31 times i.e. the main is executed 31 times to generate 31 different encoded bit streams. This MS Visual Studio 2005 Workspace has all C source files. When I create a "DEBUG" configuration for the workspace and build and execute it, it runs fine, i.e. it generates all the 31 output files as expected. But when I set the configuration of the workspace to "RELEASE" mode, and repeat the process, the encoder crashes at some test case run. Now to debug this I verified the following: Analyzed the code to see if there was any variable initialization being missed out in every run of the encoder Checked the various Workspace(Solution) options in both the modes (DEBUG and RELEASE). There are some obvious differences, but I set the optimization-related options explicitly the same in both modes. But still could not nail the problem and find a fix for that. Any pointers? -Ajit. A: It's hard to say what the problem might be without carefully inspecting the code. However... One of the differences between debug and release builds is how the function call stack frame is set up. There are certain classes of bad things you can do (like calling a function with the wrong number of arguments) that are not fatal in a debug build but crash horribly in a release build. Perhaps you could try changing the stack frame related options (I forget what they're called, sorry) in the release build to the same as the debug build and see whether that helps. Another thing might be to enable all the warnings you possibly can, and fix them all. A: Could be a concurrency problem of two threads. The DEBUG configuration slows the execution down, so the problem does not occur. But, only a guess. A: Interesting problem.. Are you sure you have no conditional compilation code lurking around that is not being compiled in release mode? i.e: #if (DEBUG) // Debug Code here #else // Release Code here #endif That's the only thing I can really think of.. Never experienced anything like this myself.. A: Can you add the debug symbols to the release build and run it in the debugger to see where and why it crashed? A: Yeah, those bastard crashes are the hardest to fix. Fortunately, there are some steps you can do that will give you clues before you resort to manually looking at the code and hope to find the needle. When does it crash? At every test? At a specific test? What does that test do that the others don't? What's the error? If it's an access violation, is there a pattern to where it happens? If the addresses are low, it might mean there is an uninitialised pointer somewhere. Is the program crashing with Debug configuration but without the debugger attached? If so, it's most likely a thread synchronisation problem as John Smithers pointed out. Have you tried running the code through an analyser such as Purify? It's slow but it's usually worth the wait. Try to debug the release configuration anyway. It will only dump assemblies but it can still give you an indication of what happens such as if the code pointer jumps in the middle of garbage or hits a breakpoint in an external library. Are you on an Intel architecture? If not, watch for memory alignment errors, they hard crash without warning on some architectures and those codec algorithms tend to create those situations a lot since they are overly optimized. 
A: Are you sure there are no precompile directives that, say, ignore some really important code in Release mode but allow it in Debug? Also, have you implemented any logging that might point to the precise assembly that's throwing the error? A: I would look at the crash in more detail - if it's crashing in a test case, then it sounds pretty easily reproducible, which is usually most of the challenge. A: Another thing to consider: in debug mode, the variables are initialized with 0xCCCCCCCC instead of zero. That might have some nasty side effects.
Code crash in MS Visual Studio 2005 in RELEASE configuration
I have a workspace for running an H.263 Video Encoder in a loop for 31 times i.e. the main is executed 31 times to generate 31 different encoded bit streams. This MS Visual Studio 2005 Workspace has all C source files. When I create a "DEBUG" configuration for the workspace and build and execute it, it runs fine, i.e. it generates all the 31 output files as expected. But when I set the configuration of the workspace to "RELEASE" mode, and repeat the process, the encoder crashes at some test case run. Now to debug this I verified the following: Analyzed the code to see if there was any variable initialization being missed out in every run of the encoder Checked the various Workspace(Solution) options in both the modes (DEBUG and RELEASE). There are some obvious differences, but I set the optimization-related options explicitly the same in both modes. But still could not nail the problem and find a fix for that. Any pointers? -Ajit.
[ "It's hard to say what the problem might be without carefully inspecting the code. However...\nOne of the differences between debug and release builds is how the function call stack frame is set up. There are certain classes of bad things you can do (like calling a function with the wrong number of arguments) that are not fatal in a debug build but crash horribly in a release build. Perhaps you could try changing the stack frame related options (I forget what they're called, sorry) in the release build to the same as the debug build and see whether that helps.\nAnother thing might be to enable all the warnings you possibly can, and fix them all.\n", "Could be a concurrency problem of two threads. The DEBUG configuration slows the execution down, so the problem does not occur. But, only a guess.\n", "Interesting problem.. Are you sure you have no conditional compilation code lurking around that is not being compiled in release mode? i.e:\n#if (DEBUG)\n// Debug Code here\n#else\n// Release Code here\n#endif\n\nThats the only thing I can really think of.. Never experienced anything like this myself..\n", "Can you add the debug symbols to the release build and run it in the debugger to see where and why it crashed?\n", "Yeah, those bastard crashes are the hardest to fix. Fortunatly, there are some steps you can do that will give you clues before you resort to manually looking at the code and hope to find the needle.\nWhen does it crash? At every test? At a specific test? What does that test does that the others don't?\nWhat's the error? If it's an access violation, is there a pattern to where it happens? If the addresses are low, it might mean there is an uninitialised pointer somewhere.\nIs the program crashing with Debug configuration but without the debugger attached? If so, it's most likely a thread synchronisation problem as John Smithers pointed out.\nHave you tried running the code through an analyser such as Purify? It's slow but it's usually worth the wait.\nTry to debug the release configuration anyway. It will only dump assemblies but it can still give you an indication of what happens such as if the code pointer jumps in the middle of garbage or hits a breakpoint in an external library.\nAre you on an Intel architecture? If not, watch for memory alignement errors, they hard crash without warning on some architectures and those codec algorithm tend to create those situations a lot since they are overly optimized.\n", "Are you sure there are no precompile directives that, say, ignores some really important code in Release mode but allows them in Debug?\nAlso, have you implemented any logging that might point out to the precise assembly that's throwing the error?\n", "I would look at the crash in more detail - if it's crashing in a test case, then it sounds pretty easily reproducible, which is usually most of the challenge.\n", "Another thing to consider: in debug mode, the variables are initialized with 0xCCCCCCCC instead of zero. That might have some nasty side effects.\n" ]
[ 2, 1, 1, 1, 1, 0, 0, 0 ]
[]
[]
[ "visual_studio_2005" ]
stackoverflow_0000008612_visual_studio_2005.txt
Q: Best implementation for Key Value Pair Data Structure? So I've been poking around with C# a bit lately, and all the Generic Collections have me a little confused. Say I wanted to represent a data structure where the head of a tree was a key value pair, and then there is one optional list of key value pairs below that (but no more levels than these). Would this be suitable? public class TokenTree { public TokenTree() { /* I must admit to not fully understanding this, * I got it from msdn. As far as I can tell, IDictionary is an * interface, and Dictionary is the default implementation of * that interface, right? */ SubPairs = new Dictionary<string, string>(); } public string Key; public string Value; public IDictionary<string, string> SubPairs; } It's only really a simple shunt for passing around data. A: There is an actual Data Type called KeyValuePair, use like this KeyValuePair<string, string> myKeyValuePair = new KeyValuePair<string,string>("defaultkey", "defaultvalue"); A: One possible thing you could do is use the Dictionary object straight out of the box and then just extend it with your own modifications: public class TokenTree : Dictionary<string, string> { public IDictionary<string, string> SubPairs; } This gives you the advantage of not having to enforce the rules of IDictionary for your Key (e.g., key uniqueness, etc). And yup you got the concept of the constructor right :) A: I think what you might be after (as a literal implementation of your question), is: public class TokenTree { public TokenTree() { tree = new Dictionary<string, IDictionary<string,string>>(); } IDictionary<string, IDictionary<string, string>> tree; } You did actually say a "list" of key-values in your question, so you might want to swap the inner IDictionary with a: IList<KeyValuePair<string, string>> A: There is a KeyValuePair built-in type. As a matter of fact, this is what the IDictionary is giving you access to when you iterate in it. Also, this structure is hardly a tree, finding a more representative name might be a good exercise. A: Just one thing to add to this (although I do think you have already had your question answered by others). In the interests of extensibility (since we all know it will happen at some point) you may want to check out the Composite Pattern This is ideal for working with "Tree-Like Structures".. Like I said, I know you are only expecting one sub-level, but this could really be useful for you if you later need to extend ^_^ A: @Jay Mooney: A generic Dictionary class in .NET is actually a hash table, just with fixed types. The code you've shown shouldn't convince anyone to use Hashtable instead of Dictionary, since both code pieces can be used for both types. For hashtable: foreach(object key in h.keys) { string keyAsString = key.ToString(); // btw, this is unnecessary string valAsString = h[key].ToString(); System.Diagnostics.Debug.WriteLine(keyAsString + " " + valAsString); } For dictionary: foreach(string key in d.keys) { string valAsString = d[key].ToString(); System.Diagnostics.Debug.WriteLine(key + " " + valAsString); } And just the same for the other one with KeyValuePair, just use the non-generic version for Hashtable, and the generic version for Dictionary. So it's just as easy both ways, but Hashtable uses Object for both key and value, which means you will box all value types, and you don't have type safety, and Dictionary uses generic types and is thus better. A: Dictionary Class is exactly what you want, correct. 
You can declare the field directly as Dictionary, instead of IDictionary, but that's up to you. A: Use something like this: class Tree<T> : Dictionary<T, IList<Tree<T>>> { } It's ugly, but I think it will give you what you want. Too bad KeyValuePair is sealed.
Best implementation for Key Value Pair Data Structure?
So I've been poking around with C# a bit lately, and all the Generic Collections have me a little confused. Say I wanted to represent a data structure where the head of a tree was a key value pair, and then there is one optional list of key value pairs below that (but no more levels than these). Would this be suitable? public class TokenTree { public TokenTree() { /* I must admit to not fully understanding this, * I got it from msdn. As far as I can tell, IDictionary is an * interface, and Dictionary is the default implementation of * that interface, right? */ SubPairs = new Dictionary<string, string>(); } public string Key; public string Value; public IDictionary<string, string> SubPairs; } It's only really a simple shunt for passing around data.
[ "There is an actual Data Type called KeyValuePair, use like this\nKeyValuePair<string, string> myKeyValuePair = new KeyValuePair<string,string>(\"defaultkey\", \"defaultvalue\");\n\n", "One possible thing you could do is use the Dictionary object straight out of the box and then just extend it with your own modifications:\npublic class TokenTree : Dictionary<string, string>\n{\n public IDictionary<string, string> SubPairs;\n}\n\nThis gives you the advantage of not having to enforce the rules of IDictionary for your Key (e.g., key uniqueness, etc).\nAnd yup you got the concept of the constructor right :)\n", "I think what you might be after (as a literal implementation of your question), is:\npublic class TokenTree\n{\n public TokenTree()\n {\n tree = new Dictionary<string, IDictionary<string,string>>();\n }\n\n IDictionary<string, IDictionary<string, string>> tree; \n}\n\nYou did actually say a \"list\" of key-values in your question, so you might want to swap the inner IDictionary with a:\nIList<KeyValuePair<string, string>>\n\n", "There is a KeyValuePair built-in type. As a matter of fact, this is what the IDictionary is giving you access to when you iterate in it.\nAlso, this structure is hardly a tree, finding a more representative name might be a good exercise.\n", "Just one thing to add to this (although I do think you have already had your question answered by others). In the interests of extensibility (since we all know it will happen at some point) you may want to check out the Composite Pattern This is ideal for working with \"Tree-Like Structures\"..\nLike I said, I know you are only expecting one sub-level, but this could really be useful for you if you later need to extend ^_^\n", "@Jay Mooney: A generic Dictionary class in .NET is actually a hash table, just with fixed types.\nThe code you've shown shouldn't convince anyone to use Hashtable instead of Dictionary, since both code pieces can be used for both types.\nFor hashtable:\nforeach(object key in h.keys)\n{\n string keyAsString = key.ToString(); // btw, this is unnecessary\n string valAsString = h[key].ToString();\n\n System.Diagnostics.Debug.WriteLine(keyAsString + \" \" + valAsString);\n}\n\nFor dictionary:\nforeach(string key in d.keys)\n{\n string valAsString = d[key].ToString();\n\n System.Diagnostics.Debug.WriteLine(key + \" \" + valAsString);\n}\n\nAnd just the same for the other one with KeyValuePair, just use the non-generic version for Hashtable, and the generic version for Dictionary.\nSo it's just as easy both ways, but Hashtable uses Object for both key and value, which means you will box all value types, and you don't have type safety, and Dictionary uses generic types and is thus better.\n", "Dictionary Class is exactly what you want, correct.\nYou can declare the field directly as Dictionary, instead of IDictionary, but that's up to you.\n", "Use something like this: \nclass Tree < T > : Dictionary < T, IList< Tree < T > > > \n{ \n} \n\nIt's ugly, but I think it will give you what you want. Too bad KeyValuePair is sealed.\n" ]
[ 141, 14, 7, 5, 3, 2, 1, 1 ]
[]
[]
[ "c#", "collections", "data_structures" ]
stackoverflow_0000008800_c#_collections_data_structures.txt
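A compact variant of the question's class, sketched with List<KeyValuePair<,>> for the "list of pairs" reading (duplicates allowed, order kept); keep the Dictionary field instead if sub-keys must be unique. Assumes using System.Collections.Generic.

public class TokenTree
{
    public string Key;
    public string Value;
    // One optional level of sub-pairs, as the question describes.
    public List<KeyValuePair<string, string>> SubPairs =
        new List<KeyValuePair<string, string>>();
}

// usage
TokenTree t = new TokenTree();
t.Key = "font"; t.Value = "body";
t.SubPairs.Add(new KeyValuePair<string, string>("size", "12"));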
Q: Is there any list datatype in MySQL stored procedures, or a way to emulate them? I would like to create a stored procedure in MySQL that takes a list as an argument. For example, say that I would like to be able to set multiple tags for an item in one call, then what I want to do is to define a procedure that takes the ID of the item and a list of tags to set. However, I can't seem to find any way to do this, there is no list datatype, as far as I'm aware, but can it be emulated somehow? Could the list of tags be a comma-separated string, which can somehow be split and looped over? How do you usually work with lists in MySQL stored procedures? A: This article has some good discussion on the problem of parsing an array to a stored procedure since stored procedures only allow valid table column data-types as parameters. There are some neat things you can do with the csv table type in mysql - that is if you are loading a flat file into the db. You could create a temporary table in the stored procedure, iterate over the csv list and insert it to the temp table, then create a cursor which selects the values from that table. This answer in the above mentioned thread shows a way of doing this. Generally I would split the array before I come to the database and then perform the query individually on each item. A: Depending on how complicated you want to get, you can use a generic linking table. For one of my applications there are several reports where the user might pick, for instance a list of customers to run the report on rather than just a single customer from a combo box. I have a separate table with 2 fields: UniqueID (guid) ItemID The pseudo-code looks like this: GUID guid = GenerateGUID() try for each customer in customerList { INSERT(guid, customerId) } ExecuteSQLProcedure(guid) --the procedure can inner-join to the list table to get the list finally DELETE WHERE UniqueID=guid A: Not sure if these will work specifically in a SP, but there are ENUM and SET datatypes in MySQL 5 which may do what you need. http://dev.mysql.com/doc/refman/5.0/en/enum.html http://dev.mysql.com/doc/refman/5.0/en/set.html A: In my programming language of choice, C#, I actually do this in the application itself because split() functions and loops are easier to program in C# than SQL. However! Perhaps you should look at the SubString_Index() function. For example, the following would return google: SELECT SUBSTRING_INDEX(SUBSTRING_INDEX('www.google.com', '.', -2), '.', 1);
Is there any list datatype in MySQL stored procedures, or a way to emulate them?
I would like to create a stored procedure in MySQL that takes a list as an argument. For example, say that I would like to be able to set multiple tags for an item in one call, then what I want to do is to define a procedure that takes the ID of the item and a list of tags to set. However, I can't seem to find any way to do this, there is no list datatype, as far as I'm aware, but can it be emulated somehow? Could the list of tags be a comma-separated string, which can somehow be split and looped over? How do you usually work with lists in MySQL stored procedures?
[ "This article has some good discussion on the problem of parsing an array to a stored procedure since stored procedures only allow valid table column data-types as parameters.\nThere are some neat things you can do with the csv table type in mysql - that is if you are loading a flat file into the db.\nYou could create a temporary table in the stored procedure, iterate over the csv list and insert it to the temp table, then create a cursor which selects the values from that table. This answer in the above mentioned thread shows a way of doing this.\nGenerally I would split the array before I come to the database and then perform the query individually on each item.\n", "Depending on how complicated you want to get, you can use a generic linking table. For one of my applications there are several reports where the user might pick, for instance a list of customers to run the report on rather than just a single customer from a combo box. I have a separate table with 2 fields:\n\nUniqueID (guid)\nItemID\n\nThe psuedo-code looks like this:\nGUID guid = GenerateGUID()\ntry\n for each customer in customerList { INSERT(guid, customerId) }\n ExecuteSQLPocedure(guid)\n --the procedure can inner-join to the list table to get the list\nfinally\n DELETE WHERE UniqueID=guid \n\n", "Not sure if these will work specifically in a SP, but there are ENUM and SET datatypes in MySQL 5 which may do what you need.\nhttp://dev.mysql.com/doc/refman/5.0/en/enum.html\nhttp://dev.mysql.com/doc/refman/5.0/en/set.html\n", "In my programming language of Choice, C#, I actually do this in the application itself because split() functions and loops are easier to program in C# then SQL, However!\nPerhaps you should look at SubString_Index() function.\nFor example, the following would return google:\nSELECT SUBSTRING_INDEX(SUBSTRING_INDEX('www.google.com', '.', -2), '.', 1);\n\n" ]
[ 9, 1, 0, 0 ]
[]
[]
[ "mysql", "stored_procedures" ]
stackoverflow_0000008795_mysql_stored_procedures.txt
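A minimal sketch of the comma-separated-list approach raised in the question above, walking the string with SUBSTRING_INDEX inside a procedure. The table and procedure names (item_tags, set_tags) are placeholders, not from the thread:

-- Hypothetical schema: item_tags(item_id INT, tag VARCHAR(50))
DELIMITER //
CREATE PROCEDURE set_tags(IN p_item_id INT, IN p_tags VARCHAR(1000))
BEGIN
  DECLARE v_tag VARCHAR(50);
  WHILE LENGTH(p_tags) > 0 DO
    -- Everything up to the first comma is the next tag.
    SET v_tag = TRIM(SUBSTRING_INDEX(p_tags, ',', 1));
    INSERT INTO item_tags (item_id, tag) VALUES (p_item_id, v_tag);
    -- Drop the consumed tag (and its comma) from the front of the list.
    IF LOCATE(',', p_tags) > 0 THEN
      SET p_tags = SUBSTRING(p_tags, LOCATE(',', p_tags) + 1);
    ELSE
      SET p_tags = '';
    END IF;
  END WHILE;
END //
DELIMITER ;

-- Usage: CALL set_tags(42, 'mysql,stored-procedures,lists');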
Q: Evidence Based Scheduling Tool Are there any free tools that implement evidence-based scheduling like Joel talks about? There is FogBugz, of course, but I am looking for a simple and free tool that can apply EBS to some tasks that I give estimates (and actual times which are complete) for. A: FogBugz is free for up to 2 users by the way. As far as I know, this is the only tool that does EBS. See here http://www.workhappy.net/2008/06/get-fogbugz-for.html A: According to Wikipedia, FogBugz is the only product currently offering EBS.
Evidence Based Scheduling Tool
Are there any free tools that implement evidence-based scheduling like Joel talks about? There is FogBugz, of course, but I am looking for a simple and free tool that can apply EBS to some tasks that I give estimates (and actual times which are complete) for.
[ "FogBugz is free for up to 2 users by the way. As far I know this is the only tool that does EBS.\nSee here http://www.workhappy.net/2008/06/get-fogbugz-for.html\n", "According to Wikipedia, Fogbugz is the only product currently offering EBS.\n" ]
[ 14, 7 ]
[]
[]
[ "fogbugz" ]
stackoverflow_0000008876_fogbugz.txt
Q: Get list of domains on the network Using the Windows API, how can I get a list of domains on my network? A: Answered my own question: Use the NetServerEnum function, passing in the SV_TYPE_DOMAIN_ENUM constant for the "servertype" argument. In Delphi, the code looks like this: <snip> type NET_API_STATUS = DWORD; PSERVER_INFO_100 = ^SERVER_INFO_100; SERVER_INFO_100 = packed record sv100_platform_id : DWORD; sv100_name : PWideChar; end; function NetServerEnum( //get a list of pcs on the network (same as DOS cmd "net view") const servername : PWideChar; const level : DWORD; const bufptr : Pointer; const prefmaxlen : DWORD; const entriesread : PDWORD; const totalentries : PDWORD; const servertype : DWORD; const domain : PWideChar; const resume_handle : PDWORD ) : NET_API_STATUS; stdcall; external 'netapi32.dll'; function NetApiBufferFree( //memory mgmt routine const Buffer : Pointer ) : NET_API_STATUS; stdcall; external 'netapi32.dll'; const MAX_PREFERRED_LENGTH = DWORD(-1); NERR_Success = 0; SV_TYPE_ALL = $FFFFFFFF; SV_TYPE_DOMAIN_ENUM = $80000000; function TNetwork.ComputersInDomain: TStringList; var pBuffer : PSERVER_INFO_100; pWork : PSERVER_INFO_100; dwEntriesRead : DWORD; dwTotalEntries : DWORD; i : integer; dwResult : NET_API_STATUS; begin Result := TStringList.Create; Result.Clear; dwResult := NetServerEnum(nil,100,@pBuffer,MAX_PREFERRED_LENGTH, @dwEntriesRead,@dwTotalEntries,SV_TYPE_DOMAIN_ENUM, PWideChar(FDomainName),nil); if dwResult = NERR_SUCCESS then begin try pWork := pBuffer; for i := 1 to dwEntriesRead do begin Result.Add(pWork.sv100_name); inc(pWork); end; //for i finally NetApiBufferFree(pBuffer); end; //try-finally end //if no error else begin raise Exception.Create('Error while retrieving computer list from domain ' + FDomainName + #13#10 + SysErrorMessage(dwResult)); end; end; <snip> A: You will need to use some LDAP queries Here is some code I have used in a previous script (it was taken off the net somewhere, and I've left in the copyright notices) ' This VBScript code gets the list of the domains contained in the ' forest that the user running the script is logged into ' --------------------------------------------------------------- ' From the book "Active Directory Cookbook" by Robbie Allen ' Publisher: O'Reilly and Associates ' ISBN: 0-596-00466-4 ' Book web site: http://rallenhome.com/books/adcookbook/code.html ' --------------------------------------------------------------- set objRootDSE = GetObject("LDAP://RootDSE") strADsPath = "<GC://" & objRootDSE.Get("rootDomainNamingContext") & ">;" strFilter = "(objectcategory=domainDNS);" strAttrs = "name;" strScope = "SubTree" set objConn = CreateObject("ADODB.Connection") objConn.Provider = "ADsDSOObject" objConn.Open "Active Directory Provider" set objRS = objConn.Execute(strADsPath & strFilter & strAttrs & strScope) objRS.MoveFirst while Not objRS.EOF Wscript.Echo objRS.Fields(0).Value objRS.MoveNext wend Also a C# version
Get list of domains on the network
Using the Windows API, how can I get a list of domains on my network?
[ "Answered my own question:\nUse the NetServerEnum function, passing in the SV_TYPE_DOMAIN_ENUM constant for the \"servertype\" argument.\nIn Delphi, the code looks like this:\n<snip>\ntype\n NET_API_STATUS = DWORD;\n PSERVER_INFO_100 = ^SERVER_INFO_100;\n SERVER_INFO_100 = packed record\n sv100_platform_id : DWORD;\n sv100_name : PWideChar;\nend;\n\nfunction NetServerEnum( //get a list of pcs on the network (same as DOS cmd \"net view\")\n const servername : PWideChar;\n const level : DWORD;\n const bufptr : Pointer;\n const prefmaxlen : DWORD;\n const entriesread : PDWORD;\n const totalentries : PDWORD;\n const servertype : DWORD;\n const domain : PWideChar;\n const resume_handle : PDWORD\n) : NET_API_STATUS; stdcall; external 'netapi32.dll';\n\nfunction NetApiBufferFree( //memory mgmt routine\n const Buffer : Pointer\n) : NET_API_STATUS; stdcall; external 'netapi32.dll';\n\nconst\n MAX_PREFERRED_LENGTH = DWORD(-1);\n NERR_Success = 0;\n SV_TYPE_ALL = $FFFFFFFF;\n SV_TYPE_DOMAIN_ENUM = $80000000;\n\n\nfunction TNetwork.ComputersInDomain: TStringList;\nvar\n pBuffer : PSERVER_INFO_100;\n pWork : PSERVER_INFO_100;\n dwEntriesRead : DWORD;\n dwTotalEntries : DWORD;\n i : integer;\n dwResult : NET_API_STATUS;\nbegin\n Result := TStringList.Create;\n Result.Clear;\n\n dwResult := NetServerEnum(nil,100,@pBuffer,MAX_PREFERRED_LENGTH,\n @dwEntriesRead,@dwTotalEntries,SV_TYPE_DOMAIN_ENUM,\n PWideChar(FDomainName),nil);\n\n if dwResult = NERR_SUCCESS then begin\n try\n pWork := pBuffer;\n for i := 1 to dwEntriesRead do begin\n Result.Add(pWork.sv100_name);\n inc(pWork);\n end; //for i\n finally\n NetApiBufferFree(pBuffer);\n end; //try-finally\n end //if no error\n else begin\n raise Exception.Create('Error while retrieving computer list from domain ' +\n FDomainName + #13#10 +\n SysErrorMessage(dwResult));\n end;\nend;\n<snip>\n\n", "You will need to use some LDAP queries\nHere is some code I have used in a previous script (it was taken off the net somewhere, and I've left in the copyright notices)\n' This VBScript code gets the list of the domains contained in the \n' forest that the user running the script is logged into\n\n' ---------------------------------------------------------------\n' From the book \"Active Directory Cookbook\" by Robbie Allen\n' Publisher: O'Reilly and Associates\n' ISBN: 0-596-00466-4\n' Book web site: http://rallenhome.com/books/adcookbook/code.html\n' ---------------------------------------------------------------\n\nset objRootDSE = GetObject(\"LDAP://RootDSE\")\nstrADsPath = \"<GC://\" & objRootDSE.Get(\"rootDomainNamingContext\") & \">;\"\nstrFilter = \"(objectcategory=domainDNS);\"\nstrAttrs = \"name;\"\nstrScope = \"SubTree\"\n\nset objConn = CreateObject(\"ADODB.Connection\")\nobjConn.Provider = \"ADsDSOObject\"\nobjConn.Open \"Active Directory Provider\"\nset objRS = objConn.Execute(strADsPath & strFilter & strAttrs & strScope)\nobjRS.MoveFirst\nwhile Not objRS.EOF\n Wscript.Echo objRS.Fields(0).Value\n objRS.MoveNext\nwend\n\n\nAlso a C# version\n" ]
[ 3, 1 ]
[]
[]
[ "winapi" ]
stackoverflow_0000008880_winapi.txt
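The second answer above links to a C# version without quoting it; a rough equivalent of the VBScript using System.DirectoryServices might look like this (an untested sketch, not the linked code):

using System;
using System.DirectoryServices; // add a reference to System.DirectoryServices.dll

class DomainLister
{
    static void Main()
    {
        // Read the forest root from RootDSE, as the VBScript does.
        using (DirectoryEntry rootDse = new DirectoryEntry("LDAP://RootDSE"))
        {
            string root = (string)rootDse.Properties["rootDomainNamingContext"].Value;
            using (DirectoryEntry gc = new DirectoryEntry("GC://" + root))
            using (DirectorySearcher searcher = new DirectorySearcher(gc))
            {
                // Same filter and attribute as the script above.
                searcher.Filter = "(objectCategory=domainDNS)";
                searcher.PropertiesToLoad.Add("name");
                foreach (SearchResult result in searcher.FindAll())
                {
                    Console.WriteLine(result.Properties["name"][0]);
                }
            }
        }
    }
}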
Q: Cannot add WebViewer of ActiveReports to an ASP.NET page I installed ActiveReports from their site. The version was labeled as .NET 2.0 build 5.2.1013.2 (for Visual Studio 2005 and 2008). I have an ASP.NET project in VS 2008 which has 2.0 as target framework. I added all the tools in the DataDynamics namespace to the toolbox, created a new project, added a new report. When I drag and drop the WebViewer control to a page in the design view, nothing happens. No markup is added, no report viewer is displayed on the page. Also, I noticed that there are no tags related to DataDynamics components in my web.config file. Am I missing some configuration? A: I think I found the reason. While trying to get this to work, I think I installed another version of the package that removed or deactivated my current version. The control I was dropping on the form belonged to the older version that had no assemblies referenced. I removed all installations of ActiveReports, installed the latest version and cleaned up the toolbox. I added the latest version of the WebViewer to the toolbox and dropped it on the form. It worked.
Cannot add WebViewer of ActiveReports to an ASP.NET page
I installed ActiveReports from their site. The version was labeled as .NET 2.0 build 5.2.1013.2 (for Visual Studio 2005 and 2008). I have an ASP.NET project in VS 2008 which has 2.0 as target framework. I added all the tools in the DataDynamics namespace to the toolbox, created a new project, added a new report. When I drag and drop the WebViewer control to a page in the design view, nothing happens. No markup is added, no report viewer is displayed on the page. Also, I noticed that there are no tags related to DataDynamics components in my web.config file. Am I missing some configuration?
[ "I think I found the reason. While trying to get this work, I think I installed another version of the package that removed or deactivated my current version. The control I was dropping on the form belonged to the older version that had no assemblies referenced. I removed all installations of ActiveReports, installed the last version and cleaned up the toolbox. I added the latest version of the WebViewer to toolbox and dropped it on the form. It worked.\n" ]
[ 2 ]
[]
[]
[ "activereports" ]
stackoverflow_0000008807_activereports.txt
Q: Why won't Entourage work with Exchange 2007? So this is IT more than programming but Google found nothing, and you guys are just the right kind of geniuses. My Exchange Server 2007 and Entourage clients don't play nice. Right now the big issue is that the Entourage client will not connect to Exchange 2007 (Entourage 2004 or 2008). The account settings are correct and use the proper format of https://exchange2007.mydomain.com/exchange/user@domain.com. The issue is with a dll called davex.dll: when it is where it belongs, the OWA application pool crashes and a whole bunch of nasty things happen. When it isn’t there, I can connect to everything fine - and the OWA app pool doesn’t crash - but Entourage never propagates the folders in the mailbox and doesn't send or receive. Any help or ideas would be appreciated: Microsoft support is silent on the issue, and Google doesn't turn up much. A: Try it without using the /exchange in the server properties field. Here's a link with relevant info. A: davex.dll is the legacy webdav component for Exchange server, which Entourage uses. Your first step should be investigating why the application pool crashes. My guess is that Entourage can't do anything when the dll isn't present because webdav is not responding to any requests.
Why won't Entourage work with Exchange 2007?
So this is IT more than programming but Google found nothing, and you guys are just the right kind of geniuses. My Exchange Server 2007 and Entourage clients don't play nice. Right now the big issue is that the Entourage client will not connect to Exchange 2007 (Entourage 2004 or 2008). The account settings are correct and use the proper format of https://exchange2007.mydomain.com/exchange/user@domain.com. The issue is with a dll called davex.dll: when it is where it belongs, the OWA application pool crashes and a whole bunch of nasty things happen. When it isn’t there, I can connect to everything fine - and the OWA app pool doesn’t crash - but Entourage never propagates the folders in the mailbox and doesn't send or receive. Any help or ideas would be appreciated: Microsoft support is silent on the issue, and Google doesn't turn up much.
[ "Try it without using the /exchange in the server properties field. Here's a link with relevant info.\n", "davex.dll is the legacy webdav component for Exchange server, which Entourage uses. Your first step should be investigating why the application pool crashes. My guess is that Entourage can't do anything when the dll isn't present because webdav is not responding to any requests.\n" ]
[ 2, 0 ]
[]
[]
[ "dll", "email", "entourage", "exchange_server" ]
stackoverflow_0000008228_dll_email_entourage_exchange_server.txt
Q: Looking for best practice for doing a "Net Use" in C# I'd rather not have to resort to calling the command line. I'm looking for code that can map/disconnect a drive, while also having exception handling. Any ideas? A: Use P/Invoke and WNetAddConnection2 There should also be some wrappers out there to do some of the grunt work for you. Google is your friend, as always.
Looking for best practice for doing a "Net Use" in C#
I'd rather not have to resort to calling the command line. I'm looking for code that can map/disconnect a drive, while also having exception handling. Any ideas?
[ "Use P/Invoke and WNetAddConnection2\nThere should also be some wrappers out there to do some of the grunt work for you.\nGoogle is your friend, as always.\n" ]
[ 9 ]
[]
[]
[ ".net_1.1", "c#" ]
stackoverflow_0000008919_.net_1.1_c#.txt
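A sketch of the P/Invoke wrapper the answer suggests, with error codes surfaced as exceptions for the requested exception handling; the class shape and names are illustrative, not a canonical implementation:

using System;
using System.ComponentModel;
using System.Runtime.InteropServices;

public static class NetworkDrive
{
    [StructLayout(LayoutKind.Sequential)]
    private class NETRESOURCE
    {
        public int dwScope;
        public int dwType;
        public int dwDisplayType;
        public int dwUsage;
        public string lpLocalName;
        public string lpRemoteName;
        public string lpComment;
        public string lpProvider;
    }

    private const int RESOURCETYPE_DISK = 1;

    [DllImport("mpr.dll", CharSet = CharSet.Auto)]
    private static extern int WNetAddConnection2(NETRESOURCE lpNetResource,
        string lpPassword, string lpUserName, int dwFlags);

    [DllImport("mpr.dll", CharSet = CharSet.Auto)]
    private static extern int WNetCancelConnection2(string lpName,
        int dwFlags, bool fForce);

    // e.g. NetworkDrive.Map("Z:", @"\\server\share")
    public static void Map(string driveLetter, string uncPath)
    {
        NETRESOURCE res = new NETRESOURCE();
        res.dwType = RESOURCETYPE_DISK;
        res.lpLocalName = driveLetter;
        res.lpRemoteName = uncPath;
        int err = WNetAddConnection2(res, null, null, 0);
        if (err != 0)
            throw new Win32Exception(err); // translates the Win32 error code
    }

    public static void Disconnect(string driveLetter)
    {
        int err = WNetCancelConnection2(driveLetter, 0, true);
        if (err != 0)
            throw new Win32Exception(err);
    }
}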
Q: VS 2008 - Objects disappearing? I've only been using VS 2008 Team Foundation for a few weeks. Over the last few days, I've noticed that sometimes one of my objects/controls on my page just disappears from IntelliSense. The project builds perfectly and the objects are still in the HTML, but I still can't find the object. Anyone else notice this? Edit: For what it's worth, I know if I close VS and then open it up again, it comes back.
VS 2008 - Objects disappearing?
I've only been using VS 2008 Team Foundation for a few weeks. Over the last few days, I've noticed that sometimes one of my objects/controls on my page just disappears from IntelliSense. The project builds perfectly and the objects are still in the HTML, but I still can't find the object. Anyone else notice this? Edit: For what it's worth, I know if I close VS and then open it up again, it comes back.
[ "The Visual Studio 2008 and .NET 3.5 Framework Service Pack 1 has gone out of beta, maybe you can see if this bug still occurs?\n", "I am also having a number of problems with VS 2008. Who would guess that I don't ever need to select multiple controls on a web form...\nAnyway, a lot has been fixed in Service Pack 1, which is in Beta currently. Might be worth installing that. It has gone a little way to fixing absolute positioning. This isn't your problem, of course, but your fix might be in there as well.\n", "I occasionally get this in Visual Studio 2005.\nA method I use to get the controls back, is to switch the web page between code view and design view. I know it's not a fix but it's a little quicker than restarting Visual Studio.\n" ]
[ 3, 2, 0 ]
[]
[]
[ ".net", "tfs", "visual_studio" ]
stackoverflow_0000006284_.net_tfs_visual_studio.txt
Q: Calling Table-Valued SQL Functions From .NET Scalar-valued functions can be called from .NET as follows: SqlCommand cmd = new SqlCommand("testFunction", sqlConn); //testFunction is scalar cmd.CommandType = CommandType.StoredProcedure; cmd.Parameters.Add("retVal", SqlDbType.Int); cmd.Parameters["retVal"].Direction = ParameterDirection.ReturnValue; cmd.ExecuteScalar(); int aFunctionResult = (int)cmd.Parameters["retVal"].Value; I also know that table-valued functions can be called in a similar fashion, for example: String query = "select * from testFunction(param1,...)"; //testFunction is table-valued SqlCommand cmd = new SqlCommand(query, sqlConn); SqlDataAdapter adapter = new SqlDataAdapter(cmd); adapter.Fill(tbl); My question is, can table-valued functions be called as stored procedures, like scalar-valued functions can? (e.g., replicate my first code snippet with a table-valued function being called and getting the returned table through a ReturnValue parameter). A: No because you need to select them. However you can create a stored proc wrapper, which may defeat the point of having a table function.
Calling Table-Valued SQL Functions From .NET
Scalar-valued functions can be called from .NET as follows: SqlCommand cmd = new SqlCommand("testFunction", sqlConn); //testFunction is scalar cmd.CommandType = CommandType.StoredProcedure; cmd.Parameters.Add("retVal", SqlDbType.Int); cmd.Parameters["retVal"].Direction = ParameterDirection.ReturnValue; cmd.ExecuteScalar(); int aFunctionResult = (int)cmd.Parameters["retVal"].Value; I also know that table-valued functions can be called in a similar fashion, for example: String query = "select * from testFunction(param1,...)"; //testFunction is table-valued SqlCommand cmd = new SqlCommand(query, sqlConn); SqlDataAdapter adapter = new SqlDataAdapter(cmd); adapter.Fill(tbl); My question is, can table-valued functions be called as stored procedures, like scalar-valued functions can? (e.g., replicate my first code snippet with a table-valued function being called and getting the returned table through a ReturnValue parameter).
[ "No because you need to select them. However you can create a stored proc wrapper, which may defeat the point of having a table function.\n" ]
[ 19 ]
[]
[]
[ ".net", "c#", "sql" ]
stackoverflow_0000008987_.net_c#_sql.txt
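A sketch of the wrapper idea from the answer, reusing the names from the question (testFunction, param1); whether the wrapper defeats the purpose of having a table-valued function is a design call:

-- Wrap the table-valued function so it can be called as a stored procedure:
CREATE PROCEDURE dbo.testFunctionWrapper
    @param1 int
AS
BEGIN
    SELECT * FROM dbo.testFunction(@param1);
END

-- From .NET, call "testFunctionWrapper" with CommandType.StoredProcedure and
-- read the rows with a SqlDataReader or SqlDataAdapter, as in the second
-- snippet of the question.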
Q: VMWare Server Under Linux Secondary NIC connection With VMWare Server running under Linux (Debian), I would like to have the following setup: 1st: NIC being used by many of the images running under VMWare, as well as being used by the Linux OS 2nd: NIC being used by only 1 image and to be unused by the Linux OS (as it's part of a DMZ) Although the second NIC won't be used by Linux, it is certainly recognised as a NIC (e.g. eth1). Is this possible under VMWare Server, and if so, is it as simple as not binding eth1 under Linux and then bridging it to the image under VMWare Server? A: I believe you can set the desired solution up by rerunning the VMware configuration script and doing a custom network setup, so that both NICs are mapped to your VMware instance. I would recommend making eth0 the 2nd NIC since it will be easier for Linux to use by default. Then make eth1 the 1st NIC.
VMWare Server Under Linux Secondary NIC connection
With VMWare Server running under Linux (Debian), I would like to have the following setup: 1st: NIC being used by many of the images running under VMWare, as well as being used by the Linux OS 2nd: NIC being used by only 1 image and to be unused by the Linux OS (as it's part of a DMZ) Although the second NIC won't be used by Linux, it is certainly recognised as a NIC (e.g. eth1). Is this possible under VMWare Server, and if so, is it as simple as not binding eth1 under Linux and then bridging it to the image under VMWare Server?
[ "I believe you can set the desired solution up by rerunning the vmware configuration script. And doing a custom network setup, so that both NIC's are mapped to your vmware instance. I would recommend making eth0 the 2nd NIC since it will be easier for Linux to use by default. Then make eth1 the 1st NIC.\n" ]
[ 2 ]
[]
[]
[ "linux", "nic", "sysadmin", "vmware" ]
stackoverflow_0000008940_linux_nic_sysadmin_vmware.txt
Q: HTTPS in IIS 5.1 I'm using IIS 5.1 in Windows XP on my development computer. I'm going to set up HTTPS on my company's web server, but I want to try doing it locally before doing it on a production system. But when I go into the Directory Security tab of my web site's configuration section, the "Secure communication" groupbox is disabled. Is there something I need to do to make this groupbox enabled? A: You may need to manually create a certificate first (on WinXP there does not seem to be a built-in mechanism, so you need to use OpenSSL). Check out these two links: Enabling SSL in IIS on Windows XP Professional Enabling SSL (HTTPS) for IIS in Windows XP A: That is because IIS 5.1 under the limited Windows XP version is limited to only HTTP. You need to have a full version of IIS 6.0 on Windows 2003 to do this. Luckily you can download a VHD image of Windows 2003 from Microsoft and run it under a Virtual PC instance. Plus I would recommend this since you are trying to be careful and use a machine close to your production environment. IIS 5.1 version is never deployed as a production machine so you cannot guarantee anything and the differences between IIS 5.1 and IIS 6.0 are significant enough where the VM is worth your while.
HTTPS in IIS 5.1
I'm using IIS 5.1 in Windows XP on my development computer. I'm going to set up HTTPS on my company's web server, but I want to try doing it locally before doing it on a production system. But when I go into the Directory Security tab of my web site's configuration section, the "Secure communication" groupbox is disabled. Is there something I need to do to make this groupbox enabled?
[ "You may need to manually create a certificate first (on WinXP there does not seem to be a built-in mechanism, so you need to use OpenSSL). Check out these two links:\nEnabling SSL in IIS on Windows XP Professional\nEnabling SSL (HTTPS) for IIS in Windows XP\n", "That is because IIS 5.1 under the limited Windows XP version is limited to only HTTP. You need to have a full version of IIS 6.0 on Windows 2003 to do this. Luckily you can download a VHD image of Windows 2003 from Microsoft and run it under a Virtual PC instance. Plus I would recommend this since you are trying to be careful and use a machine close to your production environment. IIS 5.1 version is never deployed as a production machine so you cannot guarantee anything and the differences between IIS 5.1 and IIS 6.0 are significant enough where the VM is worth your while.\n" ]
[ 3, 3 ]
[]
[]
[ "iis", "ssl" ]
stackoverflow_0000009024_iis_ssl.txt
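If you go the OpenSSL route from the first answer, generating a self-signed certificate for local testing might look like this (a sketch; file names are placeholders, and IIS may additionally need the key and certificate bundled for import):

# Create a self-signed certificate and private key, valid for one year:
openssl req -x509 -newkey rsa:2048 -keyout server.key -out server.crt -days 365 -nodes

# Optionally bundle them as a PFX for import on Windows:
openssl pkcs12 -export -out server.pfx -inkey server.key -in server.crt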
Q: Checking FTP status codes with a PHP script I have a script that checks responses from HTTP servers using the PEAR HTTP classes. However, I've recently found that the script fails on FTP servers (and probably anything that's not HTTP or HTTPS). I tried Google, but didn't see any scripts or code that returned the server status code from servers other than HTTP servers. How can I find out the status of a newsgroup or FTP server using PHP? EDIT: I should clarify that I am interested only in the ability to read from an FTP server and the directory that I specify. I need to know if the server is dead/gone, I'm not authorized to read, etc. Please note that, although most of the time I'm language agnostic, the entire website is PHP-driven, so a PHP solution would be the best for ease of maintainability and extensibility in the future. A: HTTP works slightly differently than FTP though, unfortunately. Although both may look the same in your browser, HTTP works off the basis of URI (i.e. to access resource A, you have an identifier which tells you how to access that). FTP is very old-school and server-driven. Even anonymous FTP is a bit of a hack, since you still supply a username and password; it's just defined as "anonymous" and your email address. Checking if an FTP server is up means checking That you can connect to the FTP server if (!($ftpfd = ftp_connect($hostname))) { ... } That you can log in to the server: if (!ftp_login($ftpfd, $username, $password)) { ... } Then, if there are further underlying resources that you need to access to test whether a particular site is up, then use an appropriate operation on them. e.g. on a file, maybe use ftp_mdtm() to get the last modified time or on a directory, see if ftp_nlist() works. A: Wouldn't it be simpler to use the built-in PHP FTP* functionality than trying to roll your own? If the URI is coming from a source outside your control, you would need to check the protocol definition (http:// or ftp://, etc) in order to determine which functionality to use, but that is fairly trivial. If there is no protocol specified (there really should be!) then you could try to default to http. see here A: If you want to read specific responses you will have to open a socket and read/write data manually. <?php $sock = fsockopen($hostname, $port); ?> Then you'll have to fput/fread data back and forth. This will require you to read up on the FTP protocol.
Checking FTP status codes with a PHP script
I have a script that checks responses from HTTP servers using the PEAR HTTP classes. However, I've recently found that the script fails on FTP servers (and probably anything that's not HTTP or HTTPS). I tried Google, but didn't see any scripts or code that returned the server status code from servers other than HTTP servers. How can I find out the status of a newsgroup or FTP server using PHP? EDIT: I should clarify that I am interested only in the ability to read from an FTP server and the directory that I specify. I need to know if the server is dead/gone, I'm not authorized to read, etc. Please note that, although most of the time I'm language agnostic, the entire website is PHP-driven, so a PHP solution would be the best for ease of maintainability and extensibility in the future.
[ "HTTP works slightly differently than FTP though unfortunately. Although both may look the same in your browser, HTTP works off the basis of URI (i.e. to access resource A, you have an identifier which tells you how to access that).\nFTP is very old school server driven. Even anonymous FTP is a bit of a hack, since you still supply a username and password, it's just defined as \"anonymous\" and your email address.\nChecking if an FTP server is up means checking\n\nThat you can connect to the FTP server\nif (!($ftpfd = ftp_connect($hostname))) { ... }\nThat you can login to the server:\nif (!ftp_login($ftpfd, $username, $password)) { ... }\nThen, if there are further underlying resources that you need to access to test whether a particular site is up, then use an appropiate operation on them. e.g. on a file, maybe use ftp_mdtm() to get the last modified time or on a directory, see if ftp_nlist() works.\n\n", "Wouldn't it be simpler to use the built-in PHP FTP* functionality than trying to roll your own? If the URI is coming from a source outside your control, you would need to check the protocal definition (http:// or ftp://, etc) in order to determine which functionality to use, but that is fairly trivial. If there is now protocal specified (there really should be!) then you could try to default to http.\n\nsee here\n\n", "If you want to read specific responses you will have to open a socket and read/write data manually.\n<?php\n$sock = fsockopen($hostname, $port);\n?>\n\nThen you'll have to fput/fread data back and forth.\nThis will require you to read up on the FTP protocol.\n" ]
[ 4, 0, 0 ]
[]
[]
[ "ftp", "http", "pear", "php", "server_response" ]
stackoverflow_0000008726_ftp_http_pear_php_server_response.txt
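A minimal sketch combining the checks from the first answer into one status function; the states returned are the ones the question asks about, and the parameter names are placeholders:

<?php
function ftp_server_status($hostname, $username, $password, $directory)
{
    $ftpfd = @ftp_connect($hostname, 21, 10); // 10-second timeout
    if (!$ftpfd) {
        return 'dead'; // server unreachable or gone
    }
    if (!@ftp_login($ftpfd, $username, $password)) {
        ftp_close($ftpfd);
        return 'unauthorized'; // not allowed to read
    }
    $listing = @ftp_nlist($ftpfd, $directory);
    ftp_close($ftpfd);
    return ($listing === false) ? 'unreadable' : 'ok';
}
?>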
Q: How do I set up Public-Key Authentication? How do I set up Public-Key Authentication for SSH? A: If you have SSH installed, you should be able to run: ssh-keygen Then go through the steps; you'll have two files, id_rsa and id_rsa.pub (the first is your private key, the second is your public key - the one you copy to remote machines) Then, connect to the remote machine you want to log in to, and add the contents of that id_rsa.pub file to the file ~/.ssh/authorized_keys. Oh, and chmod 600 all the id_rsa* files (both locally and remote), so no other users can read them: chmod 600 ~/.ssh/id_rsa* Similarly, ensure the remote ~/.ssh/authorized_keys file is chmod 600 also: chmod 600 ~/.ssh/authorized_keys Then, when you do ssh remote.machine, it should ask you for the key's password, not the remote machine. To make it nicer to use, you can use ssh-agent to hold the decrypted keys in memory - this means you don't have to type your keypair's password every single time. To launch the agent, you run (including the back-tick quotes, which eval the output of the ssh-agent command) `ssh-agent` On some distros, ssh-agent is started automatically. If you run echo $SSH_AUTH_SOCK and it shows a path (probably in /tmp/) it's already set up, so you can skip the previous command. Then to add your key, you do ssh-add ~/.ssh/id_rsa and enter your passphrase. It's stored until you remove it (using the ssh-add -D command, which removes all keys from the agent) A: For Windows, this is a good introduction and guide Here are some good ssh-agents for systems other than Linux. Windows - pageant OS X - SSHKeychain
How do I set up Public-Key Authentication?
How do I set up Public-Key Authentication for SSH?
[ "If you have SSH installed, you should be able to run..\nssh-keygen\n\nThen go through the steps, you'll have two files, id_rsa and id_rsa.pub (the first is your private key, the second is your public key - the one you copy to remote machines)\nThen, connect to the remote machine you want to login to, to the file ~/.ssh/authorized_keys add the contents of your that id_rsa.pub file.\nOh, and chmod 600 all the id_rsa* files (both locally and remote), so no other users can read them:\nchmod 600 ~/.ssh/id_rsa*\n\nSimilarly, ensure the remote ~/.ssh/authorized_keys file is chmod 600 also:\nchmod 600 ~/.ssh/authorized_keys\n\nThen, when you do ssh remote.machine, it should ask you for the key's password, not the remote machine.\n\nTo make it nicer to use, you can use ssh-agent to hold the decrypted keys in memory - this means you don't have to type your keypair's password every single time. To launch the agent, you run (including the back-tick quotes, which eval the output of the ssh-agent command)\n`ssh-agent`\n\nOn some distros, ssh-agent is started automatically. If you run echo $SSH_AUTH_SOCK and it shows a path (probably in /tmp/) it's already setup, so you can skip the previous command.\nThen to add your key, you do\nssh-add ~/.ssh/id_rsa\n\nand enter your passphrase. It's stored until you remove it (using the ssh-add -D command, which removes all keys from the agent)\n", "For windows this is a good introduction and guide\nHere are some good ssh-agents for systems other than linux.\n\nWindows - pageant\nOS X - SSHKeychain\n\n" ]
[ 105, 5 ]
[]
[]
[ "linux", "private_key", "public_key", "ssh" ]
stackoverflow_0000007260_linux_private_key_public_key_ssh.txt
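On systems that ship the ssh-copy-id helper, the copy-to-authorized_keys step described above can usually be done in one command (assuming the default key path):

ssh-copy-id -i ~/.ssh/id_rsa.pub user@remote.machine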
Q: How can I turn a string of HTML into a DOM object in a Firefox extension? I'm downloading a web page (tag soup HTML) with XMLHttpRequest and I want to take the output and turn it into a DOM object that I can then run XPATH queries on. How do I convert from a string into a DOM object? It appears that the general solution is to create a hidden iframe and throw the contents of the string into that. There has been talk of updating DOMParser to support text/html but as of Firefox 3.0.1 you still get an NS_ERROR_NOT_IMPLEMENTED if you try. Is there any option besides using the hidden iframe trick? And if not, what is the best way to do the iframe trick so that your code works outside the context of any currently open tabs (so that closing tabs won't screw up the code, etc)? This is an example of why I'm looking for a solution other than the iframe hack: if I have to write all that code to have a robust solution, then I'd rather keep looking for something else. A: Ajaxian actually had a post on inserting / retrieving html from an iframe today. You can probably use the js snippet they have posted there. As for handling closing of a browser / tab, you can attach to the onbeforeunload (http://msdn.microsoft.com/en-us/library/ms536907(VS.85).aspx) event and do whatever you need to do. A: Try this: var request = new XMLHttpRequest(); request.overrideMimeType( 'text/xml' ); request.onreadystatechange = process; request.open ( 'GET', url ); request.send( null ); function process() { if ( request.readyState == 4 && request.status == 200 ) { var xml = request.responseXML; } } Notice the overrideMimeType and responseXML. The readyState == 4 is 'completed'. A: Try creating a div document.createElement( 'div' ); And then set the tag soup HTML to the innerHTML of the div. The browser should process that into XML, which then you can parse. The innerHTML property takes a string that specifies a valid combination of text and elements. When the innerHTML property is set, the given string completely replaces the existing content of the object. If the string contains HTML tags, the string is parsed and formatted as it is placed into the document. A: So you want to download a webpage as an XML object using JavaScript, but you don't want to use a webpage? Since you have no control over what the user will do (closing tabs or windows or whatnot) you would need to do this in something like an OS X Dashboard widget or some separate application. A Firefox extension would also work, unless you have to worry about the user closing the browser. A: Is there any option besides using the hidden iframe trick? Unfortunately, no, not now. Otherwise the microsummary code you point to would use it instead. And if not, what is the best way to do the iframe trick so that your code works outside the context of any currently open tabs (so that closing tabs won't screw up the code, etc)? The code you quoted uses the recent browser window, so closing tabs won't affect parsing. Closing that browser window will abort your load, but you can deal with it (detect that the load is aborted and restart it in another window for example) and it doesn't happen very often. You need a DOM window for the iframe to work properly, so there's no clean solution at the moment (if you're keen on using the Mozilla parser).
How can I turn a string of HTML into a DOM object in a Firefox extension?
I'm downloading a web page (tag soup HTML) with XMLHttpRequest and I want to take the output and turn it into a DOM object that I can then run XPATH queries on. How do I convert from a string into a DOM object? It appears that the general solution is to create a hidden iframe and throw the contents of the string into that. There has been talk of updating DOMParser to support text/html but as of Firefox 3.0.1 you still get an NS_ERROR_NOT_IMPLEMENTED if you try. Is there any option besides using the hidden iframe trick? And if not, what is the best way to do the iframe trick so that your code works outside the context of any currently open tabs (so that closing tabs won't screw up the code, etc)? This is an example of why I'm looking for a solution other than the iframe hack: if I have to write all that code to have a robust solution, then I'd rather keep looking for something else.
[ "Ajaxian actually had a post on inserting / retrieving html from an iframe today. You can probably use the js snippet they have posted there.\nAs for handling closing of a browser / tab, you can attach to the onbeforeunload (http://msdn.microsoft.com/en-us/library/ms536907(VS.85).aspx) event and do whatever you need to do.\n", "Try this:\nvar request = new XMLHttpRequest();\n\nrequest.overrideMimeType( 'text/xml' );\nrequest.onreadystatechange = process;\nrequest.open ( 'GET', url );\nrequest.send( null );\n\nfunction process() { \n if ( request.readyState == 4 && request.status == 200 ) {\n var xml = request.responseXML;\n }\n}\n\nNotice the overrideMimeType and responseXML. The readyState == 4 is 'completed'.\n", "Try creating a div\ndocument.createElement( 'div' );\n\nAnd then set the tag soup HTML to the innerHTML of the div. The browser should process that into XML, which then you can parse.\n\nThe innerHTML property takes a string\n that specifies a valid combination of\n text and elements. When the innerHTML\n property is set, the given string\n completely replaces the existing\n content of the object. If the string\n contains HTML tags, the string is\n parsed and formatted as it is placed\n into the document.\n\n", "So you want to download a webpage as an XML object using javascript, but you don't want to use a webpage? Since you have no control over what the user will do (closing tabs or windows or whatnot) you would need to do this in like a OSX Dashboard widget or some separate application. A Firefox extension would also work, unless you have to worry about the user closing the browser.\n", "\nIs there any option besides using the hidden iframe trick?\n\nUnfortunately, no, not now. Otherwise the microsummary code you point to would use it instead.\n\nAnd if not, what is the best way to do the iframe trick so that your code works outside the context of any currently open tabs (so that closing tabs won't screw up code, etc)?\n\nThe code you quoted uses the recent browser window, so closing tabs won't affect parsing. Closing that browser window will abort your load, but you can deal with it (detect that the load is aborted and restart it in another window for example) and it doesn't happen very often.\nYou need a DOM window for the iframe to work properly, so there's no clean solution at the moment (if you're keen on using the mozilla parser).\n" ]
[ 10, 6, 2, 1, 1 ]
[]
[]
[ "dom", "firefox", "javascript" ]
stackoverflow_0000003868_dom_firefox_javascript.txt
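A rough sketch of the div/innerHTML approach from the answers, combined with XPath via document.evaluate. It assumes you already have a reference to an HTML document (htmlDoc) and the downloaded markup (htmlString); for full tag-soup pages the hidden-iframe trick may still be needed:

// Parse a fragment into DOM nodes, then run an XPath query over it.
var container = htmlDoc.createElement('div');
container.innerHTML = htmlString; // the browser parses the tag soup

var links = htmlDoc.evaluate('.//a/@href', container, null,
                             XPathResult.ORDERED_NODE_SNAPSHOT_TYPE, null);
for (var i = 0; i < links.snapshotLength; i++) {
  dump(links.snapshotItem(i).value + '\n'); // dump() logs from extension code
}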
Q: Opening a file in my application from File Explorer I've created my own application in VB.NET that saves its documents into a file with its own custom extension (.eds). Assuming that I've properly associated the file extension with my application, how do I actually handle the processing of the selected file within my application when I double-click on the file in File Explorer? Do I grab an argc/argv variable in my Application.Load() method or is it something else? A: Try this article, but the short answer is My.Application.CommandLineArgs
Opening a file in my application from File Explorer
I've created my own application in VB.NET that saves its documents into a file with its own custom extension (.eds). Assuming that I've properly associated the file extension with my application, how do I actually handle the processing of the selected file within my application when I double-click on the file in File Explorer? Do I grab an argc/argv variable in my Application.Load() method or is it something else?
[ "Try this article but short answer is My.Application.CommandLineArgs\n" ]
[ 2 ]
[]
[]
[ "file_extension", "vb.net" ]
stackoverflow_0000009161_file_extension_vb.net.txt
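A minimal sketch of reading the double-clicked file's path at startup, assuming a Windows Forms project with the application framework enabled (the handler lives in ApplicationEvents.vb); the variable names are illustrative:

' The first command-line argument is the path of the file that was opened.
Private Sub MyApplication_Startup(ByVal sender As Object, _
        ByVal e As Microsoft.VisualBasic.ApplicationServices.StartupEventArgs) _
        Handles Me.Startup
    If My.Application.CommandLineArgs.Count > 0 Then
        Dim path As String = My.Application.CommandLineArgs(0)
        ' Stash the path somewhere the main form can read, or open it here.
    End If
End Sub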
Q: ADO.NET Connection Pooling & SQLServer What is it? How do I implement connection pooling with MS SQL? What are the performance ramifications when Executing many queries one after the other (i.e. using a loop with 30K+ iterations calling a stored procedure)? Executing a few queries that take a long time (10+ min)? Are there any best practices? A: Connection pooling is a mechanism to re-use connections, as establishing a new connection is slow. If you use an MSSQL connection string and System.Data.SqlClient then you're already using it - in .Net this stuff is under the hood most of the time. A loop of 30k iterations might be better as a server-side cursor (look up T-SQL cursor statements), depending on what you're doing with each step outside of the sproc. Long queries are fine - but be careful calling them from web pages as Asp.Net isn't really optimised for long waits and some connections will cut out. A: A little more info on the connection pooling thing... you're using it already with SqlClient, but only if your connection string is identical for each new connection you open. My understanding is that the framework will pool connections automatically when it can, but if the connection string varies even slightly from one connection to the next, then the new connection won't come from the pool - it gets created anew (so it's more expensive). You can use the Performance Monitor app with XP/Vista to watch SQL connections and you'll see pretty quickly whether or not pooling is being used. Look under the ".NET CLR Data" category in Performance Monitor. A: I second Keith; if you're calling a stored procedure 30,000 times, you have far bigger problems than connection pooling.
ADO.NET Connection Pooling & SQLServer
What is it? How do I implement connection pooling with MS SQL? What are the performance ramifications when Executing many queries one after the other (i.e. using a loop with 30K+ iterations calling a stored procedure)? Executing a few queries that take a long time (10+ min)? Are there any best practices?
[ "Connection pooling is a mechanism to re-use connections, as establishing a new connection is slow.\nIf you use an MSSQL connection string and System.Data.SqlClient then you're already using it - in .Net this stuff is under the hood most of the time.\nA loop of 30k iterations might be better as a server side cursor (look up T-SQL cursor statements), depending on what you're doing with each step outside of the sproc.\nLong queries are fine - but be careful calling them from web pages as Asp.Net isn't really optimised for long waits and some connections will cut out. \n", "A little more info on the connection pooling thing... you're using it already with SqlClient, but only if your connection string is identical for each new connection you open. My understanding is that the framework will pool connections automatically when it can, but if the connection string varies even slightly from one connection to the next, then the new connection won't come from the pool - it gets created anew (so it's more expensive).\nYou can use the Performance Monitor app with XP/Vista to watch SQL connections and you'll see pretty quickly whether or not pooling is being used. Look under the \".NET CLR Data\" category\" in Performance Monitor.\n", "I second Keith; if you're calling a stored procedure 30,000 times, you have far bigger problems than connection pooling.\n" ]
[ 3, 2, 0 ]
[ "Your question was also partially answered by this thread. A search would have revealed this.. The definition of Connection Pooling, of which a Google would have answered with the first hit being this..\nWhich would leave just the best practices, which I think would have been a good question :)\n+1 to Keith's Answer. He has hit the nail right on the head.\nJust a polite reminder from the FAQ:\n\nYou've searched the internet before\n asking your question, and you come to\n us armed with research and information\n about your question ... right?\n\n" ]
[ -3 ]
[ "ado.net", "performance", "sql_server" ]
stackoverflow_0000009228_ado.net_performance_sql_server.txt
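A sketch of the pattern the answers describe: one identical connection string, plus prompt disposal so each iteration hands its connection back to the pool. Server, database and procedure names are placeholders:

using System.Data;
using System.Data.SqlClient;

// Pooling is on by default; reuse depends on the string being identical.
string connStr = "Data Source=myServer;Initial Catalog=myDb;Integrated Security=SSPI";

for (int i = 0; i < 30000; i++)
{
    using (SqlConnection conn = new SqlConnection(connStr))
    using (SqlCommand cmd = new SqlCommand("myProc", conn))
    {
        cmd.CommandType = CommandType.StoredProcedure;
        cmd.Parameters.AddWithValue("@id", i);
        conn.Open();           // cheap after the first call: pulled from the pool
        cmd.ExecuteNonQuery();
    }                          // Dispose returns the connection to the pool
}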
Q: JQuery.Validate failure in Opera If you're using Opera 9.5x you may notice that our client-side JQuery.Validate code is disabled here at Stack Overflow. function initValidation() { if (navigator.userAgent.indexOf("Opera") != -1) return; $("#post-text").rules("add", { required: true, minlength: 5 }); } That's because it generates an exception in Opera! Of course it works in every other browser we've tried. I'm starting to seriously, seriously hate Opera. This is kind of a bummer because without proper client-side validation some of our requests will fail. We haven't had time to put in complete server-side messaging when data is incomplete, so you may see the YSOD on Opera much more than other browsers, if you forget to fill out all the fields on the form. Any Opera-ites want to uncomment those lines (they're on core Ask & Answer pages like this one -- just View Source and search for "Opera") and give it a go? A: turns out the problem was in the { debug : true } option for the JQuery.Validate initializer. With this removed, things work fine in Opera. Thanks to Jörn Zaefferer for helping us figure this out! Oh, and the $50 will be donated to the JQuery project. :) A: I can't seem to reproduce this bug. Can you give more details? I have my copy of Opera masquerading as Firefox so the validation should be executing: >>> $.browser.opera false When I go to the edit profile page and enter a malformed date, the red text comes up and says "Please enter a valid date". That's jQuery.Validate working, right? Does it only fail on certain forms/fields? This is Opera 9.51 on WinXP. Edit: testing editing on Opera. Edit: It also works when I comment out the "if ($.browser.opera) return;" line on a copy of the edit profile page I saved locally. I really can't reproduce this bug. What is your environment like? (Vista? Opera plugins?) A: I'm not up on .NET but I'm guessing YSOD implies uncaught errors, if that's the case then isn't relying on client-side validation alone a little risky? If not then errors that are caught can be converted to something useful for the Opera crowd - even if it is just a Screen Of Death painted white with validation grumbles ...
JQuery.Validate failure in Opera
If you're using Opera 9.5x you may notice that our client-side JQuery.Validate code is disabled here at Stack Overflow. function initValidation() { if (navigator.userAgent.indexOf("Opera") != -1) return; $("#post-text").rules("add", { required: true, minlength: 5 }); } That's because it generates an exception in Opera! Of course it works in every other browser we've tried. I'm starting to seriously, seriously hate Opera. This is kind of a bummer because without proper client-side validation some of our requests will fail. We haven't had time to put in complete server-side messaging when data is incomplete, so you may see the YSOD on Opera much more than other browsers, if you forget to fill out all the fields on the form. Any Opera-ites want to uncomment those lines (they're on core Ask & Answer pages like this one -- just View Source and search for "Opera") and give it a go?
[ "turns out the problem was in the\n{ debug : true }\n\noption for the JQuery.Validate initializer. With this removed, things work fine in Opera. Thanks to Jörn Zaefferer for helping us figure this out!\nOh, and the $50 will be donated to the JQuery project. :)\n", "I can't seem to reproduce this bug. Can you give more details?\nI have my copy of Opera masquerading as Firefox so the validation should be executing:\n>>> $.browser.opera \nfalse\n\nWhen I go to the edit profile page and enter a malformed date, the red text comes up and says \"Please enter a valid date\". That's jQuery.Validate working, right? Does it only fail on certain forms/fields?\nThis is Opera 9.51 on WinXP.\nEdit: testing editing on Opera.\nEdit: It also works when I comment out the \"if ($.browser.opera) return;\" line on a copy of the edit profile page I saved locally. I really can't reproduce this bug. What is your environment like? (Vista? Opera plugins?)\n", "I'm not up on .NET but I'm guessing YSOD implies uncaught errors, if that's the case then isn't relying on client-side validation alone a little risky? If not then errors that are caught can be converted to something useful for the Opera crowd - even if it is just a Screen Of Death painted white with validation grumbles ...\n" ]
[ 11, 0, 0 ]
[]
[]
[ "jquery", "opera", "validation" ]
stackoverflow_0000008681_jquery_opera_validation.txt
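For reference, a sketch of the working initialization once { debug : true } is removed; the form id here is hypothetical:

$(document).ready(function() {
  // No { debug: true } here - that option triggered the Opera exception.
  $("#post-form").validate();
  $("#post-text").rules("add", { required: true, minlength: 5 });
});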
Q: Creating Redundancy for a Subversion Repository? What is the best way to create redundant subversion repositories? I have a subversion repository (linked through apache2 and WebDAV) and would like to create a mirror repository on a different server in the event of outages, but I am not certain of the best way to proceed. I am thinking that post-commit scripts could be used to propagate changes, but I am not sure if this is the best way to go. Does anyone have any input? A: Sounds like what you are looking for is basically federated (synced) servers... I asked the same question recently... and while I didn't find the exact solution I was looking for, it came close. See here: A: Do you really need per-commit back-ups? There are almost certainly better ways of safe-guarding against failures than going down that route. For example, given that most failures are disk failures, a move to a RAID array and/or NAS/SAN storage will provide you with better general protection and, if configured correctly, better performance. At that point, off-site back-ups become a matter of using the tools available. See the Repository maintenance section of the svn manual for details. If you truly do need per-commit back-ups then, yeah, post-commit scripts are the way to go. A: If you only need read-only access to the mirrored repository, you can use svnsync which was added in SVN 1.4 for mirroring. We use a secondary repository on our build server to run CruiseControl.NET against, but the mirrored repository is read-only.
Creating Redundancy for a Subversion Repository?
What is the best way to create redundant subversion repositories? I have a subversion repository (linked through apache2 and WebDAV) and would like to create a mirror repository on a different server in the event of outages, but I am not certain of the best way to proceed. I am thinking that post-commit scripts could be used to propagate changes, but I am not sure if this is the best way to go. Does anyone have any input?
[ "Sounds like what you are looking for is basically federated (synced) servers...\nI asked the same question recently...and while I didn't find the exact solution I was looking for it came close.\nSee here:\n", "Do you really need per-commit back-ups? There are almost certainly better ways of safe-guarding against failures than going down that route. For example, given that most failures are disk failures, move to a RAID array and/or NAS/SAN storage will provide you with better general protection and, if configured correctly, better performance. At that point, off-site back-ups becomes a matter of using the tools available. See the Repository maintenance section of the svn manual for details.\nIf you truly do need per-commit back-ups then, yeah, post-commit scripts are the way to go.\n", "If you only need read-only access to the mirrored repository, you can use svnsync which was added in SVN 1.4 for mirroring.\nWe use a secondary repository on our build server to run CruiseControl.NET against, but the mirrored repository is read-only.\n" ]
[ 4, 2, 1 ]
[]
[]
[ "redundancy", "repository", "svn" ]
stackoverflow_0000008306_redundancy_repository_svn.txt
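A sketch of the svnsync setup from the last answer (SVN 1.4+); paths and URLs are placeholders:

# On the mirror server:
svnadmin create /var/svn/mirror
# svnsync records bookkeeping in revision properties, so the mirror
# must allow revprop changes:
echo '#!/bin/sh' > /var/svn/mirror/hooks/pre-revprop-change
chmod +x /var/svn/mirror/hooks/pre-revprop-change

svnsync init file:///var/svn/mirror http://primary.example.com/svn/repo
svnsync sync file:///var/svn/mirror

# Re-run "svnsync sync" from cron or the primary's post-commit hook
# to keep the mirror current.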
Q: Is there some way of recycling a Crystal Reports dataset? I'm trying to write a Crystal Report which has totals grouped in a different way to the main report. The only way I've been able to do this so far is to use a subreport for the totals, but it means having to hit the data source again to retrieve the same data, which seems like nonsense. Here's a simplified example: date name earnings source location ----------------------------------------------------------- 12-AUG-2008 Tom $50.00 washing cars uptown 12-AUG-2008 Dick $100.00 washing cars downtown { main report } 12-AUG-2008 Harry $75.00 mowing lawns around town total earnings for washing cars: $150.00 { subreport } total earnings for mowing lawns: $75.00 date name earnings source location ----------------------------------------------------------- 13-AUG-2008 John $95.00 dog walking downtown 13-AUG-2008 Jane $105.00 washing cars around town { main report } 13-AUG-2008 Dave $65.00 mowing lawns around town total earnings for dog walking: $95.00 total earnings for washing cars: $105.00 { subreport } total earnings for mowing lawns: $65.00 In this example, the main report is grouped by 'date', but the totals are grouped additionally by 'source'. I've looked up examples of using running totals, but they don't really do what I need. Isn't there some way of storing the result set and having both the main report and the subreport reference the same data? A: Hmm... as nice as it is to call the stored proc from the report and have it all contained in one location, we found (like you) that you eventually hit a point where you can't get Crystal to do what you want even though the data is right there. We ended up introducing a business layer which sits under the report, and rather than "pulling" data from the report we "push" the datasets to it and bind the data to the report. The advantage is that you can manipulate the data in code in datasets or objects before it reaches the report and then simply bind the data to the report. This article has a nice intro on how to set up pushing data to the reports. I understand that your time/business constraints may not allow you to do this, but if it's at all possible, I'd highly recommend it as it has meant we can move all "coding" out of our reports and into managed code, which is always a good thing. A: The only way I can think of doing this without a second run through the data would be by creating some formulas to do running totals per group. The problem I assume you are running into with the existing running totals is that they are intended to follow each of the groups that they are totaling. Since you seem to want the subtotals to follow after all of the 'raw' data this won't work. If you create your own formulas for each group that simply add on the total from those rows matching the group you should be able to place them at the end of the report. The downside to this approach is that the resulting subtotals will not be dynamic in relation to the groups. In other words if you had a new 'source' it would not show up in the subtotals until you added it or if you had no 'dog walking' data you would still have a subtotal for it.
Is there some way of recycling a Crystal Reports dataset?
I'm trying to write a Crystal Report which has totals grouped in a different way to the main report. The only way I've been able to do this so far is to use a subreport for the totals, but it means having to hit the data source again to retrieve the same data, which seems like nonsense. Here's a simplified example: date name earnings source location ----------------------------------------------------------- 12-AUG-2008 Tom $50.00 washing cars uptown 12-AUG-2008 Dick $100.00 washing cars downtown { main report } 12-AUG-2008 Harry $75.00 mowing lawns around town total earnings for washing cars: $150.00 { subreport } total earnings for mowing lawns: $75.00 date name earnings source location ----------------------------------------------------------- 13-AUG-2008 John $95.00 dog walking downtown 13-AUG-2008 Jane $105.00 washing cars around town { main report } 13-AUG-2008 Dave $65.00 mowing lawns around town total earnings for dog walking: $95.00 total earnings for washing cars: $105.00 { subreport } total earnings for mowing lawns: $65.00 In this example, the main report is grouped by 'date', but the totals are grouped additionally by 'source'. I've looked up examples of using running totals, but they don't really do what I need. Isn't there some way of storing the result set and having both the main report and the subreport reference the same data?
[ "Hmm... as nice as it is to call the stored proc from the report and have it all contained in one location, we found (like you) that you eventually hit a point where you can't get Crystal to do what you want even though the data is right there.\nWe ended up introducing a business layer which sits under the report and rather than \"pulling\" data from within the report we \"push\" the datasets to it and bind the data to the report. The advantage is that you can manipulate the data in code in datasets or objects before it reaches the report and then simply bind the data to the report.\nThis article has a nice intro on how to set up pushing data to the reports. I understand that your time/business constraints may not allow you to do this, but if it's at all possible, I'd highly recommend it, as it has meant we can move all \"coding\" out of our reports and into managed code, which is always a good thing.\n", "The only way I can think of doing this without a second run through the data would be by creating some formulas to do running totals per group. The problem I assume you are running into with the existing running totals is that they are intended to follow each of the groups that they are totaling. Since you seem to want the subtotals to follow after all of the 'raw' data, this won't work. \nIf you create your own formulas for each group that simply add up the total from those rows matching the group, you should be able to place them at the end of the report. The downside to this approach is that the resulting subtotals will not be dynamic in relation to the groups. In other words, if you had a new 'source' it would not show up in the subtotals until you added it, and if you had no 'dog walking' data you would still have a subtotal for it.\n" ]
[ 2, 1 ]
[]
[]
[ "asp.net", "crystal_reports", "dataset" ]
stackoverflow_0000008669_asp.net_crystal_reports_dataset.txt
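The "push" approach from the first answer can also solve the original subtotal problem directly: with the dataset already in memory, the per-source totals can be computed in code and handed to the report as a second table, so nothing hits the data source twice. Below is a minimal C# sketch of that idea; the Earnings/Source column names are assumptions taken from the example rows, and the SetDataSource call stands in for whatever binding the Crystal push model uses in your setup.

using System;
using System.Collections.Generic;
using System.Data;

// reportData is the DataSet already retrieved for the main report.
DataTable detail = reportData.Tables[0];
DataTable totals = new DataTable("SourceTotals");
totals.Columns.Add("Source", typeof(string));
totals.Columns.Add("TotalEarnings", typeof(decimal));

// Collect the distinct sources without another database round trip.
List<string> sources = new List<string>();
foreach (DataRow row in detail.Rows)
{
    string source = (string)row["Source"];
    if (!sources.Contains(source))
        sources.Add(source);
}

// One Compute() per source gives the subtotal the subreport was re-fetching.
foreach (string source in sources)
{
    object sum = detail.Compute("Sum(Earnings)",
        "Source = '" + source.Replace("'", "''") + "'");
    totals.Rows.Add(source, Convert.ToDecimal(sum));
}

reportData.Tables.Add(totals);
// report.SetDataSource(reportData); // bind both tables; the totals section reads SourceTotals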
Q: How to have HTML entities inside an ASP.NET page? Inside an asp.net page, should I use <html><title>My page's title from México</title></html> Or <html><title>My page&rsquo;s title from M&eacute;xico</title></html> Both examples have the same output. Since asp.net encodes all my pages to utf-8, there is no need to use html entities, is that right? A: The second example will ensure compatibility with ASCII standards of HTML transmission. So my vote is for the second example, so you don't have to ensure the HTML is output and encoded as UTF-8 all the way through all the proxy servers and any other kind of caching and translation that might occur. A: You're correct; as long as there's unicode at both ends of the pipe, it really doesn't matter. Personally, I would use the first simply because it's more readable. And, honestly, unicode has been widespread for some time. I personally believe that it's time to leave anyone who can't handle UTF-8 behind. A: The ASCII table is a set of characters, arguably the first standardized set of characters back in the days when you could only spare 1 byte per character. http://asciitable.com/ But I did some looking around at the extended character set of ASCII and it appears that the character you are referencing is an ASCII character. So there really isn't a problem whichever way you choose to display your title. My revised answer is: go for the less expensive one in terms of space (i.e. the first one)
How to have HTML entities inside an ASP.NET page?
Inside an asp.net page, should I use <html><title>My page's title from México</title></html> Or <html><title>My page&rsquo;s title from M&eacute;xico</title></html> Both examples have the same output. Since asp.net encodes all my pages to utf-8, there is no need to use html entities, is that right?
[ "The second example will ensure compatibility with ASCII standards of HTML transmission. So my vote is for the second example, so you don't have to ensure the HTML is output and encoded as UTF-8 all the way through all the proxy servers and any other kind of caching and translation that might occur.\n", "You're correct; as long as there's unicode at both ends of the pipe, it really doesn't matter. Personally, I would use the first simply because it's more readable.\nAnd, honestly, unicode has been widespread for some time. I personally believe that it's time to leave anyone who can't handle UTF-8 behind.\n", "The ASCII table is a set of characters, arguably the first standardized set of characters back in the days when you could only spare 1 byte per character. http://asciitable.com/ But I did some looking around at the extended character set of ASCII and it appears that the character you are referencing is an ASCII character. So there really isn't a problem whichever way you choose to display your title. \nMy revised answer is: go for the less expensive one in terms of space (i.e. the first one)\n" ]
[ 3, 3, 3 ]
[]
[]
[ "asp.net", "encoding", "html" ]
stackoverflow_0000009022_asp.net_encoding_html.txt
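If you do end up wanting entity-encoded output regardless of the response encoding, nothing in ASP.NET does that for you automatically; a small helper like the following is one way to produce numeric entities for any non-ASCII character. This is a hedged C# sketch, not a framework API, and the 127 cutoff is simply the ASCII boundary.

using System.Text;

static string ToNumericEntities(string input)
{
    StringBuilder sb = new StringBuilder(input.Length);
    foreach (char c in input)
    {
        if (c > 127)
            sb.AppendFormat("&#{0};", (int)c); // 'é' (U+00E9) becomes &#233;
        else
            sb.Append(c);
    }
    return sb.ToString();
}

// ToNumericEntities("My page's title from México")
//   -> "My page's title from M&#233;xico"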
Q: Running Javascript after control's selected value has been set Simple ASP.NET application. I have two drop-down controls. On the first-drop down I have a JavaScript onChange event. The JavaScript enables the second drop-down and removes a value from it (the value selected in the first drop-down). If they click the blank first value of the drop-down, then the second drop-down will be disabled (and the options reset). I also have code in the OnPreRender method that will enable or disable the second drop-down based on the value of the first drop-down. This is so that the value of the first drop-down can be selected in code (loading user settings). My problem is: The user selects something in the first drop-down. The second drop-down will become enabled through JavaScript. They then change a third drop-down that initiates a post back. After the post back the drop-downs are in the correct state (first value selected, second drop-down enabled). If they then click the back button, the second drop-down will no longer be enabled although it should be since there's something selected in the first drop-down. I've tried adding a startup script (that will set the correct state of the second-drop down) through ClientScript.RegisterStartupScript, however when this gets called the first drop-down has a selectedIndex of 0, not what it actually is. My guess is that the value of the selection gets set after my start script (but still doesn't call the onChange script). Any ideas on what to try? A: If the second dropdown is initially enabled through javascript (I'm assuming this is during a javascript onchange, since you didn't specify), then clicking the back button to reload the previous postback will never enable it. Mixing ASP.NET with classic javascript can be hairy. You might want to have a look at ASP.NET's Ajax implementation (or the third-party AjaxPanel control if you're forced to use an older ASP.NET version). Those will give you the behaviour that you want through pure C#, without forcing you to resort to javascript hackery-pokery. A: <%@ Page Language="C#" %> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <script runat="server"> protected void indexChanged(object sender, EventArgs e) { Label1.Text = " I did something! "; } </script> <html xmlns="http://www.w3.org/1999/xhtml"> <head runat="server"> <title>Test Page</title> </head> <body> <script type="text/javascript"> function firstChanged() { if(document.getElementById("firstSelect").selectedIndex != 0) document.getElementById("secondSelect").disabled = false; else document.getElementById("secondSelect").disabled = true; } </script> <form id="form1" runat="server"> <div> <select id="firstSelect" onchange="firstChanged()"> <option value="0"></option> <option value="1">One</option> <option value="2">Two</option> <option value="3">Three</option> </select> <select id="secondSelect" disabled="disabled"> <option value="1">One</option> <option value="2">Two</option> <option value="3">Three</option> </select> <asp:DropDownList ID="DropDownList1" AutoPostBack="true" OnSelectedIndexChanged="indexChanged" runat="server"> <asp:ListItem Text="One" Value="1"></asp:ListItem> <asp:ListItem Text="Two" Value="2"></asp:ListItem> </asp:DropDownList> <asp:Label ID="Label1" runat="server"></asp:Label> </div> </form> <script type="text/javascript"> window.onload = function() {firstChanged();} </script> </body> </html> Edit: Replaced the whole code. This should work even in your user control. 
I believe that ClientScript.RegisterStartupScript is not working because the code you write in that block is executed before window.onload is called. I assume (I am not sure of this point) that the DOM objects do not have their values set at that time, and that is why selectedIndex is always 0.
Running Javascript after control's selected value has been set
Simple ASP.NET application. I have two drop-down controls. On the first-drop down I have a JavaScript onChange event. The JavaScript enables the second drop-down and removes a value from it (the value selected in the first drop-down). If they click the blank first value of the drop-down, then the second drop-down will be disabled (and the options reset). I also have code in the OnPreRender method that will enable or disable the second drop-down based on the value of the first drop-down. This is so that the value of the first drop-down can be selected in code (loading user settings). My problem is: The user selects something in the first drop-down. The second drop-down will become enabled through JavaScript. They then change a third drop-down that initiates a post back. After the post back the drop-downs are in the correct state (first value selected, second drop-down enabled). If they then click the back button, the second drop-down will no longer be enabled although it should be since there's something selected in the first drop-down. I've tried adding a startup script (that will set the correct state of the second-drop down) through ClientScript.RegisterStartupScript, however when this gets called the first drop-down has a selectedIndex of 0, not what it actually is. My guess is that the value of the selection gets set after my start script (but still doesn't call the onChange script). Any ideas on what to try?
[ "If the second dropdown is initially enabled through javascript (I'm assuming this is during a javascript onchange, since you didn't specify), then clicking the back button to reload the previous postback will never enable it. \nMixing ASP.NET with classic javascript can be hairy. You might want to have a look at ASP.NET's Ajax implementation (or the third-party AjaxPanel control if you're forced to use an older ASP.NET version). Those will give you the behaviour that you want through pure C#, without forcing you to resort to javascript hackery-pokery.\n", "<%@ Page Language=\"C#\" %>\n\n<!DOCTYPE html PUBLIC \"-//W3C//DTD XHTML 1.0 Transitional//EN\" \"http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd\">\n\n<script runat=\"server\">\n protected void indexChanged(object sender, EventArgs e)\n {\n Label1.Text = \" I did something! \";\n }\n</script>\n\n<html xmlns=\"http://www.w3.org/1999/xhtml\">\n<head runat=\"server\">\n <title>Test Page</title>\n</head>\n<body>\n <script type=\"text/javascript\">\n function firstChanged() {\n if(document.getElementById(\"firstSelect\").selectedIndex != 0)\n document.getElementById(\"secondSelect\").disabled = false;\n else\n document.getElementById(\"secondSelect\").disabled = true;\n }\n </script>\n <form id=\"form1\" runat=\"server\">\n <div>\n <select id=\"firstSelect\" onchange=\"firstChanged()\">\n <option value=\"0\"></option>\n <option value=\"1\">One</option>\n <option value=\"2\">Two</option>\n <option value=\"3\">Three</option>\n </select>\n <select id=\"secondSelect\" disabled=\"disabled\">\n <option value=\"1\">One</option>\n <option value=\"2\">Two</option>\n <option value=\"3\">Three</option>\n </select>\n <asp:DropDownList ID=\"DropDownList1\" AutoPostBack=\"true\" OnSelectedIndexChanged=\"indexChanged\" runat=\"server\">\n <asp:ListItem Text=\"One\" Value=\"1\"></asp:ListItem>\n <asp:ListItem Text=\"Two\" Value=\"2\"></asp:ListItem> \n </asp:DropDownList>\n <asp:Label ID=\"Label1\" runat=\"server\"></asp:Label>\n </div>\n </form>\n <script type=\"text/javascript\">\n window.onload = function() {firstChanged();}\n </script>\n</body>\n</html>\n\nEdit: Replaced the whole code. This should work even in your user control.\nI believe that Register.ClientScriptBlock is not working because the code you write in that block is executed before window.onload is called. And, I assume (I am not sure of this point) that the DOM objects do not have their values set at that time. And, this is why you are getting selectedIndex as always 0.\n" ]
[ 4, 2 ]
[]
[]
[ "asp.net", "javascript" ]
stackoverflow_0000009341_asp.net_javascript.txt
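For reference, the window.onload wrapping shown in the example markup above can also be emitted from the code-behind, which is handy when a user control can't edit the hosting page. A C# sketch; the key name and the firstChanged function come from the example, so treat them as placeholders:

// Run the re-sync after window.onload so the browser has restored the
// posted (or back-button) dropdown values before firstChanged() reads them.
string script = "window.onload = function() { firstChanged(); };";
if (!ClientScript.IsStartupScriptRegistered("syncDropDowns"))
{
    ClientScript.RegisterStartupScript(this.GetType(), "syncDropDowns",
        script, true); // true = wrap in <script> tags
}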
Q: Quality Control / Log Monitoring One of the articles I really enjoyed reading recently was Quality Control by Last.FM. In the spirit of this article, I was wondering if anyone else had favorite monitoring setups for web type applications. Or maybe if you don't believe in Log Monitoring, why? I'm looking for a mix of opinion slash experience here I guess. A: We get a bunch of email/pager alerts from an older host/app/network monitoring environment that get gradually more abusive depending on severity of the problem/time taken to respond. Fortunately we all have thick skins and very broad senses of humour. :) A: We use log4net, and normally write both to log files and the database. However, when we've been tracking down a particularly difficult problem, we've enabled the email appender, so that critical log messages went straight to a developer's email account. This allowed us to figure out what was happening more immediately. In addition, our infrastructure team has several tools they use to monitor system uptime, event logs, etc., to give them early warning when something is about to go down. We've also helped them implement custom monitoring scripts that test specific functionality of our code.
Quality Control / Log Monitoring
One of the articles I really enjoyed reading recently was Quality Control by Last.FM. In the spirit of this article, I was wondering if anyone else had favorite monitoring setups for web type applications. Or maybe if you don't believe in Log Monitoring, why? I'm looking for a mix of opinion slash experience here I guess.
[ "We get a bunch of email/pager alerts from an older host/app/network monitoring environment that get gradually more abusive depending on severity of the problem/time taken to respond. Fortunately we all have thick skins and very broad senses of humour. :)\n", "We use log4net, and normally write both to log files and the database. However, when we've been tracking down a particularly difficult problem, we've enabled the email appender, so that critical log messages went straight to a developer's email account. This allowed us to figure out what was happening more immediately.\nIn addition, our infrastructure team has several tools they use to monitor system uptime, event logs, etc., to give them early warning when something is about to go down. We've also helped them implement custom monitoring scripts that test specific functionality of our code.\n" ]
[ 2, 2 ]
[]
[]
[ "logging", "monitoring" ]
stackoverflow_0000009338_logging_monitoring.txt
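The log4net email trick from the second answer can be wired up in a few lines of C#. This is a rough sketch from memory of log4net's SmtpAppender; the host and addresses are placeholders, and the property names should be double-checked against the log4net docs for your version.

using log4net.Appender;
using log4net.Config;
using log4net.Core;
using log4net.Layout;

SmtpAppender smtp = new SmtpAppender();
smtp.SmtpHost = "mail.example.com";  // placeholder mail relay
smtp.From = "app@example.com";
smtp.To = "oncall-dev@example.com";
smtp.Subject = "Critical log message";
smtp.Threshold = Level.Error;        // only the serious stuff goes to email
smtp.BufferSize = 1;                 // send (nearly) per event rather than batching
smtp.Lossy = false;

PatternLayout layout = new PatternLayout("%date %-5level %logger - %message%newline");
layout.ActivateOptions();
smtp.Layout = layout;
smtp.ActivateOptions();

BasicConfigurator.Configure(smtp);   // in addition to your file/database appenders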
Q: How do I prevent IIS7 from dropping my cookies? I'm using Windows Vista x64 with SP1, and I'm developing an ASP.NET app with IIS7 as the web server. I've got a problem where my cookies aren't "sticking" to the session, so I had a Google and found that there was a known issue with duplicate response headers overwriting instead of being added to the session. This problem was, however, supposed to have been fixed in Service Pack 1 for Vista. Any ideas as to what my trouble might be? I'm using an Integrated app pool, and the max number of worker processes == 1. What's the significance of the underscore? Does it matter where in the URL it is (e.g. it matters if it's in the host name, but not if it's in the query string)? A: Just a thought, have you got an underscore in the url. e.g. http://my_site ? And one other thing, you're not running the app pool in web garden mode? i.e. Process Model -> Maximum Worker Processes: > 1 What type of app pool are you using - Integrated or Classic mode ?
How do I prevent IIS7 from dropping my cookies?
I'm using Windows Vista x64 with SP1, and I'm developing an ASP.NET app with IIS7 as the web server. I've got a problem where my cookies aren't "sticking" to the session, so I had a Google and found that there was a known issue with duplicate response headers overwriting instead of being added to the session. This problem was, however, supposed to have been fixed in Service Pack 1 for Vista. Any ideas as to what my trouble might be? I'm using an Integrated app pool, and the max number of worker processes == 1. What's the significance of the underscore? Does it matter where in the URL it is (e.g. it matters if it's in the host name, but not if it's in the query string)?
[ "Just a thought, have you got an underscore in the url. e.g. http://my_site ?\nAnd one other thing, you're not running the app pool in web garden mode? i.e. Process Model -> Maximum Worker Processes: > 1\nWhat type of app pool are you using - Integrated or Classic mode ?\n" ]
[ 4 ]
[]
[]
[ "cookies", "http", "iis", "iis_7", "windows_vista" ]
stackoverflow_0000009372_cookies_http_iis_iis_7_windows_vista.txt
Q: Windows Mobile Device Emulator - how to save config permanently? I am working at a client site where there is a proxy server (HTTP) in place. If I do a hard reset of the emulator it forgets network connection settings for the emulator and settings in the hosted Windows Mobile OS. If I 'save state and exit' it will lose all of these settings. I need to do hard resets regularly which means that I lose this information and spend a lot of time setting: The emulator's associated network card DNS servers for network card in the WM OS. Proxy servers in connection settings of WM OS. How can I make my life easier? Can I save this as defaults in the emulator, or create an installer easily? A: The problem with these devices is that everything is stored in the RAM and ROM. So you need a second, alternate storage location for these settings, just like a real device, so that when the device is reset it has a statically stored configuration file outside of the RAM that can be loaded on start-up. The alternative is to do soft resets if possible. A: There is a way you can programmatically provision your devices. If you're using managed code, you can use Microsoft.WindowsMobile.Configuration.dll to do most of the work for you. If you're using unmanaged code, you have to use the DMProcessConfigXML native function. There are more details in this blog post by Andrew Arnott.
Windows Mobile Device Emulator - how to save config permanently?
I am working at a client site where there is a proxy server (HTTP) in place. If I do a hard reset of the emulator it forgets network connection settings for the emulator and settings in the hosted Windows Mobile OS. If I 'save state and exit' it will lose all of these settings. I need to do hard resets regularly which means that I lose this information and spend a lot of time setting: The emulators associated network card DNS servers for network card in the WM OS. Proxy servers in connection settings of WM OS. How can I make my life easier? Can I save this as defaults in the emulator, or create an installer easily?
[ "The problem with these devices is that everything is stored in the RAM and ROM. So you need a second, alternate storage location for these settings, just like a real device, so that when the device is reset it has a statically stored configuration file outside of the RAM that can be loaded on start-up. The alternative is to do soft resets if possible.\n", "There is a way you can programmatically provision your devices. If you're using managed code, you can use Microsoft.WindowsMobile.Configuration.dll to do most of the work for you. If you're using unmanaged code, you have to use the DMProcessConfigXML native function.\nThere are more details in this blog post by Andrew Arnott.\n" ]
[ 0, 0 ]
[]
[]
[ "device", "emulation", "visual_studio", "windows_mobile" ]
stackoverflow_0000009018_device_emulation_visual_studio_windows_mobile.txt
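A rough C# illustration of the managed provisioning route mentioned in the second answer. ConfigurationManager.ProcessConfiguration comes from Microsoft.WindowsMobile.Configuration.dll; the wap-provisioning XML here is deliberately skeletal, and the real characteristic and parm entries for proxy/DNS settings have to come from the Windows Mobile provisioning schema, so treat every name below as an assumption.

using System.Xml;
using Microsoft.WindowsMobile.Configuration;

// Skeleton provisioning document; fill in the CM_ProxyEntries characteristics
// from the provisioning schema for your proxy settings.
string provisioningXml =
    "<wap-provisioningdoc>" +
    "  <characteristic type=\"CM_ProxyEntries\">" +
    "    <!-- proxy characteristic/parm elements go here -->" +
    "  </characteristic>" +
    "</wap-provisioningdoc>";

XmlDocument doc = new XmlDocument();
doc.LoadXml(provisioningXml);

// Second argument: false applies the configuration, true only queries
// metadata (as I recall the API; verify against the documentation).
ConfigurationManager.ProcessConfiguration(doc, false);

Run something like this from a small app deployed to the emulator image after each hard reset, and the settings come back without manual clicking.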
Q: Test Distribution At my work we are running a group of tests that consist of about 3,000 separate test cases. Previously we were running this entire test suite on one machine, which took about 24-72 hours to complete the entire test run. We now have created our own system for grouping and distributing the tests among about three separate machines and the tests are prioritized so that the core tests get run first for more immediate results and the extra tests run when there is an available machine. I am curious if anyone has found a good way to distribute their tests among several machines to reduce total test time for a complete run and what tools were used to achieve that. I've done some research and it looks like TestNG is moving in this direction, but it looks like it is still under quite a bit of development. We don't plan on rewriting any of our tests, but as we add new tests and test new products or add-ons I'd like to be able to deal with the fact that we are working with very large numbers of tests. On the other hand, if we can find a tool that would help distribute our Junit 3.x tests even in a very basic fashion, that would be helpful since we wouldn't have to maintain our own tooling to do that. A: I've seen some people having a play with distributed JUnit. I can't particularly vouch for how effective it is, but the other teams I've seen seemed to think it was straight forward enough. Hope that helps. A: Our build people use Mozilla Tinderbox. It seems to have some hooks for distributed testing. I'm sorry not to know the details but I thought I would at least pass on the pointer to you. It's also nice coz you can find out immediately when a build breaks, and what checkin might have been the culprit. http://www.mozilla.org/tinderbox.html A: There's also parallel-junit. Depending on how you currently execute your tests its convenience may vary - the idea is just to multithread on a single system that has multiple cores. I've played with it briefly, but it's a change from how we currently run our tests. Hudson, the continuous integration engine I use, also has some ways to distribute test running (separate jobs aggregated results in one).
Test Distribution
At my work we are running a group of tests that consist of about 3,000 separate test cases. Previously we were running this entire test suite on one machine, which took about 24-72 hours to complete the entire test run. We now have created our own system for grouping and distributing the tests among about three separate machines and the tests are prioritized so that the core tests get run first for more immediate results and the extra tests run when there is an available machine. I am curious if anyone has found a good way to distribute their tests among several machines to reduce total test time for a complete run and what tools were used to achieve that. I've done some research and it looks like TestNG is moving in this direction, but it looks like it is still under quite a bit of development. We don't plan on rewriting any of our tests, but as we add new tests and test new products or add-ons I'd like to be able to deal with the fact that we are working with very large numbers of tests. On the other hand, if we can find a tool that would help distribute our Junit 3.x tests even in a very basic fashion, that would be helpful since we wouldn't have to maintain our own tooling to do that.
[ "I've seen some people having a play with distributed JUnit. I can't particularly vouch for how effective it is, but the other teams I've seen seemed to think it was straight forward enough. Hope that helps.\n", "Our build people use Mozilla Tinderbox. It seems to have some hooks for distributed testing. I'm sorry not to know the details but I thought I would at least pass on the pointer to you.\nIt's also nice coz you can find out immediately when a build breaks, and what checkin might have been the culprit.\nhttp://www.mozilla.org/tinderbox.html\n", "There's also parallel-junit. Depending on how you currently execute your tests its convenience may vary - the idea is just to multithread on a single system that has multiple cores. I've played with it briefly, but it's a change from how we currently run our tests. \nHudson, the continuous integration engine I use, also has some ways to distribute test running (separate jobs aggregated results in one).\n" ]
[ 3, 1, 1 ]
[]
[]
[ "enterprise", "java", "testing" ]
stackoverflow_0000008219_enterprise_java_testing.txt
Q: Datagrid: Calculate Avg or Sum for column in Footer I have a datagrid getting bound to a dataset, and I want to display the average result in the footer for a column populated with integers. The way I figure, there's 2 ways I can think of: 1."Use the Source, Luke" In the code where I'm calling DataGrid.DataBind(), use the DataTable.Compute() method (or in my case DataSet.DataTable(0).Compute()). For example: Dim strAverage = DataTable.Compute("Avg(ColumnName)", "") But once I have this, how can I insert it into the footer? 2."Bound for Glory" Using the DataGrid.ItemDataBound event, and calculating a running total from every ListItemType.Item and ListItemType.AlternatingItem, finally displaying in ListItemType.Footer. For example: Select Case e.Item.ItemType Case ListItemType.Item, ListItemType.AlternatingItem runningTotal += CInt(e.Item.Cells(2).Text) Case ListItemType.Footer e.Item.Cells(2).Text = runningTotal/DataGrid.Items.Count End Select This just feels wrong, plus I would have to make sure the runningTotal is reset on every DataBind. Is there a better way? A: I don't know if either are necessarily better, but two alternate ways would be: Manually run through the table once you hit the footer and calculate from the on-screen text Manually retrieve the data and do the calculation separately from the bind Of course, #2 sort of offsets the advantages of data binding (assuming that's what you're doing). A: Thanks DannySmurf, your first answer made me see sense. (Why do we always look for that magic solution?). For reference, here's what I ended up doing: (Warning: VB below, may not contain enough semicolons) Case ListItemType.Footer e.Item.Cells(0).Text = "Average" For i As Integer = 3 To 8 Dim runningTotal As Integer = 0 For Each row As DataGridItem In DataGrid.Items If IsNumeric(row.Cells(i).Text) Then runningTotal += CInt(row.Cells(i).Text) End If Next e.Item.Cells(i).Text = Math.Round(runningTotal / DataGrid.Items.Count, 0) Next End Select I needed to do it for several columns (hence 3 to 8), ultimately why I was looking for the magical solution.
Datagrid: Calculate Avg or Sum for column in Footer
I have a datagrid getting bound to a dataset, and I want to display the average result in the footer for a column populated with integers. The way I figure, there's 2 ways I can think of: 1."Use the Source, Luke" In the code where I'm calling DataGrid.DataBind(), use the DataTable.Compute() method (or in my case DataSet.DataTable(0).Compute()). For example: Dim strAverage = DataTable.Compute("Avg(ColumnName)", "") But once I have this, how can I insert it into the footer? 2."Bound for Glory" Using the DataGrid.ItemDataBound event, and calculating a running total from every ListItemType.Item and ListItemType.AlternatingItem, finally displaying in ListItemType.Footer. For example: Select Case e.Item.ItemType Case ListItemType.Item, ListItemType.AlternatingItem runningTotal += CInt(e.Item.Cells(2).Text) Case ListItemType.Footer e.Item.Cells(2).Text = runningTotal/DataGrid.Items.Count End Select This just feels wrong, plus I would have to make sure the runningTotal is reset on every DataBind. Is there a better way?
[ "I don't know if either are necessarily better, but two alternate ways would be:\n\nManually run through the table once you hit the footer and calculate from the on-screen text\nManually retrieve the data and do the calculation separately from the bind\n\nOf course, #2 sort of offsets the advantages of data binding (assuming that's what you're doing).\n", "Thanks DannySmurf, your first answer made me see sense. (Why do we always look for that magic solution?). \nFor reference, here's what I ended up doing: (Warning: VB below, may not contain enough semicolons) \nCase ListItemType.Footer\n e.Item.Cells(0).Text = \"Average\"\n For i As Integer = 3 To 8\n Dim runningTotal As Integer = 0\n For Each row As DataGridItem In DataGrid.Items\n If IsNumeric(row.Cells(i).Text) Then\n runningTotal += CInt(row.Cells(i).Text)\n End If\n Next\n e.Item.Cells(i).Text = Math.Round(runningTotal / DataGrid.Items.Count, 0)\n Next\nEnd Select\n\nI needed to do it for several columns (hence 3 to 8), ultimately why I was looking for the magical solution.\n" ]
[ 1, 1 ]
[]
[]
[ "asp.net", "datagrid", "report", "vb.net" ]
stackoverflow_0000009409_asp.net_datagrid_report_vb.net.txt
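To combine the two options from the question (DataTable.Compute for the math, the ItemDataBound event only for placement), something like this works, assuming the bound DataTable is still in reach when the footer row binds. A C# sketch; the column name, the cell index, and the boundData variable are assumptions taken from the example:

void DataGrid1_ItemDataBound(object sender, DataGridItemEventArgs e)
{
    if (e.Item.ItemType == ListItemType.Footer)
    {
        // Let ADO.NET do the averaging; no running total to reset per bind.
        object avg = boundData.Tables[0].Compute("Avg(ColumnName)", "");
        e.Item.Cells[2].Text = Convert.ToDouble(avg).ToString("N0");
    }
}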
Q: Best way to write a RESTful service "client" in .Net? What techniques do people use to "consume" services in the REST style on .Net? Plain http client? Related to this: many REST services are now using JSON (it's tighter and faster) - so what JSON lib is used? A: My approach was Write some libraries and interfaces to serialize your objects into REST-compatible XML. You can't necessarily just use the built-in serializers, because your service may not accept the same kind of XML that .NET wants to give you. Example: When passing booleans to a Rails REST service, "true" gets unserialized as true, whereas "True" (which .NET gives you) unserializes to false. Write some libraries to do the HTTP, wrapping around the basic .NET WebRequest objects. You might get some mileage out of some third-party libraries in this area as it tends to be more standard. I found some issues though, such as this lovely bug in the .NET framework, so I'm glad I stuck with the basics.
Best way to write a RESTful service "client" in .Net?
What techniques do people use to "consume" services in the REST style on .Net? Plain http client? Related to this: many REST services are now using JSON (it's tighter and faster) - so what JSON lib is used?
[ "My approach was\n\nWrite some libraries and interfaces to serialize your objects into REST-compatible XML.\nYou can't necessarily just use the built-in serializers, because your service may not accept the same kind of XML that .NET wants to give you.\nExample: When passing booleans to a Rails REST service, \"true\" gets unserialized as true, whereas \"True\" (which .NET gives you) unserializes to false.\nWrite some libraries to do the HTTP, wrapping around the basic .NET WebRequest objects.\nYou might get some mileage out of some third-party libraries in this area as it tends to be more standard. I found some issues though, such as this lovely bug in the .NET framework, so I'm glad I stuck with the basics.\n\n" ]
[ 5 ]
[]
[]
[ ".net", "rest", "web_services" ]
stackoverflow_0000009467_.net_rest_web_services.txt
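As a baseline for the "plain HTTP client" option, a GET against a REST resource using only the stock .NET classes looks roughly like this. The URL is a placeholder, Item is a stand-in DTO, and XmlSerializer substitutes for whatever custom serialization layer the answer describes building:

using System.IO;
using System.Net;
using System.Xml.Serialization;

HttpWebRequest request =
    (HttpWebRequest)WebRequest.Create("http://example.com/items/1.xml");
request.Method = "GET";
request.Accept = "text/xml";

using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
using (Stream body = response.GetResponseStream())
{
    XmlSerializer serializer = new XmlSerializer(typeof(Item));
    Item item = (Item)serializer.Deserialize(body); // Item is a placeholder type
}

POST and PUT work the same way, writing the serialized body to request.GetRequestStream() and setting ContentType, which is where the casing quirks ("True" vs "true") called out above start to bite.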
Q: RaisePostBackEvent not firing I have a custom control that implements IPostBackEventHandler. Some client-side events invoke __doPostBack(controlID, eventArgs). The control is implemented in two different user controls. In one control, RaisePostBackEvent is fired on the server-side when __doPostBack is invoked. In the other control, RaisePostBackEvent is never invoked. I checked the __EVENTTARGET parameter and it does match the ClientID of the control... where else might I look to troubleshoot this? A: There are a lot of ways this can fall apart. Are you adding the control to the page dynamically in code behind? If so, a lot of the time your UniqueID can be off - even though the client IDs are equal. Do you have a code sample that might demonstrate what you're doing? A: Double check that it is indeed a derivation of the UserControl class, not the WebControl one. This one has taken me by surprise many times. If you need to use WebControl for the styling, you need to let your control implement INamingContainer. (Don't worry, it's a marker interface) So.. public class MyControl : UserControl {} Or public class MyControl : WebControl, INamingContainer {}
RaisePostBackEvent not firing
I have a custom control that implements IPostBackEventHandler. Some client-side events invoke __doPostBack(controlID, eventArgs). The control is implemented in two different user controls. In one control, RaisePostBackEvent is fired on the server-side when __doPostBack is invoked. In the other control, RaisePostBackEvent is never invoked. I checked the __EVENTTARGET parameter and it does match the ClientID of the control... where else might I look to troubleshoot this?
[ "There are a lot of ways this can fall apart. Are you adding the control to the page dynamically in code behind? If so, a lot of the time your UniqueID can be off - even though the client IDs are equal. Do you have a code sample that might demonstrate what you're doing?\n", "Double check that it is indeed a derivation of the UserControl class, not the WebControl one.\nThis one has taken me by surprise many times. If you need to use WebControl for the styling, you need to let your control implement INamingContainer. (Don't worry, it's a marker interface)\nSo..\npublic class MyControl : UserControl {}\n\nOr\npublic class MyControl : WebControl, INamingContainer {}\n\n" ]
[ 1, 0 ]
[]
[]
[ "asp.net", "postback" ]
stackoverflow_0000009473_asp.net_postback.txt
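For reference, here is the minimal shape of a control that can receive __doPostBack events, including the INamingContainer marker from the second answer; a C# sketch with rendering reduced to one anchor:

using System.Web.UI;
using System.Web.UI.WebControls;

public class PostBackAwareControl : WebControl, IPostBackEventHandler, INamingContainer
{
    public void RaisePostBackEvent(string eventArgument)
    {
        // Reached only when __EVENTTARGET matches this control's UniqueID
        // (not merely its ClientID) on postback.
    }

    protected override void Render(HtmlTextWriter writer)
    {
        // GetPostBackEventReference emits __doPostBack(...) with the correct
        // UniqueID, avoiding hand-built IDs that go stale when naming
        // containers change.
        string href = "javascript:" +
            Page.ClientScript.GetPostBackEventReference(this, "clicked");
        writer.Write("<a href=\"" + href + "\">fire postback</a>");
    }
}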
Q: Generate sitemap on the fly I'm trying to generate a sitemap.xml on the fly for a particular asp.net website. I found a couple of solutions: chinookwebs cervoproject newtonking Chinookwebs is working great but seems a bit inactive right now and it's impossible to personalize the "priority" and the "changefreq" tags of each and every page, they all inherit the same value from the config file. What solutions do you guys use? A: Usually you'll use an HTTP Handler for this. Given a request for... http://www.yoursite.com/sitemap.axd ...your handler will respond with a formatted XML sitemap. Whether that sitemap is generated on the fly, from a database, or some other method is up to the HTTP Handler implementation. Here's roughly what it would look like: void IHttpHandler.ProcessRequest(HttpContext context) { // // Important to return qualified XML (text/xml) for sitemaps // context.Response.ClearHeaders(); context.Response.ClearContent(); context.Response.ContentType = "text/xml"; // // Create an XML writer // XmlTextWriter writer = new XmlTextWriter(context.Response.Output); writer.WriteStartDocument(); writer.WriteStartElement("urlset", "http://www.sitemaps.org/schemas/sitemap/0.9"); // // Now add entries for individual pages.. // writer.WriteStartElement("url"); writer.WriteElementString("loc", "http://www.codingthewheel.com"); // use W3 date format.. writer.WriteElementString("lastmod", postDate.ToString("yyyy-MM-dd")); writer.WriteElementString("changefreq", "daily"); writer.WriteElementString("priority", "1.0"); writer.WriteEndElement(); // // Close everything out and go home. // writer.WriteEndElement(); writer.WriteEndDocument(); writer.Flush(); } This code can be improved but that's the basic idea. A: Custom handler to generate the sitemap. A: Using ASP.NET MVC, I just whipped up a quick bit of code using the .NET XML generation library and then just passed that to a view page that had an XML control on it. In the code-behind I tied the control with the ViewData. This seemed to override the default behaviour of view pages to present a different header.
Generate sitemap on the fly
I'm trying to generate a sitemap.xml on the fly for a particular asp.net website. I found a couple solutions: chinookwebs cervoproject newtonking Chinookwebs is working great but seems a bit inactive right now and it's impossible to personalize the "priority" and the "changefreq" tags of each and every page, they all inherit the same value from the config file. What solutions do you guys use?
[ "Usually you'll use an HTTP Handler for this. Given a request for...\n\nhttp://www.yoursite.com/sitemap.axd\n\n...your handler will respond with a formatted XML sitemap. Whether that sitemap is generated on the fly, from a database, or some other method is up to the HTTP Handler implementation.\nHere's roughly what it would look like:\nvoid IHttpHandler.ProcessRequest(HttpContext context)\n{\n //\n // Important to return qualified XML (text/xml) for sitemaps\n //\n context.Response.ClearHeaders();\n context.Response.ClearContent();\n context.Response.ContentType = \"text/xml\";\n //\n // Create an XML writer\n //\n XmlTextWriter writer = new XmlTextWriter(context.Response.Output);\n writer.WriteStartDocument();\n writer.WriteStartElement(\"urlset\", \"http://www.sitemaps.org/schemas/sitemap/0.9\");\n //\n // Now add entries for individual pages..\n //\n writer.WriteStartElement(\"url\");\n writer.WriteElementString(\"loc\", \"http://www.codingthewheel.com\");\n // use W3 date format..\n writer.WriteElementString(\"lastmod\", postDate.ToString(\"yyyy-MM-dd\"));\n writer.WriteElementString(\"changefreq\", \"daily\");\n writer.WriteElementString(\"priority\", \"1.0\");\n writer.WriteEndElement();\n //\n // Close everything out and go home.\n //\n writer.WriteEndElement();\n writer.WriteEndDocument();\n writer.Flush();\n}\n\nThis code can be improved but that's the basic idea.\n", "Custom handler to generate the sitemap. \n", "Using ASP.NET MVC, I just whipped up a quick bit of code using the .NET XML generation library and then just passed that to a view page that had an XML control on it. In the code-behind I tied the control with the ViewData. This seemed to override the default behaviour of view pages to present a different header.\n" ]
[ 7, 0, 0 ]
[]
[]
[ ".net", "asp.net", "sitemap" ]
stackoverflow_0000009336_.net_asp.net_sitemap.txt
Q: Replicating load related crashes in non-production environments We're running a custom application on our intranet and we have found a problem after upgrading it recently where IIS hangs with 100% CPU usage, requiring a reset. Rather than subject users to the hangs, we've rolled back to the previous release while we determine a solution. The first step is to reproduce the problem -- but we can't. Here's some background: Prod has a single virtualized (vmware) web server with two CPUs and 2 GB of RAM. The database server has 4GB, and 2 CPUs as well. It's also on VMWare, but separate physical hardware. During normal usage the application runs fine. The w3wp.exe process normally uses between 5-20% CPU and around 200MB of RAM. CPU and RAM fluctuate slightly under normal use, but nothing unusual. However, when we start running into problems, the RAM climbs dramatically and the CPU pegs at 98% (or as much as it can get). The site becomes unresponsive, necessitating an IIS restart. Resetting the app pool does nothing in this situation; a full IIS restart is required. It does not happen during the night (no usage). It happens more when the site is under load, but it has also happened under non-peak periods. First step to solving this problem is reproducing it. To simulate the load, we started using JMeter to simulate usage. Our load script is based on actual usage around the time of the crash. Using JMeter, we can ramp the usage up quite high (2-3 times the load during the crash) but the site behaves fine. CPU is up high, and the site does become sluggish, but memory usage is reasonable and nothing is hanging. Does anyone have any tips on how to reproduce a problem like this in a non-production environment? We'd really like to reproduce the error, determine a solution, then test again to make sure we've resolved it. During the process we've found a number of small things that we've improved that might solve the problem, but I'd really feel a lot more confident if we could reproduce the problem and test the improved version. Any tools, techniques or theories much appreciated! A: You can find some information about troubleshooting this kind of problem at this blog entry. Her blog is generally a good debugging resource. A: I have an article about debugging ASP.NET in production which may provide some pointers. A: Is your test env really the same as live? i.e. 2 separate vm instances on 2 physical servers - with the network connection and account types? Are there any other instances on the database? Are there any other web applications in IIS? Is the .Net Config right? Is the App Pool Config right for service accounts? Try looking at this - MS Article on IIS6 Optimising for Performance Lots of tricks.
Replicating load related crashes in non-production environments
We're running a custom application on our intranet and we have found a problem after upgrading it recently where IIS hangs with 100% CPU usage, requiring a reset. Rather than subject users to the hangs, we've rolled back to the previous release while we determine a solution. The first step is to reproduce the problem -- but we can't. Here's some background: Prod has a single virtualized (vmware) web server with two CPUs and 2 GB of RAM. The database server has 4GB, and 2 CPUs as well. It's also on VMWare, but separate physical hardware. During normal usage the application runs fine. The w3wp.exe process normally uses between 5-20% CPU and around 200MB of RAM. CPU and RAM fluctuate slightly under normal use, but nothing unusual. However, when we start running into problems, the RAM climbs dramatically and the CPU pegs at 98% (or as much as it can get). The site becomes unresponsive, necessitating an IIS restart. Resetting the app pool does nothing in this situation; a full IIS restart is required. It does not happen during the night (no usage). It happens more when the site is under load, but it has also happened under non-peak periods. First step to solving this problem is reproducing it. To simulate the load, we started using JMeter to simulate usage. Our load script is based on actual usage around the time of the crash. Using JMeter, we can ramp the usage up quite high (2-3 times the load during the crash) but the site behaves fine. CPU is up high, and the site does become sluggish, but memory usage is reasonable and nothing is hanging. Does anyone have any tips on how to reproduce a problem like this in a non-production environment? We'd really like to reproduce the error, determine a solution, then test again to make sure we've resolved it. During the process we've found a number of small things that we've improved that might solve the problem, but I'd really feel a lot more confident if we could reproduce the problem and test the improved version. Any tools, techniques or theories much appreciated!
[ "You can find some information about troubleshooting this kind of problem at this blog entry. Her blog is generally a good debugging resource.\n", "I have an article about debugging ASP.NET in production which may provide some pointers.\n", "Is your test env really the same as live? \ni.e. 2 separate vm instances on 2 physical servers - with the network connection and account types?\nAre there any other instances on the database?\nAre there any other web applications in IIS?\nIs the .Net Config right?\nIs the App Pool Config right for service accounts?\nTry looking at this - MS Article on IIS6 Optimising for Performance\nLots of tricks.\n" ]
[ 1, 1, 0 ]
[]
[]
[ "asp.net", "cpu", "crash", "memory", "performance" ]
stackoverflow_0000009501_asp.net_cpu_crash_memory_performance.txt
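One cheap thing to bolt onto a JMeter rig like the one described above is a watchdog that samples the worker process, so a runaway is caught the moment it starts rather than after the hang. A hedged C# sketch using System.Diagnostics performance counters; the "w3wp" instance name and the trip points are assumptions for a single-worker-process box:

using System;
using System.Diagnostics;
using System.Threading;

PerformanceCounter cpu = new PerformanceCounter("Process", "% Processor Time", "w3wp");
PerformanceCounter mem = new PerformanceCounter("Process", "Private Bytes", "w3wp");

cpu.NextValue(); // the first CPU sample is always 0; prime the counter

while (true)
{
    Thread.Sleep(5000);
    float cpuPct = cpu.NextValue() / Environment.ProcessorCount;
    float privateMb = mem.NextValue() / (1024f * 1024f);
    if (cpuPct > 90f || privateMb > 1000f) // arbitrary trip points
    {
        Console.WriteLine("{0:T} possible runaway: cpu={1:F0}% mem={2:F0}MB",
            DateTime.Now, cpuPct, privateMb);
        // a good moment to grab a memory dump for offline analysis
    }
}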
Q: C# 2.0 code consuming assemblies compiled with C# 3.0 This should be fine seeing as the CLR hasn't actually changed? The boxes running the C# 2.0 code have had .NET 3.5 rolled out. The background is that we have a windows service (.NET 2.0 exe built with VS2005, deployed to ~150 servers) that dynamically loads assemblies (almost like plug-ins) to complete various work items asked of it. Whenever we roll out a new version of the bus logic, we just drop the assemblies on an FTP server and the windows service knows how to check for, grab and store the latest versions. New assemblies are now built using VS2008 and targeting .NET 2.0, and we know that works ok. However we'd like to start taking advantage of C# 3.0 language features such as LINQ and targeting the assemblies against .NET 3.5 without having to build and deploy a new version of the windows service. A: C#3 and .Net 3.5 add new assemblies, but the IL is unchanged. This means that with .Net 2 assemblies you can compile and use C#3, as long as you don't use Linq or anything else that references System.Linq or System.Core yield, var, lambda syntax, anon types and initialisers are all compiler cleverness. The IL they produce is cross-compatible. If you can reference the new assemblies for 3.5 it should all just work. There is no new version of ASP.Net - it should still be 2.0.50727 - but you should still compile for 3.5 A: yield, var, lambda syntax, anon types and initialisers are all compiler cleverness. The IL they produce is cross-compatible. Minor nit-picking point, but yield was a 2.0 feature anyway. A: This is interesting stuff. I was looking at LinqBridge yesterday after someone on this forum suggested it to me and they are doing a similar thing. I find it strange that Microsoft named the frameworks 2.0, 3.0 and 3.5 when they all compile down to produce the same IL required by the 2.0 CLR. I would have thought adding versions onto 2.0 would have made more sense, although I suppose it is also hard to get people to get their head around the fact that there are different versions of runtimes, compilers and languages.
C# 2.0 code consuming assemblies compiled with C# 3.0
This should be fine seeing as the CLR hasn't actually changed? The boxes running the C# 2.0 code have had .NET 3.5 rolled out. The background is that we have a windows service (.NET 2.0 exe built with VS2005, deployed to ~150 servers) that dynamically loads assemblies (almost like plug-ins) to complete various work items asked of it. Whenever we roll out a new version of the bus logic, we just drop the assemblies on an FTP server and the windows service knows how to check for, grab and store the latest versions. New assemblies are now built using VS2008 and targeting .NET 2.0, and we know that works ok. However we'd like to start taking advantage of C# 3.0 language features such as LINQ and targeting the assemblies against .NET 3.5 without having to build and deploy a new version of the windows service.
[ "C#3 and .Net 3.5 add new assemblies, but the IL is unchanged.\nThis means that with .Net 2 assemblies you can compile and use C#3, as long as you don't use Linq or anything else that references System.Linq or System.Core\nyield, var, lambda syntax, anon types and initialisers are all compiler cleverness. The IL they produce is cross-compatible.\nIf you can reference the new assemblies for 3.5 it should all just work.\nThere is no new version of ASP.Net - it should still be 2.0.50727 - but you should still compile for 3.5\n", "\nyield, var, lambda syntax, anon types\n and initialisers are all compiler\n cleverness. The IL they produce is\n cross-compatible.\n\nMinor nit-picking point, but yield was a 2.0 feature anyway.\n", "This is interesting stuff. I was looking at LinqBridge yesterday after someone on this forum suggested it to me and they are doing a similar thing.\nI find it strange that Microsoft named the frameworks 2.0, 3.0 and 3.5 when they all compile down to produce the same IL required by the 2.0 CLR. I would have thought adding versions onto 2.0 would have made more sense, although I suppose it is also hard to get people to get their head around the fact that there are different versions of runtimes, compilers and languages.\n" ]
[ 6, 2, 1 ]
[]
[]
[ ".net", ".net_3.5", "c#" ]
stackoverflow_0000009508_.net_.net_3.5_c#.txt
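To make the "compiler cleverness" point concrete, everything below compiles with the C# 3.0 compiler against a .NET 2.0 target and runs on the 2.0 CLR, because none of it touches System.Linq or System.Core; the lambda just needs a home-grown delegate type, since Func<T,TResult> lives in System.Core. A sketch only:

using System;
using System.Collections.Generic;

delegate int IntOp(int x); // our own delegate; no System.Core dependency

class CSharp3OnClr2Demo
{
    static void Main()
    {
        var message = "inferred as string";   // 'var' is purely compile-time
        IntOp square = x => x * x;            // lambda becomes an ordinary delegate
        var point = new { X = 1, Y = 2 };     // anonymous type: compiler-generated class
        var list = new List<int> { 1, 2, 3 }; // collection initializer -> Add() calls

        Console.WriteLine("{0} {1} {2} {3}",
            message, square(4), point.X, list.Count);
    }
}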
Q: Visual Studio 2008 Window layout annoyance I'm having a weird issue with Visual Studio 2008. Every time I fire it up, the solution explorer is about an inch wide. It's like it can't remember its layout settings. Every un-docked window is in the position I place it. But if I dock a window, its position is saved, but its size will be reset to very narrow (around an inch) when I load. I've never come across this before and it's pretty annoying. Any ideas? The things I've tried: Saving, then reloading settings via Import/Export. Resetting all environment settings via Import/Export. Window -> Reset Window layout. Combination of rebooting after changing the above. Installed SP1. No improvement. None of these changed the behaviour of docked windows. (Also, definitely no other instances running..) I do run two monitors, but I've had this setup on three different workstations and this is the first time I've come across it. A: I had the same problem. It turned out that if the VS window was non-maximized, it was really small. So after making the non-maximized window wider, the problem disappeared. A: I occasionally get this bug, and others related to layout/fonts/colouring etc. A little trick I've found is to use Tools -> Import and Export Settings: export your current settings once you've got everything set up as you like, then close and reopen Visual Studio and import. Hopefully that'll sort you out. In 2005 there were some little bugs with viewing Project/Solution property panels when the Solution Explorer wasn't in its default position, docked on the left of the screen - I don't know if that's changed in VS2008, but you might want to put it back there and see. Now, when are we going to get decent MultiMonitor support?! A: Maybe you're closing Visual Studio while some other instance is still alive. The settings of the last instance that is closed are the ones that will be applied.
Visual Studio 2008 Window layout annoyance
I'm having a weird issue with Visual Studio 2008. Every time I fire it up, the solution explorer is about an inch wide. It's like it can't remember its layout settings. Every un-docked window is in the position I place it. But if I dock a window, its position is saved, but its size will be reset to very narrow (around an inch) when I load. I've never come across this before and it's pretty annoying. Any ideas? The things I've tried: Saving, then reloading settings via Import/Export. Resetting all environment settings via Import/Export. Window -> Reset Window layout. Combination of rebooting after changing the above. Installed SP1. No improvement. None of these changed the behaviour of docked windows. (Also, definitely no other instances running..) I do run two monitors, but I've had this setup on three different workstations and this is the first time I've come across it.
[ "I had the same problem. It turned out that if the VS window was non-maximized, it was really\nsmall. So after making the non-maximized window wider, the problem disappeared.\n", "I occasionally get this bug, and others related to layout/fonts/colouring etc. A little trick I've found is to use Tools -> Import and Export Settings: export your current settings once you've got everything set up as you like, then close and reopen Visual Studio and import. Hopefully that'll sort you out.\nIn 2005 there were some little bugs with viewing Project/Solution property panels when the Solution Explorer wasn't in its default position, docked on the left of the screen - I don't know if that's changed in VS2008, but you might want to put it back there and see.\nNow, when are we going to get decent MultiMonitor support?!\n", "Maybe you're closing Visual Studio while some other instance is still alive. The settings of the last instance that is closed are the ones that will be applied.\n" ]
[ 11, 3, 0 ]
[]
[]
[ "visual_studio" ]
stackoverflow_0000009601_visual_studio.txt
Q: Bidirectional outer join Suppose we have a table A: itemid mark 1 5 2 3 and table B: itemid mark 1 3 3 5 I want to join A*B on A.itemid=B.itemid both right and left ways. i.e. result: itemid A.mark B.mark 1 5 3 2 3 NULL 3 NULL 5 Is there a way to do it in one query in MySQL? A: It's called a full outer join and it's not supported natively in MySQL, judging from its docs. You can work around this limitation using UNION as described in the comments to the page I linked to. [edit] Since others posted snippets, here you go. You can see explanation on the linked page. SELECT * FROM A LEFT JOIN B ON A.id = B.id UNION ALL SELECT * FROM A RIGHT JOIN B ON A.id = B.id WHERE A.id IS NULL A: Could do with some work but here is some sql select distinct T.itemid, A.mark as "A.mark", B.mark as "B.mark" from (select * from A union select * from B) T left join A on T.itemid = A.itemid left join B on T.itemid = B.itemid; This relies on the left join, which returns all the rows in the original table (in this case this is the subselect table T). If there are no matches in the joined table, then it will set the column to NULL.
Bidirectional outer join
Suppose we have a table A: itemid mark 1 5 2 3 and table B: itemid mark 1 3 3 5 I want to join A*B on A.itemid=B.itemid both right and left ways. i.e. result: itemid A.mark B.mark 1 5 3 2 3 NULL 3 NULL 5 Is there a way to do it in one query in MySQL?
[ "It's called a full outer join and it's not supported natively in MySQL, judging from its docs. You can work around this limitation using UNION as described in the comments to the page I linked to.\n[edit] Since others posted snippets, here you go. You can see explanation on the linked page.\nSELECT *\nFROM A LEFT JOIN B ON A.id = B.id\nUNION ALL\nSELECT *\nFROM A RIGHT JOIN B ON A.id = B.id\nWHERE A.id IS NULL\n\n", "Could do with some work but here is some sql\nselect distinct T.itemid, A.mark as \"A.mark\", B.mark as \"B.mark\"\n from (select * from A union select * from B) T \n left join A on T.itemid = A.itemid \n left join B on T.itemid = B.itemid;\n\nThis relies on the left join, which returns all the rows in the original table (in this case this is the subselect table T). If there are no matches in the joined table, then it will set the column to NULL.\n" ]
[ 8, 2 ]
[ "This works for me on SQL Server:\nselect isnull(a.id, b.id), a.mark, b.mark\nfrom a \nfull outer join b on b.id = a.id\n\n" ]
[ -1 ]
[ "mysql", "sql" ]
stackoverflow_0000009614_mysql_sql.txt
Q: Validating a Win32 Window Handle Given a handle of type HWND is it possible to confirm that the handle represents a real window? A: There is a function IsWindow which does exactly what you asked for. BOOL isRealHandle = IsWindow(unknownHandle); Look at this link for more information. A: Generally no. By the time you've got confirmation that a window is valid, another process/thread may come along and remove it for you.
Validating a Win32 Window Handle
Given a handle of type HWND is it possible to confirm that the handle represents a real window?
[ "There is a function IsWindow which does exactly what you asked for.\nBOOL isRealHandle = IsWindow(unknownHandle);\n\nLook at this link for more information.\n", "Generally no. By the time you've got confirmation that a window is valid, another process/thread may come along and remove it for you.\n" ]
[ 16, 4 ]
[]
[]
[ "c++", "winapi", "windows" ]
stackoverflow_0000009667_c++_winapi_windows.txt
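The same check is reachable from managed code via P/Invoke, which can be handy when the HWND arrived through interop; the race noted in the second answer still applies, since the result is only valid at the instant of the call. A C# sketch:

using System;
using System.Runtime.InteropServices;

static class NativeMethods
{
    [DllImport("user32.dll")]
    [return: MarshalAs(UnmanagedType.Bool)]
    public static extern bool IsWindow(IntPtr hWnd);
}

// Usage: inherently racy - the window can be destroyed right after this returns true.
bool looksValid = NativeMethods.IsWindow(someHwnd);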
Q: Enterprise Library CacheFactory.GetCacheManager Throws Null Ref I'm trying to convert an application using the 1.1 version of the Enterprise Library Caching block over to the 2.0 version. I think where I'm really having a problem is that the configuration for the different EntLib pieces was split out over several files. Apparently, this used to be handled by the ConfigurationManagerSectionHandler, but is now obsolete in favor of the built-in configuration mechanisms in .NET 2.0. I'm having a hard time finding a good example of how to do this configuration file splitting, especially in the context of EntLib. Has anyone else dealt with this? A: Looks like it was the configuration. I found a good example of the normal, one-file approach here: http://www.devx.com/dotnet/Article/31158/0/page/2 Using an external config file is actually trivial once you figure out the syntax for it. Ex.: In Web.config: <cachingConfiguration configSource="cachingconfiguration.config" /> In cachingconfiguration.config: <?xml version="1.0" encoding="utf-8"?> <cachingConfiguration defaultCacheManager="Default Cache Manager"> <backingStores> <add name="inMemory" type="Microsoft.Practices.EnterpriseLibrary.Caching.BackingStoreImplementations.NullBackingStore, Microsoft.Practices.EnterpriseLibrary.Caching" /> </backingStores> <cacheManagers> <add name="Default Cache Manager" expirationPollFrequencyInSeconds = "60" maximumElementsInCacheBeforeScavenging ="50" numberToRemoveWhenScavenging="10" backingStoreName="inMemory" /> </cacheManagers> </cachingConfiguration> Hopefully this helps somebody!
Enterprise Library CacheFactory.GetCacheManager Throws Null Ref
I'm trying to convert an application using the 1.1 version of the Enterprise Library Caching block over to the 2.0 version. I think where I'm really having a problem is that the configuration for the different EntLib pieces was split out over several files. Apparently, this used to be handled by the ConfigurationManagerSectionHandler, but is now obsolete in favor of the built-in configuration mechanisms in .NET 2.0. I'm having a hard time finding a good example of how to do this configuration file splitting, especially in the context of EntLib. Has anyone else dealt with this?
[ "Looks like it was the configuration. I found a good example of the normal, one-file approach here: http://www.devx.com/dotnet/Article/31158/0/page/2\nUsing an external config file is actually trivial once you figure out the syntax for it. Ex.:\nIn Web.config:\n<cachingConfiguration configSource=\"cachingconfiguration.config\" />\n\nIn cachingconfiguration.config:\n\n<?xml version=\"1.0\" encoding=\"utf-8\"?>\n<cachingConfiguration defaultCacheManager=\"Default Cache Manager\">\n <backingStores>\n <add name=\"inMemory\" type=\"Microsoft.Practices.EnterpriseLibrary.Caching.BackingStoreImplementations.NullBackingStore, Microsoft.Practices.EnterpriseLibrary.Caching\" />\n </backingStores>\n <cacheManagers>\n <add name=\"Default Cache Manager\" expirationPollFrequencyInSeconds = \"60\" maximumElementsInCacheBeforeScavenging =\"50\" numberToRemoveWhenScavenging=\"10\" backingStoreName=\"inMemory\" />\n </cacheManagers>\n</cachingConfiguration>\n\nHopefully this helps somebody!\n" ]
[ 4 ]
[]
[]
[ "c#", "configuration", "enterprise_library" ]
stackoverflow_0000009136_c#_configuration_enterprise_library.txt
Q: How to prevent the mouse cursor from being hidden after calling CComboBox::ShowDropDown? In my MFC application, when I call CComboBox::ShowDropDown(), the mouse cursor is hidden until interaction with the combo box completes (when the combo box loses focus.) It doesn't reappear when the mouse is moved, like it does with edit boxes. How can I keep the mouse cursor from being hidden? A: Call SetCursor(LoadCursor(NULL, IDC_ARROW)); immediately after the ShowDropDown() call.
How to prevent the mouse cursor from being hidden after calling CComboBox::ShowDropDown?
In my MFC application, when I call CComboBox::ShowDropDown(), the mouse cursor is hidden until interaction with the combo box completes (when the combo box loses focus.) It doesn't reappear when the mouse is moved, like it does with edit boxes. How can I keep the mouse cursor from being hidden?
[ "Call\nSetCursor(LoadCursor(NULL, IDC_ARROW));\nimmediately after the ShowDropDown() call.\n" ]
[ 2 ]
[]
[]
[ "ccombobox", "mfc", "mouse", "visibility" ]
stackoverflow_0000009704_ccombobox_mfc_mouse_visibility.txt
Q: Visual Studio 2005 Macros stop working when Visual Studio 2008 is installed I have a number of macros written for Visual Studio 2005, but they have since stopped working once I installed Visual Studio 2008 on my computer. No error is returned by the macro when I try to run it, and the environment merely shows the hourglass for a second and then returns to the normal cursor. Currently uninstalling one or the other is not possible, and I am wondering if there is any way to get the macros to work again? A: You may need to install (reinstall) VS 2005 SP1, since a security update from Microsoft (KB928365) on July 10 may have caused the issue.
Visual Studio 2005 Macros stop working when Visual Studio 2008 is installed
I have a number of macros written for Visual Studio 2005, but they have since stopped working once I installed Visual Studio 2008 on my computer. No error is returned by the macro when I try to run it, and the environment merely shows the hourglass for a second and then returns to the normal cursor. Currently uninstalling one or the other is not possible, and I am wondering if there is any way to get the macros to work again?
[ "You may need to install (reinstall) VS 2005 SP1, since a security update from Microsoft (KB928365) on July 10 may have caused the issue.\n" ]
[ 3 ]
[]
[]
[ "ide", "macros", "visual_studio", "visual_studio_2005", "visual_studio_2008" ]
stackoverflow_0000009693_ide_macros_visual_studio_visual_studio_2005_visual_studio_2008.txt
Q: How to obtain good concurrent read performance from disk I'd like to ask a question then follow it up with my own answer, but also see what answers other people have. We have two large files which we'd like to read from two separate threads concurrently. One thread will sequentially read fileA while the other thread will sequentially read fileB. There is no locking or communication between the threads, both are sequentially reading as fast as they can, and both are immediately discarding the data they read. Our experience with this setup on Windows is very poor. The combined throughput of the two threads is in the order of 2-3 MiB/sec. The drive seems to be spending most of its time seeking backwards and forwards between the two files, presumably reading very little after each seek. If we disable one of the threads and temporarily look at the performance of a single thread then we get much better bandwidth (~45 MiB/sec for this machine). So clearly the bad two-thread performance is an artefact of the OS disk scheduler. Is there anything we can do to improve the concurrent thread read performance? Perhaps by using different APIs or by tweaking the OS disk scheduler parameters in some way. Some details: The files are in the order of 2 GiB each on a machine with 2GiB of RAM. For the purpose of this question we consider them not to be cached and perfectly defragmented. We have used defrag tools and rebooted to ensure this is the case. We are using no special APIs to read these files. The behaviour is repeatable across various bog-standard APIs such as Win32's CreateFile, C's fopen, C++'s std::ifstream, Java's FileInputStream, etc. Each thread spins in a loop making calls to the read function. We have varied the number of bytes requested from the API each iteration from values between 1KiB up to 128MiB. Varying this has had no effect, so clearly the amount the OS is physically reading after each disk seek is not dictated by this number. This is exactly what should be expected. The dramatic difference between one-thread and two-thread performance is repeatable across Windows 2000, Windows XP (32-bit and 64-bit), Windows Server 2003, and also with and without hardware RAID5. A: The problem seems to be in Windows I/O scheduling policy. According to what I found here there are many ways for an O.S. to schedule disk requests. While Linux and others can choose between different policies, before Vista Windows was locked in a single policy: a FIFO queue, where all requests were split into 64 KB blocks. I believe that this policy is the cause for the problem you are experiencing: the scheduler will mix requests from the two threads, causing continuous seek between different areas of the disk. Now, the good news is that according to here and here, Vista introduced a smarter disk scheduler, where you can set the priority of your requests and also allocate a minimum bandwidth for your process. The bad news is that I found no way to change disk policy or buffers size in previous versions of Windows. Also, even if raising disk I/O priority of your process will boost the performance against the other processes, you still have the problems of your threads competing against each other. What I can suggest is to modify your software by introducing a self-made disk access policy.
For example, you could use a policy like this in your thread B (similar for Thread A): if THREAD A is reading from disk then wait for THREAD A to stop reading or wait for X ms Read for X ms (or Y MB) Stop reading and check status of thread A again You could use semaphores for status checking or you could use perfmon counters to get the status of the actual disk queue. The values of X and/or Y could also be auto-tuned by checking the actual transfer rates and slowly modifying them, thus maximizing the throughput when the application runs on different machines and/or O.S. You could find that cache, memory or RAID levels affect them in a way or the other, but with auto-tuning you will always get the best performance in every scenario. A: I'd like to add some further notes in my response. All other non-Microsoft operating systems we have tested do not suffer from this problem. Linux, FreeBSD, and Mac OS X (this final one on different hardware) all degrade much more gracefully in terms of aggregate bandwidth when moving from one thread to two. Linux for example degraded from ~45 MiB/sec to ~42 MiB/sec. These other operating systems must be reading larger chunks of the file between each seek, and therefore not spending nearly all their time waiting on the disk to seek. Our solution for Windows is to pass the FILE_FLAG_NO_BUFFERING flag to CreateFile and use large (~16MiB) reads in each call to ReadFile. This is suboptimal for several reasons: Files don't get cached when read like this, so there are none of the advantages that caching normally gives. The constraints when working with this flag are much more complicated than normal reading (alignment of read buffers to page boundaries, etc). (As a final remark. Does this explain why swapping under Windows is so hellish? I.e., Windows is incapable of doing IO to multiple files concurrently with any efficiency, so while swapping all other IO operations are forced to be disproportionately slow.) Edit to add some further details for Will Dean: Of course across these different hardware configurations the raw figures did change (sometimes substantially). The problem however is the consistent degradation in performance that only Windows suffers when moving from one thread to two. Here is a summary of the machines tested: Several Dell workstations (Intel Xeon) of various ages running Windows 2000, Windows XP (32-bit), and Windows XP (64-bit) with single drive. A Dell 1U server (Intel Xeon) running Windows Server 2003 (64-bit) with RAID 1+0. An HP workstation (AMD Opteron) with Windows XP (64-bit), and Windows Server 2003, and hardware RAID 5. My home unbranded PC (AMD Athlon64) running Windows XP (32-bit), FreeBSD (64-bit), and Linux (64-bit) with single drive. My home MacBook (Intel Core1) running Mac OS X, single SATA drive. My home Koolu PC running Linux. Vastly underpowered compared to the other systems but I demonstrated that even this machine can outperform a Windows server with RAID5 when doing multi-threaded disk reads. CPU usage on all of these systems was very low during the tests and anti-virus was disabled. I forgot to mention before but we also tried the normal Win32 CreateFile API with the FILE_FLAG_SEQUENTIAL_SCAN flag set. This flag didn't fix the problem. A: It does seem a little strange that you see no difference across quite a wide range of windows versions and nothing between a single drive and hardware raid-5. It's only 'gut feel', but that does make me doubtful that this is really a simple seeking problem.
Other than the OS X and the Raid5, was all this tried on the same machine - have you tried another machine? Is your CPU usage basically zero during this test? What's the shortest app you can write which demonstrates this problem? - I would be interested to try it here. A: I would create some kind of in memory thread safe lock. Each thread could wait on the lock until it was free. When the lock becomes free, take the lock and read the file for a defined length of time or a defined amount of data, then release the lock for any other waiting threads. A: Do you use IOCompletionPorts under Windows? Windows via C++ has an in-depth chapter on this subject and as luck would have it, it is also available on MSDN. A: Paul - saw the update. Very interesting. It would be interesting to try it on Vista or Win2008, as people seem to be reporting some considerable I/O improvements on these in some circumstances. My only suggestion about a different API would be to try memory mapping the files - have you tried that? Unfortunately at 2GB per file, you're not going to be able to map multiple whole files on a 32-bit machine, which means this isn't quite as trivial as it might be.
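For anyone who wants to reproduce the measurement, a minimal C# harness along these lines (a sketch only; the two file paths come from the command line, and FileOptions.SequentialScan maps to FILE_FLAG_SEQUENTIAL_SCAN, which as noted above did not fix the problem):
using System;
using System.Diagnostics;
using System.IO;
using System.Threading;
class ReadBench
{
    // sequentially read one file and discard the data, as in the question
    static void ReadAll(object path)
    {
        byte[] buffer = new byte[16 * 1024 * 1024];
        using (FileStream fs = new FileStream((string)path, FileMode.Open, FileAccess.Read, FileShare.Read, 1 << 16, FileOptions.SequentialScan))
        {
            while (fs.Read(buffer, 0, buffer.Length) > 0) { }
        }
    }
    static void Main(string[] args)
    {
        Stopwatch sw = Stopwatch.StartNew();
        Thread a = new Thread(ReadAll);
        Thread b = new Thread(ReadAll);
        a.Start(args[0]);
        b.Start(args[1]);
        a.Join();
        b.Join();
        Console.WriteLine("Elapsed: {0}", sw.Elapsed); // compare one thread vs. two
    }
}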
How to obtain good concurrent read performance from disk
I'd like to ask a question then follow it up with my own answer, but also see what answers other people have. We have two large files which we'd like to read from two separate threads concurrently. One thread will sequentially read fileA while the other thread will sequentially read fileB. There is no locking or communication between the threads, both are sequentially reading as fast as they can, and both are immediately discarding the data they read. Our experience with this setup on Windows is very poor. The combined throughput of the two threads is in the order of 2-3 MiB/sec. The drive seems to be spending most of its time seeking backwards and forwards between the two files, presumably reading very little after each seek. If we disable one of the threads and temporarily look at the performance of a single thread then we get much better bandwidth (~45 MiB/sec for this machine). So clearly the bad two-thread performance is an artefact of the OS disk scheduler. Is there anything we can do to improve the concurrent thread read performance? Perhaps by using different APIs or by tweaking the OS disk scheduler parameters in some way. Some details: The files are in the order of 2 GiB each on a machine with 2GiB of RAM. For the purpose of this question we consider them not to be cached and perfectly defragmented. We have used defrag tools and rebooted to ensure this is the case. We are using no special APIs to read these files. The behaviour is repeatable across various bog-standard APIs such as Win32's CreateFile, C's fopen, C++'s std::ifstream, Java's FileInputStream, etc. Each thread spins in a loop making calls to the read function. We have varied the number of bytes requested from the API each iteration from values between 1KiB up to 128MiB. Varying this has had no effect, so clearly the amount the OS is physically reading after each disk seek is not dictated by this number. This is exactly what should be expected. The dramatic difference between one-thread and two-thread performance is repeatable across Windows 2000, Windows XP (32-bit and 64-bit), Windows Server 2003, and also with and without hardware RAID5.
[ "The problem seems to be in Windows I/O scheduling policy. According to what I found here there are many ways for an O.S. to schedule disk requests. While Linux and others can choose between different policies, before Vista Windows was locked in a single policy: a FIFO queue, where all requests were split into 64 KB blocks. I believe that this policy is the cause for the problem you are experiencing: the scheduler will mix requests from the two threads, causing continuous seek between different areas of the disk.\nNow, the good news is that according to here and here, Vista introduced a smarter disk scheduler, where you can set the priority of your requests and also allocate a minimum bandwidth for your process.\nThe bad news is that I found no way to change disk policy or buffers size in previous versions of Windows. Also, even if raising disk I/O priority of your process will boost the performance against the other processes, you still have the problems of your threads competing against each other.\nWhat I can suggest is to modify your software by introducing a self-made disk access policy.\nFor example, you could use a policy like this in your thread B (similar for Thread A): \nif THREAD A is reading from disk then wait for THREAD A to stop reading or wait for X ms\nRead for X ms (or Y MB)\nStop reading and check status of thread A again \n\nYou could use semaphores for status checking or you could use perfmon counters to get the status of the actual disk queue.\nThe values of X and/or Y could also be auto-tuned by checking the actual transfer rates and slowly modifying them, thus maximizing the throughput when the application runs on different machines and/or O.S. You could find that cache, memory or RAID levels affect them in a way or the other, but with auto-tuning you will always get the best performance in every scenario.\n", "I'd like to add some further notes in my response. All other non-Microsoft operating systems we have tested do not suffer from this problem. Linux, FreeBSD, and Mac OS X (this final one on different hardware) all degrade much more gracefully in terms of aggregate bandwidth when moving from one thread to two. Linux for example degraded from ~45 MiB/sec to ~42 MiB/sec. These other operating systems must be reading larger chunks of the file between each seek, and therefore not spending nearly all their time waiting on the disk to seek.\nOur solution for Windows is to pass the FILE_FLAG_NO_BUFFERING flag to CreateFile and use large (~16MiB) reads in each call to ReadFile. This is suboptimal for several reasons:\n\nFiles don't get cached when read like this, so there are none of the advantages that caching normally gives.\nThe constraints when working with this flag are much more complicated than normal reading (alignment of read buffers to page boundaries, etc).\n\n(As a final remark. Does this explain why swapping under Windows is so hellish? I.e., Windows is incapable of doing IO to multiple files concurrently with any efficiency, so while swapping all other IO operations are forced to be disproportionately slow.)\n\nEdit to add some further details for Will Dean:\nOf course across these different hardware configurations the raw figures did change (sometimes substantially). The problem however is the consistent degradation in performance that only Windows suffers when moving from one thread to two. 
Here is a summary of the machines tested:\n\nSeveral Dell workstations (Intel Xeon) of various ages running Windows 2000, Windows XP (32-bit), and Windows XP (64-bit) with single drive.\nA Dell 1U server (Intel Xeon) running Windows Server 2003 (64-bit) with RAID 1+0.\nAn HP workstation (AMD Opteron) with Windows XP (64-bit), and Windows Server 2003, and hardware RAID 5.\nMy home unbranded PC (AMD Athlon64) running Windows XP (32-bit), FreeBSD (64-bit), and Linux (64-bit) with single drive.\nMy home MacBook (Intel Core1) running Mac OS X, single SATA drive.\nMy home Koolu PC running Linux. Vastly underpowered compared to the other systems but I demonstrated that even this machine can outperform a Windows server with RAID5 when doing multi-threaded disk reads.\n\nCPU usage on all of these systems was very low during the tests and anti-virus was disabled.\nI forgot to mention before but we also tried the normal Win32 CreateFile API with the FILE_FLAG_SEQUENTIAL_SCAN flag set. This flag didn't fix the problem.\n", "It does seem a little strange that you see no difference across quite a wide range of windows versions and nothing between a single drive and hardware raid-5.\nIt's only 'gut feel', but that does make me doubtful that this is really a simple seeking problem. Other than the OS X and the Raid5, was all this tried on the same machine - have you tried another machine? Is your CPU usage basically zero during this test?\nWhat's the shortest app you can write which demonstrates this problem? - I would be interested to try it here.\n", "I would create some kind of in memory thread safe lock. Each thread could wait on the lock until it was free. When the lock becomes free, take the lock and read the file for a defined length of time or a defined amount of data, then release the lock for any other waiting threads.\n", "Do you use IOCompletionPorts under Windows? Windows via C++ has an in-depth chapter on this subject and as luck would have it, it is also available on MSDN.\n", "Paul - saw the update. Very interesting.\nIt would be interesting to try it on Vista or Win2008, as people seem to be reporting some considerable I/O improvements on these in some circumstances.\nMy only suggestion about a different API would be to try memory mapping the files - have you tried that? Unfortunately at 2GB per file, you're not going to be able to map multiple whole files on a 32-bit machine, which means this isn't quite as trivial as it might be.\n" ]
[ 12, 6, 1, 0, 0, 0 ]
[]
[]
[ "file_io", "multithreading", "windows" ]
stackoverflow_0000009191_file_io_multithreading_windows.txt
Q: SharePoint - Connection String dialog box during FeatureActivated event Does anyone know if it is possible to display a prompt to a user/administrator when activating or installing a SharePoint feature? I am writing a custom webpart and it is connecting to a separate database, I would like to allow the administrator to select or type in a connection string when installing the .wsp file or activating the feature. I am looking inside the FeatureActivated event and thinking of using the SPWebConfigModification class to actually write the connection string to the web.config files in the farm. I do not want to hand edit the web.configs or hard code the string into the DLL. If you have other methods for handling connection strings inside SharePoint I would be interested in them as well. A: Unfortunately there is no way to swap to a screen where you can get user input via the feature activation process. Couple of comments for you: I'm assuming the connection string is going to be different for every installation, so there is no way you can include it directly in the Solution. I'm assuming that you couldn't programmatically construct this during installation. Therefore, you need some way to get user input. Here are a couple of options: It could be a web part property, though this would mean setting it each and every time the web part was added, and you would need to then maintain those settings individually. You could build out your own _layouts settings screen (good post: http://community.zevenseas.com/Blogs/Robin/archive/2008/03/17/lcm-creating-custom-application-page-and-using-the-propertybag-more-detailed.aspx), and from there users can maintain the property, storing it in either the Web Property bag, or inside the Web.Config. I try to avoid using the Web.Config where I can, but if you do wish to go this route then MAKE SURE you use the SPWebConfigModification class (Read this great blog: http://www.crsw.com/mark/Lists/Posts/Post.aspx?ID=32) Finally, a technique I often use is storing configuration information in a SharePoint List. Chris O'Brien has a great framework for that here: http://www.codeplex.com/SPConfigStore Hope that helps, Daniel A: Sounds good. I will look at these possible solutions. I do not think #1 will work since I am deploying multiple webparts inside a single solution which all use the same connectionString. #3 sounds like a very clean solution. I see the config items are cached so it looks like if I need to store a connection string, I will not be hit with a SP lookup each time I need that string. While searching for a solution I did stumble across another method. If you dig around their code, it looks like they have created an installer that accepts application-specific values, adds the values into a FeatureTemplate.xml file and passes them to the SPFeatureReceiverProperties object in the Receiver. I was about to start tackling this method, but I think #3 would be better. Thank you, Keith
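To illustrate the SPWebConfigModification route discussed above, a rough FeatureActivated sketch (the web application lookup assumes a site-scoped feature, and the entry name, owner and connection string are placeholders, not a tested implementation):
using Microsoft.SharePoint;
using Microsoft.SharePoint.Administration;
public override void FeatureActivated(SPFeatureReceiverProperties properties)
{
    SPWebApplication webApp = ((SPSite)properties.Feature.Parent).WebApplication;
    SPWebConfigModification mod = new SPWebConfigModification();
    mod.Path = "configuration/connectionStrings";
    mod.Name = "add[@name='MyDatabase']";
    mod.Owner = "MyFeature";
    mod.Sequence = 0;
    mod.Type = SPWebConfigModification.SPWebConfigModificationType.EnsureChildNode;
    mod.Value = "<add name=\"MyDatabase\" connectionString=\"...\" />"; // value deliberately left elided
    webApp.WebConfigModifications.Add(mod);
    webApp.Update();
    webApp.WebService.ApplyWebConfigModifications(); // pushes the change to every web.config in the farm
}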
SharePoint - Connection String dialog box during FeatureActivated event
Does anyone know if it is possible to display a prompt to a user/administrator when activating or installing a SharePoint feature? I am writing a custom webpart and it is connecting to a separate database, I would like to allow the administrator to select or type in a connection string when installing the .wsp file or activating the feature. I am looking inside the FeatureActivated event and thinking of using the SPWebConfigModification class to actually write the connection string to the web.config files in the farm. I do not want to hand edit the web.configs or hard code the string into the DLL. If you have other methods for handling connection strings inside SharePoint I would be interested in them as well.
[ "Unfortunately there is no way to swap to a screen where you can get user input via the feature activation process. Couple of comments for you:\n\nI'm assuming the connection string is going to be different for every installation, so there is no way you can include it directly in the Solution. \nI'm assuming that you couldn't programmatically construct this during installation.\n\nTherefore, you need some way to get user input. Here are a couple of options:\n\nIt could be a web part property, though this would mean setting it each and every time the web part was added, and you would need to then maintain those settings individually.\nYou could build out your own _layouts settings screen (good post: http://community.zevenseas.com/Blogs/Robin/archive/2008/03/17/lcm-creating-custom-application-page-and-using-the-propertybag-more-detailed.aspx), and from there users can maintain the property, storing it in either the Web Property bag, or inside the Web.Config. I try to avoid using the Web.Config where I can, but if you do wish to go this route then MAKE SURE you use the SPWebConfigModification class (Read this great blog: http://www.crsw.com/mark/Lists/Posts/Post.aspx?ID=32)\nFinally, a technique I often use is storing configuration information in a SharePoint List. Chris O'Brien has a great framework for that here: http://www.codeplex.com/SPConfigStore\n\nHope that helps,\nDaniel\n", "Sounds good. I will look at these possible solutions.\nI do not think #1 will work since I am deploying multiple webparts inside a single solution which all use the same connectionString.\n#3 sounds like a very clean solution. I see the config items are cached so it looks like if I need to store a connection string, I will not be hit with a SP lookup each time I need that string.\nWhile searching for a solution I did stumble across another method. \nIf you dig around their code, it looks like they have created an installer that accepts application-specific values, adds the values into a FeatureTemplate.xml file and passes them to the SPFeatureReceiverProperties object in the Receiver.\nI was about to start tackling this method, but I think #3 would be better. \nThank you,\nKeith\n" ]
[ 1, 0 ]
[]
[]
[ "connection_string", "sharepoint" ]
stackoverflow_0000008849_connection_string_sharepoint.txt
Q: Calculate DateTime Weeks into Rows I am currently writing a small calendar in ASP.Net C#. Currently to produce the rows of the weeks I do the following for loop: var iWeeks = 6; for (int w = 0; w < iWeeks; w++) { This works fine, however, some months will only have 5 weeks and in some rare cases, 4. How can I calculate the number of rows that will be required for a particular month? This is an example of what I am creating: As you can see for the above month, there are only 5 rows required, however. Take this month (August 2008) which started on a Saturday and ends on a Monday on the 6th Week/Row. Image found on Google A: Here is the method that does it: public int GetWeekRows(int year, int month) { DateTime firstDayOfMonth = new DateTime(year, month, 1); DateTime lastDayOfMonth = new DateTime(year, month, 1).AddMonths(1).AddDays(-1); System.Globalization.Calendar calendar = System.Threading.Thread.CurrentThread.CurrentCulture.Calendar; int lastWeek = calendar.GetWeekOfYear(lastDayOfMonth, System.Globalization.CalendarWeekRule.FirstFourDayWeek, DayOfWeek.Monday); int firstWeek = calendar.GetWeekOfYear(firstDayOfMonth, System.Globalization.CalendarWeekRule.FirstFourDayWeek, DayOfWeek.Monday); return lastWeek - firstWeek + 1; } You can customize the calendar week rule by modifying the System.Globalization.CalendarWeekRule.FirstFourDayWeek part. I hope the code is self-explanatory. A: Well, it depends on the culture you're using, but let's assume you can use Thread.CurrentThread.CurrentCulture, then the code to get the week of today would be: CultureInfo culture = Thread.CurrentThread.CurrentCulture; Calendar cal = culture.Calendar; Int32 week = cal.GetWeekOfYear(DateTime.Today, culture.DateTimeFormat.CalendarWeekRule, culture.DateTimeFormat.FirstDayOfWeek); A: How about checking which week the first and last days will be in? A: The months in the Julian / Gregorian calendar have the same number of days each year, except February, which can have 28 or 29 days depending on the leapness of the year. You can find the number of days in the Description section at http://en.wikipedia.org/wiki/Gregorian_calendar. As @darkdog mentioned you have DateTime.DaysInMonth. Just do this: var days = DateTime.DaysInMonth(year, month) + WhatDayOfWeekTheMonthStarts(year, month); int rows = (days / 7); if (0 < days % 7) { ++rows; } Take into consideration the fact that for globalization / localization purposes, some parts of the world use different calendars / methods of organization of the year. A: Try this, DateTime.DaysInMonth A: Check Calendar.GetWeekOfYear. It should do the trick. There is a problem with it, it does not follow the 4-day rule of ISO 8601, but otherwise it is neat. A: You can get the days of a month by using DateTime.DaysInMonth(int WhichYear, int WhichMonth); A: The problem isn't the number of days in the month, it's how many weeks it spans over. February in a non-leap year will have 28 days, and if the first day of the month is a Monday, February will span exactly 4 week numbers. However, if the first day of the month is a Tuesday, or any other day of the week, February will span 5 week numbers. A 31-day month can span 5 or 6 weeks the same way. If the month starts on a Monday, the 31 days gives you 5 week numbers.
If the month starts on Saturday or Sunday, it will span 6 week numbers. So the right way to obtain this number is to find the week number of the first and last days of the month. Edit #1: Here's how to calculate the number of weeks a given month spans: Edit #2: Fixed bugs in code public static Int32 GetWeekForDateCurrentCulture(DateTime dt) { CultureInfo culture = Thread.CurrentThread.CurrentCulture; Calendar cal = culture.Calendar; return cal.GetWeekOfYear(dt, culture.DateTimeFormat.CalendarWeekRule, culture.DateTimeFormat.FirstDayOfWeek); } public static Int32 GetWeekSpanCountForMonth(DateTime dt) { DateTime firstDayInMonth = new DateTime(dt.Year, dt.Month, 1); DateTime lastDayInMonth = firstDayInMonth.AddMonths(1).AddDays(-1); return GetWeekForDateCurrentCulture(lastDayInMonth) - GetWeekForDateCurrentCulture(firstDayInMonth) + 1; } A: First, find out which weekday the first day of the month is in. Just new up a datetime with the first day, always 1, and the year and month in question, there is a day of week property on it. Then from here, you can use the number of days in the month, DateTime.DaysInMonth, in order to determine how many weeks when you divide by seven and then add the number of days from 1 that your first day falls on. For instance, public static int RowsForMonth(int month, int year) { DateTime first = new DateTime(year, month, 1); //number of days the first sits beyond Monday (Sunday wraps to 6) int offset = (((int)first.DayOfWeek) + 6) % 7; int actualdays = DateTime.DaysInMonth(year, month) + offset; int rows = actualdays / 7; if (actualdays % 7 > 0) { rows++; } return rows; }
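As a quick sanity check of the accepted method (a sketch; the result depends on the week rule, which is hardcoded to FirstFourDayWeek/Monday above):
int rows = GetWeekRows(2008, 8); // August 2008
// should give 5 under the Monday-first rule above; a Sunday-first week rule
// yields 6, which matches the 6-row August grid described in the question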
Calculate DateTime Weeks into Rows
I am currently writing a small calendar in ASP.Net C#. Currently to produce the rows of the weeks I do the following for loop: var iWeeks = 6; for (int w = 0; w < iWeeks; w++) { This works fine, however, some months will only have 5 weeks and in some rare cases, 4. How can I calculate the number of rows that will be required for a particular month? This is an example of what I am creating: As you can see for the above month, there are only 5 rows required, however. Take this month (August 2008) which started on a Saturday and ends on a Monday on the 6th Week/Row. Image found on Google
[ "Here is the method that does it:\npublic int GetWeekRows(int year, int month)\n{\n DateTime firstDayOfMonth = new DateTime(year, month, 1);\n DateTime lastDayOfMonth = new DateTime(year, month, 1).AddMonths(1).AddDays(-1);\n System.Globalization.Calendar calendar = System.Threading.Thread.CurrentThread.CurrentCulture.Calendar;\n int lastWeek = calendar.GetWeekOfYear(lastDayOfMonth, System.Globalization.CalendarWeekRule.FirstFourDayWeek, DayOfWeek.Monday);\n int firstWeek = calendar.GetWeekOfYear(firstDayOfMonth, System.Globalization.CalendarWeekRule.FirstFourDayWeek, DayOfWeek.Monday);\n return lastWeek - firstWeek + 1;\n}\n\nYou can customize the calendar week rule by modifying the System.Globalization.CalendarWeekRule.FirstFourDayWeek part. I hope the code is self-explanatory.\n", "Well, it depends on the culture you're using, but let's assume you can use Thread.CurrentThread.CurrentCulture, then the code to get the week of today would be:\nCultureInfo culture = Thread.CurrentThread.CurrentCulture;\nCalendar cal = culture.Calendar;\nInt32 week = cal.GetWeekOfYear(DateTime.Today,\n culture.DateTimeFormat.CalendarWeekRule,\n culture.DateTimeFormat.FirstDayOfWeek);\n\n", "How about checking which week the first and last days will be in?\n", "The months in the Julian / Gregorian calendar have the same number of days each year, except February, which can have 28 or 29 days depending on the leapness of the year. You can find the number of days in the Description section at http://en.wikipedia.org/wiki/Gregorian_calendar.\nAs @darkdog mentioned you have DateTime.DaysInMonth. Just do this:\nvar days = DateTime.DaysInMonth(year, month) + \n WhatDayOfWeekTheMonthStarts(year, month); \nint rows = (days / 7); \nif (0 < days % 7) \n{ \n ++rows;\n} \n\nTake into consideration the fact that for globalization / localization purposes, some parts of the world use different calendars / methods of organization of the year.\n", "Try this,\nDateTime.DaysInMonth\n\n", "Check Calendar.GetWeekOfYear. It should do the trick.\nThere is a problem with it, it does not follow the 4-day rule of ISO 8601, but otherwise it is neat.\n", "You can get the days of a month by using DateTime.DaysInMonth(int WhichYear, int WhichMonth);\n", "The problem isn't the number of days in the month, it's how many weeks it spans over.\nFebruary in a non-leap year will have 28 days, and if the first day of the month is a Monday, February will span exactly 4 week numbers.\nHowever, if the first day of the month is a Tuesday, or any other day of the week, February will span 5 week numbers.\nA 31-day month can span 5 or 6 weeks the same way. If the month starts on a Monday, the 31 days gives you 5 week numbers. 
If the month starts on Saturday or Sunday, it will span 6 week numbers.\nSo the right way to obtain this number is to find the week number of the first and last days of the month.\n\nEdit #1: Here's how to calculate the number of weeks a given month spans:\nEdit #2: Fixed bugs in code\npublic static Int32 GetWeekForDateCurrentCulture(DateTime dt)\n{\n CultureInfo culture = Thread.CurrentThread.CurrentCulture;\n Calendar cal = culture.Calendar;\n return cal.GetWeekOfYear(dt,\n culture.DateTimeFormat.CalendarWeekRule,\n culture.DateTimeFormat.FirstDayOfWeek);\n}\n\npublic static Int32 GetWeekSpanCountForMonth(DateTime dt)\n{\n DateTime firstDayInMonth = new DateTime(dt.Year, dt.Month, 1);\n DateTime lastDayInMonth = firstDayInMonth.AddMonths(1).AddDays(-1);\n return\n GetWeekForDateCurrentCulture(lastDayInMonth)\n - GetWeekForDateCurrentCulture(firstDayInMonth)\n + 1;\n}\n\n", "First, find out which weekday the first day of the month is in. Just new up a datetime with the first day, always 1, and the year and month in question, there is a day of week property on it.\nThen from here, you can use the number of days in the month, DateTime.DaysInMonth, in order to determine how many weeks when you divide by seven and then add the number of days from 1 that your first day falls on. For instance,\npublic static int RowsForMonth(int month, int year)\n{\n DateTime first = new DateTime(year, month, 1);\n\n //number of days the first sits beyond Monday (Sunday wraps to 6)\n int offset = (((int)first.DayOfWeek) + 6) % 7;\n\n int actualdays = DateTime.DaysInMonth(year, month) + offset;\n\n int rows = actualdays / 7;\n if (actualdays % 7 > 0)\n {\n rows++;\n }\n return rows;\n}\n\n" ]
[ 6, 2, 0, 0, 0, 0, 0, 0, 0 ]
[]
[]
[ "asp.net", "c#" ]
stackoverflow_0000009805_asp.net_c#.txt
Q: Document Server: Handling Concurrent Saves I'm implementing a document server. Currently, if two users open the same document, then modify it and save the changes, the document's state will be undefined (either the first user's changes are saved permanently, or the second's). This is entirely unsatisfactory. I considered two possibilities to solve this problem: The first is to lock the document when it is opened by someone the first time, and unlock it when it is closed. But if the network connection to the server is suddenly interrupted, the document would stay in a forever-locked state. The obvious solution is to send regular pings to the server. If the server doesn't receive K pings in a row (K > 1) from a particular client, documents locked by this client are unlocked. If that client re-appears, documents are locked again, if someone hadn't already locked them. This could also help if the client application (running in web browser) is terminated unexpectedly, making it impossible to send a 'quitting, unlock my documents' signal to the server. The second is to store multiple versions of the same document saved by different users. If changes to the document are made in rapid succession, the system would offer either to merge versions or to select a preferred version. To optimize storage space, only document diffs should be kept (just like source control software). What method should I choose, taking into consideration that the connection to the server might sometimes be slow and unresponsive? How should the parameters (ping interval, rapid succession interval) be determined? P.S. Unfortunately, I can't store the documents in a database. A: The first option you describe is essentially a pessimistic locking model whilst the second is an optimistic model. Which one to choose really comes down to a number of factors but essentially boils down to how the business wants to work. For example, would it unduly inconvenience the users if a document they needed to edit was locked by another user? What happens if a document is locked and someone goes on holiday with their client connected? What is the likely contention for each document - i.e. how likely is it that the same document will be modified by two users at the same time?, how localised are the modifications likely to be within a single document? (If the same section is modified regularly then performing a merge may take longer than simply making the changes again). Assuming the contention is relatively low and/or the size of each change is fairly small then I would probably opt for an optimistic model that resolves conflicts using an automatic or manual merge. A version number or a checksum of the document's contents can be used to determine if a merge is required. A: My suggestion would be something like your first one. When the first user (Bob) opens the document, he acquires a lock so that other users can only read the current document. If the user saves the document while he is using it, he keeps the lock. Only when he exits the document, it is unlocked and other people can edit it. If the second user (Kate) opens the document while Bob has the lock on it, Kate will get a message saying the document is uneditable but she can read it until it the lock has been released. So what happens when Bob acquires the lock, maybe saves the document once or twice but then exits the application leaving the lock hanging? As you said yourself, requiring the client with the lock to send pings at a certain frequency is probably the best option. 
If you don't get a ping from the client for a set amount of time, this effectively means his client is not responding anymore. If this is a web application you can use javascript for the pings. The document that was last saved releases its lock and Kate can now acquire it. A ping can contain the name of the document that the client has a lock on, and the server can calculate when the last ping for that document was received. A: Currently documents are published by a limited group of people, each of them working on a separate subject. So, the inconvenience introduced by locks is minimized. People mostly extend existing documents and correct mistakes in them. Speaking about the pessimistic model, the 'left client connected for N days' scenario could be avoided by setting lock expire date to, say, one day before lock start date. Because documents edited are by no means mission critical, and are modified by multiple users quite rarely, that could be enough. Now consider the optimistic model. How should the differences be detected, if the documents have some regular (say, hierarchical) structure? If not? What are the chances of successful automatic merge in these cases? The situation becomes more complicated, because some of the documents (edited by the 'admins' user group) contain important configuration information (document global index, user roles, etc.). To my mind, locks are more advantageous for precisely this kind of information, because it's not changed on everyday basis. So some hybrid solution might be acceptable. What do you think?
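To make the optimistic variant concrete, a minimal save guard might look like this (illustrative only; the file-per-document storage and the MD5 checksum are assumptions, not part of the system described above):
using System;
using System.IO;
using System.Security.Cryptography;
class DocumentStore
{
    private readonly object sync = new object();
    // checksum handed to the client when it opens the document
    public string Checksum(string path)
    {
        using (MD5 md5 = MD5.Create())
        using (FileStream fs = File.OpenRead(path))
            return Convert.ToBase64String(md5.ComputeHash(fs));
    }
    // optimistic save: succeeds only if nobody changed the document since the client read it
    public bool TrySave(string path, byte[] newContent, string checksumWhenRead)
    {
        lock (sync)
        {
            if (Checksum(path) != checksumWhenRead)
                return false; // conflict: caller must merge (or pick a version) and retry
            File.WriteAllBytes(path, newContent);
            return true;
        }
    }
}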
Document Server: Handling Concurrent Saves
I'm implementing a document server. Currently, if two users open the same document, then modify it and save the changes, the document's state will be undefined (either the first user's changes are saved permanently, or the second's). This is entirely unsatisfactory. I considered two possibilities to solve this problem: The first is to lock the document when it is opened by someone the first time, and unlock it when it is closed. But if the network connection to the server is suddenly interrupted, the document would stay in a forever-locked state. The obvious solution is to send regular pings to the server. If the server doesn't receive K pings in a row (K > 1) from a particular client, documents locked by this client are unlocked. If that client re-appears, documents are locked again, if someone hadn't already locked them. This could also help if the client application (running in web browser) is terminated unexpectedly, making it impossible to send a 'quitting, unlock my documents' signal to the server. The second is to store multiple versions of the same document saved by different users. If changes to the document are made in rapid succession, the system would offer either to merge versions or to select a preferred version. To optimize storage space, only document diffs should be kept (just like source control software). What method should I choose, taking into consideration that the connection to the server might sometimes be slow and unresponsive? How should the parameters (ping interval, rapid succession interval) be determined? P.S. Unfortunately, I can't store the documents in a database.
[ "The first option you describe is essentially a pessimistic locking model whilst the second is an optimistic model.\nWhich one to choose really comes down to a number of factors but essentially boils down to how the business wants to work. For example, would it unduly inconvenience the users if a document they needed to edit was locked by another user? What happens if a document is locked and someone goes on holiday with their client connected? What is the likely contention for each document - i.e. how likely is it that the same document will be modified by two users at the same time?, how localised are the modifications likely to be within a single document? (If the same section is modified regularly then performing a merge may take longer than simply making the changes again). \nAssuming the contention is relatively low and/or the size of each change is fairly small then I would probably opt for an optimistic model that resolves conflicts using an automatic or manual merge. A version number or a checksum of the document's contents can be used to determine if a merge is required.\n", "My suggestion would be something like your first one. When the first user (Bob) opens the document, he acquires a lock so that other users can only read the current document. If the user saves the document while he is using it, he keeps the lock. Only when he exits the document, it is unlocked and other people can edit it. \nIf the second user (Kate) opens the document while Bob has the lock on it, Kate will get a message saying the document is uneditable but she can read it until it the lock has been released.\nSo what happens when Bob acquires the lock, maybe saves the document once or twice but then exits the application leaving the lock hanging? \nAs you said yourself, requiring the client with the lock to send pings at a certain frequency is probably the best option. If you don't get a ping from the client for a set amount of time, this effectively means his client is not responding anymore. If this is a web application you can use javascript for the pings. The document that was last saved releases its lock and Kate can now acquire it. \nA ping can contain the name of the document that the client has a lock on, and the server can calculate when the last ping for that document was received.\n", "Currently documents are published by a limited group of people, each of them working on a separate subject. So, the inconvenience introduced by locks is minimized.\nPeople mostly extend existing documents and correct mistakes in them.\nSpeaking about the pessimistic model, the 'left client connected for N days' scenario could be avoided by setting lock expire date to, say, one day before lock start date. Because documents edited are by no means mission critical, and are modified by multiple users quite rarely, that could be enough.\nNow consider the optimistic model. How should the differences be detected, if the documents have some regular (say, hierarchical) structure? If not? What are the chances of successful automatic merge in these cases?\nThe situation becomes more complicated, because some of the documents (edited by the 'admins' user group) contain important configuration information (document global index, user roles, etc.). To my mind, locks are more advantageous for precisely this kind of information, because it's not changed on everyday basis. So some hybrid solution might be acceptable.\nWhat do you think?\n" ]
[ 1, 0, 0 ]
[]
[]
[ "concurrency", "locking", "versioning" ]
stackoverflow_0000009675_concurrency_locking_versioning.txt
Q: Access a SQL Server 2005 Express Edition from a network computer How do you access a SQL Server 2005 Express Edition from a application in a network computer? The access I need is both from application (Linq-to-SQL and ODBC) and from Management Studio A: See this KB Article. How to configure SQL Server 2005 to allow remote connections. Oh, and remember that the SQLServer name will probably be MyMachineName\SQLExpress A: If you're running it on a 2k3 box, you need to install all updates for Sql Server and the 2003 server. Check the event logs after you start the Sql Server. It logs everything well, telling you if its being blocked, and where it is listening for connections. From a remote machine, you can use telnet to see if a sql server is listening for remote connections. You just need the IP and the port of the server (default is 1433). From the command line: telnet 192.168.10.10 1433 If you get a blank screen, its listening. If you get thrown back to the command prompt, something is blocking you.
Access a SQL Server 2005 Express Edition from a network computer
How do you access a SQL Server 2005 Express Edition from a application in a network computer? The access I need is both from application (Linq-to-SQL and ODBC) and from Management Studio
[ "See this KB Article. How to configure SQL Server 2005 to allow remote connections.\nOh, and remember that the SQLServer name will probably be MyMachineName\\SQLExpress\n", "If you're running it on a 2k3 box, you need to install all updates for Sql Server and the 2003 server. \nCheck the event logs after you start the Sql Server. It logs everything well, telling you if its being blocked, and where it is listening for connections.\nFrom a remote machine, you can use telnet to see if a sql server is listening for remote connections. You just need the IP and the port of the server (default is 1433). From the command line:\ntelnet 192.168.10.10 1433\n\nIf you get a blank screen, its listening. If you get thrown back to the command prompt, something is blocking you.\n" ]
[ "See this KB Article. How to configure SQL Server 2005 to allow remote connections.\nOh, and remember that the SQLServer name will probably be MyMachineName\\SQLExpress\n", "If you're running it on a 2k3 box, you need to install all updates for Sql Server and the 2003 server. \nCheck the event logs after you start the Sql Server. It logs everything well, telling you if it's being blocked, and where it is listening for connections.\nFrom a remote machine, you can use telnet to see if a sql server is listening for remote connections. You just need the IP and the port of the server (default is 1433). From the command line:\ntelnet 192.168.10.10 1433\n\nIf you get a blank screen, it's listening. If you get thrown back to the command prompt, something is blocking you.\n" ]
[]
[]
[ "sql_server", "sql_server_2005_express" ]
stackoverflow_0000009383_sql_server_sql_server_2005_express.txt
Q: What do you look for from a User Group? I'm in the process of starting a User Group in my area related to .NET development. The format of the community will be the average free food, presentation, and then maybe free swag giveaway. What would you, as a member of a user community, look for in order to keep you coming back month to month? A: I always like talks on different subjects. The real hard thing about talking to a specialized community is keeping the detail level high and the scope narrow. What's the point of talking to a bunch of .NET programmers about the benefits of Polymorphism? It always kills me when I go to a meeting on a particular subject and get the most rudimentary explanation and examples. Its a waste of time. MSDN webcasts have a level system that describes the complexity of the subject. Most are level 100 or 200. If you're dealing with a group of professionals, your talks should always be at level 600-1000. In addition to talking about technical subjects, another big area to hit is professional development. How do you make yourself a more valuable programmer? These types of talks are great for bringing in other people, such as management, sales, customers, etc. People who you normally only associate with under protest, and who you typically curse under your breath when they walk by. A user group forum is a great way to bring these people together with developers in a pseudo-group therapy like setting. Also, donuts. A: It's true that some of the talks out there are very rudimentary, unfortunately some times the bulk of your crowd may need that. I consider myself a novice in a lot of fields, but I've attend talks that I thought were beneath me and still people were asking very basic questions. Perhaps it would be worth having a bi-monthly user group, one week for entry level and one week for advanced. It doesn't necessarily have to mean twice the work if you can get someone to help you coordinate a lot of the work will overlap. On the other hand you might just need to feel out the members of the group and see what their average skill level is and play to that. A: If there isn't beer, its not a good enough user group to attend. The open source guys get this. Their user group meetings are funner, and more dynamic because of this. Just make it BYOB and it'll naturally get better in my experience.
What do you look for from a User Group?
I'm in the process of starting a User Group in my area related to .NET development. The format of the community will be the average free food, presentation, and then maybe free swag giveaway. What would you, as a member of a user community, look for in order to keep you coming back month to month?
[ "I always like talks on different subjects. The real hard thing about talking to a specialized community is keeping the detail level high and the scope narrow. What's the point of talking to a bunch of .NET programmers about the benefits of Polymorphism? It always kills me when I go to a meeting on a particular subject and get the most rudimentary explanation and examples. Its a waste of time.\nMSDN webcasts have a level system that describes the complexity of the subject. Most are level 100 or 200. If you're dealing with a group of professionals, your talks should always be at level 600-1000. \nIn addition to talking about technical subjects, another big area to hit is professional development. How do you make yourself a more valuable programmer? These types of talks are great for bringing in other people, such as management, sales, customers, etc. People who you normally only associate with under protest, and who you typically curse under your breath when they walk by. A user group forum is a great way to bring these people together with developers in a pseudo-group therapy like setting.\nAlso, donuts.\n", "It's true that some of the talks out there are very rudimentary, unfortunately some times the bulk of your crowd may need that. I consider myself a novice in a lot of fields, but I've attend talks that I thought were beneath me and still people were asking very basic questions. Perhaps it would be worth having a bi-monthly user group, one week for entry level and one week for advanced. It doesn't necessarily have to mean twice the work if you can get someone to help you coordinate a lot of the work will overlap. On the other hand you might just need to feel out the members of the group and see what their average skill level is and play to that.\n", "If there isn't beer, its not a good enough user group to attend. The open source guys get this. Their user group meetings are funner, and more dynamic because of this. Just make it BYOB and it'll naturally get better in my experience.\n" ]
[ "I always like talks on different subjects. The real hard thing about talking to a specialized community is keeping the detail level high and the scope narrow. What's the point of talking to a bunch of .NET programmers about the benefits of Polymorphism? It always kills me when I go to a meeting on a particular subject and get the most rudimentary explanation and examples. It's a waste of time.\nMSDN webcasts have a level system that describes the complexity of the subject. Most are level 100 or 200. If you're dealing with a group of professionals, your talks should always be at level 600-1000. \nIn addition to talking about technical subjects, another big area to hit is professional development. How do you make yourself a more valuable programmer? These types of talks are great for bringing in other people, such as management, sales, customers, etc. People who you normally only associate with under protest, and who you typically curse under your breath when they walk by. A user group forum is a great way to bring these people together with developers in a pseudo-group therapy like setting.\nAlso, donuts.\n", "It's true that some of the talks out there are very rudimentary, unfortunately sometimes the bulk of your crowd may need that. I consider myself a novice in a lot of fields, but I've attended talks that I thought were beneath me and still people were asking very basic questions. Perhaps it would be worth having a bi-monthly user group, one week for entry level and one week for advanced. It doesn't necessarily have to mean twice the work if you can get someone to help you coordinate; a lot of the work will overlap. On the other hand you might just need to feel out the members of the group and see what their average skill level is and play to that.\n", "If there isn't beer, it's not a good enough user group to attend. The open source guys get this. Their user group meetings are more fun, and more dynamic because of this. Just make it BYOB and it'll naturally get better in my experience.\n" ]
[]
[]
[ ".net" ]
stackoverflow_0000009977_.net.txt
Q: What is a good way to open large files across a WAN? I have an application deployed into multiple zones and there are some issues with opening larger documents (20-50MB) across the WAN. Currently the documents are stored in Zone 1 (Americas) and a link stored in the database to the docs. I have heard some things about blobs in oracle and store binary in MS SQL Server 2005 and then perhaps copying the database to other zones. Any other suggestions or good results with one of the described options? A: Your best option here may be caching the document in the requested zone the first time it is requested, and pinging the source document's last modified each time the cached document is requested in order to determine if it needs refreshed. In this case you're only requesting a small piece of information (a date) across the WAN most of the times the document is accessed. This works best for a subset of documents that are frequently requested. If you have a large set of documents, each infrequently requested by a disparate group, then you may want to look into replicating the documents in each of your zones each time the master is updated. This may best be accomplished by storing the document as binary data in your master database and having the slaves pull from the master. A: If you're running on Windows you could look at Distributed File Systems
What is a good way to open large files across a WAN?
I have an application deployed into multiple zones and there are some issues with opening larger documents (20-50MB) across the WAN. Currently the documents are stored in Zone 1 (Americas) and a link stored in the database to the docs. I have heard some things about blobs in oracle and store binary in MS SQL Server 2005 and then perhaps copying the database to other zones. Any other suggestions or good results with one of the described options?
[ "Your best option here may be caching the document in the requested zone the first time it is requested, and pinging the source document's last modified each time the cached document is requested in order to determine if it needs refreshed. In this case you're only requesting a small piece of information (a date) across the WAN most of the times the document is accessed. This works best for a subset of documents that are frequently requested. \nIf you have a large set of documents, each infrequently requested by a disparate group, then you may want to look into replicating the documents in each of your zones each time the master is updated. This may best be accomplished by storing the document as binary data in your master database and having the slaves pull from the master.\n", "If you're running on Windows you could look at Distributed File Systems\n" ]
[ 2, 1 ]
[]
[]
[ "database", "oracle", "sql_server" ]
stackoverflow_0000009932_database_oracle_sql_server.txt
Q: .NET: How do I find the Desktop path when Folder Redirection is on? I have been using Environment.GetFolderPath(Environment.SpecialFolder.Desktop) to get the path to the user's desktop for ages now, but since we changed our setup here at work to use Folder Redirection to map our users' Desktop and My Documents folders to the server, it no longer works. It still points to the Desktop folder in C:\Documents and Settings, which is not where my desktop lives. Any ideas on how to fix this? Burns A: You need to use the DesktopDirectory special folder instead: Environment.GetFolderPath(Environment.SpecialFolder.DesktopDirectory) should give you the redirected directory.
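A quick sketch of the one-line change (the printed path below is purely illustrative):
// DesktopDirectory follows Folder Redirection; Desktop may not.
string desktop = Environment.GetFolderPath(Environment.SpecialFolder.DesktopDirectory);
Console.WriteLine(desktop); // e.g. \\server\redirected$\burns\Desktop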
.NET: How do I find the Desktop path when Folder Redirection is on?
I have been using Environment.GetFolderPath(Environment.SpecialFolder.Desktop) to get the path to the user's desktop for ages now, but since we changed our setup here at work to use Folder Redirection to map our users' Desktop and My Documents folders to the server, it no longer works. It still points to the Desktop folder in C:\Documents and Settings, which is not where my desktop lives. Any ideas on how to fix this? Burns
[ "You need to use the DesktopDirectory special folder instead:\nEnvironment.GetFolderPath(Environment.SpecialFolder.DesktopDirectory)\nshould give you the redirected directory.\n" ]
[ 9 ]
[]
[]
[ ".net" ]
stackoverflow_0000010043_.net.txt
Q: Use for the phppgadmin Reports Database? phpPgAdmin comes with instructions for creating a reports database on the system for use with phpPgAdmin. The instructions describe how to set it up, but do not really give any indication of what its purpose is, and the phpPgAdmin site was not very helpful either. It seems to allow you to store SQL queries, so is it for storing admin queries accessing tables like pg_class, etc.? A: This is just a standard location to store frequently used SQL scripts. The reports-pgsql.sql script creates a table for storing these queries, the database they are intended to be run on, a title and some descriptive text about what they do. PhpPgAdmin has functionality to browse and execute these reports. It's a pretty simple system just meant to aid in organization.
Use for the phppgadmin Reports Database?
phpPgAdmin comes with instructions for creating a reports database on the system for use with phpPgAdmin. The instructions describe how to set it up, but do not really give any indication of what its purpose is, and the phpPgAdmin site was not very helpful either. It seems to allow you to store SQL queries, so is it for storing admin queries accessing tables like pg_class, etc.?
[ "This is just a standard location to store frequently used SQL scripts. The reports-pgsql.sql script creates a table for storing these queries, the database they are intended to be run on, a title and some descriptive text about what they do. PhpPgAdmin has functionality to browse and execute these reports. It's a pretty simple system just meant to aid in organization.\n" ]
[ 6 ]
[]
[]
[ "database", "php", "phppgadmin", "postgresql" ]
stackoverflow_0000008894_database_php_phppgadmin_postgresql.txt
Q: Where is TFS work item help text displayed? I'm creating some custom work items in TFS and the helptext field seems handy, but I don't see where it is being displayed in Team Explorer or Team System Web Access. Where is this information displayed? A: It shows as a tooltip when you hover over the field. For instance, create a new bug and hover over "rank" and you should see "Stack rank used to prioritize work".
Where is TFS work item help text displayed?
I'm creating some custom work items in TFS and the helptext field seems handy but I don't see where it is being displayed in Team Explorer or Team System Web Access. Where is this information displayed?
[ "When you hover over the type of the field. For instance create a new bug and hover over the \"rank\" and you should see \"Stack rank used to prioritize work\"\n" ]
[ 2 ]
[]
[]
[ "tfs", "visual_studio" ]
stackoverflow_0000010088_tfs_visual_studio.txt
Q: C#.Net case-insensitive string Why does C#.Net allow the declaration of the string object to be case-insensitive? String sHello = "Hello"; string sHello = "Hello"; Both the lower-case and upper-case S of the word String are acceptable and this seems to be the only object that allows this. Can anyone explain why? A: string is a language keyword while System.String is the type it aliases. Both compile to exactly the same thing, similarly: int is System.Int32 long is System.Int64 float is System.Single double is System.Double char is System.Char byte is System.Byte short is System.Int16 ushort is System.UInt16 uint is System.UInt32 ulong is System.UInt64 I think in most cases this is about code legibility - all the basic system value types have aliases, I think the lower-case string might just be for consistency. A: Further to the other answers, it's good practice to use keywords if they exist. E.g. you should use string rather than System.String. A: "String" is the name of the class. "string" is a keyword that maps to this class. It's the same as Int32 => int Decimal => decimal Int64 => long ... and so on... A: "string" is a C# keyword. It's just an alias for "System.String" - one of the .NET BCL classes. A: "string" is just a C# alias for the class "String" in the System namespace. A: string is an alias for System.String. They are the same thing. By convention, though, objects of type (System.String) are generally referred to using the alias - e.g. string myString = "Hello"; whereas operations on the class use the uppercase version e.g. String.IsNullOrEmpty(myStringVariable); A: I use String and not string, Int32 instead of int, so that my syntax highlighting picks up on a string as a Type and not a keyword. I want keywords to jump out at me.
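A quick, self-contained way to see that the two names are the same runtime type:
using System;
class AliasDemo
{
    static void Main()
    {
        Console.WriteLine(typeof(string) == typeof(String)); // True - same type
        Console.WriteLine("Hello".GetType().FullName);       // System.String
    }
}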
C#.Net case-insensitive string
Why does C#.Net allow the declaration of the string object to be case-insensitive? String sHello = "Hello"; string sHello = "Hello"; Both the lower-case and upper-case S of the word String are acceptable and this seems to be the only object that allows this. Can anyone explain why?
[ "string is a language keyword while System.String is the type it aliases.\nBoth compile to exactly the same thing, similarly:\n\nint is System.Int32\nlong is System.Int64\nfloat is System.Single\ndouble is System.Double\nchar is System.Char\nbyte is System.Byte\nshort is System.Int16\nushort is System.UInt16\nuint is System.UInt32\nulong is System.UInt64\n\nI think in most cases this is about code legibility - all the basic system value types have aliases, I think the lower case string might just be for consistency.\n", "Further to the other answers, it's good practice to use keywords if they exist. \nE.g. you should use string rather than System.String.\n", "\"String\" is the name of the class. \"string\" is keyword that maps this class.\nit's the same like\n\nInt32 => int\nDecimal => decimal\nInt64 => long\n\n... and so on...\n", "\"string\" is a C# keyword. it's just an alias for \"System.String\" - one of the .NET BCL classes. \n", "\"string\" is just an C# alias for the class \"String\" in the System-namespace.\n", "string is an alias for System.String. They are the same thing.\nBy convention, though, objects of type (System.String) are generally refered to as the alias - e.g.\nstring myString = \"Hello\";\n\nwhereas operations on the class use the uppercase version\ne.g.\nString.IsNullOrEmpty(myStringVariable);\n\n", "I use String and not string,\nInt32 instead of int, \nso that my syntax highlighting picks up on a string as a Type and not a keyword. I want keywords to jump out at me.\n" ]
[ 21, 6, 2, 1, 1, 0, 0 ]
[]
[]
[ ".net", "c#" ]
stackoverflow_0000009734_.net_c#.txt
Q: Interfaces on different logic layers Say you have an application divided into 3 tiers: GUI, business logic, and data access. In your business logic layer you have described your business objects: getters, setters, accessors, and so on... you get the idea. The interface to the business logic layer guarantees safe usage of the business logic, so all the methods and accessors you call will validate input. This is great when you first write the UI code, because you have a neatly defined interface that you can trust. But here comes the tricky part: when you start writing the data access layer, the interface to the business logic does not accommodate your needs. You need to have more accessors and getters to set fields which are/used to be hidden. Now you are forced to erode the interface of your business logic; now it is possible to set fields from the UI layer, which the UI layer has no business setting. Because of the changes needed for the data access layer, the interface to the business logic has eroded to the point where it is possible to even set the business logic with invalid data. Thus, the interface does not guarantee safe usage anymore. I hope I explained the problem clearly enough. How do you prevent interface erosion, maintain information hiding and encapsulation, and yet still accommodate different interface needs among different layers? A: If I understand the question correctly, you've created a domain model and you would like to write an object-relational mapper to map between records in your database and your domain objects. However, you're concerned about polluting your domain model with the 'plumbing' code that would be necessary to read and write to your object's fields. Taking a step back, you essentially have two choices of where to put your data mapping code - within the domain class itself or in an external mapping class. The first option is often called the Active Record pattern and has the advantage that each object knows how to persist itself and has sufficient access to its internal structure to allow it to perform the mapping without needing to expose non-business-related fields. E.g. public class User { private string name; private AccountStatus status; private User() { } public string Name { get { return name; } set { name = value; } } public AccountStatus Status { get { return status; } } public void Activate() { status = AccountStatus.Active; } public void Suspend() { status = AccountStatus.Suspended; } public static User GetById(int id) { User fetchedUser = new User(); // Lots of database and error-checking code // omitted for clarity // ... fetchedUser.name = (string) reader["Name"]; fetchedUser.status = (int)reader["statusCode"] == 0 ? AccountStatus.Suspended : AccountStatus.Active; return fetchedUser; } public static void Save(User user) { // Code to save User's internal structure to database // ... } } In this example, we have an object that represents a User with a Name and an AccountStatus. We don't want to allow the Status to be set directly, perhaps because we want to check that the change is a valid status transition, so we don't have a setter. Fortunately, the mapping code in the GetById and Save static methods has full access to the object's name and status fields. The second option is to have a second class that is responsible for the mapping. This has the advantage of separating out the different concerns of business logic and persistence which can allow your design to be more testable and flexible.
The challenge with this method is how to expose the name and status fields to the external class. Some options are: 1. Use reflection (which has no qualms about digging deep into your object's private parts) 2. Provide specially-named, public setters (e.g. prefix them with the word 'Private') and hope no one uses them accidentally 3. If your language supports it, make the setters internal but grant your data mapper module access. E.g. use the InternalsVisibleToAttribute in .NET 2.0 onwards or friend functions in C++ For more information, I'd recommend Martin Fowler's classic book 'Patterns of Enterprise Architecture' However, as a word of warning, before going down the path of writing your own mappers I'd strongly recommend looking at using a 3rd-party object-relational mapper (ORM) tool such as nHibernate or Microsoft's Entity Framework. I've worked on four different projects where, for various reasons, we wrote our own mapper and it is very easy to waste a lot of time maintaining and extending the mapper instead of writing code that provides end-user value.
I've used nHibernate on one project so far and, although it has quite a steep learning curve initially, the investment you put in early on pays off considerably. A: This is a classic problem - separating your domain model from your database model. There are several ways to attack it; it really depends on the size of your project in my opinion. You could use the repository pattern as others have said. If you are using .NET or Java you could use NHibernate or Hibernate. What I do is use Test Driven Development, so I write my UI and Model layers first and the Data layer is mocked, so the UI and model are built around domain-specific objects; then later I map these objects to whatever technology I'm using in the Data Layer. It's a very bad idea to let the database determine the design of your app: write the app first and think about the data later. PS: the title of the question is a little misleading A: @Ice^^Heat: What do you mean by saying that the data tier should not be aware of the business logic tier? How would you fill a business object with data? The UI asks the ServiceClass in the business tier for a service, namely getting a list of objects filtered by an object with the needed parameter data. Then the ServiceClass creates an instance of one of the repository classes in the data tier, and calls the GetList(ParameterType filters). Then the data tier accesses the database, pulls up the data, and maps it to the common format defined in the "domain" assembly. The BL has no more work to do with this data, so it outputs it to the UI. Then the UI wants to edit Item X. It sends the item (or business object) to the service in the Business Tier. The business tier validates the object, and if it is OK, it sends it to the data tier for storage. The UI knows the service in the business tier which again knows about the data tier. The UI is responsible for mapping the user's data input to and from the objects, and the data tier is responsible for mapping the data in the db to and from the objects. The Business tier stays purely business. :) A: I always create a separate assembly that contains: A lot of small Interfaces (think ICreateRepository, IReadRepository, IReadListRepository.. the list goes on, and most of them rely heavily on generics) A lot of concrete Interfaces, like an IPersonRepository, that inherits from IReadRepository, you get the point.. Anything you cannot describe with just the smaller interfaces, you put into the concrete interface.
As long as you use the IPersonRepository to declare your object, you get a clean, consistent interface to work with. But the kicker is, you can also make a class that takes e.g. an ICreateRepository in its constructor, so the code will end up being very easy to do some really funky stuff with. There are also interfaces for the Services in the business tier here. Finally, I stick all the domain objects into the extra assembly, just to make the code base itself a bit cleaner and more loosely coupled. These objects don't have any logic; they are just a common way to describe the data for all 3+ layers. Btw., why would you define methods in the business logic tier to accommodate the data tier? The data tier should have no reason to even know there is a business tier. A: It could be a solution, as it would not erode the interface. I guess you could have a class like this: public class BusinessObjectRecord : BusinessObject { } A: What do you mean by saying that the data tier should not be aware of the business logic tier? How would you fill a business object with data? I often do this: namespace Data { public class BusinessObjectDataManager { public void SaveObject(BusinessObject obj) { /* exec stored procedure */ } } } A: So the problem is that the business layer needs to expose more functionality to the data layer, and adding this functionality means exposing too much to the UI layer? If I'm understanding your problem correctly, it sounds like you're trying to satisfy too much with a single interface, and that's just causing it to become cluttered. Why not have two interfaces into the business layer? One would be a simple, safe interface for the UI layer. The other would be a lower-level interface for the data layer. You can apply this two-interface approach to any objects which need to be passed to both the UI and the data layers, too. public class BusinessLayer : ISimpleBusiness {} public class Some3LayerObject : ISimpleSome3LayerObject {} A: You may want to split your interfaces into two types, namely: View interfaces -- which are interfaces that specify your interactions with your UI, and Data interfaces -- which are interfaces that will allow you to specify interactions with your data It is possible to inherit and implement both sets of interfaces such that: public class BusinessObject : IView, IData This way, in your data layer you only need to see the interface implementation of IData, while in your UI you only need to see the interface implementation of IView. Another strategy you might want to use is to compose your objects in the UI or Data layers such that they are merely consumed by these layers, e.g., public class BusinessObject : DomainObject public class ViewManager<T> where T : DomainObject public class DataManager<T> where T : DomainObject This in turn allows your business object to remain ignorant of both the UI/View layer and the data layer. A: I'm going to continue my habit of going against the grain and say that you should question why you are building all these horribly complex object layers. I think many developers think of the database as a simple persistence layer for their objects, and are only concerned with the CRUD operations that those objects need. Too much effort is being put into the "impedance mismatch" between object and relational models. Here's an idea: stop trying. Write stored procedures to encapsulate your data. Use result sets, DataSet, DataTable, SqlCommand (or the java/php/whatever equivalent) as needed from code to interact with the database.
You don't need those objects. An excellent example is embedding a SqlDataSource into a .ASPX page. You shouldn't try to hide your data from anyone. Developers need to understand exactly how and when they are interacting with the physical data store. Object-relational mappers are the devil. Stop using them. Building enterprise applications is often an exercise in managing complexity. You have to keep things as simple as possible, or you will have an absolutely unmaintainable system. If you are willing to allow some coupling (which is inherent in any application anyway), you can do away with both your business logic layer and your data access layer (replacing them with stored procedures), and you won't need any of those interfaces.
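A minimal sketch of option 3 from the first answer (the mapper assembly name here is hypothetical, and strongly-named assemblies would also need a public key in the attribute):
using System.Runtime.CompilerServices;

// In the domain assembly: grant only the mapper assembly access to internals.
[assembly: InternalsVisibleTo("MyApp.DataAccess")]

public class User
{
    private string name;
    public string Name
    {
        get { return name; }
        internal set { name = value; } // writable by the mapper assembly, read-only elsewhere
    }
}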
Interfaces on different logic layers
Say you have an application divided into 3 tiers: GUI, business logic, and data access. In your business logic layer you have described your business objects: getters, setters, accessors, and so on... you get the idea. The interface to the business logic layer guarantees safe usage of the business logic, so all the methods and accessors you call will validate input. This is great when you first write the UI code, because you have a neatly defined interface that you can trust. But here comes the tricky part: when you start writing the data access layer, the interface to the business logic does not accommodate your needs. You need to have more accessors and getters to set fields which are/used to be hidden. Now you are forced to erode the interface of your business logic; now it is possible to set fields from the UI layer, which the UI layer has no business setting. Because of the changes needed for the data access layer, the interface to the business logic has eroded to the point where it is possible to even set the business logic with invalid data. Thus, the interface does not guarantee safe usage anymore. I hope I explained the problem clearly enough. How do you prevent interface erosion, maintain information hiding and encapsulation, and yet still accommodate different interface needs among different layers?
[ "If I understand the question correctly, you've created a domain model and you would like to write an object-relational mapper to map between records in your database and your domain objects. However, you're concerned about polluting your domain model with the 'plumbing' code that would be necessary to read and write to your object's fields.\nTaking a step back, you essentially have two choices of where to put your data mapping code - within the domain class itself or in an external mapping class.\nThe first option is often called the Active Record pattern and has the advantage that each object knows how to persist itself and has sufficient access to its internal structure to allow it to perform the mapping without needing to expose non-business related fields.\nE.g\npublic class User\n{\n private string name;\n private AccountStatus status;\n\n private User()\n {\n }\n\n public string Name\n {\n get { return name; }\n set { name = value; }\n }\n\n public AccountStatus Status\n {\n get { return status; }\n }\n\n public void Activate()\n {\n status = AccountStatus.Active;\n }\n\n public void Suspend()\n {\n status = AccountStatus.Suspended;\n }\n\n public static User GetById(int id)\n {\n User fetchedUser = new User();\n\n // Lots of database and error-checking code\n // omitted for clarity\n // ...\n\n fetchedUser.name = (string) reader[\"Name\"];\n fetchedUser.status = (int)reader[\"statusCode\"] == 0 ? AccountStatus.Suspended : AccountStatus.Active;\n\n return fetchedUser;\n }\n\n public static void Save(User user)\n {\n // Code to save User's internal structure to database\n // ...\n }\n}\n\nIn this example, we have an object that represents a User with a Name and an AccountStatus. We don't want to allow the Status to be set directly, perhaps because we want to check that the change is a valid status transition, so we don't have a setter. Fortunately, the mapping code in the GetById and Save static methods have full access to the object's name and status fields.\nThe second option is to have a second class that is responsible for the mapping. This has the advantage of seperating out the different concerns of business logic and persistence which can allow your design to be more testable and flexible. The challenge with this method is how to expose the name and status fields to the external class. Some options are:\n 1. Use reflection (which has no qualms about digging deep into your object's private parts)\n 2. Provide specially-named, public setters (e.g. prefix them with the word 'Private') and hope no one uses them accidentally\n 3. If your language suports it, make the setters internal but grant your data mapper module access. E.g. use the InternalsVisibleToAttribute in .NET 2.0 onwards or friend functions in C++\nFor more information, I'd recommend Martin Fowler's classic book 'Patterns of Enterprise Architecture'\nHowever, as a word of warning, before going down the path of writing your own mappers I'd strongly recommend looking at using a 3rd-party object relational mapper (ORM) tool such as nHibernate or Microsoft's Entity Framework. I've worked on four different projects where, for various reasons, we wrote our own mapper and it is very easy to waste a lot of time maintaining and extending the mapper instead of writing code that provides end user value. 
I've used nHibernate on one project so far and, although it has quite a steep learning curve initially, the investment you put in early on pays off considerably.\n", "This is a classic problem - separating your domain model from your database model. There are several ways to attack it; it really depends on the size of your project in my opinion. You could use the repository pattern as others have said. If you are using .NET or Java you could use NHibernate or Hibernate. \nWhat I do is use Test Driven Development, so I write my UI and Model layers first and the Data layer is mocked, so the UI and model are built around domain-specific objects; then later I map these objects to whatever technology I'm using in the Data Layer. It's a very bad idea to let the database determine the design of your app: write the app first and think about the data later.\nPS: the title of the question is a little misleading\n", "@Ice^^Heat:\n\nWhat do you mean by saying that the data tier should not be aware of the business logic tier? How would you fill a business object with data?\n\nThe UI asks the ServiceClass in the business tier for a service, namely getting a list of objects filtered by an object with the needed parameter data.\nThen the ServiceClass creates an instance of one of the repository classes in the data tier, and calls the GetList(ParameterType filters).\nThen the data tier accesses the database, pulls up the data, and maps it to the common format defined in the \"domain\" assembly.\nThe BL has no more work to do with this data, so it outputs it to the UI.\nThen the UI wants to edit Item X. It sends the item (or business object) to the service in the Business Tier. The business tier validates the object, and if it is OK, it sends it to the data tier for storage.\nThe UI knows the service in the business tier which again knows about the data tier.\nThe UI is responsible for mapping the user's data input to and from the objects, and the data tier is responsible for mapping the data in the db to and from the objects. The Business tier stays purely business. :)\n", "I always create a separate assembly that contains: \n\nA lot of small Interfaces (think ICreateRepository, IReadRepository, IReadListRepository.. the list goes on, and most of them rely heavily on generics) \nA lot of concrete Interfaces, like an IPersonRepository, that inherits from IReadRepository, you get the point..\nAnything you cannot describe with just the smaller interfaces, you put into the concrete interface.\nAs long as you use the IPersonRepository to declare your object, you get a clean, consistent interface to work with. But the kicker is, you can also make a class that takes e.g. an ICreateRepository in its constructor, so the code will end up being very easy to do some really funky stuff with. There are also interfaces for the Services in the business tier here.\nFinally, I stick all the domain objects into the extra assembly, just to make the code base itself a bit cleaner and more loosely coupled. These objects don't have any logic; they are just a common way to describe the data for all 3+ layers.\n\nBtw., why would you define methods in the business logic tier to accommodate the data tier?\nThe data tier should have no reason to even know there is a business tier.\n", "It could be a solution, as it would not erode the interface. I guess you could have a class like this:\npublic class BusinessObjectRecord : BusinessObject\n{\n}\n\n", "What do you mean by saying that the data tier should not be aware of the business logic tier? 
How would you fill a business object with data?\nI often do this:\nnamespace Data\n{\n public class BusinessObjectDataManager\n {\n public void SaveObject(BusinessObject obj)\n {\n // Exec stored procedure\n }\n }\n}\n\n", "So the problem is that the business layer needs to expose more functionality to the data layer, and adding this functionality means exposing too much to the UI layer? If I'm understanding your problem correctly, it sounds like you're trying to satisfy too much with a single interface, and that's just causing it to become cluttered. Why not have two interfaces into the business layer? One would be a simple, safe interface for the UI layer. The other would be a lower-level interface for the data layer.\nYou can apply this two-interface approach to any objects which need to be passed to both the UI and the data layers, too.\npublic class BusinessLayer : ISimpleBusiness\n{}\n\npublic class Some3LayerObject : ISimpleSome3LayerObject\n{}\n\n", "You may want to split your interfaces into two types, namely:\n\nView interfaces -- which are interfaces that specify your interactions with your UI, and\nData interfaces -- which are interfaces that will allow you to specify interactions with your data\n\nIt is possible to inherit and implement both sets of interfaces such that:\npublic class BusinessObject : IView, IData\n\nThis way, in your data layer you only need to see the interface implementation of IData, while in your UI you only need to see the interface implementation of IView.\nAnother strategy you might want to use is to compose your objects in the UI or Data layers such that they are merely consumed by these layers, e.g.,\npublic class BusinessObject : DomainObject\n\npublic class ViewManager<T> where T : DomainObject\n\npublic class DataManager<T> where T : DomainObject\n\nThis in turn allows your business object to remain ignorant of both the UI/View layer and the data layer.\n", "I'm going to continue my habit of going against the grain and say that you should question why you are building all these horribly complex object layers.\nI think many developers think of the database as a simple persistence layer for their objects, and are only concerned with the CRUD operations that those objects need. Too much effort is being put into the \"impedance mismatch\" between object and relational models. Here's an idea: stop trying.\nWrite stored procedures to encapsulate your data. Use result sets, DataSet, DataTable, SqlCommand (or the java/php/whatever equivalent) as needed from code to interact with the database. You don't need those objects. An excellent example is embedding a SqlDataSource into a .ASPX page.\nYou shouldn't try to hide your data from anyone. Developers need to understand exactly how and when they are interacting with the physical data store.\nObject-relational mappers are the devil. Stop using them.\nBuilding enterprise applications is often an exercise in managing complexity. You have to keep things as simple as possible, or you will have an absolutely unmaintainable system. If you are willing to allow some coupling (which is inherent in any application anyway), you can do away with both your business logic layer and your data access layer (replacing them with stored procedures), and you won't need any of those interfaces.\n" ]
[ 7, 5, 1, 0, 0, 0, 0, 0, 0 ]
[]
[]
[ "architecture" ]
stackoverflow_0000009240_architecture.txt
Q: Multithreading Design Best Practice Consider this problem: I have a program which should fetch (let's say) 100 records from a database, and then for each one it should get updated information from a web service. There are two ways to introduce parallelism in this scenario: I start each request to the web service on a new Thread. The number of simultaneous threads is controlled by some external parameter (or dynamically adjusted somehow). I create smaller batches (let's say of 10 records each) and launch each batch on a separate thread (so taking our example, 10 threads). Which is a better approach, and why do you think so? A: Option 3 is the best: Use Async IO. Unless your request processing is complex and heavy, your program is going to spend 99% of its time waiting for the HTTP requests. This is exactly what Async IO is designed for - let the Windows networking stack (or .NET framework or whatever) worry about all the waiting, and just use a single thread to dispatch and 'pick up' the results. Unfortunately the .NET framework makes it a right pain in the ass. It's easier if you're just using raw sockets or the Win32 API. Here's a (tested!) example using C#3 anyway: using System.Net; using System.Diagnostics; // need these somewhere // need to declare a class so we can cast our state object back out class RequestState { public WebRequest Request { get; set; } } static void Main( string[] args ) { // stupid cast necessary to create the request HttpWebRequest request = WebRequest.Create( "http://www.stackoverflow.com" ) as HttpWebRequest; request.BeginGetResponse( /* callback to be invoked when finished */ (asyncResult) => { // fetch the request object out of the AsyncState var state = (RequestState)asyncResult.AsyncState; var webResponse = state.Request.EndGetResponse( asyncResult ) as HttpWebResponse; // there we go; Debug.Assert( webResponse.StatusCode == HttpStatusCode.OK ); Console.WriteLine( "Got Response from server:" + webResponse.Server ); }, /* pass the request through to our callback */ new RequestState { Request = request } ); // blah Console.WriteLine( "Waiting for response. Press a key to quit" ); Console.ReadKey(); } EDIT: In the case of .NET, the 'completion callback' actually gets fired in a ThreadPool thread, not in your main thread, so you will still need to lock any shared resources, but it still saves you all the trouble of managing threads. A: Two things to consider. 1. How long will it take to process a record? If record processing is very quick, the overhead of handing off records to threads can become a bottleneck. In this case, you would want to bundle records so that you don't have to hand them off so often. If record processing is reasonably long-running, the difference will be negligible, so the simpler approach (1 record per thread) is probably the best. 2. How many threads are you planning on starting? If you aren't using a threadpool, I think you either need to manually limit the number of threads, or you need to break the data into big chunks. Starting a new thread for every record will leave your system thrashing if the number of records gets large. A: The computer running the program is probably not the bottleneck, so: Remember that the HTTP protocol has a keep-alive header that lets you send several GET requests on the same socket, which saves you from the TCP/IP handshake. Unfortunately I don't know how to use that in the .net libraries. (Should be possible.) There will probably also be a delay in answering your requests.
You could try making sure that you always have a given number of outstanding requests to the server. A: Get the Parallel Fx. Look at the BlockingCollection. Use a thread to feed it batches of records, and 1 to n threads pulling records off the collection to service. You can control the rate at which the collection is fed, and the number of threads that call to web services. Make it configurable via a ConfigSection, and make it generic by feeding the collection Action delegates, and you'll have a nice little batcher you can reuse to your heart's content.
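For what it's worth, the keep-alive point is exposed directly on HttpWebRequest (it defaults to true), and concurrent connections per host can be capped via ServicePointManager; a small sketch with a placeholder URL:
using System.Net;

ServicePointManager.DefaultConnectionLimit = 10; // cap simultaneous sockets per host

HttpWebRequest request = (HttpWebRequest)WebRequest.Create("http://example.com/service");
request.KeepAlive = true; // reuse the TCP connection across requests (the default)
using (WebResponse response = request.GetResponse())
{
    // read and process the response here
}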
Multithreading Design Best Practice
Consider this problem: I have a program which should fetch (let's say) 100 records from a database, and then for each one it should get updated information from a web service. There are two ways to introduce parallelism in this scenario: I start each request to the web service on a new Thread. The number of simultaneous threads is controlled by some external parameter (or dynamically adjusted somehow). I create smaller batches (let's say of 10 records each) and launch each batch on a separate thread (so taking our example, 10 threads). Which is a better approach, and why do you think so?
[ "Option 3 is the best:\nUse Async IO.\nUnless your request processing is complex and heavy, your program is going to spend 99% of it's time waiting for the HTTP requests.\nThis is exactly what Async IO is designed for - Let the windows networking stack (or .net framework or whatever) worry about all the waiting, and just use a single thread to dispatch and 'pick up' the results.\nUnfortunately the .NET framework makes it a right pain in the ass. It's easier if you're just using raw sockets or the Win32 api. Here's a (tested!) example using C#3 anyway:\nusing System.Net; // need this somewhere\n\n// need to declare an class so we can cast our state object back out\nclass RequestState {\n public WebRequest Request { get; set; }\n}\n\nstatic void Main( string[] args ) {\n // stupid cast neccessary to create the request\n HttpWebRequest request = WebRequest.Create( \"http://www.stackoverflow.com\" ) as HttpWebRequest;\n\n request.BeginGetResponse(\n /* callback to be invoked when finished */\n (asyncResult) => { \n // fetch the request object out of the AsyncState\n var state = (RequestState)asyncResult.AsyncState; \n var webResponse = state.Request.EndGetResponse( asyncResult ) as HttpWebResponse;\n\n // there we go;\n Debug.Assert( webResponse.StatusCode == HttpStatusCode.OK ); \n\n Console.WriteLine( \"Got Response from server:\" + webResponse.Server );\n },\n /* pass the request through to our callback */\n new RequestState { Request = request } \n );\n\n // blah\n Console.WriteLine( \"Waiting for response. Press a key to quit\" );\n Console.ReadKey();\n}\n\nEDIT:\nIn the case of .NET, the 'completion callback' actually gets fired in a ThreadPool thread, not in your main thread, so you will still need to lock any shared resources, but it still saves you all the trouble of managing threads.\n", "Two things to consider.\n1. How long will it take to process a record?\nIf record processing is very quick, the overhead of handing off records to threads can become a bottleneck. In this case, you would want to bundle records so that you don't have to hand them off so often.\nIf record processing is reasonably long-running, the difference will be negligible, so the simpler approach (1 record per thread) is probably the best.\n2. How many threads are you planning on starting?\nIf you aren't using a threadpool, I think you either need to manually limit the number of threads, or you need to break the data into big chunks. Starting a new thread for every record will leave your system thrashing if the number of records get large.\n", "The computer running the program is probably not the bottleneck, so:\nRemember that the HTTP protocol has a keep-alive header, that lets you send several GET requests on the same sockets, which saves you from the TCP/IP hand shake. Unfortunately I don't know how to use that in the .net libraries. (Should be possible.)\nThere will probably also be a delay in answering your requests. You could try making sure that you allways have a given number of outstanding requests to the server.\n", "Get the Parallel Fx. Look at the BlockingCollection. Use a thread to feed it batches of records, and 1 to n threads pulling records off the collection to service. You can control the rate at which the collection is fed, and the number of threads that call to web services. Make it configurable via a ConfigSection, and make it generic by feeding the collection Action delegates, and you'll have a nice little batcher you can reuse to your heart's content.\n" ]
[ 6, 2, 0, 0 ]
[]
[]
[ ".net", "multithreading" ]
stackoverflow_0000010229_.net_multithreading.txt
Q: Regex Rejecting matches because of Instr What's the easiest way to do an "instring" type function with a regex? For example, how could I reject a whole string because of the presence of a single character such as :? For example: this - okay there:is - not okay because of : More practically, how can I match the following string: //foo/bar/baz[1]/ns:foo2/@attr/text() For any node test on the xpath that doesn't include a namespace? (/)?(/)([^:/]+) Will match the node tests but includes the namespace prefix which makes it faulty. A: I'm still not sure whether you just wanted to detect if the Xpath contains a namespace, or whether you want to remove the references to the namespace. So here's some sample code (in C#) that does both. class Program { static void Main(string[] args) { string withNamespace = @"//foo/ns2:bar/baz[1]/ns:foo2/@attr/text()"; string withoutNamespace = @"//foo/bar/baz[1]/foo2/@attr/text()"; ShowStuff(withNamespace); ShowStuff(withoutNamespace); } static void ShowStuff(string input) { Console.WriteLine("'{0}' does {1}contain namespaces", input, ContainsNamespace(input) ? "" : "not "); Console.WriteLine("'{0}' without namespaces is '{1}'", input, StripNamespaces(input)); } static bool ContainsNamespace(string input) { // a namespace must start with a character, but can have characters and numbers // from that point on. return Regex.IsMatch(input, @"/?\w[\w\d]+:\w[\w\d]+/?"); } static string StripNamespaces(string input) { return Regex.Replace(input, @"(/?)\w[\w\d]+:(\w[\w\d]+)(/?)", "$1$2$3"); } } Hope that helps! Good luck. A: Match on :? I think the question isn't clear enough, because the answer is so obvious: if (Regex.IsMatch(input, ":")) // reject A: I don't know regex syntax very well, but could you not do: [any alphanumeric]\*:[any alphanumeric]\* I think something like that should work, no? A: You might want \w which is a "word" character. From javadocs, it is defined as [a-zA-Z_0-9], so if you don't want underscores either, that may not work.... A: Yeah, my question was not very clear. Here's a solution but rather than a single pass with a regex, I use a split and perform iteration. It works as well but isn't as elegant: string xpath = "//foo/bar/baz[1]/ns:foo2/@attr/text()"; string[] nodetests = xpath.Split( new char[] { '/' } ); for (int i = 0; i < nodetests.Length; i++) { if (nodetests[i].Length > 0 && Regex.IsMatch( nodetests[i], @"^(\w|\[|\])+$" )) { // does not have a ":", we can manipulate it. } } xpath = String.Join( "/", nodetests );
Regex Rejecting matches because of Instr
What's the easiest way to do an "instring" type function with a regex? For example, how could I reject a whole string because of the presence of a single character such as :? For example: this - okay there:is - not okay because of : More practically, how can I match the following string: //foo/bar/baz[1]/ns:foo2/@attr/text() For any node test on the xpath that doesn't include a namespace? (/)?(/)([^:/]+) Will match the node tests but includes the namespace prefix which makes it faulty.
[ "I'm still not sure whether you just wanted to detect if the Xpath contains a namespace, or whether you want to remove the references to the namespace. So here's some sample code (in C#) that does both.\nclass Program\n{\n static void Main(string[] args)\n {\n string withNamespace = @\"//foo/ns2:bar/baz[1]/ns:foo2/@attr/text()\";\n string withoutNamespace = @\"//foo/bar/baz[1]/foo2/@attr/text()\";\n\n ShowStuff(withNamespace);\n ShowStuff(withoutNamespace);\n }\n\n static void ShowStuff(string input)\n {\n Console.WriteLine(\"'{0}' does {1}contain namespaces\", input, ContainsNamespace(input) ? \"\" : \"not \");\n Console.WriteLine(\"'{0}' without namespaces is '{1}'\", input, StripNamespaces(input));\n }\n\n static bool ContainsNamespace(string input)\n {\n // a namspace must start with a character, but can have characters and numbers\n // from that point on.\n return Regex.IsMatch(input, @\"/?\\w[\\w\\d]+:\\w[\\w\\d]+/?\");\n }\n\n static string StripNamespaces(string input)\n {\n return Regex.Replace(input, @\"(/?)\\w[\\w\\d]+:(\\w[\\w\\d]+)(/?)\", \"$1$2$3\");\n }\n}\n\nHope that helps! Good luck.\n", "Match on :? I think the question isn't clear enough, because the answer is so obvious:\nif(Regex.Match(\":\", input)) // reject\n\n", "I dont know regex syntax very well but could you not do:\n[any alpha numeric]\\*:[any alphanumeric]\\*\nI think something like that should work no?\n", "You might want \\w which is a \"word\" character. From javadocs, it is defined as [a-zA-Z_0-9], so if you don't want underscores either, that may not work....\n", "Yeah, my question was not very clear. Here's a solution but rather than a single pass with a regex, I use a split and perform iteration. It works as well but isn't as elegant: \nstring xpath = \"//foo/bar/baz[1]/ns:foo2/@attr/text()\";\nstring[] nodetests = xpath.Split( new char[] { '/' } );\nfor (int i = 0; i < nodetests.Length; i++) \n{\n if (nodetests[i].Length > 0 && Regex.IsMatch( nodetests[i], @\"^(\\w|\\[|\\])+$\" ))\n {\n // does not have a \":\", we can manipulate it.\n }\n}\n\nxpath = String.Join( \"/\", nodetests );\n\n" ]
[ 2, 1, 0, 0, 0 ]
[]
[]
[ "regex", "xpath" ]
stackoverflow_0000010158_regex_xpath.txt
Q: Using C# with OpenOffice through reflection I'm working on some code to paste into the currently active OpenOffice document directly from C#. I can't include any of the OpenOffice libraries, because we don't want to package them, so we're using reflection to get access to the OpenOffice API. My question involves using a dispatcher through reflection. I can't figure out the correct parameters to pass to it, giving me a lovely "TargetInvocationException" due to mismatched types. object objframe = GetProperty<object>(objcontroller, "frame"); if (objframe != null) { object[] paramlist = new object[2] {".uno:Paste", objframe}; InvokeMethod<object>(objdispatcher, "executeDispatch", paramlist); } How can I fix it? A: Is it just me or are your parameters the wrong way around? Also, do you have the right number of parameters? I could be missing something though, so sorry if you've already checked this stuff: The documentation says: dispatcher.executeDispatch(document, ".uno:Paste", "", 0, Array()) Which would indicate to me that you need to have your parameter list defined as object[] paramlist = new object[5] {objframe, ".uno:Paste", "", 0, null};
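Putting the question's own helper methods together with the parameter order suggested in the answer gives a sketch like this (GetProperty/InvokeMethod are the question's reflection helpers; the five-element list mirrors the documented executeDispatch argument order):
object objframe = GetProperty<object>(objcontroller, "frame");
if (objframe != null)
{
    // executeDispatch(frame, url, targetFrameName, searchFlags, arguments)
    object[] paramlist = new object[5] { objframe, ".uno:Paste", "", 0, null };
    InvokeMethod<object>(objdispatcher, "executeDispatch", paramlist);
}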
Using C# with OpenOffice through reflection
I'm working on some code to paste into the currently active OpenOffice document directly from C#. I can't include any of the OpenOffice libraries, because we don't want to package them, so we're using reflection to get access to the OpenOffice API. My question involves using a dispatcher through reflection. I can't figure out the correct parameters to pass to it, giving me a lovely "TargetInvocationException" due to mismatched types. object objframe = GetProperty<object>(objcontroller, "frame"); if (objframe != null) { object[] paramlist = new object[2] {".uno:Paste", objframe}; InvokeMethod<object>(objdispatcher, "executeDispatch", paramlist); } How can I fix it?
[ "Is it just me or are your parameters the wrong way around? Also, do you have the right number of parameters? I could be missing something though, so sorry if you've already checked this stuff:\nThe documentation says:\ndispatcher.executeDispatch(document, \".uno:Paste\", \"\", 0, Array())\n\nWhich would indicate to me that you need to have your parameter list defined as \nobject[] paramlist = new object[5] {objframe, \".uno:Paste\", \"\", 0, null};\n\n" ]
[ 1 ]
[]
[]
[ "c#", "reflection" ]
stackoverflow_0000010531_c#_reflection.txt
Q: Asynchronous Remoting calls We have a remoting singleton server running in a separate windows service (let's call her RemotingService). The clients of the RemotingService are ASP.NET instances (many many). Currently, the clients make remoting calls to RemotingService and block while the RemotingService call is serviced. However, the remoting service is getting complicated enough (with more RPC calls and complex algorithms) that the asp.net worker threads are blocked for a significantly long time (4-5 seconds). According to this msdn article, doing this will not scale well because an asp.net worker thread is blocked for each remoting RPC. It advises switching to async handlers to free up asp.net worker threads. The purpose of an asynchronous handler is to free up an ASP.NET thread pool thread to service additional requests while the handler is processing the original request. This seems fine, except the remoting call still takes up a thread from the thread pool. Is this the same thread pool as the asp.net worker threads? How should I go about turning my remoting singleton server into an async system such that I free up my asp.net worker threads? I've probably missed out some important information; please let me know if there is anything else you need to know to answer the question. A: The idea behind using the ThreadPool is that through it you can control the number of synchronous threads, and if those get too many, then the thread pool automatically manages the waiting of newer threads. The ASP.NET worker thread (AFAIK) doesn't come from the Thread Pool and shouldn't get affected by your call to the remoting service (unless this is a very slow processor, and your remoting function is very CPU intensive - in which case, everything on your computer will be affected). You could always host the remoting service on a different physical server. In that case, your asp.net worker thread will be totally independent of your remoting call (if the remoting call is called on a separate thread that is).
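One hedged sketch of getting the blocking call off the request thread with an async delegate (the delegate and method names are hypothetical; note that BeginInvoke itself borrows a ThreadPool thread, so this only pays off when combined with ASP.NET async handlers):
delegate string FetchDelegate(int recordId);

FetchDelegate fetch = remotingProxy.FetchRecord;     // hypothetical remote method
IAsyncResult ar = fetch.BeginInvoke(42, null, null); // dispatched without blocking here

// ... the handler can do other work or register for completion ...

string result = fetch.EndInvoke(ar); // rejoin when the RPC finishes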
Asynchronous Remoting calls
We have a remoting singleton server running in a separate windows service (let's call her RemotingService). The clients of the RemotingService are ASP.NET instances (many many). Currently, the clients make remoting calls to RemotingService and block while the RemotingService call is serviced. However, the remoting service is getting complicated enough (with more RPC calls and complex algorithms) that the asp.net worker threads are blocked for a significantly long time (4-5 seconds). According to this msdn article, doing this will not scale well because an asp.net worker thread is blocked for each remoting RPC. It advises switching to async handlers to free up asp.net worker threads. The purpose of an asynchronous handler is to free up an ASP.NET thread pool thread to service additional requests while the handler is processing the original request. This seems fine, except the remoting call still takes up a thread from the thread pool. Is this the same thread pool as the asp.net worker threads? How should I go about turning my remoting singleton server into an async system such that I free up my asp.net worker threads? I've probably missed out some important information; please let me know if there is anything else you need to know to answer the question.
[ "The idea behind using the ThreadPool is that through it you can control the amount of synchronous threads, and if those get too many, then the thread pool automatically manages the waiting of newer threads.\nThe Asp.Net worked thread (AFAIK) doesn't come from the Thread Pool and shouldn't get affected by your call to the remoting service (unless this is a very slow processor, and your remoting function is very CPU intensive - in which case, everything on your computer will be affected).\nYou could always host the remoting service on a different physical server. In that case, your asp.net worker thread will be totally independent of your remoting call (if the remoting call is called on a separate thread that is).\n" ]
[ 0 ]
[]
[]
[ ".net_2.0", ".net_3.5", "c#", "remoting", "rpc" ]
stackoverflow_0000010670_.net_2.0_.net_3.5_c#_remoting_rpc.txt
Q: Best way to abstract season/show/episode data Basically, I've written an API to www.thetvdb.com in Python. The current code can be found here. It grabs data from the API as requested, and has to store the data somehow, and make it available by doing: print tvdbinstance[1][23]['episodename'] # get the name of episode 23 of season 1 What is the "best" way to abstract this data within the Tvdb() class? I originally used an extended Dict() that automatically created sub-dicts (so you could do x[1][2][3][4] = "something" without having to do if x[1].has_key(2): x[1][2] = [] and so on) Then I just stored the data by doing self.data[show_id][season_number][episode_number][attribute_name] = "something" This worked okay, but there was no easy way of checking if x[3][24] was supposed to exist or not (so I couldn't raise the season_not_found exception). Currently it's using four classes: ShowContainer, Show, Season and Episode. Each one is a very basic dict, which I can easily add extra functionality in (the search() function on Show() for example). Each has a __setitem__, __getitem__ and has_key. This works mostly fine, I can check in Shows if it has that season in its self.data dict, if not, raise season_not_found. I can also check in Season() if it has that episode and so on. The problem now is it's presenting itself as a dict, but doesn't have all the functionality, and because I'm overriding the __getitem__ and __setitem__ functions, it's easy to accidentally recursively call __getitem__ (so I'm not sure if extending the Dict class will cause problems). The other slight problem is adding data into the dict is a lot more work than the old Dict method (which was self.data[seas_no][ep_no]['attribute'] = 'something'). See _setItem and _setData. It's not too bad, since it's currently only a read-only API interface (so the users of the API should only ever retrieve data, not add more), but it's hardly... Elegant. I think the series-of-classes system is probably the best way, but does anyone have a better idea for storing the data? And would extending the ShowContainer/etc classes with Dict cause problems? A: OK, what you need is classobj from new module. That would allow you to construct exception classes dynamically (classobj takes a string as an argument for the class name). import new myexc=new.classobj("ExcName",(Exception,),{}) i=myexc("This is the exc msg!") raise i this gives you: Traceback (most recent call last): File "<stdin>", line 1, in <module> __main__.ExcName: This is the exc msg! remember that you can always get the class name through: self.__class__.__name__ So, after some string mangling and concatenation, you should be able to obtain the appropriate exception class name and construct a class object using that name and then raise that exception. P.S. - you can also raise strings, but this is deprecated. raise(self.__class__.__name__+"Exception") A: Why not use SQLite? There is good support in Python and you can write SQL queries to get the data out. Here are the Python docs for sqlite3 If you don't want to use SQLite you could do an array of dicts.
episodes = [] episodes.append({'season':1, 'episode': 2, 'name':'Something'}) episodes.append({'season':1, 'episode': 3, 'name':'Something', 'actors':['Billy Bob', 'Sean Penn']}) That way you add metadata to any record and search it very easily season_1 = [e for e in episodes if e['season'] == 1] billy_bob = [e for e in episodes if 'actors' in e and 'Billy Bob' in e['actors']] for episode in billy_bob: print "Billy bob was in Season %s Episode %s" % (episode['season'], episode['episode']) A: I have done something similar in the past and used an in-memory XML document as a quick and dirty hierarchical database for storage. You can store each show/season/episode as an element (nested appropriately) and attributes of these things as XML attributes on the elements. Then you can use XQuery to get info back out. NOTE: I'm not a Python guy so I don't know what your XML support is like. NOTE 2: You'll want to profile this because it'll be bigger and slower than the solution you've already got. Likely enough if you are doing some high-volume processing then XML is probably not going to be your friend. A: I don't get this part here: This worked okay, but there was no easy way of checking if x[3][24] was supposed to exist or not (so I couldn't raise the season_not_found exception) There is a way to do it - called in: >>>x={} >>>x[1]={} >>>x[1][2]={} >>>x {1: {2: {}}} >>> 2 in x[1] True >>> 3 in x[1] False what seems to be the problem with that? A: Bartosz/To clarify "This worked okay, but there was no easy way of checking if x[3][24] was supposed to exist or not" x['some show'][3][24] would return season 3, episode 24 of "some show". If there was no season 3, I want the pseudo-dict to raise tvdb_seasonnotfound, if "some show" doesn't exist, then raise tvdb_shownotfound The current system of a series of classes, each with a __getitem__ - Show checks if self.seasons.has_key(requested_season_number), the Season class checks if self.episodes.has_key(requested_episode_number) and so on. It works, but there seems to be a lot of repeated code (each class is basically the same, but raises a different error)
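A minimal Python sketch of the season-aware lookup discussed here (the class and exception names follow the question's own naming; everything else is assumed):
class tvdb_seasonnotfound(Exception):
    pass

class Show(dict):
    def __getitem__(self, season_number):
        # Raise a domain-specific error instead of a bare KeyError
        if season_number not in self:
            raise tvdb_seasonnotfound("Season %s not found" % season_number)
        return dict.__getitem__(self, season_number)

show = Show()
show[1] = {24: {'episodename': 'Something'}}
print show[1][24]['episodename']  # prints: Something
show[3]                           # raises tvdb_seasonnotfound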
Best way to abstract season/show/episode data
Basically, I've written an API to www.thetvdb.com in Python. The current code can be found here. It grabs data from the API as requested, and has to store the data somehow, and make it available by doing: print tvdbinstance[1][23]['episodename'] # get the name of episode 23 of season 1 What is the "best" way to abstract this data within the Tvdb() class? I originally used an extended Dict() that automatically created sub-dicts (so you could do x[1][2][3][4] = "something" without having to do if x[1].has_key(2): x[1][2] = [] and so on) Then I just stored the data by doing self.data[show_id][season_number][episode_number][attribute_name] = "something" This worked okay, but there was no easy way of checking if x[3][24] was supposed to exist or not (so I couldn't raise the season_not_found exception). Currently it's using four classes: ShowContainer, Show, Season and Episode. Each one is a very basic dict, which I can easily add extra functionality in (the search() function on Show() for example). Each has a __setitem__, __getitem__ and has_key. This works mostly fine, I can check in Shows if it has that season in its self.data dict, if not, raise season_not_found. I can also check in Season() if it has that episode and so on. The problem now is it's presenting itself as a dict, but doesn't have all the functionality, and because I'm overriding the __getitem__ and __setitem__ functions, it's easy to accidentally recursively call __getitem__ (so I'm not sure if extending the Dict class will cause problems). The other slight problem is adding data into the dict is a lot more work than the old Dict method (which was self.data[seas_no][ep_no]['attribute'] = 'something'). See _setItem and _setData. It's not too bad, since it's currently only a read-only API interface (so the users of the API should only ever retrieve data, not add more), but it's hardly... Elegant. I think the series-of-classes system is probably the best way, but does anyone have a better idea for storing the data? And would extending the ShowContainer/etc classes with Dict cause problems?
[ "OK, what you need is classobj from new module. That would allow you to construct exception classes dynamically (classobj takes a string as an argument for the class name). \nimport new\nmyexc=new.classobj(\"ExcName\",(Exception,),{})\ni=myexc(\"This is the exc msg!\")\nraise i\n\nthis gives you:\nTraceback (most recent call last):\nFile \"<stdin>\", line 1, in <module>\n__main__.ExcName: This is the exc msg!\n\nremember that you can always get the class name through:\nself.__class__.__name__\n\nSo, after some string mangling and concatenation, you should be able to obtain appropriate exception class name and construct a class object using that name and then raise that exception.\nP.S. - you can also raise strings, but this is deprecated.\nraise(self.__class__.__name__+\"Exception\")\n\n", "Why not use SQLite? There is good support in Python and you can write SQL queries to get the data out. Here is the Python docs for sqlite3\n\nIf you don't want to use SQLite you could do an array of dicts.\nepisodes = []\nepisodes.append({'season':1, 'episode': 2, 'name':'Something'})\nepisodes.append({'season':1, 'episode': 2, 'name':'Something', 'actors':['Billy Bob', 'Sean Penn']})\n\nThat way you add metadata to any record and search it very easily\nseason_1 = [e for e in episodes if e['season'] == 1]\nbilly_bob = [e for e in episodes if 'actors' in e and 'Billy Bob' in e['actors']]\n\nfor episode in billy_bob:\n print \"Billy bob was in Season %s Episode %s\" % (episode['season'], episode['episode'])\n\n", "I have done something similar in the past and used an in-memory XML document as a quick and dirty hierarchical database for storage. You can store each show/season/episode as an element (nested appropriately) and attributes of these things as xml attributes on the elements. Then you can use XQuery to get info back out.\nNOTE: I'm not a Python guy so I don't know what your xml support is like.\nNOTE 2: You'll want to profile this because it'll be bigger and slower than the solution you've already got. Likely enough if you are doing some high-volume processing then XML is probably not going to be your friend.\n", "I don't get this part here:\n\nThis worked okay, but there was no easy way of checking if x[3][24] was supposed to exist or not (so I couldn't raise the season_not_found exception)\n\nThere is a way to do it - called in:\n>>>x={}\n>>>x[1]={}\n>>>x[1][2]={}\n>>>x\n{1: {2: {}}}\n>>> 2 in x[1]\nTrue\n>>> 3 in x[1]\nFalse\n\nwhat seems to be the problem with that?\n", "Bartosz/To clarify \"This worked okay, but there was no easy way of checking if x[3][24] was supposed to exist or not\"\nx['some show'][3][24] would return season 3, episode 24 of \"some show\". If there was no season 3, I want the pseudo-dict to raise tvdb_seasonnotfound, if \"some show\" doesn't exist, then raise tvdb_shownotfound\nThe current system of a series of classes, each with a __getitem__ - Show checks if self.seasons.has_key(requested_season_number), the Season class checks if self.episodes.has_key(requested_episode_number) and so on.\nIt works, but it there seems to be a lot of repeated code (each class is basically the same, but raises a different error)\n" ]
[ 7, 4, 0, 0, 0 ]
[]
[]
[ "data_structures", "python" ]
stackoverflow_0000005966_data_structures_python.txt
Q: Best format for displaying rendered time on a webpage I've started to add the time taken to render a page to the footer of our internal web applications. Currently it appears like this
Rendered in 0.062 seconds

Occasionally I get rendered times like this
Rendered in 0.000 seconds

Currently it's only meant to be a guide for users to judge whether a page is quick to load or not, allowing them to quickly inform us if a page is taking 17 seconds rather than the usual 0.5. My question is what format should the time be in? At which point should I switch to a statement such as
Rendered in less than a second

I like seeing the tenths of a second, but the second example above is of no use to anyone; in fact it just highlights the limits of the calculation I use to find the render time. I'd rather not let the users see that at all! Any answers welcome, including whether anything should be included on the page.
A: "Rendered instantly" sounds way better than "Rendered in less than a second".
A: Rather than relying on your users to look at the page footer and to let you know if the value exceeds some patience threshold, it might be a better idea to log the page render times in a log file on the server. Once you have all that raw data, you can look for particular pages that tend to take longer than normal to render.
With more detailed logging, you could also measure the elapsed times in database queries or whatever if your web app relies on external systems.
A: I'm not sure there's any value in telling users how long it took for the server to render the page. It could well be worth you logging that sort of information, but they don't care.
If it takes the server 0.001 of a second to draw the page but it takes 17 seconds for them to load it (due to network, javascript, page size, their rubbish PC, etc) their perception will be the latter.
Then again, adding the render time might help you fend off the enquiries about any perceived slowness with a "talk to your local network admin" response.
Given that you know the accuracy of your measurements, you could have the 0.000 text be "Rendered in less than a thousandth of a second".
A: I think I over-emphasized it was for the users.
I know by using trace in the web.config I can get accurate information on page render times along with times for accessing the database.
We have in the past had problems with applications running too slowly over the network, although it's now fixed. I'm adding the label to new applications so that users are aware it is something we are taking seriously and it's a very simple indicator for the developers.
Taking all that into account, I like "Rendered Instantly" and you write a lot of sense, so I'll accept both your answer and kokos'.
Thanks
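As a rough illustration of the cut-over idea discussed above, here is a minimal C# sketch; the helper name and the 10 ms / 1 s thresholds are my own assumptions, not something taken from the answers:
using System;

public static class RenderTimeFormatter
{
    // Turn a measured render duration into the footer text.
    public static string Format(TimeSpan elapsed)
    {
        if (elapsed.TotalSeconds < 0.01)
            return "Rendered instantly";             // below measurement noise
        if (elapsed.TotalSeconds < 1.0)
            return "Rendered in less than a second";
        return string.Format("Rendered in {0:0.000} seconds", elapsed.TotalSeconds);
    }
}

Pairing this with System.Diagnostics.Stopwatch for the measurement keeps the wording decision in one place, so the thresholds can be tuned later without touching the pages.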
Best format for displaying rendered time on a webpage
I've started to add the time taken to render a page to the footer of our internal web applications. Currently it appears like this
Rendered in 0.062 seconds

Occasionally I get rendered times like this
Rendered in 0.000 seconds

Currently it's only meant to be a guide for users to judge whether a page is quick to load or not, allowing them to quickly inform us if a page is taking 17 seconds rather than the usual 0.5. My question is what format should the time be in? At which point should I switch to a statement such as
Rendered in less than a second

I like seeing the tenths of a second, but the second example above is of no use to anyone; in fact it just highlights the limits of the calculation I use to find the render time. I'd rather not let the users see that at all! Any answers welcome, including whether anything should be included on the page.
[ "\"Rendered instantly\" sounds way better than \"Rendered in less than a second\".\n", "Rather than relying on your users to look at the page footer and to let you know if the value exceeds some patience threshold, it might be a better idea to log the page render times in a log file on the server. Once you have all that raw data, you can look for particular pages that tend to take longer than normal to render.\nWith more detailed logging, you could also measure the elapsed times in database queries or whatever if your web app relies on external systems.\n", "I'm not sure there's any value in telling users how long it took for the server to render the page. It could well be worth you logging that sort of information, but they don't care.\nIf it takes the server 0.001 of a second to draw the page but it takes 17 seconds for them to load it (due to network, javascript, page size, their rubbish PC, etc) their perception will be the latter.\nThen again adding the render time might help you fend off the enquiries about any percieved slowness with a \"talk to your local network admin\" response.\nGiven that you know the accuracy of your measurements you could have the 0.000 text be \"Rendered in less than a thousandth of a second\"\n", "I think I over-emphasized it was for the users.\nI know by using in trace in the web.config I can get accurate information on page render times along with times for accessing the database.\nWe have in the past had problems with applications running too slowly over the network although it's now fixed I'm adding the label to new applications so that users are aware it is something we are taking seriously and it's a very simple indicator for the developers.\nTaking all that into account I like \"Rendered Instantly\" and write a lot of sense so I'll accept both your answer and kokos'. \nThanks\n" ]
[ 2, 1, 1, 0 ]
[]
[]
[ "render" ]
stackoverflow_0000008624_render.txt
Q: Fast database access test from .NET What would be a very fast way to determine if your connection string lets you connect to a database? Normally a connection attempt keeps the user waiting a long time before notifying the attempt was futile anyway.
A: You haven't mentioned what database you are connecting to, however. In SQL Server 2005, from .NET, you can specify a connection timeout in your connection string like so:
server=<server>;database=<database>;uid=<user>;password=<password>;Connect Timeout=3

This will try to connect to the server and if it doesn't do so in three seconds, it will throw a timeout error.
A: Shorten the timeout on the connection string and execute something trivial.
The wait should be about the same as the timeout.
You would still need a second or two though.
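To make the timeout suggestion concrete, here is a minimal C# sketch; the class and method names are mine, and it assumes System.Data.SqlClient against SQL Server:
using System.Data.SqlClient;

public static class ConnectionTester
{
    // Returns true if the connection opens before the short timeout expires.
    public static bool CanConnect(string baseConnectionString)
    {
        // Override whatever timeout the original string carried with a short one.
        var builder = new SqlConnectionStringBuilder(baseConnectionString)
        {
            ConnectTimeout = 3 // seconds
        };

        try
        {
            using (var connection = new SqlConnection(builder.ConnectionString))
            {
                connection.Open();
                return true;
            }
        }
        catch (SqlException)
        {
            return false; // unreachable server, bad credentials, etc.
        }
    }
}

The user then waits at most about the three seconds configured above rather than the default timeout.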
Fast database access test from .NET
What would be a very fast way to determine if your connection string lets you connect to a database? Normally a connection attempt keeps the user waiting a long time before notifying the attempt was futile anyway.
[ "You haven't mentioned what database you are connecting to, however. In SQL Server 2005, from .NET, you can specify a connection timeout in your connection string like so:\nserver=<server>;database=<database>;uid=<user>;password=<password>;Connect Timeout=3\n\nThis will try to connect to the server and if it doesn't do so in three seconds, it will throw a timeout error.\n", "Shorten the timeout on the connection string and execute something trivial.\nThe wait should be about the same as the timeout.\nYou would still need a second or two though.\n" ]
[ 11, 2 ]
[]
[]
[ ".net", "connection", "connection_string", "database" ]
stackoverflow_0000010822_.net_connection_connection_string_database.txt
Q: Data Layer Best Practices I am in the middle of a "discussion" with a colleague about the best way to implement the data layer in a new application.
One viewpoint is that the data layer should be aware of business objects (our own classes that represent an entity), and be able to work with that object natively.
The opposing viewpoint is that the data layer should be object-agnostic, and purely handle simple data types (strings, bools, dates, etc.)
I can see that both approaches may be valid, but my own viewpoint is that I prefer the former. That way, if the data storage medium changes, the business layer doesn't (necessarily) have to change to accommodate the new data layer. It would therefore be a trivial thing to change from a SQL data store to a serialized xml filesystem store.
My colleague's point of view is that the data layer shouldn't have to know about object definitions, and that as long as the data is passed about appropriately, that is enough.
Now, I know that this is one of those questions that has the potential to start a religious war, but I'd appreciate any feedback from the community on how you approach such things.
TIA
A: It really depends on your view of the world - I used to be in the uncoupled camp. The DAL was only there to supply data to the BAL - end of story.
With emerging technologies such as Linq to SQL and Entity Framework becoming a bit more popular, the line between DAL and BAL has been blurred a bit. In L2S especially your DAL is quite tightly coupled to the Business objects as the object model has a 1-1 mapping to your database field.
Like anything in software development there is no right or wrong answer. You need to understand your requirements and future requirements and work from there. I would no more use a Ferrari on the Dakar rally than I would a Range Rover on a track day.
A: You can have both. Let the data layer not know of your business objects and make it capable of working with more than one type of data source. If you supply a common interface (or an abstract class) for interacting with data, you can have different implementations for each type of data source. Factory pattern goes well here.
A: An excellent book I have, which covers this topic, is Data Access Patterns, by Clifton Nock. It has got many good explanations and good ideas on how to decouple your business layer from the persistence layer. You really should give it a try. It's one of my favorite books.
A: One trick I've found handy is to have my data layer be "collection agnostic". That is, whenever I want to return a list of objects from my data layer, I get the caller to pass in the list. So instead of this:
public IList<Foo> GetFoosById(int id) { ... }

I do this:
public void GetFoosById(IList<Foo> foos, int id) { ... }

This lets me pass in a plain old List if that's all I need, or a more intelligent implementation of IList<T> (like ObservableCollection<T>) if I plan to bind to it from the UI. This technique also lets me return stuff from the method like a ValidationResult containing an error message if one occurred.
This still means that my data layer knows about my object definitions, but it gives me one extra degree of flexibility.
A: Check out Linq to SQL, if I were creating a new application right now I would consider relying on an entirely Linq based data layer.
Other than that I think it's good practise to de-couple data and logic as much as possible, but that isn't always practical.
A pure separation between logic and data access makes joins and optimisations difficult, which is what makes Linq so powerful.
A: In applications wherein we use NHibernate, the answer becomes "somewhere in between", in that the XML mapping definitions (they specify which table belongs to which object and which columns belong to which field, etc) are clearly in the business object tier.
They are passed to a generic data session manager which is not aware of any of the business objects; the only requirement is that the business objects passed to it for CRUD have to have a mapping file.
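A rough C# sketch of the "common interface plus factory" idea from the answers above; all of the type names here are hypothetical, and the storage-specific bodies are stubbed out:
using System.Collections.Generic;

public class Foo { public int Id; } // stand-in business object

// The rest of the application only ever sees this contract.
public interface IFooRepository
{
    void GetFoosById(IList<Foo> foos, int id); // caller supplies the collection
}

public class SqlFooRepository : IFooRepository
{
    public void GetFoosById(IList<Foo> foos, int id)
    {
        // run the query and add the resulting Foo objects to 'foos'
    }
}

public class XmlFooRepository : IFooRepository
{
    public void GetFoosById(IList<Foo> foos, int id)
    {
        // read the serialized XML store and add matches to 'foos'
    }
}

// Swapping the storage medium then touches only this one spot.
public static class RepositoryFactory
{
    public static IFooRepository Create(string storeKind)
    {
        if (storeKind == "xml")
            return new XmlFooRepository();
        return new SqlFooRepository();
    }
}

Note how the collection-agnostic signature from the fourth answer combines naturally with the factory suggested in the second one.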
Data Layer Best Practices
I am in the middle of a "discussion" with a colleague about the best way to implement the data layer in a new application. One viewpoint is that the data layer should be aware of business objects (our own classes that represent an entity), and be able to work with that object natively. The opposing viewpoint is that the data layer should be object-agnostic, and purely handle simple data types (strings, bools, dates, etc.) I can see that both approaches may be valid, but my own viewpoint is that I prefer the former. That way, if the data storage medium changes, the business layer doesn't (necessarily) have to change to accommodate the new data layer. It would therefore be a trivial thing to change from a SQL data store to a serialized xml filesystem store. My colleague's point of view is that the data layer shouldn't have to know about object definitions, and that as long as the data is passed about appropriately, that is enough. Now, I know that this is one of those questions that has the potential to start a religious war, but I'd appreciate any feedback from the community on how you approach such things. TIA
[ "It really depends on your view of the world - I used to be in the uncoupled camp. The DAL was only there to supply data to the BAL - end of story.\nWith emerging technologies such as Linq to SQL and Entity Framework becoming a bit more popular, then the line between DAL and BAL have been blurred a bit. In L2S especially your DAL is quite tightly coupled to the Business objects as the object model has a 1-1 mapping to your database field.\nLike anything in software development there is no right or wrong answer. You need to understand your requirements and future requirments and work from there. I would no more use a Ferrari on the Dakhar rally as I would a Range Rover on a track day.\n", "You can have both. Let data layer not know of your bussiness objects and make it capable of working with more than one type of data sources. If you supply a common interface (or an abstract class) for interacting with data, you can have different implementations for each type of data source. Factory pattern goes well here.\n", "An excellent book I have, which covers this topic, is Data Access Patterns, by Clifton Nock. It has got many good explanations and good ideas on how to decouple your business layer from the persistence layer. You really should give it a try. It's one of my favorite books.\n", "One trick I've found handy is to have my data layer be \"collection agnostic\". That is, whenever I want to return a list of objects from my data layer, I get the caller to pass in the list. So instead of this:\npublic IList<Foo> GetFoosById(int id) { ... }\n\nI do this:\npublic void GetFoosById(IList<Foo> foos, int id) { ... }\n\nThis lets me pass in a plain old List if that's all I need, or a more intelligent implementation of IList<T> (like ObservableCollection<T>) if I plan to bind to it from the UI. This technique also lets me return stuff from the method like a ValidationResult containing an error message if one occurred.\nThis still means that my data layer knows about my object definitions, but it gives me one extra degree of flexibility.\n", "Check out Linq to SQL, if I were creating a new application right now I would consider relying on an entirely Linq based data layer.\nOther than that I think it's good practise to de-couple data and logic as much as possible, but that isn't always practical. \nA pure separation between logic and data access makes joins and optimisations difficult, which is what makes Linq so powerful.\n", "In applications wherein we use NHibernate, the answer becomes \"somewhere in between\", in that, while the XML mapping definitions (they specify which table belongs to which object and which columns belong to which field, etc) are clearly in the business object tier. \nThey are passed to a generic data session manager which is not aware of any of the business objects; the only requirement is that the business objects passed to it for CRUD have to have a mapping file.\n" ]
[ 5, 3, 1, 0, 0, 0 ]
[]
[]
[ ".net", "n_tier_architecture" ]
stackoverflow_0000010860_.net_n_tier_architecture.txt
Q: What libraries do I need to link my mixed-mode application to? I'm integrating .NET support into our C++ application. It's an old-school MFC application, with 1 extra file compiled with the "/clr" option that references a CWinFormsControl.
I'm not allowed to remove the linker flag "/NODEFAULTLIB". (We have our own build management system, not Visual Studio's.) This means I have to specify all necessary libraries: VC runtime and MFC.
Other compiler options include "/MD".
Next to that: I can't use the linker flag "/FORCE:MULTIPLE" and just add everything: I'm looking for a non-overlapping set of libraries.
A: As a bare minimum:
mscoree.lib
MSVCRT.lib
mfc90.lib (adjust version appropriately)
And iterate from there.
A: Use the AppWizard to create a bare-bones MFC app in your style (SDI / MDI / dialog) and then put on your depends.
A: How I solved it:

link with "/FORCE:MULTIPLE /verbose" (that links ok) and set the output aside.
link with "/NODEFAULTLIB /verbose" and trace all unresolveds in the output of the previous step and add the libraries 1 by 1.
This resulted in doubles: "AAA.lib: XXX already defined in BBB.lib"
Then I finally got it:
Recompiled managed AND unmanaged units with /MD
and link to (among others):
mscoree.lib
msvcmrt.lib
mfcm80d.lib

Mixing /MT (unmanaged) and /MD (managed) turned out to be the bad idea:
different (overlapping) libraries are needed.
@ajryan: Dependency Walker only tells me what dll's are used, not what libraries are linked to when linking. (e.g. msvcmrt.lib?)
I think.
Thanks for the answers!
Jan
What libraries do I need to link my mixed-mode application to?
I'm integrating .NET support into our C++ application. It's an old-school MFC application, with 1 extra file compiled with the "/clr" option that references a CWinFormsControl. I'm not allowed to remove the linker flag "/NODEFAULTLIB". (We have our own build management system, not Visual Studio's.) This means I have to specify all necessary libraries: VC runtime and MFC. Other compiler options include "/MD". Next to that: I can't use the linker flag "/FORCE:MULTIPLE" and just add everything: I'm looking for a non-overlapping set of libraries.
[ "As a bare minimum:\nmscoree.lib\nMSVCRT.lib\nmfc90.lib (adjust version appropriately)\nAnd iterate from there.\n", "Use the AppWizard to create a bare-bones MFC app in your style (SDI / MDI / dialog ) and then put on your depends.\n", "How I solved it: \n\nlink with \"/FORCE:MULTIPLE /verbose\" (that links ok) and set the output aside.\nlink with \"/NODEFAULTIB /verbose\" and trace all unresolveds in the output of the previous step and add the libraries 1 by 1. \nThis resulted in doubles: \"AAA.lib: XXX already defined in BBB.lib\"\nThen I finally got it:\nRecompiled managed AND unmanaged units with /MD\nand link to (among others):\nmscoree.lib\nmsvcmrt.lib\nmfcm80d.lib\n\nMixing /MT (unmanaged) and /MD (managed) turned out to be the bad idea: \ndifferent(overlapping) libraries are needed.\n@ajryan: Dependcy Walker only tells me what dll's are used, not what libraries are linked to when linking.\n(e.g. msvcmrt.lib ?)\nI think.\nThanks for the answers!\nJan\n" ]
[ 1, 1, 1 ]
[]
[]
[ "c++_cli", "linker", "mixed_mode" ]
stackoverflow_0000009570_c++_cli_linker_mixed_mode.txt
Q: Best way to connect To Sql Server with MFC Soooo, I am starting this new job soon where most of the code is legacy MFC. The end goal is to convert/rewrite to C#. I'm a little rusty with MFC so I'm writing some code to try and spark my memory.
Anyway, was wondering the best way to connect to a SS2005 database with MFC. Back in the day CDatabase via ODBC was the way to go, has that changed in the last few years? I see you can use ATL and ADO, but it looked a little, well, verbose for my liking.
A: The ATL consumer templates for OleDb are an option - start here. The ClassWizard is still there to assist you so the verbosity isn't too much of a hurdle at first. Very soon you will need to hand-code though. There is a lot of careful twiddling, for example ensuring that your command string has exactly the right number of ? marks corresponding to the COLUMN_ENTRYs for an accessor. Then you'll probably have a million CopyToCommandFromObject and CopyToObjectFromCommand methods.
This app doesn't have any data access yet and you're going to be adding it? If so, I would seriously consider implementing a modern DAL (ADO.Net, linq if you're lucky enough to be on 2008) in a separate managed assembly and doing some interop.
Best way to connect To Sql Server with MFC
Soooo, I am starting this new job soon where most of the code is legacy MFC. The end goal is to convert/rewrite to C#. I'm a little rusty with MFC so I'm writing some code to try and spark my memory. Anyway, was wondering the best way to connect to a SS2005 database with MFC. Back in the day CDatabase via ODBC was the way to go, has that changed in the last few years? I see you can use ATL and ADO, but it looked a little, well, verbose for my liking.
[ "The ATL consumer templates for OleDb are an option - start here. The ClassWizard is still there to assist you so the verbosity isn't too much of a hurdle at first. Very soon you will need to hand-code though. There is a lot of careful twiddling, for example ensuring that your command string has exactly the right number of ? marks corresponding to the COLUMN_ENTRYs for an accessor. Then you'll probably have a million CopyToCommandFromObject and CopyToObjectFromCommand methods.\nThis app doesn't have any data access yet and you're going to be adding it? If so, I would seriously consider implementing a modern DAL (ADO.Net, linq if you're lucky enough to be on 2008) in a separate managed assembly and doing some interop.\n" ]
[ 2 ]
[]
[]
[ "mfc", "sql_server" ]
stackoverflow_0000010891_mfc_sql_server.txt
Q: How do you measure SQL Fill Factor value Usually when I'm creating indexes on tables, I generally guess what the Fill Factor should be based on an educated guess of how the table will be used (many reads or many writes).
Is there a more scientific way to determine a more accurate Fill Factor value?
A: You could try running a big list of realistic operations and looking at IO queues for the different actions.
There are a lot of variables that govern it, such as the size of each row and the number of writes vs reads.
Basically: high fill factor = quicker read, low = quicker write.
However it's not quite that simple, as almost all writes will be to a subset of rows that need to be looked up first.
For instance: set a fill factor to 10% and each single-row update will take 10 times as long to find the row it's changing, even though a page split would then be very unlikely.
Generally you see fill factors 70% (very high write) to 95% (very high read).
It's a bit of an art form.
I find that a good way of thinking of fill factors is as pages in an address book - the more tightly you pack the addresses the harder it is to change them, but the slimmer the book. I think I explained it better on my blog.
A: I would tend to be of the opinion that if you're after performance improvements, your time is much better spent elsewhere, tweaking your schema, optimising your queries and ensuring good index coverage. Fill factor is one of those things that you only need to worry about when you know that everything else in your system is optimal. I don't know anyone that can say that.
How do you measure SQL Fill Factor value
Usually when I'm creating indexes on tables, I generally guess what the Fill Factor should be based on an educated guess of how the table will be used (many reads or many writes). Is there a more scientific way to determine a more accurate Fill Factor value?
[ "You could try running a big list of realistic operations and looking at IO queues for the different actions.\nThere are a lot of variables that govern it, such as the size of each row and the number of writes vs reads.\nBasically: high fill factor = quicker read, low = quicker write.\nHowever it's not quite that simple, as almost all writes will be to a subset of rows that need to be looked up first.\nFor instance: set a fill factor to 10% and each single-row update will take 10 times as long to find the row it's changing, even though a page split would then be very unlikely.\nGenerally you see fill factors 70% (very high write) to 95% (very high read).\nIt's a bit of an art form.\nI find that a good way of thinking of fill factors is as pages in an address book - the more tightly you pack the addresses the harder it is to change them, but the slimmer the book. I think I explained it better on my blog. \n", "I would tend to be of the opinion that if you're after performance improvements, your time is much better spent elsewhere, tweaking your schema, optimising your queries and ensuring good index coverage. Fill factor is one of those things that you only need to worry about when you know that everything else in your system is optimal. I don't know anyone that can say that.\n" ]
[ 12, 2 ]
[]
[]
[ "fillfactor", "sql_server" ]
stackoverflow_0000010919_fillfactor_sql_server.txt
Q: What is a "reasonable" length of time to keep a SQL cursor open? In your applications, what's a "long time" to keep a transaction open before committing or rolling back? Minutes? Seconds? Hours?
And on which database?
A: I'm probably going to get flamed for this, but you really should try and avoid using cursors as they incur a serious performance hit. If you must use it, you should keep it open the absolute minimum amount of time possible so that you free up the resources being blocked by the cursor ASAP.
A: Transactions: minutes.
Cursors: 0 seconds maximum; if you use a cursor we fire you.
This is not ridiculous when you consider we are in a high availability web environment, that has to run SQL Server, and we don't even allow stored procs because of inability to accurately version and maintain them. If we were using Oracle, maybe.
A: @lomaxx, @ChanChan: to the best of my knowledge cursors are only a problem on SQL Server and Sybase (T-SQL variants). If your database of choice is Oracle, then cursors are your friend. I've seen a number of cases where the use of cursors has actually improved performance. Cursors are an incredibly useful mechanism and tbh, saying things like "if you use a cursor we fire you" is a little ridiculous.
Having said that, you only want to keep a cursor open for the absolute minimum that is required. Specifying a maximum time would be arbitrary and pointless without understanding the problem domain.
A: Generally I agree with the other answers: avoid cursors when possible (in most cases) and close them as fast as possible.
However: it all depends on the environment you're working in.

If it is a production website environment with lots of users, make sure that the cursor goes away before someone gets a timeout.
If you're - for example - writing a "log analyzing stored procedure" (or whatever) on a proprietary machine that does nothing else: feel free to do whatever you want to do. You'll be the only person who has to wait. It's not as if the database server is going to die because you use cursors. You should consider, though, that maybe usage behaviour will change over time and at some point there might be 10 people using that application. So try to find another way ;)

A: @ninesided: performance issues aside, it's also about using the right tool for the job. Given the choice to move the cursor out of your query into code, I would think 99 times out of 100 it would be better to put that looping logic into some sort of managed code. Doing so allows you to get the advantages of using a debugger, compile time error checking, type safety etc.
My answer to the question is still the same: if you're using a cursor, close it ASAP; in Oracle I'd also be trying to use explicit cursors.
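As a hedged illustration of "move the looping logic into managed code", here is a minimal C# sketch using a forward-only SqlDataReader instead of a T-SQL cursor; the table and column names are made up for the example:
using System.Data.SqlClient;

public static class RowProcessor
{
    // Stream rows one at a time and do the per-row work in C#.
    public static void ProcessRows(string connectionString)
    {
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand("SELECT Id, Name FROM SomeTable", connection))
        {
            connection.Open();
            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    int id = reader.GetInt32(0);
                    string name = reader.GetString(1);
                    // per-row logic goes here, with debugger and type-safety benefits
                }
            }
        }
    }
}

The reader is closed as soon as the using block exits, which also satisfies the "close it ASAP" advice.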
What is a "reasonable" length of time to keep a SQL cursor open?
In your applications, what's a "long time" to keep a transaction open before committing or rolling back? Minutes? Seconds? Hours? and on which database?
[ "I'm probably going to get flamed for this, but you really should try and avoid using cursors as they incur a serious performance hit. If you must use it, you should keep it open the absolute minimum amount of time possible so that you free up the resources being blocked by the cursor ASAP.\n", "transactions: minutes.\nCursors: 0seconds maximum, if you use a cursor we fire you. \nThis is not ridiculous when you consider we are in a high availability web environment, that has to run sql server, and we don't even allow stored procs because of inability to accurately version and maintain them. If we were using oracle maybe.\n", "@lomaxx, @ChanChan: to the best of my knowledge cursors are only a problem on SQL Server and Sybase (T-SQL variants). If your database of choice is Oracle, then cursors are your friend. I've seen a number of cases where the use of cursors has actually improved performance. Cursors are an incredibly useful mechanism and tbh, saying things like \"if you use a cursor we fire you\" is a little ridiculous.\nHaving said that, you only want to keep a cursor open for the absolute minimum that is required. Specifying a maximum time would be arbitrary and pointless without understanding the problem domain.\n", "Generally I agree with the other answers: Avoid cursors when possible (in most cases) and close them as fast as possible.\nHowever: It all depends on the environment you're working in. \n\nIf it is a production website environment with lots of users, make sure that the cursor goes away before someone gets a timeout. \nIf you're - for example - writing a \"log analyzing stored procedure\" (or whatever) on a proprietary machine that does nothing else: feel free to do whatever you want to do. You'll be the only person who has to wait. It's not as if the database server is going to die because you use cursors. You should consider, though, that maybe usage behaviour will change over time and at some point there might be 10 people using that application. So try to find another way ;)\n\n", "@ninesided: performance issues aside, it's also about using the right tool for the job. Given the choice to move the cursor out of your query into code, I would think 99 times out of 100 it would be better to put that looping logic into some sort of managed code. Doing so allows you to get the advantages of using a debugger, compile time error checking, type saftey etc.\nMy answer to the question is still the same, if you're using a cursor, close it ASAP, in oracle I'd also be trying to use explicit cursors.\n" ]
[ 8, 5, 3, 2, 2 ]
[]
[]
[ "cursors", "sql" ]
stackoverflow_0000010727_cursors_sql.txt
Q: Future proofing a large UI Application - MFC with 2008 Feature pack, or C# and Winforms? My company has developed a long standing product using MFC in Visual C++ as the de facto standard for UI development. Our codebase contains a lot of legacy/archaic code which must be kept operational. Some of this code is older than me (originally written in the late 70s) and some members of our team are still on Visual Studio 6.
However, a conclusion has thankfully been reached internally that our product is looking somewhat antiquated compared to our competitors', and that something needs to be done.
I am currently working on a new area of the UI which is quite separate from the rest of the product. I have therefore been given the chance to try out 'new' technology stacks as a sort of proving ground before the long process of moving over the rest of the UI begins.
I have been using C# with Windows Forms and the .net framework for a while in my spare time and enjoy it, but am somewhat worried about the headaches caused by interop. While this particular branch of the UI won't require much interop with the legacy C++ codebase, I can foresee this becoming an issue in the future.
The alternative is just to continue with MFC, but try and take advantage of the new feature pack that shipped with VS2008. This I guess is the easiest option, but I worry about longevity and not taking advantage of the goodness that is .net...
So, which do I pick? We're a small team so my recommendation will quite probably be accepted as a future direction for our development - I want to get it right.
Is MFC dead? Is C#/Winforms the way forward? Is there anything else I'm totally missing? Help greatly appreciated!
A: I'm a developer on an app that has a ton of legacy MFC code, and we have all of your same concerns. A big driver for our strategy was to eliminate as much risk and uncertainty as we could, which meant avoiding The Big Rewrite. As we all know, TBR fails most of the time. So we chose an incremental approach that allows us to preserve modules that won't be changing in the current release, writing new features managed, and porting features that are getting enhancements to managed.
You can do this several ways:

Host WPF content on your MFC views (see here)
For MFC MDI apps, create a new WinForms framework and host your MFC MDI views (see here)
Host WinForms user controls in MFC Dialogs and Views (see here)

The problem with adopting WPF (option 1) is that it will require you to rewrite all of your UI at once, otherwise it'll look pretty schizophrenic.
The second approach looks viable but very complicated.
The third approach is the one we selected and it's been working very well. It allows you to selectively refresh areas of your app while maintaining overall consistency and not touching things that aren't broken.
The Visual C++ 2008 Feature Pack looks interesting, I haven't played with it though. Seems like it might help with your issue of outdated look. If the "ribbon" would be too jarring for your users you could look at third-party MFC and/or WinForms control vendors.
My overall recommendation is that interop + incremental change is definitely preferable to sweeping changes.

After reading your follow-up, I can definitely confirm that the productivity gains of the framework vastly outweigh the investment in learning it. Nobody on our team had used C# at the start of this effort and now we all prefer it.
A: Depending on the application and the willingness of your customers to install .NET (not all of them are), I would definitely move to WinForms or WPF. Interop with C++ code is hugely simplified by refactoring non-UI code into class libraries using C++/CLI (as you've noted in your selection of tags).
The only issue with WPF is that it may be hard to maintain the current look-and-feel. Moving to WinForms can be done while maintaining the current look of your GUI. WPF uses such a different model that to attempt to keep the current layout would probably be futile and would definitely not be in the spirit of WPF. WPF also apparently has poor performance on pre-Vista machines when more than one WPF process is running.
My suggestion is to find out what your clients are using. If most have moved to Vista and your team is prepared to put in a lot of GUI work, I would say skip WinForms and move to WPF. Otherwise, definitely look seriously at WinForms. In either case, a class library in C++/CLI is the answer to your interop concerns.
A: You don't give a lot of detail on what your legacy code does or how it's structured. If you have certain performance criteria you might want to maintain some of your codebase in C++. You'll have an easier time doing interop with your old code if it is exposed in the right way - can you call into the existing codebase from C# today? Might be worth thinking about a project to get this structure right.
On the point of WPF, you could argue that WinForms may be more appropriate. Moving to WinForms is a big step for you and your team. Perhaps they may be more comfortable with the move to WinForms?
It's better documented, more experience in the market, and useful if you still need to support Windows 2000 clients.
You might be interested in Extending MFC Applications with the .NET Framework
Something else to consider is C++/CLI, but I don't have experience with it.
A: Thank you all kindly for your responses, it's reassuring to see that generally the consensus follows my line of thinking. I am in the fortunate situation that our software also runs on our own custom hardware (for the broadcast industry) - so the choice of OS is really ours and is thrust upon our customers. Currently we're running XP/2000, but I can see a desire to move up to Vista soon.
However, we also need to maintain very fine control over GPU performance, which I guess automatically rules out WPF and hardware acceleration? I should have made that point in my original post - sorry. Perhaps it's possible to use two GPUs... but that's another question altogether...
The team doesn't have any significant C# experience and I'm no expert myself, but I think the overall long term benefits of a managed environment probably outweigh the time it'll take to get up to speed.
Looks like Winforms and C# have it for now.
A: Were you to look at moving to C# and therefore .NET, I would consider Windows Presentation Foundation rather than WinForms. WPF is the future of smart clients in .NET, and the skills you pick up you'll be able to reuse if you want to make browser-hosted Silverlight applications.
A: I concur with the WPF sentiment. Tag/XML based UI would seem to be a bit more portable than WinForms.
I guess too you have to consider your team, if there is not a lot of current C# skills, then that is a factor, but going forward the market for MFC developers is diminishing and C# is growing.
Maybe some kind of piecemeal approach would be possible? I have been involved with recoding legacy applications to C# quite a bit, and it always takes a lot longer than you would estimate, especially if you are keeping some legacy code, or your team isn't that conversant with C#.
Future proofing a large UI Application - MFC with 2008 Feature pack, or C# and Winforms?
My company has developed a long standing product using MFC in Visual C++ as the de facto standard for UI development. Our codebase contains a lot of legacy/archaic code which must be kept operational. Some of this code is older than me (originally written in the late 70s) and some members of our team are still on Visual Studio 6.
However, a conclusion has thankfully been reached internally that our product is looking somewhat antiquated compared to our competitors', and that something needs to be done.
I am currently working on a new area of the UI which is quite separate from the rest of the product. I have therefore been given the chance to try out 'new' technology stacks as a sort of proving ground before the long process of moving over the rest of the UI begins.
I have been using C# with Windows Forms and the .net framework for a while in my spare time and enjoy it, but am somewhat worried about the headaches caused by interop. While this particular branch of the UI won't require much interop with the legacy C++ codebase, I can foresee this becoming an issue in the future.
The alternative is just to continue with MFC, but try and take advantage of the new feature pack that shipped with VS2008. This I guess is the easiest option, but I worry about longevity and not taking advantage of the goodness that is .net...
So, which do I pick? We're a small team so my recommendation will quite probably be accepted as a future direction for our development - I want to get it right.
Is MFC dead? Is C#/Winforms the way forward? Is there anything else I'm totally missing? Help greatly appreciated!
[ "I'm a developer on an app that has a ton of legacy MFC code, and we have all of your same concerns. A big driver for our strategy was to eliminate as much risk and uncertainty as we could, which meant avoiding The Big Rewrite. As we all know, TBR fails most of the time. So we chose an incremental approach that allows us to preserve modules that won't be changing in the current release, writing new features managed, andporting features that are getting enhancements to managed.\nYou can do this several ways:\n\nHost WPF content on your MFC views (see here)\nFor MFC MDI apps, create a new WinForms framework and host your MFC MDI views (see here)\nHost WinForms user controls in MFC Dialogs and Views (see here)\n\nThe problem with adopting WPF (option 1) is that it will require you to rewrite all of your UI at once, otherwise it'll look pretty schizophrenic.\nThe second approach looks viable but very complicated.\nThe third approach is the one we selected and it's been working very well. It allows you to selectively refresh areas of your app while maintaining overall consistency and not touching things that aren't broken.\nThe Visual C++ 2008 Feature Pack looks interesting, I haven't played with it though. Seems like it might help with your issue of outdated look. If the \"ribbon\" would be too jarring for your users you could look at third-party MFC and/or WinForms control vendors.\nMy overall recommendation is that interop + incremental change is definitely preferable to sweeping changes.\n\nAfter reading your follow-up, I can definitely confirm that the productivity gains of the framework vastly outweigh the investment in learning it. Nobody on our team had used C# at the start of this effort and now we all prefer it.\n", "Depending on the application and the willingness of your customers to install .NET (not all of them are), I would definitely move to WinForms or WPF. Interop with C++ code is hugely simplified by refactoring non-UI code into class libraries using C++/CLI (as you've noted in your selection of tags).\nThe only issue with WPF is that it may be hard to maintain the current look-and-feel. Moving to WinForms can be done while maintaining the current look of your GUI. WPF uses such a different model that to attempt to keep the current layout would probably be futile and would definitely not be in the spirit of WPF. WPF also apparently has poor performance on pre-Vista machines when more than one WPF process is running.\nMy suggestion is to find out what your clients are using. If most have moved to Vista and your team is prepared to put in a lot of GUI work, I would say skip WinForms and move to WPF. Otherwise, definitely look seriously at WinForms. In either case, a class library in C++/CLI is the answer to your interop concerns.\n", "You don't give a lot of detail on what your legacy code does or how it's structured. If you have certain performance criteria you might want to maintain some of your codebase in C++. You'll have an easier time doing interop with your old code if it is exposed in the right way - can you call into the existing codebase from C# today? Might be worth thinking about a project to get this structure right.\nOn the point of WPF, you could argue that WinForms may be more appropriate. Moving to WinForms is a big step for you and your team. Perhaps they may be more comfortable with the move to WinForms? \nIt's better documented, more experience in the market, and useful if you still need to support windows 2000 clients.\nYou might be interested in Extending MFC Applications with the .NET Framework\nSomething else to consider is C++/CLI, but I don't have experience with it.\n", "Thank you all kindly for your responses, it's reassuring to see that generally the consensus follows my line of thinking. I am in the fortunate situation that our software also runs on our own custom hardware (for the broadcast industry) - so the choice of OS is really ours and is thrust upon our customers. Currently we're running XP/2000, but I can see a desire to move up to Vista soon.\nHowever, we also need to maintain very fine control over GPU performance, which I guess automatically rules out WPF and hardware acceleration? I should have made that point in my original post - sorry. Perhaps it's possible to use two GPUs... but that's another question altogether...\nThe team doesn't have any significant C# experience and I'm no expert myself, but I think the overall long term benefits of a managed environment probably outweigh the time it'll take to get up to speed.\nLooks like Winforms and C# have it for now.\n", "Were you to look at moving to C# and therefore .NET, I would consider Windows Presentation Foundation rather than WinForms. WPF is the future of smart clients in .NET, and the skills you pick up you'll be able to reuse if you want to make browser-hosted Silverlight applications.\n", "I concur with the WPF sentiment. Tag/XML based UI would seem to be a bit more portable than WinForms.\nI guess too you have to consider your team, if there is not a lot of current C# skills, then that is a factor, but going forward the market for MFC developers is diminishing and C# is growing.\nMaybe some kind of piecemeal approach would be possible? I have been involved with recoding legacy applications to C# quite a bit, and it always takes a lot longer than you would estimate, especially if you are keeping some legacy code, or your team isn't that conversant with C#.\n" ]
[ 9, 2, 2, 2, 1, 0 ]
[]
[]
[ "c#", "c++", "mfc", "user_interface", "winforms" ]
stackoverflow_0000010901_c#_c++_mfc_user_interface_winforms.txt
Q: MS Team Foundation Server in distributed environments - hints tips tricks needed Is anyone out there using Team Foundation Server within a team that is geographically distributed? We're in the UK, trying to work with a team in Australia and we're finding it quite tough.
Our main two issues are:

Things are being checked out to us without us asking on a get latest.
Even when using a proxy, most things take a while to happen.

Lots of really annoying little things like this are hardening our arteries, stopping us from delivering code and frankly creating a user experience akin to pushing golden syrup up a sand dune.
Is anyone out there actually using TFS in this manner, on a daily basis with (relative) success? If so, do you have any hints, tips, tricks or gotchas that would be worth knowing?
P.S. Upgrading to CruiseControl.NET is not an option.
A: Definitely upgrade to TFS 2008 and Visual Studio 2008, as it is the "v2" version of Team System in every way. Fixes lots of small and medium sized problems.
As for "things being randomly checked out" this is almost always due to Visual Studio deciding to edit files on your behalf. Try getting latest from the Team Explorer, with nothing open in Visual Studio, and see if that behavior persists. I bet it won't!
Multiple TFS servers is a bad idea. Make sure your proxy is configured correctly, as it caches repeated GETs. That said, TFS is a server connected model, so it'll always be a bit slower than true "offline" source control systems.
Also, if you could edit your question to contain more specific complaints or details, that would help -- right now it's awfully vague, so I can't answer very well.
A: We use TFS with a somewhat distributed team - they aren't too far away but connect via a slow and unreliable VPN.
For your first issue, get latest on checkout is not the default behaviour. (Here's an explanation) There is an add-in that will do it for you, though.
Here's the workflow that works for us:

Get latest
Build and verify nothing's broken
Work (changes pended)
Get latest again
Deal with merge conflicts
Build and verify nothing's broken
Check in

[edit] OK looks like you rephrased this part of the question. Yes, Jeff's right, VS decides to check some files out "for you," like sln and proj files. It also automatically checks out any source file that you edit (that's what you want though, right? although you can change that setting in tools > options > source control)
The proxy apparently takes a while to get ramped up (we don't use it) but once it has cached most of the tree it's supposed to be pretty quick. Can you do some monitoring and find the bottleneck(s)?
Anything else giving you trouble, other than get-latest-on-checkout and speed?
A: From my understanding you can have multiple TFS Application servers in different locations. They either can both talk to the same SQL Server or you could use SQL Server mirroring. Having your own local TFS server would likely speed up your development times.
MS Team Foundation Server in distributed environments - hints tips tricks needed
Is anyone out there using Team Foundation Server within a team that is geographically distributed? We're in the UK, trying to work with a team in Australia and we're finding it quite tough. Our main two issues are:

Things are being checked out to us without us asking on a get latest.
Even when using a proxy, most things take a while to happen.

Lots of really annoying little things like this are hardening our arteries, stopping us from delivering code and frankly creating a user experience akin to pushing golden syrup up a sand dune. Is anyone out there actually using TFS in this manner, on a daily basis with (relative) success? If so, do you have any hints, tips, tricks or gotchas that would be worth knowing? P.S. Upgrading to CruiseControl.NET is not an option.
[ "Definitely upgrade to TFS 2008 and Visual Studio 2008, as it is the \"v2\" version of Team System in every way. Fixes lots of small and medium sized problems.\nAs for \"things being randomly checked out\" this is almost always due to Visual Studio deciding to edit files on your behalf. Try getting latest from the Team Explorer, with nothing open in Visual Studio, and see if that behavior persists. I bet it won't!\nMultiple TFS servers is a bad idea. Make sure your proxy is configured correctly, as it caches repeated GETs. That said, TFS is a server connected model, so it'll always be a bit slower than true \"offline\" source control systems.\nAlso, if you could edit your question to contain more specific complaints or details, that would help -- right now it's awfully vague, so I can't answer very well.\n", "We use TFS with a somewhat distributed team - they aren't too far away but connect via a slow and unreliable VPN.\nFor your first issue, get latest on checkout is not the default behaviour. (Here's an explanation) There is an add-in that will do it for you, though.\nHere's the workflow that works for us:\n\nGet latest\nBuild and verify nothing's broken\nWork (changes pended)\nGet latest again\nDeal with merge conflicts\nBuild and verify nothing's broken\nCheck in\n\n[edit] OK looks like you rephrased this part of the question. Yes, Jeff's right, VS decides to check some files out \"for you,\" like sln and proj files. It also automatically checks out any source file that you edit (that's what you want though, right? although you can change that setting in tools > options > source control)\nThe proxy apparently takes a while to get ramped up (we don't use it) but once it has cached most of the tree it's supposed to be pretty quick. Can you do some monitoring and find the bottleneck(s)?\nAnything else giving you trouble, other than get-latest-on-checkout and speed?\n", "From my understanding you can have multiple TFS Application servers in different locations. They either can both talk to the same SQL Server or you could use SQL Server mirroring. Having your own local TFS server would likely speed up your development times.\n" ]
[ 2, 1, 0 ]
[]
[]
[ "tfs", "visual_studio" ]
stackoverflow_0000010999_tfs_visual_studio.txt
Q: ASP.NET Caching Recently I have been investigating the possibilities of caching in ASP.NET.
I rolled my own "Cache", because I didn't know any better, it looked a bit like this:
public class DataManager
{
    private static DataManager s_instance;

    public static DataManager GetInstance()
    {
        if (s_instance == null)
            s_instance = new DataManager();
        return s_instance;
    }

    private Data[] m_myData;
    private DateTime m_cacheTime;

    public Data[] GetData()
    {
        TimeSpan span = DateTime.Now.Subtract(m_cacheTime);
        if (span.TotalSeconds > 10)
        {
            // Do SQL to get data
            m_myData = data;
            m_cacheTime = DateTime.Now;
            return m_myData;
        }
        else
        {
            return m_myData;
        }
    }
}

So the values are stored for a while in a singleton, and when the time expires, the values are renewed. If time has not expired, and a request for the data is done, the stored values in the field are returned.
What are the benefits of using the real method (http://msdn.microsoft.com/en-us/library/aa478965.aspx) instead of this?
A: I think the maxim "let the computer do it; it's smarter than you" applies here. Just like memory management and other complicated things, the computer is a lot more informed about what it's doing than you are; consequently, it's able to get more performance than you are.
Microsoft has had a team of engineers working on it and they've probably managed to squeeze much more performance out of the system than would be possible for you to. It's also likely that ASP.NET's built-in caching operates at a different level (which is inaccessible to your application), making it much faster.
A: The ASP.NET caching mechanism has been around for a while, so it's stable and well understood. There are lots of resources out there to help you make the most of it.
Rolling your own might be the right solution, depending on your requirements.
The hard part about caching is choosing what is safe to cache, and when. For applications in which data changes frequently, you can introduce some hard to troubleshoot bugs with caching, so be careful.
A: Caching in ASP.NET is feature rich and you can configure caching in quite a granular way.
In your case (data caching) one of the features you're missing out on is the ability to invalidate and refresh the cache if data on the SQL server is updated in some way (SQL Cache Dependency).
http://msdn.microsoft.com/en-us/library/ms178604.aspx
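For comparison, here is a hedged sketch of the same 10-second policy written against the built-in cache (System.Web.Caching); the Data stub and LoadFromDatabase are placeholders standing in for the question's own type and SQL query:
using System;
using System.Web;
using System.Web.Caching;

public class Data { } // stand-in for the question's Data type

public static class CachedDataManager
{
    private const string CacheKey = "MyData";

    public static Data[] GetData()
    {
        // Try the built-in cache first.
        Data[] data = HttpRuntime.Cache[CacheKey] as Data[];
        if (data == null)
        {
            data = LoadFromDatabase();
            HttpRuntime.Cache.Insert(
                CacheKey,
                data,
                null,                            // no dependency; could be a SqlCacheDependency
                DateTime.UtcNow.AddSeconds(10),  // absolute expiry, like the 10s check above
                Cache.NoSlidingExpiration);
        }
        return data;
    }

    private static Data[] LoadFromDatabase()
    {
        return new Data[0]; // placeholder for the real SQL query
    }
}

Expiry, memory pressure and thread safety then become the framework's problem rather than the singleton's.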
ASP.NET Caching
Recently I have been investigating the possibilities of caching in ASP.NET.
I rolled my own "Cache", because I didn't know any better, it looked a bit like this:
public class DataManager
{
    private static DataManager s_instance;

    public static DataManager GetInstance()
    {
        if (s_instance == null)
            s_instance = new DataManager();
        return s_instance;
    }

    private Data[] m_myData;
    private DateTime m_cacheTime;

    public Data[] GetData()
    {
        TimeSpan span = DateTime.Now.Subtract(m_cacheTime);
        if (span.TotalSeconds > 10)
        {
            // Do SQL to get data
            m_myData = data;
            m_cacheTime = DateTime.Now;
            return m_myData;
        }
        else
        {
            return m_myData;
        }
    }
}

So the values are stored for a while in a singleton, and when the time expires, the values are renewed. If time has not expired, and a request for the data is done, the stored values in the field are returned.
What are the benefits of using the real method (http://msdn.microsoft.com/en-us/library/aa478965.aspx) instead of this?
[ "I think the maxim \"let the computer do it; it's smarter than you\" applies here. Just like memory management and other complicated things, the computer is a lot more informed about what it's doing than your are; consequently, able to get more performance than you are.\nMicrosoft has had a team of engineers working on it and they've probably managed to squeeze much more performance out of the system than would be possible for you to. It's also likely that ASP.NET's built-in caching operates at a different level (which is inaccessible to your application), making it much faster.\n", "The ASP.NET caching mechanism has been around for a while, so it's stable and well understood. There are lots of resources out there to help you make the most of it.\nRolling your own might be the right solution, depending on your requirements.\nThe hard part about caching is choosing what is safe to cache, and when. For applications in which data changes frequently, you can introduce some hard to troubleshoot bugs with caching, so be careful.\n", "Caching in ASP.NET is feature rich and you can configure caching in quite a granular way. \nIn your case (data caching) one of the features you're missing out on is the ability to invalidate and refresh the cache if data on the SQL server is updated in some way (SQL Cache Dependency).\nhttp://msdn.microsoft.com/en-us/library/ms178604.aspx\n" ]
[ 4, 2, 1 ]
[]
[]
[ "asp.net", "caching", "sql" ]
stackoverflow_0000011141_asp.net_caching_sql.txt
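A minimal sketch of the built-in approach the answers above recommend, using HttpRuntime.Cache with an absolute expiration in place of the hand-rolled singleton. The Data type and the LoadDataFromSql helper are hypothetical stand-ins for the question's own type and SQL call:

using System;
using System.Web;
using System.Web.Caching;

public class Data { /* placeholder for the question's data type */ }

public static class PlanDataCache
{
    public static Data[] GetData()
    {
        const string key = "PlanData";
        Data[] cached = (Data[])HttpRuntime.Cache[key];
        if (cached == null)
        {
            cached = LoadDataFromSql();
            // Absolute expiration replaces the hand-rolled 10-second timer; a
            // SqlCacheDependency could be passed instead of null to invalidate
            // the entry when the underlying table changes.
            HttpRuntime.Cache.Insert(key, cached, null,
                DateTime.UtcNow.AddSeconds(10), Cache.NoSlidingExpiration);
        }
        return cached;
    }

    private static Data[] LoadDataFromSql()
    {
        return new Data[0]; // hypothetical: replace with the real query
    }
}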
Q: Best way to model Many-To-One Relationships in NHibernate When Dealing With a Legacy DB? Warning - I am very new to NHibernate. I know this question seems simple - and I'm sure there's a simple answer, but I've been spinning my wheels for some time on this one. I am dealing with a legacy db which really can't be altered structurally. I have a details table which lists payment plans that have been accepted by a customer. Each payment plan has an ID which links back to a reference table to get the plan's terms, conditions, etc. In my object model, I have an AcceptedPlan class, and a Plan class. Originally, I used a many-to-one relationship from the detail table back to the ref table to model this relationship in NHibernate. I also created a one-to-many relationship going in the opposite direction from the Plan class over to the AcceptedPlan class. This was fine while I was simply reading data. I could go to my Plan object, which was a property of my AcceptedPlan class to read the plan's details. My problem arose when I had to start inserting new rows to the details table. From my reading, it seems the only way to create a new child object is to add it to the parent object and then save the session. But I don't want to have to create a new parent Plan object every time I want to create a new detail record. This seems like unnecessary overhead. Does anyone know if I am going about this in the wrong way? A: I'd steer away from having child objects contain their logical parent; it can get very messy and very recursive pretty quickly when you do that. I'd take a look at how you're intending to use the domain model before you do that sort of thing. You can easily still have the ID references in the tables and just leave them unmapped. Here are two example mappings that might nudge you in the right direction; I've had to ad-lib table names etc., but it could possibly help. I'd probably also suggest mapping the StatusId to an enumeration. Pay attention to the way the bag effectively maps the details table into a collection.
<?xml version="1.0" encoding="utf-8" ?> <hibernate-mapping default-cascade="save-update" xmlns="urn:nhibernate-mapping-2.2"> <class lazy="false" name="Namespace.Customer, Namespace" table="Customer"> <id name="Id" type="Int32" unsaved-value="0"> <column name="CustomerAccountId" length="4" sql-type="int" not-null="true" unique="true" index="CustomerPK"/> <generator class="native" /> </id> <bag name="AcceptedOffers" inverse="false" lazy="false" cascade="all-delete-orphan" table="details"> <key column="CustomerAccountId" foreign-key="AcceptedOfferFK"/> <many-to-many class="Namespace.AcceptedOffer, Namespace" column="AcceptedOfferFK" foreign-key="AcceptedOfferID" lazy="false" /> </bag> </class> </hibernate-mapping> <?xml version="1.0" encoding="utf-8" ?> <hibernate-mapping default-cascade="save-update" xmlns="urn:nhibernate-mapping-2.2"> <class lazy="false" name="Namespace.AcceptedOffer, Namespace" table="AcceptedOffer"> <id name="Id" type="Int32" unsaved-value="0"> <column name="AcceptedOfferId" length="4" sql-type="int" not-null="true" unique="true" index="AcceptedOfferPK"/> <generator class="native" /> </id> <many-to-one name="Plan" class="Namespace.Plan, Namespace" lazy="false" cascade="save-update" > <column name="PlanFK" length="4" sql-type="int" not-null="false"/> </many-to-one> <property name="StatusId" type="Int32"> <column name="StatusId" length="4" sql-type="int" not-null="true"/> </property> </class> </hibernate-mapping> A: Didn't see your database diagram whilst I was writing. <?xml version="1.0" encoding="utf-8" ?> <hibernate-mapping default-cascade="save-update" xmlns="urn:nhibernate-mapping-2.2"> <class lazy="false" name="Namespace.Customer, Namespace" table="Customer"> <id name="Id" type="Int32" unsaved-value="0"> <column name="customer_id" length="4" sql-type="int" not-null="true" unique="true" index="CustomerPK"/> <generator class="native" /> </id> <bag name="AcceptedOffers" inverse="false" lazy="false" cascade="all-delete-orphan"> <key column="accepted_offer_id"/> <one-to-many class="Namespace.AcceptedOffer, Namespace"/> </bag> </class> </hibernate-mapping> <?xml version="1.0" encoding="utf-8" ?> <hibernate-mapping default-cascade="save-update" xmlns="urn:nhibernate-mapping-2.2"> <class lazy="false" name="Namespace.AcceptedOffer, Namespace" table="Accepted_Offer"> <id name="Id" type="Int32" unsaved-value="0"> <column name="accepted_offer_id" length="4" sql-type="int" not-null="true" unique="true" /> <generator class="native" /> </id> <many-to-one name="Plan" class="Namespace.Plan, Namespace" lazy="false" cascade="save-update"> <column name="plan_id" length="4" sql-type="int" not-null="false"/> </many-to-one> </class> </hibernate-mapping> Should probably do the trick (I've only done example mappings for the collections, you'll have to add other properties). A: The approach I'd take to model this is as follows: Customer object contains an ICollection <PaymentPlan> PaymentPlans which represent the plans that customer has accepted. The PaymentPlan to the Customer would be mapped using a bag which uses the details table to establish which customer id's mapped to which PaymentPlans. Using cascade all-delete-orphan, if the customer was deleted, both the entries from details and the PaymentPlans that customer owned would be deleted. The PaymentPlan object contains a PlanTerms object which represented the terms of the payment plan. 
The PlanTerms would be mapped to a PaymentPlan using a many-to-one mapping cascading save-update, which would just insert a reference to the relevant PlanTerms object into the PaymentPlan. Using this model, you could create PlanTerms independently and then when you add a new PaymentPlan to a customer, you'd create a new PaymentPlan object passing in the relevant PlanTerms object and then add it to the collection on the relevant Customer. Finally, you'd save the Customer and let NHibernate cascade the save operation. You'd end up with a Customer object, a PaymentPlan object and a PlanTerms object with the Customer (customer table) owning instances of PaymentPlans (the details table) which all adhere to specific PlanTerms (the plan table). I've got some more concrete examples of the mapping syntax if required but it's probably best to work it through with your own model and I don't have enough information on the database tables to provide any specific examples. A: I don't know if this is possibly because my NHibernate experience is limited, but could you create a BaseDetail class which has just the properties for the Details as they map directly to the Detail table? Then create a second class that inherits from the BaseDetail class that has the additional Parent Plan object so you can create a BaseDetail class when you want to just create a Detail row and assign the PlanId to it, but if you need to populate a full Detail record with the Parent plan object you can use the inherited Detail class. I don't know if that makes a whole lot of sense, but let me know and I'll clarify further. A: I think the problem you have here is that your AcceptedOffer object contains a Plan object, and then your Plan object appears to contain an AcceptedOffers collection that contains AcceptedOffer objects. Same thing with Customers. The fact that the objects are a child of each other is what causes your problem, I think. Likewise, what makes your AcceptedOffer complex is it has two responsibilities: it indicates offers included in a plan, and it indicates acceptance by a customer. That violates the Single Responsibility Principle. You may have to differentiate between an Offer that is under a Plan, and an Offer that is accepted by customers. So here's what I'm going to do: Create a separate Offer object which does not have a state, e.g., it does not have a customer and it does not have a status -- it only has an OfferId and the Plan it belongs to as its attributes. Modify your Plan object to have an Offers collection (it does not have to have accepted offer in its context). Finally, modify your AcceptedOffer object so that it contains an Offer, the Customer, and a Status. Customer remains the same. I think this will sufficiently untangle your NHibernate mappings and object saving problems. :) A: A tip that may (or may not) be helpful in NHibernate: you can map your objects against Views as though the View was a table. Just specify the view name as a table name; as long as all NOT NULL fields are included in the view and the mapping it will work fine.
Best way to model Many-To-One Relationships in NHibernate When Dealing With a Legacy DB?
Warning - I am very new to NHibernate. I know this question seems simple - and I'm sure there's a simple answer, but I've been spinning my wheels for some time on this one. I am dealing with a legacy db which really can't be altered structurally. I have a details table which lists payment plans that have been accepted by a customer. Each payment plan has an ID which links back to a reference table to get the plan's terms, conditions, etc. In my object model, I have an AcceptedPlan class, and a Plan class. Originally, I used a many-to-one relationship from the detail table back to the ref table to model this relationship in NHibernate. I also created a one-to-many relationship going in the opposite direction from the Plan class over to the AcceptedPlan class. This was fine while I was simply reading data. I could go to my Plan object, which was a property of my AcceptedPlan class to read the plan's details. My problem arose when I had to start inserting new rows to the details table. From my reading, it seems the only way to create a new child object is to add it to the parent object and then save the session. But I don't want to have to create a new parent Plan object every time I want to create a new detail record. This seems like unnecessary overhead. Does anyone know if I am going about this in the wrong way?
[ "I'd steer away from having child object containing their logical parent, it can get very messy and very recursive pretty quickly when you do that. I'd take a look at how you're intending to use the domain model before you do that sort of thing. You can easily still have the ID references in the tables and just leave them unmapped.\nHere are two example mappings that might nudge you in the right direction, I've had to adlib table names etc but it could possibly help. I'd probably also suggest mapping the StatusId to an enumeration.\nPay attention to the way the bag effectivly maps the details table into a collection.\n<?xml version=\"1.0\" encoding=\"utf-8\" ?>\n<hibernate-mapping default-cascade=\"save-update\" xmlns=\"urn:nhibernate-mapping-2.2\">\n <class lazy=\"false\" name=\"Namespace.Customer, Namespace\" table=\"Customer\">\n <id name=\"Id\" type=\"Int32\" unsaved-value=\"0\">\n <column name=\"CustomerAccountId\" length=\"4\" sql-type=\"int\" not-null=\"true\" unique=\"true\" index=\"CustomerPK\"/>\n <generator class=\"native\" />\n </id>\n\n <bag name=\"AcceptedOffers\" inverse=\"false\" lazy=\"false\" cascade=\"all-delete-orphan\" table=\"details\">\n <key column=\"CustomerAccountId\" foreign-key=\"AcceptedOfferFK\"/>\n <many-to-many\n class=\"Namespace.AcceptedOffer, Namespace\"\n column=\"AcceptedOfferFK\"\n foreign-key=\"AcceptedOfferID\"\n lazy=\"false\"\n />\n </bag>\n\n </class>\n</hibernate-mapping>\n\n\n<?xml version=\"1.0\" encoding=\"utf-8\" ?>\n<hibernate-mapping default-cascade=\"save-update\" xmlns=\"urn:nhibernate-mapping-2.2\">\n <class lazy=\"false\" name=\"Namespace.AcceptedOffer, Namespace\" table=\"AcceptedOffer\">\n <id name=\"Id\" type=\"Int32\" unsaved-value=\"0\">\n <column name=\"AcceptedOfferId\" length=\"4\" sql-type=\"int\" not-null=\"true\" unique=\"true\" index=\"AcceptedOfferPK\"/>\n <generator class=\"native\" />\n </id>\n\n <many-to-one \n name=\"Plan\"\n class=\"Namespace.Plan, Namespace\"\n lazy=\"false\"\n cascade=\"save-update\"\n >\n <column name=\"PlanFK\" length=\"4\" sql-type=\"int\" not-null=\"false\"/>\n </many-to-one>\n\n <property name=\"StatusId\" type=\"Int32\">\n <column name=\"StatusId\" length=\"4\" sql-type=\"int\" not-null=\"true\"/>\n </property>\n\n </class>\n</hibernate-mapping>\n\n", "Didn't see your database diagram whilst I was writing.\n<?xml version=\"1.0\" encoding=\"utf-8\" ?>\n<hibernate-mapping default-cascade=\"save-update\" xmlns=\"urn:nhibernate-mapping-2.2\">\n <class lazy=\"false\" name=\"Namespace.Customer, Namespace\" table=\"Customer\">\n <id name=\"Id\" type=\"Int32\" unsaved-value=\"0\">\n <column name=\"customer_id\" length=\"4\" sql-type=\"int\" not-null=\"true\" unique=\"true\" index=\"CustomerPK\"/>\n <generator class=\"native\" />\n </id>\n\n <bag name=\"AcceptedOffers\" inverse=\"false\" lazy=\"false\" cascade=\"all-delete-orphan\">\n <key column=\"accepted_offer_id\"/>\n <one-to-many class=\"Namespace.AcceptedOffer, Namespace\"/>\n </bag>\n\n </class>\n</hibernate-mapping>\n\n\n<?xml version=\"1.0\" encoding=\"utf-8\" ?>\n<hibernate-mapping default-cascade=\"save-update\" xmlns=\"urn:nhibernate-mapping-2.2\">\n <class lazy=\"false\" name=\"Namespace.AcceptedOffer, Namespace\" table=\"Accepted_Offer\">\n <id name=\"Id\" type=\"Int32\" unsaved-value=\"0\">\n <column name=\"accepted_offer_id\" length=\"4\" sql-type=\"int\" not-null=\"true\" unique=\"true\" />\n <generator class=\"native\" />\n </id>\n\n <many-to-one name=\"Plan\" class=\"Namespace.Plan, Namespace\" lazy=\"false\" 
cascade=\"save-update\">\n <column name=\"plan_id\" length=\"4\" sql-type=\"int\" not-null=\"false\"/>\n </many-to-one>\n\n </class>\n</hibernate-mapping>\n\nShould probably do the trick (I've only done example mappings for the collections, you'll have to add other properties).\n", "The approach I'd take to model this is as follows:\nCustomer object contains an ICollection <PaymentPlan> PaymentPlans which represent the plans that customer has accepted.\nThe PaymentPlan to the Customer would be mapped using a bag which uses the details table to establish which customer id's mapped to which PaymentPlans. Using cascade all-delete-orphan, if the customer was deleted, both the entries from details and the PaymentPlans that customer owned would be deleted.\nThe PaymentPlan object contains a PlanTerms object which represented the terms of the payment plan.\nThe PlanTerms would be mapped to a PaymentPlan using a many-to-one mapping cascading save-update which would just insert a reference to the relevant PlanTerms object in to the PaymentPlan.\nUsing this model, you could create PlanTerms independantly and then when you add a new PaymentPlan to a customer, you'd create a new PaymentPlan object passing in the relevant PlanTerms object and then add it to the collection on the relevant Customer. Finally you'd save the Customer and let nhibernate cascade the save operation.\nYou'd end up with a Customer object, a PaymentPlan object and a PlanTerms object with the Customer (customer table) owning instances of PaymentPlans (the details table) which all adhear to specific PlanTerms (the plan table).\nI've got some more concrete examples of the mapping syntax if required but it's probably best to work it through with your own model and I don't have enough information on the database tables to provide any specific examples.\n", "I don't know if this is possibly because my NHibernate experience is limited, but could you create a BaseDetail class which has just the properties for the Details as they map directly to the Detail table.\nThen create a second class that inherits from the BaseDetail class that has the additional Parent Plan object so you can create a BaseDetail class when you want to just create a Detail row and assign the PlanId to it, but if you need to populate a full Detail record with the Parent plan object you can use the inherited Detail class.\nI don't know if that makes a whole lot of sense, but let me know and I'll clarify further.\n", "I think the problem you have here is that your AcceptedOffer object contains a Plan object, and then your Plan object appears to contain an AcceptedOffers collection that contains AcceptedOffer objects. Same thing with Customers. The fact that the objects are a child of each other is what causes your problem, I think.\nLikewise, what makes your AcceptedOffer complex is it has a two responsibilities: it indicates offers included in a plan, it indicates acceptance by a customer. That violates the Single Responsibility Principle.\nYou may have to differentiate between an Offer that is under a Plan, and an Offer that is accepted by customers. So here's what I'm going to do:\n\nCreate a separate Offer object which does not have a state, e.g., it does not have a customer and it does not have a status -- it only has an OfferId and the Plan it belongs to as its attributes. \nModify your Plan object to have an Offers collection (it does not have to have accepted offer in its context). 
\nFinally, modify your AcceptedOffer object so that it contains an Offer, the Customer, and a Status. Customer remains the same.\n\nI think this will sufficiently untangle your NHibernate mappings and object saving problems. :)\n", "A tip that may (or may not) be helpful in NHibernate: you can map your objects against Views as though the View was a table. Just specify the view name as a table name; as long as all NOT NULL fields are included in the view and the mapping it will work fine.\n" ]
[ 3, 1, 0, 0, 0, 0 ]
[]
[]
[ "c#", "nhibernate" ]
stackoverflow_0000010915_c#_nhibernate.txt
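A rough C# sketch of the flow those mappings imply: load the existing Plan rather than constructing a new parent, attach a new detail object, and let the save-update cascade insert the row. The entity shapes, sessionFactory, customerId and planId are assumptions about the poster's model, not code from the thread:

using System.Collections.Generic;
using NHibernate;

public class Plan { public virtual int Id { get; set; } }

public class AcceptedOffer
{
    public virtual int Id { get; set; }
    public virtual Plan Plan { get; set; }
    public virtual int StatusId { get; set; }
}

public class Customer
{
    public virtual int Id { get; set; }
    public virtual IList<AcceptedOffer> AcceptedOffers { get; set; }
}

public static class OfferRepository
{
    public static void AddAcceptedOffer(ISessionFactory sessionFactory, int customerId, int planId)
    {
        using (ISession session = sessionFactory.OpenSession())
        using (ITransaction tx = session.BeginTransaction())
        {
            Customer customer = session.Get<Customer>(customerId);
            Plan plan = session.Load<Plan>(planId); // proxy only; no new Plan row is created
            customer.AcceptedOffers.Add(new AcceptedOffer { Plan = plan, StatusId = 1 });
            tx.Commit(); // the cascade persists the new detail row here
        }
    }
}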
Q: How to create a tree-view preferences dialog type of interface in C#? I'm writing an application that is basically just a preferences dialog, much like the tree-view preferences dialog that Visual Studio itself uses. The function of the application is simply a pass-through for data from a serial device to a file. It performs many, many transformations on the data before writing it to the file, so the GUI for the application is simply all the settings that dictate what those transformations should be. What's the best way to go about designing/coding a tree-view preferences dialog? The way I've been going about it is building the main window with a docked tree control on the left. Then I have been creating container controls that correspond to each node of the tree. When a node is selected, the app brings that node's corresponding container control to the front, moves it to the right position, and maximizes it in the main window. This seems really, really clunky while designing it. It basically means I have tons of container controls beyond the edge of the main window during design time that I have to keep scrolling the main window over to in order to work with them. I don't know if this totally makes sense the way I'm writing this, but maybe this visual for what I'm talking about will make more sense: Basically I have to work with this huge form, with container controls all over the place, and then do a bunch of run-time reformatting to make it all work. This seems like a lot of extra work. Am I doing this in a totally stupid way? Is there some "obvious" easier way of doing this that I'm missing? A: A tidier way is to create separate forms for each 'pane' and, in each form constructor, set this.TopLevel = false; this.FormBorderStyle = FormBorderStyle.None; this.Dock = DockStyle.Fill; That way, each of these forms can be laid out in its own designer, instantiated one or more times at runtime, and added to the empty area like a normal control. Perhaps the main form could use a SplitContainer with a static TreeView in one panel, and space to add these forms in the other. Once they are added, they could be flipped through using Hide/Show or BringToFront/SendToBack methods. SeparateForm f = new SeparateForm(); MainFormSplitContainer.Panel2.Controls.Add(f); f.Show(); A: Greg Hurlman wrote: Why not just show/hide the proper container when a node is selected in the grid? Have the containers all sized appropriately in the same spot, and hide all but the default, which would be preselected in the grid on load. Unfortunately, that's what I'm trying to avoid. I'm looking for an easy way to handle the interface during design time, with minimal reformatting code needed to get it working during run time. I like Duncan's answer because it means the design of each node's interface can be kept completely separate. This means I don't get overlap on the snapping guidelines and other design time advantages. A: I would probably create several panel classes based on a base class inheriting CustomControl. These controls would then have methods like Save/Load and stuff like that. If so I can design each of these panels separately. I have used a Wizard control that in design mode, handled several pages, so that one could click next in the designer and design all the pages at once through the designer. Though this had several disadvantages when connecting code to the controls, it probably means that you could have a similar setup by building some designer classes. 
I have never myself written any designer classes in VS, so I can't say how to or if it's worth it :-) I'm a little curious about how you intend to handle the load/save of values to/from the controls? There must be a lot of code in one class if all your pages are in one big Form? And yet another way would of course be to generate the GUI code as each page is requested, using info about what type of settings there are.
How to create a tree-view preferences dialog type of interface in C#?
I'm writing an application that is basically just a preferences dialog, much like the tree-view preferences dialog that Visual Studio itself uses. The function of the application is simply a pass-through for data from a serial device to a file. It performs many, many transformations on the data before writing it to the file, so the GUI for the application is simply all the settings that dictate what those transformations should be. What's the best way to go about designing/coding a tree-view preferences dialog? The way I've been going about it is building the main window with a docked tree control on the left. Then I have been creating container controls that correspond to each node of the tree. When a node is selected, the app brings that node's corresponding container control to the front, moves it to the right position, and maximizes it in the main window. This seems really, really clunky while designing it. It basically means I have tons of container controls beyond the edge of the main window during design time that I have to keep scrolling the main window over to in order to work with them. I don't know if this totally makes sense the way I'm writing this, but maybe this visual for what I'm talking about will make more sense: Basically I have to work with this huge form, with container controls all over the place, and then do a bunch of run-time reformatting to make it all work. This seems like a lot of extra work. Am I doing this in a totally stupid way? Is there some "obvious" easier way of doing this that I'm missing?
[ "A tidier way is to create separate forms for each 'pane' and, in each form constructor, set\nthis.TopLevel = false;\nthis.FormBorderStyle = FormBorderStyle.None;\nthis.Dock = DockStyle.Fill;\n\nThat way, each of these forms can be laid out in its own designer, instantiated one or more times at runtime, and added to the empty area like a normal control.\nPerhaps the main form could use a SplitContainer with a static TreeView in one panel, and space to add these forms in the other. Once they are added, they could be flipped through using Hide/Show or BringToFront/SendToBack methods.\nSeparateForm f = new SeparateForm(); \nMainFormSplitContainer.Panel2.Controls.Add(f); \nf.Show();\n\n", "Greg Hurlman wrote:\n\nWhy not just show/hide the proper container when a node is selected in the grid? Have the containers all sized appropriately in the same spot, and hide all but the default, which would be preselected in the grid on load.\n\nUnfortunately, that's what I'm trying to avoid. I'm looking for an easy way to handle the interface during design time, with minimal reformatting code needed to get it working during run time.\nI like Duncan's answer because it means the design of each node's interface can be kept completely separate. This means I don't get overlap on the snapping guidelines and other design time advantages.\n", "I would probably create several panel classes based on a base class inheriting CustomControl. These controls would then have methods like Save/Load and stuff like that. If so I can design each of these panels separately.\nI have used a Wizard control that in design mode, handled several pages, so that one could click next in the designer and design all the pages at once through the designer. Though this had several disadvantages when connecting code to the controls, it probably means that you could have a similar setup by building some designer classes. I have never myself written any designer classes in VS, so I can't say how to or if its worth it :-)\nI'm a little curious of how you intend to handle the load/save of values to/from the controls? There must be a lot of code in one class if all your pages are in one big Form?\nAnd yet another way would of course be to generate the gui code as each page is requested, using info about what type of settings there are. \n" ]
[ 11, 2, 0 ]
[]
[]
[ "c#", "user_interface" ]
stackoverflow_0000003725_c#_user_interface.txt
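A compressed sketch of the accepted answer's pattern, wiring tree selection to borderless child Forms; splitContainer1, treeView1 and the pane keys are hypothetical designer-generated names:

using System.Collections.Generic;
using System.Windows.Forms;

// Each pane is designed as its own Form, then embedded like a control.
public class PaneForm : Form
{
    public PaneForm()
    {
        TopLevel = false;
        FormBorderStyle = FormBorderStyle.None;
        Dock = DockStyle.Fill;
    }
}

public partial class PreferencesDialog : Form
{
    private readonly Dictionary<string, PaneForm> panes = new Dictionary<string, PaneForm>();

    private void AddPane(string nodeKey, PaneForm pane)
    {
        panes[nodeKey] = pane;
        splitContainer1.Panel2.Controls.Add(pane); // right-hand host area
    }

    private void treeView1_AfterSelect(object sender, TreeViewEventArgs e)
    {
        foreach (PaneForm pane in panes.Values)
            pane.Hide();

        PaneForm selected;
        if (panes.TryGetValue(e.Node.Name, out selected))
            selected.Show(); // flip panes instead of repositioning controls at runtime
    }
}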
Q: How to run remote shell scripts from ASP pages? I need to create an ASP page (classic, not ASP.NET) which runs remote shell scripts on a UNIX server, then captures the output into variables in VBScript within the page itself. I have never done ASP or VBScript before. I have tried to google this stuff, but all I find are references to remote server-side scripting, nothing concrete. I could really use: An elementary example of how this could be done. Any other better alternatives to achieve this in a secure manner. Are there any freeware/open source alternatives to these libraries? Any examples? A: If the shell scripts are normally run on a telnet session then you could screen scrape and parse the responses. There are commercial COM components out there such as the Dart telnet library: http://www.dart.com/pttel.aspx that would let you do this. Either that or you could roll your own using AspSock http://www.15seconds.com/component/pg000300.htm A: @Pascal, sadly I'm not aware of any F/OSS alternatives. We usually just buy in these types of libraries provided that they're not hugely expensive, and more often than not the cost is built into the customer's overall project cost. If you had .NET on the server, you could build a COM-wrapped component to do the heavy lifting around System.Net.Sockets.TcpClient. Just a thought.
How to run remote shell scripts from ASP pages?
I need to create an ASP page (classic, not ASP.NET) which runs remote shell scripts on a UNIX server, then captures the output into variables in VBScript within the page itself. I have never done ASP or VBScript before. I have tried to google this stuff, but all I find are references to remote server-side scripting, nothing concrete. I could really use: An elementary example of how this could be done. Any other better alternatives to achieve this in a secure manner. Are there any freeware/open source alternatives to these libraries? Any examples?
[ "If the shell scripts are normally run on a telnet session then you could screen scrape and parse the responses. There are commercial COM components out there such as the Dart telnet library: http://www.dart.com/pttel.aspx that would let you do this.\nEither that or you could roll your own using AspSock http://www.15seconds.com/component/pg000300.htm\n", "@Pascal, sadly I'm not aware of any F/OSS alternatives. We usually just buy in these types of libraries provided that they're not hugely expensive, and more often than not the cost is built into the customer's overall project cost.\nIf you had .NET on the server, you could build a COM wrapped component to do the heavy lifting around System.Net.Sockets.TcpClient. Just a thought.\n" ]
[ 0, 0 ]
[]
[]
[ "asp_classic", "vbscript" ]
stackoverflow_0000011135_asp_classic_vbscript.txt
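If the .NET route from the second answer were taken, the COM-visible wrapper might look roughly like this. Demo.RemoteShell is a made-up ProgID, and the networking is deliberately naive -- a real component would need telnet option negotiation (or, better, an SSH library) plus authentication and timeouts. Classic ASP would then reach it with Server.CreateObject("Demo.RemoteShell") after the assembly is registered with regasm:

using System.IO;
using System.Net.Sockets;
using System.Runtime.InteropServices;

[ComVisible(true)]
[ProgId("Demo.RemoteShell")] // hypothetical ProgID
public class RemoteShell
{
    // Oversimplified: raw TCP, one command out, one line of output back.
    public string Run(string host, int port, string command)
    {
        using (TcpClient client = new TcpClient(host, port))
        using (StreamWriter writer = new StreamWriter(client.GetStream()))
        using (StreamReader reader = new StreamReader(client.GetStream()))
        {
            writer.WriteLine(command);
            writer.Flush();
            return reader.ReadLine(); // first line of output only
        }
    }
}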
Q: Automatically incremented revision number doesn't show up in the About Box I have a small VB.NET application that I'm working on using the full version of Visual Studio 2005. In the Publish properties of the project, I have it set to Automatically increment revision with each publish. The issue is that it's only incrementing the revision in the Setup files. It doesn't seem to be updating the version number in the About Box (which is the generic, built-in, About Box template). That version number seems to be coming from My.Application.Info.Version. What should I be using instead so that my automatically incrementing revision number shows up in the about box? A: Change the code for the About box to Me.LabelVersion.Text = String.Format("Version {0}", My.Application.Deployment.CurrentVersion.ToString) Please note that all the other answers are correct for "how do I get my assembly version", not the stated question "how do I show my publish version". A: It took me a second to find this, but I believe this is what you are looking for: using System; using System.Reflection; public class VersionNumber { public static void Main() { System.Reflection.Assembly assembly = System.Reflection.Assembly.GetExecutingAssembly(); Version version = assembly.GetName().Version; Console.WriteLine ("Version: {0}", version); Console.WriteLine ("Major: {0}", version.Major); Console.WriteLine ("Minor: {0}", version.Minor); Console.WriteLine ("Build: {0}", version.Build); Console.WriteLine ("Revision: {0}", version.Revision); Console.Read(); } } It was based upon the code provided at the following site - http://en.csharp-online.net/Display_type_version_number A: I'm no VB.NET expert, but have you tried to set the value to for example 1.0.0.*? This should increase the revision number (at least it does in the AssemblyInfo.cs in C#). A: The option you select is only to update the setup number. To update the program number you have to modify the AssemblyInfo. C# [assembly: AssemblyVersion("X.Y.*")] [assembly: AssemblyFileVersion("X.Y.*")] VB.NET <Assembly: AssemblyVersion("X.Y.*")> A: It's a maximum of 65535 for each of the 4 values, but when using 1.0.* or 1.0.*.*, the Assembly Linker will use a coded timestamp (so it's not a simple auto-increment, and it can repeat!) that will fit 65535. See my answer to this question for more links and details.
Automatically incremented revision number doesn't show up in the About Box
I have a small VB.NET application that I'm working on using the full version of Visual Studio 2005. In the Publish properties of the project, I have it set to Automatically increment revision with each publish. The issue is that it's only incrementing the revision in the Setup files. It doesn't seem to be updating the version number in the About Box (which is the generic, built-in, About Box template). That version number seems to be coming from My.Application.Info.Version. What should I be using instead so that my automatically incrementing revision number shows up in the about box?
[ "Change the code for the About box to \nMe.LabelVersion.Text = String.Format(\"Version {0}\", My.Application.Deployment.CurrentVersion.ToString)\n\nPlease note that all the other answers are correct for \"how do I get my assembly version\", not the stated question \"how do I show my publish version\".\n", "It took me a second to find this, but I believe this is what you are looking for:\nusing System;\nusing System.Reflection;\npublic class VersionNumber\n{\n public static void Main()\n {\n System.Reflection.Assembly assembly = System.Reflection.Assembly.GetExecutingAssembly();\n Version version = assembly.GetName().Version;\n Console.WriteLine (\"Version: {0}\", version);\n Console.WriteLine (\"Major: {0}\", version.Major);\n Console.WriteLine (\"Minor: {0}\", version.Minor);\n Console.WriteLine (\"Build: {0}\", version.Build);\n Console.WriteLine (\"Revision: {0}\", version.Revision);\n Console.Read();\n }\n}\n\nIt was based upon the code provided at the following site - http://en.csharp-online.net/Display_type_version_number\n", "I'm no VB.NET expert, but have you tried to set the value to for example 1.0.0.*?\nThis should increase the revision number (at least it does in the AssemblyInfo.cs in C#).\n", "The option you select is only to update the setup number. To update the program number you have to modify the AssemblyInfo. \nC#\n[assembly: AssemblyVersion(\"X.Y.\")]\n[assembly: AssemblyFileVersion(\"X.Y.\")]\nVB.NET\nAssembly: AssemblyVersion(\"X.Y.*\")\n", "It's a maximum of 65535 for each of the 4 values, but when using 1.0.* or 1.0.*.*, the Assembly Linker will use a coded timestamp (so it's not a simple auto-increment, and it can repeat!) that will fit 65535.\nSee my answer to this question for more links and details.\n" ]
[ 1, 1, 0, 0, 0 ]
[]
[]
[ "vb.net", "visual_studio" ]
stackoverflow_0000011279_vb.net_visual_studio.txt
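For completeness, a C# equivalent of the accepted answer's idea, placed in the About box's load handler, say; labelVersion is a hypothetical Label, and the check guards the case where the app isn't running as a ClickOnce deployment (System.Deployment.dll must be referenced):

using System.Deployment.Application;
using System.Reflection;

// Prefer the ClickOnce publish version when deployed; otherwise fall
// back to the assembly version shown by the other answers.
string version = ApplicationDeployment.IsNetworkDeployed
    ? ApplicationDeployment.CurrentDeployment.CurrentVersion.ToString()
    : Assembly.GetExecutingAssembly().GetName().Version.ToString();

labelVersion.Text = string.Format("Version {0}", version);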
Q: How to encourage someone to learn programming? I have a friend that has a little bit of a holiday coming up and they want ideas on what they should do during the holiday. I plan to suggest programming to them; what are the pros and cons that I need to mention? I'll add to the list below as people reply; I apologise if I duplicate any entries. Pros I have so far Minimal money requirement (they already have a computer) Will help them to think in new ways (Rob Cooper) Great challenge, every day really is a fresh challenge in some way, shape or form. Not many jobs can truly offer that. (Rob Cooper) I like the way it makes me think.. I look at EVERYTHING more logically as my skills improve.. This helps with general living as well as programming. (Rob Cooper) Money is/can be pretty good. (Rob Cooper) It's a pretty portable trade.. With collaboration tech as it is, you can pretty much work anywhere in the world so long as you have an Internet connection. (Rob Cooper) It's an exciting industry to work in, there's massive amounts of tech to work and play with! (Quarrelsome) Jetpacks. Programming is Technology and the more time we spend with technology the closer we get to having Jetpacks. (Teifion: This is a really cool analogy!) (Saj) Profitable way of Exercising Brain Muscles. (Saj) It makes you look brilliant to some audience. (Saj) Makes you tech-smart. (Saj) Makes you eligible to the future world. (Saj) It's easy, fun, not in a math way.. (kiwiBastard) If the person likes problem solving then programming is no better example. (kiwiBastard) Brilliant sense of achievement when you can interact with something you have designed and coded (kiwiBastard) Great way to meet chicks/chaps - erm, maybe not that one (Teifion: I dunno where you do programming but I want to come visit some time) (epatel) Learning how to program is like learning spell casting at Hogwarts. The computer will be your servant forever... Cons I have so far Can be frustrating when it's not working Not physical exercise (Rob Cooper) There are a lot of people doing it just for the money. They have no love for the craft and just appear lazy, annoying and sometimes it can really grind my gears seeing an industry and workforce I enjoy so much being diluted with crap. Which can often reflect badly on all of us. (Rob Cooper) Not so sure about the initial cost.. Yeah you can get started with Java or something at low cost, but for me, locally, the vast demand is for .NET developers, which can be costly getting up and running with. However, this is rapidly/has not becoming the case with the amount of work put in by MS with releasing pretty damn good Express editions of their main development product line. (Rob Cooper) It's a lifelong career.. I truly feel you never really become a "master" by nature of the industry, you stop for 1-2 years. You're behind the times.. Some people do not like the pace. (Rob Cooper) Some geeks can be hard to work with.. While I think the general geek movement is really changing for the better, you will always have the classic "I am more intelligent than you" geeks that can really just be a pain in the ass for all! (Saj) Can cause virtual damage. (Saj) Can make one throw their computer away. (Saj) Can make one only virtually available to the world. A: I do it for the ladies :D Seriously though, for me Pros Great challenge, every day really is a fresh challenge in some way, shape or form. Not many jobs can truly offer that. I like the way it makes me think.. 
I look at EVERYTHING more logically as my skills improve.. This helps with general living as well as programming. Money is/can be pretty good. It's a pretty portable trade.. With collaboration tech as it is, you can pretty much work anywhere in the world so long as you have an Internet connection. It's an exciting industry to work in, there's massive amounts of tech to work and play with! Cons (some of these can easily be Pros too) There are a lot of people doing it just for the money. They have no love for the craft and just appear lazy, annoying and sometimes it can really grind my gears seeing an industry and workforce I enjoy so much being diluted with crap. Which can often reflect badly on all of us. Not so sure about the initial cost.. Yeah you can get started with Java or something at low cost, but for me, locally, the vast demand is for .NET developers, which can be costly getting up and running with. However, this is rapidly/has not becoming the case with the amount of work put in by MS with releasing pretty damn good Express editions of their main development product line. It's a lifelong career.. I truly feel you never really become a "master" by nature of the industry, you stop for 1-2 years. You're behind the times.. Some people do not like the pace. Some geeks can be hard to work with.. While I think the general geek movement is really changing for the better, you will always have the classic "I am more intelligent than you" geeks that can really just be a pain in the ass for all! A: Jetpacks. Programming is Technology and the more time we spend with technology the closer we get to having Jetpacks. A: Programming is one of the ways to be the richest person in the world. So far, we do not know any other. A: My advice would be that you don't push your friend too hard. If you're going to suggest they take up programming, only mention it casually. Suggesting recreational computer programming to someone "unenlightened" could be taken about the same way as suggesting they do some recreational mathematics, or stamp collecting (no offense to any philatelists out there!). A: Learning how to program is like learning spell casting at Hogwarts. The computer will be your servant forever... --if you have a Mac-- A simple start could be just to look at Automator (there are several screencasts online) which is a simple way of making programs do a little more than sit and wait for user interaction...not real programming but gives a feel for things that a little programming can do. A: If the person likes problem solving then programming is no better example. Brilliant sense of achievement when you can interact with something you have designed and coded Great way to meet chicks/chaps - erm, maybe not that one A: I'll follow up on Carl Russmann's comments by suggesting that you shouldn't push too hard on your friend. Most readers of this site find programming to be interesting and fun, but we are really weird. For most people, learning programming would be very hard work, with little short-term benefit. Most people have no aptitude for programming, and would find it to be as much fun as doing their income taxes. That's a big Con. A: You could tell him how into programmers girls are.. you know, lie.
How to encourage someone to learn programming?
I have a friend that has a little bit of a holiday coming up and they want ideas on what they should do during the holiday. I plan to suggest programming to them; what are the pros and cons that I need to mention? I'll add to the list below as people reply; I apologise if I duplicate any entries. Pros I have so far Minimal money requirement (they already have a computer) Will help them to think in new ways (Rob Cooper) Great challenge, every day really is a fresh challenge in some way, shape or form. Not many jobs can truly offer that. (Rob Cooper) I like the way it makes me think.. I look at EVERYTHING more logically as my skills improve.. This helps with general living as well as programming. (Rob Cooper) Money is/can be pretty good. (Rob Cooper) It's a pretty portable trade.. With collaboration tech as it is, you can pretty much work anywhere in the world so long as you have an Internet connection. (Rob Cooper) It's an exciting industry to work in, there's massive amounts of tech to work and play with! (Quarrelsome) Jetpacks. Programming is Technology and the more time we spend with technology the closer we get to having Jetpacks. (Teifion: This is a really cool analogy!) (Saj) Profitable way of Exercising Brain Muscles. (Saj) It makes you look brilliant to some audience. (Saj) Makes you tech-smart. (Saj) Makes you eligible to the future world. (Saj) It's easy, fun, not in a math way.. (kiwiBastard) If the person likes problem solving then programming is no better example. (kiwiBastard) Brilliant sense of achievement when you can interact with something you have designed and coded (kiwiBastard) Great way to meet chicks/chaps - erm, maybe not that one (Teifion: I dunno where you do programming but I want to come visit some time) (epatel) Learning how to program is like learning spell casting at Hogwarts. The computer will be your servant forever... Cons I have so far Can be frustrating when it's not working Not physical exercise (Rob Cooper) There are a lot of people doing it just for the money. They have no love for the craft and just appear lazy, annoying and sometimes it can really grind my gears seeing an industry and workforce I enjoy so much being diluted with crap. Which can often reflect badly on all of us. (Rob Cooper) Not so sure about the initial cost.. Yeah you can get started with Java or something at low cost, but for me, locally, the vast demand is for .NET developers, which can be costly getting up and running with. However, this is rapidly/has not becoming the case with the amount of work put in by MS with releasing pretty damn good Express editions of their main development product line. (Rob Cooper) It's a lifelong career.. I truly feel you never really become a "master" by nature of the industry, you stop for 1-2 years. You're behind the times.. Some people do not like the pace. (Rob Cooper) Some geeks can be hard to work with.. While I think the general geek movement is really changing for the better, you will always have the classic "I am more intelligent than you" geeks that can really just be a pain in the ass for all! (Saj) Can cause virtual damage. (Saj) Can make one throw their computer away. (Saj) Can make one only virtually available to the world.
[ "I do it for the ladies :D\nSeriously though, for me\nPro's\n\nGreat challenge, every day really is a fresh challenge in some way, shape or form. Not many jobs can truly offer that.\nI like the way it makes me think.. I look at EVERYTHING more logically as my skills improve.. This helps with general living as well as programming.\nMoney is/can be pretty good.\nIts a pretty portable trade.. With collaboration tech as it is, you can pretty much work anywhere in the world so long as you have an Internet connection.\nIt's an exciting industry to work in, theres massive amounts of tech to work and play with!\n\nCons (some of these can easily be Pro's too)\n\nThere are a lot of people doing it just for the money. They have no love for the craft and just appear lazy, annoying and sometimes it can really grind my gears seeing an industry and workforce I enjoy so much being diluted with crap. Which can often reflect badly on all of us.\nNot so sure about the initial cost.. Yeah you can get started with Java or something at low cost, but for me, locally, the vast demand is for .NET developers, which can be costly getting up and running with. However, this is rapidly/has not becoming the case with the amount of work put in by MS with releasing pretty damn good Express editions of their main development product line.\nIts a lifelong career.. I truly feel you never really become a \"master\" by nature of the industry, you stop for 1-2 years. You're behind the times.. Some people do not like the pace.\nSome geeks can be hard to work with.. While I think the general geek movement is really changing for the better, you will always have the classic \"I am more intelligent than you\" geeks that can really just be a pain in the ass for all!\n\n", "Jetpacks. \nProgramming is Technology and the more time we spend with technology the closer we get to having Jetpacks.\n", "Programming is one of the ways to be the richest person in the world. So far, we do not know any other.\n", "My advice would be that you don't push your friend too hard. If you're going to suggest they take up programming, only mention it casually.\nSuggesting recreational computer programming to someone \"unenlightened\" could be taken about the same way as suggesting they do some recreational mathematics, or stamp collecting (no offense to any philatelists out there!).\n", "Learning how to program is like learning spell casting at Hogwarts . \nThe computer will be your servant forever...\n--if you have a Mac--\nA simple start could be just to look at Automator (are several screencasts online ie) which is a simple way of making programs do a little more than sit and wait for user interaction...not real programming but gives a feel for things that a little programming can do.\n", "\nIf the person likes problem solving then programming is no better example.\nBrilliant sense of achivement when you can interact with something you have designed and coded\nGreat way to meet chicks/chaps - erm, maybe not that one \n\n", "I'll follow up on Carl Russmann's comments by suggesting that you shouldn't push too hard on your\nfriend. \nMost readers of this site find programming to be interesting and fun, but we are really weird. \nFor most people, learning programming would be very hard work, with little short-term benefit. Most people have no aptitude for programming, and would find it to be as much fun as doing their income taxes. That's a big Con.\n", "You could tell him how into programmers girls are.. you know, lie.\n" ]
[ 8, 6, 3, 3, 2, 1, 1, 0 ]
[]
[]
[ "language_agnostic" ]
stackoverflow_0000010872_language_agnostic.txt
Q: How to host a WPF form in a MFC application I'm looking for any resources on hosting a WPF form within an existing MFC application. Can anyone point me in the right direction on how to do this? A: From what I understand (haven't tried myself), it's almost as simple as just giving the WPF control the parent's handle. Here's a Walkthrough: Hosting WPF Content in Win32.
How to host a WPF form in a MFC application
I'm looking for any resources on hosting a WPF form within an existing MFC application. Can anyone point me in the right direction on how to do this?
[ "From what I understand (haven't tried myself), it's almost as simple as just giving the WPF control the parent's handle. Here's a Walkthrough: Hosting WPF Content in Win32.\n" ]
[ 5 ]
[]
[]
[ "c#", "mfc", "wpf" ]
stackoverflow_0000011423_c#_mfc_wpf.txt
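The core step of that walkthrough, reduced to a hedged C# sketch: create an HwndSource parented to the MFC window's HWND and hand it the WPF content. An MFC app would typically drive this through C++/CLI glue; the names and style constants here are illustrative:

using System;
using System.Windows;
using System.Windows.Interop;

public static class WpfHost
{
    const int WS_CHILD = 0x40000000;
    const int WS_VISIBLE = 0x10000000;

    public static IntPtr Attach(IntPtr mfcParentHwnd, UIElement content,
                                int x, int y, int width, int height)
    {
        HwndSourceParameters p = new HwndSourceParameters("wpfPane", width, height);
        p.ParentWindow = mfcParentHwnd; // the MFC window hosts the WPF island
        p.WindowStyle = WS_CHILD | WS_VISIBLE;
        p.SetPosition(x, y);

        HwndSource source = new HwndSource(p);
        source.RootVisual = content; // any WPF UserControl/Page root
        return source.Handle;
    }
}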