Dataset schema (column: type, value range):
content: string (length 86 to 88.9k)
title: string (length 0 to 150)
question: string (length 1 to 35.8k)
answers: sequence
answers_scores: sequence
non_answers: sequence
non_answers_scores: sequence
tags: sequence
name: string (length 30 to 130)
Q: NHibernate and Oracle connect through Windows Authentication How do I use Windows Authentication to connect to an Oracle database? Currently, I just use an Oracle username and password; however, a requirement is to give the user, on install, the option of selecting Windows Authentication, since we offer the same option for SQL Server. A: You need to modify the AUTHENTICATION_SERVICES entry in SQLNET.ORA to this: SQLNET.AUTHENTICATION_SERVICES= (NTS) As well, you will need to set up the accounts in Oracle to match the Windows accounts. Have a look at http://www.dba-oracle.com/bk_sqlnet_authentication_services.htm for more details.
NHibernate and Oracle connect through Windows Authentication
How do I use Windows Authentication to connect to an Oracle database? Currently, I just use an Oracle username and password; however, a requirement is to give the user, on install, the option of selecting Windows Authentication, since we offer the same option for SQL Server.
[ "You need to modify the AUTHENTICATION_SERVICES entry in SQLNET.ORA to this:\nSQLNET.AUTHENTICATION_SERVICES= (NTS)\n\nAs well, you will need to setup the accounts in Oracle to match the Windows accounts. Have a look at http://www.dba-oracle.com/bk_sqlnet_authentication_services.htm for more details.\n" ]
[ 4 ]
[]
[]
[ "nhibernate", "oracle", "windows" ]
stackoverflow_0000030775_nhibernate_oracle_windows.txt
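A minimal sketch of how the NHibernate side of this might be wired up once SQLNET.AUTHENTICATION_SERVICES=(NTS) is in place. The driver class, dialect, data source name, and the externally identified account implied by "User Id=/" are assumptions about a typical ODP.NET setup, not details taken from the question.

using NHibernate;
using NHibernate.Cfg;

// Sketch only: assumes the Oracle client is configured for NTS authentication and
// that an externally identified user (e.g. "OPS$DOMAIN\USER") exists in the database.
public static class SessionFactoryBuilder
{
    public static ISessionFactory Build(bool useWindowsAuth)
    {
        var cfg = new Configuration();
        cfg.SetProperty("connection.driver_class", "NHibernate.Driver.OracleDataClientDriver");
        cfg.SetProperty("dialect", "NHibernate.Dialect.Oracle10gDialect");

        // "User Id=/" asks the Oracle client to log on with the current OS (Windows) identity;
        // otherwise fall back to a normal username/password connection string.
        cfg.SetProperty("connection.connection_string", useWindowsAuth
            ? "Data Source=MYDB;User Id=/;"
            : "Data Source=MYDB;User Id=scott;Password=tiger;");

        return cfg.BuildSessionFactory();
    }
}

At install time, the authentication mode the user picks could simply decide which of the two connection strings gets written to the application's configuration.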
Q: Fast SQL Server 2005 script generation It seems like the generation of SQL scripts from SQL Server Management Studio is terribly slow. I think that the old Enterprise Manager could run laps around the newer script generation tool. I've seen a few posts here and there with other folks complaining about the speed, but I haven't seen much offered in the way of alternatives. Is there a low-cost/free tool for scripting an entire SQL Server 2005 database that will perform better than SSMS? It would be hard to do worse. A: We are using the tools by RedGate, which I personally find very useful in any aspect of work with databases. For scripting I would recommend SQL Compare (you need the pro version for scripting). SQL Compare is a must-have for deploying schema changes from the deployment DB to the live server and a real timesaver. Those tools are not free, but I think they could save you money in the long run. A: See the Database Publishing Wizard that is part of the SQL Server Hosting Toolkit. It generates a single SQL file for both schema and data. A: I don't know what is "terribly slow" for you, but I get decent performance with SQL Server 2005 Management Studio. In either case, RedGate products are very cool. Unfortunately they are not free. A: What kind of script generation are you talking about? Generating create scripts from the objects in the database is way faster in SSMS compared to EM. But if you are running a select or something that gives you lots of rows in the grid, it is crazy slow.. like scripts generating insert statements for all rows in a table; if you have lots of data, it is almost not doable.
Fast SQL Server 2005 script generation
It seems like the generation of SQL scripts from SQL Server Management Studio is terribly slow. I think that the old Enterprise Manager could run laps around the newer script generation tool. I've seen a few posts here and there with other folks complaining about the speed, but I haven't seen much offered in the way of alternatives. Is there a low-cost/free tool for scripting an entire SQL Server 2005 database that will perform better than SSMS? It would be hard to do worse.
[ "We are using the tools by RedGate which I personally find very useful in any aspect of work with databases. For scripting I would recommend the SQL Compare (you need a pro version for scripting). The SQL Compare is a must have for deploying schema changes from the deployment DB to the live Server and a real timesaver.\nThose tools are not free but I think they could save you money in a long run\n", "See the Database Publishing Wizard that is part of the SQL Server Hosting Toolkit. It generates a single SQL file for both schema and data.\n", "I don't know what is \"terribly slow\" for you, but I have a decent performance with SQL 2005 Management Studio. In either case, RedGate products are very cool. Unfortunately they are not free.\n", "What kind of scrpt generation are you talking about now?, generating create scripts from the objects in the database is way faster in SSMS compared to EM.\nBut if you are running an select or something that gives you lots of rows in the grid, it is crazy slow.. like scripts generating inserts statements of all rows in an table, if you got lots of data, it is almost not doable.\n" ]
[ 3, 3, 0, 0 ]
[]
[]
[ "scripting", "sql_server" ]
stackoverflow_0000031296_scripting_sql_server.txt
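If a programmatic alternative to the SSMS wizard is acceptable, SQL Server 2005's SMO library can generate the same CREATE scripts from code; whether it is any faster in a given environment would need measuring. A rough sketch (server name, database name, and output path are placeholders):

using System.IO;
using Microsoft.SqlServer.Management.Smo;

// Sketch: script every user table in a database to one .sql file with SMO.
// Requires references to Microsoft.SqlServer.Smo and Microsoft.SqlServer.ConnectionInfo.
class ScriptAllTables
{
    static void Main()
    {
        Server server = new Server("localhost");             // placeholder server
        Database db = server.Databases["AdventureWorks"];    // placeholder database

        Scripter scripter = new Scripter(server);
        scripter.Options.IncludeHeaders = true;
        scripter.Options.DriAll = true;                       // include keys and constraints
        scripter.Options.ScriptDrops = false;

        using (StreamWriter writer = new StreamWriter(@"C:\temp\schema.sql"))
        {
            foreach (Table table in db.Tables)
            {
                if (table.IsSystemObject) continue;
                foreach (string line in scripter.Script(new SqlSmoObject[] { table }))
                    writer.WriteLine(line);
                writer.WriteLine("GO");
            }
        }
    }
}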
Q: How do I add a constant column value during data transfer from CSV to SQL? I am reading in a CSV file and translating it to a SQL table. The kicker is that one of the columns in the table is of data type ID and needs to be set to a constant (in this case 2). I am not sure how to do this. A: You can use a Derived Column Transformation in which you'll create a new output column and set its value to 2. You can then use that column when outputting to SQL.
How do I add a constant column value during data transfer from CSV to SQL?
I am reading in a CSV file and translating it to a SQL table. The kicker is that one of the columns in the table is of data type ID and needs to be set to a constant (in this case 2). I am not sure how to do this.
[ "You can use a Derived Column Transformation in which you'll create a new output column and set its value to 2. You can then use that column when outputting to SQL.\n" ]
[ 25 ]
[]
[]
[ "csv", "sql_server", "ssis" ]
stackoverflow_0000031324_csv_sql_server_ssis.txt
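The Derived Column Transformation above is the SSIS answer. For anyone doing the same load outside SSIS, the same idea (append a constant-valued column before loading) looks roughly like this in plain ADO.NET; the file path, column names, target table, and connection string are invented for the sketch, and real CSV parsing should also handle quoting:

using System.Data;
using System.Data.SqlClient;
using System.IO;

// Sketch: read a CSV into a DataTable, add a constant ID column, and bulk copy it to SQL Server.
class CsvImport
{
    static void Main()
    {
        DataTable table = new DataTable();
        table.Columns.Add("Name", typeof(string));       // hypothetical CSV column
        table.Columns.Add("Amount", typeof(decimal));    // hypothetical CSV column
        table.Columns.Add("TypeId", typeof(int));        // the constant column

        foreach (string line in File.ReadAllLines(@"C:\data\import.csv"))
        {
            string[] parts = line.Split(',');
            table.Rows.Add(parts[0], decimal.Parse(parts[1]), 2);   // constant value 2
        }

        using (SqlBulkCopy bulk = new SqlBulkCopy("Data Source=.;Initial Catalog=MyDb;Integrated Security=SSPI;"))
        {
            bulk.DestinationTableName = "dbo.ImportedRows";          // hypothetical target table
            bulk.WriteToServer(table);
        }
    }
}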
Q: Proprietary plug-ins for GPL programs: what about interpreted languages? I am developing a GPL-licensed application in Python and need to know if the GPL allows my program to use proprietary plug-ins. This is what the FSF has to say on the issue: If a program released under the GPL uses plug-ins, what are the requirements for the licenses of a plug-in? It depends on how the program invokes its plug-ins. If the program uses fork and exec to invoke plug-ins, then the plug-ins are separate programs, so the license for the main program makes no requirements for them. If the program dynamically links plug-ins, and they make function calls to each other and share data structures, we believe they form a single program, which must be treated as an extension of both the main program and the plug-ins. This means the plug-ins must be released under the GPL or a GPL-compatible free software license, and that the terms of the GPL must be followed when those plug-ins are distributed. If the program dynamically links plug-ins, but the communication between them is limited to invoking the ‘main’ function of the plug-in with some options and waiting for it to return, that is a borderline case. The distinction between fork/exec and dynamic linking, besides being kind of artificial, doesn't carry over to interpreted languages: what about a Python/Perl/Ruby plugin, which gets loaded via import or execfile? (edit: I understand why the distinction between fork/exec and dynamic linking, but it seems like someone who wanted to comply with the GPL but go against the "spirit" --I don't-- could just use fork/exec and interprocess communication to do pretty much anything). The best solution would be to add an exception to my license to explicitly allow the use of proprietary plugins, but I am unable to do so since I'm using Qt/PyQt which is GPL. A: he distinction between fork/exec and dynamic linking, besides being kind of artificial, I don't think its artificial at all. Basically they are just making the division based upon the level of integration. If the program has "plugins" which are essentially fire and forget with no API level integration, then the resulting work is unlikely to be considered a derived work. Generally speaking a plugin which is merely forked/exec'ed would fit this criteria, though there may be cases where it does not. This case especially applies if the "plugin" code would work independently of your code as well. If, on the other hand, the code is deeply dependent upon the GPL'ed work, such as extensively calling APIs, or tight data structure integration, then things are more likely to be considered a derived work. Ie, the "plugin" cannot exist on its own without the GPL product, and a product with this plugin installed is essentially a derived work of the GPLed product. So to make it a little more clear, the same principles could apply to your interpreted code. If the interpreted code relies heavily upon your APIs (or vice-versa) then it would be considered a derived work. If it is just a script that executes on its own with extremely little integration, then it may not. Does that make more sense? A: @Daniel The distinction between fork/exec and dynamic linking, besides being kind of artificial, doesn't carry over to interpreted languages: what about a Python/Perl/Ruby plugin, which gets loaded via import or execfile? I'm not sure that the distinction is artificial. After a dynamic load the plugin code shares an execution context with the GPLed code. After a fork/exec it does not. 
In anycase I would guess that importing causes the new code to run in the same execution context as the GPLed bit, and you should treat it like the dynamic link case. No? A: How much info are you sharing between the Plugins and the main program? If you are doing anything more than just executing them and waiting for the results (sharing no data between the program and the plugin in the process) then you could most likely get away with them being proprietary, otherwise they would probably need to be GPL'd.
Proprietary plug-ins for GPL programs: what about interpreted languages?
I am developing a GPL-licensed application in Python and need to know if the GPL allows my program to use proprietary plug-ins. This is what the FSF has to say on the issue: If a program released under the GPL uses plug-ins, what are the requirements for the licenses of a plug-in? It depends on how the program invokes its plug-ins. If the program uses fork and exec to invoke plug-ins, then the plug-ins are separate programs, so the license for the main program makes no requirements for them. If the program dynamically links plug-ins, and they make function calls to each other and share data structures, we believe they form a single program, which must be treated as an extension of both the main program and the plug-ins. This means the plug-ins must be released under the GPL or a GPL-compatible free software license, and that the terms of the GPL must be followed when those plug-ins are distributed. If the program dynamically links plug-ins, but the communication between them is limited to invoking the ‘main’ function of the plug-in with some options and waiting for it to return, that is a borderline case. The distinction between fork/exec and dynamic linking, besides being kind of artificial, doesn't carry over to interpreted languages: what about a Python/Perl/Ruby plugin, which gets loaded via import or execfile? (edit: I understand why the distinction between fork/exec and dynamic linking, but it seems like someone who wanted to comply with the GPL but go against the "spirit" --I don't-- could just use fork/exec and interprocess communication to do pretty much anything). The best solution would be to add an exception to my license to explicitly allow the use of proprietary plugins, but I am unable to do so since I'm using Qt/PyQt which is GPL.
[ "\nhe distinction between fork/exec and dynamic linking, besides being kind of artificial,\n\nI don't think its artificial at all. Basically they are just making the division based upon the level of integration. If the program has \"plugins\" which are essentially fire and forget with no API level integration, then the resulting work is unlikely to be considered a derived work. Generally speaking a plugin which is merely forked/exec'ed would fit this criteria, though there may be cases where it does not. This case especially applies if the \"plugin\" code would work independently of your code as well.\nIf, on the other hand, the code is deeply dependent upon the GPL'ed work, such as extensively calling APIs, or tight data structure integration, then things are more likely to be considered a derived work. Ie, the \"plugin\" cannot exist on its own without the GPL product, and a product with this plugin installed is essentially a derived work of the GPLed product.\nSo to make it a little more clear, the same principles could apply to your interpreted code. If the interpreted code relies heavily upon your APIs (or vice-versa) then it would be considered a derived work. If it is just a script that executes on its own with extremely little integration, then it may not.\nDoes that make more sense?\n", "@Daniel The distinction between fork/exec and dynamic linking, besides being kind of artificial, doesn't carry over to interpreted languages: what about a Python/Perl/Ruby plugin, which gets loaded via import or execfile?\nI'm not sure that the distinction is artificial. After a dynamic load the plugin code shares an execution context with the GPLed code. After a fork/exec it does not.\nIn anycase I would guess that importing causes the new code to run in the same execution context as the GPLed bit, and you should treat it like the dynamic link case. No?\n", "How much info are you sharing between the Plugins and the main program? If you are doing anything more than just executing them and waiting for the results (sharing no data between the program and the plugin in the process) then you could most likely get away with them being proprietary, otherwise they would probably need to be GPL'd.\n" ]
[ 7, 1, 0 ]
[]
[]
[ "interpreted_language", "licensing", "open_source", "plugins", "python" ]
stackoverflow_0000031412_interpreted_language_licensing_open_source_plugins_python.txt
Q: Storing a file in a database as opposed to the file system? Generally, how bad of a performance hit is storing a file in a database (specifically mssql) as opposed to the file system? I can't come up with a reason outside of application portability that I would want to store my files as varbinaries in SQL Server. A: Have a look at this answer: Storing Images in DB - Yea or Nay? Essentially, the space and performance hit can be quite big, depending on the number of users. Also, keep in mind that Web servers are cheap and you can easily add more to balance the load, whereas the database is the most expensive and hardest to scale part of a web architecture usually. There are some opposite examples (e.g., Microsoft Sharepoint), but usually, storing files in the database is not a good idea. Unless possibly you write desktop apps and/or know roughly how many users you will ever have, but on something as random and unexpectable like a public web site, you may pay a high price for storing files in the database. A: If you can move to SQL Server 2008, you can take advantage of the FILESTREAM support which gives you the best of both - the files are stored in the filesystem, but the database integration is much better than just storing a filepath in a varchar field. Your query can return a standard .NET file stream, which makes the integration a lot simpler. Getting Started with FILESTREAM Storage A: I'd say, it depends on your situation. For example, I work in local government, and we have lots of images like mugshots, etc. We don't have a high number of users, but we need to have good security and auditing around the data. The database is a better solution for us since it makes this easier and we aren't going to run into scaling problems. A: What's the question here? Modern DBMS SQL2008 have a variety of ways of dealing with BLOBs which aren't just sticking in them in a table. There are pros and cons, of course, and you might need to think about it a little deeper. This is an interesting paper, by the late (?) Jim Gray To BLOB or Not To BLOB: Large Object Storage in a Database or a Filesystem A: In my own experience, it is always better to store files as files. The reason is that the filesystem is optimised for file storeage, whereas a database is not. Of course, there are some exceptions (e.g. the much heralded next-gen MS filesystem is supposed to be built on top of SQL server), but in general that's my rule. A: While performance is an issue, I think modern database designs have made it much less of an issue for small files. Performance aside, it also depends on just how tightly-coupled the data is. If the file contains data that is closely related to the fields of the database, then it conceptually belongs close to it and may be stored in a blob. If it contains information which could potentially relate to multiple records or may have some use outside of the context of the database, then it belongs outside. For example, an image on a web page is fetched on a separate request from the page that links to it, so it may belong outside (depending on the specific design and security considerations). Our compromise, and I don't promise it's the best, has been to store smallish XML files in the database but images and other files outside it. A: We made the decision to store as varbinary for http://www.freshlogicstudios.com/Products/Folders/ halfway expecting performance issues. I can say that we've been pleasantly surprised at how well it's worked out. A: I agree with @ZombieSheep. 
Just one more thing - I generally don't think that databases actually need be portable because you miss all the features your DBMS vendor provides. I think that migrating to another database would be the last thing one would consider. Just my $.02 A: The overhead of having to parse a blob (image) into a byte array and then write it to disk in the proper file name and then reading it is enough of an overhead hit to discourage you from doing this too often, especially if the files are rather large. A: Not to be vague or anything but I think the type of 'file' you will be storing is one of the biggest determining factors. If you essentially talking about a large text field which could be stored as file my preference would be for db storage.
Storing a file in a database as opposed to the file system?
Generally, how bad of a performance hit is storing a file in a database (specifically mssql) as opposed to the file system? I can't come up with a reason outside of application portability that I would want to store my files as varbinaries in SQL Server.
[ "Have a look at this answer:\nStoring Images in DB - Yea or Nay?\nEssentially, the space and performance hit can be quite big, depending on the number of users. Also, keep in mind that Web servers are cheap and you can easily add more to balance the load, whereas the database is the most expensive and hardest to scale part of a web architecture usually.\nThere are some opposite examples (e.g., Microsoft Sharepoint), but usually, storing files in the database is not a good idea.\nUnless possibly you write desktop apps and/or know roughly how many users you will ever have, but on something as random and unexpectable like a public web site, you may pay a high price for storing files in the database.\n", "If you can move to SQL Server 2008, you can take advantage of the FILESTREAM support which gives you the best of both - the files are stored in the filesystem, but the database integration is much better than just storing a filepath in a varchar field. Your query can return a standard .NET file stream, which makes the integration a lot simpler.\nGetting Started with FILESTREAM Storage\n", "I'd say, it depends on your situation. For example, I work in local government, and we have lots of images like mugshots, etc. We don't have a high number of users, but we need to have good security and auditing around the data. The database is a better solution for us since it makes this easier and we aren't going to run into scaling problems.\n", "What's the question here?\nModern DBMS SQL2008 have a variety of ways of dealing with BLOBs which aren't just sticking in them in a table. There are pros and cons, of course, and you might need to think about it a little deeper.\nThis is an interesting paper, by the late (?) Jim Gray\nTo BLOB or Not To BLOB: Large Object Storage in a Database or a Filesystem\n", "In my own experience, it is always better to store files as files. The reason is that the filesystem is optimised for file storeage, whereas a database is not. Of course, there are some exceptions (e.g. the much heralded next-gen MS filesystem is supposed to be built on top of SQL server), but in general that's my rule. \n", "While performance is an issue, I think modern database designs have made it much less of an issue for small files.\nPerformance aside, it also depends on just how tightly-coupled the data is. If the file contains data that is closely related to the fields of the database, then it conceptually belongs close to it and may be stored in a blob. If it contains information which could potentially relate to multiple records or may have some use outside of the context of the database, then it belongs outside. For example, an image on a web page is fetched on a separate request from the page that links to it, so it may belong outside (depending on the specific design and security considerations).\nOur compromise, and I don't promise it's the best, has been to store smallish XML files in the database but images and other files outside it.\n", "We made the decision to store as varbinary for http://www.freshlogicstudios.com/Products/Folders/ halfway expecting performance issues. I can say that we've been pleasantly surprised at how well it's worked out.\n", "I agree with @ZombieSheep.\nJust one more thing - I generally don't think that databases actually need be portable because you miss all the features your DBMS vendor provides. I think that migrating to another database would be the last thing one would consider. 
Just my $.02\n", "The overhead of having to parse a blob (image) into a byte array and then write it to disk in the proper file name and then reading it is enough of an overhead hit to discourage you from doing this too often, especially if the files are rather large.\n", "Not to be vague or anything but I think the type of 'file' you will be storing is one of the biggest determining factors. If you essentially talking about a large text field which could be stored as file my preference would be for db storage. \n" ]
[ 79, 37, 22, 7, 5, 3, 2, 1, 1, 0 ]
[]
[]
[ "database", "filesystems", "sql_server", "storage" ]
stackoverflow_0000008952_database_filesystems_sql_server_storage.txt
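To make the FILESTREAM suggestion above concrete, this is roughly what reading a FILESTREAM column through a standard .NET stream looks like (SQL Server 2008 with .NET 3.5 SP1 or later; the table, column, and connection details are invented for the sketch):

using System.Data.SqlClient;
using System.Data.SqlTypes;
using System.IO;

// Sketch: read a FILESTREAM column as a stream. FILESTREAM access must happen
// inside a transaction, and the path/context come from the server.
class FilestreamExample
{
    static byte[] ReadDocument(string connectionString, int id)
    {
        using (var conn = new SqlConnection(connectionString))
        {
            conn.Open();
            using (var tx = conn.BeginTransaction())
            {
                var cmd = new SqlCommand(
                    "SELECT Content.PathName(), GET_FILESTREAM_TRANSACTION_CONTEXT() " +
                    "FROM dbo.Documents WHERE Id = @id",   // hypothetical table and column
                    conn, tx);
                cmd.Parameters.AddWithValue("@id", id);

                string path;
                byte[] txContext;
                using (var reader = cmd.ExecuteReader())
                {
                    reader.Read();
                    path = reader.GetString(0);
                    txContext = (byte[])reader[1];
                }

                using (var stream = new SqlFileStream(path, txContext, FileAccess.Read))
                using (var ms = new MemoryStream())
                {
                    var buffer = new byte[8192];
                    int read;
                    while ((read = stream.Read(buffer, 0, buffer.Length)) > 0)
                        ms.Write(buffer, 0, read);
                    tx.Commit();
                    return ms.ToArray();
                }
            }
        }
    }
}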
Q: Visual Studio 2008 debugging issue I'm working in VS 2008 and have three projects in one solution. I'm debugging by attaching to a .net process invoked by a third party app (SalesLogix, a CRM app). Once it has attached to the process and I attempt to set a breakpoint in one of the projects, it doesn't set a breakpoint in that file. It actually switches the current tab to another file in another project and sets a breakpoint in that document. If the file isn't open, it even goes so far as to open it for me. I can't explain this. I've got no clue. Anyone seen such odd behavior? I wouldn't believe it if I wasn't seeing it myself. A little more info: if I set a breakpoint before attaching, it shows the "red dot" and says no symbols loaded...no problem...I expect that. When I attach and invoke my .net code from SalesLogix and switch back to VS, my breakpoint is completely gone (not even a warning that the source doesn't match the debug file). When I attempt to manually load the debug file, then I get a message that the symbol file does not match the module. The .pdb and the .dll are timestamped the same, so I'm stumped. Anyone have any ideas? Thx, Jeff A: I saw this functionality in older versions of VS.Net (2003 I think). It may still exist in current versions, but I haven't encountered it. Seems that files with the same name, even in different directories confuse VS.Net, and it ends up setting a break point in a file with the same name. May only happen if the classes in the file both have the same name also. So much for namespaces I guess. You also may want to check your build configuration to make sure that all the projects are in fact building in debug mode. I know I've been caught a couple times when the configuration got changed somehow for the solution, and some projects weren't compiling in debug mode. A: Kibbee, you were right! It was two files with the same name in different folders. I was setting the breakpoint in the correct file on line 58 - it was putting the breakpoint on the other file at line 58. I was finally able to set a breakpoint by using the "Debug-->New Breakpoint-->Break at Function Name" menu option and entering my function name. It stopped exactly like it should have then. I agree - so much for namespaces, right? Damn thing cost me a couple of hours. Oh, well...at least it's solved and I know why. Thx for the answer and thx to Matt for his reply, too!
Visual Studio 2008 debugging issue
I'm working in VS 2008 and have three projects in one solution. I'm debugging by attaching to a .net process invoked by a third party app (SalesLogix, a CRM app). Once it has attached to the process and I attempt to set a breakpoint in one of the projects, it doesn't set a breakpoint in that file. It actually switches the current tab to another file in another project and sets a breakpoint in that document. If the file isn't open, it even goes so far as to open it for me. I can't explain this. I've got no clue. Anyone seen such odd behavior? I wouldn't believe it if I wasn't seeing it myself. A little more info: if I set a breakpoint before attaching, it shows the "red dot" and says no symbols loaded...no problem...I expect that. When I attach and invoke my .net code from SalesLogix and switch back to VS, my breakpoint is completely gone (not even a warning that the source doesn't match the debug file). When I attempt to manually load the debug file, then I get a message that the symbol file does not match the module. The .pdb and the .dll are timestamped the same, so I'm stumped. Anyone have any ideas? Thx, Jeff
[ "I saw this functionality in older versions of VS.Net (2003 I think). It may still exist in current versions, but I haven't encountered it. Seems that files with the same name, even in different directories confuse VS.Net, and it ends up setting a break point in a file with the same name. May only happen if the classes in the file both have the same name also. So much for namespaces I guess. \nYou also may want to check your build configuration to make sure that all the projects are in fact building in debug mode. I know I've been caught a couple times when the configuration got changed somehow for the solution, and some projects weren't compiling in debug mode.\n", "Kibbee, you were right! It was two files with the same name in different folders. I was setting the breakpoint in the correct file on line 58 - it was putting the breakpoint on the other file at line 58. I was finally able to set a breakpoint by using the \"Debug-->New Breakpoint-->Break at Function Name\" menu option and entering my function name. It stopped exactly like it should have then. \nI agree - so much for namespaces, right? Damn thing cost me a couple of hours. Oh, well...at least it's solved and I know why.\nThx for the answer and thx to Matt for his reply, too!\n" ]
[ 4, 0 ]
[]
[]
[ "c#", "debugging", "visual_studio_2008" ]
stackoverflow_0000031410_c#_debugging_visual_studio_2008.txt
Q: WCF Backward Compatibility Issue I have a WCF service that I have to reference from a .net 2.0 project. I have tried to reference it using the "add web reference" method but it messes up the params. For example, I have a method in the service that expects a char[] to be passed in, but when I add the web reference, the method expects an int[]. So then I tried to set up svcutil and it worked... kind of. I could only get the service class to compile by adding a bunch of .net 3.0 references to my .net 2.0 project. This didn't sit well with the architect so I've had to can it (and probably for the best too). So I was wondering if anyone has any pointers or resources on how I can set up a .net 2.0 project to reference a WCF service. A: This is one of those instances where you need to edit the WSDL. For a start, a useful tool: http://codeplex.com/storm A: What binding are you using? I think if you stick to the basicHttp binding you should be able to generate a proxy using the "add web reference" approach from a .net 2 project. Perhaps if you post the contract/interface definition it might help? Cheers Richard A: Thanks for the resource. It certainly helped me test out the webservice, but it didn't much help with using the WCF service in my .net 2.0 application. What I eventually ended up doing was going back to the architects and explaining that the 3.0 dll's that I needed to reference got compiled back to run on the 2.0 CLR. We don't necessarily like the solution, but we're going to go with it for now as there don't seem to be many viable alternatives. A: I was using the basicHttp binding but the problem was actually with the XmlSerializer. It doesn't properly recognize the WSDL generated by WCF (even with basicHttp bindings) for anything other than basic value types. We got around this by adding the reference to the 3.0 dll's and using the DataContractSerializer.
WCF Backward Compatibility Issue
I have a WCF service that I have to reference from a .net 2.0 project. I have tried to reference it using the "add web reference" method but it messes up the params. For example, I have a method in the service that expects a char[] to be passed in, but when I add the web reference, the method expects an int[]. So then I tried to set up svcutil and it worked... kind of. I could only get the service class to compile by adding a bunch of .net 3.0 references to my .net 2.0 project. This didn't sit well with the architect so I've had to can it (and probably for the best too). So I was wondering if anyone has any pointers or resources on how I can set up a .net 2.0 project to reference a WCF service.
[ "One of those instances that you need to edit the WSDL. For a start a useful tool\nhttp://codeplex.com/storm\n", "What binding are you using - I think if you stick to the basicHttp binding you should be able to generate a proxy using the \"add web reference\" approach from a .net 2 project?\nPerhaps if you post the contract/interface definition it might help?\nCheers\nRichard\n", "Thanks for the resource. It certainly helped me test out the webservice, but it didn't much help with using the WCF service in my .net 2.0 application.\nWhat I eventually ended up doing was going back to the architects and explaining that the 3.0 dll's that I needed to reference got compiled back to run on the 2.0 CLR. We don't necessarily like the solution, but we're going to go with it for now as there doesn't seem to be too many viable alternatives\n", "I was using the basicHttp binding but the problem was actually with the XMLSerializer. It doesn't properly recognize the wsdl generated by WCF (even with basicHttp bindings) for anything other than basic value types.\nWe got around this by added the reference to the 3.0 dll's and using the datacontract serializer.\n" ]
[ 2, 1, 0, 0 ]
[]
[]
[ ".net", "c#", "wcf" ]
stackoverflow_0000009472_.net_c#_wcf.txt
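One approach that is often suggested for this kind of mismatch (not confirmed by the answers above, so treat it as an assumption) is to keep the endpoint on basicHttpBinding and mark the contract with [XmlSerializerFormat], so the WSDL WCF publishes is shaped for the XmlSerializer that "Add Web Reference" and wsdl.exe rely on. A sketch with invented contract names:

using System.ServiceModel;

// Sketch: a service contract aimed at .NET 2.0 "Add Web Reference" clients.
// [XmlSerializerFormat] tells WCF to describe and serialize this contract with the
// XmlSerializer rather than the DataContractSerializer, which legacy proxy
// generators tend to handle better. Names and members are illustrative only.
[ServiceContract(Namespace = "http://example.com/legacy")]
[XmlSerializerFormat]
public interface IOfferService
{
    [OperationContract]
    string GetDealText(string offerId, decimal offerAmount);
}

Whether this resolves the original char[]-to-int[] mapping would need testing, since char arrays are awkward for the XmlSerializer as well.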
Q: How do I configure an ASP.NET MVC project to work with Boo I want to build an ASP.NET MVC application with Boo instead of C#. If you know the steps to configure this type of project setup, I'd be interested to know what I need to do. The MVC project setup is no problem. What I'm trying to figure out how to configure the pages and project to switch to use the Boo language and compiler. A: So there are two levels of "work with Boo". One would be all the code (namely, the Controllers), and the other would be the views. For the code, I assume Boo compiles to standard .NET assemblies, so simply properly following the naming conventions using by ASP.NET MVC should allow you to write Controllers. You will probably need to start with a C# or VB version of the MVC web application project template and port some of the boilerplate code over into Boo to get the solution entirely in Boo (I presume Boo supports Web Application projects?). The other half is views. Someone will need to port the Brail view engine over to the ASP.NET MVC view engine system. This may already be done, but I don't know for sure. If it's not, then this is probably a significant amount of work to be done. Probably the best place to get answers to these kinds of questions is the MvcContrib community on CodePlex. A: The Brail view engine has been implemented to be used in ASP.NET MVC. The MvcContrib project implemented the code. The source code is located on Google Code. As far as the controllers, I really am not sure. I am not that familiar with Boo. I know a lot of developers use it for configuration instead of using xml for instance. My tips would be, if Boo can inherit off the Controller base class and you stick to the naming conventions, you should be alright. If you vary off the naming conventions, well you would need to implement your own IControllerFactory to instantiate the boo controllers as the requests come in. I have been following the ASP.NET MVC bits since the first CTP and through that whole time, I have not seen somebody use Boo to code with. I think you will be the first to try to accomplish this.
How do I configure an ASP.NET MVC project to work with Boo
I want to build an ASP.NET MVC application with Boo instead of C#. If you know the steps to configure this type of project setup, I'd be interested to know what I need to do. The MVC project setup is no problem. What I'm trying to figure out how to configure the pages and project to switch to use the Boo language and compiler.
[ "So there are two levels of \"work with Boo\". One would be all the code (namely, the Controllers), and the other would be the views.\nFor the code, I assume Boo compiles to standard .NET assemblies, so simply properly following the naming conventions using by ASP.NET MVC should allow you to write Controllers. You will probably need to start with a C# or VB version of the MVC web application project template and port some of the boilerplate code over into Boo to get the solution entirely in Boo (I presume Boo supports Web Application projects?).\nThe other half is views. Someone will need to port the Brail view engine over to the ASP.NET MVC view engine system. This may already be done, but I don't know for sure. If it's not, then this is probably a significant amount of work to be done.\nProbably the best place to get answers to these kinds of questions is the MvcContrib community on CodePlex.\n", "The Brail view engine has been implemented to be used in ASP.NET MVC. The MvcContrib project implemented the code. The source code is located on Google Code.\nAs far as the controllers, I really am not sure. I am not that familiar with Boo. I know a lot of developers use it for configuration instead of using xml for instance. My tips would be, if Boo can inherit off the Controller base class and you stick to the naming conventions, you should be alright. If you vary off the naming conventions, well you would need to implement your own IControllerFactory to instantiate the boo controllers as the requests come in.\nI have been following the ASP.NET MVC bits since the first CTP and through that whole time, I have not seen somebody use Boo to code with. I think you will be the first to try to accomplish this.\n" ]
[ 2, 1 ]
[]
[]
[ "asp.net", "asp.net_mvc", "boo" ]
stackoverflow_0000030752_asp.net_asp.net_mvc_boo.txt
Q: How stable is WPF? How stable is WPF not in terms of stability of a WPF program, but in terms of the 'stability' of the API itself. Let me explain: Microsoft is notorious for changing its whole methodology around with new technology. Like with the move from silverlight 1 to silverlight 2. With WPF, I know that MS changed a bunch of stuff with the release of the .NET service pack. I don't know how much they changed things around. So the bottom line is, in your opinion are they going to revamp the system again with the next release or do you think that it is stable enough now that they won't change the bulk of the system. I hate to have to unlearn stuff with every release. I hope that the question wasn't too long winded. A: MS do have a history of "fire and movement" with regards to introducing new technology into their development stack, but they also have a strong history of maintaining support for the older stuff, and backwards-compatibility. WPF seems to be getting stuff added to it with each new release of the framework but the things you learn aren't being superceded or invalidated. The only breaking change I've seen in my own WPF applications with a new release of the framework was one recently in 3.5 SP1, and that was because we were unknowingly relying on a bug to get a certain behaviour from our code. We adjusted the XAML to be more correct and it started working fine. So yeah, I think WPF is pretty "stable" as a client-side development technology. A: We've been using WPF since it was first released and yes it had it's problems at the beginning that caused us headaches and had us scratching our heads to find a work around, but each new update the stack has actually become pretty stable. It definitely became easier and easier to develop with it with the addition of Expression Blend. Creating the XAML in VS 2005 was not fun. The templating engine alone is enough to switch from WinForms, let alone the animation support. Either way, I agree with Matt that it is pretty stable as a framework for developing client applications. A: WPF is pretty stable as far as changes go. Silverlight is still in flux. Though you may watch out since silverlight brought the concept of the state manager(instead of implementing triggers) which may get adopted in wpf... If that happens there will be multiple ways to defining control templates and behavior... and that will be a headache.
How stable is WPF?
How stable is WPF not in terms of stability of a WPF program, but in terms of the 'stability' of the API itself. Let me explain: Microsoft is notorious for changing its whole methodology around with new technology. Like with the move from silverlight 1 to silverlight 2. With WPF, I know that MS changed a bunch of stuff with the release of the .NET service pack. I don't know how much they changed things around. So the bottom line is, in your opinion are they going to revamp the system again with the next release or do you think that it is stable enough now that they won't change the bulk of the system. I hate to have to unlearn stuff with every release. I hope that the question wasn't too long winded.
[ "MS do have a history of \"fire and movement\" with regards to introducing new technology into their development stack, but they also have a strong history of maintaining support for the older stuff, and backwards-compatibility. WPF seems to be getting stuff added to it with each new release of the framework but the things you learn aren't being superceded or invalidated.\nThe only breaking change I've seen in my own WPF applications with a new release of the framework was one recently in 3.5 SP1, and that was because we were unknowingly relying on a bug to get a certain behaviour from our code. We adjusted the XAML to be more correct and it started working fine.\nSo yeah, I think WPF is pretty \"stable\" as a client-side development technology.\n", "We've been using WPF since it was first released and yes it had it's problems at the beginning that caused us headaches and had us scratching our heads to find a work around, but each new update the stack has actually become pretty stable.\nIt definitely became easier and easier to develop with it with the addition of Expression Blend. Creating the XAML in VS 2005 was not fun. The templating engine alone is enough to switch from WinForms, let alone the animation support.\nEither way, I agree with Matt that it is pretty stable as a framework for developing client applications.\n", "WPF is pretty stable as far as changes go. Silverlight is still in flux. Though you may watch out since silverlight brought the concept of the state manager(instead of implementing triggers) which may get adopted in wpf... \nIf that happens there will be multiple ways to defining control templates and behavior...\nand that will be a headache.\n" ]
[ 12, 3, 0 ]
[]
[]
[ ".net", "wpf" ]
stackoverflow_0000031480_.net_wpf.txt
Q: How do I check the active solution configuration Visual Studio built with at runtime? I would like to enable/disable some code based on a custom solution configuration I added in Visual Studio. How do I check this value at runtime? A: You can use preprocessor directives within Visual Studio. The #if directive will allow you to determine whether to include code based on your custom solution configuration. A: Add a const assigned to a value that designates the configuration you are in, like #if _ENABLE_CODE1_ const int codeconfig = 1; #else const int codeconfig = 2; #endif and define _ENABLE_CODE1_ in that configuration's conditional compilation symbols. A: In each project's properties, under the Build section, you can set different custom constants for each solution configuration. This is where you define custom preprocessor directives. A: I'm not sure if you can figure out the exact name of the build configuration. However, if you use Debug.Assert(...), that code will only be run when you compile in debug mode. Not sure if that helps you at all.
How do I check the active solution configuration Visual Studio built with at runtime?
I would like to enable/disable some code based on a custom solution configuration I added in Visual Studio. How do I check this value at runtime?
[ "You can use precompiler directives within Visual Studio. The #if directive will allow you to determine if you are going to include code or not based on your custom solution configuration.\n", "add a const value assign to a value that designate the configuration you are in.\nlike\n#ifdef _ENABLE_CODE1_\nconst codeconfig = 1;\n#else\nconst codeconfig = 2;\n#endif\n\nand add _ENABLE_CODE1_ in your configuration preprocessor.\n", "In each project's properties under the build section you can set different custom constants for each solution configuration. This is where you define custom pre-compiler directives.\n", "I'm not sure if you can figure out the exact name of the build configuration. Howerver, if you use Debug.Assert(...), that code will only be run when you compile in debug mode. Not sure it that helps you at all.\n" ]
[ 9, 7, 4, 0 ]
[]
[]
[ "visual_studio" ]
stackoverflow_0000031496_visual_studio.txt
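A small sketch tying the answers above together. DEBUG is defined automatically for Debug builds; CUSTOMER_DEMO is a hypothetical constant you would add under Project Properties > Build > "Conditional compilation symbols" for the custom solution configuration:

// Sketch: code included or excluded depending on the active build configuration.
public static class FeatureFlags
{
#if CUSTOMER_DEMO
    public const bool DemoMode = true;     // compiled in only for the custom configuration
#else
    public const bool DemoMode = false;
#endif

    public static void Describe()
    {
#if DEBUG
        System.Console.WriteLine("Debug build, DemoMode=" + DemoMode);
#else
        System.Console.WriteLine("Release build, DemoMode=" + DemoMode);
#endif
    }
}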
Q: Query to list all tables that contain a specific column with SQL Server 2005 Question as stated in the title. A: http://blog.sqlauthority.com/2008/08/06/sql-server-query-to-find-column-from-all-tables-of-database/ USE AdventureWorks GO SELECT t.name AS table_name ,SCHEMA_NAME(schema_id) AS schema_name ,c.name AS column_name FROM sys.tables AS t INNER JOIN sys.columns c ON t.OBJECT_ID = c.OBJECT_ID WHERE c.name LIKE '%EmployeeID%' ORDER BY schema_name ,table_name;
Query to list all tables that contain a specific column with SQL Server 2005
Question as stated in the title.
[ "http://blog.sqlauthority.com/2008/08/06/sql-server-query-to-find-column-from-all-tables-of-database/\nUSE AdventureWorks\nGO\nSELECT \n t.name AS table_name\n ,SCHEMA_NAME(schema_id) AS schema_name\n ,c.name AS column_name\nFROM \n sys.tables AS t\n INNER JOIN sys.columns c \n ON t.OBJECT_ID = c.OBJECT_ID\nWHERE \n c.name LIKE '%EmployeeID%'\nORDER BY \n schema_name\n ,table_name;\n\n" ]
[ 2 ]
[]
[]
[ "sql_server_2005" ]
stackoverflow_0000031566_sql_server_2005.txt
Q: Suggestions for Migrating from ASP.NET WebForms to ASP.NET MVC? ASP.NET MVC has been discussed on this forum a few times. I'm about to do a large migration of several websites from classic ASP/ASP.NET WebForms to ASP.NET MVC and was wondering what kind of advice those of you with experience in both technologies have. What I have: a typical ASP.NET app with heavily coupled presentation/business logic, all sorts of messy ASP.NET-generated Javascript cruft, and so forth. What I want: clean ASP.NET MVC-generated agnostic markup. 'Nuff said. Any pointers, tips, tricks, or gotchas to be aware of? Thanks! A: Wow, I'm not sure we're talking migration here anymore - the difference is more like re-writing! As others have also said, MVC is a whole new way to build web apps - most of your presentation code won't carry across. However, if you are re-writing in MVC what you already have is a good prototype. Your problem is likely to be that it would be hard to do bit by bit - for instance MVC uses URL renaming out-of-the-box, making linking back and forth rather messy. Another question would be why? Many of us have big sprawling legacy applications that we'd like to be in the latest technologies, but if your application is already working why switch? If I was looking at a new application right now MVC would be a very strong candidate, but there's no gain large enough to switching to it late in a project. A: Any pointers, tips, tricks, or gotchas to be aware of? Well, I think you're probably a little ways away from thinking about tricks & gotchas :) As I'm sure you're aware, ASP.NET MVC is not some new version of ASP.NET, but a totally different paradigm from ASP.NET, you won't be migrating, you'll be initiating a brand new development effort to replace an existing system. So maybe you can get a leg up on determining requirements for the app, but the rest will probably re-built from scratch. Based on the (very common) problems you described in your existing code base you should consider taking this opportunity to learn some of the current best practices in designing loosely coupled systems. This is easy to do because modern "best practices" are easy to understand and easy to practice, and there is enormous community support, and high quality, open source tooling to help in the process. We are moving an ASP/ASP.NET application to ASP.NET MVC at this time as well, and this is the conclusion my preparatory research has led me to, anyway. Here is a post to links on using ASP.NET MVC, but I would start by reading this post. The post is about NHibernate (an ORM tool) on its surface but the discussion and the links are about getting the foundations right and is the result of preparing to port an ASP.NET site to MVC. Some of the reference architectures linked to in that post are based on ASP.NET MVC. Here is another post about NHibernate, but in the "Best Practices & Reference Applications" section most if not all of the reference applications listed are ASP.NET MVC applications also. Reference architectures can be extremely useful for quickly getting a feeling for how an optimal, maintainable ASP.NET MVC site might be designed. A: WebForms can live with MVC controllers in the same app. By default, routing does not route requests for files that exist on disk. So you could start rewriting small parts of your site at a time to use the MVC pattern, and leave the rest of it using WebForms. 
A: My opinion is that the two technologies are so different that if you have tightly coupled code in the original Web Form applications that the best approach is to start by picking one of them and converting it by creating a new ASP.NET MVC application and ripping out code into their respective layers. Which will put you on the trail of reuse for porting the other applications.
Suggestions for Migrating from ASP.NET WebForms to ASP.NET MVC?
ASP.NET MVC has been discussed on this forum a few times. I'm about to do a large migration of several websites from classic ASP/ASP.NET WebForms to ASP.NET MVC and was wondering what kind of advice those of you with experience in both technologies have. What I have: a typical ASP.NET app with heavily coupled presentation/business logic, all sorts of messy ASP.NET-generated Javascript cruft, and so forth. What I want: clean ASP.NET MVC-generated agnostic markup. 'Nuff said. Any pointers, tips, tricks, or gotchas to be aware of? Thanks!
[ "Wow, I'm not sure we're talking migration here anymore - the difference is more like re-writing!\n\nAs others have also said, MVC is a whole new way to build web apps - most of your presentation code won't carry across.\nHowever, if you are re-writing in MVC what you already have is a good prototype. Your problem is likely to be that it would be hard to do bit by bit - for instance MVC uses URL renaming out-of-the-box, making linking back and forth rather messy.\nAnother question would be why? Many of us have big sprawling legacy applications that we'd like to be in the latest technologies, but if your application is already working why switch? \nIf I was looking at a new application right now MVC would be a very strong candidate, but there's no gain large enough to switching to it late in a project.\n", "\nAny pointers, tips, tricks, or\n gotchas to be aware of?\n\nWell, I think you're probably a little ways away from thinking about tricks & gotchas :) As I'm sure you're aware, ASP.NET MVC is not some new version of ASP.NET, but a totally different paradigm from ASP.NET, you won't be migrating, you'll be initiating a brand new development effort to replace an existing system. So maybe you can get a leg up on determining requirements for the app, but the rest will probably re-built from scratch.\nBased on the (very common) problems you described in your existing code base you should consider taking this opportunity to learn some of the current best practices in designing loosely coupled systems. This is easy to do because modern \"best practices\" are easy to understand and easy to practice, and there is enormous community support, and high quality, open source tooling to help in the process.\nWe are moving an ASP/ASP.NET application to ASP.NET MVC at this time as well, and this is the conclusion my preparatory research has led me to, anyway.\nHere is a post to links on using ASP.NET MVC, but I would start by reading this post. The post is about NHibernate (an ORM tool) on its surface but the discussion and the links are about getting the foundations right and is the result of preparing to port an ASP.NET site to MVC. Some of the reference architectures linked to in that post are based on ASP.NET MVC. Here is another post about NHibernate, but in the \"Best Practices & Reference Applications\" section most if not all of the reference applications listed are ASP.NET MVC applications also. Reference architectures can be extremely useful for quickly getting a feeling for how an optimal, maintainable ASP.NET MVC site might be designed.\n", "WebForms can live with MVC controllers in the same app. By default, routing does not route requests for files that exist on disk. So you could start rewriting small parts of your site at a time to use the MVC pattern, and leave the rest of it using WebForms.\n", "My opinion is that the two technologies are so different that if you have tightly coupled code in the original Web Form applications that the best approach is to start by picking one of them and converting it by creating a new ASP.NET MVC application and ripping out code into their respective layers. Which will put you on the trail of reuse for porting the other applications.\n" ]
[ 3, 2, 1, 0 ]
[]
[]
[ ".net", "asp.net", "asp.net_mvc", "webforms" ]
stackoverflow_0000013704_.net_asp.net_asp.net_mvc_webforms.txt
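To illustrate the coexistence point made in the third answer: because routing does not route requests for files that exist on disk by default, a hybrid Global.asax can register MVC routes while the remaining WebForms pages keep being served. The route values below are the stock ASP.NET MVC defaults, not anything taken from the question:

using System.Web.Mvc;
using System.Web.Routing;

// Sketch: MVC routing added to an existing WebForms application.
// Physical .aspx files keep working because RouteCollection.RouteExistingFiles
// defaults to false, so only URLs with no matching file reach the MVC routes.
public class Global : System.Web.HttpApplication
{
    public static void RegisterRoutes(RouteCollection routes)
    {
        routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

        routes.MapRoute(
            "Default",                                      // route name
            "{controller}/{action}/{id}",                   // URL pattern
            new { controller = "Home", action = "Index", id = "" });
    }

    protected void Application_Start()
    {
        RegisterRoutes(RouteTable.Routes);
    }
}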
Q: How to get the libraries you need into the bin folder when using IoC/DI I'm using Castle Windsor to do some dependency injection, specifically I've abstracted the DAL layer to interfaces that are now being loaded by DI. Once the project is developed & deployed all the .bin files will be in the same location, but for while I'm developing in Visual Studio, the only ways I can see of getting the dependency injected project's .bin file into the startup project's bin folder is to either have a post-build event that copies it in, or to put in a manual reference to the DAL project to pull the file in. I'm not totally thrilled with either solution, so I was wondering if there was a 'standard' way of solving this problem? A: Could you set the build output path of the concrete DAL project to be the bin folder of the dependent project? A: Mike: Didn't think of that, that could work, have to remember to turn off copy-local for any libraries / projects that are common between them
How to get the libraries you need into the bin folder when using IoC/DI
I'm using Castle Windsor to do some dependency injection, specifically I've abstracted the DAL layer to interfaces that are now being loaded by DI. Once the project is developed & deployed all the .bin files will be in the same location, but for while I'm developing in Visual Studio, the only ways I can see of getting the dependency injected project's .bin file into the startup project's bin folder is to either have a post-build event that copies it in, or to put in a manual reference to the DAL project to pull the file in. I'm not totally thrilled with either solution, so I was wondering if there was a 'standard' way of solving this problem?
[ "Could you set the build output path of the concrete DAL project to be the bin folder of the dependent project? \n", "Mike: Didn't think of that, that could work, have to remember to turn off copy-local for any libraries / projects that are common between them\n" ]
[ 1, 0 ]
[]
[]
[ "castle_windsor", "dependency_injection", "inversion_of_control", "visual_studio" ]
stackoverflow_0000031592_castle_windsor_dependency_injection_inversion_of_control_visual_studio.txt
Q: Migrating from ASP Classic to .NET and pain mitigation We're in the process of redesigning the customer-facing section of our site in .NET 3.5. It's been going well so far, we're using the same workflow and stored procedures, for the most part, the biggest changes are the UI, the ORM (from dictionaries to LINQ), and obviously the language. Most of the pages to this point have been trivial, but now we're working on the heaviest workflow pages. The main page of our offer acceptance section is 1500 lines, about 90% of that is ASP, with probably another 1000 lines in function calls to includes. I think the 1500 lines is a bit deceiving too since we're working with gems like this function GetDealText(sUSCurASCII, sUSCurName, sTemplateOptionID, sSellerCompany, sOfferAmount, sSellerPremPercent, sTotalOfferToSeller, sSellerPremium, sMode, sSellerCurASCII, sSellerCurName, sTotalOfferToSeller_SellerCurr, sOfferAmount_SellerCurr, sSellerPremium_SellerCurr, sConditions, sListID, sDescription, sSKU, sInv_tag, sFasc_loc, sSerialNoandModel, sQTY, iLoopCount, iBidCount, sHTMLConditions, sBidStatus, sBidID, byRef bAlreadyAccepted, sFasc_Address1, sFasc_City, sFasc_State_id, sFasc_Country_id, sFasc_Company_name, sListingCustID, sAskPrice_SellerCurr, sMinPrice_SellerCurr, sListingCur, sOrigLocation) The standard practice I've been using so far is to spend maybe an hour or so reading over the app both to familiarize myself with it, but also to strip out commented-out/deprecated code. Then to work in a depth-first fashion. I'll start at the top and copy a segment of code in the aspx.cs file and start rewriting, making obvious refactorings as I go especially to take advantage of our ORM. If I get a function call that we don't have, I'll write out the definition. After I have everything coded I'll do a few passes at refactoring/testing. I'm just wondering if anyone has any tips on how to make this process a little easier/more efficient. A: Believe me, I know exactly where you are coming from.. I am currently migrating a large app from ASP classic to .NET.. And I am still learning ASP.NET! :S (yes, I am terrified!). The main things I have kept in my mind is this: I dont stray too far from the current design (i.e. no massive "lets rip ALL of this out and make it ASP.NET magical!) due to the incredibly high amount of coupling that ASP classic tends to have, this would be very dangerous. Of course, if you are confident, fill your boots :) This can always be refactored later. Back everything up with tests, tests and more tests! I am really trying hard to get into TDD, but its very difficult to test existing apps, so every time I remove a chunk of classic and replace with .NET, I ensure I have as much green-light tests backing me as possible. Research a lot, there are some MAJOR changes between classic and .NET and sometimes what can be many lines of code and includes in classic can be achieved in a few lines of code, think before coding.. I've learnt this the hard way, several times :D Its very much like playing Jenga with your code :) Best of luck with the project, any more questions, then please ask :) A: After I have everything coded I'll do a few passes at refactoring/testing. I'm just wondering if anyone has any tips on how to make this process a little easier/more efficient. Normally I'm not a fan of TDD, but in the case of refactoring it really is the way to go. Write some tests first which verify what the bit you're looking at is actually doing. Then refactor. 
This is a LOT more reliable than just 'it looks like it still works.' The other huge benefit to this is that when you're refactoring something which is further down the page, or in a shared library or something, you can just re-run the tests, as opposed to finding out the hard way that a seemingly unrelated change was actually related A: You're going from classic ASP to ASP with 3.5 without just re-writing? Skillz. I've had to deal with some legacy ASP @work and I think it's just easier to parse it and re-write it. A: A 1500-line ASP page? With lots of calls out to include files? Don't tell me -- the functions don't have any naming convention that tells you which include file has their implementation... That brings back memories (shudder)... It sounds to me like you have a pretty solid approach -- I'm not sure if there is any magical way to mitigate your pain. After your conversion effort, the architecture of your app will still be messy and UI-heavy (i.e. code-behind running workflows), and it will probably still be fairly painful to maintain, but the refactoring you are doing should definitely help. I hope you have weighed the upgrade you are doing against just rewriting from scratch -- as long as you are not intending to extend the app too much and you are not primarily responsible for maintaining the app, upgrading a complex workflow-based app like you are doing may be cheaper and a better choice than rewriting it from scratch. ASP.NET should give you better opportunities to improve performance and scalability, at least, than Classic ASP. From your question I imagine that it is too late in the process for that discussion anyway. Good luck! A: Sounds like you have a pretty good handle on things. I've seen a lot of people try to do a straight-line transliteration, includes and all, and it just doesn't work. You need to have a good understanding of how ASP.Net wants to work, because it's much different from Classic ASP, and it sounds like maybe you have that. For larger files, I'd try to get a higher level view first. For example, one thing I've noticed is that Classic ASP was horrible about function calls. You'd be reading through some code and find a call to a function with no clue as to where it might be implemented. As a result, Classic ASP code tended to have long functions and scripts to avoid those nasty jumps. I remember seeing a function that printed out to 40 pages! Parsing straight through that much code is no fun. ASP.Net makes it easier to follow function calls around, so you might start by breaking out your larger code blocks into several smaller functions. A: Don't tell me -- the functions don't have any naming convention that tells you which include file has their implementation... That brings back memories (shudder)... How did you guess? ;) I hope you have weighed the upgrade you are doing against just rewriting from scratch -- as long as you are not intending to extend the app too much and you are not primarily responsible for maintaining the app, upgrading a complex workflow-based app like you are doing may be cheaper and a better choice than rewriting it from scratch. ASP.NET should give you better opportunities to improve performance and scalability, at least, than Classic ASP. From your question I imagine that it is too late in the process for that discussion anyway. This was something we talked about. Based on timing (trying to beat a competitor's site to launch) and resources (basically two developers) it made sense to not nuke the site from orbit. 
Things have actually gone much better than I expected. We were aware even from the planning stages that this code was going to give us the most problems. You should see the revision history of the classic ASP pages involved, it's a bloodbath. For larger files, I'd try to get a higher level view first. For example, one thing I've noticed is that Classic ASP was horrible about function calls. You'd be reading through some code and find a call to a function with no clue as to where it might be implemented. As a result, Classic ASP code tended to have long functions and scripts to avoid those nasty jumps. I remember seeing a function that printed out to 40 pages! Parsing straight through that much code is no fun. I've actually had this displeasure of working with the legacy code quite a bit so I have a decent high level understanding of the system. You're right about the function length, there are some routines (most I've refactored down into much smaller ones) that are 3-4x as long as any of the aspx pages/helper classes/ORMs on the new site. A: I once came across a .Net app that was ported from ASP. The .aspx pages were totally blank. To render the UI, the developers used StringBuilders in the code behind and then did a response.write. This would be the wrong way to do it! A: I once came across a .Net app that was ported from ASP. The .aspx pages were totally blank. To render the UI, the developers used StringBuilders in the code behind and then did a response.write. This would be the wrong way to do it! I've seen it done the other way, the code behind page was blank, except for declaration of globals, then the VBScript was left in the ASPX.
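To make the 'write tests first, then refactor' advice above concrete: a characterization test simply pins down whatever output the legacy routine produces today, so the ported version can be checked against it. This is only a sketch. NUnit is assumed, and DealTextBuilder.GetDealText, its parameters, and the expected string are invented stand-ins for whatever the ported GetDealText helper ends up looking like.

using NUnit.Framework;

[TestFixture]
public class DealTextCharacterizationTests
{
    // DealTextBuilder is a hypothetical home for the ported GetDealText logic;
    // point this at your real helper and capture real inputs/outputs.
    [Test]
    public void GetDealText_MatchesOutputCapturedFromClassicAspPage()
    {
        string result = DealTextBuilder.GetDealText("Example Seller Ltd", 1500.00m, 0.10m);

        // Expected text pasted verbatim from the legacy page's rendered output.
        Assert.AreEqual("Example Seller Ltd will receive a total offer of $1,650.00.", result);
    }
}

Capturing a handful of real input/output pairs from the live classic ASP page and replaying them against the new code is what lets you refactor piece by piece without relying on 'it looks like it still works'.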
Migrating from ASP Classic to .NET and pain mitigation
We're in the process of redesigning the customer-facing section of our site in .NET 3.5. It's been going well so far, we're using the same workflow and stored procedures, for the most part, the biggest changes are the UI, the ORM (from dictionaries to LINQ), and obviously the language. Most of the pages to this point have been trivial, but now we're working on the heaviest workflow pages. The main page of our offer acceptance section is 1500 lines, about 90% of that is ASP, with probably another 1000 lines in function calls to includes. I think the 1500 lines is a bit deceiving too since we're working with gems like this function GetDealText(sUSCurASCII, sUSCurName, sTemplateOptionID, sSellerCompany, sOfferAmount, sSellerPremPercent, sTotalOfferToSeller, sSellerPremium, sMode, sSellerCurASCII, sSellerCurName, sTotalOfferToSeller_SellerCurr, sOfferAmount_SellerCurr, sSellerPremium_SellerCurr, sConditions, sListID, sDescription, sSKU, sInv_tag, sFasc_loc, sSerialNoandModel, sQTY, iLoopCount, iBidCount, sHTMLConditions, sBidStatus, sBidID, byRef bAlreadyAccepted, sFasc_Address1, sFasc_City, sFasc_State_id, sFasc_Country_id, sFasc_Company_name, sListingCustID, sAskPrice_SellerCurr, sMinPrice_SellerCurr, sListingCur, sOrigLocation) The standard practice I've been using so far is to spend maybe an hour or so reading over the app both to familiarize myself with it, but also to strip out commented-out/deprecated code. Then to work in a depth-first fashion. I'll start at the top and copy a segment of code in the aspx.cs file and start rewriting, making obvious refactorings as I go especially to take advantage of our ORM. If I get a function call that we don't have, I'll write out the definition. After I have everything coded I'll do a few passes at refactoring/testing. I'm just wondering if anyone has any tips on how to make this process a little easier/more efficient.
[ "Believe me, I know exactly where you are coming from.. I am currently migrating a large app from ASP classic to .NET.. And I am still learning ASP.NET! :S (yes, I am terrified!).\nThe main things I have kept in my mind is this:\n\nI dont stray too far from the current design (i.e. no massive \"lets rip ALL of this out and make it ASP.NET magical!) due to the incredibly high amount of coupling that ASP classic tends to have, this would be very dangerous. Of course, if you are confident, fill your boots :) This can always be refactored later.\nBack everything up with tests, tests and more tests! I am really trying hard to get into TDD, but its very difficult to test existing apps, so every time I remove a chunk of classic and replace with .NET, I ensure I have as much green-light tests backing me as possible.\nResearch a lot, there are some MAJOR changes between classic and .NET and sometimes what can be many lines of code and includes in classic can be achieved in a few lines of code, think before coding.. I've learnt this the hard way, several times :D\n\nIts very much like playing Jenga with your code :)\nBest of luck with the project, any more questions, then please ask :)\n", "\nAfter I have everything coded I'll do a few passes at refactoring/testing. I'm just wondering if anyone has any tips on how to make this process a little easier/more efficient.\n\n\nNormally I'm not a fan of TDD, but in the case of refactoring it really is the way to go.\nWrite some tests first which verify what the bit you're looking at is actually doing. Then refactor. This is a LOT more reliable than just 'it looks like it still works.'\nThe other huge benefit to this is that when you're refactoring something which is further down the page, or in a shared library or something, you can just re-run the tests, as opposed to finding out the hard way that a seemingly unrelated change was actually related\n", "You're going from classic ASP to ASP with 3.5 without just re-writing? Skillz. I've had to deal with some legacy ASP @work and I think it's just easier to parse it and re-write it.\n", "A 1500-line ASP page? With lots of calls out to include files? Don't tell me -- the functions don't have any naming convention that tells you which include file has their implementation... That brings back memories (shudder)...\nIt sounds to me like you have a pretty solid approach -- I'm not sure if there is any magical way to mitigate your pain. After your conversion effort, the architecture of your app will still be messy and UI-heavy (i.e. code-behind running workflows), and it will probably still be fairly painful to maintain, but the refactoring you are doing should definitely help.\nI hope you have weighed the upgrade you are doing against just rewriting from scratch -- as long as you are not intending to extend the app too much and you are not primarily responsible for maintaining the app, upgrading a complex workflow-based app like you are doing may be cheaper and a better choice than rewriting it from scratch. ASP.NET should give you better opportunities to improve performance and scalability, at least, than Classic ASP. From your question I imagine that it is too late in the process for that discussion anyway.\nGood luck!\n", "Sounds like you have a pretty good handle on things. I've seen a lot of people try to do a straight-line transliteration, includes and all, and it just doesn't work. 
You need to have a good understanding of how ASP.Net wants to work, because it's much different from Classic ASP, and it sounds like maybe you have that. \nFor larger files, I'd try to get a higher level view first. For example, one thing I've noticed is that Classic ASP was horrible about function calls. You'd be reading through some code and find a call to a function with no clue as to where it might be implemented. As a result, Classic ASP code tended to have long functions and scripts to avoid those nasty jumps. I remember seeing a function that printed out to 40 pages! Parsing straight through that much code is no fun. \nASP.Net makes it easier to follow function calls around, so you might start by breaking out your larger code blocks into several smaller functions.\n", "\nDon't tell me -- the functions don't\n have any naming convention that tells\n you which include file has their\n implementation... That brings back\n memories (shudder)...\n\nHow did you guess? ;)\n\nI hope you have weighed the upgrade\n you are doing against just rewriting\n from scratch -- as long as you are not\n intending to extend the app too much\n and you are not primarily responsible\n for maintaining the app, upgrading a\n complex workflow-based app like you\n are doing may be cheaper and a better\n choice than rewriting it from scratch.\n ASP.NET should give you better\n opportunities to improve performance\n and scalability, at least, than\n Classic ASP. From your question I\n imagine that it is too late in the\n process for that discussion anyway.\n\nThis was something we talked about. Based on timing (trying to beat a competitor's site to launch) and resources (basically two developers) it made sense to not nuke the site from orbit. Things have actually gone much better than I expected. We were aware even from the planning stages that this code was going to give us the most problems. You should see the revision history of the classic ASP pages involved, it's a bloodbath.\n\nFor larger files, I'd try to get a\n higher level view first. For example,\n one thing I've noticed is that Classic\n ASP was horrible about function calls.\n You'd be reading through some code and\n find a call to a function with no clue\n as to where it might be implemented.\n As a result, Classic ASP code tended\n to have long functions and scripts to\n avoid those nasty jumps. I remember\n seeing a function that printed out to\n 40 pages! Parsing straight through\n that much code is no fun.\n\nI've actually had this displeasure of working with the legacy code quite a bit so I have a decent high level understanding of the system. You're right about the function length, there are some routines (most I've refactored down into much smaller ones) that are 3-4x as long as any of the aspx pages/helper classes/ORMs on the new site.\n", "I once came across a .Net app that was ported from ASP. The .aspx pages were totally blank. To render the UI, the developers used StringBuilders in the code behind and then did a response.write. This would be the wrong way to do it!\n", "\nI once came across a .Net app that was ported from ASP. The .aspx pages were totally blank. To render the UI, the developers used StringBuilders in the code behind and then did a response.write. This would be the wrong way to do it!\n\nI've seen it done the other way, the code behind page was blank, except for declaration of globals, then the VBScript was left in the ASPX.\n" ]
[ 6, 2, 1, 1, 0, 0, 0, 0 ]
[]
[]
[ "asp.net", "asp_classic", "migration" ]
stackoverflow_0000031192_asp.net_asp_classic_migration.txt
Q: What's the fastest way to determine a full URL from a relative URL (given a base URL) I'm currently using the module URI::URL to generate a full URL from a relative URL; however, it isn't running as fast as I'd like it to be. Does anyone know another way to do this that may be faster? A: Just happened across this article which point out shortcomings in Redhat/Centos/Fedora implementations of Perl which affect URI profoundly. If you are running one of these Linux flavours, you might want to recompile Perl from original source (not RPM source). I realized that anyone running perl code with the distribution perl interpretter on Redhat 5.2, Centos 5.2 or Fedora 9 is likely a victim. Yes, even if your code doesn’t use the fancy bless/overload idiom, many CPAN modules do! This google search shows 1500+ modules use the bless/overload idiom and they include some really popular ones like URI, JSON. ... ... At this point, I decided to recompile perl from source. The bug was gone. And the difference was appalling. Everything got seriously fast. CPUs were chilling at a loadavg below 0.10 and we were processing data 100x to 1000x faster! A: The following code should work. $uri = URI->new_abs( $str, $base_uri ) You should also take a look at the URI page on search.cpan.org. A: Brendan, I should have clarified that I can't guarantee what the relative path is going to look like. It could be pretty tricky (e.g. has a slash at the front, doesn't have a slash, has "../", etc). Peter, that's what I'm using now. Or is that faster then using the URI::URL->new($path)->abs? A: Could depend a bit how you obtain those 2 strings. Probably the secure, fireproof way to do that is what is in URI::URL or similar libraries, where all alternatives, including malicious ones, would be considered. Maybe slower, but in some environments faster will be the speed of a bullet going to your own foot. But if you expect there something plain and not tricky could see if it starts with /, chains of ../, or any other char. The 1st would put the server name + the url, the 2nd chop paths from the base uri till getting in one of the other 2 alternatives, or just add it to the base url. A: Perhaps I got the wrong end of the stick but wouldn't, $full_url = $base_url . $relative_url work? IIRC Perl text processing is pretty quick. @lennysan Ah sure yes of course. Sorry I can't help, my Perl is pretty rusty.
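A small sketch of the URI-based approach mentioned in the answers above, since new_abs also copes with the awkward relative forms raised later in the thread (leading slash, no slash, ../ chains). The base URL and paths are made up, and the expected results in the comments assume standard RFC 3986 resolution:

use strict;
use warnings;
use URI;

my $base = 'http://example.com/articles/2008/index.html';

print URI->new_abs('photo.jpg',        $base), "\n";   # http://example.com/articles/2008/photo.jpg
print URI->new_abs('/about.html',      $base), "\n";   # http://example.com/about.html
print URI->new_abs('../2007/old.html', $base), "\n";   # http://example.com/articles/2007/old.html

URI->new_abs also avoids going through the older URI::URL wrapper, which is worth benchmarking before resorting to hand-rolled string concatenation.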
What's the fastest way to determine a full URL from a relative URL (given a base URL)
I'm currently using the module URI::URL to generate a full URL from a relative URL; however, it isn't running as fast as I'd like it to be. Does anyone know another way to do this that may be faster?
[ "Just happened across this article which point out shortcomings in Redhat/Centos/Fedora implementations of Perl which affect URI profoundly.\nIf you are running one of these Linux flavours, you might want to recompile Perl from original source (not RPM source).\n\nI realized that anyone running perl code with the distribution perl interpretter on Redhat 5.2, Centos 5.2 or Fedora 9 is likely a victim. Yes, even if your code doesn’t use the fancy bless/overload idiom, many CPAN modules do! This google search shows 1500+ modules use the bless/overload idiom and they include some really popular ones like URI, JSON. ...\n... At this point, I decided to recompile perl from source. The bug was gone. And the difference was appalling. Everything got seriously fast. CPUs were chilling at a loadavg below 0.10 and we were processing data 100x to 1000x faster!\n\n", "The following code should work.\n$uri = URI->new_abs( $str, $base_uri )\n\nYou should also take a look at the URI page on search.cpan.org.\n", "Brendan, I should have clarified that I can't guarantee what the relative path is going to look like. It could be pretty tricky (e.g. has a slash at the front, doesn't have a slash, has \"../\", etc).\nPeter, that's what I'm using now. Or is that faster then using the URI::URL->new($path)->abs?\n", "Could depend a bit how you obtain those 2 strings. Probably the secure, fireproof way to do that is what is in URI::URL or similar libraries, where all alternatives, including malicious ones, would be considered. Maybe slower, but in some environments faster will be the speed of a bullet going to your own foot.\nBut if you expect there something plain and not tricky could see if it starts with /, chains of ../, or any other char. The 1st would put the server name + the url, the 2nd chop paths from the base uri till getting in one of the other 2 alternatives, or just add it to the base url.\n", "Perhaps I got the wrong end of the stick but wouldn't,\n$full_url = $base_url . $relative_url\nwork? IIRC Perl text processing is pretty quick.\n@lennysan Ah sure yes of course. Sorry I can't help, my Perl is pretty rusty.\n" ]
[ 4, 3, 1, 1, 0 ]
[]
[]
[ "performance", "perl", "perl_module", "regex", "uri" ]
stackoverflow_0000026855_performance_perl_perl_module_regex_uri.txt
Q: Checklist for Database Schema Upgrades Having to upgrade a database schema makes installing a new release of software a lot trickier. What are the best practices for doing this? I'm looking for a checklist or timeline of action items, such as 8:30 shut down apps 8:45 modify schema 9:15 install new apps 9:30 restart db etc, showing how to minimize risk and downtime. Issues such as backing out of the upgrade if things go awry minimizing impact to existing apps "hot" updates while the database is running promoting from dev to test to production servers are especially of interest. A: I have a lot of experience with this. My application is highly iterative, and schema changes happen frequently. I do a production release roughly every 2 to 3 weeks, with 50-100 items cleared from my FogBugz list for each one. Every release we've done over the last few years has required schema changes to support new features. The key to this is to practice the changes several times in a test environment before actually making them on the live servers. I keep a deployment checklist file that is copied from a template and then heavily edited for each release with anything that is out of the ordinary. I have two scripts that I run on the database, one for schema changes, one for programmability (procedures, views, etc). The changes script is coded by hand, and the one with the procs is scripted via Powershell. The change script is run when everything is turned off (you have to pick a time that annoys the least amount of users for this), and it is run command by command, manually, just in case anything goes weird. The most common problem I have run into is adding a unique constraint that fails due to duplicate rows. When preparing for an integration testing cycle, I go through my checklist on a test server, as if that server was production. Then, in addition to that, I go get an actual copy of the production database (this is a good time to swap out your offsite backups), and I run the scripts on a restored local version (which is also good because it proves my latest backup is sound). I'm killing a lot of birds with one stone here. So that's 4 databases total: Dev: all changes must be made in the change script, never with studio. Test: Integration testing happens here Copy of production: Last minute deployment practice Production You really, really need to get it right when you do it on production. Backing out schema changes is hard. As far as hotfixes, I will only ever hotfix procedures, never schema, unless it's a very isolated change and crucial for the business. A: I guess you have considered the reads of Scott Ambler? http://www.agiledata.org/essays/databaseRefactoring.html A: This is a topic that I was just talking about at work. Mainly the problem is that unless database migrations is handled for you nicely by your framework, eg rails and their migration scripts, then it is left up to you. The current way that we do it has apparent flaws, and I am open to other suggestions. Have a schema dump with static data that is required to be there kept up to date and in version control. Every time you do a schema changing action, ALTER, CREATE, etc. dump it to a file and throw it in version control. Make sure you update the original sql db dump. When doing pushes to live make sure you or your script applies the sql files to the db. Clean up old sql files that are in version control as they become old. This is by no means optimal and is really not intended as a "backup" db. 
It's simply to make pushes to live easy, and to keep developers on the same page. There is probably something cool you could setup with capistrano as far as automating the application of the sql files to the db. Db specific version control would be pretty awesome. There is probably something that does that and if there isn't there probably should be. A: And if the Scott Ambler paper whets your appetite I can recommend his book with Pramod J Sadolage called 'Refactoring Databases' - http://www.ambysoft.com/books/refactoringDatabases.html There is also a lot of useful advice and information at the Agile Database group at Yahoo - http://tech.groups.yahoo.com/group/agileDatabases/ A: Two quick notes: It goes without saying... So I'll say it twice. Verify that you have a valid backup. Verify that you have a valid backup. @mk. Check out Jeff's blog post on database version control (if you haven't already)
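A minimal sketch of the 'apply the sql files to the db on push' step described above. Everything here is an assumption rather than something from the thread: the migrations/ directory of numbered change scripts, the one-row schema_version table, and psql as the client. Swap in whatever your shop actually uses.

#!/bin/sh
# Apply any numbered change script (001_add_orders.sql, 002_..., ...) that is
# newer than the version recorded in the database, in order, stopping on error.
current=$(psql -At -c "SELECT version FROM schema_version")
for f in migrations/*.sql; do
    num=$(basename "$f" | cut -d_ -f1)
    if [ "$num" -gt "$current" ]; then
        echo "Applying $f"
        psql -v ON_ERROR_STOP=1 -f "$f" || exit 1
        psql -c "UPDATE schema_version SET version = $num"
    fi
done

Recording the applied version in the database is what keeps developers and the live push on the same page, and it is exactly the kind of step that is easy to hang off a capistrano task.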
Checklist for Database Schema Upgrades
Having to upgrade a database schema makes installing a new release of software a lot trickier. What are the best practices for doing this? I'm looking for a checklist or timeline of action items, such as 8:30 shut down apps 8:45 modify schema 9:15 install new apps 9:30 restart db etc, showing how to minimize risk and downtime. Issues such as backing out of the upgrade if things go awry minimizing impact to existing apps "hot" updates while the database is running promoting from dev to test to production servers are especially of interest.
[ "I have a lot of experience with this. My application is highly iterative, and schema changes happen frequently. I do a production release roughly every 2 to 3 weeks, with 50-100 items cleared from my FogBugz list for each one. Every release we've done over the last few years has required schema changes to support new features.\nThe key to this is to practice the changes several times in a test environment before actually making them on the live servers.\nI keep a deployment checklist file that is copied from a template and then heavily edited for each release with anything that is out of the ordinary.\nI have two scripts that I run on the database, one for schema changes, one for programmability (procedures, views, etc). The changes script is coded by hand, and the one with the procs is scripted via Powershell. The change script is run when everything is turned off (you have to pick a time that annoys the least amount of users for this), and it is run command by command, manually, just in case anything goes weird. The most common problem I have run into is adding a unique constraint that fails due to duplicate rows.\nWhen preparing for an integration testing cycle, I go through my checklist on a test server, as if that server was production. Then, in addition to that, I go get an actual copy of the production database (this is a good time to swap out your offsite backups), and I run the scripts on a restored local version (which is also good because it proves my latest backup is sound). I'm killing a lot of birds with one stone here.\nSo that's 4 databases total:\n\nDev: all changes must be made in the change script, never with studio.\nTest: Integration testing happens here\nCopy of production: Last minute deployment practice\nProduction\n\nYou really, really need to get it right when you do it on production. Backing out schema changes is hard.\nAs far as hotfixes, I will only ever hotfix procedures, never schema, unless it's a very isolated change and crucial for the business.\n", "I guess you have considered the reads of Scott Ambler?\nhttp://www.agiledata.org/essays/databaseRefactoring.html\n", "This is a topic that I was just talking about at work. Mainly the problem is that unless database migrations is handled for you nicely by your framework, eg rails and their migration scripts, then it is left up to you. \nThe current way that we do it has apparent flaws, and I am open to other suggestions. \n\nHave a schema dump with static data that is required to be there kept up to date and in version control. \nEvery time you do a schema changing action, ALTER, CREATE, etc. dump it to a file and throw it in version control. \nMake sure you update the original sql db dump. \nWhen doing pushes to live make sure you or your script applies the sql files to the db. \nClean up old sql files that are in version control as they become old.\n\nThis is by no means optimal and is really not intended as a \"backup\" db. It's simply to make pushes to live easy, and to keep developers on the same page. There is probably something cool you could setup with capistrano as far as automating the application of the sql files to the db. \nDb specific version control would be pretty awesome. There is probably something that does that and if there isn't there probably should be. 
\n", "And if the Scott Ambler paper whets your appetite I can recommend his book with Pramod J Sadolage called 'Refactoring Databases' - http://www.ambysoft.com/books/refactoringDatabases.html\nThere is also a lot of useful advice and information at the Agile Database group at Yahoo - http://tech.groups.yahoo.com/group/agileDatabases/\n", "Two quick notes:\n\nIt goes without saying... So I'll say it twice.\nVerify that you have a valid backup.\nVerify that you have a valid backup.\n@mk. Check out Jeff's blog post on database version control (if you haven't already)\n\n" ]
[ 5, 2, 1, 1, 1 ]
[]
[]
[ "database", "installation", "version_control" ]
stackoverflow_0000031303_database_installation_version_control.txt
Q: Find and Replace with Unique I am performing a find and replace on the line feed character (&#10;) and replacing it with the paragraph close and paragraph open tags using the following code: <xsl:template match="/STORIES/STORY"> <component> <xsl:if test="boolean(ARTICLEBODY)"> <p> <xsl:call-template name="replace-text"> <xsl:with-param name="text" select="ARTICLEBODY" /> <xsl:with-param name="replace" select="'&#10;'" /> <xsl:with-param name="by" select="'&lt;/p&gt;&lt;p&gt;'" /> </xsl:call-template> </p> </xsl:if> </component> </xsl:template> <xsl:template name="replace-text"> <xsl:param name="text"/> <xsl:param name="replace" /> <xsl:param name="by" /> <xsl:choose> <xsl:when test="contains($text, $replace)"> <xsl:value-of select="substring-before($text, $replace)"/> <xsl:value-of select="$by" disable-output-escaping="yes"/> <xsl:call-template name="replace-text"> <xsl:with-param name="text" select="substring-after($text, $replace)"/> <xsl:with-param name="replace" select="$replace" /> <xsl:with-param name="by" select="$by" /> </xsl:call-template> </xsl:when> <xsl:otherwise> <xsl:value-of select="$text"/> </xsl:otherwise> </xsl:choose> </xsl:template> This almost works perfectly, except that I really need it to de-dup the line feeds as the paragraphs tend to be separated by 2 or more resulting in </p><p></p><p>. Is it possible to get it so that it will only ever replace this once per paragraph? A: disable-output-escaping isn't evil in itself, but there are only few cases where you should use it and this isn't one of them. In XSLT you work with trees, not markup string. Here's an XSTL 1.0 solution: <xsl:template match="/STORIES/STORY"> <component> <xsl:if test="ARTICLEBODY"> <xsl:call-template name="wrap-text"> <xsl:with-param name="text" select="ARTICLEBODY"/> <xsl:with-param name="delimiter" select="'&#10;'"/> <xsl:with-param name="element" select="'p'"/> </xsl:call-template> </xsl:if> </component> </xsl:template> <xsl:template name="wrap-text"> <xsl:param name="text"/> <xsl:param name="delimiter"/> <xsl:param name="element"/> <xsl:choose> <xsl:when test="contains($text, $delimiter)"> <xsl:variable name="t" select="substring-before($text, $delimiter)"/> <xsl:if test="normalize-space($t)"> <xsl:element name="{$element}"> <xsl:value-of select="$t"/> </xsl:element> </xsl:if> <xsl:call-template name="wrap-text"> <xsl:with-param name="text" select="substring-after($text, $delimiter)"/> <xsl:with-param name="delimiter" select="$delimiter"/> <xsl:with-param name="element" select="$element"/> </xsl:call-template> </xsl:when> <xsl:otherwise> <xsl:if test="normalize-space($text)"> <xsl:element name="{$element}"> <xsl:value-of select="$text"/> </xsl:element> </xsl:if> </xsl:otherwise> </xsl:choose> </xsl:template> A: Given the XPath functions that you're calling which I don't remember having the luxury of in my MSXSL work, it looks like you're using an XPath 2-compatible processor. If that's the case, doesn't XPath 2 have a replace(string, pattern, replacement) function that takes a regex as a second parameter? <xsl:value-of select="replace(string(.), '&#10;(\s|&#10;)*', '&lt;/p&gt;&lt;p&gt;')" /> It might help to have some sample Xml input and to know what processor you plan to use. From your original example, it seems that the duplicate paragraphs all have a white-space only prefix. So something like this slight modification might trim the dupes. 
<xsl:when test="contains($text, $replace)"> <xsl:variable name="prefix" select="substring-before($text, $replace)" /> <xsl:choose> <xsl:when test="normalize-string($prefix)!=''"> <xsl:value-of select="$prefix"/> <xsl:value-of select="$by" disable-output-escaping="yes"/> </xsl:when> </xsl:choose> <xsl:call-template name="replace-text"> <xsl:with-param name="text" select="substring-after($text, $replace)"/> <xsl:with-param name="replace" select="$replace" /> <xsl:with-param name="by" select="$by" /> </xsl:call-template> A: Try this (XSLT 2.0): <xsl:template match="/STORIES/STORY"> <component> <xsl:if test="boolean(ARTICLEBODY)"> <xsl:call-template name="insert_paras"> <xsl:with-param name="text" select="ARTICLEBODY/text()"/> </xsl:call-template> </xsl:if> </component> </xsl:template> <xsl:template name="insert_paras"> <xsl:param name="text" /> <xsl:variable name="regex"> <xsl:text>&#10;(&#10;|\s)*</xsl:text> </xsl:variable> <xsl:variable name="tokenized-text" select="tokenize($text, $regex)"/> <xsl:for-each select="$tokenized-text"> <p> <xsl:value-of select="."/> </p> </xsl:for-each> </xsl:template> It's generally a bad idea to use literal strings to put in XML markup, since you can't guarantee that the results are balanced.
Find and Replace with Unique
I am performing a find and replace on the line feed character (&#10;) and replacing it with the paragraph close and paragraph open tags using the following code: <xsl:template match="/STORIES/STORY"> <component> <xsl:if test="boolean(ARTICLEBODY)"> <p> <xsl:call-template name="replace-text"> <xsl:with-param name="text" select="ARTICLEBODY" /> <xsl:with-param name="replace" select="'&#10;'" /> <xsl:with-param name="by" select="'&lt;/p&gt;&lt;p&gt;'" /> </xsl:call-template> </p> </xsl:if> </component> </xsl:template> <xsl:template name="replace-text"> <xsl:param name="text"/> <xsl:param name="replace" /> <xsl:param name="by" /> <xsl:choose> <xsl:when test="contains($text, $replace)"> <xsl:value-of select="substring-before($text, $replace)"/> <xsl:value-of select="$by" disable-output-escaping="yes"/> <xsl:call-template name="replace-text"> <xsl:with-param name="text" select="substring-after($text, $replace)"/> <xsl:with-param name="replace" select="$replace" /> <xsl:with-param name="by" select="$by" /> </xsl:call-template> </xsl:when> <xsl:otherwise> <xsl:value-of select="$text"/> </xsl:otherwise> </xsl:choose> </xsl:template> This almost works perfectly, except that I really need it to de-dup the line feeds as the paragraphs tend to be separated by 2 or more resulting in </p><p></p><p>. Is it possible to get it so that it will only ever replace this once per paragraph?
[ "disable-output-escaping isn't evil in itself, but there are only few cases where you should use it and this isn't one of them. In XSLT you work with trees, not markup string. Here's an XSTL 1.0 solution:\n<xsl:template match=\"/STORIES/STORY\">\n <component>\n <xsl:if test=\"ARTICLEBODY\">\n <xsl:call-template name=\"wrap-text\">\n <xsl:with-param name=\"text\" select=\"ARTICLEBODY\"/>\n <xsl:with-param name=\"delimiter\" select=\"'&#10;'\"/>\n <xsl:with-param name=\"element\" select=\"'p'\"/>\n </xsl:call-template>\n </xsl:if>\n </component>\n</xsl:template>\n\n<xsl:template name=\"wrap-text\">\n <xsl:param name=\"text\"/>\n <xsl:param name=\"delimiter\"/>\n <xsl:param name=\"element\"/>\n\n <xsl:choose>\n <xsl:when test=\"contains($text, $delimiter)\">\n <xsl:variable name=\"t\" select=\"substring-before($text, $delimiter)\"/>\n <xsl:if test=\"normalize-space($t)\">\n <xsl:element name=\"{$element}\">\n <xsl:value-of select=\"$t\"/> \n </xsl:element>\n </xsl:if> \n <xsl:call-template name=\"wrap-text\">\n <xsl:with-param name=\"text\" select=\"substring-after($text, $delimiter)\"/>\n <xsl:with-param name=\"delimiter\" select=\"$delimiter\"/>\n <xsl:with-param name=\"element\" select=\"$element\"/>\n </xsl:call-template>\n </xsl:when>\n <xsl:otherwise>\n <xsl:if test=\"normalize-space($text)\">\n <xsl:element name=\"{$element}\">\n <xsl:value-of select=\"$text\"/> \n </xsl:element>\n </xsl:if>\n </xsl:otherwise>\n </xsl:choose>\n</xsl:template>\n\n", "Given the XPath functions that you're calling which I don't remember having the luxury of in my MSXSL work, it looks like you're using an XPath 2-compatible processor. \nIf that's the case, doesn't XPath 2 have a replace(string, pattern, replacement) function that takes a regex as a second parameter? \n<xsl:value-of \n select=\"replace(string(.), '&#10;(\\s|&#10;)*', '&lt;/p&gt;&lt;p&gt;')\" />\n\nIt might help to have some sample Xml input and to know what processor you plan to use.\nFrom your original example, it seems that the duplicate paragraphs all have a white-space only prefix. So something like this slight modification might trim the dupes.\n<xsl:when test=\"contains($text, $replace)\">\n <xsl:variable name=\"prefix\" select=\"substring-before($text, $replace)\" />\n <xsl:choose>\n <xsl:when test=\"normalize-string($prefix)!=''\">\n <xsl:value-of select=\"$prefix\"/>\n <xsl:value-of select=\"$by\" disable-output-escaping=\"yes\"/>\n </xsl:when>\n </xsl:choose>\n <xsl:call-template name=\"replace-text\">\n <xsl:with-param name=\"text\" select=\"substring-after($text, $replace)\"/>\n <xsl:with-param name=\"replace\" select=\"$replace\" />\n <xsl:with-param name=\"by\" select=\"$by\" />\n </xsl:call-template>\n\n\n", "Try this (XSLT 2.0):\n <xsl:template match=\"/STORIES/STORY\">\n <component>\n <xsl:if test=\"boolean(ARTICLEBODY)\">\n <xsl:call-template name=\"insert_paras\">\n <xsl:with-param name=\"text\" select=\"ARTICLEBODY/text()\"/>\n </xsl:call-template>\n </xsl:if>\n </component>\n </xsl:template>\n\n <xsl:template name=\"insert_paras\">\n <xsl:param name=\"text\" />\n\n <xsl:variable name=\"regex\">\n <xsl:text>&#10;(&#10;|\\s)*</xsl:text>\n </xsl:variable>\n <xsl:variable name=\"tokenized-text\" select=\"tokenize($text, $regex)\"/>\n\n <xsl:for-each select=\"$tokenized-text\">\n <p>\n <xsl:value-of select=\".\"/>\n </p>\n </xsl:for-each>\n </xsl:template>\n\nIt's generally a bad idea to use literal strings to put in XML markup, since you can't guarantee that the results are balanced.\n" ]
[ 5, 1, 1 ]
[]
[]
[ "xml", "xslt" ]
stackoverflow_0000031366_xml_xslt.txt
Q: Asp.net MVC User Control ViewData When a controller renders a view based on a model you can get the properties from the ViewData collection using the indexer (ie. ViewData["Property"]). However, I have a shared user control that I tried to call using the following: return View("Message", new { DisplayMessage = "This is a test" }); and on my Message control I had this: <%= ViewData["DisplayMessage"] %> I would think this would render the DisplayMessage correctly, however, null is being returned. After a heavy dose of tinkering around, I finally created a "MessageData" class in order to strongly type my user control: public class MessageControl : ViewUserControl<MessageData> and now this call works: return View("Message", new MessageData() { DisplayMessage = "This is a test" }); and can be displayed like this: <%= ViewData.Model.DisplayMessage %> Why wouldn't the DisplayMessage property be added to the ViewData (ie. ViewData["DisplayMessage"]) collection without strong typing the user control? Is this by design? Wouldn't it make sense that ViewData would contain a key for "DisplayMessage"? A: The method ViewData.Eval("DisplayMessage") should work for you. A: Of course after I create this question I immediately find the answer after a few more searches on Google http://forums.asp.net/t/1197059.aspx Apparently this happens because of the wrapper class. Even so, it seems like any property passed should get added to the ViewData collection by default. I really need to stop answering my own questions :(
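To put the two behaviours discussed above side by side: the action name below is a placeholder, and the comments describe the behaviour reported in this thread, namely that the anonymous object becomes ViewData.Model rather than dictionary entries, and that ViewData.Eval falls back to the model's properties.

// Controller action: the anonymous type is wrapped as ViewData.Model,
// so nothing is added to the ViewData dictionary itself.
public ActionResult Message()
{
    return View("Message", new { DisplayMessage = "This is a test" });
}

<%-- In the Message user control: --%>
<%= ViewData["DisplayMessage"] %>       <%-- null: no dictionary entry was created --%>
<%= ViewData.Eval("DisplayMessage") %>  <%-- "This is a test": Eval also checks the Model's properties --%>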
Asp.net MVC User Control ViewData
When a controller renders a view based on a model you can get the properties from the ViewData collection using the indexer (ie. ViewData["Property"]). However, I have a shared user control that I tried to call using the following: return View("Message", new { DisplayMessage = "This is a test" }); and on my Message control I had this: <%= ViewData["DisplayMessage"] %> I would think this would render the DisplayMessage correctly, however, null is being returned. After a heavy dose of tinkering around, I finally created a "MessageData" class in order to strongly type my user control: public class MessageControl : ViewUserControl<MessageData> and now this call works: return View("Message", new MessageData() { DisplayMessage = "This is a test" }); and can be displayed like this: <%= ViewData.Model.DisplayMessage %> Why wouldn't the DisplayMessage property be added to the ViewData (ie. ViewData["DisplayMessage"]) collection without strong typing the user control? Is this by design? Wouldn't it make sense that ViewData would contain a key for "DisplayMessage"?
[ "The method \nViewData.Eval(\"DisplayMessage\") \n\nshould work for you.\n", "Of course after I create this question I immediately find the answer after a few more searches on Google\nhttp://forums.asp.net/t/1197059.aspx\nApparently this happens because of the wrapper class. Even so, it seems like any property passed should get added to the ViewData collection by default.\nI really need to stop answering my own questions :(\n" ]
[ 6, 2 ]
[]
[]
[ "asp.net", "asp.net_mvc", "viewdata", "viewusercontrol" ]
stackoverflow_0000018787_asp.net_asp.net_mvc_viewdata_viewusercontrol.txt
Q: Why do I receive a q[num] error when aborting a jQuery queue pipeline? When creating and executing an ajax request queue with $.manageAjax, I call ajaxManager.abort(); to abort the entire queue due to an error, at which time I get an error stating: q[num] has no properties (jquery.ajaxmanager.js line 75) Here is the calling code: var ajaxManager = $.manageAjax({manageType:'sync', maxReq:0}); // setup code calling ajaxManager.add(...) // in success callback of first request ajaxManager.abort(); <-- causes error in jquery.ajaxManager.js There are 4 requests in the queue; this is being called in the success of the first request, and if certain criteria are met, the queue needs to be aborted. Any ideas? A: It looks like you've got fewer items in q than you were expecting when you started iterating. Your script may be trying to access q[q.length], i.e. the element after the last element. Could it be that your successful request has been popped from the queue, and you have a race condition? Are you trying to abort a request that has already completed its life cycle? Alternatively, have you made a silly mistake as people sometimes do, and got your loop termination condition wrong? Just a few thoughts, I hope they help.
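If the failure really is the queue being modified while abort() walks it, the usual fix inside the plugin is a guard before each entry is touched. The plugin source isn't reproduced here, so q, num, and the abort call below are only stand-ins for whatever jquery.ajaxmanager.js actually keeps around line 75; treat this as the shape of a defensive patch, not the patch itself.

for (var num = 0; num < q.length; num++) {
    // An entry can already have completed and been removed by the time we get
    // here, so skip anything that is no longer a live request.
    if (q[num] && typeof q[num].abort === 'function') {
        q[num].abort();
    }
}
q.length = 0; // empty the queue once the remaining requests have been aborted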
Why do I receive a q[num] error when aborting a jQuery queue pipeline?
When creating and executing an ajax request queue with $.manageAjax, I call ajaxManager.abort(); to abort the entire queue due to an error, at which time I get an error stating: q[num] has no properties (jquery.ajaxmanager.js line 75) Here is the calling code: var ajaxManager = $.manageAjax({manageType:'sync', maxReq:0}); // setup code calling ajaxManager.add(...) // in success callback of first request ajaxManager.abort(); <-- causes error in jquery.ajaxManager.js There are 4 requests in the queue; this is being called in the success of the first request, and if certain criteria are met, the queue needs to be aborted. Any ideas?
[ "It looks like you've got fewer items in q than you were expecting when you started iterating. Your script may be trying to access q[q.length], i.e. the element after the last element.\nCould it be that your successful request has been popped from the queue, and you have a race condition? Are you trying to abort a request that has already completed its life cycle? Alternatively, have you made a silly mistake as people sometimes do, and got your loop termination condition wrong?\nJust a few thoughts, I hope they help.\n" ]
[ 1 ]
[]
[]
[ "ajax", "jquery" ]
stackoverflow_0000030342_ajax_jquery.txt
Q: Install Python to match directory layout in OS X 10.5 The default Python install on OS X 10.5 is 2.5.1 with a FAT 32 bit (Intel and PPC) client. I want to setup apache and mysql to run django. In the past, I have run Apache and MySQL to match this install in 32 bit mode (even stripping out the 64 bit stuff from Apache to make it work). I want to upgrade Python to 64 bit. I am completely comfortable with compiling it from source with one caveat. How do I match the way that the default install is laid out? Especially, with regards to site-packages being in /Library/Python/2.5/ and not the one in buried at the top of the framework once I compile it. A: Not sure I entirely understand your question, but can't you simply build and install a 64 bit version and then create symbolic links so that /Library/Python/2.5 and below point to your freshly built version of python? A: Personally, I wouldn't worry about it until you see a problem. Messing with the default python install on a *Nix system can cause more trouble than it's worth. I can say from personal experience that you never truly understand what python has done for the nix world until you have a problem with it. You can also add a second python installation, but that also causes more problems than it's worth IMO. So I suppose the best question to start out with would be why exactly do you want to use the 64 bit version of python? A: Hyposaurus, It is possible to have multiple versions of Python installed simultaneously. Installing two versions in parallel solves your problem and helps avoid the problems laid out by Jason Baker above. The easiest way, and the way I recommend, is to use MacPorts, which will install all its software separately. By default, for example, everything is installed in /opt/local Another method is to simply download the source and compile with a specified prefix. Note that this method doesn't modify your PATH environment variable, so you'll need to do that yourself if you want to avoid typing the fully qualified path to the python executable each time ./configure --prefix=/usr/local/python64 make sudo make install Then you can simply point your Apache install at the new version using mod_python's PythonInterpreter directive A: Essentially, yes. I was not sure you could do it like that (current version does not do it like that). When using the python install script, however, there is no option (that I can find) to specify where to put directories and files (eg --prefix). I was hoping to match the current layout of python related files so as to avoid 'polluting' my machine with redundant files. A: The short answer is because I can. The long answer, expanding on what the OP said, is to be more compatible with apache and mysql/postgresql. They are all 64bit (apache is a fat binary with ppc, ppc64 x86 and x86 and x86_64, the others just straight 64bit). Mysqldb and mod_python wont compile unless they are all running the same architecture. Yes I could run them all in 32bit (and have in the past) but this is much more work then compiling one program. EDIT: You pretty much convinced though to just let the installer do its thing and update the PATH to reflect this.
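A sketch of the 'install in parallel and wire it up' route from the answers above, reusing the hypothetical /usr/local/python64 prefix from the compile-from-source answer. The .pth trick simply tells the new interpreter to also search Apple's /Library/Python/2.5/site-packages, so existing packages stay where they are and the system Python is never touched.

# after: ./configure --prefix=/usr/local/python64 && make && sudo make install
PREFIX=/usr/local/python64

# add Apple's site-packages directory to the new interpreter's sys.path
echo /Library/Python/2.5/site-packages | \
    sudo tee $PREFIX/lib/python2.5/site-packages/apple-layout.pth

# prefer the new interpreter via PATH rather than replacing /usr/bin/python
echo 'export PATH='$PREFIX'/bin:$PATH' >> ~/.bash_profile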
Install Python to match directory layout in OS X 10.5
The default Python install on OS X 10.5 is 2.5.1 with a FAT 32 bit (Intel and PPC) client. I want to setup apache and mysql to run django. In the past, I have run Apache and MySQL to match this install in 32 bit mode (even stripping out the 64 bit stuff from Apache to make it work). I want to upgrade Python to 64 bit. I am completely comfortable with compiling it from source with one caveat. How do I match the way that the default install is laid out? Especially, with regards to site-packages being in /Library/Python/2.5/ and not the one in buried at the top of the framework once I compile it.
[ "Not sure I entirely understand your question, but can't you simply build and install a 64 bit version and then create symbolic links so that /Library/Python/2.5 and below point to your freshly built version of python?\n", "Personally, I wouldn't worry about it until you see a problem. Messing with the default python install on a *Nix system can cause more trouble than it's worth. I can say from personal experience that you never truly understand what python has done for the nix world until you have a problem with it.\nYou can also add a second python installation, but that also causes more problems than it's worth IMO.\nSo I suppose the best question to start out with would be why exactly do you want to use the 64 bit version of python?\n", "Hyposaurus,\nIt is possible to have multiple versions of Python installed simultaneously. Installing two versions in parallel solves your problem and helps avoid the problems laid out by Jason Baker above. \nThe easiest way, and the way I recommend, is to use MacPorts, which will install all its software separately. By default, for example, everything is installed in /opt/local\nAnother method is to simply download the source and compile with a specified prefix. Note that this method doesn't modify your PATH environment variable, so you'll need to do that yourself if you want to avoid typing the fully qualified path to the python executable each time\n./configure --prefix=/usr/local/python64\nmake\nsudo make install\n\nThen you can simply point your Apache install at the new version using mod_python's PythonInterpreter directive\n", "Essentially, yes. I was not sure you could do it like that (current version does not do it like that). When using the python install script, however, there is no option (that I can find) to specify where to put directories and files (eg --prefix). I was hoping to match the current layout of python related files so as to avoid 'polluting' my machine with redundant files.\n", "The short answer is because I can. The long answer, expanding on what the OP said, is to be more compatible with apache and mysql/postgresql. They are all 64bit (apache is a fat binary with ppc, ppc64 x86 and x86 and x86_64, the others just straight 64bit). Mysqldb and mod_python wont compile unless they are all running the same architecture. Yes I could run them all in 32bit (and have in the past) but this is much more work then compiling one program.\nEDIT: You pretty much convinced though to just let the installer do its thing and update the PATH to reflect this.\n" ]
[ 1, 1, 1, 0, 0 ]
[]
[]
[ "64_bit", "macos", "python" ]
stackoverflow_0000029856_64_bit_macos_python.txt
Q: How to plot a long path with Virtual Earth The obvious way to plot a path with virtual earth (VEMap.GetDirections) is limited to 25 points. When trying to plot a vehicle's journey this is extremely limiting. How can I plot a by-road journey of more than 25 points on a virtual earth map? A: According to this you need to call VEMap.GetDirections every 25 points until you reach the end of the route and then plot a custom shape of the complete route.
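A rough sketch of the batching approach from the answer above, written against the Virtual Earth 6.x JavaScript control. VEMap.GetDirections, VERouteOptions, VEShape and VEShapeType.Polyline are the 6.x names as I recall them, but the exact shape of the route object handed to RouteCallback varies by release, so extracting its points is left as a clearly marked placeholder; check everything against the SDK you target.

var allPoints = [];
var pending = 0;

function plotLongRoute(map, stops) {
    // Overlap each batch by one stop so the segments join into one continuous route.
    for (var i = 0; i < stops.length - 1; i += 24) {
        var batch = stops.slice(i, Math.min(i + 25, stops.length));
        var options = new VERouteOptions();
        options.DrawRoute = false;            // we draw the combined route ourselves
        options.RouteCallback = function (route) {
            // TODO: pull the lat/long pairs out of 'route' for your SDK version
            // and append them, e.g. allPoints = allPoints.concat(pointsFrom(route));
            pending--;
            if (pending === 0) {
                map.AddShape(new VEShape(VEShapeType.Polyline, allPoints));
            }
        };
        pending++;
        map.GetDirections(batch, options);
    }
}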
How to plot a long path with Virtual Earth
The obvious way to plot a path with virtual earth (VEMap.GetDirections) is limited to 25 points. When trying to plot a vehicle's journey this is extremely limiting. How can I plot a by-road journey of more than 25 points on a virtual earth map?
[ "According to this you need to call VEMap.GetDirections every 25 points until you reach the end of the route and then plot a custom shape of the complete route.\n" ]
[ 1 ]
[]
[]
[ "javascript", "virtual_earth" ]
stackoverflow_0000031711_javascript_virtual_earth.txt
Q: Process.StartTime Access Denied My code needs to determine how long a particular process has been running. But it continues to fail with an access denied error message on the Process.StartTime request. This is a process running with a User's credentials (ie, not a high-privilege process). There's clearly a security setting or a policy setting, or something that I need to twiddle with to fix this, as I can't believe the StartTime property is in the Framework just so that it can fail 100% of the time. A Google search indicated that I could resolve this by adding the user whose credentials the querying code is running under to the "Performance Log Users" group. However, no such user group exists on this machine. A: I've read something similar to what you said in the past, Lars. Unfortunately, I'm somewhat restricted with what I can do with the machine in question (in other words, I can't go creating user groups willy-nilly: it's a server, not just some random PC). Thanks for the answers, Will and Lars. Unfortunately, they didn't solve my problem. Ultimate solution to this is to use WMI: using System.Management; String queryString = "select CreationDate from Win32_Process where ProcessId='" + ProcessId + "'"; SelectQuery query = new SelectQuery(queryString); ManagementScope scope = new System.Management.ManagementScope(@"\\.\root\CIMV2"); ManagementObjectSearcher searcher = new ManagementObjectSearcher(scope, query); ManagementObjectCollection processes = searcher.Get(); //... snip ... logic to figure out which of the processes in the collection is the right one goes here DateTime startTime = ManagementDateTimeConverter.ToDateTime(processes[0]["CreationDate"].ToString()); TimeSpan uptime = DateTime.Now.Subtract(startTime); Parts of this were scraped from Code Project: http://www.codeproject.com/KB/system/win32processusingwmi.aspx And "Hey, Scripting Guy!": http://www.microsoft.com/technet/scriptcenter/resources/qanda/jul05/hey0720.mspx A: Process of .Net 1.1 uses the Performance Counters to get the information. Either they are disabled or the user does not have administrative rights. Making sure the Performance Counters are enabled and the user is an administrator should make your code work. Actually the "Performance Counter Users Group" should enough. The group doesn't exist by default. So you should create it yourself. Process of .Net 2.0 is not depended on the Performance Counters. See http://weblogs.asp.net/nunitaddin/archive/2004/11/21/267559.aspx A: The underlying code needs to be able to call OpenProcess, for which you may require SeDebugPrivilege. Is the process you're doing the StartTime request on running as a different user to your own process? A: OK, sorry that didn't work... I am no expert on ASP.NET impersonation, I tend to use app pools which I don't think you can do on W2K Have you tried writing a tiny little test app which does the same query, and then running that as various users? I am reluctant to post a chunk of MS framework code here, but you could use either Reflector or this: http://www.codeplex.com/NetMassDownloader to get the source code for the relevant bits of the framework so that you could try implementing various bits to see where it fails. Can you get any other info about the process without getting Access Denied? A: I can enumerate the process (ie, the GetProcessById function works), and we have other code that gets the EXE name and other bits of information. I will give the test app a try. 
I'm also going to attempt to use WMI to get this information if I can't get the C# implementation working properly in short order (this is not critical functionality, so I can't spend days on it).
Process.StartTime Access Denied
My code needs to determine how long a particular process has been running. But it continues to fail with an access denied error message on the Process.StartTime request. This is a process running with a User's credentials (ie, not a high-privilege process). There's clearly a security setting or a policy setting, or something that I need to twiddle with to fix this, as I can't believe the StartTime property is in the Framework just so that it can fail 100% of the time. A Google search indicated that I could resolve this by adding the user whose credentials the querying code is running under to the "Performance Log Users" group. However, no such user group exists on this machine.
[ "I've read something similar to what you said in the past, Lars. Unfortunately, I'm somewhat restricted with what I can do with the machine in question (in other words, I can't go creating user groups willy-nilly: it's a server, not just some random PC).\nThanks for the answers, Will and Lars. Unfortunately, they didn't solve my problem.\nUltimate solution to this is to use WMI:\nusing System.Management;\nString queryString = \"select CreationDate from Win32_Process where ProcessId='\" + ProcessId + \"'\";\nSelectQuery query = new SelectQuery(queryString);\n\nManagementScope scope = new System.Management.ManagementScope(@\"\\\\.\\root\\CIMV2\");\nManagementObjectSearcher searcher = new ManagementObjectSearcher(scope, query);\nManagementObjectCollection processes = searcher.Get();\n\n //... snip ... logic to figure out which of the processes in the collection is the right one goes here\n\nDateTime startTime = ManagementDateTimeConverter.ToDateTime(processes[0][\"CreationDate\"].ToString());\nTimeSpan uptime = DateTime.Now.Subtract(startTime);\n\nParts of this were scraped from Code Project:\nhttp://www.codeproject.com/KB/system/win32processusingwmi.aspx\nAnd \"Hey, Scripting Guy!\":\nhttp://www.microsoft.com/technet/scriptcenter/resources/qanda/jul05/hey0720.mspx\n", "Process of .Net 1.1 uses the Performance Counters to get the information. Either they are disabled or the user does not have administrative rights. Making sure the Performance Counters are enabled and the user is an administrator should make your code work.\nActually the \"Performance Counter Users Group\" should enough. The group doesn't exist by default. So you should create it yourself. \nProcess of .Net 2.0 is not depended on the Performance Counters.\nSee http://weblogs.asp.net/nunitaddin/archive/2004/11/21/267559.aspx\n", "The underlying code needs to be able to call OpenProcess, for which you may require SeDebugPrivilege.\nIs the process you're doing the StartTime request on running as a different user to your own process?\n", "OK, sorry that didn't work... I am no expert on ASP.NET impersonation, I tend to use app pools which I don't think you can do on W2K Have you tried writing a tiny little test app which does the same query, and then running that as various users? \nI am reluctant to post a chunk of MS framework code here, but you could use either Reflector or this: http://www.codeplex.com/NetMassDownloader to get the source code for the relevant bits of the framework so that you could try implementing various bits to see where it fails. \nCan you get any other info about the process without getting Access Denied?\n", "I can enumerate the process (ie, the GetProcessById function works), and we have other code that gets the EXE name and other bits of information.\nI will give the test app a try. I'm also going to attempt to use WMI to get this information if I can't get the C# implementation working properly in short order (this is not critical functionality, so I can't spend days on it).\n" ]
[ 6, 2, 1, 0, 0 ]
[]
[]
[ ".net_1.1", "c#", "windows_server_2000" ]
stackoverflow_0000028708_.net_1.1_c#_windows_server_2000.txt
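A condensed, compilable variant of the WMI approach from the accepted answer above, assuming the querying account is at least allowed to read Win32_Process; the method and variable names are illustrative:

    using System;
    using System.Management;   // add a project reference to System.Management.dll

    class ProcessUptime
    {
        // Returns how long the process has been running, or null if it was not found.
        static TimeSpan? GetUptime(int processId)
        {
            string query = "SELECT CreationDate FROM Win32_Process WHERE ProcessId = " + processId;
            using (ManagementObjectSearcher searcher =
                       new ManagementObjectSearcher(@"\\.\root\CIMV2", query))
            {
                foreach (ManagementObject process in searcher.Get())
                {
                    DateTime start =
                        ManagementDateTimeConverter.ToDateTime(process["CreationDate"].ToString());
                    return DateTime.Now - start;   // CreationDate converts to local time
                }
            }
            return null;
        }

        static void Main()
        {
            TimeSpan? uptime = GetUptime(System.Diagnostics.Process.GetCurrentProcess().Id);
            Console.WriteLine(uptime.HasValue ? uptime.Value.ToString() : "process not found");
        }
    }

As the thread notes, this path does not go through the performance counters that Process.StartTime relies on under .NET 1.1, which is why it sidesteps the access-denied error.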
Q: How many ServiceContracts can a WCF service have? How many ServiceContracts can a WCF service have? Specifically, since a ServiceContract is an attribute to an interface, how many interfaces can I code into one WCF web service? Is it a one-to-one? Does it make sense to separate the contracts across multiple web services? A: WCF services can have multiple endpoints, each of which can implement a different service contract. For example, you could have a service declared as follows: [ServiceBehavior(Namespace = "DemoService")] public class DemoService : IDemoService, IDoNothingService Which would have configuration along these lines: <service name="DemoService" behaviorConfiguration="Debugging"> <host> <baseAddresses> <add baseAddress = "http://localhost/DemoService.svc" /> </baseAddresses> </host> <endpoint address ="" binding="customBinding" bindingConfiguration="InsecureCustom" bindingNamespace="http://schemas.com/Demo" contract="IDemoService"/> <endpoint address ="" binding="customBinding" bindingConfiguration="InsecureCustom" bindingNamespace="http://schemas.com/Demo" contract="IDoNothingService"/> </service> Hope that helps, but if you were after the theoretical maximum interfaces you can have for a service I suspect it's some crazily large multiple of 2. A: You can have a service implement all the service contracts you want. I mean, I don't know if there is a limit, but I don't think there is. That's a neat way to separate operations that will be implemented by the same service in several conceptually different service contract interfaces. A: @jdiaz Of course you should strive to have very different business matters in different services, but consider the case in which you want that, for example, all your services implement a GetVersion() operation. You could have a service contract just for that operation and have every service implement it, instead of adding the GetVersion() operation to the contract of all your services. A: A service can theoretically have any number of Endpoints, and each Endpoint is bound to a particular contract, or interface, so it is possible for a single conceptual (and configured) service to host multiple interfaces via multiple endpoints or alternatively for several endpoints to host the same interface. If you are using the ServiceHost class to host your service, though, instead of IIS, you can only associate a single interface per ServiceHost. I'm not sure why this is the case, but it is.
How many ServiceContracts can a WCF service have?
How many ServiceContracts can a WCF service have? Specifically, since a ServiceContract is an attribute to an interface, how many interfaces can I code into one WCF web service? Is it a one-to-one? Does it make sense to separate the contracts across multiple web services?
[ "WCF services can have multiple endpoints, each of which can implement a different service contract.\nFor example, you could have a service declared as follows:\n[ServiceBehavior(Namespace = \"DemoService\")]\npublic class DemoService : IDemoService, IDoNothingService\n\nWhich would have configuration along these lines:\n<service name=\"DemoService\" behaviorConfiguration=\"Debugging\">\n <host>\n <baseAddresses>\n <add baseAddress = \"http://localhost/DemoService.svc\" />\n </baseAddresses>\n </host>\n <endpoint \n address =\"\"\n binding=\"customBinding\"\n bindingConfiguration=\"InsecureCustom\"\n bindingNamespace=\"http://schemas.com/Demo\" contract=\"IDemoService\"/>\n <endpoint \n address =\"\"\n binding=\"customBinding\"\n bindingConfiguration=\"InsecureCustom\"\n bindingNamespace=\"http://schemas.com/Demo\" contract=\"IDoNothingService\"/>\n</service> \n\nHope that helps, but if you were after the theoretical maximum interfaces you can have for a service I suspect it's some crazily large multiple of 2.\n", "You can have a service implement all the service contracts you want. I mean, I don't know if there is a limit, but I don't think there is.\nThat's a neat way to separate operations that will be implemented by the same service in several conceptually different service contract interfaces.\n", "@jdiaz\nOf course you should strive to have very different business matters in different services, but consider the case in which you want that, for example, all your services implement a GetVersion() operation. You could have a service contract just for that operation and have every service implement it, instead of adding the GetVersion() operation to the contract of all your services.\n", "A service can theoretically have any number of Endpoints, and each Endpoint is bound to a particular contract, or interface, so it is possible for a single conceptual (and configured) service to host multiple interfaces via multiple endpoints or alternatively for several endpoints to host the same interface.\nIf you are using the ServiceHost class to host your service, though, instead of IIS, you can only associate a single interface per ServiceHost. I'm not sure why this is the case, but it is.\n" ]
[ 18, 1, 1, 0 ]
[]
[]
[ "wcf", "web_services" ]
stackoverflow_0000031790_wcf_web_services.txt
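If the service is self-hosted rather than configured in IIS, the same two-contract arrangement can be set up in code with one AddServiceEndpoint call per contract; a rough sketch, with the binding and addresses chosen purely for illustration:

    using System;
    using System.ServiceModel;

    [ServiceContract] public interface IDemoService      { [OperationContract] string Echo(string text); }
    [ServiceContract] public interface IDoNothingService { [OperationContract] void DoNothing(); }

    public class DemoService : IDemoService, IDoNothingService
    {
        public string Echo(string text) { return text; }
        public void DoNothing() { }
    }

    class Program
    {
        static void Main()
        {
            Uri baseAddress = new Uri("http://localhost:8000/DemoService");
            using (ServiceHost host = new ServiceHost(typeof(DemoService), baseAddress))
            {
                host.AddServiceEndpoint(typeof(IDemoService),      new BasicHttpBinding(), "demo");
                host.AddServiceEndpoint(typeof(IDoNothingService), new BasicHttpBinding(), "nothing");
                host.Open();
                Console.WriteLine("Both contracts are listening under {0}. Press Enter to stop.", baseAddress);
                Console.ReadLine();
            }
        }
    }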
Q: How do I make the manifest of a .net assembly private? What should I do if I want to release a .net assembly but wish to keep its internals detailed in the manifest private (from a utility such as ildasm.exe) ? A: I think what you're talking about is "obfuscation". There are lots of articles about it on the net: http://en.wikipedia.org/wiki/Obfuscation The "standard" tool for obfuscation on .NET is by Preemptive Solutions: http://www.preemptive.com/obfuscator.html They have a community edition that ships with Visual Studio which you can use. You mentioned ILDasm, have you looked at the .NET Reflector? http://aisto.com/roeder/dotnet/ It gives you an even better idea as to what people can see if you release a manifest! A: The CLR cannot directly load modules that contain no manifest. So you can't make an assembly completely private unless you also want to make it unloadable ;) You can however, as Mark noted above, use obfuscation tools to hide the parts you would like to keep truly internal. It's too bad the internal keyword doesn't exclude that metadata EDIT: it looks like this question is highly related
How do I make the manifest of a .net assembly private?
What should I do if I want to release a .NET assembly but wish to keep its internals, detailed in the manifest, private (from a utility such as ildasm.exe)?
[ "I think what you're talking about is \"obfuscation\".\nThere are lots of articles about it on the net:\nhttp://en.wikipedia.org/wiki/Obfuscation\nThe \"standard\" tool for obfuscation on .NET is by Preemptive Solutions:\nhttp://www.preemptive.com/obfuscator.html\nThey have a community edition that ships with Visual Studio which you can use.\nYou mentioned ILDasm, have you looked at the .NET Reflector?\nhttp://aisto.com/roeder/dotnet/\nIt gives you an even better idea as to what people can see if you release a manifest!\n", "The CLR cannot directly load modules that contain no manifest. So you can't make an assembly completely private unless you also want to make it unloadable ;)\nYou can however, as Mark noted above, use obfuscation tools to hide the parts you would like to keep truly internal. \nIt's too bad the internal keyword doesn't exclude that metadata\nEDIT: it looks like this question is highly related\n" ]
[ 7, 1 ]
[]
[]
[ ".net", "obfuscation", "security" ]
stackoverflow_0000029677_.net_obfuscation_security.txt
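If obfuscation is the route taken, note that .NET 2.0 ships a pair of attributes in System.Reflection that renaming obfuscators (Dotfuscator among them) treat as hints, letting you scramble internals while keeping the public surface usable; whether they are honored is up to the particular tool. A small sketch:

    using System;
    using System.Reflection;

    // Hint: treat the whole assembly as private, i.e. rename whatever the tool can reach.
    [assembly: ObfuscateAssembly(true)]

    namespace Api
    {
        // ...but keep this public entry point (and its members) readable for callers.
        [Obfuscation(Exclude = true, ApplyToMembers = true)]
        public class PublicEntryPoint
        {
            public string Describe() { return new SecretSauce().Mix(); }
        }

        // Internal types like this one are fair game for renaming.
        internal class SecretSauce
        {
            internal string Mix() { return "the part you wanted to keep private"; }
        }
    }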
Q: XML Schema construct for "Any number of these elements - in any order" I need to create an XML schema that looks something like this: <xs:element name="wrapperElement"> <xs:complexType> <xs:sequence> <xs:element type="el1"> <xs:element type="el2"> </xs:sequence> <xs:WhatGoesHere?> <xs:element type="el3"> <xs:element type="el4"> <xs:element type="el5"> </xs:WhatGoesHere?> <xs:sequence> <xs:element type="el6"> <xs:element type="el7"> </xs:sequence> </xs:complexType> </xs:element> What I need is a replacement for "WhatGoesHere" such that any number of el3, el4 and el5 can appear in any order. For instance it could contain {el3, el3, el5, el3} Any idea on how to solve this? A: You want xs:choice with occurrence constraints: <xs:element name="wrapperElement"> <xs:complexType> <xs:sequence> <xs:element name="e11"/> <xs:element name="el2"/> <xs:choice minOccurs="0" maxOccurs="unbounded"> <xs:element name="el3"/> <xs:element name="el4"/> <xs:element name="el5"/> </xs:choice> <xs:element name="el6"/> <xs:element name="el7"/> </xs:sequence> </xs:complexType> </xs:element>
XML Schema construct for "Any number of these elements - in any order"
I need to create an XML schema that looks something like this: <xs:element name="wrapperElement"> <xs:complexType> <xs:sequence> <xs:element type="el1"> <xs:element type="el2"> </xs:sequence> <xs:WhatGoesHere?> <xs:element type="el3"> <xs:element type="el4"> <xs:element type="el5"> </xs:WhatGoesHere?> <xs:sequence> <xs:element type="el6"> <xs:element type="el7"> </xs:sequence> </xs:complexType> </xs:element> What I need is a replacement for "WhatGoesHere" such that any number of el3, el4 and el5 can appear in any order. For instance it could contain {el3, el3, el5, el3} Any idea on how to solve this?
[ "You want xs:choice with occurrence constraints:\n<xs:element name=\"wrapperElement\">\n <xs:complexType>\n <xs:sequence>\n <xs:element name=\"e11\"/>\n <xs:element name=\"el2\"/>\n <xs:choice minOccurs=\"0\" maxOccurs=\"unbounded\">\n <xs:element name=\"el3\"/>\n <xs:element name=\"el4\"/>\n <xs:element name=\"el5\"/>\n </xs:choice>\n <xs:element name=\"el6\"/>\n <xs:element name=\"el7\"/>\n </xs:sequence>\n </xs:complexType>\n</xs:element>\n\n" ]
[ 13 ]
[]
[]
[ "schema", "xml", "xsd" ]
stackoverflow_0000031847_schema_xml_xsd.txt
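A quick way to convince yourself the xs:choice answer behaves as intended is to validate a sample instance against it; a sketch using .NET's built-in schema validation, assuming the schema is saved as wrapper.xsd and the instance as instance.xml (both file names are illustrative):

    using System;
    using System.Xml;
    using System.Xml.Schema;

    class ValidateWrapper
    {
        static void Main()
        {
            XmlReaderSettings settings = new XmlReaderSettings();
            settings.Schemas.Add(null, "wrapper.xsd");   // first argument is the target namespace, if any
            settings.ValidationType = ValidationType.Schema;
            settings.ValidationEventHandler += delegate(object sender, ValidationEventArgs e)
            {
                Console.WriteLine("{0}: {1}", e.Severity, e.Message);
            };

            // instance.xml would hold el1, el2, then e.g. el3/el3/el5/el3 in any order, then el6, el7
            using (XmlReader reader = XmlReader.Create("instance.xml", settings))
            {
                while (reader.Read()) { }                // validation errors surface through the handler
            }
            Console.WriteLine("validation pass finished");
        }
    }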
Q: Remove Meta Data from .NET applications? Is this possible? Does the .NET framework depend on the meta data in the bytecode? I'd like to have an application i write not work in reflector or a similar .NET decompiler. A: If you remove the metadata the framework won't be able to load your code, or figure out which other assemblies it references, or anything like that, so no, that's not a good idea. Obfuscators will make it a lot harder for an 'attacker' to decompile your code, but at the end of the day if someone is motivated and smart there's not a lot you can do to stop them. .NET will always compile down to MSIL, and MSIL is inherently easier to read than raw x86. That's just one of the tradeoffs you make for using .NET. Don't worry about it. The source code to apache, linux, and everything else is freely available on the net, but it's not providing much competitive advantage to microsoft is it :-) A: I think you are referring to the Assembly Manifest: Every assembly, whether static or dynamic, contains a collection of data that describes how the elements in the assembly relate to each other. The assembly manifest contains this assembly metadata. An assembly manifest contains all the metadata needed to specify the assembly's version requirements and security identity, and all metadata needed to define the scope of the assembly and resolve references to resources and classes. One of the most important features of .Net assemblies is that they are self-describing components and this is provided by the manifest. So removing them will somehow defeat its purpose. A: This looks like the same question as: How do I make the manifest of a .net assembly private? See my answer there: How do I make the manifest of a .net assembly private? "I think what you're talking about is "obfuscation". There are lots of articles about it on the net: http://en.wikipedia.org/wiki/Obfuscation The "standard" tool for obfuscation on .NET is by Preemptive Solutions: http://www.preemptive.com/obfuscator.html They have a community edition that ships with Visual Studio which you can use. You mentioned ILDasm, have you looked at the .NET Reflector? http://aisto.com/roeder/dotnet/ It gives you an even better idea as to what people can see if you release a manifest!" A: I don't think you can remove the meta data, but you can obfuscate your code if you're looking to protect your IP. A: The Dotfuscator will stop your code from being able to be decompiled http://www.preemptive.com/dotfuscator.html Edit: I should have mentioned, that's the professional version, the free community version ships with visual studio A: Wouldn't solutions like VMware's ThinApp (Formerly Thinstall) help a bit with protecting your code also? It comes at an extremely high price though..
Remove Meta Data from .NET applications?
Is this possible? Does the .NET framework depend on the metadata in the bytecode? I'd like to have an application I write not work in Reflector or a similar .NET decompiler.
[ "If you remove the metadata the framework won't be able to load your code, or figure out which other assemblies it references, or anything like that, so no, that's not a good idea.\nObfuscators will make it a lot harder for an 'attacker' to decompile your code, but at the end of the day if someone is motivated and smart there's not a lot you can do to stop them.\n.NET will always compile down to MSIL, and MSIL is inherently easier to read than raw x86. That's just one of the tradeoffs you make for using .NET. \nDon't worry about it. The source code to apache, linux, and everything else is freely available on the net, but it's not providing much competitive advantage to microsoft is it :-)\n", "I think you are referring to the Assembly Manifest:\n\nEvery assembly, whether static or\n dynamic, contains a collection of data\n that describes how the elements in the\n assembly relate to each other. The\n assembly manifest contains this\n assembly metadata. An assembly\n manifest contains all the metadata\n needed to specify the assembly's\n version requirements and security\n identity, and all metadata needed to\n define the scope of the assembly and\n resolve references to resources and\n classes.\n\nOne of the most important features of .Net assemblies is that they are self-describing components and this is provided by the manifest. So removing them will somehow defeat its purpose.\n", "This looks like the same question as:\nHow do I make the manifest of a .net assembly private?\nSee my answer there:\nHow do I make the manifest of a .net assembly private?\n\"I think what you're talking about is \"obfuscation\". There are lots of articles about it on the net:\nhttp://en.wikipedia.org/wiki/Obfuscation\nThe \"standard\" tool for obfuscation on .NET is by Preemptive Solutions:\nhttp://www.preemptive.com/obfuscator.html\nThey have a community edition that ships with Visual Studio which you can use.\nYou mentioned ILDasm, have you looked at the .NET Reflector?\nhttp://aisto.com/roeder/dotnet/\nIt gives you an even better idea as to what people can see if you release a manifest!\"\n", "I don't think you can remove the meta data, but you can obfuscate your code if you're looking to protect your IP. \n", "The Dotfuscator will stop your code from being able to be decompiled\nhttp://www.preemptive.com/dotfuscator.html\nEdit: I should have mentioned, that's the professional version, the free community version ships with visual studio\n", "Wouldn't solutions like VMware's ThinApp (Formerly Thinstall) help a bit with protecting your code also?\nIt comes at an extremely high price though..\n" ]
[ 2, 1, 1, 0, 0, 0 ]
[]
[]
[ ".net", "decompiling", "metadata" ]
stackoverflow_0000031637_.net_decompiling_metadata.txt
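To see concretely why the manifest has to stay, here is roughly what any caller (or tool) can pull out of an assembly with nothing but the Reflection API; obfuscation changes the names that come back, not the fact that they come back:

    using System;
    using System.Reflection;

    class ManifestPeek
    {
        static void Main()
        {
            // Any readable assembly will do; this program's own manifest is the easiest demo.
            Assembly assembly = Assembly.GetExecutingAssembly();

            Console.WriteLine("Assembly  : {0}", assembly.FullName);

            foreach (AssemblyName reference in assembly.GetReferencedAssemblies())
                Console.WriteLine("References: {0}", reference.FullName);

            foreach (Type type in assembly.GetTypes())
                Console.WriteLine("Type      : {0}", type.FullName);
        }
    }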
Q: Using MySQLi - which is better for closing queries I have a habit of keeping my variable usage to a bare minimum. So I'm wondering if there is any advantage to be gained by the following: $query = $mysqli->query('SELECT * FROM `people` ORDER BY `name` ASC LIMIT 0,30'); // Example 1 $query = $query->fetch_assoc(); // Example 2 $query_r = $query->fetch_assoc(); $query->free(); Now if I'm right Example 1 should be more efficient as $query is unset when I reassign it which should free any memory associated with it. However there is a method (MySQLi_Result::free()) which frees associated memory - is this the same thing? If I don't call ::free() to free any memory associated with the result but unset it by reassigning the variable am I doing the same thing? I don't know how to log this sort of thing - does anyone have some ideas? A: The manual seems to suggest that you should still be using free() to release the memory. I believe the reasoning is that free() is freeing the memory in MySQL, not in PHP. Since PHP can't garbage-collect for MySQL, you need to call free(). A: Example 1 dissociates the $query variable from the MySQL result. The MySQL result still exists in memory, and will continue to exist and waste memory until garbage collection occurs. Example 2 frees the MySQL result immediately, releasing the used resources. However, since PHP pages are generally short-lived with small result-sets, the memory saved is trivial. You will not notice a slowdown unless you leave a ton of results in memory over a long period of time on pages that run for a long time. Brian, PHP can garbage collect the MySQL result, it just doesn't happen immediately. The result lives in PHP's memory pool, not in the MySQL server's. (the locality of memory when using unbuffered queries is slightly different, but they're so rarely used in PHP as to not be worth mentioning)
Using MySQLi - which is better for closing queries
I have a habit of keeping my variable usage to a bare minimum. So I'm wondering if there is any advantage to be gained by the following: $query = $mysqli->query('SELECT * FROM `people` ORDER BY `name` ASC LIMIT 0,30'); // Example 1 $query = $query->fetch_assoc(); // Example 2 $query_r = $query->fetch_assoc(); $query->free(); Now if I'm right Example 1 should be more efficient as $query is unset when I reassign it which should free any memory associated with it. However there is a method (MySQLi_Result::free()) which frees associated memory - is this the same thing? If I don't call ::free() to free any memory associated with the result but unset it by reassigning the variable am I doing the same thing? I don't know how to log this sort of thing - does anyone have some ideas?
[ "The manual seems to suggest that you should still be using free() to release the memory. I believe the reasoning is that free() is freeing the memory in MySQL, not in PHP. Since PHP can't garbage-collect for MySQL, you need to call free().\n", "Example 1 dissociates the $query variable from the MySQL result. The MySQL result still exists in memory, and will continue to exist and waste memory until garbage collection occurs.\nExample 2 frees the MySQL result immediately, releasing the used resources.\nHowever, since PHP pages are generally short-lived with small result-sets, the memory saved is trivial. You will not notice a slowdown unless you leave a ton of results in memory over a long period of time on pages that run for a long time.\nBrian,\nPHP can garbage collect the MySQL result, it just doesn't happen immediately.\nThe result lives in PHP's memory pool, not in the MySQL server's.\n(the locality of memory when using unbuffered queries is slightly different, but they're so rarely used in PHP as to not be worth mentioning) \n" ]
[ 5, 3 ]
[]
[]
[ "memory", "mysql", "mysqli", "php" ]
stackoverflow_0000026515_memory_mysql_mysqli_php.txt
Q: How scalable is System.Threading.Timer? I'm writing an app that will need to make use of Timers, but potentially very many of them. How scalable is the System.Threading.Timer class? The documentation merely say it's "lightweight", but doesn't explain further. Do these timers get sucked into a single thread (or very small threadpool) that processes all the callbacks on behalf of a Timer, or does each Timer have its own thread? I guess another way to rephrase the question is: How is System.Threading.Timer implemented? A: I say this in response to a lot of questions: Don't forget that the (managed) source code to the framework is available. You can use this tool to get it all: http://www.codeplex.com/NetMassDownloader Unfortunately, in this specific case, a lot of the implementation is in native code, so you don't get to look at it... They definitely use pool threads rather than a thread-per-timer, though. The standard way to implement a big collection of timers (which is how the kernel does it internally, and I would suspect is indirectly how your big collection of Timers ends up) is to maintain the list sorted by time-until-expiry - so the system only ever has to worry about checking the next timer which is going to expire, not the whole list. Roughly, this gives O(log n) for starting a timer and O(1) for processing running timers. Edit: Just been looking in Jeff Richter's book. He says (of Threading.Timer) that it uses a single thread for all Timer objects, this thread knows when the next timer (i.e. as above) is due and calls ThreadPool.QueueUserWorkItem for the callbacks as appropriate. This has the effect that if you don't finish servicing one callback on a timer before the next is due, that your callback will reenter on another pool thread. So in summary I doubt you'll see a big problem with having lots of timers, but you might suffer thread pool exhaustion if large numbers of them are firing at the same timer and/or their callbacks are slow-running. A: I think you might want to rethink your design (that is, if you have control over the design yourself). If you're using so many timers that this is actually a concern for you, there's clearly some potential for consolidation there. Here's a good article from MSDN Magazine from a few years ago that compares the three available timer classes, and gives some insight into their implementations: http://msdn.microsoft.com/en-us/magazine/cc164015.aspx A: Consolidate them. Create a timer service and ask that for the timers. It will only need to keep 1 active timer (for the next due call)... For this to be an improvement over just creating lots of Threading.Timer objects, you have to assume that it isn't exactly what Threading.Timer is already doing internally. I'd be interested to know how you came to that conclusion (I haven't disassembled the native bits of the framework, so you could well be right). A: ^^ as DannySmurf says : Consolidate them. Create a timer service and ask that for the timers. It will only need to keep 1 active timer (for the next due call) and a history of all the timer requests and recalculate this on AddTimer() / RemoveTimer().
How scalable is System.Threading.Timer?
I'm writing an app that will need to make use of Timers, but potentially very many of them. How scalable is the System.Threading.Timer class? The documentation merely says it's "lightweight", but doesn't explain further. Do these timers get sucked into a single thread (or very small threadpool) that processes all the callbacks on behalf of a Timer, or does each Timer have its own thread? I guess another way to rephrase the question is: How is System.Threading.Timer implemented?
[ "I say this in response to a lot of questions: Don't forget that the (managed) source code to the framework is available. You can use this tool to get it all: http://www.codeplex.com/NetMassDownloader\nUnfortunately, in this specific case, a lot of the implementation is in native code, so you don't get to look at it...\nThey definitely use pool threads rather than a thread-per-timer, though.\nThe standard way to implement a big collection of timers (which is how the kernel does it internally, and I would suspect is indirectly how your big collection of Timers ends up) is to maintain the list sorted by time-until-expiry - so the system only ever has to worry about checking the next timer which is going to expire, not the whole list.\nRoughly, this gives O(log n) for starting a timer and O(1) for processing running timers.\nEdit: Just been looking in Jeff Richter's book. He says (of Threading.Timer) that it uses a single thread for all Timer objects, this thread knows when the next timer (i.e. as above) is due and calls ThreadPool.QueueUserWorkItem for the callbacks as appropriate. This has the effect that if you don't finish servicing one callback on a timer before the next is due, that your callback will reenter on another pool thread. So in summary I doubt you'll see a big problem with having lots of timers, but you might suffer thread pool exhaustion if large numbers of them are firing at the same timer and/or their callbacks are slow-running.\n", "I think you might want to rethink your design (that is, if you have control over the design yourself). If you're using so many timers that this is actually a concern for you, there's clearly some potential for consolidation there.\nHere's a good article from MSDN Magazine from a few years ago that compares the three available timer classes, and gives some insight into their implementations:\nhttp://msdn.microsoft.com/en-us/magazine/cc164015.aspx\n", "\nConsolidate them. Create a timer\n service and ask that for the timers.\n It will only need to keep 1 active\n timer (for the next due call)...\n\nFor this to be an improvement over just creating lots of Threading.Timer objects, you have to assume that it isn't exactly what Threading.Timer is already doing internally. I'd be interested to know how you came to that conclusion (I haven't disassembled the native bits of the framework, so you could well be right).\n", "^^ as DannySmurf says : Consolidate them. Create a timer service and ask that for the timers. It will only need to keep 1 active timer (for the next due call) and a history of all the timer requests and recalculate this on AddTimer() / RemoveTimer().\n" ]
[ 29, 7, 5, 0 ]
[]
[]
[ ".net", "c#", "multithreading", "timer" ]
stackoverflow_0000031581_.net_c#_multithreading_timer.txt
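A bare-bones sketch of the consolidation idea from the last two answers: keep pending requests sorted by due time and drive them all from one System.Threading.Timer. This is illustrative only, not a framework class, and it ignores cancellation and disposal for brevity:

    using System;
    using System.Collections.Generic;
    using System.Threading;

    class TimerService
    {
        private readonly object sync = new object();
        private readonly SortedList<DateTime, Action> pending = new SortedList<DateTime, Action>();
        private readonly Timer driver;

        public TimerService()
        {
            driver = new Timer(Tick, null, Timeout.Infinite, Timeout.Infinite);
        }

        public void Add(TimeSpan delay, Action callback)
        {
            lock (sync)
            {
                DateTime due = DateTime.UtcNow + delay;
                while (pending.ContainsKey(due)) due = due.AddTicks(1);   // keys must be unique
                pending.Add(due, callback);
                Reschedule();
            }
        }

        private void Reschedule()   // caller holds the lock
        {
            if (pending.Count == 0) return;
            TimeSpan wait = pending.Keys[0] - DateTime.UtcNow;
            if (wait < TimeSpan.Zero) wait = TimeSpan.Zero;
            driver.Change(wait, TimeSpan.FromMilliseconds(-1));           // one-shot: fire for the earliest entry
        }

        private void Tick(object state)
        {
            Action callback = null;
            lock (sync)
            {
                if (pending.Count > 0 && pending.Keys[0] <= DateTime.UtcNow)
                {
                    callback = pending.Values[0];
                    pending.RemoveAt(0);
                }
                Reschedule();
            }
            if (callback != null)
                ThreadPool.QueueUserWorkItem(delegate { callback(); });   // mirror what Timer itself does
        }
    }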
Q: (Why) should I use obfuscation? It seems to me obfuscation is an idea that falls somewhere in the "security by obscurity" or "false sense of protection" camp. To protect intellectual property, there's copyright; to prevent security issues from being found, there's fixing those issues. In short, I regard it as a technical solution to a social problem. Those almost never work. However, I seem to be the only one in our dev team to feel that way, so I'm either wrong, or just need convincing arguments. Our product uses .NET, and one dev suggested .NET Reactor (which, incidentally, was suggested in this SO thread as well). .NET Reactor completely stops any decompiling by mixing any pure .NET assembly (written in C#, VB.NET, Delphi.NET, J#, MSIL...) with native machine code. So, basically, you throw all advantages of bytecode away in one go? Are there good engineering benefits to obfuscation? A: You asked for engineering reasons, so this is not strictly speaking an answer to the question. But I think it's a valid clarification. As you say, obfuscation is intended to address a social problem. And social (or business) problems, unlike technical ones, rarely have a complete solution. There are only degrees of success in addressing or minimising the problem. In this case, obfuscation will raise the barriers to someone decompiling and stealing your code. It will discourage casual attacks and, through inertia, may make your intellectual property less likely to be stolen. To make a tiresome analogy, an immobiliser doesn't prevent your car being stolen, but it will make it less likely. Of course there is a cost, in maintainability, (possibly) in performance and most importantly in making it harder for users to accurately submit bug reports. As GateKiller said, obfuscation won't prevent a determined team from decompiling, but (and it depends what your product is) how determined a team is likely to be attacking you? So, this is not a technical solution to a social problem, it's a technical decision which adds one influence to a complex social structure. A: If a big team of programmers really want to get at your source code and that had the time, money and effort, then they would be successful. Obfuscation, therefore, should stop people who don't have the time, money or effort to get your source, passers by you might call them. A: If you stick to pure managed code obfuscation, you can shave off quite a bit of an assembly size, and obfuscated classes/function names (collapsed to single letters) mean smaller memory footprint. This is almost always negligible, but does have an impact (and is used) on some mobile/embedded devices (though mostly in java). A: One potential engineering benefit is that in some cases obfuscation can create smaller executables or other artifacts -- e.g. obfuscating javascript results in smaller files (because all of the variables are named "a" and "b" instead of "descriptiveNameOne" and all the whitespace is stripped, etc). This results in faster load times for the web pages that use obfuscated javascript. Obviously this doesn't apply (as much) in the .NET world, but it's an example of a situation in which there is an direct engineering benefit. A: While not related to .net, I would consider obfuscation in Javascript, and possibly other interpeted languages. Javascript benefits well from obfuscation because it reduces the bandwith needed, and the tokens the parser has to read. But obfuscating compiled bytecode doesn't really seem that usefull to me. 
I mean what would you try and achieve? I can only see obfuscation beeing slightly usefull in license checking code to avoid it beeing circumvented too easily. A: The main reason to use obfuscation is to protect intellectual property as you have indicated. It is generally much more cost effective to a business to purchase an obfuscation product like .NET Reactor than it is to try and legally enforce your copyrights. Obfuscation can also provide other more incidental benefits such as performance improvements and assembly size reduction. These would the engineering benefits you are looking for. A: I posted a question which might help you as it discusses some of the issues: should-i-be-worried-about-obfuscating-my-net-code A: Use encryption to protect information on the way. Use obfuscation to protect information while your program still has it.
(Why) should I use obfuscation?
It seems to me obfuscation is an idea that falls somewhere in the "security by obscurity" or "false sense of protection" camp. To protect intellectual property, there's copyright; to prevent security issues from being found, there's fixing those issues. In short, I regard it as a technical solution to a social problem. Those almost never work. However, I seem to be the only one in our dev team to feel that way, so I'm either wrong, or just need convincing arguments. Our product uses .NET, and one dev suggested .NET Reactor (which, incidentally, was suggested in this SO thread as well). .NET Reactor completely stops any decompiling by mixing any pure .NET assembly (written in C#, VB.NET, Delphi.NET, J#, MSIL...) with native machine code. So, basically, you throw all advantages of bytecode away in one go? Are there good engineering benefits to obfuscation?
[ "You asked for engineering reasons, so this is not strictly speaking an answer to the question. But I think it's a valid clarification.\nAs you say, obfuscation is intended to address a social problem. And social (or business) problems, unlike technical ones, rarely have a complete solution. There are only degrees of success in addressing or minimising the problem.\nIn this case, obfuscation will raise the barriers to someone decompiling and stealing your code. It will discourage casual attacks and, through inertia, may make your intellectual property less likely to be stolen. To make a tiresome analogy, an immobiliser doesn't prevent your car being stolen, but it will make it less likely.\nOf course there is a cost, in maintainability, (possibly) in performance and most importantly in making it harder for users to accurately submit bug reports.\nAs GateKiller said, obfuscation won't prevent a determined team from decompiling, but (and it depends what your product is) how determined a team is likely to be attacking you?\nSo, this is not a technical solution to a social problem, it's a technical decision which adds one influence to a complex social structure.\n", "If a big team of programmers really want to get at your source code and that had the time, money and effort, then they would be successful.\nObfuscation, therefore, should stop people who don't have the time, money or effort to get your source, passers by you might call them.\n", "If you stick to pure managed code obfuscation, you can shave off quite a bit of an assembly size, and obfuscated classes/function names (collapsed to single letters) mean smaller memory footprint. This is almost always negligible, but does have an impact (and is used) on some mobile/embedded devices (though mostly in java).\n", "One potential engineering benefit is that in some cases obfuscation can create smaller executables or other artifacts -- e.g. obfuscating javascript results in smaller files (because all of the variables are named \"a\" and \"b\" instead of \"descriptiveNameOne\" and all the whitespace is stripped, etc). This results in faster load times for the web pages that use obfuscated javascript. Obviously this doesn't apply (as much) in the .NET world, but it's an example of a situation in which there is an direct engineering benefit.\n", "While not related to .net, I would consider obfuscation in Javascript, and possibly other interpeted languages. Javascript benefits well from obfuscation because it reduces the bandwith needed, and the tokens the parser has to read.\nBut obfuscating compiled bytecode doesn't really seem that usefull to me. I mean what would you try and achieve? I can only see obfuscation beeing slightly usefull in license checking code to avoid it beeing circumvented too easily. \n", "The main reason to use obfuscation is to protect intellectual property as you have indicated. It is generally much more cost effective to a business to purchase an obfuscation product like .NET Reactor than it is to try and legally enforce your copyrights.\nObfuscation can also provide other more incidental benefits such as performance improvements and assembly size reduction. These would the engineering benefits you are looking for.\n", "I posted a question which might help you as it discusses some of the issues:\nshould-i-be-worried-about-obfuscating-my-net-code\n", "Use encryption to protect information on the way.\nUse obfuscation to protect information while your program still has it.\n" ]
[ 14, 7, 4, 3, 3, 2, 2, 1 ]
[]
[]
[ ".net", "obfuscation", "protection", "security" ]
stackoverflow_0000031882_.net_obfuscation_protection_security.txt
Q: Giant NodeManagerLogs from hibernate in weblogic One of our weblogic 8.1s has suddenly started logging giant amounts of logs and filling the disk. The logs giving us hassle resides in mydrive:\bea\weblogic81\common\nodemanager\NodeManagerLogs\generatedManagedServer1\managedserveroutput.log and the entries in the logfile is just the some kind of entries repeated again and again. Stuff like 19:21:24,470 DEBUG [StdRowLockSemaphore] Lock 'TRIGGER_ACCESS' returned by: LLL-SCHEDULER_QuartzSchedulerThread 19:21:31,923 DEBUG [StdRowLockSemaphore] Lock 'STATE_ACCESS' is deLLLred by: QuartzScheduler_LLL-SCHEDULER-NACDLLLF011219763113220_ClusterManager 19:21:31,923 DEBUG [StdRowLockSemaphore] Lock 'STATE_ACCESS' is being obtained: QuartzScheduler_LLL-SCHEDULER-NACDLLLF011219763113220_ClusterManager 19:21:31,923 DEBUG [StdRowLockSemaphore] Lock 'STATE_ACCESS' given to: QuartzScheduler_LLL-SCHEDULER-NACDLLLF011219763113220_ClusterManager 19:21:31,923 DEBUG [StdRowLockSemaphore] Lock 'TRIGGER_ACCESS' is deLLLred by: QuartzScheduler_LLL-SCHEDULER-NACDLLLF011219763113220_ClusterManager ... 19:17:46,798 DEBUG [CascadingAction] cascading to saveOrUpdate: mypackage.config.common.Share 19:17:46,798 DEBUG [DefaultSaveOrUpdateEventListener] reassociated uninitialized proxy 19:17:46,798 DEBUG [Cascade] done processing cascade ACTION_SAVE_UPDATE for: mypackage.config.common.FileLocation 19:17:46,798 DEBUG [Cascade] processing cascade ACTION_SAVE_UPDATE for: mypackage.config.common.FileLocation 19:17:46,798 DEBUG [CascadingAction] cascading to saveOrUpdate: mypackage.config.common.Share 19:17:46,798 DEBUG [DefaultSaveOrUpdateEventListener] reassociated uninitialized proxy I can't find any debug settings set anywhere. I've looked in the Remote Start classpath and Arguments for the managed server. Can anyone point me in the direction to gain control over this logfile? A: Since those log entries aren't problems, it sounds like the global log level has been turned up to DEBUG. Alternatively, perhaps a new Logging mechanism has been implemented or a new log Appender that writes to stdout, and thus is being re-logged by Weblogic. I would look at the configuration of your logger. (Or provide it with one, if it is using a default config) For example, when using Hibernate with an active Log4J setup, Hibernate will automatically join in with the Log4J instance that you set up in your own application It can be tuned, as per the normal Log4J config. This example uses the properties configuration style: log4j.category.org.hibernate=WARN Hibernate may join in with other logging mechanisms via the apache commons logging API. Look at how to configure your own logger and tune out the org.hibernate.* frequencies. n.b. When debugging, switching back on log4j.category.org.hibernate.SQL=INFO or DEBUG can be useful. A: Is it a large system with many programmers? If so it might be worth checking that nowhere in the code is the logger having its config changed programatically. In log4j, this can be done using the LogManager or BasicConfigurator classes. Also via the PropertyConfigurator and DomConfigurator. Just one rogue line of code could set up a new Logger to stdout using the PatternLayout shown in your example. BasicConfigurator.configure();
Giant NodeManagerLogs from hibernate in weblogic
One of our weblogic 8.1s has suddenly started logging giant amounts of logs and filling the disk. The logs giving us hassle resides in mydrive:\bea\weblogic81\common\nodemanager\NodeManagerLogs\generatedManagedServer1\managedserveroutput.log and the entries in the logfile is just the some kind of entries repeated again and again. Stuff like 19:21:24,470 DEBUG [StdRowLockSemaphore] Lock 'TRIGGER_ACCESS' returned by: LLL-SCHEDULER_QuartzSchedulerThread 19:21:31,923 DEBUG [StdRowLockSemaphore] Lock 'STATE_ACCESS' is deLLLred by: QuartzScheduler_LLL-SCHEDULER-NACDLLLF011219763113220_ClusterManager 19:21:31,923 DEBUG [StdRowLockSemaphore] Lock 'STATE_ACCESS' is being obtained: QuartzScheduler_LLL-SCHEDULER-NACDLLLF011219763113220_ClusterManager 19:21:31,923 DEBUG [StdRowLockSemaphore] Lock 'STATE_ACCESS' given to: QuartzScheduler_LLL-SCHEDULER-NACDLLLF011219763113220_ClusterManager 19:21:31,923 DEBUG [StdRowLockSemaphore] Lock 'TRIGGER_ACCESS' is deLLLred by: QuartzScheduler_LLL-SCHEDULER-NACDLLLF011219763113220_ClusterManager ... 19:17:46,798 DEBUG [CascadingAction] cascading to saveOrUpdate: mypackage.config.common.Share 19:17:46,798 DEBUG [DefaultSaveOrUpdateEventListener] reassociated uninitialized proxy 19:17:46,798 DEBUG [Cascade] done processing cascade ACTION_SAVE_UPDATE for: mypackage.config.common.FileLocation 19:17:46,798 DEBUG [Cascade] processing cascade ACTION_SAVE_UPDATE for: mypackage.config.common.FileLocation 19:17:46,798 DEBUG [CascadingAction] cascading to saveOrUpdate: mypackage.config.common.Share 19:17:46,798 DEBUG [DefaultSaveOrUpdateEventListener] reassociated uninitialized proxy I can't find any debug settings set anywhere. I've looked in the Remote Start classpath and Arguments for the managed server. Can anyone point me in the direction to gain control over this logfile?
[ "Since those log entries aren't problems, it sounds like the global log level has been turned up to DEBUG. Alternatively, perhaps a new Logging mechanism has been implemented or a new log Appender that writes to stdout, and thus is being re-logged by Weblogic. I would look at the configuration of your logger. (Or provide it with one, if it is using a default config)\nFor example, when using Hibernate with an active Log4J setup, Hibernate will automatically join in with the Log4J instance that you set up in your own application\nIt can be tuned, as per the normal Log4J config. This example uses the properties configuration style:\nlog4j.category.org.hibernate=WARN\n\nHibernate may join in with other logging mechanisms via the apache commons logging API. Look at how to configure your own logger and tune out the org.hibernate.* frequencies.\nn.b. When debugging, switching back on\nlog4j.category.org.hibernate.SQL=INFO or DEBUG\n\ncan be useful.\n", "Is it a large system with many programmers? If so it might be worth checking that nowhere in the code is the logger having its config changed programatically.\nIn log4j, this can be done using the LogManager or BasicConfigurator classes. Also via the PropertyConfigurator and DomConfigurator. Just one rogue line of code could set up a new Logger to stdout using the PatternLayout shown in your example. \nBasicConfigurator.configure();\n\n" ]
[ 1, 1 ]
[]
[]
[ "hibernate", "logging", "weblogic" ]
stackoverflow_0000029822_hibernate_logging_weblogic.txt
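For reference, a conventional log4j.properties along these lines (placed on the managed server's classpath) quiets the chatter shown in the posted log without touching application-level logging; the appender and pattern are just an example, and the category lines are the important part:

    log4j.rootLogger=INFO, stdout

    log4j.appender.stdout=org.apache.log4j.ConsoleAppender
    log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
    log4j.appender.stdout.layout.ConversionPattern=%d{HH:mm:ss,SSS} %-5p [%c{1}] %m%n

    # the two noisy packages from the posted output
    log4j.category.org.hibernate=WARN
    log4j.category.org.quartz=WARN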
Q: Compile a referenced dll Using VS2005 and VB.NET. I have a project that is an API for a data-store that I created. When compiled creates api.dll. I have a second project in the same solution that has a project reference to the API project which when compiled will create wrapper.dll. This is basically a wrapper for the API that is specific to an application. When I use wrapper.dll in that other application, I have to copy wrapper.dll and api.dll to my new application. How can I get the wrapper project to compile the api.dll into itself so that I only have one dll to move around? A: You'll probably have to use a tool, such as ILMerge, to merge the two assemblies. A: There's an easier way. Just create shortcuts (called linked files in Visual Studio-ese) in your wrapper.dll project that point to the source files in api.dll. That will compile your source directly into wrapper.dll. A: @Jas, it's a special feature in Visual Studio. The procedure is outlined in this blog entry, called "Sharing a Strong Name Key File Across Projects". The example is for sharing strong name key files, but will work for any kind of file. Briefly, you right-click on your project and select "Add Existing Item". Browse to the directory of the file(s) you want to link and highlight the file or files. Insted of just hitting "Add" or "Open" (depending on your version of Visual Studio), click on the little down arrow on the right-hand side of that button. You'll see options to "Open" or "Link File" if you're using Visual Studio 2003, or "Add" or "Add as Link" with 2005 (I'm not sure about 2008). In any case, choose the one that involves the word "Link". Then your project will essentially reference the file - it will be accessible both from the original project its was in and the project you "linked" it to. This is a convenient way to create an assembly that contains all the functionality of wrapper.dll and api.dll, but you'll have to remember to repeat this procedure every time you add a new file to api.dll (but not wrapper.dll). A: I think you could compile api.dll as a resource into wrapper.dll. Then manually access that Resource out of api.dll and manually load it. I have manually loaded assemblies from disk, so loading one from a Stream should not be any different. I would try including the dll in your project as a file, similar to including a text or xml file (in addition to its project reference for compilation). Then I would set the build action to "Embedded Resource." Within wrapper.dll, I would use the Assembly object to access api.dll just like any other embedded resource. You will then also want to load the assembly using Assembly.Load http://msdn.microsoft.com/en-us/library/system.reflection.assembly.load.aspx
Compile a referenced dll
Using VS2005 and VB.NET. I have a project that is an API for a data-store that I created. When compiled creates api.dll. I have a second project in the same solution that has a project reference to the API project which when compiled will create wrapper.dll. This is basically a wrapper for the API that is specific to an application. When I use wrapper.dll in that other application, I have to copy wrapper.dll and api.dll to my new application. How can I get the wrapper project to compile the api.dll into itself so that I only have one dll to move around?
[ "You'll probably have to use a tool, such as ILMerge, to merge the two assemblies.\n", "There's an easier way. Just create shortcuts (called linked files in Visual Studio-ese) in your wrapper.dll project that point to the source files in api.dll. That will compile your source directly into wrapper.dll.\n", "@Jas, it's a special feature in Visual Studio. The procedure is outlined in this blog entry, called \"Sharing a Strong Name Key File Across Projects\". The example is for sharing strong name key files, but will work for any kind of file.\nBriefly, you right-click on your project and select \"Add Existing Item\". Browse to the directory of the file(s) you want to link and highlight the file or files. Insted of just hitting \"Add\" or \"Open\" (depending on your version of Visual Studio), click on the little down arrow on the right-hand side of that button. You'll see options to \"Open\" or \"Link File\" if you're using Visual Studio 2003, or \"Add\" or \"Add as Link\" with 2005 (I'm not sure about 2008). In any case, choose the one that involves the word \"Link\". Then your project will essentially reference the file - it will be accessible both from the original project its was in and the project you \"linked\" it to. \nThis is a convenient way to create an assembly that contains all the functionality of wrapper.dll and api.dll, but you'll have to remember to repeat this procedure every time you add a new file to api.dll (but not wrapper.dll).\n", "I think you could compile api.dll as a resource into wrapper.dll. Then manually access that Resource out of api.dll and manually load it. I have manually loaded assemblies from disk, so loading one from a Stream should not be any different.\nI would try including the dll in your project as a file, similar to including a text or xml file (in addition to its project reference for compilation). Then I would set the build action to \"Embedded Resource.\" Within wrapper.dll, I would use the Assembly object to access api.dll just like any other embedded resource. You will then also want to load the assembly using Assembly.Load http://msdn.microsoft.com/en-us/library/system.reflection.assembly.load.aspx\n" ]
[ 1, 1, 1, 0 ]
[]
[]
[ ".net", "vb.net", "visual_studio_2005" ]
stackoverflow_0000030833_.net_vb.net_visual_studio_2005.txt
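A sketch of the embedded-resource idea from the last answer, shown in C# for brevity (the VB.NET translation is mechanical). api.dll is added to the wrapper project with Build Action set to Embedded Resource; the resource name "Wrapper.api.dll" is an assumption that depends on the project's default namespace, so check it with Assembly.GetManifestResourceNames():

    using System;
    using System.IO;
    using System.Reflection;

    static class EmbeddedApiLoader
    {
        // Call once, before any type from api.dll is touched.
        public static void Install()
        {
            AppDomain.CurrentDomain.AssemblyResolve += delegate(object sender, ResolveEventArgs args)
            {
                if (new AssemblyName(args.Name).Name != "api")
                    return null;   // not our assembly, let normal probing continue

                using (Stream stream = Assembly.GetExecutingAssembly()
                                               .GetManifestResourceStream("Wrapper.api.dll"))
                {
                    if (stream == null) return null;
                    byte[] raw = new byte[stream.Length];
                    stream.Read(raw, 0, raw.Length);   // fine for a sketch; a robust version loops until all bytes are read
                    return Assembly.Load(raw);         // hand the loader api.dll from memory
                }
            };
        }
    }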
Q: how do you programmatically invoke a listview label edit in C# I'd like to invoke the label edit of a newly added item to a ListView. basically, if I have a click here to add scenario, as soon as the new item is added, I want the text label to be in a user editable state. Thanks! A: found it! ListViewItem::BeginEdit();
how do you programmatically invoke a listview label edit
In C# I'd like to invoke the label edit of a newly added item to a ListView. Basically, if I have a "click here to add" scenario, as soon as the new item is added, I want the text label to be in a user-editable state. Thanks!
[ "found it!\nListViewItem::BeginEdit();\n\n" ]
[ 10 ]
[]
[]
[ ".net", "c#", "user_interface", "winforms" ]
stackoverflow_0000032103_.net_c#_user_interface_winforms.txt
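For context, BeginEdit() only does something when label editing is enabled on the ListView and the item has already been added; a minimal self-contained WinForms sketch of the "click here to add" flow (control names are illustrative):

    using System;
    using System.Windows.Forms;

    class AddAndEditForm : Form
    {
        private readonly ListView list = new ListView();
        private readonly Button addButton = new Button();

        public AddAndEditForm()
        {
            list.Dock = DockStyle.Fill;
            list.View = View.List;
            list.LabelEdit = true;                    // required, or BeginEdit() will not start an edit

            addButton.Text = "Click here to add";
            addButton.Dock = DockStyle.Bottom;
            addButton.Click += delegate
            {
                ListViewItem item = list.Items.Add("New item");
                list.Focus();                         // give the ListView focus so the edit box appears
                item.BeginEdit();                     // label goes straight into edit mode
            };

            Controls.Add(list);
            Controls.Add(addButton);
        }

        [STAThread]
        static void Main() { Application.Run(new AddAndEditForm()); }
    }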
Q: Is regex case insensitivity slower? Source RegexOptions.IgnoreCase is more expensive than I would have thought (eg, should be barely measurable) Assuming that this applies to PHP, Python, Perl, Ruby etc as well as C# (which is what I assume Jeff was using), how much of a slowdown is it and will I incur a similar penalty with /[a-zA-z]/ as I will with /[a-z]/i ? A: Yes, [A-Za-z] will be much faster than setting the RegexOptions.IgnoreCase, largely because of Unicode strings. But it's also much more limiting -- [A-Za-z] does not match accented international characters, it's literally the A-Za-z ASCII set and nothing more. I don't know if you saw Tim Bray's answer to my message, but it's a good one: One of the trickiest issues in internationalized search is upper and lower case. This notion of case is limited to languages written in the Latin, Greek, and Cyrillic character sets. English-speakers naturally expect search to be case-insensitive if only because they’re lazy: if Nadia Jones wants to look herself up on Google she’ll probably just type in nadia jones and expect the system to take care of it. So it’s fairly common for search systems to “normalize” words by converting them all to lower- or upper-case, both for indexing and queries. The trouble is that the mapping between cases is not always as straightforward as it is in English. For example, the German lower-case character “ß” becomes “SS” when upper-cased, and good old capital “I” when down-cased in Turkish becomes the dotless “ı” (yes, they have “i”, its upper-case version is “İ”). I have read (but not verified first-hand) that the rules for upcasing accented characters such “é” are different in France and Québec. One of the results of all this is that software such as java.String.toLowerCase() tends to run astonishingly slow as it tries to work around all these corner-cases. http://www.tbray.org/ongoing/When/200x/2003/10/11/SearchI18n A: If you can tolerate having numbers and underscores in that regex, you can e.g. use the \w modifier (Perl syntax). I believe some engines support [:alpha:], but that is not pure Perl. \w takes into account the locale you are in, and matches both uppercase and lowercase, and I bet it is faster than using [A-Z] while ignoring case. A: If you're concerned about this, it may be worthwhile to set the case to all upper or all lower before you check. For instance, in Perl: $x = "abbCCDGBAdgfabv"; (lc $x) =~ /bad/; May in some cases be better than $x = "abbCCDGBAdgfabv"; $x =~ /bad/i;
Is regex case insensitivity slower?
Source RegexOptions.IgnoreCase is more expensive than I would have thought (eg, should be barely measurable) Assuming that this applies to PHP, Python, Perl, Ruby etc. as well as C# (which is what I assume Jeff was using), how much of a slowdown is it, and will I incur a similar penalty with /[a-zA-Z]/ as I will with /[a-z]/i?
[ "Yes, [A-Za-z] will be much faster than setting the RegexOptions.IgnoreCase, largely because of Unicode strings. But it's also much more limiting -- [A-Za-z] does not match accented international characters, it's literally the A-Za-z ASCII set and nothing more.\nI don't know if you saw Tim Bray's answer to my message, but it's a good one:\n\nOne of the trickiest issues in internationalized search is upper and lower case. This notion of case is limited to languages written in the Latin, Greek, and Cyrillic character sets. English-speakers naturally expect search to be case-insensitive if only because they’re lazy: if Nadia Jones wants to look herself up on Google she’ll probably just type in nadia jones and expect the system to take care of it.\nSo it’s fairly common for search systems to “normalize” words by converting them all to lower- or upper-case, both for indexing and queries.\nThe trouble is that the mapping between cases is not always as straightforward as it is in English. For example, the German lower-case character “ß” becomes “SS” when upper-cased, and good old capital “I” when down-cased in Turkish becomes the dotless “ı” (yes, they have “i”, its upper-case version is “İ”). I have read (but not verified first-hand) that the rules for upcasing accented characters such “é” are different in France and Québec. One of the results of all this is that software such as java.String.toLowerCase() tends to run astonishingly slow as it tries to work around all these corner-cases.\n\nhttp://www.tbray.org/ongoing/When/200x/2003/10/11/SearchI18n\n", "If you can tolerate having numbers and underscores in that regex, you can e.g. use the \\w modifier (Perl syntax). I believe some engines support [:alpha:], but that is not pure Perl. \\w takes into account the locale you are in, and matches both uppercase and lowercase, and I bet it is faster than using [A-Z] while ignoring case.\n", "If you're concerned about this, it may be worthwhile to set the case to all upper or all lower before you check.\nFor instance, in Perl:\n$x = \"abbCCDGBAdgfabv\";\n(lc $x) =~ /bad/;\n\nMay in some cases be better than\n$x = \"abbCCDGBAdgfabv\";\n$x =~ /bad/i;\n\n" ]
[ 20, 1, 0 ]
[]
[]
[ "language_agnostic", "regex" ]
stackoverflow_0000032010_language_agnostic_regex.txt
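The cost is easy to measure for your own inputs; a rough micro-benchmark sketch (the input is synthetic, the numbers will vary by framework version, and remember that [a-zA-Z] only covers ASCII letters, as the first answer points out):

    using System;
    using System.Diagnostics;
    using System.Text.RegularExpressions;

    class CaseBenchmark
    {
        static void Time(string label, Regex regex, string input)
        {
            Stopwatch watch = Stopwatch.StartNew();
            int matches = regex.Matches(input).Count;        // Count forces the full scan
            watch.Stop();
            Console.WriteLine("{0,-30} {1,5} ms  ({2} matches)", label, watch.ElapsedMilliseconds, matches);
        }

        static void Main()
        {
            // two million characters of mixed-case words
            Random random = new Random(1);
            char[] text = new char[2000000];
            for (int i = 0; i < text.Length; i++)
                text[i] = (i % 8 == 0) ? ' ' : (char)((random.Next(2) == 0 ? 'A' : 'a') + random.Next(26));
            string input = new string(text);

            Time("[a-zA-Z]+",                  new Regex("[a-zA-Z]+"), input);
            Time("[a-z]+ with IgnoreCase",     new Regex("[a-z]+", RegexOptions.IgnoreCase), input);
            Time("[a-z]+ IgnoreCase|Compiled", new Regex("[a-z]+", RegexOptions.IgnoreCase | RegexOptions.Compiled), input);
        }
    }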
Q: Best way to fetch a varying HTML tag I'm trying to fetch some HTML from various blogs and have noticed that different providers use the same tag in different ways. For example, here are two major providers that use the meta name generator tag differently: Blogger: <meta content='blogger' name='generator'/> (content first, name later and, yes, single quotes!) WordPress: <meta name="generator" content="WordPress.com" /> (name first, content later) Is there a way to extract the value of content for all cases (single/double quotes, first/last in the row)? P.S. Although I'm using Java, the answer would probably help more people if it where for regular expressions generally. A: The answer is: don't use regular expressions. Seriously. Use a SGML parser, or an XML parser if you happen to know it's valid XML (probably almost never true). You will absolutely screw up and waste tons of time trying to get it right. Just use what's already available. A: Actually, you should probably use some sort of HTML parser where you can inspect each node (and therefore node attributes) in the DOM of the page. I've not used any of these for a while so I don't know the pros and cons but here's a list http://java-source.net/open-source/html-parsers A: Those differences are not really important according to the XHTML standard. In other words, they are exactly the same thing. Also, if you replace double quotes with single quotes would be the same. The typical way of 'normalizing' an xml document is to pare it using some API that treats the document as its Infoset representation. Both DOM and SAX style APIs work that way. If you want to parse them by hand (or with a RegEx) you have to replicate all those things in your code and, in my opinion, that's not practical. A: Note: single quotes (even no quotes, if the value doesn't contain a space) is valid according to the W3C HTML spec. Quote: By default, SGML requires that all attribute values be delimited using either double quotation marks (ASCII decimal 34) or single quotation marks (ASCII decimal 39)... In certain cases, authors may specify the value of an attribute without any quotation marks. Also, don't forget that the order of attributes can be reversed and that other attributes can appear in the tag. A: You may want to give Java's HTMLEditorKit a shot. It is easy to experiment with to see if the parsing provides what you are looking for. A: Ok, since you are looking for language-agnostic then you can try a REGEX like /<meta\s.*content=.*>/ and take the result from that and parse out the specific values that you are looking for. I'm by no means a REGEX expert so there is probably a better way but in using the tool at http://www.codehouse.com/webmaster_tools/regex/ I matched both of the strings you provided. A: If you must use regex, here is a regex to get just the content part: content\s*=\s*['"].*?['"] returns content = "blogger" and content='Worpress.com' respectively. I'm no regex expert, but it gets those when given your examples in regexpal. Once you get that you can get everything between the quotes however you choose, be it another regex (which is just immoral at that point) or just looping over the characters. A: If your using java you may want to look at tagsoup, which is a SAX-compliant parser for "[parsing] HTML as it is found in the wild".
Best way to fetch a varying HTML tag
I'm trying to fetch some HTML from various blogs and have noticed that different providers use the same tag in different ways. For example, here are two major providers that use the meta name generator tag differently: Blogger: <meta content='blogger' name='generator'/> (content first, name later and, yes, single quotes!) WordPress: <meta name="generator" content="WordPress.com" /> (name first, content later) Is there a way to extract the value of content for all cases (single/double quotes, first/last in the row)? P.S. Although I'm using Java, the answer would probably help more people if it were for regular expressions generally.
[ "The answer is: don't use regular expressions.\nSeriously. Use a SGML parser, or an XML parser if you happen to know it's valid XML (probably almost never true). You will absolutely screw up and waste tons of time trying to get it right. Just use what's already available.\n", "Actually, you should probably use some sort of HTML parser where you can inspect each node (and therefore node attributes) in the DOM of the page. I've not used any of these for a while so I don't know the pros and cons but here's a list http://java-source.net/open-source/html-parsers \n", "Those differences are not really important according to the XHTML standard.\nIn other words, they are exactly the same thing.\nAlso, if you replace double quotes with single quotes would be the same.\nThe typical way of 'normalizing' an xml document is to pare it using some API that treats the document as its Infoset representation. Both DOM and SAX style APIs work that way.\nIf you want to parse them by hand (or with a RegEx) you have to replicate all those things in your code and, in my opinion, that's not practical.\n", "Note: single quotes (even no quotes, if the value doesn't contain a space) is valid according to the W3C HTML spec. Quote:\n\nBy default, SGML requires that all attribute values be delimited using either double quotation marks (ASCII decimal 34) or single quotation marks (ASCII decimal 39)... In certain cases, authors may specify the value of an attribute without any quotation marks.\n\nAlso, don't forget that the order of attributes can be reversed and that other attributes can appear in the tag.\n", "You may want to give Java's HTMLEditorKit a shot. It is easy to experiment with to see if the parsing provides what you are looking for. \n", "Ok, since you are looking for language-agnostic then you can try a REGEX like /<meta\\s.*content=.*>/ and take the result from that and parse out the specific values that you are looking for. I'm by no means a REGEX expert so there is probably a better way but in using the tool at http://www.codehouse.com/webmaster_tools/regex/ I matched both of the strings you provided.\n", "If you must use regex, here is a regex to get just the content part:\ncontent\\s*=\\s*['\"].*?['\"]\n\nreturns\ncontent = \"blogger\"\n\nand\ncontent='Worpress.com'\n\nrespectively. I'm no regex expert, but it gets those when given your examples in regexpal.\nOnce you get that you can get everything between the quotes however you choose, be it another regex (which is just immoral at that point) or just looping over the characters.\n", "If your using java you may want to look at tagsoup, which is a SAX-compliant parser for \"[parsing] HTML as it is found in the wild\".\n" ]
[ 14, 3, 2, 2, 1, 0, 0, 0 ]
[]
[]
[ "html", "language_agnostic", "regex" ]
stackoverflow_0000031535_html_language_agnostic_regex.txt
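For readers who do end up on the regex route discussed above, here is a rough C# sketch (not taken from any of the answers; the class and method names are invented) that tolerates both attribute orders and either quote style by matching the whole meta tag first and then pulling the content value out of it. The thread's advice still stands: a real HTML parser is the safer choice for anything beyond quick scraping.

using System;
using System.Text.RegularExpressions;

class GeneratorTagReader
{
    // Grab whole <meta ...> tags first; attribute order inside the tag then no longer matters.
    static readonly Regex MetaTag = new Regex(@"<meta\b[^>]*>", RegexOptions.IgnoreCase);

    // name=generator with double quotes, single quotes or no quotes at all.
    static readonly Regex NameIsGenerator = new Regex(@"\bname\s*=\s*(['""]?)generator\1", RegexOptions.IgnoreCase);

    // content='...', content="..." or a bare unquoted value.
    static readonly Regex ContentValue = new Regex(
        @"\bcontent\s*=\s*(?:'(?<v>[^']*)'|""(?<v>[^""]*)""|(?<v>[^\s/>]+))",
        RegexOptions.IgnoreCase);

    public static string FindGenerator(string html)
    {
        foreach (Match tag in MetaTag.Matches(html))
        {
            if (!NameIsGenerator.IsMatch(tag.Value))
                continue;
            Match content = ContentValue.Match(tag.Value);
            if (content.Success)
                return content.Groups["v"].Value;
        }
        return null;   // no generator meta tag found
    }

    static void Main()
    {
        // Both provider styles from the question resolve to the content value.
        Console.WriteLine(FindGenerator("<meta content='blogger' name='generator'/>"));            // blogger
        Console.WriteLine(FindGenerator("<meta name=\"generator\" content=\"WordPress.com\" />")); // WordPress.com
    }
}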
Q: How to respond to an alternate URI in a RESTful web service I'm building a RESTful web service which has multiple URIs for one of its resources, because there is more than one unique identifier. Should the server respond to a GET request for an alternate URI by returning the resource, or should I send an HTTP 3xx redirect to the canonical URI? Is HTTP 303 (see also) the most appropriate redirect? Clarification: the HTTP specification makes it clear that the choice of redirect depends on which URI future requests should use. In my application, the 'canonical' URI is the most stable of the alternatives; an alternative URI will always direct to same canonical URI, or become invalid. A: I'd personally plump for returning the resource rather than faffing with a redirect, although I suspect that's only because my subcoscious is telling me redirects are slower. However, if you were to decide to use a redirect I'd think a 302 or 307 might be more appropiate than a 303, although the w3.org has details of the different redirect codes you could use. A: Under W3C's Architexture of the World Wide Web, Volume One, there is a section on URI aliases (Section 2.3.1) which states the following: "When a URI alias does become common currency, the URI owner should use protocol techniques such as server-side redirects to relate the two resources. The community benefits when the URI owner supports redirection of an aliased URI to the corresponding "official" URI. For more information on redirection, see section 10.3, Redirection, in RFC2616. See also CHIPS for a discussion of some best practices for server administrators." For what it's worth, I would recommend a 302 redirect. A: The answer from Ubiguchi had what I needed, except that I now think a redirect is the way to go, via the link to the HTTP 1.1 specifiction section on response codes. It turns out that I actually need a 301 redirect because the URI I'm redirecting to is more 'correct' and stable, and should therefore be used for future requests.
How to respond to an alternate URI in a RESTful web service
I'm building a RESTful web service which has multiple URIs for one of its resources, because there is more than one unique identifier. Should the server respond to a GET request for an alternate URI by returning the resource, or should I send an HTTP 3xx redirect to the canonical URI? Is HTTP 303 (see also) the most appropriate redirect? Clarification: the HTTP specification makes it clear that the choice of redirect depends on which URI future requests should use. In my application, the 'canonical' URI is the most stable of the alternatives; an alternative URI will always direct to same canonical URI, or become invalid.
[ "I'd personally plump for returning the resource rather than faffing with a redirect, although I suspect that's only because my subcoscious is telling me redirects are slower.\nHowever, if you were to decide to use a redirect I'd think a 302 or 307 might be more appropiate than a 303, although the w3.org has details of the different redirect codes you could use.\n", "Under W3C's Architexture of the World Wide Web, Volume One, there is a section on URI aliases (Section 2.3.1) which states the following:\n\"When a URI alias does become common currency, the URI owner should use protocol techniques such as server-side redirects to relate the two resources. The community benefits when the URI owner supports redirection of an aliased URI to the corresponding \"official\" URI. For more information on redirection, see section 10.3, Redirection, in RFC2616. See also CHIPS for a discussion of some best practices for server administrators.\"\nFor what it's worth, I would recommend a 302 redirect.\n", "The answer from Ubiguchi had what I needed, except that I now think a redirect is the way to go, via the link to the HTTP 1.1 specifiction section on response codes. It turns out that I actually need a 301 redirect because the URI I'm redirecting to is more 'correct' and stable, and should therefore be used for future requests.\n" ]
[ 4, 2, 1 ]
[]
[]
[ "http", "language_agnostic", "rest" ]
stackoverflow_0000031800_http_language_agnostic_rest.txt
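As a concrete illustration of the redirect option the thread settles on, here is a hedged ASP.NET sketch; the handler name, the example URIs and the alias lookup are all invented for the example. A request for an alias URI is answered with a 301 pointing at the canonical URI, and the representation is served only when the canonical URI itself is requested.

using System.Web;

// If the requested path is a known alias, answer 301 Moved Permanently with the canonical
// URI in the Location header; otherwise serve the representation as normal.
public class CanonicalRedirectHandler : IHttpHandler
{
    public bool IsReusable
    {
        get { return true; }
    }

    public void ProcessRequest(HttpContext context)
    {
        string canonical = LookUpCanonicalUri(context.Request.Path);

        if (canonical != null && canonical != context.Request.Path)
        {
            // 301 tells clients and caches that future requests should use the canonical URI.
            context.Response.StatusCode = 301;
            context.Response.StatusDescription = "Moved Permanently";
            context.Response.AddHeader("Location", canonical);
            return;
        }

        context.Response.ContentType = "text/plain";
        context.Response.Write("representation of the resource goes here");
    }

    // Stand-in for whatever alias-to-canonical mapping the application keeps.
    private string LookUpCanonicalUri(string requestedPath)
    {
        if (requestedPath == "/orders/by-invoice/12345")
            return "/orders/987";
        return null;
    }
}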
Q: Accessing static fields in XAML How does one go about referencing a class's static properties in xaml? In other words, I want to do something like this: Class BaseThingy { public static readonly Style BaseStyle; ... } <ResourceDictionary ...> <Style BasedOn="BaseThingy.Style" TargetType="BaseThingy" /> </ResourceDictionary> What is the syntax to do this in the BasedOn? I assumed it would involve using StaticResource to some degree, but I haven't gotten it to work for me. A: Use x:Static markup extension <ResourceDictionary ... xmlns:local="clr-namespace:Namespace.Where.Your.BaseThingy.Class.Is.Defined" > <Style BasedOn="{x:Static local:BaseThingy.BaseStyle}" TargetType="BaseThingy" /> </ResourceDictionary>
Accessing static fields in XAML
How does one go about referencing a class's static properties in xaml? In other words, I want to do something like this: Class BaseThingy { public static readonly Style BaseStyle; ... } <ResourceDictionary ...> <Style BasedOn="BaseThingy.Style" TargetType="BaseThingy" /> </ResourceDictionary> What is the syntax to do this in the BasedOn? I assumed it would involve using StaticResource to some degree, but I haven't gotten it to work for me.
[ "Use x:Static markup extension\n<ResourceDictionary ...\n    xmlns:local=\"clr-namespace:Namespace.Where.Your.BaseThingy.Class.Is.Defined\"\n>\n    <Style BasedOn=\"{x:Static local:BaseThingy.BaseStyle}\" TargetType=\"BaseThingy\" />\n</ResourceDictionary>\n\n" ]
[ 11 ]
[]
[]
[ ".net", "c#", "silverlight", "wpf", "xaml" ]
stackoverflow_0000032395_.net_c#_silverlight_wpf_xaml.txt
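One detail the answer above leaves implicit is that the static field has to be populated before the XAML that references it is loaded. A minimal companion sketch, assuming a WPF-style BaseThingy control (the style contents are invented for illustration), would initialize it in a static constructor:

using System.Windows;
using System.Windows.Controls;
using System.Windows.Media;

public class BaseThingy : Control
{
    public static readonly Style BaseStyle;

    static BaseThingy()
    {
        // Build the style once; the {x:Static local:BaseThingy.BaseStyle} reference in the
        // resource dictionary picks it up when the XAML is loaded.
        BaseStyle = new Style(typeof(BaseThingy));
        BaseStyle.Setters.Add(new Setter(Control.ForegroundProperty, Brushes.DarkBlue));
    }
}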
Q: Algorithm to format text to Pascal or camel casing Using this question as the base is there an alogrithm or coding example to change some text to Pascal or Camel casing. For example: mynameisfred becomes Camel: myNameIsFred Pascal: MyNameIsFred A: I found a thread with a bunch of Perl guys arguing the toss on this question over at http://www.perlmonks.org/?node_id=336331. I hope this isn't too much of a non-answer to the question, but I would say you have a bit of a problem in that it would be a very open-ended algorithm which could have a lot of 'misses' as well as hits. For example, say you inputted:- camelCase("hithisisatest"); The output could be:- "hiThisIsATest" Or:- "hitHisIsATest" There's no way the algorithm would know which to prefer. You could add some extra code to specify that you'd prefer more common words, but again misses would occur (Peter Norvig wrote a very small spelling corrector over at http://norvig.com/spell-correct.html which might help algorithm-wise, I wrote a C# implementation if C#'s your language). I'd agree with Mark and say you'd be better off having an algorithm that takes a delimited input, i.e. this_is_a_test and converts that. That'd be simple to implement, i.e. in pseudocode:- SetPhraseCase(phrase, CamelOrPascal): if no delimiters if camelCase return lowerFirstLetter(phrase) else return capitaliseFirstLetter(phrase) words = splitOnDelimiter(phrase) if camelCase ret = lowerFirstLetter(first word) else ret = capitaliseFirstLetter(first word) for i in 2 to len(words): ret += capitaliseFirstLetter(words[i]) return ret capitaliseFirstLetter(word): if len(word) <= 1 return upper(word) return upper(word[0]) + word[1..len(word)] lowerFirstLetter(word): if len(word) <= 1 return lower(word) return lower(word[0]) + word[1..len(word)] You could also replace my capitaliseFirstLetter() function with a proper case algorithm if you so wished. A C# implementation of the above described algorithm is as follows (complete console program with test harness):- using System; class Program { static void Main(string[] args) { var caseAlgorithm = new CaseAlgorithm('_'); while (true) { string input = Console.ReadLine(); if (string.IsNullOrEmpty(input)) return; Console.WriteLine("Input '{0}' in camel case: '{1}', pascal case: '{2}'", input, caseAlgorithm.SetPhraseCase(input, CaseAlgorithm.CaseMode.CamelCase), caseAlgorithm.SetPhraseCase(input, CaseAlgorithm.CaseMode.PascalCase)); } } } public class CaseAlgorithm { public enum CaseMode { PascalCase, CamelCase } private char delimiterChar; public CaseAlgorithm(char inDelimiterChar) { delimiterChar = inDelimiterChar; } public string SetPhraseCase(string phrase, CaseMode caseMode) { // You might want to do some sanity checks here like making sure // there's no invalid characters, etc. if (string.IsNullOrEmpty(phrase)) return phrase; // .Split() will simply return a string[] of size 1 if no delimiter present so // no need to explicitly check this. var words = phrase.Split(delimiterChar); // Set first word accordingly. string ret = setWordCase(words[0], caseMode); // If there are other words, set them all to pascal case. 
if (words.Length > 1) { for (int i = 1; i < words.Length; ++i) ret += setWordCase(words[i], CaseMode.PascalCase); } return ret; } private string setWordCase(string word, CaseMode caseMode) { switch (caseMode) { case CaseMode.CamelCase: return lowerFirstLetter(word); case CaseMode.PascalCase: return capitaliseFirstLetter(word); default: throw new NotImplementedException( string.Format("Case mode '{0}' is not recognised.", caseMode.ToString())); } } private string lowerFirstLetter(string word) { return char.ToLower(word[0]) + word.Substring(1); } private string capitaliseFirstLetter(string word) { return char.ToUpper(word[0]) + word.Substring(1); } } A: The only way to do that would be to run each section of the word through a dictionary. "mynameisfred" is just an array of characters, splitting it up into my Name Is Fred means understanding what the joining of each of those characters means. You could do it easily if your input was separated in some way, e.g. "my name is fred" or "my_name_is_fred".
Algorithm to format text to Pascal or camel casing
Using this question as the base, is there an algorithm or coding example to change some text to Pascal or Camel casing. For example: mynameisfred becomes Camel: myNameIsFred Pascal: MyNameIsFred
[ "I found a thread with a bunch of Perl guys arguing the toss on this question over at http://www.perlmonks.org/?node_id=336331.\nI hope this isn't too much of a non-answer to the question, but I would say you have a bit of a problem in that it would be a very open-ended algorithm which could have a lot of 'misses' as well as hits. For example, say you inputted:-\ncamelCase(\"hithisisatest\");\n\nThe output could be:-\n\"hiThisIsATest\"\n\nOr:-\n\"hitHisIsATest\"\n\nThere's no way the algorithm would know which to prefer. You could add some extra code to specify that you'd prefer more common words, but again misses would occur (Peter Norvig wrote a very small spelling corrector over at http://norvig.com/spell-correct.html which might help algorithm-wise, I wrote a C# implementation if C#'s your language).\nI'd agree with Mark and say you'd be better off having an algorithm that takes a delimited input, i.e. this_is_a_test and converts that. That'd be simple to implement, i.e. in pseudocode:-\nSetPhraseCase(phrase, CamelOrPascal):\n if no delimiters\n if camelCase\n return lowerFirstLetter(phrase)\n else\n return capitaliseFirstLetter(phrase)\n words = splitOnDelimiter(phrase)\n if camelCase \n ret = lowerFirstLetter(first word) \n else\n ret = capitaliseFirstLetter(first word)\n for i in 2 to len(words): ret += capitaliseFirstLetter(words[i])\n return ret\n\ncapitaliseFirstLetter(word):\n if len(word) <= 1 return upper(word)\n return upper(word[0]) + word[1..len(word)]\n\nlowerFirstLetter(word):\n if len(word) <= 1 return lower(word)\n return lower(word[0]) + word[1..len(word)]\n\nYou could also replace my capitaliseFirstLetter() function with a proper case algorithm if you so wished.\nA C# implementation of the above described algorithm is as follows (complete console program with test harness):-\nusing System;\n\nclass Program {\n static void Main(string[] args) {\n\n var caseAlgorithm = new CaseAlgorithm('_');\n\n while (true) {\n string input = Console.ReadLine();\n\n if (string.IsNullOrEmpty(input)) return;\n\n Console.WriteLine(\"Input '{0}' in camel case: '{1}', pascal case: '{2}'\",\n input,\n caseAlgorithm.SetPhraseCase(input, CaseAlgorithm.CaseMode.CamelCase),\n caseAlgorithm.SetPhraseCase(input, CaseAlgorithm.CaseMode.PascalCase));\n }\n }\n}\n\npublic class CaseAlgorithm {\n\n public enum CaseMode { PascalCase, CamelCase }\n\n private char delimiterChar;\n\n public CaseAlgorithm(char inDelimiterChar) {\n delimiterChar = inDelimiterChar;\n }\n\n public string SetPhraseCase(string phrase, CaseMode caseMode) {\n\n // You might want to do some sanity checks here like making sure\n // there's no invalid characters, etc.\n\n if (string.IsNullOrEmpty(phrase)) return phrase;\n\n // .Split() will simply return a string[] of size 1 if no delimiter present so\n // no need to explicitly check this.\n var words = phrase.Split(delimiterChar);\n\n // Set first word accordingly.\n string ret = setWordCase(words[0], caseMode);\n\n // If there are other words, set them all to pascal case.\n if (words.Length > 1) {\n for (int i = 1; i < words.Length; ++i)\n ret += setWordCase(words[i], CaseMode.PascalCase);\n }\n\n return ret;\n }\n\n private string setWordCase(string word, CaseMode caseMode) {\n switch (caseMode) {\n case CaseMode.CamelCase:\n return lowerFirstLetter(word);\n case CaseMode.PascalCase:\n return capitaliseFirstLetter(word);\n default:\n throw new NotImplementedException(\n string.Format(\"Case mode '{0}' is not recognised.\", caseMode.ToString()));\n }\n }\n\n private string 
lowerFirstLetter(string word) {\n return char.ToLower(word[0]) + word.Substring(1);\n }\n\n private string capitaliseFirstLetter(string word) {\n return char.ToUpper(word[0]) + word.Substring(1);\n }\n}\n\n", "The only way to do that would be to run each section of the word through a dictionary.\n\"mynameisfred\" is just an array of characters, splitting it up into my Name Is Fred means understanding what the joining of each of those characters means.\nYou could do it easily if your input was separated in some way, e.g. \"my name is fred\" or \"my_name_is_fred\".\n" ]
[ 3, 0 ]
[]
[]
[ "algorithm", "camelcasing", "coding_style", "pascalcasing" ]
stackoverflow_0000032241_algorithm_camelcasing_coding_style_pascalcasing.txt
Q: Assert action redirected to correct action/route? How do I exercise an action to ensure it redirects to the correct action or route? A: public ActionResult Foo() { return RedirectToAction("Products", "Index"); } [Test] public void foo_redirects_to_products_index() { var controller = new BarController(); var result = controller.Foo() as RedirectToRouteResult; if(result == null) Assert.Fail("should have redirected"); Assert.That(result.RouteData.Values["Controller"], Is.EqualTo("Products")); Assert.That(result.RouteData.Values["Action"], Is.EqualTo("Index")); }
Assert action redirected to correct action/route?
How do I exercise an action to ensure it redirects to the correct action or route?
[ "public ActionResult Foo()\n{\n return RedirectToAction(\"Products\", \"Index\");\n}\n\n[Test]\npublic void foo_redirects_to_products_index()\n{\n var controller = new BarController();\n var result = controller.Foo() as RedirectToRouteResult;\n\n if(result == null)\n Assert.Fail(\"should have redirected\");\n\n Assert.That(result.RouteData.Values[\"Controller\"], Is.EqualTo(\"Products\"));\n Assert.That(result.RouteData.Values[\"Action\"], Is.EqualTo(\"Index\"));\n\n}\n\n" ]
[ 10 ]
[]
[]
[ "asp.net_mvc", "unit_testing" ]
stackoverflow_0000032364_asp.net_mvc_unit_testing.txt
Q: Disable asp.net radiobutton with javascript I'm trying to disable a bunch of controls with JavaScript (so that they post back values). All the controls work fine except for my radio buttons as they lose their value. In the below code which is called via a recursive function to disable all child controls the Second else (else if (control is RadioButton)) is never hit and the RadioButton control is identified as a Checkbox control. private static void DisableControl(WebControl control) { if (control is CheckBox) { ((CheckBox)control).InputAttributes.Add("disabled", "disabled"); } else if (control is RadioButton) { } else if (control is ImageButton) { ((ImageButton)control).Enabled = false; } else { control.Attributes.Add("readonly", "readonly"); } } Two Questions: 1. How do I identify which control is a radiobutton? 2. How do I disable it so that it posts back its value? A: I found 2 ways to get this to work, the below code correctly distinguishes between the RadioButton and Checkbox controls. private static void DisableControl(WebControl control) { Type controlType = control.GetType(); if (controlType == typeof(CheckBox)) { ((CheckBox)control).InputAttributes.Add("disabled", "disabled"); } else if (controlType == typeof(RadioButton)) { ((RadioButton)control).InputAttributes.Add("disabled", "true"); } else if (controlType == typeof(ImageButton)) { ((ImageButton)control).Enabled = false; } else { control.Attributes.Add("readonly", "readonly"); } } And the solution I used is to set SubmitDisabledControls="True" in the form element which is not ideal as it allows a user to fiddle with the values but is fine in my scenario. The second solution is to mimic the Disabled behavior and details can be found here: https://web.archive.org/web/20210608183803/http://aspnet.4guysfromrolla.com/articles/012506-1.aspx'>https://web.archive.org/web/20210608183803/http://aspnet.4guysfromrolla.com/articles/012506-1.aspx. A: Off the top of my head, I think you have to check the "type" attribute of the checkbox to determine if it's a radio button.
Disable asp.net radiobutton with javascript
I'm trying to disable a bunch of controls with JavaScript (so that they post back values). All the controls work fine except for my radio buttons as they lose their value. In the below code which is called via a recursive function to disable all child controls the Second else (else if (control is RadioButton)) is never hit and the RadioButton control is identified as a Checkbox control. private static void DisableControl(WebControl control) { if (control is CheckBox) { ((CheckBox)control).InputAttributes.Add("disabled", "disabled"); } else if (control is RadioButton) { } else if (control is ImageButton) { ((ImageButton)control).Enabled = false; } else { control.Attributes.Add("readonly", "readonly"); } } Two Questions: 1. How do I identify which control is a radiobutton? 2. How do I disable it so that it posts back its value?
[ "I found 2 ways to get this to work, the below code correctly distinguishes between the RadioButton and Checkbox controls.\n private static void DisableControl(WebControl control)\n {\n Type controlType = control.GetType();\n\n if (controlType == typeof(CheckBox))\n {\n ((CheckBox)control).InputAttributes.Add(\"disabled\", \"disabled\");\n\n }\n else if (controlType == typeof(RadioButton))\n {\n ((RadioButton)control).InputAttributes.Add(\"disabled\", \"true\");\n }\n else if (controlType == typeof(ImageButton))\n {\n ((ImageButton)control).Enabled = false;\n }\n else\n {\n control.Attributes.Add(\"readonly\", \"readonly\");\n }\n }\n\nAnd the solution I used is to set SubmitDisabledControls=\"True\" in the form element which is not ideal as it allows a user to fiddle with the values but is fine in my scenario. The second solution is to mimic the Disabled behavior and details can be found here: https://web.archive.org/web/20210608183803/http://aspnet.4guysfromrolla.com/articles/012506-1.aspx'>https://web.archive.org/web/20210608183803/http://aspnet.4guysfromrolla.com/articles/012506-1.aspx.\n", "Off the top of my head, I think you have to check the \"type\" attribute of the checkbox to determine if it's a radio button.\n" ]
[ 3, 0 ]
[]
[]
[ "asp.net", "c#", "javascript" ]
stackoverflow_0000032173_asp.net_c#_javascript.txt
Q: Configurable Table Prefixes with a .Net OR/M? In a web application like wiki or forums or blogging software, it is often useful to store your data in a relational database. Since many hosting companies offer a single database with their hosting plans (with additional databases costing extra) it is very useful for your users when your database objects (tables, views, constraints, and stored procedures) have a common prefix. It is typical for applications aware of database scarcity to have a hard-coded table prefix. I want more, however. Specifically, I'd like to have a table prefix that users can designate—say in the web.config file (with an appropriate default, of course). Since I hate coding CRUD operations by hand, I prefer to work through a competent OR/M and have used (and enjoyed) LINQ to SQL, Subsonic, and ADO.Net. I'm having some thrash in a new project, however, when it comes to putting a table prefix in a user's web.config file. Are there any .Net-based OR/M products that can handle this scenario elegantly? The best I have been able to come up with so far is using LINQ to SQL with an external mapping file that I'd have to update somehow based on an as-yet hypothetical web.config setting. Anyone have a better solution? I tried to make it happen in Entity Framework, but that turned into a mess quickly. (Due to my unfamiliarity with EF? Possibly.) How about SubSonic? Does it have an option to apply a table prefix besides at code generation time? A: I've now researched what it takes to do this in both Entity Framework and LINQ to SQL and documented the steps required in each. It's much longer than answers here tend to be so I'll be content with a link to the answer rather than duplicate it here. It's relatively involved for each, but the LINQ to SQL is the more flexible solution and also the easiest to implment. A: LightSpeed allows you to specify an INamingStrategy that lets you resolve table names dynamically at runtime. A: Rather than use table prefixes instead have an application user that belongs to a schema (in MS Sql 2005 or above). This means that instead of: select * from dbo.clientAProduct select * from dbo.clientBroduct You have: select * from clientA.Product select * from clientB.Product
Configurable Table Prefixes with a .Net OR/M?
In a web application like wiki or forums or blogging software, it is often useful to store your data in a relational database. Since many hosting companies offer a single database with their hosting plans (with additional databases costing extra) it is very useful for your users when your database objects (tables, views, constraints, and stored procedures) have a common prefix. It is typical for applications aware of database scarcity to have a hard-coded table prefix. I want more, however. Specifically, I'd like to have a table prefix that users can designate—say in the web.config file (with an appropriate default, of course). Since I hate coding CRUD operations by hand, I prefer to work through a competent OR/M and have used (and enjoyed) LINQ to SQL, Subsonic, and ADO.Net. I'm having some thrash in a new project, however, when it comes to putting a table prefix in a user's web.config file. Are there any .Net-based OR/M products that can handle this scenario elegantly? The best I have been able to come up with so far is using LINQ to SQL with an external mapping file that I'd have to update somehow based on an as-yet hypothetical web.config setting. Anyone have a better solution? I tried to make it happen in Entity Framework, but that turned into a mess quickly. (Due to my unfamiliarity with EF? Possibly.) How about SubSonic? Does it have an option to apply a table prefix besides at code generation time?
[ "I've now researched what it takes to do this in both Entity Framework and LINQ to SQL and documented the steps required in each. It's much longer than answers here tend to be so I'll be content with a link to the answer rather than duplicate it here. It's relatively involved for each, but the LINQ to SQL is the more flexible solution and also the easiest to implment.\n", "LightSpeed allows you to specify an INamingStrategy that lets you resolve table names dynamically at runtime.\n", "Rather than use table prefixes instead have an application user that belongs to a schema (in MS Sql 2005 or above).\nThis means that instead of:\nselect * from dbo.clientAProduct\nselect * from dbo.clientBroduct\n\nYou have:\nselect * from clientA.Product\nselect * from clientB.Product\n\n" ]
[ 2, 1, 1 ]
[]
[]
[ ".net", "orm" ]
stackoverflow_0000011740_.net_orm.txt
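To make the "external mapping file driven by web.config" idea from the first answer a little more concrete, here is a hedged LINQ to SQL sketch. The appSetting name, the {prefix} placeholder convention and the mapping file name are assumptions invented for the example; only ConfigurationManager, XmlMappingSource and the DataContext constructor are real framework APIs.

using System;
using System.Configuration;
using System.Data.Linq;
using System.Data.Linq.Mapping;
using System.IO;

public static class PrefixedDataContextFactory
{
    public static DataContext Create(string connectionString)
    {
        // Prefix chosen by the user at install time, e.g. <add key="TablePrefix" value="blog_" />
        string prefix = ConfigurationManager.AppSettings["TablePrefix"] ?? "app_";

        // The external mapping file carries a {prefix} token in every Table Name attribute,
        // e.g. <Table Name="dbo.{prefix}Page" ...>, which is substituted before loading.
        string mappingPath = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "MyModel.map");
        string mappingXml = File.ReadAllText(mappingPath).Replace("{prefix}", prefix);

        MappingSource mapping = XmlMappingSource.FromXml(mappingXml);
        return new DataContext(connectionString, mapping);
    }
}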
Q: C# - SQLClient - Simplest INSERT I'm basically trying to figure out the simplest way to perform your basic insert operation in C#.NET using the SqlClient namespace. I'm using SqlConnection for my db link, I've already had success executing some reads, and I want to know the simplest way to insert data. I'm finding what seem to be pretty verbose methods when I google. A: using (var conn = new SqlConnection(yourConnectionString)) { var cmd = new SqlCommand("insert into Foo values (@bar)", conn); cmd.Parameters.AddWithValue("@bar", 17); conn.Open(); cmd.ExecuteNonQuery(); } A: Since you seem to be just getting started with this now is the best time to familiarize yourself with the concept of a Data Access Layer (obligatory wikipedia link). It will be very helpful for you down the road when you're apps have more interaction with the database throughout and you want to minimize code duplication. Also makes for more consistent behavior, making testing and tons of other things easier. A: using (SqlConnection myConnection new SqlConnection("Your connection string")) { SqlCommand myCommand = new SqlCommand("INSERT INTO ... VALUES ...", myConnection); myConnection.Open(); myCommand.ExecuteNonQuery(); }
C# - SQLClient - Simplest INSERT
I'm basically trying to figure out the simplest way to perform your basic insert operation in C#.NET using the SqlClient namespace. I'm using SqlConnection for my db link, I've already had success executing some reads, and I want to know the simplest way to insert data. I'm finding what seem to be pretty verbose methods when I google.
[ "using (var conn = new SqlConnection(yourConnectionString))\n{\n    var cmd = new SqlCommand(\"insert into Foo values (@bar)\", conn);\n    cmd.Parameters.AddWithValue(\"@bar\", 17);\n    conn.Open();\n    cmd.ExecuteNonQuery();\n}\n\n", "Since you seem to be just getting started with this now is the best time to familiarize yourself with the concept of a Data Access Layer (obligatory wikipedia link). It will be very helpful for you down the road when your apps have more interaction with the database throughout and you want to minimize code duplication. Also makes for more consistent behavior, making testing and tons of other things easier.\n", "using (SqlConnection myConnection = new SqlConnection(\"Your connection string\")) \n{ \n    SqlCommand myCommand = new SqlCommand(\"INSERT INTO ... VALUES ...\", myConnection); \n    myConnection.Open(); \n    myCommand.ExecuteNonQuery(); \n}\n\n" ]
[ 18, 2, 0 ]
[]
[]
[ "c#", "sql", "sql_server", "tsql" ]
stackoverflow_0000032000_c#_sql_sql_server_tsql.txt
Q: Pulling limited tagged photos from Flickr So I've got a hobby site I'm working on. I've got items that are tagged and I want to associate those items with photos from Flickr. Even with restrictive searches, I might get results numbering in the thousands. Requirements: I want to display between 10-20 pictures but I want to randomize the photos each time. I don't want to hit Flickr every time a page request is made. Not every Flickr photo with the same tags as my item will be relevant. How should I store that number of results and how would I determine which ones are relevant? A: I would suggest moving the code that selects, randomizes, downloads and caches photos to separate service. It could be locally accessible REST application. Keep your core code clean and don't clutter it with remote operations and retention policy. Build tags-to-images map and store it locally, in file or database. Randomizing array is easy in both cases. Point image src to local cache. Clean cache periodically, depending on your hosting capacity. Whitelist or blacklist photos to filter them in step 1. A: Your best bet for parts 1 and 2 is to make a large request, say returning 100 or 200 photos and store the URL and other details. Then producing random selections from your local copy should be simple. For part 3 I'm not sure how you would accomplish this without some form of human intervention, unless you can define 'relevant' in some terms you can program against. If human intervention is fine then obviously they can browse your local copy of photos and pick relevant ones (or discard un-relevant ones).
Pulling limited tagged photos from Flickr
So I've got a hobby site I'm working on. I've got items that are tagged and I want to associate those items with photos from Flickr. Even with restrictive searches, I might get results numbering in the thousands. Requirements: I want to display between 10-20 pictures but I want to randomize the photos each time. I don't want to hit Flickr every time a page request is made. Not every Flickr photo with the same tags as my item will be relevant. How should I store that number of results and how would I determine which ones are relevant?
[ "I would suggest moving the code that selects, randomizes, downloads and caches photos to separate service. It could be locally accessible REST application. Keep your core code clean and don't clutter it with remote operations and retention policy.\n\nBuild tags-to-images map and store it locally, in file or database. Randomizing array is easy in both cases.\nPoint image src to local cache. Clean cache periodically, depending on your hosting capacity.\nWhitelist or blacklist photos to filter them in step 1.\n\n", "Your best bet for parts 1 and 2 is to make a large request, say returning 100 or 200 photos and store the URL and other details. Then producing random selections from your local copy should be simple.\nFor part 3 I'm not sure how you would accomplish this without some form of human intervention, unless you can define 'relevant' in some terms you can program against.\nIf human intervention is fine then obviously they can browse your local copy of photos and pick relevant ones (or discard un-relevant ones).\n" ]
[ 1, 1 ]
[]
[]
[ "flickr", "php", "tags" ]
stackoverflow_0000032462_flickr_php_tags.txt
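A rough C# sketch of the "fetch a large batch once, cache it locally, randomize per page view" approach described in the answers. All names are invented and the Flickr call itself is left as a stub, since the relevance filtering in point 3 still needs either a whitelist/blacklist or a human in the loop.

using System;
using System.Collections.Generic;

public class CachedPhotoPicker
{
    private readonly TimeSpan _refreshInterval = TimeSpan.FromHours(6);
    private readonly Dictionary<string, List<string>> _photosByTag = new Dictionary<string, List<string>>();
    private readonly Dictionary<string, DateTime> _fetchedAt = new Dictionary<string, DateTime>();
    private readonly Random _random = new Random();

    // Returns up to `count` photo URLs for the tag, in a fresh random order on each call.
    public List<string> PickRandom(string tag, int count)
    {
        List<string> pool = GetPool(tag);
        var copy = new List<string>(pool);
        int take = Math.Min(count, copy.Count);

        // Partial Fisher-Yates shuffle: only the first `take` slots need randomizing.
        for (int i = 0; i < take; i++)
        {
            int j = _random.Next(i, copy.Count);
            string tmp = copy[i]; copy[i] = copy[j]; copy[j] = tmp;
        }
        return copy.GetRange(0, take);
    }

    private List<string> GetPool(string tag)
    {
        DateTime last;
        if (!_fetchedAt.TryGetValue(tag, out last) || DateTime.UtcNow - last > _refreshInterval)
        {
            // One large request per tag per refresh interval instead of one per page view.
            _photosByTag[tag] = FetchPhotoUrlsFromFlickr(tag, 200);
            _fetchedAt[tag] = DateTime.UtcNow;
        }
        return _photosByTag[tag];
    }

    private List<string> FetchPhotoUrlsFromFlickr(string tag, int max)
    {
        // Stub: call flickr.photos.search here and map the results to URLs.
        return new List<string>();
    }
}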
Q: How do you write code that is both 32 bit and 64 bit compatible? What considerations do I need to make if I want my code to run correctly on both 32bit and 64bit platforms ? EDIT: What kind of areas do I need to take care in, e.g. printing strings/characters or using structures ? A: Options: Code it in some language with a Virtual Machine (such as Java) Code it in .NET and don't target any specific architecture. The .NET JIT compiler will compile it for you to the right architecture before running it. A: One solution would be to target a virtual environment that runs on both platforms (I'm thinking Java, or .Net here). Or pick an interpreted language. Do you have other requirements, such as calling existing code or libraries? A: The same things you should have been doing all along to ensure you write portable code :) mozilla guidelines and the C faq are good starting points A: I assume you are still talking about compiling them separately for each individual platform? As running them on both is completely doable by just creating a 32bit binary. A: The biggest one is making sure you don't put pointers into 32-bit storage locations. But there's no proper 'language-agnostic' answer to this question, really. You couldn't even get a particularly firm answer if you restricted yourself to something like standard 'C' or 'C++' - the size of data storage, pointers, etc, is all terribly implementation dependant. A: It honestly depends on the language, because managed languages like C# and Java or Scripting languages like JavaScript, Python, or PHP are locked in to their current methodology and to get started and to do anything beyond the advanced stuff there is not much to worry about. But my guess is that you are asking about languages like C++, C, and other lower level languages. The biggest thing you have to worry about is the size of things, because in the 32-bit world you are limited to the power of 2^32 however in the 64-bit world things get bigger 2^64. With 64-bit you have a larger space for memory and storage in RAM, and you can compute larger numbers. However if you know you are compiling for both 32 and 64, you need to make sure to limit your expectations of the system to the 32-bit world and limitations of buffers and numbers. A: In C (and maybe C++) always remember to use the sizeof operator when calculating buffer sizes for malloc. This way you will write more portable code anyway, and this will automatically take 64bit datatypes into account. A: In most cases the only thing you have to do is just compile your code for both platforms. (And that's assuming that you're using a compiled language; if it's not, then you probably don't need to worry about anything.) The only thing I can think of that might cause problems is assuming the size of data types, which is something you probably shouldn't be doing anyway. And of course anything written in assembly is going to cause problems. A: Keep in mind that many compilers choose the size of integer based on the underlying architecture, given that the "int" should be the fastest number manipulator in the system (according to some theories). This is why so many programmers use typedefs for their most portable programs - if you want your code to work on everything from 8 bit processors up to 64 bit processors you need to recognize that, in C anyway, int is not rigidly defined. 
Pointers are another area to be careful - don't use a long, or long long, or any specific type if you are fiddling with the numeric value of the pointer - use the proper construct, which, unfortunately, varies from compiler to compiler (which is why you have a separate typedef.h file for each compiler you use). -Adam Davis
How do you write code that is both 32 bit and 64 bit compatible?
What considerations do I need to make if I want my code to run correctly on both 32bit and 64bit platforms ? EDIT: What kind of areas do I need to take care in, e.g. printing strings/characters or using structures ?
[ "Options:\nCode it in some language with a Virtual Machine (such as Java)\nCode it in .NET and don't target any specific architecture. The .NET JIT compiler will compile it for you to the right architecture before running it.\n", "One solution would be to target a virtual environment that runs on both platforms (I'm thinking Java, or .Net here).\nOr pick an interpreted language.\nDo you have other requirements, such as calling existing code or libraries?\n", "The same things you should have been doing all along to ensure you write portable code :)\nmozilla guidelines and the C faq are good starting points\n", "I assume you are still talking about compiling them separately for each individual platform? As running them on both is completely doable by just creating a 32bit binary.\n", "The biggest one is making sure you don't put pointers into 32-bit storage locations.\nBut there's no proper 'language-agnostic' answer to this question, really. You couldn't even get a particularly firm answer if you restricted yourself to something like standard 'C' or 'C++' - the size of data storage, pointers, etc, is all terribly implementation dependant.\n", "It honestly depends on the language, because managed languages like C# and Java or Scripting languages like JavaScript, Python, or PHP are locked in to their current methodology and to get started and to do anything beyond the advanced stuff there is not much to worry about.\nBut my guess is that you are asking about languages like C++, C, and other lower level languages.\nThe biggest thing you have to worry about is the size of things, because in the 32-bit world you are limited to the power of 2^32 however in the 64-bit world things get bigger 2^64. \nWith 64-bit you have a larger space for memory and storage in RAM, and you can compute larger numbers. However if you know you are compiling for both 32 and 64, you need to make sure to limit your expectations of the system to the 32-bit world and limitations of buffers and numbers.\n", "In C (and maybe C++) always remember to use the sizeof operator when calculating buffer sizes for malloc. This way you will write more portable code anyway, and this will automatically take 64bit datatypes into account.\n", "In most cases the only thing you have to do is just compile your code for both platforms. (And that's assuming that you're using a compiled language; if it's not, then you probably don't need to worry about anything.)\nThe only thing I can think of that might cause problems is assuming the size of data types, which is something you probably shouldn't be doing anyway. And of course anything written in assembly is going to cause problems.\n", "Keep in mind that many compilers choose the size of integer based on the underlying architecture, given that the \"int\" should be the fastest number manipulator in the system (according to some theories).\nThis is why so many programmers use typedefs for their most portable programs - if you want your code to work on everything from 8 bit processors up to 64 bit processors you need to recognize that, in C anyway, int is not rigidly defined.\nPointers are another area to be careful - don't use a long, or long long, or any specific type if you are fiddling with the numeric value of the pointer - use the proper construct, which, unfortunately, varies from compiler to compiler (which is why you have a separate typedef.h file for each compiler you use).\n-Adam Davis\n" ]
[ 2, 1, 1, 0, 0, 0, 0, 0, 0 ]
[]
[]
[ "32bit_64bit", "compatibility", "language_agnostic" ]
stackoverflow_0000032533_32bit_64bit_compatibility_language_agnostic.txt
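The answers above are mostly about C, but the same "ask the platform rather than assume" point can be shown in C# (illustrative only; the struct is invented): IntPtr.Size reports the pointer width of the running process, and Marshal.SizeOf reports the unmanaged size of a structure, both of which change between 32-bit and 64-bit builds when pointer-sized fields are involved.

using System;
using System.Runtime.InteropServices;

class PointerSizeDemo
{
    [StructLayout(LayoutKind.Sequential)]
    struct Example
    {
        public IntPtr Handle;   // pointer-sized: 4 bytes in a 32-bit process, 8 in a 64-bit one
        public int Count;       // always 4 bytes
    }

    static void Main()
    {
        // Reports 4 or 8 depending on how the process is running.
        Console.WriteLine("IntPtr size: {0} bytes", IntPtr.Size);

        // The unmanaged layout of the struct changes with the pointer size (and padding).
        Console.WriteLine("Unmanaged size of Example: {0} bytes", Marshal.SizeOf(typeof(Example)));
    }
}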
Q: Is there a way to do "intraWord" text navigation in Visual Studio? On Windows, Ctrl+Right Arrow will move the text cursor from one "word" to the next. While working with Xcode on the Mac, they extended that so that Option+Right Arrow will move the cursor to the beginning of the next subword. For example, if the cursor was at the beginning of the word myCamelCaseVar then hitting Option+Right Arrow will put the cursor at the first C. This was an amazingly useful feature that I haven't found in a Windows editor. Do you know of any way to do this in Visual Studio (perhaps with an Add-In)? I'm currently using pretty old iterations of Visual Studio (Visual Basic 6.0 and Visual C++), although I'm interested to know if the more modern releases can do this, too. A: ReSharper has a "Camel Humps" feature that lets you do this.
Is there a way to do "intraWord" text navigation in Visual Studio?
On Windows, Ctrl+Right Arrow will move the text cursor from one "word" to the next. While working with Xcode on the Mac, they extended that so that Option+Right Arrow will move the cursor to the beginning of the next subword. For example, if the cursor was at the beginning of the word myCamelCaseVar then hitting Option+Right Arrow will put the cursor at the first C. This was an amazingly useful feature that I haven't found in a Windows editor. Do you know of any way to do this in Visual Studio (perhaps with an Add-In)? I'm currently using pretty old iterations of Visual Studio (Visual Basic 6.0 and Visual C++), although I'm interested to know if the more modern releases can do this, too.
[ "ReSharper has a \"Camel Humps\" feature that lets you do this.\n" ]
[ 2 ]
[]
[]
[ "keyboard", "visual_studio" ]
stackoverflow_0000032617_keyboard_visual_studio.txt
Q: How can I get the number of occurrences in a SQL IN clause? Let's say I have four tables: PAGE, USER, TAG, and PAGE-TAG: Table | Fields ------------------------------------------ PAGE | ID, CONTENT TAG | ID, NAME USER | ID, NAME PAGE-TAG | ID, PAGE-ID, TAG-ID, USER-ID And let's say I have four pages: PAGE#1 'Content page 1' tagged with tag#1 by user1, tagged with tag#1 by user2 PAGE#2 'Content page 2' tagged with tag#3 by user2, tagged by tag#1 by user2, tagged by tag#8 by user1 PAGE#3 'Content page 3' tagged with tag#7 by user#1 PAGE#4 'Content page 4' tagged with tag#1 by user1, tagged with tag#8 by user1 I expect my query to look something like this: select page.content ? from page, page-tag where page.id = page-tag.pag-id and page-tag.tag-id in (1, 3, 8) order by ? desc I would like to get output like this: Content page 2, 3 Content page 4, 2 Content page 1, 1 Quoting Neall Your question is a bit confusing. Do you want to get the number of times each page has been tagged? No The number of times each page has gotten each tag? No The number of unique users that have tagged a page? No The number of unique users that have tagged each page with each tag? No I want to know how many of the passed tags appear in a particular page, not just if any of the tags appear. SQL IN works like an boolean operator OR. If a page was tagged with any value within the IN Clause then it returns true. I would like to know how many of the values inside of the IN clause return true. Below i show, the output i expect: page 1 | in (1,2) -> 1 page 1 | in (1,2,3) -> 1 page 1 | in (1) -> 1 page 1 | in (1,3,8) -> 1 page 2 | in (1,2) -> 1 page 2 | in (1,2,3) -> 2 page 2 | in (1) -> 1 page 2 | in (1,3,8) -> 3 page 4 | in (1,2,3) -> 1 page 4 | in (1,2,3) -> 1 page 4 | in (1) -> 1 page 4 | in (1,3,8) -> 2 This will be the content of the page-tag table i mentioned before: id page-id tag-id user-id 1 1 1 1 2 1 1 2 3 2 3 2 4 2 1 2 5 2 8 1 6 3 7 1 7 4 1 1 8 4 8 1 @Kristof does not exactly what i am searching for but thanks anyway. @Daren If i execute you code i get the next error: #1054 - Unknown column 'page-tag.tag-id' in 'having clause' @Eduardo Molteni Your answer does not give the output in the question but: Content page 2 8 Content page 4 8 content page 2 3 content page 1 1 content page 1 1 content page 2 1 cotnent page 4 1 @Keith I am using plain SQL not T-SQL and i am not familiar with T-SQL, so i do not know how your query translate to plain SQL. Any more ideas? A: This might work: select page.content, count(page-tag.tag-id) as tagcount from page inner join page-tag on page-tag.page-id = page.id group by page.content having page-tag.tag-id in (1, 3, 8) A: OK, so the key difference between this and kristof's answer is that you only want a count of 1 to show against page 1, because it has been tagged only with one tag from the set (even though two separate users both tagged it). I would suggest this: SELECT page.ID, page.content, count(*) AS uniquetags FROM ( SELECT DISTINCT page.content, page.ID, page-tag.tag-id FROM page INNER JOIN page-tag ON page.ID=page-tag.page-ID WHERE page-tag.tag-id IN (1, 3, 8) ) GROUP BY page.ID I don't have a SQL Server installation to check this, so apologies if there's a syntax mistake. But semantically I think this is what you need. This may not give the output in descending order of number of tags, but try adding: ORDER BY uniquetags DESC at the end. My uncertainty is whether you can use ORDER BY outside of grouping in SQL Server. 
If not, then you may need to nest the whole thing in another SELECT. A: In T-Sql: select count(distinct name) from page-tag where tag-id in (1, 3, 8) This will give you a count of the number of different tag names for your list of ids A: Agree with Neall, bit confusing the question. If you want the output listed in the question, the sql is as simple as: select page.content, page-tag.tag-id from page, page-tag where page.id = page-tag.pag-id and page-tag.tag-id in (1, 3, 8) order by page-tag.tag-id desc But if you want the tagcount, Daren answered your question A: select page.content, count(pageTag.tagID) as tagCount from page inner join pageTag on page.ID = pageTag.pageID where pageTag.tagID in (1, 3, 8) group by page.content order by tagCount desc That gives you the number of tags per each page; ordered by the higher number of tags I hope I understood your question correctly A: Leigh Caldwell answer is correct, thanks man, but need to add an alias at least in MySQL. So the query will look like: SELECT page.ID, page.content, count(*) AS uniquetags FROM ( SELECT DISTINCT page.content, page.ID, page-tag.tag-id FROM page INNER JOIN page-tag ON page.ID=page-tag.page-ID WHERE page-tag.tag-id IN (1, 3, 8) ) as page GROUP BY page.ID order by uniquetags desc
How can I get the number of occurrences in a SQL IN clause?
Let's say I have four tables: PAGE, USER, TAG, and PAGE-TAG: Table | Fields ------------------------------------------ PAGE | ID, CONTENT TAG | ID, NAME USER | ID, NAME PAGE-TAG | ID, PAGE-ID, TAG-ID, USER-ID And let's say I have four pages: PAGE#1 'Content page 1' tagged with tag#1 by user1, tagged with tag#1 by user2 PAGE#2 'Content page 2' tagged with tag#3 by user2, tagged by tag#1 by user2, tagged by tag#8 by user1 PAGE#3 'Content page 3' tagged with tag#7 by user#1 PAGE#4 'Content page 4' tagged with tag#1 by user1, tagged with tag#8 by user1 I expect my query to look something like this: select page.content ? from page, page-tag where page.id = page-tag.pag-id and page-tag.tag-id in (1, 3, 8) order by ? desc I would like to get output like this: Content page 2, 3 Content page 4, 2 Content page 1, 1 Quoting Neall Your question is a bit confusing. Do you want to get the number of times each page has been tagged? No The number of times each page has gotten each tag? No The number of unique users that have tagged a page? No The number of unique users that have tagged each page with each tag? No I want to know how many of the passed tags appear in a particular page, not just if any of the tags appear. SQL IN works like an boolean operator OR. If a page was tagged with any value within the IN Clause then it returns true. I would like to know how many of the values inside of the IN clause return true. Below i show, the output i expect: page 1 | in (1,2) -> 1 page 1 | in (1,2,3) -> 1 page 1 | in (1) -> 1 page 1 | in (1,3,8) -> 1 page 2 | in (1,2) -> 1 page 2 | in (1,2,3) -> 2 page 2 | in (1) -> 1 page 2 | in (1,3,8) -> 3 page 4 | in (1,2,3) -> 1 page 4 | in (1,2,3) -> 1 page 4 | in (1) -> 1 page 4 | in (1,3,8) -> 2 This will be the content of the page-tag table i mentioned before: id page-id tag-id user-id 1 1 1 1 2 1 1 2 3 2 3 2 4 2 1 2 5 2 8 1 6 3 7 1 7 4 1 1 8 4 8 1 @Kristof does not exactly what i am searching for but thanks anyway. @Daren If i execute you code i get the next error: #1054 - Unknown column 'page-tag.tag-id' in 'having clause' @Eduardo Molteni Your answer does not give the output in the question but: Content page 2 8 Content page 4 8 content page 2 3 content page 1 1 content page 1 1 content page 2 1 cotnent page 4 1 @Keith I am using plain SQL not T-SQL and i am not familiar with T-SQL, so i do not know how your query translate to plain SQL. Any more ideas?
[ "This might work:\nselect page.content, count(page-tag.tag-id) as tagcount\nfrom page inner join page-tag on page-tag.page-id = page.id\ngroup by page.content\nhaving page-tag.tag-id in (1, 3, 8)\n\n", "OK, so the key difference between this and kristof's answer is that you only want a count of 1 to show against page 1, because it has been tagged only with one tag from the set (even though two separate users both tagged it).\nI would suggest this:\nSELECT page.ID, page.content, count(*) AS uniquetags\nFROM\n ( SELECT DISTINCT page.content, page.ID, page-tag.tag-id \n FROM page INNER JOIN page-tag ON page.ID=page-tag.page-ID \n WHERE page-tag.tag-id IN (1, 3, 8) \n )\n GROUP BY page.ID\n\nI don't have a SQL Server installation to check this, so apologies if there's a syntax mistake. But semantically I think this is what you need.\nThis may not give the output in descending order of number of tags, but try adding:\n ORDER BY uniquetags DESC\n\nat the end. My uncertainty is whether you can use ORDER BY outside of grouping in SQL Server. If not, then you may need to nest the whole thing in another SELECT.\n", "In T-Sql:\nselect count(distinct name)\nfrom page-tag\nwhere tag-id in (1, 3, 8) \n\nThis will give you a count of the number of different tag names for your list of ids\n", "Agree with Neall, bit confusing the question.\nIf you want the output listed in the question, the sql is as simple as:\nselect page.content, page-tag.tag-id\nfrom page, page-tag \nwhere page.id = page-tag.pag-id \nand page-tag.tag-id in (1, 3, 8) \norder by page-tag.tag-id desc\n\nBut if you want the tagcount, Daren answered your question \n", "select \n page.content, \n count(pageTag.tagID) as tagCount\nfrom \n page\n inner join pageTag on page.ID = pageTag.pageID\nwhere \n pageTag.tagID in (1, 3, 8) \ngroup by\n page.content\norder by\n tagCount desc\n\nThat gives you the number of tags per each page; ordered by the higher number of tags\nI hope I understood your question correctly\n", "Leigh Caldwell answer is correct, thanks man, but need to add an alias at least in MySQL. So the query will look like: \nSELECT page.ID, page.content, count(*) AS uniquetags FROM\n ( SELECT DISTINCT page.content, page.ID, page-tag.tag-id FROM page INNER JOIN page-tag ON page.ID=page-tag.page-ID WHERE page-tag.tag-id IN (1, 3, 8) ) as page\n GROUP BY page.ID\norder by uniquetags desc\n\n" ]
[ 2, 1, 0, 0, 0, 0 ]
[]
[]
[ "sql" ]
stackoverflow_0000032059_sql.txt
Q: Transactional Design Pattern I have a need to create a "transactional" process using an external API that does not support COM+ or .NET transactions (Sharepoint to be exact) What I need to do is to be able to perform a number of processes in a sequence, but any failure in that sequence means that I will have to manually undo all of the previous steps. In my case there are only 2 types of step, both of which are fairly easy to undo/roll back. Does anyone have any suggestions for design patterns or structures that could be useful for this?
Transactional Design Pattern
I have a need to create a "transactional" process using an external API that does not support COM+ or .NET transactions (Sharepoint to be exact) What I need to do is to be able to perform a number of processes in a sequence, but any failure in that sequence means that I will have to manually undo all of the previous steps. In my case there are only 2 types of step, both of which are fairly easy to undo/roll back. Does anyone have any suggestions for design patterns or structures that could be useful for this?
[ "The GoF Command Pattern supports undoable operations.\nI think the same pattern can be used for sequential operations (sequential commands).\n", "If your changes are done to the SharePoint object model, you can use the fact that changes are not committed until you call the Update() method of the modified object, such as SPList.Update() or SPWeb.Update().\nOtherwise, I would use the Command Design Pattern. Chapter 6 in Head First Design Patterns even has an example that implements the undo functionality.\n", "Another good way for rollback/undo is the Memento Pattern. It's usually used to take a snapshot of the object at a given time and let the object state to be reverted to the memento.\n", "Next to the GOF Command Pattern you might also want to have a look at the Transaction Script pattern from P of EAA.\nYou should probably create a Composite Command (or Transaction Script) that executes in sequence.\n", "You might want to have a look at the Compensating Resource Manager:\nhttp://msdn.microsoft.com/en-us/library/8xkdw05k(VS.80).aspx\n", "If you're using C++ (or any other language with deterministic destructor execution when scopes end) you can take a look at Scope Guards. This technique can probably also be adapted to .NET by making ScopeGuard implement IDisposable and sprinkling \"using\" statements as needed.\n" ]
[ 3, 3, 2, 1, 0, 0 ]
[]
[]
[ "design_patterns", "sharepoint", "transactions" ]
stackoverflow_0000022165_design_patterns_sharepoint_transactions.txt
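A minimal sketch of the Command-pattern suggestion from the answers, assuming nothing about the SharePoint API: the interface and class names are invented, each of the two step types would implement ICompensableStep with its own undo logic, and the runner undoes whatever has already succeeded, in reverse order, when a later step throws.

using System;
using System.Collections.Generic;

public interface ICompensableStep
{
    void Execute();
    void Rollback();
}

public class ManualTransaction
{
    private readonly List<ICompensableStep> _steps = new List<ICompensableStep>();

    public void Add(ICompensableStep step)
    {
        _steps.Add(step);
    }

    public void Run()
    {
        var completed = new Stack<ICompensableStep>();
        try
        {
            foreach (ICompensableStep step in _steps)
            {
                step.Execute();
                completed.Push(step);   // remember only the steps that actually succeeded
            }
        }
        catch
        {
            while (completed.Count > 0)
                completed.Pop().Rollback();   // compensate in reverse order
            throw;                            // surface the original failure to the caller
        }
    }
}

If a Rollback can itself fail, that case needs its own policy (logging and continuing with the remaining compensations is a common choice).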
Q: procmail lockfile utility and NFS Has anyone used the lockfile utility that ships with procmail in conjunction with NFS mounted directories? The lockfile man page states that "Lockfile is NFS-resistant and eight-bit clean." A: I've used it. My company had a very NFS-intensive infrastructure at one point (less so now) and many Perl sysadmin tools dating back to the mid 90s. We wrapped lockfile in a perl module so that we could do consistent locking across NFS mounts. For that matter, our home directories were NFS mounted and we used procmail to deliver mail into them using the same style of locking and never had any problems with it (procmail delivering mail via NFS from server-a and mail being read via direct file access or UW-imap from a bunch of other servers).
procmail lockfile utility and NFS
Has anyone used the lockfile utility that ships with procmail in conjunction with NFS mounted directories? The lockfile man page states that "Lockfile is NFS-resistant and eight-bit clean."
[ "I've used it. My company had an very NFS-intensive infrastructure at one point (less so now) and many Perl sysadmin tools dating back to the mid 90s. We wrapped lockfile in a perl module so that we could do consistent locking across NFS mounts. For that matter, our home directories were NFS mounted and we used procmail to deliver mail into them using the same style of locking and never had any problems with it (procmail delivering mail via NFS from server-a and mail being read via firect file access or UW-imap from a bunch of other servers).\n" ]
[ 2 ]
[]
[]
[ "lockfile", "nfs", "procmail", "unix" ]
stackoverflow_0000032123_lockfile_nfs_procmail_unix.txt
Q: Authenticating Domain Users with System.DirectoryServices Given a username and a password for a domain user, what would be the best way to authenticate that user programmatically? A: It appears that .NET 3.5 added a new namespace to deal with this issue - System.DirectoryServices.AccountManagement. Code sample is below: Private Function ValidateExternalUser(ByVal username As String, ByVal password As String) As Boolean Using context As PrincipalContext = New PrincipalContext(ContextType.Domain, _defaultDomain) Return context.ValidateCredentials(username, password, ContextOptions.Negotiate) End Using End Function The namespace also seems to provide a lot of methods for manipulating a domain account (changing passwords, expiring passwords, etc). A: You can use some hacks to authenticate only. Try Dim directoryEntry as New DirectoryEntry("LDAP://DomainController:389/dc=domain,dc=suffix", "username", "password") Dim temp as Object = directoryEntry.NativeObject return true Catch return false End Try If the user is not valid, the directory entry NativeObject cannot be accessed and throws an exception. While this isn't the most efficient way (exceptions are evil, blah blah blah), it's quick and painless. This also has the super-cool advantage of working with all LDAP servers, not just AD.
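For readers working in C#, here is a rough equivalent of the first answer's VB.NET snippet, assuming .NET 3.5 and a reference to System.DirectoryServices.AccountManagement. The class name, method name, and the domain parameter are placeholders invented for this sketch.

using System.DirectoryServices.AccountManagement;

public static class DomainAuthenticator
{
    public static bool ValidateDomainUser(string domain, string userName, string password)
    {
        // ContextType.Domain points the PrincipalContext at Active Directory
        using (PrincipalContext context = new PrincipalContext(ContextType.Domain, domain))
        {
            return context.ValidateCredentials(userName, password, ContextOptions.Negotiate);
        }
    }
}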
Authenticating Domain Users with System.DirectoryServices
Given a username and a password for a domain user, what would be the best way to authenticate that user programmatically?
[ "It appears that .NET 3.5 added a new namespace to deal with this issue - System.DirectoryServices.AccountManagement. Code sample is below:\nPrivate Function ValidateExternalUser(ByVal username As String, ByVal password As String) As Boolean\n Using context As PrincipalContext = New PrincipalContext(ContextType.Domain, _defaultDomain)\n Return context.ValidateCredentials(username, password, ContextOptions.Negotiate)\n End Using\nEnd Function\n\nThe namespace also seems to provide a lot of methods for manipulating a domain account (changing passwords, expiring passwords, etc). \n", "You can use some hacks to authenticate only.\nTry\n Dim directoryEntry as New DirectoryEntry(\"LDAP://DomainController:389/dc=domain,dc=suffix\", \"username\", \"password\")\n Dim temp as Object = directoryEntry.NativeObject\n return true\nCatch\n return false\nEnd Try\n\nIf the user is not valid, the directory entry NativeObject cannot be accessed and throws an exception. While this isn't the most efficient way (exceptions are evil, blah blah blah), it's quick and painless. This also has the super-cool advantage of working with all LDAP servers, not just AD.\n" ]
[ 17, 9 ]
[]
[]
[ ".net", "authentication", "directoryservices" ]
stackoverflow_0000030861_.net_authentication_directoryservices.txt
Q: List in JScrollPane painting outside the viewport I have a list, each item of which has several things in it, including a JProgressBar which can be updated a lot. Each time one of the items updates its JProgressBar, the ListDataListener on the list tries to scroll it to the visible range using /* * This makes the updating content item automatically scroll * into view if it is off the viewport. */ public void contentsChanged(final ListDataEvent evt) { if (!EventQueue.isDispatchThread()) { /** * Make sure the scrolling happens in the graphics "dispatch" thread. */ EventQueue.invokeLater(new Runnable() { public void run() { contentsChanged(evt); } }); } if (playbackInProgress) { int index = evt.getIndex0(); currentContentList.ensureIndexIsVisible(index); } } Note that I'm trying to make sure the scrolling is done in the dispatch thread, since I thought maybe the problem was it being scrolled while it was repainting. And yet, I still have a problem where if things are really active, some of the list items paint outside of the viewport, overwriting what's outside the JScrollPane. Forcing an exposure event will repaint those things, but it's annoying. Is there anything else I need to look out for to stop these things painting outside of their clipping area? A: Have you tried explicitly enabling double-buffering on the JList and/or the components that it is drawing over? (with:setDoubleBuffered(boolean aFlag)) Another thought is that you might need to exit the function immediately after delegating to the EDT. The way your code is written, it looks like the update will happen in both threads if ContentChanged is invoked from a non-EDT thread. Logging in the first if (or set a breakpoint in the if -- but not in the runnable -- should help determine if that is your problem. eg: public void contentsChanged(final ListDataEvent evt) { if (!EventQueue.isDispatchThread()) { log.debug("Delegating contentsChanged(...) to EDT"); EventQueue.invokeLater(new Runnable() { public void run() { contentsChanged(evt); } }); // don't run ensureIndexIsVisible twice: return; } if (playbackInProgress) { int index = evt.getIndex0(); currentContentList.ensureIndexIsVisible(index); } }
List in JScrollPane painting outside the viewport
I have a list, each item of which has several things in it, including a JProgressBar which can be updated a lot. Each time one of the items updates its JProgressBar, the ListDataListener on the list tries to scroll it to the visible range using /* * This makes the updating content item automatically scroll * into view if it is off the viewport. */ public void contentsChanged(final ListDataEvent evt) { if (!EventQueue.isDispatchThread()) { /** * Make sure the scrolling happens in the graphics "dispatch" thread. */ EventQueue.invokeLater(new Runnable() { public void run() { contentsChanged(evt); } }); } if (playbackInProgress) { int index = evt.getIndex0(); currentContentList.ensureIndexIsVisible(index); } } Note that I'm trying to make sure the scrolling is done in the dispatch thread, since I thought maybe the problem was it being scrolled while it was repainting. And yet, I still have a problem where if things are really active, some of the list items paint outside of the viewport, overwriting what's outside the JScrollPane. Forcing an exposure event will repaint those things, but it's annoying. Is there anything else I need to look out for to stop these things painting outside of their clipping area?
[ "Have you tried explicitly enabling double-buffering on the JList and/or the components that it is drawing over? (with:setDoubleBuffered(boolean aFlag))\nAnother thought is that you might need to exit the function immediately after delegating to the EDT. The way your code is written, it looks like the update will happen in both threads if ContentChanged is invoked from a non-EDT thread. Logging in the first if (or set a breakpoint in the if -- but not in the runnable -- should help determine if that is your problem.\neg:\npublic void contentsChanged(final ListDataEvent evt)\n{\n if (!EventQueue.isDispatchThread())\n {\n log.debug(\"Delegating contentsChanged(...) to EDT\");\n\n EventQueue.invokeLater(new Runnable() \n {\n public void run() \n {\n contentsChanged(evt);\n }\n });\n // don't run ensureIndexIsVisible twice:\n return;\n }\n\n if (playbackInProgress)\n {\n int index = evt.getIndex0();\n currentContentList.ensureIndexIsVisible(index);\n }\n}\n\n" ]
[ 3 ]
[]
[]
[ "java", "jscrollpane", "swing" ]
stackoverflow_0000032519_java_jscrollpane_swing.txt
Q: In MS SQL Server 2005, is there a way to export the complete maintenance plan of a database as a SQL Script? Currently, if I want to output a SQL script for a table in my database, in Management Studio, I can right click and output a create script. Is there an equivalent to output an SQL script for a database's maintenance plan? Edit: The company I work for has 4 servers, 3 servers and no sign of integration, each one running about 500,000 transactions a day. The original maintenance plans were undocumented, and we are trying to create a default template maintenance plan. A: You can't export them as scripts, but if your intention is to migrate them between server instances then you can import and export them as follows: Connect to Integration Services and expand Stored Packages>MSDB>Maintenance Plans. You can then right click on the plan and select import or export A: I don't think you can do that with Maintenance Plans, because those are DTS packages, well now they are called SSIS (SQL Server Integration Services). There was a stored procedure from which you could add maintenance plans, but I think that it may be deprecated. (sp_add_maintenance_plan). I don't have a SQL 2005 to test here. The question is, why would you want to export the complete mp? :) If it's for importing in other server, then a ssis package could be useful. I suggest you take a look in that direction, because those you can export/import among servers.
In MS SQL Server 2005, is there a way to export the complete maintenance plan of a database as a SQL Script?
Currently, if I want to output a SQL script for a table in my database, in Management Studio, I can right click and output a create script. Is there an equivalent to output an SQL script for a database's maintenance plan? Edit: The company I work for has 4 servers, 3 servers and no sign of integration, each one running about 500,000 transactions a day. The original maintenance plans were undocumented, and we are trying to create a default template maintenance plan.
[ "You can't export them as scripts, but if your intention is to migrate them between server instances then you can import and export them as follows:\nConnect to Integration Services and expand Stored Packages>MSDB>Maintenance Plans. You can then right click on the plan and select import or export\n", "I don't think you can do that with Maintenance Plans, because those are DTS packages, well now they are called SSIS (SQL Server Integration Services). \nThere was a stored procedure from which you could add maintenance plans, but I think that it may be deprecated. (sp_add_maintenance_plan). I don't have a SQL 2005 to test here. \nThe question is, why would you want to export the complete mp? :) If it's for importing in other server, then a ssis package could be useful. I suggest you take a look in that direction, because those you can export/import among servers.\n" ]
[ 2, 0 ]
[]
[]
[ "sql", "sql_server", "sql_server_2005" ]
stackoverflow_0000032689_sql_sql_server_sql_server_2005.txt
Q: Windows Service Increasing CPU Consumption At my job, I have a clutch of six Windows services that I am responsible for, written in C# 2003. Each of these services contain a timer that fires every minute or so, where the majority of their work happens. My problem is that, as these services run, they start to consume more and more CPU time through each iteration of the loop, even if there is no meaningful work for them to do (ie, they're just idling, looking through the database for something to do). When they start up, each service uses an average of (about) 2-3% of 4 CPUs, which is fine. After 24 hours, each service will be consuming an entire processor for the duration of its loop's run. Can anyone help? I'm at a loss as to what could be causing this. Our current solution is to restart the services once a day (they shut themselves down, then a script sees that they're offline and restarts them at about 3AM). But this is not a long term solution; my concern is that as the services get busier, restarting them once a day may not be sufficient... but as there's a significant startup penalty (they all use NHibernate for data access), as they get busier, exactly what we don't want to be doing is restarting them more frequently. @akmad: True, it is very difficult. Yes, a service run in isolation will show the same symptom over time. No, it doesn't. We've looked at that. This can happen at 10AM or 6PM or in the middle of the night. There's no consistency. We do; and they are. The services are doing exactly what they should be, and nothing else. Unfortunately, that requires foreknowledge of exactly when the services are going to be maxing out CPUs, which happens on an unpredictable schedule, and never very quickly... which makes things doubly difficult, because my boss will run and restart them when they start having problems without thinking of debug issues. No, they're using a fairly consistent amount of RAM (approx. 60-80MB each, out of 4GB on the machine). Good suggestions, but rest assured, we have tried all of the usual troubleshooting. What I'm hoping is that this is a .NET issue that someone might know about, that we can work on solving. My boss' solution (which I emphatically don't want to implement) is to put a field in the database which holds multiple times for the services to restart during the day, so that he can make the problem go away and not think about it. I'm desperately seeking the cause of the real problem so that I can fix it, because that solution will become a disaster in about six months. @Yaakov Ellis: They each have a different function. One reads records out of an Oracle database somewhere offsite; another one processes those records and transfers files belonging to those records over to our system; a third checks those files to make sure they're what we expect them to be; another is a maintenance service that constantly checks things like disk space (that we have enough) and polls other servers to make sure they're alive; one is running only to make sure all of these other ones are running and doing their jobs, monitors and reports errors, and restarts anything that's failed to keep the whole system going 24 hours a day. So, if you're asking what I think you're asking, no, there isn't one common thing that all these services do (other than database access via NHibernate) that I can point to as a potential problem. 
Unfortunately, if that turns out to be the actual issue (which wouldn't surprise me greatly), the whole thing might be screwed -- and I'll end up rewriting all of them in simple SQL. I'm hoping it's a garbage collector problem or something easier to deal with than NHibernate. @Joshdan: No secret. As I said, we've tried all the usual troubleshooting. Profiling was unhelpful: the profiler we use was unable to point to any code that was actually executing when the CPU usage was high. These services were torn apart about a month ago looking for this problem. Every section of code was analyzed to attempt to figure out if our code was the issue; I'm not here asking because I haven't done my homework. Were this a simple case of the services doing more work than anticipated, that's something that would have been caught. The problem here is that, most of the time, the services are not doing anything at all, yet still manage to consume 25% or more of four CPU cores: they're finding no work to do, and exiting their loop and waiting for the next iteration. This should, quite literally, take almost no CPU time at all. Here's a example of behaviour we're seeing, on a service with no work to do for two days (in an unchanging environment). This was captured last week: Day 1, 8AM: Avg. CPU usage approx 3% Day 1, 6PM: Avg. CPU usage approx 8% Day 2, 7AM: Avg. CPU usage approx 20% Day 2, 11AM: Avg. CPU usage approx 30% Having looked at all of the possible mundane reasons for this, I've asked this question here because I figured (rightly, as it turns out) that I'd get more innovative answers (like Ubiguchi's), or pointers to things I hadn't thought of (like Ian's suggestion). So does the CPU spike happen immediately preceding the timer callback, within the timer callback, or immediately following the timer callback? You misunderstand. This is not a spike. If it were, there would be no problem; I can deal with spikes. But it's not... the CPU usage is going up generally. Even when the service is doing nothing, waiting for the next timer hit. When the service starts up, things are nice and calm, and the graph looks like what you'd expect... generally, 0% usage, with spikes to 10% as NHibernate hits the database or the service does some trivial amount of work. But this increases to an across-the-board 25% (more if I let it go too far) usage at all times while the process is running. That made Ian's suggestion the logical silver bullet (NHibernate does a lot of stuff when you're not looking). Alas, I've implemented his solution, but it hasn't had an effect (I have no proof of this, but I actually think it's made things worse... average usage is seeming to go up much faster now). Note that stripping out the NHibernate "sections" (as you recommend) is not feasible, since that would strip out about 90% of the code in the service, which would let me rule out the timer as a problem (which I absolutely intend to try), but can't help me rule out NHibernate as the issue, because if NHibernate is causing this, then the dodgy fix that's implemented (see below) is just going to have to become The Way The System Works; we are so dependent on NHibernate for this project that the PM simply won't accept that it's causing an unresolvable structural problem. I just noted a sense of desperation in the question -- that your problems would continue barring a small miracle Don't mean for it to come off that way. 
At the moment, the services are being restarted daily (with an option to input any number of hours of the day for them to shutdown and restart), which patches the problem but cannot be a long-term solution once they go onto the production machine and start to become busy. The problems will not continue, whether I fix them or the PM maintains this constraint on them. Obviously, I would prefer to implement a real fix, but since the initial testing revealed no reason for this, and the services have already been extensively reviewed, the PM would rather just have them restart multiple times than spend any more time trying to fix them. That's entirely out of my control and makes the miracle you were talking about more important than it would otherwise be. That is extremely intriguing (insofar as you trust your profiler). I don't. But then, these are Windows services written in .NET 1.1 running on a Windows 2000 machine, deployed by a dodgy Nant script, using an old version of NHibernate for database access. There's little on that machine I would actually say I trust. A: You mentioned that you're using NHibernate - are you closing your NHibernate sessions at appropriate points (such as the end of each iteration?) If not, then the size of the object map loaded into memory will be gradually increasing over time, and each session flush will take increasingly more CPU time. A: Here's where I'd start: Get Process Explorer and show %Time in JIT, %Time in GC, CPU Cycles Delta, CPU Time, CPU %, and Threads. You'll also want kernel and user time, and a couple of representative stack traces but I think you have to hit Properties to get snapshots. Compare before and after shots. A couple of thoughts on possibilities: excessive GC (% Time in GC going up. Also, Perfmon GC and CPU counters would correspond) excessive threads and associated context switches (# of threads going up) polling (stack traces are consistently caught in a single function) excessive kernel time (kernel times are high - Task Manager shows large kernel time numbers when CPU is high) exceptions (PE .NET tab Exceptions thrown is high and getting higher. There's also a Perfmon counter) virus/rootkit (OK, this is a last ditch scenario - but it is possible to construct a rootkit that hides from TaskManager. I'd suspect that you could then allocate your inevitable CPU usage to another process if you were cunning enough. Besides, if you've ruled out all of the above, I'm out of ideas right now) A: It's obviously pretty difficult to remotely debug you're unknown application... but here are some things I'd look at: What happens when you only run one of the services at a time? Do you still see the slow-down? This may indicate that there is some contention between the services. Does the problem always occur around the same time, regardless of how long the service has been running? This may indicate that something else (a backup, virus scan, etc) is causing the machine (or db) as a whole to slow down. Do you have logging or some other mechanism to be sure that the service is only doing work as often as you think it should? If you can see the performance degradation over a short time period, try running the service for a while and then attach a profiler to see exactly what is pegging the CPU. You don't mention anything about memory usage. Do you have any of this information for the services? It's possible that your using up most of the RAM and causing the disk the trash, or some similar problem. Best of luck! 
A: I suggest to hack the problem into pieces. First, find a way to reproduce the problem 100% of the times and quickly. Lower the timer so that the services fire up more frequently (for example, 10 times quicker than normal). If the problem arises 10 times quicker, then it's related to the number of iterations and not to real time or to real work done by the services). And you will be able to do the next steps quicker than once a day. Second, comment out all the real work code, and let only the services, the timers and the synchronization mechanism. If the problem still shows up, than it will be in that part of the code. If it doesn't, then start adding back the code you commented out, one piece at a time. Eventually, you should find out what part of the code is causing the problem. A: 'Fraid this answer is only going to suggest some directions for you to look in, but having seen similar problems in .NET Windows Services I have a couple of thoughts you might find helpful. My first suggestion is your services might have some bugs in either the way they handle memory, or perhaps in the way they handle unmanaged memory. The last time I tracked down a similar issue it turned out a 3rd party OSS libray we were using stored handles to unmanaged objects in static memory. The longer the service ran the more handles the service picked up which caused the process' CPU performance to nose-dive very quickly. The way to try and resolve this sort of issue to ensure your services store nothing in memory inbetween the timer invocations, although if your 3rd party libraries use static memory you might have to do something clever like create an app domain for the timer invocation and ditch the app doamin (and its static memory) once processing is complete. The other issue I've seen in similar circumstances was with the timer synchronization code being suspect, which in effect allowed more than one thread to be running the processing code at once. When we debugged the code we found the 1st thread was blocking the 2nd, and by the time the 2nd kicked off there was a 3rd being blocked. Over time the blocking was lasting longer and longer and the CPU usage was therefore heading to the top. The solution we used to fix the issue was to implement proper synchronization code so the timer only kicked off another thread if it wouldn't be blocked. Hope this helps, but apologies up front if both my thoughts are red herrings. A: Sounds like a threading issue with the timer. You might have one unit of work blocking another running on different worker threads, causing them to stack up every time the timer fires. Or you might have instances living and working longer than you expect. I'd suggest refactoring out the timer. Replace it with a single thread that queues up work on the ThreadPool. You can Sleep() the thread to control how often it looks for new work. Make sure this is the only place where your code is multithreaded. All other objects should be instantiated as work is readied for processing and destroyed after that work is completed. STATE IS THE ENEMY in multithreaded code. Another area where the design is lacking appears to be that you have multiple services that are polling resources to do something. I'd suggest unifying them under a single service. They might do seperate things, but they're working in unison; you're just using the filesystem, database, etc as a substitution for method calls. Also, 2003? I feel bad for you. A: Good suggestions, but rest assured, we have tried all of the usual troubleshooting. 
What I'm hoping is that this is a .NET issue that someone might know about, that we can work on solving. My feeling is that no matter how bizarre the underlying cause, the usual troubleshooting steps are your best bet for locating the issue. Since this is a performance issue, good measurements are invaluable. The overall process CPU usage is far too broad a measurement. Where is your service spending its time? You could use a profiler to measure this, or just log various section start and stops. If you aren't able to do even that, then use Andrea Bertani's suggestion -- isolate sections by removing others. Once you've located the general area, then you can make even finer-grained measurements, until you sort out the source of the CPU usage. If it's not obvious how to fix it at that point, you at least have ammunition for a much more specific question. If you have in fact already done all this usual troubleshooting, please do let us in on the secret.
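The two suspects that come up most often in the answers - overlapping timer callbacks and NHibernate sessions that are never released - can be made concrete with a short C# sketch. The PollingWorker class below is invented for illustration (it is not the original services' code): it skips a tick if the previous one is still running, and it opens and disposes a fresh NHibernate session on every pass so the session-level cache cannot grow between iterations.

using System;
using System.Threading;
using NHibernate;

public class PollingWorker
{
    private readonly ISessionFactory sessionFactory;
    private readonly Timer timer;   // held in a field so the timer is not garbage collected
    private int busy;               // 0 = idle, 1 = a tick is still running

    public PollingWorker(ISessionFactory sessionFactory)
    {
        this.sessionFactory = sessionFactory;
        this.timer = new Timer(OnTick, null, TimeSpan.FromMinutes(1), TimeSpan.FromMinutes(1));
    }

    private void OnTick(object state)
    {
        // refuse to overlap: if the previous tick has not finished, skip this one
        if (Interlocked.CompareExchange(ref busy, 1, 0) != 0)
            return;

        try
        {
            using (ISession session = sessionFactory.OpenSession())
            {
                // ... look for pending work and process it ...
            }   // the session (and its first-level cache) is released every iteration
        }
        finally
        {
            Interlocked.Exchange(ref busy, 0);
        }
    }
}

If the CPU climb stops once overlapping is impossible and sessions are short-lived, that points at one of those two causes; if it keeps climbing with the real work commented out, the problem lies elsewhere, which is exactly what the divide-and-conquer suggestion above would reveal.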
Windows Service Increasing CPU Consumption
At my job, I have a clutch of six Windows services that I am responsible for, written in C# 2003. Each of these services contain a timer that fires every minute or so, where the majority of their work happens. My problem is that, as these services run, they start to consume more and more CPU time through each iteration of the loop, even if there is no meaningful work for them to do (ie, they're just idling, looking through the database for something to do). When they start up, each service uses an average of (about) 2-3% of 4 CPUs, which is fine. After 24 hours, each service will be consuming an entire processor for the duration of its loop's run. Can anyone help? I'm at a loss as to what could be causing this. Our current solution is to restart the services once a day (they shut themselves down, then a script sees that they're offline and restarts them at about 3AM). But this is not a long term solution; my concern is that as the services get busier, restarting them once a day may not be sufficient... but as there's a significant startup penalty (they all use NHibernate for data access), as they get busier, exactly what we don't want to be doing is restarting them more frequently. @akmad: True, it is very difficult. Yes, a service run in isolation will show the same symptom over time. No, it doesn't. We've looked at that. This can happen at 10AM or 6PM or in the middle of the night. There's no consistency. We do; and they are. The services are doing exactly what they should be, and nothing else. Unfortunately, that requires foreknowledge of exactly when the services are going to be maxing out CPUs, which happens on an unpredictable schedule, and never very quickly... which makes things doubly difficult, because my boss will run and restart them when they start having problems without thinking of debug issues. No, they're using a fairly consistent amount of RAM (approx. 60-80MB each, out of 4GB on the machine). Good suggestions, but rest assured, we have tried all of the usual troubleshooting. What I'm hoping is that this is a .NET issue that someone might know about, that we can work on solving. My boss' solution (which I emphatically don't want to implement) is to put a field in the database which holds multiple times for the services to restart during the day, so that he can make the problem go away and not think about it. I'm desperately seeking the cause of the real problem so that I can fix it, because that solution will become a disaster in about six months. @Yaakov Ellis: They each have a different function. One reads records out of an Oracle database somewhere offsite; another one processes those records and transfers files belonging to those records over to our system; a third checks those files to make sure they're what we expect them to be; another is a maintenance service that constantly checks things like disk space (that we have enough) and polls other servers to make sure they're alive; one is running only to make sure all of these other ones are running and doing their jobs, monitors and reports errors, and restarts anything that's failed to keep the whole system going 24 hours a day. So, if you're asking what I think you're asking, no, there isn't one common thing that all these services do (other than database access via NHibernate) that I can point to as a potential problem. Unfortunately, if that turns out to be the actual issue (which wouldn't surprise me greatly), the whole thing might be screwed -- and I'll end up rewriting all of them in simple SQL. 
I'm hoping it's a garbage collector problem or something easier to deal with than NHibernate. @Joshdan: No secret. As I said, we've tried all the usual troubleshooting. Profiling was unhelpful: the profiler we use was unable to point to any code that was actually executing when the CPU usage was high. These services were torn apart about a month ago looking for this problem. Every section of code was analyzed to attempt to figure out if our code was the issue; I'm not here asking because I haven't done my homework. Were this a simple case of the services doing more work than anticipated, that's something that would have been caught. The problem here is that, most of the time, the services are not doing anything at all, yet still manage to consume 25% or more of four CPU cores: they're finding no work to do, and exiting their loop and waiting for the next iteration. This should, quite literally, take almost no CPU time at all. Here's a example of behaviour we're seeing, on a service with no work to do for two days (in an unchanging environment). This was captured last week: Day 1, 8AM: Avg. CPU usage approx 3% Day 1, 6PM: Avg. CPU usage approx 8% Day 2, 7AM: Avg. CPU usage approx 20% Day 2, 11AM: Avg. CPU usage approx 30% Having looked at all of the possible mundane reasons for this, I've asked this question here because I figured (rightly, as it turns out) that I'd get more innovative answers (like Ubiguchi's), or pointers to things I hadn't thought of (like Ian's suggestion). So does the CPU spike happen immediately preceding the timer callback, within the timer callback, or immediately following the timer callback? You misunderstand. This is not a spike. If it were, there would be no problem; I can deal with spikes. But it's not... the CPU usage is going up generally. Even when the service is doing nothing, waiting for the next timer hit. When the service starts up, things are nice and calm, and the graph looks like what you'd expect... generally, 0% usage, with spikes to 10% as NHibernate hits the database or the service does some trivial amount of work. But this increases to an across-the-board 25% (more if I let it go too far) usage at all times while the process is running. That made Ian's suggestion the logical silver bullet (NHibernate does a lot of stuff when you're not looking). Alas, I've implemented his solution, but it hasn't had an effect (I have no proof of this, but I actually think it's made things worse... average usage is seeming to go up much faster now). Note that stripping out the NHibernate "sections" (as you recommend) is not feasible, since that would strip out about 90% of the code in the service, which would let me rule out the timer as a problem (which I absolutely intend to try), but can't help me rule out NHibernate as the issue, because if NHibernate is causing this, then the dodgy fix that's implemented (see below) is just going to have to become The Way The System Works; we are so dependent on NHibernate for this project that the PM simply won't accept that it's causing an unresolvable structural problem. I just noted a sense of desperation in the question -- that your problems would continue barring a small miracle Don't mean for it to come off that way. At the moment, the services are being restarted daily (with an option to input any number of hours of the day for them to shutdown and restart), which patches the problem but cannot be a long-term solution once they go onto the production machine and start to become busy. 
The problems will not continue, whether I fix them or the PM maintains this constraint on them. Obviously, I would prefer to implement a real fix, but since the initial testing revealed no reason for this, and the services have already been extensively reviewed, the PM would rather just have them restart multiple times than spend any more time trying to fix them. That's entirely out of my control and makes the miracle you were talking about more important than it would otherwise be. That is extremely intriguing (insofar as you trust your profiler). I don't. But then, these are Windows services written in .NET 1.1 running on a Windows 2000 machine, deployed by a dodgy Nant script, using an old version of NHibernate for database access. There's little on that machine I would actually say I trust.
[ "You mentioned that you're using NHibernate - are you closing your NHibernate sessions at appropriate points (such as the end of each iteration?)\nIf not, then the size of the object map loaded into memory will be gradually increasing over time, and each session flush will take increasingly more CPU time.\n", "Here's where I'd start:\n\nGet Process Explorer and show %Time in JIT, %Time in GC, CPU Cycles Delta, CPU Time, CPU %, and Threads.\nYou'll also want kernel and user time, and a couple of representative stack traces but I think you have to hit Properties to get snapshots.\nCompare before and after shots.\n\nA couple of thoughts on possibilities:\n\nexcessive GC (% Time in GC going up. Also, Perfmon GC and CPU counters would correspond)\nexcessive threads and associated context switches (# of threads going up)\npolling (stack traces are consistently caught in a single function)\nexcessive kernel time (kernel times are high - Task Manager shows large kernel time numbers when CPU is high)\nexceptions (PE .NET tab Exceptions thrown is high and getting higher. There's also a Perfmon counter)\nvirus/rootkit (OK, this is a last ditch scenario - but it is possible to construct a rootkit that hides from TaskManager. I'd suspect that you could then allocate your inevitable CPU usage to another process if you were cunning enough. Besides, if you've ruled out all of the above, I'm out of ideas right now)\n\n", "It's obviously pretty difficult to remotely debug you're unknown application... but here are some things I'd look at:\n\nWhat happens when you only run one of the services at a time? Do you still see the slow-down? This may indicate that there is some contention between the services.\nDoes the problem always occur around the same time, regardless of how long the service has been running? This may indicate that something else (a backup, virus scan, etc) is causing the machine (or db) as a whole to slow down.\nDo you have logging or some other mechanism to be sure that the service is only doing work as often as you think it should?\nIf you can see the performance degradation over a short time period, try running the service for a while and then attach a profiler to see exactly what is pegging the CPU.\nYou don't mention anything about memory usage. Do you have any of this information for the services? It's possible that your using up most of the RAM and causing the disk the trash, or some similar problem.\n\nBest of luck!\n", "I suggest to hack the problem into pieces.\nFirst, find a way to reproduce the problem 100% of the times and quickly. Lower the timer so that the services fire up more frequently (for example, 10 times quicker than normal). If the problem arises 10 times quicker, then it's related to the number of iterations and not to real time or to real work done by the services). And you will be able to do the next steps quicker than once a day.\nSecond, comment out all the real work code, and let only the services, the timers and the synchronization mechanism. If the problem still shows up, than it will be in that part of the code.\nIf it doesn't, then start adding back the code you commented out, one piece at a time. 
Eventually, you should find out what part of the code is causing the problem.\n", "'Fraid this answer is only going to suggest some directions for you to look in, but having seen similar problems in .NET Windows Services I have a couple of thoughts you might find helpful.\nMy first suggestion is your services might have some bugs in either the way they handle memory, or perhaps in the way they handle unmanaged memory. The last time I tracked down a similar issue it turned out a 3rd party OSS libray we were using stored handles to unmanaged objects in static memory. The longer the service ran the more handles the service picked up which caused the process' CPU performance to nose-dive very quickly. The way to try and resolve this sort of issue to ensure your services store nothing in memory inbetween the timer invocations, although if your 3rd party libraries use static memory you might have to do something clever like create an app domain for the timer invocation and ditch the app doamin (and its static memory) once processing is complete.\nThe other issue I've seen in similar circumstances was with the timer synchronization code being suspect, which in effect allowed more than one thread to be running the processing code at once. When we debugged the code we found the 1st thread was blocking the 2nd, and by the time the 2nd kicked off there was a 3rd being blocked. Over time the blocking was lasting longer and longer and the CPU usage was therefore heading to the top. The solution we used to fix the issue was to implement proper synchronization code so the timer only kicked off another thread if it wouldn't be blocked.\nHope this helps, but apologies up front if both my thoughts are red herrings.\n", "Sounds like a threading issue with the timer. You might have one unit of work blocking another running on different worker threads, causing them to stack up every time the timer fires. Or you might have instances living and working longer than you expect.\nI'd suggest refactoring out the timer. Replace it with a single thread that queues up work on the ThreadPool. You can Sleep() the thread to control how often it looks for new work. Make sure this is the only place where your code is multithreaded. All other objects should be instantiated as work is readied for processing and destroyed after that work is completed. STATE IS THE ENEMY in multithreaded code. \nAnother area where the design is lacking appears to be that you have multiple services that are polling resources to do something. I'd suggest unifying them under a single service. They might do seperate things, but they're working in unison; you're just using the filesystem, database, etc as a substitution for method calls. Also, 2003? I feel bad for you.\n", "\nGood suggestions, but rest assured, we have tried all of the usual troubleshooting. What I'm hoping is that this is a .NET issue that someone might know about, that we can work on solving.\n\nMy feeling is that no matter how bizarre the underlying cause, the usual troubleshooting steps are your best bet for locating the issue.\nSince this is a performance issue, good measurements are invaluable. The overall process CPU usage is far too broad a measurement. Where is your service spending its time? You could use a profiler to measure this, or just log various section start and stops. 
If you aren't able to do even that, then use Andrea Bertani's suggestion -- isolate sections by removing others.\nOnce you've located the general area, then you can make even finer-grained measurements, until you sort out the source of the CPU usage. If it's not obvious how to fix it at that point, you at least have ammunition for a much more specific question.\nIf you have in fact already done all this usual troubleshooting, please do let us in on the secret.\n" ]
[ 3, 3, 2, 2, 1, 1, 0 ]
[]
[]
[ ".net_1.1", "c#", "nhibernate", "windows_services" ]
stackoverflow_0000026148_.net_1.1_c#_nhibernate_windows_services.txt
Q: 1:1 Foreign Key Constraints How do you specify that a foreign key constraint should be a 1:1 relationship in transact sql? Is declaring the column UNIQUE enough? Below is my existing code.! CREATE TABLE [dbo].MyTable( [MyTablekey] INT IDENTITY(1,1) NOT FOR REPLICATION NOT NULL, [OtherTableKey] INT NOT NULL UNIQUE CONSTRAINT [FK_MyTable_OtherTable] FOREIGN KEY REFERENCES [dbo].[OtherTable]([OtherTableKey]), ... CONSTRAINT [PK_MyTable] PRIMARY KEY CLUSTERED ( [MyTableKey] ASC ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY] ) ON [PRIMARY] GO A: A foreign key column with the UNIQUE and NOT NULL constraints that references a UNIQUE, NOT NULL column in another table creates a 1:(0|1) relationship, which is probably what you want. If there was a true 1:1 relationship, every record in the first table would have a corresponding record in the second table and vice-versa. In that case, you would probably just want to make one table (unless you needed some strange storage optimization). A: You could declare the column to be both the primary key and a foreign key. This is a good strategy for "extension" tables that are used to avoid putting nullable columns into the main table. A: @bosnic: You have a table CLIENT that has a 1:1 relationship with table SALES_OFFICE because, for example, the logic of your system says so. What your app logic says, and what your data model say are 2 different things. There is nothing wrong with enforcing that relationship with your business logic code, but it has no place in the data model. Would you really incorporate the data of SALES_OFFICE into CLIENT table? If every CLIENT has a unique SALES_OFFICE, and every SALES_OFFICE has a singular, unique CLIENT - then yes, they should be in the same table. We just need a better name. ;) And if another tables need to relate them selfs with SALES_OFFICE? There's no reason to. Relate your other tables to CLIENT, since CLIENT has a unique SALES_OFFICE. And what about database normalization best practices and patterns? This is normalization. To be fair, SALES_OFFICE and CLIENT is obviously not a 1:1 relationship - it's 1:N. Hopefully, your SALES_OFFICE exists to serve more than 1 client, and will continue to exist (for a while, at least) without any clients. A more realistic example is SALES_OFFICE and ZIP_CODE. A SALES_OFFICE must have exactly 1 ZIP_CODE, and 2 SALES_OFFICEs - even if they have an equivalent ZIP_CODE - do not share the instance of a ZIP_CODE (so, changing the ZIP_CODE of 1 does not impact the other). Wouldn't you agree that ZIP_CODE belongs as a column in SALES_OFFICE? A: Based on your code above, the unique constraint would be enough given that the for every primary key you have in the table, the unique constrained column is also unique. Also, this assumes that in [OtherTable], the [OtherTableKey] column is the primary key of that table. A: If there was a true 1:1 relationship, every record in the first table would have a corresponding record in the second table and vice-versa. In that case, you would probably just want to make one table (unless you needed some strange storage optimization). This is very incorrect. Let me give you an example. You have a table CLIENT that has a 1:1 relationship with table SALES_OFFICE because, for example, the logic of your system says so. Would you really incorporate the data of SALES_OFFICE into CLIENT table? And if another tables need to relate them selfs with SALES_OFFICE? 
And what about database normalization best practices and patterns? A foreign key column with the UNIQUE and NOT NULL constraints that references a UNIQUE, NOT NULL column in another table creates a 1:(0|1) relationship, which is probably what you want. The first part of your answer is the right answer, without the second part, unless the data in second table is really a kind of information that belongs to first table and never will be used by other tables.
1:1 Foreign Key Constraints
How do you specify that a foreign key constraint should be a 1:1 relationship in transact sql? Is declaring the column UNIQUE enough? Below is my existing code.! CREATE TABLE [dbo].MyTable( [MyTablekey] INT IDENTITY(1,1) NOT FOR REPLICATION NOT NULL, [OtherTableKey] INT NOT NULL UNIQUE CONSTRAINT [FK_MyTable_OtherTable] FOREIGN KEY REFERENCES [dbo].[OtherTable]([OtherTableKey]), ... CONSTRAINT [PK_MyTable] PRIMARY KEY CLUSTERED ( [MyTableKey] ASC ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY] ) ON [PRIMARY] GO
[ "A foreign key column with the UNIQUE and NOT NULL constraints that references a UNIQUE, NOT NULL column in another table creates a 1:(0|1) relationship, which is probably what you want.\nIf there was a true 1:1 relationship, every record in the first table would have a corresponding record in the second table and vice-versa. In that case, you would probably just want to make one table (unless you needed some strange storage optimization).\n", "You could declare the column to be both the primary key and a foreign key. This is a good strategy for \"extension\" tables that are used to avoid putting nullable columns into the main table.\n", "@bosnic:\n\nYou have a table CLIENT that has a 1:1 relationship with table SALES_OFFICE because, for example, the logic of your system says so. \n\nWhat your app logic says, and what your data model say are 2 different things. There is nothing wrong with enforcing that relationship with your business logic code, but it has no place in the data model.\n\nWould you really incorporate the data of SALES_OFFICE into CLIENT table? \n\nIf every CLIENT has a unique SALES_OFFICE, and every SALES_OFFICE has a singular, unique CLIENT - then yes, they should be in the same table. We just need a better name. ;)\n\nAnd if another tables need to relate them selfs with SALES_OFFICE? \n\nThere's no reason to. Relate your other tables to CLIENT, since CLIENT has a unique SALES_OFFICE. \n\nAnd what about database normalization best practices and patterns?\n\nThis is normalization.\nTo be fair, SALES_OFFICE and CLIENT is obviously not a 1:1 relationship - it's 1:N. Hopefully, your SALES_OFFICE exists to serve more than 1 client, and will continue to exist (for a while, at least) without any clients.\nA more realistic example is SALES_OFFICE and ZIP_CODE. A SALES_OFFICE must have exactly 1 ZIP_CODE, and 2 SALES_OFFICEs - even if they have an equivalent ZIP_CODE - do not share the instance of a ZIP_CODE (so, changing the ZIP_CODE of 1 does not impact the other). Wouldn't you agree that ZIP_CODE belongs as a column in SALES_OFFICE?\n", "Based on your code above, the unique constraint would be enough given that the for every primary key you have in the table, the unique constrained column is also unique. Also, this assumes that in [OtherTable], the [OtherTableKey] column is the primary key of that table.\n", "\nIf there was a true 1:1 relationship, every record in the first table would have a corresponding record in the second table and vice-versa. In that case, you would probably just want to make one table (unless you needed some strange storage optimization).\n\nThis is very incorrect. Let me give you an example. You have a table CLIENT that has a 1:1 relationship with table SALES_OFFICE because, for example, the logic of your system says so. Would you really incorporate the data of SALES_OFFICE into CLIENT table? And if another tables need to relate them selfs with SALES_OFFICE? And what about database normalization best practices and patterns?\n\nA foreign key column with the UNIQUE and NOT NULL constraints that references a UNIQUE, NOT NULL column in another table creates a 1:(0|1) relationship, which is probably what you want.\n\nThe first part of your answer is the right answer, without the second part, unless the data in second table is really a kind of information that belongs to first table and never will be used by other tables.\n" ]
[ 9, 4, 1, 0, 0 ]
[]
[]
[ "sql", "sql_server" ]
stackoverflow_0000032360_sql_sql_server.txt
Q: Write files to App_Data under medium trust hack? Is there any way to write files to App_Data under medium trust? I'm sure I've heard about some hack, is that true? A: I don't think you are able to create new files, but you should be able to write to existing files in the App_Data folder. But I have honestly never experienced any problems with Medium Trust and writing to the App_Data folder. Are you sure it has the necessary permissions needed for writing files to the hard drive? A: I don't have access to the server itself, so I can't check that. I can only chmod files and folder from my FTP client. I think my hosting provider needs to grant write permission to the network service account on the App_Data folder.
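As a point of reference, writing inside App_Data under medium trust normally looks like the sketch below; the trust level itself allows file IO inside the application directory, so the usual blocker is the NTFS permission for the worker-process account that the second answer mentions. The class name and the log.txt file name are placeholders for this example.

using System;
using System.IO;
using System.Web;

public static class AppDataWriter
{
    public static void AppendLine(string text)
    {
        // Server.MapPath keeps the path inside the application root,
        // which medium trust's FileIOPermission covers
        string path = HttpContext.Current.Server.MapPath("~/App_Data/log.txt");
        File.AppendAllText(path, text + Environment.NewLine);
    }
}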
Write files to App_Data under medium trust hack?
Is there any way to write files to App_Data under medium trust? I'm sure I've heard about some hack, is that true?
[ "I don't think you are able to create new files, but you should be able to write to existing files in the App_Data folder. But I have honestly never experienced any problems with Medium Trust and writing to the App_Data folder. Are you sure it has the necessary permissions needed for writing files to the hard drive?\n", "I don't have access to the server itself, so I can't check that. I can only chmod files and folder from my FTP client.\nI think my hosting provider needs to grant write permission to the network service account on the App_Data folder.\n" ]
[ 1, 0 ]
[]
[]
[ "asp.net", "trust" ]
stackoverflow_0000032513_asp.net_trust.txt
Q: Restrict the server access from LAN only Recently we got a new server at the office purely for testing purposes. It is set up so that we can access it from any computer. However, today our IP got blocked from one of our other sites saying that our IP has been suspected of having a virus that sends spam emails. We learned this from the CBL (http://cbl.abuseat.org/). So of course we turned the server off to stop this. The problem is the server must be on to continue developing our application and to access the database that is installed on it. Our normal admin is on vacation and is unreachable, and the rest of us are idiots (me included) in this area. We believe that the best solution is to remove it from connecting to the Internet but still access it on the LAN. If that is a valid solution, how would this be done, or is there a better way, say blocking specified ports or whatever? A: I assume that this server is behind a router? You should be able to block WAN connections to the server on the router and still leave it open to accepting LAN connection. Or you could restrict the IPs that can connect to the server to the development machines on the network.
Restrict the server access from LAN only
Recently we got a new server at the office purely for testing purposes. It is set up so that we can access it from any computer. However, today our IP got blocked from one of our other sites saying that our IP has been suspected of having a virus that sends spam emails. We learned this from the CBL (http://cbl.abuseat.org/). So of course we turned the server off to stop this. The problem is the server must be on to continue developing our application and to access the database that is installed on it. Our normal admin is on vacation and is unreachable, and the rest of us are idiots (me included) in this area. We believe that the best solution is to remove it from connecting to the Internet but still access it on the LAN. If that is a valid solution, how would this be done, or is there a better way, say blocking specified ports or whatever?
[ "I assume that this server is behind a router? You should be able to block WAN connections to the server on the router and still leave it open to accepting LAN connection. Or you could restrict the IPs that can connect to the server to the development machines on the network.\n" ]
[ 1 ]
[]
[]
[ "lan", "router", "server", "wan" ]
stackoverflow_0000032780_lan_router_server_wan.txt
Q: How do I get today's date in C# in mm/dd/yyyy format? How do I get today's date in C# in mm/dd/yyyy format? I need to set a string variable to today's date (preferably without the year), but there's got to be a better way than building it month-/-day one piece at a time. BTW: I'm in the US so M/dd would be correct, e.g. September 11th is 9/11. Note: an answer from kronoz came in that discussed internationalization, and I thought it was awesome enough to mention since I can't make it an 'accepted' answer as well. kronoz's answer A: DateTime.Now.ToString("M/d/yyyy"); http://msdn.microsoft.com/en-us/library/8kb3ddd4.aspx A: Not to be horribly pedantic, but if you are internationalising the code it might be more useful to have the facility to get the short date for a given culture, e.g.:- using System.Globalization; using System.Threading; ... var currentCulture = Thread.CurrentThread.CurrentCulture; try { Thread.CurrentThread.CurrentCulture = CultureInfo.CreateSpecificCulture("en-us"); string shortDateString = DateTime.Now.ToShortDateString(); // Do something with shortDateString... } finally { Thread.CurrentThread.CurrentCulture = currentCulture; } Though clearly the "m/dd/yyyy" approach is considerably neater!! A: DateTime.Now.ToString("dd/MM/yyyy"); A: If you want it without the year: DateTime.Now.ToString("MM/DD"); DateTime.ToString() has a lot of cool format strings: http://msdn.microsoft.com/en-us/library/aa326721.aspx A: DateTime.Now.Date.ToShortDateString() is culture specific. It is best to stick with: DateTime.Now.ToString("d/MM/yyyy"); A: string today = DateTime.Today.ToString("M/d"); A: DateTime.Now.Date.ToShortDateString() I think this is what you are looking for A: Or without the year: DateTime.Now.ToString("M/dd")
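One detail from the answers that is easy to miss: in a custom format string the "/" character stands for the current culture's date separator, so the output of "M/dd" can change with regional settings unless a culture is pinned. A small, self-contained sketch (the class name is invented for this example):

using System;
using System.Globalization;

class DateFormatDemo
{
    static void Main()
    {
        // Literal slash regardless of the machine's regional settings
        string usStyle = DateTime.Today.ToString("M/dd", CultureInfo.InvariantCulture);

        // Whatever the current culture considers a short date
        string localStyle = DateTime.Today.ToShortDateString();

        Console.WriteLine(usStyle);
        Console.WriteLine(localStyle);
    }
}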
How do I get today's date in C# in mm/dd/yyyy format?
How do I get today's date in C# in mm/dd/yyyy format? I need to set a string variable to today's date (preferably without the year), but there's got to be a better way than building it month-/-day one piece at a time. BTW: I'm in the US so M/dd would be correct, e.g. September 11th is 9/11. Note: an answer from kronoz came in that discussed internationalization, and I thought it was awesome enough to mention since I can't make it an 'accepted' answer as well. kronoz's answer
[ "DateTime.Now.ToString(\"M/d/yyyy\");\n\nhttp://msdn.microsoft.com/en-us/library/8kb3ddd4.aspx\n", "Not to be horribly pedantic, but if you are internationalising the code it might be more useful to have the facility to get the short date for a given culture, e.g.:-\nusing System.Globalization;\nusing System.Threading;\n\n...\n\nvar currentCulture = Thread.CurrentThread.CurrentCulture;\ntry {\n Thread.CurrentThread.CurrentCulture = CultureInfo.CreateSpecificCulture(\"en-us\");\n string shortDateString = DateTime.Now.ToShortDateString();\n // Do something with shortDateString...\n} finally {\n Thread.CurrentThread.CurrentCulture = currentCulture;\n}\n\nThough clearly the \"m/dd/yyyy\" approach is considerably neater!!\n", "DateTime.Now.ToString(\"dd/MM/yyyy\");\n\n", "If you want it without the year:\nDateTime.Now.ToString(\"MM/DD\");\n\nDateTime.ToString() has a lot of cool format strings:\nhttp://msdn.microsoft.com/en-us/library/aa326721.aspx\n", "DateTime.Now.Date.ToShortDateString()\n\nis culture specific. \nIt is best to stick with:\nDateTime.Now.ToString(\"d/MM/yyyy\");\n\n", "string today = DateTime.Today.ToString(\"M/d\");\n\n", "DateTime.Now.Date.ToShortDateString()\n\nI think this is what you are looking for\n", "Or without the year:\nDateTime.Now.ToString(\"M/dd\")\n\n" ]
[ 224, 23, 17, 9, 8, 8, 5, 4 ]
[]
[]
[ "c#", "date" ]
stackoverflow_0000032747_c#_date.txt
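A minimal, self-contained C# sketch of the format strings discussed in the entry above; the culture name ("en-US") and the printed output are illustrative assumptions, and note that custom format specifiers are case-sensitive ("dd" is day-of-month, while "DD" is not a valid day specifier).

using System;
using System.Globalization;

class TodayFormatDemo
{
    static void Main()
    {
        // Month/day without the year, e.g. "9/11" for September 11th.
        string monthDay = DateTime.Now.ToString("M/dd");

        // Full short date, e.g. "9/11/2008".
        string fullDate = DateTime.Now.ToString("M/d/yyyy");

        // Culture-aware alternative: pass an explicit culture instead of
        // temporarily switching Thread.CurrentThread.CurrentCulture.
        string usShortDate = DateTime.Now.ToString("d", CultureInfo.CreateSpecificCulture("en-US"));

        Console.WriteLine(monthDay);
        Console.WriteLine(fullDate);
        Console.WriteLine(usShortDate);
    }
}

Passing the culture to ToString keeps the rest of the thread unaffected, which is usually simpler than the save/restore pattern shown in the internationalisation answer.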
Q: Advice on how to be graphically creative I've always felt that my graphic design skills have lacked, but I do have a desire to improve them. Even though I'm not the worlds worst artist, it's discouraging to see the results from a professional designer, who can do an amazing mockup from a simple spec in just a few hours. I always wonder how they came up with their design and more importantly, how they executed it so quickly. I'd like to think that all good artists aren't naturally gifted. I'm guessing that a lot of skill/talent comes from just putting in the time. Is there a recommended path to right brain nirvana for someone starting from scratch, a little later in life? I'd be interested in book recommendations, personal theories, or anything else that may shed some light on the best path to take. I have questions like should I read books about color theory, should I draw any chance I have, should I analyze shapes like an architect, etc... As far as my current skills go, I can make my way around Photoshop enough where I can do simple image manipulation... Thanks for any advice A: Most of artistic talent comes from putting in the time. However, as in most skills, practicing bad habits doesn't help you progress. You need to learn basic drawing skills (form, mainly) and practice doing them well and right (which means slowly). As you practice correctly, you'll improve much faster. This is the kind of thing that changes you from a person who says, "It doesn't look right, but I can't tell why - it's just 'off' somehow" to a person who says, "Oops, the arm is a bit long. If I shorten the elbow end it'll change the piece in this way, if I shorten the hand end it'll change the piece this way..." So you've got to study the forms you intend to draw, and recognize their internally related parts (the body height is generally X times the size of the head, the arms and legs are related in size but vary from the torso, etc). Same thing with buildings, physical objects, etc. Another thing that will really help you is understanding light and shadow - humans pick up on shape relationships based on outlines and based on shadows. Color theory is something that will make your designs attractive, or evoke certain responses and emotions, but until you get the form and lighting right the colors are not something you should stress. One reason why art books and classes focus so much on monochrome drawings. There are books and classes out there for these subjects - I could recommend some, but what you really need is to look at them yourself and pick the ones that appeal to you. You won't want to learn if you don't like drawing fruit bowls, and that's all your book does. Though you shouldn't avoid what you don't like, given that you're going the self taught route you should make it easy in the beginning, and then force yourself to draw the uninteresting and bland once you've got a bit of confidence and speed so you can go through those barriers more quickly. Good luck! -Adam A: That's a difficult thing. Usually people think "artistic skills" come from your genes but actually they do not. The bests graphic designer I know have some sort of education in arts. Of course, photoshop knowledge will allow you to do things but being interested in art (painting specially) will improve your sensitivity and your "good taste". Painting is a pleasure, both doing it and seeing it. Learning to both understand and enjoy it will help and the better way to do it is by going to museums. 
I try to go to as much expositions as I can, as well as read what I can on authors and styles (Piccaso, Monet, Dali, Magritte, Expresionism, Impresionism, Cubism, etc) that will give you a general overview that WILL help. On the other side... you are a programmer so you shouldn't be in charge of actually drawing the icons or designing the enterprise logo. You should however be familiarized with user interface design, specially with ease of use, and terms as goal oriented design. Of course, in a sufficiently large company you won't be in charge of the UI design either, but it will help anyway. I'd recommend the book About Face, which centers in goal oriented design as well as going through some user interface methapores and giving some historic background for the matter. A: I'm no artist and I'm colorblind, but I have been able to do fairly well with track creation for Motocross Madness and other games of that type (http://twisteddirt.com & http://dirttwister.com). Besides being familiar with the toolset I believe it helps to bring out your inner artist. I found that the book "Drawing on the Right Side of the Brain" was an amazing eye opening experience for me. One of the tricks that it uses is for you to draw a fairly complicated picture while looking at the picture upside down. If I had drawn it while looking at it right side up it would have looked horrible. I impressed myself with what I was able to draw while copying it while it was upside down. I did this many years ago. I just looked at their website and I think I will order the updated book and check out their DVD. A: I have a BFA in Graphic Design, although I don't use it much lately. Here's my $.02. Get a copy of "Drawing on the Right Side of the Brain" and go through it. You will become a better artist/drawer as a result and I'm a firm believer that if you can't do it with pencil/paper you won't be successful on the computer. Also go to the bookstore and pick up a copy of How or one of the other publications. I maintain a subscription to How just for inspiration. I'll see if I can dig up some web links tonight for resources (although I'm sure others will provide some). most importantly, carry a sketch book and use it. Draw. Draw. Draw. A: Drawing is probably what I'd recommend the most. Whenever you have a chance, just start drawing. Keep in mind that what you draw doesn't have to be original; it's a perfectly natural learning tool to try and duplicate someone else's work. You'll learn a lot. If you look at all the great masters, they had understudies who actually did part of their masters' works, so fight that "it must be original" instinct that school's instilled in you, and get duplicating. (Just make sure you either destroy or properly label these attempts as copies--you don't want to accidentally use them later and then be accused of plagiarism..) I have a couple of friends in the animation sector, and one of them told me that while she was going through college, the way she was taught to draw the human body was to go through each body part, and draw it 100 times, each in a completely different pose. This gets you comfortable with the make-up of the object, and helps you get intimately knowledgeable about how it'll look from various positions. (That may not apply directly to what you're doing, but it should give you an indicator as to the amount of discipline that may be involved in getting to the point you seek.) Definitely put together a library of stuff that you can look to for inspiration. 
Value physical media that you can flip through over websites; it's much quicker to flip through a picture book than it is to search your bookmarks online. When it comes to getting your imagination fired up, having to meticulously click and wait repeatedly is going to be counter-productive. A: Inspiration is probably your biggest asset. Like creative writing, and even programming, looking at what people have done and how they have done will give you tools to put in your toolbox. But in the sense of graphic design (photoshop, illustrator, etc), just like programmers don't enjoy reinventing the wheel, I don't think artwork is any different. Search the web for 'pieces' that you can manipulate (vector graphics: example). Run through tutorials that can easily give you some tricks. Sketch out a very rough idea, and look through web images to find something that has already created. It's like anything else that you wish to master, or become proficient in. If you want it, you've got to practice it over, and over, and over. A: I, too was not born with a strong design skillset, in fact quite the opposite. When I started out, my philosophy was that if the page or form just works then my job was done! Over the years though, I've improved. Although I believe I'll never be as good as someone who was born with the skills, sites like CSS Zen Garden among others have helped me a lot. Read into usability too, as I think usability and design for computer applications are inextricably entwined. Books such as Don Norman's "The Design of Everyday Things" to Steve Krug's "Don't Make Me Think", have all helped improve my 'design skills'... slightly! ;-) Good luck with it. A: As I mentioned in a thread yesterday, I have found working through tutorials for Adobe Photoshop, Illustrator, InDesign, and After Effects to be very helpful. I use Adobe's Kuler site for help with colors. I think that designers spend a lot of time looking at other's designs. Some of the books out there on web site design might help, even for designing applications. Adobe TV has a lot of short videos on graphic design in general, as well as achieving particular results in one of their tools. I find these videos quite helpful.
Advice on how to be graphically creative
I've always felt that my graphic design skills have lacked, but I do have a desire to improve them. Even though I'm not the worlds worst artist, it's discouraging to see the results from a professional designer, who can do an amazing mockup from a simple spec in just a few hours. I always wonder how they came up with their design and more importantly, how they executed it so quickly. I'd like to think that all good artists aren't naturally gifted. I'm guessing that a lot of skill/talent comes from just putting in the time. Is there a recommended path to right brain nirvana for someone starting from scratch, a little later in life? I'd be interested in book recommendations, personal theories, or anything else that may shed some light on the best path to take. I have questions like should I read books about color theory, should I draw any chance I have, should I analyze shapes like an architect, etc... As far as my current skills go, I can make my way around Photoshop enough where I can do simple image manipulation... Thanks for any advice
[ "Most of artistic talent comes from putting in the time. However, as in most skills, practicing bad habits doesn't help you progress.\nYou need to learn basic drawing skills (form, mainly) and practice doing them well and right (which means slowly). As you practice correctly, you'll improve much faster.\nThis is the kind of thing that changes you from a person who says, \"It doesn't look right, but I can't tell why - it's just 'off' somehow\" to a person who says, \"Oops, the arm is a bit long. If I shorten the elbow end it'll change the piece in this way, if I shorten the hand end it'll change the piece this way...\"\nSo you've got to study the forms you intend to draw, and recognize their internally related parts (the body height is generally X times the size of the head, the arms and legs are related in size but vary from the torso, etc). Same thing with buildings, physical objects, etc.\nAnother thing that will really help you is understanding light and shadow - humans pick up on shape relationships based on outlines and based on shadows.\nColor theory is something that will make your designs attractive, or evoke certain responses and emotions, but until you get the form and lighting right the colors are not something you should stress. One reason why art books and classes focus so much on monochrome drawings.\nThere are books and classes out there for these subjects - I could recommend some, but what you really need is to look at them yourself and pick the ones that appeal to you. You won't want to learn if you don't like drawing fruit bowls, and that's all your book does. Though you shouldn't avoid what you don't like, given that you're going the self taught route you should make it easy in the beginning, and then force yourself to draw the uninteresting and bland once you've got a bit of confidence and speed so you can go through those barriers more quickly.\nGood luck!\n-Adam\n", "That's a difficult thing. Usually people think \"artistic skills\" come from your genes but actually they do not.\nThe bests graphic designer I know have some sort of education in arts. Of course, photoshop knowledge will allow you to do things but being interested in art (painting specially) will improve your sensitivity and your \"good taste\".\nPainting is a pleasure, both doing it and seeing it. Learning to both understand and enjoy it will help and the better way to do it is by going to museums. I try to go to as much expositions as I can, as well as read what I can on authors and styles (Piccaso, Monet, Dali, Magritte, Expresionism, Impresionism, Cubism, etc) that will give you a general overview that WILL help.\nOn the other side... you are a programmer so you shouldn't be in charge of actually drawing the icons or designing the enterprise logo. You should however be familiarized with user interface design, specially with ease of use, and terms as goal oriented design.\nOf course, in a sufficiently large company you won't be in charge of the UI design either, but it will help anyway. I'd recommend the book About Face, which centers in goal oriented design as well as going through some user interface methapores and giving some historic background for the matter.\n", "I'm no artist and I'm colorblind, but I have been able to do fairly well with track creation for Motocross Madness and other games of that type (http://twisteddirt.com & http://dirttwister.com). 
\nBesides being familiar with the toolset I believe it helps to bring out your inner artist.\nI found that the book \"Drawing on the Right Side of the Brain\" was an amazing eye opening experience for me.\nOne of the tricks that it uses is for you to draw a fairly complicated picture while looking at the picture upside down. If I had drawn it while looking at it right side up it would have looked horrible. I impressed myself with what I was able to draw while copying it while it was upside down.\nI did this many years ago. I just looked at their website and I think I will order the updated book and check out their DVD.\n", "I have a BFA in Graphic Design, although I don't use it much lately. Here's my $.02.\nGet a copy of \"Drawing on the Right Side of the Brain\" and go through it. You will become a better artist/drawer as a result and I'm a firm believer that if you can't do it with pencil/paper you won't be successful on the computer. Also go to the bookstore and pick up a copy of How or one of the other publications. I maintain a subscription to How just for inspiration.\nI'll see if I can dig up some web links tonight for resources (although I'm sure others will provide some).\nmost importantly, carry a sketch book and use it. Draw. Draw. Draw.\n", "Drawing is probably what I'd recommend the most. Whenever you have a chance, just start drawing. Keep in mind that what you draw doesn't have to be original; it's a perfectly natural learning tool to try and duplicate someone else's work. You'll learn a lot. If you look at all the great masters, they had understudies who actually did part of their masters' works, so fight that \"it must be original\" instinct that school's instilled in you, and get duplicating. (Just make sure you either destroy or properly label these attempts as copies--you don't want to accidentally use them later and then be accused of plagiarism..)\nI have a couple of friends in the animation sector, and one of them told me that while she was going through college, the way she was taught to draw the human body was to go through each body part, and draw it 100 times, each in a completely different pose. This gets you comfortable with the make-up of the object, and helps you get intimately knowledgeable about how it'll look from various positions.\n(That may not apply directly to what you're doing, but it should give you an indicator as to the amount of discipline that may be involved in getting to the point you seek.)\nDefinitely put together a library of stuff that you can look to for inspiration. Value physical media that you can flip through over websites; it's much quicker to flip through a picture book than it is to search your bookmarks online. When it comes to getting your imagination fired up, having to meticulously click and wait repeatedly is going to be counter-productive.\n", "Inspiration is probably your biggest asset. Like creative writing, and even programming, looking at what people have done and how they have done will give you tools to put in your toolbox.\nBut in the sense of graphic design (photoshop, illustrator, etc), just like programmers don't enjoy reinventing the wheel, I don't think artwork is any different. Search the web for 'pieces' that you can manipulate (vector graphics: example). Run through tutorials that can easily give you some tricks. Sketch out a very rough idea, and look through web images to find something that has already created.\nIt's like anything else that you wish to master, or become proficient in. 
If you want it, you've got to practice it over, and over, and over.\n", "I, too was not born with a strong design skillset, in fact quite the opposite. When I started out, my philosophy was that if the page or form just works then my job was done! \nOver the years though, I've improved. Although I believe I'll never be as good as someone who was born with the skills, sites like CSS Zen Garden among others have helped me a lot. \nRead into usability too, as I think usability and design for computer applications are inextricably entwined. Books such as Don Norman's \"The Design of Everyday Things\" to Steve Krug's \"Don't Make Me Think\", have all helped improve my 'design skills'... slightly! ;-)\nGood luck with it.\n", "As I mentioned in a thread yesterday, I have found working through tutorials for Adobe Photoshop, Illustrator, InDesign, and After Effects to be very helpful. I use Adobe's Kuler site for help with colors. I think that designers spend a lot of time looking at other's designs. Some of the books out there on web site design might help, even for designing applications. Adobe TV has a lot of short videos on graphic design in general, as well as achieving particular results in one of their tools. I find these videos quite helpful. \n" ]
[ 4, 2, 2, 1, 1, 1, 0, 0 ]
[]
[]
[ "graphics" ]
stackoverflow_0000032493_graphics.txt
Q: What protocols and servers are involved in sending an email, and what are the steps? For the past few weeks, I've been trying to learn about just how email works. I understand the process of a client receiving mail from a server using POP pretty well. I also understand how a client computer can use SMTP to ask an SMTP server to send a message. However, I'm still missing something... The way I understand it, outgoing mail has to make three trips: Client (gmail user using Thunderbird) to a server (Gmail) First server (Gmail) to second server (Hotmail) Second server (Hotmail) to second client (hotmail user using OS X Mail) As I understand it, step one uses SMTP for the client to communicate. The client authenticates itself somehow (say, with USER and PASS), and then sends a message to the gmail server. However, I don't understand how gmail server transfers the message to the hotmail server. For step three, I'm pretty sure, the hotmail server uses POP to send the message to the hotmail client (using authentication, again). So, the big question is: when I click send Mail sends my message to my gmail server, how does my gmail server forward the message to, say, a hotmail server so my friend can recieve it? Thank you so much! ~Jason Thanks, that's been helpful so far. As I understand it, the first client sends the message to the first server using SMTP, often to an address such as smtp.mail.SOMESERVER.com on port 25 (usually). Then, SOMESERVER uses SMTP again to send the message to RECEIVESERVER.com on port 25 (not smtp.mail.RECEIVESERVER.com or anything fancy). Then, when the recipient asks RECEIVESERVER for its mail, using POP, s/he recieves the message... right? Thanks again (especially to dr-jan), Jason A: The SMTP server at Gmail (which accepted the message from Thunderbird) will route the message to the final recipient. It does this by using DNS to find the MX (mail exchanger) record for the domain name part of the destination email address (hotmail.com in this example). The DNS server will return an IP address which the message should be sent to. The server at the destination IP address will hopefully be running SMTP (on the standard port 25) so it can receive the incoming messages. Once the message has been received by the hotmail server, it is stored until the appropriate user logs in and retrieves their messages using POP (or IMAP). Jason - to answer your follow up... Then, SOMESERVER uses SMTP again to send the message to RECEIVESERVER.com on port 25 (not smtp.mail.RECEIVESERVER.com or anything fancy). That's correct - the domain name to send to is taken as everything after the '@' in the email address of the recipient. Often, RECEIVESERVER.com is an alias for something more specific, say something like incoming.RECEIVESERVER.com, (or, indeed, smtp.mail.RECEIVESERVER.com). You can use nslookup to query your local DNS servers (this works in Linux and in a Windows cmd window): nslookup > set type=mx > stackoverflow.com Server: 158.155.25.16 Address: 158.155.25.16#53 Non-authoritative answer: stackoverflow.com mail exchanger = 10 aspmx.l.google.com. stackoverflow.com mail exchanger = 20 alt1.aspmx.l.google.com. stackoverflow.com mail exchanger = 30 alt2.aspmx.l.google.com. stackoverflow.com mail exchanger = 40 aspmx2.googlemail.com. stackoverflow.com mail exchanger = 50 aspmx3.googlemail.com. 
Authoritative answers can be found from: aspmx.l.google.com internet address = 64.233.183.114 aspmx.l.google.com internet address = 64.233.183.27 > This shows us that email to anyone at stackoverflow.com should be sent to one of the gmail servers shown above. The Wikipedia article mentioned (http://en.wikipedia.org/wiki/Mx_record) discusses the priority numbers shown above (10, 20, ..., 50). A: You're looking for the Mail Transfer Agent, Wikipedia has a nice article on the topic. Within Internet message handling services (MHS), a message transfer agent or mail transfer agent (MTA) or mail relay is software that transfers electronic mail messages from one computer to another using a client–server application architecture. An MTA implements both the client (sending) and server (receiving) portions of the Simple Mail Transfer Protocol. The terms mail server, mail exchanger, and MX host may also refer to a computer performing the MTA function. The Domain Name System (DNS) associates a mail server to a domain with mail exchanger (MX) resource records containing the domain name of a host providing MTA services. A: You might also be interested to know why the GMail to HotMail link uses SMTP, just like your Thunderbird client. In other words, since your client can send email via SMTP, and it can use DNS to get the MX record for hotmail.com, why doesn't it just send it there directly, skipping gmail.com altogether? There are a couple of reasons, some historical and some for security. In the original question, it was assumed that your Thunderbird client logs in with a user name and password. This is often not the case. SMTP doesn't actually require a login to send a mail. And SMTP has no way to tell who's really sending the mail. Thus, spam was born! There are, unfortunately, still many SMTP servers out there that allow anyone and everyone to connect and send mail, trusting blindly that the sender is who they claim to be. These servers are called "open relays" and are routinely black-listed by smarter administrators of other mail servers, because of the spam they churn out. Responsible SMTP server admins set up their server to accept mail for delivery only in special cases 1) the mail is coming from "its own" network, or 2) the mail is being sent to "its own" network, or 3) the user presents credentials that identifies him as a trusted sender. Case #1 is probably what happens when you send mail from work; your machine is on the trusted network, so you can send mail to anyone. A lot of corporate mail servers still don't require authentication, so you can impersonate anyone in your office. Fun! Case #2 is when someone sends you mail. And case #3 is probably what happens with your GMail example. You're not coming from a trusted network, you’re just out on the Internet with the spammers. But by using a password, you can prove to GMail that you are who you say you are. The historical aspect is that in the old days, the link between gmail and hotmail was likely to be intermittent. By queuing your mail up at a local server, you could wash your hands of it, knowing that when a link was established, the local server could transfer your messages to the remote server, which would hold the message until the recipient's agent picked it up. A: The first server will look at DNS for a MX record of Hotmail server. MX is a special record that defines a mail server for a certain domain. Knowing IP address of Hotmail server, GMail server will sent the message using SMTP protocol and will wait for an answer. 
If Hotmail server goes down, GMail server wiil try to resend the message (it will depend on server software configuration). If the process terminates ok, then ok, if not, GMail server will notify you that he wasn´t able to deliver the message. A: If you really want to know how email works you could read the SMTP RFC or the POP3 RFC. A: Step 2 to 3 (i.e. Gmail to Hotmail) would normally happen through SMTP (or ESMTP - extended SMTP). Hotmail doesn't send anything to a client via POP3. It's important to understand some of the nuances here. The client contacts Hotmail via POP3 and requests its mail. (i.e. the client initiates the discussion). A: All emails are transferred using SMTP (or ESMTP). The important thing to understand is that the when you send message to someguy@hotmail.com this message's destination is not his PC. The destination is someguy's inbox folder at hotmail.com server. After the message arrives at it's destination. The user can check if he has any new messages on his account at hotmail server and retrieve them using POP3 Also it would be possible to send the message without using gmail server, by sending it directly from your PC to hotmail using SMTP.
What protocols and servers are involved in sending an email, and what are the steps?
For the past few weeks, I've been trying to learn about just how email works. I understand the process of a client receiving mail from a server using POP pretty well. I also understand how a client computer can use SMTP to ask an SMTP server to send a message. However, I'm still missing something... The way I understand it, outgoing mail has to make three trips: Client (gmail user using Thunderbird) to a server (Gmail) First server (Gmail) to second server (Hotmail) Second server (Hotmail) to second client (hotmail user using OS X Mail) As I understand it, step one uses SMTP for the client to communicate. The client authenticates itself somehow (say, with USER and PASS), and then sends a message to the gmail server. However, I don't understand how gmail server transfers the message to the hotmail server. For step three, I'm pretty sure, the hotmail server uses POP to send the message to the hotmail client (using authentication, again). So, the big question is: when I click send Mail sends my message to my gmail server, how does my gmail server forward the message to, say, a hotmail server so my friend can recieve it? Thank you so much! ~Jason Thanks, that's been helpful so far. As I understand it, the first client sends the message to the first server using SMTP, often to an address such as smtp.mail.SOMESERVER.com on port 25 (usually). Then, SOMESERVER uses SMTP again to send the message to RECEIVESERVER.com on port 25 (not smtp.mail.RECEIVESERVER.com or anything fancy). Then, when the recipient asks RECEIVESERVER for its mail, using POP, s/he recieves the message... right? Thanks again (especially to dr-jan), Jason
[ "The SMTP server at Gmail (which accepted the message from Thunderbird) will route the message to the final recipient.\nIt does this by using DNS to find the MX (mail exchanger) record for the domain name part of the destination email address (hotmail.com in this example). The DNS server will return an IP address which the message should be sent to. The server at the destination IP address will hopefully be running SMTP (on the standard port 25) so it can receive the incoming messages.\nOnce the message has been received by the hotmail server, it is stored until the appropriate user logs in and retrieves their messages using POP (or IMAP).\nJason - to answer your follow up...\n\nThen, SOMESERVER uses SMTP again to send the message to RECEIVESERVER.com on port 25 (not smtp.mail.RECEIVESERVER.com or anything fancy).\n\nThat's correct - the domain name to send to is taken as everything after the '@' in the email address of the recipient. Often, RECEIVESERVER.com is an alias for something more specific, say something like incoming.RECEIVESERVER.com, (or, indeed, smtp.mail.RECEIVESERVER.com).\nYou can use nslookup to query your local DNS servers (this works in Linux and in a Windows cmd window):\nnslookup\n> set type=mx\n> stackoverflow.com\nServer: 158.155.25.16\nAddress: 158.155.25.16#53\n\nNon-authoritative answer:\nstackoverflow.com mail exchanger = 10 aspmx.l.google.com.\nstackoverflow.com mail exchanger = 20 alt1.aspmx.l.google.com.\nstackoverflow.com mail exchanger = 30 alt2.aspmx.l.google.com.\nstackoverflow.com mail exchanger = 40 aspmx2.googlemail.com.\nstackoverflow.com mail exchanger = 50 aspmx3.googlemail.com.\n\nAuthoritative answers can be found from:\naspmx.l.google.com internet address = 64.233.183.114\naspmx.l.google.com internet address = 64.233.183.27\n> \n\nThis shows us that email to anyone at stackoverflow.com should be sent to one of the gmail servers shown above.\nThe Wikipedia article mentioned (http://en.wikipedia.org/wiki/Mx_record) discusses the priority numbers shown above (10, 20, ..., 50).\n", "You're looking for the Mail Transfer Agent, Wikipedia has a nice article on the topic.\n\nWithin Internet message handling services (MHS), a message transfer agent or mail transfer agent (MTA) or mail relay is software that transfers electronic mail messages from one computer to another using a client–server application architecture. An MTA implements both the client (sending) and server (receiving) portions of the Simple Mail Transfer Protocol.\nThe terms mail server, mail exchanger, and MX host may also refer to a computer performing the MTA function. The Domain Name System (DNS) associates a mail server to a domain with mail exchanger (MX) resource records containing the domain name of a host providing MTA services.\n\n", "You might also be interested to know why the GMail to HotMail link uses SMTP, just like your Thunderbird client. In other words, since your client can send email via SMTP, and it can use DNS to get the MX record for hotmail.com, why doesn't it just send it there directly, skipping gmail.com altogether?\nThere are a couple of reasons, some historical and some for security. In the original question, it was assumed that your Thunderbird client logs in with a user name and password. This is often not the case. SMTP doesn't actually require a login to send a mail. And SMTP has no way to tell who's really sending the mail. 
Thus, spam was born!\nThere are, unfortunately, still many SMTP servers out there that allow anyone and everyone to connect and send mail, trusting blindly that the sender is who they claim to be. These servers are called \"open relays\" and are routinely black-listed by smarter administrators of other mail servers, because of the spam they churn out.\nResponsible SMTP server admins set up their server to accept mail for delivery only in special cases 1) the mail is coming from \"its own\" network, or 2) the mail is being sent to \"its own\" network, or 3) the user presents credentials that identifies him as a trusted sender. Case #1 is probably what happens when you send mail from work; your machine is on the trusted network, so you can send mail to anyone. A lot of corporate mail servers still don't require authentication, so you can impersonate anyone in your office. Fun! Case #2 is when someone sends you mail. And case #3 is probably what happens with your GMail example. You're not coming from a trusted network, you’re just out on the Internet with the spammers. But by using a password, you can prove to GMail that you are who you say you are.\nThe historical aspect is that in the old days, the link between gmail and hotmail was likely to be intermittent. By queuing your mail up at a local server, you could wash your hands of it, knowing that when a link was established, the local server could transfer your messages to the remote server, which would hold the message until the recipient's agent picked it up.\n", "The first server will look at DNS for a MX record of Hotmail server. MX is a special record that defines a mail server for a certain domain. Knowing IP address of Hotmail server, GMail server will sent the message using SMTP protocol and will wait for an answer. If Hotmail server goes down, GMail server wiil try to resend the message (it will depend on server software configuration). If the process terminates ok, then ok, if not, GMail server will notify you that he wasn´t able to deliver the message.\n", "If you really want to know how email works you could read the SMTP RFC or the POP3 RFC.\n", "Step 2 to 3 (i.e. Gmail to Hotmail) would normally happen through SMTP (or ESMTP - extended SMTP).\nHotmail doesn't send anything to a client via POP3. It's important to understand some of the nuances here. The client contacts Hotmail via POP3 and requests its mail. (i.e. the client initiates the discussion).\n", "All emails are transferred using SMTP (or ESMTP). \nThe important thing to understand is that the when you send message to someguy@hotmail.com this message's destination is not his PC. The destination is someguy's inbox folder at hotmail.com server. \nAfter the message arrives at it's destination. The user can check if he has any new messages on his account at hotmail server and retrieve them using POP3\nAlso it would be possible to send the message without using gmail server, by sending it directly from your PC to hotmail using SMTP. \n" ]
[ 18, 5, 5, 2, 2, 1, 1 ]
[]
[]
[ "email", "pop3", "smtp" ]
stackoverflow_0000032744_email_pop3_smtp.txt
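The first hop described above (client to its own SMTP server) can be sketched in C# with System.Net.Mail; the host name, port, and credentials below are placeholder assumptions, and the MX lookup plus server-to-server relay discussed in the answers is done later by the sending mail server, not by this client code.

using System;
using System.Net;
using System.Net.Mail;

class SmtpSendSketch
{
    static void Main()
    {
        // Hypothetical submission server and credentials -- substitute real values.
        var client = new SmtpClient("smtp.example.com", 587)
        {
            Credentials = new NetworkCredential("user@example.com", "password"),
            EnableSsl = true
        };

        var message = new MailMessage(
            "user@example.com",             // from
            "someguy@hotmail.com",          // to: the domain after '@' drives the MX lookup
            "Hello",                        // subject
            "Sent over SMTP from a client." // body
        );

        // This covers only the client-to-server leg; the receiving server is
        // found afterwards by the sending server via the recipient domain's MX record.
        client.Send(message);
        Console.WriteLine("Handed off to the submission server.");
    }
}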
Q: Do you have any recommended macros for Microsoft Visual Studio? What are some macros that you have found useful in Visual Studio for code manipulation and automation? A: This is my macro to close the solution, delete the intellisense file, and reopen the solution. Essential if you're working in native C++. Sub UpdateIntellisense() Dim solution As Solution = DTE.Solution Dim filename As String = solution.FullName Dim ncbFile As System.Text.StringBuilder = New System.Text.StringBuilder ncbFile.Append(System.IO.Path.GetDirectoryName(filename) + "\") ncbFile.Append(System.IO.Path.GetFileNameWithoutExtension(filename)) ncbFile.Append(".ncb") solution.Close(True) System.IO.File.Delete(ncbFile.ToString()) solution.Open(filename) End Sub A: This is one of the handy ones I use on HTML and XML files: ''''replaceunicodechars.vb Option Strict Off Option Explicit Off Imports EnvDTE Imports System.Diagnostics Public Module ReplaceUnicodeChars Sub ReplaceUnicodeChars() DTE.ExecuteCommand("Edit.Find") ReplaceAllChar(ChrW(8230), "&#8230;") ' ellipses ReplaceAllChar(ChrW(8220), "&#8220;") ' left double quote ReplaceAllChar(ChrW(8221), "&#8221;") ' right double quote ReplaceAllChar(ChrW(8216), "&#8216;") ' left single quote ReplaceAllChar(ChrW(8217), "&#8217;") ' right single quote ReplaceAllChar(ChrW(8211), "&#8211;") ' en dash ReplaceAllChar(ChrW(8212), "&#8212;") ' em dash ReplaceAllChar(ChrW(176), "&#176;") ' ° ReplaceAllChar(ChrW(188), "&#188;") ' ¼ ReplaceAllChar(ChrW(189), "&#189;") ' ½ ReplaceAllChar(ChrW(169), "&#169;") ' © ReplaceAllChar(ChrW(174), "&#174;") ' ® ReplaceAllChar(ChrW(8224), "&#8224;") ' dagger ReplaceAllChar(ChrW(8225), "&#8225;") ' double-dagger ReplaceAllChar(ChrW(185), "&#185;") ' ¹ ReplaceAllChar(ChrW(178), "&#178;") ' ² ReplaceAllChar(ChrW(179), "&#179;") ' ³ ReplaceAllChar(ChrW(153), "&#8482;") ' ™ ''ReplaceAllChar(ChrW(0), "&#0;") DTE.Windows.Item(Constants.vsWindowKindFindReplace).Close() End Sub Sub ReplaceAllChar(ByVal findWhat, ByVal replaceWith) DTE.Find.FindWhat = findWhat DTE.Find.ReplaceWith = replaceWith DTE.Find.Target = vsFindTarget.vsFindTargetCurrentDocument DTE.Find.MatchCase = False DTE.Find.MatchWholeWord = False DTE.Find.MatchInHiddenText = True DTE.Find.PatternSyntax = vsFindPatternSyntax.vsFindPatternSyntaxLiteral DTE.Find.ResultsLocation = vsFindResultsLocation.vsFindResultsNone DTE.Find.Action = vsFindAction.vsFindActionReplaceAll DTE.Find.Execute() End Sub End Module It's useful when you have to do any kind of data entry and want to escape everything at once. A: This is one I created which allows you to easily change the Target Framework Version of all projects in a solution: http://geekswithblogs.net/sdorman/archive/2008/07/18/visual-studio-2008-and-targetframeworkversion.aspx A: I'm using Jean-Paul Boodhoo's BDD macro. It replaces whitespace characters with underscores within the header line of a method signature. This way I can type the names of a test case, for example, as a normal sentence, hit a keyboard shortcut and I have valid method signature. A: You might want to add in code snippets as well, they help to speed up the development time and increase productivity. The standard VB code snippets come with the default installation. The C# code snippets must be downloaded and added seperately. (Link below for those) As far as macros go, I generally have not used any but the working with Visual studio 2005 book has some pretty good ones in there. 
C# Code snippets Link: http://www.codinghorror.com/blog/files/ms-csharp-snippets.7z.zip (Jeff Atwood provided the link) HIH
Do you have any recommended macros for Microsoft Visual Studio?
What are some macros that you have found useful in Visual Studio for code manipulation and automation?
[ "This is my macro to close the solution, delete the intellisense file, and reopen the solution. Essential if you're working in native C++.\nSub UpdateIntellisense()\n Dim solution As Solution = DTE.Solution\n Dim filename As String = solution.FullName\n Dim ncbFile As System.Text.StringBuilder = New System.Text.StringBuilder\n ncbFile.Append(System.IO.Path.GetDirectoryName(filename) + \"\\\")\n ncbFile.Append(System.IO.Path.GetFileNameWithoutExtension(filename))\n ncbFile.Append(\".ncb\")\n solution.Close(True)\n System.IO.File.Delete(ncbFile.ToString())\n solution.Open(filename)\nEnd Sub\n\n", "This is one of the handy ones I use on HTML and XML files:\n''''replaceunicodechars.vb\nOption Strict Off\nOption Explicit Off\nImports EnvDTE\nImports System.Diagnostics\n\nPublic Module ReplaceUnicodeChars\n\n Sub ReplaceUnicodeChars()\n DTE.ExecuteCommand(\"Edit.Find\")\n ReplaceAllChar(ChrW(8230), \"&#8230;\") ' ellipses\n ReplaceAllChar(ChrW(8220), \"&#8220;\") ' left double quote\n ReplaceAllChar(ChrW(8221), \"&#8221;\") ' right double quote\n ReplaceAllChar(ChrW(8216), \"&#8216;\") ' left single quote\n ReplaceAllChar(ChrW(8217), \"&#8217;\") ' right single quote\n ReplaceAllChar(ChrW(8211), \"&#8211;\") ' en dash\n ReplaceAllChar(ChrW(8212), \"&#8212;\") ' em dash\n ReplaceAllChar(ChrW(176), \"&#176;\") ' °\n ReplaceAllChar(ChrW(188), \"&#188;\") ' ¼\n ReplaceAllChar(ChrW(189), \"&#189;\") ' ½\n ReplaceAllChar(ChrW(169), \"&#169;\") ' ©\n ReplaceAllChar(ChrW(174), \"&#174;\") ' ®\n ReplaceAllChar(ChrW(8224), \"&#8224;\") ' dagger\n ReplaceAllChar(ChrW(8225), \"&#8225;\") ' double-dagger\n ReplaceAllChar(ChrW(185), \"&#185;\") ' ¹\n ReplaceAllChar(ChrW(178), \"&#178;\") ' ²\n ReplaceAllChar(ChrW(179), \"&#179;\") ' ³\n ReplaceAllChar(ChrW(153), \"&#8482;\") ' ™\n ''ReplaceAllChar(ChrW(0), \"&#0;\")\n\n DTE.Windows.Item(Constants.vsWindowKindFindReplace).Close()\n End Sub\n\n Sub ReplaceAllChar(ByVal findWhat, ByVal replaceWith)\n DTE.Find.FindWhat = findWhat\n DTE.Find.ReplaceWith = replaceWith\n DTE.Find.Target = vsFindTarget.vsFindTargetCurrentDocument\n DTE.Find.MatchCase = False\n DTE.Find.MatchWholeWord = False\n DTE.Find.MatchInHiddenText = True\n DTE.Find.PatternSyntax = vsFindPatternSyntax.vsFindPatternSyntaxLiteral\n DTE.Find.ResultsLocation = vsFindResultsLocation.vsFindResultsNone\n DTE.Find.Action = vsFindAction.vsFindActionReplaceAll\n DTE.Find.Execute()\n End Sub\n\nEnd Module\n\nIt's useful when you have to do any kind of data entry and want to escape everything at once.\n", "This is one I created which allows you to easily change the Target Framework Version of all projects in a solution: http://geekswithblogs.net/sdorman/archive/2008/07/18/visual-studio-2008-and-targetframeworkversion.aspx\n", "I'm using Jean-Paul Boodhoo's BDD macro. It replaces whitespace characters with underscores within the header line of a method signature. This way I can type the names of a test case, for example, as a normal sentence, hit a keyboard shortcut and I have valid method signature.\n", "You might want to add in code snippets as well, they help to speed up the development time and increase productivity.\nThe standard VB code snippets come with the default installation. The C# code snippets must be downloaded and added seperately. 
(Link below for those)\nAs far as macros go, I generally have not used any but the working with Visual studio 2005 book has some pretty good ones in there.\nC# Code snippets Link:\nhttp://www.codinghorror.com/blog/files/ms-csharp-snippets.7z.zip \n(Jeff Atwood provided the link)\nHIH\n" ]
[ 10, 5, 1, 1, 0 ]
[]
[]
[ "automation", "macros", "visual_studio" ]
stackoverflow_0000015056_automation_macros_visual_studio.txt

Q: Why does sqlite3-ruby-1.2.2 not work on OS X? I am running OS X 10.5, Ruby 1.8.6, Rails 2.1, sqlite3-ruby 1.2.2 and I get the following error when trying to rake db:migrate on an app that works fine connected to MySQL. rake aborted! no such file to load -- sqlite3/database A: Looks like there's a bug with 1.2.2. Just roll back to 1.2.1 with: gem install sqlite3-ruby -v=1.2.1 and that will fix the problem. A: Jamis has just released 1.2.4, and the comment history on that bug suggests that the fix is in 1.2.3 and later versions. As a quick test, I did the following on an OS X 10.5 box with Ruby 1.8.6: sudo gem install sqlite3-ruby (verified version number of 1.2.4) rails test (used default database.yml with sqlite3) cd test ./script/generate model Person name:string rake db:migrate Ran fine. The error would have happened when sqlite3 was required before the migration finished, so it looks like they've fixed the issue.
Why does sqlite3-ruby-1.2.2 not work on OS X?
I am running OS X 10.5, Ruby 1.8.6, Rails 2.1, sqlite3-ruby 1.2.2 and I get the following error when trying to rake db:migrate on an app that works fine connected to MySQL. rake aborted! no such file to load -- sqlite3/database
[ "Looks like there's a bug with 1.2.2. Just roll back to 1.2.1 with:\n\ngem install sqlite3-ruby -v=1.2.1\n\nand that will fix the problem.\n", "Jamis has just released 1.2.4, and the comment history on that bug suggests that the fix is in 1.2.3 and later versions. As a quick test, I did the following on an OS X 10.5 box with Ruby 1.8.6:\nsudo gem install sqlite3-ruby\n\n(verified version number of 1.2.4)\nrails test\n\n(used default database.yml with sqlite3)\ncd test\n./script/generate model Person name:string\nrake db:migrate\n\nRan fine. The error would have happened when sqlite3 was required before the migration finished, so it looks like they've fixed the issue.\n" ]
[ 2, 2 ]
[]
[]
[ "ruby", "ruby_on_rails", "sqlite" ]
stackoverflow_0000011986_ruby_ruby_on_rails_sqlite.txt
Q: .NET : Double-click event in TabControl I would like to intercept the event in a .NET Windows Forms TabControl when the user has changed tab by double-clicking the tab (instead of just single-clicking it). Do you have any idea of how I can do that? A: The MouseDoubleClick event of the TabControl seems to respond just fine to double-clicking. The only additional step I would do is set a short timer after the TabIndexChanged event to track that a new tab has been selected and ignore any double-clicks that happen outside the timer. This will prevent double-clicking on the selected tab. A: For some reason, MouseDoubleClick, as suggested by Jason Z is only firing when clicking on the tabs and clicking on the tab panel does not do anything, so that's exactly what I was looking for. A: How about subclassing the TabControl class and adding your own DoubleClick event?
.NET : Double-click event in TabControl
I would like to intercept the event in a .NET Windows Forms TabControl when the user has changed tab by double-clicking the tab (instead of just single-clicking it). Do you have any idea of how I can do that?
[ "The MouseDoubleClick event of the TabControl seems to respond just fine to double-clicking. The only additional step I would do is set a short timer after the TabIndexChanged event to track that a new tab has been selected and ignore any double-clicks that happen outside the timer. This will prevent double-clicking on the selected tab.\n", "For some reason, MouseDoubleClick, as suggested by Jason Z is only firing when clicking on the tabs and clicking on the tab panel does not do anything, so that's exactly what I was looking for.\n", "How about subclassing the TabControl class and adding your own DoubleClick event? \n" ]
[ 3, 1, 0 ]
[]
[]
[ ".net", "c#", "tabcontrol", "vb.net", "winforms" ]
stackoverflow_0000032733_.net_c#_tabcontrol_vb.net_winforms.txt
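A rough C# sketch of the timer idea from the first answer above: record when the selected tab last changed and only honour MouseDoubleClick if it arrives shortly after that change. The 500 ms window and the event wiring are assumptions, not part of the original answers.

using System;
using System.Windows.Forms;

class TabDoubleClickForm : Form
{
    private readonly TabControl tabs = new TabControl { Dock = DockStyle.Fill };
    private DateTime lastTabChange = DateTime.MinValue;

    public TabDoubleClickForm()
    {
        tabs.TabPages.Add("One");
        tabs.TabPages.Add("Two");

        // Remember when the user switched tabs (first click, keyboard, etc.).
        tabs.SelectedIndexChanged += (s, e) => lastTabChange = DateTime.Now;

        // Treat a double-click as "tab changed by double-click" only if it follows
        // a very recent tab change; otherwise it landed on the already-selected tab.
        tabs.MouseDoubleClick += (s, e) =>
        {
            if ((DateTime.Now - lastTabChange).TotalMilliseconds < 500)
                MessageBox.Show("Tab changed by double-click: " + tabs.SelectedTab.Text);
        };

        Controls.Add(tabs);
    }

    [STAThread]
    static void Main()
    {
        Application.EnableVisualStyles();
        Application.Run(new TabDoubleClickForm());
    }
}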
Q: Print a Winform/visual element All the articles I've found via google are either obsolete or contradict one another. What's the easiest way to print a form or, say, a richtextbox in c#? I think it's using the PrintDialog class by setting the Document, but how does this get converted? A: At least in VS 2008, its very easy. It took me about a couple of minutes to code the answer after reading your question. Here's where I borrowed it from: http://msdn.microsoft.com/en-us/library/6he9hz8c.aspx I tested this, and it works. A: Someone I know created a component that extends controls with a lot of properties that give you a lot of control over how the form prints. It's worth a look. MCL PrintForm Helper Component
Print a Winform/visual element
All the articles I've found via google are either obsolete or contradict one another. What's the easiest way to print a form or, say, a richtextbox in c#? I think it's using the PrintDialog class by setting the Document, but how does this get converted?
[ "At least in VS 2008, its very easy. It took me about a couple of minutes to code the answer after reading your question. Here's where I borrowed it from:\nhttp://msdn.microsoft.com/en-us/library/6he9hz8c.aspx\nI tested this, and it works.\n", "Someone I know created a component that extends controls with a lot of properties that give you a lot of control over how the form prints. It's worth a look.\nMCL PrintForm Helper Component\n" ]
[ 6, 0 ]
[]
[]
[ "c#", "winforms" ]
stackoverflow_0000005307_c#_winforms.txt
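A hedged sketch of the PrintDialog/PrintDocument approach referenced above, printing a RichTextBox's plain text with System.Drawing; it ignores RTF formatting and anything past a single page, which a real implementation (for example the EM_FORMATRANGE technique in the linked MSDN sample) would need to handle.

using System;
using System.Drawing;
using System.Drawing.Printing;
using System.Windows.Forms;

static class RichTextPrinter
{
    // Prints only the plain text of the control, single page, using its own font.
    public static void Print(RichTextBox box)
    {
        var doc = new PrintDocument();
        doc.PrintPage += (s, e) =>
        {
            e.Graphics.DrawString(box.Text, box.Font, Brushes.Black, e.MarginBounds);
            e.HasMorePages = false; // no pagination in this sketch
        };

        using (var dialog = new PrintDialog { Document = doc })
        {
            if (dialog.ShowDialog() == DialogResult.OK)
                doc.Print();
        }
    }
}

Usage would be a single call such as RichTextPrinter.Print(myRichTextBox) from a menu or button handler.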
Q: Creating System Restore Points - Thoughts? Is it "taboo" to programatically create system restore points? I would be doing this before I perform a software update. If there is a better method to create a restore point with just my software's files and data, please let me know. I would like a means by which I can get the user back to a known working state if everything goes kaput during an update (closes/kills the update app, power goes out, user pulls the plug, etc.) private void CreateRestorePoint(string description) { ManagementScope oScope = new ManagementScope("\\\\localhost\\root\\default"); ManagementPath oPath = new ManagementPath("SystemRestore"); ObjectGetOptions oGetOp = new ObjectGetOptions(); ManagementClass oProcess = new ManagementClass(oScope, oPath, oGetOp); ManagementBaseObject oInParams = oProcess.GetMethodParameters("CreateRestorePoint"); oInParams["Description"] = description; oInParams["RestorePointType"] = 12; // MODIFY_SETTINGS oInParams["EventType"] = 100; ManagementBaseObject oOutParams = oProcess.InvokeMethod("CreateRestorePoint", oInParams, null); } A: Whether it's a good idea or not really depends on how much you're doing. A full system restore point is weighty - it takes time to create, disk space to store, and gets added to the interface of restore points, possibly pushing earlier restore points out of storage. So, if your update is really only changing your application (i.e. the data it stores, the binaries that make it up, the registry entries for it), then it's not really a system level change, and I'd vote for no restore point. You can emulate the functionality by just backing up the parts you're changing, and offering a restore to backup option. My opinion is that System Restore should be to restore the system when global changes are made that might corrupt it (application install, etc). The counter argument that one should just use the system service doesn't hold water for me; I worry that, if you have to issue a number of updates to your application, the set of system restore points might get so large that important, real "system wide" updates might get pushed out, or lost in the noise. A: Is it "taboo" to programatically create system restore points? No. That's why the API is there; so that you can have pseudo-atomic updates of the system. A: No, it's not Taboo - in fact, I'd encourage it. The OS manages how much hard drive takes, and I'd put money down on Microsoft spending more money & time testing System Restore than you the money & time you're putting into testing your setup application. A: If you are developing an application for Vista you can use Transactional NTFS, which supports a similar feature to what you are looking for. http://en.wikipedia.org/wiki/Transactional_NTFS Wouldn't installer packages already include this type of rollback support, though? I'm not terribly familiar with most of them so I am not sure. Finally, Windows will typically automatically create a restore point anytime you run a setup application. A: Take a look at the following link: http://www.calumgrant.net/atomic/ The author described "Transactional Programming". This is analogous to the transactions in data bases. Example: Start transaction: Step 1 Step 2 Encounter error during step 2 Roll back to before transaction started. This is a new framework, but you can look at it more as a solution rather then using the framework. By using transactions, you get the "Restore Points" that you're looking for.
Creating System Restore Points - Thoughts?
Is it "taboo" to programatically create system restore points? I would be doing this before I perform a software update. If there is a better method to create a restore point with just my software's files and data, please let me know. I would like a means by which I can get the user back to a known working state if everything goes kaput during an update (closes/kills the update app, power goes out, user pulls the plug, etc.) private void CreateRestorePoint(string description) { ManagementScope oScope = new ManagementScope("\\\\localhost\\root\\default"); ManagementPath oPath = new ManagementPath("SystemRestore"); ObjectGetOptions oGetOp = new ObjectGetOptions(); ManagementClass oProcess = new ManagementClass(oScope, oPath, oGetOp); ManagementBaseObject oInParams = oProcess.GetMethodParameters("CreateRestorePoint"); oInParams["Description"] = description; oInParams["RestorePointType"] = 12; // MODIFY_SETTINGS oInParams["EventType"] = 100; ManagementBaseObject oOutParams = oProcess.InvokeMethod("CreateRestorePoint", oInParams, null); }
[ "Whether it's a good idea or not really depends on how much you're doing. A full system restore point is weighty - it takes time to create, disk space to store, and gets added to the interface of restore points, possibly pushing earlier restore points out of storage.\nSo, if your update is really only changing your application (i.e. the data it stores, the binaries that make it up, the registry entries for it), then it's not really a system level change, and I'd vote for no restore point. You can emulate the functionality by just backing up the parts you're changing, and offering a restore to backup option. My opinion is that System Restore should be to restore the system when global changes are made that might corrupt it (application install, etc).\nThe counter argument that one should just use the system service doesn't hold water for me; I worry that, if you have to issue a number of updates to your application, the set of system restore points might get so large that important, real \"system wide\" updates might get pushed out, or lost in the noise.\n", "\nIs it \"taboo\" to programatically create system restore points?\n\nNo. That's why the API is there; so that you can have pseudo-atomic updates of the system.\n", "No, it's not Taboo - in fact, I'd encourage it. The OS manages how much hard drive takes, and I'd put money down on Microsoft spending more money & time testing System Restore than you the money & time you're putting into testing your setup application.\n", "If you are developing an application for Vista you can use Transactional NTFS, which supports a similar feature to what you are looking for.\nhttp://en.wikipedia.org/wiki/Transactional_NTFS\nWouldn't installer packages already include this type of rollback support, though? I'm not terribly familiar with most of them so I am not sure.\nFinally, Windows will typically automatically create a restore point anytime you run a setup application.\n", "Take a look at the following link: http://www.calumgrant.net/atomic/\nThe author described \"Transactional Programming\". This is analogous to the transactions in data bases.\nExample:\nStart transaction:\n\nStep 1\nStep 2\nEncounter error during step 2\nRoll back to before transaction started.\n\nThis is a new framework, but you can look at it more as a solution rather then using the framework.\nBy using transactions, you get the \"Restore Points\" that you're looking for.\n" ]
[ 4, 4, 2, 1, 0 ]
[ "I don't think a complete system restore would be a good plan. Two reasons that quickly come to mind:\n\nWasted disk space\nUnintended consequences from a rollback\n\n" ]
[ -1 ]
[ "system_restore" ]
stackoverflow_0000032845_system_restore.txt
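For the "back up only your own files" alternative suggested in the first answer above, a small C# sketch; the folder paths are placeholders, and real code would also have to deal with files in use, permissions, and partially completed copies.

using System;
using System.IO;

static class AppBackup
{
    // Copy every file from the application folder into a timestamped backup folder.
    public static string BackUp(string appFolder)
    {
        string backupFolder = appFolder.TrimEnd('\\') + "_backup_" +
                              DateTime.Now.ToString("yyyyMMddHHmmss");
        CopyAll(appFolder, backupFolder);
        return backupFolder;
    }

    // Restore by copying the backup back over the application folder.
    public static void Restore(string backupFolder, string appFolder)
    {
        CopyAll(backupFolder, appFolder);
    }

    private static void CopyAll(string from, string to)
    {
        Directory.CreateDirectory(to);
        foreach (string file in Directory.GetFiles(from))
            File.Copy(file, Path.Combine(to, Path.GetFileName(file)), true);
        foreach (string dir in Directory.GetDirectories(from))
            CopyAll(dir, Path.Combine(to, Path.GetFileName(dir)));
    }
}

Typical use: call BackUp before applying the update, and call Restore only if the update fails partway through.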
Q: How do I use ASP.NET Login Controls when my Login.aspx is not at the root of my application? I'm using the ASP.NET Login Controls and Forms Authentication for membership/credentials for an ASP.NET web application. It keeps redirecting to a Login.aspx page at the root of my application that does not exist. My login page is within a folder. A: Use the LoginUrl property for the forms item? <authentication mode="Forms"> <forms defaultUrl="~/Default.aspx" loginUrl="~/login.aspx" timeout="1440" ></forms> </authentication> A: I found the answer at CoderSource.net. I had to put the correct path into my web.config file. <?xml version="1.0"?> <configuration> <system.web> ... <!-- The <authentication> section enables configuration of the security authentication mode used by ASP.NET to identify an incoming user. --> <authentication mode="Forms"> <forms loginUrl="~/FolderName/Login.aspx" /> </authentication> ... </system.web> ... </configuration>
How do I use ASP.NET Login Controls when my Login.aspx is not at the root of my application?
I'm using the ASP.NET Login Controls and Forms Authentication for membership/credentials for an ASP.NET web application. It keeps redirecting to a Login.aspx page at the root of my application that does not exist. My login page is within a folder.
[ "Use the LoginUrl property for the forms item?\n<authentication mode=\"Forms\">\n <forms defaultUrl=\"~/Default.aspx\" loginUrl=\"~/login.aspx\" timeout=\"1440\" ></forms>\n</authentication>\n\n", "I found the answer at CoderSource.net. I had to put the correct path into my web.config file.\n<?xml version=\"1.0\"?>\n<configuration>\n <system.web>\n ...\n <!--\n The <authentication> section enables configuration \n of the security authentication mode used by \n ASP.NET to identify an incoming user. \n -->\n <authentication mode=\"Forms\">\n <forms loginUrl=\"~/FolderName/Login.aspx\" />\n </authentication>\n ...\n </system.web>\n ...\n</configuration>\n\n" ]
[ 6, 1 ]
[]
[]
[ "asp.net", "forms_authentication" ]
stackoverflow_0000033089_asp.net_forms_authentication.txt
Q: Beginning ASP.NET MVC with VB.net 2008 Where can I find a good tutorial on learning ASP.NET MVC using VB.net 2008 as the language? Most in-depth tutorials that I found in searching the web were written in C#. A: Have you tried adding the word "VB" to your searches?? http://www.myvbprof.com/2007_Version/MVC_Intro_Tutorial.aspx http://www.asp.net/learn/mvc/tutorial-07-vb.aspx A: http://quickstarts.asp.net/3-5-extensions/mvc/default.aspx Is that relevant?
Beginning ASP.NET MVC with VB.net 2008
Where can I find a good tutorial on learning ASP.NET MVC using VB.net 2008 as the language? Most in-depth tutorials that I found in searching the web were written in C#.
[ "Have you tried adding the word \"VB\" to your searches??\nhttp://www.myvbprof.com/2007_Version/MVC_Intro_Tutorial.aspx\nhttp://www.asp.net/learn/mvc/tutorial-07-vb.aspx\n<Link>\n", "http://quickstarts.asp.net/3-5-extensions/mvc/default.aspx\nIs that relevant?\n" ]
[ 3, 0 ]
[]
[]
[ ".net_3.5", "asp.net_mvc", "model_view_controller", "vb.net", "visual_studio_2008" ]
stackoverflow_0000032642_.net_3.5_asp.net_mvc_model_view_controller_vb.net_visual_studio_2008.txt
Q: Using Virtual PC for Web Development with Oracle Is anyone using Virtual PC to maintain multiple large .NET 1.1 and 2.0 websites? Are there any lessons learned? I used Virtual PC recently with a small WinForms app and it worked great, but then everything works great with WinForms. ASP.NET development hogs way more resources, requires IIS to be running, requires a ridiculously long wait after recompilations, etc., so I'm a little concerned. And I'll also be using Oracle, if that makes any difference. Also, is there any real reason to use VM Ware instead of Virtual PC? A: I've used VirtualPCs for a few years for development of some fairly hefty web apps without much problem. Lots of RAM is important. I keep my VPCs on an external USB drive and they perform great from there. This gives me the flexibility to take the drive with me if I need to do work somewhere else... just install VPC on a host plug in the USB drive and start coding. For servers, we use VMWare and have had little to no trouble with it. Recently I went back to working on my local machine as you lose the benefit of dual monitors with VPCs, and I don't need to be as mobile as I used to. A: As long as you have the resources (separate hard disk for the virtual machine, sufficient RAM), I don't see why you would have any problems. A: Virtual PC 2007 is very fast esp on a CPU that has hardware support for VM's. 3GB RAM a must for anything not small. XP makes a good guest OS, Vista works well as a host. A: Thanks for all the answers. So RAM is the key. As far as dual monitor capability, I found that I could use dual monitors, as long as one of those monitors was my actual machine. And that was what I wanted anyway. Mike A: If you are going to be using VPCs as a server...perhaps Hyper-V (http://en.wikipedia.org/wiki/Windows_Server_Virtualization) is something to look at. Its pretty powerful, in how it lets you assign RAM / CPU Cores to a virtualized machine.
Using Virtual PC for Web Development with Oracle
Is anyone using Virtual PC to maintain multiple large .NET 1.1 and 2.0 websites? Are there any lessons learned? I used Virtual PC recently with a small WinForms app and it worked great, but then everything works great with WinForms. ASP.NET development hogs way more resources, requires IIS to be running, requires a ridiculously long wait after recompilations, etc., so I'm a little concerned. And I'll also be using Oracle, if that makes any difference. Also, is there any real reason to use VM Ware instead of Virtual PC?
[ "I've used VirtualPCs for a few years for development of some fairly hefty web apps without much problem. Lots of RAM is important. I keep my VPCs on an external USB drive and they perform great from there. This gives me the flexibility to take the drive with me if I need to do work somewhere else... just install VPC on a host plug in the USB drive and start coding.\nFor servers, we use VMWare and have had little to no trouble with it.\nRecently I went back to working on my local machine as you lose the benefit of dual monitors with VPCs, and I don't need to be as mobile as I used to.\n", "As long as you have the resources (separate hard disk for the virtual machine, sufficient RAM), I don't see why you would have any problems.\n", "Virtual PC 2007 is very fast esp on a CPU that has hardware support for VM's. 3GB RAM a must for anything not small. XP makes a good guest OS, Vista works well as a host. \n", "Thanks for all the answers. So RAM is the key. \nAs far as dual monitor capability, I found that I could use dual monitors, as long as one of those monitors was my actual machine. And that was what I wanted anyway.\nMike\n", "If you are going to be using VPCs as a server...perhaps Hyper-V (http://en.wikipedia.org/wiki/Windows_Server_Virtualization) is something to look at.\nIts pretty powerful, in how it lets you assign RAM / CPU Cores to a virtualized machine.\n" ]
[ 1, 0, 0, 0, 0 ]
[]
[]
[ "performance", "virtual_pc" ]
stackoverflow_0000031287_performance_virtual_pc.txt
Q: How do I keep my Login.aspx page's ReturnUrl parameter from overriding my ASP.NET Login control's DestinationPageUrl property? I'm using the ASP.NET Login Controls and Forms Authentication for membership/credentials for an ASP.NET web application. I've got pages such as PasswordRecovery.aspx that are accessible to only Anonymous users. When I click my login link from such a page, the login page has a ReturnUrl parameter in the address bar: http://www.example.com/Login.aspx?ReturnUrl=PasswordRecovery.aspx And then after a successful login, users are returned to the PasswordRecovery.aspx page specified in the ReturnUrl parameter to which they no longer have access. A: I found the answer on Velocity Reviews. I handled the LoggedIn event to force a redirection to the DestinationPageUrl page. Public Partial Class Login Inherits System.Web.UI.Page Protected Sub Login1_LoggedIn(ByVal sender As Object, _ ByVal e As System.EventArgs) Handles Login1.LoggedIn 'overrides ReturnUrl page parameter Response.Redirect(Login1.DestinationPageUrl) End Sub End Class
How do I keep my Login.aspx page's ReturnUrl parameter from overriding my ASP.NET Login control's DestinationPageUrl property?
I'm using the ASP.NET Login Controls and Forms Authentication for membership/credentials for an ASP.NET web application. I've got pages such as PasswordRecovery.aspx that are accessible to only Anonymous users. When I click my login link from such a page, the login page has a ReturnUrl parameter in the address bar: http://www.example.com/Login.aspx?ReturnUrl=PasswordRecovery.aspx And then after a successful login, users are returned to the PasswordRecovery.aspx page specified in the ReturnUrl parameter to which they no longer have access.
[ "I found the answer on Velocity Reviews. I handled the LoggedIn event to force a redirection to the DestinationPageUrl page.\nPublic Partial Class Login\n Inherits System.Web.UI.Page\n\n Protected Sub Login1_LoggedIn(ByVal sender As Object, _ \n ByVal e As System.EventArgs) Handles Login1.LoggedIn\n 'overrides ReturnUrl page parameter\n Response.Redirect(Login1.DestinationPageUrl)\n End Sub\n\nEnd Class\n\n" ]
[ 14 ]
[]
[]
[ "asp.net", "forms_authentication" ]
stackoverflow_0000033166_asp.net_forms_authentication.txt
Q: Looking for Regex to find quoted newlines in a big string (for C#) I have a big string (let's call it a CSV file, though it isn't actually one, it'll just be easier for now) that I have to parse in C# code. The first step of the parsing process splits the file into individual lines by just using a StreamReader object and calling ReadLine until it's through the file. However, any given line might contain a quoted (in single quotes) literal with embedded newlines. I need to find those newlines and convert them temporarily into some other kind of token or escape sequence until I've split the file into an array of lines..then I can change them back. Example input data: 1,2,10,99,'Some text without a newline', true, false, 90 2,1,11,98,'This text has an embedded newline and continues here', true, true, 90 I could write all of the C# code needed to do this by using string.IndexOf to find the quoted sections and look within them for newlines, but I'm thinking a Regex might be a better choice (i.e. now I have two problems) A: Since this isn't a true CSV file, does it have any sort of schema? From your example, it looks like you have: int, int, int, int, string , bool, bool, int With that making up your record / object. Assuming that your data is well formed (I don't know enough about your source to know how valid this assumption is); you could: Read your line. Use a state machine to parse your data. If your line ends, and you're parsing a string, read the next line..and keep parsing. I'd avoid using a regex if possible. A: State-machines for doing such a job are made easy using C# 2.0 iterators. Here's hopefully the last CSV parser I'll ever write. The whole file is treated as a enumerable bunch of enumerable strings, i.e. rows/columns. IEnumerable is great because it can then be processed by LINQ operators. public class CsvParser { public char FieldDelimiter { get; set; } public CsvParser() : this(',') { } public CsvParser(char fieldDelimiter) { FieldDelimiter = fieldDelimiter; } public IEnumerable<IEnumerable<string>> Parse(string text) { return Parse(new StringReader(text)); } public IEnumerable<IEnumerable<string>> Parse(TextReader reader) { while (reader.Peek() != -1) yield return parseLine(reader); } IEnumerable<string> parseLine(TextReader reader) { bool insideQuotes = false; StringBuilder item = new StringBuilder(); while (reader.Peek() != -1) { char ch = (char)reader.Read(); char? nextCh = reader.Peek() > -1 ? (char)reader.Peek() : (char?)null; if (!insideQuotes && ch == FieldDelimiter) { yield return item.ToString(); item.Length = 0; } else if (!insideQuotes && ch == '\r' && nextCh == '\n') //CRLF { reader.Read(); // skip LF break; } else if (!insideQuotes && ch == '\n') //LF for *nix-style line endings break; else if (ch == '"' && nextCh == '"') // escaped quotes "" { item.Append('"'); reader.Read(); // skip next " } else if (ch == '"') insideQuotes = !insideQuotes; else item.Append(ch); } // last one yield return item.ToString(); } } Note that the file is read character by character with the code deciding when newlines are to be treated as row delimiters or part of a quoted string. A: What if you got the whole file into a variable then split that based on non-quoted newlines? A: EDIT: Sorry, I've misinterpreted your post. If you're looking for a regex, then here is one: content = Regex.Replace(content, "'([^']*)\n([^']*)'", "'\1TOKEN\2'"); There might be edge cases and that two problems but I think it should be ok most of the time. 
What the Regex does is first find any pair of single quotes with a \n between them and replace that \n with TOKEN, preserving any text in between. But still, I'd go with a state machine like what @bryansh explained below.
Looking for Regex to find quoted newlines in a big string (for C#)
I have a big string (let's call it a CSV file, though it isn't actually one, it'll just be easier for now) that I have to parse in C# code. The first step of the parsing process splits the file into individual lines by just using a StreamReader object and calling ReadLine until it's through the file. However, any given line might contain a quoted (in single quotes) literal with embedded newlines. I need to find those newlines and convert them temporarily into some other kind of token or escape sequence until I've split the file into an array of lines..then I can change them back. Example input data: 1,2,10,99,'Some text without a newline', true, false, 90 2,1,11,98,'This text has an embedded newline and continues here', true, true, 90 I could write all of the C# code needed to do this by using string.IndexOf to find the quoted sections and look within them for newlines, but I'm thinking a Regex might be a better choice (i.e. now I have two problems)
[ "Since this isn't a true CSV file, does it have any sort of schema?\nFrom your example, it looks like you have:\nint, int, int, int, string , bool, bool, int\nWith that making up your record / object.\nAssuming that your data is well formed (I don't know enough about your source to know how valid this assumption is); you could:\n\nRead your line.\nUse a state machine to parse your data.\nIf your line ends, and you're parsing a string, read the next line..and keep parsing.\n\nI'd avoid using a regex if possible.\n", "State-machines for doing such a job are made easy using C# 2.0 iterators. Here's hopefully the last CSV parser I'll ever write. The whole file is treated as a enumerable bunch of enumerable strings, i.e. rows/columns. IEnumerable is great because it can then be processed by LINQ operators.\npublic class CsvParser\n{\n public char FieldDelimiter { get; set; }\n\n public CsvParser()\n : this(',')\n {\n }\n\n public CsvParser(char fieldDelimiter)\n {\n FieldDelimiter = fieldDelimiter;\n }\n\n public IEnumerable<IEnumerable<string>> Parse(string text)\n {\n return Parse(new StringReader(text));\n }\n public IEnumerable<IEnumerable<string>> Parse(TextReader reader)\n {\n while (reader.Peek() != -1)\n yield return parseLine(reader);\n }\n\n IEnumerable<string> parseLine(TextReader reader)\n {\n bool insideQuotes = false;\n StringBuilder item = new StringBuilder();\n\n while (reader.Peek() != -1)\n {\n char ch = (char)reader.Read();\n char? nextCh = reader.Peek() > -1 ? (char)reader.Peek() : (char?)null;\n\n if (!insideQuotes && ch == FieldDelimiter)\n {\n yield return item.ToString();\n item.Length = 0;\n }\n else if (!insideQuotes && ch == '\\r' && nextCh == '\\n') //CRLF\n {\n reader.Read(); // skip LF\n break;\n }\n else if (!insideQuotes && ch == '\\n') //LF for *nix-style line endings\n break;\n else if (ch == '\"' && nextCh == '\"') // escaped quotes \"\"\n {\n item.Append('\"');\n reader.Read(); // skip next \"\n }\n else if (ch == '\"')\n insideQuotes = !insideQuotes;\n else\n item.Append(ch);\n }\n // last one\n yield return item.ToString();\n }\n\n}\n\nNote that the file is read character by character with the code deciding when newlines are to be treated as row delimiters or part of a quoted string.\n", "What if you got the whole file into a variable then split that based on non-quoted newlines?\n", "EDIT: Sorry, I've misinterpreted your post. If you're looking for a regex, then here is one:\ncontent = Regex.Replace(content, \"'([^']*)\\n([^']*)'\", \"'\\1TOKEN\\2'\");\n\nThere might be edge cases and that two problems but I think it should be ok most of the time. What the Regex does is that it first finds any pair of single quotes that has \\n between it and replace that \\n with TOKEN preserving any text in-between.\nBut still, I'd go state machine like what @bryansh explained below.\n" ]
[ 3, 3, 1, 0 ]
[]
[]
[ "c#", "regex" ]
stackoverflow_0000033063_c#_regex.txt
Q: Shorthand conditional in C# similar to SQL 'in' keyword In C# is there a shorthand way to write this: public static bool IsAllowed(int userID) { return (userID == Personnel.JohnDoe || userID == Personnel.JaneDoe ...); } Like: public static bool IsAllowed(int userID) { return (userID in Personnel.JohnDoe, Personnel.JaneDoe ...); } I know I could also use switch, but there are probably 50 or so functions like this I have to write (porting a classic ASP site over to ASP.NET) so I'd like to keep them as short as possible. A: How about this? public static class Extensions { public static bool In<T>(this T testValue, params T[] values) { return values.Contains(testValue); } } Usage: Personnel userId = Personnel.JohnDoe; if (userId.In(Personnel.JohnDoe, Personnel.JaneDoe)) { // Do something } I can't claim credit for this, but I also can't remember where I saw it. So, credit to you, anonymous Internet stranger. A: How about something like this: public static bool IsAllowed(int userID) { List<int> IDs = new List<string> { 1,2,3,4,5 }; return IDs.Contains(userID); } (You could of course change the static status, initialize the IDs class in some other place, use an IEnumerable<>, etc, based on your needs. The main point is that the closest equivalent to the in operator in SQL is the Collection.Contains() function.) A: I would encapsulate the list of allowed IDs as data not code. Then it's source can be changed easily later on. List<int> allowedIDs = ...; public bool IsAllowed(int userID) { return allowedIDs.Contains(userID); } If using .NET 3.5, you can use IEnumerable instead of List thanks to extension methods. (This function shouldn't be static. See this posting: using too much static bad or good ?.) A: Are permissions user-id based? If so, you may end up with a better solution by going to role based permissions. Or you may end up having to edit that method quite frequently to add additional users to the "allowed users" list. For example, enum UserRole { User, Administrator, LordEmperor } class User { public UserRole Role{get; set;} public string Name {get; set;} public int UserId {get; set;} } public static bool IsAllowed(User user) { return user.Role == UserRole.LordEmperor; } A: A nice little trick is to sort of reverse the way you usually use .Contains(), like:- public static bool IsAllowed(int userID) { return new int[] { Personnel.JaneDoe, Personnel.JohnDoe }.Contains(userID); } Where you can put as many entries in the array as you like. If the Personnel.x is an enum you'd have some casting issues with this (and with the original code you posted), and in that case it'd be easier to use:- public static bool IsAllowed(int userID) { return Enum.IsDefined(typeof(Personnel), userID); } A: Here's the closest that I can think of: using System.Linq; public static bool IsAllowed(int userID) { return new Personnel[] { Personnel.JohnDoe, Personnel.JaneDoe }.Contains((Personnel)userID); } A: Just another syntax idea: return new [] { Personnel.JohnDoe, Personnel.JaneDoe }.Contains(userID); A: Can you write an iterator for Personnel. public static bool IsAllowed(int userID) { return (Personnel.Contains(userID)) } public bool Contains(int userID) : extends Personnel (i think that is how it is written) { foreach (int id in Personnel) if (id == userid) return true; return false; }
Shorthand conditional in C# similar to SQL 'in' keyword
In C# is there a shorthand way to write this: public static bool IsAllowed(int userID) { return (userID == Personnel.JohnDoe || userID == Personnel.JaneDoe ...); } Like: public static bool IsAllowed(int userID) { return (userID in Personnel.JohnDoe, Personnel.JaneDoe ...); } I know I could also use switch, but there are probably 50 or so functions like this I have to write (porting a classic ASP site over to ASP.NET) so I'd like to keep them as short as possible.
[ "How about this?\npublic static class Extensions\n{\n public static bool In<T>(this T testValue, params T[] values)\n {\n return values.Contains(testValue);\n }\n}\n\nUsage:\nPersonnel userId = Personnel.JohnDoe;\n\nif (userId.In(Personnel.JohnDoe, Personnel.JaneDoe))\n{\n // Do something\n}\n\nI can't claim credit for this, but I also can't remember where I saw it. So, credit to you, anonymous Internet stranger.\n", "How about something like this:\npublic static bool IsAllowed(int userID) {\n List<int> IDs = new List<string> { 1,2,3,4,5 };\n return IDs.Contains(userID);\n}\n\n(You could of course change the static status, initialize the IDs class in some other place, use an IEnumerable<>, etc, based on your needs. The main point is that the closest equivalent to the in operator in SQL is the Collection.Contains() function.)\n", "I would encapsulate the list of allowed IDs as data not code. Then it's source can be changed easily later on.\nList<int> allowedIDs = ...;\n\npublic bool IsAllowed(int userID)\n{\n return allowedIDs.Contains(userID);\n}\n\nIf using .NET 3.5, you can use IEnumerable instead of List thanks to extension methods.\n(This function shouldn't be static. See this posting: using too much static bad or good ?.)\n", "Are permissions user-id based? If so, you may end up with a better solution by going to role based permissions. Or you may end up having to edit that method quite frequently to add additional users to the \"allowed users\" list.\nFor example, \n enum UserRole {\n User, Administrator, LordEmperor\n }\nclass User {\n public UserRole Role{get; set;}\n public string Name {get; set;}\n public int UserId {get; set;}\n}\n\npublic static bool IsAllowed(User user) {\n return user.Role == UserRole.LordEmperor;\n}\n\n", "A nice little trick is to sort of reverse the way you usually use .Contains(), like:-\npublic static bool IsAllowed(int userID) {\n return new int[] { Personnel.JaneDoe, Personnel.JohnDoe }.Contains(userID);\n}\n\nWhere you can put as many entries in the array as you like.\nIf the Personnel.x is an enum you'd have some casting issues with this (and with the original code you posted), and in that case it'd be easier to use:-\npublic static bool IsAllowed(int userID) {\n return Enum.IsDefined(typeof(Personnel), userID);\n}\n\n", "Here's the closest that I can think of:\nusing System.Linq;\npublic static bool IsAllowed(int userID)\n{\n return new Personnel[]\n { Personnel.JohnDoe, Personnel.JaneDoe }.Contains((Personnel)userID);\n}\n\n", "Just another syntax idea:\nreturn new [] { Personnel.JohnDoe, Personnel.JaneDoe }.Contains(userID);\n\n", "Can you write an iterator for Personnel.\npublic static bool IsAllowed(int userID)\n{\n return (Personnel.Contains(userID))\n}\n\npublic bool Contains(int userID) : extends Personnel (i think that is how it is written)\n{\n foreach (int id in Personnel)\n if (id == userid)\n return true;\n return false;\n}\n\n" ]
[ 13, 4, 2, 1, 0, 0, 0, 0 ]
[]
[]
[ "c#", "conditional", "if_statement", "lambda" ]
stackoverflow_0000032937_c#_conditional_if_statement_lambda.txt
Q: Any pitfalls developing C#/.NET code in a VM running on a Mac? I am considering buying an Apple MacBook Pro. Are there any pitfalls developing C#/.NET code in a virtual machine running on a Mac? Also, is it better to run Vista or XP Pro for this purpose? A: I can't tell you any specific experiences since I don't have a Mac, but I did want to point out that there was an awesome episode of the DeepFriedBytes podcast that discussed this very topic. It made me want to give it a try. They discuss the pros and cons of going this route - well worth the listen IMO if this is something you're considering: Episode 5: Developing .NET Software on a Mac A: I'm developing in a Parallels VM running Windows Server 2008, and overall it is terrific. I'd highly recommend the server OS over Vista or XP if you are doing web development. Other than the keyboard issue, the one pitfall with the MacBook Pro is that the fan is extremely loud and annoying, and running a VM has in my experience tended to heat up the laptop enough to kick it on relatively frequently. However, there are utilities out there such as Coolbook to keep it from kicking on. A: XP Pro is definitely better, unless you have a really beefy Mac. Regarding your other question, no there are no pitfalls, other than performance. I prefer to use a real PC to do actual coding, using VMs for testing. Clearly, that's not an option for you within OSX. However, you do have the option of Boot Camp if the VM performance becomes an issue for you. That will also let you run Vista with no performance degradation. Bear in mind that the two virtual machine solutions for the Mac are fairly immature. I've used both, and while they are perfectly adequate for development, I've found both to be flaky, to varying degrees. Parallels seems mostly stable, but does crash and seems to have memory leaks; VMWare is beefer, and sucks more of the system's performance away by default (also seems to perform somewhat better than Parallels), but can have serious graphical problems depending on your setup, particularly if you try to use Unity mode. A: I'm developing .NET apps in a Vista VM under VMWare Fusion. Obviously you need a lot of memory, but other than not having Aero, I haven't run into any problems yet. A: I develop on my Macbook (not pro) using VMWare Fusion and WinXP. For the most part, it is a very good experience. I assign 1GB of memory, out of my 4GB, to the VM and its pretty speedy. The one major pitfall I've encountered is disk space. If you install a full VS2008 install and other tools, you can quickly eat up 30-40GB of disk. If you start using the snapshot feature or running multiple VMs, you'll eat up even more. Since I use my laptop as a primary machine and have lots of data and applications on the OSX side, I have run low on disk space with the standard 120GB drive. So, if you keep in mind the disk space issue, I think you'll find the experience quite satisfactory. A: You'd have the least problems running windows not in a VM, but for development your experience should be close to perfect with a VM. Both will give you less issues than MonoDevelop presumably, which is an entirely different CLR, compiler and a reimplementation of the framework. A: I use Parallels. I used Vista for 4 months then switched to XP. I prefer XP as it is faster. Key bindings are quirky. Using function keys while debugging in the hosted XP will trigger events in OS X, effectively popping you out. I have 3 "spaces" set up. 
One for OS X, one for XP VM, and the last for a RDC to my desktop. THIS IS BRILLIANTLY USEFUL. I can't live without spaces now. This technique actually killed my desire for a second monitor. Like Jason said, any files stored on the OS X partition will be seen as a network resource to the XP/Vista VM. So trying to run EXEs or storing web roots there cause trust issues. Studio doesn't like project web roots to be on network shares. peace|dewde http://dewde.com A: I would look into the VMWare Fusion 2 Beta to get around the quirks with the key bindings experienced by those using Parallels. Fusion will capture all key events inside the virtual machine unless you hit a special key sequence to escape from the VM. You will, however, still have to get used to some of the oddities having an Apple based keyboard layout (no backspace, etc.). Those things aside, it really is quite seamless. A: Probably better not to run vista in a VM. Especially if you want the Aero UI turned on. VMs aren't very good with advanced graphics, so you'll probably want to run XP, or Vista in classic mode. A: Not really, it should run just fine. Your dev environment will just be a little bit slower...but in my experience, it's not really all that bad. I wouldn't want it as my main machine, but it's perfectly usable. A: I don't think Kibbee advice is correct. VMware Fusion (for the mac) currently supports up to DirectX9. The Vista integration is very good. If you have any trouble, you can natively boot into your Virtual Machine (If you have set it up as a BootCamp partition on the mac). I don't see any trouble with this setup, although I would not do it myself. The only thing, that my be a problem to you, is the keyboard-layout. The mac-keyboard has a different layout to pc-keyboards. (Especially on a german mac running a german windows, some characters might be a bit harder to type). You will have to relearn some parts of the keyboard! A: I do asp.net development on a MacBook Pro, running VMWare Fusion and Vista x64. It works great for me. As someone else mentioned, the keybindings are a little weird. I usually use a full size external keyboard, which helps a lot. A: I am developing .net applications using XP Pro in VMWare Fusion and I am not finding any issues. I am not even seeing any performance issues as the hardware in the MacBook Pro is much better than the hardware I had in my previous laptop. I found that there were a few things that I had to fiddle with to make the experience the same as working on my previous laptop. I had to install Sharp Keys to be able to access the right-click/context menu key on the keyboard, which I use often when in VS. I also made sure that some of the Mac OS keyboard and mouse shortcuts were not registered in VMWare Fusion, to stop strange things happening. I just noticed that I am only allowed my VM to use 1GB of memory, maybe I should up this just a little. There are posts out there that warn about assigning too much memory to a VM. One thing that is suggested for improving performance is to run the VM on another spindle. I haven't found a suitably priced 7200rpm portable drive yet, so I can't comment on this. [Edit] I knew I had seen this somewhere, Setting Up Windows Server 2008 VMWare Virtual Machines For .Net - This is something that I have been meaning to try out, I just haven't got around to it yet. (Too much time spent reading CrackOverflow) A: For virtualization, I'd try Sun's Virtual Box. 
I use it in Windows XP and Windows Vista and it works great, I expect performance would be similar running on a Mac. As for which OS to run, I would stick with Windows XP Pro. You'll not need to dedicate as much RAM to the VM as you would if you ran Vista. A: I've been doing .NET development using Parallels for over a year now, using WinXP Pro and can't complain, it runs fast (just as it would on a regular machine) and I get the best of all worlds --> a tip, use spaces, so have Windows running in one desk and your Mac stuff on the other, and with just a keystroke you move from one side to the other, flawlessly! On the Bootcamp side, to be honest I tried for a while, but having to reboot to access my apps on Mac became annoying after some time. Just a word of advice: if you go with this option take a look at MacDrive, can't go wrong with it, as you will maintain access to your Mac partitions. Been there, done that... and I kind of like it ;)... good luck with the transition! A: Just to mention an alternative to VMWare Fusion, I'm using Parallels als a VM. Performance has not been an issue so far when I've given the VM 1 GiB of main memory. Before deciding on one VM, I'd suggest testing them all extensively. I am quite happy with Parallels but I'm not sure I wouldn't use VMWare Fusion the next time. Contrarily to what Mo said, I actually find the Mac keyboard layout much better than the Windows layout, using a Germany key binding.
Any pitfalls developing C#/.NET code in a VM running on a Mac?
I am considering buying an Apple MacBook Pro. Are there any pitfalls developing C#/.NET code in a virtual machine running on a Mac? Also, is it better to run Vista or XP Pro for this purpose?
[ "I can't tell you any specific experiences since I don't have a Mac, but I did want to point out that there was an awesome episode of the DeepFriedBytes podcast that discussed this very topic. It made me want to give it a try. They discuss the pros and cons of going this route - well worth the listen IMO if this is something you're considering: \nEpisode 5: Developing .NET Software on a Mac\n", "I'm developing in a Parallels VM running Windows Server 2008, and overall it is terrific. I'd highly recommend the server OS over Vista or XP if you are doing web development.\nOther than the keyboard issue, the one pitfall with the MacBook Pro is that the fan is extremely loud and annoying, and running a VM has in my experience tended to heat up the laptop enough to kick it on relatively frequently. However, there are utilities out there such as Coolbook to keep it from kicking on.\n", "XP Pro is definitely better, unless you have a really beefy Mac.\nRegarding your other question, no there are no pitfalls, other than performance. I prefer to use a real PC to do actual coding, using VMs for testing. Clearly, that's not an option for you within OSX. However, you do have the option of Boot Camp if the VM performance becomes an issue for you. That will also let you run Vista with no performance degradation. \nBear in mind that the two virtual machine solutions for the Mac are fairly immature. I've used both, and while they are perfectly adequate for development, I've found both to be flaky, to varying degrees. Parallels seems mostly stable, but does crash and seems to have memory leaks; VMWare is beefer, and sucks more of the system's performance away by default (also seems to perform somewhat better than Parallels), but can have serious graphical problems depending on your setup, particularly if you try to use Unity mode.\n", "I'm developing .NET apps in a Vista VM under VMWare Fusion.\nObviously you need a lot of memory, but other than not having Aero, I haven't run into any problems yet.\n", "I develop on my Macbook (not pro) using VMWare Fusion and WinXP. For the most part, it is a very good experience. I assign 1GB of memory, out of my 4GB, to the VM and its pretty speedy.\nThe one major pitfall I've encountered is disk space. If you install a full VS2008 install and other tools, you can quickly eat up 30-40GB of disk. If you start using the snapshot feature or running multiple VMs, you'll eat up even more. Since I use my laptop as a primary machine and have lots of data and applications on the OSX side, I have run low on disk space with the standard 120GB drive.\nSo, if you keep in mind the disk space issue, I think you'll find the experience quite satisfactory.\n", "You'd have the least problems running windows not in a VM, but for development your experience should be close to perfect with a VM. Both will give you less issues than MonoDevelop presumably, which is an entirely different CLR, compiler and a reimplementation of the framework.\n", "\nI use Parallels. I used Vista for 4 months then switched to XP. I prefer XP as it is faster.\nKey bindings are quirky. Using function keys while debugging in the hosted XP will trigger events in OS X, effectively popping you out.\nI have 3 \"spaces\" set up. One for OS X, one for XP VM, and the last for a RDC to my desktop. THIS IS BRILLIANTLY USEFUL. I can't live without spaces now. 
This technique actually killed my desire for a second monitor.\nLike Jason said, any files stored on the OS X partition will be seen as a network resource to the XP/Vista VM. So trying to run EXEs or storing web roots there cause trust issues. Studio doesn't like project web roots to be on network shares.\n\npeace|dewde\nhttp://dewde.com\n", "I would look into the VMWare Fusion 2 Beta to get around the quirks with the key bindings experienced by those using Parallels. Fusion will capture all key events inside the virtual machine unless you hit a special key sequence to escape from the VM. You will, however, still have to get used to some of the oddities having an Apple based keyboard layout (no backspace, etc.). Those things aside, it really is quite seamless.\n", "Probably better not to run vista in a VM. Especially if you want the Aero UI turned on. VMs aren't very good with advanced graphics, so you'll probably want to run XP, or Vista in classic mode.\n", "Not really, it should run just fine. Your dev environment will just be a little bit slower...but in my experience, it's not really all that bad. I wouldn't want it as my main machine, but it's perfectly usable.\n", "I don't think Kibbee advice is correct. VMware Fusion (for the mac) currently supports up to DirectX9. The Vista integration is very good. If you have any trouble, you can natively boot into your Virtual Machine (If you have set it up as a BootCamp partition on the mac).\nI don't see any trouble with this setup, although I would not do it myself.\nThe only thing, that my be a problem to you, is the keyboard-layout. The mac-keyboard has a different layout to pc-keyboards. (Especially on a german mac running a german windows, some characters might be a bit harder to type). You will have to relearn some parts of the keyboard!\n", "I do asp.net development on a MacBook Pro, running VMWare Fusion and Vista x64. It works great for me. \nAs someone else mentioned, the keybindings are a little weird. I usually use a full size external keyboard, which helps a lot.\n", "I am developing .net applications using XP Pro in VMWare Fusion and I am not finding any issues. I am not even seeing any performance issues as the hardware in the MacBook Pro is much better than the hardware I had in my previous laptop.\nI found that there were a few things that I had to fiddle with to make the experience the same as working on my previous laptop.\nI had to install Sharp Keys to be able to access the right-click/context menu key on the keyboard, which I use often when in VS. I also made sure that some of the Mac OS keyboard and mouse shortcuts were not registered in VMWare Fusion, to stop strange things happening.\nI just noticed that I am only allowed my VM to use 1GB of memory, maybe I should up this just a little. There are posts out there that warn about assigning too much memory to a VM.\nOne thing that is suggested for improving performance is to run the VM on another spindle. I haven't found a suitably priced 7200rpm portable drive yet, so I can't comment on this.\n[Edit] I knew I had seen this somewhere, Setting Up Windows Server 2008 VMWare Virtual Machines For .Net - This is something that I have been meaning to try out, I just haven't got around to it yet. (Too much time spent reading CrackOverflow)\n", "For virtualization, I'd try Sun's Virtual Box. I use it in Windows XP and Windows Vista and it works great, I expect performance would be similar running on a Mac.\nAs for which OS to run, I would stick with Windows XP Pro. 
You'll not need to dedicate as much RAM to the VM as you would if you ran Vista.\n", "I've been doing .NET development using Parallels for over a year now, using WinXP Pro and can't complain, it runs fast (just as it would on a regular machine) and I get the best of all worlds --> a tip, use spaces, so have Windows running in one desk and your Mac stuff on the other, and with just a keystroke you move from one side to the other, flawlessly!\nOn the Bootcamp side, to be honest I tried for a while, but having to reboot to access my apps on Mac became annoying after some time. Just a word of advice: if you go with this option take a look at MacDrive, can't go wrong with it, as you will maintain access to your Mac partitions.\nBeen there, done that... and I kind of like it ;)... good luck with the transition!\n", "Just to mention an alternative to VMWare Fusion, I'm using Parallels als a VM. Performance has not been an issue so far when I've given the VM 1 GiB of main memory. Before deciding on one VM, I'd suggest testing them all extensively. I am quite happy with Parallels but I'm not sure I wouldn't use VMWare Fusion the next time.\nContrarily to what Mo said, I actually find the Mac keyboard layout much better than the Windows layout, using a Germany key binding.\n" ]
[ 13, 2, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0 ]
[]
[]
[ ".net", "macos", "vmware" ]
stackoverflow_0000028268_.net_macos_vmware.txt
Q: In a C/C++ program, how does the system (Windows, Linux, and Mac OS X) call the main() function? I am looking for a more technical explanation than the OS calls the function. Is there a website or book? A: The .exe file (or equivalent on other platforms) contains an 'entry point' address. To a first approximation, the OS loads the relevant sections of the .EXE file into RAM, and then jumps to the entry point. As others have said, this entry point will not be 'main', but will instead be a part of the runtime library - it will do things like initialising static objects, setting up the argc and argv parameters, setting up standard input, standard output, standard error, etc. When it's done all that, it will call your main() function. When main exits, the runtime goes through an analogous process of passing your return code back to the environment, calling static destructors, calling _atexit routines, etc. If you have Microsoft tools (perhaps not the freebie ones), then you have all the runtime source, and an easy way to look at it is to put a breakpoint on the closing brace of your main() method, and single step back up into the runtime. A: main() is part of the C library and is not a system function. I don't know for OS X or Linux, but Windows usually starts a program with WinMainCRTStartup(). This symbol init your process, extract command line arguments and environment (argc, argv, end) and calls main(). It is also responsible of calling any code that should run after main(), like atexit(). By looking in your Visual Studio file, you should be able to find the default implementation of WinMainCRTStartup to see what it does. You can also define a function of your own to call at startup, this is done by changing "entry point" in the linker options. This is often a function that takes no arguments and returns a void. A: As far as Windows goes, the entry point functions are: Console: void __cdecl mainCRTStartup( void ) {} GUI: void __stdcall WinMainCRTStartup( void ) {} DLL: BOOL __stdcall _DllMainCRTStartup(HINSTANCE hinstDLL,DWORD fdwReason,void* lpReserved) {} The only reason to use these over the normal main, WinMain, and DllMain is if you wanted to use your own run time library. (If you want smaller file size or custom features.) For custom run-time implementations and other tricks to get smaller PE files, see: http://www.microsoft.com/msj/archive/S569.aspx http://www.codeproject.com/KB/tips/aggressiveoptimize.aspx http://www.catch22.net/tuts/minexe.asp http://www.hailstorm.net/papers/smallwin32.htm A: It's OS dependent. In OS X, there's a frame in the mach header that contains the start address for the EIP (instruction pointer) register. Once the binary is loaded, the OS launches execution from this address: cristi:test diciu$ otool -l ./a.out | grep -A 10 LC_UNIXTHREAD cmd LC_UNIXTHREAD cmdsize 80 flavor i386_THREAD_STATE count i386_THREAD_STATE_COUNT [..] ss 0x00000000 eflags 0x00000000 eip 0x00001f8c cs 0x00000000 [..] The address is the address of the "start" function from the binary: cristi:test diciu$ nm ./a.out 0000200c D _NXArgc 00002008 D _NXArgv 00002000 D ___progname 00001fe0 t __dyld_func_lookup 00001000 A __mh_execute_header [..] 00001f8c T start In Mac OS X, it's the "start" function that gets called first, even before the "main" function: (gdb) b start Breakpoint 1 at 0x1f90 (gdb) b main Breakpoint 2 at 0x1ff4 (gdb) r Starting program: /Users/diciu/Programming/test/a.out Reading symbols for shared libraries ++. 
done Breakpoint 1, 0x00001f90 in start () A: Expert C++/CLI (check around page 279) has very specific details of the different bootstrap scenarios for native, mixed, and pure CLR assemblies.
In a C/C++ program, how does the system (Windows, Linux, and Mac OS X) call the main() function?
I am looking for a more technical explanation than "the OS calls the function". Is there a website or book?
[ "The .exe file (or equivalent on other platforms) contains an 'entry point' address. To a first approximation, the OS loads the relevant sections of the .EXE file into RAM, and then jumps to the entry point.\nAs others have said, this entry point will not be 'main', but will instead be a part of the runtime library - it will do things like initialising static objects, setting up the argc and argv parameters, setting up standard input, standard output, standard error, etc. When it's done all that, it will call your main() function. When main exits, the runtime goes through an analogous process of passing your return code back to the environment, calling static destructors, calling _atexit routines, etc.\nIf you have Microsoft tools (perhaps not the freebie ones), then you have all the runtime source, and an easy way to look at it is to put a breakpoint on the closing brace of your main() method, and single step back up into the runtime.\n", "main() is part of the C library and is not a system function. I don't know for OS X or Linux, but Windows usually starts a program with WinMainCRTStartup(). This symbol init your process, extract command line arguments and environment (argc, argv, end) and calls main(). It is also responsible of calling any code that should run after main(), like atexit().\nBy looking in your Visual Studio file, you should be able to find the default implementation of WinMainCRTStartup to see what it does.\nYou can also define a function of your own to call at startup, this is done by changing \"entry point\" in the linker options. This is often a function that takes no arguments and returns a void.\n", "As far as Windows goes, the entry point functions are:\n\nConsole: void __cdecl mainCRTStartup( void ) {}\nGUI: void __stdcall WinMainCRTStartup( void ) {}\nDLL: BOOL __stdcall _DllMainCRTStartup(HINSTANCE hinstDLL,DWORD fdwReason,void* lpReserved) {}\n\nThe only reason to use these over the normal main, WinMain, and DllMain is if you wanted to use your own run time library. (If you want smaller file size or custom features.)\nFor custom run-time implementations and other tricks to get smaller PE files, see:\n\nhttp://www.microsoft.com/msj/archive/S569.aspx\nhttp://www.codeproject.com/KB/tips/aggressiveoptimize.aspx\nhttp://www.catch22.net/tuts/minexe.asp\nhttp://www.hailstorm.net/papers/smallwin32.htm\n\n", "It's OS dependent.\nIn OS X, there's a frame in the mach header that contains the start address for the EIP (instruction pointer) register.\nOnce the binary is loaded, the OS launches execution from this address:\n\ncristi:test diciu$ otool -l ./a.out | grep -A 10 LC_UNIXTHREAD\n cmd LC_UNIXTHREAD\n cmdsize 80\n flavor i386_THREAD_STATE\n count i386_THREAD_STATE_COUNT\n[..]\n ss 0x00000000 eflags 0x00000000 eip 0x00001f8c cs 0x00000000\n[..]\n\nThe address is the address of the \"start\" function from the binary:\n\ncristi:test diciu$ nm ./a.out\n0000200c D _NXArgc\n00002008 D _NXArgv\n00002000 D ___progname\n00001fe0 t __dyld_func_lookup\n00001000 A __mh_execute_header\n[..]\n00001f8c T start\n\nIn Mac OS X, it's the \"start\" function that gets called first, even before the \"main\" function:\n\n(gdb) b start\nBreakpoint 1 at 0x1f90\n(gdb) b main\nBreakpoint 2 at 0x1ff4\n(gdb) r\nStarting program: /Users/diciu/Programming/test/a.out \nReading symbols for shared libraries ++. 
done\n\nBreakpoint 1, 0x00001f90 in start ()\n\n", "Expert C++/CLI (check around page 279) has very specific details of the different bootstrap scenarios for native, mixed, and pure CLR assemblies.\n" ]
[ 27, 8, 6, 2, 1 ]
[]
[]
[ "c", "c++", "program_entry_point" ]
stackoverflow_0000012332_c_c++_program_entry_point.txt
Q: In SQL Server 2000, is there a sysobjects query that will retrieve user views and not system views? Assuming such a query exists, I would greatly appreciate the help. I'm trying to develop a permissions script that will grant "select" and "references" permissions on the user tables and views in a database. My hope is that executing the "grant" commands on each element in such a set will make it easier to keep permissions current when new tables and views are added to the database. A: select * from information_schema.tables WHERE OBJECTPROPERTY(OBJECT_ID(table_name),'IsMSShipped') =0 Will exclude dt_properties and system tables add where table_type = 'view' if you just want the view A: SELECT * FROM sysobjects WHERE xtype = 'V' AND type = 'V' AND category = 0 Here is a list of the possible values for xtype: C = CHECK constraint D = Default or DEFAULT constraint F = FOREIGN KEY constraint L = Log P = Stored procedure PK = PRIMARY KEY constraint (type is K) RF = Replication filter stored procedure S = System table TR = Trigger U = User table UQ = UNIQUE constraint (type is K) V = View X = Extended stored procedure Here are the possible values for type: C = CHECK constraint D = Default or DEFAULT constraint F = FOREIGN KEY constraint FN = Scalar function IF = Inlined table-function K = PRIMARY KEY or UNIQUE constraint L = Log P = Stored procedure R = Rule RF = Replication filter stored procedure S = System table TF = Table function TR = Trigger U = User table V = View X = Extended stored procedure Finally, the category field looks like it groups based on different types of objects. After analyzing the return resultset, the system views look to have a category = 2, whereas all of the user views have a category = 0. Hope this helps. For more information, visit http://msdn.microsoft.com/en-us/library/aa260447(SQL.80).aspx A: select * from information_schema.tables where table_type = 'view'
In SQL Server 2000, is there a sysobjects query that will retrieve user views and not system views?
Assuming such a query exists, I would greatly appreciate the help. I'm trying to develop a permissions script that will grant "select" and "references" permissions on the user tables and views in a database. My hope is that executing the "grant" commands on each element in such a set will make it easier to keep permissions current when new tables and views are added to the database.
[ "select * from information_schema.tables\nWHERE OBJECTPROPERTY(OBJECT_ID(table_name),'IsMSShipped') =0 \n\nWill exclude dt_properties and system tables\nadd \nwhere table_type = 'view' \n\nif you just want the view\n", "SELECT\n *\nFROM\n sysobjects\nWHERE\n xtype = 'V' AND\n type = 'V' AND\n category = 0\n\nHere is a list of the possible values for xtype:\n\nC = CHECK constraint\nD = Default or DEFAULT constraint\nF = FOREIGN KEY constraint\nL = Log\nP = Stored procedure\nPK = PRIMARY KEY constraint (type is K)\nRF = Replication filter stored procedure\nS = System table\nTR = Trigger\nU = User table\nUQ = UNIQUE constraint (type is K)\nV = View\nX = Extended stored procedure\n\nHere are the possible values for type:\n\nC = CHECK constraint\nD = Default or DEFAULT constraint\nF = FOREIGN KEY constraint\nFN = Scalar function\nIF = Inlined table-function\nK = PRIMARY KEY or UNIQUE constraint\nL = Log\nP = Stored procedure\nR = Rule\nRF = Replication filter stored procedure\nS = System table\nTF = Table function\nTR = Trigger\nU = User table\nV = View\nX = Extended stored procedure\n\nFinally, the category field looks like it groups based on different types of objects. After analyzing the return resultset, the system views look to have a category = 2, whereas all of the user views have a category = 0. Hope this helps.\nFor more information, visit http://msdn.microsoft.com/en-us/library/aa260447(SQL.80).aspx\n", "select * from information_schema.tables\nwhere table_type = 'view'\n\n" ]
[ 6, 2, 0 ]
[]
[]
[ "sql_server_2000", "sysobjects" ]
stackoverflow_0000033226_sql_server_2000_sysobjects.txt
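Neither answer reaches the step the question actually asks about: turning the list of user tables and views into GRANT statements. A rough C#/ADO.NET sketch of that step is below, building on the INFORMATION_SCHEMA query from the first answer. The connection string and the app_readers role are placeholder assumptions, the SELECT and REFERENCES permissions come from the question, and the script is only printed so it can be reviewed before being run.

using System;
using System.Data.SqlClient;
using System.Text;

class GrantScriptGenerator
{
    // Sketch: list every non-system table and view, then emit a GRANT statement for each.
    static string BuildGrantScript(string connectionString, string grantee)
    {
        const string query =
            "SELECT TABLE_NAME FROM INFORMATION_SCHEMA.TABLES " +
            "WHERE OBJECTPROPERTY(OBJECT_ID(TABLE_NAME), 'IsMSShipped') = 0";

        var script = new StringBuilder();
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(query, connection))
        {
            connection.Open();
            using (SqlDataReader reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    string objectName = reader.GetString(0);
                    script.AppendFormat("GRANT SELECT, REFERENCES ON [{0}] TO [{1}]{2}",
                                        objectName, grantee, Environment.NewLine);
                }
            }
        }
        return script.ToString();
    }

    static void Main()
    {
        // Hypothetical connection string and grantee; review the output before executing it.
        Console.WriteLine(BuildGrantScript(
            "Server=.;Database=MyDatabase;Integrated Security=SSPI;", "app_readers"));
    }
}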
Q: Is there any way to override the drag/drop or copy/paste behavior of an existing app in Windows? I would like to extend some existing applications' drag and drop behavior, and I'm wondering if there is any way to hack on drag and drop support or changes to drag and drop behavior by monitoring the app's message loop and injecting my own messages. It would also work to monitor for when a paste operation is executed, basically to create a custom behavior when a control only supports pasting text and an image is pasted. I'm thinking Detours might be my best bet, but one problem is that I would have to write custom code for each app I wanted to extend. If only Windows was designed with extensibility in mind! On another note, is there any OS that supports extensibility of this nature? A: Well, think of this from the point of view of the app designer. If you wrote an application, do you want users to be able to inject things into your application (more importantly, would you want to incur the support/revenue headache of clueless users doing this and then blaming you)? Each application's drag and drop infrastructure is written specifically for the application, not to allow you to drop anything you want onto it (potentially causing crashes and all sorts of other nasty behaviour when you drag something onto an app that simply can't handle it). Stuff like this is hard to do for a reason. It is possible to do, but it's a lot of work: you need to acquire the window handle of the thing you want to drop something onto, and then replace that window's message handler with your own. That's fraught with danger, of course, since you either have to replicate all of the existing functionality of that window yourself, or risk the app not working correctly. A: If you're willing to do in-memory diddling while the application is loaded, you could probably finagle that. But if you're looking for an easy way to just inject code you want into another window's message pump, you're not going to find it. The skills required to accomplish something like this are formidable (unless someone has wrapped all of this up in an application/library that I'm unaware of, but I doubt it). It's like clipboard hooking, writ-large: it's frowned upon, there are tons of gotchas, and you're extremely likely to introduce significant instability into your system if you don't really know what you're doing. A: Hm thats really too bad. I suppose there are sometimes reasons why apps don't exist yet. Basically what I'm trying to do is simplify the process of sending image links to people using various apps (mainly web browser text forms, but also anytime I'm editing in a terminal window) by hooking the process of pasting an image in a text context, uploading the image in the background, and pasting a url to where the image was uploaded all with a single action. Edit: I suppose the easier solution to this is to just create a new keyboard combo that is hooked by my app before it gets to any other app. There's no reason in particular that I need to tie it to copy/paste functionality.
Is there any way to override the drag/drop or copy/paste behavior of an existing app in Windows?
I would like to extend some existing applications' drag and drop behavior, and I'm wondering if there is any way to hack on drag and drop support or changes to drag and drop behavior by monitoring the app's message loop and injecting my own messages. It would also work to monitor for when a paste operation is executed, basically to create a custom behavior when a control only supports pasting text and an image is pasted. I'm thinking Detours might be my best bet, but one problem is that I would have to write custom code for each app I wanted to extend. If only Windows was designed with extensibility in mind! On another note, is there any OS that supports extensibility of this nature?
[ "Well, think of this from the point of view of the app designer. If you wrote an application, do you want users to be able to inject things into your application (more importantly, would you want to incur the support/revenue headache of clueless users doing this and then blaming you)? Each application's drag and drop infrastructure is written specifically for the application, not to allow you to drop anything you want onto it (potentially causing crashes and all sorts of other nasty behaviour when you drag something onto an app that simply can't handle it). Stuff like this is hard to do for a reason.\nIt is possible to do, but it's a lot of work: you need to acquire the window handle of the thing you want to drop something onto, and then replace that window's message handler with your own. That's fraught with danger, of course, since you either have to replicate all of the existing functionality of that window yourself, or risk the app not working correctly.\n", "If you're willing to do in-memory diddling while the application is loaded, you could probably finagle that.\nBut if you're looking for an easy way to just inject code you want into another window's message pump, you're not going to find it. The skills required to accomplish something like this are formidable (unless someone has wrapped all of this up in an application/library that I'm unaware of, but I doubt it). It's like clipboard hooking, writ-large: it's frowned upon, there are tons of gotchas, and you're extremely likely to introduce significant instability into your system if you don't really know what you're doing.\n", "Hm thats really too bad. I suppose there are sometimes reasons why apps don't exist yet. Basically what I'm trying to do is simplify the process of sending image links to people using various apps (mainly web browser text forms, but also anytime I'm editing in a terminal window) by hooking the process of pasting an image in a text context, uploading the image in the background, and pasting a url to where the image was uploaded all with a single action.\nEdit: I suppose the easier solution to this is to just create a new keyboard combo that is hooked by my app before it gets to any other app. There's no reason in particular that I need to tie it to copy/paste functionality.\n" ]
[ 0, 0, 0 ]
[]
[]
[ "detours", "windows" ]
stackoverflow_0000033113_detours_windows.txt
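A minimal C# (WinForms) sketch of the hotkey workaround the asker settles on in the final answer above: register a global key combination, and when it fires, take the image off the clipboard, hand it to an upload routine, and put the returned URL back on the clipboard as plain text. The hotkey choice and the UploadImage routine are placeholders/assumptions, not something from the original thread.

using System;
using System.Drawing;
using System.Runtime.InteropServices;
using System.Windows.Forms;

class HotkeyUploader : Form
{
    [DllImport("user32.dll")]
    static extern bool RegisterHotKey(IntPtr hWnd, int id, uint fsModifiers, uint vk);

    [DllImport("user32.dll")]
    static extern bool UnregisterHotKey(IntPtr hWnd, int id);

    const int WM_HOTKEY = 0x0312;
    const uint MOD_CONTROL = 0x0002, MOD_SHIFT = 0x0004;
    const int HOTKEY_ID = 1;

    protected override void OnHandleCreated(EventArgs e)
    {
        base.OnHandleCreated(e);
        // Ctrl+Shift+U is an arbitrary choice for this sketch.
        RegisterHotKey(Handle, HOTKEY_ID, MOD_CONTROL | MOD_SHIFT, (uint)Keys.U);
    }

    protected override void WndProc(ref Message m)
    {
        if (m.Msg == WM_HOTKEY && (int)m.WParam == HOTKEY_ID && Clipboard.ContainsImage())
        {
            Image img = Clipboard.GetImage();
            string url = UploadImage(img);   // hypothetical upload routine
            Clipboard.SetText(url);          // after this, Ctrl+V pastes the URL anywhere
        }
        base.WndProc(ref m);
    }

    protected override void OnFormClosed(FormClosedEventArgs e)
    {
        UnregisterHotKey(Handle, HOTKEY_ID);
        base.OnFormClosed(e);
    }

    static string UploadImage(Image img)
    {
        // Placeholder: POST the image to whatever host you use and return the link it gives back.
        return "http://example.com/uploaded.png";
    }

    [STAThread]
    static void Main() { Application.Run(new HotkeyUploader()); }
}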
Q: Does ReadUncommitted imply NoLock When writing a SQL statement in SQL Server 2005, does the READUNCOMMITTED query hint imply NOLOCK or do I have to specify it manually too? So is: With (NoLock, ReadUnCommitted) the same as: With (ReadUnCommitted) A: According to Kalen Delaney... The NOLOCK hint has nothing to do with the index options. The hint tells SQL Server not to request locks when doing SELECT operations, so there will be no conflict with data that is already locked. The index options just tell SQL Server that this level of locking is allowed, when locking is going to occur. For example, if ALLOW_ROW_LOCKS was off, the only possible locks would be page or table locks. The index options don't force locks to be held, they just control the possible size of the locks. In answer to the question in your subject, the NOLOCK hint and the READUNCOMMITTED hint are equivalent. A: Yes they are one and the same
Does ReadUncommitted imply NoLock
When writing a SQL statement in SQL Server 2005, does the READUNCOMMITTED query hint imply NOLOCK or do I have to specify it manually too? So is: With (NoLock, ReadUnCommitted) the same as: With (ReadUnCommitted)
[ "According to Kalen Delaney...\nThe NOLOCK hint has nothing to do with the index options. The hint tells SQL\nServer not to request locks when doing SELECT operations, so there will be\nno conflict with data that is already locked. The index options just tell\nSQL Server that this level of locking is allowed, when locking is going to\noccur. For example, if ALLOW_ROW_LOCKS was off, the only possible locks\nwould be page or table locks. The index options don't force locks to be\nheld, they just control the possible size of the locks.\nIn answer to the question in your subject, the NOLOCK hint and the\nREADUNCOMMITTED hint are equivalent.\n", "Yes they are one and the same\n" ]
[ 1, 1 ]
[ "I think you can say that\nReadUnCommitted has the abilities of NoLock\nHowever you cannot say that\nNoLock has the abilities of ReadUnCommitted\n" ]
[ -1 ]
[ "nolock", "sql_server_2005" ]
stackoverflow_0000032550_nolock_sql_server_2005.txt
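To make the equivalence concrete, here is a small ADO.NET sketch that runs the same SELECT once with the NOLOCK hint and once with READUNCOMMITTED; SQL Server treats the two table hints identically, so both queries read without taking shared locks. The connection string and table name are placeholders.

using System;
using System.Data.SqlClient;

class HintDemo
{
    static void Main()
    {
        // Placeholder connection string and table name.
        using (var conn = new SqlConnection("Server=.;Database=MyDb;Integrated Security=true"))
        {
            conn.Open();

            // NOLOCK and READUNCOMMITTED are synonyms as table hints: neither takes
            // shared locks, and both can return uncommitted ("dirty") rows.
            foreach (string hint in new[] { "NOLOCK", "READUNCOMMITTED" })
            {
                using (var cmd = new SqlCommand(
                    "SELECT COUNT(*) FROM dbo.Orders WITH (" + hint + ")", conn))
                {
                    Console.WriteLine("{0}: {1}", hint, cmd.ExecuteScalar());
                }
            }
        }
    }
}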
Q: What's an alternative to GWL_USERDATA for storing an object pointer? In the Windows applications I work on, we have a custom framework that sits directly above Win32 (don't ask). When we create a window, our normal practice is to put this in the window's user data area via SetWindowLong(hwnd, GWL_USERDATA, this), which allows us to have an MFC-like callback or a tightly integrated WndProc, depending. The problem is that this will not work on 64-bit Windows, since LONG is only 32-bits wide. What's a better solution to this problem that works on both 32- and 64-bit systems? A: SetWindowLongPtr was created to replace SetWindowLong in these instances. Its LONG_PTR parameter allows you to store a pointer for 32-bit or 64-bit compilations. LONG_PTR SetWindowLongPtr( HWND hWnd, int nIndex, LONG_PTR dwNewLong ); Remember that the constants have changed too, so usage now looks like: SetWindowLongPtr(hWnd, GWLP_USERDATA, this); Also don't forget that now to retrieve the pointer, you must use GetWindowLongPtr: LONG_PTR GetWindowLongPtr( HWND hWnd, int nIndex ); And usage would look like (again, with changed constants): LONG_PTR lpUserData = GetWindowLongPtr(hWnd, GWLP_USERDATA); MyObject* pMyObject = (MyObject*)lpUserData; A: The other alternative is SetProp/RemoveProp (When you are subclassing a window that already uses GWLP_USERDATA) Another good alternative is ATL style thunking of the WNDPROC, for more info on that, see http://www.ragestorm.net/blogs/?cat=20 http://www.hackcraft.net/cpp/windowsThunk/
What's an alternative to GWL_USERDATA for storing an object pointer?
In the Windows applications I work on, we have a custom framework that sits directly above Win32 (don't ask). When we create a window, our normal practice is to put this in the window's user data area via SetWindowLong(hwnd, GWL_USERDATA, this), which allows us to have an MFC-like callback or a tightly integrated WndProc, depending. The problem is that this will not work on 64-bit Windows, since LONG is only 32-bits wide. What's a better solution to this problem that works on both 32- and 64-bit systems?
[ "SetWindowLongPtr was created to replace SetWindowLong in these instances. It's LONG_PTR parameter allows you to store a pointer for 32-bit or 64-bit compilations.\nLONG_PTR SetWindowLongPtr( \n HWND hWnd,\n int nIndex,\n LONG_PTR dwNewLong\n);\n\nRemember that the constants have changed too, so usage now looks like:\nSetWindowLongPtr(hWnd, GWLP_USERDATA, this);\n\nAlso don't forget that now to retrieve the pointer, you must use GetWindowLongPtr:\nLONG_PTR GetWindowLongPtr( \n HWND hWnd,\n int nIndex\n);\n\nAnd usage would look like (again, with changed constants):\nLONG_PTR lpUserData = GetWindowLongPtr(hWnd, GWLP_USERDATA);\nMyObject* pMyObject = (MyObject*)lpUserData;\n\n", "The other alternative is SetProp/RemoveProp (When you are subclassing a window that already uses GWLP_USERDATA)\nAnother good alternative is ATL style thunking of the WNDPROC, for more info on that, see\n\nhttp://www.ragestorm.net/blogs/?cat=20\nhttp://www.hackcraft.net/cpp/windowsThunk/\n\n" ]
[ 43, 12 ]
[]
[]
[ "32bit_64bit", "winapi", "windows" ]
stackoverflow_0000023083_32bit_64bit_winapi_windows.txt
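The same idea expressed in C# via P/Invoke, in case the wrapper layer is managed: a GCHandle keeps the object alive and its pointer-sized value goes into GWLP_USERDATA. This assumes a 64-bit process (only 64-bit user32 exports SetWindowLongPtr/GetWindowLongPtr; a 32-bit build would fall back to SetWindowLong/GetWindowLong), and the window handle comes from wherever your framework creates the HWND.

using System;
using System.Runtime.InteropServices;

static class WindowUserData
{
    const int GWLP_USERDATA = -21;

    // 64-bit user32.dll exports these entry points; a 32-bit build would need
    // SetWindowLong/GetWindowLong instead.
    [DllImport("user32.dll", EntryPoint = "SetWindowLongPtr")]
    static extern IntPtr SetWindowLongPtr(IntPtr hWnd, int nIndex, IntPtr dwNewLong);

    [DllImport("user32.dll", EntryPoint = "GetWindowLongPtr")]
    static extern IntPtr GetWindowLongPtr(IntPtr hWnd, int nIndex);

    public static void Attach(IntPtr hwnd, object instance)
    {
        // Keep a reference to the managed object alive and stash its handle value in the window.
        GCHandle gch = GCHandle.Alloc(instance);
        SetWindowLongPtr(hwnd, GWLP_USERDATA, GCHandle.ToIntPtr(gch));
    }

    public static object Retrieve(IntPtr hwnd)
    {
        IntPtr raw = GetWindowLongPtr(hwnd, GWLP_USERDATA);
        return raw == IntPtr.Zero ? null : GCHandle.FromIntPtr(raw).Target;
    }

    public static void Detach(IntPtr hwnd)
    {
        IntPtr raw = GetWindowLongPtr(hwnd, GWLP_USERDATA);
        if (raw != IntPtr.Zero)
        {
            GCHandle.FromIntPtr(raw).Free();   // release the reference when the window goes away
            SetWindowLongPtr(hwnd, GWLP_USERDATA, IntPtr.Zero);
        }
    }
}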
Q: How do I best handle role based permissions using Forms Authentication on my ASP.NET web application? I'm using the ASP.NET Login Controls and Forms Authentication for membership/credentials for an ASP.NET web application. I've got two roles: Users Administrators I want pages to be viewable by four different groups: Everyone (Default, Help) Anonymous (CreateUser, Login, PasswordRecovery) Users (ChangePassword, DataEntry) Administrators (Report) Expanding on the example in the ASP.NET HOW DO I Video Series: Membership and Roles, I've put those page files into such folders: And I used the ASP.NET Web Site Administration Tool to set up access rules for each folder. It works but seems kludgy to me and it creates issues when Login.aspx is not at the root and with the ReturnUrl parameter of Login.aspx. Is there a better way to do this? Is there perhaps a simple way I can set permissions at the page level rather than at the folder level? A: A couple solutions off the top of my head. You could set up restrictions for each page in your web.config file. This would allow you to have whatever folder hierarchy you wish to use. However, it will require that you keep the web.config file up to date whenever you add additional pages. The nice part of having the folder structure determine accessibility is that you don't have to think about it when you add in new pages. Have your pages inherit from custom classes (i.e. EveryonePage, UserPage, AdminPage, etc.) and put a role check in the Page_Load routine. A: One solution I've used in the past is this: Create a base page called 'SecurePage' or something to that effect. Add a property 'AllowedUserRoles' to the base page that is a generic list of user roles, e.g. List<int> where int is the role id. In the Page_Load event of any page extending SecurePage you add each allowed user role to the AllowedUserRoles property. In the base page override OnLoad() and check if the current user has one of the roles listed in AllowedUserRoles. This allows each page to be customized without you having to put tons of stuff in your web.config to control each page. A: In the master page I define a public property that toggles security checking, defaulted to true. I also declare a string that is a ; delimited list of roles needed for that page. in the page load of my master page I do the following if (_secure) { if (Request.IsAuthenticated) { if (_role.Length > 0) { if (PortalSecurity.IsInRoles(_role)) { return; } else { accessDenied = true; } } else { return; } } } //do whatever you wanna do to people who dont have access.. bump to a login page or whatever also you'll have to put a @ MasterType directive at the top of your pages so you can access the extended properties of your master page
How do I best handle role based permissions using Forms Authentication on my ASP.NET web application?
I'm using the ASP.NET Login Controls and Forms Authentication for membership/credentials for an ASP.NET web application. I've got two roles: Users Administrators I want pages to be viewable by four different groups: Everyone (Default, Help) Anonymous (CreateUser, Login, PasswordRecovery) Users (ChangePassword, DataEntry) Administrators (Report) Expanding on the example in the ASP.NET HOW DO I Video Series: Membership and Roles, I've put those page files into such folders: And I used the ASP.NET Web Site Administration Tool to set up access rules for each folder. It works but seems kludgy to me and it creates issues when Login.aspx is not at the root and with the ReturnUrl parameter of Login.aspx. Is there a better way to do this? Is there perhaps a simple way I can set permissions at the page level rather than at the folder level?
[ "A couple solutions off the top of my head.\n\nYou could set up restrictions for each page in your web.config file. This would allow you to have whatever folder hierarchy you wish to use. However, it will require that you keep the web.config file up to date whenever you add additional pages. The nice part of having the folder structure determine accessibility is that you don't have to think about it when you add in new pages.\nHave your pages inherit from custom classes (i.e. EveryonePage, UserPage, AdminPage, etc.) and put a role check in the Page_Load routine.\n\n", "One solution I've used in the past is this:\n\nCreate a base page called 'SecurePage' or something to that effect.\nAdd a property 'AllowedUserRoles' to the base page that is a generic list of user roles List or List where int is the role id.\nIn the Page_Load event of any page extending SecurePage you add each allowed user role to the AllowedUserroles property.\nIn the base page override OnLoad() and check if the current user has one of the roles listed in AllowedUserRoles.\n\nThis allows each page to be customized without you having to put tons of stuff in your web.config to control each page.\n", "In the master page I define a public property that toggles security checking, defaulted to true. I also declare a string that is a ; delimited list of roles needed for that page.\nin the page load of my master page I do the following\nif (_secure)\n{\n if (Request.IsAuthenticated)\n {\n if (_role.Length > 0)\n {\n if (PortalSecurity.IsInRoles(_role))\n {\n return;\n }\n else\n {\n accessDenied = true;\n }\n }\n else\n {\n return;\n }\n }\n}\n\n//do whatever you wanna do to people who dont have access.. bump to a login page or whatever\n\nalso you'll have to put\nat the top of your pages so you can access the extended properties of your master page\n" ]
[ 1, 1, 1 ]
[]
[]
[ "asp.net", "forms_authentication" ]
stackoverflow_0000033263_asp.net_forms_authentication.txt
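A compact sketch of the base-page idea from the second answer above: each page declares the roles allowed to view it, and the shared base class enforces the check before the page loads. The role name and the access-denied redirect target are placeholders.

using System;
using System.Collections.Generic;
using System.Web.UI;

// Any page that inherits from SecurePage gets the role check for free.
public class SecurePage : Page
{
    protected List<string> AllowedUserRoles = new List<string>();

    protected override void OnLoad(EventArgs e)
    {
        bool allowed = false;
        foreach (string role in AllowedUserRoles)
            if (User.IsInRole(role)) { allowed = true; break; }

        // An empty list means the page is unrestricted.
        if (AllowedUserRoles.Count > 0 && !allowed)
            Response.Redirect("~/AccessDenied.aspx");   // placeholder destination

        base.OnLoad(e);
    }
}

// Example: only Administrators may open the report page.
public partial class Report : SecurePage
{
    protected void Page_Init(object sender, EventArgs e)
    {
        AllowedUserRoles.Add("Administrators");
    }
}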
Q: In Exchange (2003) how do I list the forms used in the public folder tree? Does anyone have a script or can anyone point me in the right direction to query an Exchange (2003) public folder tree to determine all the forms being used ? I need an output of form object/display name and count for each form type in each folder. A: You can give Redemption a try. It should be suitable for what you need.
In Exchange (2003) how do I list the forms used in the public folder tree?
Does anyone have a script or can anyone point me in the right direction to query an Exchange (2003) public folder tree to determine all the forms being used ? I need an output of form object/display name and count for each form type in each folder.
[ "You can give Redemption a try. It should be suitable for what you need.\n" ]
[ 1 ]
[]
[]
[ "exchange_server", "reporting", "scripting" ]
stackoverflow_0000030788_exchange_server_reporting_scripting.txt
Q: Best use pattern for a DataContext What's the best lifetime model for a DataContext? Should I just create a new one whenever I need it (aka, function level), should I keep one available in each class that would use it (class level), or should I create a static class with a static DataContext (app-domain level)? Are there any considered best practices on this? A: You pretty much need to keep the same data context available throughout the lifetime of the operations you want to perform if you're ever going to be storing changes which are to be .SubmitChanges()'d later, as otherwise you will lose those changes. If you're just querying stuff then it's fine to create them as needed, but then if later you want to .SubmitChanges() you'll have to refactor your code a lot, so you may as well adopt the pattern of effectively keeping the datacontext global throughout your app from the beginning. Note the data context is disconnected. The connection is only made when the query data is enumerated (not when you first run the query, it's a 'lazy' data type so only provides data when it's needed), and then closed immediately afterwards. On .SubmitChanges() the connection is opened to submit the changes then closed immediately afterwards. So don't think keeping the datacontext around keeps a connection open, it doesn't (you can hook the StateChange event of the connection to confirm this for yourself, that's how I'm sure). There is a great article over at Rick Strahl's Blog which covers this topic in depth, far more than my answer here provides!! A: I think Jeff Atwood talked about this in the Herding Code podcast, when he was questioned about the exact same thing. Listen to it towards the last 15-20 minutes or so. I think in SO, the datacontext is created in the Controller class. Not sure about a lot of details here. But that's what it looked like.
Best use pattern for a DataContext
What's the best lifetime model for a DataContext? Should I just create a new one whenever I need it (aka, function level), should I keep one available in each class that would use it (class level), or should I create a static class with a static DataContext (app-domain level)? Are there any considered best practices on this?
[ "You pretty much need to keep the same data context available throughout the lifetime of the operations you want to perform if you're ever going to be storing changes which are to be .SubmitChanges()'d later, as otherwise you will lose those changes.\nIf you're just querying stuff then it's fine to create them as needed, but then if later you want to .SubmitChanges() you'll have to refactor your code a lot, so you may as well adopt the pattern of effectively keeping the datacontext global throughout your app from the beginning.\nNote the data context is disconnected. The connection is only made when the query data is enumerated (not when you first run the query, it's a 'lazy' data type so only provides data when it's needed), and then closed immediately afterwards. On .SubmitChanges() the connection is opened to submit the changes then closed immediately afterwards. So don't think keeping the datacontext around keeps a connection open, it doesn't (you can hook the StateChange event of the connection to confirm this for yourself, that's how I'm sure).\nThere is a great article over at Rick Strahl's Blog which covers this topic in depth, far more than my answer here provides!!\n", "I think Jeff Atwood talked about this in the Herding Code podcast, when he was questioned about the exact same thing. Listen to it towards the last 15-20 minutes or so.\nI think in SO, the datacontext is created in the Controller class. Not sure about a lot of details here. But that's what it looked like.\n" ]
[ 5, 0 ]
[]
[]
[ ".net_3.5", "linq_to_sql" ]
stackoverflow_0000033390_.net_3.5_linq_to_sql.txt
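A small sketch of the "one DataContext per unit of work" shape described above: create the context at the start of an operation, use it for every query and pending change within that operation, call SubmitChanges once, then let it go. NorthwindDataContext and its tables are placeholder names for whatever your DBML designer generates.

using System;
using System.Linq;

class OrderService
{
    // One unit of work: the same context tracks both the query result and the update,
    // so SubmitChanges knows exactly what changed.
    public void MarkOrderShipped(int orderId)
    {
        using (var db = new NorthwindDataContext())   // placeholder context type
        {
            var order = db.Orders.Single(o => o.OrderID == orderId);
            order.ShippedDate = DateTime.UtcNow;
            db.SubmitChanges();   // the connection opens and closes around this call
        }
    }

    // Read-only work can use a short-lived context of its own.
    public int CountPendingOrders()
    {
        using (var db = new NorthwindDataContext())
        {
            return db.Orders.Count(o => o.ShippedDate == null);
        }
    }
}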
Q: What's the false operator in C# good for? There are two weird operators in C#: the true operator the false operator If I understand this right these operators can be used in types which I want to use instead of a boolean expression and where I don't want to provide an implicit conversion to bool. Let's say I have a following class: public class MyType { public readonly int Value; public MyType(int value) { Value = value; } public static bool operator true (MyType mt) { return mt.Value > 0; } public static bool operator false (MyType mt) { return mt.Value < 0; } } So I can write the following code: MyType mTrue = new MyType(100); MyType mFalse = new MyType(-100); MyType mDontKnow = new MyType(0); if (mTrue) { // Do something. } while (mFalse) { // Do something else. } do { // Another code comes here. } while (mDontKnow) However for all the examples above only the true operator is executed. So what's the false operator in C# good for? Note: More examples can be found here, here and here. A: You can use it to override the && and || operators. The && and || operators can't be overridden, but if you override |, &, true and false in exactly the right way the compiler will call | and & when you write || and &&. For example, look at this code (from http://ayende.com/blog/1574/nhibernate-criteria-api-operator-overloading - where I found out about this trick; archived version by @BiggsTRC): public static AbstractCriterion operator &(AbstractCriterion lhs, AbstractCriterion rhs) { return new AndExpression(lhs, rhs); } public static AbstractCriterion operator |(AbstractCriterion lhs, AbstractCriterion rhs) { return new OrExpression(lhs, rhs); } public static bool operator false(AbstractCriterion criteria) { return false; } public static bool operator true(AbstractCriterion criteria) { return false; } This is obviously a side effect and not the way it's intended to be used, but it is useful. A: Shog9 and Nir: thanks for your answers. Those answers pointed me to Steve Eichert article and it pointed me to msdn: The operation x && y is evaluated as T.false(x) ? x : T.&(x, y), where T.false(x) is an invocation of the operator false declared in T, and T.&(x, y) is an invocation of the selected operator &. In other words, x is first evaluated and operator false is invoked on the result to determine if x is definitely false. Then, if x is definitely false, the result of the operation is the value previously computed for x. Otherwise, y is evaluated, and the selected operator & is invoked on the value previously computed for x and the value computed for y to produce the result of the operation. A: The page you link to http://msdn.microsoft.com/en-us/library/6x6y6z4d.aspx says what they were for, which was a way of handling nullable bools before nullable value types were introduced. I'd guess nowadays they're good for the same sort of stuff as ArrayList - i.e. absolutely nothing. A: AFAIK, it would be used in a test for false, such as when the && operator comes into play. Remember, && short-circuits, so in the expression if ( mFalse && mTrue) { // ... something } mFalse.false() is called, and upon returning true the expression is reduced to a call to 'mFalse.true()' (which should then return false, or things will get weird). Note that you must implement the & operator in order for that expression to compile, since it's used if mFalse.false() returns false. A: It appears from the MSDN article you linked to it was provided to allow for nullable boolean types prior to the Nullable (i.e. int?, bool?, etc.) 
type being introduced into the language in C#2. Thus you would store an internal value indicating whether the value is true or false or null, i.e. in your example >0 for true, <0 for false and ==0 for null, and then you'd get SQL-style null semantics. You would also have to implement a .IsNull method or property in order that nullity could be checked explicitly. Comparing to SQL, imagine a table Table with 3 rows with value Foo set to true, 3 rows with value Foo set to false and 3 rows with value Foo set to null. SELECT COUNT(*) FROM Table WHERE Foo = TRUE OR Foo = FALSE 6 In order to count all rows you'd have to do the following:- SELECT COUNT(*) FROM Table WHERE Foo = TRUE OR Foo = FALSE OR Foo IS NULL 9 This 'IS NULL' syntax would have equivalent code in your class as .IsNull(). LINQ makes the comparison to C# even clearer:- int totalCount = (from s in MyTypeEnumerable where s || !s select s).Count(); Imagining that MyTypeEnumerable has exactly the same contents as the database, i.e. 3 values equal to true, 3 values equal to false and 3 values equal to null. In this case totalCount would evaluate to 6. However, if we re-wrote the code as:- int totalCount = (from s in MyTypeEnumerable where s || !s || s.IsNull() select s).Count(); Then totalCount would evaluate to 9. The DBNull example given in the linked MSDN article on the false operator demonstrates a class in the BCL which has this exact behaviour. In effect the conclusion is you shouldn't use this unless you're completely sure you want this type of behaviour, it's better to just use the far simpler nullable syntax!! Update: I just noticed you need to manually override the logic operators !, || and && to make this work properly. I believe the false operator feeds into these logical operators, i.e. indicating truth, falsity or 'otherwise'. As noted in another comment !x won't work off the bat; you have to overload !. Weirdness!
What's the false operator in C# good for?
There are two weird operators in C#: the true operator the false operator If I understand this right these operators can be used in types which I want to use instead of a boolean expression and where I don't want to provide an implicit conversion to bool. Let's say I have a following class: public class MyType { public readonly int Value; public MyType(int value) { Value = value; } public static bool operator true (MyType mt) { return mt.Value > 0; } public static bool operator false (MyType mt) { return mt.Value < 0; } } So I can write the following code: MyType mTrue = new MyType(100); MyType mFalse = new MyType(-100); MyType mDontKnow = new MyType(0); if (mTrue) { // Do something. } while (mFalse) { // Do something else. } do { // Another code comes here. } while (mDontKnow) However for all the examples above only the true operator is executed. So what's the false operator in C# good for? Note: More examples can be found here, here and here.
[ "You can use it to override the && and || operators.\nThe && and || operators can't be overridden, but if you override |, &, true and false in exactly the right way the compiler will call | and & when you write || and &&.\nFor example, look at this code (from http://ayende.com/blog/1574/nhibernate-criteria-api-operator-overloading - where I found out about this trick; archived version by @BiggsTRC):\npublic static AbstractCriterion operator &(AbstractCriterion lhs, AbstractCriterion rhs)\n{\n return new AndExpression(lhs, rhs);\n}\n\npublic static AbstractCriterion operator |(AbstractCriterion lhs, AbstractCriterion rhs)\n{\n return new OrExpression(lhs, rhs);\n}\n\npublic static bool operator false(AbstractCriterion criteria)\n{\n return false;\n}\npublic static bool operator true(AbstractCriterion criteria)\n{\n return false;\n}\n\nThis is obviously a side effect and not the way it's intended to be used, but it is useful.\n", "Shog9 and Nir:\nthanks for your answers. Those answers pointed me to Steve Eichert article and it pointed me to msdn:\n\nThe operation x && y is evaluated as T.false(x) ? x : T.&(x, y), where T.false(x) is an invocation of the operator false declared in T, and T.&(x, y) is an invocation of the selected operator &. In other words, x is first evaluated and operator false is invoked on the result to determine if x is definitely false. Then, if x is definitely false, the result of the operation is the value previously computed for x. Otherwise, y is evaluated, and the selected operator & is invoked on the value previously computed for x and the value computed for y to produce the result of the operation.\n\n", "The page you link to http://msdn.microsoft.com/en-us/library/6x6y6z4d.aspx says what they were for, which was a way of handling nullable bools before nullable value types were introduced.\nI'd guess nowadays they're good for the same sort of stuff as ArrayList - i.e. absolutely nothing.\n", "AFAIK, it would be used in a test for false, such as when the && operator comes into play. Remember, && short-circuits, so in the expression\nif ( mFalse && mTrue) \n{\n // ... something\n}\n\nmFalse.false() is called, and upon returning true the expression is reduced to a call to 'mFalse.true()' (which should then return false, or things will get weird).\nNote that you must implement the & operator in order for that expression to compile, since it's used if mFalse.false() returns false.\n", "It appears from the MSDN article you linked to it was provided to allow for nullable boolean types prior to the Nullable (i.e. int?, bool?, etc.) type being introducted into the language in C#2. Thus you would store an internal value indicating whether the value is true or false or null, i.e. in your example >0 for true, <0 for false and ==0 for null, and then you'd get SQL-style null semantics. You would also have to implement a .IsNull method or property in order that nullity could be checked explicitly. 
\nComparing to SQL, imagine a table Table with 3 rows with value Foo set to true, 3 rows with value Foo set to false and 3 rows with value Foo set to null.\nSELECT COUNT(*) FROM Table WHERE Foo = TRUE OR Foo = FALSE\n6\n\nIn order to count all rows you'd have to do the following:-\nSELECT COUNT(*) FROM Table WHERE Foo = TRUE OR Foo = FALSE OR Foo IS NULL\n9\n\nThis 'IS NULL' syntax would have equivilent code in your class as .IsNull().\nLINQ makes the comparison to C# even clearer:-\nint totalCount = (from s in MyTypeEnumerable\n where s || !s\n select s).Count();\n\nImagining that MyTypeEnumberable has exactly the same contents of the database, i.e. 3 values equal to true, 3 values equal to false and 3 values equal to null. In this case totalCount would evaluate to 6 in this case. However, if we re-wrote the code as:-\nint totalCount = (from s in MyTypeEnumerable\n where s || !s || s.IsNull()\n select s).Count();\n\nThen totalCount would evaluate to 9.\nThe DBNull example given in the linked MSDN article on the false operator demonstrates a class in the BCL which has this exact behaviour.\nIn effect the conclusion is you shouldn't use this unless you're completely sure you want this type of behaviour, it's better to just use the far simpler nullable syntax!!\nUpdate: I just noticed you need to manually override the logic operators !, || and && to make this work properly. I believe the false operator feeds into these logical operators, i.e. indicating truth, falsity or 'otherwise'. As noted in another comment !x won't work off the bat; you have to overload !. Weirdness!\n" ]
[ 66, 27, 14, 7, 3 ]
[]
[]
[ ".net", "c#", "syntax" ]
stackoverflow_0000033265_.net_c#_syntax.txt
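To actually see operator false fire, the question's MyType also needs an & overload; the compiler then expands x && y into MyType.false(x) ? x : (x & y), so the false operator runs first and can short-circuit. A sketch under the same "positive/negative/zero" three-valued semantics as the question:

using System;

public class MyType
{
    public readonly int Value;
    public MyType(int value) { Value = value; }

    public static bool operator true(MyType mt)  { return mt.Value > 0; }
    public static bool operator false(MyType mt) { return mt.Value < 0; }

    // Required so the compiler can expand &&: x && y  =>  MyType.false(x) ? x : (x & y)
    public static MyType operator &(MyType a, MyType b)
    {
        return new MyType(Math.Min(a.Value, b.Value));
    }
}

class Demo
{
    static void Main()
    {
        var mTrue = new MyType(100);
        var mFalse = new MyType(-100);

        // operator false(mFalse) returns true, so the right operand is never evaluated.
        if (mFalse && Explode()) Console.WriteLine("never reached");

        // operator false(mTrue) returns false, so operator & runs and operator true
        // is applied to its result to decide the if.
        if (mTrue && new MyType(50)) Console.WriteLine("both operands definitely true");
    }

    static MyType Explode()
    {
        throw new InvalidOperationException("short-circuiting failed");
    }
}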
Q: How to pass method name to custom server control in asp.net? I am working on a Customer Server Control that extends another control. There is no problem with attaching to other controls on the form. in vb.net: Parent.FindControl(TargetControlName) I would like to pass a method to the control in the ASPX markup. for example: <c:MyCustomerControl runat=server InitializeStuffCallback="InitializeStuff"> So, I tried using reflection to access the given method name from the Parent. Something like (in VB) Dim pageType As Type = Page.GetType Dim CallbackMethodInfo As MethodInfo = pageType.GetMethod( "MethodName" ) 'Also tried sender.Parent.GetType.GetMethod("MethodName") sender.Parent.Parent.GetType.GetMethod("MethodName") The method isn't found, because it just isn't apart of the Page. Where should I be looking? I'm fairly sure this is possible because I've seen other controls do similar. I forgot to mention, my work-around is to give the control events and attaching to them in the Code-behind. A: If you want to be able to pass a method in the ASPX markup, you need to use the Browsable attribute in your code on the event. VB.NET <Browsable(True)> Public Event InitializeStuffCallback C# [Browsable(true)] public event EventHandler InitializeStuffCallback; Reference: Design-Time Attributes for Components and BrowsableAttribute Class All the events, properties, or whatever need to be in the code-behind of the control with the browsable attribute to make it so you can change it in the tag code. A: Normally you wouldn't need to get the method via reflection. Inside your user control, define a public event (sorry I do not know the vb syntax so this will be in c#) public event EventHandler EventName; Now, inside your aspx page, or whatever container of the user control, define a protected method that matches the EventHandler: protected void MyCustomerControl_MethodName(object sender, EventArgs e) { } Now, inside your markup, you can use <c:MyCustomerControl id="MyCustomerControl" runat=server OnEventName="MyCustomerControl_MethodName"> A: Your workaround is actually the better answer. If you have code that you must run at a certain part of your control's lifecycle, you should expose events to let the container extend the lifecycle with custom functionality. A: Every ASP.NET page is class of its own inherited from Page as in: class MyPage : Page Therefore, to find that method via Reflection, you must get the correct type, which is the type of the page class that stores the page code. I suppose you need to support multiple pages for this control to be instantiated in I believe you can find the child type of any instance of Page via Reflection, but I do not remember how, but you should be able to do it. but... like everyone else has said, such case is what events are for. A: buyutec and Jesse Dearing both have an acceptable answer. [Browsable(true)] lets you see the property in the Properties window. However, the event doesn't show up, which makes no difference to me. The thing I overlooked earlier was the fact that when you reference a control's even from the tag, it prep-ends On.
How to pass method name to custom server control in asp.net?
I am working on a Customer Server Control that extends another control. There is no problem with attaching to other controls on the form. in vb.net: Parent.FindControl(TargetControlName) I would like to pass a method to the control in the ASPX markup. for example: <c:MyCustomerControl runat=server InitializeStuffCallback="InitializeStuff"> So, I tried using reflection to access the given method name from the Parent. Something like (in VB) Dim pageType As Type = Page.GetType Dim CallbackMethodInfo As MethodInfo = pageType.GetMethod( "MethodName" ) 'Also tried sender.Parent.GetType.GetMethod("MethodName") sender.Parent.Parent.GetType.GetMethod("MethodName") The method isn't found, because it just isn't a part of the Page. Where should I be looking? I'm fairly sure this is possible because I've seen other controls do similar. I forgot to mention, my work-around is to give the control events and attach to them in the Code-behind.
[ "If you want to be able to pass a method in the ASPX markup, you need to use the Browsable attribute in your code on the event.\nVB.NET\n<Browsable(True)> Public Event InitializeStuffCallback\n\nC#\n[Browsable(true)]\npublic event EventHandler InitializeStuffCallback;\n\nReference:\nDesign-Time Attributes for Components and BrowsableAttribute Class\nAll the events, properties, or whatever need to be in the code-behind of the control with the browsable attribute to make it so you can change it in the tag code.\n", "Normally you wouldn't need to get the method via reflection. Inside your user control, define a public event (sorry I do not know the vb syntax so this will be in c#)\npublic event EventHandler EventName;\n\nNow, inside your aspx page, or whatever container of the user control, define a protected method that matches the EventHandler:\nprotected void MyCustomerControl_MethodName(object sender, EventArgs e) { }\n\nNow, inside your markup, you can use\n<c:MyCustomerControl id=\"MyCustomerControl\" runat=server OnEventName=\"MyCustomerControl_MethodName\">\n\n", "Your workaround is actually the better answer. If you have code that you must run at a certain part of your control's lifecycle, you should expose events to let the container extend the lifecycle with custom functionality.\n", "Every ASP.NET page is class of its own inherited from Page as in:\nclass MyPage : Page\n\nTherefore, to find that method via Reflection, you must get the correct type, which is the type of the page class that stores the page code.\nI suppose you need to support multiple pages for this control to be instantiated in I believe you can find the child type of any instance of Page via Reflection, but I do not remember how, but you should be able to do it.\nbut... like everyone else has said, such case is what events are for.\n", "buyutec and Jesse Dearing both have an acceptable answer.\n[Browsable(true)] \n\nlets you see the property in the Properties window. However, the event doesn't show up, which makes no difference to me.\nThe thing I overlooked earlier was the fact that when you reference a control's even from the tag, it prep-ends On.\n" ]
[ 2, 2, 0, 0, 0 ]
[]
[]
[ "asp.net", "c#", "custom_server_controls", "vb.net", "web_controls" ]
stackoverflow_0000033150_asp.net_c#_custom_server_controls_vb.net_web_controls.txt
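A compact version of the event-based approach from the answers above: the control exposes an ordinary .NET event, and the page wires it up declaratively with the On prefix. The namespace, tag prefix and handler name are placeholders.

using System;
using System.Web.UI;

namespace MyControls
{
    public class MyCustomerControl : Control
    {
        // The hosting page can wire this up in markup as OnInitializeStuff="...".
        public event EventHandler InitializeStuff;

        protected override void OnInit(EventArgs e)
        {
            base.OnInit(e);
            if (InitializeStuff != null)
                InitializeStuff(this, EventArgs.Empty);   // let the page run its setup code
        }
    }
}

// In the .aspx page (placeholder registration and handler name):
//   <%@ Register TagPrefix="c" Namespace="MyControls" Assembly="MyControls" %>
//   <c:MyCustomerControl ID="MyCustomerControl1" runat="server"
//                        OnInitializeStuff="MyCustomerControl_InitializeStuff" />
// and in the code-behind:
//   protected void MyCustomerControl_InitializeStuff(object sender, EventArgs e) { ... }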
Q: How can you implement trackbacks on a custom-coded blog (written in C#)? How can you implement trackbacks on a custom-coded blog (written in C#)? A: The TrackBack specification was created by Six Apart back in the day for their Movable Type blogging system. After some corporate changes it seems to be no longer available, but here's an archived version: http://web.archive.org/web/20081228043036/http://www.sixapart.com/pronet/docs/trackback_spec A: Personally, I wouldn't. Trackbacks became completely unusable years ago from all the spammers and even Akismet hasn't been enough to drag them back to usable (obviously IMO). The best way I've seen to handle trackbacks any more is to have a function that will turn an article's "referrer" (you are tracking those, right?) into a trackback (probably as a customized comment type). This leverages the meat-space processing that guarantees that no spam gets through and still allows you to easily recognize and enable further discussion.
How can you implement trackbacks on a custom-coded blog (written in C#)?
How can you implement trackbacks on a custom-coded blog (written in C#)?
[ "The TrackBack specification was created by Six Apart back in the day for their Movable Type blogging system. After some corporate changes it seems to be no longer available, but here's an archived version:\nhttp://web.archive.org/web/20081228043036/http://www.sixapart.com/pronet/docs/trackback_spec\n", "Personally, I wouldn't. Trackbacks became completely unusable years ago from all the spammers and even Akismet hasn't been enough to drag them back to usable (obviously IMO). The best way I've seen to handle trackbacks any more is to have a function that will turn an article's \"referrer\" (you are tracking those, right?) into a trackback (probably as a customized comment type). This leverages the meat-space processing that guarantees that no spam gets through and still allows you to easily recognize and enable further discussion.\n" ]
[ 2, 2 ]
[ "If you're custom coding your own blog you have too much time on your hands. Start with something like dasBlog or SubText and customize that to your needs. Then you get trackbacks for free.\n" ]
[ -4 ]
[ "blogs", "c#", "trackback" ]
stackoverflow_0000033217_blogs_c#_trackback.txt
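A hedged sketch of sending a TrackBack ping the way the (archived) Six Apart spec describes it: an HTTP POST of form-encoded title/excerpt/url/blog_name to the target post's trackback URL, which replies with a small XML document whose error element is 0 on success. The URLs and field values are placeholders; double-check the spec before relying on the details.

using System;
using System.IO;
using System.Net;
using System.Text;

class TrackbackClient
{
    // Returns true if the remote blog reports <error>0</error>.
    public static bool SendPing(string trackbackUrl, string postTitle,
                                string postUrl, string excerpt, string blogName)
    {
        string body =
            "title="      + Uri.EscapeDataString(postTitle) +
            "&url="       + Uri.EscapeDataString(postUrl) +
            "&excerpt="   + Uri.EscapeDataString(excerpt) +
            "&blog_name=" + Uri.EscapeDataString(blogName);

        var request = (HttpWebRequest)WebRequest.Create(trackbackUrl);
        request.Method = "POST";
        request.ContentType = "application/x-www-form-urlencoded";

        byte[] bytes = Encoding.UTF8.GetBytes(body);
        request.ContentLength = bytes.Length;
        using (Stream s = request.GetRequestStream())
            s.Write(bytes, 0, bytes.Length);

        using (var response = (HttpWebResponse)request.GetResponse())
        using (var reader = new StreamReader(response.GetResponseStream()))
        {
            string xml = reader.ReadToEnd();
            return xml.Contains("<error>0</error>");   // crude, but matches the spec's reply format
        }
    }
}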
Q: How do banks remember "your computer"? As many of you probably know, online banks nowadays have a security system whereby you are asked some personal questions before you even enter your password. Once you have answered them, you can choose for the bank to "remember this computer" so that in the future you can login by only entering your password. How does the "remember this computer" part work? I know it cannot be cookies, because the feature still works despite the fact that I clear all of my cookies. I thought it might be by IP address, but my friend with a dynamic IP claims it works for him, too (but maybe he's wrong). He thought it was MAC address or something, but I strongly doubt that! So, is there a concept of https-only cookies that I don't clear? Finally, the programming part of the question: how can I do something similar myself in, say, PHP? A: In fact they most probably use cookies. An alternative for them would be to use "flash cookies" (officially called "Local Shared Objects"). They are similar to cookies in that they are tied to a website and have an upper size limit, but they are maintained by the flash player, so they are invisible to any browser tools. To clear them (and test this theory), you can use the instructions provided by Adobe. An other nifty (or maybe worrying, depending on your viewpoint) feature is that the LSO storage is shared by all browsers, so using LSO you can identify users even if they switched browser (as long as they are logged in as the same user). A: The particular bank I was interested in is Bank of America. I have confirmed that if I only clear my cookies or my LSOs, the site does not require me to re-enter info. If, however, I clear both, I had to go through additional authentication. Thus, that appears to be the answer in my particular case! But thank you all for the heads-up regarding other banks, and possibilities such as including the User-Agent string. A: This kind of session tracking is very likely to be done using a combination of a cookie with a unique id identifying your current session, and the website pairing that id with the last IP address you used to connect to their server. That way, if the IP changes, but you still have the cookie, you're identified and logged in, and if the cookie is absent but you have the same IP address as the one save on the server, then they set your cookie to the id paired with that IP. Really, it's that second possibility that is tricky to get right. If the cookie is missing, and you only have your IP address to show for identification, it's quite unsafe to log someone in just based of that. So servers probably store additional info about you, LSO seem like a good choice, geo IP too, but User Agent, not so much because they don't really say anything about you, every body using the same version of the same browser as you has the same. As an aside, it has been mentioned above that it could work with MAC adresses. I strongly disagree! Your MAC address never reaches your bank's server, as they are only used to identify sides of an Ethernet connection, and to connect to your bank you make a bunch of Ethernet connections: from your computer to your home router, or your ISP, then from there to the first internet router you go through, then to the second, etc... and each time a new connection is made, each machine on each side provide their very own MAC addresses. 
So your MAC address can only be known to the machines directly connected to you through a switch or hub, because anything else that routes your packets will replace your MAC with their own. Only the IP address stays the same all the way. If MAC addresses did go all the way, it would be a privacy nightmare, as all MAC addresses are unique to a single device, hence to a single person. This is a slightly simplified explanation because it's not the point of the question, but it seemed useful to clear what looked like a misunderstanding. A: It could be a combination of cookies, and ip address logging. Edit: I have just checked my bank and cleared the cookies. Now I have to re-enter all of my info. A: I think it depends on the bank. My bank does use a cookie since I lose it when I wipe cookies. A: It is possible for flash files to store a small amount of data on your computer. It's also possible that the bank uses that approach to "remember" your computer, but it's risky to rely on users having (and not having disabled) flash. A: My bank's site makes me re-authenticate every time a new version of Firefox is out, so there's definitely a user-agent string component in some. A: Are you using a laptop? Does it remember you, after you delete your cookies, if you access from a different WiFi network? If so, IP/physical location mapping is highly unlikely. A: Based on all these posts, the conclusions that I'm reaching are (1) it depends on the bank and (2) there's probably more than one piece of data that's involved, but see (1).
How do banks remember "your computer"?
As many of you probably know, online banks nowadays have a security system whereby you are asked some personal questions before you even enter your password. Once you have answered them, you can choose for the bank to "remember this computer" so that in the future you can login by only entering your password. How does the "remember this computer" part work? I know it cannot be cookies, because the feature still works despite the fact that I clear all of my cookies. I thought it might be by IP address, but my friend with a dynamic IP claims it works for him, too (but maybe he's wrong). He thought it was MAC address or something, but I strongly doubt that! So, is there a concept of https-only cookies that I don't clear? Finally, the programming part of the question: how can I do something similar myself in, say, PHP?
[ "In fact they most probably use cookies. An alternative for them would be to use \"flash cookies\" (officially called \"Local Shared Objects\"). They are similar to cookies in that they are tied to a website and have an upper size limit, but they are maintained by the flash player, so they are invisible to any browser tools.\nTo clear them (and test this theory), you can use the instructions provided by Adobe. An other nifty (or maybe worrying, depending on your viewpoint) feature is that the LSO storage is shared by all browsers, so using LSO you can identify users even if they switched browser (as long as they are logged in as the same user).\n", "The particular bank I was interested in is Bank of America.\nI have confirmed that if I only clear my cookies or my LSOs, the site does not require me to re-enter info. If, however, I clear both, I had to go through additional authentication. Thus, that appears to be the answer in my particular case!\nBut thank you all for the heads-up regarding other banks, and possibilities such as including the User-Agent string.\n", "This kind of session tracking is very likely to be done using a combination of a cookie with a unique id identifying your current session, and the website pairing that id with the last IP address you used to connect to their server. That way, if the IP changes, but you still have the cookie, you're identified and logged in, and if the cookie is absent but you have the same IP address as the one save on the server, then they set your cookie to the id paired with that IP.\nReally, it's that second possibility that is tricky to get right. If the cookie is missing, and you only have your IP address to show for identification, it's quite unsafe to log someone in just based of that. So servers probably store additional info about you, LSO seem like a good choice, geo IP too, but User Agent, not so much because they don't really say anything about you, every body using the same version of the same browser as you has the same.\nAs an aside, it has been mentioned above that it could work with MAC adresses. I strongly disagree! Your MAC address never reaches your bank's server, as they are only used to identify sides of an Ethernet connection, and to connect to your bank you make a bunch of Ethernet connections: from your computer to your home router, or your ISP, then from there to the first internet router you go through, then to the second, etc... and each time a new connection is made, each machine on each side provide their very own MAC addresses. So your MAC address can only be known to the machines directly connected to you through a switch or hub, because anything else that routes your packets will replace your MAC with their own. Only the IP address stays the same all the way.\nIf MAC addresses did go all the way, it would be a privacy nightmare, as all MAC addresses are unique to a single device, hence to a single person.\nThis is a slightly simplified explanation because it's not the point of the question, but it seemed useful to clear what looked like a misunderstanding.\n", "It could be a combination of cookies, and ip address logging.\nEdit: I have just checked my bank and cleared the cookies. Now I have to re-enter all of my info.\n", "I think it depends on the bank. My bank does use a cookie since I lose it when I wipe cookies.\n", "It is possible for flash files to store a small amount of data on your computer. 
It's also possible that the bank uses that approach to \"remember\" your computer, but it's risky to rely on users having (and not having disabled) flash.\n", "My bank's site makes me re-authenticate every time a new version of Firefox is out, so there's definitely a user-agent string component in some.\n", "Are you using a laptop? Does it remember you, after you delete your cookies, if you access from a different WiFi network? If so, IP/physical location mapping is highly unlikely.\n", "Based on all these posts, the conclusions that I'm reaching are (1) it depends on the bank and (2) there's probably more than one piece of data that's involved, but see (1).\n" ]
[ 20, 6, 5, 1, 1, 1, 1, 0, 0 ]
[ "MAC address is possible.\nIP to physical location mapping is also a possibility.\nUser agents and other HTTP headers are quiet unique to each of the machines too.\nI'm thinking about those websites that prevents you from using an accelerating download managers. There must be a way.\n" ]
[ -1 ]
[ "https", "onlinebanking", "sessiontracking" ]
stackoverflow_0000033034_https_onlinebanking_sessiontracking.txt
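The question asks about PHP, but since most of this file is C#, here is the cookie-plus-server-side-token idea from the answers sketched as an ASP.NET helper: hand out a long-lived random token in a cookie, store only its hash against the user (optionally together with the IP it was issued to), and on later logins treat a matching token as a "known device". The cookie name and the persistence methods are placeholders.

using System;
using System.Security.Cryptography;
using System.Text;
using System.Web;

public static class DeviceToken
{
    const string CookieName = "device_token";   // placeholder name

    // Call after the user has passed the extra security questions.
    public static void RememberThisComputer(HttpResponse response, string userName)
    {
        byte[] raw = new byte[32];
        using (var rng = RandomNumberGenerator.Create()) rng.GetBytes(raw);
        string token = Convert.ToBase64String(raw);

        SaveTokenHash(userName, Hash(token));   // persist only the hash server-side

        var cookie = new HttpCookie(CookieName, token)
        {
            HttpOnly = true,
            Secure = true,
            Expires = DateTime.UtcNow.AddYears(1)
        };
        response.Cookies.Add(cookie);
    }

    // Call during login to decide whether to skip the extra questions.
    public static bool IsKnownComputer(HttpRequest request, string userName)
    {
        HttpCookie cookie = request.Cookies[CookieName];
        return cookie != null && TokenHashExists(userName, Hash(cookie.Value));
    }

    static string Hash(string token)
    {
        using (var sha = SHA256.Create())
            return Convert.ToBase64String(sha.ComputeHash(Encoding.UTF8.GetBytes(token)));
    }

    // Placeholder persistence; in practice these would hit the user database.
    static void SaveTokenHash(string user, string hash) { /* INSERT ... */ }
    static bool TokenHashExists(string user, string hash) { return false; }
}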
Q: Communicating between websites (using Javascript or ?) Here's my problem - I'd like to communicate between two websites and I'm looking for a clean solution. The current solution uses Javascript but there are nasty workarounds because of (understandable) cross-site scripting restrictions. At the moment, website A opens a modal window containing website B using a jQuery plug-in called jqModal. Website B does some work and returns some results to website A. To return that information we have to work around cross-site scripting restrictions - website B creates an iframe that refers to a page on website A and includes "fragment identifiers" containing the information to be returned. The iframe is polled by website A to detect the returned information. It's a common technique but it's hacky. There are variations such as CrossSite and I could perhaps use an HTTP POST from website B to website A but I'm trying to avoid page refreshes. Does anyone have any alternatives? EDIT: I'd like to avoid having to save state on website B. A: My best suggestion would be to create a webservice on each site that the other could call with the information that needs to get passed. If security is necessary, it's easy to add an SSL-like authentication scheme (or actual SSL even, if you like) to this system to ensure that only the two servers are able to talk to their respective web services. This would let you avoid the hackiness that's inherent in any scheme that involves one site opening windows on the other. A: With jQuery newer than 1.2 you can use JSONP A: @jmein - you've described how to create a modal popup (which is exactly what jqModal does) however you've missed that the content of the modal window is served from another domain. The two domains involved belong to two separate companies so can't be combined in the way you describe. A: I believe @pat was referring to this "As of jQuery 1.2, you can load JSON data located on another domain if you specify a JSONP callback, " http://docs.jquery.com/Ajax/jQuery.getJSON#urldatacallback
Communicating between websites (using Javascript or ?)
Here's my problem - I'd like to communicate between two websites and I'm looking for a clean solution. The current solution uses Javascript but there are nasty workarounds because of (understandable) cross-site scripting restrictions. At the moment, website A opens a modal window containing website B using a jQuery plug-in called jqModal. Website B does some work and returns some results to website A. To return that information we have to work around cross-site scripting restrictions - website B creates an iframe that refers to a page on website A and includes "fragment identifiers" containing the information to be returned. The iframe is polled by website A to detect the returned information. It's a common technique but it's hacky. There are variations such as CrossSite and I could perhaps use an HTTP POST from website B to website A but I'm trying to avoid page refreshes. Does anyone have any alternatives? EDIT: I'd like to avoid having to save state on website B.
[ "My best suggestion would be to create a webservice on each site that the other could call with the information that needs to get passed. If security is necessary, it's easy to add an SSL-like authentication scheme (or actual SSL even, if you like) to this system to ensure that only the two servers are able to talk to their respective web services.\nThis would let you avoid the hackiness that's inherent in any scheme that involves one site opening windows on the other.\n", "With jQuery newer than 1.2 you can use JSONP\n", "@jmein - you've described how to create a modal popup (which is exactly what jqModal does) however you've missed that the content of the modal window is served from another domain. The two domains involved belong to two separate companies so can't be combined in the way you describe.\n", "I believe @pat was referring to this\n\"As of jQuery 1.2, you can load JSON data located on another domain if you specify a JSONP callback, \"\nhttp://docs.jquery.com/Ajax/jQuery.getJSON#urldatacallback\n" ]
[ 5, 3, 0, 0 ]
[]
[]
[ "javascript", "jquery", "web", "xss" ]
stackoverflow_0000033104_javascript_jquery_web_xss.txt
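For the JSONP route the answers mention, website B has to wrap its JSON reply in whatever callback name the caller supplies. A hedged ASP.NET IHttpHandler sketch (handler path, payload and parameter name are placeholders, chosen to line up with the jQuery.getJSON example the last answer links to):

using System.Web;

// Website B: returns results as JSONP so a page on website A can consume them with
// something like $.getJSON("http://siteB/results.ashx?jsoncallback=?", handler).
public class ResultsHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        string callback = context.Request.QueryString["jsoncallback"];
        string json = "{\"status\":\"done\",\"value\":42}";   // placeholder payload

        context.Response.ContentType = "application/javascript";
        if (!string.IsNullOrEmpty(callback))
            context.Response.Write(callback + "(" + json + ");");   // JSONP wrapping
        else
            context.Response.Write(json);                            // plain JSON fallback
    }

    public bool IsReusable { get { return true; } }
}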
Q: How to marshal an array of structs - (.Net/C# => C++) Disclaimer: Near zero with marshalling concepts.. I have a struct B that contains a string + an array of structs C. I need to send this across the giant interop chasm to a COM - C++ consumer. What are the right set of attributes I need to decorate my struct definition ? [ComVisible (true)] [StructLayout(LayoutKind.Sequential)] public struct A { public string strA public B b; } [ComVisible (true)] [StructLayout(LayoutKind.Sequential)] public struct B { public int Count; [MarshalAs(UnmanagedType.LPArray, ArraySubType=UnmanagedType.Struct, SizeParamIndex=0)] public C [] c; } [ComVisible (true)] [StructLayout(LayoutKind.Sequential)] public struct C { public string strVar; } edit: @Andrew Basically this is my friends' problem. He has this thing working in .Net - He does some automagic to have the .tlb/.tlh created that he can then use in the C++ realm. Trouble is he can't fix the array size. A: C++: The Most Powerful Language for .NET Framework Programming I was about to approach a project that needed to marshal structured data across the C++/C# boundary, but I found what could be a better way (especially if you know C++ and like learning new programming languages). If you have access to Visual Studio 2005 or above you might consider using C++/CLI rather than marshaling. It basically allows you to create this magic hybrid .NET/native class library that's 100% compatible with C# (as if you had written everything in C#, for the purposes of consuming it in another C# project) that is also 100% compatible with C and/or C++. In your case you could write a C++/CLI wrapper that marshaled the data from C++ in memory to CLI in memory types. I've had pretty good luck with this, using pure C++ code to read and write out datafiles (this could be a third party library of some kind, even), and then my C++/CLI code converts (copies) the C++ data into .NET types, in memory, which can be consumed directly as if I had written the read/write library in C#. For me the only barrier was syntax, since you have to learn the CLI extensions to C++. I wish I'd had StackOverflow to ask syntax questions, back when I was learning this! In return for trudging through the syntax, you learn probably the most powerful programming language imaginable. Think about it: the elegance and sanity of C# and the .NET libraries, and the low level and native library compatibility of C++. You wouldn't want to write all your projects in C++/CLI but it's great for getting C++ data into C#/.NET projects. It "just works." Tutorial: http://www.codeproject.com/KB/mcpp/cppcliintro01.aspx A: The answer depends on what the native definitions are that you are trying to marshal too. You haven't provided enough information for anyone to be able to really help. A common thing that trips people up when marshalling strings in native arrays is that native arrays often use a fixed-size buffer for the string that is allocated inline with the struct. Your definition is marshalling the strings as a pointer to another block of memory containing the string (which is the default). [MarshalAs(UnmanagedType.ByValTStr, SizeConst = ##)] might be what you are looking for...
How to marshal an array of structs - (.Net/C# => C++)
Disclaimer: Near zero with marshalling concepts.. I have a struct B that contains a string + an array of structs C. I need to send this across the giant interop chasm to a COM - C++ consumer. What are the right set of attributes I need to decorate my struct definition ? [ComVisible (true)] [StructLayout(LayoutKind.Sequential)] public struct A { public string strA public B b; } [ComVisible (true)] [StructLayout(LayoutKind.Sequential)] public struct B { public int Count; [MarshalAs(UnmanagedType.LPArray, ArraySubType=UnmanagedType.Struct, SizeParamIndex=0)] public C [] c; } [ComVisible (true)] [StructLayout(LayoutKind.Sequential)] public struct C { public string strVar; } edit: @Andrew Basically this is my friends' problem. He has this thing working in .Net - He does some automagic to have the .tlb/.tlh created that he can then use in the C++ realm. Trouble is he can't fix the array size.
[ "C++: The Most Powerful Language for .NET Framework Programming\nI was about to approach a project that needed to marshal structured data across the C++/C# boundary, but I found what could be a better way (especially if you know C++ and like learning new programming languages). If you have access to Visual Studio 2005 or above you might consider using C++/CLI rather than marshaling. It basically allows you to create this magic hybrid .NET/native class library that's 100% compatible with C# (as if you had written everything in C#, for the purposes of consuming it in another C# project) that is also 100% compatible with C and/or C++. In your case you could write a C++/CLI wrapper that marshaled the data from C++ in memory to CLI in memory types.\nI've had pretty good luck with this, using pure C++ code to read and write out datafiles (this could be a third party library of some kind, even), and then my C++/CLI code converts (copies) the C++ data into .NET types, in memory, which can be consumed directly as if I had written the read/write library in C#. For me the only barrier was syntax, since you have to learn the CLI extensions to C++. I wish I'd had StackOverflow to ask syntax questions, back when I was learning this!\nIn return for trudging through the syntax, you learn probably the most powerful programming language imaginable. Think about it: the elegance and sanity of C# and the .NET libraries, and the low level and native library compatibility of C++. You wouldn't want to write all your projects in C++/CLI but it's great for getting C++ data into C#/.NET projects. It \"just works.\"\nTutorial:\n\nhttp://www.codeproject.com/KB/mcpp/cppcliintro01.aspx\n\n", "The answer depends on what the native definitions are that you are trying to marshal too. You haven't provided enough information for anyone to be able to really help.\nA common thing that trips people up when marshalling strings in native arrays is that native arrays often use a fixed-size buffer for the string that is allocated inline with the struct. Your definition is marshalling the strings as a pointer to another block of memory containing the string (which is the default).\n[MarshalAs(UnmanagedType.ByValTStr, SizeConst = ##)] might be what you are looking for...\n" ]
[ 4, 1 ]
[]
[]
[ ".net", "interop", "marshalling" ]
stackoverflow_0000031854_.net_interop_marshalling.txt
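To make the inline-buffer point from the second answer concrete: if a fixed upper bound is acceptable on the C++ side, both the string and the nested array can be marshalled inline rather than as pointers. The buffer length, array bound and character set below are assumptions that must match the native declaration, not values taken from the original question.

using System;
using System.Runtime.InteropServices;

[ComVisible(true)]
[StructLayout(LayoutKind.Sequential, CharSet = CharSet.Unicode)]
public struct C
{
    // Marshalled as an inline wchar_t[260] inside the struct, not as a pointer.
    [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 260)]
    public string strVar;
}

[ComVisible(true)]
[StructLayout(LayoutKind.Sequential, CharSet = CharSet.Unicode)]
public struct B
{
    public int Count;

    // Fixed-length inline array of C; the C++ side would declare something like C c[16].
    [MarshalAs(UnmanagedType.ByValArray, SizeConst = 16, ArraySubType = UnmanagedType.Struct)]
    public C[] c;
}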
Q: ssh hangs when command invoked directly, but exits cleanly when run interactive I need to launch a server on the remote machine and retrieve the port number that the server process is lsitening on. When invoked, the server will listen on a random port and output the port number on stderr. I want to automate the process of logging on to the remote machine, launching the process, and retrieving the port number. I wrote a Python script called "invokejob.py" that lives on the remote machine to act as a wrapper that invokes the job and then returns the port number, it looks like this: import re, subprocess executable = ... # Name of executable regex = ... # Regex to extract the port number from the output p = subprocess.Popen(executable, bufsize=1, # line buffered stderr=subprocess.PIPE ) s = p.stderr.readline() port = re.match(regex).groups()[0] print port If I log in interactively, this script works: $ ssh remotehost.example.com Last login: Thu Aug 28 17:31:18 2008 from localhost $ ./invokejob.py 63409 $ exit logout Connection to remotehost.example.com closed. (Note: successful logout, it did not hang). However, if I try to invoke it from the command-line, it just hangs: $ ssh remotehost.example.com invokejob.py Does anybody know why it hangs in the second case, and what I can do to avoid this? Note that I need to retrieve the output of the program, so I can't just use the ssh "-f" flag or redirect standard output. A: s = p.stderr.readline() I suspect it's the above line. When you invoke a command directly through ssh, you don't get your full pty (assuming Linux), and thus no stderr to read from. When you log in interactively, stdin, stdout, and stderr are set up for you, and so your script works. A: what if you do the following: ssh <remote host> '<your command> ;<your regexp using awk or something>' For example ssh <remote host> '<your program>; ps aux | awk \'/root/ {print $2}\'' This will connect to , execute and then print each PSID for any user root or any process with root in its description. I have used this method for running all kinds of commands on remote machines. The catch is to wrap the command(s) you wish to execute in single quotation marks (') and to separate each command with a semi-colon (;).
ssh hangs when command invoked directly, but exits cleanly when run interactive
I need to launch a server on the remote machine and retrieve the port number that the server process is listening on. When invoked, the server will listen on a random port and output the port number on stderr. I want to automate the process of logging on to the remote machine, launching the process, and retrieving the port number. I wrote a Python script called "invokejob.py" that lives on the remote machine to act as a wrapper that invokes the job and then returns the port number, it looks like this: import re, subprocess executable = ... # Name of executable regex = ... # Regex to extract the port number from the output p = subprocess.Popen(executable, bufsize=1, # line buffered stderr=subprocess.PIPE ) s = p.stderr.readline() port = re.match(regex).groups()[0] print port If I log in interactively, this script works: $ ssh remotehost.example.com Last login: Thu Aug 28 17:31:18 2008 from localhost $ ./invokejob.py 63409 $ exit logout Connection to remotehost.example.com closed. (Note: successful logout, it did not hang). However, if I try to invoke it from the command-line, it just hangs: $ ssh remotehost.example.com invokejob.py Does anybody know why it hangs in the second case, and what I can do to avoid this? Note that I need to retrieve the output of the program, so I can't just use the ssh "-f" flag or redirect standard output.
[ "\ns = p.stderr.readline()\n\n\nI suspect it's the above line. When you invoke a command directly through ssh, you don't get your full pty (assuming Linux), and thus no stderr to read from.\nWhen you log in interactively, stdin, stdout, and stderr are set up for you, and so your script works.\n", "what if you do the following:\nssh <remote host> '<your command> ;<your regexp using awk or something>'\nFor example\nssh <remote host> '<your program>; ps aux | awk \\'/root/ {print $2}\\''\nThis will connect to , execute and then print each PSID for any user root or any process with root in its description.\nI have used this method for running all kinds of commands on remote machines. The catch is to wrap the command(s) you wish to execute in single quotation marks (') and to separate each command with a semi-colon (;).\n" ]
[ 3, 0 ]
[]
[]
[ "python", "ssh" ]
stackoverflow_0000033475_python_ssh.txt
Q: How do you find what debug switches are available? Or given a switch find out what is being disabled? In this question the answer was to flip on a switch that is picked up by the debugger disabling the extraneous header that was causing the problem. The Microsoft help implies these switched are user generated and does not list any switches. <configuration> <system.diagnostics> <switches> <add name="Remote.Disable" value="1" /> </switches> </system.diagnostics> </configuration> What I would like to know is where the value "Remote.Disable" comes from and how find out what other things can be switched on or off. Currently it is just some config magic, and I don't like magic. A: As you suspected, Remote.Disable stops the app from attaching debug info to remote requests. It's defined inside the .NET framework methods that make the SOAP request. The basic situation is that these switches can be defined anywhere in code, you just need to create a new System.Diagnostics.BooleanSwitch with the name given and the config file can control them. This particular one is defined in System.ComponentModel.CompModSwitches.DisableRemoteDebugging: public static BooleanSwitch DisableRemoteDebugging { get { if (disableRemoteDebugging == null) { disableRemoteDebugging = new BooleanSwitch("Remote.Disable", "Disable remote debugging for web methods."); } return disableRemoteDebugging; } } In your case it's probably being called from System.Web.Services.Protocols.RemoteDebugger.IsClientCallOutEnabled(), which is being called by System.Web.Services.Protocols.WebClientProtocol.NotifyClientCallOut which is in turn being called by the Invoke method of System.Web.Services.Protocols.SoapHttpClientProtocol Unfortunately, to my knowledge, short of decompiling the framework & seaching for new BooleanSwitch or any of the other inheritors of the System.Diagnostics.Switch class, there's no easy way to know what switches are defined. It seems to be a case of searching msdn/google/stack overflow for the specific case In this case I just used Reflector & searched for the Remote.Disable string A: You can use Reflector to search for uses of the Switch class and its subclasss (BooleanSwitch, TraceSwitch, etc). The various switches are hardcoded by name, so AFAIK there's no master list somewhere.
How do you find what debug switches are available? Or given a switch find out what is being disabled?
In this question the answer was to flip on a switch that is picked up by the debugger disabling the extraneous header that was causing the problem. The Microsoft help implies these switches are user generated and does not list any switches. <configuration> <system.diagnostics> <switches> <add name="Remote.Disable" value="1" /> </switches> </system.diagnostics> </configuration> What I would like to know is where the value "Remote.Disable" comes from and how to find out what other things can be switched on or off. Currently it is just some config magic, and I don't like magic.
[ "As you suspected, Remote.Disable stops the app from attaching debug info to remote requests. It's defined inside the .NET framework methods that make the SOAP request.\nThe basic situation is that these switches can be defined anywhere in code, you just need to create a new System.Diagnostics.BooleanSwitch with the name given and the config file can control them.\nThis particular one is defined in System.ComponentModel.CompModSwitches.DisableRemoteDebugging:\npublic static BooleanSwitch DisableRemoteDebugging\n{\n get\n {\n if (disableRemoteDebugging == null)\n {\n disableRemoteDebugging = new BooleanSwitch(\"Remote.Disable\", \"Disable remote debugging for web methods.\");\n }\n return disableRemoteDebugging;\n }\n}\n\nIn your case it's probably being called from System.Web.Services.Protocols.RemoteDebugger.IsClientCallOutEnabled(), which is being called by System.Web.Services.Protocols.WebClientProtocol.NotifyClientCallOut which is in turn being called by the Invoke method of System.Web.Services.Protocols.SoapHttpClientProtocol\nUnfortunately, to my knowledge, short of decompiling the framework & seaching for \nnew BooleanSwitch\n\nor any of the other inheritors of the System.Diagnostics.Switch class,\nthere's no easy way to know what switches are defined. It seems to be a case of searching msdn/google/stack overflow for the specific case\nIn this case I just used Reflector & searched for the Remote.Disable string\n", "You can use Reflector to search for uses of the Switch class and its subclasss (BooleanSwitch, TraceSwitch, etc). The various switches are hardcoded by name, so AFAIK there's no master list somewhere. \n" ]
[ 2, 1 ]
[]
[]
[ ".net", "app_config" ]
stackoverflow_0000033334_.net_app_config.txt
Q: A more generic visitor pattern I'm sorry if my question is so long and technical but I think it's so important other people will be interested about it I was looking for a way to separate clearly some softwares internals from their representation in c++ I have a generic parameter class (to be later stored in a container) that can contain any kind of value with the the boost::any class I have a base class (roughly) of this kind (of course there is more stuff) class Parameter { public: Parameter() template typename<T> T GetValue() const { return any_cast<T>( _value ); } template typename<T> void SetValue(const T& value) { _value = value; } string GetValueAsString() const = 0; void SetValueFromString(const string& str) const = 0; private: boost::any _value; } There are two levels of derived classes: The first level defines the type and the conversion to/from string (for example ParameterInt or ParameterString) The second level defines the behaviour and the real creators (for example deriving ParameterAnyInt and ParameterLimitedInt from ParameterInt or ParameterFilename from GenericString) Depending on the real type I would like to add external function or classes that operates depending on the specific parameter type without adding virtual methods to the base class and without doing strange casts For example I would like to create the proper gui controls depending on parameter types: Widget* CreateWidget(const Parameter& p) Of course I cannot understand real Parameter type from this unless I use RTTI or implement it my self (with enum and switch case), but this is not the right OOP design solution, you know. The classical solution is the Visitor design pattern http://en.wikipedia.org/wiki/Visitor_pattern The problem with this pattern is that I have to know in advance which derived types will be implemented, so (putting together what is written in wikipedia and my code) we'll have sort of: struct Visitor { virtual void visit(ParameterLimitedInt& wheel) = 0; virtual void visit(ParameterAnyInt& engine) = 0; virtual void visit(ParameterFilename& body) = 0; }; Is there any solution to obtain this behaviour in any other way without need to know in advance all the concrete types and without deriving the original visitor? Edit: Dr. Pizza's solution seems the closest to what I was thinking, but the problem is still the same and the method is actually relying on dynamic_cast, that I was trying to avoid as a kind of (even if weak) RTTI method Maybe it is better to think to some solution without even citing the visitor Pattern and clean our mind. The purpose is just having the function such: Widget* CreateWidget(const Parameter& p) behave differently for each "concrete" parameter without losing info on its type A: For a generic implementation of Vistor, I'd suggest the Loki Visitor, part of the Loki library. A: I've used this ("acyclic visitor") to good effect; it makes adding new classes to the hierarchy possible without changing existing ones, to some extent. A: If I understand this correctly... We had a object that could use different hardware options. To facilitate this we used a abstract interface of Device. Device had a bunch of functions that would be fired on certain events. The use would be the same but the various implementations of the Device would either have a fully-fleshed out functions or just return immediately. To make life even easier, the functions were void and threw exceptions on when something went wrong. 
A: For completeness's sake: it's of course completely possible to write an own implementation of a multimethod pointer table for your objects and calculate the method addresses manually at run time. There's a paper by Stroustrup on the topic of implementing multimethods (albeit in the compiler). I wouldn't really advise anyone to do this. Getting the implementation to perform well is quite complicated and the syntax for using it will probably be very awkward and error-prone. If everything else fails, this might still be the way to go, though. A: I am having trouble understanding your requirements. But Ill state - in my own words as it were - what I understand the situation to be: You have abstract Parameter class, which is subclassed eventually to some concrete classes (eg: ParameterLimitedInt). You have a seperate GUI system which will be passed these parameters in a generic fashion, but the catch is that it needs to present the GUI component specific to the concrete type of the parameter class. The restrictions are that you dont want to do RTTID, and dont want to write code to handle every possible type of concrete parameter. You are open to using the visitor pattern. With those being your requirements, here is how I would handle such a situation: I would implement the visitor pattern where the accept() returns a boolean value. The base Parameter class would implement a virtual accept() function and return false. Concrete implementations of the Parameter class would then contain accept() functions which will call the visitor's visit(). They would return true. The visitor class would make use of a templated visit() function so you would only override for the concrete Parameter types you care to support: class Visitor { public: template< class T > void visit( const T& param ) const { assert( false && "this parameter type not specialised in the visitor" ); } void visit( const ParameterLimitedInt& ) const; // specialised implementations... } Thus if accept() returns false, you know the concrete type for the Parameter has not implemented the visitor pattern yet (in case there is additional logic you would prefer to handle on a case by case basis). If the assert() in the visitor pattern triggers, its because its not visiting a Parameter type which you've implemented a specialisation for. One downside to all of this is that unsupported visits are only caught at runtime.
A more generic visitor pattern
I'm sorry if my question is so long and technical but I think it's so important other people will be interested about it I was looking for a way to separate clearly some softwares internals from their representation in c++ I have a generic parameter class (to be later stored in a container) that can contain any kind of value with the the boost::any class I have a base class (roughly) of this kind (of course there is more stuff) class Parameter { public: Parameter() template typename<T> T GetValue() const { return any_cast<T>( _value ); } template typename<T> void SetValue(const T& value) { _value = value; } string GetValueAsString() const = 0; void SetValueFromString(const string& str) const = 0; private: boost::any _value; } There are two levels of derived classes: The first level defines the type and the conversion to/from string (for example ParameterInt or ParameterString) The second level defines the behaviour and the real creators (for example deriving ParameterAnyInt and ParameterLimitedInt from ParameterInt or ParameterFilename from GenericString) Depending on the real type I would like to add external function or classes that operates depending on the specific parameter type without adding virtual methods to the base class and without doing strange casts For example I would like to create the proper gui controls depending on parameter types: Widget* CreateWidget(const Parameter& p) Of course I cannot understand real Parameter type from this unless I use RTTI or implement it my self (with enum and switch case), but this is not the right OOP design solution, you know. The classical solution is the Visitor design pattern http://en.wikipedia.org/wiki/Visitor_pattern The problem with this pattern is that I have to know in advance which derived types will be implemented, so (putting together what is written in wikipedia and my code) we'll have sort of: struct Visitor { virtual void visit(ParameterLimitedInt& wheel) = 0; virtual void visit(ParameterAnyInt& engine) = 0; virtual void visit(ParameterFilename& body) = 0; }; Is there any solution to obtain this behaviour in any other way without need to know in advance all the concrete types and without deriving the original visitor? Edit: Dr. Pizza's solution seems the closest to what I was thinking, but the problem is still the same and the method is actually relying on dynamic_cast, that I was trying to avoid as a kind of (even if weak) RTTI method Maybe it is better to think to some solution without even citing the visitor Pattern and clean our mind. The purpose is just having the function such: Widget* CreateWidget(const Parameter& p) behave differently for each "concrete" parameter without losing info on its type
[ "For a generic implementation of Vistor, I'd suggest the Loki Visitor, part of the Loki library.\n", "I've used this (\"acyclic visitor\") to good effect; it makes adding new classes to the hierarchy possible without changing existing ones, to some extent.\n", "If I understand this correctly...\nWe had a object that could use different hardware options. To facilitate this we used a abstract interface of Device. Device had a bunch of functions that would be fired on certain events. The use would be the same but the various implementations of the Device would either have a fully-fleshed out functions or just return immediately. To make life even easier, the functions were void and threw exceptions on when something went wrong.\n", "For completeness's sake:\nit's of course completely possible to write an own implementation of a multimethod pointer table for your objects and calculate the method addresses manually at run time. There's a paper by Stroustrup on the topic of implementing multimethods (albeit in the compiler).\nI wouldn't really advise anyone to do this. Getting the implementation to perform well is quite complicated and the syntax for using it will probably be very awkward and error-prone. If everything else fails, this might still be the way to go, though.\n", "I am having trouble understanding your requirements. But Ill state - in my own words as it were - what I understand the situation to be:\n\nYou have abstract Parameter class, which is subclassed eventually to some concrete classes (eg: ParameterLimitedInt).\nYou have a seperate GUI system which will be passed these parameters in a generic fashion, but the catch is that it needs to present the GUI component specific to the concrete type of the parameter class.\nThe restrictions are that you dont want to do RTTID, and dont want to write code to handle every possible type of concrete parameter.\nYou are open to using the visitor pattern.\n\nWith those being your requirements, here is how I would handle such a situation:\nI would implement the visitor pattern where the accept() returns a boolean value. The base Parameter class would implement a virtual accept() function and return false.\nConcrete implementations of the Parameter class would then contain accept() functions which will call the visitor's visit(). They would return true.\nThe visitor class would make use of a templated visit() function so you would only override for the concrete Parameter types you care to support:\nclass Visitor\n{\npublic:\n template< class T > void visit( const T& param ) const\n {\n assert( false && \"this parameter type not specialised in the visitor\" );\n }\n void visit( const ParameterLimitedInt& ) const; // specialised implementations...\n}\n\nThus if accept() returns false, you know the concrete type for the Parameter has not implemented the visitor pattern yet (in case there is additional logic you would prefer to handle on a case by case basis). If the assert() in the visitor pattern triggers, its because its not visiting a Parameter type which you've implemented a specialisation for.\nOne downside to all of this is that unsupported visits are only caught at runtime.\n" ]
[ 4, 1, 0, 0, 0 ]
[]
[]
[ "c++", "design_patterns", "visitors" ]
stackoverflow_0000031913_c++_design_patterns_visitors.txt
Q: Good reasons for not letting the browser launch local applications I know this might be a no-brainer, but please read on. I also know it's generally not considered a good idea, maybe the worst, to let a browser run and interact with local apps, even in an intranet context. We use Citrix for home-office, and people really like it. Now, they would like the same kind of environment at work, a nice page where every important application/document/folder is nicely arranged and classified in an orderly fashion. These folks are not particularly tech savvy; I don't even consider thinking that they could understand the difference between remote delivered applications and local ones. So, I've been asked if it's possible. Of course, it is, with IE's good ol' ActiveX controls. And I even made a working prototype (that's where it hurts). But now, I doubt. Isn't it madness to allow such 'dangerous' ActiveX controls, even in the 'local intranet' zone? People will use the same browser to surf the web, can I fully trust IE? Isn't there a risk that Microsoft would just disable those controls in future updates/versions? What if a website, or any kind of malware, just put another site on the trust list? With that extent of control, you could as well uninstall every protection and just run amok 'till you got hanged by the IT dept. I'm about to confront my superiors with the fact that, even if they saw it is doable, it would be a very bad thing. So I'm desperately in need of good and strong arguments, because "let's don't" won't do it. Of course, if there is nothing to be scared of, that'll be nice too. But I strongly doubt that. A: We use Citrix for home-office, and people really like it. Now, they would like the same kind of environment at work, a nice page where every important application/document/folder is nicely arranged and classified in an orderly fashion I haven't used Citrix very many times, but what's it got to do with executing local applications? I don't see how "People like Citrix" and "browser executing local applications" relate at all? If the people are accessing your Citrix server from home, and want the same experience in the office, then buy a cheap PC, and run the exact same Citrix software they run on their home computers. Put this computer in the corner and tell them to go use it. They'll be overjoyed. Isn't it madness to allow such 'dangerous' ActiveX controls, even in the 'local intranet' zone ? People will use the same browser to surf the web, can I fully trust IE ? Put it this way. IE has built-in support for AX controls. It uses it's security mechanisms to prevent them from running unless in a trusted site. By default, no sites are trusted at all. If you use IE at all then you're putting yourself at the mercy of these security mechanisms. Whether or not you tell it to trust the local intranet is beside the point, and isn't going to affect the operation of any other zones. The good old security holes that require you to reboot your computer every few weeks when MS issues a patch will continue to exist and cause problems, regardless of whether you allow ActiveX in your local intranet. Isn't there a risk that Microsoft would just disable those controls in future updates / versions ? Since XP-SP2, Microsoft has been making it increasingly difficult to use ActiveX controls. I don't know how many scary looking warning messages and "This might destroy your computer" dialogs you have to click through these days to get them to run, but it's quite a few. This will only get worse over time. 
A: Microsoft is walking a fine line. On one hand, they regularly send ActiveX killbits with Windows Update to remove/disable applications that have been misbehaving. On the other hand, the latest version of Sharepoint 2007 (can't speak for earlier versions) allows for Office documents to be opened by clicking a link in the browser, and edited in the local application. When the edit is finished, the changes are transmitted back to the server and the webpage (generally) is refreshed. This is only an IE thing, as Firefox will throw up an error message. I can see the logic behind it, though. Until Microsoft gets all of their apps 'in the cloud', there are cases that need to bridge the gap between the old client-side apps and a more web-centric business environment. While there is likely a non-web workaround, more and more information workers have come to expect that a large portion of their work will be done in a browser. Anything that makes the integration with the desktop easier is not going to be opposed by anyone except the sysadmins. A: The standard citrix homepage (or how we use it) is a simple web page with program icons. Click on it, and the application get's delivered to you. People want the same thing, at work, with their applications/folders/documents. And because I'm a web developer, and they asked me, I do it with a web page... Perhaps I should pass the whole thing over to the VB guy.. Ahh... I know of 2 ways to accomplish this: You can embed internet explorer into an application, and hook into it and intercept certain kinds of URL's and so on I saw this done a few years ago - a telephony application embedded internet explorer in itself, and loaded some specially formatted webpages. In the webpage there was this: <a href="dial#1800-234-567">Call John Smith</a> Normally this would be a broken URL, but when the user clicked on this link, the application containing the embedded IE got notified, and proceeded to execute it's own custom code to dial the number from the URL. You could get your VB guy to write an application which basically just wraps IE, and has handlers for executing applications. You could then code normal webpages with links to just open applications, and the VB app would launch them. This allows you to write your own security stuff (like, only launch applications in a preset list, or so on) into the VB app, and because VB is launching them, not IE, none of the IE security issues will be involved. The second way is with browser plug-ins. For example, skype comes with a Firefox plug-in, which looks for phone-numbers in web-pages, and attaches special links to them. When you click on these links it invokes skype - you could conceivably do something similar for launching your citrix apps. You'd then be tied to firefox though. Writing plugins for IE is much harder than for FF, I wouldn't go down that path unless forced to.
Good reasons for not letting the browser launch local applications
I know this might be a no-brainer, but please read on. I also know it's generally not considered a good idea, maybe the worst, to let a browser run and interact with local apps, even in an intranet context. We use Citrix for home-office, and people really like it. Now, they would like the same kind of environment at work, a nice page where every important application/document/folder is nicely arranged and classified in an orderly fashion. These folks are not particularly tech savvy; I don't even consider thinking that they could understand the difference between remote delivered applications and local ones. So, I've been asked if it's possible. Of course, it is, with IE's good ol' ActiveX controls. And I even made a working prototype (that's where it hurts). But now, I doubt. Isn't it madness to allow such 'dangerous' ActiveX controls, even in the 'local intranet' zone? People will use the same browser to surf the web, can I fully trust IE? Isn't there a risk that Microsoft would just disable those controls in future updates/versions? What if a website, or any kind of malware, just put another site on the trust list? With that extent of control, you could as well uninstall every protection and just run amok 'till you got hanged by the IT dept. I'm about to confront my superiors with the fact that, even if they saw it is doable, it would be a very bad thing. So I'm desperately in need of good and strong arguments, because "let's don't" won't do it. Of course, if there is nothing to be scared of, that'll be nice too. But I strongly doubt that.
[ "\nWe use Citrix for home-office, and people really like it. Now, they would like the same kind of environment at work, a nice page where every important application/document/folder is nicely arranged and classified in an orderly fashion\n\nI haven't used Citrix very many times, but what's it got to do with executing local applications? I don't see how \"People like Citrix\" and \"browser executing local applications\" relate at all?\nIf the people are accessing your Citrix server from home, and want the same experience in the office, then buy a cheap PC, and run the exact same Citrix software they run on their home computers. Put this computer in the corner and tell them to go use it. They'll be overjoyed.\n\nIsn't it madness to allow such 'dangerous' ActiveX controls, even in the 'local intranet' zone ? People will use the same browser to surf the web, can I fully trust IE ?\n\nPut it this way. IE has built-in support for AX controls. It uses it's security mechanisms to prevent them from running unless in a trusted site. By default, no sites are trusted at all.\nIf you use IE at all then you're putting yourself at the mercy of these security mechanisms. Whether or not you tell it to trust the local intranet is beside the point, and isn't going to affect the operation of any other zones. \nThe good old security holes that require you to reboot your computer every few weeks when MS issues a patch will continue to exist and cause problems, regardless of whether you allow ActiveX in your local intranet.\n\nIsn't there a risk that Microsoft would just disable those controls in future updates / versions ?\n\nSince XP-SP2, Microsoft has been making it increasingly difficult to use ActiveX controls. I don't know how many scary looking warning messages and \"This might destroy your computer\" dialogs you have to click through these days to get them to run, but it's quite a few. This will only get worse over time.\n", "Microsoft is walking a fine line. On one hand, they regularly send ActiveX killbits with Windows Update to remove/disable applications that have been misbehaving. On the other hand, the latest version of Sharepoint 2007 (can't speak for earlier versions) allows for Office documents to be opened by clicking a link in the browser, and edited in the local application. When the edit is finished, the changes are transmitted back to the server and the webpage (generally) is refreshed. This is only an IE thing, as Firefox will throw up an error message.\nI can see the logic behind it, though. Until Microsoft gets all of their apps 'in the cloud', there are cases that need to bridge the gap between the old client-side apps and a more web-centric business environment. While there is likely a non-web workaround, more and more information workers have come to expect that a large portion of their work will be done in a browser. Anything that makes the integration with the desktop easier is not going to be opposed by anyone except the sysadmins.\n", "\nThe standard citrix homepage (or how we use it) is a simple web page with program icons. Click on it, and the application get's delivered to you. People want the same thing, at work, with their applications/folders/documents. And because I'm a web developer, and they asked me, I do it with a web page... Perhaps I should pass the whole thing over to the VB guy..\n\nAhh... 
I know of 2 ways to accomplish this:\nYou can embed internet explorer into an application, and hook into it and intercept certain kinds of URL's and so on\nI saw this done a few years ago - a telephony application embedded internet explorer in itself, and loaded some specially formatted webpages.\nIn the webpage there was this:\n<a href=\"dial#1800-234-567\">Call John Smith</a>\n\nNormally this would be a broken URL, but when the user clicked on this link, the application containing the embedded IE got notified, and proceeded to execute it's own custom code to dial the number from the URL.\nYou could get your VB guy to write an application which basically just wraps IE, and has handlers for executing applications. You could then code normal webpages with links to just open applications, and the VB app would launch them. This allows you to write your own security stuff (like, only launch applications in a preset list, or so on) into the VB app, and because VB is launching them, not IE, none of the IE security issues will be involved.\nThe second way is with browser plug-ins.\nFor example, skype comes with a Firefox plug-in, which looks for phone-numbers in web-pages, and attaches special links to them. When you click on these links it invokes skype - you could conceivably do something similar for launching your citrix apps.\nYou'd then be tied to firefox though. Writing plugins for IE is much harder than for FF, I wouldn't go down that path unless forced to.\n" ]
[ 4, 4, 1 ]
[]
[]
[ "activex", "internet_explorer", "intranet", "security" ]
stackoverflow_0000031865_activex_internet_explorer_intranet_security.txt
Q: Concurrent logins in a web farm I'm really asking this by proxy, another team at work has had a change request from our customer. The problem is that our customer doesn't want their employees to login with one user more than one at the same time. That they are getting locked out and sharing logins. Since this is on a web farm, what would be the best way to tackle this issue? Wouldn't caching to the database cause performance issues? A: You could look at using a distributed cache system like memcached It would solve this problem pretty well (it's MUCH faster than a database), and is also excellent for caching pretty much anything else too A: It's just a cost of doing business. Yes, caching to a database is slower than caching on your webserver. But you've got to store that state information in a centralized location, otherwise one webserver isn't going to know what users are logged into another. Assumption: You're trying to prevent multiple concurrent log-ins by a single user. A: A database operation at login and logout won't cause a performance problem. If you are using a caching proxy, that will cause a problem: a user will log out, but won't be able to log back in until the logout reaches the cache Your biggest potential problem might be: if the app/box crashes without a chance for the user to log out, the user's state in the database will remain "logged in". A: It depends on how the authentication is done. If you store the last successful login datetime (whatever the backend), so maybe you can change the schema to store a flag "logged_in" and that won't involve an extra performance cost. (ok, it's not clean at all)
Concurrent logins in a web farm
I'm really asking this by proxy, another team at work has had a change request from our customer. The problem is that our customer doesn't want their employees to log in with one user more than once at the same time. They are getting locked out and sharing logins. Since this is on a web farm, what would be the best way to tackle this issue? Wouldn't caching to the database cause performance issues?
[ "You could look at using a distributed cache system like memcached\nIt would solve this problem pretty well (it's MUCH faster than a database), and is also excellent for caching pretty much anything else too\n", "It's just a cost of doing business.\nYes, caching to a database is slower than caching on your webserver. But you've got to store that state information in a centralized location, otherwise one webserver isn't going to know what users are logged into another.\nAssumption: You're trying to prevent multiple concurrent log-ins by a single user.\n", "A database operation at login and logout won't cause a performance problem.\n\nIf you are using a caching proxy, that will cause a problem:\na user will log out, but won't be able to log back in until the logout reaches the cache\n\nYour biggest potential problem might be:\n\nif the app/box crashes without a chance for the user to log out, the user's state in the database will remain \"logged in\".\n\n", "It depends on how the authentication is done. If you store the last successful login datetime (whatever the backend), so maybe you can change the schema to store a flag \"logged_in\" and that won't involve an extra performance cost. (ok, it's not clean at all)\n" ]
[ 7, 6, 3, 1 ]
[]
[]
[ "asp.net", "authentication" ]
stackoverflow_0000033619_asp.net_authentication.txt
Q: How does GPS in a mobile phone work exactly? I assume it doesn't connect to anything (other than the satelite I guess), is this right? Or it does and has some kind of charge? A: GPS, the Global Positioning System run by the United States Military, is free for civilian use, though the reality is that we're paying for it with tax dollars. However, GPS on cell phones is a bit more murky. In general, it won't cost you anything to turn on the GPS in your cell phone, but when you get a location it usually involves the cell phone company in order to get it quickly with little signal, as well as get a location when the satellites aren't visible (since the gov't requires a fix even if the satellites aren't visible for emergency 911 purposes). It uses up some cellular bandwidth. This also means that for phones without a regular GPS receiver, you cannot use the GPS at all if you don't have cell phone service. For this reason most cell phone companies have the GPS in the phone turned off except for emergency calls and for services they sell you (such as directions). This particular kind of GPS is called assisted GPS (AGPS), and there are several levels of assistance used. GPS A normal GPS receiver listens to a particular frequency for radio signals. Satellites send time coded messages at this frequency. Each satellite has an atomic clock, and sends the current exact time as well. The GPS receiver figures out which satellites it can hear, and then starts gathering those messages. The messages include time, current satellite positions, and a few other bits of information. The message stream is slow - this is to save power, and also because all the satellites transmit on the same frequency and they're easier to pick out if they go slow. Because of this, and the amount of information needed to operate well, it can take 30-60 seconds to get a location on a regular GPS. When it knows the position and time code of at least 3 satellites, a GPS receiver can assume it's on the earth's surface and get a good reading. 4 satellites are needed if you aren't on the ground and you want altitude as well. AGPS As you saw above, it can take a long time to get a position fix with a normal GPS. There are ways to speed this up, but unless you're carrying an atomic clock with you all the time, or leave the GPS on all the time, then there's always going to be a delay of between 5-60 seconds before you get a location. In order to save cost, most cell phones share the GPS receiver components with the cellular components, and you can't get a fix and talk at the same time. People don't like that (especially when there's an emergency) so the lowest form of GPS does the following: Get some information from the cell phone company to feed to the GPS receiver - some of this is gross positioning information based on what cellular towers can 'hear' your phone, so by this time they already phone your location to within a city block or so. Switch from cellular to GPS receiver for 0.1 second (or some small, practically unoticable period of time) and collect the raw GPS data (no processing on the phone). Switch back to the phone mode, and send the raw data to the phone company The phone company processes that data (acts as an offline GPS receiver) and send the location back to your phone. This saves a lot of money on the phone design, but it has a heavy load on cellular bandwidth, and with a lot of requests coming it requires a lot of fast servers. Still, overall it can be cheaper and faster to implement. 
They are reluctant, however, to release GPS based features on these phones due to this load - so you won't see turn by turn navigation here. More recent designs include a full GPS chip. They still get data from the phone company - such as current location based on tower positioning, and current satellite locations - this provides sub 1 second fix times. This information is only needed once, and the GPS can keep track of everything after that with very little power. If the cellular network is unavailable, then they can still get a fix after awhile. If the GPS satellites aren't visible to the receiver, then they can still get a rough fix from the cellular towers. But to completely answer your question - it's as free as the phone company lets it be, and so far they do not charge for it at all. I doubt that's going to change in the future. In the higher end phones with a full GPS receiver you may even be able to load your own software and access it, such as with mologogo on a motorola iDen phone - the J2ME development kit is free, and the phone is only $40 (prepaid phone with $5 credit). Unlimited internet is about $10 a month, so for $40 to start and $10 a month you can get an internet tracking system. (Prices circa August 2008) It's only going to get cheaper and more full featured from here on out... Re: Google maps and such Yes, Google maps and all other cell phone mapping systems require a data connection of some sort at varying times during usage. When you move far enough in one direction, for instance, it'll request new tiles from its server. Your average phone doesn't have enough storage to hold a map of the US, nor the processor power to render it nicely. iPhone would be able to if you wanted to use the storage space up with maps, but given that most iPhones have a full time unlimited data plan most users would rather use that space for other things. A: There's 3 satellites at least that you must be able to receive from of the 24-32 out there, and they each broadcast a time from a synchronized atomic clock. The differences in those times that you receive at any one time tell you how long the broadcast took to reach you, and thus where you are in relation to the satellites. So, it sort of reads from something, but it doesn't connect to that thing. Note that this doesn't tell you your orientation, many GPSes fake that (and speed) by interpolating data points. If you don't count the cost of the receiver, it's a free service. Apparently there's higher resolution services out there that are restricted to military use. Those are likely a fixed cost for a license to decrypt the signals along with a confidentiality agreement. Now your device may support GPS tracking, in which case it might communicate, say via GPRS, to a database which will store the location the device has found itself to be at, so that multiple devices may be tracked. That would require some kind of connection. Maps are either stored on the device or received over a connection. Navigation is computed based on those maps' databases. These likely are a licensed item with a cost associated, though if you use a service like Google Maps they have the license with NAVTEQ and others.
How does GPS in a mobile phone work exactly?
I assume it doesn't connect to anything (other than the satellite I guess), is this right? Or it does and has some kind of charge?
[ "GPS, the Global Positioning System run by the United States Military, is free for civilian use, though the reality is that we're paying for it with tax dollars.\nHowever, GPS on cell phones is a bit more murky. In general, it won't cost you anything to turn on the GPS in your cell phone, but when you get a location it usually involves the cell phone company in order to get it quickly with little signal, as well as get a location when the satellites aren't visible (since the gov't requires a fix even if the satellites aren't visible for emergency 911 purposes). It uses up some cellular bandwidth. This also means that for phones without a regular GPS receiver, you cannot use the GPS at all if you don't have cell phone service.\nFor this reason most cell phone companies have the GPS in the phone turned off except for emergency calls and for services they sell you (such as directions).\nThis particular kind of GPS is called assisted GPS (AGPS), and there are several levels of assistance used.\nGPS\nA normal GPS receiver listens to a particular frequency for radio signals. Satellites send time coded messages at this frequency. Each satellite has an atomic clock, and sends the current exact time as well.\nThe GPS receiver figures out which satellites it can hear, and then starts gathering those messages. The messages include time, current satellite positions, and a few other bits of information. The message stream is slow - this is to save power, and also because all the satellites transmit on the same frequency and they're easier to pick out if they go slow. Because of this, and the amount of information needed to operate well, it can take 30-60 seconds to get a location on a regular GPS.\nWhen it knows the position and time code of at least 3 satellites, a GPS receiver can assume it's on the earth's surface and get a good reading. 4 satellites are needed if you aren't on the ground and you want altitude as well.\nAGPS\nAs you saw above, it can take a long time to get a position fix with a normal GPS. There are ways to speed this up, but unless you're carrying an atomic clock with you all the time, or leave the GPS on all the time, then there's always going to be a delay of between 5-60 seconds before you get a location.\nIn order to save cost, most cell phones share the GPS receiver components with the cellular components, and you can't get a fix and talk at the same time. People don't like that (especially when there's an emergency) so the lowest form of GPS does the following:\n\nGet some information from the cell phone company to feed to the GPS receiver - some of this is gross positioning information based on what cellular towers can 'hear' your phone, so by this time they already phone your location to within a city block or so.\nSwitch from cellular to GPS receiver for 0.1 second (or some small, practically unoticable period of time) and collect the raw GPS data (no processing on the phone).\nSwitch back to the phone mode, and send the raw data to the phone company\nThe phone company processes that data (acts as an offline GPS receiver) and send the location back to your phone.\n\nThis saves a lot of money on the phone design, but it has a heavy load on cellular bandwidth, and with a lot of requests coming it requires a lot of fast servers. Still, overall it can be cheaper and faster to implement. They are reluctant, however, to release GPS based features on these phones due to this load - so you won't see turn by turn navigation here.\nMore recent designs include a full GPS chip. 
They still get data from the phone company - such as current location based on tower positioning, and current satellite locations - this provides sub 1 second fix times. This information is only needed once, and the GPS can keep track of everything after that with very little power. If the cellular network is unavailable, then they can still get a fix after awhile. If the GPS satellites aren't visible to the receiver, then they can still get a rough fix from the cellular towers.\nBut to completely answer your question - it's as free as the phone company lets it be, and so far they do not charge for it at all. I doubt that's going to change in the future. In the higher end phones with a full GPS receiver you may even be able to load your own software and access it, such as with mologogo on a motorola iDen phone - the J2ME development kit is free, and the phone is only $40 (prepaid phone with $5 credit). Unlimited internet is about $10 a month, so for $40 to start and $10 a month you can get an internet tracking system. (Prices circa August 2008)\nIt's only going to get cheaper and more full featured from here on out...\nRe: Google maps and such\nYes, Google maps and all other cell phone mapping systems require a data connection of some sort at varying times during usage. When you move far enough in one direction, for instance, it'll request new tiles from its server. Your average phone doesn't have enough storage to hold a map of the US, nor the processor power to render it nicely. iPhone would be able to if you wanted to use the storage space up with maps, but given that most iPhones have a full time unlimited data plan most users would rather use that space for other things.\n", "There's 3 satellites at least that you must be able to receive from of the 24-32 out there, and they each broadcast a time from a synchronized atomic clock. The differences in those times that you receive at any one time tell you how long the broadcast took to reach you, and thus where you are in relation to the satellites. So, it sort of reads from something, but it doesn't connect to that thing. Note that this doesn't tell you your orientation, many GPSes fake that (and speed) by interpolating data points.\nIf you don't count the cost of the receiver, it's a free service. Apparently there's higher resolution services out there that are restricted to military use. Those are likely a fixed cost for a license to decrypt the signals along with a confidentiality agreement.\nNow your device may support GPS tracking, in which case it might communicate, say via GPRS, to a database which will store the location the device has found itself to be at, so that multiple devices may be tracked. That would require some kind of connection.\nMaps are either stored on the device or received over a connection. Navigation is computed based on those maps' databases. These likely are a licensed item with a cost associated, though if you use a service like Google Maps they have the license with NAVTEQ and others.\n" ]
[ 56, 12 ]
[]
[]
[ "gps", "mobile_phones" ]
stackoverflow_0000033637_gps_mobile_phones.txt
Q: Pattern recognition algorithms In the past I had to develop a program which acted as a rule evaluator. You had an antecedent and some consecuents (actions) so if the antecedent evaled to true the actions where performed. At that time I used a modified version of the RETE algorithm (there are three versions of RETE only the first being public) for the antecedent pattern matching. We're talking about a big system here with million of operations per rule and some operators "repeated" in several rules. It's possible I'll have to implement it all over again in other language and, even though I'm experienced in RETE, does anyone know of other pattern matching algorithms? Any suggestions or should I keep using RETE? A: The TREAT algorithm is similar to RETE, but doesn't record partial matches. As a result, it may use less memory than RETE in certain situations. Also, if you modify a significant number of the known facts, then TREAT can be much faster because you don't have to spend time on retractions. There's also RETE* which balances between RETE and TREAT by saving some join node state depending on how much memory you want to use. So you still save some assertion time, but also get memory and retraction time savings depending on how you tune your system. You may also want to check out LEAPS, which uses a lazy evaluation scheme and incorporates elements of both RETE and TREAT. I only have personal experience with RETE, but it seems like RETE* or LEAPS are the better, more flexible choices.
Pattern recognition algorithms
In the past I had to develop a program which acted as a rule evaluator. You had an antecedent and some consequents (actions) so if the antecedent evaluated to true the actions were performed. At that time I used a modified version of the RETE algorithm (there are three versions of RETE only the first being public) for the antecedent pattern matching. We're talking about a big system here with millions of operations per rule and some operators "repeated" in several rules. It's possible I'll have to implement it all over again in another language and, even though I'm experienced in RETE, does anyone know of other pattern matching algorithms? Any suggestions or should I keep using RETE?
[ "The TREAT algorithm is similar to RETE, but doesn't record partial matches. As a result, it may use less memory than RETE in certain situations. Also, if you modify a significant number of the known facts, then TREAT can be much faster because you don't have to spend time on retractions.\nThere's also RETE* which balances between RETE and TREAT by saving some join node state depending on how much memory you want to use. So you still save some assertion time, but also get memory and retraction time savings depending on how you tune your system.\nYou may also want to check out LEAPS, which uses a lazy evaluation scheme and incorporates elements of both RETE and TREAT.\nI only have personal experience with RETE, but it seems like RETE* or LEAPS are the better, more flexible choices.\n" ]
[ 5 ]
[]
[]
[ "algorithm", "pattern_recognition" ]
stackoverflow_0000033076_algorithm_pattern_recognition.txt
Q: My (Java/Swing) MouseListener isn't listening, help me figure out why So I've got a JPanel implementing MouseListener and MouseMotionListener: import javax.swing.*; import java.awt.*; import java.awt.event.*; public class DisplayArea extends JPanel implements MouseListener, MouseMotionListener { public DisplayArea(Rectangle bounds, Display display) { setLayout(null); setBounds(bounds); setOpaque(false); setPreferredSize(new Dimension(bounds.width, bounds.height)); this.display = display; } public void paintComponent(Graphics g) { Graphics2D g2 = (Graphics2D)g; if (display.getControlPanel().Antialiasing()) { g2.addRenderingHints(new RenderingHints(RenderingHints.KEY_ANTIALIASING, RenderingHints.VALUE_ANTIALIAS_ON)); } g2.setColor(Color.white); g2.fillRect(0, 0, getWidth(), getHeight()); } public void mousePressed(MouseEvent event) { System.out.println("mousePressed()"); mx1 = event.getX(); my1 = event.getY(); } public void mouseReleased(MouseEvent event) { System.out.println("mouseReleased()"); mx2 = event.getX(); my2 = event.getY(); int mode = display.getControlPanel().Mode(); switch (mode) { case ControlPanel.LINE: System.out.println("Line from " + mx1 + ", " + my1 + " to " + mx2 + ", " + my2 + "."); } } public void mouseEntered(MouseEvent event) { System.out.println("mouseEntered()"); } public void mouseExited(MouseEvent event) { System.out.println("mouseExited()"); } public void mouseClicked(MouseEvent event) { System.out.println("mouseClicked()"); } public void mouseMoved(MouseEvent event) { System.out.println("mouseMoved()"); } public void mouseDragged(MouseEvent event) { System.out.println("mouseDragged()"); } private Display display = null; private int mx1 = -1; private int my1 = -1; private int mx2 = -1; private int my2 = -1; } The trouble is, none of these mouse functions are ever called. DisplayArea is created like this: da = new DisplayArea(new Rectangle(CONTROL_WIDTH, 0, DISPLAY_WIDTH, DISPLAY_HEIGHT), this); I am not really a Java programmer (this is part of an assignment), but I can't see anything glaringly obvious. Can someone smarter than I see anything? A: The implements mouselistener, mousemotionlistener just allows the displayArea class to listen to some, to be defined, Swing component's mouse events. You have to explicitly define what it should be listening at. So I suppose you could add something like this to the constructor: this.addMouseListener(this); this.addMouseMotionListener(this); A: I don't see anywhere in the code where you call addMouseListener(this) or addMouseMotionListener(this) for the DisplayArea in order for it to subscribe to those events. A: I don't see any code here to register to the mouse listeners. You have to call addMouseListener(this) and addMouseMotionListener(this) on the DisplayArea.
My (Java/Swing) MouseListener isn't listening, help me figure out why
So I've got a JPanel implementing MouseListener and MouseMotionListener: import javax.swing.*; import java.awt.*; import java.awt.event.*; public class DisplayArea extends JPanel implements MouseListener, MouseMotionListener { public DisplayArea(Rectangle bounds, Display display) { setLayout(null); setBounds(bounds); setOpaque(false); setPreferredSize(new Dimension(bounds.width, bounds.height)); this.display = display; } public void paintComponent(Graphics g) { Graphics2D g2 = (Graphics2D)g; if (display.getControlPanel().Antialiasing()) { g2.addRenderingHints(new RenderingHints(RenderingHints.KEY_ANTIALIASING, RenderingHints.VALUE_ANTIALIAS_ON)); } g2.setColor(Color.white); g2.fillRect(0, 0, getWidth(), getHeight()); } public void mousePressed(MouseEvent event) { System.out.println("mousePressed()"); mx1 = event.getX(); my1 = event.getY(); } public void mouseReleased(MouseEvent event) { System.out.println("mouseReleased()"); mx2 = event.getX(); my2 = event.getY(); int mode = display.getControlPanel().Mode(); switch (mode) { case ControlPanel.LINE: System.out.println("Line from " + mx1 + ", " + my1 + " to " + mx2 + ", " + my2 + "."); } } public void mouseEntered(MouseEvent event) { System.out.println("mouseEntered()"); } public void mouseExited(MouseEvent event) { System.out.println("mouseExited()"); } public void mouseClicked(MouseEvent event) { System.out.println("mouseClicked()"); } public void mouseMoved(MouseEvent event) { System.out.println("mouseMoved()"); } public void mouseDragged(MouseEvent event) { System.out.println("mouseDragged()"); } private Display display = null; private int mx1 = -1; private int my1 = -1; private int mx2 = -1; private int my2 = -1; } The trouble is, none of these mouse functions are ever called. DisplayArea is created like this: da = new DisplayArea(new Rectangle(CONTROL_WIDTH, 0, DISPLAY_WIDTH, DISPLAY_HEIGHT), this); I am not really a Java programmer (this is part of an assignment), but I can't see anything glaringly obvious. Can someone smarter than I see anything?
[ "The implements mouselistener, mousemotionlistener just allows the displayArea class to listen to some, to be defined, Swing component's mouse events. You have to explicitly define what it should be listening at. So I suppose you could add something like this to the constructor:\nthis.addMouseListener(this);\nthis.addMouseMotionListener(this);\n\n", "I don't see anywhere in the code where you call addMouseListener(this) or addMouseMotionListener(this) for the DisplayArea in order for it to subscribe to those events. \n", "I don't see any code here to register to the mouse listeners. You have to call addMouseListener(this) and addMouseMotionListener(this) on the DisplayArea.\n" ]
[ 13, 3, 3 ]
[]
[]
[ "actionlistener", "java", "mouselistener", "swing" ]
stackoverflow_0000033708_actionlistener_java_mouselistener_swing.txt
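A self-contained sketch of the fix described in the answers above: the listeners are registered in the constructor, which is the step missing from the original code. The class name DisplayAreaDemo, the use of MouseAdapter, and the main method are illustrative choices rather than part of the question's code.

import javax.swing.*;
import java.awt.*;
import java.awt.event.*;

// A JPanel that actually subscribes to its own mouse events.
public class DisplayAreaDemo extends JPanel {
    public DisplayAreaDemo() {
        setPreferredSize(new Dimension(400, 300));
        MouseAdapter handler = new MouseAdapter() {
            @Override public void mousePressed(MouseEvent e) {
                System.out.println("mousePressed() at " + e.getX() + ", " + e.getY());
            }
            @Override public void mouseReleased(MouseEvent e) {
                System.out.println("mouseReleased() at " + e.getX() + ", " + e.getY());
            }
            @Override public void mouseDragged(MouseEvent e) {
                System.out.println("mouseDragged()");
            }
        };
        // The missing step: register the listeners so the events are delivered.
        addMouseListener(handler);
        addMouseMotionListener(handler);
    }

    public static void main(String[] args) {
        SwingUtilities.invokeLater(() -> {
            JFrame frame = new JFrame("DisplayArea demo");
            frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            frame.add(new DisplayAreaDemo());
            frame.pack();
            frame.setVisible(true);
        });
    }
}

The original class can keep implements MouseListener, MouseMotionListener instead of MouseAdapter; the only change it needs is the pair of addMouseListener(this) / addMouseMotionListener(this) calls in its constructor, exactly as the answers say.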
Q: Can SlickEdit automatically update its tag files? I prefer SlickEdit for my IDE but the only way I can get it to update the tag files to incorporate code changes is to recreate the project and/or run start a re-tag manually. Is there a way to setup Slick Edit so that it automatically incorporates changes in the code base that happen after project creation. This problem is especially noticeable when working on large shared code bases where I must check out files that have been modified by other users. A: Okay, I asked a question on the SlickEdit forums. http://community.slickedit.com/index.php?topic=3854.0 EDIT: Winnar! Options->Editing->Background Tagging of Other Files
Can SlickEdit automatically update its tag files?
I prefer SlickEdit for my IDE, but the only way I can get it to update the tag files to incorporate code changes is to recreate the project and/or start a re-tag manually. Is there a way to set up SlickEdit so that it automatically incorporates changes in the code base that happen after project creation? This problem is especially noticeable when working on large shared code bases where I must check out files that have been modified by other users.
[ "Okay, I asked a question on the SlickEdit forums.\nhttp://community.slickedit.com/index.php?topic=3854.0\nEDIT: Winnar! Options->Editing->Background Tagging of Other Files\n" ]
[ 6 ]
[]
[]
[ "ide", "slickedit" ]
stackoverflow_0000028843_ide_slickedit.txt
Q: How do I add a MIME type to .htaccess? I would like to add the following MIME type to a site run by Apache: <mime-mapping> <extension>jnlp</extension> <mime-type>application/x-java-jnlp-file</mime-type> </mime-mapping> That is the Tomcat format. I'm on a shared host, so I can only create an .htaccess file. Would someone please specify the complete contents of such a file? A: AddType application/x-java-jnlp-file .jnlp Note that you might not actually be allowed to do that. See also the documentation of the AddType directive and the .htaccess howto. A: You should be able to just add this line: AddType application/x-java-jnlp-file .jnlp
How do I add a MIME type to .htaccess?
I would like to add the following MIME type to a site run by Apache: <mime-mapping> <extension>jnlp</extension> <mime-type>application/x-java-jnlp-file</mime-type> </mime-mapping> That is the Tomcat format. I'm on a shared host, so I can only create an .htaccess file. Would someone please specify the complete contents of such a file?
[ "AddType application/x-java-jnlp-file .jnlp\n\nNote that you might not actually be allowed to do that.\nSee also the documentation of the AddType directive and the .htaccess howto.\n", "You should be able to just add this line:\nAddType application/x-java-jnlp-file .jnlp\n\n" ]
[ 11, 4 ]
[]
[]
[ ".htaccess", "apache", "java", "jnlp", "mime" ]
stackoverflow_0000033751_.htaccess_apache_java_jnlp_mime.txt
Q: How can I use `scp` to deploy a website's `.htaccess` file? I am currently using the following command to upload my site content: scp -r web/* user@site.com:site.com/ This works great except that the .htaccess file is not sent. Presumably, this is because it's hidden. I have tried adding a second line to send the file explicitely: scp -r web/.htaccess user@site.com:site.com/.htaccess This works great except now I have to enter my password twice. Any thoughts on how to make this deploy with only 1 or 0 entries of my password? A: Just combine the two commands: scp -r web/* web/.htaccess user@site.com:site.com/ If you want 0 entries of your password you can set up public key authentication for ssh/scp. A: Some background info: the * wildcard does not match so-called "dot-files" (i.e. files whose name begins with a dot). Some shells allow you to set an option, so that it will match dot-files, however, doing that is asking for a lot of pain: now * will also match . (the current directory) and .. (the parent directory), which is usually not what is intended and can be quite surprising! (rm -rf * deleting the parent directory is probably not the best way to start a day ...) A: A word of caution - don't attempt to match dotted files (like .htaccess) with .* - this inconveniently also matches .., and would result in copying all the files on the path to the root directory. I did this once (with rm, no less!) and I had to rebuild the server because I'd messed with /var. @jwmittag: I just did a test on Ubuntu and .* matches when I use cp. Here's an example: root@krash:/# mkdir a root@krash:/# mkdir b root@krash:/# mkdir a/c root@krash:/# touch a/d root@krash:/# touch a/c/e root@krash:/# cp -r a/c/.* b cp: will not create hard link `b/c' to directory `b/.' root@krash:/# ls b d e If .* did not match .., then d shouldn't be in b.
How can I use `scp` to deploy a website's `.htaccess` file?
I am currently using the following command to upload my site content: scp -r web/* user@site.com:site.com/ This works great except that the .htaccess file is not sent. Presumably, this is because it's hidden. I have tried adding a second line to send the file explicitly: scp -r web/.htaccess user@site.com:site.com/.htaccess This works great except now I have to enter my password twice. Any thoughts on how to make this deploy with only one entry of my password, or none at all?
[ "Just combine the two commands:\nscp -r web/* web/.htaccess user@site.com:site.com/\n\nIf you want 0 entries of your password you can set up public key authentication for ssh/scp.\n", "Some background info: the * wildcard does not match so-called \"dot-files\" (i.e. files whose name begins with a dot).\nSome shells allow you to set an option, so that it will match dot-files, however, doing that is asking for a lot of pain: now * will also match . (the current directory) and .. (the parent directory), which is usually not what is intended and can be quite surprising! (rm -rf * deleting the parent directory is probably not the best way to start a day ...)\n", "A word of caution - don't attempt to match dotted files (like .htaccess) with .* - this inconveniently also matches .., and would result in copying all the files on the path to the root directory. I did this once (with rm, no less!) and I had to rebuild the server because I'd messed with /var.\n@jwmittag:\nI just did a test on Ubuntu and .* matches when I use cp. Here's an example:\nroot@krash:/# mkdir a\nroot@krash:/# mkdir b\nroot@krash:/# mkdir a/c\nroot@krash:/# touch a/d\nroot@krash:/# touch a/c/e\nroot@krash:/# cp -r a/c/.* b\ncp: will not create hard link `b/c' to directory `b/.'\nroot@krash:/# ls b\nd e\n\nIf .* did not match .., then d shouldn't be in b.\n" ]
[ 9, 4, 3 ]
[]
[]
[ ".htaccess", "deployment", "scp", "shared_hosting" ]
stackoverflow_0000033790_.htaccess_deployment_scp_shared_hosting.txt
Q: Could you make a case for using Berkeley DB XML I'm trying to read through the documentation on Berkeley DB XML, and I think I could really use a developer's blog post or synopsis of when they had a problem that found the XML layer atop Berkeley DB was the exact prescription for. Maybe I'm not getting it, but it seems like they're both in-process DBs, and ultimately you will parse your XML into objects or data, so why not start by storing your data parsed, rather than as XML? A: Ultimately I want my data stored in some reasonable format. If that data started as XML and I want to retrieve it/them using XQuery, without the XML layer, I have to write a lot of code to do the XQuery by myself, and perhaps even worse to know my XML well enough to be able to have a reasonable storage system for it. Conversely, so long as the performance of the system allows, I can forget about that part of the back end, and just worry about my XML document and up (i.e. to the user) level and leave the rest as a black box. It gives me the B-DB storage goodness, but I get to use it from a document-centric perspective.
Could you make a case for using Berkeley DB XML
I'm trying to read through the documentation on Berkeley DB XML, and I think I could really use a developer's blog post or synopsis describing a problem for which the XML layer atop Berkeley DB turned out to be the exact prescription. Maybe I'm not getting it, but it seems like they're both in-process DBs, and ultimately you will parse your XML into objects or data, so why not start by storing your data parsed, rather than as XML?
[ "Ultimately I want my data stored in some reasonable format.\nIf that data started as XML and I want to retrieve it/them using XQuery, without the XML layer, I have to write a lot of code to do the XQuery by myself, and perhaps even worse to know my XML well enough to be able to have a reasonable storage system for it.\nConversely, so long as the performance of the system allows, I can forget about that part of the back end, and just worry about my XML document and up (i.e. to the user) level and leave the rest as a black box. It gives me the B-DB storage goodness, but I get to use it from a document-centric perspective.\n" ]
[ 6 ]
[]
[]
[ "berkeley_db", "berkeley_db_xml" ]
stackoverflow_0000033495_berkeley_db_berkeley_db_xml.txt
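A rough Java illustration of the point made in the answer above: once the XML is in a dbxml container, XQuery does the retrieval, so there is no hand-written query layer to maintain. The class and method names (XmlManager, XmlContainer, putDocument, query) follow the Berkeley DB XML Java bindings as best I recall them and should be checked against the API docs for the installed version; the document and query strings are made up for the example.

import com.sleepycat.dbxml.*;

public class DbXmlSketch {
    public static void main(String[] args) throws XmlException {
        XmlManager mgr = new XmlManager();
        XmlContainer container = mgr.createContainer("docs.dbxml");

        // Store a document as XML, without deciding up front how to shred it.
        String doc = "<place><name lang=\"en\">Rome</name><name lang=\"it\">Roma</name></place>";
        container.putDocument("rome", doc, mgr.createUpdateContext());

        // Retrieval is a plain XQuery expression rather than custom code.
        XmlQueryContext qc = mgr.createQueryContext();
        XmlResults results = mgr.query(
            "collection('docs.dbxml')/place[name[@lang='en'] = 'Rome']/name[@lang='it']/string()", qc);
        while (results.hasNext()) {
            System.out.println(results.next().asString()); // expected: Roma
        }

        results.delete();
        container.delete();
        mgr.delete();
    }
}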
Q: How do I detect if a function is available during JNLP execution? I have an application which really should be installed, but does work fine when deployed using JNLP. However, it would seem that some Java functions such as Runtime.exec don't work using the default security options. I would like to therefore disable UI functionality that relies upon such functions. So my question is, how do I detect at runtime whether certain functions are available or not? The case study, here of course, is Runtime.exec. A: You want to ask to the SecurityManager if you have Exec right with the checkExec method. A: I have also found that adding the following to the JNLP file: <security> <all-permissions/> </security> And signing the JAR file allows the app to run with all the permissions needed for Runtime.exec. A: For the specific example of Runtime.exec there is a method on the SecurityManager class checkExec(String cmd) that will throw an exception that can be caught to determine if the necessary command can be executed. For more information see the javadoc for Runtime.exec and SecurityManager.checkExec. The more general case requires creating a Permission object representing the task being checked and running SecurityManager's checkPermission method.
How do I detect if a function is available during JNLP execution?
I have an application which really should be installed, but does work fine when deployed using JNLP. However, it would seem that some Java functions such as Runtime.exec don't work using the default security options. I would like to therefore disable UI functionality that relies upon such functions. So my question is, how do I detect at runtime whether certain functions are available or not? The case study, here of course, is Runtime.exec.
[ "You want to ask to the SecurityManager if you have Exec right with the checkExec method.\n", "I have also found that adding the following to the JNLP file:\n<security>\n <all-permissions/>\n</security>\n\nAnd signing the JAR file allows the app to run with all the permissions needed for Runtime.exec.\n", "For the specific example of Runtime.exec there is a method on the SecurityManager class checkExec(String cmd) that will throw an exception that can be caught to determine if the necessary command can be executed. For more information see the javadoc for Runtime.exec and SecurityManager.checkExec.\nThe more general case requires creating a Permission object representing the task being checked and running SecurityManager's checkPermission method. \n" ]
[ 1, 1, 0 ]
[]
[]
[ "deployment", "java", "jnlp", "permissions", "security" ]
stackoverflow_0000033829_deployment_java_jnlp_permissions_security.txt
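A small sketch of the runtime check the answers describe: ask the current SecurityManager whether an exec call would be permitted and fall back gracefully when it is not. Only System.getSecurityManager and SecurityManager.checkExec come from the standard API discussed above; the helper class and method names are invented for the example.

// Probe whether Runtime.exec would be allowed, so UI features that shell out
// can be disabled up front in a sandboxed JNLP deployment.
public final class ExecCapability {

    public static boolean canExec(String command) {
        SecurityManager sm = System.getSecurityManager();
        if (sm == null) {
            return true; // no security manager installed, e.g. a plain local run
        }
        try {
            sm.checkExec(command);
            return true;
        } catch (SecurityException e) {
            return false; // default (unsigned) Web Start sandbox denies exec
        }
    }

    public static void main(String[] args) {
        System.out.println("exec allowed: " + canExec("notepad"));
    }
}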
Q: How can I find the revision history of the file that was deleted and then resubmitted to SVN? This is a follow on question to "How do I delete 1 file from a revision in SVN?" but because it probably has a very different answer and I believe that others would benefit from knowing the answer. (I don't know the answer yet.) The previous question was answered and I discovered that it is not possible to remove a revision from SVN. The second best solution was to remove the file from SVN, commit and then add the file back and commit again. I now want to make sure that the original file's revision history has gone. So I am hoping that the answer to the question "How can I find the revision history of the file that was deleted and then resubmitted to SVN?" is that you can't. A: What makes you think that it is not possible to remove a revision from Subversion? The solution given to your other question (svndumpfilter) does exactly that (see the parameters --drop-empty-revs and --renumber-revs)! And when the revision is gone, there's obviously no way to get at the revision history, because it was never there in the first place. A: With a simple svn log -v [folder] you can browse quickly the adding and deletion. ------------------------------------------------------------------------ r14 | kame | 2008-08-29 04:23:43 +0200 (ven., 29 aoû2008) | 1 line Chemins modifié : A /a.txt Readded a ------------------------------------------------------------------------ r13 | kame | 2008-08-29 04:23:24 +0200 (ven., 29 aoû2008) | 1 line Chemins modifié : D /a.txt Delete a ------------------------------------------------------------------------ r12 | kame | 2008-08-29 04:23:06 +0200 (ven., 29 aoû2008) | 1 line Chemins modifié : A /a.txt svn log won't show the file, svn diff will pretend that the old revision does not exist, but a svn checkout targeting the old revision will happily give you the old file. A: Short answer: you can Long answer: Unfortunately (for you but perhaps not for most folks) , the revision history for a deleted file is still there - it's just a little harder to get at. Here's an example: $ touch one $ svn add one $ svn ci -m "Added file one" $ date >> one $ svn ci -m "Updated file one" $ date >> one $ svn ci -m "Updated file one again" $ svn log file:///repos/one ------------------------------------------------------------------------ r3 | andrewr | 2008-08-29 12:27:10 +1000 (Fri, 29 Aug 2008) | 1 line Updated file one again ------------------------------------------------------------------------ r2 | andrewr | 2008-08-29 12:26:50 +1000 (Fri, 29 Aug 2008) | 1 line Updated file one ------------------------------------------------------------------------ r1 | andrewr | 2008-08-29 12:25:07 +1000 (Fri, 29 Aug 2008) | 1 line Added file one ------------------------------------------------------------------------ $ svn delete one $ svn ci -m "Deleted file one" $ svn up $ touch one $ svn add one $ svn ci -m "Adding file one back in" $ svn log file:///repos/one ------------------------------------------------------------------------ r5 | andrewr | 2008-08-29 12:29:13 +1000 (Fri, 29 Aug 2008) | 1 line add one back ------------------------------------------------------------------------ It looks like it works (the old history is gone), but if you request the file at older revisions you get the history of the deleted file. $ svn log -r 3:1 file:///repos/one ------------------------------------------------------------------------ r3 | andrewr | 2008-08-29 12:27:10 +1000 (Fri, 29 Aug 2008) | 1 line Updated file one again ------------------------------------------------------------------------ r2 | andrewr | 2008-08-29 12:26:50 +1000 (Fri, 29 Aug 2008) | 1 line Updated file one ------------------------------------------------------------------------ r1 | andrewr | 2008-08-29 12:25:07 +1000 (Fri, 29 Aug 2008) | 1 line Added file one ------------------------------------------------------------------------ A: I would have said you can't - you have created a new file and thus revision tree in the eyes of SVN. It may be possible to recover the old tree independently (not sure if you managed an actual delete or just SVN Delete) but there is no link between the old revision tree and the new one.
How can I find the revision history of the file that was deleted and then resubmitted to SVN?
This is a follow on question to "How do I delete 1 file from a revision in SVN?" but because it probably has a very different answer and I believe that others would benefit from knowing the answer. (I don't know the answer yet.) The previous question was answered and I discovered that it is not possible to remove a revision from SVN. The second best solution was to remove the file from SVN, commit and then add the file back and commit again. I now want to make sure that the original file's revision history has gone. So I am hoping that the answer to the question "How can I find the revision history of the file that was deleted and then resubmitted to SVN?" is that you can't.
[ "What makes you think that it is not possible to remove a revision from Subversion? The solution given to your other question (svndumpfilter) does exactly that (see the parameters --drop-empty-revs and --renumber-revs)! And when the revision is gone, there's obviously no way to get at the revision history, because it was never there in the first place.\n", "With a simple \nsvn log -v [folder]\n\nyou can browse quickly the adding and deletion.\n------------------------------------------------------------------------\nr14 | kame | 2008-08-29 04:23:43 +0200 (ven., 29 aoû2008) | 1 line\nChemins modifié :\n A /a.txt\n\nReadded a\n------------------------------------------------------------------------\nr13 | kame | 2008-08-29 04:23:24 +0200 (ven., 29 aoû2008) | 1 line\nChemins modifié :\n D /a.txt\n\nDelete a\n------------------------------------------------------------------------\nr12 | kame | 2008-08-29 04:23:06 +0200 (ven., 29 aoû2008) | 1 line\nChemins modifié :\n A /a.txt\n\nsvn log won't show the file, svn diff will pretend that the old revision does not exist, but a svn checkout targeting the old revision will happily give you the old file.\n", "Short answer: you can\nLong answer:\nUnfortunately (for you but perhaps not for most folks) , the revision history for a deleted file is still there - it's just a little harder to get at.\nHere's an example:\n$ touch one\n$ svn add one\n$ svn ci -m \"Added file one\"\n$ date >> one \n$ svn ci -m \"Updated file one\"\n$ date >> one\n$ svn ci -m \"Updated file one again\"\n$ svn log file:///repos/one\n\n------------------------------------------------------------------------\nr3 | andrewr | 2008-08-29 12:27:10 +1000 (Fri, 29 Aug 2008) | 1 line\n\nUpdated file one again\n------------------------------------------------------------------------\nr2 | andrewr | 2008-08-29 12:26:50 +1000 (Fri, 29 Aug 2008) | 1 line\n\nUpdated file one\n------------------------------------------------------------------------\nr1 | andrewr | 2008-08-29 12:25:07 +1000 (Fri, 29 Aug 2008) | 1 line\n\nAdded file one\n------------------------------------------------------------------------\n\n$ svn delete one\n$ svn ci -m \"Deleted file one\"\n$ svn up\n$ touch one\n$ svn add one\n$ svn ci -m \"Adding file one back in\"\n$ svn log file:///repos/one\n\n------------------------------------------------------------------------\nr5 | andrewr | 2008-08-29 12:29:13 +1000 (Fri, 29 Aug 2008) | 1 line\n\nadd one back\n------------------------------------------------------------------------\n\nIt looks like it works (the old history is gone), but if you request the file at older revisions you get the history\nof the deleted file.\n$ svn log -r 3:1 file:///repos/one\n\n------------------------------------------------------------------------\nr3 | andrewr | 2008-08-29 12:27:10 +1000 (Fri, 29 Aug 2008) | 1 line\n\nUpdated file one again\n------------------------------------------------------------------------\nr2 | andrewr | 2008-08-29 12:26:50 +1000 (Fri, 29 Aug 2008) | 1 line\n\nUpdated file one\n------------------------------------------------------------------------\nr1 | andrewr | 2008-08-29 12:25:07 +1000 (Fri, 29 Aug 2008) | 1 line\n\nAdded file one\n------------------------------------------------------------------------\n\n", "I would have said you can't - you have created a new file and thus revision tree in the eyes of SVN.\nIt may be possible to recover the old tree independently (not sure if you managed an actual delete or just SVN Delete) but there is no link between the 
old revision tree and the new one.\n" ]
[ 3, 2, 2, 1 ]
[]
[]
[ "svn" ]
stackoverflow_0000033836_svn.txt
Q: How do you handle versioning on a Web Application? What are the strategies for versioning of a web application/ website? I notice that here in the Beta there is an svn revision number in the footer and that's ideal for an application that uses svn over one repository. But what if you use externals or a different source control application that versions separate files? It seems easy for a Desktop app, but I can't seem to find a suitable way of versioning for an asp.net web application. NB I'm not sure that I have been totally clear with my question. What I want to know is how to build and auto increment a version number for an asp.net application. I'm not interested in how to link it with svn. A: I think what you are looking for is something like this: How to auto-increment assembly version using a custom MSBuild task. It's a little old but I think it will work. A: For my big apps I just use a incrementing version number id (1.0, 1.1, ...) that i store in a comment of the main file (usually index.php). For just websites I usually just have a revision number (1,2,3,...). A: I have a tendency to stick with basic integers at first (1,2,3), moving onto rational numbers (2.1, 3.13) when things get bigger... Tried using fruit at one point, that works well for a small office. Oh, the 'banana' release? looks over in the corner "yeah... that's getting pretty old now..." Unfortunately, confusion started to set in when the development team grew, is it an Orange, or Mandarin, or Tangelo? It looks ok. What do you mean "rotten on the inside?" ... but in all honesty. Setup a separate repository as a master, development goes on in various repositories. For every scheduled release everything is checked into the master repository so that you can quickly roll back when something goes wrong. (I'm assuming dev/test/production are all separate servers, and dev is never allowed to touch production or the master repository....) A: I maintain a system of web applications with various components that live in separate SVN repos. To be able to version track the system as a whole, I have another SVN repo which contains all other repos as external references. It also contains install / setup script(s) to deploy the whole thing. With that setup, the SVN revision number of the "metarepository" could possibly be used for versioning the complete system. In another case, I include the SVN revision via SVN keywords in a class file that serves no other purpose (to avoid the risk of keyword substitution breaking my code). The class in that file contains a string variable that is manipulated by SVN and parsed by a class method. An inconvenience with both approaches is that the revision number is not automatically updated by changes in the externals (approach 1) or the rest of the code (approach 2). A: During internal development, I'm using milestone numbers (M1, M2, M3...). After release, I'll probably just update dates ("the January 2009 update").
How do you handle versioning on a Web Application?
What are the strategies for versioning of a web application/ website? I notice that here in the Beta there is an svn revision number in the footer and that's ideal for an application that uses svn over one repository. But what if you use externals or a different source control application that versions separate files? It seems easy for a Desktop app, but I can't seem to find a suitable way of versioning for an asp.net web application. NB I'm not sure that I have been totally clear with my question. What I want to know is how to build and auto increment a version number for an asp.net application. I'm not interested in how to link it with svn.
[ "I think what you are looking for is something like this: How to auto-increment assembly version using a custom MSBuild task. It's a little old but I think it will work.\n", "For my big apps I just use a incrementing version number id (1.0, 1.1, ...) that i store in a comment of the main file (usually index.php).\nFor just websites I usually just have a revision number (1,2,3,...).\n", "I have a tendency to stick with basic integers at first (1,2,3), moving onto rational numbers (2.1, 3.13) when things get bigger... \nTried using fruit at one point, that works well for a small office. Oh, the 'banana' release? looks over in the corner \"yeah... that's getting pretty old now...\"\nUnfortunately, confusion started to set in when the development team grew, is it an Orange, or Mandarin, or Tangelo? It looks ok. What do you mean \"rotten on the inside?\"\n... but in all honesty. Setup a separate repository as a master, development goes on in various repositories. For every scheduled release everything is checked into the master repository so that you can quickly roll back when something goes wrong. \n(I'm assuming dev/test/production are all separate servers, and dev is never allowed to touch production or the master repository....)\n", "I maintain a system of web applications with various components that live in separate SVN repos. To be able to version track the system as a whole, I have another SVN repo which contains all other repos as external references. It also contains install / setup script(s) to deploy the whole thing. With that setup, the SVN revision number of the \"metarepository\" could possibly be used for versioning the complete system.\nIn another case, I include the SVN revision via SVN keywords in a class file that serves no other purpose (to avoid the risk of keyword substitution breaking my code). The class in that file contains a string variable that is manipulated by SVN and parsed by a class method.\nAn inconvenience with both approaches is that the revision number is not automatically updated by changes in the externals (approach 1) or the rest of the code (approach 2).\n", "During internal development, I'm using milestone numbers (M1, M2, M3...). After release, I'll probably just update dates (\"the January 2009 update\").\n" ]
[ 4, 2, 2, 0, 0 ]
[]
[]
[ "asp.net", "version_control", "versioning" ]
stackoverflow_0000029802_asp.net_version_control_versioning.txt
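One answer above mentions embedding the SVN revision through keyword substitution in a class file that exists only for that purpose. A rough Java sketch of that idea follows; the class name and the parsing are invented for the illustration, and the file needs the svn:keywords property set to Revision for Subversion to perform the substitution.

// A class whose sole job is to carry an SVN-substituted keyword.
// Requires: svn propset svn:keywords Revision BuildVersion.java
public final class BuildVersion {

    // Subversion expands $Revision$ to e.g. "$Revision: 1234 $" on checkout/update
    // once the svn:keywords property is set on this file.
    private static final String RAW = "$Revision$";

    // Returns just the number (e.g. "1234"), or an empty string before the
    // keyword has ever been expanded.
    public static String revision() {
        return RAW.replace("$Revision:", "").replace("$Revision", "").replace("$", "").trim();
    }

    public static void main(String[] args) {
        System.out.println("Deployed revision: " + revision());
    }
}

As the answer notes, the number reflects the last change to that file rather than the repository head, so it will not move when unrelated files change.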
Q: ASP.NET AJAX and PageRequestManagerParserErrorException Has anyone run into this error message before when using a timer on an ASP.NET page to update a DataGrid every x seconds? Searching google yielded this blog entry and many more but nothing that seems to apply to me yet. The full text of the error message below: Sys.WebForms.PageRequestManagerParserErrorException: The message received from the server could not be parsed. Common causes for this error are when the response is modified by calls to Response.Write(), response filters, HttpModules, or server trace is enabled. A: Many issues can cause that error. It's usually a Response.Write call, but anything that modifies the response can cause it. We probably won't be able to help you unless you post some pertinent code-behind. A: The RoleProvider sets a cookie to cache role information in a cookie. When the cookie resets during an asynch post back from AJAX, you will get this error. The solution is to either set the cookieTimeout in the roleManager section of your web.config to a very large number of minutes, or set the cacheRolesInCookie=false. This was a known bug in AJAX 1.0 Extensions. I'm not sure if it was fixed in future releases, and I should have mentioned that I was using AJAX 1.0 extensions in VS2008 targeting the 2.0 framework. Happy programming! A: Regarding the formatting of your post: If you use the quote-button instead of code-button, people do not have to scroll to see the complete error message.
ASP.NET AJAX and PageRequestManagerParserErrorException
Has anyone run into this error message before when using a timer on an ASP.NET page to update a DataGrid every x seconds? Searching google yielded this blog entry and many more but nothing that seems to apply to me yet. The full text of the error message below: Sys.WebForms.PageRequestManagerParserErrorException: The message received from the server could not be parsed. Common causes for this error are when the response is modified by calls to Response.Write(), response filters, HttpModules, or server trace is enabled.
[ "Many issues can cause that error. It's usually a Response.Write call, but anything that modifies the response can cause it. \nWe probably won't be able to help you unless you post some pertinent code-behind.\n", "The RoleProvider sets a cookie to cache role information in a cookie. When the cookie resets during an asynch post back from AJAX, you will get this error. The solution is to either set the cookieTimeout in the roleManager section of your web.config to a very large number of minutes, or set the cacheRolesInCookie=false.\nThis was a known bug in AJAX 1.0 Extensions. I'm not sure if it was fixed in future releases, and I should have mentioned that I was using AJAX 1.0 extensions in VS2008 targeting the 2.0 framework.\nHappy programming!\n", "Regarding the formatting of your post: If you use the quote-button instead of code-button, people do not have to scroll to see the complete error message.\n" ]
[ 3, 2, 1 ]
[]
[]
[ "asp.net" ]
stackoverflow_0000027535_asp.net.txt
Q: Can I export translations of place names from freebase.com So I've looked at this use of the freebase API and I was really impressed with the translations of the name that it found. IE Rome, Roma, Rom, Rzym, Rooma,로마, 罗马市. This is because I have a database of some 5000+ location names and I would very much like all French, German or Korean translations for these English names. The problem is I spent about two hours clicking around freebase, and could find no way to get a view of city/location names in a different language mapped to English. So I'd love it if someone who understands what freebase is and how it's organized could get me a link to that view which theoretically I could then export. Also I just wanted to share this question because I'm totally impressed with freebase and think if people haven't looked at it they should. A: The link you posted uses mjt, a javascript framework designed for Freebase. The Query they use. mjt.freebase.MqlRead([{ limit: 100, id:qid, /* allow fuzzy matches in the value for more results... */ /* 'q:name': {'value~=': qname, value:null, lang: '/lang/'+qlang}, */ 'q:name': {value: qname, lang: '/lang/'+qlang}, type: '/common/topic', name: [{ value:null, lang:{ id:null, name:{ value:null, lang:'/lang/en', optional:true }, 'q:name':{ value:null, lang:'/lang/'+qlang, optional:true } } }], article: [{id:null, limit:1}], image: [{id:null, limit:1, optional:true}], creator: null, timestamp:null }]) Where: qlang - is your desired language to translate too. qname - is is the location to query. To get the link you want, you'll need the API, and you can convert the above query to a link that will return a JSON object containing the translated string. A: The query [{ limit: 100, type: '/location/location', name: [{ value: null, lang: { name: { value: null, lang: '/lang/en', }, } }], }]; returns for every location and every language, the name of that location in that language. The results are organized by language. For example, here is a very small segment of the return value: { 'lang': { 'name': { 'lang': '/lang/en', 'value': 'Russian' } }, 'value': 'Сан-Франциско' }, { 'lang': { 'name': { 'lang': '/lang/en', 'value': 'Swedish' } }, 'value': 'San Francisco' }, { 'lang': { 'name': { 'lang': '/lang/en', 'value': 'Portuguese' } }, 'value': 'São Francisco (Califórnia)' }, For a no-programming solution, copy-paste the following into an HTML file and open it with your browser: <html><head> <script type="text/javascript" src="http://mjtemplate.org/dist/mjt-0.6/mjt.js"></script> </head> <body onload="mjt.run()"> <div mjt.task="q"> mjt.freebase.MqlRead([{ limit: 10, type: '/location/location', name: [{ value:null, lang:{ name:{ value:null, lang:'/lang/en', }, } }], }]) </div> <table><tr mjt.for="topic in q.result"><td> <table><tr mjt.for="(var rowi = 0; rowi &lt; topic.name.length; rowi++)" mjt.if="rowi &lt; topic.name.length" style="padding-left:2em"><td> <pre mjt.script=""> var name = topic.name[rowi]; </pre> ${(name.lang['q:name']||name.lang.name).value}: </td><td>$name.value</td></tr></table></td></tr></table></body></html> Of course, that will only include the first 10 results. Up the limit above if you want more. (By the way, not only is Freebase cool, so is this mjt templating language!)
Can I export translations of place names from freebase.com
So I've looked at this use of the Freebase API and I was really impressed with the translations of the name that it found: Rome, Roma, Rom, Rzym, Rooma, 로마, 罗马市. This is because I have a database of some 5000+ location names and I would very much like all French, German or Korean translations for these English names. The problem is I spent about two hours clicking around Freebase, and could find no way to get a view of city/location names in a different language mapped to English. So I'd love it if someone who understands what Freebase is and how it's organized could get me a link to that view, which theoretically I could then export. Also I just wanted to share this question because I'm totally impressed with Freebase and think that if people haven't looked at it, they should.
[ "The link you posted uses mjt, a javascript framework designed for Freebase. \nThe Query they use.\n mjt.freebase.MqlRead([{\n limit: 100,\n id:qid,\n /* allow fuzzy matches in the value for more results... */\n /* 'q:name': {'value~=': qname, value:null, lang: '/lang/'+qlang}, */\n 'q:name': {value: qname, lang: '/lang/'+qlang},\n\n type: '/common/topic',\n name: [{\n value:null,\n lang:{\n id:null,\n name:{\n value:null,\n lang:'/lang/en',\n optional:true\n },\n 'q:name':{\n value:null,\n lang:'/lang/'+qlang,\n optional:true\n }\n }\n }],\n article: [{id:null, limit:1}],\n image: [{id:null, limit:1, optional:true}],\n creator: null,\n timestamp:null\n }]) \n\nWhere:\nqlang - is your desired language to translate too. \nqname - is is the location to query.\nTo get the link you want, you'll need the API, and you can convert the above query to a link that will return a JSON object containing the translated string.\n", "The query\n[{\n limit: 100,\n type: '/location/location',\n name: [{\n value: null,\n lang: {\n name: {\n value: null,\n lang: '/lang/en',\n },\n }\n }],\n}];\n\nreturns for every location and every language, the name of that location in that language. The results are organized by language. For example, here is a very small segment of the return value:\n {\n 'lang': {\n 'name': {\n 'lang': '/lang/en',\n 'value': 'Russian'\n }\n },\n 'value': 'Сан-Франциско'\n },\n {\n 'lang': {\n 'name': {\n 'lang': '/lang/en',\n 'value': 'Swedish'\n }\n },\n 'value': 'San Francisco'\n },\n {\n 'lang': {\n 'name': {\n 'lang': '/lang/en',\n 'value': 'Portuguese'\n }\n },\n 'value': 'São Francisco (Califórnia)'\n },\n\nFor a no-programming solution, copy-paste the following into an HTML file and open it with your browser:\n<html><head>\n<script type=\"text/javascript\" src=\"http://mjtemplate.org/dist/mjt-0.6/mjt.js\"></script>\n</head>\n<body onload=\"mjt.run()\">\n<div mjt.task=\"q\">\n mjt.freebase.MqlRead([{\n limit: 10,\n type: '/location/location',\n name: [{\n value:null,\n lang:{\n name:{\n value:null,\n lang:'/lang/en',\n },\n }\n }],\n }]) \n</div>\n\n<table><tr mjt.for=\"topic in q.result\"><td>\n<table><tr mjt.for=\"(var rowi = 0; rowi &lt; topic.name.length; rowi++)\"\n mjt.if=\"rowi &lt; topic.name.length\" style=\"padding-left:2em\"><td>\n <pre mjt.script=\"\">\n var name = topic.name[rowi];\n </pre>\n ${(name.lang['q:name']||name.lang.name).value}:\n</td><td>$name.value</td></tr></table></td></tr></table></body></html>\n\nOf course, that will only include the first 10 results. Up the limit above if you want more. (By the way, not only is Freebase cool, so is this mjt templating language!)\n" ]
[ 4, 4 ]
[]
[]
[ "freebase", "translation" ]
stackoverflow_0000033484_freebase_translation.txt