Q: Normalizing a Table with Low Integrity

I've been handed a table with about 18000 rows. Each record describes the location of one customer. The issue is that when the person created the table, they did not add a field for "Company Name", only "Location Name," and one company can have many locations. For example, here are some records that describe the same customer:

Location Table
ID  Location_Name
1   TownShop#1
2   Town Shop - Loc 2
3   The Town Shop
4   TTS - Someplace
5   Town Shop,the 3
6   Toen Shop4

My goal is to make it look like:

Location Table
ID  Company_ID  Location_Name
1   1           Town Shop#1
2   1           Town Shop - Loc 2
3   1           The Town Shop
4   1           TTS - Someplace
5   1           Town Shop,the 3
6   1           Toen Shop4

Company Table
Company_ID  Company_Name
1           The Town Shop

There is no "Company" table; I will have to generate the Company Name list from the most descriptive or best Location Name that represents the multiple locations. Currently I am thinking I need to generate a list of Location Names that are similar, and then go through that list by hand. Any suggestions on how I can approach this are appreciated.

@Neall, thank you for your statement, but unfortunately each location name is distinct; there are no duplicate location names, only similar ones. So in the results from your statement, "repcount" is 1 in each row.

@yukondude, your step 4 is the heart of my question.

A: I've had to do this before. The only real way to do it is to manually match up the various locations. Use your database's console interface and grouping select statements. First, add your "Company Name" field. Then:

SELECT count(*) AS repcount, "Location Name" FROM mytable
WHERE "Company Name" IS NULL
GROUP BY "Location Name"
ORDER BY repcount DESC
LIMIT 5;

Figure out what company the location at the top of the list belongs to and then update your company name field with an UPDATE ... WHERE "Location Name" = "The Location" statement.

P.S. - You should really break your company names and location names out into separate tables and refer to them by their primary keys.

Update: - Wow - no duplicates? How many records do you have?

A: Please update the question: do you have a list of CompanyNames available to you? I ask because you may be able to use the Levenshtein algo to find a relationship between your list of CompanyNames and LocationNames.

Update: There is not a list of Company Names; I will have to generate the company name from the most descriptive or best Location Name that represents the multiple locations.

Okay... try this:

*Build a list of candidate CompanyNames by finding LocationNames made up of mostly or all alphabetic characters. You can use regular expressions for this. Store this list in a separate table.
*Sort that list alphabetically and (manually) determine which entries should be CompanyNames.
*Compare each CompanyName to each LocationName and come up with a match score (use Levenshtein or some other string matching algo). Store the result in a separate table.
*Set a threshold score such that any MatchScore < Threshold will not be considered a match for a given CompanyName.
*Manually vet through the LocationNames by CompanyName | LocationName | MatchScore, and figure out which ones actually match. Ordering by MatchScore should make the process less painful.

The whole purpose of the above actions is to automate parts and limit the scope of your problem. It's far from perfect, but will hopefully save you the trouble of going through 18K records by hand.
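To make the match-scoring step concrete, here is a minimal C# sketch of the compare-and-threshold idea, assuming a plain Levenshtein edit distance normalized by string length. The 0.5 threshold and the sample data are illustrative assumptions only, not tested recommendations:

using System;

class NameMatcher
{
    // Classic dynamic-programming Levenshtein edit distance.
    static int Levenshtein(string a, string b)
    {
        int[,] d = new int[a.Length + 1, b.Length + 1];
        for (int i = 0; i <= a.Length; i++) d[i, 0] = i;
        for (int j = 0; j <= b.Length; j++) d[0, j] = j;
        for (int i = 1; i <= a.Length; i++)
        {
            for (int j = 1; j <= b.Length; j++)
            {
                int cost = (a[i - 1] == b[j - 1]) ? 0 : 1;
                d[i, j] = Math.Min(Math.Min(d[i - 1, j] + 1, d[i, j - 1] + 1),
                                   d[i - 1, j - 1] + cost);
            }
        }
        return d[a.Length, b.Length];
    }

    // Normalized similarity in [0, 1]; 1.0 means an exact match.
    static double MatchScore(string company, string location)
    {
        string c = company.ToLowerInvariant();
        string l = location.ToLowerInvariant();
        int longest = Math.Max(c.Length, l.Length);
        return longest == 0 ? 1.0 : 1.0 - (double)Levenshtein(c, l) / longest;
    }

    static void Main()
    {
        string company = "The Town Shop"; // a candidate CompanyName from the sorted list
        string[] locations = { "TownShop#1", "Town Shop - Loc 2", "The Town Shop",
                               "TTS - Someplace", "Town Shop,the 3", "Toen Shop4" };
        double threshold = 0.5; // tune by inspecting real scores

        foreach (string loc in locations)
        {
            double score = MatchScore(company, loc);
            Console.WriteLine("{0:F2}  {1}  {2}", score, loc,
                              score >= threshold ? "candidate match" : "");
        }
    }
}

Note that an acronym like "TTS - Someplace" scores poorly against "The Town Shop" under pure edit distance, which is why the manual vetting pass in the last step is still needed.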
A: I was going to recommend some complicated token matching algorithm, but it's really tricky to get right, and if your data does not have a lot of correlation (typos, etc) then it's not going to give very good results. I would recommend you submit a job to the Amazon Mechanical Turk and let a human sort it out.

A: Ideally, you'd probably want a separate table named Company and then a company_id column in this "Location" table that is a foreign key to the Company table's primary key, likely called id. That would avoid a fair bit of text duplication in this table (over 18,000 rows, an integer foreign key would save quite a bit of space over a varchar column). But you're still faced with a method for loading that Company table and then properly associating it with the rows in Location. There's no general solution, but you could do something along these lines:

*Create the Company table, with an id column that auto-increments (depends on your RDBMS).
*Find all of the unique company names and insert them into Company.
*Add a column, company_id, to Location that accepts NULLs (for now) and that is a foreign key of the Company.id column.
*For each row in Location, determine the corresponding company, and UPDATE that row's company_id column with that company's id. This is likely the most challenging step. If your data is like what you show in the example, you'll likely have to take many runs at this with various string matching approaches.
*Once all rows in Location have a company_id value, you can ALTER the Location table to add a NOT NULL constraint to the company_id column (assuming that every location must have a company, which seems reasonable).

If you can make a copy of your Location table, you can gradually build up a series of SQL statements to populate the company_id foreign key. If you make a mistake, you can just start over and rerun the script up to the point of failure.

A: Yes, that step 4 from my previous post is a doozy. No matter what, you're probably going to have to do some of this by hand, but you may be able to automate the bulk of it. For the example locations you gave, a query like the following would set the appropriate company_id value:

UPDATE Location
SET Company_ID = 1
WHERE (LOWER(Location_Name) LIKE '%to_n shop%'
    OR LOWER(Location_Name) LIKE '%tts%')
  AND Company_ID IS NULL;

I believe that would match your examples (I added the IS NULL part to not overwrite previously set Company_ID values), but of course in 18,000 rows you're going to have to be pretty inventive to handle the various combinations. Something else that might help would be to use the names in Company to generate queries like the one above. You could do something like the following (in MySQL):

SELECT CONCAT('UPDATE Location SET Company_ID = ', Company_ID,
              ' WHERE LOWER(Location_Name) LIKE ''%',
              REPLACE(LOWER(Company_Name), ' ', '%'),
              '%'' AND Company_ID IS NULL;')
FROM Company;

Then just run the statements that it produces. That could do a lot of the grunge work for you.
{ "language": "en", "url": "https://stackoverflow.com/questions/6110", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: What is the best way to store connection string in .NET DLLs? The application my team is currently developing has a DLL that is used to perform all database access. The application can not use a trusted connection because the database is behind a firewall and the domain server is not. So it appears that the connection string needs to have a DB username and password. The DLL currently has the database connection string hard coded, but I don't want to do this when we launch as the assembly can be disassembled and the username and password would be right there in the open. One of the requirements is that the password needs to be changed once every few months, so we would need to roll that out to our internal user base. Is there a way to store the password encrypted in such a way we can easily distribute to the entire user base without storing it in the assembly? UPDATE: Thanks to everyone who's answered. I'll try to answer some of the questions back to me... The data DLL is used by both ASP.NET WebForms and VB.NET WinForms. I understand that Applications can have their own config files, but I haven't seen anything on config files for DLLs. Unfortunately, I can't get to the Jon Galloway post at work so I can't judge if that will work. From a development standpoint, we don't want to use web services inhouse, but may be providing them to third parties sometime next year. I don't think impersonation will work because we can't authenticate the user through the firewall. As a user (or former user) can be an attacker, we're keeping it from everyone! A: Just assume that the bad guys WILL get the credentials out of your config file. This means that they'd be able to login to your database and do whatever that user is capable of. So just make sure that user can't do anything bad like access the tables directly. Make that user only capable of executing certain stored procedures and you'll be in better shape. This is one place that sprocs shine. A: I'm not certain, but I believe you can put it in a config file and encrypt the config file. Update: See Jon Galloway's post here. A: I hate to say this but as soon as you put something on a client machine, security for that data goes out the window. If your program is going to decrypt that string, you need to assume that an attacker can do the same. Attaching a debugger to your program would be one way. Storing the connection string on a server, and obtaining it through a web connection sounds good, until you realize that you need security on that web connection as well, otherwise an attacker could just as well impersonate your program and talk to the web connection. Let me ask a question. Who are you hiding the connection string from? The user or an attacker? And if the user, why? A: If the app is an ASP.NET app then just encrypt the connection strings section of your web.config. If the app is a client application running on multiple machines, instead of storing the connection string locally, consider using a web service or some other kind of secure mechanism to store it centrally. This would facilitate easier updates in the future and you're not storing the connection string locally. Just some thoughts. Updated: @lassevk "Storing the connection string on a server, and obtaining it through a web connection sounds good, until you realize that you need security on that web connection as well, otherwise an attacker could just as well impersonate your program and talk to the web connection." Security on the web service was implicit. 
Depending on the type of deployment there are numerous options...for example client side certificates.

A: There are some other ideas also. You can always use impersonation. Also, you can use the Enterprise Library's (Common Library):

<section name="enterpriseLibrary.ConfigurationSource" type="Microsoft.Practices.EnterpriseLibrary.Common.Configuration.ConfigurationSourceSection, Microsoft.Practices.EnterpriseLibrary.Common, Version=3.1.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" />

<enterpriseLibrary.ConfigurationSource selectedSource="Common">
    <sources>
        <add name="Common" type="Microsoft.Practices.EnterpriseLibrary.Common.Configuration.FileConfigurationSource, Microsoft.Practices.EnterpriseLibrary.Common, Version=3.1.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" filePath="Config\Exception.config" />
    </sources>
</enterpriseLibrary.ConfigurationSource>

A: .NET supports encryption on config values like this. You could leave it in a config file, but encrypted.

A: You want to be able to distribute the DLL with all of the setup information being in a configurable place, but the fact is you can't have one of the handy-dandy .NET config files for a DLL unless you do something custom. Maybe you need to rethink what responsibility your DLL should have. Would it be possible, or make sense, to require that the connection string be passed in by the user of your library? Does it really make sense that your DLL reads a config file?

A: Several options:

*Store in web.config and encrypt
*Store in dll and obfuscate (dotfuscator)
*Store one in web.config (encrypted of course) and rest in the database (if you have to use multiple and encryption/decryption becomes a pain)
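For the "store it encrypted" suggestions above, here is a minimal sketch using Windows DPAPI via System.Security.Cryptography.ProtectedData (requires a reference to System.Security.dll). The entropy constant and the choice of LocalMachine scope are illustrative assumptions; as noted earlier in the thread, anything a client program can decrypt is ultimately recoverable by an attacker on that machine:

using System;
using System.Security.Cryptography;
using System.Text;

static class ConnectionStringProtector
{
    // The entropy value is an illustrative assumption; treat it as a salt, not a secret.
    static readonly byte[] Entropy = Encoding.UTF8.GetBytes("example-entropy");

    public static string Encrypt(string plainConnectionString)
    {
        byte[] cipher = ProtectedData.Protect(
            Encoding.UTF8.GetBytes(plainConnectionString),
            Entropy,
            DataProtectionScope.LocalMachine); // any process on this machine can decrypt
        return Convert.ToBase64String(cipher); // safe to put in a config file
    }

    public static string Decrypt(string base64Cipher)
    {
        byte[] plain = ProtectedData.Unprotect(
            Convert.FromBase64String(base64Cipher),
            Entropy,
            DataProtectionScope.LocalMachine);
        return Encoding.UTF8.GetString(plain);
    }
}

With CurrentUser scope instead, the blob can only be decrypted under the same user account, which pairs well with the locked-down, sproc-only database user suggested in the first answer.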
{ "language": "en", "url": "https://stackoverflow.com/questions/6113", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "17" }
Q: How do you handle huge if-conditions? It's something that's bugged me in every language I've used: I have an if statement, but the conditional part has so many checks that I have to split it over multiple lines, use a nested if statement, or just accept that it's ugly and move on with my life. Are there any other methods that you've found that might be of use to me and anybody else that's hit the same problem?

Example, all on one line:

if (var1 == true && var2 == true && var3 == true && var4 == true && var5 == true && var6 == true) {

Example, multi-line:

if (var1 == true && var2 == true &&
    var3 == true && var4 == true &&
    var5 == true && var6 == true) {

Example, nested:

if (var1 == true && var2 == true && var3 == true) {
    if (var4 == true && var5 == true && var6 == true) {

A: There are two issues to address here: readability and understandability.

The "readability" solution is a style issue and as such is open to interpretation. My preference is this:

if (var1 == true && // Explanation of the check
    var2 == true && // Explanation of the check
    var3 == true && // Explanation of the check
    var4 == true && // Explanation of the check
    var5 == true && // Explanation of the check
    var6 == true)   // Explanation of the check
{
}

or this:

if (var1 && // Explanation of the check
    var2 && // Explanation of the check
    var3 && // Explanation of the check
    var4 && // Explanation of the check
    var5 && // Explanation of the check
    var6)   // Explanation of the check
{
}

That said, this kind of complex check can be quite difficult to mentally parse while scanning the code (especially if you are not the original author). Consider creating a helper method to abstract some of the complexity away:

/// <Summary>
/// Tests whether all the conditions are appropriately met
/// </Summary>
private bool AreAllConditionsMet(
    bool var1, bool var2, bool var3, bool var4, bool var5, bool var6)
{
    return (var1 && // Explanation of the check
            var2 && // Explanation of the check
            var3 && // Explanation of the check
            var4 && // Explanation of the check
            var5 && // Explanation of the check
            var6);  // Explanation of the check
}

private void SomeMethod()
{
    // Do some stuff (including declare the required variables)
    if (AreAllConditionsMet(var1, var2, var3, var4, var5, var6))
    {
        // Do something
    }
}

Now when visually scanning the "SomeMethod" method, the actual complexity of the test logic is hidden but the semantic meaning is preserved for humans to understand at a high level. If the developer really needs to understand the details, the AreAllConditionsMet method can be examined. This is formally known as the "Decompose Conditional" refactoring pattern, I think. Tools like Resharper or Refactor Pro! can make doing this kind of refactoring easy!

In all cases, the key to having readable and understandable code is to use realistic variable names. While I understand this is a contrived example, "var1", "var2", etc. are not acceptable variable names. They should have a name which reflects the underlying nature of the data they represent.

A: Separate the condition into several booleans and then use a master boolean as the condition.

bool isOpaque = obj.Alpha == 1.0f;
bool isDrawable = obj.CanDraw && obj.Layer == currentLayer;
bool isHidden = hideList.Find(obj);
bool isVisible = isOpaque && isDrawable && !isHidden;
if (isVisible)
{
    // ...
}

Better yet:

public bool IsVisible
{
    get
    {
        bool isOpaque = obj.Alpha == 1.0f;
        bool isDrawable = obj.CanDraw && obj.Layer == currentLayer;
        bool isHidden = hideList.Find(obj);
        return isOpaque && isDrawable && !isHidden;
    }
}

void Draw()
{
    if (IsVisible)
    {
        // ...
    }
}

Make sure you give your variables names that actually indicate intention rather than function. This will greatly help the developer maintaining your code... it could be YOU!

A: I'll often split these up into component boolean variables:

bool orderValid = orderDate < DateTime.Now && orderStatus != Status.Canceled;
bool custValid = customerBalance == 0 && customerName != "Mike";
if (orderValid && custValid) { ...

A: First, I'd remove all the == true parts, that would make it 50% shorter ;) When I have a big condition I search for the reasons. Sometimes I see I should use polymorphism, sometimes I need to add some state object. Basically, it implies a refactoring is needed (a code smell). Sometimes I use De Morgan's laws to simplify boolean expressions a bit.

A: Check out Implementation Patterns by Kent Beck. There is a particular pattern I am thinking of that may help in this situation... it is called "Guards". Rather than having tons of conditions, you can break them out into a guard, which makes it clear which are the adverse conditions in a method. So for example, if you have a method that does something, but there are certain conditions where it shouldn't do something, rather than:

public void doSomething() {
    if (condition1 && condition2 && condition3 && condition4) {
        // do something
    }
}

You could change it to:

public void doSomething() {
    if (!condition1) { return; }
    if (!condition2) { return; }
    if (!condition3) { return; }
    if (!condition4) { return; }
    // do something
}

It's a bit more verbose, but a lot more readable, especially when you start having weird nesting; the guard can help (combined with extracting methods). I HIGHLY recommend that book by the way.

A: I've seen a lot of people and editors either indenting each condition in your if statement with one tab, or matching it up with the open paren:

if (var1 == true &&
    var2 == true &&
    var3 == true
   ) {
    /* do something.. */
}

I usually put the close paren on the same line as the last condition:

if (var1 == true &&
    var2 == true &&
    var3 == true) {
    /* do something.. */
}

But I don't think this is quite as clean.

A: Well, first off, why not:

if (var1 && var2 && var3 && var4 && var5 && var6) { ...

Also, it's very hard to refactor abstract code examples. If you showed a specific example it would be easier to identify a better pattern to fit the problem. It's no better, but what I've done in the past: (The following method prevents short-circuiting boolean testing; all tests are run even if the first is false. Not a recommended pattern unless you know you need to always execute all the code before returning -- Thanks to ptomato for spotting my mistake!)

boolean ok = cond1;
ok &= cond2;
ok &= cond3;
ok &= cond4;
ok &= cond5;
ok &= cond6;

Which is the same as: (not the same, see above note!)

ok = (cond1 && cond2 && cond3 && cond4 && cond5 && cond6);

A: Try looking at Functors and Predicates. The Apache Commons project has a great set of objects to allow you to encapsulate conditional logic into objects. Example of their use is available on O'reilly here.
Excerpt of code example:

import org.apache.commons.collections.ClosureUtils;
import org.apache.commons.collections.CollectionUtils;
import org.apache.commons.collections.functors.NOPClosure;

Map predicateMap = new HashMap();
predicateMap.put( isHonorRoll, addToHonorRoll );
predicateMap.put( isProblem, flagForAttention );
predicateMap.put( null, ClosureUtils.nopClosure() );
Closure processStudents = ClosureUtils.switchClosure( predicateMap );
CollectionUtils.forAllDo( allStudents, processStudents );

Now the details of all those isHonorRoll predicates and the closures used to evaluate them:

import org.apache.commons.collections.Closure;
import org.apache.commons.collections.Predicate;

// Anonymous Predicate that decides if a student
// has made the honor roll.
Predicate isHonorRoll = new Predicate() {
    public boolean evaluate(Object object) {
        Student s = (Student) object;
        return( ( s.getGrade().equals( "A" ) ) ||
                ( s.getGrade().equals( "B" ) &&
                  s.getAttendance() == PERFECT ) );
    }
};

// Anonymous Predicate that decides if a student
// has a problem.
Predicate isProblem = new Predicate() {
    public boolean evaluate(Object object) {
        Student s = (Student) object;
        return ( ( s.getGrade().equals( "D" ) ||
                   s.getGrade().equals( "F" ) ) ||
                 s.getStatus() == SUSPENDED );
    }
};

// Anonymous Closure that adds a student to the
// honor roll
Closure addToHonorRoll = new Closure() {
    public void execute(Object object) {
        Student s = (Student) object;
        // Add an award to student record
        s.addAward( "honor roll", 2005 );
        Database.saveStudent( s );
    }
};

// Anonymous Closure flags a student for attention
Closure flagForAttention = new Closure() {
    public void execute(Object object) {
        Student s = (Student) object;
        // Flag student for special attention
        s.addNote( "talk to student", 2005 );
        s.addNote( "meeting with parents", 2005 );
        Database.saveStudent( s );
    }
};

A: Steve McConnell's advice, from Code Complete: Use a multi-dimensional table. Each variable serves as an index to the table, and the if statement turns into a table lookup. For example if (size == 3 && weight > 70) translates into the table entry decision[size][weight_group].

A: I'm surprised no one got this one yet. There's a refactoring specifically for this type of problem: http://www.refactoring.com/catalog/decomposeConditional.html

A: I resort to separate boolean values:

bool cond1 = (var1 && var2);
bool cond2 = (var3 && var4);
if ( cond1 && cond2 ) {}

A: As others have mentioned, I would analyze your conditionals to see if there's a way you can outsource it to other methods to increase readability.

A: In reflective languages like PHP, you can use variable-variables:

$vars = array('var1', 'var2', ... etc.);
foreach ($vars as $v)
    if ($$v == true) {
        // do something
        break;
    }

A: I like to break them down by level, so I'd format your example like this:

if (var1 == true &&
    var2 == true &&
    var3 == true &&
    var4 == true &&
    var5 == true &&
    var6 == true){

It's handy when you have more nesting, like this (obviously the real conditions would be more interesting than "== true" for everything):

if ((var1 == true && var2 == true) &&
    ((var2 == true && var3 == true) &&
     (var4 == true && var5 == true)) &&
    (var6 == true)){

A: If you happen to be programming in Python, it's a cinch with the built-in all() function applied over the list of your variables (I'll just use Boolean literals here):

>>> L = [True, True, True, False, True]
>>> all(L) # True, only if all elements of L are True.
False
>>> any(L) # True, if any elements of L are True.
True

Is there any corresponding function in your language (C#? Java?)? If so, that's likely the cleanest approach.

A: McDowell, you are correct that when using the single '&' operator both sides of the expression evaluate. However, when using the '&&' operator (at least in C#) the first expression to return false is the last expression evaluated. This makes putting the evaluation before the FOR statement just as good as any other way of doing it.

A: @tweakt

It's no better, but what I've done in the past:

boolean ok = cond1;
ok &= cond2;
ok &= cond3;
ok &= cond4;
ok &= cond5;
ok &= cond6;

Which is the same as: ok = (cond1 && cond2 && cond3 && cond4 && cond5 && cond6);

Actually, these two things are not the same in most languages. The second expression will typically stop being evaluated as soon as one of the conditions is false, which can be a big performance improvement if evaluating the conditions is expensive. For readability, I personally prefer Mike Stone's proposal above. It's easy to verbosely comment and preserves all of the computational advantages of being able to early out. You can also do the same technique inline in a function if it'd confuse the organization of your code to move the conditional evaluation far away from your other function. It's a bit cheesy, but you can always do something like:

do {
    if (!cond1) break;
    if (!cond2) break;
    if (!cond3) break;
    ...
    DoSomething();
} while (false);

the while (false) is kind of cheesy. I wish languages had a scoping operator called "once" or something that you could break out of easily.

A: I like to break each condition into descriptive variables.

bool isVar1Valid, isVar2Valid, isVar3Valid, isVar4Valid;
isVar1Valid = ( var1 == 1 );
isVar2Valid = ( var2.Count >= 2 );
isVar3Valid = ( var3 != null );
isVar4Valid = ( var4 != null && var4.IsEmpty() == false );
if ( isVar1Valid && isVar2Valid && isVar3Valid && isVar4Valid )
{
    //do code
}

A: If I was doing it in Perl, this is how I might run the checks.

{
    last unless $var1;
    last unless $var2;
    last unless $var3;
    last unless $var4;
    last unless $var5;
    last unless $var6;
    ... # Place Code Here
}

If you plan on using this over a subroutine, replace every instance of last with return;

A:

if ( (condition_A) &&
     (condition_B) &&
     (condition_C) &&
     (condition_D) &&
     (condition_E) &&
     (condition_F)
   ) {
    ...
}

as opposed to

if (condition_A) {
    if (condition_B) {
        if (condition_C) {
            if (condition_D) {
                if (condition_E) {
                    if (condition_F) {
                        ...
                    }
                }
            }
        }
    }
}

and

if ( ( (condition_A) && (condition_B) ) ||
     ( (condition_C) && (condition_D) ) ||
     ( (condition_E) && (condition_F) )
   ) {
    do_this_same_thing();
}

as opposed to

if (condition_A && condition_B) {
    do_this_same_thing();
}
if (condition_C && condition_D) {
    do_this_same_thing();
}
if (condition_E && condition_F) {
    do_this_same_thing();
}

Most of the static analysis tools for examining code will complain if multiple conditional expressions do not use explicit parentheses dictating expression analysis, instead of relying on operator precedence rules and fewer parentheses. Vertical alignment at the same indent level of open/close braces {}, open/close parentheses (), and conditional expressions with parentheses and operators on the left is a very useful practice, which greatly ENHANCES readability and clarity of the code, as opposed to jamming everything that can possibly be jammed onto a single line, sans vertical alignment, spaces or parentheses. Operator precedence rules are tricky, e.g. && has higher precedence than ||, but | has higher precedence than &&. So, ...
if (expr_A & expr_B || expr_C | expr_D & expr_E || expr_E && expr_F & expr_G || expr_H) {
}

is a really easy multiple conditional expression for mere humans to read and evaluate improperly.

if ( ( (expr_A) & (expr_B) ) ||
     ( (expr_C) | ( (expr_D) & (expr_E) ) ) ||
     ( (expr_E) && ( (expr_F) & (expr_G) ) ) ||
     (expr_H)
   )
{
}

There is nothing wrong with horizontal space (linefeeds), vertical alignment, or explicit parentheses guiding expression evaluation, all of which ENHANCE readability and clarity.

A: If you do this:

if (var1 == true) {
    if (var2 == true) {
        if (var3 == true) {
            ...
        }
    }
}

Then you can also respond to cases where something isn't true. For example, if you're validating input, you could give the user a tip for how to properly format it, or whatever.
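Circling back to the Code Complete table-lookup suggestion above, here is a minimal C# sketch of replacing a chain of && tests with a decision table. The size/weight bucketing and the verdict values are invented for illustration:

using System;

class DecisionTableDemo
{
    enum Verdict { Reject, Review, Approve }

    // Rows index the size bucket, columns the weight group;
    // the contents are invented for illustration.
    static readonly Verdict[,] Decision =
    {
        { Verdict.Reject, Verdict.Reject,  Verdict.Review  },
        { Verdict.Reject, Verdict.Review,  Verdict.Approve },
        { Verdict.Review, Verdict.Approve, Verdict.Approve },
    };

    // Collapse a raw weight into a table index.
    static int WeightGroup(int weight)
    {
        return weight > 70 ? 2 : (weight > 30 ? 1 : 0);
    }

    static void Main()
    {
        int size = 2;    // assumed already bucketed to 0..2
        int weight = 80;

        // The chain of && tests collapses into a single lookup:
        Console.WriteLine(Decision[size, WeightGroup(weight)]); // Approve
    }
}

The win is that adding or changing a rule means editing one table cell rather than re-deriving a nest of conditions.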
{ "language": "en", "url": "https://stackoverflow.com/questions/6126", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "29" }
Q: Repair SVN Checksum I'm using subclipse in Flex Builder 3, and recently received this error when trying to commit: svn: Checksum mismatch for '/Users/redacted/Documents/Flex Builder 3/path/to/my/file.mxml'; expected: 'f8cb275de72776657406154dd3c10348', actual: 'null' I worked around it by: * *Committing all the other changed files, omitting the troublesome one. *Copying the contents of the trouble file to a TextMate window *Deleting my project in FlexBuilder/Eclipse *Checking my project out fresh from SVN *Copying the text of the trouble file back in from the TextMate Window *Committing the changes. It worked, but I can't help but think there's a better way. What's actaully happening to cause the svn:checksum error, and what's the best fix. Maybe more important -- is this a symptom of a greater problem? A: Just today, I managed to recover from this error by checking out a copy of the corrupted directory to /tmp and replacing the files in .svn/text-base with the just co'ed ones. I wrote up the procedure in some detail here on my blog. I'd be interested to hear from more experienced SVN users what are the advantages and disadvantages of each approach. A: I occasionally get similar things, usually with files that nobody has been near in weeks. Generally, if you know you haven't been working in the directory in question, you can just delete the directory with the problem and run svn update to recreate it. If you have live changes in the directory then as lassevk and you yourself suggested, a more careful approach is required. Generally speaking I would say it's a good idea not to leave edited files uncommitted, and keep the working copy tidy - don't add a whole bunch of extra files into the working copy that you aren't going to use. Commit regularly, and then if the working copy goes tits up, you can just delete the whole thing and start over without worrying about what you might or might not be losing, and without the pain of trying to figure out what files to save. A: try: svn up --force file.c This worked for me without having to do anything extra A: I've observed a lot of solutions from patching .svn/entries file to fresh checkout. It can be a new way (thank to my collegue): - go to work directory where recorder/expected checksum issue occured - call "svn diff" and make sure that there isnt any local modifications - cd .. - remove trouble file's directory with "rm -rf" - issue "svn up" command, svn client will restore new fresh files copies A: The file in the .svn directory that keeps track of what you have checked out, when, what revision, and from where, has gotten corrupted somehow, for that particular file. This is no more dangerous or critical than the normal odd file problem, and can be because of various problems, like a subversion program dying mid-change, power-disruption, etc. Unless it happens more I wouldn't make much out of it. It can be fixed by doing what you did, make a copy of your work-files, check out a fresh copy, and add the modified files back in. Note that this might cause problems if you have a busy project where you would normally have to merge in changes. For instance, you and a collegue both check out a fresh copy, and start working on the same file. At some point, your collegue checks in his modifications. When you attempt to do the same, you get the checksum problem you have. If you now make copies of your changed files, do a fresh checkout, then subversion will lose track of how your changes should be merged back in. 
If you didn't get the problem in this case, when you got around to checkin in your modifications, you would need to update your working copy first, and possibly handle a conflict with your file. However, if you do a fresh checkout, complete with your collegues changes, it now looks like you removed his changes and substituted with your own. No conflicts, and no indications from subversion that something is amiss. A: Another, possibly even scarier, workaround for checksum conflicts I found is as follows: CAVEAT: Make sure your local copy is the best known version, AND that anyone else on your project knows what you're doing! (in case this wasn't obvious already). if you know your local copy of the file is "the good one" you can directly delete the file from the SVN server and then force commit your local copy. syntax for direct deletion: svn delete -m "deleting corrupted file XXXX" svn+ssh://username@svnserver/path/to/XXXX good luck! J A: Matt, there is easier way than you described - modifying checksum in .svn/entries file. Here is full description: http://maymay.net/blog/2008/06/17/fix-subversion-checksum-mismatch-error-by-editing-svnentries-file/ A: There's also a simpler cause for this than just bugs, or disk corruption etc. I think it the most likely cause for this to happen is when someone writes a recursive text replacement on the working copy, without excluding .svn files. This means the pristine copies of the files (basically the BASE version of the files, that's stored inside the .svn administrative area) get modified, and that invalidates the MD5 sum. @Andrew Hedges: that also explains why your solution fixes this. A: As an alternative to checking out a fresh copy (which I also had to do after trying all other options) and than merging all your changes which you previously saved backed into it, the following approach worked the same way, but saved me a considerable amount of time, and possibly some errors: * *Check out a fresh working copy *Copy .svn folder from you fresh copy into your corrupt copy *Voila Of course, you should backup your original corrupt working copy just in case. In my case, I was free to remove it after I was done, as everything went fine. A: This will happens when the .svn folder corrupted. Solution: Remove the entire folder of the file contains and checkout the folder again. A: SVN keeps pristine copies of all the files you checkout buried in the .svn directories. This is called the text-base. This allows for speedy diffs and reverts. During various operations, SVN will do checksums on these text-base files to catch file corruption issues. In general, an SVN checksum mismatch means a file that shouldn't have been altered was changed somehow. What does that mean? * *Disk corruption (bad HDD or IDE cable) *Bad RAM *Bad network link *Some kind of background process altered a file behind your back (malware) All of these are bad. HOWEVER, I think your problem is different. Look at the error message. Note that it expected some MD5 hashes, but instead got back 'null'. If this were a simple file corruption issue, I would expect that you would have two different MD5 hashes for the expected/got. The fact that you have a 'null' implies that something else is wrong. I have two theories: * *Bug in SVN. *Something had an exclusive lock on the file, and the MD5 couldn't happen. In the case of #1, try upgrading to the latest SVN version. Maybe also post this on the svn-devel mailing list (http://svn.haxx.se), so the developers can see it. 
In the case of #2, check to see if anything has the file locked. You can download a copy of Process Explorer to check. Note that you probably want to see who has a lock on the text-base file, not the actual file you were trying to commit. A: Had this issue, our dev VM's are all *nix our workstations win32. some fool(s) created files of the same name (different case) on the *nix box all of a sudden checkouts on Win32 blows up... because win doesn't know which of the 2 files of the same name to MD5 against, checkouts on *nix were fine... leaving us to scratch our heads for a bit I was able to update the repo on the win box by copying the ".svn" folders over from a *nix box with a good working copy. have yet to see if the repo can be cleaned to the point where we can do a full checkout again A: One other easy way.... * *Update your project to get latest version *checkout the same version in an other folder *replace .svn folder from the new checkout to the working copy ( i've replaced .svn-base files ) A: * *Check out only folder with problematic file from repository to some other location. *Make sure .svn\text-base\<problematic file>.svn-base is identical to one checked out. *Make sure problematic file section (all lines of the section) in .svn\entries is identical to one checked out. A: You won't believe this, but I have actually fixed my error by removing the <scm>...</scm> stance from the offending pom.xml file I was hoping to check in. It contained the URL of the subversion repository it is checked in (this is what the Maven setting is for!), but somehow it generated a faulty checksum for the file while checking in. I literally tried all aforementioned methods of fixing this, but to no avail. Did I encounter a very rare situation in where the checksum generator is not robust enough? A: I also stumbled upon this issue and was trying to look for quick solutions, tried some of the solution given in this thread. This is how I resolved this issue in my development environment (to me it was minimal change): 1- Locally deleted directory in which the file got corrupted (WEB-INF): svn: Checksum mismatch for 'path-to-folder\WEB-INF\web.xml': expected: d60cb051162e4a6790a3ea0c9ddfb434 actual: 16885ded2cbc1adc250e4cbbc1427546 2- Copied and pasted directory (WEB-INF) from a fresh checkout 3- Did svn up, now Eclipse/TortoiseSVN started showing conflict in this directory 4- Marked conflict as Resolved This worked, I was able to update, commit earlier corrupted web.xml A: In my case the sum was different. All I've done was: 1) Make Check Out to separate folder 2) Replace by file from this folder in .svn directory with my project problem-file which was said in svn-client error message 3) ..Profit! A: * *Go to the folder causing problem *Execute command svn update --set-depth empty *This folder will empty and revert the empty folder *Sync with the svn and update. This work for me. A: Although this is an old issue, I thought I would give my 2 cents as well, since Ive just wrestled with the problem for more than an hour. The solutions above either didn't work for me, or seemed over-complicated. My solution was simply to remove all svn folders from the project. find . -name .svn -exec rm -rf {} \; After this, I did simple checkout of the project again. Thus leaving all my un-committed files intact, but still got a rebuild of all the svn files. 
A: I had this problem on ubuntu 14.04 and solve it by follow steps: * *$ cd /var/www/myProject *$ svn upgrade *$ svn update after these steps i could commit file without error. A: here's how i fixed the issue - v simple, but as per jsh above, need to be sure your copy is the best one. simply * *make a copy all problem files, in the same folder. *delete the old ones with svn rm *commit. *then rename the copies back to the original file names. *commit again. suspect this probably kills all sorts of revision history on that file, so it's a pretty ugly way to go about it...
{ "language": "en", "url": "https://stackoverflow.com/questions/6130", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "67" }
Q: How do you kill all Linux processes that are older than a certain age? I have a problem with some zombie-like processes on a certain server that need to be killed every now and then. How can I best identify the ones that have run for longer than an hour or so? A: In this way you can obtain the list of the ten oldest processes: ps -elf | sort -r -k12 | head -n 10 A: Jodie C and others have pointed out that killall -i can be used, which is fine if you want to use the process name to kill. But if you want to kill by the same parameters as pgrep -f, you need to use something like the following, using pure bash and the /proc filesystem. #!/bin/sh max_age=120 # (seconds) naughty="$(pgrep -f offlineimap)" if [[ -n "$naughty" ]]; then # naughty is running age_in_seconds=$(echo "$(date +%s) - $(stat -c %X /proc/$naughty)" | bc) if [[ "$age_in_seconds" -ge "$max_age" ]]; then # naughty is too old! kill -s 9 "$naughty" fi fi This lets you find and kill processes older than max_age seconds using the full process name; i.e., the process named /usr/bin/python2 offlineimap can be killed by reference to "offlineimap", whereas the killall solutions presented here will only work on the string "python2". A: Perl's Proc::ProcessTable will do the trick: http://search.cpan.org/dist/Proc-ProcessTable/ You can install it in debian or ubuntu with sudo apt-get install libproc-processtable-perl Here is a one-liner: perl -MProc::ProcessTable -Mstrict -w -e 'my $anHourAgo = time-60*60; my $t = new Proc::ProcessTable;foreach my $p ( @{$t->table} ) { if ($p->start() < $anHourAgo) { print $p->pid, "\n" } }' Or, more formatted, put this in a file called process.pl: #!/usr/bin/perl -w use strict; use Proc::ProcessTable; my $anHourAgo = time-60*60; my $t = new Proc::ProcessTable; foreach my $p ( @{$t->table} ) { if ($p->start() < $anHourAgo) { print $p->pid, "\n"; } } then run perl process.pl This gives you more versatility and 1-second-resolution on start time. A: Found an answer that works for me: warning: this will find and kill long running processes ps -eo uid,pid,etime | egrep '^ *user-id' | egrep ' ([0-9]+-)?([0-9]{2}:?){3}' | awk '{print $2}' | xargs -I{} kill {} (Where user-id is a specific user's ID with long-running processes.) The second regular expression matches the a time that has an optional days figure, followed by an hour, minute, and second component, and so is at least one hour in length. A: If they just need to be killed: if [[ "$(uname)" = "Linux" ]];then killall --older-than 1h someprocessname;fi If you want to see what it's matching if [[ "$(uname)" = "Linux" ]];then killall -i --older-than 1h someprocessname;fi The -i flag will prompt you with yes/no for each process match. A: You can use bc to join the two commands in mob's answer and get how many seconds ellapsed since the process started: echo `date +%s` - `stat -t /proc/<pid> | awk '{print $14}'` | bc edit: Out of boredom while waiting for long processes to run, this is what came out after a few minutes fiddling: #file: sincetime #!/bin/bash init=`stat -t /proc/$1 | awk '{print $14}'` curr=`date +%s` seconds=`echo $curr - $init| bc` name=`cat /proc/$1/cmdline` echo $name $seconds If you put this on your path and call it like this: sincetime it will print the process cmdline and seconds since started. 
You can also put this in your path:

#file: greptime
#!/bin/bash
pidlist=`ps ax | grep -i -E $1 | grep -v grep | awk '{print $1}' | grep -v PID | xargs echo`
for pid in $pidlist; do
    sincetime $pid
done

And then if you run:

greptime <pattern>

where pattern is a string or extended regular expression, it will print out all processes matching this pattern and the seconds since they started. :)

A: For anything older than one day, ps aux will give you the answer, but it drops down to day-precision which might not be as useful.

USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
root         1  0.0  0.0   7200   308 ?        Ss   Jun22   0:02 init [5]
root         2  0.0  0.0      0     0 ?        S    Jun22   0:02 [migration/0]
root         3  0.0  0.0      0     0 ?        SN   Jun22   0:18 [ksoftirqd/0]
root         4  0.0  0.0      0     0 ?        S    Jun22   0:00 [watchdog/0]

In this example, you can only see that process 1 has been running since June 22, with no indication of the time it was started. If you're on linux or another system with the /proc filesystem, stat /proc/<pid> will give you a more precise answer. For example, here's an exact timestamp for process 1, which ps shows only as Jun22:

ohm ~$ stat /proc/1
  File: `/proc/1'
  Size: 0          Blocks: 0          IO Block: 4096   directory
Device: 3h/3d      Inode: 65538       Links: 5
Access: (0555/dr-xr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2008-06-22 15:37:44.347627750 -0700
Modify: 2008-06-22 15:37:44.347627750 -0700
Change: 2008-06-22 15:37:44.347627750 -0700

A: do a ps -aef. This will show you the time at which the process started. Then using the date command find the current time. Calculate the difference between the two to find the age of the process.

A: I did something similar to the accepted answer, but slightly differently, since I want to match based on process name and based on the bad process running for more than 100 seconds:

kill $(ps -o pid,bsdtime -p $(pgrep bad_process) | awk '{ if (NR > 1 && $2 > 100) { print $1; }}')

A: stat -t /proc/<pid> | awk '{print $14}'

to get the start time of the process in seconds since the epoch. Compare with current time (date +%s) to get the current age of the process.

A: Using ps is the right way. I've already done something similar before but don't have the source handy. Generally - ps has an option to tell it which fields to show and by which to sort. You can sort the output by running time, grep the process you want and then kill it. HTH

A: In case anyone needs this in C, you can use readproc.h and libproc:

#include <proc/readproc.h>
#include <proc/sysinfo.h>

float pid_age(pid_t pid)
{
    proc_t proc_info;
    int seconds_since_boot = uptime(0,0);
    if (!get_proc_stats(pid, &proc_info)) {
        return 0.0;
    }

    // readproc.h comment lies about what proc_t.start_time is. It's
    // actually expressed in Hertz ticks since boot
    int seconds_since_1970 = time(NULL);
    int time_of_boot = seconds_since_1970 - seconds_since_boot;
    long t = seconds_since_boot - (unsigned long)(proc_info.start_time / Hertz);

    int delta = t;
    float days = ((float) delta / (float)(60*60*24));
    return days;
}

A: Came across somewhere.. thought it is simple and useful. You can use the command in crontab directly,

* * * * * ps -lf | grep "user" | perl -ane '($h,$m,$s) = split /:/,$F[13]; kill 9, $F[3] if ($h > 1);'

or, we can write it as a shell script,

#!/bin/sh
# longprockill.sh
ps -lf | grep "user" | perl -ane '($h,$m,$s) = split /:/,$F[13]; kill 9, $F[3] if ($h > 1);'

And call it from crontab like so,

* * * * * longprockill.sh

A: My version of sincetime above by @Rafael S. Calsaverini:

#!/bin/bash
ps --no-headers -o etimes,args "$1"

This reverses the output fields: elapsed time first, full command including arguments second. This is preferred because the full command may contain spaces.
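For completeness, the same idea can be expressed against .NET's System.Diagnostics.Process API (which also runs on Linux under .NET Core/.NET 5+). A minimal sketch; the process name is a placeholder, and StartTime/Kill can throw for exited processes or ones you lack rights to, hence the catch:

using System;
using System.Diagnostics;

class KillOldProcesses
{
    static void Main(string[] args)
    {
        string name = args.Length > 0 ? args[0] : "someprocessname"; // placeholder target
        TimeSpan maxAge = TimeSpan.FromHours(1);

        foreach (Process p in Process.GetProcessesByName(name))
        {
            try
            {
                if (DateTime.Now - p.StartTime > maxAge)
                {
                    Console.WriteLine("Killing {0} (started {1})", p.Id, p.StartTime);
                    p.Kill();
                }
            }
            catch (Exception ex)
            {
                // StartTime and Kill throw for processes that have exited
                // or that we don't have permission to inspect.
                Console.WriteLine("Skipping {0}: {1}", p.Id, ex.Message);
            }
        }
    }
}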
{ "language": "en", "url": "https://stackoverflow.com/questions/6134", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "68" }
Q: How do I edit work items in the Visual Studio 2008 xml editor? I'm trying to customize some TFS work items via the VS2008 xml editor, but every time I open a work item xml file it jumps to the graphical designer. All that gives me is a "View XML" button that doesn't let you edit the xml directly. A: Ah, looks like you have to go to File->Open and click the down arrow next to the Open button to "Open With" the xml editor. If someone wants to copy and paste this, free accepted answer :P A: I don't have TFS but I know in regular VS there is an Open With... option in most items' contextual menu that even let you change the default editor. Very useful when you are tired of the Designer opening instead of the Code file on Windows forms. A: As per Coincoin's answer, this feature is also great for setting the default editor for ASPX. If you want to go to the Code Editor most often, then this is a default you'd want to change. A: Reading this - I think perhaps you don't realize - that there is no need to edit the XML - in fact it is very difficult to do so. The graphical designer will actually let you change the Work Item type, adding new fields, changing workflow, rules etc. The only reason to change the XML is if there's a bug in the Process Editor (the tool that gives the graphic designer). I have done extensive modifications of Work Item types and only had one instance where I had to change the XML.
{ "language": "en", "url": "https://stackoverflow.com/questions/6151", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Common Types of Subversion Hooks What kinds of hook scripts are people using for Subversion? Just general ideas but code would be great too! A: If you have a mix of unix and Windows users working with the repository, I urge you to use the case-insensitive.py pre-commit hook-script as a precautionary measure. It prevents hard-to-sort-out situations where svn updates fail for Windows users because of a file rename which only changed the case of the file name. Believe me, there is a good chance it will save you trouble. A: I am using the pre-revprop-change hook that allows me to actually go back and edit comments and such information after the commit has been performed. This is very useful if there is missing/erroneous information in the commit comments. Here I post a pre-revprop-change.bat batch file for Windows NT or later. You can certainly enhance it with more modifications. You can also derive a post-revprop-change.cmd from it to back up the old snv:log somewhere or just to append it to the new log. The only tricky part was to be able to actually parse the stdin from the batch file. This is done here with the FIND.EXE command. The other thing is that I have had reports from other users of issues with the use of the /b with the exit command. You may just need to remove that /b in your specific application if error cases do not behave well. @ECHO OFF set repos=%1 set rev=%2 set user=%3 set propname=%4 set action=%5 :::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::: :: Only allow changes to svn:log. The author, date and other revision :: properties cannot be changed :::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::: if /I not '%propname%'=='svn:log' goto ERROR_PROPNAME :::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::: :: Only allow modifications to svn:log (no addition/overwrite or deletion) :::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::: if /I not '%action%'=='M' goto ERROR_ACTION :::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::: :: Make sure that the new svn:log message contains some text. :::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::: set bIsEmpty=true for /f "tokens=*" %%g in ('find /V ""') do ( set bIsEmpty=false ) if '%bIsEmpty%'=='true' goto ERROR_EMPTY goto :eof :ERROR_EMPTY echo Empty svn:log properties are not allowed. >&2 goto ERROR_EXIT :ERROR_PROPNAME echo Only changes to svn:log revision properties are allowed. >&2 goto ERROR_EXIT :ERROR_ACTION echo Only modifications to svn:log revision properties are allowed. >&2 goto ERROR_EXIT :ERROR_EXIT exit /b 1 A: We use FogBugz for bug tracking, it provides subversion commit scripts that allow you to include a case number in your check in comments and then associates the bug with the check in that fixed it. It does require a WebSVN instance to be set up so that you have a web based viewer for your repository. A: In my work place we've set up a post-commit hook that generates RSS feeds that are displayed in various dash boards and are used for code reviewers to know when it is time to review and for us to see that new employees are committing enough. A: several things we use them for: * *integrating with the bug tracker (Trac in our case - a commit message that says 'Closes #514' automatically marks that bug as closed *integrating with the build integration (buildbot in our case - a commit to a watched branch triggers a build *pre-commit hook for validating the commit - we use svnchecker. 
It validates our Python code for PEP8 correctness *sending checkin mails to a mailing list *running indentation scripts A: For those who are looking for a pre-revprop-change.bat for a snvsync operation : https://gist.github.com/1679659 @ECHO OFF set user=%3 if /I '%user%'=='syncuser' goto ERROR_REV exit 0 :ERROR_REV echo "Only the syncuser user may change revision properties" >&2 exit 1 It just comes from here : http://chestofbooks.com/computers/revision-control/subversion-svn/Repository-Replication-Reposadmin-Maint-Replication.html and has been adapted for Windows. A: I'm using post-commit hooks (I think it's this one) to post a message to a forum on Basecamp for each commit. Two advantages: * *As the lead developer, I get a roll-up of commits every morning (via the RSS feed from that basecamp forum) and can see what my team has been up to pretty quickly. *Our Trac/SVN install is behind our firewall, so this gives my higher-ups in other locations a window into what we're doing. They might not understand it, but to a manager a lot of activity looks like, well, a lot of activity ;) I guess the end result of this is similar to what @Aviv is doing. I'm looking into solutions for building the latest commit on a separate server for continuous integration, but I'm going to have to change the way we make changes to our database schema before that will work. A: This was discussed on the subversion users mailing list a while ago. This post in particular has some useful ideas. A: post-commit hook to send email notification that something changed in the repository to a list of emails. You need sendmail.exe in the same folder than your hook file, along with sendmail.ini. You also need a file post-commit.tos.txt next to your post-commit.cmd to list the mail recipients. The file should contain: user1@example.com,user2@example.com,user3@example.com Here is the hook code: @ECHO OFF setlocal ::::::::::::::::::::::::::::::::::::::::::::::::::::::::: :: Get subversion arguments set repos=%~1 set rev=%2 ::::::::::::::::::::::::::::::::::::::::::::::::::::::::: :: Set some variables set tos=%repos%\hooks\%~n0.tos.txt set reposname=%~nx1 set svnlookparam="%repos%" --revision %rev% if not exist "%tos%" goto :END ::::::::::::::::::::::::::::::::::::::::::::::::::::::::: :: Prepare sendmail email file set author= for /f "tokens=* usebackq" %%g in (`svnlook author %svnlookparam%`) do ( set author=%%g ) for /f "tokens=* usebackq delims=" %%g in ("%tos%") do ( set EmailNotificationTo=%%g ) set SendMailFile=%~n0_%reposname%_%rev%.sm echo To: %EmailNotificationTo% >> "%SendMailFile%" echo From: %reposname%.svn.technologie@gsmprjct.com >> "%SendMailFile%" echo Subject: [%reposname%] Revision %rev% - Subversion Commit Notification >> "%SendMailFile%" echo --- log [%author%] --- >> "%SendMailFile%" svnlook log %svnlookparam% >> "%SendMailFile%" 2>&1 echo --- changed --- >> "%SendMailFile%" svnlook changed %svnlookparam% --copy-info >> "%SendMailFile%" 2>&1 echo .>> "%SendMailFile%" ::::::::::::::::::::::::::::::::::::::::::::::::::::::::: :: Send email type "%SendMailFile%" | "%~dp0sendmail.exe" -t ::::::::::::::::::::::::::::::::::::::::::::::::::::::::: :: Clean-up if exist "%SendMailFile%" del "%SendMailFile%" :END endlocal A: The most common one I think is to allow people to change revision comments after comitting. You need to enable the 'pre-revprop-change' hook script to allow that. The example provided, if enabled allows editing only the comment property and only be the original comitter. 
Great for correcting typos. A: A hook to notify the bug/issue management system of changes to repository. Ie. the commit message has issue:546 or similar tag in it that is parsed and fed to the bug management system. A: We check the following with our hook scripts: * *That a commit log message has been supplied *That a reviewer has been specified for the commit *That no automatically generated code or banned file types land up in the repository *Send an email out when a branch / tag is created We still want to implement the following: * *Send an email when a user acquires a lock on a file *Send an email when your lock has been stolen *Send an email to everyone when a revision property has been changed A: We use a commit hook script to trigger our release robot. Writing new release information to a file named changes.txt in our different products will trigger the creation of a tag and the relevant artifacts. A: I have one setup using the Ruby Tinder library that I send to a campfire room, if anyone wants the script I can post or send the code to you. Other common ones I've seen are posts to bug tracking systems and email notifications. A: Windows pre-commit hook to check that log contains something. @ECHO OFF setlocal ::::::::::::::::::::::::::::::::::::::::::::::::::::::::: :: Get subversion arguments set repos=%~1 set txn=%2 ::::::::::::::::::::::::::::::::::::::::::::::::::::::::: :: Set some variables set svnlookparam="%repos%" -t %txn% :::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::: :: Make sure that the new svn:log message contains some text. set bIsEmpty=true for /f "tokens=* usebackq" %%g in (`svnlook log %svnlookparam%`) do ( set bIsEmpty=false ) if '%bIsEmpty%'=='true' goto ERROR_EMPTY echo Allowed. >&2 goto :END :ERROR_EMPTY echo Empty log messages are not allowed. >&2 goto ERROR_EXIT :ERROR_EXIT :: You may require to remove the /b below if your hook is called directly by subversion exit /b 1 :END endlocal A: I forgot to enter a comment while committing. Didn't have time to figure out why my pre-revprop-change hook wasn't working. So the following svnadmin command worked for me to enter a commit message: svnadmin setlog <filesystem path to my repository> --bypass-hooks -r 117 junk, where "junk" is the file containing the text which I wanted to be the comment. svn setlog help has more usage info...
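As a sketch of the bug-tracker integration described above, a post-commit hook (or a helper it shells out to) can pull issue tags out of the log message before notifying the tracker. This is a minimal C# illustration only; the issue:NNN tag format follows the example in that answer, and the actual tracker notification is left out:

using System;
using System.Text.RegularExpressions;

class CommitMessageParser
{
    static void Main()
    {
        string log = "Fix null deref in parser. Closes issue:546 and issue:547.";

        // Pull every issue:<number> tag out of the log message, as a
        // post-commit hook would before pinging the bug tracker.
        foreach (Match m in Regex.Matches(log, @"issue:(\d+)", RegexOptions.IgnoreCase))
        {
            Console.WriteLine("Notify tracker about issue #" + m.Groups[1].Value);
        }
    }
}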
{ "language": "en", "url": "https://stackoverflow.com/questions/6155", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "42" }
Q: Regular expression for parsing links from a webpage? I'm looking for a .NET regular expression to extract all the URLs from a webpage, but haven't found one comprehensive enough to cover all the different ways you can specify a link. And a side question: Is there one regex to rule them all? Or am I better off using a series of less complicated regular expressions and just using multiple passes against the raw HTML? (Speed vs. Maintainability)

A: from the RegexBuddy library:

URL: Find in full text. The final character class makes sure that if an URL is part of some text, punctuation such as a comma or full stop after the URL is not interpreted as part of the URL.

\b(https?|ftp|file)://[-A-Z0-9+&@#/%?=~_|!:,.;]*[-A-Z0-9+&@#/%=~_|]

A: With Html Agility Pack, you can use:

HtmlDocument doc = new HtmlDocument();
doc.Load("file.htm");
foreach (HtmlNode link in doc.DocumentNode.SelectNodes("//a[@href]"))
{
    Response.Write(link.Attributes["href"].Value);
}
doc.Save("file.htm");

A: All HTTP's and MAILTO's

(["'])(mailto:|http:).*?\1

All links, including relative ones, that are called by href or src.

#Matches things in single or double quotes, but not the quotes themselves
(?<=(["']))((?<=href=['"])|(?<=src=['"])).*?(?=\1)

#Matches things in either double or single quotes, including the quotes.
(["'])((?<=href=")|(?<=src=")).*?\1

The second one will only get you links that use double quotes, however.

A: ((mailto\:|(news|(ht|f)tp(s?))\://){1}\S+)

I took this from regexlib.com [editor's note: the {1} has no real function in this regex; see this post]

A: Look at the URI specification. That could help you a lot. And as far as performance goes, you can pretty much extract all the HTTP links in a modest web page. When I say modest I definitely do not mean one-page all-encompassing HTML manuals like that of the ELisp manual. Also performance is a touchy topic. My advice would be to measure your performance and then decide if you are going to extract all the links using one single regex or with multiple simpler regex expressions. http://gbiv.com/protocols/uri/rfc/rfc3986.html

A: I don't have time to try and think of a regex that probably won't work, but I wanted to comment that you should most definitely break up your regex, at least if it gets to this level of ugliness:

(?:(?:\r\n)?[ \t])*(?:(?:(?:[^()<>@,;:\\".\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t]
)+|\Z|(?=[\["()<>@,;:\\".\[\]]))|"(?:[^\"\r\\]|\\.|(?:(?:\r\n)?[ \t]))*"(?:(?:
\r\n)?[ \t])*)(?:\.(?:(?:\r\n)?[ \t])*(?:[^()<>@,;:\\".\[\] \000-\031]+(?:(?:(
?:\r\n)?[ \t])+|\Z|(?=[\["()<>@,;:\\".\[\]]))|"(?:[^\"\r\\]|\\.|(?:(?:\r\n)?[
\t]))*"(?:(?:\r\n)?[ \t])*))*@(?:(?:\r\n)?[ \t])*(?:[^()<>@,;:\\".\[\] \000-\0
....*SNIP*....
*))*@(?:(?:\r\n)?[ \t])*(?:[^()<>@,;:\\".\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t])
+|\Z|(?=[\["()<>@,;:\\".\[\]]))|\[([^\[\]\r\\]|\\.)*\](?:(?:\r\n)?[ \t])*)(?:\
.(?:(?:\r\n)?[ \t])*(?:[^()<>@,;:\\".\[\] \000-\031]+(?:(?:(?:\r\n)?[ \t])+|\Z
|(?=[\["()<>@,;:\\".\[\]]))|\[([^\[\]\r\\]|\\.)*\](?:(?:\r\n)?[ \t])*))*\>(?:(
?:\r\n)?[ \t])*))*)?;\s*)

(this supposedly matches email addresses)

Edit: I can't even fit it on one post it's so nasty....

A: URL's? As in images/scripts/css/etc.?

%href="([^"]*)"%

A: This will capture the URLs from all a tags as long as the author of the HTML used quotes:

<a[^>]+href="([^"]+)"[^>]*>

I made an example here.

A: according to https://www.rfc-editor.org/rfc/rfc3986 extracting url from ANY text (not only HTML)

(http\\://[:/?#\\[\\]@!%$&'()*+,;=a-zA-Z0-9._\\-~]+)
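Putting one of the href patterns above to work with .NET's Regex class, a minimal sketch; as several answers note, this only handles double-quoted href attributes, and Html Agility Pack is the more robust route for real HTML:

using System;
using System.Text.RegularExpressions;

class LinkExtractor
{
    static void Main()
    {
        string html = "<p><a href=\"http://example.com\">one</a>"
                    + "<a class=\"x\" href=\"/relative/path\">two</a></p>";

        // Same idea as the href pattern above; double-quoted hrefs only.
        Regex anchors = new Regex("<a[^>]+href=\"([^\"]+)\"[^>]*>", RegexOptions.IgnoreCase);

        foreach (Match m in anchors.Matches(html))
        {
            Console.WriteLine(m.Groups[1].Value); // prints each captured URL
        }
    }
}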
{ "language": "en", "url": "https://stackoverflow.com/questions/6173", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "17" }
Q: How do I make event callbacks into my win forms thread safe? When you subscribe to an event on an object from within a form, you are essentially handing over control of your callback method to the event source. You have no idea whether that event source will choose to trigger the event on a different thread. The problem is that when the callback is invoked, you cannot assume that you can update controls on your form, because sometimes those controls will throw an exception if the event callback was called on a thread different than the thread the form was run on. A: I use anonymous methods a lot in this scenario: void SomethingHappened(object sender, EventArgs ea) { MethodInvoker del = delegate { textBox1.Text = "Something happened"; }; if (InvokeRequired) { Invoke(del); } else { del(); } } A: To simplify Simon's code a bit, you could use the built-in generic Action delegate. It saves peppering your code with a bunch of delegate types you don't really need. Also, in .NET 3.5 they added a params parameter to the Invoke method so you don't have to define a temporary array. void SomethingHappened(object sender, EventArgs ea) { if (InvokeRequired) { Invoke(new Action<object, EventArgs>(SomethingHappened), sender, ea); return; } textBox1.Text = "Something happened"; } A: I'm a bit late to this topic, but you might want to take a look at the Event-Based Asynchronous Pattern. When implemented properly, it guarantees that events are always raised from the UI thread. Here's a brief example that only allows one concurrent invocation; supporting multiple invocations/events requires a little bit more plumbing. using System; using System.ComponentModel; using System.Threading; using System.Windows.Forms; namespace WindowsFormsApplication1 { public class MainForm : Form { private TypeWithAsync _type; [STAThread()] public static void Main() { Application.EnableVisualStyles(); Application.Run(new MainForm()); } public MainForm() { _type = new TypeWithAsync(); _type.DoSomethingCompleted += DoSomethingCompleted; var panel = new FlowLayoutPanel() { Dock = DockStyle.Fill }; var btn = new Button() { Text = "Synchronous" }; btn.Click += SyncClick; panel.Controls.Add(btn); btn = new Button { Text = "Asynchronous" }; btn.Click += AsyncClick; panel.Controls.Add(btn); Controls.Add(panel); } private void SyncClick(object sender, EventArgs e) { int value = _type.DoSomething(); MessageBox.Show(string.Format("DoSomething() returned {0}.", value)); } private void AsyncClick(object sender, EventArgs e) { _type.DoSomethingAsync(); } private void DoSomethingCompleted(object sender, DoSomethingCompletedEventArgs e) { MessageBox.Show(string.Format("DoSomethingAsync() returned {0}.", e.Value)); } } class TypeWithAsync { private AsyncOperation _operation; // synchronous version of method public int DoSomething() { Thread.Sleep(5000); return 27; } // async version of method public void DoSomethingAsync() { if (_operation != null) { throw new InvalidOperationException("An async operation is already running."); } _operation = AsyncOperationManager.CreateOperation(null); ThreadPool.QueueUserWorkItem(DoSomethingAsyncCore); } // wrapper used by async method to call sync version of method, matches WaitCallback so it // can be queued by the thread pool private void DoSomethingAsyncCore(object state) { int returnValue = DoSomething(); var e = new DoSomethingCompletedEventArgs(returnValue); _operation.PostOperationCompleted(RaiseDoSomethingCompleted, e); } // wrapper used so async method can raise the event; matches SendOrPostCallback private
void RaiseDoSomethingCompleted(object args) { OnDoSomethingCompleted((DoSomethingCompletedEventArgs)args); } private void OnDoSomethingCompleted(DoSomethingCompletedEventArgs e) { var handler = DoSomethingCompleted; if (handler != null) { handler(this, e); } } public event EventHandler<DoSomethingCompletedEventArgs> DoSomethingCompleted; } public class DoSomethingCompletedEventArgs : EventArgs { private int _value; public DoSomethingCompletedEventArgs(int value) : base() { _value = value; } public int Value { get { return _value; } } } } A: As the lazy programmer, I have a very lazy method of doing this. What I do is simply this. private void DoInvoke(MethodInvoker del) { if (InvokeRequired) { Invoke(del); } else { del(); } } //example of how to call it private void tUpdateLabel(ToolStripStatusLabel lbl, String val) { DoInvoke(delegate { lbl.Text = val; }); } You could inline the DoInvoke inside your function or hide it within a separate function to do the dirty work for you. Just keep in mind you can pass functions directly into the DoInvoke method. private void directPass() { DoInvoke(this.directInvoke); } private void directInvoke() { textLabel.Text = "Directly passed."; } A: Here are the salient points: * *You can't make UI control calls from a different thread than the one they were created on (the form's thread). *Delegate invocations (ie, event hooks) are triggered on the same thread as the object that is firing the event. So, if you have a separate "engine" thread doing some work and have some UI watching for state changes which can be reflected in the UI (such as a progress bar or whatever), you have a problem. The engine fires an object-changed event which has been hooked by the Form. But the callback delegate that the Form registered with the engine gets called on the engine's thread… not on the Form's thread. And so you can't update any controls from that callback. Doh! BeginInvoke comes to the rescue. Just use this simple coding model in all your callback methods and you can be sure that things are going to be okay: private delegate void EventArgsDelegate(object sender, EventArgs ea); void SomethingHappened(object sender, EventArgs ea) { // // Make sure this callback is on the correct thread // if (this.InvokeRequired) { this.Invoke(new EventArgsDelegate(SomethingHappened), new object[] { sender, ea }); return; } // // Do something with the event such as update a control // textBox1.Text = "Something happened"; } It's quite simple really. * *Use InvokeRequired to find out if this callback happened on the correct thread. *If not, then reinvoke the callback on the correct thread with the same parameters. You can reinvoke a method by using the Invoke (blocking) or BeginInvoke (non-blocking) methods. *The next time the function is called, InvokeRequired returns false because we are now on the correct thread and everybody is happy. This is a very compact way of addressing this problem and making your Forms safe from multi-threaded event callbacks. A: In many simple cases, you can use the MethodInvoker delegate and avoid the need to create your own delegate type.
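To make that last suggestion concrete, here is a small sketch of my own (not from the answers above; textBox1 is just an illustrative control name) of re-invoking a handler with the built-in MethodInvoker delegate:

void SomethingHappened(object sender, EventArgs ea)
{
    if (InvokeRequired)
    {
        // MethodInvoker is a built-in parameterless delegate type,
        // so no custom delegate declaration is needed.
        BeginInvoke((MethodInvoker)delegate { SomethingHappened(sender, ea); });
        return;
    }

    textBox1.Text = "Something happened";
}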
{ "language": "en", "url": "https://stackoverflow.com/questions/6184", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "39" }
Q: Experiences of the Smart Client Software Factory Has anyone had any experience in building a 'real world' application with the Smart Client Software Factory, from Microsoft's Patterns and Practices group? I'm looking for advice on how difficult it was to master, whether it decreased your time to market and any other general pitfalls. A: I don't have personal experience, so favor the advice of someone that does over mine. I know two coworkers that have used this factory and both had the same take-away: * *It hurt to set up and learn *It was worth it in the end So if you have up-front time to spare, I'd go for it. A: We developed our SCSF application (in recruitment) in 2006 with 8 (4 UI + 4 WCF service) developers; it is currently used by 350 users on one floor. In the beginning there was too much to learn, as there were fewer tutorials. I am thankful to Matias Wolosky and Eugenio Pace, who contributed a lot to patterns & practices/CodePlex. The key areas in which we scored were:- 1) Clear separation of UI and business 2) Focussed role for developers 3) Module-based, on-demand structure of the application 4) Easily deployable through ClickOnce 5) Ready patterns and helpers which make developers' lives easier and more structured. It has gained a lot of respect amongst users with time as it supports:- 1) RBAC - Role Based Access Control 2) Quick turnaround of features, as we separated infrastructure services/business services/UI helper services neatly and the entire application is module based (best part of CAB). 3) Now we are thinking of moving to WPF to add some more jazz elements. A: We used SCSF for a real world app with about 10 developers. It was a steep learning curve to set up and develop a pattern of usage, but once it was set up, introducing new developers to the project was VERY easy. Using CAB and SCSF was very beneficial to our project, especially getting each developer up to speed and productive. A downfall of SCSF is that it provides A LOT of functionality that may not be used (we probably only used 60% of the functionality). I am also using SCSF for a new project and am considering refactoring to PRISM. PRISM allows you to cull the functionality that is not used. If you use WPF, I suggest looking into PRISM. A: We use the Web Service Software Factory, and we really like it because it makes it easier for developers to follow standards and appropriate patterns. The learning curve for us wasn't bad - a few hours per developer at most. Other than that, there aren't any other pros & cons worth mentioning. A: We used SCSF for a real world composite app with 6 developers; the full team size was 14, including BAs, PMs, testers, etc. Like Torrey said, it was a steep learning curve for the 3 developers that didn't have OO or design patterns experience. Myself and two others had been OO purists for years, so we took to CAB like ducks to water just by recognizing the patterns. Part-way through the project, we put together a one-week training course on OO principles and then design patterns. Once the other 3 went through this course, productivity started to increase immediately. My advice: make sure your team has sound OO and design patterns knowledge. The curve drops off when they can see patterns that they recognize.
{ "language": "en", "url": "https://stackoverflow.com/questions/6207", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16" }
Q: FF3 WinXP != FF3 Ubuntu - why? I've got a website that I've just uploaded onto the interwebs, and it's displaying differently using Firefox 3.0.1 on Ubuntu and WinXP. Two things I've noticed on Ubuntu: * *The favicon is missing *The background color isn't displaying (it's set in the stylesheet) What have I done wrong? The CSS file is being fetched under Ubuntu, so why isn't it applying all of the stylesheet, just the bits it likes? And why isn't the favicon displaying? Are they the same problem? The answer on the background color: invalid HTML. But I'd love for someone to explain why it works under Windows and not Ubuntu. The answer on favicon: previously, there was no favicon. The browser cached the lack of favicon. Clear the Firefox cache, and all is well. A: I would first suggest getting your HTML and CSS code validated. If there are any errors in your markup, these can cause errors in the rendering. * *CSS Validator *HTML Validator A: I've also run into differences between FF3 on WinXP and FF3 on OS X (mostly with CSS positioning). The CSS and HTML both validated properly, but I was never able to figure out why there was this difference. I would think that the rendering engine would be the same, but apparently there are at least a few subtle differences. A: I agree.. there are subtle differences between the two operating systems. Part of this is just font sizes and how line height and letter spacing are determined. So much of page flow is based on how these whitespace elements interact with other page elements. A: I believe this is a font issue and a browser/OS issue. We know that Firefox differs somewhat by OS - some Firefox extensions are available only for Linux, and some only for Windows. But mostly it's the fonts, I guess. Try installing the msttcorefonts package (Microsoft TrueType core fonts), which includes all the standard Windows fonts, so that Firefox can display the fonts you specified in the CSS. Also, you could check that you use fonts which are available on both platforms. Otherwise, I suggest rechecking and revalidating your code. The other issue could be the screen resolution. It might be okay in Windows with your high resolution but not with the low-resolution Ubuntu version. A: Almost too obvious to say, but are they both "Firefox 3.01"? One isn't, for instance, Firefox 3.01 revision 3 update 6 service pack 9 and the other, well, you get the picture. Even if they were both the very latest Firefox for that platform, that doesn't mean they're exactly the same thing. A: To see what's different, enter about:config in the address bar in Firefox in both Linux and Windows, press Enter, and compare the output A: Ubuntu (I believe) applies its own patches to Firefox, so maybe this is the cause. Having said that, I thought that the patches were only for minor, GUI-type changes.
{ "language": "en", "url": "https://stackoverflow.com/questions/6208", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14" }
Q: Split a string ignoring quoted sections Given a string like this: a,"string, with",various,"values, and some",quoted What is a good algorithm to split this based on commas while ignoring the commas inside the quoted sections? The output should be an array: [ "a", "string, with", "various", "values, and some", "quoted" ] A: Python: import csv reader = csv.reader(open("some.csv")) for row in reader: print row A: Looks like you've got some good answers here. For those of you looking to handle your own CSV file parsing, heed the advice from the experts and Don't roll your own CSV parser. Your first thought is, "I need to handle commas inside of quotes." Your next thought will be, "Oh, crap, I need to handle quotes inside of quotes. Escaped quotes. Double quotes. Single quotes..." It's a road to madness. Don't write your own. Find a library with extensive unit test coverage that hits all the hard parts and has gone through hell for you. For .NET, use the free FileHelpers library. A: Of course using a CSV parser is better, but just for the fun of it you could: Loop over the string letter by letter. If current_letter == quote: toggle the inside_quote variable. Else if (current_letter == comma and not inside_quote): push current_word into the array and clear current_word. Else append the current_letter to current_word. When the loop is done, push the current_word into the array. A: If my language of choice didn't offer a way to do this without thinking then I would initially consider two options as the easy way out: * *Pre-parse and replace the commas within the string with another control character then split them, followed by a post-parse on the array to replace the control character used previously with the commas. *Alternatively split them on the commas then post-parse the resulting array into another array checking for leading quotes on each array entry and concatenating the entries until I reached a terminating quote. These are hacks however, and if this is a pure 'mental' exercise then I suspect they will prove unhelpful. If this is a real world problem then it would help to know the language so that we could offer some specific advice. A: The author here dropped in a blob of C# code that handles the scenario you're having a problem with: CSV File Imports in .Net Shouldn't be too difficult to translate. A: What if an odd number of quotes appears in the original string? This looks uncannily like CSV parsing, which has some peculiarities in handling quoted fields. The field is only escaped if the field is delimited with double quotations, so: field1, "field2, field3", field4, "field5, field6" field7 becomes field1 field2, field3 field4 "field5 field6" field7 Notice if it doesn't both start and end with a quotation, then it's not a quoted field and the double quotes are simply treated as double quotes. Incidentally, my code that someone linked to doesn't actually handle this correctly, if I recall correctly.
A: Here's a simple python implementation based on Pat's pseudocode: def splitIgnoringSingleQuote(string, split_char, remove_quotes=False): string_split = [] current_word = "" inside_quote = False for letter in string: if letter == "'": if not remove_quotes: current_word += letter if inside_quote: inside_quote = False else: inside_quote = True elif letter == split_char and not inside_quote: string_split.append(current_word) current_word = "" else: current_word += letter string_split.append(current_word) return string_split A: I use this to parse strings, not sure if it helps here; but with some minor modifications perhaps? function getstringbetween($string, $start, $end){ $string = " ".$string; $ini = strpos($string,$start); if ($ini == 0) return ""; $ini += strlen($start); $len = strpos($string,$end,$ini) - $ini; return substr($string,$ini,$len); } $fullstring = "this is my [tag]dog[/tag]"; $parsed = getstringbetween($fullstring, "[tag]", "[/tag]"); echo $parsed; // (result = dog) /mp A: Here's a simple algorithm: * *Determine if the string begins with a '"' character *Split the string into an array delimited by the '"' character. *Mark the quoted commas with a placeholder #COMMA# * *If the input starts with a '"', mark those items in the array where the index % 2 == 0 *Otherwise mark those items in the array where the index % 2 == 1 *Concatenate the items in the array to form a modified input string. *Split the string into an array delimited by the ',' character. *Replace all instances in the array of #COMMA# placeholders with the ',' character. *The array is your output. Here's the python implementation: (fixed to handle '"a,b",c,"d,e,f,h","i,j,k"') def parse_input(input): quote_mod = int(not input.startswith('"')) input = input.split('"') for item in input: if item == '': input.remove(item) for i in range(len(input)): if i % 2 == quote_mod: input[i] = input[i].replace(",", "#COMMA#") input = "".join(input).split(",") for item in input: if item == '': input.remove(item) for i in range(len(input)): input[i] = input[i].replace("#COMMA#", ",") return input # parse_input('a,"string, with",various,"values, and some",quoted') # -> ['a', 'string, with', 'various', 'values, and some', 'quoted'] # parse_input('"a,b",c,"d,e,f,h","i,j,k"') # -> ['a,b', 'c', 'd,e,f,h', 'i,j,k'] A: This is a standard CSV-style parse. A lot of people try to do this with regular expressions. You can get to about 90% with regexes, but you really need a real CSV parser to do it properly. I found a fast, excellent C# CSV parser on CodeProject a few months ago that I highly recommend! A: Here's one in pseudocode (a.k.a. Python) in one pass :-P def parsecsv(instr): i = 0 j = 0 outstrs = [] # i is fixed until a match occurs, then it advances # up to j. j inches forward each time through: while i < len(instr): if j < len(instr) and instr[j] == '"': # skip the opening quote... j += 1 # then iterate until we find a closing quote.
while instr[j] != '"': j += 1 if j == len(instr): raise Exception("Unmatched double quote at end of input.") if j == len(instr) or instr[j] == ',': s = instr[i:j] # get the substring we've found s = s.strip() # remove extra whitespace # remove surrounding quotes if they're there if len(s) > 2 and s[0] == '"' and s[-1] == '"': s = s[1:-1] # add it to the result outstrs.append(s) # skip over the comma, move i up (to where # j will be at the end of the iteration) i = j+1 j = j+1 return outstrs def testcase(instr, expected): outstr = parsecsv(instr) print outstr assert expected == outstr # Doesn't handle things like '1, 2, "a, b, c" d, 2' or # escaped quotes, but those can be added pretty easily. testcase('a, b, "1, 2, 3", c', ['a', 'b', '1, 2, 3', 'c']) testcase('a,b,"1, 2, 3" , c', ['a', 'b', '1, 2, 3', 'c']) # odd number of quotes gives a "unmatched quote" exception #testcase('a,b,"1, 2, 3" , "c', ['a', 'b', '1, 2, 3', 'c']) A: I just couldn't resist to see if I could make it work in a Python one-liner: arr = [i.replace("|", ",") for i in re.sub('"([^"]*)\,([^"]*)"',"\g<1>|\g<2>", str_to_test).split(",")] Returns ['a', 'string, with', 'various', 'values, and some', 'quoted'] It works by first replacing the ',' inside quotes to another separator (|), splitting the string on ',' and replacing the | separator again. A: Since you said language agnostic, I wrote my algorithm in the language that's closest to pseudocode as posible: def find_character_indices(s, ch): return [i for i, ltr in enumerate(s) if ltr == ch] def split_text_preserving_quotes(content, include_quotes=False): quote_indices = find_character_indices(content, '"') output = content[:quote_indices[0]].split() for i in range(1, len(quote_indices)): if i % 2 == 1: # end of quoted sequence start = quote_indices[i - 1] end = quote_indices[i] + 1 output.extend([content[start:end]]) else: start = quote_indices[i - 1] + 1 end = quote_indices[i] split_section = content[start:end].split() output.extend(split_section) output += content[quote_indices[-1] + 1:].split() return output
{ "language": "en", "url": "https://stackoverflow.com/questions/6209", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "22" }
Q: E-mail Notifications In a .net system I'm building, there is a need for automated e-mail notifications. These should be editable by an admin. What's the easiest way to do this? SQL table and WYSIWYG for editing? The queue is a great idea. I've been throwing around that type of process for a while with my old company. A: From a high level, yes. :D The main thing is some place to store the templates. A database is a great option unless you're not already using one; then file systems work fine. WYSIWYG editors (such as FCKeditor) work well and give you some good options regarding the features that you allow. Some sort of token replacement system is also a good idea if you need it. For example, if someone puts %FIRSTNAME% in the email template, the code that generates the email can do some simple pattern matching to replace known tokens with other known values that may be dynamic based on the user or other circumstances. A: I am thinking that if these are automated notifications, then this means they are probably going out as a result of some type of event in your software. If this is a web based app, and you are going to have a number of these being sent out, then consider implementing an email queue rather than sending out an email on every event. A component can query the queue periodically and send out any pending items. A: Are you just talking about the interface and storage, or the implementation of sending the emails as well? Yes, a SQL table with FROM, TO, Subject, Body should work for storage and, heck, a textbox or even maybe a RichText box should work for editing. Or is this a web interface? For actually sending it, check out the System.Web.Mail namespace, it's pretty self explanatory and easy to use :) A: Adam Haile writes: check out the System.Web.Mail namespace By which you mean System.Net.Mail in .Net 2.0 and above :) A: How about using the new Workflow components in .NET 3.0 (and 3.5)? That is what we use in combination with templates in my current project. The templates have the basic format and the tokens are replaced with user information.
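To illustrate the token-replacement idea from the first answer, here is a minimal C# sketch of my own (the %FIRSTNAME% token comes from the answer; the rest of the names are made up):

using System;
using System.Collections.Generic;
using System.Text.RegularExpressions;

class TemplateMerger
{
    static string Merge(string template, IDictionary<string, string> values)
    {
        // Replace known %TOKEN% markers; unknown tokens are left in place
        // so they stand out during testing instead of silently vanishing.
        return Regex.Replace(template, "%([A-Z]+)%", delegate(Match m)
        {
            string replacement;
            return values.TryGetValue(m.Groups[1].Value, out replacement)
                ? replacement
                : m.Value;
        });
    }

    static void Main()
    {
        var values = new Dictionary<string, string> { { "FIRSTNAME", "Jane" } };
        Console.WriteLine(Merge("Dear %FIRSTNAME%, your order has shipped.", values));
    }
}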
{ "language": "en", "url": "https://stackoverflow.com/questions/6210", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Eclipse on win64 Is anyone successfully using the latest 64-bit Ganymede release of Eclipse on Windows XP or Vista 64-bit? Currently I run the normal Eclipse 3.4 distribution on a 32bit JDK and launch & compile my apps with a 64bit JDK. Our previous experience has been that the 64bit Eclipse distro is unstable for us, so I'm curious if anyone is using it successfully. We are using JDK 1.6.0_05. A: I'm using Eclipse with a 64bit VM. However, I have to use Java 1.5, because with Java 1.6, even 1.6.0_10ea, Eclipse crashed when changing the .classpath file. On Linux I had the same problems and could only get the 64bit Eclipse to work with 64bit Java 1.5. The problem seems to be with the just-in-time compilation, since with the VM parameter -Xint Eclipse works -- but this is not a solution, because it's slow then. Edit: With 1.6.0_11 it seems to work. 1.6.0_10 final might work as well, as mentioned in the comment, but I've not tested that. A: I've been successfully using it on Vista x64 for some light Java work. Nothing too involved and no extra plugins, but basic Java coding has been working without any issues. I'm using the 3.4M7 build but it looks like the 3.4 stable build supports Vista x64 now.
{ "language": "en", "url": "https://stackoverflow.com/questions/6222", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16" }
Q: VS 2008 - Objects disappearing? I've only been using VS 2008 Team Foundation for a few weeks. Over the last few days, I've noticed that sometimes one of my objects/controls on my page just disappears from IntelliSense. The project builds perfectly and the objects are still in the HTML, but I still can't find the object. Anyone else notice this? Edit: For what it's worth, I know if I close VS and then open it up again, it comes back. A: The Visual Studio 2008 and .NET 3.5 Framework Service Pack 1 has gone out of beta; maybe you can see if this bug still occurs? A: I am also having a number of problems with VS 2008. Who would guess that I don't ever need to select multiple controls on a web form... Anyway, a lot has been fixed in Service Pack 1, which is in Beta currently. Might be worth installing that. It has gone a little way toward fixing absolute positioning. This isn't your problem, of course, but your fix might be in there as well. A: I occasionally get this in Visual Studio 2005. A method I use to get the controls back is to switch the web page between code view and design view. I know it's not a fix but it's a little quicker than restarting Visual Studio.
{ "language": "en", "url": "https://stackoverflow.com/questions/6284", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Why is Array.Length an int, and not an uint Why is Array.Length an int, and not a uint? This bothers me (just a bit) because a length value can never be negative. This also forced me to use an int for a length-property on my own class, because when you specify an int-value, this needs to be cast explicitly... So the ultimate question is: is there any use for an unsigned int (uint)? Even Microsoft seems not to use them. A: Unsigned int isn't CLS compliant and would therefore restrict usage of the property to those languages that do implement a UInt. See here: Framework 1.1 Introduction to the .NET Framework Class Library Framework 2.0 .NET Framework Class Library Overview A: Many reasons: * *uint is not CLS compliant, so making a built-in type (array) dependent on it would have been problematic *The runtime as originally designed prohibits any object on the heap occupying more than 2GB of memory. Since the maximum-sized array that would be less than or equal to this limit would be new byte[int.MaxValue], it would be puzzling to people to be able to generate positive but illegal array lengths. * *Note that this limitation has been somewhat removed in the 4.5 release, though the standard Length as int remains. *Historically C# inherits much of its syntax and convention from C and C++. In those languages, arrays are simply pointer arithmetic, so negative array indexing was possible (though normally illegal and dangerous). Since much existing code assumes that the array index is signed, this would have been a factor *On a related note, the use of signed integers for array indexes in C/C++ means that interop with these languages and unmanaged functions would require the use of ints in those circumstances anyway, which may confuse due to the inconsistency. *The BinarySearch implementation (a very useful component of many algorithms) relies on being able to use the negative range of the int to indicate that the value was not found and the location at which such a value should be inserted to maintain sorting. *When operating on an array it is likely that you would want to take a negative offset of an existing index. If you used an offset which would take you past the start of the array using uint, then the wrap-around behaviour would make your index possibly legal (in that it is positive). With an int the result would be illegal (but safe, since the runtime would guard against reading invalid memory) A: I think it also might have to do with simplifying things at a lower level: Array.Length will of course be added to a negative number at some point, and if Array.Length were unsigned and added to a negative int (two's complement), there could be messy results. A: Looks like nobody provided an answer to "the ultimate question". I believe the primary use of unsigned ints is to provide easier interfacing with external systems (P/Invoke and the like) and to cover the needs of various languages being ported to .NET. A: Typically, integer values are signed, unless you explicitly need an unsigned value. It's just the way they are used. I may not agree with that choice, but that's just the way it is. For the time being, with today's typical memory constraints, if your array or similar data structure needs a UInt32 length, you should consider other data structures. With an array of bytes, Int32 will give you 2GB of values.
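The BinarySearch point is easy to see in a couple of lines; this small C# snippet is my illustration, not from the answers:

using System;

class BinarySearchDemo
{
    static void Main()
    {
        int[] sorted = { 1, 3, 5, 7 };

        // Found: the non-negative index of the element.
        Console.WriteLine(Array.BinarySearch(sorted, 5));  // 2

        // Not found: the bitwise complement of the index where the
        // value would be inserted to keep the array sorted.
        int miss = Array.BinarySearch(sorted, 4);          // -3
        Console.WriteLine(~miss);                          // 2, the insertion point
    }
}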
{ "language": "en", "url": "https://stackoverflow.com/questions/6301", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "106" }
Q: Why are unsigned int's not CLS compliant? Why are unsigned integers not CLS compliant? I am starting to think the type specification is just for performance and not for correctness. A: Not all languages have the concept of unsigned ints. For example VB 6 had no concept of unsigned ints, which I suspect drove the decision of the designers of VB7/7.1 not to implement them as well (it's implemented now in VB8). To quote: http://msdn.microsoft.com/en-us/library/12a7a7h3.aspx The CLS was designed to be large enough to include the language constructs that are commonly needed by developers, yet small enough that most languages are able to support it. In addition, any language construct that makes it impossible to rapidly verify the type safety of code was excluded from the CLS so that all CLS-compliant languages can produce verifiable code if they choose to do so. Update: I did wonder about this some years back, and whilst I can't see why a UInt wouldn't be type-safety verifiable, I guess the CLS guys had to have a cut-off point somewhere as to what would be the baseline minimum number of value types supported. Also, when you think about the longer term, where more and more languages are being ported to the CLR, why force them to implement unsigned ints to gain CLS compliance if there is absolutely no concept, ever? A: Unsigned integers are not CLS compliant because they're not interoperable between certain languages. A: Unsigned ints don't gain you that much in real life; however, having more than one type of int gives you pain, so a lot of languages only have signed ints. CLS compliance is aimed at allowing a class to be made use of from lots of languages… Remember that no one makes you be CLS compliant. You can still use unsigned ints within a method, or as parms to a private method, as it is only the public API that CLS compliance restricts. A: Part of the issue, I suspect, revolves around the fact that unsigned integer types in C are required to behave as members of an abstract algebraic ring rather than as numbers [meaning, for example, that if an unsigned 16-bit integer variable equals zero, decrementing it is required to yield 65,535, and if it's equal to 65,535 then incrementing it is required to yield zero.] There are times when such behavior is extremely useful, but numeric types exhibiting such behavior may have gone against the spirit of some languages. I would conjecture that the decision to omit unsigned types probably predates the decision to support both checked and unchecked numeric contexts. Personally, I wish there had been separate integer types for unsigned numbers and algebraic rings; applying a unary minus operator to an unsigned 32-bit number should yield a 64-bit signed result [negating anything other than zero would yield a negative number], but applying a unary minus to a ring type should yield the additive inverse within that ring. In any case, the reason unsigned integers are not CLS compliant is that Microsoft decided that languages didn't have to support unsigned integers in order to be considered "CLS compatible".
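To see what this means in practice, here is a small C# sketch of my own: with assembly-level CLS compliance turned on, the compiler flags unsigned types only where they leak into the public surface:

using System;

[assembly: CLSCompliant(true)]

public class Inventory
{
    // Warning CS3003: the type of 'Inventory.Count' is not CLS-compliant,
    // because uint is invisible to languages without unsigned ints.
    public uint Count;

    // No warning: non-public members are outside the CLS contract.
    private uint internalCounter;

    // No warning: int is part of the CLS baseline.
    public int CompliantCount;
}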
{ "language": "en", "url": "https://stackoverflow.com/questions/6325", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "122" }
Q: Haml: how do I set a dynamic class value? I have the following html.erb code that I'm looking to move to Haml: <span class="<%= item.dashboardstatus.cssclass %>" ><%= item.dashboardstatus.status %></span> What it does is associate the CSS class of the currently assigned status to the span. How is this done in Haml? I'm sure I'm missing something really simple. A: Not sure. Maybe: %span{:class => item.dashboardstatus.cssclass }= item.dashboardstatus.status A: You can do multiple conditional class selectors with array syntax: %div{ class: [ ("active" if @thing.active?), ("highlight" if @thing.important?) ] } A: This worked. Wherever the link to the page is, do something like this: %div{"data-turbolinks" => "false"} = link_to 'Send payment', new_payments_manager_path(sender_id: current_user.id, receiver_id: @collaboration.with(current_user).id, collaboration_id: params[:id]), class: 'button'
{ "language": "en", "url": "https://stackoverflow.com/questions/6326", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "20" }
Q: Multiple foreign keys? I've got a table that is supposed to track days and costs for shipping product from one vendor to another. We (brilliantly :p) stored both the shipping vendors (FedEx, UPS) with the product handling vendors (Think... Dunder Mifflin) in a "VENDOR" table. So, I have three columns in my SHIPPING_DETAILS table that all reference VENDOR.no. For some reason MySQL isn't letting me define all three as foreign keys. Any ideas? CREATE TABLE SHIPPING_GRID( id INT NOT NULL AUTO_INCREMENT PRIMARY KEY COMMENT 'Unique ID for each row', shipping_vendor_no INT(6) NOT NULL COMMENT 'Foreign key to VENDOR.no for the shipping vendor (vendors_type must be 3)', start_vendor_no INT(6) NOT NULL COMMENT 'Foreign key to VENDOR.no for the vendor being shipped from', end_vendor_no INT(6) NOT NULL COMMENT 'Foreign key to the VENDOR.no for the vendor being shipped to', shipment_duration INT(1) DEFAULT 1 COMMENT 'Duration in whole days shipment will take', price FLOAT(5,5) NOT NULL COMMENT 'Price in US dollars per shipment lbs (down to 5 decimal places)', is_flat_rate TINYINT(1) DEFAULT 0 COMMENT '1 if is flat rate regardless of weight, 0 if price is by lbs', INDEX (shipping_vendor_no), INDEX (start_vendor_no), INDEX (end_vendor_no), FOREIGN KEY (shipping_vendor_no) REFERENCES VENDOR (no), FOREIGN KEY (start_vendor_no) REFERENCES VENDOR (no), FOREIGN KEY (end_vendor_no) REFERENCES VENDOR (no) ) TYPE = INNODB; Edited to remove double primary key definition... Yeah, unfortunately that didn't fix it though. Now I'm getting: Can't create table './REMOVED MY DB NAME/SHIPPING_GRID.frm' (errno: 150) Doing a phpinfo() tells me this for mysql: Client API version 5.0.45 Yes, the VENDOR.no is type int(6). A: You defined the primary key twice. Try: CREATE TABLE SHIPPING_GRID( id INT NOT NULL AUTO_INCREMENT PRIMARY KEY COMMENT 'Unique ID for each row', shipping_vendor_no INT(6) NOT NULL COMMENT 'Foreign key to VENDOR.no for the shipping vendor (vendors_type must be 3)', start_vendor_no INT(6) NOT NULL COMMENT 'Foreign key to VENDOR.no for the vendor being shipped from', end_vendor_no INT(6) NOT NULL COMMENT 'Foreign key to the VENDOR.no for the vendor being shipped to', shipment_duration INT(1) DEFAULT 1 COMMENT 'Duration in whole days shipment will take', price FLOAT(5,5) NOT NULL COMMENT 'Price in US dollars per shipment lbs (down to 5 decimal places)', is_flat_rate TINYINT(1) DEFAULT 0 COMMENT '1 if is flat rate regardless of weight, 0 if price is by lbs', INDEX (shipping_vendor_no), INDEX (start_vendor_no), INDEX (end_vendor_no), FOREIGN KEY (shipping_vendor_no) REFERENCES VENDOR (no), FOREIGN KEY (start_vendor_no) REFERENCES VENDOR (no), FOREIGN KEY (end_vendor_no) REFERENCES VENDOR (no) ) TYPE = INNODB; The VENDOR primary key must be INT(6), and both tables must be of type InnoDB. A: Can you provide the definition of the VENDOR table? I figured it out. The VENDOR table was MyISAM... (edited your answer to tell me to make them both INNODB ;) ) (any reason not to just switch the VENDOR type over to INNODB?) A: I ran the code here, and the error message showed (and it is right!) that you are setting the id field twice as primary key.
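To close the loop on that final parenthetical question: there is normally no reason not to convert, and it is a one-liner (this assumes nothing relies on MyISAM-specific behaviour; on MySQL versions old enough to use TYPE, TYPE is accepted as a synonym for ENGINE):

-- Check which engine the table currently uses
SHOW TABLE STATUS LIKE 'VENDOR';

-- Convert the referenced table so the foreign keys can be created
ALTER TABLE VENDOR ENGINE = INNODB;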
{ "language": "en", "url": "https://stackoverflow.com/questions/6340", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18" }
Q: How to pass a comma separated list to a stored procedure? So I have a Sybase stored proc that takes 1 parameter that's a comma separated list of strings and runs a query with it in an IN() clause: CREATE PROCEDURE getSomething @keyList varchar(4096) AS SELECT * FROM mytbl WHERE name IN (@keyList) How do I call my stored proc with more than 1 value in the list? So far I've tried exec getSomething 'John' -- works but only 1 value exec getSomething 'John','Tom' -- doesn't work - expects two variables exec getSomething "'John','Tom'" -- doesn't work - doesn't find anything exec getSomething '"John","Tom"' -- doesn't work - doesn't find anything exec getSomething '\'John\',\'Tom\'' -- doesn't work - syntax error EDIT: I actually found this page that has a great reference of the various ways to pass an array to a sproc A: If you're using Sybase 12.5 or earlier then you can't use functions. A workaround might be to populate a temporary table with the values and read them from there. A: This is a little late, but I had this exact issue a while ago and I found a solution. The trick is double quoting and then wrapping the whole string in quotes. exec getSomething """John"",""Tom"",""Bob"",""Harry""" Modify your proc to match the table entry to the string. CREATE PROCEDURE getSomething @keyList varchar(4096) AS SELECT * FROM mytbl WHERE @keyList LIKE '%'+name+'%' I've had this in production since ASE 12.5; we're now on 15.0.3. A: Pass the comma separated list into a function that returns a table value. There is a MS SQL example somewhere on StackOverflow, damned if I can see it at the moment. CREATE PROCEDURE getSomething @keyList varchar(4096) AS SELECT * FROM mytbl WHERE name IN (fn_GetKeyList(@keyList)) Call with - exec getSomething 'John,Tom,Foo,Bar' I'm guessing Sybase should be able to do something similar? A: Do you need to use a comma separated list? For the last couple of years, I've been taking this type of idea and passing in an XML file. The openxml "function" takes a string and makes it like XML, and then if you create a temp table with the data, it is queryable. DECLARE @idoc int DECLARE @doc varchar(1000) SET @doc =' <ROOT> <Customer CustomerID="VINET" ContactName="Paul Henriot"> <Order CustomerID="VINET" EmployeeID="5" OrderDate="1996-07-04T00:00:00"> <OrderDetail OrderID="10248" ProductID="11" Quantity="12"/> <OrderDetail OrderID="10248" ProductID="42" Quantity="10"/> </Order> </Customer> <Customer CustomerID="LILAS" ContactName="Carlos González"> <Order CustomerID="LILAS" EmployeeID="3" OrderDate="1996-08-16T00:00:00"> <OrderDetail OrderID="10283" ProductID="72" Quantity="3"/> </Order> </Customer> </ROOT>' --Create an internal representation of the XML document. EXEC sp_xml_preparedocument @idoc OUTPUT, @doc -- Execute a SELECT statement that uses the OPENXML rowset provider. SELECT * FROM OPENXML (@idoc, '/ROOT/Customer',1) WITH (CustomerID varchar(10), ContactName varchar(20)) A: Regarding Kevin's idea of passing the parameter to a function that splits the text into a table, here's my implementation of that function from a few years back. Works a treat. Splitting Text into Words in SQL A: This is a quick and dirty method that may be useful: select * from mytbl where "," + ltrim(rtrim(@keylist)) + "," like "%," + ltrim(rtrim(name)) + ",%" A: Not sure if it's in ASE, but in SQL Anywhere, the sa_split_list function returns a table from a CSV. It has optional arguments to pass a different delimiter (default is a comma) and a maxlength for each returned value.
sa_split_list function A: The problem with calls like this: exec getSomething '"John","Tom"' is that it's treating '"John","Tom"' as a single string; it will only match an entry in the table that is '"John","Tom"'. If you didn't want to use a temp table as in Paul's answer, then you could use dynamic SQL. (Assumes v12+) CREATE PROCEDURE getSomething @keyList varchar(4096) AS declare @sql varchar(4096) select @sql = "SELECT * FROM mytbl WHERE name IN (" + @keyList +")" exec(@sql) You will need to ensure the items in @keylist have quotes around them, even if they are single values. A: This works in SQL. Declare in your GetSomething procedure a variable of type XML as such: DECLARE @NameArray XML = NULL The body of the stored procedure implements the following: SELECT * FROM MyTbl WHERE name IN (SELECT ParamValues.ID.value('.','VARCHAR(10)') FROM @NameArray.nodes('id') AS ParamValues(ID)) From within the SQL code that calls the SP, declare and initialize the XML variable before calling the stored procedure: DECLARE @NameArray XML SET @NameArray = '<id>Name_1</id><id>Name_2</id><id>Name_3</id><id>Name_4</id>' Using your example the call to the stored procedure would be: EXEC GetSomething @NameArray I have used this method before and it works fine. If you want a quick test, copy and paste the following code to a new query and execute: DECLARE @IdArray XML SET @IdArray = '<id>Name_1</id><id>Name_2</id><id>Name_3</id><id>Name_4</id>' SELECT ParamValues.ID.value('.','VARCHAR(10)') FROM @IdArray.nodes('id') AS ParamValues(ID) A: To touch on what @Abel provided, what helped me out was: My purpose was to take whatever the end user inputted from SSRS and use that in my where clause as an IN (SELECT). Obviously @ICD_VALUE_RPT would be commented out in my Dataset query. DECLARE @ICD_VALUE_RPT VARCHAR(MAX) SET @ICD_VALUE_RPT = 'Value1, Value2' DECLARE @ICD_VALUE_ARRAY XML SET @ICD_VALUE_ARRAY = CONCAT('<id>', REPLACE(REPLACE(@ICD_VALUE_RPT, ',', '</id>,<id>'),' ',''), '</id>') Then in my WHERE I added: (PATS_WITH_PL_DIAGS.ICD10_CODE IN (SELECT ParamValues.ID.value('.','VARCHAR(MAX)') FROM @ICD_VALUE_ARRAY.nodes('id') AS ParamValues(ID)) OR PATS_WITH_PL_DIAGS.ICD9_CODE IN (SELECT ParamValues.ID.value('.','VARCHAR(MAX)') FROM @ICD_VALUE_ARRAY.nodes('id') AS ParamValues(ID)) ) A: Try this way. It works for me. CREATE PROCEDURE getSomething @keyList varchar(max) AS SELECT * FROM mytbl WHERE name IN (SELECT Value FROM [Global_Split](@keyList, ','))
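A couple of the answers above lean on a split function without showing it (fn_GetKeyList, Global_Split). For reference, here is one minimal T-SQL shape such a function often takes - an illustrative sketch, not the actual code behind those names, and note it won't help on Sybase ASE 12.5, which as the first answer says can't use functions:

CREATE FUNCTION dbo.SplitList (@list VARCHAR(4096), @delim CHAR(1))
RETURNS @items TABLE (Value VARCHAR(4096))
AS
BEGIN
    DECLARE @pos INT
    SET @pos = CHARINDEX(@delim, @list)

    WHILE @pos > 0
    BEGIN
        INSERT INTO @items VALUES (LTRIM(RTRIM(LEFT(@list, @pos - 1))))
        SET @list = SUBSTRING(@list, @pos + 1, LEN(@list))
        SET @pos = CHARINDEX(@delim, @list)
    END

    -- Whatever is left after the last delimiter is the final item.
    IF LEN(@list) > 0
        INSERT INTO @items VALUES (LTRIM(RTRIM(@list)))

    RETURN
END

The proc body then becomes: SELECT * FROM mytbl WHERE name IN (SELECT Value FROM dbo.SplitList(@keyList, ','))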
{ "language": "en", "url": "https://stackoverflow.com/questions/6369", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19" }
Q: How do you manage databases in development, test, and production? I've had a hard time trying to find good examples of how to manage database schemas and data between development, test, and production servers. Here's our setup. Each developer has a virtual machine running our app and the MySQL database. It is their personal sandbox to do whatever they want. Currently, developers will make a change to the SQL schema and do a dump of the database to a text file that they commit into SVN. We're wanting to deploy a continuous integration development server that will always be running the latest committed code. If we do that now, it will reload the database from SVN for each build. We have a test (virtual) server that runs "release candidates." Deploying to the test server is currently a very manual process, and usually involves me loading the latest SQL from SVN and tweaking it. Also, the data on the test server is inconsistent. You end up with whatever test data the last developer to commit had on his sandbox server. Where everything breaks down is the deployment to production. Since we can't overwrite the live data with test data, this involves manually re-creating all the schema changes. If there are a large number of schema changes or conversion scripts to manipulate the data, this can get really hairy. If the problem were just the schema, it'd be an easier problem, but there is "base" data in the database that is updated during development as well, such as meta-data in security and permissions tables. This is the biggest barrier I see in moving toward continuous integration and one-step builds. How do you solve it? A follow-up question: how do you track database versions so you know which scripts to run to upgrade a given database instance? Is a version table like Lance mentions below the standard procedure? Thanks for the reference to Tarantino. I'm not in a .NET environment, but I found their DataBaseChangeMangement wiki page to be very helpful. Especially this Powerpoint Presentation (.ppt) I'm going to write a Python script that checks the names of *.sql scripts in a given directory against a table in the database and runs the ones that aren't there in order based on an integer that forms the first part of the filename. If it is a pretty simple solution, as I suspect it will be, then I'll post it here. I've got a working script for this. It handles initializing the DB if it doesn't exist and running upgrade scripts as necessary. There are also switches for wiping an existing database and importing test data from a file. It's about 200 lines, so I won't post it (though I might put it on pastebin if there's interest). A: There are a couple of good options. I wouldn't use the "restore a backup" strategy. * *Script all your schema changes, and have your CI server run those scripts on the database. Have a version table to keep track of the current database version, and only execute the scripts if they are for a newer version. *Use a migration solution. These solutions vary by language, but for .NET I use Migrator.NET. This allows you to version your database and move up and down between versions. Your schema is specified in C# code.
A: You could also look at using a tool like SQL Compare to script the difference between various versions of a database, allowing you to quickly migrate between versions. A: * *Name your databases as follows - dev_<<db>>, tst_<<db>>, stg_<<db>>, prd_<<db>> (obviously you should never hardcode db names) *Thus you would be able to deploy even the different types of db's on the same physical server (I do not recommend that, but you may have to... if resources are tight) *Ensure you would be able to move data between those automatically *Separate the db creation scripts from the population - it should always be possible to recreate the db from scratch and populate it (from the old db version or an external data source) *Do not hardcode connection strings in the code (not even in the config files) - use connection string templates in the config files, which you populate dynamically; each reconfiguration of the application layer which needs a recompile is BAD *Do use database versioning and db object versioning - if you can afford it, use ready products; if not, develop something on your own *Track each DDL change and save it into some history table (example here) *DAILY backups! Test how fast you would be able to restore something lost from a backup (use automatic restore scripts) *Even if your DEV database and the PROD have exactly the same creation script, you will have problems with the data, so allow developers to create an exact copy of prod and play with it (I know I will receive minuses for this one, but a change in the mindset and the business process will cost you much less when the shit hits the fan - so get the coders to sign whatever legal agreements it takes, but ensure this one) A: This is something that I'm constantly unsatisfied with - our solution to this problem that is. For several years we maintained a separate change script for each release. This script would contain the deltas from the last production release. With each release of the application, the version number would increment, giving something like the following: * *dbChanges_1.sql *dbChanges_2.sql *... *dbChanges_n.sql This worked well enough until we started maintaining two lines of development: Trunk/Mainline for new development, and a maintenance branch for bug fixes, short term enhancements, etc. Inevitably, the need arose to make changes to the schema in the branch. At this point, we already had dbChanges_n+1.sql in the Trunk, so we ended up going with a scheme like the following: * *dbChanges_n.1.sql *dbChanges_n.2.sql *... *dbChanges_n.3.sql Again, this worked well enough, until one day we looked up and saw 42 delta scripts in the mainline and 10 in the branch. ARGH! These days we simply maintain one delta script and let SVN version it - i.e. we overwrite the script with each release. And we shy away from making schema changes in branches. So, I'm not satisfied with this either. I really like the concept of migrations from Rails. I've become quite fascinated with LiquiBase. It supports the concept of incremental database refactorings. It's worth a look and I'll be looking at it in detail soon. Anybody have experience with it? I'd be very curious to hear about your results. A: We have a very similar setup to the OP. Developers develop in VM's with private DB's. [Developers will soon be committing into private branches] Testing is run on different machines (actually in VM's hosted on a server) [Will soon be run by Hudson CI server] Test by loading the reference dump into the db.
Apply the developers' schema patches, then apply the developers' data patches. Then run unit and system tests. Production is deployed to customers as installers. What we do: We take a schema dump of our sandbox DB. Then a SQL data dump. We diff that against the previous baseline. That pair of deltas upgrades n-1 to n. We configure the dumps and deltas. So to install version N CLEAN we run the dump into an empty db. To patch, apply the intervening patches. (Juha mentioned that Rails' idea of having a table recording the current DB version is a good one and should make installing updates less fraught.) Deltas and dumps have to be reviewed before beta test. I can't see any way around this as I've seen developers insert test accounts into the DB for themselves. A: I'm afraid I'm in agreement with other posters. Developers need to script their changes. In many cases a simple ALTER TABLE won't work; you need to modify existing data too - developers need to think about what migrations are required and make sure they're scripted correctly (of course you need to test this carefully at some point in the release cycle). Moreover, if you have any sense, you'll get your developers to script rollbacks for their changes as well so they can be reverted if need be. This should be tested as well, to ensure that their rollback not only executes without error, but leaves the DB in the same state as it was in previously (this is not always possible or desirable, but is a good rule most of the time). How you hook that into a CI server, I don't know. Perhaps your CI server needs to have a known build snapshot on, which it reverts to each night and then applies all the changes since then. That's probably best, otherwise a broken migration script will break not just that night's build, but all subsequent ones. A: Your developers need to write change scripts (schema and data change) for each bug/feature they work on, not just simply dump the entire database into source control. These scripts will upgrade the current production database to the new version in development. Your build process can restore a copy of the production database into an appropriate environment and run all the scripts from source control on it, which will update the database to the current version. We do this on a daily basis to make sure all the scripts run correctly. A: Have a look at how Ruby on Rails does this. First there are so-called migration files, that basically transform the database schema and data from version N to version N+1 (or in case of downgrading, from version N+1 to N). The database has a table which tells the current version. Test databases are always wiped clean before unit tests and populated with fixed data from files. A: The book Refactoring Databases: Evolutionary Database Design might give you some ideas on how to manage the database. A short version is readable also at http://martinfowler.com/articles/evodb.html In one PHP+MySQL project I've had the database revision number stored in the database, and when the program connects to the database, it will first check the revision. If the program requires a different revision, it will open a page for upgrading the database. Each upgrade is specified in PHP code, which will change the database schema and migrate all existing data. A: If you are in the .NET environment then the solution is Tarantino (archived). It handles all of this (including which sql scripts to install) in a NANT build.
A: Check out dbdeploy; there are Java and .NET tools already available. You could follow their standards for the SQL file layouts and schema version table and write your Python version. A: We are using the command-line mysql-diff: it outputs the difference between two database schemas (from a live DB or a script) as an ALTER script. mysql-diff is executed at application start, and if the schema changed, it reports to the developer. So developers do not need to write ALTERs manually; schema updates happen semi-automatically. A: I've written a tool which (by hooking into Open DBDiff) compares database schemas, and will suggest migration scripts to you. If you make a change that deletes or modifies data, it will throw an error, but provide a suggestion for the script (e.g. when a column is missing in the new schema, it will check if the column has been renamed and create xx - generated script.sql.suggestion containing a rename statement). http://code.google.com/p/migrationscriptgenerator/ SQL Server only I'm afraid :( It's also pretty alpha, but it is VERY low friction (particularly if you combine it with Tarantino or http://code.google.com/p/simplescriptrunner/) The way I use it is to have a SQL scripts project in your .sln. You also have a db_next database locally which you make your changes to (using Management Studio or NHibernate Schema Export or LinqToSql CreateDatabase or something). Then you execute migrationscriptgenerator with the _dev and _next DBs, which creates the SQL update scripts for migrating across. A: For Oracle databases we use the oracle-ddl2svn tool. It automates the following process: * *for every db schema, get the schema DDLs *put them under version control Changes between instances are resolved manually.
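For anyone wanting a starting point for the kind of script the asker describes, here is my minimal Python sketch of that approach - not the asker's 200-line script, and the table name and file-naming convention are assumptions:

import os
import re

import MySQLdb  # any DB-API 2.0 driver would do


def applied_versions(cursor):
    cursor.execute("CREATE TABLE IF NOT EXISTS schema_version (version INT PRIMARY KEY)")
    cursor.execute("SELECT version FROM schema_version")
    return set(row[0] for row in cursor.fetchall())


def pending_scripts(script_dir, done):
    # Scripts are named like 003_add_vendor_fk.sql; the leading integer
    # determines the order they are applied in.
    pending = []
    for name in os.listdir(script_dir):
        m = re.match(r"(\d+)_.*\.sql$", name)
        if m and int(m.group(1)) not in done:
            pending.append((int(m.group(1)), name))
    return sorted(pending)


def upgrade(conn, script_dir="migrations"):
    cursor = conn.cursor()
    for version, name in pending_scripts(script_dir, applied_versions(cursor)):
        # Assumes one statement per file; multi-statement files need either
        # splitting here or a driver configured to allow them.
        cursor.execute(open(os.path.join(script_dir, name)).read())
        cursor.execute("INSERT INTO schema_version (version) VALUES (%s)", (version,))
        conn.commit()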
{ "language": "en", "url": "https://stackoverflow.com/questions/6371", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "182" }
Q: What's the difference in closure style There are two popular closure styles in javascript. The first I call anonymous constructor: new function() { var code... } and the inline executed function: (function() { var code... })(); are there differences in behaviour between those two? Is one "better" over the other? A: @Lance: the first one is also executing. Compare it with a named constructor: function Blah() { alert('blah'); } new Blah(); this is actually also executing code. The same goes for the anonymous constructor... But that was not the question ;-) A: They both create a closure by executing the code block. As a matter of style I much prefer the second for a couple of reasons: It's not immediately obvious by glancing at the first that the code will actually be executed; the line looks like it is creating a new function, rather than executing it as a constructor, but that's not what's actually happening. Avoid code that doesn't do what it looks like it's doing! Also, the (function(){ ... })(); brackets make nice bookend tokens so that you can immediately see that you're entering and leaving a closure scope. This is good because it alerts the programmer reading it to the scope change, and is especially useful if you're doing some postprocessing of the file, eg for minification. A: Both cases will execute the function; the only real difference is what the return value of the expression may be, and what the value of "this" will be inside the function. Basically the behaviour of new expression is effectively equivalent to var tempObject = {}; var result = expression.call(tempObject); if (result is not an object) result = tempObject; Although of course tempObject and result are transient values you can never see (they're implementation details in the interpreter), and there is no JS mechanism to do the "is not an object" check. Broadly speaking the "new function() { .. }" method will be slower due to the need to create the this object for the constructor. That said, this should not be a real difference as object allocation is not slow, and you shouldn't be using such code in hot code (due to the cost of creating the function object and associated closure). Edit: one thing I realised that I missed from this is that the tempObject will get the expression's prototype, eg. (before the expression.call) tempObject.__proto__ = expression.prototype A: Well, I made a page like this: <html> <body> <script type="text/javascript"> var a = new function() { alert("method 1"); return "test"; }; var b = (function() { alert("method 2"); return "test"; })(); alert(a); //a is the constructed object alert(b); //b is a string containing "test" </script> </body> </html> Surprisingly enough (to me anyway) it alerted both "method 1" and "method 2". I didn't expect "method 1" to be alerted. The difference was what the values of a and b were. a was the object constructed by the anonymous function (the returned "test" is discarded because it is not an object), while b was the string that the function returned. A: Yes, there are differences between the two. Both are anonymous functions and execute in the exact same way. But, the difference between the two is that in the second case the scope of the variables is restricted to the anonymous function itself. There is no chance of accidentally adding variables to the global scope. This implies that by using the second method, you are not cluttering up the global variable scope, which is good as these global variable values can interfere with some other global variables that you may use in some other library or are being used in a third party library.
Example: <html> <body> <script type="text/javascript"> new function() { a = "Hello"; alert(a + " Inside Function"); }; alert(a + " Outside Function"); (function() { var b = "World"; alert(b + " Inside Function"); })(); alert(b + " Outside Function"); </script> </body> </html> In the above code the output is something like: Hello Inside Function Hello Outside Function World Inside Function ... then, you get an error as 'b' is not defined outside the function! Thus, I believe that the second method is better... safer!
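A small sketch of the point in the edit above (the constructed object picking up the anonymous function's prototype), pasteable into a test page; the variable names are just for illustration:

var viaNew = new function () { };               // object built by the anonymous constructor
var viaExec = (function () { return {}; })();   // plain object returned by the executed function
alert(viaNew instanceof Object);                // true
alert(viaNew.constructor === Object);           // false - its constructor is the anonymous function
alert(viaExec.constructor === Object);          // true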
{ "language": "en", "url": "https://stackoverflow.com/questions/6373", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "23" }
Q: What client(s) should be targeted in implementing an ICalendar export for events? http://en.wikipedia.org/wiki/ICalendar I'm working to implement an export feature for events. The link above lists tons of clients that support the ICalendar standard, but the "three big ones" I can see are Apple's iCal, Microsoft's Outlook, and Google's Gmail. I'm starting to get the feeling that each of these clients implements different parts of the "standard", and I'm unsure which pieces of information we should be trying to export from the application so that someone can put it on their calendar (especially around recurrence). For example, from what I understand Outlook doesn't support hourly recurrence. Could any of you provide guidance on the "happy medium" here from a feature-implementation standpoint? Secondary question: if we decide to cut features from the export (such as hourly recurrence) because it isn't supported in Outlook, should we support it in the application as well? (It is a general purpose event scheduling application, with no business-specific use in mind... so we really are looking for the happy medium.) A: I have to say that I don't use the hourly recurrence feature; really, how many people have events that repeat within the same day? I could see it, however, if someone needed to schedule a particular medicine to be taken at recurring times throughout the day. I would say support the full feature set in the application itself, but provide a warning when users go to export the calendar that not all event details may work as expected, or find a different export path for Outlook alone that does achieve the hourly recurrence feature. A: I use iCal in Lightning (Thunderbird) and Rainlendar. I have used calendaring software for years (decades) and have never had a need for repeating events within the same day. It is simple to add additional daily repeating events in the same day if it is really needed.
{ "language": "en", "url": "https://stackoverflow.com/questions/6378", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Java Time Zone is messed up I am running a Tomcat application, and I need to display some time values. Unfortunately, the time is coming up an hour off. I looked into it and discovered that my default TimeZone is being set to: sun.util.calendar.ZoneInfo[id="GMT-08:00", offset=-28800000, dstSavings=0, useDaylight=false, transitions=0, lastRule=null] Rather than the Pacific time zone. This is further indicated when I try to print the default time zone's display name, and it comes up "GMT-08:00", which seems to indicate to me that it is not correctly set to the US Pacific time zone. I am running on Ubuntu Hardy Heron, upgraded from Gutsy Gibbon. Is there a configuration file I can update to tell the JRE to use Pacific with all the associated daylight saving time information? The time on my machine shows correctly, so it doesn't seem to be an OS-wide misconfiguration. Ok, here's an update. A coworker suggested I update JAVA_OPTS in my /etc/profile to include "-Duser.timezone=US/Pacific", which worked (I also saw CATALINA_OPTS, which I updated as well). Actually, I just exported the change into the variables rather than use the new /etc/profile (a reboot later will pick up the changes and I will be golden). However, I still think there is a better solution... there should be a configuration for Java somewhere that says what timezone it is using, or how it is grabbing the timezone. If someone knows such a setting, that would be awesome, but for now this is a decent workaround. I am using 1.5, and it is most definitely a DST problem. As you can see, the time zone is set to not use daylight saving. My belief is it is generically set to a -8 offset rather than the specific Pacific timezone. Since the generic -8 offset has no daylight saving info, it's of course not using it, but the question is, where do I tell Java to use the Pacific time zone when it starts up? I'm NOT looking for a programmatic solution, it should be a configuration solution. A: I had a similar issue, possibly the same one. However my Tomcat server runs on a Windows box so the symlink solution will not work. I set -Duser.timezone=Australia/Sydney in the JAVA_OPTS, however Tomcat would not recognize that DST was in effect. As a workaround I changed Australia/Sydney (GMT+10:00) to Pacific/Noumea (GMT+11:00) so that times would display correctly; however, I would love to know the actual solution or bug, if any. A: It's a "quirk" in the way the JVM looks up the zoneinfo file. See Bug ID 6456628. The easiest workaround is to make /etc/localtime a symlink to the correct zoneinfo file. For Pacific time, the following commands should work: # sudo cp /etc/localtime /etc/localtime.dist # sudo ln -fs /usr/share/zoneinfo/America/Los_Angeles /etc/localtime I haven't had any problems with the symlink approach. Edit: Added "sudo" to the commands. A: On Ubuntu, it's not enough to just change the /etc/localtime file. It seems to read the /etc/timezone file, too. It's better to follow the instructions to set the time zone properly. In particular, do the following: $ sudo cp /etc/timezone /etc/timezone.dist $ echo "Australia/Adelaide" | sudo tee /etc/timezone Australia/Adelaide $ sudo dpkg-reconfigure --frontend noninteractive tzdata Current default time zone: 'Australia/Adelaide' Local time is now: Sat May 8 21:19:24 CST 2010. Universal Time is now: Sat May 8 11:49:24 UTC 2010. On my Ubuntu, if /etc/localtime and /etc/timezone are inconsistent, Java seems to read the default time zone from /etc/timezone.
A: It may help to double-check the timezone rules your OS is using. /usr/bin/zdump -v /etc/localtime | less This file should contain your daylight savings rules, like this one for the year 2080: /etc/localtime Sun Mar 31 01:00:00 2080 UTC = Sun Mar 31 02:00:00 2080 BST isdst=1 gmtoff=3600 You can compare this with the timezone rules you think you should be using. They can be found in /usr/share/zoneinfo/. A: Adding a short answer that worked for me, you can use timedatectl to set the timezone. Then restart the JVM afterwards. sudo timedatectl set-timezone UTC
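As a quick way to see which zone the JVM actually resolved (a diagnostic only, since the question rules out programmatic fixes), something like this can be compiled and run on the box; the class name is illustrative:

import java.util.TimeZone;

public class ZoneCheck {
    public static void main(String[] args) {
        TimeZone tz = TimeZone.getDefault();
        // A bare "GMT-08:00" ID here (rather than "America/Los_Angeles")
        // means the JVM fell back to a fixed offset with no DST rules
        System.out.println(tz.getID());
        System.out.println("observes DST: " + tz.useDaylightTime());
    }
}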
{ "language": "en", "url": "https://stackoverflow.com/questions/6392", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "34" }
Q: How to access .Net element on Master page from a Content page? Is it possible to access an element on a Master page from the page loaded within the ContentPlaceHolder for the master? I have a ListView that lists people's names in a navigation area on the Master page. I would like to update the ListView after a person has been added to the table that the ListView is data bound to. The ListView currently does not update its values until the cache is reloaded. We have found that just re-running the ListView.DataBind() will update a listview's contents. We have not been able to run the ListView.DataBind() on a page that uses the Master page. Below is a sample of what I wanted to do but a compiler error says "PeopleListView does not exist in the current context" GIS.master - Where ListView resides ...<asp:ListView ID="PeopleListView"... GISInput_People.aspx - Uses GIS.master as its master page GISInput_People.aspx.cs AddNewPerson() { // Add person to table .... // Update Person List PeopleListView.DataBind(); ... } What would be the best way to resolve an issue like this in C# .NET? A: Assuming the control is called "PeopleListView" on the master page ListView peopleListView = (ListView)this.Master.FindControl("PeopleListView"); peopleListView.DataSource = [whatever]; peopleListView.DataBind(); But @palmsey is more correct, especially if your page could have the possibility of more than one master page. Decouple them and use an event. A: Option 1: you can create a public property for your master page control public TextBox PropMasterTextBox1 { get { return txtMasterBox1; } set { txtMasterBox1 = value; } } and access it on the content page like Master.PropMasterTextBox1.Text="SomeString"; Option 2: on the master page: public string SetMasterTextBox1Text { get { return txtMasterBox1.Text; } set { txtMasterBox1.Text = value; } } on the content page: Master.SetMasterTextBox1Text="someText"; Option 3: you can create some public method that does the work for you. These approaches are not as flexible, but they help if you just want to work with a few limited, predefined controls. A: I believe you could do this by using this.Master.FindControl or something similar, but you probably shouldn't - it requires the content page to know too much about the structure of the master page. I would suggest another method, such as firing an event in the content area that the master could listen for and re-bind when fired. A: One thing to remember is the following ASP.NET directive. <%@ MasterType attribute="value" [attribute="value"...] %> MSDN Reference It will help you when referencing this.Master by creating a strongly typed reference to the master page. You can then reference your ListView without needing to cast. A: You can use this.Master.FindControl(ControlID) to get whichever control you wish. It returns a reference to the control, so changes made through it take effect. Note that firing an event, as suggested above, may not be possible in every situation. A: Assuming your master page was named MyMaster: (Master as MyMaster).PeopleListView.DataBind(); Edit: since PeopleListView will be declared protected by default, you will either need to change this to public, or create a public property wrapper so that you can access it from your page.
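Tying the MasterType answer back to the original sample: a sketch, assuming the master file is GIS.master and that PeopleListView has been exposed through a public property as described above.

<%@ Page MasterPageFile="~/GIS.master" ... %>
<%@ MasterType VirtualPath="~/GIS.master" %>

// GISInput_People.aspx.cs - Master is now strongly typed, so no cast is needed
void AddNewPerson()
{
    // Add person to table ...
    Master.PeopleListView.DataBind(); // refresh the navigation list
}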
{ "language": "en", "url": "https://stackoverflow.com/questions/6406", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15" }
Q: C# loop - break vs. continue In a C# (feel free to answer for other languages) loop, what's the difference between break and continue as a means to leave the structure of the loop, and go to the next iteration? Example: foreach (DataRow row in myTable.Rows) { if (someConditionEvalsToTrue) { break; //what's the difference between this and continue ? //continue; } } A: Break Break forces a loop to exit immediately. Continue This does the opposite of break. Instead of terminating the loop, it immediately loops again, skipping the rest of the code. A: Simple answer: Break exits the loop immediately. Continue starts processing the next item (if there are any, by jumping to the evaluating line of the for/while). A: By example foreach(var i in Enumerable.Range(1,3)) { Console.WriteLine(i); } Prints 1, 2, 3 (on separate lines). Add a break condition at i = 2 foreach(var i in Enumerable.Range(1,3)) { if (i == 2) break; Console.WriteLine(i); } Now the loop prints 1 and stops. Replace the break with a continue. foreach(var i in Enumerable.Range(1,3)) { if (i == 2) continue; Console.WriteLine(i); } Now the loop prints 1 and 3 (skipping 2). Thus, break stops the loop, whereas continue skips to the next iteration. A: When to use break vs continue? * *Break - We're leaving the loop forever and breaking up forever. Good bye. *Continue - means that you're gonna give today a rest and sort it all out tomorrow (i.e. skip the current iteration)! (Corny stories ¯¯\(ツ)/¯¯ and pics, but hopefully they help you remember. Gripe Alert: No idea why those words are being used. If you want to skip the iteration, why not use the word skip instead of continue? This entire Stack Overflow question and 1000s of developers would not be confused if the proper name had been given.) A: Ruby unfortunately is a bit different. PS: My memory is a bit hazy on this so apologies if I'm wrong. Instead of break/continue, it has break/next, which behave the same in terms of loops. Loops (like everything else) are expressions, and "return" the last thing that they did. Most of the time, getting the return value from a loop is pointless, so everyone just does this a = 5 while a < 10 a + 1 end You can however do this a = 5 b = while a < 10 a + 1 end # b is now 10 HOWEVER, a lot of ruby code 'emulates' a loop by using a block. The canonical example is 10.times do |x| puts x end As it is much more common for people to want to do things with the result of a block, this is where it gets messy. break/next mean different things in the context of a block. break will jump out of the code that called the block next will skip the rest of the code in the block, and 'return' what you specify to the caller of the block. This doesn't make any sense without examples. def timesten 10.times{ |t| puts yield t } end timesten do |x| x * 2 end # will print 2 4 6 8 ... and so on timesten do |x| break x * 2 end # won't print anything. The break jumps out of the timesten function entirely, and the call to `puts` inside it gets skipped timesten do |x| break 5 x * 2 end # This is the same as above. It's "returning" 5, but nobody is catching it. If you did a = timesten... then a would get assigned to 5 timesten do |x| next 5 x * 2 end # this would print 5 5 5 ... and so on, because 'next 5' skips the 'x * 2' and 'returns' 5. So yeah. Ruby is awesome, but it has some awful corner-cases. This is the second worst one I've seen in my years of using it :-) A: Please let me state the obvious: note that adding neither break nor continue will simply let your program resume; i.e.
I trapped a certain error; after logging it, I wanted to resume processing, and there were more code tasks between there and the next row, so I just let it fall through. A: A really easy way to understand this is to place the word "loop" after each of the keywords. The terms now make sense if they are just read like everyday phrases. break loop - looping is broken and stops. continue loop - loop continues to execute with the next iteration. A: break would stop the foreach loop completely, continue would skip to the next DataRow. A: To break completely out of a foreach loop, break is used; to go to the next iteration in the loop, continue is used. Break is useful if you're looping through a collection of objects (like rows in a DataTable) and you are searching for a particular match; when you find that match, there's no need to continue through the remaining rows, so you want to break out. Continue is useful when you have accomplished what you need to inside a loop iteration. You'll normally have continue after an if. A: There are more than a few people who don't like break and continue. The latest complaint I saw about them was in JavaScript: The Good Parts by Douglas Crockford. But I find that sometimes using one of them really simplifies things, especially if your language doesn't include a do-while or do-until style of loop. I tend to use break in loops that are searching a list for something. Once found, there's no point in continuing, so you might as well quit. I use continue when doing something with most elements of a list, but still want to skip over a few. The break statement also comes in handy when polling for a valid response from somebody or something. Instead of: Ask a question While the answer is invalid: Ask the question You could eliminate some duplication and use: While True: Ask a question If the answer is valid: break The do-until loop that I mentioned before is the more elegant solution for that particular problem: Do: Ask a question Until the answer is valid No duplication, and no break needed either. A: If you don't want to use break, you can just increase the value of i in such a way that it makes the iteration condition false, so the loop will not execute another iteration: for(int i = 0; i < list.Count; i++){ if(i == 5) i = list.Count; //it will make "i<list.Count" false and the loop will exit } A: break will exit the loop completely, continue will just skip the current iteration. For example: for (int i = 0; i < 10; i++) { if (i == 0) { break; } DoSomeThingWith(i); } The break will cause the loop to exit on the first iteration - DoSomeThingWith will never be executed. This here: for (int i = 0; i < 10; i++) { if(i == 0) { continue; } DoSomeThingWith(i); } Will not execute DoSomeThingWith for i = 0, but the loop will continue and DoSomeThingWith will be executed for i = 1 to i = 9. A: All have given a very good explanation. I am still posting my answer just to give an example, if that can help.
// break statement for (int i = 0; i < 5; i++) { if (i == 3) { break; // It forces an immediate exit from the loop } lblDisplay.Text = lblDisplay.Text + i + "[Printed] "; } Here is the output: 0[Printed] 1[Printed] 2[Printed] So 3[Printed] & 4[Printed] will not be displayed, since break occurs when i == 3 //continue statement for (int i = 0; i < 5; i++) { if (i == 3) { continue; // It sends control back to the top of the loop } lblDisplay.Text = lblDisplay.Text + i + "[Printed] "; } Here is the output: 0[Printed] 1[Printed] 2[Printed] 4[Printed] So 3[Printed] will not be displayed, since continue occurs when i == 3 A: break causes the program counter to jump out of the scope of the innermost loop for(i = 0; i < 10; i++) { if(i == 2) break; } Works like this for(i = 0; i < 10; i++) { if(i == 2) goto BREAK; } BREAK:; continue jumps to the end of the loop. In a for loop, continue jumps to the increment expression. for(i = 0; i < 10; i++) { if(i == 2) continue; printf("%d", i); } Works like this for(i = 0; i < 10; i++) { if(i == 2) goto CONTINUE; printf("%d", i); CONTINUE:; } A: As for other languages: 'VB For i=0 To 10 If i=5 then Exit For '= break in C#; 'Do Something for i<5 Next For i=0 To 10 If i=5 then Continue For '= continue in C# 'Do Something for i<>5... Next A: Since the examples written here are pretty simple for understanding the concept, I think it's also a good idea to look at a more practical use of the continue statement. For example: we ask the user to enter 5 unique numbers; if a number has already been entered, we give them an error and continue our program. static void Main(string[] args) { var numbers = new List<int>(); while (numbers.Count < 5) { Console.WriteLine("Enter 5 unique numbers:"); var number = Convert.ToInt32(Console.ReadLine()); if (numbers.Contains(number)) { Console.WriteLine("You have already entered" + number); continue; } numbers.Add(number); } numbers.Sort(); foreach(var number in numbers) { Console.WriteLine(number); } } Let's say the user's input was 1, 2, 2, 2, 3, 4, 5. The result printed would be: 1, 2, 3, 4, 5. Why? Because every time the user entered a number that was already on the list, our program ignored it and didn't add it to the list. Now if we try the same code but without the continue statement, with the same input of 1, 2, 2, 2, 3, 4, 5, the list fills up after the first five entries and the output would be: 1, 2, 2, 2, 3. Why? Because there was no continue statement to let our program know it should ignore the already entered number. Now for the break statement, again I think it's best to show by example. Here we want our program to continuously ask the user to enter a number. We want the loop to terminate when the user types "ok", and at the end calculate the sum of all the previously entered numbers and display it on the console. This is how the break statement is used in this example: { var sum = 0; while (true) { Console.Write("Enter a number (or 'ok' to exit): "); var input = Console.ReadLine(); if (input.ToLower() == "ok") break; sum += Convert.ToInt32(input); } Console.WriteLine("Sum of all numbers is: " + sum); } The program will ask the user to enter a number until the user types "ok", and only after that is the result shown. Why? Because the break statement stops the ongoing loop once its condition is reached. If there were no break statement there, the program would keep running and nothing would happen when the user typed "ok".
I recommend copying this code and trying to remove or add these statements and see the changes yourself.
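One nuance worth spelling out, since every example above uses a single loop: break only exits the innermost enclosing loop. A small sketch:

for (int i = 0; i < 3; i++)
{
    for (int j = 0; j < 3; j++)
    {
        if (j == 1)
            break; // leaves only the inner loop; the outer loop keeps running
        Console.WriteLine(i + "," + j);
    }
}
// Prints 0,0 then 1,0 then 2,0

The same holds for continue: it applies to the innermost loop that contains it.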
{ "language": "en", "url": "https://stackoverflow.com/questions/6414", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "857" }
Q: How to programmatically iterate datagrid rows? I'm suddenly back to WinForms, after years of web development, and am having trouble with something that should be simple. I have an ArrayList of business objects bound to a Windows Forms DataGrid. I'd like the user to be able to edit the cells, and when finished, press a Save button. At that point I'd like to iterate all the rows and columns in the DataGrid to find any changes, and save them to the database. But I can't find a way to access the DataGrid rows. I'll also want to validate individual cells in real time, as they are edited, but I'm pretty sure that can be done. (Maybe not with an ArrayList as the DataSource?) But as for iterating the DataGrid, I'm quite surprised it doesn't seem possible. Must I really stuff my business objects' data into datatables in order to use the datagrid? A: foreach(var row in DataGrid1.Rows) { DoStuff(row); } //Or --------------------------------------------- foreach(DataGridRow row in DataGrid1.Rows) { DoStuff(row); } //Or --------------------------------------------- for(int i = 0; i < DataGrid1.Rows.Count; i++) { DoStuff(DataGrid1.Rows[i]); } A: object cell = myDataGrid[row, col]; A: Is there anything about WinForms 3.0 that is so much better than in 1.1? I don't know about 3.0, but you can write code in VS 2008 which runs on the .NET 2.0 framework. (So, you get to use the latest C# language, but you can only use the 2.0 libraries.) This gets you Generics (List<DataRow> instead of those GodAwful ArrayLists) and a ton of other stuff; you'll literally end up writing 3x less code. A: Aha, I was really just testing everyone once again! :) The real answer is, you rarely need to iterate the datagrid. Because even when binding to an ArrayList, the binding is two-way. Still, it is handy to know how to iterate the grid directly; it can save a few lines of code now and then. But NotMyself and Orion gave the better answers: convince the stakeholders to move up to a higher version of C#, to save development costs and increase maintainability and extensibility.
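If you do end up walking cells rather than whole rows, the indexer from the second answer combines with the bound table's dimensions; a sketch, assuming the grid is bound to a DataTable named myTable and the grid instance is dataGrid1 (both names are illustrative):

for (int row = 0; row < myTable.Rows.Count; row++)
{
    for (int col = 0; col < myTable.Columns.Count; col++)
    {
        object cell = dataGrid1[row, col]; // same indexer as shown above
        // compare against the original value here and queue any change for saving
    }
}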
{ "language": "en", "url": "https://stackoverflow.com/questions/6430", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: .NET 3.5 Redistributable -- 200 MB? Other options? I've been using a lot of new .NET 3.5 features in the work that I've been doing, lately. The application that I'm building is intended for distribution among consumers who will probably not have the latest version (or perhaps any version) of the .NET framework on their machines. I went to go download the .NET 3.5 redistributable package only to find out that it's almost 200 MB! This is unacceptable for my application, because it's supposed to be a quick and painless consumer application that installs quickly and keeps a low profile on the user's machine. For users that have .NET 3.5 already installed, our binary downloads have been instantaneous, so far. This 200 MB gorilla will more than quadruple the size of the download. Is there any other option than this redistributable package that I can use to make sure the framework is on the machine that won't take the user out of our "quick and painless" workflow? Our target time from beginning of download to finalizing the install is less than two minutes. Is it just not possible for someone who doesn't already have .NET installed? A: Have you looked at the .NET Framework Client Profile? It is much smaller than the full redistributable package and is optimized for delivering just the functionality needed for smart clients. Here is a nice overview. I don't know if this will keep the download under two minutes or not, but it should get you quite a bit closer. A: That's one of the sad reasons I'm still targeting .NET 2.0 whenever possible :/ But people don't necessarily need the full 200 MB package. There is a 3 MB bootstrapper which will only download the required components: .net 3.5 SP1 Bootstrapper However, the worst case scenario is still a pretty hefty download. Also, see this article for a more detailed explanation of the size and an alternative workaround to the size problem. Addition: Since answering this question, Scott Hanselman created SmallestDotNet.com, which will determine the smallest required download. Doesn't change the worst case scenario, but is still useful to know. A: The Client Profile has got better (and smaller) in .NET 4; see * *Towards a Smaller .NET 4 - Details on the Client Profile and Downloading .NET *What's new in .NET Framework 4 Client Profile RTM *.NET Framework Client Profile (MSDN) A: Once .NET Framework 3.5 SP1 comes out (should be fairly soon) there will be a second option, namely the "Client Profile", which is a cut-down framework that only weighs in at about 30 MB, from memory. It doesn't include all of the namespaces and classes of the full framework, but should be enough for most common apps in theory. It can be upgraded to the full framework if necessary (e.g. if an update to your software introduces a new dependency). For more information, see here: BCL Team blog A: Also, it is worth including (in some fashion) the Service Pack downloads as well. In fact, depending on how your executables are built, you might be forced to install the Framework and the Service Packs. A: For the record, .NET Framework 3.5 SP1 is required for Microsoft SQL Server 2008 to install, and it RTM'd around the same time as the release this week. Still a hefty install, but you can extract the client profile from it. Just not too sure how.
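On the "make sure the framework is on the machine" part: installers usually decide whether to launch the framework bootstrapper by reading the documented registry value for 3.5, which you can inspect by hand (the key name follows the standard NDP setup layout):

reg query "HKLM\SOFTWARE\Microsoft\NET Framework Setup\NDP\v3.5" /v Install
:: Install = 0x1 means 3.5 is already present, so the big download can be skipped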
{ "language": "en", "url": "https://stackoverflow.com/questions/6440", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19" }
Q: How can I enable disabled radio buttons? The following code works great in IE, but not in FF or Safari. I can't for the life of me work out why. The code is supposed to disable radio buttons if you select the "Disable 2 radio buttons" option. It should enable the radio buttons if you select the "Enable both radio buttons" option. These both work... However, if you don't use your mouse to move between the 2 options ("Enable..." and "Disable...") then the radio buttons do not appear to be disabled or enabled correctly, until you click anywhere else on the page (not on the radio buttons themselves). If anyone has time/is curious/feeling helpful, please paste the code below into an html page and load it up in a browser. It works great in IE, but the problem manifests itself in FF (3 in my case) and Safari, all on Windows XP. function SetLocationOptions() { var frmTemp = document.frm; var selTemp = frmTemp.user; if (selTemp.selectedIndex >= 0) { var myOpt = selTemp.options[selTemp.selectedIndex]; if (myOpt.attributes[0].nodeValue == '1') { frmTemp.transfer_to[0].disabled = true; frmTemp.transfer_to[1].disabled = true; frmTemp.transfer_to[2].checked = true; } else { frmTemp.transfer_to[0].disabled = false; frmTemp.transfer_to[1].disabled = false; } } } <form name="frm" action="coopfunds_transfer_request.asp" method="post"> <select name="user" onchange="javascript: SetLocationOptions()"> <option value="" />Choose One <option value="58" user_is_tsm="0" />Enable both radio buttons <option value="157" user_is_tsm="1" />Disable 2 radio buttons </select> <br /><br /> <input type="radio" name="transfer_to" value="fund_amount1" />Premium&nbsp;&nbsp;&nbsp; <input type="radio" name="transfer_to" value="fund_amount2" />Other&nbsp;&nbsp;&nbsp; <input type="radio" name="transfer_to" value="both" CHECKED />Both <br /><br /> <input type="button" class="buttonStyle" value="Submit Request" /> </form> A: To get FF to mimic IE's behavior when using the keyboard, you can use the keyup event on the select box. In your example (I am not a fan of attaching event handlers this way, but that's another topic), it would be like this: <select name="user" id="selUser" onchange="javascript:SetLocationOptions()" onkeyup="javascript:SetLocationOptions()"> A: Well, IE has a somewhat non-standard object model; what you're doing shouldn't work but you're getting away with it because IE is being nice to you. In Firefox and Safari, document.frm in your code evaluates to undefined. You need to be using id values on your form elements and use document.getElementById('whatever') to return a reference to them instead of referring to non-existent properties of the document object. So this works a bit better and may do what you're after: Line 27: <form name="frm" id="f" ... Line 6: var frmTemp = document.getElementById('f'); But you might want to check out this excellent book if you want to learn more about the right way of going about things: DOM Scripting by Jeremy Keith Also while we're on the subject, Bulletproof Ajax by the same author is also deserving of a place on your bookshelf as is JavaScript: The Good Parts by Doug Crockford A: Why not grab one of the AJAX scripting libraries, they abstract away a lot of the cross browser DOM scripting black magic and make life a hell of a lot easier.
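Pulling the answers together, here is a sketch of the handler rewritten to read the custom attribute by name instead of attributes[0], and to use getElementById as suggested; it would be wired to both onchange and onkeyup, and the form id 'f' is just an example:

function SetLocationOptions() {
    var frm = document.getElementById('f'); // add id="f" to the form tag
    var sel = frm.elements['user'];
    var opt = sel.options[sel.selectedIndex];
    var disable = opt.getAttribute('user_is_tsm') == '1';
    frm.elements['transfer_to'][0].disabled = disable;
    frm.elements['transfer_to'][1].disabled = disable;
    if (disable) frm.elements['transfer_to'][2].checked = true;
}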
{ "language": "en", "url": "https://stackoverflow.com/questions/6441", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19" }
Q: Date arithmetic in Unix shell scripts I need to do date arithmetic in Unix shell scripts that I use to control the execution of third party programs. I'm using a function to increment a day and another to decrement: IncrementaDia(){ echo $1 | awk ' BEGIN { diasDelMes[1] = 31 diasDelMes[2] = 28 diasDelMes[3] = 31 diasDelMes[4] = 30 diasDelMes[5] = 31 diasDelMes[6] = 30 diasDelMes[7] = 31 diasDelMes[8] = 31 diasDelMes[9] = 30 diasDelMes[10] = 31 diasDelMes[11] = 30 diasDelMes[12] = 31 } { anio=substr($1,1,4) mes=substr($1,5,2) dia=substr($1,7,2) if((anio % 4 == 0 && anio % 100 != 0) || anio % 400 == 0) { diasDelMes[2] = 29; } if( dia == diasDelMes[int(mes)] ) { if( int(mes) == 12 ) { anio = anio + 1 mes = 1 dia = 1 } else { mes = mes + 1 dia = 1 } } else { dia = dia + 1 } } END { printf("%04d%02d%02d", anio, mes, dia) } ' } if [ $# -eq 1 ]; then tomorrow=$1 else today=$(date +"%Y%m%d") tomorrow=$(IncrementaDia $today) fi but now I need to do more complex arithmetic. What's the best and most compatible way to do this? A: For BSD / OS X compatibility, you can also use the date utility with -j and -v to do date math. See the FreeBSD manpage for date. You could combine the previous Linux answers with this answer which might provide you with sufficient compatibility. On BSD, as on Linux, running date will give you the current date: $ date Wed 12 Nov 2014 13:36:00 AEDT Now with BSD's date you can do math with -v, for example listing tomorrow's date (+1d is plus one day): $ date -v +1d Thu 13 Nov 2014 13:36:34 AEDT You can use an existing date as the base, and optionally specify the parse format using strftime, and make sure you use -j so you don't change your system date: $ date -j -f "%a %b %d %H:%M:%S %Y %z" "Sat Aug 09 13:37:14 2014 +1100" Sat 9 Aug 2014 12:37:14 AEST And you can use this as the base of date calculations: $ date -v +1d -f "%a %b %d %H:%M:%S %Y %z" "Sat Aug 09 13:37:14 2014 +1100" Sun 10 Aug 2014 12:37:14 AEST Note that -v implies -j. Multiple adjustments can be provided sequentially: $ date -v +1m -v -1w Fri 5 Dec 2014 13:40:07 AEDT See the manpage for more details. A: Assuming you have GNU date, like so: date --date='1 days ago' '+%a' And similar phrases. A: To do arithmetic with dates on UNIX you get the date as the number of seconds since the UNIX epoch, do some calculation, then convert back to your printable date format. The date command should be able to both give you the seconds since the epoch and convert from that number back to a printable date. My local date command does this, % date -n 1219371462 % date 1219371462 Thu Aug 21 22:17:42 EDT 2008 % See your local date(1) man page. To increment a day add 86400 seconds. A: date --date='1 days ago' '+%a' It's not a very compatible solution. It will work only in Linux. At least, it didn't work in AIX and Solaris. It works in RHEL: date --date='1 days ago' '+%Y%m%d' 20080807 A: Why not write your scripts using a language like perl or python instead, which more naturally support complex date processing? Sure you can do it all in bash, but I think you will also get more consistency across platforms using python for example, so long as you can ensure that perl or python is installed. I should add that it is quite easy to wire python and perl scripts into a containing shell script. A: I have bumped into this a couple of times.
My thoughts are: * *Date arithmetic is always a pain *It is a bit easier when using EPOCH date format *date on Linux converts to EPOCH, but not on Solaris *For a portable solution, you need to do one of the following: *Install gnu date on solaris (already mentioned, needs human interaction to complete) *Use perl for the date part (most unix installs include perl, so I would generally assume that this action does not require additional work). A sample script (checks for the age of certain user files to see if the account can be deleted): #!/usr/local/bin/perl $today = time(); $user = $ARGV[0]; $command="awk -F: '/$user/ {print \$6}' /etc/passwd"; chomp ($user_dir = `$command`); if ( -f "$user_dir/.sh_history" ) { @file_dates = stat("$user_dir/.sh_history"); $sh_file_date = $file_dates[8]; } else { $sh_file_date = 0; } if ( -f "$user_dir/.bash_history" ) { @file_dates = stat("$user_dir/.bash_history"); $bash_file_date = $file_dates[8]; } else { $bash_file_date = 0; } if ( $sh_file_date > $bash_file_date ) { $file_date = $sh_file_date; } else { $file_date = $bash_file_date; } $difference = $today - $file_date; if ( $difference >= 3888000 ) { print "User needs to be disabled, 45 days old or older!\n"; exit (1); } else { print "OK\n"; exit (0); } A: Looking into it further, I think you can simply use date. I've tried the following on OpenBSD: I took the date of Feb. 29th 2008 and a random hour (in the form of 080229301535) and added +1 to the day part, like so: $ date -j 0802301535 Sat Mar 1 15:35:00 EST 2008 As you can see, date formatted the time correctly... HTH A: If you want to continue with awk, then the mktime and strftime functions are useful: BEGIN { dateinit } { newdate=daysadd(OldDate,DaysToAdd)} # daynum: convert DD-MON-YYYY to day count #----------------------------------------- function daynum(date, d,m,y,i,n) { y=substr(date,8,4) m=gmonths[toupper(substr(date,4,3))] d=substr(date,1,2) return mktime(y" "m" "d" 12 00 00") } #numday: convert day count to DD-MON-YYYY #------------------------------------------- function numday(n, y,m,d) { m=toupper(substr(strftime("%B",n),1,3)) return strftime("%d-"m"-%Y",n) } # daysadd: add (or subtract) days from date (DD-MON-YYYY), return new date (DD-MON-YYYY) #------------------------------------------ function daysadd(date, days) { return numday(daynum(date)+(days*86400)) } #init variables for date calcs #----------------------------------------- function dateinit( x,y,z) { # Stuff for date calcs split("JAN:1,FEB:2,MAR:3,APR:4,MAY:5,JUN:6,JUL:7,AUG:8,SEP:9,OCT:10,NOV:11,DEC:12", z) for (x in z) { split(z[x],y,":") gmonths[y[1]]=y[2] } } A: The book "Shell Script Recipes: A Problem Solution Approach" (ISBN: 978-1-59059-471-1) by Chris F.A. Johnson has a date functions library that might be helpful. The source code is available at http://apress.com/book/downloadfile/2146 (the date functions are in Chapter08/data-funcs-sh within the tar file). A: Here is an easy way for doing date computations in shell scripting. meetingDate='12/31/2011' # MM/DD/YYYY Format reminderDate=`date --date=$meetingDate'-1 day' +'%m/%d/%Y'` echo $reminderDate Below are more variations of date computation that can be achieved using date utility. http://www.cyberciti.biz/tips/linux-unix-get-yesterdays-tomorrows-date.html http://www.cyberciti.biz/faq/linux-unix-formatting-dates-for-display/ This worked for me on RHEL. A: I have written a bash script for converting dates expressed in English into conventional mm/dd/yyyy dates. It is called ComputeDate. 
Here are some examples of its use. For brevity I have placed the output of each invocation on the same line as the invocation, separarted by a colon (:). The quotes shown below are not necessary when running ComputeDate: $ ComputeDate 'yesterday': 03/19/2010 $ ComputeDate 'yes': 03/19/2010 $ ComputeDate 'today': 03/20/2010 $ ComputeDate 'tod': 03/20/2010 $ ComputeDate 'now': 03/20/2010 $ ComputeDate 'tomorrow': 03/21/2010 $ ComputeDate 'tom': 03/21/2010 $ ComputeDate '10/29/32': 10/29/2032 $ ComputeDate 'October 29': 10/1/2029 $ ComputeDate 'October 29, 2010': 10/29/2010 $ ComputeDate 'this monday': 'this monday' has passed. Did you mean 'next monday?' $ ComputeDate 'a week after today': 03/27/2010 $ ComputeDate 'this satu': 03/20/2010 $ ComputeDate 'next monday': 03/22/2010 $ ComputeDate 'next thur': 03/25/2010 $ ComputeDate 'mon in 2 weeks': 03/28/2010 $ ComputeDate 'the last day of the month': 03/31/2010 $ ComputeDate 'the last day of feb': 2/28/2010 $ ComputeDate 'the last day of feb 2000': 2/29/2000 $ ComputeDate '1 week from yesterday': 03/26/2010 $ ComputeDate '1 week from today': 03/27/2010 $ ComputeDate '1 week from tomorrow': 03/28/2010 $ ComputeDate '2 weeks from yesterday': 4/2/2010 $ ComputeDate '2 weeks from today': 4/3/2010 $ ComputeDate '2 weeks from tomorrow': 4/4/2010 $ ComputeDate '1 week after the last day of march': 4/7/2010 $ ComputeDate '1 week after next Thursday': 4/1/2010 $ ComputeDate '2 weeks after the last day of march': 4/14/2010 $ ComputeDate '2 weeks after 1 day after the last day of march': 4/15/2010 $ ComputeDate '1 day after the last day of march': 4/1/2010 $ ComputeDate '1 day after 1 day after 1 day after 1 day after today': 03/24/2010 I have included this script as an answer to this problem because it illustrates how to do date arithmetic via a set of bash functions and these functions may prove useful for others. It handles leap years and leap centuries correctly: #! /bin/bash # ConvertDate -- convert a human-readable date to a MM/DD/YY date # # Date ::= Month/Day/Year # | Month/Day # | DayOfWeek # | [this|next] DayOfWeek # | DayofWeek [of|in] [Number|next] weeks[s] # | Number [day|week][s] from Date # | the last day of the month # | the last day of Month # # Month ::= January | February | March | April | May | ... | December # January ::= jan | january | 1 # February ::= feb | january | 2 # ... # December ::= dec | december | 12 # Day ::= 1 | 2 | ... | 31 # DayOfWeek ::= today | Sunday | Monday | Tuesday | ... | Saturday # Sunday ::= sun* # ... # Saturday ::= sat* # # Number ::= Day | a # # Author: Larry Morell if [ $# = 0 ]; then printdirections $0 exit fi # Request the value of a variable GetVar () { Var=$1 echo -n "$Var= [${!Var}]: " local X read X if [ ! -z $X ]; then eval $Var="$X" fi } IsLeapYear () { local Year=$1 if [ $[20$Year % 4] -eq 0 ]; then echo yes else echo no fi } # AddToDate -- compute another date within the same year DayNames=(mon tue wed thu fri sat sun ) # To correspond with 'date' output Day2Int () { ErrorFlag= case $1 in -e ) ErrorFlag=-e; shift ;; esac local dow=$1 n=0 while [ $n -lt 7 -a $dow != "${DayNames[n]}" ]; do let n++ done if [ -z "$ErrorFlag" -a $n -eq 7 ]; then echo Cannot convert $dow to a numeric day of wee exit fi echo $[n+1] } Months=(31 28 31 30 31 30 31 31 30 31 30 31) MonthNames=(jan feb mar apr may jun jul aug sep oct nov dec) # Returns the month (1-12) from a date, or a month name Month2Int () { ErrorFlag= case $1 in -e ) ErrorFlag=-e; shift ;; esac M=$1 Month=${M%%/*} # Remove /... 
case $Month in [a-z]* ) Month=${Month:0:3} M=0 while [ $M -lt 12 -a ${MonthNames[M]} != $Month ]; do let M++ done let M++ esac if [ -z "$ErrorFlag" -a $M -gt 12 ]; then echo "'$Month' is not a valid month." exit fi echo $M } # Retrieve month,day,year from a legal date GetMonth() { echo ${1%%/*} } GetDay() { echo $1 | col / 2 } GetYear() { echo ${1##*/} } AddToDate() { local Date=$1 local days=$2 local Month=`GetMonth $Date` local Day=`echo $Date | col / 2` # Day of Date local Year=`echo $Date | col / 3` # Year of Date local LeapYear=`IsLeapYear $Year` if [ $LeapYear = "yes" ]; then let Months[1]++ fi Day=$[Day+days] while [ $Day -gt ${Months[$Month-1]} ]; do Day=$[Day - ${Months[$Month-1]}] let Month++ done echo "$Month/$Day/$Year" } # Convert a date to normal form NormalizeDate () { Date=`echo "$*" | sed 'sX *X/Xg'` local Day=`date +%d` local Month=`date +%m` local Year=`date +%Y` #echo Normalizing Date=$Date > /dev/tty case $Date in */*/* ) Month=`echo $Date | col / 1 ` Month=`Month2Int $Month` Day=`echo $Date | col / 2` Year=`echo $Date | col / 3` ;; */* ) Month=`echo $Date | col / 1 ` Month=`Month2Int $Month` Day=1 Year=`echo $Date | col / 2 ` ;; [a-z]* ) # Better be a month or day of week Exp=${Date:0:3} case $Exp in jan|feb|mar|apr|may|june|jul|aug|sep|oct|nov|dec ) Month=$Exp Month=`Month2Int $Month` Day=1 #Year stays the same ;; mon|tue|wed|thu|fri|sat|sun ) # Compute the next such day local DayOfWeek=`date +%u` D=`Day2Int $Exp` if [ $DayOfWeek -le $D ]; then Date=`AddToDate $Month/$Day/$Year $[D-DayOfWeek]` else Date=`AddToDate $Month/$Day/$Year $[7+D-DayOfWeek]` fi # Reset Month/Day/Year Month=`echo $Date | col / 1 ` Day=`echo $Date | col / 2` Year=`echo $Date | col / 3` ;; * ) echo "$Exp is not a valid month or day" exit ;; esac ;; * ) echo "$Date is not a valid date" exit ;; esac case $Day in [0-9]* );; # Day must be numeric * ) echo "$Date is not a valid date" exit ;; esac case $Year in [0-9][0-9][0-9][0-9] );; # Year must be 4 digits [0-9][0-9] ) Year=20$Year ;; esac Date=$Month/$Day/$Year echo $Date } # NormalizeDate jan # NormalizeDate january # NormalizeDate jan 2009 # NormalizeDate jan 22 1983 # NormalizeDate 1/22 # NormalizeDate 1 22 # NormalizeDate sat # NormalizeDate sun # NormalizeDate mon ComputeExtension () { local Date=$1; shift local Month=`GetMonth $Date` local Day=`echo $Date | col / 2` local Year=`echo $Date | col / 3` local ExtensionExp="$*" case $ExtensionExp in *w*d* ) # like 5 weeks 3 days or even 5w2d ExtensionExp=`echo $ExtensionExp | sed 's/[a-z]/ /g'` weeks=`echo $ExtensionExp | col 1` days=`echo $ExtensionExp | col 2` days=$[7*weeks+days] Due=`AddToDate $Month/$Day/$Year $days` ;; *d ) # Like 5 days or 5d ExtensionExp=`echo $ExtensionExp | sed 's/[a-z]/ /g'` days=$ExtensionExp Due=`AddToDate $Month/$Day/$Year $days` ;; * ) Due=$ExtensionExp ;; esac echo $Due } # Pop -- remove the first element from an array and shift left Pop () { Var=$1 eval "unset $Var[0]" eval "$Var=(\${$Var[*]})" } ComputeDate () { local Date=`NormalizeDate $1`; shift local Expression=`echo $* | sed 's/^ *a /1 /;s/,/ /' | tr A-Z a-z ` local Exp=(`echo $Expression `) local Token=$Exp # first one local Ans= #echo "Computing date for ${Exp[*]}" > /dev/tty case $Token in */* ) # Regular date M=`GetMonth $Token` D=`GetDay $Token` Y=`GetYear $Token` if [ -z "$Y" ]; then Y=$Year elif [ ${#Y} -eq 2 ]; then Y=20$Y fi Ans="$M/$D/$Y" ;; yes* ) Ans=`AddToDate $Date -1` ;; tod*|now ) Ans=$Date ;; tom* ) Ans=`AddToDate $Date 1` ;; the ) case $Expression in *day*after* ) #the day after Date Pop Exp; # Skip the
Pop Exp; # Skip day Pop Exp; # Skip after #echo Calling ComputeDate $Date ${Exp[*]} > /dev/tty Date=`ComputeDate $Date ${Exp[*]}` #Recursive call #echo "New date is " $Date > /dev/tty Ans=`AddToDate $Date 1` ;; *last*day*of*th*month|*end*of*th*month ) M=`date +%m` Day=${Months[M-1]} if [ $M -eq 2 -a `IsLeapYear $Year` = yes ]; then let Day++ fi Ans=$Month/$Day/$Year ;; *last*day*of* ) D=${Expression##*of } D=`NormalizeDate $D` M=`GetMonth $D` Y=`GetYear $D` # echo M is $M > /dev/tty Day=${Months[M-1]} if [ $M -eq 2 -a `IsLeapYear $Y` = yes ]; then let Day++ fi Ans=$[M]/$Day/$Y ;; * ) echo "Unknown expression: " $Expression exit ;; esac ;; next* ) # next DayOfWeek Pop Exp dow=`Day2Int $DayOfWeek` # First 3 chars tdow=`Day2Int ${Exp:0:3}` # First 3 chars n=$[7-dow+tdow] Ans=`AddToDate $Date $n` ;; this* ) Pop Exp dow=`Day2Int $DayOfWeek` tdow=`Day2Int ${Exp:0:3}` # First 3 chars if [ $dow -gt $tdow ]; then echo "'this $Exp' has passed. Did you mean 'next $Exp?'" exit fi n=$[tdow-dow] Ans=`AddToDate $Date $n` ;; [a-z]* ) # DayOfWeek ... M=${Exp:0:3} case $M in jan|feb|mar|apr|may|june|jul|aug|sep|oct|nov|dec ) ND=`NormalizeDate ${Exp[*]}` Ans=$ND ;; mon|tue|wed|thu|fri|sat|sun ) dow=`Day2Int $DayOfWeek` Ans=`NormalizeDate $Exp` if [ ${#Exp[*]} -gt 1 ]; then # Just a DayOfWeek #tdow=`GetDay $Exp` # First 3 chars #if [ $dow -gt $tdow ]; then #echo "'this $Exp' has passed. Did you mean 'next $Exp'?" #exit #fi #n=$[tdow-dow] #else # DayOfWeek in a future week Pop Exp # toss monday Pop Exp # toss in/off if [ $Exp = next ]; then Exp=2 fi n=$[7*(Exp-1)] # number of weeks n=$[n+7-dow+tdow] Ans=`AddToDate $Date $n` fi ;; esac ;; [0-9]* ) # Number weeks [from|after] Date n=$Exp Pop Exp; case $Exp in w* ) let n=7*n;; esac Pop Exp; Pop Exp #echo Calling ComputeDate $Date ${Exp[*]} > /dev/tty Date=`ComputeDate $Date ${Exp[*]}` #Recursive call #echo "New date is " $Date > /dev/tty Ans=`AddToDate $Date $n` ;; esac echo $Ans } Year=`date +%Y` Month=`date +%m` Day=`date +%d` DayOfWeek=`date +%a |tr A-Z a-z` Date="$Month/$Day/$Year" ComputeDate $Date $* This script makes extensive use of another script I wrote (called col ... many apologies to those who use the standard col supplied with Linux). This version of col simplifies extracting columns from the stdin. Thus, $ echo a b c d e | col 5 3 2 prints e c b Here it the col script: #!/bin/sh # col -- extract columns from a file # Usage: # col [-r] [c] col-1 col-2 ... 
# where [c] if supplied defines the field separator # where each col-i represents a column interpreted according to the presence of -r as follows: # -r present : counting starts from the right end of the line # -r absent : counting starts from the left side of the line Separator=" " Reverse=false case "$1" in -r ) Reverse=true; shift; ;; [0-9]* ) ;; * )Separator="$1"; shift; ;; esac case "$1" in -r ) Reverse=true; shift; ;; [0-9]* ) ;; * )Separator="$1"; shift; ;; esac # Replace each col-i with $i Cols="" for f in $* do if [ $Reverse = true ]; then Cols="$Cols \$(NF-$f+1)," else Cols="$Cols \$$f," fi done Cols=`echo "$Cols" | sed 's/,$//'` #echo "Using column specifications of $Cols" awk -F "$Separator" "{print $Cols}" It also uses printdirections for printing out directions when the script is invoked improperly: #!/bin/sh # # printdirections -- print header lines of a shell script # # Usage: # printdirections path # where # path is a *full* path to the shell script in question # beginning with '/' # # To use printdirections, you must include (as comments at the top # of your shell script) documentation for running the shell script. if [ $# -eq 0 -o "$*" = "-h" ]; then printdirections $0 exit fi # Delete the command invocation at the top of the file, if any # Delete from the place where printdirections occurs to the end of the file # Remove the # comments # There is a bizarre oddity here. sed '/#!/d;/.*printdirections/,$d;/ *#/!d;s/# //;s/#//' $1 > /tmp/printdirections.$$ # Count the number of lines numlines=`wc -l /tmp/printdirections.$$ | awk '{print $1}'` # Remove the last line numlines=`expr $numlines - 1` head -n $numlines /tmp/printdirections.$$ rm /tmp/printdirections.$$ To use this place the three scripts in the files ComputeDate, col, and printdirections, respectively. Place the file in directory named by your PATH, typically, ~/bin. Then make them executable with: $ chmod a+x ComputeDate col printdirections Problems? Send me some emaiL: morell AT cs.atu.edu Place ComputeDate in the subject. A: If the GNU version of date works for you, why don't you grab the source and compile it on AIX and Solaris? http://www.gnu.org/software/coreutils/ In any case, the source ought to help you get the date arithmetic correct if you are going to write you own code. As an aside, comments like "that solution is good but surely you can note it's not as good as can be. It seems nobody thought of tinkering with dates when constructing Unix." don't really get us anywhere. I found each one of the suggestions so far to be very useful and on target. A: Here are my two pennies worth - a script wrapper making use of date and grep. 
Example Usage > sh ./datecalc.sh "2012-08-04 19:43:00" + 1s 2012-08-04 19:43:00 + 0d0h0m1s 2012-08-04 19:43:01 > sh ./datecalc.sh "2012-08-04 19:43:00" - 1s1m1h1d 2012-08-04 19:43:00 - 1d1h1m1s 2012-08-03 18:41:59 > sh ./datecalc.sh "2012-08-04 19:43:00" - 1d2d1h2h1m2m1s2sblahblah 2012-08-04 19:43:00 - 1d1h1m1s 2012-08-03 18:41:59 > sh ./datecalc.sh "2012-08-04 19:43:00" x 1d Bad operator :-( > sh ./datecalc.sh "2012-08-04 19:43:00" Missing arguments :-( > sh ./datecalc.sh gibberish + 1h date: invalid date `gibberish' Invalid date :-( Script #!/bin/sh # Usage: # # datecalc "<date>" <operator> <period> # # <date> ::= see "man date", section "DATE STRING" # <operator> ::= + | - # <period> ::= INTEGER<unit> | INTEGER<unit><period> # <unit> ::= s | m | h | d if [ $# -lt 3 ]; then echo "Missing arguments :-(" exit; fi date=`eval "date -d \"$1\" +%s"` if [ -z $date ]; then echo "Invalid date :-(" exit; fi if ! ([ $2 == "-" ] || [ $2 == "+" ]); then echo "Bad operator :-(" exit; fi op=$2 minute=$[60] hour=$[$minute*$minute] day=$[24*$hour] s=`echo $3 | grep -oe '[0-9]*s' | grep -m 1 -oe '[0-9]*'` m=`echo $3 | grep -oe '[0-9]*m' | grep -m 1 -oe '[0-9]*'` h=`echo $3 | grep -oe '[0-9]*h' | grep -m 1 -oe '[0-9]*'` d=`echo $3 | grep -oe '[0-9]*d' | grep -m 1 -oe '[0-9]*'` if [ -z $s ]; then s=0; fi if [ -z $m ]; then m=0; fi if [ -z $h ]; then h=0; fi if [ -z $d ]; then d=0; fi ms=$[$m*$minute] hs=$[$h*$hour] ds=$[$d*$day] sum=$[$s+$ms+$hs+$ds] out=$[$date$op$sum] formattedout=`eval "date -d @$out +\"%Y-%m-%d %H:%M:%S\""` echo $1 $2 $d"d"$h"h"$m"m"$s"s" echo $formattedout A: This works for me: TZ=GMT+6; export TZ mes=`date --date='2 days ago' '+%m'` dia=`date --date='2 days ago' '+%d'` anio=`date --date='2 days ago' '+%Y'` hora=`date --date='2 days ago' '+%H'`
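Following up the perl suggestion in the answers above, a minimal portable sketch for the original increment-a-day case; it relies only on a stock perl, which AIX and Solaris machines normally have:

# YYYYMMDD for tomorrow, computed via the epoch (86400 seconds per day)
tomorrow=$(perl -e '@t = localtime(time + 86400); printf "%04d%02d%02d", $t[5] + 1900, $t[4] + 1, $t[3];')
echo "$tomorrow"

Note that around DST transitions a civil day is not always exactly 86400 seconds long, so if that edge case matters, anchor the arithmetic at noon before adding the offset.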
{ "language": "en", "url": "https://stackoverflow.com/questions/6467", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "22" }
Q: Faster way to find duplicates conditioned by time In a machine with AIX without PERL I need to filter records that will be considered duplicated if they have the same id and if they were registered within a period of four hours. I implemented this filter using AWK and it works pretty well, but I need a much faster solution: # Generar lista de Duplicados awk 'BEGIN { FS="," } /OK/ { old[$8] = f[$8]; f[$8] = mktime($4, $3, $2, $5, $6, $7); x[$8]++; } /OK/ && x[$8]>1 && f[$8]-old[$8] < 14400 { print }' Any suggestions? Are there ways to improve the environment (preloading the file or something like that)? The input file is already sorted. With the corrections suggested by jj33 I made a new version with better treatment of dates, still maintaining a low profile for incorporating more operations: awk 'BEGIN { FS=","; SECSPERMINUTE=60; SECSPERHOUR=3600; SECSPERDAY=86400; split("0 31 59 90 120 151 181 212 243 273 304 334", DAYSTOMONTH, " "); split("0 366 731 1096 1461 1827 2192 2557 2922 3288 3653 4018 4383 4749 5114 5479 5844 6210 6575 6940 7305", DAYSTOYEAR, " "); } /OK/ { old[$8] = f[$8]; f[$8] = mktime($4, $3, $2, $5, $6, $7); x[$8]++; } /OK/ && x[$8]>1 && f[$8]-old[$8] < 14400 { print } function mktime(y, m, d, hh, mm, ss, d2m, d2y) { d2m = DAYSTOMONTH[ int(m) ]; if ( ( int(m) > 2 ) && ( ((y % 4 == 0) && (y % 100 != 0)) || (y % 400 == 0) ) ) { d2m = d2m + 1; } d2y = DAYSTOYEAR[ y - 1999 ]; return ss + (mm*SECSPERMINUTE) + (hh*SECSPERHOUR) + (d*SECSPERDAY) + (d2m*SECSPERDAY) + (d2y*SECSPERDAY); } ' A: This sounds like a job for an actual database. Even something like SQLite could probably help you reasonably well here. The big problem I see is your definition of "within 4 hours". That's a sliding window problem, which means you can't simply quantize all the data to 4 hour segments... you have to compute all "nearby" elements for every other element separately. Ugh. A: If your data file contains all your records (i.e. it includes records that do not have duplicate ids within the file) you could pre-process it and produce a file that only contains records that have duplicate ids. If this is the case that would reduce the size of the file you need to process with your AWK program. A: How is the input file sorted? Like, cat file|sort, or sorted via a single specific field, or multiple fields? If multiple fields, what fields and what order? It appears the hour fields are a 24 hour clock, not 12, right? Are all the date/time fields zero-padded (would 9am be "9" or "09"?) Without taking into account performance it looks like your code has problems with month boundaries since it assumes all months are 30 days long. Take the two dates 2008-05-31/12:00:00 and 2008-06-01:12:00:00. Those are 24 hours apart but your code produces the same time code for both (63339969600) A: I think you would need to consider leap years. I didn't do the math, but I think during a leap year, with a hard code of 28 days for feb, a comparison of noon on 2/29 and noon on 3/1 would result in the same duplicate time stamp as before. Although it looks like you didn't implement it like that. The way you implemented it, I think you still have the problem, but it's between dates on 12/31 of $leapyear and 1/1 of $leapyear+1. I think you might also have some collisions during time changes if your code has to handle time zones that handle them. The file doesn't really seem to be sorted in any useful way. I'm guessing that field $1 is some sort of status (the "OK" you're checking for). So it's sorted by record status, then by DAY, then MONTH, YEAR, HOURS, MINUTES, SECONDS. If it was year,month,day I think there could be some optimizations there.
Still might be, but my brain's going in a different direction right now. If there are a small number of duplicate keys in proportion to the total number of lines, I think your best bet is to reduce the file your awk script works over to just duplicate keys (as David said). You could also preprocess the file so the only lines present are the /OK/ lines. I think I would do this with a pipeline where the first awk script only prints the lines with duplicate IDs and the second awk script is basically the one above but optimized to not look for /OK/ and with the knowledge that any key present is a duplicate key. If you know ahead of time that all or most lines will have repeated keys, it's probably not worth messing with. I'd bite the bullet and write it in C. Tons more lines of code, much faster than the awk script. A: On many unixen, you can get sort to sort by a particular column, or field. So by sorting the file by the ID, and then by the date, you no longer need to keep the associative array of when you last saw each ID at all. All the context is there in the order of the file. On my Mac, which has GNU sort, it's: sort -k 8 < input.txt > output.txt to sort on the ID field. You can sort on a second field too, by saying (e.g.) 8,3 instead, but ONLY 2 fields. So a unix-style time_t timestamp might not be a bad idea in the file - it's easy to sort, and saves you all those date calculations. Also, (again at least in GNU awk), there is a mktime function that makes the time_t for you from the components. A: @AnotherHowie, I thought the whole preprocessing could be done with sort and uniq. The problem is that the OP's data seems to be comma delimited and (Solaris 8's) uniq doesn't allow you any way to specify the record separator, so there wasn't a super clean way to do the preprocessing using standard unix tools. I don't think it would be any faster so I'm not going to look up the exact options, but you could do something like: cut -d, -f8 <infile.txt | sort | uniq -d | xargs -i grep {} infile.txt >outfile.txt That's not very good because it executes grep for every line containing a duplicate key. You could probably massage the uniq output into a single regexp to feed to grep, but the benefit would only be known if the OP posts the expected ratio of lines containing suspected duplicate keys to total lines in the file.
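To make that last point concrete: where GNU awk is available (it can be installed on AIX), its built-in mktime replaces the hand-rolled calendar math entirely. A sketch, assuming the field layout inferred above ($2=day, $3=month, $4=year, $5-$7=time, $8=id) and the four-hour window:

awk 'BEGIN { FS = "," }
/OK/ {
    t = mktime($4 " " $3 " " $2 " " $5 " " $6 " " $7)  # gawk mktime takes "YYYY MM DD HH MM SS"
    if (($8 in last) && t - last[$8] < 14400) print    # same id seen within 4 hours
    last[$8] = t
}' infile.txt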
{ "language": "en", "url": "https://stackoverflow.com/questions/6475", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How do you mock a Sealed class? Mocking sealed classes can be quite a pain. I currently favor an Adapter pattern to handle this, but something about it just keeps feeling weird. So, what is the best way to mock sealed classes? Java answers are more than welcome. In fact, I would anticipate that the Java community has been dealing with this longer and has a great deal to offer. But here are some of the .NET opinions: * *Why Duck Typing Matters for C# Developers *Creating wrappers for sealed and other types for mocking *Unit tests for WCF (and Moq) A: The problem with TypeMock is that it excuses bad design. Now, I know that it is often someone else's bad design that it's hiding, but permitting it into your development process can lead very easily to permitting your own bad designs. I think if you're going to use a mocking framework, you should use a traditional one (like Moq) and create an isolation layer around the unmockable thing, and mock the isolation layer instead. A: I came across this problem recently and, after reading / searching the web, it seems there is no easy way around it except to use another tool as mentioned above. Or there is the crude way of handling things, as I did: * *Create an instance of the sealed class without the constructor being called. *System.Runtime.Serialization.FormatterServices.GetUninitializedObject(instanceType); *Assign values to your properties / fields via reflection *YourObject.GetType().GetProperty("PropertyName").SetValue(dto, newValue, null); *YourObject.GetType().GetField("FieldName").SetValue(dto, newValue); A: I almost always avoid having dependencies on external classes deep within my code. Instead, I'd much rather use an adapter/bridge to talk to them. That way, I'm dealing with my semantics, and the pain of translating is isolated in one class. It also makes it easier to switch my dependencies in the long run. A: For .NET, you could use something like TypeMock, which uses the profiling API and allows you to hook into calls to nearly anything. A: My general rule of thumb is that objects that I need to mock should have a common interface too. I think this is right design-wise and makes tests a lot easier (and is usually what you get if you do TDD). More about this can be read in the Google Testing Blog's latest post (see point 9). Also, I've been working mainly in Java for the past 4 years and I can say that I can count on one hand the number of times I've created a final (sealed) class. Another rule here is that I should always have a good reason to seal a class, as opposed to sealing it by default. A: I believe that Moles, from Microsoft Research, allows you to do that. From the Moles page: Moles may be used to detour any .NET method, including non-virtual/static methods in sealed types. UPDATE: there is a new framework called "Fakes" in the upcoming VS 11 release that is designed to replace Moles: The Fakes Framework in Visual Studio 11 is the next generation of Moles & Stubs, and will eventually replace it. Fakes is different from Moles, however, so moving from Moles to Fakes will require some modifications to your code. A guide for this migration will be available at a later date. Requirements: Visual Studio 11 Ultimate, .NET 4.5 A: I generally take the route of creating an interface and adaptor/proxy class to facilitate mocking of the sealed type. However, I've also experimented with skipping creation of the interface and making the proxy type non-sealed with virtual methods.
This worked well when the proxy is really a natural base class that encapsulates and uses part of the sealed class.
When dealing with code that required this adaptation, I got tired of performing the same actions to create the interface and proxy type, so I implemented a library to automate the task. The code is somewhat more sophisticated than the sample given in the article you reference, as it produces an assembly (instead of source code), allows for code generation to be performed on any type, and doesn't require as much configuration.
For more information, please refer to this page.

A: It is perfectly reasonable to mock a sealed class because many framework classes are sealed. In my case I'm trying to mock .NET's MessageQueue class so that I can TDD my graceful exception handling logic.
If anyone has ideas on how to overcome Moq's error regarding "Invalid setup on a non-overridable member", please let me know. Code:

[TestMethod]
public void Test()
{
    Queue<Message> messages = new Queue<Message>();
    Action<Message> sendDelegate = msg => messages.Enqueue(msg);
    Func<TimeSpan, MessageQueueTransaction, Message> receiveDelegate =
        (v1, v2) =>
        {
            throw new Exception("Test Exception to simulate a failed queue read.");
        };

    MessageQueue mockQueue = QueueMonitorHelper.MockQueue(sendDelegate, receiveDelegate).Object;
}

public static Mock<MessageQueue> MockQueue(Action<Message> sendDelegate, Func<TimeSpan, MessageQueueTransaction, Message> receiveDelegate)
{
    Mock<MessageQueue> mockQueue = new Mock<MessageQueue>(MockBehavior.Strict);

    Expression<Action<MessageQueue>> sendMock = (msmq) => msmq.Send(It.IsAny<Message>()); //message => messages.Enqueue(message);
    mockQueue.Setup(sendMock).Callback<Message>(sendDelegate);

    Expression<Func<MessageQueue, Message>> receiveMock = (msmq) => msmq.Receive(It.IsAny<TimeSpan>(), It.IsAny<MessageQueueTransaction>());
    mockQueue.Setup(receiveMock).Returns<TimeSpan, MessageQueueTransaction>(receiveDelegate);

    return mockQueue;
}

A: Although it's currently only available in beta release, I think it's worthwhile keeping in mind the shim feature of the new Fakes framework (part of the Visual Studio 11 Beta release).
Shim types provide a mechanism to detour any .NET method to a user defined delegate. Shim types are code-generated by the Fakes generator, and they use delegates, which we call shim types, to specify the new method implementations. Under the hood, shim types use callbacks that were injected at runtime in the method MSIL bodies.
Personally, I was looking at using this to mock the methods on sealed framework classes such as DrawingContext.

A: Is there a way to implement a sealed class from an interface... and mock the interface instead?
Something in me feels that having sealed classes is wrong in the first place, but that's just me :)
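To make the wrap-and-mock idea from the answers above concrete, here is a minimal C# sketch; SealedClock is a hypothetical stand-in for whatever sealed framework type you are stuck with, and the interface and adapter names are mine:

using System;

// Hypothetical sealed type you cannot subclass or mock directly.
public sealed class SealedClock
{
    public DateTime Now { get { return DateTime.UtcNow; } }
}

// The seam you own: production code depends only on this interface...
public interface IClock
{
    DateTime Now { get; }
}

// ...a trivial adapter forwards to the sealed type at runtime...
public sealed class SealedClockAdapter : IClock
{
    private readonly SealedClock inner = new SealedClock();
    public DateTime Now { get { return inner.Now; } }
}

// ...and tests substitute a hand-rolled fake (or a Moq mock of IClock).
public sealed class FakeClock : IClock
{
    public DateTime Now { get; set; }
}

The sealed class itself never implements your interface; only the thin adapter does, which is the whole trick.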
{ "language": "en", "url": "https://stackoverflow.com/questions/6484", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "77" }
Q: What style do you use for creating a "class"?
There are a few ways to get class-like behavior in javascript; the most common seem to be prototype based, like this:

function Vector(x, y, z) {
    this.x = x;
    this.y = y;
    this.z = z;
    return this;
}

Vector.prototype.length = function () {
    return Math.sqrt(this.x * this.x ... );
}

and closure based approaches similar to

function Vector(x, y, z) {
    this.length = function() { return Math.sqrt(x * x + ...); }
}

For various reasons the latter is faster, but I've seen (and I frequently do write) the prototype version and was curious as to what other people do.

A: There is also the object literal approach to the prototype:

var Vector = function(){};

Vector.prototype = {
    init:function(x,y,z) {
        this.x = x;
        this.y = y;
        this.z = z;
    },
    length:function() {
        return Math.sqrt(x * x + ...);
    }
};

var v1 = new Vector();
v1.init(1,2,3);

A: Fortunately I get to use prototype.js, which provides some nice wrappers. So you can do this:

var Person = Class.create({
    initialize: function(name) {
        this.name = name;
    },
    say: function(message) {
        return this.name + ': ' + message;
    }
});

Prototype.js Documentation: Defining classes and inheritance

A: Well, I don't really have an expert opinion on this. I usually end up using the closure-based approach just because it keeps the code simpler to manage. But, I have found myself using prototypes for methods that have loads of lines of code.

A: You also have the choice of:

function Vector(x, y, z) {
    function length() { return Math.sqrt(x * x + ...); }
}

Which is probably just as slow as example two, but it looks more like Java/C# and is a bit more explicit.

A: Assigning functions to the prototype is better (for public methods) because all instances of the class will share the same copy of the method. If you assign the function inside the constructor as in the second example, every time you create a new instance, the constructor creates a new copy of the length function and assigns it to just that one instance.
However, this latter technique is useful if you want each instance to have its own copy, the main use of that being privileged/private methods which have access to private variables declared inside the constructor and inherited via the closure mechanism. Douglas Crockford has a good summary.

A: I'm a big fan of using John Resig's library for this. Lightweight, straight-forward, and you already know how to use it if you're familiar with the 'usual' object-oriented style.

A: There are no classes in javascript. There are objects, however. You don't need a class to create an object in javascript. It does have constructor functions that you can invoke with new, for example:

var james = new Person();

You can simulate class like behavior with:

prototype example:

function Car(type) {
    this.type = type;
    this.color = "red";
}

Car.prototype.getInfo = function() {
    return this.color + ' ' + this.type + ' car';
};

object literal example:

var car = {
    type: "honda",
    color: "red",
    getInfo: function () {
        return this.color + ' ' + this.type + ' car';
    }
}
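For what it's worth, the two styles combine cleanly, along the lines Crockford describes: prototype methods for shared behavior, closure-based privileged methods only where you actually need private state. A small sketch (the names and the completed length formula are mine):

function Vector3(x, y, z) {
    this.x = x;
    this.y = y;
    this.z = z;
    var secret = 'per-instance private state';       // visible only to the closure
    this.getSecret = function () { return secret; }; // privileged: one copy per instance
}

// Shared by all instances: created once, sees only public properties via "this".
Vector3.prototype.length = function () {
    return Math.sqrt(this.x * this.x + this.y * this.y + this.z * this.z);
};

var v = new Vector3(1, 2, 2);
v.length();    // 3
v.getSecret(); // 'per-instance private state'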
{ "language": "en", "url": "https://stackoverflow.com/questions/6499", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "24" }
Q: How to implement continuations?
I'm working on a Scheme interpreter written in C. Currently it uses the C runtime stack as its own stack, which is presenting a minor problem with implementing continuations. My current solution is manual copying of the C stack to the heap, and then copying it back when needed. Aside from not being standard C, this solution is hardly ideal.
What is the simplest way to implement continuations for Scheme in C?

A: Besides the nice answers you've got so far, I recommend Andrew Appel's Compiling with Continuations. It's very well written and, while not dealing directly with C, it is a source of really nice ideas for compiler writers.
The Chicken Wiki also has pages that you'll find very interesting, such as internal structure and compilation process (where CPS is explained with an actual example of compilation).

A: The traditional way is to use setjmp and longjmp, though there are caveats. Here's a reasonably good explanation.

A: Examples that you can look at are: Chicken (a Scheme implementation, written in C, that supports continuations); Paul Graham's On Lisp - where he creates a CPS transformer to implement a subset of continuations in Common Lisp; and Weblocks - a continuation based web framework, which also implements a limited form of continuations in Common Lisp.

A: Continuations aren't the problem: you can implement those with regular higher-order functions using CPS. The issue with naive stack allocation is that tail calls are never optimised, which means you can't be a proper Scheme.
The best current approach to mapping Scheme's spaghetti stack onto the stack is using trampolines: essentially extra infrastructure to handle non-C-like calls and exits from procedures. See Trampolined Style (ps).
There's some code illustrating both of these ideas.

A: Continuations basically consist of the saved state of the stack and CPU registers at the point of the context switch. At the very least you don't have to copy the entire stack to the heap when switching; you could only redirect the stack pointer.
Continuations are trivially implemented using fibers. http://en.wikipedia.org/wiki/Fiber_%28computer_science%29 . The only things that need careful encapsulation are parameter passing and return values.
In Windows, fibers are done using the CreateFiber/SwitchToFiber family of calls. In POSIX-compliant systems it can be done with makecontext/swapcontext.
boost::coroutine has a working implementation of coroutines for C++ that can serve as a reference point for implementation.

A: A good summary is available in Implementation Strategies for First-Class Continuations, an article by Clinger, Hartheimer, and Ost. I recommend looking at Chez Scheme's implementation in particular.
Stack copying isn't that complex and there are a number of well-understood techniques available to improve performance. Using heap-allocated frames is also fairly simple, but you make a tradeoff of creating overhead for the "normal" situation where you aren't using explicit continuations.
If you convert input code to continuation passing style (CPS) then you can get away with eliminating the stack altogether. However, while CPS is elegant, it adds another processing step in the front end and requires additional optimization to overcome certain performance implications.

A: As soegaard pointed out, the main reference remains R. Kent Dybvig's "Three Implementation Models for Scheme".
The idea is, a continuation is a closure that keeps its evaluation control stack.
The control stack is required in order to continue the evaluation from the moment the continuation was created using call/cc.
Invoking a continuation often takes a long time to execute and fills the memory with duplicated stacks. I wrote this stupid code to prove it; in MIT Scheme it makes the interpreter crash. The code sums the first 1000 numbers: 1+2+3+...+1000.

(call-with-current-continuation
 (lambda (break)
   ((lambda (s) (s s 1000 break))
    (lambda (s n cc)
      (if (= 0 n)
          (cc 0)
          (+ n
             ;; non-tail-recursive,
             ;; the stack grows at each recursive call
             (call-with-current-continuation
              (lambda (__) (s s (- n 1) __)))))))))

If you switch from 1000 to 100 000, the code will take 2 seconds, and if you grow the input number further it will crash.

A: I remember reading an article that may be of help to you: Cheney on the M.T.A. :-)
Some implementations of Scheme I know of, such as SISC, allocate their call frames on the heap.
@ollie: You don't need to do the hoisting if all your call frames are on the heap. There's a tradeoff in performance, of course: the time to hoist, versus the overhead required to allocate all frames on the heap. Maybe it should be a tunable runtime parameter in the interpreter. :-P

A: If you are starting from scratch, you really should look into Continuation Passing Style (CPS) transformation. Good sources include "LISP in small pieces" and Marc Feeley's Scheme in 90 minutes presentation.

A: It seems Dybvig's thesis is unmentioned so far. It is a delight to read. The heap based model is the easiest to implement, but the stack based is more efficient. Ignore the string based model.
R. Kent Dybvig. "Three Implementation Models for Scheme". http://www.cs.indiana.edu/~dyb/papers/3imp.pdf
Also check out the implementation papers on ReadScheme.org. https://web.archive.org/http://library.readscheme.org/page8.html
The abstract is as follows:
This dissertation presents three implementation models for the Scheme Programming Language. The first is a heap-based model used in some form in most Scheme implementations to date; the second is a new stack-based model that is considerably more efficient than the heap-based model at executing most programs; and the third is a new string-based model intended for use in a multiple-processor implementation of Scheme.
The heap-based model allocates several important data structures in a heap, including actual parameter lists, binding environments, and call frames.
The stack-based model allocates these same structures on a stack whenever possible. This results in less heap allocation, fewer memory references, shorter instruction sequences, less garbage collection, and more efficient use of memory.
The string-based model allocates versions of these structures right in the program text, which is represented as a string of symbols. In the string-based model, Scheme programs are translated into an FFP language designed specifically to support Scheme. Programs in this language are directly executed by the FFP machine, a multiple-processor string-reduction computer.
The stack-based model is of immediate practical benefit; it is the model used by the author's Chez Scheme system, a high-performance implementation of Scheme. The string-based model will be useful for providing Scheme as a high-level alternative to FFP on the FFP machine once the machine is realized.

A: Use an explicit stack instead.
A: Patrick is correct, the only way you can really do this is to use an explicit stack in your interpreter, and hoist the appropriate segment of stack into the heap when you need to convert to a continuation. This is basically the same as what is needed to support closures in languages that support them (closures and continuations being somewhat related).
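To make the explicit-stack/trampoline advice concrete, here is a minimal C sketch of the trampolined style; the thunk struct, the names, and the toy workload (summing 1..1,000,000 without growing the C stack) are mine, not from any real interpreter:

#include <stdio.h>
#include <stdlib.h>

/* Each "call" returns the next thunk to run instead of recursing in C;
 * a flat driver loop bounces between thunks, so the C stack stays at
 * constant depth - the property needed for tail calls and for call/cc. */
typedef struct thunk {
    struct thunk *(*fn)(struct thunk *);
    long n;        /* remaining count */
    long long acc; /* running sum */
} thunk;

static thunk *step(thunk *t) {
    if (t->n == 0) {
        printf("sum = %lld\n", t->acc);
        free(t);
        return NULL;  /* tells the driver loop to stop */
    }
    t->acc += t->n;
    t->n -= 1;
    return t;         /* a "tail call": reuse the same thunk */
}

int main(void) {
    thunk *t = malloc(sizeof *t);
    if (t == NULL) return 1;
    t->fn = step;
    t->n = 1000000;
    t->acc = 0;
    while (t != NULL)
        t = t->fn(t); /* the trampoline */
    return 0;
}

A real interpreter would allocate frames like this thunk on the heap per call, which is exactly what makes capturing a continuation cheap: the "stack" is already a heap structure you can keep a pointer to.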
{ "language": "en", "url": "https://stackoverflow.com/questions/6512", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "54" }
Q: Where should I put my log file for an asp.net application?
I have an ASP.NET application that we've written our own logging module for.
My question is, where is the standard place to write a log file to? I.e. the website will be running as the anonymous user identity (e.g. IUSR on IIS7) and I need a place where I know it'll have permission to write to.
Cheers,

A: The App_Data folder on the root of the project. It isn't served to web requests, so other people can't snoop for it.

A: I would suggest putting the log file onto a separate disk; that should give you a little performance gain, since you're not trying to both read and write to the same disk as the website. If you cannot put the log file on a separate disk, then I would simply choose a folder of your choice. In any case, you will have to give the "Network Service" account "Modify" permissions to the desired folder.
If, on the other hand, you have access to a database, then log the information there. It will be much quicker than accessing the hard drive and won't be publicly available. You'll also be able to report from the data quite easily.

A: I'm not in a position to modify the permissions on folders (especially outside of the virtual directory home folder), and don't already have an App_Data folder, so am a bit hesitant to go with that. So for the moment I'm going with the CommonApplicationData folder.
* On Vista/Server 2008 this is C:\ProgramData\
* On XP/Server 2003 this is C:\Documents and Settings\All Users\Application Data\

A: I'm not in a position to modify the permissions on folders (especially outside of the virtual directory home folder), and don't already have an App_Data folder, so am a bit hesitant to go with that.
If you have a website, you clearly have a folder somewhere. Can you not add a (non-web-facing) subfolder? It seems like that would be a more appropriate place to put your logs than dumping them into a global, shared folder.

A: You could also log to the Windows Event log or to a table in a database. How often are people looking at the event log? If it's being examined on a regular basis, writing to a table makes the reporting back much easier, as it's trivial to reverse the order and only show the last X events for the current time period. You can also query the Windows Event Log through PowerShell or with LogParser.

A: Putting it in App_Data is the best idea. Just bear in mind, when publishing the project, that if the option "Delete all existing files before publishing" is ticked, then the current data in the folder will be gone. The workaround is to skip the deletion of the App_Data folder. Another option for logging is to use some existing framework such as Log4net.
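As a rough illustration of the App_Data suggestion, here is a minimal C# sketch (the file name and class name are mine); Server.MapPath resolves the virtual path for whatever site the code runs under:

using System;
using System.IO;
using System.Web;

public static class AppDataLogger
{
    public static void Write(string message)
    {
        // "~/App_Data" maps to the physical App_Data folder of the site.
        string path = HttpContext.Current.Server.MapPath("~/App_Data/app.log");

        // AppendAllText creates the file on first use. Note this is not safe
        // under heavy concurrent writes; a real logging framework (log4net,
        // as mentioned above) handles locking and rolling for you.
        File.AppendAllText(path, DateTime.UtcNow.ToString("o") + " " + message + Environment.NewLine);
    }
}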
{ "language": "en", "url": "https://stackoverflow.com/questions/6530", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: Two marbles and a 100 story building
One of those classic programming interview questions...
You are given two marbles, and told that they will break when dropped from some certain height (and presumably suffer no damage if dropped from below that height). You're then taken to a 100 story building (presumably higher than the certain height), and asked to find the highest floor you can drop a marble from without breaking it, as efficiently as possible.
Extra info
* You must find the correct floor (not a possible range)
* The marbles are both guaranteed to break at the same floor
* Assume it takes zero time for you to change floors - only the number of marble drops counts
* Assume the correct floor is randomly distributed in the building

A: The interesting thing here is how you can do it in the fewest drops possible. Going to the 50th floor and dropping the first would be disastrous if the breaking floor is the 49th, resulting in us having to do 50 drops.
We should drop the first marble at floor n, where n is the maximum number of drops required. If the marble breaks at floor n, we may have to make n-1 drops after that. If the marble doesn't break, we go up to floor 2n-1, and if it breaks here we have to drop the second marble n-2 times in the worst case. We continue like this up to the 100th floor and try to break it at 3n-2, 4n-3....
n + (n-1) + (n-2) + ... + 1 <= 100, which gives n = 14 as the maximum number of drops required.

A: I think the real question is how accurate you want the answer to be, because your efficiency is going to really depend on that.
I'm going to agree with Justin: if you want 100% accuracy on the marbles, then once the first marble breaks you're going to have to go up 1 floor at a time from the last known "good" floor until you find out which floor is the "winner." Maybe even throw in some statistics and start at the 25th floor instead of the 50th floor, so that your worst case scenario would be 24 instead of 49.
If your answer can be plus or minus a floor or two, then there could be some optimizations.
Secondly, does walking up/down the stairs count against your efficiency? In that case, always drop both marbles and pick up both marbles on every up/down trip.

A: They each break when dropped from the same height, or are they different?
If they're the same, I go to the 50th floor and drop the first marble. If it doesn't break, I go to the 75th floor and do the same; as long as it keeps not breaking I keep going up by 50% of what's left. When it does break, I go back to one higher than where I was previously (so if it broke at the 75th floor I go back to the 51st floor) and drop the second marble and move up a floor at a time until it breaks, at which point I know the highest floor I can drop from with no marble breakage.
Probably not the best answer, I'm curious to see how others answer.

A: Drop the first marble at floor 10, 20, 30, etc. until it breaks, then jump back to the last known good floor and start dropping marbles from there one floor at a time. Worst case is 99 being the Magic Floor, and you can always find it in 19 drops or less.

A: This problem is covered in Problem 6.5 of the book "Cracking the Coding Interview" (5th edition), with solutions summarized as follows:
Observation: Regardless of how we drop Marble1, Marble2 must do a linear search.
E.g., if Marble1 breaks between floor 10 and 15, we have to check every floor in between with Marble2.
The Approach:
A First Try: Suppose we drop a Marble from the 10th floor, then the 20th, ...
* If the first Marble breaks on the first drop (Floor 10), then we have at most 10 drops total.
* If the first Marble breaks on the last drop (Floor 100), then we have at most 19 drops total (floors 10, 20, ..., 100, then 91 through 99).
* That's pretty good, but all we're concerned about is the absolute worst case. We should do some "load balancing" to make those two cases more even.
Goal: Create a system for dropping Marble1 so that the most drops required is consistent, whether Marble1 breaks on the first drop or the last drop.
* A perfectly load balanced system would be one in which Drops of Marble1 + Drops of Marble2 is always the same, regardless of where Marble1 broke.
* For that to be the case, since each drop of Marble1 takes one more step, Marble2 is allowed one fewer step.
* We must, therefore, reduce the number of steps potentially required by Marble2 by one drop each time. For example, if Marble1 is dropped on Floor 20 and then Floor 30, Marble2 is potentially required to take 9 steps. When we drop Marble1 again, we must reduce potential Marble2 steps to only 8, e.g., we must drop Marble1 at floor 39.
* We know, therefore, Marble1 must start at Floor X, then go up by X-1 floors, then X-2, ..., until it gets to 100.
* Solve for X + (X-1) + (X-2) + ... + 1 = 100. X(X+1)/2 = 100 -> X = 14.
We go to Floor 14, then 27, then 39, ... This takes 14 steps maximum.
Code & Extension:
* For a code implementation, you can check out here.
* For the extension to N marbles, M floors, check out Chapter 12: The puzzle of eggs and floors.

A: I'm personally not a very big fan of such puzzle questions; I prefer actual programming exercises in interviews.
That said, first it would depend on whether I can tell if they are broken or not from the floor I am dropping them at. I will presume I can.
I would go up to the second floor and drop the first marble. If it broke, I would try the first floor. If that broke, I would know it was no floor.
If the first didn't break, I would go to the 4th floor and drop from there. If that broke, I would go back down and get the other marble, then drop at the 3rd floor; breaking or not, I would know which is the limit.
If neither broke, I would go get both, and do the same process, this time starting at the 6th floor. This way, I can skip every other floor until I get a marble that breaks.
This would be optimized for if the marble breaks early... I suppose there is probably an optimal number of floors I could skip to get the most for each skip... but then if one breaks, I would have to check each floor individually from the first floor above the last known floor... which of course would be a pain if I skipped too many floors (sorry, not going to figure out the optimal solution right now).
Ideally, I would want a whole bag of marbles, then I could use a binary search algorithm and divide the number of floors in half with each drop... but then, that wasn't the question, was it?

A: Drop the first marble from the 3rd floor. If it breaks, you know it's floor 1 or 2, so drop the other marble from floor 2. If it doesn't break, you've found that floor 2 is the highest. If it does break, you've found that floor 1 is the highest. 2 drops.
If dropping from the 3rd floor does not break the marble, drop from floor 6. If it breaks, you know floor 4 or 5 is the highest. Drop the second marble from floor 5.
If it doesn't break, you've found that 5 is the highest. If it does, floor 4 is the highest. 4 drops.
Continue.
3 floors - maximum of 2 drops
6 floors - maximum of 4 drops
9 floors - maximum of 6 drops
12 floors - maximum of 8 drops
etc.
3x floors - maximum of 2x drops
So for a 99 floor building you'd have a maximum of 66 drops. And that is the maximum; you'd likely have fewer drops than that. Oh, and 66 is the maximum for a 100 story building too. You'd only need 66 drops if the break floor was floor 98 or 97. If the break floor was 100 you'd use 34 drops.
Even though you said it didn't matter, this would probably require the least amount of walking, and you don't have to know how high the building is.
Part of the problem is how you define efficiency. Is it more "efficient" to always have a solution in less than x drops, or is it more efficient to have a good chance at having a solution in y drops, where y < x, with the caveat that you could have more than x drops?

A: If you want a general solution which will give you the result for N floors (in your case N=100), then you can just solve the quadratic equation $x^2+x-2(N-1)=0$ and the result is the ceiling of the positive root, which is:
$$f(N)=\left\lceil\frac{-1+\sqrt{1+8(N-1)}}{2}\right\rceil$$

A: The first thing I would do is use the dead simple algorithm that starts at floor 1 and drops the marble one floor at a time until it reaches 100 or the marble breaks. Then I'd ask why I should spend time optimizing it until someone can show that it will be a problem. Too many times people get all hung up on finding the perfect complicated algorithm when a much simpler one will solve the problem. In other words, don't optimize things until it's needed.
This might be a trick question to see if you are one of those people who can make a mountain out of a molehill.

A: This can be done better with just 7 marbles. So, following the voted answer, say the marble breaks no earlier than the 49th floor.
* 50th floor -> breaks (answer is between 1 and 50 inclusive)
* 25th floor -> doesn't break (26 to 50)
* 37th floor -> doesn't break (38 to 50)
* 43rd floor -> doesn't break (44 to 50)
* 46th floor -> doesn't break (47 to 50)
* 48th floor -> doesn't break (49 or 50)
* 49th floor -> breaks (49th; this step is actually needed because the minimum breaking floor might have been the 50th)
This can be imagined as doing a binary search in a sorted set for some k, where we halve the solution space with each try. For 100 floors, we need log2 100 = 6.644 (7 tries). With 7 marbles we can determine the minimum breaking floor for buildings of up to 128 storeys.
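A quick C# sketch verifying the n(n+1)/2 reasoning from the first answer above (the variable names are mine): it finds the smallest n with n(n+1)/2 >= floors, then prints the floors to drop the first marble from.

using System;

class MarbleDrops
{
    static void Main()
    {
        int floors = 100;

        // Smallest n such that n(n+1)/2 >= floors: the worst-case drop count.
        int n = 1;
        while (n * (n + 1) / 2 < floors) n++;
        Console.WriteLine("Worst case: {0} drops", n); // 14 for 100 floors

        // First-marble floors: 14, 27, 39, 50, ... (the step shrinks by one).
        int floor = 0, step = n;
        while (floor < floors)
        {
            floor = Math.Min(floor + step, floors);
            Console.Write(floor + " ");
            step--;
        }
    }
}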
{ "language": "en", "url": "https://stackoverflow.com/questions/6547", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "40" }
Q: In C#, why can't a List<string> object be stored in a List<object> variable?
It seems that a List<string> object cannot be stored in a List<object> variable in C#, and can't even be explicitly cast that way.

List<string> sl = new List<string>();
List<object> ol;
ol = sl;

results in Cannot implicitly convert type System.Collections.Generic.List<string> to System.Collections.Generic.List<object>
And then...

List<string> sl = new List<string>();
List<object> ol;
ol = (List<object>)sl;

results in Cannot convert type System.Collections.Generic.List<string> to System.Collections.Generic.List<object>
Of course, you can do it by pulling everything out of the string list and putting it back in one at a time, but it is a rather convoluted solution.

A: This has a lot to do with covariance, i.e., generic types are considered as parameters, and if the parameters do not resolve properly to a more specific type then the operation fails. The implication of such is that you really cannot cast to a more general type like object. And as stated by Rex, the List object won't convert each object for you.
You might want to try the following code instead:

List<string> sl = new List<string>();
//populate sl
List<object> ol = new List<object>(sl);

or:

List<object> ol = new List<object>();
ol.AddRange(sl);

A: Yes, you can, from .NET 3.5:

List<string> sl = new List<string>();
List<object> ol = sl.Cast<object>().ToList();

A: Think of it this way: if you were to do such a cast, and then add an object of type Foo to the list, the list of strings is no longer consistent. If you were to iterate the first reference, you would get a class cast exception because once you hit the Foo instance, the Foo could not be converted to string!
As a side note, I think it would be more significant whether or not you can do the reverse cast:

List<object> ol = new List<object>();
List<string> sl;
sl = (List<string>)ol;

I haven't used C# in a while, so I don't know if that is legal, but that sort of cast is actually (potentially) useful. In this case, you are going from a more general class (object) to a more specific class (string) that extends from the general one. In this way, if you add to the list of strings, you are not violating the list of objects.
Does anybody know or can test if such a cast is legal in C#?

A: If you're using .NET 3.5, have a look at the Enumerable.Cast method. It's an extension method so you can call it directly on the List.

List<string> sl = new List<string>();
IEnumerable<object> ol;
ol = sl.Cast<object>();

It's not exactly what you asked for but should do the trick.
Edit: As noted by Zooba, you can then call ol.ToList() to get a List<object>.

A: Mike - I believe contravariance isn't allowed in C# either. See Generic type parameter variance in the CLR for some more info.

A: I think that this (contravariance) will actually be supported in C# 4.0. http://blogs.msdn.com/charlie/archive/2008/10/27/linq-farm-covariance-and-contravariance-in-visual-studio-2010.aspx

A: You cannot cast between generic types with different type parameters. Specialized generic types don't form part of the same inheritance tree and so are unrelated types.
To do this pre-.NET 3.5:

List<string> sl = new List<string>();
// Add strings to sl

List<object> ol = new List<object>();
foreach(string s in sl)
{
    ol.Add((object)s); // The cast is performed implicitly even if omitted
}

Using LINQ:

var sl = new List<string>();
// Add strings to sl

var ol = new List<object>(sl.Cast<object>());
// OR
var ol = sl.Cast<object>().ToList();
// OR (note that the cast to object here is required)
var ol = sl.Select(s => (object)s).ToList();

A: The reason is that a generic class like List<> is, for most purposes, treated externally as a normal class. e.g. when you say List<string>() the compiler says ListString() (which contains strings). [Technical folk: this is an extremely plain-English-ified version of what's going on]
Consequently, obviously the compiler can't be smart enough to convert a ListString to a ListObject by casting the items of its internal collection. That's why there are extension methods for IEnumerable like Cast() that allow you to easily supply conversion for the items stored inside a collection, which could be as simple as casting from one to another.

A: That's actually so that you don't try to put any odd "object" in your "ol" list variable (as List<object> would seem to allow) - because your code would crash then (because the list really is List<string> and will only accept String type objects). That's why you can't cast your variable to a more general specification.
On Java it's the other way around: you don't have generics, and instead everything is a List of object at runtime, and you really can stuff any strange object in your supposedly-strictly typed List. Search for "Reified generics" to see a wider discussion of Java's problem...

A: Such covariance on generics is not supported, but you can actually do this with arrays:

object[] a = new string[] {"spam", "eggs"};

C# performs runtime checks to prevent you from putting, say, an int into a.

A: Here is another pre-.NET 3.5 solution for any IList whose contents can be cast implicitly.

public IList<B> ConvertIList<D, B>(IList<D> list) where D : B
{
    List<B> newList = new List<B>();
    foreach (D item in list)
    {
        newList.Add(item);
    }
    return newList;
}

(Based on Zooba's example)

A: I have a:

private List<Leerling> Leerlingen = new List<Leerling>();

And I was going to fill it with data collected in a List<object>. What finally worked for me was this one:

Leerlingen = (List<Leerling>)_DeserialiseerLeerlingen._TeSerialiserenObjecten.Cast<Leerling>();

.Cast it to the type you want to get an IEnumerable from that type, then typecast the IEnumerable to the List<> you want.

A: Mm, thanks to the previous comments I found two ways to do it. The first one is taking the list of string elements and casting it to an IEnumerable list of objects:

IEnumerable<object> ob;
List<string> st = new List<string>();
ob = st.Cast<object>();

And the second one is avoiding the IEnumerable object type, just casting the string to the object type and then using the function "ToList()" in the same statement:

List<string> st = new List<string>();
List<object> ob = st.Cast<object>().ToList();

I like the second way more. I hope this helps.

A:

List<string> sl = new List<string>();
List<object> ol;
ol = new List<object>(sl);
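For completeness, here is a small sketch of what the C# 4.0 variance mentioned above actually permits; the point is that only read-only interfaces such as IEnumerable<out T> are covariant, while the mutable List<T> stays invariant:

using System.Collections.Generic;

class CovarianceDemo
{
    static void Main()
    {
        List<string> sl = new List<string> { "a", "b" };

        // Legal in C# 4.0+: IEnumerable<out T> is covariant because nothing
        // can ever push a non-string into sl through a read-only view.
        IEnumerable<object> oe = sl;

        // Still illegal, by design: if this compiled, ol.Add(42) would
        // silently corrupt the underlying List<string>.
        // List<object> ol = sl;

        // Copying remains the safe route when you need a writable list.
        List<object> ol = new List<object>(sl);
    }
}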
{ "language": "en", "url": "https://stackoverflow.com/questions/6557", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "85" }
Q: Understanding reference counting with Cocoa and Objective-C
I'm just beginning to have a look at Objective-C and Cocoa with a view to playing with the iPhone SDK. I'm reasonably comfortable with C's malloc and free concept, but Cocoa's reference counting scheme has me rather confused. I'm told it's very elegant once you understand it, but I'm just not over the hump yet.
How do release, retain and autorelease work, and what are the conventions about their use? (Or failing that, what did you read which helped you get it?)

A: If you're writing code for the desktop and you can target Mac OS X 10.5, you should at least look into using Objective-C garbage collection. It really will simplify most of your development — that's why Apple put all the effort into creating it in the first place, and making it perform well.
As for the memory management rules when not using GC:
* If you create a new object using +alloc/+allocWithZone:, +new, -copy or -mutableCopy, or if you -retain an object, you are taking ownership of it and must ensure it is sent -release.
* If you receive an object in any other way, you are not the owner of it and should not ensure it is sent -release.
* If you want to make sure an object is sent -release, you can either send that yourself, or you can send the object -autorelease and the current autorelease pool will send it -release (once per received -autorelease) when the pool is drained.
Typically -autorelease is used as a way of ensuring that objects live for the length of the current event, but are cleaned up afterwards, as there is an autorelease pool that surrounds Cocoa's event processing. In Cocoa, it is far more common to return objects to a caller that are autoreleased than it is to return objects that the caller itself needs to release.

A: Objective-C uses Reference Counting, which means each Object has a reference count. When an object is created, it has a reference count of "1". Simply speaking, when an object is referred to (i.e., stored somewhere), it gets "retained", which means its reference count is increased by one. When an object is no longer needed, it is "released", which means its reference count is decreased by one.
When an object's reference count is 0, the object is freed. This is basic reference counting.
For some languages, references are automatically increased and decreased, but Objective-C is not one of those languages. Thus the programmer is responsible for retaining and releasing.
A typical way to write a method is:

id myVar = [someObject someMessage];
.... do something ....;
[myVar release];
return someValue;

The problem of needing to remember to release any acquired resources inside of code is both tedious and error-prone. Objective-C introduces another concept aimed at making this much easier: Autorelease Pools. Autorelease pools are special objects that are installed on each thread. They are a fairly simple class, if you look up NSAutoreleasePool.
When an object gets an "autorelease" message sent to it, the object will look for any autorelease pools sitting on the stack for this current thread. It will add the object to the list as an object to send a "release" message to at some point in the future, which is generally when the pool itself is released.
Taking the code above, you can rewrite it to be shorter and easier to read by saying:

id myVar = [[someObject someMessage] autorelease];
... do something ...;
return someValue;

Because the object is autoreleased, we no longer need to explicitly call "release" on it.
This is because we know some autorelease pool will do it for us later.
Hopefully this helps. The Wikipedia article is pretty good about reference counting. More information about autorelease pools can be found here. Also note that if you are building for Mac OS X 10.5 and later, you can tell Xcode to build with garbage collection enabled, allowing you to completely ignore retain/release/autorelease.

A: Joshua (#6591) - The Garbage collection stuff in Mac OS X 10.5 seems pretty cool, but isn't available for the iPhone (or if you want your app to run on pre-10.5 versions of Mac OS X).
Also, if you're writing a library or something that might be reused, using the GC mode locks anyone using the code into also using the GC mode, so as I understand it, anyone trying to write widely reusable code tends to go for managing memory manually.

A: As ever, when people start trying to re-word the reference material they almost invariably get something wrong or provide an incomplete description.
Apple provides a complete description of Cocoa's memory management system in Memory Management Programming Guide for Cocoa, at the end of which there is a brief but accurate summary of the Memory Management Rules.

A: I'll not add to the specifics of retain/release, other than that you might want to think about dropping $50 and getting the Hillegass book, but I would strongly suggest getting into using the Instruments tools very early in the development of your application (even your first one!). To do so, Run->Start with performance tools. I'd start with Leaks, which is just one of many of the instruments available, but will help to show you when you've forgotten to release. It's quite daunting how much information you'll be presented with. But check out this tutorial to get up and going fast:
COCOA TUTORIAL: FIXING MEMORY LEAKS WITH INSTRUMENTS
Actually trying to force leaks might be a better way of, in turn, learning how to prevent them! Good luck ;)

A: Matt Dillard wrote:

return [[s autorelease] release];

Autorelease does not retain the object. Autorelease simply puts it in a queue to be released later. You do not want to have a release statement there.

A: My usual collection of Cocoa memory management articles: cocoa memory management

A: NilObject's answer is a good start. Here's some supplemental info pertaining to manual memory management (required on the iPhone).
If you personally alloc/init an object, it comes with a reference count of 1. You are responsible for cleaning up after it when it's no longer needed, either by calling [foo release] or [foo autorelease]. release cleans it up right away, whereas autorelease adds the object to the autorelease pool, which will automatically release it at a later time.
autorelease is primarily for when you have a method that needs to return the object in question (so you can't manually release it, else you'll be returning a nil object) but you don't want to hold on to it, either.
If you acquire an object where you did not call alloc/init to get it -- for example:

foo = [NSString stringWithString:@"hello"];

but you want to hang on to this object, you need to call [foo retain]. Otherwise, it's possible it will get autoreleased and you'll be holding on to a nil reference (as it would in the above stringWithString example). When you no longer need it, call [foo release].
A: There's a free screencast available from the iDeveloperTV Network: Memory Management in Objective-C

A: The answers above give clear restatements of what the documentation says; the problem most new people run into is the undocumented cases. For example:
* Autorelease: docs say it will trigger a release "at some point in the future." WHEN?! Basically, you can count on the object being around until you exit your code back into the system event loop. The system MAY release the object any time after the current event cycle. (I think Matt said that, earlier.)
* Static strings: NSString *foo = @"bar"; -- do you have to retain or release that? No. How about:

- (NSString *)getBar {
    return @"bar";
}
...
NSString *foo = [self getBar]; // still no need to retain or release

* The Creation Rule: If you created it, you own it, and are expected to release it.
In general, the way new Cocoa programmers get messed up is by not understanding which routines return an object with a retainCount > 0.
Here is a snippet from Very Simple Rules For Memory Management In Cocoa:
Retention Count rules
* Within a given block, the use of -copy, -alloc and -retain should equal the use of -release and -autorelease.
* Objects created using convenience constructors (e.g. NSString's stringWithString) are considered autoreleased.
* Implement a -dealloc method to release the instance variables you own
The 1st bullet says: if you called alloc (or new, or copy), you need to call release on that object. The 2nd bullet says: if you use a convenience constructor and you need the object to hang around (as with an image to be drawn later), you need to retain (and then later release) it. The 3rd should be self-explanatory.

A: Let's start with retain and release; autorelease is really just a special case once you understand the basic concepts.
In Cocoa, each object keeps track of how many times it is being referenced (specifically, the NSObject base class implements this). By calling retain on an object, you are telling it that you want to up its reference count by one. By calling release, you tell the object you are letting go of it, and its reference count is decremented. If, after calling release, the reference count is now zero, then that object's memory is freed by the system.
The basic way this differs from malloc and free is that any given object doesn't need to worry about other parts of the system crashing because you've freed memory they were using. Assuming everyone is playing along and retaining/releasing according to the rules, when one piece of code retains and then releases the object, any other piece of code also referencing the object will be unaffected.
What can sometimes be confusing is knowing the circumstances under which you should call retain and release. My general rule of thumb is that if I want to hang on to an object for some length of time (if it's a member variable in a class, for instance), then I need to make sure the object's reference count knows about me. As described above, an object's reference count is incremented by calling retain. By convention, it is also incremented (set to 1, really) when the object is created with an "init" method. In either of these cases, it is my responsibility to call release on the object when I'm done with it. If I don't, there will be a memory leak.
Example of object creation:

NSString* s = [[NSString alloc] init];  // Ref count is 1
[s retain];                             // Ref count is 2 - silly to do this after init
[s release];                            // Ref count is back to 1
[s release];                            // Ref count is 0, object is freed

Now for autorelease. Autorelease is used as a convenient (and sometimes necessary) way to tell the system to free this object up after a little while. From a plumbing perspective, when autorelease is called, the current thread's NSAutoreleasePool is alerted of the call. The NSAutoreleasePool now knows that once it gets an opportunity (after the current iteration of the event loop), it can call release on the object. From our perspective as programmers, it takes care of calling release for us, so we don't have to (and in fact, we shouldn't).
What's important to note is that (again, by convention) all object creation class methods return an autoreleased object. For example, in the following example, the variable "s" has a reference count of 1, but after the event loop completes, it will be destroyed.

NSString* s = [NSString stringWithString:@"Hello World"];

If you want to hang onto that string, you'd need to call retain explicitly, and then explicitly release it when you're done.
Consider the following (very contrived) bit of code, and you'll see a situation where autorelease is required:

- (NSString*)createHelloWorldString
{
    NSString* s = [[NSString alloc] initWithString:@"Hello World"];

    // Now what? We want to return s, but we've upped its reference count.
    // The caller shouldn't be responsible for releasing it, since we're the
    // ones that created it. If we call release, however, the reference
    // count will hit zero and bad memory will be returned to the caller.
    // The answer is to call autorelease before returning the string. By
    // explicitly calling autorelease, we pass the responsibility for
    // releasing the string on to the thread's NSAutoreleasePool, which will
    // happen at some later time. The consequence is that the returned string
    // will still be valid for the caller of this function.
    return [s autorelease];
}

I realize all of this is a bit confusing - at some point, though, it will click. Here are a few references to get you going:
* Apple's introduction to memory management.
* Cocoa Programming for Mac OS X (4th Edition), by Aaron Hillegass - a very well written book with lots of great examples. It reads like a tutorial.
* If you're truly diving in, you could head to Big Nerd Ranch. This is a training facility run by Aaron Hillegass - the author of the book mentioned above. I attended the Intro to Cocoa course there several years ago, and it was a great way to learn.

A: If you understand the process of retain/release then there are two golden rules that are "duh" obvious to established Cocoa programmers, but unfortunately are rarely spelled out this clearly for newcomers.
* If a function which returns an object has alloc, create or copy in its name, then the object is yours. You must call [object release] when you are finished with it. Or CFRelease(object), if it's a Core Foundation object.
* If it does NOT have one of these words in its name, then the object belongs to someone else. You must call [object retain] if you wish to keep the object after the end of your function.
You would be well served to also follow this convention in functions you create yourself.
(Nitpickers: Yes, there are unfortunately a few API calls that are exceptions to these rules, but they are rare).
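To illustrate the ownership rules in practice, here is the classic pre-ARC accessor pattern (a sketch; the property name is mine): retain the incoming value, release the old one, and guard against assigning an object to itself.

- (void)setName:(NSString *)newName {
    if (newName != name) {
        [newName retain];  // take ownership of the new value first
        [name release];    // then give up ownership of the old one
        name = newName;
    }
}

The inequality check (or, equivalently, retaining before releasing) protects against self-assignment: without it, releasing name first could deallocate the very object you are about to store.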
A: Lots of good information on cocoadev too:
* MemoryManagement
* RulesOfThumb

A: As several people mentioned already, Apple's Intro to Memory Management is by far the best place to start.
One useful link I haven't seen mentioned yet is Practical Memory Management. You'll find it in the middle of Apple's docs if you read through them, but it's worth direct linking. It's a brilliant executive summary of the memory management rules with examples and common mistakes (basically what other answers here are trying to explain, but not as well).
{ "language": "en", "url": "https://stackoverflow.com/questions/6578", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "123" }
Q: Which Agile software development methods have you had the most success with?
There are numerous Agile software development methods. Which ones have you used in practice to deliver a successful project, and how did the method contribute to that success?

A: I've been involved with quite a few organisations which claimed to work in an 'agile' way, and their processes usually seemed to be based on XP (extreme programming), but none of them ever followed anywhere near all the practices. That said, I can probably comment on a few of the XP practices:
* Unit testing seems to prove very useful if it's done from the start of a project, but it seems very difficult to come into an existing code-base and start trying to add unit tests. If you get the opportunity to start from scratch, test driven development is a real help.
* Continuous integration seems to be a really good thing (or rather, the lack of it is really bad). That said, the organisations I've seen have usually been so small as to make any other approach seem foolish.
* User story cards are nice in that it's great to have a physical object to throw around for prioritisation, but they're not nearly detailed enough unless your developer really knows the domain, or you've got an onsite customer (which I've never actually seen).
* Standup meetings tend to be really useful for new team members to get to know everyone, and what they work on. The old hands very quickly slack off, and just say things like 'I'm still working on X', which they've been doing for the past week - it takes a strong leader to force them to delve into details.
* Refactoring is now a really misused term, but when you've got sufficient unit tests, it's really useful to conceptually separate the activity of 'changing the design of the existing code without changing the functionality' from 'adding new functionality'.

A: Scrum, because it shows where the slackers are. It also identifies much faster that the business unit usually doesn't have a clue what they really want delivered.

A: Scrum. The daily standup meeting is a great way to make sure things stay on track and progress is being made. I also think it's key to get the product/market folks involved in the process in a real, meaningful way. It'll create a more collaborative environment and remove a lot of the adversarial garbage that comes up when the product team and the dev teams are separate "silos".

A: Having regular retrospectives is a great way to help a team become more effective/agile.
More than adhering to a specific flavor of Agile, this practice can help a team identify what is working well and adapt to a changing environment.
Just make sure the person running the retrospective knows what he/she is doing, otherwise it can degenerate into a complaining session. There are a number of exercises you can take a team through to help them reflect and extract value from the retrospective. I suggest listening to the interview with Linda Rising on Software Engineering Radio for a good introduction.
Do a Google search for "Heartbeat retrospectives" for more information.

A: I've been working with a team using XP and Scrum practices sprinkled with some lean. It's been very productive.
Daily Standup - helps us keep complete track of what everyone is working on and where.
Pair Programming - has improved our code base and helped keep "silly" bugs from being introduced into the system.
Iterative development - using 1 week iterations has helped us improve our velocity by setting more direct goals, which has also helped us size requirements.
TDD - has helped me change my way of programming; now I don't write any code that doesn't fix a broken test, and I don't write any test that doesn't have a clearly defined requirement. We've also been using executable requirements, which has really helped devs and BAs reach a shared understanding of the requirements.
Kanban boards - show in real time where we are. We have one for the Milestone as well as the current iteration. At a glance you can see what is left to do, what's being done, and what's done and accepted. If you don't report in your daily standup something pertaining to what's on the board, you have explaining to do.
Co-located team - everyone is up to speed and on the same page with what everyone else is doing. Communication is just-in-time and very productive; I don't miss my cube at all.
{ "language": "en", "url": "https://stackoverflow.com/questions/6579", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: PHP4 to PHP5 Migration
What are some good steps to follow for a smooth migration from PHP4 to PHP5? What are some types of code that are likely to break?

A: I also once worked on an app which used PHP4's XML support quite heavily, and it would have required quite a bit of work to move to PHP5.
One of the other significant changes I was looking at at the time was the change in the default handling of function parameters. In PHP4, if I remember, objects were passed by copy unless you specified otherwise, but in PHP5 they changed to pass-by-reference by default. In well written code, that probably won't make a big difference to you, but it could certainly cause problems.
I think one other thing I found changed is that objects are no longer allowed to overwrite their $this variable. I would say that was a really bad idea to begin with (and I think it may have not been an intentional feature in PHP4), but I certainly found a few parts of our system that relied on it.
Hope some of that helps.

A: The best advice I could give anyone working with PHP4 is this:

error_reporting( E_ALL );

It pretty much will tell you exactly what you need to do.

A: We had an app that relied heavily on the PHP 4 XML DOM functions, and it required a lot of retooling to change over to PHP 5. Beyond that, most changes were improvements to things like error handling (to take advantage of exceptions) and PHP classes.

A: OOP is one of the largest differences. It won't break, as the PHP4 and PHP5 OOP styles are interchangeable, but I'd recommend taking advantage of PHP5's new OOP styles. It's not a huge amount of work to convert your existing classes to PHP5, and it does give you some extra magical methods that may help solve some existing hacks (I remember having a near useless __toString equivalent method in most classes).
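The object-handling change mentioned in the first answer is the one that most often bites during a migration; here is a tiny sketch of the difference (the class and variable names are mine):

<?php
class Counter {
    public $n = 0;
}

$a = new Counter();
$b = $a;        // PHP5: $b refers to the same object ($b would have been a copy in PHP4)
$c = clone $a;  // PHP5's explicit way to get an independent copy

$b->n = 5;
echo $a->n;     // 5 -- $a and $b are the same object
echo $c->n;     // 0 -- the clone was unaffected
?>

Any PHP4 code that relied on plain assignment producing an independent copy of an object needs a clone in PHP5.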
{ "language": "en", "url": "https://stackoverflow.com/questions/6594", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: Registry vs. INI file for storing user configurable application settings
I'm a new Windows programmer and I'm not sure where I should store user configurable application settings. I understand the need to provide a user friendly means for the user to change application settings, like an Edit | Settings form or similar. But where should I store the values after the user hits the Apply button on that form?
What are the pros and cons of storing settings in the Windows registry vs. storing them in a local INI file or config file or similar?

A: There's one more advantage to using an INI file over the registry which I haven't seen mentioned: if the user is using some sort of volume/file based encryption, they can get the INI file encrypted pretty easily. With the registry it will probably be more problematic.

A: Pros of config file:
* Easy to do. You don't need to know any Windows API calls. You just need to know the file I/O interface of your programming language.
* Portable. If you port your application to another OS, you don't need to change your settings format.
* User-editable. The user can edit the config file outside of the program executing.
Pros of registry:
* Secure. The user can't accidentally delete the config file or corrupt the data unless he/she knows about regedit. And then the user is just asking for trouble.
* I'm no expert Windows programmer, but I'm sure that using the registry makes it easier to do other Windows-specific things (user-specific settings, network administration stuff like group policy, or whatever else).
If you just need a simple way to store config information, I would recommend a config file, using INI or XML as the format. I suggest using the registry only if there is something specific you want to get out of using the registry.

A: According to the documentation for GetPrivateProfileString, you should use the registry for storing initialisation information.
However, in so saying, if you still want to use .ini files, and use the standard profile APIs (GetPrivateProfileString, WritePrivateProfileString, and the like) for accessing them, they provide built-in ways to automatically provide "virtual .ini files" backed by the registry. Win-win!

A: There's a similar question here that covers some of the pros and cons.
I would suggest not using the registry unless your application absolutely needs it. From my understanding, Microsoft is trying to discourage the use of the registry due to the flexibility of settings files. Also, I wouldn't recommend using .ini files, but instead using some of the built-in functionality of .NET for saving user/app settings.

A: Use of an ini file, in the same directory as the application, makes it possible to back it up with the application. So after you reload your OS, you simply restore the application directory, and you have your configuration the way you want it.

A: I agree with Daniel. If it's a large application, I think I'd do things in the registry. If it's a small application and you want to have aspects of it user-configurable without making a configuration form, go for a quick INI file.
I usually do the parsing like this (if the format in the .ini file is option = value, 1 per line, comments starting with #):

static Dictionary<string, string> Parse()
{
    Dictionary<string, string> config = new Dictionary<string, string>();

    // "using" ensures the file handle is closed even if reading throws.
    using (StreamReader tr = new StreamReader("config.ini"))
    {
        string line;
        while ((line = tr.ReadLine()) != null)
        {
            // Allow for comments and empty lines.
if (line == "" || line.StartsWith("#")) continue; string[] kvPair = line.Split('='); // Format must be option = value. if (kvPair.Length != 2) continue; // If the option already exists, it's overwritten. config[kvPair[0].Trim()] = kvPair[1].Trim(); } } Edit: Sorry, I thought you had specified the language. The implementation above is in C#. A: Jeff Atwood has a great article about Windows' registry and why is better to use .INI files instead. My life would be a heck of a lot easier if per-application settings were stored in a place I could easily see them, manipulate them, and back them up. Like, say... in INI files. * *The registry is a single point of failure. That's why every single registry editing tip you'll ever find starts with a big fat screaming disclaimer about how you can break your computer with regedit. *The registry is opaque and binary. As much as I dislike the angle bracket tax, at least XML config files are reasonably human-readable, and they allow as many comments as you see fit. *The registry has to be in sync with the filesystem. Delete an application without "uninstalling" it and you're left with stale registry cruft. Or if an app has a poorly written uninstaller. The filesystem is no longer the statement of record-- it has to be kept in sync with the registry somehow. It's a total violation of the DRY principle. *The registry is monolithic. Let's say you wanted to move an application to a different path on your machine, or even to a different machine altogether. Good luck extracting the relevant settings for that one particular application from the giant registry tarball. A given application typically has dozens of settings strewn all over the registry. A: As Daniel indicated, storing configuration data in the registry gives you the option to use Admin Templates. That is, you can define an Admin Template, use it in a Group Policy and administer the configuration of your application network-wide. Depending on the nature of the application, this can be a big boon. A: The existing answers cover a lot of ground but I thought I would mention one other point. I use the registry to store system-wide settings. That is, when 2 or more programs need the exact same setting. In other words, a setting shared by several programs. In all other cases I use a local config file that sits either in the same path as the executable or one level down (in a Configuration directory). The reasons are already covered in other answers (portable, can be edited with a text editor etc). Why put system-wide settings into the registry? Well, I found that if a setting is shared but you use local config files you end up duplicating settings. This may mean you end up needing to change a setting in multiple places. For example, say Program A and Program B both point to the same database. You can have a "system-wide" registry setting for the connection string. If you want to point to a different database, you can change the connection string in one place, and both programs will now run against the other database. Note - there is no point in using the registry in this way if two or more programs don't need to use the same values. Such as, Program A and Program B both needing a database connection string that may be the same, but not always. For example, I want Program B to now use a test database but Program A should carry on using a production database. 
With the above example, you could have some local configuration override system-wide settings but it may start getting overly complicated for simple tasks.
A: The registry is optimized for quick access and easy update, and it's the only way to do certain Windows-specific things like associating with an extension. And you can ignore the argument about deleting a single directory to uninstall your program - Windows Vista won't let you modify files in the Program Files directory, so your config will need to go in a different folder anyway. There's a general guideline for Windows programming - do things the way Microsoft expects you to, and your life will be a lot easier. That said, I can see the appeal of the INI file, and I wouldn't blame anyone for considering it.
A: There is one drawback to ini or config files and that is locating them if the user has the option to select where the program is installed.
A: Another disadvantage of using the registry is that it is a pain if you are working in a mixed environment of 32 and 64 bit applications, as the system call for accessing the registry will randomly(*) add \Wow6432Node\ to your registry path, making you crazy while debugging. (*of course not randomly, but it is very easy to get lost)
A: Advantages: * *Replacement for a large number of configuration files. *Common administrative functions at a central point. *Almost any data can be saved by applications / drivers. *In contrast to configuration files, code sequences can even be saved. *Access is faster than files because the database is indexed. *Access can be logged using the RegMon utility. Disadvantages: * *Difficult to work with in the absence of graphical configuration programs. *Direct changes using the registry editor can produce inconsistent states. *Incomplete uninstallers leave remnants in the registry that cause problems, e.g. with a reinstallation. *Installed applications are difficult to export to other PCs. *Chronically poorly documented. *Proprietary structure, therefore not suitable for standard DB access (e.g. SQL). *Computer-specific, therefore not portable to other computers. *Insufficient protection of the registry: depends on the configuration.
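For completeness, a minimal C# sketch of the registry route discussed above, using the Microsoft.Win32 API; the key path Software\MyCompany\MyApp and the WindowWidth value are invented names for illustration:
using Microsoft.Win32;

class RegistrySettingsSketch
{
    static void Main()
    {
        // Write a per-user setting under HKEY_CURRENT_USER (no admin rights needed).
        RegistryKey key = Registry.CurrentUser.CreateSubKey(@"Software\MyCompany\MyApp");
        key.SetValue("WindowWidth", 1024); // an int is stored as REG_DWORD
        key.Close();

        // Read it back later, falling back to a default if the value is missing.
        RegistryKey readKey = Registry.CurrentUser.OpenSubKey(@"Software\MyCompany\MyApp");
        int width = (readKey != null) ? (int)readKey.GetValue("WindowWidth", 800) : 800;
        if (readKey != null) readKey.Close();
    }
}
Note how the "uninstall leaves cruft" objection above applies here: nothing removes that key unless you call Registry.CurrentUser.DeleteSubKeyTree yourself.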
{ "language": "en", "url": "https://stackoverflow.com/questions/6607", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "55" }
Q: Are there any decent free Java data plotting libraries out there? On a recent Java project, we needed a free Java based real-time data plotting utility. After much searching, we found this tool called the Scientific Graphics Toolkit or SGT from NOAA. It seemed pretty robust, but we found out that it wasn't terribly configurable. Or at least not configurable enough to meet our needs. We ended up digging very deeply into the Java code and reverse engineering the code and changing it all around to make the plot tool look and act the way we wanted it to look and act. Of course, this killed any chance for future upgrades from NOAA. So what free or cheap Java based data plotting tools or libraries do you use? Followup: Thanks for the JFreeChart suggestions. I checked out their website and it looks like a very nice data charting and plotting utility. I should have made it clear in my original question that I was looking specifically to plot real-time data. I corrected my question above to make that point clear. It appears that JFreeChart support for live data is marginal at best, though. Any other suggestions out there?
A: I just ran into a similar issue (displaying fast-updating data for engineering purposes), and I'm using JChart2D. It's pretty minimalist and has a few quirks but it seems fairly fast: I'm running a benchmark speed test where it's adding 2331 points per second (333x7 traces) to a strip chart and uses 1% of the CPU on my 3GHz Pentium 4.
A: Live Graph supports real-time rendering.
A: I'm using GRAL for real-time plotting. It's an LGPL Java library. Although it's not as powerful as JFreeChart it has a nicer API. I got a plot up and running in a very short time. They also ship a real-time plotting example.
A: I found this question when I was googling for open source plotting libraries for Java. I wasn't quite happy with the answers posted here so I did some further research on the issue. Although this question was posted back in 2008, it might still be interesting to someone. Here is a list of Open Source Charting & Reporting Tools in Java
A: I've had success using JFreeChart on multiple projects. It is very configurable. JFreeChart is open source, but they charge for the developer guide. If you're doing something simple, the sample code is probably good enough. Otherwise, $50 for the developer guide is a pretty good bargain. With respect to "real-time" data, I've also used JFreeChart for these sorts of applications. Unfortunately, I had to create some custom data models with appropriate synchronization mechanisms to avoid race conditions. However, it wasn't terribly difficult and JFreeChart would still be my first choice. However, as the FAQ suggests, JFreeChart might not give you the best performance if that is a big concern.
A: The library I wrote, Plot4j, also supports real-time plotting.
A: http://autoplot.org/ allows for real-time updates and can be used to create many types of scientific plots. To update the plot, specify the URL to a data file and then append &filePollUpdates=1&tail=100. See the example at http://autoplot.org/cookbook#Loading_Data
A: Waterloo Scientific Graphics is a new LGPL project. Data objects are observable and could be updated in a real time plotting scenario. For details see http://waterloo.sourceforge.net/
A: I used JFreeChart (http://www.jfree.org/jfreechart/) on a previous project.
It has some very good built-in capabilities, and the design was WAY extensible so you could always roll your own extension later if you needed some custom chart annotation or wanted an axis to render differently, or whatever. It's definitely worth checking out.
A: I've used JFreeChart in a rather complex application that needed to visualize data streams and calculations based on the data. We implemented the ability to visually edit the data plots by mouse and had a very large set of data points. JFreeChart handled it very well. Unfortunately I was stuck with v0.7, but the newest releases are sooo much better when it comes to API clarity. The community is very helpful and the developers are responding to mails too. If you're doing a web application and don't want to bother with libraries, you can check the Google Chart API. Didn't use it myself, but I started some tests which were very promising.
A: Check ILOG's JViews - they have a lot of stuff and something might fit your needs. All of them are extremely configurable and quite fast. Not free though.
A: For real-time plotting you can use QN Plot, JOpenChart or its fork Openchart2.
A: JHandles is an alternative graphics package for Octave (a math package). It is probably worth looking into, but being Octave specific it may not have what you need. -Adam
A: PtPlot may be a good choice. Formerly called Ptolemy.
A: jcckit can handle real-time plotting. It's a bear to use though. I forked it, and made a very simple wrapper around it for non-realtime plotting. The underlying complicated interface can be used directly too. https://bitbucket.org/hughperkins/easyjcckit
A: You might want to check out JMathPlot
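If you do go the JFreeChart route for live data, the usual approach is the one hinted at above: keep a TimeSeries as the mutable model and append to it from a timer. A rough sketch (the org.jfree class names are JFreeChart's, the rest is invented; exact constructors vary a little between versions):
import javax.swing.JFrame;
import org.jfree.chart.ChartFactory;
import org.jfree.chart.ChartPanel;
import org.jfree.data.time.Millisecond;
import org.jfree.data.time.TimeSeries;
import org.jfree.data.time.TimeSeriesCollection;

public class LivePlot {
    public static void main(String[] args) {
        final TimeSeries series = new TimeSeries("Sensor", Millisecond.class);
        TimeSeriesCollection dataset = new TimeSeriesCollection(series);
        ChartPanel panel = new ChartPanel(ChartFactory.createTimeSeriesChart(
                "Live data", "Time", "Value", dataset, true, true, false));

        // Append a point every 100 ms; the chart listens to the dataset and repaints.
        new javax.swing.Timer(100, new java.awt.event.ActionListener() {
            public void actionPerformed(java.awt.event.ActionEvent e) {
                series.addOrUpdate(new Millisecond(), Math.random());
            }
        }).start();

        JFrame frame = new JFrame("LivePlot");
        frame.setContentPane(panel);
        frame.pack();
        frame.setVisible(true);
    }
}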
{ "language": "en", "url": "https://stackoverflow.com/questions/6612", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "45" }
Q: What rule engine should I use? What are some of the best or most popular rule engines? I haven't settled on a programming language, so tell me the rule engine and what programming languages it supports.
A: Depending on what your requirements are, Windows Workflow Foundation (.NET 3.5) might be worth having a look at. The .NET rule engine InRule supports WF and BizTalk; I've not tried it though so don't know if it's any good.
A: This is a great article by Martin Fowler, which is a discussion about when rules engines can be useful. You may find it helpful. http://martinfowler.com/bliki/RulesEngine.html
A: I have a bit of experience with both Haley Expert Rules and Haley Office Rules. Both nice systems, but I'd need to know a bit more about what you want to use them for to answer definitively (See http://www.haley.com) They both support C# and Java (and I think also a web service api). The difference between the two is mostly around how much natural language modelling you want to get into. Office rules lets business users write rules in an Office document, and is mostly focused around legislative requirements modelling. Expert rules can be a bit more flexible in defining how it handles natural language, but requires more work defining language structures up front. Hope some of that helps.
A: I am one of the authors of Drools, I will avoid pimping my wares. But some other options are Jess (not open source) but uses the clips syntax (which we also support a subset of) - which is kinda a lisp dialect. It really depends what you want it for, Haley have strong Natural language tech (and they recently acquired RuleBurst - who also has interesting natural language tech which could deal with word documents with embedded rules - eg legal documentation). RuleBurst was able to target .Net runtimes as well (there is a Drools.net "port" available as well - I haven't seen what it has been up to lately, alas, not enough time). Ok I will put my pimp bling away now... sorry about that.
A: We've used both http://jatha.sourceforge.net and http://www.jboss.com/products/rules. They're both pretty good, but for the most part, JBoss rules seems to me to be overkill for a lot of what people do. They're both Java based. It's worth remembering Greenspun's Tenth Rule of Programming and skipping ahead to importing it :)
A: I've checked out JBoss Rules aka Drools and it looks pretty good. I'd love to hear from people using it in production, because I'm probably gonna need a rule engine in my current project as well.
A: InRule (see their website) is very good! It is a .NET based rule engine with a solid SDK and a nice UI for non technical users. Worked great for me in the past - pretty much cut my development cost in half.
A: I found another rule engine that supports different kinds of rules: Procedural, Inference (RETE) and FlowRule. This is a quite flexible and extensible rule engine (also event driven). They had an express version as a free edition a while ago. Take a look at http://www.flexrule.com
A: For very well understood, procedural rules (like eligibility rules, insurance rules, audit rules, etc.), simple decision tables with a Domain Specific Language can give you the performance and simplicity without the overhead of RETE based engines. A Java open sourced rules engine of this sort can be found at DTRules.
A: WF is available already in .net 3.0. It is a bit buggy though on the Designer-side in Visual Studio and can get quite messy.
I use it on Sharepoint (where WF is pretty much your only option anyway) and overall I am quite satisfied with it, even though the learning curve is rather steep. Foundations of WF is a good book to start with, as they implement a complete solution from beginning to end and explain the concepts behind it.
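None of the products above are implemented this way, of course, but if it helps to see the decision-table idea from the DTRules answer stripped to its core, a rule engine is at bottom a list of condition/action pairs evaluated against facts. A toy Java sketch:
import java.util.ArrayList;
import java.util.List;

interface Rule<T> {
    boolean matches(T fact);   // the condition column of the table
    void apply(T fact);        // the action column
}

class TinyRuleEngine<T> {
    private final List<Rule<T>> rules = new ArrayList<Rule<T>>();

    void add(Rule<T> rule) { rules.add(rule); }

    // Fire every rule whose condition holds for the given fact.
    void run(T fact) {
        for (Rule<T> rule : rules) {
            if (rule.matches(fact)) {
                rule.apply(fact);
            }
        }
    }
}
Real engines layer on the things discussed in this thread: RETE-style matching so conditions aren't re-evaluated from scratch, grouping and ordering, and natural-language or spreadsheet front ends for business users.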
{ "language": "en", "url": "https://stackoverflow.com/questions/6613", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19" }
Q: sgen.exe fails during build After changing the output directory of a visual studio project it started to fail to build with an error very much like: C:\Program Files\Microsoft Visual Studio 8\SDK\v2.0\bin\sgen.exe /assembly:C:\p4root\Zantaz\trunk\EASDiscovery\EASDiscoveryCaseManagement\obj\Release\EASDiscoveryCaseManagement.dll /proxytypes /reference:C:\p4root\Zantaz\trunk\EASDiscovery\EasDiscovery.Common\target\win_x32\release\results\EASDiscovery.Common.dll /reference:C:\p4root\Zantaz\trunk\EASDiscovery\EasDiscovery.Export\target\win_x32\release\results\EASDiscovery.Export.dll /reference:c:\p4root\Zantaz\trunk\EASDiscovery\ItemCache\target\win_x32\release\results\EasDiscovery.ItemCache.dll /reference:c:\p4root\Zantaz\trunk\EASDiscovery\RetrievalEngine\target\win_x32\release\results\EasDiscovery.RetrievalEngine.dll /reference:C:\p4root\Zantaz\trunk\EASDiscovery\EASDiscoveryJobs\target\win_x32\release\results\EASDiscoveryJobs.dll /reference:"C:\Program Files\Infragistics\NetAdvantage for .NET 2008 Vol. 1 CLR 2.0\Windows Forms\Bin\Infragistics2.Shared.v8.1.dll" /reference:"C:\Program Files\Infragistics\NetAdvantage for .NET 2008 Vol. 1 CLR 2.0\Windows Forms\Bin\Infragistics2.Win.Misc.v8.1.dll" /reference:"C:\Program Files\Infragistics\NetAdvantage for .NET 2008 Vol. 1 CLR 2.0\Windows Forms\Bin\Infragistics2.Win.UltraWinChart.v8.1.dll" /reference:"C:\Program Files\Infragistics\NetAdvantage for .NET 2008 Vol. 1 CLR 2.0\Windows Forms\Bin\Infragistics2.Win.UltraWinDataSource.v8.1.dll" /reference:"C:\Program Files\Infragistics\NetAdvantage for .NET 2008 Vol. 1 CLR 2.0\Windows Forms\Bin\Infragistics2.Win.UltraWinDock.v8.1.dll" /reference:"C:\Program Files\Infragistics\NetAdvantage for .NET 2008 Vol. 1 CLR 2.0\Windows Forms\Bin\Infragistics2.Win.UltraWinEditors.v8.1.dll" /reference:"C:\Program Files\Infragistics\NetAdvantage for .NET 2008 Vol. 1 CLR 2.0\Windows Forms\Bin\Infragistics2.Win.UltraWinGrid.v8.1.dll" /reference:"C:\Program Files\Infragistics\NetAdvantage for .NET 2008 Vol. 1 CLR 2.0\Windows Forms\Bin\Infragistics2.Win.UltraWinListView.v8.1.dll" /reference:"C:\Program Files\Infragistics\NetAdvantage for .NET 2008 Vol. 1 CLR 2.0\Windows Forms\Bin\Infragistics2.Win.UltraWinMaskedEdit.v8.1.dll" /reference:"C:\Program Files\Infragistics\NetAdvantage for .NET 2008 Vol. 1 CLR 2.0\Windows Forms\Bin\Infragistics2.Win.UltraWinStatusBar.v8.1.dll" /reference:"C:\Program Files\Infragistics\NetAdvantage for .NET 2008 Vol. 1 CLR 2.0\Windows Forms\Bin\Infragistics2.Win.UltraWinTabControl.v8.1.dll" /reference:"C:\Program Files\Infragistics\NetAdvantage for .NET 2008 Vol. 1 CLR 2.0\Windows Forms\Bin\Infragistics2.Win.UltraWinToolbars.v8.1.dll" /reference:"C:\Program Files\Infragistics\NetAdvantage for .NET 2008 Vol. 1 CLR 2.0\Windows Forms\Bin\Infragistics2.Win.UltraWinTree.v8.1.dll" /reference:"C:\Program Files\Infragistics\NetAdvantage for .NET 2008 Vol. 
1 CLR 2.0\Windows Forms\Bin\Infragistics2.Win.v8.1.dll" /reference:"C:\Program Files\Microsoft Visual Studio 8\ReportViewer\Microsoft.ReportViewer.Common.dll" /reference:"C:\Program Files\Microsoft Visual Studio 8\ReportViewer\Microsoft.ReportViewer.WinForms.dll" /reference:C:\p4root\Zantaz\trunk\EASDiscovery\PreviewControl\target\win_x32\release\results\PreviewControl.dll /reference:C:\p4root\Zantaz\trunk\EASDiscovery\Quartz\src\Quartz\target\win_x32\release\results\Scheduler.dll /reference:c:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\System.configuration.dll /reference:c:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\System.Data.dll /reference:c:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\System.Design.dll /reference:c:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\System.DirectoryServices.dll /reference:c:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\System.dll /reference:c:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\System.Drawing.dll /reference:c:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\System.Web.dll /reference:c:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\System.Web.Services.dll /reference:c:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\System.Windows.Forms.dll /reference:c:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\System.Xml.dll /compiler:/delaysign- Error: The specified module could not be found. (Exception from HRESULT: 0x8007007E) C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\Microsoft.Common.targets(1902,9): error MSB6006: "sgen.exe" exited with code 1. I changed the output directory to target/win_x32/release/results but the path in sgen doesn't seem to have been updated. There seems to be no reference in the project to what path is passed into sgen so I'm unsure how to fix it. As a workaround I have disabled the serialization generation but it would be nice to fix the underlying problem. Has anybody else seen this? A: see msdn for the options to sgen.exe [you have the command line, you can play with it manually... delete your .XmlSerializers.dll or use /force though] Today I also ran across how to more manually specify the sgen options. I wanted this to not use the /proxy switch, but it appears it can let you specify the output directory. I don't know enough about msbuild to make it awesome, but this should get you started [open your .csproj/.vbproj in your non-visual studio editor of choice, look at the bottom and you should be able to figure out how/where this goes] [the below code has had UseProxyTypes set to true for your convenience] <Target Name="GenerateSerializationAssembliesForAllTypes" DependsOnTargets="AssignTargetPaths;Compile;ResolveKeySource" Inputs="$(MSBuildAllProjects);@(IntermediateAssembly)" Outputs="$(OutputPath)$(_SGenDllName)"> <SGen BuildAssemblyName="$(TargetFileName)" BuildAssemblyPath="$(OutputPath)" References="@(ReferencePath)" ShouldGenerateSerializer="true" UseProxyTypes="true" KeyContainer="$(KeyContainerName)" KeyFile="$(KeyOriginatorFile)" DelaySign="$(DelaySign)" ToolPath="$(SGenToolPath)"> <Output TaskParameter="SerializationAssembly" ItemName="SerializationAssembly" /> </SGen> </Target> <!-- <Target Name="BeforeBuild"> </Target> --> <Target Name="AfterBuild" DependsOnTargets="GenerateSerializationAssembliesForAllTypes"> </Target> A: If you are having this problem while building your VS.NET project in Release mode here is the solution: Go to the project properties and click on the Build tab and set the value of the "Generate Serialization Assembly" dropdown to "Off". 
Sgen.exe is "The XML Serializer Generator creates an XML serialization assembly for types in a specified assembly in order to improve the startup performance of a XmlSerializer when it serializes or deserializes objects of the specified types." (MSDN)
A: I've not seen this particular problem, but recently for us a "C1001: An internal error has occurred in the compiler" type crash from cl.exe was fixed after installing some random and unrelated (or so we thought) Windows security updates. We knew the code didn't crash the compiler on other machines using the same version and service pack level of Visual Studio, but we were really clutching at straws when we tried the Windows security updates.
A: It looks reasonable enough to me, unless something is imposing a 4096 character limit [you list 4020 characters]. A 4096 limit to me seems a bit absurd; it'd be 2048 or 32767 or 8192 from stuff I've found by searching for the command-line limits.
A: I ran into this issue when I had referenced an assembly in the GAC on a web site project that had since been uninstalled, and for some reason that reference triggered a serialization assembly generation, and sgen choked on the reference (since it no longer existed). After removing the reference and turning serialization assembly generation to Off, I no longer had the issue.
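If you would rather keep that "Off" setting in source control than click through the project properties UI, the same switch can be set directly in the project file. A sketch (GenerateSerializationAssemblies is the standard MSBuild property behind that dropdown; the Condition shown is illustrative):
<PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'Release|AnyCPU' ">
  <!-- Off | On | Auto: controls whether MSBuild runs sgen.exe after compilation. -->
  <GenerateSerializationAssemblies>Off</GenerateSerializationAssemblies>
</PropertyGroup>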
{ "language": "en", "url": "https://stackoverflow.com/questions/6623", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "17" }
Q: PHP array indexing: $array[$index] vs $array["$index"] vs $array["{$index}"] What is the difference, if any, between these methods of indexing into a PHP array:
$array[$index]
$array["$index"]
$array["{$index}"]
I'm interested in both the performance and functional differences.
Update: (In response to @Jeremy) I'm not sure that's right. I ran this code:
$array = array(100, 200, 300);
print_r($array);
$idx = 0;
$array[$idx] = 123;
print_r($array);
$array["$idx"] = 456;
print_r($array);
$array["{$idx}"] = 789;
print_r($array);
And got this output:
Array ( [0] => 100 [1] => 200 [2] => 300 )
Array ( [0] => 123 [1] => 200 [2] => 300 )
Array ( [0] => 456 [1] => 200 [2] => 300 )
Array ( [0] => 789 [1] => 200 [2] => 300 )
A: I believe from a performance perspective that $array["$index"] is faster than $array[$index] See Best practices to optimize PHP code performance Don't believe everything you read so blindly... I think you misinterpreted that. The article says $array['index'] is faster than $array[index] where index is a string, not a variable. That's because if you don't wrap it in quotes PHP looks for a constant and, not finding one, assumes you meant to make it a string.
A: When will the different indexing methods resolve to different indices? According to http://php.net/types.array, an array index can only be an integer or a string. If you try to use a float as an index, it will be truncated to an integer. So if $index is a float with the value 3.14, then $array[$index] will evaluate to $array[3] and $array["$index"] will evaluate to $array['3.14']. Here is some code that confirms this:
$array = array(3.14 => 'float', '3.14' => 'string');
print_r($array);
$index = 3.14;
echo $array[$index]."\n";
echo $array["$index"]."\n";
The output:
Array([3] => float [3.14] => string)
float
string
A: see @svec and @jeremy above. All array indices are of type 'int' first, then type 'string', and will be cast to that as PHP sees fit. Performance wise, $index should be faster than "$index" and "{$index}" (which are the same). Once you start a double-quote string, PHP will go into interpolation mode and treat it as a string first, but looking for variable markers ($, {}, etc) to replace from the local scope. This is why in most discussions, true 'static' strings should always be single quotes unless you need the escape-shortcuts like "\n" or "\t", because PHP will not need to try to interpolate the string at runtime and the full string can be compiled statically. In this case, double-quoting will first copy the $index into that string and then return the string, whereas directly using $index just uses its value.
A: I timed the 3 ways of using an index like this:
for ($ii = 0; $ii < 1000000; $ii++) {
    // TEST 1
    $array[$idx] = $ii;
    // TEST 2
    $array["$idx"] = $ii;
    // TEST 3
    $array["{$idx}"] = $ii;
}
The first set of tests used $idx=0, the second set used $idx="0", and the third set used $idx="blah". Timing was done using microtime() diffs. I'm using WinXP, PHP 5.2, Apache 2.2, and Vim.
:-) And here are the results:
Using $idx = 0
$array[$idx]      // time: 0.45435905456543 seconds
$array["$idx"]    // time: 1.0537171363831 seconds
$array["{$idx}"]  // time: 1.0621709823608 seconds
ratio "$idx" / $idx    // 2.3191287282497
ratio "{$idx}" / $idx  // 2.3377348193858
Using $idx = "0"
$array[$idx]      // time: 0.5107250213623 seconds
$array["$idx"]    // time: 0.77445602416992 seconds
$array["{$idx}"]  // time: 0.77329802513123 seconds
ratio "$idx" / $idx    // = 1.5163855142717
ratio "{$idx}" / $idx  // = 1.5141181512285
Using $idx = "blah"
$array[$idx]      // time: 0.48077392578125 seconds
$array["$idx"]    // time: 0.73676419258118 seconds
$array["{$idx}"]  // time: 0.71499705314636 seconds
ratio "$idx" / $idx    // = 1.5324545551923
ratio "{$idx}" / $idx  // = 1.4871793473086
So $array[$idx] is the hands-down winner of the performance competition, at least on my machine. (The results were very repeatable, BTW, I ran it 3 or 4 times and got the same results.)
A: Response to the Update: Oh, you're right, I guess PHP must convert array index strings to numbers if they contain only digits. I tried this code:
$array = array('1' => 100, '2' => 200, 1 => 300, 2 => 400);
print_r($array);
And the output was:
Array([1] => 300 [2] => 400)
I've done some more tests and found that if an array index (or key) is made up of only digits, it's always converted to an integer, otherwise it's a string.
ejunker: Can you explain why that's faster? Doesn't it take the interpreter an extra step to parse "$index" into the string to use as an index instead of just using $index as the index?
A: If $index is a string there is no difference because $index, "$index", and "{$index}" all evaluate to the same string. If $index is a number, for example 10, the first line will evaluate to $array[10] and the other two lines will evaluate to $array["10"] which is a different element than $array[10].
A: I believe from a performance perspective that $array["$index"] is faster than $array[$index] See Best practices to optimize PHP code performance Another variation that I use sometimes when I have an array inside a string is:
$str = "this is my string {$array["$index"]}";
Edit: What I meant to say is $row['id'] is faster than $row[id]
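To see the key-casting rules from the discussion above in one place, here is a small self-contained sketch (the keys and values are arbitrary):
<?php
// "1" (a canonical integer string), 1.5 (a float, truncated) and true (a
// bool) all cast to the int key 1, so each assignment overwrites the last.
$a = array(
    "1"  => 'numeric string',
    1.5  => 'float',
    true => 'bool',
);
var_dump($a);              // array(1) { [1] => string(4) "bool" }

// A non-canonical numeric string like "01" is NOT cast and stays a string key.
$a["01"] = 'leading zero';
var_dump(array_keys($a));  // int(1) and string(2) "01"
?>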
{ "language": "en", "url": "https://stackoverflow.com/questions/6628", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "34" }
Q: Making code work with register_globals turned off I have inherited some legacy PHP code that was written back when it was standard practice to use register_globals (As of PHP 4.2.0, this directive defaults to off, released 22. Apr 2002). We know now that it is bad for security to have it enabled. The problem is how do I find all the places in the code where I need to use $_GET or $_POST? My only thought was to set the error reporting to warn about uninitialized variables and then test each part of the site. Is there an easier way? Will I have to test each code path in the site or will PHP give a warning on a file basis?
A: If you set error reporting to E_ALL, it warns in the error log about undefined variables complete with filename and line number (assuming you are logging to a file). However, it will warn only when it comes across an undefined variable, so I think you will have to test each code path. Running php from the command line doesn't seem to help either. There is a debugging tool named xdebug, haven't tried it, but maybe that can be useful?
A: I wrote a script using the built-in Tokenizer functions. It's pretty rough but it worked for the code base I was working on. I believe you could also use CodeSniffer.
A: You could manually 'fake' the register globals effect but add some security. (I partly grabbed this from the osCommerce fork called xoops)
// Detect bad global variables
$bad_global_list = array('GLOBALS', '_SESSION', 'HTTP_SESSION_VARS', '_GET', 'HTTP_GET_VARS', '_POST', 'HTTP_POST_VARS', '_COOKIE', 'HTTP_COOKIE_VARS', '_REQUEST', '_SERVER', 'HTTP_SERVER_VARS', '_ENV', 'HTTP_ENV_VARS', '_FILES', 'HTTP_POST_FILES');
foreach ($bad_global_list as $bad_global) {
    if (isset($_REQUEST[$bad_global])) {
        die('Bad Global');
    }
}
// Make global variables
foreach ($_REQUEST as $name => $value) {
    $$name = $value; // Creates a variable named $name equal to $value.
}
Though you'd want to tweak it to make your code more secure, at least by adding your global configuration variables (like the path and base url) to the bad globals list. You could also use it to easily compile a list of all used get/post variables to help you eventually replace all occurrences of, say $return_url, with $_REQUEST['return_url'];
A: I know that there's a way to set php.ini values for that script with a certain command, I thus went looking and found this too - Goto last post on page I also found the following post which may be of use - Goto last post on the page I will add to this more if nobody has found an answer but I must now catch a train.
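Building on the Tokenizer suggestion above, a rough sketch of such a script (legacy_page.php is a made-up file name); it simply lists every variable a file touches, so you can audit which ones used to be populated by register_globals and now need an explicit $_GET/$_POST/$_REQUEST lookup:
<?php
$source = file_get_contents('legacy_page.php');
$seen = array();
foreach (token_get_all($source) as $token) {
    // Tokens are either plain strings or array(id, text, ...).
    if (is_array($token) && $token[0] === T_VARIABLE) {
        $seen[$token[1]] = true;
    }
}
// Anything listed here that is never assigned in the file is a candidate.
echo implode("\n", array_keys($seen)), "\n";
?>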
{ "language": "en", "url": "https://stackoverflow.com/questions/6633", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13" }
Q: How should I load files into my Java application?
A: What are you loading the files for - configuration or data (like an input file) or as a resource? * *If as a resource, follow the suggestion and example given by Will and Justin *If configuration, then you can use a ResourceBundle or Spring (if your configuration is more complex). *If you need to read a file in order to process the data inside, this code snippet may help: BufferedReader file = new BufferedReader(new FileReader(filename)) and then read each line of the file using file.readLine(); Don't forget to close the file.
A: The short answer Use one of these two methods: * *Class.getResource(String) *Class.getResourceAsStream(String) For example: InputStream inputStream = YourClass.class.getResourceAsStream("image.jpg"); -- The long answer Typically, one would not want to load files using absolute paths. For example, don’t do this if you can help it: File file = new File("C:\\Users\\Joe\\image.jpg"); This technique is not recommended for at least two reasons. First, it creates a dependency on a particular operating system, which prevents the application from easily moving to another operating system. One of Java’s main benefits is the ability to run the same bytecode on many different platforms. Using an absolute path like this makes the code much less portable. Second, depending on the relative location of the file, this technique might create an external dependency and limit the application’s mobility. If the file exists outside the application’s current directory, this creates an external dependency and one would have to be aware of the dependency in order to move the application to another machine (error prone). Instead, use the getResource() methods in the Class class. This makes the application much more portable. It can be moved to different platforms, machines, or directories and still function correctly.
A: I haven't had a problem just using Unix-style path separators, even on Windows (though it is good practice to check File.separatorChar). The technique of using ClassLoader.getResource() is best for read-only resources that are going to be loaded from JAR files. Sometimes, you can programmatically determine the application directory, which is useful for admin-configurable files or server applications. (Of course, user-editable files should be stored somewhere in the System.getProperty("user.home") directory.)
A:
public byte[] loadBinaryFile(String name) {
    try {
        DataInputStream dis = new DataInputStream(new FileInputStream(name));
        // For a plain FileInputStream, available() matches the file length,
        // and readFully() guarantees the whole buffer gets filled (a plain
        // read() is allowed to return before the buffer is full).
        byte[] theBytes = new byte[dis.available()];
        dis.readFully(theBytes);
        dis.close();
        return theBytes;
    } catch (IOException ex) {
        return null;
    }
} // ()
A: getResource is fine, but using relative paths will work just as well too, as long as you can control where your working directory is (which you usually can). Furthermore the platform dependence regarding the separator character can be gotten around using File.separator, File.separatorChar, or System.getProperty("file.separator").
A: Use ClassLoader.getResource(java.lang.String): docs.oracle.com/en/java/javase/11/docs/api/java.base/java/lang/ClassLoader.html#getResource(java.lang.String)
A:
public static String loadTextFile(File f) {
    try {
        BufferedReader r = new BufferedReader(new FileReader(f));
        StringWriter w = new StringWriter();
        try {
            String line = r.readLine();
            while (null != line) {
                w.append(line).append("\n");
                line = r.readLine();
            }
            return w.toString();
        } finally {
            r.close();
            w.close();
        }
    } catch (Exception ex) {
        ex.printStackTrace();
        return "";
    }
}
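Tying the getResourceAsStream advice together, a sketch of reading a classpath text resource in full ("config.properties" would be a typical argument; the class name is invented):
import java.io.*;

public class ResourceLoader {
    public static String loadResourceAsText(String name) throws IOException {
        // Resolved relative to this class's package; prefix the name with
        // '/' to resolve from the classpath root instead.
        InputStream in = ResourceLoader.class.getResourceAsStream(name);
        if (in == null) {
            throw new FileNotFoundException("resource not found: " + name);
        }
        BufferedReader r = new BufferedReader(new InputStreamReader(in, "UTF-8"));
        try {
            StringBuilder sb = new StringBuilder();
            String line;
            while ((line = r.readLine()) != null) {
                sb.append(line).append('\n');
            }
            return sb.toString();
        } finally {
            r.close();
        }
    }
}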
{ "language": "en", "url": "https://stackoverflow.com/questions/6639", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "72" }
Q: Preferred way to use favicons? I was trying to add a favicon to a website earlier and looked for a better way to implement this than to dump a favicon.ico file in the root of the website. I found this nice little guide: How to Add a Favicon. However, the preferred method did not work in IE (7) and the second method is the old fashioned way (which I resigned myself to use). Is there a third method that works across all the most popular browsers?
A: You can use HTML to specify the favicon, but that will only work on pages that have this HTML. A better way to do this is by adding the following to your httpd.conf (Apache): AddType image/x-icon .ico
A: This is what I always use: <link rel="icon" href="favicon.ico" type="image/x-icon" /> <link rel="shortcut icon" href="favicon.ico" type="image/x-icon" /> The second one is for IE. The first one is for other browsers.
A: I think the most reliable method is to simply add the favicon.ico file to the root of your website. I don't think there is any need for the link tag unless you want to manually override the default favicon, but I was unable to find any research to support my argument.
A: This is how they're doing it right here on Stack Overflow: <link rel="shortcut icon" href="/favicon.ico" />
A: Well, the file is in the root so it does not show whether the tag works or if the browser just got the icon from the usual location (the root). Edit: I'll try it and see if it works. Edit 2: Using both tags makes it work with any file name in IE7, as long as the file is an icon: I tried using .png files and they only worked with Firefox.
A: There are a million different ways these icons are used (different browsers, different platforms, mobile site-pinning, smart TVs, etc, etc), so there's really no longer a simple answer. For a great explanation see this S.O. answer, but the short answer is: Use this site which lets you upload a png/jpg and then does all the hard work for you: https://realfavicongenerator.net/
{ "language": "en", "url": "https://stackoverflow.com/questions/6642", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: JUnit vs TestNG At work we are currently still using JUnit 3 to run our tests. We have been considering switching over to JUnit 4 for new tests being written but I have been keeping an eye on TestNG for a while now. What experiences have you all had with either JUnit 4 or TestNG, and which seems to work better for very large numbers of tests? Having flexibility in writing tests is also important to us since our functional tests cover a wide aspect and need to be written in a variety of ways to get results. Old tests will not be re-written as they do their job just fine. What I would like to see in new tests though is flexibility in the way the test can be written, natural assertions, grouping, and easily distributed test executions.
A: One more advantage of TestNG is its support for parallel testing. In our era of multicores it's important, I think. I have also used both frameworks, but I use Hamcrest for assertions. Hamcrest allows you to easily write your own assert method. So instead of assertEquals(operation.getStatus(), Operation.Status.Active); you can write assertThat(operation, isActive()); That gives you the opportunity to use a higher level of abstraction in your tests, and this makes your tests more robust.
A: JUnit 4 Vs TestNG – Comparison by mkyong.com (updated in 2013). Conclusion: I suggest to use TestNG as the core unit test framework for Java projects, because TestNG is more advanced in parameterized testing, dependency testing and suite testing (the grouping concept). TestNG is meant for functional, high-level testing and complex integration tests. Its flexibility is especially useful with large test suites. In addition, TestNG also covers the entire core JUnit 4 functionality. There's just no reason for me to use JUnit anymore. In simple terms, TestNG = JUnit + a lot more. So, why debate? Go and grab TestNG :-) You can find a more detailed comparison here.
A: I've used both, but I have to agree with Justin Standard that you shouldn't really consider rewriting your existing tests to any new format. Regardless of the decision, it is pretty trivial to run both. TestNG strives to be much more configurable than JUnit, but in the end they both work equally well. TestNG has a neat feature where you can mark tests as a particular group, and then easily run all tests of a specific group, or exclude tests of a particular group. Thus you can mark tests that run slowly as being in the "slow" group and then ignore them when you want quick results. A suggestion from their documentation is to mark some subset as "checkin" tests which should be run whenever you check new files in. I never saw such a feature in JUnit, but then again, if you don't have it, you don't REALLY miss it. For all its claims of high configuration, I did run into a corner case a couple of weeks ago where I couldn't do what I wanted to do... I wish I could remember what it was, but I wanted to bring it up so you know that it's not perfect. The biggest advantage TestNG has is annotations... which JUnit added in version 4 anyways.
A: Why we use TestNG instead of JUnit? * *The declaration of the @BeforeClass and @AfterClass methods has to be static in JUnit, whereas TestNG is more flexible in the method declaration; it does not have these constraints. *In TestNG, we can parametrize tests in 2 ways: the @Parameters or @DataProvider annotation. i) @Parameters for simple cases, where a key-value mapping is required (data is provided through an XML file). ii) @DataProvider for complex cases; using a 2-dimensional array, it can provide data.
*In TestNG, since a @DataProvider method need not be static, we can use multiple data provider methods in the same test class. *Dependency Testing: In TestNG, if the initial test fails, then all subsequent dependent tests will be skipped, not marked as failed. But JUnit marks them as failed. *Grouping: Single tests can belong to multiple groups and then run in different contexts (like slow or fast tests). A similar feature exists in JUnit Categories but lacks the @BeforeGroups / @AfterGroups TestNG annotations that allow initializing the test / tearing it down. *Parallelism: If you’d like to run the same test in parallel on multiple threads, TestNG has you covered with a simple to use annotation while JUnit doesn’t offer a simple way to do so out of the box. *TestNG @DataProvider can also support XML for feeding in data, CSVs, or even plain text files. *TestNG allows you to declare dependencies between tests, and skip them if the dependency test didn’t pass. @Test(dependsOnMethods = { "dependOnSomething" }) This functionality doesn’t exist in JUnit. *Reporting: TestNG reports are generated by default to a test-output folder that includes HTML reports with all of the test data, passed/failed/skipped, how long they ran, which input was used and the complete test logs. In addition, it also exports everything to an XML file which can be used to construct your own report template. On the JUnit front, all of this data is also available via XML, but there’s no out of the box report and you need to rely on plugins. Resource Link: * *A Quick JUnit vs TestNG Comparison *JUnit vs. TestNG: Which Testing Framework Should You Choose? A good side-by-side comparison is given in this tutorial: TestNG Vs JUnit: What's the Difference?
A: A couple of additions to Mike Stone's reply: 1) The most frequent thing I use TestNG's groups for is when I want to run a single test method in a test suite. I simply add this test to the group "phil" and then run this group. When I was using JUnit 3, I would comment out the entries for all methods but the one I wanted to run in the "suite" method, but then would commonly forget to uncomment them before checkin. With the groups, I no longer have this problem. 2) Depending on the complexity of the tests, migrating tests from JUnit3 to TestNG can be done somewhat automatically with sed and by creating a base class to replace TestCase that statically imports all of the TestNG assert methods. I have info on my migration from JUnit to TestNG here and here.
A: My opinion about what makes TestNG truly far more powerful: 1. JUnit still requires the before/after class methods to be static, which limits what you can do prior to the running of tests; TestNG never has this issue. 2. TestNG @Configuration methods can all take an optional argument to their annotated methods in the form of an ITestResult, XmlTest, Method, or ITestContext. This allows you to pass things around that JUnit wouldn't provide you. JUnit only does this in listeners and it is limited in use. 3. TestNG comes with some pre-made report generation classes that you can copy and edit and make into your own beautiful test output with very little effort. Just copy the report class into your project and add a listener to run it. Also, ReportNG is available. 4. TestNG has a handful of nice listeners that you can hook onto so you can do additional AOP style magic at certain phases during testing.
A: Your question seems two-fold to me.
On one hand you would like to compare two test frameworks, on the other hand you would like to implement tests easily, have natural assertions, etc... Ok, firstly JUnit has been playing catchup with TestNG in terms of functionality, they have bridged the gap somewhat with v4, but not well enough in my opinion. Things like annotations and dataproviders are still much better in TestNG. Also they are more flexible in terms of test execution, since TestNG has test dependency, grouping and ordering. JUnit still requires certain before/after methods to be static, which limits what you can do prior to the running of tests; TestNG never has this issue. TBH, mostly the differences between the two frameworks don't mean much, unless you're focusing on integration/automation testing. JUnit from my experience is built from the ground up for unit testing and is now being pushed towards higher levels of testing, which IMO makes it the wrong tool for the job. TestNG does well at unit testing and due to its robust dataproviding and great test execution abilities, works even better at the integration/automation test level. Now for what I believe is a separate issue, how to write well structured, readable and maintainable tests. Most of this I am sure you know, but things like Factory Pattern, Command Pattern and PageObjects (if you're testing websites) are vital; it is very important to have a layer of abstraction between what you're testing (the SUT) and what the actual test is (assertions of business logic). In order to have much nicer assertions, you can use Hamcrest. Make use of Java's inheritance/interfaces to reduce repetition and enforce commonality. Almost forgot, also use the Test Data Builder Pattern, this coupled with TestNG's dataprovider annotation is very useful.
A: First I would say, don't rewrite all your tests just to suit the latest fad. JUnit 3 works perfectly well, and the introduction of annotations in 4 doesn't buy you very much (in my opinion). It is much more important that you guys write tests, and it sounds like you do. Use whatever seems most natural and helps you get your work done. I can't comment on TestNG b/c I haven't used it. But I would recommend unitils, a great wrapper for JUnit/TestNG/DBUnit/EasyMock, regardless of which route you take. (It supports all the flavors mentioned above)
A: About a year ago, we had the same problem. I spent some time considering which move was better, and eventually we realized that TestNG has no 'killer features'. It's nice, and has some features JUnit 4 doesn't have, but we don't need them. We didn't want people to feel uncomfortable writing tests while getting to know TestNG because we wanted them to keep writing a lot of tests. Also, JUnit is pretty much the de-facto standard in the Java world. There's no decent tool that doesn't support it out of the box, you can find a lot of help on the web and they added a lot of new features in the past year which shows it's alive. We decided to stick with JUnit and never looked back.
A: TestNG's biggest draw cards for me include its support for test groups, and more importantly - test group dependencies (marking a test as being dependent on a group causes the tests to simply skip running when the dependent group fails). TestNG's other big draw cards for me include test parameters, data providers, annotation transformers, and more than anything - the vibrant and responsive user community.
Whilst on the surface one might not think all of TestNG's features above are needed, once you start to understand the flexibility they bring to your tests, you'll wonder how you coped with JUnit. (disclaimer - I've not used JUnit 4.x at all, so am unable to really comment on advances or new features there).
A: Cheers to all the above. Some other things I've personally found I like more in TestNG are: * *The @BeforeClass for TestNG takes place after class creation, so you aren't constrained by only being able to call static methods of your class in it. *Parallel and parameterized tests, maybe I just don't have enough of a life... but I just get a kick writing one set of Selenium tests, accepting a driver name as a parameter. Then defining 3 parallel test groups, 1 each for the IE, FF and Chrome drivers, and watching the race! I originally did 4, but way too many of the pages I've worked on break the HtmlUnit driver for one reason or another. Yeah, probably need to find that life. ;)
A: I wanted to share one I encountered today. I found the built-in Parameterized runner quite crude in JUnit 4 compared to TestNG (I know each framework has its strengths, but still). The JUnit 4 annotation @Parameters is restricted to one set of parameters. I encountered this problem while testing the valid and invalid behavior for functionality in the same test class. The first public, static annotated method that it finds will be used, but it may find them in any order. This causes us to write different classes unnecessarily. However, TestNG provides a clean way to supply different kinds of data providers for each and every method. So we can test the same unit of code in valid and invalid ways in the same test class, keeping the valid/invalid data separate. I will go with TestNG.
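As a concrete illustration of that last point, a minimal sketch of the @DataProvider pattern (the class, method and provider names are made up):
import org.testng.annotations.DataProvider;
import org.testng.annotations.Test;
import static org.testng.Assert.assertEquals;

public class AdditionTest {
    // One provider for the valid cases...
    @DataProvider(name = "validSums")
    public Object[][] validSums() {
        return new Object[][] { { 2, 3, 5 }, { -1, 1, 0 } };
    }

    // ...and a separate provider for the edge/invalid cases.
    @DataProvider(name = "overflowSums")
    public Object[][] overflowSums() {
        return new Object[][] { { Integer.MAX_VALUE, 1 } };
    }

    @Test(dataProvider = "validSums")
    public void addsCorrectly(int a, int b, int expected) {
        assertEquals(a + b, expected);
    }

    @Test(dataProvider = "overflowSums")
    public void wrapsOnOverflow(int a, int b) {
        assertEquals(a + b, Integer.MIN_VALUE); // int overflow wraps around
    }
}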
{ "language": "en", "url": "https://stackoverflow.com/questions/6658", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "126" }
Q: Reference that lists available JavaScript events? I'm aware of things like onchange, onmousedown and onmouseup but is there a good reference somewhere that lists all of them complete with possibly a list of the elements that they cover?
A: Here is a pretty good JavaScript event reference with the elements they are for: JavaScript Tutorial >> JavaScript Events
A: This Javascript Cheat Sheet has a complete list of event handlers. Nearly all of them can be used on any HTML element except for one or two. If you want to use a lightweight javascript library, DOMAssistant is very lightweight and allows you to add events to elements very easily. Like so: $("#navigation a").addEvent("click", myFunc);
A: If you're going to be working with events (setting custom functions and event handlers), then I'd recommend checking out the jQuery library. It makes event binding so much easier than doing it by hand.
A: W3Schools seems to have a good Javascript events reference: HTML DOM Events
A: Quirksmode has a nice event-compatibility table and an introduction.
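For reference, binding "by hand" looks roughly like this (the element id and handler are made up; the attachEvent branch covers older IE):
function onNavClick(e) {
    e = e || window.event;                  // older IE puts the event on window
    var target = e.target || e.srcElement;  // older IE uses srcElement
    alert('clicked: ' + target.href);
}

var link = document.getElementById('home-link');
if (link.addEventListener) {
    link.addEventListener('click', onNavClick, false); // W3C event model
} else if (link.attachEvent) {
    link.attachEvent('onclick', onNavClick);           // IE 8 and earlier
}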
{ "language": "en", "url": "https://stackoverflow.com/questions/6661", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "24" }
Q: Web Services -- WCF vs. ASMX ("Standard") I am working on a new project. Is there any benefit to going with a WCF web service over a regular old-fashioned web service? Visual Studio offers templates for both. What are the differences? Pros and cons?
A: The pros of doing it all by yourself are: * *No learning curve *Very flexible The pros of WCF are: * *Costs less time in the longer run *Switch protocols without programming A disadvantage of WCF: some static property names can be pretty lengthy... To summarize: WCF lets you focus on programming, but you need to learn it first ;-)
A: What is a "regular old fashioned web service?" An ASMX service, or are you using WSE as well? ASMX services are not naturally interoperable, don't support WS-* specs, and ASMX is a technology that is aging very quickly. WSE (Web Service Enhancements) services DO add support for WS-* and can be made to be interoperable, but WCF is meant to replace WSE, so you should take the time to learn it. I would say that unless your application is a quick and dirty one-off, you will gain immense flexibility and end up with a better design if you choose WCF. WCF does have a learning curve beyond a [WebMethod] attribute, but the learning curve is over-exaggerated in my opinion, and it is exponentially more powerful and future proof than legacy ASMX services. Unless your time line simply cannot tolerate the learning curve, you would be doing yourself a huge favor learning WCF instead of just sticking with ASP.NET Web Services. Applications will only continue to become more and more distributed and interconnected, and WCF is the future of distributed computing on the Microsoft platform. Here is a comparison between the two.
A: Pro for WCF: You don't need a web server (i.e. IIS). You actually don't need a server OS.
A: I like the fact that writing WCF services makes it easy to separate your service from the implementation. You can write your service and then host it in IIS, a console application, or a Windows service; you can also talk to it via HTTP, net TCP, etc.
A: Unit tests on your service's implementation and interaction are easier to do!
A: If your project is using framework 4.0, why don't you try Web API, which is easy to understand and uses convention over configuration. It's a great way of building applications with super fast interfaces. Have a look at the getting started videos from MS; it has evolved from WCF Data Services. http://www.asp.net/web-api/overview/getting-started-with-aspnet-web-api
A: In my experience WCF is absurdly verbose to work with, it is not quite compatible with other Microsoft products and, of course, it is not widely accepted outside of the Microsoft world. But my main problem is that it is not stable; it tends to fail (in some situations) and it requires tweaking before it can be used. A standard SOAP web service, by contrast, works, is easy to work with and is widely compatible (Java JAX accepts it without any modification). Adding authentication to SOAP can be a bit tricky but not impossible.
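To make the "learning curve beyond a [WebMethod] attribute" point concrete, here is a rough sketch of the WCF equivalent, self-hosted to show the "no IIS required" argument (the service names and endpoint address are invented):
using System;
using System.ServiceModel;

[ServiceContract]
public interface IContactService
{
    [OperationContract]
    string GetContactName(int id);
}

public class ContactService : IContactService
{
    public string GetContactName(int id)
    {
        return "Rob"; // placeholder implementation
    }
}

class Program
{
    static void Main()
    {
        // Host the service in a plain console app - no web server involved.
        using (ServiceHost host = new ServiceHost(typeof(ContactService),
                   new Uri("http://localhost:8000/contacts")))
        {
            host.AddServiceEndpoint(typeof(IContactService),
                                    new BasicHttpBinding(), "");
            host.Open();
            Console.WriteLine("Service running; press Enter to stop.");
            Console.ReadLine();
        }
    }
}
Swapping BasicHttpBinding for, say, NetTcpBinding is the "switch protocols without programming" benefit mentioned in the first answer; in practice it is usually done in configuration rather than code.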
{ "language": "en", "url": "https://stackoverflow.com/questions/6666", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "34" }
Q: ASP.NET Web Service Results, Proxy Classes and Type Conversion I'm still new to the ASP.NET world, so I could be way off base here, but so far this is to the best of my (limited) knowledge! Let's say I have a standard business object "Contact" in the Business namespace. I write a Web Service to retrieve a Contact's info from a database and return it. I then write a client application to request said details. Now, I also then create a utility method that takes a "Contact" and does some magic with it, like Utils.BuyContactNewHat() say. Which of course takes the Contact of type Business.Contact. I then go back to my client application and want to utilise the BuyContactNewHat method, so I add a reference to my Utils namespace and there it is. However, a problem arises with:
Contact c = MyWebService.GetContact("Rob");
Utils.BuyContactNewHat(c); // << Error Here
Since the return type of GetContact is MyWebService.Contact and not Business.Contact as expected. I understand why this is: when accessing a web service, you are actually programming against the proxy class generated by the WSDL. So, is there an "easier" way to deal with this type of mismatch? I was considering perhaps trying to create a generic converter class that uses reflection to ensure two objects have the same structure and then simply transfers the values across from one to the other.
A: You are on the right track. To get the data from the proxy object back into one of your own objects, you have to do left-hand-right-hand code, i.e. copy property values. I'll bet you that there is already a generic method out there that uses reflection. Some people will use something other than a web service (.net remoting) if they just want to get a business object across the wire. Or they'll use binary serialization. I'm guessing you are using the web service for a reason, so you'll have to do property copying.
A: You don't actually have to use the generated class that the WSDL gives you. If you take a look at the code that it generates, it's just making calls into some .NET framework classes to submit SOAP requests. In the past I have copied that code into a normal .cs file and edited it. Although I haven't tried this specifically, I see no reason why you couldn't drop the proxy class definition and use the original class to receive the results of the SOAP call. It must already be doing reflection under the hood, it seems a shame to do it twice.
A: I would recommend that you look at writing a Schema Importer Extension, which you can use to control proxy code generation. This approach can be used to (gracefully) resolve your problem without kludges (such as copying around objects from one namespace to another, or modifying the proxy generated reference.cs class only to have it replaced the next time you update the web reference). Here's a (very) good tutorial on the subject: http://www.microsoft.com/belux/msdn/nl/community/columns/jdruyts/wsproxy.mspx
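For the "generic method that uses reflection" idea from the first answer, a minimal C# sketch (it matches on property name and assignable type only; nested objects and collections would need more work):
using System.Reflection;

public static class PropertyCopier
{
    // Copy every readable source property to a writable target property
    // with the same name and a compatible type.
    public static void Copy(object source, object target)
    {
        foreach (PropertyInfo sp in source.GetType().GetProperties())
        {
            if (!sp.CanRead) continue;
            PropertyInfo tp = target.GetType().GetProperty(sp.Name);
            if (tp != null && tp.CanWrite &&
                tp.PropertyType.IsAssignableFrom(sp.PropertyType))
            {
                tp.SetValue(target, sp.GetValue(source, null), null);
            }
        }
    }
}

// Usage in the scenario above:
// MyWebService.Contact proxy = MyWebService.GetContact("Rob");
// Business.Contact c = new Business.Contact();
// PropertyCopier.Copy(proxy, c);
// Utils.BuyContactNewHat(c);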
{ "language": "en", "url": "https://stackoverflow.com/questions/6681", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: When is OOP better suited for? Since I started studying object-oriented programming, I frequently read articles/blogs saying functions are better, or not all problems should be modeled as objects. From your personal programming adventures, when do you think a problem is better solved by OOP?
A: I'm an old timer, but have also programmed OOP for a long time. I am personally against using OOP just to use it. I prefer objects to have specific reasons for existing, that they model something concrete, and that they make sense. The problem that I have with a lot of the newer developers is that they have no concept of the resources that they are consuming with the code that they create. When dealing with a large amount of data and accessing databases the "perfect" object model may be the worst thing you can do for performance and resources. My bottom line is if it makes sense as an object then program it as an object, as long as you consider the performance/resource impact of the implementation of your object model.
A: I think it fits best when you are modeling something cohesive with state and associated actions on those states. I guess that's kind of vague, but I'm not sure there is a perfect answer here. The thing about OOP is that it lets you encapsulate and abstract data and information away, which is a real boon in building a large system. You can do the same with other paradigms, but it seems OOP is especially helpful in this category. It also kind of depends on the language you are using. If it is a language with rich OOP support, you should probably use that to your advantage. If it isn't, then you may need to find other mechanisms to help break up the problem into smaller, easily testable pieces.
A: I am sold on OOP. Anytime you can define a concept for a problem, it can probably be wrapped in an object. The problem with OOP is that some people overused it and made their code even more difficult to understand. If you are careful about what you put in objects and what you put in services (static classes) you will benefit from using objects. Just don't put something that doesn't belong to an object in the object just because you need your object to do something new that you didn't think of initially; refactor and find the best way to add that functionality.
A: There are 5 criteria for deciding whether you should favor Object Oriented over Object Based, Functional or Procedural code. Remember all of these styles are available in all languages, they're styles. All of these are written in a style of "Should I favor OO in this situation?" The system is very complex and has roughly 9k LOC or more (just an arbitrary level). -- As systems get more complex, the benefits gained by encapsulating complexity go up quite a bit. With OO, as opposed to the other techniques, you tend to encapsulate more and more of the complexity, which is very valuable at this level. Object Based or procedural should be favored before this. (This is not advocating a particular language mind you. OO C fits these features more than OO C++ in my mind, a language with a notorious reputation for leaky abstractions and an ability to eat shops with even 1 mediocre/obstinate programmer for lunch). Your code is not operations on data (i.e. Database based or math/analysis based). Database based code is often more easily represented via procedural style. Analysis based code is often more easily represented in a functional style. Your model is a simulation of something (OO excels at simulations).
*You're doing something for which the object-based subtype dispatch of OO is valuable (i.e., you need to send a message to all objects of a certain type and its various subtypes and get an appropriate, but different, reaction out of all of them).

*Your app is not multi-threaded, especially in a non-worker-task type of codebase. OO is quite problematic in programs which are multithreaded and require different threads to do different tasks. If your program is structured with one or two main threads and many worker threads doing the same thing, the muddled control flow of OO programs is easier to handle, as all of the worker threads will be isolated in what they touch and can be considered a monolithic section of code. Consider any other paradigm, actually. Functional excels at multithreading (lack of side effects is a huge boon), and object-based programming can give you some of the encapsulation of OO, with more traceable procedural code in critical sections of your codebase. Procedural, of course, excels in this arena as well.

A: There is no hard and fast rule. A problem is better solved with OOP when you are better at solving problems and thinking in an OO mentality. Object orientation is just another tool which has come along through trying to make computing a better tool for solving problems. However, it can allow for better code reuse, and can also lead to neater code. But quite often these highly praised qualities are, in reality, of little real value. Applying OO techniques to an existing functional application could really cause a lot of problems. The skill lies in learning many different techniques and applying the most appropriate to the problem at hand. OO is often quoted as a Nirvana-like solution to software development, but there are many times when it is not appropriate to apply it to the issue at hand. It can, quite often, lead to over-engineering of a problem to reach the perfect solution, when often that is really not necessary. In essence, OOP is not really Object Oriented Programming, but mapping Object Oriented Thinking to a programming language capable of supporting OO techniques. OO techniques can be supported by languages which are not inherently OO, and there are techniques you can use within functional languages to take advantage of the benefits. As an example, I have been developing OO software for about 20 years now, so I tend to think in OO terms when solving problems, irrespective of the language I am writing in. Currently I am implementing polymorphism in Perl 5.6, which does not natively support it. I have chosen to do this as it will make maintenance and extension of the code a simple configuration task, rather than a development issue. Not sure if this is clear. There are people who are hard-liners in the OO court, and there are people who are hard-liners in the Functional court. And then there are people who have tried both and try to take the best from each. Neither is perfect, but both have some very good traits that you can utilise no matter what the language. If you are trying to learn OOP, don't just concentrate on OOP, but try to apply Object Oriented Analysis and general OO principles across the whole spectrum of the problem solution.

A: Some places where OO isn't so good are where you're dealing with "sets" of data, as in SQL. OO tends to make set-based operations more difficult because it isn't really designed to optimally take the intersection or union of two sets.
Also, there are times when a functional approach would make more sense, such as this example taken from MSDN: Consider, for example, writing a program to convert an XML document into a different form of data. While it would certainly be possible to write a C# program that parsed through the XML document and applied a variety of if statements to determine what actions to take at different points in the document, an arguably superior approach is to write the transformation as an eXtensible Stylesheet Language Transformation (XSLT) program. Not surprisingly, XSLT has a large streak of functionalism inside of it.

A: I find it helps to think of a given problem in terms of 'things'. If the problem can be thought of as having one or more 'things', where each 'thing' has a number of attributes or pieces of information that refer to its state, and a number of operations that can be performed on it, then OOP is probably the way to go!

A: The key to learning object-oriented programming is learning about design patterns. By learning about design patterns you can see better when classes are needed and when they are not. Like anything else used in programming, the use of classes and other features of OOP languages depends on your design and requirements. Like algorithms, design patterns are a higher-level concept. A design pattern plays a role similar to that of algorithms in traditional programming languages: it tells you how to create and combine objects to perform some useful task. Like the best algorithms, the best design patterns are general enough to apply to a variety of common problems.

A: In my opinion it is more a question about you as a person. Certain people think better in functional terms and others prefer classes and objects. I would say that OOP is better suited when it matches your internal (subjective) mental model of the world.

A: Object-oriented code and procedural code have different extensibility points. Object-oriented solutions make it easier to add new classes without modifying existing functions (see the Open-Closed Principle), while procedural code allows you to add functions without modifying existing data structures. Quite often different parts of a system require different approaches depending upon the type of change that is anticipated. (A minimal sketch of this difference appears at the end of this thread.)

A: OO allows logic related to an object to be placed within a single place (the class, or object) so that it can be decoupled and easier to debug and maintain. What I have observed is that every app is a combination of OO and procedural code, where the procedural code is the glue that binds all your objects together (at the very least, the code in your main function). The more you can turn your procedural code into OO, the easier it will be to maintain your code.

A: Why OOP is used for programming: * *Its flexibility – OOP is really flexible in terms of usage and implementation. *It can reduce your source code by more than 99.9% – it may sound like I'm exaggerating, but it is true. *It makes implementing security easier – we all know that security is one of the vital requirements when it comes to web development. Using OOP can ease the security implementations in your web projects. *It makes the coding more organized – we all know that a clean program comes from clean coding. Using OOP instead of procedural makes things more organized and systematized (obviously).
*It helps your team work with each other easily – I know some of you have experienced team projects, and you know it's important to use the same methods, implementations, and algorithms.

A: It depends on the problem: the OOP paradigm is useful for designing distributed systems or frameworks with many entities that live across the user's actions (for example, a web application). But if you have a math problem you will prefer a functional language (Lisp); for performance-critical systems you will use Ada or C, and so on. OOP languages are also useful because they usually provide a garbage collector (automatic memory management) at runtime; if you program in C, a lot of the time you must find and fix memory problems manually.

A: OOP is useful when you have things. A socket, a button, a file. If a class name ends in -er, it is almost always a function that is pretending to be a class. A TestRunner more than likely should be a function that runs tests (and probably named runTests).

A: Personally, I think OOP is practically a necessity for any large application. I can't imagine having a program over 100k lines of code without using OOP; it would be a maintenance and design nightmare.

A: I'll tell you when OOP is bad: when the architect writes really complicated, undocumented OOP code, leaves halfway through the project, and many of the common code pieces he used across various projects have missing code. Thank god for .NET Reflector. And the organization was not running Visual SourceSafe or Subversion. And I'm sorry, but two pages of code to log in is rather ridiculous even if it is cutely OOPed....
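To ground the earlier answer about extensibility points, here is a minimal Java sketch (the Shape, Circle, and Square names are invented for illustration, not taken from any answer). Adding Square later touches no existing code, which is the object-oriented half of the trade-off; in a procedural design over a fixed data structure, adding a new operation would be the cheap direction instead.

interface Shape {
    double area();
}

class Circle implements Shape {
    private final double radius;
    Circle(double radius) { this.radius = radius; }
    public double area() { return Math.PI * radius * radius; }
}

// Added later: no edits to Circle or to totalArea() are required.
class Square implements Shape {
    private final double side;
    Square(double side) { this.side = side; }
    public double area() { return side * side; }
}

class Geometry {
    // Subtype dispatch picks the right area() for every current and future Shape.
    static double totalArea(java.util.List<Shape> shapes) {
        double sum = 0;
        for (Shape s : shapes) sum += s.area();
        return sum;
    }
}

This also illustrates the subtype-dispatch criterion from the five-point answer above: the caller sends the same area() message to every element and gets an appropriate, but different, reaction from each.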
{ "language": "en", "url": "https://stackoverflow.com/questions/6703", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "29" }
Q: Master Pages for large web sites I've just been learning about master pages in ASP.NET 2.0. They sound great, but how well do they work in practice? Does anybody have experience of using them for a large web site?

A: I'm pretty sure I've only used master pages in the context of ASP.NET MVC, so I'm not sure if it differs from Web Forms, but in my experience they are not only excellent but I couldn't imagine not using them. Master pages are code inheritance for web pages.

A: They are a must if you want to maintain the look of your application throughout all the pages in the application. They are fairly easy to use. First of all, design your master page and define where you want the content to be placed:

<%@ Master ... %>
<%-- HTML code --%>
<asp:ContentPlaceHolder id="plhMainContent" runat="server" />
<%-- HTML code --%>

You can have any number of place holders; just give them proper identifiers because you'll need them later. Then when creating an aspx page, you will need to mention which master page to use and in which place holder to put what content:

<%@ Page ... MasterPageFile="~/MasterPage.master" ... %>
<asp:Content ID="ContentIdentifier" ContentPlaceHolderID="plhMainContent" runat="server">
    <%-- More HTML here --%>
    <%-- Insert web controls here --%>
</asp:Content>

Just make sure you link to the correct master page and that your content refers to the correct place holder. Master pages save a lot of time and are very powerful. There are tutorials out there; learn the power of place holders and web controls. Where I work we use master pages and web controls extensively for some major corporations; it gives us an edge when comparing with what other companies can offer.

A: They are extremely useful, especially in a CMS environment and for large sites, and as MattMitchell says, it's inconceivable that you would build a large site without them. Select a template, each template has different editable areas, job done. Master pages can also be inherited, so you can have a Style.master, derive a Header.master from it, then derive all of your layout-based templates from that. (A rough sketch of this nesting appears at the end of this thread.)

A: Master pages have made building templated websites easy. I think the trickiest part in building a website using master pages is knowing when to put things into the master page and when to put things into the ContentPlaceHolder on the child page. Generally, dynamic stuff goes into the placeholder while static items go into the master page, but there is sometimes a gray area. It's mostly a design/architecture question.

A: In practice I don't often find sites developed without master pages. They allow simple and easy manipulation of site look and feel, and also make navigation elements and shared content pieces a breeze. ASP.NET 3.5 even allows multiple content placeholders and manipulation of head sections across a single master page. I rate it as being in the top 10 tools for web developers using ASP.NET. Even ASP.NET MVC uses master pages, and all the samples Phil Haack and his crowd put together make use of them.

A: I echo other voices in here. I have used master pages in 2.0 and the feature has been great to me. I have been embedding banners, standardized backgrounds, data captured from Active Directory, and other JavaScript features on the master page for use throughout the app, maintaining look-and-feel consistency without the need to duplicate the effort on multiple pages. Great feature.
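As a rough sketch of the nesting mentioned above (file and placeholder names are invented for illustration; nested master pages work at runtime from ASP.NET 2.0 on, though designer support arrived later): a child master points at its parent with MasterPageFile, fills the parent's placeholder, and exposes a new placeholder of its own for the pages beneath it.

<%@ Master MasterPageFile="~/Style.master" %>
<asp:Content ContentPlaceHolderID="plhMainContent" runat="server">
    <div class="header"><%-- shared header markup --%></div>
    <asp:ContentPlaceHolder ID="plhPageBody" runat="server" />
</asp:Content>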
{ "language": "en", "url": "https://stackoverflow.com/questions/6719", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Icons: How does a developer with no design skill make his/her application icons look pretty? I probably spend far too much time trying to make my visual interfaces look good, and while I'm pretty adept at finding the right match between usability and style, one area I am hopeless at is making nice looking icons. How do you people overcome this (I'm sure common) problem? I'm thinking of things like images on buttons and, perhaps most important of all, the actual application icon. Do you rely on third-party designers, in or out of house? Or do you know of some hidden website that offers lots of icons for us to use? I've tried Google, but I seem to find either expensive packages that are very specific, millions of Star Trek icons, or icons that look abysmal at 16x16, which is my preferred size for in-application buttons. Any help/advice appreciated.

A: Don't forget that unless you have lots of toolbar buttons or other such objects to fill, it's possible to get by with very few icons beyond your own unique application icon and system icons. (Remember to call them from existing libraries on the user's machine, not repackage them; keep the lawyers happy.) Having too many icons can be just as bad as having some ugly ones. Many guidelines state that if you can't have professional-looking icons, you should have some whitespace instead. It can help keep the interface light and feeling uncluttered.

A: Good icons are hard to design. I have tried to design my own, and have used in-house graphics designers as well. However, building a good icon set takes a lot of work, even for the graphic designer. I believe your best solution is to buy/find a set of icons for use in your projects. The Silk icon set is a good, free set and can be found at FamFamFam. There are over 1000 icons in this set, and it is very popular. If you are looking for something "different", you can purchase icon sets for a couple hundred bucks. Considering the cost of having a designer create them for you, or doing them yourself, these sets are cheap! Here are a few icon designers I've come across on the web: * *Icon Factory *Icon Experience *Icon Buffet

A: If you have money, definitely go with a professional designer. At first, if you don't have too many projects requiring a designer, just hire one on a contract basis. If you start feeling the need for a full-time designer, then it's going to be beneficial to hire one. Good-looking free icon sets are available, but you should shop around for a decent icon set which you can reuse for most of your projects. Finally, if you don't have access to a designer, just keep the look very clean and simple, because chances are that you can't do a good-looking design (since you're not an artist).

A: You don't have to be a great designer to come out with a decent UI and a great user experience for your application. I think there are certain principles you can follow that can dramatically improve your application. At a high level this includes: * *Identifying your top 3 use cases *Measuring and reducing the number of clicks it takes to get through the top use cases *Sketch, prototype, throw it away, and challenge yourself to do it with less I've written a blog entry that attempts to write out some principles related to GUI design. Check it out and let me know what you think: How to improve the User Experience of your GUI application with some simple principles.
A: These are the icon links I found some time ago; I think they were not posted yet: * *Eyecandy for your KDE-Desktop *+3700 Free icons for your website or blog *Icons for people who need icons *Tango Icon Library This also sounds interesting: OpenClipArt, but their website does not seem to offer quick previews (thumbnails).

A: We have an in-house designer that does ours, although we also use freelance designers. You could try starting a design competition on 99designs. There are also some free icon sets available, like these. If you google around, you'll also find quite a few commercially available icon sets that you can use (although obviously neither of the icon set options will get you an icon specific to your app!). Hope that helps!

A: I have a couple that I really like: GlyFx and Liquidicity Vector Icons. Those from Liquidicity are especially useful for WPF or Silverlight; you can make an interface that looks great even when zoomed.

A: I came across IconBuffet when registering my copy of Visual Studio Express. They have some awesome icons that you could use in your applications. For application development I have also started playing with WPF interfaces in .NET, soon to be available on Mono as well. With the ability to use web images and pictures in Windows applications, even non-creatives can develop some awesome interfaces. For website layouts I use sites like OSWD for designs and layouts that are free.

A: I recommend IconFactory, too. Or to be exact: http://stockicons.com/ There are a lot of icon sets at an affordable price, and I think buying a professional set is the best choice you have if you're not a graphics designer yourself. If you only need a single icon, it's probably worth it to hire a graphics designer.

A: But why on earth do you think you need to make icons to make your interfaces look good? Icon-driven interfaces are the bane of UI these days. Look at a screenshot of Komodo IDE or Eclipse, for example. These are horrendously badly designed interfaces. It's impossible to tell what the buttons do until you hover over them and get the tooltip. I suggest you use text unless there is an icon that better represents the concept; don't feel you need to represent every idea visually. I guess it depends what context you're developing UI for: have you a lot of users who don't speak English as their first language? Is it for occasional use or for power users to use every day? Is it a web site? Is it a desktop application? But when you really need icons, there are some good libraries out there. I think consistency is really important; Tango, for instance, "exists to help create a consistent graphical user interface experience for free and Open Source software," and the rather attractive icons are licensed under Creative Commons.

A: I thought it may be interesting to mention that Axialis has just released a version of their Icon Studio for Visual Studio 2008 for free. It will only install if VS 2008 Pro or TFS is installed, and it plugs directly into the VS toolbar.

A: I have always liked the icon selections at VirtualLNK. They have a good number of icon packages that have a modern look at a reasonable price.

A: Try this site, www.iconsreview.com; they offer reviews for a variety of icon packages, both free and for purchase.

A: www.iconshock.com has over 400,000 icons and you can buy the entire collection for around $400. Alternatively, there are a number of sites offering free icons; just be sure to check the licence.
A: I ended up getting $20 of credits (the minimum amount) from VectorStock; they have a lot of vector images, icons, and stuff like that to choose from, selling for 1 credit ($1) each.

A: You can always go on Elance and hire someone to make any icons/logos for you. I've done it several times and it's pretty cheap for what you're getting. There is so much competition on that site that someone will eventually come in at your price point. I've always believed there's no point in spinning your wheels with something you don't really specialize in. Oh yeah, and like this site, keeping it simple is always best!

A: I try to keep my applications very simple. Simplicity and usability can be good design in and of themselves if done intelligently. Take all of the buttons on Stack Overflow, for example. It shouldn't be very difficult to implement something similar in your own application. They are simple and they get the job done.

A: Try aiming for clean and simple first, and then worry about aesthetic elements.

A: Cheat and use photos! I've struggled with this. What I tend to do now is find photos of objects, but on a plain white background. The background can be made transparent. I then use this conversion tool to convert a PNG file into an .ico. When exporting, include all resolutions up to 512 pixels for good results.
{ "language": "en", "url": "https://stackoverflow.com/questions/6729", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "60" }
Q: Why doesn't my favicon display for my web site? I have a website that I've just uploaded onto the Internet. When I browse to the site using Firefox 3.0.1 on Ubuntu I don't see the favicon; Firefox 3.0.1 on WinXP does display it. Why isn't the favicon displaying under Ubuntu? It's a favicon.ico file in the root directory, not referenced in the meta tags; would it work better as a GIF?

A: Previously, there was no favicon. The browser cached the lack of a favicon. Clear the Firefox cache, and all is well.

A: Take a look at this previous question, which shows the two link tags you should put in your HTML in order for the favicon to work across all browsers and operating systems.
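For reference, the classic cross-browser pair looks like this; they are link elements in the page head, not meta tags, and the href shown here assumes the icon sits at the site root:

<link rel="shortcut icon" href="/favicon.ico" type="image/x-icon" />
<link rel="icon" href="/favicon.ico" type="image/x-icon" />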
{ "language": "en", "url": "https://stackoverflow.com/questions/6732", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Java + SQL Server - a viable solution? I'm going to start a new project - rewriting an existing system (PHP + SQL Server) from scratch because of some very serious limitations by design. We have quite good knowledge of SQL Server (currently we're using SQL Server 2000 in the existing system) and we would like to employ its newer version (2008, I guess) in our new project. I am really fond of the technologies that Java offers - particularly the Spring Framework and Wicket - and I am quite familiar with Java from other projects and assignments. Therefore, we are considering using Java and Microsoft SQL Server. There are two JDBC drivers for SQL Server - jTDS and Microsoft's one - http://msdn.microsoft.com/en-us/data/aa937724.aspx. I think we should test both of them. Are there any limitations in such a solution I should know of? Does anyone have experience with such a technology combination?

A: I've worked on a project using SQL Server in conjunction with a Java stack. It works very well; since you're going through JDBC, it shouldn't really matter which database is underneath. We used Ehcache together with Hibernate and had problems with the Microsoft JDBC driver, so we switched to jTDS and it works really well. It was quite a while ago, so you still might want to give the Microsoft driver a chance...

A: I don't know about Java and 2008... but you shouldn't have too much trouble with Java and SQL Server 2000. As lubos suggested, you'd be doing yourself a favour to look at C#, but if you're much more comfortable with Java then there shouldn't be any real limitations, as the JDBC connector is supported by Microsoft.

A: We've been running an application using Hibernate talking to multiple remote SQL Server instances for a few years now, and we also switched to the jTDS driver early on after a few issues with the Microsoft driver. Since the switch we haven't had any issues at all. However, it's not a complicated application, so it doesn't use any LOBs. Hope that helps.

A: jTDS is excellent. I've been using it for years without issue in high-availability production environments.

A: I would lean towards the jTDS driver. The Microsoft driver has a limitation where you cannot read the same column twice. This happens frequently when using Hibernate.

A: The JDBC driver works well with SQL Server 2008; I've not had any problems with it. The version that you need to download depends on the version of the JRE you have installed: JRE6 uses JDBC4, JRE7 uses JDBC4.1, etc. Once you download the correct driver from Microsoft and run the installer, you will need to copy sqljdbc_auth.dll from the \auth directory to the C:\Windows\System32 directory. You can then use this code to make a connection. In your header:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

and in your class:

public class connectToSQL {
    public void connectToDB() throws Exception {
        Class.forName("com.microsoft.sqlserver.jdbc.SQLServerDriver");
        String connectionUrl = "jdbc:sqlserver://<IPADDRESS>:<PORT>;DatabaseName=<NAME OF DATABASE TO CONNECT TO>;IntegratedSecurity=false";
        Connection con = DriverManager.getConnection(connectionUrl, "<SQL SERVER USER LOGIN>", "<SQL SERVER PASSWORD>");
        Statement s = con.createStatement();
        ResultSet r = s.executeQuery("SELECT * FROM <TABLENAME TO SELECT FROM>");
        while (r.next()) {
            System.out.println(r.getString(1));
        }
    }
}
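For comparison, here is a minimal sketch of the same connection through the jTDS driver mentioned in several answers (host, database, and credentials are placeholders; jTDS uses the driver class net.sourceforge.jtds.jdbc.Driver and jdbc:jtds:sqlserver:// URLs):

import java.sql.Connection;
import java.sql.DriverManager;

public class ConnectWithJtds {
    public static void main(String[] args) throws Exception {
        // Explicit registration; newer JVMs can also discover the driver automatically.
        Class.forName("net.sourceforge.jtds.jdbc.Driver");
        String url = "jdbc:jtds:sqlserver://<HOST>:1433/<DATABASE>";
        try (Connection con = DriverManager.getConnection(url, "<USER>", "<PASSWORD>")) {
            System.out.println("Connected: " + !con.isClosed());
        }
    }
}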
{ "language": "en", "url": "https://stackoverflow.com/questions/6765", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: How to make a Flash movie with a transparent background This page from Adobe says to add a "wmode" parameter and set its value to "transparent": http://kb.adobe.com/selfservice/viewContent.do?externalId=tn_1420 This works flawlessly in IE. The background renders correctly in Firefox and Safari; however, as soon as you use the browser's scroll bar and then mouse over the Flash control, you must click once to activate the control. You can see this behavior if you try to hit the play button in Adobe's example. Anyone know a way around this?

A: On another note: setting the wmode to transparent has a few kinks. For instance, it can break scrolling (the Flash stays in the same place, disregarding the scroll) in some older versions of Firefox (pre 2.0). I've also had issues with Alt-key combinations in text fields not working when wmode is transparent. Also, if you need to place HTML content above Flash content (not a good idea generally, but there are cases when it's useful), wmode=transparent is the way to go.

A: You know you can set the background color when you're embedding? The following attributes are optional when defining the object and/or embed tags. For object, all attributes are defined in param tags unless otherwise specified: bgcolor - [hexadecimal RGB value] in the format #RRGGBB. Specifies the background color of the movie. Use this attribute to override the background color setting specified in the Flash file. This attribute does not affect the background color of the HTML page. Cut 'n paste from http://kb.adobe.com/selfservice/viewContent.do?externalId=tn_12701&sliceId=1

A: Enabling windowless mode (wmode=) makes embedded Flash act and render just like other elements. Without that, it's rendered in a separate step and just overlaid on the browser's window. Could the Flash element be losing focus? It sounds like input focus is moved to the scrollbar, and then you have to move it back. Also, you weren't clear whether the focus issue was only in Firefox or also in IE.

A: The Adobe example "works" in Firefox 3.0.1 in the sense that the background is transparent. However, in Firefox 3.0.1 and Safari 3.1.2 you must click the play button twice to see the animation.

A: After spending some more time on this, I agree with @grapefrukt. Setting wmode to transparent leads to all sorts of strange issues, and in my opinion it should be avoided. Instead I've resorted to passing the background color as a parameter. I use the following ActionScript to draw the background:

var parameters:Object = LoaderInfo(this.root.loaderInfo).parameters;
opaqueBackground = parameters["background-color"];

EDIT: Thanks to @grapefrukt for reminding me of the bgcolor param (which makes the ActionScript above totally unnecessary).
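For reference, the wmode flag from the question goes in two places in the classic object/embed markup, one for IE and one for the Gecko/WebKit browsers (the file name and dimensions below are placeholders):

<object classid="clsid:D27CDB6E-AE6D-11cf-96B8-444553540000" width="400" height="300">
  <param name="movie" value="movie.swf" />
  <param name="wmode" value="transparent" />
  <embed src="movie.swf" wmode="transparent" width="400" height="300"
         type="application/x-shockwave-flash" />
</object>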
{ "language": "en", "url": "https://stackoverflow.com/questions/6778", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Is String.Format as efficient as StringBuilder Suppose I have a StringBuilder in C# that does this:

StringBuilder sb = new StringBuilder();
string cat = "cat";
sb.Append("The ").Append(cat).Append(" in the hat");
string s = sb.ToString();

Would that be as efficient as, or more efficient than, having:

string cat = "cat";
string s = String.Format("The {0} in the hat", cat);

If so, why? EDIT After some interesting answers, I realised I probably should have been a little clearer in what I was asking. I wasn't so much asking which was quicker at concatenating strings, but which is quicker at injecting one string into another. In both cases above I want to inject one or more strings into the middle of a predefined template string. Sorry for the confusion.

A: I think in most cases like this clarity, and not efficiency, should be your biggest concern. Unless you're crushing together tons of strings, or building something for a lower-powered mobile device, this probably won't make much of a dent in your run speed. I've found that, in cases where I'm building strings in a fairly linear fashion, either doing straight concatenations or using StringBuilder is your best option. I suggest this in cases where the majority of the string that you're building is dynamic. Since very little of the text is static, the most important thing is that it's clear where each piece of dynamic text is being put, in case it needs to be updated in the future. On the other hand, if you're talking about a big chunk of static text with two or three variables in it, even if it's a little less efficient, I think the clarity you gain from string.Format makes it worth it. I used this earlier this week when having to place one bit of dynamic text in the center of a 4-page document. It'll be easier to update that big chunk of text if it's in one piece than having to update three pieces that you concatenate together.

A: If only because string.Format doesn't exactly do what you might think, here is a rerun of the tests 6 years later on .NET 4.5. Concat is still fastest, but really it's less than a 30% difference. StringBuilder and Format differ by barely 5-10%. I got variations of 20% running the tests a few times. Milliseconds, a million iterations: * *Concatenation: 367 *New StringBuilder for each key: 452 *Cached StringBuilder: 419 *string.Format: 475 The lesson I take away is that the performance difference is trivial, and so it shouldn't stop you writing the simplest readable code you can. Which for my money is often, but not always, a + b + c.
const int iterations = 1000000;
var keyprefix = this.GetType().FullName;
// Capacity for prefix + ":" + the digits of the largest key.
var maxkeylength = keyprefix.Length + 1 + 1 + (int)Math.Log10(iterations);
Console.WriteLine("KeyPrefix \"{0}\", Max Key Length {1}", keyprefix, maxkeylength);
var concatkeys = new string[iterations];
var stringbuilderkeys = new string[iterations];
var cachedsbkeys = new string[iterations];
var formatkeys = new string[iterations];
var stopwatch = new System.Diagnostics.Stopwatch();

Console.WriteLine("Concatenation:");
stopwatch.Start();
for (int i = 0; i < iterations; i++) {
    var key1 = keyprefix + ":" + i.ToString();
    concatkeys[i] = key1;
}
Console.WriteLine(stopwatch.ElapsedMilliseconds);

Console.WriteLine("New StringBuilder for each key:");
stopwatch.Restart();
for (int i = 0; i < iterations; i++) {
    var key2 = new StringBuilder(keyprefix).Append(":").Append(i.ToString()).ToString();
    stringbuilderkeys[i] = key2;
}
Console.WriteLine(stopwatch.ElapsedMilliseconds);

Console.WriteLine("Cached StringBuilder:");
var cachedSB = new StringBuilder(maxkeylength);
stopwatch.Restart();
for (int i = 0; i < iterations; i++) {
    var key2b = cachedSB.Clear().Append(keyprefix).Append(":").Append(i.ToString()).ToString();
    cachedsbkeys[i] = key2b;
}
Console.WriteLine(stopwatch.ElapsedMilliseconds);

Console.WriteLine("string.Format");
stopwatch.Restart();
for (int i = 0; i < iterations; i++) {
    var key3 = string.Format("{0}:{1}", keyprefix, i.ToString());
    formatkeys[i] = key3;
}
Console.WriteLine(stopwatch.ElapsedMilliseconds);

var referToTheComputedValuesSoCompilerCantOptimiseTheLoopsAway =
    concatkeys.Union(stringbuilderkeys).Union(cachedsbkeys).Union(formatkeys).LastOrDefault(x => x[1] == '-');
Console.WriteLine(referToTheComputedValuesSoCompilerCantOptimiseTheLoopsAway);

A: String.Format uses StringBuilder internally, so logically that leads to the idea that it would be a little less performant due to more overhead. However, a simple string concatenation is the fastest method of injecting one string between two others, by a significant degree. This evidence was demonstrated by Rico Mariani in his very first Performance Quiz, years ago. The simple fact is that concatenations, when the number of string parts is known (without limitation: you could concatenate a thousand parts, as long as you know it's always 1000 parts), are always faster than StringBuilder or String.Format. They can be performed with a single memory allocation and a series of memory copies. Here is the proof. And here is the actual code for some String.Concat methods, which ultimately call FillStringChecked, which uses pointers to copy memory (extracted via Reflector):

public static string Concat(params string[] values)
{
    int totalLength = 0;
    if (values == null)
    {
        throw new ArgumentNullException("values");
    }
    string[] strArray = new string[values.Length];
    for (int i = 0; i < values.Length; i++)
    {
        string str = values[i];
        strArray[i] = (str == null) ?
            Empty : str;
        totalLength += strArray[i].Length;
        if (totalLength < 0)
        {
            throw new OutOfMemoryException();
        }
    }
    return ConcatArray(strArray, totalLength);
}

public static string Concat(string str0, string str1, string str2, string str3)
{
    if (((str0 == null) && (str1 == null)) && ((str2 == null) && (str3 == null)))
    {
        return Empty;
    }
    if (str0 == null) { str0 = Empty; }
    if (str1 == null) { str1 = Empty; }
    if (str2 == null) { str2 = Empty; }
    if (str3 == null) { str3 = Empty; }
    int length = ((str0.Length + str1.Length) + str2.Length) + str3.Length;
    string dest = FastAllocateString(length);
    FillStringChecked(dest, 0, str0);
    FillStringChecked(dest, str0.Length, str1);
    FillStringChecked(dest, str0.Length + str1.Length, str2);
    FillStringChecked(dest, (str0.Length + str1.Length) + str2.Length, str3);
    return dest;
}

private static string ConcatArray(string[] values, int totalLength)
{
    string dest = FastAllocateString(totalLength);
    int destPos = 0;
    for (int i = 0; i < values.Length; i++)
    {
        FillStringChecked(dest, destPos, values[i]);
        destPos += values[i].Length;
    }
    return dest;
}

private static unsafe void FillStringChecked(string dest, int destPos, string src)
{
    int length = src.Length;
    if (length > (dest.Length - destPos))
    {
        throw new IndexOutOfRangeException();
    }
    fixed (char* chRef = &dest.m_firstChar)
    {
        fixed (char* chRef2 = &src.m_firstChar)
        {
            wstrcpy(chRef + destPos, chRef2, length);
        }
    }
}

So, then:

string what = "cat";
string inthehat = "The " + what + " in the hat!";

Enjoy!

A: From the MSDN documentation: The performance of a concatenation operation for a String or StringBuilder object depends on how often a memory allocation occurs. A String concatenation operation always allocates memory, whereas a StringBuilder concatenation operation only allocates memory if the StringBuilder object buffer is too small to accommodate the new data. Consequently, the String class is preferable for a concatenation operation if a fixed number of String objects are concatenated. In that case, the individual concatenation operations might even be combined into a single operation by the compiler. A StringBuilder object is preferable for a concatenation operation if an arbitrary number of strings are concatenated; for example, if a loop concatenates a random number of strings of user input.

A: Oh also, the fastest would be:

string cat = "cat";
string s = "The " + cat + " in the hat";

A: NOTE: This answer was written when .NET 2.0 was the current version. This may no longer apply to later versions. String.Format uses a StringBuilder internally:

public static string Format(IFormatProvider provider, string format, params object[] args)
{
    if ((format == null) || (args == null))
    {
        throw new ArgumentNullException((format == null) ? "format" : "args");
    }
    StringBuilder builder = new StringBuilder(format.Length + (args.Length * 8));
    builder.AppendFormat(provider, format, args);
    return builder.ToString();
}

The above code is a snippet from mscorlib, so the question becomes "is StringBuilder.Append() faster than StringBuilder.AppendFormat()"? Without benchmarking, I'd probably say that the code sample above would run more quickly using .Append(). But it's a guess; try benchmarking and/or profiling the two to get a proper comparison. This chap, Jerry Dixon, did some benchmarking: http://jdixon.dotnetdevelopersjournal.com/string_concatenation_stringbuilder_and_stringformat.htm Updated: Sadly the link above has since died.
However, there's still a copy on the Wayback Machine: http://web.archive.org/web/20090417100252/http://jdixon.dotnetdevelopersjournal.com/string_concatenation_stringbuilder_and_stringformat.htm At the end of the day it depends whether your string formatting is going to be called repetitively, i.e. you're doing some serious text processing over hundreds of megabytes of text, or whether it's being called when a user clicks a button now and again. Unless you're doing some huge batch processing job, I'd stick with String.Format; it aids code readability. If you suspect a perf bottleneck, then stick a profiler on your code and see where it really is.

A: I ran some quick performance benchmarks, and for 100,000 operations averaged over 10 runs, the first method (StringBuilder) takes almost half the time of the second (String.Format). So, if this is infrequent, it doesn't matter. But if it is a common operation, then you may want to use the first method.

A: I would expect String.Format to be slower - it has to parse the string and then concatenate it. A couple of notes: * *Format is the way to go for user-visible strings in professional applications; this avoids localization bugs *If you know the length of the resultant string beforehand, use the StringBuilder(Int32) constructor to predefine the capacity

A: It really depends. For small strings with few concatenations, it's actually faster just to append the strings: String s = "String A" + "String B"; But for larger strings (very, very large strings), it's more efficient to use StringBuilder.

A: In both cases above, you want to inject one or more strings into the middle of a predefined template string; in that case, I would suggest String.Format is the quickest, because it is designed for that exact purpose.

A: It really depends on your usage pattern. A detailed benchmark between string.Join, string.Concat, and string.Format can be found here: String.Format Isn't Suitable for Intensive Logging

A: I would suggest not, since String.Format was not designed for concatenation; it was designed for formatting the output of various inputs, such as a date: String s = String.Format("Today is {0:dd-MMM-yyyy}.", DateTime.Today);
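To illustrate the loop case the MSDN quote above calls out, here is a minimal C# sketch (the counts and variable names are arbitrary): each += on a string copies everything accumulated so far into a fresh allocation, while a single StringBuilder grows one buffer.

// Quadratic behavior: every iteration reallocates and copies the whole string so far.
string csvSlow = "";
for (int i = 0; i < 10000; i++)
{
    csvSlow += i + ",";
}

// Near-linear: one growing buffer, one final ToString().
var sb = new System.Text.StringBuilder();
for (int i = 0; i < 10000; i++)
{
    sb.Append(i).Append(',');
}
string csvFast = sb.ToString();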
{ "language": "en", "url": "https://stackoverflow.com/questions/6785", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "163" }
Q: How can I encode xml files to xfdl (base64-gzip)? Before reading anything else, please take time to read the original thread. Overview: a .xfdl file is a gzipped .xml file which has then been encoded in base64. I wish to decode the .xfdl into xml which I can then modify and then re-encode back into a .xfdl file. xfdl > xml.gz > xml > xml.gz > xfdl I have been able to take a .xfdl file and decode it from base64 using uudeview:

uudeview -i yourform.xfdl

Then decompress it using gunzip:

gunzip -S "" < UNKNOWN.001 > yourform-unpacked.xml

The xml produced is 100% readable and looks wonderful. Without modifying the xml, then, I should be able to re-compress it using gzip:

gzip yourform-unpacked.xml

Then re-encode it in base64:

base64 -e yourform-unpacked.xml.gz yourform_reencoded.xfdl

If my thinking is correct, the original file and the re-encoded file should be equal. If I put yourform.xfdl and yourform_reencoded.xfdl into Beyond Compare, however, they do not match up. Also, the original file can be viewed in an .xfdl viewer (http://www.grants.gov/help/download_software.jsp#pureedge). The viewer says that the re-encoded xfdl is unreadable. I have also tried uuenview to re-encode in base64; it produces the same results. Any help would be appreciated.

A: As far as I know you cannot find the compression level of an already compressed file. When you are compressing the file, you can specify the compression level with -#, where the # is from 1 to 9 (1 being the fastest compression and 9 being the most compressed file). In practice you should never compare a compressed file with one that has been extracted and recompressed; slight variations can easily crop up. In your case I would compare the base64-encoded versions instead of the gzip'd versions.

A: You'll need to put the following line at the beginning of the XFDL file: application/vnd.xfdl; content-encoding="base64-gzip" After you've generated the base64-encoded file, open it in a text editor and paste the line above on the first line. Ensure that the base64'ed block starts at the beginning of the second line. Save it and try it in the Viewer! If it still does not work, it may be that the changes that were made to the XML made it non-compliant in some manner. In this case, after the XML has been modified, but before it has been gzipped and base64 encoded, save it with an .xfdl file extension and try opening it with the Viewer tool. The viewer should be able to parse and display the uncompressed/unencoded file if it is in a valid XFDL format.

A: Check these out: http://www.ourada.org/blog/archives/375 http://www.ourada.org/blog/archives/390 They are in Python, not Ruby, but that should get you pretty close. And the algorithm is actually for files with the header 'application/x-xfdl;content-encoding="asc-gzip"' rather than 'application/vnd.xfdl; content-encoding="base64-gzip"'. But the good news is that PureEdge (aka IBM Lotus Forms) will open that format with no problem. Then to top it off, here's a base64-gzip decode (in Python) so you can make the full round-trip:

import base64
import zlib

with open(filename, 'r') as f:
    header = f.readline()
    if header == 'application/vnd.xfdl; content-encoding="base64-gzip"\n':
        decoded = b''
        for line in f:
            decoded += base64.b64decode(line.encode("ISO-8859-1"))
        xml = zlib.decompress(decoded, zlib.MAX_WBITS + 16)

A: I did this in Java with the help of the Base64 class from http://iharder.net/base64. I've been working on an application to do form manipulation in Java.
I decode the file, create a DOM document from the XML, then write it back to file. My code in Java to read the file looks like this:

public XFDLDocument(String inputFile) throws IOException, ParserConfigurationException, SAXException {
    fileLocation = inputFile;
    try {
        // create file object
        File f = new File(inputFile);
        if (!f.exists()) {
            throw new IOException("Specified File could not be found!");
        }
        // open file stream from file
        FileInputStream fis = new FileInputStream(inputFile);
        // skip past the MIME header
        fis.skip(FILE_HEADER_BLOCK.length());
        // decompress from base64
        Base64.InputStream bis = new Base64.InputStream(fis, Base64.DECODE);
        // unzip the resulting stream
        GZIPInputStream gis = new GZIPInputStream(bis);
        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        DocumentBuilder db = dbf.newDocumentBuilder();
        doc = db.parse(gis);
        gis.close();
        bis.close();
        fis.close();
    } catch (ParserConfigurationException pce) {
        throw new ParserConfigurationException("Error parsing XFDL from file.");
    } catch (SAXException saxe) {
        throw new SAXException("Error parsing XFDL into XML Document.");
    }
}

My code in Java to write the file to disk looks like this:

/**
 * Saves the current document to the specified location.
 * @param destination Desired destination for the file.
 * @param asXML True if output should be un-encoded XML, not Base64/GZIP.
 * @throws IOException File cannot be created at specified location.
 * @throws TransformerConfigurationException
 * @throws TransformerException
 */
public void saveFile(String destination, boolean asXML) throws IOException, TransformerConfigurationException, TransformerException {
    BufferedWriter bf = new BufferedWriter(new FileWriter(destination));
    bf.write(FILE_HEADER_BLOCK);
    bf.newLine();
    bf.flush();
    bf.close();
    OutputStream outStream;
    if (!asXML) {
        outStream = new GZIPOutputStream(
            new Base64.OutputStream(
                new FileOutputStream(destination, true)));
    } else {
        outStream = new FileOutputStream(destination, true);
    }
    Transformer t = TransformerFactory.newInstance().newTransformer();
    t.transform(new DOMSource(doc), new StreamResult(outStream));
    outStream.flush();
    outStream.close();
}

Hope that helps.

A: I've been working on something like that, and this should work for PHP. You must have a writable tmp folder, and the PHP file must be named example.php!

<?php
function gzdecode($data) {
    $len = strlen($data);
    if ($len < 18 || strcmp(substr($data,0,2),"\x1f\x8b")) {
        echo "FILE NOT GZIP FORMAT";
        return null; // Not GZIP format (See RFC 1952)
    }
    $method = ord(substr($data,2,1)); // Compression method
    $flags = ord(substr($data,3,1)); // Flags
    if (($flags & 31) != $flags) {
        // Reserved bits are set -- NOT ALLOWED by RFC 1952
        echo "RESERVED BITS ARE SET.
VERY BAD"; return null; } // NOTE: $mtime may be negative (PHP integer limitations) $mtime = unpack("V", substr($data,4,4)); $mtime = $mtime[1]; $xfl = substr($data,8,1); $os = substr($data,8,1); $headerlen = 10; $extralen = 0; $extra = ""; if ($flags & 4) { // 2-byte length prefixed EXTRA data in header if ($len - $headerlen - 2 < 8) { return false; // Invalid format echo "INVALID FORMAT"; } $extralen = unpack("v",substr($data,8,2)); $extralen = $extralen[1]; if ($len - $headerlen - 2 - $extralen < 8) { return false; // Invalid format echo "INVALID FORMAT"; } $extra = substr($data,10,$extralen); $headerlen += 2 + $extralen; } $filenamelen = 0; $filename = ""; if ($flags & 8) { // C-style string file NAME data in header if ($len - $headerlen - 1 < 8) { return false; // Invalid format echo "INVALID FORMAT"; } $filenamelen = strpos(substr($data,8+$extralen),chr(0)); if ($filenamelen === false || $len - $headerlen - $filenamelen - 1 < 8) { return false; // Invalid format echo "INVALID FORMAT"; } $filename = substr($data,$headerlen,$filenamelen); $headerlen += $filenamelen + 1; } $commentlen = 0; $comment = ""; if ($flags & 16) { // C-style string COMMENT data in header if ($len - $headerlen - 1 < 8) { return false; // Invalid format echo "INVALID FORMAT"; } $commentlen = strpos(substr($data,8+$extralen+$filenamelen),chr(0)); if ($commentlen === false || $len - $headerlen - $commentlen - 1 < 8) { return false; // Invalid header format echo "INVALID FORMAT"; } $comment = substr($data,$headerlen,$commentlen); $headerlen += $commentlen + 1; } $headercrc = ""; if ($flags & 1) { // 2-bytes (lowest order) of CRC32 on header present if ($len - $headerlen - 2 < 8) { return false; // Invalid format echo "INVALID FORMAT"; } $calccrc = crc32(substr($data,0,$headerlen)) & 0xffff; $headercrc = unpack("v", substr($data,$headerlen,2)); $headercrc = $headercrc[1]; if ($headercrc != $calccrc) { echo "BAD CRC"; return false; // Bad header CRC } $headerlen += 2; } // GZIP FOOTER - These be negative due to PHP's limitations $datacrc = unpack("V",substr($data,-8,4)); $datacrc = $datacrc[1]; $isize = unpack("V",substr($data,-4)); $isize = $isize[1]; // Perform the decompression: $bodylen = $len-$headerlen-8; if ($bodylen < 1) { // This should never happen - IMPLEMENTATION BUG! echo "BIG OOPS"; return null; } $body = substr($data,$headerlen,$bodylen); $data = ""; if ($bodylen > 0) { switch ($method) { case 8: // Currently the only supported compression method: $data = gzinflate($body); break; default: // Unknown compression method echo "UNKNOWN COMPRESSION METHOD"; return false; } } else { // I'm not sure if zero-byte body content is allowed. // Allow it for now... Do nothing... echo "ITS EMPTY"; } // Verifiy decompressed size and CRC32: // NOTE: This may fail with large data sizes depending on how // PHP's integer limitations affect strlen() since $isize // may be negative for large sizes. if ($isize != strlen($data) || crc32($data) != $datacrc) { // Bad format! Length or CRC doesn't match! echo "LENGTH OR CRC DO NOT MATCH"; return false; } return $data; } echo "<html><head></head><body>"; if (empty($_REQUEST['upload'])) { echo <<<_END <form enctype="multipart/form-data" action="example.php" method="POST"> <input type="hidden" name="MAX_FILE_SIZE" value="100000" /> <table> <th> <input name="uploadedfile" type="file" /> </th> <tr> <td><input type="submit" name="upload" value="Convert File" /></td> </tr> </table> </form> _END; } if (!empty($_REQUEST['upload'])) { $file = "tmp/" . 
$_FILES['uploadedfile']['name'];
    $orgfile = $_FILES['uploadedfile']['name'];
    $name = str_replace(".xfdl", "", $orgfile);
    $convertedfile = "tmp/" . $name . ".xml";
    $compressedfile = "tmp/" . $name . ".gz";
    $finalfile = "tmp/" . $name . "new.xfdl";
    $target_path = "tmp/";
    $target_path = $target_path . basename($_FILES['uploadedfile']['name']);
    if (move_uploaded_file($_FILES['uploadedfile']['tmp_name'], $target_path)) {
    } else {
        echo "There was an error uploading the file, please try again!";
    }
    $firstline = "application/vnd.xfdl; content-encoding=\"base64-gzip\"\n";
    $data = file($file);
    $data = array_slice($data, 1);
    $raw = implode($data);
    $decoded = base64_decode($raw);
    $decompressed = gzdecode($decoded);
    $compressed = gzencode($decompressed);
    $encoded = base64_encode($compressed);
    $decoded2 = base64_decode($encoded);
    $decompressed2 = gzdecode($decoded2);
    $header = bin2hex(substr($decoded, 0, 10));
    $tail = bin2hex(substr($decoded, -8));
    $header2 = bin2hex(substr($compressed, 0, 10));
    $tail2 = bin2hex(substr($compressed, -8));
    $header3 = bin2hex(substr($decoded2, 0, 10));
    $tail3 = bin2hex(substr($decoded2, -8));
    $filehandle = fopen($compressedfile, 'w');
    fwrite($filehandle, $decoded);
    fclose($filehandle);
    $filehandle = fopen($convertedfile, 'w');
    fwrite($filehandle, $decompressed);
    fclose($filehandle);
    $filehandle = fopen($finalfile, 'w');
    fwrite($filehandle, $firstline);
    fwrite($filehandle, $encoded);
    fclose($filehandle);
    echo "<center>";
    echo "<table style='text-align:center' >";
    echo "<tr><th>Stage 1</th>";
    echo "<th>Stage 2</th>";
    echo "<th>Stage 3</th></tr>";
    echo "<tr><td>RAW DATA -></td><td>DECODED DATA -></td><td>UNCOMPRESSED DATA -></td></tr>";
    echo "<tr><td>LENGTH: ".strlen($raw)."</td>";
    echo "<td>LENGTH: ".strlen($decoded)."</td>";
    echo "<td>LENGTH: ".strlen($decompressed)."</td></tr>";
    echo "<tr><td><a href='tmp/".$orgfile."'/>ORIGINAL</a></td><td>GZIP HEADER:".$header."</td><td><a href='".$convertedfile."'/>XML CONVERTED</a></td></tr>";
    echo "<tr><td></td><td>GZIP TAIL:".$tail."</td><td></td></tr>";
    echo "<tr><td><textarea cols='30' rows='20'>" . $raw . "</textarea></td>";
    echo "<td><textarea cols='30' rows='20'>" . $decoded . "</textarea></td>";
    echo "<td><textarea cols='30' rows='20'>" . $decompressed . "</textarea></td></tr>";
    echo "<tr><th>Stage 6</th>";
    echo "<th>Stage 5</th>";
    echo "<th>Stage 4</th></tr>";
    echo "<tr><td>ENCODED DATA <-</td><td>COMPRESSED DATA <-</td><td>UNCOMPRESSED DATA <-</td></tr>";
    echo "<tr><td>LENGTH: ".strlen($encoded)."</td>";
    echo "<td>LENGTH: ".strlen($compressed)."</td>";
    echo "<td>LENGTH: ".strlen($decompressed)."</td></tr>";
    echo "<tr><td></td><td>GZIP HEADER:".$header2."</td><td></td></tr>";
    echo "<tr><td></td><td>GZIP TAIL:".$tail2."</td><td></td></tr>";
    echo "<tr><td><a href='".$finalfile."'/>FINAL FILE</a></td><td><a href='".$compressedfile."'/>RE-COMPRESSED FILE</a></td><td></td></tr>";
    echo "<tr><td><textarea cols='30' rows='20'>" . $encoded . "</textarea></td>";
    echo "<td><textarea cols='30' rows='20'>" . $compressed . "</textarea></td>";
    echo "<td><textarea cols='30' rows='20'>" . $decompressed . "</textarea></td></tr>";
    echo "</table>";
    echo "</center>";
}
echo "</body></html>";
?>

A: Different implementations of the gzip algorithm will always produce slightly different but still correct files; also, the compression level of the original file may be different than what you are running it at.

A: Interesting, I'll give it a shot. The variations aren't slight, however.
The newly encoded file is longer and, when comparing the binary of the before and after, the data hardly matches up at all.

Before (the first three lines):

H4sIAAAAAAAAC+19eZOiyNb3/34K3r4RT/WEU40ssvTtrhuIuKK44Bo3YoJdFAFZ3D79C6hVVhUq
dsnUVN/qmIkSOLlwlt/JPCfJ/PGf9dwAlorj6pb58wv0LfcFUEzJknVT+/ml2uXuCSJP3kNf/vOQ
+TEsFVkgoDfdn18mnmd/B8HVavWt5TsKI2vKN8magyENiH3Lf9kRfpd817PmF+jpiOhQRFZcXTMV

After (the first three lines):

H4sICJ/YnEgAAzEyNDQ2LTExNjk2NzUueGZkbC54bWwA7D1pU+JK19/9FV2+H5wpByEhJMRH
uVUgCMom4DBYt2oqkAZyDQlmQZ1f/3YSNqGzKT3oDH6RdE4vOXuf08vFP88TFcygYSq6dnlM
naWOAdQGuqxoo8vjSruRyGYzfII6/id3dPGjVKwCBK+Zl8djy5qeJ5NPT09nTduAojyCZwN9

As you can see, the first four characters (H4sI) match up, but after that it's pandemonium.

A: gzip will put the filename in the header of the file, so a gzipped file varies in length depending on the filename of the uncompressed file. If gzip acts on a stream instead, the filename is omitted and the file is a bit shorter, so the following should work:

gzip < yourform-unpacked.xml > yourform-unpacked.xml.gz

Then re-encode in base64:

base64 -e yourform-unpacked.xml.gz yourform_reencoded.xfdl

Maybe this will produce a file of the same length.
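Building on that answer: assuming a GNU gzip recent enough to support it, the -n (--no-name) flag omits both the original file name and the timestamp from the header, which removes the two fields most likely to differ between the original and re-encoded files:

gzip -n yourform-unpacked.xml
base64 -e yourform-unpacked.xml.gz yourform_reencoded.xfdl

Even with -n, byte-identical output is not guaranteed across gzip implementations or compression levels, as the first answer in this thread points out.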
{ "language": "en", "url": "https://stackoverflow.com/questions/6811", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Mapping my custom keys in Debian I have a Microsoft keyboard with a series of non-standard buttons such as "Mail", "Search", "Web/Home", etc. It would be nice to be able to bind these keys so they execute arbitrary programs. Does anybody know how to do this in Debian Etch?

A: I can't say for certain because I'm not using Debian, but if you're using GNOME the easiest way is to run gnome-keybinding-properties (System > Preferences > Keyboard Shortcuts). Instead of typing a shortcut such as Ctrl+M, hit the button on your keyboard. If you would prefer to do this via the command line or with a different desktop environment, this may help: Unusual keys and keyboards
{ "language": "en", "url": "https://stackoverflow.com/questions/6812", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Class file name must end with .class exception in Java Search I was hoping someone could help me out with a problem I'm having using the Java search function in Eclipse on a particular project. When using the Java search on one particular project, I get an error message saying Class file name must end with .class (see stack trace below). This does not seem to be happening on all projects, just one particular one, so perhaps there's something I should try to get rebuilt? I have already tried Project -> Clean... and closing Eclipse, deleting all the built class files, and restarting Eclipse, to no avail. The only reference I've been able to find on Google for the problem is at http://www.crazysquirrel.com/computing/java/eclipse/error-during-java-search.jspx, but unfortunately his solution (closing, deleting class files, restarting) did not work for me. If anyone can suggest something to try, or there's any more info I can gather which might help track it down, I'd greatly appreciate the pointers. Version: 3.4.0 Build id: I20080617-2000 Also just found this thread - http://www.myeclipseide.com/PNphpBB2-viewtopic-t-20067.html - which indicates the same problem may occur when the project name contains a period. Unfortunately, that's not the case in my setup, so I'm still stuck.

Caused by: java.lang.IllegalArgumentException: Class file name must end with .class
    at org.eclipse.jdt.internal.core.PackageFragment.getClassFile(PackageFragment.java:182)
    at org.eclipse.jdt.internal.core.util.HandleFactory.createOpenable(HandleFactory.java:109)
    at org.eclipse.jdt.internal.core.search.matching.MatchLocator.locateMatches(MatchLocator.java:1177)
    at org.eclipse.jdt.internal.core.search.JavaSearchParticipant.locateMatches(JavaSearchParticipant.java:94)
    at org.eclipse.jdt.internal.core.search.BasicSearchEngine.findMatches(BasicSearchEngine.java:223)
    at org.eclipse.jdt.internal.core.search.BasicSearchEngine.search(BasicSearchEngine.java:506)
    at org.eclipse.jdt.core.search.SearchEngine.search(SearchEngine.java:551)
    at org.eclipse.jdt.internal.corext.refactoring.RefactoringSearchEngine.internalSearch(RefactoringSearchEngine.java:142)
    at org.eclipse.jdt.internal.corext.refactoring.RefactoringSearchEngine.search(RefactoringSearchEngine.java:129)
    at org.eclipse.jdt.internal.corext.refactoring.rename.RenameTypeProcessor.initializeReferences(RenameTypeProcessor.java:594)
    at org.eclipse.jdt.internal.corext.refactoring.rename.RenameTypeProcessor.doCheckFinalConditions(RenameTypeProcessor.java:522)
    at org.eclipse.jdt.internal.corext.refactoring.rename.JavaRenameProcessor.checkFinalConditions(JavaRenameProcessor.java:45)
    at org.eclipse.ltk.core.refactoring.participants.ProcessorBasedRefactoring.checkFinalConditions(ProcessorBasedRefactoring.java:225)
    at org.eclipse.ltk.core.refactoring.Refactoring.checkAllConditions(Refactoring.java:160)
    at org.eclipse.jdt.internal.ui.refactoring.RefactoringExecutionHelper$Operation.run(RefactoringExecutionHelper.java:77)
    at org.eclipse.jdt.internal.core.BatchOperation.executeOperation(BatchOperation.java:39)
    at org.eclipse.jdt.internal.core.JavaModelOperation.run(JavaModelOperation.java:709)
    at org.eclipse.core.internal.resources.Workspace.run(Workspace.java:1800)
    at org.eclipse.jdt.core.JavaCore.run(JavaCore.java:4650)
    at org.eclipse.jdt.internal.ui.actions.WorkbenchRunnableAdapter.run(WorkbenchRunnableAdapter.java:92)
    at org.eclipse.jface.operation.ModalContext$ModalContextThread.run(ModalContext.java:121)

Thanks McDowell, closing and opening the project seems to have fixed it
least for now). A: Comment #9 to bug 269820 explains how to delete the search index, which appears to be the solution to a corrupt index whose symptoms are the dreaded An internal error occurred during: "Items filtering". Class file name must end with .class message box. How to delete the search index: * *Close Eclipse *Delete <workspace>/.metadata/.plugins/org.eclipse.jdt.core/*.index *Delete <workspace>/.metadata/.plugins/org.eclipse.jdt.core/savedIndexNames.txt *Start Eclipse again A: Got this error to the other day. Tried deleting the all .class-files and resources from my output folder manually. Didn't work. Restarted my computer (WinXP). Didn't work. Closed my project in Eclipse and opened it again. Worked!!! Hopes this solves someones problem out there. The search facilities and truly essential to Eclipse. A: Two more general-purpose mechanisms for fixing some of Eclipse's idiosyncrasies: * *Close and open the project *Delete the project (but not from disk!) and reimport it as an existing project Failing that, bugs.eclipse.org might provide the answer. If the workspace is caching something broken, you may be able to delete it by poking around in workspace/.metadata/.plugins. Most of that stuff is fairly transient (though backup and watch for deleted preferences). A: I also encountered this issue recently, the below scenario worked for me. * *Close Eclipse *Delete <workspace>/.metadata/.plugins/org.eclipse.jdt.core/*.index *Delete <workspace>/.metadata/.plugins/org.eclipse.jdt.core/savedIndexNames.txt *Start Eclipse again A: Closing the projects didn't do the trick for me. I started eclipse with the -clean flag and that worked for some reason. A: Just * *Close project *Clear manually output folder(s) *Open project (Eclipse 3.5 SR2, Build id: 20100218-1602)
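If you end up deleting the index often, it can be scripted. A minimal sketch, assuming a Unix-like shell and a workspace at ~/workspace; the path is an assumption you must adjust, and Eclipse has to be closed first:

# Hypothetical workspace location - substitute your own, and close Eclipse first.
WORKSPACE=~/workspace
rm "$WORKSPACE"/.metadata/.plugins/org.eclipse.jdt.core/*.index
rm "$WORKSPACE"/.metadata/.plugins/org.eclipse.jdt.core/savedIndexNames.txt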
{ "language": "en", "url": "https://stackoverflow.com/questions/6816", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "67" }
Q: Scaling multithreaded applications on multicored machines I'm working on a project where we need more performance. Over time we've continued to evolve the design to work more in parallel (both threaded and distributed). The latest step has been to move part of it onto a new machine with 16 cores. I'm finding that we need to rethink how we do things to scale to that many cores in a shared memory model. For example, the standard memory allocator isn't good enough. What resources would people recommend? So far I've found Sutter's column in Dr. Dobb's to be a good start. I just got The Art of Multiprocessor Programming and the O'Reilly book on Intel Threading Building Blocks.

A: A couple of other books that are going to be helpful are:
*Synchronization Algorithms and Concurrent Programming
*Patterns for Parallel Programming
*Communicating Sequential Processes by C. A. R. Hoare (a classic, free PDF at that link)
Also, consider relying less on sharing state between concurrent processes. You'll scale much, much better if you can avoid it, because you'll be able to parcel out independent units of work without having to do as much synchronization between them. Even if you need to share some state, see if you can partition the shared state from the actual processing. That will let you do as much of the processing in parallel as possible, independently from the integration of the completed units of work back into the shared state. Obviously this doesn't work if you have dependencies among units of work, but it's worth investigating instead of just assuming that the state is always going to be shared.

A: You might want to check out Google's Performance Tools. They've released their version of malloc that they use for multi-threaded applications. It also includes a nice set of profiling tools.

A: Jeffrey Richter is into threading a lot. He has a few chapters on threading in his books; also check out his blog: http://www.wintellect.com/cs/blogs/jeffreyr/default.aspx.

A: As Monty Python would say, "and now for something completely different" - you could try a language/environment that doesn't use threads, but processes and messaging (no shared state). One of the most mature ones is Erlang (and this excellent and fun book: http://www.pragprog.com/titles/jaerlang/programming-erlang). It may not be exactly relevant to your circumstances, but you can still learn a lot of ideas that you may be able to apply in other tools. For other environments: .Net has F# (to learn functional programming). The JVM has Scala (which has actors, very much like Erlang, and is a functional hybrid language). Also, there is the "fork join" framework from Doug Lea for Java which does a lot of the hard work for you.

A: The allocator in FreeBSD recently got an update for FreeBSD 7. The new one is called jemalloc and is apparently much more scalable with respect to multiple threads. You didn't mention which platform you are using, so perhaps this allocator is available to you. (I believe Firefox 3 uses jemalloc, even on Windows. So ports must exist somewhere.)

A: Take a look at Hoard if you are doing a lot of memory allocation. Roll your own lock-free list. A good resource is here - it's in C# but the ideas are portable. Once you get used to how they work, you start seeing other places where they can be used, and not just in lists.

A: I will have to check out Hoard, Google Perftools and jemalloc sometime. For now we are using scalable_malloc from Intel Threading Building Blocks and it performs well enough.
For better or worse, we're using C++ on Windows, though much of our code will compile with gcc just fine. Unless there's a compelling reason to move to Red Hat (the main Linux distro we use), I doubt it's worth the headache/political trouble to move. I would love to use Erlang, but there's way too much here to redo now. If we think about the requirements around the development of Erlang in a telco setting, they are very similar to our world (electronic trading). Armstrong's book is on my to-read stack :) In my testing to scale out from 4 cores to 16 cores I've learned to appreciate the cost of any locking/contention in the parallel portion of the code. Luckily we have a large portion that scales with the data, but even that didn't work at first because of an extra lock and the memory allocator.

A: I maintain a concurrency link blog that may be of ongoing interest: http://concurrency.tumblr.com
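To make the "partition the shared state from the processing" advice above concrete, here is a minimal sketch in C using pthreads (the poster is on Windows, so treat this purely as an illustration of the structure, not drop-in code; all the names and the workload are made up). Each worker owns a private slice of the data and a private accumulator, so the hot loop takes no locks at all; the only integration into shared state happens after the joins.

#include <pthread.h>
#include <stdio.h>

#define NUM_THREADS 4
#define N 1000000

static double data[N];

struct Slice {
    const double *begin;
    const double *end;
    double partial; /* thread-private result; no locking needed */
};

static void *sum_slice(void *arg)
{
    struct Slice *s = (struct Slice *)arg;
    double acc = 0.0;
    for (const double *p = s->begin; p != s->end; ++p)
        acc += *p;
    s->partial = acc; /* written only by this thread */
    return NULL;
}

int main(void)
{
    for (int i = 0; i < N; ++i)
        data[i] = 1.0;

    pthread_t threads[NUM_THREADS];
    struct Slice slices[NUM_THREADS];
    int chunk = N / NUM_THREADS;

    for (int i = 0; i < NUM_THREADS; ++i) {
        slices[i].begin = data + i * chunk;
        slices[i].end = (i == NUM_THREADS - 1) ? data + N
                                               : data + (i + 1) * chunk;
        slices[i].partial = 0.0;
        pthread_create(&threads[i], NULL, sum_slice, &slices[i]);
    }

    /* Integrate the completed units of work back into shared state
       only after all workers are done - no contention in the hot path. */
    double total = 0.0;
    for (int i = 0; i < NUM_THREADS; ++i) {
        pthread_join(threads[i], NULL);
        total += slices[i].partial;
    }
    printf("total = %f\n", total);
    return 0;
}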
{ "language": "en", "url": "https://stackoverflow.com/questions/6817", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Defensive programming When writing code, do you consciously program defensively to ensure high program quality and to avoid the possibility of your code being exploited maliciously, e.g. through buffer overflow exploits or code injection? What's the "minimum" level of quality you'll always apply to your code?

A: Similar to abyx, in the team I am on developers always use unit testing and code reviews. In addition to that, I also aim to make sure that I don't incorporate code that people may misuse - I tend to write code only for the basic set of methods required for the object at hand to function as has been spec'd out. I've found that incorporating methods that may never be used, but provide functionality, can unintentionally introduce a "backdoor" or unintended/unanticipated use into the system. It's much easier to go back later and introduce methods, attributes, and properties that are asked for, versus anticipating something that may never come.

A: In my line of work, our code has to be top quality. So, we focus on two main things:
*Testing
*Code reviews
Those bring home the money.

A: I'd recommend being defensive for data that enters a "component" or framework. Within a "component" or framework, one should be able to assume that the data is "correct". Thinking like this, it is up to the caller to supply correct parameters; otherwise ALL functions and methods would have to check every incoming parameter. If the check is only done at the boundary, it only needs to be done once. So a parameter can be assumed "correct" and passed through to lower levels.
*Always check data from external sources, users, etc.
*A "component" or framework should always check incoming calls.
If there is a bug and a wrong value is used in a call, what is really the right thing to do? One only has an indication that the "data" the program is working on is wrong. Some like ASSERTs, but others want to use advanced error reporting and possibly error recovery. In any case, the data has been found to be faulty, and only in a few cases is it good to continue working on it. (Note: it's good if servers at least don't die.) An image sent from a satellite might be a case to try advanced error recovery on... an image downloaded from the internet might just get an error icon...

A: I recommend people write code that is fascist in the development environment and benevolent in production. During development you want to catch bad data/logic/code as early as possible to prevent problems either going unnoticed or resulting in later problems where the root cause is hard to track. In production, handle problems as gracefully as possible. If something really is a non-recoverable error then handle it and present that information to the user. As an example, here's our code to normalize a vector. If you feed it bad data in development it will scream; in production it returns a safety value.

inline const Vector3 Normalize( Vector3arg vec )
{
    const float len = Length(vec);
    ASSERTMSG(len > 0.0f, "Invalid Normalization");
    return len == 0.0f ? vec : vec / len;
}

A: I always work to prevent things like injection attacks. However, when you work on an internal intranet site, most of the security features feel like wasted effort. I still do them, maybe just not as well.

A: Well, there is a certain set of best practices for security. At a minimum, for database applications, you need to watch out for SQL injection. Other stuff like hashing passwords, encrypting connection strings, etc. is also standard. From here on, it depends on the actual application.
Luckily, if you are working with frameworks such as .Net, a lot of security protection comes built-in.

A: I would say you always have to program defensively, even for internal apps, simply because users could, through sheer luck, write something that breaks your app. Granted, you probably don't have to worry about anyone trying to cheat you out of money, but still. Always program defensively and assume the app will fail.

A: Using Test Driven Development certainly helps. You write a single component at a time and then enumerate all of the potential cases for inputs (via tests) before writing the code. This ensures that you've covered all bases and haven't written any cool code that no-one will use but might break. Although I don't do anything formal, I generally spend some time looking at each class and ensuring that:
*if they are in a valid state, they stay in a valid state
*there is no way to construct them in an invalid state
*under exceptional circumstances they will fail as gracefully as possible (frequently this is a cleanup and throw)

A: It depends. If I am genuinely hacking something up for my own use then I will write the best code that I don't have to think about. Let the compiler be my friend for warnings etc., but I won't automatically create types for the hell of it. The more likely the code is to be used, even occasionally, the more I ramp up the level of checks:
*minimal magic numbers
*better variable names
*fully checked & defined array/string lengths
*programming-by-contract assertions
*null value checks
*exceptions (depending upon the context of the code)
*basic explanatory comments
*accessible usage documentation (if Perl etc.)

A: I'll take a different definition of defensive programming: the one that's advocated by Effective Java by Josh Bloch. In the book, he talks about how to handle mutable objects that callers pass to your code (e.g., in setters), and mutable objects that you pass to callers (e.g., in getters).
*For setters, make sure to clone any mutable objects, and store the clone. This way, callers cannot change the passed-in object after the fact to break your program's invariants.
*For getters, either return an immutable view of your internal data, if the interface allows it, or else return a clone of the internal data.
*When calling user-supplied callbacks with internal data, send in an immutable view or clone, as appropriate, unless you intend the callback to alter the data, in which case you have to validate it after the fact.
The take-home message is to make sure no outside code can hold an alias to any mutable objects that you use internally, so that you can maintain your invariants. (A small Java sketch of this idea appears at the end of this thread.)

A: I am very much of the opinion that correct programming will protect against these risks. Things like avoiding deprecated functions, which (in the Microsoft C++ libraries at least) are commonly deprecated because of security vulnerabilities, and validating everything that crosses an external boundary. Functions that are only called from your code should not require excessive parameter validation because you control the caller; that is, no external boundary is crossed. Functions called by other people's code should assume that the incoming parameters will be invalid and/or malicious at some point. My approach to dealing with exposed functions is to simply crash out, with a helpful message if possible. If the caller can't get the parameters right then the problem is in their code and they should fix it, not you.
(Obviously you have provided documentation for your function, since it is exposed.) Code injection is only an issue if your application is able to elevate the current user. If a process can inject code into your application then it could easily write the code to memory and execute it anyway. Without being able to gain full access to the system, code injection attacks are pointless. (This is why applications used by administrators should not be writeable by lesser users.)

A: In my experience, positively employing defensive programming does not necessarily mean that you end up improving the quality of your code. Don't get me wrong, you need to defensively program to catch the kinds of problems that users will come across - users don't like it when your program crashes on them - but this is unlikely to make the code any easier to maintain, test, etc. Several years ago, we made it policy to use assertions at all levels of our software, and this - along with unit testing, code reviews, etc. plus our existing application test suites - had a significant, positive effect on the quality of our code.

A: Java, signed JARs and JAAS. Java to prevent buffer overflow and pointer/stack whacking exploits. Don't use JNI (Java Native Interface); it exposes you to DLLs/shared libraries. Signed JARs to stop class loading being a security problem. JAAS can let your application not trust anyone, even itself. J2EE has (admittedly limited) built-in support for role-based security. There is some overhead for some of this, but the security holes go away.

A: Simple answer: It depends. Too much defensive coding can cause major performance issues.
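Picking up the Effective Java answer above, here is a minimal Java sketch of the defensive-copying idea. Period is the book's classic example, reconstructed from memory rather than quoted, so treat the details as illustrative:

import java.util.Date;

public final class Period {
    private final Date start;
    private final Date end;

    public Period(Date start, Date end) {
        // Defensive copies first, then validate the copies: the caller
        // cannot mutate our state later through the references it passed in,
        // and cannot race us between the check and the store.
        this.start = new Date(start.getTime());
        this.end = new Date(end.getTime());
        if (this.start.after(this.end)) {
            throw new IllegalArgumentException(start + " after " + end);
        }
    }

    // Getters hand out copies too, so no alias to internal
    // mutable state ever escapes.
    public Date getStart() { return new Date(start.getTime()); }
    public Date getEnd()   { return new Date(end.getTime()); }
}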
{ "language": "en", "url": "https://stackoverflow.com/questions/6847", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: Developing addins for World of Warcraft - Getting started? As a long time World of Warcraft player, and a passionate developer, I have decided that I would like to combine the two and set about developing some addins. Not only to improve my gameplay experience, but as a great opportunity to learn something new. Does anyone have any advice on how to go about starting out? Is there an IDE one can use? How does one go about testing? Are there any ready made libraries available? Or would I get a better learning experience by ignoring the libraries and building from scratch? How do I oneshot Hogger? Would love to hear your advice, experiences and views.

A: I learned the art of add-ons primarily by looking at the code of Blizzard's UI. You can see that code by extracting the default UI or finding a copy of the default UI online. Add-on developers sometimes like to over-engineer their pet projects (who doesn't?), while Blizzard's code is usually pretty no-nonsense and straightforward. In addition, Programming in Lua is a pretty useful (if slightly out-of-date) reference for the actual Lua language.

A: The best way to start is with the book World of Warcraft Programming. It covers Lua, XML, WarcraftAddOnStudio and the WoW API. The book also has sections on best practices and avoiding common mistakes.

A: This article explains how to start pretty well. Your first bookmark is possibly the US Interface Forum, especially the stickies there: http://us.battle.net/wow/en/forum/1011693/ Then, grab some simple addons to learn how XML and Lua interact. The WoWWiki HOWTO list is a good point here as well. One important thing to keep in mind: World of Warcraft is available in many languages. If you have an EU account, you've got an excellent testing bed by simply downloading the language packs for Spanish, German and French. If you're a US guy, check if you can get the Latin America version. That way, you can test it against another language version. Once you've made 1 or 2 really small and simple addons just to learn how everything fits together, have a look at the various frameworks. WowAce is a popular one, but there are others. Just keep one thing in mind: making an addon is work. Maintaining one is even more work. With each new patch there may be breaking changes, and the next big patch will surely cause a big exodus of addons, just like Patch 2.0.1 did.

A: Another useful tool you might like is WarcraftAddOnStudio, which lets you make plugins in the Visual Studio environment.
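For a feel of how small a starter addon can be, here is a hedged sketch of a "hello world" addon: two files in a made-up folder Interface\AddOns\HelloWorld. The names are assumptions, and the event-handler signature shown is the modern argument-passing style (very old clients passed arguments through globals instead). First the HelloWorld.toc manifest:

## Interface: 20400
## Title: HelloWorld
## Notes: Minimal example addon
HelloWorld.lua

And then HelloWorld.lua:

-- Print a message once our addon has finished loading.
local frame = CreateFrame("Frame")
frame:RegisterEvent("ADDON_LOADED")
frame:SetScript("OnEvent", function(self, event, addonName)
    if addonName == "HelloWorld" then
        DEFAULT_CHAT_FRAME:AddMessage("HelloWorld loaded!")
    end
end)

From there, testing is just a matter of copying the folder into Interface\AddOns and typing /console reloadui (or relogging) after each change.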
{ "language": "en", "url": "https://stackoverflow.com/questions/6859", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18" }
Q: How to wait for thread complete before continuing? I have some code for starting a thread on the .NET CF 2.0:

ThreadStart tStart = new ThreadStart(MyMethod);
Thread t = new Thread(tStart);
t.Start();

If I call this inside a loop, the items complete out of order. How do I introduce a wait after t.Start(), so that the work on the thread completes before the code continues? Will BeginInvoke/EndInvoke be a better option for this than manually creating threads?

A: Another way of waiting for a thread to finish is using an AutoResetEvent.

private readonly AutoResetEvent mWaitForThread = new AutoResetEvent(false);

private void Blah()
{
    ThreadStart tStart = new ThreadStart(MyMethod);
    Thread t = new Thread(tStart);
    t.Start();
    //... (any other things)
    mWaitForThread.WaitOne();
}

private void MyMethod()
{
    //... (execute any other action)
    mWaitForThread.Set();
}

A: If I'm reading this properly, you're starting work items on a bunch of threads but you're concerned that the threads are completing out of order, and would prefer to wait until each thread completes in the order they were started? If this is the case then I'm not sure why you're creating a thread per work item. Maybe you could expand? If you did need to wait for a thread to complete then you could just do:

t.Join();

A: How much order do you need to impose on the threads? If you just need all of the work started in the loop to finish before the code continues, but you don't care about the order the work within the loop finishes, then calling Join is the answer. To add more detail to Kevin Kenny's answer, you should call Join outside the loop. This means you will need a collection to hold references to the threads you started:

// Start all of the threads.
List<Thread> startedThreads = new List<Thread>();
foreach (...)
{
    Thread thread = new Thread(new ThreadStart(MyMethod));
    thread.Start();
    startedThreads.Add(thread);
}

// Wait for all of the threads to finish.
foreach (Thread thread in startedThreads)
{
    thread.Join();
}

In contrast, if you called Join inside the loop, the result would basically be the same as not using threads at all. Each iteration of the loop body would create and start a thread but then immediately Join it and wait for it to finish. If the individual threads produce some result (write a message in a log, for example) then the messages may still appear out of order because there's no coordination between the threads. It is possible to get the threads to output their results in order by coordinating them with a Monitor.
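To illustrate that last point, here is a hedged C# sketch of ordering output with a Monitor. It is not the poster's code, it sticks to C# 2.0 constructs, and the names are made up: the work runs in parallel, but each worker waits until a shared counter says it is its turn to report, so results come out in start order.

using System;
using System.Collections.Generic;
using System.Threading;

class OrderedOutput
{
    static readonly object Gate = new object();
    static int _turn; // index of the thread allowed to report next

    static void Worker(int index)
    {
        string result = "result of item " + index; // the real work happens here

        lock (Gate)
        {
            // Wait until every earlier thread has reported.
            // Monitor.Wait releases the lock while blocked.
            while (_turn != index)
                Monitor.Wait(Gate);

            Console.WriteLine(result);
            _turn++;
            Monitor.PulseAll(Gate); // wake the others to re-check the turn
        }
    }

    static void Main()
    {
        List<Thread> threads = new List<Thread>();
        for (int i = 0; i < 5; i++)
        {
            int index = i; // copy so each delegate captures its own value
            Thread t = new Thread(delegate() { Worker(index); });
            t.Start();
            threads.Add(t);
        }
        foreach (Thread t in threads)
            t.Join();
    }
}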
{ "language": "en", "url": "https://stackoverflow.com/questions/6890", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13" }
Q: Performance Considerations for throwing Exceptions I have come across the following type of code many a time, and I wonder if this is a good practice (from a performance perspective) or not:

try
{
    ... // some code
}
catch (Exception ex)
{
    ... // Do something
    throw new CustomException(ex);
}

Basically, what the coder is doing is encompassing the exception in a custom exception and throwing that again. How does this differ in performance from the following two:

try
{
    ... // some code
}
catch (Exception ex)
{
    .. // Do something
    throw ex;
}

or

try
{
    ... // some code
}
catch (Exception ex)
{
    .. // Do something
    throw;
}

Putting aside any functional or coding best practice arguments, is there any performance difference between the 3 approaches?

A: Like David, I suppose that the second and third perform better. But would any one of the three perform poorly enough to spend any time worrying about it? I think there are larger problems than performance to worry about. FxCop always recommends the third approach over the second so that the original stack trace is not lost. Edit: Removed stuff that was just plain wrong and Mike was kind enough to point out.

A: Obviously you incur the penalty of creating new objects (the new Exception), so, exactly as with every line of code that you append to your program, you must decide if the better categorization of exceptions pays for the extra work. As a piece of advice to make that decision: if your new objects are not carrying extra information about the exception, then you can forget about constructing new exceptions. However, in other circumstances, having a hierarchy of exceptions is very convenient for the user of your classes. Suppose you're implementing the Facade pattern; neither of the scenarios considered so far is good:
*it is not good to raise every exception as a plain Exception object, because you're (probably) losing valuable information
*nor is it good to raise every kind of object that you catch, because doing so you're failing at creating the facade
In this hypothetical case, the better thing to do is to create a hierarchy of exception classes that, abstracting your users from the inner complexities of the system, allows them to know something about the kind of exception produced. As a side note: I personally dislike the use of exceptions (hierarchies of classes derived from the Exception class) to implement logic, as in this case:

try
{
    // something that will raise an exception almost half the time
}
catch( InsufficientFunds e)
{
    // Inform the customer is broke
}
catch( UnknownAccount e )
{
    // Ask for a new account number
}

A: Don't do:

try
{
    // some code
}
catch (Exception ex)
{
    throw ex;
}

As this will lose the stack trace. Instead do:

try
{
    // some code
}
catch (Exception ex)
{
    throw;
}

Just the throw will do; you only need to pass the exception variable if you want it to be the inner exception on a new custom exception.

A: @Brad Tutterow The exception is not being lost in the first case, it is being passed in to the constructor. I will agree with you on the rest though; the second approach is a very bad idea because of the loss of stack trace. When I worked with .NET, I ran into many cases where other programmers did just that, and it frustrated me to no end when I needed to see the true cause of an exception, only to find it being rethrown from a huge try block where I now have no idea where the problem originated. I also second Brad's comment that you shouldn't worry about the performance.
This kind of micro optimization is a HORRIBLE idea. Unless you are talking about throwing an exception in every iteration of a for loop that is running for a long time, you will more than likely not run into performance issues by way of your exception usage. Always optimize performance when you have metrics that indicate you NEED to optimize performance, and then hit the spots that are proven to be the culprit. It is much better to have readable code with easy debugging capabilities (i.e. not hiding the stack trace) than to make something run a nanosecond faster. A final note about wrapping exceptions into a custom exception... this can be a very useful construct, especially when dealing with UIs. You can wrap every known and reasonable exceptional case into some base custom exception (or one that extends from said base exception), and then the UI can just catch this base exception. When caught, the exception will need to provide a means of displaying information to the user, say a ReadableMessage property, or something along those lines. Thus, any time the UI misses an exception, it is because of a bug you need to fix, and anytime it catches an exception, it is a known error condition that can and should be handled properly by the UI.

A: As others have stated, the best performance comes from the bottom one, since you are just rethrowing an existing object. The middle one is least correct because it loses the stack trace. I personally use custom exceptions if I want to decouple certain dependencies in code. For example, I have a method that loads data from an XML file. This can go wrong in many different ways. It could fail to read from the disk (FileIOException), the user could try to access it from somewhere where they are not allowed (SecurityException), the file could be corrupt (XmlParseException), or the data could be in the wrong format (DeserialisationException). In this case, so that it's easier for the calling class to make sense of all this, all these exceptions are rethrown as a single custom exception (FileOperationException), which means the caller does not need references to System.IO or System.Xml, but can still access what error occurred through an enum and any important information. As stated, don't try to micro-optimize something like this; the act of throwing an exception at all is the slowest thing that occurs here. The best improvement to make is to try avoiding an exception at all.

public bool Load(string filepath)
{
    if (File.Exists(filepath)) //Avoid throwing by checking state
    {
        //Wrap anyway in case something changes between check and operation
        try { .... }
        catch (IOException ioFault) { .... }
        catch (OtherException otherFault) { .... }
        return true; //Inform caller of success
    }
    else
    {
        return false; //Inform caller of failure due to state
    }
}

A: The throw in your first example has the overhead of the creation of a new CustomException object. The re-throw in your second example will throw an exception of type Exception. The re-throw in your third example will throw an exception of the same type that was thrown by your "some code". So the second and third examples use fewer resources.

A: From a purely performance standpoint I'd guess that the third case is most performant. The other two need to extract a stack trace and construct new objects, both of which are potentially fairly time-consuming. Having said that, these three blocks of code have very different (external) behaviors, so comparing them is like asking whether QuickSort is more efficient than adding an item to a red-black tree.
It's not as important as selecting the right thing to do.

A: Wait.... why do we care about performance if an exception is thrown? Unless we're using exceptions as part of normal application flow (which is WAYYYY against best practise). I've only seen performance requirements in regards to success, but never in regards to failure.
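To see the three variants side by side, here is a small hedged C# sketch. CustomException is assumed to follow the usual pattern of forwarding the inner exception to the base constructor, which is what keeps the original stack trace reachable through InnerException:

using System;

public class CustomException : Exception
{
    // Forwarding the inner exception preserves the original
    // failure as InnerException on the new object.
    public CustomException(string message, Exception inner)
        : base(message, inner) { }
}

public static class Demo
{
    public static void Main()
    {
        try
        {
            try
            {
                throw new InvalidOperationException("original failure");
            }
            catch (Exception ex)
            {
                // Variant 1: wrap. New stack trace for the wrapper, but the
                // original trace survives inside InnerException.
                throw new CustomException("while doing work", ex);

                // Variant 2 (don't): "throw ex;" would reset the trace.
                // Variant 3: "throw;" rethrows with the trace intact.
            }
        }
        catch (CustomException ex)
        {
            Console.WriteLine(ex);                // outer trace
            Console.WriteLine(ex.InnerException); // original trace
        }
    }
}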
{ "language": "en", "url": "https://stackoverflow.com/questions/6891", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: How to create a SQL Server function to "join" multiple rows from a subquery into a single delimited field? To illustrate, assume that I have two tables as follows:

VehicleID  Name
1          Chuck
2          Larry

LocationID  VehicleID  City
1           1          New York
2           1          Seattle
3           1          Vancouver
4           2          Los Angeles
5           2          Houston

I want to write a query to return the following results:

VehicleID  Name   Locations
1          Chuck  New York, Seattle, Vancouver
2          Larry  Los Angeles, Houston

I know that this can be done using server side cursors, i.e.:

DECLARE @VehicleID int
DECLARE @VehicleName varchar(100)
DECLARE @LocationCity varchar(100)
DECLARE @Locations varchar(4000)

DECLARE @Results TABLE
(
    VehicleID int,
    Name varchar(100),
    Locations varchar(4000)
)

DECLARE VehiclesCursor CURSOR FOR
SELECT [VehicleID], [Name]
FROM [Vehicles]

OPEN VehiclesCursor
FETCH NEXT FROM VehiclesCursor INTO @VehicleID, @VehicleName
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @Locations = ''

    DECLARE LocationsCursor CURSOR FOR
    SELECT [City]
    FROM [Locations]
    WHERE [VehicleID] = @VehicleID

    OPEN LocationsCursor
    FETCH NEXT FROM LocationsCursor INTO @LocationCity
    WHILE @@FETCH_STATUS = 0
    BEGIN
        SET @Locations = @Locations +
            CASE WHEN @Locations = '' THEN '' ELSE ', ' END + @LocationCity
        FETCH NEXT FROM LocationsCursor INTO @LocationCity
    END
    CLOSE LocationsCursor
    DEALLOCATE LocationsCursor

    INSERT INTO @Results (VehicleID, Name, Locations)
    SELECT @VehicleID, @VehicleName, @Locations

    FETCH NEXT FROM VehiclesCursor INTO @VehicleID, @VehicleName
END
CLOSE VehiclesCursor
DEALLOCATE VehiclesCursor

SELECT * FROM @Results

However, as you can see, this requires a great deal of code. What I would like is a generic function that would allow me to do something like this:

SELECT VehicleID,
       Name,
       JOIN(SELECT City FROM Locations WHERE VehicleID = Vehicles.VehicleID, ', ') AS Locations
FROM Vehicles

Is this possible? Or something similar?

A: Note that Matt's code will result in an extra comma at the end of the string; using COALESCE (or ISNULL for that matter) as shown in the link in Lance's post uses a similar method but doesn't leave you with an extra comma to remove. For the sake of completeness, here's the relevant code from Lance's link on sqlteam.com:

DECLARE @EmployeeList varchar(100)

SELECT @EmployeeList = COALESCE(@EmployeeList + ', ', '') + CAST(EmpUniqueID AS varchar(5))
FROM SalesCallsEmployees
WHERE SalCal_UniqueID = 1

A: I've found a solution by creating the following function:

CREATE FUNCTION [dbo].[JoinTexts]
(
    @delimiter VARCHAR(20),
    @whereClause VARCHAR(1)
)
RETURNS VARCHAR(MAX)
AS
BEGIN
    DECLARE @Texts VARCHAR(MAX)

    SELECT @Texts = COALESCE(@Texts + @delimiter, '') + T.Texto
    FROM SomeTable AS T
    WHERE T.SomeOtherColumn = @whereClause

    RETURN @Texts
END
GO

Usage:

SELECT dbo.JoinTexts(' , ', 'Y')

A: I don't believe there's a way to do it within one query, but you can play tricks like this with a temporary variable:

declare @s varchar(max)
set @s = ''
select @s = @s + City + ',' from Locations
select @s

It's definitely less code than walking over a cursor, and probably more efficient.

A: Mun's answer didn't work for me, so I made some changes to that answer to get it to work. Hope this helps someone. Using SQL Server 2012:

SELECT [VehicleID],
       [Name],
       STUFF((SELECT DISTINCT ',' + CONVERT(VARCHAR, City)
              FROM [Location]
              WHERE (VehicleID = Vehicle.VehicleID)
              FOR XML PATH ('')), 1, 2, '') AS Locations
FROM [Vehicle]

A: If you're using SQL Server 2005, you could use the FOR XML PATH command.
SELECT [VehicleID],
       [Name],
       (STUFF((SELECT CAST(', ' + [City] AS VARCHAR(MAX))
               FROM [Location]
               WHERE (VehicleID = Vehicle.VehicleID)
               FOR XML PATH ('')), 1, 2, '')) AS Locations
FROM [Vehicle]

It's a lot easier than using a cursor, and seems to work fairly well. Update: For anyone still using this method with newer versions of SQL Server, there is another way of doing it which is a bit easier and more performant, using the STRING_AGG method that has been available since SQL Server 2017.

SELECT [VehicleID],
       [Name],
       (SELECT STRING_AGG([City], ', ')
        FROM [Location]
        WHERE VehicleID = V.VehicleID) AS Locations
FROM [Vehicle] V

This also allows a different separator to be specified as the second parameter, providing a little more flexibility over the former method.

A: In a single SQL query, without using the FOR XML clause. A Common Table Expression is used to recursively concatenate the results.

-- rank locations by incrementing lexicographical order
WITH RankedLocations AS (
    SELECT VehicleID,
           City,
           ROW_NUMBER() OVER (PARTITION BY VehicleID ORDER BY City) Rank
    FROM Locations
),
-- concatenate locations using a recursive query
-- (Common Table Expression)
Concatenations AS (
    -- for each vehicle, select the first location
    SELECT VehicleID,
           CONVERT(nvarchar(MAX), City) Cities,
           Rank
    FROM RankedLocations
    WHERE Rank = 1
    -- then incrementally concatenate with the next location;
    -- this will return intermediate concatenations that will be
    -- filtered out later on
    UNION ALL
    SELECT c.VehicleID,
           (c.Cities + ', ' + l.City) Cities,
           l.Rank
    FROM Concatenations c -- this is a recursion!
    INNER JOIN RankedLocations l
        ON l.VehicleID = c.VehicleID
        AND l.Rank = c.Rank + 1
),
-- rank concatenation results by decrementing length
-- (rank 1 will always be for the longest concatenation)
RankedConcatenations AS (
    SELECT VehicleID,
           Cities,
           ROW_NUMBER() OVER (PARTITION BY VehicleID ORDER BY Rank DESC) Rank
    FROM Concatenations
)
-- main query
SELECT v.VehicleID, v.Name, c.Cities
FROM Vehicles v
INNER JOIN RankedConcatenations c
    ON c.VehicleID = v.VehicleID
    AND c.Rank = 1

A: From what I can see, FOR XML (as posted earlier) is the only way to do it if you want to also select other columns (which I'd guess most would), as the OP does. Using COALESCE(@var...) does not allow inclusion of other columns. Update: Thanks to programmingsolutions.net, there is a way to remove the "trailing" comma too. By making it into a leading comma and using the STUFF function of MSSQL, you can replace the first character (the leading comma) with an empty string, as below:

stuff(
    (select ',' + Column
     from Table inner
     where inner.Id = outer.Id
     for xml path('')
    ), 1, 1, '') as Values

A: In SQL Server 2005:

SELECT Stuff(
    (SELECT N', ' + Name
     FROM Names
     FOR XML PATH(''), TYPE)
    .value('text()[1]', 'nvarchar(max)'), 1, 2, N'')

In SQL Server 2016 you can use the FOR JSON syntax, i.e.:

SELECT per.ID,
       Emails = JSON_VALUE(
           REPLACE(
               (SELECT _ = em.Email
                FROM Email em
                WHERE em.Person = per.ID
                FOR JSON PATH),
               '"},{"_":"', ', '),
           '$[0]._')
FROM Person per

And the result will become:

Id  Emails
1   abc@gmail.com
2   NULL
3   def@gmail.com, xyz@gmail.com

This will work even if your data contains invalid XML characters. The '"},{"_":"' separator is safe because if your data contains '"},{"_":"', it will be escaped to "},{\"_\":\". You can replace ', ' with any string separator. And in SQL Server 2017 and Azure SQL Database you can use the new STRING_AGG function.

A: VERSION NOTE: You must be using SQL Server 2005 or greater with Compatibility Level set to 90 or greater for this solution.
See this MSDN article for the first example of creating a user-defined aggregate function that concatenates a set of string values taken from a column in a table. My humble recommendation would be to leave out the appended comma so you can use your own ad-hoc delimiter, if any. Referring to the C# version of Example 1, change:

this.intermediateResult.Append(value.Value).Append(',');

to:

this.intermediateResult.Append(value.Value);

And change:

output = this.intermediateResult.ToString(0, this.intermediateResult.Length - 1);

to:

output = this.intermediateResult.ToString();

That way when you use your custom aggregate, you can opt to use your own delimiter, or none at all, such as:

SELECT dbo.CONCATENATE(column1 + '|') from table1

NOTE: Be careful about the amount of data you attempt to process in your aggregate. If you try to concatenate thousands of rows or many very large datatypes you may get a .NET Framework error stating "[t]he buffer is insufficient."

A: With the other answers, the person reading the answer must be aware of the vehicle table and create the vehicle table and data to test a solution. Below is an example that uses SQL Server's "Information_Schema.Columns" table. By using this solution, no tables need to be created or data added. This example creates a comma separated list of column names for all tables in the database.

SELECT Table_Name,
       STUFF((SELECT ',' + Column_Name
              FROM INFORMATION_SCHEMA.Columns Columns
              WHERE Tables.Table_Name = Columns.Table_Name
              ORDER BY Column_Name
              FOR XML PATH ('')), 1, 1, '') Columns
FROM INFORMATION_SCHEMA.Columns Tables
GROUP BY TABLE_NAME

A: The below code will work for SQL Server 2000/2005/2008:

CREATE FUNCTION fnConcatVehicleCities(@VehicleId SMALLINT)
RETURNS VARCHAR(1000)
AS
BEGIN
    DECLARE @csvCities VARCHAR(1000)

    SELECT @csvCities = COALESCE(@csvCities + ', ', '') + COALESCE(City, '')
    FROM Locations
    WHERE VehicleId = @VehicleId

    RETURN @csvCities
END

-- Once the user-defined function is created, run the below SQL:
SELECT VehicleID, dbo.fnConcatVehicleCities(VehicleId) AS Locations
FROM Vehicles
GROUP BY VehicleID

A: If you're running SQL Server 2005, you can write a custom CLR aggregate function to handle this. C# version:

using System;
using System.Data;
using System.Data.SqlClient;
using System.Data.SqlTypes;
using System.Text;
using Microsoft.SqlServer.Server;

[Serializable]
[Microsoft.SqlServer.Server.SqlUserDefinedAggregate(Format.UserDefined, MaxByteSize = 8000)]
public class CSV : IBinarySerialize
{
    private StringBuilder Result;

    public void Init()
    {
        this.Result = new StringBuilder();
    }

    public void Accumulate(SqlString Value)
    {
        if (Value.IsNull) return;
        this.Result.Append(Value.Value).Append(",");
    }

    public void Merge(CSV Group)
    {
        this.Result.Append(Group.Result);
    }

    public SqlString Terminate()
    {
        return new SqlString(this.Result.ToString());
    }

    public void Read(System.IO.BinaryReader r)
    {
        this.Result = new StringBuilder(r.ReadString());
    }

    public void Write(System.IO.BinaryWriter w)
    {
        w.Write(this.Result.ToString());
    }
}

A: Try this query:

SELECT v.VehicleId, v.Name, ll.LocationList
FROM Vehicles v
LEFT JOIN (SELECT DISTINCT VehicleId,
                  REPLACE(
                      REPLACE(
                          REPLACE(
                              (SELECT City AS c
                               FROM Locations x
                               WHERE x.VehicleID = l.VehicleID
                               FOR XML PATH('')),
                              '</c><c>', ', '),
                          '<c>', ''),
                      '</c>', '') AS LocationList
           FROM Locations l) ll
    ON ll.VehicleId = v.VehicleId
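As a usage note on the STRING_AGG approach mentioned earlier (SQL Server 2017+ only): the correlated subquery can also be written as a plain join plus GROUP BY, which some find easier to read. A sketch against the question's example tables, untested:

SELECT v.VehicleID,
       v.Name,
       STRING_AGG(l.City, ', ') WITHIN GROUP (ORDER BY l.City) AS Locations
FROM Vehicles v
LEFT JOIN Locations l
    ON l.VehicleID = v.VehicleID
GROUP BY v.VehicleID, v.Name

STRING_AGG ignores NULLs, so vehicles with no locations come back with a NULL Locations column rather than being dropped.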
{ "language": "en", "url": "https://stackoverflow.com/questions/6899", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "210" }
Q: Getting DirectoryNotFoundException when trying to Connect to Device with CoreCon API I'm trying to use the CoreCon API in Visual Studio 2008 to programmatically launch device emulators. When I call device.Connect(), I inexplicably get a DirectoryNotFoundException. I get it if I try it in PowerShell or in a C# console application. Here's the code I'm using:

static void Main(string[] args)
{
    DatastoreManager dm = new DatastoreManager(1033);
    Collection<Platform> platforms = dm.GetPlatforms();

    foreach (var p in platforms)
    {
        Console.WriteLine("{0} {1}", p.Name, p.Id);
    }

    Platform platform = platforms[3];
    Console.WriteLine("Selected {0}", platform.Name);

    Device device = platform.GetDevices()[0];
    device.Connect();
    Console.WriteLine("Device Connected");

    SystemInfo info = device.GetSystemInfo();
    Console.WriteLine("System OS Version:{0}.{1}.{2}", info.OSMajor, info.OSMinor, info.OSBuildNo);

    Console.ReadLine();
}

Does anyone know why I'm getting this error? I'm running this on WinXP 32-bit, plain jane Visual Studio 2008 Pro. I imagine it's some config issue since I can't do it from a console app or PowerShell. Here's the stack trace:

System.IO.DirectoryNotFoundException was unhandled
Message="The system cannot find the path specified.\r\n"
Source="Device Connection Manager"
StackTrace:
    at Microsoft.VisualStudio.DeviceConnectivity.Interop.ConManServerClass.ConnectDevice()
    at Microsoft.SmartDevice.Connectivity.Device.Connect()
    at ConsoleApplication1.Program.Main(String[] args) in C:\Documents and Settings\Thomas\Local Settings\Application Data\Temporary Projects\ConsoleApplication1\Program.cs:line 23
    at System.AppDomain._nExecuteAssembly(Assembly assembly, String[] args)
    at System.AppDomain.ExecuteAssembly(String assemblyFile, Evidence assemblySecurity, String[] args)
    at Microsoft.VisualStudio.HostingProcess.HostProc.RunUsersAssembly()
    at System.Threading.ThreadHelper.ThreadStart_Context(Object state)
    at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
    at System.Threading.ThreadHelper.ThreadStart()

A: Installing VS 2008 SP1 fixed it for me.

A: It can be found at <systemdrive>:\Program Files\Common Files\Microsoft Shared\CoreCon\1.0\Bin. This is the path where you can get this DLL, so add this DLL to your project.

A: I tried this and it works OK. Can you paste in the whole exception and stack trace? Updated: Strangely, I can't find that interop assembly on my machine either, other than under the c:\windows\assembly\GAC_MSIL folders. Why not fire up SysInternals FileMon or Process Monitor? It'd save some guesswork.

A: I suspect there is a problem with my Microsoft.VisualStudio.DeviceConnectivity.Interop assembly. There is no copy of that on disk that I can find. It's in the GAC only. I tried to inspect it in Reflector, but it needs that Interop assembly also. Since ConManServerClass is obviously COM, maybe there's a COM library that has to be registered?
{ "language": "en", "url": "https://stackoverflow.com/questions/6904", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: How do I best share an embeddable form in VB6? Is there a good way to create a form in VB6 that can easily be embedded inside other forms? On a few occasions recently, I've wanted to design and code a Form object that I could plug into several other "parent" forms. My goal is to create a centralized piece of code for managing several UI components in a particular way, and then be able to use that (both the UI layout and the logic) in more than one place. I'm certainly willing to use code (rather than the Design View) to load the child form. The best I've come up with so far is to pull all of the interesting logic for the child form into a Class Module, and have each parent form lay out the UI (in a Picture control, perhaps) and pass that Picture object into the class module. The class then knows how to operate on the picture, and it assumes that all its expected pieces have been laid out appropriately. This approach has several downsides, and I'd like something a bit more elegant.

A: Take a look at VB6 UserControls; I think they are exactly what you need. You can create a UserControl within your project, add controls and code to that control, and then insert it onto a form just like standard VB6 controls. I've used UserControls to share UI layouts on many occasions and it works great.
{ "language": "en", "url": "https://stackoverflow.com/questions/6913", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Thread safe lazy construction of a singleton in C++ Is there a way to implement a singleton object in C++ that is:
*Lazily constructed in a thread safe manner (two threads might simultaneously be the first user of the singleton - it should still only be constructed once).
*Doesn't rely on static variables being constructed beforehand (so the singleton object is itself safe to use during the construction of static variables).
(I don't know my C++ well enough, but is it the case that integral and constant static variables are initialized before any code is executed (i.e., even before static constructors are executed - their values may already be "initialized" in the program image)? If so, perhaps this can be exploited to implement a singleton mutex - which can in turn be used to guard the creation of the real singleton.) Excellent, it seems that I have a couple of good answers now (shame I can't mark 2 or 3 as being the answer). There appear to be two broad solutions:
*Use static initialisation (as opposed to dynamic initialisation) of a POD static variable, and implement my own mutex with that using the builtin atomic instructions. This was the type of solution I was hinting at in my question, and I believe I knew already.
*Use some other library function like pthread_once or boost::call_once. These I certainly didn't know about - and am very grateful for the answers posted.

A: You can't do it without any static variables; however, if you are willing to tolerate one, you can use Boost.Thread for this purpose. Read the "one-time initialisation" section for more info. Then in your singleton accessor function, use boost::call_once to construct the object, and return it.

A: For gcc, this is rather easy:

LazyType* GetMyLazyGlobal()
{
    static const LazyType* instance = new LazyType();
    return instance;
}

GCC will make sure that the initialization is atomic. For VC++, this is not the case. :-( One major issue with this mechanism is the lack of testability: if you need to reset the LazyType to a new one between tests, or want to change the LazyType* to a MockLazyType*, you won't be able to. Given this, it's usually best to use a static mutex + static pointer. Also, possibly an aside: it's best to always avoid static non-POD types. (Pointers to PODs are OK.) The reasons for this are many: as you mention, initialization order isn't defined - neither is the order in which destructors are called, though. Because of this, programs will end up crashing when they try to exit; often not a big deal, but sometimes a showstopper when the profiler you are trying to use requires a clean exit.

A: Unfortunately, Matt's answer features what's called double-checked locking, which isn't supported by the C/C++ memory model. (It is supported by the Java 1.5 and later - and I think .NET - memory model.) This means that between the time when the pObj == NULL check takes place and when the lock (mutex) is acquired, pObj may have already been assigned on another thread. Thread switching happens whenever the OS wants it to, not between "lines" of a program (which have no meaning post-compilation in most languages). Furthermore, as Matt acknowledges, he uses an int as a lock rather than an OS primitive. Don't do that. Proper locks require the use of memory barrier instructions, potentially cache-line flushes, and so on; use your operating system's primitives for locking.
This is especially important because the primitives used can change between the individual CPU lines that your operating system runs on; what works on a CPU Foo might not work on CPU Foo2. Most operating systems either natively support POSIX threads (pthreads) or offer them as a wrapper for the OS threading package, so it's often best to illustrate examples using them. If your operating system offers appropriate primitives, and if you absolutely need it for performance, instead of doing this type of locking/initialization you can use an atomic compare and swap operation to initialize a shared global variable. Essentially, what you write will look like this:

MySingleton *MySingleton::GetSingleton()
{
    if (pObj == NULL)
    {
        // create a temporary instance of the singleton
        MySingleton *temp = new MySingleton();
        if (OSAtomicCompareAndSwapPtrBarrier(NULL, temp, &pObj) == false)
        {
            // if the swap didn't take place, delete the temporary instance
            delete temp;
        }
    }
    return pObj;
}

This only works if it's safe to create multiple instances of your singleton (one per thread that happens to invoke GetSingleton() simultaneously), and then throw the extras away. The OSAtomicCompareAndSwapPtrBarrier function provided on Mac OS X - most operating systems provide a similar primitive - checks whether pObj is NULL and only actually sets it to temp if it is. This uses hardware support to really, literally only perform the swap once and tell whether it happened. Another facility to leverage if your OS offers it, in between these two extremes, is pthread_once. This lets you set up a function that's run only once - basically by doing all of the locking/barrier/etc. trickery for you - no matter how many times it's invoked or on how many threads it's invoked.

A: Basically, you're asking for synchronized creation of a singleton, without using any synchronization (previously-constructed variables). In general, no, this is not possible. You need something available for synchronization. As for your other question, yes, static variables which can be statically initialized (i.e. no runtime code necessary) are guaranteed to be initialized before other code is executed. This makes it possible to use a statically-initialized mutex to synchronize creation of the singleton. From the 2003 revision of the C++ standard:

Objects with static storage duration (3.7.1) shall be zero-initialized (8.5) before any other initialization takes place. Zero-initialization and initialization with a constant expression are collectively called static initialization; all other initialization is dynamic initialization. Objects of POD types (3.9) with static storage duration initialized with constant expressions (5.19) shall be initialized before any dynamic initialization takes place. Objects with static storage duration defined in namespace scope in the same translation unit and dynamically initialized shall be initialized in the order in which their definition appears in the translation unit.

If you know that you will be using this singleton during the initialization of other static objects, I think you'll find that synchronization is a non-issue. To the best of my knowledge, all major compilers initialize static objects in a single thread, so there is thread-safety during static initialization. You can declare your singleton pointer to be NULL, and then check to see if it's been initialized before you use it. However, this assumes that you know that you'll use this singleton during static initialization.
This is also not guaranteed by the standard, so if you want to be completely safe, use a statically-initialized mutex. Edit: Chris's suggestion to use an atomic compare-and-swap would certainly work. If portability is not an issue (and creating additional temporary singletons is not a problem), then it is a slightly lower overhead solution.

A: Here's a very simple lazily constructed singleton getter:

Singleton *Singleton::self()
{
    static Singleton instance;
    return &instance;
}

This is lazy, and the next C++ standard (C++0x) requires it to be thread safe. In fact, I believe that at least g++ implements this in a thread safe manner. So if that's your target compiler, or if you use a compiler which also implements this in a thread safe manner (maybe newer Visual Studio compilers do? I don't know), then this might be all you need. Also see http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2008/n2513.html on this topic.

A: While this question has already been answered, I think there are some other points to mention:
*If you want lazy-instantiation of the singleton while using a pointer to a dynamically allocated instance, you'll have to make sure you clean it up at the right point.
*You could use Matt's solution, but you'd need to use a proper mutex/critical section for locking, and check "pObj == NULL" both before and after the lock. Of course, pObj would also have to be static ;). A mutex would be unnecessarily heavy in this case; you'd be better off going with a critical section.
But as already stated, you can't guarantee threadsafe lazy-initialisation without using at least one synchronisation primitive. Edit: Yup Derek, you're right. My bad. :)

A: You could use Matt's solution, but you'd need to use a proper mutex/critical section for locking, and check "pObj == NULL" both before and after the lock. Of course, pObj would also have to be static ;). A mutex would be unnecessarily heavy in this case; you'd be better off going with a critical section. OJ, that doesn't work. As Chris pointed out, that's double-checked locking, which is not guaranteed to work in the current C++ standard. See: C++ and the Perils of Double-Checked Locking Edit: No problem, OJ. It's really nice in languages where it does work. I expect it will work in C++0x (though I'm not certain), because it's such a convenient idiom.

A: Some things to keep in mind:
*Read up on weak memory models. They can break double-checked locks and spinlocks. Intel has a strong memory model (for now), so on Intel it's easier.
*Carefully use "volatile" to avoid caching parts of the object in registers; otherwise you'll have initialized the object pointer, but not the object itself, and the other thread will crash.
*The order of static variable initialization versus shared code loading is sometimes not trivial. I've seen cases where the code to destruct an object was already unloaded, so the program crashed on exit.
*Such objects are hard to destroy properly.
In general, singletons are hard to do right and hard to debug. It's better to avoid them altogether.

A: I suppose saying don't do this because it's not safe and will probably break more often than just initializing this stuff in main() isn't going to be that popular. (And yes, I know that suggesting that means you shouldn't attempt to do interesting stuff in constructors of global objects. That's the point.)
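To round out the pthread_once suggestion above, here is a minimal sketch (POSIX-only; MySingleton is a stand-in name and error handling is omitted). Strictly speaking, POSIX wants a plain C function for the init routine, but mainstream compilers accept a static member function here:

#include <pthread.h>

class MySingleton {
public:
    static MySingleton *GetSingleton();
private:
    MySingleton() {}
    static void Create();
    static MySingleton *pObj;
    static pthread_once_t once;
};

MySingleton *MySingleton::pObj = 0;
pthread_once_t MySingleton::once = PTHREAD_ONCE_INIT;

void MySingleton::Create()
{
    pObj = new MySingleton();
}

MySingleton *MySingleton::GetSingleton()
{
    // pthread_once runs Create exactly once, even if several threads
    // arrive here at the same time. PTHREAD_ONCE_INIT is static
    // initialization, so there is no dynamic-init ordering problem -
    // which addresses the second requirement in the question.
    pthread_once(&once, &MySingleton::Create);
    return pObj;
}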
{ "language": "en", "url": "https://stackoverflow.com/questions/6915", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "38" }
Q: Beginning Shader Development I want to get started doing some game development using Microsoft's XNA. Part of that is shader development, but I have no idea how to get started. I know that nVidia's FX Composer is a great tool to develop shaders, but I did not find much useful, up-to-date content on how to actually get started. What tutorials would you recommend?

A: I'd just like to reiterate how great the GPU Gems books are - a truly fantastic resource for any serious graphics development. OJ has basically summed up a really good process of learning; I'd just add that the value of having a good foundation in geometric math (vectors/matrices as a minimum) cannot be overstated - but it's not as hard as people sometimes make out. Learning what a dot product, cross product, normal vector and matrix multiply are and do is a good first step :). Try to understand exactly what is happening between World/View/Projection-Clip/Screen space, what the perspective divide is, etc. When I was starting to learn, a good exercise was to actually implement the entire T&L pipeline in software, complete with clipping/culling etc. This is a long process and may not seem worthwhile, as I'm sure you want to just dive into the fun stuff, but having a proper understanding of what's going on is really useful and will prove worthwhile when you invariably run into more insidious and difficult-to-diagnose bugs. Try not to be sidetracked by tools like FX Composer initially; they're useful for prototyping, but having a solid foundation in the basics is much more worthwhile in the long run.

A: Development of shaders in XNA (which obviously uses DirectX) requires knowledge of HLSL or shader assembly. I'd recommend getting familiar with the former before diving into the latter. Before writing any shaders, it's a good idea to get a solid understanding of the shader pipeline, and attempt to get your mind around what is possible when using programmable shaders. When you're familiar with the life of a pixel (from source data all the way through to the screen), understanding examples of shaders becomes a lot easier. Next, make an attempt to write your own HLSL which does what the fixed T&L pipeline used to do, just to get your hands dirty. This is the equivalent of a "hello world" program in the vertex/pixel shader world. When you're able to do that and you understand what you've written, you're ready to go on to the more fun stuff. As a next step you might want to simulate basic specular lighting in one of your shaders from a single light source. You can then adapt this down the track to use multiple lights. Play with colours and movement of lights. This will help get you familiar with the use of shader constants as well. When you have a couple of basic shaders together, you should attempt to make it so that your game/engine uses multiple/different shaders on different objects. Start adding some other bits like basic bump or normal maps. When you get to this stage, the world is your oyster. You can start diving into some funky effects, and even consider using the GPU for more than it was originally intended. For those who are a little more advanced, there are a couple of good books that are available for free online which have some great information from Nvidia here and here. Don't forget that there's an excellent series of books called ShaderX which covers some awesome shader stuff. There's 1, 2, 3, 4, 5 and 6 already in print, and 7 is coming soon. Good luck.
If you get some shaders going, I'd love to see them :)

A: One of the best ways to get started with shaders is to read Introduction to 3D Game Programming with DirectX 9.0c: A Shader Approach by Frank Luna. The author first introduces DirectX and HLSL and then gradually reveals the power of shaders. He starts with very simple shaders, but by the end of the book you know how to create lighting, shadows, particle systems, etc. Great book. OJ suggested that you read the ShaderX series, but the stuff there is not for beginners, so it will not help you a lot when you are taking your first steps. I've also posted an article here about the best books that will get you started with game development.

A: I'd join the praise given to OJ - truly a good answer. Still, once you do have the basic understanding, you can learn a lot, and fast, by downloading one of the following two great tools:

* RenderMonkey http://ati.amd.com/developer/rendermonkey/downloads.html
* FX Composer http://developer.nvidia.com/object/fx_composer_home.html

Once you do that, go to their project libraries and start browsing through examples, starting with the basic shading and moving to shadows, normal mapping, materials, effects and everything you find of interest. Take a project, start altering the algorithms according to some goals you set, and work out how to reach them. You'll find that many of the examples are really advanced and they will open up your horizons. Have fun!

A: Whilst a lot of good advice has already been given, if you're having a hard job getting your mind around the steps involved, Mental Mill provides a visual way of constructing shaders (the Artist Edition is bundled free with FX Composer -- itself also free!). Whilst you'd be better off long term learning the HLSL code directly, Mental Mill can generate these FX files for you (so you can peek at the changes). Note, like all code generators, its output is a little more verbose than what you're likely to write once you understand HLSL! The visual progression of the effects, from one method to another, is very impressive!

A: SAMS's XNA Unleashed by Chad Carter is a great starting point for XNA and assumes little knowledge of game development practices or hard maths before you start. It has two chapters on basic and advanced shaders. As a side note, keep an eye out on Google for WPF shader tutorials; WPF now uses the same technology to allow custom shaders in WPF applications, and I believe tutorials for that are largely compatible with XNA.

A: I haven't used XNA or DirectX. But, for getting to know the basics of shader programming with Cg, the Cg Tutorial is the best book I've found.

A: You should also look at RenderMonkey. It's a good tool. As far as books go, check out XNA 2.0 Game Programming Recipes from Riemer Grootjans ... great book.

A: One quick thought on visual shader editors. Those sorts of editors are a very fun thing to play with, but I would really, really recommend sticking with text-based HLSL (both RenderMonkey and FX Composer do the job there). I work as a game dev and have built a few shader systems (the ones in Battlefield 2 and Far Cry 2, for instance) and, to be honest, have found that HLSL is more valuable than any graphical shader network to date. There are things like following the flow of execution, debugging the shader, or being able to quickly iterate test cases that give you an insight that is pretty much impossible to get through a visual tool.
I have used visual shader builders myself, and I like them, and one day, when shading is not as performance-critical as it is today, they might be the right tool for the job. ...though my guess is that we'll see deferred lighting and probably full software rendering before that day comes ;)
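To make the "hello world" step from the answers above a little more concrete, here is a minimal sketch of the host-side C# code that drives an effect in pre-4.0 XNA (the Begin/End pass API of that era). The asset name "MyFirstShader" and the "WorldViewProjection" parameter are illustrative assumptions, not something from the answers:

```csharp
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Content;
using Microsoft.Xna.Framework.Graphics;

public class ShaderHost
{
    Effect effect;                  // compiled from an .fx file by the content pipeline
    Matrix world, view, projection; // set these from your camera/scene

    public void LoadContent(ContentManager content)
    {
        // "MyFirstShader" is a hypothetical asset name for your first .fx file.
        effect = content.Load<Effect>("MyFirstShader");
    }

    public void Draw(GraphicsDevice device)
    {
        // Feed the shader the matrices the fixed T&L pipeline used to manage for you.
        effect.Parameters["WorldViewProjection"].SetValue(world * view * projection);

        effect.Begin();
        foreach (EffectPass pass in effect.CurrentTechnique.Passes)
        {
            pass.Begin();
            // ...issue your DrawPrimitives/DrawUserPrimitives calls here...
            pass.End();
        }
        effect.End();
    }
}
```

The matching .fx file then only needs a vertex shader that multiplies each position by WorldViewProjection and a pixel shader that returns a colour; everything else can come later.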
{ "language": "en", "url": "https://stackoverflow.com/questions/6926", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19" }
Q: Transcoding audio and video What is the best way to transcode audio and video to show on the web? I need to do it programmatically. I'd like to do something like YouTube or Google Video where users can upload whatever format they want, and I encode it to flv, mp3, and/or mp4. I could do it on our server, but I would rather use an EC2 instance or even a web service. We have a Windows 2008 server.

A: Kind of depends on how much you want to spend. If this is a brand-new (and mostly unfunded) idea, then go the ffmpeg route, but as you scale and look to improve the quality, consider one of the more professional encoding tools that can be automated (Rhozet, Inlet, Digital Rapids are 3 options).

A: I strongly recommend ffmpeg. On Windows, I have found this site to host good binaries.

A: ffmpeg can do it; it's a command-line tool that uses libavcodec. It can handle conversion of most video formats. Its license is LGPL, if that suits your needs. You can utilize it as a separate process programmatically, or if you're feeling hardcore, you can use the libavcodec library yourself to encode directly.

A: When you want to transcode to FLV (which is probably the best for the web), I use this line:

ffmpeg -hq -y -i $directory/$file -r 20 -s 300x200 -deinterlace -ar 22050 $directory/$file.flv 1>/dev/null 2>/dev/null

It works very well, under Linux of course :-).

A: If you are looking for GPL'ed stuff: For audio mucking about, try sox. Very powerful! It does a lot! It's included in most Linux distributions. There is also the famous LAME for mp3 [audio] encoding. For video, mencoder is impressive! It's part of the mplayer package. It will handle conversions from most video formats. Far more than I ever dreamed existed. (For documentation, see Chapter 9. Basic usage of MEncoder and Chapter 10. Encoding with MEncoder.) It's somewhat more limited about what it can create. But it does support mpeg4, mpeg2, dvd-mpeg, flv, and many others. (While I haven't tried flv myself, Google shows other folks are using it.) I have done things like jpeg + sound -> mpeg4 movie:

nice +20 $MENCODER mf://${JPEGFILE} -mf w=720:h=480:fps=1/${SOUNDLENGTH}:type=jpeg -audiofile ${SOUNDFILE} -ovc lavc -oac lavc -lavcopts vcodec=mpeg4 -ofps 30000/1001 -o ${MENU_MPG}

Or transcode arbitrarily formatted video to dvd-compatible mpeg:

nice +20 $MENCODER -edl ${EDL} -ovc lavc -oac lavc -lavcopts vcodec=mpeg2video:vrc_buf_size=1835:vrc_maxrate=9800:vbitrate=${VBITRATE}:keyint=18:acodec=ac3:abitrate=192:aspect=4/3:trell:mbd=2:dia=4:cmp=3:precmp=3:ildctcmp=3:subcmp=3:mbcmp=3:cbp:mv0:dc=10 -of mpeg -mpegopts format=dvd -vf scale=720:480,harddup -srate 48000 -af lavcresample=48000 -ofps 30000/1001 -o ./${INFILE}.reformatted ${FILEPATH}

-edl/-edlout [Edit Decision Lists] are used to snip out just the video sections I want. ${VBITRATE} is normally 5000 for DVD mpeg video. But if you flub it a bit you can squeeze more video onto a dvd, assuming you can tolerate the artifacts. scale=720:480,harddup -- there was a little issue with the scale being wrong for my DVD player, and harddup resolves a sound-video desync issue on my "el cheapo" player. (To play back on a widescreen player that wouldn't handle 4x3 video, I've used atrocities like "aspect=16/9" and "-vf scale=560:480,expand=720:480,harddup". But in general you don't want to waste bits encoding black bars.) This is not the most efficient set of options to mencoder by far! It can be time consuming to run. But I had other goals in mind...

A: Do be aware that certain parts of ffmpeg are under GPL.
I believe the libpostproc module is, and if I recall correctly it is used in transcoding. Make sure this license is compatible with what you're doing.

A: I would take a look at Main Concept's Reference SDK: http://www.mainconcept.com/site/developer-products-6/pc-based-sdks-20974/reference-sdk-21221/information-21243.html It is built for transcoding and, since it is a licensed SDK, it doesn't have any of the legal issues surrounding ffmpeg/libavcodec.

A: Rhozet Carbon Coder can handle a wide range of formats, plus you can use plugins to alter the video (e.g. add a watermark).
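Several answers above suggest driving ffmpeg as a separate process; as a rough illustration, here is how that could look from .NET on the Windows 2008 box mentioned in the question. The ffmpeg install path is an assumption, and the flags are simply borrowed from the FLV command line quoted earlier:

```csharp
using System;
using System.Diagnostics;

class Transcoder
{
    // Shells out to ffmpeg to turn an uploaded file into a web-friendly FLV.
    static void TranscodeToFlv(string inputPath, string outputPath)
    {
        var psi = new ProcessStartInfo
        {
            FileName = @"C:\tools\ffmpeg\ffmpeg.exe",   // assumed install location
            Arguments = string.Format(
                "-y -i \"{0}\" -r 20 -s 300x200 -deinterlace -ar 22050 \"{1}\"",
                inputPath, outputPath),
            UseShellExecute = false,
            RedirectStandardError = true   // ffmpeg writes its progress/log to stderr
        };

        using (var ffmpeg = Process.Start(psi))
        {
            string log = ffmpeg.StandardError.ReadToEnd(); // drain before waiting
            ffmpeg.WaitForExit();
            if (ffmpeg.ExitCode != 0)
                throw new InvalidOperationException("ffmpeg failed: " + log);
        }
    }
}
```

The same pattern works on an EC2 instance; only the executable path changes.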
{ "language": "en", "url": "https://stackoverflow.com/questions/6932", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Visual Studio and dual/multiple monitors: how do I get optimized use out of my monitors? Ultramon is a great program for dual monitors (stretching the screen across monitors), but I was wondering if there is any way to do something in Visual Studio like have one tab of code open on one monitor and a second tab of code open on the second monitor with only one instance of Visual Studio running? Or are there any other suggestions on getting the most bang for the buck out of dual monitors and Visual Studio?

A: I have three monitors, so I usually run with this configuration:

* Left monitor: documentation / ebooks.
* Middle monitor: code / debugging
* Right monitor: test application / scrolling logfiles (if needed)

This usually works pretty well, and since the monitors are fairly big I rarely need to use the test application in full-screen, so there's plenty of room for my tail -f windows. I also use AutoHotkey to assign hotkeys that flip to the most important windows, like Firefox or my SSH session. That way I can simply use a shortcut key to access them when necessary. The left monitor is actually a separate computer running Linux with keyboard/mouse shared via Synergy, so I have multiple ebooks or documentation pages open, one on each virtual desktop... I can flip between the documentation by moving my mouse to the left and using a shortcut key.

A: To have one tab of code open on one monitor and a second tab of code open on the second monitor with only one instance of Visual Studio running, you can simply drag a tab outside of VS onto your other screen.

A: Personally, I have my windows set up so that on my main monitor I have the main Visual Studio window, and therefore my code window, maximized, with only the toolbox docked, on the left. This means the code window takes up as much space as possible, while keeping the left-hand edge of the code close to the middle of the screen, where my eyes naturally look. My main monitor is a wide screen, so I find that gives me more than enough room for my code. My secondary monitor has a second window, which contains the tool windows that I use. So I have solution explorer, error list, task list (//todo: comments), output window, find results etc. all taking up as much space as they like on my secondary monitor. When debugging, the solution explorer moves to the main monitor, and the watch, autos and locals windows take its place. I find this gives me a very large area to write code, and really helps usage of all of those additional windows, by giving them more real estate than they'd usually have. Update: In response to everyone talking about using the second monitor for documentation or running the app, I wholeheartedly agree, and forgot to mention how I do that. I use PowerMenu a lot to achieve this. Basically I can right-click on any window and set Always On Top. So while I'm debugging, I want to see my output window, but then if I have to refer to some documentation, I just flick to Mozilla (on the second monitor), set it on top, and go back to Visual Studio. I find this lets me manage the tool windows without having to either shuffle them around a lot, or take up valuable space in the code window.

A: When I first got two monitors I wanted to do the same as you, use all the space for Visual Studio, but I think you come to realize that it's best to keep VS on one monitor and use the second monitor for documentation, external resources etc.
You wouldn't think it at first, but all the little touches, like just being able to maximize other resources without them hiding your code, add up to a great feature.

A: For GUI debugging it's awesome to be able to run the app on one screen and have the debugger on another screen. That's one of the most practical uses. But really, it depends on what kind of application you're developing, e.g. whether you need to monitor open file handles, logs, etc.

A: I have VS on my left monitor and the GUI/running window on the right. However, if you want to have code tabs open on each monitor, you could use UltraMon's option to expand a window across both monitors, then drag a code page over such that it puts up a divider. Then, you align that divider with the break in your monitors. I've done that before, just to test it out. It's not a bad setup.

A: Three monitors -- all 1600x1200

* Left: email, IM, SQL Server Management Studio, Remote Desktops to servers
* Middle: Visual Studio -- maybe multiple instances -- maximized, solution explorer and team explorer docked on the right, errors/output docked at the bottom, others auto-hide
* Right: web browsers -- app debugging and normal web work, ADUC (if needed)

Other apps get moved around depending on what I'm working on, how crowded the monitors are, and the interaction between the app that's open and what I need the info from it for.

A: I have three monitors, set up so that Visual Studio is full screen on the middle monitor, the right-hand monitor has all the tool windows configured, and the left monitor is for browser, help, SSMS, email, etc. It works well except if I have to remote in, so I have a separate exported configuration to move the tool windows back into Visual Studio, and one to set them back up for multiple monitors.

A: Though I use StudioTools for other purposes, it has a "Tear off Editor" option, with which you can "tear off" the file to a window and resize the window. I find it quite helpful.

A: I find the Code Definition window absolutely invaluable to have open on my other monitor. As the cursor moves over a type name in your editor, the other window shows its definition.

A: You could try right-clicking a file in solution explorer, Open With, and then go find devenv.exe. That will open it up in a new instance of VS. Plus, it saves devenv as one of your default options in the future, so you don't have to go hunting around for devenv all the time. Not beautiful, but an option.
{ "language": "en", "url": "https://stackoverflow.com/questions/6937", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "47" }
Q: Has anybody used Google Performance Tools? Looking for feedback on: http://code.google.com/p/google-perftools/wiki/GooglePerformanceTools

A: There's a pretty good post that I read a while back that outlines some testing and analysis of GPT in a variety of scenarios.

A: I have used GPT at work since 2007, and I am totally satisfied. I use it to monitor and optimize a Linux network library, and I have obtained significant results. The main flaw of GPT is its lack of precision. Due to the design, you only get the main time-consuming functions, but this is often what you need when you want to optimize a program. For more precision, I advise using other tools like gprof or IBM Quantify.
{ "language": "en", "url": "https://stackoverflow.com/questions/6957", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Invalid Resource File When attempting to compile my C# project, I get the following error: 'C:\Documents and Settings\Dan\Desktop\Rowdy Pixel\Apps\CleanerMenu\CleanerMenu\obj\Debug\CSC97.tmp' is not a valid Win32 resource file. Having gone through many Google searches, I have determined that this is usually caused by a 256x256 image inside an icon used by the project. I've gone through all the icons and removed the 256x256 versions, but the error persists. Any ideas on how to get rid of this? @Mike: It showed up mysteriously one night. I've searched the csproj file, but there's no mention of a CSC97.tmp (I also checked the solution file, but I had no luck there either). In case it helps, I've posted the contents of the csproj file on pastebin. @Derek: No problem. Here's the compiler output. ------ Build started: Project: Infralution.Licensing, Configuration: Debug Any CPU ------ Infralution.Licensing -> C:\Documents and Settings\Dan\Desktop\Rowdy Pixel\Apps\CleanerMenu\Infralution.Licensing\bin\Debug\Infralution.Licensing.dll ------ Build started: Project: CleanerMenu, Configuration: Debug Any CPU ------ C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\Csc.exe /noconfig /nowarn:1701,1702 /errorreport:prompt /warn:4 /define:DEBUG;TRACE /main:CleanerMenu.Program /reference:"C:\Documents and Settings\Dan\Desktop\Rowdy Pixel\Apps\CleanerMenu\Infralution.Licensing\bin\Debug\Infralution.Licensing.dll" /reference:..\NotificationBar.dll /reference:..\PSTaskDialog.dll /reference:C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\System.Data.dll /reference:C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\System.dll /reference:C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\System.Drawing.dll /reference:C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\System.Windows.Forms.dll /reference:C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\System.Xml.dll /reference:obj\Debug\Interop.IWshRuntimeLibrary.dll /debug+ /debug:full /optimize- /out:obj\Debug\CleanerMenu.exe /resource:obj\Debug\CleanerMenu.Form1.resources /resource:obj\Debug\CleanerMenu.frmAbout.resources /resource:obj\Debug\CleanerMenu.ModalProgressWindow.resources /resource:obj\Debug\CleanerMenu.Properties.Resources.resources /resource:obj\Debug\CleanerMenu.ShortcutPropertiesViewer.resources /resource:obj\Debug\CleanerMenu.LocalizedStrings.resources /resource:obj\Debug\CleanerMenu.UpdatedLicenseForm.resources /target:winexe /win32icon:CleanerMenu.ico ErrorHandler.cs Form1.cs Form1.Designer.cs frmAbout.cs frmAbout.Designer.cs Licensing.cs ModalProgressWindow.cs ModalProgressWindow.Designer.cs Program.cs Properties\AssemblyInfo.cs Properties\Resources.Designer.cs Properties\Settings.Designer.cs Scanner.cs ShortcutPropertiesViewer.cs ShortcutPropertiesViewer.Designer.cs LocalizedStrings.Designer.cs UpdatedLicenseForm.cs UpdatedLicenseForm.Designer.cs error CS1583: 'C:\Documents and Settings\Dan\Desktop\Rowdy Pixel\Apps\CleanerMenu\CleanerMenu\obj\Debug\CSC97.tmp' is not a valid Win32 resource file Compile complete -- 1 errors, 0 warnings ------ Skipped Build: Project: CleanerMenu Installer, Configuration: Debug ------ Project not selected to build for this solution configuration ========== Build: 1 succeeded or up-to-date, 1 failed, 1 skipped ========== I have also uploaded the icon I am using. You can view it here. @Mike: Thanks! After removing everything but the 32x32 image, everything worked great. Now I can go back and add the other sizes one-by-one to see which one is causing me grief. 
:) @Derek: Since I first got the error, I'd done a complete reinstall of Windows (and along with it, the SDK.) It wasn't the main reason for the reinstall, but I had a slim hope that it would fix the problem. Now if only I can figure out why it previously worked with all the other sizes...

A: I had a similar issue with an "obj/debug/*.tmp" file erroring out in my build log. Turns out my C:\ drive was out of space. After clearing some space, my builds started working.

A: I don't know if this will help, but from this forum: I added an .ico file to the application section of the properties page, and received the error that's been described. When I checked the icon file with an icon editor, it turned out that the file had more than one version of the image, i.e. (16 x 16, 24 x 24, 32 x 32, 48 x 48 vista compressed). I removed the other formats that I didn't want, resaved the file (just with 32 x 32), and the application now compiles without error. Try opening the icon in an icon editor and see if you see other formats like described (also, try removing the icon and seeing if the project will build again, just to verify the icon is causing it).

A: Is this a file you created and added to the project or did it mysteriously show up? You can maybe check your .csproj file and see how it is being referenced (it should be a simple XML file and you can search for CSC97.tmp). Perhaps post the information you find so we can have more details to help solve your problem.

A: Looking around, it seems some people resolved this by repairing or reinstalling the .NET SDK. You might want to give that a try. P.S. - I see why you didn't include more of the compiler output, now. Not much to really see there. :)

A: In the project properties, on the Application tab, in the Resources group, just select the "Icon and manifest" radio button. In my project that was the problem, and it was fixed with the above steps.
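For anyone who wants to check an icon's contents without an icon editor, the image sizes can be read straight out of the .ico directory header. This is a hedged sketch based on the standard ICO file layout, not something from this thread:

```csharp
using System;
using System.IO;

class IconInspector
{
    // Lists the image sizes inside an .ico file so you can spot a
    // 256x256 (or other unwanted) entry programmatically.
    static void Main(string[] args)
    {
        using (var r = new BinaryReader(File.OpenRead(args[0])))
        {
            r.ReadUInt16();                 // reserved, always 0
            r.ReadUInt16();                 // resource type, 1 = icon
            int count = r.ReadUInt16();     // number of images in the file

            for (int i = 0; i < count; i++)
            {
                int width  = r.ReadByte();  // 0 means 256
                int height = r.ReadByte();  // 0 means 256
                r.ReadBytes(6);             // palette size, reserved, planes, bpp
                int size   = r.ReadInt32(); // image data size in bytes
                r.ReadInt32();              // offset of the image data

                Console.WriteLine("{0}x{1} ({2} bytes)",
                    width == 0 ? 256 : width,
                    height == 0 ? 256 : height,
                    size);
            }
        }
    }
}
```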
{ "language": "en", "url": "https://stackoverflow.com/questions/6973", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: Multicore Text File Parsing I have a quad core machine and would like to write some code to parse a text file that takes advantage of all four cores. The text file basically contains one record per line. Multithreading isn't my forte so I'm wondering if anyone could give me some patterns that I might be able to use to parse the file in an optimal manner. My first thoughts are to read all the lines into some sort of queue and then spin up threads to pull the lines off the queue and process them, but that means the queue would have to exist in memory and these are fairly large files so I'm not so keen on that idea. My next thoughts are to have some sort of controller that will read in a line and assign it a thread to parse, but I'm not sure if the controller will end up being a bottleneck if the threads are processing the lines faster than it can read and assign them. I know there's probably another simpler solution than both of these but at the moment I'm just not seeing it.

A: I'd go with your original idea. If you are concerned that the queue might get too large, implement a buffer zone for it (i.e. if it gets above 100 lines then stop reading the file, and if it gets below 20 then start reading again; you'd need to do some testing to find the optimal barriers). Make it so that any of the threads can potentially be the "reader thread"; since a thread has to lock the queue to pull an item out anyway, it can also check whether the "low buffer region" has been hit and start reading again. While it's doing this the other threads can read out the rest of the queue. Or if you prefer, have one reader thread assign the lines to three other processor threads (via their own queues) and implement a work-stealing strategy. I've never done this so I don't know how hard it is.

A: Mark's answer is the simpler, more elegant solution. Why build a complex program with inter-thread communication if it's not necessary? Spawn 4 threads. Each thread calculates size-of-file/4 to determine its start point (and stop point). Each thread can then work entirely independently. The only reason to add a special thread to handle reading is if you expect some lines to take a very long time to process and you expect that these lines are clustered in a single part of the file. Adding inter-thread communication when you don't need it is a very bad idea. You greatly increase the chance of introducing an unexpected bottleneck and/or synchronization bugs.

A: This will eliminate bottlenecks of having a single thread do the reading:

open file
for each thread n=0,1,2,3:
    seek to file offset n/4 * filesize
    scan to next complete line
    process all lines in your part of the file

A: My experience is with Java, not C#, so apologies if these solutions don't apply. The immediate solution I can think up off the top of my head would be to have an executor that runs 3 threads (using Executors.newFixedThreadPool, say). For each line/record read from the input file, fire off a job at the executor (using ExecutorService.submit). The executor will queue requests for you, and allocate between the 3 threads. Probably better solutions exist, but hopefully that will do the job. :-) ETA: Sounds a lot like Wolfbyte's second solution. :-) ETA2: System.Threading.ThreadPool sounds like a very similar idea in .NET. I've never used it, but it may be worth your while!

A: Since the bottleneck will generally be in the processing and not the reading when dealing with files, I'd go with the producer-consumer pattern. To avoid locking I'd look at lock-free lists.
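As a rough sketch of that buffered producer-consumer idea in C# (using BlockingCollection from later .NET versions; on the .NET of this question's era you would hand-roll the bounded queue with lock/Monitor, and ProcessLine stands in for the real per-record work):

```csharp
using System;
using System.Collections.Concurrent;
using System.IO;
using System.Threading.Tasks;

class ParallelLineParser
{
    static void Main(string[] args)
    {
        // Bounded queue: the reader blocks once 1000 lines are buffered,
        // which keeps memory flat even for very large files.
        var lines = new BlockingCollection<string>(boundedCapacity: 1000);

        // Producer: a single thread reads the file sequentially.
        var reader = Task.Run(() =>
        {
            foreach (var line in File.ReadLines(args[0]))
                lines.Add(line);        // blocks while the buffer is full
            lines.CompleteAdding();     // tells the consumers to drain and stop
        });

        // Consumers: one worker per core pulls lines off the queue.
        var workers = new Task[Environment.ProcessorCount];
        for (int i = 0; i < workers.Length; i++)
            workers[i] = Task.Run(() =>
            {
                foreach (var line in lines.GetConsumingEnumerable())
                    ProcessLine(line);
            });

        Task.WaitAll(workers);
        reader.Wait();
    }

    static void ProcessLine(string line)
    {
        // Placeholder for the real per-record parsing work.
    }
}
```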
Since you are using C#, you can take a look at Julian Bucknall's Lock-Free List code.

A: @lomaxx @Derek & Mark: I wish there was a way to accept 2 answers. I'm going to have to end up going with Wolfbyte's solution, because if I split the file into n sections there is the potential for a thread to come across a batch of "slow" transactions; however, if I were processing a file where each record was guaranteed to require an equal amount of processing, then I really like your solution of just splitting the file into chunks, assigning each chunk to a thread, and being done with it. No worries. If clustered "slow" transactions are an issue, then the queuing solution is the way to go. Depending on how fast or slow the average transaction is, you might also want to look at assigning multiple lines at a time to each worker. This will cut down on synchronization overhead. Likewise, you might need to optimize your buffer size. Of course, both of these are optimizations that you should probably only do after profiling. (No point in worrying about synchronization if it's not a bottleneck.)

A: If the text that you are parsing is made up of repeated strings and tokens, break the file into chunks, and for each chunk you could have one thread pre-parse it into tokens consisting of keywords, "punctuation", ID strings, and values. String compares and lookups can be quite expensive, and passing this off to several worker threads can speed up the purely logical/semantic part of the code if it doesn't have to do the string lookups and comparisons. Also, you mention you are concerned with the size of your file occupying a large amount of memory. There are a couple of things you could do to cut back on your memory budget. Split the file into chunks and parse it. Read in only as many chunks as you are working on at a time, plus a few for "read ahead", so you do not stall on disk when you finish processing a chunk before you go to the next chunk. Alternatively, large files can be memory mapped and "demand" loaded. If you have more threads working on processing the file than CPUs (usually threads = 1.5-2x CPUs is a good number for demand paging apps), the threads that are stalling on IO for the memory-mapped file will be halted automatically by the OS until their memory is ready, and the other threads will continue to process.
{ "language": "en", "url": "https://stackoverflow.com/questions/7015", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: What is the best way to draw skinnable "buttons" in a video game? I'm looking for ideas on how to draw a skinnable "button" in a game application. If I use a fixed sprite non-vector image for the button background, then I can't size the button easily. If I write code to draw a resizable button (like Windows buttons are drawn), then the programmer has to get involved -- and it makes skinning difficult. Another option is to break the button into 9 smaller images (3x3) and stretch them all -- kind of like rounded corners are done in HTML. Is there another "easier" way? What's the best approach?

A: If you really want to accomplish this, the way to do it is with texture coordinates. Much like you would make a stretchable button/panel design in HTML with a table, you simply define the UV coordinates for the corners, the top stretchable area, the bottom stretchable area, the side stretchable areas, and then the middle. For an example of this being implemented in an XNA engine (though it's a closed-source engine), see this: http://www.flatredball.com/frb/docs/index.php?title=SpriteFrame_Tutorial

A: Yes. I am working on an editor. It's a XAML-like language (or OpenLaszlo-like, if you prefer) for XNA. So the buttons can be resized by the editor. The buttons might also be resized by a style sheet. I guess you're right that it's likely that the buttons will all be the same size in a real design. But even in a Windows app, you sometimes need to make a button wider to accommodate text.

A: It should be possible with shaders. You would probably have to do texel lookups, but it would cut down on the number of triangles in a complex UI.
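For what it's worth, here is one way the 3x3 ("nine-slice") idea could look in XNA with SpriteBatch. The uniform corner size and the assumed skin layout are illustrative choices, not something prescribed by the answers above:

```csharp
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

static class NineSlice
{
    // Draws a button of arbitrary size from one skin texture by splitting it
    // 3x3: corners stay fixed, edges stretch along one axis, and the centre
    // stretches along both. 'border' is the corner size in pixels.
    public static void Draw(SpriteBatch batch, Texture2D skin,
                            Rectangle dest, int border)
    {
        int w = skin.Width, h = skin.Height;
        int[] sx = { 0, border, w - border, w };   // source column edges
        int[] sy = { 0, border, h - border, h };   // source row edges
        int[] dx = { dest.Left, dest.Left + border, dest.Right - border, dest.Right };
        int[] dy = { dest.Top, dest.Top + border, dest.Bottom - border, dest.Bottom };

        for (int row = 0; row < 3; row++)
            for (int col = 0; col < 3; col++)
            {
                var src = new Rectangle(sx[col], sy[row],
                                        sx[col + 1] - sx[col],
                                        sy[row + 1] - sy[row]);
                var dst = new Rectangle(dx[col], dy[row],
                                        dx[col + 1] - dx[col],
                                        dy[row + 1] - dy[row]);
                batch.Draw(skin, dst, src, Color.White);
            }
    }
}
```

Because only the corners keep their pixel size, the same skin works for any button dimensions, which keeps the artist in control and the programmer out of the loop.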
{ "language": "en", "url": "https://stackoverflow.com/questions/7017", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "-1" }
Q: Login Script with hidden buttons I have been using PHP and JavaScript for building my dad's website. He wants to incorporate a login system into his website, and I have the design for the system using PHP. My problem is how do I show buttons if the person is logged in? For example: you have Home, Products, About Us, and Contact. I want to have buttons for Dealer, Distributor, and maybe other information if the user is logged in. So I will have Home, Products, About Us, Contacts, Dealer (if dealer login), Distributor (if distributor login), and so forth. Would JavaScript be a good way to do this or would PHP, or maybe even both? Using JavaScript to show and hide buttons, and PHP to check to see which buttons to show.

A: In your menu file, or wherever you build the menu, put:

<? require 'auth.php' ?>
<ul>
<li><a href="">Home</a></li>
<li><a href="">Products</a></li>
<? if( loggedin() ): ?><li><a href="">Secret area</a></li><? endif; ?>
</ul>

Then in pages that require auth just do this:

<?php require 'auth.php'; require_login(); ?>

Where auth.php may contain:

<?php
function loggedin(){
    return isset( $_SESSION['loggedin'] );
}
function require_login(){
    if( !loggedin() ){
        header( 'Location: /login.php?referrer='.$_SERVER['REQUEST_URI'] );
        exit;
    }
}
?>

A: Regarding security, you cannot trust what comes from the client:

* The visitor can see all your code (HTML and JavaScript, not PHP) and try stuff
* The visitor may not even use a browser; it's trivially easy to send a request with a script

This means hiding the buttons is good user interface design (because you can't use them if you are not logged in). But it's not a security feature. The security feature is checking, on the server, that the visitor is logged in before each action that requires it. If you don't intend to show the buttons, it's not useful to send the HTML and images to the browser and then hide them with JavaScript. I would check with PHP.

A: If you use JavaScript to hide the buttons, you open a security hole in the application. A malicious user could either disable JavaScript or apply some of their own to get around your security. I suggest using PHP to choose whether to render the buttons or not. I do this in .NET quite often. You should be able to check the user's access on the server side whenever they try to use a restricted button as well.

A: What we have done at my work is have a library that provides functions such as checking if the user is logged in. For example:

<?php
require_once 'Auth.php';
// output some html
if (isLoggedIn()) {
    echo 'html for logged in user';
}
// rest of html

For pages that only authenticated users should see, the controller checks if they are logged in, and if not it redirects them to the login page.

<?php
public function viewCustomer($customerId)
{
    if (!isLoggedIn())
        redirectToLoginPage();
}

A: Everything that Christian Lescuyer wrote is correct. Notice, however, that he said "I would" and not "you should". The choice is not that easy. First of all, security is not an issue in the choice. You should have a security check on the server when you execute an action. Which code decides to show/hide the button that leads to the action is irrelevant. That leaves us with only one drawback of doing show/hide logic in JavaScript - the HTML sent to the user is bigger than necessary. This may not be a big deal.
Having show/hide logic in PHP does have a minus, though: the PHP code required is usually a tag soup. Akira's code provides a good example of how it is usually done. The corresponding JavaScript code would probably look something like this:

if (logged()) {
    elementSecretArea.style.display = "list-item";
}

(assuming that elements that could be hidden have display:none by default). This style also allows a nice "Ajax" scenario: the user sees a page without the secret area, inputs a password, and sees the secret area, all without refreshing the page. So, if you already have a script that runs when your document loads for other reasons, I would seriously consider having the show/hide logic there.

A: Basically, where you have your menu in HTML, say as a list

<ul>
<li>Home</li>
</ul>

you add PHP after the </li> of the last item:

<?php if($session->logged_in) { ?>
<li>My Account</li>
<?php } ?>
{ "language": "en", "url": "https://stackoverflow.com/questions/7031", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "23" }
Q: Graph visualization library in JavaScript I have a data structure that represents a directed graph, and I want to render that dynamically on an HTML page. These graphs will usually be just a few nodes, maybe ten at the very upper end, so my guess is that performance isn't going to be a big deal. Ideally, I'd like to be able to hook it in with jQuery so that users can tweak the layout manually by dragging the nodes around. Note: I'm not looking for a charting library.

A: I've just put together what you may be looking for: http://www.graphdracula.net It's JavaScript with directed graph layout and SVG, and you can even drag the nodes around. Still needs some tweaking, but is totally usable. You create nodes and edges easily with JavaScript code like this:

var g = new Graph();
g.addEdge("strawberry", "cherry");
g.addEdge("cherry", "apple");
g.addEdge("id34", "cherry");

I used the previously mentioned Raphael JS library (the graffle example) plus some code for a force-based graph layout algorithm I found on the net (everything open source, MIT license). If you have any remarks or need a certain feature, I may implement it, just ask! You may want to have a look at other projects, too! Below are two meta-comparisons:

* SocialCompare has an extensive list of libraries, and the "Node / edge graph" line will filter for graph visualization ones.
* DataVisualization.ch has evaluated many libraries, including node/graph ones. Unfortunately there's no direct link, so you'll have to filter for "graph".

Here's a list of similar projects (some have already been mentioned here):

Pure JavaScript Libraries

* vis.js supports many types of network/edge graphs, plus timelines and 2D/3D charts. Auto-layout, auto-clustering, springy physics engine, mobile-friendly, keyboard navigation, hierarchical layout, animation etc. MIT licensed and developed by a Dutch firm specializing in research on self-organizing networks.
* Cytoscape.js - interactive graph analysis and visualization with mobile support, following jQuery conventions. Funded via NIH grants and developed by @maxkfranz (see his answer below) with help from several universities and other organizations.
* The JavaScript InfoVis Toolkit - Jit, an interactive, multi-purpose graph drawing and layout framework. See for example the Hyperbolic Tree. Built by Twitter dataviz architect Nicolas Garcia Belmonte and bought by Sencha in 2010.
* D3.js Powerful multi-purpose JS visualization library, the successor of Protovis. See the force-directed graph example, and other graph examples in the gallery.
* Plotly's JS visualization library uses D3.js with JS, Python, R, and MATLAB bindings. See a networkx example in IPython here, a human interaction example here, and the JS Embed API.
* sigma.js Lightweight but powerful library for drawing graphs
* jsPlumb jQuery plug-in for creating interactive connected graphs
* Springy - a force-directed graph layout algorithm
* JS Graph It - drag'n'drop boxes connected by straight lines. Minimal auto-layout of the lines.
* RaphaelJS's Graffle - interactive graph example of a generic multi-purpose vector drawing library. RaphaelJS can't lay out nodes automatically; you'll need another library for that.
* JointJS Core - David Durman's MPL-licensed open source diagramming library. It can be used to create either static diagrams or fully interactive diagramming tools and application builders. Works in browsers supporting SVG.
Layout algorithms are not included in the core package.
* mxGraph Previously commercial HTML 5 diagramming library, now available under Apache v2.0. mxGraph is the base library used in draw.io.

Commercial libraries

* GoJS Interactive graph drawing and layout library
* yFiles for HTML Commercial graph drawing and layout library
* KeyLines Commercial JS network visualization toolkit
* ZoomCharts Commercial multi-purpose visualization library
* Syncfusion JavaScript Diagram Commercial diagram library for drawing and visualization.

Abandoned libraries

* Cytoscape Web Embeddable JS network viewer (no new features planned; succeeded by Cytoscape.js)
* Canviz JS renderer for Graphviz graphs. Abandoned in Sep 2013.
* arbor.js Sophisticated graphing with nice physics and eye candy. Abandoned in May 2012. Several semi-maintained forks exist.
* jssvggraph "The simplest possible force directed graph layout algorithm implemented as a Javascript library that uses SVG objects". Abandoned in 2012.
* jsdot Client-side graph drawing application. Abandoned in 2011.
* Protovis Graphical Toolkit for Visualization (JavaScript). Replaced by d3.
* Moo Wheel Interactive JS representation for connections and relations (2008)
* JSViz 2007-era graph visualization script
* dagre Graph layout for JavaScript

Non-JavaScript Libraries

* Graphviz Sophisticated graph visualization language
* Graphviz has been compiled to JavaScript using Emscripten here, with an online interactive demo here
* Flare Beautiful and powerful Flash-based graph drawing
* NodeBox Python graph visualization
* Processing.js JavaScript port of the Processing library by John Resig

A: As guruz mentioned, the JIT has several lovely graph/tree layouts, including the quite appealing RGraph and HyperTree visualizations. Also, I've just put up a super simple SVG-based implementation on github (no dependencies, ~125 LOC) that should work well enough for small graphs displayed in modern browsers.

A: Disclaimer: I'm a developer of Cytoscape.js. Cytoscape.js is an HTML5 graph visualisation library. The API is sophisticated and follows jQuery conventions, including

* selectors for querying and filtering (cy.elements("node[weight >= 50].someClass") does much as you would expect),
* chaining (e.g. cy.nodes().unselect().trigger("mycustomevent")),
* jQuery-like functions for binding to events,
* elements as collections (like jQuery has collections of HTMLDomElements),
* extensibility (can add custom layouts, UI, core & collection functions, and so on),
* and more.

If you're thinking about building a serious webapp with graphs, you should at least consider Cytoscape.js.
It's free and open-source: http://js.cytoscape.org

A: In a commercial scenario, a serious contender for sure is yFiles for HTML. It offers:

* Easy import of custom data (this interactive online demo seems to pretty much do exactly what the OP was looking for)
* Interactive editing for creating and manipulating the diagrams through user gestures (see the complete editor)
* A huge programming API for customizing each and every aspect of the library
* Support for grouping and nesting (both interactive, as well as through the layout algorithms)
* Does not depend on a specific UI toolkit but supports integration into almost any existing JavaScript toolkit (see the "integration" demos)
* Automatic layout (various styles, like "hierarchic", "organic", "orthogonal", "tree", "circular", "radial", and more)
* Automatic sophisticated edge routing (orthogonal and organic edge routing with obstacle avoidance)
* Incremental and partial layout (adding and removing elements and only slightly or not at all changing the rest of the diagram)
* Implementations of graph analysis algorithms (paths, centralities, network flows, etc.)
* Uses HTML5 technologies like SVG+CSS and Canvas and modern JavaScript leveraging properties and other ES5 and ES6 features (but for the same reason will not run in IE versions 8 and lower).
* Uses a modular API that can be loaded on demand using UMD loaders

Here is a sample rendering that shows most of the requested features. Full disclosure: I work for yWorks, but on Stack Overflow I do not represent my employer.
{ "language": "en", "url": "https://stackoverflow.com/questions/7034", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "523" }
Q: How to show a GUI message box from a bash script in linux? I'm writing a few little bash scripts under Ubuntu Linux. I want to be able to run them from the GUI without needing a terminal window to enter any input or view any output. So far the only input required is a password for sudo - and gksudo handles that fine. But I haven't found an easy way to show a message box yet. Is there some kind of 'gkmessage' command available? I'd prefer something present in a default Ubuntu install, but I don't mind installing a new package if necessary.

A: Here's a little Tcl script that will do what you want. The Wish interpreter should be installed by default on Ubuntu.

#!/usr/bin/wish
pack [label .msg -text [lindex $argv 0]]
pack [entry .ent]
bind .ent <KeyPress-Return> { puts [.ent get]; destroy . }
focus .ent

Call it like this: myanswer=`gui-prompt "type your answer and press enter"`

A: There is also dialog and the KDE version kdialog. dialog is used by Slackware, so it might not be immediately available on other distributions.

A: The zenity application appears to be what you are looking for. To take input from zenity, you can specify a variable and have the output of zenity --entry saved to it. It looks something like this: my_variable=$(zenity --entry) If you look at the value in my_variable now, it will be whatever was typed in the zenity pop-up entry dialog. If you want to give some sort of prompt as to what the user (or you) should enter in the dialog, add the --text switch with the label that you want. It looks something like this: my_variable=$(zenity --entry --text="What's my variable:") Zenity has a lot of other nice options for specific tasks, so you might want to check those out as well with zenity --help. One example is the --calendar option that lets you select a date from a graphical calendar. my_date=$(zenity --calendar) Which gives a nicely formatted date based on what the user clicked on: echo ${my_date} gives: 08/05/2009 There are also options for slider selectors, errors, lists and so on. Hope this helps.

A: How about Ubuntu's alert? It can be used after any operation to alert that it finished, and it even shows a red cross icon if the operation finished with errors: ls -la; alert

A: Zenity is really the exact tool that I think you are looking for. See zenity --help.

A: I found the xmessage command, which is sort of good enough.

A: You can use shellmarks to display a GUI dialog prior to your shell script running, which will allow the user to enter data that will be placed in the environment.

#!/bin/bash
echo "Hello ${name}"
exit 0
---
[name]
type="text"
label="Please enter your name"
required=true

Running the script: shellmarks hello.sh If you enter "Steve" in the box and press run, the output will be Hello Steve Disclosure: I'm the author of Shellmarks

A: In many Linux distros the notify-send command will throw one of those nice perishable notifications in the top right corner. Like so: notify-send "My name is bash and I rock da house" B.e.a.utiful!

A: I believe Zenity will do what you want. It's specifically designed for displaying GTK dialogs from the command line, and it's available as an Ubuntu package.

A: Everyone mentions zenity; there seem to be many others. A mixed up but interesting list is at http://alternativeto.net/software/zenity/

zenity: First, an example of zenity featuring text formatting markup, window title, and button label.

zenity \
--info \
--text="<span size=\"xx-large\">Time is $(date +%Hh%M).</span>\n\nGet your <b>coffee</b>." \
--title="Coffee time" \
--ok-label="Sip"

gxmessage:

gxmessage "my text"

xmessage: xmessage is very old so it is stable and probably available in all distributions that use X (since it's distributed with X). It is customizable through X resources, for those that have been using Linux or Unix long enough to know what that means (.Xdefaults, anyone?).

xmessage -buttons Ok:0,"Not sure":1,Cancel:2 -default Ok -nearmouse "Is xmessage enough for the job ?" -timeout 10

kdialog (KDE tool):

kdialog --error "Some error occurred"

YAD (Yet Another Dialog): Yad is included in newer Ubuntu versions. There is also this PPA: YAD: Zenity On Steroids [Display Graphical Dialogs From Shell Scripts] ~ Web Upd8: Ubuntu / Linux blog. Does not seem to auto-size dialogs.

echo My text | yad \
--text-info \
--width=400 \
--height=200

A bigger example:

yad \
--title="Desktop entry editor" \
--text="Simple desktop entry editor" \
--form \
--field="Type:CB" \
--field="Name" \
--field="Generic name" \
--field="Comment" \
--field="Command:FL" \
--field="Icon" \
--field="In terminal:CHK" \
--field="Startup notify:CHK" "Application" "Name" "Generic name" "This is the comment" "/usr/bin/yad" "yad" FALSE TRUE \
--button="WebUpd8:2" \
--button="gtk-ok:0" \
--button="gtk-cancel:1"

Others not in the standard Ubuntu repositories:

* shellgui
* xdialog
* gtkdialog

Off-topic (for the terminal):

whiptail --msgbox "my text" 10 20
dialog --msgbox "my text" 10 20

Feel free to edit.

A: alert and notify-send seem to be the same thing. I use notify-send for non-input messages as it doesn't steal focus, and I cannot find a way to stop zenity etc. from doing this. e.g.

# This will display message and then disappear after a delay:
notify-send "job complete"
# This will display message and stay on-screen until clicked:
notify-send -u critical "job complete"

A: If nothing else is present, you can launch an xterm and echo in it, like this:

xterm -e bash -c 'echo "this is the message";echo;echo -n "press enter to continue "; stty sane -echo;answer=$( while ! head -c 1;do true ;done);'

A: Kdialog and dialog are both good, but I'd recommend Zenity. Quick, easy, and much better looking than xmessage or dialog.

A: I'm liking what I'm seeing with script-dialog. It ticks all my boxes, plus some:

* pops up GUI boxes, but has a text-mode fallback
* supports various sudo variants (gksudo, kde-sudo, ...)
* can re-launch itself in a terminal window

Indeed, it's a wrapper for kdialog, zenity, dialog, whiptail and a custom fallback. The drawback is that it doesn't have a CLI, but instead is meant to be sourced into a bash script.
{ "language": "en", "url": "https://stackoverflow.com/questions/7035", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "179" }
Q: What is the difference between String and string in C#? What are the differences between these two and which one should I use?

string s = "Hello world!";
String s = "Hello world!";

A: Against what seems to be common practice among other programmers, I prefer String over string, just to highlight the fact that String is a reference type, as Jon Skeet mentioned.

A: string is an alias (or shorthand) of System.String. That means, by typing string we mean System.String. You can read more in this link: 'string' is an alias/shorthand of System.String.

A: String: A String object is called immutable (read-only) because its value cannot be modified once it has been created. Methods that appear to modify a String object actually return a new String object that contains the modification. If it is necessary to modify the actual contents of a string-like object, use the System.Text.StringBuilder class. string: The string type represents a sequence of zero or more Unicode characters. string is an alias for String in the .NET Framework. string is the intrinsic C# datatype, and is an alias for the system-provided type "System.String". The C# specification states that as a matter of style the keyword (string) is preferred over the full system type name (System.String, or String). Although string is a reference type, the equality operators (== and !=) are defined to compare the values of string objects, not references. This makes testing for string equality more intuitive. Difference between string & String:

* string is usually used for declaration, while String is used for accessing static string methods.
* You can use 'string' to declare fields, properties etc. that use the predefined type 'string', since the C# specification tells me this is good style.
* You can use 'String' to use system-defined methods, such as String.Compare etc. They are originally defined on 'System.String', not 'string'. 'string' is just an alias in this case.
* You can also use 'String' or 'System.Int32' when communicating with other systems, especially if they are CLR-compliant. I.e., if I get data from elsewhere, I'd de-serialize it into a System.Int32 rather than an 'int', if the origin by definition was something other than a C# system.

A: There is no difference between the two. You can use either of them in your code. System.String is a class (reference type) defined in mscorlib in the namespace System. In other words, System.String is a type in the CLR. string is a keyword in C#.

A: I'd just like to add this to lfoust's answer, from Richter's book: The C# language specification states, "As a matter of style, use of the keyword is favored over use of the complete system type name." I disagree with the language specification; I prefer to use the FCL type names and completely avoid the primitive type names. In fact, I wish that compilers didn't even offer the primitive type names and forced developers to use the FCL type names instead. Here are my reasons:

* I've seen a number of developers confused, not knowing whether to use string or String in their code. Because in C# string (a keyword) maps exactly to System.String (an FCL type), there is no difference and either can be used. Similarly, I've heard some developers say that int represents a 32-bit integer when the application is running on a 32-bit OS and that it represents a 64-bit integer when the application is running on a 64-bit OS.
This statement is absolutely false: in C#, an int always maps to System.Int32, and therefore it represents a 32-bit integer regardless of the OS the code is running on. If programmers would use Int32 in their code, then this potential confusion is also eliminated.
* In C#, long maps to System.Int64, but in a different programming language, long could map to an Int16 or Int32. In fact, C++/CLI does treat long as an Int32. Someone reading source code in one language could easily misinterpret the code's intention if he or she were used to programming in a different programming language. In fact, most languages won't even treat long as a keyword and won't compile code that uses it.
* The FCL has many methods that have type names as part of their method names. For example, the BinaryReader type offers methods such as ReadBoolean, ReadInt32, ReadSingle, and so on, and the System.Convert type offers methods such as ToBoolean, ToInt32, ToSingle, and so on. Although it's legal to write the following code, the line with float feels very unnatural to me, and it's not obvious that the line is correct:

BinaryReader br = new BinaryReader(...);
float val = br.ReadSingle(); // OK, but feels unnatural
Single val = br.ReadSingle(); // OK and feels good

* Many programmers that use C# exclusively tend to forget that other programming languages can be used against the CLR, and because of this, C#-isms creep into the class library code. For example, Microsoft's FCL is almost exclusively written in C# and developers on the FCL team have now introduced methods into the library such as Array's GetLongLength, which returns an Int64 value that is a long in C# but not in other languages (like C++/CLI). Another example is System.Linq.Enumerable's LongCount method.

I didn't really get his point until I read the complete paragraph.

A: String stands for System.String and it is a .NET Framework type. string is an alias in the C# language for System.String. Both of them are compiled to System.String in IL (Intermediate Language), so there is no difference. Choose what you like and use that. If you code in C#, I'd prefer string as it's a C# type alias and well-known by C# programmers. I can say the same about (int, System.Int32) etc.

A: String (System.String) is a class in the base class library. string (lower case) is a reserved word in C# that is an alias for System.String. Int32 vs. int is a similar situation, as is Boolean vs. bool. These C#-language-specific keywords enable you to declare primitives in a style similar to C.

A: In the context of the MSDN documentation, the String class is documented like any other data type (e.g., XmlReader, StreamReader) in the BCL. And string is documented like a keyword (C# Reference) or like any basic C# language construct (e.g., for, while, default). Reference.

A: As pointed out, they are the same thing and string is just an alias to String. For what it's worth, I use string to declare types - variables, properties, return values and parameters. This is consistent with the use of other system types - int, bool, var etc. (although Int32 and Boolean are also correct). I use String when using the static methods on the String class, like String.Split() or String.IsNullOrEmpty(). I feel that this makes more sense because the methods belong to a class, and it is consistent with how I use other static methods.

A: I prefer to use string because this type is used so much that I don't want the syntax highlighter blending it in with all the other classes.
Although it is a class, it is used more like a primitive, therefore I think the different highlight colour is appropriate. If you right-click on the string keyword and select Go to definition from the context menu, it'll take you to the String class - it's just syntactic sugar, but it improves readability imo.

A: A string is a sequential collection of characters that is used to represent text. A String object is a sequential collection of System.Char objects that represent a string; a System.Char object corresponds to a UTF-16 code unit. The value of the String object is the content of the sequential collection of System.Char objects, and that value is immutable (that is, it is read-only). For more information about the immutability of strings, see the Immutability and the StringBuilder class section on MSDN. The maximum size of a String object in memory is 2GB, or about 1 billion characters. Note: this answer is extracted from the MSDN help section. You can see the full content in the MSDN String Class topic under the Remarks section.

A: string is the short name for System.String. String, or System.String, is the name of the string type in the CTS (Common Type System).

A: It's a matter of convention, really. string just looks more like C/C++ style. The general convention is to use whatever shortcuts your chosen language has provided (int/Int for Int32). This goes for "object" and decimal as well. Theoretically this could help to port code into some future 64-bit standard in which "int" might mean Int64, but that's not the point, and I would expect any upgrade wizard to change any int references to Int32 anyway just to be safe.

A: @JaredPar (a developer on the C# compiler and prolific SO user!) wrote a great blog post on this issue. I think it is worth sharing here. It is a nice perspective on our subject. string vs. String is not a style debate [...] The keyword string has concrete meaning in C#. It is the type System.String which exists in the core runtime assembly. The runtime intrinsically understands this type and provides the capabilities developers expect for strings in .NET. Its presence is so critical to C# that if that type doesn't exist the compiler will exit before attempting to even parse a line of code. Hence string has a precise, unambiguous meaning in C# code. The identifier String though has no concrete meaning in C#. It is an identifier that goes through all the name lookup rules as Widget, Student, etc … It could bind to string or it could bind to a type in another assembly entirely whose purposes may be entirely different than string. Worse it could be defined in a way such that code like String s = "hello"; continued to compile.

class TricksterString {
    void Example() {
        String s = "Hello World"; // Okay but probably not what you expect.
    }
}

class String {
    public static implicit operator String(string s) => null;
}

The actual meaning of String will always depend on name resolution. That means it depends on all the source files in the project and all the types defined in all the referenced assemblies. In short it requires quite a bit of context to know what it means. True that in the vast majority of cases String and string will bind to the same type. But using String still means developers are leaving their program up to interpretation in places where there is only one correct answer. When String does bind to the wrong type it can leave developers debugging for hours, filing bugs on the compiler team, and generally wasting time that could've been saved by using string.
Another way to visualize the difference is with this sample: string s1 = 42; // Errors 100% of the time String s2 = 42; // Might error, might not, depends on the code Many will argue that while this is information technically accurate using String is still fine because it’s exceedingly rare that a codebase would define a type of this name. Or that when String is defined it’s a sign of a bad codebase. [...] You’ll see that String is defined for a number of completely valid purposes: reflection helpers, serialization libraries, lexers, protocols, etc … For any of these libraries String vs. string has real consequences depending on where the code is used. So remember when you see the String vs. string debate this is about semantics, not style. Choosing string gives crisp meaning to your codebase. Choosing String isn’t wrong but it’s leaving the door open for surprises in the future. Note: I copy/pasted most of the blog posts for archive reasons. I ignore some parts, so I recommend skipping and reading the blog post if you can. A: String is not a keyword and it can be used as Identifier whereas string is a keyword and cannot be used as Identifier. And in function point of view both are same. A: Coming late to the party: I use the CLR types 100% of the time (well, except if forced to use the C# type, but I don't remember when the last time that was). I originally started doing this years ago, as per the CLR books by Ritchie. It made sense to me that all CLR languages ultimately have to be able to support the set of CLR types, so using the CLR types yourself provided clearer, and possibly more "reusable" code. Now that I've been doing it for years, it's a habit and I like the coloration that VS shows for the CLR types. The only real downer is that auto-complete uses the C# type, so I end up re-typing automatically generated types to specify the CLR type instead. Also, now, when I see "int" or "string", it just looks really wrong to me, like I'm looking at 1970's C code. A: As far as I know, string is just an alias for System.String, and similar aliases exist for bool, object, int... the only subtle difference is that you can use string without a "using System;" directive, while String requires it (otherwise you should specify System.String in full). About which is the best to use, I guess it's a matter of taste. Personally I prefer string, but I it's not a religious issue. A: String : Represent a class string : Represent an alias It's just a coding convention from microsoft . A: As you already know string is just alias for System.String. But what should I use? it just personal preference. In my case, I love to use string rather than use System.String because String requires a namespace using System; or a full name System.String. So I believe the alias string was created for simplicity and I love it! A: string is a shortcut for System.String. The only difference is that you don´t need to reference to System.String namespace. So would be better using string than String. A: it is common practice to declare a variable using C# keywords. In fact, every C# type has an equivalent in .NET. As another example, short and int in C# map to Int16 and Int32 in .NET. So, technically there is no difference between string and String, but In C#, string is an alias for the String class in .NET framework. A: string is an alias in C# for System.String. So technically, there is no difference. It's like int vs. System.Int32. 
As far as guidelines, it's generally recommended to use string any time you're referring to an object. e.g. string place = "world"; Likewise, I think it's generally recommended to use String if you need to refer specifically to the class. e.g. string greet = String.Format("Hello {0}!", place); This is the style that Microsoft tends to use in their examples. It appears that the guidance in this area may have changed, as StyleCop now enforces the use of the C# specific aliases. A: string is an alias for String in the .NET Framework. Where "String" is in fact System.String. I would say that they are interchangeable and there is no difference when and where you should use one or the other. It would be better to be consistent with which one you did use though. For what it's worth, I use string to declare types - variables, properties, return values and parameters. This is consistent with the use of other system types - int, bool, var etc (although Int32 and Boolean are also correct). I use String when using the static methods on the String class, like String.Split() or String.IsNullOrEmpty(). I feel that this makes more sense because the methods belong to a class, and it is consistent with how I use other static methods. A: String is the class of string. If you remove System namespace from using statements, you can see that String has gone but string is still here. string is keyword for String. Like int and Int32 short and Int16 long and Int64 So the keywords are just some words that uses a class. These keywords are specified by C#(so Microsoft, because C# is Microsoft's). Briefly, there's no difference. Using string or String. That doesn't matter. They are same. A: The best answer I have ever heard about using the provided type aliases in C# comes from Jeffrey Richter in his book CLR Via C#. Here are his 3 reasons: * *I've seen a number of developers confused, not knowing whether to use string or String in their code. Because in C# the string (a keyword) maps exactly to System.String (an FCL type), there is no difference and either can be used. *In C#, long maps to System.Int64, but in a different programming language, long could map to an Int16 or Int32. In fact, C++/CLI does in fact treat long as an Int32. Someone reading source code in one language could easily misinterpret the code's intention if he or she were used to programming in a different programming language. In fact, most languages won't even treat long as a keyword and won't compile code that uses it. *The FCL has many methods that have type names as part of their method names. For example, the BinaryReader type offers methods such as ReadBoolean, ReadInt32, ReadSingle, and so on, and the System.Convert type offers methods such as ToBoolean, ToInt32, ToSingle, and so on. Although it's legal to write the following code, the line with float feels very unnatural to me, and it's not obvious that the line is correct: BinaryReader br = new BinaryReader(...); float val = br.ReadSingle(); // OK, but feels unnatural Single val = br.ReadSingle(); // OK and feels good So there you have it. I think these are all really good points. I however, don't find myself using Jeffrey's advice in my own code. Maybe I am too stuck in my C# world but I end up trying to make my code look like the framework code. A: There is no difference. The C# keyword string maps to the .NET type System.String - it is an alias that keeps to the naming conventions of the language. Similarly, int maps to System.Int32. 
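To make the alias concrete, here is a minimal sketch of my own (an illustration, not from any of the answers above), assuming a source file with no using directives at all; the keyword needs no import, while the class name must then be fully qualified:
class AliasSketch
{
    // The keyword compiles with no using directive at all.
    string viaKeyword = "hello";

    // The CLR type name always works when fully qualified.
    System.String viaClrName = "hello";

    // String bare = "hello"; // would not compile here without 'using System;'
}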
A: There's a quote on this issue from Daniel Solis' book. All the predefined types are mapped directly to underlying .NET types. The C# type names (string) are simply aliases for the .NET types (String or System.String), so using the .NET names works fine syntactically, although this is discouraged. Within a C# program, you should use the C# names rather than the .NET names. A: string is a reserved word, but String is just a class name. This means that string cannot be used as a variable name by itself. If for some reason you wanted a variable called string, you'd see only the first of these compiles: StringBuilder String = new StringBuilder(); // compiles StringBuilder string = new StringBuilder(); // doesn't compile If you really want a variable name called string you can use @ as a prefix: StringBuilder @string = new StringBuilder(); Another critical difference: Stack Overflow highlights them differently. A: declare a string variable with string but use the String class when accessing one of its static members: String.Format() Variable string name = ""; A: All the above is basically correct. One can check it. Just write a short method public static void Main() { var s = "a string"; } compile it and open .exe with ildasm to see .method private hidebysig static void Main(string[] args) cil managed { .entrypoint // Code size 8 (0x8) .maxstack 1 .locals init ([0] string s) IL_0000: nop IL_0001: ldstr "a string" IL_0006: stloc.0 IL_0007: ret } // end of method Program::Main then change var to string and String, compile, open with ildasm and see IL does not change. It also shows the creators of the language prefer just string when difining variables (spoiler: when calling members they prefer String). A: There is one difference - you can't use String without using System; beforehand. A: Yes, that's no difference between them, just like the bool and Boolean. A: string is a keyword, and you can't use string as an identifier. String is not a keyword, and you can use it as an identifier: Example string String = "I am a string"; The keyword string is an alias for System.String aside from the keyword issue, the two are exactly equivalent. typeof(string) == typeof(String) == typeof(System.String) A: There is no major difference between string and String in C#. String is a class in the .NET framework in the System namespace. The fully qualified name is System.String. The lower case string is an alias of System.String. But its recommended to use string while declaring variables like: string str = "Hello"; And we can use String while using any built-in method for strings like String.IsNullOrEmpty(). Also one difference between these two is like before using String we have to import system namespace in cs file and string can be used directly. A: Just for the sake of completeness, here's a brain dump of related information... As others have noted, string is an alias for System.String. Assuming your code using String compiles to System.String (i.e. you haven't got a using directive for some other namespace with a different String type), they compile to the same code, so at execution time there is no difference whatsoever. This is just one of the aliases in C#. 
The complete list is: object: System.Object string: System.String bool: System.Boolean byte: System.Byte sbyte: System.SByte short: System.Int16 ushort: System.UInt16 int: System.Int32 uint: System.UInt32 long: System.Int64 ulong: System.UInt64 float: System.Single double: System.Double decimal: System.Decimal char: System.Char Apart from string and object, the aliases are all to value types. decimal is a value type, but not a primitive type in the CLR. The only primitive type which doesn't have an alias is System.IntPtr. In the spec, the value type aliases are known as "simple types". Literals can be used for constant values of every simple type; no other value types have literal forms available. (Compare this with VB, which allows DateTime literals, and has an alias for it too.) There is one circumstance in which you have to use the aliases: when explicitly specifying an enum's underlying type. For instance: public enum Foo : UInt32 {} // Invalid public enum Bar : uint {} // Valid That's just a matter of the way the spec defines enum declarations - the part after the colon has to be the integral-type production, which is one token of sbyte, byte, short, ushort, int, uint, long, ulong, char... as opposed to a type production as used by variable declarations for example. It doesn't indicate any other difference. Finally, when it comes to which to use: personally I use the aliases everywhere for the implementation, but the CLR type for any APIs. It really doesn't matter too much which you use in terms of implementation - consistency among your team is nice, but no-one else is going to care. On the other hand, it's genuinely important that if you refer to a type in an API, you do so in a language-neutral way. A method called ReadInt32 is unambiguous, whereas a method called ReadInt requires interpretation. The caller could be using a language that defines an int alias for Int16, for example. The .NET framework designers have followed this pattern, good examples being in the BitConverter, BinaryReader and Convert classes. A: There is no difference between the two - string, however, appears to be the preferred option when considering other developers' source code. A: It's been covered above; however, you can't use string in reflection; you must use String. A: System.String is the .NET string class - in C# string is an alias for System.String - so in use they are the same. As for guidelines I wouldn't get too bogged down and just use whichever you feel like - there are more important things in life and the code is going to be the same anyway. If you find yourselves building systems where it is necessary to specify the size of the integers you are using and so tend to use Int16, Int32, UInt16, UInt32 etc. then it might look more natural to use String - and when moving around between different .net languages it might make things more understandable - otherwise I would use string and int. A: One argument not mentioned elsewhere to prefer the pascal case String: System.String is a reference type, and reference types names are pascal case by convention. A: There are many (e.g. Jeffrey Richter in his book CLR Via C#) who are saying that there is no difference between System.String and string, and also System.Int32 and int, but we must discriminate a little deeper to really squeeze the juice out of this question so we can get all the nutritional value out of it (write better code). A. They are the Same... * *to the compiler. *to the developer. (We know #1 and eventually achieve autopilot.) B. 
They are Different in Framework and in Non-C# Contexts. Different...
*to OTHER languages that are NOT C#
*in an optimized CIL (was MSIL) context (the .NET VM assembly language)
*in a platform-targeted context -- the .NET Framework or Mono or any CIL-type area
*in a book targeting multiple .NET languages (such as VB.NET, F#, etc.)
So, the true answer is that it is only because C# has to co-own the .NET space with other languages that this question even exists.
C. To Summarize...
You use string and int and the other C# types for a C#-only targeted audience (ask the question: who is going to read this code, or use this library?). For your internal company, if you only use C#, then stick to the C# types.
...and you use System.String and System.Int32 for a multilingual or framework-targeted audience (when C# is not the only audience). For your internal organization, if you also use VB.NET or F# or any other .NET language, or develop libraries for consumption by customers who may, then you should use the "Frameworky" types in those contexts so that everyone can understand your interface, no matter what universe they are from. (What is Klingon for System.String, anyway?) HTH.
A: Essentially, there is no difference between string and String in C#. String is a class in the .NET Framework, in the System namespace (System.String), whereas lowercase string is an alias for System.String. Logging the full name of both types can prove this:
string s1 = "hello there 1";
String s2 = "hello there 2";
Console.WriteLine(s1.GetType().FullName); // System.String
Console.WriteLine(s2.GetType().FullName); // System.String
It is recommended to use string over String, but it's really a matter of choice. Most developers use string to declare variables in C# and use the System.String class for any built-in string methods, like, for example, the String.IsNullOrEmpty() method.
A: Both are the same. The difference is how you use it. The convention is: string is for variables; String is for calling the String class methods. Like:
string fName = "John";
string lName = "Smith";
string fullName = String.Concat(fName, lName);
if (String.IsNullOrEmpty(fName))
{
    Console.WriteLine("Enter first name");
}
A: I prefer the capitalized .NET types (rather than the aliases) for formatting reasons. The .NET types are colored the same as other object types (the value types are proper objects, after all). Conditional and control keywords (like if, switch, and return) are lowercase and colored dark blue (by default). And I would rather not have the disagreement in use and format. Consider:
String someString;
string anotherString;
A: There is practically no difference. The C# keyword string maps to the .NET type System.String - it is an alias that keeps to the naming conventions of the language.
A: There is one practical difference between string and String:
nameof(String); // compiles
nameof(string); // doesn't compile
This is because string is a keyword (an alias in this case) whereas String is a type. The same is true for the other aliases as well.
| Alias   | Type           |
|---------|----------------|
| bool    | System.Boolean |
| byte    | System.Byte    |
| sbyte   | System.SByte   |
| char    | System.Char    |
| decimal | System.Decimal |
| double  | System.Double  |
| float   | System.Single  |
| int     | System.Int32   |
| uint    | System.UInt32  |
| long    | System.Int64   |
| ulong   | System.UInt64  |
| object  | System.Object  |
| short   | System.Int16   |
| ushort  | System.UInt16  |
| string  | System.String  |
A: string and String are identical in all ways (except the uppercase "S"). There are no performance implications either way. Lowercase string is preferred in most projects due to the syntax highlighting.
A: This YouTube video demonstrates practically how they differ. But now for a long textual answer. When we talk about .NET there are two different things: there is the .NET Framework, and there are the languages (C#, VB.NET, etc.) which use that framework. "System.String", a.k.a. "String" (capital "S"), is a .NET Framework data type, while "string" is a C# data type. In short, "string" is an alias (the same thing called by a different name) for "String". So technically both of the code statements below will give the same output:
String s = "I am String";
or
string s = "I am String";
In the same way, there are aliases for the other C# data types, as shown below: object: System.Object, string: System.String, bool: System.Boolean, byte: System.Byte, sbyte: System.SByte, short: System.Int16 and so on.
Now the million-dollar question from the programmer's point of view: so when to use "String" and "string"? The first thing, to avoid confusion: use one of them consistently. But from a best-practices perspective, when you do a variable declaration it's good to use "string" (small "s"), and when you are using it as a class name then "String" (capital "S") is preferred. In the code below, the left-hand side is a variable declaration and it is declared using "string". On the right-hand side we are calling a static member of the class, so "String" is more sensible:
string s = String.Concat("I am ", "String");
A: C# is a language which is used together with the CLR. string is a type in C#. System.String is a type in the CLR. When you use C# together with the CLR, string will be mapped to System.String. Theoretically, you could implement a C# compiler that generated Java bytecode. A sensible implementation of this compiler would probably map string to java.lang.String in order to interoperate with the Java runtime library.
A: In C#, string is an alias for the String class in the .NET Framework. In fact, every C# type has an equivalent in .NET. Another little difference is that if you use the String class, you need to import the System namespace, whereas you don't have to import a namespace when using the string keyword.
A: Lowercase string is an alias for System.String. They are the same in C#. There's a debate over whether you should use the System types (System.Int32, System.String, etc.) or the C# aliases (int, string, etc.). I personally believe you should use the C# aliases, but that's just my personal preference.
A: string is just an alias for System.String. The compiler will treat them identically. The only practical difference is the syntax highlighting, as you mention, and that you have to write using System if you use String.
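The reflection point raised in one of the answers above can be demonstrated with Type.GetType, which resolves CLR type names rather than C# aliases. A quick sketch (my own illustration, assuming a plain console project):
using System;

class ReflectionAliasSketch
{
    static void Main()
    {
        // The CLR name resolves; the C# keyword is unknown to reflection.
        Console.WriteLine(Type.GetType("System.String") != null); // True
        Console.WriteLine(Type.GetType("string") == null);        // True
    }
}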
A: In case it's useful to really see there is no difference between string and System.String: var method1 = typeof(MyClass).GetMethod("TestString1").GetMethodBody().GetILAsByteArray(); var method2 = typeof(MyClass).GetMethod("TestString2").GetMethodBody().GetILAsByteArray(); //... public string TestString1() { string str = "Hello World!"; return str; } public string TestString2() { String str = "Hello World!"; return str; } Both produce exactly the same IL byte array: [ 0, 114, 107, 0, 0, 112, 10, 6, 11, 43, 0, 7, 42 ] A: Both are same. But from coding guidelines perspective it's better to use string instead of String. This is what generally developers use. e.g. instead of using Int32 we use int as int is alias to Int32 FYI “The keyword string is simply an alias for the predefined class System.String.” - C# Language Specification 4.2.3 http://msdn2.microsoft.com/En-US/library/aa691153.aspx A: String refers to a string object which comes with various functions for manipulating the contained string. string refers to a primitive type In C# they both compile to String but in other languages they do not so you should use String if you want to deal with String objects and string if you want to deal with literals. A: In C#, string is the shorthand version of System.String (String). They basically mean the same thing. It's just like bool and Boolean, not much difference.. A: You don't need import namespace (using System;) to use string because it is a global alias of System.String. To know more about aliases you can check this link. A: First of All, both string & String are not same. There is a difference: String is not a keyword and it can be used as an identifier whereas string is a keyword and cannot be used as identifier. I am trying to explain with different example : First, when I put string s; into Visual Studio and hover over it I get (without the colour): That says that string is System.String, right? The documentation is at https://msdn.microsoft.com/en-us/library/362314fe.aspx. The second sentence says "string is an alias for String in the .NET Framework.". A: As the others are saying, they're the same. StyleCop rules, by default, will enforce you to use string as a C# code style best practice, except when referencing System.String static functions, such as String.Format, String.Join, String.Concat, etc... A: New answer after 6 years and 5 months (procrastination). While string is a reserved C# keyword that always has a fixed meaning, String is just an ordinary identifier which could refer to anything. Depending on members of the current type, the current namespace and the applied using directives and their placement, String could be a value or a type distinct from global::System.String. I shall provide two examples where using directives will not help. First, when String is a value of the current type (or a local variable): class MySequence<TElement> { public IEnumerable<TElement> String { get; set; } void Example() { var test = String.Format("Hello {0}.", DateTime.Today.DayOfWeek); } } The above will not compile because IEnumerable<> does not have a non-static member called Format, and no extension methods apply. In the above case, it may still be possible to use String in other contexts where a type is the only possibility syntactically. For example String local = "Hi mum!"; could be OK (depending on namespace and using directives). Worse: Saying String.Concat(someSequence) will likely (depending on usings) go to the Linq extension method Enumerable.Concat. 
It will not go to the static method string.Concat. Secondly, when String is another type, nested inside the current type:
class MyPiano
{
    protected class String { }

    void Example()
    {
        var test1 = String.Format("Hello {0}.", DateTime.Today.DayOfWeek);
        String test2 = "Goodbye";
    }
}
Neither statement in the Example method compiles. Here String is always a piano string, MyPiano.String. No member (static or not) Format exists on it (or is inherited from its base class). And the value "Goodbye" cannot be converted into it.
A: To be honest, in practice there is usually no difference between System.String and string. All types in C# are objects and all derive from the System.Object class. One difference is that string is a C# keyword, while String can be used as a variable name. System.String is the conventional .NET name of this type, and string is the convenient C# name. Here is a simple program which presents the difference between System.String and string:
string a = new string(new char[] { 'x', 'y', 'z' });
string b = new String(new char[] { 'x', 'y', 'z' });
String c = new string(new char[] { 'x', 'y', 'z' });
String d = new String(new char[] { 'x', 'y', 'z' });
MessageBox.Show((a.GetType() == typeof(String) && a.GetType() == typeof(string)).ToString()); // shows true
MessageBox.Show((b.GetType() == typeof(String) && b.GetType() == typeof(string)).ToString()); // shows true
MessageBox.Show((c.GetType() == typeof(String) && c.GetType() == typeof(string)).ToString()); // shows true
MessageBox.Show((d.GetType() == typeof(String) && d.GetType() == typeof(string)).ToString()); // shows true
@JonSkeet: in my compiler, public enum Foo : UInt32 { } works. I have Visual Studio 2015 Community.
A: string is equal to System.String in VS2015. If you write this:
System.String str;
then the compiler will offer a potential fix to simplify it, and after applying that fix it will look like this:
string str;
A: Using the System types makes it easier to port between C# and VB.NET, if you are into that sort of thing.
A: Jeffrey Richter wrote: Another way to think of this is that the C# compiler automatically assumes that you have the following using directives in all of your source code files:
using int = System.Int32;
using uint = System.UInt32;
using string = System.String;
...
I've seen a number of developers confused, not knowing whether to use string or String in their code. Because in C# string (a keyword) maps exactly to System.String (an FCL type), there is no difference and either can be used.
A: The tiny difference between string and String in C#: string is just an alias of System.String, and both are compiled in the same manner. The answer is very simple: string is the keyword, the data type you will mainly use when creating variables, as in string name = "Abrar". String is the class, which gives the rich set of functions and properties for manipulating strings. Hopefully you will understand easily.
A: A good way to think about it is that string is a data type, whereas String is a class type. Each has different methods, and which one to use depends on your need.
A: There are at least 4 differences:
1- string is a reserved word, but String is just a class name. This means that string cannot be used as a variable name by itself.
2- You can't use String without "using System", so you write less code by using "string".
3- 'String' is a better naming convention than 'string', as it is a type, not a variable.
4- "string" is a C# keyword and syntax highlighted in most coding editors, but not "String".
{ "language": "en", "url": "https://stackoverflow.com/questions/7074", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7428" }
Q: When should I use type abstraction in embedded systems I've worked on a number of different embedded systems. They have all used typedefs (or #defines) for types such as UINT32. This is a good technique, as it drives home the size of the type to the programmer and makes you more conscious of chances for overflow, etc. But on some systems you know that the compiler and processor won't change for the life of the project. So what should influence your decision to create and enforce project-specific types?
EDIT I think I managed to lose the gist of my question, and maybe it's really two. With embedded programming you may need types of a specific size for interfaces and also to cope with restricted resources such as RAM. This can't be avoided, but you can choose to use the basic types from the compiler. For everything else the types have less importance. You need to be careful not to cause overflow and may need to watch out for register and stack usage, which may lead you to UINT16, UCHAR. Using types such as UCHAR can add compiler 'fluff', however. Because registers are typically larger, some compilers may add code to force the result into the type. For example:
i++;
can become
ADD REG,1
AND REG, 0xFF
which is unnecessary. So I think my question should have been: given the constraints of embedded software, what is the best policy to set for a project which will have many people working on it - not all of whom will be of the same level of experience?
A: The C99 standard has a number of standard fixed-width integer types. If you can use a compiler that supports C99 (gcc does), you'll find these in <stdint.h> and you can just use them in your projects. Also, it can be especially important in embedded projects to use types as a sort of "safety net" for things like unit conversions. If you can use C++, I understand that there are some "unit" libraries out there that let you work in physical units that are defined by the C++ type system (via templates) and that are compiled as operations on the underlying scalar types. For example, these libraries won't let you add a distance_t to a mass_t because the units don't line up; you'll actually get a compiler error. Even if you can't work in C++ or another language that lets you write code that way, you can at least use the C type system to help you catch errors like that by eye. (That was actually the original intent of Simonyi's Hungarian notation.) Just because the compiler won't yell at you for adding a meter_t to a gram_t doesn't mean you shouldn't use types like that. Code reviews will be much more productive at discovering unit errors then.
A: My opinion is that if you are depending on a minimum/maximum/specific size, don't just assume that (say) an unsigned int is 32 bits - use uint32_t instead (assuming your compiler supports C99).
A: I like using stdint.h types for defining system APIs specifically because they explicitly say how large items are. Back in the old days of Palm OS, the system APIs were defined using a bunch of wishy-washy types like "Word" and "SWord" that were inherited from very classic Mac OS. They did a cleanup to instead say Int16, and it made the API easier for newcomers to understand, especially with the weird 16-bit pointer issues on that system. When they were designing Palm OS Cobalt, they changed those names again to match stdint.h's names, making it even more clear and reducing the amount of typedefs they had to manage.
A: I believe that MISRA standards suggest (require?) the use of typedefs.
From a personal perspective, using typedefs leaves no confusion as to the size (in bits / bytes) of certain types. I have seen lead developers attempt both ways of developing by using standard types e.g. int and using custom types e.g. UINT32. If the code isn't portable there is little real benefit in using typedefs, however , if like me then you work on both types of software (portable and fixed environment) then it can be useful to keep a standard and use the cutomised types. At the very least like you say, the programmer is then very much aware of how much memory they are using. Another factor to consider is how 'sure' are you that the code will not be ported to another environment? Ive seen processor specific code have to be translated as a hardware engieer has suddenly had to change a board, this is not a nice situation to be in but due to the custom typedefs it could have been a lot worse! A: I use type abstraction very rarely. Here are my arguments, sorted in increasing order of subjectivity: * *Local variables are different from struct members and arrays in the sense that you want them to fit in a register. On a 32b/64b target, a local int16_t can make code slower compared to a local int since the compiler will have to add operations to /force/ overflow according to the semantics of int16_t. While C99 defines an intfast_t typedef, AFAIK a plain int will fit in a register just as well, and it sure is a shorter name. *Organizations which like these typedefs almost invariably end up with several of them (INT32, int32_t, INT32_T, ad infinitum). Organizations using built-in types are thus better off, in a way, having just one set of names. I wish people used the typedefs from stdint.h or windows.h or anything existing; and when a target doesn't have that .h file, how hard is it to add one? *The typedefs can theoretically aid portability, but I, for one, never gained a thing from them. Is there a useful system you can port from a 32b target to a 16b one? Is there a 16b system that isn't trivial to port to a 32b target? Moreover, if most vars are ints, you'll actually gain something from the 32 bits on the new target, but if they are int16_t, you won't. And the places which are hard to port tend to require manual inspection anyway; before you try a port, you don't know where they are. Now, if someone thinks it's so easy to port things if you have typedefs all over the place - when time comes to port, which happens to few systems, write a script converting all names in the code base. This should work according to the "no manual inspection required" logic, and it postpones the effort to the point in time where it actually gives benefit. *Now if portability may be a theoretical benefit of the typedefs, readability sure goes down the drain. Just look at stdint.h: {int,uint}{max,fast,least}{8,16,32,64}_t. Lots of types. A program has lots of variables; is it really that easy to understand which need to be int_fast16_t and which need to be uint_least32_t? How many times are we silently converting between them, making them entirely pointless? (I particularly like BOOL/Bool/eBool/boolean/bool/int conversions. Every program written by an orderly organization mandating typedefs is littered with that). *Of course in C++ we could make the type system more strict, by wrapping numbers in template class instantiations with overloaded operators and stuff. 
This means that you'll now get error messages of the form "class Number<int,Least,32> has no operator+ overload for argument of type class Number<unsigned long long,Fast,64>, candidates are..." I don't call this "readability", either. Your chances of implementing these wrapper classes correctly are microscopic, and most of the time you'll wait for the innumerable template instantiations to compile.
A: Consistency, convenience and readability. "UINT32" is much more readable and writeable than "unsigned long long", which is the equivalent on some systems. Also, the compiler and processor may be fixed for the life of a project, but the code from that project may find new life in another project. In this case, having consistent data types is very convenient.
A: If your embedded system is somehow a safety-critical system (or similar), it's strongly advised (if not required) to use typedefs over plain types. As TK. has said before, MISRA-C has an (advisory) rule to do so:
Rule 6.3 (advisory): typedefs that indicate size and signedness should be used in place of the basic numerical types.
(from MISRA-C 2004; it's Rule #13 (adv) of MISRA-C 1998)
The same also applies to C++ in this area; e.g., the JSF C++ coding standards:
AV Rule 209: A UniversalTypes file will be created to define all standard types for developers to use. The types include: [uint16, int16, uint32_t etc.]
A: Using <stdint.h> makes your code more portable for unit testing on a PC. It can bite you pretty hard when you have tests for everything but it still breaks on your target system because an int is suddenly only 16 bits long.
A: Maybe I'm weird, but I use ub, ui, ul, sb, si, and sl for my integer types. Perhaps the "i" for 16 bits seems a bit dated, but I like the look of ui/si better than uw/sw.
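Pulling the <stdint.h> suggestions above together, a project-wide types header often looks something like the following minimal sketch (assuming a C99 toolchain; the upper-case names mirror the UINT32 style from the question and are purely illustrative):
/* project_types.h - fixed-width type names for the whole project. */
#ifndef PROJECT_TYPES_H
#define PROJECT_TYPES_H

#include <stdint.h>

typedef uint8_t  UINT8;   /* exactly 8 bits, unsigned  */
typedef int16_t  INT16;   /* exactly 16 bits, signed   */
typedef uint32_t UINT32;  /* exactly 32 bits, unsigned */

/* For hot local counters, the "fast" variants let the compiler pick a
   register-sized type and avoid the masking 'fluff' the question mentions. */
typedef uint_fast8_t UFAST8;

#endif /* PROJECT_TYPES_H */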
{ "language": "en", "url": "https://stackoverflow.com/questions/7084", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "22" }
Q: Creating rounded corners using CSS How can I create rounded corners using CSS? A: With support for CSS3 being implemented in newer versions of Firefox, Safari and Chrome, it will also be helpful to look at "Border Radius". -moz-border-radius: 10px; -webkit-border-radius: 10px; border-radius: 10px; Like any other CSS shorthand, the above can also be written in expanded format, and thus achieve different Border Radius for the topleft, topright, etc. -moz-border-radius-topleft: 10px; -moz-border-radius-topright: 7px; -moz-border-radius-bottomleft: 5px; -moz-border-radius-bottomright: 3px; -webkit-border-top-right-radius: 10px; -webkit-border-top-left-radius: 7px; -webkit-border-bottom-left-radius: 5px; -webkit-border-bottom-right-radius: 3px; A: I looked at this early on in the creation of Stack Overflow and couldn't find any method of creating rounded corners that didn't leave me feeling like I just walked through a sewer. CSS3 does finally define the border-radius: Which is exactly how you'd want it to work. Although this works OK in the latest versions of Safari and Firefox, but not at all in IE7 (and I don't think in IE8) or Opera. In the meantime, it's hacks all the way down. I'm interested in hearing what other people think is the cleanest way to do this across IE7, FF2/3, Safari3, and Opera 9.5 at the moment.. A: jQuery is the way i'd deal with this personally. css support is minimal, images are too fiddly, to be able to select the elements you want to have round corners in jQuery makes perfect sense to me even though some will no doubt argue otherwise. Theres a plugin I recently used for a project at work here: http://web.archive.org/web/20111120191231/http://plugins.jquery.com:80/project/jquery-roundcorners-canvas A: There's always the JavaScript way (see other answers) but since it's is purely styling, I'm kind of against use client scripts to achieve this. 
The way I prefer (though it has its limits), is to use 4 rounded corner images that you will position in the 4 corners of your box using CSS: <div class="Rounded"> <!-- content --> <div class="RoundedCorner RoundedCorner-TopLeft"></div> <div class="RoundedCorner RoundedCorner-TopRight"></div> <div class="RoundedCorner RoundedCorner-BottomRight"></div> <div class="RoundedCorner RoundedCorner-BottomLeft"></div> </div> /******************************** * Rounded styling ********************************/ .Rounded { position: relative; } .Rounded .RoundedCorner { position: absolute; background-image: url('SpriteSheet.png'); background-repeat: no-repeat; overflow: hidden; /* Size of the rounded corner images */ height: 5px; width: 5px; } .Rounded .RoundedCorner-TopLeft { top: 0; left: 0; /* No background position change (or maybe depending on your sprite sheet) */ } .Rounded .RoundedCorner-TopRight { top: 0; right: 0; /* Move the sprite sheet to show the appropriate image */ background-position: -5px 0; } /* Hack for IE6 */ * html .Rounded .RoundedCorner-TopRight { right: -1px; } .Rounded .RoundedCorner-BottomLeft { bottom: 0; left: 0; /* Move the sprite sheet to show the appropriate image */ background-position: 0 -5px; } /* Hack for IE6 */ * html .Rounded .RoundedCorner-BottomLeft { bottom: -20px; } .Rounded .RoundedCorner-BottomRight { bottom: 0; right: 0; /* Move the sprite sheet to show the appropriate image */ background-position: -5px -5px; } /* Hack for IE6 */ * html .Rounded .RoundedCorner-BottomRight { bottom: -20px; right: -1px; } As mentioned, it has its limits (the background behind the rounded box should be plain otherwise the corners won't match the background), but it works very well for anything else. Updated: Improved the implentation by using a sprite sheet. A: Here's an HTML/js/css solution that I did recently. There's a 1px rounding error with absolute positioning in IE so you want the container to be an even number of pixels wide, but it's pretty clean. HTML: <div class="s">Content</div> jQuery: $("div.s") .wrapInner("<div class='s-iwrap'><div class='s-iwrap2'>") .prepend('<div class="tr"/><div class="tl"/><div class="br"/><div class="bl"/>'); CSS: /*rounded corner orange box - no title*/ .s { position: relative; margin: 0 auto 15px; zoom: 1; } .s-iwrap { border: 1px solid #FF9933; } .s-iwrap2 { margin: 12px; } .s .br,.s .bl, .s .tl, .s .tr { background: url(css/images/orange_corners_sprite.png) no-repeat; line-height: 1px; font-size: 1px; width: 9px; height: 9px; position: absolute; } .s .br { bottom: 0; right: 0; background-position: bottom right; } .s .bl { bottom: 0; left: 0; background-position: bottom left; } .s .tl { top: 0; left: 0; background-position: top left; } .s .tr { top: 0; right: 0; background-position: top right; } Image is just 18px wide and has all 4 corners packed together. Looks like a circle. Note: you don't need the second inner wrapper, but I like to use margin on the inner wrapper so that margins on paragraphs and headings still maintain margin collapse. You can also skip the jquery and just put the inner wrapper in the html. A: In Safari, Chrome, Firefox > 2, IE > 8 and Konquerer (and probably others) you can do it in CSS by using the border-radius property. As it's not officially part of the spec yet, please use a vendor specific prefix... 
Example #round-my-corners-please { -webkit-border-radius: 20px; -moz-border-radius: 20px; border-radius: 20px; } The JavaScript solutions generally add a heap of small divs to make it look rounded, or they use borders and negative margins to make 1px notched corners. Some may also utilise SVG in IE. IMO, the CSS way is better, as it is easy, and will degrade gracefully in browsers that don't support it. This is, of course, only the case where the client doesn't enforce them in non supported browsers such as IE < 9. A: I personally like this solution the best, its an .htc to allow IE to render curved borders. http://www.htmlremix.com/css/curved-corner-border-radius-cross-browser A: As an indication of how complex it is to get rounded corners working, even Yahoo discourages them (see first bulleted point)! Granted, they're only talking about 1 pixel rounded corners in that article but it's interesting to see that even a company with their expertise has concluded they're just too much pain to get them working most of the time. If your design can survive without them, that's the easiest solution. A: I generally get rounded corners just with css, if browser does not support they see the content with flat corners. If rounded corners are not so critical for your site you can use these lines below. If you want to use all corners with same radius this is the easy way: .my_rounded_corners{ -webkit-border-radius: 5px; border-radius: 5px; } but if you want to control every corner this is good: .my_rounded_corners{ border: 1px solid #ccc; /* each value for each corner clockwise starting from top left */ -webkit-border-radius: 10px 3px 0 20px; border-radius: 10px 3px 0 20px; } As you see in each set you have browser specific styles and on the fourth rows we declare in standard way by this we assume if in future the others (hopefully IE too) decide to implement the feature to have our style be ready for them too. As told in other answers, this works beautifully on Firefox, Safari, Camino, Chrome. A: Sure, if it's a fixed width, it's super easy using CSS, and not at all offensive or laborious. It's when you need it to scale in both directions that things get choppy. Some of the solutions have a staggering amount of divs stacked on top of each other to make it happen. My solution is to dictate to the designer that if they want to use rounded corners (for the time being), it needs to be a fixed width. Designers love rounded corners (so do I), so I find this to be a reasonable compromise. A: Ruzee Borders is the only Javascript-based anti-aliased rounded corner solution I've found that works in all major browsers (Firefox 2/3, Chrome, Safari 3, IE6/7/8), and ALSO the only one that works when both the rounded element AND the parent element contain a background image. It also does borders, shadows, and glowing. The newer RUZEE.ShadedBorder is another option, but it lacks support for obtaining style information from CSS. A: If you're interested in creating corners in IE then this may be of use - http://css3pie.com/ A: I would recommend using background images. The other ways aren't nearly as good: No anti-aliasing and senseless markup. This is not the place to use JavaScript. A: As Brajeshwar said: Using the border-radius css3 selector. By now, you can apply -moz-border-radius and -webkit-border-radius for Mozilla and Webkit based browsers, respectively. So, what happens with Internet Explorer?. Microsoft has many behaviors to make Internet Explorer have some extra features and get more skills. 
Here: a .htc behavior file to get round-corners from border-radius value in your CSS. For example. div.box { background-color: yellow; border: 1px solid red; border-radius: 5px; behavior: url(corners.htc); } Of course, behavior selector does not a valid selector, but you can put it on a different css file with conditional comments (only for IE). The behavior HTC file A: Since CSS3 was introduced, the best way to add rounded corners using CSS is by using the border-radius property. You can read the spec on the property, or get some useful implementation information on MDN: If you are using a browser that doesn't implement border-radius (Chrome pre-v4, Firefox pre-v4, IE8, Opera pre-v10.5, Safari pre-v5), then the links below detail a whole bunch of different approaches. Find one that suits your site and coding style, and go with it. * *CSS Design: Creating Custom Corners & Borders *CSS Rounded Corners 'Roundup' *25 Rounded Corners Techniques with CSS A: There is no "the best" way; there are ways that work for you and ways that don't. Having said that, I posted an article about creating CSS+Image based, fluid round corner technique here: Box with Round Corners Using CSS and Images - Part 2 An overview of this trick is that that uses nested DIVs and background image repetition and positioning. For fixed width layouts (fixed width stretchable height), you'll need three DIVs and three images. For a fluid width layout (stretchable width and height) you'll need nine DIVs and nine images. Some might consider it too complicated but IMHO its the neatest solution ever. No hacks, no JavaScript. A: If you are to go with the border-radius solution, there is this awesome website to generate the css that will make it work for safari/chrome/FF. Anyway, I think your design should not depend on the rounded corner, and if you look at Twitter, they just say F**** to IE and opera users. Rounded corners is a nice to have, and I'm personally ok keeping this for the cool users who don't use IE :). Now of course it's not the opinion of the clients. Here is the link : http://border-radius.com/ A: To addition of htc solutions mention above, here're another solutions and examples to reach rounded corners in IE. A: I wrote a blog article on this a while back, so for more info, see here <div class="item_with_border"> <div class="border_top_left"></div> <div class="border_top_right"></div> <div class="border_bottom_left"></div> <div class="border_bottom_right"></div> This is the text that is displayed </div> <style> div.item_with_border { border: 1px solid #FFF; postion: relative; } div.item_with_border > div.border_top_left { background-image: url(topleft.png); position: absolute; top: -1px; left: -1px; width: 30px; height: 30px; z-index: 2; } div.item_with_border > div.border_top_right { background-image: url(topright.png); position: absolute; top: -1px; right: -1px; width: 30px; height: 30px; z-index: 2; } div.item_with_border > div.border_bottom_left { background-image: url(bottomleft.png); position: absolute; bottom: -1px; left: -1px; width: 30px; height: 30px; z-index: 2; } div.item_with_border > div.border_bottom_right { background-image: url(bottomright.png); position: absolute; bottom: -1px; right: -1px; width: 30px; height: 30px; z-index: 2; } </style> It works quite well. No Javascript needed, just CSS and HTML. With minimal HTML interfering with the other stuff. It's very similar to what Mono posted, but doesn't contain any IE 6 specific hacks, and after checking, doesn't seem to work at all. 
Also, another trick is to make the inside portion of each corner image transparent so it doesn't block text that is near the corner. The outer portion must not be transparent so it can cover up the border of the non-rounded div. Also, once CSS3 is widely supported with border-radius, that will be the official best way of making rounded corners. A: Don't use CSS, jQuery has been mentioned several times. If you need full control of the background and border of your elements give thejQuery Background Canvas Plugin a try. It places a HTML5 Canvas element in the background and allows yo to draw every background or border you want. Rounded corners, gradients and so on. A: Opera does not support border-radius yet (apparently it will be in the release after version 10). In the meantime, you can use CSS to set an SVG background to create a similar effect.
{ "language": "en", "url": "https://stackoverflow.com/questions/7089", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "257" }
Q: Is the C# static constructor thread safe? In other words, is this Singleton implementation thread safe: public class Singleton { private static Singleton instance; private Singleton() { } static Singleton() { instance = new Singleton(); } public static Singleton Instance { get { return instance; } } } A: While all of these answers are giving the same general answer, there is one caveat. Remember that all potential derivations of a generic class are compiled as individual types. So use caution when implementing static constructors for generic types. class MyObject<T> { static MyObject() { //this code will get executed for each T. } } EDIT: Here is the demonstration: static void Main(string[] args) { var obj = new Foo<object>(); var obj2 = new Foo<string>(); } public class Foo<T> { static Foo() { System.Diagnostics.Debug.WriteLine(String.Format("Hit {0}", typeof(T).ToString())); } } In the console: Hit System.Object Hit System.String A: Here's the Cliffnotes version from the above MSDN page on c# singleton: Use the following pattern, always, you can't go wrong: public sealed class Singleton { private static readonly Singleton instance = new Singleton(); private Singleton(){} public static Singleton Instance { get { return instance; } } } Beyond the obvious singleton features, it gives you these two things for free (in respect to singleton in c++): * *lazy construction (or no construction if it was never called) *synchronization A: Static constructors are guaranteed to fire only once per App Domain so your approach should be OK. However, it is functionally no different from the more concise, inline version: private static readonly Singleton instance = new Singleton(); Thread safety is more of an issue when you are lazily initializing things. A: The static constructor will finish running before any thread is allowed to access the class. private class InitializerTest { static private int _x; static public string Status() { return "_x = " + _x; } static InitializerTest() { System.Diagnostics.Debug.WriteLine("InitializerTest() starting."); _x = 1; Thread.Sleep(3000); _x = 2; System.Diagnostics.Debug.WriteLine("InitializerTest() finished."); } } private void ClassInitializerInThread() { System.Diagnostics.Debug.WriteLine(Thread.CurrentThread.GetHashCode() + ": ClassInitializerInThread() starting."); string status = InitializerTest.Status(); System.Diagnostics.Debug.WriteLine(Thread.CurrentThread.GetHashCode() + ": ClassInitializerInThread() status = " + status); } private void classInitializerButton_Click(object sender, EventArgs e) { new Thread(ClassInitializerInThread).Start(); new Thread(ClassInitializerInThread).Start(); new Thread(ClassInitializerInThread).Start(); } The code above produced the results below. 10: ClassInitializerInThread() starting. 11: ClassInitializerInThread() starting. 12: ClassInitializerInThread() starting. InitializerTest() starting. InitializerTest() finished. 11: ClassInitializerInThread() status = _x = 2 The thread 0x2650 has exited with code 0 (0x0). 10: ClassInitializerInThread() status = _x = 2 The thread 0x1f50 has exited with code 0 (0x0). 12: ClassInitializerInThread() status = _x = 2 The thread 0x73c has exited with code 0 (0x0). Even though the static constructor took a long time to run, the other threads stopped and waited. All threads read the value of _x set at the bottom of the static constructor. A: Using a static constructor actually is threadsafe. The static constructor is guaranteed to be executed only once. 
From the C# language specification: The static constructor for a class executes at most once in a given application domain. The execution of a static constructor is triggered by the first of the following events to occur within an application domain: * *An instance of the class is created. *Any of the static members of the class are referenced. So yes, you can trust that your singleton will be correctly instantiated. Zooba made an excellent point (and 15 seconds before me, too!) that the static constructor will not guarantee thread-safe shared access to the singleton. That will need to be handled in another manner. A: The Common Language Infrastructure specification guarantees that "a type initializer shall run exactly once for any given type, unless explicitly called by user code." (Section 9.5.3.1.) So unless you have some whacky IL on the loose calling Singleton::.cctor directly (unlikely) your static constructor will run exactly once before the Singleton type is used, only one instance of Singleton will be created, and your Instance property is thread-safe. Note that if Singleton's constructor accesses the Instance property (even indirectly) then the Instance property will be null. The best you can do is detect when this happens and throw an exception, by checking that instance is non-null in the property accessor. After your static constructor completes the Instance property will be non-null. As Zoomba's answer points out you will need to make Singleton safe to access from multiple threads, or implement a locking mechanism around using the singleton instance. A: Although other answers are mostly correct, there is yet another caveat with static constructors. As per section II.10.5.3.3 Races and deadlocks of the ECMA-335 Common Language Infrastructure Type initialization alone shall not create a deadlock unless some code called from a type initializer (directly or indirectly) explicitly invokes blocking operations. The following code results in a deadlock using System.Threading; class MyClass { static void Main() { /* Won’t run... the static constructor deadlocks */ } static MyClass() { Thread thread = new Thread(arg => { }); thread.Start(); thread.Join(); } } Original author is Igor Ostrovsky, see his post here. A: Static constructors are guaranteed to be run only once per application domain, before any instances of a class are created or any static members are accessed. https://learn.microsoft.com/en-us/dotnet/csharp/programming-guide/classes-and-structs/static-constructors The implementation shown is thread safe for the initial construction, that is, no locking or null testing is required for constructing the Singleton object. However, this does not mean that any use of the instance will be synchronised. There are a variety of ways that this can be done; I've shown one below. public class Singleton { private static Singleton instance; // Added a static mutex for synchronising use of instance. private static System.Threading.Mutex mutex; private Singleton() { } static Singleton() { instance = new Singleton(); mutex = new System.Threading.Mutex(); } public static Singleton Acquire() { mutex.WaitOne(); return instance; } // Each call to Acquire() requires a call to Release() public static void Release() { mutex.ReleaseMutex(); } } A: Just to be pedantic, but there is no such thing as a static constructor, but rather static type initializers, here's a small demo of cyclic static constructor dependency which illustrates this point. A: Static constructor is guaranteed to be thread safe. 
Also, check out the discussion on Singleton at DeveloperZen: http://web.archive.org/web/20160404231134/http://www.developerzen.com/2007/07/15/whats-wrong-with-this-code-1-discussion/
A: The static constructor is locked. While the type initializer is running, any other thread which attempts to access the class in such a way that would trigger the type initializer will block. However, the thread which is running the type initializer can access uninitialized static members. So be sure not to call Monitor.Enter() (lock(){}) or ManualResetEventSlim.Wait() from a type initializer if it is run from a UI thread; those are "interruptible" waits which result in the event loop running, executing arbitrary other parts of your program while your type initializer is still unfinished. It is preferable for you to use managed blocking rather than unmanaged blocking. WaitHandle.WaitOne, WaitHandle.WaitAny, WaitHandle.WaitAll, Monitor.Enter, Monitor.TryEnter, Thread.Join, GC.WaitForPendingFinalizers, and so on are all responsive to Thread.Interrupt and to Thread.Abort. Also, if your thread is in a single-threaded apartment, all these managed blocking operations will correctly pump messages in your apartment while your thread is blocked.
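For completeness, here is a minimal sketch (my addition, assuming .NET 4.0 or later) of the same singleton written with Lazy<T>, which makes the thread-safe lazy initialization explicit instead of relying on type-initializer semantics:
public sealed class Singleton
{
    // The default LazyThreadSafetyMode (ExecutionAndPublication) guarantees
    // that only one thread ever runs the factory delegate.
    private static readonly System.Lazy<Singleton> instance =
        new System.Lazy<Singleton>(() => new Singleton());

    private Singleton() { }

    public static Singleton Instance
    {
        get { return instance.Value; }
    }
}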
{ "language": "en", "url": "https://stackoverflow.com/questions/7095", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "265" }
Q: Debugging JavaScript in Internet Explorer and Safari Currently, I don't really have a good method of debugging JavaScript in Internet Explorer and Safari. In Firefox, you can use Firebug's logging feature and command-line functions. However, this doesn't help me when I move to other browsers.

A: This is the Firebug Lite that @John was referring to that works on IE, Safari and Opera.

A: A post on the IE Blog, Scripting Debugging in Internet Explorer, explains different options for script debugging in Internet Explorer. Here is the Apple Developer FAQ on debugging JavaScript in Safari.

A: For Safari you need to enable the "Develop" menu via Preferences (in Safari 3.1; see the entry in Apple's Safari development FAQ) or via

$ defaults write com.apple.Safari IncludeDebugMenu 1

at the terminal in Mac OS X. Then from the Develop menu choose Show Web Inspector and click on the Console link. Your script can write to the console using window.console.log.
For Internet Explorer, Visual Studio is really the best script debugger, but the Microsoft Script Debugger is okay if you don't have Visual Studio. This post on the IE team blog walks you through installing it and connecting to Internet Explorer. Internet Explorer 8 looks like it will have a very fancy script debugger, so if you're feeling really adventurous you could install the Internet Explorer 8 beta and give that a whirl.

A: Safari 3.0 and 3.1 include the Drosera JavaScript debugger, which you can enable on the Mac by following the instructions at that link. There's also the Safari Web Inspector.

A: Visual Studio 2005 has the Script Explorer (under the Debug > Windows menu). It shows a tree of all the scripted stuff that's currently debuggable. Previously I was breaking into the debugger via IE's View > Script Debugger menu, but I'm finding the Script Explorer is a quicker way to get to what I want.

A: The best method I've used for debugging JavaScript in Internet Explorer is through the Microsoft Script Editor. The debugger is full-featured and very easy to use. The article below teaches how to install the Microsoft Script Editor and configure it: HOW-TO: Debug JavaScript in Internet Explorer. For Safari, sorry, no answer...

A: There is now a Firebug Lite that works on other browsers such as Internet Explorer, Safari and Opera. It does have a limited set of commands and is not as fully featured as the version in Firefox. If you are using ASP.NET, Visual Studio 2008 will also debug JavaScript in Internet Explorer.

A: Safari 3.1 doesn't need any magical command-line preferences -- the Advanced tab of the preferences window has an "enable Develop menu" checkbox. That said, if you can use the WebKit nightlies (http://nightly.webkit.org), you're probably better off doing that, as the developer tools are vastly improved and you can more easily file bug reports requesting features that you want :D

A: See the Debugging chapter of the Safari User Guide for Web Developers for full documentation of how to debug in Safari. (For the most part it is the same API as Firebug.) In IE you can use the IE Dev Tools, but I prefer Firebug Lite as others have mentioned.
{ "language": "en", "url": "https://stackoverflow.com/questions/7118", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18" }
Q: Very slow merge with Subversion 1.5 (and 1.4 Server) I switched locally from Subversion 1.4 to 1.5; our server still runs 1.4. Since then, every merge takes ages to perform. What took only a couple of seconds is now in the area of 5-10 minutes (or more). There is no difference between the command-line client and TortoiseSVN (so we're talking about the Windows versions). Has anybody else seen this strange phenomenon?

A: Upgrading to 1.5.3 (when it is out) will significantly speed up your merges.

A: SVN 1.5 introduced the concept of automatic merge tracking, although I thought it required a 1.5 server and client. See the Apache Subversion 1.5 release notes for details.

A: We did some performance analysis on merging last weekend and found two severe performance issues. One of them was very Windows-specific and made disk I/O while merging much slower than needed; the other was in how network connections were handled (too little reuse of existing knowledge). These fixes, and a few others that enhance merge performance even more, will be available in Subversion 1.5.3, which is expected to be released by the end of this week. [Edit: This performance enhancement is in the code path that assumes your server is 1.5+]

A: We've had problems when trying to add large numbers of files to repositories through the client, which I assume created orphaned processes on the server when we killed the crashed client. We had to kill the server processes too and restart the Subversion service (we run SVN as a Windows service). Our SVN machine is dedicated, so we actually just rebooted the box and everything went back to normal.
{ "language": "en", "url": "https://stackoverflow.com/questions/7173", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: How can I improve the edit-compile-test loop when developing a SharePoint workflow? Recently I had to develop a SharePoint workflow, and I found the experience quite honestly the most painful programming task I've ever had to tackle. One big problem was the trouble I ran into when I had to step through it in the debugger. There's an article on how to debug a SharePoint workflow here that tells you how to set breakpoints etc. This involves copying the .pdb file into the GAC alongside the .dll file containing your workflow. You have to do this from a command prompt (or a batch file) because Windows Explorer doesn't let you view the relevant subdirectory of c:\windows\assembly. However, if you do this, the next time you try to deploy the workflow from within Visual Studio, it complains that it can't be deployed because "the file may not be signed", and if you attempt to copy the new version of the dll into the GAC, it tells you that the .dll file is locked. I've found that some of the time you can get round this by doing an iisreset, but on other occasions you have to restart Visual Studio, and there have been frequent times when I've even had to reboot the computer altogether because some mystery process has locked the file. When I don't use the debugger, on the other hand, everything works just fine. Does anyone know of a simpler way of debugging workflows than this?

A: I got a lot faster at developing SharePoint solutions in general (not only workflows) when I started using WSPBuilder. WSPBuilder has a Visual Studio add-in called WSPBuilder Extensions, and in my opinion the WSPBuilder Extensions do a better job than the infamous Windows SharePoint Services 3.0 Tools: Visual Studio 2008 Extensions, Version 1.2. Thanks to the WSPBuilder menu, deploy/upgrade/uninstall of a solution is just one click away!

A: The SharePoint team is currently working on MOSS extensions for VS 2008 which will allow this type of functionality. This was available in VS 2005 with MOSS extensions, but had to be run off Windows Server with a full MOSS installation and the correct permissions set.

A: One thing that would really help is if the SharePoint team provided interfaces for the SP-specific workflow services needed to run SP workflows. This would allow you to mock those interfaces and run the workflows outside of SP proper. AFAIK, you can't do that today. I've personally found SharePoint extremely painful to develop against... not just with workflows, but overall. I understand the administrative wins and the end-user productivity, but it's a fairly dreadful experience for Joe .NET Developer.

A: As for speeding up the IIS reset, Andrew Connell has some tips here as well: http://www.andrewconnell.com/blog/archive/2006/08/21/3882.aspx This brought my IIS reset time from 10+ seconds down to less than 2 seconds.

A: I'm not sure you need to get the pdb file into the GAC. (At least, the fix I'm about to describe works just fine for debugging SharePoint web parts in VS2005, which have a similar problem.) There's a checkbox marked "Enable Just My Code (Managed Only)" in Tools --> Options --> Debugging; if you uncheck it, Visual Studio will happily load your pdbs from the bin\Debug folder where it built them. Probably. Can't hurt to try, anyhow...

A: Check out STSDev on CodePlex by SharePoint MVPs like Ted Pattison, Andrew Connell, Scot Hillier, and more.
STSDEV is a proof-of-concept utility application which demonstrates how to generate Visual Studio project files and solution files to facilitate the development and deployment of templates and components for the SharePoint 2007 platform including Windows SharePoint Services 3.0 (WSS) and Microsoft Office SharePoint Server 2007 (MOSS). Note that the current version of the stsdev utility only supports creating projects with the C# programming language. Keith
{ "language": "en", "url": "https://stackoverflow.com/questions/7174", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Animation in .NET What is a good way to perform animation using .NET? I would prefer not to use Flash if possible, so I am looking for suggestions of approaches that will work for implementing different types of animation on a new site I am producing. The new site is for a magician, so I want to provide animated buttons (cards turning over, etc.) and also embed video. Is it possible to do this without using Flash, or is this the only real solution? I would like to keep it as cross-platform and standard as possible.

A: Silverlight springs to mind as an obvious choice if you want to do animation using .NET on the web. It may not cover all platforms, but it will work in IE and Firefox and on the Mac.

A: Have a look at the jQuery cross-browser JavaScript library for animation (it is what is used on Stack Overflow). The reference for it can be found at http://visualjquery.com/1.1.2.html. Unfortunately, without Flash, Silverlight or another plug-in, cross-system video support is limited.

A: JavaScript is probably the way to go if you want to avoid Flash. Check this: http://www.webreference.com/programming/javascript/java_anim/ It won't work for embedded video, though, so you're stuck with Flash for that (or Silverlight, or QuickTime).

A: Silverlight is the answer, and Moonlight will be the Linux equivalent, available shortly. We have done some beta testing on Moonlight and found it fairly stable with most of the Silverlight work we do.
{ "language": "en", "url": "https://stackoverflow.com/questions/7180", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Setting up Continuous Integration with SVN What tools would you recommend for setting up CI for build and deployment of multiple websites built on DotNetNuke, using SVN for source control? We are currently looking at configuring CruiseControl to work with NAnt, NUnit, NCover and Trac as a test case. What other combinations would be worth investigating? We have full control of our development environment, so using some form of CI is certain here, but I would also like to convince our production services team that they can reliably deploy to the system test, UAT and even production environments using these tools.

A: Take a look at Hudson. It's highly customizable, and, IMHO, easier than CruiseControl.

A: We use CruiseControl with NUnit, NCover, FxCop, SVN and some custom tools we wrote ourselves to produce the reports. In my opinion it has proven (over the last few years) to be an excellent combination. It's frustrating that MS restricts all of its integration tools to VSTS. Its test framework is as good as NUnit, but you can't use its code coverage tools or anything else. I'd check out XNuit - it's looking pretty promising (but currently lacking UI). We automate nightly builds, and you could automate UAT and manual test builds, but I'm not sure that we'd ever want to automate the release to our production servers. Even if we did, any change would be important enough that someone would have to watch over it anyway.

A: I would have a look at TeamCity: http://www.jetbrains.com/teamcity/index.html I know some people who are looking into this, and they say good things about it. My company's build process is done in FinalBuilder, so I'm going to be looking at their server soon. CC is quite good in that you can have one CC server monitor another CC server, so you could set up stuff like: when a build completes on your build server, your test server wakes up, boots a virtual machine and deploys your application. Stuff like that.

A: Microsoft loosened its constraint on the testing platform by including it in Visual Studio 2008 Professional and allowing the tests to be run from the command line with Framework 3.5 installed. We did a crossover for a client recently, and so far they have been able to run all the tests without the need for NUnit.

A: We use CruiseControl.NET running MSBuild scripts. MSBuild is responsible for updating from SVN on every commit, compiling, and running FxCop and NCover/NUnit.

A: I would recommend you take a look at NAnt + NUnit (+ NCover) + TeamCity with SVN for your build system. There is actually a very nice article describing this configuration at Pete W's idea book (sorry, this link doesn't exist anymore!)
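As a rough illustration of how the CruiseControl.NET + NAnt + SVN pieces fit together, here is a skeletal ccnet.config project block. The URL, paths and target names are placeholders, not a working configuration, and the exact element names should be checked against the CCNet version in use:

<cruisecontrol>
  <project name="MyWebsite">
    <!-- Poll Subversion for new commits -->
    <sourcecontrol type="svn">
      <trunkUrl>http://svn.example.com/mywebsite/trunk</trunkUrl>
      <workingDirectory>C:\builds\mywebsite</workingDirectory>
    </sourcecontrol>
    <tasks>
      <!-- Hand the build off to NAnt; the build file would contain
           the compile, NUnit and NCover targets -->
      <nant>
        <buildFile>mywebsite.build</buildFile>
        <targetList>
          <target>compile</target>
          <target>test</target>
        </targetList>
      </nant>
    </tasks>
  </project>
</cruisecontrol>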
{ "language": "en", "url": "https://stackoverflow.com/questions/7190", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: Alpha blending sprites in Nintendo DS Homebrew I'm trying to alpha blend sprites and backgrounds with devkitPro (including libnds, libarm, etc). Does anyone know how to do this?

A: As a generic reference, I once wrote a small blog entry about that issue. Basically, you first have to define which layer is alpha-blended against which other layer(s). AFAIK:
* *The source layer(s) must be over the destination layer(s) to have some blending displayed; that means the priority of source layers should be numerically lower than the priority of destination layers.
*The source layer is what is going to be translucent; the destination(s) is what is going to be seen through (and yes, I find this rather confusing).
For the sprites, specifically, you then have 3 ways to achieve alpha-blending depending on what you need and what you're "ready to pay" for it:
* *You can make all the sprites have some alpha-blending by turning on BLEND_SRC_SPRITE in REG_BLDCNT[_SUB] ... not that useful.
*You can selectively turn on blending of some sprites by using ATTR0_TYPE_BLENDED. The blending level will be the same for all sprites (and layers).
*Bitmap-type sprites use direct colors (bypassing the palettes), so the ATTR2_PALETTE() field of GBA sprites is useless and has been recycled into ATTR2_ALPHA.

A: Sprites on the DS can be alpha-blended using the blend control registers. TONC gives the necessary information for getting blending working on the main screen because the register locations are the same. Alpha blending on the subscreen uses the same process with different registers at a 1000h offset. The registers you'll be looking at are REG_BLDMOD, REG_COLV, and REG_COLY for the main screen and REG_BLDMOD_SUB, REG_COLV_SUB, and REG_COLY_SUB for the sub screen. Also remember that you'll have to change the sprite's graphic mode to enable blending per sprite.

A: It's been a long time since I've done any GBA programming, but as I recall, the DS supports most (if not all) of the stuff that the GBA supports. This link has a section on how to do alpha blending for the GBA (section 13.2). I don't know if there's a DS-specific way of doing it, but this should work for you.
{ "language": "en", "url": "https://stackoverflow.com/questions/7209", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: What's the best way of converting a mysql database to a sqlite one? I currently have a relatively small (4 or 5 tables, 5000 rows) MySQL database that I would like to convert to an SQLite database. As I'd potentially have to do this more than once, I'd be grateful if anyone could recommend any useful tools, or at least any easily-replicated method. (I have complete admin access to the database/machines involved.)

A: I've had to do similar things a few times. The easiest approach for me has been to write a script that pulls from one data source and produces an output for the new data source. Just do a SELECT * query for each table in your current database, and then dump all the rows into an INSERT INTO query for your new database. You can either dump this into a file or pipe it straight into the database frontend. It's not pretty, but honestly, pretty hardly seems to be a major concern for things like this. This technique is quick to write, and it works. Those are my primary criteria for things like this. You might want to check out this thread, too. It looks like a couple of people have already put together basically what you need. I didn't look that far into it, though, so no guarantees.

A: As long as a MySQL dump file doesn't exceed the SQLite query language, you should be able to migrate fairly easily:

tgl@moto~$ mysqldump old-database > old-database-dump.sql
tgl@moto~$ sqlite3 -init old-database-dump.sql new-database

I haven't tried this myself. UPDATE: Looks like you'll need to do a couple of edits of the MySQL dump. I'd use sed, or Google for it. Just the comment syntax, the auto_increment & TYPE= declarations, and the escape characters differ.

A: Here is a list of converters:
* *http://www.sqlite.org/cvstrac/wiki?p=ConverterTools
An alternative method that would work nicely but is rarely mentioned is: use an ORM class that abstracts the specific database differences away for you. E.g. you get these in PHP (RedBean), Python (Django's ORM layer, Storm, SqlAlchemy), Ruby on Rails (ActiveRecord), Cocoa (CoreData). I.e. you could do this:
* *Load data from the source database using the ORM class.
*Store data in memory or serialize to disk.
*Store data into the target database using the ORM class.

A: If it's just a few tables you could probably script this in your preferred scripting language and have it all done by the time it'd take to read all the replies or track down a suitable tool. I would anyway. :)
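If you would rather script the copy than post-process a dump, the "pull from one source, push to the other" approach described in the first answer might look roughly like this in C#. This is only a sketch: it assumes the MySql.Data (Connector/NET) and System.Data.SQLite providers, and the table and column names are placeholders that would need to be repeated per table in a real migration:

using MySql.Data.MySqlClient;  // MySQL Connector/NET
using System.Data.SQLite;      // System.Data.SQLite provider

static void CopyTable(string mysqlConnString, string sqliteConnString)
{
    using (var src = new MySqlConnection(mysqlConnString))
    using (var dst = new SQLiteConnection(sqliteConnString))
    {
        src.Open();
        dst.Open();

        using (var reader = new MySqlCommand("SELECT id, name FROM customers", src).ExecuteReader())
        using (var tx = dst.BeginTransaction())  // one transaction keeps 5000 inserts fast
        {
            var insert = dst.CreateCommand();
            insert.CommandText = "INSERT INTO customers (id, name) VALUES (@id, @name)";
            insert.Transaction = tx;
            insert.Parameters.Add(new SQLiteParameter("@id"));
            insert.Parameters.Add(new SQLiteParameter("@name"));

            while (reader.Read())
            {
                // One INSERT per source row, copying values straight across
                insert.Parameters["@id"].Value = reader["id"];
                insert.Parameters["@name"].Value = reader["name"];
                insert.ExecuteNonQuery();
            }
            tx.Commit();
        }
    }
}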
{ "language": "en", "url": "https://stackoverflow.com/questions/7211", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "21" }
Q: Is it possible to add behavior to a non-dynamic ActionScript 3 class without inheriting the class? What I'd like to do is something like the following:

FooClass.prototype.method = function():String {
    return "Something";
}

var foo:FooClass = new FooClass();
foo.method();

Which is to say, I'd like to extend a generated class with a single method, not via inheritance but via the prototype. The class is generated from a WSDL, it's not a dynamic class, and I don't want to touch the generated code because it will be overwritten anyway. Long story short, I'd like to have the moral equivalent of C# 3's extension methods for AS3.
Edit: I accepted aib's answer, because it fits what I was asking best -- although upon further reflection it doesn't really solve my problem, but that's my fault for asking the wrong question. :) Also, upmods for the good suggestions.

A: Yes, such a thing is possible. In fact, your example is very close to the solution. Try

foo["method"]();

instead of

foo.method();

A: @Theo: How would you explain the following working in 3.0.0.477 with the default flex-config.xml (<strict>true</strict>) and even a -compiler.strict parameter passed to mxmlc?

Foo.as:

package {
    public class Foo {
        public var foo:String;
        public function Foo() {
            foo = "foo!";
        }
    }
}

footest.as:

package {
    import flash.display.Sprite;
    public class footest extends Sprite {
        public function footest() {
            Foo.prototype.method = function():String {
                return "Something";
            }
            var foo:Foo = new Foo();
            trace(foo["method"]());
        }
    }
}

Note that the OP said inheritance was unacceptable, as was modifying the generated code. (If that weren't the case, adding "dynamic" to the class definition would probably be the easiest solution.)

A: Depending on how many methods your class has, this may work:

Actual class:

public class SampleClass {
    public function SampleClass() {
    }
    public function method1():void {
        Alert.show("Hi");
    }
}

Quick wrapper:

var actualClass:SampleClass = new SampleClass();
var QuickWrapper:Object = {
    ref: actualClass,
    method1: function():void {
        this.ref.method1();
    },
    method2: function():void {
        Alert.show("Hello!");
    }
};

QuickWrapper.method1();
QuickWrapper.method2();

A: @aib is unfortunately incorrect. Assuming strict mode (the default compiler mode), it is not possible to modify the prototype of non-dynamic class types in ActionScript 3. I'm not even sure that it's possible in non-strict mode.
Is wrapping an option? Basically you create a class that takes one of the objects you get from the web service and just forwards all method calls to that, but also has methods of its own:

public class FooWrapper extends Foo {
    private var wrappedFoo : Foo;

    public function FooWrapper( foo : Foo ) {
        wrappedFoo = foo;
    }

    override public function methodFromFoo( ) : void {
        wrappedFoo.methodFromFoo();
    }

    override public function anotherMethodFromFoo( ) : void {
        wrappedFoo.anotherMethodFromFoo();
    }

    public function newMethodNotOnFoo( ) : String {
        return "Hello world!"
    }
}

When you want to work with a Foo, but also have the extra method you need, you wrap the Foo instance in a FooWrapper and work with that object instead.
It's not the most convenient solution; there's a lot of typing, and if the generated code changes you have to change the FooWrapper class by hand. But unless you can modify the generated code, either to include the method you want or to make the class dynamic, I don't see how it can be done.
Another solution is to add a step to your build process that modifies the source of the generated classes.
I assume that you already have a step that generates the code from a WSDL, so what you could do is add a step after that which inserts the methods you need.

A: Monkey patching is an (inelegant) option. For example, suppose you don't like the fact that Flex 3's SpriteAsset.as returns default border metrics of [7,7,7,7] (unlike Flex 2). To fix this, you can:
* *Create a copy of SpriteAsset.as and add it to your project at /mx/core/SpriteAsset.as
*Edit your local copy to fix any problems you find
*Run your app
Google "flex monkey patch" for more examples and instructions.
{ "language": "en", "url": "https://stackoverflow.com/questions/7212", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: User authentication on Resin webserver I'm currently tasked with replacing an Apache + Resin Java web server with a Resin-only configuration. Currently in our Apache setup, we use .htaccess and a small user database to provide passwording at a directory level. Does anyone know the equivalent setup when using just Resin as the web server?

A: See "Authentication on Resin". The closest you'll come to .htaccess with Apache is configuring Basic auth using an XmlAuthenticator. You'll need to convert your htpasswd file into the XML format that Resin uses. Follow the "Quick Start" section and you'll get what you want.
{ "language": "en", "url": "https://stackoverflow.com/questions/7214", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Change the width of a scrollbar Is it possible to change the width of a scroll bar on a form? This app is for a touch screen and it is a bit too narrow.

A: This is a Windows Forms application? I was able to make a very fat and thick scrollbar by adjusting the "Width" property of my scroll bar control. Is your scroll bar something you have programmatic access to (i.e. is it a control you added to the form)?

A: The width of the scrollbars is controlled by Windows. You can adjust the scrollbar width in Display Properties, and it will affect all windows on the terminal.
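If the scroll bar is a control you add yourself, widening it really is just a property assignment. A minimal Windows Forms sketch - the 48-pixel width and the form class are illustrative, not taken from the answers above:

using System.Windows.Forms;

public class TouchForm : Form
{
    public TouchForm()
    {
        var scrollBar = new VScrollBar
        {
            Dock = DockStyle.Right,
            Width = 48  // system default is SystemInformation.VerticalScrollBarWidth (~17px)
        };
        Controls.Add(scrollBar);
    }
}

Scroll bars that are built into other controls (e.g. a TextBox's own scroll bars) are the ones drawn at the system-defined width, which is why the second answer points at the Windows-wide display settings instead.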
{ "language": "en", "url": "https://stackoverflow.com/questions/7224", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Automatically check bounced emails via POP3? Can anyone recommend software or a .NET library that will check for bounced emails and the reason for the bounce? I get bounced emails into a POP3 account that I can then read. I need it to keep my user database clean from invalid email addresses and want to automate this (mark user as invalid email).

A: I have done a great deal of work handling bounce emails, and there are different types. If you want to be absolutely sure that the email you're looking at is indeed a bounce of a specific kind, I highly recommend getting a good filter. I have worked with Boogie Tools and it has worked very well. It lets you know what kind of bounce it is - hard, soft, transient, or even someone trying to unsubscribe. It has multiple APIs, including .NET, and I found it quite easy to get working.

A: It's pretty easy to do with a TcpClient. Open a connection to the server:

TcpClient tcpClient = new TcpClient();
tcpClient.Connect(POP3Server, POP3Port);
NetworkStream stream = tcpClient.GetStream();

Read the welcome message:

byte[] inBuffer = new byte[4096];  // declared here; the original snippet assumed it
int read = stream.Read(inBuffer, 0, inBuffer.Length);
string response = Encoding.ASCII.GetString(inBuffer, 0, read);
if (response.IndexOf("+OK") != 0)
    throw new InvalidOperationException("POP3 server did not respond with +OK");

Write back to the server:

byte[] outBuffer = Encoding.ASCII.GetBytes("USER " + account + "\r\n");
stream.Write(outBuffer, 0, outBuffer.Length);

That sends the USER command. You need to log in and then you can start grabbing messages - see the POP3 RFC for the full list of commands. If you're not looking to roll your own, check out this CodeProject article.

A: As abfo says, the POP3 protocol is super simple; getting the messages is a no-brainer. Parsing the messages to get the failures is harder, and reliably parsing out which email caused the failure and why it failed is really hard. The problem is that bounce messages don't have a standard format; the default forms vary from MTA to MTA. Then the failure reason can be tweaked by the site admin, making it harder to recognize, and the site admin could modify the failure message template, which makes it darn near impossible. See if you can find a .NET mailing list manager and if you can repurpose the bounce handling code. Failing that, see if you can change the tool that's sending the messages to send each email from a unique (and reversible) envelope sender (VERP I think it's called?). That way you don't need to scan the body of the email; you can tell which recipient failed by examining the recipient address of the failure message.

A: Thanks for the answer, great! I did some research myself and found ListNanny - also super simple to use and tells you the type of bounce. Will write some proof of concept and see which one I like better...

A: Your question made me realize that the Wordpress newsletter plugin I was going to use did not have bounce management, and I'd need something as well. I looked around for a while, and I've settled on the free and open-source PHPlist newsletter manager. They describe in detail their settings for handling bounces, and they do have an experimental advanced bounce handling feature that will allow you to customize the bounce handling exactly the way you want. Even if you decide not to use PHPlist, reading how they do it will be useful information for you.
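Since VERP came up above without an example, here is a minimal sketch of the idea in C#. The address format and helper names are invented for illustration, not part of any library; the point is only that the failed recipient can be recovered mechanically from the bounce's envelope recipient instead of by parsing the bounce body:

using System;

static class Verp
{
    // Hypothetical scheme: user@example.org -> bounces+user=example.org@lists.example.com
    public static string MakeReturnPath(string recipient)
    {
        return "bounces+" + recipient.Replace("@", "=") + "@lists.example.com";
    }

    // Given the address a bounce was delivered to, recover the original recipient.
    public static string RecoverRecipient(string bounceAddress)
    {
        int plus = bounceAddress.IndexOf('+');
        int at = bounceAddress.LastIndexOf('@');
        string encoded = bounceAddress.Substring(plus + 1, at - plus - 1);
        return encoded.Replace("=", "@");  // undo the @ -> = encoding
    }
}

With something like this in place, the POP3 side only has to read each bounce's To: / envelope-recipient address and call RecoverRecipient to know which user to mark invalid.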
{ "language": "en", "url": "https://stackoverflow.com/questions/7231", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: Does running a SQL Server 2005 database in compatibility level 80 have a negative impact on performance? Our software must be able to run on SQL Server 2000 and 2005. To simplify development, we're running our SQL Server 2005 databases in compatibility level 80. However, database performance seems slower on SQL 2005 than on SQL 2000 in some cases (we have not confirmed this using benchmarks yet). Would upgrading the compatibility level to 90 improve performance on the SQL 2005 servers?

A: I think I read somewhere that the SQL Server 2005 database engine should be about 30% faster than the SQL Server 2000 engine. It might be that you have to run your database in compatibility mode 90 to get these benefits. But I stumbled on two scenarios where performance can drop dramatically when using MSSQL 2005 compared to MSSQL 2000:
* *Parameter sniffing: When using a stored procedure, SQL Server will calculate exactly one execution plan at the time you first call the procedure. The execution plan depends on the parameter values given for that call. In our case, procedures which normally took about 10 seconds were running for hours under MSSQL 2005. Take a look here and here.
*When using distributed queries, MSSQL 2005 behaves differently concerning assumptions about the sort order on the remote server. The default behavior is that the server copies the whole remote tables involved in a query to the local tempdb and then executes the joins locally. The workaround is to use OPENQUERY, where you can control exactly which result set is transferred from the remote server.

A: After you moved the DBs over to 2005, did you update the stats with full scan? Rebuild the indexes? First try that and then check performance again.

A: Also, FYI: if you run compatibility level 90, some things are not supported anymore, like old-style outer joins (*= and =*).

A: Are you using subselects in your queries? From my experience, a SELECT statement with subselects that runs fine on SQL Server 2000 can crawl on SQL Server 2005 (it can be like 10x slower!). Make an experiment - rewrite one query to eliminate the subselects and see how its performance changes.
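If you do decide to experiment with the compatibility level, a quick way to flip it from code is to execute the SQL Server 2005-era sp_dbcmptlevel procedure and then refresh the optimizer information, as the second answer suggests. This is only a sketch; the database name is a placeholder:

using System.Data.SqlClient;

static void SetCompatibilityLevel(string connectionString)
{
    using (var conn = new SqlConnection(connectionString))
    {
        conn.Open();
        // 80 = SQL Server 2000 behavior, 90 = SQL Server 2005 behavior
        using (var cmd = new SqlCommand("EXEC sp_dbcmptlevel 'MyDatabase', 90", conn))
            cmd.ExecuteNonQuery();
        // After changing the level, update statistics so the new optimizer
        // has fresh information to work with (a full-scan pass is even better).
        using (var cmd = new SqlCommand("USE MyDatabase; EXEC sp_updatestats", conn))
            cmd.ExecuteNonQuery();
    }
}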
{ "language": "en", "url": "https://stackoverflow.com/questions/7237", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: Anyone know a good workaround for the lack of an enum generic constraint? What I want to do is something like this: I have enums with combined flagged values.

public static class EnumExtension
{
    public static bool IsSet<T>( this T input, T matchTo )
        where T : enum // the constraint I want that doesn't exist in C# 3
    {
        return (input & matchTo) != 0;
    }
}

So then I could do:

MyEnum tester = MyEnum.FlagA | MyEnum.FlagB

if( tester.IsSet( MyEnum.FlagA ) )
    //act on flag a

Unfortunately, C#'s generic where constraints have no enum restriction, only class and struct. C# doesn't see enums as structs (even though they are value types) so I can't add extension types like this. Does anyone know a workaround?

A: You can achieve this using IL weaving and ExtraConstraints. It allows you to write this code:

public class Sample
{
    public void MethodWithDelegateConstraint<[DelegateConstraint] T> ()
    {
    }

    public void MethodWithEnumConstraint<[EnumConstraint] T>()
    {
    }
}

What gets compiled:

public class Sample
{
    public void MethodWithDelegateConstraint<T>() where T : Delegate
    {
    }

    public void MethodWithEnumConstraint<T>() where T : struct, Enum
    {
    }
}

A: EDIT: This is now live in version 0.0.0.2 of UnconstrainedMelody. (As requested on my blog post about enum constraints. I've included the basic facts below for the sake of a standalone answer.)
The best solution is to wait for me to include it in UnconstrainedMelody1. This is a library which takes C# code with "fake" constraints such as

where T : struct, IEnumConstraint

and turns it into

where T : struct, System.Enum

via a postbuild step. It shouldn't be too hard to write IsSet... although catering for both Int64-based and UInt64-based flags could be the tricky part. (I smell some helper methods coming on, basically allowing me to treat any flags enum as if it had a base type of UInt64.) What would you want the behaviour to be if you called tester.IsSet(MyFlags.A | MyFlags.C)? Should it check that all the specified flags are set? That would be my expectation. I'll try to do this on the way home tonight... I'm hoping to have a quick blitz on useful enum methods to get the library up to a usable standard quickly, then relax a bit.
EDIT: I'm not sure about IsSet as a name, by the way. Options:
* *Includes
*Contains
*HasFlag (or HasFlags)
*IsSet (it's certainly an option)
Thoughts welcome. I'm sure it'll be a while before anything's set in stone anyway...
1 or submit it as a patch, of course...

A: This doesn't answer the original question, but there is now a method in .NET 4 called Enum.HasFlag which does what you are trying to do in your example.

A: As of C# 7.3, there is now a built-in way to add enum constraints:

public class UsingEnum<T> where T : System.Enum { }

source: https://learn.microsoft.com/en-us/dotnet/csharp/language-reference/keywords/where-generic-type-constraint

A: The way I do it is put a struct constraint, then check that T is an enum at runtime.
This doesn't eliminate the problem completely, but it does reduce it somewhat.

A: Darren, that would work if the types were specific enumerations - for general enumerations to work you have to cast them to ints (or more likely uint) to do the boolean math:

public static bool IsSet( this Enum input, Enum matchTo )
{
    return ( Convert.ToUInt32( input ) & Convert.ToUInt32( matchTo ) ) != 0;
}

A: As of C# 7.3, you can use the Enum constraint on generic types:

public static TEnum Parse<TEnum>(string value) where TEnum : Enum
{
    return (TEnum) Enum.Parse(typeof(TEnum), value);
}

If you want to use a nullable enum, you must leave the original struct constraint:

public static TEnum? TryParse<TEnum>(string value) where TEnum : struct, Enum
{
    if( Enum.TryParse(value, out TEnum res) )
        return res;
    else
        return null;
}

A: Actually, it is possible, with an ugly trick. However, it cannot be used for extension methods.

public abstract class Enums<Temp> where Temp : class
{
    public static TEnum Parse<TEnum>(string name) where TEnum : struct, Temp
    {
        return (TEnum)Enum.Parse(typeof(TEnum), name);
    }
}

public abstract class Enums : Enums<Enum> { }

Enums.Parse<DateTimeKind>("Local")

If you want to, you can give Enums<Temp> a private constructor and a public nested abstract inherited class with Temp as Enum, to prevent inherited versions for non-enums.

A: Using your original code, inside the method you can also use reflection to test that T is an enum:

public static class EnumExtension
{
    public static bool IsSet<T>( this T input, T matchTo )
    {
        if (!typeof(T).IsEnum)
        {
            throw new ArgumentException("Must be an enum", "input");
        }
        // A generic T has no & operator, so convert to a common integral type first.
        return (Convert.ToUInt64(input) & Convert.ToUInt64(matchTo)) != 0;
    }
}

A: Here's some code that I just did up that seems to work like you want without having to do anything too crazy. It's not restricted to only enums set as Flags, but there could always be a check put in if need be.

using System;
using System.Globalization;

public static class EnumExtensions
{
    public static bool ContainsFlag(this Enum source, Enum flag)
    {
        var sourceValue = ToUInt64(source);
        var flagValue = ToUInt64(flag);
        return (sourceValue & flagValue) == flagValue;
    }

    public static bool ContainsAnyFlag(this Enum source, params Enum[] flags)
    {
        var sourceValue = ToUInt64(source);
        foreach (var flag in flags)
        {
            var flagValue = ToUInt64(flag);
            if ((sourceValue & flagValue) == flagValue)
            {
                return true;
            }
        }
        return false;
    }

    // found in the Enum class as an internal method
    private static ulong ToUInt64(object value)
    {
        switch (Convert.GetTypeCode(value))
        {
            case TypeCode.SByte:
            case TypeCode.Int16:
            case TypeCode.Int32:
            case TypeCode.Int64:
                return (ulong)Convert.ToInt64(value, CultureInfo.InvariantCulture);
            case TypeCode.Byte:
            case TypeCode.UInt16:
            case TypeCode.UInt32:
            case TypeCode.UInt64:
                return Convert.ToUInt64(value, CultureInfo.InvariantCulture);
        }
        throw new InvalidOperationException("Unknown enum type.");
    }
}

A: I just wanted to add Enum as a generic constraint. While this is just for a tiny helper method, using ExtraConstraints is a bit too much overhead for me. I decided to just create a struct constraint and add a runtime check for IsEnum. For converting a variable from T to Enum, I cast it to object first:
public static Converter<T, string> CreateConverter<T>() where T : struct
{
    if (!typeof(T).IsEnum)
        throw new ArgumentException("Given Type is not an Enum");
    return new Converter<T, string>(x => ((Enum)(object)x).GetEnumDescription());
}

A: If someone needs a generic IsSet (created on the fly; could be improved on), and/or on-the-fly string-to-Enum conversion (which uses the EnumConstraint presented below):

public class TestClass { }
public struct TestStruct { }
public enum TestEnum { e1, e2, e3 }

public static class TestEnumConstraintExtenssion
{
    public static bool IsSet<TEnum>(this TEnum _this, TEnum flag) where TEnum : struct
    {
        return (((uint)Convert.ChangeType(_this, typeof(uint))) & ((uint)Convert.ChangeType(flag, typeof(uint))))
            == ((uint)Convert.ChangeType(flag, typeof(uint)));
    }

    //public static TestClass ToTestClass(this string _this)
    //{
    //    // #generates compile error (so no misuse)
    //    return EnumConstraint.TryParse<TestClass>(_this);
    //}

    //public static TestStruct ToTestStruct(this string _this)
    //{
    //    // #generates compile error (so no misuse)
    //    return EnumConstraint.TryParse<TestStruct>(_this);
    //}

    public static TestEnum ToTestEnum(this string _this)
    {
        // #enum type works just fine (coding constraint to Enum type)
        return EnumConstraint.TryParse<TestEnum>(_this);
    }

    public static void TestAll()
    {
        TestEnum t1 = "e3".ToTestEnum();
        TestEnum t2 = "e2".ToTestEnum();
        TestEnum t3 = "non existing".ToTestEnum(); // default(TestEnum) for non existing

        bool b1 = t3.IsSet(TestEnum.e1);           // you can omit the type
        bool b2 = t3.IsSet<TestEnum>(TestEnum.e2); // you can specify an explicit type

        TestStruct t;
        // #generates compile error (so no misuse)
        //bool b3 = t.IsSet<TestEnum>(TestEnum.e1);
    }
}

If someone still needs an example of how to create an Enum coding constraint:

using System;

/// <summary>
/// Would be the same as EnumConstraint_T<Enum>.Parse<EnumType>("Normal"),
/// but written like this it abuses constraint inheritance on System.Enum.
/// </summary>
public class EnumConstraint : EnumConstraint_T<Enum>
{
}

/// <summary>
/// Provides the ability to constrain TEnum to System.Enum, abusing constraint inheritance.
/// </summary>
/// <typeparam name="TClass">should be System.Enum</typeparam>
public abstract class EnumConstraint_T<TClass>
    where TClass : class
{
    public static TEnum Parse<TEnum>(string value) where TEnum : TClass
    {
        return (TEnum)Enum.Parse(typeof(TEnum), value);
    }

    public static bool TryParse<TEnum>(string value, out TEnum evalue)
        where TEnum : struct, TClass // struct is required to avoid the non-nullable type error
    {
        evalue = default(TEnum);
        return Enum.TryParse<TEnum>(value, out evalue);
    }

    public static TEnum TryParse<TEnum>(string value, TEnum defaultValue = default(TEnum))
        where TEnum : struct, TClass // struct is required to avoid the non-nullable type error
    {
        Enum.TryParse<TEnum>(value, out defaultValue);
        return defaultValue;
    }

    public static TEnum Parse<TEnum>(string value, TEnum defaultValue = default(TEnum))
        where TEnum : struct, TClass // struct is required to avoid the non-nullable type error
    {
        TEnum result;
        if (Enum.TryParse<TEnum>(value, out result))
            return result;
        return defaultValue;
    }

    public static TEnum Parse<TEnum>(ushort value)
    {
        return (TEnum)(object)value;
    }

    public static sbyte to_i1<TEnum>(TEnum value)
    {
        return (sbyte)(object)Convert.ChangeType(value, typeof(sbyte));
    }

    public static byte to_u1<TEnum>(TEnum value)
    {
        return (byte)(object)Convert.ChangeType(value, typeof(byte));
    }

    public static short to_i2<TEnum>(TEnum value)
    {
        return (short)(object)Convert.ChangeType(value, typeof(short));
    }

    public static ushort to_u2<TEnum>(TEnum value)
    {
        return (ushort)(object)Convert.ChangeType(value, typeof(ushort));
    }

    public static int to_i4<TEnum>(TEnum value)
    {
        return (int)(object)Convert.ChangeType(value, typeof(int));
    }

    public static uint to_u4<TEnum>(TEnum value)
    {
        return (uint)(object)Convert.ChangeType(value, typeof(uint));
    }
}

Hope this helps someone.
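Tying the thread back to the original question: with the C# 7.3 constraint mentioned above, the extension method can finally be written directly. A minimal sketch, matching the question's "any bit set" semantics (and, like the earlier Convert-based answers, assuming a flags enum with non-negative values):

using System;

public static class EnumFlagExtensions
{
    // Requires C# 7.3+ for the Enum constraint.
    public static bool IsSet<T>(this T input, T matchTo) where T : struct, Enum
    {
        // Convert.ToUInt64 handles any non-negative underlying integral type
        // (it boxes, but keeps the sketch simple).
        return (Convert.ToUInt64(input) & Convert.ToUInt64(matchTo)) != 0;
    }
}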
{ "language": "en", "url": "https://stackoverflow.com/questions/7244", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "96" }
Q: Puzzle: Find largest rectangle (maximal rectangle problem) What's the most efficient algorithm to find the rectangle with the largest area which will fit in the empty space? Let's say the screen looks like this ('#' represents filled area):

....................
..............######
##..................
.................###
.................###
#####...............
#####...............
#####...............

A probable solution is:

....................
..............######
##...++++++++++++...
.....++++++++++++###
.....++++++++++++###
#####++++++++++++...
#####++++++++++++...
#####++++++++++++...

Normally I'd enjoy figuring out a solution. Although this time I'd like to avoid wasting time fumbling around on my own since this has a practical use for a project I'm working on. Is there a well-known solution?
Shog9 wrote: Is your input an array (as implied by the other responses), or a list of occlusions in the form of arbitrarily sized, positioned rectangles (as might be the case in a windowing system when dealing with window positions)?
Yes, I have a structure which keeps track of a set of windows placed on the screen. I also have a grid which keeps track of all the areas between each edge, whether they are empty or filled, and the pixel position of their left or top edge. I think there is some modified form which would take advantage of this property. Do you know of any?

A: I am the author of the Maximal Rectangle Solution on LeetCode, which is what this answer is based on. Since the stack-based solution has already been discussed in the other answers, I would like to present an optimal O(NM) dynamic programming solution which originates from user morrischen2008.

Intuition
Imagine an algorithm where for each point we computed a rectangle by doing the following:
* *Finding the maximum height of the rectangle by iterating upwards until a filled area is reached
*Finding the maximum width of the rectangle by iterating outwards left and right until a height that doesn't accommodate the maximum height of the rectangle
(The original post illustrates this with a figure of the rectangle defined by a "yellow point".) We know that the maximal rectangle must be one of the rectangles constructed in this manner (the max rectangle must have a point on its base where the next filled square is height above that point).
For each point we define some variables:
* *h - the height of the rectangle defined by that point
*l - the left bound of the rectangle defined by that point
*r - the right bound of the rectangle defined by that point
These three variables uniquely define the rectangle at that point. We can compute the area of this rectangle with h * (r - l). The global maximum of all these areas is our result.
Using dynamic programming, we can use the h, l, and r of each point in the previous row to compute the h, l, and r for every point in the next row in linear time.

Algorithm
Given row matrix[i], we keep track of the h, l, and r of each point in the row by defining three arrays - height, left, and right. height[j] will correspond to the height of matrix[i][j], and so on and so forth with the other arrays. The question now becomes how to update each array.
height: h is defined as the number of continuous unfilled spaces in a line from our point. We increment if there is a new space, and set it to zero if the space is filled (we are using '1' to indicate an empty space and '0' as a filled one).

new_height[j] = old_height[j] + 1 if row[j] == '1' else 0

left: Consider what causes changes to the left bound of our rectangle. Since all instances of filled spaces occurring in the row above the current one have already been factored into the current version of left, the only thing that affects our left is if we encounter a filled space in our current row. As a result we can define:

new_left[j] = max(old_left[j], cur_left)

cur_left is one greater than the rightmost filled space we have encountered. When we "expand" the rectangle to the left, we know it can't expand past that point, otherwise it'll run into the filled space.
right: Here we can reuse our reasoning in left and define:

new_right[j] = min(old_right[j], cur_right)

cur_right is the leftmost occurrence of a filled space we have encountered.

Implementation

def maximalRectangle(matrix):
    if not matrix:
        return 0

    m = len(matrix)
    n = len(matrix[0])

    left = [0] * n   # initialize left as the leftmost boundary possible
    right = [n] * n  # initialize right as the rightmost boundary possible
    height = [0] * n

    maxarea = 0

    for i in range(m):
        cur_left, cur_right = 0, n
        # update height
        for j in range(n):
            if matrix[i][j] == '1':
                height[j] += 1
            else:
                height[j] = 0
        # update left
        for j in range(n):
            if matrix[i][j] == '1':
                left[j] = max(left[j], cur_left)
            else:
                left[j] = 0
                cur_left = j + 1
        # update right
        for j in range(n - 1, -1, -1):
            if matrix[i][j] == '1':
                right[j] = min(right[j], cur_right)
            else:
                right[j] = n
                cur_right = j
        # update the area
        for j in range(n):
            maxarea = max(maxarea, height[j] * (right[j] - left[j]))

    return maxarea

A: I implemented the solution of Dobbs in Java. No warranty for anything.

package com.test;

import java.util.Stack;

public class Test {

    public static void main(String[] args) {
        boolean[][] test2 = new boolean[][] {
            new boolean[] { false, true, true, false },
            new boolean[] { false, true, true, false },
            new boolean[] { false, true, true, false },
            new boolean[] { false, true, false, false }
        };
        solution(test2);
    }

    private static class Point {
        public Point(int x, int y) {
            this.x = x;
            this.y = y;
        }
        public int x;
        public int y;
    }

    public static int[] updateCache(int[] cache, boolean[] matrixRow, int MaxX) {
        for (int m = 0; m < MaxX; m++) {
            if (!matrixRow[m]) {
                cache[m] = 0;
            } else {
                cache[m]++;
            }
        }
        return cache;
    }

    public static void solution(boolean[][] matrix) {
        Point best_ll = new Point(0, 0);
        Point best_ur = new Point(-1, -1);
        int best_area = 0;

        final int MaxX = matrix[0].length;
        final int MaxY = matrix.length;

        Stack<Point> stack = new Stack<Point>();
        int[] cache = new int[MaxX + 1];

        for (int m = 0; m != MaxX + 1; m++) {
            cache[m] = 0;
        }

        for (int n = 0; n != MaxY; n++) {
            int openWidth = 0;
            cache = updateCache(cache, matrix[n], MaxX);
            for (int m = 0; m != MaxX + 1; m++) {
                if (cache[m] > openWidth) {
                    stack.push(new Point(m, openWidth));
                    openWidth = cache[m];
                } else if (cache[m] < openWidth) {
                    int area;
                    Point p;
                    do {
                        p = stack.pop();
                        area = openWidth * (m - p.x);
                        if (area > best_area) {
                            best_area = area;
                            best_ll.x = p.x;
                            best_ll.y = n;
                            best_ur.x = m - 1;
                            best_ur.y = n - openWidth + 1;
                        }
                        openWidth = p.y;
                    } while (cache[m] < openWidth);
                    openWidth = cache[m];
                    if (openWidth != 0) {
                        stack.push(p);
                    }
                }
            }
        }

        System.out.printf("The maximal rectangle has area %d.\n", best_area);
        System.out.printf("Location: [col=%d, row=%d] to [col=%d, row=%d]\n",
                best_ll.x + 1, best_ll.y + 1, best_ur.x + 1, best_ur.y + 1);
    }
}

A: I'm the author of that Dr. Dobb's article and get occasionally asked about an implementation. Here is a simple one in C:

#include <assert.h>
#include <stdio.h>
#include <stdlib.h>

typedef struct {
    int one;
    int two;
} Pair;

Pair best_ll = { 0, 0 };
Pair best_ur = { -1, -1 };
int best_area = 0;

int *c; /* Cache */
Pair *s; /* Stack */
int top = 0; /* Top of stack */

void push(int a, int b) {
    s[top].one = a;
    s[top].two = b;
    ++top;
}

void pop(int *a, int *b) {
    --top;
    *a = s[top].one;
    *b = s[top].two;
}

int M, N; /* Dimension of input; M is length of a row. */

void update_cache() {
    int m;
    char b;
    for (m = 0; m!=M; ++m) {
        scanf(" %c", &b);
        fprintf(stderr, " %c", b);
        if (b=='0') {
            c[m] = 0;
        } else {
            ++c[m];
        }
    }
    fprintf(stderr, "\n");
}

int main() {
    int m, n;
    scanf("%d %d", &M, &N);
    fprintf(stderr, "Reading %dx%d array (1 row == %d elements)\n", M, N, M);
    c = (int*)malloc((M+1)*sizeof(int));
    s = (Pair*)malloc((M+1)*sizeof(Pair));
    for (m = 0; m!=M+1; ++m) { c[m] = s[m].one = s[m].two = 0; }

    /* Main algorithm: */
    for (n = 0; n!=N; ++n) {
        int open_width = 0;
        update_cache();
        for (m = 0; m!=M+1; ++m) {
            if (c[m]>open_width) { /* Open new rectangle? */
                push(m, open_width);
                open_width = c[m];
            } else /* "else" optional here */
            if (c[m]<open_width) { /* Close rectangle(s)? */
                int m0, w0, area;
                do {
                    pop(&m0, &w0);
                    area = open_width*(m-m0);
                    if (area>best_area) {
                        best_area = area;
                        best_ll.one = m0; best_ll.two = n;
                        best_ur.one = m-1; best_ur.two = n-open_width+1;
                    }
                    open_width = w0;
                } while (c[m]<open_width);
                open_width = c[m];
                if (open_width!=0) {
                    push(m0, w0);
                }
            }
        }
    }

    fprintf(stderr, "The maximal rectangle has area %d.\n", best_area);
    fprintf(stderr, "Location: [col=%d, row=%d] to [col=%d, row=%d]\n",
            best_ll.one+1, best_ll.two+1, best_ur.one+1, best_ur.two+1);
    return 0;
}

It takes its input from the console. You could e.g. pipe this file to it (whitespace is insignificant to scanf, so the layout of the matrix data doesn't matter):

16 12 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 1 0 0 1 0 0 0 1 1 0 1 0 0 0 0 1 1 0 1 1 1 0 1 1 1 0 1 0 0 0 0 0 1 1 * * * * * * 0 0 1 0 0 0 0 0 0 0 * * * * * * 0 0 1 0 0 0 0 0 0 0 1 1 0 1 1 1 1 1 1 0 0 0 1 0 0 0 0 1 0 0 1 1 1 0 1 0 0 0 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 0 0 0

And after printing its input, it will output:

The maximal rectangle has area 12.
Location: [col=7, row=6] to [col=12, row=5]

The implementation above is nothing fancy of course, but it's very close to the explanation in the Dr. Dobb's article and should be easy to translate to whatever is needed.

A: @lassevk I found the referenced article, from DDJ: The Maximal Rectangle Problem

A: @lassevk

// 4. Outer double-for-loop to consider all possible positions
//    for topleft corner.
for (int i=0; i < M; i++) {
    for (int j=0; j < N; j++) {
        // 2.1 With (i,j) as topleft, consider all possible bottom-right corners.
        for (int a=i; a < M; a++) {
            for (int b=j; b < N; b++) {

HAHA... O(m^2 n^2)... That's probably what I would have come up with. I see they go on to develop optimizations... looks good, I'll have a read.

A: Implementation of the stack-based algorithm in plain JavaScript (with linear time complexity):

function maxRectangle(mask) {
    var best = {area: 0}
    const width = mask[0].length
    const depth = Array(width).fill(0)
    for (var y = 0; y < mask.length; y++) {
        const ranges = Array()
        for (var x = 0; x < width; x++) {
            const d = depth[x] = mask[y][x] ? depth[x] + 1 : 0
            if (!ranges.length || ranges[ranges.length - 1].height < d) {
                ranges.push({left: x, height: d})
            } else {
                for (var j = ranges.length - 1; j >= 0 && ranges[j].height >= d; j--) {
                    const {left, height} = ranges[j]
                    const area = (x - left) * height
                    if (area > best.area) {
                        best = {area, left, top: y + 1 - height, right: x, bottom: y + 1}
                    }
                }
                ranges.splice(j+2)
                ranges[j+1].height = d
            }
        }
    }
    return best;
}

var example = [
    [1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1],
    [1,1,1,1,1,1,1,1,1,1,1,1,1,1,0,0,0,0,0,0],
    [0,0,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1],
    [1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,0,0,0],
    [1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,0,0,0],
    [0,0,0,0,0,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1],
    [0,0,0,0,0,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1],
    [0,0,0,0,0,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1]]

console.log(maxRectangle(example))
{ "language": "en", "url": "https://stackoverflow.com/questions/7245", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "43" }
Q: How to make junior programmers write tests? We have a junior programmer that simply doesn't write enough tests. I have to nag him every two hours, "have you written tests?" We've tried: * *Showing that the design becomes simpler *Showing it prevents defects *Making it an ego thing saying only bad programmers don't *This weekend 2 team members had to come to work because his code had a NULL reference and he didn't test it My work requires top quality stable code, and usually everyone 'gets it' and there's no need to push tests through. We know we can make him write tests, but we all know the useful tests are those written when you're into it. Do you know of more motivations? A: Here's what I would do: * *First time out... "we're going to do this project jointly. I'm going to write the tests and you're going to write the code. Pay attention to how I write the tests, coz that's how we do things around here and that's what I'll expect of you." *Following that... "You're done? Great! First let's look at the tests that are driving your development. Oh, no tests? Let me know when that is done and we'll reschedule looking at your code. If you're needing help to formulate the tests let me know and I'll help you." A: He's already doing this. Really. He just doesn't write it down. Not convinced? Watch him go through the standard development cycle: * *Write a piece of code *Compile it *Run to see what it does *Write the next piece of code Step #3 is the test. He already does testing, he just does it manually. Ask him this question: "How do you know tomorrow that the code from today still works?" He will answer: "It's such a little amount of code!" Ask: "How about next week?" When he hasn't got an answer, ask: "How would you like your program to tell you when a change breaks something that worked yesterday?" That's what automatic unit testing is all about. A: As a junior programmer, I'm still trying to get into the habit of writing tests. Obviously it's not easy to pick up new habits, but thinking about what would make this work for me, I have to +1 the comments about code reviews and coaching/pair programming. It may also be worth emphasising the long-term purpose of testing: ensuring that what worked yesterday is still working today, and next week, and next month. I only say that because in skimming the answers I didn't see that mentioned. In doing code reviews (if you decide to do that), make sure your young dev knows it's not about putting him down, it's about making the code better. Because that way his confidence is less likely to get damaged. And that's important. On the other hand, so is knowing how little you know. Of course, I don't really know anything. But I hope the words have been useful. Edit: [Justin Standard] Don't put yourself down, what you have to say is pretty much right on. On your point about code reviews: what you will find is that not only will the junior dev learn in the process, but so will the reviewers. Everyone in a code review learns if you make it a collaborative process. A: Frankly, if you are having to put that much effort into getting him to do something then you may have to come to terms with the thought that he may just not be a good fit for the team, and may need to go. Now, this doesn't necessarily mean firing him... it may mean finding someplace else in the company his skills are more suited. But if there is no place else...you know what to do. 
I'm assuming he is also a fairly new hire (< 1 year) and probably recently out of school... in which case he may not be accustomed to how things work in a corporate setting. Things like that most of us could get away with in college. If this is the case, one thing I've found works is to have a sort of "surprise new hire review." It doesn't matter if you've never done it before... he won't know that. Just sit him down and tell him you are going to go over his performance and show him some real numbers... take your normal review sheet (you do have a formal review process, right?) and change the heading if you want so it looks official, and show him where he stands. If you show him in a very formal setting that not doing tests is adversely affecting his performance rating, as opposed to just "nagging" him about it, he will hopefully get the point. You've got to show him that his actions will actually affect him, be it pay-wise or otherwise. I know, you may want to stay away from doing this because it's not official... but I think you are within reason to do it, and it's probably going to be a whole lot cheaper than having to fire him and recruit someone new.

A: As a junior programmer myself, I thought I'd reveal what it was like when I found myself in a similar situation to your junior developer. When I first came out of uni, I found that it had severely underequipped me to deal with the real world. Yes, I knew some Java basics and some philosophy (don't ask), but that was about it. When I first got my job it was a little daunting, to say the least. Let me tell you, I was probably one of the biggest cowboys around; I would hack together a little bug fix / algorithm with no comments / testing / documentation and ship it out the door.
I was lucky enough to be under the supervision of a kind and very patient senior programmer. Luckily for me, he decided to sit down with me and spend 1-2 weeks going through my very hacked-together code. He would explain where I'd gone wrong and the finer points of C and pointers (boy did that confuse me!). We managed to write a pretty decent class/module in about a week. All I can say is that if the senior dev hadn't invested the time to help me along the right path, I probably wouldn't have lasted very long. Happily, 2 years down the line, I would hope that some of my colleagues might even consider me an average programmer.
Take-home points:
* *Most universities are very bad at preparing students for the real world
*Pair programming really helped me. That's not to say that it will help everyone, but it worked for me.

A: Assign them to projects that don't require "top quality stable code" if that's your concern, and let the junior developer fail. Have them be the one to 'come in on the weekend' to fix their bugs. Have lunch a lot and talk about software development practices (not lectures, but discussions). In time they will acquire and develop the best practices to do the tasks they are assigned. Who knows, they might even come up with something better than the techniques your team currently uses.

A: Have a code review before every commit (even if it's a 1-minute "I've changed this variable name"), and as part of the code review, review any unit tests. Don't sign off on the commit until the tests are in place. (Also - if his work wasn't tested, why was it in a production build in the first place?
If it's not tested, don't let it in; then you won't have to work weekends.)
A: For myself, I have started insisting that every bug I find and fix be expressed as a test: * *"Hmmm, that's not right..." *Find the possible problem *Write a test, show that the code fails *Fix the problem *Show that the new code passes *Loop if the original problem persists I try to do this even while banging stuff out, and I get done in about the same time, only with a partial test suite already in place. (I don't live in a commercial programming environment, and am often the only coder working on a particular project.) A hypothetical sketch of such a bug-turned-test follows below.
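To make that loop concrete -- this is a hypothetical Python illustration, not code from anyone's actual project -- the weekend NULL-reference incident from the question might have been captured like this before the fix shipped:

import unittest

# Hypothetical lookup with the weekend's bug: it crashed when a
# customer was missing. The fix handles the None case explicitly.
def get_customer_name(customers, customer_id):
    customer = customers.get(customer_id)
    if customer is None:
        return "<unknown>"  # the fix: tolerate the missing record
    return customer["name"]

class TestMissingCustomerRegression(unittest.TestCase):
    def test_missing_customer_does_not_crash(self):
        # Written while the bug was live, this failed with a TypeError;
        # after the fix it passes and keeps the bug from coming back.
        self.assertEqual(get_customer_name({}, 42), "<unknown>")

if __name__ == "__main__":
    unittest.main()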
A: If the junior programmer, or anyone, doesn't see the value in testing, then it will be hard to get them to do it... period. I would have made the junior programmer sacrifice his weekend to fix the bug. His actions (or lack thereof) are not affecting him directly. Also, make it apparent that he will not see advancement and/or pay increases if he doesn't improve his testing skills. In the end, even with all your help, encouragement, and mentoring, he might not be a fit for your team, so let him go and look for someone who does get it.
A: I second RodeoClown's comment about code-reviewing every commit. Once he's done it a fair few times, he'll get in the habit of testing stuff. I don't know if you need to block commits like that, though. At my workplace everyone has free commit to everything, and all SVN commit messages (with diffs) are emailed to the team. Note: you really want the Thunderbird colored-diffs addon if you plan on doing this. My boss or I (the two 'senior' coders) will end up reading over the commits, and if there's anything like "you forgot to add unit tests," we just flick off an email or go and chat to the person, explaining why they needed unit tests or whatever. Everyone else is encouraged to read the commits too, as it's a great way of seeing what's going on, but the junior devs don't comment so much. You can help encourage people to get into the habit by periodically saying things like "Hey, Bob, did you see that commit I did this morning? I found this neat trick; read the commit and see how it works!" NB: We have two 'senior' devs and three junior ones. This may not scale, or you might need to adjust the process a bit with more developers.
A: It's his mentor's responsibility to teach him. How well are you teaching him HOW to test? Are you pair programming with him? The junior more than likely doesn't know HOW to set up a good test for XYZ. As a junior fresh out of school, he knows many concepts, some technique, some experience; but in the end, all a junior is is POTENTIAL. Almost every feature they work on will involve something they have never done before. Sure, the junior may have done a simple State pattern for a project in class, opening and shutting "doors," but never a real-world application of the pattern. He will only be as good as how well you teach. If they were able to "just get it," do you think they would have taken a junior position in the first place? In my experience, juniors are hired and given almost the same responsibility as seniors, but are paid less and then ignored when they start to falter. Forgive me if I seem bitter; it's 'cause I am.
A: * *Make code coverage part of the reviews. *Make "write a test that exposes the bug" a prerequisite to fixing a bug. *Require a certain level of coverage before code can be checked in. *Find a good book on test-driven development and use it to show how test-first can speed development.
A: Lots of psychology and helpful "mentoring" techniques, but in all honesty, this just boils down to "write tests if you want to still have a job tomorrow." You can couch it in whatever terms you think are appropriate, harsh or soft; it doesn't matter. But the fact is, programmers are not paid to just throw together code and check it in -- they're paid to carefully put together code, then put together tests, then test their code, THEN check the whole thing in. (At least that's what it sounds like from your description.) Hence, if someone is going to refuse to do their job, explain to them that they can stay home tomorrow and you'll hire someone who WILL do the job. Again, you can do all this gently if you think that's necessary, but a lot of people just need a big hard slap of Life In The Real World, and you'd be doing them a favor by giving it to them. Good luck.
A: Change his job description for a while to be solely writing and maintaining tests. I've heard that many companies do this with fresh, inexperienced people when they start. Additionally, issue a challenge while he's in that role: write tests that will a) fail on the current code and b) fulfill the requirements of the software. Hopefully it'll cause him to create some solid and thorough tests (improving the project) and make him better at writing tests for when he re-integrates into core development. Edit: "fulfill the requirements of the software" means that he's not just writing tests to purposely break the code when the code was never intended or needed to handle that case.
A: Imagine I am a mock programmer, named... Marco. Imagine I graduated school not that long ago and never really had to write tests. Imagine I work in a company that doesn't really enforce or ask for this. OK? Good! Now imagine that the company is switching to using tests, and they are trying to get me in line with this. I will give somewhat snarky reactions to the items mentioned so far, as if I hadn't done any research on the topic. Let's start with the original question's points: Showing that the design becomes simpler. How can writing more make things simpler? I would now have to keep tabs on more cases, etc. This makes it more complicated, if you ask me. Give me solid details. Showing it prevents defects. I know that. This is why they are called tests. My code is good, and I checked it for issues, so I don't see where those tests would help. Making it an ego thing, saying only bad programmers don't. Ohh, so you think I am a bad programmer just because I don't write as many tests. I'm insulted and positively annoyed at you. I would rather have assistance and support than sayings. @Justin Standard: At the start of a new project, pair the junior guy up with yourself or another senior programmer. Ohh, this is so important that resources will be spent making sure I see how things are done, with someone to assist me. This is helpful, and I might just start doing it more. @Justin Standard: Read the Unit Testing 101 presentation by Kate Rhodes. Ahh, that was an interesting presentation, and it made me think about testing. It hammered in some points that I should consider, and it might have swayed my views a bit. I would love to see more compelling articles and other tools to assist me in getting in line with thinking this is the right way to do things. @Dominic Cooney: Spend some time and share testing techniques.
Ahh, this helps me understand what is expected of me as far as techniques, and it puts more items in my bag of knowledge that I might use again. @Dominic Cooney: Answer questions, examples, and books. Having a point person (or people) to answer questions is helpful; it might make me more likely to try. Good examples are great, giving me something to aim for and something to use for reference. Books that are directly relevant to this are great references. @Adam Hayle: Surprise review. Say what? You've sprung something on me that I am completely unprepared for. I feel uncomfortable with this, but will do my best. I will now be scared and mildly apprehensive of this coming up again, thank you. The scare tactic might have worked, but it does have a cost. However, if nothing else works, this might just be the push that is needed. @Rytmis: Items are only considered done when they have test cases. Ohh, interesting. I see I really do have to do this now, otherwise I'm not completing anything. This makes sense. @jmorris: Get rid / sacrifice. glares, glares, glares - There is a chance I might learn, and with support and assistance, I can become a very important and functional part of the team. This is one of my handicaps now, but it won't be for long. However, if I just don't get it, I understand that I will go. I think I will get it. In the end, the support of my team will play a large part in all this. Having a person take their time to assist me and get me started on good habits is always welcome. Then, afterward, having a good support net would be great. It would always be appreciated to have someone come by a few times afterward and go over some code, to see how everything is flowing -- not in a review per se, but more as a friendly visit. Reasoning, preparing, teaching, follow-up, support.
A: This is one of the hardest things to do: getting your people to get it. Sometimes one of the best ways to help junior-level programmers 'get it' and learn the right techniques from the seniors is to do a bit of pair programming. Try this: on an upcoming project, pair the junior guy up with yourself or another senior programmer. They should work together, taking turns "driving" (being the one typing at the keyboard) and "coaching" (looking over the shoulder of the driver and pointing out suggestions, mistakes, etc. as they go). It may seem like a waste of resources, but you will find: * *That these guys together can produce code plenty fast and of higher quality *That if your junior guy learns enough to "get it," with a senior guy directing him along the right path (e.g., "OK, now before we continue, let's write a test for this function"), it will be well worth the resources you commit to it. Maybe also have someone in your group give the Unit Testing 101 presentation by Kate Rhodes; I think it's a great way to get people excited about testing, if delivered well. Another thing you can do is have your junior devs practice the Bowling Game Kata, which will help them learn test-driven development. It is in Java, but can easily be adapted to any language; a sketch of its first step follows below.
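For a taste of how that kata begins -- adapted to Python here, and sketched from the commonly circulated first step of the exercise rather than any official source -- the first red/green cycle looks roughly like this:

import unittest

class BowlingGame:
    """Just enough code to pass the first test -- TDD's 'simplest thing'."""
    def __init__(self):
        self._score = 0

    def roll(self, pins):
        self._score += pins  # naive scoring; later tests force spares and strikes

    def score(self):
        return self._score

class TestBowlingGame(unittest.TestCase):
    def test_gutter_game_scores_zero(self):
        # The kata's first test: 20 rolls, all gutter balls, total score 0.
        game = BowlingGame()
        for _ in range(20):
            game.roll(0)
        self.assertEqual(game.score(), 0)

if __name__ == "__main__":
    unittest.main()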
A: I've noticed that a lot of programmers see the value of testing on a rational level. If you've heard things like "yeah, I know I should test this, but I really need to get this done quickly," then you know what I mean. However, on an emotional level they feel that they get something done only when they're writing the real code. The goal, then, should be to somehow get them to understand that testing is in fact the only way to measure when something is "done," and thus give them the intrinsic motivation to write tests. I'm afraid that's a lot harder than it should be, though. You'll hear a lot of excuses along the lines of "I'm in a real hurry; I'll rewrite/refactor this later and then add the tests" -- and of course, the follow-up never happens because, surprisingly, they're just as busy the next week.
A: If your colleague lacks experience writing tests, maybe he or she is having difficulty testing beyond simple situations, and that is manifesting itself as inadequate testing. Here's what I would try: * *Spend some time with your colleague sharing testing techniques, like dependency injection, looking for edge cases, and so on *Offer to answer questions about testing *Do code reviews of tests for a while. Ask your colleague to review changes of yours that are exemplary of good testing. Look at their comments to see if they're really reading and understanding your test code *If there are books that fit particularly well with your team's testing philosophy, give him or her a copy. It might help if your code-review feedback or discussions reference the book so he or she has a thread to pick up and follow. I wouldn't especially emphasize the shame/guilt factor. It is worth pointing out that testing is a widely adopted good practice and that writing and maintaining good tests is a professional courtesy, so your teammates don't need to spend their weekends at work, but I wouldn't belabor those points. If you really need to "get tough," then institute an impartial system; nobody likes to feel like they're being singled out. For example, your team might require code to maintain a certain level of test coverage (able to be gamed, but at least able to be automated); require new code to have tests; require reviewers to consider the quality of tests when doing code reviews; and so on. Instituting that system should come from team consensus. If you moderate the discussion carefully, you might uncover other underlying reasons your colleague's testing isn't what you expect.
A: @jsmorris: I once had the senior developer and "architect" berate me and a tester (it was my first job out of college) in email for not staying late and finishing such an "easy" task the night before. We had been at it all day and called it quits at 7pm; I had been thrashing since 11am, before lunch that day, and had pestered every member of our team for help at least twice. I responded and cc'd the team with: "I've been disappointed in you for a month now. I never get help from the team. I'll be at the coffee shop across the street if you need me. I'm sorry I couldn't debug the 12-parameter, 800-line method that just about everything is dependent on." After cooling off at the coffee shop for an hour, I went back to the office, grabbed my crap, and left. After a few days they called me asking if I was coming in. I said I would, but I had an interview; maybe tomorrow. "So you're quitting, then?"
A: On your source repository, use hooks that run before each commit (a pre-commit hook for SVN, for example). In that hook, check for the existence of at least one test case for each method. Use a convention for unit-test organisation that you can easily enforce via the pre-commit hook. On an integration server, compile everything and regularly check the test coverage using a test coverage tool. If test coverage is not 100% for a piece of code, block any commit from the developer. A minimal sketch of such a hook appears below.
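Here is a minimal sketch of such a hook, assuming a Python pre-commit script and a purely illustrative repository convention that any change under src/ must be accompanied by a change under tests/:

#!/usr/bin/env python
# Hypothetical SVN pre-commit hook. SVN invokes it as: pre-commit REPOS TXN
# and a non-zero exit status blocks the commit.
import subprocess
import sys

repos, txn = sys.argv[1], sys.argv[2]
output = subprocess.check_output(
    ["svnlook", "changed", "--transaction", txn, repos]).decode()
# svnlook prints a status code, whitespace, then the changed path.
paths = [line.split(None, 1)[1] for line in output.splitlines() if line.strip()]

touches_src = any("/src/" in p or p.startswith("src/") for p in paths)
touches_tests = any("/tests/" in p or p.startswith("tests/") for p in paths)

if touches_src and not touches_tests:
    sys.stderr.write("Commit rejected: changes under src/ must ship with tests.\n")
    sys.exit(1)
sys.exit(0)

On the integration-server side, a coverage tool can enforce the threshold mechanically; with coverage.py, for instance, "coverage report --fail-under=100" exits non-zero when coverage drops below the bar.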
He should send you the test cases that cover 100% of the code. Only automatic checks can scale well on a project. You cannot check everything by hand. The developer should have a means to check whether his test cases cover 100% of the code. That way, if he doesn't commit 100%-tested code, it is his own fault, not an "oops, sorry, I forgot" fault. Remember: people never do what you expect; they always do what you inspect.
A: First off, like most respondents here point out, if the guy doesn't see the value in testing, there's not much you can do about it, and you've already pointed out that you can't fire him. However, failure is not an option here, so what about the few things you can do? If your organization is large enough to have more than six developers, I strongly recommend having a quality assurance department (even if it's just one person to start). Ideally, you should have a ratio of one tester to every 3-5 developers. The thing about programmers is... they are programmers, not testers. I have yet to interview a programmer who has been formally taught proper QA techniques. Most organizations make the fatal flaw of assigning the testing role to the new hire, the person with the LEAST amount of exposure to your code -- ideally, the senior developers should be moved to the QA role, as they have the experience in the code and (hopefully) have developed a sixth sense for the code smells and failure points that can crop up. Furthermore, the programmer who made the mistake is probably not going to find the defect, because it's usually not a syntax error (those get picked up in the compile) but a logic error -- and the same logic is at work when they write the test as when they write the code. Don't have the person who developed the code test that code -- they'll find fewer bugs than anyone else would. In your case, if you can afford the redirected work effort, make this new guy the first member of your QA team. Get him to read "Software Testing in the Real World: Improving the Process," because he obviously will need some training in his new role. If he doesn't like it, he'll quit and your problem is still solved. A slightly less vengeful approach would be to let this person do what they are good at (I'm assuming this person got hired because they are actually competent at the programming part of the job), and hire a tester or two to do the testing (university students often have practicum or "co-op" terms, would love the exposure, and are cheap). Side note: eventually, you'll want the QA team reporting to a QA director, or at least not to a software development manager, because having the QA team report to the manager whose primary goal is to get the product done is a conflict of interest. If your organization is smaller than six, or you can't get away with creating a new team, I recommend pair programming (PP). I'm not a total convert to all the extreme programming techniques, but I'm definitely a believer in pair programming. However, both members of the pair have to be dedicated, or it simply doesn't work. They have to follow two rules: the inspector has to fully understand what is being coded on the screen or has to ask the coder to explain it; and the coder can only code what he can explain -- no "you'll see," "trust me," or hand-waving will be tolerated. I only recommend PP if your team is capable of it, because, like testing, no amount of cheering or threatening will persuade a couple of ego-filled introverts to work together if they don't feel comfortable doing so.
However, I find that given the choice between writing detailed functional specs plus code reviews versus paired programming, PP usually wins out. If PP is not for you, then TDD is your best bet, but only if it's taken literally. Test-driven development means you write the tests FIRST, run the tests to prove they actually do fail, then write the simplest code to make them pass. The trade-off is that you now (should) have a collection of thousands of tests, which is also code, and is just as likely as production code to contain bugs. I'll be honest, I'm not a big fan of TDD, mainly for this reason, but it works for many developers who would rather write test scripts than test case documents -- some testing is better than none. Couple TDD with PP for a better likelihood of test coverage and fewer bugs in the script. If all else fails, have the programmer's equivalent of a swear jar -- each time a programmer breaks the build, they have to put $20, $50, or $100 (whatever is moderately painful for your staff) into a jar that goes to your favorite (registered!) charity. Until they pay up, shun them :) All joking aside, the best way to get your programmer to write tests is to not let him program. If you want a programmer, hire a programmer -- if you want tests, hire a tester. I started as a junior programmer 12 years ago doing testing, and it turned into my career path, and I wouldn't trade it for anything. A solid QA department that is properly nurtured and given the power and mandate to improve the software is just as valuable as the developers writing the software in the first place.
A: Based on your comment, "Showing that the design becomes simpler," I'm assuming you guys practice TDD. Doing a code review after the fact is not going to work. The whole thing about TDD is that it's a design philosophy, not a testing philosophy. If he didn't write the tests as part of the design, you aren't going to get a lot of benefit from writing tests after the fact -- especially from a junior developer. He'll end up missing a whole lot of corner cases, and his code will still be crappy. Your best bet is to have a very patient senior developer sit with him and do some pair programming. And just keep at it until he learns. Or doesn't learn, in which case you need to reassign him to a task he is better suited to, because you will just end up frustrating your real developers. Not everyone has the same level of talent and/or motivation. Development teams, even agile ones, are made up of people on the "A-Team" and people on the "B-Team". The A-Team members are the ones who architect the solution, write all the non-trivial production code, and communicate with the business owners -- all the work that requires thinking outside the box. The B-Team handles things like configuration management, writing scripts, fixing lame bugs, and doing maintenance work -- all the work that has strict procedures and small consequences for failure.
A: This may be a bit heartless, but the way you describe the situation, it sounds like you need to fire this guy. Or at least make it clear: refusing to follow house development practices (including writing tests) and checking in buggy code that other people have to clean up will eventually get you fired.
A: The main reason junior engineers/programmers don't take much time to design and perform test scripts is that most CS curricula do not heavily require it, so other areas of engineering, such as design patterns, are covered more deeply in college programs.
In my experience, the best way to get junior professionals into the habit is to make it an explicit part of the process. That is, when estimating the time an iteration should take, the time to design, write, and/or execute the test cases should be incorporated into the estimate. Finally, reviewing the test-script design should be part of the design review, and the actual code should be reviewed in the code review. This makes the programmer responsible for properly testing each line of code he/she writes, and the senior engineer and peers responsible for providing feedback and guidance on the code and tests written.
{ "language": "en", "url": "https://stackoverflow.com/questions/7252", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "109" }