source_id | question | response | metadata |
---|---|---|---|
26,732 | <servlet> <servlet-name>myservlet</servlet-name> <servlet-class>workflow.WDispatcher</servlet-class> <load-on-startup>2</load-on-startup></servlet><servlet-mapping> <servlet-name>myservlet</servlet-name> <url-pattern>*NEXTEVENT*</url-pattern></servlet-mapping> Above is the snippet from Tomcat's web.xml . The URL pattern *NEXTEVENT* on start up throws java.lang.IllegalArgumentException: Invalid <url-pattern> in servlet mapping It will be greatly appreciated if someone can hint at the error. | <url-pattern>*NEXTEVENT*</url-pattern> The URL pattern is not valid. It can either end in an asterisk or start with one (to denote a file extension mapping). The url-pattern specification: A string beginning with a ‘/’ character and ending with a ‘/*’ suffix is used for path mapping. A string beginning with a ‘*.’ prefix is used as an extension mapping. A string containing only the ’/’ character indicates the "default" servlet of the application. In this case the servlet path is the request URI minus the context path and the path info is null. All other strings are used for exact matches only. See section 12.2 of the Java Servlet Specification Version 3.1 for more details. | {
"score": 8,
"source": [
"https://Stackoverflow.com/questions/26732",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
]
} |
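The mapping rules quoted in the answer above can be condensed into a tiny validator. The following Python sketch is purely illustrative (it is not part of any Servlet API); it classifies a url-pattern by the spec's four cases and flags wildcard strings like `*NEXTEVENT*` that fit neither mapping form:

```python
def classify_url_pattern(pattern: str) -> str:
    """Classify a servlet url-pattern per the rules quoted above.

    Returns one of: 'default', 'path', 'extension', 'exact', 'invalid'.
    'invalid' marks strings that contain a wildcard but are neither a
    '/...*' path mapping nor a '*.' extension mapping, which is what
    Tomcat rejects with IllegalArgumentException.
    """
    if pattern == "/":
        return "default"                      # the "default" servlet
    if pattern.startswith("/") and pattern.endswith("/*"):
        return "path"                         # path mapping
    if pattern.startswith("*."):
        return "extension"                    # extension mapping
    if "*" in pattern:
        return "invalid"                      # e.g. "*NEXTEVENT*"
    return "exact"                            # all other strings
```

Running `classify_url_pattern("*NEXTEVENT*")` returns `"invalid"`, matching the startup error in the question.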
26,733 | Using reflection, how can I get all types that implement an interface with C# 3.0/.NET 3.5 with the least code, and minimizing iterations? This is what I want to re-write: foreach (Type t in this.GetType().Assembly.GetTypes()) if (t is IMyInterface) ; //do stuff | Mine would be this in c# 3.0 :) var type = typeof(IMyInterface);var types = AppDomain.CurrentDomain.GetAssemblies() .SelectMany(s => s.GetTypes()) .Where(p => type.IsAssignableFrom(p)); Basically, the least amount of iterations will always be: loop assemblies loop types see if implemented. | {
"score": 11,
"source": [
"https://Stackoverflow.com/questions/26733",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1782/"
]
} |
26,760 | I have some strings of xxh:yym format where xx is hours and yy is minutes like "05h:30m". What is an elegant way to convert a string of this type to TimeSpan? | This seems to work, though it is a bit hackish: TimeSpan span;if (TimeSpan.TryParse("05h:30m".Replace("m","").Replace("h",""), out span)) MessageBox.Show(span.ToString()); | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/26760",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/31505/"
]
} |
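The parsing problem above is language-neutral. As an illustration only (Python rather than the question's C#), a regex sketch that avoids the chained Replace calls:

```python
import re
from datetime import timedelta

def parse_hm(text: str) -> timedelta:
    """Parse strings like '05h:30m' (xxh:yym) into a timedelta."""
    m = re.fullmatch(r"(\d{1,2})h:(\d{1,2})m", text)
    if m is None:
        raise ValueError(f"not in xxh:yym format: {text!r}")
    hours, minutes = int(m.group(1)), int(m.group(2))
    return timedelta(hours=hours, minutes=minutes)
```

Anchoring the regex with `fullmatch` rejects malformed input outright instead of silently mis-parsing it, which is the weakness of the strip-the-letters approach in the answer.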
26,762 | Our dev shop currently uses Visual SourceSafe. We all know how that could end up (badly), so we're investigating other systems. First up is Perforce. Does anyone have experience with using it and its integration into Visual Studio (2003/2005/2008)? Is it as good as any other, or is it pretty solid with good features, comparatively? | I used Perforce at my last 3 jobs (my current job I'm using Subversion, which I don't like nearly as much.) I'm a big fan of Perforce, and moving from SourceSafe it will seem like Nirvana. Just getting atomic checkin will be a big boost for your company. Otherwise, Perforce is fast, it has good tools, and the workflow is simple for doing things like merges and integrations. I wholeheartedly recommend it. It may not be all new and flashy like the latest distributed VCS's, but honestly, I prefer the client/server model for its speed, especially if you're working with people in other countries that may have slow connections to you. The Visual Studio integration is pretty good, but it has a few irritating issues. If you run another Perforce client at the same time (like P4V), it's very poor at keeping changes from the other client in sync in terms of showing what files are currently checked in/out. You generally have to shut down Visual Studio and load the project again if you want it to sync correctly. But, the sync status doesn't actually affect checkins/checkouts/updates from working correctly, it just means you can be fooled in to thinking something is in a different state than it actually is while you're in Visual Studio. The Perforce clients will always show the correct status as they sync continually with the database. Also, on occasion you'll find you need to work "offline" (not connected to the Perforce database for some reason) and when you load the project again the next time, your Perforce bindings may be lost and you'll have to rebind each project individually. 
If you work with a solution that contains many projects this can be a big pain in the patoot. Same goes for when you first check out a solution, binding to Perforce is needed before the integration occurs. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/26762",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1212/"
]
} |
26,796 | What is the best way to use ResolveUrl() in a Shared/static function in Asp.Net? My current solution for VB.Net is: Dim x As New System.Web.UI.Control : x.ResolveUrl("~/someUrl") Or C#: System.Web.UI.Control x = new System.Web.UI.Control();x.ResolveUrl("~/someUrl"); But I realize that isn't the best way of calling it. | I use System.Web.VirtualPathUtility.ToAbsolute . | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/26796",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1414/"
]
} |
26,799 | How do you back up your development machine so that in the event of a catastrophic hardware malfunction, you are up and running in the least amount of time possible? | There's an important distinction between backing up your development machine and backing up your work. For a development machine your best bet is an imaging solution that offers as near a "one-click-restore" process as possible. TimeMachine (Mac) and Windows Home Server (Windows) are both excellent for this purpose. Not only can you have your entire machine restored in 1-2 hours (depending on HDD size), but both run automatically and store deltas so you can have months of backups in relatively little space. There are also numerous "ghosting" packages, though they usually do not offer incremental/delta backups so take more time/space to backup your machine. Less good are products such as Carbonite/Mozy/JungleDisk/RSync. These products WILL allow you to retrieve your data, but you will still have to reinstall the OS and programs. Some have limited/no histories either. In terms of backing up your code and data then I would recommend a sourcecode control product like SVN. While a general backup solution will protect your data, it does not offer the labeling/branching/history functionality that SCC packages do. These functions are invaluable for any type of project with a shelf-life. You can easily run a SVN server on your local machine. If your machine is backed up then your SVN database will be also. This IMO is the best solution for a home developer and is how I keep things. | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/26799",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1574/"
]
} |
26,809 | I frequently have problems dealing with DataRows returned from SqlDataAdapters . When I try to fill in an object using code like this: DataRow row = ds.Tables[0].Rows[0];string value = (string)row; What is the best way to deal with DBNull's in this type of situation? | Nullable types are good, but only for types that are not nullable to begin with. To make a type "nullable" append a question mark to the type, for example: int? value = 5; I would also recommend using the " as " keyword instead of casting. You can only use the "as" keyword on nullable types, so make sure you're casting things that are already nullable (like strings) or you use nullable types as mentioned above. The reasoning for this is: if a type is nullable, the " as " keyword returns null if a value is DBNull . It's ever-so-slightly faster than casting, though only in certain cases. This on its own is never a good enough reason to use as , but coupled with the reason above it's useful. I'd recommend doing something like this DataRow row = ds.Tables[0].Rows[0];string value = row as string; In the case above, if row comes back as DBNull , then value will become null instead of throwing an exception. Be aware that if your DB query changes the columns/types being returned, using as will cause your code to silently fail and make values simply null instead of throwing the appropriate exception when incorrect data is returned, so it is recommended that you have tests in place to validate your queries in other ways to ensure data integrity as your codebase evolves. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/26809",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2191/"
]
} |
26,816 | is there a way to abort threads created with QueueUserWorkItem? Or maybe I don't need to? What happens if the main application exits? Are all thread created from it aborted automatically? | You don't need to abort them. When your application exits, .NET will kill any threads with IsBackground = true. The .NET threadpool has all its threads set to IsBackground = true, so you don't have to worry about it. Now if you're creating threads by newing up the Thread class, then you'll either need to abort them or set their IsBackground property to true. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/26816",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1782/"
]
} |
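The IsBackground behavior described above has a direct analogue in other runtimes. Purely as an illustration, Python's threading module uses a daemon flag the same way: daemon threads are killed automatically when the main program exits, while non-daemon threads keep the process alive.

```python
import threading
import time

def worker() -> None:
    time.sleep(60)  # stand-in for a long-running background task

# Like a .NET thread with IsBackground = true, a daemon thread does not
# prevent the process from exiting; it is simply killed at shutdown.
t = threading.Thread(target=worker, daemon=True)
t.start()

# A Thread created with daemon=False (the default) would instead block
# process exit until worker() returned, like IsBackground = false.
```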
26,845 | I'd like to hear from people who are using distributed version control (aka distributed revision control, decentralized version control) and how they are finding it. What are you using, Mercurial, Darcs, Git, Bazaar? Are you still using it? If you've used client/server rcs in the past, are you finding it better, worse or just different? What could you tell me that would get me to jump on the bandwagon? Or jump off for that matter, I'd be interested to hear from people with negative experiences as well. I'm currently looking at replacing our current source control system (Subversion) which is the impetus for this question. I'd be especially interested in anyone who's used it with co-workers in other countries, where your machines may not be on at the same time, and your connection is very slow. If you're not sure what distributed version control is, here are a couple articles: Intro to Distributed Version Control Wikipedia Entry | I've been using Mercurial both at work and in my own personal projects, and I am really happy with it. The advantages I see are: Local version control. Sometimes I'm working on something, and I want to keep a version history on it, but I'm not ready to push it to the central repositories. With distributed VCS, I can just commit to my local repo until it's ready, without branching. That way, if other people make changes that I need, I can still get them and integrate them into my code. When I'm ready, I push it out to the servers. Fewer merge conflicts. They still happen, but they seem to be less frequent, and are less of a risk, because all the code is checked in to my local repo, so even if I botch the merge, I can always back up and do it again. Separate repos as branches. If I have a couple development vectors running at the same time, I can just make several clones of my repo and develop each feature independently. That way, if something gets scrapped or slipped, I don't have to pull pieces out. 
When they're ready to go, I just merge them together. Speed. Mercurial is much faster to work with, mostly because most of your common operations are local. Of course, like any new system, there was some pain during the transition. You have to think about version control differently than you did when you were using SVN, but overall I think it's very much worth it. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/26845",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1329401/"
]
} |
26,857 | Using C# and ASP.NET I want to programmatically fill in some values (4 text boxes) on a web page (form) and then 'POST' those values. How do I do this? Edit: Clarification: There is a service (www.stopforumspam.com) where you can submit ip, username and email address on their 'add' page. I want to be able to create a link/button on my site's page that will fill in those values and submit the info without having to copy/paste them across and click the submit button. Further clarification: How do automated spam bots fill out forms and click the submit button if they were written in C#? | The code will look something like this: WebRequest req = WebRequest.Create("http://mysite/myform.aspx");string postData = "item1=11111&item2=22222&Item3=33333";byte[] send = Encoding.Default.GetBytes(postData);req.Method = "POST";req.ContentType = "application/x-www-form-urlencoded";req.ContentLength = send.Length;Stream sout = req.GetRequestStream();sout.Write(send, 0, send.Length);sout.Flush();sout.Close();WebResponse res = req.GetResponse();StreamReader sr = new StreamReader(res.GetResponseStream());string returnvalue = sr.ReadToEnd(); | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/26857",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1463/"
]
} |
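The POST flow in the C# snippet above translates almost line for line to other standard libraries. A hedged illustration in Python: the URL and field names are the placeholders from the snippet, and the request is built but not sent here, since actually sending it requires network access.

```python
from urllib import parse, request

# Hypothetical target URL and form fields, mirroring the C# snippet above.
url = "http://mysite/myform.aspx"
fields = {"item1": "11111", "item2": "22222", "Item3": "33333"}

# url-encode the form body, exactly like the postData string in C#
post_data = parse.urlencode(fields).encode("ascii")

req = request.Request(
    url,
    data=post_data,  # supplying data= makes urllib issue a POST
    headers={"Content-Type": "application/x-www-form-urlencoded"},
)

# Sending needs network access, so it is left commented out:
# with request.urlopen(req) as res:
#     return_value = res.read().decode()
```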
26,863 | I had a plugin installed in Visual Studio 2008, and it created some extra dockable windows. I have uninstalled it, and I can't get rid of the windows it created - I close them, but they always come back. They're just empty windows now, since the plugin is no longer present, but nothing I've tried gets rid of them. I've tried: Window -> Reset Window Layout Deleting the .suo files in my project directories Deleting the Visual Studio 9.0 folder in my Application Settings directory Any ideas? | Have you tried this? In Visual Studio go to Tools > Import and Export Settings > Reset all settings Be sure you back up your settings before you do this. I made the mistake of trying this to fix an issue and didn't realize it would undo all my appearance settings and toolbars as well. Took a lot of time to get back to the way I like things. | {
"score": 8,
"source": [
"https://Stackoverflow.com/questions/26863",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2348/"
]
} |
26,877 | In C#, what is the difference (if any) between these two lines of code? tmrMain.Elapsed += new ElapsedEventHandler(tmrMain_Tick); and tmrMain.Elapsed += tmrMain_Tick; Both appear to work exactly the same. Does C# just assume you mean the former when you type the latter? | I did this static void Hook1(){ someEvent += new EventHandler( Program_someEvent );}static void Hook2(){ someEvent += Program_someEvent;} And then ran ildasm over the code. The generated MSIL was exactly the same. So to answer your question, yes they are the same thing. The compiler is just inferring that you want someEvent += new EventHandler( Program_someEvent ); -- You can see it creating the new EventHandler object in both cases in the MSIL | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/26877",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/369/"
]
} |
26,903 | Is there a way? I need all types that implement a specific interface to have a parameterless constructor, can it be done? I am developing the base code for other developers in my company to use in a specific project. There's a process which will create instances of types (in different threads) that perform certain tasks, and I need those types to follow a specific contract (ergo, the interface). The interface will be internal to the assembly. If you have a suggestion for this scenario without interfaces, I'll gladly take it into consideration... | Not to be too blunt, but you've misunderstood the purpose of interfaces. An interface means that several people can implement it in their own classes, and then pass instances of those classes to other classes to be used. Creation creates an unnecessary strong coupling. It sounds like you really need some kind of registration system, either to have people register instances of usable classes that implement the interface, or of factories that can create said items upon request. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/26903",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1782/"
]
} |
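The registration system suggested above can be sketched generically. The following Python sketch uses hypothetical names and is only meant to show the shape of the idea: the framework invokes registered factories instead of demanding a parameterless constructor on every implementation.

```python
from typing import Callable, Dict

class Task:
    """Stand-in for the internal interface: the contract workers must follow."""
    def run(self) -> str:
        raise NotImplementedError

# The framework owns a registry of factories, so implementations are
# free to have whatever constructor signature they like.
_factories: Dict[str, Callable[[], Task]] = {}

def register_task(name: str, factory: Callable[[], Task]) -> None:
    _factories[name] = factory

def create_task(name: str) -> Task:
    # The framework calls the registered factory, not a constructor.
    return _factories[name]()

# A consumer registers a factory that captures its constructor arguments.
class EmailTask(Task):
    def __init__(self, server: str) -> None:
        self.server = server
    def run(self) -> str:
        return f"sending via {self.server}"

register_task("email", lambda: EmailTask("smtp.example.com"))
```

The factory closure is what removes the need for a parameterless constructor: constructor arguments are bound at registration time, and the creating process never sees them.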
26,947 | What built-in PHP functions are useful for web scraping? What are some good resources (web or print) for getting up to speed on web scraping with PHP? | Scraping generally encompasses 3 steps: first you GET or POST your requestto a specified URL next you receive the html that is returned as the response finally you parse out of that html the text you'd like to scrape. To accomplish steps 1 and 2, below is a simple php class which uses Curl to fetch webpages using either GET or POST. After you get the HTML back, you just use Regular Expressions to accomplish step 3 by parsing out the text you'd like to scrape. For regular expressions, my favorite tutorial site is the following: Regular Expressions Tutorial My Favorite program for working with RegExs is Regex Buddy . I would advise you to try the demo of that product even if you have no intention of buying it. It is an invaluable tool and will even generate code for your regexs you make in your language of choice (including php). Usage: $curl = new Curl();$html = $curl->get(" http://www.google.com "); // now, do your regex work against $html PHP Class: <?phpclass Curl{ public $cookieJar = ""; public function __construct($cookieJarFile = 'cookies.txt') { $this->cookieJar = $cookieJarFile; } function setup() { $header = array(); $header[0] = "Accept: text/xml,application/xml,application/xhtml+xml,"; $header[0] .= "text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5"; $header[] = "Cache-Control: max-age=0"; $header[] = "Connection: keep-alive"; $header[] = "Keep-Alive: 300"; $header[] = "Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7"; $header[] = "Accept-Language: en-us,en;q=0.5"; $header[] = "Pragma: "; // browsers keep this blank. 
curl_setopt($this->curl, CURLOPT_USERAGENT, 'Mozilla/5.0 (Windows; U; Windows NT 5.2; en-US; rv:1.8.1.7) Gecko/20070914 Firefox/2.0.0.7'); curl_setopt($this->curl, CURLOPT_HTTPHEADER, $header); curl_setopt($this->curl,CURLOPT_COOKIEJAR, $this->cookieJar); curl_setopt($this->curl,CURLOPT_COOKIEFILE, $this->cookieJar); curl_setopt($this->curl,CURLOPT_AUTOREFERER, true); curl_setopt($this->curl,CURLOPT_FOLLOWLOCATION, true); curl_setopt($this->curl,CURLOPT_RETURNTRANSFER, true); } function get($url) { $this->curl = curl_init($url); $this->setup(); return $this->request(); } function getAll($reg,$str) { preg_match_all($reg,$str,$matches); return $matches[1]; } function postForm($url, $fields, $referer='') { $this->curl = curl_init($url); $this->setup(); curl_setopt($this->curl, CURLOPT_URL, $url); curl_setopt($this->curl, CURLOPT_POST, 1); curl_setopt($this->curl, CURLOPT_REFERER, $referer); curl_setopt($this->curl, CURLOPT_POSTFIELDS, $fields); return $this->request(); } function getInfo($info) { $info = ($info == 'lasturl') ? curl_getinfo($this->curl, CURLINFO_EFFECTIVE_URL) : curl_getinfo($this->curl, $info); return $info; } function request() { return curl_exec($this->curl); }}?> | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/26947",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2052/"
]
} |
26,971 | As someone who hasn't used either technology on real-world projects I wonder if anyone knows how these two complement each other and how much their functionalities overlap? | LINQ to SQL forces you to use the table-per-class pattern. The benefits of using this pattern are that it's quick and easy to implement and it takes very little effort to get your domain running based on an existing database structure. For simple applications, this is perfectly acceptable (and oftentimes even preferable), but for more complex applications devs will often suggest using a domain driven design pattern instead (which is what NHibernate facilitates). The problem with the table-per-class pattern is that your database structure has a direct influence over your domain design. For instance, let's say you have a Customers table with the following columns to hold a customer's primary address information: StreetAddress City State Zip Now, let's say you want to add columns for the customer's mailing address as well so you add in the following columns to the Customers table: MailingStreetAddress MailingCity MailingState MailingZip Using LINQ to SQL, the Customer object in your domain would now have properties for each of these eight columns. But if you were following a domain driven design pattern, you would probably have created an Address class and had your Customer class hold two Address properties, one for the mailing address and one for their current address. That's a simple example, but it demonstrates how the table-per-class pattern can lead to a somewhat smelly domain. In the end, it's up to you. Again, for simple apps that just need basic CRUD (create, read, update, delete) functionality, LINQ to SQL is ideal because of simplicity. But personally I like using NHibernate because it facilitates a cleaner domain. Edit: @lomaxx - Yes, the example I used was simplistic and could have been optimized to work well with LINQ to SQL. 
I wanted to keep it as basic as possible to drive home the point. The point remains though that there are several scenarios where having your database structure determine your domain structure would be a bad idea, or at least lead to suboptimal OO design. | {
"score": 8,
"source": [
"https://Stackoverflow.com/questions/26971",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2133/"
]
} |
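The Customer/Address point above can be shown without any ORM. An illustrative Python sketch (names are hypothetical) of the domain shape the answer argues for: one Address value object reused for both addresses, instead of eight flat columns on the customer.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Address:
    """Value object: reused for current and mailing address alike."""
    street: str
    city: str
    state: str
    zip_code: str

@dataclass
class Customer:
    name: str
    current_address: Address
    mailing_address: Address

# With a table-per-class mapping, Customer would instead carry
# StreetAddress/City/State/Zip plus Mailing* variants of all four.
home = Address("1 Main St", "Springfield", "IL", "62701")
customer = Customer("Ada", current_address=home, mailing_address=home)
```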
26,984 | I have been in both situations: Creating too many custom Exceptions Using too many general Exception class In both cases the project started OK but soon became an overhead to maintain (and refactor). So what is the best practice regarding the creation of your own Exception classes? | The Java Specialists wrote a post about Exceptions in Java , and in it they list a few "best practices" for creating Exceptions, summarized below: Don't Write Own Exceptions (there are lots of useful Exceptions that are already part of the Java API) Write Useful Exceptions (if you have to write your own Exceptions, make sure they provide useful information about the problem that occurred) | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/26984",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2644/"
]
} |
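The "write useful exceptions" advice is language-neutral. As an illustration (Python rather than the question's Java), a hypothetical exception type that carries the context a caller needs to act on, not just a message string:

```python
class ConfigKeyMissingError(KeyError):
    """Hypothetical example: records which key and which file failed."""
    def __init__(self, key: str, path: str) -> None:
        self.key = key
        self.path = path
        super().__init__(f"missing key {key!r} in config file {path}")

def lookup(config: dict, key: str, path: str) -> str:
    try:
        return config[key]
    except KeyError:
        # Re-raise with the context the caller needs; the bare KeyError
        # would only say which key was missing, not where.
        raise ConfigKeyMissingError(key, path) from None
```

Note it still subclasses an existing exception (KeyError), so code that already catches the standard type keeps working, which is in the spirit of "don't write your own unless it adds information."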
27,009 | What are some of the strategies that are used when implementing FxCop / static analysis on existing code bases with existing violations? How can one most effectively reduce the static analysis violations? | Make liberal use of [SuppressMessage] attribute to begin with. At least at the beginning. Once you get the count to 0 via the attribute, you then put in a rule that new checkins may not introduce FxCop violations. Visual Studio 2008 has a nice code analysis feature that allows you to ensure that code analysis runs on every build and you can treat warnings as errors. That might slow things down a bit so I recommend setting up a continuous integration server (like CruiseControl.NET) and having it run code analysis on every checkin. Once you get it under control and aren't introducing new violations with every checkin, start to tackle whole classes of FxCop violations at a time with the goal of removing the SuppressMessageAttributes that you used. The way to keep track of which ones you really want to keep is to always add a Justification value to the ones you really want to suppress. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/27009",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2822/"
]
} |
27,018 | When applying the Single Responsibility Principle and looking at a class's reason to change, how do you determine whether that reason to change is too granular, or not granular enough? | Make liberal use of [SuppressMessage] attribute to begin with. At least at the beginning. Once you get the count to 0 via the attribute, you then put in a rule that new checkins may not introduce FxCop violations. Visual Studio 2008 has a nice code analysis feature that allows you to ensure that code analysis runs on every build and you can treat warnings as errors. That might slow things down a bit so I recommend setting up a continuous integration server (like CruiseControl.NET) and having it run code analysis on every checkin. Once you get it under control and aren't introducing new violations with every checkin, start to tackle whole classes of FxCop violations at a time with the goal of removing the SuppressMessageAttributes that you used. The way to keep track of which ones you really want to keep is to always add a Justification value to the ones you really want to suppress. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/27018",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2822/"
]
} |
27,020 | I have an Excel Spreadsheet like this id | data for id | more data for idid | data for idid | data for id | more data for id | even more data for idid | data for id | more data for idid | data for idid | data for id | more data for id Now I want to group the data of one id by alternating the background color of the rows var color = whitefor each row if the first cell is not empty and color is white set color to green if the first cell is not empty and color is green set color to white set background of row to color Can anyone help me with a macro or some VBA code Thanks | I use this formula to get the input for a conditional formatting: =IF(B2=B1,E1,1-E1) [content of cell E2] Where column B contains the item that needs to be grouped and E is an auxiliary column. Every time that the upper cell (B1 in this case) is the same as the current one (B2), the upper row content from column E is returned. Otherwise, it will return 1 minus that content (that is, the output will be 0 or 1, depending on the value of the upper cell). | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/27020",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2798/"
]
} |
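The toggle in the question's pseudocode (and the 1-E1 trick in the answer) boils down to flipping a color whenever a new id starts. An illustrative Python sketch of just that logic, independent of Excel:

```python
from typing import List, Optional

def group_colors(first_cells: List[Optional[str]]) -> List[str]:
    """Mirror the pseudocode: toggle white/green each time a new id starts.

    A row whose first cell is empty (None here) belongs to the group above,
    so it simply inherits the current color.
    """
    colors: List[str] = []
    color = "white"                    # pseudocode's starting value
    for cell in first_cells:
        if cell is not None:           # non-empty first cell: new group
            color = "green" if color == "white" else "white"
        colors.append(color)
    return colors
```

This is the same parity the worksheet formula computes: column E holds 0 or 1 per row, flipping only on rows where the id changes, and the conditional format maps 0/1 to the two colors.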
27,030 | I want to compare 2 arrays of objects in JavaScript code. The objects have 8 total properties, but each object will not have a value for each, and the arrays are never going to be any larger than 8 items each, so maybe the brute force method of traversing each and then looking at the values of the 8 properties is the easiest way to do what I want to do, but before implementing, I wanted to see if anyone had a more elegant solution. Any thoughts? | EDIT: You cannot overload operators in current, common browser-based implementations of JavaScript interpreters. To answer the original question, one way you could do this, and mind you, this is a bit of a hack, simply serialize the two arrays to JSON and then compare the two JSON strings. That would simply tell you if the arrays are different, obviously you could do this to each of the objects within the arrays as well to see which ones were different. Another option is to use a library which has some nice facilities for comparing objects - I use and recommend MochiKit . EDIT: The answer kamens gave deserves consideration as well, since a single function to compare two given objects would be much smaller than any library to do what I suggest (although my suggestion would certainly work well enough). Here is a naïve implementation that may do just enough for you - be aware that there are potential problems with this implementation: function objectsAreSame(x, y) { var objectsAreSame = true; for(var propertyName in x) { if(x[propertyName] !== y[propertyName]) { objectsAreSame = false; break; } } return objectsAreSame;} The assumption is that both objects have the same exact list of properties. Oh, and it is probably obvious that, for better or worse, I belong to the only-one-return-point camp. :) | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/27030",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2176/"
]
} |
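One caveat with the naïve function above: it only walks x's properties, so a property present only on y goes unnoticed. The asymmetry is easy to see in any language; here is an illustrative Python sketch of a symmetric shallow compare (Python, not the question's JavaScript):

```python
def objects_are_same(x: dict, y: dict) -> bool:
    """Symmetric version of the JS sketch above: compares both key sets,
    so a property present only on y is caught too."""
    if x.keys() != y.keys():
        return False
    return all(x[k] == y[k] for k in x)

def arrays_are_same(xs: list, ys: list) -> bool:
    """Compare two arrays of objects pairwise, as the question asks."""
    return len(xs) == len(ys) and all(
        objects_are_same(a, b) for a, b in zip(xs, ys)
    )
```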
27,065 | Do any of you know of a tool that will search for .class files and then display their compiled versions? I know you can look at them individually in a hex editor but I have a lot of class files to look over (something in my giant application is compiling to Java6 for some reason). | Use the javap tool that comes with the JDK. The -verbose option will print the version number of the class file. > javap -verbose MyClassCompiled from "MyClass.java"public class MyClass SourceFile: "MyClass.java" minor version: 0 major version: 46... To only show the version: WINDOWS> javap -verbose MyClass | find "version"LINUX > javap -verbose MyClass | grep version | {
"score": 8,
"source": [
"https://Stackoverflow.com/questions/27065",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1666/"
]
} |
27,148 | I want to merge multiple rss feeds into a single feed, removing any duplicates. Specifically, I'm interested in merging the feeds for the tags I'm interested in. [A quick search turned up some promising links, which I don't have time to visit at the moment] Broadly speaking, the ideal would be a reader that would list all the available tags on the site and toggle them on and off, allowing me to explore what's available, keep track of questions I've visited, new answers on interesting feeds, etc, etc . . . though I don't suppose such a thing exists right now. As I randomly explore the site and see questions I think are interesting, I inevitably find "oh yes, that one looked interesting a couple days ago when I read it the first time, and hasn't been updated since". It would be much nicer if my machine would keep track of such details for me :) Update: You can now use "and", "or", and "not" to combine multiple tags into a single feed: Tags AND Tags OR Tags Update: You can now use Filters to watch tags across one or multiple sites: Improved Tag Sets | Have you heard of Yahoo's Pipes ? It's an interactive feed aggregator and manipulator. List of 'hot pipes' to subscribe to, and ability to create your own (yahoo account required). I played with it during beta back in the day, and I had a blast. It's really fun and easy to aggregate different feeds and you can add logic or filters to the "pipes". You can even do more than just RSS, like importing images from Flickr. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/27148",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2495/"
]
} |
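Once the feeds are parsed into entries, the merge-and-dedupe step itself is small. An illustrative Python sketch with entries as plain dicts keyed by link; a real implementation would first fetch and parse each feed with a library such as feedparser:

```python
from typing import Dict, Iterable, List

def merge_feeds(feeds: Iterable[List[Dict]]) -> List[Dict]:
    """Merge entry lists from several feeds, dropping duplicates by link."""
    seen = set()
    merged: List[Dict] = []
    for feed in feeds:
        for entry in feed:
            key = entry["link"]          # an item's URL identifies it
            if key not in seen:
                seen.add(key)
                merged.append(entry)
    # Newest first, assuming entries carry a sortable 'published' value.
    merged.sort(key=lambda e: e["published"], reverse=True)
    return merged
```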
27,219 | Given a select with multiple option's in jQuery. $select = $("<select></select>");$select.append("<option>Jason</option>") //Key = 1 .append("<option>John</option>") //Key = 32 .append("<option>Paul</option>") //Key = 423 How should the key be stored and retrieved? The ID may be an OK place but would not be guaranteed unique if I had multiple select's sharing values (and other scenarios). Thanks and in the spirit of TMTOWTDI. $option = $("<option></option>");$select = $("<select></select>");$select.addOption = function(value,text){ $(this).append($("<option/>").val(value).text(text));};$select.append($option.val(1).text("Jason").clone()) .append("<option value=32>John</option>") .append($("<option/>").val(423).text("Paul")) .addOption("321","Lenny"); | Like lucas said the value attribute is what you need. Using your code it would look something like this ( I added an id attribute to the select to make it fit ): $select = $('<select id="mySelect"></select>');$select.append('<option value="1">Jason</option>') //Key = 1 .append('<option value="32">John</option>') //Key = 32 .append('<option value="423">Paul</option>') //Key = 423 jQuery lets you get the value using the val() method. Using it on the select tag you get the current selected option's value. $( '#mySelect' ).val(); //Gets the value for the current selected option$( '#mySelect > option' ).each( function( index, option ) { option.val(); //The value for each individual option} ); Just in case, the .each method loops throught every element the query matched. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/27219",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1293/"
]
} |
27,220 | I'm looking for a method, or a code snippet for converting std::string to LPCWSTR | The solution is actually a lot easier than any of the other suggestions: std::wstring stemp = std::wstring(s.begin(), s.end());LPCWSTR sw = stemp.c_str(); Best of all, it's platform independent. | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/27220",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2701/"
]
} |
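One caveat worth flagging about the `std::wstring(s.begin(), s.end())` trick above: it widens each byte to one wide character with no character-set conversion, so it is only faithful for plain ASCII. A small Python sketch of the same byte-widening shows where it breaks (for real conversions you would use a proper codec or, on Windows, MultiByteToWideChar):

```python
def widen(raw):
    """Mimic std::wstring(s.begin(), s.end()): each byte becomes one
    wide character, with no character-set conversion at all."""
    return "".join(chr(b) for b in raw)

ascii_ok = widen(b"hello")                 # ASCII round-trips fine
mangled = widen("héllo".encode("utf-8"))   # a multi-byte char splits in two
```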
27,222 | I am looking for good methods of manipulating HTML in PHP. For example, the problem I currently have is dealing with malformed HTML. I am getting input that looks something like this: <div>This is some <b>text As you noticed, the HTML is missing closing tags. I could use regex or an XML Parser to solve this problem. However, it is likely that I will have to do other DOM manipulation in the future. I wonder if there are any good PHP libraries that handle DOM manipulation similar to how Javascript deals with DOM manipulation. | PHP has a PECL extension that gives you access to the features of HTML Tidy . Tidy is a pretty powerful library that should be able to take code like that and close tags in an intelligent manner. I use it to clean up malformed XML and HTML sent to me by a classified ad system prior to import. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/27222",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/889/"
]
} |
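The kind of repair Tidy performs on the question's `<div>This is some <b>text` input can be sketched as a tiny stack-based pass in Python. This is a toy — real-world HTML needs a proper library like Tidy — but it shows the idea of closing whatever was left open:

```python
import re

VOID_TAGS = {"br", "img", "hr", "input", "meta", "link"}

def close_open_tags(html):
    """Append closing tags for any non-void elements left open."""
    stack = []
    for m in re.finditer(r"<(/?)([a-zA-Z][a-zA-Z0-9]*)[^>]*>", html):
        is_close, name = m.group(1) == "/", m.group(2).lower()
        if is_close:
            if name in stack:  # pop back to the matching open tag
                while stack and stack.pop() != name:
                    pass
        elif name not in VOID_TAGS:
            stack.append(name)
    return html + "".join(f"</{t}>" for t in reversed(stack))
```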
27,240 | In Java 5 and above you have the foreach loop, which works magically on anything that implements Iterable : for (Object o : list) { doStuff(o);} However, Enumeration still does not implement Iterable , meaning that to iterate over an Enumeration you must do the following: for(; e.hasMoreElements() ;) { doStuff(e.nextElement());} Does anyone know if there is a reason why Enumeration still does not implement Iterable ? Edit: As a clarification, I'm not talking about the language concept of an enum , I'm talking about a Java-specific class in the Java API called ' Enumeration '. | Enumeration hasn't been modified to support Iterable because it's an interface, not a concrete class (like Vector, which was modified to support the Collection interface). If Enumeration were changed to support Iterable it would break a bunch of people's code. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/27240",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1666/"
]
} |
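The gap the question complains about — a legacy hasMoreElements()/nextElement() protocol that native iteration can't consume — is easy to bridge with a small adapter. A Python sketch of the same idea (Java itself gained Collections.list(...) and, in Java 9, Enumeration.asIterator() for this):

```python
class LegacyEnumeration:
    """Stand-in for Java's Enumeration protocol."""
    def __init__(self, items):
        self._items, self._i = list(items), 0

    def hasMoreElements(self):
        return self._i < len(self._items)

    def nextElement(self):
        item = self._items[self._i]
        self._i += 1
        return item

def as_iterator(enum):
    """Adapter: expose the legacy protocol as native iteration."""
    while enum.hasMoreElements():
        yield enum.nextElement()
```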
27,242 | I've had a lot of good experiences learning about web development on w3schools.com . It's hit or miss, I know, but the PHP and CSS sections specifically have proven very useful for reference. Anyway, I was wondering if there was a similar site for jQuery . I'm interested in learning, but I need it to be online/searchable, so I can refer back to it easily when I need the information in the future. Also, as a brief aside, is jQuery worth learning? Or should I look at different JavaScript libraries? I know Jeff uses jQuery on Stack Overflow and it seems to be working well. Thanks! Edit : jQuery's website has a pretty big list of tutorials , and a seemingly comprehensive documentation page . I haven't had time to go through it all yet; has anyone else had experience with it? Edit 2 : It seems Google is now hosting the jQuery libraries. That should give jQuery a pretty big advantage in terms of publicity. Also, if everyone uses a single unified jQuery library hosted at the same place, it should get cached for most Internet users early on and therefore not impact the download footprint of your site should you decide to use it. 2 Months Later... Edit 3 : I started using jQuery on a project at work recently and it is great to work with! Just wanted to let everyone know that I have concluded it is ABSOLUTELY worth it to learn and use jQuery. Also, I learned almost entirely from the Official jQuery documentation and tutorials . It's very straightforward. 10 Months Later... jQuery is a part of just about every web app I've made since I initially wrote this post. It makes progressive enhancement a breeze, and helps make the code maintainable. Also, all the jQuery plug-ins are an invaluable resource! 3 Years Later... Still using jQuery just about every day. I now author jQuery plug-ins and consult full time. I'm primarily a Djangonaut but I've done several JavaScript-only contracts with jQuery. It's a life saver. From one jQuery user to another...
You should look at templating with jQuery (or underscore -- see below). Other things I've found valuable in addition to jQuery (with estimated portion of projects I use it on): jQuery Form Plugin (95%) jQuery Form Example Plugin (75%) jQuery UI (70%) Underscore.js (80%) CoffeeScript (30%) Backbone.js (10%) | Rick Strahl and Matt Berseth's blogs both tipped me into jQuery, and man am I glad they did. jQuery completely changes a) your client programming perspective, b) the grief it causes you, and c) how much fun it can be! http://www.west-wind.com/weblog/ http://mattberseth.com/ I used the book jQuery in Action http://www.amazon.com/jQuery-Action-Bear-Bibeault/dp/1933988355/ref=sr_1_1?ie=UTF8&s=books&qid=1219716122&sr=1-1 (I bought it used at Amazon for about $22). It has been a big help in bootstrapping me into jQuery. The documentation at jquery.com is also very helpful. A place where jQuery falls a little flat is with its UI components. Those don't seem to be quite ready for primetime just yet. It could be that Prototype or MooTools or ExtJS are as good as jQuery. But for me, jQuery seems to have a little more momentum behind it right now, and that counts for something for me. Check jQuery out. It is very cool! | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/27242",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2908/"
]
} |
27,247 | I need to know about epoll on the Linux system. Could you recommend manuals or guides about the epoll library? I need more detailed guides; it would be better to have some examples. Thank you for reading. | Here's an introduction to epoll, a pretty basic tutorial: http://blog.kovyrin.net/2006/04/13/epoll-asynchronous-network-programming/ A more complete example can be found here: https://banu.com/blog/2/how-to-use-epoll-a-complete-example-in-c/ Also, see the man pages. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/27247",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2370764/"
]
} |
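Beyond the linked tutorials, a minimal runnable sketch of the epoll pattern — using Python's binding of the same Linux syscalls (`select.epoll`, Linux-only) — registers one end of a socket pair for read-readiness and then polls:

```python
import select
import socket

def wait_readable(sock, timeout=1.0):
    """Return True once `sock` has data to read, via epoll."""
    ep = select.epoll()
    try:
        ep.register(sock.fileno(), select.EPOLLIN)
        events = ep.poll(timeout)  # list of (fd, event-mask) pairs
        return any(fd == sock.fileno() and mask & select.EPOLLIN
                   for fd, mask in events)
    finally:
        ep.close()

a, b = socket.socketpair()
b.send(b"ping")                 # makes `a` readable
got = a.recv(4) if wait_readable(a) else b""
a.close(); b.close()
```

A real server would keep the epoll object alive and register/unregister many client sockets against it inside its event loop.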
27,267 | I've been interacting with Amazon S3 through S3Fox and I can't seem to delete my buckets. I select a bucket, hit delete, confirm the delete in a popup, and... nothing happens. Is there another tool that I should use? | It is finally possible to delete all the files in one go using the new Lifecycle (expiration) rules feature. You can even do it from the AWS console. Simply right click on the bucket name in AWS console, select "Properties" and then in the row of tabs at the bottom of the page select "lifecycle" and "add rule". Create a lifecycle rule with the "Prefix" field set blank (blank means all files in the bucket, or you could set it to "a" to delete all files whose names begin with "a"). Set the "Days" field to "1". That's it. Done. Assuming the files are more than one day old they should all get deleted, then you can delete the bucket. I only just tried this for the first time so I'm still waiting to see how quickly the files get deleted (it wasn't instant but presumably should happen within 24 hours) and whether I get billed for one delete command or 50 million delete commands... fingers crossed! | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/27267",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/658/"
]
} |
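Doing the same cleanup programmatically is a loop: list a page of keys, delete them, repeat, then delete the bucket. A sketch written against a boto3-style S3 client — the client is injected so the logic is testable without AWS; with real credentials you would pass something like `boto3.client("s3")`:

```python
def empty_and_delete_bucket(client, bucket):
    """Delete every object in `bucket`, page by page, then the bucket."""
    while True:
        page = client.list_objects_v2(Bucket=bucket)
        keys = [{"Key": o["Key"]} for o in page.get("Contents", [])]
        if not keys:
            break  # no objects left
        client.delete_objects(Bucket=bucket, Delete={"Objects": keys})
    client.delete_bucket(Bucket=bucket)
```

Note that versioned buckets also require deleting each object version, which this sketch ignores.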
27,294 | I'm working on an internal project for my company, and part of the project is to be able to parse various "Tasks" from an XML file into a collection of tasks to be run later. Because each type of Task has a multitude of different associated fields, I decided it would be best to represent each type of Task with a separate class. To do this, I constructed an abstract base class: public abstract class Task{ public enum TaskType { // Types of Tasks } public abstract TaskType Type { get; } public abstract void LoadFromXml(XmlElement task); public abstract XmlElement CreateXml(XmlDocument currentDoc);} Each task inherits from this base class, and includes the code necessary to create itself from the passed-in XmlElement, as well as to serialize itself back out to an XmlElement. A basic example: public class MergeTask : Task{ public override TaskType Type { get { return TaskType.Merge; } } // Lots of Properties / Methods for this Task public MergeTask (XmlElement elem) { this.LoadFromXml(elem); } public override void LoadFromXml(XmlElement task) { // Populates this Task from the Xml. } public override XmlElement CreateXml(XmlDocument currentDoc) { // Serializes this class back to xml. }} The parser would then use code similar to this to create a task collection: XmlNode taskNode = parent.SelectSingleNode("tasks");TaskFactory tf = new TaskFactory();foreach (XmlNode task in taskNode.ChildNodes){ // Skip XmlComments etc., which will show up here if (task is XmlElement) { tasks.Add(tf.CreateTask(task as XmlElement)); }} All of this works wonderfully, and allows me to pass tasks around using the base class, while retaining the structure of having individual classes for each task. However, I am not happy with my code for TaskFactory.CreateTask.
This method accepts an XmlElement, and then returns an instance of the appropriate Task class: public Task CreateTask(XmlElement elem){ if (elem != null) { switch(elem.Name) { case "merge": return new MergeTask(elem); default: throw new ArgumentException("Invalid Task"); } } throw new ArgumentNullException("elem");} Because I have to parse the XmlElement, I'm using a huge (10-15 cases in the real code) switch to pick which child class to instantiate. I'm hoping there is some sort of polymorphic trick I can do here to clean up this method. Any advice? | I use reflection to do this. You can make a factory that basically expands without you having to add any extra code. Make sure you have "using System.Reflection", and place the following code in your instantiation method. public Task CreateTask(XmlElement elem){ if (elem != null) { try { Assembly a = typeof(Task).Assembly; string type = string.Format("{0}.{1}Task",typeof(Task).Namespace,elem.Name); //this is only here, so that if that type doesn't exist, this method //throws an exception Type t = a.GetType(type, true, true); return a.CreateInstance(type, true) as Task; } catch(System.Exception) { throw new ArgumentException("Invalid Task"); } } throw new ArgumentNullException("elem");} Another observation is that you can make this method static and hang it off the Task class, so that you don't have to new up the TaskFactory, and you also save yourself a moving piece to maintain. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/27294",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1965/"
]
} |
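The reflection trick above has a simple analogue in other languages: let each subclass register itself under its XML element name, so the factory becomes a dictionary lookup and the switch disappears. A Python sketch (class and tag names are illustrative):

```python
class Task:
    registry = {}

    def __init_subclass__(cls, tag=None, **kwargs):
        super().__init_subclass__(**kwargs)
        if tag is not None:          # subclasses self-register by tag
            Task.registry[tag] = cls

class MergeTask(Task, tag="merge"):
    def __init__(self, elem):
        self.elem = elem

def create_task(name, elem):
    """Factory: element name -> registered Task subclass instance."""
    try:
        return Task.registry[name](elem)
    except KeyError:
        raise ValueError(f"Invalid Task: {name}") from None
```

Adding a new task type then means adding one class; the factory never changes.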
27,345 | I'm running MAMP 1.7.2 on a Mac and I'd like to install the extension php_gd2. How do I do this? I know that on Windows using WAMP I'd simply select the php_gd2 entry in the extensions menu to activate it. How is it done when using MAMP? I know that I can do it using MacPorts but I'd prefer not to make any changes to my default OS X PHP installation. | You shouldn't need to install the extension. I have 1.7.2 installed and running right now and it has GD bundled (2.0.34 compatible). From the MAMP start page, click on phpinfo and you should see a GD section. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/27345",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/412/"
]
} |
27,381 | I'm asking this question purely from a usability standpoint! Should a website expand/stretch to fill the viewing area when you resize a browser window? I know for sure there are the obvious cons: Wide columns of text are hard to read. Writing html/css using percents can be a pain. It makes you vulnerable to having your design stretched past its limits if an image is too wide, or a block of text is added that is too long. (see it's a pain to code the html/css ). The only pro I can think of is that users who use the font-resizing that is built into their browser won't have to deal with columns that are only a few words long, with a body of white-space on either side. However, I think that may be a browser problem more than anything else (Firefox 3 allows you to zoom everything instead of just the text, which comes in handy all the time). Edit: I noticed Stack Overflow is fixed width, but Coding Horror resizes. It seems Jeff doesn't have a strong preference either way. | Raw HTML does just that. Are you changing your data so that it doesn't render so well in random-sized windows? In the olden days, everyone had VGA screens. Now, that resolution is most uncommon. Who knows what resolutions are going to be common in the future? And why expect a certain minimum width or height? From a usability viewpoint, demanding a certain resolution from your users is just going to create a degraded experience for anyone not using that resolution. Another question that comes out of this: what is fixed width? I've seen plenty of fixed-size windows (popups) that just don't render right because my fonts are different from the designer's. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/27381",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2908/"
]
} |
27,435 | I am working on a web application using Python (Django) and would like to know whether MySQL or PostgreSQL would be more suitable when deploying for production. In one podcast Joel said that he had some problems with MySQL and the data wasn't consistent. I would like to know whether someone had any such problems. Also when it comes to performance which can be easily tweaked? | A note to future readers: The text below was last edited in August 2008. That's nearly 11 years ago as of this edit. Software can change rapidly from version to version, so before you go choosing a DBMS based on the advice below, do some research to see if it's still accurate.Check for newer answers below. Better? MySQL is much more commonly provided by web hosts. PostgreSQL is a much more mature product. There's this discussion addressing your "better" question Apparently, according to this web page , MySQL is fast when concurrent access levels are low, and when there are many more reads than writes. On the other hand, it exhibits low scalability with increasing loads and write/read ratios. PostgreSQL is relatively slow at low concurrency levels, but scales well with increasing load levels, while providing enough isolation between concurrent accesses to avoid slowdowns at high write/read ratios. It goes on to link to a number of performance comparisons, because these things are very... sensitive to conditions. So if your decision factor is, " which is faster? " Then the answer is "it depends. If it really matters, test your application against both. " And if you really, really care, you get in two DBAs (one who specializes in each database) and get them to tune the crap out of the databases, and then choose. It's astonishing how expensive good DBAs are; and they are worth every cent . When it matters. 
Which it probably doesn't, so just pick whichever database you like the sound of and go with it; better performance can be bought with more RAM and CPU, and more appropriate database design, and clever stored procedure tricks and so on - and all of that is cheaper and easier for random-website-X than agonizing over which to pick, MySQL or PostgreSQL, and specialist tuning from expensive DBAs. Joel also said in that podcast that comment would come back to bite him because people would be saying that MySQL was a piece of crap - Joel couldn't get a count of rows back. The plural of anecdote is not data. He said : MySQL is the only database I've ever programmed against in my career that has had data integrity problems, where you do queries and you get nonsense answers back, that are incorrect. and he also said: It's just an anecdote. And that's one of the things that frustrates me, actually, about blogging or just the Internet in general. [...] There's just a weird tendency to make anecdotes into truths and I actually as a blogger I'm starting to feel a little bit guilty about this | {
"score": 8,
"source": [
"https://Stackoverflow.com/questions/27435",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1448/"
]
} |
27,472 | I have a long running SQL statement that I want to run, and no matter what I put in the "timeout=" clause of my connection string, it always seems to end after 30 seconds. I'm just using SqlHelper.ExecuteNonQuery() to execute it, and letting it take care of opening connections, etc. Is there something else that could be overriding my timeout, or causing sql server to ignore it? I have run profiler over the query, and the trace doesn't look any different when I run it in management studio, versus in my code. Management studio completes the query in roughly a minute, but even with a timeout set to 300, or 30000, my code still times out after 30 seconds. | What are you using to set the timeout in your connection string? From memory that's "ConnectionTimeout" and only affects the time it takes to actually connect to the server. Each individual command has a separate "CommandTimeout" which would be what you're looking for. Not sure how SqlHelper implements that though. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/27472",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/489/"
]
} |
27,474 | I need to send hundreds of newsletters, but would like to check first whether the email address exists on the server. It's called SMTP validation, at least I think so, based on my research on the Internet. There are several libraries that can do that, and also a page with open-source code in ASP Classic ( http://www.coveryourasp.com/ValidateEmail.asp#Result3 ), but I have a hard time reading ASP Classic, and it seems that it uses some third-party library... Is there some code for SMTP validation in C#, and/or a general explanation of how it works? | Be aware that most MTAs (Mail Transfer Agents) will have the VRFY command turned off for spam protection reasons; they'll probably even block you if you try several RCPT TO in a row (see http://www.spamresource.com/2007/01/whatever-happened-to-vrfy.html ). So even if you find a library to do that verification, it won't be worth a lot. Ishmaeel is right: the only way to really find out is to send an email and see whether it bounces. @Hrvoje: Yes, I'm suggesting you monitor rejected emails. BUT: not all the bounced mails should automatically end up on your "does not exist" list; you also have to differentiate between temporary (e.g. mailbox full) and permanent errors. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/27474",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1407/"
]
} |
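For completeness, the probe itself is just part of the SMTP dialogue: MAIL FROM, then RCPT TO, then abort. A sketch written against an smtplib-style connection object (injected here so it can run without a network); a 250/251 reply is only a hint, not proof, for all the reasons given above:

```python
def rcpt_probe(conn, sender, candidate):
    """Ask the server whether it would accept `candidate` as a recipient."""
    conn.mail(sender)
    code, _ = conn.rcpt(candidate)
    conn.rset()  # abort the transaction; nothing is actually sent
    return code in (250, 251)
```

With the real `smtplib.SMTP`, many servers will answer 250 for everything (catch-alls) or refuse the probe entirely.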
27,492 | I learned in college that you always have to free your unused objects, but not how you actually do it - for example, how to structure your code properly, and so on. Are there any general rules on how to handle pointers in C++? I'm currently not allowed to use Boost. I have to stick to pure C++ because the framework I'm using forbids any use of generics. | I have worked with the embedded Symbian OS, which had an excellent system in place for this, based entirely on developer conventions. Only one object will ever own a pointer. By default this is the creator. Ownership can be passed on. To indicate passing of ownership, the object is passed as a pointer in the method signature (e.g. void Foo(Bar *zonk);). The owner will decide when to delete the object. To pass an object to a method just for use, the object is passed as a reference in the method signature (e.g. void Foo(Bar &zonk);). Non-owner classes may store references (never pointers) to objects they are given only when they can be certain that the owner will not destroy the object during use. Basically, if a class simply uses something, it uses a reference. If a class owns something, it uses a pointer. This worked beautifully and was a pleasure to use. Memory issues were very rare. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/27492",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2450/"
]
} |
27,499 | Recently Jeff has posted regarding his trouble with database deadlocks related to reading. Multiversion Concurrency Control (MVCC) claims to solve this problem. What is it, and what databases support it? updated: these support it (which others?) oracle postgresql | Oracle has had an excellent multiversion concurrency control system in place for a very long time (at least since Oracle 8.0). The following should help. User A starts a transaction and is updating 1000 rows with some value at time T1. User B reads the same 1000 rows at time T2. User A updates row 543 with value Y (original value X). User B reaches row 543 and finds that a transaction has been in operation since time T1. The database returns the unmodified record from the logs. The returned value is the value that was committed at a time less than or equal to T2. If the record could not be retrieved from the redo logs it means the database is not set up appropriately: more space needs to be allocated to the logs. This way read consistency is achieved. The returned results are always the same with respect to the start time of the transaction, so within a transaction read consistency is achieved. I have tried to explain in the simplest terms possible... there is a lot to multiversioning in databases. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/27499",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/116/"
]
} |
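The timeline in the answer above can be made concrete with a toy version store: every write appends a (commit-timestamp, value) pair, and a read at timestamp T returns the newest version committed at or before T — so User B's scan at T2 never sees User A's later update. A deliberately simplified Python sketch:

```python
class MVCCStore:
    def __init__(self):
        self.versions = {}  # key -> list of (commit_ts, value)
        self.clock = 0

    def write(self, key, value):
        self.clock += 1
        self.versions.setdefault(key, []).append((self.clock, value))
        return self.clock  # the commit timestamp

    def read(self, key, at_ts):
        """Latest value committed at or before `at_ts` (snapshot read)."""
        older = [v for ts, v in self.versions.get(key, []) if ts <= at_ts]
        return older[-1] if older else None
```

Real databases add transaction isolation, garbage collection of old versions, and conflict detection on top of this core idea.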
27,509 | How do I check if an object property in JavaScript is undefined? | The usual way to check if the value of a property is the special value undefined , is: if(o.myProperty === undefined) { alert("myProperty value is the special value `undefined`");} To check if an object does not actually have such a property, and will therefore return undefined by default when you try to access it: if(!o.hasOwnProperty('myProperty')) { alert("myProperty does not exist");} To check if the value associated with an identifier is the special value undefined , or if that identifier has not been declared: if(typeof myVariable === 'undefined') { alert('myVariable is either the special value `undefined`, or it has not been declared');} Note: this last method is the only way to refer to an undeclared identifier without an early error, which is different from having a value of undefined . In versions of JavaScript prior to ECMAScript 5, the property named "undefined" on the global object was writeable, and therefore a simple check foo === undefined might behave unexpectedly if it had accidentally been redefined. In modern JavaScript, the property is read-only. However, in modern JavaScript, "undefined" is not a keyword, and so variables inside functions can be named "undefined" and shadow the global property. If you are worried about this (unlikely) edge case, you can use the void operator to get at the special undefined value itself: if(myVariable === void 0) { alert("myVariable is the special value `undefined`");} | {
"score": 13,
"source": [
"https://Stackoverflow.com/questions/27509",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/797/"
]
} |
27,532 | Given 2 rgb colors and a rectangular area, I'd like to generate a basic linear gradient between the colors. I've done a quick search and the only thing I've been able to find is this blog entry , but the example code seems to be missing, or at least it was as of this posting. Anything helps, algorithms, code examples, whatever. This will be written in Java, but the display layer is already taken care of, I just need to figure out how to figure out what to display. | You want an interpolation between the first and the second colour. Interpolating colours is easy: calculate the same interpolation for each of the components (R, G, B). There are many ways to interpolate. The easiest is to use linear interpolation: just take percentage p of the first colour and percentage 1 - p of the second: R = firstCol.R * p + secondCol.R * (1 - p) There's another question related to this. There are other methods of interpolation that sometimes work better. For example, using a bell-shaped (sigmoidal) interpolation function makes the transition smoother. /EDIT: Oops, you mean using a predefined function. OK, even easier. The blog post you linked now has example code in Python. In Java, you could use GradientPaint . | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/27532",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/85/"
]
} |
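Since the question notes the blog's example code had gone missing, here is the whole algorithm in a few lines of Python — the per-channel linear interpolation described in the answer, evaluated once per pixel column (porting it to Java is mechanical):

```python
def lerp_color(c1, c2, p):
    """Blend two RGB triples: p=0 gives c1, p=1 gives c2."""
    return tuple(round(a + (b - a) * p) for a, b in zip(c1, c2))

def gradient(c1, c2, width):
    """One RGB triple per pixel column across `width` columns."""
    if width == 1:
        return [c1]
    return [lerp_color(c1, c2, x / (width - 1)) for x in range(width)]
```

To render it, paint each column x of the rectangle with `gradient(...)[x]`.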
27,568 | I'd like to learn how to program in Assembler. I've done a bit of assembly before (during my A-Level Computing course) but that was very definitely a simplified 'pseudo-assembler'. I've borrowed my Dad's old Z80 Assembler reference manual, and that seems quite interesting so if possible I'd like to have a go with Z80 assembler. However, I don't have a Z80 processor to hand, and would like to do it on my PC (I have windows or linux so either is good). I've found various assemblers around on the internet, but I'm not particularly interested in assembling down to a hex file, I want to just be able to assemble it to something that some kind of simulator on the PC can run. Preferably this simulator would show me the contents of all the registers, memory locations etc, and let me step through instructions. I've found a few bits of software that suggest they might do this - but they either refuse to compile, or don't seem to work properly. Has anyone got any suggestions? If there are good simulator/IDE things available for another type of assembler then I could try that instead (assuming there is a good online reference manual available). | I've found a few bits of software that suggest they might do this - but they either refuse to compile, or don't seem to work properly. Has anyone got any suggestions? Write one. You're best off picking a nice, simple instruction set (Z80 should be perfect). I remember doing this as a first-year undergraduate exercise - I think we wrote the simulator in C++ and simulated 6800 assembly, but really any language/instruction set will do. The idea of "learning assembly language" these days is to get the idea of how computers work at the lowest level, only a select few (compiler writers, etc.) have any real reason to actually be writing assembly code these days. 
Modern processors are stuffed full of features designed to be used by compilers to help optimise code for speed/concurrent execution/power consumption/etc., and trying to write assembly by hand for a modern processor would be a nightmare. Don't fret about getting your application production-ready unless you want to - in all likelihood the bits of software you've found so far were written by people exactly like you who wanted to figure out how assembly works and wrote their own simulator, then realised how much work would be involved in getting it "production ready" so the general public could use it. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/27568",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1912/"
]
} |
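"Write one" is less daunting than it sounds: a simulator is just a fetch–decode–execute loop over an instruction table, plus registers you can print after every step. A toy Python sketch for a made-up three-instruction machine (these are not real Z80 opcodes) shows the skeleton you would grow:

```python
def run(program):
    """Execute a list of decoded instructions and return final registers.
    LD r,n loads an immediate; ADD r1,r2 does r1 += r2; HALT stops."""
    regs = {"A": 0, "B": 0}
    pc = 0  # program counter
    while True:
        op, *args = program[pc]
        pc += 1
        if op == "LD":
            regs[args[0]] = args[1]
        elif op == "ADD":
            regs[args[0]] += regs[args[1]]
        elif op == "HALT":
            return regs
        else:
            raise ValueError(f"unknown opcode {op}")
```

A real Z80 simulator replaces the tuples with byte decoding, adds flags and memory, and pauses between iterations to display state — but the loop is the same.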
27,570 | Is there a way to find the number of files of a specific type without having to loop through all results in a Directory.GetFiles() or similar method? I am looking for something like this: int ComponentCount = MagicFindFileCount(@"c:\windows\system32", "*.dll"); I know that I can make a recursive function to call Directory.GetFiles , but it would be much cleaner if I could do this without all the iterating. EDIT: If it is not possible to do this without recursing and iterating yourself, what would be the best way to do it? | You should use the Directory.GetFiles(path, searchPattern, SearchOption) overload of Directory.GetFiles(). Path specifies the path, searchPattern specifies your wildcards (e.g., *, *.format) and SearchOption provides the option to include subdirectories. The Length property of the return array of this search will provide the proper file count for your particular search pattern and option: string[] files = Directory.GetFiles(@"c:\windows\system32", "*.dll", SearchOption.AllDirectories);return files.Length; EDIT: Alternatively you can use the Directory.EnumerateFiles method: return Directory.EnumerateFiles(@"c:\windows\system32", "*.dll", SearchOption.AllDirectories).Count(); | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/27570",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2257/"
]
} |
27,578 | In Java (or any other language with checked exceptions), when creating your own exception class, how do you decide whether it should be checked or unchecked? My instinct is to say that a checked exception would be called for in cases where the caller might be able to recover in some productive way, whereas an unchecked exception would be more for unrecoverable cases, but I'd be interested in others' thoughts. | Checked Exceptions are great, so long as you understand when they should be used. The Java core API fails to follow these rules for SQLException (and sometimes for IOException), which is why they are so terrible. Checked Exceptions should be used for predictable , but unpreventable errors that are reasonable to recover from . Unchecked Exceptions should be used for everything else. I'll break this down for you, because most people misunderstand what this means. Predictable but unpreventable : The caller did everything within their power to validate the input parameters, but some condition outside their control has caused the operation to fail. For example, you try reading a file but someone deletes it between the time you check if it exists and the time the read operation begins. By declaring a checked exception, you are telling the caller to anticipate this failure. Reasonable to recover from : There is no point telling callers to anticipate exceptions that they cannot recover from. If a user attempts to read from a non-existent file, the caller can prompt them for a new filename. On the other hand, if the method fails due to a programming bug (invalid method arguments or buggy method implementation) there is nothing the application can do to fix the problem in mid-execution. The best it can do is log the problem and wait for the developer to fix it at a later time. Unless the exception you are throwing meets all of the above conditions it should use an Unchecked Exception.
Reevaluate at every level : Sometimes the method catching the checked exception isn't the right place to handle the error. In that case, consider what is reasonable for your own callers. If the exception is predictable, unpreventable and reasonable for them to recover from then you should throw a checked exception yourself. If not, you should wrap the exception in an unchecked exception. If you follow this rule you will find yourself converting checked exceptions to unchecked exceptions and vice versa depending on what layer you are in. For both checked and unchecked exceptions, use the right abstraction level . For example, a code repository with two different implementations (database and filesystem) should avoid exposing implementation-specific details by throwing SQLException or IOException . Instead, it should wrap the exception in an abstraction that spans all implementations (e.g. RepositoryException ). | {
"score": 9,
"source": [
"https://Stackoverflow.com/questions/27578",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/797/"
]
} |
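The last point in the answer — rewrap at the right abstraction level — is language-agnostic. A Python sketch of a repository boundary that hides whether the backing store is a database or the filesystem (the names here are illustrative):

```python
class RepositoryException(Exception):
    """Abstraction-level failure: callers never see storage details."""

def load_record(fetch):
    """`fetch` is any implementation-specific callable; its low-level
    failures are wrapped so calling code stays storage-agnostic."""
    try:
        return fetch()
    except (OSError, LookupError) as exc:
        raise RepositoryException("could not load record") from exc
```

The original cause stays attached (as `__cause__` here, or the wrapped exception in Java), so nothing is lost for debugging.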
27,581 | What issues / pitfalls must be considered when overriding equals and hashCode ? | The theory (for the language lawyers and the mathematically inclined): equals() ( javadoc ) must define an equivalence relation (it must be reflexive , symmetric , and transitive ). In addition, it must be consistent (if the objects are not modified, then it must keep returning the same value). Furthermore, o.equals(null) must always return false. hashCode() ( javadoc ) must also be consistent (if the object is not modified in terms of equals() , it must keep returning the same value). The relation between the two methods is: Whenever a.equals(b) , then a.hashCode() must be same as b.hashCode() . In practice: If you override one, then you should override the other. Use the same set of fields that you use to compute equals() to compute hashCode() . Use the excellent helper classes EqualsBuilder and HashCodeBuilder from the Apache Commons Lang library. An example: public class Person { private String name; private int age; // ... @Override public int hashCode() { return new HashCodeBuilder(17, 31). // two randomly chosen prime numbers // if deriving: appendSuper(super.hashCode()). append(name). append(age). toHashCode(); } @Override public boolean equals(Object obj) { if (!(obj instanceof Person)) return false; if (obj == this) return true; Person rhs = (Person) obj; return new EqualsBuilder(). // if deriving: appendSuper(super.equals(obj)). append(name, rhs.name). append(age, rhs.age). isEquals(); }} Also remember: When using a hash-based Collection or Map such as HashSet , LinkedHashSet , HashMap , Hashtable , or WeakHashMap , make sure that the hashCode() of the key objects that you put into the collection never changes while the object is in the collection. The bulletproof way to ensure this is to make your keys immutable, which has also other benefits . | {
"score": 12,
"source": [
"https://Stackoverflow.com/questions/27581",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/797/"
]
} |
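If pulling in Apache Commons Lang is not an option, the same contract can be satisfied with the JDK's own `java.util.Objects` helpers (Java 7+). A minimal sketch of the `Person` example above — the field values in `main` are made up for the demo:

```java
import java.util.Objects;

// Plain-JDK sketch of the Person example: java.util.Objects replaces the
// Apache Commons builders when adding a dependency is unwanted.
final class Person {
    private final String name;
    private final int age;

    Person(String name, int age) {
        this.name = name;
        this.age = age;
    }

    @Override
    public boolean equals(Object obj) {
        if (obj == this) return true;
        if (!(obj instanceof Person)) return false; // also rejects null
        Person rhs = (Person) obj;
        return age == rhs.age && Objects.equals(name, rhs.name);
    }

    @Override
    public int hashCode() {
        // Same fields as equals(), so equal objects share a hash code.
        return Objects.hash(name, age);
    }

    public static void main(String[] args) {
        Person a = new Person("Ada", 36);
        Person b = new Person("Ada", 36);
        System.out.println(a.equals(b) && a.hashCode() == b.hashCode()); // true
    }
}
```

`Objects.hash` feeds the same fields used by `equals()` into the hash, which keeps the two methods consistent as required by the contract.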
27,621 | On the UNIX bash shell (specifically Mac OS X Leopard) what would be the simplest way to copy every file having a specific extension from a folder hierarchy (including subdirectories) to the same destination folder (without subfolders)? Obviously there is the problem of having duplicates in the source hierarchy. I wouldn't mind if they are overwritten. Example: I need to copy every .txt file in the following hierarchy /foo/a.txt/foo/x.jpg/foo/bar/a.txt/foo/bar/c.jpg/foo/bar/b.txt To a folder named 'dest' and get: /dest/a.txt/dest/b.txt | In bash: find /foo -iname '*.txt' -exec cp \{\} /dest/ \; find will find all the files under the path /foo matching the wildcard *.txt , case insensitively (That's what -iname means). For each file, find will execute cp {} /dest/ , with the found file in place of {} . | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/27621",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2954/"
]
} |
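The command can be checked end-to-end by recreating the question's hierarchy in a scratch directory (relative `foo`/`dest` paths here stand in for the absolute ones in the question):

```shell
# Build the sample tree from the question
mkdir -p foo/bar dest
echo one > foo/a.txt
echo two > foo/bar/b.txt
echo dup > foo/bar/a.txt          # duplicate name; one copy overwrites the other
touch foo/x.jpg foo/bar/c.jpg

# Flatten every .txt (case-insensitively) into dest/
find foo -iname '*.txt' -exec cp {} dest/ \;

ls dest    # a.txt  b.txt
```

Note that which duplicate `a.txt` "wins" depends on `find`'s traversal order, so don't rely on it — consistent with the question's "I wouldn't mind if they are overwritten".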
27,695 | It happens to me all the time. I accidentally version a file I do not want to be versioned (i.e. developer/machine-specific config files). If I commit this file, I will mess up the paths on all the other developer machines - they will be unhappy. If I do delete the file from versioning, it will be deleted from the other developers' machines - they will be unhappy. If I choose to never commit the file, I always have a "dirty" checkout - I am unhappy. Is there a clean way to "unversion" a file from revision control that will result in no-one being unhappy? edit: trying to clarify a bit: I have already committed the file to the repository and I want to only remove it from versioning - I specifically do not want it to be physically deleted from everyone doing a checkout. I initially wanted it to be ignored. Answer: If I could accept a second answer, it would be this . It answers my question with respect to git - the accepted answer is about svn. | SVN version 1.5 supports removing/deleting a file from a repository without losing the local file, taken from http://subversion.tigris.org/svn_1.5_releasenotes.html New --keep-local option retains path after delete. Delete (remove) now takes a --keep-local option to retain its targets locally, so paths will not be removed even if unmodified. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/27695",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1870/"
]
} |
27,743 | My RubyOnRails app is set up with the usual pack of mongrels behind Apache configuration. We've noticed that our Mongrel web server memory usage can grow quite large on certain operations and we'd really like to be able to dynamically do a graceful restart of selected Mongrel processes at any time. However, for reasons I won't go into here it can sometimes be very important that we don't interrupt a Mongrel while it is servicing a request, so I assume a simple process kill isn't the answer. Ideally, I want to send the Mongrel a signal that says "finish whatever you're doing and then quit before accepting any more connections". Is there a standard technique or best practice for this? | I've done a little more investigation into the Mongrel source and it turns out that Mongrel installs a signal handler to catch a standard process kill (TERM) and do a graceful shutdown, so I don't need a special procedure after all. You can see this working from the log output you get when killing a Mongrel while it's processing a request. For example: ** TERM signal received.Thu Aug 28 00:52:35 +0000 2008: Reaping 2 threads for slow workers because of 'shutdown'Waiting for 2 requests to finish, could take 60 seconds.Thu Aug 28 00:52:41 +0000 2008: Reaping 2 threads for slow workers because of 'shutdown'Waiting for 2 requests to finish, could take 60 seconds.Thu Aug 28 00:52:43 +0000 2008 (13051) Rendering layoutfalsecontent_typetext/htmlactionindex within layouts/application | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/27743",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2994/"
]
} |
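The drain-on-TERM behaviour Mongrel implements can be sketched in a few lines of Python — this illustrates the general pattern (trap the signal, stop accepting new work, finish what is in flight), not Mongrel's actual code; the request names are made up:

```python
import os
import signal

shutting_down = False

def on_term(signum, frame):
    # Mirror Mongrel's handler: flag a shutdown instead of dying mid-request.
    global shutting_down
    shutting_down = True

signal.signal(signal.SIGTERM, on_term)

queue = ["req1", "req2", "req3"]
handled = []
while queue and not shutting_down:
    handled.append(queue.pop(0))                 # "service" one request to completion
    if len(handled) == 2:
        os.kill(os.getpid(), signal.SIGTERM)     # simulate `kill -TERM <pid>`

# req1 and req2 finished normally; req3 was never started
```

The in-flight request (`req2`) completes before the loop notices the flag, which is exactly the "finish whatever you're doing, then quit" semantics the question asks for.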
27,745 | Given the URL (single line): http://test.example.com/dir/subdir/file.html How can I extract the following parts using regular expressions: The Subdomain (test) The Domain (example.com) The path without the file (/dir/subdir/) The file (file.html) The path with the file (/dir/subdir/file.html) The URL without the path ( http://test.example.com ) (add any other that you think would be useful) The regex should work correctly even if I enter the following URL: http://example.example.com/example/example/example.html | A single regex to parse and break up a full URL including query parameters and anchors e.g. https://www.google.com/dir/1/2/search.html?arg=0-a&arg1=1-b&arg3-c#hash ^((http[s]?|ftp):\/)?\/?([^:\/\s]+)((\/\w+)*\/)([\w\-\.]+[^#?\s]+)(.*)?(#[\w\-]+)?$ RegEx positions: url: RegExp['$&'], protocol:RegExp.$2, host:RegExp.$3, path:RegExp.$4, file:RegExp.$6, query:RegExp.$7, hash:RegExp.$8 You could then further parse the host ('.' delimited) quite easily. What I would do is use something like this: /* ^(.*:)//([A-Za-z0-9\-\.]+)(:[0-9]+)?(.*)$*/proto $1host $2port $3the-rest $4 then further parse 'the rest' to be as specific as possible. Doing it in one regex is, well, a bit crazy. | {
"score": 8,
"source": [
"https://Stackoverflow.com/questions/27745",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2644/"
]
} |
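The two-step approach from the answer — coarse split first, then parse "the rest" — can be sketched in JavaScript against the question's sample URL (the variable names here are invented for the example):

```javascript
const url = "http://test.example.com/dir/subdir/file.html";

// Coarse split: protocol, host, optional port, the rest
const m = url.match(/^(.*:)\/\/([A-Za-z0-9\-.]+)(:[0-9]+)?(.*)$/);
const [, proto, host, port, rest] = m;

// Then refine: host on '.' and path on the last '/'
const [subdomain, ...domainParts] = host.split(".");
const domain = domainParts.join(".");
const cut = rest.lastIndexOf("/") + 1;
const dirPath = rest.slice(0, cut);   // "/dir/subdir/"
const file = rest.slice(cut);         // "file.html"
const origin = proto + "//" + host;   // "http://test.example.com"
```

Splitting the host with string operations after one simple regex avoids the "one regex for everything" trap the answer warns about.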
27,757 | I am storing a PNG as an embedded resource in an assembly. From within the same assembly I have some code like this: Bitmap image = new Bitmap(typeof(MyClass), "Resources.file.png"); The file, named "file.png" is stored in the "Resources" folder (within Visual Studio), and is marked as an embedded resource. The code fails with an exception saying: Resource MyNamespace.Resources.file.png cannot be found in class MyNamespace.MyClass I have identical code (in a different assembly, loading a different resource) which works. So I know the technique is sound. My problem is I end up spending a lot of time trying to figure out what the correct path is. If I could simply query (eg. in the debugger) the assembly to find the correct path, that would save me a load of headaches. | This will get you a string array of all the resources: System.Reflection.Assembly.GetExecutingAssembly().GetManifestResourceNames(); | {
"score": 9,
"source": [
"https://Stackoverflow.com/questions/27757",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1006/"
]
} |
27,758 | OK, I know what you're thinking, "why write a method you do not want people to use?" Right? Well, in short, I have a class that needs to be serialized to XML. In order for the XmlSerializer to do its magic, the class must have a default, empty constructor: public class MyClass{ public MyClass() { // required for xml serialization }} So, I need to have it, but I don't want people to use it, so is there any attribute that can be use to mark the method as "DO NOT USE"? I was thinking of using the Obsolete attribute (since this can stop the build), but that just seems kinda "wrong", is there any other way of doing this, or do I need to go ahead and bite the bullet? :) Update OK, I have accepted Keith's answer, since I guess in my heart of hearts, I totally agree. This is why I asked the question in the first place, I don't like the notion of having the Obsolete attribute. However... There is still a problem, while we are being notified in intellisense, ideally, we would like to break the build, so is there any way to do this? Perhaps create a custom attribute? More focused question has been created here . | If a class is [Serialisable] (i.e. it can be copied around the place as needed) the param-less constructor is needed to deserialise. I'm guessing that you want to force your code's access to pass defaults for your properties to a parameterised constructor. Basically you're saying that it's OK for the XmlSerializer to make a copy and then set properties, but you don't want your own code to. To some extent I think this is over-designing. Just add XML comments that detail what properties need initialising (and what to). Don't use [Obsolete] , because it isn't. Reserve that for genuinely deprecated methods. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/27758",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/832/"
]
} |
27,832 | I have a DirectShow graph to render MPEG2/4 movies from a network stream. When I assemble the graph by connecting the pins manually it doesn't render. But when I call Render on the GraphBuilder it renders fine. Obviously there is some setup step that I'm not performing on some filter in the graph that GraphBuilder is performing. Is there any way to see debug output from GraphBuilder when it assembles a graph? Is there a way to dump a working graph to see how it was put together? Any other ideas for unraveling the mystery that lives in the DirectShow box? Thanks!-Z | You can watch the graph you created using GraphEdit, a tool from the DirectShow SDK. In GraphEdit, select File->Connect to remote Graph... In order to find your graph in the list, you have to register it in the running object table: void AddToRot( IUnknown *pUnkGraph, DWORD *pdwRegister ) { IMoniker* pMoniker; IRunningObjectTable* pROT; GetRunningObjectTable( 0, &pROT ); WCHAR wsz[256]; swprintf_s( wsz, L"FilterGraph %08p pid %08x", (DWORD_PTR)pUnkGraph, GetCurrentProcessId() ); CreateItemMoniker( L"!", wsz, &pMoniker ); pROT->Register( 0, pUnkGraph, pMoniker, pdwRegister ); // Clean up any COM stuff here ...} After destroying your graph, you should remove it from the ROT by calling IRunningObjectTable::Revoke | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/27832",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2587612/"
]
} |
27,835 | Recently, I started changing some of our applications to support MS SQL Server as an alternative back end. One of the compatibility issues I ran into is the use of MySQL's CREATE TEMPORARY TABLE to create in-memory tables that hold data for very fast access during a session with no need for permanent storage. What is the equivalent in MS SQL? A requirement is that I need to be able to use the temporary table just like any other, especially JOIN it with the permanent ones. | @Keith This is a common misconception: Table variables are NOT necessarily stored in memory. In fact SQL Server decides whether to keep the variable in memory or to spill it to TempDB. There is no reliable way (at least in SQL Server 2005) to ensure that table data is kept in memory. For more detailed info look here | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/27835",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2077/"
]
} |
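The "join a temp table against permanent ones" requirement looks much the same across engines. Here is a runnable sketch using Python's built-in sqlite3 as a stand-in (in SQL Server you would write `CREATE TABLE #picked (...)` instead of `CREATE TEMP TABLE`; table and column names are made up for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, "ada"), (2, "bob")])

# Session-scoped scratch table, joined like any permanent table
conn.execute("CREATE TEMP TABLE picked (id INTEGER)")
conn.execute("INSERT INTO picked VALUES (2)")

rows = conn.execute(
    "SELECT u.name FROM picked p JOIN users u ON u.id = p.id"
).fetchall()
# rows == [('bob',)]
```

The temp table disappears with the session, which matches the "fast access during a session with no need for permanent storage" requirement.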
27,846 | What is the configuration setting for modifying the default homepage in a Grails application to no longer be appName/index.gsp? Of course you can set that page to be a redirect but there must be a better way. | Add this in UrlMappings.groovy "/" { controller = "yourController" action = "yourAction" } By configuring the URLMappings this way, the home-page of the app will be yourWebApp/yourController/yourAction. (cut/pasted from IntelliGrape Blog ) | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/27846",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3014/"
]
} |
27,857 | Basically I want tools which generate source code visualization like: function call graph dependency graph ... | Doxygen is really excellent for this, although you will need to install GraphViz to get the graphs to draw. Once you've got everything installed, it's really rather simple to draw the graphs. Make sure you set EXTRACT_ALL and CALL_GRAPH to true and you should be good to go. The full documentation on this function for doxygen is here . | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/27857",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3006/"
]
} |
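For reference, the relevant Doxyfile settings look like this (a fragment, assuming a reasonably recent Doxygen with Graphviz installed; `CALLER_GRAPH` is optional but handy for "who calls me" graphs):

```
EXTRACT_ALL  = YES
HAVE_DOT     = YES
CALL_GRAPH   = YES
CALLER_GRAPH = YES
```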
27,894 | In SQL Server 2005, we can create temp tables one of two ways: declare @tmp table (Col1 int, Col2 int); or create table #tmp (Col1 int, Col2 int); What are the differences between these two? I have read conflicting opinions on whether @tmp still uses tempdb, or if everything happens in memory. In which scenarios does one out-perform the other? | There are a few differences between Temporary Tables (#tmp) and Table Variables (@tmp), although using tempdb isn't one of them, as spelt out in the MSDN link below. As a rule of thumb, for small to medium volumes of data and simple usage scenarios you should use table variables. (This is an overly broad guideline with of course lots of exceptions - see below and following articles.) Some points to consider when choosing between them: Temporary Tables are real tables so you can do things like CREATE INDEXes, etc. If you have large amounts of data for which accessing by index will be faster then temporary tables are a good option. Table variables can have indexes by using PRIMARY KEY or UNIQUE constraints. (If you want a non-unique index just include the primary key column as the last column in the unique constraint. If you don't have a unique column, you can use an identity column.) SQL 2014 has non-unique indexes too . Table variables don't participate in transactions and SELECT s are implicitly with NOLOCK . The transaction behaviour can be very helpful, for instance if you want to ROLLBACK midway through a procedure then table variables populated during that transaction will still be populated! Temp tables might result in stored procedures being recompiled, perhaps often. Table variables will not. You can create a temp table using SELECT INTO, which can be quicker to write (good for ad-hoc querying) and may allow you to deal with changing datatypes over time, since you don't need to define your temp table structure upfront. 
You can pass table variables back from functions, enabling you to encapsulate and reuse logic much easier (eg make a function to split a string into a table of values on some arbitrary delimiter). Using Table Variables within user-defined functions enables those functions to be used more widely (see CREATE FUNCTION documentation for details). If you're writing a function you should use table variables over temp tables unless there's a compelling need otherwise. Both table variables and temp tables are stored in tempdb. But table variables (since 2005) default to the collation of the current database versus temp tables which take the default collation of tempdb ( ref ). This means you should be aware of collation issues if using temp tables and your db collation is different to tempdb's, causing problems if you want to compare data in the temp table with data in your database. Global Temp Tables (##tmp) are another type of temp table available to all sessions and users. Some further reading: Martin Smith's great answer on dba.stackexchange.com MSDN FAQ on difference between the two: https://support.microsoft.com/en-gb/kb/305977 MDSN blog article: https://learn.microsoft.com/archive/blogs/sqlserverstorageengine/tempdb-table-variable-vs-local-temporary-table Article: https://searchsqlserver.techtarget.com/tip/Temporary-tables-in-SQL-Server-vs-table-variables Unexpected behaviors and performance implications of temp tables and temp variables: Paul White on SQLblog.com | {
"score": 10,
"source": [
"https://Stackoverflow.com/questions/27894",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1219/"
]
} |
27,899 | Is there a way to make S3 default to an index.html page? E.g.: My bucket object listing: /index.html/favicon.ico/images/logo.gif A call to www.example.com/ index.html works great! But if one were to call www.example.com/ we'd either get a 403 or a REST object listing XML document depending on how bucket-level ACL was configured. So, the question: Is there a way to have index.html functionality with content hosted on S3? | Amazon S3 now supports Index Documents The index document for a bucket can be set to something like index.html . When accessing the root of the site or a sub-directory containing a document of that name that document is returned. It is extremely easy to do using the aws cli: aws s3 website $MY_BUCKET_NAME --index-document index.html You can set the index document from the AWS Management Console: | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/27899",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2961/"
]
} |
27,910 | The DOI system places basically no useful limitations on what constitutes a reasonable identifier . However, being able to pull DOIs out of PDFs, web pages, etc. is quite useful for citation information, etc. Is there a reliable way to identify a DOI in a block of text without assuming the 'doi:' prefix? (any language acceptable, regexes preferred, and avoiding false positives a must) | Ok, I'm currently extracting thousands of DOIs from free form text (XML) and I realized that my previous approach had a few problems, namely regarding encoded entities and trailing punctuation, so I went on reading the specification and this is the best I could come with. The DOI prefix shall be composed of a directory indicator followed bya registrant code. These two components shall be separated by a fullstop (period). The directory indicator shall be "10". The directory indicatordistinguishes the entire set of character strings (prefix and suffix)as digital object identifiers within the resolution system. Easy enough, the initial \b prevents us from "matching" a "DOI" that doesn't start with 10. : $pattern = '\b(10[.]'; The second element of the DOI prefix shall be the registrant code. Theregistrant code is a unique string assigned to a registrant. Also, all assigned registrant code are numeric, and at least 4 digits long, so: $pattern = '\b(10[.][0-9]{4,}'; The registrant code may be further divided into sub-elements foradministrative convenience if desired. Each sub-element of theregistrant code shall be preceded by a full stop. $pattern = '\b(10[.][0-9]{4,}(?:[.][0-9]+)*'; The DOI syntax shall be made up of a DOI prefix and a DOI suffixseparated by a forward slash. However, this isn't absolutely necessary, section 2.2.3 states that uncommon suffix systems may use other conventions (such as 10.1000.123456 instead of 10.1000/123456 ), but lets cut some slack. 
$pattern = '\b(10[.][0-9]{4,}(?:[.][0-9]+)*/'; The DOI name is case-insensitive and can incorporate any printablecharacters from the legal graphic characters of Unicode. The DOIsuffix shall consist of a character string of any length chosen by theregistrant. Each suffix shall be unique to the prefix element thatprecedes it. The unique suffix can be a sequential number, or it mightincorporate an identifier generated from or based on another system. Now this is where it gets trickier, from all the DOIs I have processed, I saw the following characters (besides [0-9a-zA-Z] of course) in their suffixes : .-()/:- -- so, while it doesn't exist, the DOI 10.1016.12.31/nature.S0735-1097(98)2000/12/31/34:7-7 is completely plausible. The logical choice would be to use \S or the [[:graph:]] PCRE POSIX class, so lets do that: $pattern = '\b(10[.][0-9]{4,}(?:[.][0-9]+)*/\S+'; // or $pattern = '\b(10[.][0-9]{4,}(?:[.][0-9]+)*/[[:graph:]]+'; Now we have a difficult problem, the [[:graph:]] class is a super-set of the [[:punct:]] class, which includes characters easily found in free text or any markup language: "'&<> among others. Lets just filter the markup ones for now using a negative lookahead: $pattern = '\b(10[.][0-9]{4,}(?:[.][0-9]+)*/(?:(?!["&\'<>])\S)+'; // or $pattern = '\b(10[.][0-9]{4,}(?:[.][0-9]+)*/(?:(?!["&\'<>])[[:graph:]])+'; The above should cover encoded entities ( & ), attribute quotes ( ["'] ) and open / close tags ( [<>] ). Unlike markup languages, free text usually doesn't employ punctuation characters unless they are bounded by at least one space or placed at the end of a sentence, for instance: This is a long DOI: 10.1016.12.31/nature.S0735-1097(98)2000/12/31/34:7-7 !!! The solution here is to close our capture group and assert another word boundary: $pattern = '\b(10[.][0-9]{4,}(?:[.][0-9]+)*/(?:(?!["&\'<>])\S)+)\b'; // or $pattern = '\b(10[.][0-9]{4,}(?:[.][0-9]+)*/(?:(?!["&\'<>])[[:graph:]])+)\b'; And voilá , here is a demo . | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/27910",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2963/"
]
} |
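The finished pattern drops straight into Python's `re` module. A quick harness over the answer's own test cases — the long parenthesised DOI and a markup-wrapped one:

```python
import re

# Final pattern from the walkthrough above
DOI_RE = re.compile(r'\b(10[.][0-9]{4,}(?:[.][0-9]+)*/(?:(?!["&\'<>])\S)+)\b')

text = (
    'This is a long DOI: 10.1016.12.31/nature.S0735-1097(98)2000/12/31/34:7-7 !!! '
    'and a linked one: <a href="https://doi.org/10.1000/182">10.1000/182</a>.'
)
matches = DOI_RE.findall(text)
# -> ['10.1016.12.31/nature.S0735-1097(98)2000/12/31/34:7-7',
#     '10.1000/182', '10.1000/182']
```

Note the linked DOI is found twice — once inside the `href` attribute and once in the link text — because the lookahead stops the suffix at `"` and `<` rather than rejecting the match.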
27,921 | Story: The user uploads an image that will be added to a photo gallery. As part of the upload process, we need to A) store the image on the web server's hard drive and B) store a thumbnail of the image on the web server's hard drive. "Best" here is defined as Relatively easy to implement, understand, and maintain Results in a thumbnail of reasonable quality Performance and high-quality thumbnails are secondary. | I suppose your best solution would be using the GetThumbnailImage from the .NET Image class. // Example in C#, should be quite alike in ASP.NET// Assuming filename as the uploaded fileusing ( Image bigImage = new Bitmap( filename ) ){ // Algorithm simplified for purpose of example. int height = bigImage.Height / 10; int width = bigImage.Width / 10; // Now create a thumbnail using ( Image smallImage = bigImage.GetThumbnailImage( width, height, new Image.GetThumbnailImageAbort(Abort), IntPtr.Zero) ) { smallImage.Save("thumbnail.jpg", ImageFormat.Jpeg); }} | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/27921",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/308/"
]
} |
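The `/10` division in the example doesn't target any particular thumbnail size. A bounding-box scale that preserves aspect ratio is usually what galleries want; here is the arithmetic as a language-neutral sketch in Python (the same numbers would feed `GetThumbnailImage`'s width/height arguments):

```python
def thumbnail_size(width, height, box=96):
    """Largest size fitting in box x box that keeps aspect ratio, never upscaling."""
    scale = min(box / width, box / height, 1.0)
    return max(1, round(width * scale)), max(1, round(height * scale))

print(thumbnail_size(1600, 1200))  # (96, 72)
print(thumbnail_size(50, 40))      # (50, 40) -- small images pass through unscaled
```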
27,928 | How do I calculate the distance between two points specified by latitude and longitude? For clarification, I'd like the distance in kilometers; the points use the WGS84 system and I'd like to understand the relative accuracies of the approaches available. | This link might be helpful to you, as it details the use of the Haversine formula to calculate the distance. Excerpt: This script [in Javascript] calculates great-circle distances between the two points – that is, the shortest distance over the earth’s surface – using the ‘Haversine’ formula. function getDistanceFromLatLonInKm(lat1,lon1,lat2,lon2) { var R = 6371; // Radius of the earth in km var dLat = deg2rad(lat2-lat1); // deg2rad below var dLon = deg2rad(lon2-lon1); var a = Math.sin(dLat/2) * Math.sin(dLat/2) + Math.cos(deg2rad(lat1)) * Math.cos(deg2rad(lat2)) * Math.sin(dLon/2) * Math.sin(dLon/2) ; var c = 2 * Math.atan2(Math.sqrt(a), Math.sqrt(1-a)); var d = R * c; // Distance in km return d;}function deg2rad(deg) { return deg * (Math.PI/180)} | {
"score": 11,
"source": [
"https://Stackoverflow.com/questions/27928",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1456/"
]
} |
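A direct port of the JavaScript above to Python, checked against a known city pair (Paris to London is roughly 343 km great-circle). Bear in mind the haversine formula assumes a sphere; distances on the WGS84 ellipsoid (e.g. via Vincenty's formulae) can differ by up to about 0.5%:

```python
from math import radians, sin, cos, atan2, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    R = 6371.0  # mean Earth radius in km
    dlat = radians(lat2 - lat1)
    dlon = radians(lon2 - lon1)
    a = (sin(dlat / 2) ** 2
         + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2)
    return R * 2 * atan2(sqrt(a), sqrt(1 - a))

# Paris -> London, roughly 343-344 km
print(haversine_km(48.8566, 2.3522, 51.5074, -0.1278))
```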
27,952 | Basically what I want to do it this: a pdb file contains a location of source files (e.g. C:\dev\proj1\helloworld.cs ). Is it possible to modify that pdb file so that it contains a different location (e.g. \more\differenter\location\proj1\helloworld.cs )? | You can use the source indexing feature of the Debugging Tools for Windows, which will save references to the appropriate revisions of the files in your source repository as an alternate stream in the PDB file. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/27952",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2822/"
]
} |
27,972 | JavaScript needs access to cookies if AJAX is used on a site with access restrictions based on cookies. Will HttpOnly cookies work on an AJAX site? Edit: Microsoft created a way to prevent XSS attacks by disallowing JavaScript access to cookies if HttpOnly is specified. FireFox later adopted this. So my question is: If you are using AJAX on a site, like StackOverflow, are Http-Only cookies an option? Edit 2: Question 2. If the purpose of HttpOnly is to prevent JavaScript access to cookies, and you can still retrieve the cookies via JavaScript through the XmlHttpRequest Object, what is the point of HttpOnly ? Edit 3: Here is a quote from Wikipedia: When the browser receives such a cookie, it is supposed to use it as usual in the following HTTP exchanges, but not to make it visible to client-side scripts.[32] The HttpOnly flag is not part of any standard, and is not implemented in all browsers. Note that there is currently no prevention of reading or writing the session cookie via a XMLHTTPRequest. [33]. I understand that document.cookie is blocked when you use HttpOnly. But it seems that you can still read cookie values in the XMLHttpRequest object, allowing for XSS. How does HttpOnly make you any safer than? By making cookies essentially read only? In your example, I cannot write to your document.cookie , but I can still steal your cookie and post it to my domain using the XMLHttpRequest object. 
<script type="text/javascript"> var req = null; try { req = new XMLHttpRequest(); } catch(e) {} if (!req) try { req = new ActiveXObject("Msxml2.XMLHTTP"); } catch(e) {} if (!req) try { req = new ActiveXObject("Microsoft.XMLHTTP"); } catch(e) {} req.open('GET', 'http://stackoverflow.com/', false); req.send(null); alert(req.getAllResponseHeaders());</script> Edit 4: Sorry, I meant that you could send the XMLHttpRequest to the StackOverflow domain, and then save the result of getAllResponseHeaders() to a string, regex out the cookie, and then post that to an external domain. It appears that Wikipedia and ha.ckers concur with me on this one, but I would love be re-educated... Final Edit: Ahh, apparently both sites are wrong, this is actually a bug in FireFox . IE6 & 7 are actually the only browsers that currently fully support HttpOnly. To reiterate everything I've learned: HttpOnly restricts all access to document.cookie in IE7 & and FireFox (not sure about other browsers) HttpOnly removes cookie information from the response headers in XMLHttpObject.getAllResponseHeaders() in IE7. XMLHttpObjects may only be submitted to the domain they originated from, so there is no cross-domain posting of the cookies. edit: This information is likely no longer up to date. | Yes, HTTP-Only cookies would be fine for this functionality. They will still be provided with the XmlHttpRequest's request to the server. In the case of Stack Overflow, the cookies are automatically provided as part of the XmlHttpRequest request. I don't know the implementation details of the Stack Overflow authentication provider, but that cookie data is probably automatically used to verify your identity at a lower level than the "vote" controller method. More generally, cookies are not required for AJAX. XmlHttpRequest support (or even iframe remoting, on older browsers) is all that is technically required. 
However, if you want to provide security for AJAX enabled functionality, then the same rules apply as with traditional sites. You need some method for identifying the user behind each request, and cookies are almost always the means to that end. In your example, I cannot write to your document.cookie, but I can still steal your cookie and post it to my domain using the XMLHttpRequest object. XmlHttpRequest won't make cross-domain requests (for exactly the sorts of reasons you're touching on). You could normally inject script to send the cookie to your domain using iframe remoting or JSONP, but then HTTP-Only protects the cookie again since it's inaccessible. Unless you had compromised StackOverflow.com on the server side, you wouldn't be able to steal my cookie. Edit 2: Question 2. If the purpose of Http-Only is to prevent JavaScript access to cookies, and you can still retrieve the cookies via JavaScript through the XmlHttpRequest Object, what is the point of Http-Only? Consider this scenario: I find an avenue to inject JavaScript code into the page. Jeff loads the page and my malicious JavaScript modifies his cookie to match mine. Jeff submits a stellar answer to your question. Because he submits it with my cookie data instead of his, the answer will become mine. You vote up "my" stellar answer. My real account gets the point. With HTTP-Only cookies, the second step would be impossible, thereby defeating my XSS attempt. Edit 4: Sorry, I meant that you could send the XMLHttpRequest to the StackOverflow domain, and then save the result of getAllResponseHeaders() to a string, regex out the cookie, and then post that to an external domain. It appears that Wikipedia and ha.ckers concur with me on this one, but I would love be re-educated... That's correct. You can still session hijack that way. It does significantly thin the herd of people who can successfully execute even that XSS hack against you though. 
However, if you go back to my example scenario, you can see where HTTP-Only does successfully cut off the XSS attacks which rely on modifying the client's cookies (not uncommon). It boils down to the fact that a) no single improvement will solve all vulnerabilities and b) no system will ever be completely secure. HTTP-Only is a useful tool in shoring up against XSS. Similarly, even though the cross domain restriction on XmlHttpRequest isn't 100% successful in preventing all XSS exploits, you'd still never dream of removing the restriction. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/27972",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/26/"
]
} |
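On the server side the flag is just an attribute of the `Set-Cookie` header. Python's standard `http.cookies` module shows the emitted header — a sketch of the mechanism, not tied to any particular framework, with a made-up cookie value:

```python
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["session"] = "abc123"
cookie["session"]["httponly"] = True   # browser script (document.cookie) can't read it
cookie["session"]["path"] = "/"

header = cookie.output()
print(header)   # includes "; HttpOnly"
```

The browser enforces the restriction on its side; the server's only job is to emit the flag.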
27,983 | I have a table of tags and want to get the highest count tags from the list. Sample data looks like this id (1) tag ('night')id (2) tag ('awesome')id (3) tag ('night') using SELECT COUNT(*), `Tag` from `images-tags`GROUP BY `Tag` gets me back the data I'm looking for perfectly. However, I would like to organize it, so that the highest tag counts are first, and limit it to only send me the first 20 or so. I tried this... SELECT COUNT(id), `Tag` from `images-tags`GROUP BY `Tag`ORDER BY COUNT(id) DESCLIMIT 20 and I keep getting an "Invalid use of group function - ErrNr 1111" What am I doing wrong? I'm using MySQL 4.1.25-Debian | In all versions of MySQL, simply alias the aggregate in the SELECT list, and order by the alias: SELECT COUNT(id) AS theCount, `Tag` from `images-tags`GROUP BY `Tag`ORDER BY theCount DESCLIMIT 20 | {
"score": 9,
"source": [
"https://Stackoverflow.com/questions/27983",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1477/"
]
} |
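The accepted fix — alias the aggregate, then order by the alias — reproduced end-to-end with Python's built-in sqlite3 standing in for MySQL (the SQL is the same; the table is renamed to plain `tags` to sidestep the backtick-quoting the hyphenated `images-tags` name needs):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tags (id INTEGER, tag TEXT)")
conn.executemany(
    "INSERT INTO tags VALUES (?, ?)",
    [(1, "night"), (2, "awesome"), (3, "night")],
)

rows = conn.execute(
    "SELECT COUNT(id) AS theCount, tag "
    "FROM tags GROUP BY tag ORDER BY theCount DESC LIMIT 20"
).fetchall()
print(rows)   # [(2, 'night'), (1, 'awesome')]
```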
27,998 | Most of my C/C++ development involves monolithic module files and absolutely no classes whatsoever, so usually when I need to make a DLL with accessible functions I just export them using the standard __declspec(dllexport) directive. Then access them either dynamically via LoadLibrary() or at compile time with a header and lib file. How do you do this when you want to export an entire class (and all it's public methods and properties)? Is it possible to dynamically load that class at runtime and if so, how? How would you do it with a header and lib for compile time linking? | What about late-binding? As in loading it with LoadLibrary() and GetProcAddress() ? I'm used being able to load the library at run time and it would be great if you could do that here. So there are two ways to load the DLL. The first is to reference one or more symbols from the DLL (your classname, for example), supply an appropriate import .LIB and let the linker figure everything out. The second is to explicitly load the DLL via LoadLibrary. Either approach works fine for C-level function exports. You can either let the linker handle it or call GetProcAddress as you noted. But when it comes to exported classes , typically only the first approach is used, i.e., implicitly link to the DLL. In this case the DLL is loaded at application start time, and the application fails to load if the DLL can't be found. If you want to link to a class defined in a DLL, and you want that DLL to be loaded dynamically, sometime after program initiation, you have two options: Create objects of the class using a special factory function, which internally will have to use (a tiny bit of) assembler to "hook up" newly created objects to their appropriate offsets. This has to be done at run-time AFTER the DLL has been loaded, obviously. A good explanation of this approach can be found here . Use a delay-load DLL . All things considered... 
probably better to just go with implicit linking, in which case you definitely want to use the preprocessor technique shown above. In fact, if you create a new DLL in Visual Studio and choose the "export symbols" option these macros will be created for you. Good luck... | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/27998",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/194/"
]
} |
28,002 | I've been writing C and C++ code for almost twenty years, but there's one aspect of these languages that I've never really understood. I've obviously used regular casts i.e. MyClass *m = (MyClass *)ptr; all over the place, but there seem to be two other types of casts, and I don't know the difference. What's the difference between the following lines of code?

    MyClass *m = (MyClass *)ptr;
    MyClass *m = static_cast<MyClass *>(ptr);
    MyClass *m = dynamic_cast<MyClass *>(ptr);

| static_cast static_cast is used for cases where you basically want to reverse an implicit conversion, with a few restrictions and additions. static_cast performs no runtime checks. This should be used if you know that you refer to an object of a specific type, and thus a check would be unnecessary. Example:

    void func(void *data) {
        // Conversion from MyClass* -> void* is implicit
        MyClass *c = static_cast<MyClass*>(data);
        ...
    }

    int main() {
        MyClass c;
        start_thread(&func, &c) // func(&c) will be called
            .join();
    }

In this example, you know that you passed a MyClass object, and thus there isn't any need for a runtime check to ensure this. dynamic_cast dynamic_cast is useful when you don't know what the dynamic type of the object is. It returns a null pointer if the object referred to doesn't contain the type casted to as a base class (when you cast to a reference, a bad_cast exception is thrown in that case).

    if (JumpStm *j = dynamic_cast<JumpStm*>(&stm)) {
        ...
    } else if (ExprStm *e = dynamic_cast<ExprStm*>(&stm)) {
        ...
    }

You can not use dynamic_cast for downcast (casting to a derived class) if the argument type is not polymorphic.
For example, the following code is not valid, because Base doesn't contain any virtual function:

    struct Base { };
    struct Derived : Base { };

    int main() {
        Derived d;
        Base *b = &d;
        dynamic_cast<Derived*>(b); // Invalid
    }

An "up-cast" (cast to the base class) is always valid with both static_cast and dynamic_cast , and also without any cast, as an "up-cast" is an implicit conversion (assuming the base class is accessible, i.e. it's a public inheritance). Regular Cast These casts are also called C-style cast. A C-style cast is basically identical to trying out a range of sequences of C++ casts, and taking the first C++ cast that works, without ever considering dynamic_cast . Needless to say, this is much more powerful as it combines all of const_cast , static_cast and reinterpret_cast , but it's also unsafe, because it does not use dynamic_cast . In addition, C-style casts not only allow you to do this, but they also allow you to safely cast to a private base-class, while the "equivalent" static_cast sequence would give you a compile-time error for that. Some people prefer C-style casts because of their brevity. I use them for numeric casts only, and use the appropriate C++ casts when user defined types are involved, as they provide stricter checking. | {
"score": 12,
"source": [
"https://Stackoverflow.com/questions/28002",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1821/"
]
} |
28,098 | How do I convert the value of a PHP variable to string? I was looking for something better than concatenating with an empty string: $myText = $myVar . ''; Like the ToString() method in Java or .NET. | You can use the casting operators : $myText = (string)$myVar; There are more details for string casting and conversion in the Strings section of the PHP manual, including special handling for booleans and nulls. | {
"score": 11,
"source": [
"https://Stackoverflow.com/questions/28098",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2680/"
]
} |
28,160 | For whatever reason, our company has a coding guideline that states: Each class shall have its own header and implementation file. So if we wrote a class called MyString we would need an associated MyString.h and MyString.cxx . Does anyone else do this? Has anyone seen any compiling performance repercussions as a result? Does 5000 classes in 10000 files compile just as quickly as 5000 classes in 2500 files? If not, is the difference noticeable? [We code C++ and use GCC 3.4.4 as our everyday compiler] | The term here is translation unit and you really want to (if possible) have one class per translation unit i.e., one class implementation per .cpp file, with a corresponding .h file of the same name. It's usually more efficient (from a compile/link standpoint) to do things this way, especially if you're doing things like incremental link and so forth. The idea being, translation units are isolated such that, when one translation unit changes, you don't have to rebuild a lot of stuff, as you would have to if you started lumping many abstractions into a single translation unit. Also you'll find many errors/diagnostics are reported via file name ("Error in Myclass.cpp, line 22") and it helps if there's a one-to-one correspondence between files and classes. (Or I suppose you could call it a 2 to 1 correspondence). | {
"score": 8,
"source": [
"https://Stackoverflow.com/questions/28160",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1881/"
]
} |
28,171 | If I open a solution in Visual Studio 2008 and run a unit test then VS creates a new .vsmdi file in the Solution Items folder and gives it the next number available e.g. My Solution2.vsmdi. Any idea why VS is doing this and how I can get it to stop doing this? | It appears that the VSMDI problem is a known bug and has been around since VS2005 Team System but it has no clear fix as yet. Another reason to NOT use MS Test. An MSDN blog details how to run unit tests without VSMDI files . | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/28171",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1463/"
]
} |
28,197 | For more information - Personal Software Process on Wikipedia and Team Software Process on Wikipedia . I have two questions: What benefits have you seen from these processes? What tools and/or methods do you use to follow these processes? | I went through the training and then my company paid for me to go to Carnegie Mellon and go through the PSP instructor training course to get certified as an instructor. I think the goal was to use this as part of our company's CMM/CMMI effort. I met Watts Humphrey and found him to be a kind, gentle soul with some deeply held ideas about process. I read several of his books as well. Here's my take on it in a nutshell - it is too structured for most people to follow, assuming you follow things to the letter. The idea of estimation based on historic info is OK, particularly in the classroom setting, but in the real world where estimates are undone in a day due to the changing tide of requirements and direction, it is far less useful. I've also done Wide Band Delphi estimation and that was OK but honestly wasn't necessarily any better than the 'best guess' I'd make. My team was less than enthusiastic about PSP and that is part of the problem - developer buy-in. My company was doing it for the wrong reason - simply to say "hey, look we use PSP and have some certified instructors!". In the end, I've found using an 'agile' approach to be better. I have a backlog of work to do and can generally estimate it pretty well. I've been doing it long enough that I can make pretty good rough estimates on time and frankly don't think that the time tracking really improves things much. Perhaps in some environments it would work well, but at my place, we'll keep pumping out quality software without all the process hoops that yield questionable benefits. Just my two cents. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/28197",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/572/"
]
} |
28,219 | Can I get a 'when to use' for these and others? <% %><%# EVAL() %> Thanks | Check out the Web Forms Syntax Reference on MSDN. For basics, <% %> is used for pure code blocks. I generally only use this for if statements:

    <% if (IsLoggedIn) { %>
        <div class="authenticated">
    <% } else { %>
        <div class="unauthenticated">
    <% } %>

<%= Expression %> is used to add text into your markup; that is, it equates to:

    <div class='<%= IsLoggedIn ? "authenticated" : "unauthenticated" %>'>

<%# Expression %> is very similar to the above, but it is evaluated in a DataBinding scenario. One thing that this means is that you can use these expressions to set values of runat="server" controls, which you can't do with the <%= %> syntax. Typically this is used inside of a template for a databound control, but you can also use it in your page, and then call Page.DataBind() (or Control.DataBind()) to cause that code to evaluate. The others mentioned in the linked article are less common, though certainly have their uses, too. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/28219",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1293/"
]
} |
28,235 | Using JDeveloper , I started developing a set of web pages for a project at work. Since I didn't know much about JDev at the time, I ran over to Oracle to follow some tutorials. The JDev tutorials recommended doing JSPX instead of JSP , but didn't really explain why. Are you developing JSPX pages? Why did you decide to do so? What are the pros/cons of going the JSPX route? | The main difference is that a JSPX file (officially called a 'JSP document') may be easier to work with because the requirement for well-formed XML may allow your editor to identify more typos and syntax errors as you type. However, there are also disadvantages. For example, well-formed XML must escape things like less-than signs, so your file could end up with content like:

    <script type="text/javascript">
        if (number &lt; 0) {

The XML syntax may also be more verbose. | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/28235",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
]
} |
28,241 | I am merging a CVS branch and one of the larger changes is the replacement wherever it occurs of a Singleton pattern with abstract classes that have a static initialisation block and all static methods. Is this something that's worth keeping since it will require merging a lot of conflicts, what sort of situation would I be looking at for this refactoring to be worthwhile? We are running this app under Weblogic 8.1 (so JDK 1.4.2) sorry Thomas, let me clarify.. the HEAD version has the traditional singleton pattern (private constructor, getInstance() etc) the branch version has no constructor, is a 'public abstract class' and modified all the methods on the object to be 'static'. The code that used to exist in the private constructor is moved into a static block. Then all usages of the class are changed which causes multiple conflicts in the merge. There are a few cases where this change was made. | From a strict runtime performance point of view, the difference is really negligible. The main difference between the two lies down in the fact that the "static" lifecycle is linked to the classloader, whereas for the singleton it's a regular instance lifecycle. Usually it's better to stay away from the ClassLoader business, you avoid some tricky problems, especially when you try to reload the web application. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/28241",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
]
} |
28,256 | I've developed an equation parser using a simple stack algorithm that will handle binary (+, -, |, &, *, /, etc) operators, unary (!) operators, and parenthesis. Using this method, however, leaves me with everything having the same precedence - it's evaluated left to right regardless of operator, although precedence can be enforced using parenthesis. So right now "1+11*5" returns 60, not 56 as one might expect. While this is suitable for the current project, I want to have a general purpose routine I can use for later projects. Edited for clarity: What is a good algorithm for parsing equations with precedence? I'm interested in something simple to implement and understand that I can code myself to avoid licensing issues with available code. Grammar: I don't understand the grammar question - I've written this by hand. It's simple enough that I don't see the need for YACC or Bison. I merely need to calculate strings with equations such as "2+3 * (42/13)". Language: I'm doing this in C, but I'm interested in an algorithm, not a language specific solution. C is low level enough that it'll be easy to convert to another language should the need arise. Code Example I posted the test code for the simple expression parser I was talking about above. The project requirements altered and so I never needed to optimize the code for performance or space as it wasn't incorporated into the project. It's in the original verbose form, and should be readily understandable. If I do anything further with it in terms of operator precedence, I'll probably choose the macro hack because it matches the rest of the program in simplicity. If I ever use this in a real project, though, I'll be going for a more compact/speedy parser. Related question Smart design of a math parser? -Adam | The hard way You want a recursive descent parser . 
To get precedence you need to think recursively, for example, using your sample string, 1+11*5 to do this manually, you would have to read the 1 , then see the plus and start a whole new recursive parse "session" starting with 11 ... and make sure to parse the 11 * 5 into its own factor, yielding a parse tree with 1 + (11 * 5) . This all feels so painful even to attempt to explain, especially with the added powerlessness of C. See, after parsing the 11, if the * was actually a + instead, you would have to abandon the attempt at making a term and instead parse the 11 itself as a factor. My head is already exploding. It's possible with the recursive descent strategy, but there is a better way... The easy (right) way If you use a GPL tool like Bison, you probably don't need to worry about licensing issues since the C code generated by bison is not covered by the GPL (IANAL but I'm pretty sure GPL tools don't force the GPL on generated code/binaries; for example Apple compiles code like say, Aperture with GCC and they sell it without having to GPL said code). Download Bison (or something equivalent, ANTLR, etc.). There is usually some sample code that you can just run bison on and get your desired C code that demonstrates this four function calculator: http://www.gnu.org/software/bison/manual/html_node/Infix-Calc.html Look at the generated code, and see that this is not as easy as it sounds. Also, the advantages of using a tool like Bison are 1) you learn something (especially if you read the Dragon book and learn about grammars), 2) you avoid NIH trying to reinvent the wheel. With a real parser-generator tool, you actually have a hope at scaling up later, showing other people you know that parsers are the domain of parsing tools. Update: People here have offered much sound advice.
My only warning against skipping the parsing tools or just using the Shunting Yard algorithm or a hand rolled recursive descent parser is that little toy languages 1 may someday turn into big actual languages with functions (sin, cos, log) and variables, conditions and for loops. Flex/Bison may very well be overkill for a small, simple interpreter, but a one off parser+evaluator may cause trouble down the line when changes need to be made or features need to be added. Your situation will vary and you will need to use your judgement; just don't punish other people for your sins [2] and build a less than adequate tool. My favorite tool for parsing The best tool in the world for the job is the Parsec library (for recursive descent parsers) which comes with the programming language Haskell. It looks a lot like BNF , or like some specialized tool or domain specific language for parsing (sample code [3]), but it is in fact just a regular library in Haskell, meaning that it compiles in the same build step as the rest of your Haskell code, and you can write arbitrary Haskell code and call that within your parser, and you can mix and match other libraries all in the same code . (Embedding a parsing language like this in a language other than Haskell results in loads of syntactic cruft, by the way. I did this in C# and it works quite well but it is not so pretty and succinct.) Notes: 1 Richard Stallman says, in Why you should not use Tcl The principal lesson of Emacs is that a language for extensions should not be a mere "extension language". It should be a real programming language, designed for writing and maintaining substantial programs. Because people will want to do that! [2] Yes, I am forever scarred from using that "language".
Also note that when I submitted this entry, the preview was correct, but SO's less than adequate parser ate my close anchor tag on the first paragraph , proving that parsers are not something to be trifled with because if you use regexes and one off hacks you will probably get something subtle and small wrong . [3] Snippet of a Haskell parser using Parsec: a four function calculator extended with exponents, parentheses, whitespace for multiplication, and constants (like pi and e).

    aexpr  = expr `chainl1` toOp
    expr   = optChainl1 term addop (toScalar 0)
    term   = factor `chainl1` mulop
    factor = sexpr `chainr1` powop
    sexpr  = parens aexpr <|> scalar <|> ident
    powop  = sym "^"  >>= return . (B Pow)
         <|> sym "^-" >>= return . (\x y -> B Pow x (B Sub (toScalar 0) y))
    toOp   = sym "->" >>= return . (B To)
    mulop  = sym "*" >>= return . (B Mul)
         <|> sym "/" >>= return . (B Div)
         <|> sym "%" >>= return . (B Mod)
         <|> return . (B Mul)
    addop  = sym "+" >>= return . (B Add)
         <|> sym "-" >>= return . (B Sub)
    scalar = number >>= return . toScalar
    ident  = literal >>= return . Lit
    parens p = do lparen
                  result <- p
                  rparen
                  return result

| {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/28256",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2915/"
]
} |
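Addendum to the parser answer above (28,256): the recursive strategy it describes amounts to one function per precedence level, and is small enough to hand-roll. Below is a minimal illustrative sketch (Python chosen purely for brevity, since the asker wanted the algorithm rather than C specifics; the function names, the two-level expr/term grammar, and the digits-only tokenizer are my own simplifications, not from the original answer):

```python
import re

# Minimal recursive descent evaluator with two precedence levels.
# Grammar: expr -> term (('+'|'-') term)*
#          term -> atom (('*'|'/') atom)*
#          atom -> NUMBER | '(' expr ')'
# The tokenizer handles non-negative integers only, to keep the sketch short.
def evaluate(source):
    tokens = re.findall(r"\d+|[-+*/()]", source)
    pos = 0

    def peek():
        return tokens[pos] if pos < len(tokens) else None

    def take():
        nonlocal pos
        tok = tokens[pos]
        pos += 1
        return tok

    def atom():
        if peek() == "(":
            take()            # consume '('
            value = expr()
            take()            # consume ')'
            return value
        return float(take())  # a number

    def term():               # '*' and '/' bind tighter than '+' and '-'
        value = atom()
        while peek() in ("*", "/"):
            if take() == "*":
                value *= atom()
            else:
                value /= atom()
        return value

    def expr():
        value = term()
        while peek() in ("+", "-"):
            if take() == "+":
                value += term()
            else:
                value -= term()
        return value

    return expr()
```

Because multiplication lives one level below addition in the grammar, evaluate("1+11*5") yields the asker's expected 56 rather than the left-to-right 60, while parentheses still override: evaluate("(1+11)*5") is 60.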
28,268 | I am considering buying an Apple MacBook Pro. Are there any pitfalls developing C#/.NET code in a virtual machine running on a Mac? Also, is it better to run Vista or XP Pro for this purpose? | I can't tell you any specific experiences since I don't have a Mac, but I did want to point out that there was an awesome episode of the DeepFriedBytes podcast that discussed this very topic. It made me want to give it a try. They discuss the pros and cons of going this route - well worth the listen IMO if this is something you're considering: Episode 5: Developing .NET Software on a Mac | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/28268",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2858/"
]
} |
28,303 | What are the most user-friendly color combinations for Web 2.0 websites, such as background, button colors, etc.? | ColorSchemer will suggest good schemes for you. If you want to try something out on your own, try Color Combinations . | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/28303",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2141/"
]
} |
28,377 | In Visual Basic, is there a performance difference when using the IIf function instead of the If statement? | VB has the following If statement which the question refers to, I think:

    ' Usage 1
    Dim result = If(a > 5, "World", "Hello")
    ' Usage 2
    Dim foo = If(result, "Alternative")

The first is basically C#'s ternary conditional operator and the second is its coalesce operator (return result unless it’s Nothing , in which case return "Alternative" ). If has thus replaced IIf and the latter is obsolete. Like in C#, VB's conditional If operator short-circuits, so you can now safely write the following, which is not possible using the IIf function:

    Dim len = If(text Is Nothing, 0, text.Length)

| {
"score": 8,
"source": [
"https://Stackoverflow.com/questions/28377",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/299/"
]
} |
28,380 | Has anybody managed to get the Android Emulator working behind a proxy that requires authentication? I've tried setting the -http-proxy argument to http://DOMAIN/USERNAME:PASSWORD@IP:PORT but am having no success. I've tried following the docs to no avail. I've also tried the -verbose-proxy setting but this no longer seems to exist. Any pointers? | I managed to do it in the Android 2.2 Emulator.
"score": 7,
"source": [
"https://Stackoverflow.com/questions/28380",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1281/"
]
} |
28,395 | How do you pass $_POST values to a page using cURL ? | Should work fine.

    $data = array('name' => 'Ross', 'php_master' => true);
    // You can POST a file by prefixing with an @ (for <input type="file"> fields)
    $data['file'] = '@/home/user/world.jpg';
    $handle = curl_init($url);
    curl_setopt($handle, CURLOPT_POST, true);
    curl_setopt($handle, CURLOPT_POSTFIELDS, $data);
    curl_exec($handle);
    curl_close($handle);

We have two options here, CURLOPT_POST which turns HTTP POST on, and CURLOPT_POSTFIELDS which contains an array of our post data to submit. This can be used to submit data to POST <form> s. It is important to note that curl_setopt($handle, CURLOPT_POSTFIELDS, $data); takes the $data in two formats, and that this determines how the post data will be encoded. $data as an array() : The data will be sent as multipart/form-data which is not always accepted by the server.

    $data = array('name' => 'Ross', 'php_master' => true);
    curl_setopt($handle, CURLOPT_POSTFIELDS, $data);

$data as url encoded string: The data will be sent as application/x-www-form-urlencoded , which is the default encoding for submitted html form data.

    $data = array('name' => 'Ross', 'php_master' => true);
    curl_setopt($handle, CURLOPT_POSTFIELDS, http_build_query($data));

I hope this will help others save their time. See: curl_init curl_setopt | {
"score": 8,
"source": [
"https://Stackoverflow.com/questions/28395",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2863/"
]
} |
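As a cross-check for the cURL entry above (28,395), the same application/x-www-form-urlencoded POST can be reproduced with Python's standard library. This is an illustrative sketch, not part of the original answer; the throwaway local HTTP server exists only so the example is self-contained and the encoding the client actually sends can be inspected:

```python
import threading
import urllib.parse
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Reproduces the form-style POST from the cURL answer with Python's standard
# library. The tiny local server is test scaffolding: it records what the
# client sent so the wire format can be checked.
received = {}

class FormHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers["Content-Length"])
        body = self.rfile.read(length).decode()
        received["fields"] = dict(urllib.parse.parse_qsl(body))
        received["content_type"] = self.headers["Content-Type"]
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):  # silence per-request logging
        pass

def post_form(url, data):
    # urlencode() yields application/x-www-form-urlencoded -- the same
    # encoding as the http_build_query() string variant in the answer.
    payload = urllib.parse.urlencode(data).encode()
    with urllib.request.urlopen(url, data=payload) as response:
        return response.read()

server = HTTPServer(("127.0.0.1", 0), FormHandler)  # port 0 picks a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

reply = post_form("http://127.0.0.1:%d/" % server.server_port,
                  {"name": "Ross", "php_master": "1"})  # PHP's true posts as 1
server.shutdown()
```

Running this shows the server receiving name=Ross&php_master=1 with the Content-Type header set to application/x-www-form-urlencoded, mirroring the http_build_query() variant rather than the multipart array variant.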
28,464 | I've been using StructureMap recently and have enjoyed the experience thoroughly. However, I can see how one can easily get carried away with interfacing everything out and end up with classes that take in a boatload of interfaces into their constructors. Even though that really isn't a huge problem when you're using a dependency injection framework, it still feels that there are certain properties that really don't need to be interfaced out just for the sake of interfacing them. Where do you draw the line on what to interface out vs just adding a property to the class? | The main problem with dependency injection is that, while it gives the appearance of a loosely coupled architecture, it really doesn't. What you're really doing is moving that coupling from the compile time to the runtime, but still if class A needs some interface B to work, an instance of a class which implements interface B still needs to be provided. Dependency injection should only be used for the parts of the application that need to be changed dynamically without recompiling the base code. Uses that I've seen useful for an Inversion of Control pattern: A plugin architecture. So by making the right entry points you can define the contract for the service that must be provided. Workflow-like architecture. Where you can connect several components dynamically connecting the output of a component to the input of another one. Per-client application. Let's say you have various clients who pay for a set of "features" of your project. By using dependency injection you can easily provide just the core components and some "added" components which provide just the features the client has paid for. Translation. Although this is not usually done for translation purposes, you can "inject" different language files as needed by the application. That includes RTL or LTR user interfaces as needed. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/28464",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1574/"
]
} |
28,478 | I recently asked a question about IIf vs. If and found out that there is another function in VB called If which basically does the same thing as IIf but is a short-circuit. Does this If function perform better than the IIf function? Does the If statement trump the If and IIf functions? | Damn, I really thought you were talking about the operator all along. ;-) Anyway … Does this If function perform better than the IIf function? Definitely. Remember, it's built into the language. Only one of the two conditional arguments has to be evaluated, potentially saving a costly operation. Does the If statement trump the If and IIf functions? I think you can't compare the two because they do different things. If your code semantically performs an assignment you should emphasize this, instead of the decision-making. Use the If operator here instead of the statement. This is especially true if you can use it in the initialization of a variable because otherwise the variable will be default initialized, resulting in slower code:

    Dim result = If(a > 0, Math.Sqrt(a), -1.0)
    ' versus
    Dim result As Double ' Redundant default initialization!
    If a > 0 Then
        result = Math.Sqrt(a)
    Else
        result = -1
    End If

| {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/28478",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/299/"
]
} |
28,529 | When using jQuery 's ajax method to submit form data, what is the best way to handle errors?This is an example of what a call might look like: $.ajax({ url: "userCreation.ashx", data: { u:userName, p:password, e:email }, type: "POST", beforeSend: function(){disableSubmitButton();}, complete: function(){enableSubmitButton();}, error: function(xhr, statusText, errorThrown){ // Work out what the error was and display the appropriate message }, success: function(data){ displayUserCreatedMessage(); refreshUserList(); }}); The request might fail for a number of reasons, such as duplicate user name, duplicate email address etc, and the ashx is written to throw an exception when this happens. My problem seems to be that by throwing an exception the ashx causes the statusText and errorThrown to be undefined . I can get to the XMLHttpRequest.responseText which contains the HTML that makes up the standard .net error page. I am finding the page title in the responseText and using the title to work out which error was thrown. Although I have a suspicion that this will fall apart when I enable custom error handling pages. Should I be throwing the errors in the ashx, or should I be returning a status code as part of the data returned by the call to userCreation.ashx , then using this to decide what action to take? How do you handle these situations? | Should I be throwing the errors in the ashx, or should I be returning a status code as part of the data returned by the call to userCreation.ashx, then using this to decide what action to take? How do you handle these situations? Personally, if possible, I would prefer to handle this on the server side and work up a message to the user there. This works very well in a scenario where you only want to display a message to the user telling them what happened (validation message, essentially). 
However, if you want to perform an action based on what happened on the server, you may want to use a status code and write some javascript to perform various actions based on that status code. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/28529",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1403/"
]
} |
28,530 | What open source licenses are more corporate-friendly, i.e., they can be used in commercial products without the need to open source the commercial product? | I recommend the Apache License (specifically, version 2). It is not a “copy left” license and it addresses several matters that are important to established companies and their lawyers. “Copy left” is the philosophy of the Free Software Foundation requiring anything incorporating the licensed open source code to also be licensed as open source. That philosophy is regarded as poison by established companies that want to keep their products proprietary. Aside from not having “copy left” provisions, the Apache license specifically addresses the grant of rights from project contributors and it expressly addresses the fact that modern companies are typically made up of more than one legal entity (for example, a parent company and its subsidiaries). Most open source licenses don’t address these points. Whatever license you choose, if you want your code to be “corporate friendly,” in the sense that you want it to be incorporated into commercial, non-open source products, it is essential that you avoid GPL and other “copy left” type licenses. While it would be best to consult with your own lawyer before investing time or money in a project for which this is an important factor, a quick shorthand for licenses that are and are not “copy left” can be found on the Free Software Foundation’s website. They identify which licenses they don’t find meet their standards as “copy left.” The ones FSF rejects are most likely the ones that will be corporate friendly in this sense. (Although the question didn’t ask this, it is worth mentioning that, with very few exceptions, even GPL and other “copy left” type licenses are perfectly corporate friendly if they are only used internally by the commercial entities and not incorporated into their products.) | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/28530",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2141/"
]
} |
28,605 | I'm trying to learn C. As a C# developer, my IDE is Visual Studio. I've heard this is a good environment for C/C++ development. However, it seems no matter what little thing I try to do, intuition fails me. Can someone give good resources for how to either: learn the ins and outs of C in Visual Studio recommend a better C IDE + compiler Edit: See also: https://stackoverflow.com/questions/951516/a-good-c-ide | Well, you can use Visual Studio just fine; take a look at http://www.daniweb.com/forums/thread16256.html Go to the View menu and select Solution Explorer (or Ctrl+Alt+L). Then select the project that you are developing and right-click on it. Then select Properties from the submenu. Then select Configuration Properties from the tree structure, and under that select C/C++, then Advanced. Now in the right-side pane change the property Compile As from Compile as C++ Code (/TP) to Compile as C Code (/TC). Finally, change your file extensions to .c. Now you have configured your Visual Studio to compile C programs. And you can use NetBeans too; it could even be more user-friendly than Visual Studio. Download it and you won't regret it, I promise | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/28605",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/356/"
]
} |
28,637 | I need to find a bottleneck and need to measure time as accurately as possible. Is the following code snippet the best way to measure the performance?

    DateTime startTime = DateTime.Now;
    // Some execution process
    DateTime endTime = DateTime.Now;
    TimeSpan totalTimeTaken = endTime.Subtract(startTime);

| No, it's not. Use the Stopwatch (in System.Diagnostics )

    Stopwatch sw = Stopwatch.StartNew();
    PerformWork();
    sw.Stop();
    Console.WriteLine("Time taken: {0}ms", sw.Elapsed.TotalMilliseconds);

Stopwatch automatically checks for the existence of high-precision timers. It is worth mentioning that DateTime.Now often is quite a bit slower than DateTime.UtcNow due to the work that has to be done with timezones, DST and such. DateTime.UtcNow typically has a resolution of 15 ms. See John Chapman's blog post about DateTime.Now precision for a great summary. Interesting trivia: The stopwatch falls back on DateTime.UtcNow if your hardware doesn't support a high frequency counter. You can check to see if Stopwatch uses hardware to achieve high precision by looking at the static field Stopwatch.IsHighResolution . | {
"score": 10,
"source": [
"https://Stackoverflow.com/questions/28637",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2469/"
]
} |
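The same distinction exists outside .NET: wall-clock time is coarse and can jump, while a monotonic high-resolution counter is meant for measuring durations. A minimal, analogous Python sketch (time.perf_counter standing in for Stopwatch; the workload function is a made-up placeholder):

```python
import time

def perform_work():
    # Placeholder workload; stands in for the code being measured.
    total = 0
    for i in range(100_000):
        total += i
    return total

# Monotonic, high-resolution counter: the Stopwatch equivalent.
start = time.perf_counter()
perform_work()
elapsed = time.perf_counter() - start

print(f"Time taken: {elapsed * 1000:.3f}ms")
```

As with Stopwatch vs. DateTime.Now, the point is to use the counter designed for intervals rather than subtracting two wall-clock readings.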
28,713 | Is there a simple way of getting a HTML textarea and an input type="text" to render with (approximately) equal width (in pixels), that works in different browsers? A CSS/HTML solution would be brilliant. I would prefer not to have to use Javascript. Thanks/Erik | You should be able to use .mywidth { width: 100px; } <input class="mywidth"><br><textarea class="mywidth"></textarea> | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/28713",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/276/"
]
} |
28,716 | I'm trying to improve performance under high load and would like to implement opcode caching. Which of the following should I use? APC - Installation Guide eAccelerator - Installation Guide XCache - Installation Guide I'm also open to any other alternatives that have slipped under my radar. Currently running on a stock Debian Etch with Apache 2 and PHP 5.2 [Update 1] HowtoForge installation links added [Update 2] Based on the answers and feedback given, I have tested all 3 implementations using the following Apache JMeter test plan on my application: Login Access Home Page With 50 concurrent connections, the results are as follows: No Opcode Caching APC eAccelerator XCache Performance Graph (smaller is better) From the above results, eAccelerator has a slight edge in performance compared to APC and XCache. However, what matters most from the above data is that any sort of opcode caching gives a tremendous boost in performance. I have decided to use APC due to the following 2 reasons: Package is available in official Debian repository More functional control panel To summarize my experience: Ease of Installation: APC > eAccelerator > XCache Performance: eAccelerator > APC, XCache Control Panel: APC > XCache > eAccelerator | I think the answer might depend on the type of web applications you are running. I had to make this decision myself two years ago and couldn't decide between Zend Optimizer and eAccelerator. In order to make my decision, I used ab (apache bench) to test the server, and tested the three combinations (zend, eaccelerator, both running) and proved that eAccelerator on its own gave the greatest performance. If you have the luxury of time, I would recommend doing similar tests yourself, and making the decision based on your results. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/28716",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2633/"
]
} |
28,756 | What's the best/easiest way to obtain a count of items within an IEnumerable collection without enumerating over all of the items in the collection? Possible with LINQ or Lambda? | You will have to enumerate to get a count. Other constructs, like List , keep a running count. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/28756",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2993/"
]
} |
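The same trade-off shows up in any language with lazy sequences. A hypothetical Python sketch: a generator has no stored length, so counting it means walking it, and walking it consumes it:

```python
def numbers():
    # Lazy sequence: nothing is stored, so there is no length to ask for.
    for i in range(5):
        yield i

gen = numbers()
count = sum(1 for _ in gen)   # forces full enumeration, O(n)
print(count)                  # 5

# The generator is now exhausted; a second pass sees nothing.
second_pass = sum(1 for _ in gen)
print(second_pass)            # 0

# A list, by contrast, keeps a running count: len() is O(1).
items = list(numbers())
print(len(items))             # 5
```

This mirrors the answer above: either enumerate on demand, or materialize into a structure that maintains the count for you.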
28,793 | I have been doing some java development lately and have started using Eclipse. For the most part, I think it is great, but being a C/C++ guy used to doing all of his editing in vim, I find myself needlessly hitting the Esc key over and over. It would be really nice if I got all the nice features of Eclipse, but still could do basic editing the same way I can in vim. Anyone know of any Eclipse pluggins that would help with this? | Vrapper : an Eclipse plugin which acts as a wrapper for Eclipse text editors to provide a Vim-like input scheme for moving around and editing text. Unlike other plugins which embed Vim in Eclipse, Vrapper imitates the behaviour of Vim while still using whatever editor you have opened in the workbench. The goal is to have the comfort and ease which comes with the different modes, complex commands and count/operator/motion combinations which are the key features behind editing with Vim, while preserving the powerful features of the different Eclipse text editors, like code generation and refactoring... | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/28793",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2328/"
]
} |
28,796 | I have a bunch of classes I want to rename. Some of them have names that are small and that name is reused in other class names, where I don't want that name changed. Most of this lives in Python code, but we also have some XML code that references class names. Simple search and replace only gets me so far. In my case, I want to rename AdminAction to AdminActionPlug and AdminActionLogger to AdminActionLoggerPlug, so the first one's search-and-replace would also hit the second, wrongly. Does anyone have experience with Python refactoring tools? Bonus points if they can fix class names in the XML documents too. | In the meantime, I've tried two tools that have some sort of integration with vim. The first is Rope , a Python refactoring library that comes with a Vim (and emacs) plug-in. I tried it for a few renames, and that definitely worked as expected. It allowed me to preview the refactoring as a diff, which is nice. It is a bit text-driven, but that's alright for me, it just takes longer to learn. The second is Bicycle Repair Man , which I guess wins points on name. It also plugs into vim and emacs. I haven't played much with it yet, but I remember trying it a long time ago. I haven't played with both enough yet, or tried more types of refactoring, but I will do some more hacking with them. | {
"score": 7,
"source": [
"https://Stackoverflow.com/questions/28796",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2900/"
]
} |
28,826 | My university is part of MSDNAA, so I downloaded it a while back, but I just got around to installing it. I guess part of it replaces FrontPage for web editing, and there appears to be a video editor and a vector graphics editor, but I don't think I've even scratched the surface of what it is and what it can do. Could someone enlighten me, especially since I haven't found an "Expression Studio for Dummies" type website. | Expression Studio is basically a design studio. It consists of a bunch of design software that Microsoft has bought for the most part. The audience is designers, not developers. The gist of the software is that Expression Blend enables designers and programmers to work seamlessly together in letting the designer create the graphical user interface. In a normal workflow, the designer would deliver a mockup which the developer would have to implement. Using Expression Blend in combination with WPF, this is no longer necessary. The graphical UI made by the designer is functional. All the developer has to do is write the code for the function behind the design. This in itself is great because developers invariably fail to implement the design as thought out by the designer. Technical limitations, lack of communication … whatever the reason. UIs never look like them mockup done up front. Expression Design is basically a vector drawing program that can be used to design smaller components that are then used within Expression Blend as parts of the UI. For example, graphical buttons could be designed that way. It can also be used as a vanilla drawing program. I did the graphics in my thesis using Expression Design. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/28826",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/572/"
]
} |
28,832 | If I call finalize() on an object from my program code, will the JVM still run the method again when the garbage collector processes this object? This would be an approximate example:

MyObject m = new MyObject();
m.finalize();
m = null;
System.gc()

Would the explicit call to finalize() make the JVM's garbage collector not run the finalize() method on object m? | According to this simple test program, the JVM will still make its call to finalize() even if you explicitly called it:

private static class Blah {
    public void finalize() {
        System.out.println("finalizing!");
    }
}

private static void f() throws Throwable {
    Blah blah = new Blah();
    blah.finalize();
}

public static void main(String[] args) throws Throwable {
    System.out.println("start");
    f();
    System.gc();
    System.out.println("done");
}

The output is:

start
finalizing!
finalizing!
done

Every resource out there says to never call finalize() explicitly, and pretty much never even implement the method, because there are no guarantees as to if and when it will be called. You're better off just closing all of your resources manually. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/28832",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2697/"
]
} |
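For contrast, Python's closest analogue has different semantics: weakref.finalize guarantees the callback runs at most once, so an explicit call suppresses the later automatic one. A small sketch (purely illustrative, not related to the Java behaviour above):

```python
import gc
import weakref

calls = []

class Blah:
    pass

obj = Blah()
# Register a finalizer callback for obj.
fin = weakref.finalize(obj, calls.append, "finalizing!")

fin()          # explicit call, like blah.finalize() in the Java example
del obj
gc.collect()   # collection does NOT run the callback a second time

print(calls)   # ['finalizing!'] -- exactly once, unlike the Java output
```

The at-most-once guarantee is exactly what the Java finalize() mechanism lacks, which is one reason explicit finalize() calls are discouraged there.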
28,858 | Saw a post about hidden features in C# but not a lot of people have written linq/lambdas examples, so... I wonder... What's the coolest (as in the most elegant) use of the C# LINQ and/or Lambdas/anonymous delegates you have ever seen/written? Bonus if it has gone into production too! | The LINQ Raytracer certainly tops my list =) I'm not quite sure if it qualifies as elegant but it is most certainly the coolest linq-expression I've ever seen! Oh, and just to be extremely clear; I did not write it ( Luke Hoban did) | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/28858",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3055/"
]
} |
28,877 | I have a sproc that puts 750K records into a temp table through a query as one of its first actions. If I create indexes on the temp table before filling it, the item takes about twice as long to run compared to when I index after filling the table. (The index is an integer in a single column, the table being indexed is just two columns each a single integer.) This seems a little off to me, but then I don't have the firmest understanding of what goes on under the hood. Does anyone have an answer for this? | If you create a clustered index, it affects the way the data is physically ordered on the disk. It's better to add the index after the fact and let the database engine reorder the rows when it knows how the data is distributed. For example, let's say you needed to build a brick wall with numbered bricks so that those with the highest number are at the bottom of the wall. It would be a difficult task if you were just handed the bricks in random order, one at a time - you wouldn't know which bricks were going to turn out to be the highest numbered, and you'd have to tear the wall down and rebuild it over and over. It would be a lot easier to handle that task if you had all the bricks lined up in front of you, and could organize your work. That's how it is for the database engine - if you let it know about the whole job, it can be much more efficient than if you just feed it a row at a time. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/28877",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1327/"
]
} |
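The "load first, index later" pattern is easy to express against any SQL engine. A hypothetical sqlite3 sketch of the order of operations (sqlite is not SQL Server, so treat this as illustrating the shape of the load, not the exact performance behaviour):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE temp_pairs (a INTEGER, b INTEGER)")

# Bulk-load first: the engine just appends rows, with no index to maintain.
cur.executemany(
    "INSERT INTO temp_pairs VALUES (?, ?)",
    ((i, i * 2) for i in range(10_000)),
)

# Index afterwards: built in one pass over data whose distribution is known.
cur.execute("CREATE INDEX idx_pairs_a ON temp_pairs (a)")
conn.commit()

row_count = cur.execute("SELECT COUNT(*) FROM temp_pairs").fetchone()[0]
print(row_count)  # 10000
```

This is the brick-wall idea from the answer: hand the engine all the bricks, then let it organize them once.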
28,878 | I'm translating my C# code for YouTube video comments into PHP. In order to properly nest comment replies, I need to re-arrange XML nodes. In PHP I'm using DOMDocument and DOMXPath, which closely corresponds to C# XmlDocument. I've gotten pretty far in my translation but now I'm stuck on getting the parent node of a DOMElement. A DOMElement does not have a parent_node() property, only a DOMNode provides that property. After determining that a comment is a reply to a previous comment based on the string "in-reply-to" in a link element, I need to get its parent node in order to nest it beneath the comment it is in reply to:

// Get the parent entry node of this link element
$importnode = $objReplyXML->importNode($link->parent_node(), true);

| DOMElement is a subclass of DOMNode, so it does have the parentNode property. Just use $domNode->parentNode to find the parent node. In your example, the parent node of $importnode is null, because it has been imported into the document, and therefore does not have a parent yet. You need to attach it to another element before it has a parent. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/28878",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2601/"
]
} |
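The same "where is my parent?" question comes up with Python's xml.etree, which stores no parent pointers at all. A common workaround is to precompute a child-to-parent map (a hypothetical sketch with made-up XML, unrelated to the PHP DOM API):

```python
import xml.etree.ElementTree as ET

xml_doc = "<feed><entry id='1'><link rel='related'/></entry></feed>"
root = ET.fromstring(xml_doc)

# etree elements have no parentNode; build the mapping up front instead.
parent_of = {child: parent for parent in root.iter() for child in parent}

link = root.find(".//link")
entry = parent_of[link]
print(entry.tag, entry.get("id"))  # entry 1
```

With a DOM-style API (like PHP's DOMDocument or Python's xml.dom.minidom) the parentNode property makes this lookup unnecessary.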
28,881 | Using the same sort command with the same input produces different results on different machines. How do I fix that? | The man-page on OS X says: ******* WARNING ******* The locale specified by the environment affects sort order. Set LC_ALL=C to get the traditional sort order that uses native byte values. This might explain things. If some of your systems have no locale support, they would default to that locale (C), so you wouldn't have to set it on those. If you have some that support locales and want the same behavior, set LC_ALL=C on those systems. That would be the way to have as many systems as I know do it the same way. If you don't have any locale-less systems, just making sure they share a locale would probably be enough. For more canonical information, see The Single UNIX ® Specification, Version 2 description of locale , environment variables , setlocale() and the description of the sort(1) utility. | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/28881",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1438/"
]
} |
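The same locale sensitivity exists in Python: the default sort compares code points (the LC_ALL=C behaviour), while locale.strxfrm applies the environment's collation. A sketch pinned to the always-available C locale so the result is deterministic:

```python
import locale

words = ["banana", "Apple", "cherry"]

# Bytewise / code-point order: uppercase sorts before lowercase.
print(sorted(words))  # ['Apple', 'banana', 'cherry']

# Pin the locale explicitly, as LC_ALL=C does for sort(1).
locale.setlocale(locale.LC_ALL, "C")
print(sorted(words, key=locale.strxfrm))  # same order under the C locale
```

Under a language-specific locale (e.g. en_US.UTF-8), strxfrm would instead produce dictionary order, which is exactly why the same `sort` input diverges across machines with different locale settings.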
28,894 | For years I have been using the DEBUG compiler constant in VB.NET to write messages to the console. I've also been using System.Diagnostics.Debug.Write in similar fashion. It was always my understanding that when RELEASE was used as the build option, all of these statements were left out by the compiler, freeing your production code of the overhead of debug statements. Recently when working with Silverlight 2 Beta 2, I noticed that Visual Studio actually attached to a RELEASE build that I was running off of a public website and displayed DEBUG statements which I assumed weren't even compiled! Now, my first inclination is to assume that there is something wrong with my environment, but I also want to ask anyone with deep knowledge on System.Diagnostics.Debug and the DEBUG build option in general what I may be misunderstanding here. | The preferred method is to actually use the conditional attribute to wrap your debug calls, not use the compiler directives. #ifs can get tricky and can lead to weird build problems. An example of using a conditional attribute is as follows (in C#, but works in VB.NET too; note the condition string is case-sensitive and must match the DEBUG symbol exactly):

[Conditional("DEBUG")]
private void WriteDebug(string debugString)
{
    // do stuff
}

When you compile without the DEBUG flag set, any call to WriteDebug will be removed, as was assumed was happening with Debug.Write(). | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/28894",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3059/"
]
} |
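Outside .NET, the same idea is usually expressed with a guard that makes the whole call disappear when a debug flag is off. A hypothetical Python sketch of a Conditional-style wrapper (the flag and function names are made up for illustration):

```python
DEBUG = False  # flip to True in a debug build/configuration

messages = []

def conditional(flag):
    """Return the function unchanged if flag is set, else a no-op."""
    def decorate(func):
        if flag:
            return func
        return lambda *args, **kwargs: None
    return decorate

@conditional(DEBUG)
def write_debug(text):
    messages.append(text)

write_debug("expensive diagnostic")
print(messages)  # [] -- the call body never ran
```

Unlike the C# ConditionalAttribute, this doesn't remove the call site itself (argument expressions are still evaluated), but it captures the same intent: one switch controls whether diagnostics run at all.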
28,932 | What is the best JavaScript compressor available? I'm looking for a tool that: is easy to use, has a high compression rate, and produces reliable end results (doesn't mess up the code) | I recently released UglifyJS , a JavaScript compressor which is written in JavaScript (it runs on the Node.js platform, but it can be easily modified to run on any JavaScript engine, since it doesn't need any Node.js internals). It's a lot faster than both YUI Compressor and Google Closure , it compresses better than YUI on all scripts I tested it on, and it's safer than Closure (it knows how to deal with "eval" or "with"). Other than whitespace removal, UglifyJS also does the following:

- changes local variable names (usually to single characters)
- joins consecutive var declarations
- avoids inserting any unneeded brackets, parens and semicolons
- optimizes IFs (removes "else" when it detects that it's not needed, transforms IFs into the &&, || or ?/: operators when possible, etc.)
- transforms foo["bar"] into foo.bar where possible
- removes quotes from keys in object literals, where possible
- resolves simple expressions when this leads to smaller code (1+3*4 ==> 13)

PS: Oh, it can "beautify" as well. ;-) | {
"score": 8,
"source": [
"https://Stackoverflow.com/questions/28932",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1363/"
]
} |
28,949 | After reading this question , I was reminded of when I was taught Java and told never to call finalize() or run the garbage collector because "it's a big black box that you never need to worry about". Can someone boil the reasoning for this down to a few sentences? I'm sure I could read a technical report from Sun on this matter, but I think a nice, short, simple answer would satisfy my curiosity. | The short answer: Java garbage collection is a very finely tuned tool. System.gc() is a sledge-hammer. Java's heap is divided into different generations, each of which is collected using a different strategy. If you attach a profiler to a healthy app, you'll see that it very rarely has to run the most expensive kinds of collections because most objects are caught by the faster copying collector in the young generation. Calling System.gc() directly, while technically not guaranteed to do anything, in practice will trigger an expensive, stop-the-world full heap collection. This is almost always the wrong thing to do . You think you're saving resources, but you're actually wasting them for no good reason, forcing Java to recheck all your live objects “just in case”. If you are having problems with GC pauses during critical moments, you're better off configuring the JVM to use the concurrent mark/sweep collector, which was designed specifically to minimise time spent paused, than trying to take a sledgehammer to the problem and just breaking it further. The Sun document you were thinking of is here: Java SE 6 HotSpot™ Virtual Machine Garbage Collection Tuning (Another thing you might not know: implementing a finalize() method on your object makes garbage collection slower. Firstly, it will take two GC runs to collect the object: one to run finalize() and the next to ensure that the object wasn't resurrected during finalization. 
Secondly, objects with finalize() methods have to be treated as special cases by the GC because they have to be collected individually, they can't just be thrown away in bulk.) | {
"score": 6,
"source": [
"https://Stackoverflow.com/questions/28949",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/572/"
]
} |
28,952 | Is it possible to get a breakdown of CPU utilization by database ? I'm ideally looking for a Task Manager type interface for SQL server, but instead of looking at the CPU utilization of each PID (like taskmgr ) or each SPID (like spwho2k5 ), I want to view the total CPU utilization of each database. Assume a single SQL instance. I realize that tools could be written to collect this data and report on it, but I'm wondering if there is any tool that lets me see a live view of which databases are contributing most to the sqlservr.exe CPU load. | Sort of. Check this query out: SELECT total_worker_time/execution_count AS AvgCPU , total_worker_time AS TotalCPU, total_elapsed_time/execution_count AS AvgDuration , total_elapsed_time AS TotalDuration , (total_logical_reads+total_physical_reads)/execution_count AS AvgReads , (total_logical_reads+total_physical_reads) AS TotalReads, execution_count , SUBSTRING(st.TEXT, (qs.statement_start_offset/2)+1 , ((CASE qs.statement_end_offset WHEN -1 THEN datalength(st.TEXT) ELSE qs.statement_end_offset END - qs.statement_start_offset)/2) + 1) AS txt , query_planFROM sys.dm_exec_query_stats AS qs cross apply sys.dm_exec_sql_text(qs.sql_handle) AS st cross apply sys.dm_exec_query_plan (qs.plan_handle) AS qp ORDER BY 1 DESC This will get you the queries in the plan cache in order of how much CPU they've used up. You can run this periodically, like in a SQL Agent job, and insert the results into a table to make sure the data persists beyond reboots. When you read the results, you'll probably realize why we can't correlate that data directly back to an individual database. First, a single query can also hide its true database parent by doing tricks like this: USE msdbDECLARE @StringToExecute VARCHAR(1000)SET @StringToExecute = 'SELECT * FROM AdventureWorks.dbo.ErrorLog'EXEC @StringToExecute The query would be executed in MSDB, but it would poll results from AdventureWorks. Where should we assign the CPU consumption? 
It gets worse when you:

- Join between multiple databases
- Run a transaction in multiple databases, and the locking effort spans multiple databases
- Run SQL Agent jobs in MSDB that "work" in MSDB, but back up individual databases

It goes on and on. That's why it makes sense to performance tune at the query level instead of the database level. In SQL Server 2008R2, Microsoft introduced performance management and app management features that will let us package a single database in a distributable and deployable DAC pack, and they're promising features to make it easier to manage performance of individual databases and their applications. It still doesn't do what you're looking for, though. For more of those, check out the T-SQL repository at Toad World's SQL Server wiki (formerly at SQLServerPedia) . Updated on 1/29 to include total numbers instead of just averages. | {
"score": 8,
"source": [
"https://Stackoverflow.com/questions/28952",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1690/"
]
} |
28,965 | Watching SO come online has been quite an education for me. I'd like to make a checklist of various vulnerabilities and exploits used against web sites, and what programming techniques can be used to defend against them. What categories of vulnerabilities? crashing site, breaking into server, breaking into other people's logins, spam, sockpuppeting, meatpuppeting, etc... What kind of defensive programming techniques? etc... | From the Open Web Application Security Project : The OWASP Top Ten vulnerabilities (pdf) For a more painfully exhaustive list: Category:Vulnerability The top ten are:

1. Cross-site scripting (XSS)
2. Injection flaws (SQL injection, script injection)
3. Malicious file execution
4. Insecure direct object reference
5. Cross-site request forgery (XSRF)
6. Information leakage and improper error handling
7. Broken authentication and session management
8. Insecure cryptographic storage
9. Insecure communications
10. Failure to restrict URL access | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/28965",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/116/"
]
} |
28,975 | I've followed the CouchDB project with interest over the last couple of years, and see it is now an Apache Incubator project. Prior to that, the CouchDB web site was full of "do not use for production code" type disclaimers, so I'd done no more than keep an eye on it. I'd be interested to know your experiences if you've been using CouchDB either for a live project, or a technology pilot. | I use CouchDB to power a Facebook application (over 35k monthly active users). For a while it was using MySQL, but after porting the entire project over from Perl to Erlang, I decided to go for the gold and migrate all of the data into CouchDB and use that instead. CouchDB has been a great data store to work with. I think that it is on track to becoming a major player in web-based services. | {
"score": 5,
"source": [
"https://Stackoverflow.com/questions/28975",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2745/"
]
} |
28,982 | Related to my CouchDB question. Can anyone explain MapReduce in terms a numbnuts could understand? | Going all the way down to the basics for Map and Reduce. Map is a function which "transforms" items in some kind of list to another kind of item and puts them back in the same kind of list. Suppose I have a list of numbers: [1,2,3] and I want to double every number. In this case, the function to "double every number" is function x = x * 2. And without mappings, I could write a simple loop, say

A = [1, 2, 3]
foreach (item in A) A[item] = A[item] * 2

and I'd have A = [2, 4, 6], but instead of writing loops, if I have a map function I could write

A = [1, 2, 3].Map(x => x * 2)

The x => x * 2 is a function to be executed against the elements in [1,2,3]. What happens is that the program takes each item, executes (x => x * 2) against it by making x equal to each item, and produces a list of the results.

1 : 1 => 1 * 2 : 2
2 : 2 => 2 * 2 : 4
3 : 3 => 3 * 2 : 6

So after executing the map function with (x => x * 2) you'd have [2, 4, 6]. Reduce is a function which "collects" the items in lists and performs some computation on all of them, thus reducing them to a single value. Finding a sum or finding averages are all instances of a reduce function. For example, if you have a list of numbers, say [7, 8, 9], and you want them summed up, you'd write a loop like this

A = [7, 8, 9]
sum = 0
foreach (item in A) sum = sum + A[item]

But, if you have access to a reduce function, you could write it like this

A = [7, 8, 9]
sum = A.reduce( 0, (x, y) => x + y )

Now it's a little confusing why there are 2 arguments (0 and the function with x and y) passed. For a reduce function to be useful, it must be able to take 2 items, compute something and "reduce" those 2 items to just one single value; thus the program could reduce each pair until we have a single value.
The execution would follow:

result = 0
7 : result = result + 7 = 0 + 7 = 7
8 : result = result + 8 = 7 + 8 = 15
9 : result = result + 9 = 15 + 9 = 24

But you don't want to start with zeroes all the time, so the first argument is there to let you specify a seed value, specifically the value in the first result = line. Say you want to sum 2 lists; it might look like this:

A = [7, 8, 9]
B = [1, 2, 3]
sum = 0
sum = A.reduce( sum, (x, y) => x + y )
sum = B.reduce( sum, (x, y) => x + y )

or a version you'd more likely find in the real world:

A = [7, 8, 9]
B = [1, 2, 3]
sum_func = (x, y) => x + y
sum = A.reduce( B.reduce( 0, sum_func ), sum_func )

It's a good thing in DB software because, with Map\Reduce support, you can work with the database without needing to know how the data is stored in the DB; that's what a DB engine is for. You just need to be able to "tell" the engine what you want by supplying it with either a Map or a Reduce function, and then the DB engine can find its way around the data, apply your function, and come up with the results you want, all without you knowing how it loops over all the records. There are indexes and keys and joins and views and a lot of stuff a single database could hold, so by shielding you against how the data is actually stored, your code is made easier to write and maintain. Same goes for parallel programming: if you only specify what you want to do with the data instead of actually implementing the looping code, then the underlying infrastructure could "parallelize" and execute your function in a simultaneous parallel loop for you. | {
"score": 8,
"source": [
"https://Stackoverflow.com/questions/28982",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2745/"
]
} |
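The walk-through above maps directly onto Python's built-ins, which can serve as a runnable sanity check (note functools.reduce takes the seed as its third argument rather than the first):

```python
from functools import reduce

# Map: transform each item, produce a list of the same shape.
a = [1, 2, 3]
doubled = list(map(lambda x: x * 2, a))
print(doubled)  # [2, 4, 6]

# Reduce: fold the list pairwise down to a single value, starting from a seed.
b = [7, 8, 9]
total = reduce(lambda x, y: x + y, b, 0)
print(total)  # 24

# Summing two lists by feeding one reduce's result in as the other's seed.
c = [1, 2, 3]
combined = reduce(lambda x, y: x + y, c, reduce(lambda x, y: x + y, b, 0))
print(combined)  # 30
```

The 24 matches the step-by-step trace in the answer (0 + 7 = 7, 7 + 8 = 15, 15 + 9 = 24).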