In Ruby 1.9:

    weights.zip(data).map { |a, b| a * b }.reduce(:+)

In Ruby 1.8:

    weights.zip(data).inject(0) { |sum, (w, d)| sum + w * d }
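For example (illustrative values only), both forms compute the same dot product:

    weights = [1, 2, 3]
    data    = [4, 5, 6]
    weights.zip(data).map { |a, b| a * b }.reduce(:+)  # => 1*4 + 2*5 + 3*6 = 32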
SQL Server 2005 index recommendations
|sql-server|
We're in the process of upgrading one of our SQL Server instances from 2000 to 2005. I installed the performance dashboard (<http://www.microsoft.com/downloads/details.aspx?FamilyId=1d3a4a0d-7e0c-4730-8204-e419218c1efc&displaylang=en>) for access to some high-level reporting. One of the reports shows missing (recommended) indexes. I think it's based on the system views that the query optimizer maintains. My question is: what is the best way to decide whether to take an index recommendation? I know it doesn't make sense to apply all of the optimizer's suggestions. I see a lot of advice that basically says to try the index, keep it if performance improves, and drop it if performance degrades or stays the same. I'm wondering if there is a better way to make the decision, and what best practices exist on this subject.
How do I know which SQL Server 2005 index recommendations to implement, if any?
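For reference, the dashboard report draws on the SQL Server 2005 missing-index DMVs, which you can query directly. A sketch (the scoring expression is just a rough heuristic for sorting, not an official metric):

    -- Inspect the optimizer's missing-index suggestions, highest estimated impact first.
    SELECT TOP 25
        mid.statement                           AS table_name,
        mid.equality_columns,
        mid.inequality_columns,
        mid.included_columns,
        migs.user_seeks,
        migs.avg_user_impact,                   -- estimated % improvement
        migs.user_seeks * migs.avg_user_impact  AS rough_benefit_score
    FROM sys.dm_db_missing_index_details mid
    JOIN sys.dm_db_missing_index_groups mig
        ON mig.index_handle = mid.index_handle
    JOIN sys.dm_db_missing_index_group_stats migs
        ON migs.group_handle = mig.index_group_handle
    ORDER BY rough_benefit_score DESC;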
I am in charge of about 100+ documents (Word documents, not source code) that need revision by different people in my department. Currently all the documents are in a shared folder where they retrieve, revise, and save them back into the folder. What I am doing now is looking up the "date modified" in the shared folder, opening the recently modified documents, and using the "Track Changes" function in MS Word to apply the changes. I find this a bit tedious. So would it be better and easier to commit these to a version control database? Basically I want to keep different versions of each file.

<hr>

What I have learned from the answers:

- Use Time Machine to save different versions (or Shadow Copy in Vista)
- There is a difference between text and binary documents when you use a version control app (I didn't know that)
- Diff won't work on binary files
- A notification system (i.e. email) for revisions is great
- Google Docs has a revision feature.

**Update**: I played around with the Google Docs revision feature and feel that it is almost right for me. I'm just a bit annoyed by the too-frequent versioning (autosaving). But what feels right for me doesn't mean it feels right for my department. Will they be okay with saving all these documents with Google?
@[PJ][1]: you know, I had considered that, but dismissed it because I thought I wouldn't be able to later chain on a *third* named scope, like so:

    Foo.ab(x, y).c(z)

But since `ab(x, y)` returns whatever `b(y)` would return, I think the chain would work. Way to make me rethink the obvious!

[1]: http://stackoverflow.com/questions/29011/is-there-a-way-to-combine-named-scopes-into-a-new-named-scope#30719
The proxy server you describe would deal mostly with string data, and I think it's reasonable to implement in C#. In your example ("if header x == y, do z"), the slowest part might actually be doing whatever 'z' is, and you'll have to do that work regardless of the language.
Why do you expect much higher performance from the C++ application? There is no inherent slowdown added by a C# application when you are doing it right (not too many dropped references, frequent object creations/drops per call, etc.). The only time a C++ application really outperforms an equivalent C# application is when you can do (very) low-level operations, e.g. casting raw memory pointers, inline assembler, etc. The C++ compiler may be better at creating fast code, but mostly this is wasted in most applications. If you really do have a part of your application that must be blindingly fast, try writing a C call for that hot spot. Only if most of the system behaves too slowly should you consider writing it in C/C++. But there are many pitfalls that may kill your performance in your C++ code. (TL;DR: A C++ expert may create 'faster' code than a C# expert, but a mediocre C++ programmer may create slower code than a mediocre C# one.)
[Link To Resource][1]

[1]: http://blogs.msdn.com/sharepoint/archive/2007/07/25/scaling-large-lists.aspx
How do you build a ratings implementation?
We have a need for a "rating" system in a project we are working on, similar to the one on SO. However, in ours there are multiple entities that need to be "tagged" with a vote up (only up, never down, like an increment). Sometimes we will need to show all of the entities in order of what is rated highest, regardless of entity type, basically mixing the result sets, I guess. What data structures / algorithms do you use to implement this so that it is flexible and still scalable?
|algorithm|architecture|database-design|data-structures|
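One common shape for this (a sketch only; the table and column names are invented for illustration) is a single polymorphic votes table keyed by entity type and id, so "top rated across all types" is one query:

    -- Hypothetical schema: one row per up-vote, keyed by (type, id, voter).
    CREATE TABLE votes (
        entity_type VARCHAR(32) NOT NULL,  -- e.g. 'question', 'answer', 'comment'
        entity_id   INT         NOT NULL,
        voter_id    INT         NOT NULL,
        voted_at    DATETIME    NOT NULL,
        PRIMARY KEY (entity_type, entity_id, voter_id)  -- one up-vote per user
    );

    -- Top entities of any type, mixed together:
    SELECT entity_type, entity_id, COUNT(*) AS score
    FROM votes
    GROUP BY entity_type, entity_id
    ORDER BY score DESC;

At scale you would likely denormalize the count onto each entity (or a shared scores table) rather than re-aggregating on every read.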
I agree with Konrad, but I will add the `-C` option, which demangles the symbols. If it's a C++ library, it's more readable demangled.

    nm -gC yourLib.so
I think cmd.exe hangs if the parameters are incorrect. If the batch file executes correctly then I would just launch it directly like this instead (note that UseShellExecute must be false when you supply a UserName and Password):

    ProcessStartInfo psi = new ProcessStartInfo();
    Process p = new Process();
    psi.WindowStyle = ProcessWindowStyle.Hidden;
    psi.WorkingDirectory = @"c:\build";
    psi.FileName = @"C:\build\build.cmd";
    psi.UseShellExecute = false;  // required when specifying credentials
    psi.UserName = "builder";
    psi.Password = password;
    p.StartInfo = psi;
    p.Start();

Also it could be that cmd.exe just can't find build.cmd, so why not give the full path to the file?
How to Format Numbers in WinForms 1.1 DataGrid?
|winforms|
Is there a simple way to format numbers in a WinForms 1.1 DataGrid? The Format property of the DataGridTextBoxColumn seems to be completely ignored. I know there is a [solution](http://support.microsoft.com/kb/318581) that involves subclassing a column control, and it's fairly simple, but I was hoping there might be some trick to making the Format property just work.

Thanks
Mike
ValidationRule To Enforce Unique Name
|wpf|validation|data-binding|
Another note on exception handling performance: simple tests don't take caching into account. The try-code and the catch-code are both so small that everything fits in the instruction and data caches. But compilers may try to move the catch-code far away from the try-code, which reduces the amount of code that normally has to be kept in cache, thus enhancing performance. If you compare exception handling to traditional C-style return-value checking, this caching effect should be taken into account as well (it is usually ignored in discussions).

Carl
I know you mentioned that you don't want to, but I actually think that using the `disabled` attribute is a better solution:

    <select multiple="multiple">
      <option value="volvo" selected="selected" disabled="disabled">Volvo</option>
      <option value="saab" disabled="disabled">Saab</option>
      <option value="opel" disabled="disabled">Opel</option>
      <option value="audi" disabled="disabled">Audi</option>
    </select>

If necessary, you can always give the `select` a `class` and style it with CSS. This solution will work in all browsers regardless of scripting capabilities.
The [ListView][1] control introduced with Framework 3.5 might be a little bit better solution. Your markup would look like this:

    <asp:ListView runat="server" ID="ListView1" DataSourceID="SqlDataSource1">
        <LayoutTemplate>
            <table runat="server" id="table1">
                <tr runat="server" id="itemPlaceholder"></tr>
            </table>
        </LayoutTemplate>
        <ItemTemplate>
            <tr runat="server">
                <td runat="server">
                    <asp:Label ID="NameLabel" runat="server" Text='<%# Eval("Name") %>' />
                </td>
            </tr>
        </ItemTemplate>
    </asp:ListView>

[1]: http://msdn.microsoft.com/en-us/library/bb398790.aspx
$(".container .component").each(function() { if ($(".container", this).css('width') === "auto") $(".container", this).css('border', '1px solid #f00'); });
You will also need to have an asp:ScriptManager control on every page that you want to use AJAX controls on. You should be able to just drag the ScriptManager over from your toolbox once the toolkit is installed following Zack's instructions.
$(".container .component").each(function() { $(".container", this).each(function() { if($(this).css('width') == 'auto') { $(this).css('border', '1px solid #f00'); } }); }); Similar to the other answer but since components can also have multiple containers, also needs the .each() check in here too for the width.
Read ScottGu's post about [VS 2008 Multi-Targeting Support][1]:

> One of the big changes we are making starting with the VS 2008 release is to support what we call "Multi-Targeting" - which means that Visual Studio will now support targeting multiple versions of the .NET Framework, and developers will be able to start taking advantage of the new features Visual Studio provides without having to always upgrade their existing projects and deployed applications to use a new version of the .NET Framework library.
>
> Now when you open an existing project or create a new one with VS 2008, you can pick which version of the .NET Framework to work with - and the IDE will update its compilers and feature-set to match this. Among other things, this means that features, controls, projects, item-templates, and assembly references that don't work with that version of the framework will be hidden, and when you build your application you'll be able to take the compiled output and copy it onto a machine that only has an older version of the .NET Framework installed, and you'll know that the application will work.

That way you can use VS 2008 to develop .NET 2.0 projects that will work on .NET 2.0, 3.0, and 3.5.

[1]: http://weblogs.asp.net/scottgu/archive/2007/06/20/vs-2008-multi-targeting-support.aspx
In my experience, the design and implementation have much more to do with performance than the choice of language/framework (however, the usual caveats apply: e.g., don't write a device driver in C# or Java). I wouldn't think twice about writing the type of program you describe in a managed language (be it Java, C#, etc.). These days, the performance gains you get from using a lower-level language (in terms of closeness to hardware) are often easily offset by the runtime abilities of a managed environment. Of course this is coming from a C#/Python developer, so I'm not exactly unbiased...
Options you have:

- Write a script that will wipe out the database before you start unit tests, then populate the db with a predefined set of data and run the tests. You can also do that before every test; it'll be slow, but less error prone.

- Inject the database. (Example in pseudo-Java, but applies to all OO languages.)

        class Database {
            public Result query(String query) { /* ... real db here ... */ }
        }

        class MockDatabase extends Database {
            public Result query(String query) {
                return "mock result";
            }
        }

        class ObjectThatUsesDB {
            public ObjectThatUsesDB(Database db) {
                this.database = db;
            }
        }

    Now in production you use the normal database, and for all tests you just inject the mock database that you can create ad hoc.

- Do not use the DB at all throughout most of the code (that's a bad practice anyway). Create a "database" object that, instead of returning raw results, will return normal objects (i.e. will return `User` instead of a tuple `{name: "marcin", password: "blah"}`). Write all your tests with ad hoc constructed *real* objects, and write one big test that depends on a database and makes sure this conversion works OK.

Of course these approaches are not mutually exclusive, and you can mix and match them as you need.
I always start with the most readable version I can think of. If performance is a problem, I refactor. If the readable version makes it hard to generalize, I refactor. The key is to have good tests so that refactoring is easy. I view readability as the #1 most important issue in code, though working correctly is a close second.
Choose readability over performance unless you can prove that you need the performance.
Understanding Interfaces
|interface|
I have a class method that returns a list of employees that I can iterate through. What's the best way to return the list? Typically I just return an ArrayList. However, as I understand it, interfaces are better suited for this type of action. Which would be the best interface to use? Also, why is it better to return an interface rather than the implementation (say, an ArrayList object)? It just seems like a lot more work to me.
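For illustration, returning the interface type keeps callers decoupled from the concrete collection, so the method can later switch to a LinkedList or an unmodifiable wrapper without breaking anyone. A minimal sketch (`Employee` here is a stand-in for the real class):

    import java.util.ArrayList;
    import java.util.List;

    class Employee { }  // stand-in for your real class

    public class EmployeeDirectory {
        // Callers depend only on the List interface, not on ArrayList.
        public List<Employee> getEmployees() {
            List<Employee> employees = new ArrayList<Employee>();
            // ... populate from your data source ...
            return employees;
        }
    }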
I would say that you should only sacrifice readability for performance if there's a proven performance problem that's significant. Of course "significant" is the catch there, and what's significant and what isn't should be specific to the code you're working on.
Readability is most important. With modern computers, only the most intensive routines of the most demanding applications need to worry too much about performance.
Actually, with group policies, there's the ZAP file alternative, but I would recommend regardless that you learn MSI. It's not that hard, and very flexible.
@Chris.Lively You know you can pass a message in your exception, or even the "status codes". You are reinventing the wheel here.
@Michiel de Mare: Your Ruby 1.9 example can be shortened a bit further:

    weights.zip(data).map(&:*).reduce(:+)

Also note that in Ruby 1.8, if you require ActiveSupport (from Rails) you can use:

    weights.zip(data).map(&:*).reduce(&:+)
If you're running SQL 2005 you could do this in a CLR integration assembly and use the FTP classes in the System.Net namespace to build a simple FTP client. You'd benefit from being able to trap and handle exceptions and reduce the security risk of having to use xp_cmdshell. Just some thoughts.
Java autoboxing/unboxing doesn't go to the extent of allowing you to dereference a primitive, so your compiler prevents it. Your compiler still knows `myInt` as a primitive. There's a paper about this issue at [jcp.org][1]. Autoboxing is mainly useful during assignment or parameter passing, allowing you to pass a primitive as an object (or vice versa), or assign a primitive to an object (or vice versa). So unfortunately, you would have to do it like this (kudos Patrick, I switched to your way):

    Integer.toString(myInt);

[1]: http://jcp.org/aboutJava/communityprocess/jsr/tiger/autoboxing.html
Why not use a map of primitives (triangles, squares), distribute the starting points for the countries (the "capitals"), and then randomly expand each country by adding a random adjacent primitive to it? A sketch of that growth loop follows below.
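A minimal sketch of that idea on a square grid (Python; all names and dimensions are invented for illustration): each country claims one random unclaimed neighbor per round until the map is full.

    import random

    WIDTH, HEIGHT, COUNTRIES = 20, 12, 5

    # owner[y][x] is None until some country claims the cell
    owner = [[None] * WIDTH for _ in range(HEIGHT)]

    # Drop the "capitals" at random distinct cells
    capitals = random.sample([(x, y) for x in range(WIDTH) for y in range(HEIGHT)], COUNTRIES)
    frontiers = {c: [capitals[c]] for c in range(COUNTRIES)}
    for c, (x, y) in enumerate(capitals):
        owner[y][x] = c

    claimed = COUNTRIES
    while claimed < WIDTH * HEIGHT:
        for c in range(COUNTRIES):
            random.shuffle(frontiers[c])
            while frontiers[c]:
                x, y = frontiers[c][-1]
                # unclaimed orthogonal neighbors of this frontier cell
                free = [(nx, ny) for nx, ny in ((x+1, y), (x-1, y), (x, y+1), (x, y-1))
                        if 0 <= nx < WIDTH and 0 <= ny < HEIGHT and owner[ny][nx] is None]
                if free:
                    nx, ny = random.choice(free)
                    owner[ny][nx] = c          # claim one neighbor this round
                    frontiers[c].append((nx, ny))
                    claimed += 1
                    break
                frontiers[c].pop()             # fully surrounded, drop from frontier

The same loop works on a triangle mesh; only the neighbor lookup changes.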
[Expression Engine][1] is fantastic. It's free to download and try, but you must purchase a license if you are making a profit with it.

[1]: http://expressionengine.com/
The best reference I've seen on them is [Computational Geometry: Algorithms and Applications][1], which covers Voronoi diagrams, Delaunay triangulations (similar to Voronoi diagrams, and each can be converted into the other), and other similar data structures. They talk about all the data structures you need but don't give you the code necessary to implement them (which may be a good exercise). In terms of code, an Amazon search shows the book [Computational Geometry in C][2], which presumably comes with the code (although since you're stuck in C, you might as well get the other one and implement it in whatever language you want). I don't have any experience with this book, only the first. Sorry to have only books to recommend! The only decent online resources I've seen on them are the two [Wikipedia][3] [articles][4], which don't really tell you implementation details. [This link][5] may be helpful though.

[1]: http://www.amazon.com/Computational-Geometry-Applications-Mark-Berg/dp/3540779736/
[2]: http://www.amazon.com/Computational-Geometry-Cambridge-Theoretical-Computer/dp/0521649765/
[3]: http://en.wikipedia.org/wiki/Voronoi_diagram
[4]: http://en.wikipedia.org/wiki/Delaunay_triangulation
[5]: http://www.cgal.org/Manual/3.3/doc_html/cgal_manual/packages.html#Part:VoronoiDiagrams
What I would like to do is create a clean virtual machine image as the output of a build of an application. So a new virtual machine would be created (from a template is fine, with the OS and some base software installed), a new web site would be created in IIS, the web app build output would be copied to a location on the virtual machine hard disk, IIS would be configured correctly, and the VM would start up and run. I know there are MSBuild tasks to script all the administrative actions in IIS, but how do you script all the actions with virtual machines? Specifically: creating a new virtual machine from a template, naming it uniquely, starting it, configuring it, etc. I was wondering if anyone has successfully implemented any VM scripting as part of a build process.

Update: I assume that with Hyper-V there is a different set of libraries/APIs to script virtual machines. Has anyone played around with this? And does anyone have real practical experience of doing something like this?
You use the [Haversine formula][1].

[1]: http://en.wikipedia.org/wiki/Haversine_formula
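For reference, a small sketch of the formula in code (Python; the 6371 km mean Earth radius is an approximation):

    import math

    def haversine_km(lat1, lon1, lat2, lon2):
        # Great-circle distance between two (lat, lon) points, in kilometers.
        lat1, lon1, lat2, lon2 = map(math.radians, (lat1, lon1, lat2, lon2))
        dlat, dlon = lat2 - lat1, lon2 - lon1
        a = math.sin(dlat / 2) ** 2 + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2
        return 2 * 6371 * math.asin(math.sqrt(a))

    # Roughly London to Paris:
    print(haversine_km(51.5074, -0.1278, 48.8566, 2.3522))  # ~343 km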
Editing the web.config or updating a DLL in the bin folder just recycles the AppDomain for that application, not the whole worker process or pool.
IISReset restarts the entire webserver (including all associated sites). If you're just looking to reset a single ASP.NET website, you should just recycle that AppDomain. The most common way to reset an ASP.NET website is to edit the web.config file, but you can also create an admin page with the following:

    public partial class Recycle : System.Web.UI.Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            HttpRuntime.UnloadAppDomain();
        }
    }

Here's a blog post I wrote with more info: [Avoid IISRESET in ASP.NET Applications][1]

[1]: http://weblogs.asp.net/jgalloway/archive/2006/06/01/Avoid-IISRESET-in-ASP.NET-applications-_2800_added-bonus_3A00_-ASPRESET_2900_.aspx
It operates on the whole IIS process tree, as opposed to just your application pools.

    C:\>iisreset /?

    IISRESET.EXE (c) Microsoft Corp. 1998-1999

    Usage:
    iisreset [computername]

        /RESTART            Stop and then restart all Internet services.
        /START              Start all Internet services.
        /STOP               Stop all Internet services.
        /REBOOT             Reboot the computer.
        /REBOOTONERROR      Reboot the computer if an error occurs when starting,
                            stopping, or restarting Internet services.
        /NOFORCE            Do not forcefully terminate Internet services if
                            attempting to stop them gracefully fails.
        /TIMEOUT:val        Specify the timeout value (in seconds) to wait for a
                            successful stop of Internet services. On expiration of
                            this timeout the computer can be rebooted if the
                            /REBOOTONERROR parameter is specified. The default value
                            is 20s for restart, 60s for stop, and 0s for reboot.
        /STATUS             Display the status of all Internet services.
        /ENABLE             Enable restarting of Internet Services on the local
                            system.
        /DISABLE            Disable restarting of Internet Services on the local
                            system.
I've run into this before also. There is no built-in aggregate function to concatenate strings. It seems like this would be needed all the time, but it's just not part of the default set. I Googled and found [the same example][1]:

    CREATE AGGREGATE textcat_all(
      basetype = text,
      sfunc = textcat,
      stype = text,
      initcond = ''
    );

[Here is the CREATE AGGREGATE documentation.][2]

In order to get the ", " inserted in between them without having it at the end, you might want to make your own concatenation function and substitute it for the "textcat" above. Here is one I put together but haven't tested:

    CREATE FUNCTION commacat(acc varchar, instr varchar) RETURNS varchar AS $$
      BEGIN
        IF acc IS NULL OR acc = '' THEN
          RETURN instr;
        ELSE
          RETURN acc || ', ' || instr;
        END IF;
      END;
    $$ LANGUAGE plpgsql;

[1]: http://archives.postgresql.org/pgsql-novice/2003-09/msg00177.php
[2]: http://www.postgresql.org/docs/8.3/static/sql-createaggregate.html
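As a usage sketch (the `employees` table here is hypothetical), the custom aggregate is then called like any built-in one:

    -- Hypothetical table, purely for illustration: employees(department text, name text)
    SELECT department, textcat_all(name)
    FROM employees
    GROUP BY department;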
I'm not sure if this is correct, but I'd try to compile it for the lowest version; the higher versions should be able to run the lower versions' exes.
@Jack & @17 of 26: good point, but the end user will be expecting the select box to be disabled, so that confusion shouldn't be an issue. I should have been clearer about why I couldn't just disable the control. The application that will be using this needs to disable the selection of the options, and there is a requirement that the "locked" control still maintain the look and feel of normal form controls.
This isn't really the same, but you might want to look at something like [JChronic][1], which can do natural language processing on dates. So the input date could be something like "tomorrow" or "two weeks from next Tuesday". This may not help at all for your application, but then again, it might.

[1]: https://jchronic.dev.java.net/
Alongside multi-targeting, the frameworks are backwards compatible, so something compiled for 1.0 will run on 1.1 and 2.0. Something compiled on 1.1 will run on 2.0... etc.
I know [@John Boker](http://stackoverflow.com/questions/43939/targeting-multiple-versions-of-net-framework#43946) is correct when it comes to .Net class libraries. You can compile a class library against .Net 1.1 and then use it in a .Net 2.0 or higher project. I suspect the same is also true for executables.
With 2005 & 2008, yes (on CLR 2.0). With 2003, no, because it compiles down to CLR 1.1. You could theoretically write some code using `#if (DOTNET35)` and such so that you don't use features outside the compiler's knowledge, and then run the desired compiler on the app... I question the usefulness of this, though.
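To make that concrete, a sketch of the conditional-compilation approach (the `DOTNET35` symbol is not predefined; you'd define it yourself, e.g. `csc /define:DOTNET35`):

    using System;
    #if DOTNET35
    using System.Linq;
    #endif

    class Demo
    {
        static void Main()
        {
            string[] items = { "a", "b", "c" };
    #if DOTNET35
            // C# 3 / .NET 3.5 compiler path: LINQ is available.
            Console.WriteLine(items.FirstOrDefault());
    #else
            // Path for older compilers: plain indexing only.
            Console.WriteLine(items.Length > 0 ? items[0] : null);
    #endif
        }
    }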
Changing the default title of confirm() in javascript
|javascript|
Is it possible to modify the title of the message box the confirm() function opens in JavaScript? I could create a modal popup box, but I would like to do this as minimalistically as possible. I would like to do something like this:

    confirm("This is the content of the message box", "Modified title");

The default title in IE is "Windows Internet Explorer" and in Firefox it's "[Javascript-program]". Not very informative. Though I can understand, from a browser security standpoint, why you shouldn't be able to do this.
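For what it's worth, the built-in dialog's title isn't scriptable, so the usual workaround is a small DIY modal. A minimal sketch (all styling and names invented for illustration; note it's asynchronous, unlike the blocking built-in):

    // Minimal stand-in for confirm() with a custom title.
    function customConfirm(message, title, onResult) {
        var box = document.createElement('div');
        box.style.cssText = 'position:fixed;top:30%;left:50%;margin-left:-150px;' +
                            'width:300px;padding:10px;background:#fff;border:1px solid #888;';
        box.innerHTML = '<b>' + title + '</b><p>' + message + '</p>';
        var ok = document.createElement('button');
        ok.innerHTML = 'OK';
        var cancel = document.createElement('button');
        cancel.innerHTML = 'Cancel';
        ok.onclick = function () { document.body.removeChild(box); onResult(true); };
        cancel.onclick = function () { document.body.removeChild(box); onResult(false); };
        box.appendChild(ok);
        box.appendChild(cancel);
        document.body.appendChild(box);
    }

    // Usage:
    customConfirm('Delete this item?', 'My App', function (yes) {
        if (yes) { /* ... */ }
    });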
I'm unclear as to what your actual problem is, so I'll throw out a few thoughts.

- Do you want restartable copies (with such small file sizes, that doesn't seem like it'd be that big of a deal)? If so, look at [CopyFileEx with COPY_FILE_RESTARTABLE][1]
- Do you want verifiable copies? Sounds like you already have that by verifying hashes.
- Do you want better performance? It's going to be tough, as it sounds like you can't run anything on the server. Otherwise, [TransmitFile][2] may help.
- Do you just want a fire-and-forget operation? I suppose shelling out to robocopy, or [TeraCopy][3] or something, would work - but it seems a bit hacky to me.
- Do you want to know when the network comes back? [IsNetworkAlive][4] has your answer.

Based on what I know so far, I think the following pseudo-code would be my approach:

    sourceFile = Compress("*.*");
    destFile = "X:\files.zip";

    int copyFlags = COPY_FILE_FAIL_IF_EXISTS | COPY_FILE_RESTARTABLE;
    while (CopyFileEx(sourceFile, destFile, null, null, false, copyFlags) == 0) {
        do {
            // optionally, increment a failed counter to break out at some point
            Sleep(1000);
        } while (!IsNetworkAlive(NETWORK_ALIVE_LAN));
    }

Compressing the files first saves you the tracking of which files you've successfully copied and which you need to restart. It should also make the copy go faster (smaller total file size, and larger single file size), at the expense of some CPU power on both sides. A simple batch file can decompress it on the server side.

[1]: http://msdn.microsoft.com/en-us/library/aa363852.aspx
[2]: http://msdn.microsoft.com/en-us/library/ms740565%28VS.85%29.aspx
[3]: http://www.codesector.com/teracopy.php
[4]: http://msdn.microsoft.com/en-us/library/aa377522%28VS.85%29.aspx
Eric, you are dead on. For any really scalable / easily maintained / robust application, the only real answer is to dispense with all the garbage and stick to the basics. I've followed a similar trajectory with my career and have come to the same conclusions. Of course, we're considered heretics and looked at funny. But my stuff works, and works well. Every line of code should be looked at with suspicion.
There's also [gitx](http://github.com/pieter/gitx/tree/master); it's progressing well and under active development (multiple commits per day).
I've used singletons a bunch of times in conjunction with [Spring][1] and didn't consider it a crutch or lazy. What this pattern allowed me to do was create a single class for a bunch of configuration-type values and then share the single (non-mutable) instance of that specific configuration between several users of my web application. In my case, the singleton contained client configuration criteria - CSS file location, DB connection criteria, feature sets, etc. - specific to that client. These classes were instantiated and accessed through Spring and shared by users with the same configuration (i.e. two users from the same company). (I know there's a name for this type of application but it's escaping me.) I feel it would've been wasteful to create (then garbage collect) new instances of these "constant" objects for each user of the app.

[1]: http://www.springframework.org/
Two questions to think about:

1. How many columns could be nominated for the query?
2. Does the data change frequently? A lot of it?

If you have a *small* number of candidate columns, and the data doesn't change *a lot*, then you might want to consider adding a permanent index on any or even all candidate columns.

"*Blasphemy!*", I hear. Most sources tell you to "never" index every column of a table, but that advice is rooted in the generic assumption that tables are modified frequently. You will pay a price in additional storage, as well as a performance hit when the data changes.

How small is *small* and how much is *a lot*, and is the tradeoff worth it? There is no way to tell a priori, because "too slow" is usually a subjective measurement. You will have to try it, measure the size of your indexes, and then the effect they have on the searches. You will have to balance the costs against the increase in satisfaction of your customers.

**[Added]** Oh, one more thing: temporary indexes are not only physically slower than a table scan, but they would destroy your concurrency. Re-indexing a table usually (always?) requires a full table lock, so in effect only one user search could be done at a time.

Good luck.
According to the *Pro\*C/C++ Programmer's Guide* (chapter 5, "Advanced Topics"), Pro*C silently ignores a number of preprocessor directives including #error and #pragma, but sadly not #warning. Since your warning directives are included in a header file, you might be able to use the ORA_PROC macro:

    #ifndef ORA_PROC
    #include <irrelevant.h>
    #endif

For some reason, Pro*C errors out if you try to hide a straight #warning that way, however.
In all my life, I have had maybe one application where I had to put an assembly in the GAC, simply because the assemblies were part of a framework that a number of applications would use, and it seemed right to put them into the GAC.
I think one of the biggest advantages of using the GAC is that you can have multiple versions of the same assembly registered and available to your applications. Personally, I don't like how it restricts movement from machine to machine (I don't like having to, say, check out source on a new VPC and go through a bunch of steps to get it running because I have to register stuff in the GAC).
You can use the <a href="http://en.wikipedia.org/wiki/A*_search_algorithm">A*</a> algorithm to find the shortest path between those two cities, and that way you'll have the distance; a sketch follows below.
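For reference, a compact sketch of A* over a small road graph (Python; the graph, coordinates, and straight-line-distance heuristic are invented for illustration):

    import heapq
    import math

    def a_star(graph, coords, start, goal):
        # graph: {city: {neighbor: road_distance}}, coords: {city: (x, y)}
        def h(city):  # straight-line distance to goal, an admissible heuristic
            (x1, y1), (x2, y2) = coords[city], coords[goal]
            return math.hypot(x2 - x1, y2 - y1)

        open_set = [(h(start), 0.0, start)]   # entries are (f = g + h, g, city)
        best_g = {start: 0.0}
        while open_set:
            f, g, city = heapq.heappop(open_set)
            if city == goal:
                return g                      # shortest road distance found
            if g > best_g.get(city, float('inf')):
                continue                      # stale queue entry, skip
            for nxt, d in graph[city].items():
                ng = g + d
                if ng < best_g.get(nxt, float('inf')):
                    best_g[nxt] = ng
                    heapq.heappush(open_set, (ng + h(nxt), ng, nxt))
        return None

    # Tiny made-up example:
    graph = {'A': {'B': 5, 'C': 9}, 'B': {'A': 5, 'C': 3}, 'C': {'A': 9, 'B': 3}}
    coords = {'A': (0, 0), 'B': (4, 3), 'C': (7, 3)}
    print(a_star(graph, coords, 'A', 'C'))   # 8 (A -> B -> C beats the direct 9)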
If you need a code example, I think I have one I could dig up at home, but like many of the previous answers, you'll need a long/lat DB to do the calculation.
The GAC runs with Full Trust and can be used by applications outside of your web app. For example, timer jobs in SharePoint HAVE to be in the GAC because the sptimer service is a separate process. The "Full Trust" part is also a possible source of security issues. Sure, you can work with Code Access Security, but I do not see too many assemblies using CAS, unfortunately :( The /bin folder can be locked down to Medium trust, which is normally fine. Daniel Larson has a [post on CAS as well][1] which details the differences a bit more.

[1]: http://daniellarson.spaces.live.com/blog/cns!D3543C5837291E93!937.entry
Here's what TechNet has to say about [iisreset][1]:

> You might need to restart Internet Information Services (IIS) before certain configuration changes take effect or when applications become unavailable. Restarting IIS is the same as first stopping IIS, and then starting it again, except it is accomplished with a single command.

[1]: http://www.microsoft.com/technet/prodtechnol/WindowsServer2003/Library/IIS/003ed2fe-6339-4919-b577-6aa965994a9b.mspx?mfr=true
I know this is my own question, but I came across this text editor, [Sublime Text][1], and thought it was pretty sweet. There are a few features in it that I have never seen before. It has multiple-line select (lines that are not continuous) and a bird's-eye-view navigation. It's a little pricey, but I am having fun playing with the free version.

[1]: http://www.sublimetext.com/
Reporting Systems for ASP.NET
|asp.net|report|
What are the best (open source or commercial) reporting tools for ASP.NET, similar to Crystal Reports for ASP.NET?
My solution, just for kicks (this was a fun exercise): no cursors, no iterations, but I do have a helper field.

    -- Setup test table
    DECLARE @names TABLE (
        id INT IDENTITY(1,1),
        name NVARCHAR(25) NOT NULL,
        grp UNIQUEIDENTIFIER NULL
    )

    INSERT @names (name)
    SELECT 'Harry Johns' UNION ALL
    SELECT 'Adam Taylor' UNION ALL
    SELECT 'John Smith' UNION ALL
    SELECT 'John Smith' UNION ALL
    SELECT 'Bill Manning' UNION ALL
    SELECT 'Bill Manning' UNION ALL
    SELECT 'Bill Manning' UNION ALL
    SELECT 'John Smith' UNION ALL
    SELECT 'Bill Manning'

    -- Set the first id's group to a newid()
    UPDATE n
    SET grp = newid()
    FROM @names n
    WHERE n.id = (SELECT MIN(id) FROM @names)

    -- Set the group to a newid() if the name does not equal the previous
    UPDATE n
    SET grp = newid()
    FROM @names n
    INNER JOIN @names b
        ON (n.ID - 1) = b.ID
        AND ISNULL(b.Name, '') <> n.Name

    -- Set groups that are null to the previous group
    -- Keep on doing this until all groups have been set
    WHILE (EXISTS(SELECT 1 FROM @names WHERE grp IS NULL))
    BEGIN
        UPDATE n
        SET grp = b.grp
        FROM @names n
        INNER JOIN @names b
            ON (n.ID - 1) = b.ID
            AND n.grp IS NULL
    END

    -- Final output
    SELECT MIN(id) AS id_start,
           MAX(id) AS id_end,
           name,
           count(1) AS consecutive
    FROM @names
    GROUP BY grp, name
    ORDER BY id_start

    /* Results:

    id_start  id_end  name          consecutive
    1         1       Harry Johns   1
    2         2       Adam Taylor   1
    3         4       John Smith    2
    5         7       Bill Manning  3
    8         8       John Smith    1
    9         9       Bill Manning  1
    */
I don't know of any free SMS services; you usually buy bulk SMSes and use an API to send them out. [Whitepages.com][1] has an API that will allow developers to reverse look up a phone number. It reports the carrier for a mobile number; however, a lot of the time it's some carrier that doesn't exist anymore, like Powertel or something.

[1]: http://developer.whitepages.com/docs/
Where I work we've been using <http://www.clickatell.com> for sending out SMS - it looks like it's about 6 or 7 cents a message. They just take HTTP POST requests to send out a message. I don't know if you'll be able to find any good free gateways. We used to send out emails, but found they were unreliable.
How Do You Determine The PID of the Parent of a Process
|erlang|
I have a process in Erlang that is supposed to do something immediately after spawn, then send the result back to the parent when it is finished. How do I figure out the PID of the process that spawned it?
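For context, the usual idiom (a sketch only; `do_work/0` is a stand-in for the real work) is that the parent captures `self()` before spawning and passes it to the child, since the child has no built-in way to look up its parent:

    do_work() -> 1 + 1.  %% stand-in for the real work

    start() ->
        Parent = self(),                   %% evaluated in the parent process
        Child = spawn(fun() ->
            Result = do_work(),
            Parent ! {self(), Result}      %% child sends the result back
        end),
        receive
            {Child, Result} -> Result
        end.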
Supporting [Angus][1], I can vouch for [http://www.clickatell.com][2]. It was used at a company I used to work at. It was a very easy solution to set up and use, and it worked great. You just need to anticipate how many messages you intend to send out and bulk order messages. They're pretty cheap, overall.

[1]: http://stackoverflow.com/questions/4269/programmatic-sms#4285
[2]: http://www.clickatell.com
I would say mbUnit also. I like being able to run a single test many times just by specifying the inputs and expected result right above the test function. That's a horrible description of what I mean, so [here is a link that shows you what I mean.][1]

[1]: http://www.hanselman.com/blog/MbUnitUnitTestingOnCrack.aspx
LinqDataSource - Can you limit the amount of records returned?
|asp.net|c#|linq|linq-to-sql|
I'd like to use a LinqDataSource control on a page and limit the amount of records returned. I know that if I use code-behind I could do something like this:

    IEnumerable<int> values = Enumerable.Range(0, 10);
    IEnumerable<int> take3 = values.Take(3);

Does anyone know if something like this is possible with a LinqDataSource control?

**[Update]**

I'm going to use the LinqDataSource with the ListView control, *not* a GridView or Repeater. The LinqDataSource wizard does not provide the ability to limit the number of records returned. The Advanced options only allow you to enable deletes, inserts, and updates.
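One possibility (a sketch; `MyDataContext` and `Values` are placeholder names): the LinqDataSource raises a Selecting event, and whatever you assign to `e.Result` there is what the bound control receives, so a `Take()` can be applied in code-behind:

    // Markup: <asp:LinqDataSource ID="LinqDataSource1" runat="server"
    //             OnSelecting="LinqDataSource1_Selecting" />
    protected void LinqDataSource1_Selecting(object sender, LinqDataSourceSelectEventArgs e)
    {
        var db = new MyDataContext();        // placeholder data context
        e.Result = db.Values.Take(3);        // only 3 records reach the ListView
    }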