Is UML practical? |
In college I've had numerous design and [UML][1] oriented courses, and I recognize that UML can be used to benefit a software project, especially [use-case][2] mapping, but is it really practical? I've done a few co-op work terms, and it appears that UML is not used heavily in the industry. Is it worth the time during a project to create UML diagrams? Also, I find that class diagrams are generally not useful, because it's just faster to look at the header file for a class. Specifically which diagrams are the most useful?
**Edit:** My experience is limited to small, under 10 developer projects.
[1]: http://en.wikipedia.org/wiki/Unified_Modeling_Language
[2]: http://en.wikipedia.org/wiki/Use_case |
|uml|diagram|class-design| |
You could consider inverting the problem: add the control to the repeater's definition, and then remove it if it is not needed. Not knowing the details of your app, this might be a tremendous waste of time, but it might just work out in the end. |
Just make sure you only have RTM versions and not Beta or RC versions installed. You'll have no end of pain if you don't cleanly remove the beta or RC versions before installing the RTM versions. |
@Daniel
Thanks for the reply, I also figured out how to do it with the minidom (I'm not sure of the difference between ElementTree and minidom):

    import sys
    from xml.dom.minidom import Document

    def make_xml():
        doc = Document()
        node = doc.createElement('foo')
        node.appendChild(doc.createTextNode('bar'))
        doc.appendChild(node)
        return doc

    if __name__ == '__main__':
        make_xml().writexml(sys.stdout)
I swear I tried this before posting my question... |
1. High-level discussions about purpose, scope, limitations of operating environment, size, etc
2. Audition a single paragraph description of the system, hammer it out
3. Mock up UI
4. Formalize known requirements
5. Now iterate between 3 and 4 with more and more functional prototypes and more specs with more details. Write tests as you go. Do this until you have functional software and a complete, objective, testable requirements spec.
That's the dream. The reality is usually after a couple iterations everybody goes head-down and codes until there's a month left to test. |
If I understand correctly, Perforce makes all files that are not checked out read-only. This is similar to the behavior under Microsoft TFS and VSS. Subversion on the other hand does not set read-only attributes. IMO, the Subversion method is easier because you don't have to bother with a source control client in order to modify files -- you go ahead and modify with reckless abandon and then compare what has changed on disk with the server when you are ready to check in.
When all files are read-only, I find myself constantly changing a file, attempting to save, discovering it is read-only, and then having to hop over to the source control client to check it out. It's not so bad if the source control client is integrated into your editor, but if you are storing things that are not source code under version control, this often isn't an option. |
In addition to other people's suggestions, I'd recommend either acquiring a copy of SICP, or [reading it online](http://mitpress.mit.edu/sicp/full-text/book/book.html). It's one of the few books that I've read that I feel _greatly_ increased my skill in designing software, particularly in creating good abstraction layers.
A book that is not _directly_ related to programming, but is also a good read for programmers (IMO) is [Concrete Mathematics](http://www.amazon.com/Concrete-Mathematics-Foundation-Computer-Science/dp/0201558025). Most, if not all of the topics in it are useful for programmers to know about, and it does a better job of explaining things than any other math book I've read to date. |
I'm a big fan of most titles by Robert C. Martin, especially [Agile Software Development: Principles, Patterns, and Practices][1] and [Clean Code: A Handbook of Agile Software Craftsmanship][2].
[1]: http://amazon.com/o/ASIN/0135974445
[2]: http://amazon.com/o/ASIN/0132350882 |
[XMLBeans](http://xmlbeans.apache.org/) works great if you have a schema for your XML. It creates Java objects for the schema and creates easy to use parse methods. |
You will want to use a geometric construction called a [Voronoi diagram](http://mathworld.wolfram.com/VoronoiDiagram.html). This divides up the plane into a number of areas, one for each point, that encompass all the points that are closest to each of your given points.
The code for the exact algorithms to create the Voronoi diagram and arrange the data structure lookups are too large to fit in this little edit box. :) |
@Linor: That's essentially what you would do after creating a Voronoi diagram. But instead of making a rectangular grid, you can choose dividing lines that closely match the lines of the Voronoi diagram (this way you will get fewer areas that cross dividing lines). If you recursively divide your Voronoi diagram in half along the best dividing line for each subdiagram, you can then do a tree search for each point you want to look up. This requires a bit of work up front but saves time later. Each lookup would be on the order of log N where N is the number of points. 16 comparisons is a lot better than 15,000! |
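The recursive spatial subdivision described above can be sketched with a small k-d tree -- a simpler stand-in for splitting along Voronoi dividing lines, but with the same O(log N) lookup behaviour. Everything below is an illustrative assumption, not the poster's exact algorithm:

```python
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def build_kdtree(points, depth=0):
    """Recursively split the point set along alternating axes."""
    if not points:
        return None
    axis = depth % 2
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return {
        "point": points[mid],
        "axis": axis,
        "left": build_kdtree(points[:mid], depth + 1),
        "right": build_kdtree(points[mid + 1:], depth + 1),
    }

def nearest(node, target, best=None):
    """Descend toward the target, backtracking into the far subtree
    only when the splitting plane is closer than the best match so far."""
    if node is None:
        return best
    point, axis = node["point"], node["axis"]
    if best is None or dist(target, point) < dist(target, best):
        best = point
    if target[axis] < point[axis]:
        near, far = node["left"], node["right"]
    else:
        near, far = node["right"], node["left"]
    best = nearest(near, target, best)
    if abs(target[axis] - point[axis]) < dist(target, best):
        best = nearest(far, target, best)
    return best
```

Building the tree is a one-time O(N log N) cost; each query then touches roughly log N nodes, which is where the "16 comparisons instead of 15,000" figure comes from.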
[Office 2007 MIME Types for IIS][1]
[1]: http://www.bram.us/2007/05/25/office-2007-mime-types-for-iis/ |
It is very simple with the Skip and Take extension methods.
var query = from i in ideas
select i;
    var pagedCollection = query.Skip(startIndex).Take(count); |
Here's a general description of a technique for calculating pi that I learned in high school.

I only share this because I think it is simple enough that anyone can remember it indefinitely, and it teaches the concept of "Monte Carlo" methods -- statistical methods that arrive, through random processes, at answers that don't immediately appear deducible.

Draw a square, and inscribe a quadrant (one quarter of a circle) inside it, with the radius equal to the side of the square, so it fills as much of the square as possible.

Now throw a dart at the square and record where it lands -- that is, choose a random point anywhere inside the square. Of course it landed inside the square, but is it inside the quadrant? Record this fact.

Repeat this process many times, and you will find the ratio of the number of points inside the quadrant to the total number thrown; call this ratio x.

Since the area of the square is r times r, you can deduce that the area of the quadrant is x times r times r (that is, x times r squared). Since the area of a quadrant is pi times r squared over four, x times 4 will give you pi.

This is not a quick method to use. But it's a nice example of a Monte Carlo method. And if you look around, you may find that many problems otherwise outside your computational skills can be solved by such methods. |
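The dart-throwing procedure translates almost directly into code; a minimal sketch (the sample count is arbitrary):

```python
import random

def estimate_pi(samples):
    """Throw `samples` random darts at the unit square and count
    how many land inside the inscribed quarter circle."""
    inside = 0
    for _ in range(samples):
        x, y = random.random(), random.random()
        if x * x + y * y <= 1.0:  # inside the quadrant of radius 1
            inside += 1
    return 4.0 * inside / samples  # the ratio x, times 4, approximates pi
```

As noted above, this is not quick: the error only shrinks like 1/sqrt(n), so each extra digit of pi costs roughly 100x more darts.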
I believe the easiest way would be to follow Jeff's own advice and have a look around existing code. Try looking at WordPress, Drupal, Joomla and other well-known PHP-based CMSes to see how their API hooks look and feel. This way you can even get ideas you may have not thought of previously to make things a little more robust.

A more direct answer would be to write general files that they would "include_once" into their file that would provide the functionality they need. This would be broken up into categories and NOT provided in one MASSIVE "hooks.php" file. Be careful though, because what ends up happening is that the files they include end up having more and more dependencies as functionality improves. Try to keep API dependencies low, i.e. fewer files for them to include. |
You could use an Observer pattern. A simple functional way to accomplish this:
    <?php

    /** Plugin system **/

    $listeners = array();

    /* Create an entry point for plugins */
    function hook() {
        global $listeners;

        $num_args = func_num_args();
        $args = func_get_args();

        if ($num_args < 2)
            trigger_error("Insufficient arguments", E_USER_ERROR);

        // Hook name should always be first argument
        $hook_name = array_shift($args);

        if (!isset($listeners[$hook_name]))
            return; // No plugins have registered this hook

        foreach ($listeners[$hook_name] as $func) {
            $args = $func($args);
        }

        return $args;
    }

    /* Attach a function to a hook */
    function add_listener($hook, $function_name) {
        global $listeners;
        $listeners[$hook][] = $function_name;
    }

    /////////////////////////

    /** Sample Plugin **/

    add_listener('a_b', 'my_plugin_func1');
    add_listener('str', 'my_plugin_func2');

    function my_plugin_func1($args) {
        return array(4, 5);
    }

    function my_plugin_func2($args) {
        return str_replace('sample', 'CRAZY', $args[0]);
    }

    /////////////////////////

    /** Sample Application **/

    $a = 1;
    $b = 2;

    list($a, $b) = hook('a_b', $a, $b);

    $str = "This is my sample application\n";
    $str .= "$a + $b = ".($a+$b)."\n";
    $str .= "$a * $b = ".($a*$b)."\n";

    $str = hook('str', $str);

    echo $str;

    ?>

**Output:**

    This is my CRAZY application
    4 + 5 = 9
    4 * 5 = 20
**Notes:**
For this example source code, you must declare all your plugins before the actual source code that you want to be extendable. I've included an example of how to handle single or multiple values being passed to the plugin. The hardest part of this is writing the actual documentation which lists what arguments get passed to each hook.
This is just one method of accomplishing a plugin system in PHP. There are better alternatives, I suggest you check out the WordPress Documentation for more information. |
You can use a UML static class diagram with << stereotype >> annotations, which is the kind of thing you would do in Rational Rose for using UML for things that aren't necessarily classes and methods, such as databases. |
It would be pretty easy to build this by setting up a sample database with an extremely small amount of dummy data, which would receive the query first. A couple of things will happen:
1. You might get a SQL syntax error, which would not load the database much since it's a small database.
2. You might get back a response which could clearly be shown to contain every row in one or more tables, which is probably not what they want.
3. Things which pass the above conditions are likely to be okay, so you can run them against the copy of the production database.
Assuming your schema doesn't change much and is not particularly weird, writing the above is likely the quickest solution to your problem.
|
Jeff built an [Ultimate Developer Rig][1] for Scott Hanselman a while back. You can check out his requirements and see if it matches closely to what you are looking for.
From what you've mentioned, an Intel Q9450, 4 or 8 GB of RAM and a couple of good-sized hard drives will suit you well. I would say there is no reason not to get Vista x64 at this point. The ability to utilize more than 3.2 GB of RAM is very important for a developer.
If you're in the more than two monitor club, you'll need two video cards as well.
Hope this helps!
[1]: http://www.hanselman.com/blog/TheCodingHorrorUltimateDeveloperRigThrowdownPart2.aspx |
It may not help if the problem is inside one of your controls - as you expect - but if the page is poorly designed and that's causing render to be slow, [YSlow][1] should help clean that up.
[1]:http://developer.yahoo.com/yslow/ |
One useful example comes from the developers of the db4o object-oriented database. There, WeakReferences are used as a kind of light cache: objects are kept in memory only as long as your application still references them, allowing you to put a real cache on top.
Another use would be in the implementation of weak event handlers. Currently, one big source of memory leaks in .NET applications is forgetting to remove event handlers. E.g.
public MyForm()
{
MyApplication.Foo += someHandler;
}
See the problem? In the above snippet, MyForm will be kept alive in memory for as long as MyApplication is alive in memory. Create 10 MyForms, close them all, and your 10 MyForms will still be in memory, kept alive by the event handler.
Enter WeakReference. You can build a weak event handler using WeakReferences so that someHandler is a weak event handler to MyApplication.Foo, thus fixing your memory leaks!
This isn't just theory. Dustin Campbell from the DidItWith.NET blog posted [an implementation of weak event handlers][1] using System.WeakReference.
[1]: http://diditwith.net/PermaLink,guid,aacdb8ae-7baa-4423-a953-c18c1c7940ab.aspx |
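The same trick is available outside .NET, which may make the idea easier to play with. Here is a hypothetical sketch of a weak event using Python's stdlib `weakref` module, where dead subscribers are dropped instead of leaking (class and method names are my own invention):

```python
import weakref

class Event:
    """An event that holds only weak references to its bound-method
    handlers, so subscribing does not keep the subscriber alive."""
    def __init__(self):
        self._handlers = []

    def subscribe(self, method):
        # WeakMethod works for bound methods, which a plain
        # weakref.ref would keep alive incorrectly or not at all.
        self._handlers.append(weakref.WeakMethod(method))

    def fire(self, *args):
        live = []
        for ref in self._handlers:
            method = ref()
            if method is not None:   # subscriber still alive?
                method(*args)
                live.append(ref)
        self._handlers = live        # prune dead handlers

class Subscriber:
    def __init__(self):
        self.calls = 0

    def on_event(self, *args):
        self.calls += 1
```

Once the last strong reference to a `Subscriber` goes away, firing the event silently drops its handler -- the analogue of the weak event handler fixing the MyForm leak above.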
If your people are using the mysql(1) program to run queries, you can use the [safe-updates][1] option (aka i-am-a-dummy) to get you part of what you need. Its name is somewhat misleading; it not only prevents UPDATE and DELETE without a WHERE (which you're not worried about), but also adds an implicit LIMIT 1000 to SELECT statements, and aborts SELECTs that have joins and are estimated to consider over 1,000,000 tuples --- perfect for discouraging Cartesian joins.
[1]: http://dev.mysql.com/doc/refman/5.1/en/mysql-tips.html |
Try:
SELECT 1 AS Tag,
0 AS Parent,
AccountNumber AS [Root!1!AccountNumber!element]
FROM Location.LocationMDAccount
WHERE LocationID = 'long-guid-here'
FOR XML EXPLICIT |
Normally, I'm strongly against taking the most expensive and hardest-to-scale part of your infrastructure (the database) and putting all load onto it. On the other hand: it greatly simplifies backup strategy, especially when you have multiple web servers and need to somehow keep the data synchronized.

Like most other things, it depends on the expected size and budget. |
[ASP.Net PropertyGrid][1]
[1]: http://www.codeplex.com/PropertyGrid "CodePlex" |
Second the recommendation on file paths. I've worked on a couple of projects that needed to manage large-ish asset collections, and any attempts to store things directly in the DB resulted in pain and frustration long-term.
The only real "pro" I can think of regarding storing them in the DB is the potential for easy security of individual image assets. If there are no file paths to use, and all images are streamed straight out of the DB, there's no danger of a user finding files they shouldn't have access to.
That seems like it would be better solved with an intermediary script pulling data from a web-inaccessible file store, though. So the DB storage isn't REALLY necessary. |
If this is web-based application then there could be advantages to storing the images on a third-party storage delivery network, such as Amazon's S3 or the Nirvanix platform. |
Something like this came to my mind
select @rownum:=@rownum+1 rownum, entries.*
from (select @rownum:=0) r, entries
where uid = ? and rownum % 150 = 0
I don't have MySQL at my hand but maybe this will help ... |
In my experience, sometimes the simplest solution is to **name the images according to the primary key**. So it's easy to find the image that belongs to a particular record, and vice versa. But at the same time you're not storing *anything* about the image in the database. |
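A minimal sketch of that convention (the directory name and file extension here are assumptions, not anything from the original answer):

```python
import os

IMAGE_DIR = "images"  # hypothetical asset directory

def image_path(record_id, ext="jpg"):
    """Derive an image's path from its record's primary key."""
    return os.path.join(IMAGE_DIR, f"{record_id}.{ext}")

def record_id_for(path):
    """Recover the primary key from an image path."""
    return int(os.path.splitext(os.path.basename(path))[0])
```

The mapping works in both directions, which is the whole point: given a record you can find its image, and given an image you can find its record, with no image metadata stored in the database at all.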
Well, if you don't want to do it using AJAX or any other way and just want a normal ASP.NET postback to happen, here is how you do it (without using any other libraries):
*It is a little tricky though... :)*
i. In your code file (assuming you are using C# and .NET 2.0 or later) add the following Interface to your Page Class to make it look like
    public partial class Default : System.Web.UI.Page, IPostBackEventHandler {}
ii. This should add (using Tab-Tab) this function to your code file:
    public void RaisePostBackEvent(string eventArgument) { }
iii. In your onclick event in Javascript write the following code:
    var pageId = '<%= Page.ClientID %>';
    __doPostBack(pageId, argumentString);
This will call the 'RaisePostBackEvent' method in your code file with the 'eventArgument' as the 'argumentString' you passed from the Javascript. Now, you can call any other event you like.
P.S.: That is 'underscore-underscore-doPostBack', and there should be no space in that sequence... Somehow the WMD editor does not allow me to write two underscores followed by a character! |
The key thing for logging is good planning. I would suggest that you look into the enterprise library exception and logging application block (http://msdn.microsoft.com/en-us/library/cc467894.aspx). There is a wee bit of a learning curve but it does work quite well. The approach I favour at the moment is to define 4 priority levels. 4=Unhandled exception (error in event log), 3=Handled exception (warning in event log), 2=Access an external resource such as a webservice, db or mainframe system (information in event log), 1=Verbose/anything else of interest (information in event log).
Using the application block it's then quite easy to tweak what level of priority you want to log. So in development you'd log everything but as you get a stable system in production, you'd probably only be interested in unhandled exceptions and possibly handled exceptions.
**Update**: For clarity, I would suggest you have logging in both your winform/wpf app and your webservices. In a web scenario, I've had problems in the past where it can be difficult to tie an error on the client back through to the app servers. Mainly because any error through webservices gets wrapped up as a SOAP exception. I can't remember off the top of my head, but I think if you use a custom exception handler (that is part of the enterprise library) you can add data onto exceptions such as the handlinginstance id of the exception from the app server. This makes it easier to tie up exceptions on a client back to your app box by using LogParser (http://www.microsoft.com/downloads/details.aspx?FamilyID=890cd06b-abf8-4c25-91b2-f8d975cf8c07&displaylang=en).
**Second Update**: I also like to give each different event a separate event id and to track that in a text file or spreadsheet under source control. Yes, it's a pain, but if you're lucky enough to have an IT team looking after your systems in production, I find they tend to expect different events to have different event ids. |
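The four-priority scheme maps naturally onto the severity levels of almost any logging framework. As a hypothetical illustration (using Python's stdlib `logging` rather than the Enterprise Library; the mapping itself is just the scheme described above):

```python
import logging

# Mapping of the four priorities described above onto stdlib levels
PRIORITY_LEVELS = {
    4: logging.ERROR,    # unhandled exception
    3: logging.WARNING,  # handled exception
    2: logging.INFO,     # external resource access (web service, db, mainframe)
    1: logging.DEBUG,    # verbose / anything else of interest
}

def log_event(logger, priority, message):
    """Log a message at the level corresponding to its priority."""
    logger.log(PRIORITY_LEVELS[priority], message)
```

Tuning what gets logged is then a one-line change: in development set the logger threshold to DEBUG (everything), and in production raise it to WARNING so only priorities 3 and 4 are recorded.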
G'day,
I've worked with both techniques and I would say that developing on the trunk and branching off stable points as releases is the best way to go.
Those people above who object saying that you'll have:
> - Constant build problems for daily builds
> - Productivity loss when a developer commits a problem for all
> other people on the project
have probably not used continuous integration techniques.
It's true that teams which don't perform several test builds during the day, say once every hour or so, leave themselves open to these problems, which will quickly strangle the pace of development.

Doing several test builds during the day quickly folds updates into the main code base so that others can use them, and also alerts you during the day if someone has broken the build, so that they can fix it before going home.
As pointed out, only finding out about a broken build when the nightly build for running the regression tests fails is sheer folly and will quickly slow things down.
Have a read of Martin Fowler's paper on [Continuous Integration][1]. We rolled our own such system for a major project (3,000 kSLOC) in about 2,000 lines of POSIX sh.
HTH
cheers,
Rob
[1]: http://martinfowler.com/articles/continuousIntegration.html |
Which version did you try? Perhaps you should give [Mono 2.0 preview](http://mono.ximian.com/monobuild/preview/download-preview/) a go; it might work for you (no, the Data* controls are not yet perfect, but they have improved greatly). From my experience GTK# controls on Windows are not that great either... |
It sounds like this.Opacity is a double value, and the compiler doesn't like you trying to cram a decimal value into it. |
@Jax: The `extern "C"` thing matters, very very much. If a header file doesn't have one, then (unless it's a C++-only header file), you would have to enclose your `#include` with it:
    #ifdef __cplusplus
    extern "C" {
    #endif

    #include <sys/socket.h>
    // include other similarly non-compliant header files

    #ifdef __cplusplus
    }
    #endif
Basically, anytime where a C++ program wants to link to C-based facilities, the `extern "C"` is vital. In practical terms, it means that the names used in external references will not be mangled, like normal C++ names would. [Reference.](http://www.parashift.com/c++-faq-lite/mixing-c-and-cpp.html) |
I believe you have answered your own question...
System.totalMemory gives you the total amount of memory being "used", not allocated. It is accurate that your application may only be using 20 MB, but it has 5 MB that is free for future allocations.
I'm not sure if the Adobe docs would shed light on the way that it manages memory... |
Have a DTS job (or a job that is started by a windows service) that runs at a given interval. Each time it is run, it gets information about the given table by using the system [INFORMATION_SCHEMA][1] tables, and records this data in the data repository. Compare the data returned regarding the structure of the table with the data returned the previous time. If it is different, then you know that the structure has changed.
Example query to return information regarding all of the columns in table ABC (ideally listing out just the columns from the INFORMATION_SCHEMA view that you want, instead of using `select *` like I do here):

    select * from INFORMATION_SCHEMA.COLUMNS where TABLE_NAME = 'ABC'
You would monitor different columns and INFORMATION_SCHEMA views depending on how exactly you define "changes to a table".
[1]: http://msdn.microsoft.com/en-us/library/ms186778.aspx |
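The polling job boils down to "snapshot, then diff". A rough sketch of that comparison in Python (DB-API style; the cursor, paramstyle, and choice of columns are all assumptions):

```python
def schema_snapshot(cursor, table):
    """Fetch a comparable description of a table's columns from
    INFORMATION_SCHEMA, ordered so snapshots can be compared directly."""
    cursor.execute(
        "select COLUMN_NAME, DATA_TYPE, IS_NULLABLE "
        "from INFORMATION_SCHEMA.COLUMNS where TABLE_NAME = %s "
        "order by ORDINAL_POSITION",
        (table,),
    )
    return cursor.fetchall()

def schema_changed(previous, current):
    """True if the structure differs from the previously recorded snapshot."""
    return previous != current
```

Each run of the scheduled job would take a fresh snapshot, compare it to the stored one, and record the new snapshot in the data repository if `schema_changed` reports a difference.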
One framework I'm considering would be for a blogging platform. Since just about any possible view of data you would want would be sorted by date, I was thinking about this structure:
One directory per content node:
    ./content/YYYYMMDDHHMMSS/
Subdirectories of each node including
    /tags
    /authors
    /comments
as well as simple text files in the node directory for pre- and post-rendered content and the like.
This would allow a simple PHP [glob()][1] call (and probably a reversal of the result array) to query on just about anything within the content structure:
    glob("content/*/tags/funny");
would return paths including all articles tagged "funny".
[1]: http://us3.php.net/glob |
Make sure to add the "--routines" parameter to mysqldump if you have any stored procs in your database so it backs them up too.
|
If the failover logic is in your application, you could write a status screen that shows which box you're connected to by writing to a var when the first connection attempt fails.
I think your best bet would be a ping daemon/cron job that checks the status of each box periodically and sends an email if one doesn't respond.
|
As far as I can tell, the main view of what makes a language "Object Oriented" is supporting the idea of grouping data, and methods that work on that data, which is generally achieved through classes, modules, inheritance, polymorphism, etc.
See [this discussion](http://c2.com/cgi/wiki?NobodyAgreesOnWhatOoIs) for an overview of what people think (thought?) Object-Orientation means.
As for the "archetypal" OO language - that is indeed Smalltalk, as Kristopher pointed out. |
I have seen [PSPad][1] used as a hex editor, but I usually do the same thing you do. I'm surprised there's not an "instant answer" for this question. It's a very common need.
[1]: http://www.pspad.com/ |
[Agile Software Development](http://www.amazon.com/Agile-Software-Development/dp/0201699699) by Alistair Cockburn |
Rather than attaching to a new data context, why not just requery the object in the new data context? I believe it is a more reliable and stateless strategy. |
I don't think it's a matter of which language is better. In the .NET world there are some inconsistencies between the libraries different languages provide. There is certain functionality available in VB.NET that you might like to use from C# but can't. I remember I had to use J# to use some ZIP libraries that were not available in any other language in .NET. |
[Definitions for Object-Orientation][1] are of course a [huge can of worms][2], but here are my 2 cents:
To me, Object-Orientation is all about objects that collaborate by sending messages. That is, to me, the single most important trait of an object-oriented language.
If I had to put up an ordered list of all the features that an object-oriented language must have, it would look like this:
1. [Objects sending messages to other objects][3]
2. Everything is an Object
3. [Late Binding][4]
4. [Subtype Polymorphism][5]
5. Inheritance or something similarly expressive, like [Delegation][6]
6. [Encapsulation][7]
7. [Information Hiding][8]
8. Abstraction
Obviously, this list is very controversial, since it excludes a great variety of languages that are widely regarded as object-oriented, such as [Java][9], [C#][10] and [C++][11], all of which violate points 1, 2 and 3. However, there is no doubt that those languages allow for object-oriented programming (but so does [C][12]) and even facilitate it (which C doesn't). So, I have come to call languages that satisfy those requirements "purely object-oriented".
As archetypical object-oriented languages I would name [Self][13] and [Newspeak][14].
Both satisfy the above-mentioned requirements. Both are inspired by and successors to [Smalltalk][15], and both actually manage to be "more OO" in some sense. The things that I like about Self and Newspeak are that both take the message sending paradigm to the extreme (Newspeak even more so than Self).
In Newspeak, *everything* is a message send. There are no instance variables, no fields, no attributes, no constants, no class names. They are all emulated by using getters and setters.
In Self, there are *no classes*, only objects. This emphasizes, what OO is *really* about: objects, not classes.
[1]: http://C2.Com/cgi/wiki?DefinitionsForOo "Definitions for OO on Ward's WikiWikiWeb"
[2]: http://C2.Com/cgi/wiki?OoBestFeatures "OO Best Features on Ward's WikiWikiWeb"
[3]: http://C2.com/cgi/wiki?MessagePassing "Message Passing on Ward's WikiWikiWeb"
[4]: http://C2.Com/cgi/wiki?LateBinding "Late Binding on Ward's WikiWikiWeb"
[5]: http://C2.Com/cgi/wiki?PolyMorphism "Polymorphism on Ward's WikiWikiWeb"
[6]: http://C2.Com/cgi/wiki?DelegationInheritance "Delegation Inheritance on Ward's WikiWikiWeb"
[7]: http://C2.com/cgi/wiki?EncapsulationDefinition "Encapsulation Definition on Ward's WikiWikiWeb"
[8]: http://C2.com/cgi/wiki?InformationHiding "Information Hiding on Ward's WikiWikiWeb"
[9]: http://Java.Com/ "Java Programming Language"
[10]: http://MSDN.Microsoft.Com/vcsharp/ "C# Programming Language"
[11]: http://Open-Std.Org/jtc1/sc22/wg21/ "C++ Programming Language"
[12]: http://Open-Std.Org/jtc1/sc22/wg14/ "C Programming Language"
[13]: http://Research.Sun.Com/self/ "Self Homepage at Sun Research"
[14]: http://NewspeakLanguage.Org/ "Newspeak Programming Language"
[15]: http://Smalltalk.Org/ "Smalltalk Programming Language" |
Nothing. They both do the same and perform about equally. The only reason to choose one over the other is if you're helping out with a project that already uses one.

The other possible reason for choosing one is an application or service which only supports one of the systems. For example, I pretty much chose to learn git because of [github](http://github.com). |
[This post on highscalability.com](http://highscalability.com/log-everything-all-time) provides a good perspective on logging in a large scale distributed system. (And coincidentally it starts out by mentioning a post on the JoelOnSoftware). |
How do I autorun an application in a terminal in Ubuntu? |
|bash|ubuntu|gnome|autorun| |
I've created a few `autorun` script files on various USB devices that run `bash` scripts when they mount. These scripts run "in the background", how do I get them to run in a terminal window? (Like the "Application in Terminal" gnome Launcher type.) |
Do not mix up the user mode ZwOpenDirectoryObject with the kernel mode ZwOpenDirectoryObject -- the kernel mode API (http://msdn.microsoft.com/en-us/library/ms800966.aspx) indeed seems to be available as of XP only, but the user mode version should be available at least since NT 4. Anyway, I would not recommend using ZwOpenDirectoryObject.
Why should configuration files and registry keys fail on Vista? Of course, you have to get the security settings right -- but you would have to do that for your named events as well -- so there should not be a big difference here. Maybe you should tell us some more details about the nature of your processes -- do they all run within the same logon session or do they run as different users even? And is there some master process or who creates the events in the first place?
Frankly, I tend to find the Process Explorer idea to be not a very good one. Despite the fact that you probably will not be able to accomplish that without using undocumented APIs and/or a device driver, I do not think that a process should be spelunking around in the handle table of another process just to find out the names of some kernel objects. And, of course, the same security issues apply again. |
@Domenic: I don't get it... can't you do this with auto-properties?:
public string Title { get; }
or
public string Title { get; private set; }
Is this what you are referring to? |
Thanks to the fact that it doesn't need to communicate with a central server constantly, pretty much every command runs in less than a second (obviously git push/pull/fetch are slower simply because they have to initialise SSH connections). Branching is far, far easier (one simple command to branch, one simple command to merge).
The general concept you're describing is [nearest-neighbour search][1], and there is a whole raft of techniques for solving these types of queries, either exactly or approximately. The basic idea is to use a spatial partitioning technique to reduce the complexity from O(n) per query to (approximately) O(log n) per query.

KD-trees, and variants of KD-trees, seem to work very well, but quad-trees will also work. The quality of these searches depends on whether your set of 15,000 data points is static (i.e., you're not adding a lot of data points to the reference set). Mount and Arya's work on the [Approximate Nearest Neighbour][2] library is both easy to use and understand, even without a good grounding in the math. It also gives you some flexibility in the types and tolerances of your queries.
[1]: http://en.wikipedia.org/wiki/K-nearest_neighbor_algorithm
[2]: http://www.cs.umd.edu/~mount/ANN/ |
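To make the partitioning idea concrete, here is a minimal 2-D KD-tree sketch in Python. This is an illustration only, not the ANN library; all the names here are invented for the example:

```python
def build(points, depth=0):
    """Recursively build a 2-D KD-tree: split on x at even depths, y at odd."""
    if not points:
        return None
    axis = depth % 2
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return {
        "point": points[mid],
        "axis": axis,
        "left": build(points[:mid], depth + 1),
        "right": build(points[mid + 1:], depth + 1),
    }

def dist2(a, b):
    """Squared Euclidean distance (avoids a sqrt per comparison)."""
    return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

def nearest(node, query, best=None):
    """Return the tree point closest to `query`."""
    if node is None:
        return best
    point, axis = node["point"], node["axis"]
    if best is None or dist2(point, query) < dist2(best, query):
        best = point
    # Descend into the side of the splitting plane containing the query first.
    diff = query[axis] - point[axis]
    near, far = (node["left"], node["right"]) if diff < 0 else (node["right"], node["left"])
    best = nearest(near, query, best)
    # Only cross the plane if a closer point could lie on the other side.
    if diff * diff < dist2(best, query):
        best = nearest(far, query, best)
    return best
```

The pruning test (`diff * diff < dist2(best, query)`) is what turns the full O(n) scan into a roughly O(log n) search on well-distributed data.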
If your sub-projects can be released at different versions (like controls, web parts, etc.) then it may make sense to build your structure like this:
**Solution**
Project 1
>* Branch
>* Tags
>* Trunk
Project 2
>* Branch
>* Tags
>* Trunk
This way you can manage each project release independently.
Otherwise the most common structure is:
>* Branch
>* Tags
>* Trunk
>* Docs (Optional) |
Wow, that sounds like a really strange use for Visual Studio. I'm very happy chugging away in vim. However, the one thing I love about Visual Studio is the debugger. It sounds like you are not even using it.
When I opened the question I thought you must be referring to developing portable applications in Visual Studio and then migrating them to Solaris. I have done this and had pleasant experiences with it. |
The convention/best practice would still be not to use them at all, and to refactor the code so that it is more readable using Extract Method.
I've had problems with JavaHL in Eclipse Ganymede, when it worked fine in Eclipse Europa. I'm not sure how Aptana is different, but try either upgrading JavaHL or switching to the pure-java SVNKit implementation within the Subclipse config. |
[Version Control with Subversion][1]
A very good resource for source control in general. Not really TortoiseSVN specific, though.
[1]: http://svnbook.red-bean.com/ |
> The actual resolution of gettimeofday() depends on the hardware architecture. Intel processors as well as SPARC machines offer high resolution timers that measure microseconds. Other hardware architectures fall back to the system’s timer, which is typically set to 100 Hz. In such cases, the time resolution will be less accurate.
I obtained this answer from [High Resolution Time Measurement and Timers, Part I][1]
[1]: http://www.informit.com/guides/content.aspx?g=cplusplus&seqNum=272 |
From my experience, and from what I've read across the internet, the answer is "No," it is not guaranteed. It depends on CPU speed, operating system, flavor of Linux, etc. |
Maybe. But you have bigger problems. `gettimeofday()` can result in incorrect timings if there are processes on your system that change the timer (e.g., ntpd). On a "normal" Linux, though, I believe the resolution of `gettimeofday()` is 10us. Consequently, it can jump forward and backward in time, based on the processes running on your system. This effectively makes the answer to your question no.

You should look into `clock_gettime(CLOCK_MONOTONIC)` for timing intervals. It suffers from fewer issues due to things like multi-core systems and external clock settings.

Also, look into the `clock_getres()` function.
Javascript is the easiest way to get the client's local time. I would suggest using an XMLHttpRequest to send back the local time, and if that fails, fall back to the timezone detected based on their IP address.
As far as geolocation, I've used [MaxMind GeoIP][1] on several projects and it works well, though I'm not sure if they provide timezone data. It's a service you pay for and they provide monthly updates to your database. They provide wrappers in several web languages.
[1]: http://www.maxmind.com/app/ip-location |
I guess in a managed environment, a leak would be you keeping an unnecessary reference to a large chunk of memory around. |
I concur with Bernard as to what a memory leak would be in .NET.

You could profile your application to see its memory use, and determine that if it's managing a lot of memory when it should not be, you could say it has a leak.

In managed terms, I will put my neck on the line to say it does go away once the process is killed/removed.

Unmanaged code is its own beast, and if a leak exists within it, it will follow the standard memory-leak definition.
This isn't necessarily something you can pop on your iPod and just chill to, but [Diggnation][1] is a hilarious video podcast with Kevin Rose and Alex Albrecht.
They talk about "some of the top stories on the user-submitted news site digg.com". This doesn't really have much in the way of software development (though sometimes a story pops up with that), but is great for entertainment value.
[1]: http://diggnation.com |
Prototyping a GUI with a customer |
|user-interface|prototyping| |
When prototyping initial GUI functionality with a customer, is it better to use a pen/paper drawing, or to mock something up using a tool and show them that?
The argument against a tool generated design being that the customer can sometimes focus on the low-level specifics of the mock-up rather than taking a higher level functional view of the GUI overall. |
A tidier way is to create separate forms for each 'pane' and, in their constructor, set
Form.TopLevel = false;
Form.FormBorderStyle = FormBorderStyle.None;
    Form.Dock = DockStyle.Fill;
That way, each of these forms can be laid out in its own designer, instantiated one or more times at runtime, and added to the empty area like a normal control.
Perhaps the main form could use a SplitContainer with a static TreeView in one panel, and space to add these forms in the other. Once they are added, they could be flipped through using Hide/Show or BringToFront/SendToBack.
SeparateForm f = new SeparateForm();
MainFormSplitContainer.Panel2.Controls.Add(f);
f.Show(); |
There is a book called [Paper Prototyping][1] which details pen and paper drawing and what you can gain from it. I think it has a lot of benefits, particularly that you can, very early on (and easily), modify what the end result will be without much effort, and then start off on the right foot.
[1]: http://www.amazon.com/Paper-Prototyping-Interfaces-Interactive-Technologies/dp/1558608702/ref=pd_bbs_sr_1?ie=UTF8&s=books&qid=1218046399&sr=8-1 |
Subversion repositories are typically subdivided into:
branch/
tags/
trunk/
You would either place all of your DLL and application projects into the **trunk** and then use **branch** and **tags** for all of them as necessary too:
branch/
tags/
trunk/
project1/
project2/
Alternatively, you could create folders for each project in the root and then place the common branch, tags and trunk folders within them.
project1/
branch/
tags/
trunk/
project2/
branch/
tags/
trunk/
Note that this practice is simply convention and nothing in SVN requires (or really promotes) doing it exactly this way. However, everyone is used to it. So, you would be doing people a favor to go along.
To elaborate further, the **trunk** is where your main development will take place. When you want to mark a particular revision (e.g. a release version), then simply __svn__ __copy__ the project into the tags directory. Also, just copy code into the **branch** directory when you want to do something dramatic or prolonged and don't want to hinder progress in the **trunk**. Later you can __svn__ __merge__ your **branch** back into the **trunk** when it is ready for action!
If you want to correct mishaps in your current Subversion repository, then just use __svn__ __move__ to relocate them. Unlike the delete and add process of [CVS](http://www.nongnu.org/cvs/), move will retain version history for the new location.
Using the branch/trunk/tag repository structure is pretty standard, but if I'm understanding you properly, your issue is that you have a set of common DLL projects that get used across multiple projects. This can definitely become tricky to manage.
So the typical scenario here is that you have some class library called Common.Helpers that has code that is common to all your applications.
Let's say I'm starting a new application called StackOverflow.Web that needs to reference Common.Helpers.
Usually what you would do is create a new solution file and add a new project called Stackoverflow.Web and add the existing Common.Helpers project and then reference it from the new Stackoverflow.Web project.
What I usually try to do is create a repository for the Common.Helpers project and then reference it in Subversion as an [external][1]. That way you can keep the code under source control in a single location, but still use it separately in multiple projects.
[1]: http://svnbook.red-bean.com/en/1.1/ch07s04.html |
This question actually seems to be about Java vs. C++ performance, and that's not the object orientation so much as running on a virtual machine with garbage collection and such.
[This whitepaper][1] on Java vs. C++ performance might be worth a read.
[1]: http://www.idiom.com/~zilla/Computer/javaCbenchmark.html |
The best way to do bit manipulation/unsigned bytes is to use **int**s. Even though they are signed, they have plenty of spare bits (32 total) to treat as an unsigned byte. Also, all of the mathematical operators will convert smaller fixed-precision numbers to **int**. Example:

    short a = 1; // Java has no literal suffix for short
    short b = 2;
    int c = a + b; // the result is up-converted to int
    short small = (short)c; // must cast to get it back to short
Because of this it is best to just stick with integer and mask it to get the bits that you are interested in. Example:

    int a = 32;
    int b = 128;
    int foo = (a + b) & 255; // masking with & keeps only the low 8 bits
Here is some more info on Java primitive types http://mindprod.com/jgloss/primitive.html
One last trivial note, there is one unsigned fixed precision number in Java. That is the **char** primitive.
|
[Definitions for Object-Orientation][1] are of course a [huge can of worms][2], but here are my 2 cents:
To me, Object-Orientation is all about objects that collaborate by sending messages. That is, to me, the single most important trait of an object-oriented language.
If I had to put up an ordered list of all the features that an object-oriented language must have, it would look like this:
1. [Objects sending messages to other objects][3]
2. Everything is an Object
3. [Late Binding][4]
4. [Subtype Polymorphism][5]
5. Inheritance or something similarly expressive, like [Delegation][6]
6. [Encapsulation][7]
7. [Information Hiding][8]
8. Abstraction
Obviously, this list is very controversial, since it excludes a great variety of languages that are widely regarded as object-oriented, such as [Java][9], [C#][10] and [C++][11], all of which violate points 1, 2 and 3. However, there is no doubt that those languages allow for object-oriented programming (but so does [C][12]) and even facilitate it (which C doesn't). So, I have come to call languages that satisfy those requirements "purely object-oriented".
As archetypical object-oriented languages I would name [Self][13] and [Newspeak][14].
Both satisfy the above-mentioned requirements. Both are inspired by and successors to [Smalltalk][15], and both actually manage to be "more OO" in some sense. The things that I like about Self and Newspeak are that both take the message sending paradigm to the extreme (Newspeak even more so than Self).
In Newspeak, *everything* is a message send. There are no instance variables, no fields, no attributes, no constants, no class names. They are all emulated by using getters and setters.
In Self, there are *no classes*, only objects. This emphasizes what OO is *really* about: objects, not classes.
[1]: http://C2.Com/cgi/wiki?DefinitionsForOo "Definitions for OO on Ward's WikiWikiWeb"
[2]: http://C2.Com/cgi/wiki?OoBestFeatures "OO Best Features on Ward's WikiWikiWeb"
[3]: http://C2.com/cgi/wiki?MessagePassing "Message Passing on Ward's WikiWikiWeb"
[4]: http://C2.Com/cgi/wiki?LateBinding "Late Binding on Ward's WikiWikiWeb"
[5]: http://C2.Com/cgi/wiki?PolyMorphism "Polymorphism on Ward's WikiWikiWeb"
[6]: http://C2.Com/cgi/wiki?DelegationInheritance "Delegation Inheritance on Ward's WikiWikiWeb"
[7]: http://C2.com/cgi/wiki?EncapsulationDefinition "Encapsulation Definition on Ward's WikiWikiWeb"
[8]: http://C2.com/cgi/wiki?InformationHiding "Information Hiding on Ward's WikiWikiWeb"
[9]: http://Java.Com/ "Java Programming Language"
[10]: http://MSDN.Microsoft.Com/vcsharp/ "C# Programming Language"
[11]: http://Open-Std.Org/jtc1/sc22/wg21/ "C++ Programming Language"
[12]: http://Open-Std.Org/jtc1/sc22/wg14/ "C Programming Language"
[13]: http://Research.Sun.Com/self/ "Self Homepage at Sun Research"
[14]: http://NewspeakLanguage.Org/ "Newspeak Programming Language"
[15]: http://Smalltalk.Org/ "Smalltalk Programming Language" |
Like Rob Allen, I use SQL Compare / Data Compare by Redgate. It's running on a virtual machine which resets each time I start it so the trial never expires. I also use the Database publishing wizard by Microsoft. I also have a console app I wrote in C# that takes a sql script and runs it on a server. This way you can run large scripts with 'GO' commands in it from a command line or in a batch script.
I use Microsoft.SqlServer.BatchParser.dll and Microsoft.SqlServer.ConnectionInfo.dll libraries in the console application. |
What's the best way to escape os.system() calls in Python? |
|python|bash|shell|os.system| |
When using `os.system()` it's often necessary to escape filenames passed as parameters to commands. What's the best way to do this? Preferably something that would work on multiple operating systems/shells but in particular for `bash`.
I'm currently doing the following, but am sure there must be a library function for this, or at least a more elegant/robust/efficient option:
def sh_escape(s):
return s.replace("(","\\(").replace(")","\\)").replace(" ","\\ ")
os.system("cat %s | grep something | sort > %s"
% (sh_escape(in_filename),
sh_escape(out_filename)))
**Edit**: I've accepted the simple answer of using quotes; I don't know why I didn't think of that. I guess it's because I came from Windows, where ' and " behave a little differently.
Regarding security, I understand the concern, but in this case, I'm interested in a quick and easy solution which `os.system()` provides and the source of the strings is either not user generated or at least entered by a trusted user (me). |
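As a sketch of the quoting approach mentioned in the edit above: wrapping the string in single quotes, and rewriting each embedded single quote as `'\''` (end-quote, escaped quote, re-open), makes every character except NUL literal to a POSIX shell:

```python
def sh_escape(s):
    """Quote a string so a POSIX shell treats it as one literal word."""
    # Inside single quotes everything is literal except the single quote
    # itself, which we end-quote, backslash-escape, and re-open: ' -> '\''
    return "'" + s.replace("'", "'\\''") + "'"

print(sh_escape("my file (1).txt"))  # 'my file (1).txt'
print(sh_escape("it's here"))        # 'it'\''s here'
```

Note that passing an argument list to the `subprocess` module avoids the shell entirely, which sidesteps the escaping problem altogether.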
To remove a file entirely from a git repository (say you committed a file with a password in it, or accidentally committed temporary files):
git filter-branch --index-filter 'git update-index --remove filename' HEAD
Then I think you have to commit, and push -f if it's in remote branches (remember it might annoy people if you start changing the repository's history, and if they have pulled from you before, they could still have the file).
[Premature optimization is the root of all evil.][1]
15K coordinates aren't that much. Why not iterate over the 15K coordinates and see if that's really a performance problem? You could save a lot of work and maybe it never gets too slow to even notice.
[1]: http://c2.com/cgi/wiki?PrematureOptimization |
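As a rough sketch of what that linear scan looks like (hypothetical data; timings will vary by machine), a single `min()` call over 15K points:

```python
import random
import time

random.seed(42)
# Stand-in for the 15K reference coordinates.
points = [(random.random(), random.random()) for _ in range(15000)]

def nearest(q):
    # One full pass: O(n) per query, but n = 15,000 is tiny.
    return min(points, key=lambda p: (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2)

start = time.time()
result = nearest((0.5, 0.5))
elapsed = time.time() - start
print(result, elapsed)  # elapsed is typically well under a second
```

If profiling shows this simple loop is already fast enough, the spatial-index machinery may not be worth the added complexity.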
+1 for [flot][1]. It requires jQuery though, so it might not play well with GWT; I haven't used that.
[1]: http://code.google.com/p/flot/ |