Q: Are Multiple DataContext classes ever appropriate? In order to fully use LinqToSql in an ASP.net 3.5 application, it is necessary to create DataContext classes (which is usually done using the designer in VS 2008). From the UI perspective, the DataContext is a model of the sections of your database that you would like to expose through LinqToSql and is integral in setting up the ORM features of LinqToSql. My question is: I am setting up a project that uses a large database where all tables are interconnected in some way through Foreign Keys. My first inclination is to make one huge DataContext class that models the entire database. That way I could in theory (though I don't know if this would be needed in practice) use the Foreign Key connections that are generated through LinqToSql to easily go between related objects in my code, insert related objects, etc. However, after giving it some thought, I am now thinking that it may make more sense to create multiple DataContext classes, each one relating to a specific namespace or logical interrelated section within my database. My main concern is that instantiating and disposing one huge DataContext class all the time for individual operations that relate to specific areas of the Database would impose an unnecessary burden on application resources. Additionally, it is easier to create and manage smaller DataContext files than one big one. The thing that I would lose is that there would be some distant sections of the database that would not be navigable through LinqToSql (even though a chain of relationships connects them in the actual database). Additionally, there would be some table classes that would exist in more than one DataContext. Any thoughts or experience on whether multiple DataContexts (corresponding to DB namespaces) are appropriate in place of (or in addition to) one very large DataContext class (corresponding to the whole DB)?

A: I'd been wrangling over the same question whilst retrofitting LINQ to SQL over a legacy DB. Our database is a bit of a whopper (150 tables) and after some thought and experimentation I elected to use multiple DataContexts. Whether this is considered an anti-pattern remains to be seen, but for now it makes life manageable.

A: I think John is correct. "My main concern is that instantiating and disposing one huge DataContext class all the time for individual operations that relate to specific areas of the Database would impose an unnecessary burden on application resources" How do you support that statement? What is your experiment that shows that a large DataContext is a performance bottleneck? Having multiple datacontexts is a lot like having multiple databases and makes sense in similar scenarios, that is, hardly ever. If you are working with multiple datacontexts you need to keep track of which objects belong to which datacontext and you can't relate objects that are not in the same data context. That is a costly design smell for no real benefit. @Evan "The DataContext (or Linq to Entities ObjectContext) is more of a "unit of work" than a connection" That is precisely why you should not have more than one datacontext. Why would you want more than one "unit of work" at a time?

A: I have to disagree with the accepted answer. In the question posed, the system has a single large database with strong foreign key relationships between almost every table (also the case where I work).
In this scenario, breaking it up into smaller DataContexts (DCs) has two immediate and major drawbacks (both mentioned by the question):

* You lose relationships between some tables. You can try to choose your DC boundaries wisely, but you will eventually run into a situation where it would be very convenient to use a relationship from a table in one DC to a table in another, and you won't be able to.
* Some tables may appear in multiple DCs. This means that if you want to add table-specific helper methods, business logic, or other code in partial classes, the types won't be compatible across DCs. You can work around this by inheriting each entity class from its own specific base class, which gets messy. Also, schema changes will have to be duplicated across multiple DCs.

Now those are significant drawbacks. Are there advantages big enough to overcome them? The question mentions performance: My main concern is that instantiating and disposing one huge DataContext class all the time for individual operations that relate to specific areas of the Database would impose an unnecessary burden on application resources. Actually, it is not true that a large DC takes significantly more time to instantiate or use in a typical unit of work. In fact, after the first instance is created in a running process, subsequent copies of the same DC can be created almost instantaneously. The only real advantage from multiple DCs for a single, large database with thorough foreign key relationships is that you can compartmentalize your code a little better. But you can already do this with partial classes. Also, the unit of work concept is not really relevant to the original question. Unit of work typically refers to how much work a single DC instance is doing, not how much work a DC class is capable of doing.

A: I disagree with John's answer. The DataContext (or Linq to Entities ObjectContext) is more of a "unit of work" than a connection. It manages change tracking, etc. See this blog post for a description: Lifetime of a LINQ to SQL DataContext The four main points of this blog post are that DataContext:

* Is ideally suited for a "unit of work" approach
* Is also designed for "stateless" server operation
* Is not designed for long-lived usage
* Should be used very carefully after any SubmitChanges() operation.

Considering that, I don't think using more than one DataContext would do any harm - in fact, creating different DataContexts for different types of work would help make your LinqToSql implementation more usable and organized. The only downside is you wouldn't be able to use sqlmetal to auto-generate your dbml.

A: In my experience with LINQ to SQL and LINQ to Entities a DataContext is synonymous to a connection to the database. So if you were to use multiple data stores you would need to use multiple DataContexts. My gut reaction is you wouldn't notice too much of a slowdown with a DataContext that encompasses a large number of tables. If you did, however, you could always split the database logically at points where you can isolate tables that don't have any relationship to other sets of tables and create multiple contexts.
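To make the partial-class suggestion above concrete, here is a minimal hedged sketch in C#. The names (NorthwindDataContext, Customer, Orders) are hypothetical designer-generated members, not anything from the question; the point is only that the designer emits the entity and context classes as partial, so table-specific logic can live in your own files without splitting the model into several contexts.

using System.Linq;

// Hypothetical designer-generated names: "NorthwindDataContext", "Customer".
public partial class Customer
{
    // Table-specific helper on the entity itself, kept in a separate file
    // so the designer can regenerate its half of the class safely.
    public bool HasOrders
    {
        get { return this.Orders.Any(); }
    }
}

public partial class NorthwindDataContext
{
    // Query helpers can hang off the single, large context the same way.
    public IQueryable<Customer> CustomersWithOrders()
    {
        return this.Customers.Where(c => c.Orders.Any());
    }
}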
Q: CPU throttling in C++ I was just wondering if there is an elegant way to set the maximum CPU load for a particular thread doing intensive calculations. Right now I have located the most time consuming loop in the thread (it does only compression) and use GetTickCount() and Sleep() with hardcoded values. It makes sure that the loop continues for a certain period and then sleeps for a certain minimum time. It more or less does the job, i.e. guarantees that the thread will not use more than 50% of CPU. However, behavior is dependent on the number of CPU cores (huge disadvantage) and simply ugly (smaller disadvantage :)). Any ideas?

A: On Linux, you can change the scheduling priority of a thread with nice().

A: I am not aware of any API to get the OS's scheduler to do what you want (even if your thread is idle-priority, if there are no higher-priority ready threads, yours will run). However, I think you can improvise a fairly elegant throttling function based on what you are already doing. Essentially (I don't have a Windows dev machine handy): Pick a default amount of time the thread will sleep each iteration. Then, on each iteration (or on every nth iteration, such that the throttling function doesn't itself become a significant CPU load),

* Compute the amount of CPU time your thread used since the last time your throttling function was called (I'll call this dCPU). You can use the GetThreadTimes() API to get the amount of time your thread has been executing.
* Compute the amount of real time elapsed since the last time your throttling function was called (I'll call this dClock).
* dCPU / dClock is the percent CPU usage (of one CPU). If it is higher than you want, increase your sleep time; if lower, decrease the sleep time.
* Have your thread sleep for the computed time.

Depending on how your watchdog computes CPU usage, you might want to use GetProcessAffinityMask() to find out how many CPUs the system has. dCPU / (dClock * CPUs) is the percentage of total CPU time available. You will still have to pick some magic numbers for the initial sleep time and the increment/decrement amount, but I think this algorithm could be tuned to keep a thread running at fairly close to a determined percent of CPU.

A: I can't think of any cross platform way of doing what you want (or any guaranteed way full stop) but as you are using GetTickCount perhaps you aren't interested in cross platform :) I'd use interprocess communications and set the intensive process's nice levels to get what you require but I'm not sure that's appropriate for your situation. EDIT: I agree with Bernard which is why I think a process rather than a thread might be more appropriate but it just might not suit your purposes.

A: The problem is it's not normal to want to leave the CPU idle while you have work to do. Normally you set a background task to IDLE priority, and let the OS handle scheduling it all the CPU time that isn't used by interactive tasks. It sounds to me like the problem is the watchdog process. If your background task is CPU-bound then you want it to take all the unused CPU time for its task. Maybe you should look at fixing the watchdog program?

A: You may be able to change the priority of a thread, but changing the maximum utilization would either require polling and hacks to limit how many things are occurring, or using OS tools that can set the maximum utilization of a process. However, I don't see any circumstance where you would want to do this.
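The dCPU / dClock algorithm described above can be sketched roughly as follows. This is a sketch under stated assumptions (Windows, a single thread, the target expressed as a fraction of one core); the initial sleep, step size, and cap are the "magic numbers" the answer mentions, chosen arbitrarily here, and error handling is omitted.

#include <windows.h>

// Sketch of the throttle described above: compare CPU time consumed by this
// thread (GetThreadTimes) against wall-clock time (GetTickCount), and grow
// or shrink the per-iteration sleep to home in on a target load.
class ThreadThrottle {
public:
    explicit ThreadThrottle(double targetLoad)   // e.g. 0.5 for ~50% of one core
        : target_(targetLoad), sleepMs_(10),
          lastTick_(GetTickCount()), lastCpu_(CurrentCpu100ns()) {}

    // Call once per iteration of the busy loop.
    void Pace() {
        DWORD nowTick = GetTickCount();
        ULONGLONG nowCpu = CurrentCpu100ns();
        double dClockMs = static_cast<double>(nowTick - lastTick_);
        double dCpuMs = static_cast<double>(nowCpu - lastCpu_) / 10000.0; // 100ns -> ms
        if (dClockMs > 0.0) {
            double load = dCpuMs / dClockMs;   // fraction of one CPU
            if (load > target_ && sleepMs_ < 500)
                sleepMs_ += 5;                 // running too hot: sleep longer
            else if (load < target_ && sleepMs_ >= 5)
                sleepMs_ -= 5;                 // too cool: sleep less
        }
        lastTick_ = nowTick;
        lastCpu_ = nowCpu;
        if (sleepMs_ > 0)
            Sleep(sleepMs_);
    }

private:
    // Kernel + user time consumed by the calling thread, in 100ns units.
    static ULONGLONG CurrentCpu100ns() {
        FILETIME creation, exitTime, kernelTime, userTime;
        GetThreadTimes(GetCurrentThread(), &creation, &exitTime, &kernelTime, &userTime);
        ULARGE_INTEGER k, u;
        k.LowPart = kernelTime.dwLowDateTime; k.HighPart = kernelTime.dwHighDateTime;
        u.LowPart = userTime.dwLowDateTime;   u.HighPart = userTime.dwHighDateTime;
        return k.QuadPart + u.QuadPart;
    }

    double    target_;
    DWORD     sleepMs_;
    DWORD     lastTick_;
    ULONGLONG lastCpu_;
};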
Q: Python: what is the difference between (1,2,3) and [1,2,3], and when should I use each? In many places, (1,2,3) (a tuple) and [1,2,3] (a list) can be used interchangeably. When should I use one or the other, and why?

A: The notion of tuples is highly expressive:

* Pragmatically, they are great for packing and unpacking values (x,y=coord).
* In combination with dictionaries (hash tables), they allow forms of mapping that would otherwise require many levels of association. For example, consider marking that (x,y) has been found.

// PHP
if (!isset($found[$x])) {
    $found[$x] = Array();
    $found[$x][$y] = true;
} else if (!isset($found[$x][$y])) {
    $found[$x][$y] = true;
}

# Python
found[(x,y)] = True  # parens added for clarity

* Lists should be used with the expectation of operations on their contents (hence the various mentions of immutability). One will want to pop, push, splice, slice, search, insert before, insert after, etc. with a list.
* Tuples should be a low-level representation of an object, where simple comparisons are made, or operations such as extracting the n'th element or n elements in a predictable fashion, such as the coordinates example given earlier.
* Lastly, lists are not hashable, so the kind of mapping done with dictionaries (hash tables in Perl, associative arrays in PHP) must be done with tuples as keys.

Here's a simple example of tuples and dictionaries, together at last:

"""
couple is a tuple of two people
doesLike is a dictionary mapping couples to True or False
"""
couple = "john", "jane"
doesLike = dict()
doesLike[couple] = True
doesLike["jane", "john"] = False  # unrequited love :'(

A: [1, 2, 3] is a list in which one can add or delete items. (1, 2, 3) is a tuple in which, once defined, modification cannot be done.

A: Whenever I need to pass a collection of items to a function, if I want the function to not change the values passed in, I use a tuple. Otherwise, if I want the function to alter the values, I use a list. And as always, if you are using external libraries and need to pass in a list of values to a function and are unsure about the integrity of the data, use a tuple.

A: From the Python FAQ: Lists and tuples, while similar in many respects, are generally used in fundamentally different ways. Tuples can be thought of as being similar to Pascal records or C structs; they're small collections of related data which may be of different types which are operated on as a group. For example, a Cartesian coordinate is appropriately represented as a tuple of two or three numbers. Lists, on the other hand, are more like arrays in other languages. They tend to hold a varying number of objects all of which have the same type and which are operated on one-by-one. Generally by convention you wouldn't choose a list or a tuple just based on its (im)mutability. You would choose a tuple for small collections of completely different pieces of data in which a full-blown class would be too heavyweight, and a list for collections of any reasonable size where you have a homogeneous set of data.

A: As others have mentioned, lists and tuples are both containers which can be used to store Python objects. Lists are extensible and their contents can change by assignment; on the other hand, tuples are immutable. Also, lists cannot be used as keys in a dictionary whereas tuples can.

A: Open a console and run python.
Try this:

>>> list = [1, 2, 3]
>>> dir(list)
['__add__', '__class__', '__contains__', '__delattr__', '__delitem__', '__delslice__', '__doc__', '__eq__', '__format__', '__ge__', '__getattribute__', '__getitem__', '__getslice__', '__gt__', '__hash__', '__iadd__', '__imul__', '__init__', '__iter__', '__le__', '__len__', '__lt__', '__mul__', '__ne__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__reversed__', '__rmul__', '__setattr__', '__setitem__', '__setslice__', '__sizeof__', '__str__', '__subclasshook__', 'append', 'count', 'extend', 'index', 'insert', 'pop', 'remove', 'reverse', 'sort']

As you can see at the end of the output, lists have the following methods: 'append', 'count', 'extend', 'index', 'insert', 'pop', 'remove', 'reverse', 'sort'. Now try the same for a tuple:

>>> tuple = (1, 2, 3)
>>> dir(tuple)
['__add__', '__class__', '__contains__', '__delattr__', '__doc__', '__eq__', '__format__', '__ge__', '__getattribute__', '__getitem__', '__getnewargs__', '__getslice__', '__gt__', '__hash__', '__init__', '__iter__', '__le__', '__len__', '__lt__', '__mul__', '__ne__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__rmul__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', 'count', 'index']

Only 'count' and 'index' from the list methods appear here. This is because tuples are immutable and they don't support any modifications. Instead they are simpler and faster in their internal implementation.

A:
* A tuple might represent a key in a dictionary, because it's immutable.
* Use lists if you have a collection of data that doesn't need random access.

A: The list [1,2,3] is dynamic and flexible but that flexibility comes at a speed cost. The tuple (1,2,3) is fixed (immutable) and therefore faster.

A: If you can find a solution that works with tuples, use them, as it forces immutability which kind of drives you down a more functional path. You almost never regret going down the functional/immutable path.

A: [1,2,3] is a list. (1,2,3) is a tuple and immutable.

A: (1,2,3) and [1,2,3] can be used interchangeably only in rare conditions. So (1,2,3) is a tuple and is immutable. Any changes you wish to make need to overwrite the object. [1,2,3] is a list and elements can be appended and removed. A list has more features than a tuple.

A: Tuples are a quick/flexible way to create composite data-types. Lists are containers for, well, lists of objects. For example, you would use a List to store a list of student details in a class. Each student detail in that list may be a 3-tuple containing their roll number, name and test score. `[(1,'Mark',86),(2,'John',34)...]` Also, because tuples are immutable they can be used as keys in dictionaries.

A: (1,2,3) is a tuple while [1,2,3] is a list. A tuple is an immutable object while a list is mutable.

A: (1,2,3) is a tuple and [1,2,3] is a list. Either of the two can represent a sequence of numbers, but note that tuples are immutable and lists are mutable Python objects.

A: (1,2,3) is a tuple, [1,2,3] is a list. Lists are mutable, and various operations can be performed on them, whereas tuples are immutable and cannot be extended. We cannot add, delete or update any element of a tuple once it is created.

A: a = (1,2,3) is a tuple, which is immutable, meaning you can't add anything into a. b = [1,2,3] is a list in Python, which is mutable, meaning you can make changes to 'b' and either delete or add numbers to it.

A: In simple words, lists are mutable whereas tuples are not.
Hence, if you want to modify the elements in your program, i.e., add, delete or alter elements, go for a list. But if you don't want that to happen, e.g., for a fixed sequence in a for loop, go for a tuple.

A: (1,2,3) is immutable, so you can't add to it or change one of the items. In contrast, [1,2,3] is mutable, so you can add to it or change the items.
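Several answers above note that tuples can be dictionary keys while lists cannot; a quick console session makes this concrete (the names here are made up for illustration):

>>> d = {}
>>> d[(1, 2)] = "tuple keys work"     # tuples are hashable
>>> d[(1, 2)]
'tuple keys work'
>>> d[[1, 2]] = "list keys do not"    # lists are not hashable
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: unhashable type: 'list'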
Q: How far can LISP macros go? I have read a lot that LISP can redefine syntax on the fly, presumably with macros. I am curious how far this actually goes. Can you redefine the language structure so much that it borderline becomes a compiler for another language? For example, could you change the functional nature of LISP into a more object oriented syntax and semantics, maybe say having syntax closer to something like Ruby? Especially, is it possible to get rid of the parenthesis hell using macros? I have learned enough (Emacs-)LISP to customize Emacs with my own micro-features, but I am very curious how far macros can go in customizing the language.

A: What you are asking is somewhat like asking how to become an expert chocolatier so that you can remove all that hellish brown stuff from your favourite chocolate cake.

A: Yes, you can fundamentally change the syntax, and even escape "the parentheses hell". For that you will need to define a new reader syntax. Look into reader macros. I do suspect however that to reach the level of Lisp expertise to program such macros you will need to immerse yourself in the language to such an extent that you will no longer consider parentheses "hell". I.e. by the time you know how to avoid them, you will have come to accept them as a good thing.

A: That's a really good question. I think it's nuanced but definitely answerable: Macros are not stuck in s-expressions. See the LOOP macro for a very complex language written using keywords (symbols). So, while you may start and end the loop with parentheses, inside it has its own syntax. Example:

(loop for x from 0 below 100
      when (evenp x)
      collect x)

That being said, most simple macros just use s-expressions. And you'd be "stuck" using them. But s-expressions, like Sergio has answered, start to feel right. The syntax gets out of the way and you start coding in the syntax tree. As for reader macros, yes, you could conceivably write something like this:

#R{
  ruby.code.goes.here
}

But you'd need to write your own Ruby syntax parser. You can also mimic some of the Ruby constructs, like blocks, with macros that compile to the existing Lisp constructs.

#B(some lisp (code goes here))

would translate to

(lambda () (some lisp (code goes here)))

See this page for how to do it.

A: Yes, you can redefine the syntax so that Lisp becomes a compiler. You do this using "Reader Macros," which are different from the normal "Compiler Macros" that you're probably thinking of. Common Lisp has the built-in facility to define new syntax for the reader and reader macros to process that syntax. This processing is done at read-time (which comes before compile or eval time). To learn more about defining reader macros in Common Lisp, see the Common Lisp Hyperspec -- you'll want to read Ch. 2, "Syntax" and Ch. 23, "Reader". (I believe Scheme has the same facility, but I'm not as familiar with it -- see the Scheme sources for the Arc programming language). As a simple example, let's suppose you want Lisp to use curly braces rather than parentheses. This requires something like the following reader definitions:

;; { and } become list delimiters, along with ( and ).
(set-syntax-from-char #\{ #\( )
(defun lcurly-brace-reader (stream inchar) ; this was way too easy to do.
  (declare (ignore inchar))
  (read-delimited-list #\} stream t))
(set-macro-character #\{ #'lcurly-brace-reader)
(set-macro-character #\} (get-macro-character #\) ))
(set-syntax-from-char #\} #\) )

;; un-lisp -- make parens meaningless
(set-syntax-from-char #\) #\] ) ; ( and ) become normal braces
(set-syntax-from-char #\( #\[ )

You're telling Lisp that the { is like a ( and that the } is like a ). Then you create a function (lcurly-brace-reader) that the reader will call whenever it sees a {, and you use set-macro-character to assign that function to the {. Then you tell Lisp that ( and ) are like [ and ] (that is, not meaningful syntax). Other things you could do include, for example, creating a new string syntax or using [ and ] to enclose in-fix notation and process it into S-expressions. You can also go far beyond this, redefining the entire syntax with your own macro characters that will trigger actions in the reader, so the sky really is the limit. This is just one of the reasons why Paul Graham and others keep saying that Lisp is a good language in which to write a compiler.

A: If you want lisp to look like Ruby, use Ruby. It's possible to use Ruby (and Python) in a very lisp like way, which is one of the main reasons they have gained acceptance so quickly.

A: See this example of how reader macros can extend the lisp reader with complex tasks like XML templating: http://common-lisp.net/project/cl-quasi-quote/present-class.html This user library compiles the static parts of the XML into UTF-8 encoded literal byte arrays at compile time that are ready to be write-sequence'd into the network stream. And they are usable in normal lisp macros, they are orthogonal... the placement of the comma character influences which parts are constant and which should be evaluated at runtime. More details available at: http://common-lisp.net/project/cl-quasi-quote/ Another project for Common Lisp syntax extensions: http://common-lisp.net/project/cl-syntax-sugar/

A: I'm not a Lisp expert, heck I'm not even a Lisp programmer, but after a bit of experimenting with the language I came to the conclusion that after a while the parentheses start becoming 'invisible' and you start seeing the code as you want it to be. You start paying more attention to the syntactical constructs you create via s-exprs and macros, and less to the lexical form of the text of lists and parentheses. This is especially true if you take advantage of a good editor that helps with the indentation and syntax coloring (try setting the parentheses to a color very similar to the background). You might not be able to replace the language completely and get 'Ruby' syntax, but you don't need it. Thanks to the language flexibility you could end up having a dialect that feels like you are following the 'Ruby style of programming' if you want, whatever that would mean to you. I know this is just an empirical observation, but I think I had one of those Lisp enlightenment moments when I realized this.

A: Over and over again, newcomers to Lisp want to "get rid of all the parentheses." It lasts for a few weeks. No project to build a serious general purpose programming syntax on top of the usual S-expression parser ever gets anywhere, because programmers invariably wind up preferring what you currently perceive as "parenthesis hell." It takes a little getting used to, but not much!
Once you do get used to it, and you can really appreciate the plasticity of the default syntax, going back to languages where there's only one way to express any particular programming construct is really grating. That being said, Lisp is an excellent substrate for building Domain Specific Languages. Just as good as, if not better than, XML. Good luck!

A: The best explanation of Lisp macros I have ever seen is at https://www.youtube.com/watch?v=4NO83wZVT0A starting at about 55 minutes in. This is a video of a talk given by Peter Seibel, the author of "Practical Common Lisp", which is the best Lisp textbook there is. The motivation for Lisp macros is usually hard to explain, because they really come into their own in situations that are too lengthy to present in a simple tutorial. Peter comes up with a great example; you can grasp it completely, and it makes good, proper use of Lisp macros. You asked: "could you change the functional nature of LISP into a more object oriented syntax and semantics". The answer is yes. In fact, Lisp originally didn't have any object-oriented programming at all, not surprising since Lisp has been around since way before object-oriented programming! But when we first learned about OOP in 1978, we were able to add it to Lisp easily, using, among other things, macros. Eventually the Common Lisp Object System (CLOS) was developed, a very powerful object-oriented programming system that fits elegantly into Lisp. The whole thing can be loaded as an extension -- nothing is built-in! It's all done with macros. Lisp has an entirely different feature, called "reader macros", that can be used to extend the surface syntax of the language. Using reader macros, you can make sublanguages that have C-like or Ruby-like syntax. They transform the text into Lisp, internally. These are not used widely by most real Lisp programmers, mainly because it is hard to extend the interactive development environment to understand the new syntax. For example, Emacs indentation commands would be confused by a new syntax. If you're energetic, though, Emacs is extensible too, and you could teach it about your new lexical syntax.

A: Regular macros operate on lists of objects. Most commonly, these objects are other lists (thus forming trees) and symbols, but they can be other objects such as strings, hashtables, user-defined objects, etc. These structures are called s-exps. So, when you load a source file, your Lisp compiler will parse the text and produce s-exps. Macros operate on these. This works great and it's a marvellous way to extend the language within the spirit of s-exps. Additionally, the aforementioned parsing process can be extended through "reader macros" that let you customize the way your compiler turns text into s-exps. I suggest, however, that you embrace Lisp's syntax instead of bending it into something else. You sound a bit confused when you mention Lisp's "functional nature" and Ruby's "object-oriented syntax". I'm not sure what "object-oriented syntax" is supposed to be, but Lisp is a multi-paradigm language and it supports object-oriented programming extremely well. BTW, when I say Lisp, I mean Common Lisp. I suggest you put your prejudices away and give Lisp an honest go.

A: Parenthesis hell? I see no more parentheses in:

(function toto)

than in:

function(toto);

And in

(if tata (toto) (titi) (tutu))

no more than in:

if (tata)
  toto();
else {
  titi();
  tutu();
}

I see fewer brackets and ';' though.
A: @sparkes Sometimes LISP is the clear language choice, namely Emacs extensions. I'm sure I could use Ruby to extend Emacs if I wanted to, but Emacs was designed to be extended with LISP, so it seems to make sense to use it in that situation.

A: It's a tricky question. Since lisp is already structurally so close to a parse tree, the difference between a large number of macros and implementing your own mini-language in a parser generator isn't very clear. But, except for the opening and closing paren, you could very easily end up with something that looks nothing like lisp.

A: One of the uses of macros that blew my mind was the compile-time verification of SQL requests against the DB. Once you realize you have the full language at hand at compile-time, it opens up interesting new perspectives. Which also means you can shoot yourself in the foot in interesting new ways (like rendering compilation not reproducible, which can very easily turn into a debugging nightmare).
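To make the "full language at hand at compile-time" point concrete, here is a toy sketch in Common Lisp, not the real SQL-verification library: a macro whose body runs an arbitrary check during macro expansion (the real library would talk to the database at that point instead of matching a literal). The macro name is made up for illustration.

(defmacro checked-div (numerator divisor)
  ;; The body of a macro runs at macro-expansion (compile) time,
  ;; so this check happens while the code is being compiled.
  (when (eql divisor 0)
    (error "checked-div: literal zero divisor caught at compile time"))
  ;; What the macro returns is the code that will run at run time.
  `(/ ,numerator ,divisor))

;; (checked-div 10 2) expands to (/ 10 2) and evaluates to 5.
;; (checked-div 10 0) signals an error during compilation, before any code runs.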
Q: Any solution to Illegal Cross Thread Operation exception? When you data bind in C#, the thread that changes the data causes the control to change too. But if this thread is not the one on which the control was created, you'll get an Illegal Cross Thread Operation exception. Is there any way to prevent this?

A: You should be able to do something like:

if (control.InvokeRequired)
{
    control.Invoke(delegateWithMyCode);
}
else
{
    delegateWithMyCode();
}

InvokeRequired is a property on Controls to see if you are on the correct thread, then Invoke will invoke the delegate on the correct thread. UPDATE: Actually, at my last job we did something like this:

private void SomeEventHandler(Object someParam)
{
    if (this.InvokeRequired)
    {
        this.Invoke(new SomeEventHandlerDelegate(SomeEventHandler), someParam);
    }

    // Regular handling code
}

which removes the need for the else block and kind of tightens up the code.

A: As I don't have a test case to go from I can't guarantee this solution, but it seems to me that a scenario similar to the one used to update progress bars in different threads (use a delegate) would be suitable here.

public delegate void DataBindDelegate();
public DataBindDelegate BindData = new DataBindDelegate(DoDataBind);

public void DoDataBind()
{
    DataBind();
}

If the data binding needs to be done by a particular thread, then let that thread do the work!

A: If the thread call is "illegal" (i.e. the DataBind call affects controls that were not created in the thread it is being called from) then you need to create a delegate so that even if the decision / preparation for the DataBind is not done in the control-creating thread, any resultant modification of them (i.e. DataBind()) will be. You would call my code from the worker thread like so:

this.BindData.Invoke();

This would then cause the original thread to do the binding, which (presuming it is the thread that created the controls) should work.

A: In WPF and Silverlight the binding infrastructure takes care of the switching to the UI thread.
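Putting the InvokeRequired pattern above into a minimal self-contained WinForms form, as a sketch only: the form, label, and progress loop are made-up placeholders, not anything from the question.

using System;
using System.Threading;
using System.Windows.Forms;

public class StatusForm : Form
{
    private readonly Label statusLabel = new Label { Dock = DockStyle.Top };

    public StatusForm()
    {
        Controls.Add(statusLabel);
    }

    protected override void OnShown(EventArgs e)
    {
        base.OnShown(e);
        // Worker thread: it must never touch the label directly.
        var worker = new Thread(() =>
        {
            for (int i = 0; i <= 100; i += 10)
            {
                SetStatus("Working: " + i + "%");
                Thread.Sleep(100);
            }
        });
        worker.IsBackground = true;
        worker.Start();
    }

    private void SetStatus(string text)
    {
        if (statusLabel.InvokeRequired)
        {
            // Re-invoke this same method on the thread that owns the control.
            statusLabel.Invoke(new Action<string>(SetStatus), text);
        }
        else
        {
            statusLabel.Text = text;
        }
    }
}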
Q: Most Efficient Way to Test Object Type I have values stored as strings in a DataTable where each value could really represent an int, double, or string (they were all converted to strings during an import process from an external data source). I need to test and see what type each value really is. What is more efficient for the application (or is there no practical difference)?

* Try to convert to int (and then double). If the conversion works, return true. If an exception is thrown, return false.
* Regular expressions designed to match the pattern of an int or double
* Some other method?

A: I would say, don't worry so much about such micro performance. It is much better to just get something to work, and then make it as clear and concise and easy to read as possible. The worst thing you can do is sacrifice readability for an insignificant amount of performance. In the end, the best way to deal with performance issues is to save them for when you have data that indicates there is an actual performance problem... otherwise you will spend a lot of time micro-optimizing and actually cause higher maintenance costs later on. If you find this parsing situation is really the bottleneck in your application, THEN is the time to try and figure out what the fastest way to solve the problem is. I think Jeff (and many others) have blogged about this sort of thing a lot.

A: You'll get different results for the different methods depending on whether you compile with optimisations on. You basically have a few options:

object o;

//checking with is
o is int

//check type
o.GetType() != typeof( int )

//cast and catch exception
try{ int j = (int) o; }
catch {}

//use the tryparse
int.TryParse( Convert.ToString( o ), out j )

You can easily set up a console app that tries each of these 10,000 times and returns durations for each (test when o is an int and when it's something else). The try-catch method is the quickest if the object does hold an int, and by far the slowest if it doesn't (even slower than GetType). int.TryParse is pretty quick if you have a string, but if you have an unknown object it's slower. Interestingly, with .Net 3.5 and optimisations turned on the o is int check takes the same time as try-catch when o actually is an int. o is int is only slightly slower if o actually is something else. Annoyingly FxCop will throw up warnings if you do something like:

if( o is int )
    int j = (int) o;

But I think that's a bug in FxCop - it doesn't know int is a value type and recommends you use o as int instead. If your input is always a string, int.TryParse is best; otherwise the is operator is quickest. As you have a string I'd look at whether you need to know that it's an int, rather than a double. If int.TryParse passes then so will double.TryParse, so you could halve the number of checks - return either double or string and floor the doubles when you expect an int.

A: The trouble you have is that there could be situations where the answer could be all three types. 3 could be an int, a double or a string! It depends upon what you are trying to do and how important it is that they are a particular type. It might be best just to leave them as they are as long as you can or, alternatively, come up with a method to mark each one (if you have control of the source of the original string).

A: I'd personally use int.TryParse, then double.TryParse. Performance on those methods is quite fast. They both return a Boolean. If both fail then you have a string, per how you defined your data.
A: I would use double.TryParse; it has performance benefits.
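A small sketch of the TryParse approach most answers converge on; the method name ClassifyValue is hypothetical, but the calls are the standard int.TryParse/double.TryParse and nothing here ever throws.

// Classifies a stored string as int, double, or plain string,
// using the TryParse approach suggested above (no exceptions involved).
static Type ClassifyValue(string s)
{
    int i;
    if (int.TryParse(s, out i))
        return typeof(int);

    double d;
    if (double.TryParse(s, out d))
        return typeof(double);

    return typeof(string);
}

// ClassifyValue("42")    -> System.Int32
// ClassifyValue("42.5")  -> System.Double
// ClassifyValue("hello") -> System.String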
Q: Pass by reference or pass by value? When learning a new programming language, one of the possible roadblocks you might encounter is the question whether the language is, by default, pass-by-value or pass-by-reference. So here is my question to all of you, in your favorite language, how is it actually done? And what are the possible pitfalls? Your favorite language can, of course, be anything you have ever played with: popular, obscure, esoteric, new, old...

A: Since I haven't seen a Perl answer yet, I thought I'd write one. Under the hood, Perl works effectively as pass-by-reference. Variables as function call arguments are passed referentially, constants are passed as read-only values, and results of expressions are passed as temporaries. The usual idioms to construct argument lists by list assignment from @_, or by shift, tend to hide this from the user, giving the appearance of pass-by-value:

sub incr {
    my ( $x ) = @_;
    $x++;
}

my $value = 1;
incr($value);
say "Value is now $value";

This will print Value is now 1 because the $x++ has incremented the lexical variable declared within the incr() function, rather than the variable passed in. This pass-by-value style is usually what is wanted most of the time, as functions that modify their arguments are rare in Perl, and the style should be avoided. However, if for some reason this behaviour is specifically desired, it can be achieved by operating directly on elements of the @_ array, because they will be aliases for variables passed into the function.

sub incr {
    $_[0]++;
}

my $value = 1;
incr($value);
say "Value is now $value";

This time it will print Value is now 2, because the $_[0]++ expression incremented the actual $value variable. The way this works is that under the hood @_ is not a real array like most other arrays (such as would be obtained by my @array), but instead its elements are built directly out of the arguments passed to a function call. This allows you to construct pass-by-reference semantics if that would be required. Function call arguments that are plain variables are inserted as-is into this array, and constants or results of more complex expressions are inserted as read-only temporaries. It is however exceedingly rare to do this in practice, because Perl supports reference values; that is, values that refer to other variables. Normally it is far clearer to construct a function that has an obvious side-effect on a variable, by passing in a reference to that variable. This is a clear indication to the reader at the callsite that pass-by-reference semantics are in effect.

sub incr_ref {
    my ( $ref ) = @_;
    $$ref++;
}

my $value = 1;
incr_ref(\$value);
say "Value is now $value";

Here the \ operator yields a reference in much the same way as the & address-of operator in C.

A: There's a good explanation here for .NET. A lot of people are surprised that reference objects are actually passed by value (in both C# and Java). It's a copy of a stack address. This prevents a method from changing where the object actually points to, but still allows a method to change the values of the object. In C# it's possible to pass a reference by reference, which means you can change where an actual object points to.

A: Don't forget there is also pass by name, and pass by value-result. Pass by value-result is similar to pass by value, with the added aspect that the value is set in the original variable that was passed as the parameter. It can, to some extent, avoid interference with global variables.
It is apparently better in partitioned memory, where a pass by reference could cause a page fault (Reference). Pass by name means that the values are only calculated when they are actually used, rather than at the start of the procedure. Algol used pass-by-name, but an interesting side effect is that it is very difficult to write a swap procedure (Reference). Also, the expression passed by name is re-evaluated each time it is accessed, which can also have side effects.

A: Whatever you call pass-by-value or pass-by-reference must be consistent across languages. The most common and consistent definition used across languages is that with pass-by-reference, you can pass a variable to a function "normally" (i.e. without explicitly taking an address or anything like that), and the function can assign to (not mutate the contents of) the parameter inside the function and it will have the same effect as assigning to the variable in the calling scope. From this view, the languages are grouped as follows, each group having the same passing semantics. If you think that two languages should not be put in the same group, I challenge you to come up with an example that distinguishes them. The vast majority of languages including C, Java, Python, Ruby, JavaScript, Scheme, OCaml, Standard ML, Go, Objective-C, Smalltalk, etc. are all pass-by-value only. Passing a pointer value (some languages call it a "reference") does not count as pass by reference; we are only concerned about the thing passed, the pointer, not the thing pointed to. Languages such as C++, C#, PHP are by default pass-by-value like the languages above, but functions can explicitly declare parameters to be pass-by-reference, using & or ref. Perl is always pass-by-reference; however, in practice people almost always copy the values after getting them, thus using it in a pass-by-value way.

A: By value:
* is slower than by reference since the system has to copy the parameter
* used for input only

By reference:
* faster since only a pointer is passed
* used for input and output
* can be very dangerous if used in conjunction with global variables

A: Concerning J, while there is only, AFAIK, passing by value, there is a form of passing by reference which enables moving a lot of data. You simply pass something known as a locale to a verb (or function). It can be an instance of a class or just a generic container.

spaceused=: [: 7!:5 <
exectime =: 6!:2
big_chunk_of_data =. i. 1000 1000 100

passbyvalue =: 3 : 0
 $ y
 ''
)

locale =. cocreate''
big_chunk_of_data__locale =. big_chunk_of_data

passbyreference =: 3 : 0
 l =. y
 $ big_chunk_of_data__l
 ''
)

exectime 'passbyvalue big_chunk_of_data'
0.00205586720663967

exectime 'passbyreference locale'
8.57957102144893e_6

The obvious disadvantage is that you need to know the name of your variable in some way in the called function. But this technique can move a lot of data painlessly. That's why, while technically not pass by reference, I call it "pretty much that".

A: Here is my own contribution for the Java programming language.
First some code:

public void swap(int x, int y)
{
    int tmp = x;
    x = y;
    y = tmp;
}

Calling this method will result in this:

int pi = 3;
int everything = 42;
swap(pi, everything);

System.out.println("pi: " + pi);
System.out.println("everything: " + everything);

Output:
pi: 3
everything: 42

Even using 'real' objects will show a similar result:

public class MyObj {
    private String msg;
    private int number;

    //getters and setters
    public String getMsg() { return this.msg; }
    public void setMsg(String msg) { this.msg = msg; }
    public int getNumber() { return this.number; }
    public void setNumber(int number) { this.number = number; }

    //constructor
    public MyObj(String msg, int number) {
        setMsg(msg);
        setNumber(number);
    }
}

public static void swap(MyObj x, MyObj y)
{
    MyObj tmp = x;
    x = y;
    y = tmp;
}

public static void main(String args[]) {
    MyObj x = new MyObj("Hello world", 1);
    MyObj y = new MyObj("Goodbye Cruel World", -1);

    swap(x, y);

    System.out.println(x.getMsg() + " -- " + x.getNumber());
    System.out.println(y.getMsg() + " -- " + y.getNumber());
}

Output:
Hello world -- 1
Goodbye Cruel World -- -1

Thus it is clear that Java passes its parameters by value, as the values of pi and everything and the MyObj objects aren't swapped. Be aware that "by value" is the only way in Java to pass parameters to a method. (For example, a language like C++ allows the developer to pass a parameter by reference using '&' after the parameter's type.) Now the tricky part, or at least the part that will confuse most of the new Java developers (borrowed from javaworld; original author: Tony Sintes):

public void tricky(Point arg1, Point arg2)
{
    arg1.x = 100;
    arg1.y = 100;
    Point temp = arg1;
    arg1 = arg2;
    arg2 = temp;
}

public static void main(String [] args)
{
    Point pnt1 = new Point(0,0);
    Point pnt2 = new Point(0,0);
    System.out.println("X: " + pnt1.x + " Y: " + pnt1.y);
    System.out.println("X: " + pnt2.x + " Y: " + pnt2.y);
    System.out.println(" ");
    tricky(pnt1, pnt2);
    System.out.println("X: " + pnt1.x + " Y: " + pnt1.y);
    System.out.println("X: " + pnt2.x + " Y: " + pnt2.y);
}

Output:
X: 0 Y: 0
X: 0 Y: 0

X: 100 Y: 100
X: 0 Y: 0

tricky successfully changes the value of pnt1! This would imply that objects are passed by reference; this is not the case! A correct statement would be: the object references are passed by value. More from Tony Sintes: The method successfully alters the value of pnt1, even though it is passed by value; however, a swap of pnt1 and pnt2 fails! This is the major source of confusion. In the main() method, pnt1 and pnt2 are nothing more than object references. When you pass pnt1 and pnt2 to the tricky() method, Java passes the references by value just like any other parameter. This means the references passed to the method are actually copies of the original references. Figure 1 (source: javaworld.com) shows two references pointing to the same object after Java passes an object to a method. Conclusion, or a long story short:

* Java passes its parameters by value
* "by value" is the only way in Java to pass a parameter to a method
* using methods from the object given as a parameter will alter the object, as the references point to the original objects
(if that method itself alters some values)

Useful links:
* http://www.javaworld.com/javaworld/javaqa/2000-05/03-qa-0526-pass.html
* http://www.ibm.com/developerworks/java/library/j-passbyval/
* http://www.ibm.com/developerworks/library/j-praxis/pr1.html
* http://javadude.com/articles/passbyvalue.htm

A: Here is another article for the C# programming language. C# passes its arguments by value (by default):

private void swap(string a, string b)
{
    string tmp = a;
    a = b;
    b = tmp;
}

Calling this version of swap will thus have no result:

string x = "foo";
string y = "bar";
swap(x, y);

Output:
x: foo
y: bar

However, unlike Java, C# does give the developer the opportunity to pass parameters by reference. This is done by using the 'ref' keyword before the type of the parameter:

private void swap(ref string a, ref string b)
{
    string tmp = a;
    a = b;
    b = tmp;
}

This swap will change the value of the referenced parameters (note that ref is also required at the call site):

string x = "foo";
string y = "bar";
swap(ref x, ref y);

Output:
x: bar
y: foo

C# also has an out keyword, and the difference between ref and out is a subtle one. From MSDN: The caller of a method which takes an out parameter is not required to assign to the variable passed as the out parameter prior to the call; however, the callee is required to assign to the out parameter before returning. And: In contrast, ref parameters are considered initially assigned by the callee. As such, the callee is not required to assign to the ref parameter before use. Ref parameters are passed both into and out of a method. A small pitfall is, like in Java, that objects passed by value can still be changed using their inner methods. Conclusion:

* C# passes its parameters, by default, by value
* but when needed, parameters can also be passed by reference using the ref keyword
* inner methods of a parameter passed by value will alter the object (if that method itself alters some values)

Useful links:
* http://msdn.microsoft.com/en-us/vcsharp/aa336814.aspx
* http://www.c-sharpcorner.com/UploadFile/saragana/Willswapwork11162005012542AM/Willswapwork.aspx
* http://en.csharp-online.net/Value_vs_Reference

A: Python uses pass-by-value, but since all such values are object references, the net effect is something akin to pass-by-reference. However, Python programmers think more about whether an object type is mutable or immutable. Mutable objects can be changed in-place (e.g., dictionaries, lists, user-defined objects), whereas immutable objects can't (e.g., integers, strings, tuples). The following example shows a function that is passed two arguments, an immutable string and a mutable list.

>>> def do_something(a, b):
...     a = "Red"
...     b.append("Blue")
...
>>> a = "Yellow"
>>> b = ["Black", "Burgundy"]
>>> do_something(a, b)
>>> print a, b
Yellow ['Black', 'Burgundy', 'Blue']

The line a = "Red" merely creates a local name, a, for the string value "Red" and has no effect on the passed-in argument (which is now hidden, as a must refer to the local name from then on). Assignment is not an in-place operation, regardless of whether the argument is mutable or immutable. The b parameter is a reference to a mutable list object, and the .append() method performs an in-place extension of the list, tacking on the new "Blue" string value. (Because string objects are immutable, they don't have any methods that support in-place modifications.) Once the function returns, the re-assignment of a has had no effect, while the extension of b clearly shows pass-by-reference style call semantics.
As mentioned before, even if the argument for a is a mutable type, the re-assignment within the function is not an in-place operation, and so there would be no change to the passed argument's value:

>>> a = ["Purple", "Violet"]
>>> do_something(a, b)
>>> print a, b
['Purple', 'Violet'] ['Black', 'Burgundy', 'Blue', 'Blue']

If you didn't want your list modified by the called function, you would instead use the immutable tuple type (identified by the parentheses in the literal form, rather than square brackets), which does not support the in-place .append() method:

>>> a = "Yellow"
>>> b = ("Black", "Burgundy")
>>> do_something(a, b)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 3, in do_something
AttributeError: 'tuple' object has no attribute 'append'

A: PHP is also pass by value.

<?php
class Holder {
    private $value;

    public function __construct($value) {
        $this->value = $value;
    }

    public function setValue($value) {
        $this->value = $value;
    }

    public function getValue() {
        return $this->value;
    }
}

function swap($x, $y) {
    $tmp = $x;
    $x = $y;
    $y = $tmp;
}

$a = new Holder('a');
$b = new Holder('b');
swap($a, $b);

echo $a->getValue() . ", " . $b->getValue() . "\n";

Outputs:

a, b

However in PHP4 objects were treated like primitives, so the called function works on a copy. Which means:

<?php
$myData = new Holder('this should be replaced');

function replaceWithGreeting($holder) {
    $holder->setValue('hello');
}

replaceWithGreeting($myData);
echo $myData->getValue(); // Prints out "this should be replaced"

A: By default, ANSI/ISO C uses either -- it depends on how you declare your function and its parameters. If you declare your function parameters as pointers then the function will be pass-by-reference, and if you declare your function parameters as non-pointer variables then the function will be pass-by-value.

void swap(int *x, int *y); //< Declared as pass-by-reference.
void swap(int x, int y);   //< Declared as pass-by-value (and probably doesn't do anything useful.)

You can run into problems if you create a function that returns a pointer to a non-static variable that was created within that function. The returned value of the following code would be undefined -- there is no way to know if the memory space allocated to the temporary variable created in the function was overwritten or not.

float *FtoC(float temp)
{
    float c;
    c = (temp-32)*5/9;
    return &c;
}

You could, however, return a reference to a static variable or a pointer that was passed in the parameter list.

float *FtoC(float *temp)
{
    *temp = (*temp-32)*5/9;
    return temp;
}
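For completeness, the pass-by-reference style swap that the C answer declares but never defines would look like this sketch:

#include <stdio.h>

/* Definition of the pass-by-reference style swap declared above:
   the callee dereferences the pointers, so it mutates the caller's ints. */
void swap(int *x, int *y)
{
    int tmp = *x;
    *x = *y;
    *y = tmp;
}

int main(void)
{
    int a = 1, b = 2;
    swap(&a, &b);                 /* pass the addresses of a and b */
    printf("a=%d b=%d\n", a, b);  /* prints: a=2 b=1 */
    return 0;
}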
Q: What do the result codes in SVN mean? I need a quick reference.

A: Also note that a result code in the second column refers to the properties of the file. For example:

U   filename.1
 U  filename.2
UU  filename.3

filename.1: the file was updated
filename.2: a property or properties on the file (such as svn:keywords) was updated
filename.3: both the file and its properties were updated

A: Whenever you don't have access to documentation (SVNBook), type (Linux):

svn help status | grep \'\?\'
svn help status | grep \'\!\'
svn help status | grep \'\YOUR_SYMBOL_HERE\'

or insert the following function in your ~/.bashrc file, like so:

svncode() {
    symbol=$1
    [ $symbol ] && svn help status | grep \'$(echo $symbol)\' || \
        echo "usage: svncode <symbol>"
}

A: For additional details see the SVNBook: "Status of working copy files and directories". The common statuses:

U: Working file was updated
G: Changes on the repo were automatically merged into the working copy
M: Working copy is modified
C: This file conflicts with the version in the repo
?: This file is not under version control
!: This file is under version control but is missing or incomplete
A: This file will be added to version control (after commit)
A+: This file will be moved (after commit)
D: This file will be deleted (after commit)
S: This signifies that the file or directory has been switched from the path of the rest of the working copy (using svn switch) to a branch
I: Ignored
X: External definition
~: Type changed
R: Item has been replaced in your working copy. This means the file was scheduled for deletion, and then a new file with the same name was scheduled for addition in its place.
L: Item is locked
E: Item existed, as it would have been created, by an svn update.

A: You can always get a list by running:

svn status --help

A: I want to say something about the "G" status: G: Changes on the repo were automatically merged into the working copy. I think the above definition is not clear and can generate a little confusion, because all updated files are, in a sense, automatically merged into the working copy. More precise definitions would be:

U = item (U)pdated to repository version
G = item's local changes mer(G)ed with repository
C = item's local changes (C)onflicted with repository
D = item (D)eleted from working copy
A = item (A)dded to working copy

A: There is also an 'E' status:

E = File existed before update

This can happen if you have manually created a folder that would have been created by performing an update.

A: SVN status columns

$ svn status
L       index.html

The output of the command is split into six columns (plus an extra out-of-date column with --show-updates), but that is not obvious because sometimes the columns are empty. Perhaps it would have made more sense to indicate the empty columns with dashes, the way ls -l does, instead of nothing. Then, for example, L index.html would look like --L--- index.html, which makes it obvious the only information we have is in the third column, the one about locking. Anyway, once you know that, it begins to make more sense.

SVN Status first column: A, D, M, R, C, X, I, ?, !, ~
The first column indicates that an item was added, deleted, or otherwise changed.

(blank)  No modifications.
A    Item is scheduled for Addition.
D    Item is scheduled for Deletion.
M    Item has been modified.
R    Item has been replaced in your working copy. This means the file was scheduled for deletion, and then a new file with the same name was scheduled for addition in its place.
C    The contents (as opposed to the properties) of the item conflict with updates received from the repository.
X    Item is related to an externals definition.
I    Item is being ignored (e.g. with the svn:ignore property).
?    Item is not under version control.
!    Item is missing (e.g. you moved or deleted it without using svn). This also indicates that a directory is incomplete (a checkout or update was interrupted).
~    Item is versioned as one kind of object (file, directory, link), but has been replaced by a different kind of object.

SVN Status second column: M, C
The second column tells the status of a file's or directory's properties.

(blank)  No modifications.
M    Properties for this item have been modified.
C    Properties for this item are in conflict with property updates received from the repository.

SVN Status third column: L
The third column is populated only if the working copy directory is locked (an svn cleanup should normally be enough to clear it out).

(blank)  Item is not locked.
L    Item is locked.

SVN Status fourth column: +
The fourth column is populated only if the item is scheduled for addition-with-history.

(blank)  No history scheduled with commit.
+    History scheduled with commit.

SVN Status fifth column: S
The fifth column is populated only if the item's working copy is switched relative to its parent.

(blank)  Item is a child of its parent directory.
S    Item is switched.

SVN Status sixth column: K, O, T, B
The sixth column is populated with lock information.

(blank)  When --show-updates is used, the file is not locked. If --show-updates is not used, this merely means that the file is not locked in this working copy.
K    File is locked in this working copy.
O    File is locked either by another user or in another working copy. This only appears when --show-updates is used.
T    File was locked in this working copy, but the lock has been stolen and is invalid. The file is currently locked in the repository. This only appears when --show-updates is used.
B    File was locked in this working copy, but the lock has been broken and is invalid. The file is no longer locked. This only appears when --show-updates is used.

SVN Status seventh column: *
The out-of-date information appears in the seventh column (only if you pass the --show-updates switch). This is something people who are new to SVN expect the command to do, not realizing it only compares the current state of the file with the information it fetched from the server on the last update.

(blank)  The item in your working copy is up-to-date.
*    A newer revision of the item exists on the server.

A: I usually use svn through a GUI, either my IDE or a client. Because of that, I can never remember the codes when I do have to resort to the command line. I find this cheat sheet a great help: Subversion Cheat Sheet

A: Take a look at the Subversion Book reference: "Status of working copy files and directories". Highly recommended for anyone doing pretty much anything with SVN.
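As a worked example (a hypothetical working copy; the file names are made up), here is how those columns combine in typical svn status output:

$ svn status
M       src/app.c
 M      docs/manual.txt
A  +    src/renamed.c
D       src/old.c
?       notes.txt
!       missing.h

Reading down: src/app.c has local content edits (M in the first column), docs/manual.txt has only property edits (M in the second column), src/renamed.c is scheduled for addition with history (the + in the fourth column), src/old.c is scheduled for deletion, notes.txt is unversioned, and missing.h is versioned but absent from the working copy.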
{ "language": "en", "url": "https://stackoverflow.com/questions/2034", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "316" }
Q: How do I create a branch? How do I create a branch in SVN? A: Create a new branch using the svn copy command as follows:

$ svn copy svn+ssh://host.example.com/repos/project/trunk \
      svn+ssh://host.example.com/repos/project/branches/NAME_OF_BRANCH \
      -m "Creating a branch of project"

A: * *Create a new folder outside of your current project. You can give it any name. (Example: You have a checkout for a project named "Customization", and it has many projects, like "Project1", "Project2"... and you want to create a branch of "Project1". So first open "Customization", right-click and create a new folder, and give it a name, "Project1Branch".) *Right-click on "Project1"... TortoiseSVN -> Branch/Tag. *Choose working copy. *Open the repository browser... via the button just to the right of "To URL". *Select Customization... right-click, then Add Folder, and go to the folder which you have created. Here it is "Project1Branch". Now click the OK button to add. *Take a checkout of this new branch. *Again go to the project you want to branch. Right-click, TortoiseSVN -> Branch/Tag. Then select working copy. And you can give the URL with your branch name, like {your IP address/svn/AAAA/Customization/Project1Branch}. You can also set the name in the URL so it will create the folder with that name, like {your IP address/svn/AAAA/Customization/Project1Branch/MyProject1Branch}. *Press the OK button. Now you can see the logs... your working copy will be stored in your branch. *Now you can take a checkout... and enjoy your work. :) A: If your repo is available via https, you can use this command to branch ...

svn copy https://host.example.com/repos/project/trunk \
         https://host.example.com/repos/project/branches/branch-name \
    -m "Creating a branch of project"

A: Branching in Subversion is facilitated by a very lightweight and efficient copying facility. Branching and tagging are effectively the same. Just copy a whole folder in the repository to somewhere else in the repository using the svn copy command. Basically this means that what copying a folder means - whether it be a backup, tag, branch or whatever - is purely a matter of convention. Depending upon how you want to think about things (normally depending upon which SCM tool you have used in the past) you need to set up a folder structure within your repository to support your style. Common styles are to have a bunch of folders at the top of your repository called tags, branches, trunk, etc. - that allows you to copy your whole trunk (or sub-sets) into the tags and/or branches folders. If you have more than one project you might want to replicate this kind of structure under each project. It can take a while to get used to the concept - but it works - just make sure you (and your team) are clear on the conventions that you are going to use. It is also a good idea to have a good naming convention - something that tells you why the branch/tag was made and whether it is still appropriate - consider ways of archiving branches that are obsolete. A: Below are the steps to create a branch from trunk using TortoiseSVN on a Windows machine. This obviously needs the TortoiseSVN client to be installed. * *Right-click on the updated trunk on the local Windows machine *Select TortoiseSVN *Click Branch/Tag *Select the To path in the SVN repository. Note that the destination URL is updated according to the path and branch name given *Do not create the folder inside branches in the repository browser *Add the branches path.
For example, branches/ *Add a meaningful log message for your reference *Click OK; this creates a new folder on the local system *Check out the created branch into the new folder A: svn cp /trunk/ /branch/NEW_Branch If you have some local changes in trunk, then use rsync to sync the changes: rsync -r -v -p --exclude ".svn" /trunk/ /branch/NEW_Branch A: Suppose you want to create a branch named "TEST" from trunk; then use: svn cp -m "CREATE BRANCH TEST" $svn_url/trunk $svn_url/branches/TEST A: Top tip for new SVN users; this may help a little with getting the correct URLs quickly. Run svn info to display useful information about the current checked-out branch. The URL field should (if you run svn in the root folder) give you the URL you need to copy from. Also, to switch to the newly created branch, use the svn switch command: svn switch http://my.repo.url/myrepo/branches/newBranchName A: Normally you'd copy it to svn+ssh://host.example.com/repos/project/branches/mybranch so that you can keep several branches in the repository, but your syntax is valid. Here's some advice on how to set up your repository layout. A: If you ever plan on merging your branch, I highly suggest you look at this: Svnmerge.py I hear Subversion 1.5 builds more of the merge tracking in; I have no experience with that. My project is on 1.4.x and svnmerge.py is a life saver!
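As a follow-up on merging: a minimal sketch of bringing a finished branch back into trunk, assuming Subversion 1.5+ merge tracking (the URLs are the same placeholders used in the answers above):

# From a clean, up-to-date trunk working copy:
svn merge --reintegrate svn+ssh://host.example.com/repos/project/branches/NAME_OF_BRANCH
svn commit -m "Reintegrate NAME_OF_BRANCH into trunk"

On 1.4.x and earlier there is no merge tracking at all, which is exactly the gap svnmerge.py fills.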
{ "language": "en", "url": "https://stackoverflow.com/questions/2041", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "662" }
Q: Can a Windows dll retrieve its own filename? A Windows process created from an exe file has access to the command string which invoked it, including its file's path and filename. eg. C:\MyApp\MyApp.exe --help. But this is not so for a dll invoked via LoadLibrary. Does anyone know of a way for a function loaded via dll to find out what its path and filename is? Specifically I'm interested in a Delphi solution, but I suspect that the answer would be pretty much the same for any language. A: I think you're looking for GetModuleFileName. http://www.swissdelphicenter.ch/torry/showcode.php?id=143: { If you are working on a DLL and are interested in the filename of the DLL rather than the filename of the application, then you can use this function: } function GetModuleName: string; var szFileName: array[0..MAX_PATH] of Char; begin FillChar(szFileName, SizeOf(szFileName), #0); GetModuleFileName(hInstance, szFileName, MAX_PATH); Result := szFileName; end; Untested though, been some time since I worked with Delphi :)
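One detail worth spelling out, since it is easy to get wrong: the first parameter of GetModuleFileName decides which module's path you get back. A hedged sketch (standard Win32 behavior; the procedure name and example paths are illustrative):

uses Windows;

procedure ShowModulePaths;
var
  szFileName: array[0..MAX_PATH] of Char;
begin
  // 0 (nil) means "the calling process's exe", even when called from inside a DLL:
  GetModuleFileName(0, szFileName, MAX_PATH);         // e.g. C:\MyApp\MyApp.exe
  // hInstance inside a DLL is the DLL's own module handle:
  GetModuleFileName(hInstance, szFileName, MAX_PATH); // e.g. C:\MyApp\MyLib.dll
end;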
{ "language": "en", "url": "https://stackoverflow.com/questions/2043", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "33" }
Q: How do I unit test persistence? As a novice in practicing test-driven development, I often end up in a quandary as to how to unit test persistence to a database. I know that technically this would be an integration test (not a unit test), but I want to find out the best strategies for the following: * *Testing queries. *Testing inserts. How do I know that the insert has gone wrong if it fails? I can test it by inserting and then querying, but how can I know that the query wasn't wrong? *Testing updates and deletes -- same as testing inserts What are the best practices for doing these? Regarding testing SQL: I am aware that this could be done, but if I use an O/R Mapper like NHibernate, it attaches some naming warts in the aliases used for the output queries, and as that is somewhat unpredictable I'm not sure I could test for that. Should I just abandon everything and simply trust NHibernate? I'm not sure that's prudent. A: You do the unit testing by mocking out the database connection. This way, you can build scenarios where specific queries in the flow of a method call succeed or fail. I usually build my mock expectations so that the actual query text is ignored, because I really want to test the fault tolerance of the method and how it handles itself -- the specifics of the SQL are irrelevant to that end. Obviously this means your test won't actually verify that the method works, because the SQL may be wrong. This is where integration tests kick in. For that, I expect someone else will have a more thorough answer, as I'm just beginning to get to grips with those myself. A: I have written a post here concerning unit testing the data layer which covers this exact problem. Apologies for the (shameful) plug, but the article is too long to post here. I hope that helps you - it has worked very well for me over the last 6 months on 3 active projects. Regards, Rob G A: The problem I experienced when unit testing persistence, especially without an ORM and thus mocking your database (connection), is that you don't really know if your queries succeed. It could be that your queries are specifically designed for a particular database version and only succeed with that version. You'll never find that out if you mock your database. So in my opinion, unit testing persistence is only of limited use. You should always add tests running against the targeted database. A: For NHibernate, I'd definitely advocate just mocking out the NHibernate API for unit tests -- trust the library to do the right thing. If you want to ensure that the data actually goes to the DB, do an integration test. A: For JDBC based projects, my Acolyte framework can be used: http://acolyte.eu.org . It allows you to mock up the data access you want to test, benefiting from the JDBC abstraction, without having to manage a specific test DB. A: Look into DbUnit. It is a Java library, but there must be a C# equivalent. It lets you prepare the database with a set of data so that you know exactly what it contains, and then you can interface with DbUnit to check what is in the database afterwards. It can run against many database systems, so you can use your actual database setup, or use something else, like HSQL in Java (a Java database implementation with an in-memory option). If you want to test that your code is using the database properly (which you most likely should be doing), then this is the way to go to isolate each test and ensure the database has the expected data prepared.
A: As Mike Stone said, DbUnit is great for getting the database into a known state before running your tests. When your tests are finished, DbUnit can put the database back into the state it was in before you ran the tests. DbUnit (Java) DbUnit.NET A: I would also mock the database, and check that the queries are what you expected. There is the risk that the test checks the wrong SQL, but this would be detected in the integration tests. A: Technically unit tests of persistence are not unit tests. They are integration tests. With C# using mbUnit, you simply use the SqlRestoreInfo and RollBack attributes:

[TestFixture]
[SqlRestoreInfo(<connectionString>, <name>, <backupLocation>)]
public class Tests
{
    [SetUp]
    public void Setup()
    {
    }

    [Test]
    [RollBack]
    public void TEST()
    {
        //test insert.
    }
}

The same can be done in NUnit, except the attribute names differ slightly. As for checking whether your query was successful, you normally need to follow it with a second query to see if the database has been changed as you expected. A: I usually create a repository, use that to save my entity and retrieve a fresh one. Then I assert that the retrieved is equal to the saved.
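To make that last round-trip idea concrete, here is a minimal NUnit sketch; CustomerRepository and Customer are hypothetical types standing in for your own repository and entity:

using NUnit.Framework;

[TestFixture]
public class CustomerPersistenceTests
{
    [Test]
    public void SavedCustomerCanBeReadBack()
    {
        var repository = new CustomerRepository(); // hypothetical repository
        var saved = new Customer { Name = "Ada" }; // hypothetical entity

        // Round-trip: write through the repository, then read a fresh copy back.
        repository.Save(saved);
        var loaded = repository.GetById(saved.Id);

        Assert.AreEqual(saved.Name, loaded.Name);
    }
}

Note this is an integration test in disguise: it only proves anything when it runs against a real (test) database.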
{ "language": "en", "url": "https://stackoverflow.com/questions/2046", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "50" }
Q: Monitor a specific RSS For all the RSS feeds I subscribe to I use Google Reader, which I love. I do however have a couple of specific RSS feeds that I'd like to be notified of as soon as they get updated (say, for example, an RSS feed for a forum I like to monitor and respond to as quickly as possible). Are there any tools out there for this kind of monitoring which also have some kind of alert functionality (for example, a prompt window)? I've tried Simbolic RSS Alert but I found it a bit buggy and couldn't get it to alert me as often as I liked. Suggestions? Or perhaps a different experience with Simbolic? A: If you have access to Microsoft Outlook 2007 or Thunderbird, these email clients allow you to add RSS feeds in the same way you would add an email account. I use Google Reader generally, but when I want to keep up-to-date with something specific, I add the RSS feed to Outlook and it arrives in my inbox as if it were an email. A: RSS isn't "push", which means that you need to have something that polls the website. It's much less traffic than getting the whole site or front page (for instance, you can say "Give me all articles newer than the last time I asked"), but it's traffic nonetheless. It's generally understood you shouldn't poll more often than every 30 minutes in an automated client (citation required). Having said that, you may find a client which allows you to set a more frequent refresh. A: I've used Pingie to send me an SMS when a new item appears in an RSS feed. Perhaps it will be useful for you, if you have a cellphone text messaging plan. A: RSS2mail is a simple Python script which I used extensively a few years back. As Matthew stated, you really shouldn't bother an RSS feed more than the producer allows, but you can use HTTP headers to check for changes in a very light way, which is something rss2email does quite well. A: You could always knock something up yourself... I've done it in the past and it really isn't too difficult a job to write an RSS parser. Of course, as others have mentioned, there's an etiquette question as to how much of the website's valuable bandwidth you want to hog for yourself in RSS request traffic. That's a matter for your own conscience. ;) A: I use RSS Bandit (for Windows) to stay up to date with my RSS feeds/blogs. There are lots of other RSS aggregator applications though. If you don't want another "big" application but have Windows Vista, you can also choose to make Internet Explorer monitor the RSS feed and use the Feed sidebar application (called "Feedschlagzeilen" in the German version, not sure about the English one) that comes with Vista to show the latest headlines. A: Reading all the answers reminded me that I actually never looked into solving this using a Firefox add-on. I soon found Update Scanner and I think it looks really promising! A: I like an old version of FeedReader for that kind of use, where the icon in the system tray started spinning when new stuff came in (the new version goes from grey to yellow instead). It's also possible to be alerted for each new message. A: Since you mentioned a pop-up, I'll add Feed Notifier to the list. It sits in the Windows Tray (or whatever they call it now in Windows 7) and pops up a notification when there are new entries to your feeds. You can set it up with multiple feeds, each with its own polling interval. When there are new entries, it pops up a prompt which you can dismiss or click to go to the entry.
You are able to go back and review recent entries later, even if you clicked to dismiss them the first time. If your PC is asleep when a new entry is added, you will be notified the next time you wake it up.
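For the knock-something-up-yourself route mentioned above, here is a minimal polling sketch in Python using the third-party feedparser library; the feed URL is a placeholder, and a real tool would also remember which entry IDs it has already announced:

import time
import feedparser  # pip install feedparser

FEED_URL = "http://example.com/forum/rss"  # placeholder
etag = modified = None

while True:
    # Conditional GET: pass back the ETag/Last-Modified values so an
    # unchanged feed costs almost no bandwidth (the server answers 304).
    feed = feedparser.parse(FEED_URL, etag=etag, modified=modified)
    if feed.entries:
        etag = getattr(feed, "etag", None)
        modified = getattr(feed, "modified", None)
        for entry in feed.entries:
            print("New item:", entry.title, entry.link)
    time.sleep(30 * 60)  # stay polite: poll at most every 30 minutes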
{ "language": "en", "url": "https://stackoverflow.com/questions/2048", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13" }
Q: What are MVP and MVC and what is the difference? When looking beyond the RAD (drag-drop and configure) way of building user interfaces that many tools encourage, you are likely to come across three design patterns called Model-View-Controller, Model-View-Presenter and Model-View-ViewModel. My question has three parts to it: * *What issues do these patterns address? *How are they similar? *How are they different? A: In MVP the view draws data from the presenter, which draws and prepares/normalizes data from the model, while in MVC the controller draws data from the model and sets it, by push, in the view. In MVP you can have a single view working with multiple types of presenters, and a single presenter working with multiple different views. MVP usually uses some sort of a binding framework, such as the Microsoft WPF binding framework or various binding frameworks for HTML5 and Java. In those frameworks, the UI/HTML5/XAML is aware of what property of the presenter each UI element displays, so when you bind a view to a presenter, the view looks for the properties and knows how to draw data from them and how to set them when a value is changed in the UI by the user. So if, for example, the model is a car, then the presenter is some sort of a car presenter, and exposes the car properties (year, maker, seats, etc.) to the view. The view knows that the text field called 'car maker' needs to display the presenter's Maker property. You can then bind to the view many different types of presenters; all must have a Maker property - it can be of a plane, train or whatever, the view doesn't care. The view draws data from the presenter - no matter which - as long as it implements an agreed interface. This binding framework, if you strip it down, is actually the controller :-) And so, you can look at MVP as an evolution of MVC. MVC is great, but the problem is that there is usually one controller per view. Controller A knows how to set fields of View A. If you now want View A to display data of model B, you need Controller A to know model B, or you need Controller A to receive an object with an interface - which is like MVP, only without the bindings - or you need to rewrite the UI set code in Controller B. Conclusion - MVP and MVC are both UI decoupling patterns, but MVP usually uses a binding framework, which is MVC underneath. Thus MVP is at a higher architectural level than MVC, and a wrapper pattern above MVC. A: MVC (Model View Controller) The input is directed at the Controller first, not the view. That input might be coming from a user interacting with a page, but it could also be from simply entering a specific URL into a browser. In either case, it's a Controller that is interfaced with to kick off some functionality. There is a many-to-one relationship between the Controller and the View. That’s because a single controller may select different views to be rendered based on the operation being executed. Note the one-way arrow from Controller to View. This is because the View doesn’t have any knowledge of or reference to the controller. The Controller does pass back the Model, so there is knowledge between the View and the expected Model being passed into it, but not the Controller serving it up. MVP (Model View Presenter) The input begins with the View, not the Presenter. There is a one-to-one mapping between the View and the associated Presenter. The View holds a reference to the Presenter. The Presenter is also reacting to events being triggered from the View, so it's aware of the View it's associated with.
The Presenter updates the View based on the requested actions it performs on the Model, but the View is not Model-aware. For more, see this reference. A: There are many answers to the question, but I felt there is a need for some really simple answer clearly comparing the two. Here's the discussion I made up for when a user searches for a movie name in an MVP and an MVC app: User: Click click … View: Who’s that? [MVP|MVC] User: I just clicked on the search button … View: Ok, hold on a sec … . [MVP|MVC] ( View calling the Presenter|Controller … ) [MVP|MVC] View: Hey Presenter|Controller, a User has just clicked on the search button, what shall I do? [MVP|MVC] Presenter|Controller: Hey View, is there any search term on that page? [MVP|MVC] View: Yes,… here it is … “piano” [MVP|MVC] Presenter|Controller: Thanks View,… meanwhile I’m looking up the search term on the Model, please show him/her a progress bar [MVP|MVC] ( Presenter|Controller is calling the Model … ) [MVP|MVC] Presenter|Controller: Hey Model, do you have any match for this search term?: “piano” [MVP|MVC] Model: Hey Presenter|Controller, let me check … [MVP|MVC] ( Model is making a query to the movie database … ) [MVP|MVC] ( After a while ... ) -------------- This is where MVP and MVC start to diverge --------------- Model: I found a list for you, Presenter, here it is in JSON “[{"name":"Piano Teacher","year":2001},{"name":"Piano","year":1993}]” [MVP] Model: There is some result available, Controller. I have created a field variable in my instance and filled it with the result. Its name is "searchResultsList" [MVC] (Presenter|Controller thanks Model and gets back to the View) [MVP|MVC] Presenter: Thanks for waiting View, I found a list of matching results for you and arranged them in a presentable format: ["Piano Teacher 2001","Piano 1993"]. Please show it to the user in a vertical list. Also please hide the progress bar now [MVP] Controller: Thanks for waiting View, I have asked Model about your search query. It says it has found a list of matching results and stored them in a variable named "searchResultsList" inside its instance. You can get it from there. Also please hide the progress bar now [MVC] View: Thank you very much Presenter [MVP] View: Thank you "Controller" [MVC] (Now the View is questioning itself: How should I present the results I get from the Model to the user? Should the production year of the movie come first or last...? Should it be in a vertical or horizontal list? ...) In case you're interested, I have been writing a series of articles dealing with app architectural patterns (MVC, MVP, MVVM, clean architecture, ...) accompanied by a GitHub repo here. Even though the sample is written for Android, the underlying principles can be applied to any medium. A: My humble short view: MVP is for large scales, and MVC for tiny scales. With MVC, I sometimes feel the V and the C may be seen as two sides of a single indivisible component rather directly bound to M, and one inevitably falls to this when going down to smaller scales, like UI controls and base widgets. At this level of granularity, MVP makes little sense. When one, on the contrary, goes to larger scales, proper interfaces become more important, the same with unambiguous assignment of responsibilities, and here comes MVP. On the other hand, this scale rule of thumb may weigh very little when the platform characteristics favour some kind of relations between the components, like with the web, where it seems to be easier to implement MVC, more than MVP.
A: I think this image by Erwin Vandervalk (and the accompanying article) is the best explanation of MVC, MVP, and MVVM, their similarities, and their differences. The article does not show up in search engine results for queries on "MVC, MVP, and MVVM" because the title of the article does not contain the words "MVC" and "MVP"; but it is the best explanation, I think. (The article also matches what Uncle Bob Martin said in one of his talks: that MVC was originally designed for the small UI components, not for the architecture of the system.) A: There are many versions of MVC; this answer is about the original MVC in Smalltalk. In brief, it is best captured in a diagram (omitted); the talk "droidcon NYC 2017 - Clean app design with Architecture Components" clarifies it. A: MVC (Model-View-Controller) In MVC, the Controller is the one in charge! The Controller is triggered or accessed based on some events/requests and then manages the Views. Views in MVC are virtually stateless; the Controller is responsible for choosing which View to show. E.g.: When the user clicks on the “Show MyProfile” button, the Controller is triggered. It communicates with the Model to get the appropriate data. Then, it shows a new View that resembles the profile page. The Controller may take the data from the Model and feed it directly to the View (as proposed in the above diagram) or let the View fetch the data from the Model itself. MVP (Model-View-Presenter) In MVP, the View is the one in charge! Each View calls its Presenter or has some events that the Presenter listens to. Views in MVP don’t implement any logic; the Presenter is responsible for implementing all the logic and communicates with the View using some sort of interface. E.g.: When the user clicks the “Save” button, the event handler in the View delegates to the Presenter’s “OnSave” method. The Presenter will do the required logic and any needed communication with the Model, then calls back the View through its interface so that the View can display that the save has been completed. MVC vs. MVP * *MVC doesn’t put the View in charge; Views act as slaves that the Controller can manage and direct. *In MVC, Views are stateless, contrary to Views in MVP where they are stateful and can change over time. *In MVP, Views have no logic and we should keep them as dumb as possible. On the other hand, Views in MVC may have some sort of logic. *In MVP, the Presenter is decoupled from the View and talks to it through an interface. This allows mocking the View in unit tests. *In MVP, Views are completely isolated from the Model. However, in MVC, Views can communicate with the Model to keep up with the most up-to-date data. A: This is an oversimplification of the many variants of these design patterns, but this is how I like to think about the differences between the two. (MVC diagram) (MVP diagram) A: The simplest answer is how the view interacts with the model. In MVP the view is updated by the presenter, which acts as an intermediary between the view and the model. The presenter takes the input from the view, retrieves the data from the model, performs any business logic required, and then updates the view. In MVC the model updates the view directly rather than going back through the controller. A: There is this nice video from Uncle Bob where he briefly explains MVC & MVP at the end. IMO, MVP is an improved version of MVC where you basically separate the concern of what you're gonna show (the data) from how you're gonna show it (the view).
The presenter includes, in a sense, the business logic of your UI; it implicitly imposes what data should be presented and gives you a list of dumb view models. And when the time comes to show the data, you simply plug your view (which probably includes the same IDs) into your adapter and set the relevant view fields using those view models, with a minimum amount of code being introduced (just using setters). Its main benefit is you can test your UI business logic against many/various views, like showing items in a horizontal list or a vertical list. In MVC, we talk through interfaces (boundaries) to glue different layers. A controller is a plug-in to our architecture, but it has no such restriction imposing what to show. In that sense, MVP is kind of an MVC with a concept of views being pluggable to the controller over adapters. I hope this helps better. A: I blogged about this a while back, quoting Todd Snyder's excellent post on the difference between the two: Here are the key differences between the patterns: MVP Pattern * *View is more loosely coupled to the model. The presenter is responsible for binding the model to the view. *Easier to unit test because interaction with the view is through an interface *Usually view to presenter maps one to one. Complex views may have multiple presenters. MVC Pattern * *Controllers are based on behaviors and can be shared across views *Can be responsible for determining which view to display It is the best explanation on the web I could find. A: Model-View-Controller MVC is a pattern for the architecture of a software application. It separates the application logic into three separate parts, promoting modularity and ease of collaboration and reuse. It also makes applications more flexible and welcoming to iterations. It separates an application into the following components: * *Models for handling data and business logic *Controllers for handling the user interface and application *Views for handling graphical user interface objects and presentation To make this a little more clear, let's imagine a simple shopping list app. All we want is a list of the name, quantity and price of each item we need to buy this week. Below we'll describe how we could implement some of this functionality using MVC. Model-View-Presenter * *The model is the data that will be displayed in the view (user interface). *The view is an interface that displays data (the model) and routes user commands (events) to the Presenter to act upon that data. The view usually has a reference to its Presenter. *The Presenter is the “middle-man” (played by the controller in MVC) and has references to both view and model. Please note that the word “Model” is misleading. It should rather be business logic that retrieves or manipulates a Model. For instance: If you have a database storing Users in a database table and your View wants to display a list of users, then the Presenter would have a reference to your database business logic (like a DAO) from which the Presenter will query a list of Users. If you want to see a sample with a simple implementation, please check this GitHub post. A concrete workflow of querying and displaying a list of users from a database could work like this: (diagram omitted) What is the difference between MVC and MVP patterns? MVC Pattern * *Controllers are based on behaviors and can be shared across views *Can be responsible for determining which view to display (Front Controller Pattern) MVP Pattern * *View is more loosely coupled to the model.
The presenter is responsible for binding the model to the view. *Easier to unit test because interaction with the view is through an interface *Usually view to presenter maps one to one. Complex views may have multiple presenters. A: * *MVP = Model-View-Presenter *MVC = Model-View-Controller * *Both presentation patterns. They separate the dependencies between a Model (think Domain objects), your screen/web page (the View), and how your UI is supposed to behave (Presenter/Controller) *They are fairly similar in concept; folks initialize the Presenter/Controller differently depending on taste. *A great article on the differences is here. Most notable is that the MVC pattern has the Model updating the View. A: Also worth remembering is that there are different types of MVPs as well. Fowler has broken the pattern into two - Passive View and Supervising Controller. When using Passive View, your View typically implements a fine-grained interface with properties mapping more or less directly to the underlying UI widget. For instance, you might have an ICustomerView with properties like Name and Address. Your implementation might look something like this:

public class CustomerView : ICustomerView
{
    public string Name
    {
        get { return txtName.Text; }
        set { txtName.Text = value; }
    }
}

Your Presenter class will talk to the model and "map" it to the view. This approach is called the "Passive View". The benefit is that the view is easy to test, and it is easier to move between UI platforms (Web, Windows/XAML, etc.). The disadvantage is that you can't leverage things like databinding (which is really powerful in frameworks like WPF and Silverlight). The second flavor of MVP is the Supervising Controller. In that case your View might have a property called Customer, which then again is databound to the UI widgets. You don't have to think about synchronizing and micro-managing the view, and the Supervising Controller can step in and help when needed, for instance with complex interaction logic. The third "flavor" of MVP (or someone would perhaps call it a separate pattern) is the Presentation Model (sometimes referred to as Model-View-ViewModel). Compared to MVP, you "merge" the M and the P into one class. You have your customer object which your UI widgets are data bound to, but you also have additional UI-specific fields like "IsButtonEnabled", or "IsReadOnly", etc. I think the best resource I've found on UI architecture is the series of blog posts done by Jeremy Miller over at The Build Your Own CAB Series Table of Contents. He covered all the flavors of MVP and showed C# code to implement them. I have also blogged about the Model-View-ViewModel pattern in the context of Silverlight over at YouCard Re-visited: Implementing the ViewModel pattern. A: You forgot about Action-Domain-Responder (ADR). As explained in some graphics above, there's a direct relation/link between the Model and the View in MVC. An action is performed on the Controller, which will execute an action on the Model. That action in the Model will trigger a reaction in the View. The View is always updated when the Model's state changes. Some people keep forgetting that MVC was created in the late '70s, and that the Web was only created in the late '80s/early '90s. MVC wasn't originally created for the Web, but for Desktop applications instead, where the Controller, Model and View would co-exist together. Because we use web frameworks (e.g.
Laravel) that still use the same naming conventions (model-view-controller), we tend to think that it must be MVC, but it's actually something else. Instead, have a look at Action-Domain-Responder. In ADR, the Controller gets an Action, which will perform an operation in the Model/Domain. So far, the same. The difference is that it then collects that operation's response/data and passes it to a Responder (e.g. view()) for rendering. When a new action is requested on the same component, the Controller is called again, and the cycle repeats itself. In ADR, there's no connection between the Model/Domain and the View (the Responder's response). Note: Wikipedia states that "Each ADR action, however, is represented by separate classes or closures." This is not necessarily true. Several Actions can be in the same Controller, and the pattern is still the same. A: Here are illustrations which represent the communication flow (diagrams omitted) A: Model-View-Presenter In MVP, the Presenter contains the UI business logic for the View. All invocations from the View delegate directly to the Presenter. The Presenter is also decoupled directly from the View and talks to it through an interface. This is to allow mocking of the View in a unit test. One common attribute of MVP is that there has to be a lot of two-way dispatching. For example, when someone clicks the "Save" button, the event handler delegates to the Presenter's "OnSave" method. Once the save is completed, the Presenter will then call back the View through its interface so that the View can display that the save has completed. MVP tends to be a very natural pattern for achieving separated presentation in WebForms. The reason is that the View is always created first by the ASP.NET runtime. You can find out more about both variants. Two primary variations Passive View: The View is as dumb as possible and contains almost zero logic. A Presenter is a middle man that talks to the View and the Model. The View and Model are completely shielded from one another. The Model may raise events, but the Presenter subscribes to them for updating the View. In Passive View there is no direct data binding; instead, the View exposes setter properties that the Presenter uses to set the data. All state is managed in the Presenter and not the View. * *Pro: maximum testability surface; clean separation of the View and Model *Con: more work (for example all the setter properties) as you are doing all the data binding yourself. Supervising Controller: The Presenter handles user gestures. The View binds to the Model directly through data binding. In this case, it's the Presenter's job to pass off the Model to the View so that it can bind to it. The Presenter will also contain logic for gestures like pressing a button, navigation, etc. * *Pro: by leveraging data binding the amount of code is reduced. *Con: there's less testable surface (because of data binding), and there's less encapsulation in the View since it talks directly to the Model. Model-View-Controller In MVC, the Controller is responsible for determining which View to display in response to any action, including when the application loads. This differs from MVP where actions route through the View to the Presenter. In MVC, every action in the View correlates with a call to a Controller along with an action. In the web, each action involves a call to a URL, on the other side of which there is a Controller who responds.
Once that Controller has completed its processing, it will return the correct View. The sequence continues in that manner throughout the life of the application: Action in the View -> Call to Controller -> Controller Logic -> Controller returns the View. One other big difference about MVC is that the View does not directly bind to the Model. The view simply renders and is completely stateless. In implementations of MVC, the View usually will not have any logic in the code behind. This is contrary to MVP where it is absolutely necessary because, if the View does not delegate to the Presenter, it will never get called. Presentation Model One other pattern to look at is the Presentation Model pattern. In this pattern, there is no Presenter. Instead, the View binds directly to a Presentation Model. The Presentation Model is a Model crafted specifically for the View. This means this Model can expose properties that one would never put on a domain model as it would be a violation of separation-of-concerns. In this case, the Presentation Model binds to the domain model and may subscribe to events coming from that Model. The View then subscribes to events coming from the Presentation Model and updates itself accordingly. The Presentation Model can expose commands which the view uses for invoking actions. The advantage of this approach is that you can essentially remove the code-behind altogether as the PM completely encapsulates all of the behavior for the view. This pattern is a very strong candidate for use in WPF applications and is also called Model-View-ViewModel. There is an MSDN article about the Presentation Model and a section in the Composite Application Guidance for WPF (formerly Prism) about Separated Presentation Patterns. A: Both of these patterns aim to separate concerns - for instance, interaction with a data source (model), application logic (or turning this data into useful information) (Controller/Presenter) and display code (View). In some cases the model can also be used to turn a data source into a higher-level abstraction. A good example of this is the MVC Storefront project. There is a discussion here regarding the differences between MVC and MVP. The distinction made is that an MVC application traditionally has the view and the controller interact with the model, but not with each other. MVP designs have the Presenter access the model and interact with the view. Having said that, ASP.NET MVC is by these definitions an MVP framework because the Controller accesses the Model to populate the View, which is meant to have no logic (it just displays the variables provided by the Controller). To perhaps get an idea of the ASP.NET MVC distinction from MVP, check out this MIX presentation by Scott Hanselman. A: In a few words, * *In MVC, the View has the UI part, which calls the controller, which in turn calls the model, and the model in turn fires events back to the view. *In MVP, the View contains the UI and calls the presenter for the implementation part. The presenter calls the view directly for updates to the UI part. The Model, which contains the business logic, is called by the presenter and has no interaction whatsoever with the view. So here the presenter does most of the work :) A: MVP is not necessarily a scenario where the View is in charge (see Taligent's MVP for example). I find it unfortunate that people are still preaching this as a pattern (View in charge) as opposed to an anti-pattern, as it contradicts "It's just a view" (Pragmatic Programmer). "It's just a view" states that the final view shown to the user is a secondary concern of the application. Microsoft's MVP pattern renders re-use of Views much more difficult and conveniently excuses Microsoft's designer from encouraging bad practice. To be perfectly frank, I think the underlying concerns of MVC hold true for any MVP implementation and the differences are almost entirely semantic. As long as you are following separation of concerns between the view (that displays the data), the controller (that initialises and controls user interaction) and the model (the underlying data and/or services), then you are achieving the benefits of MVC. If you are achieving the benefits then who really cares whether your pattern is MVC, MVP or Supervising Controller? The only real pattern remains as MVC; the rest are just differing flavours of it. Consider this highly exciting article that comprehensively lists a number of these differing implementations. You may note that they're all basically doing the same thing but slightly differently. I personally think MVP has only been recently re-introduced as a catchy term to either reduce arguments between semantic bigots who argue whether something is truly MVC or not, or to justify Microsoft's Rapid Application Development tools. Neither of these reasons in my books justifies its existence as a separate design pattern. A: Both are patterns trying to separate presentation and business logic, decoupling business logic from UI aspects. Architecturally, MVP is a Page Controller based approach whereas MVC is a Front Controller based approach. That means that in MVP the standard web form page life cycle is just enhanced by extracting the business logic from the code behind; in other words, the page is the one servicing the HTTP request, and MVP, IMHO, is an evolutionary type of enhancement of web forms. MVC, on the other hand, changes the game completely, because the request gets intercepted by the controller class before the page is loaded, the business logic is executed there, and then at the end the result of the controller's processing is just dumped to the page (the "view"). In that sense, MVC looks (at least to me) a lot like the Supervising Controller flavor of MVP enhanced with a routing engine. Both of them enable TDD and have downsides and upsides. The decision on how to choose one of them, IMHO, should be based on how much time one has invested in ASP.NET web form type of web development. If one considers oneself good at web forms, I would suggest MVP. If one feels not so comfortable with things such as the page life cycle etc., MVC could be the way to go here. Here's yet another blog post link giving a little bit more detail on this topic: http://blog.vuscode.com/malovicn/archive/2007/12/18/model-view-presenter-mvp-vs-model-view-controller-mvc.aspx A: MVP: the view is in charge. The view, in most cases, creates its presenter. The presenter will interact with the model and manipulate the view through an interface. The view will sometimes interact with the presenter, usually through some interface. This comes down to implementation; do you want the view to call methods on the presenter or do you want the view to have events the presenter listens to? It boils down to this: The view knows about the presenter. The view delegates to the presenter. MVC: the controller is in charge. The controller is created or accessed based on some event/request. The controller then creates the appropriate view and interacts with the model to further configure the view.
It boils down to: the controller creates and manages the view; the view is slave to the controller. The view does not know about the controller. A: I have used both MVP and MVC, and although we as developers tend to focus on the technical differences of both patterns, the point for MVP, IMHO, is much more related to ease of adoption than anything else. If I’m working in a team that already has a good background in the web forms development style, it’s far easier to introduce MVP than MVC. I would say that MVP in this scenario is a quick win. My experience tells me that moving a team from web forms to MVP and then from MVP to MVC is relatively easy; moving from web forms to MVC is more difficult. I leave here a link to a series of articles a friend of mine has published about MVP and MVC. http://www.qsoft.be/post/Building-the-MVP-StoreFront-Gutthrie-style.aspx A: MVP MVP stands for Model-View-Presenter. This came into the picture in early 2007, when Microsoft introduced Smart Client Windows applications. The presenter acts in a supervisory role in MVP, binding View events to business logic from the models. View event binding will be implemented in the Presenter from a view interface. The view is the initiator for user inputs and then delegates the events to the Presenter, and the presenter handles event bindings and gets data from the models. Pros: The view contains only UI, not any logic; high level of testability. Cons: A bit complex and more work when implementing event bindings. MVC MVC stands for Model-View-Controller. The Controller is responsible for creating models and rendering views with bound models. The Controller is the initiator, and it decides which view to render. Pros: Emphasis on the Single Responsibility Principle; high level of testability. Cons: Sometimes too much workload for Controllers, if they try to render multiple views in the same controller.
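To make the MVP event flow described above concrete, here is a minimal passive-view sketch in C#; every type name (ICustomerView, CustomerPresenter, CustomerRepository, Customer) is illustrative rather than taken from any framework:

using System;

public interface ICustomerView
{
    string Name { get; set; }
    event EventHandler SaveClicked;
    void ShowSaveConfirmation();
}

public class Customer { public string Name { get; set; } }
public class CustomerRepository { public void Save(Customer c) { /* persist somewhere */ } }

public class CustomerPresenter
{
    private readonly ICustomerView view;
    private readonly CustomerRepository repository; // stand-in for model access

    public CustomerPresenter(ICustomerView view, CustomerRepository repository)
    {
        this.view = view;
        this.repository = repository;
        // The presenter reacts to view events; the view never touches the model.
        view.SaveClicked += (sender, args) => OnSave();
    }

    private void OnSave()
    {
        repository.Save(new Customer { Name = view.Name });
        view.ShowSaveConfirmation(); // call back through the interface
    }
}

Because the presenter only ever sees ICustomerView, a unit test can hand it a fake view and assert that ShowSaveConfirmation was invoked - which is exactly the testability benefit the answers above describe.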
{ "language": "en", "url": "https://stackoverflow.com/questions/2056", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2335" }
Q: How to get started writing a code coverage tool? Looking for books or other references that discuss actually how to write a code coverage tool in Java; some of the various techniques or tricks - source vs. byte code instrumentation. This is for a scripting language that generates Java byte code under the hood. A: Thx, Mc! http://asm.objectweb.org/ is another one. Excellent documentation on byte code instrumentation, but nothing "directly" aimed at writing a coverage tool - just some hints or ideas. A: You can also get the source from an open source code coverage tool and learn from it. A: You might also want to use something like BCEL to analyse which lines of source actually exist in the byte-code. You don't want to report that things like blank lines and comments haven't been covered. A: If you're talking about ColdFusion (which I assume you are from the tags) then I'm not sure this is doable, but I may be very wrong here... IIRC, when CF compiles, it essentially compiles an interpreted form of the CFML into a plain old Java source file, which is then compiled into the class. Therefore, any instrumentation that you may have will apply to the intermediary version rather than the CFML itself. Saying that though, Adobe have got the CF debugger now which can step through code, so please prove me wrong - I'd love code coverage in CFML. A: Does your scripting language generate bytecode? Does it generate debug metadata? If so, bytecode instrumentation is probably the way to go. In fact existing tools will probably work (perhaps with minimal modification). The typical problem with such tools is that they are written to work with Java and assume that a class com.foo.Bar.class corresponds to a file com/foo/Bar.java. Unwinding that assumption can be tedious. EMMA is a ClassLoader that does byte-code re-writing for code-coverage collection in Java. The coding style is a little funky, but I recommend reading the source code for some ideas. If your scripting language is interpreted then you will need a higher-level class loader (at a source level) that hooks into the interpreter.
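To give a feel for what bytecode instrumentation looks like in practice, here is a minimal sketch using the ASM visitor API mentioned above (assuming ASM 5+; Coverage.hit is a hypothetical recorder class you would write yourself to mark a line as executed):

import org.objectweb.asm.*;

// Rewrites a class so that every source line reports itself when executed.
// Relies on debug metadata: without line-number tables there is nothing to hook.
public class CoverageInstrumenter extends ClassVisitor {

    public CoverageInstrumenter(ClassVisitor next) {
        super(Opcodes.ASM5, next);
    }

    @Override
    public MethodVisitor visitMethod(int access, String name, String desc,
                                     String signature, String[] exceptions) {
        MethodVisitor base = super.visitMethod(access, name, desc, signature, exceptions);
        return new MethodVisitor(Opcodes.ASM5, base) {
            @Override
            public void visitLineNumber(int line, Label start) {
                super.visitLineNumber(line, start);
                // Inject the equivalent of: Coverage.hit(line);
                super.visitLdcInsn(line);
                super.visitMethodInsn(Opcodes.INVOKESTATIC,
                        "Coverage", "hit", "(I)V", false);
            }
        };
    }

    // Typical driver: read the original bytes, run them through the visitor chain.
    public static byte[] instrument(byte[] classBytes) {
        ClassReader reader = new ClassReader(classBytes);
        ClassWriter writer = new ClassWriter(reader, ClassWriter.COMPUTE_MAXS);
        reader.accept(new CoverageInstrumenter(writer), 0);
        return writer.toByteArray();
    }
}

A coverage tool is then mostly plumbing around this: a ClassLoader (or -javaagent) that funnels classes through instrument(), plus a report that maps the recorded line hits back to source files.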
{ "language": "en", "url": "https://stackoverflow.com/questions/2092", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "30" }
Q: Convert HashBytes to VarChar I want to get the MD5 Hash of a string value in SQL Server 2005. I do this with the following command: SELECT HashBytes('MD5', 'HelloWorld') However, this returns a VarBinary instead of a VarChar value. If I attempt to convert 0x68E109F0F40CA72A15E05CC22786F8E6 into a VarChar I get há ðô§*à\Â'†øæ instead of 68E109F0F40CA72A15E05CC22786F8E6. Is there any SQL-based solution? Yes A: Use master.dbo.fn_varbintohexsubstring(0, HashBytes('SHA1', @input), 1, 0) instead of master.dbo.fn_varbintohexstr and then substringing the result. In fact fn_varbintohexstr calls fn_varbintohexsubstring internally. The first argument of fn_varbintohexsubstring tells it whether or not to add 0x as the prefix; fn_varbintohexstr calls fn_varbintohexsubstring with 1 as the first argument internally. Because you don't need the 0x, call fn_varbintohexsubstring directly. A: Contrary to what David Knight says, these two alternatives return the same response in MS SQL 2008: SELECT CONVERT(VARCHAR(32),HashBytes('MD5', 'Hello World'),2) SELECT UPPER(master.dbo.fn_varbintohexsubstring(0, HashBytes('MD5', 'Hello World'), 1, 0)) So it looks like the first one is a better choice, starting from version 2008. A: I have found the solution elsewhere: SELECT SUBSTRING(master.dbo.fn_varbintohexstr(HashBytes('MD5', 'HelloWorld')), 3, 32) A: convert(varchar(34), HASHBYTES('MD5','Hello World'),1) (style 1 converts the binary to a hexadecimal string, keeping the 0x prefix - hence varchar(34)). Convert this to lowercase and remove the 0x from the start of the string by substring: substring(lower(convert(varchar(34), HASHBYTES('MD5','Hello World'),1)),3,32) Exactly the same as what we get in C# after converting bytes to a string. A: SELECT CONVERT(NVARCHAR(32),HashBytes('MD5', 'Hello World'),2) A: With personal experience of using the following code within a Stored Procedure which hashed an SP variable, I can confirm, although undocumented, this combination works 100% as per my example: @var=SUBSTRING(master.dbo.fn_varbintohexstr(HashBytes('SHA2_512', @SPvar)), 3, 128) A: Changing the datatype to varbinary seems to work the best for me.
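Putting the pieces together, a quick sanity check that should work on SQL Server 2008 and later (style 2 renders VARBINARY as hex without the 0x prefix; the expected value is the one quoted in the question):

DECLARE @input VARCHAR(100) = 'HelloWorld';

SELECT CONVERT(VARCHAR(32), HashBytes('MD5', @input), 2) AS Md5Hex;
-- Expected: 68E109F0F40CA72A15E05CC22786F8E6

On SQL Server 2005 itself, the binary CONVERT styles are not available, so the fn_varbintohexstr/fn_varbintohexsubstring answers above are the ones to use.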
{ "language": "en", "url": "https://stackoverflow.com/questions/2120", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "137" }
Q: How do I make a checkbox toggle from clicking on the text label as well? Checkboxes in HTML forms don't have implicit labels with them. Adding an explicit label (some text) next to it doesn't toggle the checkbox. How do I make a checkbox toggle from clicking on the text label as well? A: If you correctly mark up your HTML code, there is no need for javascript. The following code will allow the user to click on the label text to tick the checkbox.

<label for="surname">Surname</label>
<input type="checkbox" name="surname" id="surname" />

The for attribute on the label element links to the id attribute on the input element and the browser does the rest. This has been tested to work in: * *IE6 *IE7 *Firefox A: Ronnie, If you wanted to enclose the label text and checkbox inside a wrapper element, you could do the following:

<label for="surname">
    Surname
    <input type="checkbox" name="surname" id="surname" />
</label>

A: As indicated by @Gatekiller and others, the correct solution is the <label> tag. Click-in-the-text is nice, but there is another reason to use the <label> tag: accessibility. The tools that visually-impaired people use to access the web need the <label>s to read out the meaning of checkboxes and radio buttons. Without <label>s, they have to guess based on surrounding text, and they often get it wrong or have to give up. It is very frustrating to be faced with a form that reads "Please select your shipping method, radio-button1, radio-button2, radio-button3". Note that web accessibility is a complex topic; <label>s are a necessary step but they are not enough to guarantee accessibility or compliance with government regulations where it applies. A: Set the CSS display property for the label to be a block element and use that instead of your div - it keeps the semantic meaning of a label while allowing whatever styling you like. For example:

label { width: 100px; height: 100px; display: block; background-color: #e0e0ff; }
<label for="test"> A ticky box! <input type="checkbox" id="test" /> </label>

A: You can wrap your checkbox in the label:

<label style="display: block; padding: 50px 0 0 50px; background-color: pink; width: 80px; height: 80px">
    <input type="checkbox" name="surname">
</label>

A: You just need to wrap the checkbox in a label tag, like this:

<label style="height: 10px; width: 150px; display: block;">
    [Checkbox Label Here] <input type="checkbox"/>
</label>

FIDDLE or you can also use the for attribute of the label and the id of your checkbox, like below:

<label for="other">Other Details</label>
<input type="checkbox" id="other" />

FIDDLE A: Wrapping with the label still doesn't allow clicking 'anywhere in the box' - still just on the text! This does the job for me:

<div onclick="dob.checked=!dob.checked" class="checkbox"><input onclick="checked=!checked" id="dob" type="checkbox"/>Date of birth entry must be completed</div>

but unfortunately it has lots of javascript that is effectively toggling twice. A: This should work:

<script>
function toggleCheckbox() {
  // The id must be on the checkbox itself so we can read and flip .checked
  var box = document.getElementById("myCheck");
  box.checked = !box.checked;
}
</script>
<input type="checkbox" id="myCheck"><p onclick="toggleCheckbox();">checkbox</p>

If it doesn't, please correct me!
{ "language": "en", "url": "https://stackoverflow.com/questions/2123", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "46" }
Q: Do sealed classes really offer performance benefits? I have come across a lot of optimization tips which say that you should mark your classes as sealed to get extra performance benefits. I ran some tests to check the performance differential and found none. Am I doing something wrong? Am I missing the case where sealed classes will give better results? Has anyone run tests and seen a difference? Help me learn :) A: The JITter will sometimes use non-virtual calls to methods in sealed classes since there is no way they can be extended further. There are complex rules regarding calling type, virtual/nonvirtual, and I don't know them all so I can't really outline them for you, but if you google for sealed classes and virtual methods you might find some articles on the topic. Note that any kind of performance benefit you would obtain from this level of optimization should be regarded as a last resort; always optimize on the algorithmic level before you optimize on the code level. Here's one link mentioning this: Rambling on the sealed keyword A: <off-topic-rant> I loathe sealed classes. Even if the performance benefits are astounding (which I doubt), they destroy the object-oriented model by preventing reuse via inheritance. For example, the Thread class is sealed. While I can see that one might want threads to be as efficient as possible, I can also imagine scenarios where being able to subclass Thread would have great benefits. Class authors, if you must seal your classes for "performance" reasons, please provide an interface at the very least so we don't have to wrap-and-replace everywhere that we need a feature you forgot. Example: SafeThread had to wrap the Thread class because Thread is sealed and there is no IThread interface; SafeThread automatically traps unhandled exceptions on threads, something completely missing from the Thread class. [and no, the unhandled exception events do not pick up unhandled exceptions in secondary threads]. </off-topic-rant> A: Marking a class sealed should have no performance impact. There are cases where csc might have to emit a callvirt opcode instead of a call opcode. However, it seems those cases are rare. And it seems to me that the JIT should be able to emit the same non-virtual function call for callvirt that it would for call, if it knows that the class doesn't have any subclasses (yet). If only one implementation of the method exists, there's no point loading its address from a vtable—just call the one implementation directly. For that matter, the JIT can even inline the function. It's a bit of a gamble on the JIT's part, because if a subclass is later loaded, the JIT will have to throw away that machine code and compile the code again, emitting a real virtual call. My guess is this doesn't happen often in practice. (And yes, VM designers really do aggressively pursue these tiny performance wins.) A: Update: As of .NET Core 2.0 and .NET Desktop 4.7.1, the CLR now supports devirtualization. It can take methods in sealed classes and replace virtual calls with direct calls - and it can also do this for non-sealed classes if it can figure out it's safe to do so. In such a case (a sealed class that the CLR couldn't otherwise detect as safe to devirtualise), a sealed class should actually offer some kind of performance benefit.
That said, I wouldn't think it'd be worth worrying about unless you had already profiled the code and determined that you were in a particularly hot path being called millions of times, or something like that: https://blogs.msdn.microsoft.com/dotnet/2017/06/29/performance-improvements-in-ryujit-in-net-core-and-net-framework/ Original Answer: I made the following test program, and then decompiled it using Reflector to see what MSIL code was emitted. public class NormalClass { public void WriteIt(string x) { Console.WriteLine("NormalClass"); Console.WriteLine(x); } } public sealed class SealedClass { public void WriteIt(string x) { Console.WriteLine("SealedClass"); Console.WriteLine(x); } } public static void CallNormal() { var n = new NormalClass(); n.WriteIt("a string"); } public static void CallSealed() { var n = new SealedClass(); n.WriteIt("a string"); } In all cases, the C# compiler (Visual studio 2010 in Release build configuration) emits identical MSIL, which is as follows: L_0000: newobj instance void <NormalClass or SealedClass>::.ctor() L_0005: stloc.0 L_0006: ldloc.0 L_0007: ldstr "a string" L_000c: callvirt instance void <NormalClass or SealedClass>::WriteIt(string) L_0011: ret The oft-quoted reason that people say sealed provides performance benefits is that the compiler knows the class isn't overriden, and thus can use call instead of callvirt as it doesn't have to check for virtuals, etc. As proven above, this is not true. My next thought was that even though the MSIL is identical, perhaps the JIT compiler treats sealed classes differently? I ran a release build under the visual studio debugger and viewed the decompiled x86 output. In both cases, the x86 code was identical, with the exception of class names and function memory addresses (which of course must be different). Here it is // var n = new NormalClass(); 00000000 push ebp 00000001 mov ebp,esp 00000003 sub esp,8 00000006 cmp dword ptr ds:[00585314h],0 0000000d je 00000014 0000000f call 70032C33 00000014 xor edx,edx 00000016 mov dword ptr [ebp-4],edx 00000019 mov ecx,588230h 0000001e call FFEEEBC0 00000023 mov dword ptr [ebp-8],eax 00000026 mov ecx,dword ptr [ebp-8] 00000029 call dword ptr ds:[00588260h] 0000002f mov eax,dword ptr [ebp-8] 00000032 mov dword ptr [ebp-4],eax // n.WriteIt("a string"); 00000035 mov edx,dword ptr ds:[033220DCh] 0000003b mov ecx,dword ptr [ebp-4] 0000003e cmp dword ptr [ecx],ecx 00000040 call dword ptr ds:[0058827Ch] // } 00000046 nop 00000047 mov esp,ebp 00000049 pop ebp 0000004a ret I then thought perhaps running under the debugger causes it to perform less aggressive optimization? I then ran a standalone release build executable outside of any debugging environments, and used WinDBG + SOS to break in after the program had completed, and view the dissasembly of the JIT compiled x86 code. As you can see from the code below, when running outside the debugger the JIT compiler is more aggressive, and it has inlined the WriteIt method straight into the caller. The crucial thing however is that it was identical when calling a sealed vs non-sealed class. There is no difference whatsoever between a sealed or nonsealed class. 
Here it is when calling a normal class: Normal JIT generated code Begin 003c00b0, size 39 003c00b0 55 push ebp 003c00b1 8bec mov ebp,esp 003c00b3 b994391800 mov ecx,183994h (MT: ScratchConsoleApplicationFX4.NormalClass) 003c00b8 e8631fdbff call 00172020 (JitHelp: CORINFO_HELP_NEWSFAST) 003c00bd e80e70106f call mscorlib_ni+0x2570d0 (6f4c70d0) (System.Console.get_Out(), mdToken: 060008fd) 003c00c2 8bc8 mov ecx,eax 003c00c4 8b1530203003 mov edx,dword ptr ds:[3302030h] ("NormalClass") 003c00ca 8b01 mov eax,dword ptr [ecx] 003c00cc 8b403c mov eax,dword ptr [eax+3Ch] 003c00cf ff5010 call dword ptr [eax+10h] 003c00d2 e8f96f106f call mscorlib_ni+0x2570d0 (6f4c70d0) (System.Console.get_Out(), mdToken: 060008fd) 003c00d7 8bc8 mov ecx,eax 003c00d9 8b1534203003 mov edx,dword ptr ds:[3302034h] ("a string") 003c00df 8b01 mov eax,dword ptr [ecx] 003c00e1 8b403c mov eax,dword ptr [eax+3Ch] 003c00e4 ff5010 call dword ptr [eax+10h] 003c00e7 5d pop ebp 003c00e8 c3 ret Vs a sealed class: Normal JIT generated code Begin 003c0100, size 39 003c0100 55 push ebp 003c0101 8bec mov ebp,esp 003c0103 b90c3a1800 mov ecx,183A0Ch (MT: ScratchConsoleApplicationFX4.SealedClass) 003c0108 e8131fdbff call 00172020 (JitHelp: CORINFO_HELP_NEWSFAST) 003c010d e8be6f106f call mscorlib_ni+0x2570d0 (6f4c70d0) (System.Console.get_Out(), mdToken: 060008fd) 003c0112 8bc8 mov ecx,eax 003c0114 8b1538203003 mov edx,dword ptr ds:[3302038h] ("SealedClass") 003c011a 8b01 mov eax,dword ptr [ecx] 003c011c 8b403c mov eax,dword ptr [eax+3Ch] 003c011f ff5010 call dword ptr [eax+10h] 003c0122 e8a96f106f call mscorlib_ni+0x2570d0 (6f4c70d0) (System.Console.get_Out(), mdToken: 060008fd) 003c0127 8bc8 mov ecx,eax 003c0129 8b1534203003 mov edx,dword ptr ds:[3302034h] ("a string") 003c012f 8b01 mov eax,dword ptr [ecx] 003c0131 8b403c mov eax,dword ptr [eax+3Ch] 003c0134 ff5010 call dword ptr [eax+10h] 003c0137 5d pop ebp 003c0138 c3 ret To me, this provides solid proof that there is no performance difference between calling methods on sealed vs non-sealed classes... I think I'm happy now :-) A: Sealed classes should provide a performance improvement. Since a sealed class cannot be derived from, any virtual members can be turned into non-virtual members. Of course, we're talking really small gains. I wouldn't mark a class as sealed just to get a performance improvement unless profiling revealed it to be a problem. A: I consider "sealed" classes the normal case and I ALWAYS need a reason to omit the "sealed" keyword. The most important reasons for me are: a) Better compile-time checks (casting to interfaces not implemented will be detected at compile time, not only at runtime) and, top reason: b) Abuse of my classes is not possible that way I wish Microsoft had made "sealed" the standard, not "unsealed". A: As far as I know, there is no guarantee of a performance benefit. But there is a chance of decreasing a performance penalty under some specific conditions with sealed methods. (A sealed class makes all of its methods sealed.) But it's up to the compiler implementation and execution environment. Details Many modern CPUs use a long pipeline structure to increase performance. Because the CPU is so much faster than memory, the CPU has to prefetch code from memory to keep the pipeline full. If the code is not ready at the proper time, the pipeline will sit idle. There is a big obstacle called dynamic dispatch which disrupts this 'prefetching' optimization. You can think of it as just a conditional branch.
// The value of `v` is unknown, // and can be resolved only at runtime. // The CPU cannot know which code to prefetch. // Therefore, it just prefetches one of a() or b(). // This is *speculative execution*. int v = random(); if (v==1) a(); else b(); The CPU cannot prefetch the next code to execute in this case because the next code position is unknown until the condition is resolved. This hazard causes the pipeline to idle, and the performance penalty of an idle pipeline is usually huge. A similar thing happens with method overriding. The compiler may be able to determine the proper method override for the current method call, but sometimes that's impossible. In that case, the proper method can be determined only at runtime. This is also a case of dynamic dispatch, and a main reason dynamically-typed languages are generally slower than statically-typed ones. Some CPUs (including recent Intel x86 chips) use a technique called speculative execution to utilize the pipeline even in this situation: just prefetch one of the execution paths. But the hit rate of this technique is not that high, and a speculation failure causes a pipeline stall, which also carries a huge performance penalty. (This is entirely up to the CPU implementation; some mobile CPUs are known to skip this kind of optimization to save energy.) Basically, C# is a statically compiled language. But not always. I don't know the exact conditions, and this is entirely up to the compiler implementation. Some compilers can eliminate the possibility of dynamic dispatch by preventing method overriding if the method is marked as sealed. Naive compilers may not. This is the performance benefit of sealed. This answer (Why is it faster to process a sorted array than an unsorted array?) describes branch prediction a lot better. A: sealed classes will be at least a tiny bit faster, but sometimes can be waayyy faster... if the JIT optimizer can inline calls that would have otherwise been virtual calls. So, where there are oft-called methods that are small enough to be inlined, definitely consider sealing the class. However, the best reason to seal a class is to say "I didn't design this to be inherited from, so I'm not going to let you get burned by assuming it was designed to be so, and I'm not going to burn myself by getting locked into an implementation because I let you derive from it." I know some here have said they hate sealed classes because they want the opportunity to derive from anything... but that is OFTEN not the most maintainable choice... because exposing a class to derivation locks you in a lot more than not exposing all that. It's similar to saying "I loathe classes that have private members... I often can't make the class do what I want because I don't have access." Encapsulation is important... sealing is one form of encapsulation. A: To really see the differences you need to analyze the JIT-compiled code (the last listing below).
C# Code public sealed class Sealed { public string Message { get; set; } public void DoStuff() { } } public class Derived : Base { public sealed override void DoStuff() { } } public class Base { public string Message { get; set; } public virtual void DoStuff() { } } static void Main() { Sealed sealedClass = new Sealed(); sealedClass.DoStuff(); Derived derivedClass = new Derived(); derivedClass.DoStuff(); Base BaseClass = new Base(); BaseClass.DoStuff(); } MSIL Code .method private hidebysig static void Main() cil managed { .entrypoint // Code size 41 (0x29) .maxstack 8 IL_0000: newobj instance void ConsoleApp1.Program/Sealed::.ctor() IL_0005: callvirt instance void ConsoleApp1.Program/Sealed::DoStuff() IL_000a: newobj instance void ConsoleApp1.Program/Derived::.ctor() IL_000f: callvirt instance void ConsoleApp1.Program/Base::DoStuff() IL_0014: newobj instance void ConsoleApp1.Program/Base::.ctor() IL_0019: callvirt instance void ConsoleApp1.Program/Base::DoStuff() IL_0028: ret } // end of method Program::Main JIT-Compiled Code --- C:\Users\Ivan Porta\source\repos\ConsoleApp1\Program.cs -------------------- { 0066084A in al,dx 0066084B push edi 0066084C push esi 0066084D push ebx 0066084E sub esp,4Ch 00660851 lea edi,[ebp-58h] 00660854 mov ecx,13h 00660859 xor eax,eax 0066085B rep stos dword ptr es:[edi] 0066085D cmp dword ptr ds:[5842F0h],0 00660864 je 0066086B 00660866 call 744CFAD0 0066086B xor edx,edx 0066086D mov dword ptr [ebp-3Ch],edx 00660870 xor edx,edx 00660872 mov dword ptr [ebp-48h],edx 00660875 xor edx,edx 00660877 mov dword ptr [ebp-44h],edx 0066087A xor edx,edx 0066087C mov dword ptr [ebp-40h],edx 0066087F nop Sealed sealedClass = new Sealed(); 00660880 mov ecx,584E1Ch 00660885 call 005730F4 0066088A mov dword ptr [ebp-4Ch],eax 0066088D mov ecx,dword ptr [ebp-4Ch] 00660890 call 00660468 00660895 mov eax,dword ptr [ebp-4Ch] 00660898 mov dword ptr [ebp-3Ch],eax sealedClass.DoStuff(); 0066089B mov ecx,dword ptr [ebp-3Ch] 0066089E cmp dword ptr [ecx],ecx 006608A0 call 00660460 006608A5 nop Derived derivedClass = new Derived(); 006608A6 mov ecx,584F3Ch 006608AB call 005730F4 006608B0 mov dword ptr [ebp-50h],eax 006608B3 mov ecx,dword ptr [ebp-50h] 006608B6 call 006604A8 006608BB mov eax,dword ptr [ebp-50h] 006608BE mov dword ptr [ebp-40h],eax derivedClass.DoStuff(); 006608C1 mov ecx,dword ptr [ebp-40h] 006608C4 mov eax,dword ptr [ecx] 006608C6 mov eax,dword ptr [eax+28h] 006608C9 call dword ptr [eax+10h] 006608CC nop Base BaseClass = new Base(); 006608CD mov ecx,584EC0h 006608D2 call 005730F4 006608D7 mov dword ptr [ebp-54h],eax 006608DA mov ecx,dword ptr [ebp-54h] 006608DD call 00660490 006608E2 mov eax,dword ptr [ebp-54h] 006608E5 mov dword ptr [ebp-44h],eax BaseClass.DoStuff(); 006608E8 mov ecx,dword ptr [ebp-44h] 006608EB mov eax,dword ptr [ecx] 006608ED mov eax,dword ptr [eax+28h] 006608F0 call dword ptr [eax+10h] 006608F3 nop } 0066091A nop 0066091B lea esp,[ebp-0Ch] 0066091E pop ebx 0066091F pop esi 00660920 pop edi 00660921 pop ebp 00660922 ret While the creation of the objects is the same, the instructions executed to invoke the methods of the sealed and derived/base classes are slightly different. After moving data into registers or RAM (the mov instructions), the invocation of the sealed method executes a comparison between dword ptr [ecx],ecx (the cmp instruction, a null check) and then calls the method directly, while the derived/base classes load the method's address from the method table and call it indirectly.
According to the report written by Torbjörn Granlund, Instruction latencies and throughput for AMD and Intel x86 processors, the speeds of the following instructions on an Intel Pentium 4 are: * *mov: has 1 cycle as latency and the processor can sustain 2.5 instructions per cycle of this type *cmp: has 1 cycle as latency and the processor can sustain 2 instructions per cycle of this type Link: https://gmplib.org/~tege/x86-timing.pdf This means that, ideally, the time needed to invoke a sealed method is 2 cycles while the time needed to invoke a derived or base class method is 3 cycles. Compiler optimizations have made the difference between the performance of sealed and non-sealed classes so small that we are talking about processor cycles, which is irrelevant for the majority of applications. A: Starting from .NET 6.0 the answer is yes. Sealing a class can help the JIT de-virtualize calls, resulting in less overhead when calling a method. This has additional benefits, because the de-virtualized call can be inlined by the JIT if necessary, which can also lead to constant folding. For example, in this code from the MSDN article: [Benchmark(Baseline = true)] public int NonSealed() => _nonSealed.M() + 42; [Benchmark] public int Sealed() => _sealed.M() + 42; public class BaseType { public virtual int M() => 1; } public class NonSealedType : BaseType { public override int M() => 2; } public sealed class SealedType : BaseType { public override int M() => 2; } The "NonSealed" benchmark runs in 0.9837ns, but the "Sealed" method doesn't take more time than a function that simply returns a constant value. This is due to constant folding. Type checking sealed classes also has performance benefits, like in this code from the MSDN article: private object _o = "hello"; [Benchmark(Baseline = true)] public bool NonSealed() => _o is NonSealedType; [Benchmark] public bool Sealed() => _o is SealedType; public class NonSealedType { } public sealed class SealedType { } Checking against a non-sealed type takes ~1.76ns, while checking the sealed type is only ~0.07ns. In fact, the .NET team made it a policy to seal all the private and internal classes that can be sealed. Notice that we're dealing with saving less than 2 nanoseconds on a call, so the overhead of calling a virtual method is not gonna be the bottleneck most of the time. I think it's more appropriate for simple virtual getters or very short methods. A: The answer was no, sealed classes do not perform better than non-sealed. 2021: The answer is now yes; there are performance benefits to sealing a class. Sealing a class may not always provide a performance boost, but the dotnet team are adopting the rule of sealing all internal classes to give the optimiser the best chance. For details you can read https://devblogs.microsoft.com/dotnet/performance-improvements-in-net-6/#peanut-butter Old answer below. The issue comes down to the call vs callvirt IL op codes. Call is faster than callvirt, and callvirt is mainly used when you don't know if the object has been subclassed. So people assume that if you seal a class all the op codes will change from callvirts to calls and will be faster. Unfortunately callvirt does other things that make it useful too, like checking for null references. This means that even if a class is sealed, the reference might still be null and thus a callvirt is needed. You can get around this (without needing to seal the class), but it becomes a bit pointless.
Structs use call because they cannot be subclassed and are never null. See this question for more information: Call and callvirt A: Run this code and you'll see that sealed classes are 2 times faster: using System; using System.Diagnostics; class Program { static void Main(string[] args) { Console.ReadLine(); var watch = new Stopwatch(); watch.Start(); for (int i = 0; i < 10000000; i++) { new SealedClass().GetName(); } watch.Stop(); Console.WriteLine("Sealed class : {0}", watch.Elapsed.ToString()); watch.Reset(); /* without this reset, the second timing also includes the first loop's elapsed time */ watch.Start(); for (int i = 0; i < 10000000; i++) { new NonSealedClass().GetName(); } watch.Stop(); Console.WriteLine("NonSealed class : {0}", watch.Elapsed.ToString()); Console.ReadKey(); } } sealed class SealedClass { public string GetName() { return "SealedClass"; } } class NonSealedClass { public string GetName() { return "NonSealedClass"; } } output: Sealed class : 00:00:00.1897568 NonSealed class : 00:00:00.3826678
{ "language": "en", "url": "https://stackoverflow.com/questions/2134", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "164" }
Q: LINQ on the .NET 2.0 Runtime Can a LINQ enabled app run on a machine that only has the .NET 2.0 runtime installed? In theory, LINQ is nothing more than syntactic sugar, and the resulting IL code should look the same as it would have in .NET 2.0. How can I write LINQ without using the .NET 3.5 libraries? Will it run on .NET 2.0? A: It's weird that no one has mentioned LINQBridge. This little awesome project is a backport of LINQ (IEnumerable, but without IQueryable) and its dependencies (Func, Action, etc) to .NET 2.0. And: If your project references LINQBridge during compilation, then it will bind to LINQBridge's query operators; if it references System.Core during compilation, then it will bind to Framework 3.5's query operators. A: You can use the LINQ sources from mono (.NET for Linux) to get LINQ running on .NET 2.0. IEnumerable<T> : yes IQueryable<T> : yes LINQ to XML : has been working in the trunk, but due to further additions, the trunk doesn't compile anymore Someone has done it here: LINQ for .NET 2.0 A: Short answer: * *LINQ to Objects: yes (IEnumerable<T>) *LINQ to SQL/Entities: no (IQueryable<T>) *LINQ to XML/DataSets: not yet? See this question about .Net 3.5 features available automatically or with little effort when targeting .Net 2.0 from VS2008. Basically, anything that is only "syntax sugar" and the new compilers (C# 3.0, VB 9.0) emit as 2.0-compatible IL will work. This includes many features used by LINQ such as anonymous classes, lambdas as anonymous delegates, automatic properties, object initializers, and collection initializers. Some LINQ features use classes, interfaces, delegates, and extension methods that live in the new 3.5 assemblies (such as System.Core.dll). Redistributing these assemblies is a license violation, but they could be reimplemented. Using extension methods requires only that you declare an empty System.Runtime.CompilerServices.ExtensionAttribute. LINQ to Objects relies on IEnumerable<T> extensions and several delegate declarations (the Action<T> and Func<T> families), and these have been implemented in LINQBridge (as mausch mentioned). LINQ to XML and LINQ to DataSets rely on LINQ to Objects which I guess could also be implemented for .Net 2.0, but I haven't seen this done yet. LINQ to SQL and LINQ to Entities require many new classes (DataContext/ObjectContext, lots of attributes, EntitySet<T>, EntityRef<T>, Link<T>, IQueryable<T>, etc) and expression trees, which, even if somehow reimplemented, will probably require at least .Net 2.0 SP1 to work. A: I'm not sure about C#. I do know, however, that you can write VB LINQ code w/out the 3.5 libraries as long as you use the VS 2008 compiler to target the 2.0 framework. You will, however, have to implement some of the LINQ methods yourself. LINQ uses a syntactic transformation to translate queries into executable code. Basically, it will take code like this: dim q = from x in xs where x > 2 select x*4; and convert it into code like this: dim q = xs.where(function(x) x > 2).select(function(x) x * 4); For the LINQ functionality that ships with the 3.5 framework, those methods are implemented as extension methods on either IEnumerable or IQueryable (there's also a bunch of methods that work on data sets too). The default IEnumerable extension methods are defined in System.Linq.Enumerable and look like this: <Extension()> public function Select(of T, R)(source as IEnumerable(of T), transform as Func(of T, R)) as IEnumerable(of R) 'do the transformation...
end function The IQueryable extension methods take expression trees as arguments, rather than lambdas. They look like this: <Extension()> public function Select(of T, R)(source as IQueryable(of T), transform as Expression(of Func(of T, R))) as IQueryable(of R) 'build a composite IQueryable that contains the expression tree for the transformation end function The expression tree versions enable you to get a tree representation of the expressions provided to the clauses which can then be used to generate SQL code (or whatever else you want). You could probably create your own version of LINQ to Objects in about a day or so. It's all pretty straightforward. If you want to use DLINQ, then things would be a little bit more difficult. A: There are some "Hacks" that involve using a System.Core.dll from the 3.5 Framework to make it run with .net 2.0, but personally I would not want to use such a somewhat shaky foundation. See here: LINQ support on .NET 2.0 * *Create a new console application *Keep only System and System.Core as referenced assemblies *Set Copy Local to true for System.Core, because it does not exist in .NET 2.0 *Use a LINQ query in the Main method. For example the one below. *Build *Copy all the bin output to a machine where only .NET 2.0 is installed *Run (Requires .net 2.0 SP1 and I have no idea if bundling the System.Core.dll violates the EULA) A: No, because while you thought LINQ is really just syntactic sugar, it actually heavily uses expression trees -- a feature absent in .NET 2.0. That being said, .NET 3.5 just builds on top of .NET 2.0, and that's the reason why the IL doesn't look "different" or "special". I do not see a reason why you shouldn't just install the .NET 3.5 Framework. Everything .NET 2.0 will work fine on it, promise :) A: As far as I know the LINQ library is only available since framework 3.5. If you want to use something similar in the framework 2.0, you would need to rewrite it yourself :) or find a similar third-party library. I only found a bit of information here but it didn't convince me either. A: In theory yes, provided you distribute the LINQ-specific assemblies and any dependencies. However that is in violation of Microsoft's licensing. Scott Hanselman wrote a blog post about Deploying ASP.NET MVC on ASP.NET 2.0 which is similar to what you are wanting to do.
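To make the ExtensionAttribute trick mentioned above concrete, here is a minimal sketch of my own (not from the original answers): declaring the attribute yourself lets the C# 3.0 compiler accept extension methods while targeting .NET 2.0, and a hand-rolled Where shows the shape of a LINQ to Objects operator. EnumerableExtensions and its Where are illustrative stand-ins, not the real System.Linq operators; Predicate<T> is used because .NET 2.0 has no Func<T, TResult> delegate.

using System;
using System.Collections.Generic;

namespace System.Runtime.CompilerServices
{
    // An empty attribute with this exact name is all the C# 3.0 compiler needs
    [AttributeUsage(AttributeTargets.Assembly | AttributeTargets.Class | AttributeTargets.Method)]
    public sealed class ExtensionAttribute : Attribute { }
}

public static class EnumerableExtensions
{
    // A hand-rolled stand-in for the LINQ Where operator
    public static IEnumerable<T> Where<T>(this IEnumerable<T> source, Predicate<T> predicate)
    {
        foreach (T item in source)
            if (predicate(item))
                yield return item;
    }
}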
{ "language": "en", "url": "https://stackoverflow.com/questions/2138", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "61" }
Q: Developing for ASP.NET-MVC without Visual Studio Instead of writing my ASP.NET C# applications in Visual Studio, I used my favorite text editor UltraEdit32. Is there any way I can implement MVC without the use of VS? A: Assuming you have the correct assemblies and a C# compiler, you can in theory use whatever you want to edit the code and then just run the compiler by hand or via a build script (a sketch follows at the end of this question). That being said, it is a real pain doing .NET development without Visual Studio/SharpDevelop/MonoDevelop in my opinion. A: Even if you didn't want to actually edit in VS, you could create the project there and edit the files in another editor. A: There is nothing VS-specific with the MVC framework - it is just a bunch of DLLs that you can use. The wizards in VS just build you a quick-start framework. ASP.NET MVC is "bin-deployable" - there is nothing too clever to set up on the server either - just point the wildcard ISAPI filter to ASP.NET A: For small to mid-size MVC projects, WebMatrix is not bad at all. Also for simple changes to the projects I often use SublimeText.
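As a minimal sketch of the "run the compiler by hand" route (file names and paths are illustrative, not from the original answers), the command-line C# compiler can build the site's assembly directly:

rem compile controllers and models into the web application's bin folder
csc /target:library /out:bin\MyMvcApp.dll /reference:System.Web.dll /reference:System.Web.Mvc.dll Controllers\*.cs Models\*.cs

The views (.aspx/.ascx) are compiled by ASP.NET at runtime, so only the code files need this step.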
{ "language": "en", "url": "https://stackoverflow.com/questions/2154", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "31" }
Q: How do I define custom web.config sections with potential child elements and attributes for the properties? The web applications I develop often require co-dependent configuration settings and there are also settings that have to change as we move between each of our environments. All our settings are currently simple key-value pairs but it would be useful to create custom config sections so that it is obvious when two values need to change together or when the settings need to change for an environment. What's the best way to create custom config sections and are there any special considerations to make when retrieving the values? A: Using attributes, child config sections and constraints There is also the possibility to use attributes, which automatically take care of the plumbing, as well as providing the ability to easily add constraints. Here I present an example from code I use myself on one of my sites. With a constraint I dictate the maximum amount of disk space any one user is allowed to use. MailCenterConfiguration.cs: namespace Ani { public sealed class MailCenterConfiguration : ConfigurationSection { [ConfigurationProperty("userDiskSpace", IsRequired = true)] [IntegerValidator(MinValue = 0, MaxValue = 1000000)] public int UserDiskSpace { get { return (int)base["userDiskSpace"]; } set { base["userDiskSpace"] = value; } } } } This is set up in web.config like so <configSections> <!-- Mailcenter configuration file --> <section name="mailCenter" type="Ani.MailCenterConfiguration" requirePermission="false"/> </configSections> ... <mailCenter userDiskSpace="25000"> <mail host="my.hostname.com" port="366" /> </mailCenter> Child elements The child XML element mail is created in the same .cs file as the one above. Here I've added constraints on the port. If the port is assigned a value not in this range the runtime will complain when the config is loaded. MailCenterConfiguration.cs: public sealed class MailCenterConfiguration : ConfigurationSection { [ConfigurationProperty("mail", IsRequired=true)] public MailElement Mail { get { return (MailElement)base["mail"]; } set { base["mail"] = value; } } public class MailElement : ConfigurationElement { [ConfigurationProperty("host", IsRequired = true)] public string Host { get { return (string)base["host"]; } set { base["host"] = value; } } [ConfigurationProperty("port", IsRequired = true)] [IntegerValidator(MinValue = 0, MaxValue = 65535)] public int Port { get { return (int)base["port"]; } set { base["port"] = value; } } } } Use To then use it practically in code, all you have to do is instantiate the MailCenterConfiguration object; this will automatically read the relevant sections from web.config. MailCenterConfiguration.cs private static MailCenterConfiguration instance = null; public static MailCenterConfiguration Instance { get { if (instance == null) { instance = (MailCenterConfiguration)WebConfigurationManager.GetSection("mailCenter"); } return instance; } } AnotherFile.cs public void SendMail() { MailCenterConfiguration conf = MailCenterConfiguration.Instance; SmtpClient smtpClient = new SmtpClient(conf.Mail.Host, conf.Mail.Port); } Check for validity I previously mentioned that the runtime will complain when the configuration is loaded and some data does not comply with the rules you have set up (e.g. in MailCenterConfiguration.cs). I tend to want to know these things as soon as possible when my site fires up.
One way to solve this is to load the configuration in Global.asax.cs's Application_Start; if the configuration is invalid you will be notified of this by means of an exception. Your site won't start and instead you will be presented with detailed exception information in the Yellow Screen of Death. Global.asax.cs protected void Application_Start(object sender, EventArgs e) { MailCenterConfiguration config = MailCenterConfiguration.Instance; } A: Custom configuration sections are quite handy, and applications often end up demanding an extensible solution. For .NET 1.1 please refer to the article https://web.archive.org/web/20211027113329/http://aspnet.4guysfromrolla.com/articles/020707-1.aspx Note: The above solution works for .NET 2.0 as well. For a .NET 2.0-specific solution, please refer to the article https://web.archive.org/web/20210802144254/https://aspnet.4guysfromrolla.com/articles/032807-1.aspx A: You can accomplish this with Section Handlers. There is a basic overview of how to write one at http://www.codeproject.com/KB/aspnet/ConfigSections.aspx however it refers to app.config which would be pretty much the same as writing one for use in web.config. This will allow you to essentially have your own XML tree in the config file and do some more advanced configuration. A: The simplest method I have found is using the appSettings section. * *Add to Web.config the following: <appSettings> <add key="MyProp" value="MyVal"/> </appSettings> *Access from your code NameValueCollection appSettings = ConfigurationManager.AppSettings; string myPropVal = appSettings["MyProp"]; A: Quick'n Dirty: First create your ConfigurationSection and ConfigurationElement classes: public class MyStuffSection : ConfigurationSection { ConfigurationProperty _MyStuffElement; public MyStuffSection() { _MyStuffElement = new ConfigurationProperty("MyStuff", typeof(MyStuffElement), null); this.Properties.Add(_MyStuffElement); } public MyStuffElement MyStuff { get { return this[_MyStuffElement] as MyStuffElement; } } } public class MyStuffElement : ConfigurationElement { ConfigurationProperty _SomeStuff; public MyStuffElement() { _SomeStuff = new ConfigurationProperty("SomeStuff", typeof(string), "<UNDEFINED>"); this.Properties.Add(_SomeStuff); } public string SomeStuff { get { return (String)this[_SomeStuff]; } } } Then let the framework know how to handle your configuration classes in web.config: <configuration> <configSections> <section name="MyStuffSection" type="MyWeb.Configuration.MyStuffSection" /> </configSections> ... And actually add your own section below: <MyStuffSection> <MyStuff SomeStuff="Hey There!" /> </MyStuffSection> Then you can use it in your code thus: MyWeb.Configuration.MyStuffSection configSection = ConfigurationManager.GetSection("MyStuffSection") as MyWeb.Configuration.MyStuffSection; if (configSection != null && configSection.MyStuff != null) { Response.Write(configSection.MyStuff.SomeStuff); }
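Building on the examples above, repeating child elements can be handled with ConfigurationElementCollection. This is a hedged sketch of my own (the "servers" element name is illustrative), reusing the MailElement type from the earlier MailCenterConfiguration example to allow markup like <servers><add host="a.example.com" port="25"/></servers>:

public sealed class ServerCollection : ConfigurationElementCollection
{
    protected override ConfigurationElement CreateNewElement()
    {
        // Each <add/> entry becomes one MailElement
        return new MailCenterConfiguration.MailElement();
    }

    protected override object GetElementKey(ConfigurationElement element)
    {
        // Host is assumed to be unique per entry
        return ((MailCenterConfiguration.MailElement)element).Host;
    }
}

// Inside MailCenterConfiguration:
[ConfigurationProperty("servers")]
public ServerCollection Servers
{
    get { return (ServerCollection)base["servers"]; }
}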
{ "language": "en", "url": "https://stackoverflow.com/questions/2155", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "71" }
Q: Creating a custom JButton in Java Is there a way to create a JButton with your own button graphic and not just with an image inside the button? If not, is there another way to create a custom JButton in java? A: When I was first learning Java we had to make Yahtzee and I thought it would be cool to create custom Swing components and containers instead of just drawing everything on one JPanel. The benefit of extending Swing components, of course, is to have the ability to add support for keyboard shortcuts and other accessibility features that you can't do just by having a paint() method print a pretty picture. It may not be done the best way however, but it may be a good starting point for you. Edit 8/6 - If it wasn't apparent from the images, each Die is a button you can click. This will move it to the DiceContainer below. Looking at the source code you can see that each Die button is drawn dynamically, based on its value. Here are the basic steps: * *Create a class that extends JComponent *Call parent constructor super() in your constructors *Make sure your class implements MouseListener *Put this in the constructor: enableInputMethods(true); addMouseListener(this); *Override these methods: public Dimension getPreferredSize() public Dimension getMinimumSize() public Dimension getMaximumSize() *Override this method: public void paintComponent(Graphics g) The amount of space you have to work with when drawing your button is defined by getPreferredSize(), assuming getMinimumSize() and getMaximumSize() return the same value. I haven't experimented too much with this but, depending on the layout you use for your GUI your button could look completely different. And finally, the source code. In case I missed anything. A: I'm probably going a million miles in the wrong direction (but I'm only young :P), but couldn't you add the graphic to a panel and then add a MouseListener to the graphic object so that when the user clicks on the graphic your action is performed? A: I haven't done Swing development since my early CS classes but if it wasn't built in you could just inherit javax.swing.AbstractButton and create your own. Should be pretty simple to wire something together with their existing framework. A: Yes, this is possible. One of the main pros for using Swing is the ease with which the abstract controls can be created and manipulated. Here is a quick and dirty way to extend the existing JButton class to draw a circle to the right of the text.
package test; import java.awt.Color; import java.awt.Container; import java.awt.Dimension; import java.awt.FlowLayout; import java.awt.Graphics; import javax.swing.JButton; import javax.swing.JFrame; public class MyButton extends JButton { private static final long serialVersionUID = 1L; private Color circleColor = Color.BLACK; public MyButton(String label) { super(label); } @Override protected void paintComponent(Graphics g) { super.paintComponent(g); Dimension originalSize = super.getPreferredSize(); int gap = (int) (originalSize.height * 0.2); int x = originalSize.width + gap; int y = gap; int diameter = originalSize.height - (gap * 2); g.setColor(circleColor); g.fillOval(x, y, diameter, diameter); } @Override public Dimension getPreferredSize() { Dimension size = super.getPreferredSize(); size.width += size.height; return size; } /*Test the button*/ public static void main(String[] args) { MyButton button = new MyButton("Hello, World!"); JFrame frame = new JFrame(); frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); frame.setSize(400, 400); Container contentPane = frame.getContentPane(); contentPane.setLayout(new FlowLayout()); contentPane.add(button); frame.setVisible(true); } } Note that by overriding paintComponent, the contents of the button can be changed, but the border is painted by the paintBorder method. The getPreferredSize method also needs to be managed in order to dynamically support changes to the content. Care needs to be taken when measuring font metrics and image dimensions. For creating a control that you can rely on, the above code is not the correct approach. Dimensions and colours are dynamic in Swing and are dependent on the look and feel being used. Even the default Metal look has changed across JRE versions. It would be better to implement AbstractButton and conform to the guidelines set out by the Swing API. A good starting point is to look at the javax.swing.LookAndFeel and javax.swing.UIManager classes. http://docs.oracle.com/javase/8/docs/api/javax/swing/LookAndFeel.html http://docs.oracle.com/javase/8/docs/api/javax/swing/UIManager.html Understanding the anatomy of LookAndFeel is useful for writing controls: Creating a Custom Look and Feel
The xml file might look like this: <synth> <style id="button"> <font name="DIALOG" size="12" style="BOLD"/> <state value="MOUSE_OVER"> <imagePainter method="buttonBackground" path="dirt.png" sourceInsets="2 2 2 2"/> <insets top="2" bottom="2" right="2" left="2"/> </state> <state value="ENABLED"> <imagePainter method="buttonBackground" path="dirt.png" sourceInsets="2 2 2 2"/> <insets top="2" bottom="2" right="2" left="2"/> </state> </style> <bind style="button" type="name" key="dirt"/> </synth> The bind element there specifies what to map to (in this example, it will apply that styling to any buttons whose name property has been set to "dirt"). And a couple of useful links: http://javadesktop.org/articles/synth/ http://docs.oracle.com/javase/tutorial/uiswing/lookandfeel/synth.html
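A short sketch tying the Synth pieces together (somePanel is an illustrative existing container, not from the original answer): the button's name must match the key in the bind element for the style to be applied.

// the name matches <bind style="button" type="name" key="dirt"/>
JButton dirtButton = new JButton("Dig");
dirtButton.setName("dirt");
somePanel.add(dirtButton);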
{ "language": "en", "url": "https://stackoverflow.com/questions/2158", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "103" }
Q: Easy way to AJAX WebControls I've got a web application that I'm trying to optimize. Some of the controls are hidden in dialog-style DIVs. So, I'd like to have them load in via AJAX only when the user wants to see them. This is fine for controls that are mostly literal-based (various menus and widgets), but when I have what I call "dirty" controls - ones that write extensive information to the ViewState, put tons of CSS or script on the page, require lots of references, etc - these are seemingly impossible to move "out of page", especially considering how ASP.NET will react on postback. I was considering some kind of step where I override Render, find markers for the bits I want to move out and put AJAX placeholders in there, but not only does the server overhead seem extreme, it also feels like a complete hack. Besides, the key element here is the dialog boxes that contain forms with validation controls on them, and I can't imagine how I would move the controls and their required scripts. In my fevered imagination, I want to do this: AJAXifier.AJAXify(ctlEditForm); Sadly, I know this is a dream. How close can I really get to a quick-and-easy AJAXification without causing too much load on the server? A: Check out the RadAjax control from Telerik - it allows you to avoid using UpdatePanels, and limit the amount of info passed back and forth between server and client by declaring direct relationships between calling controls, and controls that should be "Ajaxified" when the calling controls submit postbacks. A: I recommend that you walk over to your local book store this weekend, get a cup of coffee and find jQuery in Action by Manning Press. Go ahead and read the first chapter of this 300 page book in the store, then buy it if it resonates with you. I think you'll be surprised by how easily jQuery lets you perform what you're describing here. From ajax calls to the server in the background, to showing and hiding div tags based on the visitor's actions. The amount of code you have to write is super small. There are a bunch of good JavaScript libraries; this is just one of them that I like, and it really is easy to get started. Start by including a reference to the current jQuery file with a <script> tag and then write a few lines of code to interact with your page. A: Step one is to make your "dirty" pieces self-contained user controls Step two is to embed those controls on your consuming page Step three is to wrap each user control tag in their own Asp:UpdatePanel Step four is to ensure your control gets the data it needs by having it read from properties which check against the viewstate for pre-existing values. I know this makes your code rely on ugly global variables but it's a fast way to get this done. Your mileage may vary.
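A minimal markup sketch of the UpdatePanel approach described in the last answer (uc:EditForm is an illustrative user control registration, not from the original question):

<%-- each "dirty" user control gets its own conditionally-updated panel --%>
<asp:ScriptManager ID="ScriptManager1" runat="server" />
<asp:UpdatePanel ID="EditFormPanel" runat="server" UpdateMode="Conditional">
    <ContentTemplate>
        <uc:EditForm ID="ctlEditForm" runat="server" />
    </ContentTemplate>
</asp:UpdatePanel>

With UpdateMode="Conditional", only postbacks originating inside the panel (or explicit Update() calls) refresh it, which keeps the partial-postback payload down.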
{ "language": "en", "url": "https://stackoverflow.com/questions/2196", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "20" }
Q: How can I change the background of a masterpage from the code behind of a content page? I specifically want to add the style of background-color to the <body> tag of a master page, from the code behind (C#) of a content page that uses that master page. I have different content pages that need the master page to have different colors depending on which content page is loaded, so that the master page matches the content page's theme. I have a solution below: I'm looking for something more like: Master.Attributes.Add("style", "background-color: #2e6095"); Inside of the page load function of the content page. But I can't get the above line to work. I only need to change the background-color for the <body> tag of the page. A: What I would do for the particular case is: i. Define the body as a server side control <body runat="server" id="masterpageBody"> ii. In your content aspx page, register the MasterPage with the directive: <%@ Page MasterPageFile="..." %> iii. In the Content Page, you can now simply use Master.FindControl("masterpageBody") and have access to the control. Now, you can change whatever properties/style that you like! A: This is what I came up with: In the page load function: HtmlGenericControl body = (HtmlGenericControl)Master.FindControl("default_body"); body.Style.Add(HtmlTextWriterStyle.BackgroundColor, "#2E6095"); Where default_body = the id of the body tag. A: I believe you are talking about a content management system. The way I have dealt with this situation in the past is to either: * *Allow a page/content to define an extra custom stylesheet or *Allow a page/content to define inline style tags
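Putting the first two answers together as a sketch (the id "masterpageBody" comes from the first answer; the code-behind assumes using directives for System.Web.UI and System.Web.UI.HtmlControls):

<%-- master page markup: expose the body tag to server code --%>
<body id="masterpageBody" runat="server">

// content page code-behind
protected void Page_Load(object sender, EventArgs e)
{
    HtmlGenericControl body = (HtmlGenericControl)Master.FindControl("masterpageBody");
    if (body != null)
    {
        // set this page's theme color on the master page's body
        body.Style[HtmlTextWriterStyle.BackgroundColor] = "#2E6095";
    }
}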
{ "language": "en", "url": "https://stackoverflow.com/questions/2209", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "25" }
Q: What's the best way to implement BDD/TDD in .NET 2.0? I'm looking to add a testing suite to my application, however I can't move to the newer testing frameworks for .NET 3.5. Does anyone have a suggestion about good testing frameworks to use? A: We use MbUnit and Rhino Mocks and they prove to work very well together. When doing TDD you will almost certainly need to do some form of dependency injection. While this can be done manually, it's worth looking at an IoC container such as Castle Windsor. It's well worth looking at John Paul Bodhood's screencasts to get you started. JPB's Blog A: NUnit and Rhino Mocks suit each other well, and the auto-mocking container might be of interest. If you're looking at BDD too then NBehave is probably a good choice. If, however, you just mean the style of BDD that relates to unit testing (xSpec), you can get away with adding a framework (though things like SpecUnit do add some syntactic sugar); MSpec is also interesting. A: Check out Rob Conery's screencast on BDD using MSpec. Very impressive http://blog.wekeroad.com/mvc-storefront/kona-3/ edit: I now use this approach: http://10printhello.com/the-one-bdd-framework-to-rule-them/ A: For a Mock Object library, I've found the BSD-licensed Rhino.Mocks to be rather pleasing. A: I've had great success using NUnit as well. I've also used NMock when the need arose for mock objects. As an added bonus, the factory for creating your mock objects is called the Mockery. To facilitate the running of unit tests, I've used TestDriven.NET to run unit tests as I coded. Also, I've used Cruise Control .NET to watch SVN and check that every new commit builds and passes all unit tests. A: This is probably a summary of what has already been said, but for TDD I personally use Rhino Mocks and MbUnit. Rhino Mocks is a mocking framework that is free and open source. The advantage of Rhino Mocks is that you do not need to use magic strings when setting your expectations as you do in NMock. I like MbUnit because MbUnit has the concept of RowTests which allow you to vary the inputs to your test method. MbUnit is also freely available. You also want to make sure that whatever you choose for your unit testing framework is supported by your CI (Continuous Integration) server. NUnit is supported by default in CruiseControl.NET and you have to do a little extra work to get MbUnit to work in ccnet. From an IDE standpoint you must have TestDriven.NET. TestDriven.NET allows you to right-click and run tests in the IDE and it supports MbUnit and NUnit and others. NBehave is the BDD library I have used. I have not used any others so I could not compare and contrast them with you, but NBehave is supported by Gallio from the MbUnit team, which means you can run your BDD tests just as you would your unit tests with TestDriven.NET. I would also highly recommend ReSharper. You will find your productivity increase significantly with this refactoring and guidance tool. It will assist you with changing your code as you are developing your tests. Hope this helps A: NUnit is available at http://www.nunit.org I would suggest this even when working on the MS stack - the support for non-MS frameworks is happening in the MVC previews which shows a definite movement in the right direction to allow us all to customise our stacks to fit. A: Using NUnit with TFS isn't too difficult. There's even a project on CodePlex to implement this: NUnit for Team Build which even "publishes" the results to the warehouse.
I haven't tried it - but I would advise clients who have a large investment in NUnit (or who have a strong preference for it over the MSTest tool) and who are interested in implementing TFS to continue with NUnit as opposed to trying to convert all their existing tests. A: I have to put a shout-out for Moq. It is a clean, light mocking framework that guides you into the pit of success. The testing tools built into TFS are okay. They will get the job done but can often be a little cumbersome to work with. The generated reports, code coverage and a few other portions are particularly bad. They make you go bald at 22 rather than 50. If you are really loving the testing, consider trying some Continuous Integration. You will feel the pain from regression quickly and this pain potentially helps you get to the end goal faster. Regardless of what you do, try out a few and see which one is the most natural, if you have time. Good luck and happy coding. A: NUnit is always a favorite of mine. However, if you are using TFS as your source control I suggest you stick with the Microsoft stack. A: I recommend the following: TestDriven.NET - a unit testing add-on for VS that is fully integrated with all major unit testing frameworks including NUnit, MbUnit etc... Typemock Isolator - A mocking framework for .NET unit testing NUnit - An open source unit testing framework written in C#. A: For my project, I used NUnit and TestDriven.NET with great success. You can either create a separate library just to host your test code or put it in your executable or library. It all depends on whether you want your production code to be intertwined with your test code. For Dependency Injection, I use NInject in my current project and it works great. If you use constructor injection, you don't need to clutter your code with the [Inject] attribute. I haven't used a mock library for my .NET 2.0 project but for another .NET 3.5 project I will use Moq. Note that all these work with .NET 2.0 and higher. (except Moq)
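For a concrete picture of what these frameworks run, here is a minimal NUnit 2.x test that compiles against .NET 2.0 (Calculator is a hypothetical class under test, used only for illustration):

using NUnit.Framework;

[TestFixture]
public class CalculatorTests
{
    [Test]
    public void Add_TwoPlusThree_ReturnsFive()
    {
        Calculator calc = new Calculator();
        Assert.AreEqual(5, calc.Add(2, 3));
    }
}

TestDriven.NET, CruiseControl.NET and the other runners mentioned above all discover tests through these [TestFixture]/[Test] attributes.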
{ "language": "en", "url": "https://stackoverflow.com/questions/2214", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "26" }
Q: How can I unit test Flex applications from within the IDE or a build script? I'm currently working on an application with a frontend written in Adobe Flex 3. I'm aware of FlexUnit but what I'd really like is a unit test runner for Ant/NAnt and a runner that integrates with the Flex Builder IDE (AKA Eclipse). Does one exist? Also, are there any other resources on how to do Flex development "the right way" besides the Cairngorm microarchitecture example? A: The dpUint testing framework has a test runner built with AIR which can be integrated with a build script. There is also my FlexUnit automation kit which does more or less the same for FlexUnit. It has an Ant macro that makes it possible to run the tests as a part of an Ant script, for example: <target name="run-tests" depends="compile-tests"> <flexunit swf="${build.home}/tests.swf" failonerror="true"/> </target> A: On my project we're using Maven to build both our Flex RIA and the Java-based back end. In order to build and test the Flex app we use the flex-mojos Maven plugins. They do a great job for us and I would highly recommend using Maven over Ant. That being said, if you're already using Ant it can be a little tricky to transition over to Maven. So if you're in that position I would recommend using the flexunit tasks available here: Ant Task Both of these libraries do basically the same thing: they launch a generated flexunit test runner mxml application in a window and open a socket connection back to the build process using a JUnit test runner. Amazingly enough it works pretty well. The only problem is that you can't run it headless, so if you want to run the build from a CI server you have to make sure that process has the ability to launch new windows, otherwise it won't work. A: About how to develop Flex applications the right way, I wouldn't look too much at the Cairngorm framework. It does claim to show "best practice" and so on, but I would say that the opposite is true. It's based around the use of global variables, and other things you should try to avoid. I've outlined some of the problems on my blog. I would suggest that you look at the Mate framework instead, which has good documentation and good examples to get you going. It uses Flex to its full potential, doesn't rely on global variables as Cairngorm and PureMVC do, and it makes it possible to write much more decoupled code. A: An alternative to FlexUnit is the AsUnit testing tools. There are versions for ActionScript 2 and 3. It also has good integration with Project Sprouts, which is a build tool for Flex and Flash similar to Ant; however, it uses Ruby rake tasks and includes excellent dependency management along the lines of Maven. No IDE integration that I know of however.
{ "language": "en", "url": "https://stackoverflow.com/questions/2222", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18" }
Q: How to call shell commands from Ruby How do I call shell commands from inside of a Ruby program? How do I then get output from these commands back into Ruby? A: The backticks (`) method is the easiest one to call shell commands from Ruby. It returns the result of the shell command: url_request = 'http://google.com' result_of_shell_command = `curl #{url_request}` A: Given a command like attrib: require 'open3' a="attrib" Open3.popen3(a) do |stdin, stdout, stderr| puts stdout.read end I've found that while this method isn't as memorable as system("thecommand") or `thecommand` in backticks, a good thing about this method compared to the others is that backticks don't seem to let me puts the command I run or store the command I want to run in a variable, and system("thecommand") doesn't seem to let me get the output, whereas this method lets me do both of those things and lets me access stdin, stdout and stderr independently. See "Executing commands in ruby" and Ruby's Open3 documentation. A: If you have a more complex case than the common case that cannot be handled with ``, then check out Kernel.spawn(). This seems to be the most generic/full-featured mechanism provided by stock Ruby to execute external commands. You can use it to: * *create process groups (Windows). *redirect in, out, error to files/each-other. *set env vars, umask. *change the directory before executing a command. *set resource limits for CPU/data/etc. *Do everything that can be done with other options in other answers, but with more code. The Ruby documentation has good enough examples: env: hash name => val : set the environment variable name => nil : unset the environment variable command...: commandline : command line string which is passed to the standard shell cmdname, arg1, ... : command name and one or more arguments (no shell) [cmdname, argv0], arg1, ... : command name, argv[0] and zero or more arguments (no shell) options: hash clearing environment variables: :unsetenv_others => true : clear environment variables except specified by env :unsetenv_others => false : dont clear (default) process group: :pgroup => true or 0 : make a new process group :pgroup => pgid : join to specified process group :pgroup => nil : dont change the process group (default) create new process group: Windows only :new_pgroup => true : the new process is the root process of a new process group :new_pgroup => false : dont create a new process group (default) resource limit: resourcename is core, cpu, data, etc. See Process.setrlimit. :rlimit_resourcename => limit :rlimit_resourcename => [cur_limit, max_limit] current directory: :chdir => str umask: :umask => int redirection: key: FD : single file descriptor in child process [FD, FD, ...]
: multiple file descriptor in child process value: FD : redirect to the file descriptor in parent process string : redirect to file with open(string, "r" or "w") [string] : redirect to file with open(string, File::RDONLY) [string, open_mode] : redirect to file with open(string, open_mode, 0644) [string, open_mode, perm] : redirect to file with open(string, open_mode, perm) [:child, FD] : redirect to the redirected file descriptor :close : close the file descriptor in child process FD is one of follows :in : the file descriptor 0 which is the standard input :out : the file descriptor 1 which is the standard output :err : the file descriptor 2 which is the standard error integer : the file descriptor of specified the integer io : the file descriptor specified as io.fileno file descriptor inheritance: close non-redirected non-standard fds (3, 4, 5, ...) or not :close_others => false : inherit fds (default for system and exec) :close_others => true : dont inherit (default for spawn and IO.popen) A: Here's the best article in my opinion about running shell scripts in Ruby: "6 Ways to Run Shell Commands in Ruby". If you only need to get the output, use backticks. I needed more advanced stuff like STDOUT and STDERR so I used the Open4 gem. You have all the methods explained there. A: My favourite is Open3 require "open3" Open3.popen3('nroff -man') { |stdin, stdout, stderr| ... } A: This is not really an answer but maybe someone will find it useful: When using TK GUI on Windows, and you need to call shell commands from rubyw, you will always have an annoying CMD window popping up for less than a second. To avoid this you can use: WIN32OLE.new('Shell.Application').ShellExecute('ipconfig > log.txt','','','open',0) or WIN32OLE.new('WScript.Shell').Run('ipconfig > log.txt',0,0) Both will store the ipconfig output inside log.txt, but no windows will come up. You will need to require 'win32ole' inside your script. system(), exec() and spawn() will all pop up that annoying window when using TK and rubyw. A: Here's a flowchart based on "When to use each method of launching a subprocess in Ruby". See also, "Trick an application into thinking its stdout is a terminal, not a pipe". A: Some things to think about when choosing between these mechanisms are: * *Do you just want stdout or do you need stderr as well? Or even separated out? *How big is your output? Do you want to hold the entire result in memory? *Do you want to read some of your output while the subprocess is still running? *Do you need result codes? *Do you need a Ruby object that represents the process and lets you kill it on demand? You may need anything from simple backticks (``), system(), and IO.popen to full-blown Kernel.fork/Kernel.exec with IO.pipe and IO.select. You may also want to throw timeouts into the mix if a sub-process takes too long to execute. Unfortunately, it very much depends. A: I'm definitely not a Ruby expert, but I'll give it a shot: $ irb system "echo Hi" Hi => true You should also be able to do things like: cmd = 'ls' system(cmd) A: One more option: When you: * *need stderr as well as stdout *can't/won't use Open3/Open4 (they throw exceptions in NetBeans on my Mac, no idea why) You can use shell redirection: puts %x[cat bogus.txt].inspect => "" puts %x[cat bogus.txt 2>&1].inspect => "cat: bogus.txt: No such file or directory\n" The 2>&1 syntax works across Linux, Mac and Windows since the early days of MS-DOS.
A: The answers above are already quite great, but I really want to share the following summary article: "6 Ways to Run Shell Commands in Ruby" Basically, it tells us: Kernel#exec: exec 'echo "hello $HOSTNAME"' system and $?: system 'false' puts $? Backticks (`): today = `date` IO#popen: IO.popen("date") { |f| puts f.gets } Open3#popen3 -- stdlib: require "open3" stdin, stdout, stderr = Open3.popen3('dc') Open4#popen4 -- a gem: require "open4" pid, stdin, stdout, stderr = Open4::popen4 "false" # => [26327, #<IO:0x6dff24>, #<IO:0x6dfee8>, #<IO:0x6dfe84>] A: Not sure about shell commands. I used the following for capturing a system command's output into a variable val: val = capture(:stdout) do system("pwd") end puts val shortened version: val = capture(:stdout) { system("pwd") } The capture method is provided by active_support/core_ext/kernel/reporting.rb Similarly, we can also capture standard error with :stderr A: If you really need Bash, per the note in the "best" answer. First, note that when Ruby calls out to a shell, it typically calls /bin/sh, not Bash. Some Bash syntax is not supported by /bin/sh on all systems. If you need to use Bash, insert bash -c "your Bash-only command" inside of your desired calling method: quick_output = system("ls -la") quick_bash = system("bash -c 'ls -la'") To test: system("echo $SHELL") system('bash -c "echo $SHELL"') Or if you are running an existing script file like script_output = system("./my_script.sh") Ruby should honor the shebang, but you could always use system("bash ./my_script.sh") to make sure, though there may be a slight overhead from /bin/sh running /bin/bash, you probably won't notice. A: The way I like to do this is using the %x literal, which makes it easy (and readable!) to use quotes in a command, like so: directorylist = %x[find . -name '*test.rb' | sort] Which, in this case, will populate the list with all the test files under the current directory, which you can process as expected: directorylist.each do |filename| filename.chomp! # work with file end A: This explanation is based on a commented Ruby script from a friend of mine. If you want to improve the script, feel free to update it at the link. First, note that when Ruby calls out to a shell, it typically calls /bin/sh, not Bash. Some Bash syntax is not supported by /bin/sh on all systems. Here are ways to execute a shell script: cmd = "echo 'hi'" # Sample string that can be used * *Kernel#` , commonly called backticks – `cmd` This is like many other languages, including Bash, PHP, and Perl. Returns the result (i.e. standard output) of the shell command. Docs: http://ruby-doc.org/core/Kernel.html#method-i-60 value = `echo 'hi'` value = `#{cmd}` *Built-in syntax, %x( cmd ) Following the x character is a delimiter, which can be any character. If the delimiter is one of the characters (, [, {, or <, the literal consists of the characters up to the matching closing delimiter, taking account of nested delimiter pairs. For all other delimiters, the literal comprises the characters up to the next occurrence of the delimiter character. String interpolation #{ ... } is allowed. Returns the result (i.e. standard output) of the shell command, just like the backticks. Docs: https://docs.ruby-lang.org/en/master/syntax/literals_rdoc.html#label-Percent+Strings value = %x( echo 'hi' ) value = %x[ #{cmd} ] *Kernel#system Executes the given command in a subshell. Returns true if the command was found and run successfully, false otherwise.
Docs: http://ruby-doc.org/core/Kernel.html#method-i-system

wasGood = system( "echo 'hi'" )
wasGood = system( cmd )

* Kernel#exec

Replaces the current process by running the given external command. Returns nothing; the current process is replaced and never continues.
Docs: http://ruby-doc.org/core/Kernel.html#method-i-exec

exec( "echo 'hi'" )
exec( cmd ) # Note: this will never be reached because of the line above

Here's some extra advice: $?, which is the same as $CHILD_STATUS, accesses the status of the last system-executed command if you use the backticks, system() or %x{}. You can then access the exitstatus and pid properties:

$?.exitstatus

For more reading see:

* http://www.elctech.com/blog/i-m-in-ur-commandline-executin-ma-commands
* http://blog.jayfields.com/2006/06/ruby-kernel-system-exec-and-x.html
* http://tech.natemurray.com/2007/03/ruby-shell-commands.html

A: You can also use the backtick operators (`), similar to Perl:

directoryListing = `ls /`
puts directoryListing # prints the contents of the root directory

Handy if you need something simple. Which method you want to use depends on exactly what you're trying to accomplish; check the docs for more details about the different methods.

A: We can achieve it in multiple ways. Using Kernel#exec, nothing after this command is executed:

exec('ls ~')

Using backticks or %x:

`ls ~`
=> "Applications\nDesktop\nDocuments"
%x(ls ~)
=> "Applications\nDesktop\nDocuments"

Using the Kernel#system command, which returns true if successful, false if unsuccessful, and nil if command execution fails:

system('ls ~')
=> true

A: Using the answers here and linked in Mihai's answer, I put together a function that meets these requirements:

* Neatly captures STDOUT and STDERR so they don't "leak" when my script is run from the console.
* Allows arguments to be passed to the shell as an array, so there's no need to worry about escaping.
* Captures the exit status of the command so it is clear when an error has occurred.

As a bonus, this one will also return STDOUT in cases where the shell command exits successfully (0) and puts anything on STDOUT. In this manner, it differs from system, which simply returns true in such cases. Code follows. The specific function is system_quietly:

require 'open3'

class ShellError < StandardError; end

#actual function:
def system_quietly(*cmd)
  exit_status = nil
  err = nil
  out = nil
  Open3.popen3(*cmd) do |stdin, stdout, stderr, wait_thread|
    err = stderr.gets(nil)
    out = stdout.gets(nil)
    [stdin, stdout, stderr].each { |stream| stream.send('close') }
    exit_status = wait_thread.value
  end
  if exit_status.to_i > 0
    err = err.chomp if err
    raise ShellError, err
  elsif out
    return out.chomp
  else
    return true
  end
end

#calling it:
begin
  puts system_quietly('which', 'ruby')
rescue ShellError
  abort "Looks like you don't have the `ruby` command. Odd."
end

#output:
# => "/Users/me/.rvm/rubies/ruby-1.9.2-p136/bin/ruby"

A: Don't forget the spawn command to create a background process to execute the specified command. You can even wait for its completion using the Process class and the returned pid:

pid = spawn("tar xf ruby-2.0.0-p195.tar.bz2")
Process.wait pid

pid = spawn(RbConfig.ruby, "-eputs'Hello, world!'")
Process.wait pid

The doc says: This method is similar to #system but it doesn't wait for the command to finish.
A: The easiest way is, for example:

reboot = `init 6`
puts reboot

A: Here's a cool one that I use in a ruby script on OS X (so that I can start a script and get an update even after toggling away from the window):

cmd = %Q|osascript -e 'display notification "Server was reset" with title "Posted Update"'|
system(cmd)

A: You can use the format method as below to print some information:

puts format('%s', `ps`)
puts format('%d MB', (`ps -o rss= -p #{Process.pid}`.to_i / 1024))
{ "language": "en", "url": "https://stackoverflow.com/questions/2232", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1245" }
Q: Datatable vs Dataset I currently use a DataTable to get results from a database which I can use in my code. However, many examples on the web show using a DataSet instead and accessing the table(s) through the collections method. Is there any advantage, performance wise or otherwise, of using DataSets or DataTables as a storage method for SQL results?

A: In 1.x there used to be things DataTables couldn't do which DataSets could (don't remember exactly what). All that was changed in 2.x. My guess is that's why a lot of examples still use DataSets. DataTables should be quicker as they are more lightweight. If you're only pulling a single resultset, it's your best choice between the two.

A: One feature of the DataSet is that if you call multiple select statements in your stored procedures, the DataSet will have one DataTable for each result set.

A: One major difference is that DataSets can hold multiple tables and you can define relationships between those tables. If you are only returning a single result set though, I would think a DataTable would be more optimized. I would think there has to be some overhead (granted, small) to offer the functionality a DataSet does and keep track of multiple DataTables.

A: There are some optimizations you can use when filling a DataTable, such as calling BeginLoadData(), inserting the data, then calling EndLoadData(). This turns off some internal behavior within the DataTable, such as index maintenance, etc. See this article for further details.

A: It really depends on the sort of data you're bringing back. Since a DataSet is (in effect) just a collection of DataTable objects, you can return multiple distinct sets of data into a single, and therefore more manageable, object. Performance-wise, you're more likely to get inefficiency from unoptimized queries than from the "wrong" choice of .NET construct. At least, that's been my experience.

A: When you are only dealing with a single table anyway, the biggest practical difference I have found is that DataSet has a "HasChanges" method but DataTable does not. Both have a "GetChanges" however, so you can use that and test for null.

A: A DataTable object represents tabular data as an in-memory, tabular cache of rows, columns, and constraints. The DataSet consists of a collection of DataTable objects that you can relate to each other with DataRelation objects.
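To make the single-table versus multi-table distinction concrete, here is a minimal C# sketch; the connection string, table names, and batched query are placeholders invented for illustration, not taken from the thread. A SqlDataAdapter fills a lone DataTable directly, while a batch of SELECT statements filled into a DataSet yields one DataTable per result set:

using System;
using System.Data;
using System.Data.SqlClient;

class Demo
{
    static void Main()
    {
        // Hypothetical connection string for illustration only.
        string connStr = "Server=.;Database=Northwind;Integrated Security=true";

        // Single result set: fill a DataTable directly.
        var table = new DataTable();
        using (var adapter = new SqlDataAdapter("SELECT * FROM Customers", connStr))
        {
            adapter.Fill(table);
        }
        Console.WriteLine("Rows: " + table.Rows.Count);

        // Multiple result sets: the DataSet gets one DataTable per SELECT.
        var set = new DataSet();
        using (var adapter = new SqlDataAdapter(
            "SELECT * FROM Customers; SELECT * FROM Orders", connStr))
        {
            adapter.Fill(set); // set.Tables[0] = customers, set.Tables[1] = orders
        }
        Console.WriteLine("Tables: " + set.Tables.Count);
    }
}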
{ "language": "en", "url": "https://stackoverflow.com/questions/2250", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "142" }
Q: Mapping Stream data to data structures in C# Is there a way of mapping data collected on a stream or array to a data structure or vice-versa? In C++ this would simply be a matter of casting a pointer to the stream as a data type I want to use (or vice-versa for the reverse), e.g. in C++:

Mystruct * pMyStrct = (Mystruct*)&SomeDataStream;
pMyStrct->Item1 = 25;
int iReadData = pMyStrct->Item2;

Obviously the C++ way is pretty unsafe unless you are sure of the quality of the stream data when reading incoming data, but for outgoing data it is super quick and easy.

A: In case lubos hasko's answer was not unsafe enough, there is also the really unsafe way, using pointers in C#. Here are some tips and pitfalls I've run into:

using System;
using System.Runtime.InteropServices;
using System.IO;
using System.Diagnostics;

// Use LayoutKind.Sequential to prevent the CLR from reordering your fields.
[StructLayout(LayoutKind.Sequential)]
unsafe struct MeshDesc
{
    public byte NameLen;
    // Here fixed means store the array by value, like in C,
    // though C# exposes access to Name as a char*.
    // fixed also requires 'unsafe' on the struct definition.
    public fixed char Name[16];
    // You can include other structs like in C as well.
    public Matrix Transform;
    public uint VertexCount;
    // But not both, you can't store an array of structs.
    //public fixed Vector Vertices[512];
}

[StructLayout(LayoutKind.Sequential)]
unsafe struct Matrix
{
    public fixed float M[16];
}

// This is how you do unions
[StructLayout(LayoutKind.Explicit)]
unsafe struct Vector
{
    [FieldOffset(0)]
    public fixed float Items[16];
    [FieldOffset(0)]
    public float X;
    [FieldOffset(4)]
    public float Y;
    [FieldOffset(8)]
    public float Z;
}

class Program
{
    unsafe static void Main(string[] args)
    {
        var mesh = new MeshDesc();
        var buffer = new byte[Marshal.SizeOf(mesh)];

        // Set where NameLen will be read from.
        buffer[0] = 12;
        // Use Buffer.BlockCopy to raw copy data across arrays of primitives.
        // Note we copy to offset 2 here: char's have alignment of 2, so there is
        // a padding byte after NameLen: just like in C.
        Buffer.BlockCopy("Hello!".ToCharArray(), 0, buffer, 2, 12);

        // Copy data to struct
        Read(buffer, out mesh);

        // Print the Name we wrote above:
        var name = new char[mesh.NameLen];
        // Use Marshal.Copy to copy between arrays and pointers to arrays.
        unsafe { Marshal.Copy((IntPtr)mesh.Name, name, 0, mesh.NameLen); }
        // Note you can also use the String.String(char*) overloads
        Console.WriteLine("Name: " + new string(name));

        // If Erik Myers likes it...
        mesh.VertexCount = 4711;

        // Copy data from struct:
        // MeshDesc is a struct, and is on the stack, so its
        // memory is effectively pinned by the stack pointer.
        // This means '&' is sufficient to get a pointer.
        Write(&mesh, buffer);

        // Watch for alignment again, and note you have endianess to worry about...
        int vc = buffer[100] | (buffer[101] << 8) | (buffer[102] << 16) | (buffer[103] << 24);
        Console.WriteLine("VertexCount = " + vc);
    }

    unsafe static void Write(MeshDesc* pMesh, byte[] buffer)
    {
        // But byte[] is on the heap, and therefore needs
        // to be flagged as pinned so the GC won't try to move it
        // from under you - this can be done most efficiently with
        // 'fixed', but can also be done with GCHandleType.Pinned.
        fixed (byte* pBuffer = buffer)
            *(MeshDesc*)pBuffer = *pMesh;
    }

    unsafe static void Read(byte[] buffer, out MeshDesc mesh)
    {
        fixed (byte* pBuffer = buffer)
            mesh = *(MeshDesc*)pBuffer;
    }
}

A: If it's .NET on both sides, I think you should use binary serialization and send the byte[] result.
Trusting your struct to be fully blittable can be trouble. You will pay some overhead (both CPU and network) but will be safe.

A: If you need to populate each member variable by hand, you can generalize it a bit as far as the primitives are concerned by using FormatterServices to retrieve, in order, the list of variable types associated with an object. I've had to do this in a project where I had a lot of different message types coming off the stream and I definitely didn't want to write the serializer/deserializer for each message. Here's the code I used to generalize the deserialization from a byte[].

public virtual bool SetMessageBytes(byte[] message)
{
    MemberInfo[] members = FormatterServices.GetSerializableMembers(this.GetType());
    object[] values = FormatterServices.GetObjectData(this, members);
    int j = 0;
    for (int i = 0; i < members.Length; i++)
    {
        string[] var = members[i].ToString().Split(new char[] { ' ' });
        switch (var[0])
        {
            case "UInt32":
                values[i] = (UInt32)((message[j] << 24) + (message[j + 1] << 16) + (message[j + 2] << 8) + message[j + 3]);
                j += 4;
                break;
            case "UInt16":
                values[i] = (UInt16)((message[j] << 8) + message[j + 1]);
                j += 2;
                break;
            case "Byte":
                values[i] = (byte)message[j++];
                break;
            case "UInt32[]":
                if (values[i] != null)
                {
                    int len = ((UInt32[])values[i]).Length;
                    byte[] b = new byte[len * 4];
                    Array.Copy(message, j, b, 0, len * 4);
                    Array.Copy(Utilities.ByteArrayToUInt32Array(b), (UInt32[])values[i], len);
                    j += len * 4;
                }
                break;
            case "Byte[]":
                if (values[i] != null)
                {
                    int len = ((byte[])values[i]).Length;
                    Array.Copy(message, j, (byte[])(values[i]), 0, len);
                    j += len;
                }
                break;
            default:
                throw new Exception("ByteExtractable::SetMessageBytes Unsupported Type: " + var[1] + " is of type " + var[0]);
        }
    }
    FormatterServices.PopulateObjectMembers(this, members, values);
    return true;
}

A: Most people use .NET serialization (there is a faster binary and a slower XML formatter; they both depend on reflection and are version tolerant to a certain degree). However, if you want the fastest (unsafe) way - why not:

Writing:

YourStruct o = new YourStruct();
byte[] buffer = new byte[Marshal.SizeOf(typeof(YourStruct))];
GCHandle handle = GCHandle.Alloc(buffer, GCHandleType.Pinned);
Marshal.StructureToPtr(o, handle.AddrOfPinnedObject(), false);
handle.Free();

Reading:

handle = GCHandle.Alloc(buffer, GCHandleType.Pinned);
o = (YourStruct)Marshal.PtrToStructure(handle.AddrOfPinnedObject(), typeof(YourStruct));
handle.Free();
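The Marshal-based snippets above can be wrapped into generic helpers that work against any Stream. This is a minimal sketch under the assumption that T is blittable (or fully described by MarshalAs attributes); the StructIO name is made up for illustration:

using System;
using System.IO;
using System.Runtime.InteropServices;

static class StructIO
{
    // Read one T from the stream by pinning a byte buffer and marshaling it.
    public static T Read<T>(Stream stream) where T : struct
    {
        byte[] buffer = new byte[Marshal.SizeOf(typeof(T))];
        stream.Read(buffer, 0, buffer.Length);
        GCHandle handle = GCHandle.Alloc(buffer, GCHandleType.Pinned);
        try
        {
            return (T)Marshal.PtrToStructure(handle.AddrOfPinnedObject(), typeof(T));
        }
        finally
        {
            handle.Free(); // always unpin, even if marshaling throws
        }
    }

    // Write one T to the stream via the same pinned-buffer trick in reverse.
    public static void Write<T>(Stream stream, T value) where T : struct
    {
        byte[] buffer = new byte[Marshal.SizeOf(typeof(T))];
        GCHandle handle = GCHandle.Alloc(buffer, GCHandleType.Pinned);
        try
        {
            Marshal.StructureToPtr(value, handle.AddrOfPinnedObject(), false);
        }
        finally
        {
            handle.Free();
        }
        stream.Write(buffer, 0, buffer.Length);
    }
}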
{ "language": "en", "url": "https://stackoverflow.com/questions/2256", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "29" }
Q: ASP.NET URL Rewriting How do I rewrite a URL in ASP.NET? I would like users to be able to go to http://www.website.com/users/smith instead of http://www.website.com/?user=smith

A: Microsoft now ships an official URL Rewriting Module for IIS: http://www.iis.net/download/urlrewrite It supports most types of rewriting, including setting server variables and wildcards. It also will exist on all Azure web instances out of the box.

A: I have used an httpmodule for url rewriting from www.urlrewriting.net with great success (albeit I believe a much earlier, simpler version). If you have very few actual rewriting rules then the url mappings built in to .NET 2.0 are probably an easier option. There are a few write-ups of these on the web; the 4guysfromrolla one seems fairly exhaustive, but as you can see they don't support regular expression mappings and are as such rendered fairly useless in a dynamic environment (assuming "smith" in your example is not a special case, these would be of no use).

A: Try the Managed Fusion Url Rewriter and Reverse Proxy: http://urlrewriter.codeplex.com The rule for rewriting this would be:

# clean up old rules and forward to new URL
RewriteRule ^/?user=(.*) /users/$1 [NC,R=301]
# rewrite the rule internally
RewriteRule ^/users/(.*) /?user=$1 [NC,L]
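For the httpmodule approach mentioned above, here is a minimal sketch of what a hand-rolled module could look like; the /users/(\w+) pattern and the /Default.aspx target are assumptions made for this example, not part of any of the libraries mentioned:

using System;
using System.Text.RegularExpressions;
using System.Web;

public class UserRewriteModule : IHttpModule
{
    public void Init(HttpApplication app)
    {
        app.BeginRequest += OnBeginRequest;
    }

    private static void OnBeginRequest(object sender, EventArgs e)
    {
        var app = (HttpApplication)sender;
        string path = app.Request.Path;

        // Map /users/smith onto the real handler /Default.aspx?user=smith.
        Match m = Regex.Match(path, @"^/users/(\w+)$", RegexOptions.IgnoreCase);
        if (m.Success)
        {
            app.Context.RewritePath("/Default.aspx?user=" + m.Groups[1].Value);
        }
    }

    public void Dispose() { }
}

The module would then be registered in web.config under <httpModules> (or <modules> when running in IIS7's integrated pipeline).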
{ "language": "en", "url": "https://stackoverflow.com/questions/2262", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "36" }
Q: How to filter and combine 2 datasets in C# I am building a web page to show a customer what software they purchased and to give them a link to download said software. Unfortunately, the data on what was purchased and the download information are in separate databases so I can't just take care of it with joins in an SQL query. The common item is SKU. I'll be pulling a list of SKUs from the customer purchases database and on the download table is a comma delineated list of SKUs associated with that download. My intention, at the moment, is to create from this one datatable to populate a GridView. Any suggestions on how to do this efficiently would be appreciated. If it helps, I can pretty easily pull back the data as a DataSet or a DataReader, if either one would be better for this purpose.

A: As long as the two databases are on the same physical server (assuming MSSQL) and the username/password being used in the connection string has rights to both DBs, then you should be able to perform a join across the two databases. Example:

select p.Date, p.Amount, d.SoftwareName, d.DownloadLink
from PurchaseDB.dbo.Purchases as p
join ProductDB.dbo.Products as d on d.sku = p.sku
where p.UserID = 12345

A: Why not create a Domain object based approach to this problem:

public class CustomerDownloadInfo
{
    private string sku;
    private readonly ICustomer customer;

    public CustomerDownloadInfo(ICustomer Customer)
    {
        customer = Customer;
    }

    public void AttachSku(string Sku)
    {
        sku = Sku;
    }

    public string Sku
    {
        get { return sku; }
    }

    public string Link
    {
        get
        {
            // etc... etc...
        }
    }
}

There are a million variations on this, but once you aggregate this information, wouldn't it be easier to present?

A: I am thinking off the top of my head here. If you load both as DataTables in the same DataSet, and define a relation between the two over SKU, you can then run a query on the DataSet which produces the desired result.
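Here is a minimal sketch of the DataRelation idea from the last answer; the column names are invented for illustration, and it assumes one SKU per download row (the comma-delimited SKU list from the question would need to be split into such rows first):

using System;
using System.Data;

class CombineDemo
{
    static void Main()
    {
        var purchases = new DataTable("Purchases");
        purchases.Columns.Add("SKU", typeof(string));
        purchases.Columns.Add("CustomerId", typeof(int));
        purchases.Rows.Add("ABC-1", 42);

        var downloads = new DataTable("Downloads");
        downloads.Columns.Add("SKU", typeof(string));
        downloads.Columns.Add("Url", typeof(string));
        downloads.Rows.Add("ABC-1", "http://example.com/abc1.zip");

        // Put both tables in one DataSet and relate them over SKU.
        var ds = new DataSet();
        ds.Tables.Add(purchases);
        ds.Tables.Add(downloads);
        ds.Relations.Add("PurchaseDownloads",
            purchases.Columns["SKU"], downloads.Columns["SKU"], false);

        // Walk each purchase and pull its matching download links.
        foreach (DataRow purchase in purchases.Rows)
        {
            foreach (DataRow dl in purchase.GetChildRows("PurchaseDownloads"))
            {
                Console.WriteLine("{0} -> {1}", purchase["SKU"], dl["Url"]);
            }
        }
    }
}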
{ "language": "en", "url": "https://stackoverflow.com/questions/2267", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15" }
Q: Add Custom Tag to Visual Studio Validation How can I add rules to Visual Studio (2005 and up) for validating property markup (HTML) for a vendor's proprietary controls? My client uses a control which requires several properties to be set as tags in the aspx file which generates something like 215 validation errors on each build. It's not preventing me from building, but real errors are getting lost in the noise.

A: Right-click on the Source view of an HTML / ASP page and select "Formatting and Validation".

* Click "Tag Specific Options".
* Expand "Client HTML Tags" and select the heading.
* Click "New Tag...".
* And just fill it in!

I wish that I could add custom CSS values as well.
{ "language": "en", "url": "https://stackoverflow.com/questions/2279", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19" }
Q: How do I traverse a collection in classic ASP? I want to be able to do:

For Each thing In things
End For

CLASSIC ASP - NOT .NET!

A: Whatever your [things] are, they need to be written outside of VBScript. In VB6, you can write a Custom Collection class, then you'll need to compile it to an ActiveX DLL and register it on your webserver to access it.

A: The closest you are going to get is using a Dictionary (as mentioned by Pacifika)

Dim objDictionary
Set objDictionary = CreateObject("Scripting.Dictionary")
objDictionary.CompareMode = vbTextCompare 'makes the keys case insensitive'
objDictionary.Add "Name", "Scott"
objDictionary.Add "Age", "20"

But I loop through my dictionaries like a collection

For Each Entry In objDictionary
  Response.write objDictionary(Entry) & "<br />"
Next

You can loop through the entire dictionary this way writing out the values, which would look like this:

Scott
20

You can also do this

For Each Entry In objDictionary
  Response.write Entry & ": " & objDictionary(Entry) & "<br />"
Next

Which would produce

Name: Scott
Age: 20

A: Something like this?

dim cars(2),x
cars(0)="Volvo"
cars(1)="Saab"
cars(2)="BMW"

For Each x in cars
  response.write(x & "<br />")
Next

See www.w3schools.com. If you want to associate keys and values use a dictionary object instead:

Dim objDictionary
Set objDictionary = CreateObject("Scripting.Dictionary")
objDictionary.Add "Name", "Scott"
objDictionary.Add "Age", "20"

if objDictionary.Exists("Name") then
  ' Do something
else
  ' Do something else
end if

A: One approach I've used before is to use a property of the collection that returns an array, which can be iterated over.

Class MyCollection
  Public Property Get Items
    Items = ReturnItemsAsAnArray()
  End Property
  ...
End Class

Iterate like:

Set things = New MyCollection
For Each thing in things.Items
  ...
Next

A: As Brett said, it's better to use a VB component to create collections. Dictionary objects are not very commonly used in ASP except for specific need-based applications.

A: Be VERY careful when using the VBScript Dictionary object! I just discovered this "autovivification" thing, native to this object: http://en.wikipedia.org/wiki/Autovivification So, when you need to compare values, NEVER use a boolean comparison like:

If objDic.Item("varName") <> "" Then...

This will automatically add the key "varName" to the dictionary (if it doesn't exist, with an empty value), in order to carry on evaluating the boolean expression. If needed, use If objDic.Exists("varName") instead. I just spent a few days banging my head against the wall over this Microsoft "feature"... vbscript-dictionary-object-creating-a-key-which-never-existed-but-present-in-another-object
{ "language": "en", "url": "https://stackoverflow.com/questions/2300", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16" }
Q: ASP.NET Display SVN Revision Number I see in the Stack Overflow footer that the SVN Revision number is displayed. Is this automated and if so, how does one implement it in ASP.NET? (Solutions in other languages are acceptable)

A: Make sure that the file has svn:keywords "Rev Id" and then put $Rev$ somewhere in there. See this question and the answers to it.

A: In our continuous integration setup we use SVNRevisionLabeller and pass the variables from this to MSBuild to use when creating the compiled website dll. It's then available to .NET using GetCurrentAssembly() in the final build.

A: In my rails app I have a secret action which literally does this:

render :text => `svn info #{RAILS_ROOT}`

(This is the equivalent of Process.Start("svn info...") if you're only familiar with .NET.) If I'm wondering if the guy who manages the servers has updated the site recently, I can hit this URL and have a look.
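If you'd rather do the Rails trick from .NET, a hedged C# sketch follows; it assumes svn.exe is on the server's PATH and that the web root is a working copy, and the helper name is invented:

using System;
using System.Diagnostics;
using System.Text.RegularExpressions;

static class SvnInfo
{
    // Shells out to the svn client and scrapes the "Revision:" line.
    public static string GetRevision(string workingCopyPath)
    {
        var psi = new ProcessStartInfo("svn", "info \"" + workingCopyPath + "\"")
        {
            RedirectStandardOutput = true,
            UseShellExecute = false,
            CreateNoWindow = true
        };
        using (Process p = Process.Start(psi))
        {
            string output = p.StandardOutput.ReadToEnd();
            p.WaitForExit();
            Match m = Regex.Match(output, @"^Revision:\s*(\d+)", RegexOptions.Multiline);
            return m.Success ? m.Groups[1].Value : "unknown";
        }
    }
}

You would cache the result rather than shelling out per request; the footer text could then just be "rev " + SvnInfo.GetRevision(Server.MapPath("~")).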
{ "language": "en", "url": "https://stackoverflow.com/questions/2308", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18" }
Q: File size differences after copying a file to a server via FTP I have created a PHP script to update a web server that is live inside a local directory. I'm migrating the script into Python. It works fine for the most part, but after a PUT command the size of the file appears to change. Thus, the size of the file is different from that of the file on the server. Once I download the file again from the FTP server, the only difference is the CR/LF marks. This annoys me because the same script is comparing the size of the files to update. Also, in case it means anything, the script works perfectly in PHP via ftp_put.

from ftplib import FTP

ftpserver = "myserver"
ftpuser = "myuser"
ftppass = "mypwd"

locfile = "g:/test/style.css"
ftpfile = "/temp/style.css"

try:
    ftp = FTP(ftpserver, ftpuser, ftppass)
except:
    exit ("Cannot connect")

f = open (locfile, "r")
try:
    ftp.delete (ftpfile)
except:
    pass

# ftp.sendcmd ("TYPE I")
# ftp.storlines("STOR %s" % ftpfile, f)
ftp.storbinary("STOR %s" % ftpfile, f)
f.close()

ftp.dir (ftpfile)
ftp.quit()

Any suggestions?

A: Well, if you go under the properties of your file in Windows or a *nix environment, you will notice two sizes. One is the size on disk, and one is the actual size. The size on disk is the number of bytes taken up in whole allocation units (clusters) on your hard disk. That is because two files cannot share an allocation unit on most modern file systems, so if your file fills up half of one, the whole unit is marked as filled. So you might be comparing the on-disk file size to the actual file size on the FTP server, or vice versa.

A: Do you need to open the locfile in binary using rb?

f = open (locfile, "rb")

A: Small files take up a whole allocation node on the file system whatever their size is. My host tends to report all small files as 4KB in FTP but gives an accurate size in a shell, so it might be a 'feature' common to FTP clients.
{ "language": "en", "url": "https://stackoverflow.com/questions/2311", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "46" }
Q: Tracking state using ASP.NET AJAX / ICallbackEventHandler I have a problem with maintaining state in an ASP.NET AJAX page. Short version: I need some way to update the page ViewState after an async callback has been made, to reflect any state changes the server made during the async call. This seems to be a common problem, but I will describe my scenario to help explain: I have a grid-like control which has some JavaScript enhancements - namely, the ability to drag and drop columns and rows. When a column or row is dropped into a new position, an AJAX method is invoked to notify the control server-side and fire a corresponding server-side event ("OnColumnMoved" or "OnRowMoved"). ASP.NET AJAX calls, by default, send the entire page as the request. That way the page goes through a complete lifecycle, viewstate is persisted and the state of the control is restored before the RaiseCallbackEvent method is invoked. However, since the AJAX call does not update the page, the ViewState reflects the original state of the control, even after the column or row has been moved. So the second time a client-side action occurs, the AJAX request goes to the server and the page & control are built back up again to reflect the first state of the control, not the state after the first column or row was moved. This problem has many implications. For example, if we have a client-side/AJAX action to add a new item to the grid, and then a row is dragged, the grid is built server-side with one less item than on the client-side. And finally & most seriously for my specific example, the actual data source object we are acting upon is stored in the page ViewState. That was a design decision to allow keeping a stateful copy of the manipulated data which can either be committed to DB after many manipulations or discarded if the user backs out. That is very difficult to change. So, again, I need a way for the page ViewState to be updated on callback after the AJAX method is fired.

A: If you're already shuffling the ViewState around anyway, you might as well use an UpdatePanel. Its partial postbacks will update the page's ViewState automatically.

A: Check out this blog post: Tweaking the ICallbackEventHandler and Viewstate. The author seems to be addressing the very situation that you are experiencing: So when using ICallbackEventHandler you have two obstacles to overcome to have updated state management for callbacks. First is the problem of the read-only viewstate. The other is actually registering the changes the user has made to the page before triggering the callback. See the blog post for his suggestions on how to solve this. Also check out this forum post which discusses the same problem as well.

A: I actually found both of those links you provided, but as noted they are simply describing the problem, not solving it. The author of the blog post suggests a workaround by using a different ViewState provider, but unfortunately that isn't a possibility in this case... I really need to leave the particulars of the ViewState alone and just hook on to what is being done out of the box.

A: I found a fairly elegant solution with Telerik's RadAjaxManager. It works quite nicely. Essentially you register each control which might invoke a postback, and then register each control which should be re-drawn after that postback is performed asynchronously. The RadAjaxManager will update the DOM after the async postback and rewrite the ViewState and all affected controls.
After taking a peek in Reflector, it looks a little kludgy under the hood, but it suits my purposes. A: I don't understand why you would use a custom control for that, when the built-in ASP.NET AJAX UpdatePanel does the same thing. It just adds more complexity, gives you less support, and makes it more difficult for others to work on your app.
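For reference, here is a minimal sketch of the ICallbackEventHandler plumbing being discussed; the event-argument format and the page name are invented, and the thread's caveat still applies: nothing here rewrites the page's ViewState after the callback.

using System;
using System.Web.UI;

public partial class GridPage : Page, ICallbackEventHandler
{
    private string callbackResult;

    // Called on the async callback with the argument sent from JavaScript.
    public void RaiseCallbackEvent(string eventArgument)
    {
        // e.g. eventArgument = "moveColumn:2:5" (hypothetical format)
        // ...apply the change to the server-side control state here...
        callbackResult = "ok";
    }

    // Returned to the client-side callback function.
    public string GetCallbackResult()
    {
        return callbackResult;
    }

    protected void Page_Load(object sender, EventArgs e)
    {
        // Wire up a client-side function the grid's JavaScript can call.
        string callbackRef = Page.ClientScript.GetCallbackEventReference(
            this, "arg", "onServerCallback", "ctx");
        Page.ClientScript.RegisterClientScriptBlock(GetType(), "cb",
            "function doCallback(arg, ctx) { " + callbackRef + "; }", true);
    }
}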
{ "language": "en", "url": "https://stackoverflow.com/questions/2328", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13" }
Q: How do I turn on line numbers by default in TextWrangler on the Mac? I am fed up having to turn them on every time I open the application. A: Go to TextWrangler > Preferences. Choose Text Status Display in the category pane, then check the option "Show line numbers" and close the preferences. This should now be on by default when you open existing documents.
{ "language": "en", "url": "https://stackoverflow.com/questions/2332", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "23" }
Q: What is the best way to iterate through an array in Classic Asp VBScript? In the code below

For i = LBound(arr) To UBound(arr)

What is the point of using LBound? Surely that is always 0.

A: Why not use For Each? That way you don't need to care what the LBound and UBound are.

Dim x, y, z
x = Array(1, 2, 3)

For Each y In x
  z = DoSomethingWith(y)
Next

A: LBound may not always be 0. Whilst it is not possible to create an array that has anything other than a 0 lower bound in VBScript, it is still possible to retrieve an array of variants from a COM component which may have specified a different LBound. That said, I've never come across one that has done anything like that.

A: There is a good reason NOT TO USE For i = LBound(arr) To UBound(arr): dim arr(10) allocates eleven members of the array, 0 through 10 (assuming the VB6 default Option Base). Many VB6 programmers assume the array is one-based, and never use the allocated arr(0). We can remove a potential source of bugs by using For i = 1 To UBound(arr) or For i = 0 To UBound(arr), because then it is clear whether arr(0) is being used. For each makes a copy of each array element, rather than a pointer. This has two problems.

* When we try to assign a value to an array element, it doesn't reflect on the original. This code assigns a value of 47 to the variable i, but does not affect the elements of arr.

arr = Array(3,4,8)
for each i in arr
  i = 47
next i
Response.Write arr(0) '- returns 3, not 47

* We don't know the index of an array element in for each, and we are not guaranteed the sequence of elements (although it seems to be in order).

A: Probably it comes from VB6. Because with the Option Base statement in VB6, you can alter the lower bound of arrays like this:

Option Base 1

Also in VB6, you can alter the lower bound of a specific array like this:

Dim myArray(4 To 42) As String

A: I've always used the For Each loop.

A: This is my approach:

dim arrFormaA(15)
arrFormaA( 0 ) = "formaA_01.txt"
arrFormaA( 1 ) = "formaA_02.txt"
arrFormaA( 2 ) = "formaA_03.txt"
arrFormaA( 3 ) = "formaA_04.txt"
arrFormaA( 4 ) = "formaA_05.txt"
arrFormaA( 5 ) = "formaA_06.txt"
arrFormaA( 6 ) = "formaA_07.txt"
arrFormaA( 7 ) = "formaA_08.txt"
arrFormaA( 8 ) = "formaA_09.txt"
arrFormaA( 9 ) = "formaA_10.txt"
arrFormaA( 10 ) = "formaA_11.txt"
arrFormaA( 11 ) = "formaA_12.txt"
arrFormaA( 12 ) = "formaA_13.txt"
arrFormaA( 13 ) = "formaA_14.txt"
arrFormaA( 14 ) = "formaA_15.txt"

Wscript.echo(UBound(arrFormaA)) ''displays "15"

For i = 0 To UBound(arrFormaA)-1
  Wscript.echo(arrFormaA(i))
Next

Hope it helps.
{ "language": "en", "url": "https://stackoverflow.com/questions/2348", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "42" }
Q: How to tab focus onto a dropdown field in Mac OSX In Windows, in any windows form or web browser, you can use the tab button to switch focus through all of the form fields. It will stop on textboxes, radiobuttons, checkboxes, dropdown menus, etc. However, in Mac OSX, tab skips dropdown menus. Is there anyway to change this behavior, or access the above items mentioned, without using a mouse? A: Apple Menu > System Preferences > Keyboard & Mouse > Keyboard Shortcuts: Change the radio button at the bottom from "Text boxes and lists only" to "All controls." Edit: Dammit. We're a fast group around here aren't we? :-) A: Go to System Preferences > Keyboard and Mouse, then choose Keyboard Shortcuts. At the bottom, ensure Full Keyboard Access is set to "All controls". It's a long time since I turned it on but I think that's all you need to do A: I have found that I also need to set accessibility.tabfocus to 7 in Firefox's about:config. A: It's in the System Preferences - this blog post shows where the setting is.
{ "language": "en", "url": "https://stackoverflow.com/questions/2349", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "22" }
Q: Can anyone explain Monads? Possible Duplicate: What is a monad? I think I understand what 'Maybe Monads' are, but I'm not sure about the other types.
{ "language": "en", "url": "https://stackoverflow.com/questions/2366", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "79" }
Q: Read binary file into a struct I'm trying to read binary data using C#. I have all the information about the layout of the data in the files I want to read. I'm able to read the data "chunk by chunk", i.e. getting the first 40 bytes of data, converting it to a string, then getting the next 40 bytes. Since there are at least three slightly different versions of the data, I would like to read the data directly into a struct. It just feels so much more right than reading it "line by line". I have tried the following approach but to no avail:

StructType aStruct;
int count = Marshal.SizeOf(typeof(StructType));
byte[] readBuffer = new byte[count];
BinaryReader reader = new BinaryReader(stream);
readBuffer = reader.ReadBytes(count);
GCHandle handle = GCHandle.Alloc(readBuffer, GCHandleType.Pinned);
aStruct = (StructType) Marshal.PtrToStructure(handle.AddrOfPinnedObject(), typeof(StructType));
handle.Free();

The stream is an opened FileStream from which I have begun to read. I get an AccessViolationException when using Marshal.PtrToStructure. The stream contains more information than I'm trying to read since I'm not interested in data at the end of the file. The struct is defined like:

[StructLayout(LayoutKind.Explicit)]
struct StructType
{
    [FieldOffset(0)]
    public string FileDate;
    [FieldOffset(8)]
    public string FileTime;
    [FieldOffset(16)]
    public int Id1;
    [FieldOffset(20)]
    public string Id2;
}

The example code is changed from the original to make this question shorter. How would I read binary data from a file into a struct?

A: As Ronnie said, I'd use BinaryReader and read each field individually. I can't find the link to the article with this info, but it's been observed that using BinaryReader to read each individual field can be faster than Marshal.PtrToStructure, if the struct contains fewer than 30-40 or so fields. I'll post the link to the article when I find it. The article's link is at: http://www.codeproject.com/Articles/10750/Fast-Binary-File-Reading-with-C When marshaling an array of structs, PtrToStructure gains the upper hand more quickly, because you can think of the field count as fields * array length.

A: The problem is the strings in your struct. I found that marshaling types like byte/short/int is not a problem, but when you need to marshal into a complex type such as a string, you need your struct to explicitly mimic an unmanaged type. You can do this with the MarshalAs attribute. For your example, the following should work:

[StructLayout(LayoutKind.Explicit)]
struct StructType
{
    [FieldOffset(0)]
    [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 8)]
    public string FileDate;
    [FieldOffset(8)]
    [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 8)]
    public string FileTime;
    [FieldOffset(16)]
    public int Id1;
    [FieldOffset(20)]
    [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 66)] //Or however long Id2 is.
    public string Id2;
}

A: I don't see any problem with your code. Just off the top of my head, what if you try to do it manually? Does it work?

BinaryReader reader = new BinaryReader(stream);
StructType o = new StructType();
o.FileDate = Encoding.ASCII.GetString(reader.ReadBytes(8));
o.FileTime = Encoding.ASCII.GetString(reader.ReadBytes(8));
...
...
...
Also try

StructType o = new StructType();
byte[] buffer = new byte[Marshal.SizeOf(typeof(StructType))];
GCHandle handle = GCHandle.Alloc(buffer, GCHandleType.Pinned);
Marshal.StructureToPtr(o, handle.AddrOfPinnedObject(), false);
handle.Free();

then use buffer[] in your BinaryReader instead of reading data from the FileStream to see whether you still get an AccessViolationException.

I had no luck using the BinaryFormatter, I guess I have to have a complete struct that matches the content of the file exactly.

That makes sense: BinaryFormatter has its own data format, completely incompatible with yours.

A: I had no luck using the BinaryFormatter, I guess I have to have a complete struct that matches the content of the file exactly. I realised that in the end I wasn't interested in very much of the file content anyway, so I went with the solution of reading part of the stream into a byte buffer and then converting it using Encoding.ASCII.GetString() for strings and BitConverter.ToInt32() for the integers. I will need to be able to parse more of the file later on, but for this version I got away with just a couple of lines of code.

A: Here is what I am using. This worked successfully for me for reading the Portable Executable Format. It's a generic function, so T is your struct type.

public static T ByteToType<T>(BinaryReader reader)
{
    byte[] bytes = reader.ReadBytes(Marshal.SizeOf(typeof(T)));

    GCHandle handle = GCHandle.Alloc(bytes, GCHandleType.Pinned);
    T theStructure = (T)Marshal.PtrToStructure(handle.AddrOfPinnedObject(), typeof(T));
    handle.Free();

    return theStructure;
}

A: Try this:

using (FileStream stream = new FileStream(fileName, FileMode.Open))
{
    BinaryFormatter formatter = new BinaryFormatter();
    StructType aStruct = (StructType)formatter.Deserialize(stream);
}

A: Reading straight into structs is evil - many a C program has fallen over because of different byte orderings, different compiler implementations of fields, packing, word size....... You are best off serialising and deserialising byte by byte. Use the built-in stuff if you want or just get used to BinaryReader.

A: I had this structure:

[StructLayout(LayoutKind.Explicit, Size = 21)]
public struct RecordStruct
{
    [FieldOffset(0)]
    public double Var1;

    [FieldOffset(8)]
    public byte var2;

    [FieldOffset(9)]
    [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 12)]
    public string String1;
}

and I received "incorrectly aligned or overlapped by a non-object field". Based on that I found: https://social.msdn.microsoft.com/Forums/vstudio/en-US/2f9ffce5-4c64-4ea7-a994-06b372b28c39/strange-issue-with-layoutkindexplicit?forum=clr

OK. I think I understand what's going on here. It seems like the problem is related to the fact that the array type (which is an object type) must be stored at a 4-byte boundary in memory. However, what you're really trying to do is serialize the 6 bytes separately. I think the problem is the mix between FieldOffset and serialization rules. I'm thinking that StructLayout.Sequential may work for you, since it doesn't actually modify the in-memory representation of the structure. I think FieldOffset is actually modifying the in-memory layout of the type. This causes problems because the .NET framework requires object references to be aligned on appropriate boundaries (it seems).

So my struct was defined as explicit with:

[StructLayout(LayoutKind.Explicit, Size = 21)]

and thus my fields had specified [FieldOffset(<offset_number>)], but when you change your struct to Sequential, you can get rid of those offsets and the error will disappear.
Something like:

[StructLayout(LayoutKind.Sequential, Size = 21)]
public struct RecordStruct
{
    public double Var1;
    public byte var2;

    [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 12)]
    public string String1;
}
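Putting the pieces of this thread together, here is a minimal end-to-end sketch; "data.bin" is a placeholder path, and the struct mirrors the one from the question with the MarshalAs fix applied:

using System;
using System.IO;
using System.Runtime.InteropServices;

[StructLayout(LayoutKind.Sequential)]
struct StructType
{
    [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 8)]
    public string FileDate;
    [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 8)]
    public string FileTime;
    public int Id1;
    [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 66)]
    public string Id2;
}

class Program
{
    static void Main()
    {
        using (FileStream stream = File.OpenRead("data.bin"))
        using (BinaryReader reader = new BinaryReader(stream))
        {
            StructType record = ByteToType<StructType>(reader);
            Console.WriteLine("{0} {1} {2}", record.FileDate, record.FileTime, record.Id1);
        }
    }

    // Same generic helper as in the answer above.
    static T ByteToType<T>(BinaryReader reader)
    {
        byte[] bytes = reader.ReadBytes(Marshal.SizeOf(typeof(T)));
        GCHandle handle = GCHandle.Alloc(bytes, GCHandleType.Pinned);
        try
        {
            return (T)Marshal.PtrToStructure(handle.AddrOfPinnedObject(), typeof(T));
        }
        finally
        {
            handle.Free();
        }
    }
}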
{ "language": "en", "url": "https://stackoverflow.com/questions/2384", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "61" }
Q: Binary file layout reference Where are some good sources of information on binary file layout structures? If I wanted to pull in a BTrieve index file, parse MP3 headers, etc. Where does one get reliable information?

A: I'm not sure if there's a general information source for this kind of information. I always just search on Google or Wikipedia for that particular file type. The binary file layout structure information should be included. For example, the MP3 file structure: http://en.wikipedia.org/wiki/MP3#File_structure
{ "language": "en", "url": "https://stackoverflow.com/questions/2405", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: Have you ever encountered a query that SQL Server could not execute because it referenced too many tables? Have you ever seen any of these error messages?

-- SQL Server 2000
Could not allocate ancillary table for view or function resolution.
The maximum number of tables in a query (256) was exceeded.

-- SQL Server 2005
Too many table names in the query. The maximum allowable is 256.

If yes, what have you done? Given up? Convinced the customer to simplify their demands? Denormalized the database?

@(everyone wanting me to post the query):

* I'm not sure if I can paste 70 kilobytes of code in the answer editing window.
* Even if I could, it won't help, since this 70 kilobytes of code will reference 20 or 30 views that I would also have to post, since otherwise the code will be meaningless.

I don't want to sound like I am boasting here, but the problem is not in the queries. The queries are optimal (or at least almost optimal). I have spent countless hours optimizing them, looking for every single column and every single table that can be removed. Imagine a report that has 200 or 300 columns that has to be filled with a single SELECT statement (because that's how it was designed a few years ago when it was still a small report).

A: For SQL Server 2005, I'd recommend using table variables and partially building the data as you go. To do this, create a table variable that represents the final result set you want to send to the user. Then find your primary table (say the orders table in your example above) and pull that data, plus a bit of supplementary data that is only say one join away (customer name, product name). You can do an INSERT INTO ... SELECT to put this straight into your table variable. From there, iterate through the table and for each row, do a bunch of small SELECT queries that retrieve all the supplemental data you need for your result set. Insert these into each column as you go. Once complete, you can then do a simple SELECT * from your table variable and return this result set to the user. I don't have any hard numbers for this, but there have been three distinct instances that I have worked on to date where doing these smaller queries has actually worked faster than doing one massive select query with a bunch of joins.

A: I have never come across this kind of situation, and to be honest the idea of referencing > 256 tables in a query fills me with a mortal dread. Your first question should probably be "Why so many?", closely followed by "what bits of information do I NOT need?" I'd be worried that the amount of data being returned from such a query would begin to impact performance of the application quite severely, too.

A: @chopeen You could change the way you're calculating these statistics, and instead keep a separate table of all per-product stats.. when an order is placed, loop through the products and update the appropriate records in the stats table. This would shift a lot of the calculation load to the checkout page rather than running everything in one huge query when running a report. Of course there are some stats that aren't going to work as well this way, e.g. tracking customers' next purchases after purchasing a particular product.

A: This would happen all the time when writing Reporting Services Reports for Dynamics CRM installations running on SQL Server 2000. CRM has a nicely normalised data schema which results in a lot of joins.
There's actually a hotfix around that will up the limit from 256 to a whopping 260: http://support.microsoft.com/kb/818406 (we always thought this a great joke on the part of the SQL Server team). The solution, as Dillie-O alludes to, is to identify appropriate "sub-joins" (preferably ones that are used multiple times) and factor them out into temp-table variables that you then use in your main joins. It's a major PIA and often kills performance. I'm sorry for you. @Kevin, love that tee -- says it all :-).

A: I'd like to see that query, but I imagine it's some problem with some sort of iterator, and while I can't think of any situations where it's possible, I bet it's from a bad while/case/cursor or a ton of poorly implemented views.

A: Post the query :D Also I feel like one of the possible problems could be having a ton (read 200+) of name/value tables which could be condensed into a single lookup table.

A: I had this same problem... my development box runs SQL Server 2008 (the view worked fine) but on production (with SQL Server 2005) the view didn't. I ended up creating views to avoid this limitation, using the new views as part of the query in the view that threw the error. Kind of silly considering the logical execution is the same...

A: Had the same issue in SQL Server 2005 (worked in 2008) when I wanted to create a view. I resolved the issue by creating a stored procedure instead of a view.
{ "language": "en", "url": "https://stackoverflow.com/questions/2432", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "20" }
Q: Are there best practices for testing security in an Agile development shop? Regarding Agile development, what are the best practices for testing security per release? If it is a monthly release, are there shops doing pen-tests every month?

A: What's your application domain? It depends. Since you used the word "Agile", I'm guessing it's a web app. I have a nice easy answer for you. Go buy a copy of Burp Suite (it's the #1 Google result for "burp" --- a sure endorsement!); it'll cost you 99EU, or ~$180USD, or $98 Obama Dollars if you wait until November. Burp works as a web proxy. You browse through your web app using Firefox or IE or whatever, and it collects all the hits you generate. These hits get fed to a feature called "Intruder", which is a web fuzzer. Intruder will figure out all the parameters you provide to each one of your query handlers. It will then try crazy values for each parameter, including SQL, filesystem, and HTML metacharacters. On a typical complex form post, this is going to generate about 1500 hits, which you'll look through to identify scary --- or, more importantly in an Agile context, new --- error responses. Fuzzing every query handler in your web app at each release iteration is the #1 thing you can do to improve application security without instituting a formal "SDLC" and adding headcount. Beyond that, review your code for the major web app security hot spots:

* Use only parameterized prepared SQL statements; don't ever simply concatenate strings and feed them to your database handle.
* Filter all inputs to a white list of known good characters (alnum, basic punctuation), and, more importantly, output filter data from your query results to "neutralize" HTML metacharacters to HTML entities (quot, lt, gt, etc).
* Use long random hard-to-guess identifiers anywhere you're currently using simple integer row IDs in query parameters, and make sure user X can't see user Y's data just by guessing those identifiers.
* Test every query handler in your application to ensure that they function only when a valid, logged-on session cookie is presented.
* Turn on the XSRF protection in your web stack, which will generate hidden form token parameters on all your rendered forms, to prevent attackers from creating malicious links that will submit forms for unsuspecting users.
* Use bcrypt --- and nothing else --- to store hashed passwords.

A: I'm no expert on Agile development, but I would imagine that integrating some basic automated pen-test software into your build cycle would be a good start. I have seen several software packages out there that will do basic testing and are well suited for automation.

A: I'm not a security expert, but I think the most important fact you should be aware of, before testing security, is what you are trying to protect. Only if you know what you are trying to protect can you do a proper analysis of your security measures, and only then can you start testing those implemented measures. Very abstract, I know. However, I think it should be the first step of every security audit.

A: Unit testing, Defense Programming and lots of logs

Unit testing

Make sure you unit test as early as possible (e.g. the password should be encrypted before sending, the SSL tunnel is working, etc). This would prevent your programmers from accidentally making the program insecure.

Defense Programming

I personally call this Paranoid Programming but Wikipedia is never wrong (sarcasm).
Basically, you add tests to your functions that check all the inputs:

* Are the user's cookies valid?
* Is he still currently logged in?
* Are the function's parameters protected against SQL injection? (Even though you know that the inputs are generated by your own functions, you will test anyway.)

Logging

Log everything like crazy. It's easier to remove logs than to add them. A user logged in? Log it. A user found a 404? Log it. The admin edited/deleted a post? Log it. Someone was able to access a restricted page? Log it. Don't be surprised if your log file reaches 15+ MB during your development phase. During beta, you can decide which logs to remove. If you want, you can add a flag to decide when a certain event is logged.
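As a small illustration of the parameterized-query point above, here is a hedged C# sketch; the Orders table, column names, and exception choices are invented for the example:

using System;
using System.Data.SqlClient;

class OrderLookup
{
    // Parameterized query: user input never gets concatenated into SQL.
    public static decimal GetOrderTotal(string connStr, int orderId)
    {
        if (orderId <= 0)
            throw new ArgumentOutOfRangeException("orderId"); // defensive input check

        using (var conn = new SqlConnection(connStr))
        using (var cmd = new SqlCommand(
            "SELECT Total FROM Orders WHERE OrderId = @id", conn))
        {
            cmd.Parameters.AddWithValue("@id", orderId);
            conn.Open();
            object result = cmd.ExecuteScalar();
            if (result == null)
                throw new InvalidOperationException("Order not found"); // and log it
            return (decimal)result;
        }
    }
}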
{ "language": "en", "url": "https://stackoverflow.com/questions/2447", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: How can I tell if a web client is blocking advertisements? What is the best way to record statistics on the number of visitors visiting my site that have set their browser to block ads?

A: You need to think about the different ways that ads are blocked. The first thing to look at is whether they are running NoScript, so you could add a script that would check for that. The next thing is to see if they are blocking Flash; a small movie should do that. If you look at the adblock site, there is some indication of how it does blocking: How does element hiding work? If you look further down that page, you will see that conventional chrome probing will not work, so you need to try and parse the altered DOM.

A: The AdBlock forum says this is used to detect AdBlock. After some tweaking you could use this to gather some statistics.

setTimeout("detect_abp()", 10000);

var isFF = (navigator.userAgent.indexOf("Firefox") > -1) ? true : false,
    hasABP = false;

function detect_abp() {
  if (isFF) {
    if (Components.interfaces.nsIAdblockPlus != undefined) {
      hasABP = true;
    } else {
      var AbpImage = document.createElement("img");
      AbpImage.id = "abp_detector";
      AbpImage.src = "/textlink-ads.jpg";
      AbpImage.style.width = "0";
      AbpImage.style.height = "0";
      AbpImage.style.top = "-1000px";
      AbpImage.style.left = "-1000px";
      document.body.appendChild(AbpImage);
      hasABP = (document.getElementById("abp_detector").style.display == "none");

      var e = document.getElementsByTagName("iframe");
      for (var i = 0; i < e.length; i++) {
        if (e[i].clientHeight == 0) {
          hasABP = true;
        }
      }
      if (hasABP == true) {
        history.go(1);
        location = "http://www.tweaktown.com/supportus.html";
        window.location = location;
      }
    }
  }
}

A: I suppose you could compare the ad impressions with the page views on your website (which you can get from your analytics software).

A: Since programs like AdBlock actually never request the advert, you would have to look at the server logs to see if the same user accessed a webpage but didn't access an advert. This is assuming the advert is on the same server. If your adverts are on a separate server, then I would suggest it's impossible to do so. The best way to stop users from blocking adverts is to have inline text adverts which are generated by the server and dished up inside your html.

A: Add the user id to the request for the ad:

<img src="./ads/viagra.jpg?{user.id}"/>

That way you can check what ads are seen by which users.
{ "language": "en", "url": "https://stackoverflow.com/questions/2472", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16" }
Q: Best self-balancing BST for quick insertion of a large number of nodes I've been able to find details on several self-balancing BSTs through several sources, but I haven't found any good descriptions detailing which one is best to use in different situations (or if it really doesn't matter). I want a BST that is optimal for storing in excess of ten million nodes. The order of insertion of the nodes is basically random, and I will never need to delete nodes, so insertion time is the only thing that would need to be optimized. I intend to use it to store previously visited game states in a puzzle game, so that I can quickly check if a previous configuration has already been encountered. A: Red-black is better than AVL for insertion-heavy applications. If you foresee relatively uniform look-up, then Red-black is the way to go. If you foresee a relatively unbalanced look-up where more recently viewed elements are more likely to be viewed again, you want to use splay trees. A: Why use a BST at all? From your description a dictionary will work just as well, if not better. The only reason for using a BST would be if you wanted to list out the contents of the container in key order. It certainly doesn't sound like you want to do that, in which case go for the hash table. O(1) insertion and search, no worries about deletion, what could be better? A: The two self-balancing BSTs I'm most familiar with are red-black and AVL, so I can't say for certain if any other solutions are better, but as I recall, red-black has faster insertion and slower retrieval compared to AVL. So if insertion is a higher priority than retrieval, red-black may be a better solution. A: [hash tables have] O(1) insertion and search I think this is wrong. First of all, if you limit the keyspace to be finite, you could store the elements in an array and do an O(1) linear scan. Or you could shufflesort the array and then do a linear scan in O(1) expected time. When stuff is finite, stuff is easily O(1). So let's say your hash table will store any arbitrary bit string; it doesn't much matter, as long as there's an infinite set of keys, each of which are finite. Then you have to read all the bits of any query and insertion input, else I insert y0 in an empty hash and query on y1, where y0 and y1 differ at a single bit position which you don't look at. But let's say the key lengths are not a parameter. If your insertion and search take O(1), in particular hashing takes O(1) time, which means that you only look at a finite amount of output from the hash function (from which there's likely to be only a finite output, granted). This means that with finitely many buckets, there must be an infinite set of strings which all have the same hash value. Suppose I insert a lot, i.e. ω(1), of those, and start querying. This means that your hash table has to fall back on some other O(1) insertion/search mechanism to answer my queries. Which one, and why not just use that directly?
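Given the answers steering toward a hash table, here is a minimal C# sketch of the visited-set idea; the string key is a stand-in for whatever canonical encoding of a puzzle state you choose:

using System;
using System.Collections.Generic;

class PuzzleSearch
{
    // Visited game states, keyed by a canonical string encoding of the board.
    private readonly HashSet<string> visited = new HashSet<string>();

    // Returns true if this is a new state; false if it was seen before.
    // HashSet<T>.Add is O(1) on average, versus O(log n) for a balanced BST.
    public bool TryVisit(string stateKey)
    {
        return visited.Add(stateKey);
    }
}

Usage would be something like: if (!search.TryVisit(EncodeBoard(board))) skip this configuration, where EncodeBoard is whatever serialization of the game state you already have.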
{ "language": "en", "url": "https://stackoverflow.com/questions/2481", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: What are some good resources for learning threaded programming? With the rise of multicore CPUs on the desktop, multithreading skills will become a valuable asset for programmers. Can you recommend some good resources (books, tutorials, websites, etc.) for a programmer who is looking to learn about threaded programming?

A: I've honestly never read it myself, but Concurrent Programming in Java is a book I've heard recommended by several people.

A: http://www.yoda.arachsys.com/csharp/threads/

A: I write about multithreading and concurrency in C++ on my blog. I'm also writing a book on concurrency in C++: C++ Concurrency in Action.

A: I've read (most of) Java Concurrency in Practice by Brian Goetz, which is very good. There is obviously a Java-based theme running through the book (using Java specific implementations of threads, locks etc.), but pretty much all of the principles can be applied to other languages. The author's home page contains a list of articles he has written, some of which include threading related stuff. Maybe start there and if you like his style, buy the book.

A: For a great guide and reference for concurrency programming in C# (or .NET in general) I'd recommend the MSDN What Every Dev Must Know About Multithreaded Apps article by Vance Morrison on MSDN. It contains a great deal of best-practice information and caveats about multithreaded development.

A: I maintain a linkblog for concurrency articles, blogs, and projects at: http://concurrency.tumblr.com I usually post a link or two per day on a variety of topics (threads, actors, locking, parallel programming) in a variety of environments (Erlang, Java, Scala, .NET, C++, Ruby, Python, etc).

A: It's Delphi specific, but no reason why the concept wouldn't apply to any other language! Multi Threading Tutorial

A: Take a look at Herb Sutter's "The Free Lunch Is Over" and then his series of articles on Effective Concurrency.

A: Joseph Albahari wrote a good overview of Threading in C# here: http://www.albahari.com/threading/

A: http://www.cilk.com/multicore-e-book/ That's a nice general overview of the situation. If you're looking for tutorials and books, it might be best to specify a language as a starting point so you can mess around with some code.

A: The Erlang programming language provides an easy-to-use style of concurrent programming. You may never actually use Erlang, but the concepts are transportable to other languages. You might want to read the book Programming Erlang: Software for a Concurrent World. Fans of functional programming claim that there is no need to learn anything new. Just use a pure functional language, and the compiler or interpreter will automatically parallelize everything. So you might want to learn Haskell, OCaml, or another functional language.

A: I don't know what exactly you are looking for, but if you are doing WindowsForms development the following blog post is worth every minute of reading: WinForms UI Thread Invokes: An In-Depth Review of Invoke/BeginInvoke/InvokeRequired

A: I think Boost.Threads is a great C++ concurrency library to learn, especially if you just want to get started in writing multithreaded applications. The code is very succinct and easy to understand, plus the next C++ standard will likely include a threading library based on Boost.Threads (tutorial: http://www.ddj.com/cpp/184401518)

A: If you want to have a go at doing a highly parallel version of a simple task, or see real solutions, you could do worse than look at the wide finder project.
Basically it's about how to do parallel regex matching of log files efficiently, while trying to add as little code as possible. Participants have submitted solutions in many different languages, and the performance results are posted. The original project has now finished, and wide finder 2 is now taking the work on. CodingHorror has a good introduction to wide finder.

A: For a rich, thorough treatment of the subject, with a good balance between computer science and practice, I recommend The Art of Multiprocessor Programming. A lot of examples are in object-oriented code, i.e. Java, with other languages scattered throughout. It just depends on the topic being covered. What I really love about this book is that it discusses how common algorithms should be implemented in a concurrent design. Of course, there's so much more! For general concepts and a treatment of pthreads, I really like Programming with POSIX Threads. Being the library and API that it is, it's in C. For Windows and C# developers, check out Joe Duffy's blog. Joe works on parallel libraries, infrastructure, and programming models in Microsoft's Developer Division. He has a book coming in Nov. 2008 titled Concurrent Programming on Windows (Amazon link). Also, don't miss the Godfather's blog: Herb Sutter's Sutter's Mill. He has links to all his articles in Dr. Dobb's Journal and more. Click his Concurrency category.

A: CPU manufacturers' websites have some interesting content: http://developer.amd.com/documentation/articles/Pages/default.aspx#parallel http://software.intel.com/en-us/multi-core Also, Intel's open-source threading library has some good references: http://www.threadingbuildingblocks.org/

A: If you work with C#, the book "C# 2008 and 2005 Threaded Programming" by Gaston C. Hillar (Packt Publishing, http://www.packtpub.com/beginners-guide-for-C-sharp-2008-and-2005-threaded-programming/book) will help you. Highly recommended for C# programmers, because you can download the code, with fun examples that exploit your multicore computer. The book is a nice guide with a lot of code to practice with. It tells stories while it explains the most difficult concepts.
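For a flavour of what the wide finder exercise involves, here is a minimal C# sketch of the idea using PLINQ. This is an illustration only, not one of the actual contest entries; the log file name and the regex pattern are assumptions for the demo:

using System;
using System.IO;
using System.Linq;
using System.Text.RegularExpressions;

class WideFinderSketch
{
    static void Main()
    {
        // Count hits per matched path in a web access log, fanning the
        // matching work out across all cores with PLINQ.
        var pattern = new Regex(@"GET /ongoing/When/\S+ ", RegexOptions.Compiled);

        var counts = File.ReadLines("access.log")   // streams the file lazily
            .AsParallel()
            .Select(line => pattern.Match(line))
            .Where(m => m.Success)
            .GroupBy(m => m.Value)
            .ToDictionary(g => g.Key, g => g.Count());

        // Print the ten most frequently requested paths.
        foreach (var pair in counts.OrderByDescending(p => p.Value).Take(10))
            Console.WriteLine("{0} {1}", pair.Value, pair.Key);
    }
}

The interesting part of the contest was that the parallel version should stay nearly as short as the naive sequential loop, which is what the AsParallel() call buys here.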
{ "language": "en", "url": "https://stackoverflow.com/questions/2482", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "36" }
Q: Casting: (NewType) vs. Object as NewType What is actually the difference between these two casts? SomeClass sc = (SomeClass)SomeObject; SomeClass sc2 = SomeObject as SomeClass; Normally, shouldn't they both be explicit casts to the specified type?

A: The former will throw an exception if the source type can't be cast to the target type. The latter will result in sc2 being a null reference, but no exception. [Edit] My original answer is certainly the most pronounced difference, but as Eric Lippert points out, it's not the only one. Other differences include:
* You can't use the 'as' operator to cast to a type that doesn't accept 'null' as a value
* You can't use 'as' to convert things, like numbers, to a different representation (float to int, for example)
And finally, by using 'as' instead of the cast operator, you're also saying "I'm not sure if this will succeed."

A: Well, the 'as' operator "helps" you bury your problem deeper, because when it is given an incompatible instance it returns null. You might pass that null to a method, which passes it to another, and so on, until you finally get a NullReferenceException far away from the original cast, which makes your debugging harder. Don't abuse it. The direct cast operator is better in 99% of the cases.

A: A difference between the two approaches is that the first ((SomeClass)obj) may cause a type converter to be called.

A: Here is a good way to remember the process that each of them follows, which I use when trying to decide which is better for my circumstances (illustrated with string, since 'as' doesn't work on value types; see the answers below):

string s = (string)value;
// is roughly: string s = value is string ? (string)value : throw new InvalidCastException();

and the next should be easy to guess what it does:

string s = value as string;
// is roughly: string s = value is string ? (string)value : null;

In the first case, if the value cannot be cast then an exception is thrown; in the second case, if the value cannot be cast, s is set to null. So in the first case a hard stop is made if the cast fails; in the second case a soft stop is made, and you might encounter a NullReferenceException later on.

A: To expand on Rytmis's comment, you can't use the as keyword for structs (value types), as they have no null value.

A: All of this applies to reference types; value types cannot use the as keyword as they cannot be null.

//if I know that someObject is an instance of SomeClass
SomeClass sc = (SomeClass)someObject;
//if someObject *might* be SomeClass
SomeClass sc2 = someObject as SomeClass;

The cast syntax is quicker, but only when successful; it's much slower to fail. Best practice is to use as when you don't know the type:

//we need to know what someObject is
SomeClass sc;
SomeOtherClass soc;
//use as to find the right type
if ((sc = someObject as SomeClass) != null)
{
    //do something with sc
}
else if ((soc = someObject as SomeOtherClass) != null)
{
    //do something with soc
}

However, if you are absolutely sure that someObject is an instance of SomeClass, then use a cast. In .NET 2 or above, generics mean that you very rarely need to have an un-typed instance of a reference class, so the latter is less often used.

A: Also note that you can only use the as keyword with a reference type or a nullable type, i.e.:

double d = 5.34;
int i = d as int;    // will not compile

double d = 5.34;
int i = (int)d;      // will compile

A: For those of you with VB.NET experience, (type) is the same as DirectCast and "as type" is the same as TryCast.

A: Typecasting using "as" is of course much faster when the cast fails, as it avoids the expense of throwing an exception. But it is not faster when the cast succeeds.
The graph at http://www.codeproject.com/KB/cs/csharpcasts.aspx is misleading because it doesn't explain what it's measuring. The bottom line is:
* If you expect the cast to succeed (i.e. a failure would be exceptional), use a cast.
* If you don't know if it will succeed, use the "as" operator and test the result for null.

A: The parenthetical cast throws an exception if the cast attempt fails. The "as" cast returns null if the cast attempt fails.

A: They'll lead to different exceptions, which can help when debugging: () throws InvalidCastException immediately at the cast, while as yields null and so produces a NullReferenceException later, when the null result is eventually dereferenced. The "as" keyword attempts to cast the object, and if the cast fails, null is returned. The () cast operator will throw an exception immediately if the cast fails. Only use the C# "as" keyword where you are expecting the cast to fail in a non-exceptional case. If you are counting on a cast to succeed and are unprepared to receive any object that would fail, you should use the () cast operator so that an appropriate and helpful exception is thrown. For code examples and a further explanation: http://blog.nerdbank.net/2008/06/when-not-to-use-c-keyword.html

A: It's like the difference between Parse and TryParse. You use TryParse when you expect it might fail, but when you have strong assurance it won't fail you use Parse.
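To make the two failure modes concrete, here is a small self-contained sketch (my own illustration, not taken from any of the answers above):

using System;

class CastDemo
{
    static void Main()
    {
        object boxed = "hello";   // really a string

        // Direct cast to the wrong type: throws InvalidCastException right here.
        try
        {
            Uri u1 = (Uri)boxed;
        }
        catch (InvalidCastException)
        {
            Console.WriteLine("(Uri) cast threw immediately");
        }

        // 'as' to the wrong type: quietly yields null instead of throwing.
        Uri u2 = boxed as Uri;
        Console.WriteLine(u2 == null
            ? "'as' returned null - dereferencing u2 later would throw NullReferenceException"
            : u2.ToString());
    }
}

Running this prints both messages, which is exactly the hard-stop versus soft-stop distinction described above.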
{ "language": "en", "url": "https://stackoverflow.com/questions/2483", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "91" }
Q: What is Progressive Enhancement? Jeff mentioned the concept of 'Progressive Enhancement' when talking about using jQuery to write Stack Overflow. After a quick Google, I found a couple of high-level discussions about it. Can anyone recommend a good place to start as a programmer? Specifically, I have been writing web apps in PHP and would like to use YUI to improve the pages I am writing, but a lot of them seem very JavaScript based, with most of the donkey work being done using JavaScript. To me, that seems a bit of overkill, since viewing the site without JavaScript will probably break most of it. Anyone have some good places to start using this idea? I don't really care about the language. Ideally, I would like to see how you create the static HTML first, and then add YUI (or whatever Ajax framework) to it so that you get the benefits of a richer client.

A: Going at it from the other direction is sometimes referred to as graceful degradation. This is usually needed when the site is built first with the enhanced functionality afforded by the various technologies and then modified to degrade gracefully for browsers where those technologies are not available. It also counts as graceful degradation when designing to work with older browsers (ancient in Internet terminology) such as IE 5.5, Netscape, etc. In my opinion it is much more work to gracefully degrade an application. Progressively enhancing it tends to be much more efficient; however, sometimes the need arises to take an existing app and make it accessible in these lacking environments.

A: Basically, if your site still works with JavaScript turned off, then anything you add with JavaScript can be considered progressive enhancement. Some people may think that this is unnecessary, but plenty of people browse with add-ons like NoScript (or with JavaScript simply turned off in their browser settings). In addition, many mobile web browsers may or may not support JavaScript. So, it's always a good idea to test your site completely with and without JavaScript.

A: Progressive Enhancement is a development technique that stresses the primacy of the semantic HTML, then testing for browser capability and conditionally "layering" on JavaScript and/or CSS enhancements for the browsers that can utilize those enhancements. One of the keys is understanding that we're testing for what the browser can do, as opposed to browser-sniffing. Modernizr is a very popular browser-capability test suite. Progressive enhancement is inherently (Section 508) accessible; it provides for meeting the letter of the law and the spirit of the rule. The Filament Group wrote the excellent "Designing With Progressive Enhancement" book on the subject. (I am not affiliated with Filament Group, though they are so freaking smart I wish I were.)

A: This is such an important concept, and it saddens me that so few web developers understand it. Basically, start by building a site/framework in plain old HTML: structural elements, links and forms. Then add on some style, and then the shiny stuff (Ajax or what have you). It's not very difficult. Like palehorse says, graceful degradation is more work. Websites should work in any user agent, not look the same (not even look, but sound, if you're vision-impaired), just work.

A: Progressive Enhancement:
* The plain HTML/CSS site is awesome (fully working and user-friendly).
* Adding JavaScript defines a new level of awesome.
A: As you've said, "To me, that seems a bit of overkill, since viewing the site without JavaScript will probably break most of it." That isn't progressive enhancement. Progressive enhancement is when the site works perfectly without JavaScript or CSS, and then adding (layering) these extra technologies/code increases the usability and functionality of the website. The best example I can give is the tag input box on this website. With JavaScript turned off, it would still work, allowing you to enter tags separated by a space. With JavaScript turned on, you get a drop-down with suggestions of previous entries. This is progressive enhancement.

A: See also Unobtrusive JavaScript, which is the bedrock on which progressive enhancement is built.
{ "language": "en", "url": "https://stackoverflow.com/questions/2486", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "24" }
Q: Auto Generate Database Diagram MySQL I'm tired of opening Dia and creating a database diagram at the beginning of every project. Is there a tool out there that will let me select specific tables and then create a database diagram for me based on a MySQL database? Preferably it would allow me to edit the diagram afterward, since none of the foreign keys are set... Here is what I am picturing diagram-wise (please excuse the horrible data design, I didn't design it. Let's focus on the diagram concept and not on the actual data it represents for this example ;) ): see full size diagram

A: I've recently started using https://github.com/schemaspy/schemaspy . It strikes me as having a good balance between usability and simplicity. (GraphViz is now optional.)

A: This http://code.google.com/p/database-diagram/ will reverse engineer your database. Just do an export 'structure only', then paste the SQL into the tool.

A: Try MySQL Maestro. Works great for me.

A: Try MySQL Workbench, formerly DBDesigner 4: http://dev.mysql.com/workbench/ This has a "Reverse Engineer Database" mode: Database -> Reverse Engineer

A: I believe DB Designer does something like that. And I think they even have a free version. edit: Never mind, Michael's link is much better.

A: MySQL Workbench worked like a charm. I just backed up the database structure to an SQL script and used it in "Create EER Model From SQL Script" of MWB 5.2.37 for Windows.

A: In MySQL Workbench (6.0) it's possible to generate a diagram based on the tables you've created. To do that, go to the menu bar, choose Model, then Create Diagram from Catalog Objects, and done!

A: Visual Paradigm for UML 9.0. It's awesome. I used to work with MySQL Workbench, but for big databases (something like more than 300 tables) it won't work very well; Visual Paradigm's reverse-database feature works so much better.

A: phpMyAdmin has what you are looking for (and has for many years now). It takes a small bit of configuration, but gives you additional benefits too: http://www.phpmyadmin.net/documentation/#pmadb

A: Try out Vertabelo! It's an online database modeler that supports reverse engineering. Just create a free Vertabelo account, import an existing database into Vertabelo and voila - your database is in Vertabelo! It supports the following databases:
* PostgreSQL
* MySQL
* Oracle
* IBM DB2
* HSQLDB
* MS SQL Server

A: On a Mac, SQLEditor will do what you want.

A: The "Reverse Engineer Database" mode in Workbench is only part of the paid version, not the free one.

A: Try SchemaBank. They support reverse engineering too.

A: Here is a tool that generates relational diagrams from MySQL (on Windows at the moment). I have used it on a database with 400 tables. If the diagram is too big for a single diagram, it gets broken down into smaller ones. So you will probably end up with multiple diagrams, and you can navigate between them by right-clicking. It is all explained in the link below. The tool is free (as in free beer); the author uses it himself on consulting assignments and lets other people use it. http://www.scmlite.com/Quick%20overview
{ "language": "en", "url": "https://stackoverflow.com/questions/2488", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "358" }
Q: What are the primary differences between TDD and BDD? Test Driven Development has been the rage in the .NET community for the last few years. Recently, I have heard grumblings in the ALT.NET community about BDD. What is it? What makes it different from TDD?

A: Test-Driven Development is a test-first software development methodology, which means that it requires writing test code before writing the actual code that will be tested. In Kent Beck's words: "The style here is to write a few lines of code, then a test that should run, or even better, to write a test that won't run, then write the code that will make it run. After figuring out how to write one small piece of code, now, instead of just coding on, we want to get immediate feedback and practice 'code a little, test a little, code a little, test a little.' So we immediately write a test for it." So TDD is a low-level, technical methodology that programmers use to produce clean code that works. Behaviour-Driven Development is a methodology that was created based on TDD, but evolved into a process that doesn't concern only programmers and testers, but instead deals with the entire team and all important stakeholders, technical and non-technical. BDD started out of a few simple questions that TDD doesn't answer well: how many tests should I write? What should I actually test, and what shouldn't I? Which of the tests I write will in fact be important to the business or to the overall quality of the product, and which are just my over-engineering? As you can see, such questions require collaboration between technology and business. Business stakeholders and domain experts can often tell engineers what kind of tests sound like they would be useful, but only if the tests are high-level tests that deal with important business aspects. BDD calls such business-like tests "examples," as in "tell me an example of how this feature should behave correctly," and reserves the word "test" for low-level, technical checks such as data validation or testing API integrations. The important part is that while tests can only be created by programmers and testers, examples can be collected and analysed by the entire delivery team: by designers, analysts, and so on. In a sentence, one of the best definitions of BDD I have found so far is that BDD is about "having conversations with domain experts and using examples to gain a shared understanding of the desired behaviour and discover unknowns." The discovery part is very important. As the delivery team collects more examples, they start to understand the business domain more and more, and thus they reduce their uncertainty about some aspects of the product they have to deal with. As uncertainty decreases, the creativity and autonomy of the delivery team increase. For instance, they can now start suggesting their own examples that the business users didn't think were possible because of their lack of tech expertise. Now, having conversations with the business and domain experts sounds great, but we all know how that often ends up in practice. I started my journey in tech as a programmer. As programmers, we are taught to write code: algorithms, design patterns, abstractions. Or, if you are a designer, you are taught to design: organize information and create beautiful interfaces. But when we get our entry-level jobs, our employers expect us to "deliver value to the clients." And among those clients can be, for example... a bank.
But I could know next to nothing about banking, except how to efficiently decrease my account balance. So I would have to somehow translate what is expected of me into code... I would have to build a bridge between banking and my technical expertise if I want to deliver any value. BDD helps me build such a bridge on a stable foundation of fluid communication between the delivery team and the domain experts. Learn more: if you want to read more about BDD, I wrote a book on the subject. "Writing Great Specifications" explores the art of analysing requirements and will help you learn how to build a great BDD process and use examples as a core part of that process. The book talks about the ubiquitous language, collecting examples, and creating so-called executable specifications (automated tests) out of the examples: techniques that help BDD teams deliver great software on time and on budget. If you are interested in buying "Writing Great Specifications," you can save 39% with the promo code 39nicieja2 :)

A: I have experimented a little with the BDD approach, and my premature conclusion is that BDD is well suited to use-case implementation, but not to the underlying details. TDD still rocks on that level. BDD is also used as a communication tool. The goal is to write executable specifications which can be understood by the domain experts.

A: Behaviour Driven Development seems to focus more on the interaction and communication between developers, and also between developers and testers. The Wikipedia article has an explanation: Behavior-driven development. I'm not practicing BDD myself, though.

A: It seems to me that BDD has a broader scope. It almost implies that TDD is used, and that BDD is the encompassing methodology that gathers the information and requirements for using, among other things, TDD practices to ensure rapid feedback.

A: With my latest knowledge of BDD compared to TDD: BDD focuses on specifying what will happen next, whereas TDD focuses on setting up a set of conditions and then looking at the output.

A: Consider the primary benefit of TDD to be design. It should be called Test Driven Design. BDD is a subset of TDD; call it Behaviour Driven Design. Now consider a popular implementation of TDD: unit testing. The units in unit testing are typically one bit of logic that is the smallest unit of work you can make. When you put those units together in a functional way to describe the desired behaviour to the machines, you need to understand the behaviour you are describing to the machine. Behaviour Driven Design focuses on verifying the implementers' understanding of the use cases/requirements/whatever, and verifies the implementation of each feature. BDD and TDD in general serve the important purpose of informing design, and the second purpose of verifying the correctness of the implementation, especially when it changes. BDD done right involves biz and dev (and QA), whereas unit testing (possibly incorrectly viewed as TDD rather than one type of TDD) is typically done in the dev silo. I would add that BDD tests serve as living requirements.

A: To me, the primary difference between BDD and TDD is focus and wording. And words are important for communicating your intent. TDD directs focus to testing, and since in the "old waterfall world" tests come after implementation, this mindset leads to wrong understanding and behaviour. BDD directs focus to behaviour and specification, which steers waterfall-trained minds away from that trap. So BDD is more easily understood as a design practice and not as a testing practice.
A: There seem to be two types of BDD. The first is the original style that Dan North discusses and which caused the creation of the xBehave-style frameworks. To me this style is primarily applicable for acceptance testing or specifications against domain objects. The second style is what Dave Astels popularised and which, to me, is a new form of TDD that has some serious benefits. It focuses on behavior rather than testing, and also on small test classes, trying to get to the point where you basically have one line per specification (test) method. This style suits all levels of testing and can be done using any existing unit testing framework, though newer frameworks (xSpec style) help focus on the behavior rather than the testing. There is also a BDD group which you might find useful: http://groups.google.com/group/behaviordrivendevelopment/

A: I understand BDD to be more about specification than testing. It is linked to Domain Driven Design (don't you love these *DD acronyms?). It is linked with a certain way to write user stories, including high-level tests. An example by Tom ten Thij:

Story: User logging in
As a user
I want to login with my details
So that I can get access to the site

Scenario: User uses wrong password
Given a username 'jdoe'
And a password 'letmein'
When the user logs in with username and password
Then the login form should be shown again

(In his article, Tom goes on to directly execute this test specification in Ruby.) The pope of BDD is Dan North. You'll find a great introduction in his Introducing BDD article. You will find a comparison of BDD and TDD in this video. Also an opinion about BDD as "TDD done right" by Jeremy D. Miller. March 25, 2013 update: the video above has been missing for a while. Here is a recent one by Llewellyn Falco, BDD vs TDD (explained). I find his explanation clear and to the point.

A: There is no difference between TDD and BDD, except that you can read your tests better and you can use them as requirements. If you write your requirements with the same words as you write BDD tests, then you can come away from your client with some of your tests already defined and ready to write code.

A: In short, the major difference between TDD and BDD is this: in TDD we are mainly focused on test data; in BDD our main focus is on the behavior of the project, so that any non-programming person can understand what a piece of code does from the title of its test method.

A: Here's the quick snapshot:
* TDD is just the process of testing code before writing it!
* DDD is the process of being informed about the Domain before each cycle of touching code!
* BDD is an implementation of TDD which brings in some aspects of DDD!
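To make the xSpec naming point above concrete, here is a small sketch of the same check written twice with NUnit (my choice of framework for illustration; the answers above don't mandate one). The first fixture is conventional TDD-style naming; in the second, the class and method names read as a behaviour specification:

using System.Collections.Generic;
using NUnit.Framework;

// Conventional TDD-style unit test: named after the unit under test.
[TestFixture]
public class StackTests
{
    [Test]
    public void Pop_ReturnsLastPushedItem()
    {
        var stack = new Stack<int>();
        stack.Push(42);
        Assert.AreEqual(42, stack.Pop());
    }
}

// BDD-flavoured naming: the fixture and method read as a specification,
// one assertion per "spec" method.
[TestFixture]
public class When_popping_a_non_empty_stack
{
    [Test]
    public void It_returns_the_most_recently_pushed_item()
    {
        var stack = new Stack<int>();
        stack.Push(42);
        Assert.AreEqual(42, stack.Pop());
    }
}

The executable behaviour is identical; the point is that the second version can be read aloud to a non-programmer as "when popping a non-empty stack, it returns the most recently pushed item."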
{ "language": "en", "url": "https://stackoverflow.com/questions/2509", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "135" }
Q: Using multiple SQLite databases at once I have 2 SQLite databases, one downloaded from a server (server.db), and one used as storage on the client (client.db). I need to perform various sync queries on the client database, using data from the server database. For example, I want to delete all rows in the client.db tRole table, and repopulate with all rows in the server.db tRole table. Another example: I want to delete all rows in the client.db tFile table where the fileID is not in the server.db tFile table. In SQL Server you can just prefix the table with the name of the database. Is there any way to do this in SQLite using Adobe AIR?

A: I just looked at the AIR SQL API, and there's an attach method on SQLConnection that looks like exactly what you need. I haven't tested this, but according to the documentation it should work:

var connection : SQLConnection = new SQLConnection();
connection.open(firstDbFile);
connection.attach(secondDbFile, "otherDb");

var statement : SQLStatement = new SQLStatement();
statement.connection = connection;
statement.text = "INSERT INTO main.myTable SELECT * FROM otherDb.myTable";
statement.execute();

There may be errors in that code snippet; I haven't worked much with the AIR SQL API lately. Notice that the tables of the database opened with open are available using main.tableName; any attached database can be given any name at all (otherDb in the example above).

A: It's possible to open multiple databases at once in SQLite, but it's doubtful if it can be done when working from Flex/AIR. In the command line client you run ATTACH DATABASE path/to/other.db AS otherDb, and then you can refer to tables in that database as otherDb.tableName, just as in MySQL or SQL Server. Tables in an attached database can be referred to using the syntax database-name.table-name. ATTACH DATABASE documentation at sqlite.org

A: This code works; I wrote it myself:

package lib.tools
{
    import flash.data.SQLConnection;
    import flash.data.SQLMode;
    import flash.data.SQLStatement;
    import flash.errors.SQLError;
    import flash.events.SQLErrorEvent;
    import flash.filesystem.File;
    import flash.utils.ByteArray;

    public class getConn
    {
        public var Conn:SQLConnection;

        // database is an Array of file names; the first entry is opened as
        // the main database, the rest are attached to the same connection.
        public function getConn(database:Array)
        {
            Conn = new SQLConnection();
            Conn.addEventListener(SQLErrorEvent.ERROR, createError);

            var Key:ByteArray = new ByteArray();
            Key.writeUTFBytes("Some16ByteString");

            // Open the main database once, with the encryption key.
            var dbFile:File = File.applicationDirectory.resolvePath(database[0]);
            Conn.open(dbFile, SQLMode.CREATE, false, 1024, Key);

            // Attach any further databases under the name of the file
            // minus its extension.
            for (var i:Number = 1; i < database.length; i++)
            {
                var DBname:String = database[i];
                Conn.attach(DBname.split(".")[0], File.applicationDirectory.resolvePath(DBname));
            }
        }

        private function createError(event:SQLErrorEvent):void
        {
            trace("Error code:", event.error.details);
            trace("Details:", event.error.message);
        }

        // Executes an array of SQL statements inside one transaction and
        // returns the result of the last statement.
        public function Rs(sql:Array):Object
        {
            var stmt:SQLStatement = new SQLStatement();
            stmt.sqlConnection = Conn;
            Conn.begin();
            try
            {
                for (var i:String in sql)
                {
                    stmt.text = sql[i];
                    stmt.execute();
                }
                Conn.commit();
            }
            catch (error:SQLError)
            {
                trace("Error code:", error.details);
                trace("Details:", error.message);
                Conn.rollback();
            }
            return stmt.getResult();
        }
    }
}
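The question is about AIR specifically, but the ATTACH DATABASE approach described above works the same from any SQLite client. For readers on .NET, here is a hedged sketch using the System.Data.SQLite provider, with the table and column names taken from the question; treat it as an illustration of the technique rather than a drop-in sync routine:

using System.Data.SQLite;

class SyncSketch
{
    static void Main()
    {
        using (var conn = new SQLiteConnection("Data Source=client.db"))
        {
            conn.Open();
            using (var cmd = conn.CreateCommand())
            {
                // Make server.db visible through the same connection.
                cmd.CommandText = "ATTACH DATABASE 'server.db' AS serverDb";
                cmd.ExecuteNonQuery();

                // Repopulate tRole from the server copy.
                cmd.CommandText = "DELETE FROM tRole";
                cmd.ExecuteNonQuery();
                cmd.CommandText = "INSERT INTO tRole SELECT * FROM serverDb.tRole";
                cmd.ExecuteNonQuery();

                // Drop client files whose fileID no longer exists on the server.
                cmd.CommandText =
                    "DELETE FROM tFile WHERE fileID NOT IN (SELECT fileID FROM serverDb.tFile)";
                cmd.ExecuteNonQuery();
            }
        }
    }
}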
{ "language": "en", "url": "https://stackoverflow.com/questions/2518", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: Visual Studio "Unable to start debugging on the web server. The web server did not respond in a timely manner." I get the following error pretty regularly when compiling in Visual Studio and running my web application: "Unable to start debugging on the web server. The web server did not respond in a timely manner. This may be because another debugger is already attached to the web server." Normally this is after having debug the application once already. From the command line I run "iisreset /restart" and it fixes the problem. How do I prevent this from happening in the first place? A: After trying all of the proposed solutions here and in other places (at least 10 different approaches), the only option that worked for me was: * *delete website and application pool on IIS *re-create website and application pool on IIS (in my case, everything exactly the same config as before) PS: I am using VS 2013 and IIS 7.5 (Win7). I hope this saves someone else a few hours. A: Go to task manager and end process aspnet_wp.exe before running application A: I have had this problem a couple times. One time it was resolved by taking Guy's advice: If this is what's happening with you then a quicker solution than running iisreset is to hit Shift-F5 when in Visual Studio and this will terminate the current debug session. You can then hit F5 and this will start a new debug session. On a separate occasion I had to: terminate all my IIS worker processes in the windows task manager (w3wp.exe*). You should be able to hit f5 in visual studio to debug. A: The solution that worked for me: * *Open Command Prompt (Run as Administrator) *Write iisreset /restart *Now, go back to your VS and debug. It will debug your solution. It worked for Visual Studio 2013 and 2015 too in my case. A: I find that this happens if I'm debugging with Firefox as my browser. When I exit Firefox the VS2005/8 debug session doesn't terminate. I have not found a solution for this (yet). If this is what's happening with you then a quicker solution than running iisreset is to hit Shift-F5 when in Visual Studio and this will terminate the current debug session. You can then hit F5 and this will start a new debug session. A: It sounds like you are probably hitting F5 in Visual Studio when you receive this error? There are a few things you can try. The easiest is to hit the Stop button before hitting F5. Optionally, when you are finished debugging and starting to make changes you can go to the Debug menu and choose either Stop Debugging or Terminate All. A: We use another way of debugging, we never use F5 anymore. We use a macro kind of like: http://blogs.conchango.com/howardvanrooijen/archive/2007/06/24/Attach-to-Web-Server-Macro-for-Visual-Studio.aspx (Which we bound to F6). This way you simply attach the debugger to IIS. It's (depending on project size) much quicker to make you changes, compile a single project that you changed and attach the debugger again. A: When debugging 2 web application (1 MVC and 2 is MVC WebAPI) that are both hosted in the local IIS. Make sure that each application is using a different application pool. I encountered the same issue and as soon as I change the app pool of the other one, it worked! A: I saw this message first time in my life and I was very confused about what is going on as it is not pretty obvious what to do. I ran iisreset and it took just 1 sec to finish the execution, and boom, I was back into the game. P.S. 
I am using Chrome.

A: Hit Shift+F5 when in Visual Studio and this will terminate the current debug session. You can then hit F5 and this will start a new debug session. Or close your application, reset IIS, then open your application and run it.

A: For me, I had two instances of Visual Studio open. The debugger was already attached to the web server from the other instance :). I stopped it in the first one and was able to attach from the second.

A: Very basic, but check that if you try to run the web site from IIS by clicking on "Browse", the site actually runs.

A: It sounds like something is eating up your web server's resources. Perhaps you have some resources (file handles, WCF proxies) that are being opened and not closed? I've had this happen to me specifically when I was not closing WCF client proxy connections. The problem is not necessarily that you have a debugger attached, but only that the web server is not responding in a timely manner. Note that the message says "This may be because another debugger is attached".

A: If you have a lot of breakpoints this will slow the debugging process down, so remove unneeded breakpoints and close the Autos window; this should solve your problem.

A: The issue normally appears when another instance of iexplore is still running. I used to have the issue when my IE crashed but I could still see it in the Task Manager. Once you "End Process", everything is back to normal :)

A: This answer will only apply if you are running your solution through IIS. You will know if this applies to you if you open up your website/project by doing the following: from within Visual Studio --> Open Website --> Local IIS --> select your project. This error kicked my butt for 4 hours, but finally I found an answer. I first attempted the iisreset /restart. This seemed to help slightly, but I still received the same error. What worked for me was going (on an XP machine) to Add/Remove Programs --> Add/Remove Windows Components --> click on IIS --> click on "Details". Be sure to have FrontPage Extensions installed if you are debugging through IIS.

A: If none of the other answers work for you, just end all IIS-related processes in Task Manager. This is what worked for me.

A: I ran into this issue when trying to debug two separate solutions in VS.NET, and both were using the IIS web server to launch the app. The first application will start, but any subsequent applications started that also run via IIS will then display that error. It seems that only a single application can be debugged via VS.NET hosted in IIS at a time. The solution: run project 1 from VS.NET (place any needed breakpoints) and start the second application directly from IIS (not VS.NET). Your breakpoints in app 1 (running in VS.NET) will be hit when accessing app 2 (run from IIS directly).

A: This happens to me quite a bit in VS 2010 Express, usually because the debugger stopped responding. Right-click the Windows taskbar and select 'Start Task Manager'. More than likely the ASP.NET debugger will be showing a 'not responding' status. Select it and simply terminate the process. Done!

A: With me it happened when IE was upgraded to a newer version. I went to Installed Updates and removed the new version of IE; after the computer restarted it went back to the old version, and the problem with debugging was solved.

A: Had the same problem, even after a reboot. Basically I did this:
* Restart IIS
* Clean Solution
* Rebuild Solution
Then it started working again.

A: This can also be caused if your website uses a database connection but the database server is unavailable.
I spent some time trying to resolve this issue in the usual ways, but even after restarting my workstation, the issue remained. Eventually I found that the SQL Server (MSSQLSERVER) service was not running. It should have been running, as it's set to Automatic, but it was stopped, even after the reboot. All the MSSQLSERVER events in the event log appeared normal, so it remains unknown why it wasn't running, but I have now set it to Automatic (delayed start) in the hope that this will reduce resource contention during startup. Once I started MSSQLSERVER, the message "Unable to start debugging on the web server. The web server did not respond in a timely manner" no longer appeared and normal service was resumed.

A: I had to recreate the site/application/virtual directory to make it work after I installed VS 2015 Update 3. Hope this helps someone. ;)

A: I know this is an old question, but I hit the same situation recently, tried every solution in this post, and had no luck. Finally, I found the solution that worked for me:
* Close Visual Studio
* Find Turn Windows features on or off in Control Panel
* Uncheck Internet Information Services in the popup dialog
* Restart your computer
* Check Internet Information Services in the same dialog, and make sure Internet Information Services -> World Wide Web Services -> Application Development Features -> ASP.NET is also checked
* Open Visual Studio, and now your application should be able to run in debug mode

A: * Open Options and Settings under the Debug menu
* Under Symbols, uncheck Microsoft Symbol Servers
* Build the solution
* iisreset
* F5 the solution (be sure Microsoft Symbol Servers is still unchecked)

A: This worked for me, via @mtkachenko in "Visual Studio 2012: Unable to attach the process. A debugger is already attached": "I have installed Debug Diagnostic Tool v2.0 and as a result I have Debug Diagnostic Service which is started automatically and attached to one of w3wp processes. After turning off and disabling this service all works fine. So if you get such error check processes in task manager which can capture your w3wp process"

A: I got it working by creating a new application pool in IIS and pointing my application to the new application pool. I also deleted the old application pool.

A: In my case it was solved by deleting all breakpoints. It looks like I had a lot of breakpoints (conditional and unconditional), and they were causing a lack of resources.

A: I just solved this problem on my machine. My problem was that I had upgraded IE 9 to IE 10, and I got this error. Solution: remove IE 10 and downgrade to IE 9. Go to "Programs and Features" --> "View recent updates" --> find IE 10 --> uninstall it --> reboot --> IE 9 is back --> debug --> works OK.

A: Try performing either of the following steps to resolve your issue:
* Restart your IIS server
* Clean the solution of your project, then build again
If the above steps do not help, you can finally try restarting your machine.

A: In your cmd, type iisreset and press Enter. After that, your IIS is reset and your application should run perfectly again.
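Since "iisreset /restart" is by far the most common fix in this thread, here is a small hedged C# helper that shells out to it, assuming iisreset.exe is on the PATH and the user can elevate; running the command in an elevated prompt works just as well, this only saves typing:

using System.Diagnostics;

class IisResetHelper
{
    static void Main()
    {
        var psi = new ProcessStartInfo("iisreset", "/restart")
        {
            UseShellExecute = true,
            Verb = "runas" // iisreset needs administrator rights
        };

        using (var process = Process.Start(psi))
        {
            process.WaitForExit();
        }
    }
}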
{ "language": "en", "url": "https://stackoverflow.com/questions/2524", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "56" }
Q: .NET obfuscation tools/strategy My product has several components: ASP.NET, a Windows Forms app and a Windows service. 95% or so of the code is written in VB.NET. For intellectual-property reasons, I need to obfuscate the code, and until now I have been using a version of Dotfuscator which is now over 5 years old. I'm thinking it is time to move to a new-generation tool. What I'm looking for is a list of requirements which I should consider when searching for a new obfuscator. What I know I should look for so far:
* Serialization/de-serialization. In my current solution, I simply tell the tool not to obfuscate any class data members, because the pain of not being able to load previously serialized data is simply too big.
* Integration with the build process.
* Working with ASP.NET. In the past, I have found this problematic due to changing .dll names (you often have one per page), which not all tools handle well.

A: I've also been using SmartAssembly. I found Eziriz .NET Reactor much better for me on .NET applications. It obfuscates, supports Mono, merges assemblies, and also has a very nice licensing module to create trial versions or link the licence to a particular machine (very easy to implement). The price is also very competitive, and when I needed support they were fast. Eziriz. Just to be clear, I'm just a customer who likes the product, not in any way related to the company.

A: The short answer is that you can't. There are various tools around that will make it harder for someone to read your code, some of which have been pointed out by other answers. However, all these do is make it harder to read; they increase the amount of effort required, that is all. Often this is enough to deter casual readers, but someone who is determined to dig into your code will always be able to do so.

A: We have a multi-tier app with ASP.NET and WinForms interfaces that also supports remoting. I've had no problems using any obfuscator, with the exception of the encrypting type that generates a loader, which can be problematic in all sorts of unexpected ways and is just not worth it in my opinion. Actually, my advice would be more along the lines of "avoid encrypting-loader-type obfuscators like the plague". :) In my experience any obfuscator will work fine with any aspect of .NET, including ASP.NET and remoting; you just have to become intimate with the settings and learn how far you can push it in which areas of your code. And take the time to attempt reverse engineering on what you get, and see how it works with the various settings. We used several over the years in our commercial apps and settled on the Spices obfuscator from 9rays.net because the price is right, it does the job, and they have good support (though we really haven't needed the support in years). To be honest, I don't think it really matters which obfuscator you use; the issues and the learning curve are all the same if you want it to work properly with remoting and ASP.NET. As others have mentioned, all you're really doing is the equivalent of a padlock: keeping otherwise honest people out and/or making it harder to simply recompile an app. Licensing is usually the key area for most people, and you should definitely be using some kind of digitally signed certificate system for licensing anyway. Your biggest loss will come from casual sharing of licenses if you don't have a smart system in place; the people that break the licensing system were never going to buy in the first place.
It's really easy to take this too far and have a negative impact on your customers and your business; do what is simple and reasonable, and then don't worry about it.

A: Crypto Obfuscator addresses all your concerns and scenarios. It:
* Automatically excludes types/members from obfuscation based on rules. Serialized types/fields are one of them.
* Can be integrated into the build process using MSBuild.
* Supports ASP.NET projects.

A: We've tried a number of obfuscators. None of them work on a large client/server app that uses remoting. The problem is that client and server share some DLLs, and we haven't found any obfuscator that can handle it. We've tried Dotfuscator Pro, SmartAssembly, Xenocode, Salamander, and several small-time apps whose names escape me. Frankly, I'm convinced obfuscation is a big hack. Even the problem it addresses is not entirely a real problem. The only things you really need to protect are connection strings, activation codes, security-sensitive things like that. This nonsense that another company is going to reverse-engineer your whole codebase and create a competing product from it is something from a paranoid manager's nightmare, not reality.

A: For the past two days I've been experimenting with Dotfuscator Community Edition Advanced (a free download after registering the basic CE that comes bundled with Visual Studio). I think the reason more people don't use obfuscation as a default option is that it's a serious hassle compared to the risk. On smaller test projects I could get the obfuscated code running with a lot of effort. Deploying a simple project via ClickOnce was troublesome, but achievable after manually signing the manifests with mage. The only problem was that on error the stack trace came back obfuscated, and the CE doesn't have a deobfuscator or clarifier packaged. I tried to obfuscate a real project which is VSTO-based in Excel, with Virtual Earth integration, lots of web service calls, an IoC container and lots of reflection. It was impossible. If obfuscation is really a critical requirement, you should design your application with that in mind from the start, testing the obfuscated builds as you progress. Otherwise, if it's a fairly complex project, you're going to end up with a serious amount of pain.

A: I am 'knee deep' in this now, trying to find a good solution. Here are my impressions so far.
Xenocode - I have an old licence for Xenocode2005 which I used to obfuscate my .NET 2.0 assemblies. It worked fine on XP and was a decent solution. My current project is .NET 3.5 and I am on Vista; support told me to give it a go, but the 2005 version does not even work on Vista (crashes), and now I would have to buy 'PostBuild2008' at a gobsmacking price point of $1900. This might be a good tool, but I'm not going to find out. Too expensive.
Reactor.Net - This is a much more attractive price point, and it worked fine on my standalone executable. The licensing module was also nice and would have saved me a bunch of effort. Unfortunately, it is missing a key feature: the ability to exclude things from the obfuscation. This makes it impossible to achieve the result I needed (merge multiple assemblies together, obfuscate some, leave others un-obfuscated).
SmartAssembly - I downloaded the eval for this and it worked flawlessly. I was able to achieve everything I wanted, and the interface was first class. The price point is still a bit hefty.
Dotfuscator Pro - Couldn't find the price on the website. Currently in discussions to get a quotation. Sounds ominous.
Confuser - an open-source project which works quite well (to confuse people, just as the name implies). Note: ConfuserEx is reportedly "broken" according to Issue #498 on their GitHub repo.

A: Back with .NET 1.1 obfuscation was essential: decompiling code was easy, and you could go from assembly to IL to C# code and have it compiled again with very little effort. Now with .NET 3.5 I'm not at all sure. Try decompiling a 3.5 assembly; what you get is a long, long way from compiling. Add the optimisations from 3.5 (far better than 1.1) and the way anonymous types, delegates and so on are handled by reflection (they are a nightmare to recompile). Add lambda expressions, compiler 'magic' like Linq syntax and var, and C# 2 features like yield (which results in new classes with unreadable names). Your decompiled code ends up a long, long way from compilable. A professional team with lots of time could still reverse-engineer it, but then the same is true of any obfuscated code. What code they got out of that would be unmaintainable and highly likely to be very buggy. I would recommend key-signing your assemblies (meaning if hackers can recompile one, they have to recompile all), but I don't think obfuscation is worth it.

A: I've recently tried piping the output of one free obfuscator into another free obfuscator: namely Dotfuscator CE and the new Babel obfuscator on CodePlex. More details on my blog. As for serialization, I've moved that code into a different DLL and included that in the project. I reasoned that there weren't any secrets in there that aren't in the XML anyway, so it didn't need obfuscation. If there is any serious code in those classes, using partial classes in the main assembly should cover it.

A: You should use whatever is cheapest and best known for your platform and call it a day. Obfuscation of high-level languages is a hard problem, because VM opcode streams don't suffer from the two biggest problems native opcode streams do: function/method identification and register aliasing. What you should know about bytecode reversing is that it is already standard practice for security testers to review straight X86 code and find vulnerabilities in it. In raw X86, you cannot necessarily even find valid functions, let alone track a local variable throughout a function call. In almost no circumstances do native code reversers have access to function and variable names --- unless they're reviewing Microsoft code, for which MSFT helpfully provides that information to the public. "Dotfuscation" works principally by scrambling function and variable names. It's probably better to do this than to publish code with debug-level information, where Reflector is literally giving up your source code. But anything you do beyond this is likely to get into diminishing returns.

A: I have had no problems with SmartAssembly.

A: You could use "Dotfuscator Community Edition" - it comes by default in Visual Studio 2008 Professional. You can read about it at: http://msdn.microsoft.com/en-us/library/ms227240%28VS.80%29.aspx http://www.preemptive.com/dotfuscator.html The "Professional" version of the product costs money but is better. Do you really need your code obfuscated? Usually there is very little wrong with your application being decompiled, unless it is used for security purposes. If you are worried about people "stealing" your code, don't be; the vast majority of people looking at your code will be doing so for learning purposes.
Anyway, there is no totally effective obfuscation strategy for .NET; someone with enough skill will always be able to decompile/change your application.

A: Avoid Reactor. It is completely useless (and yes, I paid for a license). Xenocode was the best one I encountered, and I bought a license for it too. The support was very good, but I didn't need it much as it just worked. I tested every obfuscator I could find, and my conclusion is that Xenocode was far and away the most robust and did the best job (it can also post-process your .NET exe into a native exe, which I didn't see anywhere else). There are two main differences between Reactor and Xenocode. The first is that Xenocode actually works. The second is that the execution speed of your assemblies is no different. With Reactor it was about 6 million times slower. I also got the impression that Reactor was a one-man operation.

A: I found that Agile.Net provides pretty good protection for your .NET assembly, because it offers not only obfuscation but also encryption. Download a free trial. http://secureteam.net/NET-Code-Protection.aspx http://secureteam.net/downloads.aspx

A: If you're looking for a free one, you could try DotObfuscator Community Edition, which comes with Visual Studio, or Eazfuscator.NET. Since June 29, 2012, Eazfuscator.NET is commercial. The last free available version is 3.3.

A: I've been obfuscating code in the same application since .NET 1, and it's been a major headache from a maintenance perspective. As you've mentioned, the serialization problem can be avoided, but it's really easy to make a mistake and obfuscate something you didn't want obfuscated. It's easy to break the build, or to change the obfuscation pattern and not be able to open old files. Plus it can be difficult to find out what went wrong and where. Our choice was Xenocode, and were I to make the choice again today, I would prefer not to obfuscate the code, or to use Dotfuscator.

A: Here's a document from Microsoft themselves. Hope that helps... it's from 2003, but it might still be relevant.

A: I have been using SmartAssembly. Basically, you pick a DLL and it returns it obfuscated. It seems to work fine and I've had no problems so far. Very, very easy to use.

A: I have tried almost every obfuscator on the market, and SmartAssembly is the best in my opinion.

A: We're using SmartAssembly on our Windows client. It works just fine. It does add some extra problems too: class names printed in your log files/exceptions have to be de-obfuscated, and of course you can't create a class from its name. So it's a good idea to take a look at your client and see which problems obfuscation can cause.

A: It all depends on the programming language you use. Read the article: Obfuscated code

A: The free way would be to use Dotfuscator from within Visual Studio; otherwise you'd have to go out and buy an obfuscator like PostBuild (http://www.xenocode.com/Landing/Obfuscation.aspx)

A: I had to use obfuscation/resource protection in my latest project and found Crypto Obfuscator to be a nice and simple-to-use tool. The serialization issue is only a matter of settings in this tool.

A: There's a good open-source version called Obfuscar. It seems to work fine. Types, properties, fields and methods can be excluded. The original is here: https://code.google.com/p/obfuscar/, though it seems to not be updated anymore.

A: You may also want to look at new code-protection technologies such as Metaforic and V.i.Labs, and new software copy-protection technologies such as ByteShield.
Disclosure: I work for ByteShield.

A: I also use SmartAssembly. However, I don't know how it works for a web application. I'd like to point out that if your app uses shareware-type protection, make sure it doesn't check the license with a boolean return; it's too easy to crack by patching a single byte. http://blogs.compdj.com/post/Binary-hack-a-NET-executable.aspx

A: SmartAssembly is great; I used it in most of my projects.

A: I have tried a product called Rummage, and it does a good job of giving you some control... Although it lacks many things that Eziriz offers, the price for Rummage is very good...

A: I tried the Eziriz demo version... I liked it, but never bought the software.

A: Obfuscating is not real protection. If you have a .NET exe file, there is a FAR better solution. I use Themida and can tell you that it works very well. The only drawback of Themida is that it cannot protect .NET DLLs. (It also protects C++ code in exes and DLLs.) Themida is by far cheaper than the obfuscators mentioned here and is the best anti-piracy protection on the market. It creates a virtual machine where critical parts of your code are run, and it runs several threads that detect manipulation or breakpoints set by a cracker. It converts the .NET exe into something that Reflector does not even recognize as a .NET assembly anymore. Please read the detailed description on their website: http://www.oreans.com/themida_features.php
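Following up on the boolean-return licensing point above, here is a minimal illustration (my own sketch, not from the linked article) of why a single boolean gate is weak: the entire protection compiles down to one conditional branch in IL, so an attacker only has to invert one instruction to bypass it:

using System;

static class LicenseGateDemo
{
    // Anti-pattern: all protection funnels through one bool. In IL this
    // becomes a single brtrue/brfalse branch that a cracker can flip.
    static bool IsLicensed()
    {
        // Stand-in check for the demo; a real product would validate a key.
        return Environment.GetEnvironmentVariable("DEMO_LICENSE") == "VALID";
    }

    static void Main()
    {
        if (!IsLicensed()) // the one instruction an attacker needs to patch
        {
            Console.WriteLine("No valid license found.");
            return;
        }
        Console.WriteLine("Running licensed application...");
    }
}

A more robust design derives data the program actually needs (configuration, keys for encrypted resources) from the license itself, so that there is no single yes/no branch to flip; that is exactly the sort of scheme the obfuscator licensing modules mentioned above try to provide.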
{ "language": "en", "url": "https://stackoverflow.com/questions/2525", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "165" }
Q: Find node clicked under context menu How can I find out for which node in a tree list the context menu has been activated? For instance, right-clicking a node and selecting an option from the menu. I can't use the TreeView's SelectedNode property because the node has only been right-clicked, not selected.

A: You can add a mouse click event to the TreeView, then select the correct node using GetNodeAt, given the mouse coordinates provided by the MouseEventArgs:

void treeView1MouseUp(object sender, MouseEventArgs e)
{
    if (e.Button == MouseButtons.Right)
    {
        // Select the clicked node
        treeView1.SelectedNode = treeView1.GetNodeAt(e.X, e.Y);

        if (treeView1.SelectedNode != null)
        {
            myContextMenuStrip.Show(treeView1, e.Location);
        }
    }
}

A: This is a very old question, but I still found it useful. I am using a combination of some of the answers above, because I don't want the right-clicked node to become the SelectedNode. If I have the root node selected and want to delete one of its children, I don't want the child selected when I delete it (I am also doing some work on the SelectedNode that I don't want to happen on a right-click). Here is my contribution:

// Global private variable to hold the right-clicked node
private TreeNode _currentNode = new TreeNode();

// Set the global variable to the node that was right-clicked
private void treeView_NodeMouseClick(object sender, TreeNodeMouseClickEventArgs e)
{
    if (e.Button == System.Windows.Forms.MouseButtons.Right)
        _currentNode = e.Node;
}

// Do something when the menu item is clicked, using _currentNode
private void toolStripMenuItem_Clicked(object sender, EventArgs e)
{
    if (_currentNode != null)
        MessageBox.Show(_currentNode.Text);
}

A: Here is my solution. Put this line into the NodeMouseClick event of the TreeView:

((TreeView)sender).SelectedNode = e.Node;

A: Similar to Marcus' answer, this was the solution I found worked for me:

private void treeView_MouseClick(object sender, MouseEventArgs e)
{
    if (e.Button == MouseButtons.Right)
    {
        treeView.SelectedNode = treeView.GetNodeAt(e.Location);
    }
}

You need not show the context menu yourself if you set it on each individual node like so:

TreeNode node = new TreeNode();
node.ContextMenuStrip = contextMenu;

Then inside the ContextMenu's Opening event, the TreeView.SelectedNode property will reflect the correct node.

A: I find the standard Windows treeview selection behavior to be quite annoying. For example, if you are using Explorer and right-click on a node and hit Properties, it highlights the node and shows the properties dialog for the node you clicked on. But when you return from the dialog, the highlighted node is the node that was previously selected/highlighted before you did the right-click. I find this causes usability problems because I am forever being confused about whether I acted on the right node. So in many of our GUIs, we change the selected tree node on a right-click so that there is no confusion. This may not be the same as a standard Windows app like Explorer (and I tend to strongly model our GUI behavior after standard Windows apps for usability reasons), but I believe that this one exception case results in far more usable trees.
Here is some code that changes the selection during the right-click: private void tree_MouseUp(object sender, System.Windows.Forms.MouseEventArgs e) { // only need to change the selected node during a right-click - otherwise the tree does // fine by itself if ( e.Button == MouseButtons.Right ) { // e.X and e.Y are already client coordinates, so no PointToClient conversion is needed Point pt = new Point( e.X, e.Y ); TreeNode Node = tree.GetNodeAt( pt ); if ( Node != null ) { if ( Node.Bounds.Contains( pt ) ) { tree.SelectedNode = Node; ResetContextMenu(); contextMenuTree.Show( tree, pt ); } } } } A: Reviving this question because I find this to be a much better solution. I use the NodeMouseClick event instead. void treeview_NodeMouseClick(object sender, TreeNodeMouseClickEventArgs e) { if( e.Button == MouseButtons.Right ) { tree.SelectedNode = e.Node; } } A: If you want the context menu to be dependent on the selected item, your best move, I think, is to use Jonesinator's code to select the clicked item. Your context menu content can then depend on the selected item. Selecting the item first, as opposed to just using it for the context menu, gives a few advantages. The first is that the user has a visual indication of which node he clicked and thus which item the menu is associated with. The second is that this way it's a hell of a lot easier to stay compatible with other methods of invoking the context menu (e.g. keyboard shortcuts). A: Here is how I do it. private void treeView_NodeMouseClick(object sender, TreeNodeMouseClickEventArgs e) { if (e.Button == System.Windows.Forms.MouseButtons.Right) e.Node.TreeView.SelectedNode = e.Node; } A: Another option you could run with is to have a global variable that holds the selected node. You would just need to use the TreeNodeMouseClickEventArgs. public void treeNode_Click(object sender, TreeNodeMouseClickEventArgs e) { _globalVariable = e.Node; } Now you have access to that node and its properties. A: I would like to propose an alternative to using the click events, using the context menu's Opened event: private void Handle_ContextMenu_Opened(object sender, EventArgs e) { TreeViewHitTestInfo info = treeview.HitTest(treeview.PointToClient(Cursor.Position)); TreeNode contextNode = null; // was there a node where the context menu was opened? if (info != null && info.Node != null) { contextNode = info.Node; } // Set the enabled states of the context menu elements menuEdit.Enabled = contextNode != null; menuDelete.Enabled = contextNode != null; } This has the following advantages that I can see: * *It does not change the selected node *No separate event handler is needed to store the target node instance *Can disable menu items if the user right-clicks empty space in the TreeView Note: if you worry that the user may have already moved the mouse by the time the menu is opened, it is possible to use the Opening event instead.
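For completeness, here is a minimal sketch of the Opening-event variant mentioned at the end of the answer above. The control and menu-item names (treeView, contextMenu, menuEdit, menuDelete) are hypothetical, and the wiring comment shows one way it could be hooked up:

// Wire-up, e.g. in the form's constructor:
//   treeView.ContextMenuStrip = contextMenu;
//   contextMenu.Opening += contextMenu_Opening;

private void contextMenu_Opening(object sender, System.ComponentModel.CancelEventArgs e)
{
    // Hit-test at the current cursor position, converted to tree coordinates.
    TreeViewHitTestInfo info = treeView.HitTest(treeView.PointToClient(Cursor.Position));
    TreeNode contextNode = (info != null) ? info.Node : null;

    // Cancel the menu entirely when empty space was right-clicked,
    // instead of merely disabling its items.
    e.Cancel = (contextNode == null);

    menuEdit.Enabled = contextNode != null;
    menuDelete.Enabled = contextNode != null;
}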
{ "language": "en", "url": "https://stackoverflow.com/questions/2527", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "76" }
Q: How do you disable browser autocomplete on web form field / input tags? How do you disable autocomplete in the major browsers for a specific input (or form field)? A: You may use it on an input. For example: <input type=text name="test" autocomplete="off" /> A: Many modern browsers do not support autocomplete="off" for login fields anymore; autocomplete="new-password" works instead. For more information, see the MDN docs. A: Here's the perfect solution that will work in all browsers as of May 2021! TL;DR Rename your input field names and field ids to something non-related like 'data_input_field_1'. Then add the &#8204; character into the middle of your labels. This is a non-printing character, so you won't see it, but it tricks the browser into not recognizing the field as one needing auto-completion, so no built-in auto-complete widget is shown! The Details Almost all browsers use a combination of the field's name, id, placeholder, and label to determine if the field belongs to a group of address fields that could benefit from auto-completion. So if you have a field like <input type="text" id="address" name="street_address">, pretty much all browsers will interpret the field as being an address field. As such, the browser will display its built-in auto-completion widget. The dream would be that using the attribute autocomplete="off" would work; unfortunately, most browsers nowadays don't obey the request. So we need to use some trickery to get the browsers to not display the built-in autocomplete widget. The way we will do that is by fooling the browser into believing that the field is not an address field at all. Start by renaming the id and the name attributes to something that won't give away that you're dealing with address-related data. So rather than using <input type="text" id="city-input" name="city">, use something like this instead: <input type="text" id="input-field-3" name="data_input_field_3">. The browser doesn't know what data_input_field_3 represents. But you do. If possible, don't use placeholder text, as most browsers will also take that into account. If you have to use placeholder text, then you'll have to get creative and make sure you're not using any words relating to the address parameter itself (like City). Using something like Enter location can do the trick. The final parameter is the label attached to the field. However, if you're like me, you probably want to keep the label intact and display recognizable fields to your users like "Address", "City", "State", "Country". Well, great news: you can! The best way to achieve that is to insert a Zero-Width Non-Joiner character, &#8204;, as the second character in the label, replacing <label>City</label> with <label>C&#8204;ity</label>. This is a non-printing character, so your users will see City, but the browser will be tricked into seeing C ity and will not recognize the field! Mission accomplished! If all went well, the browser should not display the built-in address auto-completion widget on those fields anymore! A: As others have said, the answer is autocomplete="off". However, I think it's worth stating why it's a good idea to use this in certain cases, as some answers to this and duplicate questions have suggested it's better not to turn it off. Stopping browsers from storing credit card numbers shouldn't be left to users. Too many users won't even realize it's a problem. It's particularly important to turn it off on fields for credit card security codes. As this page states: "Never store the security code ...
its value depends on the presumption that the only way to supply it is to read it from the physical credit card, proving that the person supplying it actually holds the card." The problem is, if it's a public computer (cyber cafe, library, etc.), it's then easy for other users to steal your card details, and even on your own machine a malicious website could steal autocomplete data. A: Always working solution I've solved the endless fight with Google Chrome with the use of random characters. When you always render the autocomplete attribute with a random string, it will never remember anything. <input name="name" type="text" autocomplete="rutjfkde"> Hope it helps other people. Update 2022: Chrome added the improvement autocomplete="new-password", which solves this, but I am not sure whether Chrome will change the behavior again after some time. A: Chrome is planning to support this. For now the best suggestion is to use an input type that is rarely autocompleted. Chrome discussion <input type='search' name="whatever" /> To be compatible with Firefox, use the normal autocomplete='off': <input type='search' name="whatever" autocomplete='off' /> A: You can disable autocomplete if you remove the form tag. The same was done by my bank, and I was wondering how they did it. It even removes the value that was already remembered by the browser after you remove the tag. A: No fake inputs, no JavaScript! There is no way to disable autofill consistently across browsers. I have tried all the different suggestions and none of them work in all browsers. The only way is not to use a password input at all. Here's what I came up with: <style type="text/css"> @font-face { font-family: 'PasswordDots'; src: url('text-security-disc.woff') format('woff'); font-weight: normal; font-style: normal; } input.password { font-family: 'PasswordDots' !important; font-size: 8px !important; } </style> <input class="password" type="text" spellcheck="false" /> Download: text-security-disc.woff Here's what my final result looks like: The negative side effect is that it's possible to copy plain text from the input, though it should be possible to prevent that with some JS. A: <input autocomplete="off" aria-invalid="false" aria-haspopup="false" spellcheck="false" /> I find this works for me in all browsers. Using only the autocomplete attribute didn't work unless I combined all the attributes you see. I got the solution from a Google Forms input field. A: <script language="javascript" type="text/javascript"> $(document).ready(function () { try { $("input[type='text']").each( function(){ $(this).attr("autocomplete", "off"); }); } catch (e) { } }); </script> A: This is about the autocomplete of a textbox. We can disable autocomplete on a TextBox in two ways: * *At the browser level *In code To disable it at the browser level, go to the browser's settings, open Advanced Settings, uncheck the autofill checkbox, and then restart the browser.
To disable it in code, you can do it as follows - Using AutoCompleteType="Disabled": <asp:TextBox runat="server" ID="txt_userid" AutoCompleteType="Disabled"></asp:TextBox> By setting autocomplete="off" on the TextBox: <asp:TextBox runat="server" ID="txt_userid" autocomplete="off"></asp:TextBox> By setting autocomplete="off" on the form: <form id="form1" runat="server" autocomplete="off"> // Your content </form> By using code in the .cs page: protected void Page_Load(object sender, EventArgs e) { if(!Page.IsPostBack) { txt_userid.Attributes.Add("autocomplete", "off"); } } By using jQuery: <head runat="server"> <title></title> <script src="Scripts/jquery-1.6.4.min.js"></script> <script type="text/javascript"> $(document).ready(function () { $('#txt_userid').attr('autocomplete', 'off'); }); </script> A: If your issue is having a password field being auto-completed, then you may find this useful... We had this issue in several areas of our site where the business wanted to re-query the user for their username and password and specifically did not want the password autofill to work, for contractual reasons. We found that the easiest way to do this is to put in a fake password field for the browser to find and fill while the real password field remains untouched. <!-- This is a fake password input to defeat the browser's autofill behavior --> <input type="password" id="txtPassword" style="display:none;" /> <!-- This is the real password input --> <input type="password" id="txtThisIsTheRealPassword" /> Note that in Firefox and IE it was enough to simply put any input of type password before the actual one, but Chrome saw through that and forced me to actually name the fake password input (by giving it an obvious password id) to get it to "bite". I used a class to implement the style instead of an embedded style, so try that if the above doesn't work for some reason. A: The answer dsuess posted with the readonly trick was very clever and worked. But as I am using Bootstrap, the readonly input field was marked with a grey background until focused. While the document loads, you can trick the browser by simply locking and unlocking the input. So I had the idea to implement this as a jQuery solution: jQuery(document).ready(function () { $("input").attr('readonly', true); $("input").removeAttr('readonly'); }); A: My problem was mostly autofill with Chrome, but I think this is probably more problematic than autocomplete. Trick: use a timer to reset the form and set the password fields to blank. The 100 ms duration seems to be the minimum for it to work. $(document).ready(function() { setTimeout(function() { var $form = $('#formId'); $form[0].reset(); $form.find('INPUT[type=password]').val(''); }, 100); }); A: I use TextMode="password" with autocomplete="new-password", and in the page load of the .aspx code-behind: txtPassword.Attributes.Add("value", ""); A: Google Chrome ignores the autocomplete="off" attribute for certain inputs, including password inputs and common inputs detected by name. For example, if you have an input with the name address, then Chrome will provide autofill suggestions from addresses entered on other sites, even if you tell it not to: <input type="text" name="address" autocomplete="off"> If you don't want Chrome to do that, then you can rename or namespace the form field's name: <input type="text" name="mysite_addr" autocomplete="off"> If you don't mind autocompleting values which were previously entered on your site, then you can leave autocomplete enabled.
Namespacing the field name should be enough to prevent values remembered from other sites from appearing. <input type="text" name="mysite_addr" autocomplete="on"> A: It could be important to know that Firefox (I think only Firefox) uses a value called ismxfilled that basically forces autocomplete: ismxfilled="0" for OFF or ismxfilled="1" for ON. A: It doesn't seem to be possible to achieve this without using a combination of client-side and server-side code. In order to make sure that the user must fill in the form every time without autocomplete, I use the following techniques: * *Generate the form field names on the server and use hidden input fields to store those names, so that when the form is submitted the server-side code can use the generated names to access the field values. This is to stop the user from having the option to auto-populate the fields. *Place three instances of each form field on the form, hide the first and last fields of each set using CSS, and then disable them after page load using JavaScript. This is to prevent the browser from filling in the fields automatically. Here is a fiddle that demonstrates the JavaScript, CSS, and HTML as described in #2: https://jsfiddle.net/xnbxbpv4/ javascript: $(document).ready(function() { $(".disable-input").attr("disabled", "disabled"); }); css: .disable-input { display: none; } html: <form> <input type="email" name="username" placeholder="username" class="disable-input"> <input type="email" name="username" placeholder="username"> <input type="email" name="username" placeholder="username" class="disable-input"> <br> <input type="password" name="password" placeholder="password" class="disable-input"> <input type="password" name="password" placeholder="password"> <input type="password" name="password" placeholder="password" class="disable-input"> <br> <input type="submit" value="submit"> </form> Here is a rough example of the server code, using ASP.NET with Razor, to facilitate #1: model: public class FormModel { public string Username { get; set; } public string Password { get; set; } } controller: public class FormController : Controller { public ActionResult Form() { var m = new FormModel(); m.Username = "F" + Guid.NewGuid().ToString(); m.Password = "F" + Guid.NewGuid().ToString(); return View(m); } public ActionResult Form(FormModel m) { var u = Request.Form[m.Username]; var p = Request.Form[m.Password]; // todo: do something with the form values ...
return View(m); } } view: @model FormModel @using (Html.BeginForm("Form", "Form")) { @Html.HiddenFor(m => m.Username) @Html.HiddenFor(m => m.Password) <input type="email" name="@Model.Username" placeholder="username" class="disable-input"> <input type="email" name="@Model.Username" placeholder="username"> <input type="email" name="@Model.Username" placeholder="username" class="disable-input"> <br> <input type="password" name="@Model.Password" placeholder="password" class="disable-input"> <input type="password" name="@Model.Password" placeholder="password"> <input type="password" name="@Model.Password" placeholder="password" class="disable-input"> <br> <input type="submit" value="submit"> } A: Easy Hack Make the input read-only: <input type="text" name="name" readonly="readonly"> Remove read-only after a timeout: $(function() { setTimeout(function() { $('input[name="name"]').prop('readonly', false); }, 50); }); A: I tried almost all the answers, but the new version of Chrome is smart; if you write autocomplete="randomstring" or autocomplete="rutjfkde", it automatically converts it to autocomplete="off" when the input control receives focus. So, I did it using jQuery, and my solution is as follows: $("input[type=text], input[type=number], input[type=email], input[type=password]").focus(function (e) { $(this).attr("autocomplete", "new-password"); }) This is the easiest way and will do the trick for any number of controls you have on the form. A: There are many answers, but most of them are hacks or some kind of workaround. There are three cases here. Case I: If this is your standard login form. Turning it off by any means is probably bad. Think hard about whether you really need to do it. Users are accustomed to browsers remembering and storing their passwords. You shouldn't change that standard behaviour in most cases. In case you still want to do it, see Case III. Case II: When this is not your regular login form, and the name or id attribute of the inputs is not "like" email, login, username, user_name, or password. Use <input type="text" name="yoda" autocomplete="off"> Case III: When this is not your regular login form, but the name or id attribute of the inputs is "like" email, login, username, user_name, or password. For example: login, abc_login, password, some_password, password_field. All browsers come with password management features offering to remember them OR suggesting stronger passwords. That's how they do it. However, suppose you are an admin of a site and can create users and set their passwords. In this case you wouldn't want browsers to offer these features. In such cases, autocomplete="off" will not work. Use autocomplete="new-password": <input type="text" name="yoda" autocomplete="new-password"> Helpful Link: * *https://developer.mozilla.org/en-US/docs/Web/Security/Securing_your_site/Turning_off_form_autocompletion A: To disable the autocomplete of text in forms, use the autocomplete attribute of the <form> and <input> elements. You'll need the "off" value of this attribute. This can be done for a complete form or for specific <input> elements: * *Add autocomplete="off" onto the <form> element to disable autocomplete for the entire form. *Add autocomplete="off" for a specific <input> element of the form. form <form action="#" method="GET" autocomplete="off"> </form> input <input type="text" name="Name" placeholder="First Name" autocomplete="off"> A: I'd have to beg to differ with those answers that say to avoid disabling auto-complete. The first thing to bring up is that auto-complete not being explicitly disabled on login form fields is a PCI-DSS fail.
In addition, if a user's local machine is compromised, then any autocomplete data can be trivially obtained by an attacker, because it is stored in the clear. There is certainly an argument for usability; however, there's a very fine balance when it comes to which form fields should have autocomplete disabled and which should not. A: In addition to setting autocomplete=off, you could also have your form field names randomized by the code that generates the page, perhaps by adding some session-specific string to the end of the names. When the form is submitted, you can strip that part off before processing the values on the server side. This would prevent the web browser from finding context for your field and also might help prevent XSRF attacks, because an attacker wouldn't be able to guess the field names for a form submission. A: Three options: First: <input type='text' autocomplete='off' /> Second: <form action='' autocomplete='off'> Third (JavaScript code): $('input').attr('autocomplete', 'off'); A: Safari does not change its mind about autocomplete if you set autocomplete="off" dynamically from JavaScript. However, it will respect it if you do that on a per-field basis: $(':input', $formElement).attr('autocomplete', 'off'); A: The idea is to create an invisible field with the same name before the original one. That will make the browser auto-populate the hidden field. I use the following jQuery snippet: // Prevent input autocomplete $.fn.preventAutocomplete = function() { this.each(function () { var $el = $(this); $el .clone(false, false) // Make a copy (except events) .insertBefore($el) // Place it before original field .prop('id', '') // Prevent ID duplicates .hide() // Make it invisible for user ; }); }; And then just $('#login-form input').preventAutocomplete(); A: To solve this problem, I have used some CSS tricks, and the following works for me. input { text-security:disc; -webkit-text-security:disc; -moz-text-security:disc; } Please read this article for further detail. A: To prevent browser autofill with the user's saved site login credentials, place a text and a password input field at the top of the form with non-empty values and the style "position: absolute; top: -999px; left:-999px" set to hide the fields. <form> <input type="text" name="username_X" value="-" tabindex="-1" aria-hidden="true" style="position: absolute; top: -999px; left:-999px" /> <input type="password" name="password_X" value="-" tabindex="-1" aria-hidden="true" style="position: absolute; top: -999px; left:-999px" /> <!-- Place the form elements below here. --> </form> It is important that a text field precede the password field; otherwise the autofill may not be prevented in some cases. It is important that the value of both the text and password fields not be empty, to prevent default values from being overwritten in some cases. It is important that these two fields come before the "real" password-type field(s) in the form. For newer browsers that are HTML 5.3 compliant, the autocomplete attribute value "new-password" should work. <form> <input type="text" name="username" value="" /> <input type="password" name="password" value="" autocomplete="new-password" /> </form> A combination of the two methods can be used to support both older and newer browsers. <form> <div style="display:none"> <input type="text" readonly tabindex="-1" /> <input type="password" readonly tabindex="-1" /> </div> <!-- Place the form elements below here.
--> <input type="text" name="username" value="" /> <input type="password" name="password" value="" autocomplete="new-password" /> </form> A: If you want to prevent the common browser plug-in LastPass from auto-filling a field as well, you can add the attribute data-lpignore="true" added to the other suggestions on this thread. Note that this doesn't only apply to password fields. <input type="text" autocomplete="false" data-lpignore="true" /> I was trying to do this same thing a while back, and was stumped because none of the suggestions I found worked for me. Turned out it was LastPass. A: Most of the answers didn't help as the browser was simply ignoring them. (Some of them were not cross-browser compatible). The fix that worked for me is: <form autocomplete="off"> <input type="text" autocomplete="new-password" /> <input type="password" autocomplete="new-password" /> </form> I set autofill="off" on the form tag and autofill="new-password" wherever the autofill was not necessary. A: As of Dec 2019: Before answering this question let me say, I tried almost all the answers here on SO and from different forums but couldn't find a solution that works for all modern browsers and IE11. So here is the solution I found, and I believe it's not yet discussed or mentioned in this post. According to Mozilla Dev Network(MDN) post about how to turn off form autocomplete By default, browsers remember information that the user submits through fields on websites. This enables the browser to offer autocompletion (that is, suggest possible completions for fields that the user has started typing in) or autofill (that is, pre-populate certain fields upon load) On same article they discussed the usage of autocmplete property and its limitation. As we know, not all browsers honor this attribute as we desire. Solution So at the end of the article they shared a solution that works for all browsers including IE11+Edge. It is basically a jQuery plugin that do the trick. Here is the link to jQuery plugin and how it works. Code snippet: $(document).ready(function () { $('#frmLogin').disableAutoFill({ passwordField: '.password' }); }); Point to notice in HTML is that password field is of type text and password class is applied to identify that field: <input id="Password" name="Password" type="text" class="form-control password"> Hope this would help someone. A: Firefox 30 ignores autocomplete="off" for passwords, opting to prompt the user instead whether the password should be stored on the client. Note the following commentary from May 5, 2014: * *The password manager always prompts if it wants to save a password. Passwords are not saved without permission from the user. *We are the third browser to implement this change, after IE and Chrome. According to the Mozilla Developer Network documentation, the Boolean form element attribute autocomplete prevents form data from being cached in older browsers. <input type="text" name="foo" autocomplete="off" /> A: Most of the major browsers and password managers (correctly, IMHO) now ignore autocomplete=off. Why? Many banks and other "high security" websites added autocomplete=off to their login pages "for security purposes" but this actually decreases security since it causes people to change the passwords on these high-security sites to be easy to remember (and thus crack) since autocomplete was broken. Long ago most password managers started ignoring autocomplete=off, and now the browsers are starting to do the same for username/password inputs only. 
Unfortunately, bugs in the autocomplete implementations insert username and/or password info into inappropriate form fields, causing form validation errors or, worse yet, accidentally inserting usernames into fields that were intentionally left blank by the user. What's a web developer to do? * *If you can keep all password fields on a page by themselves, that's a great start, as it seems that the presence of a password field is the main trigger for user/pass autocomplete to kick in. Otherwise, read the tips below. *Safari notices that there are 2 password fields and disables autocomplete in this case, assuming it must be a change-password form, not a login form. So just be sure to use 2 password fields (new and confirm new) for any forms where you allow a user to set a new password. *Chrome 34, unfortunately, will try to autofill fields with user/pass whenever it sees a password field. This is quite a bad bug that, hopefully, they will fix by adopting the Safari behavior. However, adding this to the top of your form seems to disable the password autofill: <input type="text" style="display:none"> <input type="password" style="display:none"> I haven't yet investigated IE or Firefox thoroughly but will be happy to update the answer if others have info in the comments. A: This works for me. <input name="pass" type="password" autocomplete="new-password" /> We can also use this strategy on other controls like text, select, etc. A: In addition to autocomplete="off", use readonly onfocus="this.removeAttribute('readonly');" for the inputs whose form data (username, password, etc.) you do not want remembered, as shown below: <input type="text" name="UserName" autocomplete="off" readonly onfocus="this.removeAttribute('readonly');" > <input type="password" name="Password" autocomplete="off" readonly onfocus="this.removeAttribute('readonly');" > A: On a related, or actually completely opposite, note - "If you're the user of the aforementioned form and want to re-enable the autocomplete functionality, use the 'remember password' bookmarklet from this bookmarklets page. It removes all autocomplete="off" attributes from all forms on the page. Keep fighting the good fight!" A: Just set autocomplete="off". There is a very good reason for doing this: you want to provide your own autocomplete functionality! A: None of the solutions in this conversation worked for me. I finally figured out a pure HTML solution that doesn't require any JavaScript, works in modern browsers (except Internet Explorer; there had to be at least one catch, right?), and does not require you to disable autocomplete for the entire form. Simply turn off autocomplete on the form and then turn it ON for any input within the form you want it to work on. For example: <form autocomplete="off"> <!-- These inputs will not allow autocomplete and Chrome won't highlight them yellow! --> <input name="username" /> <input name="password" type="password" /> <!-- This field will allow autocomplete to work even though we've disabled it on the form --> <input name="another_field" autocomplete="on" /> </form> A: We did actually use sasb's idea for one site. It was a medical software web app to run a doctor's office. However, many of our clients were surgeons who used lots of different workstations, including semi-public terminals. So, they wanted to make sure that a doctor who doesn't understand the implications of auto-saved passwords, or isn't paying attention, can't accidentally leave their login information easily accessible.
Of course, this was before the idea of private browsing that is starting to be featured in Internet Explorer 8, Firefox 3.1, etc. Even so, many physicians are forced to use old-school browsers in hospitals with IT that won't change. So, we had the login page generate random field names that would only work for that post. Yes, it's less convenient, but it's just hitting the user over the head about not storing login information on public terminals. A: I've been trying endless solutions, and then I found this: instead of autocomplete="off", simply use autocomplete="false". As simple as that, and it works like a charm in Google Chrome as well! A: I think autocomplete=off is supported in HTML 5. Ask yourself why you want to do this, though - it may make sense in some situations, but don't do it just for the sake of doing it. It's less convenient for users and not even a security issue in OS X (mentioned by Soren below). If you're worried about people having their passwords stolen remotely - a keystroke logger could still do it even though your app uses autocomplete=off. As a user who chooses to have a browser remember (most of) my information, I'd find it annoying if your site didn't remember mine. A: A workaround is not to insert the password field into the DOM before the user wants to change the password. This may be applicable in certain cases: In our system we have a password field on an admin page, so we must avoid inadvertently setting other users' passwords. The form has an extra checkbox that toggles the password field's visibility for this reason. In this case, autofill from a password manager becomes a double problem, because the input won't even be visible to the user. The solution was to have the checkbox trigger whether the password field is inserted into the DOM, not just its visibility. Pseudo-implementation for AngularJS: <input type="checkbox" ng-model="createPassword"> <input ng-if="createPassword" type="password"> A: You can use autocomplete="off" on input controls to avoid autocompletion. For example: <input type=text name="test" autocomplete="off" /> If the above code doesn't work, then try also adding these attributes: autocapitalize="off" autocomplete="off" Or change the input type attribute to type="search"; Google doesn't apply auto-fill to inputs with a type of search. A: My solution is to change the text input's type dynamically using an AngularJS directive, and it works like a charm. First add 2 hidden text fields, then add an Angular directive like this: (function () { 'use strict'; appname.directive('changePasswordType', directive); directive.$inject = ['$timeout', '$rootScope', '$cookies']; function directive($timeout, $rootScope, $cookies) { var directive = { link: link, restrict: 'A' }; return directive; function link(scope,element) { var process = function () { var elem =element[0]; elem.value.length > 0 ? element[0].setAttribute("type", "password") : element[0].setAttribute("type", "text"); } element.bind('input', function () { process(); }); element.bind('keyup', function () { process(); }); } } })() then use it on the text field where you need to prevent autocomplete: <input type="text" style="display:none"> <!-- these two hidden inputs can be omitted in some cases --> <input type="password" style="display:none"> <input type="text" autocomplete="new-password" change-password-type> NB: don't forget to include jQuery, and set type="text" initially. A: I wanted something that took the field management completely out of the browser's hands, so to speak.
In this example, there's a single standard text input field to capture a password (no email, user name, etc.). <input id='input_password' type='text' autocomplete='off' autofocus> There's a variable named "input", set to be an empty string: var input = ""; The field events are monitored by jQuery: * *On focus, the field content and the associated "input" variable are always cleared. *On keypress, any alphanumeric character, as well as some defined symbols, are appended to the "input" variable, and the field input is replaced with a bullet character. Additionally, when the Enter key is pressed, the typed characters (stored in the "input" variable) are sent to the server via Ajax. (See "Server Details" below.) *On keyup, the Home, End, and Arrow keys cause the "input" variable and field values to be flushed. (I could have gotten fancy with arrow navigation and the focus event, and used .selectionStart to figure out where the user had clicked or was navigating, but it's not worth the effort for a password field.) Additionally, pressing the Backspace key truncates both the variable and the field content accordingly. $("#input_password").off().on("focus", function(event) { $(this).val(""); input = ""; }).on("keypress", function(event) { event.preventDefault(); if (event.key !== "Enter" && event.key.match(/^[0-9a-z!@#\$%&*-_]/)) { $(this).val( $(this).val() + "•" ); input += event.key; } else if (event.key == "Enter") { var params = {}; params.password = input; $.post(SERVER_URL, params, function(data, status, ajax) { location.reload(); }); } }).on("keyup", function(event) { var navigationKeys = ["Home", "End", "ArrowLeft", "ArrowRight", "ArrowUp", "ArrowDown"]; if ($.inArray(event.key, navigationKeys) > -1) { event.preventDefault(); $(this).val(""); input = ""; } else if (event.key == "Backspace") { // the field value is already shortened by the time keyup fires, so just keep the variable in sync var length = $(this).val().length; input = input.substring(0, length); } }); Front-End Summary In essence, this gives the browser nothing useful to capture. Even if it overrides the autocomplete setting and/or presents a dropdown with previously entered values, all it has stored for the field value is bullets. Server Details (optional reading) As shown above, JavaScript executes location.reload() as soon as the server returns a JSON response. (This logon technique is for access to a restricted administration tool. Some of the overkill, related to the cookie content, could be skipped for a more generalized implementation.) Here are the details: * *When a user navigates to the site, the server looks for a legitimate cookie. *If there isn't any cookie, the logon page is presented. When the user enters a password and it is sent via Ajax, the server confirms the password and also checks to see if the user's IP address is in an authorized IP address list. *If either the password or the IP address is not recognized, the server doesn't generate a cookie, so when the page reloads, the user sees the same logon page. *If both the password and the IP address are recognized, the server generates a cookie that has a ten-minute life span, and it also stores two scrambled values that correspond with the time-frame and IP address. *When the page reloads, the server finds the cookie and verifies that the scrambled values are correct (i.e., that the time-frame corresponds with the cookie's date and that the IP address is the same).
*The process of authenticating and updating the cookie is repeated every time the user interacts with the server, whether they are logging in, displaying data, or updating a record. *If at all times the cookie's values are correct, the server presents the full website (if the user is logging in) or fulfills whatever display or update request was submitted. *If at any time the cookie's values are not correct, the server removes the current cookie, which then, upon reload, causes the logon page to be redisplayed. A: Unfortunately, this option was removed in most browsers, so it is not possible to disable the password hint. To this day, I have not found a good solution to work around this problem. What we have left now is to hope that one day this option will come back. A: I went through the same problem; as of today (09/10/2019), the only solution I found was this: Add autocomplete="off" to the form tag and put one fake input after the opening form tag. <input id="username" style="display:none" type="text" name="fakeusernameremembered"> But it won't work on a password-type field; try <input type="text" oninput="turnOnPasswordStyle()" placeholder="Enter Password" name="password" id="password" required> with this script: function turnOnPasswordStyle() { $('#password').attr('type', "password"); } This is tested on Chrome 78, Edge 44, and Firefox 69. A: The autofill functionality changes the value without selecting the field. We can use that in our state management to ignore state changes before the select event. An example in React: import React, {Component} from 'react'; class NoAutoFillInput extends Component{ constructor() { super(); this.state = { locked: true } } onChange(event){ if (!this.state.locked){ this.props.onChange(event.target.value); } } render() { let props = {...this.props, ...{onChange: this.onChange.bind(this)}, ...{onSelect: () => this.setState({locked: false})}}; return <input {...props}/>; } } export default NoAutoFillInput; If the browser tries to fill the field, the element is still locked and the state is not affected. Now you can just replace the input field with a NoAutoFillInput component to prevent autofill: <div className="form-group row"> <div className="col-sm-2"> <NoAutoFillInput type="text" name="myUserName" className="form-control" placeholder="Username" value={this.state.userName} onChange={value => this.setState({userName: value})}/> </div> <div className="col-sm-2"> <NoAutoFillInput type="password" name="myPassword" className="form-control" placeholder="Password" value={this.state.password} onChange={value => this.setState({password: value})}/> </div> </div> Of course, this idea could be used with other JavaScript frameworks as well. A: None of the solutions I've found to this day brings a real working response. Solutions with an autocomplete attribute do not work. So, this is what I wrote for myself: <input type="text" name="UserName" onkeyup="if (this.value.length > 0) this.setAttribute('type', 'password'); else this.setAttribute('type', 'text');" > You should do this for every input field on your page that you want to behave as a password type. And this works. Cheers. A: Pretty sure this is the most generic solution as of today. This stops auto-complete and the popup suggestion box too. Intro Let's face it, autocomplete=off or new-password doesn't seem to work. It should do, but it doesn't, and every week we discover something else has changed and the browser is filling a form out with more random garbage we don't want.
I've discovered a really simple solution that doesn't need autocomplete on sign-in pages. How to implement Step 1). Add the same function call to the onmousedown and onkeyup attributes for your username and password fields, making sure you give them an id, AND note the code at the end of the function call: md=mousedown and ku=keyup. For the username field only, add a value of &nbsp;, as this prevents the form auto-filling on entry to the page. For example: <input type="text" value="&nbsp;" id="myusername" onmousedown="protectMe(this,'md')" onkeyup="protectMe(this,'ku')" /> Step 2). Include this function on the page: function protectMe(el,action){ // Remove the white space we injected on startup var v = el.value.trim(); // Ignore this reset process if we are typing content in // and only acknowledge a keypress when it's the last Delete // we do resulting in the form value being empty if(action=='ku' && v != ''){return;} // Fix double quote issue (in the form input) that results from writing the outerHTML back to itself v = v.replace(/"/g,'\\"'); // Reset the popup appearing when the form came into focus from a click by rewriting it back el.outerHTML=el.outerHTML; // Issue instruction to refocus it again, insert the value that existed before we reset it and then select the content. setTimeout("var d=document.getElementById('"+el.id+"'); d.value=\""+v+"\";d.focus();if(d.value.length>1){d.select();}",100); } What does it do? * *Firstly, it adds an &nbsp; space to the field, so the browser tries to find details related to that but doesn't find anything, so that's auto-complete fixed. *Secondly, when you click, the HTML is created again, cancelling out the popup, and then the value is added back and selected. *Finally, when you delete the string with the Delete button, the popup usually ends up appearing again, but the keyup check repeats the process if we hit that point. What are the issues? * *Clicking to select a character is a problem, but for a sign-in page most will forgive the fact that it selects all the text. *JavaScript does trim any inputs of blank spaces, but you might want to do it on the server side too, to be safe. Can it be better? Yes, probably. This is just something I tested, and it satisfies my basic need, but there might be more tricks to add or a better way to apply all of this. Tested browsers Tested and working on the latest Chrome, Firefox, and Edge as of this post date. A: I must have spent two days on this discussion before resorting to creating my own solution: First, if it works for the task, ditch the input and use a textarea. To my knowledge, autofill/autocomplete has no business in a textarea, which is a great start in terms of what we're trying to achieve. Now you just have to change some of the default behaviors of that element to make it act like an input. Next, you'll want to keep any long entries on the same line, like an input, and you'll want the textarea to scroll along the y-axis with the cursor. We also want to get rid of the resize box, since we're doing our best to mimic the behavior of an input, which doesn't come with a resize handle. We achieve all of this with some quick CSS: #your-textarea { resize: none; overflow-y: hidden; white-space: nowrap; } Finally, you'll want to make sure the pesky scrollbar doesn't arrive to wreck the party (for those particularly long entries), so make sure your textarea doesn't show it: #your-textarea::-webkit-scrollbar { display: none; } Easy peasy. We've had zero autocomplete issues with this solution.
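If it helps, here is how the textarea pieces above might fit together in one snippet. The Enter-key handler is my own addition (a real single-line input never accepts a newline), and the id is the same placeholder used in the answer:

<textarea id="your-textarea" rows="1"></textarea>
<script>
  // Block newlines so the textarea behaves like a single-line input.
  document.getElementById('your-textarea').addEventListener('keydown', function (e) {
    if (e.key === 'Enter') { e.preventDefault(); }
  });
</script>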
A: After trying all solutions (some have worked partly, disabling autofill but not autocomplete, some did not work at all), I've found the best solution as of 2020 to be adding type="search" and autocomplete="off" to your input element. Like this: <input type="search" /> or <input type="search" autocomplete="off" /> Also make sure to have autocomplete="off" on the form element. This works perfectly and disables both autocomplete and autofill. Also, if you're using type="email" or any other text type, you'll need to add autocomplete="new-email" and that will disable both perfectly. same goes for type="password". Just add a "new-" prefix to the autocomplete together with the type. Like this: <input type="email" autocomplete="new-email" /> <input type="password" autocomplete="new-password" /> A: (It works in 2021 for Chrome (v88, 89, 90), Firefox, Brave, and Safari.) The old answers already written here will work with trial and error, but most of them don't link to any official documentation or what Chrome has to say on this matter. The issue mentioned in the question is because of Chrome's autofill feature, and here is Chrome's stance on it in this bug link - Issue 468153: autocomplete=off is ignored on non-login INPUT elements, Comment 160 To put it simply, there are two cases - * *[CASE 1]: Your input type is something other than password. In this case, the solution is simple, and has three steps. *Add the name attribute to input *name should not start with a value like email or username. Otherwise Chrome still ends up showing the dropdown. For example, name="emailToDelete" shows the dropdown, but name="to-delete-email" doesn't. The same applies for the autocomplete attribute. *Add the autocomplete attribute, and add a value which is meaningful for you, like new-field-name It will look like this, and you won't see the autofill for this input again for the rest of your life - <input type="text/number/something-other-than-password" name="x-field-1" autocomplete="new-field-1" /> * *[CASE 2]: input type is password *Well, in this case, irrespective of your trials, Chrome will show you the dropdown to manage passwords / use an already existing password. Firefox will also do something similar, and same will be the case with all other major browsers. 2 *In this case, if you really want to stop the user from seeing the dropdown to manage passwords / see a securely generated password, you will have to play around with JS to switch input type, as mentioned in the other answers of this question. 2 Detailed MDN documentation on turning off autocompletion - How to turn off form autocompletion A: This will fix this problem autocomplete="new-password" A: you only need autocomplete attribute for this problem you can visit this page for more information <input type="text" name="foo" autocomplete="off" /> A: Sometimes even autocomplete=off would not prevent to fill in credentials into the wrong fields, but not a user or nickname field. This workaround is in addition to apinstein's post about browser behavior. Fix browser autofill in read-only and set writable on focus (click and tab) <input type="password" readonly onfocus="this.removeAttribute('readonly');"/> Update: Mobile Safari sets cursor in the field, but it does not show the virtual keyboard. 
The new fix works like before, but it handles the virtual keyboard: <input id="email" readonly type="email" onfocus="if (this.hasAttribute('readonly')) { this.removeAttribute('readonly'); // fix for mobile safari to show virtual keyboard this.blur(); this.focus(); }" /> Live Demo https://jsfiddle.net/danielsuess/n0scguv6/ Why does the browser autofill credentials into the wrong text field? I noticed this strange behavior in Chrome and Safari when there are password fields in the same form. I guess the browser looks for a password field to insert your saved credentials, and then it autofills (just guessing due to observation) the nearest text-like input field that appears prior to the password field in the DOM. As the browser is the last instance, you cannot control it, and this readonly fix above worked for me. A: The best solution: Prevent autocomplete of username (or email) and password: <input type="email" name="email"><!-- Can be type="text" --> <input type="password" name="password" autocomplete="new-password"> Prevent autocomplete of a field: <input type="text" name="field" autocomplete="nope"> Explanation: autocomplete continues to work on an <input> despite autocomplete="off", but you can change off to a random string, like nope. Works in: * *Chrome: 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63 and 64 *Firefox: 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57 and 58 A: The solution for Chrome is to add autocomplete="new-password" to the password input. Please check the example below. Example: <form name="myForm" method="post"> <input name="user" type="text" /> <input name="pass" type="password" autocomplete="new-password" /> <input type="submit"> </form> Chrome always autocompletes the data if it finds a box of type password; it is enough to set autocomplete="new-password" on that box. This works well for me. Note: make sure with F12 that your changes take effect. Many times, browsers serve the page from the cache, which gave me the false impression that it did not work, when in fact the browser had not actually loaded the changes. A: Use a non-standard name and id for the fields, so rather than "name" have "name_". Browsers will then not see it as being the name field. The best part about it is that you can do this to some, but not all, fields and it will autocomplete some, but not all, fields. A: Adding autocomplete="off" is not going to cut it. Change the input type attribute to type="search". Google doesn't apply auto-fill to inputs with a type of search. A: I just ran into this problem and tried several fixes that failed, but this one works for me (found on MDN): In some cases, the browser will keep suggesting autocompletion values even if the autocomplete attribute is set to off. This unexpected behavior can be quite puzzling for developers. The trick to really force the no-completion is to assign a random string to the attribute, like so: autocomplete="nope" A: Adding autocomplete="off" to the form tag will disable the browser autocomplete (what was previously typed into that field) for all input fields within that particular form. Tested on: * *Firefox 3.5, 4 BETA *Internet Explorer 8 *Chrome A: In order to avoid the invalid XHTML, you can set this attribute using JavaScript. An example using jQuery: <input type="text" class="noAutoComplete" ... /> $(function() { $('.noAutoComplete').attr('autocomplete', 'off'); }); The problem is that users without JavaScript will get the autocomplete functionality.
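As a variation on the jQuery example just above, the same attribute can be set with plain DOM calls, so the technique does not depend on a library (the class name is the same placeholder; this still does nothing for users with scripts disabled):

<input type="text" class="noAutoComplete" />
<script>
  // Set autocomplete="off" on every .noAutoComplete input once the DOM is ready.
  document.addEventListener('DOMContentLoaded', function () {
    var inputs = document.querySelectorAll('input.noAutoComplete');
    for (var i = 0; i < inputs.length; i++) {
      inputs[i].setAttribute('autocomplete', 'off');
    }
  });
</script>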
A: So here it is: function turnOnPasswordStyle() { $('#inputpassword').attr('type', "password"); } <input oninput="turnOnPasswordStyle()" id="inputpassword" type="text"> A: Try these too if just autocomplete="off" doesn't work: autocorrect="off" autocapitalize="off" autocomplete="off" A: This is a security issue that browsers ignore now. Browsers identify and store content using input names, even if developers consider the information sensitive and think it should not be stored. Making an input name different between 2 requests will solve the problem (though the values will still be saved in the browser's cache and will also grow the browser's cache). Asking the user to activate or deactivate options in their browser's settings is not a good solution. The issue can be fixed in the backend. Here's the fix. All autocomplete elements are generated with a hidden input like this: <?php $r = md5(rand() . microtime(TRUE)); ?> <form method="POST" action="./"> <input type="text" name="<?php echo $r; ?>" /> <input type="hidden" name="__autocomplete_fix_<?php echo $r; ?>" value="username" /> <input type="submit" name="submit" value="submit" /> </form> The server then processes the post variables like this: (Demo) foreach ($_POST as $key => $val) { $newKey = preg_replace('~^__autocomplete_fix_~', '', $key, 1, $count); if ($count) { $_POST[$val] = $_POST[$newKey]; unset($_POST[$key], $_POST[$newKey]); } } The value can be accessed as usual: echo $_POST['username']; And the browser won't be able to suggest information from the previous request or from previous users. This will continue to work even if browsers update their techniques to ignore/respect autocomplete attributes. A: I can't believe this is still an issue so long after it's been reported. The previous solutions didn't work for me, as Safari seemed to know when the element was not displayed or was off-screen; however, the following did work for me: <div style="height:0px; overflow:hidden; "> Username <input type="text" name="fake_safari_username" > Password <input type="password" name="fake_safari_password"> </div> A: <form name="form1" id="form1" method="post" autocomplete="off" action="http://www.example.com/form.cgi"> This will work in Internet Explorer and Mozilla Firefox. The downside is that it is not standard XHTML. A: None of the hacks mentioned here worked for me in Chrome. There's a discussion of the issue here: https://code.google.com/p/chromium/issues/detail?id=468153#c41 Adding this inside a <form> works (at least for now): <div style="display: none;"> <input type="text" id="PreventChromeAutocomplete" name="PreventChromeAutocomplete" autocomplete="address-level4" /> </div> A: Things have changed now; as I tried it myself, the old answers no longer work. Here is an implementation that I'm sure will work. I tested this in Chrome, Edge, and Firefox, and it does the trick. You may also try it and tell us your experience. Set the autocomplete attribute of the password input element to "new-password": <form autocomplete="off"> ...other elements... <input type="password" autocomplete="new-password"/> </form> This is according to MDN: If you are defining a user management page where a user can specify a new password for another person, and therefore you want to prevent autofilling of password fields, you can use autocomplete="new-password". This is a hint, which browsers are not required to comply with. However, modern browsers have stopped autofilling <input> elements with autocomplete="new-password" for this very reason.
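Pulling the answer above together into a fuller sketch (the field names and action URL are hypothetical): autocomplete="username" is the standard token that tells password managers which field holds the account name, and "new-password" is the hint that suppresses autofill of saved passwords:

<form method="post" action="/login" autocomplete="off">
  <!-- "username" identifies the account-name field to password managers -->
  <input type="text" name="user" autocomplete="username" />
  <!-- "new-password" asks the browser not to autofill a saved password -->
  <input type="password" name="pass" autocomplete="new-password" />
  <button type="submit">Sign in</button>
</form>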
A: With regards to Internet Explorer 11, there is a security feature in place that can be used to block autocomplete. It works like this: any form input value that is modified in JavaScript after the user has already entered it is flagged as ineligible for autocomplete. This feature is normally used to protect users from malicious websites that want to change your password after you enter it, or the like. However, you could insert a single special character at the beginning of a password string to block autocomplete. This special character could be detected and removed later, down the pipeline. A: autocomplete='off' didn't work for me; anyway, I set the value attribute of the input field to a space, i.e. <input type='text' name='username' value=" ">. That set the default input character to a space, and since the username was blank, the password was cleared too. A: None of the provided answers worked on all the browsers I tested. Building on the already provided answers, this is what I ended up with, tested on Chrome 61, Microsoft Edge 40 (EdgeHTML 15), IE 11, Firefox 57, Opera 49, and Safari 5.1. It is wacky due to many trials; however, it does work for me. <form autocomplete="off"> ... <input type="password" readonly autocomplete="off" id="Password" name="Password" onblur="this.setAttribute('readonly');" onfocus="this.removeAttribute('readonly');" onfocusin="this.removeAttribute('readonly');" onfocusout="this.setAttribute('readonly');" /> ... </form> <script type="text/javascript"> $(function () { $('input#Password').val(''); $('input#Password').on('focus', function () { if (!$(this).val() || $(this).val().length < 2) { $(this).attr('type', 'text'); } else { $(this).attr('type', 'password'); } }); $('input#Password').on('keyup', function () { if (!$(this).val() || $(this).val().length < 2) { $(this).attr('type', 'text'); } else { $(this).attr('type', 'password'); } }); $('input#Password').on('keydown', function () { if (!$(this).val() || $(this).val().length < 2) { $(this).attr('type', 'text'); } else { $(this).attr('type', 'password'); } }); }); </script> A: Fixed. You just need to add the fake fields above the real input field. https://developer.mozilla.org/en-US/docs/Web/Security/Securing_your_site/Turning_off_form_autocompletion - MDN https://medium.com/paul-jaworski/turning-off-autocomplete-in-chrome-ee3ff8ef0908 - medium Tested on EDGE, Chrome (latest v63), Firefox Quantum (57.0.4 64-bit), and Firefox (52.2.0). The fake fields are a workaround for Chrome/Opera autofill getting the wrong fields: const fakeInputStyle = {opacity: 0, float: 'left', border: 'none', height: '0', width: '0'} <input type="password" name='fake-password' autoComplete='new-password' tabIndex='-1' style={fakeInputStyle} /> <TextField name='userName' autoComplete='nope' ... /> <TextField name='password' autoComplete='new-password' ... /> A: I was able to stop Chrome 66 from autofilling by adding two fake inputs and giving them absolute positioning: <form style="position: relative"> <div style="position: absolute; top: -999px; left: -999px;"> <input name="username" type="text" /> <input name="password" type="password" /> </div> <input name="username" type="text" /> <input name="password" type="password" /> </form> At first, I tried adding display:none; to the inputs, but Chrome ignored them and autofilled the visible ones. A: This worked for me like a charm: * *Set the autocomplete attribute of the form to off *Add a dummy input field and set its attribute also to off.
<form autocomplete="off"> <input type="text" autocomplete="off" style="display:none"> </form> A: You can add a name in attribute name how email address to your form and generate an email value. For example: <form id="something-form"> <input style="display: none" name="email" value="randomgeneratevalue"></input> <input type="password"> </form> If you use this method, Google Chrome can't insert an autofill password. A: I'v solved putting this code after page load: <script> var randomicAtomic = Math.random().toString(36).substring(2, 15) + Math.random().toString(36).substring(2, 15); $('input[type=text]').attr('autocomplete',randomicAtomic); </script> A: The simplest answer is <input autocomplete="on|off"> But keep in mind the browser support. Currently, autocomplete attribute is supported by Chrome 17.0 & latest IE 5.0 & latest Firefox 4.0 & latest Safari 5.2 & latest Opera 9.6 & latest A: My solution with jQuery. It may not be 100% reliable, but it works for me. The idea is described in code annotations. /** * Prevent fields autofill for fields. * When focusing on a text field with autocomplete (with values: "off", "none", "false") we replace the value with a new and unique one (here it is - "off-forced-[TIMESTAMP]"), * the browser does not find this type of autocomplete in the saved values and does not offer options. * Then, to prevent the entered text from being saved in the browser for a our new unique autocomplete, we replace it with the one set earlier when the field loses focus or when user press Enter key. * @type {{init: *}} */ var PreventFieldsAutofill = (function () { function init () { events.onPageStart(); } var events = { onPageStart: function () { $(document).on('focus', 'input[autocomplete="off"], input[autocomplete="none"], input[autocomplete="false"]', function () { methods.replaceAttrs($(this)); }); $(document).on('blur', 'input[data-prev-autocomplete]', function () { methods.returnAttrs($(this)); }); $(document).on('keydown', 'input[data-prev-autocomplete]', function (event) { if (event.keyCode == 13 || event.which == 13) { methods.returnAttrs($(this)); } }); $(document).on('submit', 'form', function () { $(this).find('input[data-prev-autocomplete]').each(function () { methods.returnAttrs($(this)); }); }); } }; var methods = { /** * Replace value of autocomplete and name attribute for unique and save the original value to new data attributes * @param $input */ replaceAttrs: function ($input) { var randomString = 'off-forced-' + Date.now(); $input.attr('data-prev-autocomplete', $input.attr('autocomplete')); $input.attr('autocomplete', randomString); if ($input.attr('name')) { $input.attr('data-prev-name', $input.attr('name')); $input.attr('name', randomString); } }, /** * Restore original autocomplete and name value for prevent saving text in browser for unique value * @param $input */ returnAttrs: function ($input) { $input.attr('autocomplete', $input.attr('data-prev-autocomplete')); $input.removeAttr('data-prev-autocomplete'); if ($input.attr('data-prev-name')) { $input.attr('name', $input.attr('data-prev-name')); $input.removeAttr('data-prev-name'); } } }; return { init: init } })(); PreventFieldsAutofill.init(); .input { display: block; width: 90%; padding: 6px 12px; font-size: 14px; line-height: 1.42857143; color: #555555; background-color: #fff; background-image: none; border: 1px solid #ccc; border-radius: 4px; } <script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.0/jquery.min.js"></script> <form action="#"> <p> <label for="input-1">Firts name without 
autocomplete</label><br /> <input id="input-1" class="input" type="text" name="first-name" autocomplete="off" placeholder="Firts name without autocomplete" /> </p> <p> <label for="input-2">Firts name with autocomplete</label><br /> <input id="input-2" class="input" type="text" name="first-name" autocomplete="given-name" placeholder="Firts name with autocomplete" /> </p> <p> <button type="submit">Submit form</button> </p> </form> A: Simply try to put attribute autocomplete with value "off" to input type. <input type="password" autocomplete="off" name="password" id="password" /> A: <input type="text" name="attendees" id="autocomplete-custom-append"> <script> /* AUTOCOMPLETE */ function init_autocomplete() { if (typeof ($.fn.autocomplete) === 'undefined') { return; } // console.log('init_autocomplete'); var attendees = { 1: "Shedrack Godson", 2: "Hellen Thobias Mgeni", 3: "Gerald Sanga", 4: "Tabitha Godson", 5: "Neema Oscar Mracha" }; var countriesArray = $.map(countries, function (value, key) { return { value: value, data: key }; }); // initialize autocomplete with custom appendTo $('#autocomplete-custom-append').autocomplete({ lookup: countriesArray }); }; </script> A: * *Leave inputs with text type and hidden. *On DOMContentLoaded, call a function that changes the types for password and display the fields, with a delay of 1s. document.addEventListener('DOMContentLoaded', changeInputsType); function changeInputsType() { setTimeout(function() { $(/*selector*/).prop("type", "password").show(); }, 1000); } A: I already posted a solution for this here: Disable Chrome Autofill creditcard The workaround is to set the autocomplete attribute as "cc-csc" that value is the CSC of a credit card and that they are no allowed to store it! (for now...) autocomplete="cc-csc" A: Simply change the name attribute in your input element to something unique each time and it will never autocomplete again! An example might be a time tic added at the end. Your server would only need to parse the first part of the text name to retrieve the value back. <input type="password" name="password_{DateTime.Now.Ticks}" value="" /> A: DO NOT USE JAVASCRIPT to fix this!! Use HTML to address this problem first, in case browsers are not using scripts, fail to use your version of the script, or have scripts turned off. Always layer scripting LAST on top of HTML. * *For regular input form fields with "text" type attributes, add the autocomplete="off" on the element. This may not work in modern HTML5 browsers due to user browser override settings but it takes care of many older browsers and stops the autocomplete drop-down choices in most. Notice, I also include all the other autocomplete options that might be irritating to users, like autocapitalize="off", autocorrect="off", and spellcheck="false". Many browsers do not support these but they add extra "offs" of features that annoy data entry people. Note that Chrome for example ignores "off" if a user's browsers are set with autofill enabled. So, realize browser settings can override this attribute. EXAMPLE: <input type="text" id="myfield" name="myfield" size="20" value="" autocomplete="off" autocapitalize="off" autocorrect="off" spellcheck="false" tabindex="0" placeholder="Enter something here..." title="Enter Something" aria-label="Something" required="required" aria-required="true" /> *For username and password input type fields, there is some limited browser support for more specific attributes for autocomplete that trigger the "off" feature". 
autocomplete="username" on text inputs used for logins will force some browsers to reset and remove the autocomplete feature. autocomplete="new-password" will try and do the same for passwords using the "password" type input field. (see below). However, these are propriety to specific browsers and will fail in most. Browsers that do not support these features include Internet Explorer 1-11, many Safari and Opera desktop browsers, and some versions of iPhone and Android default browsers. But they provide additional power to try and force the removal of autocomplete. EXAMPLE: <input type="text" id="username" name="username" size="20" value="" autocomplete="username" autocapitalize="off" autocorrect="off" spellcheck="false" tabindex="0" placeholder="Enter username here..." title="Enter Username" aria-label="Username" required="required" aria-required="true" /> <input type="password" id="password" name="password" size="20" minlength="8" maxlength="12" pattern="(?=.*\d)(?=.*[a-z])(?=.*[A-Z]).{8,12}" value="" autocomplete="new-password" autocapitalize="off" autocorrect="off" spellcheck="false" tabindex="0" placeholder="Enter password here..." title="Enter Password: (8-12) characters, must contain one number, one uppercase and lowercase letter" aria-label="Password" required="required" aria-required="true" /> NOW FOR A BETTER SOLUTION! Because of the splintered support for HTML5 by standards bodies and browsers, many of these modern HTML5 attributes fail in many browsers. So, some kids turn to script to fill in the holes. But that's lazy programming. Hiding or rewriting HTML using JavaScript makes things worse, as you now have layers upon layers of script dependencies PLUS the HTML patches. Avoid this! The best way to permanently disable this feature in forms and fields is to simply create a custom "id"/"name" value each time on your 'input' elements using a unique value like a date or time concatenated to the field id and name attribute and make sure they match (e.g. "name=password_{date}"). <input type="text" id="password_20211013" name="password_20211013" /> This destroys the silly browser autocomplete choice algorithms sniffing for matched names that trigger past values for these fields. By doing so, the browsers cannot match caches of previous form data with yours. It will force "autocomplete" to be off, or show a blank list, and will not show any previous passwords ever again! On the server side, you can create these custom input "named" fields and still identify and extract the correct field from the response from the user's browser by simply parsing the first part of the id/name. For example "password_" or "username_", etc. When the field's "name" value comes in you simply parse for the first part ("password_{date}") and ignore the rest, then extract your value on the server. Easy! A: I've been fighting this never-ending battle for ages now... And all tricks and hacks eventually stop working, almost as if browser devs are reading this question. I didn't want to randomize field names, touch the server-side code, use JS-heavy tricks and wanted to keep "hacks" to a minimum. So here's what I came up with: TL;DR use an input without a name or id at all! And track changes in a hidden field <!-- input without the "name" or "id" --> <input type="text" oninput="this.nextElementSibling.value=this.value"> <input type="hidden" name="email" id="email"> Works in all major browsers, obviously. P.S. Known minor issues: * *you can't reference this field by id or name anymore. But you can use CSS classes. 
Or use $('#email').prev(); in jQuery. Or come up with another solution (there are many). *oninput and onchange events do not fire when the textbox value is changed programmatically. So modify your code accordingly to mirror changes in the hidden field too. A: Browsers ignore autocomplete on readonly fields. This code makes all writable fields readonly until they are focused. $(_=>{$('form[autocomplete=off] [name]:not([readonly])').map((i,t)=>$(t).prop('autocomplete',t.type=='password'?'new-password':'nope').prop('readonly',!0).one('focus',e=>$(t).prop('readonly',!1))); }) <script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script> <form method="post" autocomplete="off"> <input name="user" id="user" required class="form-control" /> <label class="form-label" for="user">New User</label> <hr> <input type="password" name="pass" id="pass" minlength="8" maxlength="20" /> <label class="form-label" for="pass">New Password</label> </form> A: Since the behavior of autocomplete is pretty much unpredictable over different browsers and versions, my approach is a different one. Give all input elements for which you want to disable autofill the readonly attribute and only disable it on focus. /* collect all fields that start out with the readonly attribute */ const readOnlys = document.querySelectorAll('input[readonly]'); document.addEventListener('click', (e) => { readOnlys.forEach(readOnly => { if (e.target == readOnly) { readOnly.removeAttribute('readonly', ''); readOnly.style.setProperty('pointer-events', 'none'); } else { readOnly.setAttribute('readonly', ''); readOnly.style.setProperty('pointer-events', 'auto'); } }); }); document.addEventListener('keyup', (e) => { if (e.key == 'Tab') { readOnlys.forEach(readOnly => { if (e.target == readOnly) { readOnly.removeAttribute('readonly', ''); readOnly.style.setProperty('pointer-events', 'none'); } else { readOnly.setAttribute('readonly', ''); readOnly.style.setProperty('pointer-events', 'auto'); } }); } }); If you want to make sure that users can still access the fields if they have disabled JS, you can set all readonlys initially via JS on page load. You can still use the autocomplete attribute as a fallback. A: September 2022: Some browsers, such as Google Chrome, do not even care about the value of the autocomplete attribute. The best way to stop getting suggestions is to change the name attribute of the input to something that changes from time to time. As of September 2022, browsers will not provide autocomplete for one-time passwords, so we can use otp as the field name. <input name="otp"> A: For React you can try to put this code either under a form, above the password input, or between the email and password inputs: export const HackRemoveBrowsersAutofill = () => ( <> <input type="email" autoComplete="new-password" style={ { display: 'none' } } /> <input type="password" autoComplete="new-password" style={ { display: 'none' } } /> </> ) One of the examples: <input type="email"/> <HackRemoveBrowsersAutofill/> <input type="password"/> A: Just add autocomplete="off" list="autocompleteOff" to your input and the work is done for IE/Edge! And for Chrome add autocomplete="new-password". A: I was fighting with autocomplete for years. I've tried every single suggestion, and nothing worked for me. Adding 2 attributes with jQuery worked well: $(document).ready( function () { setTimeout(function() { $('input').attr('autocomplete', 'off').attr('autocorrect', 'off'); }, 10); }); It will result in HTML like this: <input id="start_date" name="start_date" type="text" value="" class="input-small hasDatepicker" autocomplete="off" placeholder="Start date" autocorrect="off"> A: This works with Chrome 84.0.4147.105 1.
Assign text type to password fields and add a class, e.g. "fakepasswordtype": <input type="text" class="fakepasswordtype" name="password1"> <input type="text" class="fakepasswordtype" name="password2"> 2. Then use jQuery to change the type back to password the moment the first input happens: jQuery(document).ready(function($) { $('.fakepasswordtype').on('input', function(e){ $(this).prop('type', 'password'); }); }); This stopped Chrome's nasty autofilling behavior. A: Chrome keeps trying to force autocomplete. So there are a few things you need to do. The input field's id and name attributes need to not be recognizable. So no values such as name, phone, address, etc. The adjacent label or div also needs to not be recognizable. However, you may wonder, how will the user know? Well there is a fun little trick you can do with ::after using the content property. You can set it to a letter. <label>Ph<span class='o'></span>ne</label> <input id='phnbr' name='phnbr' type='text'> <style> span.o::after { content: 'o'; } </style> A: Very simple solution: just change the label names and field names to something other than the more generic names, e.g. Name, Contact, Email. Use "Mail To" instead of "Email". Browsers ignore autocomplete off for these generic fields. A: Defining the input's type="text" attribute along with autocomplete="off" works well enough to turn off autocomplete for me. EXCEPT for type="password": switching the readonly attribute using JavaScript hooked to the onfocus/onfocusout events, as others suggest, works fine, BUT when the password field starts out empty, the autocomplete password comes back again. I would suggest also switching the type attribute according to the length of the password using JavaScript hooked to the oninput event, in addition to the above workaround, which worked well. <input id="pwd1" type="text" autocomplete="off" oninput="sw(this)" readonly onfocus="a(this)" onfocusout="b(this)" /> <input id="pwd2" type="text" autocomplete="off" oninput="sw(this)" readonly onfocus="a(this)" onfocusout="b(this)" /> and the JavaScript: function sw(e) { e.setAttribute('type', (e.value.length > 0) ? 'password' : 'text'); } function a(e) { e.removeAttribute('readonly'); } function b(e) { e.setAttribute('readonly', 'readonly'); } A: I was looking around to find a solution to this, but none of the suggestions worked properly on all browsers. I tried autocomplete off, none, nope, new_input_64 (even some funny texts), because the autoComplete attribute expects a string to be passed, no matter what. After many searches and attempts I found this solution. I changed all input types to text and added a bit of simple CSS code.
Here is the form: <form class="row gy-3"> <div class="col-md-12"> <label for="fullname" class="form-label">Full Name</label> <input type="text" class="form-control" id="fullname" name="fullname" value=""> </div> <div class="col-md-6"> <label for="username" class="form-label">User Name</label> <input type="text" class="form-control" id="username" name="username" value=""> </div> <div class="col-md-6"> <label for="email" class="form-label">Email</label> <input type="text" class="form-control" id="email" name="email" value=""> </div> <div class="col-md-6"> <label for="password" class="form-label">Password</label> <input type="text" class="form-control pswd" id="password" name="password" value=""> </div> <div class="col-md-6"> <label for="password2" class="form-label">Confirm Password</label> <input type="text" class="form-control pswd" id="password2" name="password2" value=""> </div> <div class="col-md-12"> <label for="recaptcha" class="form-label">ReCaptcha</label> <input type="text" class="form-control" id="recaptcha" name="recaptcha" value=""> </div> </form> The CSS code to hide the password: .pswd { -webkit-text-security: disc; } Tested on Chrome, Opera, Edge. A: Unfortunately the problem goes on... and it seems no action has been taken by the main browsers; or even worse, some actions are taken precisely against any solution. The option autocomplete="new-password" is not working on some browsers now, as the autocomplete is done with the password generator (which confuses the user, as the value is hidden). For some empty input fields, specifically the type="url" case, it is really complicated to avoid incorrect autocompletion, but a non-breaking space instead of an empty value does the trick (ordinary spaces don't)... at the cost of some garbage included in the form (and one of these days the browsers will also autocomplete that). Very sad. A: I was having trouble on a form which included input fields for username and email, with email being right after username. Chrome would autofill my current username for the site into the email field and put focus on that field. This wasn't ideal as it was a form for adding new users to the web application. It seems that if you have two inputs one after the other, where the first one has the term 'user name' or username in its label or ID and the next one has the word email in its label or ID, Chrome will autofill the email field. What solved the issue for me was to change the IDs of the inputs to not include those words. I also had to set the text of the labels to an empty string and use a JavaScript setTimeout function to change the label back to what it should be after 0.01s. setTimeout(function () { document.getElementById('id_of_email_input_label').innerHTML = 'Email:'; }, 10); If you are having this problem with any other input field being wrongly autofilled, I'd try this method.
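To make the Internet Explorer 11 trick described in an earlier answer concrete, here is a minimal sketch of that idea; the element IDs and the zero-width-space marker are illustrative assumptions, not tested code from any of the answers:
<script>
// IE 11 flags a value modified by script after user entry as ineligible
// for autocomplete, so prepend a marker character while the user types...
var pwd = document.getElementById('password');
pwd.addEventListener('input', function () {
    if (pwd.value.charAt(0) !== '\u200B') {
        pwd.value = '\u200B' + pwd.value; // zero-width space as the marker
    }
});
// ...and strip the marker again before the form is submitted.
document.getElementById('loginForm').addEventListener('submit', function () {
    pwd.value = pwd.value.replace(/^\u200B/, '');
});
</script>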
{ "language": "en", "url": "https://stackoverflow.com/questions/2530", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3175" }
Q: Good STL-like library for C What are good libraries for C with data structures like vectors, deques, stacks, hashmaps, treemaps, sets, etc.? Plain C, please, and platform-independent. A: There's some stuff in the Apache Portable Runtime (APR) that I'd expect to be very solid. A: Maybe http://sglib.sourceforge.net/ if you want an easy to use, very fast, macro-based library. A: The GLib library used on the Gnome project may also be of some use. Moreover it is pretty well tested. IBM developerWorks has a good tutorial on its use: Manage C data using the GLib collections A: If hash tables, extensible strings and dynamic vectors are enough for your needs, please have a look at the library I put together: http://code.google.com/p/c-libutl/. I would also welcome any feedback! A: As always, Google is your friend: http://nixbit.com/cat/programming/libraries/c-generic-library/ specifically: http://nixbit.com/cat/programming/libraries/generic-data-structures-library/
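To give a taste of what these libraries look like in practice, here is a minimal sketch using GLib's GHashTable; it assumes GLib 2.x is installed and that you compile with the flags from pkg-config --cflags --libs glib-2.0:
#include <glib.h>
#include <stdio.h>

int main(void)
{
    /* string -> string hash map using GLib's string hash/equality functions */
    GHashTable *table = g_hash_table_new(g_str_hash, g_str_equal);

    g_hash_table_insert(table, "vector", "a growable array");
    g_hash_table_insert(table, "deque", "a double-ended queue");

    printf("vector: %s\n", (char *) g_hash_table_lookup(table, "vector"));

    g_hash_table_destroy(table);
    return 0;
}
The other libraries mentioned expose broadly similar C idioms: opaque container structs plus create/insert/lookup/destroy functions, or macros in SGLIB's case.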
{ "language": "en", "url": "https://stackoverflow.com/questions/2540", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "49" }
Q: What are the best solutions for flash charts and graphs? I'm aware of FusionCharts; are there other good solutions, or APIs, for creating charts in Adobe Flash? A: Is there a reason that you want it in Flash? If a plain, old PNG will work, try the Google Chart API. A: I think the best Flash charts are amCharts - nice looking, highly customizable and free (if you don't mind the link back to amcharts.com) A: XML/SWF Charts draws charts from XML: http://www.maani.us/xml_charts/ A: I know you said Flash, but this is a good Silverlight chart API: http://www.visifire.com/. Always good to keep your options open, right? Had to plug Silverlight, but you can also take a look at Yahoo!'s YUI charting component. A: open-flash-chart, because you can't resize a .png and .png has no tooltips. Also, why would you send all your data to Google? Do you trust them that much? A: http://teethgrinder.co.uk/open-flash-chart/ seems really sweet. I suggest you give it a look. UPDATE: Open Flash Chart 2 is out: http://teethgrinder.co.uk/open-flash-chart-2/ A: Depending on your needs, a couple others you might look at: Flare Prefuse A: I can vouch that I've had good experiences with PHP/SWF charts (and by extension, the XML/SWF charts too). It's easy to create really subtle chart effects. Pie charts fading in one slice at a time looks fairly professional without being annoying. Edit: Scratch that, Open-Flash-Chart looks WAY better. I rescind my suggestion. A: Adobe's data visualization package is now free after the release of Flash Builder 4. Charting and other libraries are on this site - http://opensource.adobe.com/wiki/display/flexsdk/Download+Flex+4 A: MultiChart, because it beats all the others in all parameters. http://chart.sqlcode.org
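For the Google Chart API mentioned above, a chart is just an image URL; the parameters below follow the documented request format as I remember it, so double-check against the official docs before relying on them:
<img src="http://chart.apis.google.com/chart?cht=p3&chs=250x100&chd=t:60,40&chl=Hello|World" />
Here cht picks the chart type (p3 = 3D pie), chs the pixel size, chd the data series, and chl the slice labels; dropping the URL into an img tag renders the chart with no Flash or JavaScript at all.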
{ "language": "en", "url": "https://stackoverflow.com/questions/2543", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16" }
Q: What are effective options for embedding video in an ASP.NET web site? A quick glance at the present-day internet would seem to indicate that Adobe Flash is the obvious choice for embedding video in a web page. Is this accurate, or are there other effective choices? Does the choice of ASP.NET as a platform influence this decision? A: Flash is usually the product of choice: Everyone has it, and using the JW FLV Player makes it relatively easy on your side. As for other video formats, there are WMV and QuickTime, but the players are rather "heavy", not everyone might have them, and they feel so 1990ish... Real Player... Don't let me even start ranting about that pile of ... The only other alternative to Flash that I would personally consider is Silverlight, which allows streaming WMV videos. I found the production of WMV much better and easier than FLV because all the Windows FLV encoders I tried are not really good and stable, whereas pretty much every tool can natively output WMV. The problem with Silverlight is that no one has that browser plugin (yet?). There is also a player from JW. A: Flash is certainly the most ubiquitous and portable solution. 98% of browsers have Flash installed. Other alternatives are QuickTime, Windows Media Player, or even Silverlight (Microsoft's Flash competitor, which can be used to embed several video formats). I would recommend using Flash (and its FLV video file format) for embedding your video unless you have very specific requirements as far as video quality or DRM. A: One consideration would be whether video playback is via progressive download or streaming. If it's progressive download, then I would say use Flash because you get a wider audience reach. For streaming WMV, it is out-of-the-box functionality provided by Windows Media Services. For streaming Flash, you will have to install a streaming server on your Windows box. Some options are: * *Adobe Flash Media Server (Commercial) *Wowza Media Server (Free/Commercial) *Red5 Flash Server (Open Source) A: <object width="660" height="525"><param name="movie" value="http://www.youtube.com/v/WAQUskZuXhQ&hl=en&fs=1&color1=0x006699&color2=0x54abd6&border=1"></param><param name="allowFullScreen" value="true"></param><param name="allowscriptaccess" value="always"></param><embed src="http://www.youtube.com/v/WAQUskZuXhQ&hl=en&fs=1&color1=0x006699&color2=0x54abd6&border=1" type="application/x-shockwave-flash" allowscriptaccess="always" allowfullscreen="true" width="660" height="525"></embed></object> A: If you have access to Microsoft Expression Encoder 2, you can use that to encode a video file and generate a Silverlight video player. Then if you have IIS 7, you can use Adaptive or Smooth Streaming; also check out Smooth HD for a really cool example. You can also do streaming from the free Microsoft Silverlight Streaming Service. It's connected to a Windows Live account. A consideration is that the client will need to have Silverlight installed, just like Flash, but Flash has been around longer. A: I have worked for a company that developed a system for distributing media content to dedicated "players". It was web based and used ASP.NET technology, and we tried almost every possible media format you can think of. Your choice really comes down to asking yourself: does it need to play directly out of the box, or can I make sure that the components required to play the videos can be installed beforehand?
If your answer is that it needs to play out of the box, then really your only option is Flash (I know that it is not installed by default, but most will already have it installed). If it is not a big issue that extra components are needed, then you can go with formats that are supported by Windows Media Player. The reason why Windows Media Player falls into the second option is that for some browsers and some formats extra components must be installed. We had the luxury that the "players" were provided by us, so we could go for the second option; however, even then we tried to convert as much as possible back to Flash because it handles way better than Windows Media Player. A: "Does the choice of ASP.NET as a platform influence this decision?" Probably not.
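To give a flavor of the Flash route that several answers recommend, here is a typical JW FLV Player embed using the SWFObject helper script; the file names, element ID, and dimensions are placeholders, so adapt them to your own player and video files:
<script type="text/javascript" src="swfobject.js"></script>
<div id="player">Flash is required to watch this video.</div>
<script type="text/javascript">
  // player.swf is the JW FLV Player; video.flv is your encoded clip
  var so = new SWFObject('player.swf', 'mpl', '470', '320', '9');
  so.addParam('allowfullscreen', 'true');
  so.addVariable('file', 'video.flv');
  so.write('player'); // replaces the div's content with the player
</script>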
{ "language": "en", "url": "https://stackoverflow.com/questions/2550", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19" }
Q: What's the best online payment processing solution? Should be available to non-U.S. companies, easy to set up, reliable, cheap, customizable, etc. What are your experiences? A: You can't really answer this kind of question with an "I like 'insert provider name here'" type answer, because like so many things it is a balance, and the reasons for choosing a payment processing solution tend to be complex. Volume / Value The most important factor in choosing a secure payment clearance service (the people who will connect to the banking networks and clear the money for you - we will refer to them as SPCS) is how many widgets you will be selling and at what cost. The pricing models of all the SPCS providers are based around this equation. This dictates the economics of using the service, which is nearly always the most important factor. For example, in the UK securetrading.net have a large annual fee and high minimum transaction values (been a while since I've seen the exact numbers and they don't make it immediately obvious on the site, but this is for illustration only anyway) making it one of the most expensive solutions to use if you are selling high value low volume. Most smaller clients will fall into this model. High value is really anything over a couple of dollars. Low volume is typically anything less than tens of thousands of units per month. However, if you are running a donations service in the aftermath of an international environmental disaster (relatively low value very high volume) then they become one of the cheapest. Factor into this the setup costs (relatively high), and the cost to tie the service into the site (in SecureTrading's case it's very easy to do, but still a lot harder than adding a PayPal button) and you start to build up a true picture. On the flip side, a service such as PayPal has very low setup costs (no fee to pay, and trivially easy to integrate), but relatively high transaction costs. It is great for high value / low volume transactions. The Bank There are two main categories of payment clearance service - Bureau and Bank Acquired. In the UK at least, NetBanx, SecureTrading and WorldPay offer both bank acquired and bureau services. ProtX and SecPay offer only bank acquired services. PayPal and its ilk operate slightly outside both definitions (see Protection below). A Bank Acquired service plumbs into your normal banking merchant account and clears the funds straight into it. As well as charging you for this service, your bank will also take a slice; typically this is more than the SPCS provider will charge, and so it actually is the bank that becomes the deciding factor. Some banks will only work with their preferred provider. In the UK, most banks want you to have a separate Internet Merchant Account even if you already have a Merchant Account with them. I always tell clients to shop around, as this will make a huge difference to how much their e-commerce venture can bring in. All banks are not created equal. Bureau services effectively act as your bank at the same time as providing the clearance service. They were popular in a time when banks hadn't grasped the concept of the Internet and would prefer transactions be chiseled into stone tablets if they got their way. Often the choice between a bureau service and a bank acquired service is made for you based on circumstances. Trading History In many countries (including the UK), most banks won't give you a merchant account until you have been trading for a particular period of time (2 years in the UK).
Your only option is then a bureau service. Cash flow Most bureau services will hold onto your cash as security against "chargebacks". If you sell me a Ferrari and I am horrified to learn that you've sold me a small metal toy rather than the 1.5 tonnes of Italian automotive passion I was expecting, I will complain to my credit card company, who will refund me and then chase your merchant services provider for a refund. They will have to give them the refund and then chase you for the money. It's therefore in their interests to hold on to your money for a period of 4-6 weeks to protect against this. If you sell services or goods with no capital outlay (software for instance), then you can afford this. If, on the other hand, you really are having to pay your luxury car importer to provide you with stock, then cash flow becomes very important and you're going to need a bank acquired service where you can be paid immediately. Protection One major downside to PayPal and similar services is that it is not covered under the same regulations that govern credit cards. Simply put, if you buy something on a credit card your card provider is liable for ensuring you get what you paid for (broadly speaking, in most countries, does not constitute legal advice etc.) and if you have a problem with your purchase they will refund you very quickly and then will go and chase the person that you paid. This is the kind of protection you hear about when Leo Laporte advertises American Express on his podcasts. It is a "Good Thing"TM. You don't have that protection with PayPal because when you use your credit card on PayPal, you are actually buying PayPal's service. So, even if you are mis-sold a product, the person you paid for the service (PayPal) didn't mis-sell; they provided the service you paid for. This breaks the chain. PayPal don't have a legal obligation to protect you in the same way, and their record on refunding ripped-off customers is less than spangly. I'm guessing they have "Caveat Emptor" writ large on the walls of their head office. :) I'm not dissing PayPal, they are way ahead of the curve on many other security features, but it's just another factor to bear in mind. End to end integration Different services differ in their ease of integration. Oh boy do they differ. I'm sitting on some work right now to do an HSBC integration. I'd rather have a root canal. Some of the systems make big assumptions about the way you have to work with them, and are poorly designed or inflexible. Retro-fitting them to an active site can be very painful. Some of them are beautiful and easy to work with (and not necessarily less secure). The biggest difference is how you choose to integrate though. Most services integrate by allowing you to redirect to a secure site where your customer fills in his / her details. They are finally redirected back to a page on your own site with the results of the transaction. This works well in most cases and is easiest to integrate. When you buy something on Amazon, you don't get redirected to WorldPay or PayPal, however. If you want end-to-end integration, most services now will let the communication happen behind the scenes. Your own site has to have a decent secure server certificate of course, and the integration is necessarily more complex. Reputation It used to be that PayPal was used on dinky sites. You wouldn't catch Amazon using it. That perception has changed a lot, and in fact in some senses PayPal does security better than most.
If your audience expects to see PayPal and you give them some other service then you may lose custom, or vice versa. These days many merchants offer a choice to customers. UK Providers * *WorldPay. Well established. Bureau and bank acquired. Relatively high transaction costs and annual costs. Fairly easy to integrate. Owned ultimately by Royal Bank of Scotland. *SecPay. Bank Acquired. Low per transaction cost and low annual cost and flexible payment models. *ProtX. Bank Acquired. Low per transaction cost and low annual cost, flexible payment models. Can be quite demanding to integrate. *HSBC. Bank Acquired. Low per transaction cost. High set up and annual costs. Very inflexible to integrate. *SecureTrading. Bureau and Bank Acquired. Low per transaction cost but high setup and annual costs. Was a doddle to integrate last time I used it (9 years ago!) *NetBanx. Bureau and Bank Acquired. Haven't used since 1996 so can't comment! And of course PayPal, Google Checkout and Amazon FPS are well worth looking at and worth a whole answer on their own! Summary Told you it wasn't that simple! Usually, as developers, we're not in the position to choose for ourselves, and these decisions should be driven by the business needs of our employer / client. Most e-commerce projects would start with PayPal or similar. When the business gets enough orders that they could save money by switching to another service, then they've got enough money to pay for the switch. Disclaimer: I am UK based, and have performed many integrations with a whole slew of these services over the years; however, the market changes all the time, things may have changed, and your mileage may vary! I am not a lawyer or accountant, and if you take my advice it's not my fault :) A: http://www.authorize.net/ works well. This type of solution would allow your customer to enter his/her credit card directly. A: I've been researching Google Checkout. If you require subscriptions (recurring payments) like I do - Google Checkout has it, but it is still in beta. So depending on when you want to go live and your needs - you may want to use something else. A: eSellerate. If it is digital stuff that you are selling, I recommend http://www.esellerate.net/ . They have nice support for website payment and delivery of serial numbers upon sale, and even have an API so you can integrate the buying process into your application in case it is a desktop application. A: Well, by cheap do you mean processing fees or monthly fees? Also, is this for micro or normal transactions? PayPal in my experience is an all-around good choice because it offers everything from starter to professional level payment processing services that fit most needs. A: I've used CyberSource in the past, and had a good experience. They support several interfaces including SOAP, work internationally and have a pretty good web interface. I'm not sure whether it's cheap though. http://www.cybersource.com/products_and_services/global_payment_services/credit_card_processing/ A: I've looked at WorldPay and SecPay in the past; you need to know your onions to use them competently, I think - if you want really nice integration, at any rate. A: Google Checkout isn't available to non-US companies. I didn't realize this until the last stages of my research, so I found it quite annoying (considering it was very easy to work with and very well documented). Unfortunately, in order to make things as convenient as possible for your end users, you're pretty much stuck with having to support PayPal.
No one else comes close in terms of registered users. A: I'd say PayPal or Google Checkout. Google Checkout is either 2% + $0.20 USD or free depending on how much you spend on AdWords. If you spend a dollar on AdWords, your next $10 on Google Checkout is free. PayPal is 1.9% to 2.9% + $0.30 USD (2.9% for up to $30,000/month, 1.9% for more than $100,000/month). Without factoring in the 20/30 cents, PayPal is just barely cheaper if you sell more than $100,000 per month and spend nothing on AdWords. A: Epoch is pretty large and available in the US and EU: http://www.epoch.com/en/index.html I have no idea about their conditions though. A: I'd have to go with PayPal. I've used it in the past, and it's really quite painless. All you need to do is create an account, and it's automatically available to you. A: Try AlertPay, they have very competitive fees. A: AlertPay looks great: low fees (compared with PayPal), support for more countries, and a developer center.
{ "language": "en", "url": "https://stackoverflow.com/questions/2556", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "93" }
Q: What is a good web-based Grid that accepts Excel clipboard data? Any good recommendations for a platform-agnostic (i.e. JavaScript) grid control/plugin that will accept pasted Excel data and can emit Excel-compliant clipboard data during a Copy? I believe Excel data is formatted as CSV during "normal" clipboard operations. dhtmlxGrid looks promising, but the online demos don't actually copy contents to my clipboard! A: Not an answer, but a warning: my company bought the 2007 Infragistics ASP.NET controls just for the Grid, and we regret that choice. The quality of the API is horrible (in our opinion at least), making it very hard to program against the grid (for example, inconsistent naming conventions, but this is just an inconvenience; we have complaints about the object model as well). So I can't say that I know of a better option, I just know I will give a try to something else before paying for Infragistics products again (and the email support we got was horrible as well). A: I'm currently using dhtmlxGrid and we have the Excel copy/paste functionality working. dhtmlxGrid is the most full-featured JavaScript grid package that I've found. On their website, dhtmlxGrid claims to support Clipboard functionality in the Professional version. (However, I noticed the Sample on their site isn't working on my Firefox. EDIT: It's probably the permissions issue that Nathan mentioned.) In any case, we had to do some extra work to get the exact Excel copy and paste functionality we wanted. We essentially had to override some of their functionality to get the desired behavior. Their support was pretty good in helping us come up with a solution. So to answer your question, you should be able to get them to support copy and paste if you purchase the Professional version. I'm just warning you that it may take some additional work to fine tune that behavior. Overall, I'm happy with dhtmlxGrid. We use a lot of their features. Their support is pretty good. They usually take one day to respond since they are in Europe (I think). And JavaScript is by its very nature open source, so I can always dive in when I need to. A: I was wrestling with this problem several years ago (2004 I think). We ran into the problem that Firefox doesn't allow scripts to read the clipboard by default (but you can grant access to the clipboard). There are other ways of reading the clipboard data as well... Flash, for instance, can read the clipboard. There's a good article on Ajaxian that explains how to do this behind the scenes. In the end, we couldn't find a web-based Grid that fit the bill, so we had to create our own in a mixture of ActionScript and JavaScript. A: I'd hate to be Captain Obvious here... but what about a plain old .NET GridView control? You can copy Excel data into it and out of it... and you can run it on any system with the .NET platform installed. A: http://dhtmlx.com/dhxdocs/doku.php?id=dhtmlxgrid:clipboard_operations
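For what it's worth, the paste-handling core these grids implement is small. Here is a rough sketch using the IE-era window.clipboardData API that was current when this question was asked; note that Excel actually places tab-separated text (not CSV) on the clipboard:
<script type="text/javascript">
// Read the pasted Excel block and split it into rows and cells.
// window.clipboardData is IE-only; other browsers need Flash or an
// explicit permission grant, as the answers above discuss.
var text = window.clipboardData.getData('Text');
var rows = text.split(/\r\n|\n/);
for (var i = 0; i < rows.length; i++) {
    if (rows[i].length === 0) continue;  // skip Excel's trailing newline
    var cells = rows[i].split('\t');     // one cell per tab
    // ...copy cells[j] into row i of the grid here
}
</script>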
{ "language": "en", "url": "https://stackoverflow.com/questions/2563", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Appropriate Windows O/S pagefile size for SQL Server Does anyone know a good rule of thumb for the appropriate pagefile size for a Windows 2003 server running SQL Server? A: According to Microsoft, "as the amount of RAM in a computer increases, the need for a page file decreases." The article then goes on to describe how to use Performance Logs to determine how much of the page file is actually being used. Try setting your page file to 1.5X system memory for a start, then do the recommended monitoring and make adjustments from there. How to determine the appropriate page file size for 64-bit versions of Windows A: The bigger the better, up to the size of the working set of the application, where you will start to get into diminishing returns. You can try to find this by slowly increasing or decreasing the size until you see a significant change in cache hit rates. However, if the cache hit rate is over 90% or so you're probably OK. Generally you should keep an eye on this on a production system to make sure it hasn't outgrown its RAM allocation. A: We were recently having some performance issues with one of our SQL Servers that we weren't able to completely narrow down, and actually used one of our Microsoft support tickets to have them help troubleshoot. The optimal pagefile size to use with SQL Server came up, and Microsoft's recommendation is that it be 1 1/2 times the amount of RAM. A: In this case, the normal recommendation of 1.5 times total physical RAM is not the best. This very general recommendation is provided under the assumption that all memory is being used by "normal" processes, which can generally have their least-used pages moved to disk without generating massive performance issues for the application process the memory belongs to. For servers running SQL Server (generally with very large amounts of RAM), the majority of the physical RAM is committed to the SQL Server process and should be (if configured correctly) locked in physical memory, preventing it from being paged out to the pagefile. SQL Server manages its own memory very carefully with performance in mind, using a large part of the RAM allocated to its process as a data cache to reduce disk I/O. It does not make sense to page out those data cache pages to the pagefile, as the sole purpose of having that data in RAM in the first place is to reduce disk I/O. (Note that the Windows OS also uses available RAM similarly as disk cache to speed up system operation.) Since SQL Server already manages its own memory space, this memory space should not be considered "pageable", and not included in a calculation for pagefile size. In regard to MEM_COMMIT mentioned by Remus, the terminology is confusing because in virtual memory parlance, "reserved" never refers to actual allocation, but to preventing use of an address space (not physical space) by another process. Memory available to be "committed" is basically equal to the sum of physical RAM and pagefile size, and doing a MEM_COMMIT just decrements the amount available in the committed pool. It does not allocate a matching page in the pagefile at that time. When a committed memory page is actually written to, that is when the virtual memory system will allocate a physical memory page and possibly bump another memory page from physical RAM to the pagefile. See MSDN's VirtualAlloc function reference.
The Windows OS keeps track of memory pressures between application processes and its own disk cache mechanism and decides when it should bump non-locked memory pages from physical RAM to the pagefile. My understanding is that having a pagefile that is way too large compared to the actual non-locked memory space can result in Windows overzealously paging out application memory to the pagefile, resulting in those applications suffering the consequences of page misses (slow performance). As long as the server is not running other memory-hungry processes, a pagefile size of 4GB should be plenty. If you have set SQL Server to allow locking pages in memory, you should also consider setting SQL Server's max memory setting so that it leaves some physical RAM available to the OS for itself and other processes. 802 errors in SQL Server indicate that the system cannot commit any more pages for the data cache. Increasing the pagefile size will only help in this situation insofar as Windows is able to page out memory from non-SQL Server processes. Allowing SQL Server memory to grow into the pagefile in this situation might get rid of the error messages, but it is counterproductive, due to the point earlier about the reason for the data cache in the first place. A: With all due respect to Remus (whom I respect greatly), I strongly disagree. If your page file is large enough to support a full dump, it will perform a full dump every time. If you have a very large amount of RAM, this can cause a tiny blip to become a major outage. You do NOT want your server to have to write out 1 TB of RAM to disk if there is a one-time transient issue. If there is a recurring issue, you can increase the page file to capture a full dump. I would wait to do this until you have been instructed by PSS (or someone else qualified to analyze a full dump) to capture a full dump. An extremely small percentage of DBAs know how to analyze a full dump. A mini-dump is sufficient for troubleshooting most issues that pop up anyway. Plus, if your server is configured to allow a 1 TB full dump and a recurring issue occurs, how much free disk space would you recommend having on hand? You could fill up an entire SAN in a single weekend. A page file of 1.5*RAM was the norm back in the days when you were lucky to have a SQL Server with 3 or 4 GB of RAM. This is not the case any more. I leave the page file at the Windows default size and settings on all production servers (except for an SSAS server that is experiencing memory pressure). And just for clarification, I've worked with servers ranging from 2 GB of RAM to 2 TB of RAM. After more than 11 years, I have only had to increase the paging file to capture a full dump one time. A: Irrespective of the size of the RAM, you still need a pagefile at least 1.5 times the amount of physical RAM. This is true even if you have a 1 TB RAM machine; you'll need a 1.5 TB pagefile on disk (sounds crazy, but is true). When a process asks for MEM_COMMIT memory via VirtualAlloc/VirtualAllocEx, the requested size needs to be reserved in the pagefile. This was true in the first Win NT system, and is still true today; see Managing Virtual Memory in Win32: When memory is committed, physical pages of memory are allocated and space is reserved in a pagefile. Barring some extremely odd cases, SQL Server will always ask for MEM_COMMIT pages.
And given the fact that SQL uses a Dynamic Memory Management policy that reserves upfront as much buffer pool as possible (reserves and commits in terms of VAS), SQL Server will request at start up a huge reservation of space in the pagefile. If the pagefile is not properly sized, errors 801/802 will start showing up in SQL's ERRORLOG file and operations. This always causes some confusion, as administrators erroneously assume that a large RAM eliminates the need for a pagefile. In truth the contrary happens: a large RAM increases the need for a pagefile, just because of the inner workings of the Windows NT memory manager. The reserved pagefile is, hopefully, never used. A: If you're looking for high performance, you are going to want to avoid paging completely, so the page file size becomes less significant. Invest in as much RAM as feasible for the DB server. A: After much research, our dedicated SQL Servers running Enterprise x64 on Windows 2003 Enterprise x64 have no page file. Simply put, the page file is a cache for files that gets managed by the OS, and SQL has its own internal memory management system. The MS article referenced does not qualify that the advice is for the OS running out-of-the-box services such as file sharing. Having a page file simply burdens the disk I/O, because Windows is trying to help when only the SQL OS can do the job.
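As a quick way to do the monitoring recommended above, you can read the pagefile counters over WMI; for example, from PowerShell (the values are reported in megabytes):
PS C:\> Get-WmiObject Win32_PageFileUsage | Select-Object Name, AllocatedBaseSize, CurrentUsage, PeakUsage
Comparing PeakUsage against AllocatedBaseSize over time tells you whether the pagefile you configured is actually being touched.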
{ "language": "en", "url": "https://stackoverflow.com/questions/2588", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19" }
Q: What are your favorite Powershell Cmdlets? I just found /n software's free PowerShell NetCmdlets, and after playing with them I love the functionality they bring to the command line. So it raises the question: what are your favorite cmdlets, and how do you use them? A: There's an out-twitter script I use for posting to Twitter. It's nice, as it means you can send something to Twitter without the risk of being distracted by a browser. I added an alias for it, "twit". So now you can type, for example: PS C:\>"trying out stack overflow" | twit and if successfully lodged, it will return an integer that identifies your post. A: As a programmer/hacker, Get-Member and Get-Command are the ones I use more than any others, but the ones I use to show off are Select-Control and Send-Keys from WASP, the PowerGadgets, and some of my own stuff written in WPF against CTP2 or PoshConsole ;-) A: Get-Member, hands down. No, it's not very glamorous, but the ability to inspect objects interactively beats interrupting your work to go hit up MSDN. A: Set-Clipboard, found on the PowerShell Community Extensions project on CodePlex. Usually when I'm working in PowerShell, the ultimate goal is to generate some text or even an Excel spreadsheet. Set-Clipboard eliminates all of the intermediate "save it to a file, ok now open that file, select all, copy to clipboard" steps--you do it all in PowerShell. A: I wrote a PowerShell provider to give me access to IE7's RSS feed store, and had lots of fun with it. It lets me cd to a drive called feed: and navigate around folders and feeds using cd and dir. It even lets you add or remove feeds from the command line. See this post on my blog as an example: Getting the Most Prolific Authors in your Feeds. It's rolled up into the PowerShell Community Extensions project nowadays, which you can find on CodePlex here. A: While it is not as fun as Out-Twitter, my favorite cmdlet is Get-Member, since it allows me to examine any of the objects I'm working with and find out new properties and methods, as well as the underlying type of the object. If I did not choose Get-Member, I would have to go with Out-Clipboard from the PowerShell Community Extensions (PSCX), as it enables a whole lot of clipboard automation and makes using PowerShell for code templating much easier. A: Well, it is a little bland, but I would vote for Get-Help. A: export-csv. This creates a nice report in a manager-friendly Excel-ready format. Bonus points if you have the community extensions installed and use send-smtpmail. A management report in their inbox from the command line. Nice. A: While semi-related to your question, it does not entirely fit the Powershell NetCmdlets motif. But I wanted to post it anyhow as I use it daily and it may help others. Simply map the Shift-Ctrl-C key combo to display the Visual Studio command prompt. A: ls (Get-ChildItem) rm (Remove-Item) ps (Get-Process) and the rest of my familiar commands that now "just work" :) but seriously... New-Object would have to get my vote. With it, PowerShell can do ANYTHING :) A: I find Get-Member to be the most useful native PowerShell cmdlet. I also use Get-WMIObject on a daily basis. Even if I'm troubleshooting a VBScript problem for someone, I'll turn to Get-WMIObject because I can work with WMI interactively. A: The combination of Get-WMIObject and Get-Member is something I use throughout the workday. Working on Get-Sandwich. A: I do a lot of work with Microsoft Lync 2010, which includes a set of synthetic transactions for testing functionality.
Of these, Test-CsPstnOutboundCall is my favourite. For general scripting I've got to vote for Get-Member and Get-Help :)
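Since Get-Member keeps coming up in these answers, a quick illustration of why: pipe anything to it and you get the object's type and members without leaving the shell. For example:
PS C:\> Get-Process | Get-Member -MemberType Property
PS C:\> "hello world" | Get-Member
The first lists every property exposed by System.Diagnostics.Process objects; the second shows that even a plain string is a System.String object whose methods you can call directly in the pipeline.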
{ "language": "en", "url": "https://stackoverflow.com/questions/2630", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15" }
Q: What are some web-based knowledge-base solutions? I've used a WordPress blog and a Screwturn Wiki (at two separate jobs) to store private, company-specific KB info, but I'm looking for something that was created to be a knowledge base. Specifically, I'd like to see: * *Free/low cost *Simple method for users to subscribe to KB (or just sections) to get updates *Ability to do page versioning/audit changes *Limit access to certain pages for certain users *Very simple method of posting/editing articles *Very simple method of adding images to articles *Excellent (fast, accurate) searching abilities *Ability to rate and comment on articles I liked using the WordPress blog because it allowed me to use Live Writer to add/edit articles and images, but it didn't have page versioning (that I could see). I like using Screwturn Wiki because of its ability to track article versions, and I like its clean look, but some non-technical people balk at the input and editing. A: I've also been investigating wiki software for use as a KB, but it is tricky to find something that is easy to use for non-technical people. There are many wikis that attempt to provide WYSIWYG editing, but most of the software I've found generates nasty inefficient HTML markup from the WYSIWYG editor. One notable exception to this is Confluence, which generates wiki syntax from a WYSIWYG editor. This still isn't perfect (show me a WYSIWYG editor that is) but is a pretty good compromise between retaining simple wiki syntax for those who like it and allowing non-technical users to contribute content. The only problem is that Confluence isn't free ($1,200 for a 25-user license). Edit: I also tried DekiWiki and while the UI is nice it doesn't seem to be quite ready for primetime (it suffers terribly from the bad WYSIWYG output disease mentioned above). It also seems like they lack direction, as there are so many different ways of accomplishing the same task. A: Cerberus - it's more a full-featured Help Desk/Issue Tracking system, but it has a nice KB solution built in. It can be free, but they do have a low-cost pay version that is also very good. A: I second Luke's answer. I can recommend Confluence, and here is why: I extensively tested many commercial and free wiki-based solutions. Not a single one is a winner on all accounts, including Confluence. Let me try to make your quest a little shorter by summarizing what I have learned to be a pain and what is important: * *WYSIWYG is a must-have feature for the Enterprise. A wiki without it? Skip it. *That said, in reality WYSIWYG doesn't work perfectly. It is more of a feature you must have so that casual users are not afraid of the monster and start using it. But you, and anyone who wants to seriously create content, will very quickly get used to the wiki markup. It is faster and more reliable. *You need good permissions controls (who can see, edit, etc. a page). Confluence has good ones, but I have my complaints (too complicated to go into here). *You will want a good export feature. Most will give you a single-page "PDF" export, but you need much more. For example, let's say you have an FAQ; you want to export the entire FAQ, right? Will that work? *Macros: you want a community creating macros. You asked, for example, about the ability to rate pages; here is a link to a Macro for Confluence that lets you do that. *Structure: you want to be able to say that a page is a child of a different page, and be able to browse the data.
The wikipedia model, of orphaned pages with no sturcture will not work in the Enterprise. (think FAQ, you want to have a hierarchy no?) *Ability to easily attache picture to be embedded in the body of the page/article. In confluence, you need to upload the image and then can embed it, it could be a little better (CTR+V) but I guess this is easy enough for 80% of the users. At the end of the day, remember that a Wiki will be valuable to you the more flexible it is. It needs to be a "blank" canvas, and your imagination is then used to "build" the application. In Confluence, I found 3 different "best practices" on how to create a FAQ. That means I can implement MANY things. Some examples (I use my Wiki for) * *FAQ: any error, problem is logged. Used by PS and ENG. reduced internal support time dramatically *Track account status: I implemetned sophisticated "dashboard" that you can see at a glance which customer is at what state, the software version they have, who in the company 'owns" the custoemr etc' *Product: all documentation, installation instructions, the "what's new" etc *Technical documentation, DB structure and what the tables mean *HR: contact list, Document repository My runner up (15 month ago) was free Deki_Wiki, time has passed, so I don't know if this would be still my runner up. good luck! A: I think Drupal is a very possible choice. It has a lot of built-in support for book-type information capturing. And there is a rich collection of user generated modules which you can use to enhance the features. I think it has almost all the features you ask for out of the box. Drupal CMS Benefits A: Personally I use MediaWiki for this purpose. I've tried a number of other free and paid wikis (including Confluence) and have always been impressed with MediaWiki's simplicity and ease of use. I have MediaWiki installed on a thumb drive (using XAMPP from PortableApps), which I use mostly as a personal knowledge base/code snippet repository. I can take it with me wherever I go, and view/edit it from any computer I'm using. A: We've been using a combination of * *TWiki *OpenGrok for the codebase *usenet *LotusNotes based system As long as there is a google search appliance pointed at these things I think it's ok to have any or many versions as long as people use them
{ "language": "en", "url": "https://stackoverflow.com/questions/2639", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: How do I split a delimited string so I can access individual items? Using SQL Server, how do I split a string so I can access item x? Take a string "Hello John Smith". How can I split the string by space and access the item at index 1 which should return "John"? A: I use the answer of frederic but this did not work in SQL Server 2005 I modified it and I'm using select with union all and it works DECLARE @str varchar(max) SET @str = 'Hello John Smith how are you' DECLARE @separator varchar(max) SET @separator = ' ' DECLARE @Splited table(id int IDENTITY(1,1), item varchar(max)) SET @str = REPLACE(@str, @separator, ''' UNION ALL SELECT ''') SET @str = ' SELECT ''' + @str + ''' ' INSERT INTO @Splited EXEC(@str) SELECT * FROM @Splited And the result-set is: id item 1 Hello 2 John 3 Smith 4 how 5 are 6 you A: This pattern works fine and you can generalize Convert(xml,'<n>'+Replace(FIELD,'.','</n><n>')+'</n>').value('(/n[INDEX])','TYPE') ^^^^^ ^^^^^ ^^^^ note FIELD, INDEX and TYPE. Let some table with identifiers like sys.message.1234.warning.A45 sys.message.1235.error.O98 .... Then, you can write SELECT Source = q.value('(/n[1])', 'varchar(10)'), RecordType = q.value('(/n[2])', 'varchar(20)'), RecordNumber = q.value('(/n[3])', 'int'), Status = q.value('(/n[4])', 'varchar(5)') FROM ( SELECT q = Convert(xml,'<n>'+Replace(fieldName,'.','</n><n>')+'</n>') FROM some_TABLE ) Q splitting and casting all parts. A: Yet another get n'th part of string by delimeter function: create function GetStringPartByDelimeter ( @value as nvarchar(max), @delimeter as nvarchar(max), @position as int ) returns NVARCHAR(MAX) AS BEGIN declare @startPos as int declare @endPos as int set @endPos = -1 while (@position > 0 and @endPos != 0) begin set @startPos = @endPos + 1 set @endPos = charindex(@delimeter, @value, @startPos) if(@position = 1) begin if(@endPos = 0) set @endPos = len(@value) + 1 return substring(@value, @startPos, @endPos - @startPos) end set @position = @position - 1 end return null end and the usage: select dbo.GetStringPartByDelimeter ('a;b;c;d;e', ';', 3) which returns: c A: Most of the solutions here use while loops or recursive CTEs. A set-based approach will be superior, I promise, if you can use a delimiter other than a space: CREATE FUNCTION [dbo].[SplitString] ( @List NVARCHAR(MAX), @Delim VARCHAR(255) ) RETURNS TABLE AS RETURN ( SELECT [Value], idx = RANK() OVER (ORDER BY n) FROM ( SELECT n = Number, [Value] = LTRIM(RTRIM(SUBSTRING(@List, [Number], CHARINDEX(@Delim, @List + @Delim, [Number]) - [Number]))) FROM (SELECT Number = ROW_NUMBER() OVER (ORDER BY name) FROM sys.all_objects) AS x WHERE Number <= LEN(@List) AND SUBSTRING(@Delim + @List, [Number], LEN(@Delim)) = @Delim ) AS y ); Sample usage: SELECT Value FROM dbo.SplitString('foo,bar,blat,foo,splunge',',') WHERE idx = 3; Results: ---- blat You could also add the idx you want as an argument to the function, but I'll leave that as an exercise to the reader. You can't do this with just the native STRING_SPLIT function added in SQL Server 2016, because there is no guarantee that the output will be rendered in the order of the original list. In other words, if you pass in 3,6,1 the result will likely be in that order, but it could be 1,3,6. 
I have asked for the community's help in improving the built-in function here: * *Please help with STRING_SPLIT improvements With enough qualitative feedback, they may actually consider making some of these enhancements: * *STRING_SPLIT is not feature complete More on split functions, why (and proof that) while loops and recursive CTEs don't scale, and better alternatives, if splitting strings coming from the application layer: * *Split strings the right way – or the next best way *Splitting Strings : A Follow-Up *Splitting Strings : Now with less T-SQL *Comparing string splitting / concatenation methods *Processing a list of integers : my approach *Splitting a list of integers : another roundup *More on splitting lists : custom delimiters, preventing duplicates, and maintaining order *Removing Duplicates from Strings in SQL Server On SQL Server 2016 or above, though, you should look at STRING_SPLIT() and STRING_AGG(): * *Performance Surprises and Assumptions : STRING_SPLIT() *STRING_SPLIT() in SQL Server 2016 : Follow-Up #1 *STRING_SPLIT() in SQL Server 2016 : Follow-Up #2 *SQL Server v.Next : STRING_AGG() performance *Solve old problems with SQL Server’s new STRING_AGG and STRING_SPLIT functions A: If your database has compatibility level of 130 or higher then you can use the STRING_SPLIT function along with OFFSET FETCH clauses to get the specific item by index. To get the item at index N (zero based), you can use the following code SELECT value FROM STRING_SPLIT('Hello John Smith',' ') ORDER BY (SELECT NULL) OFFSET N ROWS FETCH NEXT 1 ROWS ONLY To check the compatibility level of your database, execute this code: SELECT compatibility_level FROM sys.databases WHERE name = 'YourDBName'; A: Try this: CREATE function [SplitWordList] ( @list varchar(8000) ) returns @t table ( Word varchar(50) not null, Position int identity(1,1) not null ) as begin declare @pos int, @lpos int, @item varchar(100), @ignore varchar(100), @dl int, @a1 int, @a2 int, @z1 int, @z2 int, @n1 int, @n2 int, @c varchar(1), @a smallint select @a1 = ascii('a'), @a2 = ascii('A'), @z1 = ascii('z'), @z2 = ascii('Z'), @n1 = ascii('0'), @n2 = ascii('9') set @ignore = '''"' set @pos = 1 set @dl = datalength(@list) set @lpos = 1 set @item = '' while (@pos <= @dl) begin set @c = substring(@list, @pos, 1) if (@ignore not like '%' + @c + '%') begin set @a = ascii(@c) if ((@a >= @a1) and (@a <= @z1)) or ((@a >= @a2) and (@a <= @z2)) or ((@a >= @n1) and (@a <= @n2)) begin set @item = @item + @c end else if (@item > '') begin insert into @t values (@item) set @item = '' end end set @pos = @pos + 1 end if (@item > '') begin insert into @t values (@item) end return end Test it like this: select * from SplitWordList('Hello John Smith') A: I was looking for the solution on net and the below works for me. Ref. 
And you call the function like this : SELECT * FROM dbo.split('ram shyam hari gopal',' ') SET ANSI_NULLS ON GO SET QUOTED_IDENTIFIER ON GO CREATE FUNCTION [dbo].[Split](@String VARCHAR(8000), @Delimiter CHAR(1)) RETURNS @temptable TABLE (items VARCHAR(8000)) AS BEGIN DECLARE @idx INT DECLARE @slice VARCHAR(8000) SELECT @idx = 1 IF len(@String)<1 OR @String IS NULL RETURN WHILE @idx!= 0 BEGIN SET @idx = charindex(@Delimiter,@String) IF @idx!=0 SET @slice = LEFT(@String,@idx - 1) ELSE SET @slice = @String IF(len(@slice)>0) INSERT INTO @temptable(Items) VALUES(@slice) SET @String = RIGHT(@String,len(@String) - @idx) IF len(@String) = 0 break END RETURN END A: The following example uses a recursive CTE Update 18.09.2013 CREATE FUNCTION dbo.SplitStrings_CTE(@List nvarchar(max), @Delimiter nvarchar(1)) RETURNS @returns TABLE (val nvarchar(max), [level] int, PRIMARY KEY CLUSTERED([level])) AS BEGIN ;WITH cte AS ( SELECT SUBSTRING(@List, 0, CHARINDEX(@Delimiter, @List + @Delimiter)) AS val, CAST(STUFF(@List + @Delimiter, 1, CHARINDEX(@Delimiter, @List + @Delimiter), '') AS nvarchar(max)) AS stval, 1 AS [level] UNION ALL SELECT SUBSTRING(stval, 0, CHARINDEX(@Delimiter, stval)), CAST(STUFF(stval, 1, CHARINDEX(@Delimiter, stval), '') AS nvarchar(max)), [level] + 1 FROM cte WHERE stval != '' ) INSERT @returns SELECT REPLACE(val, ' ','' ) AS val, [level] FROM cte WHERE val > '' RETURN END Demo on SQLFiddle A: You can leverage a Number table to do the string parsing. Create a physical numbers table: create table dbo.Numbers (N int primary key); insert into dbo.Numbers select top 1000 row_number() over(order by number) from master..spt_values go Create test table with 1000000 rows create table #yak (i int identity(1,1) primary key, array varchar(50)) insert into #yak(array) select 'a,b,c' from dbo.Numbers n cross join dbo.Numbers nn go Create the function create function [dbo].[ufn_ParseArray] ( @Input nvarchar(4000), @Delimiter char(1) = ',', @BaseIdent int ) returns table as return ( select row_number() over (order by n asc) + (@BaseIdent - 1) [i], substring(@Input, n, charindex(@Delimiter, @Input + @Delimiter, n) - n) s from dbo.Numbers where n <= convert(int, len(@Input)) and substring(@Delimiter + @Input, n, 1) = @Delimiter ) go Usage (outputs 3mil rows in 40s on my laptop) select * from #yak cross apply dbo.ufn_ParseArray(array, ',', 1) cleanup drop table dbo.Numbers; drop function [dbo].[ufn_ParseArray] Performance here is not amazing, but calling a function over a million row table is not the best idea. If performing a string split over many rows I would avoid the function. A: This question is not about a string split approach, but about how to get the nth element. All answers here are doing some kind of string splitting using recursion, CTEs, multiple CHARINDEX, REVERSE and PATINDEX, inventing functions, call for CLR methods, number tables, CROSS APPLYs ... Most answers cover many lines of code. But - if you really want nothing more than an approach to get the nth element - this can be done as real one-liner, no UDF, not even a sub-select... 
And as an extra benefit: type safe Get part 2 delimited by a space: DECLARE @input NVARCHAR(100)=N'part1 part2 part3'; SELECT CAST(N'<x>' + REPLACE(@input,N' ',N'</x><x>') + N'</x>' AS XML).value('/x[2]','nvarchar(max)') Of course you can use variables for delimiter and position (use sql:column to retrieve the position directly from a query's value): DECLARE @dlmt NVARCHAR(10)=N' '; DECLARE @pos INT = 2; SELECT CAST(N'<x>' + REPLACE(@input,@dlmt,N'</x><x>') + N'</x>' AS XML).value('/x[sql:variable("@pos")][1]','nvarchar(max)') If your string might include forbidden characters (especially one among &><), you still can do it this way. Just use FOR XML PATH on your string first to replace all forbidden characters with the fitting escape sequence implicitly. It's a very special case if - additionally - your delimiter is the semicolon. In this case I replace the delimiter first to '#DLMT#', and replace this to the XML tags finally: SET @input=N'Some <, > and &;Other äöü@€;One more'; SET @dlmt=N';'; SELECT CAST(N'<x>' + REPLACE((SELECT REPLACE(@input,@dlmt,'#DLMT#') AS [*] FOR XML PATH('')),N'#DLMT#',N'</x><x>') + N'</x>' AS XML).value('/x[sql:variable("@pos")][1]','nvarchar(max)'); UPDATE for SQL-Server 2016+ Regretfully the developers forgot to return the part's index with STRING_SPLIT. But, using SQL-Server 2016+, there is JSON_VALUE and OPENJSON. With JSON_VALUE we can pass in the position as the index' array. For OPENJSON the documentation states clearly: When OPENJSON parses a JSON array, the function returns the indexes of the elements in the JSON text as keys. A string like 1,2,3 needs nothing more than brackets: [1,2,3]. A string of words like this is an example needs to be ["this","is","an","example"]. These are very easy string operations. Just try it out: DECLARE @str VARCHAR(100)='Hello John Smith'; DECLARE @position INT = 2; --We can build the json-path '$[1]' using CONCAT SELECT JSON_VALUE('["' + REPLACE(@str,' ','","') + '"]',CONCAT('$[',@position-1,']')); --See this for a position safe string-splitter (zero-based): SELECT JsonArray.[key] AS [Position] ,JsonArray.[value] AS [Part] FROM OPENJSON('["' + REPLACE(@str,' ','","') + '"]') JsonArray In this post I tested various approaches and found, that OPENJSON is really fast. Even much faster than the famous "delimitedSplit8k()" method... UPDATE 2 - Get the values type-safe We can use an array within an array simply by using doubled [[]]. This allows for a typed WITH-clause: DECLARE @SomeDelimitedString VARCHAR(100)='part1|1|20190920'; DECLARE @JsonArray NVARCHAR(MAX)=CONCAT('[["',REPLACE(@SomeDelimitedString,'|','","'),'"]]'); SELECT @SomeDelimitedString AS TheOriginal ,@JsonArray AS TransformedToJSON ,ValuesFromTheArray.* FROM OPENJSON(@JsonArray) WITH(TheFirstFragment VARCHAR(100) '$[0]' ,TheSecondFragment INT '$[1]' ,TheThirdFragment DATE '$[2]') ValuesFromTheArray A: I don't believe SQL Server has a built-in split function, so other than a UDF, the only other answer I know is to hijack the PARSENAME function: SELECT PARSENAME(REPLACE('Hello John Smith', ' ', '.'), 2) PARSENAME takes a string and splits it on the period character. It takes a number as its second argument, and that number specifies which segment of the string to return (working from back to front). SELECT PARSENAME(REPLACE('Hello John Smith', ' ', '.'), 3) --return Hello Obvious problem is when the string already contains a period. I still think using a UDF is the best way...any other suggestions? A: Here is a UDF which will do it. 
It will return a table of the delimited values, haven't tried all scenarios on it but your example works fine. CREATE FUNCTION SplitString ( -- Add the parameters for the function here @myString varchar(500), @deliminator varchar(10) ) RETURNS @ReturnTable TABLE ( -- Add the column definitions for the TABLE variable here [id] [int] IDENTITY(1,1) NOT NULL, [part] [varchar](50) NULL ) AS BEGIN Declare @iSpaces int Declare @part varchar(50) --initialize spaces Select @iSpaces = charindex(@deliminator,@myString,0) While @iSpaces > 0 Begin Select @part = substring(@myString,0,charindex(@deliminator,@myString,0)) Insert Into @ReturnTable(part) Select @part Select @myString = substring(@mystring,charindex(@deliminator,@myString,0)+ len(@deliminator),len(@myString) - charindex(' ',@myString,0)) Select @iSpaces = charindex(@deliminator,@myString,0) end If len(@myString) > 0 Insert Into @ReturnTable Select @myString RETURN END GO You would call it like this: Select * From SplitString('Hello John Smith',' ') Edit: Updated solution to handle delimters with a len>1 as in : select * From SplitString('Hello**John**Smith','**') A: Alter Function dbo.fn_Split ( @Expression nvarchar(max), @Delimiter nvarchar(20) = ',', @Qualifier char(1) = Null ) RETURNS @Results TABLE (id int IDENTITY(1,1), value nvarchar(max)) AS BEGIN /* USAGE Select * From dbo.fn_Split('apple pear grape banana orange honeydew cantalope 3 2 1 4', ' ', Null) Select * From dbo.fn_Split('1,abc,"Doe, John",4', ',', '"') Select * From dbo.fn_Split('Hello 0,"&""&&&&', ',', '"') */ -- Declare Variables DECLARE @X xml, @Temp nvarchar(max), @Temp2 nvarchar(max), @Start int, @End int -- HTML Encode @Expression Select @Expression = (Select @Expression For XML Path('')) -- Find all occurences of @Delimiter within @Qualifier and replace with |||***||| While PATINDEX('%' + @Qualifier + '%', @Expression) > 0 AND Len(IsNull(@Qualifier, '')) > 0 BEGIN Select -- Starting character position of @Qualifier @Start = PATINDEX('%' + @Qualifier + '%', @Expression), -- @Expression starting at the @Start position @Temp = SubString(@Expression, @Start + 1, LEN(@Expression)-@Start+1), -- Next position of @Qualifier within @Expression @End = PATINDEX('%' + @Qualifier + '%', @Temp) - 1, -- The part of Expression found between the @Qualifiers @Temp2 = Case When @End &LT 0 Then @Temp Else Left(@Temp, @End) End, -- New @Expression @Expression = REPLACE(@Expression, @Qualifier + @Temp2 + Case When @End &LT 0 Then '' Else @Qualifier End, Replace(@Temp2, @Delimiter, '|||***|||') ) END -- Replace all occurences of @Delimiter within @Expression with '&lt/fn_Split&gt&ltfn_Split&gt' -- And convert it to XML so we can select from it SET @X = Cast('&ltfn_Split&gt' + Replace(@Expression, @Delimiter, '&lt/fn_Split&gt&ltfn_Split&gt') + '&lt/fn_Split&gt' as xml) -- Insert into our returnable table replacing '|||***|||' back to @Delimiter INSERT @Results SELECT "Value" = LTRIM(RTrim(Replace(C.value('.', 'nvarchar(max)'), '|||***|||', @Delimiter))) FROM @X.nodes('fn_Split') as X(C) -- Return our temp table RETURN END A: You can split a string in SQL without needing a function: DECLARE @bla varchar(MAX) SET @bla = 'BED40DFC-F468-46DD-8017-00EF2FA3E4A4,64B59FC5-3F4D-4B0E-9A48-01F3D4F220B0,A611A108-97CA-42F3-A2E1-057165339719,E72D95EA-578F-45FC-88E5-075F66FD726C' -- http://stackoverflow.com/questions/14712864/how-to-query-values-from-xml-nodes SELECT x.XmlCol.value('.', 'varchar(36)') AS val FROM ( SELECT CAST('<e>' + REPLACE(@bla, ',', '</e><e>') + '</e>' AS xml) AS RawXml ) AS b CROSS 
APPLY b.RawXml.nodes('e') x(XmlCol); If you need to support arbitrary strings (with xml special characters) DECLARE @bla NVARCHAR(MAX) SET @bla = '<html>unsafe & safe Utf8CharsDon''tGetEncoded ÄöÜ - "Conex"<html>,Barnes & Noble,abc,def,ghi' -- http://stackoverflow.com/questions/14712864/how-to-query-values-from-xml-nodes SELECT x.XmlCol.value('.', 'nvarchar(MAX)') AS val FROM ( SELECT CAST('<e>' + REPLACE((SELECT @bla FOR XML PATH('')), ',', '</e><e>') + '</e>' AS xml) AS RawXml ) AS b CROSS APPLY b.RawXml.nodes('e') x(XmlCol); A: In Azure SQL Database (based on Microsoft SQL Server but not exactly the same thing) the signature of STRING_SPLIT function looks like: STRING_SPLIT ( string , separator [ , enable_ordinal ] ) When enable_ordinal flag is set to 1 the result will include a column named ordinal that consists of the 1‑based position of the substring within the input string: SELECT * FROM STRING_SPLIT('hello john smith', ' ', 1) | value | ordinal | |-------|---------| | hello | 1 | | john | 2 | | smith | 3 | This allows us to do this: SELECT value FROM STRING_SPLIT('hello john smith', ' ', 1) WHERE ordinal = 2 | value | |-------| | john | If enable_ordinal is not available then there is a trick which assumes that the substrings within the input string are unique. In this scenario, CHAR_INDEX could be used to find the position of the substring within the input string: SELECT value, ROW_NUMBER() OVER (ORDER BY CHARINDEX(value, input_str)) AS ord_pos FROM (VALUES ('hello john smith') ) AS x(input_str) CROSS APPLY STRING_SPLIT(input_str, ' ') | value | ord_pos | |-------+---------| | hello | 1 | | john | 2 | | smith | 3 | A: You may find the solution in SQL User Defined Function to Parse a Delimited String helpful (from The Code Project). You can use this simple logic: Declare @products varchar(200) = '1|20|3|343|44|6|8765' Declare @individual varchar(20) = null WHILE LEN(@products) > 0 BEGIN IF PATINDEX('%|%', @products) > 0 BEGIN SET @individual = SUBSTRING(@products, 0, PATINDEX('%|%', @products)) SELECT @individual SET @products = SUBSTRING(@products, LEN(@individual + '|') + 1, LEN(@products)) END ELSE BEGIN SET @individual = @products SET @products = NULL SELECT @individual END END A: Here I post a simple way of solution CREATE FUNCTION [dbo].[split]( @delimited NVARCHAR(MAX), @delimiter NVARCHAR(100) ) RETURNS @t TABLE (id INT IDENTITY(1,1), val NVARCHAR(MAX)) AS BEGIN DECLARE @xml XML SET @xml = N'<t>' + REPLACE(@delimited,@delimiter,'</t><t>') + '</t>' INSERT INTO @t(val) SELECT r.value('.','varchar(MAX)') as item FROM @xml.nodes('/t') as records(r) RETURN END Execute the function like this select * from dbo.split('Hello John Smith',' ') A: First, create a function (using CTE, common table expression does away with the need for a temp table) create function dbo.SplitString ( @str nvarchar(4000), @separator char(1) ) returns table AS return ( with tokens(p, a, b) AS ( select 1, 1, charindex(@separator, @str) union all select p + 1, b + 1, charindex(@separator, @str, b + 1) from tokens where b > 0 ) select p-1 zeroBasedOccurance, substring( @str, a, case when b > 0 then b-a ELSE 4000 end) AS s from tokens ) GO Then, use it as any table (or modify it to fit within your existing stored proc) like this. select s from dbo.SplitString('Hello John Smith', ' ') where zeroBasedOccurance=1 Update Previous version would fail for input string longer than 4000 chars. 
This version takes care of the limitation: create function dbo.SplitString ( @str nvarchar(max), @separator char(1) ) returns table AS return ( with tokens(p, a, b) AS ( select cast(1 as bigint), cast(1 as bigint), charindex(@separator, @str) union all select p + 1, b + 1, charindex(@separator, @str, b + 1) from tokens where b > 0 ) select p-1 ItemIndex, substring( @str, a, case when b > 0 then b-a ELSE LEN(@str) end) AS s from tokens ); GO Usage remains the same. A: In my opinion you guys are making it way too complicated. Just create a CLR UDF and be done with it. using System; using System.Data; using System.Data.SqlClient; using System.Data.SqlTypes; using Microsoft.SqlServer.Server; using System.Collections.Generic; public partial class UserDefinedFunctions { [SqlFunction] public static SqlString SearchString(string Search) { List<string> SearchWords = new List<string>(); foreach (string s in Search.Split(new char[] { ' ' })) { if (!s.ToLower().Equals("or") && !s.ToLower().Equals("and")) { SearchWords.Add(s); } } return new SqlString(string.Join(" OR ", SearchWords.ToArray())); } }; A: What about using string and values() statement? DECLARE @str varchar(max) SET @str = 'Hello John Smith' DECLARE @separator varchar(max) SET @separator = ' ' DECLARE @Splited TABLE(id int IDENTITY(1,1), item varchar(max)) SET @str = REPLACE(@str, @separator, '''),(''') SET @str = 'SELECT * FROM (VALUES(''' + @str + ''')) AS V(A)' INSERT INTO @Splited EXEC(@str) SELECT * FROM @Splited Result-set achieved. id item 1 Hello 2 John 3 Smith A: I know it's an old Question, but i think some one can benefit from my solution. select SUBSTRING(column_name,1,CHARINDEX(' ',column_name,1)-1) ,SUBSTRING(SUBSTRING(column_name,CHARINDEX(' ',column_name,1)+1,LEN(column_name)) ,1 ,CHARINDEX(' ',SUBSTRING(column_name,CHARINDEX(' ',column_name,1)+1,LEN(column_name)),1)-1) ,SUBSTRING(SUBSTRING(column_name,CHARINDEX(' ',column_name,1)+1,LEN(column_name)) ,CHARINDEX(' ',SUBSTRING(column_name,CHARINDEX(' ',column_name,1)+1,LEN(column_name)),1)+1 ,LEN(column_name)) from table_name SQL FIDDLE Advantages: * *It separates all the 3 sub-strings deliminator by ' '. *One must not use while loop, as it decreases the performance. *No need to Pivot as all the resultant sub-string will be displayed in one Row Limitations: * *One must know the total no. of spaces (sub-string). Note: the solution can give sub-string up to to N. To overcame the limitation we can use the following ref. But again the above solution can't be use in a table (Actaully i wasn't able to use it). Again i hope this solution can help some-one. Update: In case of Records > 50000 it is not advisable to use LOOPS as it will degrade the Performance A: Almost all the other answers are replacing the string being split which wastes CPU cycles and performs unnecessary memory allocations. 
I cover a much better way to do a string split here: http://www.digitalruby.com/split-string-sql-server/ Here is the code: SET NOCOUNT ON -- You will want to change nvarchar(MAX) to nvarchar(50), varchar(50) or whatever matches exactly with the string column you will be searching against DECLARE @SplitStringTable TABLE (Value nvarchar(MAX) NOT NULL) DECLARE @StringToSplit nvarchar(MAX) = 'your|string|to|split|here' DECLARE @SplitEndPos int DECLARE @SplitValue nvarchar(MAX) DECLARE @SplitDelim nvarchar(1) = '|' DECLARE @SplitStartPos int = 1 SET @SplitEndPos = CHARINDEX(@SplitDelim, @StringToSplit, @SplitStartPos) WHILE @SplitEndPos > 0 BEGIN SET @SplitValue = SUBSTRING(@StringToSplit, @SplitStartPos, (@SplitEndPos - @SplitStartPos)) INSERT @SplitStringTable (Value) VALUES (@SplitValue) SET @SplitStartPos = @SplitEndPos + 1 SET @SplitEndPos = CHARINDEX(@SplitDelim, @StringToSplit, @SplitStartPos) END SET @SplitValue = SUBSTRING(@StringToSplit, @SplitStartPos, 2147483647) INSERT @SplitStringTable (Value) VALUES(@SplitValue) SET NOCOUNT OFF -- You can select or join with the values in @SplitStringTable at this point. A: Pure set-based solution using TVF with recursive CTE. You can JOIN and APPLY this function to any dataset. create function [dbo].[SplitStringToResultSet] (@value varchar(max), @separator char(1)) returns table as return with r as ( select value, cast(null as varchar(max)) [x], -1 [no] from (select rtrim(cast(@value as varchar(max))) [value]) as j union all select right(value, len(value)-case charindex(@separator, value) when 0 then len(value) else charindex(@separator, value) end) [value] , left(r.[value], case charindex(@separator, r.value) when 0 then len(r.value) else abs(charindex(@separator, r.[value])-1) end ) [x] , [no] + 1 [no] from r where value > '') select ltrim(x) [value], [no] [index] from r where x is not null; go Usage: select * from [dbo].[SplitStringToResultSet]('Hello John Smith', ' ') where [index] = 1; Result: value index ------------- John 1 A: Recursive CTE solution with server pain, test it MS SQL Server 2008 Schema Setup: create table Course( Courses varchar(100) ); insert into Course values ('Hello John Smith'); Query 1: with cte as ( select left( Courses, charindex( ' ' , Courses) ) as a_l, cast( substring( Courses, charindex( ' ' , Courses) + 1 , len(Courses ) ) + ' ' as varchar(100) ) as a_r, Courses as a, 0 as n from Course t union all select left(a_r, charindex( ' ' , a_r) ) as a_l, substring( a_r, charindex( ' ' , a_r) + 1 , len(a_R ) ) as a_r, cte.a, cte.n + 1 as n from Course t inner join cte on t.Courses = cte.a and len( a_r ) > 0 ) select a_l, n from cte --where N = 1 Results: | A_L | N | |--------|---| | Hello | 0 | | John | 1 | | Smith | 2 | A: while similar to the xml based answer by josejuan, i found that processing the xml path only once, then pivoting was moderately more efficient: select ID, [3] as PathProvidingID, [4] as PathProvider, [5] as ComponentProvidingID, [6] as ComponentProviding, [7] as InputRecievingID, [8] as InputRecieving, [9] as RowsPassed, [10] as InputRecieving2 from ( select id,message,d.* from sysssislog cross apply ( SELECT Item = y.i.value('(./text())[1]', 'varchar(200)'), row_number() over(order by y.i) as rn FROM ( SELECT x = CONVERT(XML, '<i>' + REPLACE(Message, ':', '</i><i>') + '</i>').query('.') ) AS a CROSS APPLY x.nodes('i') AS y(i) ) d WHERE event = 'OnPipelineRowsSent' ) as tokens pivot ( max(item) for [rn] in ([3],[4],[5],[6],[7],[8],[9],[10]) ) as data ran in 8:30 select id, tokens.value('(/n[3])', 
'varchar(100)') as PathProvidingID, tokens.value('(/n[4])', 'varchar(100)') as PathProvider, tokens.value('(/n[5])', 'varchar(100)') as ComponentProvidingID, tokens.value('(/n[6])', 'varchar(100)') as ComponentProviding, tokens.value('(/n[7])', 'varchar(100)') as InputRecievingID, tokens.value('(/n[8])', 'varchar(100)') as InputRecieving, tokens.value('(/n[9])', 'varchar(100)') as RowsPassed
from (
    select id, Convert(xml, '<n>' + Replace(message, '.', '</n><n>') + '</n>') tokens
    from sysssislog
    WHERE event = 'OnPipelineRowsSent'
) as data

ran in 9:20

A: CREATE FUNCTION [dbo].[fnSplitString]
(
    @string NVARCHAR(MAX),
    @delimiter CHAR(1)
)
RETURNS @output TABLE(splitdata NVARCHAR(MAX))
BEGIN
    DECLARE @start INT, @end INT
    SELECT @start = 1, @end = CHARINDEX(@delimiter, @string)
    WHILE @start < LEN(@string) + 1
    BEGIN
        IF @end = 0
            SET @end = LEN(@string) + 1
        INSERT INTO @output (splitdata)
        VALUES(SUBSTRING(@string, @start, @end - @start))
        SET @start = @end + 1
        SET @end = CHARINDEX(@delimiter, @string, @start)
    END
    RETURN
END

And use it:

select * from dbo.fnSplitString('Querying SQL Server', ' ')

A: If anyone wants to get only one part of the separated text, they can use this:

select * from dbo.SplitStringSep('Word1 word2 word3', ' ')

CREATE function [dbo].[SplitStringSep]
(
    @str nvarchar(4000),
    @separator char(1)
)
returns table
AS
return (
    with tokens(p, a, b) AS (
        select 1, 1, charindex(@separator, @str)
        union all
        select p + 1, b + 1, charindex(@separator, @str, b + 1)
        from tokens
        where b > 0
    )
    select
        p - 1 zeroBasedOccurance,
        substring(@str, a, case when b > 0 then b - a ELSE 4000 end) AS s
    from tokens
)

A: I developed this:

declare @x nvarchar(Max) = 'ali.veli.deli.';
declare @item nvarchar(Max);
declare @splitter char = '.';

while CHARINDEX(@splitter, @x) != 0
begin
    set @item = LEFT(@x, CHARINDEX(@splitter, @x))
    set @x = RIGHT(@x, len(@x) - len(@item))
    select @item as item, @x as x;
end

The only thing you need to pay attention to is that @x should always end with the delimiter dot '.'.

A: declare @strng varchar(max) = 'hello john smith'
select (
    substring(
        @strng,
        charindex(' ', @strng) + 1,
        (charindex(' ', @strng, charindex(' ', @strng) + 1)) - charindex(' ', @strng)
    )
)

A: Building on @NothingsImpossible's solution, or, rather, commenting on the most voted answer (just below the accepted one), I found that the following quick-and-dirty solution fulfilled my own needs; it has the benefit of being solely within the SQL domain. Given a string "first;second;third;fourth;fifth", say I want to get the third token. This works only if we know how many tokens the string is going to have; in this case it's 5. So my way of action is to chop the last two tokens away (inner query), and then to chop the first two tokens away (outer query). I know that this is ugly and covers the specific conditions I was in, but I'm posting it just in case somebody finds it useful. Cheers!

select REVERSE(
    SUBSTRING(
        reverse_substring,
        0,
        CHARINDEX(';', reverse_substring)
    )
)
from (
    select
        msg,
        SUBSTRING(
            REVERSE(msg),
            CHARINDEX(
                ';',
                REVERSE(msg),
                CHARINDEX(';', REVERSE(msg)) + 1
            ) + 1,
            1000
        ) reverse_substring
    from (
        select 'first;second;third;fourth;fifth' msg
    ) a
) b

A: Starting with SQL Server 2016 we have string_split:

DECLARE @string varchar(100) = 'Richard, Mike, Mark'
SELECT value FROM string_split(@string, ',')

A: A modern approach using STRING_SPLIT requires SQL Server 2016 and above.
DECLARE @string varchar(100) = 'Hello John Smith' SELECT ROW_NUMBER() OVER (ORDER BY value) AS RowNr, value FROM string_split(@string, ' ') Result: RowNr value 1 Hello 2 John 3 Smith Now it is possible to get th nth element from the row number. A: Aaron Bertrand's answer is great, but flawed. It doesn't accurately handle a space as a delimiter (as was the example in the original question) since the length function strips trailing spaces. The following is his code, with a small adjustment to allow for a space delimiter: CREATE FUNCTION [dbo].[SplitString] ( @List NVARCHAR(MAX), @Delim VARCHAR(255) ) RETURNS TABLE AS RETURN ( SELECT [Value] FROM ( SELECT [Value] = LTRIM(RTRIM(SUBSTRING(@List, [Number], CHARINDEX(@Delim, @List + @Delim, [Number]) - [Number]))) FROM (SELECT Number = ROW_NUMBER() OVER (ORDER BY name) FROM sys.all_objects) AS x WHERE Number <= LEN(@List) AND SUBSTRING(@Delim + @List, [Number], LEN(@Delim+'x')-1) = @Delim ) AS y ); A: Here is a function that will accomplish the question's goal of splitting a string and accessing item X: CREATE FUNCTION [dbo].[SplitString] ( @List VARCHAR(MAX), @Delimiter VARCHAR(255), @ElementNumber INT ) RETURNS VARCHAR(MAX) AS BEGIN DECLARE @inp VARCHAR(MAX) SET @inp = (SELECT REPLACE(@List,@Delimiter,'_DELMTR_') FOR XML PATH('')) DECLARE @xml XML SET @xml = '<split><el>' + REPLACE(@inp,'_DELMTR_','</el><el>') + '</el></split>' DECLARE @ret VARCHAR(MAX) SET @ret = (SELECT el = split.el.value('.','varchar(max)') FROM @xml.nodes('/split/el[string-length(.)>0][position() = sql:variable("@elementnumber")]') split(el)) RETURN @ret END Usage: SELECT dbo.SplitString('Hello John Smith', ' ', 2) Result: John A: SIMPLE SOLUTION FOR PARSING FIRST AND LAST NAME DECLARE @Name varchar(10) = 'John Smith' -- Get First Name SELECT SUBSTRING(@Name, 0, (SELECT CHARINDEX(' ', @Name))) -- Get Last Name SELECT SUBSTRING(@Name, (SELECT CHARINDEX(' ', @Name)) + 1, LEN(@Name)) In my case (and in many others it seems...), I have a list of first and last names separated by a single space. This can be used directly inside a select statement to parse first and last name. -- i.e. Get First and Last Name from a table of Full Names SELECT SUBSTRING(FullName, 0, (SELECT CHARINDEX(' ', FullName))) as FirstName, SUBSTRING(FullName, (SELECT CHARINDEX(' ', FullName)) + 1, LEN(FullName)) as LastName, From FullNameTable A: I know its late, but I recently had this requirement and came up with the below code. I don't have a choice to use User defined function. Hope this helps. 
SELECT SUBSTRING( SUBSTRING('Hello John Smith' ,0,CHARINDEX(' ','Hello John Smith',CHARINDEX(' ','Hello John Smith')+1) ),CHARINDEX(' ','Hello John Smith'),LEN('Hello John Smith') ) A: CREATE TABLE test( id int, adress varchar(100) ); INSERT INTO test VALUES(1, 'Ludovic Aubert, 42 rue de la Victoire, 75009, Paris, France'),(2, 'Jose Garcia, 1 Calle de la Victoria, 56500 Barcelona, Espana'); SELECT id, value, COUNT(*) OVER (PARTITION BY id) AS n, ROW_NUMBER() OVER (PARTITION BY id ORDER BY (SELECT NULL)) AS rn, adress FROM test CROSS APPLY STRING_SPLIT(adress, ',') A: Modified function of @Aaron Bertrand CREATE FUNCTION [dbo].[SplitString] ( @List NVARCHAR(MAX), @Delim VARCHAR(255), @Idx int ) RETURNS NVARCHAR(1000) AS BEGIN DECLARE @ValueTable TABLE(String NVARCHAR(50), Ind int) DECLARE @Value NVARCHAR(50) BEGIN INSERT INTO @ValueTable SELECT Value, idx FROM (SELECT [Value], idx = RANK() OVER (ORDER BY n) FROM ( SELECT n = Number, [Value] = LTRIM(RTRIM(SUBSTRING(@List, [Number], CHARINDEX(@Delim, @List + @Delim, [Number]) - [Number]))) FROM (SELECT Number = ROW_NUMBER() OVER (ORDER BY name) FROM sys.all_objects) AS x WHERE Number <= LEN(@List) AND SUBSTRING(@Delim + @List, [Number], LEN(@Delim)) = @Delim ) AS y ) AS R WHERE idx = @Idx SET @Value = (SELECT String FROM @ValueTable) END RETURN @Value END GO A: Here's my solution that may help someone. Modification of Jonesinator's answer above. If I have a string of delimited INT values and want a table of INTs returned (Which I can then join on). e.g. '1,20,3,343,44,6,8765' Create a UDF: IF OBJECT_ID(N'dbo.ufn_GetIntTableFromDelimitedList', N'TF') IS NOT NULL DROP FUNCTION dbo.[ufn_GetIntTableFromDelimitedList]; GO CREATE FUNCTION dbo.[ufn_GetIntTableFromDelimitedList](@String NVARCHAR(MAX), @Delimiter CHAR(1)) RETURNS @table TABLE ( Value INT NOT NULL ) AS BEGIN DECLARE @Pattern NVARCHAR(3) SET @Pattern = '%' + @Delimiter + '%' DECLARE @Value NVARCHAR(MAX) WHILE LEN(@String) > 0 BEGIN IF PATINDEX(@Pattern, @String) > 0 BEGIN SET @Value = SUBSTRING(@String, 0, PATINDEX(@Pattern, @String)) INSERT INTO @table (Value) VALUES (@Value) SET @String = SUBSTRING(@String, LEN(@Value + @Delimiter) + 1, LEN(@String)) END ELSE BEGIN -- Just the one value. INSERT INTO @table (Value) VALUES (@String) RETURN END END RETURN END GO Then get the table results: SELECT * FROM dbo.[ufn_GetIntTableFromDelimitedList]('1,20,3,343,44,6,8765', ',') 1 20 3 343 44 6 8765 And in a join statement: SELECT [ID], [FirstName] FROM [User] u JOIN dbo.[ufn_GetIntTableFromDelimitedList]('1,20,3,343,44,6,8765', ',') t ON u.[ID] = t.[Value] 1 Elvis 20 Karen 3 David 343 Simon 44 Raj 6 Mike 8765 Richard If you want to return a list of NVARCHARs instead of INTs then just change the table definition: RETURNS @table TABLE ( Value NVARCHAR(MAX) NOT NULL ) A: Here is a SQL UDF that can split a string and grab just a certain piece. 
create FUNCTION [dbo].[udf_SplitParseOut] ( @List nvarchar(MAX), @SplitOn nvarchar(5), @GetIndex smallint ) returns varchar(1000) AS BEGIN DECLARE @RtnValue table ( Id int identity(0,1), Value nvarchar(MAX) ) DECLARE @result varchar(1000) While (Charindex(@SplitOn,@List)>0) Begin Insert Into @RtnValue (value) Select Value = ltrim(rtrim(Substring(@List,1,Charindex(@SplitOn,@List)-1))) Set @List = Substring(@List,Charindex(@SplitOn,@List)+len(@SplitOn),len(@List)) End Insert Into @RtnValue (Value) Select Value = ltrim(rtrim(@List)) select @result = value from @RtnValue where ID = @GetIndex Return @result END A: A simple optimized algorithm : ALTER FUNCTION [dbo].[Split]( @Text NVARCHAR(200),@Splitor CHAR(1) ) RETURNS @Result TABLE ( value NVARCHAR(50)) AS BEGIN DECLARE @PathInd INT Set @Text+=@Splitor WHILE LEN(@Text) > 0 BEGIN SET @PathInd=PATINDEX('%'+@Splitor+'%',@Text) INSERT INTO @Result VALUES(SUBSTRING(@Text, 0, @PathInd)) SET @Text= SUBSTRING(@Text, @PathInd+1, LEN(@Text)) END RETURN END A: I've been using vzczc's answer using recursive cte's for some time, but have wanted to update it to handle a variable length separator and also to handle strings with leading and lagging "separators" such as when you have a csv file with records such as: "Bob","Smith","Sunnyvale","CA" or when you are dealing with six part fqn's as shown below. I use these extensively for logging of the subject_fqn for auditing, error handling, etc. and parsename only handles four parts: [netbios_name].[machine_name].[instance].[database].[schema].[table].[column] Here is my updated version, and thanks to vzczc's for his original post! select * from [utility].[split_string](N'"this"."string"."gets"."split"."and"."removes"."leading"."and"."trailing"."quotes"', N'"."', N'"', N'"'); select * from [utility].[split_string](N'"this"."string"."gets"."split"."but"."leaves"."leading"."and"."trailing"."quotes"', N'"."', null, null); select * from [utility].[split_string](N'[netbios_name].[machine_name].[instance].[database].[schema].[table].[column]', N'].[', N'[', N']'); create function [utility].[split_string] ( @input [nvarchar](max) , @separator [sysname] , @lead [sysname] , @lag [sysname]) returns @node_list table ( [index] [int] , [node] [nvarchar](max)) begin declare @separator_length [int]= len(@separator) , @lead_length [int] = isnull(len(@lead), 0) , @lag_length [int] = isnull(len(@lag), 0); -- set @input = right(@input, len(@input) - @lead_length); set @input = left(@input, len(@input) - @lag_length); -- with [splitter]([index], [starting_position], [start_location]) as (select cast(@separator_length as [bigint]) , cast(1 as [bigint]) , charindex(@separator, @input) union all select [index] + 1 , [start_location] + @separator_length , charindex(@separator, @input, [start_location] + @separator_length) from [splitter] where [start_location] > 0) -- insert into @node_list ([index],[node]) select [index] - @separator_length as [index] , substring(@input, [starting_position], case when [start_location] > 0 then [start_location] - [starting_position] else len(@input) end) as [node] from [splitter]; -- return; end; go A: Well, mine isn't all that simpler, but here is the code I use to split a comma-delimited input variable into individual values, and put it into a table variable. I'm sure you could modify this slightly to split based on a space and then to do a basic SELECT query against that table variable to get your results. -- Create temporary table to parse the list of accounting cycles. 
DECLARE @tblAccountingCycles table ( AccountingCycle varchar(10) ) DECLARE @vchAccountingCycle varchar(10) DECLARE @intPosition int SET @vchAccountingCycleIDs = LTRIM(RTRIM(@vchAccountingCycleIDs)) + ',' SET @intPosition = CHARINDEX(',', @vchAccountingCycleIDs, 1) IF REPLACE(@vchAccountingCycleIDs, ',', '') <> '' BEGIN WHILE @intPosition > 0 BEGIN SET @vchAccountingCycle = LTRIM(RTRIM(LEFT(@vchAccountingCycleIDs, @intPosition - 1))) IF @vchAccountingCycle <> '' BEGIN INSERT INTO @tblAccountingCycles (AccountingCycle) VALUES (@vchAccountingCycle) END SET @vchAccountingCycleIDs = RIGHT(@vchAccountingCycleIDs, LEN(@vchAccountingCycleIDs) - @intPosition) SET @intPosition = CHARINDEX(',', @vchAccountingCycleIDs, 1) END END The concept is pretty much the same. One other alternative is to leverage the .NET compatibility within SQL Server 2005 itself. You can essentially write yourself a simple method in .NET that would split the string and then expose that as a stored procedure/function. A: I realize this is a really old question, but starting with SQL Server 2016 there are functions for parsing JSON data that can be used to specifically address the OP's question--and without splitting strings or resorting to a user-defined function. To access an item at a particular index of a delimited string, use the JSON_VALUE function. Properly formatted JSON data is required, however: strings must be enclosed in double quotes " and the delimiter must be a comma ,, with the entire string enclosed in square brackets []. DECLARE @SampleString NVARCHAR(MAX) = '"Hello John Smith"'; --Format as JSON data. SET @SampleString = '[' + REPLACE(@SampleString, ' ', '","') + ']'; SELECT JSON_VALUE(@SampleString, '$[0]') AS Element1Value, JSON_VALUE(@SampleString, '$[1]') AS Element2Value, JSON_VALUE(@SampleString, '$[2]') AS Element3Value; Output Element1Value Element2Value Element3Value --------------------- ------------------- ------------------------------ Hello John Smith (1 row affected) A: Using SQL Server 2016 and above. Use this code to TRIM strings, ignore NULL values and apply a row index in the correct order. It also works with a space delimiter: DECLARE @STRING_VALUE NVARCHAR(MAX) = 'one, two,,three, four, five' SELECT ROW_NUMBER() OVER (ORDER BY R.[index]) [index], R.[value] FROM ( SELECT 1 [index], NULLIF(TRIM([value]), '') [value] FROM STRING_SPLIT(@STRING_VALUE, ',') T WHERE NULLIF(TRIM([value]), '') IS NOT NULL ) R A: If you check the following SQL tutorial on splitting string using SQL, you will find a number of functions that can be used to split a given string on SQL Server For example, SplitAndReturnNth UDF function can be used to split a text using a separator and return the Nth piece as the output of the function select dbo.SplitAndReturnNth('Hello John Smith',' ',2)
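A: To tie a few of the JSON answers above together: Aaron's answer leaves adding the index parameter "as an exercise to the reader", and the JSON_VALUE trick makes that exercise a one-line inline table-valued function. This is only a sketch: the function name is my own, it assumes SQL Server 2017 or later (the JSON path is built from a variable, which older versions don't allow), and it assumes the input contains no double quotes or backslashes.

-- @Pos is 1-based; JSON array indexes are 0-based, hence @Pos - 1.
CREATE FUNCTION dbo.GetNthItem
(
    @List  nvarchar(max),
    @Delim nvarchar(10),
    @Pos   int
)
RETURNS TABLE
AS
RETURN
(
    SELECT JSON_VALUE(N'["' + REPLACE(@List, @Delim, N'","') + N'"]',
                      CONCAT('$[', @Pos - 1, ']')) AS Item
);

Usage:

SELECT Item FROM dbo.GetNthItem(N'Hello John Smith', N' ', 2);  -- John

Being an inline TVF, it can also be CROSS APPLYed against a table column, and it returns NULL rather than erroring when @Pos is past the end of the list.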
{ "language": "en", "url": "https://stackoverflow.com/questions/2647", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "527" }
Q: What's the best way to determine if a temporary table exists in SQL Server? When writing a T-SQL script that I plan on re-running, I often use temporary tables to store temporary data. Since the temp table is created on the fly, I'd like to be able to drop that table only if it exists (before I create it). I'll post the method that I use, but I'd like to see if there is a better way.

A: IF Object_Id('TempDB..#TempTable') IS NOT NULL
BEGIN
    DROP TABLE #TempTable
END

A: The OBJECT_ID function returns the internal object id for the given object name and type. 'tempdb..#t1' refers to the table #t1 in the tempdb database. 'U' is for user-defined table.

IF OBJECT_ID('tempdb..#t1', 'U') IS NOT NULL
    DROP TABLE #t1

CREATE TABLE #t1
(
    id INT IDENTITY(1,1),
    msg VARCHAR(255)
)

A: You can also query the system catalog directly, but note that temp tables live in tempdb and their stored names are padded with a unique suffix, so an exact name match won't find them:

SELECT name
FROM tempdb..sysobjects
WHERE type = 'U' AND name LIKE '#TempTable%'
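A: Worth adding for newer versions: SQL Server 2016 introduced DROP TABLE IF EXISTS, which folds the existence check into the drop itself. A minimal sketch, assuming 2016 or later:

-- No-op if the temp table doesn't exist, so the script re-runs cleanly.
DROP TABLE IF EXISTS #TempTable;

CREATE TABLE #TempTable
(
    id  INT IDENTITY(1,1),
    msg VARCHAR(255)
);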
{ "language": "en", "url": "https://stackoverflow.com/questions/2649", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "21" }
Q: Getting started with Version Control System I need to implement version control, even for just the developing I do at home. I have read about how great subversion is for the past couple of years and was about to dedicate myself to learning this on the side until I heard about Git being the up and coming version control system. Given the situation, should I hold off and see which one comes out on top? What are their relative advantages? One issue I noticed with Git is, there are not many full featured GUIs, which is important to many users on my team. Also, wouldn't mind suggestions on how to get started with one or the other. (tutorials, etc.) A: For a friendly explanation of most of the basic concepts, see A Visual Guide to Version Control. The article is very SVN-friendly. A: The most important thing about version control is: JUST START USING IT Not using version control is a horrible idea. If you are not using version control, stop reading right now and start using it. It is very easy to convert from cvs<->svn<->git<->hg It doesn't matter which one you choose. Just pick the easiest one for you to use and start recording the history of your code. You can always migrate to another (D)VCS later. If you are looking for a easy to use GUI look at TortoiseSVN (Windows) and Versions (Mac) (Suggested by codingwithoutcomments) Edit: pix0r said: Git has some nice features, but you won't be able to appreciate them unless you've already used something more standard like CVS or Subversion. This. Using git is pointless if you don't know what version control can do for you. Edit 2: Just saw this link on reddit: Subversion Cheat Sheet. Good quick reference for the svn command line. A: I've used RCS, CVS, SCCS, SourceSafe, Vault, perforce, subversion, and git. I've evaluated BitKeeper, Dimensions, arch, bazaar, svk, ClearCase, PVCS, and Synergy. If I had to start a new repository today, I'd choose git. Hands down. It's free, fast, and under active development. And you can use it as a client of any subversion repository using git-svn. It rocks. A: @superjoe30 What about using source control on your own computer, if you're the sole programmer? Is this good practice? Are there related tips or tricks? I find git is actually easier for this as you don't need a server or worry about entering URL's and so on. Your version-control stuff just lives in the .git directory inside your project and you just go ahead and use it. 5 second intro (assuming you have installed it) cd myproject git init git add * # add all the files git commit Next time you do some changes git add newfile1 newfile2 # if you've made any new files since last time git commit -a As long as you're doing that, git has your back. If you mess up, your code is safe in the nice git repository. It's awesome * *Note: You may find getting things OUT of git a bit harder than getting them in, but it's far more preferable to have that problem than to not have the files at all! A: From my own experience with it, I wouldn't recommend git as an introduction to version control. I've been using it for a couple of months now, and my impression is that it's very powerful and - now that I've partially got my head around it - reasonably intuitive. However, the learning curve is very steep, even though I've been using version control for years. 
It also suffers from being too expressive - it supports many different workflows and development models, but the only guidance on "the best" way to use it is a few pages deep in a Google search, which also makes it tricky for a newcomer to pick up. That said, it's possible that starting from a blank slate with git might actually be easier - my VCS experience is all with centralised version control (CVS, SVN, Perforce...) and part of my (ongoing!) difficulty with git has been understanding the implications of the distributed model. I did glance briefly at other DVCSes like Bazaar and Mercurial and they seemed to be somewhat more newbie-friendly. Anyway, as others have said, Subversion is probably the easiest way to get used to the version control mindset and get practical experience of the benefits of VCS (rollback, branches, collaborative development, easier code review, etc). Oh, and don't start with CVS. It's still in practical use, and has advantages, but IMHO it has too many historical quirks and implementation problems (non-atomic commits!) to be a good way to learn. A: My vote goes to Subversion. It's very powerful, yet easy to use, and has some great tools like TortoiseSVN. But as others have said before me, JUST START USING IT. Source control is such an important part of the software development process. No "serious" software project should be without it. A: At my current job, my predecessor did not use any kind of version control. There are just mountains of folders in at least 3 different places where he kept all of his projects. Any random project folder can be expected to find at least one folder name "project (OLD)" and one named "project" With version control, you never have to make copies of "safe" builds. You don't really have to worry about your IDE corrupting the file you're working on (I'm looking at you, REALBasic 5.5) because is so easy to commit (Read: Save) your work every day. Needless to say, I installed version control the day after I found out it existed. Also, TortoiseSVN makes committing to the database as easy as right clicking a folder. A: Also try out visual svn for your server if you want to avoid any command line work. A: If you are on Mac OSX, I found http://www.versionsapp.com/">Versions to be an incredible (free) GUI front-end to SVN. A: Git is superior to subversion, but it's a little bit out on the bleeding edge. I'd say, if you're just getting started, jump on the edge; setup a free account @ http://github.com They have educational material on site for setting up & using git. A: Don't wait. Pick one, and go with it. All systems will have their pluses and minuses. Your power could go out, you computer gets stolen, or you forget to undo a major change and all your code gets fried while you're waiting to see who emerges victorious. A: It's not that difficult to switch between version control systems. As others have mentioned the important thing is to start using anything as soon as possible. The benefits of using source control over not using source control vastly outweigh the differential benefits between different types of source control. Remember that no matter what version of source control you are using you will always be able to do a brute force conversion to another system by laying down the files from your old system onto disk and then importing those raw files into the new system. Moreover, being familiar with source control fundamentals is a very, very important skill to have as a software developer. 
A: Use Subversion. It's easy to set up, easy to use, and has plenty of tools. Any future revision control system will have an import-from-SVN feature, so it isn't like you can't change down the road if your needs grow.

A: The Subversion Book is your best bet for learning the tool. There may be other quick-start tutorials out there, but the Book is the best single reference you'll find. Git has some nice features, but you won't be able to appreciate them unless you've already used something more standard like CVS or Subversion. I'd definitely agree with the previous posters and start with Subversion.

A: If you are new to version control, read this: Source Control HOWTO

A: Go for SVN. If you have never used source control before, it won't matter to you one way or the other. Also, there is not a large amount of learning involved in using a source control system. If you learn one, you can easily switch over to another at a later date. SVN is a great tool, and it should take care of most of your needs. And since it's been around, it has a fair share of GUI tools (TortoiseSVN, for example). Go for SVN.

A: Yup, SVN for preference unless you really need git's particular features. SVN is hard enough; it sounds like git is more complicated to live with. You can get hosted SVN from people like Beanstalk. Unless you have in-house Linux people, I'd really recommend it: things can go horribly wrong all too easily, and it's nice to have someone else whose job it is to fix it. There's an excellent tutorial on revision control from Eric Sink which is worth reading no matter which system you use.

A: Use TortoiseSVN (Versions.app if on Mac). Just install and go. If you need a place to host your code, look at http://beanstalkapp.com/

A: Coding Horror has a great post about how to set up Subversion on Windows. Following the tutorial, I was able to get Subversion and TortoiseSVN running locally, and I got the education I needed out of it. As far as Git goes, it's probably a good idea to do a hands-on experiment with both of them, to understand which fits your specific development practice.

A: Subversion is the best choice for you. As Karl Seguin pointed out, moving to another versioning system would not be a problem. Also, SVN has very good, easy-to-use GUIs on the client side (TortoiseSVN). http://www.snee.com/bobdc.blog/2007/08/getting_started_with_subversio.html http://dojo.jot.com/WikiHome/Getting%20Started%20With%20Subversion

A: If you choose to go with Subversion and you want to host your own SVN server, there is a very nice and easy Windows-based server called VisualSVN Server. It hides the complexity of setting up an Apache server; you basically just go next, next, next. User configuration is handled with a web UI instead of a config file. http://www.visualsvn.com/server/ Using a public server like Beanstalk is probably easier, but some people like to have their own repositories, either for speed or security.

A: When I decided I must use a code versioning system, I looked around for any good tutorials on how to get started but didn't find any that could help me. So I simply installed the SVN server and TortoiseSVN for the client, dived into the deep end, and learned how to use it along the way.

A: Start using SVN for your actual work, but try to make time for fiddling around with Git and/or Mercurial. SVN is reasonably stable for production, but eventually you'll face a scenario where you'll need a distributed SCM, by which time you'll be properly armed and the new systems will be mature enough.
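A: Since a few answers above say "just install and go" but don't show the commands, here is a minimal SVN quick start to mirror the git one earlier in this thread. This is only a sketch: the paths are made up, it uses a local file:// repository with no server process, and for a shared setup you would serve the repository through svnserve or Apache instead.

# Create a local repository and import an existing project (illustrative paths)
svnadmin create /home/you/svnrepo
svn import /home/you/myproject file:///home/you/svnrepo/myproject/trunk -m "Initial import"

# Check out a working copy, then repeat the daily cycle
svn checkout file:///home/you/svnrepo/myproject/trunk myproject
cd myproject
svn add newfile1 newfile2      # only needed for brand-new files
svn commit -m "Describe your change"
svn update                     # pull in other people's changes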
A: superjoe30 writes: Related question (perhaps answers can be edited to answer this question as well): What about using source control on your own computer, if you're the sole programmer? Is this good practice? Are there related tips or tricks? I use SVN for all of my personal projects. I started off running SVN on my home machine but eventually migrated over to Dreamhost. Their hosting packages that include Subversion are pretty reasonable.

A: If on a Windows box, a quick and dirty solution is CVSNT. Easy to use: just set it up and it works very well. I myself prefer SVN, but this is a good one for quick use.

A: I would definitely choose SVN over CVS, if only because people who learned source control using CVS tend to use "svn delete" then "svn add" instead of "svn move", which makes it harder to find all of the previous revisions of a specific file. And you can always upgrade to using git-svn. I personally think it is easier to learn than hg, but really the main reason to use SVN is that it has largely become the de facto version control system of open source software. If you ever plan on learning/using D, it is almost mandatory for accessing the third-party repositories, like DSource.

A: @superjoe30 Yes, absolutely. Once you start using version control you never go back. I use it for everything, even my "home" folder.

A: @Orion Edwards Subversion does not require a server. You can access a local repository directly (via a client, of course), and there is no server process involved.

A: Just use TortoiseSVN, and you can live even without knowing actual Subversion commands... But that's bad. Luckily there will always be a “great opportunity” to learn them by heart — when your priceless repository first gets corrupted. Yes, it happens.

A: As mentioned many times elsewhere, Just Do It. I was able to get started from scratch with Subversion under Windows in no time by reading the quick-start guide in the Red Book. Once I pointed TortoiseSVN at the repository, I was in business. It took me a while to get the finer points down, but they were minor humps to get over. I'd suggest installing the Subversion service instead of using file:// URLs, but that's mostly personal preference. For a repository stored on your development machine, file:// works fine.

A: From personal experience, SVN would be my recommendation. You can even use a service like Beanstalk that offers free accounts (with limits, obviously, but sufficient for any smallish project) to test the waters. But as others have said, git is superior and is likely worth looking into.

A: One major tip to ease the setup of an SVN server right now is to use a virtual appliance: that is, a virtual machine that has Subversion pre-installed and (mostly) pre-configured on it. Pretty much a plug-and-play thing. Try searching Google for "subversion virtual appliance".

A: I started to use Subversion after reading Wil Shipley's blog. So I started checking in code, with one machine and a Dreamhost account. Then, after I accidentally deleted a function and saved my project, I knew I was in deep "dudu", but with Subversion I just checked out the latest version of that file and it was like nothing happened. I use version control for everything now. I am planning on moving over to git because it is faster, works offline, takes less space, and oh boy is it faster.

A: An important reason to use SVN rather than CVS is that SVN supports binary diffs.
That may not matter to many programmers, but if you are making a series of minor changes in a 10MB image, having a unique copy each time in your repository can chew up space remarkably quickly. I use TortoiseSVN on Windows but on the Mac have gone for the commercial CornerStone client over the (now commercial) Versions client. I found the range of free Mac clients, including RapidSVN, had enough pain points to bug me into shelling out real dollars. The safety net that CornerStone provides for catching files I forgot to add to the repository is worth the dollars to me. I spend a lot of time collaborating with a US client who is in an opposite time zone, so I can't afford screwups forgetting to add files! A: Short answer: Subversion if you're the only one coding it or you're on site with everyone you work with. Git if you're working with people in different sites and your code base is huge. Subversion is really, really easy to set up and get using. It is also nice because you can do relatively complicated things with it too, like hook it up to Apache and use SSL or plug it into Trac for project management. There are so many tools available for Subversion that it's really a good choice. Git is much more useful for people who are on large teams working in a distributed environment. Linus T. developed it for the Linux team because he was unsatisfied with the capabilities of traditional repositories. Well worth learning if you ever plan to be working with people on open source projects.
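For anyone who wants the concrete version of the "just install and go" advice above, a minimal command-line session covers most day-to-day SVN use. This is only a sketch; the repository path and file names are made up:

svnadmin create /home/me/repos                 # create a local repository (no server needed)
svn checkout file:///home/me/repos myproject   # check out a working copy
cd myproject
echo "hello" > readme.txt
svn add readme.txt                             # schedule the new file for versioning
svn commit -m "First commit"                   # record it in the repository
svn status                                     # see local changes
svn update                                     # pull down the latest revision

The same commands work against an http:// or svn:// URL once you move to a hosted server like Beanstalk.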
{ "language": "en", "url": "https://stackoverflow.com/questions/2658", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "76" }
Q: What do I need to escape when sending a query? When you execute a SQL query, you have to clean your strings or users can execute malicious SQL on your website. I usually just have a function escape_string(blah), which: * *Replaces escapes (\) with double escapes (\\). *Replaces single quotes (') with an escaped single quote (\'). Is this adequate? Is there a hole in my code? Is there a library which can do this quickly and reliably for me? I'd like to see graceful solutions in Perl, Java, and PHP. A: For maximum security, performance, and correctness use prepared statements. Here's how to do this with lots of examples in different languages, including PHP: https://stackoverflow.com/questions/1973/what-is-the-best-way-to-avoid-sql-injection-attacks A: I would also escape comments (double dash) -- A: A great thing to use in PHP is the PDO. It takes a lot of the guesswork out of dealing with securing your SQL (and all of your SQL stuff in general). It supports prepared statements, which go a long way towards thwarting SQL injection attacks. A great primer on PDO is included in the book The PHP Anthology 101 Essential Tips, Tricks & Hacks by Davey Shafik etc. 2nd Ed. Makes learning a breeze and is excellent as a reference. I don't even have to think about anything other than the actual SQL query anymore. A: Which language are you using? It seems like pretty much all of them have built-in SQL escape functions that would be better to use. For example, PHP has mysql_real_escape_string and addslashes. A: You're better off using prepared statements with placeholders. Are you using PHP, .NET... either way, prepared statements will provide more security, but I could provide a sample. A: In PHP, I'm using this one and I'd appreciate every comment about it: function quote_smart($valeur) { if (get_magic_quotes_gpc()) $valeur = stripslashes($valeur); if (!is_numeric($valeur)) $valeur = mysql_real_escape_string($valeur); return $valeur; } $IdS = quote_smart($_POST['theID']); $sql = " SELECT * FROM Students WHERE IdStudent={$IdS}; "; It needs one more check if a field can be NULL: $picture = NULL; $theidyouwant = 7; $Name = 'WOOD'; if(is_null($picture)) $p = 'NULL'; else $p = "'".quote_smart($picture)."'"; $IdS = quote_smart($theidyouwant); $requete = "SELECT * FROM Students WHERE IdStudent={$IdS} AND PictureStudent={$p} AND NameStudent='{$Name}'; "; That's it, enjoy! (hope the post will correctly send underscores and not &#95 ;) A: Use Prepared/Parameterized queries! A: The MySQL C API has its own mysql_escape_string(). Using it or its equivalent would be best. A: Use prepared statements. A: In a MySQL query, when using LIKE, also make sure to escape the "_" characters as it is not escaped by mysql_real_escape_string. For reference, check here A: I am not sure if MySQL supports parameterized queries; if so, you should make an effort to go this route. This will ensure the user's input can't do anything malicious. Otherwise some "bad" characters in addition to what you mentioned would be the semicolon (;) and comments (-- and /* */).
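To make the repeated "use prepared statements" advice concrete, here is a minimal PHP/PDO sketch; the Students table is borrowed from the example above, while the connection details are hypothetical:

// Connect and make PDO throw exceptions on errors
$pdo = new PDO('mysql:host=localhost;dbname=school', 'user', 'password');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

// The placeholder keeps user input out of the SQL text entirely,
// so no manual escaping is needed
$stmt = $pdo->prepare('SELECT * FROM Students WHERE IdStudent = ?');
$stmt->execute(array($_POST['theID']));
$rows = $stmt->fetchAll();

Because the query text and the data travel separately, malicious quotes, backslashes, or comment markers in the input cannot change the statement's structure.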
{ "language": "en", "url": "https://stackoverflow.com/questions/2688", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "21" }
Q: How do I use T-SQL Group By I know I need to have (although I don't know why) a GROUP BY clause on the end of a SQL query that uses any aggregate functions like count, sum, avg, etc: SELECT count(userID), userName FROM users GROUP BY userName When else would GROUP BY be useful, and what are the performance ramifications? A: To retrieve the number of widgets from each widget category that has more than 5 widgets, you could do this: SELECT WidgetCategory, count(*) FROM Widgets GROUP BY WidgetCategory HAVING count(*) > 5 The "having" clause is something people often forget about, instead opting to retrieve all their data to the client and iterating through it there. A: GROUP BY forces the entire set to be populated before records are returned (since it is an implicit sort). For that reason (and many others), never use a GROUP BY in a subquery. A: Counting the number of times tags are used might be a good example: SELECT TagName, Count(*) AS TimesUsed FROM Tags GROUP BY TagName ORDER BY TimesUsed If you simply want distinct values of tags, I would prefer to use the DISTINCT statement: SELECT DISTINCT TagName FROM Tags ORDER BY TagName ASC A: GROUP BY is similar to DISTINCT in that it groups multiple records into one. This example, borrowed from http://www.devguru.com/technologies/t-sql/7080.asp, lists distinct products in the Products table. SELECT Product FROM Products GROUP BY Product Product ------------- Desktop Laptop Mouse Network Card Hard Drive Software Book Accessory The advantage of GROUP BY over DISTINCT is that it can give you granular control when used with a HAVING clause. SELECT Product, count(Product) as ProdCnt FROM Products GROUP BY Product HAVING count(Product) > 2 Product ProdCnt -------------------- Desktop 10 Laptop 5 Mouse 3 Network Card 9 Software 6 A: GROUP BY also helps when you want to generate a report that will average or sum a bunch of data. You can GROUP BY the department ID and then SUM all the sales revenue or AVG the count of sales for each month, as sketched below.
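As a sketch of that last point (table and column names invented for illustration), one GROUP BY can feed several aggregates at once:

SELECT DepartmentID,
       SUM(Revenue) AS TotalRevenue,
       AVG(UnitsSold) AS AvgUnitsSold,
       COUNT(*) AS SaleCount
FROM MonthlySales
GROUP BY DepartmentID

Each output row summarizes one department; adding a HAVING clause would then filter on any of those aggregates.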
{ "language": "en", "url": "https://stackoverflow.com/questions/2702", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "32" }
Q: How can you tell when a user last pressed a key (or moved the mouse)? In a Win32 environment, you can use the GetLastInputInfo API call (see the Microsoft documentation). Basically, this method returns the last tick that corresponds with when the user last provided input, and you have to compare that to the current tick to determine how long ago that was. Xavi23cr has a good example for C# at CodeProject. Any suggestions for other environments? A: As for Linux, I know that Pidgin has to determine idle time to change your status to away after a certain amount of time. You might open the source and see if you can find the code that does what you need it to do. A: You seem to have answered your own question there Nathan ;-) "GetLastInputInfo" is the way to go. One trick is that if your application is running on the desktop, and the user connects to a virtual machine, then GetLastInputInfo will report no activity (since there is no activity on the host machine). This can be different to the behaviour you want, depending on how you wish to apply the user input.
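For reference, a minimal C# sketch of the Win32 approach described above; the struct layout follows the documented LASTINPUTINFO, and error handling is omitted:

using System;
using System.Runtime.InteropServices;

class IdleTime
{
    [StructLayout(LayoutKind.Sequential)]
    struct LASTINPUTINFO
    {
        public uint cbSize;
        public uint dwTime; // tick count of the last input event
    }

    [DllImport("user32.dll")]
    static extern bool GetLastInputInfo(ref LASTINPUTINFO plii);

    static void Main()
    {
        LASTINPUTINFO lii = new LASTINPUTINFO();
        lii.cbSize = (uint)Marshal.SizeOf(typeof(LASTINPUTINFO));
        if (GetLastInputInfo(ref lii))
        {
            // Compare the last-input tick against the current tick count
            uint idleMs = (uint)Environment.TickCount - lii.dwTime;
            Console.WriteLine("User idle for {0} ms", idleMs);
        }
    }
}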
{ "language": "en", "url": "https://stackoverflow.com/questions/2709", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: What sites offer free, quality web site design templates? Let's aggregate a list of free quality web site design templates. There are a million of these sites out there, but most are repetitive and boring. I'll start with freeCSStemplates.org I also think other sites should follow some sort of standards, for example here are freeCSStemplates standards * *Released for FREE under the Creative Commons Attribution 2.5 license *Very lightweight in terms of images *Tables-free (ie. they use no tables for layout purposes) *W3C standards compliant and valid (XHTML Strict) *Provided with public domain photos, generously provided by PDPhoto.org and Wikimedia Commons A: The Open Design Community is a great resource. A: http://www.csszengarden.com/ The images are not Creative Commons, but the CSS is. A: Check out: * *Open Source Web Designs *CSS Remix *Best Web Gallery *CSS Based *CSS Beauty *CSS Genius A: +1 for Zen garden. I like the resources at inobscuro.com A: http://www.opensourcetemplates.org/ has nice designs, just not enough selection.
{ "language": "en", "url": "https://stackoverflow.com/questions/2711", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "26" }
Q: I need to know how much disk space a table is using in SQL Server I think most people know how to do this via the GUI (right click table, properties), but doing this in T-SQL totally rocks. A: CREATE TABLE #tmpSizeChar ( table_name sysname , row_count int, reserved_size varchar(50), data_size varchar(50), index_size varchar(50), unused_size varchar(50)) CREATE TABLE #tmpSizeInt ( table_name sysname , row_count int, reserved_size_KB int, data_size_KB int, index_size_KB int, unused_size_KB int) SET NOCOUNT ON INSERT #tmpSizeChar EXEC sp_msforeachtable 'sp_spaceused ''?''' INSERT INTO #tmpSizeInt ( table_name, row_count, reserved_size_KB, data_size_KB, index_size_KB, unused_size_KB ) SELECT [table_name], row_count, CAST(SUBSTRING(reserved_size, 0, PATINDEX('% %', reserved_size)) AS int)reserved_size, CAST(SUBSTRING(data_size, 0, PATINDEX('% %', data_size)) AS int)data_size, CAST(SUBSTRING(index_size, 0, PATINDEX('% %', index_size)) AS int)index_size, CAST(SUBSTRING(unused_size, 0, PATINDEX('% %', unused_size)) AS int)unused_size FROM #tmpSizeChar /* DROP TABLE #tmpSizeChar DROP TABLE #tmpSizeInt */ SELECT * FROM #tmpSizeInt ORDER BY reserved_size_KB DESC A: Check this out; I know it works in 2005 (Microsoft documentation). Here it is for the pubs DB: select * from pubs.sys.database_files Returns the size and max_size. A: sp_spaceused tableName where tableName is the name of the table you want to know about.
{ "language": "en", "url": "https://stackoverflow.com/questions/2714", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15" }
Q: Shell scripting input redirection oddities Can anyone explain this behavior? Running: #!/bin/sh echo "hello world" | read var1 var2 echo $var1 echo $var2 results in nothing being output, while: #!/bin/sh echo "hello world" > test.file read var1 var2 < test.file echo $var1 echo $var2 produces the expected output: hello world Shouldn't the pipe do in one step what the redirection to test.file did in the second example? I tried the same code with both the dash and bash shells and got the same behavior from both of them. A: read var1 var2 < <(echo "hello world") A: Alright, I figured it out! This is a hard bug to catch, but results from the way pipes are handled by the shell. Every element of a pipeline runs in a separate process. When the read command sets var1 and var2, it sets them in its own subshell, not the parent shell. So when the subshell exits, the values of var1 and var2 are lost. You can, however, try doing var1=$(echo "Hello") echo $var1 which returns the expected answer. Unfortunately this only works for single variables; you can't set many at a time. In order to set multiple variables at a time you must either read into one variable and chop it up into multiple variables or use something like this: set -- $(echo "Hello World") var1="$1" var2="$2" echo $var1 echo $var2 While I admit it's not as elegant as using a pipe, it works. Of course you should keep in mind that read was meant to read from files into variables, so making it read from standard input should be a little harder. A: The post has been properly answered, but I would like to offer an alternative one-liner that perhaps could be of some use. For assigning space-separated values from echo (or stdout for that matter) to shell variables, you could consider using shell arrays: $ var=( $( echo 'hello world' ) ) $ echo ${var[0]} hello $ echo ${var[1]} world In this example var is an array and the contents can be accessed using the construct ${var[index]}, where index is the array index (starts with 0). That way you can have as many parameters as you want assigned to the relevant array index. A: It's because the pipe version is creating a subshell, which reads the variable into its local space, which is then destroyed when the subshell exits. Execute this command $ echo $$;cat | read a 10637 and use pstree -p to look at the running processes; you will see an extra shell hanging off of your main shell. | |-bash(10637)-+-bash(10786) | | `-cat(10785) A: My take on this issue (using Bash): read var1 var2 <<< "hello world" echo $var1 $var2 A: Try: echo "hello world" | (read var1 var2 ; echo $var1 ; echo $var2 ) The problem, as multiple people have stated, is that var1 and var2 are created in a subshell environment that is destroyed when that subshell exits. The above avoids destroying the subshell until the result has been echo'd. Another solution is: result=`echo "hello world"` read var1 var2 <<EOF $result EOF echo $var1 echo $var2 A: A recent addition to bash is the lastpipe option, which allows the last command in a pipeline to run in the current shell, not a subshell, when job control is deactivated. #!/bin/bash set +m # Deactivate job control shopt -s lastpipe echo "hello world" | read var1 var2 echo $var1 echo $var2 will indeed output hello world A: This has already been answered correctly, but the solution has not been stated yet. Use ksh, not bash.
Compare: $ echo 'echo "hello world" | read var1 var2 echo $var1 echo $var2' | bash -s To: $ echo 'echo "hello world" | read var1 var2 echo $var1 echo $var2' | ksh -s hello world ksh is a superior programming shell because of little niceties like this. (bash is the better interactive shell, in my opinion.) A: #!/bin/sh echo "hello world" | read var1 var2 echo $var1 echo $var2 produces no output because pipelines run each of their components inside a subshell. Subshells inherit copies of the parent shell's variables, rather than sharing them. Try this: #!/bin/sh foo="contents of shell variable foo" echo $foo ( echo $foo foo="foo contents modified" echo $foo ) echo $foo The parentheses define a region of code that gets run in a subshell, and $foo retains its original value after being modified inside them. Now try this: #!/bin/sh foo="contents of shell variable foo" echo $foo { echo $foo foo="foo contents modified" echo $foo } echo $foo The braces are purely for grouping, no subshell is created, and the $foo modified inside the braces is the same $foo modified outside them. Now try this: #!/bin/sh echo "hello world" | { read var1 var2 echo $var1 echo $var2 } echo $var1 echo $var2 Inside the braces, the read builtin creates $var1 and $var2 properly and you can see that they get echoed. Outside the braces, they don't exist any more. All the code within the braces has been run in a subshell because it's one component of a pipeline. You can put arbitrary amounts of code between braces, so you can use this piping-into-a-block construction whenever you need to run a block of shell script that parses the output of something else.
{ "language": "en", "url": "https://stackoverflow.com/questions/2732", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "22" }
Q: Setting up an Erlang development environment I'm interested in looking at Erlang and want to follow the path of least resistance in getting up and running. At present, I'm planning on installing Erlang R12B-3 and Erlide (Eclipse plugin). This is largely a Google-result-based decision. Initially this will be on a Windows XP system, though I am likely to reproduce the environment on Ubuntu shortly after. Is there a significantly better choice? Even if it is tied to one platform. Please share your experiences. A: You could also try NetBeans; there's a very nice Erlang module available: ErlyBird * *Install Erlang: sudo aptitude install erlang *Install a recent JDK: sudo aptitude install sun-java6-jdk *Download and install (the smallest) NetBeans edition (e.g. the PHP one): www.netbeans.org/downloads *download the erlang module ErlyBird: sourceforge.net/projects/erlybird *manually install the modules via NetBeans ErlyBird features: * *syntax checking *syntax highlighting *auto-completion *pretty formatter *occurrences mark *brace matching *indentation *code folding *function navigator *go to declaration *project management *Erlang shell console A: I'm using Erlang in a few production systems personally as well as at the office. For client-side testing, documentation and development I use a MacBook Pro as the OS/platform and TextMate with the Erlang bundle as an editor. For server-side development and deployment we use RHEL 4.x/5.x in production and for editing I use VIM. Personally, I've got 4 machines (slices on slicehost.com) running Debian using Erlang for a few websites and jobs. I try to go with the smallest 'engineering environment possible', usually the one with the fewest dependencies from apt or yum. A: To add to the Emacs suggestions, I would also recommend that you look at the advantages of distel when running the Emacs erlang-mode. A: You could also try a virtual-server-on-demand service like this one from CohesiveFT. Select the components you want (e.g. erlangrb12 + yaws + MySQL + erlyweb) and it will build a VM image for you to download or to put onto EC2. Rolling your own locally is quite straightforward too if you follow the instructions in the Pragmatic Programmers book Programming Erlang. A: I've seen answers suggesting TextMate here, so I wanted to add another good Mac OS X tool: the ErlangXCode plugin to XCode. I've been using this since I started with Erlang and really do like it. The download link on his blog is broken; here's the real download: http://github.com/JonGretar/erlangxcode/tree/master A: Just a quick note: the Erlang "compiling" process described in Ciaran's post (described for Ubuntu 6.10, btw) can be easily skipped using the apt command in any Debian-based distro: apt-get install erlang Do not forget to install these packages if you see it fit: erlang-doc-html - Erlang HTML document pages erlang-examples - Some application examples erlang-manpages - Erlang MAN pages erlang-mode - editing mode for Emacs Good luck! A: I highly recommend the Erlang mode shipped with the standard Erlang distribution. I've put together a "works out of the box" Emacs configuration which includes: * *Syntax highlighting & context-sensitive indentation *Dynamic compilation with on-the-fly error highlighting *Integrated Erlang shell *And more.... You can browse my GitHub repo here: http://github.com/kevsmith/hl-emacs A: I like Justin's suggestion, but I'll add to it: this solution is great for learning a language.
If you don't rely on something like code-completion, then it forces you to learn the language better. (If you are working with something with a huge API, like Java or Cocoa, then you'll want the code completion, however!) It's also language-agnostic, and in the case of an interpreted language, particularly one that has an interactive interpreter, you'll probably spend just as much time in the shell/interpreter typing in commands. Even in a large-ish python project, I still work in an editor and 4 or 5 terminal windows. So, the trick is more about getting an editor which works for you. I'm not about to suggest one, as that's heading towards evangelism! A: I just use SciTE. Type something and press F5 to see the results. A: Just wrote a guide on this on my blog, here's the abridged version: Part 1: Download what needs to be downloaded. Download and install the Erlang run-time. Download and install TextPad. Download a .syn file for Erlang and place it in the system folder of TextPad. For me, this folder was C:\Program Files\TextPad 5\system. I'm not quite sure who did this syn file (the site is in another language), but they did a good enough job. Part 2: Set up syntax highlighting. Open up TextPad. Ensure no files are opened. Go to the 'Configure' menu, and select 'Preferences'. In the preferences window, click 'Document Classes'. There should be a list of currently recognized languages. Click the 'New' button (it is right under the list of languages), and type 'Erlang'. Click apply. Click the '+' button next to 'Document Classes'. This should expand the list, and Erlang should now be on it. Click Erlang. You should see a list of file extensions associated with Erlang, click 'New', and type '*.erl'. Now click the '+' button next to 'Erlang' on the left. This should expand a list of several more menus. Click on 'Syntax'. Click the drop down menu and select erlang.syn. If erlang.syn is not there, then the .syn file was not properly placed. Feel free to edit some other syntax options to customize TextPad to your liking. Part 3: Compiling from TextPad. Note: as of 12/05/08 there are severe problems with compiling in TextPad. The Erlang shell somehow ignores new compilation when it is done in TextPad. This is only useful for checking for errors; when you want to actually run the code, compile it in the Erlang shell. In the preferences menu again, click 'tools' on the left. Click the 'Add' button and select 'Program...'. Navigate to the erl5.6.5\erts-5.6.5\bin\ folder and select erlc.exe. Select and single click the new entry in the list to rename it. Click 'Apply'. Now click the '+' button next to Tools on the left. Select erlc, or whatever you have named the new tool (I named mine 'Compile Erlang'). The parameters field needs to read '$File', and the initial folder field should read '$FileDir'. A: I have had good success with Erlide. A: If you use Vim I recommend Vimerl (http://github.com/jimenezrick/vimerl): Features * *Syntax highlighting *Code indenting *Code folding *Code omni completion *Syntax checking with quickfix support *Code skeletons for the OTP behaviours *Uses configuration from Rebar *Pathogen compatible (http://github.com/tpope/vim-pathogen) A: I've only done a small bit of coding in Erlang but I found the most useful method was just to write the code in a text editor and have a terminal open ready to build my code as I need to (this was in Linux, but a similar idea would work in Windows, I'm sure).
Your question didn't mention it, but if you're looking for a good book on Erlang, try this one by O'Reilly. A: From what I've tried (and still plan to try), a good addition to an Erlang dev environment would be a virtual machine running ubuntu/yaws/erlang. Perhaps Erlyweb (the erlang/yaws framework) would be worth checking out too. Ciaran's posts (this would be the first of his "series") about his erlang install are nice, as he details the steps in setting up the server (and other stuff like xmpp with jabberlang). A: Since you're switching to Ubuntu eventually anyways, I highly recommend using erlang-mode for emacs (which comes bundled with the Erlang distribution). It is officially what all the core developers use and what many other developers use because of the many features it offers you. Installing the Erlang distribution itself should be simple :)
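To illustrate the editor-plus-shell workflow most of these answers describe, here is a throwaway module (the file name and contents are just an example):

%% hello.erl
-module(hello).
-export([greet/1]).

greet(Name) ->
    io:format("Hello, ~s!~n", [Name]).

Then, from an Erlang shell started with erl in the same directory:

1> c(hello).
{ok,hello}
2> hello:greet("world").
Hello, world!
ok

That compile-reload loop is the part a good editor setup (erlang-mode, Erlide, Vimerl, etc.) shortens for you.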
{ "language": "en", "url": "https://stackoverflow.com/questions/2742", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "50" }
Q: Data verifications in Getter/Setter or elsewhere? I'm wondering if it's a good idea to make verifications in getters and setters, or elsewhere in the code. This might surprise you, but when it comes to optimizations and speeding up the code, I think you should not make verifications in getters and setters, but in the code where you're updating your files or database. Am I wrong? A: @Terrapin, re: If all you have is a bunch of [simple public set/get] properties ... they might as well be fields Properties have other advantages over fields. They're a more explicit contract, they're serialized, they can be debugged later, they're a nice place for extension through inheritance. The clunkier syntax is an accidental complexity -- .net 3.5 for example overcomes this. A common (and flawed) practice is to start with public fields, and turn them into properties later, on an 'as needed' basis. This breaks your contract with anyone who consumes your class, so it's best to start with properties. A: From the perspective of having the most maintainable code, I think you should do as much validation as you can in the setter of a property. This way you won't be caching or otherwise dealing with invalid data. After all, this is what properties are meant for. If all you have is a bunch of properties like... public string Name { get { return _name; } set { _name = value; } } ... they might as well be fields A: It depends. Generally, code should fail fast. If the value can be set by multiple points in the code and you validate only after retrieving the value, the bug appears to be in the code that does the update. If the setters validate the input, you know what code is trying to set invalid values. A: Well, one of the reasons why classes usually contain private members with public getters/setters is exactly because they can verify data. If you have a Number that can be between 1 and 100, I would definitely put something in the setter that validates that and then maybe throw an exception that is being caught by the code. The reason is simple: if you don't do it in the setter, you have to remember that 1 to 100 limitation every time you set it, which leads to duplicated code, or when you forget it, it leads to an invalid state. As for performance, I'm with Knuth here: "We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil." A: Validation should be captured separately from getters or setters in a validation method. That way if the validation needs to be reused across multiple components, it is available. When the setter is called, such a validation service should be utilized to sanitize input into the object. That way you know all information stored in an object is valid at all times. You don't need any kind of validation for the getter, because information on the object is already trusted to be valid. Don't save your validation until a database update!! It is better to fail fast. A: You might wanna check out Domain Driven Design, by Eric Evans. DDD has this notion of a Specification: ... explicit predicate-like VALUE OBJECTS for specialized purposes. A SPECIFICATION is a predicate that determines if an object does or does not satisfy some criteria. I think failing fast is one thing, the other is where to keep the logic for validation. The domain is the right place to keep the logic and I think a Specification Object or a validate method on your Domain objects would be a good place.
A: I like to implement IDataErrorInfo and put my validation logic in its Error and this[columnName] properties. That way if you want to check programmatically whether there's an error you can simply test either of those properties in code, or you can hand the validation off to the data binding in Web Forms, Windows Forms or WPF. WPF's "ValidatesOnDataError" Binding property makes this particularly easy. A: I try to never let my objects enter an invalid state, so setters definitely would have validation as well as any methods that change state. This way, I never have to worry that the object I'm dealing with is invalid. If you keep your methods as validation boundaries, then you never have to worry about validation frameworks and IsValid() method calls sprinkled all over the place.
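Tying these answers together, a minimal C# sketch of a setter that enforces the 1-to-100 rule mentioned earlier; the class and property names are invented for illustration:

public class Widget
{
    private int _number;

    public int Number
    {
        get { return _number; }
        set
        {
            // Fail fast: reject invalid state at the point of assignment
            if (value < 1 || value > 100)
                throw new ArgumentOutOfRangeException(
                    "value", "Number must be between 1 and 100.");
            _number = value;
        }
    }
}

Because the object can never hold an out-of-range value, no caller ever needs to re-check it on read.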
{ "language": "en", "url": "https://stackoverflow.com/questions/2750", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: Lightweight IDE for Linux Even though I have a robust and fast computer (Pentium Dual Core 2.0 with 2GB RAM), I'm always searching for lightweight software to have on it, so it runs fast even when many apps are up and running simultaneously. Over the last few weeks I've been migrating gradually to Linux and want to install a free lightweight yet useful IDE to program in C++ and PHP. Syntax highlighting and code completion tips are must-haves. A: I bounce about between Mac, Windows and Ubuntu and while Emacs used to be my editor of choice, I'm finding that in my old age I prefer something GUI-based (using the command line for the shell is still fine by me). My preferred editor is Komodo Edit, which has the advantages of: * *Being free (as in beer) *Available for Mac, Windows and Linux *Syntax highlighting for a boatload of languages, including C++ and PHP (I'm using it for Ruby, Python and PHP myself) *Code completion, even for classes I defined myself *Ability to "remote save" via FTP, SFTP or SCP *Support for organizing your files into projects *Tabs and other interface niceties I'm not sure how lightweight it is, but it certainly feels snappier than Eclipse! A: How has no one mentioned Code::Blocks! Not only is it a fantastic Open Source IDE for C++, but it's fully cross-platform, so if you need to work on a Windows or Mac box for a bit, you can use the exact same IDE, and exact same project files, to do so! Which is great for cross-compiling! A: If you are taking your time switching to Linux, I'd switch to emacs or vim at some point as well. There will always be a resource or a document describing exactly the problem you are having with either of them, and generally a solution is just a few more clicks down the road. Emacs may be easier at the beginning because of modeless editing... but don't let modal editing scare you away from Vim. The key with either Vim or Emacs is knowing it could probably take you the better part of the day just to figure out what you want them to do, let alone how to get them to do that. Once they work for you though, you'll see why mostly everyone is in one of two camps. General hints: * *Setting up a Makefile for your project is almost always worth it. *Using cscope and/or ctags will make your life easier. Vim hints: * *:make *:cn, :cp *OmniCompletion *using BufRead autoloads to set what :make should do depending on file type Emacs hints: * *ecb is fun *M-x dired *M-. M-, M-* M-x complete-tag for etags *M-x compile *(add-hook 'mylanguage-mode-hook '(lambda () (setq my-customizations t))) And check out other people's customizations for examples of what other people do. A: gedit * *Syntax highlighting *Fast, lightweight *Tabs *GUI A: emacs has been used by Linux programmers for decades. It features syntax highlighting, it's fast, and there are a million tutorials out there you can find. A: Console editors, such as emacs and vi, are more lightweight than their GUI counterparts, and (at least those two are) just as capable as any other IDE (syntax highlighting, mouse support, ctags, autocompletion... all the way to gdb integration). The learning curve might be somewhat steep, and you might have to do some customization, but it's all worth it. Also, vi is present on every installation of a unix-like operating system. Amongst X applications, there are * *gedit, which comes with GNOME and has many of these IDE features (see, for example, this blog entry), *Geany - really fast, depends only on GTK, and with even more features including code folding.
These would be lightweight IDEs, as opposed to heavyweights like Anjuta, KDevelop, Eclipse or NetBeans. A: Vim (or Emacs, varying on religion) will always be my first answer to this question, over any point-and-click IDE. As they write in The Pragmatic Programmer: Choose an editor, know it thoroughly, and use it for all editing tasks. [...] The editor will be an extension of your hand; the keys will sing as they slice their way through text and thought. That's our goal. Make sure that the editor you choose is available on all platforms you use. Vim is configurable, extensible, programmable and can be turned into an IDE with all the regular features. Lately I've been using Eclim to "bring Eclipse functionality to the Vim editor" (projects, better Java support etc.), making it a complete platform with advanced IDE features. A: Joey, I believe anything is lighter than Eclipse! :o) A: I'm not sure exactly what you mean by 'lightweight,' but here are a few popular IDEs for Linux: Anjuta for Gtk/Gnome KDevelop or Quanta for KDE CodeBlocks runs on Windows/Mac/Linux and is written in C++ None of these are Java, so they automatically have an edge over Eclipse for performance ;) Another option is MonoDevelop, which is geared towards .Net/Gtk# programming but also includes C++ support. A: This is a really religious question - just choose the one you like. Every editor has its pros/cons and you need to decide which set suits you best. There are many IDEs out there that can use various editors, like Pida.
{ "language": "en", "url": "https://stackoverflow.com/questions/2756", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: Is there a keyboard shortcut to view all open documents in Visual Studio 2008 I am trying to learn the keyboard shortcuts in Visual Studio in order to be more productive. So I downloaded a document showing many of the default keybindings in Visual Basic when using the VS 2008 IDE from Microsoft. When I tried what they say is the keyboard shortcut to view all open documents (CTRL + ALT + DOWN ARROW), I got a completely unexpected result on my XP machine; my entire screen display was flipped upside down! Was this a prank by someone at Microsoft? I can't imagine what practical value this flipping of the screen would have. Does anyone know what the correct keyboard shortcut is to view all open documents in VS 2008? Oh and if you try the above shortcut and it flips your display the way it did mine, do a CTRL + ALT + UP ARROW to switch it back. A: This is a conflict between your graphics driver and Visual Studio. Go to your driver settings page (Control panel) and disable the display rotation shortcuts. With this conflict removed, the shortcut will work in Visual Studio.
{ "language": "en", "url": "https://stackoverflow.com/questions/2765", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: Recommended add-ons/plugins for Microsoft Visual Studio Can anyone recommend any good add-ons or plugins for Microsoft Visual Studio? Freebies are preferred, but if it is worth the cost then that's fine. A: Not free, but ReSharper is definitely one recommendation. A: Clipboard Manager Maintains your clipboard data through removal of lines, a few other nice items but that one alone makes me happy. Regionerate While some have problems with regions, I think if you use them, this tool is for you. Automatically region'izes your code into appropriate region blocks. Fully configurable for custom items etc. A: VSCommands 2010 from the website: Latest version supports: * *Manage Reference Paths *Prevent accidental Drag & Drop in Solution Explorer *Prevent accidental linked file delete *Apply Fix (automatically fix build errors/warnings) *Open PowerShell *Show Assembly Details *Create Code Contract *Cancel Build when first project fails *Debug Output - custom formatting *Build Output - custom formatting *Search Output - custom formatting *Configure WPF Rendering *Configure Fusion Logs *Configure IE for debugging *Locate Source File *Thumbnails in IDE Navigator *Extended support for xaml, aspx, css, js and html files *Disable Ctrl + Mouse Wheel Zoom *Zoom to Mouse Pointer *Configurability *Attach to local IIS *Copy Full Path *Build Startup Projects *Open Command Prompt *Search Online *Build Statistics *Group linked items *Copy/Paste Reference *Copy/Paste as Link *Collapse Solution *Group items directly from user interface (DependantUpon) *Open In Expression Blend *Locate in Solution *Edit Project File *Edit Solution File *Show All Files and others, so try it now! A: Qt Cross-Platform Application Framework (http://trolltech.com/products/qt/) Qt is a cross-platform application framework for desktop and embedded development. It includes an intuitive API and a rich C++ class library, integrated tools for GUI development and internationalization, and support for Java™ and C++ development. They have a plug-in for Visual Studio that costs a bit of money, but it is worth every penny. A: I've been using Visual Assist X for nearly two years now, and I find it so useful I can honestly say that if my employer didn't provide it, I'd have to pay for it myself. I also use Cool Commands and SlickEdit (the free version), whose File Explorer and Command Spy tools are quite useful. A: +1 for Visual Assist. And I will add VLH (Visual Local History), which provides a kind of local source control system. Every time you save a file, the plugin adds a copy in the local repository. A: ViEmu vi/vim support inside VS A: Whole Tomato's Visual Assist X. I absolutely swear by it. I would like to see a better plug-in for Lint than Visual Lint by Riverblade, but since that will eventually be moved onto the build server I don't mind running it every couple of days manually. A: I found this site called Visual Studio Gallery - it has a lot of Visual Studio add-ins. I'm browsing it right now and I recommend everyone to visit it. A: Consolas font Free font from MS designed for reading code. A: Try MetalScroll!! It's better than RockScroll A: Sonic File Finder for when you have loads of files in your solutions and searching for them in the solution explorer becomes a pain in the wrist. You might also find DPack interesting. Several tools and enhancements rolled into one neat package. A: MZTools is great too. A: +1 for CodeRush & Refactor Pro. I've been using CodeRush since its Delphi incarnations, and it's utterly wonderful.
The mantra of "Code at the speed of thought" is very close to reality ;) A: * *Microsoft StyleCop provides code style checking for C#, we use it all the time and love it (free) *Axialis IconWorkshop has a Visual Studio add-in which is now free for VS2008 users. *Resharper Yes, another vote, because I can't upvote everyone who suggests it :) *Workspace Whiz for C++, I used to live by Workspace Whiz but haven't used it in VS2008 as I hadn't realised there was an update. Will have to give it a try again. A: If you're doing C++ coding, hands down Visual Assist. A: I love CopySourceAsToHTML as a cool little add-in. It's great if you want to copy code blocks for blogging and the like while maintaining your syntax formatting. I think this is still the URL; you have to do some manual work to set it up with '08. http://www.jtleigh.com/people/colin/software/CopySourceAsHtml/ A: PowerCommands is a Microsoft-created plugin that offers a variety of new features that one would think probably should have been in Visual Studio in the first place. These include * *Copying/Pasting project references! *"Open Containing Folder" to jump straight to the hard-drive location of a file or project *Automatic reorganizing and sorting of using statements *"Open Command Prompt Here" to open a command prompt in any of your project folders. *Collapse Projects A: I'm always amazed that more people don't know about/use NDepend - it shows all dependencies at every level of your code, and will even draw pretty box and arrow pictures showing how confused your architecture really is :) Together with TestDriven.Net, I can't imagine working without it any more. Free/cheap. A: For the laptop-bound or for those with vi/vim key bindings burned into the brain I would recommend ViEmu. If you have not tried editing with vi key bindings, here is why you may want to try: "Why, oh WHY, do those #?@! nutheads use vi?" A: AtomineerUtils Pro Documentation - automatic DocXml/Doxygen/JavaDoc/Qt doc-comment generation/updating (similar to GhostDoc, but more powerful & flexible, and supports C#, C++, C++/CLI, C, Java and Visual Basic code). The style of the generated comments is very configurable, and automatic re-formatting (such as whitespace control and word wrapping) can be optionally applied to keep the comments as readable as possible. It also has many helpers to allow users to read and convert most legacy doc-comments into any of the above formats. (I'm the author, but I believe the above is an accurate and objective description. This add-in was free when this answer was first added, but to cover the costs of hosting, supporting, and continuing to improve the addin in monthly releases, it is now $10 with a 30-day free trial) A: RockScroll is awesome, and free. Addendum: As @Andrei points out, MetalScroll is a better alternative. It's Open Source, and corrects some annoying things about RS. A: I'm a big fan of CodeRush and Refactor! Pro by DevExpress. I've been using them for a number of years, and without a doubt they make me a faster developer. Also, both are built on a free framework called DXCore that allows you to develop your own plug-ins for Visual Studio, and the sky is the limit there... A: A lot of the mentioned add-ins are used by me on a regular basis. Here are just a few that I value, too: * *Auto Versioning Controlled Build *Resource Refactoring Tool *Smart Paster All three are free and highly recommended (by me). A: I 2nd VisualAssist, been using it since V6, can't live without it...
I see no one has mentioned CoolCommands: Link Great set of time savers... A: definitely +1 for VisualAssistX (cannot work without it anymore & it's worth all the money) and +1 for VisualSVN A: Visual Assist: you cannot live without it! A: We've covered this on this question: What is your favorite Visual Studio add-in/setting? A: I found Code Rocket to be very useful - http://www.getcoderocket.com/ From their website: "Code Rocket is an innovative tool that reveals the inner workings of C#, ... and C/C++ code, for Visual Studio... It makes documentation a seamlessly integrated part of the software development process, plugging directly into your development IDE with minimal overheads, delivering powerful benefits from day one." A: JustDecompile from Telerik, now that Reflector is no longer free. It's a necessity when digging through supplied libraries. A: * *Resharper *Resharper MbUnit Test Runner Add-On *SQL Prompt for Database Projects (works inside your SQL Management Studio as well) *Ankh SVN 2.0+ for free SVN support (v1.x pales in comparison) *TeamCity plug-in to monitor your builds, personal builds, and bug tracking A: I find Ghost Doc to be very useful. GhostDoc is a free add-in for Visual Studio that automatically generates XML documentation comments for C#. Either by using existing documentation inherited from base classes or implemented interfaces, or by deducing comments from name and type of e.g. methods, properties or parameters. A: If you use SVN for source control, definitely get VisualSVN. It enables TortoiseSVN interactions from within the Visual Studio IDE. I also echo the Resharper comment. Retail price is a little steep, but if you're a student or otherwise educationally affiliated, it's actually pretty cheap. A: +1 Visual Assist. It's unfortunate that you need a plugin to get really good IntelliSense, but it's definitely worth paying for. A: SmartPaster - (FREE) Copy/Paste code generator for strings AnkhSvn - (FREE) SVN Source Control Integration for VS.NET VisualSVN Server - (FREE) Source Control ReSharper - IDE enhancement that helps with refactoring and productivity CodeRush - Code gen macros on steroids Refactor - Code refactoring aid CodeMaid (FREE) - Code cleanup, organization and complexity analysis CodeSmith - Code Generator GhostDoc - (FREE) Simple code commenting tool DXCore (FREE) and its many awesome plugins: DxCore Community Plugins, CR_Documentor, CodeStyleEnforcer, RedGreen TestDriven.Net - (FREE/PAY) Unit Testing Aid Reflector - (PAY) Feature rich .Net Disassembler Reflector AddIn's Web Deployment Projects - Provides additional functionality to build and deploy Web sites and Web applications (source). StudioTools - (FREE) Navigation assistant, code metrics tool, incremental search, file explorer in visual studio and tear off editor windows. Moved from old site (archive.org) to new site and discontinued. A: LinqPad is great for testing LINQ to objects/xml/sql. Free download. A: What about IncrediBuild? This is a nice distributed build system with Visual Studio integration. A: I like ReSharper, too! It's affordable if you're a student or otherwise connected to a university. For interaction with SVN I prefer AnkhSVN... and of course for connecting to Team Foundation Server there's the Visual Studio Team Explorer. A: Dispatch for FTP is what Copy Web Site should have been. This just came out but I like it a lot: Mindscape File Explorer VisualSVN is excellent for SVN integration.
Much better than Ankh (have not tried Ankh 2+ though) SonicFileFinder for looking up files or classes quickly. Supports searching just the upper-case parts of a camel-cased type name Web Deployment Projects by Microsoft for precompiling web site projects A: I use the FogBugz plug-in a lot, but you need to use FogBugz first!!! A: I just found this rather large list of add-ins: http://geekswithblogs.net/brians/archive/2008/05/12/122087.aspx A: +1 for VisualSVN being better than AnkhSVN, having tried both, and +1 for the FogBugz Add-in. A: Ghost Docs GhostDoc is a free add-in for Visual Studio that automatically generates XML documentation comments for C#. Either by using existing documentation inherited from base classes or implemented interfaces, or by deducing comments from name and type of e.g. methods, properties or parameters. A: KingsTools is also a nice collection of macros containing: * *Run Doxygen *Insert Doxygen comments *Build Solution stats *Dependency Graph *Inheritance Graph *Swap .h<->.cpp *Colorize *} End of *region/#endregion for c++ *Search the web A: Guidance Explorer Guidance packages integrate into VS as snippets, projects, and project templates. They provide a way to collect and reuse patterns, code, and How To answers. You can create guidance for your team and you can download the guidance packages coming out of the Patterns and Practices group at MS. A: Definitely Resharper. A: Not really an addon inside VS, but one every VS user needs: Code Preview Handler Provides a preview handler with syntax highlighting for source files. The handler works in the Explorer preview pane and in the preview tab for attachments in Outlook. A: Source Monitor code analysis tool Direct download link A: Resharper. It's the best productivity tool for any software engineer! TestDriven.Net is pretty good too, and GhostDoc. A: VLINQ LINQPad is essential, but for quick stuff inside VS, VLINQ is great. A: Source Code Outliner Nice alternate view of your source files. It's the outliner from the code pane, but without all the code getting in the way of the structure. A: * *Refactor! Pro - Commercial. Free version available. *GhostDoc - Free *Comment Reflower - Free *Versioning Controlled Build - Free A: If vi/vim editing is your thang: ViEmu for Visual Studio If you want color-coded control-flow syntax-highlighting and graphical outlines: Codekana I'm the developer of these commercial tools. A: Here is my list: * *Microsoft StyleCop (code analysis) *JetBrains dotTrace (application profiling) *Typemock Isolator (mocking in unit tests) *Roland Weigelt's GhostDoc (code documentation) A: For C# development I use: * *ReSharper, heavily customized and with a couple dozen custom actions I wrote (not to mention weird but wonderful Live Templates) *GhostDoc - very useful for postprocessing of generated code *Source Code Outliner *P/factor (a set of internally developed code gen tools for VS) - see example here *CodeGenUtils - another internal dev for code generation, available on CodePlex *SharpWizard - a VS add-in for rapid prototyping. Supports advanced generation of interface support, operators, patterns, metadata. *Dependency Analyser - a really nifty tool (another internal dev.) for identifying dependencies between CLR properties. Useful for autogenerating change notifications based on dependency graphs.
In addition to these, I also have a couple of DSL graphical designers for the particularly difficult scenarios - for example, I have a DSL for complex multithreaded operations that are implemented using Pulse & Wait. A: I don't fancy the Visual Studio bookmarks, so I use DPack to get the same kind of bookmarks the Delphi IDE provides. http://www.usysware.com/dpack/ A: My favorite would be the one I work on - Goanna. :) http://www.redlizards.com/ C/C++ static analysis - it helps find bugs. A: Here are a few I didn't find (or spot) mentioned: * *ASPXEditHelper (a must-have for ASP.NET devs) *MouseGestures *CodeKeep *KNOCKS *Git Extensions Someone mentioned SQL Prompt so I'll add SQL Assistant (similar price, but does a lot more) Very few people mentioned DPack, which is free and absolutely awesome. Also, really get ReSharper or something similar (it will pay many times over). Bare VS just does not "compare" ;-) Enjoy your coding! A: Build Version Increment (GPL) gives you (nearly) everything you need for controlling the version of your assemblies. Some features (copied from the site): * *Different auto increment styles can be set per major, minor, build or revision number. *Supports C#, VB.NET and C++.NET projects. *Not required to be installed by all project members. Configuration is shared via properties in the solution and project files. Developers who don't have the addin won't feel a thing. *Automatically checks out required files if under source control. *Can be configured per solution and/or per project. *Can be configured to update only on certain configuration builds (debug, release, any or custom) *Can update assembly attributes in an external source file instead of the default AssemblyInfo. A: * *Resharper (Agree it sucks you have to pay extra to get this, but it's well worth the money) *GhostDoc (Takes away any excuse for not having comments in your code) *PowerCommands for VS 2008 (Forgot I even had this installed because it just adds the little things that should have been there all along) A: In addition to the refactoring and source control tools listed here, AQTime is a great Windows profiler. It can run as a plugin or stand-alone and it works with .NET and native code. A: XPathmania is a good little tool for writing and testing XPath queries. A: A better Addin Manager A: Project MRU editor A: CodePlotter and CodePlotter Remixed A: Code Style Enforcer Lets you define a .NET code style (with some degree of flexibility) and underlines violations. Has context-menu options to change the code to match the style. Requires DXCore, which is linked from the Code Style Enforcer page. Both are free. A: PInvoke.NET addon Menu to search for pre-written P/Invoke code. Much easier than working out the marshalling code yourself, especially when there are nasty unions and alignment requirements. A: If you are looking for a better code editor, vim comes with VisVim, a plugin to replace the VS code editor with vim. A: VS Command Shell Command shell in the Output pane. Far from perfect, but often very, very useful. Faster and easier to get to than a separate cmd and has easier copy/paste support. A: Spell Checker for comments is a godsend. GhostDoc is great for making well-documented APIs. A: TracExplorer is cool for integrating Trac with VS. A: It's not a Visual Studio add-in, but it is a tool that I couldn't use Visual Studio without...
ClipX - it works with the normal clipboard, but saves the entries to a searchable list; you can copy and paste as usual, but you can hit CTRL+SHIFT+V and the list pops up. It works with images, text, etc. It even persists after you reboot your computer. A: I know this is not a VS add-in but an SSMS one; anyway, it could be useful for anyone working with MSSQL. In case you want to see more like this one, check this post. It's actually from the SSMS Tools Pack creator. A: While VisualSVN costs $50 or so, I strongly prefer it over AnkhSVN (which I last tried about a year ago - it may have improved since). It's one of the easiest to sell to your boss if funding is an issue. (Thankfully we don't have to scratch and claw to get good tools where I work.) A: one that I wrote http://www.codeplex.com/lazy A: DevExtra - but I'm biased cause I wrote it :) http://www.toptensoftware.com/devextra/ It's a bit old now (has its origins in VC6) and mostly oriented towards C++ developers, but it's free and I still use it every day. A: Quick Open File is a plugin that, coming from an Eclipse background, I can't live without: http://kutny.net/vsopen/ No more digging through the solution explorer trying to find files
{ "language": "en", "url": "https://stackoverflow.com/questions/2767", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "211" }
Q: Global Exception Handling for winforms control When working on ASP.NET 1.1 projects I always used the Global.asax to catch all errors. I'm looking for a similar way to catch all exceptions in a Windows Forms user control, which ends up being a hosted IE control. What is the proper way to go about doing something like this? A: If you're using VB.NET, you can tap into the very convenient ApplicationEvents.vb. This file comes for free with a VB.NET WinForms project and contains a method for handling unhandled exceptions. To get to this nifty file, it's "Project Properties >> Application >> Application Events" If you're not using VB.NET, then yeah, it's handling Application.ThreadException. A: To handle exceptions globally: Windows application - the System.Windows.Forms.Application.ThreadException event, generally used in the Main method. Refer to MSDN Thread Exception. ASP.NET - the System.Web.HttpApplication.Error event, normally used in the Global.asax file. Refer to MSDN Global.asax Global Handlers. Console application - the System.AppDomain.UnhandledException event, generally used in the Main method. Refer to MSDN UnhandledException. A: You need to handle the System.Windows.Forms.Application.ThreadException event for Windows Forms. This article really helped me: http://bytes.com/forum/thread236199.html. A: Code from MSDN: http://msdn.microsoft.com/en-us/library/system.appdomain.unhandledexception.aspx?cs-save-lang=1&cs-lang=vb#code-snippet-2 Sub Main() Dim currentDomain As AppDomain = AppDomain.CurrentDomain AddHandler currentDomain.UnhandledException, AddressOf MyHandler Try Throw New Exception("1") Catch e As Exception Console.WriteLine("Catch clause caught : " + e.Message) Console.WriteLine() End Try Throw New Exception("2") End Sub Sub MyHandler(sender As Object, args As UnhandledExceptionEventArgs) Dim e As Exception = DirectCast(args.ExceptionObject, Exception) Console.WriteLine("MyHandler caught : " + e.Message) Console.WriteLine("Runtime terminating: {0}", args.IsTerminating) End Sub A: Currently in my winforms app I have handlers for Application.ThreadException, as above, but also AppDomain.CurrentDomain.UnhandledException. Most exceptions arrive via the ThreadException handler, but the AppDomain has also caught a few in my experience.
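Putting those pieces together for C#, a minimal sketch that wires up both handlers before the message loop starts; the MessageBox bodies are placeholders for whatever logging you actually want:

using System;
using System.Threading;
using System.Windows.Forms;

static class Program
{
    [STAThread]
    static void Main()
    {
        // Exceptions thrown on the UI thread
        Application.ThreadException +=
            delegate(object sender, ThreadExceptionEventArgs e)
            {
                MessageBox.Show("UI thread error: " + e.Exception.Message);
            };

        // Exceptions from all other threads (the process may still terminate)
        AppDomain.CurrentDomain.UnhandledException +=
            delegate(object sender, UnhandledExceptionEventArgs e)
            {
                MessageBox.Show("Unhandled: " + ((Exception)e.ExceptionObject).Message);
            };

        Application.Run(new Form());
    }
}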
{ "language": "en", "url": "https://stackoverflow.com/questions/2770", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "34" }
Q: Can't get a Console to VMs I've followed this otherwise excellent tutorial on getting Xen working with Ubuntu but am not able to get a console into my virtual machine (domU). I've got the extra = '2 console=xvc0' line in my /etc/xen/hostname_here.cfg file like they say, but am not able to get a console on it. If I statically assign an IP to the VM I can SSH to it, but for now I need to be able to use DHCP to give it an address (and since that's what I'm trying to debug, there's the problem). I know I've got a free DHCP address (although I'm getting more at the moment), so I don't think that's the problem. I've looked on Google and the Xen forums to no avail as well. Any ideas? A: I had followed a different tutorial on setting up Xen on Ubuntu before 8.04, but have now upgraded to 8.04. I used the extra line in my cfg as follows: extra = ' TERM=xterm xencons=tty console=tty1' It allows me to "xm console hostname" from dom0. I think this was from a problem with the Xen setup in the version prior to 8.04 (I'm not sure which version that was). I'm not sure if the same change is necessary in 8.04, as mine is an upgrade and I didn't change any of my domU configs after the upgrade.
{ "language": "en", "url": "https://stackoverflow.com/questions/2773", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: How to remove the time portion of a datetime value (SQL Server)? Here's what I use: SELECT CAST(FLOOR(CAST(getdate() as FLOAT)) as DATETIME) I'm thinking there may be a better and more elegant way. Requirements: * *It has to be as fast as possible (the less casting, the better). *The final result has to be a datetime type, not a string. A: Please try: SELECT CONVERT(VARCHAR(10), [YOUR COLUMN NAME], 105) FROM [YOURTABLENAME] (note that this returns a string rather than a datetime, so it does not meet the requirements above). A: SQL Server 2008 has a new date data type and this simplifies this problem to: SELECT CAST(CAST(GETDATE() AS date) AS datetime) A: Itzik Ben-Gan in DATETIME Calculations, Part 1 (SQL Server Magazine, February 2007) shows three methods of performing such a conversion (slowest to fastest; the difference between the second and third method is small):

SELECT CAST(CONVERT(char(8), GETDATE(), 112) AS datetime)
SELECT DATEADD(day, DATEDIFF(day, 0, GETDATE()), 0)
SELECT CAST(CAST(GETDATE() - 0.50000004 AS int) AS datetime)

Your technique (casting to float) is suggested by a reader in the April issue of the magazine. According to him, it has performance comparable to that of the second technique presented above. A: Your CAST-FLOOR-CAST already seems to be the optimum way, at least on MS SQL Server 2005. Some other solutions I've seen have a string conversion, like Select Convert(varchar(11), getdate(), 101), in them, which is slower by a factor of 10. A: SQL Server 2008 and up
In SQL Server 2008 and up, of course the fastest way is Convert(date, @date). This can be cast back to a datetime or datetime2 if necessary. What Is Really Best In SQL Server 2005 and Older?
I've seen inconsistent claims about what's fastest for truncating the time from a date in SQL Server, and some people even said they did testing, but my experience has been different. So let's do some more stringent testing and let everyone have the script, so if I make any mistakes people can correct me. Float Conversions Are Not Accurate
First, I would stay away from converting datetime to float, because it does not convert correctly. You may get away with doing the time-removal thing accurately, but I think it's a bad idea to use it because it implicitly communicates to developers that this is a safe operation and it is not. Take a look:

declare @d datetime;
set @d = '2010-09-12 00:00:00.003';
select Convert(datetime, Convert(float, @d));
-- result: 2010-09-12 00:00:00.000 -- oops

This is not something we should be teaching people in our code or in our examples online. Also, it is not even the fastest way! Proof – Performance Testing
If you want to perform some tests yourself to see how the different methods really do stack up, then you'll need this setup script to run the tests farther down:

create table AllDay (Tm datetime NOT NULL CONSTRAINT PK_AllDay PRIMARY KEY CLUSTERED);
declare @d datetime;
set @d = DateDiff(Day, 0, GetDate());
insert AllDay select @d;
while @@ROWCOUNT != 0
   insert AllDay
   select * from (
      select Tm = DateAdd(ms, (select Max(DateDiff(ms, @d, Tm)) from AllDay) + 3, Tm)
      from AllDay
   ) X
   where Tm < DateAdd(Day, 1, @d);
exec sp_spaceused AllDay; -- 25,920,000 rows

Please note that this creates a 427.57 MB table in your database and will take something like 15-30 minutes to run. If your database is small and set to 10% growth, it will take longer than if you size it big enough first. Now for the actual performance testing script. Please note that it's purposeful to not return rows back to the client, as this is crazy expensive on 26 million rows and would hide the performance differences between the methods.
Performance Results

set statistics time on;
-- (All queries are the same on io: logical reads 54712)
GO
declare @dd date, @d datetime, @di int, @df float, @dv varchar(10);

-- Round trip back to datetime
select @d = CONVERT(date, Tm) from AllDay; -- CPU time = 21234 ms, elapsed time = 22301 ms.
select @d = CAST(Tm - 0.50000004 AS int) from AllDay; -- CPU = 23031 ms, elapsed = 24091 ms.
select @d = DATEDIFF(DAY, 0, Tm) from AllDay; -- CPU = 23782 ms, elapsed = 24818 ms.
select @d = FLOOR(CAST(Tm as float)) from AllDay; -- CPU = 36891 ms, elapsed = 38414 ms.
select @d = CONVERT(VARCHAR(8), Tm, 112) from AllDay; -- CPU = 102984 ms, elapsed = 109897 ms.
select @d = CONVERT(CHAR(8), Tm, 112) from AllDay; -- CPU = 103390 ms, elapsed = 108236 ms.
select @d = CONVERT(VARCHAR(10), Tm, 101) from AllDay; -- CPU = 123375 ms, elapsed = 135179 ms.

-- Only to another type but not back
select @dd = Tm from AllDay; -- CPU time = 19891 ms, elapsed time = 20937 ms.
select @di = CAST(Tm - 0.50000004 AS int) from AllDay; -- CPU = 21453 ms, elapsed = 23079 ms.
select @di = DATEDIFF(DAY, 0, Tm) from AllDay; -- CPU = 23218 ms, elapsed = 24700 ms
select @df = FLOOR(CAST(Tm as float)) from AllDay; -- CPU = 29312 ms, elapsed = 31101 ms.
select @dv = CONVERT(VARCHAR(8), Tm, 112) from AllDay; -- CPU = 64016 ms, elapsed = 67815 ms.
select @dv = CONVERT(CHAR(8), Tm, 112) from AllDay; -- CPU = 64297 ms, elapsed = 67987 ms.
select @dv = CONVERT(VARCHAR(10), Tm, 101) from AllDay; -- CPU = 65609 ms, elapsed = 68173 ms.
GO
set statistics time off;

Some Rambling Analysis
Some notes about this. First of all, if just performing a GROUP BY or a comparison, there's no need to convert back to datetime. So you can save some CPU by avoiding that, unless you need the final value for display purposes. You can even GROUP BY the unconverted value and put the conversion only in the SELECT clause:

select Convert(datetime, DateDiff(dd, 0, Tm))
from (select '2010-09-12 00:00:00.003') X (Tm)
group by DateDiff(dd, 0, Tm)

Also, see how the numeric conversions only take slightly more time to convert back to datetime, but the varchar conversion almost doubles? This reveals the portion of the CPU that is devoted to date calculation in the queries. There are parts of the CPU usage that don't involve date calculation, and this appears to be something close to 19875 ms in the above queries. Then the conversion takes some additional amount, so if there are two conversions, that amount is used up approximately twice. More examination reveals that compared to Convert(, 112), the Convert(, 101) query has some additional CPU expense (since it uses a longer varchar?), because the second conversion back to date doesn't cost as much as the initial conversion to varchar, but with Convert(, 112) it is closer to the same 20000 ms CPU base cost. Here are those calculations on the CPU time that I used for the above analysis:

method        round    single   base
-----------   ------   ------   -----
date          21324    19891    18458
int           23031    21453    19875
datediff      23782    23218    22654
float         36891    29312    21733
varchar-112   102984   64016    25048
varchar-101   123375   65609    7843

* *round is the CPU time for a round trip back to datetime. *single is the CPU time for a single conversion to the alternate data type (the one that has the side effect of removing the time portion). *base is the calculation of subtracting from single the difference between the two invocations: single - (round - single).
It's a ballpark figure that assumes the conversion to and from that data type and datetime is approximately the same in either direction. It appears this assumption is not perfect, but it is close, because the values are all close to 20000 ms with only one exception. One more interesting thing is that the base cost is nearly equal to the single Convert(date) method (which has to be almost 0 cost, as the server can internally extract the integer day portion right out of the first four bytes of the datetime data type). Conclusion
So what it looks like is that the single-direction varchar conversion method takes about 1.8 μs and the single-direction DateDiff method takes about 0.18 μs. I'm basing this on the most conservative "base CPU" time in my testing of 18458 ms total for 25,920,000 rows, so (23218 ms - 18458 ms) / 25,920,000 rows ≈ 0.18 μs. The apparent 10x improvement seems like a lot, but it is frankly pretty small until you are dealing with hundreds of thousands of rows (617k rows = 1 second of savings). Even given this small absolute improvement, in my opinion, the DateAdd method wins because it is the best combination of performance and clarity. The answer that requires a "magic number" of 0.50000004 is going to bite someone some day (five zeroes or six???), plus it's harder to understand. Additional Notes
When I get some time I'm going to change 0.50000004 to '12:00:00.003' and see how it does. It is converted to the same datetime value and I find it much easier to remember. For those interested, the above tests were run on a server where @@Version returns the following: Microsoft SQL Server 2008 (RTM) - 10.0.1600.22 (Intel X86) Jul 9 2008 14:43:34 Copyright (c) 1988-2008 Microsoft Corporation Standard Edition on Windows NT 5.2 (Build 3790: Service Pack 2) A: SQL2005: I recommend cast instead of dateadd. For example, select cast(DATEDIFF(DAY, 0, datetimefield) as datetime) was on average about 10% faster on my dataset than select DATEADD(DAY, DATEDIFF(DAY, 0, datetimefield), 0) (and casting into smalldatetime was faster still).
{ "language": "en", "url": "https://stackoverflow.com/questions/2775", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "86" }
Q: Converting ARGB to RGB with alpha blending Let's say that we have an ARGB color: Color argb = Color.FromArgb(127, 69, 12, 255); //Light Urple. When this is painted on top of an existing color, the colors will blend. So when it is blended with white, the resulting color is Color.FromArgb(255, 162, 133, 255); The solution should work like this:

Color blend = Color.White;
Color argb = Color.FromArgb(127, 69, 12, 255); //Light Urple.
Color rgb = ToRGB(argb, blend); //Same as Color.FromArgb(255, 162, 133, 255);

What is ToRGB's implementation? A: I know this is an old thread, but I want to add this:

Public Shared Function AlphaBlend(ByVal ForeGround As Color, ByVal BackGround As Color) As Color
    If ForeGround.A = 0 Then Return BackGround
    If BackGround.A = 0 Then Return ForeGround
    If ForeGround.A = 255 Then Return ForeGround
    Dim Alpha As Integer = CInt(ForeGround.A) + 1
    Dim B As Integer = Alpha * ForeGround.B + (255 - Alpha) * BackGround.B >> 8
    Dim G As Integer = Alpha * ForeGround.G + (255 - Alpha) * BackGround.G >> 8
    Dim R As Integer = Alpha * ForeGround.R + (255 - Alpha) * BackGround.R >> 8
    Dim A As Integer = ForeGround.A
    If BackGround.A = 255 Then A = 255
    If A > 255 Then A = 255
    If R > 255 Then R = 255
    If G > 255 Then G = 255
    If B > 255 Then B = 255
    Return Color.FromArgb(Math.Abs(A), Math.Abs(R), Math.Abs(G), Math.Abs(B))
End Function

public static Color AlphaBlend(Color ForeGround, Color BackGround)
{
    if (ForeGround.A == 0) return BackGround;
    if (BackGround.A == 0) return ForeGround;
    if (ForeGround.A == 255) return ForeGround;
    int Alpha = Convert.ToInt32(ForeGround.A) + 1;
    int B = Alpha * ForeGround.B + (255 - Alpha) * BackGround.B >> 8;
    int G = Alpha * ForeGround.G + (255 - Alpha) * BackGround.G >> 8;
    int R = Alpha * ForeGround.R + (255 - Alpha) * BackGround.R >> 8;
    int A = ForeGround.A;
    if (BackGround.A == 255) A = 255;
    if (A > 255) A = 255;
    if (R > 255) R = 255;
    if (G > 255) G = 255;
    if (B > 255) B = 255;
    return Color.FromArgb(Math.Abs(A), Math.Abs(R), Math.Abs(G), Math.Abs(B));
}

A: If you don't need to know this pre-render, you could always use the win32 method of GetPixel, I believe. Note: typing on iPhone in the middle of Missouri with no inet access. Will look up a real win32 example and see if there is a .NET equivalent. In case anyone cares, and doesn't want to use the (excellent) answer posted above, you can get the color value of a pixel in .NET via this link MSDN example A: It's called alpha blending. In pseudocode, assuming the background color (blend) always has 255 alpha. Also assumes alpha is 0-255.

alpha = argb.alpha()
r = (alpha/255)*argb.r() + (1 - alpha/255)*blend.r()
g = (alpha/255)*argb.g() + (1 - alpha/255)*blend.g()
b = (alpha/255)*argb.b() + (1 - alpha/255)*blend.b()

note: you probably need to be a bit (more) careful about floating-point/int math and rounding issues, depending on language. Cast intermediates accordingly. Edited to add: If you don't have a background color with an alpha of 255, the algebra gets a lot more complicated. I've done it before and it's a fun exercise left to the reader (if you really need to know, ask another question :). In other words, what color C blends into some background the same as blending A, then blending B. This is sort of like calculating A+B (which isn't the same as B+A).
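For completeness, here is a minimal C# sketch of the ToRGB the question asks for, assuming a fully opaque background and truncating integer math; under those assumptions it reproduces the question's example, blending (127, 69, 12, 255) over white into (255, 162, 133, 255):

using System.Drawing;

static Color ToRGB(Color argb, Color blend)
{
    // Weight each channel by the foreground alpha; the background gets the rest.
    int a = argb.A;
    int r = (argb.R * a + blend.R * (255 - a)) / 255;
    int g = (argb.G * a + blend.G * (255 - a)) / 255;
    int b = (argb.B * a + blend.B * (255 - a)) / 255;
    return Color.FromArgb(255, r, g, b); // result is fully opaque
}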
{ "language": "en", "url": "https://stackoverflow.com/questions/2780", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "25" }
Q: Setting Objects to Null/Nothing after use in .NET Should you set all the objects to null (Nothing in VB.NET) once you have finished with them? I understand that in .NET it is essential to dispose of any instances of objects that implement the IDisposable interface to release some resources, although the object can still exist after it is disposed (hence the IsDisposed property on forms), so I assume it can still reside in memory, or at least in part? I also know that when an object goes out of scope it is then marked for collection, ready for the next pass of the garbage collector (although this may take time). So with this in mind, will setting it to null speed up the system releasing the memory, as it does not have to work out that it is no longer in scope, and are there any bad side effects? MSDN articles never do this in examples and currently I do this as I cannot see the harm. However I have come across a mixture of opinions so any comments are useful. A: Karl is absolutely correct, there is no need to set objects to null after use. If an object implements IDisposable, just make sure you call IDisposable.Dispose() when you're done with that object (wrapped in a try..finally or a using() block). But even if you don't remember to call Dispose(), the finaliser method on the object should be calling Dispose() for you. I thought this was a good treatment: Digging into IDisposable and this Understanding IDisposable There isn't any point in trying to second-guess the GC and its management strategies, because it's self-tuning and opaque. There was a good discussion about the inner workings with Jeffrey Richter on Dot Net Rocks here: Jeffrey Richter on the Windows Memory Model, and Richter's book CLR via C# chapter 20 has a great treatment. A: Also:

using(SomeObject someObject = new SomeObject())
{
    // do stuff with the object
}
// the object will be disposed of

A: In general, there's no need to null objects after use, but in some cases I find it's a good practice. If an object implements IDisposable and is stored in a field, I think it's good to null it, just to avoid using the disposed object. The bugs of the following sort can be painful:

this.myField.Dispose();
// ... at some later time
this.myField.DoSomething();

It's good to null the field after disposing it, and get a NullReferenceException right at the line where the field is used again. Otherwise, you might run into some cryptic bug down the line (depending on exactly what DoSomething does). A: Chances are that your code is not structured tightly enough if you feel the need to null variables. There are a number of ways to limit the scope of a variable: As mentioned by Steve Tranby:

using(SomeObject someObject = new SomeObject())
{
    // do stuff with the object
}
// the object will be disposed of

Similarly, you can simply use curly brackets:

{
    // Declare the variable and use it
    SomeObject someObject = new SomeObject();
}
// The variable is no longer available

I find that using curly brackets without any "heading" really cleans out the code and helps make it more understandable. A: In general there is no need to set to null. But suppose you have a Reset functionality in your class. Then you might, because you do not want to call Dispose twice, since some Dispose implementations may not guard against repeated calls correctly and will throw a System.ObjectDisposedException.

private void Reset()
{
    if(_dataset != null)
    {
        _dataset.Dispose();
        _dataset = null;
    }
    //..More such member variables, like the oracle connection etc.
    _oraConnection
}

A: The only time you should set a variable to null is when the variable does not go out of scope and you no longer need the data associated with it. Otherwise there is no need. A: Another reason to avoid setting objects to null when you are done with them is that it can actually keep them alive for longer. e.g.

void foo()
{
    var someType = new SomeType();
    someType.DoSomething();
    // someType is now eligible for garbage collection
    // ... rest of method not using 'someType' ...
}

will allow the object referred to by someType to be GC'd after the call to "DoSomething", but

void foo()
{
    var someType = new SomeType();
    someType.DoSomething();
    // someType is NOT eligible for garbage collection yet
    // because that variable is used at the end of the method
    // ... rest of method not using 'someType' ...
    someType = null;
}

may sometimes keep the object alive until the end of the method. The JIT will usually optimize away the assignment to null, so both bits of code end up being the same. A: This kind of blanket "there is no need to set objects to null after use" advice is not entirely accurate. There are times you need to NULL the variable after disposing it. Yes, you should ALWAYS call .Dispose() or .Close() on anything that has it when you are done, be it file handles, database connections or disposable objects. Separate from that is the very practical pattern of LazyLoad. Say I have an instantiated ObjA of class A. Class A has a public property called PropB of class B. Internally, PropB uses the private variable _PropB and defaults to null. When PropB.Get() is used, it checks to see if _PropB is null, and if it is, opens the resources needed to instantiate a B into _PropB. It then returns _PropB. In my experience, this is a really useful trick. Where the need to null comes in is if you reset or change A in some way such that the contents of _PropB were derived from the previous values of A: you will need to Dispose AND null out _PropB so LazyLoad can reset to fetch the right value IF the code requires it (there is a sketch of this pattern at the end of these answers). If you only do _PropB.Dispose() and shortly after expect the null check for LazyLoad to succeed, it won't be null, and you'll be looking at stale data. In effect, you must null it after Dispose() just to be sure. I sure wish it were otherwise, but I've got code right now exhibiting this behavior: after a Dispose() on a _PropB and outside of the calling function that did the Dispose (and thus almost out of scope), the private prop still isn't null, and the stale data is still there. Eventually, the disposed property will null out, but that's been non-deterministic from my perspective. The core reason, as dbkk alludes to, is that the parent container (ObjA with PropB) is keeping the instance of _PropB in scope, despite the Dispose(). A: Stephen Cleary explains very well in this post: Should I Set Variables to Null to Assist Garbage Collection? Says: The Short Answer, for the Impatient Yes, if the variable is a static field, or if you are writing an enumerable method (using yield return) or an asynchronous method (using async and await). Otherwise, no. This means that in regular methods (non-enumerable and non-asynchronous), you do not set local variables, method parameters, or instance fields to null. (Even if you're implementing IDisposable.Dispose, you still should not set variables to null). The important thing that we should consider is Static Fields. Static fields are always root objects, so they are always considered "alive" by the garbage collector.
If a static field references an object that is no longer needed, it should be set to null so that the garbage collector will treat it as eligible for collection. Setting static fields to null is meaningless if the entire process is shutting down. The entire heap is about to be garbage collected at that point, including all the root objects. Conclusion: Static fields; that's about it. Anything else is a waste of time. A: No, don't null objects. You can check out https://web.archive.org/web/20160325050833/http://codebetter.com/karlseguin/2008/04/28/foundations-of-programming-pt-7-back-to-basics-memory/ for more information, but setting things to null won't do anything, except dirty your code. A: There are some cases where it makes sense to null references. For instance, when you're writing a collection--like a priority queue--and by your contract, you shouldn't be keeping those objects alive for the client after the client has removed them from the queue. But this sort of thing only matters in long-lived collections. If the queue's not going to survive the end of the function it was created in, then it matters a whole lot less. On the whole, you really shouldn't bother. Let the compiler and GC do their jobs so you can do yours. A: Take a look at this article as well: http://www.codeproject.com/KB/cs/idisposable.aspx For the most part, setting an object to null has no effect. The only time you should be sure to do so is if you are working with a "large object", which is one larger than 84K in size (such as bitmaps). A: I believe by design of the GC implementors, you can't speed up GC with nullification. I'm sure they'd prefer you not worry yourself with how/when GC runs -- treat it like this ubiquitous Being protecting and watching over and out for you...(bows head down, raises fist to the sky)... Personally, I often explicitly set variables to null when I'm done with them as a form of self-documentation. I don't declare, use, then set to null later -- I null immediately after they're no longer needed. I'm saying, explicitly, "I'm officially done with you...be gone..." Is nullifying necessary in a GC'd language? No. Is it helpful for the GC? Maybe yes, maybe no, I don't know for certain; by design I really can't control it, and regardless of today's answer with this version or that, future GC implementations could change the answer beyond my control. Plus if/when nulling is optimized out, it's little more than a fancy comment if you will. I figure if it makes my intent clearer to the next poor fool who follows in my footsteps, and if it "might" potentially help GC sometimes, then it's worth it to me. Mostly it makes me feel tidy and clear, and Mongo likes to feel tidy and clear. :) I look at it like this: Programming languages exist to let people give other people an idea of intent and a compiler a job request of what to do -- the compiler converts that request into a different language (sometimes several) for a CPU -- the CPU(s) couldn't give a hoot what language you used, your tab settings, comments, stylistic emphases, variable names, etc. -- a CPU's all about the bit stream that tells it what registers and opcodes and memory locations to twiddle. Many things written in code don't convert into what's consumed by the CPU in the sequence we specified. Our C, C++, C#, Lisp, Babel, assembler or whatever is theory rather than reality, written as a statement of work. What you see is not what you get, yes, even in assembler language.
I do understand the mindset of "unnecessary things (like blank lines) are nothing but noise and clutter up code." That was me earlier in my career; I totally get that. At this juncture I lean toward that which makes code clearer. It's not like I'm adding even 50 lines of "noise" to my programs -- it's a few lines here or there. There are exceptions to any rule. In scenarios with volatile memory, static memory, race conditions, singletons, usage of "stale" data and all that kind of rot, that's different: you NEED to manage your own memory, locking and nullifying as appropriate, because the memory is not part of the GC'd Universe -- hopefully everyone understands that. The rest of the time with GC'd languages it's a matter of style rather than necessity or a guaranteed performance boost. At the end of the day make sure you understand what is eligible for GC and what's not; lock, dispose, and nullify appropriately; wax on, wax off; breathe in, breathe out; and for everything else I say: If it feels good, do it. Your mileage may vary...as it should... A: I think setting something back to null is messy. Imagine a scenario where the item being set to null is exposed, say, via a property. Now if somehow some piece of code accidentally uses this property after the item is disposed, you will get a null reference exception, which requires some investigation to figure out exactly what is going on. I believe framework disposables will instead throw an ObjectDisposedException, which is more meaningful. Not setting these back to null would be better, then, for that reason. A: Some objects support the .Dispose() method, which forces the resource to be removed from memory.
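To make the dispose-and-null LazyLoad pattern described a few answers up concrete, here is a minimal C# sketch; the class and member names are illustrative, and LoadB() stands in for whatever expensive resource acquisition the real property performs:

class B : IDisposable
{
    public void Dispose() { /* release the underlying resource */ }
}

class A : IDisposable
{
    private B _propB; // lazily created; null means "not loaded yet"

    public B PropB
    {
        get
        {
            if (_propB == null)
                _propB = LoadB(); // first access: acquire the resource
            return _propB;
        }
    }

    // Dispose AND null, so the next PropB access reloads fresh data.
    public void Reset()
    {
        if (_propB != null)
        {
            _propB.Dispose();
            _propB = null;
        }
    }

    public void Dispose() { Reset(); }

    private B LoadB() { return new B(); } // stand-in for the real load
}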
{ "language": "en", "url": "https://stackoverflow.com/questions/2785", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "195" }
Q: What's the best setup for Mono development on Windows? I started trying to play with Mono, mostly for fun at the moment. I first tried to use the Visual Studio plugin that will convert a csproj into a makefile, but there seemed to be no version available for Visual Studio 2005. I also read about the MonoDevelop IDE, which sounded nice. Unfortunately, there's no pre-fab Windows package for it. I tried to follow some instructions to build it by combining dependencies from other semi-related installs. It didn't work, but that's probably because I'm a Windows-oriented guy and can barely spell "makefile". So, my question is this: What's the lowest-energy way to get up and running to try some Mono-based development on Windows? A: @Chris I have found that Visual Studio is the best IDE for developing against .NET -- I think the best way to target Mono is really just to develop and build in Visual Studio under Windows, then just run those binaries directly on Linux (or whatever other Mono platform you are using). There are free versions of Visual Studio if licensing is a concern. If you are developing under Linux, the best software is probably Eclipse with a Mono plugin (see The Mono Handbook - Eclipse for installation instructions), but keep in mind it doesn't have anywhere near the amount of features or language integration Visual Studio has. @modesty Mono is a 3rd party open source implementation of the .NET framework which allows you to run .NET applications on platforms other than Windows. A: One of the best things you can do if developing with Visual Studio for Mono is to get MoMA http://www.mono-project.com/MoMA. This will inspect any number of assemblies that you build and generate a report showing potential Mono problems (e.g., methods not implemented in the Mono library). It can be run from a GUI or the command line for use in automated builds. A: Miguel had a post about debugging Mono running on Linux with remote debugging in Visual Studio. This may be something you want to look into... Using Visual Studio to debug Mono. There is also a new project called CloverLeaf whose goal is to enable debugging Mono on Windows in Visual Studio. A: There's just no reason to build your app using Mono; the whole point of the .Net CLR is that the compiled output is cross-platform. So you can simply build it using your favourite IDE (and if you like IDEs, Microsoft's is the best one to use) and then test it on Mono. Even if you get Mono working on Windows, it wouldn't be a very good test of your app's portability: what if your app does silly things like assuming filenames have backslashes in them, or that there's something special about a folder called Program Files? The best way to do portability testing is to actually test your app on the target platform. And that's pretty easy to do with a Linux VMware player like the one at http://www.go-mono.com/mono-downloads/download.html. A: Personally, I'm just compiling in Visual Studio 2008 as if it were for .Net 2.0 and then running in Mono (VS2008 on Windows in a VirtualBox, Mono on OSX). All the problems come up at runtime, anyway, so the system works perfectly. I just found this very new link, which is amazing and shows you how to set up Visual Studio 2008 for Mono. At the same time, setting up Mono on OpenSuse or Ubuntu inside a VirtualBox (Sun's product) is easy, painless, and doesn't force you to abandon whatever platform you normally live in.
This is not relevant to your question, but I might note that I just got into Mono and I'm amazed at how much of .Net is implemented, including much of the Winforms stuff. A: My first instinct would be the rather unhelpful "Install Linux". You are somewhat swimming against the current to try and develop in Mono under Windows. Installing GTK and everything is a bit of a bother in my experience. If you do feel like using Linux, then you could try Ubuntu. Otherwise: There's some information here: http://www.mono-project.com/Mono:Windows and it seems the cygwin toolchain might be your best bet. I don't think you're going to be able to avoid makefiles, sadly. I found a slightly more explicit tutorial from O'Reilly. @modesty: Mono provides the necessary software to develop and run .NET client and server applications on Linux, Solaris, Mac OS X, Windows, and Unix. Sponsored by Novell (http://www.novell.com), the Mono open source project has an active and enthusiastic contributing community and is positioned to become the leading choice for development of Linux applications. -- From the Mono site. A: The Eclipse plugin for Mono is dead. On Linux use MonoDevelop, or X-Develop if you like good commercial support (although MonoDevelop is closing on them fast feature-wise). On Windows SharpDevelop has custom MSBuild targets for compiling the code against Mono. As Mono and MonoDevelop are changing fast, be sure to use the latest released versions, even if they are not marked as stable yet (e.g. versions shipped with stock Ubuntu are terribly outdated). The VMWare image is a great way to start testing Windows-developed code on Linux. Don't touch cygwin unless you are already very comfortable with it. A: I'd recommend getting VMWare Player and using the free Mono development platform image that is provided on the website. Download Mono. Setup time for this will be minimal, and it will also allow you to get your code working in .NET and then focus on porting issues without the massive hassle of switching machines and the like. The VMWare Player tools will allow you to simply drag and drop the files over to copy them. I'm looking to take a couple of my .NET apps and make them Mono compliant, and this is the path I'm going to take here shortly. A: A year later and the answer to this has changed greatly. You can now use MonoDevelop on Windows, or if you are more comfortable in Visual Studio you can use the Visual Studio Tools to write everything and then debug in a VM to make sure it is working on Linux. A: I liked the idea of trying to use MonoDevelop mostly just to make sure my stuff would work against the Mono runtimes. I guess it would also be possible to get crazy with msbuild and write some custom targets that tried to build against Mono, but that's basically emulating the now-defunct plug-in's functionality, which I assume was non-trivial to build. I do have minor experience with cygwin, and I am happy typing "configure" and "make" all day long, but when a problem occurs in that process, I'm virtually screwed. I'll probably try to play with all this again, but if it takes me more than a couple hours to come up with a way to build comfortably against the Mono runtimes, I'll probably just bail. I will try the Eclipse idea. I use that for Java, so I might be able to get the c# stuff to work. We shall see...
{ "language": "en", "url": "https://stackoverflow.com/questions/2786", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "33" }
Q: Map Routing, a la Google Maps? I've always been intrigued by Map Routing, but I've never found any good introductory (or even advanced!) level tutorials on it. Does anybody have any pointers, hints, etc? Update: I'm primarily looking for pointers as to how a map system is implemented (data structures, algorithms, etc). A: By Map Routing, you mean finding the shortest path along a street network? Dijkstra's shortest-path algorithm is the best known. Wikipedia has not a bad intro: http://en.wikipedia.org/wiki/Dijkstra%27s_algorithm There's a Java applet here where you can see it in action: http://www.dgp.toronto.edu/people/JamesStewart/270/9798s/Laffra/DijkstraApplet.html and Google will lead you to source code in just about any language. Any real implementation for generating driving routes will include quite a bit of data on the street network that describes the costs associated with traversing links and nodes: road network hierarchy, average speed, intersection priority, traffic signal linking, banned turns etc. A: A* is actually far closer to production mapping algorithms. It requires quite a bit less exploration compared to Dijkstra's original algorithm. A: Barry Brumitt, one of the engineers of Google Maps' route finding feature, wrote a post on the topic that may be of interest: The road to better path-finding 11/06/2007 03:47:00 PM A: Instead of learning the APIs of each map service provider (like the Gmaps and Ymaps APIs), it's good to learn Mapstraction. "Mapstraction is a library that provides a common API for various javascript mapping APIs" I would suggest you go to the URL and learn a general API. There is a good amount of How-Tos too. A: I've yet to find a good tutorial on routing, but there is lots of code to read: There are GPL routing applications that use OpenStreetMap data, e.g. Gosmore, which works on Windows (+ mobile) and Linux. There are a number of interesting applications using the same data, but Gosmore has some cool uses, e.g. interfacing with websites. The biggest problem with routing is bad data, and you never get good enough data. So if you want to try it, keep your test very local so you can control the data better. A: From a conceptual point of view, imagine dropping a stone into a pond and watching the ripples. The routes would represent the pond and the stone your starting position. Of course the algorithm would have to search some proportion of n^2 paths as the distance n increases. You would take your starting position and check all available paths from that point. Then recursively call for the points at the end of those paths and so on. You can increase performance by not doubling back on a path, by not re-checking the routes at a point if it has already been covered, and by giving up on paths that are taking too long. An alternative way is to use the ant pheromone approach, where ants crawl randomly from a start point and leave a scent trail, which builds up the more ants cross over a given path. If you send (enough) ants from both the start point and the end points then eventually the path with the strongest scent will be the shortest. This is because the shortest path will have been visited more times in a given time period, given that the ants walk at a uniform pace. EDIT @ Spikie As a further explanation of how to implement the pond algorithm - potential data structures needed are highlighted: You'll need to store the map as a network. This is simply a set of nodes and edges between them. A set of nodes constitutes a route.
An edge joins two nodes (possibly both the same node), and has an associated cost, such as distance or time, to traverse the edge. An edge can either be bi-directional or uni-directional. Probably simplest to just have uni-directional ones and double up for two-way travel between nodes (i.e. one edge from A to B and a different one for B to A). By way of example imagine three railway stations arranged in an equilateral triangle pointing upwards. There are also a further three stations each halfway between them. Edges join all adjacent stations together; the final diagram will have an inverted triangle sitting inside the larger triangle. Label nodes starting from bottom left, going left to right and up, as A, B, C, D, E, F (F at the top). Assume the edges can be traversed in either direction. Each edge has a cost of 1 km. Ok, so we wish to route from the bottom left A to the top station F. There are many possible routes, including those that double back on themselves, e.g. ABCEBDEF. We have a routine, say NextNode, that accepts a node and a cost and calls itself for each node it can travel to. Clearly if we let this routine run it will eventually discover all routes, including ones that are potentially infinite in length (e.g. ABABABAB etc). We stop this from happening by checking against the cost. Whenever we visit a node that hasn't been visited before, we put both the cost and the node we came from against that node. If a node has been visited before, we check against the existing cost and if we're cheaper then we update the node and carry on (recursing). If we're more expensive, then we skip the node. If all nodes are skipped then we exit the routine. If we hit our target node then we exit the routine too. This way all viable routes are checked, but crucially only those with the lowest cost. By the end of the process each node will have the lowest cost for getting to that node, including our target node. To get the route we work backwards from our target node. Since we stored the node we came from along with the cost, we just hop backwards building up the route. For our example we would end up with something like:

Node A - (Total) Cost 0 - From Node None
Node B - Cost 1 - From Node A
Node C - Cost 2 - From Node B
Node D - Cost 1 - From Node A
Node E - Cost 2 - From Node D / Cost 2 - From Node B (this is an exception as there is equal cost)
Node F - Cost 2 - From Node D

So the shortest route is ADF. A: Take a look at the open street map project to see how this sort of thing is being tackled in a truly free software project using only user-supplied and licensed data, and they have a wiki containing stuff you might find interesting. A few years back the guys involved were pretty easy-going and answered lots of questions I had, so I see no reason why they still aren't a nice bunch. A: Another thought occurs to me regarding the cost of each traversal, but it would increase the time and processing power required to compute. Example: There are 3 ways I can take (where I live) to go from point A to B, according to GoogleMaps. Garmin units offer each of these 3 paths in the Quickest route calculation. After traversing each of these routes many times and averaging (obviously there will be errors depending on the time of day, amount of caffeine etc.), I feel the algorithms could take into account the number of bends in the road for a high level of accuracy, e.g. a straight road of 1 mile will be quicker than a 1 mile road with sharp bends in it.
Not a practical suggestion, but certainly one I use to improve the result set of my daily commute. A: From my experience of working in this field, A* does the job very well. It is (as mentioned above) faster than Dijkstra's algorithm, but is still simple enough for an ordinarily competent programmer to implement and understand. Building the route network is the hardest part, but that can be broken down into a series of simple steps: get all the roads; sort the points into order; make groups of identical points on different roads into intersections (nodes); add arcs in both directions where nodes connect (or in one direction only for a one-way road). The A* algorithm itself is well documented on Wikipedia. The key place to optimise is the selection of the best node from the open list, for which you need a high-performance priority queue. If you're using C++ you can use the STL priority_queue adapter. Customising the algorithm to route over different parts of the network (e.g., pedestrian, car, public transport, etc.) or to favour speed, distance or other criteria is quite easy. You do that by writing filters to control which route segments are available, when building the network, and which weight is assigned to each one.
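To make the pond/ripple explanation above concrete, here is a minimal C# sketch that runs the cost-relaxation idea over the six-station example; all names and the uniform 1 km edge cost come from that example, and a real router would use A* with a priority queue as the last answer describes:

using System;
using System.Collections.Generic;

class Router
{
    static void Main()
    {
        // Adjacency list for the triangle-of-stations example (A..F).
        var edges = new Dictionary<char, char[]>
        {
            { 'A', new[] { 'B', 'D' } },
            { 'B', new[] { 'A', 'C', 'D', 'E' } },
            { 'C', new[] { 'B', 'E' } },
            { 'D', new[] { 'A', 'B', 'E', 'F' } },
            { 'E', new[] { 'B', 'C', 'D', 'F' } },
            { 'F', new[] { 'D', 'E' } },
        };

        var cost = new Dictionary<char, int> { { 'A', 0 } }; // lowest known cost per node
        var cameFrom = new Dictionary<char, char>();         // node we reached it from
        var open = new Queue<char>();
        open.Enqueue('A');

        while (open.Count > 0)
        {
            char node = open.Dequeue();
            foreach (char next in edges[node])
            {
                int newCost = cost[node] + 1; // every edge costs 1 km here
                if (!cost.ContainsKey(next) || newCost < cost[next])
                {
                    cost[next] = newCost;     // cheaper route found: record it
                    cameFrom[next] = node;
                    open.Enqueue(next);       // and revisit this node's neighbours
                }
            }
        }

        // Walk backwards from the target, as described above, to recover the route.
        string route = "F";
        for (char n = 'F'; n != 'A'; n = cameFrom[n])
            route = cameFrom[n] + route;
        Console.WriteLine("Shortest route: {0} ({1} km)", route, cost['F']); // ADF (2 km)
    }
}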
{ "language": "en", "url": "https://stackoverflow.com/questions/2798", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "24" }
Q: How should I translate from screen space coordinates to image space coordinates in a WinForms PictureBox? I have an application that displays an image inside of a Windows Forms PictureBox control. The SizeMode of the control is set to Zoom so that the image contained in the PictureBox will be displayed in an aspect-correct way regardless of the dimensions of the PictureBox. This is great for the visual appearance of the application because you can size the window however you want and the image will always be displayed using its best fit. Unfortunately, I also need to handle mouse click events on the picture box and need to be able to translate from screen-space coordinates to image-space coordinates. It looks like it's easy to translate from screen space to control space, but I don't see any obvious way to translate from control space to image space (i.e. the pixel coordinate in the source image that has been scaled in the picture box). Is there an easy way to do this, or should I just duplicate the scaling math that they're using internally to position the image and do the translation myself? A: I wound up just implementing the translation manually. The code's not too bad, but it did leave me wishing that they provided support for it directly. I could see such a method being useful in a lot of different circumstances. I guess that's why they added extension methods :) In pseudocode:

// Recompute the image scaling the zoom mode uses to fit the image on screen
imageScale ::= min(pictureBox.width / image.width, pictureBox.height / image.height)
scaledWidth ::= image.width * imageScale
scaledHeight ::= image.height * imageScale

// Compute the offset of the image to center it in the picture box
imageX ::= (pictureBox.width - scaledWidth) / 2
imageY ::= (pictureBox.height - scaledHeight) / 2

// Test the coordinate in the picture box against the image bounds
if pos.x < imageX or imageX + scaledWidth < pos.x then return null
if pos.y < imageY or imageY + scaledHeight < pos.y then return null

// Compute the normalized (0..1) coordinates in image space
u ::= (pos.x - imageX) / scaledWidth
v ::= (pos.y - imageY) / scaledHeight
return (u, v)

To get the pixel position in the image, you'd just multiply by the actual image pixel dimensions, but the normalized coordinates allow you to address the original responder's point about resolving ambiguity on a case-by-case basis. A: Depending on the scaling, the relative image pixel could be anywhere in a number of pixels. For example, if the image is scaled down significantly, pixel 2, 10 could represent anything from 2, 10 all the way up to 20, 100, so you'll have to do the math yourself and take full responsibility for any inaccuracies! :-)
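A minimal C# rendering of that pseudocode, assuming SizeMode = Zoom and a click position already in the PictureBox's client coordinates (method and parameter names are illustrative):

using System;
using System.Drawing;
using System.Windows.Forms;

static PointF? ControlToImage(PictureBox box, Point pos)
{
    Image img = box.Image;

    // Scale the zoom mode uses to fit the image inside the control.
    float scale = Math.Min((float)box.ClientSize.Width / img.Width,
                           (float)box.ClientSize.Height / img.Height);
    float scaledW = img.Width * scale;
    float scaledH = img.Height * scale;

    // Offset that centers the zoomed image inside the control.
    float imgX = (box.ClientSize.Width - scaledW) / 2f;
    float imgY = (box.ClientSize.Height - scaledH) / 2f;

    // Outside the drawn image: no image-space coordinate exists.
    if (pos.X < imgX || pos.X > imgX + scaledW) return null;
    if (pos.Y < imgY || pos.Y > imgY + scaledH) return null;

    // Normalized (0..1); multiply by img.Width/img.Height for pixel coordinates.
    return new PointF((pos.X - imgX) / scaledW, (pos.Y - imgY) / scaledH);
}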
{ "language": "en", "url": "https://stackoverflow.com/questions/2804", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "21" }
Q: SQL Server 2000: Is there a way to tell when a record was last modified? The table doesn't have a last updated field and I need to know when existing data was updated. So adding a last updated field won't help (as far as I know). A: SQL Server 2000 does not keep track of this information for you. There may be creative / fuzzy ways to guess what this date was depending on your database model. But, if you are talking about 1 table with no relation to other data, then you are out of luck. A: You can't check for changes without some sort of audit mechanism. You are looking to extract information that has not been collected. If you just need to know when a record was added or edited, adding a datetime field that gets updated via a trigger when the record is updated would be the simplest choice. If you also need to track when a record has been deleted, then you'll want to use an audit table and populate it from triggers with a row when a record has been added, edited, or deleted. A: You might try a log viewer; this basically just lets you look at the transactions in the transaction log, so you should be able to find the statement that updated the row in question. I wouldn't recommend this as a production-level auditing strategy, but I've found it to be useful in a pinch. Here's one I've used; it's free and (only) works w/ SQL Server 2000. http://www.red-gate.com/products/SQL_Log_Rescue/index.htm A: You can add a timestamp field to that table and update that timestamp value with an update trigger. A: OmniAudit is a commercial package which implements auditing across an entire database. A free method would be to write a trigger for each table which adds entries to an audit table when fired.
{ "language": "en", "url": "https://stackoverflow.com/questions/2809", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: SQL Server 2005 For XML Explicit - Need help formatting I have a table with a structure like the following:

LocationID AccountNumber
long-guid-here 12345
long-guid-here 54321

To pass into another stored procedure, I need the XML to look like this:

<root>
  <clientID>12345</clientID>
  <clientID>54321</clientID>
</root>

The best I've been able to do so far was getting it like this: <root clientID="10705"/> I'm using this SQL statement:

SELECT 1 as tag,
       null as parent,
       AccountNumber as 'root!1!clientID'
FROM Location.LocationMDAccount
WHERE locationid = 'long-guid-here'
FOR XML EXPLICIT

So far, I've looked at the documentation on the MSDN page, but I've not come out with the desired results. @KG, Yours gave me this output actually:

<root>
  <Location.LocationMDAccount>
    <clientId>10705</clientId>
  </Location.LocationMDAccount>
</root>

I'm going to stick with the FOR XML EXPLICIT from Chris Leon for now. A: Try

SELECT 1 AS Tag,
       0 AS Parent,
       AccountNumber AS [Root!1!AccountNumber!element]
FROM Location.LocationMDAccount
WHERE LocationID = 'long-guid-here'
FOR XML EXPLICIT

A: Try this, Chris:

SELECT AccountNumber as [clientId]
FROM Location.Location root
WHERE LocationId = 'long-guid-here'
FOR XML AUTO, ELEMENTS

TERRIBLY SORRY! I mixed up what you were asking for. I prefer XML AUTO just for ease of maintenance, but I believe either one is effective. My apologies for the oversight ;-) A: I got it with:

select 1 as tag,
       null as parent,
       AccountNumber as 'root!1!clientID!element'
from Location.LocationMDAccount
where locationid = 'long-guid-here'
for xml explicit

A: Using SQL Server 2005 (or presumably 2008) I find FOR XML PATH to allow for much easier-to-maintain SQL than FOR XML EXPLICIT (particularly once the SQL is longer). In this case:

SELECT AccountNumber as "clientID"
FROM Location.LocationMDAccount
WHERE locationid = 'long-guid-here'
FOR XML PATH (''), Root ('root');

A:

SELECT 1 as tag,
       null as parent,
       AccountNumber as 'clientID!1!!element'
FROM Location.LocationMDAccount
WHERE locationid = 'long-guid-here'
FOR XML EXPLICIT, root('root')
{ "language": "en", "url": "https://stackoverflow.com/questions/2811", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13" }
Q: How to curl or wget a web page? I would like to make a nightly cron job that fetches my stackoverflow page and diffs it from the previous day's page, so I can see a change summary of my questions, answers, ranking, etc. Unfortunately, I couldn't get the right set of cookies, etc, to make this work. Any ideas? Also, when the beta is finished, will my status page be accessible without logging in? A: Your status page is available now without logging in (click logout and try it). When the beta-cookie is disabled, there will be nothing between you and your status page. For wget: wget --no-cookies --header "Cookie: soba=(LookItUpYourself)" https://stackoverflow.com/users/30/myProfile.html A: From Mark Harrison: And here's what works... curl -s --cookie soba=. https://stackoverflow.com/users And for wget: wget --no-cookies --header "Cookie: soba=(LookItUpYourself)" https://stackoverflow.com/users/30/myProfile.html A: Nice idea :) I presume you've tried wget's --load-cookies (filename)? It might help a little, but it might be easier to use something like Mechanize (in Perl or Python) to mimic a browser more fully and get a good spider. A: I couldn't figure out how to get the cookies to work either, but I was able to get to my status page in my browser while I was logged out, so I assume this will work once stackoverflow goes public. This is an interesting idea, but won't you also pick up diffs of the underlying html code? Do you have a strategy to avoid ending up with a diff of the html and not the actual content? A: And here's what works... curl -s --cookie soba=. http://stackoverflow.com/users
{ "language": "en", "url": "https://stackoverflow.com/questions/2815", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19" }
Q: Paging SQL Server 2005 Results How do I page results in SQL Server 2005? I tried it in SQL Server 2000, but there was no reliable way to do this. I'm now wondering if SQL Server 2005 has any built-in method? What I mean by paging is, for example, if I list users by their username, I want to be able to only return the first 10 records, then the next 10 records and so on. Any help would be much appreciated. A: The accepted answer for this doesn't actually work for me... I had to jump through one more hoop to get it to work. When I tried the answer

SELECT Row_Number() OVER(ORDER BY UserName) As RowID, UserFirstName, UserLastName
FROM Users
WHERE RowID Between 0 AND 9

it failed, complaining that it didn't know what RowID was. I had to wrap it in an inner select like this:

SELECT *
FROM (SELECT Row_Number() OVER(ORDER BY UserName) As RowID, UserFirstName, UserLastName
      FROM Users) innerSelect
WHERE RowID Between 0 AND 9

and then it worked. A: You can use the Row_Number() function. It's used as follows:

SELECT Row_Number() OVER(ORDER BY UserName) As RowID, UserFirstName, UserLastName
FROM Users

This will yield a result set with a RowID field which you can use to page between:

SELECT *
FROM (
    SELECT Row_Number() OVER(ORDER BY UserName) As RowID, UserFirstName, UserLastName
    FROM Users
) As RowResults
WHERE RowID Between 5 AND 10

etc. A: When I need to do paging, I typically use a temporary table as well. You can use an output parameter to return the total number of records. The case statements in the select allow you to sort the data on specific columns without needing to resort to dynamic SQL.

--Declaration--
--Variables
@StartIndex INT,
@PageSize INT,
@SortColumn VARCHAR(50),
@SortDirection CHAR(3),
@Results INT OUTPUT

--Statements--
SELECT @Results = COUNT(ID) FROM Customers
WHERE FirstName LIKE '%a%'

SET @StartIndex = @StartIndex - 1 --Either do this here or in code, but be consistent

CREATE TABLE #Page(ROW INT IDENTITY(1,1) NOT NULL, id INT, sorting_1 SQL_VARIANT, sorting_2 SQL_VARIANT)

INSERT INTO #Page(ID, sorting_1, sorting_2)
SELECT TOP (@StartIndex + @PageSize)
    ID,
    CASE
        WHEN @SortColumn='FirstName' AND @SortDirection='ASC' THEN CAST(FirstName AS SQL_VARIANT)
        WHEN @SortColumn='LastName' AND @SortDirection='ASC' THEN CAST(LastName AS SQL_VARIANT)
        ELSE NULL
    END AS sort_1,
    CASE
        WHEN @SortColumn='FirstName' AND @SortDirection='DES' THEN CAST(FirstName AS SQL_VARIANT)
        WHEN @SortColumn='LastName' AND @SortDirection='DES' THEN CAST(LastName AS SQL_VARIANT)
        ELSE NULL
    END AS sort_2
FROM (
    SELECT CustomerId AS ID, FirstName, LastName
    FROM Customers
    WHERE FirstName LIKE '%a%'
) C
ORDER BY sort_1 ASC, sort_2 DESC, ID ASC;

SELECT ID, Customers.FirstName, Customers.LastName
FROM #Page
INNER JOIN Customers ON ID = Customers.CustomerId
WHERE ROW > @StartIndex AND ROW <= (@StartIndex + @PageSize)
ORDER BY ROW ASC

DROP TABLE #Page

A: If you're trying to get it in one statement (the total plus the paging), you might need to explore SQL Server support for the partition by clause (windowing functions in ANSI SQL terms).
In Oracle the syntax is just like the example above using row_number(), but I have also added a partition by clause to get the total number of rows included with each row returned in the paging (total rows is 1,262):

SELECT rn, total_rows, x.OWNER, x.object_name, x.object_type
FROM (SELECT COUNT (*) OVER (PARTITION BY owner) AS TOTAL_ROWS,
             ROW_NUMBER () OVER (ORDER BY 1) AS rn,
             uo.*
      FROM all_objects uo
      WHERE owner = 'CSEIS') x
WHERE rn BETWEEN 6 AND 10

Note that I have where owner = 'CSEIS' and my partition by is on owner. So the results are:

RN  TOTAL_ROWS  OWNER  OBJECT_NAME                OBJECT_TYPE
6   1262        CSEIS  CG$BDS_MODIFICATION_TYPES  TRIGGER
7   1262        CSEIS  CG$AUS_MODIFICATION_TYPES  TRIGGER
8   1262        CSEIS  CG$BDR_MODIFICATION_TYPES  TRIGGER
9   1262        CSEIS  CG$ADS_MODIFICATION_TYPES  TRIGGER
10  1262        CSEIS  CG$BIS_LANGUAGES           TRIGGER

A: I believe you'd need to perform a separate query to accomplish that, unfortunately. I was able to accomplish this at my previous position using some help from this page: Paging in DotNet 2.0 They also have it pulling a row count separately. A: Here's what I do for paging: All of my big queries that need to be paged are coded as inserts into a temp table. The temp table has an identity field that will act in a similar manner to the row_number() mentioned above. I store the number of rows in the temp table in an output parameter so the calling code knows how many total records there are. The calling code also specifies which page it wants, and how many rows per page, which are selected out from the temp table. The cool thing about doing it this way is that I also have an "Export" link above every grid in my application that allows you to get all rows from the report returned as CSV. This link uses the same stored procedure: you just return the contents of the temp table instead of doing the paging logic. This placates users who hate paging, and want to see everything, and want to sort it in a million different ways.
{ "language": "en", "url": "https://stackoverflow.com/questions/2840", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "42" }
Q: How do you format an unsigned long long int using printf?

#include <stdio.h>
int main() {
    unsigned long long int num = 285212672; //FYI: fits in 29 bits
    int normalInt = 5;
    printf("My number is %d bytes wide and its value is %ul. A normal number is %d.\n",
           sizeof(num), num, normalInt);
    return 0;
}

Output: My number is 8 bytes wide and its value is 285212672l. A normal number is 0. I assume this unexpected result is from printing the unsigned long long int. How do you printf() an unsigned long long int? A: Compile it as x64 with VS2005: %llu works well. A: How do you format an unsigned long long int using printf? Since C99, use an "ll" (ell-ell) before the conversion specifiers o, u, x, X. In addition to the base 10 options in many answers, there are base 16 and base 8 options: Choices include

unsigned long long num = 285212672;
printf("Base 10: %llu\n", num);
num += 0xFFF; // For more interesting hex/octal output.
printf("Base 16: %llX\n", num); // Use uppercase A-F
printf("Base 16: %llx\n", num); // Use lowercase a-f
printf("Base 8: %llo\n", num);
puts("or 0x,0X prefix");
printf("Base 16: %#llX %#llX\n", num, 0ull); // When non-zero, print leading 0X
printf("Base 16: %#llx %#llx\n", num, 0ull); // When non-zero, print leading 0x
printf("Base 16: 0x%llX\n", num); // My hex fave: lower case prefix, with A-F

Output

Base 10: 285212672
Base 16: 11000FFF
Base 16: 11000fff
Base 8: 2100007777
or 0x,0X prefix
Base 16: 0X11000FFF 0
Base 16: 0x11000fff 0
Base 16: 0x11000FFF

A: Use the ll (el-el) long-long modifier with the u (unsigned) conversion. (Works in Windows, GNU). printf("%llu", 285212672ULL); A: For long long (or __int64) using MSVS, you should use %I64d:

__int64 a;
time_t b;
...
fprintf(outFile,"%I64d,%I64d\n",a,b); //I is capital i

A: Apparently no one has come up with a multi-platform* solution for over a decade since [the] year 2008, so I shall append mine. Plz upvote. (Joking. I don't care.) Solution: lltoa() How to use:

#include <stdlib.h> /* lltoa() */
// ...
char dummy[255];
printf("Over 4 bytes: %s\n", lltoa(5555555555, dummy, 10));
printf("Another one: %s\n", lltoa(15555555555, dummy, 10));

OP's example:

#include <stdio.h>
#include <stdlib.h> /* lltoa() */

int main() {
    unsigned long long int num = 285212672; // fits in 29 bits
    char dummy[255];
    int normalInt = 5;
    printf("My number is %d bytes wide and its value is %s. "
           "A normal number is %d.\n",
           sizeof(num), lltoa(num, dummy, 10), normalInt);
    return 0;
}

Unlike the %lld print format string, this one works for me under 32-bit GCC on Windows. *) Well, almost multi-platform. In MSVC, you apparently need _ui64toa() instead of lltoa(). A: That is because %llu doesn't work properly under Windows and %d can't handle 64 bit integers. I suggest using PRIu64 instead and you'll find it's portable to Linux as well. Try this instead:

#include <stdio.h>
#include <inttypes.h>

int main() {
    unsigned long long int num = 285212672; //FYI: fits in 29 bits
    int normalInt = 5;
    /* NOTE: PRIu64 is a preprocessor macro and thus should go outside the quoted string. */
    printf("My number is %d bytes wide and its value is %" PRIu64 ". A normal number is %d.\n",
           sizeof(num), num, normalInt);
    return 0;
}

Output My number is 8 bytes wide and its value is 285212672. A normal number is 5. A: In addition to what people wrote years ago: you might get this error on gcc/mingw: main.c:30:3: warning: unknown conversion type character 'l' in format [-Wformat=] printf("%llu\n", k); Then your version of mingw does not default to c99. Add this compiler flag: -std=c99.
A: Non-standard things are always strange :) For the long long portion, under GNU it's L, ll or q; under Windows I believe it's ll only.
A: In Linux it is %llu and in Windows it is %I64u. Although I have found it doesn't work in Windows 2000, there seems to be a bug there!
A:
%d   --> for int
%u   --> for unsigned int
%ld  --> for long int or long
%lu  --> for unsigned long int or long unsigned int or unsigned long
%lld --> for long long int or long long
%llu --> for unsigned long long int or unsigned long long
A: You may want to try using the inttypes.h library that gives you types such as int32_t, int64_t, uint64_t etc. You can then use its macros such as:
#include <inttypes.h>

uint64_t x;
uint32_t y;

printf("x: %"PRIu64", y: %"PRIu32"\n", x, y);
This is "guaranteed" to not give you the same trouble as long, unsigned long long etc, since you don't have to guess how many bits are in each data type.
A: One possibility for formatting an unsigned long long is to make use of uintmax_t. This type has been available since C99 and, unlike some of the other optional exact-width types found in stdint.h, uintmax_t is required by the Standard (as is its signed counterpart intmax_t). According to the Standard, a uintmax_t type can represent any value of any unsigned integer type. You can print a uintmax_t value using the %ju conversion specifier (and intmax_t can be printed using %jd). To print a value which is not already uintmax_t, you must first cast to uintmax_t to avoid undefined behavior:
#include <stdio.h>
#include <stdint.h>

int main(void) {
    unsigned long long num = 285212672;
    printf("%ju\n", (uintmax_t)num);
    return 0;
}
A: Well, one way is to compile it as x64 with VS2008. This runs as you would expect:
int normalInt = 5;
unsigned long long int num=285212672;
printf( "My number is %d bytes wide and its value is %ul. A normal number is %d \n", sizeof(num), num, normalInt);
For 32 bit code, we need to use the correct __int64 format specifier %I64u. So it becomes:
int normalInt = 5;
unsigned __int64 num=285212672;
printf( "My number is %d bytes wide and its value is %I64u. A normal number is %d", sizeof(num), num, normalInt);
This code works for both the 32 and 64 bit VS compiler.
A: Hex:
printf("64bit: %llX", 0xffffffffffffffffULL);
Output:
64bit: FFFFFFFFFFFFFFFF
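Pulling the answers in this thread together, here is a minimal C99 program, a sketch assembled for this write-up rather than taken from any single answer, that prints an unsigned long long three portable ways. It assumes a C99-capable toolchain; as noted above, older MSVC runtimes may not honor %llu or %zu.
#include <stdio.h>
#include <inttypes.h>

int main(void)
{
    unsigned long long num = 285212672ULL;

    /* C99: the "ll" length modifier with the "u" conversion */
    printf("Standard C99: %llu\n", num);

    /* sizeof yields size_t, which wants the "z" modifier, not %d */
    printf("Width: %zu bytes\n", sizeof num);

    /* inttypes.h macro, after casting to the matching fixed-width type */
    printf("Fixed width: %" PRIu64 "\n", (uint64_t)num);

    return 0;
}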
{ "language": "en", "url": "https://stackoverflow.com/questions/2844", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "450" }
Q: Reading a C/C++ data structure in C# from a byte array What would be the best way to fill a C# struct from a byte[] array where the data was from a C/C++ struct? The C struct would look something like this (my C is very rusty):
typedef OldStuff {
    CHAR Name[8];
    UInt32 User;
    CHAR Location[8];
    UInt32 TimeStamp;
    UInt32 Sequence;
    CHAR Tracking[16];
    CHAR Filler[12];
}
And would fill something like this:
[StructLayout(LayoutKind.Explicit, Size = 56, Pack = 1)]
public struct NewStuff
{
    [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 8)]
    [FieldOffset(0)]
    public string Name;

    [MarshalAs(UnmanagedType.U4)]
    [FieldOffset(8)]
    public uint User;

    [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 8)]
    [FieldOffset(12)]
    public string Location;

    [MarshalAs(UnmanagedType.U4)]
    [FieldOffset(20)]
    public uint TimeStamp;

    [MarshalAs(UnmanagedType.U4)]
    [FieldOffset(24)]
    public uint Sequence;

    [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 16)]
    [FieldOffset(28)]
    public string Tracking;
}
What is the best way to copy OldStuff to NewStuff, if OldStuff was passed as a byte[] array? I'm currently doing something like the following, but it feels kind of clunky.
GCHandle handle;
NewStuff MyStuff;

int BufferSize = Marshal.SizeOf(typeof(NewStuff));
byte[] buff = new byte[BufferSize];

Array.Copy(SomeByteArray, 0, buff, 0, BufferSize);

handle = GCHandle.Alloc(buff, GCHandleType.Pinned);

MyStuff = (NewStuff)Marshal.PtrToStructure(handle.AddrOfPinnedObject(), typeof(NewStuff));

handle.Free();
Is there a better way to accomplish this? Would using the BinaryReader class offer any performance gains over pinning the memory and using Marshal.PtrToStructure?
A: Watch out for packing issues. In the example you gave, all fields are at the obvious offsets because everything is on 4 byte boundaries, but this will not always be the case. Visual C++ packs on 8 byte boundaries by default.
A:
object ByteArrayToStructure(byte[] bytearray, object structureObj, int position)
{
    int length = Marshal.SizeOf(structureObj);
    IntPtr ptr = Marshal.AllocHGlobal(length);
    // Copy the bytes starting at the requested offset into unmanaged
    // memory, then marshal the structure from there.
    Marshal.Copy(bytearray, position, ptr, length);
    structureObj = Marshal.PtrToStructure(ptr, structureObj.GetType());
    Marshal.FreeHGlobal(ptr);
    return structureObj;
}
Hope this helps.
A: Here is an exception safe version of the accepted answer:
public static T ByteArrayToStructure<T>(byte[] bytes) where T : struct
{
    var handle = GCHandle.Alloc(bytes, GCHandleType.Pinned);
    try
    {
        return (T) Marshal.PtrToStructure(handle.AddrOfPinnedObject(), typeof(T));
    }
    finally
    {
        handle.Free();
    }
}
A: From what I can see in that context, you don't need to copy SomeByteArray into a buffer. You simply need to get the handle from SomeByteArray, pin it, copy the IntPtr data using PtrToStructure and then release. No need for a copy.
That would be:
NewStuff ByteArrayToNewStuff(byte[] bytes)
{
    NewStuff stuff;
    GCHandle handle = GCHandle.Alloc(bytes, GCHandleType.Pinned);
    try
    {
        stuff = (NewStuff)Marshal.PtrToStructure(handle.AddrOfPinnedObject(), typeof(NewStuff));
    }
    finally
    {
        handle.Free();
    }
    return stuff;
}
Generic version:
T ByteArrayToStructure<T>(byte[] bytes) where T: struct
{
    T stuff;
    GCHandle handle = GCHandle.Alloc(bytes, GCHandleType.Pinned);
    try
    {
        stuff = (T)Marshal.PtrToStructure(handle.AddrOfPinnedObject(), typeof(T));
    }
    finally
    {
        handle.Free();
    }
    return stuff;
}
Simpler version (requires the unsafe switch):
unsafe T ByteArrayToStructure<T>(byte[] bytes) where T : struct
{
    fixed (byte* ptr = &bytes[0])
    {
        return (T)Marshal.PtrToStructure((IntPtr)ptr, typeof(T));
    }
}
A: If you have a byte[], you should be able to use the BinaryReader class and set values on NewStuff using the available ReadX methods.
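To flesh out that last suggestion, here is a sketch of the BinaryReader approach using the field layout from the question. The ASCII encoding and the little-endian byte order that BinaryReader assumes are my guesses about the source data, not something the question states.
using System.IO;
using System.Text;

public static NewStuff ReadNewStuff(byte[] bytes)
{
    // Layout from the question: 8-byte name, uint, 8-byte location,
    // two more uints, a 16-byte tracking field, then 12 filler bytes.
    using (var reader = new BinaryReader(new MemoryStream(bytes)))
    {
        NewStuff stuff = new NewStuff();
        stuff.Name      = Encoding.ASCII.GetString(reader.ReadBytes(8)).TrimEnd('\0');
        stuff.User      = reader.ReadUInt32();
        stuff.Location  = Encoding.ASCII.GetString(reader.ReadBytes(8)).TrimEnd('\0');
        stuff.TimeStamp = reader.ReadUInt32();
        stuff.Sequence  = reader.ReadUInt32();
        stuff.Tracking  = Encoding.ASCII.GetString(reader.ReadBytes(16)).TrimEnd('\0');
        reader.ReadBytes(12); // skip the filler
        return stuff;
    }
}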
{ "language": "en", "url": "https://stackoverflow.com/questions/2871", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "95" }
Q: Possible to "spin off" several GUI threads? (Not halting the system at Application.Run) My Goal I would like to have a main processing thread (non GUI), and be able to spin off GUIs in their own background threads as needed, having my main non GUI thread keep working. Put another way, I want my main non GUI thread to be the owner of the GUI thread and not vice versa. I'm not sure this is even possible with Windows Forms(?) Background I have a component based system in which a controller dynamically loads assemblies and instantiates and runs classes implementing a common IComponent interface with a single method DoStuff(). Which components get loaded is configured via an xml configuration file and by adding new assemblies containing different implementations of IComponent. The components provide utility functions to the main application. While the main program is doing its thing, e.g. controlling a nuclear plant, the components might be performing utility tasks (in their own threads), e.g. cleaning the database, sending emails, printing funny jokes on the printer, what have you. What I would like is to have one of these components be able to display a GUI, e.g. with status information for the said email sending component. The lifetime of the complete system looks like this:
*Application starts.
*Check configuration file for components to load. Load them.
*For each component, run DoStuff() to initialize it and make it live its own life in its own thread.
*Continue to do main application-thingy kind of work, forever.
I have not yet been able to successfully perform point 3 if the component fires up a GUI in DoStuff(). It simply halts until the GUI is closed. And not until the GUI is closed does the program progress to point 4. It would be great if these components were allowed to start up their own Windows Forms GUIs. Problem When a component tries to fire up a GUI in DoStuff() (the exact line of code is when the component runs Application.Run(theForm)), the component and hence our system "hangs" at the Application.Run() line until the GUI is closed. Well, the just fired up GUI works fine, as expected. Example of components. One has nothing to do with GUI, whilst the second fires up a cute window with pink fluffy bunnies in it.
public class MyComponent1: IComponent
{
    public string DoStuff(...)
    {
        // write something to the database
    }
}

public class MyComponent2: IComponent
{
    public void DoStuff()
    {
        Application.EnableVisualStyles();
        Application.SetCompatibleTextRenderingDefault(false);
        Application.Run(new Form());
        // I want the thread to immediately return after the GUI
        // is fired up, so that my main thread can continue to work.
    }
}
I have tried this with no luck. Even when I try to fire up the GUI in its own thread, the execution halts until the GUI is closed.
public void DoStuff()
{
    new Thread(ThreadedInitialize).Start();
}

private void ThreadedInitialize()
{
    Application.EnableVisualStyles();
    Application.SetCompatibleTextRenderingDefault(false);
    Application.Run(new Form());
}
Is it possible to spin off a GUI and return after Application.Run()?
A: The Application.Run method displays one (or more) forms and initiates the standard message loop, which runs until all the forms are closed. You cannot force a return from that method except by closing all your forms or forcing an application shutdown. You can, however, pass an ApplicationContext (instead of a new Form()) to the Application.Run method, and the ApplicationContext can be used to launch several forms at once.
Your application will only end when all of those are closed. See here: http://msdn.microsoft.com/en-us/library/system.windows.forms.application.run.aspx Also, any forms that you Show non-modally will continue to run alongside your main form, which will enable you to have more than one window that does not block the others. I believe this is actually what you are trying to accomplish.
A: I'm sure this is possible if you hack at it hard enough, but I'd suggest it is not a good idea. 'Windows' (that you see on the screen) are highly coupled to processes. That is, each process which displays any GUI is expected to have a message loop, which processes all of the messages which are involved with creating and managing windows (things like 'clicked the button', 'closed the app', 'redraw the screen' and so on). Because of this, it is more or less assumed that if you have any message loop, it must be available for the lifetime of your process. For example, Windows might send you a 'quit' message, and you need to have a message loop available to handle that, even if you've got nothing on the screen. Your best bet is to do it like this:
*Make a fake form, which is never shown, to be your 'main app'.
*At start up, call Application.Run and pass in this fake form.
*Do your work in another thread, and fire events at the main thread when you need to do GUI stuff.
A: I'm not sure if this is right, however I remember running Windows Forms from a console application by just newing the form and calling newForm.Show() on it. If your components use that instead of Application.Run(), then the new form shouldn't block. Of course the component will be responsible for maintaining a reference to the forms it creates.
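For what it's worth, here is a sketch of the 'message loop per component' variant the thread keeps circling: the component starts its own thread and runs the loop there, so DoStuff() returns at once. In my experience the two details most often missed are the STA apartment state and the background flag; whether this design beats a single shared loop is exactly the debate above.
using System.Threading;
using System.Windows.Forms;

public void DoStuff()
{
    // Run this component's GUI on its own thread so DoStuff() returns
    // immediately while the message loop blocks only that thread.
    Thread uiThread = new Thread(() =>
    {
        Application.Run(new Form()); // blocks the new thread, not the caller
    });
    uiThread.SetApartmentState(ApartmentState.STA); // WinForms wants STA
    uiThread.IsBackground = true; // don't keep the process alive on exit
    uiThread.Start();             // returns right away
}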
{ "language": "en", "url": "https://stackoverflow.com/questions/2872", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "26" }
Q: Choosing a static code analysis tool I'm working on a project where I'm coding in C in a UNIX environment. I've been using the lint tool to check my source code. Lint has been around a long time (since 1979), can anyone suggest a more recent code analysis tool I could use ? Preferably a tool that is free. A: You can use cppcheck. It is an easy to use static code analysis tool.For example: cppcheck --enable=all . will check all C/C++ files under the current folder. A: I recently compiled a list of all the static analysis tools I had at my disposal, I am still in the process of evaluating them all. Note, these are mostly security analysis tools. * *splint *RATS *SMATCH *Uno A: We've been using Coverity Prevent to check out C++ source code. It's not a free tool (although I believe they offer free scanning for open source projects), but it's one of the best static analysis tools you'll find. I've heard it's even more impressive on C than on C++, but it's helped us avoid quite a number of bugs so far. A: Don't overlook the compiler itself. Read the compiler's documentation and find all the warnings and errors it can provide, and then enable as many as make sense for you. Also make sure to tell your compiler to treat warnings like errors so you're forced to fix them right away (-Werror on gcc). By the way, don't be fooled -Wall on gcc does not enable all warnings. You may want to check valgrind (free!) — it "automatically detect[s] many memory management and threading bugs, and profile[s] your programs in detail." It isn't a static checker, but it's a great tool! A: Lint-like tools generally suffer from a "false alarm" problem: they report a lot more issues than really exist. If the proportion of genuinely-useful warnings is too low, the user learns to just ignore the tool. More modern tools expend some effort to focus on the most likely/interesting warnings. A: For C code, you definitely should definitely use Flexelint. I used it for nearly 15 years and swear by it. One of the really great features it has is that warnings can be selectively turned off and on via comments in the code ("/* lint -e123*/"). This turned out to be a powerful documentation tool when you wanted to something out of the ordinary. "I am turning off warning X, therefore, there is some good reason I'm doing X." For anybody into interesting C/C++ questions, look at some of their examples on their site and see if you can figure out the bugs without looking at the hints. A: I've heard good things about clang static analyzer, which IIRC uses LLVM as it's backend. If that's implemented on your platform, that might be a good choice. From what I understand, it does a bit more than just syntax analysis. "Automatic Bug Finding", for instance. A: You might find the Uno tool useful. It's one of the few free non-toy options. It differs from lint, Flexelint, etc. in focusing on a small number of "semantic" errors (null pointer derefs, out-of-bounds array indices, and use of uninitialized variables). It also allows user-defined checks, like lock-unlock discipline. I'm working towards a public release of a successor tool, Orion (CONTENT NOT AVAILABLE ANYMORE) A: PC-lint/Flexelint are very powerful and useful static analysis tools, and highly configurable, though sadly not free. When first using a tool like this, they can produce huge numbers of warnings, which can make it hard to differentiate between major and minor ones. 
Therefore, it is best to start using the tool on your code as early in the project as possible, and then to run it on your code as often as possible, so that you can deal with new warnings as they come up. With continual use like this, you soon learn how to write your code in a way which conforms to the rules applied by the tool. Because of this, I prefer tools like Lint which run relatively quickly, and so encourage continual use, rather than the more cumbersome tools which you may end up using less often, if at all.
A: You can try CppDepend, a pretty complete static analyzer available on Windows and Linux, through a VS plugin, an IDE or the command line, and it's free for open source contributors.
A: lint is constantly updated... so why would you want a more recent one? BTW, Flexelint is lint.
A: G'day, I totally agree with the suggestions to read and digest what the compiler is telling you after setting -Wall. A good static analysis tool for security is FlawFinder, written by David Wheeler. It does a good job looking for various security exploits. However, it doesn't replace having a knowledgeable someone read through your code. As David says on his web page, "A fool with a tool is still a fool!" cheers, Rob
A: I've found that it's generally best to use multiple static analysis tools to find bugs. Every tool is designed differently, and they can find very different things from each other. There are some good discussions in some of the talks here. It's from a conference held by the US Department of Homeland Security on static analysis.
A: Sparse is a computer software tool, already available on Linux, designed to find possible coding faults in the Linux kernel. There are two active projects of the Linux Verification Center aimed at improving the quality of loadable kernel modules.
*Linux Driver Verification (LDV) - a comprehensive toolset for static source code verification of Linux device drivers.
*KEDR Framework - an extensible framework for dynamic analysis and verification of kernel modules.
Another ongoing project is Linux File System Verification, which aims to develop a dedicated toolset for verification of Linux file system implementations.
A: There is a "-Weffc++" option for gcc which, according to the Mac OS X man page, will: Warn about violations of the following style guidelines from Scott Meyers' Effective C++ book: [snip] I know you asked about C, but this is the closest I know of.
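To make the comparison concrete, here is a small C fragment with two of the classic defects this thread keeps mentioning, an off-by-one write and an uninitialized read, which cppcheck and most of the tools above will flag. The invocations in the leading comment are ordinary usage; any flags beyond those shown are a matter of taste.
/* Check with, for example:
 *   gcc -Wall -Wextra -Werror -c defects.c
 *   cppcheck --enable=all defects.c
 */
#include <stdio.h>

int main(void)
{
    int values[4];
    int sum; /* never initialized */
    int i;

    for (i = 0; i <= 4; i++)  /* off-by-one: writes values[4] */
        values[i] = i;

    sum += values[0];         /* read of uninitialized 'sum' */
    printf("%d\n", sum);
    return 0;
}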
{ "language": "en", "url": "https://stackoverflow.com/questions/2873", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "69" }
Q: How to render a control to look like ComboBox with Visual Styles enabled? I have a control that is modelled on a ComboBox. I want to render the control so that the control border looks like that of a standard Windows ComboBox. Specifically, I have followed the MSDN documentation and all the rendering of the control is correct except for rendering when the control is disabled. Just to be clear, this is for a system with Visual Styles enabled. Also, all parts of the control render properly except the border around a disabled control, which does not match the disabled ComboBox border colour. I am using the VisualStyleRenderer class. MSDN suggests using the VisualStyleElement.TextBox element for the TextBox part of the ComboBox control, but a standard disabled TextBox and a standard disabled ComboBox draw slightly differently (one has a light grey border, the other a light blue border). How can I get correct rendering of the control in a disabled state?
A: I'm not 100% sure if this is what you are looking for, but you should check out the VisualStyleRenderer in the System.Windows.Forms.VisualStyles namespace.
*VisualStyleRenderer class (MSDN)
*How to: Render a Visual Style Element (MSDN)
*VisualStyleElement.ComboBox.DropDownButton.Disabled (MSDN)
Since VisualStyleRenderer won't work if the user doesn't have visual styles enabled (he/she might be running 'classic mode' or an operating system prior to Windows XP), you should always have a fallback to the ControlPaint class.
// Create the renderer.
if (VisualStyleInformation.IsSupportedByOS && VisualStyleInformation.IsEnabledByUser)
{
    renderer = new VisualStyleRenderer(
        VisualStyleElement.ComboBox.DropDownButton.Disabled);
}
and then do something like this when drawing:
if (renderer != null)
{
    // Use visual style renderer.
}
else
{
    // Use ControlPaint renderer.
}
Hope it helps!
A: Are any of the ControlPaint methods useful for this? That's what I usually use for custom-rendered controls.
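For completeness, here is a sketch of the two-path OnPaint that answer describes, written for a class derived from Control. It uses only the elements named above plus ControlPaint fallbacks, and it inherits the question's caveat: the TextBox element's disabled border is grey rather than the ComboBox's light blue, so treat it as a starting point rather than the exact fix. The button rectangle arithmetic is a rough guess, not a measured system metric.
protected override void OnPaint(PaintEventArgs e)
{
    Rectangle buttonRect = new Rectangle(Width - 18, 1, 17, Height - 2);

    if (Application.RenderWithVisualStyles)
    {
        // Caveat from the question: this element's disabled border is
        // grey, while a real ComboBox's is light blue.
        var border = new VisualStyleRenderer(VisualStyleElement.TextBox.TextEdit.Disabled);
        border.DrawBackground(e.Graphics, ClientRectangle);

        var button = new VisualStyleRenderer(VisualStyleElement.ComboBox.DropDownButton.Disabled);
        button.DrawBackground(e.Graphics, buttonRect);
    }
    else
    {
        // Classic-mode fallback via ControlPaint, as suggested above.
        ControlPaint.DrawBorder3D(e.Graphics, ClientRectangle, Border3DStyle.Sunken);
        ControlPaint.DrawComboButton(e.Graphics, buttonRect, ButtonState.Inactive);
    }
}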
{ "language": "en", "url": "https://stackoverflow.com/questions/2874", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "35" }
Q: Text Editor For Linux (Besides Vi)? Let me preface this question by saying I use TextMate on Mac OSX for my text needs and I am in love with it. Anything comparable on the Linux platform? I'll mostly use it for coding python/ruby. Doing a google search yielded outdated answers. Edit: Since there has been some concern about the 'merit' of this question. I am about to start a new Ruby Programming Project in Linux and before I got started I wanted to make sure I had the right tools to do the job. Edit #2: I use VIM on a daily basis -- all . the . time. I enjoy using it. I was just looking for some alternatives. A: I like the versatility of jEdit (http://www.jedit.org), its got a lot of plugins, crossplatform and has also stuff like block selection which I use all the time. The downside is, because it is written in java, it is not the fastest one. A: I find Geany (http://geany.uvena.de/) quite good. A: I use pico or nano as my "casual" text editor in Linux/Solaris/etc. It's easy to come to grips with, and whilst you lose a couple of rows of text to the menu, at least it's easy to see how to exit, etc. You can even extend nano, I think, and add syntax highlighting. A: Alternative text editors? Try Diakonos, "a Linux editor for the masses". The default keyboard mapping is as expected for cut, copy, paste, undo, open, save, etc. A: Emacs is a wonderful text editor. It has huge power once you become a power user. You can access a shell, have as many files open as you want in as many sub-windows and an extremely powerful scripting support that lets you add all kinds of neat features. I have been using a ruby-mode which adds syntax highlighting and whatnot to ruby, and the same exists for every major language. If you keep at it, you can use exclusively the keyboard and never touch the mouse, which increases your editing speed by a significant margin. If you want to start with something a lot more basic though, gedit is nice... it has built in syntax highlighting as well for most languages based on the filename extension. It comes with the OS as well (though emacs you can easily install with apt-get or some similar package finder utility). UPDATE: I think gedit is exclusively GUI based though, so it would be useful to learn emacs in case you are stuck with just a shell (it is fully featured in both shell and graphical mode). FURTHER UPDATE: Just FYI, I am not trying to push Emacs over Vim, it's just what I use, and it's a great editor (as I'm sure Vim is too). It is daunting at first (as I'm sure Vim is too), but the question was about text editors on Linux besides vi... Emacs seems the logical choice to me, but gedit is a great simple text editor with some nice features if that's all you are looking for. A: When I searched for TextMate alternative for Linux, I ended up using Geany. It's not as powerfull, but still nice to work with. Great replacement for Kate. A: On Mac OS X, I have used BBEdit since the early 1990's, so I use that as my reference for all other editors. I sometimes use BBEdit to edit files on a Linux box using ftp mode, and that works very well if you have a fast network connection to the Linux box. I learned emacs two years ago because the rest of the programming team I joined uses it. I find emacs powerful but annoyingly old-fashioned in many ways, but once you have learned emacs, you can use it on any platform (Linux, OS X, Windows). This is the editor I use almost exclusively at work now. It is going to take me years to master all its features, though. 
I have also used gedit on Linux and found it very usable, but I haven't tried to use it as my primary editor for any project. I have a colleague at work who uses Komodo Edit 4.4 (free from activestate.com), running it on a Windows computer but using it in ftp mode so she can edit files on our Linux server. Komodo Edit has many nice features, but it takes a looonnnggg time to launch the first time. A: Don't forget NEdit! Small and light, but with syntax highlighting and macro record/replay. A: Best one besides Vi? Vim. A: Kate, the KDE Advanced Text Editor is quite good. It has syntax highlighting, block selection mode, terminal/console, sessions, window splitting both horizontal and vertical etc. A: SciTE http://www.scintilla.org/SciTE.html A: The best I've found is gedit unfortunately. Spend a few hours with it and you'll discover it's not so bad, with plugins and themes. You can use the command line to open documents in it. A: +1 for pico/nano -- lightweight, gets the job done, good help A: Friend of mine swears by jed, http://www.jedsoft.org/jed/ A: First I don't want to start a war.. I haven't used TextMate but I have used its Windows equivalent, e-TextEditor and I could understand why people love it. I've also tried many text editors and IDEs in my quest in finding the perfect text editor on Linux. I've tried jEdit, vim, emacs (although I used to love when I was at uni) and various others. On Linux I've settled with gEdit. Although I do use Komodo Edit from time to time. When I'm in a hurry I use gEdit purely because it is quicker than Komodo Edit. gEdit has plenty of plugins and comes with some nice colour schemes. I reckon once gEdit has a proper code-tidy facility it'll be cool. I think the only reason I use Komodo Edit is the project file facility. I have a friend who donated his 'Vi Improved' book in the hope that he can convert me to Vim. The book is over an inch thick and completely put me off in investing time in learning Vim.. Everytime I find an editor - I always find myself going back to gEdit. It is a frills-in-the-right-places editor. Give gEdit a go, it is the default text editor in Ubuntu and Linux Mint. Here is a link to an excellent guide on how to get gEdit to look and behave (somewhat) like TextMate: http://grigio.org/pimp_my_gedit_was_textmate_linux Hope that helps. A: I use sublime Text on linux. A: Try Scribes . It tries to be a TextMate replacement for Linux 2020 edit: forgotten in the mists of history A: http://xkcd.com/378/ A: I use SciTE very small and simple text editor. A: I agree with Mike, though I'm a Vim die-hard. I've been using GEdit quite frequently lately when I'm doing lightweight Ruby scripting. The standard editor (plus Ruby code snippets) is extremely usable and polished, and can provide a nice reprieve from full-strength, always-on programming editors. A: I've just started using OSX. Free editors of note that I've discovered: * *Komodo by ActiveState. No debugger or regex editor (although one comes with Python, i.e. redemo.py) in free version but perfectly usable. *ERIC, written in PyQT. *Eclipse with PyDev is my preferred option for editing Python on all platforms. Nice clean GUI, decent debugger. Good syntax parsing etc. A: I've used Emacs for 20 years. It's great and it works everywhere. I also have TextMate, which I use for some things on the Mac (HTML mode is great). If you want to do Ruby development, Netbeans supports Ruby and it also runs on all platforms. 
http://www.netbeans.org/features/ruby/index.html I've seen some blogs, etc claiming that it's the best Ruby environment available. A: I use joe for simple (and not so simple) editing when I'm away from Eclipse. It uses the classic Wordstar keybindings- although I've never used Wordstar, it's a selling point for many people. It's easy, well-supported, light-weight and it has binaries available for everything. A: I love Kate because it has several interesting features (already cited) usually found in (heavier) IDEs. My favorite feature, however, is its terminal window that is very practical for quickly performing the save-compile-execute combo. Nedit is another valid option, packed with lots of features (and it hasn't lots of dependencies: that's a huge plus IMHO). For editing in a shell, when I cannot use VIM, I look immediately for pico or nano (but I would not recommend them for continuous development: for rapid editing they are perfect). A: If it's just you? Use what you want to use today; switch in mid-stream if you want. Is it a team? Try to be editor-agnostic. Set standards for white-space (are tabs allowed? How many spaces does a tab represent?), but otherwise allow anyone to use whichever editor they want. Is it a team doing pair-programming? That's where you may need a team-standard editor, just so that programmers can easily pass the keyboard. To help implement a standard white-space policy in a shop where one or more coders is using Emacs: You can tell Emacs about your white-space policy with some comments stuck at the bottom of every file source file. For example, # Local Variables: # tab-width: 2 # ruby-indent-level: 2 # indent-tabs-mode: nil # End: Anyone using emacs (or xemacs) on that file will automatically get the group standard indentation. A: Sublime Text 2 is my favorite. Intuitively understandable and quite powerful. A: You can try Emacs with ruby-mode, Rinari (for Rails) and yasnippet which provides automatic snippets like Textmate. A: I love TextMate on OSX. There is a kind of TextMate clone for Windows called simply "E" (e-texteditor.com). Its author promised that there will be a Linux version soon. Even if you already picked your favourite, TextMate (or E) is worth a look, simply because it is different. I would say that there are mainly four different families of text editors: * *classic menubar-based editors like WinEdit, Gedit or BBEdit *Emacs and its brethren XEmacs, Aquamacs etc. *VI / Vim / Cream and the like *TextMate and E You can differenciate between these families by their different paradigms of usage: * *Classic editors rely mainly on a menubar and some Ctrl-key shortcuts. *Emacs-style editing uses highly sophisticated keyboard commands like C-x-s and even whole words to evoke commands. *VI is modebased and is operated by single-key commands or whole words. *TextMate is based on Snippets and classic shortcuts. Emacs and TextMate are also easily extensible by user-created scripts in Lisp (Emacs) or any other command-line-language (TextMate). (Classic editors and VI are also extendable, but the effort is usually considerably bigger) I would recommend that everyone tried at least one good example of each of these families (if possible) and find out what suits them best. A: TextMate is a great editor, and there is a way to replicate some of the functionality in GEdit. Check the article out here: http://rubymm.blogspot.com/2007/08/make-gedit-behave-roughly-like-textmate.html to modify GEdit to behave like TextMate. 
A: Vim is a nice upgrade for Vi, offering decent features and a more usable set of keybindings and default behaviour. However, graphical versions like GVim, KVim and even Cream are extremely lacking in my opinion. I've been using Geany a lot lately, but it also has its shortcomings. I just can't find something in the league of Programmers Notepad, Smultron or TextMate on Linux. A shame; since I want to live in an all open source cyberworld, I'm stuck hopping from one almost-right editor to another.
A: You could give bluefish a try. It has a bunch of nice features for website work and syntax files for most every language. http://bluefish.openoffice.nl/ If on Windows, give Crimson Editor a try: http://www.crimsoneditor.com/ It's been a long while since I ran Windows, but IIRC 'official' development has stopped on it, and the community has taken up a fork of it called Emerald or some such. Crimson Editor is still very capable as is. Both bluefish and Crimson Editor have project management abilities, FTP abilities, macros, etc.
A: I personally use MacVim, which is basically a GVim for Mac OS X. However, I have been reading a lot about Redcar, which is a text editor for Linux that shares a lot of the TextMate functionality. Check out the links below. Redcar LURG Lecture on Redcar
A: I just thought I would recommend Ninja IDE, open source and all. I use it for all my Python development nowadays when I want a GUI to work with, and it looks the same when I am on my Windows and Linux machines. Ninja IDE
A: For a multiple-tab text editor, "medit" is the best; it's like Notepad++ on Windows. For a stylish, good-looking one, the "SciTE text editor" is best.
{ "language": "en", "url": "https://stackoverflow.com/questions/2898", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "51" }
Q: MySQL/Apache Error in PHP MySQL query I am getting the following error: Access denied for user 'apache'@'localhost' (using password: NO) When using the following code:
<?php
include("../includes/connect.php");

$query = "SELECT * from story";

$result = mysql_query($query) or die(mysql_error());

echo "<h1>Delete Story</h1>";

if (mysql_num_rows($result) > 0) {
    while($row = mysql_fetch_row($result)){
        echo '<b>'.$row[1].'</b><span align="right"><a href="../process/delete_story.php?id='.$row[0].'">Delete</a></span>';
        echo '<br /><i>'.$row[2].'</i>';
    }
} else {
    echo "No stories available.";
}
?>
The connect.php file contains my MySQL connect calls that are working fine with my INSERT queries in another portion of the software. If I comment out the $result = mysql_query line, then it goes through to the else statement. So, it is that line or the content in the if. I have been searching the net for any solutions, and most seem to be related to too many MySQL connections or that the user I am logging into MySQL as does not have permission. I have checked both. I can still perform my other queries elsewhere in the software, and I have verified that the account has the correct permissions.
If it is saying 'apache@localhost' the username is not getting passed correctly to the MySQL connection. 'apache' is normally the user that runs the httpd process (at least on Redhat-based systems) and if no username is passed during the connection MySQL uses whomever is calling for the connection. If you do the connection right in your script, not in a called file, do you get the same error? A: Just to check, if you use just this part you get an error? <?php include("../includes/connect.php"); $query = "SELECT * from story"; $result = mysql_query($query) or die(mysql_error()); If so, do you still get an error if you copy and paste one of those Inserts into this page, I am trying to see if it's local to the page or that actual line. Also, can you post a copy of the connection calls (minus passwords), unless the inserts use exactly the same syntax as this example. A: Does the apache user require a password to connect to the database? If so, then the fact that it says "using password: NO" would lead me to believe that the code is trying to connect without a password. If, however, the apache user doesn't require a password, a double-check of the permissions may be a good idea (which you mentioned you already checked). It may still be beneficial to try executing something like this at a mysql prompt: GRANT ALL PRIVILEGES ON `*databasename*`.* to 'apache'@'localhost'; That syntax should be correct. Other than that, I'm just as stumped as you are. A: If indeed you are able to insert using the same connection calls, your problem most likely lies in the user "apache" not having SELECT permissions on the database. If you have phpMyAdmin installed you can look at the permissions for the user in the Privileges pane. phpMyAdmin also makes it very easy to modify the permissions. If you only have access to the command line, you can check the permissions from the mysql database. You'll probably need to do something like: GRANT SELECT ON myDatabase.myTable TO 'apache'@'localhost'; A: Just to check, if you use just this part you get an error? If so, do you still get an error if you copy and paste one of those Inserts into this >page, I am trying to see if it's local to the page or that actual line. Also, can you post a copy of the connection calls (minus passwords), unless the inserts >use exactly the same syntax as this example. Here is what is in the connection.php file. I linked to the file through an include in the same fashion as where I execute the INSERT queries elsewhere in the code. $conn = mysql_connect("localhost", ******, ******) or die("Could not connect"); mysql_select_db("adbay_com_-_cms") or die("Could not select database"); I will try the working INSERT query in this area to check that out. As to the others posting about the password access. I did, as stated in my first posting, check permissions. I used phpMyAdmin to verify that the permissions for the user account I was using were correct. And if it matters at all, apache@localhost is not the name of the user account that I use to get into the database. I don't have any user accounts with the name apache in them at all for that matter. A: You can do one of the following: * *Add the user "apache" and setup its privileges from phpmyadmin or using mysql on a shell *Tell php to run mysql_connect as another user, someone who already has the privileges needed (but maybe not root), look for mysql.default_user in your php.ini file. 
A: Did you remember to do: flush privileges; If the user is not set up then it will give the 'apache'@'localhost' error.
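To make the scoping advice above concrete, here is a sketch of what connect.php could look like with the credentials passed in explicitly, so the connection can never silently fall back to 'apache'@'localhost' with no password. The function name and credentials are placeholders, and the legacy mysql_* API is used only because it matches the question's code; mysqli or PDO would be the modern choice.
<?php
// connect.php -- placeholder names; substitute your own credentials.
function db_connect($host, $user, $pass, $db)
{
    // Credentials arrive as parameters, not globals, so the function
    // cannot accidentally call mysql_connect() with empty values.
    $conn = mysql_connect($host, $user, $pass);
    if (!$conn) {
        die('Could not connect: ' . mysql_error());
    }
    if (!mysql_select_db($db, $conn)) {
        die('Could not select database: ' . mysql_error());
    }
    return $conn;
}

$conn = db_connect('localhost', 'myuser', 'mypass', 'adbay_com_-_cms');
?>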
{ "language": "en", "url": "https://stackoverflow.com/questions/2900", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "29" }
Q: How to Test Web Code? Does anyone have some good hints for writing test code for database-backend development where there is a heavy dependency on state? Specifically, I want to write tests for code that retrieve records from the database, but the answers will depend on the data in the database (which may change over time). Do people usually make a separate development system with a 'frozen' database so that any given function should always return the exact same result set? I am quite sure this is not a new issue, so I would be very interested to learn from other people's experience. Are there good articles out there that discuss this issue of web-based development in general? I usually write PHP code, but I would expect all of these issues are largely language and framework agnostic. A: You should look into DBUnit, or try to find a PHP equivalent (there must be one out there). You can use it to prepare the database with a specific set of data which represents your test data, and thus each test will no longer depend on the database and some existing state. This way, each test is self contained and will not break during further database usage. Update: A quick google search showed a DB unit extension for PHPUnit. A: If you're mostly concerned with data layer testing, you might want to check out this book: xUnit Test Patterns: Refactoring Test Code. I was always unsure about it myself, but this book does a great job to help enumerate the concerns like performance, reproducibility, etc. A: I guess it depends what database you're using, but Red Gate (www.red-gate.com) make a tool called SQL Data Generator. This can be configured to fill your database with sensible looking test data. You can also tell it to always use the same seed in its random number generator so your 'random' data is the same every time. You can then write your unit tests to make use of this reliable, repeatable data. As for testing the web side of things, I'm currently looking into Selenium (selenium.openqa.org). This appears to be a cross-browser capable test suite which will help you test functionality. However, as with all of these web site test tools, there's no real way to test how well these things look in all of the browsers without casting a human eye over them! A: We use an in-memory database (hsql : http://hsqldb.org/). Hibernate (http://www.hibernate.org/) makes it easy for us to point our unit tests at the testing db, with the added bonus that they run as quick as lightning.. A: I have the exact same problem with my work and I find that the best idea is to have a PHP script to re-create the database and then a separate script where I throw crazy data at it to see if it breaks it. I have not ever used any Unit testing or suchlike so cannot say if it works or not sorry. A: If you can setup the database with a known quantity prior to running the tests and tear down at the end, then you'll know what data you are working with. Then you can use something like Selenium to easily test from your UI (assuming web-based here, but there are a lot of UI testing tools out there for other UI-flavours) and detect the presence of certain records pulled back from the database. It's definitely worth setting up either a test version of the database - or make your test scripts populate the database with known data as part of the tests. 
A: You could try http://selenium.openqa.org/ which is more for GUI testing than for data layer testing, but it does record your actions, which can then be played back to automate tests across different platforms.
A: Here's my strategy (I use JUnit, but I'm sure there's a way to do the equivalent in PHP): I have a method that runs before all of the unit tests for a specific DAO class. It puts the dev database into a known state (adds all test data, etc.). As I run tests, I keep track of any data added to the known state. This data is cleaned up at the end of each test. After all the tests for the class have run, another method removes all the test data in the dev database, leaving it in the state it was in before the tests were run. It's a bit of work to do all this, but I usually write the methods in a DBTestCommon class where all of my DAO test classes can get to them.
A: I would propose to use three databases. One production database, one development database (filled with some meaningful data for each developer) and one testing database (with empty tables and maybe a few rows that are always needed). A way to test database code is:
*Insert a few rows (using SQL) to initialize state
*Run the function that you want to test
*Compare expected with actual results. Here you could use your normal unit testing framework
*Clean up the rows that were changed (so the next run won't see the previous run)
The cleanup could be done in a standard way (of course, only in the testing database) with DELETE FROM table.
A: In general I agree with Peter, but for creating and deleting test data I wouldn't use SQL directly. I prefer to use some CRUD API that is used in the product to create data as similar to production as possible...
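Here is a sketch of that insert/run/compare/clean-up cycle in PHPUnit style. The table, columns and the StoryRepository class are made up for illustration, and the connection string assumes a dedicated testing database as proposed above.
<?php
class StoryRepositoryTest extends PHPUnit_Framework_TestCase
{
    private $db;

    protected function setUp()
    {
        // Put the *testing* database into a known state.
        $this->db = new PDO('mysql:host=localhost;dbname=app_test', 'test', 'test');
        $this->db->exec("DELETE FROM story");
        $this->db->exec("INSERT INTO story (id, title) VALUES (1, 'First')");
    }

    public function testFindAllReturnsSeededRows()
    {
        $repo = new StoryRepository($this->db); // hypothetical class under test
        $stories = $repo->findAll();
        $this->assertEquals(1, count($stories));
        $this->assertEquals('First', $stories[0]['title']);
    }

    protected function tearDown()
    {
        // Leave no state behind for the next test.
        $this->db->exec("DELETE FROM story");
    }
}
?>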
{ "language": "en", "url": "https://stackoverflow.com/questions/2913", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "17" }
Q: How can I detect if a browser is blocking a popup? Occasionally, I've come across a webpage that tries to pop open a new window (for user input, or something important), but the popup blocker prevents this from happening. What methods can the calling window use to make sure the new window launched properly?
A: I tried a number of the examples above, but I could not get them to work with Chrome. This simple approach seems to work with Chrome 39, Firefox 34, Safari 5.1.7, and IE 11. Here is the snippet of code from our JS library.
openPopUp: function(urlToOpen) {
    var popup_window = window.open(urlToOpen, "myWindow", "toolbar=no, location=no, directories=no, status=no, menubar=no, scrollbars=yes, resizable=yes, copyhistory=yes, width=400, height=400");
    try {
        popup_window.focus();
    } catch (e) {
        alert("Pop-up Blocker is enabled! Please add this site to your exception list.");
    }
}
A: Update: Popups exist from really ancient times. The initial idea was to show additional content without closing the main window. As of now, there are other ways to do that: JavaScript is able to send requests to the server, so popups are rarely used. But sometimes they are still handy. In the past, evil sites abused popups a lot. A bad page could open tons of popup windows with ads. So now most browsers try to block popups and protect the user. Most browsers block popups if they are called outside of user-triggered event handlers like onclick. If you think about it, that’s a bit tricky. If the code is directly in an onclick handler, then that’s easy. But what if the popup opens in setTimeout? Try this code:
// open after 3 seconds
setTimeout(() => window.open('http://google.com'), 3000);
The popup opens in Chrome, but gets blocked in Firefox. …And this works in Firefox too:
// open after 1 second
setTimeout(() => window.open('http://google.com'), 1000);
The difference is that Firefox treats a timeout of 2000ms or less as acceptable, but after that it removes the “trust”, assuming that now it’s “outside of the user action”. So the first one is blocked, and the second one is not. Original answer, which was current in 2012: This solution for popup blocker checking has been tested in FF (v11), Safari (v6), Chrome (v23.0.127.95) & IE (v7 & v9). Update the displayError function to handle the error message as you see fit.
var popupBlockerChecker = {
    check: function(popup_window) {
        var scope = this;
        if (popup_window) {
            if (/chrome/.test(navigator.userAgent.toLowerCase())) {
                setTimeout(function () {
                    scope.is_popup_blocked(scope, popup_window);
                }, 200);
            } else {
                popup_window.onload = function () {
                    scope.is_popup_blocked(scope, popup_window);
                };
            }
        } else {
            scope.displayError();
        }
    },
    is_popup_blocked: function(scope, popup_window) {
        if ((popup_window.innerHeight > 0) == false) {
            scope.displayError();
        }
    },
    displayError: function() {
        alert("Popup Blocker is enabled! Please add this site to your exception list.");
    }
};
Usage:
var popup = window.open("http://www.google.ca", '_blank');
popupBlockerChecker.check(popup);
Hope this helps! :)
A: If you use JavaScript to open the popup, you can use something like this:
var newWin = window.open(url);
if (!newWin || newWin.closed || typeof newWin.closed == 'undefined') {
    //POPUP BLOCKED
}
A: One "solution" that will always work regardless of browser company or version is to simply put a warning message on the screen, somewhere close to the control that will create a pop-up, that politely warns the user that the action requires a pop-up and to please enable them for the site.
I know it's not fancy or anything, but it can't get any simpler, it only requires about 5 minutes of testing, and then you can move on to other nightmares. Once the user has allowed pop-ups for your site, it would also be considerate if you don't overdo the pop-ups. The last thing you want to do is annoy your visitors.
A: I've tried lots of solutions, but the only one I could come up with that also worked with uBlock Origin was utilising a timeout to check the closed status of the popup.
function popup (url, width, height) {
    const left = (window.screen.width / 2) - (width / 2)
    const top = (window.screen.height / 2) - (height / 2)
    let opener = window.open(url, '', `menubar=no, toolbar=no, status=no, resizable=yes, scrollbars=yes, width=${width},height=${height},top=${top},left=${left}`)

    window.setTimeout(() => {
        if (!opener || opener.closed || typeof opener.closed === 'undefined') {
            console.log('Not allowed...') // Do something here.
        }
    }, 1000)
}
Obviously this is a hack, like all solutions to this problem. You need to provide enough time in your setTimeout to account for the initial opening and closing, so it's never going to be thoroughly accurate. It will be a matter of trial and error. Add this to your list of attempts.
A: I combined @Kevin B and @DanielB's solutions. This is much simpler.
var isPopupBlockerActivated = function(popupWindow) {
    if (popupWindow) {
        if (/chrome/.test(navigator.userAgent.toLowerCase())) {
            try {
                popupWindow.focus();
            } catch (e) {
                return true;
            }
        } else {
            popupWindow.onload = function() {
                return (popupWindow.innerHeight > 0) === false;
            };
        }
    } else {
        return true;
    }
    return false;
};
Usage:
var popup = window.open('https://www.google.com', '_blank');
if (isPopupBlockerActivated(popup)) {
    // Do what you want.
}
A: A simple approach, if you own the child code as well, would be to create a simple variable in its html as below:
<script>
    var magicNumber = 49;
</script>
And then check its existence from the parent, with something similar to the following:
// Create the window with login URL.
let openedWindow = window.open(URL_HERE);

// Check this magic number after some time; if it exists, then your window exists.
setTimeout(() => {
    if (openedWindow["magicNumber"] !== 49) {
        console.error("Window open was blocked");
    }
}, 1500);
We wait for some time to make sure that the webpage has been loaded, and then check its existence. Obviously, if the window did not load after 1500ms, then the variable would still be undefined.
A: For some popup blockers this doesn't work, but I use it as a basic solution and add try {} catch:
try {
    const newWindow = window.open(url, '_blank');
    if (!newWindow || newWindow.closed || typeof newWindow.closed == 'undefined') {
        return null;
    }
    (newWindow as Window).window.focus();
    newWindow.addEventListener('load', function () {
        console.info('Please allow popups for this website')
    })
    return newWindow;
} catch (e) {
    return null;
}
This will try to call the addEventListener function, but if the popup is not open it will break and go to the catch block, which asks whether it's an object or null, and then the rest of the code runs.
A: By using the onbeforeunload event we can check as follows:
function popup() {
    var chk = false;
    var win1 = window.open();
    win1.onbeforeunload = () => {
        var win2 = window.open();
        win2.onbeforeunload = () => {
            chk = true;
        };
        win2.close();
    };
    win1.close();
    return chk;
}
It will open 2 blank windows in the background; the function returns a boolean value.
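As a consolidated sketch of the checks that recur throughout this thread (a null return, the closed flag, a zero inner height) behind one delayed callback: the 500 ms delay is a judgment call, since the answers disagree on timing, and the try/catch guards against cross-origin windows that refuse to expose innerHeight.
function openPopup(url, onBlocked) {
    var popup = window.open(url, '_blank');

    // Some blockers return null immediately; others hand back a window
    // that is closed or zero-sized a moment later, hence the delay.
    setTimeout(function () {
        var blocked;
        try {
            blocked = !popup || popup.closed || popup.innerHeight === 0;
        } catch (e) {
            // Cross-origin access to innerHeight can throw; fall back
            // to the properties every window exposes.
            blocked = !popup || popup.closed;
        }
        if (blocked) {
            onBlocked();
        }
    }, 500);

    return popup;
}

// Usage:
openPopup('https://example.com', function () {
    alert('Pop-up blocked! Please allow pop-ups for this site.');
});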
{ "language": "en", "url": "https://stackoverflow.com/questions/2914", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "152" }
Q: Getting started with a custom JXTA PeerGroup I have been working with JXTA 2.3 for the last year or so for a peer-to-peer computing platform I am developing. I am migrating to JXTA 2.5 and in the process I am trying to clean up a lot of my use of JXTA. For the most part, I approached JXTA with a just make it work attitude. I used it to jumpstart creating and managing my peer-to-peer overlay network and providing basic communication services. I would like to use it in a more JXTA way since I am making changes to move to 2.5 anyway. My first step would be a basic creation of a custom PeerGroup. I see some new new mechanisms that are using the META-INF.services infrastructure of Java. Should I be listing a related PeerGroup implementing object here with a GUID in net.jxta.platform.Module? As I understand it, if I do this, when a group with a spec ID matching the GUID is encountered and joined or created it should automatically use the matching object. I should be able to just manually tie a PeerGroup object to the group but this new method using META-INF seems to be a lot easier to manage. Does anyone have any pointers or examples of using this infrastructure for PeerGroup implementation? Also, some general information on the META-INF.services mechanism in Java would be helpful. A: The META-INF.services stuff is known by its class name in the API: ServiceLoader. A Google search for ServiceLoader yields some information. I am not really familiar with it, but sometimes it's all about knowing the right search keywords.
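Since the answer stops at naming ServiceLoader, here is a minimal sketch of the META-INF/services mechanism itself. Everything below is illustrative: the interface and implementation names are invented, and whether JXTA 2.5 resolves PeerGroup spec IDs through exactly this path is worth verifying against its documentation.
// A sketch of the plain META-INF/services mechanism the answer points to.

// File: com/example/spi/GreetingService.java (hypothetical service interface)
package com.example.spi;
public interface GreetingService {
    String greet(String name);
}

// File: com/example/impl/FriendlyGreeting.java (hypothetical provider)
package com.example.impl;
public class FriendlyGreeting implements com.example.spi.GreetingService {
    public String greet(String name) { return "Hello, " + name; }
}

// In the provider's jar, a text file named
//   META-INF/services/com.example.spi.GreetingService
// containing the single line:
//   com.example.impl.FriendlyGreeting

// Discovery at runtime via java.util.ServiceLoader (available since Java 6):
import java.util.ServiceLoader;
public class Main {
    public static void main(String[] args) {
        ServiceLoader<com.example.spi.GreetingService> loader =
                ServiceLoader.load(com.example.spi.GreetingService.class);
        for (com.example.spi.GreetingService s : loader) {
            System.out.println(s.greet("JXTA"));
        }
    }
}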
{ "language": "en", "url": "https://stackoverflow.com/questions/2931", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "25" }
Q: Create a directly-executable cross-platform GUI app using Python Python works on multiple platforms and can be used for desktop and web applications, thus I conclude that there is some way to compile it into an executable for Mac, Windows and Linux. The problem being I have no idea where to start or how to write a GUI with it, can anybody shed some light on this and point me in the right direction please? A: Another system (not mentioned in the accepted answer yet) is PyInstaller, which worked for a PyQt project of mine when py2exe would not. I found it easier to use. http://www.pyinstaller.org/ Pyinstaller is based on Gordon McMillan's Python Installer. Which is no longer available. A: Since python is installed on nearly every non-Windows OS by default now, the only thing you really need to make sure of is that all of the non-standard libraries you use are installed. Having said that, it is possible to build executables that include the python interpreter, and any libraries you use. This is likely to create a large executable, however. MacOS X even includes support in the Xcode IDE for creating full standalone GUI apps. These can be run by any user running OS X. A: For the GUI itself: PyQT is pretty much the reference. Another way to develop a rapid user interface is to write a web app, have it run locally and display the app in the browser. Plus, if you go for the Tkinter option suggested by lubos hasko you may want to try portablepy to have your app run on Windows environment without Python. A: I'm not sure that this is the best way to do it, but when I'm deploying Ruby GUI apps (not Python, but has the same "problem" as far as .exe's are concerned) on Windows, I just write a short launcher in C# that calls on my main script. It compiles to an executable, and I then have an application executable. A: First you will need some GUI library with Python bindings and then (if you want) some program that will convert your python scripts into standalone executables. Cross-platform GUI libraries with Python bindings (Windows, Linux, Mac) Of course, there are many, but the most popular that I've seen in wild are: * *Tkinter - based on Tk GUI toolkit (de-facto standard GUI library for python, free for commercial projects) *WxPython - based on WxWidgets (popular, free for commercial projects) *Qt using the PyQt bindings or Qt for Python. The former is not free for commercial projects. The latter is less mature, but can be used for free. Complete list is at http://wiki.python.org/moin/GuiProgramming Single executable (all platforms) * *PyInstaller - the most active(Could also be used with PyQt) *fbs - if you chose Qt above Single executable (Windows) * *py2exe - used to be the most popular Single executable (Linux) * *Freeze - works the same way like py2exe but targets Linux platform Single executable (Mac) * *py2app - again, works like py2exe but targets Mac OS A: PySimpleGUI wraps tkinter and works on Python 3 and 2.7. It also runs on Qt, WxPython and in a web browser, using the same source code for all platforms. You can make custom GUIs that utilize all of the same widgets that you find in tkinter (sliders, checkboxes, radio buttons, ...). The code tends to be very compact and readable. 
#!/usr/bin/env python
import sys

if sys.version_info[0] >= 3:
    import PySimpleGUI as sg
else:
    import PySimpleGUI27 as sg

layout = [[ sg.Text('My Window') ],
          [ sg.Button('OK') ]]

window = sg.Window('My window').Layout(layout)
button, value = window.Read()
As explained in the PySimpleGUI Documentation, to build the .EXE file you run:
pyinstaller -wF MyGUIProgram.py
A:
# I'd use tkinter for python 3
import tkinter

tk = tkinter.Tk()
tk.geometry("400x300+500+300")
l = tkinter.Label(tk, text="")
l.pack()
e = tkinter.Entry(tk)
e.pack()

def click():
    l['text'] = 'You clicked the button'

b = tkinter.Button(tk, text="Click me", command=click)
b.pack()
tk.mainloop()
# After this I would use py2exe
# search for the use of this module on Stack Overflow
# otherwise I could edit this to let you know how to do it
py2exe
Then you should use py2exe, for example, to bring all the files needed to run the app into one folder, even if the user does not have Python on his PC (I am talking about Windows... for the Apple OS there is no need for an executable file, I think, as it comes with Python in it without any need of installing it).
Create this file:
*Create a setup.py with this code:
from distutils.core import setup
import py2exe
setup(console=['l4h.py'])
and save it in a folder.
*Put the program you want to make distributable in the same folder as setup.py, e.g. l4h.py (ps: change the name of the file from l4h to anything you want, that is just an example)
*Run cmd from that folder (on the folder, right click + shift and choose start cmd here)
*Write in cmd: python setup.py py2exe
*In the dist folder there are all the files you need
*You can zip it and distribute it
Pyinstaller
Install it from cmd:
pip install pyinstaller
Run it from cmd from the folder where the file is:
pyinstaller file.py
Update: Read this post to make an exe on Windows with pyinstaller the proper way, with one file and images in it: https://pythonprogramming.altervista.org/auto-py-to-exe-only-one-file-with-images-for-our-python-apps/
A: !!! KIVY !!! I was amazed seeing that no one mentioned Kivy!!! I once did a project using Tkinter; although they do advocate that it has improved a lot, it still gives me the feel of Windows 98, so I switched to Kivy. I have been following a tutorial series if it helps... Just to give an idea of how Kivy looks, see this (the project I am working on): And I have been working on it for barely a week now! The benefits of Kivy, you ask? Check this. The reason why I chose it is its look, and that it can be used on mobile as well.
A: An alternative tool to py2exe is bbfreeze which generates executables for windows and linux. It's newer than py2exe and handles eggs quite well. I've found it magically works better without configuration for a wide variety of applications.
A: There's also PyGTK, which is basically a Python wrapper for the Gnome Toolkit. I've found it easier to wrap my mind around than Tkinter, coming from pretty much no knowledge of GUI programming previously. It works pretty well and has some good tutorials. Unfortunately there isn't an installer for Python 2.6 for Windows yet, and may not be for a while.
A: You don't need to compile python for Mac/Windows/Linux. It is an interpreted language, so you simply need to have the Python interpreter installed on the system of your choice (it is available for all three platforms). As for a GUI library that works cross platform, Python's Tk/Tcl widget library works very well, and I believe it is sufficiently cross platform.
Tkinter is the python interface to Tk/Tcl From the python project webpage: Tkinter is not the only GuiProgramming toolkit for Python. It is however the most commonly used one, and almost the only one that is portable between Unix, Mac and Windows A: You can use appJar for basic GUI development. from appJar import gui num=1 def myfcn(btnName): global num num +=1 win.setLabel("mylabel", num) win = gui('Test') win.addButtons(["Set"], [myfcn]) win.addLabel("mylabel", "Press the Button") win.go() See documentation at appJar site. Installation is made with pip install appjar from command line. A: There's three things you could do: The first thing is to find a GUI Designer that can launch its code as standalone applications like .exe files. I use a version of MatDeck (for people using GUI Designers I recommend MD Python Designer) as I believe(I use another version so I'm not too sure.) it allows me to convert the code to a standalone applications and by having it as such, there is no need to install the software on every PC that's going to run the program. The second option is partially bypassing the problem, launch the GUI as a web page. This would give you the most compatibility as most if not all OS can utilize it. Once again, you would need a GUI Designer that can convert its components into a web compatible format, I've done it once and I used the same version of MatDeck(Visionary Deck), I would not recommend MD Python Designer this time as I don't know if it can turn its GUIs into websites using web assembly whereas Visionary Deck I've tried and tested. As with all things there are most likely other software this is just one I use frequently because I work a lot with Mathematics and Physics. The third option is also kind of bypassing the problem but do it in Tkinter and just ensure you have a Python IDE or just plain old Python and run the code, this will launch the GUI. This is a good solution and maybe the simplest but I wouldn't class it as the shortest or the best. If you only plan to switch between a few operating systems and computers this will probably be your best bet.
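Tying the two halves of this thread together, here is a minimal sketch: a Tkinter app (standard library only, so it runs unchanged on Windows, Mac and Linux) plus the PyInstaller invocation several answers recommend for producing a single executable. The file name app.py is a placeholder, and note that PyInstaller is not a cross-compiler; you build on each target OS separately.
# app.py -- minimal cross-platform GUI using only the standard library
import tkinter as tk

def main():
    root = tk.Tk()
    root.title("Hello")
    tk.Label(root, text="Hello from Tkinter").pack(padx=20, pady=10)
    tk.Button(root, text="Quit", command=root.destroy).pack(pady=10)
    root.mainloop()

if __name__ == "__main__":
    main()

# Build a single-file executable on the current platform:
#   pip install pyinstaller
#   pyinstaller --onefile --windowed app.py
# The result lands in ./dist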
{ "language": "en", "url": "https://stackoverflow.com/questions/2933", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "305" }