Q: How do I set up Public-Key Authentication? How do I set up Public-Key Authentication for SSH? A: For Windows this is a good introduction and guide. Here are some good ssh-agents for systems other than Linux. * *Windows - Pageant *OS X - SSHKeychain A: If you have SSH installed, you should be able to run: ssh-keygen Then go through the steps; you'll end up with two files, id_rsa and id_rsa.pub (the first is your private key, the second is your public key - the one you copy to remote machines). Then connect to the remote machine you want to log in to, and add the contents of that id_rsa.pub file to the file ~/.ssh/authorized_keys. Oh, and chmod 600 all the id_rsa* files (both locally and remote), so no other users can read them: chmod 600 ~/.ssh/id_rsa* Similarly, ensure the remote ~/.ssh/authorized_keys file is chmod 600 also: chmod 600 ~/.ssh/authorized_keys Then, when you do ssh remote.machine, it should ask you for the key's passphrase rather than your password on the remote machine. To make it nicer to use, you can use ssh-agent to hold the decrypted keys in memory - this means you don't have to type your keypair's passphrase every single time. To launch the agent, you run (the back-ticks substitute the agent's output, which eval then executes to set the environment variables) eval `ssh-agent` On some distros, ssh-agent is started automatically. If you run echo $SSH_AUTH_SOCK and it shows a path (probably in /tmp/), it's already set up, so you can skip the previous command. Then to add your key, you do ssh-add ~/.ssh/id_rsa and enter your passphrase. It's stored until you remove it (using the ssh-add -D command, which removes all keys from the agent)
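If you end up scripting the authorized_keys step rather than doing it by hand, here is a small, hedged Java (11+) sketch of just that step - appending the public key and tightening permissions. The paths are the usual defaults and are assumptions; in practice you would run this on the remote machine, or simply use ssh-copy-id.

import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;
import java.nio.file.attribute.PosixFilePermission;
import java.nio.file.attribute.PosixFilePermissions;
import java.util.Set;

public class InstallPublicKey {
    public static void main(String[] args) throws Exception {
        // Assumed default locations; produced earlier by ssh-keygen.
        Path sshDir = Paths.get(System.getProperty("user.home"), ".ssh");
        Path pubKey = sshDir.resolve("id_rsa.pub");
        Path authorizedKeys = sshDir.resolve("authorized_keys");

        // Append the one-line public key to authorized_keys (creating it if needed).
        String keyLine = Files.readString(pubKey).trim() + System.lineSeparator();
        Files.writeString(authorizedKeys, keyLine,
                StandardOpenOption.CREATE, StandardOpenOption.APPEND);

        // Equivalent of `chmod 600 ~/.ssh/authorized_keys`.
        Set<PosixFilePermission> ownerReadWrite = PosixFilePermissions.fromString("rw-------");
        Files.setPosixFilePermissions(authorizedKeys, ownerReadWrite);

        System.out.println("Installed key into " + authorizedKeys);
    }
}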
{ "language": "en", "url": "https://stackoverflow.com/questions/7260", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "63" }
Q: How can I identify in which Java Applet context I'm running without passing an ID? I'm part of a team that develops a pretty big Swing Java Applet. Most of our code is legacy and there are tons of singleton references. We've bunched all of them into a single "Application Context" singleton. What we now need is to create some way to separate the shared context (shared across all applets currently showing) and the non-shared context (specific to each applet currently showing). However, we don't have an ID at each of the locations that call to the singleton, nor do we want to propagate the ID to all locations. What's the easiest way to identify in which applet context we're running? (I've tried messing with classloaders, thread groups, thread ids... so far I could find nothing that will enable me to ID the origin of the call.) A: Singletons are evil, what do you expect? ;) Perhaps the most comprehensive approach would be to load the bulk of the applet in a different class loader (use java.net.URLClassLoader.newInstance). Then use a WeakHashMap to associate each class loader with an applet. If you could split most of the code into a common class loader (as a parent of each per-applet class loader) and into the normal applet codebase, that would be faster but more work. Other hacks: If you have access to any component, you can use Component.getParent repeatedly or SwingUtilities.getRoot. If you are in a per-applet instance thread, then you can set up a ThreadLocal. From the EDT, you can read the current event from the queue (java.awt.EventQueue.getCurrentEvent()), and possibly find a component from that. Alternatively push an EventQueue with an overridden dispatchEvent method. A: If I understand you correctly, the idea is to get a different "singleton" object for each caller object or "context". One thing you can do is to create a thread-local global variable where you write the ID of the current context. (This can be done with AOP.) Then in the singleton getter, the context ID is fetched from the thread-local and used as a key to the correct "singleton" instance for the calling context. Regarding AOP, there should be no problem using it in applets since, depending on your point-cuts, the advices are woven at compile time and a JAR is added to the runtime dependencies. Hence, no special evidence of AOP should remain at run time. A: @Hugo regarding threadlocal: I thought about that solution. However, from experiments I found two problems with that approach: * *Shared threads (server connections, etc.) are problematic. This can be solved, though, by paying special attention to these threads (they're all under my control and are pretty much isolated from the legacy code). *The EDT thread is shared across all applets. I failed to find a way to force the creation of a new EDT thread for each applet. This means that the threadlocal for the EDT would be shared across the applets. This one I have no idea how to solve. Suggestions?
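To make the ThreadLocal suggestion above concrete, here is a minimal, illustrative Java sketch. The class and method names are invented (not from the code base in question); it assumes each applet can tag the threads it owns when they start, and, as the follow-up notes, it does not by itself solve the shared-EDT problem.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public final class AppletContextRegistry {
    // Each applet-owned thread is tagged once with its applet's id.
    private static final ThreadLocal<String> CURRENT_APPLET_ID = new ThreadLocal<String>();

    // One context object per applet id, instead of a single global singleton.
    private static final Map<String, Object> CONTEXTS = new ConcurrentHashMap<String, Object>();

    private AppletContextRegistry() { }

    /** Called by each applet for the threads it creates (e.g. from Applet.init()). */
    public static void bindCurrentThread(String appletId) {
        CURRENT_APPLET_ID.set(appletId);
    }

    /** Drop-in replacement for the old singleton getter. */
    public static Object getContext() {
        String id = CURRENT_APPLET_ID.get();
        if (id == null) {
            throw new IllegalStateException("Current thread is not bound to an applet");
        }
        Object ctx = CONTEXTS.get(id);
        if (ctx == null) {
            // Stand-in for the real per-applet context object.
            CONTEXTS.putIfAbsent(id, new Object());
            ctx = CONTEXTS.get(id);
        }
        return ctx;
    }
}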
{ "language": "en", "url": "https://stackoverflow.com/questions/7269", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Of Ways to Count the Limitless Primes Alright, so maybe I shouldn't have shrunk this question sooo much... I have seen the post on the most efficient way to find the first 10000 primes. I'm looking for all possible ways. The goal is to have a one-stop shop for primality tests. Any and all tests people know for finding prime numbers are welcome. And so: * *What are all the different ways of finding primes? A: Some prime tests only work with certain numbers; for instance, the Lucas–Lehmer test only works for Mersenne numbers. Most prime tests used for big numbers can only tell you that a certain number is "probably prime" (or, if the number fails the test, it is definitely not prime). Usually you can continue the algorithm until you have a very high probability of a number being prime. Have a look at this page and especially its "See Also" section. The Miller-Rabin test is, I think, one of the best tests. In its standard form it gives you probable primes - though it has been shown that if you apply the test to a number beneath 3.4*10^14, and it passes the test for each parameter 2, 3, 5, 7, 11, 13 and 17, it is definitely prime. The AKS test was the first deterministic, proven, general, polynomial-time test. However, to the best of my knowledge, its best implementation turns out to be slower than other tests unless the input is ridiculously large. A: The Sieve of Eratosthenes is a decent algorithm: * *Take the list of positive integers 2 to any given Ceiling. *Take the next item in the list (2 in the first iteration) and remove all multiples of it (beyond the first) from the list. *Repeat step two until you reach the given Ceiling. *Your list is now composed purely of primes. There is a functional limit to this algorithm in that it exchanges speed for memory. When generating very large lists of primes the memory capacity needed skyrockets. A: For a given integer, the fastest primality check I know is: * *Take a list of 2 to the square root of the integer. *Loop through the list, taking the remainder of the integer / current number * *If the remainder is zero for any number in the list, then the integer is not prime. *If the remainder was non-zero for all numbers in the list, then the integer is prime. It uses significantly less memory than the Sieve of Eratosthenes and is generally faster for individual numbers. A: @akdom's question to me: Looping would work fine on my previous suggestion, and you don't need to do any calculations to determine if a number is even; in your loop, simply skip every even number, as shown below:

// Assuming theInteger is the number to be tested for primality.
// Check if theInteger is divisible by 2. If not, run this loop.
// This loop skips all even numbers.
for (int i = 3; i <= sqrt(theInteger); i += 2)
{
    if (theInteger % i == 0)
    {
        // Getting here denotes that theInteger is not prime;
        // somehow indicate that some number, i, divides it, and break.
        break;
    }
}

A: A Rutgers grad student recently found a recurrence relation that generates primes. The difference of its successive numbers will generate either primes or 1's. a(1) = 7, a(n) = a(n-1) + gcd(n, a(n-1)). It makes a lot of crap that needs to be filtered out. Benoit Cloitre also has this recurrence that does a similar task: b(1) = 1, b(n) = b(n-1) + lcm(n, b(n-1)); then the ratio of successive numbers, minus one [b(n)/b(n-1) - 1], is prime. A full account of all this can be read at Recursivity. 
For the sieve, you can do better by using a wheel instead of adding one each time; check out the Improved Incremental Prime Number Sieves. Here is an example of a wheel: take 2 and 5 as the numbers to ignore. Their wheel is [2,4,2,2]. A: In your algorithm using the list from 2 to the root of the integer, you can improve performance by only testing odd numbers after 2. That is, your list only needs to contain 2 and all odd numbers from 3 to the square root of the integer. This cuts the number of times you loop in half without introducing any more complexity. A: @theprise If I were wanting to use an incrementing loop instead of an instantiated list (problems with memory for massive numbers...), what would be a good way to do that without building the list? It doesn't seem like it would be cheaper to do a divisibility check for the given integer (X % 3) than just the check for the normal number (N % X). A: If you're wanting to find a way of generating prime numbers, this has been covered in a previous question.
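For concreteness, here is a small illustrative Java sketch (not from any of the answers) of the two textbook approaches described above: a Sieve of Eratosthenes over a fixed ceiling, and trial division by odd numbers up to the square root for a single candidate. It is a sketch, not tuned code.

import java.util.BitSet;

public class PrimeSketch {
    // Sieve of Eratosthenes: cross off multiples of each prime up to the ceiling (ceiling >= 2).
    static BitSet sieve(int ceiling) {
        BitSet composite = new BitSet(ceiling + 1);
        for (int p = 2; (long) p * p <= ceiling; p++) {
            if (!composite.get(p)) {
                for (int m = p * p; m <= ceiling; m += p) {
                    composite.set(m);
                }
            }
        }
        BitSet primes = new BitSet(ceiling + 1);
        primes.set(2, ceiling + 1);   // candidates 2..ceiling
        primes.andNot(composite);     // remove everything that was crossed off
        return primes;
    }

    // Trial division up to sqrt(n), skipping even numbers after 2, as the answers above suggest.
    static boolean isPrime(long n) {
        if (n < 2) return false;
        if (n % 2 == 0) return n == 2;
        for (long i = 3; i * i <= n; i += 2) {
            if (n % i == 0) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(sieve(50));       // {2, 3, 5, 7, 11, 13, ...}
        System.out.println(isPrime(104729)); // true (the 10000th prime)
    }
}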
{ "language": "en", "url": "https://stackoverflow.com/questions/7272", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Is it "bad practice" to be sensitive to linebreaks in XML documents? I'm generating some XML documents and when it comes to the address part I have fragments that look like this: <Address>15 Sample St Example Bay Some Country</Address> The XSLT that I have for converting this to XHTML has some funky recursive template to convert newline characters within strings to <br/> tags. This is all working fine; but is it considered "bad practice" to rely on linebreaks within XML documents? If so, is it recommended that I do this instead? <Address><Line>15 Sample St</Line> <Line>Example Bay</Line> <Line>Some Country</Line></Address> Seems like it'd be really awkward to wrap every place where my text may be multiple lines with tags like that.. A: It's generally considered bad practice to rely on linebreaks, since it's a fragile way to differentiate data. While most XML processors will preserve any whitespace you put in your XML, it's not guaranteed. The real problem is that most applications that output your XML into a readable format consider all whitespace in an XML interchangable, and might collapse those linebreaks into a single space. That's why your XSLT has to jump through such hoops to render the data properly. Using a "br" tag would vastly simplify the transform. Another potential problem is that if you open up your XML document in an XML editor and pretty-print it, you're likely to lose those line breaks. If you do keep using linebreaks, make sure add an xml:space="preserve" attribute to "address." (You can do this in your DTD, if you're using one.) Some suggested reading * *An article from XML.com says the following: XML applications often seem to take a cavalier attitude toward whitespace because the rules about the places in an XML document where whitespace doesn't matter sometimes give these applications free rein to add or remove whitespace in certain places. * *A collection of XSL-list posts regarding whitespace. A: What about using attributes to store the data, rather than text nodes: <Address Street="15 Sample St" City="Example Bay" State="" Country="Some Country"/> I know the use of attributes vs. text nodes is an often debated subject, but I've stuck with attributes 95% of the time, and haven't had any troubles because of it. A: Few people have said that CDATA blocks will allow you to retain line breaks. This is wrong. CDATA sections will only make markup be processed as character data, they will not change line break processing. <Address>15 Sample St Example Bay Some Country</Address> is exactly the same as <Address><![CDATA[15 Sample St Example Bay Some Country]]></Address> The only difference is how different APIs report this. A: I think the only real problem is that it makes the XML harder to read. e.g. <Something> <Contains> <An> <Address>15 Sample St Example Bay Some Country</Address> </An> </Contains> </Something> If pretty XML isn't a concern, I'd probably not worry about it, so long as it's working. If pretty XML is a concern, I'd convert the explicit newlines into <br /> tags or \n before embedding them in the XML. A: It depends on how you're reading and writing the XML. If XML is being generated automatically - if newlines or explicit \n flags are being parsed into - then there's nothing to worry about. Your input likely doesn't have any other XML in it so it's just cleaner to not mess with XML at all. If tags are being worked with manually, it's still cleaner to just have a line break, if you ask me. 
The exception is if you're using DOM to get some structure out of the XML. In that case line breaks are obviously evil because they don't represent the heirarchy properly. It sounds like the heirarchy is irrelevant for your application, though, so line breaks sound sufficient. If the XML just looks bad (especially when automatically generated), Tidy can help, although it works better with HTML than with XML. A: This is probably a bit deceptive example, since address is a bit non-normalized in this case. It is a reasonable trade-off, however since address fields are difficult to normalize. If you make the line breaks carry important information, you're un-normalizing and making the post office interpret the meaning of the line break. I would say that normally this is not a big problem, but in this case I think the Line tag is most correct since it explicitly shows that you don't actually interpret what the lines may mean in different cultures. (Remember that most forms for entering an address has zip code etc, and address line 1 and 2.) The awkwardness of having the line tag comes with normal XML, and has been much debated at coding horror. http://www.codinghorror.com/blog/archives/001139.html A: The XML spec has something to say regarding whitespace and linefeeds and carriage returns in particular. So if you limit yourself to true linefeeds (x0A) you should be Ok. However, many editing tools will reformat XML for "better presentation" and possibly get rid of the special syntax. A more robust and cleaner approach than the "< line>< / line>" idea would be to simply use namespaces and embed XHTML content, e.g.: <Address xmlns="http://www.w3.org/1999/xhtml">15 Sample St<br />Example Bay<br />Some Country</Address> No need to reinvent the wheel when it comes to standard vocabularies. A: I don't see what's wrong with <Line> tags. Apparently, the visualization of the data is important to you, important enough to keep it in your data (via line breaks in your first example). Fine. Then really keep it, don't rely on "magic" to keep it for you. Keep every bit of data you'll need later on and can't deduce perfectly from the saved portion of the data, keep it even if it's visualization data (line breaks and other formatting). Your user (end user of another developer) took the time to format that data to his liking - either tell him (API doc / text near the input) that you don't intend on keeping it, or - just keep it. A: If you need your linebreaks preserved, use a CDATA block, as tweakt said Otherwise beware. Most of the time, the linebreaks will be preserved by XML software, but sometimes they won't, and you really don't want to be relying on things which only work by coincidence A: Yes, I think using a CDATA block would protect the whitespace. Although some parser APIs allow you to preserve whitespace. A: What you really should be doing is converting your XML to a format that preserves white-space. So rather than seek to replace \n with <br /> you should wrap the whole block in a <pre> That way, your address is functionally preserved (whether you include line breaks or not) and the XSTL can choose whether to preserve white-space in the result. A: I recommend you should either add the <br/> line breaks or maybe use line-break entity - &#x000D;
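As an aside (not from the answers above), here is a minimal Java sketch of what the fragility boils down to: a conforming parser reports the literal line-break characters inside element content, and it is entirely up to the consuming code - or any pretty-printer the document passes through later - whether they survive or get turned into markup such as <br/>.

import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

public class AddressLineBreaks {
    public static void main(String[] args) throws Exception {
        String xml = "<Address>15 Sample St\nExample Bay\nSome Country</Address>";

        DocumentBuilder builder = DocumentBuilderFactory.newInstance().newDocumentBuilder();
        Document doc = builder.parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));

        // The parser hands back the newlines verbatim as part of the text node...
        String text = doc.getDocumentElement().getTextContent();
        System.out.println(text.contains("\n"));          // true

        // ...but turning them into markup is the application's decision, not the parser's.
        System.out.println(text.replace("\n", "<br/>"));
    }
}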
{ "language": "en", "url": "https://stackoverflow.com/questions/7277", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: What is Turing Complete? What does the expression "Turing Complete" mean? Can you give a simple explanation, without going into too many theoretical details? A: In the simplest terms, a Turing-complete system can solve any possible computational problem. One of the key requirements is the scratchpad size be unbounded and that is possible to rewind to access prior writes to the scratchpad. Thus in practice no system is Turing-complete. Rather some systems approximate Turing-completeness by modeling unbounded memory and performing any possible computation that can fit within the system's memory. A: From wikipedia: Turing completeness, named after Alan Turing, is significant in that every plausible design for a computing device so far advanced can be emulated by a universal Turing machine — an observation that has become known as the Church-Turing thesis. Thus, a machine that can act as a universal Turing machine can, in principle, perform any calculation that any other programmable computer is capable of. However, this has nothing to do with the effort required to write a program for the machine, the time it may take for the machine to perform the calculation, or any abilities the machine may possess that are unrelated to computation. While truly Turing-complete machines are very likely physically impossible, as they require unlimited storage, Turing completeness is often loosely attributed to physical machines or programming languages that would be universal if they had unlimited storage. All modern computers are Turing-complete in this sense. I don't know how you can be more non-technical than that except by saying "turing complete means 'able to answer computable problem given enough time and space'". A: Here's the briefest explanation: A Turing Complete system means a system in which a program can be written that will find an answer (although with no guarantees regarding runtime or memory). So, if somebody says "my new thing is Turing Complete" that means in principle (although often not in practice) it could be used to solve any computation problem. Sometimes it's a joke... a guy wrote a Turing Machine simulator in vi, so it's possible to say that vi is the only computational engine ever needed in the world. A: Super-brief summary from what Professor Brasilford explains in this video. Turing Complete ≅ do anything that a Turing Machine can do. * *It has conditional branching (i.e. "if statement"). Also, implies "go to" and thus permitting loop. *It gets arbitrary amount of memory (e.g. long enough tape) that the program needs. A: Here is the simplest explanation Alan Turing created a machine that can take a program, run that program, and show some result. But then he had to create different machines for different programs. So he created "Universal Turing Machine" that can take ANY program and run it. Programming languages are similar to those machines (although virtual). They take programs and run them. Now, a programing language is called "Turing complete", if it can run any program (irrespective of the language) that a Turing machine can run given enough time and memory. For example: Let's say there is a program that takes 10 numbers and adds them. A Turing machine can easily run this program. But now imagine that for some reason your programming language can't perform the same addition. This would make it "Turing incomplete" (so to speak). On the other hand, if it can run any program that the universal Turing machine can run, then it's Turing complete. 
Most modern programming languages (e.g. Java, JavaScript, Perl, etc.) are all Turing complete because they each implement all the features required to run programs like addition, multiplication, if-else condition, return statements, ways to store/retrieve/erase data and so on. Update: You can learn more on my blog post: "JavaScript Is Turing Complete" — Explained A: I think the importance of the concept "Turing Complete" is in the the ability to identify a computing machine (not necessarily a mechanical/electrical "computer") that can have its processes be deconstructed into "simple" instructions, composed of simpler and simpler instructions, that a Universal machine could interpret and then execute. I highly recommend The Annotated Turing @Mark i think what you are explaining is a mix between the description of the Universal Turing Machine and Turing Complete. Something that is Turing Complete, in a practical sense, would be a machine/process/computation able to be written and represented as a program, to be executed by a Universal Machine (a desktop computer). Though it doesn't take consideration for time or storage, as mentioned by others. A: Turing Complete means that it is at least as powerful as a Turing Machine. This means anything that can be computed by a Turing Machine can be computed by a Turing Complete system. No one has yet found a system more powerful than a Turing Machine. So, for the time being, saying a system is Turing Complete is the same as saying the system is as powerful as any known computing system (see Church-Turing Thesis). A: Fundamentally, Turing-completeness is one concise requirement, unbounded recursion. Not even bounded by memory. I thought of this independently, but here is some discussion of the assertion. My definition of LSP provides more context. The other answers here don't directly define the fundamental essence of Turing-completeness. A: Informal Definition A Turing complete language is one that can perform any computation. The Church-Turing Thesis states that any performable computation can be done by a Turing machine. A Turing machine is a machine with infinite random access memory and a finite 'program' that dictates when it should read, write, and move across that memory, when it should terminate with a certain result, and what it should do next. The input to a Turing machine is put in its memory before it starts. Things that can make a language NOT Turing complete A Turing machine can make decisions based on what it sees in memory - The 'language' that only supports +, -, *, and / on integers is not Turing complete because it can't make a choice based on its input, but a Turing machine can. A Turing machine can run forever - If we took Java, Javascript, or Python and removed the ability to do any sort of loop, GOTO, or function call, it wouldn't be Turing complete because it can't perform an arbitrary computation that never finishes. Coq is a theorem prover that can't express programs that don't terminate, so it's not Turing complete. A Turing machine can use infinite memory - A language that was exactly like Java but would terminate once it used more than 4 Gigabytes of memory wouldn't be Turing complete, because a Turing machine can use infinite memory. This is why we can't actually build a Turing machine, but Java is still a Turing complete language because the Java language has no restriction preventing it from using infinite memory. This is one reason regular expressions aren't Turing complete. 
A Turing machine has random access memory - A language that only lets you work with memory through push and pop operations to a stack wouldn't be Turing complete. If I have a 'language' that reads a string once and can only use memory by pushing and popping from a stack, it can tell me whether every ( in the string has its own ) later on by pushing when it sees ( and popping when it sees ). However, it can't tell me if every ( has its own ) later on and every [ has its own ] later on (note that ([)] meets this criteria but ([]] does not). A Turing machine can use its random access memory to track ()'s and []'s separately, but this language with only a stack cannot. A Turing machine can simulate any other Turing machine - A Turing machine, when given an appropriate 'program', can take another Turing machine's 'program' and simulate it on arbitrary input. If you had a language that was forbidden from implementing a Python interpreter, it wouldn't be Turing complete. Examples of Turing complete languages If your language has infinite random access memory, conditional execution, and some form of repeated execution, it's probably Turing complete. There are more exotic systems that can still achieve everything a Turing machine can, which makes them Turing complete too: * *Untyped lambda calculus *Conway's game of life *C++ Templates *Prolog A: A Turing Machine requires that any program can perform condition testing. That is fundamental. Consider a player piano roll. The player piano can play a highly complicated piece of music, but there is never any conditional logic in the music. It is not Turing Complete. Conditional logic is both the power and the danger of a machine that is Turing Complete. The piano roll is guaranteed to halt every time. There is no such guarantee for a TM. This is called the “halting problem.” A: As Waylon Flinn said: Turing Complete means that it is at least as powerful as a Turing Machine. I believe this is incorrect, a system is Turing complete if it's exactly as powerful as the Turing Machine, i.e. every computation done by the machine can be done by the system, but also every computation done by the system can be done by the Turing machine. A: Can a relational database input latitudes and longitudes of places and roads, and compute the shortest path between them - no. This is one problem that shows SQL is not Turing complete. But C++ can do it, and can do any problem. Thus it is. A: In practical language terms familiar to most programmers, the usual way to detect Turing completeness is if the language allows or allows the simulation of nested unbounded while statements (as opposed to Pascal-style for statements, with fixed upper bounds). A: We call a language Turing-complete if and only if (1) it is decidable by a Turing machine but (2) not by anything less capable than a Turing machine. For instance, the language of palindromes over the alphabet {a, b} is decidable by Turing machines, but also by pushdown automata; so, this language is not Turing-complete. Truly Turing-complete languages - ones that require the full computing power of Turing machines - are pretty rare. Perhaps the language of strings x.y.z where x is a number, y is a Turing-machine and z is an initial tape configuration, and y halts on z in fewer than x! steps - perhaps that qualifies (though it would need to be shown!) A common imprecise usage confuses Turing-completeness with Turing-equivalence. 
Turing-equivalence refers to the property of a computational system which can simulate, and which can be simulated by, Turing machines. We might say Java is a Turing-equivalent programming language, for instance, because you can write a Turing-machine simulator in Java, and because you could define a Turing machine that simulates execution of Java programs. According to the Church-Turing thesis, Turing machines can perform any effective computation, so Turing-equivalence means a system is as capable as possible (if the Church-Turing thesis is true!). Turing equivalence is a much more mainstream concern than true Turing completeness; this and the fact that "complete" is shorter than "equivalent" may explain why "Turing-complete" is so often misused to mean Turing-equivalent, but I digress.
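As a purely illustrative aside (not from any answer above), here is a deliberately tiny Turing machine simulator in Java - a finite rule table driving a tape that grows on demand - just to make the "conditional rules plus unbounded, re-readable memory" picture concrete. It uses Java 16+ records for brevity, and of course a real tape is only as unbounded as available memory.

import java.util.HashMap;
import java.util.Map;

public class TinyTuringMachine {
    record Key(String state, char read) {}
    record Action(String nextState, char write, int move) {}  // move: -1 left, +1 right

    public static void main(String[] args) {
        // One-state machine that flips 0s and 1s until it hits a blank, then halts.
        Map<Key, Action> delta = new HashMap<>();
        delta.put(new Key("flip", '0'), new Action("flip", '1', +1));
        delta.put(new Key("flip", '1'), new Action("flip", '0', +1));

        StringBuilder tape = new StringBuilder("10110");
        int head = 0;
        String state = "flip";

        while (true) {
            char symbol = head < tape.length() ? tape.charAt(head) : '_';
            Action a = delta.get(new Key(state, symbol));
            if (a == null) break;                       // no matching rule: the machine halts
            while (head >= tape.length()) tape.append('_');
            tape.setCharAt(head, a.write());
            head = Math.max(0, head + a.move());        // clamp at 0: a semi-infinite tape suffices here
            state = a.nextState();
        }
        System.out.println(tape);                       // prints 01001
    }
}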
{ "language": "en", "url": "https://stackoverflow.com/questions/7284", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "685" }
Q: How do you test/change untested and untestable code? Lately I had to change some code on older systems where not all of the code has unit tests. Before making the changes I want to write tests, but each class created a lot of dependencies and other anti-patterns which made testing quite hard. Obviously, I wanted to refactor the code to make it easier to test, write the tests and then change it. Is this the way you'd do it? Or would you spend a lot of time writing the hard-to-write tests that would be mostly removed after the refactoring will be completed? A: First of all, here's a great article with tips on unit testing. Secondly, I found a great way to avoid making tons of changes in old code is to just refactor it a little until you can test it. One easy way to do this is to make private members protected, and then override the protected field. For example, let's say you have a class that loads some stuff from the database during the constructor. In this case, you can't just override a protected method, but you can extract the DB logic to a protected field and then override it in the test. public class MyClass { public MyClass() { // undesirable DB logic } } becomes public class MyClass { public MyClass() { loadFromDB(); } protected void loadFromDB() { // undesirable DB logic } } and then your test looks something like this: public class MyClassTest { public void testSomething() { MyClass myClass = new MyClassWrapper(); // test it } private static class MyClassWrapper extends MyClass { @Override protected void loadFromDB() { // some mock logic } } } This is somewhat of a bad example, because you could use DBUnit in this case, but I actually did this in a similar case recently because I wanted to test some functionality totally unrelated to the data being loaded, so it was very effective. I've also found such exposing of members to be useful in other similar cases where I need to get rid of some dependency that has been in a class for a long time. I would recommend against this solution if you are writing a framework though, unless you really don't mind exposing the members to users of your framework. It's a bit of a hack, but I've found it quite useful. A: @valters I disagree with your statement that tests shouldn't break the build. The tests should be an indication that the application doesn't have new bugs introduced for the functionality that is tested (and a found bug is an indication of a missing test). If tests don't break the build, then you can easily run into the situation where new code breaks the build and it isn't known for a while, even though a test covered it. A failing test should be a red flag that either the test or the code has to be fixed. Furthermore, allowing the tests to not break the build will cause the failure rate to slowly creep up, to the point where you no longer have a reliable set of regression tests. If there is a problem with tests breaking too often, it may be an indication that the tests are being written in too fragile a manner (dependence on resources that could change, such as the database without using DB Unit properly, or an external web service that should be mocked), or it may be an indication that there are developers in the team that don't give the tests proper attention. I firmly believe that a failing test should be fixed ASAP, just as you would fix code that fails to compile ASAP. A: I am not sure why would you say that unit tests are going be removed once refactoring is completed. 
Actually your unit-test suite should run after main build (you can create a separate "tests" build, that just runs the unit tests after the main product is built). Then you will immediately see if changes in one piece break the tests in other subsystem. Note it's a bit different than running tests during build (as some may advocate) - some limited testing is useful during build, but usually it's unproductive to "crash" the build just because some unit test happens to fail. If you are writing Java (chances are), check out http://www.easymock.org/ - may be useful for reducing coupling for the test purposes. A: I have read Working Effectively With Legacy Code, and I agree it is very useful for dealing with "untestable" code. Some techniques only apply to compiled languages (I'm working on "old" PHP apps), but I would say most of the book is applicable to any language. Refactoring books sometimes assume the code is in semi-ideal or "maintenance aware" state before refactoring, but the systems I work on are less than ideal and were developed as "learn as you go" apps, or as first apps for some technologies used (and I don't blame the initial developers for that, since I'm one of them), so there are no tests at all, and code is sometimes messy. This book addresses this kind of situation, whereas other refactoring books usually don't (well, not to this extent). I should mention that I haven't received any money from the editor nor author of this book ;), but I found it very interesting, since resources are lacking in the field of legacy code (and particularly in my language, French, but that's another story). A: So I totally respect the accepted Mike-Stone answer. Because it is better than nothing. However, I'll offer another alternative. The reason I don't use the Mike-Stone answer is that ... it is (IMHO)... "create a ~little more technical debt...to deal with a big technical debt item". Here is my approach. It is a stepping stone. See below. I've even included some PTSD code snipplets from years gone by. (a hacky stored procedure caller with a string-array). As in, I'm really trying to simulate some duct-taped together code. /* BEFORE */ public interface IEmployeeManager { void addEmployee(string lastname, string firstname, string ssn, DateTime dob); } public class EmployeeManager : (implements) IEmployeeManager { /* no dependencies injected, just "new it up" or "use static stuff" */ public void addEmployee(string lastname, string firstname, string ssn, DateTime dob) { if(dob.Subtract(DateTime.Now).Months < 16) { throw new ArgumentOutOfRangeException("Too young"); } /* STATIC CALL, :( */ DatabaseHelper.runStoredProcedureWrapper("dbo.uspEmployeeAdd", new String[] {lastname, firstname, ssn, dob.ToString()}; /* "new it up". :( */ new EmailSender.sendEmail("humanresources@this.company.com", "new employee alert subject", string.format("New Employee Added. 
(LastName='{0}', FirstName='{1}')", lastname, firstname)); } } /* AFTER */ public interface IEmployeeDataHelperWrapper { void addEmployeeToDatabase(string lastname, string firstname, string ssn, DateTime dob); } public class EmployeeDataHelperWrapper : (implements) IEmployeeDataHelperWrapper { public void addEmployeeToDatabase(string lastname, string firstname, string ssn, DateTime dob) { DatabaseHelper.runStoredProcedureWrapper("dbo.uspEmployeeAdd", new String[] {lastname, firstname, ssn, dob.ToString()}; } } and public interface IEmailSenderWrapper { void sendEmail(string to, string subject, string body); } public class EmailSenderWrapper : (implements) IEmailSenderWrapper { public void sendEmail(string to, string subject, string body); { new EmailSender.sendEmail(to, subject, body); } } and public interface IEmployeeManager (NO CHANGE) and (the refactor) public class EmployeeManager : (implements) IEmployeeManager { private readonly IEmployeeDataHelperWrapper empDataHelper; private readonly IEmailSenderWrapper emailSenderWrapper; public EmployeeManager() { /* here is the stop-gap "trick", this default constructor does a hard coded "new it up" */ this(new EmployeeDataHelperWrapper(), new EmailSenderWrapper()); } public EmployeeManager(IEmployeeDataHelperWrapper empDataHelper, IEmailSenderWrapper emailSenderWrapper ) { /* now you have this new constructor that injects INTERFACES, that can easier be mocked for unit-test code */ this.empDataHelper = empDataHelper; this.emailSenderWrapper = emailSenderWrapper; } public void addEmployee(string lastname, string firstname, string ssn, DateTime dob) { if(dob.Subtract(DateTime.Now).Months < 16) { throw new ArgumentOutOfRangeException("Too young"); } this.empDataHelper.addEmployeeToDatabase(lastname, firstname, ssn, dob); this.emailSenderWrapper.sendEmail("humanresources@this.company.com", "new employee alert subject", string.format("New Employee Added. (LastName='{0}', FirstName='{1}')", lastname, firstname)); } } Essentially, I MOVE code (I do not really "re-code it"). and the risk is small (IMHO).... but "sequentially", its the same code that runs in the same exact order. Is it perfect? No. But you start thinking about "injected dependencies"... and then they become independently refactor-able.
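To show how the constructor-injection refactor above pays off in a test, here is a hedged Java sketch with a hand-rolled fake in place of a mocking library. All names are hypothetical and only mirror the example; a plain main() stands in for a real test framework so the snippet is self-contained.

import java.util.ArrayList;
import java.util.List;

interface EmailSender {
    void sendEmail(String to, String subject, String body);
}

class EmployeeManager {
    private final EmailSender emailSender;

    EmployeeManager(EmailSender emailSender) {   // dependency injected, so tests can swap it
        this.emailSender = emailSender;
    }

    void addEmployee(String lastName, String firstName) {
        // ...persistence logic would go here...
        emailSender.sendEmail("hr@example.com", "new employee alert",
                "New employee added: " + lastName + ", " + firstName);
    }
}

public class EmployeeManagerTest {
    // Fake that records calls instead of talking to a real mail server.
    static class RecordingEmailSender implements EmailSender {
        final List<String> subjects = new ArrayList<>();
        public void sendEmail(String to, String subject, String body) {
            subjects.add(subject);
        }
    }

    public static void main(String[] args) {
        RecordingEmailSender fakeSender = new RecordingEmailSender();
        new EmployeeManager(fakeSender).addEmployee("Doe", "Jane");

        if (!fakeSender.subjects.contains("new employee alert")) {
            throw new AssertionError("expected a notification email to be 'sent'");
        }
        System.out.println("test passed");
    }
}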
{ "language": "en", "url": "https://stackoverflow.com/questions/7287", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: Zip library options for the Compact Framework? My requirements: * *Support .NET Compact Framework 2.0 and Windows Mobile 6.0 devices. *Only need to unzip the contents to a directory on a storage card. Creation of zip files is not required. *Must be able to use in corporate/commercial software. *Can be open source, but not have GPL or other viral license. I've seen the Xceed Zip for CF library. What other options are there? A: As of v1.7, the DotNetZip distribution now includes a version built specifically for the .NET Compact Framework, either v2.0 or v3.5. http://www.codeplex.com/DotNetZip/Release/ProjectReleases.aspx. It is about ~70k DLL. It does zip, unzip, zip editing, passwords, ZIP64, unicode, streams, and more. DotNetZip is 100% managed code, open source, and free/gratis to use. It's also very simple and easy. try { using (var zip1 = Ionic.Zip.ZipFile.Read(zipToUnpack)) { foreach (var entry in zip1) { entry.Extract(dir, ExtractExistingFileAction.OverwriteSilently); } } } catch (Exception ex) { MessageBox.Show("Exception! " + ex); } There's a sample app included in the source distribution that unzips to a storage card. CF-Unzipper app http://www.freeimagehosting.net/uploads/ce5ad6a964.png A: Have a look at #ziplib (www.icsharpcode.com). It's GPL, but you can use it in closed-source, commercial applications. They don't say anything specifically on their page about using it with the Compact Framework, so you'd have to give it a test yourself (that said, it's pure C# without any external dependencies, so the chances are somewhat good that it will work). A: This looks like it may be a good option for you: http://www.codeplex.com/DotNetZip. It seems small, has source and has a very open license (MS-PL). A: Looks like what you need is zlibCE from the OpenNETCF foundation. You can get it here: http://opennetcf.com/FreeSoftware/zlibCE/tabid/245/Default.aspx It's a port of the linux zlib library to CE. At it's core, it's a native dll, but they now also provide a .NET wrapper, along with all the source code. I've used it in projects before and it performed quite well. A: I use the Resco MobileForms toolkit for a variety of functionality: http://www.resco.net/developer/mobileformstoolkit/overview.aspx It includes a good ZIP library.
{ "language": "en", "url": "https://stackoverflow.com/questions/7348", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: Edit PDF in PHP? Does anyone know of a good method for editing PDFs in PHP? Preferably open-source/zero-license cost methods. :) I am thinking along the lines of opening a PDF file, replacing text in the PDF and then writing out the modified version of the PDF? On the front-end A: If you are taking a 'fill in the blank' approach, you can precisely position text anywhere you want on the page. So it's relatively easy (if not a bit tedious) to add the missing text to the document. For example with Zend Framework: <?php require_once 'Zend/Pdf.php'; $pdf = Zend_Pdf::load('blank.pdf'); $page = $pdf->pages[0]; $font = Zend_Pdf_Font::fontWithName(Zend_Pdf_Font::FONT_HELVETICA); $page->setFont($font, 12); $page->drawText('Hello world!', 72, 720); $pdf->save('zend.pdf'); If you're trying to replace inline content, such as a "[placeholder string]," it gets much more complicated. While it's technically possible to do, you're likely to mess up the layout of the page. A PDF document is comprised of a set of primitive drawing operations: line here, image here, text chunk there, etc. It does not contain any information about the layout intent of those primitives. A: There is a free and easy to use PDF class to create PDF documents. It's called FPDF. In combination with FPDI (http://www.setasign.de/products/pdf-php-solutions/fpdi) it is even possible to edit PDF documents. The following code shows how to use FPDF and FPDI to fill an existing gift coupon with the user data. require_once('fpdf.php'); require_once('fpdi.php'); $pdf = new FPDI(); $pdf->AddPage(); $pdf->setSourceFile('gift_coupon.pdf'); // import page 1 $tplIdx = $this->pdf->importPage(1); //use the imported page and place it at point 0,0; calculate width and height //automaticallay and ajust the page size to the size of the imported page $this->pdf->useTemplate($tplIdx, 0, 0, 0, 0, true); // now write some text above the imported page $this->pdf->SetFont('Arial', '', '13'); $this->pdf->SetTextColor(0,0,0); //set position in pdf document $this->pdf->SetXY(20, 20); //first parameter defines the line height $this->pdf->Write(0, 'gift code'); //force the browser to download the output $this->pdf->Output('gift_coupon_generated.pdf', 'D'); A: Zend Framework can load and edit existing PDF files. I think it supports revisions too. I use it to create docs in a project, and it works great. Never edited one though. Check out the doc here A: Don't know if this is an option, but it would work very similar to Zend's pdf library, but you don't need to load a bunch of extra code (the zend framework). It just extends FPDF. http://www.setasign.de/products/pdf-php-solutions/fpdi/ Here you can basically do the same thing. Load the PDF, write over top of it, and then save to a new PDF. In FPDI you basically insert the PDF as an image so you can put whatever you want over it. But again, this uses FPDF, so if you don't want to use that, then it won't work. A: If you need really simple PDFs, then Zend or FPDF is fine. However I find them difficult and frustrating to work with. Also, because of the way the API works, there's no good way to separate content from presentation from business logic. For that reason, I use dompdf, which automatically converts HTML and CSS to PDF documents. You can lay out a template just as you would for an HTML page and use standard HTML syntax. You can even include an external CSS file. The library isn't perfect and very complex markup or css sometimes gets mangled, but I haven't found anything else that works as well. 
A: The PDF/pdflib extension documentation in PHP is sparse (something that has been noted in bugs.php.net) - I reccommend you use the Zend library. A: Tcpdf is also a good liabrary for generating pdf in php http://www.tcpdf.org/ A: <?php //getting new instance $pdfFile = new_pdf(); PDF_open_file($pdfFile, " "); //document info pdf_set_info($pdfFile, "Auther", "Ahmed Elbshry"); pdf_set_info($pdfFile, "Creator", "Ahmed Elbshry"); pdf_set_info($pdfFile, "Title", "PDFlib"); pdf_set_info($pdfFile, "Subject", "Using PDFlib"); //starting our page and define the width and highet of the document pdf_begin_page($pdfFile, 595, 842); //check if Arial font is found, or exit if($font = PDF_findfont($pdfFile, "Arial", "winansi", 1)) { PDF_setfont($pdfFile, $font, 12); } else { echo ("Font Not Found!"); PDF_end_page($pdfFile); PDF_close($pdfFile); PDF_delete($pdfFile); exit(); } //start writing from the point 50,780 PDF_show_xy($pdfFile, "This Text In Arial Font", 50, 780); PDF_end_page($pdfFile); PDF_close($pdfFile); //store the pdf document in $pdf $pdf = PDF_get_buffer($pdfFile); //get the len to tell the browser about it $pdflen = strlen($pdfFile); //telling the browser about the pdf document header("Content-type: application/pdf"); header("Content-length: $pdflen"); header("Content-Disposition: inline; filename=phpMade.pdf"); //output the document print($pdf); //delete the object PDF_delete($pdfFile); ?> A: We use pdflib to create PDF files from our rails apps. It has bindings for PHP, and a ton of other languages. We use the commmercial version, but they also have a free/open source version which has some limitations. Unfortunately, this only allows creation of PDF's. If you want to open and 'edit' existing files, pdflib do provide a product which does this this, but costs a LOT
{ "language": "en", "url": "https://stackoverflow.com/questions/7364", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "88" }
Q: Visual Studio - new "default" property values for inherited controls I'm looking for help setting a new default property value for an inherited control in Visual Studio: class NewCombo : System.Windows.Forms.ComboBox { public NewCombo() { DropDownItems = 50; } } The problem is that the base class property DropDownItems has a 'default' attribute set on it that is a different value (not 50). As a result, when I drag the control onto a form, the designer file gets an explicit mycontrol.DropDownItems = 50; line. At first, this doesn't matter. But if later I change my inherited class to DropDownItems = 45; in the constructor, this does not affect any of the controls on any form since all those designer files still have the value 50 hard-coded in them. And the whole point was to have the value set in one place so I can deal with the customer changing his mind. Obviously, if I were creating my own custom property in the subclass, I could give it its own designer default attribute of whatever I wanted. But here I'm wanting to change the default values of properties in the base. Is there any way to apply Visual Studio attributes to a base class member? Or is there some other workaround to get the result I want? A: In your derived class you need to either override (or shadow using new) the property in question and then re-apply the default value attribute.
{ "language": "en", "url": "https://stackoverflow.com/questions/7367", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: How to avoid redefining VERSION, PACKAGE, etc I haven't seen any questions relating to GNU autoconf/automake builds, but I'm hoping at least some of you out there are familiar with it. Here goes: I have a project (I'll call it myproject) that includes another project (vendor). The vendor project is a standalone project maintained by someone else. Including a project like this is fairly straightforward, but in this case there is a tiny snag: each project generates its own config.h file, each of which defines standard macros such as PACKAGE, VERSION, etc. This means that, during the build, when vendor is being built, I get lots of errors like this: ... warning: "VERSION" redefined ... warning: this is the location of the previous definition ... warning: "PACKAGE" redefined ... warning: this is the location of the previous definition These are just warnings, for the time being at least, but I would like to get rid of them. The only relevant information I've been able to turn up with a Google search is this thread on the automake mailing list, which isn't a whole lot of help. Does anybody else have any better ideas? A: Some notes: * *you didn't mention how config.h was included - with quotes or angle brackets. See this other question for more information on the difference. In short, config.h is typically included with quotes, not angle brackets, and this should make the preprocessor prefer the config.h from the project's own directory (which is usually what you want) *You say that a subproject should be including the enclosing project's config.h Normally this is not at all what you want. The subproject is standalone, and its PACKAGE and VERSION should be the one of that subproject, not yours. If you include libxml in your xmlreader project for example, you would still want the libxml code to be compiled with PACKAGE libxml and VERSION (whatever the libxml version is). *It is usually a big mistake to have config.h be included from public headers. config.h is always private to your project or the subproject, and should only be included from .c files. So, if your vendor's documentation says to include their "vendor.h" and that public header includes config.h somehow, then that is a no-no. Similarly, if your project is a library, don't include config.h anywhere from your publically installed headers. A: It's definitely a hack, but I post-process the autogen'd config.h file: sed -e 's/.*PACKAGE_.*//' < config.h > config.h.sed && mv config.h.sed config.h This is tolerable in our build environment but I'd be interested in a cleaner way. A: It turns out there was a very simple solution in my case. The vendor project gathers several header files into one monolithic header file, which is then #included by the vendor sources. But the make rule that builds the monolithic header accidentally included the generated config.h. The presence of the PACKAGE, VERSION, etc. config variables in the monolithic header is what was causing the redefinition warnings. It turns out that the vendor's config.h was irrelevant, because "config.h" always resolved to $(top_builddir)/config.h. I believe this is the way it's supposed to work. By default a subproject should be including the enclosing project's config.h instead of its own, unless the subproject explicitly includes its own, or manipulates the INCLUDE path so that its own directory comes before $(top_builddir), or otherwise manipulates the header files as in my case.
{ "language": "en", "url": "https://stackoverflow.com/questions/7398", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: What do you use to Unit-Test your Web UI? The company I'm currently working for is using Selenium for Uniting-Testing our User Interface. What do you use to Unit-Test your Web UI and how effective do you find it? A: Well, if you've designed your application properly, you won't have scads of logic inside the UI anyway. It makes much more sense to separate the actual work getting done into units separate from the UI, and then test those. If you do that, then the only code in the UI will be code that invokes the backend, so simply testing the backend is sufficient. I have used NUnit ASP in the past (at my job), and if you insist on unit testing your UI, I would strongly advise you to use ANYTHING but NUnit ASP. It's a pain to work with, and tests tend to be invalidated (needing to be revised) after even the most minor UI changes (even if the subjects of the tests don't actually change). A: We are using QuickTestPro. So far it is effective, but the browser selection is limited. The nicest part is the ability to record your browser's activity, and convert it into a scriptable set of steps. There is also a nice .Net addin so if you have any validation code you need to do for the different stages of your test, you can write methods in an assembly and call them from your script. A: We have been using JSunit for a while to do unit tests... it may not be the same kinds of tests you are talking about, but it is great for ensuring your JavaScript works as you expect. You run it in the browser, and it can be set in an Ant build to be automatically run against a bunch of browsers on a bunch of platforms remotely (so you can ensure your code is cross-browser as well as ensure the logic is correct). I don't think it replaces Selenium, but it complements it well. A: We use Visual Studio 2008 Tester Edition. Pros: Very good at capturing user interaction Captures Ajax calls It is very easy to map user input to a database, XML or CSV file The captured test can be converted to C# for more control The same tests can be used for load testing and code coverage Cons: VS2008 Tester Edition is a seperate SKU from the normal Developer Edition, which means extra cost You may be alergic to Microsoft ;-) We have used it very effectively on projects, however there a lot of effort involved in keeping tests up to date, every time you change a screen the test may need to be re-recorded We tend to keep the tests short and sharp, do one thing and get out instead of recording 10 minutes worth of clicking around in a single test. We have a few standard UI test types: Menu Test: Log in as a specific user (or user type/role) and make sure all the required menu items are available Validation Test: Open a page and click save without entering any data, ensure that all the validation warnings appear. Complete required fields one at a time and check that the warning messages disappear when they are supposed to. Search Test: Search using data from your database or a data file and ensure the correct data is returned by the search Data Entry Test: Create new recrords from a data file, cleanup the database to allow tests to run multiple times UI Testing is quite time consuming but the comfort feeling you get when a few hundred tests pass before you release a new version is priceless. A: We use Selenium Core, but are switching gradually to Selenium RC which is much nicer and easier to manage. We have written lots of custom code to make the tests run on our Continuous Integration servers, some of them in parallel suites to run faster. 
One thing you'll find is that Selenium seems to restart the browser for each test (you can set it not to do this, but we got memory problems when we did that). This can be slow in Firefox, but is not too bad in IE (one time I'm thankful for Bill Gates's OS integraion). A: I've used WATIR, which is pretty good. I liked it because it's Ruby and allows for testing interactivity, available elements and source code parsing. I haven't used it for a while but I assume it's gotten better. It's supposedly being ported to Firefox and Safari, but that's been happening for a while now. A: Check out Canoo Web Test. It is open source and built on the ANT framework. I spent some time working with it for a graduate course on Software QA and it seems to be a pretty powerful testing tool. A: Selenium Grid can run your web tests across multiple machines in parallel, which can speed up the web testing process A: I mostly use CubicTest, which is an eclipse plugin that lets you define tests graphically. It can export/run tests through several libraries, including watir and selenium. Most people just use the Selenium runner though. Full disclosure: I'm one of the developers, so I'm kind of biased :) Take a closer look here: cubictest.openqa.org -Erlend A: Selenium is for Integration testing, not Unit testing. It's a subtle, but important difference. The usage I usually see is for sanity checking a build. i.e., have a test that logs in, a test that (for example) submits a story, makes a comment, etc. The idea is that you're testing to see if the whole system is working together before deployment, rather than have a user discover that your site is broken. A: I'm a huge fan of Selenium. Saying 'unit-testing your web ui' isn't exactly accurate as some of the comments have mentioned. However, I do find Selenium to be incredibly useful for performing those sort of acceptance and sanity tests on the UI. A good way to get started is using Selenium IDE as part of your development. Ie, just have the IDE open as you're developing and write your test as you go to cut down on your dev time. (Instead of having to manually go through the UI to get to the point where you can test whatever you're working on, just hit a button and Selenium IDE will take care of that for you. It's a terrific time-saver!) Most of my major use case scenarios have Selenium RC tests to back them up. You can't really think of them as unit-tests along the lines of an xUnit framework, but they are tests targetted to very specific functionality. They're quick to write (especially if you implement common methods for things like logging in or setting up your test cases), quick to run, and provide a very tight feedback loop. In those senses Selenium RC tests are very similar to unit-tests. I think, like anything else, if you put the effort into properly learning a test tool (eg, Selenium), your effort will pay off in spades. You mention that your company already uses Selenium to do UI testing. This is great. Work with it. If you find Selenium hard to use, or confusing, stick with it. The learning curve really isn't all that steep once you learn the API a little bit. If I'm working on a web app, its rare for me to write a significant amount of code without Selenium RC tests to back it up. That's how effective I find Selenium. :) (Hopefully that'll answer your question..) A: We use Watin at my place of employment, we are a .net shop so this solution made a lot of sense. We actually started with Watir (the original ruby implementation) and switched after. 
It's been a pretty good solution for us so far. A: We currently use Silk4J - a Java-centric approach to testing Web UI. It can test Flash, Flex, AIR, Silverlight, Win32, HTML, and a few other applications. Since Silk4J can control Win32 apps it can control browser dialogs directly, which is a step above what Selenium can control and is especially useful for download prompts. A: We use WatiN for system testing, and QUnit for JavaScript unit testing. A: Molybdenum is built over Selenium and has some additional features.
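For anyone who hasn't seen what these browser-driving tests look like, here is a rough sketch of a WatiN-style test in C#. The URL, element names and the page text asserted on are hypothetical; treat it as an illustration of the approach rather than a drop-in example (WatiN also needs to run on an STA thread, which may require extra NUnit configuration):

using NUnit.Framework;
using WatiN.Core;

[TestFixture]
public class SearchPageTests
{
    [Test]
    public void Searching_shows_a_results_heading()
    {
        // Drive a real Internet Explorer instance against the app under test
        using (var browser = new IE("http://localhost/myapp/search.aspx"))
        {
            browser.TextField(Find.ByName("query")).TypeText("widgets");
            browser.Button(Find.ByName("btnSearch")).Click();

            // Assert against the rendered page, just as a user would see it
            Assert.IsTrue(browser.ContainsText("Search results"));
        }
    }
}

The same scenario could be expressed with Selenium RC or WebDriver; the point is that these are end-to-end checks through a real browser, which is why several answers above class them as integration or acceptance tests rather than unit tests.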
{ "language": "en", "url": "https://stackoverflow.com/questions/7440", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "35" }
Q: RSS feeds from Gallery2 After a couple of hours fighting with the Gallery2 RSS module and getting only the message, "no feeds have yet been defined", I gave up. Based on a Google search for "no feeds have yet been defined", this is a pretty common problem. Do you have any tips and/or tricks for getting the Gallery2 RSS module to work? Or any tips for a relatively-PHP-ignorant developer trying to debug problems with this PHP application? A: My eventual (and hopefully temporary) solution to this problem was a Python CGI script. My script follows for anyone who might find it useful (despite the fact that this is a total hack). #!/usr/bin/python """A CGI script to produce an RSS feed of top-level Gallery2 albums.""" #import cgi #import cgitb; cgitb.enable() from time import gmtime, strftime import MySQLdb ALBUM_QUERY = ''' select g_id, g_title, g_originationTimestamp from g_Item where g_canContainChildren = 1 order by g_originationTimestamp desc limit 0, 20 ''' RSS_TEMPLATE = '''Content-Type: text/xml <?xml version="1.0"?> <rss version="2.0"> <channel> <title>TITLE</title> <link>http://example.com/gallery2/main.php</link> <description>DESCRIPTION</description> <ttl>1440</ttl> %s </channel> </rss> ''' ITEM_TEMPLATE = ''' <item> <title>%s</title> <link>http://example.com/gallery2/main.php?g2_itemId=%s</link> <description>%s</description> <pubDate>%s</pubDate> </item> ''' def to_item(row): item_id = row[0] title = row[1] date = strftime("%a, %d %b %Y %H:%M:%S GMT", gmtime(row[2])) return ITEM_TEMPLATE % (title, item_id, title, date) conn = MySQLdb.connect(host = "HOST", user = "USER", passwd = "PASSWORD", db = "DATABASE") curs = conn.cursor() curs.execute(ALBUM_QUERY) print RSS_TEMPLATE % ''.join([ to_item(row) for row in curs.fetchall() ]) curs.close() A: Well, I am unsure this can help you but here is a very simple RSS that was presented as solution in another topic: PHP RSS Builder
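One caveat about the Python CGI script above: album titles are interpolated into the RSS template without any XML escaping, so a title containing &, < or > would produce an invalid feed. A small, untested tweak to the to_item function (reusing the names defined in the script) would be:

from xml.sax.saxutils import escape

def to_item(row):
    item_id = row[0]
    title = escape(row[1])  # escape &, < and > so the feed stays well-formed
    date = strftime("%a, %d %b %Y %H:%M:%S GMT", gmtime(row[2]))
    return ITEM_TEMPLATE % (title, item_id, title, date)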
{ "language": "en", "url": "https://stackoverflow.com/questions/7470", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: WPF Databinding Can anyone point me to a good resource (or throw me a clue) to show me how to do DataBinding to controls (ComboBox, ListBox, etc.) in WPF? I'm at a bit of a loss when all my WinForms niceties are taken away from me, and I'm not all that bright to start with... A: The best resource I've found for WPF data binding is Bea Costa's blog. Start from the first post and read forward. It's awesome. A: In code-behind -- set the DataContext of your list box equal to the collection you're binding to. private void OnInit(object sender, EventArgs e) { //myDataSet is some IEnumerable // myListBox is a ListBox control. // Set the DataContext of the ListBox to myDataSet myListBox.DataContext = myDataSet; } In XAML, the ListBox can declare which properties it binds to using the "Binding" syntax. <ListBox Name="myListBox" Height="200" ItemsSource="{Binding Path=BookTable}" ItemTemplate ="{StaticResource BookItemTemplate}"/> A: I find the tutorial videos at Windows Client .Net equally awesome. Dot Net Rocks TV has also covered it some time ago. A: And some more links, just in case the above didn't suffice: Windows Presentation Foundation - Data Binding How-to Topics - Approx 30 'How To' articles from MSDN. "The topics in this section describe how to use data binding to bind elements to data from a variety of data sources in the form of common language runtime (CLR) objects and XML. " Moving Toward WPF Data Binding One Step at a Time - By WPF guru Josh Smith "This article explains the absolute basics of WPF data binding. It shows four different ways how to perform the same simple task. Each iteration moves closer to the most compact, XAML-only implementation possible. This article is for people with no experience in WPF data binding." A: Here's another good resource from MSDN: Data Binding Overview. A: There are three things you need to do: * *Bind the ItemsSource of the ComboBox to the list of options. *Bind the SelectedItem to the property that holds the selection. *Set the ComboBox.ItemTemplate to a DataTemplate for a ComboBoxItem. So, for example, if your data context object is a person having email addresses, and you want to choose their primary, you might have classes with these signatures: public class EmailAddress { public string AddressAsString { get; set; } } public class Person { public IEnumerable<EmailAddress> EmailAddresses { get; } public EmailAddress MainEmailAddress { get; set; } } Then you could create the combo box like this: <ComboBox ItemsSource="{Binding EmailAddresses}" SelectedItem="{Binding MainEmailAddress}"> <ComboBox.ItemTemplate> <DataTemplate> <ComboBoxItem Content="{Binding AddressAsString}"/> </DataTemplate> </ComboBox.ItemTemplate> </ComboBox> Now you need to implement INotifyPropertyChanged in both Person and EmailAddress. For the EmailAddresses collection, you could back it with an ObservableCollection. Or as an alternative you can use Update Controls .NET. This is an open source project that replaces data binding and does not require INotifyPropertyChanged. You can use whatever collection makes sense to back the EmailAddresses property. The XAML works the same as above, except that you import the UpdateControls.XAML namespace and replace {Binding ...} with {u:Update ...}.
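The last answer mentions implementing INotifyPropertyChanged but doesn't show it. A minimal sketch for the Person class described above might look like the following (the EmailAddress class would follow the same pattern; the property names come from the answer, the rest is illustrative):

using System.Collections.Generic;
using System.Collections.ObjectModel;
using System.ComponentModel;

public class Person : INotifyPropertyChanged
{
    private EmailAddress mainEmailAddress;
    private readonly ObservableCollection<EmailAddress> emailAddresses =
        new ObservableCollection<EmailAddress>();

    public event PropertyChangedEventHandler PropertyChanged;

    // ObservableCollection already raises collection-change notifications,
    // so adds and removes show up in the bound ComboBox automatically.
    public IEnumerable<EmailAddress> EmailAddresses
    {
        get { return emailAddresses; }
    }

    public EmailAddress MainEmailAddress
    {
        get { return mainEmailAddress; }
        set
        {
            if (mainEmailAddress != value)
            {
                mainEmailAddress = value;
                OnPropertyChanged("MainEmailAddress");
            }
        }
    }

    protected void OnPropertyChanged(string propertyName)
    {
        var handler = PropertyChanged;
        if (handler != null)
            handler(this, new PropertyChangedEventArgs(propertyName));
    }
}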
{ "language": "en", "url": "https://stackoverflow.com/questions/7472", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: How to autosize a textarea using Prototype? I'm currently working on an internal sales application for the company I work for, and I've got a form that allows the user to change the delivery address. Now I think it would look much nicer, if the textarea I'm using for the main address details would just take up the area of the text in it, and automatically resize if the text was changed. Here's a screenshot of it currently. Any ideas? @Chris A good point, but there are reasons I want it to resize. I want the area it takes up to be the area of the information contained in it. As you can see in the screen shot, if I have a fixed textarea, it takes up a fair wack of vertical space. I can reduce the font, but I need address to be large and readable. Now I can reduce the size of the text area, but then I have problems with people who have an address line that takes 3 or 4 (one takes 5) lines. Needing to have the user use a scrollbar is a major no-no. I guess I should be a bit more specific. I'm after vertical resizing, and the width doesn't matter as much. The only problem that happens with that, is the ISO number (the large "1") gets pushed under the address when the window width is too small (as you can see on the screenshot). It's not about having a gimick; it's about having a text field the user can edit that won't take up unnecessary space, but will show all the text in it. Though if someone comes up with another way to approach the problem I'm open to that too. I've modified the code a little because it was acting a little odd. I changed it to activate on keyup, because it wouldn't take into consideration the character that was just typed. resizeIt = function() { var str = $('iso_address').value; var cols = $('iso_address').cols; var linecount = 0; $A(str.split("\n")).each(function(l) { linecount += 1 + Math.floor(l.length / cols); // Take into account long lines }) $('iso_address').rows = linecount; }; A: Facebook does it, when you write on people's walls, but only resizes vertically. Horizontal resize strikes me as being a mess, due to word-wrap, long lines, and so on, but vertical resize seems to be pretty safe and nice. None of the Facebook-using-newbies I know have ever mentioned anything about it or been confused. I'd use this as anecdotal evidence to say 'go ahead, implement it'. Some JavaScript code to do it, using Prototype (because that's what I'm familiar with): <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd"> <html> <head> <script src="http://www.google.com/jsapi"></script> <script language="javascript"> google.load('prototype', '1.6.0.2'); </script> </head> <body> <textarea id="text-area" rows="1" cols="50"></textarea> <script type="text/javascript" language="javascript"> resizeIt = function() { var str = $('text-area').value; var cols = $('text-area').cols; var linecount = 0; $A(str.split("\n")).each( function(l) { linecount += Math.ceil( l.length / cols ); // Take into account long lines }) $('text-area').rows = linecount + 1; }; // You could attach to keyUp, etc. if keydown doesn't work Event.observe('text-area', 'keydown', resizeIt ); resizeIt(); //Initial on load </script> </body> </html> PS: Obviously this JavaScript code is very naive and not well tested, and you probably don't want to use it on textboxes with novels in them, but you get the general idea. A: Here's a Prototype version of resizing a text area that is not dependent on the number of columns in the textarea. 
This is a superior technique because it allows you to control the text area via CSS as well as have variable width textarea. Additionally, this version displays the number of characters remaining. While not requested, it's a pretty useful feature and is easily removed if unwanted. //inspired by: http://github.com/jaz303/jquery-grab-bag/blob/63d7e445b09698272b2923cb081878fd145b5e3d/javascripts/jquery.autogrow-textarea.js if (window.Widget == undefined) window.Widget = {}; Widget.Textarea = Class.create({ initialize: function(textarea, options) { this.textarea = $(textarea); this.options = $H({ 'min_height' : 30, 'max_length' : 400 }).update(options); this.textarea.observe('keyup', this.refresh.bind(this)); this._shadow = new Element('div').setStyle({ lineHeight : this.textarea.getStyle('lineHeight'), fontSize : this.textarea.getStyle('fontSize'), fontFamily : this.textarea.getStyle('fontFamily'), position : 'absolute', top: '-10000px', left: '-10000px', width: this.textarea.getWidth() + 'px' }); this.textarea.insert({ after: this._shadow }); this._remainingCharacters = new Element('p').addClassName('remainingCharacters'); this.textarea.insert({after: this._remainingCharacters}); this.refresh(); }, refresh: function() { this._shadow.update($F(this.textarea).replace(/\n/g, '<br/>')); this.textarea.setStyle({ height: Math.max(parseInt(this._shadow.getHeight()) + parseInt(this.textarea.getStyle('lineHeight').replace('px', '')), this.options.get('min_height')) + 'px' }); var remaining = this.options.get('max_length') - $F(this.textarea).length; this._remainingCharacters.update(Math.abs(remaining) + ' characters ' + (remaining > 0 ? 'remaining' : 'over the limit')); } }); Create the widget by calling new Widget.Textarea('element_id'). The default options can be overridden by passing them as an object, e.g. new Widget.Textarea('element_id', { max_length: 600, min_height: 50}). If you want to create it for all textareas on the page, do something like: Event.observe(window, 'load', function() { $$('textarea').each(function(textarea) { new Widget.Textarea(textarea); }); }); A: One refinement to some of these answers is to let CSS do more of the work. The basic route seems to be: * *Create a container element to hold the textarea and a hidden div *Using Javascript, keep the textarea’s contents synced with the div’s *Let the browser do the work of calculating the height of that div *Because the browser handles rendering / sizing the hidden div, we avoid explicitly setting the textarea’s height. document.addEventListener('DOMContentLoaded', () => { textArea.addEventListener('change', autosize, false) textArea.addEventListener('keydown', autosize, false) textArea.addEventListener('keyup', autosize, false) autosize() }, false) function autosize() { // Copy textarea contents to div browser will calculate correct height // of copy, which will make overall container taller, which will make // textarea taller. textCopy.innerHTML = textArea.value.replace(/\n/g, '<br/>') } html, body, textarea { font-family: sans-serif; font-size: 14px; } .textarea-container { position: relative; } .textarea-container > div, .textarea-container > textarea { word-wrap: break-word; /* make sure the div and the textarea wrap words in the same way */ box-sizing: border-box; padding: 2px; width: 100%; } .textarea-container > textarea { overflow: hidden; position: absolute; height: 100%; } .textarea-container > div { padding-bottom: 1.5em; /* A bit more than one additional line of text. 
*/ visibility: hidden; } <div class="textarea-container"> <textarea id="textArea"></textarea> <div id="textCopy"></div> </div> A: Here is a solution with JQuery: $(document).ready(function() { var $abc = $("#abc"); $abc.css("height", $abc.attr("scrollHeight")); }) abc is a teaxtarea. A: Check the below link: http://james.padolsey.com/javascript/jquery-plugin-autoresize/ $(document).ready(function () { $('.ExpandableTextCSS').autoResize({ // On resize: onResize: function () { $(this).css({ opacity: 0.8 }); }, // After resize: animateCallback: function () { $(this).css({ opacity: 1 }); }, // Quite slow animation: animateDuration: 300, // More extra space: extraSpace:20, //Textarea height limit limit:10 }); }); A: Here's another technique for autosizing a textarea. * *Uses pixel height instead of line height: more accurate handling of line wrap if a proportional font is used. *Accepts either ID or element as input *Accepts an optional maximum height parameter - useful if you'd rather not let the text area grow beyond a certain size (keep it all on-screen, avoid breaking layout, etc.) *Tested on Firefox 3 and Internet Explorer 6 Code: (plain vanilla JavaScript) function FitToContent(id, maxHeight) { var text = id && id.style ? id : document.getElementById(id); if (!text) return; /* Accounts for rows being deleted, pixel value may need adjusting */ if (text.clientHeight == text.scrollHeight) { text.style.height = "30px"; } var adjustedHeight = text.clientHeight; if (!maxHeight || maxHeight > adjustedHeight) { adjustedHeight = Math.max(text.scrollHeight, adjustedHeight); if (maxHeight) adjustedHeight = Math.min(maxHeight, adjustedHeight); if (adjustedHeight > text.clientHeight) text.style.height = adjustedHeight + "px"; } } Demo: (uses jQuery, targets on the textarea I'm typing into right now - if you have Firebug installed, paste both samples into the console and test on this page) $("#post-text").keyup(function() { FitToContent(this, document.documentElement.clientHeight) }); A: Just revisiting this, I've made it a little bit tidier (though someone who is full bottle on Prototype/JavaScript could suggest improvements?). var TextAreaResize = Class.create(); TextAreaResize.prototype = { initialize: function(element, options) { element = $(element); this.element = element; this.options = Object.extend( {}, options || {}); Event.observe(this.element, 'keyup', this.onKeyUp.bindAsEventListener(this)); this.onKeyUp(); }, onKeyUp: function() { // We need this variable because "this" changes in the scope of the // function below. var cols = this.element.cols; var linecount = 0; $A(this.element.value.split("\n")).each(function(l) { // We take long lines into account via the cols divide. linecount += 1 + Math.floor(l.length / cols); }) this.element.rows = linecount; } } Just it call with: new TextAreaResize('textarea_id_name_here'); A: I've made something quite easy. First I put the TextArea into a DIV. Second, I've called on the ready function to this script. <div id="divTable"> <textarea ID="txt" Rows="1" TextMode="MultiLine" /> </div> $(document).ready(function () { var heightTextArea = $('#txt').height(); var divTable = document.getElementById('divTable'); $('#txt').attr('rows', parseInt(parseInt(divTable .style.height) / parseInt(altoFila))); }); Simple. It is the maximum height of the div once it is rendered, divided by the height of one TextArea of one row. A: I needed this function for myself, but none of the ones from here worked as I needed them. So I used Orion's code and changed it. 
I added in a minimum height, so that on the destruct it does not get too small. function resizeIt( id, maxHeight, minHeight ) { var text = id && id.style ? id : document.getElementById(id); var str = text.value; var cols = text.cols; var linecount = 0; var arStr = str.split( "\n" ); $(arStr).each(function(s) { linecount = linecount + 1 + Math.floor(arStr[s].length / cols); // take into account long lines }); linecount++; linecount = Math.max(minHeight, linecount); linecount = Math.min(maxHeight, linecount); text.rows = linecount; }; A: Like the answer of @memical. However I found some improvements. You can use the jQuery height() function. But be aware of padding-top and padding-bottom pixels. Otherwise your textarea will grow too fast. $(document).ready(function() { $textarea = $("#my-textarea"); // There is some diff between scrollheight and height: // padding-top and padding-bottom var diff = $textarea.prop("scrollHeight") - $textarea.height(); $textarea.live("keyup", function() { var height = $textarea.prop("scrollHeight") - diff; $textarea.height(height); }); }); A: My solution not using jQuery (because sometimes they don't have to be the same thing) is below. Though it was only tested in Internet Explorer 7, so the community can point out all the reasons this is wrong: textarea.onkeyup = function () { this.style.height = this.scrollHeight + 'px'; } So far I really like how it's working, and I don't care about other browsers, so I'll probably apply it to all my textareas: // Make all textareas auto-resize vertically var textareas = document.getElementsByTagName('textarea'); for (i = 0; i<textareas.length; i++) { // Retain textarea's starting height as its minimum height textareas[i].minHeight = textareas[i].offsetHeight; textareas[i].onkeyup = function () { this.style.height = Math.max(this.scrollHeight, this.minHeight) + 'px'; } textareas[i].onkeyup(); // Trigger once to set initial height } A: Probably the shortest solution: jQuery(document).ready(function(){ jQuery("#textArea").on("keydown keyup", function(){ this.style.height = "1px"; this.style.height = (this.scrollHeight) + "px"; }); }); This way you don't need any hidden divs or anything like that. Note: you might have to play with this.style.height = (this.scrollHeight) + "px"; depending on how you style the textarea (line-height, padding and that kind of stuff). A: Here is an extension to the Prototype widget that Jeremy posted on June 4th: It stops the user from entering more characters if you're using limits in textareas. It checks if there are characters left. If the user copies text into the textarea, the text is cut off at the max. 
length: /** * Prototype Widget: Textarea * Automatically resizes a textarea and displays the number of remaining chars * * From: http://stackoverflow.com/questions/7477/autosizing-textarea * Inspired by: http://github.com/jaz303/jquery-grab-bag/blob/63d7e445b09698272b2923cb081878fd145b5e3d/javascripts/jquery.autogrow-textarea.js */ if (window.Widget == undefined) window.Widget = {}; Widget.Textarea = Class.create({ initialize: function(textarea, options){ this.textarea = $(textarea); this.options = $H({ 'min_height' : 30, 'max_length' : 400 }).update(options); this.textarea.observe('keyup', this.refresh.bind(this)); this._shadow = new Element('div').setStyle({ lineHeight : this.textarea.getStyle('lineHeight'), fontSize : this.textarea.getStyle('fontSize'), fontFamily : this.textarea.getStyle('fontFamily'), position : 'absolute', top: '-10000px', left: '-10000px', width: this.textarea.getWidth() + 'px' }); this.textarea.insert({ after: this._shadow }); this._remainingCharacters = new Element('p').addClassName('remainingCharacters'); this.textarea.insert({after: this._remainingCharacters}); this.refresh(); }, refresh: function(){ this._shadow.update($F(this.textarea).replace(/\n/g, '<br/>')); this.textarea.setStyle({ height: Math.max(parseInt(this._shadow.getHeight()) + parseInt(this.textarea.getStyle('lineHeight').replace('px', '')), this.options.get('min_height')) + 'px' }); // Keep the text/character count inside the limits: if($F(this.textarea).length > this.options.get('max_length')){ var text = $F(this.textarea).substring(0, this.options.get('max_length')); this.textarea.value = text; return false; } var remaining = this.options.get('max_length') - $F(this.textarea).length; this._remainingCharacters.update(Math.abs(remaining) + ' characters remaining'); } }); A: Internet Explorer, Safari, Chrome and Opera users need to remember to explicitly set the line-height value in CSS. I do a stylesheet that sets the initial properties for all text boxes as follows. <style> TEXTAREA { line-height: 14px; font-size: 12px; font-family: arial } </style> A: Here is a function I just wrote in jQuery to do it - you can port it to Prototype, but they don't support the "liveness" of jQuery so elements added by Ajax requests will not respond. This version not only expands, but it also contracts when delete or backspace is pressed. This version relies on jQuery 1.4.2. Enjoy ;) http://pastebin.com/SUKeBtnx Usage: $("#sometextarea").textareacontrol(); or (any jQuery selector for example) $("textarea").textareacontrol(); It was tested on Internet Explorer 7/Internet Explorer 8, Firefox 3.5, and Chrome. All works fine. A: @memical had an awesome solution for setting the height of the textarea on pageload with jQuery, but for my application I wanted to be able to increase the height of the textarea as the user added more content. I built off memical's solution with the following: $(document).ready(function() { var $textarea = $("p.body textarea"); $textarea.css("height", ($textarea.attr("scrollHeight") + 20)); $textarea.keyup(function(){ var current_height = $textarea.css("height").replace("px", "")*1; if (current_height + 5 <= $textarea.attr("scrollHeight")) { $textarea.css("height", ($textarea.attr("scrollHeight") + 20)); } }); }); It's not very smooth but it's also not a client-facing application, so smoothness doesn't really matter. (Had this been client-facing, I probably would have just used an auto-resize jQuery plugin.) 
A: Using ASP.NET, just simply do this: <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>Automatic Resize TextBox</title> <script type="text/javascript"> function setHeight(txtarea) { txtarea.style.height = txtarea.scrollHeight + "px"; } </script> </head> <body> <form id="form1" runat="server"> <asp:TextBox ID="txtarea" runat="server" TextMode="MultiLine" onkeyup="setHeight(this);" onkeydown="setHeight(this);" /> </form> </body> </html> A: For those that are coding for IE and encounter this problem, IE has a little trick that makes it 100% CSS. <TEXTAREA style="overflow: visible;" cols="100" ....></TEXTAREA> You can even provide a value for rows="n" which IE will ignore, but other browsers will use. I really hate coding that implements IE hacks, but this one is very helpful. It is possible that it only works in Quirks mode.
{ "language": "en", "url": "https://stackoverflow.com/questions/7477", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "129" }
Q: How do I make a GUI? I've made many different separate parts of a GUI system for the Nintendo DS, like buttons and textboxes and select boxes, but I need a way of containing these classes in one Gui class, so that I can draw everything to the screen all at once, and check all the buttons at once to check if any are being pressed. My question is what is the best way to organize all the classes (such as buttons and textboxes) into one GUI class? Here's one way I thought of but it doesn't seem right: Edit: I'm using C++. class Gui { public: void update_all(); void draw_all() const; int add_button(Button *button); // Returns button id void remove_button(int button_id); private: Button *buttons[10]; int num_buttons; }; This code has a few problems, but I just wanted to give you an idea of what I want. A: This question is very similar to one I was going to post, only mine is for Sony PSP programming. I've been toying with something for a while, I've consulted some books and VTMs, and so far this is a rough idea of a simple ui system. class uiElement { ... virtual void Update() = 0; virtual void Draw() = 0; ... }; class uiButton : public uiElement { ... virtual void Update(); virtual void Draw(); ... }; class uiTextbox : public uiElement { ... virtual void Update(); virtual void Draw(); ... }; ... // Other ui Elements class uiWindow { ... void Update(); void Draw(); void AddElement(uiElement *Element); void RemoveElement(uiElement *Element); std::list<uiElement*> Elements; ... }; void uiWindow::Update() { ... for (std::list<uiElement*>::iterator it = Elements.begin(); it != Elements.end(); it++ ) (*it)->Update(); ... } void uiWindow::Draw() { ... for (std::list<uiElement*>::iterator it = Elements.begin(); it != Elements.end(); it++ ) (*it)->Draw(); ... } The principle is to create a window and attach ui Elements to it, and call the draw and update methods from the respective main functions. I don't have anything working yet, as I have issues with drawing code. With different APIs on the PC and PSP, I'm looking at some wrapper code for OpenGL and psp gu. Hope this helps. thing2k
I've detailed the entire process of writing the UI on my blog: http://ant.simianzombie.com/blog It includes descriptions of the two algorithms I came up with for redrawing the screen, which is the trickiest part of creating a GUI (one just splits rectangles up and remembers visible regions; the other uses BSP trees, which is much more efficient and easier to understand), tips for optimisation, etc. A: One useful strategy to keep in mind might be the composite pattern. At a low level, it might allow you to treat all GUI objects (and collections of objects) more easily once built. But I have no idea what's involved in GUI framework design, so one place to find general inspiration is in the source code of an existing project. WxWidgets is a cross-platform GUI framework with source available. Good luck with your project! A: I think looking at the way other GUI toolkits have done it would be an excellent place to start. For C++ examples, I hear lots of good things about Qt. I haven't used it personally though. And of course WxWidgets as Nick mentioned. A: I've written a very simple GUI just like you propose. I have it running on Windows, Linux and Macintosh. It should port relatively easily to any system like the PSP or DS too. It's open-source, LGPL and is here: http://code.google.com/p/kgui/
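To make the composite pattern suggestion above concrete, the idea is to give every widget a common base class and let containers hold children through that base, so the top-level GUI object only needs one Update() and one Draw(). A bare-bones C++ sketch (the names are illustrative, not from any particular library; on the DS you might swap std::vector for a fixed-size array, per the advice above about avoiding the STL):

#include <algorithm>
#include <vector>

class Widget {
public:
    virtual ~Widget() {}
    virtual void update() = 0;
    virtual void draw() const = 0;
};

class Container : public Widget {
public:
    // The container owns its children and forwards every call to them.
    ~Container() {
        for (size_t i = 0; i < children_.size(); ++i) delete children_[i];
    }
    void add(Widget* child) { children_.push_back(child); }
    void remove(Widget* child) {
        children_.erase(std::remove(children_.begin(), children_.end(), child),
                        children_.end());
        delete child;
    }
    void update() {
        for (size_t i = 0; i < children_.size(); ++i) children_[i]->update();
    }
    void draw() const {
        for (size_t i = 0; i < children_.size(); ++i) children_[i]->draw();
    }
private:
    std::vector<Widget*> children_;
};

Because Container is itself a Widget, tab bars, radio groups and scrolling lists can be built as containers of simpler elements, which is exactly the structure the Woopsi answer above argues for.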
{ "language": "en", "url": "https://stackoverflow.com/questions/7489", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Performing a Stress Test on Web Application? In the past, I used Microsoft Web Application Stress Tool and Pylot to stress test web applications. I'd written a simple home page, login script, and site walkthrough (in an ecommerce site adding a few items to a cart and checkout). Just hitting the homepage hard with a handful of developers would almost always locate a major problem. More scalability problems would surface at the second stage, and even more - after the launch. The tools I used were Microsoft Homer (aka Microsoft Web Application Stress Tool) and Pylot. The reports generated by these tools never made much sense to me, and I would spend many hours trying to figure out what kind of concurrent load the site would be able to support. It was always worth it because the stupidest bugs and bottlenecks would always come up (for instance, web server misconfigurations). What have you done, what tools have you used, and what success have you had with your approach? The part that is most interesting to me is coming up with some kind of a meaningful formula for calculating the number of concurrent users an app can support from the numbers reported by the stress test application. A: For simple usage, I prefer ab (Apache Benchmark) and siege; the latter is needed as ab doesn't support cookies and would create endless sessions from a dynamic site. Both are simple to start: ab -c n -t 30 url siege -b -c n -t 30s url siege can run with more urls. The last siege version turns verbose on in siegerc, which is annoying. You can only disable it by editing that file (/usr/local/etc/siegerc). A: As this question is still open, I might as well weigh in. The good news is that over the past 5 or so years the Open Source tools have really matured and taken off in the space, the bad news is there are so many of them out there. Here are my thoughts:- Jmeter vs Grinder Jmeter is driven from an XML-style specification that is constructed via a GUI. Grinder uses Jython scripting within a multi-threaded Java framework, so is more oriented to programmers. Both tools will handle HTTP and HTTPS and have a proxy recorder to get you started. Both tools use the Controller model to drive multiple test agents so scalability is not an issue (given access to the Cloud). Which is better:- A hard call as the learning curve is steep with both tools as you get into the more complicated scripting requirements for url rewriting, correlation, providing unique data per Virtual User and simulating first time or returning Users (by manipulating the HTTP Headers). That said I would start with Jmeter as this tool has a huge following and there are many examples and tutorials on the web for using this tool. If and when you come to a 'road block', that is something you can't 'easily' do with Jmeter then have a look at the Grinder. The good news is both these tools have the same Java requirement and a 'mix and match' solution is not out of the question. Something new to add – Headless browsers running multiple instances of Selenium WebDriver. This is a relatively new approach because it relies on the availability of resources that can now be provisioned from the Cloud. With this approach a Selenium (WebDriver) script is taken and run within a headless browser (i.e. WebDriver = New HtmlUnitDriver()) driver in multiple threads. From experience around 25 instances of 'headless browsers' can be executed from the Amazon M1 Small Instance. 
What this means is that all of the correlation and url rewriting issues disappear as you repurpose your functional testing scripts to become performance testing scripts. The scalability is compromised as more VMs will be needed to drive the load, as compared with an HTTP driver such as the Grinder or Jmeter. That said, if you are looking to drive 500 Virtual Users then 20 Amazon Small Instances (6 cents an hour each), at a cost of just $1.20 per hour, give you load that is very close to the Real User Experience. A: Also, there is an awesome open-source pure-Python distributed and scalable locust framework that uses greenlets. It's great at simulating enormous amounts of simultaneous users. A: We recently started using Gatling for load testing. I would highly recommend trying out this tool for load testing. We had used SOASTA and JMETER in the past. Our main reason to consider Gatling is the following: * *Recorder to record the scenario *Using Akka and Netty, which gives better performance compared to Jmeter's threading model *A Scala DSL which is much more maintainable compared to Jmeter XML *Easy to write the tests, don't be scared if it's Scala. *Reporting Let me give you a simple example to write the code using Gatling Code: // your code starts here val scn = scenario("Scenario") .exec(http("Page") .get("http://example.com")) // injecting 100 users on the above scenario setUp(scn.inject(atOnceUsers(100))) However you can make it as complicated as possible. One of the features which stands out for Gatling is reporting, which is very detailed. Here are some links: Gatling Gatling Tutorial I recently gave a talk on it, you can go through the talk here: https://docs.google.com/viewer?url=http%3A%2F%2Ffiles.meetup.com%2F3872152%2FExploring-Load-Testing-with-Gatling.pdf A: This is an old question, but I think newer solutions are worthy of a mention. Check out LoadImpact: http://www.loadimpact.com. A: I tried WebLoad; it's a pretty neat tool. It comes with a test script IDE which allows you to record user actions on a website. It also draws a graph as it performs a stress test on your web server. Try it out, I highly recommend it. A: I've used The Grinder. It's open source, pretty easy to use, and very configurable. It is Java based and uses Jython for the scripts. We ran it against a .NET web application, so don't think it's a Java only tool (by their nature, any web stress tool should not be tied to the platform it uses). We did some neat stuff with it... we were a web based telecom application, so one cool use I set up was to mimic dialing a number through our web application, then used an auto answer tool we had (which was basically a tutorial app from Microsoft to connect to their RTC LCS server... which is what Microsoft Office Communicator connects to on a local network... then modified to just pick up calls automatically). This then allowed us to use this instead of an expensive telephony tool called The Hammer (or something like that). Anyways, we also used the tool to see how our application held up under high load, and it was very effective in finding bottlenecks. The tool has built in reporting to show how long requests are taking, but we never used it. The logs can also store all the responses and whatnot, or custom logging. I highly recommend this tool, very useful for the price... but expect to do some custom setup with it (it has a built in proxy to record a script, but it may need customization for capturing something like sessions... 
I know I had to customize it to utilize a unique session per thread). A: Trying all mentioned here, I found curl-loader as best for my purposes. very easy interface, real-time monitoring, useful statistics, from which I build graphs of performance. All features of libcurl are included. A: Blaze meter has a chrome extension for recording sessions and exporting them to JMeter (currently requires login). You also have the option of paying them money to run it on their cluster of JMeter servers (their pricing seems much better than LoadImpact which I've just stopped using): * *BlazeMeter Chrome Extension *Blog entry about it I don't have any association with them, I just like the look of their service, although I haven't used the paid version yet. A: A little late to this party. I agree that Pylot is the best up-and-coming open source tool out there. It's simple to use and is actively worked on by a great guy (Corey Goldberg). As the founder of OpenQA, I'm also happy that Pylot now is listed on our home page and uses some of our infrastructure (namely the forums). However, I also recently decided that the entire concept of load testing was flawed: emulating HTTP traffic, with applications as complex as they have become, is a pain in the butt. That's why I created the commercial tool BrowserMob. It's an external load testing service that uses Selenium to control real web browsers when playing back load. The approach obviously requires a ton more hardware than normal load testing techniques, but hardware is actually pretty cheap when you are using cloud computing. And a nice side effect of this is that the scripting is much easier than normal load testing. You don't have to do any advanced regex matching (like JMeter requires) to extract out cookies, .NET session state, Ajax request parameters, etc. Since you're using real browsers, they just do what they are supposed to do. Sorry to blatantly pitch a commercial product, but hopefully the concept is interesting to some folks and at least gets them thinking about some new ways to deal with load testing when you have access to a bunch of extra hardware! A: You asked this question almost a year ago and I don't know if you still are looking for another way of benchmarking your website. However since this question is still not marked as solved I would like to suggest the free webservice LoadImpact (btw. not affiliated). Just got this link via twitter and would like to share this find. They create a reasonable good overview and for a few bucks more you get the "full impact mode". This probably sounds strange, but good luck pushing and braking your service :) A: I've used JMeter. Besides testing the web server you can also test your database backend, messaging services and email servers. A: ab, siege, tsung, httperf, Trample, Pylot, request-log-analyzer, perftools A: Here's another vote for JMeter. JMeter is an open-source load testing tool, written in Java. It's capable of testing a number of different server types (for example, web, web services, database, just about anything that uses requests basically). It does however have a steep learning curve once you start getting to complicated tests, but it's well worth it. You can get up and running very quickly, and depending on what sort of stress-testing you want to do, that might be fine. Pros: * *Open-Source/Free tool from the Apache project (helps with buy-in) *Easy to get started with, and easy to use once you grasp the core concepts. 
(Ie, how to create a request, how to create an assertion, how to work with variables etc). *Very scalable. I've run tests with 11 machines generating load on the server to the tune of almost a million hits/hour. It was much easier to setup than I was expecting. *Has an active community and good resources to help you get up and running. Read the tutorials first and play with it for a while. Cons: * *The UI is written in Swing. (ugh!) *JMeter works by parsing the response text returned by the server. So if you're looking to validate any sort of javascript behaviours, you're out of luck. *Learning curve is steep for non-programmers. If you're familiar with regular expressions, you're already ahead of the game. *There are large numbers of (insert expletive) idiots in the support forum asking stupid questions that could be easily solved if they'd give the documentation even a cursory glance. ('How do I use JMeter to stress-test my Windows GUI' shows up quite frequently). *Reporting 'out of the box' leaves much to be desired, particularly for larger tests. In the test I mentioned above, I ended up having to write a quick console app to do some of the 'xml-logfile' to 'html' conversions. That was a few years ago though, so it's probable that this would no longer be required. A: For a web based service, check out loader.io. Summary: loader.io is a free load testing service that allows you to stress test your web-apps/apis with thousands of concurrent connections. They also have an API. A: I found IBM Page Detailer also an interesting tool to work with. A: I've used openSTA. This allows a session with a web site to be recorded and then played back via a relatively simple script language. You can easily test web services and write your own scripts. It allows you to put scripts together in a test in any way you want and configure the number of iterations, the number of users in each iteration, the ramp up time to introduce each new user and the delay between each iteration. Tests can also be scheduled in the future. It's open source and free. It produces a number of reports which can be saved to a spreadsheet. We then use a pivot table to easily analyse and graph the results. A: We use the Microsoft tool mentioned - Microsoft Web Application Stress Tool. It is the easiest tool I have used. It is limited in many ways, including only being able to hit port 80 on manually created tests. But, its ease of use means it actually gets used. We supplement the load from this tool with other tools including OpenSTA and link check spiders. JMeter looks good from my initial evaluation, I hope to include it in our continuous integration going forward. But, JMeter is complex and non trivial to roll out. I'd suggest opening another question regarding interpreting the MS stress tool results. A: Visual Studio Test Edition 2010 (2008 good too). This is a really easy and powerful tool to create web/load tests with. The bonus with this tool when using against Windows servers is that you get integrated access to all the perfmon server stats in your report. Really useful. The other bonus is that with Visual Studio project you can integrate a "Performance Session" that will profile the code execution of your website. If you are serving webpages from a windows server, this is the best tool out there. There is a separate and expensive licence required to use several machines to load test the application however. 
A: We have developed a process that treats load and performance measurement as a first-class concern - as you say, leaving it to the end of the project tends to lead to disappointment... So, during development, we include very basic multi-user testing (using selenium), which checks for basic craziness like broken session management, obvious concurrency issues, and obvious resource contention problems. Non-trivial projects include this in the continuous integration process, so we get very regular feedback. For projects that don't have extreme performance requirements, we include basic performance testing in our testing; usually, we script out the tests using BadBoy, and import them into JMeter, replacing the login details and other thread-specific things. We then ramp these up to the level that the server is dealing with 100 requests per second; if the response time is less than 1 second, that's usually sufficient. We launch and move on with our lives. For projects with extreme performance requirements, we still use BadBoy and JMeter, but put a lot of energy into understanding the bottlenecks on the servers on our test rig (web and database servers, usually). There's a good tool for analyzing Microsoft event logs which helps a lot with this. We typically find unexpected bottlenecks, which we optimize if possible; that gives us an application that is as fast as it can be on "1 web server, 1 database server". We then usually deploy to our target infrastructure, and use one of the "Jmeter in the cloud" services to re-run the tests at scale. Again, PAL reports help to analyze what happened during the tests - you often see very different bottlenecks on production environments. The key is to make sure you don't just run your stress tests, but also that you collect the information you need to understand the performance of your application. A: There are a lot of good tools mentioned here. I wonder if tools are an answer to the question: "How do you stress test a web application?" The tools don't really provide a method to stress a Web app. Here's what I know: Stress testing shows how a Web app fails while serving responses to an increasing population of users. Stress testing shows how the Web app functions while it fails. Most Web apps today - especially the Social/Mobile Web apps - are integrations of services. For example, when Facebook had its outage in May 2011 you could not log onto Pepsi.com's Web app. The app didn't fail entirely, just a big portion of its normal function became unavailable to users. Performance testing shows a Web app's ability to maintain response times independent of how many users are concurrently using the app. For example, an app that handles 10 transactions per second with 10 concurrent users should handle 20 transactions per second at 20 users. If the app handles less than 20 transactions per second the response times are growing longer and the app is not able to achieve linear scalability. Also, in the above example the transaction-per-second count should be of only successful operations of a test use case/workflow. Failures typically happen in shorter timespans and will make the TPS measurement overly optimistic. Failures are important to a stress and performance test since they generate load on the app too. I wrote up the PushToTest methodology in the TestMaker User Guide at http://www.pushtotest.com/pushtotest-testmaker-6-methodology. 
TestMaker comes in two flavors: Open Source (GPL) Community version and TestMaker Enterprise (commercial with great professional support.) -Frank A: Try ZebraTester which is much easier to use than jMeter. I have used jMeter for a long time but the total setup time for a load test was always an issue. Although ZebraTester isn't open source, the time that I have saved in the last six months makes up for it. They also have a SaaS portal which can be used for quickly running tests using their load generators. A: Take a look at LoadBooster (https://www.loadbooster.com). It utilizes the headless scriptable browser PhantomJS/CasperJS to test web sites. PhantomJS will parse and render every page and execute the client-side script. The headless browser approach makes it easier to write test scenarios that support complex AJAX-heavy Web 2.0 apps, browser navigation, mouse clicks and keystrokes into the browser, or waiting until an element exists in the DOM. LoadBooster supports Selenium HTML scripts too. Disclaimer: I work for LoadBooster. A: One more note, for our web application, I found that we had huge performance issues due to contention between threads over locks... so the moral was to think over the locking scheme very carefully. We ended up having worker threads to throttle too many requests using an asynchronous http handler, otherwise the application would just get overwhelmed and crash and burn. It meant a huge backlog could pile up, but at least the site would stay up. A: Take a look at TestComplete. A: I second the opensta suggestion. I would just add that it allows you to do things to monitor the server you're testing using SMTP. We keep track of processor load, memory used, bytes sent, etc. The only downside is that if you find something broken and want to do a fix it relies on several open-source libraries that are no longer kept up, so getting a compiling version of the source is more tricky than with most OSS. A: I played with JMeter. One thing it could not test was ASP.NET Webforms. The viewstate broke my tests. I am not sure why, but there are a couple of tools out there that don't handle viewstate right. My current project is ASP.NET MVC and JMeter works well with it. A: I had good results with FunkLoad : * *easy to script user interaction *reports are clear *can monitor server load A: At the risk of being accused of shameless self promotion I would like to point out that in my quest for a free load testing tool I went to this article: http://www.devcurry.com/2010/07/10-free-tools-to-loadstress-test-your.html Either I couldn't get the throughput I wanted, or I couldn't get the flexibility I wanted. AND I wanted to easily aggregate the results of multiple load test generation hosts in post test analysis. I tried out every tool on the list and to my frustration found that none of them quite did what I wanted to be able to do. So I built one and am sharing it. Here it is: http://sourceforge.net/projects/loadmonger PS: No snide comments on the name from folks who are familiar with urban slang. I wasn't but am slightly more worldly now. A: I vote for jMeter too and I want to add some notes to @PeterBernier's answer. The main question that load testing answers is how many concurrent users can my web application support? In order to get a proper answer, load testing should represent real application usage, as close as possible. Keeping the above in mind, jMeter has many building blocks (Logical Controllers, Config Elements, Pre Processors, Listeners, ...) which can help you with this. 
You can mimic real-world situations with jMeter, for example you can: * *Configure jMeter to act as a real browser by configuring concurrent resource download, browser cache, http headers, request time outs, cookie management, https support, encoding, ajax support, ... *Configure jMeter to generate user requests (by defining the number of users per second, ramp-up time, scheduling, ...) *Configure lots of clients with jMeter on them, to do a distributed load test. *Process responses to find out if the server is responding correctly during the test (for example, assert the response to find a text in it). Please consider: * *It is easy to start a real web application test with jMeter in minutes. jMeter has a very easy tool which records your test scenario (known as the HTTP(S) Test Script Recorder). *jMeter has lots of plugins at http://jmeter-plugins.org. *The jMeter UI is Swing based and has seen good changes in jMeter 3.2. On the other hand, please consider that the JMeter GUI should only be used for testing and debugging. It is not good practice to use it in GUI mode for an actual test. https://www.blazemeter.com/blog/5-ways-launch-jmeter-test-without-using-jmeter-gui. Configure and test your scenario and run it in non-GUI mode. *There are lots of report-showing tools in jMeter (known as listeners) but they are not meant to be on during the test. You must run your test and generate reports (.jtl files). Then you must use these tools to analyze the results. Please have a look at https://www.blazemeter.com/blog/jmeter-listeners-part-1-basic-display-formats or https://www.tutorialspoint.com/jmeter/jmeter_listeners.htm. The site https://www.blazemeter.com/jmeter has very good and practical information to help you configure your test environment.
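To make the "run it in non-GUI mode" advice concrete, the usual command line looks something like this (the test-plan and output file names are placeholders; the -e/-o dashboard options need JMeter 3.0 or later):

# Run a saved test plan headlessly and write raw results to a .jtl file
jmeter -n -t my_test_plan.jmx -l results.jtl

# JMeter 3.0+ can also generate the HTML dashboard report in the same run
jmeter -n -t my_test_plan.jmx -l results.jtl -e -o report/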
{ "language": "en", "url": "https://stackoverflow.com/questions/7492", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "252" }
Q: Linq to objects - select first object I know almost nothing about linq. I'm doing this: var apps = from app in Process.GetProcesses() where app.ProcessName.Contains( "MyAppName" ) && app.MainWindowHandle != IntPtr.Zero select app; Which gets me all the running processes which match that criteria. But I don't know how to get the first one. The examples I can find on the net seem to imply I have to do this var matchedApp = (from app in Process.GetProcesses() where app.ProcessName.Contains( "MyAppName" ) && app.MainWindowHandle != IntPtr.Zero select app).First(); which strikes me as somewhat ugly, and also throws an exception if there are no matching processes. Is there a better way? UPDATE I'm actually trying to find the first matching item, and call SetForegroundWindow on it I've come up with this solution, which also strikes me as ugly and awful, but better than above. Any ideas? var unused = from app in Process.GetProcesses() where app.ProcessName.Contains( "MyAppName" ) && app.MainWindowHandle != IntPtr.Zero select SetForegroundWindow( app.MainWindowHandle ); // side-effects in linq-query is technically bad I guess A: @FryHard FirstOrDefault will work but remember that it returns null if none are found. This code isn't tested but should be close to what you want: var app = Process.GetProcesses().FirstOrDefault(p => p.ProcessName.Contains("MyAppName") && p.MainWindowHandle != IntPtr.Zero); if (app == null) return; SetForegroundWindow(app.MainWindowHandle); A: Do not use Count() like ICR says. Count() will iterate through the IEnumerable to figure out how many items it has. In this case the performance penalty may be negligible since there aren't many processes, but it's a bad habit to get into. Only use Count() when your query is only interested in the number of results. Count is almost never a good idea. There are several problems with FryHard's answer. First, because of delayed execution, you will end up executing the LINQ query twice, once to get the number of results, and once to get the FirstOrDefault. Second, there is no reason whatsoever to use FirstOrDefault after checking the count. Since it can return null, you should never use it without checking for null. Either do apps.First().MainWindowHandle or: var app = apps.FirstOrDefault(); if (app != null) SetForegroundWindow(app.MainWindowHandle); This is why the best solution is Mark's, without question. It's the most efficient and stable way of using LINQ to get what you want. A: Assuming that in your first example apps is an IEnumerable you could make use of the .Count and .FirstOrDefault properties to get the single item that you want to pass to SetForegroundWindow. var apps = from app in Process.GetProcesses() where app.ProcessName.Contains( "MyAppName" ) && app.MainWindowHandle != IntPtr.Zero select app; if (apps.Count > 0) { SetForegroundWindow(apps.FirstOrDefault().MainWindowHandle ); }
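Pulling the pieces of the suggested approach together - including the P/Invoke declaration for SetForegroundWindow, which the snippets above assume is already in scope - a minimal sketch looks like this (class and method names are illustrative):

using System;
using System.Diagnostics;
using System.Linq;
using System.Runtime.InteropServices;

static class WindowActivator
{
    [DllImport("user32.dll")]
    [return: MarshalAs(UnmanagedType.Bool)]
    private static extern bool SetForegroundWindow(IntPtr hWnd);

    public static void BringMyAppToFront()
    {
        // FirstOrDefault avoids the exception First() throws when nothing matches
        var app = Process.GetProcesses()
            .FirstOrDefault(p => p.ProcessName.Contains("MyAppName")
                                 && p.MainWindowHandle != IntPtr.Zero);

        if (app != null)
            SetForegroundWindow(app.MainWindowHandle);
    }
}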
{ "language": "en", "url": "https://stackoverflow.com/questions/7503", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: Dropping a group of tables in SQL Server Is there a simple way to drop a group of interrelated tables in SQL Server? Ideally I'd like to avoid having to worry about what order they're being dropped in since I know the entire group will be gone by the end of the process. A: At the risk of sounding stupid, I don't believe SQL Server supports the delete / cascade syntax. I think you can configure a delete rule to do cascading deletes (http://msdn.microsoft.com/en-us/library/ms152507.aspx), but as far as I know the trick with SQL Server is to just run your drop query once for each table you're dropping, then check it worked. A: I'm not sure if Derek's approach works. You haven't marked it as the best answer yet. If not: with SQL Server 2005 it should be possible, I guess. There they introduced exceptions (which I've not used yet). So drop the table, catch the exception if one occurs, and try the next table till they are all gone. You can store the list of tables in a temp-table and use a cursor to traverse it, if you want to. A: A different approach could be: first get rid of the constraints, then drop the tables in a single shot. In other words, a DROP CONSTRAINT for every constraint, then a DROP TABLE for each table; at this point the order of execution shouldn't be an issue. A: This requires the sp_drop_constraints script you can find at Database Journal: sp_MSforeachtable @command1="print 'disabling constraints: ?'", @command2="sp_drop_constraints @tablename=?" GO sp_MSforeachtable @command1="print 'dropping: ?'", @command2="DROP TABLE ?" GO NOTE: this - obviously - is for if you mean to drop ALL of the tables in your database, so be careful A: I don't have access to SQL Server to test this, but how about: DROP TABLE IF EXISTS table1, table2, table3 CASCADE; A: I ended up using Apache's ddlutils to perform the dropping for me, which sorted it out in my case, though a solution which worked only within sql server would be quite a bit simpler. @Derek Park, I didn't know you could comma separate tables there, so that's handy, but it doesn't seem to work quite as expected. Neither IF EXISTS nor CASCADE are recognised by sql server it seems, and running drop table X, Y, Z seems to work only if they should be dropped in the stated order. See also http://msdn.microsoft.com/en-us/library/ms173790.aspx, which describes the drop table syntax. A: The thing holding you back from dropping the tables in any order is the foreign key dependencies between the tables. So get rid of the FK's before you start. * *Using the INFORMATION_SCHEMA system views, retrieve a list of all foreign keys related to any of these tables *Drop each of these foreign keys *Now you should be able to drop all of the tables, using any order that you want. (A sketch of the first two steps follows below.)
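A sketch of the last answer's first two steps, using INFORMATION_SCHEMA to generate the DROP CONSTRAINT statements. The table names are placeholders; this only finds foreign keys defined on the listed tables, which is the usual case for an interrelated group. Run the generated ALTER TABLE statements, then drop the tables in any order:

-- Generate one ALTER TABLE ... DROP CONSTRAINT statement per foreign key
-- defined on the tables you are about to drop
SELECT 'ALTER TABLE [' + TABLE_SCHEMA + '].[' + TABLE_NAME + '] ' +
       'DROP CONSTRAINT [' + CONSTRAINT_NAME + ']'
FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS
WHERE CONSTRAINT_TYPE = 'FOREIGN KEY'
  AND TABLE_NAME IN ('table1', 'table2', 'table3');

-- After running the statements produced above, the drops no longer
-- depend on ordering
DROP TABLE table1;
DROP TABLE table2;
DROP TABLE table3;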
{ "language": "en", "url": "https://stackoverflow.com/questions/7517", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Of Memory Management, Heap Corruption, and C++ So, I need some help. I am working on a project in C++. However, I think I have somehow managed to corrupt my heap. This is based on the fact that I added an std::string to a class and assigned it a value from another std::string: std::string hello = "Hello, world.\n"; /* exampleString = "Hello, world.\n" would work fine. */ exampleString = hello; crashes on my system with a stack dump. So basically I need to stop and go through all my code and memory management stuff and find out where I've screwed up. The codebase is still small (about 1000 lines), so this is easily do-able. Still, I'm over my head with this kind of stuff, so I thought I'd throw it out there. I'm on a Linux system and have poked around with valgrind, and while not knowing completely what I'm doing, it did report that the std::string's destructor was an invalid free. I have to admit to getting the term 'Heap Corruption' from a Google search; any general purpose articles on this sort of stuff would be appreciated as well. (In before rm -rf ProjectDir, do again in C# :D) EDIT: I haven't made it clear, but what I'm asking for are ways and advice for diagnosing this sort of memory problem. I know the std::string stuff is right, so it's something I've done (or a bug, but there's Not A Problem With Select). I'm sure I could check the code I've written up and you very smart folks would see the problem in no time, but I want to add this kind of code analysis to my 'toolbox', as it were. A: Oh, if you want to know how to debug the problem, that's simple. First, get a dead chicken. Then, start shaking it. Seriously, I haven't found a consistent way to track these kinds of bugs down. Because there are so many potential problems, there's not a simple checklist to go through. However, I would recommend the following: * *Get comfortable in a debugger. *Start tromping around in the debugger to see if you can find anything that looks fishy. Check especially to see what's happening during the exampleString = hello; line. *Check to make sure it's actually crashing on the exampleString = hello; line, and not when exiting some enclosing block (which could cause destructors to fire). *Check any pointer magic you might be doing. Pointer arithmetic, casting, etc. *Check all of your allocations and deallocations to make sure they are matched (no double-deallocations). *Make sure you aren't returning any references or pointers to objects on the stack. There are lots of other things to try, too. I'm sure some other people will chime in with ideas as well. A: Some places to start: If you're on Windows, and using Visual C++ 6 (I hope to god nobody still uses it these days), its implementation of std::string is not threadsafe, and can lead to this kind of thing. Here's an article I found which explains a lot of the common causes of memory leaks and corruption. At my previous workplace we used Compuware BoundsChecker to help with this. It's commercial and very expensive, so may not be an option. Here are a couple of free libraries which may be of some use http://www.codeguru.com/cpp/misc/misc/memory/article.php/c3745/ http://www.codeproject.com/KB/cpp/MemLeakDetect.aspx Hope that helps. Memory corruption is a sucky place to be in! A: These are relatively cheap mechanisms for possibly solving the problem: * *Keep an eye on my heap corruption question - I'm updating with the answers as they shake out. The first was balancing new[] and delete[], but you're already doing that.
*Give valgrind more of a go; it's an excellent tool, and I only wish it was available under Windows. It only slows your program down by about half, which is pretty good compared to the Windows equivalents. *Think about using the Google Performance Tools as a replacement for malloc/new. *Have you cleaned out all your object files and started over? Perhaps your make file is... "suboptimal" *You're not assert()ing enough in your code. How do I know that without having seen it? Like flossing, no-one assert()s enough in their code. Add in a validation function for your objects and call that on method start and method end. *Are you compiling with -Wall? If not, do so. *Find yourself a lint tool like PC-Lint. A small app like yours might fit in the PC-lint demo page, meaning no purchase for you! *Check you're NULLing out pointers after deleting them. Nobody likes a dangling pointer. Same gig with declared but unallocated pointers. *Stop using arrays. Use a vector instead. *Don't use raw pointers. Use a smart pointer. Don't use auto_ptr! That thing is... surprising; its semantics are very odd. Instead, choose one of the Boost smart pointers, or something out of the Loki library. A: We once had a bug which eluded all of the regular techniques, valgrind, purify etc. The crash only ever happened on machines with lots of memory and only on large input data sets. Eventually we tracked it down using debugger watch points. I'll try to describe the procedure here: 1) Find the cause of the failure. It looks from your example code that the memory for "exampleString" is being corrupted, and so cannot be written to. Let's continue with this assumption. 2) Set a breakpoint at the last known location that "exampleString" is used or modified without any problem. 3) Add a watch point to the data member of 'exampleString'. With my version of g++, the string is stored in _M_dataplus._M_p. We want to know when this data member changes. The GDB technique for this is: (gdb) p &exampleString._M_dataplus._M_p $3 = (char **) 0xbfccc2d8 (gdb) watch *$3 Hardware watchpoint 1: *$3 I'm obviously using Linux with g++ and gdb here, but I believe that memory watch points are available with most debuggers. 4) Continue until the watch point is triggered: Continuing. Hardware watchpoint 2: *$3 Old value = 0xb7ec2604 "" New value = 0x804a014 "" 0xb7e70a1c in std::string::_M_mutate () from /usr/lib/libstdc++.so.6 (gdb) where The gdb where command will give a back trace showing what resulted in the modification. This is either a perfectly legal modification, in which case just continue - or, if you're lucky, it will be the modification due to the memory corruption. In the latter case, you should now be able to review the code that is really causing the problem and hopefully fix it. The cause of our bug was an array access with a negative index. The index was the result of a cast of a pointer to an 'int' modulo the size of the array. The bug was missed by valgrind et al. as the memory addresses allocated when running under those tools were never "> MAX_INT" and so never resulted in a negative index. A: It could be heap corruption, but it's just as likely to be stack corruption. Jim's right. We really need a bit more context. Those two lines of source don't tell us much in isolation. There could be any number of things causing this (which is the real joy of C/C++). If you're comfortable posting your code, you could even throw all of it up on a server and post a link.
I'm sure you'd get lots more advice that way (some of it undoubtedly unrelated to your question). A: Your code, as far as I can see, has no errors. As has been said, more context is needed. If you haven't already tried, install gdb (the GNU debugger) and compile the program with -g. This will compile in debugging symbols which gdb can use. Once you have gdb installed, run it with the program (gdb <your_program>). This is a useful cheatsheet for using gdb. Set a breakpoint for the function that is producing the bug, and see what the value of exampleString is. Also do the same for whatever parameter you are passing to exampleString. This should at least tell you if the std::strings are valid. I found the answer from this article to be a good guide about pointers. A: The code was simply an example of where my program was failing (it was allocated on the stack, Jim). I'm not actually looking for 'what have I done wrong', but rather 'how do I diagnose what I've done wrong'. Teach a man to fish and all that. Though looking at the question, I haven't made that clear enough. Thank goodness for the edit function. :') Also, I actually fixed the std::string problem. How? By replacing it with a vector, compiling, then replacing the string again. It was consistently crashing there, and that fixed it even though it...couldn't. There's something nasty there, and I'm not sure what. I did want to check the one time I manually allocate memory on the heap, though: this->map = new Area*[largestY + 1]; for (int i = 0; i < largestY + 1; i++) { this->map[i] = new Area[largestX + 1]; } and deleting it: for (int i = 0; i < largestY + 1; i++) { delete [] this->map[i]; } delete [] this->map; I haven't allocated a 2d array with C++ before. It seems to work. A: Also, I actually fixed the std::string problem. How? By replacing it with a vector, compiling, then replacing the string again. It was consistently crashing there, and that fixed it even though it...couldn't. There's something nasty there, and I'm not sure what. That sounds like you really did shake a chicken at it. If you don't know why it's working now, then it's still broken, and pretty much guaranteed to bite you again later (after you've added even more complexity). A: Run Purify. It is a near-magical tool that will report when you are clobbering memory you shouldn't be touching, leaking memory by not freeing things, double-freeing, etc. It works at the machine code level, so you don't even have to have the source code. One of the most enjoyable vendor conference calls I was ever on was when Purify found a memory leak in their code, and we were able to ask, "is it possible you're not freeing memory in your function foo()" and hear the astonishment in their voices. They thought we were debugging gods but then we let them in on the secret so they could run Purify before we had to use their code. :-) http://www-306.ibm.com/software/awdtools/purify/unix/ (It's pretty pricey but they have a free eval download) A: One of the debugging techniques that I use frequently (except in cases of the most extreme weirdness) is to divide and conquer. If your program currently fails with some specific error, then divide it in half in some way and see if it still has the same error. Obviously the trick is to decide where to divide your program! Your example as given doesn't show enough context to determine where the error might be. If anybody else were to try your example, it would work fine.
So, in your program, try removing as much of the extra stuff you didn't show us and see if it works then. If so, then add the other code back in a bit at a time until it starts failing. Then, the thing you just added is probably the problem. Note that if your program is multithreaded, then you probably have larger problems. If not, then you should be able to narrow it down in this way. Good luck! A: Other than tools like Boundschecker or Purify, your best bet at solving problems like this is to just get really good at reading code and become familiar with the code that you're working on. Memory corruption is one of the most difficult things to troubleshoot and usually these types of problems are solved by spending hours/days in a debugger and noticing something like "hey, pointer X is being used after it was deleted!". If it helps any, it's something you get better at as you gain experience. Your memory allocation for the array looks correct, but make sure you check all the places where you access the array too. A: As far as I can tell your code is correct. Assuming exampleString is an std::string that has class scope like you describe, you ought to be able to initialize/assign it that way. Perhaps there is some other issue? Maybe a snippet of actual code would help put it in context. Question: Is exampleString a pointer to a string object created with new?
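On the 2D map allocation shown a little earlier: the new[]/delete[] pair looks balanced, but following the advice above about preferring std::vector removes the manual cleanup (and any chance of mismatching it) entirely. A minimal sketch - the Area struct here is just a stand-in for the real class, which is assumed to be default-constructible:

#include <vector>

struct Area { int terrain; };   // placeholder for the real Area class

int main()
{
    const int largestX = 9, largestY = 9;   // example bounds

    // The vector owns all the memory: no new[], no delete[], and rows and
    // columns are sized exactly as in the original code.
    std::vector< std::vector<Area> > map(largestY + 1,
                                         std::vector<Area>(largestX + 1));

    map[2][3].terrain = 1;                  // same map[i][j] indexing as before
    return 0;
}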
{ "language": "en", "url": "https://stackoverflow.com/questions/7525", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "20" }
Q: SQL Server 2008 compatibility with SQL Server 2005 What is the best way to use SQL Server 2008 as a development database, but ensure that the database is compatible with SQL Server 2005? A: You can set the database compatibility level, but there is no guarantee that this will cover all your bases. For example, the new data types will be available on the SQL 2008 system, but they won't be available on the SQL 2005 server. The only way to guarantee that everything will work from Dev to Prod is to put SQL 2005 in development. A: This can be done via SQL Enterprise Manager or like this: ALTER DATABASE <database> SET COMPATIBILITY_LEVEL = { 80 | 90 | 100 } Use 90 for 2005 compatibility. This replaces the functionality used for previous releases, the stored procedure sp_dbcmptlevel. A: Don't forget that there are also behavioral differences between the two versions, and something may perform differently (and possibly worse) on 2008 than on 2005, all other things being equal - this will obviously depend on a lot of factors about your data and application. You're better off developing against the lowest common denominator and testing against the newer versions.
{ "language": "en", "url": "https://stackoverflow.com/questions/7535", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15" }
Q: Use of java.math.MathContext Recently I tried understanding the use of java.math.MathContext but failed to understand properly. Is it used for rounding in java.math.BigDecimal? If yes, why does it round not just the decimal digits but even the mantissa part? From the API docs, I came to know that it follows the standard specified in the ANSI X3.274-1996 and ANSI X3.274-1996/AM 1-2000 specifications, but I could not find them to read online. Please let me know if you have any idea on this. A: For rounding just the fractional part of a BigDecimal, check out the BigDecimal.setScale(int newScale, int roundingMode) method. E.g. to change a number with three digits after the decimal point to one with two digits, and rounding up: BigDecimal original = new BigDecimal("1.235"); BigDecimal scaled = original.setScale(2, BigDecimal.ROUND_HALF_UP); The result of this is a BigDecimal with the value 1.24 (because of the rounding up rule) A: @jatan Thanks for your answer. It makes sense. Can you please explain MathContext in the context of the BigDecimal#round method? There's nothing special about BigDecimal.round() vs. any other BigDecimal method. In all cases, the MathContext specifies the number of significant digits and the rounding technique. Basically, there are two parts to every MathContext. There's a precision, and there's also a RoundingMode. The precision again specifies the number of significant digits. So if you specify 123 as a number, and ask for 2 significant digits, you're going to get 120. It might be clearer if you think in terms of scientific notation. 123 would be 1.23e2 in scientific notation. If you only keep 2 significant digits, then you get 1.2e2, or 120. By reducing the number of significant digits, we reduce the precision with which we can specify a number. The RoundingMode part specifies how we should handle the loss of precision. To reuse the example, if you use 123 as the number, and ask for 2 significant digits, you've reduced your precision. With a RoundingMode of HALF_UP (the default mode), 123 will become 120. With a RoundingMode of CEILING, you'll get 130. For example: System.out.println(new BigDecimal("123.4", new MathContext(4,RoundingMode.HALF_UP))); System.out.println(new BigDecimal("123.4", new MathContext(2,RoundingMode.HALF_UP))); System.out.println(new BigDecimal("123.4", new MathContext(2,RoundingMode.CEILING))); System.out.println(new BigDecimal("123.4", new MathContext(1,RoundingMode.CEILING))); Outputs: 123.4 1.2E+2 1.3E+2 2E+2 You can see that both the precision and the rounding mode affect the output. A: If I'm understanding you correctly, it sounds like you're expecting the MathContext to control how many digits should be kept after the decimal point. That's not what it's for. It specifies how many digits to keep, total. So if you specify that you want 3 significant digits, that's all you're going to get. For example, this: System.out.println(new BigDecimal("1234567890.123456789", new MathContext(20))); System.out.println(new BigDecimal("1234567890.123456789", new MathContext(10))); System.out.println(new BigDecimal("1234567890.123456789", new MathContext(5))); will output: 1234567890.123456789 1234567890 1.2346E+9 A: It's not for fun. Actually I found some online example which stated the use of MathContext to round the amounts/numbers stored in BigDecimal. For example, if MathContext is configured to have precision = 2 and rounding mode = ROUND_HALF_EVEN, the BigDecimal number 0.5294 is rounded to 0.53. So I thought it was a newer technique and used it for rounding purposes.
However it turned into a nightmare because it started rounding even the mantissa part of the number. For example, Number = 1.5294 is rounded to 1.5 Number = 10.5294 is rounded to 10 Number = 101.5294 is rounded to 100 .... and so on So this is not the behavior I expected for rounding (as precision = 2). It seems to follow some logic, because from the pattern I can say that it takes the first two digits of the number (as precision is 2) and then appends 0's until the number of digits is the same as in the unrounded amount (check out the example of 101.5294 ...) A: I would add a few examples here. I haven't found them in previous answers, but I find them useful for those who may confuse significant digits with the number of decimal places. Let's assume we have this context: MathContext MATH_CTX = new MathContext(3, RoundingMode.HALF_UP); For this code: BigDecimal d1 = new BigDecimal(1234.4, MATH_CTX); System.out.println(d1); it's perfectly clear that your result is 1.23E+3, as others said above. The first significant digits are 123... But what about this case: BigDecimal d2 = new BigDecimal(0.000000454770054, MATH_CTX); System.out.println(d2); your number will not be rounded to 3 places after the decimal point - for some this may not be intuitive and is worth emphasizing. Instead it will be rounded to the first 3 significant digits, which in this case are "4 5 4". So the above code results in 4.55E-7 and not in 0.000 as someone might expect. Similar examples: BigDecimal d3 = new BigDecimal(0.001000045477, MATH_CTX); System.out.println(d3); // 0.00100 BigDecimal d4 = new BigDecimal(0.200000477, MATH_CTX); System.out.println(d4); // 0.200 BigDecimal d5 = new BigDecimal(0.000000004, MATH_CTX); System.out.println(d5); //4.00E-9 I hope this obvious but relevant example is helpful...
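To tie the answers together: if what you actually want is a fixed number of digits after the decimal point (as in the original question), setScale is the tool, while MathContext controls the total number of significant digits. A small side-by-side using the 101.5294 value from above (the rounding modes are chosen just for the example):

import java.math.BigDecimal;
import java.math.MathContext;
import java.math.RoundingMode;

public class ScaleVsPrecision {
    public static void main(String[] args) {
        BigDecimal n = new BigDecimal("101.5294");

        // precision = 2 keeps 2 significant digits in total
        System.out.println(n.round(new MathContext(2, RoundingMode.HALF_EVEN))); // 1.0E+2

        // scale = 2 keeps 2 digits after the decimal point
        System.out.println(n.setScale(2, RoundingMode.HALF_EVEN));               // 101.53
    }
}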
{ "language": "en", "url": "https://stackoverflow.com/questions/7539", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "63" }
Q: Some kind of task manager for JavaScript in Firefox 3? Recently I have been having issues with Firefox 3 on Ubuntu Hardy Heron. I will click on a link and it will hang for a while. I don't know if it's a bug in Firefox 3 or a page running too much client side JavaScript, but I would like to try and debug it a bit. So, my question is "is there a way to have some kind of process explorer, or task manager sort of thing for Firefox 3?" I would like to be able to see what tabs are using what percent of my processor via the JavaScript on that page (or anything in the page that is causing CPU/memory usage). Does anybody know of a plugin that does this, or something similar? Has anyone else done this kind of inspection another way? I know about FireBug, but I can't imagine how I would use it to finger which tab is using a lot of resources. Any suggestions or insights? A: It's probably the awesome firefox3 fsync "bug", which is a giant pile of fail. In summary * *Firefox3 saves its bookmarks and history in an SQLite database *Every time you load a page it writes to this database several times *SQLite cares deeply that you don't lose your bookmarks, so each time it writes, it instructs the kernel to flush its database file to disk and ensure that it's fully written *Many variants of Linux, when told to flush like that, flush EVERY FILE. This may take up to a minute or more if you have background tasks doing any kind of disk intensive stuff. *The kernel makes Firefox wait while this flush happens, which locks up the UI. A: So, my question is, is there a way to have some kind of process explorer, or task manager sort of thing for Firefox 3? Because of the way Firefox is built this is not possible at the moment. But the new Internet Explorer 8 Beta 2 and the just announced Google Chrome browser are heading in that direction, so I suppose Firefox will be heading there too. Here is a post (Google Chrome Process Manager), by John Resig of Mozilla and jQuery fame, on the subject. A: There's no "process explorer" kind of tool for Firefox; but there's https://developer.mozilla.org/en-US/docs/Archive/Mozilla/Venkman with profiling mode, which you could use to see the time spent by chrome (meaning non-content, that is not web-page) scripts. From what I've read about it, DTrace might also be useful for this sort of thing, but it requires creating a custom build and possibly adding additional probes to the source. I haven't played with it myself yet. A: There's a thorough discussion of this that explains all of the fsync related problems that affected pre-3.0 versions of FF. In general, I have not seen the behaviour since then either, and really it shouldn't be a problem at all if your system isn't also doing IO intensive tasks. Firebug/Venkman make for nice debuggers, but they would be painful for figuring out these kinds of problems for someone else's code, IMO. I also wish that there was an easy way to look at CPU utilization in Firefox by tab, though, as I often find myself with FF eating 100% CPU, but no clue which part is causing the problem. A: XUL Profiler is an awesome extension that can point out extensions and client side JS gone bananas CPU-wise. It does not work on a per-tab basis, but per-script (or so). You can normally relate those .js scripts to your tabs or extensions by hand. It is also worth mentioning that Google Chrome has a really good built-in task manager that gives memory and CPU usage per tab, extension and plugin. [XUL Profiler] is a Javascript profiler.
It shows elapsed time in each method as a graph, as well as browser canvas zone redraws, to help track down CPU-consuming chunks of code. It traces all JS calls and paint events in XUL and page contexts, and builds an animation dynamically showing the canvas zones being redrawn. As of FF 3.6.10 it is not up to date, in that it is not marked as compatible anymore. But it still works and you can override the incompatibility with the equally awesome MR Tech Toolkit extension.
{ "language": "en", "url": "https://stackoverflow.com/questions/7540", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: Best Practices for securing a REST API / web service When designing a REST API or service are there any established best practices for dealing with security (Authentication, Authorization, Identity Management) ? When building a SOAP API you have WS-Security as a guide and much literature exists on the topic. I have found less information about securing REST endpoints. While I understand REST intentionally does not have specifications analogous to WS-* I am hoping best practices or recommended patterns have emerged. Any discussion or links to relevant documents would be very much appreciated. If it matters, we would be using WCF with POX/JSON serialized messages for our REST API's/Services built using v3.5 of the .NET Framework. A: You may also want to take a look at OAuth, an emerging open protocol for token-based authorization specifically targeting http apis. It is very similar to the approach taken by flickr and remember the milk "rest" apis (not necessarily good examples of restful apis, but good examples of the token-based approach). A: There is a great checklist found on Github: Authentication * *Don't reinvent the wheel in Authentication, token generation, password storage. Use the standards. *Use Max Retry and jail features in Login. *Use encryption on all sensitive data. JWT (JSON Web Token) * *Use a random complicated key (JWT Secret) to make brute forcing the token very hard. *Don't extract the algorithm from the payload. Force the algorithm in the backend (HS256 or RS256). *Make token expiration (TTL, RTTL) as short as possible. *Don't store sensitive data in the JWT payload, it can be decoded easily. OAuth * *Always validate redirect_uri server-side to allow only whitelisted URLs. *Always try to exchange for code and not tokens (don't allow response_type=token). *Use state parameter with a random hash to prevent CSRF on the OAuth authentication process. *Define the default scope, and validate scope parameters for each application. Access * *Limit requests (Throttling) to avoid DDoS / brute-force attacks. *Use HTTPS on server side to avoid MITM (Man In The Middle Attack) *Use HSTS header with SSL to avoid SSL Strip attack. Input * *Use the proper HTTP method according to the operation: GET (read), POST (create), PUT/PATCH (replace/update), and DELETE (to delete a record), and respond with 405 Method Not Allowed if the requested method isn't appropriate for the requested resource. *Validate content-type on request Accept header (Content Negotiation) to allow only your supported format (e.g. application/xml, application/json, etc) and respond with 406 Not Acceptable response if not matched. *Validate content-type of posted data as you accept (e.g. application/x-www-form-urlencoded, multipart/form-data, application/json, etc). *Validate User input to avoid common vulnerabilities (e.g. XSS, SQL-Injection, Remote Code Execution, etc). *Don't use any sensitive data (credentials, Passwords, security tokens, or API keys) in the URL, but use standard Authorization header. *Use an API Gateway service to enable caching, Rate Limit policies (e.g. Quota, Spike Arrest, Concurrent Rate Limit) and deploy APIs resources dynamically. Processing * *Check if all the endpoints are protected behind authentication to avoid broken authentication process. *User own resource ID should be avoided. Use /me/orders instead of /user/654321/orders. *Don't auto-increment IDs. Use UUID instead. *If you are parsing XML files, make sure entity parsing is not enabled to avoid XXE (XML external entity attack). 
*If you are parsing XML files, make sure entity expansion is not enabled to avoid Billion Laughs/XML bomb via exponential entity expansion attack. *Use a CDN for file uploads. *If you are dealing with a huge amount of data, use workers and queues to process as much as possible in the background and return a response fast to avoid HTTP blocking. *Do not forget to turn the DEBUG mode OFF. Output * *Send X-Content-Type-Options: nosniff header. *Send X-Frame-Options: deny header. *Send Content-Security-Policy: default-src 'none' header. *Remove fingerprinting headers - X-Powered-By, Server, X-AspNet-Version etc. *Force content-type for your response; if you return application/json then your response content-type is application/json. *Don't return sensitive data like credentials, passwords, or security tokens. *Return the proper status code according to the operation completed. (e.g. 200 OK, 400 Bad Request, 401 Unauthorized, 405 Method Not Allowed, etc). A: I would recommend OAuth 2/3. You can find more information at http://oauth.net/2/ A: I searched a lot about RESTful web service security and we also ended up using a token via a cookie from client to server to authenticate the requests. I used Spring Security for authorization of requests in the service because I had to authenticate and authorize each request based on security policies that were already in the DB. A: The fact that the SOAP world is pretty well covered with security standards doesn't mean that it's secure by default. In the first place, the standards are very complex. Complexity is not a very good friend of security, and implementation vulnerabilities such as XML signature wrapping attacks are endemic here. As for the .NET environment I won't help much, but “Building web services with Java” (a brick with ~10 authors) did help me a lot in understanding the WS-* security architecture and, especially, its quirks. A: I'm kind of surprised SSL with client certificates hasn't been mentioned yet. Granted, this approach is only really useful if you can count on the community of users being identified by certificates. But a number of governments/companies do issue them to their users. The user doesn't have to worry about creating yet another username/password combination, and the identity is established on each and every connection so communication with the server can be entirely stateless, no user sessions required. (Not to imply that any/all of the other solutions mentioned require sessions) A: Everyone in these answers has overlooked true access control / authorization. If for instance your REST APIs / web services are about POSTing / GETing medical records, you may want to define access control policies about who can access the data and under which circumstances. For instance: * *doctors can GET the medical record of a patient they have a care relationship with *no one can POST medical data outside practice hours (e.g. 9 to 5) *end-users can GET medical records they own or medical records of patients for whom they are the guardian *nurses can UPDATE the medical record of a patient that belongs to the same unit as the nurse. In order to define and implement those fine-grained authorizations, you will need to use an attribute-based access control language called XACML, the eXtensible Access Control Markup Language. The other standards here are for the following: * *OAuth: id. federation and delegation of authorization e.g.
letting a service act on my behalf on another service (Facebook can post to my Twitter) *SAML: identity federation / web SSO. SAML is very much about who the user is. *WS-Security / WS-* standards: these focus on the communication between SOAP services. They are specific to the application-level messaging format (SOAP) and they deal with aspects of messaging e.g. reliability, security, confidentiality, integrity, atomicity, eventing... None cover access control and all are specific to SOAP. XACML is technology-agnostic. It can be applied to Java apps, .NET, Python, Ruby... web services, REST APIs, and more. The following are interesting resources: * *the OASIS XACML website *the NIST ABAC standard A: REST itself offers no security standards, but things like OAuth and SAML are rapidly becoming the standards in this space. However, authentication and authorization are only a small part of what you need to consider. Many of the known vulnerabilities relating to web applications apply very much to REST APIs. You have to consider input validation, session cracking, inappropriate error messages, internal employee vulnerabilities and so on. It is a big subject. A: I want to add (in line with stinkeymatt) that the simplest solution would be to add SSL certificates to your site. In other words, make sure your URL is HTTPS://. That will cover your transport security (bang for the buck). With RESTful URLs, the idea is to keep it simple (unlike WS-* security/SAML); you can use OAuth2/OpenID Connect or even Basic auth (in simple cases). But you will still need SSL/HTTPS. Please check ASP.NET Web API 2 security here: http://www.asp.net/web-api/overview/security (Articles and Videos) A: As tweakt said, Amazon S3 is a good model to work with. Their request signatures do have some features (such as incorporating a timestamp) that help guard against both accidental and malicious request replaying. The nice thing about HTTP Basic is that virtually all HTTP libraries support it. You will, of course, need to require SSL in this case because sending plaintext passwords over the net is almost universally a bad thing. Basic is preferable to Digest when using SSL because even if the caller already knows that credentials are required, Digest requires an extra roundtrip to exchange the nonce value. With Basic, the caller simply sends the credentials the first time. Once the identity of the client is established, authorization is really just an implementation problem. However, you could delegate the authorization to some other component with an existing authorization model. Again, the nice thing about Basic here is your server ends up with a plaintext copy of the client's password that you can simply pass on to another component within your infrastructure as needed. A: As @Nathan ended up with, a simple HTTP header works, and some have said OAuth2 and client-side SSL certificates. The gist of it is this... your REST API shouldn't have to handle security as that should really be outside the scope of the API. Instead a security layer should be put on top of it, whether it is an HTTP header behind a web proxy (a common approach like SiteMinder, Zermatt or even Apache HTTPd), or as complicated as OAuth 2. The key thing is the requests should work without any end-user interaction. All that is needed is to ensure that the connection to the REST API is authenticated. In Java EE we have the notion of a userPrincipal that can be obtained on an HttpServletRequest.
It is also managed in the deployment descriptor, where a URL pattern can be secured, so the REST API code does not need to check any more. In the WCF world, I would use ServiceSecurityContext.Current to get the current security context. You need to configure your application to require authentication. There is one exception to the statement I had above and that's the use of a nonce to prevent replays (which can be attacks or someone just submitting the same data twice). That part can only be handled in the application layer. A: For web application security, you should take a look at OWASP (https://www.owasp.org/index.php/Main_Page) which provides cheat sheets for various security attacks. You can incorporate as many measures as possible to secure your application. With respect to API security (authorization, authentication, identity management), there are multiple ways as already mentioned (Basic, Digest and OAuth). There are loopholes in OAuth1.0, so you can use OAuth1.0a (OAuth2.0 is not widely adopted due to concerns with the specification) A: I've used OAuth a few times, and also used some other methods (BASIC/DIGEST). I wholeheartedly suggest OAuth. The following link is the best tutorial I've seen on using OAuth: http://hueniverse.com/oauth/guide/ A: It's been a while but the question is still relevant, though the answer might have changed a bit. An API Gateway would be a flexible and highly configurable solution. I tested and used KONG quite a bit and really liked what I saw. KONG provides an admin REST API of its own which you can use to manage users. Express-gateway.io is more recent and is also an API Gateway. A: One of the best posts I've ever come across regarding security as it relates to REST is over at 1 RainDrop. The MySpace APIs also use OAuth for security and you have full access to their custom channels in the RestChess code, which I did a lot of exploration with. This was demo'd at Mix and you can find the posting here. A: Thanks for the excellent advice. We ended up using a custom HTTP header to pass an identity token from the client to the service, in preparation for integrating our RESTful API with the upcoming Zermatt Identity framework from Microsoft. I have described the problem here and our solution here. I also took tweakt's advice and bought RESTful Web Services - a very good book if you're building a RESTful API of any kind. A: OWASP (Open Web Application Security Project) has some cheat sheets covering almost all aspects of web application development. This project is a very valuable and reliable source of information. Regarding REST services you can check this: https://www.owasp.org/index.php/REST_Security_Cheat_Sheet A: There are no standards for REST other than HTTP. There are established REST services out there. I suggest you take a peek at them and get a feel for how they work. For example, we borrowed a lot of ideas from Amazon's S3 REST service when developing our own. But we opted not to use the more advanced security model based on request signatures. The simpler approach is HTTP Basic auth over SSL. You have to decide what works best in your situation. Also, I highly recommend the book RESTful Web Services from O'Reilly. It explains the core concepts and does provide some best practices. You can generally take the model they provide and map it to your own application.
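To make the HTTP Basic over SSL option a bit more concrete, here is a minimal servlet filter sketch. The hard-coded credential check is only a placeholder - a real service would delegate to whatever identity store it already has - and the filter assumes it is only ever reached over HTTPS:

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Rejects any request that does not carry valid Basic credentials.
public class BasicAuthFilter implements Filter {

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest request = (HttpServletRequest) req;
        HttpServletResponse response = (HttpServletResponse) res;

        String header = request.getHeader("Authorization");
        if (header != null && header.startsWith("Basic ")) {
            String decoded = new String(
                    Base64.getDecoder().decode(header.substring("Basic ".length())),
                    StandardCharsets.UTF_8);                 // "user:password"
            int colon = decoded.indexOf(':');
            if (colon > 0 && isValid(decoded.substring(0, colon),
                                     decoded.substring(colon + 1))) {
                chain.doFilter(req, res);                    // authenticated, carry on
                return;
            }
        }
        response.setHeader("WWW-Authenticate", "Basic realm=\"api\"");
        response.sendError(HttpServletResponse.SC_UNAUTHORIZED);
    }

    // Placeholder check: replace with a lookup against your real user store.
    private boolean isValid(String user, String password) {
        return "demo".equals(user) && "secret".equals(password);
    }

    @Override
    public void init(FilterConfig config) { }

    @Override
    public void destroy() { }
}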
{ "language": "en", "url": "https://stackoverflow.com/questions/7551", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "839" }
Q: Column Tree Model doesn't expand node after EXPAND_NO_CHILDREN event I am displaying a list of items using a SAP ABAP column tree model, basically a tree of folder and files, with columns. I want to load the sub-nodes of folders dynamically, so I'm using the EXPAND_NO_CHILDREN event which is firing correctly. Unfortunately, after I add the new nodes and items to the tree, the folder is automatically collapsing again, requiring a second click to view the sub-nodes. Do I need to call a method when handling the event so that the folder stays open, or am I doing something else wrong? * Set up event handling. LS_EVENT-EVENTID = CL_ITEM_TREE_CONTROL=>EVENTID_EXPAND_NO_CHILDREN. LS_EVENT-APPL_EVENT = GC_X. APPEND LS_EVENT TO LT_EVENTS. CALL METHOD GO_MODEL->SET_REGISTERED_EVENTS EXPORTING EVENTS = LT_EVENTS EXCEPTIONS ILLEGAL_EVENT_COMBINATION = 1 UNKNOWN_EVENT = 2. SET HANDLER GO_APPLICATION->HANDLE_EXPAND_NO_CHILDREN FOR GO_MODEL. ... * Add new data to tree. CALL METHOD GO_MODEL->ADD_NODES EXPORTING NODE_TABLE = PTI_NODES[] EXCEPTIONS ERROR_IN_NODE_TABLE = 1. CALL METHOD GO_MODEL->ADD_ITEMS EXPORTING ITEM_TABLE = PTI_ITEMS[] EXCEPTIONS NODE_NOT_FOUND = 1 ERROR_IN_ITEM_TABLE = 2. A: It's been a while since I've played with SAP, but I always found the SAP Library to be particularly helpful when I got stuck... I managed to come up with this one for you: http://help.sap.com/saphelp_nw04/helpdata/en/47/aa7a18c80a11d3a6f90000e83dd863/frameset.htm, specifically: When you add new nodes to the tree model, set the flag ITEMSINCOM to 'X'. This informs the tree model that you want to load the items for that node on demand. Hope it helps? A: Your code looks fine, I would use the method ADD_NODES_AND_ITEMS myself if I were to add nodes and items ;) Beyond that, try to call EXPAND_NODE after you added the items/nodes and see if that helps.
{ "language": "en", "url": "https://stackoverflow.com/questions/7558", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Reorganise index vs Rebuild Index in Sql Server Maintenance plan In the SSW rules to better SQL Server Database there is an example of a full database maintenance plan: SSW. In the example they run both a Reorganize Index and a Rebuild Index, and then Update Statistics. Is there any point to this? I thought Reorganize Index was a fast but less effective version of Rebuild Index, and that an index rebuild would also update the statistics automatically (on the clustered index at least). A: Exactly what Biri said. Here is how I would reindex an entire database: EXEC [sp_MSforeachtable] @command1="RAISERROR('DBCC DBREINDEX(''?'') ...',10,1) WITH NOWAIT DBCC DBREINDEX('?')" A: I use this SP: CREATE PROCEDURE dbo.[IndexRebuild] AS DECLARE @TableName NVARCHAR(500); DECLARE @SQLIndex NVARCHAR(MAX); DECLARE @RowCount INT; DECLARE @Counter INT; DECLARE @IndexAnalysis TABLE ( AnalysisID INT IDENTITY(1, 1) NOT NULL PRIMARY KEY , TableName NVARCHAR(500) , SQLText NVARCHAR(MAX) , IndexDepth INT , AvgFragmentationInPercent FLOAT , FragmentCount BIGINT , AvgFragmentSizeInPages FLOAT , PageCount BIGINT ) BEGIN INSERT INTO @IndexAnalysis SELECT [objects].name , 'ALTER INDEX [' + [indexes].name + '] ON [' + [schemas].name + '].[' + [objects].name + '] ' + ( CASE WHEN ( [dm_db_index_physical_stats].avg_fragmentation_in_percent >= 20 AND [dm_db_index_physical_stats].avg_fragmentation_in_percent < 40 ) THEN 'REORGANIZE' WHEN [dm_db_index_physical_stats].avg_fragmentation_in_percent >= 40 THEN 'REBUILD' END ) AS zSQL , [dm_db_index_physical_stats].index_depth , [dm_db_index_physical_stats].avg_fragmentation_in_percent , [dm_db_index_physical_stats].fragment_count , [dm_db_index_physical_stats].avg_fragment_size_in_pages , [dm_db_index_physical_stats].page_count FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'LIMITED') AS [dm_db_index_physical_stats] INNER JOIN [sys].[objects] AS [objects] ON ( [dm_db_index_physical_stats].[object_id] = [objects].[object_id] ) INNER JOIN [sys].[schemas] AS [schemas] ON ( [objects].[schema_id] = [schemas].[schema_id] ) INNER JOIN [sys].[indexes] AS [indexes] ON ( [dm_db_index_physical_stats].[object_id] = [indexes].[object_id] AND [dm_db_index_physical_stats].index_id = [indexes].index_id ) WHERE index_type_desc <> 'HEAP' AND [dm_db_index_physical_stats].avg_fragmentation_in_percent > 20 END SELECT @RowCount = COUNT(AnalysisID) FROM @IndexAnalysis SET @Counter = 1 WHILE @Counter <= @RowCount BEGIN SELECT @SQLIndex = SQLText FROM @IndexAnalysis WHERE AnalysisID = @Counter EXECUTE sp_executesql @SQLIndex SET @Counter = @Counter + 1 END GO and create one job that executes this SP every week. A: The reorganize and rebuild are different things. Reorganize: it's a defrag for indexes. It takes the existing index(es) and defragments the existing pages. However, if the pages are not contiguous, they stay that way; only the content of the pages changes. Rebuild: it actually drops the index and rebuilds it from scratch. It means that you will get a completely new index, with defragmented and contiguous pages. Moreover with rebuild you can change partitioning or file groups, but with reorganize you can defrag not only the whole index, but also only one partition of the index. The update statistics is automatic on clustered indexes, but not on the non-clustered ones. A: Even better is: EXEC sp_MSforeachtable 'ALTER INDEX ALL ON ? REBUILD' or EXEC sp_MSforeachtable 'ALTER INDEX ALL ON ?
REORGANIZE' A: Doing a REORGANIZE and then a REBUILD on the same indexes is pointless, as any changes by the REORGANIZE would be lost by doing the REBUILD. Worse than that is that in the maintenance plan diagram from SSW, it performs a SHRINK first, which fragments the indexes as a side effect of the way it releases space. Then the REBUILD allocates more space to the database files again as working space during the REBUILD operation. * *REORGANIZE is an online operation that defragments leaf pages in a clustered or non-clustered index page by page using little extra working space. *REBUILD is an online operation in Enterprise editions, offline in other editions, and uses as much extra working space again as the index size. It creates a new copy of the index and then drops the old one, thus getting rid of fragmentation. Statistics are recomputed by default as part of this operation, but that can be disabled. See Reorganizing and Rebuilding Indexes for more information. Don't use SHRINK except with the TRUNCATEONLY option, and even then, if the file will grow again, you should think hard as to whether it's necessary: sqlservercentral_SHRINKFILE A: Before considering maintenance of indexes, it is important to answer two main questions: * *What is the degree of fragmentation? *What is the appropriate action? Reorganize or rebuild? A: As described in this article http://solutioncenter.apexsql.com/why-when-and-how-to-rebuild-and-reorganize-sql-server-indexes/, and to help you determine if you should perform index rebuild or index reorganization, please understand the following: * *Index reorganization is a process where the SQL Server goes through the existing index and cleans it up. Index rebuild is a heavy-duty process where the index is deleted and then recreated from scratch with an entirely new structure, free from all piled-up fragments and empty-space pages. *While index reorganization is a pure cleanup operation which leaves the system state as it is without locking out affected tables and views, the rebuild process locks the affected table for the whole rebuild period, which may result in long downtimes that may not be acceptable in some environments. With this in mind, it is clear that the index rebuild is the ‘stronger’ solution, but it comes with a price – possible long locks on affected indexed tables. On the other hand, index reorganization is a ‘lightweight’ process that will solve the fragmentation in a less effective way – since a cleaned index will always be second to a new one made from scratch. But reorganizing an index is much better from the efficiency standpoint, since it does not lock the affected indexed table during the course of the operation. The above mentioned article also explains how to reorganize and rebuild indexes using SSMS, T-SQL (to reorganize/rebuild indexes in a table) and a 3rd party tool called ApexSQL Backup. A: When doing a reorg of an index, if the index is spread across two or more physical files the data will only be defragged within each data file. Pages are not moved from one data file to another. When the index is in a single file the reorg and reindex will have the same end result. Sometimes the reorg will be faster, and sometimes the reindex will be faster, depending on how fragmented the index is. The less fragmented the index is, the faster a reorg will be; the more fragmented it is, the slower the reorg will be, but the faster a reindex will be. A: I researched on the web and found some good articles.
At the and i wrote the function and script below which is reorganize, recreate or rebuild all the indexes in a database. First you may need to read this article to understand why we're not just recreate all indexes. Second we need a function to build create script for index. So this article may help. Also I'm sharing working function below. Last step making a while loop to find and organize all indexes in the database. This video is grate example to make this. Function: create function GetIndexCreateScript( @index_name nvarchar(100) ) returns nvarchar(max) as begin declare @Return varchar(max) SELECT @Return = ' CREATE ' + CASE WHEN I.is_unique = 1 THEN ' UNIQUE ' ELSE '' END + I.type_desc COLLATE DATABASE_DEFAULT +' INDEX ' + I.name + ' ON ' + Schema_name(T.Schema_id)+'.'+T.name + ' ( ' + KeyColumns + ' ) ' + ISNULL(' INCLUDE ('+IncludedColumns+' ) ','') + ISNULL(' WHERE '+I.Filter_definition,'') + ' WITH ( ' + CASE WHEN I.is_padded = 1 THEN ' PAD_INDEX = ON ' ELSE ' PAD_INDEX = OFF ' END + ',' + 'FILLFACTOR = '+CONVERT(CHAR(5),CASE WHEN I.Fill_factor = 0 THEN 100 ELSE I.Fill_factor END) + ',' + -- default value 'SORT_IN_TEMPDB = OFF ' + ',' + CASE WHEN I.ignore_dup_key = 1 THEN ' IGNORE_DUP_KEY = ON ' ELSE ' IGNORE_DUP_KEY = OFF ' END + ',' + CASE WHEN ST.no_recompute = 0 THEN ' STATISTICS_NORECOMPUTE = OFF ' ELSE ' STATISTICS_NORECOMPUTE = ON ' END + ',' + -- default value ' DROP_EXISTING = ON ' + ',' + -- default value ' ONLINE = OFF ' + ',' + CASE WHEN I.allow_row_locks = 1 THEN ' ALLOW_ROW_LOCKS = ON ' ELSE ' ALLOW_ROW_LOCKS = OFF ' END + ',' + CASE WHEN I.allow_page_locks = 1 THEN ' ALLOW_PAGE_LOCKS = ON ' ELSE ' ALLOW_PAGE_LOCKS = OFF ' END + ' ) ON [' + DS.name + ' ] ' FROM sys.indexes I JOIN sys.tables T ON T.Object_id = I.Object_id JOIN sys.sysindexes SI ON I.Object_id = SI.id AND I.index_id = SI.indid JOIN (SELECT * FROM ( SELECT IC2.object_id , IC2.index_id , STUFF((SELECT ' , ' + C.name + CASE WHEN MAX(CONVERT(INT,IC1.is_descending_key)) = 1 THEN ' DESC ' ELSE ' ASC ' END FROM sys.index_columns IC1 JOIN Sys.columns C ON C.object_id = IC1.object_id AND C.column_id = IC1.column_id AND IC1.is_included_column = 0 WHERE IC1.object_id = IC2.object_id AND IC1.index_id = IC2.index_id GROUP BY IC1.object_id,C.name,index_id ORDER BY MAX(IC1.key_ordinal) FOR XML PATH('')), 1, 2, '') KeyColumns FROM sys.index_columns IC2 --WHERE IC2.Object_id = object_id('Person.Address') --Comment for all tables GROUP BY IC2.object_id ,IC2.index_id) tmp3 )tmp4 ON I.object_id = tmp4.object_id AND I.Index_id = tmp4.index_id JOIN sys.stats ST ON ST.object_id = I.object_id AND ST.stats_id = I.index_id JOIN sys.data_spaces DS ON I.data_space_id=DS.data_space_id JOIN sys.filegroups FG ON I.data_space_id=FG.data_space_id LEFT JOIN (SELECT * FROM ( SELECT IC2.object_id , IC2.index_id , STUFF((SELECT ' , ' + C.name FROM sys.index_columns IC1 JOIN Sys.columns C ON C.object_id = IC1.object_id AND C.column_id = IC1.column_id AND IC1.is_included_column = 1 WHERE IC1.object_id = IC2.object_id AND IC1.index_id = IC2.index_id GROUP BY IC1.object_id,C.name,index_id FOR XML PATH('')), 1, 2, '') IncludedColumns FROM sys.index_columns IC2 --WHERE IC2.Object_id = object_id('Person.Address') --Comment for all tables GROUP BY IC2.object_id ,IC2.index_id) tmp1 WHERE IncludedColumns IS NOT NULL ) tmp2 ON tmp2.object_id = I.object_id AND tmp2.index_id = I.index_id WHERE I.is_primary_key = 0 AND I.is_unique_constraint = 0 AND I.[name] = @index_name return @Return end Sql for while: declare @RebuildIndex Table( IndexId int 
identity(1,1), IndexName varchar(100), TableSchema varchar(50), TableName varchar(100), Fragmentation decimal(18,2) ) insert into @RebuildIndex (IndexName,TableSchema,TableName,Fragmentation) SELECT B.[name] as 'IndexName', Schema_Name(O.[schema_id]) as 'TableSchema', OBJECT_NAME(A.[object_id]) as 'TableName', A.[avg_fragmentation_in_percent] Fragmentation FROM sys.dm_db_index_physical_stats(db_id(),NULL,NULL,NULL,'LIMITED') A INNER JOIN sys.indexes B ON A.[object_id] = B.[object_id] and A.index_id = B.index_id INNER JOIN sys.objects O ON O.[object_id] = B.[object_id] where B.[name] is not null and B.is_primary_key = 0 AND B.is_unique_constraint = 0 and A.[avg_fragmentation_in_percent] >= 5 --select * from @RebuildIndex declare @begin int = 1 declare @max int select @max = Max(IndexId) from @RebuildIndex declare @IndexName varchar(100), @TableSchema varchar(50), @TableName varchar(100) , @Fragmentation decimal(18,2) while @begin <= @max begin Select @IndexName = IndexName from @RebuildIndex where IndexId = @begin select @TableSchema = TableSchema from @RebuildIndex where IndexId = @begin select @TableName = TableName from @RebuildIndex where IndexId = @begin select @Fragmentation = Fragmentation from @RebuildIndex where IndexId = @begin declare @sql nvarchar(max) if @Fragmentation < 31 begin set @sql = 'ALTER INDEX ['+@IndexName+'] ON ['+@TableSchema+'].['+@TableName+'] REORGANIZE WITH ( LOB_COMPACTION = ON )' print 'Reorganized Index ' + @IndexName + ' for ' + @TableName + ' Fragmentation was ' + convert(nvarchar(18),@Fragmentation) end else begin set @sql = (select dbo.GetIndexCreateScript(@IndexName)) if(@sql is not null) begin print 'Recreated Index ' + @IndexName + ' for ' + @TableName + ' Fragmentation was ' + convert(nvarchar(18),@Fragmentation) end else begin set @sql = 'ALTER INDEX ['+@IndexName+'] ON ['+@TableSchema+'].['+@TableName+'] REBUILD PARTITION = ALL WITH (ONLINE = ON)' print 'Rebuilded Index ' + @IndexName + ' for ' + @TableName + ' Fragmentation was ' + convert(nvarchar(18),@Fragmentation) end end execute(@sql) set @begin = @begin+1 end A: My two cents... 
This method follows the spec outlined on tech net: http://technet.microsoft.com/en-us/library/ms189858(v=sql.105).aspx USE [MyDbName] GO SET ANSI_NULLS OFF GO SET QUOTED_IDENTIFIER OFF GO CREATE PROCEDURE [maintenance].[IndexFragmentationCleanup] AS DECLARE @reIndexRequest VARCHAR(1000) DECLARE reIndexList CURSOR FOR SELECT INDEX_PROCESS FROM ( SELECT CASE WHEN avg_fragmentation_in_percent BETWEEN 5 AND 30 THEN 'ALTER INDEX [' + i.NAME + '] ON [' + t.NAME + '] REORGANIZE;' WHEN avg_fragmentation_in_percent > 30 THEN 'ALTER INDEX [' + i.NAME + '] ON [' + t.NAME + '] REBUILD with(ONLINE=ON);' END AS INDEX_PROCESS ,avg_fragmentation_in_percent ,t.NAME FROM sys.dm_db_index_physical_stats(NULL, NULL, NULL, NULL, NULL) AS a INNER JOIN sys.indexes AS i ON a.object_id = i.object_id AND a.index_id = i.index_id INNER JOIN sys.tables t ON t.object_id = i.object_id WHERE i.NAME IS NOT NULL ) PROCESS WHERE PROCESS.INDEX_PROCESS IS NOT NULL ORDER BY avg_fragmentation_in_percent DESC OPEN reIndexList FETCH NEXT FROM reIndexList INTO @reIndexRequest WHILE @@FETCH_STATUS = 0 BEGIN BEGIN TRY PRINT @reIndexRequest; EXEC (@reIndexRequest); END TRY BEGIN CATCH DECLARE @ErrorMessage NVARCHAR(4000); DECLARE @ErrorSeverity INT; DECLARE @ErrorState INT; SELECT @ErrorMessage = 'UNABLE TO CLEAN UP INDEX WITH: ' + @reIndexRequest + ': MESSAGE GIVEN: ' + ERROR_MESSAGE() ,@ErrorSeverity = 9 ,@ErrorState = ERROR_STATE(); END CATCH; FETCH NEXT FROM reIndexList INTO @reIndexRequest END CLOSE reIndexList; DEALLOCATE reIndexList; RETURN 0 GO
{ "language": "en", "url": "https://stackoverflow.com/questions/7579", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "47" }
Q: How do I generate WPF controls through code I was trying to get my head around XAML and thought that I would try writing some code. I'm trying to add a grid with 6 by 6 row and column definitions, then add a text block into one of the grid cells. I don't seem to be able to reference the cell that I want. There is no method on the grid that I can add the text block to. There is only grid.children.add(object), no Cell definition. XAML: <Page x:Class="WPF_Tester.Page1" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" Title="Page1" Loaded="Page_Loaded"> </Page> C#: private void Page_Loaded(object sender, RoutedEventArgs e) { //create the structure Grid g = new Grid(); g.ShowGridLines = true; g.Visibility = Visibility.Visible; //add columns for (int i = 0; i < 6; ++i) { ColumnDefinition cd = new ColumnDefinition(); cd.Name = "Column" + i.ToString(); g.ColumnDefinitions.Add(cd); } //add rows for (int i = 0; i < 6; ++i) { RowDefinition rd = new RowDefinition(); rd.Name = "Row" + i.ToString(); g.RowDefinitions.Add(rd); } TextBlock tb = new TextBlock(); tb.Text = "Hello World"; g.Children.Add(tb); } Update Here is the spooky bit: * *Using VS2008 Pro on XP *WPFbrowser Project Template (3.5 verified) I don't get the methods in autocomplete. A: WPF makes use of a funky thing called attached properties. So in your XAML you might write this: <TextBlock Grid.Row="0" Grid.Column="0" /> And this will effectively move the TextBlock into cell (0,0) of your grid. In code this looks a little strange. I believe it'd be something like: g.Children.Add(tb); Grid.SetRow(tb, 0); Grid.SetColumn(tb, 0); Have a look at that link above - attached properties make things really easy to do in XAML, perhaps at the expense of intuitive-looking code. A: The cell location is an attached property - the value belongs to the TextBlock rather than the Grid. However, since the property itself belongs to Grid, you need to use either the property definition field or the provided static functions. TextBlock tb = new TextBlock(); // // Locate tb in the second row, third column. // Row and column indices are zero-indexed, so this // equates to row 1, column 2. // Grid.SetRow(tb, 1); Grid.SetColumn(tb, 2); A: Use attached properties of the Grid class. In C#: Grid.SetRow( cell, rownumber ) In XAML: <TextBlock Grid.Row="1" /> Also, I would advise that if you do not need dynamic grids, you use the XAML markup language instead. I know it has a learning curve, but once you've mastered it, it is so much easier, especially if you are going to use ControlTemplates and DataTemplates!
;) A: Here is some sample code: Grid grid = new Grid(); // Set the column and row definitions grid.ColumnDefinitions.Add(new ColumnDefinition() { Width = new GridLength(1, GridUnitType.Auto) }); grid.ColumnDefinitions.Add(new ColumnDefinition() { Width = new GridLength(1, GridUnitType.Star) }); grid.RowDefinitions.Add(new RowDefinition() { Height = new GridLength(1, GridUnitType.Auto) }); grid.RowDefinitions.Add(new RowDefinition() { Height = new GridLength(1, GridUnitType.Auto) }); // Row 0 TextBlock tbFirstNameLabel = new TextBlock() { Text = "First Name: "}; TextBlock tbFirstName = new TextBlock() { Text = "John"}; grid.Children.Add(tbFirstNameLabel ); // Add to the grid Grid.SetRow(tbFirstNameLabel , 0); // Specify row for previous grid addition Grid.SetColumn(tbFirstNameLabel , 0); // Specify column for previous grid addition grid.Children.Add(tbFirstName ); // Add to the grid Grid.SetRow(tbFirstName , 0); // Specify row for previous grid addition Grid.SetColumn(tbFirstName , 1); // Specify column for previous grid addition // Row 1 TextBlock tbLastNameLabel = new TextBlock() { Text = "Last Name: "}; TextBlock tbLastName = new TextBlock() { Text = "Smith"}; grid.Children.Add(tbLastNameLabel ); // Add to the grid Grid.SetRow(tbLastNameLabel , 1); // Specify row for previous grid addition Grid.SetColumn(tbLastNameLabel , 0); // Specify column for previous grid addition grid.Children.Add(tbLastName ); // Add to the grid Grid.SetRow(tbLastName , 1); // Specify row for previous grid addition Grid.SetColumn(tbLastName , 1); // Specify column for previous grid addition
{ "language": "en", "url": "https://stackoverflow.com/questions/7586", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Can I use JavaScript to create a client side email? I want to create a client side mail creator web page. I know the problems of using the mailto action in an html form (not standard, no default mail appication set on the client). But the web page isn't very important, and they don't care very much. The mail created by the mailto action has the syntax: subject: undefined subject body: param1=value1 param2=value2 . . . paramn=valuen Can I use JavaScript to format the mail like this? Subject:XXXXX Body: Value1;Value2;Value3...ValueN A: You more or less only have two alternatives when sending mail via the browser.. * *make a page that takes user input, and allows them to send the mail via your web-server. You need some kind of server-side scripting for this. *use a mailto: link to trigger opening of the users registered mail client. This has the obvious pitfalls you mentioned, and is less flexible. It needs less work though. A: With javascript alone, it's not possible. Javascript is not intended to do such things and is severely crippled in the way it can interact with anything other than the webbrowser it lives in, (for good reason!). Think about it: a spammer writing a website with client side javascript which will automatically mail to thousands of random email addresses. If people should go to that site they would all be participating in a distributed mass mailing scam, with their own computer... no infection or user interaction needed! A: What we used in a projet is a popup window that opens a mailto: link, it is the only way we found to compose a mail within the default mail client that works with all mail clients (at least all our clients used). var addresses = "";//between the speech mark goes the receptient. Seperate addresses with a ; var body = ""//write the message text between the speech marks or put a variable in the place of the speech marks var subject = ""//between the speech marks goes the subject of the message var href = "mailto:" + addresses + "?" + "subject=" + subject + "&" + "body=" + body; var wndMail; wndMail = window.open(href, "_blank", "scrollbars=yes,resizable=yes,width=10,height=10"); if(wndMail) { wndMail.close(); } A: You can create a mailto-link and fire it using javascript: var mail = "mailto:buddy@mail.com?subject=New Mail&body=Mail text body"; var mlink = document.createElement('a'); mlink.setAttribute('href', mail); mlink.click(); A: Is there a reason you can't just send the data to a page which handles sending the mail? It is pretty easy to send an email in most languages, so unless there's a strong reason to push it to client side, I would recommend that route.
{ "language": "en", "url": "https://stackoverflow.com/questions/7592", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19" }
Q: How should I structure a Java application, where do I put my classes? First of all, I know how to build a Java application. But I have always been puzzled about where to put my classes. There are proponents for organizing the packages in a strictly domain oriented fashion, others separate by tier. I myself have always had problems with * *naming, *placing So, * *Where do you put your domain specific constants (and what is the best name for such a class)? *Where do you put classes for stuff which is both infrastructural and domain specific (for instance I have a FileStorageStrategy class, which stores the files either in the database, or alternatively in database)? *Where to put Exceptions? *Are there any standards to which I can refer? A: Class names should always be descriptive and self-explanatory. If you have multiple domains of responsibility for your classes then they should probably be refactored. Likewise for you packages. They should be grouped by domain of responsibility. Every domain has it's own exceptions. Generally don't sweat it until you get to a point where it is becoming overwhelming and bloated. Then sit down and don't code, just refactor the classes out, compiling regularly to make sure everything works. Then continue as you did before. A: Use packages to group related functionality together. Usually the top of your package tree is your domain name reversed (com.domain.subdomain) to guarantee uniqueness, and then usually there will be a package for your application. Then subdivide that by related area, so your FileStorageStrategy might go in, say, com.domain.subdomain.myapp.storage, and then there might be specific implementations/subclasses/whatever in com.domain.subdomain.myapp.storage.file and com.domain.subdomain.myapp.storage.database. These names can get pretty long, but import keeps them all at the top of files and IDEs can help to manage that as well. Exceptions usually go in the same package as the classes that throw them, so if you had, say, FileStorageException it would go in the same package as FileStorageStrategy. Likewise an interface defining constants would be in the same package. There's not really any standard as such, just use common sense, and if it all gets too messy, refactor! A: One thing that I found very helpful for unit tests was to have a myApp/src/ and also myApp/test_src/ directories. This way, I can place unit tests in the same packages as the classes they test, and yet I can easily exclude the test cases when I prepare my production installation. A: Short answer: draw your system architecture in terms of modules, drawn side-by-side, with each module sliced vertically into layers (e.g. view, model, persistence). Then use a structure like com.mycompany.myapp.somemodule.somelayer, e.g. com.mycompany.myapp.client.view or com.mycompany.myapp.server.model. Using the top level of packages for application modules, in the old-fashioned computer-science sense of modular programming, ought to be obvious. However, on most of the projects I have worked on we end up forgetting to do that, and end up with a mess of packages without that top-level structure. This anti-pattern usually shows itself as a package for something like 'listeners' or 'actions' that groups otherwise unrelated classes simply because they happen to implement the same interface. Within a module, or in a small application, use packages for the application layers. 
Likely packages include things like the following, depending on the architecture: * *com.mycompany.myapp.view *com.mycompany.myapp.model *com.mycompany.myapp.services *com.mycompany.myapp.rules *com.mycompany.myapp.persistence (or 'dao' for data access layer) *com.mycompany.myapp.util (beware of this being used as if it were 'misc') Within each of these layers, it is natural to group classes by type if there are a lot. A common anti-pattern here is to unnecessarily introduce too many packages and levels of sub-package so that there are only a few classes in each package. A: I've really come to like Maven's Standard Directory Layout. One of the key ideas for me is to have two source roots - one for production code and one for test code like so: MyProject/src/main/java/com/acme/Widget.java MyProject/src/test/java/com/acme/WidgetTest.java (here, both src/main/java and src/test/java are source roots). Advantages: * *Your tests have package (or "default") level access to your classes under test. *You can easily package only your production sources into a JAR by dropping src/test/java as a source root. One rule of thumb about class placement and packages: Generally speaking, well structured projects will be free of circular dependencies. Learn when they are bad (and when they are not), and consider a tool like JDepend or SonarJ that will help you eliminate them. A: I think keep it simple and don't over think it. Don't over abstract and layer too much. Just keep it neat, and as it grows, refactoring it is trivial. One of the best features of IDEs is refactoring, so why not make use of it and save you brain power for solving problems that are related to your app, rather then meta issues like code organisation. A: I'm a huge fan of organized sources, so I always create the following directory structure: /src - for your packages & classes /test - for unit tests /docs - for documentation, generated and manually edited /lib - 3rd party libraries /etc - unrelated stuff /bin (or /classes) - compiled classes, output of your compile /dist - for distribution packages, hopefully auto generated by a build system In /src I'm using the default Java patterns: Package names starting with your domain (org.yourdomain.yourprojectname) and class names reflecting the OOP aspect you're creating with the class (see the other commenters). Common package names like util, model, view, events are useful, too. I tend to put constants for a specific topic in an own class, like SessionConstants or ServiceConstants in the same package of the domain classes. A: Where I'm working, we're using Maven 2 and we have a pretty nice archetype for our projects. The goal was to obtain a good separation of concerns, thus we defined a project structure using multiple modules (one for each application 'layer'): - common: common code used by the other layers (e.g., i18n) - entities: the domain entities - repositories: this module contains the daos interfaces and implementations - services-intf: interfaces for the services (e.g, UserService, ...) - services-impl: implementations of the services (e.g, UserServiceImpl) - web: everything regarding the web content (e.g., css, jsps, jsf pages, ...) - ws: web services Each module has its own dependencies (e.g., repositories could have jpa) and some are project wide (thus they belong in the common module). Dependencies between the different project modules clearly separate things (e.g., the web layer depends on the service layer but doesn't know about the repository layer). 
Each module has its own base package, for example if the application package is "com.foo.bar", then we have: com.foo.bar.common com.foo.bar.entities com.foo.bar.repositories com.foo.bar.services com.foo.bar.services.impl ... Each module respects the standard maven project structure: src\ ..main\java ...\resources ..test\java ...\resources Unit tests for a given layer easily find their place under \src\test... Everything that is domain specific has it's place in the entities module. Now something like a FileStorageStrategy should go into the repositories module, since we don't need to know exactly what the implementation is. In the services layer, we only know the repository interface, we do not care what the specific implementation is (separation of concerns). There are multiple advantages to this approach: * *clear separation of concerns *each module is packageable as a jar (or a war in the case of the web module) and thus allows for easier code reuse (e.g., we could install the module in the maven repository and reuse it in another project) *maximum independence of each part of the project I know this doesn't answer all your questions, but I think this could put you on the right path and could prove useful to others. A: One thing I've done in the past - if I'm extending a class I'll try and follow their conventions. For example, when working with the Spring Framework, I'll have my MVC Controller classes in a package called com.mydomain.myapp.web.servlet.mvc If I'm not extending something I just go with what is simplest. com.mydomain.domain for Domain Objects (although if you have a ton of domain objects this package could get a bit unwieldy). For domain specific constants, I actually put them as public constants in the most related class. For example, if I have a "Member" class and have a maximum member name length constant, I put it in the Member class. Some shops make a separate Constants class but I don't see the value in lumping unrelated numbers and strings into a single class. I've seen some other shops try to solve this problem by creating SEPARATE Constants classes, but that just seems like a waste of time and the result is too confusing. Using this setup, a large project with multiple developers will be duplicating constants all over the place. A: I like break my classes down into packages that are related to each other. For example: Model For database related calls View Classes that deal with what you see Control Core functionality classes Util Any misc. classes that are used (typically static functions) etc.
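To make the earlier point about domain constants concrete, here is a minimal Java sketch; the package name follows the com.mycompany.myapp layout discussed above, and the Member class with its name-length limit is a hypothetical example, not something prescribed by the answers:

package com.mycompany.myapp.model;

// A domain entity kept in the model package; the constant that constrains it
// lives on the class itself rather than in a catch-all Constants class.
public class Member {

    // Maximum allowed length of a member name, owned by the class it applies to.
    public static final int MAX_NAME_LENGTH = 50;

    private final String name;

    public Member(String name) {
        if (name == null || name.length() > MAX_NAME_LENGTH) {
            throw new IllegalArgumentException(
                    "name must be non-null and at most " + MAX_NAME_LENGTH + " characters");
        }
        this.name = name;
    }

    public String getName() {
        return name;
    }
}

A companion exception such as FileStorageException would then sit in the same package as the FileStorageStrategy that throws it, as several of the answers above recommend.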
{ "language": "en", "url": "https://stackoverflow.com/questions/7596", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "42" }
Q: Where can I find some good WS-Security introductions and tutorials? Can anyone point me to some decent introductions to WS-Security? I'm looking for tutorials or something that provides a fairly gentle introduction to the subject, though I don't mind if it assumes basic knowledge of web services and SOAP. Most of the stuff I've seen so far is very technical and you need a lot of complex, detailed background knowledge to understand it properly. We have to implement a web service in PHP and one or more clients in .NET, so resources covering both would be much appreciated. A: I think the best introduction to the (any) subject is some good examples. This article at CodeProject gives a fairly easy-to-follow guide through web service security for a .NET application. A: I had to give myself a crash course in WS-Security a few months back; here are a few links that I found helpful: http://www.ibm.com/developerworks/webservices/library/ws-security.html http://www.devx.com/security/Article/15634 http://www.devx.com/Java/Article/28816 A: Check Kirk Evans' blog as it deals with the .NET Framework 2.0+ instead of the old .NET Framework 1.0 linked by Sven. The implementation on Evans' blog is backwards compatible with clients running .NET Framework 1.0. A: You can find some good articles on WS-Security and Apache Rampart - the open source WS-Security implementation - in the WSO2 Online Library. WSO2 WSF/PHP is a PHP Web services extension that has support for WS-Security and also interops with .NET Web services/clients seamlessly.
{ "language": "en", "url": "https://stackoverflow.com/questions/7603", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Testing a function that throws on failure What is the best way of testing a function that throws on failure? Or testing a function that is fairly immune to failure? For instance; I have a I/O Completion Port class that throws in the constructor if it can't initialise the port correctly. This uses the Win32 function of CreateIoCompletionPort in the initialiser list. If the handle isn't set correctly - a non-null value - then the constructor will throw an exception. I have never seen this function fail. I am pretty certain that this (and other functions like it in my code) if they fail will behave correctly, the code is 50 lines long including white-space, so my questions are a) is it worth testing that it will throw b) and if it is worth testing, how to? c) should simple wrapper classes as these be unit-tested? For b) I thought about overriding CreateIoCompletionPort and passing the values through. In the unit test override it and cause it to return 0 when a certain value is passed in. However since this is used in the constructor then this needs to be static. Does this seem valid or not? A: If you are doing this in .NET, there is an ExpectedException attribute that you can add to your test: [Test, ExpectedException(typeof(SpecificException), "Exception's specific message")] public void TestWhichHasException() { CallMethodThatThrowsSpecificException(); } Test will pass if the exception of that type and with the specified message is thrown. The attribute has other overloads including having InnerExceptions, etc. A: It is definitely worthwhile to test failure conditions, both that your class properly throws an exception when you want it to and that exceptions are handled properly in the class. This can easily be done if you are acting on an object passed in to the constructor... just pass in a mock. If not, I tend to prefer to have the functionality moved to a protected method, and override the protected method to evoke my failure case. I will use Java as an example, but it should be easy enough to port the ideas to a C# case: public class MyClass { public MyClass() throws MyClassException { // Whatever, including a call to invokeCreateIoCompletionPort } protected int invokeCreateIoCompletionPort(String str, int i) { return StaticClass.createIoCompletionPort(str, i); } } public class MyTest { public void myTest() { try { new MyClass(); fail("MyClassException was not thrown!"); } catch (MyClassException e) { } } private static class MyClassWrapper extends MyClass { @Override protected int invokeCreateIoCompletionPort(String str, int i) { throw new ExpectedException(); } } } As you can see, it is pretty easy to test whether an exception is being thrown by the constructor or method you are testing, and it is also pretty easy to inject an exception from an external class that can throw an exception. Sorry I'm not using your actual method, I just used the name to illustrate how it sounded like you are using it, and how I would test the cases it sounded you wanted to test. Basically, any API details you expose can usually be tested, and if you want to KNOW that exceptional cases work as they should, you probably will want to test it. A: You should consider writing your code in such a way that you can mock your I/O completion port. Make an interface/abstract class that exposes the methods you need on the I/O object, and write and test implementation that does things like it's supposed to (and an option to simulate failure perhaps). 
AFAIK it's common practice to mock external resources when unit testing, to minimize dependencies. A: Sounds like C++ to me. You need a seam to mock out the Win32 functions. E.g. in your class you would create a protected method CreateIoCompletionPort() which calls ::CreateIoCompletionPort(), and for your test you create a class that derives from your I/O Completion Port class and overrides CreateIoCompletionPort() to do nothing but return NULL. Your production class still behaves as it was designed to, but you are now able to simulate a failure in the CreateIoCompletionPort() function. This technique is from Michael Feathers' book "Working Effectively with Legacy Code".
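For the Java examples above, JUnit 4 offers a rough equivalent of the .NET ExpectedException attribute mentioned in the first answer. A minimal sketch, assuming JUnit 4 on the classpath; the PortWrapper class is a hypothetical stand-in for the I/O completion port wrapper being discussed, not part of any of the answers:

import org.junit.Test;
import static org.junit.Assert.fail;

public class PortWrapperTest {

    // Hypothetical wrapper that throws from its constructor when the underlying
    // handle cannot be created, mirroring the class described in the question.
    static class PortWrapper {
        PortWrapper(long handle) {
            if (handle == 0) {
                throw new IllegalStateException("could not create completion port");
            }
        }
    }

    // Attribute-style check: the test passes only if the expected exception is thrown.
    @Test(expected = IllegalStateException.class)
    public void constructorThrowsOnNullHandle() {
        new PortWrapper(0);
    }

    // Explicit try/fail/catch form, closer to the style used in the answers above.
    @Test
    public void constructorThrowsOnNullHandleExplicit() {
        try {
            new PortWrapper(0);
            fail("Expected IllegalStateException was not thrown");
        } catch (IllegalStateException expected) {
            // success: the failure path behaved as designed
        }
    }
}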
{ "language": "en", "url": "https://stackoverflow.com/questions/7614", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Are the shift operators (<<, >>) arithmetic or logical in C? In C, are the shift operators (<<, >>) arithmetic or logical? A: When you do - left shift by 1 you multiply by 2 - right shift by 1 you divide by 2 x = 5 x >> 1 x = 2 ( x=5/2) x = 5 x << 1 x = 10 (x=5*2) A: TL;DR Consider i and n to be the left and right operands respectively of a shift operator; the type of i, after integer promotion, be T. Assuming n to be in [0, sizeof(i) * CHAR_BIT) — undefined otherwise — we've these cases: | Direction | Type | Value (i) | Result | | ---------- | -------- | --------- | ------------------------ | | Right (>>) | unsigned | ≥ 0 | −∞ ← (i ÷ 2ⁿ) | | Right | signed | ≥ 0 | −∞ ← (i ÷ 2ⁿ) | | Right | signed | < 0 | Implementation-defined† | | Left (<<) | unsigned | ≥ 0 | (i * 2ⁿ) % (T_MAX + 1) | | Left | signed | ≥ 0 | (i * 2ⁿ) ‡ | | Left | signed | < 0 | Undefined | † most compilers implement this as arithmetic shift ‡ undefined if value overflows the result type T; promoted type of i Shifting First is the difference between logical and arithmetic shifts from a mathematical viewpoint, without worrying about data type size. Logical shifts always fills discarded bits with zeros while arithmetic shift fills it with zeros only for left shift, but for right shift it copies the MSB thereby preserving the sign of the operand (assuming a two's complement encoding for negative values). In other words, logical shift looks at the shifted operand as just a stream of bits and move them, without bothering about the sign of the resulting value. Arithmetic shift looks at it as a (signed) number and preserves the sign as shifts are made. A left arithmetic shift of a number X by n is equivalent to multiplying X by 2n and is thus equivalent to logical left shift; a logical shift would also give the same result since MSB anyway falls off the end and there's nothing to preserve. A right arithmetic shift of a number X by n is equivalent to integer division of X by 2n ONLY if X is non-negative! Integer division is nothing but mathematical division and round towards 0 (trunc). For negative numbers, represented by two's complement encoding, shifting right by n bits has the effect of mathematically dividing it by 2n and rounding towards −∞ (floor); thus right shifting is different for non-negative and negative values. for X ≥ 0, X >> n = X / 2n = trunc(X ÷ 2n) for X < 0, X >> n = floor(X ÷ 2n) where ÷ is mathematical division, / is integer division. Let's look at an example: 37)10 = 100101)2 37 ÷ 2 = 18.5 37 / 2 = 18 (rounding 18.5 towards 0) = 10010)2 [result of arithmetic right shift] -37)10 = 11011011)2 (considering a two's complement, 8-bit representation) -37 ÷ 2 = -18.5 -37 / 2 = -18 (rounding 18.5 towards 0) = 11101110)2 [NOT the result of arithmetic right shift] -37 >> 1 = -19 (rounding 18.5 towards −∞) = 11101101)2 [result of arithmetic right shift] As Guy Steele pointed out, this discrepancy has led to bugs in more than one compiler. Here non-negative (math) can be mapped to unsigned and signed non-negative values (C); both are treated the same and right-shifting them is done by integer division. So logical and arithmetic are equivalent in left-shifting and for non-negative values in right shifting; it's in right shifting of negative values that they differ. Operand and Result Types Standard C99 §6.5.7: Each of the operands shall have integer types. The integer promotions are performed on each of the operands. The type of the result is that of the promoted left operand. 
If the value of the right operand is negative or is greater than or equal to the width of the promoted left operand, the behaviour is undefined. short E1 = 1, E2 = 3; int R = E1 << E2; In the above snippet, both operands become int (due to integer promotion); if E2 was negative or E2 ≥ sizeof(int) * CHAR_BIT then the operation is undefined. This is because shifting more than the available bits is surely going to overflow. Had R been declared as short, the int result of the shift operation would be implicitly converted to short; a narrowing conversion, which may lead to implementation-defined behaviour if the value is not representable in the destination type. Left Shift The result of E1 << E2 is E1 left-shifted E2 bit positions; vacated bits are filled with zeros. If E1 has an unsigned type, the value of the result is E1×2E2, reduced modulo one more than the maximum value representable in the result type. If E1 has a signed type and non-negative value, and E1×2E2 is representable in the result type, then that is the resulting value; otherwise, the behaviour is undefined. As left shifts are the same for both, the vacated bits are simply filled with zeros. It then states that for both unsigned and signed types it's an arithmetic shift. I'm interpreting it as arithmetic shift since logical shifts don't bother about the value represented by the bits, it just looks at it as a stream of bits; but the standard talks not in terms of bits, but by defining it in terms of the value obtained by the product of E1 with 2E2. The caveat here is that for signed types the value should be non-negative and the resulting value should be representable in the result type. Otherwise the operation is undefined. The result type would be the type of the E1 after applying integral promotion and not the destination (the variable which is going to hold the result) type. The resulting value is implicitly converted to the destination type; if it is not representable in that type, then the conversion is implementation-defined (C99 §6.3.1.3/3). If E1 is a signed type with a negative value then the behaviour of left shifting is undefined. This is an easy route to undefined behaviour which may easily get overlooked. Right Shift The result of E1 >> E2 is E1 right-shifted E2 bit positions. If E1 has an unsigned type or if E1 has a signed type and a non-negative value, the value of the result is the integral part of the quotient of E1/2E2. If E1 has a signed type and a negative value, the resulting value is implementation-defined. Right shift for unsigned and signed non-negative values are pretty straight forward; the vacant bits are filled with zeros. For signed negative values the result of right shifting is implementation-defined. That said, most implementations like GCC and Visual C++ implement right-shifting as arithmetic shifting by preserving the sign bit. Conclusion Unlike Java, which has a special operator >>> for logical shifting apart from the usual >> and <<, C and C++ have only arithmetic shifting with some areas left undefined and implementation-defined. The reason I deem them as arithmetic is due to the standard wording the operation mathematically rather than treating the shifted operand as a stream of bits; this is perhaps the reason why it leaves those areas un/implementation-defined instead of just defining all cases as logical shifts. A: Well, I looked it up on wikipedia, and they have this to say: C, however, has only one right shift operator, >>. 
Many C compilers choose which right shift to perform depending on what type of integer is being shifted; often signed integers are shifted using the arithmetic shift, and unsigned integers are shifted using the logical shift. So it sounds like it depends on your compiler. Also in that article, note that left shift is the same for arithmetic and logical. I would recommend doing a simple test with some signed and unsigned numbers on the border case (high bit set of course) and see what the result is on your compiler. I would also recommend avoiding depending on it being one or the other since it seems C has no standard, at least if it is reasonable and possible to avoid such dependence. A: When shifting left, there is no difference between arithmetic and logical shift. When shifting right, the type of shift depends on the type of the value being shifted. (As background for those readers unfamiliar with the difference, a "logical" right shift by 1 bit shifts all the bits to the right and fills in the leftmost bit with a 0. An "arithmetic" shift leaves the original value in the leftmost bit. The difference becomes important when dealing with negative numbers.) When shifting an unsigned value, the >> operator in C is a logical shift. When shifting a signed value, the >> operator is an arithmetic shift. For example, assuming a 32 bit machine: signed int x1 = 5; assert((x1 >> 1) == 2); signed int x2 = -5; assert((x2 >> 1) == -3); unsigned int x3 = (unsigned int)-5; assert((x3 >> 1) == 0x7FFFFFFD); A: In terms of the type of shift you get, the important thing is the type of the value that you're shifting. A classic source of bugs is when you shift a literal to, say, mask off bits. For example, if you wanted to drop the left-most bit of an unsigned integer, then you might try this as your mask: ~0 >> 1 Unfortunately, this will get you into trouble because the mask will have all of its bits set because the value being shifted (~0) is signed, thus an arithmetic shift is performed. Instead, you'd want to force a logical shift by explicitly declaring the value as unsigned, i.e. by doing something like this: ~0U >> 1; A: Here are functions to guarantee logical right shift and arithmetic right shift of an int in C: int logicalRightShift(int x, int n) { return (unsigned)x >> n; } int arithmeticRightShift(int x, int n) { if (x < 0 && n > 0) return x >> n | ~(~0U >> n); else return x >> n; } A: According to K&R 2nd edition the results are implementation-dependent for right shifts of signed values. Wikipedia says that C/C++ 'usually' implements an arithmetic shift on signed values. Basically you need to either test your compiler or not rely on it. My VS2008 help for the current MS C++ compiler says that their compiler does an arithmetic shift. A: gcc will typically use logical shifts on unsigned variables and for left-shifts on signed variables. The arithmetic right shift is the truly important one because it will sign extend the variable. gcc will will use this when applicable, as other compilers are likely to do. A: Left shift << This is somehow easy and whenever you use the shift operator, it is always a bit-wise operation, so we can't use it with a double and float operation. Whenever we left shift one zero, it is always added to the least significant bit (LSB). But in right shift >> we have to follow one additional rule and that rule is called "sign bit copy". 
"Sign bit copy" means that if the most significant bit (MSB) is set before a right shift, it is still set afterwards, and if it is clear, it stays clear; in other words, the sign bit is duplicated into the vacated position. This rule does not apply to a left shift. The most telling example is right-shifting a negative number: with arithmetic shifting the value eventually reaches -1, and once it is -1, shifting it any number of further times leaves it at -1. A: According to many C compilers: * *<< is an arithmetic left shift or bitwise left shift. *>> is an arithmetic right shift or bitwise right shift. A: GCC does * *for signed types -> arithmetic shift *for unsigned types -> logical shift
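Since the conclusion above contrasts C with Java's separate >>> operator, a short Java snippet makes the arithmetic-versus-logical distinction easy to see on the -37 example used earlier (a sketch; Java guarantees 32-bit two's complement int, so these results are fixed):

public class ShiftDemo {
    public static void main(String[] args) {
        int x = -37;

        // Arithmetic right shift (>>): the sign bit is copied in, so the value stays negative.
        System.out.println(x >> 1);   // -19, i.e. -37 divided by 2 rounded towards minus infinity

        // Logical right shift (>>>): zeros are shifted in, so the result is a large positive number.
        System.out.println(x >>> 1);  // 2147483629

        // Left shift is the same operation either way: multiply by 2 (modulo overflow).
        System.out.println(5 << 1);   // 10

        // Bit patterns, for comparison with the two's complement discussion above.
        System.out.println(Integer.toBinaryString(x >> 1));
        System.out.println(Integer.toBinaryString(x >>> 1));
    }
}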
{ "language": "en", "url": "https://stackoverflow.com/questions/7622", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "173" }
Q: Naming convention for VB.NET private fields Is there an official convention for naming private fields in VB.NET? For example, if I have a property called 'Foo', I normally call the private field '_Foo'. This seems to be frowned upon in the Offical Guidelines: "Do not use a prefix for field names. For example, do not use g_ or s_ to distinguish static versus non-static fields." In C#, you could call the private field 'foo', the property 'Foo', and refer to the private field as 'this.foo' in the constructor. As VB.NET is case insensitive you can't do this - any suggestions? A: It's personal preference, although there's widespread support for having some distinction. Even in C# I don't think there's one widely used convention. Jeff Prosise says As a matter of personal preference I typically prefix private fields with an underscore [in C#] ... This convention is used quite a lot in the .NET framework but it is not used throughout. From the .NET Framework Design Guidelines 2nd Edition page 73. Jeffrey Richter says I make all my fields private and I prefix my instance fields with "m_" and my static fields with "s_" [in C#] From the .NET Framework Design Guidelines 2nd Edition page 47. Anthony Moore (BCL team) also thinks using "m_" and "s_" is worth consideration, page 48. A: Official guidelines are just that -- guidelines. You can always go around them. That being said we usually prefix fields with an underscore in both C# and VB.NET. This convention is quite common (and obviously, the Official Guidelines ignored). Private fields can then be referenced without the "me" keyword (the "this" keyword is for C# :) A: The design guidelines that you linked specifically state that they only apply to static public and protected fields. The design guidelines mostly focus on designing public APIs; what you do with your private members is up to you. I'm not positive but I'm relatively confident that private members are not considered when the compiler checks for CLS compliance, because only public/protected members come in to play there (the idea is, "What if someone who uses a language that doesn't allow the _ character tries to use your library?" If the members are private, the answer is "Nothing, the user doesn't have to use these members." but if the members are public you're in trouble.) That said, I'm going to add to the echo chamber and point out that whatever you do, it's important to be consistent. My employer mandates that private fields in both C# and VB are prefixed with _, and because all of us follow this convention it is easy to use code written by someone else. A: In VB.NET 4.0, most of you probably know you don't need to explicitly write getters and setters for your Property declarations as follows: Public Property Foo As String Public Property Foo2 As String VB automatically creates private member variables called _Foo and _Foo2. It seems as though Microsoft and the VS team have adopted the _ convention, so I don't see an issue with it. A: I still use the _ prefix in VB for private fields, so I'll have _foo as the private field and Foo as the property. I do this for c# as well and pretty much any code I write. Generally I wouldn't get too caught up in "what is the right way to do it" because there isn't really a "right" way (altho there are some very bad ways) but rather be concerned with doing it consistently. At the end of the day, being consistent will make your code much more readable and maintainable than using any set of "right" conventions. 
A: I don't think there is an official naming convention, but i've seen that Microsoft use m_ in the Microsoft.VisualBasic dll (via reflector). A: I still use the _ prefix in VB for private fields, so I'll have _foo as the private field and Foo as the property. I do this for c# as well and pretty much any code I write. Generally I wouldn't get too caught up in "what is the right way to do it" because there isn't really a "right" way (altho there are some very bad ways) but rather be concerned with doing it consistently. I haven't found anything better than the "_" for clarify and consistency. Cons include: * *Not CLS compliant *Tends to get lost when VB draws horizontal lines across my IDE I get around the lines by turning those off in the editor, and try not to think too much about the CLS compliance. A: I agree with @lomaxx, it's more important to be consistent throughout the team than to have the right convention. Still, here are several good places to get ideas and guidance for coding conventions: * *Practical Guidelines and Best Practices for Microsoft Visual Basic and Visual C# Developers by Francesco Balena is a great book that addresses many of these issues. *IDesign Coding Standards (for C# and for WCF) *The .NET Framework Source Code (in VS2008) A: I prefer to use the underscore prefix for private fields. I use lowercase first letter for the method parameters. I follow the guideline of having lowercase camelcase parameters for methods, which I regard as more important than the naming of private fields since it is part of the API for the class. . e.g. Public Class Class1 Private _foo As String Public Property Foo() As String Get Return _foo End Get Set(ByVal value As String) _foo = value End Set End Property Public Sub New(ByVal foo As String) _foo = foo End Sub End Class Using this pattern, you won't have any naming conflicts with the private field and your constructor parameter in C# or VB.NET. A: I agree most important is not what style one uses but it being consistent. With that said, the new MS/.NET styling for private fields tends to be _fooVar (underscore followed by a camelCased name)
{ "language": "en", "url": "https://stackoverflow.com/questions/7642", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "23" }
Q: How do I remove duplicate items from an array in Perl? I have an array in Perl: my @my_array = ("one","two","three","two","three"); How do I remove the duplicates from the array? A: Method 1: Use a hash Logic: A hash can have only unique keys, so iterate over array, assign any value to each element of array, keeping element as key of that hash. Return keys of the hash, its your unique array. my @unique = keys {map {$_ => 1} @array}; Method 2: Extension of method 1 for reusability Better to make a subroutine if we are supposed to use this functionality multiple times in our code. sub get_unique { my %seen; grep !$seen{$_}++, @_; } my @unique = get_unique(@array); Method 3: Use module List::MoreUtils use List::MoreUtils qw(uniq); my @unique = uniq(@array); A: The variable @array is the list with duplicate elements %seen=(); @unique = grep { ! $seen{$_} ++ } @array; A: Install List::MoreUtils from CPAN Then in your code: use strict; use warnings; use List::MoreUtils qw(uniq); my @dup_list = qw(1 1 1 2 3 4 4); my @uniq_list = uniq(@dup_list); A: That last one was pretty good. I'd just tweak it a bit: my @arr; my @uniqarr; foreach my $var ( @arr ){ if ( ! grep( /$var/, @uniqarr ) ){ push( @uniqarr, $var ); } } I think this is probably the most readable way to do it. A: My usual way of doing this is: my %unique = (); foreach my $item (@myarray) { $unique{$item} ++; } my @myuniquearray = keys %unique; If you use a hash and add the items to the hash. You also have the bonus of knowing how many times each item appears in the list. A: You can do something like this as demonstrated in perlfaq4: sub uniq { my %seen; grep !$seen{$_}++, @_; } my @array = qw(one two three two three); my @filtered = uniq(@array); print "@filtered\n"; Outputs: one two three If you want to use a module, try the uniq function from List::MoreUtils A: The Perl documentation comes with a nice collection of FAQs. Your question is frequently asked: % perldoc -q duplicate The answer, copy and pasted from the output of the command above, appears below: Found in /usr/local/lib/perl5/5.10.0/pods/perlfaq4.pod How can I remove duplicate elements from a list or array? (contributed by brian d foy) Use a hash. When you think the words "unique" or "duplicated", think "hash keys". If you don't care about the order of the elements, you could just create the hash then extract the keys. It's not important how you create that hash: just that you use "keys" to get the unique elements. my %hash = map { $_, 1 } @array; # or a hash slice: @hash{ @array } = (); # or a foreach: $hash{$_} = 1 foreach ( @array ); my @unique = keys %hash; If you want to use a module, try the "uniq" function from "List::MoreUtils". In list context it returns the unique elements, preserving their order in the list. In scalar context, it returns the number of unique elements. use List::MoreUtils qw(uniq); my @unique = uniq( 1, 2, 3, 4, 4, 5, 6, 5, 7 ); # 1,2,3,4,5,6,7 my $unique = uniq( 1, 2, 3, 4, 4, 5, 6, 5, 7 ); # 7 You can also go through each element and skip the ones you've seen before. Use a hash to keep track. The first time the loop sees an element, that element has no key in %Seen. The "next" statement creates the key and immediately uses its value, which is "undef", so the loop continues to the "push" and increments the value for that key. The next time the loop sees that same element, its key exists in the hash and the value for that key is true (since it's not 0 or "undef"), so the next skips that iteration and the loop goes to the next element. 
my @unique = (); my %seen = (); foreach my $elem ( @array ) { next if $seen{ $elem }++; push @unique, $elem; } You can write this more briefly using a grep, which does the same thing. my %seen = (); my @unique = grep { ! $seen{ $_ }++ } @array; A: Can be done with a simple Perl one-liner. my @in=qw(1 3 4 6 2 4 3 2 6 3 2 3 4 4 3 2 5 5 32 3); #Sample data my @out=keys %{{ map{$_=>1}@in}}; # Perform PFM print join ' ', sort{$a<=>$b} @out;# Print data back out sorted and in order. The PFM block does this: Data in @in is fed into map. map builds an anonymous hash. keys are extracted from the hash and feed into @out A: Previous answers pretty much summarize the possible ways of accomplishing this task. However, I suggest a modification for those who don't care about counting the duplicates, but do care about order. my @record = qw( yeah I mean uh right right uh yeah so well right I maybe ); my %record; print grep !$record{$_} && ++$record{$_}, @record; Note that the previously suggested grep !$seen{$_}++ ... increments $seen{$_} before negating, so the increment occurs regardless of whether it has already been %seen or not. The above, however, short-circuits when $record{$_} is true, leaving what's been heard once 'off the %record'. You could also go for this ridiculousness, which takes advantage of autovivification and existence of hash keys: ... grep !(exists $record{$_} || undef $record{$_}), @record; That, however, might lead to some confusion. And if you care about neither order or duplicate count, you could for another hack using hash slices and the trick I just mentioned: ... undef @record{@record}; keys %record; # your record, now probably scrambled but at least deduped A: Try this, seems the uniq function needs a sorted list to work properly. use strict; # Helper function to remove duplicates in a list. sub uniq { my %seen; grep !$seen{$_}++, @_; } my @teststrings = ("one", "two", "three", "one"); my @filtered = uniq @teststrings; print "uniq: @filtered\n"; my @sorted = sort @teststrings; print "sort: @sorted\n"; my @sortedfiltered = uniq sort @teststrings; print "uniq sort : @sortedfiltered\n"; A: Using concept of unique hash keys : my @array = ("a","b","c","b","a","d","c","a","d"); my %hash = map { $_ => 1 } @array; my @unique = keys %hash; print "@unique","\n"; Output: a c b d
{ "language": "en", "url": "https://stackoverflow.com/questions/7651", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "171" }
Q: Querying like Linq when you don't have Linq I have a project that I'm currently working on but it currently only supports the .net framework 2.0. I love linq, but because of the framework version I can't use it. What I want isn't so much the ORM side of things, but the "queryability" (is that even a word?) of Linq. So far the closest is llblgen but if there was something even lighter weight that could just do the querying for me that would be even better. I've also looked at NHibernate which looks like it could go close to doing what I want, but it has a pretty steep learning curve and the mapping files don't get me overly excited. If anyone is aware of something that will give me a similar query interface to Linq (or even better, how to get Linq to work on the .net 2.0 framework) I'd really like to hear about it. A: Have a look at this: http://www.albahari.com/nutshell/linqbridge.html Linq is several different things, and I'm not 100% sure which bits you want, but the above might be useful in some way. If you don't already have a book on Linq (I guess you don't), then I found "Linq in Action" to be good. A: You might want to check out SubSonic. It is an ORM that uses an ActiveRecord pattern. I'm pretty sure most of its features work with the .NET Framework 2.0. A: To echo what Lance said - the SubSonic query language has a fluent interface which isn't as pretty as LINQ, but gives you some of the benefits (compile-time checking, IntelliSense, etc.). A: LinqBridge works fine under .NET 2.0, and you get all the Linq extensions and query language. You need VS 2008 in order to use it, but you already knew that. However, Linq is not an ORM. It's a query syntax. If you want to use Linq to query a database, you will need .NET 3.5. That's because 2.0 does not provide the mechanism needed to convert Linq code to your favorite database query language. In other words, if an ORM is what you need, LinqBridge will not help you. You need to check out some of the other suggestions provided. A: There's a way to reference LINQ in the .NET 2.0 Framework, but I have to warn you that it might be against the terms of use/EULA of the framework: LINQ on the .NET 2.0 Runtime A: First of all, getting Linq itself to work on 2.0 is out of the question. It's possible, but really not something to do outside a testing environment. The closest you can get in terms of the ORM/dynamic querying part of it is, IMHO, SubSonic, which I'll recommend for anyone stuck in C# 2.0. A: LinqBridge looks like a pretty nice place to start since I have VS2008; I just need to compile and deploy to a .NET 2.0 server. I've looked at SubSonic and it's also an interesting alternative, but LinqBridge seems to provide a much closer fit so I'm not going to have to go and learn a new ORM / query syntax.
{ "language": "en", "url": "https://stackoverflow.com/questions/7652", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Java code for WGS84 to Google map position and back Searching for some sample code for converting a point in WGS84 coordinate system to a map position in Google Maps (pixel position), also supporting zoom levels. If the codes is well commented, then it can also be in some other language. You can also point me to a open source Java project :) Some resources found: OpenLayer implementation. JOSM project Excellent Java Map Projection Library from JH LABS. This is a pure java PROJ.4 port. Does projection from WGS84 to meters. From there it's quite straightforward to convert meters to tile pixels. A: Tile utility code in Java on mapki.com (great resource for google map developers) A: Here are the functions in JavaSCript ... As extracted from OpenLayers function toMercator (lon, lat) { var x = lon * 20037508.34 / 180; var y = Math.log(Math.tan((90 + lat) * Math.PI / 360)) / (Math.PI / 180); y = y * 20037508.34 / 180; return [x, y]; } function inverseMercator (x, y) { var lon = (x / 20037508.34) * 180; var lat = (y / 20037508.34) * 180; lat = 180/Math.PI * (2 * Math.atan(Math.exp(lat * Math.PI / 180)) - Math.PI / 2); return [lon, lat]; } Fairly straightforward to convert to Java A: GeoTools has code to transform to and from about any coordinate system you could imagine, and among them also Google Map's. It's also open source. However, it should also be pointed out that GeoTools is a large library, so if you're looking something small, quick and easy, it's likely not the way to go. I would highly recommend it though if you're going to do other GIS/coordinate transformations, etc. as well. If you use GeoTools or something similar, you might also be interested in knowing that the Google Map coordinate system is called EPSG 3785. A: I ported this to PHP - here's the code, if anyone would need it: To mercator: $lon = ($lon * 20037508.34) / 180; $lat = log(tan((90 + $lat) * M_PI / 360)) / (M_PI / 180); $lat = $lat * 20037508.34 / 180; From mercator: $lon = ($lon / 20037508.34) * 180; $lat = ($lat / 20037508.34) * 180; $lat = 180/M_PI * (2 * atan(exp($lat * M_PI / 180)) - M_PI / 2); A: /* * Utility functions to transform between wgs84 and google projection coordinates * Derived from openmap http://openmap.bbn.com/ */ public class MercatorTransform { public final static double NORTH_POLE = 90.0; public final static double SOUTH_POLE = -NORTH_POLE; public final static double DATELINE = 180.0; public final static double LON_RANGE = 360.0; final public static transient double wgs84_earthEquatorialRadiusMeters_D = 6378137.0; private static double latfac = wgs84_earthEquatorialRadiusMeters_D; private static double lonfac = wgs84_earthEquatorialRadiusMeters_D; final public static transient double HALF_PI_D = Math.PI / 2.0d; /** * Returns google projection coordinates from wgs84 lat,long coordinates */ public static double[] forward(double lat, double lon) { lat = normalizeLatitude(lat); lon = wrapLongitude(lon); double latrad = Math.toRadians(lat); double lonrad = Math.toRadians(lon); double lat_m = latfac * Math.log(Math.tan(((latrad + HALF_PI_D) / 2d))); double lon_m = lonfac * lonrad; double[] x = { lon_m, lat_m }; return x; } /** * Returns wgs84 lat,long coordinates from google projection coordinates */ public static float[] inverse(float lon_m, float lat_m) { double latrad = (2d * Math.atan(Math.exp(lat_m / latfac))) - HALF_PI_D; double lonrad = lon_m / lonfac; double lat = Math.toDegrees(latrad); double lon = Math.toDegrees(lonrad); lat = normalizeLatitude(lat); lon = wrapLongitude(lon); float[] x = { 
(float) lat, (float) lon }; return x; } private static double wrapLongitude(double lon) { if ((lon < -DATELINE) || (lon > DATELINE)) { lon += DATELINE; lon = lon % LON_RANGE; lon = (lon < 0) ? DATELINE + lon : -DATELINE + lon; } return lon; } private static double normalizeLatitude(double lat) { if (lat > NORTH_POLE) { lat = NORTH_POLE; } if (lat < SOUTH_POLE) { lat = SOUTH_POLE; } return lat; } } A: Someone took the javascript code from Google Maps and ported it to python: gmerc.py I've used this and it works great.
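To round this out with the zoom-level part of the question, here is a rough sketch of how the MercatorTransform class above could be turned into Google-style pixel coordinates, assuming that class is on the classpath. The 256-pixel tile size and the 20037508.34 m origin shift are the conventional spherical-Mercator values assumed here, not something taken from the answers; adjust them if your tile scheme differs.

public class WebMercatorPixels {

    private static final double ORIGIN_SHIFT = 20037508.34; // half the projected world width in meters
    private static final int TILE_SIZE = 256;

    // Returns {pixelX, pixelY} for a WGS84 lat/lon at the given zoom level.
    public static double[] latLonToPixels(double lat, double lon, int zoom) {
        double[] meters = MercatorTransform.forward(lat, lon); // {x, y} in projected meters
        double worldPixels = TILE_SIZE * Math.pow(2, zoom);
        double metersPerPixel = (2 * ORIGIN_SHIFT) / worldPixels;
        double px = (meters[0] + ORIGIN_SHIFT) / metersPerPixel;
        // Pixel y grows downwards while projected y grows upwards, hence the flip.
        double py = (ORIGIN_SHIFT - meters[1]) / metersPerPixel;
        return new double[] { px, py };
    }

    public static void main(String[] args) {
        double[] p = latLonToPixels(51.5074, -0.1278, 12); // central London at zoom 12
        System.out.printf("pixel x=%.1f, y=%.1f%n", p[0], p[1]);
    }
}

Dividing the pixel coordinates by the tile size then gives the tile indices at that zoom level.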
{ "language": "en", "url": "https://stackoverflow.com/questions/7661", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: Database, Table and Column Naming Conventions? Whenever I design a database, I always wonder if there is a best way of naming an item in my database. Quite often I ask myself the following questions: * *Should table names be plural? *Should column names be singular? *Should I prefix tables or columns? *Should I use any case in naming items? Are there any recommended guidelines out there for naming items in a database? A: Naming conventions allow the development team to design discovereability and maintainability at the heart of the project. A good naming convention takes time to evolve but once it’s in place it allows the team to move forward with a common language. A good naming convention grows organically with the project. A good naming convention easily copes with changes during the longest and most important phase of the software lifecycle - service management in production. Here are my answers: * *Yes, table names should be plural when they refer to a set of trades, securities, or counterparties for example. *Yes. *Yes. SQL tables are prefixed with tb_, views are prefixed vw_, stored procedures are prefixed usp_ and triggers are prefixed tg_ followed by the database name. *Column name should be lower case separated by underscore. Naming is hard but in every organisation there is someone who can name things and in every software team there should be someone who takes responsibility for namings standards and ensures that naming issues like sec_id, sec_value and security_id get resolved early before they get baked into the project. So what are the basic tenets of a good naming convention and standards: - * *Use the language of your client and your solution domain *Be descriptive *Be consistent *Disambiguate, reflect and refactor *Don’t use abbreviations unless they are clear to everyone *Don’t use SQL reserved keywords as column names A: Here's a link that offers a few choices. I was searching for a simple spec I could follow rather than having to rely on a partially defined one. http://justinsomnia.org/writings/naming_conventions.html A: SELECT UserID, FirstName, MiddleInitial, LastName FROM Users ORDER BY LastName A: I work in a database support team with three DBAs and our considered options are: * *Any naming standard is better than no standard. *There is no "one true" standard, we all have our preferences *If there is standard already in place, use it. Don't create another standard or muddy the existing standards. We use singular names for tables. Tables tend to be prefixed with the name of the system (or its acronym). This is useful if the system complex as you can change the prefix to group the tables together logically (ie. reg_customer, reg_booking and regadmin_limits). For fields we'd expect field names to be include the prefix/acryonm of the table (i.e. cust_address1) and we also prefer the use of a standard set of suffixes ( _id for the PK, _cd for "code", _nm for "name", _nb for "number", _dt for "Date"). The name of the Foriegn key field should be the same as the Primary key field. i.e. SELECT cust_nm, cust_add1, booking_dt FROM reg_customer INNER JOIN reg_booking ON reg_customer.cust_id = reg_booking.cust_id When developing a new project, I'd recommend you write out all the preferred entity names, prefixes and acronyms and give this document to your developers. Then, when they decide to create a new table, they can refer to the document rather than "guess" what the table and fields should be called. A: * *No. A table should be named after the entity it represents. 
Person, not persons is how you would refer to whoever one of the records represents. *Again, same thing. The column FirstName really should not be called FirstNames. It all depends on what you want to represent with the column. *NO. *Yes. Case it for clarity. If you need to have columns like "FirstName", casing will make it easier to read. Ok. Thats my $0.02 A: Table names should always be singular, because they represent a set of objects. As you say herd to designate a group of sheep, or flock do designate a group of birds. No need for plural. When a table name is composition of two names and naming convention is in plural it becomes hard to know if the plural name should be the first word or second word or both. It’s the logic – Object.instance, not objects.instance. Or TableName.column, not TableNames.column(s). Microsoft SQL is not case sensitive, it’s easier to read table names, if upper case letters are used, to separate table or column names when they are composed of two or more names. A: Table Name: It should be singular, as it is a singular entity representing a real world object and not objects, which is singlular. Column Name: It should be singular only then it conveys that it will hold an atomic value and will confirm to the normalization theory. If however, there are n number of same type of properties, then they should be suffixed with 1, 2, ..., n, etc. Prefixing Tables / Columns: It is a huge topic, will discuss later. Casing: It should be Camel case My friend, Patrick Karcher, I request you to please not write anything which may be offensive to somebody, as you wrote, "•Further, foreign keys must be named consistently in different tables. It should be legal to beat up someone who does not do this.". I have never done this mistake my friend Patrick, but I am writing generally. What if they together plan to beat you for this? :) A: Very late to the party but I still wanted to add my two cents about column prefixes There seem to be two main arguments for using the table_column (or tableColumn) naming standard for columns, both based on the fact that the column name itself will be unique across your whole database: 1) You do not have to specify table names and/or column aliases in your queries all the time 2) You can easily search your whole code for the column name I think both arguments are flawed. The solution for both problems without using prefixes is easy. Here's my proposal: Always use the table name in your SQL. E.g., always use table.column instead of column. It obviously solves 2) as you can now just search for table.column instead of table_column. But I can hear you scream, how does it solve 1)? It was exactly about avoiding this. Yes, it was, but the solution was horribly flawed. Why? Well, the prefix solution boils down to: To avoid having to specify table.column when there's ambiguity, you name all your columns table_column! But this means you will from now on ALWAYS have to write the column name every time you specify a column. But if you have to do that anyways, what's the benefit over always explicitly writing table.column? Exactly, there is no benefit, it's the exact same number of characters to type. edit: yes, I am aware that naming the columns with the prefix enforces the correct usage whereas my approach relies on the programmers A: I hear the argument all the time that whether or not a table is pluralized is all a matter of personal taste and there is no best practice. I don't believe that is true, especially as a programmer as opposed to a DBA. 
As far as I am aware, there are no legitimate reasons to pluralize a table name other than "It just makes sense to me because it's a collection of objects," while there are legitimate gains in code by having singular table names. For example: * *It avoids bugs and mistakes caused by plural ambiguities. Programmers aren't exactly known for their spelling expertise, and pluralizing some words are confusing. For example, does the plural word end in 'es' or just 's'? Is it persons or people? When you work on a project with large teams, this can become an issue. For example, an instance where a team member uses the incorrect method to pluralize a table he creates. By the time I interact with this table, it is used all over in code I don't have access to or would take too long to fix. The result is I have to remember to spell the table wrong every time I use it. Something very similar to this happened to me. The easier you can make it for every member of the team to consistently and easily use the exact, correct table names without errors or having to look up table names all the time, the better. The singular version is much easier to handle in a team environment. *If you use the singular version of a table name AND prefix the primary key with the table name, you now have the advantage of easily determining a table name from a primary key or vice versa via code alone. You can be given a variable with a table name in it, concatenate "Id" to the end, and you now have the primary key of the table via code, without having to do an additional query. Or you can cut off "Id" from the end of a primary key to determine a table name via code. If you use "id" without a table name for the primary key, then you cannot via code determine the table name from the primary key. In addition, most people who pluralize table names and prefix PK columns with the table name use the singular version of the table name in the PK (for example statuses and status_id), making it impossible to do this at all. *If you make table names singular, you can have them match the class names they represent. Once again, this can simplify code and allow you to do really neat things, like instantiating a class by having nothing but the table name. It also just makes your code more consistent, which leads to... *If you make the table name singular, it makes your naming scheme consistent, organized, and easy to maintain in every location. You know that in every instance in your code, whether it's in a column name, as a class name, or as the table name, it's the same exact name. This allows you to do global searches to see everywhere that data is used. When you pluralize a table name, there will be cases where you will use the singular version of that table name (the class it turns into, in the primary key). It just makes sense to not have some instances where your data is referred to as plural and some instances singular. To sum it up, if you pluralize your table names you are losing all sorts of advantages in making your code smarter and easier to handle. There may even be cases where you have to have lookup tables/arrays to convert your table names to object or local code names you could have avoided. Singular table names, though perhaps feeling a little weird at first, offer significant advantages over pluralized names and I believe are best practice. 
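To illustrate the point above about deriving names in code when you use singular table names and table-prefixed keys, here is a tiny Java sketch; the "Id" suffix rule is simply the convention argued for above, not a feature of any library:

public final class NamingConvention {

    private NamingConvention() {}

    // "Customer" -> "CustomerId"
    public static String primaryKeyFor(String tableName) {
        return tableName + "Id";
    }

    // "CustomerId" -> "Customer"
    public static String tableFor(String primaryKeyColumn) {
        if (!primaryKeyColumn.endsWith("Id")) {
            throw new IllegalArgumentException("not a table-prefixed key: " + primaryKeyColumn);
        }
        return primaryKeyColumn.substring(0, primaryKeyColumn.length() - 2);
    }

    public static void main(String[] args) {
        System.out.println(primaryKeyFor("Customer")); // CustomerId
        System.out.println(tableFor("CustomerId"));    // Customer
    }
}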
A: I recommend checking out Microsoft's SQL Server sample databases: https://github.com/Microsoft/sql-server-samples/releases/tag/adventureworks The AdventureWorks sample uses a very clear and consistent naming convention that uses schema names for the organization of database objects. * *Singular names for tables *Singular names for columns *Schema name for tables prefix (E.g.: SchemeName.TableName) *Pascal casing (a.k.a. upper camel case) A: Late answer here, but in short: * *Plural table names: My preference is plural *Singular column names: Yes *Prefix tables or columns: * *Tables: *Usually* no prefixes is best. *Columns: No. *Use any case in naming items: PascalCase for both tables and columns. Elaboration: (1) What you must do. There are very few things that you must do a certain way, every time, but there are a few. * *Name your primary keys using "[singularOfTableName]ID" format. That is, whether your table name is Customer or Customers, the primary key should be CustomerID. *Further, foreign keys must be named consistently in different tables. It should be legal to beat up someone who does not do this. I would submit that while defined foreign key constraints are often important, consistent foreign key naming is always important *You database must have internal conventions. Even though in later sections you'll see me being very flexible, within a database naming must be very consistent . Whether your table for customers is called Customers or Customer is less important than that you do it the same way throughout the same database. And you can flip a coin to determine how to use underscores, but then you must keep using them the same way. If you don't do this, you are a bad person who should have low self-esteem. (2) What you should probably do. * *Fields representing the same kind of data on different tables should be named the same. Don't have Zip on one table and ZipCode on another. *To separate words in your table or column names, use PascalCasing. Using camelCasing would not be intrinsically problematic, but that's not the convention and it would look funny. I'll address underscores in a moment. (You may not use ALLCAPS as in the olden days. OBNOXIOUSTABLE.ANNOYING_COLUMN was okay in DB2 20 years ago, but not now.) *Don't artifically shorten or abbreviate words. It is better for a name to be long and clear than short and confusing. Ultra-short names is a holdover from darker, more savage times. Cus_AddRef. What on earth is that? Custodial Addressee Reference? Customer Additional Refund? Custom Address Referral? (3) What you should consider. * *I really think you should have plural names for tables; some think singular. Read the arguments elsewhere. Column names should be singular however. Even if you use plural table names, tables that represent combinations of other tables might be in the singular. For example, if you have a Promotions and an Items table, a table representing an item being a part of a promotion could be Promotions_Items, but it could also legitimately be Promotion_Items I think (reflecting the one-to-many relationship). *Use underscores consistently and for a particular purpose. Just general tables names should be clear enough with PascalCasing; you don't need underscores to separate words. Save underscores either (a) to indicate an associative table or (b) for prefixing, which I'll address in the next bullet. *Prefixing is neither good or bad. It usually is not best. 
In your first db or two, I would not suggest using prefixes for general thematic grouping of tables. Tables end up not fitting your categories easily, and it can actually make it harder to find tables. With experience, you can plan and apply a prefixing scheme that does more good than harm. I worked in a db once where data tables began with tbl, config tables with ctbl, views with vew, proc's sp, and udf's fn, and a few others; it was meticulously, consistently applied so it worked out okay. The only time you NEED prefixes is when you have really separate solutions that for some reason reside in the same db; prefixing them can be very helpful in grouping the tables. Prefixing is also okay for special situations, like for temporary tables that you want to stand out. *Very seldom (if ever) would you want to prefix columns. A: I'm also in favour of a ISO/IEC 11179 style naming convention, noting they are guidelines rather than being prescriptive. See Data element name on Wikipedia: "Tables are Collections of Entities, and follow Collection naming guidelines. Ideally, a collective name is used: eg., Personnel. Plural is also correct: Employees. Incorrect names include: Employee, tblEmployee, and EmployeeTable." As always, there are exceptions to rules e.g. a table which always has exactly one row may be better with a singular name e.g. a config table. And consistency is of utmost importance: check whether you shop has a convention and, if so, follow it; if you don't like it then do a business case to have it changed rather than being the lone ranger. A: Essential Database Naming Conventions (and Style) (click here for more detailed description) table names choose short, unambiguous names, using no more than one or two words distinguish tables easily facilitates the naming of unique field names as well as lookup and linking tables give tables singular names, never plural (update: i still agree with the reasons given for this convention, but most people really like plural table names, so i’ve softened my stance)... follow the link above please A: our preference: * *Should table names be plural? Never. The arguments for it being a collection make sense, but you never know what the table is going to contain (0,1 or many items). Plural rules make the naming unnecessarily complicated. 1 House, 2 houses, mouse vs mice, person vs people, and we haven't even looked at any other languages. Update person set property = 'value' acts on each person in the table. Select * from person where person.name = 'Greg' returns a collection/rowset of person rows. *Should column names be singular? Usually, yes, except where you are breaking normalisation rules. *Should I prefix tables or columns? Mostly a platform preference. We prefer to prefix columns with the table name. We don't prefix tables, but we do prefix views (v_) and stored_procedures (sp_ or f_ (function)). That helps people who want to try to upday v_person.age which is actually a calculated field in a view (which can't be UPDATEd anyway). It is also a great way to avoid keyword collision (delivery.from breaks, but delivery_from does not). It does make the code more verbose, but often aids in readability. bob = new person() bob.person_name = 'Bob' bob.person_dob = '1958-12-21' ... is very readable and explicit. 
This can get out of hand though: customer.customer_customer_type_id indicates a relationship between customer and the customer_type table, indicates the primary key on the customer_type table (customer_type_id) and if you ever see 'customer_customer_type_id' whilst debugging a query, you know instantly where it is from (customer table). or where you have a M-M relationship between customer_type and customer_category (only certain types are available to certain categories) customer_category_customer_type_id ... is a little (!) on the long side. *Should I use any case in naming items? Yes - lower case :), with underscores. These are very readable and cross platform. Together with 3 above it also makes sense. Most of these are preferences though. - As long as you are consistent, it should be predictable for anyone that has to read it. A: Take a look at ISO 11179-5: Naming and identification principles You can get it here: http://metadata-standards.org/11179/#11179-5 I blogged about it a while back here: ISO-11179 Naming Conventions A: Table names singular. Let's say you were modelling a realtionship between someone and their address. For example, if you are reading a datamodel would you prefer 'each person may live at 0,1 or many address.' or 'each people may live at 0,1 or many addresses.' I think its easier to pluralise address, rather than have to rephrase people as person. Plus collective nouns are quite often dissimlar to the singular version. A: I know this is late to the game, and the question has been answered very well already, but I want to offer my opinion on #3 regarding the prefixing of column names. All columns should be named with a prefix that is unique to the table they are defined in. E.g. Given tables "customer" and "address", let's go with prefixes of "cust" and "addr", respectively. "customer" would have "cust_id", "cust_name", etc. in it. "address" would have "addr_id", "addr_cust_id" (FK back to customer), "addr_street", etc. in it. When I was first presented with this standard, I was dead-set against it; I hated the idea. I couldn't stand the idea of all that extra typing and redundancy. Now I've had enough experience with it that I'd never go back. The result of doing this is that all of the columns in your database schema are unique. There is one major benefit to this, which trumps all arguments against it (in my opinion, of course): You can search your entire code base and reliably find every line of code that touches a particular column. The benefit from #1 is incredibly huge. I can deprecate a column and know exactly what files need to be updated before the column can safely be removed from the schema. I can change the meaning of a column and know exactly what code needs to be refactored. Or I can simply tell if data from a column is even being used in a particular portion of the system. I can't count the number of times this has turned a potentially huge project into a simple one, nor the amount of hours we've saved in development work. Another, relatively minor benefit to it is that you only have to use table-aliases when you do a self join: SELECT cust_id, cust_name, addr_street, addr_city, addr_state FROM customer INNER JOIN address ON addr_cust_id = cust_id WHERE cust_name LIKE 'J%'; A: My opinions on these are: 1) No, table names should be singular. While it appears to make sense for the simple selection (select * from Orders) it makes less sense for the OO equivalent (Orders x = new Orders). 
A table in a DB is really the set of that entity, it makes more sense once you're using set-logic: select Orders.* from Orders inner join Products on Orders.Key = Products.Key That last line, the actual logic of the join, looks confusing with plural table names. I'm not sure about always using an alias (as Matt suggests) clears that up. 2) They should be singular as they only hold 1 property 3) Never, if the column name is ambiguous (as above where they both have a column called [Key]) the name of the table (or its alias) can distinguish them well enough. You want queries to be quick to type and simple - prefixes add unnecessary complexity. 4) Whatever you want, I'd suggest CapitalCase I don't think there's one set of absolute guidelines on any of these. As long as whatever you pick is consistent across the application or DB I don't think it really matters. A: In my opinion: * *Table names should be plural. *Column names should be singular. *No. *Either CamelCase (my preferred) or underscore_separated for both table names and column names. However, like it has been mentioned, any convention is better than no convention. No matter how you choose to do it, document it so that future modifications follow the same conventions. A: I think the best answer to each of those questions would be given by you and your team. It's far more important to have a naming convention then how exactly the naming convention is. As there's no right answer to that, you should take some time (but not too much) and choose your own conventions and - here's the important part - stick to it. Of course it's good to seek some information about standards on that, which is what you're asking, but don't get anxious or worried about the number of different answers you might get: choose the one that seems better for you. Just in case, here are my answers: * *Yes. A table is a group of records, teachers or actors, so... plural. *Yes. *I don't use them. *The database I use more often - Firebird - keeps everything in upper case, so it doesn't matter. Anyway, when I'm programming I write the names in a way that it's easier to read, like releaseYear. A: * *Definitely keep table names singular, person not people *Same here *No. I've seen some terrible prefixes, going so far as to state what were dealing with is a table (tbl_) or a user store procedure (usp_). This followed by the database name... Don't do it! *Yes. I tend to PascalCase all my table names A: Ok, since we're weighing in with opinion: I believe that table names should be plural. Tables are a collection (a table) of entities. Each row represents a single entity, and the table represents the collection. So I would call a table of Person entities People (or Persons, whatever takes your fancy). For those who like to see singular "entity names" in queries, that's what I would use table aliases for: SELECT person.Name FROM People person A bit like LINQ's "from person in people select person.Name". As for 2, 3 and 4, I agree with @Lars. 
A: --Example SQL CREATE TABLE D001_Students ( StudentID INTEGER CONSTRAINT nnD001_STID NOT NULL, ChristianName NVARCHAR(255) CONSTRAINT nnD001_CHNA NOT NULL, Surname NVARCHAR(255) CONSTRAINT nnD001_SURN NOT NULL, CONSTRAINT pkD001 PRIMARY KEY(StudentID) ); CREATE INDEX idxD001_STID ON D001_Students(StudentID); CREATE TABLE D002_Classes ( ClassID INTEGER CONSTRAINT nnD002_CLID NOT NULL, StudentID INTEGER CONSTRAINT nnD002_STID NOT NULL, ClassName NVARCHAR(255) CONSTRAINT nnD002_CLNA NOT NULL, CONSTRAINT pkD002 PRIMARY KEY(ClassID, StudentID), CONSTRAINT fkD002_STID FOREIGN KEY(StudentID) REFERENCES D001_Students(StudentID) ); CREATE INDEX idxD002_CLID ON D002_Classes(ClassID); CREATE VIEW V001_StudentClasses AS SELECT D001.ChristianName, D001.Surname, D002.ClassName FROM D001_Students D001 INNER JOIN D002_Classes D002 ON D001.StudentID = D002.StudentID; These are the conventions I was taught, but you should adapt to whatever your development house uses. * *Plural. It is a collection of entities. *Yes. The attribute is a representation of a singular property of an entity. *Yes, prefixing with the table name allows easily trackable naming of all constraints, indexes and table aliases. *Pascal Case for table and column names, prefix + ALL caps for indexes and constraints.
{ "language": "en", "url": "https://stackoverflow.com/questions/7662", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "899" }
Q: Windows C++: How can I redirect stderr for calls to fprintf? I am wrapping existing C++ code from a BSD project in our own custom wrapper and I want to integrate it to our code with as few changes as possible. This code uses fprintf to print to stderr in order to log / report errors. I want to redirect this to an alternative place within the same process. On Unix I have done this with a socketpair and a thread: one end of the socket is where I send stderr (via a call to dup2) and the other end is monitored in a thread, where I can then process the output. This does not work on Windows though because a socket is not the same as a file handle. All documents I have found on the web show how to redirect output from a child process, which is not what I want. How can I redirect stderr within the same process getting a callback of some sort when output is written? (and before you say so, I've tried SetStdHandle but cannot find any way to make this work)... A: You can use a similar technique on Windows, you just need to use different words for the same concepts. :) This article: http://msdn.microsoft.com/en-us/library/ms682499.aspx uses a win32 pipe to handle I/O from another process, you just have to do the same thing with threads within the same process. Of course, in your case all output to stderr from anywhere in the process will be redirected to your consumer. Actually, other pieces of the puzzle you may need are _fdopen and _open_osfhandle. In fact, here's a related example from some code I released years ago: DWORD CALLBACK DoDebugThread(void *) { AllocConsole(); SetConsoleTitle("Copilot Debugger"); // The following is a really disgusting hack to make stdin and stdout attach // to the newly created console using the MSVC++ libraries. I hope other // operating systems don't need this kind of kludge.. :) stdout->_file = _open_osfhandle((long)GetStdHandle(STD_OUTPUT_HANDLE), _O_TEXT); stdin->_file = _open_osfhandle((long)GetStdHandle(STD_INPUT_HANDLE), _O_TEXT); debug(); stdout->_file = -1; stdin->_file = -1; FreeConsole(); CPU_run(); return 0; } In this case, the main process was a GUI process which doesn't start with stdio handles at all. It opens a console, then shoves the right handles into stdout and stdin so the debug() function (which was designed as a stdio interactive function) can interact with the newly created console. You should be able to open some pipes and do the same sort of thing to redirect stderr. A: You have to remember that what MSVCRT calls "OS handles" are not Win32 handles, but another layer of handles added just to confuse you. MSVCRT tries to emulate the Unix handle numbers where stdin = 0, stdout = 1, stderr = 2 and so on. Win32 handles are numbered differently and their values always happen to be a multiple of 4. Opening the pipe and getting all the handles configured properly will require getting your hands messy. Using the MSVCRT source code and a debugger is probably a requirement. A: You mention that you don't want to use a named pipe for internal use; it's probably worth poining out that the documentation for CreatePipe() states, "Anonymous pipes are implemented using a named pipe with a unique name. Therefore, you can often pass a handle to an anonymous pipe to a function that requires a handle to a named pipe." So, I suggest that you just write a function that creates a similar pipe with the correct settings for async reading. 
I tend to use a GUID as a string (generated using CoCreateGUID() and StringFromIID()) to give me a unique name and then create the server and client ends of the named pipe with the correct settings for overlapped I/O (more details on this, and code, here: http://www.lenholgate.com/blog/2008/02/process-management-using-jobs-on-windows.html). Once I have that I wire up some code that I have to read a file using overlapped I/O with an I/O Completion Port and, well, then I just get async notifications of the data as it arrives... However, I've got a fair amount of well tested library code in there that makes it all happen... It's probably possible to set up the named pipe and then just do an overlapped read with an event in your OVERLAPPED structure and check the event to see if data was available... I don't have any code available that does that though.
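To make the pipe-based suggestion above concrete, here is a minimal sketch in C of an in-process stderr redirect using an anonymous pipe, _open_osfhandle, _dup2 and a reader thread. Error handling is omitted, handle_stderr_text is a hypothetical placeholder for whatever callback or logger you have, and the redirect only catches fprintf(stderr, ...) calls made through the same CRT instance as the redirecting code; a library linked against a different CRT DLL has its own stderr and needs the equivalent done in that CRT (or via SetStdHandle before that CRT initialises).

#include <windows.h>
#include <io.h>
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>

extern void handle_stderr_text(const char *text); /* hypothetical sink for the captured output */

static HANDLE g_readEnd;

/* Worker thread: blocks on the read end of the pipe and forwards whatever
   the wrapped code writes to stderr. */
static DWORD WINAPI StderrReaderThread(void *unused)
{
    char buf[1024];
    DWORD got;
    (void)unused;
    while (ReadFile(g_readEnd, buf, sizeof(buf) - 1, &got, NULL) && got > 0) {
        buf[got] = '\0';
        handle_stderr_text(buf);
    }
    return 0;
}

void RedirectStderrInProcess(void)
{
    HANDLE writeEnd;
    int fd;

    CreatePipe(&g_readEnd, &writeEnd, NULL, 0);

    /* Wrap the Win32 write handle in a CRT file descriptor, then make
       descriptor 2 (stderr) refer to it. */
    fd = _open_osfhandle((intptr_t)writeEnd, _O_TEXT);
    _dup2(fd, _fileno(stderr));
    setvbuf(stderr, NULL, _IONBF, 0); /* unbuffered, so text arrives promptly */

    CreateThread(NULL, 0, StderrReaderThread, NULL, 0, NULL);
}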
{ "language": "en", "url": "https://stackoverflow.com/questions/7664", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: How to resolve symbolic links in a shell script Given an absolute or relative path (in a Unix-like system), I would like to determine the full path of the target after resolving any intermediate symlinks. Bonus points for also resolving ~username notation at the same time. If the target is a directory, it might be possible to chdir() into the directory and then call getcwd(), but I really want to do this from a shell script rather than writing a C helper. Unfortunately, shells have a tendency to try to hide the existence of symlinks from the user (this is bash on OS X): $ ls -ld foo bar drwxr-xr-x 2 greg greg 68 Aug 11 22:36 bar lrwxr-xr-x 1 greg greg 3 Aug 11 22:36 foo -> bar $ cd foo $ pwd /Users/greg/tmp/foo $ What I want is a function resolve() such that when executed from the tmp directory in the above example, resolve("foo") == "/Users/greg/tmp/bar". A: Another way: # Gets the real path of a link, following all links myreadlink() { [ ! -h "$1" ] && echo "$1" || (local link="$(expr "$(command ls -ld -- "$1")" : '.*-> \(.*\)$')"; cd $(dirname $1); myreadlink "$link" | sed "s|^\([^/].*\)\$|$(dirname $1)/\1|"); } # Returns the absolute path to a command, maybe in $PATH (which) or not. If not found, returns the same whereis() { echo $1 | sed "s|^\([^/].*/.*\)|$(pwd)/\1|;s|^\([^/]*\)$|$(which -- $1)|;s|^$|$1|"; } # Returns the realpath of a called command. whereis_realpath() { local SCRIPT_PATH=$(whereis $1); myreadlink ${SCRIPT_PATH} | sed "s|^\([^/].*\)\$|$(dirname ${SCRIPT_PATH})/\1|"; } A: Putting some of the given solutions together, knowing that readlink is available on most systems, but needs different arguments, this works well for me on OSX and Debian. I'm not sure about BSD systems. Maybe the condition needs to be [[ $OSTYPE != darwin* ]] to exclude -f from OSX only. #!/bin/bash MY_DIR=$( cd $(dirname $(readlink `[[ $OSTYPE == linux* ]] && echo "-f"` $0)) ; pwd -P) echo "$MY_DIR" A: readlink -f "$path" Editor's note: The above works with GNU readlink and FreeBSD/PC-BSD/OpenBSD readlink, but not on OS X as of 10.11. GNU readlink offers additional, related options, such as -m for resolving a symlink whether or not the ultimate target exists. Note since GNU coreutils 8.15 (2012-01-06), there is a realpath program available that is less obtuse and more flexible than the above. It's also compatible with the FreeBSD util of the same name. It also includes functionality to generate a relative path between two files. realpath $path [Admin addition below from comment by halloleo —danorton] For Mac OS X (through at least 10.11.x), use readlink without the -f option: readlink $path Editor's note: This will not resolve symlinks recursively and thus won't report the ultimate target; e.g., given symlink a that points to b, which in turn points to c, this will only report b (and won't ensure that it is output as an absolute path). Use the following perl command on OS X to fill the gap of the missing readlink -f functionality: perl -MCwd -le 'print Cwd::abs_path(shift)' "$path" A: Here's how one can get the actual path to the file in MacOS/Unix using an inline Perl script: FILE=$(perl -e "use Cwd qw(abs_path); print abs_path('$0')") Similarly, to get the directory of a symlinked file: DIR=$(perl -e "use Cwd qw(abs_path); use File::Basename; print dirname(abs_path('$0'))") A: Common shell scripts often have to find their "home" directory even if they are invoked as a symlink. The script thus have to find their "real" position from just $0. 
cat `mvn` on my system prints a script containing the following, which should be a good hint at what you need. if [ -z "$M2_HOME" ] ; then ## resolve links - $0 may be a link to maven's home PRG="$0" # need this for relative symlinks while [ -h "$PRG" ] ; do ls=`ls -ld "$PRG"` link=`expr "$ls" : '.*-> \(.*\)$'` if expr "$link" : '/.*' > /dev/null; then PRG="$link" else PRG="`dirname "$PRG"`/$link" fi done saveddir=`pwd` M2_HOME=`dirname "$PRG"`/.. # make it fully qualified M2_HOME=`cd "$M2_HOME" && pwd` A: Note: I believe this to be a solid, portable, ready-made solution, which is invariably lengthy for that very reason. Below is a fully POSIX-compliant script / function that is therefore cross-platform (works on macOS too, whose readlink still doesn't support -f as of 10.12 (Sierra)) - it uses only POSIX shell language features and only POSIX-compliant utility calls. It is a portable implementation of GNU's readlink -e (the stricter version of readlink -f). You can run the script with sh or source the function in bash, ksh, and zsh: For instance, inside a script you can use it as follows to get the running's script true directory of origin, with symlinks resolved: trueScriptDir=$(dirname -- "$(rreadlink "$0")") rreadlink script / function definition: The code was adapted with gratitude from this answer. I've also created a bash-based stand-alone utility version here, which you can install with npm install rreadlink -g, if you have Node.js installed. #!/bin/sh # SYNOPSIS # rreadlink <fileOrDirPath> # DESCRIPTION # Resolves <fileOrDirPath> to its ultimate target, if it is a symlink, and # prints its canonical path. If it is not a symlink, its own canonical path # is printed. # A broken symlink causes an error that reports the non-existent target. # LIMITATIONS # - Won't work with filenames with embedded newlines or filenames containing # the string ' -> '. # COMPATIBILITY # This is a fully POSIX-compliant implementation of what GNU readlink's # -e option does. # EXAMPLE # In a shell script, use the following to get that script's true directory of origin: # trueScriptDir=$(dirname -- "$(rreadlink "$0")") rreadlink() ( # Execute the function in a *subshell* to localize variables and the effect of `cd`. target=$1 fname= targetDir= CDPATH= # Try to make the execution environment as predictable as possible: # All commands below are invoked via `command`, so we must make sure that # `command` itself is not redefined as an alias or shell function. # (Note that command is too inconsistent across shells, so we don't use it.) # `command` is a *builtin* in bash, dash, ksh, zsh, and some platforms do not # even have an external utility version of it (e.g, Ubuntu). # `command` bypasses aliases and shell functions and also finds builtins # in bash, dash, and ksh. In zsh, option POSIX_BUILTINS must be turned on for # that to happen. { \unalias command; \unset -f command; } >/dev/null 2>&1 [ -n "$ZSH_VERSION" ] && options[POSIX_BUILTINS]=on # make zsh find *builtins* with `command` too. while :; do # Resolve potential symlinks until the ultimate target is found. [ -L "$target" ] || [ -e "$target" ] || { command printf '%s\n' "ERROR: '$target' does not exist." >&2; return 1; } command cd "$(command dirname -- "$target")" # Change to target dir; necessary for correct resolution of target path. fname=$(command basename -- "$target") # Extract filename. [ "$fname" = '/' ] && fname='' # !! 
curiously, `basename /` returns '/' if [ -L "$fname" ]; then # Extract [next] target path, which may be defined # *relative* to the symlink's own directory. # Note: We parse `ls -l` output to find the symlink target # which is the only POSIX-compliant, albeit somewhat fragile, way. target=$(command ls -l "$fname") target=${target#* -> } continue # Resolve [next] symlink target. fi break # Ultimate target reached. done targetDir=$(command pwd -P) # Get canonical dir. path # Output the ultimate target's canonical path. # Note that we manually resolve paths ending in /. and /.. to make sure we have a normalized path. if [ "$fname" = '.' ]; then command printf '%s\n' "${targetDir%/}" elif [ "$fname" = '..' ]; then # Caveat: something like /var/.. will resolve to /private (assuming /var@ -> /private/var), i.e. the '..' is applied # AFTER canonicalization. command printf '%s\n' "$(command dirname -- "${targetDir}")" else command printf '%s\n' "${targetDir%/}/$fname" fi ) rreadlink "$@" A tangent on security: jarno, in reference to the function ensuring that builtin command is not shadowed by an alias or shell function of the same name, asks in a comment: What if unalias or unset and [ are set as aliases or shell functions? The motivation behind rreadlink ensuring that command has its original meaning is to use it to bypass (benign) convenience aliases and functions often used to shadow standard commands in interactive shells, such as redefining ls to include favorite options. I think it's safe to say that unless you're dealing with an untrusted, malicious environment, worrying about unalias or unset - or, for that matter, while, do, ... - being redefined is not a concern. There is something that the function must rely on to have its original meaning and behavior - there is no way around that. That POSIX-like shells allow redefinition of builtins and even language keywords is inherently a security risk (and writing paranoid code is hard in general). To address your concerns specifically: The function relies on unalias and unset having their original meaning. Having them redefined as shell functions in a manner that alters their behavior would be a problem; redefinition as an alias is not necessarily a concern, because quoting (part of) the command name (e.g., \unalias) bypasses aliases. However, quoting is not an option for shell keywords (while, for, if, do, ...) and while shell keywords do take precedence over shell functions, in bash and zsh aliases have the highest precedence, so to guard against shell-keyword redefinitions you must run unalias with their names (although in non-interactive bash shells (such as scripts) aliases are not expanded by default - only if shopt -s expand_aliases is explicitly called first). To ensure that unalias - as a builtin - has its original meaning, you must use \unset on it first, which requires that unset have its original meaning: unset is a shell builtin, so to ensure that it is invoked as such, you'd have to make sure that it itself is not redefined as a function. While you can bypass an alias form with quoting, you cannot bypass a shell-function form - catch 22. Thus, unless you can rely on unset to have its original meaning, from what I can tell, there is no guaranteed way to defend against all malicious redefinitions. A: Is your path a directory, or might it be a file? 
If it's a directory, it's simple: (cd "$DIR"; pwd -P) However, if it might be a file, then this won't work: DIR=$(cd $(dirname "$FILE"); pwd -P); echo "${DIR}/$(readlink "$FILE")" because the symlink might resolve into a relative or full path. On scripts I need to find the real path, so that I might reference configuration or other scripts installed together with it, I use this: SOURCE="${BASH_SOURCE[0]}" while [ -h "$SOURCE" ]; do # resolve $SOURCE until the file is no longer a symlink DIR="$( cd -P "$( dirname "$SOURCE" )" && pwd )" SOURCE="$(readlink "$SOURCE")" [[ $SOURCE != /* ]] && SOURCE="$DIR/$SOURCE" # if $SOURCE was a relative symlink, we need to resolve it relative to the path where the symlink file was located done You could set SOURCE to any file path. Basically, for as long as the path is a symlink, it resolves that symlink. The trick is in the last line of the loop. If the resolved symlink is absolute, it will use that as SOURCE. However, if it is relative, it will prepend the DIR for it, which was resolved into a real location by the simple trick I first described. A: "pwd -P" seems to work if you just want the directory, but if for some reason you want the name of the actual executable I don't think that helps. Here's my solution: #!/bin/bash # get the absolute path of the executable SELF_PATH=$(cd -P -- "$(dirname -- "$0")" && pwd -P) && SELF_PATH=$SELF_PATH/$(basename -- "$0") # resolve symlinks while [[ -h $SELF_PATH ]]; do # 1) cd to directory of the symlink # 2) cd to the directory of where the symlink points # 3) get the pwd # 4) append the basename DIR=$(dirname -- "$SELF_PATH") SYM=$(readlink "$SELF_PATH") SELF_PATH=$(cd "$DIR" && cd "$(dirname -- "$SYM")" && pwd)/$(basename -- "$SYM") done A: One of my favorites is realpath foo realpath - return the canonicalized absolute pathname realpath expands all symbolic links and resolves references to '/./', '/../' and extra '/' characters in the null terminated string named by path and stores the canonicalized absolute pathname in the buffer of size PATH_MAX named by resolved_path. The resulting path will have no symbolic link, '/./' or '/../' components. A: function realpath { local r=$1; local t=$(readlink $r) while [ $t ]; do r=$(cd $(dirname $r) && cd $(dirname $t) && pwd -P)/$(basename $t) t=$(readlink $r) done echo $r } #example usage SCRIPT_PARENT_DIR=$(dirname $(realpath "$0"))/.. A: This is a symlink resolver in Bash that works whether the link is a directory or a non-directory: function readlinks {( set -o errexit -o nounset declare n=0 limit=1024 link="$1" # If it's a directory, just skip all this. if cd "$link" 2>/dev/null then pwd -P return 0 fi # Resolve until we are out of links (or recurse too deep). while [[ -L $link ]] && [[ $n -lt $limit ]] do cd "$(dirname -- "$link")" n=$((n + 1)) link="$(readlink -- "${link##*/}")" done cd "$(dirname -- "$link")" if [[ $n -ge $limit ]] then echo "Recursion limit ($limit) exceeded." >&2 return 2 fi printf '%s/%s\n' "$(pwd -P)" "${link##*/}" )} Note that all the cd and set stuff takes place in a subshell. A: In case where pwd can't be used (e.g. calling a scripts from a different location), use realpath (with or without dirname): $(dirname $(realpath $PATH_TO_BE_RESOLVED)) Works both when calling through (multiple) symlink(s) or when directly calling the script - from any location. A: According to the standards, pwd -P should return the path with symlinks resolved. C function char *getcwd(char *buf, size_t size) from unistd.h should have the same behaviour. 
getcwd pwd A: readlink -e [filepath] seems to be exactly what you're asking for - it accepts an arbirary path, resolves all symlinks, and returns the "real" path - and it's "standard *nix" that likely all systems already have A: To work around the Mac incompatibility, I came up with echo `php -r "echo realpath('foo');"` Not great but cross OS A: Try this: cd $(dirname $([ -L $0 ] && readlink -f $0 || echo $0)) A: Since I've run into this many times over the years, and this time around I needed a pure bash portable version that I could use on OSX and linux, I went ahead and wrote one: The living version lives here: https://github.com/keen99/shell-functions/tree/master/resolve_path but for the sake of SO, here's the current version (I feel it's well tested..but I'm open to feedback!) Might not be difficult to make it work for plain bourne shell (sh), but I didn't try...I like $FUNCNAME too much. :) #!/bin/bash resolve_path() { #I'm bash only, please! # usage: resolve_path <a file or directory> # follows symlinks and relative paths, returns a full real path # local owd="$PWD" #echo "$FUNCNAME for $1" >&2 local opath="$1" local npath="" local obase=$(basename "$opath") local odir=$(dirname "$opath") if [[ -L "$opath" ]] then #it's a link. #file or directory, we want to cd into it's dir cd $odir #then extract where the link points. npath=$(readlink "$obase") #have to -L BEFORE we -f, because -f includes -L :( if [[ -L $npath ]] then #the link points to another symlink, so go follow that. resolve_path "$npath" #and finish out early, we're done. return $? #done elif [[ -f $npath ]] #the link points to a file. then #get the dir for the new file nbase=$(basename $npath) npath=$(dirname $npath) cd "$npath" ndir=$(pwd -P) retval=0 #done elif [[ -d $npath ]] then #the link points to a directory. cd "$npath" ndir=$(pwd -P) retval=0 #done else echo "$FUNCNAME: ERROR: unknown condition inside link!!" >&2 echo "opath [[ $opath ]]" >&2 echo "npath [[ $npath ]]" >&2 return 1 fi else if ! [[ -e "$opath" ]] then echo "$FUNCNAME: $opath: No such file or directory" >&2 return 1 #and break early elif [[ -d "$opath" ]] then cd "$opath" ndir=$(pwd -P) retval=0 #done elif [[ -f "$opath" ]] then cd $odir ndir=$(pwd -P) nbase=$(basename "$opath") retval=0 #done else echo "$FUNCNAME: ERROR: unknown condition outside link!!" >&2 echo "opath [[ $opath ]]" >&2 return 1 fi fi #now assemble our output echo -n "$ndir" if [[ "x${nbase:=}" != "x" ]] then echo "/$nbase" else echo fi #now return to where we were cd "$owd" return $retval } here's a classic example, thanks to brew: %% ls -l `which mvn` lrwxr-xr-x 1 draistrick 502 29 Dec 17 10:50 /usr/local/bin/mvn@ -> ../Cellar/maven/3.2.3/bin/mvn use this function and it will return the -real- path: %% cat test.sh #!/bin/bash . resolve_path.inc echo echo "relative symlinked path:" which mvn echo echo "and the real path:" resolve_path `which mvn` %% test.sh relative symlinked path: /usr/local/bin/mvn and the real path: /usr/local/Cellar/maven/3.2.3/libexec/bin/mvn A: Here I present what I believe to be a cross-platform (Linux and macOS at least) solution to the answer that is working well for me currently. crosspath() { local ref="$1" if [ -x "$(which realpath)" ]; then path="$(realpath "$ref")" else path="$(readlink -f "$ref" 2> /dev/null)" if [ $? -gt 0 ]; then if [ -x "$(which readlink)" ]; then if [ ! -z "$(readlink "$ref")" ]; then ref="$(readlink "$ref")" fi else echo "realpath and readlink not available. The following may not be the final path." 
1>&2 fi if [ -d "$ref" ]; then path="$(cd "$ref"; pwd -P)" else path="$(cd $(dirname "$ref"); pwd -P)/$(basename "$ref")" fi fi fi echo "$path" } Here is a macOS (only?) solution. Possibly better suited to the original question. mac_realpath() { local ref="$1" if [[ ! -z "$(readlink "$ref")" ]]; then ref="$(readlink "$1")" fi if [[ -d "$ref" ]]; then echo "$(cd "$ref"; pwd -P)" else echo "$(cd $(dirname "$ref"); pwd -P)/$(basename "$ref")" fi } A: My answer here Bash: how to get real path of a symlink? but in short very handy in scripts: script_home=$( dirname $(realpath "$0") ) echo Original script home: $script_home These are part of GNU coreutils, suitable for use in Linux systems. To test everything, we put symlink into /home/test2/, amend some additional things and run/call it from root directory: /$ /home/test2/symlink /home/test Original script home: /home/test Where Original script is: /home/test/realscript.sh Called script is: /home/test2/symlink A: My 2 cents. This function is POSIX compliant, and both the source and the destination can contain ->. However, I have not gotten it work with filenames that container newline or tabs, as ls in general has issues with those. resolve_symlink() { test -L "$1" && ls -l "$1" | awk -v SYMLINK="$1" '{ SL=(SYMLINK)" -> "; i=index($0, SL); s=substr($0, i+length(SL)); print s }' } I believe the solution here is the file command, with a custom magic file that only outputs the destination of the provided symlink. A: This is the best solution, tested in Bash 3.2.57: # Read a path (similar to `readlink`) recursively, until the physical path without any links (like `cd -P`) is found. # Accepts any existing path, prints its physical path and exits `0`, exits `1` if some contained links don't exist. # Motivation: `${BASH_SOURCE[0]}` often contains links; using it directly to extract your project's path may fail. # # Example: Safely `source` a file located relative to the current script # # source "$(dirname "$(rreadlink "${BASH_SOURCE[0]}")")/relative/script.sh" #Inspiration: https://stackoverflow.com/a/51089005/6307827 rreadlink () { declare p="$1" d l while :; do d="$(cd -P "$(dirname "$p")" && pwd)" || return $? #absolute path without symlinks p="$d/$(basename "$p")" if [ -h "$p" ]; then l="$(readlink "$p")" || break #A link must be resolved from its fully resolved parent dir. d="$(cd "$d" && cd -P "$(dirname "$l")" && pwd)" || return $? p="$d/$(basename "$l")" else break fi done printf '%s\n' "$p" }
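For completeness, since the question explicitly tried to avoid a C helper: if you ever do allow one, the whole job (short of ~username expansion, which is a shell feature) is a single call to realpath(3). A minimal sketch, printing the canonical path of each argument:

#include <limits.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    char resolved[PATH_MAX];
    int i;

    for (i = 1; i < argc; i++) {
        /* realpath() follows every intermediate symlink and resolves "." and ".." */
        if (realpath(argv[i], resolved) == NULL) {
            perror(argv[i]);
            return 1;
        }
        printf("%s\n", resolved);
    }
    return 0;
}

Compile it once and call it from the script wherever readlink -f or realpath is not available; the pure-shell functions above remain the right tool where shipping a binary is not an option.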
{ "language": "en", "url": "https://stackoverflow.com/questions/7665", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "254" }
Q: Best practices for building Flash video player We have a custom-built Flash-based video player that I maintain, and it needs to support preroll ads and ideally both progressive video playback and streaming depending on a server switch. I've been working with the FLVPlayback component but am finding myself a little out of my depth. Are there any good tutorials or resources for understanding the difference between NetStream and FLVPlayback? Or is one part of the other? Have googled without success. For the preroll ads we'll probably use DART In-Stream, which is part of the reason I feel I'm losing a grip on the best way to structure this thing. Any help with best practices or links most appreciated - ta! EDIT - Update: I wrote a player by hand and got it more or less working with everything it needed to do, but we did migrate to JW Player across all the web properties in the end, about six months later. It's very reliable and well-supported, it integrated with the DART system well, and the designers found it easy to skin. A: I would definitely have a look at the JW Flash Media Player: http://www.jeroenwijering.com/?item=JW_FLV_Player It's open source, and I found the source quite clean and easy to understand; it also supports playlists. I don't know the DART In-Stream stuff, but maybe you could "creatively use" the playlist feature to achieve that? Source code is available here: http://code.jeroenwijering.com/trac/ A: I've used the FLVPlayback component for a while now and while it has some quirks I find it to be pretty versatile without having to write a lot of code. The only large drawback I found is that if you try to stream a file that doesn't exist, the playstate remains "loading" and never resolves - at that point, you can't load anything else into it and it'll stay loading forever. For what it sounds like you are doing though it should handle that stuff fine - any of the default control bars will handle the status of either your progressive or streaming videos and it has some cool closed captioning features to boot. As for documentation - Adobe's LiveDocs is really helpful: http://livedocs.adobe.com/flash/9.0/ActionScriptLangRefV3/fl/video/FLVPlayback.html Can't speak on the DART stuff though - never had to deal with it. A: I don't really like the FLVPlayback component: it's hard to handle implementation-wise, somewhat tricky to skin nicely, and it's also quite bloated. So I'd opt to either use the JW Flash Media Player as recommended by Michael above or roll my own entirely. A: If you are interested in writing your own video player, you should pick up the following book: Learning ActionScript 3 http://www.learningactionscript3.com/. It will give you a great understanding of AS3 and there is also a chapter dedicated to creating your own basic Flash player, which you can then build upon.
{ "language": "en", "url": "https://stackoverflow.com/questions/7674", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Replicating between SQL Server 2005 and SQL Server Compact Edition Can it be done and if so, how? A: You can also check out Sync Services for SQL Server and Compact Edition. The benefit of Sync Services is that you don't need a replication server or IIS and you can also sync between Compact Edition databases. This method involves writing a fair bit more code and is fairly involved, but I'd recommend looking into it as a lightweight service. A: You can use Merge Replication. There's a tutorial here: SQL Server Compact 3.5 How-to Tutorials (Number 5). A: Certainly replication is possible, as is Sync Services if you're not afraid to get your hands dirty. It depends on the details of what you need: * *Sometimes-connected application wanting to have a read-only cache: Sync Services *Sometimes-connected application wanting to have part or full update ability: Sync Services *Remote site with multiple workstations needing read/write access to data: replication if you can get a secure network connection that's stable enough, otherwise look at extending Sync Services to work with SQL Express (or full SQL Server) based on the sample here: Sync using SQL Express If you just want a SQL CE database and you're working with a SQL 2008 server then the wizard in Visual Studio 2008 SP1 will do all the work for you; you need only add one line of code to it if you want bi-directional support. If you can't upgrade then it will take more work with SQL 2005, and it's only reliable if you have at least SP2. I'm in the middle of a project that requires multiple sites to have a sub-set of data in an environment where each site may lose its connection to the head office at times; we've managed to get Sync Services to work with SQL 2008 at the head office and SQL Express 2008 at each site with full change tracking (a 2008 feature) and it's working great. It does require a reasonable amount of code (C# and SQL), so we've used some pretty smart templates to help; be aware of that. Perhaps you could refine your question with more details? A: Because of budget constraints I think it will have to be a beta-tester's approach. I tried following the guide and can't seem to get it working. Before I spend time getting it to work, I just want to confirm: replicating between SQL Server 2005 and Compact Edition is something that can be done? A: I just want to confirm: replicating between SQL Server 2005 and Compact Edition is something that can be done? Yes, it can definitely be done using either Merge Replication or Sync Services.
{ "language": "en", "url": "https://stackoverflow.com/questions/7676", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Test serialization encoding What is the best way to verify/test that a text string is serialized to a byte array with a certain encoding? In my case, I want to verify that an XML structure is serialized to a byte array with the UTF-8 encoding, which uses a variable number of bytes per character. As an example, my current ugly procedure is to inject a character known to require two bytes into the structure before serializing, then replace the two-byte character with an ASCII character and compare the serialized array lengths. This should yield two serialized arrays where the array containing the two-byte character is one byte longer. Bonus points if the solution is elegant in Java. I can't think of any elegant way to seek for a byte sequence in a byte array. (That could be used to seek for a known byte sequence representing the desired character's representation in UTF-8.) A: Perhaps you could deserialise the byte array using a known encoding and ensure that (a) it doesn't throw any exceptions, and (b) deserialises to the original string. It seems from your description of the scenario that you may not have the original string readily available. Might there be a way to create it? A: That's good. You're right, I don't have the original string, since I'm testing a module that creates an XML document as a byte array. I didn't think about deserializing to a String with an expected encoding. That will do the trick.
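Since the question mentions wanting to seek for a known byte sequence in a byte array, here is a minimal sketch of that check in C; the same idea carries over to Java, where the round-trip suggested above (decode with the expected charset and compare) is usually the cleaner test. The marker bytes in the usage comment assume the injected character is U+00E5, whose UTF-8 encoding is 0xC3 0xA5.

#include <stddef.h>
#include <string.h>

/* Return 1 if needle occurs anywhere in hay, 0 otherwise (a naive scan is
   fine for test code). */
static int contains_bytes(const unsigned char *hay, size_t hay_len,
                          const unsigned char *needle, size_t needle_len)
{
    size_t i;
    if (needle_len == 0 || hay_len < needle_len)
        return 0;
    for (i = 0; i + needle_len <= hay_len; i++)
        if (memcmp(hay + i, needle, needle_len) == 0)
            return 1;
    return 0;
}

/* Example test assertion:
   const unsigned char utf8_marker[] = { 0xC3, 0xA5 };
   assert(contains_bytes(doc, doc_len, utf8_marker, sizeof utf8_marker));   */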
{ "language": "en", "url": "https://stackoverflow.com/questions/7681", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: Merge Sort a Linked List I was recently brushing up on some fundamentals and found merge sorting a linked list to be a pretty good challenge. If you have a good implementation then show it off here. A: I wonder why it should be as big a challenge as it is stated here; here is a straightforward implementation in Java without any "clever tricks". //The main function public static Node merge_sort(Node head) { if(head == null || head.next == null) return head; Node middle = getMiddle(head); //get the middle of the list Node left_head = head; Node right_head = middle.next; middle.next = null; //split the list into two halves return merge(merge_sort(left_head), merge_sort(right_head)); //recurse on that } //Merge subroutine to merge two sorted lists public static Node merge(Node a, Node b) { Node dummyHead = new Node(); Node current = dummyHead; while(a != null && b != null) { if(a.data <= b.data) { current.next = a; a = a.next; } else { current.next = b; b = b.next; } current = current.next; } current.next = (a == null) ? b : a; //append whatever remains of the longer list return dummyHead.next; } //Finding the middle element of the list for splitting public static Node getMiddle(Node head) { if(head == null) return head; Node slow = head, fast = head; while(fast.next != null && fast.next.next != null) { slow = slow.next; fast = fast.next.next; } return slow; } A: One interesting way is to maintain a stack, and only merge if the list on the stack has the same number of elements, and otherwise push the list, until you run out of elements in the incoming list, and then merge up the stack. A: The simplest is from Gonnet + Baeza Yates Handbook of Algorithms. You call it with the number of sorted elements you want, which recursively gets bisected until it reaches a request for a size one list which you then just peel off the front of the original list. These all get merged up into a full sized sorted list. [Note that the cool stack-based one in the first post is called the Online Mergesort and it gets the tiniest mention in an exercise in Knuth Vol 3] A: Here's an alternative recursive version. This does not need to step along the list to split it: we supply a pointer to a head element (which is not part of the sort) and a length, and the recursive function returns a pointer to the end of the sorted list. element* mergesort(element *head,long lengtho) { long count1=(lengtho/2), count2=(lengtho-count1); element *next1,*next2,*tail1,*tail2,*tail; if (lengtho<=1) return head->next; /* Trivial case. */ tail1 = mergesort(head,count1); tail2 = mergesort(tail1,count2); tail=head; next1 = head->next; next2 = tail1->next; tail1->next = tail2->next; /* in case this ends up as the tail */ while (1) { if(cmp(next1,next2)<=0) { tail->next=next1; tail=next1; if(--count1==0) { tail->next=next2; return tail2; } next1=next1->next; } else { tail->next=next2; tail=next2; if(--count2==0) { tail->next=next1; return tail1; } next2=next2->next; } } } A: I'd been obsessing over optimizing clutter for this algorithm and below is what I've finally arrived at. A lot of other code on the Internet and StackOverflow is horribly bad. There are people trying to get the middle point of the list, doing recursion, having multiple loops for left over nodes, maintaining counts of a ton of things - ALL of which is unnecessary. MergeSort naturally fits a linked list and the algorithm can be beautiful and compact, but it's not trivial to get to that state. The code below maintains the minimum number of variables and has the minimum number of logical steps needed for the algorithm (i.e.
without making code unmaintainable/unreadable) as far as I know. However I haven't tried to minimize LOC and kept as much white space as necessary to keep things readable. I've tested this code through fairly rigorous unit tests. Note that this answer combines few techniques from other answer https://stackoverflow.com/a/3032462/207661. While the code is in C#, it should be trivial to convert in to C++, Java, etc. SingleListNode<T> SortLinkedList<T>(SingleListNode<T> head) where T : IComparable<T> { int blockSize = 1, blockCount; do { //Maintain two lists pointing to two blocks, left and right SingleListNode<T> left = head, right = head, tail = null; head = null; //Start a new list blockCount = 0; //Walk through entire list in blocks of size blockCount while (left != null) { blockCount++; //Advance right to start of next block, measure size of left list while doing so int leftSize = 0, rightSize = blockSize; for (;leftSize < blockSize && right != null; ++leftSize) right = right.Next; //Merge two list until their individual ends bool leftEmpty = leftSize == 0, rightEmpty = rightSize == 0 || right == null; while (!leftEmpty || !rightEmpty) { SingleListNode<T> smaller; //Using <= instead of < gives us sort stability if (rightEmpty || (!leftEmpty && left.Value.CompareTo(right.Value) <= 0)) { smaller = left; left = left.Next; --leftSize; leftEmpty = leftSize == 0; } else { smaller = right; right = right.Next; --rightSize; rightEmpty = rightSize == 0 || right == null; } //Update new list if (tail != null) tail.Next = smaller; else head = smaller; tail = smaller; } //right now points to next block for left left = right; } //terminate new list, take care of case when input list is null if (tail != null) tail.Next = null; //Lg n iterations blockSize <<= 1; } while (blockCount > 1); return head; } Points of interest * *There is no special handling for cases like null list of list of 1 etc required. These cases "just works". *Lot of "standard" algorithms texts have two loops to go over leftover elements to handle the case when one list is shorter than other. Above code eliminates need for it. *We make sure sort is stable *The inner while loop which is a hot spot evaluates 3 expressions per iteration on average which I think is minimum one can do. Update: @ideasman42 has translated above code to C/C++ along with suggestions for fixing comments and bit more improvement. Above code is now up to date with these. A: I decided to test the examples here, and also one more approach, originally written by Jonathan Cunningham in Pop-11. I coded all the approaches in C# and did a comparison with a range of different list sizes. I compared the Mono eglib approach by Raja R Harinath, the C# code by Shital Shah, the Java approach by Jayadev, the recursive and non-recursive versions by David Gamble, the first C code by Ed Wynn (this crashed with my sample dataset, I didn't debug), and Cunningham's version. Full code here: https://gist.github.com/314e572808f29adb0e41.git. Mono eglib is based on a similar idea to Cunningham's and is of comparable speed, unless the list happens to be sorted already, in which case Cunningham's approach is much much faster (if its partially sorted, the eglib is slightly faster). The eglib code uses a fixed table to hold the merge sort recursion, whereas Cunningham's approach works by using increasing levels of recursion - so it starts out using no recursion, then 1-deep recursion, then 2-deep recursion and so on, according to how many steps are needed to do the sort. 
I find the Cunningham code a little easier to follow and there is no guessing involved in how big to make the recursion table, so it gets my vote. The other approaches I tried from this page were two or more times slower. Here is the C# port of the Pop-11 sort: /// <summary> /// Sort a linked list in place. Returns the sorted list. /// Originally by Jonathan Cunningham in Pop-11, May 1981. /// Ported to C# by Jon Meyer. /// </summary> public class ListSorter<T> where T : IComparable<T> { SingleListNode<T> workNode = new SingleListNode<T>(default(T)); SingleListNode<T> list; /// <summary> /// Sorts a linked list. Returns the sorted list. /// </summary> public SingleListNode<T> Sort(SingleListNode<T> head) { if (head == null) throw new NullReferenceException("head"); list = head; var run = GetRun(); // get first run // As we progress, we increase the recursion depth. var n = 1; while (list != null) { var run2 = GetSequence(n); run = Merge(run, run2); n++; } return run; } // Get the longest run of ordered elements from list. // The run is returned, and list is updated to point to the // first out-of-order element. SingleListNode<T> GetRun() { var run = list; // the return result is the original list var prevNode = list; var prevItem = list.Value; list = list.Next; // advance to the next item while (list != null) { var comp = prevItem.CompareTo(list.Value); if (comp > 0) { // reached end of sequence prevNode.Next = null; break; } prevItem = list.Value; prevNode = list; list = list.Next; } return run; } // Generates a sequence of Merge and GetRun() operations. // If n is 1, returns GetRun() // If n is 2, returns Merge(GetRun(), GetRun()) // If n is 3, returns Merge(Merge(GetRun(), GetRun()), // Merge(GetRun(), GetRun())) // and so on. SingleListNode<T> GetSequence(int n) { if (n < 2) { return GetRun(); } else { n--; var run1 = GetSequence(n); if (list == null) return run1; var run2 = GetSequence(n); return Merge(run1, run2); } } // Given two ordered lists this returns a list that is the // result of merging the two lists in-place (modifying the pairs // in list1 and list2). SingleListNode<T> Merge(SingleListNode<T> list1, SingleListNode<T> list2) { // we reuse a single work node to hold the result. // Simplifies the number of test cases in the code below. var prevNode = workNode; while (true) { if (list1.Value.CompareTo(list2.Value) <= 0) { // list1 goes first prevNode.Next = list1; prevNode = list1; if ((list1 = list1.Next) == null) { // reached end of list1 - join list2 to prevNode prevNode.Next = list2; break; } } else { // same but for list2 prevNode.Next = list2; prevNode = list2; if ((list2 = list2.Next) == null) { prevNode.Next = list1; break; } } } // the result is in the back of the workNode return workNode.Next; } } A: A simpler/clearer implementation might be the recursive implementation, from which the NLog(N) execution time is more clear. typedef struct _aList { struct _aList* next; struct _aList* prev; // Optional. // some data } aList; aList* merge_sort_list_recursive(aList *list,int (*compare)(aList *one,aList *two)) { // Trivial case. if (!list || !list->next) return list; aList *right = list, *temp = list, *last = list, *result = 0, *next = 0, *tail = 0; // Find halfway through the list (by running two pointers, one at twice the speed of the other). while (temp && temp->next) { last = right; right = right->next; temp = temp->next->next; } // Break the list in two. 
(prev pointers are broken here, but we fix later) last->next = 0; // Recurse on the two smaller lists: list = merge_sort_list_recursive(list, compare); right = merge_sort_list_recursive(right, compare); // Merge: while (list || right) { // Take from empty lists, or compare: if (!right) { next = list; list = list->next; } else if (!list) { next = right; right = right->next; } else if (compare(list, right) < 0) { next = list; list = list->next; } else { next = right; right = right->next; } if (!result) { result=next; } else { tail->next=next; } next->prev = tail; // Optional. tail = next; } return result; } NB: This has a Log(N) storage requirement for the recursion. Performance should be roughly comparable with the other strategy I posted. There is a potential optimisation here by running the merge loop while (list && right), and simple appending the remaining list (since we don't really care about the end of the lists; knowing that they're merged suffices). A: Heavily based on the EXCELLENT code from: http://www.chiark.greenend.org.uk/~sgtatham/algorithms/listsort.html Trimmed slightly, and tidied: typedef struct _aList { struct _aList* next; struct _aList* prev; // Optional. // some data } aList; aList *merge_sort_list(aList *list,int (*compare)(aList *one,aList *two)) { int listSize=1,numMerges,leftSize,rightSize; aList *tail,*left,*right,*next; if (!list || !list->next) return list; // Trivial case do { // For each power of two<=list length numMerges=0,left=list;tail=list=0; // Start at the start while (left) { // Do this list_len/listSize times: numMerges++,right=left,leftSize=0,rightSize=listSize; // Cut list into two halves (but don't overrun) while (right && leftSize<listSize) leftSize++,right=right->next; // Run through the lists appending onto what we have so far. while (leftSize>0 || (rightSize>0 && right)) { // Left empty, take right OR Right empty, take left, OR compare. if (!leftSize) {next=right;right=right->next;rightSize--;} else if (!rightSize || !right) {next=left;left=left->next;leftSize--;} else if (compare(left,right)<0) {next=left;left=left->next;leftSize--;} else {next=right;right=right->next;rightSize--;} // Update pointers to keep track of where we are: if (tail) tail->next=next; else list=next; // Sort prev pointer next->prev=tail; // Optional. tail=next; } // Right is now AFTER the list we just sorted, so start the next sort there. left=right; } // Terminate the list, double the list-sort size. tail->next=0,listSize<<=1; } while (numMerges>1); // If we only did one merge, then we just sorted the whole list. return list; } NB: This is O(NLog(N)) guaranteed, and uses O(1) resources (no recursion, no stack, nothing). A: Here is my implementation of Knuth's "List merge sort" (Algorithm 5.2.4L from Vol.3 of TAOCP, 2nd ed.). I'll add some comments at the end, but here's a summary: On random input, it runs a bit faster than Simon Tatham's code (see Dave Gamble's non-recursive answer, with a link) but a bit slower than Dave Gamble's recursive code. It's harder to understand than either. At least in my implementation, it requires each element to have TWO pointers to elements. (An alternative would be one pointer and a boolean flag.) So, it's probably not a useful approach. However, one distinctive point is that it runs very fast if the input has long stretches that are already sorted. element *knuthsort(element *list) { /* This is my attempt at implementing Knuth's Algorithm 5.2.4L "List merge sort" from Vol.3 of TAOCP, 2nd ed. 
*/ element *p, *pnext, *q, *qnext, head1, head2, *s, *t; if(!list) return NULL; L1: /* This is the clever L1 from exercise 12, p.167, solution p.647. */ head1.next=list; t=&head2; for(p=list, pnext=p->next; pnext; p=pnext, pnext=p->next) { if( cmp(p,pnext) > 0 ) { t->next=NULL; t->spare=pnext; t=p; } } t->next=NULL; t->spare=NULL; p->spare=NULL; head2.next=head2.spare; L2: /* begin a new pass: */ t=&head2; q=t->next; if(!q) return head1.next; s=&head1; p=s->next; L3: /* compare: */ if( cmp(p,q) > 0 ) goto L6; L4: /* add p onto the current end, s: */ if(s->next) s->next=p; else s->spare=p; s=p; if(p->next) { p=p->next; goto L3; } else p=p->spare; L5: /* complete the sublist by adding q and all its successors: */ s->next=q; s=t; for(qnext=q->next; qnext; q=qnext, qnext=q->next); t=q; q=q->spare; goto L8; L6: /* add q onto the current end, s: */ if(s->next) s->next=q; else s->spare=q; s=q; if(q->next) { q=q->next; goto L3; } else q=q->spare; L7: /* complete the sublist by adding p and all its successors: */ s->next=p; s=t; for(pnext=p->next; pnext; p=pnext, pnext=p->next); t=p; p=p->spare; L8: /* is this end of the pass? */ if(q) goto L3; if(s->next) s->next=p; else s->spare=p; t->next=NULL; t->spare=NULL; goto L2; } A: There's a non-recursive linked-list mergesort in mono eglib. The basic idea is that the control-loop for the various merges parallels the bitwise-increment of a binary integer. There are O(n) merges to "insert" n nodes into the merge tree, and the rank of those merges corresponds to the binary digit that gets incremented. Using this analogy, only O(log n) nodes of the merge-tree need to be materialized into a temporary holding array. A: Another example of a non-recursive merge sort for linked lists, where the functions are not part of a class. This example code and HP / Microsoft std::list::sort both use the same basic algorithm. A bottom up, non-recursive, merge sort that uses a small (26 to 32) array of pointers to the first nodes of a list, where array[i] is either 0 or points to a list of size 2 to the power i. On my system, Intel 2600K 3.4ghz, it can sort 4 million nodes with 32 bit unsigned integers as data in about 1 second. 
typedef struct NODE_{ struct NODE_ * next; uint32_t data; }NODE; NODE * MergeLists(NODE *, NODE *); /* prototype */ /* sort a list using array of pointers to list */ /* aList[i] == NULL or ptr to list with 2^i nodes */ #define NUMLISTS 32 /* number of lists */ NODE * SortList(NODE *pList) { NODE * aList[NUMLISTS]; /* array of lists */ NODE * pNode; NODE * pNext; int i; if(pList == NULL) /* check for empty list */ return NULL; for(i = 0; i < NUMLISTS; i++) /* init array */ aList[i] = NULL; pNode = pList; /* merge nodes into array */ while(pNode != NULL){ pNext = pNode->next; pNode->next = NULL; for(i = 0; (i < NUMLISTS) && (aList[i] != NULL); i++){ pNode = MergeLists(aList[i], pNode); aList[i] = NULL; } if(i == NUMLISTS) /* don't go beyond end of array */ i--; aList[i] = pNode; pNode = pNext; } pNode = NULL; /* merge array into one list */ for(i = 0; i < NUMLISTS; i++) pNode = MergeLists(aList[i], pNode); return pNode; } /* merge two already sorted lists */ /* compare uses pSrc2 < pSrc1 to follow the STL rule */ /* of only using < and not <= */ NODE * MergeLists(NODE *pSrc1, NODE *pSrc2) { NODE *pDst = NULL; /* destination head ptr */ NODE **ppDst = &pDst; /* ptr to head or prev->next */ if(pSrc1 == NULL) return pSrc2; if(pSrc2 == NULL) return pSrc1; while(1){ if(pSrc2->data < pSrc1->data){ /* if src2 < src1 */ *ppDst = pSrc2; pSrc2 = *(ppDst = &(pSrc2->next)); if(pSrc2 == NULL){ *ppDst = pSrc1; break; } } else { /* src1 <= src2 */ *ppDst = pSrc1; pSrc1 = *(ppDst = &(pSrc1->next)); if(pSrc1 == NULL){ *ppDst = pSrc2; break; } } } return pDst; } Visual Studio 2015 changed std::list::sort to be based on iterators instead of lists, and also changed to a top down merge sort, which requires the overhead of scanning. I initially assumed that the switch to top down was needed to work with the iterators, but when it was asked about again, I investigated this and determined that the switch to the slower top down method was not needed, and bottom up could be implemented using the same iterator based logic. The answer in this link explains this and provide a stand-alone example as well as a replacement for VS2019's std::list::sort() in the include file "list". `std::list<>::sort()` - why the sudden switch to top-down strategy? A: A tested, working C++ version of single linked list, based on the highest voted answer. 
singlelinkedlist.h: #pragma once #include <stdexcept> #include <iostream> #include <initializer_list> namespace ythlearn{ template<typename T> class Linkedlist{ public: class Node{ public: Node* next; T elem; }; Node head; int _size; public: Linkedlist(){ head.next = nullptr; _size = 0; } Linkedlist(std::initializer_list<T> init_list){ head.next = nullptr; _size = 0; for(auto s = init_list.begin(); s!=init_list.end(); s++){ push_left(*s); } } int size(){ return _size; } bool isEmpty(){ return size() == 0; } bool isSorted(){ Node* n_ptr = head.next; while(n_ptr->next != nullptr){ if(n_ptr->elem > n_ptr->next->elem) return false; n_ptr = n_ptr->next; } return true; } Linkedlist& push_left(T elem){ Node* n = new Node; n->elem = elem; n->next = head.next; head.next = n; ++_size; return *this; } void print(){ Node* loopPtr = head.next; while(loopPtr != nullptr){ std::cout << loopPtr->elem << " "; loopPtr = loopPtr->next; } std::cout << std::endl; } void call_merge(){ head.next = merge_sort(head.next); } Node* merge_sort(Node* n){ if(n == nullptr || n->next == nullptr) return n; Node* middle = getMiddle(n); Node* left_head = n; Node* right_head = middle->next; middle->next = nullptr; return merge(merge_sort(left_head), merge_sort(right_head)); } Node* getMiddle(Node* n){ if(n == nullptr) return n; Node* slow, *fast; slow = fast = n; while(fast->next != nullptr && fast->next->next != nullptr){ slow = slow->next; fast = fast->next->next; } return slow; } Node* merge(Node* a, Node* b){ Node dummyHead; Node* current = &dummyHead; while(a != nullptr && b != nullptr){ if(a->elem < b->elem){ current->next = a; a = a->next; }else{ current->next = b; b = b->next; } current = current->next; } current->next = (a == nullptr) ? b : a; return dummyHead.next; } Linkedlist(const Linkedlist&) = delete; Linkedlist& operator=(const Linkedlist&) const = delete; ~Linkedlist(){ Node* node_to_delete; Node* ptr = head.next; while(ptr != nullptr){ node_to_delete = ptr; ptr = ptr->next; delete node_to_delete; } } }; } main.cpp: #include <iostream> #include <cassert> #include "singlelinkedlist.h" using namespace std; using namespace ythlearn; int main(){ Linkedlist<int> l = {3,6,-5,222,495,-129,0}; l.print(); l.call_merge(); l.print(); assert(l.isSorted()); return 0; } A: Simplest Java Implementation: Time Complexity: O(nLogn) n = number of nodes. Each iteration through the linked list doubles the size of the sorted smaller linked lists. For example, after the first iteration, the linked list will be sorted into two halves. After the second iteration, the linked list will be sorted into four halves. It will keep sorting up to the size of the linked list. This will take O(logn) doublings of the small linked lists' sizes to reach the original linked list size. The n in nlogn is there because each iteration of the linked list will take time proportional to the number of nodes in the originial linked list. 
class Node { int data; Node next; Node(int d) { data = d; } } class LinkedList { Node head; public Node mergesort(Node head) { if(head == null || head.next == null) return head; Node middle = middle(head), middle_next = middle.next; middle.next = null; Node left = mergesort(head), right = mergesort(middle_next), node = merge(left, right); return node; } public Node merge(Node first, Node second) { Node node = null; if (first == null) return second; else if (second == null) return first; else if (first.data <= second.data) { node = first; node.next = merge(first.next, second); } else { node = second; node.next = merge(first, second.next); } return node; } public Node middle(Node head) { if (head == null) return head; Node second = head, first = head.next; while(first != null) { first = first.next; if (first != null) { second = second.next; first = first.next; } } return second; } } A: This is the entire Piece of code which shows how we can create linklist in java and sort it using Merge sort. I am creating node in MergeNode class and there is another class MergesortLinklist where there is divide and merge logic. class MergeNode { Object value; MergeNode next; MergeNode(Object val) { value = val; next = null; } MergeNode() { value = null; next = null; } public Object getValue() { return value; } public void setValue(Object value) { this.value = value; } public MergeNode getNext() { return next; } public void setNext(MergeNode next) { this.next = next; } @Override public String toString() { return "MergeNode [value=" + value + ", next=" + next + "]"; } } public class MergesortLinkList { MergeNode head; static int totalnode; public MergeNode getHead() { return head; } public void setHead(MergeNode head) { this.head = head; } MergeNode add(int i) { // TODO Auto-generated method stub if (head == null) { head = new MergeNode(i); // System.out.println("head value is "+head); return head; } MergeNode temp = head; while (temp.next != null) { temp = temp.next; } temp.next = new MergeNode(i); return head; } MergeNode mergesort(MergeNode nl1) { // TODO Auto-generated method stub if (nl1.next == null) { return nl1; } int counter = 0; MergeNode temp = nl1; while (temp != null) { counter++; temp = temp.next; } System.out.println("total nodes " + counter); int middle = (counter - 1) / 2; temp = nl1; MergeNode left = nl1, right = nl1; int leftindex = 0, rightindex = 0; if (middle == leftindex) { right = left.next; } while (leftindex < middle) { leftindex++; left = left.next; right = left.next; } left.next = null; left = nl1; System.out.println(left.toString()); System.out.println(right.toString()); MergeNode p1 = mergesort(left); MergeNode p2 = mergesort(right); MergeNode node = merge(p1, p2); return node; } MergeNode merge(MergeNode p1, MergeNode p2) { // TODO Auto-generated method stub MergeNode L = p1; MergeNode R = p2; int Lcount = 0, Rcount = 0; MergeNode tempnode = null; while (L != null && R != null) { int val1 = (int) L.value; int val2 = (int) R.value; if (val1 > val2) { if (tempnode == null) { tempnode = new MergeNode(val2); R = R.next; } else { MergeNode store = tempnode; while (store.next != null) { store = store.next; } store.next = new MergeNode(val2); R = R.next; } } else { if (tempnode == null) { tempnode = new MergeNode(val1); L = L.next; } else { MergeNode store = tempnode; while (store.next != null) { store = store.next; } store.next = new MergeNode(val1); L = L.next; } } } MergeNode handle = tempnode; while (L != null) { while (handle.next != null) { handle = handle.next; } handle.next = 
L; L = null; } // Copy remaining elements of L[] if any while (R != null) { while (handle.next != null) { handle = handle.next; } handle.next = R; R = null; } System.out.println("----------------sorted value-----------"); System.out.println(tempnode.toString()); return tempnode; } public static void main(String[] args) { MergesortLinkList objsort = new MergesortLinkList(); MergeNode n1 = objsort.add(9); MergeNode n2 = objsort.add(7); MergeNode n3 = objsort.add(6); MergeNode n4 = objsort.add(87); MergeNode n5 = objsort.add(16); MergeNode n6 = objsort.add(81); MergeNode n7 = objsort.add(21); MergeNode n8 = objsort.add(16); MergeNode n9 = objsort.add(99); MergeNode n10 = objsort.add(31); MergeNode val = objsort.mergesort(n1); System.out.println("===============sorted values====================="); while (val != null) { System.out.println(" value is " + val.value); val = val.next; } } } A: I don't see any C++ solutions posted here. So, here it goes. Hope it helps someone. class Solution { public: ListNode *merge(ListNode *left, ListNode *right){ ListNode *head = NULL, *temp = NULL; // Find which one is the head node for the merged list if(left->val <= right->val){ head = left, temp = left; left = left->next; } else{ head = right, temp = right; right = right->next; } while(left && right){ if(left->val <= right->val){ temp->next = left; temp = left; left = left->next; } else{ temp->next = right; temp = right; right = right->next; } } // If some elements still left in the left or the right list if(left) temp->next = left; if(right) temp->next = right; return head; } ListNode* sortList(ListNode* head){ if(!head || !head->next) return head; // Find the length of the list int length = 0; ListNode *temp = head; while(temp){ length++; temp = temp->next; } // Reset temp temp = head; // Store half of it in left and the other half in right // Create two lists and sort them ListNode *left = temp, *prev = NULL; int i = 0, mid = length / 2; // Left list while(i < mid){ prev = temp; temp = temp->next; i++; } // The end of the left list should point to NULL if(prev) prev->next = NULL; // Right list ListNode *right = temp; // Sort left list ListNode *sortedLeft = sortList(left); // Sort right list ListNode *sortedRight = sortList(right); // Merge them ListNode *sortedList = merge(sortedLeft, sortedRight); return sortedList; } }; A: Here is the Java Implementation of Merge Sort on Linked List: * *Time Complexity: O(n.logn) *Space Complexity: O(1) - Merge sort implementation on Linked List avoids the O(n) auxiliary storage cost normally associated with the algorithm class Solution { public ListNode mergeSortList(ListNode head) { if(head == null || head.next == null) return head; ListNode mid = getMid(head), second_head = mid.next; mid.next = null; return merge(mergeSortList(head), mergeSortList(second_head)); } private ListNode merge(ListNode head1, ListNode head2) { ListNode result = new ListNode(0), current = result; while(head1 != null && head2 != null) { if(head1.val < head2.val) { current.next = head1; head1 = head1.next; } else { current.next = head2; head2 = head2.next; } current = current.next; } if(head1 != null) current.next = head1; if(head2 != null) current.next = head2; return result.next; } private ListNode getMid(ListNode head) { ListNode slow = head, fast = head.next; while(fast != null && fast.next != null) { slow = slow.next; fast = fast.next.next; } return slow; } } A: Hey I know that this is a bit late an answer but got a fast simple one. The code is in F# but will goes in any language. 
Since this is an uncommen language of the ML family, I'll give some point to enhance the readability. F# are nesting done by tabulating. the last line of code in a function (nested part) are the return value. (x, y) is a tuple, x::xs is a list of the head x and tail xs (where xs can be empty), |> take the result of last line an pipe it as argument to the expression right of it (readability enhancing) and last (fun args -> some expression) are a lambda function. // split the list into a singleton list let split list = List.map (fun x -> [x]) lst // takes to list and merge them into a sorted list let sort lst1 lst2 = // nested function to hide accumulator let rec s acc pair = match pair with // empty list case, return the sorted list | [], [] -> List.rev acc | xs, [] | [], xs -> // one empty list case, // append the rest of xs onto acc and return the sorted list List.fold (fun ys y -> y :: ys) acc xs |> List.rev // general case | x::xs, y::ys -> match x < y with | true -> // cons x onto the accumulator s (x::acc) (xs,y::ys) | _ -> // cons y onto the accumulator s (y::acc) (x::xs,ys) s [] (lst1, lst2) let msort lst = let rec merge acc lst = match lst with | [] -> match acc with | [] -> [] // empty list case | _ -> merge [] acc | x :: [] -> // single list case (x is a list) match acc with | [] -> x // since acc are empty there are only x left, hence x are the sorted list. | _ -> merge [] (x::acc) // still need merging. | x1 :: x2 :: xs -> // merge the lists x1 and x2 and add them to the acummulator. recursiv call merge (sort x1 x2 :: acc) xs // return part split list // expand to singleton list list |> merge [] // merge and sort recursively. It is important to notice that this is fully tail recursive so no possibility of stack overflow, and by first expanding the list to a singleton list list in one go we, lower the constant factor on the worst cost. Since merge are working on list of list, we can recursively merge and sort the inner list until we get to the fix point where all inner list are sorted into one list and then we return that list, hence collapsing from a list list to a list again. A: Here is the solution using Swift Programming Language. //Main MergeSort Function func mergeSort(head: Node?) -> Node? { guard let head = head else { return nil } guard let _ = head.next else { return head } let middle = getMiddle(head: head) let left = head let right = middle.next middle.next = nil return merge(left: mergeSort(head: left), right: mergeSort(head: right)) } //Merge Function func merge(left: Node?, right: Node?) -> Node? { guard let left = left, let right = right else { return nil} let dummyHead: Node = Node(value: 0) var current: Node? = dummyHead var currentLeft: Node? = left var currentRight: Node? = right while currentLeft != nil && currentRight != nil { if currentLeft!.value < currentRight!.value { current?.next = currentLeft currentLeft = currentLeft!.next } else { current?.next = currentRight currentRight = currentRight!.next } current = current?.next } if currentLeft != nil { current?.next = currentLeft } if currentRight != nil { current?.next = currentRight } return dummyHead.next! } And here are the Node Class & getMiddle Method class Node { //Node Class which takes Integers as value var value: Int var next: Node? init(value: Int) { self.value = value } } func getMiddle(head: Node) -> Node { guard let nextNode = head.next else { return head } var slow: Node = head var fast: Node? = head while fast?.next?.next != nil { slow = slow.next! 
fast = fast!.next?.next } return slow } A: public int[] msort(int[] a) { if (a.Length > 1) { int min = a.Length / 2; int max = min; int[] b = new int[min]; int[] c = new int[max]; // dividing main array into two half arrays for (int i = 0; i < min; i++) { b[i] = a[i]; } for (int i = min; i < min + max; i++) { c[i - min] = a[i]; } b = msort(b); c = msort(c); int x = 0; int y = 0; int z = 0; while (b.Length != y && c.Length != z) { if (b[y] < c[z]) { a[x] = b[y]; //r-- x++; y++; } else { a[x] = c[z]; x++; z++; } } while (b.Length != y) { a[x] = b[y]; x++; y++; } while (c.Length != z) { a[x] = c[z]; x++; z++; } } return a; }
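For anyone who wants to actually run the C# answers above: none of them include the SingleListNode<T> type they operate on. Below is a minimal sketch of what that node class presumably looks like, together with a small test harness. The class shape and the BuildList helper are my own assumptions for illustration, not code from the original answers, and the harness assumes the SortLinkedList<T> method from the first answer is pasted into the same class as a static method.

using System;

// Assumed node shape used by the C# answers in this thread.
public class SingleListNode<T>
{
    public T Value;
    public SingleListNode<T> Next;
    public SingleListNode(T value) { Value = value; }
}

public static class MergeSortDemo
{
    // Hypothetical helper that builds a singly linked list from the given values.
    static SingleListNode<int> BuildList(params int[] values)
    {
        SingleListNode<int> head = null, tail = null;
        foreach (int v in values)
        {
            var node = new SingleListNode<int>(v);
            if (head == null) head = node; else tail.Next = node;
            tail = node;
        }
        return head;
    }

    public static void Main()
    {
        var head = BuildList(5, 3, 8, 1, 9, 2);
        head = SortLinkedList(head);          // method from the first answer, added as a static member
        for (var n = head; n != null; n = n.Next)
            Console.Write(n.Value + " ");     // expected output: 1 2 3 5 8 9
    }

    // Paste SortLinkedList<T> from the first answer here, marked static, so the call above compiles.
}

Note that the first answer declares SortLinkedList without a static keyword, so you will need to add one (or call it on an instance) for this harness to compile.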
{ "language": "en", "url": "https://stackoverflow.com/questions/7685", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "54" }
Q: How do I enable MSDTC on SQL Server? Is this even a valid question? I have a .NET Windows app that is using MSTDC and it is throwing an exception: System.Transactions.TransactionManagerCommunicationException: Network access for Distributed Transaction Manager (MSDTC) has been disabled. Please enable DTC for network access in the security configuration for MSDTC using the Component Services Administrative tool ---> System.Runtime.InteropServices.COMException (0x8004D024): The transaction manager has disabled its support for remote/network transactions. (Exception from HRESULT: 0x8004D024) at System.Transactions.Oletx.IDtcProxyShimFactory.ReceiveTransaction(UInt32 propgationTokenSize, Byte[] propgationToken, IntPtr managedIdentifier, Guid& transactionIdentifier, OletxTransactionIsolationLevel& isolationLevel, ITransactionShim& transactionShim).... I followed the Kbalertz guide to enable MSDTC on the PC on which the app is installed, but the error still occurs. I was wondering if this was a database issue? If so, how can I resolve it? A: Can also see here on how to turn on MSDTC from the Control Panel's services.msc. On the server where the trigger resides, you need to turn the MSDTC service on. You can this by clicking START > SETTINGS > CONTROL PANEL > ADMINISTRATIVE TOOLS > SERVICES. Find the service called 'Distributed Transaction Coordinator' and RIGHT CLICK (on it and select) > Start. A: MSDTC must be enabled on both systems, both server and client. Also, make sure that there isn't a firewall between the systems that blocks RPC. DTCTest is a nice litt app that helps you to troubleshoot any other problems. A: @Dan, Do I not need msdtc enabled for transactions to work? Only distributed transactions - Those that involve more than a single connection. Make doubly sure you are only opening a single connection within the transaction and it won't escalate - Performance will be much better too. A: I've found that the best way to debug is to use the microsoft tool called DTCPing * *Copy the file to both the server (DB) and the client (Application server/client pc) * *Start it at the server and the client *At the server: fill in the client netbios computer name and try to setup a DTC connection *Restart both applications. *At the client: fill in the server netbios computer name and try to setup a DTC connection I've had my fare deal of problems in our old company network, and I've got a few tips: * *if you get the error message "Gethostbyname failed" it means the computer can not find the other computer by its netbios name. The server could for instance resolve and ping the client, but that works on a DNS level. Not on a netbios lookup level. Using WINS servers or changing the LMHOST (dirty) will solve this problem. *if you get an error "Acces Denied", the security settings don't match. You should compare the security tab for the msdtc and get the server and client to match. One other thing to look at is the RestrictRemoteClients value. Depending on your OS version and more importantly the Service Pack, this value can be different. *Other connection problems: * *The firewall between the server and the client must allow communication over port 135. And more importantly the connection can be initiated from both sites (I had a lot of problems with the firewall people in my company because they assumed only the server would open an connection on to that port) *The protocol returns a random port to connect to for the real transaction communication. 
Firewall people don't like that, they like to restrict the ports to a certain range. You can restrict the RPC dynamic port generation to a certain range using the keys as described in How to configure RPC dynamic port allocation to work with firewalls. In my experience, if the DTCPing is able to setup a DTC connection initiated from the client and initiated from the server, your transactions are not the problem any more. A: Use this for windows Server 2008 r2 and Windows Server 2012 R2 * *Click Start, click Run, type dcomcnfg and then click OK to open Component Services. *In the console tree, click to expand Component Services, click to expand Computers, click to expand My Computer, click to expand Distributed Transaction Coordinator and then click Local DTC. *Right click Local DTC and click Properties to display the Local DTC Properties dialog box. *Click the Security tab. *Check mark "Network DTC Access" checkbox. *Finally check mark "Allow Inbound" and "Allow Outbound" checkboxes. *Click Apply, OK. *A message will pop up about restarting the service. *Click OK and That's all. Reference : https://msdn.microsoft.com/en-us/library/dd327979.aspx Note: Sometimes the network firewall on the Local Computer or the Server could interrupt your connection so make sure you create rules to "Allow Inbound" and "Allow Outbound" connection for C:\Windows\System32\msdtc.exe A: Do you even need MSDTC? The escalation you're experiencing is often caused by creating multiple connections within a single TransactionScope. If you do need it then you need to enable it as outlined in the error message. On XP: * *Go to Administrative Tools -> Component Services *Expand Component Services -> Computers -> *Right-click -> Properties -> MSDTC tab *Hit the Security Configuration button A: MSDTC can be configured with MsDtc PowerShell module, e.g.: # Import the module Import-Module -Name MsDtc # Set the DTC config $dtcNetworkSetting = @{ DtcName = 'Local' AuthenticationLevel = 'NoAuth' InboundTransactionsEnabled = $true OutboundTransactionsEnabled = $true RemoteClientAccessEnabled = $true RemoteAdministrationAccessEnabled = $true XATransactionsEnabled = $false LUTransactionsEnabled = $true } Set-DtcNetworkSetting @dtcNetworkSetting # Restart the MsDtc service Get-Service -Name MsDtc | Restart-Service Run on each of the machines that will be supporting the distributed transactions (i.e. where the MSDTC service is running).
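As a footnote to the "Do you even need MSDTC?" answer above: a TransactionScope only needs MSDTC once the transaction gets promoted to a distributed transaction, and the usual trigger is a second connection enlisting in the same scope. The sketch below is illustrative only; the connection string, table and column names are placeholders, the System.Transactions assembly must be referenced, and the exact promotion behaviour depends on your SQL Server version and connection pooling.

using System;
using System.Data.SqlClient;
using System.Transactions;

class EscalationDemo
{
    // Placeholder connection string - replace with your own.
    const string ConnStr = "Data Source=.;Initial Catalog=MyDb;Integrated Security=SSPI";

    static void Main()
    {
        // A single connection inside the scope stays a lightweight local
        // transaction (with SQL Server 2005 or later), so MSDTC is never involved.
        using (var scope = new TransactionScope())
        using (var con = new SqlConnection(ConnStr))
        {
            con.Open();
            using (var cmd = new SqlCommand("UPDATE Foo SET Bar = 1", con))
                cmd.ExecuteNonQuery();
            scope.Complete();
        }

        // Two connections open at the same time in one scope: the transaction is
        // typically promoted to a distributed transaction, and you get the exception
        // from the question if MSDTC network access is disabled.
        using (var scope = new TransactionScope())
        using (var con1 = new SqlConnection(ConnStr))
        using (var con2 = new SqlConnection(ConnStr))
        {
            con1.Open();
            con2.Open();
            using (var cmd1 = new SqlCommand("UPDATE Foo SET Bar = 1", con1))
                cmd1.ExecuteNonQuery();
            using (var cmd2 = new SqlCommand("UPDATE Foo SET Bar = 2", con2))
                cmd2.ExecuteNonQuery();
            scope.Complete();
        }
    }
}

Keeping all the work in a scope on one open connection, as the answer above suggests, avoids the promotion entirely and is faster; only genuinely distributed work should need the MSDTC configuration steps described in the other answers.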
{ "language": "en", "url": "https://stackoverflow.com/questions/7694", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "115" }
Q: IE8 overflow:auto with max-height I have an element which may contain very big amounts of data, but I don't want it to ruin the page layout, so I set max-height: 100px and overflow:auto, hoping for scrollbars to appear when the content does not fit. It all works fine in Firefox and IE7, but IE8 behaves as if overflow:hidden was present instead of overflow:auto. I tried overflow:scroll, still does not help, IE8 simply truncates the content without showing scrollbars. Changing max-height declaration to height makes overflow work OK, it's the combination of max-height and overflow:auto that breaks things. This is also logged as an official bug in the final, release version of IE8 Is there a workaround? For now I resorted to using height instead of max-height, but it leaves plenty of empty space in case there isn't much data. A: This is a really nasty bug as it affects us heavily on Stack Overflow with <pre> code blocks, which have max-height:600 and width:auto. It is logged as a bug in the final version of IE8 with no fix. http://connect.microsoft.com/IE/feedback/ViewFeedback.aspx?FeedbackID=408759 There is a really, really hacky CSS workaround: http://my.opera.com/dbloom/blog/2009/03/11/css-hack-for-ie8-standards-mode /* SUPER nasty IE8 hack to deal with this bug */ pre { max-height: none\9 } and of course conditional CSS as others have mentioned, but I dislike that because it means you're serving up extra HTML cruft in every page request. A: { overflow:auto } Try div overflow:auto A: I saw this logged as a fixed bug in RC1. But I've found a variation that seems to cause a hard assert render failure. Involves these two styles in a nested table. <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en"> <head> <title>Test</title> <style type="text/css"> .calendarBody { overflow: scroll; max-height: 500px; } </style> </head> <body> <table> <tbody> <tr> <td> This is a cell in the outer table. <div class="calendarBody"> <table> <tbody> <tr> <td> This is a cell in the inner table. </td> </tr> </tbody> </table> </div> </td> </tr> </tbody> </table> </body> </html> A: {max-height:200px, Overflow:auto} Thanks to Srinivas Tamada, The above code did work for me. A: To reproduce: (This crashes the whole page.) <HTML> <HEAD> <META content="IE=8" http-equiv="X-UA-Compatible"/> </HEAD> <BODY> look: <TABLE width="100%"> <TR> <TD> <TABLE width="100%"> <TR> <TD> <DIV style="overflow-y: scroll; max-height: 100px;"> X </DIV> </TD> </TR> </TABLE> </TD> </TR> </TABLE> </BODY> </HTML> (Whereas this works fine...) <HTML> <HEAD> <META content="IE=8" http-equiv="X-UA-Compatible"/> </HEAD> <BODY> look: <TABLE width="100%"> <TR> <TD> <TABLE width="100%"> <TR> <TD> <DIV style="overflow-y: scroll; max-height: 100px;"> The quick brown fox </DIV> </TD> </TR> </TABLE> </TD> </TR> </TABLE> </BODY> </HTML> (And, madly, so does this. [No content in the div at all.]) <HTML> <HEAD> <META content="IE=8" http-equiv="X-UA-Compatible"/> </HEAD> <BODY> look: <TABLE width="100%"> <TR> <TD> <TABLE width="100%"> <TR> <TD> <DIV style="overflow-y: scroll; max-height: 100px;"> </DIV> </TD> </TR> </TABLE> </TD> </TR> </TABLE> </BODY> </HTML> A: Similar situation, a pre element with maxHeight set by js to fit in allotted space, width 100%, overflow auto. If the content is shorter than maxHeight and also fits horizontally, we're good. 
If you resize the window so the content no longer fits horizontally, a horizontal scrollbar appears, but the height of the element immediately jumps to the full maxHeight, regardless of the height of the content. I tried various forms of the CSS hack mentioned by Jeff, but couldn't find a variant that didn't produce a JS bad-parameter error. The best I could find was to pick your poison for IE8: either drop the maxHeight limit, so the element can be any height (best for my case), or set height rather than maxHeight, so it's always that tall even if the content itself is much shorter. Far from ideal. This wacky behavior is gone in IE9. A: Set max-height only and don't set the overflow. This way it will show a scrollbar if the content is more than max-height and shrink if the content is less than max-height. A: I found this: https://perishablepress.com/maximum-and-minimum-height-and-width-in-internet-explorer/ This method has been verified in IE6 and should also work in IE5. Simply change the values to suit your needs (code commented with explanatory notes). In this example, we are setting the max-height at 333px for IE and all standards-compliant browsers: * html div#division { height: expression( this.scrollHeight > 332 ? "333px" : "auto" ); /* sets max-height for IE */ } This works perfectly for me, so I decided to share it.
{ "language": "en", "url": "https://stackoverflow.com/questions/7707", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "56" }
Q: Capture MouseDown event for .NET TextBox Is there any way to capture the MouseDown event from the .NET 2.0 TextBox control? I know the inherited Control class has the event, but it's not exposed in TextBox. Is there a way to override the event handler? I also tried the OpenNETCF TextBox2 control which does have the MouseDown event exposed, but no matter what I do, it doesn't fire the handler. Any suggestions? What kind of crazy mobile device do you have that has a mouse? :) Yes, Windows Mobile does not have an actual mouse, but you are mistaken that Windows Mobile .NET does not support mouse events. A click or move on the screen is still considered a "Mouse" event. It was done this way so that code could port over from full Windows easily. And this is not a Windows Mobile specific issue. The TextBox control on Windows does not have native mouse events either. I just happened to be using Windows Mobile in this case. Edit: And on a side note...as Windows Mobile is built on the Windows CE core, which is often used for embedded desktop systems and Slim Terminal Services clients or "WinTerms", it has support for a hardware mouse and has had for a long time. Most devices just don't have the ports to plug one in. According to the .Net Framework, the MouseDown Event Handler on a TextBox is supported. What happens when you try to run the code? Actually, that's only there because it inherits it from "Control", as does every other Form control. It is, however, overridden (and changed to private, I believe) in the TextBox class. So it will not show up in IntelliSense in Visual Studio. However, you actually can write the code: textBox1.MouseDown += new System.Windows.Forms.MouseEventHandler(this.textBox1_MouseDown); and it will compile and run just fine, the only problem is that textBox1_MouseDown() will not be fired when you tap the TextBox control. I assume this is because of the event being overridden internally. I don't even want to change what's happening on the event internally, I just want to add my own event handler to that event so I can fire some custom code as you could with any other event. A: I know this answer is way late, but hopefully it ends up being useful for someone who finds this. Also, I didn't entirely come up with it myself. I believe I originally found most of the info on the OpenNETCF boards, but what is typed below is extracted from one of my applications. You can get a mousedown event by implementing the OpenNETCF.Windows.Forms.IMessageFilter interface and attaching it to your application's message filter. 
static class Program { public static MouseUpDownFilter mudFilter = new MouseUpDownFilter(); public static void Main() { Application2.AddMessageFilter(mudFilter); Application2.Run(new MainForm()); } } This is how you could implement the MouseUpDownFilter: public class MouseUpDownFilter : IMessageFilter { List<Control> ControlList = new List<Control>(); public void WatchControl(Control buttonToWatch) { ControlList.Add(buttonToWatch); } public event MouseEventHandler MouseUp; public event MouseEventHandler MouseDown; public bool PreFilterMessage(ref Microsoft.WindowsCE.Forms.Message m) { const int WM_LBUTTONDOWN = 0x0201; const int WM_LBUTTONUP = 0x0202; // If the message code isn't one of the ones we're interested in // then we can stop here if (m.Msg != WM_LBUTTONDOWN && m.Msg != WM_LBUTTONUP) { return false; } // see if the control is a watched button foreach (Control c in ControlList) { if (m.HWnd == c.Handle) { int i = (int)m.LParam; int x = i & 0xFFFF; int y = (i >> 16) & 0xFFFF; MouseEventArgs args = new MouseEventArgs(MouseButtons.Left, 1, x, y, 0); if (m.Msg == WM_LBUTTONDOWN) MouseDown(c, args); else MouseUp(c, args); // returning true means we've processed this message return true; } } return false; } } Now this MouseUpDownFilter will fire a MouseUp/MouseDown event when one occurs on a watched control, for example your textbox. To use this filter, you add some watched controls and subscribe to the events it might fire in your form's load event: private void MainForm_Load(object sender, EventArgs e) { Program.mudFilter.WatchControl(this.textBox1); Program.mudFilter.MouseDown += new MouseEventHandler(mudFilter_MouseDown); Program.mudFilter.MouseUp += new MouseEventHandler(mudFilter_MouseUp); } void mudFilter_MouseDown(object sender, MouseEventArgs e) { if (sender == textBox1) { // do what you want to do in the textBox1 mouse down event :) } } A: According to the .Net Framework, the MouseDown Event Handler on a TextBox is supported. What happens when you try to run the code? A: Fair enough. You probably know more than I do about Windows Mobile. :) I just started programming for it. But in regular WinForms, you can override the OnXxx event handler methods all you want. A quick look in Reflector with the CF shows that Control, TextBoxBase and TextBox don't prevent you from overriding the OnMouseDown event handler. Have you tried this?: public class MyTextBox : TextBox { public MyTextBox() { } protected override void OnMouseDown(MouseEventArgs e) { //do something specific here base.OnMouseDown(e); } } A: Looks like you're right. Bummer. No MouseOver event. One of the fallbacks that always works with .NET, though, is P/Invoke. Someone already took the time to do this for the .NET CF TextBox. I found this on CodeProject: http://www.codeproject.com/KB/cs/TextBox_subclassing.aspx Hope this helps A: Is there an 'OnEnter' event that you could capture instead? It'd presumably also capture when you tab into the textbox as well as enter the text box by tapping/clicking on it, but if that isn't a problem, then this may be a more straightforward workaround
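For completeness, on the full desktop framework (not the Compact Framework this question is about) a variant of the subclassing idea mentioned above can be done without P/Invoke: a System.Windows.Forms.NativeWindow attached to the TextBox handle can watch for WM_LBUTTONDOWN directly. The sketch below is only an illustration of that idea; the class and event names are mine, it is not the CodeProject code, and it will not work on Windows Mobile, where the message-filter approach shown earlier is the way to go.

using System;
using System.Windows.Forms;

// Desktop WinForms only: listen for WM_LBUTTONDOWN on an existing TextBox
// by subclassing its window handle with a NativeWindow.
class TextBoxMouseListener : NativeWindow
{
    const int WM_LBUTTONDOWN = 0x0201;

    public event EventHandler MouseDownDetected;

    public TextBoxMouseListener(TextBox target)
    {
        // Start receiving the control's window messages.
        AssignHandle(target.Handle);
    }

    protected override void WndProc(ref Message m)
    {
        if (m.Msg == WM_LBUTTONDOWN && MouseDownDetected != null)
            MouseDownDetected(this, EventArgs.Empty);
        base.WndProc(ref m);   // let the TextBox handle the message as usual
    }
}

Usage would be something like creating a TextBoxMouseListener for textBox1 in the form's Load event and subscribing to MouseDownDetected with a delegate that runs your custom code.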
{ "language": "en", "url": "https://stackoverflow.com/questions/7719", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Packaging Java apps for the Windows/Linux desktop I am writing an application in Java for the desktop using the Eclipse SWT library for GUI rendering. I think SWT helps Java get over the biggest hurdle for acceptance on the desktop: namely providing a Java application with a consistent, responsive interface that looks like that belonging to any other app on your desktop. However, I feel that packaging an application is still an issue. OS X natively provides an easy mechanism for wrapping Java apps in native application bundles, but producing an app for Windows/Linux that doesn't require the user to run an ugly batch file or click on a .jar is still a hassle. Possibly that's not such an issue on Linux, where the user is likely to be a little more tech-savvy, but on Windows I'd like to have a regular .exe for him/her to run. Has anyone had any experience with any of the .exe generation tools for Java that are out there? I've tried JSmooth but had various issues with it. Is there a better solution before I crack out Visual Studio and roll my own? Edit: I should perhaps mention that I am unable to spend a lot of money on a commercial solution. A: Maybe you should take a look at IzPack. I created a very nice installer some years ago and I'd bet that they are still improving it. It allows the installation of docs, binaries and a clickable link to start the application IIRC. A: I've used the free Launch4J to create a custom launcher for my Java programs on Windows. Combined with the free NSIS Installer you can build a nice package for your Windows users. Edit: Did not see that you use SWT. Don't know if it works with SWT as well, because I used only Swing in my apps. A: To follow up on pauxu's answer, I'm using launch4j and NSIS on a project of mine and thought it would be helpful to show just how I'm using them. Here's what I'm doing for Windows. BTW, I'm creating .app and .dmg for Mac, but haven't figured out what to do for Linux yet. Project Copies of launch4j and NSIS In my project I have a "vendor" directory and underneath it I have a directory for "launch4j" and "nsis". Within each is a copy of the install for each application. I find it easier to have a copy local to the project rather than forcing others to install both products and set up some kind of environment variable to point to each. Script Files I also have a "scripts" directory in my project that holds various configuration/script files for my project. First there is the launch4j.xml file: <launch4jConfig> <dontWrapJar>true</dontWrapJar> <headerType>gui</headerType> <jar>rpgam.jar</jar> <outfile>rpgam.exe</outfile> <errTitle></errTitle> <cmdLine></cmdLine> <chdir>.</chdir> <priority>normal</priority> <downloadUrl>http://www.rpgaudiomixer.com/</downloadUrl> <supportUrl></supportUrl> <customProcName>false</customProcName> <stayAlive>false</stayAlive> <manifest></manifest> <icon></icon> <jre> <path></path> <minVersion>1.5.0</minVersion> <maxVersion></maxVersion> <jdkPreference>preferJre</jdkPreference> </jre> <splash> <file>..\images\splash.bmp</file> <waitForWindow>true</waitForWindow> <timeout>60</timeout> <timeoutErr>true</timeoutErr> </splash> </launch4jConfig> And then there's the NSIS script rpgam-setup.nsis. It can take a VERSION argument to help name the file. 
; The name of the installer Name "RPG Audio Mixer" !ifndef VERSION !define VERSION A.B.C !endif ; The file to write outfile "..\dist\installers\windows\rpgam-${VERSION}.exe" ; The default installation directory InstallDir "$PROGRAMFILES\RPG Audio Mixer" ; Registry key to check for directory (so if you install again, it will ; overwrite the old one automatically) InstallDirRegKey HKLM "Software\RPG_Audio_Mixer" "Install_Dir" # create a default section. section "RPG Audio Mixer" SectionIn RO ; Set output path to the installation directory. SetOutPath $INSTDIR File /r "..\dist\layout\windows\" ; Write the installation path into the registry WriteRegStr HKLM SOFTWARE\RPG_Audio_Mixer "Install_Dir" "$INSTDIR" ; Write the uninstall keys for Windows WriteRegStr HKLM "Software\Microsoft\Windows\CurrentVersion\Uninstall\RPGAudioMixer" "DisplayName" "RPG Audio Mixer" WriteRegStr HKLM "Software\Microsoft\Windows\CurrentVersion\Uninstall\RPGAudioMixer" "UninstallString" '"$INSTDIR\uninstall.exe"' WriteRegDWORD HKLM "Software\Microsoft\Windows\CurrentVersion\Uninstall\RPGAudioMixer" "NoModify" 1 WriteRegDWORD HKLM "Software\Microsoft\Windows\CurrentVersion\Uninstall\RPGAudioMixer" "NoRepair" 1 WriteUninstaller "uninstall.exe" ; read the value from the registry into the $0 register ;readRegStr $0 HKLM "SOFTWARE\JavaSoft\Java Runtime Environment" CurrentVersion ; print the results in a popup message box ;messageBox MB_OK "version: $0" sectionEnd Section "Start Menu Shortcuts" CreateDirectory "$SMPROGRAMS\RPG Audio Mixer" CreateShortCut "$SMPROGRAMS\RPG Audio Mixer\Uninstall.lnk" "$INSTDIR\uninstall.exe" "" "$INSTDIR\uninstall.exe" 0 CreateShortCut "$SMPROGRAMS\RPG AUdio Mixer\RPG Audio Mixer.lnk" "$INSTDIR\rpgam.exe" "" "$INSTDIR\rpgam.exe" 0 SectionEnd Section "Uninstall" ; Remove registry keys DeleteRegKey HKLM "Software\Microsoft\Windows\CurrentVersion\Uninstall\RPGAudioMixer" DeleteRegKey HKLM SOFTWARE\RPG_Audio_Mixer ; Remove files and uninstaller Delete $INSTDIR\rpgam.exe Delete $INSTDIR\uninstall.exe ; Remove shortcuts, if any Delete "$SMPROGRAMS\RPG Audio Mixer\*.*" ; Remove directories used RMDir "$SMPROGRAMS\RPG Audio Mixer" RMDir "$INSTDIR" SectionEnd Ant Integration I have some targets in my Ant buildfile (build.xml) to handle the above. 
First I tel Ant to import launch4j's Ant tasks: <property name="launch4j.dir" location="vendor/launch4j" /> <taskdef name="launch4j" classname="net.sf.launch4j.ant.Launch4jTask" classpath="${launch4j.dir}/launch4j.jar:${launch4j.dir}/lib/xstream.jar" /> I then have a simple target for creating the wrapper executable: <target name="executable-windows" depends="jar" description="Create Windows executable (EXE)"> <launch4j configFile="scripts/launch4j.xml" outfile="${exeFile}" /> </target> And another target for making the installer: <target name="installer-windows" depends="executable-windows" description="Create the installer for Windows (EXE)"> <!-- Lay out files needed for building the installer --> <mkdir dir="${windowsLayoutDirectory}" /> <copy file="${jarFile}" todir="${windowsLayoutDirectory}" /> <copy todir="${windowsLayoutDirectory}/lib"> <fileset dir="${libraryDirectory}" /> <fileset dir="${windowsLibraryDirectory}" /> </copy> <copy todir="${windowsLayoutDirectory}/icons"> <fileset dir="${iconsDirectory}" /> </copy> <copy todir="${windowsLayoutDirectory}" file="${exeFile}" /> <mkdir dir="${windowsInstallerDirectory}" /> <!-- Build the installer using NSIS --> <exec executable="vendor/nsis/makensis.exe"> <arg value="/DVERSION=${version}" /> <arg value="scripts/rpgam-setup.nsi" /> </exec> </target> The top portion of that just copies the necessary files for the installer to a temporary location and the second half executes the script that uses all of it to make the installer. A: Have you considered writing a small program in C/C++ that just calls CreateProcess to start up the java VM with the jar (or class) file? You could get Visual C++ Express and put together the startup program pretty easily. This would make it easy to add a friendly icon as well. A: Consider converting your application to Eclipse RCP. It is written in SWT, and the Eclipse IDE contains packaging tools that generate executables for all major platforms. For windows, it can generate a zip or a folder containing your code. For a common installation experience, I'd using NSIS. There is actually a packages generator project at eclipse to create common installers for all platforms eclipse supports. A: Have you thought about Java Web Start? Here is a tutorial specifically for deploying an SWT application with Java Web Start. A: In my company we use Launch4J to create the exe file, and NSIS to create the installer, with SWT applications. We have used it for years in several commercial applications and the pair works fine. A: Install4J. Not free, but worth it. Give the trial a shot A: You can now do this through Netbeans! It's really easy and works perfectly. Check out this tutorial on the Netbeans website. A: I went through the same and found that all of the free options weren't very good. Looks like you'll be writing your own. I'd be interested to see if someone has a free/cheap option that works A: Another option I was considering: rather than writing a native launcher from scratch, Eclipse comes with the source code for its own launcher, and this could perhaps be repurposed for my app. It's a shame that Sun never included anything similar in the JDK. A: Another vote for Launch4J, just wrote an ant task this morning to integrate with one of my projects. Seems to work really well A: I have used JSmooth in the past, and still have luck with it. The UI is pretty buggy, but I only use that for building the config file once, and then I build from Ant after that. What issues are you having with JSmooth? 
A: JSMooth has worked very well for us in a production environment, where I first generated a single jar using one-jar (fat jar plugin to eclipse) and then wrapped it with JSmooth. (Please note that I wanted a no-install distribution of a single file, which could prompt for installing the JRE if needed). It has worked so well that I thought nobody was using it :) A: You may want to try our tool, BitRock InstallBuilder. Although it is a native application, a lot of our customers use it to package desktop Java applications. If you bundle the JRE and create a launcher, etc. the user does not even need to know they are installing a Java application. It is cross platform, so you can generate installers for both Windows and Mac (and Linux, Solaris, etc.) Like the install4j tool mentioned in another post, it is a commercial tool, but we have free licenses for open source projects and special discounts for microISVs / small businesses, etc., so just drop us an email. Also wanted to emphasize that this is an installer tool, so it will not address your needs if you are looking only for a single-file executable. A: In my company we use Launch4J and NSIS for the Windows distribution, jdeb for the Debian distribution, and Java Web Start for any other operating system. This works quite well. A: Please try InstallJammer. It is the best one I have ever used. Free and powerful, and sufficient for personal and commercial use. A: Have you considered Advanced Installer? I have used it several times, especially for Windows and Mac. No scripting or Ant required. All GUI. Very simple and understandable. Ain't free but worth every penny. - Launch as Administrator - File Association - Custom Install Themes + In-built Themes - Package with JRE - Install location - Native Splash screen implementation - You can even create services and installation events - Prerequisites - JRE minimum version and maximum version And a lot more. And don't get it twisted, I have no connection with the developers... their app is just awesome.
{ "language": "en", "url": "https://stackoverflow.com/questions/7720", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "40" }
Q: What are the options for delivering Flash video? I'd like a concise introduction to the different options. A: From Wikipedia Embedded in an SWF file using the Flash authoring tool (supported in Flash Player 6 and later). The entire file must be transferred before playback can begin. Changing the video requires rebuilding the SWF file.[citation needed] Progressive download via HTTP (supported in Flash Player 7 and later). This method uses ActionScript to include an externally hosted Flash Video file client-side for playback. Progressive download has several advantages, including buffering, use of generic HTTP servers, and the ability to reuse a single SWF player for multiple Flash Video sources. Flash Player 8 includes support for random access within video files using the partial download functionality of HTTP, sometimes this is referred to as streaming. However, unlike streaming using RTMP, HTTP "streaming" does not support real-time broadcasting. Streaming via HTTP requires a custom player and the injection of specific Flash Video metadata containing the exact starting position in bytes and timecode of each keyframe. Using this specific information, a custom Flash Video player can request any part of the Flash Video file starting at a specified keyframe. For example, Google Video and Youtube support progressive downloading and can seek to any part of the video before buffering is complete. The server-side part of this "HTTP pseudo-streaming" method is fairly simple to implement, for example in PHP, as an Apache HTTPD module, or a lighttpd module. Rich Media Project provides players and Flash components compatible with "HTTP pseudo-streaming" method. Streamed via RTMP to the Flash Player using the Flash Media Server (formerly called Flash Communication Server), VCS, ElectroServer, Wowza Pro or the open source Red5 server. As of April 2008, there are four stream recorders available for this protocol, re-encoding screencast software excluded. There is a useful introduction from Adobe here: Flash video learning guide A: You can stream FLV videos using a simple player like JW FLV Media Player. It supports several streaming methods, playlists etc. It's actively developed, and I have found it to be the best solution for streaming flash video. A: Further to yoavf's answer, you can also use haxevideo as an open source rtmp video streaming server.
{ "language": "en", "url": "https://stackoverflow.com/questions/7726", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How to display unicode text in OpenGL? Is there a good way for displaying unicode text in opengl under Windows? For example, when you have to deal with different languages. The most common approach like #define FONTLISTRANGE 128 GLuint list; list = glGenLists(FONTLISTRANGE); wglUseFontBitmapsW(hDC, 0, FONTLISTRANGE, list); just won't do because you can't create enough lists for all unicode characters. A: I recommend reading this OpenGL font tutorial. It's for the D programming language but it's a nice introduction to various issues involved in implementing a glyph caching system for rendering text with OpenGL. The tutorial covers Unicode compliance, antialiasing, and kerning techniques. D is pretty comprehensible to anyone who knows C++ and most of the article is about the general techniques, not the implementation language. A: Id recommend FTGL as already recommended above, however I have implemented a freetype/OpenGL renderer myself and thought you might find the code handy if you want reinvent this wheel yourself. I'd really recommend FTGL though, its a lot less hassle to use. :) * glTextRender class by Semi Essessi * * FreeType2 empowered text renderer * */ #include "glTextRender.h" #include "jEngine.h" #include "glSystem.h" #include "jMath.h" #include "jProfiler.h" #include "log.h" #include <windows.h> FT_Library glTextRender::ftLib = 0; //TODO::maybe fix this so it use wchar_t for the filename glTextRender::glTextRender(jEngine* j, const char* fontName, int size = 12) { #ifdef _DEBUG jProfiler profiler = jProfiler(L"glTextRender::glTextRender"); #endif char fontName2[1024]; memset(fontName2,0,sizeof(char)*1024); sprintf(fontName2,"fonts\\%s",fontName); if(!ftLib) { #ifdef _DEBUG wchar_t fn[128]; mbstowcs(fn,fontName,strlen(fontName)+1); LogWriteLine(L"\x25CB\x25CB\x25CF Font: %s was requested before FreeType was initialised", fn); #endif return; } // constructor code for glTextRender e=j; gl = j->gl; red=green=blue=alpha=1.0f; face = 0; // remember that for some weird reason below font size 7 everything gets scrambled up height = max(6,(int)floorf((float)size*((float)gl->getHeight())*0.001666667f)); aHeight = ((float)height)/((float)gl->getHeight()); setPosition(0.0f,0.0f); // look in base fonts dir if(FT_New_Face(ftLib, fontName2, 0, &face )) { // if we dont have it look in windows fonts dir char buf[1024]; GetWindowsDirectoryA(buf,1024); strcat(buf, "\\fonts\\"); strcat(buf, fontName); if(FT_New_Face(ftLib, buf, 0, &face )) { //TODO::check in mod fonts directory #ifdef _DEBUG wchar_t fn[128]; mbstowcs(fn,fontName,strlen(fontName)+1); LogWriteLine(L"\x25CB\x25CB\x25CF Request for font: %s has failed", fn); #endif face = 0; return; } } // FreeType uses 64x size and 72dpi for default // doubling size for ms FT_Set_Char_Size(face, mulPow2(height,7), mulPow2(height,7), 96, 96); // set up cache table and then generate the first 256 chars and the console prompt character for(int i=0;i<65536;i++) { cached[i]=false; width[i]=0.0f; } for(unsigned short i = 0; i < 256; i++) getChar((wchar_t)i); getChar(CHAR_PROMPT); #ifdef _DEBUG wchar_t fn[128]; mbstowcs(fn,fontName,strlen(fontName)+1); LogWriteLine(L"\x25CB\x25CB\x25CF Font: %s loaded OK", fn); #endif } glTextRender::~glTextRender() { // destructor code for glTextRender for(int i=0;i<65536;i++) { if(cached[i]) { glDeleteLists(listID[i],1); glDeleteTextures(1,&(texID[i])); } } // TODO:: work out stupid freetype crashz0rs try { static int foo = 0; if(face && foo < 1) { foo++; FT_Done_Face(face); face = 0; } } catch(...) 
{ face = 0; } } // return true if init works, or if already initialised bool glTextRender::initFreeType() { if(!ftLib) { if(!FT_Init_FreeType(&ftLib)) return true; else return false; } else return true; } void glTextRender::shutdownFreeType() { if(ftLib) { FT_Done_FreeType(ftLib); ftLib = 0; } } void glTextRender::print(const wchar_t* str) { // store old stuff to set start position glPushAttrib(GL_TRANSFORM_BIT); // get viewport size GLint viewport[4]; glGetIntegerv(GL_VIEWPORT, viewport); glMatrixMode(GL_PROJECTION); glPushMatrix(); glLoadIdentity(); gluOrtho2D(viewport[0],viewport[2],viewport[1],viewport[3]); glPopAttrib(); float color[4]; glGetFloatv(GL_CURRENT_COLOR, color); glPushAttrib(GL_LIST_BIT | GL_CURRENT_BIT | GL_ENABLE_BIT | GL_TRANSFORM_BIT); glMatrixMode(GL_MODELVIEW); glPushMatrix(); glLoadIdentity(); glEnable(GL_TEXTURE_2D); //glDisable(GL_DEPTH_TEST); // set blending for AA glEnable(GL_BLEND); glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); glTranslatef(xPos,yPos,0.0f); glColor4f(red,green,blue,alpha); // call display lists to render text glListBase(0u); for(unsigned int i=0;i<wcslen(str);i++) glCallList(getChar(str[i])); // restore old states glMatrixMode(GL_MODELVIEW); glPopMatrix(); glPopAttrib(); glColor4fv(color); glPushAttrib(GL_TRANSFORM_BIT); glMatrixMode(GL_PROJECTION); glPopMatrix(); glPopAttrib(); } void glTextRender::printf(const wchar_t* str, ...) { if(!str) return; wchar_t* buf = 0; va_list parg; va_start(parg, str); // allocate buffer int len = (_vscwprintf(str, parg)+1); buf = new wchar_t[len]; if(!buf) return; vswprintf(buf, str, parg); va_end(parg); print(buf); delete[] buf; } GLuint glTextRender::getChar(const wchar_t c) { int i = (int)c; if(cached[i]) return listID[i]; // load glyph and get bitmap if(FT_Load_Glyph(face, FT_Get_Char_Index(face, i), FT_LOAD_DEFAULT )) return 0; FT_Glyph glyph; if(FT_Get_Glyph(face->glyph, &glyph)) return 0; FT_Glyph_To_Bitmap(&glyph, FT_RENDER_MODE_NORMAL, 0, 1); FT_BitmapGlyph bitmapGlyph = (FT_BitmapGlyph)glyph; FT_Bitmap& bitmap = bitmapGlyph->bitmap; int w = roundPow2(bitmap.width); int h = roundPow2(bitmap.rows); // convert to texture in memory GLubyte* texture = new GLubyte[2*w*h]; for(int j=0;j<h;j++) { bool cond = j>=bitmap.rows; for(int k=0;k<w;k++) { texture[2*(k+j*w)] = 0xFFu; texture[2*(k+j*w)+1] = ((k>=bitmap.width)||cond) ? 
0x0u : bitmap.buffer[k+bitmap.width*j]; } } // store char width and adjust max height // note .5f float ih = 1.0f/((float)gl->getHeight()); width[i] = ((float)divPow2(face->glyph->advance.x, 7))*ih; aHeight = max(aHeight,(.5f*(float)bitmap.rows)*ih); glPushAttrib(GL_LIST_BIT | GL_CURRENT_BIT | GL_ENABLE_BIT | GL_TRANSFORM_BIT); // create gl texture glGenTextures(1, &(texID[i])); glEnable(GL_TEXTURE_2D); glBindTexture(GL_TEXTURE_2D, texID[i]); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP); glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0, GL_LUMINANCE_ALPHA, GL_UNSIGNED_BYTE, texture); glPopAttrib(); delete[] texture; // create display list listID[i] = glGenLists(1); glNewList(listID[i], GL_COMPILE); glBindTexture(GL_TEXTURE_2D, texID[i]); glMatrixMode(GL_MODELVIEW); glPushMatrix(); // adjust position to account for texture padding glTranslatef(.5f*(float)bitmapGlyph->left, 0.0f, 0.0f); glTranslatef(0.0f, .5f*(float)(bitmapGlyph->top-bitmap.rows), 0.0f); // work out texcoords float tx=((float)bitmap.width)/((float)w); float ty=((float)bitmap.rows)/((float)h); // render // note .5f glBegin(GL_QUADS); glTexCoord2f(0.0f, 0.0f); glVertex2f(0.0f, .5f*(float)bitmap.rows); glTexCoord2f(0.0f, ty); glVertex2f(0.0f, 0.0f); glTexCoord2f(tx, ty); glVertex2f(.5f*(float)bitmap.width, 0.0f); glTexCoord2f(tx, 0.0f); glVertex2f(.5f*(float)bitmap.width, .5f*(float)bitmap.rows); glEnd(); glPopMatrix(); // move position for the next character // note extra div 2 glTranslatef((float)divPow2(face->glyph->advance.x, 7), 0.0f, 0.0f); glEndList(); // char is succesfully cached for next time cached[i] = true; return listID[i]; } void glTextRender::setPosition(float x, float y) { float fac = ((float)gl->getHeight()); xPos = fac*x+FONT_BORDER_PIXELS; yPos = fac*(1-y)-(float)height-FONT_BORDER_PIXELS; } float glTextRender::getAdjustedWidth(const wchar_t* str) { float w = 0.0f; for(unsigned int i=0;i<wcslen(str);i++) { if(cached[str[i]]) w+=width[str[i]]; else { getChar(str[i]); w+=width[str[i]]; } } return w; } A: You may have to generate you own "glyph cache" in texture memory as you go, potentially with some sort of LRU policy to avoid destroying all of the texture memory. Not nearly as easy as your current method, but may be the only way given the number of unicode chars A: You should consider using an Unicode rendering library (eg. Pango) to render the stuff into a bitmap and put that bitmap on the screen or into a texture. Rendering unicode text is not simple. So you cannot simply load 64K rectangular glyphs and use it. Characters may overlap. Eg in this smiley: ( ͡° ͜ʖ ͡°) Some code points stack accents on the previous character. 
Consider this excerpt from this notable post:

...he com̡e̶s, ̕h̵i​s un̨ho͞ly radiańcé destro҉ying all enli̍̈́̂̈́ghtenment, HTML tags lea͠ki̧n͘g fr̶ǫm ̡yo​͟ur eye͢s̸ ̛l̕ik͏e liq​uid pain, the song of re̸gular exp​ression parsing will exti​nguish the voices of mor​tal man from the sp​here I can see it can you see ̲͚̖͔̙î̩́t̲͎̩̱͔́̋̀ it is beautiful t​he final snuffing of the lie​s of Man ALL IS LOŚ͖̩͇̗̪̏̈́T ALL I​S LOST the pon̷y he comes he c̶̮omes he comes the ich​or permeates all MY FACE MY FACE ᵒh god no NO NOO̼O​O NΘ stop the an​*̶͑̾̾​̅ͫ͏̙̤g͇̫͛͆̾ͫ̑͆l͖͉̗̩̳̟̍ͫͥͨe̠̅s ͎a̧͈͖r̽̾̈́͒͑e n​ot rè̑ͧ̌aͨl̘̝̙̃ͤ͂̾̆ ZA̡͊͠͝LGΌ ISͮ̂҉̯͈͕̹̘̱ TO͇̹̺ͅƝ̴ȳ̳ TH̘Ë͖́̉ ͠P̯͍̭O̚​N̐Y̡ H̸̡̪̯ͨ͊̽̅̾̎Ȩ̬̩̾͛ͪ̈́̀́͘ ̶̧̨̱̹̭̯ͧ̾ͬC̷̙̲̝͖ͭ̏ͥͮ͟Oͮ͏̮̪̝͍M̲̖͊̒ͪͩͬ̚̚͜Ȇ̴̟̟͙̞ͩ͌͝S̨̥̫͎̭ͯ̿̔̀ͅ

If you truly want to render Unicode correctly you should be able to render this one correctly too.

UPDATE: Looked at this Pango engine, and it's the case of the banana, the gorilla, and the entire jungle. First, it depends on Glib because it uses GObjects; second, it cannot render directly into a byte buffer. It has Cairo and FreeType backends, so you must use one of them to render the text and export it into bitmaps eventually. That doesn't look good so far.

In addition to that, if you want to store the result in a texture, use pango_layout_get_pixel_extents after setting the text to get the sizes of the rectangles to render the text to. The ink rectangle is the rectangle that contains the entire text; its left-top position is the position relative to the left-top of the logical rectangle. (The bottom line of the logical rectangle is the baseline.) Hope this helps.

A: You should also check out the FTGL library. FTGL is a free cross-platform Open Source C++ library that uses FreeType2 to simplify rendering fonts in OpenGL applications. FTGL supports bitmaps, pixmaps, texture maps, outlines, polygon mesh, and extruded polygon rendering modes. This project was dormant for a while, but is recently back under development. I haven't updated my project to use the latest version, but you should check it out.
It allows for using any TrueType font via the FreeType font library.

A: Queso GLC is great for this, I've used it to render Chinese and Cyrillic characters in 3D.
http://quesoglc.sourceforge.net/
The Unicode text sample it comes with should get you started.

A: You could also group the characters by language. Load each language table as needed, and when you need to switch languages, unload the previous language table and load the new one.

A: Unicode is supported in the title bar. I have just tried this on a Mac, and it ought to work elsewhere too. If you have (say) some imported data including text labels, and some of the labels just might contain unicode, you could add a tool that echoes the label in the title bar. It's not a great solution, but it is very easy to do.
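Following up on the glyph-cache suggestion a little further up: here is a rough sketch of an on-demand LRU glyph cache keyed by code point. It is not taken from any of the libraries mentioned above; renderGlyphToTexture() is a placeholder for your own FreeType rasterization and texture upload, and modern C++ containers are used for brevity.

#include <cstdint>
#include <list>
#include <unordered_map>
#include <utility>

struct CachedGlyph {
    unsigned int textureId; // GL texture holding the rasterized glyph
    float advance;          // horizontal advance in pixels
};

class GlyphCache {
public:
    explicit GlyphCache(std::size_t capacity) : capacity_(capacity) {}

    // Look the glyph up, rasterizing it on first use and evicting the
    // least recently used entry when the cache is full.
    const CachedGlyph& get(char32_t codePoint) {
        auto it = map_.find(codePoint);
        if (it != map_.end()) {
            // Move the code point to the front of the LRU list.
            lru_.splice(lru_.begin(), lru_, it->second.second);
            return it->second.first;
        }
        if (map_.size() >= capacity_) {
            // Evict the least recently used glyph (real code would also
            // call glDeleteTextures on its texture here).
            map_.erase(lru_.back());
            lru_.pop_back();
        }
        CachedGlyph glyph = renderGlyphToTexture(codePoint);
        lru_.push_front(codePoint);
        auto inserted = map_.emplace(codePoint, std::make_pair(glyph, lru_.begin()));
        return inserted.first->second.first;
    }

private:
    // Placeholder: real code would rasterize the glyph with FreeType and
    // upload the bitmap with glTexImage2D, much like the renderer posted above.
    CachedGlyph renderGlyphToTexture(char32_t) { return CachedGlyph{0u, 0.0f}; }

    std::size_t capacity_;
    std::list<char32_t> lru_; // front = most recently used
    std::unordered_map<char32_t,
                       std::pair<CachedGlyph, std::list<char32_t>::iterator>> map_;
};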
{ "language": "en", "url": "https://stackoverflow.com/questions/7737", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: What is the value-binding syntax in xaml? I'm getting all learned up about binding in WPF. I'm having a lot of trouble debugging the parse errors in my xaml, though. Can somebody pretty please tell me what's wrong with this little piece?

<Border Name="TrackBackground"
        Margin="0"
        CornerRadius="2"
        Grid.Row="1"
        Grid.Column="1"
        Background="BlanchedAlmond"
        BorderThickness="1"
        Height="{TemplateBinding Height}">
    <Canvas Name="PART_Track" Background="DarkSalmon" Grid.Row="1" Grid.Column="1">
        <Thumb Name="ThumbKnob" Height="{Binding ElementName=Part_Track, Path=Height, Mode=OneWay}" />
    </Canvas>
</Border>

It's the databinding that breaks. I get an InvalidAttributeValue exception for ThumbKnob.Height when I try to run this. I know I must be missing something fundamental. So fill me in, stackers, and my gratitude will be boundless.
Changing the ElementName didn't help. There must be something else I'm not getting.
I should mention that I'm testing this in Silverlight. The exact message I'm getting out of Internet Explorer is:
XamlParseException: Invalid attribute value for property Height.
This whole thing is inside a ControlTemplate. I'm making a slider control just to teach myself the concepts.

A: The ElementName property on a Binding is not supported in Silverlight. You will notice, if you go into the code behind or inspect the Binding object in class explorer, it doesn't have a property named ElementName.

A: What I usually do to debug databindings is add a converter where I can set a breakpoint in VS.NET, so the Binding would be something like this:
{Binding ElementName=PART_Track, Path=Height, Mode=OneWay, Converter={StaticResource DebugConverter}}
Then the converter can be an empty implementation of an IValueConverter; set a breakpoint in the Convert method and see what the Height is being set to...
Don't forget to add your converter to your Resources...
Perhaps the value is NaN?

A: Hmm, there might be a substantial difference between WPF and Silverlight on this point. I seem to have no trouble whatsoever compiling and running this sample in a WPF window:

<Slider Width="400" Height="20">
    <Slider.Template>
        <ControlTemplate>
            <Border Name="TrackBackground"
                    Margin="0"
                    CornerRadius="2"
                    Grid.Row="1"
                    Grid.Column="1"
                    Background="BlanchedAlmond"
                    BorderThickness="1">
                <Canvas x:Name="PART_Track" Background="DarkSalmon" Grid.Row="1" Grid.Column="1">
                    <Thumb Name="ThumbKnob" Height="{Binding ElementName=PART_Track, Path=Height, Mode=OneWay}" />
                </Canvas>
            </Border>
        </ControlTemplate>
    </Slider.Template>
</Slider>

Perhaps Silverlight has fewer properties in the Thumb class...
http://msdn.microsoft.com/en-us/library/system.windows.controls.primitives.thumb.aspx

A: Ok, here's the deal: in Silverlight, you can't bind values from one UI element to another declaratively. The only way to do what I was trying to do here would be in the C# code. I had a reference for this yesterday, but now I guess you'll just have to take my word for it :)

A: First of all, it's a matter of casing... Change Part_Track to PART_Track, which will fix your binding error. But I do not think that this is what you are trying to do. You could use a Grid instead of a canvas, and the Thumb will size automatically. Canvas does not really have a height, for it does not really care about the height of its children... Hope this helps...

A: Is the Border in a Template btw? Because there is no need for TemplateBinding if the border is not located in a Template (either ControlTemplate or DataTemplate)

A: Silverlight 3 now includes ElementName binding...
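For completeness, a rough sketch of the code-behind workaround mentioned above (assuming you are writing a templated, Slider-like custom control; the part names match the template in the question, but the rest is illustrative):

public override void OnApplyTemplate()
{
    base.OnApplyTemplate();
    Canvas track = GetTemplateChild("PART_Track") as Canvas;
    Thumb thumb = GetTemplateChild("ThumbKnob") as Thumb;
    if (track != null && thumb != null)
    {
        // Keep the thumb's height in sync with the track whenever layout changes,
        // since an ElementName binding is not available here.
        track.SizeChanged += (sender, e) => thumb.Height = track.ActualHeight;
    }
}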
{ "language": "en", "url": "https://stackoverflow.com/questions/7758", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: Change visibility of ASP.NET label with JavaScript I have an ASP.NET page with an asp:button that is not visible. I can't turn it visible with JavaScript because it is not rendered to the page. What can I do to resolve this?

A: If you need to manipulate it on the client side, you can't use the Visible property on the server side. Instead, set its CSS display style to "none". For example:
<asp:Label runat="server" id="Label1" style="display: none;" />
Then, you could make it visible on the client side with:
document.getElementById('Label1').style.display = 'inherit';
You could make it hidden again with:
document.getElementById('Label1').style.display = 'none';
Keep in mind that there may be issues with the ClientID being more complex than "Label1" in practice. You'll need to use the ClientID with getElementById, not the server side ID, if they differ.

A: Continuing with what Dave Ward said:
* *You can't set the Visible property to false because the control will not be rendered.
*You should use the Style property to set its display to none.

Page/Control design
<asp:Label runat="server" ID="Label1" Style="display: none;" />
<asp:Button runat="server" ID="Button1" />
Code behind
Somewhere in the load section:
Label label1 = (Label)FindControl("Label1");
((Button)FindControl("Button1")).OnClientClick = "ToggleVisibility('" + label1.ClientID + "')";
Javascript file
function ToggleVisibility(elementID)
{
    var element = document.getElementById(elementID);
    if (element.style.display == 'none')
    {
        element.style.display = 'inherit';
    }
    else
    {
        element.style.display = 'none';
    }
}
Of course, if you don't want to toggle but just to show the button/label then adjust the javascript method accordingly. The important point here is that you need to send the information about the ClientID of the control that you want to manipulate on the client side to the javascript file, either by setting global variables or through a function parameter as in my example.

A: You need to be wary of XSS when doing stuff like this:
document.getElementById('<%= Label1.ClientID %>').style.display
The chances are that no-one will be able to tamper with the ClientID of Label1 in this instance, but just to be on the safe side you might want to pass its value through one of the AntiXss library's methods:
document.getElementById('<%= AntiXss.JavaScriptEncode(Label1.ClientID) %>').style.display

A: Try this.
<asp:Button id="myButton" runat="server" style="display:none" Text="Click Me" />
<script type="text/javascript">
    function ShowButton()
    {
        var buttonID = '<%= myButton.ClientID %>';
        var button = document.getElementById(buttonID);
        if(button)
        {
            button.style.display = 'inherit';
        }
    }
</script>
Don't use server-side code to do this because that would require a postback. Instead of using Visibility="false", you can just set a CSS property that hides the button. Then, in javascript, switch that property back whenever you want to show the button again.
The ClientID is used because it can be different from the server ID if the button is inside a Naming Container control. These include Panels of various sorts.

A: This is the easiest way I found:
BtnUpload.Style.Add("display", "none");
FileUploader.Style.Add("display", "none");
BtnAccept.Style.Add("display", "inherit");
BtnClear.Style.Add("display", "inherit");
I have the opposite in the Else, so it handles displaying them as well. This can go in the Page's Load or in a method to refresh the controls on the page.

A: If you wait until the page is loaded, and then set the button's display to none, that should work. Then you can make it visible at a later point.

A: Make sure the Visible property is set to true or the control won't render to the page. Then you can use script to manipulate it.
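A minimal sketch of the "hide it once the page has loaded" idea from the answer above (the element ID is illustrative; in a real page you would emit the control's ClientID as the earlier answers show):

window.onload = function () {
    var button = document.getElementById('myButton'); // hypothetical ID
    if (button) {
        button.style.display = 'none'; // hide it after the page has rendered
    }
};

function showButton() {
    var button = document.getElementById('myButton');
    if (button) {
        button.style.display = ''; // revert to the default display value
    }
}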
{ "language": "en", "url": "https://stackoverflow.com/questions/7773", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "34" }
Q: I would like a recommendation for a book on Eclipse's Rich Client Platform (RCP) I have read through several reviews on Amazon and some books seem outdated. I am currently using MyEclipse 6.5 which is using Eclipse 3.3. I'm interested in hearing from people that have experience learning RCP and what reference material they used to get started.

A: I've been doing Eclipse RCP development for almost 2 years now. When I first started, I wanted a book for help, and many people told me that with Eclipse you're better off using the Eclipsepedia and Google. However, I started with "The Java Developer's Guide to Eclipse" by D'Anjou et al, and I still reference it when I need a better starting point or a good reference. It's probably a little outdated now, but is very thorough and really explains how the Eclipse framework works. Like just about anything, RCP isn't too hard to pick up if you've figured out how the framework supporting it works, and you'll get a lot more mileage out of your code.

A: I agree with Thomas Owens on "Eclipse Rich Client Platform: Designing, Coding, and Packaging Java(TM) Applications" and would also add "Eclipse: Building Commercial-Quality Plug-ins" to the list of rather outdated but still somewhat useful books on Eclipse RCP. Even though the latter does not go much into the Rich Client Platform, it explains quite a lot about the Eclipse plug-in architecture that is useful knowledge for developers of RCP applications. There have been a lot of improvements in the Eclipse RCP platform since the release of both of these books, so I really hope that there are new versions of these books coming out soon.

A: Although I don't have personal experience, a few friends of mine did Eclipse RCP development, and they used the book "Eclipse Rich Client Platform: Designing, Coding, and Packaging Java(TM) Applications". They seemed to like it a lot, and I looked at it myself, and it seemed useful. If I was going to do RCP development on Eclipse, I would probably get this book. To clarify - this book is geared toward Eclipse 3.1, and since I haven't done any RCP development of my own, I'm not sure how much things have changed.

A: Recently published: the third edition of "Eclipse Plug-ins" by Eric Clayberg and Dan Rubel, and "Practical Eclipse Rich Client Platform Projects" by Vladimir Silva.

A: I find a lot of the books to be lacking in any sort of depth. At least not enough to justify their price. You can find plenty of tutorials online that cover what the books do and plenty more. They're usually less outdated than the books too.
I really like Lars Vogel's tutorials: http://www.vogella.com/eclipse.html
They're short and easy to understand, with enough pictures and material to get you going. After you have a basic understanding, then google will suffice for the details of things.

A: I read the book suggested by Thomas and it's really worth reading, although not very up-to-date.
{ "language": "en", "url": "https://stackoverflow.com/questions/7779", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: How can I split an XML document into thirds (or, even better, n pieces)? I would like to use a language that I am familiar with - Java, C#, Ruby, PHP, C/C++, although examples in any language or pseudocode are more than welcome.
What is the best way of splitting a large XML document into smaller sections that are still valid XML? For my purposes, I need to split them into roughly thirds or fourths, but for the sake of providing examples, splitting them into n components would be good.

A: Parsing XML documents using DOM doesn't scale.
This Groovy script is using StAX (Streaming API for XML) to split an XML document between the top-level elements (that share the same QName as the first child of the root document). It's pretty fast, handles arbitrarily large documents and is very useful when you want to split a large batch file into smaller pieces.
Requires Groovy on Java 6 or a StAX API and implementation such as Woodstox in the CLASSPATH

import javax.xml.stream.*

pieces = 5
input = "input.xml"
output = "output_%04d.xml"
eventFactory = XMLEventFactory.newInstance()
fileNumber = elementCount = 0

def createEventReader() {
    reader = XMLInputFactory.newInstance().createXMLEventReader(new FileInputStream(input))
    start = reader.next()
    root = reader.nextTag()
    firstChild = reader.nextTag()
    return reader
}

def createNextEventWriter () {
    println "Writing to '${filename = String.format(output, ++fileNumber)}'"
    writer = XMLOutputFactory.newInstance().createXMLEventWriter(new FileOutputStream(filename), start.characterEncodingScheme)
    writer.add(start)
    writer.add(root)
    return writer
}

elements = createEventReader().findAll { it.startElement && it.name == firstChild.name }.size()
println "Splitting ${elements} <${firstChild.name.localPart}> elements into ${pieces} pieces"
chunkSize = elements / pieces
writer = createNextEventWriter()
writer.add(firstChild)
createEventReader().each {
    if (it.startElement && it.name == firstChild.name) {
        if (++elementCount > chunkSize) {
            writer.add(eventFactory.createEndDocument())
            writer.flush()
            writer = createNextEventWriter()
            elementCount = 0
        }
    }
    writer.add(it)
}
writer.flush()

A: Well of course you can always extract the top-level elements (whether this is the granularity you want is up to you). In C#, you'd use the XmlDocument class. For example, if your XML file looked something like this:

<Document>
  <Piece>
     Some text
  </Piece>
  <Piece>
     Some other text
  </Piece>
</Document>

then you'd use code like this to extract all of the Pieces:

XmlDocument doc = new XmlDocument();
doc.Load("<path to xml file>");
XmlNodeList nl = doc.GetElementsByTagName("Piece");
foreach (XmlNode n in nl)
{
    // Do something with each Piece node
}

Once you've got the nodes, you can do something with them in your code, or you can transfer the entire text of the node to its own XML document and act on that as if it were an independent piece of XML (including saving it back to disk, etc).

A: As DannySmurf touches on here, it is all about the structure of the xml document.
If you only have two huge "top level" tags, it will be extremely hard to be able to split it in a way that makes it possible to both merge it back together and read it piece by piece as valid xml.
Given a document with a lot of separate pieces like the ones in DannySmurf's example, it should be fairly easy.
Some rough code in pseudo C#:

int nrOfPieces = 5;
XmlDocument xmlOriginal = some input parameter..

// construct the list we need, and fill it with XmlDocuments..
var xmlList = new List<XmlDocument>();
for (int i = 0; i < nrOfPieces ; i++)
{
    var xmlDoc = new XmlDocument();
    xmlDoc.ChildNodes.Add(new XmlNode(xmlOriginal.FirstNode.Name));
    xmlList.Add(xmlDoc);
}

var nodeList = xmlOriginal.GetElementsByTagName("Piece");
// Copy the nodes from the original into the pieces..
for (int i = 0; i < nodeList.Count; i++)
{
    var xmlDoc = xmlList[i % nrOfPieces];
    var nodeToCopy = nodeList[i].Clone();
    xmlDoc.FirstNode.ChildNodes.Add(nodeToCopy);
}

This should give you n docs with correct xml and the possibility to merge them back together.
But again, it depends on the xml file.

A: This is more of a comment than an answer, but wouldn't:

XmlDocument doc = new XmlDocument();
doc.Load("path");

Read the entire file at once? Just thought I should raise the point since from the look of Thomas' question, he is concerned about reading large files and wants to break the process down..

A: It would read the entire file at once. In my experience, though, if you're just reading the file, doing some processing (i.e., breaking it up) and then continuing on with your work, the XmlDocument is going to go through its create/read/collect cycle so quickly that it likely won't matter.
Of course, that depends on what a "large" file is. If it's a 30 MB XML file (which I would consider large for an XML file), it probably won't make any difference. If it's a 500 MB XML file, using XmlDocument will become extremely problematic on systems without a significant amount of RAM (in that case, however, I'd argue that the time to manually pick through the file with a XmlReader would be the more significant impediment).

A: It looks like you're working with C# and .NET 3.5. I have come across some posts that suggest using a yield type of algorithm on a file stream with an XmlReader.
Here's a couple blog posts to get you started down the path:
* *Streaming With Linq to SQL Part 1
*Streaming With Linq To SQL Part 2

A: Not sure what type of processing you're doing, but for very large XML, I've always been a fan of event-based processing. Maybe it's my Java background, but I really do like SAX. You need to do your own state management, but once you get past that, it's a very efficient method of parsing XML.
http://saxdotnet.sourceforge.net/

A: I'm going to go with youphoric on this one. For very large files SAX (or any other streaming parser) will be a great help in processing. Using DOM you can collect just top level nodes, but you still have to parse the entire document to do it...using a streaming parser and event-based processing lets you "skip" the nodes you aren't interested in; makes the processing faster.

A: If you are not completely allergic to Perl, then XML::Twig comes with a tool named xml_split that can split a document, producing well-formed XML sections. You can split on a level of the tree, by size or on an XPath expression.

A: I did a YouTube video showing how to split XML files with foxe (the free XML editor from Firstobject) using only a small amount of memory regardless of the size of the input and output files.
The memory usage for this CMarkup XML reader (pull parser) and XML writer solution depends on the size of the subdocuments that are individually transferred from the input file to the output files, or the minimum block size of 16 KB.
split()
{
    CMarkup xmlInput, xmlOutput;
    xmlInput.Open( "50MB.xml", MDF_READFILE );
    int nObjectCount = 0, nFileCount = 0;
    while ( xmlInput.FindElem("//ACT") )
    {
        if ( nObjectCount == 0 )
        {
            ++nFileCount;
            xmlOutput.Open( "piece" + nFileCount + ".xml", MDF_WRITEFILE );
            xmlOutput.AddElem( "root" );
            xmlOutput.IntoElem();
        }
        xmlOutput.AddSubDoc( xmlInput.GetSubDoc() );
        ++nObjectCount;
        if ( nObjectCount == 5 )
        {
            xmlOutput.Close();
            nObjectCount = 0;
        }
    }
    if ( nObjectCount )
        xmlOutput.Close();
    xmlInput.Close();
    return nFileCount;
}
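For reference, the streaming idea mentioned a few answers above (an XmlReader over the file) can also be written without loading the document at all. This is only a rough sketch - the chunk size, the output file names and the assumption that the top-level children are the pieces you want are all illustrative, and the method would live inside some class:

using System.Xml;

static void SplitStreaming(string inputPath, int chunkSize)
{
    using (XmlReader reader = XmlReader.Create(inputPath))
    {
        reader.MoveToContent();              // position on the root element
        string rootName = reader.Name;
        reader.Read();                       // step inside the root

        XmlWriter writer = null;
        int fileIndex = 0, count = 0;

        while (!reader.EOF)
        {
            if (reader.NodeType == XmlNodeType.Element && reader.Depth == 1)
            {
                if (writer == null)
                {
                    writer = XmlWriter.Create("piece" + (++fileIndex) + ".xml");
                    writer.WriteStartElement(rootName);
                }
                writer.WriteNode(reader, false); // copies the element and advances the reader
                if (++count % chunkSize == 0)
                {
                    writer.WriteEndElement();
                    writer.Close();
                    writer = null;
                }
            }
            else
            {
                reader.Read();               // skip whitespace, comments, etc.
            }
        }

        if (writer != null)
        {
            writer.WriteEndElement();
            writer.Close();
        }
    }
}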
{ "language": "en", "url": "https://stackoverflow.com/questions/7846", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: What is PHP Safe Mode GID? According to the PHP Safe Mode Docs on safe_mode_gid:
By default, Safe Mode does a UID compare check when opening files. If you want to relax this to a GID compare, then turn on safe_mode_gid. Whether to use UID (FALSE) or GID (TRUE) checking upon file access.
To a PHP script running on a web server, what is the difference between a UID (User Identifier) check and a GID (Group identifier) check?

A: This will just allow you a little more flexibility in your web server setup. Using safe_mode_gid, you should be able to run individual PHP/httpd processes for each user on your system, but give each process the ability to read some shared files as long as they are all run as the same group (GID) and the shared files are owned by this group.

A: For example, say the webserver username and group is apache:www. If you set safe mode to check the UID, the PHP process will only have access to files owned by the user apache. So if you have another webserver running on your system, for example Tomcat with the username and group of tomcat:www, any files created by the Tomcat process won't be accessible by PHP because they are owned by the tomcat user. But if you instead use the GID check, the files created by the Tomcat process will be readable by the PHP process because they belong to the same www group.

A: The Safe Mode UID check is used to prevent users from accessing other people's files. The GID check is used to prevent a GROUP of users from accessing other GROUPS' files.

A: Most answers are not exactly true or detailed... Don't forget safe_mode checks if the owner of the SCRIPT matches the owner of the file you want to access. It has nothing to do with the httpd user:group.
For example, your httpd could run as apache:daemon, your script be owned by some_user:users and the file you want to write to by some_other_user:users. If you don't activate safe_mode_gid, the script won't be able to access the file because the users don't match.
This is a common phenomenon when a script creates a folder and then tries to create files inside this folder. The folder creation succeeds because the parent folder is owned by the same user as the script creating it (most likely, it was uploaded by "some_user"). BUT, the created folder is now owned by the httpd user, let's say apache:daemon.
If safe_mode is active, you won't be able to create a file inside this folder because the script owner (some_user) doesn't match the folder owner (apache). Even if you activate safe_mode_gid, it won't work because the script group is "users" while the folder group is "daemon".
The best solution is to set the same group for ftp users and httpd. Don't forget you have to allow write access to the group on the "writeable" folder too, and this is less secure because, since all your users are in the same group, an httpd process could access the other users' files once you activate safe_mode_gid. You should in fact combine safe_mode_gid + open_basedir and set the home of the user as the open_basedir value to avoid this.
HTH
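For reference, these are plain php.ini directives. A minimal sketch of the combination suggested in the last answer (the path is illustrative; note that safe mode was deprecated in PHP 5.3 and removed in PHP 5.4, so this only applies to older installations):

; php.ini (PHP 5.x)
safe_mode     = On
safe_mode_gid = On                 ; compare the GID instead of the UID when opening files
open_basedir  = /home/some_user/   ; confine the script to its own directory tree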
{ "language": "en", "url": "https://stackoverflow.com/questions/7851", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Why all the Active Record hate? As I learn more and more about OOP, and start to implement various design patterns, I keep coming back to cases where people are hating on Active Record.
Often, people say that it doesn't scale well (citing Twitter as their prime example) -- but nobody actually explains why it doesn't scale well; and / or how to achieve the pros of AR without the cons (via a similar but different pattern?)
Hopefully this won't turn into a holy war about design patterns -- all I want to know is ****specifically**** what's wrong with Active Record.
If it doesn't scale well, why not?
What other problems does it have?

A: There's ActiveRecord the Design Pattern and ActiveRecord the Rails ORM Library, and there's also a ton of knock-offs for .NET, and other languages.
These are all different things. They mostly follow that design pattern, but extend and modify it in many different ways, so before anyone says "ActiveRecord Sucks" it needs to be qualified by saying "which ActiveRecord, there's heaps?"
I'm only familiar with Rails' ActiveRecord, so I'll try to address all the complaints which have been raised in the context of using it.
@BlaM
The problem that I see with Active Records is, that it's always just about one table
Code:
class Person
  belongs_to :company
end
people = Person.find(:all, :include => :company )
This generates SQL with LEFT JOIN companies on companies.id = person.company_id, and automatically generates associated Company objects so you can do people.first.company and it doesn't need to hit the database because the data is already present.
@pix0r
The inherent problem with Active Record is that database queries are automatically generated and executed to populate objects and modify database records
Code:
person = Person.find_by_sql("giant complicated sql query")
This is discouraged as it's ugly, but for the cases where you just plain and simply need to write raw SQL, it's easily done.
@Tim Sullivan
...and you select several instances of the model, you're basically doing a "select * from ..."
Code:
people = Person.find(:all, :select=>'name, id')
This will only select the name and ID columns from the database, all the other 'attributes' in the mapped objects will just be nil, unless you manually reload that object, and so on.

A: My long and late answer, not even complete, but a good explanation WHY I hate this pattern, opinions and even some emotions:
1) short version: Active Record creates a "thin layer" of "strong binding" between the database and the application code. Which solves no logical, no whatever-problems, no problems at all. IMHO it does not provide ANY VALUE, except some syntactic sugar for the programmer (which may then use an "object syntax" to access some data that exists in a relational database). The effort to create some comfort for the programmers should (IMHO...) better be invested in low level database access tools, e.g. some variations of simple, easy, plain hash_map get_record( string id_value, string table_name, string id_column_name="id" ) and similar methods (of course, the concepts and elegance greatly vary with the language used).
2) long version: In any database-driven projects where I had the "conceptual control" of things, I avoided AR, and it was good.
I usually build a layered architecture (you sooner or later do divide your software in layers, at least in medium- to large-sized projects):
A1) the database itself, tables, relations, even some logic if the DBMS allows it (MySQL is also grown-up now)
A2) very often, there is more than a data store: file system (blobs in database are not always a good decision...), legacy systems (imagine yourself "how" they will be accessed, many varieties possible.. but that's not the point...)
B) database access layer (at this level, tool methods, helpers to easily access the data in the database are very welcome, but AR does not provide any value here, except some syntactic sugar)
C) application objects layer: "application objects" sometimes are simple rows of a table in the database, but most times they are compound objects anyway, and have some higher logic attached, so investing time in AR objects at this level is just plainly useless, a waste of precious coders' time, because the "real value", the "higher logic" of those objects needs to be implemented on top of the AR objects, anyway - with and without AR! And, for example, why would you want to have an abstraction of "Log entry objects"? App logic code writes them, but should that have the ability to update or delete them? Sounds silly, and App::Log("I am a log message") is some magnitudes easier to use than le=new LogEntry(); le.time=now(); le.text="I am a log message"; le.Insert();. And for example: using a "Log entry object" in the log view in your application will work for 100, 1000 or even 10000 log lines, but sooner or later you will have to optimize - and I bet in most cases, you will just use that small beautiful SQL SELECT statement in your app logic (which totally breaks the AR idea..), instead of wrapping that small statement in rigid fixed AR idea frames with lots of code wrapping and hiding it. The time you wasted with writing and/or building AR code could have been invested in a much more clever interface for reading lists of log entries (many, many ways, the sky is the limit). Coders should dare to invent new abstractions to realize their application logic that fit the intended application, and not stupidly re-implement silly patterns that sound good on first sight!
D) the application logic - implements the logic of interacting objects and creating, deleting and listing(!) of application logic objects (NO, those tasks should rarely be anchored in the application logic objects itself: does the sheet of paper on your desk tell you the names and locations of all other sheets in your office? forget "static" methods for listing objects, that's silly, a bad compromise created to make the human way of thinking fit into [some-not-all-AR-framework-like-]AR thinking)
E) the user interface - well, what I will write in the following lines is very, very, very subjective, but in my experience, projects that built on AR often neglected the UI part of an application - time was wasted on creation of obscure abstractions. In the end such applications wasted a lot of coders' time and feel like applications from coders for coders, tech-inclined inside and outside. The coders feel good (hard work finally done, everything finished and correct, according to the concept on paper...), and the customers "just have to learn that it needs to be like that", because that's "professional"..
ok, sorry, I digress ;-)
Well, admittedly, this all is subjective, but it's my experience (Ruby on Rails excluded, it may be different, and I have zero practical experience with that approach).
In paid projects, I often heard the demand to start with creating some "active record" objects as a building block for the higher level application logic. In my experience, this conspicuously often was some kind of excuse for the fact that the customer (a software dev company in most cases) did not have a good concept, a big view, an overview of what the product should finally be. Those customers think in rigid frames ("in the project ten years ago it worked well.."), they may flesh out entities, they may define entity relations, they may break down data relations and define basic application logic, but then they stop and hand it over to you, and think that's all you need... they often lack a complete concept of application logic, user interface, usability and so on and so on... they lack the big view and they lack love for the details, and they want you to follow that AR way of things, because.. well, why, it worked in that project years ago, it keeps people busy and silent? I don't know. But the "details" separate the men from the boys, or .. how was the original advertisement slogan? ;-)
After many years (ten years of active development experience), whenever a customer mentions an "active record pattern", my alarm bell rings. I learned to try to get them back to that essential conceptional phase, let them think twice, try to show them their conceptional weaknesses, or just avoid them at all if they are undiscerning (in the end, you know, a customer that does not yet know what it wants, maybe even thinks it knows but doesn't, or tries to externalize concept work to ME for free, costs me many precious hours, days, weeks and months of my time, life is too short ... ).
So, finally: THIS ALL is why I hate that silly "active record pattern", and I do and will avoid it whenever possible.
EDIT: I would even call this a No-Pattern. It does not solve any problem (patterns are not meant to create syntactic sugar). It creates many problems: the root of all its problems (mentioned in many answers here..) is that it just hides the good old well-developed and powerful SQL behind an interface that is by the pattern's definition extremely limited.
This pattern replaces flexibility with syntactic sugar!
Think about it, which problem does AR solve for you?

A: Some messages are getting me confused. Some answers are going to "ORM" vs "SQL" or something like that.
The fact is that AR is just a simplification programming pattern where you take advantage of your domain objects to write their database access code.
These objects usually have business attributes (properties of the bean) and some behaviour (methods that usually work on these properties).
The AR just says "add some methods to these domain objects" for database-related tasks.
And I have to say, from my opinion and experience, that I do not like the pattern.
At first sight it can sound pretty good. Some modern Java tools like Spring Roo use this pattern.
For me, the real problem is just with the OOP concern. The AR pattern forces you in some way to add a dependency from your object to infrastructure objects. These infrastructure objects let the domain object query the database through the methods suggested by AR.
I have always said that two layers are key to the success of a project.
The service layer (where the business logic resides or can be exported through some kind of remoting technology, such as Web Services, for example) and the domain layer. In my opinion, if we add some dependencies (not really needed) to the domain layer objects for resolving the AR pattern, our domain objects will be harder to share with other layers or (rare) external applications.
The Spring Roo implementation of AR is interesting, because it does not rely on the object itself, but on some AspectJ files. But if later you do not want to work with Roo and have to refactor the project, the AR methods will be implemented directly in your domain objects.
Another point of view. Imagine we do not use a Relational Database to store our objects. Imagine the application stores our domain objects in a NoSQL Database or just in XML files, for example. Would we implement the methods that do these tasks in our domain objects? I do not think so (for example, in the case of XML, we would add XML related dependencies to our domain objects... Truly sad I think). Why then do we have to implement the relational DB methods in the domain objects, as the AR pattern says?
To sum up, the AR pattern can sound simpler and good for small and simple applications. But, when we have complex and large apps, I think the classical layered architecture is a better approach.

A: I have always found that ActiveRecord is good for quick CRUD-based applications where the Model is relatively flat (as in, not a lot of class hierarchies). However, for applications with complex OO hierarchies, a DataMapper is probably a better solution. While ActiveRecord assumes a 1:1 ratio between your tables and your data objects, that kind of relationship gets unwieldy with more complex domains. In his book on patterns, Martin Fowler points out that ActiveRecord tends to break down under conditions where your Model is fairly complex, and suggests a DataMapper as the alternative.
I have found this to be true in practice. In cases where you have a lot of inheritance in your domain, it is harder to map inheritance to your RDBMS than it is to map associations or composition.
The way I do it is to have "domain" objects that are accessed by your controllers via these DataMapper (or "service layer") classes. These do not directly mirror the database, but act as your OO representation for some real-world object. Say you have a User class in your domain, and need to have references to, or collections of, other objects already loaded when you retrieve that User object. The data may be coming from many different tables, and an ActiveRecord pattern can make it really hard.
Instead of loading the User object directly and accessing data using an ActiveRecord style API, your controller code retrieves a User object by calling the API of the UserMapper.getUser() method, for instance. It is that mapper that is responsible for loading any associated objects from their respective tables and returning the completed User "domain" object to the caller.
Essentially, you are just adding another layer of abstraction to make the code more manageable. Whether your DataMapper classes contain raw custom SQL, or calls to a data abstraction layer API, or even access an ActiveRecord pattern themselves, doesn't really matter to the controller code that is receiving a nice, populated User object.
Anyway, that's how I do it.

A: The question is about the Active Record design pattern. Not an ORM tool.
The original question is tagged with rails and refers to Twitter which is built in Ruby on Rails. The ActiveRecord framework within Rails is an implementation of Fowler's Active Record design pattern.

A: The main thing that I've seen with regards to complaints about Active Record is that when you create a model around a table, and you select several instances of the model, you're basically doing a "select * from ...". This is fine for editing a record or displaying a record, but if you want to, say, display a list of the cities for all the contacts in your database, you could do "select City from ..." and only get the cities. Doing this with Active Record would require that you're selecting all the columns, but only using City.
Of course, varying implementations will handle this differently. Nevertheless, it's one issue.
Now, you can get around this by creating a new model for the specific thing you're trying to do, but some people would argue that it's more effort than the benefit.
Me, I dig Active Record. :-)
HTH

A: Although all the other comments regarding SQL optimization are certainly valid, my main complaint with the active record pattern is that it usually leads to impedance mismatch. I like keeping my domain clean and properly encapsulated, which the active record pattern usually destroys all hope of doing.

A: I think there is likely a very different set of reasons between why people are "hating" on ActiveRecord and what is "wrong" with it.
On the hating issue, there is a lot of venom towards anything Rails related. As far as what is wrong with it, it is likely that it is like all technology and there are situations where it is a good choice and situations where there are better choices. The situation where you don't get to take advantage of most of the features of Rails ActiveRecord, in my experience, is where the database is badly structured. If you are accessing data without primary keys, with things that violate first normal form, where there are lots of stored procedures required to access the data, you are better off using something that is more of just a SQL wrapper. If your database is relatively well structured, ActiveRecord lets you take advantage of that.
To add to the theme of replying to commenters who say things are hard in ActiveRecord, here is a code snippet rejoinder:
@Sam McAfee Say you have a User class in your domain, and need to have references to, or collections of other objects, already loaded when you retrieve that User object. The data may be coming from many different tables, and an ActiveRecord pattern can make it really hard.
user = User.find(id, :include => ["posts", "comments"])
first_post = user.posts.first
first_comment = user.comments.first
By using the include option, ActiveRecord lets you override the default lazy-loading behavior.

A: I love the way SubSonic does the one column only thing. Either
DataBaseTable.GetList(DataBaseTable.Columns.ColumnYouWant)
, or:
Query q = DataBaseTable.CreateQuery()
          .WHERE(DataBaseTable.Columns.ColumnToFilterOn, value);
q.SelectList = DataBaseTable.Columns.ColumnYouWant;
q.Load();
But Linq is still king when it comes to lazy loading.

A: @BlaM: Sometimes I just implemented an active record for a result of a join. Doesn't always have to be the relation Table <--> Active Record. Why not "Result of a Join statement" <--> Active Record?

A: I'm going to talk about Active Record as a design pattern, I haven't seen ROR.
Some developers hate Active Record because they read smart books about writing clean and neat code, and these books state that Active Record violates the single responsibility principle, violates the DDD rule that a domain object should be persistence ignorant, and many other rules from these kinds of books.
The second thing is that domain objects in Active Record tend to be 1-to-1 with the database, which may be considered a limitation in some kinds of systems (n-tier mostly).
That's just abstract things, I haven't seen Ruby on Rails' actual implementation of this pattern.

A: The problem that I see with Active Records is, that it's always just about one table. That's okay, as long as you really work with just that one table, but when you work with data in most cases you'll have some kind of join somewhere.
Yes, join usually is worse than no join at all when it comes to performance, but join usually is better than "fake" join by first reading the whole table A and then using the gained information to read and filter table B.

A: The problem with ActiveRecord is that the queries it automatically generates for you can cause performance problems. You end up doing some unintuitive tricks to optimize the queries that leave you wondering if it would have been more time effective to write the query by hand in the first place.

A: Try doing a many to many polymorphic relationship. Not so easy. Especially when you aren't using STI.
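To give the mapper alternative described in a couple of the answers above a concrete shape, here is a minimal, library-free Ruby sketch. The class and method names (UserMapper, get_user) and the connection object with select_one/select_all are hypothetical, not from any particular framework:

# Plain domain object: knows nothing about the database.
class User
  attr_accessor :id, :name, :posts

  def initialize(id, name, posts = [])
    @id, @name, @posts = id, name, posts
  end
end

# Mapper: owns all SQL / persistence concerns for User.
class UserMapper
  def initialize(connection)
    @connection = connection
  end

  def get_user(id)
    row   = @connection.select_one("SELECT id, name FROM users WHERE id = ?", id)
    posts = @connection.select_all("SELECT title FROM posts WHERE user_id = ?", id)
    User.new(row["id"], row["name"], posts.map { |p| p["title"] })
  end
end

# Controller code only ever sees the finished domain object:
# user = UserMapper.new(connection).get_user(42)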
{ "language": "en", "url": "https://stackoverflow.com/questions/7864", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "102" }
Q: Remove the bar at the top of Loginview for formatting I'm making a webform using a LoginView. The problem is that because the control includes a grey bar telling you what type of control it is, it throws off correctly formatting the page (it has LoginView1 at the top). Is there a way to hide this on the LoginView, as the contentPlaceholder does an excellent job of this? I've found that you can remove the ID, but that seems like a hack as it stops programmatic access.

A: I may have misunderstood your question but.... The 'grey bar telling you what type of control it is' only shows up if you are looking at the page in 'design view' in your IDE (are you using Visual Studio?). Once you run the page this label is not visible.
It is very common for pages that have dynamic/server-side content to 'not look right' when you are looking at them in 'design view'. Little things like the label/grey bar you are talking about are just there to help you work on the page when it is not populated with the dynamic content. As a result of this, I find that 99.9% of the time I use 'source view' in my IDE because as your page content becomes more dynamic, the 'design view' becomes more useless.

A: I don't know that there is a property to control this (can't find one on MSDN), but I'd think you could just iterate through the Controls property of the LoginView and hide that panel/label/whatever.
{ "language": "en", "url": "https://stackoverflow.com/questions/7873", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: How do you open a file in C++? I want to open a file for reading, the C++ way. I need to be able to do it for:
* *text files, which would involve some sort of read line function.
*binary files, which would provide a way to read raw data into a char* buffer.

A: You need to use an ifstream if you just want to read (use an ofstream to write, or an fstream for both).
To open a file in text mode, do the following:
ifstream in("filename.ext", ios_base::in); // the in flag is optional
To open a file in binary mode, you just need to add the "binary" flag.
ifstream in2("filename2.ext", ios_base::in | ios_base::binary );
Use the ifstream.read() function to read a block of characters (in binary or text mode). Use the getline() function (it's global) to read an entire line.

A: To open and read a text file line per line, you could use the following:

// define your file name
string file_name = "data.txt";

// attach an input stream to the wanted file
ifstream input_stream(file_name);

// check stream status
if (!input_stream) cerr << "Can't open input file!";

// file contents
vector<string> text;

// one line
string line;

// extract all the text from the input file
while (getline(input_stream, line)) {

    // store each line in the vector
    text.push_back(line);
}

To open and read a binary file you need to explicitly declare the reading format in your input stream to be binary, and read memory that has no explicit interpretation using the stream member function read():

// define your file name
string file_name = "binary_data.bin";

// attach an input stream to the wanted file
ifstream input_stream(file_name, ios::binary);

// check stream status
if (!input_stream) cerr << "Can't open input file!";

// use function that explicitly specifies the amount of block memory read
int memory_size = 10;

// allocate 10 bytes of memory on heap
char* dynamic_buffer = new char[memory_size];

// read 10 bytes and store in dynamic_buffer
input_stream.read(dynamic_buffer, memory_size);

When doing this you'll need to #include the headers <iostream>, <fstream>, <string> and <vector>.

A: There are three ways to do this, depending on your needs. You could use the old-school C way and call fopen/fread/fclose, or you could use the C++ fstream facilities (ifstream/ofstream), or if you're using MFC, use the CFile class, which provides functions to accomplish actual file operations.
All of these are suitable for both text and binary, though none have a specific readline functionality. What you'd most likely do instead in that case is use the fstream classes (fstream.h) and use the stream operators (<< and >>) or the read function to read/write blocks of text:

int nsize = 10;
std::vector<char> somedata(nsize);
ifstream myfile;
myfile.open("<path to file>");
myfile.read(somedata.data(), nsize);
myfile.close();

Note that, if you're using Visual Studio 2005 or higher, traditional fstream may not be available (there's a new Microsoft implementation, which is slightly different, but accomplishes the same thing).

A: #include <iostream>
#include <fstream>
using namespace std;

int main()
{
    ifstream in_stream;  // fstream object used as "in_stream".
    char filename[31];   // variable for "filename".
    cout << "Enter file name to open :: ";  // asks user for input for "filename".
    cin.getline(filename, 30);  // this gets the line from input for "filename".
    in_stream.open(filename);   // this in_stream (fstream) opens the "filename".
    if (in_stream.fail())
    {
        cout << "Could not open file to read.\n";  // if the open file fails.
        return 1;
    }
    //.....the rest of the text goes beneath......
}

A: #include <iostream>
#include <fstream>
using namespace std;

int main ()
{
    ofstream file;
    file.open ("codebind.txt");
    file << "Please write this text to a file.\n this text is written using C++\n";
    file.close();
    return 0;
}

A: Follow the steps,
* *Include the header files or namespace needed to access the file class.
*Make a file class object depending on your IDE platform (i.e., CFile, QFile, fstream).
*Now you can easily find that class's methods to open/read/close/getline any file.

CFile/QFile/ifstream m_file;
m_file.Open(path, Other parameter/mode to open file);

For reading a file you have to make a buffer or string to save the data, and you can pass that variable into the read() method.

A: #include <iostream>
#include <fstream>  // to use file streams
#include <string>   // to use getline
using namespace std;

int main(){
    ifstream file;
    string str;
    file.open("path to the file", ios::binary | ios::in);
    while(true){
        getline(file, str);
        if(file.fail())
            break;
        cout << str;
    }
}

A: fstreams are great but I will go a little deeper and tell you about RAII.
The problem with a classic example is that you are forced to close the file by yourself, meaning that you will have to bend your architecture to this need. RAII makes use of the automatic destructor call in C++ to close the file for you.
Update: seems that std::fstream already implements RAII so the code below is useless. I'll keep it here for posterity and as an example of RAII.

class FileOpener
{
public:
    FileOpener(std::fstream& file, const char* fileName): m_file(file)
    {
        m_file.open(fileName);
    }
    ~FileOpener()
    {
        m_file.close();
    }
private:
    std::fstream& m_file;
};

You can now use this class in your code like this:

int nsize = 10;
std::vector<char> somedata(nsize);
std::fstream myfile;
FileOpener opener(myfile, "<path to file>");
myfile.read(somedata.data(), nsize);
// myfile is closed automatically when opener destructor is called

Learning how RAII works can save you some headaches and some major memory management bugs.

A: #include <fstream>
#include <string>
using namespace std;

ifstream infile;
string data;
infile.open("<file path>");
while(!infile.eof())
{
    getline(infile, data);
}
infile.close();
{ "language": "en", "url": "https://stackoverflow.com/questions/7880", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "45" }
Q: Testing for inequality in T-SQL I've just come across this in a WHERE clause:
AND NOT (t.id = @id)
How does this compare with:
AND t.id != @id
Or with:
AND t.id <> @id
I'd always write the latter myself, but clearly someone else thinks differently. Is one going to perform any better than the other? I know that using <> or != is going to bust any hopes for using an index that I might have had, but surely the first approach above will suffer the same problem?

A: There will be no performance hit, both statements are perfectly equal.
HTH

A: These 3 will get the same exact execution plan

declare @id varchar(40)
select @id = '172-32-1176'

select * from authors
where au_id <> @id

select * from authors
where au_id != @id

select * from authors
where not (au_id = @id)

It will also depend on the selectivity of the index itself of course. I always use au_id <> @id myself.

A: Note that the != operator is not standard SQL. If you want your code to be portable (that is, if you care), use <> instead.

A: Logic Hazard On Equality To Null To Be Considered
The equality operator generates an unknown value when there is a null, and the unknown value is treated as false. Not (unknown) is still unknown.
In the example below I'll ask if a couple (a1, b1) is equal to (a2, b2). Note that each column has 3 values: 0, 1 and NULL.

DECLARE @t table (a1 bit, a2 bit, b1 bit, b2 bit)
Insert into @t (a1 , a2, b1, b2) values( 0 , 0 , 0 , NULL )

select a1,a2,b1,b2,
    case when (
        (a1=a2 or (a1 is null and a2 is null))
        and (b1=b2 or (b1 is null and b2 is null))
    ) then 'Equal' end,
    case when not (
        (a1=a2 or (a1 is null and a2 is null))
        and (b1=b2 or (b1 is null and b2 is null))
    ) then 'Not Equal' end,
    case when (
        (a1<>a2 or (a1 is null and a2 is not null) or (a1 is not null and a2 is null))
        or (b1<>b2 or (b1 is null and b2 is not null) or (b1 is not null and b2 is null))
    ) then 'Different' end
from @t

Note that here, the results we expect are:
* *Equal to be null
*Not equal to be not equal
*Different to be different

But instead, we get another result
* *Equal is null - what we expected.
*Not Equal is null ???
*Different is different - what we expected.
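As a footnote to the NULL hazard above: one common T-SQL workaround (a sketch against the same @t table) is to compare the pairs with EXCEPT inside EXISTS, because EXCEPT uses distinctness rather than equality and therefore treats two NULLs as matching:

select a1, a2, b1, b2,
       case when exists (select a1, b1 except select a2, b2)
            then 'Different'
            else 'Equal'
       end as comparison
from @t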
{ "language": "en", "url": "https://stackoverflow.com/questions/7884", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "48" }
Q: How do you create objects in Perl? Perl has OOP features, but they are somewhat rarely used. How do you create and use Perl objects with methods and properties? A: Moose, definitely. package Person; use Moose; has age => ( isa => Int, is => 'rw'); has name => ( isa => Str, is => 'rw'); 1; Immediately, you have for free a new() method, and accessor methods for the attributes you just defined with 'has'. So, you can say: my $person = Person->new(); $person->age(34); $person->name('Mike'); print $person->name, "\n"; and so on. Not only that, but your accessor methods come type-checked for free (and you can define your own types as well as the standard ones). Plus you get 'extends' for subclassing, 'with' for roles/traits, and all manner of other great stuff that allows you to get on with the real job of writing good robust maintainable OO code. TMTOWTDI, but this one works. A: Currently I use Object::InsideOut whenever I want objects, its quite nice and will give you a lot of features over standard blessed hash objects. Having said that, if I was starting a new project I would seriously look at Moose. While it is good to read the official PERL documentation, I would NOT recommend trying to role your own object framework, or building objects using hashes, its far to tempting to take the easy road and "peak" directly into the objects "private" variables completely breaking encapsulation, this will come back to bite you when you want to refactor the object. A: Perl objects are NOT just blessed hashes. They are blessed REFERENCES. They can be (and most often are) blessed hash references, but they could just as easily be blessed scalar or array references. A: The official tutorial on the CPAN site is good. There's also a good article called Camel POOP at CodeProject. A: You should definitely take a look at Moose. package Point; use Moose; # automatically turns on strict and warnings has 'x' => (is => 'rw', isa => 'Int'); has 'y' => (is => 'rw', isa => 'Int'); sub clear { my $self = shift; $self->x(0); $self->y(0); } Moose gives you (among other things) a constructor, accessor methods, and type checking for free! So in your code you can: my $p = Point->new({x=>10 , y=>20}); # Free constructor $p->x(15); # Free setter print $p->x(); # Free getter $p->clear(); $p->x(15.5); # FAILS! Free type check. A good starting point is Moose::Manual and Moose::Cookbook If you just need the basic stuff you can also use Mouse which is not as complete, but without most of the compile time penalty. A: I highly recommend taking a look at Moose if you want to do OO in Perl. However, it's not very useful if you don't understand what OO in Perl means. To better understand how Perl OO works under the hood, I wrote an overview on my blog: http://augustinalareina.wordpress.com/2010/06/06/an-introduction-to-object-oriented-perl/ From a data structure point of view, an Object is reference with a few extra features. The interpreter knows to treat these special references as Objects because they have been "blessed" with the keyword "bless". Blessed references contain a flag indicating they are an Object. Essentially this means you can define and call methods on them. 
For instance, if you created a basic hashref, this wouldn't work: $hashref->foo(); But if you create a blessed hashref (aka an Object) this does work: $blessed_hashref->foo(); Moose is an excellent module for OOP in Perl because it creates an enforceable OO layer AND automagically handles accessor methods so you don't have to define a bunch of getters and setters. If you're interested in using Devel::Peek to see how the Perl interpreter stores objects, follow the link to the blog entry I posted above. A: In a nutshell, each class is a package; you establish (multiple, if desired) inheritance by setting the package variable @ISA (preferably at compile time); you create an object from an existing piece of data (often, but not always, an anonymous hash used to store instance variables) with bless(REFERENCE [, CLASSNAME]); you call object methods like $obj->methodname(@ARGS) and class methods like "CLASSNAME"->methodname(@ARGS). Multiple inheritance method resolution order can be altered using mro. Because this is somewhat minimalistic and doesn't force encapsulation, there are many different modules that provide more or different functionality.
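For completeness, here is a minimal sketch of the plain bless-based approach described above, with no Moose or other modules; the Person class and its fields are purely illustrative:

package Person;
use strict;
use warnings;

sub new {
    my ($class, %args) = @_;
    my $self = { name => $args{name}, age => $args{age} };
    return bless $self, $class;    # mark the hashref as belonging to Person
}

sub name {
    my ($self, $value) = @_;
    $self->{name} = $value if defined $value;   # simple combined getter/setter
    return $self->{name};
}

package main;
my $person = Person->new(name => 'Mike', age => 34);
$person->name('Michael');
print $person->name(), "\n";   # prints "Michael"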
{ "language": "en", "url": "https://stackoverflow.com/questions/7885", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: How do I make Subversion (SVN) send email on checkins? I've always found checkin (commit) mails to be very useful for keeping track of what work other people are doing in the codebase / repository. How do I set up SVN to email a distribution list on each commit? I'm running clients on Windows and the Apache Subversion server on Linux. The answers below for various platforms will likely be useful to other people though. A: You use the post-commit hooks. Here's a sample Ruby script that sends an email after each commit: commit-email.rb A: You'll want to familiarize yourself with repository hooks, particularly the post-commit hook. A: Have a look at the standalone Subversion Notify tool (Windows only!) It can do emailing on commit and also much more! A: VisualSVN Server has useful commit e-mail notification hook VisualSVNServerHooks.exe. It supports colored diffs and can send commit notifications only when commit affects certain repository path. See "Configuring Email Notifications in VisualSVN Server". A: 1) Install svnnotify on a svn server using sudo apt-get 2) Use post-commit hook of your repo (read on post-commit hooks on svn website) 3) Open post-commit hook file and paste following code to send an email using smtp server. Using smtp is straight forward since you don't need to configure sendmail. 4) Make sure after \ (line break) you don't have an extra space. #!/bin/sh REPOS="$1" REV="$2" TO="xyz@yah.com" # who will receive the notifications FROM="hello@goog.com" # what will be in "FROM" fields /usr/bin/svnnotify \ --repos-path "$REPOS" \ --revision "$REV" \ --to $TO \ --from $FROM \ --reply-to $FROM \ --smtp "YOUR.SMTP.MAIL.COM" \ --subject-prefix "[svn commit]" \ --attach-diff -a \ --header 'Message generated on Subversion Check-in.' \ --footer 'OpenSource Team. ' \ --svnlook "/usr/local/bin/svnlook" \ --handler HTML::ColorDiff # make diff pretty A: What platform? On Mac OS X I have installed msmtp and created a post-commit script under hooks in the repository. A .msmtprc file needs to be setup for the svn (or www) user. REPOS="`echo $1 | sed 's/\/{root of repository}//g'` " REV="$2" MSG=`/usr/local/bin/svn log -v -r HEAD https://localhost$REPOS` /usr/local/bin/msmtp {list of recipients} <<EOF Subject: SVN-Commit $REPOS#$REV MIME-Version: 1.0 Content-Type: text/plain; charset=utf-8 Content-Transfer-Encoding: 8Bit $MSG EOF Make {root of repository} and {list of recipients} specific for your needs. Note I have used UTF-8 because we have some special characters here in Sweden (åäö). A: There's a related question here on post-commit hooks. Personally, I prefer to send a message to something I can get an RSS feed from, as an email-per-commit would overload my inbox pretty quickly. A: Seconding @Matt Miller on RSS feeds. There's a useful tool called WebSVN that offers RSS feeds of every repository and individual branches/tags/folders with full commit messages. It's also a great web interface for quickly looking at file histories and commits/diffs without having to run an update and open your editor of choice. A: As someone else said, 'what platform'. On Windows I've used 'blat', which is a freebie command line SMTP mailer to do this, along with a post-commit and another batch file. 
The post commit looks like this: (Just calls another batch file) call d:\subversion\repos\rts\hooks\mail %1 %2 And mail.bat looked like this: copy d:\subversion\repos\RTS\hooks\Commitmsg.txt %temp%\commit.txt copy d:\subversion\repos\RTS\hooks\subjbase.txt %temp%\subject.txt svnlook info -r %2 %1 >> %temp%\commit.txt echo Revision %2 >> %temp%\commit.txt svnlook changed -r %2 %1 >> %temp%\commit.txt svnlook author -r %2 %1 >> %temp%\subject.txt c:\utils\blat %temp%\commit.txt -t <me@my.email.com> -sf %temp%\subject.txt -server ServerName -f "SVN Admin <svn@my.email.com>" -noh2 The biggest gotcha in writing SVN hooks is that you might have basically NO environment set-up - no exe path, no temp path, etc. Though maybe that's improved in more recent SVN builds. A: I use a post-commit script similar to this one. It's sends a nice HTML email. I updated it some where it highlights code that was removed in red and highlights code that was added in blue. A: You could use buildbot. It's a tool that can take arbitrary action whenever a check-in occurs. It's a full featured continuous integration system but if you just want emails it can certainly handle that. It has plug-ins for a variety of SCMs including SVN. A: In the "hooks" directory of your specific subversion branch there are 9 template files to get you started. Key point: subversion will not execute any of the files until they are renamed. To get post-commit.tmpl to execute under unix, rename it "post-commit". Under Windows, rename it to "post-commit.bat" or "post-commit.exe". Subversion will not execute the file if it is named "post-commit.tmpl" or "post-commit.sh" or the like. Also, make sure that the file is executable by the same user that runs subversion. A: I did it on Linux server in 3 steps: * *Create a mailing list (svn-notify@xy.com) and add people to the list. *Edit /path_to_your_svn/svn/hooks/svn-notify/mailer.conf * *to_addr = svn-notify@xy.com *from_addr = %(author)s@xy.com *commit_subject_prefix =[XY-SVN] *Add this line to your /path_to_your_svn/svn/hooks/post-commit file: /path_to_your_svn/svn/hooks/svn-notify/mailer.py commit "$REPOS" "$REV" /path_to_your_svn/svn/hooks/svn-notify/mailer.conf A: There is a (large) example written in Perl included in the Subversion source (it can be viewed here). A: Also SVNMailer, which works on Linux. A: Check the svn-mod-email package described here. The svn-mod-email is a powerful tool for SVN email notifications management that is delivered as a Debian archive. It's easily to install, configure and use.
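If all you need is something minimal on a Unix server and none of the tools above, a bare-bones post-commit hook along the following lines is enough; the recipient address is a placeholder, and it assumes a working mail command and svnlook on the server:

#!/bin/sh
REPOS="$1"
REV="$2"
TO="dev-team@example.com"   # placeholder address

{
  /usr/bin/svnlook info -r "$REV" "$REPOS"
  echo
  /usr/bin/svnlook changed -r "$REV" "$REPOS"
} | mail -s "[svn commit] r$REV" "$TO"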
{ "language": "en", "url": "https://stackoverflow.com/questions/7913", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "52" }
Q: Remove Quotes and Commas from a String in MySQL I'm importing some data from a CSV file, and numbers that are larger than 1000 get turned into 1,100 etc. What's a good way to remove both the quotes and the comma from this so I can put it into an int field? Edit: The data is actually already in a MySQL table, so I need to be able to this using SQL. Sorry for the mixup. A: Here is a good case for regular expressions. You can run a find and replace on the data either before you import (easier) or later on if the SQL import accepted those characters (not nearly as easy). But in either case, you have any number of methods to do a find and replace, be it editors, scripting languages, GUI programs, etc. Remember that you're going to want to find and replace all of the bad characters. A typical regular expression to find the comma and quotes (assuming just double quotes) is: (Blacklist) /[,"]/ Or, if you find something might change in the future, this regular expression, matches anything except a number or decimal point. (Whitelist) /[^0-9\.]/ What has been discussed by the people above is that we don't know all of the data in your CSV file. It sounds like you want to remove the commas and quotes from all of the numbers in the CSV file. But because we don't know what else is in the CSV file we want to make sure that we don't corrupt other data. Just blindly doing a find/replace could affect other portions of the file. A: My guess here is that because the data was able to import that the field is actually a varchar or some character field, because importing to a numeric field might have failed. Here was a test case I ran purely a MySQL, SQL solution. * *The table is just a single column (alpha) that is a varchar. mysql> desc t; +-------+-------------+------+-----+---------+-------+ | Field | Type | Null | Key | Default | Extra | +-------+-------------+------+-----+---------+-------+ | alpha | varchar(15) | YES | | NULL | | +-------+-------------+------+-----+---------+-------+ *Add a record mysql> insert into t values('"1,000,000"'); Query OK, 1 row affected (0.00 sec) mysql> select * from t; +-------------+ | alpha | +-------------+ | "1,000,000" | +-------------+ *Update statement. mysql> update t set alpha = replace( replace(alpha, ',', ''), '"', '' ); Query OK, 1 row affected (0.00 sec) Rows matched: 1 Changed: 1 Warnings: 0 mysql> select * from t; +---------+ | alpha | +---------+ | 1000000 | +---------+ So in the end the statement I used was: UPDATE table SET field_name = replace( replace(field_name, ',', ''), '"', '' ); I looked at the MySQL Documentation and it didn't look like I could do the regular expressions find and replace. Although you could, like Eldila, use a regular expression for a find and then an alternative solution for replace. Also be careful with s/"(\d+),(\d+)"/$1$2/ because what if the number has more then just a single comma, for instance "1,000,000" you're going to want to do a global replace (in perl that is s///g). But even with a global replace the replacement starts where you last left off (unless perl is different), and would miss the every other comma separated group. A possible solution would be to make the first (\d+) optional like so s/(\d+)?,(\d+)/$1$2/g and in this case I would need a second find and replace to strip the quotes. Here are some ruby examples of the regular expressions acting on just the string "1,000,000", notice there are NOT double quote inside the string, this is just a string of the number itself. 
>> "1,000,000".sub( /(\d+),(\d+)/, '\1\2' ) # => "1000,000" >> "1,000,000".gsub( /(\d+),(\d+)/, '\1\2' ) # => "1000,000" >> "1,000,000".gsub( /(\d+)?,(\d+)/, '\1\2' ) # => "1000000" >> "1,000,000".gsub( /[,"]/, '' ) # => "1000000" >> "1,000,000".gsub( /[^0-9]/, '' ) # => "1000000" A: You could use this perl command. Perl -lne 's/[,|"]//; print' file.txt > newfile.txt You may need to play around with it a bit, but it should do the trick. A: Here's the PHP way: $stripped = str_replace(array(',', '"'), '', $value); Link to W3Schools page A: My command does remove all ',' and '"'. In order to convert the sting "1,000" more strictly, you will need the following command. Perl -lne 's/"(\d+),(\d+)"/$1$2/; print' file.txt > newfile.txt A: Actually nlucaroni, your case isn't quite right. Your example doesn't include double-quotes, so id,age,name,... 1,23,phil, won't match my regex. It requires the format "XXX,XXX". I can't think of an example of when it will match incorrectly. All the following example won't include the deliminator in the regex: "111,111",234 234,"111,111" "111,111","111,111" Please let me know if you can think of a counter-example. Cheers! A: The solution to the changed question is basically the same. You will have to run select query with the regex where clause. Somthing like Select * FROM SOMETABLE WHERE SOMEFIELD REGEXP '"(\d+),(\d+)"' Foreach of these rows, you want to do the following regex substitution s/"(\d+),(\d+)"/$1$2/ and then update the field with the new value. Please Joseph Pecoraro seriously and have a backup before doing mass changes to any files or databases. Because whenever you do regex, you can seriously mess up data if there are cases that you have missed. A: Daniel's and Eldila's answer have one problem: They remove all quotes and commas in the whole file. What I usually do when I have to do something like this is to first replace all separating quotes and (usually) semicolons by tabs. * *Search: ";" *Replace: \t Since I know in which column my affected values will be I then do another search and replace: * *Search: ^([\t]+)\t([\t]+)\t([0-9]+),([0-9]+)\t *Replace: \1\t\2\t\3\4\t ... given the value with the comma is in the third column. You need to start with an "^" to make sure that it starts at the beginning of a line. Then you repeat ([0-9]+)\t as often as there are columns that you just want to leave as they are. ([0-9]+),([0-9]+) searches for values where there is a number, then a comma and then another number. In the replace string we use \1 and \2 to just keep the values from the edited line, separating them with \t (tab). Then we put \3\4 (no tab between) to put the two components of the number without the comma right after each other. All values after that will be left alone. If you need your file to have semicolon to separate the elements, you then can go on and replace the tabs with semicolons. However then - if you leave out the quotes - you'll have to make sure that the text values do not contain any semicolons themselves. That's why I prefer to use TAB as column separator. I usually do that in an ordinary text editor (EditPlus) that supports RegExp, but the same regexps can be used in any programming language.
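Coming back to the original goal of storing the value in an int field: once the data is already in MySQL, the nested REPLACE shown earlier cleans the column, and the column type can then be changed in place. The table and column names here are only examples:

UPDATE mytable
SET amount = REPLACE(REPLACE(amount, ',', ''), '"', '');

-- only once every row contains plain digits:
ALTER TABLE mytable MODIFY amount INT;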
{ "language": "en", "url": "https://stackoverflow.com/questions/7917", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16" }
Q: interrogating table lock schemes in T-SQL Is there some means of querying the system tables to establish which tables are using what locking schemes? I took a look at the columns in sysobjects but nothing jumped out. A: aargh, just being an idiot: SELECT name, lockscheme(name) FROM sysobjects WHERE type="U" ORDER BY name A: Take a look at the syslockinfo and syslocks system tables. You can also run the sp_lock proc.
{ "language": "en", "url": "https://stackoverflow.com/questions/7933", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: How to solve the select overlap bug in IE6? When using IE, you cannot put an absolutely positioned div over a select input element. That's because the select element is considered an ActiveX object and is on top of every HTML element in the page. I already saw people hiding selects when opening a popup div, that leads to pretty bad user experience having controls disappearing. FogBugz actually had a pretty smart solution (before v6) of turning every select into text boxes when a popup was displayed. This solved the bug and tricked the user eye but the behavior was not perfect. Another solution is in FogBugz 6 where they no more use the select element and recoded it everywhere. Last solution I currently use is messing up the IE rendering engine and force it to render the absolutely positioned <div> as an ActiveX element too, ensuring it can live over a select element. This is achieved by placing an invisible <iframe> inside the <div> and styling it with: #MyDiv iframe { position: absolute; z-index: -1; filter: mask(); border: 0; margin: 0; padding: 0; top: 0; left: 0; width: 9999px; height: 9999px; overflow: hidden; } Does anyone have an even better solution than this one? EDIT: The purpose of this question is as much informative as it is a real question. I find the <iframe> trick to be a good solution, but I am still looking for improvement like removing this ugly useless tag that degrades accessibility. A: I don't know anything better than an Iframe But it does occur to me that this could be added in JS by looking for a couple of variables * *IE 6 *A high Z-Index (you tend to have to set a z-index if you are floating a div over) *A box element Then a script that looks for these items and just add an iframe layer would be a neat solution Paul A: Thanks for the iframe hack solution. It's ugly and yet still elegant. :) Just a comment. If you happen to be running your site via SSL, the dummy iframe tag needs to have a src specified, otherwise IE6 is going to complain with a security warning. example: <iframe src="javascript:false;"></iframe> I've seen some people recommend setting src to blank.html ... but I like the javascript way more. Go figure. A: As far as I know there are only two options, the better of which is the mentioned usage of an iframe. The other one is hiding all selects when the overlay is shown, leading to an even weirder user experience. A: try this plugin http://docs.jquery.com/Plugins/bgiframe , it should work! usage: $('.your-dropdown-menu').bgiframe(); A: I don't think there is. I've tried to solve this problem at my job. Hiding the select control was the best we could come up with (being a corporate shop with a captive audience, user experience doesn't usually factor into the PM's decisions). From what I could gather online when looking for a solution, there's just no good solution to this. I like the FogBugz solution (the same thing done by a lot of high-profile sites, like Facebook), and this is actually what I use in my own projects. A: I do the same thing with select boxes and Flash. When using an overlay, hide the underlying objects that would push through. It's not great, but it works. You can use JavaScript to hide the elements just before displaying an overlay, then show them again once you're done. I try not to mess with iframes unless it's absolutely necessary. The trick of using labels or textboxes instead of select boxes during overlays is neat. I may use that in the future. 
A: Mootools has a pretty well fleshed out solution using an iframe, called iframeshim. Not worth including the lib just for this, but if you have it in your project anyway, you should be aware that the 'iframeshim' plugin exists. A: There's this simple and straightforward jquery plugin called bgiframe. The developer created it for the sole purpose of solving this issue in ie6. I've recently used it and it works like a charm. A: When hiding the select elements, hide them by setting "visibility: hidden" instead of display: none; otherwise the browser will re-flow the document. A: I fixed this by hiding the select components using CSS when a dialog or overlay is displayed: selects[i].style.visibility = "hidden"; function showOverlay() { el = document.getElementById("overlay"); el.style.visibility = "visible"; selects = document.getElementsByTagName("select"); for (var i = 0; i < selects.length; i++) { selects[i].style.visibility = "hidden"; } } function hideOverlay() { el = document.getElementById("overlay"); el.style.visibility = "hidden"; var selects = document.getElementsByTagName("select"); for (var i = 0; i < selects.length; i++) { selects[i].style.visibility = "visible"; } }
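Most of the answers above boil down to the same "iframe shim" idea the question describes. A rough JavaScript sketch of it (roughly what plugins like bgiframe automate) might look like this; the element id in the usage line is just an example:

function addIframeShim(el) {
    // el is the absolutely positioned div that will sit over the page
    var shim = document.createElement('iframe');
    shim.src = 'javascript:false;';   // avoids the SSL warning mentioned above
    shim.frameBorder = '0';
    shim.style.position = 'absolute';
    shim.style.top = '0';
    shim.style.left = '0';
    shim.style.width = el.offsetWidth + 'px';
    shim.style.height = el.offsetHeight + 'px';
    shim.style.zIndex = '-1';
    shim.style.filter = 'mask()';     // lets the div's own background show through in IE
    el.appendChild(shim);
    return shim;
}

// usage: addIframeShim(document.getElementById('MyDiv'));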
{ "language": "en", "url": "https://stackoverflow.com/questions/7937", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "28" }
Q: How important is W3C XHTML/CSS validation when finalizing work? Even though I always strive for complete validation these days, I often wonder if it's a waste of time. If the code runs and it looks the same in all browsers (I use browsershots.org to verify) then do I need to take it any further or am I just being overly anal? What level do you hold your code to when you create it for: a) yourself b) your clients P.S. Jeff and company, why doesn't stack overflow validate? :) EDIT: Some good insights, I think that since I've been so valid-obsessed for so long I program knowing what will cause problems and what won't so I'm in a better position than people who create a site first and then "go back and fix the validation problems" I think I may post another question on stack overflow; "Do you validate as you go or do you finish and then go back and validate?" as that seems to be where this question is going A: I think this is an area in which you should strive to use the Robustness principle as far as is practical (which is good advice for any area of coding). Just because something works today doesn't mean it will work tomorrow: if you're relying on a particular HTML/CSS hack or even if you've just been a little lax in emitting strictly valid code, the next iteration of browsers could well break. Doing it once the right way minimises this problem (though does not entirely mitigate it). There is a certain element of pragmatism to take here, though. I'd certainly do all I could for a client's site to be valid, but I would be willing to take more risks in my own space. A: I think it's only "tech" guys that really care for "100% standard compliance". My usual page consumers (= users) don't care if there's no alt-attribute for a "menu border picture element". I usually just make sure that I don't see any obvious errors (all tags closed, all lower case, attributes in quotes, ...), but if it looks good on IE and FF, that's all I care for. I don't really care if I use a non-standard attribute in any HTML tag, so that the page doesn't validate against an DTD - as long as I get the visual results that I intended to get. A: For understanding why validation matters, it is needed to understand how a browser works at its different layers, and also a little bit about the history of web from the perspective of web browsers. The HTML you give to a browser is interpreted by the browser following the DOM, an application programming interface that maps out the entire page as a hierarchy of nodes. Each part of that tree is a type of node containing different kinds of data. DOM (Document Object Model) was necessary because of the diversity of HTML pages that early web browsers (Netscape, IE...) implemented to allow alter the appearance and content of a web page without reloading it. For preserving the cross-platform nature of the web, W3C wanted to fix the different implementation of those browsers, proposing DOM. DOM support became a huge priority for most web browsers vendors, and efforts have been ongoing to improve support on each release. So, it worked. DOM is the very basic step with which a web browser starts. Its main flow is: * *parsing HTML to construct the DOM tree *render tree construction *layout of the render tree *painting the render tree The step 1 gives the content tree, with the tags turned to DOM nodes. The step 2 gives the render tree, containing styling information. So, why validation matters: because content tree and render tree are the basis from which the web browser start its job. 
The better defined they are, the better for the web browser. Ultimately, the DOM is also the basis for your JavaScript events. So its validity helps the interaction layer too. A: a) Must look the same b) As standards-compliant as possible, but not so anal that it blocks finishing work In a situation where you have perpetual access to the code, I don't think standards-compliance is all that important, since you can always make changes to the code if something breaks. If you don't have perpetual access (i.e., you sign off on the code and it becomes someone else's responsibility), it's probably best to be as standards-compliant as possible to minimize maintenance headaches later... even if you never have to deal with the code again, your reputation persists and can be transmitted to other potential clients, and many teams like to blame the previous developer(s) for problems that come up. A: I know this isn't answering your whole question, but it is worth considering that by using completely valid HTML you can be sure that your website should work properly in future web browsers that haven't been released yet. A: My approach tends to be to ensure I can completely validate on all pages; however, I still send the page as text/html instead of application/xhtml+xml so there are no ugly XML errors in the event I have missed something. A: For me, I feel like I've done a good job if my code validates. Seeing the green check box on the W3C pages just makes me slightly giddy. As for group b, they usually only care that it looks and works the same across browsers. The only place I've found where this is not true is the government sector. They require complete validation not only with the W3C but also passing ADA tests (basically, how does it sound with a screen reader). P.S. When I say government sector, I mean specifically the state of California and a few counties inside it. I have had no other experience with other government groups besides them. A: I think validation is a good litmus test of whether you've done things properly, so if there are only a few minor problems, why not fix them and ensure your site will at least be understood correctly by browsers in the future (even if they do render things differently for other reasons)? OTOH, for most projects, validation seems like a huge headache and if you can get things working across browsers, it's not worth spending an extra day/week+ on just validation. A: Except that the validators themselves are so positively anal that they flag an error or warning whenever a -moz-, -webkit- or -o- prefix (i.e. a browser-specific qualification term) is used. They also want you to specify 0px rather than 0 or other units; zero is zero whatever units the validator wants to check it against! Just try validating the WordPress twentyeleven style.css: it throws 140-odd errors, all of which are either of the nature above or "the validator is recovering from parse errors". The validators are useless if you cannot sort the wheat from the chaff! We need validators that recognise browser-specific qualification terms!
{ "language": "en", "url": "https://stackoverflow.com/questions/7940", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15" }
Q: SQL Server Recovery States When restoring a SQL Server Database, I notice that there are 3 different Recovery States to choose from: * *Restore with Recovery *Restore with No Recovery *Restore with Standby I've always left it at it's default value, but what do they all mean? (Preferably in layman's terms) A: GateKiller, In simple terms (and not a copy-paste out of the SQLBOL) so you can understand the concepts: RESTORE WITH RECOVERY uses the backup media file (eg. fulldata.bak) to restore the database to back to the time that backup file was created. This is great if you want to go back in time to restore the database to an earlier state - like when developing a system. If you want to restore the database TO THE VERY LATEST DATA, (i.e. like if your doing a system Disaster Recovery and you cannot lose any data) then you want to restore that backup AND THEN all the transaction logs created since that backup. This is when you use RESTORE NORECOVERY. It will allow you to restore the later transaction logs right up to the point of failure (as long as you have them). RECOVERY WITH STANDBY is the ability to restore the database up to a parital date (like NORECOVERY above) but to allow the database still to be used READONLY. New transaction logs can still be applied to the database to keep it up to date (a standby server). Use this when it would take too long to restore a full database in order to Return To Operations the system. (ie. if you have a multi TB database that would take 16 hours to restore, but could receive transaction log updates every 15 minutes). This is a bit like a mirror server - but without having "every single transaction" send to the backup server in real time. A: You can set a Microsoft SQL Server database to be in NORECOVERY, RECOVERY or STANDBY mode. RECOVERY is the normal and usual status of the database where users can connect and access the database (given that they have the proper permissions set up). NORECOVERY allows the Database Administrator to restore additional backup files such as Differential or Transactional backups. While the database is in this state then users are not able to connect or access this database. STANDBY is pretty much the same as NORECOVERY status however it allows users to connect or access database in a READONLY access. So the users are able to run only SELECT command against the database. This is used in Log Shipping quite often for reporting purposes. The only drawback is that while there are users in the database running queries SQL Server or a DBA is not able to restore additional backup files. Therefore if you have many users accessing the database all the time then the replication could fall behind. A: From Books On line, i think it is pretty clear after you read it NORECOVERY Instructs the restore operation to not roll back any uncommitted transactions. Either the NORECOVERY or STANDBY option must be specified if another transaction log has to be applied. If neither NORECOVERY, RECOVERY, or STANDBY is specified, RECOVERY is the default. SQL Server requires that the WITH NORECOVERY option be used on all but the final RESTORE statement when restoring a database backup and multiple transaction logs, or when multiple RESTORE statements are needed (for example, a full database backup followed by a differential database backup). Note When specifying the NORECOVERY option, the database is not usable in this intermediate, nonrecovered state. 
When used with a file or filegroup restore operation, NORECOVERY forces the database to remain in the restoring state after the restore operation. This is useful in either of these situations: A restore script is being run and the log is always being applied. A sequence of file restores is used and the database is not intended to be usable between two of the restore operations. RECOVERY Instructs the restore operation to roll back any uncommitted transactions. After the recovery process, the database is ready for use. If subsequent RESTORE operations (RESTORE LOG, or RESTORE DATABASE from differential) are planned, NORECOVERY or STANDBY should be specified instead. If neither NORECOVERY, RECOVERY, or STANDBY is specified, RECOVERY is the default. When restoring backup sets from an earlier version of SQL Server, a database upgrade may be required. This upgrade is performed automatically when WITH RECOVERY is specified. For more information, see Transaction Log Backups . STANDBY = undo_file_name Specifies the undo file name so the recovery effects can be undone. The size required for the undo file depends on the volume of undo actions resulting from uncommitted transactions. If neither NORECOVERY, RECOVERY, or STANDBY is specified, RECOVERY is the default. STANDBY allows a database to be brought up for read-only access between transaction log restores and can be used with either warm standby server situations or special recovery situations in which it is useful to inspect the database between log restores. If the specified undo file name does not exist, SQL Server creates it. If the file does exist, SQL Server overwrites it. The same undo file can be used for consecutive restores of the same database. For more information, see Using Standby Servers. Important If free disk space is exhausted on the drive containing the specified undo file name, the restore operation stops. STANDBY is not allowed when a database upgrade is necessary.
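Putting the three options together, a typical restore sequence looks something like the following (the database name and backup paths are only illustrative): every restore except the last uses NORECOVERY, or STANDBY if read-only access is needed between log restores, and the final restore uses RECOVERY to bring the database online.

RESTORE DATABASE MyDb FROM DISK = N'D:\Backups\MyDb_full.bak' WITH NORECOVERY;
RESTORE LOG MyDb FROM DISK = N'D:\Backups\MyDb_log1.trn' WITH NORECOVERY;
RESTORE LOG MyDb FROM DISK = N'D:\Backups\MyDb_log2.trn' WITH RECOVERY;   -- database becomes usable here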
{ "language": "en", "url": "https://stackoverflow.com/questions/7954", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Decoding printf statements in C (Printf Primer) I'm working on bringing some old code from 1998 up to the 21st century. One of the first steps in the process is converting the printf statements to QString variables. No matter how many times I look back at printf though, I always end up forgetting one thing or the other. So, for fun, let's decode it together, for ole' times sake and in the process create the first little 'printf primer' for Stackoverflow. In the code, I came across this little gem, printf("%4u\t%016.1f\t%04X\t%02X\t%1c\t%1c\t%4s", a, b, c, d, e, f, g); How will the variables a, b, c, d, e, f, g be formatted? A: Danny is mostly right. a. unsigned decimal, minimum 4 characters, space padded b. floating point, minimum 16 digits before the decimal (0 padded), 1 digit after the decimal c. hex, minimum 4 characters, 0 padded, letters are printed in upper case d. same as above, but minimum 2 characters e. e is assumed to be an int, converted to an unsigned char and printed f. same as e g. This is likely a typo, the 4 has no effect. If it were "%.4s", then a maximum of 4 characters from the string would be printed. It is interesting to note that in this case, the string does not need to be null terminated. Edit: jj33 points out 2 errors in b and g above here. A: @Jason Day, I think the 4 in the last %4s is significant if there are fewer than 4 characters. If there are more than 4 you are right, %4s and %s would be the same, but with fewer than 4 chars in g %s would be left justified and %4s would be right-justified in a 4 char field. b is actually minimum 16 chars for the whole field, including the decimal and the single digit after the decimal I think (16 total chars vs 18 total chars) A: Here's my printf primer: http://www.pixelbeat.org/programming/gcc/format_specs.html I always compile with -Wall with gcc which will warn about any mismatches between the supplied printf formats and variables. A: @jj33, you're absolutely right, on both counts. #include <stdio.h> int main(int argc, char *argv[]) { char *s = "Hello, World"; char *s2 = "he"; printf("4s: '%4s'\n", s); printf(".4s: '%.4s'\n", s); printf("4s2: '%4s'\n", s2); printf(".4s2: '%.4s'\n", s2); return 0; } $ gcc -o foo foo.c $ ./foo 4s: 'Hello, World' .4s: 'Hell' 4s2: ' he' .4s2: 'he' Good catch! A: a. decimal, four significant digits b. Not sure c. hex, minimum 4 characters d. Also hex, minimum 2 characters e. 1 character f. String of characters, minimum 4 A: What you really need is a tool which takes the format strings in printf() statements and converts them into equivalent QString based function calls. Does anyone want to spend his Free Software Donation Time on developing such a tool? Placeholder for URL to a Free Software hosting service holding the source code of such a tool
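If you want to see the formatting for yourself, here is a tiny compilable program using the same format string with made-up values; the variable types are guesses at what the original code used:

#include <stdio.h>

int main(void)
{
    unsigned int a = 42;            /* %4u    : unsigned, right-aligned in a 4-character minimum field */
    double       b = 1234.5;        /* %016.1f: zero-padded to a total width of 16, 1 digit after the decimal */
    unsigned int c = 0xBEEF;        /* %04X   : upper-case hex, zero-padded to at least 4 digits */
    unsigned int d = 0x7F;          /* %02X   : upper-case hex, zero-padded to at least 2 digits */
    char         e = 'X', f = 'Y';  /* %1c    : single characters */
    const char  *g = "Hi";          /* %4s    : string, right-aligned in a 4-character minimum field */

    printf("%4u\t%016.1f\t%04X\t%02X\t%1c\t%1c\t%4s\n", a, b, c, d, e, f, g);
    return 0;
}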
{ "language": "en", "url": "https://stackoverflow.com/questions/7981", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Printing from a .NET Service I am working on a project right now that involves receiving a message from another application, formatting the contents of that message, and sending it to a printer. The technology of choice is C# windows service. The output could be called a report, I suppose, but a reporting engine is not necessary. A simple templating engine, like StringTemplate, or even XSLT outputting HTML would be fine. The problem I'm having is finding a free way to print this kind of output from a service. Since it seems that it will work, I'm working on a prototype using Microsoft's RDLC, populating a local report and then rendering it as an image to a memory stream, which I will then print. Issues with that are: * *Multi-page printing will be a big headache. *Still have to use PrintDocument to print the memory stream, which is unsupported in a Windows Service (though it may work - haven't gotten that far with the prototype yet) *If the data coming across changes, I have to change the dataset and the class that the data is being deserialized into. bad bad bad. Has anyone had to do anything remotely like this? Any advice? I already posted a question about printing HTML without user input, and after wasting about 3 days on that, I have come to the conclusion that it cannot be done, at least not with any freely available tool. All help is appreciated. EDIT: We are on version 2.0 of the .NET framework. A: I've done it. It's a pain in the A*s. The problem is that printing requires that GDI engine to be in place, which normally means that you have to have the desktop, which only loads when you're logged in. If you're attempting to do this from a Service on a Server, then you normally aren't logged in. So first you can't run as the normal service user, but instead as a real user that has interactive login rights. Then you have to tweak the service registry entries (I forget how at the moment, would have to find the code which I can do tonight if you're really interested). Finally, you have to pray. Your biggest long term headache will be with print drivers. If you are running as a service without a logged in user, some print drivers like to pop up dialogs from time to time. What happens when your printer is out of toner? Or out of paper? The driver may pop up a dialog that will never be seen, and hold up the printer queue because nobody is logged in! A: To answer your first question, this can be fairly straight forward depending on the data. We have a variety of Service-based applications that do exactly what you are asking. Typically, we parse the incoming file and wrap our own Postscript or PCL around it. If you layout is fairly simple, then there are some very basic PCL codes you can wrap it with to provide the font/print layup you want (I'd be more then happy to give you some guidance here offline). One you have a print ready file you can send it to a UNC printer that is shared, directly to a locally installed printer, or even to the IP of the device (RAW or LPR type data). If, however, you are going down the PDF path, the simplest method is to send the PDF output to a printer that supports direct PDF printing (many do now). In this case you just send the PDF to the device and away it prints. The other option is to launch Ghostscript which should be free for your needs (check the licensing as they have a few different version, some GNU, some GPL etc.) and either use it's built in print function or simply convert to Postscript and send to the device. 
I've used Ghostscript many times in Service apps but not a huge fan as you will basically be shelling out and executing a command line app to do the conversion. That being said, it's a stable app that does tend to fail gracefully A: Printing from a service is a bad idea. Network printers are connected "per-user". You can mark the service to be run as a particular user, but I'd consider that a bad security practice. You might be able to connect to a local printer, but I'd still hesitate before going this route. The best option is to have the service store the data and have a user-launched application do the printing by asking the service for the data. Or a common location that the data is stored, like a database. If you need to have the data printed as regular intervals, setup a Task event thru the Task Scheduler. Launching a process from a service will require knowing the user name and password, which again is bad security practice. As for the printing itself, use a third-party tool to generate the report will be the easiest. A: Trust me, you will spend more money trying to search/develop a solution for this as compared to buying a third party component. Do not reinvent the wheel and go for the paid solution. Printing is a complex problem and I would love to see the day when better framework support is added for this. A: Printing from a Windows service is really painful. It seems to work... sometimes... but finally it craches or throws an exception from time to time, without any clear reason. It's really hopeless. Officially, it's even not supported, without any explanation, nor any proposal for an alternate solution. Recently, I have been confronted to the problem and after several unsuccessful trials and experimentations, I came finally with two viable solutions: * *Write your own printing DLL using the Win32 API (in C/C++ for instance), then use it from your service with P/Invoke (works fine) *Write your own printing COM+ component, then uses it from your service. I have chosen this solution with success recently (but it was third party COM+ component, not own written) It works absolutely fine too. A: This may not be what you're looking for, but if I needed to do this quick&dirty, I would: * *Create a separate WPF application (so I could use the built-in document handling) *Give the service the ability to interact with the desktop (note that you don't actually have to show anything on the desktop, or be logged in for this to work) *Have the service run the application, and give it the data to print. You could probably also jigger this to print from a web browser that you run from the service (though I'd recommend building your own shell IE, rather than using a full browser). For a more detailed (also free) solution, your best bet is probably to manually format the document yourself (using GDI+ to do the layout for you). This is tedious, error prone, time consuming, and wastes a lot of paper during development, but also gives you the most control over what's going to the printer. A: If you can output to post script some printers will print anything that gets FTPed to a certain directory on them. We used this to get past the print credits that our university exposed on us, but if your service outputs to a ps then you can just ftp the ps file to the printer. A: We are using DevExpress' XtraReports to print from a service without any problems. Their report model is similar to that of Windows Forms, so you could dynamically insert text elements and then issue the print command. 
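As a concrete illustration of the "send the print-ready file straight to the device" suggestion earlier in the thread: many network printers accept raw PCL or PostScript on TCP port 9100, which needs no GDI, no logged-in desktop and no printer driver. This is only a sketch, and the IP address and file name in the usage comment are placeholders:

using System.IO;
using System.Net.Sockets;

public static class RawPrinter
{
    // Streams a print-ready (PCL/PostScript) file to a JetDirect-style printer on port 9100.
    public static void Send(string printerIp, string printReadyFile)
    {
        using (TcpClient client = new TcpClient(printerIp, 9100))
        using (NetworkStream stream = client.GetStream())
        using (FileStream file = File.OpenRead(printReadyFile))
        {
            byte[] buffer = new byte[8192];
            int read;
            while ((read = file.Read(buffer, 0, buffer.Length)) > 0)
            {
                stream.Write(buffer, 0, read);   // the printer consumes the data as-is
            }
        }
    }
}

// e.g. RawPrinter.Send("10.0.0.50", @"C:\spool\invoice.ps");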
A: I think we are going to go the third party route. I like the XSL -> HTML -> PDF -> Printer flow... Winnovative's HTML to PDF looks good for the first part, but I'm running into a block finding a good PDF printing solution... any suggestions? Ideally the license would be on a developer basis, not on a deployed runtime basis. A: In answer to your question about PDF printing, I have not found an elegant solution. I was "shell" ing out to Adobe which was unreliable and required a user to be logged in at all times. To fix this specific problem, I requested that the files we process (invoices) be formatted as multi-page Tiff files instead which can be split apart and printed using native .NET printing functions. Adobe's position seems to be "get the user to view the file in Adobe Reader and they can click print". Useless. I am still keen to find a good way of producing quality reports which can be output from the web server... A: Printing using System.Drawing.Printing is not supported by MS, as per Yann Trevin's response. However, you might be able to use the new, WPF-based, System.Printing (I think)
{ "language": "en", "url": "https://stackoverflow.com/questions/7990", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "23" }
Q: Center text output from Graphics.DrawString() I'm using the .NETCF (Windows Mobile) Graphics class and the DrawString() method to render a single character to the screen. The problem is that I can't seem to get it centred properly. No matter what I set for the Y coordinate of the location of the string render, it always comes out lower than that, and the larger the text size the greater the Y offset. For example, at text size 12 the offset is about 4, but at 32 the offset is about 10. I want the character to vertically take up most of the rectangle it's being drawn in and be centred horizontally. Here's my basic code (this references the user control it's being drawn in): Graphics g = this.CreateGraphics(); float padx = ((float)this.Size.Width) * (0.05F); float pady = ((float)this.Size.Height) * (0.05F); float width = ((float)this.Size.Width) - 2 * padx; float height = ((float)this.Size.Height) - 2 * pady; float emSize = height; g.DrawString(letter, new Font(FontFamily.GenericSansSerif, emSize, FontStyle.Regular), new SolidBrush(Color.Black), padx, pady); Yes, I know there is the label control that I could use instead and set the centring with that, but I actually do need to do this manually with the Graphics class. A: I'd like to add another vote for the StringFormat object. You can use this simply to specify "center, center" and the text will be drawn centrally in the rectangle or points provided: StringFormat format = new StringFormat(); format.LineAlignment = StringAlignment.Center; format.Alignment = StringAlignment.Center; However, there is one issue with this in CF. If you use Center for both values then it turns TextWrapping off. No idea why this happens; it appears to be a bug with the CF. A: To align text, use the following: StringFormat sf = new StringFormat(); sf.LineAlignment = StringAlignment.Center; sf.Alignment = StringAlignment.Center; e.Graphics.DrawString("My String", this.Font, Brushes.Black, ClientRectangle, sf); Please note that the text here is aligned in the given bounds. In this sample this is the ClientRectangle. A: Here's some code. This assumes you are doing this on a form, or a UserControl. Graphics g = this.CreateGraphics(); SizeF size = g.MeasureString("string to measure", this.Font); int nLeft = Convert.ToInt32((this.ClientRectangle.Width / 2) - (size.Width / 2)); int nTop = Convert.ToInt32((this.ClientRectangle.Height / 2) - (size.Height / 2)); From your post, it sounds like the ClientRectangle part (as in, you're not using it) is what's giving you difficulty. A: You can use an instance of the StringFormat object passed into the DrawString method to center the text. See Graphics.DrawString Method and StringFormat Class.
A: Through a combination of the suggestions I got, I came up with this: private void DrawLetter() { Graphics g = this.CreateGraphics(); float width = ((float)this.ClientRectangle.Width); float height = ((float)this.ClientRectangle.Width); float emSize = height; Font font = new Font(FontFamily.GenericSansSerif, emSize, FontStyle.Regular); font = FindBestFitFont(g, letter.ToString(), font, this.ClientRectangle.Size); SizeF size = g.MeasureString(letter.ToString(), font); g.DrawString(letter, font, new SolidBrush(Color.Black), (width-size.Width)/2, 0); } private Font FindBestFitFont(Graphics g, String text, Font font, Size proposedSize) { // Compute actual size, shrink if needed while (true) { SizeF size = g.MeasureString(text, font); // It fits, back out if (size.Height <= proposedSize.Height && size.Width <= proposedSize.Width) { return font; } // Try a smaller font (90% of old size) Font oldFont = font; font = new Font(font.Name, (float)(font.Size * .9), font.Style); oldFont.Dispose(); } } So far, this works flawlessly. The only thing I would change is to move the FindBestFitFont() call to the OnResize() event so that I'm not calling it every time I draw a letter. It only needs to be called when the control size changes. I just included it in the function for completeness. A: To draw a centered text: TextRenderer.DrawText(g, "my text", Font, Bounds, ForeColor, BackColor, TextFormatFlags.HorizontalCenter | TextFormatFlags.VerticalCenter | TextFormatFlags.GlyphOverhangPadding); Determining optimal font size to fill an area is a bit more difficult. One working soultion I found is trial-and-error: start with a big font, then repeatedly measure the string and shrink the font until it fits. Font FindBestFitFont(Graphics g, String text, Font font, Size proposedSize, TextFormatFlags flags) { // Compute actual size, shrink if needed while (true) { Size size = TextRenderer.MeasureText(g, text, font, proposedSize, flags); // It fits, back out if ( size.Height <= proposedSize.Height && size.Width <= proposedSize.Width) { return font; } // Try a smaller font (90% of old size) Font oldFont = font; font = new Font(font.FontFamily, (float)(font.Size * .9)); oldFont.Dispose(); } } You'd use this as: Font bestFitFont = FindBestFitFont(g, text, someBigFont, sizeToFitIn, flags); // Then do your drawing using the bestFitFont // Don't forget to dispose the font (if/when needed)
{ "language": "en", "url": "https://stackoverflow.com/questions/7991", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "71" }
Q: Best way to wrap rsync progress in a gui? I use rsync to synchronize files to Windows clients in a server agnostic way. What methods are available to send the progress of rsync to the parent process for display in a gui progress bar? I imagine two or three choices exist. (1) Watch STDOUT (2) Watch rsync.exe log file, similar to unix tail (3) Watch rsync console output in memory. Which one is best/preferred? A: For this type of tasks, I use my own AutoIt script (freeware, Windows only). The script redirects the standard output into a graphical window, displaying it with the ability to scroll back, etc (very useful in long processes like XCOPYs / PKZIPs to check if any error did happen). I use AutoIt because it's free, very easy to use, and can compile quickly into an .EXE. I think it's an excellent alternative to a complete programming language for this type of tasks. The downside is that it's for Windows only. $sCmd = "DIR E:\*.AU3 /S" ; Test command $nAutoTimeout = 10 ; Time in seconds to close window after finish $nDeskPct = 60 ; % of desktop size (if percent) ; $nHeight = 480 ; height/width of the main window (if fixed) ; $nWidth = 480 $sTitRun = "Executing process. Wait...." ; $sTitDone = "Process done" ; $sSound = @WindowsDir & "\Media\Ding.wav" ; End Sound $sButRun = "Cancel" ; Caption of "Exec" button $sButDone = "Close" ; Caption of "Close" button #include <GUIConstants.au3> #include <Constants.au3> #Include <GuiList.au3> Opt("GUIOnEventMode", 1) if $nDeskPct > 0 Then $nHeight = @DesktopHeight * ($nDeskPct / 100) $nWidth = @DesktopWidth * ($nDeskPct / 100) EndIf If $CmdLine[0] > 0 Then $sCmd = "" For $nCmd = 1 To $CmdLine[0] $sCmd = $sCmd & " " & $CmdLine[$nCmd] Next ; MsgBox (1,"",$sCmd) EndIf ; AutoItSetOption("GUIDataSeparatorChar", Chr(13)+Chr(10)) $nForm = GUICreate($sTitRun, $nWidth, $nHeight) GUISetOnEvent($GUI_EVENT_CLOSE, "CloseForm") $nList = GUICtrlCreateList ("", 10, 10, $nWidth - 20, $nHeight - 50, $WS_BORDER + $WS_VSCROLL) GUICtrlSetFont (-1, 9, 0, 0, "Courier New") $nClose = GUICtrlCreateButton ($sButRun, $nWidth - 100, $nHeight - 40, 80, 30) GUICtrlSetOnEvent (-1, "CloseForm") GUISetState(@SW_SHOW) ;, $nForm) $nPID = Run(@ComSpec & " /C " & $sCmd, ".", @SW_HIDE, $STDOUT_CHILD) ; $nPID = Run(@ComSpec & " /C _RunErrl.bat " & $sCmd, ".", @SW_HIDE, $STDOUT_CHILD) ; # Con ésto devuelve el errorlevel en _ERRL.TMP While 1 $sLine = StdoutRead($nPID) If @error Then ExitLoop If StringLen ($sLine) > 0 then $sLine = StringReplace ($sLine, Chr(13), "|") $sLine = StringReplace ($sLine, Chr(10), "") if StringLeft($sLine, 1)="|" Then $sLine = " " & $sLine endif GUICtrlSetData ($nList, $sLine) _GUICtrlListSelectIndex ($nList, _GUICtrlListCount ($nList) - 1) EndIf Wend $sLine = " ||" GUICtrlSetData ($nList, $sLine) _GUICtrlListSelectIndex ($nList, _GUICtrlListCount ($nList) - 1) GUICtrlSetData ($nClose, $sButDone) WinSetTitle ($sTitRun, "", $sTitDone) If $sSound <> "" Then SoundPlay ($sSound) EndIf $rInfo = DllStructCreate("uint;dword") ; # LASTINPUTINFO DllStructSetData($rInfo, 1, DllStructGetSize($rInfo)); DllCall("user32.dll", "int", "GetLastInputInfo", "ptr", DllStructGetPtr($rInfo)) $nLastInput = DllStructGetData($rInfo, 2) $nTime = TimerInit() While 1 If $nAutoTimeout > 0 Then DllCall("user32.dll", "int", "GetLastInputInfo", "ptr", DllStructGetPtr($rInfo)) If DllStructGetData($rInfo, 2) <> $nLastInput Then ; Tocó una tecla $nAutoTimeout = 0 EndIf EndIf If $nAutoTimeout > 0 And TimerDiff ($nTime) > $nAutoTimeOut * 1000 Then ExitLoop EndIf Sleep (100) Wend Func CloseForm() 
Exit EndFunc A: .NET has a pretty straight forward way to read and watch STDOUT. I guess this would be the cleanest way, since it is not dependent on any external files, just the path to rsync. I would not be too surprised if there is a wrapper library out there either. If not, write and open source it :) A: I've built my own simple object for this, I get a lot of reuse out of it, I can wrap it with a cmdline, web page, webservice, write output to a file, etc--- The commented items contain some rsync examples-- what I'd like to do sometime is embed rsync (and cygwin) into a resource & make a single .net executable out of it-- Here you go: Imports System.IO Namespace cds Public Class proc Public _cmdString As String Public _workingDir As String Public _arg As String Public Function basic() As String Dim sOut As String = "" Try 'Set start information. 'Dim startinfo As New ProcessStartInfo("C:\Program Files\cwRsync\bin\rsync", "-avzrbP 192.168.42.6::cdsERP /cygdrive/s/cdsERP_rsync/gwy") 'Dim startinfo As New ProcessStartInfo("C:\Program Files\cwRsync\bin\rsync", "-avzrbP 10.1.1.6::user /cygdrive/s/cdsERP_rsync/gws/user") 'Dim startinfo As New ProcessStartInfo("C:\windows\system32\cscript", "//NoLogo c:\windows\system32\prnmngr.vbs -l") Dim si As New ProcessStartInfo(_cmdString, _arg) si.UseShellExecute = False si.CreateNoWindow = True si.RedirectStandardOutput = True si.RedirectStandardError = True si.WorkingDirectory = _workingDir ' Make the process and set its start information. Dim p As New Process() p.StartInfo = si ' Start the process. p.Start() ' Attach to stdout and stderr. Dim stdout As StreamReader = p.StandardOutput() Dim stderr As StreamReader = p.StandardError() sOut = stdout.ReadToEnd() & ControlChars.NewLine & stderr.ReadToEnd() 'Dim writer As New StreamWriter("out.txt", FileMode.CreateNew) 'writer.Write(sOut) 'writer.Close() stdout.Close() stderr.Close() p.Close() Catch ex As Exception sOut = ex.Message End Try Return sOut End Function End Class End Namespace A: Check out DeltaCopy. It is a Windows GUI for rsync. A: Check NAsBackup Its open source software that give Windows user Rsync GUI using Watch STDOUT.
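For the "watch STDOUT" route in C#, a rough sketch looks like the following; the rsync path, the arguments and the UpdateProgress method are placeholders. Note that rsync --progress separates intermediate updates with carriage returns rather than newlines, so a character-level read (rather than ReadLine) is needed if you want every percentage tick:

using System;
using System.Diagnostics;
using System.Text.RegularExpressions;

public class RsyncRunner
{
    public void Run()
    {
        ProcessStartInfo psi = new ProcessStartInfo("rsync.exe", "-av --progress src/ dest/");
        psi.UseShellExecute = false;
        psi.RedirectStandardOutput = true;
        psi.CreateNoWindow = true;

        Regex percent = new Regex(@"(\d+)%");
        using (Process p = Process.Start(psi))
        {
            string line;
            while ((line = p.StandardOutput.ReadLine()) != null)
            {
                Match m = percent.Match(line);
                if (m.Success)
                {
                    UpdateProgress(int.Parse(m.Groups[1].Value));  // marshal to the GUI thread in a real app
                }
            }
            p.WaitForExit();
        }
    }

    private void UpdateProgress(int percentComplete)
    {
        Console.WriteLine("{0}% complete", percentComplete);       // stand-in for a progress bar update
    }
}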
{ "language": "en", "url": "https://stackoverflow.com/questions/8004", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Allow user to set up an SSH tunnel, but nothing else I'd like to allow a user to set up an SSH tunnel to a particular machine on a particular port (say, 5000), but I want to restrict this user as much as possible. (Authentication will be with public/private keypair). I know I need to edit the relevant ~/.ssh/authorized_keys file, but I'm not sure exactly what content to put in there (other than the public key). A: On Ubuntu 11.10, I found I could block ssh commands, sent with and without -T, and block scp copying, while allowing port forwarding to go through. Specifically I have a redis-server on "somehost" bound to localhost:6379 that I wish to share securely via ssh tunnels to other hosts that have a keyfile and will ssh in with: $ ssh -i keyfile.rsa -T -N -L 16379:localhost:6379 someuser@somehost This will cause the redis-server, "localhost" port 6379 on "somehost" to appear locally on the host executing the ssh command, remapped to "localhost" port 16379. On the remote "somehost" Here is what I used for authorized_keys: cat .ssh/authorized_keys (portions redacted) no-pty,no-X11-forwarding,permitopen="localhost:6379",command="/bin/echo do-not-send-commands" ssh-rsa rsa-public-key-code-goes-here keyuser@keyhost The no-pty trips up most ssh attempts that want to open a terminal. The permitopen explains what ports are allowed to be forwarded, in this case port 6379 the redis-server port I wanted to forward. The command="/bin/echo do-not-send-commands" echoes back "do-not-send-commands" if someone or something does manage to send commands to the host via ssh -T or otherwise. From a recent Ubuntu man sshd, authorized_keys / command is described as follows: command="command" Specifies that the command is executed whenever this key is used for authentication. The command supplied by the user (if any) is ignored. Attempts to use scp secure file copying will also fail with an echo of "do-not-send-commands" I've found sftp also fails with this configuration. I think the restricted shell suggestion, made in some previous answers, is also a good idea. Also, I would agree that everything detailed here could be determined from reading "man sshd" and searching therein for "authorized_keys" A: My solution is to provide the user who only may be tunneling, without an interactive shell, to set that shell in /etc/passwd to /usr/bin/tunnel_shell. Just create the executable file /usr/bin/tunnel_shell with an infinite loop. #!/bin/bash trap '' 2 20 24 clear echo -e "\r\n\033[32mSSH tunnel started, shell disabled by the system administrator\r\n" while [ true ] ; do sleep 1000 done exit 0 Fully explained here: http://blog.flowl.info/2011/ssh-tunnel-group-only-and-no-shell-please/ A: Here you have a nice post that I found useful: http://www.ab-weblog.com/en/creating-a-restricted-ssh-user-for-ssh-tunneling-only/ The idea is: (with the new restricted username as "sshtunnel") useradd sshtunnel -m -d /home/sshtunnel -s /bin/rbash passwd sshtunnel Note that we use rbash (restricted-bash) to restrict what the user can do: the user cannot cd (change directory) and cannot set any environment variables. Then we edit the user's PATH env variable in /home/sshtunnel/.profile to nothing - a trick that will make bash not find any commands to execute: PATH="" Finally we disallow the user to edit any files by setting the following permissions: chmod 555 /home/sshtunnel/ cd /home/sshtunnel/ chmod 444 .bash_logout .bashrc .profile A: I'm able to set up the authorized_keys file with the public key to log in. 
What I'm not sure about is the additional information I need to restrict what that account is allowed to do. For example, I know I can put commands such as: no-pty,no-port-forwarding,no-X11-forwarding,no-agent-forwarding You would want a line in your authorized_keys file that looks like this: permitopen="host.domain.tld:443",no-pty,no-agent-forwarding,no-X11-forwarding,command="/bin/noshell.sh" ssh-rsa AAAAB3NzaC.......wCUw== zoredache A: If you want to allow access only for a specific command -- like svn -- you can also specify that command in the authorized_keys file: command="svnserve -t",no-port-forwarding,no-pty,no-agent-forwarding,no-X11-forwarding [KEY TYPE] [KEY] [KEY COMMENT] From http://svn.apache.org/repos/asf/subversion/trunk/notes/ssh-tricks A: You'll probably want to set the user's shell to the restricted shell. Unset the PATH variable in the user's ~/.bashrc or ~/.bash_profile, and they won't be able to execute any commands. Later on, if you decide you want to allow the user(s) to execute a limited set of commands, like less or tail for instance, then you can copy the allowed commands to a separate directory (such as /home/restricted-commands) and update the PATH to point to that directory. A: Besides authorized_keys options like no-X11-forwarding, there actually is exactly one that does what you are asking for: permitopen="host:port". By using this option, the user may only set up a tunnel to the specified host and port. For the details of the AUTHORIZED_KEYS file format refer to man sshd. A: I made a C program which looks like this: #include <stdio.h> #include <unistd.h> #include <signal.h> #include <stdlib.h> void sig_handler(int signo) { if (signo == SIGHUP) exit(0); } int main() { signal(SIGINT, &sig_handler); signal(SIGTSTP, &sig_handler); printf("OK\n"); while(1) sleep(1); exit(0); } I set the restricted user's shell to this program. I don't think the restricted user can execute anything, even if they pass a command on the ssh command line, because commands are executed through the shell, and this shell does not execute anything. A: You will generate a key on the user's machine via whatever ssh client they are using. PuTTY, for example, has a utility to do this exact thing. It will generate both a private and a public key. The contents of the generated public key file will be placed in the authorized_keys file. Next you need to make sure that the ssh client is configured to use the private key that generated the public key. It's fairly straightforward, but slightly different depending on the client being used. A: See this post on authenticating public keys. The two main things you need to remember are: * *Make sure you chmod 700 ~/.ssh *Append the public key block to authorized_keys
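If you happen to be driving the client side of such a restricted tunnel from .NET rather than from the ssh command line, a library like SSH.NET (Renci.SshNet) can open the same port forward. This is only a hedged sketch - the host, user, port numbers and key path are placeholders, and the API calls are from memory, so check them against the library's documentation:

using System;
using Renci.SshNet;   // SSH.NET NuGet package

class TunnelOnly
{
    static void Main()
    {
        // The key pair matches what you put in the restricted account's authorized_keys.
        var key = new PrivateKeyFile(@"C:\keys\keyfile.rsa");
        using (var client = new SshClient("somehost", "someuser", key))
        {
            client.Connect();

            // Local 16379 -> remote localhost:6379, matching the redis example above.
            var forward = new ForwardedPortLocal("127.0.0.1", 16379, "localhost", 6379);
            client.AddForwardedPort(forward);
            forward.Start();

            Console.ReadLine();   // keep the tunnel open until Enter is pressed

            forward.Stop();
            client.Disconnect();
        }
    }
}

Because the server-side key options forbid pty allocation and shell commands, this client never needs (or gets) anything beyond the forwarded port.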
{ "language": "en", "url": "https://stackoverflow.com/questions/8021", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "101" }
Q: Extension interface patterns The new extensions in .Net 3.5 allow functionality to be split out from interfaces. For instance in .Net 2.0 public interface IHaveChildren { string ParentType { get; } int ParentId { get; } List<IChild> GetChildren() } Can (in 3.5) become: public interface IHaveChildren { string ParentType { get; } int ParentId { get; } } public static class HaveChildrenExtension { public static List<IChild> GetChildren( this IHaveChildren ) { //logic to get children by parent type and id //shared for all classes implementing IHaveChildren } } This seems to me to be a better mechanism for many interfaces. They no longer need an abstract base to share this code, and functionally the code works the same. This could make the code more maintainable and easier to test. The only disadvantage being that an abstract bases implementation can be virtual, but can that be worked around (would an instance method hide an extension method with the same name? would it be confusing code to do so?) Any other reasons not to regularly use this pattern? Clarification: Yeah, I see the tendency with extension methods is to end up with them everywhere. I'd be particularly careful having any on .Net value types without a great deal of peer review (I think the only one we have on a string is a .SplitToDictionary() - similar to .Split() but taking a key-value delimiter too) I think there's a whole best practice debate there ;-) (Incidentally: DannySmurf, your PM sounds scary.) I'm specifically asking here about using extension methods where previously we had interface methods. I'm trying to avoid lots of levels of abstract base classes - the classes implementing these models mostly already have base classes. I think this model could be more maintainable and less overly-coupled than adding further object hierarchies. Is this what MS has done to IEnumerable and IQueryable for Linq? A: I think the judicious use of extension methods put interfaces on a more equatable position with (abstract) base classes. Versioning. One advantage base classes have over interfaces is that you can easily add new virtual members in a later version, whereas adding members to an interface will break implementers built against the old version of the library. Instead, a new version of the interface with the new members needs to be created, and the library will have to work around or limit access to legacy objects only implementing the original interface. As a concrete example, the first version of a library might define an interface like so: public interface INode { INode Root { get; } List<INode> GetChildren( ); } Once the library has released, we cannot modify the interface without breaking current users. Instead, in the next release we would need to define a new interface to add additional functionalty: public interface IChildNode : INode { INode Parent { get; } } However, only users of the new library will be able to implement the new interface. In order to work with legacy code, we need to adapt the old implementation, which an extension method can handle nicely: public static class NodeExtensions { public INode GetParent( this INode node ) { // If the node implements the new interface, call it directly. var childNode = node as IChildNode; if( !object.ReferenceEquals( childNode, null ) ) return childNode.Parent; // Otherwise, fall back on a default implementation. return FindParent( node, node.Root ); } } Now all users of the new library can treat both legacy and modern implementations identically. Overloads. 
Another area where extension methods can be useful is in providing overloads for interface methods. You might have a method with several parameters to control its action, of which only the first one or two are important in the 90% case. Since C# does not allow setting default values for parameters, users either have to call the fully parameterized method every time, or every implementation must implement the trivial overloads for the core method. Instead extension methods can be used to provide the trivial overload implementations: public interface ILongMethod { public bool LongMethod( string s, double d, int i, object o, ... ); } ... public static LongMethodExtensions { public bool LongMethod( this ILongMethod lm, string s, double d ) { lm.LongMethod( s, d, 0, null ); } ... } Please note that both of these cases are written in terms of the operations provided by the interfaces, and involve trivial or well-known default implementations. That said, you can only inherit from a class once, and the targeted use of extension methods can provide a valuable way to deal with some of the niceties provided by base classes that interfaces lack :) Edit: A related post by Joe Duffy: Extension methods as default interface method implementations A: I think the best thing that extension methods replace are all those utility classes that you find in every project. At least for now, I feel that any other use of Extension methods would cause confusion in the workplace. My two bits. A: There is nothing wrong with extending interfaces, in fact that is how LINQ works to add the extension methods to the collection classes. That being said, you really should only do this in the case where you need to provide the same functionality across all classes that implement that interface and that functionality is not (and probably should not be) part of the "official" implementation of any derived classes. Extending an interface is also good if it is just impractical to write an extension method for every possible derived type that requires the new functionality. A: I see separating the domain/model and UI/view functionality using extension methods as a good thing, especially since they can reside in separate namespaces. For example: namespace Model { class Person { public string Title { get; set; } public string FirstName { get; set; } public string Surname { get; set; } } } namespace View { static class PersonExtensions { public static string FullName(this Model.Person p) { return p.Title + " " + p.FirstName + " " + p.Surname; } public static string FormalName(this Model.Person p) { return p.Title + " " + p.FirstName[0] + ". " + p.Surname; } } } This way extension methods can be used similarly to XAML data templates. You can't access private/protected members of the class but it allows the data abstraction to be maintained without excessive code duplication throughout the application. A: A little bit more. If multiple interfaces have the same extension method signature, you would need to explicitly convert the caller to one interface type and then call the method. E.g. ((IFirst)this).AmbigousMethod() A: Extension methods should be used as just that: extensions. Any crucial structure/design related code or non-trivial operation should be put in an object that is composed into/inherited from a class or interface. Once another object tries to use the extended one, they won't see the extensions and might have to reimplement/re-reference them again. 
The traditional wisdom is that Extension methods should only be used for: * *utility classes, as Vaibhav mentioned *extending sealed 3rd party APIs A: Ouch. Please don't extend Interfaces. An interface is a clean contract that a class should implement, and your usage of said classes must be restricted to what is in the core Interface for this to work correctly. That is why you always declare the interface as the type instead of the actual class. IInterface variable = new ImplementingClass(); Right? If you really need a contract with some added functionality, abstract classes are your friends. A: I see a lot of people advocating using a base class to share common functionality. Be careful with this - you should favor composition over inheritance. Inheritance should only be used for polymorphism, when it makes sense from a modelling point of view. It is not a good tool for code reuse. As for the question: Be ware of the limitations when doing this - for example in the code shown, using an extension method to implement GetChildren effectively 'seals' this implementation and doesn't allow any IHaveChildren impl to provide its own if needed. If this is OK, then I dont mind the extension method approach that much. It is not set in stone, and can usually be easily refactored when more flexibility is needed later. For greater flexibility, using the strategy pattern may be preferable. Something like: public interface IHaveChildren { string ParentType { get; } int ParentId { get; } } public interface IChildIterator { IEnumerable<IChild> GetChildren(); } public void DefaultChildIterator : IChildIterator { private readonly IHaveChildren _parent; public DefaultChildIterator(IHaveChildren parent) { _parent = parent; } public IEnumerable<IChild> GetChildren() { // default child iterator impl } } public class Node : IHaveChildren, IChildIterator { // *snip* public IEnumerable<IChild> GetChildren() { return new DefaultChildIterator(this).GetChildren(); } } A: Rob Connery (Subsonic and MVC Storefront) implemented an IRepository-like pattern in his Storefront application. It's not quite the pattern above, but it does share some similarities. The data layer returns IQueryable which permits the consuming layer to apply filtering and sorting expression on top of that. The bonus is being able to specify a single GetProducts method, for example, and then decide appropriately in the consuming layer how you want that sorting, filtering or even just a particular range of results. Not a traditional approach, but very cool and definitely a case of DRY. A: One problem I can see is that, in a large company, this pattern could allow the code to become difficult (if not impossible) for anyone to understand and use. If multiple developers are constantly adding their own methods to existing classes, separate from those classes (and, God help us all, to BCL classes even), I could see a code base spinning out of control rather quickly. Even at my own job, I could see this happening, with my PM's desire to add every bit of code that we work on to either the UI or the data access layer, I could totally see him insisting on 20 or 30 methods being added to System.String that are only tangentially-related to string handling. A: I needed to solve something similar: I wanted to have a List<IIDable> passed to the extensions function where IIDable is an interface that has a long getId() function. I tried using GetIds(this List<IIDable> bla) but the compiler didn't allow me to do so. 
I used generics instead and then cast to the interface type inside the function. I needed this function for some LINQ to SQL generated classes. I hope this helps :) public static List<long> GetIds<T>(this List<T> original) { List<long> ret = new List<long>(); if (original == null) return ret; try { foreach (T t in original) { IIDable idable = (IIDable)t; ret.Add(idable.getId()); } return ret; } catch (InvalidCastException) { throw new Exception("Class calling this extension must implement IIDable interface"); } }
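A hedged alternative to the cast-and-catch version above: if you can constrain the element type at compile time, a generic constraint removes the runtime cast and the exception handling entirely (IIDable and getId() are the interface and method from the original post):

using System.Collections.Generic;

public static class IdExtensions
{
    // The constraint gives a compile-time guarantee that T implements IIDable.
    public static List<long> GetIds<T>(this List<T> original) where T : IIDable
    {
        var ret = new List<long>();
        if (original == null) return ret;
        foreach (T t in original)
            ret.Add(t.getId());   // no cast needed
        return ret;
    }
}

This also accepts a List<ConcreteType> directly, which the non-generic List<IIDable> signature could not do, since generic lists are not covariant.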
{ "language": "en", "url": "https://stackoverflow.com/questions/8042", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "25" }
Q: .NET Interfaces Over the past few years I've changed from having a long flowing page of controls that I hid/showed to using a lot of user controls. I've always had a bit of a discussion with co-workers on best practices. Should you have properties that you populate, or use parameterized subs to load the information in your controls? Part of my fear of using parameters is the fact that I cannot make sure that everything will be populated. What are the basic rules for using interfaces? I've never created one. And should I try this, or stay with a 'sub load'? A: I'm not sure if interfaces are going to help you a lot here. My understanding is that you are breaking a page down into a set of "composite" user controls that contain other controls, and you want to decide whether to use properties for setting values. I guess this really depends on how the user controls are designed and whether they are being dynamically added to a page etc. (one possible scenario). I have a personal preference for specifying stuff in a constructor or using a factory method to create controls. I assume responsibility at creation for making sure that everything is set. My experience with properties is that I'll sometimes forget to set something and not realize my mistake. Your point about setting properties or using a sub, and everything being populated, doesn't make a lot of sense to me. If you have some sort of dependency and need something else to be loaded, then this could happen irrespective of whether it's a property or a sub. I would refer to any book on VB.NET/C#/OOP to see the syntax for interfaces. Interfaces basically describe a contract for a class. If you have classes A and B and both implement an interface called ITime, then both will provide all of the methods defined on ITime. They can still add their own methods, but they must at minimum include an implementation of ITime's methods (e.g. we might have GetDate() and GetCurrentTime() as methods on ITime). An interface doesn't tell class A or B how those methods should work - just their name, parameters and return type. Look up inheritance in an OOP book for more information on how interface inheritance is different from implementation inheritance.
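A minimal sketch of the ITime contract described above, just to make the interface idea concrete (the method bodies are illustrative, not prescribed by the answer):

using System;

public interface ITime
{
    DateTime GetDate();
    DateTime GetCurrentTime();
}

// Both classes honour the same contract, so calling code can be written
// against ITime without caring which concrete class it was handed.
public class A : ITime
{
    public DateTime GetDate() { return DateTime.Today; }
    public DateTime GetCurrentTime() { return DateTime.Now; }
}

public class B : ITime
{
    public DateTime GetDate() { return DateTime.UtcNow.Date; }
    public DateTime GetCurrentTime() { return DateTime.UtcNow; }
}

// Usage: ITime clock = new A(); Console.WriteLine(clock.GetCurrentTime());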
{ "language": "en", "url": "https://stackoverflow.com/questions/8066", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: MySQL replication for fallback scenario When I have two MySQL servers that have different jobs (holding different databases) but want to be able to use one of them to slip in when the other one fails, how would you suggest I keep the data on both of them equal in "close to realtime"? Obviously it's not possible to make a full database dump every x minutes. I've read about the Binary Log, is that the way I need to go? Will that not slow down the fallback server a lot? Is there a way to not include some tables in the binary log - where it doesn't matter that the data has changed? A: You may want to consider the master-master replication scenario, but with a slight twist. You can specify which databases to replicate and limit the replication for each server. For server1 I would add --replicate-do-db=server_2_db and on server2 --replicate-do-db=server_1_db to your my.cnf (or my.ini on Windows). This would mean that only statements for server_1_db would be replicated to server2 and vice versa. Please also make sure that you perform full backups on a regular basis and don't just rely on replication, as it does not provide safety from accidental DROP DATABASE statements or the like. A: Binary log is definitely the way to go. However, you should be aware that with MySQL you can't just flip back and forth between servers like that. One server will be the master and the other will be the slave. You write/read to the master, but can only read from the slave server. If you ever write to the slave, they'll be out of sync and there's no easy way to get them to sync up again (basically, you have to swap them so the master is the new slave, but this is a tedious manual process). If you need true hot-swappable backup databases you might have to go to a system other than MySQL. If all you want is a read-only live backup that you can use instantly in the worst-case scenario (master is permanently destroyed), Binary Log will suit you just fine.
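For reference, a my.cnf sketch of the master-master scheme described in the first answer; the server IDs, log names and database names are assumptions you would replace with your own:

# server1
server-id       = 1
log-bin         = mysql-bin
replicate-do-db = server_2_db

# server2
server-id       = 2
log-bin         = mysql-bin
replicate-do-db = server_1_db

Remember that replicate-do-db filters on the slave side, so each box only applies changes to the other box's database, as the answer describes.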
{ "language": "en", "url": "https://stackoverflow.com/questions/8107", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: Pre-build task - deleting the working copy in CruiseControl.NET I'm currently in the process of setting up a continuous integration environment at work. We are using VisualSVN Server and CruiseControl.NET. Occasionally a build will fail and a symptom is that there are conflicts in the CruiseControl.NET working copy. I believe this is due to the way I've set up the Visual Studio solutions. Hopefully the more projects we run in this environment, the better our understanding of how to set them up will be, so I'm not questioning why the conflicts happen at this stage. To fix the builds I delete the working copy and force a new build - this works every time (currently). So my questions are: is deleting the working copy a valid part of a continuous integration build process, and how do I go about it? I've tried solutions including MSTask and calling delete from the command line, but I'm not having any luck. Sorry for being so wordy - good job this is a beta :) A: @jamie: There is one reason why you may not be able to do a clean build every time when using a continuous integration server -- build time. On some projects I've worked on, clean builds take 80+ minutes (an embedded project consisting of thousands of C++ files to check out and then compile against multiple targets). In this case, you have to weigh the benefit of fast feedback against the likelihood that a clean build will catch something that an incremental build won't. In our case, we worked on improving and parallelizing the build process while at the same time allowing incremental builds on our CI machine. We did have a few problems because we weren't doing clean builds, but by doing a clean build nightly or weekly you could remove the risk without losing the fast feedback of your CI machine. A: If you check out CC.NET's JIRA, there is a patch checked in to implement CleanCopy for Subversion which does exactly what you want; just set CleanCopy to true inside your source control block, just like with the TFS one. A: Doing a full delete before or after your build is good practice. This means that there is no chance of your build environment picking up an out-of-date file. You're building exactly against what is in the repository. Deleting the working copy is possible as I have done it with Nant. In Nant I would have a clean script in its own folder outwith the one I want to delete and would then invoke it from CC.NET. I assume this should also be possible with a batch file. Take a look at the rmdir command http://www.computerhope.com/rmdirhlp.htm @pauldoo I prefer my CI server to do a full delete as I don't want any surprise when I go to do a release build, which should always be done from a clean state. But it should be able to handle both - no reason why not. A: It is very common and generally a good practice for any build process to do a 'clean' before doing any significant build. This prevents any 'artifacts' from previous builds from tainting the output. A clean is essentially what you are doing by deleting the working copy. A: @Brad Barker Clean means to just wipe out build products. Deleting the working copy deletes everything else too (source and project files etc). In general it's nice if your build machine can operate without doing a full delete, as this replicates what a normal developer does. Any conflicts it finds during update are an early warning to what your developers can expect. @jamie For formal releases, yes, it's better to do a completely clean checkout. So I guess it depends on the purpose of the build.
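If you do end up scripting the delete yourself rather than using the CleanCopy patch, the core of it is just a recursive delete of the working folder before the source control task runs - for example a one-line batch file (the path is a placeholder for your project's working directory):

rmdir /s /q "C:\CruiseControl\MyProject\WorkingCopy"

A pre-build task in CC.NET, or the Nant clean script mentioned above, can then invoke this on every forced build.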
{ "language": "en", "url": "https://stackoverflow.com/questions/8127", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: How do I calculate CRC32 of a string How do I calculate the CRC32 (Cyclic Redundancy Checksum) of a string in .NET? A: This guy seems to have your answer. https://damieng.com/blog/2006/08/08/calculating_crc32_in_c_and_net And in case the blog ever goes away or breaks the url, here's the github link: https://github.com/damieng/DamienGKit/blob/master/CSharp/DamienG.Library/Security/Cryptography/Crc32.cs Usage of the Crc32 class from the blog post: Crc32 crc32 = new Crc32(); String hash = String.Empty; using (FileStream fs = File.Open("c:\\myfile.txt", FileMode.Open)) foreach (byte b in crc32.ComputeHash(fs)) hash += b.ToString("x2").ToLower(); Console.WriteLine("CRC-32 is {0}", hash); A: Using the logic from the previous answer, this was my take: public class CRC32 { private readonly uint[] ChecksumTable; private readonly uint Polynomial = 0xEDB88320; public CRC32() { ChecksumTable = new uint[0x100]; for (uint index = 0; index < 0x100; ++index) { uint item = index; for (int bit = 0; bit < 8; ++bit) item = ((item & 1) != 0) ? (Polynomial ^ (item >> 1)) : (item >> 1); ChecksumTable[index] = item; } } public byte[] ComputeHash(Stream stream) { uint result = 0xFFFFFFFF; int current; while ((current = stream.ReadByte()) != -1) result = ChecksumTable[(result & 0xFF) ^ (byte)current] ^ (result >> 8); byte[] hash = BitConverter.GetBytes(~result); Array.Reverse(hash); return hash; } public byte[] ComputeHash(byte[] data) { using (MemoryStream stream = new MemoryStream(data)) return ComputeHash(stream); } } A: Since you seem to be looking to calculate the CRC32 of a string (rather than a file) there's a good example here: https://rosettacode.org/wiki/CRC-32#C.23 The code, should it ever disappear: /// <summary> /// Performs 32-bit reversed cyclic redundancy checks. /// </summary> public class Crc32 { #region Constants /// <summary> /// Generator polynomial (modulo 2) for the reversed CRC32 algorithm. /// </summary> private const UInt32 s_generator = 0xEDB88320; #endregion #region Constructors /// <summary> /// Creates a new instance of the Crc32 class. /// </summary> public Crc32() { // Constructs the checksum lookup table. Used to optimize the checksum. m_checksumTable = Enumerable.Range(0, 256).Select(i => { var tableEntry = (uint)i; for (var j = 0; j < 8; ++j) { tableEntry = ((tableEntry & 1) != 0) ? (s_generator ^ (tableEntry >> 1)) : (tableEntry >> 1); } return tableEntry; }).ToArray(); } #endregion #region Methods /// <summary> /// Calculates the checksum of the byte stream. /// </summary> /// <param name="byteStream">The byte stream to calculate the checksum for.</param> /// <returns>A 32-bit reversed checksum.</returns> public UInt32 Get<T>(IEnumerable<T> byteStream) { try { // Initialize checksumRegister to 0xFFFFFFFF and calculate the checksum. return ~byteStream.Aggregate(0xFFFFFFFF, (checksumRegister, currentByte) => (m_checksumTable[(checksumRegister & 0xFF) ^ Convert.ToByte(currentByte)] ^ (checksumRegister >> 8))); } catch (FormatException e) { throw new CrcException("Could not read the stream out as bytes.", e); } catch (InvalidCastException e) { throw new CrcException("Could not read the stream out as bytes.", e); } catch (OverflowException e) { throw new CrcException("Could not read the stream out as bytes.", e); } } #endregion #region Fields /// <summary> /// Contains a cache of calculated checksum chunks. 
/// </summary> private readonly UInt32[] m_checksumTable; #endregion } and to use it: var arrayOfBytes = Encoding.ASCII.GetBytes("The quick brown fox jumps over the lazy dog"); var crc32 = new Crc32(); Console.WriteLine(crc32.Get(arrayOfBytes).ToString("X")); You can test the input / output values here: https://crccalc.com/
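Tying this back to the question's "of a string" part: a short sketch using the table-based CRC32 class from the second answer. The choice of encoding is an assumption (UTF-8 here) and changes the checksum, so pick it deliberately:

using System;
using System.Linq;
using System.Text;

byte[] data = Encoding.UTF8.GetBytes("The quick brown fox jumps over the lazy dog");
byte[] hash = new CRC32().ComputeHash(data);                  // CRC returned as big-endian bytes
string hex = string.Concat(hash.Select(b => b.ToString("x2")));
Console.WriteLine(hex);                                       // 8 hex digits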
{ "language": "en", "url": "https://stackoverflow.com/questions/8128", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "49" }
Q: Suggestions for Adding Plugin Capability? Is there a general procedure for programming extensibility capability into your code? I am wondering what the general procedure is for adding extension-type capability to a system you are writing so that functionality can be extended through some kind of plugin API rather than having to modify the core code of a system. Do such things tend to be dependent on the language the system was written in, or is there a general method for allowing for this? A: I've used event-based APIs for plugins in the past. You can insert hooks for plugins by dispatching events and providing access to the application state. For example, if you were writing a blogging application, you might want to raise an event just before a new post is saved to the database, and provide the post HTML to the plugin to alter as needed. A: This is generally something that you'll have to expose yourself, so yes, it will be dependent on the language your system is written in (though often it's possible to write wrappers for other languages as well). If, for example, you had a program written in C, for Windows, plugins would be written for your program as DLLs. At runtime, you would manually load these DLLs, and expose some interface to them. For example, the DLLs might expose a gimme_the_interface() function which could accept a structure filled with function pointers. These function pointers would allow the DLL to make calls, register callbacks, etc. If you were in C++, you would use the DLL system, except you would probably pass an object pointer instead of a struct, and the object would implement an interface which provided functionality (accomplishing the same thing as the struct, but less ugly). For Java, you would load class files on-demand instead of DLLs, but the basic idea would be the same. In all cases, you'll need to define a standard interface between your code and the plugins, so that you can initialize the plugins, and so the plugins can interact with you. P.S. If you'd like to see a good example of a C++ plugin system, check out the foobar2000 SDK. I haven't used it in quite a while, but it used to be really well done. I assume it still is. A: I'm tempted to point you to the Design Patterns book for this generic question :p Seriously, I think the answer is no. You can't write extensible code by default, it will be both hard to write/extend and awfully inefficient (Mozilla started with the idea of being very extensible, used XPCOM everywhere, and now they realized it was a mistake and started to remove it where it doesn't make sense). what makes sense to do is to identify the pieces of your system that can be meaningfully extended and support a proper API for these cases (e.g. language support plug-ins in an editor). You'd use the relevant patterns, but the specific implementation depends on your platform/language choice. IMO, it also helps to use a dynamic language - makes it possible to tweak the core code at run time (when absolutely necessary). I appreciated that Mozilla's extensibility works that way when writing Firefox extensions. A: I think there are two aspects to your question: The design of the system to be extendable (the design patterns, inversion of control and other architectural aspects) (http://www.martinfowler.com/articles/injection.html). And, at least to me, yes these patterns/techniques are platform/language independent and can be seen as a "general procedure". 
Now, their implementation is language and platform dependent (for example, in C/C++ you have the dynamic library stuff, etc.). Several 'frameworks' have been developed to give you a programming environment that provides pluggability/extensibility, but as some other people mention, don't get too crazy making everything pluggable. In the Java world a good specification to look at is OSGi (http://en.wikipedia.org/wiki/OSGi), with several implementations, the best one IMHO being Equinox (http://www.eclipse.org/equinox/) A: * *Find out what minimum requirements you want to put on a plugin writer. Then make one or more interfaces that the writer must implement for your code to know when and where to execute the code. *Make an API the writer can use to access some of the functionality in your code. You could also make a base class the writer must inherit. This will make wiring up the API easier. Then use some kind of reflection to scan a directory, and load the classes you find that match your requirements. Some people also make a scripting language for their system, or implement an interpreter for a subset of an existing language. This is also a possible route to go. Bottom line is: when you get the code to load, only your imagination should be able to stop you. Good luck. A: If you are using a compiled language such as C or C++, it may be a good idea to look at plugin support via scripting languages. Both Python and Lua are excellent languages that are used to script a large number of applications (Civ4 and Blender use Python, Supreme Commander uses Lua, etc). If you are using C++, check out the Boost Python library. Otherwise, Python ships with headers that can be used in C, and does a fairly good job documenting the C/Python API. The documentation seemed less complete for Lua, but I may not have been looking hard enough. Either way, you can offer a fairly solid scripting platform without a terrible amount of work. It still isn't trivial, but it provides you with a very good base to work from.
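To make the reflection-scanning idea above concrete, here is a minimal hedged C# sketch; IPlugin, IHostApi and the plugin folder are hypothetical names for this example, not part of any framework mentioned in the answers:

using System;
using System.Collections.Generic;
using System.IO;
using System.Reflection;

// Hypothetical contract that every plugin must implement.
public interface IPlugin
{
    string Name { get; }
    void Initialize(IHostApi host);   // host API your application exposes to plugins
}

// Hypothetical slice of application functionality handed to plugins.
public interface IHostApi
{
    void RegisterMenuItem(string caption, Action onClick);
}

public static class PluginLoader
{
    // Scans a directory for DLLs and instantiates every concrete IPlugin implementation found.
    public static IEnumerable<IPlugin> Load(string pluginDirectory)
    {
        var plugins = new List<IPlugin>();
        foreach (string file in Directory.GetFiles(pluginDirectory, "*.dll"))
        {
            Assembly assembly = Assembly.LoadFrom(file);
            foreach (Type type in assembly.GetTypes())
            {
                if (typeof(IPlugin).IsAssignableFrom(type) && !type.IsAbstract && !type.IsInterface)
                    plugins.Add((IPlugin)Activator.CreateInstance(type));
            }
        }
        return plugins;
    }
}

Each plugin then gets Initialize(host) called with whatever API object you decide to expose, which is the managed equivalent of the gimme_the_interface() idea from the C/DLL answer.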
{ "language": "en", "url": "https://stackoverflow.com/questions/8140", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16" }
Q: When should I use Compiled LINQ vs Normal LINQ I just read up on the performance of LINQ, and there is a HUGE amount to be gained by using Compiled LINQ. Now, why wouldn't I always use compiled LINQ? A: Short answer: skip it when the query is only going to run once in a long while - the up-front compilation cost is never repaid. A: You should use it when a LINQ query is executed frequently. Those queries can be converted to compiled LINQ. Performance improves because the translation of the query is done once, up front, instead of on every execution. I used it in my project and performance went up a notch.
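As a rough illustration (LINQ to SQL here; MyDataContext, Customer and the Customers table stand in for your own generated classes), a compiled query is built once into a static delegate and then reused:

using System;
using System.Data.Linq;   // CompiledQuery lives here for LINQ to SQL
using System.Linq;

static class CustomerQueries
{
    // The expression-tree-to-SQL translation cost is paid once, when this field is initialized.
    public static readonly Func<MyDataContext, string, IQueryable<Customer>> ByCity =
        CompiledQuery.Compile((MyDataContext db, string city) =>
            db.Customers.Where(c => c.City == city));
}

// Usage: var londoners = CustomerQueries.ByCity(db, "London").ToList();

If the query only runs once, the normal form is simpler and the compilation step buys you nothing.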
{ "language": "en", "url": "https://stackoverflow.com/questions/8142", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: How do I find the high water mark (for sessions) on Oracle 9i How can I find the high water mark (the historical maximum number of concurrent users) in an Oracle database (9i)? A: This should do the trick: SELECT sessions_highwater FROM v$license; A: select max_utilization from v$resource_limit where resource_name = 'sessions'; A good overview of Oracle system views can be found here.
{ "language": "en", "url": "https://stackoverflow.com/questions/8145", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: How do you get a custom id to render using HtmlHelper in MVC Using preview 4 of ASP.NET MVC Code like: <%= Html.CheckBox( "myCheckBox", "Click Here", "True", false ) %> only outputs: <input type="checkbox" value="True" name="myCheckBox" /> There is a name there for the form post back but no id for javascript or labels :-( I was hoping that changing it to: Html.CheckBox( "myCheckBox", "Click Here", "True", false, new { id="myCheckBox" } ) would work - but instead I get an exception: System.ArgumentException: An item with the same key has already been added. As if there was already an id somewhere in a collection somewhere - I'm stumped! The full exception for anyone interested follows (hey - wouldn't it be nice to attach files in here): System.ArgumentException: An item with the same key has already been added. at System.ThrowHelper.ThrowArgumentException(ExceptionResource resource) at System.Collections.Generic.Dictionary`2.Insert(TKey key, TValue value, Boolean add) at System.Web.Routing.RouteValueDictionary.Add(String key, Object value) at System.Web.Mvc.TagBuilder2.CreateInputTag(HtmlInputType inputType, String name, RouteValueDictionary attributes) at System.Web.Mvc.CheckBoxBuilder.CheckBox(String htmlName, RouteValueDictionary htmlAttributes) at System.Web.Mvc.CheckBoxBuilder.CheckBox(String htmlName, String text, String value, Boolean isChecked, RouteValueDictionary htmlAttributes) at System.Web.Mvc.CheckBoxExtensions.CheckBox(HtmlHelper helper, String htmlName, String text, String value, Boolean isChecked, Object htmlAttributes) at ASP.views_account_termsandconditions_ascx.__Render__control1(HtmlTextWriter __w, Control parameterContainer) in c:\dev\myProject\Views\Account\Edit.ascx:line 108 A: Try this: <%= Html.CheckBox("myCheckbox", "Click here", "True", false, new {_id ="test" })%> For any keyword you can use an underscore before the name of the attribute. Instead of class you use _class. Since class is a keyword in C#, and also the name of the attribute in HTML. Now, "id" isn't a keyword in C#, but perhaps it is in another .NET language that they want to support. From what I can tell, it's not a keyword in VB.NET, F#, or Ruby so maybe it is a mistake that they force you to use an underscore with it. A: Apparently this is a bug. Because they are adding it to potential rendering values, they just forgot to include it. I would recommend creating a bug on codeplex, and download the source and modify it for your needs.
{ "language": "en", "url": "https://stackoverflow.com/questions/8147", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Pylons error - 'MySQL server has gone away' I'm using Pylons (a python framework) to serve a simple web application, but it seems to die from time to time, with this in the error log: (2006, 'MySQL server has gone away') I did a bit of checking, and saw that this was because the connections to MySQL were not being renewed. This shouldn't be a problem though, because the sqlalchemy.pool_recycle in the config file should automatically keep it alive. The default was 3600, but I dialed it back to 1800 because of this problem. It helped a bit, but 3600 should be fine according to the docs. The errors still happen semi-regularly. I don't want to lower it too much though and DOS my own database :). Maybe something in my MySQL config is goofy? Not sure where to look exactly. Other relevant details: Python 2.5 Pylons: 0.9.6.2 (w/ sql_alchemy) MySQL: 5.0.51 A: I think I fixed it. It's turns out I had a simple config error. My ini file read: sqlalchemy.default.url = [connection string here] sqlalchemy.pool_recycle = 1800 The problem is that my environment.py file declared that the engine would only map keys with the prefix: sqlalchemy.default so pool_recycle was ignored. The solution is to simply change the second line in the ini to: sqlalchemy.default.pool_recycle = 1800 A: You might want to check MySQL's timeout variables: show variables like '%timeout%'; You're probably interested in wait_timeout (less likely but possible: interactive_timeout). On Debian and Ubuntu, the defaults are 28800 (MySQL kills connections after 8 hours), but maybe the default for your platform is different or whoever administrates the server has configured things differently. AFAICT, pool_recycle doesn't actually keep the connections alive, it expires them on its own before MySQL kills them. I'm not familiar with pylons, but if causing the connections to intermittently do a SELECT 1; is an option, that will keep them alive at the cost of basically no server load and minimal network traffic. One final thought: are you somehow managing to use a connection that pylons thinks it has expired?
{ "language": "en", "url": "https://stackoverflow.com/questions/8154", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16" }
Q: MySQL replication: if I don't specify any databases, will log_bin log EVERYTHING? I'm setting up replication for a server which runs a bunch of databases (one per client) and plan on adding more all the time. In my.cnf, instead of having: binlog-do-db = databasename 1 binlog-do-db = databasename 2 binlog-do-db = databasename 3 ... binlog-do-db = databasename n can I rather just have binlog-ignore-db = mysql binlog-ignore-db = informationschema (and no database to log specified) and assume that everything else is logged? EDIT: actually if I remove all my binlog-do-db entries, it seemingly logs everything (as you see the binary log file change position when you modify a database), but on the slave server, nothing gets picked up! (Perhaps this is the case to use replicate-do-db? That would kill the idea; I guess I can't have MySQL automagically detect which databases to replicate.) A: That looks correct: http://dev.mysql.com/doc/refman/5.0/en/binary-log.html#option_mysqld_binlog-ignore-db. According to that reference: There are some --binlog-ignore-db rules. Does the default database match any of the --binlog-ignore-db rules? * *Yes: Do not write the statement, and exit. *No: Write the query and exit. Since you only have ignore commands, all queries will be written to the log as long as the default (active) database doesn't match one of the ignored databases.
{ "language": "en", "url": "https://stackoverflow.com/questions/8166", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "21" }
Q: Generate insert SQL statements from a CSV file I need to import a csv file into Firebird and I've spent a couple of hours trying out some tools and none fit my needs. The main problem is that all the tools I've been trying like EMS Data Import and Firebird Data Wizard expect that my CSV file contains all the information needed by my Table. I need to write some custom SQL in the insert statement, for example, I have a CSV file with the city name, but as my database already has all the cities in another table (normalized), I need to write a subselect in the insert statement to lookup for the city and write its ID, also I have a stored procedure to cread GUIDS. My insert statement would be something like this: INSERT INTO PERSON (ID, NAME, CITY_ID) VALUES((SELECT NEW_GUID FROM CREATE_GUID), :NAME, (SELECT CITY_ID FROM CITY WHERE NAME = :CITY_NAME) How can I approach this? A: Well, if it's a CSV, and it this is a one time process, open up the file in Excel, and then write formulas to populate your data in any way you desire, and then write a simple Concat formula to construct your SQL, and then copy that formula for every row. You will get a large number of SQL statements which you can execute anywhere you want. A: Fabio, I've done what Vaibhav has done many times, and it's a good "quick and dirty" way to get data into a database. If you need to do this a few times, or on some type of schedule, then a more reliable way is to load the CSV data "as-is" into a work table (i.e customer_dataload) and then use standard SQL statements to populate the missing fields. (I don't know Firebird syntax - but something like...) UPDATE person SET id = (SELECT newguid() FROM createguid) UPDATE person SET cityid = (SELECT cityid FROM cities WHERE person.cityname = cities.cityname) etc. Usually, it's much faster (and more reliable) to get the data INTO the database and then fix the data than to try to fix the data during the upload. You also get the benefit of transactions to allow you to ROLLBACK if it does not work!! A: I'd do this with awk. For example, if you had this information in a CSV file: Bob,New York Jane,San Francisco Steven,Boston Marie,Los Angeles The following command will give you what you want, run in the same directory as your CSV file (named name-city.csv in this example). $ awk -F, '{ print "INSERT INTO PERSON (ID, NAME, CITY_ID) VALUES ((SELECT NEW_GUID FROM CREATE_GUID), '\''"$1"'\'', (SELECT CITY_ID FROM CITY WHERE NAME = '\''"$2"'\''))" }' name-city.csv Type awk --help for more information. A: Two online tools which helped me in 2020: https://numidian.io/convert/csv/to/sql https://www.convertcsv.com/csv-to-sql.htm The second one is based on JS and does not upload your data (at least not at the time I am writing this) A: You could import the CSV file into a database table as is, then run an SQL query that does all the required transformations on the imported table and inserts the result into the target table. Assuming the CSV file is imported into temp_table with columns n, city_name: insert into target_table select t.n, c.city_id as city from temp_table t, cities c where t.city_name = c.city_name Nice tip about using Excel, but I also suggest getting comfortable with a scripting language like Python, because for some tasks it's easier to just write a quick python script to do the job than trying to find the function you need in Excel or a pre-made tool that does the job. A: You can use the free csvsql to do this. 
* *Install it using these instructions *Now run a command like so to import your data into your database. More details at the links above, but it'd be something like: csvsql --db firebase:///d=mydb --insert mydata.csv *The following works with sqlite, and is what I use to convert data into an easy to query format csvsql --db sqlite:///dump.db --insert mydata.csv A: It's a bit crude - but for one off jobs, I sometimes use Excel. If you import the CSV file into Excel, you can create a formula which creates an INSERT statement by using string concatenation in the formula. So - if your CSV file has 3 columns that appear in columns A, B, and C in Excel, you could write a formula like... ="INSERT INTO MyTable (Col1, Col2, Col3) VALUES (" & A1 & ", " & B1 & ", " & C1 & ")" Then you can replicate the formula down all of your rows, and copy, and paste the answer into a text file to run against your database. Like I say - it's crude - but it can be quite a 'quick and dirty' way of getting a job done! A: Just finished this VBA script which might be handy for this purpose. All should need to do is change the Insert statement to include the table in question and the list of columns (obviously in the same sequence they appear on the Excel file). Function CreateInsertStatement() 'Output file location and start of the insert statement SQLScript = "C:\Inserts.sql" cStart = "Insert Into Holidays (HOLIDAY_ID, NAT_HOLDAY_DESC, NAT_HOLDAY_DTE) Values (" 'Open file for output Open SQLScript For Output As #1 Dim LoopThruRows As Boolean Dim LoopThruCols As Boolean nCommit = 1 'Commit Count nCommitCount = 100 'The number of rows after which a commit is performed LoopThruRows = True nRow = 1 'Current row While LoopThruRows nRow = nRow + 1 'Start at second row - presuming there are headers nCol = 1 'Reset the columns If Cells(nRow, nCol).Value = Empty Then Print #1, "Commit;" LoopThruRows = False Else If nCommit = nCommitCount Then Print #1, "Commit;" nCommit = 1 Else nCommit = nCommit + 1 End If cLine = cStart LoopThruCols = True While LoopThruCols If Cells(nRow, nCol).Value = Empty Then cLine = cLine & ");" 'Close the SQL statement Print #1, cLine 'Write the line LoopThruCols = False 'Exit the cols loop Else If nCol > 1 Then 'add a preceeding comma for all bar the first column cLine = cLine & ", " End If If Right(Left(Cells(nRow, nCol).Value, 3), 1) = "/" Then 'Format for dates cLine = cLine & "TO_DATE('" & Cells(nRow, nCol).Value & "', 'dd/mm/yyyy')" ElseIf IsNumeric(Left(Cells(nRow, nCol).Value, 1)) Then 'Format for numbers cLine = cLine & Cells(nRow, nCol).Value Else 'Format for text, including apostrophes cLine = cLine & "'" & Replace(Cells(nRow, nCol).Value, "'", "''") & "'" End If nCol = nCol + 1 End If Wend End If Wend Close #1 End Function A: use the csv-file as an external table. Then you can use SQL to copy the data from the external table to your destination table - with all the possibilities of SQL. See http://www.firebirdsql.org/index.php?op=useful&id=netzka A: A tool I recently tried that worked outstandingly well is FSQL. You write an IMPORT command, paste it into FSQL and it imports the CSV file into the Firebird table. A: option 1: 1- have you tried IBExert? IBExpert \ Tools \ Import Data (Trial or Customer Version). option 2: 2- upload your csv file to a temporary table with F_BLOBLOAD. 3- create a stored procedure, which used 3 functions (f_stringlength, f_strcopy, f_MID) you cross all your string, pulling your fields to build your INSERT INTO. 
links: 2: http://freeadhocudf.org/documentation_english/dok_eng_file.html 3: http://freeadhocudf.org/documentation_english/dok_eng_string.html A: you can use shell sed "s/,/','/g" file.csv > tmp sed "s/$/'),(/g" tmp > tmp2 sed "s/^./'&/g" tmp2 > insert.sql and then add INSERT INTO PERSON (ID, NAME, CITY_ID) VALUES( ... );
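In the same spirit as the Excel and awk answers, a hedged C# sketch that emits the exact INSERT shape from the question. It assumes a two-column CSV (name, city) and does only naive quoting, so treat it as a one-off script rather than production code:

using System;
using System.IO;

class CsvToInserts
{
    static void Main()
    {
        foreach (string line in File.ReadLines("people.csv"))
        {
            string[] parts = line.Split(',');
            if (parts.Length < 2) continue;                      // skip malformed rows
            string name = parts[0].Trim().Replace("'", "''");    // escape single quotes
            string city = parts[1].Trim().Replace("'", "''");
            Console.WriteLine(
                "INSERT INTO PERSON (ID, NAME, CITY_ID) VALUES (" +
                "(SELECT NEW_GUID FROM CREATE_GUID), '" + name + "', " +
                "(SELECT CITY_ID FROM CITY WHERE NAME = '" + city + "'));");
        }
    }
}

Redirect the output to a .sql file and run it with your usual Firebird client.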
{ "language": "en", "url": "https://stackoverflow.com/questions/8213", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "62" }
Q: Instrumenting a UI How are you instrumenting your UI's? In the past I've read that people have instrumented their user interfaces, but what I haven't found is examples or tips on how to instrument a UI. By instrumenting, I mean collecting data regarding usage and performance of the system. A MSDN article on Instrumentation is http://msdn.microsoft.com/en-us/library/x5952w0c.aspx. I would like to capture which buttons users click on, what keyboard shortucts they use, what terms they use to search, etc. * *How are you instrumenting your UI? *What format are you storing the instrumentation? *How are you processing the instrumented data? *How are you keeping your UI code clean with this instrumentation logic? Specifically, I am implementing my UI in WPF, so this will provide extra challenges compared to instrumenting a web-based application. (i.e. need to transfer the instrumented data back to a central location, etc). That said, I feel the technology may provide an easier implementation of instrumentation via concepts like attached properties. * *Have you instrumented a WPF application? Do you have any tips on how this can be achieved? Edit: The following blog post presents an interesting solution: Pixel-In-Gene Blog: Techniques for UI Auditing on WPF apps A: Here is an example of how I use a simple events manager to hook on to the UI events and extract key information of the events, such as name and type of UI element, name of event and the parent window's type name. For lists I also extract the selected item. This solution only listens for clicks of controls derived from ButtonBase (Button, ToggleButton, ...) and selection changes in controls derived from Selector (ListBox, TabControl, ...). It should be easy to extend to other types of UI elements or to achieve a more fine-grained solution. The solution is inspired of Brad Leach's answer. public class UserInteractionEventsManager { public delegate void ButtonClickedHandler(DateTime time, string eventName, string senderName, string senderTypeName, string parentWindowName); public delegate void SelectorSelectedHandler(DateTime time, string eventName, string senderName, string senderTypeName, string parentWindowName, object selectedObject); public event ButtonClickedHandler ButtonClicked; public event SelectorSelectedHandler SelectorSelected; public UserInteractionEventsManager() { EventManager.RegisterClassHandler(typeof(ButtonBase), ButtonBase.ClickEvent, new RoutedEventHandler(HandleButtonClicked)); EventManager.RegisterClassHandler(typeof(Selector), Selector.SelectionChangedEvent, new RoutedEventHandler(HandleSelectorSelected)); } #region Handling events private void HandleSelectorSelected(object sender, RoutedEventArgs e) { // Avoid multiple events due to bubbling. Example: A ListBox inside a TabControl will cause both to send the SelectionChangedEvent. if (sender != e.OriginalSource) return; var args = e as SelectionChangedEventArgs; if (args == null || args.AddedItems.Count == 0) return; var element = sender as FrameworkElement; if (element == null) return; string senderName = GetSenderName(element); string parentWindowName = GetParentWindowTypeName(sender); DateTime time = DateTime.Now; string eventName = e.RoutedEvent.Name; string senderTypeName = sender.GetType().Name; string selectedItemText = args.AddedItems.Count > 0 ? 
args.AddedItems[0].ToString() : "<no selected items>"; if (SelectorSelected != null) SelectorSelected(time, eventName, senderName, senderTypeName, parentWindowName, selectedItemText); } private void HandleButtonClicked(object sender, RoutedEventArgs e) { var element = sender as FrameworkElement; if (element == null) return; string parentWindowName = GetParentWindowTypeName(sender); DateTime time = DateTime.Now; string eventName = e.RoutedEvent.Name; string senderTypeName = sender.GetType().Name; string senderName = GetSenderName(element); if (ButtonClicked != null) ButtonClicked(time, eventName, senderName, senderTypeName, parentWindowName); } #endregion #region Private helpers private static string GetSenderName(FrameworkElement element) { return !String.IsNullOrEmpty(element.Name) ? element.Name : "<no item name>"; } private static string GetParentWindowTypeName(object sender) { var parent = FindParent<Window>(sender as DependencyObject); return parent != null ? parent.GetType().Name : "<no parent>"; } private static T FindParent<T>(DependencyObject item) where T : class { if (item == null) return default(T); if (item is T) return item as T; DependencyObject parent = VisualTreeHelper.GetParent(item); if (parent == null) return default(T); return FindParent<T>(parent); } #endregion } And to do the actual logging, I use log4net and created a separate logger named 'Interaction' to log user interaction. The class 'Log' here is simply my own static wrapper for log4net. /// <summary> /// The user interaction logger uses <see cref="UserInteractionEventsManager"/> to listen for events on GUI elements, such as buttons, list boxes, tab controls etc. /// The events are then logged in a readable format using Log.Interaction.Info(). /// </summary> public class UserInteractionLogger { private readonly UserInteractionEventsManager _events; private bool _started; /// <summary> /// Create a user interaction logger. Remember to Start() it. /// </summary> public UserInteractionLogger() { _events = new UserInteractionEventsManager(); } /// <summary> /// Start logging user interaction events. /// </summary> public void Start() { if (_started) return; _events.ButtonClicked += ButtonClicked; _events.SelectorSelected += SelectorSelected; _started = true; } /// <summary> /// Stop logging user interaction events. /// </summary> public void Stop() { if (!_started) return; _events.ButtonClicked -= ButtonClicked; _events.SelectorSelected -= SelectorSelected; _started = false; } private static void SelectorSelected(DateTime time, string eventName, string senderName, string senderTypeName, string parentWindowTypeName, object selectedObject) { Log.Interaction.Info("{0}.{1} by {2} in {3}. Selected: {4}", senderTypeName, eventName, senderName, parentWindowTypeName, selectedObject); } private static void ButtonClicked(DateTime time, string eventName, string senderName, string senderTypeName, string parentWindowTypeName) { Log.Interaction.Info("{0}.{1} by {2} in {3}", senderTypeName, eventName, senderName, parentWindowTypeName); } } The output would then look something like this, omitting non-relevant log entries. 04/13 08:38:37.069 INFO Iact ToggleButton.Click by AnalysisButton in MyMainWindow 04/13 08:38:38.493 INFO Iact ListBox.SelectionChanged by ListView in MyMainWindow. 
Selected: Andreas Larsen 04/13 08:38:44.587 INFO Iact Button.Click by EditEntryButton in MyMainWindow 04/13 08:38:46.068 INFO Iact Button.Click by OkButton in EditEntryDialog 04/13 08:38:47.395 INFO Iact ToggleButton.Click by ExitButton in MyMainWindow A: The following blog post gives quite a few good ideas for instrumenting a WPF application: Techniques for UI Auditing on WPF apps. A: You could consider log4net. It is a robust logging framework that exists in a single DLL. It is also done in a "non demanding" type mode so that if a critical process is going on, it won't log until resources are freed up a bit more. You could easily setup a bunch of INFO level loggers and track all the user interaction you needed, and it wouldn't take a bug crash to send the file to yourself. You could also then log all your ERROR and FATAL code to seperate file that could easily be mailed to you for processing. A: If you make use of WPF commands, each custom command could then log the Action taken. You can also log the way the command was initiated. A: Perhaps the Microsoft UI Automation for WPF can help out ? Its a framework for automating your UI, perhaps it can be used to log stuff for you... We use the Automation Framework for auto-testing our UI in WPF. A: I have not yet developed using WPF.. But I would assume that its the same as most other applications in that you want to keep the UI code as light as possible.. A number of design patterns may be used in this such as the obvious MVC and Façade. I personally always try and keep the objects travelling between the UI and BL layers as light as possible, keeping them to primitives if I can. This then helps me focus on improving the UI layer without the concerns of anything going on once I throw my (primitive) data back.. I hope I understood your question correctly, and sorry I cannot offer more contextual help with WPF. A: Disclaimer: I work for the company that sells this product, not only that but I am a developer on this particular product :) . If you are interested in a commercial product to provide this then Runtime Intelligence (a functional add on to Dotfuscator ) that injects usage tracking functionality into your .NET applications is available. We provide not only the actual implementation of the tracking functionality but the data collection, processing and reporting functionality as well. There was recently a discussion on the Business of Software forum on this topic that I also posted in located here: http://discuss.joelonsoftware.com/default.asp?biz.5.680205.26 . For a high level overview of our stuff see here: http://www.preemptive.com/runtime-intelligence-services.html . In addition I am currently working on writing up some more technically oriented documentation as we realize that is an area we could definitely improve, please let me know if anyone is interested in being notified when I have completed it.
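If you go with the UserInteractionLogger shown above, wiring it up is a few lines at application startup; this sketch assumes a standard WPF App class and nothing else:

public partial class App : System.Windows.Application
{
    private UserInteractionLogger _uiLogger;

    protected override void OnStartup(System.Windows.StartupEventArgs e)
    {
        base.OnStartup(e);
        _uiLogger = new UserInteractionLogger();   // hooks the class handlers for clicks and selections
        _uiLogger.Start();
    }

    protected override void OnExit(System.Windows.ExitEventArgs e)
    {
        _uiLogger.Stop();
        base.OnExit(e);
    }
}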
{ "language": "en", "url": "https://stackoverflow.com/questions/8214", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Test Distribution At my work we are running a group of tests that consist of about 3,000 separate test cases. Previously we were running this entire test suite on one machine, which took about 24-72 hours to complete the entire test run. We now have created our own system for grouping and distributing the tests among about three separate machines and the tests are prioritized so that the core tests get run first for more immediate results and the extra tests run when there is an available machine. I am curious if anyone has found a good way to distribute their tests among several machines to reduce total test time for a complete run and what tools were used to achieve that. I've done some research and it looks like TestNG is moving in this direction, but it looks like it is still under quite a bit of development. We don't plan on rewriting any of our tests, but as we add new tests and test new products or add-ons I'd like to be able to deal with the fact that we are working with very large numbers of tests. On the other hand, if we can find a tool that would help distribute our Junit 3.x tests even in a very basic fashion, that would be helpful since we wouldn't have to maintain our own tooling to do that. A: I've seen some people having a play with distributed JUnit. I can't particularly vouch for how effective it is, but the other teams I've seen seemed to think it was straight forward enough. Hope that helps. A: Our build people use Mozilla Tinderbox. It seems to have some hooks for distributed testing. I'm sorry not to know the details but I thought I would at least pass on the pointer to you. It's also nice coz you can find out immediately when a build breaks, and what checkin might have been the culprit. http://www.mozilla.org/tinderbox.html A: There's also parallel-junit. Depending on how you currently execute your tests its convenience may vary - the idea is just to multithread on a single system that has multiple cores. I've played with it briefly, but it's a change from how we currently run our tests. Hudson, the continuous integration engine I use, also has some ways to distribute test running (separate jobs aggregated results in one).
{ "language": "en", "url": "https://stackoverflow.com/questions/8219", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: Connection Pooling in .NET/SQL Server? Is it necessary or advantageous to write custom connection pooling code when developing applications in .NET with an SQL Server database? I know that ADO.NET gives you the option to enable/disable connection pooling -- does that mean that it's built into the framework and I don't need to worry about it? Why do people talk about writing their own connection pooling software and how is this different than what's built into ADO.NET? A: I'm no real expert on this matter, but I know ADO.NET has its own connection pooling system, and as long as I've been using it it's been faultless. My reaction would be that there's no point in reinventing the wheel... Just make sure you close your connections when you're finished with them and everything will be fine! I hope someone else can give you some more firm anwers! A: My understanding is that the connection pooling is automatically handled for you when using the SqlConnection object. This is purposefully designed to work with MSSQL and will ensure connections are pooled efficiently. You just need to be sure you close them when you are finished with them (and ensure they are disposed of). I have never heard of people needing to roll their own myself. But I admit my experience is kind of limited there. A: The connection pooling built-in to ADO.Net is robust and mature. I would recommend against attempting to write your own version. A: With the advent of ADO.Net and the newer version of SQL connection pooling is handled on two layers, first through ADO.Net itself and secondly by SQL Server 2005/2008 directly, eliminating the need for custom connection pooling. I have been informed that similar support are being planned or have been implemented in Oracle and MySQL out of interest. A: Well, it is going to go away as the answer to all these questions will be LINQ. Incidentally, we have never needed custom connection pooling for any of our applications, so I am not sure what all the noise is about.
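To show what relying on the built-in pool looks like in practice, here is a small sketch; the server, database, table, and pool sizes are placeholders, and the pooling keywords shown are standard SqlClient connection-string settings.

using System.Data.SqlClient;

class PoolingExample
{
    // Pooling is on by default; the keywords below just make the tuning explicit.
    const string ConnStr =
        "Data Source=myServer;Initial Catalog=myDb;Integrated Security=SSPI;" +
        "Pooling=true;Min Pool Size=5;Max Pool Size=100";

    static int CountOrders()
    {
        using (SqlConnection conn = new SqlConnection(ConnStr))
        using (SqlCommand cmd = new SqlCommand("SELECT COUNT(*) FROM Orders", conn))
        {
            conn.Open();
            // Close/Dispose returns the connection to the pool instead of tearing it down.
            return (int)cmd.ExecuteScalar();
        }
    }
}

The key habit is the using block: open connections late, dispose them early, and the framework's pool does the rest.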
{ "language": "en", "url": "https://stackoverflow.com/questions/8223", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: Why won't Entourage work with Exchange 2007? So this is IT more than programming, but Google found nothing, and you guys are just the right kind of geniuses. My Exchange Server 2007 and Entourage clients don't play nice. Right now the big issue is that the Entourage client will not connect to Exchange 2007 (Entourage 2004 or 2008). The account settings are correct and use the proper format of https://exchange2007.mydomain.com/exchange/user@domain.com The issue is with a DLL called davex.dll: when it is where it belongs, the OWA application pool crashes and a whole bunch of nasty things happen. When it isn’t there, I can connect to everything fine - and the OWA app pool doesn’t crash - but Entourage never propagates the folders in the mailbox and doesn't send or receive. Any help or ideas would be appreciated: Microsoft support is silent on the issue, and Google doesn't turn up much.
A: Try it without using the /exchange in the server properties field. Here's a link with relevant info.
A: davex.dll is the legacy WebDAV component for Exchange Server, which Entourage uses. Your first step should be investigating why the application pool crashes. My guess is that Entourage can't do anything when the DLL isn't present because WebDAV is not responding to any requests.
{ "language": "en", "url": "https://stackoverflow.com/questions/8228", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: How do you determine the size of a file in C? How can I figure out the size of a file, in bytes? #include <stdio.h> unsigned int fsize(char* file){ //what goes here? } A: Don't use int. Files over 2 gigabytes in size are common as dirt these days Don't use unsigned int. Files over 4 gigabytes in size are common as some slightly-less-common dirt IIRC the standard library defines off_t as an unsigned 64 bit integer, which is what everyone should be using. We can redefine that to be 128 bits in a few years when we start having 16 exabyte files hanging around. If you're on windows, you should use GetFileSizeEx - it actually uses a signed 64 bit integer, so they'll start hitting problems with 8 exabyte files. Foolish Microsoft! :-) A: If you're fine with using the std c library: #include <sys/stat.h> off_t fsize(char *file) { struct stat filestat; if (stat(file, &filestat) == 0) { return filestat.st_size; } return 0; } A: And if you're building a Windows app, use the GetFileSizeEx API as CRT file I/O is messy, especially for determining file length, due to peculiarities in file representations on different systems ;) A: Matt's solution should work, except that it's C++ instead of C, and the initial tell shouldn't be necessary. unsigned long fsize(char* file) { FILE * f = fopen(file, "r"); fseek(f, 0, SEEK_END); unsigned long len = (unsigned long)ftell(f); fclose(f); return len; } Fixed your brace for you, too. ;) Update: This isn't really the best solution. It's limited to 4GB files on Windows and it's likely slower than just using a platform-specific call like GetFileSizeEx or stat64. A: I found a method using fseek and ftell and a thread with this question with answers that it can't be done in just C in another way. You could use a portability library like NSPR (the library that powers Firefox). A: I used this set of code to find the file length. //opens a file with a file descriptor FILE * i_file; i_file = fopen(source, "r"); //gets a long from the file descriptor for fstat long f_d = fileno(i_file); struct stat buffer; fstat(f_d, &buffer); //stores file size long file_length = buffer.st_size; fclose(i_file); A: On Unix-like systems, you can use POSIX system calls: stat on a path, or fstat on an already-open file descriptor (POSIX man page, Linux man page). (Get a file descriptor from open(2), or fileno(FILE*) on a stdio stream). Based on NilObject's code: #include <sys/stat.h> #include <sys/types.h> off_t fsize(const char *filename) { struct stat st; if (stat(filename, &st) == 0) return st.st_size; return -1; } Changes: * *Made the filename argument a const char. *Corrected the struct stat definition, which was missing the variable name. *Returns -1 on error instead of 0, which would be ambiguous for an empty file. off_t is a signed type so this is possible. If you want fsize() to print a message on error, you can use this: #include <sys/stat.h> #include <sys/types.h> #include <string.h> #include <stdio.h> #include <errno.h> off_t fsize(const char *filename) { struct stat st; if (stat(filename, &st) == 0) return st.st_size; fprintf(stderr, "Cannot determine size of %s: %s\n", filename, strerror(errno)); return -1; } On 32-bit systems you should compile this with the option -D_FILE_OFFSET_BITS=64, otherwise off_t will only hold values up to 2 GB. See the "Using LFS" section of Large File Support in Linux for details. 
A: **Don't do this (why?): Quoting the C99 standard doc that I found online: "Setting the file position indicator to end-of-file, as with fseek(file, 0, SEEK_END), has undefined behavior for a binary stream (because of possible trailing null characters) or for any stream with state-dependent encoding that does not assuredly end in the initial shift state."** Change the definition to int so that error messages can be transmitted, and then use fseek() and ftell() to determine the file size.
int fsize(char* file) {
    int size;
    FILE* fh;
    fh = fopen(file, "rb"); // binary mode
    if (fh != NULL) {
        if (fseek(fh, 0, SEEK_END)) {
            fclose(fh);
            return -1;
        }
        size = ftell(fh);
        fclose(fh);
        return size;
    }
    return -1; // error
}
A: POSIX The POSIX standard has its own method to get file size. Include the sys/stat.h header to use the function. Synopsis * *Get file statistics using stat(3). *Obtain the st_size property. Examples Note: this limits the size to 4 GB. If you are not on a FAT32 filesystem, use the 64-bit version!
#include <stdio.h>
#include <sys/stat.h>

int main(int argc, char** argv) {
    struct stat info;
    stat(argv[1], &info); // 'st' is an acronym of 'stat'
    printf("%s: size=%ld\n", argv[1], (long)info.st_size);
}

#define _LARGEFILE64_SOURCE /* for stat64 on glibc */
#include <stdio.h>
#include <sys/stat.h>

int main(int argc, char** argv) {
    struct stat64 info;
    stat64(argv[1], &info); // 'st' is an acronym of 'stat'
    printf("%s: size=%lld\n", argv[1], (long long)info.st_size);
}
ANSI C (standard) ANSI C doesn't directly provide a way to determine the length of the file. We'll have to use our mind. For now, we'll use the seek approach! Synopsis * *Seek to the end of the file using fseek(3). *Get the current position using ftell(3). Example
#include <stdio.h>

int main(int argc, char** argv) {
    FILE* fp = fopen(argv[1], "rb");
    long f_size;
    fseek(fp, 0, SEEK_END);
    f_size = ftell(fp);
    rewind(fp); // to go back to the start again
    printf("%s: size=%ld\n", argv[1], f_size);
}
If the file is stdin or a pipe, POSIX and ANSI C won't work. It will return 0 if the file is a pipe or stdin. Opinion: you should use the POSIX approach instead, because it has 64-bit support.
A: In plain ISO C, there is only one way to determine the size of a file which is guaranteed to work: to read the entire file from the start, until you encounter end-of-file. However, this is highly inefficient. If you want a more efficient solution, then you will have to either * *rely on platform-specific behavior, or *revert to platform-specific functions, such as stat on Linux or GetFileSize on Microsoft Windows. In contrast to what other answers have suggested, the following code is not guaranteed to work:
fseek( fp, 0, SEEK_END );
long size = ftell( fp );
Even if we assume that the data type long is large enough to represent the file size (which is questionable on some platforms, most notably Microsoft Windows), the posted code has the following problems: The posted code is not guaranteed to work on text streams, because according to §7.21.9.4 ¶2 of the ISO C11 standard, the value of the file position indicator returned by ftell contains unspecified information. Only for binary streams is this value guaranteed to be the number of characters from the beginning of the file. There is no such guarantee for text streams. The posted code is also not guaranteed to work on binary streams, because according to §7.21.9.2 ¶3 of the ISO C11 standard, binary streams are not required to meaningfully support SEEK_END.
That being said, on most common platforms, the posted code will work, if we assume that the data type long is large enough to represent the size of the file. However, on Microsoft Windows, the characters \r\n (carriage return followed by line feed) will be translated to \n for text streams (but not for binary streams), so that the file size you get will count \r\n as two bytes, although you are only reading a single character (\n) in text mode. Therefore, the results you get will not be consistent. On POSIX-based platforms (e.g. Linux), this is not an issue, because on those platforms, there is no difference between text mode and binary mode.
A: C++ MFC, extracted from the Windows file details. I'm not sure if this performs better than seeking, but since it reads the metadata I think it is faster because it doesn't need to read the entire file:
ULONGLONG GetFileSizeAtt(const wchar_t *wFile)
{
    WIN32_FILE_ATTRIBUTE_DATA fileInfo;
    ULONGLONG FileSize = 0ULL;
    //https://learn.microsoft.com/nl-nl/windows/win32/api/fileapi/nf-fileapi-getfileattributesexa?redirectedfrom=MSDN
    //https://learn.microsoft.com/nl-nl/windows/win32/api/fileapi/ns-fileapi-win32_file_attribute_data?redirectedfrom=MSDN
    if (GetFileAttributesEx(wFile, GetFileExInfoStandard, &fileInfo))
    {
        ULARGE_INTEGER ul;
        ul.HighPart = fileInfo.nFileSizeHigh;
        ul.LowPart = fileInfo.nFileSizeLow;
        FileSize = ul.QuadPart;
    }
    return FileSize;
}
A: You can open the file and go to offset 0 relative to the bottom (end) of the file with fseek(handle, 0, SEEK_END); the size of the file is then the value returned by a following ftell(handle) (not the return value of fseek itself, which is only a success/failure code). I haven't coded in C for a long time, but I think it should work.
A: Here's a simple and clean function that returns the file size.
long get_file_size(char *path) {
    FILE *fp;
    long size = -1;

    /* Open file for reading (binary mode, so the byte count is exact) */
    fp = fopen(path, "rb");
    if (fp == NULL)
        return -1;
    fseek(fp, 0, SEEK_END);
    size = ftell(fp);
    fclose(fp);
    return size;
}
A: I have a function that works well with only stdio.h. I like it a lot and it works very well and is pretty concise:
size_t fsize(FILE *File) {
    size_t FSZ;
    fseek(File, 0, SEEK_END);
    FSZ = ftell(File);
    rewind(File);
    return FSZ;
}
A: Try this --
fseek(fp, 0, SEEK_END);
unsigned long int file_size = ftell(fp);
rewind(fp);
What this does is first, seek to the end of the file; then, report where the file pointer is. Lastly (this is optional) it rewinds back to the beginning of the file. Note that fp should be a binary stream. file_size contains the number of bytes the file contains. Note that since (according to limits.h) the unsigned long type is limited to 4294967295 bytes (4 gigabytes) you'll need to find a different variable type if you're likely to deal with files larger than that.
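Pulling the platform-specific suggestions above together, here is one hedged sketch of a wrapper that avoids fseek/ftell entirely. Error handling is minimal, the file name handling is ANSI-only on Windows, and the %lld format may need adjusting on older MSVC runtimes.

#include <stdio.h>

#if defined(_WIN32)
#include <windows.h>

long long fsize(const char *filename)
{
    WIN32_FILE_ATTRIBUTE_DATA fad;
    ULARGE_INTEGER size;

    if (!GetFileAttributesExA(filename, GetFileExInfoStandard, &fad))
        return -1; /* call GetLastError() for details */

    size.HighPart = fad.nFileSizeHigh;
    size.LowPart = fad.nFileSizeLow;
    return (long long)size.QuadPart;
}
#else
#include <sys/stat.h>

long long fsize(const char *filename)
{
    struct stat st;

    /* Build with -D_FILE_OFFSET_BITS=64 on 32-bit systems for large files. */
    if (stat(filename, &st) != 0)
        return -1;

    return (long long)st.st_size;
}
#endif

int main(int argc, char **argv)
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s <file>\n", argv[0]);
        return 1;
    }
    printf("%s: %lld bytes\n", argv[1], fsize(argv[1]));
    return 0;
}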
{ "language": "en", "url": "https://stackoverflow.com/questions/8236", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "162" }
Q: I can't get my debugger to stop breaking on first-chance exceptions I'm using Visual C++ 2003 to debug a program remotely via TCP/IP. I had set the Win32 exception c00000005, "Access violation," to break into the debugger when thrown. Then, I set it back to "Use parent setting." The setting for the parent, Win32 Exceptions, is to continue when the exception is thrown. Now, when I debug the program, it breaks each time that exception is thrown, forcing me to click Continue to let it keep debugging. How do I get it to stop breaking like this? A: Is this an exception that your code would actually handle if you weren't running in the debugger? A: I'd like to support Will Dean's answer An access violation sounds like an actual bug in your code. It's not something I'd expect the underlying C/++ Runtime to be throwing and catching internally. The 'first-chance-exceptions' feature is so you can intercept things which get 'caught' in code, using the debugger, and have a look. If there's nothing 'catching' that exception (which makes sense, why on earth would you catch and ignore access violations?), then it will trigger the debugger regardless of what options you may have set. A: Ctrl+Alt+E (or Debug\Exceptions) From there you can select which exceptions break.
{ "language": "en", "url": "https://stackoverflow.com/questions/8263", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Class::DBI-like library for php? I have inherited an old crusty PHP application, and I'd like to refactor it into something a little nicer to deal with, but in a gradual manner. In Perl's CPAN, there is a series of classes around Class::DBI that allow you to use database rows as the basis for objects in your code, with the library generating accessor methods etc as appropriate, but also allowing you to add additional methods. Does anyone know of something like this for PHP? Especially something that doesn't require wholesale adoption of a "framework"... bonus points if it works in PHP4 too, but to be honest, I'd love to have another reason to ditch that. :-)
A: It's now defunct but phpdbi is possibly worth a look. If you're willing to let go of some of your caveats (the framework one), I've found that Doctrine is a pretty neat way of accessing DBs in PHP. Worth investigating anyway.
A: Class::DBI is an ORM (Object Relational Mapper) for Perl. Searching for "PHP ORM" on Google gives some good results, including Doctrine, which I've had good luck with. I'd start there and work your way up.
A: I'm trying to get more feedback on my own projects, so I'll suggest my take on ORM: ORMer. Usage examples are here. You can phase it in, it doesn't require you to adopt MVC, and it requires very little setup.
A: The right thing to do is to access the database via an abstraction layer, in a way such that if you change your RDBMS or how you implemented that access, you only have to modify this layer while all the rest of your application remains untouched. To do this, to free your application from knowing how to deal with the database, your abstraction layer for DB access must be implemented by a framework such as ADODB. All the files related to this layer must be located in a subdirectory: * */ado In this directory you'll put all of your .php.inc files, which contain general methods to access the database.
A: How about MDB2 from PEAR? It provides a common API for all supported RDBMS. The main difference to most other DB abstraction packages is that MDB2 goes much further to ensure portability. Btw: @GaryF what are those strange title attributes your links have? Did you add them or are they added by SO?
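For a feel of what the "database row as object" style looks like without adopting a whole framework, here is a bare-bones sketch on top of PDO. It assumes PHP 5 (so it forfeits the PHP4 bonus) and a MySQL-style schema with an id primary key; the class, table, and column names are all made up for the example.

<?php
class ActiveRow
{
    protected static $pdo;
    protected $table;
    protected $row = array();

    public static function connect(PDO $pdo) { self::$pdo = $pdo; }

    public function __construct($table, $id)
    {
        $this->table = $table;
        $stmt = self::$pdo->prepare("SELECT * FROM `$table` WHERE id = ?");
        $stmt->execute(array($id));
        $row = $stmt->fetch(PDO::FETCH_ASSOC);
        $this->row = $row ? $row : array();
    }

    // "Generated" accessors: $customer->email reads the email column.
    public function __get($column) { return $this->row[$column]; }
    public function __set($column, $value) { $this->row[$column] = $value; }

    public function save()
    {
        $sets = array();
        foreach ($this->row as $col => $val) {
            if ($col !== 'id') { $sets[] = "`$col` = :$col"; }
        }
        $sql = "UPDATE `{$this->table}` SET " . implode(', ', $sets) . " WHERE id = :id";
        $stmt = self::$pdo->prepare($sql);
        return $stmt->execute($this->row);
    }
}

// Usage sketch:
// ActiveRow::connect(new PDO('mysql:host=localhost;dbname=app', 'user', 'pass'));
// $customer = new ActiveRow('customers', 42);
// echo $customer->email;
// $customer->email = 'new@example.com';
// $customer->save();

Extra methods can be added by subclassing per table, which is roughly how the Class::DBI style grows on the Perl side.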
{ "language": "en", "url": "https://stackoverflow.com/questions/8276", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13" }
Q: Generating Icon Files I'm looking for an online solution for generating .ICO files. I'd like the ICO files to have the ability to have transparency as well. What software or web site do you use to create them? [Update] To clarify, I have an existing image in PNG format, 32 x 32 pixels. I want to generate the icon from this existing file, not create a brand new one online. Sorry for the confusion. A: I stumbled upon Icon Sushi a little while back and love it. It hasn't been updated in about a year, but it still works great, even in Vista. Plus it is free. http://www.towofu.net/soft/e-aicon.php A: You could use IrfanView, it's freeware and allows you to convert to ICO. It even allows you to select the transparent color. I've used it a lot in my projects, since a WinForm needs an ICO file for it's icon, while I usually have PNG or BMP files. A: I have found the application IcoFx useful, you can import pretty much any image type to use for icon creation, including PNG's. A: I like MicroAngelo. A: I also use Gif Movie Gear to create .ico files. This online Favicon Generator tool also seems to work fine. A: My favorite method is a photoshop plugin to "Save as .ICO". http://www.telegraphics.com.au/svn/icoformat/trunk/dist/README.html Fast, works offline, you're already in Photoshop, etc. A: png2ico is what I've always used, and perfect for your situation. A: Try, it has support for vista style Icons Axialis IconWorkshop Lite for VS 2008 http://www.axialis.com/download/iwlite.html Axialis Software, in association with Microsoft Corporation, presents Axialis IconWorkshop Lite for Visual Studio 2008: * 100% Free for Visual Studio 2008 users; * Make icons for Windows up to 256×256 PNG-compressed icons for Windows Vista™ and include them in your software projects; * Use an advanced icon editor with various tools, filters and effects; * Work efficiently using a Plug-in for Visual Studio 2008; * Create icons from images or ready-to-use image objects; * Use a fully integrated workspace with librarian, built-in file explorer with thumbnail preview, image viewer and more… A: I can't imagine drawing icons online. Nowadays icons are usually drawn as vectors, and I'm not aware of any online vector packages. In case you decide to draw off-line instead, I use Xara (www.xara.com) to draw all my computer artwork, and I use Gif Movie Gear to create .ico files. The former is a superb vector package, the latter is just something I have lying around. A: I use a simple program called MyViewPad, which can convert (almost) any old image to a .ico file. It's free and easy to use. This may not be what you are looking for though. A: Otto, if you could clarify what it is that you need this for, and we are more likely to have good suggestions. Provide a use-case, and someone will probably have some advice to fit it. A: Is this something that needs to be done often and automatically, or is this just a one time thing for your app? If the latter is correct, MyViewPad will work. A: I use InkScape with is a free vector graphics program. A: I use GIMP for my icon design, but it's quite a bit of a pain having to join all the layers together then creating different layers for each icon size before exporting. Hmm maybe I could make my next project writing a plugin for the GIMP. A: I stumbled up Greenfish Icon Editor Pro a while back and its been working great for the simple icons I have been needing to make. Greenfish Icon Editor Pro The only downside is that it is windows only. 
A: With http://www.favicon.cc/ you can either draw your own, or upload a jpg, jpeg, gif, png, bmp, or ico. And it's a web app, so there's nothing to install. Works very nicely.
A: When creating icons for my apps using VS, I simply use Paint (32x32) to draw it and save as PNG, then I go to http://www.online-image-editor.com/, upload my PNG, click on the Wizards tab, click the Transparent button, then click on the points in the image I want to make transparent (usually just the white sections). I then save the image, go to another web site at http://prodraw.net/online-tool/pic-to-icon.php, upload the saved image from the previous site, set my Preference options, convert, and download. Voilà, a transparent icon! Other than the actual drawing of the original image, it literally takes just a couple of minutes.
{ "language": "en", "url": "https://stackoverflow.com/questions/8284", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: Creating Redundancy for a Subversion Repository? What is the best way to create redundant Subversion repositories? I have a Subversion repository (linked through apache2 and WebDAV) and would like to create a mirror repository on a different server in the event of outages, but I am not certain of the best way to proceed. I am thinking that post-commit scripts could be used to propagate changes, but I am not sure if this is the best way to go. Does anyone have any input?
A: Sounds like what you are looking for is basically federated (synced) servers... I asked the same question recently... and while I didn't find the exact solution I was looking for, it came close. See here:
A: Do you really need per-commit back-ups? There are almost certainly better ways of safe-guarding against failures than going down that route. For example, given that most failures are disk failures, moving to a RAID array and/or NAS/SAN storage will provide you with better general protection and, if configured correctly, better performance. At that point, off-site back-ups become a matter of using the tools available. See the Repository maintenance section of the svn manual for details. If you truly do need per-commit back-ups then, yeah, post-commit scripts are the way to go.
A: If you only need read-only access to the mirrored repository, you can use svnsync, which was added in SVN 1.4 for mirroring. We use a secondary repository on our build server to run CruiseControl.NET against, but the mirrored repository is read-only.
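As a sketch of the svnsync route mentioned in the last answer (hostnames and paths are placeholders, and this assumes Subversion 1.4+ on both machines):

# One-time setup on the mirror host: empty repository plus a permissive
# pre-revprop-change hook, which svnsync needs to copy revision properties.
svnadmin create /var/svn/mirror
printf '#!/bin/sh\nexit 0\n' > /var/svn/mirror/hooks/pre-revprop-change
chmod +x /var/svn/mirror/hooks/pre-revprop-change

# Tie the mirror to the master and pull everything across once.
svnsync initialize file:///var/svn/mirror http://master.example.com/svn/repo
svnsync synchronize file:///var/svn/mirror

# Keep it current afterwards, either from cron on the mirror host or from the
# master's post-commit hook (pointing at the mirror's own URL in a two-server setup):
svnsync synchronize file:///var/svn/mirror

The mirror stays read-only for users; commits still go to the master repository.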
{ "language": "en", "url": "https://stackoverflow.com/questions/8306", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Resolving Session Fixation in JBoss I need to prevent Session Fixation, a particular type of session hijacking, in a Java web application running in JBoss. However, it appears that the standard idiom doesn't work in JBoss. Can this be worked around?
A: This defect (found here) points the way to the solution. The Tomcat instance that runs in JBoss is configured with emptySessionPath="true", rather than "false", which is the default. This can be modified in .../deploy/jboss-web.deployer/server.xml; both the HTTP and AJP connectors have this option. The feature itself is used to eliminate the context path (e.g. "foo" in http://example.com/foo) from being included in the JSESSIONID cookie. Setting it to false will break applications that rely on cross-application authentication, which includes stuff built using some portal frameworks. It didn't negatively affect the application in question, however.
A: This problem and the specific case in which it occurs is a problem in Tomcat as well as JBoss. Tomcat shares the emptySessionPath="true" effect (and actually JBoss inherits it from Tomcat). This really seems like a bug in Tomcat and JBoss when you are trying to prevent session fixation attacks, but the servlet spec (at least version 2.3) does not actually require the JSESSIONID to be defined or redefined according to any specific logic. Perhaps this has been cleaned up in later versions.
A: One workaround is to store the client address in the session. A response wrapper should validate that the client address set in the session is the same as the one accessing the session.
A: I came across the configuration snippet below on one of the forums and added the lines shown. But when I print the session ID before and after logging into the application, it is the same. How would I test session fixation? * *Open the D:\jboss-5.1.0.GA\bin\run.conf file and add the line below. set "JAVA_OPTS=%JAVA_OPTS% -Dorg.apache.catalina.connector.Request.SESSION_ID_CHECK=false" * in each context.xml of the JBoss applications: D:\jboss-5.1.0.GA\server\default\deploy\jbossweb.sar\context.xml
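For reference, the change described in the first answer ends up looking roughly like this in .../deploy/jboss-web.deployer/server.xml. The ports and other attributes shown are just typical defaults; the relevant part is emptySessionPath="false" on both connectors.

<!-- HTTP connector -->
<Connector port="8080" protocol="HTTP/1.1"
           connectionTimeout="20000"
           redirectPort="8443"
           emptySessionPath="false" />

<!-- AJP connector -->
<Connector port="8009" protocol="AJP/1.3"
           redirectPort="8443"
           emptySessionPath="false" />

With that in place, the usual invalidate-the-old-session-and-create-a-new-one step at login should actually produce a fresh JSESSIONID.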
{ "language": "en", "url": "https://stackoverflow.com/questions/8318", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: Using unhandled exceptions instead of Contains()? Imagine an object you are working with has a collection of other objects associated with it, for example, the Controls collection on a WinForm. You want to check for a certain object in the collection, but the collection doesn't have a Contains() method. There are several ways of dealing with this. * *Implement your own Contains() method by looping through all items in the collection to see if one of them is what you are looking for. This seems to be the "best practice" approach. *I recently came across some code where instead of a loop, there was an attempt to access the object inside a try statement, as follows: try { Object aObject = myCollection[myObject]; } catch(Exception e) { //if this is thrown, then the object doesn't exist in the collection } My question is how poor of a programming practice do you consider the second option be and why? How is the performance of it compared to a loop through the collection? A: The general rule of thumb is to avoid using exceptions for control flow unless the circumstances that will trigger the exception are "exceptional" -- e.g., extremely rare! If this is something that will happen normally and regularly it definitely should not be handled as an exception. Exceptions are very, very slow due to all the overhead involved, so there can be performance reasons as well, if it's happening often enough. A: I would have to say that this is pretty bad practice. Whilst some people might be happy to say that looping through the collection is less efficient to throwing an exception, there is an overhead to throwing an exception. I would also question why you are using a collection to access an item by key when you would be better suited to using a dictionary or hashtable. My main problem with this code however, is that regardless of the type of exception thrown, you are always going to be left with the same result. For example, an exception could be thrown because the object doesn't exist in the collection, or because the collection itself is null or because you can't cast myCollect[myObject] to aObject. All of these exceptions will get handled in the same way, which may not be your intention. These are a couple of nice articles on when and where it is usally considered acceptable to throw exceptions: * *Foundations of Programming *Throwing exceptions in c# I particularly like this quote from the second article: It is important that exceptions are thrown only when an unexpected or invalid activity occurs that prevents a method from completing its normal function. Exception handling introduces a small overhead and lowers performance so should not be used for normal program flow instead of conditional processing. It can also be difficult to maintain code that misuses exception handling in this way. A: If, while writing your code, you expect this object to be in the collection, and then during runtime you find that it isn't, I would call that an exceptional case, and it is proper to use an exception. However, if you're simply testing for the existence of an object, and you find that it is not there, this is not exceptional. Using an exception in this case is not proper. The analysis of the runtime performance depends on the actual collection being used, and the method if searching for it. That shouldn't matter though. Don't let the illusion of optimization fool you into writing confusing code. A: I would have to think about it more as to how much I like it... my gut instinct is, eh, not so much... 
EDIT: Ryan Fox's comments on the exceptional case is perfect, I concur As for performance, it depends on the indexer on the collection. C# lets you override the indexer operator, so if it is doing a for loop like the contains method you would write, then it will be just as slow (with maybe a few nanoseconds slower due to the try/catch... but nothing to worry about unless that code itself is within a huge loop). If the indexer is O(1) (or even O(log(n))... or anything faster than O(n)), then the try/catch solution would be faster of course. Also, I am assuming the indexer is throwing the exception, if it is returning null, you could of course just check for null and not use the try/catch. A: In general, using exception handling for program flow and logic is bad practice. I personally feel that the latter option is unacceptable use of exceptions. Given the features of languages commonly used these days (such as Linq and lambdas in C# for example) there's no reason not to write your own Contains() method. As a final thought, these days most collections do have a contains method already. So I think for the most part this is a non-issue. A: Exceptions should be exceptional. Something like 'The collection is missing because the database has fallen out from underneath it' is exceptional Something like 'the key is not present' is normal behaviour for a dictionary. For your specific example of a winforms Control collection, the Controls property has a ContainsKey method, which is what you're supposed to use. There's no ContainsValue because when dealing with dictionaries/hashtables, there's no fast way short of iterating through the entire collection, of checking if something is present, so you're really discouraged from doing that. As for WHY Exceptions should be exceptional, it's about 2 things * *Indicating what your code is trying to do. You want to have your code match what it is trying to achieve, as closely as possible, so it is readable and maintainable. Exception handling adds a bunch of extra cruft which gets in the way of this purpose *Brevity of code. You want your code to do what it's doing in the most direct way, so it is readable and maintainable. Again, the cruft added by exception handling gets in the way of this. A: Take a look at this blog post from Krzystof: http://blogs.msdn.com/kcwalina/archive/2008/07/17/ExceptionalError.aspx Exceptions should be used for communicating error conditions, but they shouldn't be used as control logic (especially when there are far simpler ways to determine a condition, such as Contains). Part of the issue is that exceptions, while not expensive to throw are expensive to catch and all exceptions are caught at some point. A: The latter is an acceptable solution. Although I would definitely catch on the specific exception (ElementNotFound?) that the collection throws in that case. Speedwise, it depends on the common case. If you're more likely to find the element than not, the exception solution will be faster. If you're more likely to fail, then it would depend on size of the collection and its iteration speed. Either way, you'd want to measure against normal use to see if this is actually a bottle neck before worrying about speed like this. Go for clarity first, and the latter solution is far more clear than the former.
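To put the two approaches side by side in code, here is a small sketch. Since the original collection type isn't shown, it uses a generic Dictionary as a stand-in for any keyed collection.

using System;
using System.Collections.Generic;

class ContainsExample
{
    // Exception-as-flow-control: works, but pays the exception cost on every miss
    // and swallows unrelated failures along with it.
    static string FindByException(Dictionary<string, string> items, string key)
    {
        try
        {
            return items[key]; // throws KeyNotFoundException on a miss
        }
        catch (KeyNotFoundException)
        {
            return null;
        }
    }

    // Explicit check: states the intent, no exception machinery, one lookup.
    static string FindByCheck(Dictionary<string, string> items, string key)
    {
        string value;
        return items.TryGetValue(key, out value) ? value : null;
    }

    static void Main()
    {
        var items = new Dictionary<string, string> { { "a", "alpha" } };
        Console.WriteLine(FindByCheck(items, "a"));               // alpha
        Console.WriteLine(FindByCheck(items, "b") ?? "(missing)"); // (missing)
    }
}

If misses are common, the second form is both clearer and cheaper; if a miss really is exceptional, the performance argument matters less but the intent argument still favours the explicit check.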
{ "language": "en", "url": "https://stackoverflow.com/questions/8348", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Is there something like "Firebug for IE" (for debugging JavaScript)? I'm trying to fix some JavaScript bugs. Firebug makes debugging these issues a lot easier when working in Firefox, but what do you do when the code works fine on Firefox but IE is complaining? A: Firebug lite doesn't work too well for me. The Developer Toolbar just isn't good enough. There really is no great solution. A: Or IE Developer Toolbar A: Have a look at DebugBar. License is free for personal use A: you can also check out the IE Developer Toolbar which isn't a debugger but will help you analyze the contents of your code. Visual Studio will help with the debugging Fiddler should help analyse the traffic travelling to and from your browser A: For the DOM Inspector, try the Internet Explorer Developer Toolbar. For the Net tab, try Fiddler. For Javascript debugging, try Visual Web Developer 2008 Express Edition. (Or a higher edition of Visual Studio) Also, try DebugBar. A: You can try Firebug Lite or use Visual Studio to debug the JavaScript. A: Since Internet Explorer 8, IE has been shipping with a built-in tool-set for debugging, troubleshooting, and generally helping in development of your pages/applications. You can access these tools by pressing F12 while in the browser. HTML Tab The HTML tab will let you peek into the DOM as the browser understands it. As you select elements from the HTML view, their styles will be detailed on the right, with individual rules have the ability to be toggled on and off. You can also modify rules, and determine whether the styles on the element were inherited, or assigned explicitly. Additionally, you can even tell which .css file they originate from. There is a bit more you can do in the HTML tab, such as review and modify attributes on elements, and even make changes to the layout of the element from within the layout section. Additionally, you can make changes directly to the markup to quickly test out some structural ideas. Script Tab For resolving JavaScript issues, you can watch the Console and the Script Tag. If your script stumbles across an a call to an undefined method, you'll be alerted within your console. The console also lets you run arbitrary JavaScript against your page, if you want to toggle items on or off, or try bind a handler to a button. The Script tab great as well as it will format your JavaScript for you, allow you to insert breakpoints, step in and over code blocks, and watch variables over time. If you've used Firebug, or even the Webkit Inspector, the F12 Developer Tools in Internet Explorer 8+ should be pretty familiar to you. A: Visual Studio 2008 can do JavaScript debugging, you have to go to IE's Tools->Internet Options->Advanced and uncheck 'Disable Script Debugging (Internet Explorer)' in order for the browser to bubble up the errors it detects. Once you're in Visual Studio you basically have it's entire debugging arsenal at your disposal. It's not as integrated as Firebug, but it is way better than anything we used to have. A: i think it is better that you first install the ie core addon in firefox then load the page with ie addon and press f12. good luck. 
A: Make a bookmark in the favourites bar, and put this address as the URL: javascript:(function(F,i,r,e,b,u,g,L,I,T,E){if(F.getElementById(b))return;E=F[i+'NS']&&F.documentElement.namespaceURI;E=E?F[i+'NS'](E,'script'):F[i]('script');E[r]('id',b);E[r]('src',I+g+T);E[r](b,u);(F[e]('head')[0]||F[e]('body')[0]).appendChild(E);E=new%20Image;E[r]('src',I+L);})(document,'createElement','setAttribute','getElementsByTagName','FirebugLite','4','firebug-lite.js','releases/lite/latest/skin/xp/sprite.png','https://getfirebug.com/','#startOpened'); Then navigatge to the page you want and click the link. Firebug Lite will/should open up... A: The IE8 beta comes with what I think is the IE Developer toolbar, but it seems to be a lot more powerful than the last time I tried the toolbar on IE7 A: I'm guessing this question was posted before the IE8 final came out, according tho some of the answers. These days, IE8's inbuilt Developer Tools are great; and while the JS debugging isn't as useful as Visual Studio the Dev Tools in general much better than Firebug in my opinion. Between that and the Compatibility View Browser Mode I can handle all my IE6 development needs. A: I found a solution to this problem, you could simply stick this tag to the page you are trying to debug and it will open firebug: <script type="text/javascript" src="https://getfirebug.com/firebug-lite.js"></script> Explanation from https://getfirebug.com/firebuglite#Stable A: If you're a serious Front-end Developer, give AJAX Edition a test run: http://www.compuware.com/application-performance-management/ajax-performance-testing.html It's a free tool that allows users’ to understand what is causing performance and functional-related problems in modern AJAX/web Applications. A: In IE, go to MenuBar->Tools Select Debugger Tools Hit F12 and enjoy. It is far from Fire Bug, but suitable for some quick help A: There is always a way how to get around this issue, watch the video and you will be able to install firebug in 2 mins. install firebug on ie Good luck
{ "language": "en", "url": "https://stackoverflow.com/questions/8351", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "44" }
Q: Using mod_rewrite to Mimic SSL Virtual Hosts? What is the best way to transparently rewrite a URL over an SSL connection with Apache 2.2? Apache 2 does not natively support multiple name-based virtual hosts for an SSL connection and I have heard that mod_rewrite can help with this. I would like to do something like this: I have set up the server so that the sites can be accessed by https://secure.example.com/dbadmin but I would like to have this as https://dbadmin.example.com How do I set it up so that the Rewrite rule will rewrite dbadmin.example.com to secure.example.com/dbadmin, but without displaying the rewrite on the client's address bar (i.e. the client will still just see dbadmin.example.com), all over https? A: Configure a single VirtualHost to serve both secure.example.com and dbadmin.example.com (making it the only *:443 VirtualHost achieves this). You can then use mod_rewrite to adjust the URI for requests to dbadmin.example.com: <VirtualHost *:443> ServerName secure.example.com ServerAlias dbadmin.example.com RewriteEngine on RewriteCond %{SERVER_NAME} dbadmin.example.com RewriteRule !/dbadmin(.*)$ /dbadmin$1 </VirtualHost> Your SSL certificate will need to be valid for both secure.example.com and dbadmin.example.com. It can be a wildcard certificate as mentioned by Terry Lorber, or you can use the subjectAltName field to add additional host names. If you're having trouble, first set it up on <VirtualHost *> and check that it works without SSL. The SSL connection and certificate is a separate layer of complexity that you can set up after the URI rewriting is working. A: Unless your SSL certificate is the "wildcard" or multi-site kind, then I don't think this will work. The rewrite will display in the browser and the name in the address bar must be valid against the certificate, or your users will see a security error (which they can always accept and continue, but that doesn't sound like what you'd like). More here. A: There is apaches mod_rewrite, or you could setup apache to direct https://dbadmin.example.com to path/to/example.com/dbadmin on the server <VirtualHost *> ServerName subdomain.domain.com DocumentRoot /home/httpd/htdocs/subdomain/ </VirtualHost>
{ "language": "en", "url": "https://stackoverflow.com/questions/8355", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: MySQL Administrator Backups: "Compatibility Mode", What Exactly is this doing? In MySQL Administrator, when doing backups, what exactly is "Compatibility Mode"? I'm trying to bridge backups generated by webmin with the upload tool available inside MySQL Administrator. My data already has a couple of inconsistencies (ticks, commas, etc, I think) I just won't try to kink out (they might just reappear in the future anyway). These kinks generate errors when I try to restore out of my backups. Now, if I generate backups from webmin, and then use MySQL Administrator to restore them, they fail. But if I generate the backups using MySQL Administrator AND tick "Compatibility Mode", then head over to MySQL Administrator (another instance) and restore... it works! According to MySQL, "Compatibility Mode" is: Compatibility mode creates backup files that are compatible with older versions of MySQL Administrator. Webmin, on the other hand, gives me the following options for compatibility: * *ANSI *MySQL 3.2.3 *MySQL 4.0 *PostgreSQL *Oracle *Microsoft SQL *DB2 *MaxDB Which would you say is the best fit? My data set is very large, so it would take quite some time to experiment one by one (especially when thinking might beat brute-forcing it). Edit: seems like it's doing ANSI, but I'm not 100% on it.
A: Compatibility mode - the mode that helps you create exports compatible with different versions of MySQL or other databases. You see, some versions of MySQL had different commands that were used in various versions. So what compatibility mode allows you to do is take a database and export the SQL to be compatible with another version of MySQL. Thus, you may want to upgrade your MySQL 3 server to 4 - this compatibility mode allows you to export your database or individual tables to create a SQL file that can be imported into a MySQL 4 server (should work in 5 also). I use webmin, also, and run MySQL 5. I use compatibility mode for MySQL 4.... I steer clear of any of the other ones, because I'm not running those other databases. As far as the MySQL commands that were different between MySQL 3.x and 4.x, I believe there were changes in regards to how CURRENT_TIMESTAMP is translated from MySQL 3 to 4, and also MySQL 3 doesn't support charsets, according to this forum post here: http://www.phpbuilder.com/board/showthread.php?t=10330692
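If it helps to see the same idea from the command line: mysqldump has a --compatible option whose values line up almost one-to-one with the webmin list above (ansi, mysql323, mysql40, postgresql, oracle, mssql, db2, maxdb). This is not necessarily what the GUI does internally, but it is aimed at the same goal; the database name and credentials below are placeholders.

# Dump in MySQL 4.0-compatible form
mysqldump --compatible=mysql40 -u backup_user -p my_database > my_database_mysql40.sql

# Or ANSI-flavoured SQL instead
mysqldump --compatible=ansi -u backup_user -p my_database > my_database_ansi.sql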
{ "language": "en", "url": "https://stackoverflow.com/questions/8365", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: How do you redirect HTTPS to HTTP? How do you redirect HTTPS to HTTP?. That is, the opposite of what (seemingly) everyone teaches. I have a server on HTTPS for which I paid an SSL certification for and a mirror for which I haven't and keep around for just for emergencies so it doesn't merit getting a certification for. On my client's desktops I have SOME shortcuts which point to http://production_server and https://production_server (both work). However, I know that if my production server goes down, then DNS forwarding kicks in and those clients which have "https" on their shortcut will be staring at https://mirror_server (which doesn't work) and a big fat Internet Explorer 7 red screen of uneasyness for my company. Unfortunately, I can't just switch this around at the client level. These users are very computer illiterate: and are very likely to freak out from seeing HTTPS "insecurity" errors (especially the way Firefox 3 and Internet Explorer 7 handle it nowadays: FULL STOP, kind of thankfully, but not helping me here LOL). It's very easy to find Apache solutions for http->https redirection, but for the life of me I can't do the opposite. Ideas? A: Keep in mind that the Rewrite engine only kicks in once the HTTP request has been received - which means you would still need a certificate, in order for the client to set up the connection to send the request over! However if the backup machine will appear to have the same hostname (as far as the client is concerned), then there should be no reason you can't use the same certificate as the main production machine. A: all the above did not work when i used cloudflare, this one worked for me: RewriteCond %{HTTP:X-Forwarded-Proto} =https RewriteRule ^(.*)$ http://%{HTTP_HOST}%{REQUEST_URI} [L,R=301] and this one definitely works without proxies in the way: RewriteCond %{HTTPS} on RewriteRule (.*) http://%{HTTP_HOST}%{REQUEST_URI} [R=301,L] A: It is better to avoid using mod_rewrite when you can. In your case I would replace the Rewrite with this: <If "%{HTTPS} == 'on'" > Redirect permanent / http://production_server/ </If> The <If> directive is only available in Apache 2.4+ as per this blog here. A: this works for me. <VirtualHost *:443> ServerName www.example.com # ... SSL configuration goes here Redirect "https://www.example.com/" "http://www.example.com/" </VirtualHost> <VirtualHost *:80> ServerName www.example.com # ... </VirtualHost> be sure to listen to both ports 80 and 443. A: This has not been tested but I think this should work using mod_rewrite RewriteEngine On RewriteCond %{HTTPS} on RewriteRule (.*) http://%{HTTP_HOST}%{REQUEST_URI} A: For those that are using a .conf file. 
<VirtualHost *:443> ServerName domain.com RewriteEngine On RewriteCond %{HTTPS} on RewriteRule (.*) http://%{HTTP_HOST}%{REQUEST_URI} SSLEngine on SSLCertificateFile /etc/apache2/ssl/domain.crt SSLCertificateKeyFile /etc/apache2/ssl/domain.key SSLCACertificateFile /etc/apache2/ssl/domain.crt </VirtualHost> A: Based on ejunker's answer, this is the solution working for me, not on a single server but on a cloud enviroment Options +FollowSymLinks RewriteEngine On RewriteCond %{ENV:HTTPS} on RewriteRule (.*) http://%{HTTP_HOST}%{REQUEST_URI} [R=301,L] A: If none of the above solutions work for you (they did not for me) here is what worked on my server: RewriteCond %{HTTPS} =on RewriteRule ^(.*)$ http://%{HTTP_HOST}/$1 [L,R=301] A: None of the answer works for me on Wordpress website but following works ( it's similar to other answers but have a little change) RewriteEngine On RewriteCond %{HTTPS} on RewriteRule (.*) http://%{HTTP_HOST}%{REQUEST_URI} [R=301,L] A: If you are looking for an answer where you can redirect specific url/s to http then please update your htaccess like below RewriteEngine On RewriteCond %{HTTPS} off RewriteCond %{THE_REQUEST} !/(home/panel/videos|user/profile) [NC] # Multiple urls RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301] RewriteCond %{HTTPS} on RewriteCond %{THE_REQUEST} /(home/panel/videos|user/profile) [NC] # Multiple urls RewriteRule ^ http://%{HTTP_HOST}%{REQUEST_URI} [L,R=301] It worked for me :) A: As far as I'm aware of a simple meta refresh also works without causing errors: <meta http-equiv="refresh" content="0;URL='http://www.yourdomain.com/path'">
{ "language": "en", "url": "https://stackoverflow.com/questions/8371", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "184" }
Q: How do I debug JavaScript in Visual Studio 2005? I just saw this mentioned in Stack Overflow question Best WYSIWYG CSS editor and didn't know it could be done. I'm a Visual Studio newbie, so how do you do it? Is there a separate debugger for JavaScript? I know how to work the one for code-behind pages... I usually use Firebug to deal with debugging JavaScript code. I'm using Visual Studio 2005. A: I prefer using Firebug for projects I can't use Visual Studio 2008 on. A: To debug in Visual Studio 2005, make sure that "disable script debugging" is unchecked. Then load your webpage in Internet Explorer. From the debug menu inside of Visual Studio 2005, select "Attach to process" and pick the instance of Internet Explorer that has your web page loaded. Alternatively, the Firebug team has been working on a "lite" version that you can include either as a script in your page or by launching it via a bookmarklet from your browser. It doesn't provide the full debugger that Firebug does, but it gives you a console and a command line from which you can inspect variables and log things to the console. A: TechRepublic has a good walk through - see Visual Studio 2008 simplifies JavaScript debugging. A: Visual Studio 2008 ASP.NET projects has debugging enabled by default. You can set breakpoints within your .js file while the website/web app project is run in the ASP.NET debug server. A: Just make sure you have 'Disable Script Debugging' unchecked, and just hit F5 to start debugging in VS2005 or 2008. I would also note that if you have your JavaScript inside the .aspx page you will have to find it via the script explore. However if you have it in a separate .js file you can just put a break point on it like you would any .cs file. A: In Internet Explorer, select View -> Script Debugger -> Open. That should do it. A: Usually you know where you are having problems, so you can set a breakpoint in your JavaScript code by placing the keyword "debugger;" on a line in your JavaScript code (obviously without the quotes) to set a breakpoint. When you get to it in Internet Explorer, it will ask you if you want to debug and prompt you to choose a debugger from a list, hopefully you will see Visual Studio in that list (both a new instance as well as your currently-running instance) - if you are using Firefox with Firebug, it will automatically stop execution on that line and you will be within the Firebug debugger, not Visual Studio. You will want to do the following to setup Internet Explorer for doing this - from within Internet Explorer, follow this menu path: Tools > Internet Options > Advanced Tab > Uncheck the "Disable Script Debugging" options. A: Yeah using Microsoft Script Editor is a an option if you have Office XP or Office 2003 installed. In IE uncheck Disable Script debugging (Internet Explorer) and Disable Script debugging (Other). Restart IE. In View menu you will have a new item, "script debugging", choose open. You will be given a choice of VS2005 or New instance of Microsoft Script Editor, choose that and give it a go. Edit: try this link for a tutorial A: You can set a breakpoint within JavaScript in Visual Studio 2005, but in addition to debugging needing to be enabled in Internet Explorer, you can only set the breakpoint in a .js file. You cannot debug any inline JavaScript code. I also sometimes have problems when trying to debug my JavaScript code when using the attach process method to go into debugging. I will normally use the "Start debugging" green arrow. 
You will know that your code will stop at the breakpoint in your .js file if the breakpoint icon (Burgandy Circle by default) is filled in. If it's not filled in, you will never stop there. Finally, make sure you have debugging enabled in your ASP.NET configuration settings. A: I usually use Firebug to deal with debugging JS. Unless you need to debug in IE, there's no need to stop using Firebug. It works with JavaScript in ASP.NET pages just as well as it does with any other type of page. Visual Studio's JavaScript debugging is alright, but really cannot compete with the full range of client-side information that Firebug aggregates. A: Debugging client JavaScript code in Visual Studio 2005: Add the following code to the start of the JavaScript code: debugger See Debugging client JavaScript in Visual Studio 2005.
{ "language": "en", "url": "https://stackoverflow.com/questions/8398", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "17" }
Q: How do you handle audit logging with SSRS? I have some reports in SQL Server Reporting Services 2005 that I need to keep audit logs for. The audit log should include who ran what report with what parameters. I can't use Windows authentication. What is the best way to log this information? A: The previous comments were dead on accurate that you can mine the data from the ReportServer ExecutionLog table in SQL Server 2000/2005 or the ExecutionLogStorage table in SQL Server 2008. If you are using form-based authentication to access the reports instead of windows authentication, then you are probably passing some unique UserID, CompanyID, CustomerID, or other value as a parameter in your reports. If this is the case, then the built-in table captures the parameters already. If you aren't passing the unique user identifier as a parameter, then you will probably need to rely on logging report executions in your application itself. A: Have a look at the ExecutionLog table in the ReportServer database. This contains information on who ran what report and with what parameters. I'm not sure how this is going to work without Windows authentication though, as it'll have no way of knowing who's running what report. A: Can you share some info on your authentication method? MS provides some report samples that include everything you need to get started. For SSRS 2005 http://www.codeplex.com/MSFTRSProdSamples/Wiki/View.aspx?title=SS2005!Server%20Management%20Sample%20Reports&referringTitle=Home Many more report Samples. http://www.codeplex.com/MSFTRSProdSamples/ A: From memory SSRS has built in logging for this exact situation A: If you are using a custom security extention, you will still be able to get all the info you need from the ExecutionLog table. Unless off-course if all your users uses a shared login, in which case you probably need to reconsider your architecture, depending on the importance of the audit log.
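For completeness, here is a sketch of the kind of query people run against the catalog to get the who/what/parameters view. The table and column names below are the SSRS 2005 ones (ExecutionLog joined to Catalog); adjust for 2008's ExecutionLogStorage if needed.

SELECT
    c.Path       AS ReportPath,
    e.UserName,              -- with forms-based auth this is whatever identity you pass through
    e.Parameters,
    e.TimeStart,
    e.TimeEnd,
    e.Status
FROM ReportServer.dbo.ExecutionLog AS e
JOIN ReportServer.dbo.[Catalog]    AS c ON c.ItemID = e.ReportID
WHERE e.TimeStart >= DATEADD(day, -7, GETDATE())
ORDER BY e.TimeStart DESC;

Note that the execution log is trimmed after a retention period (60 days by default, configurable on the server), so copy rows out to your own audit table if you need a permanent trail.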
{ "language": "en", "url": "https://stackoverflow.com/questions/8422", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: How do you get the icons out of shell32.dll? I'd like to get the Tree icon to use for a homegrown app. Does anyone know how to extract the images out as .ico files? I'd like both the 16x16 and 32x32, or I'd just do a screen capture.

A: If anyone is seeking an easy way, just use 7-Zip to unzip shell32.dll and look for the folder .rsrc/ICON/

A: Resources Extract is another tool that will recursively find icons from a lot of DLLs; very handy, IMO.

A: Just open the DLL with IrfanView and save the result as a .gif or .jpg. I know this question is old, but it's the second Google hit for "extract icon from dll". I wanted to avoid installing anything on my workstation, and I remembered I use IrfanView.

A: In Visual Studio, choose "File Open..." then "File...". Then pick Shell32.dll. A folder tree should open, and you will find the icons in the "Icon" folder. To save an icon, you can right-click on it in the folder tree and choose "Export".

A: Another option is to use a tool such as ResourceHacker. It handles way more than just icons as well. Cheers!

A: You can download the freeware Resource Hacker and then follow these instructions:

*Open any DLL file you wish to find icons in.
*Browse the folders to find specific icons.
*From the menu bar, select 'Action' and then 'Save'.
*Select a destination for the .ico file.

Reference: http://techsultan.com/how-to-extract-icons-from-windows-7/

A: Here is an updated version of a solution above. I added the missing custom assembly that was buried in a link, since novices would not pick up on that. This sample will run without modifications.

<#
.SYNOPSIS
Exports an ico and bmp file from a given source to a given destination

.DESCRIPTION
You need to set the Source and Destination locations. First version of a script; I found other
examples, but all I wanted to do was grab an ico file from an exe, and found getting a bmp useful
too. Others might find it useful.

.EXAMPLE
This will run but will nag you for input
.\Icon_Exporter.ps1

.EXAMPLE
This will default to shell32.dll automatically for -SourceEXEFilePath
.\Icon_Exporter.ps1 -TargetIconFilePath 'C:\temp\Myicon.ico' -IconIndexNo 238

.EXAMPLE
This will give you a green tree icon (press F5 for Windows to refresh Windows Explorer)
.\Icon_Exporter.ps1 -SourceEXEFilePath 'C:/Windows/system32/shell32.dll' -TargetIconFilePath 'C:\temp\Myicon.ico' -IconIndexNo 41

.NOTES
Based on http://stackoverflow.com/questions/8435/how-do-you-get-the-icons-out-of-shell32-dll
Version 1.1 2012.03.8
New version: Version 1.2 2015.11.20 (Added missing custom assembly and some error checking for novices)
#>

Param (
    [parameter(Mandatory = $false)]
    [string] $SourceEXEFilePath = 'C:/Windows/system32/shell32.dll',
    [parameter(Mandatory = $true)]
    [string] $TargetIconFilePath,
    [parameter(Mandatory = $false)]
    [Int32] $IconIndexNo = 0
)

# https://social.technet.microsoft.com/Forums/windowsserver/en-US/16444c7a-ad61-44a7-8c6f-b8d619381a27/using-icons-in-powershell-scripts?forum=winserverpowershell
$code = @"
using System;
using System.Drawing;
using System.Runtime.InteropServices;

namespace System
{
    public class IconExtractor
    {
        public static Icon Extract(string file, int number, bool largeIcon)
        {
            IntPtr large;
            IntPtr small;
            ExtractIconEx(file, number, out large, out small, 1);
            try
            {
                return Icon.FromHandle(largeIcon ? large : small);
            }
            catch
            {
                return null;
            }
        }

        [DllImport("Shell32.dll", EntryPoint = "ExtractIconExW", CharSet = CharSet.Unicode, ExactSpelling = true, CallingConvention = CallingConvention.StdCall)]
        private static extern int ExtractIconEx(string sFile, int iIndex, out IntPtr piLargeVersion, out IntPtr piSmallVersion, int amountIcons);
    }
}
"@

If (-not (Test-Path -Path $SourceEXEFilePath -ErrorAction SilentlyContinue)) {
    Throw "Source file [$SourceEXEFilePath] does not exist!"
}

[String]$TargetIconFilefolder = [System.IO.Path]::GetDirectoryName($TargetIconFilePath)
If (-not (Test-Path -Path $TargetIconFilefolder -ErrorAction SilentlyContinue)) {
    Throw "Target folder [$TargetIconFilefolder] does not exist!"
}

Try {
    If ($SourceEXEFilePath.ToLower().Contains(".dll")) {
        # Compile the helper type that wraps ExtractIconEx, then pull the icon by index.
        Add-Type -TypeDefinition $code -ReferencedAssemblies System.Drawing
        $Icon = [System.IconExtractor]::Extract($SourceEXEFilePath, $IconIndexNo, $true)
    } Else {
        [void][Reflection.Assembly]::LoadWithPartialName("System.Drawing")
        [void][System.Reflection.Assembly]::LoadWithPartialName("System.Windows.Forms")
        $image  = [System.Drawing.Icon]::ExtractAssociatedIcon("$($SourceEXEFilePath)").ToBitmap()
        $bitmap = New-Object System.Drawing.Bitmap $image
        $bitmap.SetResolution(72, 72)
        $icon = [System.Drawing.Icon]::FromHandle($bitmap.GetHicon())
    }
} Catch {
    Throw "Error extracting ICO file"
}

Try {
    $stream = [System.IO.File]::OpenWrite("$($TargetIconFilePath)")
    $icon.Save($stream)
    $stream.Close()
} Catch {
    Throw "Error saving ICO file [$TargetIconFilePath]"
}

Write-Host "Icon file can be found at [$TargetIconFilePath]"

A: There is also this resource available, the Visual Studio Image Library, which "can be used to create applications that look visually consistent with Microsoft software", presumably subject to the licensing given at the bottom. https://www.microsoft.com/en-ca/download/details.aspx?id=35825

A: This question already has answers here, but for anybody new wondering how to do this, I used 7-Zip and navigated to %SystemRoot%\system32\SHELL32.dll\.rsrc\ICON, then copied all the files to a desired location. If you'd like a pre-extracted directory, you can download the ZIP here. Note: I extracted the files on a Windows 8.1 installation, so they may vary from the ones on other versions of Windows.

A: Not sure if I am 100% correct, but from my testing, the above options for using 7-Zip or VS don't work on the Windows 10 / 11 versions of imageres.dll or shell32.dll. This is the content I see:

[shell32.dll]
.rsrc\MANIFEST\124
.rsrc\MUI\1
.rsrc\TYPELIB\1
.rsrc\version.txt
.data
.didat
.pdata
.rdata
.reloc
.text
CERTIFICATE

Update: And I think I found why. Link to an article I found. (Sorry for being off-domain.) You can find the resources in files in these locations:

"C:\Windows\SystemResources\shell32.dll.mun"
"C:\Windows\SystemResources\imageres.dll.mun"

Icons no longer in imageres.dll in Windows 10 1903 - 4kb file (superuser.com)

A: I needed to extract icon #238 from shell32.dll and didn't want to download Visual Studio or Resource Hacker, so I found a couple of PowerShell scripts from Technet (thanks to John Grenfell and to https://social.technet.microsoft.com/Forums/windowsserver/en-US/16444c7a-ad61-44a7-8c6f-b8d619381a27/using-icons-in-powershell-scripts?forum=winserverpowershell) that did something similar, and created a new script (below) to suit my needs.
The parameters I entered were (the source DLL path, the target icon file name, and the icon index within the DLL file):

C:\Windows\System32\shell32.dll
C:\Temp\Restart.ico
238

I discovered that the icon index I needed was #238 by trial and error: temporarily create a new shortcut (right-click on your desktop, select New --> Shortcut, type in calc and press Enter twice). Then right-click the new shortcut, select Properties, and click the 'Change Icon' button on the Shortcut tab. Paste in the path C:\Windows\System32\shell32.dll and click OK. Find the icon you wish to use and work out its index. NB: Index #2 is beneath #1 and not to its right. Icon index #5 was at the top of column two on my Windows 7 x64 machine. If anyone has a better method that works similarly but obtains higher-quality icons, I'd be interested to hear about it. Thanks, Shaun.

#Windows PowerShell Code###########################################################################
# http://gallery.technet.microsoft.com/scriptcenter/Icon-Exporter-e372fe70
#
# AUTHOR: John Grenfell
#
###########################################################################

<#
.SYNOPSIS
Exports an ico and bmp file from a given source to a given destination

.DESCRIPTION
You need to set the Source and Destination locations. First version of a script; I found other
examples, but all I wanted to do was grab an ico file from an exe, and found getting a bmp useful
too. Others might find it useful.
No error checking I'm afraid, so make sure your source and destination locations exist!

.EXAMPLE
.\Icon_Exporter.ps1

.NOTES
Version HISTORY:
1.1 2012.03.8
#>

Param (
    [parameter(Mandatory = $true)][string] $SourceEXEFilePath,
    [parameter(Mandatory = $true)][string] $TargetIconFilePath
)

CLS

#"shell32.dll" 238
If ($SourceEXEFilePath.ToLower().Contains(".dll")) {
    $IconIndexNo = Read-Host "Enter the icon index: "
    # NOTE: this branch relies on the System.IconExtractor type being available;
    # see the Add-Type block in the updated (v1.2) script earlier in this thread.
    $Icon = [System.IconExtractor]::Extract($SourceEXEFilePath, $IconIndexNo, $true)
} Else {
    [void][Reflection.Assembly]::LoadWithPartialName("System.Drawing")
    [void][System.Reflection.Assembly]::LoadWithPartialName("System.Windows.Forms")
    $image  = [System.Drawing.Icon]::ExtractAssociatedIcon("$($SourceEXEFilePath)").ToBitmap()
    $bitmap = New-Object System.Drawing.Bitmap $image
    $bitmap.SetResolution(72, 72)
    $icon = [System.Drawing.Icon]::FromHandle($bitmap.GetHicon())
}

$stream = [System.IO.File]::OpenWrite("$($TargetIconFilePath)")
$icon.Save($stream)
$stream.Close()

Write-Host "Icon file can be found at $TargetIconFilePath"

A: If you're on Linux, you can extract icons from a Windows DLL with gExtractWinIcons. It's available in Ubuntu and Debian in the gextractwinicons package. This blog article has a screenshot and brief explanation.

A: The following is a similar take to Mr. Annoyed's answer; it uses the ExtractIconEx function to extract icons from libraries. The main difference is that this function should be compatible with both Windows PowerShell 5.1 and PowerShell 7+, and it extracts both icons, large and small, for a given index (-IconIndex) by default. The function outputs two FileInfo instances pointing to the extracted images (saved as .bmp files) in -DestinationFolder. If no destination folder is provided, the icons are extracted to the PowerShell current directory ($pwd).
function Invoke-ExtractIconEx {
    [CmdletBinding(PositionalBinding = $false)]
    param(
        [Parameter()]
        [ValidateNotNull()]
        [string] $SourceLibrary = 'shell32.dll',

        [Parameter(Position = 0)]
        [string] $DestinationFolder = $pwd.Path,

        [Parameter(Position = 1)]
        [int] $IconIndex
    )

    # Load System.Drawing up front so the type lookup below resolves on Windows PowerShell 5.1 too.
    Add-Type -AssemblyName System.Drawing

    $refAssemblies = @(
        [Drawing.Icon].Assembly.Location
        if ($IsCoreCLR) {
            $pwshLocation = Split-Path -Path ([psobject].Assembly.Location) -Parent
            $pwshRefAssemblyPattern = [IO.Path]::Combine($pwshLocation, 'ref', '*.dll')
            (Get-Item -Path $pwshRefAssemblyPattern).FullName
        }
    )

    Add-Type '
using System;
using System.ComponentModel;
using System.Runtime.InteropServices;
using System.Drawing;
using System.IO;

namespace Win32Native
{
    internal class SafeIconHandle : SafeHandle
    {
        [DllImport("user32.dll")]
        private static extern bool DestroyIcon(IntPtr hIcon);

        public SafeIconHandle() : base(IntPtr.Zero, true)
        { }

        public override bool IsInvalid { get { return handle == IntPtr.Zero; } }

        protected override bool ReleaseHandle()
        {
            return DestroyIcon(handle);
        }
    }

    public static class ShellApi
    {
        [DllImport("shell32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
        private static extern uint ExtractIconExW(
            string szFileName,
            int nIconIndex,
            out SafeIconHandle phiconLarge,
            out SafeIconHandle phiconSmall,
            uint nIcons);

        private static void ExtractIconEx(string fileName, int iconIndex, out SafeIconHandle iconLarge, out SafeIconHandle iconSmall)
        {
            if (ExtractIconExW(fileName, iconIndex, out iconLarge, out iconSmall, 1) == uint.MaxValue)
            {
                throw new Win32Exception();
            }
        }

        public static FileInfo[] ExtractIcon(string sourceExe, string destinationFolder, int iconIndex)
        {
            SafeIconHandle largeIconHandle;
            SafeIconHandle smallIconHandle;
            ExtractIconEx(sourceExe, iconIndex, out largeIconHandle, out smallIconHandle);

            using (largeIconHandle)
            using (smallIconHandle)
            using (Icon largeIcon = Icon.FromHandle(largeIconHandle.DangerousGetHandle()))
            using (Icon smallIcon = Icon.FromHandle(smallIconHandle.DangerousGetHandle()))
            {
                FileInfo[] outFiles = new FileInfo[2]
                {
                    new FileInfo(Path.Combine(destinationFolder, string.Format("{0}-largeIcon-{1}.bmp", sourceExe, iconIndex))),
                    new FileInfo(Path.Combine(destinationFolder, string.Format("{0}-smallIcon-{1}.bmp", sourceExe, iconIndex)))
                };

                largeIcon.ToBitmap().Save(outFiles[0].FullName);
                smallIcon.ToBitmap().Save(outFiles[1].FullName);
                return outFiles;
            }
        }
    }
}
' -ReferencedAssemblies $refAssemblies

    $DestinationFolder = $PSCmdlet.GetUnresolvedProviderPathFromPSPath($DestinationFolder)
    [Win32Native.ShellApi]::ExtractIcon($SourceLibrary, $DestinationFolder, $IconIndex)
}

Usage:

# Extracts to PWD
Invoke-ExtractIconEx -IconIndex 1

# Targeting a different library
Invoke-ExtractIconEx -SourceLibrary user32.dll -IconIndex 1

# Using a different target folder
Invoke-ExtractIconEx path\to\my\folder -IconIndex 1
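One small companion to the scripts above: before picking an index by trial and error, it can help to know how many icons a library actually contains. Per the Win32 documentation, ExtractIconEx returns the total icon count when it is called with an index of -1 and null icon handles. The sketch below uses that behavior; the Get-IconCount function name and its P/Invoke wrapper are my own illustration and not part of any of the scripts above.

# Sketch: count the icons in a library by calling ExtractIconEx with an index of -1.
# Assumes Windows PowerShell 5.1 or PowerShell 7+ running on Windows.
Add-Type -Namespace Win32Sketch -Name IconCount -MemberDefinition @'
[DllImport("shell32.dll", CharSet = CharSet.Unicode)]
public static extern uint ExtractIconExW(string szFileName, int nIconIndex,
    IntPtr phiconLarge, IntPtr phiconSmall, uint nIcons);
'@

function Get-IconCount {
    param([string] $LibraryPath = "$env:SystemRoot\System32\shell32.dll")
    # An index of -1 with null handle pointers makes the API return the icon count.
    [Win32Sketch.IconCount]::ExtractIconExW($LibraryPath, -1, [IntPtr]::Zero, [IntPtr]::Zero, 0)
}

Get-IconCount                                                         # count for shell32.dll
Get-IconCount -LibraryPath "$env:SystemRoot\System32\imageres.dll"    # count for imageres.dll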
{ "language": "en", "url": "https://stackoverflow.com/questions/8435", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "65" }
Q: Conditional Visibility and Page Breaks with SQL Server 2005 Reporting Services I know there's a bug with conditional visibility and page breaks in SQL 2005, but I wonder if anyone has come up with a workaround. I have a table that has a conditional visibility expression, and I need a page break at the end of the table.

*If I set the PageBreakAtEnd property to true, it is ignored no matter what. Remove the visibility condition and it works.
*If I place the table inside a rectangle with the conditional visibility on the table, and the page break on the table: same result, the page break property is ignored.
*If I set the rectangle with the PageBreakAtEnd property and the table with the visibility condition, then I still get a page break even when the table isn't shown.

Any other ideas on what to try? I'm almost at the point where I need a separate report rather than conditional visibility :(

Edit: @Josh: That has the same problems. If the second table has conditional visibility it doesn't work. If it doesn't have the visibility expression, then I get the page break all the time.
@Erick: I really wanted that to be the answer, but unfortunately it doesn't work. When the visibility expression evaluates to hidden, there's a big gap where the rectangles would be (which I can live with), and when it evaluates to visible, the page breaks still don't work.

A: Hi, this is Bala Samsnai. I came across the same type of error. I solved this without using a rectangle. Instead of giving the visibility expression to the whole table, select one row in the table and give it the visibility expression; repeat that for all the rows (Header, Detail, Footer). That way both visibility and paging work at the same time. Bala Samsani

A: Place two rectangles, one inside the other. Place your table inside the inner rectangle and set it to always be visible. Set the inner rectangle's page break to Insert After Rectangle. Set the outer rectangle's visibility to use your conditional expression. The page break and the conditional visibility are now separated, and the inner rectangle's page break won't be processed if it is not visible, but it will be if it is visible.
Edit: When I tried this, it did not appear to work in the Preview tab in Visual Studio, but it did work in Print Preview and when I exported the report to PDF.

A: Use a rectangle which has the conditional visibility set, and an empty table inside of that rectangle which has the "insert page break before" setting enabled.

A: Add a second (empty) table immediately after the first. Page break after that.

A: I tried Bala Samsnai's solution and it works. I will explain more later. Erik B's solution of using two rectangles kind of worked until I hit a snag: I cannot embed a table in the Detail row of another table. So that was a bummer. I followed Bala's solution with my report, which contains only one table and two groups within the table. Instead of applying an expression to control the visibility of the groups, I left those visible and applied the visibility condition expression to each row's Hidden property. Right-click on the row handle on the far left and you will see the Properties window pop up on the right or left (usually as a tab next to Solution Explorer). In the Visibility grouping, you will see a property called "Hidden", which has a default value of FALSE. Click on the value, and in the dropdown the first option is an expression.
Voila, you can set up the condition for when the row is hidden. It worked like a charm for me. I hope this helps others. In my case, I had to hide the details section when some of the values were 0.

A: I struggled with this problem for quite a few hours until I discovered that my layout was too wide to fit on the printed page (A4). I had used the extra width for commenting on the different fields in text boxes with Hidden=false, and as a result twice as many pages as necessary were generated just to display whitespace. So you might want to check the page width as well.

A: One thing I noticed is the differences between reports, even though I have used the same report as a "template" at different times in SSRS 2005. What I mean is, if you open the report you're having problems with in a programmer's editor (say UltraEdit) and look at the RDL file, you may pick up slight variations in page width and height. I noticed this, adjusted the report I was having problems with to the correct width of the expected paper, and the report printed perfectly to PDF, at the printer, and in print preview. Just a thought.
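The last two answers both come down to the report being physically wider than the paper, which is easy to miss in the designer. Since the RDL is plain XML, a quick way to check several reports at once is to read the size elements directly. The element names below (Width, PageWidth, PageHeight, LeftMargin, RightMargin) are recalled from the 2005 RDL schema, so treat this as a sketch and verify against your own .rdl files. The rule of thumb is that the body Width plus the left and right margins must not exceed PageWidth, or you get the extra blank pages described above.

# Sketch: report the size-related elements of each RDL file under the current folder.
# PowerShell's XML property access matches element names regardless of namespace,
# so no namespace handling is needed for the 2005 RDL schema.
Get-ChildItem -Path . -Filter *.rdl -Recurse | ForEach-Object {
    [xml]$rdl = Get-Content -Path $_.FullName -Raw
    [pscustomobject]@{
        Report      = $_.Name
        BodyWidth   = $rdl.Report.Width        # width of the report body
        PageWidth   = $rdl.Report.PageWidth
        PageHeight  = $rdl.Report.PageHeight
        LeftMargin  = $rdl.Report.LeftMargin
        RightMargin = $rdl.Report.RightMargin
    }
}
# If BodyWidth + LeftMargin + RightMargin is wider than PageWidth (mind the units),
# the rendered output spills onto extra pages of whitespace.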
{ "language": "en", "url": "https://stackoverflow.com/questions/8439", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19" }