Q: Open local file with AIR / Flex I have written an AIR Application that downloads videos and documents from a server. The videos play inside of the application, but I would like the user to be able to open the documents in their native applications. I am looking for a way to prompt the user to Open / Save As on a local file stored in the Application Storage Directory. I have tried using the FileReference + URLRequest classes but this throws an exception that it needs a remote url. My last resort is just copying the file to their desktop : \ A: Only way I could figure out how to do it without just moving the file and telling the user was to pass it off to the browser. navigateToURL(new URLRequest(File.applicationStorageDirectory.nativePath + "/courses/" + fileName)); A: This is the first release of the FluorineFx Aperture framework. The framework provides native OS integration (Windows only) support for AIR desktop applications. The framework extends Adobe AIR applications in a non-intrusive way: simply redistribute the provided libraries with your AIR application, at runtime the framework will automatically hook into your application. Features * *Launch native applications and documents with the provided apsystem library *Take screenshots of the whole screen with the provided apimaging library *Access Outlook contacts from an Air application with the provided apoutlook library http://aperture.fluorinefx.com/ A: You can use the new openWithDefaultApplication(); function that's available on the File class (I believe it's only available in AIR 2) eg: var file:File = File.desktopDirectory.resolvePath(fileLocation); file.openWithDefaultApplication(); A: Currently adobe is not supporting opening files in there default applications. Passing it off to the browser seems to be the only way to make it work. You could however use a FileStream and write a small html file with some javascript that sets the location of an iframe to the file, then after 100ms or so calls window.close(). Then open that file in the browser. A: For me it's: var request:URLRequest = new URLRequest(); request.url = file.url; navigateToURL(request, "_blank"); The navigateToURL(file.nativePath) didn't work since the path, "/users/mydirectory/..." was outside the application sandbox. AIR only allows some protocols to be opened with navigateToURL().
{ "language": "en", "url": "https://stackoverflow.com/questions/5017", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: Is The Perl Journal available online? Does anyone know where online copies of the old The Perl Journal articles can be found? I know they are now owned by Dr. Dobb's, just the main page for it says they are part of whatever section the subject matter is relevant too, rather than being indexed together. That said, I have never been able to find any of them online on that site. I know Mark Jason Dominus has a few of his articles on his site, any one know of any other good places? Or even what search terms to use at Dr. Dobb's? A: Randal Schwartz's Perl Journal articles are linked from http://www.stonehenge.com/merlyn/PerlJournal/ A: Volumes 1-5 (1996 -> 2000) can be found at http://www.foo.be/docs/tpj/ Hmm, looks like that was the entire run? I though it was longer than that for some reason. A: Many of the articles have the string "TPJ" at the end of the article, so I get quite a few results from searching just "TPJ". I'll put together an index of the articles they published on the website. Are you looking for a particular article or author? I've linked to my TPJ Online articles on my personal web page, but they are all into the DDJ site with unhelpful URLs. Stonehenge gives out Randal's TPJ articles for free, and Simon Cozens has most of his articles online too. Besides the online articles, there are compilation books from O'Reilly Media: Web, Graphics & Perl TK: Best of The Perl Journal, Games, Diversions & Perl Culture: Best of The Perl Journal , and Computer Science & Perl Programming: Best of TPJ . The issues at http://www.foo.be/docs/tpj/ are the entire run of TPJ before it was purchased by Earthweb. At issue 20, things got ugly as Earthweb was imploding in the dot.bomb days and TPJ was eventually bought by CMP and added a supplement to SysAdmin in the summer of 2001. That lasted for about a year before they let it die quietly, then it came back a couple years later as an online magazine. It finally stopped in 2006 when it was rolled into DDJ completely and ceased to exist as a title. Hope that helps,
{ "language": "en", "url": "https://stackoverflow.com/questions/5024", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "23" }
Q: Locking a SQL Server Database with PHP I want extra security for a particular point in my web app, so I want to lock the database (SQL Server 2005). Any suggestions, or is this even necessary with SQL Server? Edit on question: The query is failing silently with no error messages logged, and does not occur inside of a transaction. Final Solution: I was never able to solve the problem; what I wound up doing was switching to MySQL and using a transaction-level query here. This was not the main or even a primary reason to switch. I had been having problems with SQL Server, and the switch allowed me to have our CMS and various other tools all running on the same database. Previously we had both a SQL Server and a MySQL database running our site. The port was a bit time consuming; however, in the long run I feel it will work much better for the site and the business. A: I suppose you have three options. * *Set user permissions so that user x can only read from the database. *Set the database into single-user mode so only one connection can access it: sp_dboption 'myDataBaseName', 'single user', true *Set the database to read-only: sp_dboption 'myDataBaseName', 'read only', true A: I was never able to solve the problem; what I wound up doing was switching to MySQL and using a transaction-level query here. This was not the main or even a primary reason to switch. I had been having problems with MSSQL, and the switch allowed me to have our CMS and various other tools all running on the same database. Previously we had both an MSSQL and a MySQL database running our site. The port was a bit time consuming; however, in the long run I feel it will work much better for the site and the business.
{ "language": "en", "url": "https://stackoverflow.com/questions/5025", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16" }
Q: Reduce ASP.NET menu control size (without 3rd party libraries) I have a fairly simple ASP.NET 2.0 menu control using a sitemap file and security trimming. There are only 21 menu options, but the resulting HTML of the menu is a whopping 14k. The site is hosted on our company's intranet and must be served to people worldwide on limited bandwidth, so I'd like to reduce the size of the menus. What is the best way to do this? Does anybody have a good reference? I have the following constraints: * *The solution must not reference any 3rd party DLL files (getting approval would be a nightmare) *Has to work with IE 6 CSS and JavaScript are fine, as long as they work with IE 6. A: Take a look at: http://www.asp.net/CSSAdapters/Menu.aspx The default Menu control is rendering far too much HTML. A: You might have a look at my ASP.NET menu optimization post. What I do is extract the common part of the menu rendered on every page to an external file that is loaded and cached only once by the user's browser. This way the pages are 60-70% smaller in some cases.
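A rough sketch of the "render the menu once and serve it as a cached external file" idea from the last answer, assuming an ASP.NET code-behind; the method name, the output path and the idea of triggering it from a single page are assumptions for illustration, not part of the original answer:

    // Renders the Menu control's HTML once and saves it to a static file that other
    // pages can reference, instead of re-emitting the full markup on every request.
    protected void CacheMenuMarkup(System.Web.UI.WebControls.Menu menu)
    {
        using (var stringWriter = new System.IO.StringWriter())
        using (var htmlWriter = new System.Web.UI.HtmlTextWriter(stringWriter))
        {
            menu.RenderControl(htmlWriter);                          // capture the markup the control would emit
            string path = Server.MapPath("~/static/menu.html");      // hypothetical output location
            System.IO.File.WriteAllText(path, stringWriter.ToString());
        }
    }

The cached file can then be pulled in client-side (or via an include), so each page carries only a small reference rather than the full 14k of menu markup.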
{ "language": "en", "url": "https://stackoverflow.com/questions/5027", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: ASP.NET version of Joomla Does anyone ever found/used an ASP.NET application similar to Joomla? I need to set up a quick and dirty CMS on a Windows Server and our client doesn't want us to use something else than ASP.NET. A: I've been told by a friend that Umbraco is everything you would ever want in a CMS (and it was in the list that Nathan included in his answer). This recommendation is coming from a guy who's built several CMS solutions over the years and after taking a brief look at it, I think I'm going to try to push my clients towards using it over their current solutions. A: DotNetNuke is quick to set up and get running. It is the best ASP.NET CMS that I have used. It comes with many modules, and can be extended with numerous commercial and free 3rd party modules. It is very easy to change to look of a DNN site by simply changing the assigned skin, and many 3rd party skins are available as well. Warbeats.com runs on DNN, and handles quite a bit of traffic. A: Community Server is a very well built CMS for ASP.NET, a free version is available. A: Graffiti is Telligent's CMS (makers of the previously mentioned Community Server) and my be more appropriate depending on your requirements. There are also many CMS projects on Codeplex. A: I tried Graffiti and DotNetNuke and thought both were troublesome, then I tried Umbraco based on a recommendation from a friend and I love it! So much that I recommended it to Kooshmoose... I should also note that dasBlog is not a CMS, it's just blog software (which I use on my personal site and love, but it's not a CMS...) A: Did you Look at DotNetNuke (http://www.dotnetnuke.com/) Its seems to be a good Systems to Start off as a base , But I doubt I could call it a Full CMS ? (Upto the users to decide) A: MojoPortal might be worth a look into. Other than that, the list linked to by Nathan is well-worth looking into A: umbraco gets my vote as a good CMS that comes close to Joomla in maturity and out of the box functionality. I'm not that fond of DNN, but it's been at least a year since I ran it thru its paces. A: See also Oxite. It's an ASP.NET MVC Blog engine that you can use it for CMS. A: If the concern isn't really about the ASP.Net language but about keeping a Windows server, you can use Joomla on IIS. You can also check the list of CMSs on Microsoft's Web Platform
{ "language": "en", "url": "https://stackoverflow.com/questions/5061", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15" }
Q: How to add CVS directories recursively I've played with CVS a little bit and am not the most familiar with all of its capabilities, but a huge annoyance for me is trying to add new directories that contain more directories in them. Running "cvs add" only adds the contents of the current directory, and using "cvs import" didn't look like the right thing either since it's still all code I'm producing (this howto claimed import is for 3rd party sources) Do you guys know any way to recursively add everything in a given directory to the current CVS project (or if SVN or git makes this notably easier)? A: I found this worked pretty effectively: First, add all the directories, but not any named "CVS": find . -type d \! -name CVS -exec cvs add '{}' \; Then add all the files, excluding anything in a CVS directory: find . \( -type d -name CVS -prune \) -o \( -type f -exec cvs add '{}' \; \) Now, if anyone has a cure for the embarrassment of using CVS in this day and age... A: First add all directories to CVS find . -type d -print0| xargs -0 cvs add Then add all the files in the directories to CVS find . -type f | grep -v CVS | xargs cvs add Worked for me A: cvs import is not just for 3rd-party sources. In fact, directories are not versioned by CVS, so they are not a subject to branch policies. As long as you import empty directories, it is fine. A: Note that you can only use cvs add on files and folders that are located inside an already checked out working copy, otherwise you will get the "Cannot open CVS/Entries for reading" message. A technique for creating a new "root module" using cvs add is explained in this WinCVS FAQ item: http://cvsgui.sourceforge.net/newfaq.htm#add_rootmodule If you are on Windows, both TortoiseCVS and WinCVS support recursive addition (and optional commit) of multiple files in a single operation. In WinCvs look for the macro Add>Recursive Add (auto-commit)... In Tortoise use the Add Contents command on a directory. Both will allow you to select which files to add and what keyword expansion modes to use for them (mostly used for defining which files are binary). For further info about recursive add in WinCvs look here: http://cvsgui.sourceforge.net/newfaq.htm#cvs-add_recursive Apart from that cvs import is well suited for mass-additions. However, the way cvs import is implemented in vanilla CVS has two drawbacks (because it was originally written for third-party code): * *it creates a mandatory branch with special semantics. *it does not create the repository meta-data (i.e. the hidden CVS directories) needed to establish the imported code as a checked out working copy, which means that in order to actually work with the imported files you first have to check them out of the repository If you are using CVSNT you can avoid both drawbacks by specifying the -nC option on import. -n is for avoiding the "vendor" branch and -C is for creating the CVS directories. A: This answer from Mark was usefull (find . -type f -print0| xargs -0 cvs add) but I solved some problem that occured when the cvs add try to add his own files like Tag, Entries, ect.. * *Add your most high level folder called NEW_FOLDER cvs add NEW_FOLDER *Use the previous command with some exclusion to add all sub-folders tree find NEW_FOLDER/ -type d ! -name "CVS" -and ! -name "Tag" -and ! -name "Entries.Log" -and ! -name "Entries" -and ! -name "Repository" -and ! -name "Root" -print0 | xargs -0 cvs add *Use the previous command with some exclusion to add all files find NEW_FOLDER/ -type f ! -name "CVS" -and ! 
-name "Tag" -and ! -name "Entries.Log" -and ! -name "Entries" -and ! -name "Repository" -and ! -name "Root" -print0 | xargs -0 cvs add A: Ah, spaces. This will work with spaces: find . -type f -print0| xargs -0 cvs add A: I use this: First add recursively all directories less the CVS ones: $> find . -type d \! -name CVS -exec cvs add '{}' \; Second add all files, less ones the CVS directories: find . \( -type d -name CVS -prune \) -o \( -type f -exec cvs add '{}' \; \) Third do “commit” recursively like "first version" comment: find . \( -type d -name CVS -prune \) -o \( -type f -exec cvs commit -m "first version" '{}' \; \) Last tagging all recursively: find . \( -type d -name CVS -prune \) -o \( -type f -exec cvs tag -F MY_CVS_TAG '{}' \; \) A: I think this is what I did back in my CVS days: find . -type f | xargs cvs add A: First add all directories to CVS find . -type d -print0| xargs -0 cvs add Then add all the files in the directories to CVS find . -type f -print0| xargs -0 cvs add A: I'm using this simple shell script, which should be started from an already checked-out CVS directory. It will stupidly try to add/commit any files and directories it finds upon its recursive search, so in the end you should end up with a fully commit tree. Simply save this as something like /usr/bin/cvsadd and don't forget to chmod +x /usr/bin/cvsadd. #!/bin/sh # @(#) add files and directories recursively to the current CVS directory # (c) 2009 by Dirk Jagdmann if [ -z "$1" ] ; then echo "usage: cvsadd 'import message'" exit 1 fi if [ -d "$2" ] ; then cvs add "$2" cd "$2" || exit 1 fi if [ ! -d CVS ] ; then echo "current directory needs to contain a CVS/ directory" exit 1 fi XARGS="xargs -0 -r -t -L 1" # first add all files in current directory find . -maxdepth 1 -type f -print0 | $XARGS cvs add find . -maxdepth 1 -type f -print0 | $XARGS cvs ci -m "$1" # then add all directories find . -maxdepth 1 -type d -not -name CVS -a -not -name . -print0 | $XARGS "$0" "$1" A: Already discussed methods will do recursive lookup, but it will fail if you perform same action again (if you want to add subtree to existed tree) For that reason you need to check that your directories was not added yet and then add only files which not added yet. To do that we use output of cvs up to see which elements was not added yet - its will have question mark at start of line. We use options -0, -print0 and -zZ to be sure that we correctly process spaces in filenames. We also using --no-run-if-empty to avoid run if nothing need to be added. CVS_PATTERN=/tmp/cvs_pattern cvs -z3 -q up | egrep '^\?.*' | sed -e 's/^? //' > $CVS_PATTERN find . -type d \! -name CVS -print0 | grep -zZf $CVS_PATTERN | xargs -0 --no-run-if-empty cvs add find . \( -type d -name CVS -prune \) -o \( -type f -print0 \) | grep -zZf $CVS_PATTERN | xargs -0 --no-run-if-empty cvs add cvs commit -m 'commiting tree recursively' With this approach we will avoid such errors: cvs add: cannot add special file `.'; skipping cvs [add aborted]: there is a version in ./dirname1 already and cvs add: `./dirname2/filename' already exists, with version number 1.1.1.1 A: Mark's solution resolves the spaces issue, but produces this issue: cvs add: cannot open CVS/Entries for reading: No such file or directory cvs [add aborted]: no repository To fix it, the actual command to use is: find . 
-type f -exec cvs add {} \; A: i like to do (as directory's need an add to) cvs status 2>/dev/null | awk '{if ($1=="?")system("cvs add "$2)}' you might need to run this multiple times(first for directory's then for its children) until there is no output A: SVN definitely makes this trivial task, using a GUI like Tortoise is even easier, however. This might be a good place to start: http://www-mrsrl.stanford.edu/~brian/cvstutorial/
{ "language": "en", "url": "https://stackoverflow.com/questions/5071", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "40" }
Q: Bigger than a char but smaller than a blob Chars are great because they are fixed size and thus make for a faster table. They are, however, limited to 255 characters. I want to hold 500 characters, but a blob is variable length and that's not what I want. Is there some way to have a fixed-length field of 500 characters in MySQL, or am I going to have to use 2 char fields? A: I would suggest using a varchar(500). Even though varchar isn't a fixed length, the database should reserve the correct amount of space. You shouldn't notice any performance difference using varchar(500) over 2xchar(255). You're also probably going to cause extra overhead by joining two char fields together. A: I would suggest using a varchar(500) ... if you have MySQL 5.0.3 or higher. In previous versions, VARCHAR was restricted to 255 characters. Also, CHAR and VARCHAR do not work the same regarding trailing spaces. Be sure to read 10.4.1. The CHAR and VARCHAR Types (this is for MySQL 5.0). A: You're worrying too much about internal implementation details. Don't pre-optimize. Go with VARCHAR(500)
{ "language": "en", "url": "https://stackoverflow.com/questions/5075", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Securing a linux webserver for public access I'd like to set up a cheap Linux box as a web server to host a variety of web technologies (PHP & Java EE come to mind, but I'd like to experiment with Ruby or Python in the future as well). I'm fairly versed in setting up Tomcat to run on Linux for serving up Java EE applications, but I'd like to be able to open this server up, even just so I can create some tools I can use while I am working in the office. All the experience I've had with configuring Java EE sites has all been for intranet applications where we were told not to focus on securing the pages for external users. What is your advice on setting up a personal Linux web server in a secure enough way to open it up for external traffic? A: This article has some of the best ways to lock things down: http://www.petefreitag.com/item/505.cfm Some highlights: * *Make sure no one can browse the directories *Make sure only root has write privileges to everything, and only root has read privileges to certain config files *Run mod_security The article also takes some pointers from this book: Apache Securiy (O'Reilly Press) As far as distros, I've run Debain and Ubuntu, but it just depends on how much you want to do. I ran Debian with no X and just ssh'd into it whenever i needed anything. That is a simple way to keep overhead down. Or Ubuntu has some nice GUI things that make it easy to control Apache/MySQL/PHP. A: It's important to follow security best practices wherever possible, but you don't want to make things unduly difficult for yourself or lose sleep worrying about keeping up with the latest exploits. In my experience, there are two key things that can help keep your personal server secure enough to throw up on the internet while retaining your sanity: 1) Security through obscurity Needless to say, relying on this in the 'real world' is a bad idea and not to be entertained. But that's because in the real world, baddies know what's there and that there's loot to be had. On a personal server, the majority of 'attacks' you'll suffer will simply be automated sweeps from machines that have already been compromised, looking for default installations of products known to be vulnerable. If your server doesn't offer up anything enticing on the default ports or in the default locations, the automated attacker will move on. Therefore, if you're going to run a ssh server, put it on a non-standard port (>1024) and it's likely it will never be found. If you can get away with this technique for your web server then great, shift that to an obscure port too. 2) Package management Don't compile and install Apache or sshd from source yourself unless you absolutely have to. If you do, you're taking on the responsibility of keeping up-to-date with the latest security patches. Let the nice package maintainers from Linux distros such as Debian or Ubuntu do the work for you. Install from the distro's precompiled packages, and staying current becomes a matter of issuing the occasional apt-get update && apt-get -u dist-upgrade command, or using whatever fancy GUI tool Ubuntu provides. A: One thing you should be sure to consider is what ports are open to the world. I personally just open port 22 for SSH and port 123 for ntpd. But if you open port 80 (http) or ftp make sure you learn to know at least what you are serving to the world and who can do what with that. I don't know a lot about ftp, but there are millions of great Apache tutorials just a Google search away. 
A: Bit-Tech.Net ran a couple of articles on how to setup a home server using linux. Here are the links: Article 1 Article 2 Hope those are of some help. A: @svrist mentioned EC2. EC2 provides an API for opening and closing ports remotely. This way, you can keep your box running. If you need to give a demo from a coffee shop or a client's office, you can grab your IP and add it to the ACL. A: Its safe and secure if you keep your voice down about it (i.e., rarely will someone come after your home server if you're just hosting a glorified webroot on a home connection) and your wits up about your configuration (i.e., avoid using root for everything, make sure you keep your software up to date). On that note, albeit this thread will potentially dwindle down to just flaming, my suggestion for your personal server is to stick to anything Ubuntu (get Ubuntu Server here); in my experience, the quickest to get answers from whence asking questions on forums (not sure what to say about uptake though). My home server security BTW kinda benefits (I think, or I like to think) from not having a static IP (runs on DynDNS). Good luck! /mp A: Be careful about opening the SSH port to the wild. If you do, make sure to disable root logins (you can always su or sudo once you get in) and consider more aggressive authentication methods within reason. I saw a huge dictionary attack in my server logs one weekend going after my SSH server from a DynDNS home IP server. That being said, it's really awesome to be able to get to your home shell from work or away... and adding on the fact that you can use SFTP over the same port, I couldn't imagine life without it. =) A: You could consider an EC2 instance from Amazon. That way you can easily test out "stuff" without messing with production. And only pay for the space,time and bandwidth you use. A: If you do run a Linux server from home, install ossec on it for a nice lightweight IDS that works really well. [EDIT] As a side note, make sure that you do not run afoul of your ISP's Acceptable Use Policy and that they allow incoming connections on standard ports. The ISP I used to work for had it written in their terms that you could be disconnected for running servers over port 80/25 unless you were on a business-class account. While we didn't actively block those ports (we didn't care unless it was causing a problem) some ISPs don't allow any traffic over port 80 or 25 so you will have to use alternate ports. A: If you're going to do this, spend a bit of money and at the least buy a dedicated router/firewall with a separate DMZ port. You'll want to firewall off your internal network from your server so that when (not if!) your web server is compromised, your internal network isn't immediately vulnerable as well. A: There are plenty of ways to do this that will work just fine. I would usually jsut use a .htaccess file. Quick to set up and secure enough . Probably not the best option but it works for me. I wouldn't put my credit card numbers behind it but other than that I dont really care. A: Wow, you're opening up a can of worms as soon as you start opening anything up to external traffic. Keep in mind that what you consider an experimental server, almost like a sacrificial lamb, is also easy pickings for people looking to do bad things with your network and resources. Your whole approach to an externally-available server should be very conservative and thorough. 
It starts with simple things like firewall policies, includes the underlying OS (keeping it patched, configuring it for security, etc.) and involves every layer of every stack you'll be using. There isn't a simple answer or recipe, I'm afraid. If you want to experiment, you'll do much better to keep the server private and use a VPN if you need to work on it remotely.
{ "language": "en", "url": "https://stackoverflow.com/questions/5078", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "21" }
Q: Upload form does not work in Firefox 3 with Mac OS X? Today, I ran into this weird problem with a user on Mac OS X. This user's uploads always failed. The form uses a regular "input type=file". The user could upload using any browser except Firefox 3 on his Mac. Only this particular user was seeing this error; the problem is only with this one particular user. A: The user corrected this weird problem by recreating their Firefox profile. How to manage Firefox profiles I imagine a re-install of Firefox would have corrected the problem as well. A: I imagine a re-install of Firefox would have corrected the problem as well. Profile-related problems cannot usually be solved by re-installing Firefox, since reinstalling (or upgrading) would re-use the same "damaged" profile.
{ "language": "en", "url": "https://stackoverflow.com/questions/5084", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Learning Ruby on Rails any good for Grails? My company is in the process of starting down the Grails path. The reason for that is that the current developers are heavy on Java but felt the need for a MVC-style language for some future web development projects. Personally, I'm coming from the design/usability world, but as I take more "front-end" responsibilities I'm starting to feel the need for learning a language more intensively so I can code some logic but especially the front-end code for my UIs and stuff. I've been trying to get into Python/Django personally, but just never invested too much time on it. Now that my company is "jumping" into Grails I bought the "Agile Web Development with Rails (3rd Ed - Beta)" and I'm starting to get into RoR. I'd still like to learn Python in the future or on the side, but my biggest question is: * *Should I be learning RoR, and have a more versatile language in my "portfolio", knowing that my RoR knowledge will be useful for my Grails needs as well?? -OR- * *Should I just skip RoR and focus on learning Grails that I'll be needing for work soon, and work on learning RoR/Django (Ruby/Python) later? Basically the question revolves around the usefulness of Grails in a non-corporate setting and the similarities between Rails and Grails. (and this, while trying to avoid the centennial discussion of Python vs Ruby (on Rails) :)) A: Just a bit of a question, is the reason they are choosing Grails because Groovy is closer in syntax to Java than Ruby, or because they want access to Java? If it is the former, then I would say try to focus on Grails since that is what you will be using. If it is the latter, you might want to see if the development team is open to using JRuby. I have never used Grails or Rails before, but I have used Groovy and Ruby before, and as a language I think Ruby is much cleaner and more consistent, and the team might enjoy production more. As a platform, Rails has been out longer and has a lot of attention, so I would imagine it is a more stable platform to use with more fleshed out features. JRuby has full access to classes written in Java, so this is why I would say consider trying Rails. If it is too late in the decision time to consider it then I guess you can just ignore this post. Basically, if you just want to hook in with Java, then JRuby is an option you should consider, but if the team is afraid of non-Java like syntax, maybe continue as is. A: I would learn both. They are both up and coming technologies. Learning RESTful coding is a real benefit in any language. I use GRAILS at work and RoR for side projects. I can say that the RoR community is much larger (I'm talking about RoR vs Grails not RoR vs Java) and very helpful. Short Answer: They are similar.... what could it hurt? A: Just skip RoR. There are really not a lot of similar things(besides the name) I certainly believe that being enough familiar with Java, plus some experience programming with a dynamic language is more than enough if you plan to do serious development with Grails. Comparing just only views(taglibs in Grails, RHTML in RoR) and the persistence stuff(GORM vs ActiveRecord) is just too different in the core, to invest time learning the nitty gritty details of RoR. Just dive into Grails, you won't regret. Edit: corrected typo. A: I've been learning RoR and Grails and the latter is far easier to learn. Both frameworks share the same principles (agile, kiss, dry, duck typing and so..) 
but Groovy syntax is... well, simply great, something you can learn and use in the blink of an eye. I truly feel that Grails has a brighter future than RoR. PD: Just in case you find it useful, a colleague of mine is working full time with Grails and has a blog with some tips: http://dahernan.net/search/label/grails A: Mmh, I don't know how to say this. Some people might bash me over this. Language (Groovy and Ruby) As a language I reckon Ruby is funkier than Groovy. Groovy only exists to ease Java programmers in, as you don't need to learn too much new syntax. But overall I reckon it is not as funky as Ruby. Groovy wouldn't be the JVM language worth learning based on attendees' votes at this year's JavaOne; instead, Scala is the one to go for. Besides that, the original creator of Groovy himself does not have faith in the language he created in the first place. Community and Job openings As for the community, the Grails community is not as big as the Rails one, though since the acquisition by Spring more and more people are using it in serious applications. Rails has more job openings in the market compared to Grails (that is, if you want to invest in looking for a new job). The framework (Grails and Rails) But, as a framework, if you really care about maintainability and need access to Java frameworks and legacy Java systems, Grails is the way to go, as it provides cleaner access to Java. Grails itself is built upon several popular Java frameworks (Spring & Hibernate). Rails itself IMHO is funky like Ruby itself, but its funkiness costs you maintainability. Matz himself prefers Merb over Rails 2 because Rails creates a DSL on top of Ruby, which is really against the Ruby philosophy. And because Rails itself is opinionated, if you don't share the creator's opinions it might not fit your needs. Conclusion So in your case, learn Grails, as that is the company's consensus (you need to respect the consensus) and it is what will keep your job secure. But invest some time learning Rails and Ruby too if you want to keep open the chance of getting a new job in the future. A: You should just skip RoR and focus on learning Grails, which you'll be needing for work. A: @Levi Figueira For one thing, Grails is far more flexible than Rails. Rails is difficult to use with a legacy DB because ActiveRecord has too many design constraints that many legacy DBs didn't follow. Grails, on the other hand, can use standard Hibernate mappings, which can accommodate a much broader range of DB designs. A: The Rails community has been very vocal in evangelising RoR, with the result that high expectations have been set and not always met (programmer productivity is good, but ensuring good performance once deployed isn't so easy). Grails has been designed as the scripted successor to Java, whereas the Ruby-Java integration used in JRuby on Rails, for example, has had to be retrofitted. I would suggest that you stick with Grails; it may not have the same glitz as RoR, but it's a pragmatic choice; you get improved productivity and the re-use of existing Java libraries. A: Jump straight into Grails. I'm sure Ruby/Rails is good, but so is Groovy/Grails. I recommend this book: http://beginninggroovyandgrails.com Remember the errata is online; there are a couple of mistakes in the book: http://beginninggroovyandgrails.com/site/content/errata Also, check out the 3 minute and 30 second demo of creating your first Grails app. http://grails.org/Grails+Screencasts This tutorial will show you the basics.
http://grails.org/Quick+Start A: Yes Grails is the way to go. RoR is good but it ties you in to the Ruby ecosystem. Part of the effort of learning a new framework or language is learning the class libraries as well as the language syntax. If your co-workers are all Java types you will be much better placed to receive help and support as they will all be speaking the same language as you. The other advantage to learning a bit of Groovy and Java is that web frameworks like GWT will open up to you. Grails has a GWT plugin and as a front end developer you will appreciate the ease of use and cross browser compatibility. Also there is at least one hosting company offering free Grails application hosting (http://www.mor.ph/) which means that you can prototype sites at small data volumes before having to pay. A: I favor Grails over Rails, but learning Rails will give you a more balanced perspective and actually open your eyes to overlooked things that are possible in Grails. A: At a first glance you would think they are completely differente stories, since they are based on extremely different languages (Ruby and Groovy). Then, after reading a couple of tutorials, you'll realize they share the same principles, scaffolding, duck typing, .. and finally the same goal: making agile programming feasible. If you already feel comfortable with terms like IoC and MVC, you'll find any of these options easy and exciting to learn. A: I would say no, I'm learning Grails as well, and I've considered this as well, but just learning Grails is pretty big, plus learning Groovy (which granted is easy, but still gotta learn it right?) and all that... so learning Rails would have been just too much. A: Yes if we compare grails and rails I would choose grails (I developed some intranet applications in grails). But Django is superior to both - as python is well hmm a perfect choice. A: You might also want to take a look at Clojure, a JVM language that's just starting to get popular. It may be a good choice for a Java-based company since it's compatible with your old codebase, and has a lot of modern innovations going for it. There are some good web frameworks emerging, including Compojure.
{ "language": "en", "url": "https://stackoverflow.com/questions/5087", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "21" }
Q: How do you set up Python scripts to work in Apache 2.0? I tried to follow a couple of Googled-up tutorials on setting up mod_python, but failed every time. Do you have a good, step-by-step, rock-solid howto? My dev box is OS X; production is CentOS. A: Are you running Python on UNIX or Windows? An alternative to mod_python and FastCGI is mod_wsgi. You can find out more at modwsgi I have built and installed this on Solaris without problems. I had previously tried mod_python but ran into problems with shared libraries as part of the build. There are good install docs available. A: There are two main ways of running Python on Apache. The simplest would be to use CGI and write normal Python scripts, while the second is using a web framework like Django or Pylons. Using CGI is straightforward. Make sure your Apache config file has a cgi-bin set up. If not, follow their documentation (http://httpd.apache.org/docs/2.0/howto/cgi.html). At that point all you need to do is place your Python scripts in the cgi-bin directory and the standard output will become the HTTP response. Refer to Python's documentation for further info (https://docs.python.org/library/cgi.html). If you want to use a web framework you'll need to set up mod_python or FastCGI. These steps depend on which framework you want to use. Django provides clear instructions on how to set up mod_python and Django with Apache (http://www.djangoproject.com/documentation/modpython/) A: Yes, mod_python is pretty confusing to set up. Here's how I did it. In httpd.conf: LoadModule python_module modules/mod_python.so <Directory "/serverbase/htdocs/myapp"> AddHandler mod_python .py PythonHandler myapp PythonDebug On </Directory> and in your application directory: $ /serverbase/htdocs/myapp$ ls -l total 16 -r-xr-xr-x 1 root sys 6484 May 21 15:54 myapp.py Repeat the configuration for each Python program you wish to have running under mod_python. A: The problem for me wasn't the Apache setup, but understanding how mod_python actually uses the .py files. Module-level statements (including those in an if __name__=='__main__' section) are not executed -- I assumed that the stdout from running the script at the command line would be what the server would output, but that's not how it works. Instead, I wrote a module-level function called index(), and had it return the HTML of the page as a string. It's also possible to have other module-level functions (e.g., otherFunction()) that can be accessed as further segments in the URI (e.g., testScript/otherFunction for the file testScript.py.) Obviously, this makes more sense than my original stdout conception, and lets you actually use Python as a scripting language rather than a humongous markup language.
{ "language": "en", "url": "https://stackoverflow.com/questions/5102", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "23" }
Q: How to set up a CSS switcher I'm working on a website that will switch to a new style on a set date. The site's built-in semantic HTML and CSS, so the change should just require a CSS reference change. I'm working with a designer who will need to be able to see how it's looking, as well as a client who will need to be able to review content updates in the current look as well as design progress on the new look. I'm planning to use a magic querystring value and/or a javascript link in the footer which writes out a cookie to select the new CSS page. We're working in ASP.NET 3.5. Any recommendations? I should mention that we're using IE Conditional Comments for IE8, 7, and 6 support. I may create a function that does a replacement: <link href="Style/<% GetCssRoot() %>.css" rel="stylesheet" type="text/css" /> <!--[if lte IE 8]> <link type="text/css" href="Style/<% GetCssRoot() %>-ie8.css" rel="stylesheet" /> <![endif]--> <!--[if lte IE 7]> <link type="text/css" href="Style/<% GetCssRoot() %>-ie7.css" rel="stylesheet" /> <![endif]--> <!--[if lte IE 6]> <link type="text/css" href="Style/<% GetCssRoot() %>-ie6.css" rel="stylesheet" /> <![endif]--> A: You should look into ASP.NET themes, that's exactly what they're used for. They also allow you to skin controls, which means give them a set of default attributes. A: In Asp.net 3.5, you should be able to set up the Link tag in the header as a server tag. Then in the codebehind you can set the href property for the link element, based on a cookie value, querystring, date, etc. In your aspx file: <head> <link id="linkStyles" rel="stylesheet" type="text/css" runat="server" /> </head> And in the Code behind: protected void Page_Load(object sender, EventArgs e) { string stylesheetAddress = // logic to determine stylesheet linkStyles.Href = stylesheetAddress; } A: I would suggest storing the stylesheet selection in the session so you don't have to rely on the querystring key being present all the time. You can check the session in Page_Load and add the appropriate stylesheet reference. It sounds like this is a temporary/development situation, so go with whatever is easy and works. if (!String.IsNullOrEmpty(Request.QueryString["css"])) Session.Add("CSS",Request.QueryString["css"]); A: I would do the following: www.website.com/?stylesheet=new.css Then in your ASP.NET code: if (Request.Querystring["stylesheet"] != null) { Response.Cookies["stylesheet"].Value = Request.QueryString["stylesheet"]; Response.Redirect(<Current Page>); } Then where you define your stylesheets: if (Request.Cookies["stylesheet"] != null) { // New Stylesheet } else { // Default }
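Putting the pieces of the answers above together, a minimal Page_Load sketch that checks the querystring first, then the cookie, then falls back to a default, and points the runat="server" link at the right file; the cookie name "css" and the default stylesheet name are assumptions for illustration:

    protected void Page_Load(object sender, EventArgs e)
    {
        // Querystring wins, then the cookie, then the default stylesheet.
        string css = Request.QueryString["css"];
        if (String.IsNullOrEmpty(css) && Request.Cookies["css"] != null)
            css = Request.Cookies["css"].Value;
        if (String.IsNullOrEmpty(css))
            css = "current";                         // hypothetical default theme name

        Response.Cookies["css"].Value = css;         // remember the choice for later requests
        linkStyles.Href = "Style/" + css + ".css";   // the <link id="linkStyles" runat="server"> from above
    }

This keeps the magic querystring value working for the designer and the client while everyone else keeps seeing the current look.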
{ "language": "en", "url": "https://stackoverflow.com/questions/5118", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "37" }
Q: Best strategy to write hooks for subversion in Windows What is the best approach to write hooks for Subversion in Windows? As far as I know, only executable files can be used. So what is the best choice? * *Plain batch files (very limited but perhaps OK for very simple solutions) *Dedicated compiled executable applications (sledgehammer to crack a nutshell?) *Some other hybrid choice (like a batch file running a Powershell script) A: I’ve just spent several days procrastinating about exactly this question. There are third party products available and plenty of PERL and Python scripts but I wanted something simple and a language I was familiar with so ended up just writing hooks in a C# console app. It’s very straight forward: public void Main(string[] args) { string repositories = args[0]; string transaction = args[1]; var processStartInfo = new ProcessStartInfo { FileName = "svnlook.exe", UseShellExecute = false, CreateNoWindow = true, RedirectStandardOutput = true, RedirectStandardError = true, Arguments = String.Format("log -t \"{0}\" \"{1}\"", transaction, repositories) }; var p = Process.Start(processStartInfo); var s = p.StandardOutput.ReadToEnd(); p.WaitForExit(); if (s == string.Empty) { Console.Error.WriteLine("Message must be provided"); Environment.Exit(1); } Environment.Exit(0); } You can then invoke this on pre commit by adding a pre-commit.cmd file to the hooks folder of the repo with the following line: [path]\PreCommit.exe %1 %2 You may consider this overkill but ultimately it’s only a few minutes of coding. What’s more, you get the advantage of the .NET language suite which IMHO is far preferable to the alternatives. I’ll expand my hooks out significantly and write appropriate tests against them as well – bit hard to do this with a DOS batch file! BTW, the code has been adapted from this post. A: We've got complex requirements like: * *Only certain users can create folders in parts of the SVN tree, but everyone can edit files there *Certain file extensions cannot contain certain text in the file *Certain file extensions can only be stored in a subset of directories *As well as several simpler ones like, Must have a commit comment *Regression testable by running new hook against all previous SVN commits #5 is huge for us, there's no better way to know you're not gonna break commits moving forward than to be able to push all previous commits through your new hook. Making the hook understand that 1234 was a revision and 1234-1 was a transaction and making the appropriate argument changes when calling svnlook, etc. was the best decision we made during the process. For us the nut got big enough that a fully unit testable, regression testable, C# console exe made the most sense. We have config files that feed the directory restrictions, parse the existing httpd_authz file to get "privileged" users, etc. Had we not been running on Windows with a .NET development work force, I would have probably written it all in Python, but since others might need to support it in the future I went .NET over .BAT, .VBS, Powershell silliness. Personally I think Powershell is different enough from .NET to be mostly useless as a "scripting" language. It's good if the only cmd line support for a product comes via PS (Exchange, Windows 2k8), etc. but if all you want to do is parse some text or access regular .NET objects PS just adds a crazy syntax and stupid Security Iron Curtain to what could be a quick and easy little .NET app. 
A: Check CaptainHook, "a simple plugin framework for writing Subversion hooks using .NET". A: Depending on the complexity, each situation is different, If I am just simply moving files around, I'll write a quick batch file. If I want to do something more complex Ill normally just skip the scripting part and write a quick c# program that can handle it. The question then is do you put that c# program in svn and have it versioned :) edit: The benefits of a dedicated c# application is that I can reuse code fragments to create new hooks later, including a simple log output I created to handle hook logging. A: I've written hooks in Python on Windows since there are a lot of examples on the net (usually for Linux but the differences are small). We also use Trac integrated with SVN and there is a Trac API accessible via Python which lets us automatically create/modify Trac tickets from SVN hook scripts. A: If you have a php executable with a help of simple php class you may write hook script in php like it is shown here http://www.devhands.com/2010/01/subversion-hook-php-framework-in/
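On the revision-versus-transaction point made in the longer answer above (replaying a hook against old history means "1234" is a revision, while a live pre-commit run gets a transaction like "1234-1"), here is a minimal C# helper sketch; the heuristic and the helper name are assumptions, not code from any of the answers:

    // Builds the svnlook argument string so the same hook can run against a pending
    // transaction (normal pre-commit) or be replayed against an existing revision
    // (regression testing against past commits). Transactions are assumed to look like "1234-1".
    static string SvnlookLogArgs(string repository, string id)
    {
        bool isRevision = !id.Contains("-");
        string flag = isRevision ? "-r" : "-t";
        return String.Format("log {0} \"{1}\" \"{2}\"", flag, id, repository);
    }

The result can be dropped into the ProcessStartInfo.Arguments shown in the C# answer above in place of the hard-coded "-t" form.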
{ "language": "en", "url": "https://stackoverflow.com/questions/5134", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "20" }
Q: Creating a shared library in MATLAB A researcher has created a small simulation in MATLAB and we want to make it accessible to others. My plan is to take the simulation, clean up a few things and turn it into a set of functions. Then I plan to compile it into a C library and use SWIG to create a Python wrapper. At that point, I should be able to call the simulation from a small Django application. At least I hope so. Do I have the right plan? Are there are any serious pitfalls that I'm not aware of at the moment? A: One thing to remember is that the MATLAB compiler does not actually compile the MATLAB code into native machine instructions. It simply wraps it into a stand-alone executable or a library with its own runtime engine that runs it. You would be able to run your code without MATLAB installed, and you would be able to interface it with other languages, but it will still be interpreted MATLAB code, so there would be no speedup. Matlab Coder, on the other hand, is the thing that can generate C code from Matlab. There are some limitations, though. Not all Matlab functions are supported for code generation, and there are things you cannot do, like change the type of a variable on the fly. A: I remember that I was able to wrap a MATLAB simulation into a DLL file and then call it from a Delphi application. It worked really well. A: I'd also try ctypes first. * *Use the MATLAB compiler to compile the code into C. *Compile the C code into a DLL. *Use ctypes to load and call code from this DLL The hardest step is probably 1, but if you already know MATLAB and have used the MATLAB compiler, you should not have serious problems with it. A: Perhaps try ctypes instead of SWIG. If it has been included as a part of Python 2.5, then it must be good :-)
{ "language": "en", "url": "https://stackoverflow.com/questions/5136", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15" }
Q: Firebird's SQL's Substring function not working I created a view on a machine using the substring function from Firebird, and it worked. When I copied the database to a different machine, the view was broken. This is the way I used it: SELECT SUBSTRING(field FROM 5 FOR 15) FROM table; And this is the output on the machine that does not accept the function: token unknown: FROM Both computers have this configuration: * *IB Expert version 2.5.0.42 to run the queries and deal with the database. *Firebird version 1.5 as server to database. *BDE Administration version 5.01 installed, with Interbase 4.0 drivers. Any ideas about why it's behaving differently on these machines? A: * *Make sure Firebird engine is 1.5 and there's no InterBase server running on this same box on the port you expected Firebird 1.5. *Make sure you don't have any UDF called 'substring' registered inside this DB so that Firebird is expecting different parameters. A: Different engine versions? Have you tried naming that expression in the result? SELECT SUBSTRING(field FROM 5 FOR 15) AS x FROM table;
{ "language": "en", "url": "https://stackoverflow.com/questions/5142", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: SQL Server Management Studio alternatives to browse/edit tables and run queries I was wondering if there are any alternatives to Microsoft's SQL Server Management Studio? Not that there's anything wrong with SSMS, but sometimes it just seems like too big an application when all I want to do is browse/edit tables and run queries. A: If you are already spending time in Visual Studio, then you can always use the Server Explorer to connect to any .Net compliant database server. Provided you're using Professional or greater, you can create and edit tables and databases, run queries, etc. A: I've started using LinqPad. In addition to being more lightweight than SSMS, you can also practice writing LINQ queries - way more fun than boring old TSQL! A: There is an express version of SSMS that has considerably fewer features but still has the basics. A: vim + dbext :) A: TOAD for MS SQL looks pretty good. I've never used it personally, but I have used Quest's other products and they're solid. A: Oracle has a free program called SQL Developer which will work with Microsoft SQL Server as well as Oracle & MySQL. When accessing SQL Server, however, Oracle SQL Developer is only intended to enable an easy migration to Oracle, so your SQL Server database is essentially read-only. A: Seems that no one mentioned Query Express (http://www.albahari.com/queryexpress.aspx) and a fork, Query ExPlus (also linked at the bottom of http://www.albahari.com/queryexpress.aspx). BTW, the first URL is the home page of Joseph Albahari, who is the author of LINQPad (check out this killer tool). A: Database .NET A: I have been using Atlantis SQL Everywhere, a free piece of software, for almost 6 months and it has been working really well. Works with SQL 2005 and SQL 2008 versions. I am really impressed with its features, and the keyboard shortcuts are similar to VS, so it makes the transition to a new editor really smooth. Some of the features that are worth mentioning: * *Intellisense that actually works when using multiple tables and joins with aliases *Suggestion of joins when using multiple tables (reduces typing time, really neat) *Rich formatting of SQL code, AutoIndent using Ctrl K, Ctrl D. *Better representation of SQL plans *Highlights variable declarations while they are used. *Table definition on mouse hover. All these features have saved me a lot of time. A: powershell + sqlcmd :) A: You can still install and use Query Analyzer from previous SQL Server versions. A: How about Embarcadero Rapid SQL? Really good but kind of expensive.
{ "language": "en", "url": "https://stackoverflow.com/questions/5170", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "131" }
Q: How Do I Post and then redirect to an external URL from ASP.Net? ASP.NET server-side controls postback to their own page. This makes cases where you want to redirect a user to an external page, but need to post to that page for some reason (for authentication, for instance) a pain. An HttpWebRequest works great if you don't want to redirect, and JavaScript is fine in some cases, but can get tricky if you really do need the server-side code to get the data together for the post. So how do you both post to an external URL and redirect the user to the result from your ASP.NET codebehind code? A: I started with this example from CodeProject Then instead of adding to the page, I borrowed from saalon (above) and did a Response.Write(). A: I would do the form post in your code behind using HttpWebRequest class. Here is a good helper class to get your started: <Link> From there, you can just do a Response.Redirect, or perhaps you need to vary your action based on the outcome of the post (if there was an error, display it to the user or whatever). I think you already had the answer in your question to be honest - sounds like you think it is a post OR redirect when in reality you can do them both from your code behind. A: If you're using ASP.NET 2.0, you can do this with cross-page posting. Edit: I missed the fact that you're asking about an external page. For that I think you'd need to have your ASP.NET page gen up an HTML form whose action is set to the remote URL and method is set to POST. (Using cross-page posting, this could even be a different page with no UI, only hidden form elements.) Then add a bit of javascript to submit the form as soon as the postback result was received on the client. A: I have done this by rendering a form that auto-posts (using JavaScript) to the desired remote URL - gather whatever information you need for the post in the web form's postback and then build the HTML for the remote-posting form and render it back to the client. I built a utility class for this that contains the remote URL and a collection of name/value pairs for the form. Cross-page posting will work if you own both of the pages involved, but not if you need to post to another site (PayPal, for example). A: I needed to open in the same window, dealing with possible frame issues from the original page, then redirecting to an external site in code behind: Private Sub ExternalRedirector(ByVal externalUrl As String) Dim clientRedirectName As String = "ClientExternalRedirect" Dim externalRedirectJS As New StringBuilder() If Not String.IsNullOrEmpty(externalUrl) Then If Not Page.ClientScript.IsStartupScriptRegistered(clientRedirectName) Then externalRedirectJS.Append("function CheckWindow() {") externalRedirectJS.Append(" if (window.top != window) {") externalRedirectJS.Append(" window.top.location = '") externalRedirectJS.Append(externalUrl) externalRedirectJS.Append("';") externalRedirectJS.Append(" return false;") externalRedirectJS.Append(" }") externalRedirectJS.Append(" else {") externalRedirectJS.Append(" window.location = '") externalRedirectJS.Append(externalUrl) externalRedirectJS.Append("';") externalRedirectJS.Append(" }") externalRedirectJS.Append("}") externalRedirectJS.Append("CheckWindow();") Page.ClientScript.RegisterStartupScript(Page.GetType(), clientRedirectName, externalRedirectJS.ToString(), True) End If End If End Sub A: Here's how I solved this problem today. I started from this article on C# Corner, but found the example - while technically sound - a little incomplete. 
Everything he said was right, but I needed to hit a few external sites to piece this together to work exactly as I wanted. It didn't help that the user was not technically submitting a form at all; they were clicking a link to go to our support center, but to log them in an http post had to be made to the support center's site. This solution involves using HttpContext.Current.Response.Write() to write the data for the form, then using a bit of Javascript on the <body onload=""> method to submit the form to the proper URL. When the user clicks on the Support Center link, the following method is called to write the response and redirect the user: public static void PassthroughAuthentication() { System.Web.HttpContext.Current.Response.Write("<body onload=document.forms[0].submit();window.location=\"Home.aspx\";>"); System.Web.HttpContext.Current.Response.Write("<form name=\"Form\" target=_blank method=post action=\"https://external-url.com/security.asp\">"); System.Web.HttpContext.Current.Response.Write(string.Format("<input type=hidden name=\"cFName\" value=\"{0}\">", "Username")); System.Web.HttpContext.Current.Response.Write("</form>"); System.Web.HttpContext.Current.Response.Write("</body>"); } The key to this method is in that onload bit of Javascript, which , when the body of the page loads, submits the form and then redirects the user back to my own Home page. The reason for that bit of hoodoo is that I'm launching the external site in a new window, but don't want the user to resubmit the hidden form if they refresh the page. Plus that hidden form pushed the page down a few pixels which got on my nerves. I'd be very interested in any cleaner ideas anyone has on this one. Eric Sipple
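The solution above hard-codes one form; a generalised sketch of the "utility class with the remote URL and a collection of name/value pairs" idea mentioned in one of the answers might look like this (the class name and the use of HtmlAttributeEncode are my own choices, not part of the original code):

    // Writes a self-submitting form that posts the supplied fields to an external URL,
    // then lets the browser follow the external site's response.
    public static class RemotePost
    {
        public static void Send(System.Web.HttpResponse response, string url,
                                System.Collections.Generic.IDictionary<string, string> fields)
        {
            response.Write("<html><body onload=\"document.forms[0].submit();\">");
            response.Write("<form method=\"post\" action=\"" +
                           System.Web.HttpUtility.HtmlAttributeEncode(url) + "\">");
            foreach (var field in fields)
            {
                response.Write("<input type=\"hidden\" name=\"" +
                               System.Web.HttpUtility.HtmlAttributeEncode(field.Key) + "\" value=\"" +
                               System.Web.HttpUtility.HtmlAttributeEncode(field.Value) + "\" />");
            }
            response.Write("</form></body></html>");
            response.End();
        }
    }

Calling RemotePost.Send(Response, "https://external-url.com/security.asp", fields) from a button handler reproduces the behaviour described above, with the current window simply following the post instead of juggling a new window and a refresh guard.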
{ "language": "en", "url": "https://stackoverflow.com/questions/5179", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "38" }
Q: How do you pull the URL for an ASP.NET web reference from a configuration file in Visual Studio 2008? I have a web reference for our report server embedded in our application. The server that the reports live on could change though, and I'd like to be able to change it "on the fly" if necessary. I know I've done this before, but can't seem to remember how. Thanks for your help. I've manually driven around this for the time being. It's not a big deal to set the URL in the code, but I'd like to figure out what the "proper" way of doing this in VS 2008 is. Could anyone provide any further insights? Thanks! In VS2008 when I change the URL Behavior property to Dynamic I get the following code auto-generated in the Reference class. Can I override this setting (MySettings) in the web.config? I guess I don't know how the settings stuff works. Public Sub New() MyBase.New Me.Url = Global.My.MySettings.Default.Namespace_Reference_ServiceName If (Me.IsLocalFileSystemWebService(Me.Url) = true) Then Me.UseDefaultCredentials = true Me.useDefaultCredentialsSetExplicitly = false Else Me.useDefaultCredentialsSetExplicitly = true End If End Sub EDIT So this stuff has changed a bit since VS03 (which was probably the last VS version I used to do this). According to: http://msdn.microsoft.com/en-us/library/a65txexh.aspx it looks like I have a settings object on which I can set the property programatically, but that I would need to provide the logic to retrieve that URL from the web.config. Is this the new standard way of doing this in VS2008, or am I missing something? EDIT #2 Anyone have any ideas here? I drove around it in my application and just put the URL in my web.config myself and read it out. But I'm not happy with that because it still feels like I'm missing something. A: In the properties window change the "behavior" to Dynamic. See: http://www.codeproject.com/KB/XML/wsdldynamicurl.aspx A: If you mean a VS2005 "Web Reference", then the generated proxy classes have a URL property that is the SOAP endpoint url of that service. You can change this property and have your subsequent http communications be made to that new endpoint. Edit: Ah, thanks bcaff86. I didn't know you could do that simply by changing a property.
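A: As a rough sketch of the manual approach described in the edits above — keeping the endpoint in web.config yourself and assigning it to the generated proxy's Url property at runtime — something along these lines works. The appSettings key and the proxy class name here are made up for illustration; substitute your own generated Reference class:

// web.config:
//   <appSettings>
//     <add key="ReportServerUrl" value="http://reportserver/ReportService.asmx" />
//   </appSettings>

using System.Configuration;

public partial class ReportPage : System.Web.UI.Page
{
    private void CallReportService()
    {
        var proxy = new Namespace_Reference.ServiceName();
        proxy.Url = ConfigurationManager.AppSettings["ReportServerUrl"];
        // ... call the service methods as usual ...
    }
}

With the URL Behavior set to Dynamic, the equivalent built-in route is usually to override the generated setting in the applicationSettings section of web.config rather than appSettings, since that is where the auto-generated MySettings lookup reads from.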
{ "language": "en", "url": "https://stackoverflow.com/questions/5188", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: When to use an extension method with lambda over LINQtoObjects to filter a collection? I am prototyping some C# 3 collection filters and came across this. I have a collection of products: public class MyProduct { public string Name { get; set; } public Double Price { get; set; } public string Description { get; set; } } var MyProducts = new List<MyProduct> { new MyProduct { Name = "Surfboard", Price = 144.99, Description = "Most important thing you will ever own." }, new MyProduct { Name = "Leash", Price = 29.28, Description = "Keep important things close to you." } , new MyProduct { Name = "Sun Screen", Price = 15.88, Description = "1000 SPF! Who Could ask for more?" } }; Now if I use LINQ to filter it works as expected: var d = (from mp in MyProducts where mp.Price < 50d select mp); And if I use the Where extension method combined with a Lambda the filter works as well: var f = MyProducts.Where(mp => mp.Price < 50d).ToList(); Question: What is the difference, and why use one over the other? A: LINQ turns into method calls like the code you have. In other words, there should be no difference. However, in your two pieces of code you are not calling .ToList in the first, so the first piece of code will produce an enumerable data source, but if you call .ToList on it, the two should be the same. A: As mentioned d will be IEnumerable<MyProduct> while f is List<MyProduct> The conversion is done by the C# compiler var d = from mp in MyProducts where mp.Price < 50d select mp; Is converted to (before compilation to IL and with generics expanded): var d = MyProducts. Where<MyProduct>( mp => mp.Price < 50d ). Select<MyProduct>( mp => mp ); //note that this last select is optimised out if it makes no change Note that in this simple case it makes little difference. Where Linq becomes really valuable is in much more complicated loops. For instance this statement could include group-bys, orders and a few let statements and still be readable in Linq format when the equivalent .Method().Method.Method() would get complicated. A: Other than the ToList difference, #2 is a lot more readable and natural IMO A: The syntax you are using for d will get transformed by the compiler into the same IL as the extension methods. The "SQL-like" syntax is supposed to be a more natural way to represent a LINQ expression (although I personally prefer the extension methods). As has already been pointed out, the first example will return an IEnumerable result while the second example will return a List result due to the call to ToList(). If you remove the ToList() call in the second example, they will both return the same result as Where returns an IEnumerable result.
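A: One practical consequence of that IEnumerable-versus-List difference, sketched below: d is a deferred query that re-runs each time it is enumerated, while f was materialized by ToList() at the point of the call, so later changes to the source list show up in d but not in f. This is just the question's own sample data trimmed down:

using System;
using System.Collections.Generic;
using System.Linq;

class Program
{
    class MyProduct
    {
        public string Name { get; set; }
        public double Price { get; set; }
    }

    static void Main()
    {
        var MyProducts = new List<MyProduct>
        {
            new MyProduct { Name = "Surfboard",  Price = 144.99 },
            new MyProduct { Name = "Leash",      Price = 29.28 },
            new MyProduct { Name = "Sun Screen", Price = 15.88 }
        };

        var d = from mp in MyProducts where mp.Price < 50d select mp;   // deferred
        var f = MyProducts.Where(mp => mp.Price < 50d).ToList();        // evaluated now

        MyProducts.Add(new MyProduct { Name = "Wax", Price = 9.99 });

        Console.WriteLine(d.Count()); // 3 - the query re-runs over the updated list
        Console.WriteLine(f.Count);   // 2 - the snapshot taken earlier is unchanged
    }
}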
{ "language": "en", "url": "https://stackoverflow.com/questions/5194", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16" }
Q: Accessing post variables using Java Servlets What is the Java equivalent of PHP's $_POST? After searching the web for an hour, I'm still nowhere closer. A: Here's a simple example. I didn't get fancy with the html or the servlet, but you should get the idea. I hope this helps you out. <html> <body> <form method="post" action="/myServlet"> <input type="text" name="username" /> <input type="password" name="password" /> <input type="submit" /> </form> </body> </html> Now for the Servlet import java.io.*; import javax.servlet.*; import javax.servlet.http.*; public class MyServlet extends HttpServlet { public void doPost(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException { String userName = request.getParameter("username"); String password = request.getParameter("password"); .... .... } } A: Your HttpServletRequest object has a getParameter(String paramName) method that can be used to get parameter values. http://java.sun.com/javaee/5/docs/api/javax/servlet/ServletRequest.html#getParameter(java.lang.String) A: The previous answers are correct but remember to use the name attribute in the input fields (html form) or you won't get anything. Example: <input type="text" id="username" /> <!-- won't work --> <input type="text" name="username" /> <!-- will work --> <input type="text" name="username" id="username" /> <!-- will work too --> All this code is valid HTML, but when using getParameter(java.lang.String) you will need the name attribute to be set on every field you want to receive. A: POST variables should be accessible via the request object: HttpRequest.getParameterMap(). The exception is if the form is sending multipart MIME data (the FORM has enctype="multipart/form-data"). In that case, you need to parse the byte stream with a MIME parser. You can write your own or use an existing one like the Apache Commons File Upload API. A: To get all post parameters at once there is a Map which contains each request param name as the key and the param value as the value: Map params = servReq.getParameterMap(); And to get a parameter with a known name, simply: String userId = servReq.getParameter("user_id");
{ "language": "en", "url": "https://stackoverflow.com/questions/5222", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "64" }
Q: Length of a JavaScript object I have a JavaScript object. Is there a built-in or accepted best practice way to get the length of this object? const myObject = new Object(); myObject["firstname"] = "Gareth"; myObject["lastname"] = "Simpson"; myObject["age"] = 21; A: Here's the most cross-browser solution. This is better than the accepted answer because it uses native Object.keys if exists. Thus, it is the fastest for all modern browsers. if (!Object.keys) { Object.keys = function (obj) { var arr = [], key; for (key in obj) { if (obj.hasOwnProperty(key)) { arr.push(key); } } return arr; }; } Object.keys(obj).length; A: Here's a different version of James Cogan's answer. Instead of passing an argument, just prototype out the Object class and make the code cleaner. Object.prototype.size = function () { var size = 0, key; for (key in this) { if (this.hasOwnProperty(key)) size++; } return size; }; var x = { one: 1, two: 2, three: 3 }; x.size() === 3; jsfiddle example: http://jsfiddle.net/qar4j/1/ A: You can always do Object.getOwnPropertyNames(myObject).length to get the same result as [].length would give for normal array. A: You can simply use Object.keys(obj).length on any object to get its length. Object.keys returns an array containing all of the object keys (properties) which can come in handy for finding the length of that object using the length of the corresponding array. You can even write a function for this. Let's get creative and write a method for it as well (along with a more convienient getter property): function objLength(obj) { return Object.keys(obj).length; } console.log(objLength({a:1, b:"summit", c:"nonsense"})); // Works perfectly fine var obj = new Object(); obj['fish'] = 30; obj['nullified content'] = null; console.log(objLength(obj)); // It also works your way, which is creating it using the Object constructor Object.prototype.getLength = function() { return Object.keys(this).length; } console.log(obj.getLength()); // You can also write it as a method, which is more efficient as done so above Object.defineProperty(Object.prototype, "length", {get:function(){ return Object.keys(this).length; }}); console.log(obj.length); // probably the most effictive approach is done so and demonstrated above which sets a getter property called "length" for objects which returns the equivalent value of getLength(this) or this.getLength() A: Simply use this to get the length: Object.keys(myObject).length A: I'm not a JavaScript expert, but it looks like you would have to loop through the elements and count them since Object doesn't have a length method: var element_count = 0; for (e in myArray) { if (myArray.hasOwnProperty(e)) element_count++; } @palmsey: In fairness to the OP, the JavaScript documentation actually explicitly refer to using variables of type Object in this manner as "associative arrays". A: A nice way to achieve this (Internet Explorer 9+ only) is to define a magic getter on the length property: Object.defineProperty(Object.prototype, "length", { get: function () { return Object.keys(this).length; } }); And you can just use it like so: var myObj = { 'key': 'value' }; myObj.length; It would give 1. A: This method gets all your object's property names in an array, so you can get the length of that array which is equal to your object's keys' length. Object.getOwnPropertyNames({"hi":"Hi","msg":"Message"}).length; // => 2 A: Updated answer Here's an update as of 2016 and widespread deployment of ES5 and beyond. 
For IE9+ and all other modern ES5+ capable browsers, you can use Object.keys() so the above code just becomes: var size = Object.keys(myObj).length; This doesn't have to modify any existing prototype since Object.keys() is now built-in. Edit: Objects can have symbolic properties that can not be returned via Object.key method. So the answer would be incomplete without mentioning them. Symbol type was added to the language to create unique identifiers for object properties. The main benefit of the Symbol type is the prevention of overwrites. Object.keys or Object.getOwnPropertyNames does not work for symbolic properties. To return them you need to use Object.getOwnPropertySymbols. var person = { [Symbol('name')]: 'John Doe', [Symbol('age')]: 33, "occupation": "Programmer" }; const propOwn = Object.getOwnPropertyNames(person); console.log(propOwn.length); // 1 let propSymb = Object.getOwnPropertySymbols(person); console.log(propSymb.length); // 2 Older answer The most robust answer (i.e. that captures the intent of what you're trying to do while causing the fewest bugs) would be: Object.size = function(obj) { var size = 0, key; for (key in obj) { if (obj.hasOwnProperty(key)) size++; } return size; }; // Get the size of an object const myObj = {} var size = Object.size(myObj); There's a sort of convention in JavaScript that you don't add things to Object.prototype, because it can break enumerations in various libraries. Adding methods to Object is usually safe, though. A: Updated: If you're using Underscore.js (recommended, it's lightweight!), then you can just do _.size({one : 1, two : 2, three : 3}); => 3 If not, and you don't want to mess around with Object properties for whatever reason, and are already using jQuery, a plugin is equally accessible: $.assocArraySize = function(obj) { // http://stackoverflow.com/a/6700/11236 var size = 0, key; for (key in obj) { if (obj.hasOwnProperty(key)) size++; } return size; }; A: To not mess with the prototype or other code, you could build and extend your own object: function Hash(){ var length=0; this.add = function(key, val){ if(this[key] == undefined) { length++; } this[key]=val; }; this.length = function(){ return length; }; } myArray = new Hash(); myArray.add("lastname", "Simpson"); myArray.add("age", 21); alert(myArray.length()); // will alert 2 If you always use the add method, the length property will be correct. If you're worried that you or others forget about using it, you could add the property counter which the others have posted to the length method, too. Of course, you could always overwrite the methods. But even if you do, your code would probably fail noticeably, making it easy to debug. ;) A: Below is a version of James Coglan's answer in CoffeeScript for those who have abandoned straight JavaScript :) Object.size = (obj) -> size = 0 size++ for own key of obj size A: Property Object.defineProperty(Object.prototype, 'length', { get: function () { var size = 0, key; for (key in this) if (this.hasOwnProperty(key)) size++; return size; } }); Use var o = {a: 1, b: 2, c: 3}; alert(o.length); // <-- 3 o['foo'] = 123; alert(o.length); // <-- 4 A: With the ECMAScript 6 in-built Reflect object, you can easily count the properties of an object: Reflect.ownKeys(targetObject).length It will give you the length of the target object's own properties (important). Reflect.ownKeys(target) Returns an array of the target object's own (not inherited) property keys. Now, what does that mean? To explain this, let's see this example. 
function Person(name, age){ this.name = name; this.age = age; } Person.prototype.getIntro= function() { return `${this.name} is ${this.age} years old!!` } let student = new Person('Anuj', 11); console.log(Reflect.ownKeys(student).length) // 2 console.log(student.getIntro()) // Anuj is 11 years old!! You can see here, it returned only its own properties while the object is still inheriting the property from its parent. For more information, refer this: Reflect API A: Try: Object.values(theObject).length const myObject = new Object(); myObject["firstname"] = "Gareth"; myObject["lastname"] = "Simpson"; myObject["age"] = 21; console.log(Object.values(myObject).length); A: We can find the length of Object by using: const myObject = {}; console.log(Object.values(myObject).length); A: Here's how and don't forget to check that the property is not on the prototype chain: var element_count = 0; for(var e in myArray) if(myArray.hasOwnProperty(e)) element_count++; A: Here is a completely different solution that will only work in more modern browsers (Internet Explorer 9+, Chrome, Firefox 4+, Opera 11.60+, and Safari 5.1+) See this jsFiddle. Setup your associative array class /** * @constructor */ AssociativeArray = function () {}; // Make the length property work Object.defineProperty(AssociativeArray.prototype, "length", { get: function () { var count = 0; for (var key in this) { if (this.hasOwnProperty(key)) count++; } return count; } }); Now you can use this code as follows... var a1 = new AssociativeArray(); a1["prop1"] = "test"; a1["prop2"] = 1234; a1["prop3"] = "something else"; alert("Length of array is " + a1.length); A: If you know you don't have to worry about hasOwnProperty checks, you can use the Object.keys() method in this way: Object.keys(myArray).length A: If you need an associative data structure that exposes its size, better use a map instead of an object. const myMap = new Map(); myMap.set("firstname", "Gareth"); myMap.set("lastname", "Simpson"); myMap.set("age", 21); console.log(myMap.size); // 3 A: Use Object.keys(myObject).length to get the length of object/array var myObject = new Object(); myObject["firstname"] = "Gareth"; myObject["lastname"] = "Simpson"; myObject["age"] = 21; console.log(Object.keys(myObject).length); //3 A: Like most JavaScript problems, there are many solutions. You could extend the Object that for better or worse works like many other languages' Dictionary (+ first class citizens). Nothing wrong with that, but another option is to construct a new Object that meets your specific needs. function uberject(obj){ this._count = 0; for(var param in obj){ this[param] = obj[param]; this._count++; } } uberject.prototype.getLength = function(){ return this._count; }; var foo = new uberject({bar:123,baz:456}); alert(foo.getLength()); A: Simple one liner: console.log(Object.values({id:"1",age:23,role_number:90}).length); A: Use: var myArray = new Object(); myArray["firstname"] = "Gareth"; myArray["lastname"] = "Simpson"; myArray["age"] = 21; obj = Object.keys(myArray).length; console.log(obj) A: <script> myObj = {"key1" : "Hello", "key2" : "Goodbye"}; var size = Object.keys(myObj).length; console.log(size); </script> <p id="myObj">The number of <b>keys</b> in <b>myObj</b> are: <script>document.write(size)</script></p> This works for me: var size = Object.keys(myObj).length; A: For some cases it is better to just store the size in a separate variable. Especially, if you're adding to the array by one element in one place and can easily increment the size. 
It would obviously work much faster if you need to check the size often. A: The simplest way is like this: Object.keys(myobject).length Where myobject is the object of what you want the length of. A: @palmsey: In fairness to the OP, the JavaScript documentation actually explicitly refer to using variables of type Object in this manner as "associative arrays". And in fairness to @palmsey he was quite correct. They aren't associative arrays; they're definitely objects :) - doing the job of an associative array. But as regards to the wider point, you definitely seem to have the right of it according to this rather fine article I found: JavaScript “Associative Arrays” Considered Harmful But according to all this, the accepted answer itself is bad practice? Specify a prototype size() function for Object If anything else has been added to Object .prototype, then the suggested code will fail: <script type="text/javascript"> Object.prototype.size = function () { var len = this.length ? --this.length : -1; for (var k in this) len++; return len; } Object.prototype.size2 = function () { var len = this.length ? --this.length : -1; for (var k in this) len++; return len; } var myArray = new Object(); myArray["firstname"] = "Gareth"; myArray["lastname"] = "Simpson"; myArray["age"] = 21; alert("age is " + myArray["age"]); alert("length is " + myArray.size()); </script> I don't think that answer should be the accepted one as it can't be trusted to work if you have any other code running in the same execution context. To do it in a robust fashion, surely you would need to define the size method within myArray and check for the type of the members as you iterate through them. A: If we have the hash hash = {"a" : "b", "c": "d"}; we can get the length using the length of the keys which is the length of the hash: keys(hash).length A: Using the Object.entries method to get length is one way of achieving it const objectLength = obj => Object.entries(obj).length; const person = { id: 1, name: 'John', age: 30 } const car = { type: 2, color: 'red', } console.log(objectLength(person)); // 3 console.log(objectLength(car)); // 2 A: var myObject = new Object(); myObject["firstname"] = "Gareth"; myObject["lastname"] = "Simpson"; myObject["age"] = 21; * *Object.values(myObject).length *Object.entries(myObject).length *Object.keys(myObject).length A: What about something like this -- function keyValuePairs() { this.length = 0; function add(key, value) { this[key] = value; this.length++; } function remove(key) { if (this.hasOwnProperty(key)) { delete this[key]; this.length--; }} } A: If you are using AngularJS 1.x you can do things the AngularJS way by creating a filter and using the code from any of the other examples such as the following: // Count the elements in an object app.filter('lengthOfObject', function() { return function( obj ) { var size = 0, key; for (key in obj) { if (obj.hasOwnProperty(key)) size++; } return size; } }) Usage In your controller: $scope.filterResult = $filter('lengthOfObject')($scope.object) Or in your view: <any ng-expression="object | lengthOfObject"></any> A: const myObject = new Object(); myObject["firstname"] = "Gareth"; myObject["lastname"] = "Simpson"; myObject["age"] = 21; console.log(Object.keys(myObject).length) // o/p 3 A: A variation on some of the above is: var objLength = function(obj){ var key,len=0; for(key in obj){ len += Number( obj.hasOwnProperty(key) ); } return len; }; It is a bit more elegant way to integrate hasOwnProp. 
A: If you don't care about supporting Internet Explorer 8 or lower, you can easily get the number of properties in an object by applying the following two steps: * *Run either Object.keys() to get an array that contains the names of only those properties that are enumerable or Object.getOwnPropertyNames() if you want to also include the names of properties that are not enumerable. *Get the .length property of that array. If you need to do this more than once, you could wrap this logic in a function: function size(obj, enumerablesOnly) { return enumerablesOnly === false ? Object.getOwnPropertyNames(obj).length : Object.keys(obj).length; } How to use this particular function: var myObj = Object.create({}, { getFoo: {}, setFoo: {} }); myObj.Foo = 12; var myArr = [1,2,5,4,8,15]; console.log(size(myObj)); // Output : 1 console.log(size(myObj, true)); // Output : 1 console.log(size(myObj, false)); // Output : 3 console.log(size(myArr)); // Output : 6 console.log(size(myArr, true)); // Output : 6 console.log(size(myArr, false)); // Output : 7 See also this Fiddle for a demo. A: Simple solution: var myObject = {}; // ... your object goes here. var length = 0; for (var property in myObject) { if (myObject.hasOwnProperty(property)){ length += 1; } }; console.log(length); // logs 0 in my example. A: Here you can give any kind of varible array,object,string function length2(obj){ if (typeof obj==='object' && obj!== null){return Object.keys(obj).length;} //if (Array.isArray){return obj.length;} return obj.length; } A: The solution work for many cases and cross browser: Code var getTotal = function(collection) { var length = collection['length']; var isArrayObject = typeof length == 'number' && length >= 0 && length <= Math.pow(2,53) - 1; // Number.MAX_SAFE_INTEGER if(isArrayObject) { return collection['length']; } i= 0; for(var key in collection) { if (collection.hasOwnProperty(key)) { i++; } } return i; }; Data Examples: // case 1 var a = new Object(); a["firstname"] = "Gareth"; a["lastname"] = "Simpson"; a["age"] = 21; //case 2 var b = [1,2,3]; // case 3 var c = {}; c[0] = 1; c.two = 2; Usage getLength(a); // 3 getLength(b); // 3 getLength(c); // 2 A: Object.keys does not return the right result in case of object inheritance. To properly count object properties, including inherited ones, use for-in. For example, by the following function (related question): var objLength = (o,i=0) => { for(p in o) i++; return i } var myObject = new Object(); myObject["firstname"] = "Gareth"; myObject["lastname"] = "Simpson"; myObject["age"] = 21; var child = Object.create(myObject); child["sex"] = "male"; var objLength = (o,i=0) => { for(p in o) i++; return i } console.log("Object.keys(myObject):", Object.keys(myObject).length, "(OK)"); console.log("Object.keys(child) :", Object.keys(child).length, "(wrong)"); console.log("objLength(child) :", objLength(child), "(OK)"); A: var myObject = new Object(); myObject["firstname"] = "Gareth"; myObject["lastname"] = "Simpson"; myObject["age"] = 21; var size = JSON.stringify(myObject).length; document.write(size); JSON.stringify(myObject) A: I had a similar need to calculate the bandwidth used by objects received over a websocket. Simply finding the length of the Stringified object was enough for me. websocket.on('message', data => { dataPerSecond += JSON.stringify(data).length; } A: vendor = {1: "", 2: ""} const keysArray = Object.keys(vendor) const objectLength = keysArray.length console.log(objectLength) Result 2
{ "language": "en", "url": "https://stackoverflow.com/questions/5223", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2950" }
Q: HTML Comments Markup I am currently in the process of creating my own blog and I have got to marking up the comments, but what is the best way to mark it up? The information I need to present is: * *Persons Name *Gravatar Icon *Comment Date *The Comment PS: I'm only interested in semantic HTML markup. A: I think that your version with the cite, blockquote, etc. would definitely work, but if semantics is your main concern then I personally wouldn't use cite and blockquote as they have specific things that they are supposed to represent. The blockquote tag is meant to represent a quotation taken from another source and the cite tag is meant to represent a source of information (like a magazine, newspaper, etc.). I think an argument can certainly made that you can use semantic HTML with class names, provided they are meaningful. This article on Plain Old Semantic HTML makes a reference to using class names - http://www.fooclass.com/plain_old_semantic_html A: Here's one way you could do it with the following CSS to float the picture to the left of the contents: .comment { width: 400px; } .comment_img { float: left; } .comment_text, .comment_meta { margin-left: 40px; } .comment_meta { clear: both; } <div class='comment' id='comment_(comment id #)'> <div class='comment_img'> <img src='https://placehold.it/100' alt='(Commenter Name)' /> </div> <div class='comment_text'> <p>Lorem ipsum dolor sit amet, consectetuer adipiscing elit. Sed mauris. Morbi quis tellus sit amet eros ullamcorper ultrices. Proin a tortor. Praesent et odio. Duis mi odio, consequat ut, euismod sed, commodo vitae, nulla. Suspendisse potenti. Cum sociis natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Etiam pede.</p> <p>Vestibulum ante ipsum primis in faucibus orci luctus et ultrices posuere cubilia Curae; Maecenas rhoncus accumsan velit. Donec varius magna a est. </p> </div> <p class='comment_meta'> By <a href='#'>Name</a> on <span class='comment_date'>2008-08-21 11:32 AM</span> </p> </div> A: I was perhaps thinking of something like this: <ol class="comments"> <li> <a href=""> <img src="" alt="" /> </a> <cite>Name<br />Date</cite> <blockquote>Comment</blockquote> </li> </ol> It's very semantic without using div's and only one class. The list shows the order the comments were made, a link to the persons website, and image for their gravatar, the cite tag to site who said the comment and blockquote to hold what they said. A: I don't know that there's markup that would necessarily represent the comment structure well without using divs or classes as well, but you could use definition lists. You can use multiple dt and dd tags in the context of a definition list - see 10.3 Definition lists: the DL, DT, and DD elements. <dl> <dt>By [Name] at 2008-01-01<dt> <dd><img src='...' alt=''/></dd> <dd><p>Lorem ipsum dolor sit amet, consectetuer adipiscing elit. Sed mauris. Morbi quis tellus sit amet eros ullamcorper ultrices. Proin a tortor. Praesent et odio. Duis mi odio, consequat ut, euismod sed, commodo vitae, nulla. Suspendisse potenti. Cum sociis natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus. Etiam pede.</p> <p>Vestibulum ante ipsum primis in faucibus orci luctus et ultrices posuere cubilia Curae; Maecenas rhoncus accumsan velit. Donec varius magna a est. </p> </dd> </dl> The concern I'd have with an approach like this is that it could be difficult to uniquely identify the elements with CSS for styling purposes. 
You could use JavaScript (jQuery would be great here) to find and apply styles. Without full CSS selector support across browsers (Internet Explorer), it would be tougher to style. A: I see your point. OK, after reading through that article, why don't you try something like this? <blockquote cite="http://yoursite/comments/feederscript.php?id=commentid" title="<?php echo Name . " - " . Date ?>" > <?php echo Comment ?> </blockquote> with some snazzy CSS to make it look nice. feederscript.php would be something that could read from the database and echo only the commentid called for.
{ "language": "en", "url": "https://stackoverflow.com/questions/5226", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16" }
Q: User Interfaces - Colors and Layout Although I'm specifically interested in web application information, I would also be somewhat curious about desktop application development as well. This question is driven by my work on my personal website as well as my job, where I have developed a few features, but left it to others to integrate into the look and feel of the site. Are there any guides or rules of thumb for things like color schemes, layouts, formatting, etc? I want to ensure readability and clarity for visitors, but not be bland and dull at the same time. As for my knowledge in this area - If you hand me a picture, I have enough knowledge to reproduce it on the screen, but if you ask me to design a new interface or redesign an existing one, I wouldn't know where to begin. A: Usually, each operating System has user Interface Guidelines. For Windows, have a look here. (Edit: The links in that post are broken. But a Search for "User Interface Guidelines" on MSDN has articles about everything) Apple has it's own as well. Also, you may want to keep accessibility in mind. A: One tip to check if your colors have good contrast is taking a snapshot of it and converting to grayscale. If you can't read something, colors were surely bad choosen. Plus, although it's not about user interfaces, Before & After Magazine can give you some pretty good hints about color, design and related topics. It even has got some free pdf's to download. A: The book Designing Interfaces, by Jenifer Tidwell has a entire chapter on the subject (Chapter 9, excerpts accesible online). The entire book is worth recommending. A: For web UI, I'm going to go out on a limb here and say that the most important color in web design is white, or "light". This is the color on top of which you place dense tracts of content. Dark text, light background, always, when it comes to your primary content areas. And the most important rule in layouting is whitespace. Let the content breathe. Following these two simple rules is worth more than most "user interface usability" guidelines. And by the way, the MS user interface guidelines are (by and large) horrible. Read Jakob Nielsen, look at Apple design aesthetics, but stay away from the MS "neutral gray/blue crunchbox" 12-step Wizard 10pt text philosophy of UI. (And I say that as a long-time MS GUI programmer) A: I'm horrible at finding colors that look good together, so I cheat and use pictures from nature that are mostly the color I want (say, green) and then I use this website to pull out the main color scheme. Generally nature does a pretty good job of setting its own nice color schemes. A: Use high contrast color combos; Black text on white background is the best example of a high contrast combo. A bad combo is green text on red background. It's horrible for color blind people (like myself). See what your site looks like to a color blind person: colorfilter.wickline.org A: As for desktop applications: Whatever you do, do not use hand-picked colors. Stick with the named system colors such as "Window Background", "Menu Text", etc. Otherwise, people relying on OS accessibility features will be locked with your color choices (unable to choose a high-contrast theme, for instance) and to people who like to customize their desktop themes will think your application is fugly. A: Here are some simple pointers for usability in your typography. These things mainly address readability and accessibility concerns. 
DOs: * *Use relative font sizing (em) *Identify language changes within a document using the LANG attribute *Black text on a white background *For headings, use H1, H2, etc. and nest them appropriately *Chunk up content and organize with headings that fit what your users are looking for *Write clear and simple copy *Align left, ragged right *Text-to-background color must be high contrast DONTs: * *Use "click here" or "more" as link text *Use underline for emphasis *More than 2 font-type families *Italics *Blocks of text using all caps *Use true red or true blue text on white background (chromatic aberration)
{ "language": "en", "url": "https://stackoverflow.com/questions/5242", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: Stand-alone charts in GWT I've been trying to get pretty charts to work in GWT on our internal network. Playing around with GWT-Ext's charts is nice, but it requires flash and is really messy to control (it seems buggy, in general). I'd like to hear about something that works with the least amount of dependencies and it also must work without a connection to the web (so, Google' charts API isn't a solution). Edit: Indeed, I would rather a library that is all client-side. A: I'm building a GWT chart library based on Flot: http://gflot.googlecode.com I hope you find it useful. Contact me if you have any questions. A: Googling for "GWT +sparklines" has gotten me to gchart, which seems like what I need. From what I understand - it's all client side and requires nothing more than their JAR file. A: Google's charts actually come in two flavours, and one of them does not require interaction with Google's servers - so should satisfy your needs. Google Image Charts is the API you are thinking of, which is an API on Google's servers that returns images. Google Interactive Charts is a client side javascript API that renders entirely within the browser: Google Interactive Charts Google provides a GWT wrapper for the interactive charts: GWT Visualization API It's not all rainbows and unicorns and you can find chart libs out there that make nicer charts, but it's pretty solid, works on all major browsers and we've been using it successfully for quite a while. A: http://code.google.com/p/ext-ux-ofcgxt/ is a nice option if you're using ext-gwt A: Do you want something that has a server side component or entirely client driven? The best ones I have seen are all flash, alas. I have done little tricks with JS and GWT before, but there is only sophisticated I will get before I go hunting for a library to do it for me. A: There is also "sparklines" - they are available in lots of flavours (very simple charts though). A: gchart looks seriously awesome. Go with it ! A: If you're looking for client-side check out flotr which is based on prototype javascript library or flot which is based on jQuery. Both work well, though flot seems like its got a bigger backing. A: If you are willing to go with flash, XML/SWF is a wonderful tool A: +1 flot, requires jQuery though, so might not play well with GWT, I haven't used that. A: Another flash option, with a pre-built GWT integration - Open Flash Chart / ofcgwt. A: I think that gwt-chart is a better framework for you. A: well.. i've used yahoo ui chart library (which GWT-Ext uses internally). Pretty neat solution, in the beta stage though. Let us know the conclusion you arrive at.. A: There is one open source api for charts in GWT hosted on http://code.google.com/p/gwt-rcharts/ . The API works on SVG/VML specification. You may find it quite easy to implement and use. You may find the demo at http://gwt-rcharts.appspot.com/
{ "language": "en", "url": "https://stackoverflow.com/questions/5251", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "22" }
Q: What is the best way to wrap time around the work day? I have a situation where I want to add hours to a date and have the new date wrap around the work-day. I cobbled up a function to determine this new date, but want to make sure that I'm not forgetting anything. The number of hours to be added is called "delay". It could easily be a parameter to the function instead. Please post any suggestions. [VB.NET Warning] Private Function GetDateRequired() As Date ''// A decimal representation of the current hour Dim hours As Decimal = Decimal.Parse(Date.Now.Hour) + (Decimal.Parse(Date.Now.Minute) / 60.0) Dim delay As Decimal = 3.0 ''// delay in hours Dim endOfDay As Decimal = 12.0 + 5.0 ''// end of day, in hours Dim startOfDay As Decimal = 8.0 ''// start of day, in hours Dim newHour As Integer Dim newMinute As Integer Dim dateRequired As Date = Now Dim delta As Decimal = hours + delay ''// Wrap around to the next day, if necessary If delta > endOfDay Then delta = delta - endOfDay dateRequired = dateRequired.AddDays(1) newHour = Integer.Parse(Decimal.Truncate(delta)) newMinute = Integer.Parse(Decimal.Truncate((delta - newHour) * 60)) newHour = startOfDay + newHour Else newHour = Integer.Parse(Decimal.Truncate(delta)) newMinute = Integer.Parse(Decimal.Truncate((delta - newHour) * 60)) End If dateRequired = New Date(dateRequired.Year, dateRequired.Month, dateRequired.Day, newHour, newMinute, 0) Return dateRequired End Function Note: This will probably not work if delay is more than 9 hours long. It should never change from 3, though. EDIT: The goal is to find the date and time that you get as a result of adding several hours to the current time. This is used to determine a default value for a due date of a submission. I want to add 3 hours to the current time to get the due date time. However, I don't want due dates that go beyond 5pm on the current day. So, I tried to have the hours split between (today, up to 5pm) and (tomorrow, from 8am on), such that adding 3 hours to 4pm would give you 10am the next day, because 1 hour is added to the end of today and 2 hours are added to the beginning of tomorrow.
Module Module1 Public Function IsInBusinessHours(ByVal d As Date) As Boolean Return Not (d.Hour < 8 OrElse d.Hour > 17 OrElse d.DayOfWeek = DayOfWeek.Saturday OrElse d.DayOfWeek = DayOfWeek.Sunday) End Function Public Function AddInBusinessHours(ByVal fromDate As Date, ByVal hours As Integer) As Date Dim work As Date = fromDate.AddHours(hours) While Not IsInBusinessHours(work) work = work.AddHours(1) End While Return work End Function Public Function LoopInBusinessHours(ByVal fromDate As Date, ByVal hours As Integer) As Date Dim work As Date = fromDate While hours > 0 While hours > 0 AndAlso IsInBusinessHours(work) work = work.AddHours(1) hours -= 1 End While While Not IsInBusinessHours(work) work = work.AddHours(1) End While End While Return work End Function Sub Main() Dim test As Date = New Date(2008, 8, 8, 15, 0, 0) Dim hours As Integer = 5 Console.WriteLine("Date: " + test.ToString() + ", " + hours.ToString()) Console.WriteLine("Just skipping: " + AddInBusinessHours(test, hours)) Console.WriteLine("Looping: " + LoopInBusinessHours(test, hours)) Console.ReadLine() End Sub End Module A: You should probably write some automated tests for each condition you can think of, and then just start brainstorming more, writing the tests as you think of them. This way, you can see for sure it will work, and will continue to work if you make further changes. Look up Test Driven Development if you like the results. A: I've worked with the following formula (pseudocode) with some success: now <- number of minutes since the work day started delay <- number of minutes in the delay day <- length of a work day in minutes x <- (now + delay) / day {integer division} y <- (now + delay) % day {modulo remainder} return startoftoday + x {in days} + y {in minutes}
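A hedged C# rendering of that pseudocode, assuming the question's 8am-5pm working day (540 minutes), ignoring weekends just as the pseudocode does, and assuming the starting time already falls inside working hours:

using System;

static class WorkDayClock
{
    const int DayStartHour = 8;        // work day starts at 8:00
    const int WorkDayMinutes = 9 * 60; // 8:00 to 17:00

    public static DateTime AddWorkingMinutes(DateTime from, int delayMinutes)
    {
        DateTime startOfToday = from.Date.AddHours(DayStartHour);
        int now = (int)(from - startOfToday).TotalMinutes; // minutes since the day started

        int total = now + delayMinutes;
        int days = total / WorkDayMinutes;     // whole working days to roll forward
        int minutes = total % WorkDayMinutes;  // remainder within a day

        return startOfToday.AddDays(days).AddMinutes(minutes);
    }
}

For the question's example, AddWorkingMinutes(today at 4:00 PM, 180) comes out to 10:00 AM the next day, which matches the split described in the edit (one hour before closing, two hours after opening).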
{ "language": "en", "url": "https://stackoverflow.com/questions/5260", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: How do you persist a tree structure to a database table with auto incrementing IDs using an ADO.NET DataSet and a DataAdapter I have a self-referential Role table that represents a tree structure ID [INT] AUTO INCREMENT Name [VARCHAR] ParentID [INT] I am using an ADO.NET DataTable and DataAdapter to load and save values to this table. This works if I only create children of existing rows. If I make a child row, then make a child of that child, then Update, the temporary ID value generated by the DataTable is going into the ParentID column. I have the following data relation set: dataset.Relations.Add(New DataRelation("RoleToRole",RoleTable.Columns("ID"), RoleTable.Columns("ParentID"))) And when I make new child rows in the DataTable I call the SetParentRow method newRow.SetParentRow(parentRow) Is there something special I have to do to get the ID generation to propagate recursively when I call Update on the DataAdapter? A: I don't know ADO.net in particular, but most ORMs won't automatically insert the ID of a new record in a relationship. You'll have to resort to the 2-step process: * *build and save parent *build and save child with relationship to parent The reason that this is difficult for ORMs is because you might have circular dependencies, and it wouldn't know which object it needed to create an ID for first. Some ORMs are smart enough to figure out those relationships where there are no such circular dependencies, but most aren't. A: Does it make any difference if you go newRow.SetParentRow(parentRow, RoleTable.Relations("RoleToRole")) A: I suggest you add a ForeignKeyConstraint, with UpdateRule set to Cascade.
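A: A sketch of that last suggestion, using the table from the question: add a self-referencing ForeignKeyConstraint with its rules set to Cascade, so that once the real identity value is read back into the parent row during Update, it cascades into the children's ParentID. Column and table names follow the question; note that this still relies on the DataAdapter's insert command refreshing the ID after each insert (for example a trailing SELECT of SCOPE_IDENTITY() in the InsertCommand, or the "Refresh the data table" option on a typed TableAdapter):

using System.Data;

static void AddSelfReference(DataTable roleTable)
{
    var fk = new ForeignKeyConstraint(
        "RoleToRole",
        roleTable.Columns["ID"],         // parent (unique) column
        roleTable.Columns["ParentID"])   // child column
    {
        UpdateRule = Rule.Cascade,
        DeleteRule = Rule.Cascade,
        AcceptRejectRule = AcceptRejectRule.None
    };
    roleTable.Constraints.Add(fk);

    // Hand out temporary negative IDs so they can never collide with
    // real identity values coming back from the database.
    DataColumn id = roleTable.Columns["ID"];
    id.AutoIncrement = true;
    id.AutoIncrementSeed = -1;
    id.AutoIncrementStep = -1;
}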
{ "language": "en", "url": "https://stackoverflow.com/questions/5263", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: How can I dynamically center an image in a MS Reporting Services report? Out of the box, in MS Reporting Services, the image element does not allow for the centering of the image itself, when the dimensions are unknown at design time. In other words, the image (if smaller than the dimensions allotted on the design surface) will be anchored to the top left corner, not in the center. My report will know the URL of the image at runtime, and I need to be able to center this image if it is smaller than the dimensions specified in my designer. A: Here is how I was able to accomplish this, with help from Chris Hays. Size the image to be as big as you would want it on the report, and change the "Sizing" property to "Clip". Dynamically set the image's left padding using an expression: =CStr(Round((4.625-System.Drawing.Image.FromStream(System.Net.WebRequest.Create(Parameters!LogoURL.Value).GetResponse().GetResponseStream()).Width/96)/2,2)) & "in" Dynamically set the image's top padding using an expression: =CStr(Round((1.125-System.Drawing.Image.FromStream(System.Net.WebRequest.Create(Parameters!LogoURL.Value).GetResponse().GetResponseStream()).Height/96)/2,2)) & "in" The first modification made to Chris's code was to swap out the dimensions of my image element on the report (my image was 4.625x1.125 - see numbers above). I also chose to get the stream from a URL instead of the database. I used WebRequest.Create.GetResponse.GetResponseStream to do so. So far so good - I hope that helps!
{ "language": "en", "url": "https://stackoverflow.com/questions/5264", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "17" }
Q: C# logic order and compiler behavior In C#, (and feel free to answer for other languages), what order does the runtime evaluate a logic statement? Example: DataTable myDt = new DataTable(); if (myDt != null && myDt.Rows.Count > 0) { //do some stuff with myDt } Which statement does the runtime evaluate first - myDt != null or: myDt.Rows.Count > 0 ? Is there a time when the compiler would ever evaluate the statement backwards? Perhaps when an "OR" operator is involved? & is known as a logical bitwise operator and will always evaluate all the sub-expressions What is a good example of when to use the bitwise operator instead of the "short-circuited boolean"? A: "C# : Left to right, and processing stops if a match (evaluates to true) is found." Zombie sheep is wrong. The question is about the && operator, not the || operator. In the case of && evaluation will stop if a FALSE is found. In the case of || evaluation stops if a TRUE is found. A: I realise this question has already been answered, but I'd like to throw in another bit of information which is related to the topic. In languages, like C++, where you can actually overload the behaviour of the && and || operators, it is highly recommended that you do not do this. This is because when you overload this behaviour, you end up forcing the evaluation of both sides of the operation. This does two things: * *It breaks the lazy evaluation mechanism because the overload is a function which has to be invoked, and hence both parameters are evaluated before calling the function. *The order of evaluation of said parameters isn't guaranteed and can be compiler specific. Hence the objects wouldn't behave in the same manner as they do in the examples listed in the question/previous answers. For more info, have a read of Scott Meyers' book, More Effective C++. Cheers! A: vb.net if( x isNot Nothing AndAlso x.go()) then * *Evaluation is done left to right *AndAlso operator makes sure that only if the left side was TRUE, the right side will be evaluated (very important, since ifx is nothing x.go will crash) You may use And instead ofAndAlso in vb. in which case the left side gets evaluated first as well, but the right side will get evaluated regardless of result. Best Practice: Always use AndAlso, unless you have a very good reason why not to. It was asked in a followup why or when would anyone use And instead of AndAlso (or & instead of &&): Here is an example: if ( x.init() And y.init()) then x.process(y) end y.doDance() In this case, I want to init both X and Y. Y must be initialized in order for y.DoDance to be able to execute. However, in the init() function I am doing also some extra thing like checking a socket is open, and only if that works out ok, for both, I should go ahead and do the x.process(y). Again, this is probably not needed and not elegant in 99% of the cases, that is why I said that the default should be to use AndAlso. A: @shsteimer The concept modesty is referring to is operator overloading. in the statement: ... A is evaluated first, if it evaluates to false, B is never evaluated. The same applies to That's not operator overloading. Operator overloading is the term given for letting you define custom behaviour for operators, such as *, +, = and so on. This would let you write your own 'Log' class, and then do a = new Log(); // Log class overloads the + operator a + "some string"; // Call the overloaded method - otherwise this wouldn't work because you can't normally add strings to objects. 
Doing this a() || b() // b never runs if a is true is actually called Short Circuit Evaluation A: ZombieSheep is dead-on. The only "gotcha" that might be waiting is that this is only true if you are using the && operator. When using the & operator, both expressions will be evaluated every time, regardless of whether one or both evaluate to false. if (amHungry & whiteCastleIsNearby) { // The code will check if White Castle is nearby // even when I am not hungry } if (amHungry && whiteCastleIsNearby) { // The code will only check if White Castle is nearby // when I am hungry } A: Note that there is a difference between && and & regarding how much of your expression is evaluated. && is known as a short-circuited boolean AND, and will, as noted by others here, stop early if the result can be determined before all the sub-expressions are evaluated. & is known as a logical bitwise operator and will always evaluate all the sub-expressions. As such: if (a() && b()) Will only call b if a returns true. However, this: if (a() & b()) Will always call both a and b, even when the result of calling a is false and the overall result is therefore known to be false regardless of the result of calling b. This same difference exists for the || and | operators.
For example a = Foo( 5, GetSummary( "Orion", GetAddress("Orion") ) ); Things happen like this: * *Call GetAddress with the literal "Orion" *Call GetSummary with the literal "Orion" and the result of GetAddress *Call Foo with the literal 5 and the result of GetSummary *Assign this value to a A: I like Orion's responses. I'll add two things: * *The left-to-right still applies first *The inner-to-outer to ensure that all arguments are resolved before calling the function Say we have the following example: a = Foo(5, GetSummary("Orion", GetAddress("Orion")), GetSummary("Chris", GetAddress("Chris"))); Here's the order of execution: * *GetAddress("Orion") *GetSummary("Orion", ...) *GetAddress("Chris") *GetSummary("Chris", ...) *Foo(...) *Assigns to a I can't speak about C#'s legal requirements (although I did test a similar example using Mono before writing this post), but this order is guaranteed in Java. And just for completeness (since this is a language-agnostic thread as well), there are languages like C and C++, where the order is not guaranteed unless there is a sequence point. References: 1, 2. In answering the thread's question, however, && and || are sequence points in C++ (unless overloaded; also see OJ's excellent answer). So some examples: * *foo() && bar() *foo() & bar() In the && case, foo() is guaranteed to run before bar() (if the latter is run at all), since && is a sequence point. In the & case, no such guarantee is made (in C and C++), and indeed bar() can run before foo(), or vice versa. A: I have heard somewhere that compilers work backwards, but I am unsure how true this is. A: You use & when you specifically want to evaluate all the sub-expressions, most likely because they have side-effects you want, even though the final result will be false and thus not execute your then part of your if-statement. Note that & and | operates for both bitwise masks and boolean values and is not just for bitwise operations. They're called bitwise, but they are defined for both integers and boolean data types in C#. A: @csmba: It was asked in a followup why or when would anyone use And instead of AndAlso (or & instead of &&): Here is an example: if ( x.init() And y.init()) then x.process(y) end y.doDance() In this case, I want to init both X and Y. Y must be initialized in order for y.DoDance to be able to execute. However, in the init() function I am doing also some extra thing like checking a socket is open, and only if that works out ok, for both, I should go ahead and do the x.process(y). I believe this is rather confusing. Although your example works, it's not the typical case for using And (and I would probably write this differently to make it clearer). And (& in most other languages) is actually the bitwise-and operation. You would use it to calculate bit operations, for example deleting a flag bit or masking and testing flags: Dim x As Formatting = Formatting.Bold Or Formatting.Italic If (x And Formatting.Italic) = Formatting.Italic Then MsgBox("The text will be set in italic.") End If A: The D programming language Does do left-to-right evaluation with short circuiting and doesn't allow overloading of the && and '||' operators.
{ "language": "en", "url": "https://stackoverflow.com/questions/5269", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18" }
Q: Print a Winform/visual element All the articles I've found via Google are either obsolete or contradict one another. What's the easiest way to print a form or, say, a RichTextBox in C#? I think it's using the PrintDialog class by setting the Document, but how does this get converted? A: At least in VS 2008, it's very easy. It took me a couple of minutes to code the answer after reading your question. Here's where I borrowed it from: http://msdn.microsoft.com/en-us/library/6he9hz8c.aspx I tested this, and it works. A: Someone I know created a component that extends controls with a lot of properties that give you a lot of control over how the form prints. It's worth a look. MCL PrintForm Helper Component
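A: In case the linked article moves, here is a hedged sketch of the usual pattern: render the form (or any control, such as the RichTextBox) into a bitmap with Control.DrawToBitmap, then hand that image to a PrintDocument, optionally fronted by a PrintDialog. Names here are illustrative and nothing below is specific to the MSDN sample:

using System.Drawing;
using System.Drawing.Printing;
using System.Windows.Forms;

public static class ControlPrinter
{
    public static void Print(Control control)
    {
        using (var bitmap = new Bitmap(control.Width, control.Height))
        {
            control.DrawToBitmap(bitmap, new Rectangle(0, 0, control.Width, control.Height));

            using (var document = new PrintDocument())
            {
                document.PrintPage += (sender, e) =>
                    e.Graphics.DrawImage(bitmap, e.MarginBounds.Location);

                using (var dialog = new PrintDialog { Document = document })
                {
                    if (dialog.ShowDialog() == DialogResult.OK)
                    {
                        document.Print();
                    }
                }
            }
        }
    }
}

Call it as ControlPrinter.Print(this) from inside a form, or pass the RichTextBox directly. Keep in mind that DrawToBitmap captures what is currently rendered, so a RichTextBox longer than its visible area will be clipped; printing the full text requires measuring and drawing it page by page instead.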
{ "language": "en", "url": "https://stackoverflow.com/questions/5307", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12" }
Q: Is there a business reason for striving for pure CSS layout? It seems like every time I try to create a pure CSS layout it takes me much longer than if I'd use a table or two. Getting three columns to be equal lengths with different amounts of data seems to require particular fancy hacks, especially when dealing with cross-browser issues. My Question: Who are these few tables going to hurt? Tables seem to work particularly well on tabular data — why are they so reviled in this day and age? Google.com has a table in its source code, so do many other sites (stackoverflow.com does not by the way). A: In the real world, your chances of taking one design and totally reskinning it without touching the markup are pretty remote. It's fine for blogs and concocted demos like the csszengarden, but it's a bogus benefit on any site with a moderately complex design, really. Using a CMS is far more important. DIVs plus CSS != semantic, either. Good HTML is well worthwhile for SEO and accessibility always, whether tables or CSS are used for layout. You get really efficient, fast web designs by combining really simple tables with some good CSS. Table layouts can be more accessible than CSS layouts, and the reverse is also true - it depends TOTALLY on the source order of the content, and just because you avoided tables does not mean users with screen readers will automatically have a good time on your site. Layout tables are irrelevant to screen reader access provided the content makes sense when linearised, exactly the same as if you do CSS layout. Data tables are different; they are really hard to mark up properly and even then the users of screen reader software generally don't know the commands they need to use to understand the data. Rather than agonising over using a few layout tables, you should worry that heading tags and alt text are used properly, and that form labels are properly assigned. Then you'll have a pretty good stab at real world accessibility. This from several years experience running user testing for web accessibility, specialising in accessible site design, and from consulting for Cahoot, an online bank, on this topic for a year. So my answer to the poster is no, there is no business reason to prefer CSS over tables. It's more elegant, more satisfying and more correct, but you as the person building it and the person that has to maintain it after you are the only two people in the world who give a rat's ass whether it's CSS or tables. A: I'm of the thought that CSS layout with as few tables as possible is cleaner and better, but I agree that sometimes you just gotta use a table. Business-wise, it's generally "what's going to get it done the fastest and most reliable way." In my experience, using a few tables generally falls into that category. I have found that a very effective way to mitigate cross-browser differences in CSS rendering is to use the "strict" doctype at the top of your page: <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd"> Also, for the dreaded IE6 CSS issues, you can use this hack: .someClass { background-color:black; /*this is for most browsers*/ _background-color:white; /*this is for IE6 only - all others will ignore it*/ } A: Using semantic HTML design is one of those things where you don't know what you're missing unless you make a practice of it. I've worked on several sites where the site was restyled after the fact with little or no impact to the server-side code. 
Restyling sites is a very common request, something that I've noticed more now that I'm able to say "yes" to instead of trying to talk my way out of it. And, once you've learned to work with the page layout system, it's usually no harder than table-based layout. A: If you have a public facing website, the real business case is SEO. Accessibility is important and maintaining semantic (X)HTML is much easier than maintaining table layouts, but that #1 spot on Google will bring home the bacon. For example: Monthly web report: 127 million page views for July ... Latimes.com keeps getting better at SEO (search engine optimization), which means our stories are ranking higher in Google and other search engines. We are also performing better on sites like Digg.com. All that adds up to more exposure and more readership than ever before. If you look at their site, they've got a pretty decent CSS layout going. Generally, you find relatively few table layouts performing well in the SERPs these days. A: The main reason why we changed our web pages to DIV/CSS based layout was the delay in rendering table-based pages. We have a public web site, and most of its user base is in countries like India, where internet bandwidth is still an issue (it's improving day by day, but still not on par). In such circumstances, when we used a table-based layout, users had to stare at a blank page for a considerably long time; then the entire page would get displayed as a whole in a tick. By converting our pages to DIVs, we managed to bring some content to the browser almost instantly as users entered our web site, and that content was enough to keep users engaged until the browser downloaded the entire contents of the page. The major flaw with a table-based implementation is that the browser will show the content of a table only after it downloads the entire HTML for that table. The issue blows up when we have a main table which wraps the entire content of the page, and when we have lots of nested tables. For 'flexible tables' (those without any fixed width), after downloading the entire table tag, the browser has to parse to the last row of the table to find out the width of each column, then has to parse it again to display the content. Until all this happens, users have to stare at a blank screen; then everything comes to the screen in a tick. A: Keeping your layout and your content separate allows you to redesign or make tweaks and changes to your site easily. It may take a bit longer up front, but the longest phase of software development is maintenance. A CSS-friendly site with clear separation between content and design is best over the course of maintenance. A: In my experience, the only time this really adds business value is when there is a need for 100% support for accessibility. When you have users who are visually impaired and/or use screenreaders to view your site, you need to make sure that your site is compliant with accessibility standards. Users that use screenreaders will tend to have their own high-contrast, large-font stylesheet (if your site doesn't supply one itself) which makes it easy for screenreaders to parse the page. When a screenreader reads a page and sees a table, it'll tell the user it's a table. Hence, if you use a table for layout, it gets very confusing because the user doesn't know that the content of the table is actually the article instead of some other tabular data.
A menu should be a list or a collection of divs, not a table with menu items; again, that's confusing. You should make sure that you use blockquotes, alt tags, title attributes, etc. to make it more readable. If you make your design CSS-driven, then your entire look and feel can be stripped away and replaced with a raw view which is very readable to those users. If you have inline styles, table-based layouts, etc, then you're making it harder for those users to parse your content. While I do feel that maintenance is made easier for some things when your site is purely laid out with CSS, I don't think it's the case for all kinds of maintenance -- especially when you're dealing with cross-browser CSS, which can obviously be a nightmare. In short, your page should describe its make-up in a standards compliant way if you want it to be accessible to said users. If you have no need/requirement and likely won't need it in the future, then don't bother wasting too much time attempting to be a CSS purist :) Use the mixture of style and layout techniques that suits you and makes your job easier. Cheers! [EDIT - added strikethrough to wrong or misleading parts of this answer - see comments] A: One other thing I just remembered: you can assign a different stylesheet to a page for printing vs. display. In addition to your normal stylesheet definition, you can add the following tag <link rel="stylesheet" type="text/css" media="print" href="PrintStyle.css" /> which will render the document according to that style when you send it to the printer. This allows you to strip out the background images and additional header/footer information and just print the raw information without creating a separate module. A: "doing a complete revamp of a 15 page web site just by updating 1 file is heaven." This is true. Unfortunately, having one CSS file used by 15,000 complex and widely differing pages is your worst nightmare come true. Change something - did it break a thousand pages? Who knows? CSS is a double-edged sword on big sites like ours. A: The idea is that Designers can Design and Web Developers can implement. This is especially the case in dynamic web applications where you do not want your Designers to mess around in your Source Code. Now, while there are templating engines, Designers apparently just love to go crazy, and CSS allows you to pull a lot more stunts than tables. That being said: As a developer, I abandoned CSS Layout mostly because my Design sucks anyway, so at least it can suck properly :-) But if I were ever to hire a Designer, I would let him use whatever his WYSIWYG Editor spits out. A: Since this is stackoverflow, I'll give you my programmer's answer. semantics 101 First take a look at this code and think about what's wrong here... class car { int wheels = 4; string engine; } car mybike = new car(); mybike.wheels = 2; mybike.engine = null; The problem, of course, is that a bike is not a car. The car class is an inappropriate class for the bike instance. The code is error-free, but is semantically incorrect. It reflects poorly on the programmer. semantics 102 Now apply this to document markup. If your document needs to present tabular data, then the appropriate tag would be <table>. If you place navigation into a table, however, then you're misusing the intended purpose of the <table> element. In the second case, you're not presenting tabular data -- you're (mis)using the <table> element to achieve a presentational goal. conclusion Whom does this hurt? No one. Who benefits if you use semantic markup?
You -- and your professional reputation. Now go and do the right thing. A: Like a lot of things, it's a good idea that often gets carried too far. I like a div+css driven layout because it's usually quite easy to change the appearance, even drastically, just through the stylesheet. It's also nice to be friendly to lower-level browsers, screen readers, etc. But like most decisions in programming, the purpose of the site and the cost of development should be considered in making a decision. Neither side is the right way to go 100% of the time. BTW, I think everyone agrees that tables should be used for tabular data. A: Business reason for CSS layout: You can blow away the customers by saying "our portal is totally customizable/skinnable without writing code!" Then again, I don't see any evil in designing block elements with tables. By block elements I mean where it doesn't make any sense to break apart the said element in different designs. So, tabular data would best be presented with tables, of course. Designing major building blocks (such as a menu bar, news ticker, etc.) within their own tables should be OK as well. Just don't rely on tables for the overall page layout and you'll be fine, methinks. A: "I would let him use whatever his WYSIWYG Editor spits out" I just threw up a little... Ah, hello? You don't think the graphic designer is writing the CSS by hand, do you? Funnily enough, I have worked with a few designers and the best among them do hand-tweak their CSS. The guy I am thinking of actually does all of his design work as an XHTML file with a couple of CSS files and creates graphical elements on the fly as he needs them. He uses Dreamweaver but only really as a navigation tool. (I learned a lot from that guy.) Once you've made an investment to learn purely CSS-based design and have had a little experience (found out where IE sucks [to be fair it's getting better]) it ends up being faster, I've found. I worked on Content Management Systems and the application rarely had to change for the designers to come up with a radically different look. A: Besides being easily updatable and compliant... I used to design all table-based web sites and I was resistant at first, but little by little I moved to CSS. It did not happen overnight, but it happened, and it is something you should do as well. There have been some nights I wanted to toss my computer out the window because the style I was applying to a div was not doing what I wanted, but you learn from those obstacles. As for a business, once you get designing web sites by CSS down to a science, you can develop processes for each site and even reuse past web sites and just add a different header graphic, color, etc. Also, be sure to embed/include all reusable parts of your website: header, sub-header, footer. Once you get over the hump, it will be all downhill from there. Good luck! A: :: nods at palmsey and Jon Galloway :: I agree with the maintainability factor. It does take me a bit longer to get my initial layouts done (since I'm still a jedi apprentice in the CSS arts) but doing a complete revamp of a 15 page web site just by updating 1 file is heaven. A: Some additional reasons why this is good practice: * *Accessibility - the web should ideally be accessible by all *Performance - save bandwidth and load faster on mobile devices (these lack bandwidth to some degree and cannot lay out complex tables quickly). Besides, loading fast is always a good thing...
A: "When a screenreader reads a page and sees a table, it'll tell the user it's a table. Hence, if you use a table for layout, it gets very confusing because the user doesn't know that the content of the table is actually the article instead of some other tabular data." This is actually not true; screen readers like JAWS, Window Eyes and HAL ignore layout tables. They work really well at dealing with the real web. A: I don't think there is a business reason at all. Technical reason, maybe, and even then barely - it is a huge timesuck the world over, and then you look at it in IE and break down and weep. A: I actually can see tables in Stack Overflow on the user page. It even has heaps of inline styles... A: There definitely is. If you are still striving for it, you are not getting it right. DIV+CSS layout is actually much easier than table layout in terms of maintainability and productivity. Just keep practicing it a while longer before saying that. Table layout is good too; it's just not meant for layout, and it has exceptional drawbacks when it comes to minor tuning.
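For reference against the question's three-column scenario, here is a minimal pure-CSS sketch (the class names, widths and background image are invented for illustration, and the "equal height" effect is usually faked with a repeating background on the wrapper - the classic "faux columns" trick - rather than by making the columns truly equal):

<div class="columns">
  <div class="col">left content</div>
  <div class="col">middle content</div>
  <div class="col">right content</div>
</div>

.columns { overflow: hidden; background: url(columns-bg.png) repeat-y; } /* wrapper contains the floats; image is hypothetical */
.columns .col { float: left; width: 33%; }

It is not magically simpler than a table, but it keeps the column markup free of presentational structure.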
{ "language": "en", "url": "https://stackoverflow.com/questions/5323", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "30" }
Q: Why can't I use a try block around my super() call? So, in Java, the first line of your constructor HAS to be a call to super... be it implicitly calling super(), or explicitly calling another constructor. What I want to know is, why can't I put a try block around that? My specific case is that I have a mock class for a test. There is no default constructor, but I want one to make the tests simpler to read. I also want to wrap the exceptions thrown from the constructor into a RuntimeException. So, what I want to do is effectively this: public class MyClassMock extends MyClass { public MyClassMock() { try { super(0); } catch (Exception e) { throw new RuntimeException(e); } } // Mocked methods } But Java complains that super isn't the first statement. My workaround: public class MyClassMock extends MyClass { public static MyClassMock construct() { try { return new MyClassMock(); } catch (Exception e) { throw new RuntimeException(e); } } public MyClassMock() throws Exception { super(0); } // Mocked methods } Is this the best workaround? Why doesn't Java let me do the former? My best guess as to the "why" is that Java doesn't want to let me have a constructed object in a potentially inconsistent state... however, in doing a mock, I don't care about that. It seems I should be able to do the above... or at least I know that the above is safe for my case... or seems as though it should be anyways. I am overriding any methods I use from the tested class, so there is no risk that I am using uninitialized variables. A: I know this is an old question, but I liked it, and as such, I decided to give it an answer of my own. Perhaps my understanding of why this cannot be done will contribute to the discussion and to future readers of your interesting question. Let me start with an example of failing object construction. Let's define a class A, such that: class A { private String a = "A"; public A() throws Exception { throw new Exception(); } } Now, let's assume we would like to create an object of type A in a try...catch block. A a = null; try{ a = new A(); }catch(Exception e) { //... } System.out.println(a); Evidently, the output of this code will be: null. Why Java does not return a partially constructed version of A? After all, by the point the constructor fails, the object's name field has already been initialized, right? Well, Java can't return a partially constructed version of A because the object was not successfully built. The object is in a inconsistent state, and it is therefore discarded by Java. Your variable A is not even initialized, it is kept as null. Now, as you know, to fully build a new object, all its super classes must be initialized first. If one of the super classes failed to execute, what would be the final state of the object? It is impossible to determine that. Look at this more elaborate example class A { private final int a; public A() throws Exception { a = 10; } } class B extends A { private final int b; public B() throws Exception { methodThatThrowsException(); b = 20; } } class C extends B { public C() throws Exception { super(); } } When the constructor of C is invoked, if an exception occurs while initializing B, what would be the value of the final int variable b? As such, the object C cannot be created, it is bogus, it is trash, it is not fully initialized. For me, this explains why your code is illegal. 
A: Unfortunately, compilers can't work on theoretical principles, and even though you may know that it is safe in your case, if they allowed it, it would have to be safe for all cases. In other words, the compiler isn't stopping just you, it's stopping everyone, including all those that don't know that it is unsafe and needs special handling. There are probably other reasons for this as well, as all languages usually have ways to do unsafe things if one knows how to deal with them. In C# .NET there are similar provisions, and the only way to declare a constructor that calls a base constructor is this: public ClassName(...) : base(...) In doing so, the base constructor will be called before the body of the constructor, and you cannot change this order. A: It's done to prevent someone from creating a new SecurityManager object from untrusted code. public class Evil extends SecurityManager { Evil() { try { super(); } catch (Throwable t) { } } } A: I don't know how Java is implemented internally, but if the constructor of the superclass throws an exception, then there isn't an instance of the class you extend. It would be impossible to call the toString() or equals() methods, for example, since they are inherited in most cases. Java may allow a try/catch around the super() call in the constructor if 1. you override ALL methods from the superclasses, and 2. you don't use the super.XXX() clause, but that all sounds too complicated to me. A: I can't presume to have a deep understanding of Java internals, but it is my understanding that, when a compiler needs to instantiate a derived class, it has to first create the base (and its base before that(...)) and then slap on the extensions made in the subclass. So it is not even the danger of uninitialized variables or anything like that at all. When you try to do something in the subclass' constructor before the base class' constructor, you are basically asking the compiler to extend a base object instance that doesn't exist yet. Edit: In your case, MyClass becomes the base object, and MyClassMock is a subclass. A: I know this question has numerous answers, but I'd like to give my little tidbit on why this wouldn't be allowed, specifically to answer why Java does not allow you to do this. So here you go... Now, keep in mind that super() has to be called before anything else in a subclass's constructor, so, if you did use try and catch blocks around your super() call, the blocks would have to look like this: try { super(); ... } catch (Exception e) { super(); //This line will throw the same error... ... } If super() fails in the try block, it HAS to be executed first in the catch block, so that super runs before anything in your subclass's constructor. This leaves you with the same problem you had at the beginning: if an exception is thrown, it isn't caught. (In this case it just gets thrown again in the catch block.) Now, the above code is in no way allowed by Java either. This code may execute half of the first super call, and then call it again, which could cause some problems with some super classes. Now, the reason that Java doesn't let you throw an exception instead of calling super() is because the exception could be caught somewhere else, and the program would continue without calling super() on your subclass object, and possibly because the exception could take your object as a parameter and try to change the value of inherited instance variables, which would not yet have been initialized.
A: One way to get around it is by calling a private static function. The try-catch can then be placed in the function body. public class Test { public Test() { this(Test.getObjectThatMightThrowException()); } public Test(Object o) { //... } private static final Object getObjectThatMightThrowException() { try { return new ObjectThatMightThrowAnException(); } catch(RuntimeException rtx) { throw new RuntimeException("It threw an exception!!!", rtx); } } }
{ "language": "en", "url": "https://stackoverflow.com/questions/5328", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "47" }
Q: What is the difference between a bug and a change request in MSF for CMMI? I'm currently evaluating the MSF for CMMI process template under TFS for use on my development team, and I'm having trouble understanding the need for separate bug and change request work item types. I understand that it is beneficial to be able to differentiate between bugs (errors) and change requests (changing requirements) when generating reports. In our current system, however, we only have a single type of change request and just use a field to indicate whether it is a bug, requirement change, etc (this field can be used to build report queries). What are the benefits of having a separate workflow for bugs? I'm also confused by the fact that developers can submit work against a bug or a change request, I thought the intended workflow was for bugs to generate change requests which are what the developer references when making changes. A: Keep in mind that a part of a Work Item Type definition for TFS is the definition of it's "Workflow" meaning the states the work item can be and the transitions between the states. This can be secured by security role. So - generally speaking - a "Change Request" would be initiated and approved by someone relatively high up in an organization (someone with "Sponsorship" rights related to spending the resources to make a (possibly very large) change to the system. Ultimately this person would be the one to approve that the change was made successfully. For a "Bug" however, ANY user of the application should be able to initiate a Bug. At an organization I implemented TFS at, only Department Heads can be the originators of a "Change Request" - but "Bugs" were created from "Help Desk" tickets (not automated, just through process...) A: Generally, though I can't speak for CMM, change requests and bugs are handled and considered differently because they typically refer to different pieces of your application lifecycle. A bug is a defect in your program implementation. For instance, if you design your program to be able to add two numbers and give the user the sum, a defect would be that it does not handle negative numbers correctly, and thus a bug. A change request is when you have a design defect. For instance, you might have specifically said that your program should not handle negative numbers. A change request is then filed in order to redesign and thus reimplement that part. The design defect might not be intentional, but could easily be because you just didn't consider that part when you originally designed your program, or new cases that didn't exist at the time when the original design was created have been invented or discovered since. In other words, a program might operate exactly as designed, but need to be changed. This is a change request. Typically, fixing a bug is considered a much cheaper action than executing a change request, as the bug was never intended to be part of your program. The design, however, was. And thus a different workflow might be necessary to handle the two different scenarios. For instance, you might have a different way of confirming and filing bugs than you have for change requests, which might require more work to lay out the consequences of the change. A: @Luke I don't disagree with you, but this difference is typically the explanation given for why there is two different processes available for handling the two types of issues. 
I'd say that if the color of the home page was originally designed to be red, and for some reason it is blue, that's easily a quick fix and doesn't need to involve many people or man-hours to do the change. Just check out the file, change the color, check it back in and update the bug. However, if the color of the home page was designed to be red, and is red, but someone thinks it needs to be blue, that is, to me anyway, a different type of change. For instance, has anyone thought about the impact this might have on other parts of the page, like images and logos overlaying the blue background? Could there be borders of things that look bad? Link underlining is blue, will that show up? As an example, I am red/green color blind, so changing the color of something is, for me, not something I take lightly. There are enough webpages on the web that give me problems. Just to make the point that even the most trivial change can be nontrivial if you consider everything. The actual end implementation change is probably much the same, but to me a change request is a different beast, precisely because it needs to be thought about more to make sure it will work as expected. A bug, however, is that someone said "this is how we're going to do it" and then someone did it differently. A change request is more like "but we need to consider this other thing as well... hmm...". There are exceptions of course, but let me take your examples apart. If the server was designed to handle more than 300,000,000,000 pageviews, then yes, it is a bug that it doesn't. But designing a server to handle that many pageviews is more than just saying our server should handle 300,000,000,000 pageviews; it should contain a very detailed specification for how it can do that, right down to processing time guarantees and disk access average times. If the code is then implemented exactly as designed, and unable to perform as expected, then the question becomes: did we design it incorrectly or did we implement it incorrectly? I agree that in this case, whether it is to be considered a design flaw or an implementation flaw depends on the actual reason why it fails to live up to expectations. For instance, if someone assumed disks were 100 times as fast as they actually are, and this is deemed to be the reason why the server fails to perform as expected, I'd say this is a design bug, and someone needs to redesign. If the original requirement of that many pageviews is still to be held, a major redesign with more in-memory data and similar might have to be undertaken. However, if someone has just failed to take into account how RAID disks operate and how to correctly benefit from striped media, that's a bug and might not need that big of a change to fix. Again, there will of course be exceptions. In any case, the original difference I stated is the one I have found to be true in most cases. A: A bug is something that is broken in a requirement which has already been approved for implementation. A change request needs to go through a cycle in which the impact and effort have to be estimated for that change, and then it has to be approved for implementation before work on it can begin. The two are fundamentally different under CMM. A: Is my assumption incorrect then that change requests should be generated from bugs?
I'm confused because I don't think all bugs should be automatically approved for implementation -- they may be trivial and, at least in our case, will go through the same review process as a change request before being assigned to a developer. A: Implementation always comes from a requirement. It may be from a product manager, it may be from some random thought of yours. It may be documented, it may be from some conversation. At the end of the day, even for something as simple as a := a + 1, the "real" implementation is based on the compiler, linker, CPU, etc., which depend on the physical laws of real life. A bug is something that is implemented contrary to the ORIGINAL requirement. Other than that, it is a change request. If the requirement is changed and the implementation needs to be changed as well, it's a change request. If a dependency has changed, for example a web browser stopped supporting some tags and you need to make some change, it's a change request. In the real world, anything that is not properly documented should be treated as a change request. The product manager forgot to put something in the story? Sorry, that's a change request. All change requests should be properly estimated and pointed. Developers get paid for making change requests, not for making bugs and fixing those made by them.
{ "language": "en", "url": "https://stackoverflow.com/questions/5329", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: Memcached chunk limit Why is there a hardcoded chunk limit (.5 meg after compression) in memcached? Has anyone recompiled theirs to up it? I know I should not be sending big chunks like that around, but these extra heavy chunks happen for me from time to time and wreak havoc. A: This question used to be in the official FAQ What are some limits in memcached I might hit? (Wayback Machine) To quote: The simple limits you will probably see with memcache are the key and item size limits. Keys are restricted to 250 characters. Stored data cannot exceed 1 megabyte in size, since that is the largest typical slab size." The FAQ has now been revised and there are now two separate questions covering this: What is the maxiumum key length? (250 bytes) The maximum size of a key is 250 characters. Note this value will be less if you are using client "prefixes" or similar features, since the prefix is tacked onto the front of the original key. Shorter keys are generally better since they save memory and use less bandwidth. Why are items limited to 1 megabyte in size? Ahh, this is a popular question! Short answer: Because of how the memory allocator's algorithm works. Long answer: Memcached's memory storage engine (which will be pluggable/adjusted in the future...), uses a slabs approach to memory management. Memory is broken up into slabs chunks of varying sizes, starting at a minimum number and ascending by a factorial up to the largest possible value. Say the minimum value is 400 bytes, and the maximum value is 1 megabyte, and the factorial is 1.20: slab 1 - 400 bytes slab 2 - 480 bytes slab 3 - 576 bytes ... etc. The larger the slab, the more of a gap there is between it and the previous slab. So the larger the maximum value the less efficient the memory storage is. Memcached also has to pre-allocate some memory for every slab that exists, so setting a smaller factorial with a larger max value will require even more overhead. There're other reason why you wouldn't want to do that... If we're talking about a web page and you're attempting to store/load values that large, you're probably doing something wrong. At that size it'll take a noticeable amount of time to load and unpack the data structure into memory, and your site will likely not perform very well. If you really do want to store items larger than 1MB, you can recompile memcached with an edited slabs.c:POWER_BLOCK value, or use the inefficient malloc/free backend. Other suggestions include a database, MogileFS, etc.
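To make the slab arithmetic in the quoted FAQ concrete, here is a tiny illustrative script (this is not memcached's actual code - the real allocator lives in slabs.c - just the growth-factor idea with the FAQ's example numbers):

# illustrative only, not memcached's implementation
def slab_sizes(min_size=400, factor=1.2, max_size=1024 * 1024):
    sizes, size = [], float(min_size)
    while size <= max_size:
        sizes.append(int(size))
        size *= factor
    return sizes

print(slab_sizes()[:5])  # [400, 480, 576, 691, 829]

Every stored item gets rounded up to the nearest slab class, which is why raising the maximum item size (or shrinking the growth factor) costs extra memory overhead.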
{ "language": "en", "url": "https://stackoverflow.com/questions/5349", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: How do you use a variable in xsl when trying to select a node? I would have thought this would be an easy one to Google, but I've been unsuccessful. I want to assign a variable the value out of an attribute (easy so far) then use that variable to select another node based on the value of that attribute. Example: <xsl:variable name="myId" select="@id" /> <xsl:value-of select="//Root/Some/Other/Path/Where[@id='{@myId}']/@Name" /> That does not work. If I replace the {@myId} with the value that is in the variable then it does find the right node, but doing it this way produces nothing. I'm sure I'm missing something, or perhaps there is a different way to do it. The context is that there is related data under different top-level nodes that share the same id value so I need to get the related nodes in my template. A: You seem to have confused the use of a variable (which is just $variable) with Attribute Value Templates, which allow you to put any XPath expression in some attributes, e.g. <newElement Id="{@Id}"/> They can obviously be combined, so you can include a variable in an Attribute Value Template, such as: <newElement Id="{$myId}"/> A: Ok, I finally figured it out. Silly problem really, I simply needed to leave out the quotes and the braces. One of those times when I thought that I'd already tried that. :D Oh, and I mistyped @myId in the first example, the code was actually $myId. <xsl:variable name="myId" select="@id" /> <xsl:value-of select="//Root/Some/Other/Path/Where[@id=$myId]/@Name" />
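To put that fix in context, here is a rough sketch of how the variable and the predicate sit inside a template (the match pattern and element names are just placeholders based on the paths in the question):

<xsl:template match="Some/Element">
  <xsl:variable name="myId" select="@id" />
  <xsl:value-of select="//Root/Some/Other/Path/Where[@id = $myId]/@Name" />
</xsl:template>

For this particular pattern you can often skip the variable entirely and use the XSLT current() function, e.g. select="//Root/Some/Other/Path/Where[@id = current()/@id]/@Name", since current() refers to the node the template is currently processing rather than the node being tested in the predicate.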
{ "language": "en", "url": "https://stackoverflow.com/questions/5374", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16" }
Q: SQL Server 2008 FileStream on a Web Server I've been developing a site using ASP.NET MVC, and have decided to use the new SQL Server 2008 FILESTREAM facility to store files 'within' the database rather than as separate entities. While initially working within VS2008 (using a trusted connection to the database), everything was fine and dandy. Issues arose, however, when I shifted the site to IIS7 and changed over to SQL authentication on the database. It seems that streaming a FILESTREAM doesn't work with SQL authentication, only with Windows authentication. Given this, what is the best practice to follow? * *Is there a way to force this sort of thing to work under SQL authentication? *Should I add NETWORK SERVICE as a database user and then use Trusted authentication? *Should I create another user, and run both the IIS site and the database connection under this? *Any other suggestions? A: Take a look at this article. I don't know a whole lot about FileStreaming and security, but there are a couple of interesting options in the FileStreaming setup such as allowing remote connections and allow remote clients to access FileStreaming
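For what it's worth, the approach that usually gets suggested is some variant of your second and third options: keep a trusted connection and control which Windows identity IIS runs under, because (as far as I can tell) the Win32 streaming side of FILESTREAM authenticates with the Windows token rather than the SQL login. A rough sketch, with placeholder names:

<!-- web.config sketch; the server and database names are made up -->
<connectionStrings>
  <add name="Main"
       connectionString="Data Source=myServer;Initial Catalog=myDb;Integrated Security=SSPI" />
</connectionStrings>

Then run the IIS7 application pool under NETWORK SERVICE or a dedicated domain account, and grant that same account a SQL Server login with the permissions it needs, so the web site and the database connection share one Windows identity.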
{ "language": "en", "url": "https://stackoverflow.com/questions/5396", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Convert Bytes to Floating Point Numbers? I have a binary file that I have to parse and I'm using Python. Is there a way to take 4 bytes and convert it to a single precision floating point number? A: Just a little addition, if you want a float number as output from the unpack method instead of a tuple just write >>> import struct >>> [x] = struct.unpack('f', b'\xdb\x0fI@') >>> x 3.1415927410125732 If you have more floats then just write >>> import struct >>> [x,y] = struct.unpack('ff', b'\xdb\x0fI@\x0b\x01I4') >>> x 3.1415927410125732 >>> y 1.8719963179592014e-07 >>> A: I would add a comment but I don't have enough reputation. Just to add some info. If you have a byte buffer containing X amount of floats, the syntax for unpacking would be: struct.unpack('Xf', ...) If the values are doubles the unpacking would be: struct.unpack('Xd', ...) A: >>> import struct >>> struct.pack('f', 3.141592654) b'\xdb\x0fI@' >>> struct.unpack('f', b'\xdb\x0fI@') (3.1415927410125732,) >>> struct.pack('4f', 1.0, 2.0, 3.0, 4.0) '\x00\x00\x80?\x00\x00\x00@\x00\x00@@\x00\x00\x80@'
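Since the original question is about parsing a binary file, here is a small end-to-end sketch (the file name and the assumption of little-endian byte order are mine; use '>f' instead if the file was written big-endian):

import struct

f = open('data.bin', 'rb')   # hypothetical file name
raw = f.read(4)              # four bytes -> one single-precision float
f.close()
(value,) = struct.unpack('<f', raw)  # '<f' = little-endian 32-bit float
print(value)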
{ "language": "en", "url": "https://stackoverflow.com/questions/5415", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "89" }
Q: Python, Unicode, and the Windows console When I try to print a Unicode string in a Windows console, I get an error . UnicodeEncodeError: 'charmap' codec can't encode character .... I assume this is because the Windows console does not accept Unicode-only characters. What's the best way around this? Is there any way I can make Python automatically print a ? instead of failing in this situation? Edit: I'm using Python 2.5. Note: @LasseV.Karlsen answer with the checkmark is sort of outdated (from 2008). Please use the solutions/answers/suggestions below with care!! @JFSebastian answer is more relevant as of today (6 Jan 2016). A: Update: Python 3.6 implements PEP 528: Change Windows console encoding to UTF-8: the default console on Windows will now accept all Unicode characters. Internally, it uses the same Unicode API as the win-unicode-console package mentioned below. print(unicode_string) should just work now. I get a UnicodeEncodeError: 'charmap' codec can't encode character... error. The error means that Unicode characters that you are trying to print can't be represented using the current (chcp) console character encoding. The codepage is often 8-bit encoding such as cp437 that can represent only ~0x100 characters from ~1M Unicode characters: >>> u"\N{EURO SIGN}".encode('cp437') Traceback (most recent call last): ... UnicodeEncodeError: 'charmap' codec can't encode character '\u20ac' in position 0: character maps to I assume this is because the Windows console does not accept Unicode-only characters. What's the best way around this? Windows console does accept Unicode characters and it can even display them (BMP only) if the corresponding font is configured. WriteConsoleW() API should be used as suggested in @Daira Hopwood's answer. It can be called transparently i.e., you don't need to and should not modify your scripts if you use win-unicode-console package: T:\> py -m pip install win-unicode-console T:\> py -m run your_script.py See What's the deal with Python 3.4, Unicode, different languages and Windows? Is there any way I can make Python automatically print a ? instead of failing in this situation? If it is enough to replace all unencodable characters with ? in your case then you could set PYTHONIOENCODING envvar: T:\> set PYTHONIOENCODING=:replace T:\> python3 -c "print(u'[\N{EURO SIGN}]')" [?] In Python 3.6+, the encoding specified by PYTHONIOENCODING envvar is ignored for interactive console buffers unless PYTHONLEGACYWINDOWSIOENCODING envvar is set to a non-empty string. A: Just enter this code in command line before executing python script: chcp 65001 & set PYTHONIOENCODING=utf-8 A: Like Giampaolo Rodolà's answer, but even more dirty: I really, really intend to spend a long time (soon) understanding the whole subject of encodings and how they apply to Windoze consoles, For the moment I just wanted sthg which would mean my program would NOT CRASH, and which I understood ... and also which didn't involve importing too many exotic modules (in particular I'm using Jython, so half the time a Python module turns out not in fact to be available). def pr(s): try: print(s) except UnicodeEncodeError: for c in s: try: print( c, end='') except UnicodeEncodeError: print( '?', end='') NB "pr" is shorter to type than "print" (and quite a bit shorter to type than "safeprint")...! A: Note: This answer is sort of outdated (from 2008). Please use the solution below with care!! 
Here is a page that details the problem and a solution (search the page for the text Wrapping sys.stdout into an instance): PrintFails - Python Wiki Here's a code excerpt from that page: $ python -c 'import sys, codecs, locale; print sys.stdout.encoding; \ sys.stdout = codecs.getwriter(locale.getpreferredencoding())(sys.stdout); \ line = u"\u0411\n"; print type(line), len(line); \ sys.stdout.write(line); print line' UTF-8 <type 'unicode'> 2 Б Б $ python -c 'import sys, codecs, locale; print sys.stdout.encoding; \ sys.stdout = codecs.getwriter(locale.getpreferredencoding())(sys.stdout); \ line = u"\u0411\n"; print type(line), len(line); \ sys.stdout.write(line); print line' | cat None <type 'unicode'> 2 Б Б There's some more information on that page, well worth a read. A: Update: On Python 3.6 or later, printing Unicode strings to the console on Windows just works. So, upgrade to recent Python and you're done. At this point I recommend using 2to3 to update your code to Python 3.x if needed, and just dropping support for Python 2.x. Note that there has been no security support for any version of Python before 3.7 (including Python 2.7) since December 2021. If you really still need to support earlier versions of Python (including Python 2.7), you can use https://github.com/Drekin/win-unicode-console , which is based on, and uses the same APIs as the code in the answer that was previously linked here. (That link does include some information on Windows font configuration but I doubt it still applies to Windows 8 or later.) Note: despite other plausible-sounding answers that suggest changing the code page to 65001, that did not work prior to Python 3.8. (It does kind-of work since then, but as pointed out above, you don't need to do so for Python 3.6+ anyway.) Also, changing the default encoding using sys.setdefaultencoding is (still) not a good idea. A: Kind of related on the answer by J. F. Sebastian, but more direct. If you are having this problem when printing to the console/terminal, then do this: >set PYTHONIOENCODING=UTF-8 A: For Python 2 try: print unicode(string, 'unicode-escape') For Python 3 try: import os string = "002 Could've Would've Should've" os.system('echo ' + string) Or try win-unicode-console: pip install win-unicode-console py -mrun your_script.py A: TL;DR: print(yourstring.encode('ascii','replace').decode('ascii')) I ran into this myself, working on a Twitch chat (IRC) bot. (Python 2.7 latest) I wanted to parse chat messages in order to respond... msg = s.recv(1024).decode("utf-8") but also print them safely to the console in a human-readable format: print(msg.encode('ascii','replace').decode('ascii')) This corrected the issue of the bot throwing UnicodeEncodeError: 'charmap' errors and replaced the unicode characters with ?. A: If you're not interested in getting a reliable representation of the bad character(s) you might use something like this (working with python >= 2.6, including 3.x): from __future__ import print_function import sys def safeprint(s): try: print(s) except UnicodeEncodeError: if sys.version_info >= (3,): print(s.encode('utf8').decode(sys.stdout.encoding)) else: print(s.encode('utf8')) safeprint(u"\N{EM DASH}") The bad character(s) in the string will be converted in a representation which is printable by the Windows console. A: The below code will make Python output to console as UTF-8 even on Windows. 
The console will display the characters well on Windows 7, but on Windows XP it will not display them well; at least it will work, though, and, most important, you will have consistent output from your script on all platforms. You'll be able to redirect the output to a file. The code below was tested with Python 2.6 on Windows. #!/usr/bin/python # -*- coding: UTF-8 -*- import codecs, sys reload(sys) sys.setdefaultencoding('utf-8') print sys.getdefaultencoding() if sys.platform == 'win32': try: import win32console except: print "Python Win32 Extensions module is required.\n You can download it from https://sourceforge.net/projects/pywin32/ (x86 and x64 builds are available)\n" exit(-1) # win32console implementation of SetConsoleCP does not return a value # CP_UTF8 = 65001 win32console.SetConsoleCP(65001) if (win32console.GetConsoleCP() != 65001): raise Exception ("Cannot set console codepage to 65001 (UTF-8)") win32console.SetConsoleOutputCP(65001) if (win32console.GetConsoleOutputCP() != 65001): raise Exception ("Cannot set console output codepage to 65001 (UTF-8)") #import sys, codecs sys.stdout = codecs.getwriter('utf8')(sys.stdout) sys.stderr = codecs.getwriter('utf8')(sys.stderr) print "This is an Е乂αmp١ȅ testing Unicode support using Arabic, Latin, Cyrillic, Greek, Hebrew and CJK code points.\n" A: The cause of your problem is NOT the Win console not being willing to accept Unicode (it has done this since, I guess, Win2k by default). It is the default system encoding. Try this code and see what it gives you: import sys sys.getdefaultencoding() if it says ascii, there's your cause ;-) You have to create a file called sitecustomize.py and put it on the python path (I put it under /usr/lib/python2.5/site-packages, but that is different on Win - it is c:\python\lib\site-packages or something), with the following contents: import sys sys.setdefaultencoding('utf-8') and perhaps you might want to specify the encoding in your files as well: # -*- coding: UTF-8 -*- import sys,time Edit: more info can be found in the excellent Dive into Python book A: Python 3.6, Windows 7: There are several ways to launch Python: you could use the Python console (which has a Python logo on it) or the Windows console (which says cmd.exe on it). I could not print utf8 characters in the Windows console. Printing utf-8 characters threw me this error: OSError: [WinError 87] The parameter is incorrect Exception ignored in: <_io.TextIOWrapper name='<stdout>' mode='w' encoding='utf8'> OSError: [WinError 87] The parameter is incorrect After trying and failing to understand the answer above I discovered it was only a setting problem. Right-click on the title bar of the cmd console window, and on the Font tab choose Lucida Console.
Here is an example Python script scratch_1.py: s = "∞" print(s) If you run the script as follows, everything works as intended: python scratch_1.py ∞ However, if you run the following, then you get the same error as in the question: python scratch_1.py > temp.txt Traceback (most recent call last): File "C:\Users\Wok\AppData\Roaming\JetBrains\PyCharmCE2022.2\scratches\scratch_1.py", line 3, in <module> print(s) File "C:\Users\Wok\AppData\Local\Programs\Python\Python311\Lib\encodings\cp1252.py", line 19, in encode return codecs.charmap_encode(input,self.errors,encoding_table)[0] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ UnicodeEncodeError: 'charmap' codec can't encode character '\u221e' in position 0: character maps to <undefined> To solve this issue with the suggestion present in the original question, i.e. by replacing the erroneous characters with question marks ?, one can proceed as follows: s = "∞" try: print(s) except UnicodeEncodeError: output_str = s.encode("ascii", errors="replace").decode("ascii") print(output_str) It is important: * *to call decode(), so that the type of the output is str instead of bytes, *with the same encoding, here "ascii", to avoid the creation of mojibake. A: James Sulak asked, Is there any way I can make Python automatically print a ? instead of failing in this situation? Other solutions recommend we attempt to modify the Windows environment or replace Python's print() function. The answer below comes closer to fulfilling Sulak's request. Under Windows 7, Python 3.5 can be made to print Unicode without throwing a UnicodeEncodeError as follows:     In place of:    print(text)     substitute:     print(str(text).encode('utf-8')) Instead of throwing an exception, Python now displays unprintable Unicode characters as \xNN hex codes, e.g.:   Halmalo n\xe2\x80\x99\xc3\xa9tait plus qu\xe2\x80\x99un point noir Instead of   Halmalo n’était plus qu’un point noir Granted, the latter is preferable ceteris paribus, but otherwise the former is completely accurate for diagnostic messages. Because it displays Unicode as literal byte values the former may also assist in diagnosing encode/decode problems. Note: The str() call above is needed because otherwise encode() causes Python to reject a Unicode character as a tuple of numbers. A: The issue is with windows default encoding being set to cp1252, and need to be set to utf-8. (check PEP) Check default encoding using: import locale locale.getpreferredencoding() You can override locale settings import os if os.name == "nt": import _locale _locale._gdl_bak = _locale._getdefaultlocale _locale._getdefaultlocale = (lambda *args: (_locale._gdl_bak()[0], 'utf8')) referenced code from stack link
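One more present-day option, since this thread still gets visits: on Python 3.7+ the standard streams can be reconfigured at runtime, which covers both the interactive console and redirected output without touching environment variables (if I remember right, reconfigure() was added in 3.7, and PEP 540's PYTHONUTF8=1 environment variable arrived at the same time):

import sys
sys.stdout.reconfigure(encoding='utf-8', errors='replace')  # Python 3.7+
print('\u221e \N{EURO SIGN}')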
{ "language": "en", "url": "https://stackoverflow.com/questions/5419", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "170" }
Q: HTML comments break down I have a page that is generated which inserts an HTML comment near the top of the page. Inside the comment is a *nix-style command. <!-- command --option value --option2 value2 --option3 --> This comment breaks the page completely. What is wrong with the comment to cause this to happen, and why is this the case? A: Comments in the XML Spec from the w3.org : For compatibility, the string "--" (double-hyphen) MUST NOT occur within comments. A: If you really want to keep the comment in your page you could use this instead of an HTML comment: <div style="display:none">command --option value --option2 value2 --option3 </div> Or even <div class="comment">command --option value --option2 value2 --option3 </div> and specify: .comment {display:none;} in your stylesheet. A: Comments at the top of the page before <html> will throw IE into quirks mode, which could explain why the page breaks, if that's where your comment appears. For more information, check out the "Triggering different rendering modes" on this wikipedia page
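If the command really has to travel inside the page, one way to sidestep the double-hyphen rule without an HTML comment is an inert script block; browsers don't execute or display script elements whose type they don't recognize (the type value here is made up, anything non-standard works):

<script type="text/x-command">
command --option value --option2 value2 --option3
</script>

That keeps the -- sequences legal and still leaves the text easy to read out of the DOM if anything needs it later.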
{ "language": "en", "url": "https://stackoverflow.com/questions/5425", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "24" }
Q: Do people use the Hungarian Naming Conventions in the real world? Is it worth learning the convention or is it a bane to readability and maintainability? A: It is pointless (and distracting) but is in relatively heavy use at my company, at least for types like ints, strings, booleans, and doubles. Things like sValue, iCount, dAmount or fAmount, and bFlag are everywhere. Once upon a time there was a good reason for this convention. Now, it is a cancer. A: Considering that most people that use Hungarian Notation is following the misunderstood version of it, I'd say it's pretty pointless. If you want to use the original definition of it, it might make more sense, but other than that it is mostly syntactic sugar. If you read the Wikipedia article on the subject, you'll find two conflicting notations, Systems Hungarian Notation and Apps Hungarian Notation. The original, good, definition is the Apps Hungarian Notation, but most people use the Systems Hungarian Notation. As an example of the two, consider prefixing variables with l for length, a for area and v for volume. With such notation, the following expression makes sense: int vBox = aBottom * lVerticalSide; but this doesn't: int aBottom = lSide1; If you're mixing the prefixes, they're to be considered part of the equation, and volume = area * length is fine for a box, but copying a length value into an area variable should raise some red flags. Unfortunately, the other notation is less useful, where people prefix the variable names with the type of the value, like this: int iLength; int iVolume; int iArea; some people use n for number, or i for integer, f for float, s for string etc. The original prefix was meant to be used to spot problems in equations, but has somehow devolved into making the code slightly easier to read since you don't have to go look for the variable declaration. With todays smart editors where you can simply hover over any variable to find the full type, and not just an abbreviation for it, this type of hungarian notation has lost a lot of its meaning. But, you should make up your own mind. All I can say is that I don't use either. Edit Just to add a short notice, while I don't use Hungarian Notation, I do use a prefix, and it's the underscore. I prefix all private fields of classes with a _ and otherwise spell their names as I would a property, titlecase with the first letter uppercase. A: Sorry to follow up with a question, but does prefixing interfaces with "I" qualify as hungarian notation? If that is the case, then yes, a lot of people are using it in the real world. If not, ignore this. A: I see Hungarian Notation as a way to circumvent the capacity of our short term memories. According to psychologists, we can store approximately 7 plus-or-minus 2 chunks of information. The extra information added by including a prefix helps us by providing more details about the meaning of an identifier even with no other context. In other words, we can guess what a variable is for without seeing how it is used or declared. This can be avoided by applying oo techniques such as encapsulation and the single responsibility principle. I'm unaware of whether or not this has been studied empirically. I would hypothesize that the amount of effort increases dramatically when we try to understand classes with more than nine instance variables or methods with more than 9 local variables. A: When I see Hungarian discussion, I'm glad to see people thinking hard about how to make their code clearer, and how to mistakes more visible. 
That's exactly what we should all be doing! But don't forget that you have some powerful tools at your disposal besides naming. Extract Method If your methods are getting so long that your variable declarations have scrolled off the top of the screen, consider making your methods smaller. (If you have too many methods, consider a new class.) Strong typing If you find that you are taking zip codes stored in an integer variable and assigning them to a shoe size integer variable, consider making a class for zip codes and a class for shoe sizes. Then your bug will be caught at compile time, instead of requiring careful inspection by a human. When I do this, I usually find a bunch of zip code- and shoe size-specific logic that I've peppered around my code, which I can then move into my new classes. Suddenly all my code gets clearer, simpler, and protected from certain classes of bugs. Wow. To sum up: yes, think hard about how you use names in code to express your ideas clearly, but also look to the other powerful OO tools you can call on. A: Isn't scope more important than type these days, e.g. * *l for local *a for argument *m for member *g for global *etc With modern techniques of refactoring old code, search and replace of a symbol because you changed its type is tedious; the compiler will catch type changes, but it often will not catch incorrect use of scope, and sensible naming conventions help here. A: The Hungarian Naming Convention can be useful when used correctly; unfortunately it tends to be misused more often than not. Read Joel Spolsky's article Making Wrong Code Look Wrong for appropriate perspective and justification. Essentially, type-based Hungarian notation, where variables are prefixed with information about their type (e.g. whether an object is a string, a handle, an int, etc.), is mostly useless and generally just adds overhead with very little benefit. This, sadly, is the Hungarian notation most people are familiar with. However, the intent of Hungarian notation as envisioned is to add information on the "kind" of data the variable contains. This allows you to partition kinds of data from other kinds of data which shouldn't be allowed to be mixed together except, possibly, through some conversion process. For example, pixel based coordinates vs. coordinates in other units, or unsafe user input versus data from safe sources, etc. Look at it this way: if you find yourself spelunking through code to find out information on a variable then you probably need to adjust your naming scheme to contain that information; this is the essence of the Hungarian convention. Note that an alternative to Hungarian notation is to use more classes to show the intent of variable usage rather than relying on primitive types everywhere. For example, instead of having variable prefixes for unsafe user input, you can have a simple string wrapper class for unsafe user input, and a separate wrapper class for safe data. This has the advantage, in strongly typed languages, of having partitioning enforced by the compiler (even in less strongly typed languages you can usually add your own tripwire code) but adds a not insignificant amount of overhead. A: I don't use a very strict sense of hungarian notation, but I do find myself using it sparingly for some common custom objects to help identify them, and also I tend to prefix gui control objects with the type of control that they are. For example, labelFirstName, textFirstName, and buttonSubmit.
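To illustrate the "strong typing" and "wrapper class" suggestions a couple of answers up, here is a minimal sketch (the class names and the ship() call site are invented for the example) of letting the compiler, rather than a prefix, keep the two kinds of value apart:

// illustrative only
final class ZipCode {
    private final String value;
    ZipCode(String value) { this.value = value; }
    String value() { return value; }
}

final class ShoeSize {
    private final int value;
    ShoeSize(int value) { this.value = value; }
    int value() { return value; }
}

// order.ship(new ZipCode("90210"), new ShoeSize(42));  // hypothetical call site
// order.ship(new ShoeSize(42), new ZipCode("90210"));  // would not compile

With plain ints and a naming convention, swapping the two arguments is a bug a human has to spot; with wrapper types, the compiler rejects it.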
A: I still use Hungarian Notation when it comes to UI elements, where several UI elements are related to a particular object/value, e.g., lblFirstName for the label object, txtFirstName for the text box. I definitely can't name them both "FirstName" even if that is the concern/responsibility of both objects. How do others approach naming UI elements? A: I think hungarian notation is an interesting footnote along the 'path' to more readable code, and if done properly, is preferable to not doing it. In saying that though, I'd rather do away with it, and instead of this: int vBox = aBottom * lVerticalSide; write this: int boxVolume = bottomArea * verticalHeight; It's 2008. We don't have 80 character fixed width screens anymore! Also, if you're writing variable names which are much longer than that you should be looking at refactoring into objects or functions anyway. A: I use Hungarian Naming for UI elements like buttons, textboxes and labels. The main benefit is grouping in the Visual Studio Intellisense Popup. If I want to access my labels, I simply start typing lbl... and Visual Studio will suggest all my labels, nicely grouped together. However, after doing more and more Silverlight and WPF stuff, leveraging data binding, I don't even name all my controls anymore, since I don't have to reference them from code-behind (since there really isn't any codebehind anymore ;) A: What's wrong is mixing standards. What's right is making sure that everyone does the same thing. int Box = iBottom * nVerticalSide A: The original prefix was meant to be used to spot problems in equations, but has somehow devolved into making the code slightly easier to read since you don't have to go look for the variable declaration. With today's smart editors where you can simply hover over any variable to find the full type, and not just an abbreviation for it, this type of hungarian notation has lost a lot of its meaning. I'm breaking the habit a little bit but prefixing with the type can be useful in JavaScript that doesn't have strong variable typing. A: When using a dynamically typed language, I occasionally use Apps Hungarian. For statically typed languages I don't. See my explanation in the other thread. A: Hungarian notation is pointless in type-safe languages. e.g. A common prefix you will see in old Microsoft code is "lpsz" which means "long pointer to a zero-terminated string". Since the early 1700's we haven't used segmented architectures where short and long pointers exist, the normal string representation in C++ is always zero-terminated, and the compiler is type-safe so won't let us apply non-string operations to the string. Therefore none of this information is of any real use to a programmer - it's just more typing. However, I use a similar idea: prefixes that clarify the usage of a variable. The main ones are: * *m = member *c = const *s = static *v = volatile *p = pointer (and pp=pointer to pointer, etc) *i = index or iterator These can be combined, so a static member variable which is a pointer would be "mspName". Where are these useful? * *Where the usage is important, it is a good idea to constantly remind the programmer that a variable is (e.g.) a volatile or a pointer *Pointer dereferencing used to do my head in until I used the p prefix. Now it's really easy to know when you have an object (Orange), a pointer to an object (pOrange) or a pointer to a pointer to an object (ppOrange). To dereference an object, just put an asterisk in front of it for each p in its name.
Case solved, no more deref bugs! *In constructors I usually find that a parameter name is identical to a member variable's name (e.g. size). I prefer to use "mSize = size;" rather than "size = theSize" or "this.size = size". It is also much safer: I don't accidentally use "size = 1" (setting the parameter) when I meant to say "mSize = 1" (setting the member) *In loops, my iterator variables are all meaningful names. Most programmers use "i" or "index" and then have to make up new meaningless names ("j", "index2") when they want an inner loop. I use a meaningful name with an i prefix (iHospital, iWard, iPatient) so I always know what an iterator is iterating. *In loops, you can mix several related variables by using the same base name with different prefixes: Orange orange = pOrange[iOrange]; This also means you don't make array indexing errors (pApple[i] looks ok, but write it as pApple[iOrange] and the error is immediately obvious). *Many programmers will use my system without knowing it: by adding a lengthy suffix like "Index" or "Ptr" - there isn't any good reason to use a longer form than a single character IMHO, so I use "i" and "p". Less typing, more consistent, easier to read. This is a simple system which adds meaningful and useful information to code, and eliminates the possibility of many simple but common programming mistakes. A: I've been working for IBM for the past 6 months and I haven't seen it anywhere (thank god because I hate it.) I see either camelCase or c_style. thisMethodIsPrettyCool() this_method_is_pretty_cool() A: It depends on your language and environment. As a rule I wouldn't use it, unless the development environment you're in makes it hard to find the type of the variable. There are also two different types of Hungarian notation. See Joel's article. I can't find it (his names don't exactly make them easy to find), anyone have a link to the one I mean? Edit: Wedge has the article I mean in his post. A: The original form (The Right Hungarian Notation :) ) where the prefix means the kind (i.e. length, quantity) of value stored by the variable is OK, but not necessary in all types of applications. The popular form (The Wrong Hungarian Notation) where the prefix means the type (String, int) is useless in most modern programming languages. Especially with meaningless names like strA. I can't understand why people use meaningless names with long prefixes which give nothing. A: I use type based (Systems HN) for components (e.g. editFirstName, lblStatus etc) as it makes autocomplete work better. I sometimes use App HN for variables where the type information is insufficient. I.e. fpX indicates a fixed-point variable (int type, but can't be mixed and matched with an int), rawInput for user strings that haven't been validated, etc. A: Being a PHP programmer where it's very loosely typed, I don't make a point to use it. However I will occasionally identify something as an array or as an object depending on the size of the system and the scope of the variable.
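A couple of the answers above suggest replacing "Apps Hungarian" prefixes with small wrapper types, so the compiler rather than a naming convention keeps different kinds of data apart. As a rough illustration of that idea (the type and property names here are invented for the example, not taken from any particular codebase), a minimal C# sketch might look like this:

// Wrapping a primitive in a tiny class means a zip code can never be
// assigned to a shoe size by accident - the compiler rejects it.
public sealed class ZipCode
{
    public string Value { get; private set; }
    public ZipCode(string value) { Value = value; }
}

public sealed class ShoeSize
{
    public int Value { get; private set; }
    public ShoeSize(int value) { Value = value; }
}

// The same trick separates unsafe user input from data that has been encoded.
public sealed class UnsafeUserInput
{
    public string Raw { get; private set; }
    public UnsafeUserInput(string raw) { Raw = raw; }

    // The only way to get displayable text out is through the encoding step.
    // (HtmlEncode lives in System.Web, so that assembly needs to be referenced.)
    public string ToSafeHtml() { return System.Web.HttpUtility.HtmlEncode(Raw); }
}

The trade-off, as noted above, is extra boilerplate; whether it is worth it depends on how often the "wrong kind of data" bug actually bites in your codebase.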
{ "language": "en", "url": "https://stackoverflow.com/questions/5428", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "34" }
Q: Accessing a CONST attribute of a series of Classes This is how I wanted to do it, which would work in PHP 5.3.0+ <?php class MyClass { const CONSTANT = 'Const var'; } $classname = 'MyClass'; echo $classname::CONSTANT; // As of PHP 5.3.0 ?> But I'm restricted to using PHP 5.2.6. Can anyone think of a simple way to simulate this behavior without instantiating the class? A: You can accomplish this without using eval in pre-5.3 code. Just use the constant function: <?php class MyClass { const CONSTANT = 'Const var'; } $classname = 'MyClass'; echo constant("$classname::CONSTANT"); ?> A: If you absolutely need to access a constant like that, you can do this: <?php class MyClass { const CONSTANT = 'Const var'; } $classname = 'MyClass'; echo eval( 'return '.$classname.'::CONSTANT;' ); ?> But, if I were you, I'd try not to use eval.
{ "language": "en", "url": "https://stackoverflow.com/questions/5459", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: Telligent's Community Server The company I work for is wanting to add blog functionality to our website and they were looking to spend an awful lot of money to have some crap being built on top of a CMS they purchased (sitecore). I pointed them to Telligent's Community Server and we had a sales-like meeting today to get the Marketing folks on board. My question is if anyone has had issues working with Community Server, skinning it and extending it? I wanted to explain a bit why I am thinking Community Server: the company is wanting multiple blogs with multiple authors. I want to be out of the admin part of this as much as possible and didn't think there were too many engines where having multiple blogs didn't mean db work. I also like the other functionality that Community Server provides and think the company will find it useful, particularly the media section as right now we have some really shoddy way of dealing with whitepapers and stuff. edit: We are actually using the Sitecore blog module for a single blog on our intranet (which is actually what the CMS is serving). Some reasons why I don't like it for our public site are: they are on different servers, it doesn't support multiple authors, there is no built-in syndication, it is a little flimsy feeling to me from looking at the source and I personally think the other features of Community Server make its price tag worth it. another edit: Need to stick to .net software that runs on sql server in my company's case, but I don't mind seeing recommendations for others. ExpressionEngine looks promising, will try it out on my personal box. A: I've done quite a few projects using Community Server. If you're okay with the out-of-the-box functionality, or you don't mind sticking to the version you start with, I think you'll be very happy. The times I've run into headaches using CS are when the client wants functionality CS does not provide, but also insists on keeping the ability to upgrade to the latest version whenever Telligent releases an update. You can mostly support that by making all of your changes either in a separate project or by only modifying aspx/ascx files (no codebehinds). Some kind of merge is going to be required though no matter how well you plan it out. A: Community Server itself has been very solid for me, but if all you need is a blogging engine then it may be overkill. Skinning it, for example, is quite a bit of work (despite their quite powerful Chameleon theme engine). I'd probably look closer at one of the dedicated blog engines out there, like BlogEngine.NET, dasBlog or SubText, if that's all you need. Go with Community Server if you think you'll want more "community-focused" features like forums etc. A: You can also take a look at Telligent Graffiti CMS. http://graffiticms.com/ It supports multiple blogs and authors. Update: It's now open source and available at http://graffiticms.codeplex.com/ A: Community Server 2008.5 lets you add several members that can post articles. Also with Community Server 2008.5 you now have wikis along with forums and the blogs. This probably has one of the better web based admin control panels I've seen in a while. This lets you easily change several things including the site's theme (or skin). To me it is one of the most scalable applications I have seen in a while. We are using it for our site http://knowledgemgmtsolutions.com. A: Skinning is pretty straightforward, and the sidebar widgets aren't very difficult to create (if you don't mind building controls in code).
The widgets also allow options for the users to customize them in the control panel very easily. I doubt you'll find a strong community of widget builders for Community Server however. Nothing compared to the dev community for blogs like wordpress. I recommend starting templates from scratch and adding in CS controls as needed, to get the markup you prefer for styling and to use only what you need. Setting up different roles for users to post to different blogs is also very easy and requires no coding. You can have blog groups, and allow only certain users to post to certain blogs. A: Sitecore's Forum module is powered by Community Server and integrated with Sitecore CMS. A: Have you had a look at the Shared Source blog module for Sitecore? A: Expression Engine with the Multi-Site Manager works great for that kind of situation.
{ "language": "en", "url": "https://stackoverflow.com/questions/5460", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: How can I undo git reset --hard HEAD~1? Is it possible to undo the changes caused by the following command? If so, how? git reset --hard HEAD~1 A: If you're really lucky, like I was, you can go back into your text editor and hit 'undo'. I know that's not really a proper answer, but it saved me half a day's work so hopefully it'll do the same for someone else! A: In most cases, yes. Depending on the state your repository was in when you ran the command, the effects of git reset --hard can range from trivial to undo, to basically impossible. Below I have listed a range of different possible scenarios, and how you might recover from them. All my changes were committed, but now the commits are gone! This situation usually occurs when you run git reset with an argument, as in git reset --hard HEAD~. Don't worry, this is easy to recover from! If you just ran git reset and haven't done anything else since, you can get back to where you were with this one-liner: git reset --hard @{1} This resets your current branch whatever state it was in before the last time it was modified (in your case, the most recent modification to the branch would be the hard reset you are trying to undo). If, however, you have made other modifications to your branch since the reset, the one-liner above won't work. Instead, you should run git reflog <branchname> to see a list of all recent changes made to your branch (including resets). That list will look something like this: 7c169bd master@{0}: reset: moving to HEAD~ 3ae5027 master@{1}: commit: Changed file2 7c169bd master@{2}: commit: Some change 5eb37ca master@{3}: commit (initial): Initial commit Find the operation in this list that you want to "undo". In the example above, it would be the first line, the one that says "reset: moving to HEAD~". Then copy the representation of the commit before (below) that operation. In our case, that would be master@{1} (or 3ae5027, they both represent the same commit), and run git reset --hard <commit> to reset your current branch back to that commit. I staged my changes with git add, but never committed. Now my changes are gone! This is a bit trickier to recover from. git does have copies of the files you added, but since these copies were never tied to any particular commit you can't restore the changes all at once. Instead, you have to locate the individual files in git's database and restore them manually. You can do this using git fsck. For details on this, see Undo git reset --hard with uncommitted files in the staging area. I had changes to files in my working directory that I never staged with git add, and never committed. Now my changes are gone! Uh oh. I hate to tell you this, but you're probably out of luck. git doesn't store changes that you don't add or commit to it, and according to the documentation for git reset: --hard Resets the index and working tree. Any changes to tracked files in the working tree since <commit> are discarded. It's possible that you might be able to recover your changes with some sort of disk recovery utility or a professional data recovery service, but at this point that's probably more trouble than it's worth. A: What you want to do is to specify the sha1 of the commit you want to restore to. You can get the sha1 by examining the reflog (git reflog) and then doing git reset --hard <sha1 of desired commit> But don't wait too long... after a few weeks git will eventually see that commit as unreferenced and delete all the blobs. 
A: Made a tiny script to make it slightly easier to find the commit one is looking for: git fsck --lost-found | grep commit | cut -d ' ' -f 3 | xargs -i git show \{\} | egrep '^commit |Date:' Yes, it can be made considerably prettier with awk or something like it, but it's simple and I just needed it. Might save someone else 30 seconds. A: I've just did a hard reset on wrong project. What saved my life was Eclipse's local history. IntelliJ Idea is said to have one, too, and so may your editor, it's worth checking: * *Eclipse help topic on Local History *http://wiki.eclipse.org/FAQ_Where_is_the_workspace_local_history_stored%3F A: Example of IRL case: $ git fsck --lost-found Checking object directories: 100% (256/256), done. Checking objects: 100% (3/3), done. dangling blob 025cab9725ccc00fbd7202da543f556c146cb119 dangling blob 84e9af799c2f5f08fb50874e5be7fb5cb7aa7c1b dangling blob 85f4d1a289e094012819d9732f017c7805ee85b4 dangling blob 8f654d1cd425da7389d12c17dd2d88d318496d98 dangling blob 9183b84bbd292dcc238ca546dab896e073432933 dangling blob 1448ee51d0ea16f259371b32a557b60f908d15ee dangling blob 95372cef6148d980ab1d7539ee6fbb44f5e87e22 dangling blob 9b3bf9fb1ee82c6d6d5ec9149e38fe53d4151fbd dangling blob 2b21002ca449a9e30dbb87e535fbd4e65bac18f7 dangling blob 2fff2f8e4ea6408ac84a8560477aa00583002e66 dangling blob 333e76340b59a944456b4befd0e007c2e23ab37b dangling blob b87163c8def315d40721e592f15c2192a33816bb dangling blob c22aafb90358f6bf22577d1ae077ad89d9eea0a7 dangling blob c6ef78dd64c886e9c9895e2fc4556e69e4fbb133 dangling blob 4a71f9ff8262701171d42559a283c751fea6a201 dangling blob 6b762d368f44ddd441e5b8eae6a7b611335b49a2 dangling blob 724d23914b48443b19eada79c3eb1813c3c67fed dangling blob 749ffc9a412e7584245af5106e78167b9480a27b dangling commit f6ce1a403399772d4146d306d5763f3f5715cb5a <- it's this one $ git show f6ce1a403399772d4146d306d5763f3f5715cb5a commit f6ce1a403399772d4146d306d5763f3f5715cb5a Author: Stian Gudmundsen Høiland <stian@Stians-Mac-mini.local> Date: Wed Aug 15 08:41:30 2012 +0200 *MY COMMIT MESSAGE IS DISPLAYED HERE* diff --git a/Some.file b/Some.file new file mode 100644 index 0000000..15baeba --- /dev/null +++ b/Some.file *THE WHOLE COMMIT IS DISPLAYED HERE* $ git rebase f6ce1a403399772d4146d306d5763f3f5715cb5a First, rewinding head to replay your work on top of it... Fast-forwarded master to f6ce1a403399772d4146d306d5763f3f5715cb5a. A: My problem is almost similar. I have uncommitted files before I enter git reset --hard. Thankfully. I managed to skip all these resources. After I noticed that I can just undo (ctrl-z for windows/linux cmd-shift-z for mac). I just want to add this to all of the answers above. Note. Its not possible to undo unopened files. * *How can I undo git reset --hard HEAD~1? 
*Undo git reset --hard with uncommitted files in the staging area *Recovering added file after doing git reset --hard HEAD^ *Medium article: How To: Recover From a Git Hard Reset (Or, This is Why We Probably Shouldn't but Totally Can Have Nice Things) A: git reflog and back to the last HEAD 6a56624 (HEAD -> master) HEAD@{0}: reset: moving to HEAD~3 1a9bf73 HEAD@{1}: commit: add changes in model generate binary A: git reflog * *Find your commit sha in the list then copy and paste it into this command: git cherry-pick <the sha> A: This has saved my life: https://medium.com/@CarrieGuss/how-to-recover-from-a-git-hard-reset-b830b5e3f60c Basically you need to run: for blob in $(git fsck --lost-found | awk '$2 == "blob" { print $3 }'); do git cat-file -p $blob > $blob.txt; done Then manually going through the pain to re-organise your files to the correct structure. Takeaway: Never use git reset --hard if you don't completely 100% understand how it works; best not to use it. A: The answer is hidden in the detailed response above, you can simply do: $> git reset --hard HEAD@{1} (See the output of git reflog show) A: If you are using a JetBrains IDE (anything IntelliJ based), you can recover even your uncommitted changes via their "Local History" feature. Right-click on your top-level directory in your file tree, find "Local History" in the context menu, and choose "Show History". This will open up a view where your recent edits can be found, and once you have found the revision you want to go back to, right click on it and click "Revert". A: If you have not yet garbage collected your repository (e.g. using git repack -d or git gc, but note that garbage collection can also happen automatically), then your commit is still there – it's just no longer reachable through the HEAD. You can try to find your commit by looking through the output of git fsck --lost-found. Newer versions of Git have something called the "reflog", which is a log of all changes that are made to the refs (as opposed to changes that are made to the repository contents). So, for example, every time you switch your HEAD (i.e. every time you do a git checkout to switch branches) that will be logged. And, of course, your git reset also manipulated the HEAD, so it was also logged. You can access older states of your refs in a similar way that you can access older states of your repository, by using an @ sign instead of a ~, like git reset HEAD@{1}. It took me a while to understand what the difference is between HEAD@{1} and HEAD~1, so here is a little explanation: git init git commit --allow-empty -mOne git commit --allow-empty -mTwo git checkout -b anotherbranch git commit --allow-empty -mThree git checkout master # This changes the HEAD, but not the repository contents git show HEAD~1 # => One git show HEAD@{1} # => Three git reflog So, HEAD~1 means "go to the commit before the commit that HEAD currently points at", while HEAD@{1} means "go to the commit that HEAD pointed at before it pointed at where it currently points at". That will easily allow you to find your lost commit and recover it. A: Pat Notz is correct. You can get the commit back so long as it's been within a few days. git only garbage collects after about a month or so unless you explicitly tell it to remove newer blobs.
$ git init Initialized empty Git repository in .git/ $ echo "testing reset" > file1 $ git add file1 $ git commit -m 'added file1' Created initial commit 1a75c1d: added file1 1 files changed, 1 insertions(+), 0 deletions(-) create mode 100644 file1 $ echo "added new file" > file2 $ git add file2 $ git commit -m 'added file2' Created commit f6e5064: added file2 1 files changed, 1 insertions(+), 0 deletions(-) create mode 100644 file2 $ git reset --hard HEAD^ HEAD is now at 1a75c1d... added file1 $ cat file2 cat: file2: No such file or directory $ git reflog 1a75c1d... HEAD@{0}: reset --hard HEAD^: updating HEAD f6e5064... HEAD@{1}: commit: added file2 $ git reset --hard f6e5064 HEAD is now at f6e5064... added file2 $ cat file2 added new file You can see in the example that file2 was removed as a result of the hard reset, but was put back in place when I reset via the reflog. A: Before answering, let's add some background explaining what this HEAD is. First of all what is HEAD? HEAD is simply a reference to the current commit (latest) on the current branch. There can only be a single HEAD at any given time. (excluding git worktree) The content of HEAD is stored inside .git/HEAD and it contains the 40-byte SHA-1 of the current commit. detached HEAD If you are not on the latest commit - meaning that HEAD is pointing to a prior commit in history - it's called detached HEAD. On the command line it will look like this - a SHA-1 instead of the branch name, since the HEAD is not pointing to the tip of the current branch. A few options on how to recover from a detached HEAD: git checkout git checkout <commit_id> git checkout -b <new branch> <commit_id> git checkout HEAD~X // x is the number of commits to go back This will check out a new branch pointing to the desired commit. This command will check out a given commit. At this point you can create a branch and start to work from this point on. # Checkout a given commit. # Doing so will result in a `detached HEAD` which means that the `HEAD` # is not pointing to the latest so you will need to checkout a branch # in order to be able to update the code. git checkout <commit-id> # create a new branch forked to the given commit git checkout -b <branch name> git reflog You can always use the reflog as well. git reflog will display any change which updated the HEAD and checking out the desired reflog entry will set the HEAD back to this commit. Every time the HEAD is modified there will be a new entry in the reflog git reflog git checkout HEAD@{...} This will get you back to your desired commit git reset HEAD --hard <commit_id> "Move" your head back to the desired commit. # This will destroy any local modifications. # Don't do it if you have uncommitted work you want to keep. git reset --hard 0d1d7fc32 # Alternatively, if there's work to keep: git stash git reset --hard 0d1d7fc32 git stash pop # This saves the modifications, then reapplies that patch after resetting. # You could get merge conflicts, if you've modified things which were # changed since the commit you reset to. * *Note: (Since Git 2.7) you can also use git rebase --no-autostash. git revert <sha-1> "Undo" the given commit or commit range. The revert command will "undo" any changes made in the given commit. A new commit with the undo patch will be committed while the original commit will remain in the history as well. # add new commit with the undo of the original one. # the <sha-1> can be any commit(s) or commit range git revert <sha-1> This schema illustrates which command does what.
As you can see, reset and checkout modify the HEAD. A: It is possible to recover it if Git hasn't garbage collected yet. Get an overview of dangling commits with fsck: $ git fsck --lost-found dangling commit b72e67a9bb3f1fc1b64528bcce031af4f0d6fcbf Recover the dangling commit with rebase: $ git rebase b72e67a9bb3f1fc1b64528bcce031af4f0d6fcbf A: As far as I know, --hard will discard uncommitted changes, since these aren't tracked by git. But you can undo the discarded commit. $ git reflog will list: b0d059c HEAD@{0}: reset: moving to HEAD~1 4bac331 HEAD@{1}: commit: added level introduction.... .... where 4bac331 is the discarded commit. Now just move the head to that commit: $ git reset --hard 4bac331 A: I know this is an old thread... but as many people are searching for ways to undo stuff in Git, I still think it may be a good idea to continue giving tips here. When you do a "git add" or move anything from the top left to the bottom left in git gui, the content of the file is stored in a blob and the file content can be recovered from that blob. So it is possible to recover a file even if it was not committed, but it has to have been added. git init echo hello >> test.txt git add test.txt Now the blob is created but it is referenced by the index so it will not be listed with git fsck until we reset. So we reset... git reset --hard git fsck you will get a dangling blob ce013625030ba8dba906f756967f9e9ca394464a git show ce01362 will give you the file content "hello" back To find unreferenced commits I found a tip somewhere suggesting this: gitk --all $(git log -g --pretty=format:%h) I have it as a tool in git gui and it is very handy. A: git reset --hard - you can use this to revert one page, and after that you can stash or pull everything from origin again A: Note: this answer is only valid if you use IDEs like IntelliJ. I recently faced a similar issue where I neither staged my changes nor committed them. There is an option of local history. I was able to revert changes from the local history of IntelliJ (ref). Hope it helps someone.
{ "language": "en", "url": "https://stackoverflow.com/questions/5473", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1529" }
Q: How to specify javascript to run when ModalPopupExtender is shown The ASP.NET AJAX ModalPopupExtender has OnCancelScript and OnOkScript properties, but it doesn't seem to have an OnShowScript property. I'd like to specify a javascript function to run each time the popup is shown. In past situations, I set the TargetControlID to a dummy control and provide my own control that first does some JS code and then uses the JS methods to show the popup. But in this case, I am showing the popup from both client and server side code. Anyone know of a way to do this? BTW, I needed this because I have a textbox in the modal that I want to make a TinyMCE editor. But the TinyMCE init script doesn't work on invisible textboxes, so I had to find a way to run it at the time the modal was shown. A: hmmm... I'm pretty sure that there's a shown event for the MPE... this is off the top of my head, but I think you can add an event handler to the shown event on page_load function pageLoad() { var popup = $find('ModalPopupClientID'); popup.add_shown(SetFocus); } function SetFocus() { $get('TriggerClientId').focus(); } I'm not sure, though, if this will help you with calling it from the server side A: If you are using a button or hyperlink or something to trigger the popup to show, could you also add an additional handler to the onClick event of the trigger which should still fire the modal popup and run the javascript at the same time? A: The ModalPopupExtender modifies the button/hyperlink that you tell it to be the "trigger" element. The onclick script I add triggers before the popup is shown. I want script to fire after the popup is shown. Also, that still leaves me with the problem of when I show the modal from server side. A: TinyMCE works on an invisible textbox if you hide it with css (display:none;). Add an "onclick" event on the TargetControlID to init TinyMCE if you are also using an UpdatePanel. A: var launch = false; function launchModal() { launch = true; } function pageLoad() { if (launch) { var ModalPedimento = $find('ModalPopupExtender_Pedimento'); ModalPedimento.show(); ModalPedimento.add_shown(SetFocus); } } function SetFocus() { $get('TriggerClientId').focus(); } A: For two modal forms: var launch = false; var NameObject = ''; function launchModal(ModalPopupExtender) { launch = true; NameObject = ModalPopupExtender; } function pageLoad() { if (launch) { var ModalObject = $find(NameObject); ModalObject.show(); ModalObject.add_shown(SetFocus); } } function SetFocus() { $get('TriggerClientId').focus(); } Server side (code-behind): protected void btnNuevo_Click(object sender, EventArgs e) { // To bring up the modal form from the server side ScriptManager.RegisterStartupScript(Page, Page.GetType(), "key", "<script>launchModal('" + ModalPopupExtender_Factura.ID.ToString() + "');</script>", false); } A: Here's a simple way to do it in markup: <ajaxToolkit:ModalPopupExtender ID="ModalPopupExtender2" runat="server" TargetControlID="lnk_OpenGame" PopupControlID="Panel1" BehaviorID="SilverPracticeBehaviorID" > <Animations> <OnShown> <ScriptAction Script="InitializeGame();" /> </OnShown> </Animations> </ajaxToolkit:ModalPopupExtender> A: You should use the BehaviorID value mpeBID of your ModalPopupExtender. function pageLoad() { $find('mpeBID').add_shown(HideMediaPlayer); } function HideMediaPlayer() { var divMovie = $get('<%=divMovie.ClientID%>'); divMovie.style.display = "none"; }
{ "language": "en", "url": "https://stackoverflow.com/questions/5482", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "38" }
Q: Alternative Hostname for an IIS web site for internal access only I'm using IIS in Windows 2003 Server for a SharePoint intranet. External incoming requests will be using the host header portal.mycompany.com and be forced to use SSL. I was wondering if there's a way to set up an alternate host header such as http://internalportal/ which only accepts requests from the internal network, but doesn't force the users to use SSL. Any recommendations for how to set this up? A: Daniel, keep in mind that just because something is possible in IIS, and via any number of off-box solutions (like hardware load balancers and SSL), doesn't mean that it is supported by SharePoint, or that it is implemented in the same way. You can do what you are asking for, however you should do it via SharePoint Central Administration, and "Create or Extend a Web Application" and then "Extend an Existing Application". In this way you can create a new web site (in IIS) for accessing your existing SharePoint Web Application, one that can be accessed via a different host header, port, SSL setting, authentication mechanism, etc. As a general rule, if you can do something in IIS AND in SharePoint, you should do it only in SharePoint. A: Assuming that http://internalportal/ wasn't accessible from outside the company, you could set up two websites in IIS. The first site, configured to use a host header value of 'portal.mycompany.com', would require SSL. The second site, configured to use a host header value of 'internalportal', would not require SSL. The host header value is configured under 'Web Site' -> 'Advanced'. Having a hardware load balancer makes things much easier. The site on the load balancer is set up to require SSL, and your websites in IIS are set up not to require SSL. A: You could just add a second host header and internal IP address to the site for internal non-ssl access 172.16.3.1:443:portal.mycompany.com 172.16.3.2:80:internalportal
{ "language": "en", "url": "https://stackoverflow.com/questions/5494", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Does it still make sense to learn low level WinAPI programming? Does it make sense, having all of the C#-managed-bliss, to go back to Petzold's Programming Windows and try to produce code w/ pure WinAPI? What can be learn from it? Isn't it just too outdated to be useful? A: Analogy: If you build cars for a living (programming), then its very pertinent to know how the engine works (Win32). A: Simple answer, YES. A: This is the answer to any question that is like.. "does it make sense to learn a low level language/api X even when a higher level language/api Y is there" YES You are able to boot up your Windows PC (or any other OS) and ask this question in SO because a couple of guys in Microsoft wrote 16-bit assembly code that loads your OS. Your browser works because someone wrote an OS kernel in C that serves all your browser's requests. It goes all the way up to scripting languages. Big or small, there is always a market and opportunity to write something in any level of abstraction. You just have to like it and fit in the right job. No api/language at any level of abstraction is irrelevent unless there is a better one competing at the same level. Another way of looking at it: A good example from one of Michael Abrash's book: A C programmer was given the task of writing a function to clear the screen. Since C was a better (higher level) abstraction over assembly and all, the programmer only knew C and knew it well. He did his best - he moved the cursor to each location on the screen and cleared the character there. He optimized the loop and made sure it ran as fast as it could. But still it was slow... until some guy came in and said there was some BIOS/VGA instruction or something that could clear the screen instantly. It always helps to know what you are walking on. A: Yes, for a few reasons: 1) .net wraps Win32 code. .net is usually a superior system to code against, but having some knowledge of the underlying Win32 layer (oops, WinAPI now that there is 64-bit code too) bolsters your knowledge of what is really happening. 2) in this economy, it is better to have some advantages over the other guy when you are looking for a job. Some WinAPI experience may provide this for you. 3) some system aspects are not available through the .net framework yet, and if you want to access those features you will need to use p/invoke (see http://www.pinvoke.net for some help there). Having at least a smattering of WinAPI experience will make your p/invoke development effort a lot more efficient. 4) (added) Now that Win8 has been around for awhile, it is still built on top of the WinAPI. iOS, Android, OS/X, and Linux are all out there, but the WinAPI will still be out there for many many years. A: This question is bordering on religious :) But I'll give my thoughts anyway. I do see value in learing the Win32 API. Most, if not all, GUI libraries (managed or unmanaged) result in calls to the Win32 API. Even the most thorough libraries don't cover 100% of the API, and hence there are always gaps which need to be plugged by direct API calls or P/invoking. Some of the names of the wrappers around the API calls have similar names to the underlying API calls, but those names aren't exactly self-documenting. So understanding the underlying API, and the terminology used therein, will aid in understanding the wrapper APIs and what they actually do. 
Plus, if you understand the nature of the underlying APIs that are used by frameworks, then you will make better choices with regards to which library functionality you should use in a given scenario. Cheers! A: Learning a new programming language or technology is for one of three reasons: 1. Need: you're starting a project for building a web application and you don't know anything about ASP.NET 2. Enthusiasm: you're very excited about ASP.NET MVC. why not try that? 3. Free time: but who has that anyway. The best reason to learn something new is Need. If you need to do something that the .NET framework can't do (like performance for example) then WinAPI is your solution. Until then we keep ourself busy with learning about .NET A: For most needs on the desktop you wont need to know the Win32, however there is a LOT of Win32 not in .NET, but it is in the outlaying stuff that may end up being less than 1% of your application. USB support, HID support, Windows Media Foundation just off the top of my head. There are many cool Vista API's only available from Win32. You will do yourself a large favor by learning how to do interop with a Win32 API, if you do desktop programing, because when you do need to call Win32, and you will, you won't spend weeks scratching your head. A: Personally I don't really like the Win32 API but there's value in learning it as the API will allow more control and efficiency using the GUI than a language like Visual Basic, and I believe that if you're going to make a living writing software you should know the API even if you don't use it directly. This is for reasons similar to the reasons it's good to learn C, like how a strcpy takes more time than copying an integer, or why you should use pointers to arrays as function parameters instead of arrays by value. A: Learning C or a lower level language can definitely be useful. However, I don't see any obvious advantage in using the unmanaged WinAPI. A: I've seen low level Windows API code... it ain't pretty... I wish I could unlearn it. I think it benefits to learn low level as in C, as you gain a better understanding of the hardware architecture and how all that stuff works. Learning old Windows API... I think that stuff can be left to the people at Microsoft who may need to learn it to build higher level languages and API... they built it, let them suffer with it ;-) However, if you happen to find a situation where you feel you just can't do what you need to do in a higher level language (few and far between), then perhaps start the dangerous dive into that world. A: yes. take a look at uTorrent, an amazing piece of software efficiency. Half of it's small size is due to the fact that much of it's core components were re-written to not use gargatuian libraries. Much of this couldn't be done without understanding how these libraries interface with the lower level API's A: I kept to standard C/C++ for years before learning Win32 API, and to be quite blunt, the "learning Win32 API" part is not the best technical experience of my life. In one hand Win32 API is quite cool. It's like an extension of the C standard API (who needs fopen when you can have CreateFile. But I guess UNIX/Linux/WhateverOS have the same gizmo functions. Anyway, in Unix/Linux, they have the "Everything is a file". In Windows, they have the "Everything is a... Window" (no kidding! See CreateWindow!). In the other hand, this is a legacy API. You will be dealing with raw C, and raw C madness. 
* *Like telling one's structure its own size to pass through a void * pointer to some Win32 function. *Messaging can be quite confusing, too: Mixing C++ objects with Win32 windows lead to very interesting examples of Chicken or Egg problem (funny moments when you write a kind of delete this ; in a class method). *Having to subclass a WinProc when you're more familiar with object inheritance is head-splitting and less than optimal. *And of course, there is the joy of "Why in this fracking world they did this thing this way ??" moments when you strike your keyboard with your head once too many and get back home with keys engraved in your forehead, just because someone thought it more logical to write an API to enable the changing of the color of a "Window", not by changing one of its properties, but by asking it to its parent window. *etc. In the last hand (three hands ???), consider that some people working with legacy APIs are themselves using legacy code styling. The moment you hear "const is for dummies" or "I don't use namespaces because they decrease the runtime speed", or the even better "Hey, who needs C++? I code in my own brand of object-oriented C!!!" (No kidding... In a professional environment, and the result was quite a sight...), you'll feel the kind of dread only condemned feel in front of the guillotine. So... All in all, it's an interesting experience. Edit After re-reading this post, I see it could be seen as overly negative. It is not. It is sometimes interesting (as well as frustrating) to know how the things work under the hood. You'll understand that, despite enormous (impossible?) constraints, the Win32 API team did wonderful work to be sure everything, from you "olde Win16 program" to your "last Win64 over-the-top application", can work together, in the past, now, and in the future. The question is: Do you really want to? Because spending weeks to do things that could be done (and done better) in other more high-level and/or object-oriented API can be quite de-motivational (real life experience: 3 weeks for Win API, against 4 hours in three other languages and/or libraries). Anyway, you'll find Raymond Chen's Blog very interesting because of his insider's view on both Win API and its evolution through the years: https://blogs.msdn.microsoft.com/oldnewthing/ A: It's important to know what is available with the Windows API. I don't think you need to crank out code with it, but you should know how it works. The .NET Framework contains a lot of functionality, but it doesn't provide managed code equivalents for the entire Windows API. Sometimes you have to get a bit closer to the metal, and knowing what's down there and how it behaves will give you a better understanding of how to use it. A: This is really the same as the question, should I learn a low level language like C (or even assembler). Coding in it is certainly slower (though of course the result is much faster), but its true advantage is you gain an insight into what is happening at close to the system level, rather than than just understanding someone else's metaphor for what is going on. It can also be better when things won't work well, or fast enough or with the sort of granularity that you need. (And do at least some subclassing and superclassing.) A: I'll put it this way. I don't like programming to the Win32 API. It can be a pain compared to managed code. BUT, I'm glad I know it because I can write programs that otherwise I wouldn't be able to. I can write programs that other people can't. 
Plus it gives you more insight into what your managed code is doing behind the scenes. A: The amount of value you get out of learning the Win32 API, (aside from the sorts of general insights you get from learning about how the nuts and bolts of the machine fit together) depends on what you're trying to achieve. A lot of the Win32 API has been wrapped nicely in .NET library classes, but not all of it. If for instance you're looking to do some serious audio programming, that portion of the Win32 API would be an excellent subject of study because only the most basic of operations are available from .NET classes. Last I checked even the managed DirectX DirectSound library was awful. At the risk of shameless self-promotion.... I just came across a situation where the Win32 API was my only option. I want to have different tooltips on each item in a listbox. I wrote up how I did it on this question. A: Even in very very high level languages you still make use of the API. Why? Well not every aspect of the API has been replicated by the various libraries, frameworks, etc. You need to learn the API for as long as you will need the API to accomplish what you are trying to do. (And no longer.) A: Apart from some very special cases when you need direct access to APIs, I would say NO. There is considerable time and effort required to learn to implement the native API calls correctly and the returning value is just not worth it. I would rather spend the time learning some new hot technology or framework that will make your life easier and programming less painful. Not decades-old obsolete COM libraries that nobody really uses anymore (sorry to COM users). Please don't stone me for this view. I know a lot of engineers here have really curious souls and there is nothing wrong with learning how things work. Curiousity is good and really helps understanding. But from a managerial point of view, I would rather spend a week learning how to develop Android apps than how to calls OLEs or COMs. A: Absolutely. When nobody knows the low level, who will update and write the high level languages? Also, when you understand the low level stuff, you can write more efficient code in a higher level language, and also debug more efficiently. A: The native APIs are the "real" operating system APIs. The .NET library is (with few exceptions) nothing more than a fancy wrapper around them. So yes, I'd say that anybody who can understand .NET with all its complexity, can understand relatively mundane things like talking to the API without the benefit of a middle-man. Just try to do DLL Injection from managed code. It can't be done. You will be forced to write native code for this, for windowing tweaks, for real subclassing, and a dozen other things. So yes: you should (must) know both. Edit: even if you plan to use P/Invoke. A: On the assumption that you're building apps targeted at Windows: * *it can sure be informative to understand lower levels of the system - how they work, how your code interacts with them (even if only indirectly), and where you have additional options that aren't available in the higher-level abstractions *there are times when your code might not be as efficient, high-performance or precise enough for your requirements *However, in more and more cases, folks like us (who never learned "unmanaged coding") will be able to pull off the programming we're trying to do without "learning" Win32. 
*Further, there's plenty of sites that provide working samples, code fragments and even fully-functional source code that you can "leverage" (borrow, plagiarize - but check that you're complying with any re-use license or copyright!) to fill in any gaps that aren't handled by the .NET framework class libraries (or the libraries that you can download or license). *If you can pull off the feats you need without messing around in Win32, and you're doing a good job of developing well-formed, readable managed code, then I'd say mastering .NET would be a better choice than spreading yourself thin over two very different environments. *If you frequently need to leverage those features of Windows that haven't received good Framework class library coverage, then by all means, learn the skills you need. *I've personally spent far too much time worrying about the "other areas" of coding that I'm supposed to understand to produce "good programs", but there's plenty of masochists out there that think everyone's needs and desires are like their own. Misery loves company. :) On the assumption that you're building apps for the "Web 2.0" world, or that would be just as useful/beneficial to *NIX & MacOS users: * *Stick with languages and compilers that target as many cross-platform environments as possible. *pure .NET in Visual Studio is better than Win32 obviously, but developing against the MONO libraries, perhaps using the Sharp Develop IDE, is probably an even better approach. *you could also spend your time learning Java, and those skills would transfer very well to C# programming (plus the Java code would theoretically run on any platform with the matching JRE). I've heard it said that Java is more like "write once, debug everywhere", but that's probably as true as (or even more so than) C#. A: If you're planning to develop a cross-platform application and you use win32, then your application could easily run on Linux through WINE. This results in a highly maintainable application. This is one of the advantages of learning win32.
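Since several of the answers above mention P/Invoke as the bridge between managed code and the raw API, here is a minimal sketch of what that actually looks like in C#. MessageBox is just a stand-in for whichever user32/kernel32 entry point the Framework doesn't wrap for you; the point is only the shape of the declaration, not this particular call:

using System;
using System.Runtime.InteropServices;

class Win32Demo
{
    // Declaration mirrors the native signature:
    // int MessageBox(HWND hWnd, LPCWSTR lpText, LPCWSTR lpCaption, UINT uType);
    [DllImport("user32.dll", CharSet = CharSet.Unicode)]
    static extern int MessageBox(IntPtr hWnd, string text, string caption, uint type);

    static void Main()
    {
        // 0 == MB_OK; the return value tells you which button was pressed.
        MessageBox(IntPtr.Zero, "Hello from user32.dll", "P/Invoke sketch", 0);
    }
}

Even if you never write declarations like this by hand (pinvoke.net, mentioned above, has ready-made ones), being able to read the underlying native signature is exactly the kind of Win32 literacy the answers above are arguing for.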
{ "language": "en", "url": "https://stackoverflow.com/questions/5507", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "55" }
Q: Rational Purify failing to jump to memory leaks So my company uses a delightfully buggy program called Rational Purify (as a plugin to Microsoft Visual Developer Studio) to manage memory leaks. The program is designed to let you click on a memory leak after you have encountered it, and then jump to the line that the leak occurs on. Unfortunately Purify is malfunctioning and will not jump to the place that the leak occurred; it only mentions the class and method that the leak occurs in. Unfortunately, sometimes this is about as useful as hiring a guide to help you hunt bears and having him point to the forest and tell you there are bears there. Does anyone with Purify experience have any idea how I might fix this problem or have a good manual to look through? A: Generally you have two options. One is to exclude modules (DLLs) from instrumentation in Purify; it helps sometimes. The second is to get BoundsChecker; this does compile-time instrumentation, which is much slower, but the level of detail is an order of magnitude better. We generally use Purify on check-in, sanity checking, and BoundsChecker when we know a bug/crash exists. BoundsChecker has some nice features like instrumenting only files A.cpp & B.cpp, excluding all the rest. Be aware neither of these two applications function on 64 bit operating systems, and BoundsChecker will not install on a 64 bit OS. Most frustrating if you make the switch to native 64 bit development with 32 bit back port! A: Purify is like a Swiss Army knife. If you know how to use it, you will get some results, not the best but still results. If you don't, it will crash, because it is just another program running on Windows. In the end you will need a lot of patience, rebuilds and a bit of luck. A: Purify comes with a script called ScanVSSolutionForPurifyPlus.pl which will ensure that your project files have all the right settings for Purify to work properly. If you haven't run it, give it a go. (I've personally used ScanVSSolutionForPurifyPlus.pl on a large solution, and it worked like a charm. One caveat: when you give it the name of your .sln file, you might need to give it the full pathname.) A: Are you sure you have a debug build? Or rather, do you have all PDBs enabled? Try WinDbg on your executable and check with the !lmi command what is visible. Is the whole code properly instrumented? Also consider using something else like the free Visual Leak Detector or Microsoft's tool LeakDiag. A: I used Purify about 5 years ago. It was really flaky then. They kept promising to fix all the bugs in the 'next release'. We gave up on it in the end. One can only wonder if they used their own QA tools on their products. Oh the irony...
{ "language": "en", "url": "https://stackoverflow.com/questions/5509", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "0" }
Q: Numeric Data Entry in WPF How are you handling the entry of numeric values in WPF applications? Without a NumericUpDown control, I've been using a TextBox and handling its PreviewKeyDown event with the code below, but it's pretty ugly. Has anyone found a more graceful way to get numeric data from the user without relying on a third-party control? private void NumericEditPreviewKeyDown(object sender, KeyEventArgs e) { bool isNumPadNumeric = (e.Key >= Key.NumPad0 && e.Key <= Key.NumPad9) || e.Key == Key.Decimal; bool isNumeric = (e.Key >= Key.D0 && e.Key <= Key.D9) || e.Key == Key.OemPeriod; if ((isNumeric || isNumPadNumeric) && Keyboard.Modifiers != ModifierKeys.None) { e.Handled = true; return; } bool isControl = ((Keyboard.Modifiers != ModifierKeys.None && Keyboard.Modifiers != ModifierKeys.Shift) || e.Key == Key.Back || e.Key == Key.Delete || e.Key == Key.Insert || e.Key == Key.Down || e.Key == Key.Left || e.Key == Key.Right || e.Key == Key.Up || e.Key == Key.Tab || e.Key == Key.PageDown || e.Key == Key.PageUp || e.Key == Key.Enter || e.Key == Key.Return || e.Key == Key.Escape || e.Key == Key.Home || e.Key == Key.End); e.Handled = !isControl && !isNumeric && !isNumPadNumeric; } A: How about: protected override void OnPreviewTextInput(System.Windows.Input.TextCompositionEventArgs e) { e.Handled = !AreAllValidNumericChars(e.Text); base.OnPreviewTextInput(e); } private bool AreAllValidNumericChars(string str) { foreach(char c in str) { if(!Char.IsNumber(c)) return false; } return true; } A: Why don't you just try using the KeyDown event rather than the PreviewKeyDown Event. You can stop the invalid characters there, but all the control characters are accepted. This seems to work for me: private void NumericKeyDown(object sender, System.Windows.Input.KeyEventArgs e) { bool isNumPadNumeric = (e.Key >= Key.NumPad0 && e.Key <= Key.NumPad9); bool isNumeric =((e.Key >= Key.D0 && e.Key <= Key.D9) && (e.KeyboardDevice.Modifiers == ModifierKeys.None)); bool isDecimal = ((e.Key == Key.OemPeriod || e.Key == Key.Decimal) && (((TextBox)sender).Text.IndexOf('.') < 0)); e.Handled = !(isNumPadNumeric || isNumeric || isDecimal); } A: I use a custom ValidationRule to check if text is numeric. public class DoubleValidation : ValidationRule { public override ValidationResult Validate(object value, System.Globalization.CultureInfo cultureInfo) { if (value is string) { double number; if (!Double.TryParse((value as string), out number)) return new ValidationResult(false, "Please enter a valid number"); } return ValidationResult.ValidResult; } Then when I bind a TextBox to a numeric property, I add the new custom class to the Binding.ValidationRules collection. In the example below the validation rule is checked everytime the TextBox.Text changes. 
<TextBox> <TextBox.Text> <Binding Path="MyNumericProperty" UpdateSourceTrigger="PropertyChanged"> <Binding.ValidationRules> <local:DoubleValidation/> </Binding.ValidationRules> </Binding> </TextBox.Text> </TextBox> A: public class NumericTextBox : TextBox { public NumericTextBox() : base() { DataObject.AddPastingHandler(this, new DataObjectPastingEventHandler(CheckPasteFormat)); } private Boolean CheckFormat(string text) { short val; return Int16.TryParse(text, out val); } private void CheckPasteFormat(object sender, DataObjectPastingEventArgs e) { var isText = e.SourceDataObject.GetDataPresent(System.Windows.DataFormats.Text, true); if (isText) { var text = e.SourceDataObject.GetData(DataFormats.Text) as string; if (CheckFormat(text)) { return; } } e.CancelCommand(); } protected override void OnPreviewTextInput(System.Windows.Input.TextCompositionEventArgs e) { if (!CheckFormat(e.Text)) { e.Handled = true; } else { base.OnPreviewTextInput(e); } } } Additionally you may customize the parsing behavior by providing appropriate dependency properties. A: Combining the ideas from a few of these answers, I have created a NumericTextBox that * *Handles decimals *Does some basic validation to ensure any entered '-' or '.' is valid *Handles pasted values Please feel free to update if you can think of any other logic that should be included. public class NumericTextBox : TextBox { public NumericTextBox() { DataObject.AddPastingHandler(this, OnPaste); } private void OnPaste(object sender, DataObjectPastingEventArgs dataObjectPastingEventArgs) { var isText = dataObjectPastingEventArgs.SourceDataObject.GetDataPresent(System.Windows.DataFormats.Text, true); if (isText) { var text = dataObjectPastingEventArgs.SourceDataObject.GetData(DataFormats.Text) as string; if (IsTextValid(text)) { return; } } dataObjectPastingEventArgs.CancelCommand(); } private bool IsTextValid(string enteredText) { if (!enteredText.All(c => Char.IsNumber(c) || c == '.' || c == '-')) { return false; } //We only validation against unselected text since the selected text will be replaced by the entered text var unselectedText = this.Text.Remove(SelectionStart, SelectionLength); if (enteredText == "." && unselectedText.Contains(".")) { return false; } if (enteredText == "-" && unselectedText.Length > 0) { return false; } return true; } protected override void OnPreviewTextInput(System.Windows.Input.TextCompositionEventArgs e) { e.Handled = !IsTextValid(e.Text); base.OnPreviewTextInput(e); } } A: You can also try using data validation if users commit data before you use it. Doing that I found was fairly simple and cleaner than fiddling about with keys. Otherwise, you could always disable Paste too! A: Add this to the main solution to make sure the the binding is updated to zero when the textbox is cleared. 
protected override void OnPreviewKeyUp(System.Windows.Input.KeyEventArgs e) { base.OnPreviewKeyUp(e); if (BindingOperations.IsDataBound(this, TextBox.TextProperty)) { if (this.Text.Length == 0) { this.SetValue(TextBox.TextProperty, "0"); this.SelectAll(); } } } A: My Version of Arcturus answer, can change the convert method used to work with int / uint / decimal / byte (for colours) or any other numeric format you care to use, also works with copy / paste protected override void OnPreviewTextInput( System.Windows.Input.TextCompositionEventArgs e ) { try { if ( String.IsNullOrEmpty( SelectedText ) ) { Convert.ToDecimal( this.Text.Insert( this.CaretIndex, e.Text ) ); } else { Convert.ToDecimal( this.Text.Remove( this.SelectionStart, this.SelectionLength ).Insert( this.SelectionStart, e.Text ) ); } } catch { // mark as handled if cannot convert string to decimal e.Handled = true; } base.OnPreviewTextInput( e ); } N.B. Untested code. A: This is how I do it. It uses a regular expression to check if the text that will be in the box is numeric or not. Regex NumEx = new Regex(@"^-?\d*\.?\d*$"); private void TextBox_PreviewTextInput(object sender, TextCompositionEventArgs e) { if (sender is TextBox) { string text = (sender as TextBox).Text + e.Text; e.Handled = !NumEx.IsMatch(text); } else throw new NotImplementedException("TextBox_PreviewTextInput Can only Handle TextBoxes"); } There is now a much better way to do this in WPF and Silverlight. If your control is bound to a property, all you have to do is change your binding statement a bit. Use the following for your binding: <TextBox Text="{Binding Number, Mode=TwoWay, NotifyOnValidationError=True, ValidatesOnExceptions=True}"/> Note that you can use this on custom properties too, all you have to do is throw an exception if the value in the box is invalid and the control will get highlighted with a red border. If you click on the upper right of the red border then the exception message will pop up. A: I've been using an attached property to allow the user to use the up and down keys to change the values in the text box. To use it, you just use <TextBox local:TextBoxNumbers.SingleDelta="1">100</TextBox> This doesn't actually address the validation issues that are referred to in this question, but it addresses what I do about not having a numeric up/down control. Using it for a little bit, I think I might actually like it better than the old numeric up/down control. The code isn't perfect, but it handles the cases I needed it to handle: * *Up arrow, Down arrow *Shift + Up arrow, Shift + Down arrow *Page Up, Page Down *Binding Converter on the text property Code behind using System; using System.Collections.Generic; using System.Text; using System.Windows; using System.Windows.Controls; using System.Windows.Data; using System.Windows.Input; namespace Helpers { public class TextBoxNumbers { public static Decimal GetSingleDelta(DependencyObject obj) { return (Decimal)obj.GetValue(SingleDeltaProperty); } public static void SetSingleDelta(DependencyObject obj, Decimal value) { obj.SetValue(SingleDeltaProperty, value); } // Using a DependencyProperty as the backing store for SingleValue. This enables animation, styling, binding, etc... 
public static readonly DependencyProperty SingleDeltaProperty = DependencyProperty.RegisterAttached("SingleDelta", typeof(Decimal), typeof(TextBoxNumbers), new UIPropertyMetadata(0.0m, new PropertyChangedCallback(f))); public static void f(DependencyObject o, DependencyPropertyChangedEventArgs e) { TextBox t = o as TextBox; if (t == null) return; t.PreviewKeyDown += new System.Windows.Input.KeyEventHandler(t_PreviewKeyDown); } private static Decimal GetSingleValue(DependencyObject obj) { return GetSingleDelta(obj); } private static Decimal GetDoubleValue(DependencyObject obj) { return GetSingleValue(obj) * 10; } private static Decimal GetTripleValue(DependencyObject obj) { return GetSingleValue(obj) * 100; } static void t_PreviewKeyDown(object sender, System.Windows.Input.KeyEventArgs e) { TextBox t = sender as TextBox; Decimal i; if (t == null) return; if (!Decimal.TryParse(t.Text, out i)) return; switch (e.Key) { case System.Windows.Input.Key.Up: if (Keyboard.Modifiers == ModifierKeys.Shift) i += GetDoubleValue(t); else i += GetSingleValue(t); break; case System.Windows.Input.Key.Down: if (Keyboard.Modifiers == ModifierKeys.Shift) i -= GetDoubleValue(t); else i -= GetSingleValue(t); break; case System.Windows.Input.Key.PageUp: i += GetTripleValue(t); break; case System.Windows.Input.Key.PageDown: i -= GetTripleValue(t); break; default: return; } if (BindingOperations.IsDataBound(t, TextBox.TextProperty)) { try { Binding binding = BindingOperations.GetBinding(t, TextBox.TextProperty); t.Text = (string)binding.Converter.Convert(i, null, binding.ConverterParameter, binding.ConverterCulture); } catch { t.Text = i.ToString(); } } else t.Text = i.ToString(); } } } A: I decided to simplify the reply marked as the answer on here to basically 2 lines using a LINQ expression. e.Handled = !e.Text.All(Char.IsNumber); base.OnPreviewTextInput(e); A: Call me crazy, but why not put plus and minus buttons at either side of the TextBox control and simply prevent the TextBox from receiving cursor focus, thereby creating your own cheap NumericUpDown control? A: Can you not just use something like the following? int numericValue = 0; if (false == int.TryParse(yourInput, out numericValue)) { // handle non-numeric input } A: private void txtNumericValue_PreviewKeyDown(object sender, KeyEventArgs e) { KeyConverter converter = new KeyConverter(); string key = converter.ConvertToString(e.Key); if (key != null && key.Length == 1) { e.Handled = Char.IsDigit(key[0]) == false; } } This is the easiest technique I've found to accomplish this. The down side is that the context menu of the TextBox still allows non-numerics via Paste. To resolve this quickly I simply added the attribute/property: ContextMenu="{x:Null}" to the TextBox thereby disabling it. Not ideal but for my scenario it will suffice. Obviously you could add a few more keys/chars in the test to include additional acceptable values (e.g. '.', '$' etc...) A: Private Sub Value1TextBox_PreviewTextInput(ByVal sender As Object, ByVal e As TextCompositionEventArgs) Handles Value1TextBox.PreviewTextInput Try If Not IsNumeric(e.Text) Then e.Handled = True End If Catch ex As Exception End Try End Sub Worked for me. 
A: void PreviewTextInputHandler(object sender, TextCompositionEventArgs e) { string sVal = e.Text; int val = 0; if (sVal != null && sVal.Length > 0) { if (int.TryParse(sVal, out val)) { e.Handled = false; } else { e.Handled = true; } } } A: Can also use a converter like: public class IntegerFormatConverter : IValueConverter { public object Convert(object value, System.Type targetType, object parameter, System.Globalization.CultureInfo culture) { int result; int.TryParse(value.ToString(), out result); return result; } public object ConvertBack(object value, System.Type targetType, object parameter, System.Globalization.CultureInfo culture) { int result; int.TryParse(value.ToString(), out result); return result; } }
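As a small illustration of how a converter like the IntegerFormatConverter above gets attached to a binding from code (the control and source property names here are hypothetical, not from the answers above), a minimal sketch is:

using System.Windows.Controls;
using System.Windows.Data;

// Somewhere in a window or control's initialization code:
var textBox = new TextBox();
var binding = new Binding("MyNumericProperty")          // hypothetical source property
{
    Converter = new IntegerFormatConverter(),
    UpdateSourceTrigger = UpdateSourceTrigger.PropertyChanged
};
textBox.SetBinding(TextBox.TextProperty, binding);

The same wiring can of course be expressed in XAML by declaring the converter as a resource and referencing it in the Binding.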
{ "language": "en", "url": "https://stackoverflow.com/questions/5511", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "67" }
Q: Is there a real benefit of using J#? I just saw a comment of suggesting J#, and it made me wonder... is there a real, beneficial use of J# over Java? So, my feeling is that the only reason you would even consider using J# is that management has decreed that the company should jump on the Java bandwagon... and the .NET bandwagon. If you use J#, you are effectively losing the biggest benefit of picking Java... rich cross platform support. Sure there is Mono, but it's not as richly supported or as full featured right? I remember hearing Forms are not fully (perhaps at all) supported. I'm not trying to bash .NET here, I'm just saying, if you are going to go the Microsoft route, why not just use C#? If you are going to go the Java route, why would J# enter the picture? I'm hoping to find some real world cases here, so please especially respond if you've ACTUALLY used J# in a REAL project, and why. A: J# is no longer included in VS2008. Unless you already have J# code, you should probably stay away. From j# product page: Since customers have told us that the existing J# feature set largely meets their needs and usage of J# is declining, Microsoft is retiring the Visual J# product and Java Language Conversion Assistant tool to better allocate resources for other customer requirements. The J# language and JLCA tool will not be available in future versions of Visual Studio. To preserve existing customer investments in J#, Microsoft will continue to support the J# and JLCA technology that shipped with Visual Studio 2005 through to 2015 as per our product life-cycle strategy. For more information, see Expanded Microsoft Support Lifecycle Policy for Business & Development Products. A: Instead of J#, I would rather prefer IKVM (http://www.ikvm.net/) to convert my JARs to .NET assemblies as well as access Java APIs in C#. A: The whole purpose of J# is to ease the transition of Java developers to the .NET environment which didn't work so well (I guessing here) so Microsoft dropped J# from Visual Studio 2008. For your question, "Is there a real benefit of using J#?".. in a nutshell... No.. A: One of the killers I've found with J# in the past is that there is no built in support for referencing web services. That alone has been enough to deter me from it ever since. A: C# syntax is so close to Java (and better in some ways) that you might as well learn C# instead of J#. And since C# is more widely used, you can easily find Java --> C# tutorials on google or check out http://www.asp.net/learn and watch some videos. A: I don't think it's a matter of which language is better. In the .NET world there are some inconsistencies between the libraries different languages provide. There are certain functionality that is available in VB.NET that you might like to use from C# but can't. I remember I had to use J# to use some ZIP libraries that were not available in any other language in .NET. A: I have used J# as an easy interim step to port a java library into C#. It made for a good way to port code I don't plan to maintain from Java to .Net. However, all new development is being done in C#. A: Strongly agree that syntactically C# beats Java hands down, so there is really no reason to lament the demise of j#. Now trying to get c# compiling to Java bytecode might be an interesting move as Sun's hotspot jvm is great software. Or, for a bit of fun with what might well become the next generation of Java, how about Scala on the CLR...
{ "language": "en", "url": "https://stackoverflow.com/questions/5527", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "26" }
Q: In ASP.NET MVC I encounter an incorrect type error when rendering a user control with the correct typed object I encounter an error of the form: "The model item passed into the dictionary is of type FooViewData but this dictionary requires a model item of type bar" even though I am passing in an object of the correct type (bar) for the typed user control. A: What @MattMitchell said is probably the reason you're seeing this error. If you want to know why; it is because when you pass null as the controlData parameter when using RenderUserControl(), the framework will try to pass the view data from the current view context onto the user control instead (see UserControlExtensions.DoRendering method in System.Web.Mvc). A: What has probably happened is that the object provided when rendering the user control is actually null.
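To make the fix concrete: pass the typed object itself rather than null when rendering the control. The exact shape of RenderUserControl shown below (a virtual path plus a controlData argument) is an assumption based on the answer above, and the control path and Bar-typed variable are invented for illustration:

// In the parent view: barInstance is the Bar object the user control is typed against.
Bar barInstance = GetBarForThisPage();   // hypothetical helper - obtain the Bar however the view normally does
Html.RenderUserControl("~/Views/Shared/BarControl.ascx", barInstance);   // controlData must not be null,
                                                                         // or the parent's FooViewData leaks in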
{ "language": "en", "url": "https://stackoverflow.com/questions/5544", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Document or RPC based web services My gut feel is that document based web services are preferred in practice - is this other people's experience? Are they easier to support? (I noted that SharePoint uses Any for the "document type" in its WSDL interface, I guess that makes it Document based). Also - are people offering both WSDL and REST type services now for the same functionality? WSDL is popular for code generation, but for front ends like PHP and Rails they seem to prefer REST. A: As mentioned, it is better to choose Document/literal over RPC/encoded whenever possible. It is true that the old Java libraries (Axis1, Glue and other prehistoric stuff) support only RPC/encoded; however, most of today's modern Java SOAP libs simply do not support it (e.g. AXIS2, XFire, CXF). Therefore, try to expose an RPC/encoded service only if you know that you need to deal with a consumer that cannot do better. But then again, maybe plain XML-RPC could help for these legacy implementations. A: Document versus RPC is only a question if you are using SOAP web services, which require a service description (WSDL). RESTful web services do not use WSDL because the service can't be described by it, and the feeling is that REST is simpler and easier to understand. Some people have proposed WADL as a way to describe REST services. Languages like Python, Ruby and PHP make it easier to work with REST. The WSDL is used to generate C# code (a web service proxy) that can be easily called from a static language. This happens when you add a Service Reference or Web Reference in Visual Studio. Whether you provide SOAP or REST services depends on your user population. Whether the services are to be used over the internet or just inside your organization also affects your choice. SOAP may have some features (WS-* standards) that work well for B2B or internal use, but suck for an internet service. Document/literal versus RPC for SOAP services is described in this IBM developerWorks article. Document/literal is generally considered the best to use in terms of interoperability (Java to .NET etc.). As to whether it is easier to support, that depends on your circumstances. My personal view is that people tend to make this stuff more complicated than it needs to be, and REST's simpler approach is superior. A: BiranLy's answer is excellent. I would just like to add that document-vs-RPC can come down to implementation issues as well. We have found Microsoft to be document-preferring, while our Java-based libraries were RPC-based. Whatever you choose, make sure you know what other potential clients will assume as well.
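For readers on the .NET side, the document-versus-RPC switch is visible on an old-style ASMX service through attributes in System.Web.Services.Protocols. This is only a rough sketch of where the choice lives - the service and method names are invented:

using System.Web.Services;
using System.Web.Services.Protocols;

[WebService(Namespace = "http://example.com/demo")]
public class OrderService : WebService
{
    [WebMethod]
    [SoapDocumentMethod]   // document style - the ASMX default and generally the most interoperable
    public string GetStatusDocument(int orderId) { return "OK"; }

    [WebMethod]
    [SoapRpcMethod]        // RPC style, mainly for older toolkits that cannot consume document/literal
    public string GetStatusRpc(int orderId) { return "OK"; }
}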
{ "language": "en", "url": "https://stackoverflow.com/questions/5598", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19" }
Q: Tables with no Primary Key I have several tables whose only unique data is a uniqueidentifier (a Guid) column. Because guids are non-sequential (and they're client-side generated so I can't use newsequentialid()), I have made a non-primary, non-clustered index on this ID field rather than giving the tables a clustered primary key. I'm wondering what the performance implications are for this approach. I've seen some people suggest that tables should have an auto-incrementing ("identity") int as a clustered primary key even if it doesn't have any meaning, as it means that the database engine itself can use that value to quickly look up a row instead of having to use a bookmark. My database is merge-replicated across a bunch of servers, so I've shied away from identity int columns as they're a bit hairy to get right in replication. What are your thoughts? Should tables have primary keys? Or is it ok to not have any clustered indexes if there are no sensible columns to index that way? A: The primary key serves three purposes: * *indicates that the column(s) should be unique *indicates that the column(s) should be non-null *document the intent that this is the unique identifier of the row The first two can be specified in lots of ways, as you have already done. The third reason is good: * *for humans, so they can easily see your intent *for the computer, so a program that might compare or otherwise process your table can query the database for the table's primary key. A primary key doesn't have to be an auto-incrementing number field, so I would say that it's a good idea to specify your guid column as the primary key. A: Just jumping in, because Matt's baited me a bit. You need to understand that although a clustered index is put on the primary key of a table by default, that the two concepts are separate and should be considered separately. A CIX indicates the way that the data is stored and referred to by NCIXs, whereas the PK provides a uniqueness for each row to satisfy the LOGICAL requirements of a table. A table without a CIX is just a Heap. A table without a PK is often considered "not a table". It's best to get an understanding of both the PK and CIX concepts separately so that you can make sensible decisions in database design. Rob A: When dealing with indexes, you have to determine what your table is going to be used for. If you are primarily inserting 1000 rows a second and not doing any querying, then a clustered index is a hit to performance. If you are doing 1000 queries a second, then not having an index will lead to very bad performance. The best thing to do when trying to tune queries/indexes is to use the Query Plan Analyzer and SQL Profiler in SQL Server. This will show you where you are running into costly table scans or other performance blockers. As for the GUID vs ID argument, you can find people online that swear by both. I have always been taught to use GUIDs unless I have a really good reason not to. Jeff has a good post that talks about the reasons for using GUIDs: https://blog.codinghorror.com/primary-keys-ids-versus-guids/. As with most anything development related, if you are looking to improve performance there is not one, single right answer. It really depends on what you are trying to accomplish and how you are implementing the solution. The only true answer is to test, test, and test again against performance metrics to ensure that you are meeting your goals. [Edit] @Matt, after doing some more research on the GUID/ID debate I came across this post. 
Like I mentioned before, there is not a true right or wrong answer. It depends on your specific implementation needs. But these are some pretty valid reasons to use GUIDs as the primary key: For example, there is an issue known as a "hotspot", where certain pages of data in a table are under relatively high currency contention. Basically, what happens is most of the traffic on a table (and hence page-level locks) occurs on a small area of the table, towards the end. New records will always go to this hotspot, because IDENTITY is a sequential number generator. These inserts are troublesome because they require Exlusive page lock on the page they are added to (the hotspot). This effectively serializes all inserts to a table thanks to the page locking mechanism. NewID() on the other hand does not suffer from hotspots. Values generated using the NewID() function are only sequential for short bursts of inserts (where the function is being called very quickly, such as during a multi-row insert), which causes the inserted rows to spread randomly throughout the table's data pages instead of all at the end - thus eliminating a hotspot from inserts. Also, because the inserts are randomly distributed, the chance of page splits is greatly reduced. While a page split here and there isnt too bad, the effects do add up quickly. With IDENTITY, page Fill Factor is pretty useless as a tuning mechanism and might as well be set to 100% - rows will never be inserted in any page but the last one. With NewID(), you can actually make use of Fill Factor as a performance-enabling tool. You can set Fill Factor to a level that approximates estimated volume growth between index rebuilds, and then schedule the rebuilds during off-peak hours using dbcc reindex. This effectively delays the performance hits of page splits until off-peak times. If you even think you might need to enable replication for the table in question - then you might as well make the PK a uniqueidentifier and flag the guid field as ROWGUIDCOL. Replication will require a uniquely valued guid field with this attribute, and it will add one if none exists. If a suitable field exists, then it will just use the one thats there. Yet another huge benefit for using GUIDs for PKs is the fact that the value is indeed guaranteed unique - not just among all values generated by this server, but all values generated by all computers - whether it be your db server, web server, app server, or client machine. Pretty much every modern language has the capability of generating a valid guid now - in .NET you can use System.Guid.NewGuid. This is VERY handy when dealing with cached master-detail datasets in particular. You dont have to employ crazy temporary keying schemes just to relate your records together before they are committed. You just fetch a perfectly valid new Guid from the operating system for each new record's permanent key value at the time the record is created. http://forums.asp.net/t/264350.aspx A: Nobody answered actual question: what are pluses/minuses of a table with NO PK NOR a CLUSTERED index. In my opinion, if you optimize for faster inserts (especially incremental bulk-insert, e.g. when you bulk load data into a non-empty table), such a table: with NO clustered index, NO constraints, NO Foreign Keys, NO Defaults and NO Primary Key, in a database with Simple Recovery Model, is the best. 
Now, if you ever want to query this table (as opposed to scanning it in its entirety) you may want to add a non-clustered non-unique indexes as needed but keep them to the minimum. A: I too have always heard having an auto-incrementing int is good for performance even if you don't actually use it. A: A Primary Key needn't be an autoincrementing field, in many cases this just means you are complicating your table structure. Instead, a Primary Key should be the minimum collection of attributes (note that most DBMS will allow a composite primary key) that uniquely identifies a tuple. In technical terms, it should be the field that every other field in the tuple is fully functionally dependent upon. (If it isn't you might need to normalise). In practice, performance issues may mean that you merge tables, and use an incrementing field, but I seem to recall something about premature optimisation being evil... A: Since you are doing replication, your are correct identities are something to stear clear of. I would make your GUID a primary key but nonclustered since you can't use newsequentialid. That stikes me as your best course. If you don't make it a PK but put a unique index on it, sooner or later that may cause people who maintain the system to not understand the FK relationships properly introducing bugs.
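Since much of the discussion above is about client-generated GUID keys, the .NET side of that is a one-liner; a trivial sketch (the Order class is invented for illustration):

using System;

public class Order
{
    public Guid OrderId { get; private set; }

    public Order()
    {
        // Client-generated key: unique and usable before the row ever reaches the database.
        OrderId = Guid.NewGuid();
    }
}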
{ "language": "en", "url": "https://stackoverflow.com/questions/5600", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "41" }
Q: HTTP Libraries for Emacs I recently discovered the org-mode in emacs and it works very well for me. I also like www.RememberTheMilk.com. I would like to be able to sync my org-mode file and RTM list. I know that RTM has its API exposed as web services. I am currently looking for a HTTP library that I could use to write my script. I found a couple of links but I am still not entirely satisfied. http://www.koders.com/lisp/fidB46CCCA8D57FBD093BAF6E08289CFB4DA7624B2B.aspx?s=TV+Raman http://www.emacswiki.org/cgi-bin/wiki/http-post-simple.el Any pointers in doing web service interactions with emacs would be very useful. Also please keep in mind that I'm not a seasoned emacs expert. I have broken the initial barriers of emacs and can find my way around elisp. So, be gentle. :-) A: Emacs ships with url.el and url-http.el. Although http-get.el, http-post.el and http-cookies.el are in vogue today. Here's the GitHub link where you can get it from. http://github.com/wfarr/dotfiles/tree/master/.elisp Any other suggestions are also welcome. A: If I were to work on this, I'd use Pymacs to interface Emacs to Python and then use the existing Python API kit for Remember the Milk. Why re-implement all the HTTP crud yourself?
{ "language": "en", "url": "https://stackoverflow.com/questions/5605", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3" }
Q: Better Random Generating PHP I know that just using rand() is predictable, if you know what you're doing, and have access to the server. I have a project that is highly dependent upon choosing a random number that is as unpredictable as possible. So I'm looking for suggestions, either other built-in functions or user functions, that can generate a better random number. I used this to do a little test: $i = 0; while($i < 10000){ $rand = rand(0, 100); if(!isset($array[$rand])){ $array[$rand] = 1; } else { $array[$rand]++; } sort($array); $i++; } I found the results to be evenly distributed, and there is an odd pattern to the number of times each number is generated. A: random.org has an API you can access via HTTP. RANDOM.ORG is a true random number service that generates randomness via atmospheric noise. A: I would be wary of the impression of randomness: there have been many experiments where people would choose the less random distribution. It seems the mind is not very good at producing or estimating randomness. There are good articles on randomness at Fourmilab, including another true random generator. Maybe you could get random data from both sites so if one is down you still have the other. Fourmilab also provides a test program to check randomness. You could use it to check your various myRand() programs. As for your last program, if you generate 10000 values, why don't you choose the final value amongst the 10 thousand? You restrict yourself to a subset. Also, it won't work if your $min and $max are greater than 10000. Anyway, the randomness you need depends on your application. rand() will be OK for an online game, but not OK for cryptography (anything not thoroughly tested with statistical programs will not be suitable for cryptography anyway). You be the judge! A: Adding, multiplying, or truncating a poor random source will give you a poor random result. See Introduction to Randomness and Random Numbers for an explanation. You're right about PHP rand() function. See the second figure on Statistical Analysis for a striking illustration. (The first figure is striking, but it's been drawn by Scott Adams, not plotted with rand()). One solution is to use a true random generator such as random.org. Another, if you're on Linux/BSD/etc. is to use /dev/random. If the randomness is mission critical, you will have to use a hardware random generator. A: Another way of getting random numbers, similar in concept to getting UUID PHP Version 5.3 and above openssl_random_pseudo_bytes(...) Or you can try the following library using RFC4122 A: Variation on @KG, using the milliseconds since EPOCH as the seed for rand? A: A new PHP7 there is a function that does exactly what you needed: it generates cryptographically secure pseudo-random integers. int random_int ( int $min , int $max ) Generates cryptographic random integers that are suitable for use where unbiased results are critical (i.e. shuffling a Poker deck). For a more detailed explanation about PRNG and CSPRNG (and their difference) as well as why your original approach is actually a bad idea, please read my another highly similar answer.
{ "language": "en", "url": "https://stackoverflow.com/questions/5611", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "22" }
Q: Debugging: IE6 + SSL + AJAX + post form = 404 error The Setting: The program in question tries to post form data via an AJAX call to a target procedure contained in the same package as the caller. This is done for a site that uses a secure connection (HTTPS). The technology used here is PLSQL and the DOJO JavaScript library. The development tool is basically a text editor. Code Snippet: > function testPost() { >> dojo.xhrPost( { url: ''dr_tm_w_0120.test_post'', form: ''orgForm'', load: testPostXHRCallback, error: testPostXHRError }); } > function testPostXHRCallback(data,ioArgs) { >> alert(''post callback''); try{ dojo.byId("messageDiv").innerHTML = data; } catch(ex){ if(ex.name == "TypeError") { alert("A type error occurred."); } } return data; } > function testPostXHRError(data, ioArgs) { >> alert(data); alert(''Error when retrieving data from the server!''); return data; } The Problem: When using IE6 (which the entire user-base uses), the response sent back from the server is a 404 error. Observations: The program works fine in Firefox. The calling procedure cannot target any procedures within the same package. The calling procedure can target outside sites (both http, https). The other AJAX calls in the package that are not posts of form data work fine. I've searched the internets and consulted with senior-skilled team members and haven't discovered anything that satisfactorily addresses the issue. *Tried Q&A over at Dojo support forums. The Questions: What troubleshooting techniques do you recommend? What troubleshooting tools do you recommend for HTTPS analyzing? Any hypotheses on what the issue might be? Any ideas for workarounds that aren't total (bad) hacks? Ed. The Solution lomaxx, thx for the fiddler tip. you have no idea how awesome it was to get that and use it as a debugging tool. after starting it up this is what i found and how i fixed it (at least in the short term): > ef Fri, 8 Aug 2008 14:01:26 GMT dr_tm_w_0120.test_post: SIGNATURE (parameter names) MISMATCH VARIABLES IN FORM NOT IN PROCEDURE: SO1_DISPLAYED_,PO1_DISPLAYED_,RWA2_DISPLAYED_,DD1_DISPLAYED_ NON-DEFAULT VARIABLES IN PROCEDURE NOT IN FORM: 0 After seeing that message from the server, I kicked around Fiddler a bit more to see what else I could learn from it. Found that there's a WebForms tab that shows the values in the web form. Wouldn't you know it, the "xxx_DISPLAYED_" fields above were in it. I don't really understand yet why these fields exist, because I didn't create them explicitly in the web PLSQL code. But I do understand now that the target procedure has to include them as parameters to work correctly. Again, this is only in the case of IE6 for me, as Firefox worked fine. Well, that the short term answer and hack to fix it. Hopefully, a little more work in this area will lead to a better understanding of the fundamentals going on here. A: First port of call would be to fire up Fiddler and analyze the data going to and from the browser. Take a look at the headers, the url actually being called and the params (if any) being passed to the AJAX method and see if it all looks good before getting to the server. If that all looks ok, is there any way you can verify it's actually hitting the server via logging, or tracing in the AJAX method? ed: another thing I would try is rig up a test page to call the AJAX method on the server using a non-ajax based call and analyze the traffic in fiddler and compare the two.
{ "language": "en", "url": "https://stackoverflow.com/questions/5619", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Is there a way to include a fragment identifier when using Asp.Net MVC ActionLink, RedirectToAction, etc.? I want some links to include a fragment identifier. Like some of the URLs on this site: Debugging: IE6 + SSL + AJAX + post form = 404 error#5626 Is there a way to do this with any of the built-in methods in MVC? Or would I have to roll my own HTML helpers? A: We're looking at including support for this in our next release. A: In MVC3 (and possibly earlier I haven't checked), you can use UrlHelper.GenerateUrl passing in the fragment parameter. Here's a helper method I use to wrap the functionalityL public static string Action(this UrlHelper url, string actionName, string controllerName, string fragment, object routeValues) { return UrlHelper.GenerateUrl( routeName: null, actionName: actionName, controllerName: controllerName, routeValues: new System.Web.Routing.RouteValueDictionary(routeValues), fragment: fragment, protocol: null, hostName: null, routeCollection: url.RouteCollection, requestContext: url.RequestContext, includeImplicitMvcValues: true /*helps fill in the nulls above*/ ); } A: @Dominic, I'm almost positive that putting that in the route will cause routing issues. @Ricky, Until MVC has support for this, you can be a little more "old school" about how you make your routes. For example, you can convert: <%= Html.ActionLink("Home", "Index") %> into: <a href='<%= Url.Action("Index") %>#2345'>Home</a> Or you can write your own helper that does essentially the same thing. A: As Brad Wilson wrote, you can build your own link in your views by simply concatenating strings. But to append a fragment name to a redirect generated via RedirectToAction (or similar) you'll need something like this: public class RedirectToRouteResultEx : RedirectToRouteResult { public RedirectToRouteResultEx(RouteValueDictionary values) : base(values) { } public RedirectToRouteResultEx(string routeName, RouteValueDictionary values) : base(routeName, values) { } public override void ExecuteResult(ControllerContext context) { var destination = new StringBuilder(); var helper = new UrlHelper(context.RequestContext); destination.Append(helper.RouteUrl(RouteName, RouteValues)); //Add href fragment if set if (!string.IsNullOrEmpty(Fragment)) { destination.AppendFormat("#{0}", Fragment); } context.HttpContext.Response.Redirect(destination.ToString(), false); } public string Fragment { get; set; } } public static class RedirectToRouteResultExtensions { public static RedirectToRouteResultEx AddFragment(this RedirectToRouteResult result, string fragment) { return new RedirectToRouteResultEx(result.RouteName, result.RouteValues) { Fragment = fragment }; } } And then, in your controller, you'd call: return RedirectToAction("MyAction", "MyController") .AddFragment("fragment-name"); That should generate the URL correctly. A: The short answer is: No. In ASP.NET MVC Preview 3 there's no first-class way for including an anchor in an action link. Unlike Rails' url_for :anchor, UrlHelper.GenerateUrl (and ActionLink, RedirectToAction and so on which use it) don't have a magic property name that lets you encode an anchor. As you point out, you could roll your own that does. This is probably the cleanest solution. Hackily, you could just include an anchor in a route and specify the value in your parameters hash: routes.MapRoute("WithTarget", "{controller}/{action}/{id}#{target}"); ... <%= Html.ActionLink("Home", "Index", new { target = "foo" })%> This will generate a URL like /Home/Index/#foo. 
Unfortunately this doesn't play well with URL parameters, which appear at the end of the URL. So this hack is only workable in really simple circumstances where all of your parameters appear as URL path components. A: This is a client side solution but if you have jquery available you can do something like this. <script language="javascript" type="text/javascript"> $(function () { $('div.imageHolder > a').each(function () { $(this).attr('href', $(this).attr('href') + '#tab-works'); }); }); </script> A: Fragment identifiers are supported in MVC 5. See ActionLink's overloads at https://msdn.microsoft.com/en-us/library/dd460522(v=vs.118).aspx and https://msdn.microsoft.com/en-us/library/dd492938(v=vs.118).aspx.
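Usage of the UrlHelper extension shown earlier in this thread is straightforward (assuming the extension class is in scope; the controller, action, fragment and route values below are placeholders):

// Inside a view or controller that has a UrlHelper instance named Url:
string href = Url.Action("Index", "Questions", "comment-5626", new { id = 5628 });
// Produces something like /Questions/Index/5628#comment-5626, depending on your routes.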
{ "language": "en", "url": "https://stackoverflow.com/questions/5628", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "22" }
Q: Any reason not to start using the HTML 5 doctype? It is supposed to be backwards compatible with HTML4 and XHTML. John Resig posted about some of the benefits. As long as we don't use any of the new and not supported yet features, would there be any downside to start building sites with this doctype? A: My question to you would be why use it if you don't use any of the new/unsupported features. I'm not saying you couldn't play around with it, but why start building sites with a doctype that offers no benefits and could be supplemented by XHTML5. A: I'd say use it and test extensively. Then let us know if it blew your house up or something. :') A: Based on the latest IE8 beta, it seems that MS will use the HTML5 doctype as a bypass for the IE8 mode switching quagmire. It seems that the biggest risk with deploying the HTML5 doctype early is that if people publish a lot of IE8-incompatible content with the HTML5 doctype before IE8 ships, MS might get cold feet about making the mode situation simple for HTML5. Update: This has been voted down, it seems. Quite obviously now that IE8 has shipped, the above consideration no longer applies. And indeed, the situation is not simple with IE8. A: The downside for me mainly concerns validation: * *Third party validation tools does not always keep up with changing specs, making my favorite tools unreliable. *I prefer to validate against strict doctypes to make sure I have closed all elements. It's an easy way to avoid simple but time consuming nesting errors. With HTML 5 you don't have to close your elements, so there is no way to find unmatched tags. A: Well consider this: When serving as text/html, all you need a doctype for is to trigger standards mode. Beyond that, the doctype does nothing as far as browsers are concerned. When serving as text/html, whether you use XHTML markup or HTML markup, it's treated by browsers as HTML. So, really it comes down to using the shortest doctype that triggers standards mode (<!DOCTYPE html>) and using HTML markup that produces the correct result in browsers. The rest is about conforming, validation and markup prerference. With that said, using <!DOCTYPE html> now and trying to make your markup conform to HTML5 is not a bad idea as long as you stick to stable features that work in browsers now. You wouldn't use anything in HTML4 or XHTML 1.x that doesn't work in browsers, would you? In other words, you use <!DOCTYPE html> with HTML4-like markup while honoring things that have been clarified in HTML5. HTML5 is about browser compatibility after all. The downside to using HTML5 now is that the spec can change quite often. This makes it important for you to keep up with the spec as it actively changes. Also http://validator.nu/ might not always be up-to-date, but http://validator.w3.org/ isn't always up-to-date either, so don't let that stop you. Of course, if you want to use XHTML 1.0 markup and conform to XHTML 1.0, then you shouldn't use <!DOCTYPE html>. Personally, I always use <!DOCTYPE html> for HTML. A: if you're going to use the doctype, experiment with the features. As long as they don't go into a production site, and you test them thoroughly, have at it. A: Consider your audience and your needs. I write pages such as class tests with a target audience of students in my courses who use FireFox 3 in an Ubuntu equipped computer laboratory. I need SVG with MathMl embedded as a foreignObject in the SVG. I use the HTML5 doctype and new HTML5 tags freely. A: Take a look at this blog post! 
"Not really a fan of HTML5": http://www.webscienceman.com/2009/01/24/html-xhtml-html5-future-html/ A: For anyone finding this: the chart at http://hsivonen.iki.fi/doctype/ shows the various rendering modes different browsers use depending on the DOCTYPE declaration in use. It gives you a good idea of how DOCTYPE switching works. A: Personally I'd say no. There is no clear benefit to HTML5, and in fact I would go as far as to say that the whole thing is botched from the start. Having specialised tags for headers, footers and sidebars is a huge mistake - you've got them already in the form of tags (div) and names (classes/ids). Why do we need the specialist ones? XHTML 1.1 is good enough, period. In fact, since most browsers don't support HTML4 correctly, there is little point in using a doctype that is going to take years to get proper support.
{ "language": "en", "url": "https://stackoverflow.com/questions/5629", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "132" }
Q: x86 Assembly on a Mac Does anyone know of any good tools (I'm looking for IDEs) to write assembly on the Mac. Xcode is a little cumbersome to me. Also, on the Intel Macs, can I use generic x86 asm? Or is there a modified instruction set? Any information about post Intel. Also: I know that on windows, asm can run in an emulated environment created by the OS to let the code think it's running on its own dedicated machine. Does OS X provide the same thing? A: After installing any version of Xcode targeting Intel-based Macs, you should be able to write assembly code. Xcode is a suite of tools, only one of which is the IDE, so you don't have to use it if you don't want to. (That said, if there are specific things you find clunky, please file a bug at Apple's bug reporter - every bug goes to engineering.) Furthermore, installing Xcode will install both the Netwide Assembler (NASM) and the GNU Assembler (GAS); that will let you use whatever assembly syntax you're most comfortable with. You'll also want to take a look at the Compiler & Debugging Guides, because those document the calling conventions used for the various architectures that Mac OS X runs on, as well as how the binary format and the loader work. The IA-32 (x86-32) calling conventions in particular may be slightly different from what you're used to. Another thing to keep in mind is that the system call interface on Mac OS X is different from what you might be used to on DOS/Windows, Linux, or the other BSD flavors. System calls aren't considered a stable API on Mac OS X; instead, you always go through libSystem. That will ensure you're writing code that's portable from one release of the OS to the next. Finally, keep in mind that Mac OS X runs across a pretty wide array of hardware - everything from the 32-bit Core Single through the high-end quad-core Xeon. By coding in assembly you might not be optimizing as much as you think; what's optimal on one machine may be pessimal on another. Apple regularly measures its compilers and tunes their output with the "-Os" optimization flag to be decent across its line, and there are extensive vector/matrix-processing libraries that you can use to get high performance with hand-tuned CPU-specific implementations. Going to assembly for fun is great. Going to assembly for speed is not for the faint of heart these days. A: Running assembly Code on Mac is just 3 steps away from you. It could be done using XCODE but better is to use NASM Command Line Tool. For My Ease I have already installed Xcode, if you have Xcode installed its good. But You can do it without XCode as well. Just Follow: * *First Install NASM using Homebrew brew install nasm *convert .asm file into Obj File using this command nasm -f macho64 myFile.asm *Run Obj File to see OutPut using command ld -macosx_version_min 10.7.0 -lSystem -o OutPutFile myFile.o && ./64 Simple Text File named myFile.asm is written below for your convenience. global start section .text start: mov rax, 0x2000004 ; write mov rdi, 1 ; stdout mov rsi, msg mov rdx, msg.len syscall mov rax, 0x2000001 ; exit mov rdi, 0 syscall section .data msg: db "Assalam O Alaikum Dear", 10 .len: equ $ - msg A: Also, on the Intel Macs, can I use generic x86 asm? or is there a modified instruction set? Any information about post Intel Mac assembly helps. It's the same instruction set; it's the same chips. A: As stated before, don't use syscall. You can use standard C library calls though, but be aware that the stack MUST be 16 byte aligned per Apple's IA32 function call ABI. 
If you don't align the stack, your program will crash in __dyld_misaligned_stack_error when you make a call into any of the libraries or frameworks. The following snippet assembles and runs on my system: ; File: hello.asm ; Build: nasm -f macho hello.asm && gcc -o hello hello.o SECTION .rodata hello.msg db 'Hello, World!',0x0a,0x00 SECTION .text extern _printf ; could also use _puts... GLOBAL _main ; aligns esp to 16 bytes in preparation for calling a C library function ; arg is number of bytes to pad for function arguments, this should be a multiple of 16 ; unless you are using push/pop to load args %macro clib_prolog 1 mov ebx, esp ; remember current esp and esp, 0xFFFFFFF0 ; align to next 16 byte boundary (could be zero offset!) sub esp, 12 ; skip ahead 12 so we can store original esp push ebx ; store esp (16 bytes aligned again) sub esp, %1 ; pad for arguments (make conditional?) %endmacro ; arg must match most recent call to clib_prolog %macro clib_epilog 1 add esp, %1 ; remove arg padding pop ebx ; get original esp mov esp, ebx ; restore %endmacro _main: ; set up stack frame push ebp mov ebp, esp push ebx clib_prolog 16 mov dword [esp], hello.msg call _printf ; can make more clib calls here... clib_epilog 16 ; tear down stack frame pop ebx mov esp, ebp pop ebp mov eax, 0 ; set return code ret A: Recently I wanted to learn how to compile Intel x86 on Mac OS X: For nasm: -o hello.tmp - outfile -f macho - specify format Linux - elf or elf64 Mac OSX - macho For ld: -arch i386 - specify architecture (32 bit assembly) -macosx_version_min 10.6 (Mac OSX - complains about default specification) -no_pie (Mac OSX - removes ld warning) -e main - specify main symbol name (Mac OSX - default is start) -o hello.o - outfile For Shell: ./hello.o - execution One-liner: nasm -o hello.tmp -f macho hello.s && ld -arch i386 -macosx_version_min 10.6 -no_pie -e _main -o hello.o hello.tmp && ./hello.o Let me know if this helps! I wrote how to do it on my blog here: http://blog.burrowsapps.com/2013/07/how-to-compile-helloworld-in-intel-x86.html For a more verbose explanation, I explained on my Github here: https://github.com/jaredsburrows/Assembly A: The features available to use are dependent on your processor. Apple uses the same Intel stuff as everybody else. So yes, generic x86 should be fine (assuming you're not on a PPC :D). As far as tools go, I think your best bet is a good text editor that 'understands' assembly. A: Forget about finding a IDE to write/run/compile assembler on Mac. But, remember mac is UNIX. See http://asm.sourceforge.net/articles/linasm.html. A decent guide (though short) to running assembler via GCC on Linux. You can mimic this. Macs use Intel chips so you want to look at Intel syntax.
{ "language": "en", "url": "https://stackoverflow.com/questions/5649", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "69" }
Q: Sleep from within an Informix SPL procedure What's the best way to do the semantic equivalent of the traditional sleep() system call from within an Informix SPL routine? In other words, simply "pause" for N seconds (or milliseconds or whatever, but seconds are fine). I'm looking for a solution that does not involve linking some new (perhaps written by me) C code or other library into the Informix server. This has to be something I can do purely from SPL. A solution for IDS 10 or 11 would be fine. @RET - The "obvious" answer wasn't obvious to me! I didn't know about the SYSTEM command. Thank you! (And yes, I'm the guy you think I am.) Yes, it's for debugging purposes only. Unfortunately, CURRENT within an SPL will always return the same value, set at the entry to the call: "any call to CURRENT from inside the SPL function that an EXECUTE FUNCTION (or EXECUTE PROCEDURE) statement invokes returns the value of the system clock when the SPL function starts." —IBM Informix Guide to SQL Wrapping CURRENT in its own subroutine does not help. You do get a different answer on the first call to your wrapper (provided you're using YEAR TO FRACTION(5) or some other type with high enough resolution to show the the difference) but then you get that same value back on every single subsequent call, which ensures that any sort of loop will never terminate. A: There must be some good reason you're not wanting the obvious answer: SYSTEM "sleep 5". If all you're wanting is for the SPL to pause while you check various values etc, here are a couple of thoughts (all of which are utter hacks, of course): * *Make the TRACE FILE a named pipe (assuming Unix back-end), so it blocks until you choose to read from it, or *Create another table that your SPL polls for a particular entry from a WHILE loop, and insert said row from elsewhere (horribly inefficient) *Make SET LOCK MODE your friend: execute "SET LOCK MODE TO WAIT n" and deliberately requery a table you're already holding a cursor open on. You'll need to wrap this in an EXCEPTION handler, of course. Hope that is some help (and if you're the same JS of Ars and Rose::DB fame, it's the least I could do ;-) A: I'm aware that the answer is too late. However I've recently encountered the same problem and this site shows as the first one. So it is beneficial for other people to place new anwser here. Perfect solution was found by Eric Herber and published in April 2012 here: How to sleep (or yield) for a fixed time in a stored procedure Unfortunately this site is down. His solution is to use following function: integer sysadmin:yieldn( integer nseconds ) A: I assume that you want this "pause" for debugging purposes, otherwise think about it, you'll always have some better tasks to do for your server than sleep ... A suggestion: Maybe you could get CURRENT, add it a few seconds ( let mytimestamp ) then in a while loop select CURRENT while CURRENT <= mytimestamp . I've no informix setup around my desk to try it, so you'll have to figure the correct syntax. Again, do not put such a hack on a production server. You've been warned :D A: Then you'll have to warp CURRENT in another function that you'll call from the first (but this is a hack on the previous hack ...).
{ "language": "en", "url": "https://stackoverflow.com/questions/5667", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: How do I create an HTML anchor in a FogBugz wiki page? The StackOverflow transcripts are enormous, and sometimes I want to link to a little bit within it. How do I create an HTML anchor in a FogBugz wiki page? A: According to this support message, the feature is not yet currently implemented: The FogBugz wiki does not currently support anchors within a document, unfortunately. It's definitely on the list of features we're considering for the next release, though. A: As of this writing, this feature is now supported -- just edit the wiki page's html directly (via the <> button). See this support question for details. Use html anchor tags as you would in a typical html doc. (Note that I did have issues w/ FB magically removing the name attribute of the targeted anchors and had to re-add them after an initial save, so YMMV). A: There is a script you can use that will create an automatic TOC for your wiki page though and this will probably solve your problem. A: It doesn't appear to be possible.
{ "language": "en", "url": "https://stackoverflow.com/questions/5674", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: Is it just me, or are characters being rendered incorrectly more lately? I'm not sure if it's my system, although I haven't done anything unusual with it, but I've started noticing incorrectly rendered characters popping up in web pages, text files, like this: http://www.kbssource.com/strange-characters.gif I have a hunch it's related to the fairly recent trend to use Unicode for everything, which is a good thing I think, combined with fonts that don't support all possible characters. So, does anyone know what's causing these blips (am I right?), and how do I stop this showing up in my own content? A: It appears that for this particular author, the text was edited in some editor that assumed it wasn't UTF8, and then re-wrote it out in UTF8. I'm basing this off the fact that if I tell my browser to interpret the page as different common encodings, none make it display correctly. This tells me that some conversion was done at some point improperly. The only problem with UTF8 is that there isn't a standardized way to recognize that a file is UTF8, and until all editors standardize on UTF8, there will still be conversion errors. For other Unicode variants, a Byte Order Mark (BOM) is fairly standard to help identify a file, but BOMs in UTF8 files are pretty rare. To keep it from showing up in your content, make sure you're always using Unicode-aware editors, and make sure that you always open your files with the proper encodings. It's a pain, unfortunately, and errors will occasionally crop up. The key is just catching them early so that you can undo it or make a few edits. A: I'm fairly positive there's nothing you can do. I've seen this on the front page of Digg a lot recently. It more than likely has to do with a character being encoded improperly. Not necessarily a factor of the font, just a mistake made somewhere in translation. A: It looked for a while like the underscore and angle bracket problem had gone away, but it seems it might not be fixed. Here's a small sample, which should look like this: #include ____ #include <stdio.h> ____ #include Update: it looks like it's fixed in display mode, and only broken in edit mode
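To make the "always open and save with the right encoding" advice concrete, here is a small .NET sketch (file names are placeholders) that reads a file as UTF8 explicitly and writes it back as UTF8 without a BOM:

using System.IO;
using System.Text;

class EncodingFix
{
    static void Main()
    {
        // Read with an explicit encoding instead of letting an editor or runtime guess.
        string text = File.ReadAllText("input.txt", Encoding.UTF8);

        // Write back as UTF8; new UTF8Encoding(false) omits the byte order mark.
        File.WriteAllText("output.txt", text, new UTF8Encoding(false));
    }
}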
{ "language": "en", "url": "https://stackoverflow.com/questions/5682", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2" }
Q: MVC Preview 4 - No route in the route table matches the supplied values I have a route that I am calling through a RedirectToRoute like this: return this.RedirectToRoute("Super-SuperRoute", new { year = selectedYear }); I have also tried: return this.RedirectToRoute("Super-SuperRoute", new { controller = "Super", action = "SuperRoute", id = "RouteTopic", year = selectedYear }); The route in the global.asax is like this: routes.MapRoute( "Super-SuperRoute", // Route name "Super.mvc/SuperRoute/{year}", // URL with parameters new { controller = "Super", action = "SuperRoute", id = "RouteTopic" } // Parameter defaults ); So why do I get the error: "No route in the route table matches the supplied values."? I saw that the type of selectedYear was var. When I tried to convert to int with int.Parse I realised that selectedYear was actually null, which would explain the problems. I guess next time I'll pay more attention to the values of the variables at a breakpoint :) A: What type is selectedYear? A DateTime? If so then you might need to convert to a string.
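Since the root cause turned out to be a null selectedYear, a small defensive sketch in the controller action helps (how the year is obtained is hypothetical here):

int? selectedYear = GetSelectedYear();   // hypothetical - however the action actually gets the year
if (!selectedYear.HasValue)
{
    // Fall back to something sensible instead of building route values from null.
    return RedirectToRoute("Super-SuperRoute", new { year = DateTime.Now.Year });
}
return RedirectToRoute("Super-SuperRoute", new { year = selectedYear.Value });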
{ "language": "en", "url": "https://stackoverflow.com/questions/5690", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4" }
Q: The imported project "C:\Microsoft.CSharp.targets" was not found I got this error today when trying to open a Visual Studio 2008 project in Visual Studio 2005: The imported project "C:\Microsoft.CSharp.targets" was not found. A: I used to have this following line in the csproj file: <Import Project="$(MSBuildExtensionsPath32)\Microsoft\VisualStudio\v10.0\WebApplications\Microsoft.WebApplication.targets" /> After deleting this file, it works fine. A: This is a global solution, not dependent on particular package or bin. In my case, I removed Packages folder from my root directory. Maybe it happens because of your packages are there but compiler is not finding it's reference. so remove older packages first and add new packages. Steps to Add new packages * *First remove, packages folder (it will be near by or one step up to your current project folder). *Then restart the project or solution. *Now, Rebuild solution file. *Project will get new references from nuGet package manager. And your issue will be resolved. This is not proper solution, but I posted it here because I face same issue. In my case, I wasn't even able to open my solution in visual studio and didn't get any help with other SO answers. A: If you are to encounter the error that says Microsoft.CSharp.Core.targets not found, these are the steps I took to correct mine: * *Open any previous working projects folder and navigate to the link showed in the error, that is Projects/(working project name)/packages/Microsoft.Net.Compilers.1.3.2/tools/ and search for Microsoft.CSharp.Core.targets file. *Copy this file and put it in the non-working project tools folder (that is, navigating to the tools folder in the non-working project as shown above) *Now close your project (if it was open) and reopen it. It should be working now. Also, to make sure everything is working properly in your now open Visual Studio Project, Go to Tools > NuGetPackage Manager > Manage NuGet Packages For Solution. Here, you might find an error that says, CodeAnalysis.dll is being used by another application. Again, go to the tools folder, find the specified file and delete it. Come back to Manage NuGet Packages For Solution. You will find a link that will ask you to Reload, click it and everything gets re-installed. Your project should be working properly now. A: I got this after reinstalling Windows. Visual Studio was installed, and I could see the Silverlight project type in the New Project window, but opening one didn't work. The solution was simple: I had to install the Silverlight Developer runtime and/or the Microsoft Silverlight 4 Tools for Visual Studio. This may seem stupid, but I overlooked it because I thought it should work, as the Silverlight project type was available. A: In my case, I opened my .csproj file in notepad and removed the following three lines. 
Worked like a charm: <Import Project="..\packages\Microsoft.CodeDom.Providers.DotNetCompilerPlatform.1.0.0\build\Microsoft.CodeDom.Providers.DotNetCompilerPlatform.props" Condition="Exists('..\packages\Microsoft.CodeDom.Providers.DotNetCompilerPlatform.1.0.0\build\Microsoft.CodeDom.Providers.DotNetCompilerPlatform.props')" /> <Import Project="..\packages\Microsoft.Net.Compilers.1.0.0\build\Microsoft.Net.Compilers.props" Condition="Exists('..\packages\Microsoft.Net.Compilers.1.0.0\build\Microsoft.Net.Compilers.props')" /> <Import Project="..\packages\Microsoft.Net.Compilers.1.3.2\build\Microsoft.Net.Compilers.props" Condition="Exists('..\packages\Microsoft.Net.Compilers.1.3.2\build\Microsoft.Net.Compilers.props')" /> A: For me the issue was that the path of the project contained %20 characters, because git added those instead of spaces when the repository was cloned. Another problem might be if the path to a package is too long. A: ok so what if it say this: between the gt/lt signs Import Project="$(MSBuildExtensionsPath)\Microsoft\Silverlight\v3.0\Microsoft.Silverlight.CSharp.targets" / how do i fix the targets error? I also found that import string in a demo project (specifically "Build your own MVVM Framework" by Rob Eisenburg). If you replace that import with the one suggested by lomaxx VS2010 RTM reports that you need to install this. A: For errors with Microsoft.WebApplications.targets, you can: * *Install Visual Studio 2010 (or the same version as in development machine) in your TFS server. *Copy the “Microsoft.WebApplication.targets” from development machine file to TFS build machine. Here's the post. A: In my case I could not load one out of 5 projects in my solution. It helped to close Visual Studio and I had to delete Microsoft.Net.Compilers.1.3.2 nuget folder under packages folder. Afterwards, open your solution again and the project loaded as expected Just to be sure, close all instances of VS before you delete the folder. A: Open your csproj file in notepad (or notepad++) Find the line: <Import Project="$(MSBuildToolsPath)\Microsoft.CSharp.targets" /> and change it to <Import Project="$(MSBuildBinPath)\Microsoft.CSharp.targets" /> A: This link on MSDN also helps a lot to understand the reason why it doesn't work. $(MSBuildToolsPath) is the path to Microsoft.Build.Engine v3.5 (inserted automatically in a project file when you create in VS2008). If you try to build your project for .Net 2.0, be sure that you changed this path to $(MSBuildBinPath) which is the path to Microsoft.Build.Engine v2.0. A: This error can also occur when opening a Silverlight project that was built in SL 4, while you have SL 5 installed. Here is an example error message: The imported project "C:\Program Files (x86)\MSBuild\Microsoft\Silverlight\v4.0\Microsoft.Silverlight.CSharp.targets" was not found. Note the v4.0. To resolve, edit the project and find: <TargetFrameworkVersion>v4.0</TargetFrameworkVersion> And change it to v5.0. Then reload project and it will open (unless you do not have SL 5 installed). A: For me, the issue was the path.. When cloning the project that had a space in the name. The project folder was named "Sample%20-%205" instead of what it should be: "Sample - 5" Opening the project was fine, but building failed with Could not find the file: /packages/Microsoft.Net.Compilers.1.3.2/tools/Microsoft.CSharp.Core.targets A: I deleted the obj folder and then the project loaded as expected. A: Sometimes the problem might be with hardcoded VS version in .csproj file. 
If you have in your csproj something like this: [...]\VisualStudio\v12.0\WebApplications\Microsoft.WebApplication.targets" You should check if the number is correct (the reason it's wrong can be the project was created with another version of Visual Studio). If it's wrong, replace it with your current version of build tools OR use the VS variable: [...]\VisualStudio\v$(VisualStudioVersion)\WebApplications\Microsoft.WebApplication.targets" A: I ran into this issue while executing an Ansible playbook so I want to add my 2 cents here. I noticed a warning message about missing Visual Studio 14. Visual Studio version 14 was released in 2015 and the solution to my problem was installing Visual Studio 2015 Professional on the host machine of my Azure DevOps agent. A: After trying to restore, closing VS, deleting the failed package, reopening, trying to restore, multiple times I just deleted everything in packages and when I did a restore and it worked perfectly. A: it seems now that the nuget packages folder has moved to a machine wide global cache, using VS2022 A: For me the issue was that the solution was to deep into the documents folder and on windows 10 there is a path character limit which was reached. As soon as I moved the solution folder up couple of folders this fixed the issue.
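A pattern worth adding, since several of the answers above boil down to it: guard each external .targets import with an Exists() condition and use an MSBuild property instead of a hard-coded Visual Studio version. A hedged sketch of what that can look like in a .csproj (the WebApplication.targets path is just the example already used above; substitute whatever import is failing for you):

    <Import Project="$(MSBuildExtensionsPath32)\Microsoft\VisualStudio\v$(VisualStudioVersion)\WebApplications\Microsoft.WebApplication.targets"
            Condition="Exists('$(MSBuildExtensionsPath32)\Microsoft\VisualStudio\v$(VisualStudioVersion)\WebApplications\Microsoft.WebApplication.targets')" />

With the Exists() guard the project still loads on machines that are missing the targets file (the build may of course fail later if that file is genuinely required), and $(VisualStudioVersion) keeps the path correct across Visual Studio versions instead of pinning it to v10.0 or v12.0.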
{ "language": "en", "url": "https://stackoverflow.com/questions/5694", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "140" }
Q: Is there a lightweight, preferable open source, formattable label control for .NET? I have been looking for a way to utilize a simple markup language, or just plain HTML, when displaying text in WinForm applications. I would like to avoid embedding a web browser control since in most of the case I just want to highlight a single word or two in a sentence. I have looked at using a RTFControl but I believe it's a bit heavy and I don't think the "language" used to do the formatting is easy. Is there a simple control that allows me to display strings like: This is a sample string with different formatting. I would be really neat if it was also possible to specify a font and/or size for the text. Oh, .NET 3.5 and WPF/XAML is not an option. A: Well, just use HTML. We have used the following 'FREE' control in some of our applications, and it's just beautiful. We can define the UI in HTML Markup and then render it using this control: http://www.terrainformatica.com/htmlayout/main.whtm Initially, we started looking at HtmlToRTF converters so that we can use an RTF control to render UI, but there is far too many options to match between the two formats. And so, we ended up using the above control. The only pre-condition is a mention of their name in your About Box.
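If all you need is to emphasise a word or two, a full HTML renderer can be overkill; a short owner-drawn label gets you surprisingly far. The sketch below is purely illustrative (the HighlightLabel name and the asterisk mini-markup are made up here, not taken from any library mentioned above): it bolds any *starred* span in the Text property.

    using System;
    using System.Drawing;
    using System.Windows.Forms;

    // Usage: label.Text = "This is a *sample* string with different *formatting*."
    public class HighlightLabel : Label
    {
        protected override void OnPaint(PaintEventArgs e)
        {
            using (var bold = new Font(Font, FontStyle.Bold))
            using (var brush = new SolidBrush(ForeColor))
            {
                float x = Padding.Left;
                string[] parts = Text.Split('*');   // odd-numbered parts were between asterisks
                for (int i = 0; i < parts.Length; i++)
                {
                    Font f = (i % 2 == 1) ? bold : Font;
                    e.Graphics.DrawString(parts[i], f, brush, x, Padding.Top);
                    x += e.Graphics.MeasureString(parts[i], f).Width;
                }
            }
        }
    }

Font and size come for free from the inherited Font property; colour spans, links and proper wrapping are where a third-party control starts to earn its keep.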
{ "language": "en", "url": "https://stackoverflow.com/questions/5704", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: When do Request.Params and Request.Form differ? I recently encountered a problem where a value was null if accessed with Request.Form but fine if retrieved with Request.Params. What are the differences between these methods that could cause this? A: Request.Form only includes variables posted through a form, while Request.Params includes both posted form variables and get variables specified as URL parameters. A: Request.Params contains a combination of QueryString, Form, Cookies and ServerVariables (added in that order). The difference is that if you have a form variable called "key1" that is in both the QueryString and Form then Request.Params["key1"] will return the QueryString value and Request.Params.GetValues("key1") will return an array of [querystring-value, form-value]. If there are multiple form values or cookies with the same key then those values will be added to the array returned by GetValues (ie. GetValues will not return a jagged array) A: The reason was that the value I was retrieving was from a form element, but the submit was done through a link + JQuery, not through a form button submit.
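A quick way to see the difference is to post a form to a URL that also carries the same key as a query-string parameter. A rough WebForms sketch (the key name is arbitrary):

    // Page requested as /Page.aspx?key1=fromQuery, and the form POSTs a field key1=fromForm
    protected void Page_Load(object sender, EventArgs e)
    {
        string fromForm  = Request.Form["key1"];             // "fromForm"  - POSTed fields only
        string fromQuery = Request.QueryString["key1"];      // "fromQuery" - URL parameters only
        string[] both    = Request.Params.GetValues("key1"); // { "fromQuery", "fromForm" }
    }

That is also why the symptom in the question happens: a value that arrives as a URL parameter (for example from a jQuery-built link rather than a form submit) shows up in Request.QueryString and Request.Params but is null in Request.Form.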
{ "language": "en", "url": "https://stackoverflow.com/questions/5706", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "36" }
Q: Better windows command line shells Is there a better windows command line shell other than cmd which has better copy paste between Windows' windows and console windows? A: Enable QuickEdit mode, under the Options tab of your shortcut to the command shell. Mark with the mouse, right-click to copy, right-click again to paste. While you're there, enable a hotkey (like CTRL + ALT + C) for lightning fast access to the shell. And no, you can't have CTRL + C for COPY, because CTRL + C means BREAK. On a related note, the Microsoftee who changed the default setting of QuickEdit mode between Windows Server 2000 and 2003 is an idiot and I heap curses upon him each workday. A: Depending on what you're trying to do with the shell, rxvt in cygwin is good. You'll get the nicety of auto copy on selection and middle click paste. The biggest downside is that some windows console apps don't play nice with cygwin. A: PowerCmd is cheaper than TakeCommand and has a lot of powerful features - not the least of which is better handling of Cut/Copy/Paste. I've only been using it a short time but I'm really impressed so far: Summary from the site: PowerCmd enhances your command prompt with an easy-to-use Windows GUI-style interface and allows you to run multiple consoles within a single tabbed window. You can easily organize multiple consoles in vertical, horizontal, and grid forms. Auto-log, auto-completion, keywords highlight, configurable font and colors, customizable toolbar for frequently used commands or tools and minimizing to tray are easy solutions to daily needs. With PowerCmd, you can save and restore your sessions from last time. Site: http://www.powercmd.com/ Features: http://www.powercmd.com/features.php A: Windows PowerShell is the obvious choice when it comes to "better windows command line shell other than cmd". Its clipboard handling isn't that much of an improvement - mark with the mouse, Enter to copy, or right mouse click to paste. A: This probably is not exactly what you want, but you can take a look at Console2 I have it configured so that shift+select auto copies and middle click pastes, really handy, internally it uses same old cmd.exe so you are not really getting a different shell. By the way, I guess Ctrl+C = copy is not the best idea in a command line context because it usually means interrupt running process. A: Not sure what specifically you mean by better copy/paste but try Take Command. Take Command supports Shift+Ins for paste and Shift+Del for cut, but apparently nothing for copy, will dig some more. A: Have you thought through what behavior you want to replace the current Ctrl+C functionality? A: There are two portion to cmd.exe. First there is the window that pops up for dealing with the text console. I would replace that with ConEmu. That program is actually meant as a wrapper for the Far File manager but works just fine without it. It is very similar to Console2 but also is much more stable and has better features. Second there is the command line interpreter. I would replace that with Powershell if you actually need any of its features. I currently run using ConEmu with a batch file to setup my preferred environment. This is kept in my Dropbox folder so it remains synchronized between my computers. A: Take Command does support Copy/Cut/Paste from the keyboard and the mouse. It's pretty handy if you do a lot of work from a command prompt. It also supports: * *Command and folder history, with popup windows to select prior commands or folders. 
*Screen scroll back buffer *Enhanced batch commands *Built in FTP/HTTP file access *A toolbar with programmable buttons Note: It's a paid tool, with price of $99.95. A: Console 2 http://sourceforge.net/projects/console/ http://www.hanselman.com/blog/Console2ABetterWindowsCommandPrompt.aspx A: @Chirs I think you need to clarify shell vs host(emulator). To me it sounds like you need another interface to your existing shell that better supports copy and paste, not another shell that supports more/different features. I second Pat's suggestion of Console2, it is a very good application and OSS to boot. A: I use the standard CMD.EXE shell but with a twist: an AutoHotKey script to support clipboard copy-paste as posted in: Keyboard shortcut to paste clipboard content into command prompt window (Win XP) A: The Windows cmd shell, Cygwin Bash, and msysgit Bash shells can be run within Emacs. EmacsW32 provides all three separately. You just have to set the bin directory to use either of the Bash shells. EmacsW32 also provides limited interactions between the Windows clipboard and the top item of the kill ring. A: MinGW Shell properly set up with: * *right click menu entry *~/.profile file is well above anything else I have tried. A: MinTTY on MinGW/MSYS is nice—nicer than on Cygwin because MinGW/MSYS is faster. Also, if you need cmd.exe behaviour, you can run cmd.exe inside of mintty easily. See http://code.google.com/p/mintty/.
{ "language": "en", "url": "https://stackoverflow.com/questions/5724", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "66" }
Q: What are the barriers to understanding pointers and what can be done to overcome them? Why are pointers such a leading factor of confusion for many new, and even old, college level students in C or C++? Are there any tools or thought processes that helped you understand how pointers work at the variable, function, and beyond level? What are some good practice things that can be done to bring somebody to the level of, "Ah-hah, I got it," without getting them bogged down in the overall concept? Basically, drill like scenarios. A: An example of a tutorial with a good set of diagrams helps greatly with the understanding of pointers. Joel Spolsky makes some good points about understanding pointers in his Guerrilla Guide to Interviewing article: For some reason most people seem to be born without the part of the brain that understands pointers. This is an aptitude thing, not a skill thing – it requires a complex form of doubly-indirected thinking that some people just can't do. A: The problem with pointers is not the concept. It's the execution and language involved. Additional confusion results when teachers assume that it's the CONCEPT of pointers that's difficult, and not the jargon, or the convoluted mess C and C++ makes of the concept. So vast amounts of effort are poored into explaining the concept (like in the accepted answer for this question) and it's pretty much just wasted on someone like me, because I already understand all of that. It's just explaining the wrong part of the problem. To give you an idea of where I'm coming from, I'm someone who understands pointers perfectly well, and I can use them competently in assembler language. Because in assembler language they are not referred to as pointers. They are referred to as addresses. When it comes to programming and using pointers in C, I make a lot of mistakes and get really confused. I still have not sorted this out. Let me give you an example. When an api says: int doIt(char *buffer ) //*buffer is a pointer to the buffer what does it want? it could want: a number representing an address to a buffer (To give it that, do I say doIt(mybuffer), or doIt(*myBuffer)?) a number representing the address to an address to a buffer (is that doIt(&mybuffer) or doIt(mybuffer) or doIt(*mybuffer)?) a number representing the address to the address to the address to the buffer (maybe that's doIt(&mybuffer). or is it doIt(&&mybuffer) ? or even doIt(&&&mybuffer)) and so on, and the language involved doesn't make it as clear because it involves the words "pointer" and "reference" that don't hold as much meaning and clarity to me as "x holds the address to y" and "this function requires an address to y". The answer additionally depends on just what the heck "mybuffer" is to begin with, and what doIt intends to do with it. The language doesn't support the levels of nesting that are encountered in practice. Like when I have to hand a "pointer" in to a function that creates a new buffer, and it modifies the pointer to point at the new location of the buffer. Does it really want the pointer, or a pointer to the pointer, so it knows where to go to modify the contents of the pointer. Most of the time I just have to guess what is meant by "pointer" and most of the time I'm wrong, regardless of how much experience I get at guessing. "Pointer" is just too overloaded. Is a pointer an address to a value? or is it a variable that holds an address to a value. 
When a function wants a pointer, does it want the address that the pointer variable holds, or does it want the address to the pointer variable? I'm confused. A: I think the main barrier to understanding pointers is bad teachers. Almost everyone are taught lies about pointers: That they are nothing more than memory addresses, or that they allow you to point to arbitrary locations. And of course that they are difficult to understand, dangerous and semi-magical. None of which is true. Pointers are actually fairly simple concepts, as long as you stick to what the C++ language has to say about them and don't imbue them with attributes that "usually" turn out to work in practice, but nevertheless aren't guaranteed by the language, and so aren't part of the actual concept of a pointer. I tried to write up an explanation of this a few months ago in this blog post -- hopefully it'll help someone. (Note, before anyone gets pedantic on me, yes, the C++ standard does say that pointers represent memory addresses. But it does not say that "pointers are memory addresses, and nothing but memory addresses and may be used or thought of interchangeably with memory addresses". The distinction is important) A: Pointers is a concept that for many can be confusing at first, in particular when it comes to copying pointer values around and still referencing the same memory block. I've found that the best analogy is to consider the pointer as a piece of paper with a house address on it, and the memory block it references as the actual house. All sorts of operations can thus be easily explained. I've added some Delphi code down below, and some comments where appropriate. I chose Delphi since my other main programming language, C#, does not exhibit things like memory leaks in the same way. If you only wish to learn the high-level concept of pointers, then you should ignore the parts labelled "Memory layout" in the explanation below. They are intended to give examples of what memory could look like after operations, but they are more low-level in nature. However, in order to accurately explain how buffer overruns really work, it was important that I added these diagrams. Disclaimer: For all intents and purposes, this explanation and the example memory layouts are vastly simplified. There's more overhead and a lot more details you would need to know if you need to deal with memory on a low-level basis. However, for the intents of explaining memory and pointers, it is accurate enough. Let's assume the THouse class used below looks like this: type THouse = class private FName : array[0..9] of Char; public constructor Create(name: PChar); end; When you initialize the house object, the name given to the constructor is copied into the private field FName. There is a reason it is defined as a fixed-size array. In memory, there will be some overhead associated with the house allocation, I'll illustrate this below like this: ---[ttttNNNNNNNNNN]--- ^ ^ | | | +- the FName array | +- overhead The "tttt" area is overhead, there will typically be more of this for various types of runtimes and languages, like 8 or 12 bytes. It is imperative that whatever values are stored in this area never gets changed by anything other than the memory allocator or the core system routines, or you risk crashing the program. Allocate memory Get an entrepreneur to build your house, and give you the address to the house. 
In contrast to the real world, memory allocation cannot be told where to allocate, but will find a suitable spot with enough room, and report back the address to the allocated memory. In other words, the entrepreneur will choose the spot. THouse.Create('My house'); Memory layout: ---[ttttNNNNNNNNNN]--- 1234My house Keep a variable with the address Write the address to your new house down on a piece of paper. This paper will serve as your reference to your house. Without this piece of paper, you're lost, and cannot find the house, unless you're already in it. var h: THouse; begin h := THouse.Create('My house'); ... Memory layout: h v ---[ttttNNNNNNNNNN]--- 1234My house Copy pointer value Just write the address on a new piece of paper. You now have two pieces of paper that will get you to the same house, not two separate houses. Any attempts to follow the address from one paper and rearrange the furniture at that house will make it seem that the other house has been modified in the same manner, unless you can explicitly detect that it's actually just one house. Note This is usually the concept that I have the most problem explaining to people, two pointers does not mean two objects or memory blocks. var h1, h2: THouse; begin h1 := THouse.Create('My house'); h2 := h1; // copies the address, not the house ... h1 v ---[ttttNNNNNNNNNN]--- 1234My house ^ h2 Freeing the memory Demolish the house. You can then later on reuse the paper for a new address if you so wish, or clear it to forget the address to the house that no longer exists. var h: THouse; begin h := THouse.Create('My house'); ... h.Free; h := nil; Here I first construct the house, and get hold of its address. Then I do something to the house (use it, the ... code, left as an exercise for the reader), and then I free it. Lastly I clear the address from my variable. Memory layout: h <--+ v +- before free ---[ttttNNNNNNNNNN]--- | 1234My house <--+ h (now points nowhere) <--+ +- after free ---------------------- | (note, memory might still xx34My house <--+ contain some data) Dangling pointers You tell your entrepreneur to destroy the house, but you forget to erase the address from your piece of paper. When later on you look at the piece of paper, you've forgotten that the house is no longer there, and goes to visit it, with failed results (see also the part about an invalid reference below). var h: THouse; begin h := THouse.Create('My house'); ... h.Free; ... // forgot to clear h here h.OpenFrontDoor; // will most likely fail Using h after the call to .Free might work, but that is just pure luck. Most likely it will fail, at a customers place, in the middle of a critical operation. h <--+ v +- before free ---[ttttNNNNNNNNNN]--- | 1234My house <--+ h <--+ v +- after free ---------------------- | xx34My house <--+ As you can see, h still points to the remnants of the data in memory, but since it might not be complete, using it as before might fail. Memory leak You lose the piece of paper and cannot find the house. The house is still standing somewhere though, and when you later on want to construct a new house, you cannot reuse that spot. var h: THouse; begin h := THouse.Create('My house'); h := THouse.Create('My house'); // uh-oh, what happened to our first house? ... h.Free; h := nil; Here we overwrote the contents of the h variable with the address of a new house, but the old one is still standing... somewhere. After this code, there is no way to reach that house, and it will be left standing. 
In other words, the allocated memory will stay allocated until the application closes, at which point the operating system will tear it down. Memory layout after first allocation: h v ---[ttttNNNNNNNNNN]--- 1234My house Memory layout after second allocation: h v ---[ttttNNNNNNNNNN]---[ttttNNNNNNNNNN] 1234My house 5678My house A more common way to get this method is just to forget to free something, instead of overwriting it as above. In Delphi terms, this will occur with the following method: procedure OpenTheFrontDoorOfANewHouse; var h: THouse; begin h := THouse.Create('My house'); h.OpenFrontDoor; // uh-oh, no .Free here, where does the address go? end; After this method has executed, there's no place in our variables that the address to the house exists, but the house is still out there. Memory layout: h <--+ v +- before losing pointer ---[ttttNNNNNNNNNN]--- | 1234My house <--+ h (now points nowhere) <--+ +- after losing pointer ---[ttttNNNNNNNNNN]--- | 1234My house <--+ As you can see, the old data is left intact in memory, and will not be reused by the memory allocator. The allocator keeps track of which areas of memory has been used, and will not reuse them unless you free it. Freeing the memory but keeping a (now invalid) reference Demolish the house, erase one of the pieces of paper but you also have another piece of paper with the old address on it, when you go to the address, you won't find a house, but you might find something that resembles the ruins of one. Perhaps you will even find a house, but it is not the house you were originally given the address to, and thus any attempts to use it as though it belongs to you might fail horribly. Sometimes you might even find that a neighbouring address has a rather big house set up on it that occupies three address (Main Street 1-3), and your address goes to the middle of the house. Any attempts to treat that part of the large 3-address house as a single small house might also fail horribly. var h1, h2: THouse; begin h1 := THouse.Create('My house'); h2 := h1; // copies the address, not the house ... h1.Free; h1 := nil; h2.OpenFrontDoor; // uh-oh, what happened to our house? Here the house was torn down, through the reference in h1, and while h1 was cleared as well, h2 still has the old, out-of-date, address. Access to the house that is no longer standing might or might not work. This is a variation of the dangling pointer above. See its memory layout. Buffer overrun You move more stuff into the house than you can possibly fit, spilling into the neighbours house or yard. When the owner of that neighbouring house later on comes home, he'll find all sorts of things he'll consider his own. This is the reason I chose a fixed-size array. To set the stage, assume that the second house we allocate will, for some reason, be placed before the first one in memory. In other words, the second house will have a lower address than the first one. Also, they're allocated right next to each other. 
Thus, this code: var h1, h2: THouse; begin h1 := THouse.Create('My house'); h2 := THouse.Create('My other house somewhere'); ^-----------------------^ longer than 10 characters 0123456789 <-- 10 characters Memory layout after first allocation: h1 v -----------------------[ttttNNNNNNNNNN] 5678My house Memory layout after second allocation: h2 h1 v v ---[ttttNNNNNNNNNN]----[ttttNNNNNNNNNN] 1234My other house somewhereouse ^---+--^ | +- overwritten The part that will most often cause crash is when you overwrite important parts of the data you stored that really should not be randomly changed. For instance it might not be a problem that parts of the name of the h1-house was changed, in terms of crashing the program, but overwriting the overhead of the object will most likely crash when you try to use the broken object, as will overwriting links that is stored to other objects in the object. Linked lists When you follow an address on a piece of paper, you get to a house, and at that house there is another piece of paper with a new address on it, for the next house in the chain, and so on. var h1, h2: THouse; begin h1 := THouse.Create('Home'); h2 := THouse.Create('Cabin'); h1.NextHouse := h2; Here we create a link from our home house to our cabin. We can follow the chain until a house has no NextHouse reference, which means it's the last one. To visit all our houses, we could use the following code: var h1, h2: THouse; h: THouse; begin h1 := THouse.Create('Home'); h2 := THouse.Create('Cabin'); h1.NextHouse := h2; ... h := h1; while h <> nil do begin h.LockAllDoors; h.CloseAllWindows; h := h.NextHouse; end; Memory layout (added NextHouse as a link in the object, noted with the four LLLL's in the below diagram): h1 h2 v v ---[ttttNNNNNNNNNNLLLL]----[ttttNNNNNNNNNNLLLL] 1234Home + 5678Cabin + | ^ | +--------+ * (no link) In basic terms, what is a memory address? A memory address is in basic terms just a number. If you think of memory as a big array of bytes, the very first byte has the address 0, the next one the address 1 and so on upwards. This is simplified, but good enough. So this memory layout: h1 h2 v v ---[ttttNNNNNNNNNN]---[ttttNNNNNNNNNN] 1234My house 5678My house Might have these two address (the leftmost - is address 0): * *h1 = 4 *h2 = 23 Which means that our linked list above might actuall look like this: h1 (=4) h2 (=28) v v ---[ttttNNNNNNNNNNLLLL]----[ttttNNNNNNNNNNLLLL] 1234Home 0028 5678Cabin 0000 | ^ | +--------+ * (no link) It is typical to store an address that "points nowhere" as a zero-address. In basic terms, what is a pointer? A pointer is just a variable holding a memory address. You can typically ask the programming language to give you its number, but most programming languages and runtimes tries to hide the fact that there is a number beneath, just because the number itself does not really hold any meaning to you. It is best to think of a pointer as a black box, ie. you don't really know or care about how it is actually implemented, just as long as it works. A: The reason pointers seem to confuse so many people is that they mostly come with little or no background in computer architecture. Since many don't seem to have an idea of how computers (the machine) is actually implemented - working in C/C++ seems alien. A drill is to ask them to implement a simple bytecode based virtual machine (in any language they chose, python works great for this) with an instruction set focussed on pointer operations (load, store, direct/indirect addressing). 
Then ask them to write simple programs for that instruction set. Anything requiring slightly more than simple addition is going to involve pointers and they are sure to get it. A: I think that what makes pointers tricky to learn is that until pointers you're comfortable with the idea that "at this memory location is a set of bits that represent an int, a double, a character, whatever". When you first see a pointer, you don't really get what's at that memory location. "What do you mean, it holds an address?" I don't agree with the notion that "you either get them or you don't". They become easier to understand when you start finding real uses for them (like not passing large structures into functions). A: The reason it's so hard to understand is not because it's a difficult concept but because the syntax is inconsistent. int *mypointer; You are first learned that the leftmost part of a variable creation defines the type of the variable. Pointer declaration does not work like this in C and C++. Instead they say that the variable is pointing on the type to the left. In this case: *mypointer is pointing on an int. I didn't fully grasp pointers until i tried using them in C# (with unsafe), they work in exact same way but with logical and consistent syntax. The pointer is a type itself. Here mypointer is a pointer to an int. int* mypointer; Don't even get me started on function pointers... A: I could work with pointers when I only knew C++. I kind of knew what to do in some cases and what not to do from trial/error. But the thing that gave me complete understanding is assembly language. If you do some serious instruction level debugging with an assembly language program you've written, you should be able to understand a lot of things. A: I like the house address analogy, but I've always thought of the address being to the mailbox itself. This way you can visualize the concept of dereferencing the pointer (opening the mailbox). For instance following a linked list: 1) start with your paper with the address 2) Go to the address on the paper 3) Open the mailbox to find a new piece of paper with the next address on it In a linear linked list, the last mailbox has nothing in it (end of the list). In a circular linked list, the last mailbox has the address of the first mailbox in it. Note that step 3 is where the dereference occurs and where you'll crash or go wrong when the address is invalid. Assuming you could walk up to the mailbox of an invalid address, imagine that there's a black hole or something in there that turns the world inside out :) A: I think that the main reason that people have trouble with it is because it's generally not taught in an interesting and engaging manner. I'd like to see a lecturer get 10 volunteers from the crowd and give them a 1 meter ruler each, get them to stand around in a certain configuration and use the rulers to point at each other. Then show pointer arithmetic by moving people around (and where they point their rulers). It'd be a simple but effective (and above all memorable) way of showing the concepts without getting too bogged down in the mechanics. Once you get to C and C++ it seems to get harder for some people. I'm not sure if this is because they are finally putting theory that they don't properly grasp into practice or because pointer manipulation is inherently harder in those languages. I can't remember my own transition that well, but I knew pointers in Pascal and then moved to C and got totally lost. 
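To make the declaration-syntax point a couple of answers up concrete, here is a tiny C sketch (variable names are arbitrary) spelling out what each piece means:

    #include <stdio.h>

    int main(void)
    {
        int value = 42;
        int *p = &value;   /* p is a pointer to int; &value is "the address of value" */
        int **pp = &p;     /* pp is a pointer to a pointer to int                     */

        printf("%d\n", *p);    /* dereference once: prints 42        */
        printf("%d\n", **pp);  /* dereference twice: also prints 42  */

        *p = 7;                /* write through the pointer ...      */
        printf("%d\n", value); /* ... so value itself is now 7       */
        return 0;
    }

The declaration int *p literally reads as "*p is an int", even though the type of p is int*, which is exactly the inconsistency complained about above.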
A: Why are pointers such a leading factor of confusion for many new, and even old, college level students in the C/C++ language? The concept of a placeholder for a value - variables - maps onto something we're taught in school - algebra. There isn't an existing parallel you can draw without understanding how memory is physically laid out within a computer, and no one thinks about this kind of thing until they're dealing with low level things - at the C/C++/byte communications level. Are there any tools or thought processes that helped you understand how pointers work at the variable, function, and beyond level? Addresses boxes. I remember when I was learning to program BASIC into microcomputers, there were these pretty books with games in them, and sometimes you had to poke values into particular addresses. They had a picture of a bunch of boxes, incrementally labelled with 0, 1, 2... and it was explained that only one small thing (a byte) could fit in these boxes, and there were a lot of them - some computers had as many as 65535! They were next to each other, and they all had an address. What are some good practice things that can be done to bring somebody to the level of, "Ah-hah, I got it," without getting them bogged down in the overall concept? Basically, drill like scenarios. For a drill? Make a struct: struct { char a; char b; char c; char d; } mystruct; mystruct.a = 'r'; mystruct.b = 's'; mystruct.c = 't'; mystruct.d = 'u'; char* my_pointer; my_pointer = &mystruct.b; cout << 'Start: my_pointer = ' << *my_pointer << endl; my_pointer++; cout << 'After: my_pointer = ' << *my_pointer << endl; my_pointer = &mystruct.a; cout << 'Then: my_pointer = ' << *my_pointer << endl; my_pointer = my_pointer + 3; cout << 'End: my_pointer = ' << *my_pointer << endl; Same example as above, except in C: // Same example as above, except in C: struct { char a; char b; char c; char d; } mystruct; mystruct.a = 'r'; mystruct.b = 's'; mystruct.c = 't'; mystruct.d = 'u'; char* my_pointer; my_pointer = &mystruct.b; printf("Start: my_pointer = %c\n", *my_pointer); my_pointer++; printf("After: my_pointer = %c\n", *my_pointer); my_pointer = &mystruct.a; printf("Then: my_pointer = %c\n", *my_pointer); my_pointer = my_pointer + 3; printf("End: my_pointer = %c\n", *my_pointer); Output: Start: my_pointer = s After: my_pointer = t Then: my_pointer = r End: my_pointer = u Perhaps that explains some of the basics through example? A: The reason I had a hard time understanding pointers, at first, is that many explanations include a lot of rubbish about passing by reference. All this does is confuse the issue. When you use a pointer parameter, you're still passing by value; but the value happens to be an address rather than, say, an int. Someone else has already linked to this tutorial, but I can highlight the moment when I began to understand pointers: A Tutorial on Pointers and Arrays in C: Chapter 3 - Pointers and Strings int puts(const char *s); For the moment, ignore the const. The parameter passed to puts() is a pointer, that is the value of a pointer (since all parameters in C are passed by value), and the value of a pointer is the address to which it points, or, simply, an address. Thus when we write puts(strA); as we have seen, we are passing the address of strA[0]. The moment I read these words, the clouds parted and a beam of sunlight enveloped me with pointer understanding. 
Even if you're a VB .NET or C# developer (as I am) and never use unsafe code, it's still worth understanding how pointers work, or you won't understand how object references work. Then you'll have the common-but-mistaken notion that passing an object reference to a method copies the object. A: I don't think that pointers themselves are confusing. Most people can understand the concept. Now how many pointers can you think about or how many levels of indirection are you comfortable with. It doesn't take too many to put people over the edge. The fact that they can be changed accidently by bugs in your program can also make them very difficult to debug when things go wrong in your code. A: I think it might actually be a syntax issue. The C/C++ syntax for pointers seems inconsistent and more complex than it needs to be. Ironically, the thing that actually helped me to understand pointers was encountering the concept of an iterator in the c++ Standard Template Library. It's ironic because I can only assume that iterators were conceived as a generalization of the pointer. Sometimes you just can't see the forest until you learn to ignore the trees. A: The confusion comes from the multiple abstraction layers mixed together in the "pointer" concept. Programmers don't get confused by ordinary references in Java/Python, but pointers are different in that they expose characteristics of the underlying memory-architecture. It is a good principle to cleanly separate layers of abstraction, and pointers do not do that. A: The way I liked to explain it was in terms of arrays and indexes - people might not be familiar with pointers, but they generally know what an index is. So I say imagine that the RAM is an array (and you have only 10-bytes of RAM): unsigned char RAM[10] = { 10, 14, 4, 3, 2, 1, 20, 19, 50, 9 }; Then a pointer to a variable is really just the index of (the first byte of) that variable in the RAM. So if you have a pointer/index unsigned char index = 2, then the value is obviously the third element, or the number 4. A pointer to a pointer is where you take that number and use it as an index itself, like RAM[RAM[index]]. I would draw an array on a list of paper, and just use it to show things like many pointers pointing to the same memory, pointer arithmetic, pointer to pointer, and so on. A: I found Ted Jensen's "Tutorial on Pointers and Arrays in C" an excellent resource for learning about pointers. It is divided into 10 lessons, beginning with an explanation of what pointers are (and what they're for) and finishing with function pointers. http://web.archive.org/web/20181011221220/http://home.netcom.com:80/~tjensen/ptr/cpoint.htm Moving on from there, Beej's Guide to Network Programming teaches the Unix sockets API, from which you can begin to do really fun things. http://beej.us/guide/bgnet/ A: In my first Comp Sci class, we did the following exercise. Granted, this was a lecture hall with roughly 200 students in it... Professor writes on the board: int john; John stands up Professor writes: int *sally = &john; Sally stands up, points at john Professor: int *bill = sally; Bill stands up, points at John Professor: int sam; Sam stands up Professor: bill = &sam; Bill now points to Sam. I think you get the idea. I think we spent about an hour doing this, until we went over the basics of pointer assignment. A: An analogy I've found helpful for explaining pointers is hyperlinks. 
Most people can understand that a link on a web page 'points' to another page on the internet, and if you can copy & paste that hyperlink then they will both point to the same original web page. If you go and edit that original page, then follow either of those links (pointers) you'll get that new updated page. A: The complexities of pointers go beyond what we can easily teach. Having students point to each other and using pieces of paper with house addresses are both great learning tools. They do a great job of introducing the basic concepts. Indeed, learning the basic concepts is vital to successfully using pointers. However, in production code, it's common to get into much more complex scenarios than these simple demonstrations can encapsulate. I've been involved with systems where we had structures pointing to other structures pointing to other structures. Some of those structures also contained embedded structures (rather than pointers to additional structures). This is where pointers get really confusing. If you've got multiple levels of indirection, and you start ending up with code like this: widget->wazzle.fizzle = fazzle.foozle->wazzle; it can get confusing really quickly (imagine a lot more lines, and potentially more levels). Throw in arrays of pointers, and node to node pointers (trees, linked lists) and it gets worse still. I've seen some really good developers get lost once they started working on such systems, even developers who understood the basics really well. Complex structures of pointers don't necessarily indicate poor coding, either (though they can). Composition is a vital piece of good object-oriented programming, and in languages with raw pointers, it will inevitably lead to multi-layered indirection. Further, systems often need to use third-party libraries with structures which don't match each other in style or technique. In situations like that, complexity is naturally going to arise (though certainly, we should fight it as much as possible). I think the best thing colleges can do to help students learn pointers is to to use good demonstrations, combined with projects that require pointer use. One difficult project will do more for pointer understanding than a thousand demonstrations. Demonstrations can get you a shallow understanding, but to deeply grasp pointers, you have to really use them. A: I don't think pointers as a concept are particularly tricky - most students' mental models map to something like this and some quick box sketches can help. The difficulty, at least that which I've experienced in the past and seen others deal with, is that the management of pointers in C/C++ can be unncessarily convoluted. A: I thought I'd add an analogy to this list that I found very helpful when explaining pointers (back in the day) as a Computer Science Tutor; first, let's: Set the stage: Consider a parking lot with 3 spaces, these spaces are numbered: ------------------- | | | | | 1 | 2 | 3 | | | | | In a way, this is like memory locations, they are sequential and contiguous.. sort of like an array. Right now there are no cars in them so it's like an empty array (parking_lot[3] = {0}). Add the data A parking lot never stays empty for long... if it did it would be pointless and no one would build any. 
So let's say as the day moves on the lot fills up with 3 cars, a blue car, a red car, and a green car: 1 2 3 ------------------- | o=o | o=o | o=o | | |B| | |R| | |G| | | o-o | o-o | o-o | These cars are all the same type (car) so one way to think of this is that our cars are some sort of data (say an int) but they have different values (blue, red, green; that could be an color enum) Enter the pointer Now if I take you into this parking lot, and ask you to find me a blue car, you extend one finger and use it to point to a blue car in spot 1. This is like taking a pointer and assigning it to a memory address (int *finger = parking_lot) Your finger (the pointer) is not the answer to my question. Looking at your finger tells me nothing, but if I look where you're finger is pointing to (dereferencing the pointer), I can find the car (the data) I was looking for. Reassigning the pointer Now I can ask you to find a red car instead and you can redirect your finger to a new car. Now your pointer (the same one as before) is showing me new data (the parking spot where the red car can be found) of the same type (the car). The pointer hasn't physically changed, it's still your finger, just the data it was showing me changed. (the "parking spot" address) Double pointers (or a pointer to a pointer) This works with more than one pointer as well. I can ask where is the pointer, which is pointing to the red car and you can use your other hand and point with a finger to the first finger. (this is like int **finger_two = &finger) Now if I want to know where the blue car is I can follow the first finger's direction to the second finger, to the car (the data). The dangling pointer Now let's say you're feeling very much like a statue, and you want to hold your hand pointing at the red car indefinitely. What if that red car drives away? 1 2 3 ------------------- | o=o | | o=o | | |B| | | |G| | | o-o | | o-o | Your pointer is still pointing to where the red car was but is no longer. Let's say a new car pulls in there... a Orange car. Now if I ask you again, "where is the red car", you're still pointing there, but now you're wrong. That's not an red car, that's orange. Pointer arithmetic Ok, so you're still pointing at the second parking spot (now occupied by the Orange car) 1 2 3 ------------------- | o=o | o=o | o=o | | |B| | |O| | |G| | | o-o | o-o | o-o | Well I have a new question now... I want to know the color of the car in the next parking spot. You can see you're pointing at spot 2, so you just add 1 and you're pointing at the next spot. (finger+1), now since I wanted to know what the data was there, you have to check that spot (not just the finger) so you can deference the pointer (*(finger+1)) to see there is a green car present there (the data at that location) A: Post office box number. It's a piece of information that allows you to access something else. (And if you do arithmetic on post office box numbers, you may have a problem, because the letter goes in the wrong box. And if somebody moves to another state -- with no forwarding address -- then you have a dangling pointer. On the other hand -- if the post office forwards the mail, then you have a pointer to a pointer.) A: Not a bad way to grasp it, via iterators.. but keep looking you'll see Alexandrescu start complaining about them. Many ex-C++ devs (that never understood that iterators are a modern pointer before dumping the language) jump to C# and still believe they have decent iterators. 
Hmm, the problem is that all that iterators are is in complete odds at what the runtime platforms (Java/CLR) are trying to achieve: new, simple, everyone-is-a-dev usage. Which can be good, but they said it once in the purple book and they said it even before and before C: Indirection. A very powerful concept but never so if you do it all the way.. Iterators are useful as they help with abstraction of algorithms, another example. And compile-time is the place for an algorithm, very simple. You know code + data, or in that other language C#: IEnumerable + LINQ + Massive Framework = 300MB runtime penalty indirection of lousy, dragging apps via heaps of instances of reference types.. "Le Pointer is cheap." A: Some answers above have asserted that "pointers aren't really hard", but haven't gone on to address directly where "pointer are hard!" comes from. Some years back I tutored first year CS students (for only one year, since I clearly sucked at it) and it was clear to me that the idea of pointer is not hard. What's hard is understanding why and when you would want a pointer. I don't think you can divorce that question - why and when to use a pointer - from explaining broader software engineering issues. Why every variable should not be a global variable, and why one should factor out similar code into functions (that, get this, use pointers to specialize their behaviour to their call site). A: I don't see what is so confusing about pointers. They point to a location in memory, that is it stores the memory address. In C/C++ you can specify the type the pointer points to. For example: int* my_int_pointer; Says that my_int_pointer contains the address to a location that contains an int. The problem with pointers is that they point to a location in memory, so it is easy to trail off into some location you should not be in. As proof look at the numerous security holes in C/C++ applications from buffer overflow (incrementing the pointer past the allocated boundary). A: Just to confuse things a bit more, sometimes you have to work with handles instead of pointers. Handles are pointers to pointers, so that the back end can move things in memory to defragment the heap. If the pointer changes in mid-routine, the results are unpredictable, so you first have to lock the handle to make sure nothing goes anywhere. http://arjay.bc.ca/Modula-2/Text/Ch15/Ch15.8.html#15.8.5 talks about it a bit more coherently than me. :-) A: Every C/C++ beginner has the same problem and that problem occurs not because "pointers are hard to learn" but "who and how it is explained". Some learners gather it verbally some visually and the best way of explaining it is to use "train" example (suits for verbal and visual example). Where "locomotive" is a pointer which can not hold anything and "wagon" is what "locomotive" tries pull (or point to). After, you can classify the "wagon" itself, can it hold animals,plants or people (or a mix of them).
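As a footnote to the handles answer above, here is a minimal C sketch of a pointer to a pointer that survives the underlying block being moved (realloc stands in for the heap manager, and there is no error checking; it is only the shape of the idea):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        char *block = malloc(16);
        char **handle = &block;        /* the handle points at the pointer, not at the data  */
        strcpy(block, "hello");

        block = realloc(block, 1024);  /* the block may move; the handle itself never changes */

        printf("%s\n", *handle);       /* still reaches the data through the updated pointer  */
        free(block);
        return 0;
    }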
{ "language": "en", "url": "https://stackoverflow.com/questions/5727", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "468" }
Q: Firebug won't display console feeds for some of my sites Using Firebug v1.20b7 with Firefox v3.0.1 I use firebug a lot for web devlopment. I have very often the problem that Firebug won't show its web console for seeing the POSTs and GETs. I can view all the other tabs, including the NET tab that gives me a lot of the same information that the CONSOLE tab does. Curious if anyone else has had this problem, and maybe a solution, or maybe this is a bug of Firebug. A: There is a limitation in firebug (or rather, in firefox iteself), which will be fixed in one of the newer Firefox releases. The bug is caused by the fact that firebug needs to send data a second time to monitor what's going on in the connection. There's now a special API hook in the firefox trunk that should prevent this workaround in the future, so that firebug can really spy on what's going on :) A: Well, 1.20b7 is technically a beta version of Firebug. :) I've had problems with certain features off and on, but a restart of Firefox seems to fix it more often than not.
{ "language": "en", "url": "https://stackoverflow.com/questions/5741", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: What is a good plotting library for .Net? I'm writing some data acquisition software and need a gui plotting library that is fast enough to do realtime updated graphs. I've been using Nplot which is pretty good for a free library, but I'm wondering if there are any better libraries (preferably free or cheap). A: There's a good post about this here and here. I have also used NPlot in our last project since it's easier to use. A: If you need something to display in a WinForms app then you can try out the free ZedGraph. If it is ASP.NET then I recently have used Google Charts with some great results. A: Well, not free. But we have had very good results with Nevron. Their support is excellent as well. Another good option is TeeCharts. A: ZedGraph is not WinForms only, there's a web control too. It's very good. A: Definetly go for Zedgraph, we have been using this for years and it is very good in 'real-time' graphing. Zedgraph is very well documented and there are many examples available. A: If speed is your main concern, nothing better than TeeChart. The fastLine series it provides plots 10 million points in a second. And recently they have introduced it for realtime as well. I was blown away by the speed of plotting. A: You might want to take a look at Open Flash Chart. It's an open source graphing tool built in flash and can be dynamically updated. Check out the Ajax example for an idea of what it can do.
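Since ZedGraph is recommended several times above, here is roughly what a real-time update looks like with it. This is a sketch written from memory of ZedGraph's commonly documented usage, so treat the member names (ZedGraphControl, GraphPane, AddCurve, PointPairList, AxisChange) as things to verify against the version you actually download:

    using System.Drawing;
    using System.Windows.Forms;
    using ZedGraph;

    public class ScopeForm : Form
    {
        private readonly ZedGraphControl _graph = new ZedGraphControl { Dock = DockStyle.Fill };
        private readonly PointPairList _points = new PointPairList();

        public ScopeForm()
        {
            Controls.Add(_graph);
            _graph.GraphPane.AddCurve("Signal", _points, Color.Blue, SymbolType.None);
        }

        // Call for every new sample from the acquisition loop (on the UI thread).
        public void AddSample(double time, double value)
        {
            _points.Add(time, value);
            _graph.AxisChange();   // recalculate the axis ranges
            _graph.Invalidate();   // repaint
        }
    }

For high sample rates you would batch several samples per repaint rather than invalidating on every point.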
{ "language": "en", "url": "https://stackoverflow.com/questions/5743", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19" }
Q: Equivalent VB keyword for 'break' I just moved over to the Visual Basic team here at work. What is the equivalent keyword to break in Visual Basic, that is, to exit a loop early but not the method? A: In case you're inside a Sub or Function and you want to exit it, you can use: Exit Sub or Exit Function A: Exit [construct], and IntelliSense will tell you which one(s) are valid in a particular place. A: In both Visual Basic 6.0 and VB.NET you would use: * *Exit For to break from a For loop *Exit While to break from a While loop (VB.NET only; VB 6.0's While...Wend has no early exit, so use a Do loop there) *Exit Do to break from a Do loop depending on the loop type. See Exit Statements for more details.
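A small VB.NET sketch showing the three loop forms side by side (the loop bounds are arbitrary):

    For i As Integer = 1 To 100
        If i = 10 Then Exit For      ' the equivalent of C#'s break
    Next

    Dim n As Integer = 0
    Do
        n += 1
        If n >= 10 Then Exit Do      ' break out of a Do loop
    Loop

    While n < 100
        n += 1
        If n = 50 Then Exit While    ' break out of a While loop (VB.NET)
    End While

For the equivalent of C#'s continue, VB 2005 and later also have Continue For, Continue Do and Continue While.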
{ "language": "en", "url": "https://stackoverflow.com/questions/5759", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "111" }
Q: Tab Escape Character? I'm just in the process of parsing some text and can't remember what the escape character is for a tab in C#? A: For someone who needs a quick reference of C# escape sequences that can be used in string literals: \t     Horizontal tab (ASCII code value: 9) \n     Line feed (ASCII code value: 10) \r     Carriage return (ASCII code value: 13) \'     Single quotation mark \"     Double quotation mark \\     Backslash \?     Literal question mark (a C/C++ escape sequence that C# does not recognize; a plain ? needs no escaping) \x12     ASCII character in hexadecimal notation (e.g. for 0x12) \x1234     Unicode character in hexadecimal notation (e.g. for 0x1234) It's worth mentioning that these (in most cases) are universal codes. So \t is 9 and \n is 10 as a char value on Windows and Linux. But the newline sequence is not universal. On Windows it's \r\n and on Linux it's just \n. That's why it's best to use Environment.NewLine, which gets adjusted to the current OS settings. With .NET Core this gets really important. A: Easy one! "\t" Edit: In fact, here's something official: Escape Sequences
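For the parsing case in the question, the tab escape is used like any other character, for example to split a tab-delimited line (the field names here are made up):

    string line = "Id\tName\tCreatedOn";   // "\t" is the tab character, (char)9
    string[] fields = line.Split('\t');

    foreach (string field in fields)
        Console.WriteLine(field);

Note that a verbatim string does not help here: @"\t" is a backslash followed by the letter t, not a tab. If you want to avoid the escape entirely you can use the char value directly, e.g. char tab = (char)9;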
{ "language": "en", "url": "https://stackoverflow.com/questions/5787", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "74" }
Q: IsNothing versus Is Nothing Does anyone here use VB.NET and have a strong preference for or against using IsNothing as opposed to Is Nothing (for example, If IsNothing(anObject) or If anObject Is Nothing...)? If so, why? EDIT: If you think they're both equally acceptable, do you think it's best to pick one and stick with it, or is it OK to mix them? A: I find that Patrick Steele answered this question best on his blog: Avoiding IsNothing() I did not copy any of his answer here, to ensure Patrick Steele get's credit for his post. But I do think if you're trying to decide whether to use Is Nothing or IsNothing you should read his post. I think you'll agree that Is Nothing is the best choice. Edit - VoteCoffe's comment here Partial article contents: After reviewing more code I found out another reason you should avoid this: It accepts value types! Obviously, since IsNothing() is a function that accepts an 'object', you can pass anything you want to it. If it's a value type, .NET will box it up into an object and pass it to IsNothing -- which will always return false on a boxed value! The VB.NET compiler will check the "Is Nothing" style syntax and won't compile if you attempt to do an "Is Nothing" on a value type. But the IsNothing() function compiles without complaints. -PSteele – VoteCoffee A: VB is full of things like that trying to make it both "like English" and comfortable for people who are used to languages that use () and {} a lot. And on the other side, as you already probably know, most of the time you can use () with function calls if you want to, but don't have to. I prefer IsNothing()... but I use C and C#, so that's just what is comfortable. And I think it's more readable. But go with whatever feels more comfortable to you. A: I'm leaning towards the "Is Nothing" alternative, primarily because it seems more OO. Surely Visual Basic ain't got the Ain't keyword. A: You should absolutely avoid using IsNothing() Here are 4 reasons from the article IsNothing() VS Is Nothing * *Most importantly, IsNothing(object) has everything passed to it as an object, even value types! Since value types cannot be Nothing, it’s a completely wasted check. Take the following example: Dim i As Integer If IsNothing(i) Then ' Do something End If This will compile and run fine, whereas this: Dim i As Integer If i Is Nothing Then ' Do something End If Will not compile, instead the compiler will raise the error: 'Is' operator does not accept operands of type 'Integer'. Operands must be reference or nullable types. *IsNothing(object) is actually part of part of the Microsoft.VisualBasic.dll. This is undesirable as you have an unneeded dependency on the VisualBasic library. *Its slow - 33.76% slower in fact (over 1000000000 iterations)! *Perhaps personal preference, but IsNothing() reads like a Yoda Condition. When you look at a variable you're checking its state, with it as the subject of your investigation. i.e. does it do x? --- NOT Is xing a property of it? So I think If a IsNot Nothing reads better than If Not IsNothing(a) A: I agree with "Is Nothing". As stated above, it's easy to negate with "IsNot Nothing". I find this easier to read... If printDialog IsNot Nothing Then 'blah End If than this... If Not obj Is Nothing Then 'blah End If A: If you take a look at the MSIL as it's being executed you'll see that it doesn't compile down to the exact same code. When you use IsNothing() it actually makes a call to that method as opposed to just evaluating the expression. 
The reason I would tend to lean towards using "Is Nothing" is that when I'm negating, it becomes "IsNot Nothing" rather than "Not IsNothing(object)", which I personally feel looks more readable. A: I initially used IsNothing but I've been moving towards using Is Nothing in newer projects, mainly for readability. The only time I stick with IsNothing is if I'm maintaining code where that's used throughout and I want to stay consistent. A: Is Nothing requires an object that has been assigned the value Nothing. IsNothing() will accept any variable, including those of numeric type, but keep in mind that for value types it can only ever return False (they get boxed and can never be Nothing). It can be useful, for example, when testing whether an optional reference-type parameter has been passed.
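A short VB.NET sketch of that optional-parameter point (the method and parameter names are invented for illustration):

    Sub Log(Optional ByVal message As String = Nothing)
        ' Both spellings work here because String is a reference type:
        If message Is Nothing Then message = "(no message)"
        ' If Not IsNothing(message) Then ... compiles and behaves the same way.
        Console.WriteLine(message)
    End Sub

    Sub Count(Optional ByVal max As Integer = 0)
        ' If max Is Nothing Then ...   does not compile - Integer is a value type
        ' IsNothing(max)               compiles, but is always False (max gets boxed)
    End Sub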
{ "language": "en", "url": "https://stackoverflow.com/questions/5791", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "134" }
Q: Inheritance in database? Is there any way to use inheritance in database (Specifically in SQL Server 2005)? Suppose I have few field like CreatedOn, CreatedBy which I want to add on all of my entities. I looking for an alternative way instead of adding these fields to every table. A: There is no such thing as inheritance between tables in SQL Server 2005, and as noted by the others, you can get as far as getting help adding the necessary columns to the tables when you create them, but it won't be inheritance as you know it. Think of it more like a template for your source code files. As GateKiller mentions, you can create a table containing the shared data and reference it with a foreign key, but you'll either have to have audit hooks, triggers, or do the update manually. Bottom line: Manual work. A: PostgreSQL has this feature. Just add this to the end of your table definition: INHERITS FROM (tablename[, othertable...]) The child table will have all the columns of its parent, and changes to the parent table will change the child. Also, everything in the child table will come up in queries to the parent table (by default). Unfortunately indices don't cross the parent/child border, which also means you can't make sure that certain columns are unique across both the parent and child. As far as I know, it's not a feature used very often. A: You could create a template in the template pane in Management Studio. And then use that template every time you want to create a new table. Failing that, you could store the CreatedOn and CreatedBy fields in an Audit trail table referencing the original table and id. Failing that, do it manually. A: You could use a data modeling tool such as ER/Studio or ERWin. Both tools have domain columns where you can define a column template that you can apply to any table. When the domain changes so do the associated columns. ER/Studio also has trigger templates that you can build and apply to any table. This is how we update our LastUpdatedBy and LastUpdatedDate columns without having to build and maintain hundreds of trigger scripts. If you do create an audit table you would have one row for every row in every table that uses the audit table. That could get messy. In my opinion, you're better off putting the audit columns in every table. You also may want to put a timestamp column in all of your tables. You never know when concurrency becomes a problem. Our DB audit columns that we put in every table are: CreatedDt, LastUpdatedBy, LastUpdatedDt and Timestamp. Hope this helps. A: We have a SProc that adds audit columns to a given table, and (optionally) creates a history table and associated triggers to track changes to a value. Unfortunately, company policy means I can't share, but it really isn't difficult to achieve. A: If you are using GUIDs you could create a CreateHistory table with columns GUID, CreatedOn, CreatedBy. For populating the table you would still have to create a trigger for every table or handle it in the application logic. A: You do NOT want to use inheritance to do this! When table B, C and D inherits from table A, that means that querying table A will give you records from B, C and D. Now consider... DELETE FROM a; Instead of inheritance, use LIKE instead... CREATE TABLE blah ( blah_id serial PRIMARY KEY , something text NOT NULL , LIKE template_table INCLUDING DEFALUTS ); A: Ramesh - I would implement this using supertype and subtype relationships in my E-R model. 
There are a few different physical options you have of implementing the relationships as well. A: In O-R mapping, inheritance maps to a parent table where the parent and child tables use the same identifier. For example:
create table Object (
    Id int NOT NULL, -- primary key, auto-increment
    Name varchar(32)
)
create table SubObject (
    Id int NOT NULL, -- primary key and also foreign key to Object
    Description varchar(32)
)
SubObject has a foreign-key relationship to Object. When you create a SubObject row, you must first create an Object row and use the Id in both rows.
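To make that last suggestion a little more concrete, here is a minimal T-SQL sketch of the parent/child (supertype/subtype) pattern with the audit columns from the question folded into the parent. The names and column sizes are invented for illustration, so treat it as a starting point rather than a finished design:
CREATE TABLE Object (
    Id        INT IDENTITY(1,1) PRIMARY KEY,   -- auto-increment surrogate key
    Name      VARCHAR(32) NOT NULL,
    CreatedOn DATETIME NOT NULL DEFAULT GETDATE(),
    CreatedBy VARCHAR(50) NOT NULL
)
CREATE TABLE SubObject (
    Id          INT NOT NULL PRIMARY KEY REFERENCES Object(Id),  -- shared key with the parent
    Description VARCHAR(32) NOT NULL
)
-- Creating a SubObject means creating the Object row first and reusing its Id
INSERT INTO Object (Name, CreatedBy) VALUES ('example', 'someuser')
INSERT INTO SubObject (Id, Description) VALUES (SCOPE_IDENTITY(), 'example child')
Every subtype row then "inherits" CreatedOn/CreatedBy through the join to its parent row, which is the trade-off the earlier answers describe: one place for the shared columns, at the cost of an extra insert and join.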
{ "language": "en", "url": "https://stackoverflow.com/questions/5802", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: SQL Server Escape an Underscore How do I escape the underscore character? I am writing something like the following where clause and want to be able to find actual entries with _d at the end. Where Username Like '%_d' A: These solutions totally make sense. Unfortunately, neither worked for me as expected. Instead of trying to hassle with it, I went with a work around: select * from information_schema.columns where replace(table_name,'_','!') not like '%!%' order by table_name A: Adding [ ] did the job for me like '%[\\_]%' A: I had a similar issue using like pattern '%_%' did not work - as the question indicates :-) Using '%\_%' did not work either as this first \ is interpreted "before the like". Using '%\\_%' works. The \\ (double backslash) is first converted to single \ (backslash) and then used in the like pattern. A: T-SQL Reference for LIKE: You can use the wildcard pattern matching characters as literal characters. To use a wildcard character as a literal character, enclose the wildcard character in brackets. The following table shows several examples of using the LIKE keyword and the [ ] wildcard characters. For your case: ... LIKE '%[_]d' A: This worked for me, just use the escape '%\_%' A: Obviously @Lasse solution is right, but there's another way to solve your problem: T-SQL operator LIKE defines the optional ESCAPE clause, that lets you declare a character which will escape the next character into the pattern. For your case, the following WHERE clauses are equivalent: WHERE username LIKE '%[_]d'; -- @Lasse solution WHERE username LIKE '%$_d' ESCAPE '$'; WHERE username LIKE '%^_d' ESCAPE '^'; A: None of these worked for me in SSIS v18.0, so I would up doing something like this: WHERE CHARINDEX('_', thingyoursearching) < 1..where I am trying to ignore strings with an underscore in them. If you want to find things that have an underscore, just flip it around: WHERE CHARINDEX('_', thingyoursearching) > 0 A: Adding to Gerardo Lima's answer, I was having problems when trying to use backslash as my escape character for the ESCAPE clause. This caused issues: SELECT * FROM table WHERE email LIKE '%@%\_%' ESCAPE '\' It was resolved by switching to an exclamation point. This worked: SELECT * FROM table WHERE email LIKE '%@%!_%' ESCAPE '!'
{ "language": "en", "url": "https://stackoverflow.com/questions/5821", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "440" }
Q: Binary patch-generation in C# Does anyone have, or know of, a binary patch generation algorithm implementation in C#? Basically, compare two files (designated old and new), and produce a patch file that can be used to upgrade the old file to have the same contents as the new file. The implementation would have to be relatively fast, and work with huge files. It should exhibit O(n) or O(logn) runtimes. My own algorithms tend to either be lousy (fast but produce huge patches) or slow (produce small patches but have O(n^2) runtime). Any advice, or pointers for implementation would be nice. Specifically, the implementation will be used to keep servers in sync for various large datafiles that we have one master server for. When the master server datafiles change, we need to update several off-site servers as well. The most naive algorithm I have made, which only works for files that can be kept in memory, is as follows: * *Grab the first four bytes from the old file, call this the key *Add those bytes to a dictionary, where key -> position, where position is the position where I grabbed those 4 bytes, 0 to begin with *Skip the first of these four bytes, grab another 4 (3 overlap, 1 one), and add to the dictionary the same way *Repeat steps 1-3 for all 4-byte blocks in the old file *From the start of the new file, grab 4 bytes, and attempt to look it up in the dictionary *If found, find the longest match if there are several, by comparing bytes from the two files *Encode a reference to that location in the old file, and skip the matched block in the new file *If not found, encode 1 byte from the new file, and skip it *Repeat steps 5-8 for the rest of the new file This is somewhat like compression, without windowing, so it will use a lot of memory. It is, however, fairly fast, and produces quite small patches, as long as I try to make the codes output minimal. A more memory-efficient algorithm uses windowing, but produces much bigger patch files. There are more nuances to the above algorithm that I skipped in this post, but I can post more details if necessary. I do, however, feel that I need a different algorithm altogether, so improving on the above algorithm is probably not going to get me far enough. Edit #1: Here is a more detailed description of the above algorithm. First, combine the two files, so that you have one big file. Remember the cut-point between the two files. Secondly, do that grab 4 bytes and add their position to the dictionary step for everything in the whole file. Thirdly, from where the new file starts, do the loop with attempting to locate an existing combination of 4 bytes, and find the longest match. Make sure we only consider positions from the old file, or from earlier in the new file than we're currently at. This ensures that we can reuse material in both the old and the new file during patch application. Edit #2: Source code to the above algorithm You might get a warning about the certificate having some problems. I don't know how to resolve that so for the time being just accept the certificate. The source uses lots of other types from the rest of my library so that file isn't all it takes, but that's the algorithm implementation. @lomaxx, I have tried to find a good documentation for the algorithm used in subversion, called xdelta, but unless you already know how the algorithm works, the documents I've found fail to tell me what I need to know. Or perhaps I'm just dense... 
:) I took a quick peek at the algorithm from that site you gave, and it is unfortunately not usable. A comment from the binary diff file says: Finding an optimal set of differences requires quadratic time relative to the input size, so it becomes unusable very quickly. My needs aren't optimal though, so I'm looking for a more practical solution. Thanks for the answer though, added a bookmark to his utilities if I ever need them. Edit #1: Note, I will look at his code to see if I can find some ideas, and I'll also send him an email later with questions, but I've read that book he references and though the solution is good for finding optimal solutions, it is impractical in use due to the time requirements. Edit #2: I'll definitely hunt down the python xdelta implementation. A: Sorry I couldn't be more help. I would definitely keep looking at xdelta because I have used it a number of times to produce quality diffs on 600MB+ ISO files we have generated for distributing our products and it performs very well. A: bsdiff was designed to create very small patches for binary files. As stated on its page, it requires max(17*n,9*n+m)+O(1) bytes of memory and runs in O((n+m) log n) time (where n is the size of the old file and m is the size of the new file). The original implementation is in C, but a C# port is described here and available here. A: Have you seen VCDiff? It is part of a Misc library that appears to be fairly active (last release r259, April 23rd 2008). I haven't used it, but thought it was worth mentioning. A: It might be worth checking out what some of the other guys are doing in this space and not necessarily in the C# arena either. This is a library written in C#. SVN also has a binary diff algorithm, and I know there's an implementation in Python although I couldn't find it with a quick search. They might give you some ideas on where to improve your own algorithm. A: If this is for installation or distribution, have you considered using the Windows Installer SDK? It has the ability to patch binary files. http://msdn.microsoft.com/en-us/library/aa370578(VS.85).aspx A: This is a rough guideline, but the following is for the rsync algorithm which can be used to create your binary patches. http://rsync.samba.org/tech_report/tech_report.html
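For what it's worth, here is a rough C# sketch of the block-dictionary matching the question describes. It is only an illustration of the idea, not a drop-in implementation: the "patch encoding" is just console output, the whole files are assumed to fit in memory, and a real version would cap the candidate lists so a 4-byte key that appears thousands of times doesn't drag the runtime back towards O(n^2).
using System;
using System.Collections.Generic;

static class NaiveBlockDiff
{
    // Pack a 4-byte window into an int key, as in steps 1-4 of the question.
    static int Key(byte[] data, int pos)
    {
        return data[pos] | (data[pos + 1] << 8) | (data[pos + 2] << 16) | (data[pos + 3] << 24);
    }

    public static void Diff(byte[] oldData, byte[] newData)
    {
        // 4-byte key -> every position in the old file where that window occurs
        var index = new Dictionary<int, List<int>>();
        for (int i = 0; i + 4 <= oldData.Length; i++)
        {
            List<int> positions;
            if (!index.TryGetValue(Key(oldData, i), out positions))
                index[Key(oldData, i)] = positions = new List<int>();
            positions.Add(i);
        }

        int pos = 0;
        while (pos < newData.Length)
        {
            int bestStart = -1, bestLen = 0;
            List<int> candidates;
            if (pos + 4 <= newData.Length && index.TryGetValue(Key(newData, pos), out candidates))
            {
                foreach (int start in candidates)   // keep the longest match (step 6)
                {
                    int len = 0;
                    while (start + len < oldData.Length && pos + len < newData.Length &&
                           oldData[start + len] == newData[pos + len])
                        len++;
                    if (len > bestLen) { bestLen = len; bestStart = start; }
                }
            }

            if (bestLen >= 4)
            {
                Console.WriteLine("COPY {0} bytes from old offset {1}", bestLen, bestStart);  // step 7
                pos += bestLen;
            }
            else
            {
                Console.WriteLine("LITERAL byte 0x{0:X2}", newData[pos]);  // step 8
                pos++;
            }
        }
    }
}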
{ "language": "en", "url": "https://stackoverflow.com/questions/5831", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "20" }
Q: Issues using MS Access as a front-end to a MySQL database back-end? Two users wanted to share the same database, originally written in MS Access, without conflicting with one another over a single MDB file. I moved the tables from a simple MS Access database to MySQL using its Migration Toolkit (which works well, by the way) and set up Access to link to those tables via ODBC. So far, I've run into the following: * *You can't insert/update/delete rows in a table without a primary key (no surprise there). *AutoNumber fields in MS Access must be the primary key or they'll just end up as integer columns in MySQL (natch, why wouldn't it be the PK?) *The tables were migrated to MySQL's InnoDB table type, but the Access relationships didn't become MySQL foreign key constraints. Once the database is in use, can I expect any other issues? Particularly when both users are working in the same table? A: Gareth Simpson opined: If it's only two users, then Access should do just fine if you put the .mdb on a shared drive. Er, no. There is no multi-user Access application for which each user should not have a dedicated copy of the front end. That means each user should have an MDB on their workstation. Why? Because the objects in front ends do not share well (not nearly as well as Jet data tables, though there aren't any of those in this scenario using MySQL as the back end). Gareth Simpson continued: I believe the recommended max concurrent users for Access is 5 but on occasion I've pushed it past this and never come unstuck. No, this is completely incorrect. The theoretical limit for users of an MDB is 255. That's not realistic, of course, as once you reach about 20 users you have to program your Access app carefully to work well (though the things you need to do in an Access-to-Jet app are the same kinds of things you'd do to make any server database application efficient, e.g., retrieving the smallest usable data sets). In this case, since each user should have an individual copy of the front-end MDB, the multi-user limits of Access/Jet are simply not relevant at all. A: I know this doesn't answer your question directly, but it might be worth checking out the SQL Server 2005 migration tool for Access. I've never used the tool, but it might be worth using with SQL Server 2005 Express Edition to see if there are the same issues as you had with MySQL A: Dont forget to put some type time/date stamp on each record. sometimes ms access will think "another user has changed or deleted the record" and will not allow you to make a change! I found this out the hard way. A: I know this topic is not too fresh, but just some additional explanations: If you want to use MS Access effectively, especially with bigger, multiuser databases, please do the following: * *split your MDB into frontend application and backend (data only) files - you'll have two separate MDB files then. *migrate all the tables with data and structure into external database. 
It can be: MySQL (works very well, no database size limitations, requires some more skills as it's not MS technology, but it is a good choice in many cases - moreover you can scale your backend with more RAM and additional CPUs, so everything depends on your needs and hardware capabilities); Oracle (if you have enough money or some kind of corporate license) or Oracle 10g XE (if it is not a problem that the database size is limited to 4 GB and it will always use 1 GB of RAM and 1 CPU), MS SQL Server 2008 (it's a great pair to have an MS Access frontend and MS SQL Server backend in all cases, but you have to pay for the license! - advantages are: close integration, both technologies are from the same vendor; MS SQL Server is very easy to maintain and effective at the same time) or Express edition (same story as with Oracle XE - almost the same limitations). *relink your MS Access frontend with the backend database. If you selected MS SQL Server for the backend then it will be as easy as using the wizard from MS Access. For MySQL - you have to use ODBC drivers (it's simple and works very well). For Oracle - please do not use the ODBC drivers from Microsoft. The ones from Oracle will do their work much better (you can compare the time needed to execute a SQL query from MS Access to Oracle via the Oracle ODBC and MS Oracle ODBC drivers). At this point you'll have a solid database backend and a fully functional MS Access frontend - the MDB file. *compile your MDB frontend to MDE - it will give you a lot of speed. Moreover, it's the only reasonable form of distributing an MS Access application to your end users. *for daily work - use the MDE file with the MS Access frontend. For further MS Access frontend development use the MDB file. *don't use badly written ActiveX components to enhance MS Access frontend capabilities. Better write them yourself or buy the proper ones. *don't believe the myths that there are a lot of issues with MS Access - this is a great product which can help on many occasions. The problem is a lot of people assume it's a toy or that MS Access is generally simple. Usually they generate a lot of errors and issues themselves through their lack of knowledge and experience. To be successful with MS Access it is important to understand this tool - this is the same rule as with any other technology out there. I can tell you that I'm using quite an advanced MS Access frontend to a MySQL backend and I'm very satisfied (as the developer who is maintaining this application). My friends, the users, are also satisfied as they feel very comfortable with the GUI (frontend) and the speed (MySQL), and they don't have any issues with record locking or database performance. Moreover, it's important to read a lot about good practices and other people's experiences. I would say that in many cases MS Access is a good solution. I know a lot of dedicated, custom made systems which started as an experiment in the form of a private MS Access database (MDB file) and then evolved to split MS Access (MDE - frontend, MDB - backend) and finally to an MS Access frontend (MDE) and a "serious" database backend (mainly MS SQL Server and MySQL). It's also important that you can always use your MS Access solution as a working prototype - you have a ready to use backend in your database (MySQL - let's assume) and you can rewrite the frontend in the technology of your choice (a web solution? maybe a desktop C# application - whatever you require!). I hope I helped some of you considering working with MS Access.
Regards, Wawrzyn http://dcserwis.pl A: I had an application that worked likewise: an MS Access frontend to a MySQL backend. It was such a huge pain that I ended up writing a Win32 frontend instead. From the top of my head, I encountered the following problems: * *Development of the ODBC link seems to have ceased long ago. There are various different versions floating around --- very confusing. The ODBC link doesn't support Unicode/UTF8, and I remember there were other issues with it as well (though some could be overcome by careful configuration). *You probably want to manually tweak your db schema to make it compatible with MS Access. I see you already found out about the needed surrogate keys (i.e., int primary keys) :-) *You should keep in mind that you may need to use pass-through queries to do more sophisticated SQL manipulations of the MySQL database. *Be careful with using lots of VBA, as that tends to corrupt your frontend file. Regularly compressing the database (using main menu, Tools | Database utilities | Compress and restore, or something like that --- I'm using the Dutch version) and making lots of backups is necessary. *Access tends to cause lots of network traffic. Like, really huge lots. I haven't been able to find a solution for that. Using a network monitor is recommended if you want to keep an eye on that! *Access insists on storing booleans as 0/-1. IMHO, 0/+1 makes more sense, and I believe it is the default way of doing things in MySQL as well. Not a huge problem, but if your checkboxes don't work, you should definitely check this. One possible alternative would be to put the backend (with the data) on a shared drive. I remember this is well-documented, also in the help. You may want to have a look at some general advice on splitting into a frontend and a backend and code that automatically reconnects to the backend on startup; I can also send you some more sample code, or post it here. Otherwise, you might also want to consider MS SQL. I don't have experience with that, but I presume it works together with MS Access much more nicely! A: In general, it depends :) I haven't had a lot of problems when the application side has just been updating the data through the forms. You can get warnings/errors when the same row has been updated by more than one user; but Access seems to be constantly updating its live record sets all the time. Problems can happen if Alice is already working with record 365, and the Bob updates it, and then Alice tries to update it with her changes. As I recall, Alice will get a cryptic error message. It would be easier for the users if you trap these errors and at least give them a friendlier error message. I've had more problems when I was editing records in the VB code through RecordSets, especially when combined with editing the same data on forms. That's not necessarily a multi user problem; however, you have almost the same situation because you have one user with multiple connections to the same data. A: If it's only two users, then Access should do just fine if you put the .mdb on a shared drive. Have you tried it first rather than just assume it will be a problem. I believe the recommended max concurrent users for Access is 5 but on occasion I've pushed it past this and never come unstuck. On the other hand I did once use Access as the front end to MySQL in a single user environment (me). It was a singularly unpleasant experience, I can't imagine it would become nicer with two users.
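In case it helps anyone, here is a rough VBA/DAO sketch of the "relink the frontend on startup" idea mentioned above. The driver name, server and credentials are placeholders you would swap for your own, and error handling is left out for brevity:
Sub RelinkOdbcTables()
    Dim db As DAO.Database
    Dim tdf As DAO.TableDef
    Dim connStr As String

    ' Placeholder connection string - adjust the driver, server and credentials
    connStr = "ODBC;DRIVER={MySQL ODBC 5.1 Driver};SERVER=myserver;DATABASE=mydb;UID=myuser;PWD=mypassword;"

    Set db = CurrentDb
    For Each tdf In db.TableDefs
        If Len(tdf.Connect) > 0 Then    ' only touch linked (non-local) tables
            tdf.Connect = connStr
            tdf.RefreshLink
        End If
    Next tdf
End Sub
Calling something like this from the startup form means each user's local copy of the frontend always points at the current backend, which is the main maintenance headache of the split MDB/MDE setup.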
{ "language": "en", "url": "https://stackoverflow.com/questions/5842", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14" }
Q: Add 1 to a field How do I turn the following 2 queries into 1 query $sql = "SELECT level FROM skills WHERE id = $id LIMIT 1;"; $result = $db->sql_query($sql); $level = (int) $db->sql_fetchfield('level'); $db->sql_freeresult($result); ++$level; $sql = "UPDATE skills SET level = $level WHERE id = $id;"; $result = $db->sql_query($sql); $db->sql_freeresult($result); I'm using it in a phpBB mod but the gist is that I grab the level, add one to it then update, it seems that it'd be much easier and faster if I could do it as one query. Edit: $id has already been forced to be an integer, thus no escaping is needed this time. A: With PDO and prepared query: $query = $db->prepare("UPDATE skills SET level = level + 1 WHERE id = :id") $query->bindValue(":id", $id); $result = $query->execute(); A: I get downmodded for this? $sql = "UPDATE skills SET level = level+1 WHERE id = $id"; $result = $db->sql_query($sql); $db->sql_freeresult($result); In Teifion's specific case, the phpBB DDL lists that particular field as NOT NULL, so there's no danger of incrementing NULL. In the general case, you should not use NULL to represent zero. Incrementing NULL should give an answer of NULL. If you're the kind of misguided developer who thinks NULL=0, step away from keyboard and find another pastime, you're just making life hard for the rest of us. Of course, this is the computer industry and who are we to say you're wrong? If you're not wrong, use $sql = "UPDATE skills SET level = COALESCE(level,0)+1 WHERE id = $id"; ...but let's face it: you're wrong. If everyone starts at level 0, then your DDL should include level INT DEFAULT '0' NOT NULL in case the programmers forget to set it when they create a record. If not everyone starts on level 0, then skip the DEFAULT and force the programmer to supply a value on creation. If some people are beyond levels, for whom having a level is a meaningless thing, then adding one to their level equally has no meaning. In that case, drop the NOT NULL from the DDL. A: $sql = "UPDATE skills SET level = level + 1 WHERE id = $id"; I just hope you are properly sanitising $id elsewhere in your code! A: try this UPDATE skills SET level = level + 1 WHERE id = $id A: This way: UPDATE skills SET level = level + 1 WHERE id = $id A: How about: UPDATE skills SET level = level + 1 WHERE id = $id; A: Mat: That's what pasted in from the question. It hasn't been edited, so I attribute that to a bug in Markdown. But, oddly enough, I have noticed. Also: yes, mysql_escape_string()!
{ "language": "en", "url": "https://stackoverflow.com/questions/5846", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "24" }
Q: Automate builds for Java RCP for deployment with JNLP I've found many sources that talk about the automated Eclipse PDE process. I feel these sources don't do a good job explaining what's going on. I can create the deployable package, in a semi-manual process via the Feature Export. The automated process requires knowledge of how the org.eclipse.pde.build scripts work. I have gotten a build created, but not for JNLP. Questions: * *Has anyone ever deployed RCP through JNLP? *Were you able to automate the builds? A: I haven't done this before, but I found this site on the web giving an explanation.
{ "language": "en", "url": "https://stackoverflow.com/questions/5855", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9" }
Q: mailto link for large bodies I have a page upon which a user can choose up to many different paragraphs. When the link is clicked (or button), an email will open up and put all those paragraphs into the body of the email, address it, and fill in the subject. However, the text can be too long for a mailto link. Any way around this? We were thinking about having an SP from the SQL Server do it but the user needs a nice way of 'seeing' the email before they blast 50 executive level employees with items that shouldn't be sent...and of course there's the whole thing about doing IT for IT rather than doing software programming. 80( When you build stuff for IT, it doesn't (some say shouldn't) have to be pretty just functional. In other words, this isn't the dogfood we wake it's just the dog food we have to eat. We started talking about it and decided that the 'mail form' would give us exactly what we are looking for. * *A very different look to let the user know that the gun is loaded and aimed. *The ability to change/add text to the email. *Send a copy to themselves or not. *Can be coded quickly. A: By putting the data into a form, I was able to make the body around 1800 characters long before the form stopped working. The code looked like this: <form action="mailto:youremail@domain.com"> <input type="hidden" name="Subject" value="Email subject"> <input type="hidden" name="Body" value="Email body"> <input type="submit"> </form> Edit: The best way to send emails from a web application is of course to do just that, send it directly from the web application, instead of relying on the users mailprogram. As you've discovered, the protocol for sending information to that program is limited, but with a server-based solution you would of course not have those limitations. A: Does the e-mail content need to be in the e-mail? Could you store the large content somewhere centrally (file-share/FTP site) then just send a link to the content? This makes the recipient have an extra step, but you have a consistent e-mail size, so won't run into reliability problems due to unexpectedly large or excessive content.
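If you do go the server-side route suggested above, the send itself can be as simple as PHP's mail(). The snippet below is only a sketch: the addresses are placeholders and $selectedParagraphs is assumed to hold the paragraphs the user approved on the preview form.
<?php
// $selectedParagraphs is assumed to come from the preview/confirmation form
$to      = "exec.team@example.com";
$subject = "Selected items";
$body    = implode("\n\n", $selectedParagraphs);
$headers = "From: noreply@example.com\r\n";

if (mail($to, $subject, $body, $headers)) {
    echo "Email sent.";
} else {
    echo "Email could not be sent.";
}
?>
This keeps the body length out of the mailto URL entirely, and the preview page gives the "gun is loaded and aimed" warning before anything goes out.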
{ "language": "en", "url": "https://stackoverflow.com/questions/5857", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13" }
Q: WCF Service - Backward compatibility issue I'm just getting into creating some WCF services, but I have a requirement to make them backward compatible for legacy (.NET 1.1 and 2.0) client applications. I've managed to get the services to run correctly for 3.0 and greater clients, but when I publish the services using a basicHttpBinding endpoint (which I believe is required for the compatibility I need), the service refactors my method signatures. e.g. public bool MethodToReturnTrue(string seedValue); appears to the client apps as public void MethodToReturnTrue(string seedValue, out bool result, out bool MethodToReturnTrueResultSpecified); I've tried every configuration parameter I can think of in the app.config for my self-hosting console app, but I can't seem to make this function as expected. I suppose this might lead to the fact that my expectations are flawed, but I'd be surprised that a WCF service is incapable of handling a bool return type to a down-level client. My current app.config looks like this. <?xml version="1.0" encoding="utf-8" ?> <configuration> <system.serviceModel> <services> <service behaviorConfiguration="MyServiceTypeBehaviors" Name="MyCompany.Services.CentreService.CentreService"> <clear /> <endpoint address="http://localhost:8080/CSMEX" binding="basicHttpBinding" bindingConfiguration="" contract="IMetadataExchange" /> <endpoint address="http://localhost:8080/CentreService" binding="basicHttpBinding" bindingName="Compatible" name="basicEndpoint" contract="MyCompany.Services.CentreService.ICentreService" /> </service> </services> <behaviors> <serviceBehaviors> <behavior name="MyServiceTypeBehaviors" > <serviceMetadata httpGetEnabled="true" /> </behavior> </serviceBehaviors> </behaviors> </system.serviceModel> </configuration> Can anyone advise, please? A: Ah, this is killing me! I did this at work about 3 months ago, and now I can't remember all the details. I do remember, however, that you need basicHttpBinding, and you can't use the new serializer (which is the default); you have to use the "old" XmlSerializer. Unfortunately, I don't work at the place where I did this anymore, so I can't go look at the code. I'll call my boss and see what I can dig up. A: OK, we needed to resolve this issue in the short term, and so we came up with the idea of a "interop", or compatibility layer. Baiscally, all we did was added a traditional ASMX web service to the project, and called the WCF service from that using native WCF calls. We were then able to return the appropriate types back to the client applications without a significant amount of re-factoring work. I know it was a hacky solution, but it was the best option we had with such a large legacy code-base. And the added bonus is that it actually works surprisingly well. :) A: You do have to use the XmlSerializer. For example: [ServiceContract(Namespace="CentreServiceNamespace")] [XmlSerializerFormat(Style=OperationFormatStyle.Document, SupportFaults=true, Use=OperationFormatUse.Literal)] public interface ICentreService { [OperationContract(Action="CentreServiceNamespace/MethodToReturnTrue")] bool MethodToReturnTrue(string seedValue); } You have to manually set the operation action name because the auto-generated WCF name is constructed differently from the ASMX action name (WCF includes the interface name as well, ASMX does not). Any data contracts you use should be decorated with [XmlType] rather than [DataContract]. Your config file should not need to change.
{ "language": "en", "url": "https://stackoverflow.com/questions/5863", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: Making a production build of a PHP project with Subversion If you are working in PHP (or I guess any programming language) and using subversion as your source control, is there a way to take your project (for example): C:\Projects\test\.svn C:\Projects\test\docs\ C:\Projects\test\faq.php C:\Projects\test\guestbook.php C:\Projects\test\index.php C:\Projects\test\test.php and build/copy/whatever it so it weeds out certain files and becomes: C:\Projects\test\faq.php C:\Projects\test\guestbook.php C:\Projects\test\index.php automatically? I'm getting tired of making a branch, and then going through the branch and deleting all of the ".svn" folders, the docs directory, and my prototyping files. I know I could probably use a .bat file to only copy the specific files I want, but I was hoping there was some way with subversion to sort of pseudo ignore a file, to where it will still version it, but where you could make a snapshot of the project that ignores the files you told it to pseudo ignore. I know I read online somewhere about some functionality that at least lets you copy without the .svn folders, but I can't find it now. A: If you use TortoiseSVN, you can use the export feature to automatically strip out all of the .svn files. I think other svn things have the same feature. Right click the root project folder, then select TortoiseSVN > Export, and tell it where you want the .svn free directory. A: Copy all the files manually or using your existing method for the first time. Then, since I take it you're on a Windows platform, install SyncToy and configure it in the subscribe method, which would effectively one-way copy only the changes made since the last pseudo-commit to production for files already in production. If you want to add a file you can just copy it manually and resume the SyncToy operation. A: Ok, so my final solution is this: Use the export command to export to a folder called "export" in the same directory as a file called "deploy.bat", then I run the deploy script (v1 stands for version 1, which is what version I am currently on in this project) This script utilizes 7-Zip, which I have placed on my system path so I can use it as a command line utility: rem replace the v1 directory with the export directory rd /s /q v1 move /y export\newIMS v1 rd /s /q export rem remove the prepDocs directory from the project rd /s /q v1\prepDocs rem remove the scripts directory from the project rd /s /q v1\scripts rem remove individual files from project del v1\.project rem del v1\inc\testLoad.html rem del v1\inc\testInc.js SET /P version=Please enter version number: rem zip the file up with 7-Zip and name it after whatever version number the user typed in. 7z a -r v%version%.zip v1 rem copy everything to the shared space ready for deployment xcopy v%version%.zip /s /q /y /i "Z:\IT\IT Security\IT Projects\IMS\v%version%.zip" xcopy v1 /s /q /y /i "Z:\IT\IT Security\IT Projects\IMS\currentVersion" rem keep the window open until user presses any key PAUSE I didn't have time to check out the SyncToy solution, so don't take this as me rejecting that method. I just knew how to do this, and didn't have time to check that one out (under a time crunch right now). Sources: http://commandwindows.com/command2.htm http://www.ss64.com/nt/
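For completeness, the command-line equivalent of the TortoiseSVN export mentioned above is svn export, which copies a clean tree with no .svn folders. The paths below are just examples:
rem export straight from the repository
svn export http://yourserver/svn/test/trunk C:\Projects\test-export

rem or export from an existing working copy
svn export C:\Projects\test C:\Projects\test-export
Either form drops straight into a deploy script like the one above in place of the manual copy step.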
{ "language": "en", "url": "https://stackoverflow.com/questions/5872", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Drag and drop ftp file upload web widgets It seems like drag and drop upload widgets disappeared from the face of Web 2.0. The last one of these I remember using was an activex widget, and inability of using it in anything other than IE doomed it. Have you used or seen one of these recently? A: The Dojo Toolkit JavaScript library supports some drag & drop functionality that I've seen work in IE6+ and FF2+. The nice thing about Dojo and other JS libraries is that they abstract away all of the browser detection stuff. I'm sure other JS libraries support this functionality. A: FTP Drop for Yahoo Widgets allows you to drag files over the widget and the file will be sent to the defined ftp server. A: Our current project makes heavy use of drag+drop, using GWT and gwt-dnd you can do some very cool stuff. Standards based, and works in IE6, Safari, Firefox, Opera, etc.. The issue of how to transmit a file is a separate one I believe. As for FTP support, I see that as being mostly replaced with HTTP File Upload support. In the case you need more flexibility (progress bar, multiple file selection), then you can make use of flash to do this. You can use Javascript to interact with an invisible flash app which performs the file transfer. YUI's file upload control does this. You can see an example on Flickr's enhanced upload page. We've built a custom version designed for use with GWT apps. Same concept. A: You can upload to FTP in browser with an applet such as JFileUpload. See: http://www.jfileupload.com/products/jfileupload/index.html [Disclosure: This is my site] It supports regular FTP, FTPS (explicit and implicit) and SFTP (FTP + SSH). It can resume broken transfer too. Applets can be moved outside browser since JRE 1.6.0_10.
{ "language": "en", "url": "https://stackoverflow.com/questions/5874", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: How do I prevent dnsmasq from appending my domain name to invalid domain requests? I use dnsmasq to resolve DNS queries on my home network. Unfortunately, if a domain name is not known, it will append my own domain name to the request which means that I always end up viewing my own site in a browser. For example, if I enter http://dlksfhoiahdsfiuhsdf.com in a browser, I end up viewing my own site with that URL. If I try something like: host dlksfhoiahdsfiuhsdf.com Instead of the expected: Host dlksfhoiahdsfiuhsdf.com not found: 3(NXDOMAIN) I get this: dlksfhoiahdsfiuhsdf.com.mydomainname.com has address W.X.Y.Z. Clearly, dnsmasq is appending my domain name to impossible name requests in an effort to resolve them, but I'd rather see the not found error instead. I've tried playing with the expand-hosts and domain configuration settings, but to no avail. Is there anything else I can try? A: try querying with a trailing dot to explicitly set the root: host dlksfhoiahdsfiuhsdf.com. A: It is probably not dnsmasq doing it, but your local resolver library. If you use a unixish, try removing the "search" or "domain" lines from /etc/resolv.conf A: There might be other causes, but the most obvious cause is the configuration of /etc/resolv.conf, and the fact that most DNS clients like to be very terse about errors. benc$ host thing.one Host thing.one not found: 3(NXDOMAIN) (okay, what was I using for a DNS config?) benc$ cat /etc/resolv.conf nameserver 192.168.1.1 (edit...) benc$ cat /etc/resolv.conf search test.com nameserver 192.168.1.1 benc$ host thing.one thing.one.test.com has address 64.214.163.132 Without bothering to do a packet trace, the likely behavior is that it returns the error for the last FQDN it tried. A: You have a wildcard domain? dnsmasq is forwarding the appended name out to the external dns server and its getting wildcarded. you can use --server=/yourinternaldomainhere/ to make sure that your internal domain name lookups are not forwarded out. syntax in this case would be: --server=/domain/iptoforwardto and in this case leave the iptoforwardto area blank as you don't want it to forward anywhere. A: I tried removing domain-needed from my own configuration to replicate your issue and it did not produce this behaviour. It's the only other parameter I could find that might be close to relevant. What does your hosts file look like? Maybe something weird is going on there that makes it think all weird domains are local to your network?
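For anyone hitting the same thing, a couple of illustrative snippets (the domain and address are placeholders). The first stops the local resolver from appending a search suffix; the second keeps dnsmasq from forwarding plain names and from sending the internal domain to a wildcarding upstream server, as the answers above suggest:
# /etc/resolv.conf - remove or comment out the search/domain line
nameserver 192.168.1.1
# search mydomainname.com

# /etc/dnsmasq.conf
domain-needed                 # never forward plain names without dots
server=/mydomainname.com/     # answer the internal domain locally, don't forward it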
{ "language": "en", "url": "https://stackoverflow.com/questions/5876", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1" }
Q: Are there any negative reasons to use an N-Tier solution? I'm pretty new to my company (2 weeks) and we're starting a new platform for our system using .NET 3.5 Team Foundation from DotNetNuke. Our "architect" is suggesting we use one class project. Of course, I chime back with a "3-tier" architecture (Business, Data, Web class projects). Are there any disadvantages to using this architecture? Pros would be separation of code from data, keeping class objects away from your code, etc. A: I guess a fairly big downside is that the extra volume of code that you have to write, manage and maintain for a small project may just be overkill. It's all down to what's appropriate for the size of the project, the expected life of the final project and the budget! Sometimes, whilst doing things 'properly' is appealing, doing something a little more 'lightweight' can be the right commercial decision! A: It tends to take an inexperienced team longer to build 3-tier. It's more code, so more bugs. I'm just playing the devil's advocate though. A: I would be pushing hard for the N-tiered approach even if it's a small project. If you use an ORM tool like CodeSmith + NetTiers you will be able to quickly set up the projects and be developing code that solves your business problems quickly. It kills me when you start a new project and you spend days sitting around spinning wheels talking about how the "architecture" should be architected. You want to be spending time solving the business problem, not solving problems that other people have solved for you. Using an ORM (it doesn't really matter which one, just pick one and stick to it) to help you get initial traction will help keep you focussed on the goals of the project and not distract you trying to solve "architecture" issues. If, at the end of the day, the architect wants to go the one project approach, there is no reason you can't create an app_code folder with a BLL and DAL folder to separate the code for now, which will help you move to an N-Tiered solution later. A: Because you want the capability of being able to distribute the layers onto different physical tiers (I always use "tier" for physical, and "layer" for logical), you should think twice before just putting everything into one class, because you've got major refactorings to do if or when you do need to start distributing. A: The only disadvantage is complexity, but really, how hard is it to add some domain objects and bind to a list of them as opposed to using a dataset? You don't even have to create three separate projects, you can just create 3 separate folders within the web app and give each one a namespace like YourCompany.YourApp.Domain, YourCompany.YourApp.Data, etc. The big advantage is having a more flexible solution. If you start writing your app as a data centric application, strongly coupling your web forms pages to datasets, you are going to end up doing a lot more work later migrating to a more domain centric model as your business logic grows in complexity. Maybe in the short term you focus on a simple solution by creating very simple domain objects and populating them from datasets, then you can add business logic to them as needed and build out a more sophisticated ORM as needed, or use NHibernate. A: As with anything, abstraction creates complexity, and so the complexity of doing N-tiered should be properly justified, e.g., does N-tiered actually benefit the system? There will be small systems that will work best with N-tiered, although a lot of them will not.
Also, even if your system is small at the moment, you might want to add more features to it later -- not going N-tiered might constitute a sort of technical debt on your part, so you have to be careful.
{ "language": "en", "url": "https://stackoverflow.com/questions/5880", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: Link issues (VC6) I've opened an old workspace that is a libray and its test harness. It used to work fine but now doesn't and older versions of the code don't work either with the same errors. I've tried recreating the project and that causes the same errors too. Nothing seems out of order in project settings and the code generated works in the main app. I've stripped out most of the files and got it down to the bare minimum to generate the error. Unfortunately I can't post the project as this is used in production code. The LNK2001 linker error I get usually means I've left off a library or forgot to implement a virtual function. However this is part of the standard template library - and is a header at that. The code that is listed as having the problem in IOCompletionPort.obj doesn't actually use std::string directly, but does call a class that does: Comms::Exception accepts a std::string and the value of GetLastError or WSAGetLastError. The function mentioned in the error (GetMessage) is implemented, but is a virtual function so other classes can override it if need be. However it appears that the compiler has made it as an Ansi version, but I can't find any options in the settings that would control that. I suspect that might be the problem but since there's very little in the way of options for the library I have no way of knowing for sure. However both projects to specify _MBCS in the compiler options. --------------------Configuration: TestComms - Win32 Debug-------------------- Linking... Comms.lib(IOCompletionPort.obj) : error LNK2001: unresolved external symbol "public: virtual class std::basic_string,class std::allocator > __thiscall Comms::Exception::GetMessageA(void)const " (?GetMessageA@ Exception@Comms@@UBE?AV?$basic_string@DU?$char_traits@D@std@@V?$allocator@D@2@@std@@XZ) Debug/TestComms.exe : fatal error LNK1120: 1 unresolved externals Error executing link.exe. TestComms.exe - 2 error(s), 0 warning(s) Any suggestions? I've lost most of the morning to this and don't want to lose most of the afternoon too. A: One possibility lies with Win32 ANSI/Unicode "name-mangling", which turns the symbol GetMessage into either GetMessageA or GetMessageW. There are three possibilities: * *Windows.h hasn't been loaded, so GetMessage stays GetMessage *Windows.h was loaded with symbols set for ANSI, so GetMessage becomes GetMessageA *Windows.h was loaded with symbols set for Unicode, so GetMessage becomes GetMessageW If you've compiled two different files in ways that trigger two different scenarios, you'll get a linker error. The error message indicates that the Comms::Exception class was an instance of #2, above -- perhaps it's used somewhere that windows.h hasn't been loaded? Other things I'd do in your place, just as a matter of routine: 1) Ensure that my include and library paths don't contain anything that I'm not expecting. 2) Do a "build clean" and then manually verify it, deleting any extra object files if necessary. 3) Make sure there aren't any hardcoded paths in include statements that don't mean what they meant when the project was originally rebuilt. EDIT: Fighting with the formatting :( A: @Curt: I think you came the closest. I haven't tested this but I think I sort of gave the answer in my original question. GetMessage is a define in Windows.h wrapped in a ifndef block to switch between Ansi (GetMessageA) and Unicode (GetMessageW). 
A: windows.h is declared at the top of IOCompletionPort.h as an include - I was sick of seeing 7 lines just to include 1 file so I have wrapped it its own file and includes that itself. This also contains some additional #defines (i.e. ULONG_PTR) as our main app won't compile with the Platform SDK installed:-( * *That is confirmed. Nothing is out of place. *I've done that - deleted the build directories *I never use hard-coded paths. A: Presuming you haven't futzed around with the Project settings deleting something you ought not have (which is where I'd expect external dependencies like User32.lib to be): Check Tools | Options | Directories | Libraries (going from memory here) and ensure that you're not missing the common-all-garden variety lib directories (again, without VC6 in front of me, I can't tell you what they are) A: This is a general problem with the way Microsoft handled the ANSI vs. Unicode APIs. Since they are all (or pretty much all) done by defining macros for the function names that resolve to the 'A' or 'W' versions of the function names you cannot safely have an identifier in your namespace/class/struct/enum/function that matches a Windows API name. The windows.h macros run roughshod over all other namespaces.
{ "language": "en", "url": "https://stackoverflow.com/questions/5892", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11" }
Q: Genealogy Tree Control I've been tasked (by my wife) with creating a program to allow her to track the family trees on both sides of our family. Does anyone know of a cost-effective (free) control to represent this type of information? What I'm looking for is a modified org-chart type chart/tree. The modification is that any node should have 2 parent nodes (E.G. a child should have a Mother/Father). The solution I've come up with so far is to have 2 trees, an ancestor tree and a descendants tree, with the individual being inspected as the root node for each tree. It works but is sort of clunky. I'm working primarily in c# WinForms, so .Net type controls or source code is preferable. A: I actually spotted GRAMPS just the other day. A: Geni is probably what your looking for. A: If you're really looking for an application that you can modify try out Family.Show on CodePlex. A: I'm all for writing your own software when something doesn't suit your needs and a frequent re-inventor of the wheel. But this honestly seems like one of those things were the solution is readily available, in this case in the form of Family Tree Maker And at a mere $40 I would venture to guess that you'd come out ahead compared to the hours you would spend trying to get your own program doing exactly what you need. I currently use the software and it works great. Now, if your interest in writing it partly for the purpose of just doing it because you can and to learn something...then by all means I salute your will to learn and hope you find the control you are looking for. A: Try GeneTree. You can open a free account and build a family tree interactively. You can also find others whose DNA matches yours, who may be family members you did not know about before. If the functionality is already there, and free, why write a program? A: There have been lots of suggestions of software to use, but I'd like to add a recommendation for The Master Genealogist. I find it fits my engineering mindset more than most. There is little online to match the flexibility of standalone genealogy tools, but TNG (The Next Generation of Genealogy Sitebuilding) is quite good. A: I haven't thought too hard about this, but I reckon you could get a Custom Treeview in WPF to do what you want. I was reading an article on code project a while back that implemented an org chart this way... A: Family.Show is a very successful genealogy application and open-source you can get it from Codeplex
{ "language": "en", "url": "https://stackoverflow.com/questions/5894", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18" }
Q: User access log to SQL Server I need to get a log of user access to our SQL Server so I can track average and peak concurrency usage. Is there a hidden table or something I'm missing that has this information for me? To my knowledge the application I'm looking at does not track this at the application level. I'm currently working on SQL Server 2000, but will moving to SQL Server 2005 shortly, so solutions for both are greatly appreciated. A: In SQL Server 2005, go to tree view on the left and select Server (name of the actual server) > Management > Activity Monitor. Hope this helps. A: * *on 2000 you can use sp_who2 or the dbo.sysprocesses system table *on 2005 take a look at the sys.dm_exec_sessions DMV Below is an example SELECT COUNT(*) AS StatusCount,CASE status WHEN 'Running' THEN 'Running - Currently running one or more requests' WHEN 'Sleeping ' THEN 'Sleeping - Currently running no requests' ELSE 'Dormant – Session is in prelogin state' END status FROM sys.dm_exec_sessions GROUP BY status
{ "language": "en", "url": "https://stackoverflow.com/questions/5908", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: Get size of a file before downloading in Python I'm downloading an entire directory from a web server. It works OK, but I can't figure how to get the file size before download to compare if it was updated on the server or not. Can this be done as if I was downloading the file from a FTP server? import urllib import re url = "http://www.someurl.com" # Download the page locally f = urllib.urlopen(url) html = f.read() f.close() f = open ("temp.htm", "w") f.write (html) f.close() # List only the .TXT / .ZIP files fnames = re.findall('^.*<a href="(\w+(?:\.txt|.zip)?)".*$', html, re.MULTILINE) for fname in fnames: print fname, "..." f = urllib.urlopen(url + "/" + fname) #### Here I want to check the filesize to download or not #### file = f.read() f.close() f = open (fname, "w") f.write (file) f.close() @Jon: thank for your quick answer. It works, but the filesize on the web server is slightly less than the filesize of the downloaded file. Examples: Local Size Server Size 2.223.533 2.115.516 664.603 662.121 It has anything to do with the CR/LF conversion? A: The size of the file is sent as the Content-Length header. Here is how to get it with urllib: >>> site = urllib.urlopen("http://python.org") >>> meta = site.info() >>> print meta.getheaders("Content-Length") ['16535'] >>> A: Also if the server you are connecting to supports it, look at Etags and the If-Modified-Since and If-None-Match headers. Using these will take advantage of the webserver's caching rules and will return a 304 Not Modified status code if the content hasn't changed. A: In Python3: >>> import urllib.request >>> site = urllib.request.urlopen("http://python.org") >>> print("FileSize: ", site.length) A: I have reproduced what you are seeing: import urllib, os link = "http://python.org" print "opening url:", link site = urllib.urlopen(link) meta = site.info() print "Content-Length:", meta.getheaders("Content-Length")[0] f = open("out.txt", "r") print "File on disk:",len(f.read()) f.close() f = open("out.txt", "w") f.write(site.read()) site.close() f.close() f = open("out.txt", "r") print "File on disk after download:",len(f.read()) f.close() print "os.stat().st_size returns:", os.stat("out.txt").st_size Outputs this: opening url: http://python.org Content-Length: 16535 File on disk: 16535 File on disk after download: 16535 os.stat().st_size returns: 16861 What am I doing wrong here? Is os.stat().st_size not returning the correct size? Edit: OK, I figured out what the problem was: import urllib, os link = "http://python.org" print "opening url:", link site = urllib.urlopen(link) meta = site.info() print "Content-Length:", meta.getheaders("Content-Length")[0] f = open("out.txt", "rb") print "File on disk:",len(f.read()) f.close() f = open("out.txt", "wb") f.write(site.read()) site.close() f.close() f = open("out.txt", "rb") print "File on disk after download:",len(f.read()) f.close() print "os.stat().st_size returns:", os.stat("out.txt").st_size this outputs: $ python test.py opening url: http://python.org Content-Length: 16535 File on disk: 16535 File on disk after download: 16535 os.stat().st_size returns: 16535 Make sure you are opening both files for binary read/write. 
// open for binary write open(filename, "wb") // open for binary read open(filename, "rb") A: For a python3 (tested on 3.5) approach I'd recommend: with urlopen(file_url) as in_file, open(local_file_address, 'wb') as out_file: print(in_file.getheader('Content-Length')) out_file.write(response.read()) A: Using the returned-urllib-object method info(), you can get various information on the retrieved document. Example of grabbing the current Google logo: >>> import urllib >>> d = urllib.urlopen("http://www.google.co.uk/logos/olympics08_opening.gif") >>> print d.info() Content-Type: image/gif Last-Modified: Thu, 07 Aug 2008 16:20:19 GMT Expires: Sun, 17 Jan 2038 19:14:07 GMT Cache-Control: public Date: Fri, 08 Aug 2008 13:40:41 GMT Server: gws Content-Length: 20172 Connection: Close It's a dict, so to get the size of the file, you do urllibobject.info()['Content-Length'] print f.info()['Content-Length'] And to get the size of the local file (for comparison), you can use the os.stat() command: os.stat("/the/local/file.zip").st_size A: For anyone using Python 3 and looking for a quick solution using the requests package: import requests response = requests.head( "https://website.com/yourfile.mp4", # Example file allow_redirects=True ) print(response.headers['Content-Length']) Note: Not all responses will have a Content-Length so your application will want to check to see if it exists. if 'Content-Length' in response.headers: ... # Do your stuff here A: A requests-based solution using HEAD instead of GET (also prints HTTP headers): #!/usr/bin/python # display size of a remote file without downloading from __future__ import print_function import sys import requests # number of bytes in a megabyte MBFACTOR = float(1 << 20) response = requests.head(sys.argv[1], allow_redirects=True) print("\n".join([('{:<40}: {}'.format(k, v)) for k, v in response.headers.items()])) size = response.headers.get('content-length', 0) print('{:<40}: {:.2f} MB'.format('FILE SIZE', int(size) / MBFACTOR)) Usage $ python filesize-remote-url.py https://httpbin.org/image/jpeg ... Content-Length : 35588 FILE SIZE (MB) : 0.03 MB A: Here is a much more safer way for Python 3: import urllib.request site = urllib.request.urlopen("http://python.org") meta = site.info() meta.get('Content-Length') Returns: '49829' meta.get('Content-Length') will return the "Content-Length" header if exists. Otherwise it will be blank A: @PabloG Regarding the local/server filesize difference Following is high-level illustrative explanation of why it may occur: The size on disk sometimes is different from the actual size of the data. It depends on the underlying file-system and how it operates on data. As you may have seen in Windows when formatting a flash drive you are asked to provide 'block/cluster size' and it varies [512b - 8kb]. When a file is written on the disk, it is stored in a 'sort-of linked list' of disk blocks. When a certain block is used to store part of a file, no other file contents will be stored in the same blok, so even if the chunk is no occupuing the entire block space, the block is rendered unusable by other files. Example: When the filesystem is divided on 512b blocks, and we need to store 600b file, two blocks will be occupied. The first block will be fully utilized, while the second block will have only 88b utilized and the remaining (512-88)b will be unusable resulting in 'file-size-on-disk' being 1024b. This is why Windows has different notations for 'file size' and 'size on disk'. 
NOTE: There are different pros & cons that come with smaller/bigger FS block, so do a better research before playing with your filesystem. A: Quick and reliable one-liner for Python3 using urllib: import urllib url = 'https://<your url here>' size = urllib.request.urlopen(url).info().get('Content-Length', 0) .get(<dict key>, 0) gets the key from dict and if the key is absent returns 0 (or whatever the 2nd argument is) A: you can use requests to pull this data File_Name=requests.head(LINK).headers["X-File-Name"] #And other useful info** like the size of the file from this dict (headers) #like File_size=requests.head(LINK).headers["Content-Length"]
{ "language": "en", "url": "https://stackoverflow.com/questions/5909", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "56" }
Q: Getting the text from a drop-down box This gets the value of whatever is selected in my dropdown menu. document.getElementById('newSkill').value I cannot however find out what property to go after for the text that's currently displayed by the drop down menu. I tried "text" then looked at W3Schools but that didn't have the answer, does anybody here know? For those not sure, here's the HTML for a drop down box. <select name="newSkill" id="newSkill"> <option value="1">A skill</option> <option value="2">Another skill</option> <option value="3">Yet another skill</option> </select> A: This should return the text value of the selected value var vSkill = document.getElementById('newSkill'); var vSkillText = vSkill.options[vSkill.selectedIndex].innerHTML; alert(vSkillText); Props: @Tanerax for reading the question, knowing what was asked and answering it before others figured it out. Edit: DownModed, cause I actually read a question fully, and answered it, sad world it is. A: document.getElementById('newSkill').options[document.getElementById('newSkill').selectedIndex].value Should work A: This works i tried it my self i thought i post it here in case someone need it... document.getElementById("newSkill").options[document.getElementById('newSkill').selectedIndex].text; A: Simply You can use jQuery instead of JavaScript $("#yourdropdownid option:selected").text(); Try This. A: Attaches a change event to the select that gets the text for each selected option and writes them in the div. You can use jQuery it very face and successful and easy to use <select name="sweets" multiple="multiple"> <option>Chocolate</option> <option>Candy</option> <option>Taffy</option> <option selected="selected">Caramel</option> <option>Fudge</option> <option>Cookie</option> </select> <div></div> $("select").change(function () { var str = ""; $("select option:selected").each(function() { str += $( this ).text() + " "; }); $( "div" ).text( str ); }).change(); A: function getValue(obj) { // it will return the selected text // obj variable will contain the object of check box var text = obj.options[obj.selectedIndex].innerHTML ; } HTML Snippet <asp:DropDownList ID="ddl" runat="server" CssClass="ComboXXX" onchange="getValue(this)"> </asp:DropDownList> A: Here is an easy and short method document.getElementById('elementID').selectedOptions[0].innerHTML A: Based on your example HTML code, here's one way to get the displayed text of the currently selected option: var skillsSelect = document.getElementById("newSkill"); var selectedText = skillsSelect.options[skillsSelect.selectedIndex].text; A: Does this get the correct answer? document.getElementById("newSkill").innerHTML A: var ele = document.getElementById('newSkill') ele.onchange = function(){ var length = ele.children.length for(var i=0; i<length;i++){ if(ele.children[i].selected){alert(ele.children[i].text)}; } } A: var selectoption = document.getElementById("dropdown"); var optionText = selectoption.options[selectoption.selectedIndex].text; A: Please try the below this is the easiest way and it works perfectly var newSkill_Text = document.getElementById("newSkill")[document.getElementById("newSkill").selectedIndex]; A: Found this a tricky question but using ideas from here I eventually got the solution using PHP & Mysqli to populate the list : and then a bit of javascript to get the working variable out. 
<select id="mfrbtn" onchange="changemfr()" > <option selected="selected">Choose one</option> <?php foreach($rows as $row) { echo '<option value=implode($rows)>'.$row["Mfrname"].'</option>'; } ?> </select> Then : <script language="JavaScript"> function changemfr() { var $mfr2=document.getElementById("mfrbtn").selectedOptions[0].text; alert($mfr2); } </script>
{ "language": "en", "url": "https://stackoverflow.com/questions/5913", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "86" }
Q: How do you feel about code folding? For those of you in the Visual Studio environment, how do you feel about wrapping any of your code in #regions? (or if any other IDE has something similar...) A: While I understand the problem that Jeff, et. al. have with regions, what I don't understand is why hitting CTRL+M,CTRL+L to expand all regions in a file is so difficult to deal with. A: I use #Region to hide ugly and useless automatically generated code, which really belongs in the automatically generated part of the partial class. But, when working with old projects or upgraded projects, you don't always have that luxury. As for other types of folding, I fold Functions all the time. If you name the function well, you will never have to look inside unless you're testing something or (re-)writing it. A: 9 out of 10 times, code folding means that you have failed to use the SoC principle for what its worth. I more or less feel the same thing about partial classes. If you have a piece of code you think is too big you need to chop it up in manageable (and reusable) parts, not hide or split it up.It will bite you the next time someone needs to change it, and cannot see the logic hidden in a 250 line monster of a method. Whenever you can, pull some code out of the main class, and into a helper or factory class. foreach (var item in Items) { //.. 100 lines of validation and data logic.. } is not as readable as foreach (var item in Items) { if (ValidatorClass.Validate(item)) RepositoryClass.Update(item); } My $0.02 anyways. A: I use Textmate (Mac only) which has Code folding and I find it really useful for folding functions, I know what my "getGet" function does, I don't need it taking up 10 lines of oh so valuable screen space. I never use it to hide a for loop, if statement or similar unless showing the code to someone else where I will hide code they have seen to avoid showing the same code twice. A: I prefer partial classes as opposed to regions. Extensive use of regions by others also give me the impression that someone, somewhere, is violating the Single Responsibility Principle and is trying to do too many things with one object. A: @Tom Partial classes are provided so that you can separate tool auto-generated code from any customisations you may need to make after the code gen has done its bit. This means your code stays intact after you re-run the codegen and doesn't get overwritten. This is a good thing. A: I'm not a fan of partial classes - I try to develop my classes such that each class has a very clear, single issue for which it's responsible. To that end, I don't believe that something with a clear responsibility should be split across multiple files. That's why I don't like partial classes. With that said, I'm on the fence about regions. For the most part, I don't use them; however, I work with code every day that includes regions - some people go really heavy on them (folding up private methods into a region and then each method folded into its own region), and some people go light on them (folding up enums, folding up attributes, etc). My general rule of thumb, as of now, is that I only put code in regions if (a) the data is likely to remain static or will not be touched very often (like enums), or (b) if there are methods that are implemented out of necessity because of subclassing or abstract method implementation, but, again, won't be touched very often. A: Regions must never be used inside methods. 
They may be used to group methods but this must be handled with extreme caution so that the reader of the code does not go insane. There is no point in folding methods by their modifiers. But sometimes folding may increase readability. For e.g. grouping some methods that you use for working around some issues when using an external library and you won't want to visit too often may be helpful. But the coder must always seek for solutions like wrapping the library with appropriate classes in this particular example. When all else fails, use folding for improving readibility. A: This is just one of those silly discussions that lead to nowhere. If you like regions, use them. If you don't, configure your editor to turn them off. There, everybody is happy. A: Region folding would be fine if I didn't have to manually maintain region groupings based on features of my code that are intrinsic to the language. For example, the compiler already knows it's a constructor. The IDE's code model already knows it's a constructor. But if I want to see a view of the code where the constructors are grouped together, for some reason I have to restate the fact that these things are constructors, by physically placing them together and then putting a group around them. The same goes for any other way of slicing up a class/struct/interface. What if I change my mind and want to see the public/protected/private stuff separated out into groups first, and then grouped by member kind? Using regions to mark out public properties (for example) is as bad as entering a redundant comment that adds nothing to what is already discernible from the code itself. Anyway, to avoid having to use regions for that purpose, I wrote a free, open source Visual Studio 2008 IDE add-in called Ora. It provides a grouped view automatically, making it far less necessary to maintain physical grouping or to use regions. You may find it useful. A: I generally find that when dealing with code like Events in C# where there's about 10 lines of code that are actually just part of an event declaration (the EventArgs class the delegate declaration and the event declaration) Putting a region around them and then folding them out of the way makes it a little more readable. A: This was talked about on Coding Horror. My personal belief is that is that they are useful, but like anything in excess can be too much. I use it to order my code blocks into: Enumerations Declarations Constructors Methods Event Handlers Properties A: Sometimes you might find yourself working on a team where #regions are encouraged or required. If you're like me and you can't stand messing around with folded code you can turn off outlining for C#: * *Options -> Text Editor -> C# -> Advanced Tab *Uncheck "Enter outlining mode when files open" A: I personally use #Regions all the time. I find that it helps me to keep things like properties, declarations, etc separated from each other. This is probably a good answer, too! Coding Horror Edit: Dang, Pat beat me to this! A: I think that it's a useful tool, when used properly. In many cases, I feel that methods and enumerations and other things that are often folded should be little black boxes. Unless you must look at them for some reason, their contents don't matter and should be as hidden as possible. However, I never fold private methods, comments, or inner classes. Methods and enums are really the only things I fold. 
A: My approach is similar to a few others here, using regions to organize code blocks into constructors, properties, events, etc. There's an excellent set of VS.NET macros by Roland Weigelt available from his blog entry, Better Keyboard Support for #region ... #endregion. I've been using these for years, mapping ctrl+. to collapse the current region and ctrl++ to expand it. Find that it works a lot better than the default VS.NET functionality which folds/unfolds everything. A: The Coding Horror article actually got me thinking about this as well. Generally, in large classes I will put a region around the member variables, constants, and properties to reduce the amount of text I have to scroll through and leave everything else outside of a region. On forms I will generally group things into "member variables, constants, and properties", form functions, and event handlers. Once again, this is more so I don't have to scroll through a lot of text when I just want to review some event handlers. A: I prefer #regions myself, but an old coworker couldn't stand to have things hidden. I understood his point once I worked on a page with 7 #regions, at least 3 of which had been auto-generated and had the same name, but in general I think they're a useful way of splitting things up and keeping everything less cluttered. A: I really don't have a problem with using #region to organize code. Personally, I'll usually set up different regions for things like properties, event handlers, and public/private methods. A: Eclipse does some of this in Java (or PHP with plugins) on its own. Allows you to fold functions and such. I tend to like it. If I know what a function does and I am not working on it, I don't need to look at it. A: Emacs has a folding minor mode, but I only fire it up occasionally. Mostly when I'm working on some monstrosity inherited from another physicist who evidently had less instruction or took less care about his/her coding practices. A: Using regions (or otherwise folding code) should have nothing to do with code smells (or hiding them) or any other idea of hiding code you don't want people to "easily" see. Regions and code folding is really all about providing a way to easily group sections of code that can be collapsed/folded/hidden to minimize the amount of extraneous "noise" around what you are currently working on. If you set things up correctly (meaning actually name your regions something useful, like the name of the method contained) then you can collapse everything except for the function you are currently editing and still maintain some level of context without having to actually see the other code lines. There probably should be some best practice type guidelines around these ideas, but I use regions extensively to provide a standard structure to my code files (I group events, class-wide fields, private properties/methods, public properties/methods). Each method or property also has a region, where the region name is the method/property name. If I have a bunch of overloaded methods, the region name is the full signature and then that entire group is wrapped in a region that is just the function name. A: I personally hate regions. The only code that should be in regions in my opinion is generated code. When I open a file I always start with Ctrl+M+O. This folds to method level. When you have regions you see nothing but region names. Before checking in I group methods/fields logically so that it looks ok after Ctrl+M+O. If you need regions you have too many lines in your class. 
I also find that this is very common: #region ThisLooksLikeWellOrganizedCodeBecauseIUseRegions // total garbage, no structure here #endregion A: Enumerations Properties .ctors Methods Event Handlers That's all I use regions for. I had no idea you could use them inside of methods. Sounds like a terrible idea :)
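To illustrate that kind of class-level grouping, a rough C# sketch (class and member names are invented, and this is only one possible layout):
using System;

public class OrderForm
{
    #region Fields
    private int orderCount;
    #endregion

    #region Constructors
    public OrderForm() { }
    #endregion

    #region Properties
    public int OrderCount
    {
        get { return orderCount; }
    }
    #endregion

    #region Event Handlers
    private void SaveButton_Click(object sender, EventArgs e)
    {
        orderCount++;
    }
    #endregion
}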
{ "language": "en", "url": "https://stackoverflow.com/questions/5916", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "26" }
Q: API Yahoo India Maps Yahoo has a separate map for India (which has more details than the regular maps.yahoo.com) at http://in.maps.yahoo.com/. But when I use the API it goes to the default map. How do I get API access to YMaps India? A: I don't know about Yahoo, but there is another mapping website that provides an API for India: http://biz.mapmyindia.com/APIs.html
{ "language": "en", "url": "https://stackoverflow.com/questions/5918", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5" }
Q: Always Commit the same file with SVN In my web application I have a file which holds the current revision number via $Rev$. This works fine except, if I don't make any changes to that file, it doesn't get committed. Is there any way I can force a single file to always get committed to the SVN server? I'm using TortoiseSVN for Windows so any code or step-by-step instructions would be helpful. A: If you have TortoiseSVN installed, you also have the SubWCRev tool available. Use that tool to get the revision instead of misusing the $REV$ keyword. * *create a template file which contains your defines, maybe something like const long WC_REV = $WCREV$; in a file named version.h.tmpl *on every build, call SubWCRev to create the 'real' file you can use in your application: SubWCRev path\to\workingcopy path\to\version.h.tmpl path\to\version.h This will create the file version.h from version.h.tmpl, with the text $WCREV$ replaced with the revision your working copy is currently at. The docs for SubWCRev might help too. A: I think you may not be understanding how the $Rev$ flag works. The goal of the Rev flag is not to have this revision always committed into the subversion repository. The goal is that on an update, the Rev flag will always be what the revision is. You should not need to put code into subversion that contains the revision. Subversion is very good at keeping track of that information for you. What you probably missed is that you need to set a property on the file so that the Revision keyword will be properly processed. svn propset svn:keywords "Revision" file.txt This will ensure that whenever you do an update, the $Rev: xxx$ flag will be updated with the current revision. You don't need to worry about how it is committed to the repository. A: @gatekiller: It seems TortoiseSVN does support Client Side Hooks. A: This works fine except, if I don't make any changes to that file, it doesn't get committed. If the file never changes, why would you need it to commit every single time? [EDIT] @Sean = I understand what he's trying to do, but if the file is never getting updated via a hook or some other process and therefore never changing then SVN will never pick it up. A: I suggest changing the approach; committing a file every time means implicitly keeping the global revision number in that file. The user may need another keyword, GlobalKey, whose nonexistence is explained here. I actually have not used the mentioned svnversion; however, it may lead you to a solution. A: Could the revision-number file be changed by a script that deploys your website from SVN to the web-server? Not sure about Windows, but using a bash-script, I would do something like.. $ version=$(svnversion) $ svn export . /tmp/staging/ Export complete. $ echo "Revision: ${version}" > /tmp/staging/version.txt Then /tmp/staging/version.txt would contain "Revision: 1" (or whatever the highest revision number is). You could of course replace some identifier in a file, like $Rev$ (instead of creating version.txt as per the example above) A: Basically, you want the output of the svnversion command in a file. Such files are usually kept out of the repository, and automatically created by a build script. I suggest you do the same. If you don't build, but just do a svn up on the server side, just call svnversion after svn up or create a shell script to do both actions. If you have to keep it in the repository on the other hand, calling svnversion in a pre-commit hook would be your best bet. 
A: Depending on your client, some of them offer a pre-commit hook that you can implement something that simply "touches" the file and flags it as changed. If your using something like Visual Studio you could create a post build task that would "touch" the file but you would have to make sure that you do a build before committing changes. A: @gradonmantank: Because he wants that file to be updated with the latest revision number. Did you read his question completely? The pre-commit hook might work. A: I used to have a manual way of doing that. I'd run a script that would use sed to replace a comment with the current timestamp in my $Rev$ file. That way, the file contents would change and Subversion would commit it. What I didn't do was to take that to the next step: using Subversion's repository hooks to automate the process. Trouble is, I'm not sure if you're allowed to change file contents in hooks. The documentation seems to suggest that you can't. Instead, I guess you'd need a little script that you'd execute in place of the svn commit command that first updates the timestamp and then runs the normal commit. A: Committing the file wouldn't do you any good. The file isn't committed with the full version inside, it is replaced with just the keyword. If you look at the file inside the repository you will see this. As such, you need to force the file to be updated in some way instead. If you're on the Windows platform, you can use the SubWCRev tool, distributed with TortoiseSVN. Documentation here. A: You can use svn pre-commit-hooks to do it. The general idea I have in mind is create one that before the commit will put the new revision number in the file (get it using svnlook) or maybe change a bogus property on the file (it has to change or SVN will ignore it). For more information about pre-commit-hooks I found this page useful. A: I think the best approach to ensuring that there is a file in your web application that has the SVN revision number is to not have a file you commit, but rather to extract it as part of your build script. If you use maven, you can do this with the maven-buildnumber-plugin.
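To make the wrapper-script idea from the sed answer concrete, a hedged sketch (the file name and commit-message handling are placeholders, and sed's in-place flag differs between GNU and BSD):
#!/bin/sh
# Bump a timestamp comment so the file always changes, then run the normal commit.
sed -i "s/^# touched: .*/# touched: $(date -u +%Y-%m-%dT%H:%M:%SZ)/" version.txt
svn commit -m "$1"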
{ "language": "en", "url": "https://stackoverflow.com/questions/5948", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10" }
Q: What's your opinion on using UUIDs as database row identifiers, particularly in web apps? I've always preferred to use long integers as primary keys in databases, for simplicity and (assumed) speed. But when using a REST or Rails-like URL scheme for object instances, I'd then end up with URLs like this: http://example.com/user/783 And then the assumption is that there are also users with IDs of 782, 781, ..., 2, and 1. Assuming that the web app in question is secure enough to prevent people entering other numbers to view other users without authorization, a simple sequentially-assigned surrogate key also "leaks" the total number of instances (older than this one), in this case users, which might be privileged information. (For instance, I am user #726 in stackoverflow.) Would a UUID/GUID be a better solution? Then I could set up URLs like this: http://example.com/user/035a46e0-6550-11dd-ad8b-0800200c9a66 Not exactly succinct, but there's less implied information about users on display. Sure, it smacks of "security through obscurity" which is no substitute for proper security, but it seems at least a little more secure. Is that benefit worth the cost and complexity of implementing UUIDs for web-addressable object instances? I think that I'd still want to use integer columns as database PKs just to speed up joins. There's also the question of in-database representation of UUIDs. I know MySQL stores them as 36-character strings. Postgres seems to have a more efficient internal representation (128 bits?) but I haven't tried it myself. Anyone have any experience with this? Update: for those who asked about just using the user name in the URL (e.g., http://example.com/user/yukondude), that works fine for object instances with names that are unique, but what about the zillions of web app objects that can really only be identified by number? Orders, transactions, invoices, duplicate image names, stackoverflow questions, ... A: Rather than URLs like this: http://example.com/user/783 Why not have: http://example.com/user/yukondude Which is friendlier to humans and doesn't leak that tiny bit of information? A: You could use an integer which is related to the row number but is not sequential. For example, you could take the 32 bits of the sequential ID and rearrange them with a fixed scheme (for example, bit 1 becomes bit 6, bit 2 becomes bit 15, etc..). This will be a bidirectional encryption, and you will be sure that two different IDs will always have different encryptions. It would obviously be easy to decode, if one takes the time to generate enough IDs and get the schema, but, if I understand correctly your problem, you just want to not give away information too easily. A: We use GUIDs as primary keys for all our tables as it doubles as the RowGUID for MS SQL Server Replication. Makes it very easy when the client suddenly opens an office in another part of the world... A: I can't say about the web side of your question. But uuids are great for n-tier applications. PK generation can be decentralized: each client generates it's own pk without risk of collision. And the speed difference is generally small. Make sure your database supports an efficient storage datatype (16 bytes, 128 bits). At the very least you can encode the uuid string in base64 and use char(22). I've used them extensively with Firebird and do recommend. A: I don't think a GUID gives you many benefits. Users hate long, incomprehensible URLs. 
Create a shorter ID that you can map to the URL, or enforce a unique user name convention (http://example.com/user/brianly). The guys at 37Signals would probably mock you for worrying about something like this when it comes to a web app. Incidentally you can force your database to start creating integer IDs from a base value. A: It also depends on what you care about for your application. For n-tier apps GUIDs/UUIDs are simpler to implement and are easier to port between different databases. To produce Integer keys some databases support a sequence object natively and some require custom construction of a sequence table. Integer keys probably (I don't have numbers) provide an advantage for query and indexing performance as well as space usage. Direct DB querying is also much easier using numeric keys, less copy/paste as they are easier to remember. A: For what it's worth, I've seen a long running stored procedure (9+ seconds) drop to just a few hundred milliseconds of run time simply by switching from GUID primary keys to integers. That's not to say displaying a GUID is a bad idea, but as others have pointed out, joining on them, and indexing them, by definition, is not going to be anywhere near as fast as with integers. A: I can answer you that in SQL server if you use a uniqueidentifier (GUID) datatype and use the NEWID() function to create values you will get horrible fragmentation because of page splits. The reason is that when using NEWID() the value generated is not sequential. SQL 2005 added the NEWSEQUENTIALID() function to remedy that. One way to still use GUID and int is to have a guid and an int in a table so that the guid maps to the int. The guid is used externally but the int internally in the DB, for example 457180FB-C2EA-48DF-8BEF-458573DA1C10 1 9A70FF3C-B7DA-4593-93AE-4A8945943C8A 2 1 and 2 will be used in joins and the guids in the web app. This table will be pretty narrow and should be pretty fast to query. A: I work with a student management system which uses UUID's in the form of an integer. They have a table which holds the next unique ID. Although this is probably a good idea from an architectural point of view, it makes working with it on a daily basis difficult. Sometimes there is a need to do bulk inserts and having a UUID makes this very difficult, usually requiring writing a cursor instead of a simple SELECT INTO statement. A: I've tried both in real web apps. My opinion is that it is preferable to use integers and have short, comprehensible URLs. As a developer, it feels a little bit awful seeing sequential integers and knowing that some information about total record count is leaking out, but honestly - most people probably don't care, and that information has never really been critical to my businesses. Having long ugly UUID URLs seems to me like much more of a turn off to normal users. A: Why couple your primary key with your URI? Why not have your URI key be human readable (or unguessable, depending on your needs), and your primary index integer based, that way you get the best of both worlds. A lot of blog software does that, where the exposed id of the entry is identified by a 'slug', and the numeric id is hidden away inside of the system. The added benefit here is that you now have a really nice URL structure, which is good for SEO. Obviously for a transaction this is not a good thing, but for something like stackoverflow, it is important (see URL up top...). Getting uniqueness isn't that difficult. 
If you are really concerned, store a hash of the slug inside a table somewhere, and do a lookup before insertion. edit: Stackoverflow doesn't quite use the system I describe, see Guy's comment below. A: I think using a GUID would be the better choice in your situation. It takes up more space but it's more secure. A: I think that this is one of these issues that cause quasi-religious debates, and it's almost futile to talk about. I would just say use what you prefer. In 99% of systems it will not matter which type of key you use, so the benefits (stated in the other posts) of using one sort over the other will never be an issue. A: YouTube uses 11 characters with base64 encoding which offers 64^11 possibilities, and they are usually pretty manageable to write. I wonder if that would offer better performance than a full on UUID. UUID converted to base 64 would be double the size I believe. More information can be found here: https://www.youtube.com/watch?v=gocwRvLhDf8 A: Pros and Cons of UUID Note: uuid_v7 is a time-based uuid instead of random. So you can use it to order by creation date and solve some performance issues with db inserts if you do really many of them. Pros: * *can be generated on api level (good for distributed systems) *hides count information about entity *doesn't have the 2,147,483,647 limit of a 32-bit int *removes a layer of errors related to passing one entity id userId: 25 to get another bookId: 25 accidentally *more friendly graphql usage as ID key Cons: * *128-bit instead of 32-bit int (slightly bigger size in db and ~40% bigger index, around ~30MB for 1 million rows), should be a minor concern *can't be sorted by creation (can be solved with uuid_v7) *non-time-ordered UUID versions such as UUIDv4 have poor database index locality (can be solved with uuid_v7) URL usage Depending on the app you may care or not care about the url. If you don't care, just use the uuid as is, it's fine. If you care, then you will need to decide on a url format. The best case scenario is the use of a unique slug if you are ok with never changing it: http://example.com/sale/super-duper-phone If your url is generated from the title and you want to change the slug on title change, there are a few options. Use it as is and query by uuid (slug is just decoration): http://example.com/book/035a46e0-6550-11dd-ad8b-0800200c9a66/new-title Convert it to base64url: * *you can get the uuid back from AYEWXcsicACGA6PT7v_h3A *AYEWXcsicACGA6PT7v_h3A - 22 characters *035a46e0-6550-11dd-ad8b-0800200c9a66 - 36 characters http://example.com/book/AYEWXcsicACGA6PT7v_h3A/new-title Generate a unique short 11-character string just for slug usage: http://example.com/book/icACEWXcsAY-new-title http://example.com/book/icACEWXcsAY/new-title If you don't want a uuid or short id in the url and want only the slug, but do care about seo and user bookmarks, you will need to redirect all requests from http://example.com/sale/phone-1-title to http://example.com/sale/phone-1-title-updated this will add the additional complexity of managing slug history, adding a fallback to history for all queries where the slug is used, and redirects if slugs don't match A: As long as you use a DB system with efficient storage, HDD is cheap these days anyway... I know GUID's can be a b*tch to work with some times and come with some query overhead however from a security perspective they are a savior. 
Thinking security by obscurity, they fit well when forming obscure URIs; and when building normalised DBs with Table, Record and Column defined security, you can't go wrong with GUIDs - try doing that with integer-based IDs.
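To illustrate the base64url conversion mentioned above, a small Python sketch (the UUID is the one from the question; the helper names are made up): 16 raw bytes become a 22-character URL-safe token, and the conversion is reversible.
import base64
import uuid

def uuid_to_slug(u):
    # Strip the two '=' padding characters to get 22 URL-safe characters.
    return base64.urlsafe_b64encode(u.bytes).rstrip(b"=").decode("ascii")

def slug_to_uuid(slug):
    # Re-add the padding before decoding back to the original 16 bytes.
    return uuid.UUID(bytes=base64.urlsafe_b64decode(slug + "=="))

u = uuid.UUID("035a46e0-6550-11dd-ad8b-0800200c9a66")
slug = uuid_to_slug(u)
assert len(slug) == 22 and slug_to_uuid(slug) == u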
{ "language": "en", "url": "https://stackoverflow.com/questions/5949", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "80" }
Q: Best way to abstract season/show/episode data Basically, I've written an API to www.thetvdb.com in Python. The current code can be found here. It grabs data from the API as requested, and has to store the data somehow, and make it available by doing: print tvdbinstance[1][23]['episodename'] # get the name of episode 23 of season 1 What is the "best" way to abstract this data within the Tvdb() class? I originally used a extended Dict() that automatically created sub-dicts (so you could do x[1][2][3][4] = "something" without having to do if x[1].has_key(2): x[1][2] = [] and so on) Then I just stored the data by doing self.data[show_id][season_number][episode_number][attribute_name] = "something" This worked okay, but there was no easy way of checking if x[3][24] was supposed to exist or not (so I couldn't raise the season_not_found exception). Currently it's using four classes: ShowContainer, Show, Season and Episode. Each one is a very basic dict, which I can easily add extra functionality in (the search() function on Show() for example). Each has a __setitem__, __getitem_ and has_key. This works mostly fine, I can check in Shows if it has that season in it's self.data dict, if not, raise season_not_found. I can also check in Season() if it has that episode and so on. The problem now is it's presenting itself as a dict, but doesn't have all the functionality, and because I'm overriding the __getitem__ and __setitem__ functions, it's easy to accidentally recursively call __getitem__ (so I'm not sure if extending the Dict class will cause problems). The other slight problem is adding data into the dict is a lot more work than the old Dict method (which was self.data[seas_no][ep_no]['attribute'] = 'something'). See _setItem and _setData. It's not too bad, since it's currently only a read-only API interface (so the users of the API should only ever retrieve data, not add more), but it's hardly... Elegant. I think the series-of-classes system is probably the best way, but does anyone have a better idea for storing the data? And would extending the ShowContainer/etc classes with Dict cause problems? A: OK, what you need is classobj from new module. That would allow you to construct exception classes dynamically (classobj takes a string as an argument for the class name). import new myexc=new.classobj("ExcName",(Exception,),{}) i=myexc("This is the exc msg!") raise i this gives you: Traceback (most recent call last): File "<stdin>", line 1, in <module> __main__.ExcName: This is the exc msg! remember that you can always get the class name through: self.__class__.__name__ So, after some string mangling and concatenation, you should be able to obtain appropriate exception class name and construct a class object using that name and then raise that exception. P.S. - you can also raise strings, but this is deprecated. raise(self.__class__.__name__+"Exception") A: Why not use SQLite? There is good support in Python and you can write SQL queries to get the data out. Here is the Python docs for sqlite3 If you don't want to use SQLite you could do an array of dicts. 
episodes = [] episodes.append({'season':1, 'episode': 2, 'name':'Something'}) episodes.append({'season':1, 'episode': 2, 'name':'Something', 'actors':['Billy Bob', 'Sean Penn']}) That way you add metadata to any record and search it very easily season_1 = [e for e in episodes if e['season'] == 1] billy_bob = [e for e in episodes if 'actors' in e and 'Billy Bob' in e['actors']] for episode in billy_bob: print "Billy Bob was in Season %s Episode %s" % (episode['season'], episode['episode']) A: I have done something similar in the past and used an in-memory XML document as a quick and dirty hierarchical database for storage. You can store each show/season/episode as an element (nested appropriately) and attributes of these things as xml attributes on the elements. Then you can use XQuery to get info back out. NOTE: I'm not a Python guy so I don't know what your xml support is like. NOTE 2: You'll want to profile this because it'll be bigger and slower than the solution you've already got. Likely enough if you are doing some high-volume processing then XML is probably not going to be your friend. A: I don't get this part here: This worked okay, but there was no easy way of checking if x[3][24] was supposed to exist or not (so I couldn't raise the season_not_found exception) There is a way to do it - it's called in: >>>x={} >>>x[1]={} >>>x[1][2]={} >>>x {1: {2: {}}} >>> 2 in x[1] True >>> 3 in x[1] False What seems to be the problem with that? A: Bartosz/To clarify "This worked okay, but there was no easy way of checking if x[3][24] was supposed to exist or not" x['some show'][3][24] would return season 3, episode 24 of "some show". If there was no season 3, I want the pseudo-dict to raise tvdb_seasonnotfound, if "some show" doesn't exist, then raise tvdb_shownotfound The current system is a series of classes, each with a __getitem__ - Show checks if self.seasons.has_key(requested_season_number), the Season class checks if self.episodes.has_key(requested_episode_number) and so on. It works, but there seems to be a lot of repeated code (each class is basically the same, but raises a different error)
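For illustration, one hedged way to cut that repetition (a sketch, not the actual tvdb_api code; the exception names are only examples): put the shared __getitem__ in one small base container and let each subclass declare which error it raises.
class tvdb_shownotfound(KeyError): pass
class tvdb_seasonnotfound(KeyError): pass
class tvdb_episodenotfound(KeyError): pass

class NotFoundDict(dict):
    # Subclasses override this with the exception they should raise.
    not_found_error = KeyError
    def __getitem__(self, key):
        try:
            return dict.__getitem__(self, key)
        except KeyError:
            raise self.not_found_error(key)

class ShowContainer(NotFoundDict):
    not_found_error = tvdb_shownotfound

class Show(NotFoundDict):
    not_found_error = tvdb_seasonnotfound

class Season(NotFoundDict):
    not_found_error = tvdb_episodenotfound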
{ "language": "en", "url": "https://stackoverflow.com/questions/5966", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "17" }
Q: How do I get rid of the "multiple describeType entries" warning? Does anyone know why when using BindingUtils on the selectedItem property of a ComboBox you get the following warning? Any ideas how to resolve the issue? The binding still works properly, but it would be nice to get rid of the warning. warning: multiple describeType entries for 'selectedItem' on type 'mx.controls::ComboBox': <accessor name="selectedItem" access="readwrite" type="Object" declaredBy="mx.controls::ComboBase"> <metadata name="Bindable"> <arg key="" value="valueCommit"/> </metadata> A: It is better to override the property in question and declare it final. A: Here is the code. It is basically a copy of BindingUtils.bindProperty that is setup for a ComboBox so that both the combo box and the model are updated when either of the two change. public static function bindProperty2(site:Object, prop:String, host:Object, chain:Object, commitOnly:Boolean = false):ChangeWatcher { var cbx:ComboBox = null; if ( site is ComboBox ) { cbx = ComboBox(site); } if ( host is ComboBox ) { cbx = ComboBox(host); } var labelField:String = "listID"; var w:ChangeWatcher = ChangeWatcher.watch(host, chain, null, commitOnly); if (w != null) { var func:Function; if ( site is ComboBox ) { func = function(event:*):void { var dp:ICollectionView = ICollectionView(site.dataProvider); var selItem:Object = null; for ( var i:int=0; i<dp.length; i++ ) { var obj:Object = dp[i]; if ( obj.hasOwnProperty(labelField) ) { var val:String = String(obj[labelField]); if ( val == w.getValue() ) { selItem = obj; break; } } } site.selectedItem = selItem; }; w.setHandler(func); func(null); } else { func = function(event:*):void { var value:Object = w.getValue(); if ( value == null ) { site[prop] = null; } else { site[prop] = String(w.getValue()[labelField]); } }; w.setHandler(func); func(null); } } return w; }
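For reference, a rough, untested sketch of what the "override the property and declare it final" suggestion might look like in a ComboBox subclass (the class name is invented; the metadata string mirrors the valueCommit event named in the warning):
package
{
    import mx.controls.ComboBox;

    public class MyComboBox extends ComboBox
    {
        [Bindable("valueCommit")]
        final override public function get selectedItem():Object
        {
            return super.selectedItem;
        }

        final override public function set selectedItem(value:Object):void
        {
            super.selectedItem = value;
        }
    }
}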
{ "language": "en", "url": "https://stackoverflow.com/questions/5982", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8" }
Q: Should I provide accessor methods / Getter Setters for public/protected components on a form? If I have a .NET Form with a component/object such as a textbox that I need to access from a parent or other form I obviously need to "upgrade" the modifier to this component to an Internal or Public level variable. Now, if I were providing a public variable of an int or string type etc. in my form class I wouldn't think twice about using Getters and (maybe) Setters around this, even if they didn't do anything other than provide direct access to the variable. However, the VS designer doesn't seem to implement such Getters/Setters for those public objects that are components on a form (and therefore does not comply with good programming practice). So, the question is: In order to do the "right thing" should I wrap such VS designer components or objects in a Getter and/or Setter? A: "However, the VS designer doesn't seem to implement such Getters/Setters for those public objects that are components on a form (and therefore does not comply with good programming practice)." If you mean the controls you're dragging and dropping onto the form, these are marked as private instance members and are added to the form's Controls collection. Why would they be otherwise? A form could have forty or fifty controls, it'd be somewhat unnecessary and unwieldy to provide a getter/setter for every control on the form. The designer leaves it up to you to provide delegated access to specific controls via public getter/setters. The designer does the right thing here. A: The reason for not implementing Getters and Setters for components on a form, I believe, is because they wouldn't be "Thread Safe". .NET objects are supposed to be modified only by the form thread that created them; if you put on getters and setters you are potentially opening them up to any thread. Instead you're supposed to implement a delegate system where changes to these objects are delegated to the thread that created them and run there. A: This is a classic example of encapsulation in object-oriented design. A Form is an object whose responsibility is to present UI to the user and accept input. The interface between the Form object and other areas of the code should be a data-oriented interface, not an interface which exposes the inner implementation details of the Form. The inner workings of the Form (ie, the controls) should remain hidden from any consuming code. A mature solution would probably involve the following design points: * *Public methods or properties are behavior (show, hide, position) or data-oriented (set data, get data, update data). *All event handlers implemented by the Form are wrapped in appropriate thread delegation code to enforce Form thread-execution rules. *Controls themselves would be data-bound to the underlying data structure (where appropriate) to reduce code. And that's not even mentioning meta-development things like unit tests. A: I always do that, and if you ARE following an MVP design, creating getters/setters for your view components would be a design requirement. I do not understand what you mean by "does not comply with good programming practice". Microsoft violates a lot of good programming practices to make it easier to create stuff in Visual Studio (for the sake of rapid app development) and I do not see the lack of getters/setters for controls as evidence of violating any such best practices.
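For illustration, a hedged sketch of the "expose the data, not the control" idea as a property on the form (the control and property names are invented), including the cross-thread point raised above:
// Inside the form's class; userNameTextBox is a designer-generated private control.
public string UserName
{
    get { return userNameTextBox.Text; }
    set
    {
        if (userNameTextBox.InvokeRequired)
            userNameTextBox.Invoke(new Action(() => userNameTextBox.Text = value));
        else
            userNameTextBox.Text = value;
    }
}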
{ "language": "en", "url": "https://stackoverflow.com/questions/5997", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }
Q: Log4Net configuring log level How do I make Log4net only log Info level logs? Is that even possible? Can you only set a threshold? This is what I have, and it logs Info and above as I would expect. Is there anything i can do to make it only log info? <logger name="BrokerCollection.Model.XmlDocumentCreationTask"> <appender-ref ref="SubmissionAppender"/> <level value="Info" /> </logger> A: Within the definition of the appender, I believe you can do something like this: <appender name="AdoNetAppender" type="log4net.Appender.AdoNetAppender"> <filter type="log4net.Filter.LevelRangeFilter"> <param name="LevelMin" value="INFO"/> <param name="LevelMax" value="INFO"/> </filter> ... </appender> A: Yes. It is done with a filter on the appender. Here is the appender configuration I normally use, limited to only INFO level. <appender name="RollingFileAppender" type="log4net.Appender.RollingFileAppender"> <file value="${HOMEDRIVE}\\PI.Logging\\PI.ECSignage.${COMPUTERNAME}.log" /> <appendToFile value="true" /> <maxSizeRollBackups value="30" /> <maximumFileSize value="5MB" /> <rollingStyle value="Size" /> <!--A maximum number of backup files when rolling on date/time boundaries is not supported. --> <staticLogFileName value="false" /> <lockingModel type="log4net.Appender.FileAppender+MinimalLock" /> <layout type="log4net.Layout.PatternLayout"> <param name="ConversionPattern" value="%date{yyyy-MM-dd HH:mm:ss.ffff} [%2thread] %-5level %20.20type{1}.%-25method at %-4line| (%-30.30logger) %message%newline" /> </layout> <filter type="log4net.Filter.LevelRangeFilter"> <levelMin value="INFO" /> <levelMax value="INFO" /> </filter> </appender> A: Use threshold. For example: <appender name="RollingFileAppender" type="log4net.Appender.RollingFileAppender"> <threshold value="WARN"/> <param name="File" value="File.log" /> <param name="AppendToFile" value="true" /> <param name="RollingStyle" value="Size" /> <param name="MaxSizeRollBackups" value="10" /> <param name="MaximumFileSize" value="1024KB" /> <param name="StaticLogFileName" value="true" /> <layout type="log4net.Layout.PatternLayout"> <param name="Header" value="[Server startup]&#13;&#10;" /> <param name="Footer" value="[Server shutdown]&#13;&#10;" /> <param name="ConversionPattern" value="%d %m%n" /> </layout> </appender> <appender name="EventLogAppender" type="log4net.Appender.EventLogAppender" > <threshold value="ERROR"/> <layout type="log4net.Layout.PatternLayout"> <conversionPattern value="%date [%thread]- %message%newline" /> </layout> </appender> <appender name="ConsoleAppender" type="log4net.Appender.ConsoleAppender"> <threshold value="INFO"/> <layout type="log4net.Layout.PatternLayout"> <param name="ConversionPattern" value="%d [%thread] %m%n" /> </layout> </appender> In this example all INFO and above are sent to Console, all WARN are sent to file and ERRORs are sent to the Event-Log. 
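Worth noting: threshold only sets a lower bound, so on its own it cannot restrict an appender to a single level. To keep nothing but INFO, one option (a sketch) is to pair a match filter with a deny-all filter inside the appender:
<filter type="log4net.Filter.LevelMatchFilter">
  <levelToMatch value="INFO" />
</filter>
<filter type="log4net.Filter.DenyAllFilter" />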
A: If you would like to perform it dynamically try this: using System; using System.Collections.Generic; using System.Text; using log4net; using log4net.Config; using NUnit.Framework; namespace ExampleConsoleApplication { enum DebugLevel : int { Fatal_Msgs = 0 , Fatal_Error_Msgs = 1 , Fatal_Error_Warn_Msgs = 2 , Fatal_Error_Warn_Info_Msgs = 3 , Fatal_Error_Warn_Info_Debug_Msgs = 4 } class TestClass { private static readonly ILog logger = LogManager.GetLogger(typeof(TestClass)); static void Main ( string[] args ) { TestClass objTestClass = new TestClass (); Console.WriteLine ( " START " ); int shouldLog = 4; //CHANGE THIS FROM 0 TO 4 integer to check the functionality of the example //0 -- prints only FATAL messages //1 -- prints FATAL and ERROR messages //2 -- prints FATAL , ERROR and WARN messages //3 -- prints FATAL , ERROR , WARN and INFO messages //4 -- prints FATAL , ERROR , WARN , INFO and DEBUG messages string srtLogLevel = String.Empty; switch (shouldLog) { case (int)DebugLevel.Fatal_Msgs : srtLogLevel = "FATAL"; break; case (int)DebugLevel.Fatal_Error_Msgs: srtLogLevel = "ERROR"; break; case (int)DebugLevel.Fatal_Error_Warn_Msgs : srtLogLevel = "WARN"; break; case (int)DebugLevel.Fatal_Error_Warn_Info_Msgs : srtLogLevel = "INFO"; break; case (int)DebugLevel.Fatal_Error_Warn_Info_Debug_Msgs : srtLogLevel = "DEBUG" ; break ; default: srtLogLevel = "FATAL"; break; } objTestClass.SetLogingLevel ( srtLogLevel ); objTestClass.LogSomething (); Console.WriteLine ( " END HIT A KEY TO EXIT " ); Console.ReadLine (); } //eof method /// <summary> /// Activates debug level /// </summary> /// <sourceurl>http://geekswithblogs.net/rakker/archive/2007/08/22/114900.aspx</sourceurl> private void SetLogingLevel ( string strLogLevel ) { string strChecker = "WARN_INFO_DEBUG_ERROR_FATAL" ; if (String.IsNullOrEmpty ( strLogLevel ) == true || strChecker.Contains ( strLogLevel ) == false) throw new Exception ( " The strLogLevel should be set to WARN , INFO , DEBUG ," ); log4net.Repository.ILoggerRepository[] repositories = log4net.LogManager.GetAllRepositories (); //Configure all loggers to be at the debug level. foreach (log4net.Repository.ILoggerRepository repository in repositories) { repository.Threshold = repository.LevelMap[ strLogLevel ]; log4net.Repository.Hierarchy.Hierarchy hier = (log4net.Repository.Hierarchy.Hierarchy)repository; log4net.Core.ILogger[] loggers = hier.GetCurrentLoggers (); foreach (log4net.Core.ILogger logger in loggers) { ( (log4net.Repository.Hierarchy.Logger)logger ).Level = hier.LevelMap[ strLogLevel ]; } } //Configure the root logger. log4net.Repository.Hierarchy.Hierarchy h = (log4net.Repository.Hierarchy.Hierarchy)log4net.LogManager.GetRepository (); log4net.Repository.Hierarchy.Logger rootLogger = h.Root; rootLogger.Level = h.LevelMap[ strLogLevel ]; } private void LogSomething () { #region LoggerUsage DOMConfigurator.Configure (); //tis configures the logger logger.Debug ( "Here is a debug log." ); logger.Info ( "... and an Info log." ); logger.Warn ( "... and a warning." ); logger.Error ( "... and an error." ); logger.Fatal ( "... and a fatal error." 
); #endregion LoggerUsage } } //eof class } //eof namespace The app config: <?xml version="1.0" encoding="utf-8" ?> <configuration> <configSections> <section name="log4net" type="log4net.Config.Log4NetConfigurationSectionHandler, log4net" /> </configSections> <log4net> <appender name="LogFileAppender" type="log4net.Appender.FileAppender"> <param name="File" value="LogTest2.txt" /> <param name="AppendToFile" value="true" /> <layout type="log4net.Layout.PatternLayout"> <param name="Header" value="[Header] \r\n" /> <param name="Footer" value="[Footer] \r\n" /> <param name="ConversionPattern" value="%d [%t] %-5p %c %m%n" /> </layout> </appender> <appender name="ColoredConsoleAppender" type="log4net.Appender.ColoredConsoleAppender"> <mapping> <level value="ERROR" /> <foreColor value="White" /> <backColor value="Red, HighIntensity" /> </mapping> <layout type="log4net.Layout.PatternLayout"> <conversionPattern value="%date [%thread] %-5level %logger [%property{NDC}] - %message%newline" /> </layout> </appender> <appender name="AdoNetAppender" type="log4net.Appender.AdoNetAppender"> <connectionType value="System.Data.SqlClient.SqlConnection, System.Data, Version=1.2.10.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" /> <connectionString value="data source=ysg;initial catalog=DBGA_DEV;integrated security=true;persist security info=True;" /> <commandText value="INSERT INTO [DBGA_DEV].[ga].[tb_Data_Log] ([Date],[Thread],[Level],[Logger],[Message]) VALUES (@log_date, @thread, @log_level, @logger, @message)" /> <parameter> <parameterName value="@log_date" /> <dbType value="DateTime" /> <layout type="log4net.Layout.PatternLayout" value="%date{yyyy'-'MM'-'dd HH':'mm':'ss'.'fff}" /> </parameter> <parameter> <parameterName value="@thread" /> <dbType value="String" /> <size value="255" /> <layout type="log4net.Layout.PatternLayout" value="%thread" /> </parameter> <parameter> <parameterName value="@log_level" /> <dbType value="String" /> <size value="50" /> <layout type="log4net.Layout.PatternLayout" value="%level" /> </parameter> <parameter> <parameterName value="@logger" /> <dbType value="String" /> <size value="255" /> <layout type="log4net.Layout.PatternLayout" value="%logger" /> </parameter> <parameter> <parameterName value="@message" /> <dbType value="String" /> <size value="4000" /> <layout type="log4net.Layout.PatternLayout" value="%messag2e" /> </parameter> </appender> <root> <level value="INFO" /> <appender-ref ref="LogFileAppender" /> <appender-ref ref="AdoNetAppender" /> <appender-ref ref="ColoredConsoleAppender" /> </root> </log4net> </configuration> The references in the csproj file: <Reference Include="log4net, Version=1.2.10.0, Culture=neutral, PublicKeyToken=1b44e1d426115821, processorArchitecture=MSIL"> <SpecificVersion>False</SpecificVersion> <HintPath>..\..\..\Log4Net\log4net-1.2.10\bin\net\2.0\release\log4net.dll</HintPath> </Reference> <Reference Include="nunit.framework, Version=2.4.8.0, Culture=neutral, PublicKeyToken=96d09a1eb7f44a77, processorArchitecture=MSIL" /> A: you can use log4net.Filter.LevelMatchFilter. other options can be found at log4net tutorial - filters in ur appender section add <filter type="log4net.Filter.LevelMatchFilter"> <levelToMatch value="Info" /> <acceptOnMatch value="true" /> </filter> the accept on match default is true so u can leave it out but if u set it to false u can filter out log4net filters
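One caveat worth adding (hedged, and the class name below is a placeholder): appender filters only drop events on the way out; they do not change what the ILog itself reports, so IsDebugEnabled still follows the logger's own level.
ILog log = LogManager.GetLogger(typeof(SomeClass));
if (log.IsDebugEnabled)
{
    // Still entered when the logger level is DEBUG, even if an INFO-only
    // filter later discards the message at the appender.
    log.Debug("expensive message built only when the level permits it");
}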
{ "language": "en", "url": "https://stackoverflow.com/questions/6007", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "65" }
Q: How do you deal with configuration files in source control? Let's say you have a typical web app and with a file configuration.whatever. Every developer working on the project will have one version for their dev boxes, there will be a dev, prod and stage versions. How do you deal with this in source control? Not check in this file at all, check it with different names or do something fancy altogether? A: What I've done in the past is to have a default config file which is checked in to source control. Then, each developer has their own override config file which is excluded from source control. The app first loads the default, and then if the override file is present, loads that and uses any settings from the override in preference to the default file. In general, the smaller the override file the better, but it can always contain more settings for a developer with a very non-standard environment. A: Currently I have the "template" config file with an added extension for example: web.config.rename However, I can see an issue with this method if critical changes have changed. A: +1 on the template approach. But since this question has tag Git, the distributed alternative springs to mind, in which customizations are kept on a private testing branch: A---B---C---D--- <- mainline (public) \ \ B'------D'--- <- testing (private) In this scheme, the mainline contains a generic, "template" config file requiring the minimal amount of adjustments to become functional. Now, developers/testers can tweak the config file to their heart's content, and only commit these changes locally on one a private testing branch (e.g. B' = B + customizations). Each time mainline advances, they effortlessly merge it into testing, which results in merge commits such as D' (= D + merged version of B's customizations). This scheme really shines when the "template" config file is updated: the changes from both sides get merged, and are extremely likely to result into conflicts (or test failures) if they are incompatible! A: The solution we use is to have only the single configuration file (web.config/app.config), but we add a special section to the file that contains settings for all environments. There is a LOCAL, DEV, QA, PRODUCTION sections each containing the configuration keys relevant to that environment in our config file(s). What make this all work is an assembly named xxx.Environment which is referenced in all of our applications (winforms and webforms) which tells the application which environment it is operating on. The xxx.Environment assembly reads a single line of information from the machine.config of the given machine which tells it that it is on DEV, QA, etc. This entry is present on all of our workstations and servers. Hope this helps. A: I have always kept all versions of the config files in source control, in the same folder as the web.config file. For example web.config web.qa.config web.staging.config web.production.config I prefer this naming convention (as opposed to web.config.production or production.web.config) because * *It keeps the files together when you sort by file name *It keeps the files together when you sort by file extension *If the file accidentally gets pushed to production, you won't be able to see the contents over http because IIS will prevent *.config files from being served The default config file should be configured such that you can run the application locally on your own machine. 
Most importantly, these files should be almost 100% identical in every aspect, even formatting. You shouldn't use tabs in one version and spaces in another for indenting. You should be able to run a diff tool against the files to see exactly what is different between them. I prefer to use WinMerge for diffing the files. When your build process creates the binaries, there should be a task that overwrites the web.config with the config file appropriate for that environment. If the files are zipped up, then the non relevant files should be deleted from that build. A: I've used the template before, i.e. web.dev.config, web.prod.config, etc, but now prefer the 'override file' technique. The web.config file contains the majority of the settings, but an external file contains environment-specific values such as db connections. Good explanation on Paul Wilson's blog. I think this reduces the amount to duplication between the config files which can cause pain when adding new values / attributes. A: @Grant is right. I'm on a team with close to 100 other developers, and our config files are not checked into source control. We have versions of the files in the repository that are pulled with each check out but they don't change. It's worked out pretty well for us. A: Configuration is code, and you should version it. We base our configuration files on usernames; in both UNIX/Mac and Windows you can access the user's login name, and as long as these are unique to the project, you are fine. You can even override this in the environment, but you should version control everything. This also allows you to examine others' configurations, which can help diagnose build and platform issues. A: The checked-in, plain-vanilla version of app/web.config should be generic enough to work on all developer machines, and be kept up to date with any new setting changes, etc. If you require a specific set of settings for dev/test/production settings, check in separate files with those settings, as GateKiller stated, with some sort of naming convention, though I usually go with "web.prod.config", as not to change the file extension. A: We use a template config file that is checked in to version control and then a step in our automated build to replace specific entries in the template file with environment-specific settings. The environment-specific settings are stored in a separate XML file that is also under version control. We're using MSBuild in our automated build, so we use the XmlUpdate task from MSBuild Community Tasks to update the values. A: For a long time, I have done exactly what bcwood has done. I keep copies of web.dev.config, web.test.config, web.prod.config, etc. under source control, and then my build/deploy system renames them automatically as it deploys to various environments. You get a certain amount of redundancy between the files (especially with all of the asp.net stuff in there), but generally it works really well. You also have to make sure that everyone on the team remembers to update all the files when they make a change. By the way, I like to keep ".config" at the end as the extension so that file associations do not get broken. As far as local developer versions of the config file, I always try my best to encourage people to use the same local settings as much as possible so that there is no need to have your own version. It doesn't always work for everyone, in which case people usually just replace it locally as needed and go from there. It's not too painful or anything. 
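To make the 'override file' technique mentioned above concrete, a hedged sketch of how it can look in an ASP.NET web.config (the file names are invented; appSettings merges in an optional local file, while configSource replaces the whole section and the referenced file must exist):
<appSettings file="local.appSettings.config">
  <add key="Environment" value="dev" />
</appSettings>
<connectionStrings configSource="connectionStrings.config" />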
A: Don't version that file. Version a template or something. A: My team keeps separate versions of the config files for each environment (web.config.dev, web.config.test, web.config.prod). Our deployment scripts copy out the correct version, renaming it to web.config. This way, we have full version control on the config files for each environment, can easily perform a diff, etc. A: I version control it, but never push it to the other servers. If the production server requires a change, I make that change directly to the config file. It may not be pretty, but it works just fine. A: On our project we have configuration stored in files with a prefix, then our build system pulls in the appropriate configuration based on the current system's hostname. This works well for us on a relatively small team, allowing us to apply config changes to other people's files if/when we add a new configuration item. Obviously this definitely doesn't scale to open source projects with an unbounded number of developers. A: We have two problems here. * *Firstly we have to control the configuration file that is shipped with the software. It is all too easy for a developer to check in an unwanted change to the master config file, if they are using the same file in the development environment. On the other side, if you have a separate configuration file that is included by the installer, it is very easy to forget to add a new setting to it, or to let the comments in it get out of sync with the comments in the development configuration file. *Then we have the problem that developers have to keep their copy of the configuration file up-to-date as other developers add new configuration settings. However some settings like database connection strings are different for each developer. *There is a 3rd problem the question/answers do not cover. How do you merge in the changes a customer has made to your configuration file when you install a new version of your software? I have yet to see a good solution that works well in all cases, however I have seen some partial solutions (that can be combined in different combinations as needed) that reduce the problem a lot. * *Firstly reduce the number of configuration items you have in your main configuration file. If you don’t have a need to let your customers change your mappings, use Fluent NHibernate (or otherwise) to move the configuration into code. Likewise for dependency injection setup. *Split up the configuration file when possible, e.g. use a separate file to configure what Log4Net logs. *Don’t repeat items between lots of configuration files, e.g. if you have 4 web applications that are all installed on the same machine, have an overall configuration file that the web.config file in each application points to. (Use a relative path by default, so it is rare to have to change the web.config file) *Process the development configuration file to get the shipping configuration file. This could be done by having default values in the XML comments that are then set in the configuration file when a build is done. Or by having sections that are deleted as part of the process of creating the installer. *Instead of just having one database connection string, have one per developer. E.g. first look for “database_ianr” (where ianr is my username or machine name) in the configuration file at run time; if it is not found, then look for “database”. Have a 2nd level, e.g. "-oracle" or "-sqlserver", to make it quicker for developers to get to both database systems. 
This can of course also be done for any other configuration value. Then all values that end in "_userName" can be stripped out before shipping the configuration file. However, in the end what you need is an "owner of the configuration file" who takes on the responsibility of managing the configuration file(s) as above or otherwise. He/she should also do a diff on the customer-facing configuration file before each shipment. You can't remove the need for a caring person for this sort of problem. A: I don't think there's a single solution that works for all cases, as it may depend on the sensitivity of data in the config files, or the programming language you're using, and so many other factors. But I think it's important to keep the config files for all environments under source control, so you can always know when one was changed and by whom, and more importantly, be able to recover it if things go wrong. And they will. So here's how I do it. This is for NodeJS projects usually, but I think it works for other frameworks and languages as well. What I do is create a configs directory at the root of the project, and under that directory keep multiple files for all environments (and sometimes separate files for each developer's environment as well), which are all tracked in source control. And there is the actual file that the code uses, named config, at the root of the project. This is the only file that is not tracked. So it looks like this:
root
|
|- config (not tracked)
|
|- configs/ (all tracked)
   |- development
   |- staging
   |- live
   |- James
When someone checks out the project, he copies the config file he wants to use into the untracked config file and is free to edit it as he wishes, but he is also responsible for copying those changes into the other environment files as needed before he commits. And on servers, the untracked file can simply be a copy (or reference) of the tracked file corresponding to that environment. In JS you can simply have 1 line to require that file. This flow may be a little complicated at first, but it has great advantages:
*You never have to worry about a config file getting deleted or modified on the server without having a backup.
*The same goes if a developer has some custom config on his machine and his machine stops working for any reason.
*Before any deployment you can diff the config files for development and staging, for example, and see if there's anything missing or broken.
A: We just keep the production config file checked in. It's the developer's responsibility to change the file when they pull it out of SourceSafe for staging or development. This has burnt us in the past so I wouldn't suggest it. A: I faced that same problem and I found a solution for it. I first added all the files to the central repository (also the developer ones), so if a developer fetches the files from the repository, the developer config is also there. When changes are made to this file, Git should not be aware of them. That way changes cannot be pushed/committed to the repository but stay local. I solved this by using the git command update-index --assume-unchanged. I made a bat file that is executed in the prebuild of the projects that contain a file whose changes should be ignored by Git.
Here is the code I put in the bat file:

REM MakeFileBehaveLikeUnchangedForGit.bat - marks the given file (%1) with
REM "assume unchanged" so local edits are not picked up by Git.
IF NOT EXIST %2%\.git GOTO NOGIT
set fileName=%1
set fileName=%fileName:\=/%
for /f "useback tokens=*" %%a in ('%fileName%') do set fileName=%%~a
set "gitUpdate=git update-index --assume-unchanged"
set parameter= "%gitUpdate% %fileName%"
echo %parameter% as parameter for git
"C:\Program Files (x86)\Git\bin\sh.exe" --login -i -c %parameter%
echo MakeFileBehaveLikeUnchangedForGit Done.
GOTO END
:NOGIT
echo no git here.
echo %2%
:END

In my prebuild I would make a call to the bat file, for example:

call "$(ProjectDir)\..\..\MakeFileBehaveLikeUnchangedForGit.bat" "$(ProjectDir)Web.config.developer" "$(SolutionDir)"

I found on SO a bat file that copies the correct config file to the web.config/app.config. I also call this bat file in the prebuild. The code for this bat file is:

@echo off
REM copyifnewer.bat - copies %1 over %2 when the files differ (or when %2 does not exist).
echo Comparing two files: %1 with %2
if not exist %1 goto File1NotFound
if not exist %2 goto File2NotFound
fc %1 %2
if %ERRORLEVEL%==0 GOTO NoCopy
echo Files are not the same. Copying %1 over %2
copy %1 %2 /y & goto END
:NoCopy
echo Files are the same. Did nothing
goto END
:File1NotFound
echo %1 not found.
goto END
:File2NotFound
copy %1 %2 /y
goto END
:END
echo Done.

In my prebuild I would make a call to the bat file, for example:

call "$(ProjectDir)\..\..\copyifnewer.bat" "$(ProjectDir)web.config.$(ConfigurationName)" "$(ProjectDir)web.config"
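If Git later needs to notice changes to such a file again (for example, to commit an intentional edit), the flag can be reversed. A one-line sketch, reusing the example file name from the call above:

REM Undo the earlier marking so Git tracks changes to the file again.
git update-index --no-assume-unchanged Web.config.developer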
{ "language": "en", "url": "https://stackoverflow.com/questions/6009", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "102" }
Q: Default Form Button in Firefox I am building a server control that will search our db and return results. The server control contains an ASP:Panel. I have set the default button on the panel equal to my button id and have set the form default button equal to my button id. On the Panel:
MyPanel.DefaultButton = SearchButton.ID
On the Control:
Me.Page.Form.DefaultButton = SearchButton.UniqueID
Works fine in IE & Safari: I can type a search term, hit the enter key, and it searches fine. If I do it in Firefox, I get an alert box saying "Object reference not set to an instance of an object." Has anyone run across this before? A: Is SearchButton a LinkButton? If so, the JavaScript that is written to the browser doesn't work properly. Here is a good blog post explaining the issue and how to solve it: Using Panel.DefaultButton property with LinkButton control in ASP.NET A: It turned out this resolved my issue:
SearchButton.UseSubmitBehavior = False
A: I might be wrong and this might not make a difference, but have you tried:
Me.Page.Form.DefaultButton = SearchButton.ID
instead of
Me.Page.Form.DefaultButton = SearchButton.UniqueID
{ "language": "en", "url": "https://stackoverflow.com/questions/6076", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7" }
Q: What to use for login ID? We are in the early design stages of a major rewrite of our product. Right now our customers are mostly businesses. We manage accounts. User names for an account are each in their own namespace, but this means that we can't move assets between servers. We want to move to a single namespace. But that brings the problem of unique user names. So what's the best idea?
*Email address (w/ verification)?
*Unique alpha-numeric string ("johnsmith9234")?
*Should we look at OpenID?
A: OpenID is very slick, and something you should seriously consider, as it basically removes the requirement to save local usernames and passwords and worry about authentication. A lot of sites nowadays are using both OpenID and their own, giving users the option. If you do decide to roll your own, I'd recommend using the email address. Be careful, though, if you are creating something that groups users by an account (say, a company that has several users). In this case, the email address might be used more than once (if they do work for more than one company, for example), and you should allow that. HTH! A: EMAIL ADDRESS
Rationale
*Users don't change emails very often
*Removes the step of asking for username and email address, which you'll need anyway
*Users don't often forget their email address (see number one)
*Email will be unique unless the user already registered for the site, in which case forward them to a "forgot your password" screen
*Almost everyone is using email as the primary login for access to a website, which means the rate of adoption shouldn't be affected by the fact that you're asking for an email address
Update
After registration, be sure to ask the user to create some kind of username; don't litter a public site with their email address! Also, another benefit of using an email address as a login: you won't need any other information (like password / password confirm), just send them a temp password through email, or forgo passwords altogether and send them a one-use URL to their email address every time they'd like to log in (see: mugshot.org). A: I like OpenID, but I'd still go with the email address, unless your user community is very technically savvy. It's still much easier for most people to understand and remember. A: If you use an email address for ID, don't require that it be verified. I learned the hard way about this when one day the number of signups at my site suddenly and drastically decreased. It turns out that the entire range of IP addresses including my site's IP was blacklisted. It took a long time to resolve it. In other cases, I have seen Gmail marking very legitimate emails as spam, and that can cause trouble too. It's good to verify the email address, but don't make it block signups. A: Right now our customers are mostly businesses. People seem to be missing that line. If it's for a business, requiring them to log in via OpenID really isn't very practical. They'd either have to use an external OpenID provider, or their poor tech people would have to set up and configure a company OpenID. If this were "should StackOverflow require OpenID for login" or "Should my blog-comment-system allow you to identify yourself via OpenID", my answer would be "absolutely!", but in this case, I don't think OpenID would be a good fit. A: I personally would say email w/ verification. OpenID is a great idea, but I find that finding a provider that you're already with is a pain; I only had an OpenID for here because, just 2 days before beta, I decided to start a blog on Blogspot.
But everyone on the internet has an email address, and especially when dealing with businesses, people aren't very apt to use their personal blog or whatnot for a business login. A: I think that OpenID is definitely worth looking at. Besides giving you a framework in which to provide a unified ID for customers, it can also provide large businesses with the ability to manage their own logins and provide a common login across all products that they use, including your own. This isn't that large of a benefit now, when OpenID is still relatively rare, but as more products begin to use it, I suspect that the ability to use a common company OpenID login for each employee could become a good selling point. Since you're mostly catering to businesses, I don't think that it's all that unreasonable to offer to host the OpenID accounts yourself. I just think that the extra flexibility will benefit your customers. A: If your customers are mostly businesses, then I think that using anything other than email creates problems for them. Most people are comfortable with an email address login, and since they are business customers, they will likely want to use their work email rather than a personal account. OpenID creates a situation where there is a third party involved, and many businesses don't like having a third party involved. A: If you are looking at OpenID you should check out http://eaut.org/ and http://emailtoid.net. Basically you can accept email addresses for a login and behind the scenes translate them to OpenID without the user having to know anything. It's pretty slick stuff... A: OpenID seems to be a very good alternative to writing your own user management/authentication piece. I'm seeing more and more sites using OpenID these days, so the barrier to entry for your users should be relatively low.
{ "language": "en", "url": "https://stackoverflow.com/questions/6080", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "17" }
Q: Gathering OS and tool version numbers for build archive purposes Our automated build machine needs to archive the version numbers of the OS plus various tools used during each build. (In case we ever need to replicate exactly the same build later on, perhaps when the machine is long dead.) I see the command "msinfo32.exe" can be used to dump a whole load of system version information, which we might as well archive. Does anyone know of a way to easily archive the version numbers of the Visual Studio tools? What mechanisms do other developers use to gather this kind of information for archive purposes? Extra information for Fabio Gomes. I agree with you that in 5 years' time it'll probably be impossible to recreate the exact OS and tool configuration (down to the nearest security update). Unfortunately this really comes from a contractual requirement. As part of our deliverable to a customer we must provide a copy of all source code and clear instructions on exactly how to replicate the build. It's probably impossible for us to meet this requirement perfectly. So - I'll just mark your answer as correct (I agree with you that it's practically impossible), and get on with playing with the rest of Stack Overflow. :) PS. It would be really great if Stack Overflow supported replies to answers instead of having to edit the original question. But I see it has already been denied. A: If you are building on the command line, you could tell it to be verbose and capture all of the output to a text file for archiving with each build, e.g.:
msbuild <build_file> > myfile.txt
A: Sorry, but what could lead to a need to replicate the exact same build in the future? In my experience, either you keep your product installers safe or start a new build from scratch. Also, IMO the only way to replicate the exact same build in the future is to run your build machine on a Virtual Machine and keep the VM backup around. I think that most of the software you would need to replicate the exact same build in the future will not be available anymore, so you would need to keep a copy of every software version you install on this machine. Could you be more specific about the problem you are trying to solve? A: An alternative suggestion: put the relevant tools (compilers, system headers and libraries, etc.) in your repository itself, rather than expecting them to be locally installed. See also a blog post I wrote on this topic. I have Visual Studio, gcc, etc. all checked into my Subversion repository.
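Whichever route is taken, here is a rough batch sketch of the kind of archival step the build machine could run after each build; the report file names are made up, and the tools captured (MSBuild, cl) are only examples of what might be installed on the box:

@echo off
REM Collect OS and tool version information into a folder that gets
REM archived alongside the build output. All file names are examples.
set ARCHIVE=%1
if "%ARCHIVE%"=="" set ARCHIVE=build-environment
if not exist "%ARCHIVE%" mkdir "%ARCHIVE%"
REM Full system inventory (hardware, OS, hotfixes, installed software)
msinfo32 /report "%ARCHIVE%\msinfo32.txt"
REM Quicker-to-diff OS summary
systeminfo > "%ARCHIVE%\systeminfo.txt"
REM Tool versions; cl prints its version banner to stderr when run with no input
msbuild /version > "%ARCHIVE%\msbuild-version.txt"
cl 2> "%ARCHIVE%\cl-version.txt"

The resulting folder can then be zipped into the build archive alongside the binaries.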
{ "language": "en", "url": "https://stackoverflow.com/questions/6085", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6" }