[{"Question":"I am about to build a piece of a project that will need to construct and post an XML document to a web service and I'd like to do it in Python, as a means to expand my skills in it. \nUnfortunately, whilst I know the XML model fairly well in .NET, I'm uncertain what the pros and cons are of the XML models in Python. \nAnyone have experience doing XML processing in Python? Where would you suggest I start? The XML files I'll be building will be fairly simple.","AnswerCount":12,"Available Count":6,"Score":1.0,"is_accepted":false,"ViewCount":9979,"Q_Id":337,"Users Score":6,"Answer":"I write a SOAP server that receives XML requests and creates XML responses. (Unfortunately, it's not my project, so it's closed source, but that's another problem).\nIt turned out for me that creating (SOAP) XML documents is fairly simple if you have a data structure that \"fits\" the schema.\nI keep the envelope since the response envelope is (almost) the same as the request envelope. Then, since my data structure is a (possibly nested) dictionary, I create a string that turns this dictionary into value<\/key> items. \nThis is a task that recursion makes simple, and I end up with the right structure. This is all done in python code and is currently fast enough for production use.\nYou can also (relatively) easily build lists as well, although depending upon your client, you may hit problems unless you give length hints.\nFor me, this was much simpler, since a dictionary is a much easier way of working than some custom class. 
For the record, generating XML is much easier than parsing!","Q_Score":82,"Tags":"python,xml","A_Id":635,"CreationDate":"2008-08-02T03:35:00.000","Title":"XML Processing in Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am about to build a piece of a project that will need to construct and post an XML document to a web service and I'd like to do it in Python, as a means to expand my skills in it. \nUnfortunately, whilst I know the XML model fairly well in .NET, I'm uncertain what the pros and cons are of the XML models in Python. \nAnyone have experience doing XML processing in Python? Where would you suggest I start? The XML files I'll be building will be fairly simple.","AnswerCount":12,"Available Count":6,"Score":1.0,"is_accepted":false,"ViewCount":9979,"Q_Id":337,"Users Score":8,"Answer":"I've used ElementTree for several projects and recommend it. \nIt's Pythonic, comes 'in the box' with Python 2.5, including the C version cElementTree (xml.etree.cElementTree), which is 20 times faster than the pure Python version, and is very easy to use.\nlxml has some performance advantages, but they are uneven and you should check the benchmarks first for your use case.\nAs I understand it, ElementTree code can easily be ported to lxml.","Q_Score":82,"Tags":"python,xml","A_Id":123307,"CreationDate":"2008-08-02T03:35:00.000","Title":"XML Processing in Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am about to build a piece of a project that will need to construct and post an XML document to a web service and I'd like to do it in Python, as a means to expand my skills in it. 
\nUnfortunately, whilst I know the XML model fairly well in .NET, I'm uncertain what the pros and cons are of the XML models in Python. \nAnyone have experience doing XML processing in Python? Where would you suggest I start? The XML files I'll be building will be fairly simple.","AnswerCount":12,"Available Count":6,"Score":1.0,"is_accepted":false,"ViewCount":9979,"Q_Id":337,"Users Score":8,"Answer":"It depends a bit on how complicated the document needs to be.\nI've used minidom a lot for writing XML, but that's usually been just reading documents, making some simple transformations, and writing them back out. That worked well enough until I needed the ability to order element attributes (to satisfy an ancient application that doesn't parse XML properly). At that point I gave up and wrote the XML myself.\nIf you're only working on simple documents, then doing it yourself can be quicker and simpler than learning a framework. If you can conceivably write the XML by hand, then you can probably code it by hand as well (just remember to properly escape special characters, and use str.encode(codec, errors=\"xmlcharrefreplace\")). Apart from these snafus, XML is regular enough that you don't need a special library to write it. If the document is too complicated to write by hand, then you should probably look into one of the frameworks already mentioned. At no point should you need to write a general XML writer.","Q_Score":82,"Tags":"python,xml","A_Id":202259,"CreationDate":"2008-08-02T03:35:00.000","Title":"XML Processing in Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am about to build a piece of a project that will need to construct and post an XML document to a web service and I'd like to do it in Python, as a means to expand my skills in it. 
\nUnfortunately, whilst I know the XML model fairly well in .NET, I'm uncertain what the pros and cons are of the XML models in Python. \nAnyone have experience doing XML processing in Python? Where would you suggest I start? The XML files I'll be building will be fairly simple.","AnswerCount":12,"Available Count":6,"Score":0.0333209931,"is_accepted":false,"ViewCount":9979,"Q_Id":337,"Users Score":2,"Answer":"I assume that the .NET way of processing XML builds on some version of MSXML and in that case I assume that using, for example, minidom would make you feel somewhat at home. However, if it is simple processing you are doing, any library will probably do.\nI also prefer working with ElementTree when dealing with XML in Python because it is a very neat library.","Q_Score":82,"Tags":"python,xml","A_Id":69772,"CreationDate":"2008-08-02T03:35:00.000","Title":"XML Processing in Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am about to build a piece of a project that will need to construct and post an XML document to a web service and I'd like to do it in Python, as a means to expand my skills in it. \nUnfortunately, whilst I know the XML model fairly well in .NET, I'm uncertain what the pros and cons are of the XML models in Python. \nAnyone have experience doing XML processing in Python? Where would you suggest I start? The XML files I'll be building will be fairly simple.","AnswerCount":12,"Available Count":6,"Score":0.049958375,"is_accepted":false,"ViewCount":9979,"Q_Id":337,"Users Score":3,"Answer":"I strongly recommend SAX - Simple API for XML - implementation in the Python libraries. 
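A minimal event-driven sketch using the standard xml.sax module (TagCounter and count_tags are my own made-up names, not part of any library):

```python
import xml.sax

class TagCounter(xml.sax.ContentHandler):
    # The parser calls startElement for each opening tag as it streams
    # through the input, so the whole tree never has to fit in memory.
    def __init__(self):
        super().__init__()
        self.counts = {}

    def startElement(self, name, attrs):
        self.counts[name] = self.counts.get(name, 0) + 1

def count_tags(data):
    handler = TagCounter()
    xml.sax.parseString(data, handler)
    return handler.counts
```

Here count_tags(b'<root><item>a</item><item>b</item></root>') returns {'root': 1, 'item': 2}, and memory use stays flat regardless of document size.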
They are fairly easy to set up and can process large XML documents via an event-driven API, as discussed by previous posters here, and have a low memory footprint, unlike validating DOM-style XML parsers.","Q_Score":82,"Tags":"python,xml","A_Id":13832269,"CreationDate":"2008-08-02T03:35:00.000","Title":"XML Processing in Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am about to build a piece of a project that will need to construct and post an XML document to a web service and I'd like to do it in Python, as a means to expand my skills in it. \nUnfortunately, whilst I know the XML model fairly well in .NET, I'm uncertain what the pros and cons are of the XML models in Python. \nAnyone have experience doing XML processing in Python? Where would you suggest I start? The XML files I'll be building will be fairly simple.","AnswerCount":12,"Available Count":6,"Score":1.0,"is_accepted":false,"ViewCount":9979,"Q_Id":337,"Users Score":8,"Answer":"There are 3 major ways of dealing with XML, in general: dom, sax, and xpath. The dom model is good if you can afford to load your entire xml file into memory at once, you don't mind dealing with data structures, and you are looking at much\/most of the model. The sax model is great if you only care about a few tags, and\/or you are dealing with big files and can process them sequentially. The xpath model is a little bit of each -- you can pick and choose paths to the data elements you need, but it requires more libraries to use.\nIf you want something straightforward that's packaged with Python, minidom is your answer, but it's pretty lame, and the documentation is \"here's docs on dom, go figure it out\". 
It's really annoying.\nPersonally, I like cElementTree, which is a faster (C-based) implementation of ElementTree, which is a dom-like model.\nI've used sax systems, and in many ways they're more \"pythonic\" in their feel, but I usually end up creating state-based systems to handle them, and that way lies madness (and bugs).\nI say go with minidom if you like research, or ElementTree if you want good code that works well.","Q_Score":82,"Tags":"python,xml","A_Id":69410,"CreationDate":"2008-08-02T03:35:00.000","Title":"XML Processing in Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"What are the libraries that support XPath? Is there a full implementation? How is the library used? Where is its website?","AnswerCount":11,"Available Count":1,"Score":1.0,"is_accepted":false,"ViewCount":338237,"Q_Id":8692,"Users Score":40,"Answer":"Use LXML. LXML uses the full power of libxml2 and libxslt, but wraps them in more \"Pythonic\" bindings than the Python bindings that are native to those libraries. As such, it gets the full XPath 1.0 implementation. Native ElementTree supports a limited subset of XPath, although it may be good enough for your needs.","Q_Score":245,"Tags":"python,xml,dom,xpath,python-2.x","A_Id":1732475,"CreationDate":"2008-08-12T11:28:00.000","Title":"How to use XPath in Python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a small utility that I use to download an MP3 file from a website on a schedule and then build\/update a podcast XML file which I've added to iTunes.\nThe text processing that creates\/updates the XML file is written in Python. 
However, I use wget inside a Windows .bat file to download the actual MP3 file. I would prefer to have the entire utility written in Python.\nI struggled to find a way to actually download the file in Python, which is why I resorted to using wget.\nSo, how do I download the file using Python?","AnswerCount":27,"Available Count":1,"Score":1.0,"is_accepted":false,"ViewCount":1341778,"Q_Id":22676,"Users Score":20,"Answer":"The following are the most commonly used calls for downloading files in Python:\n\nurllib.urlretrieve('url_to_file', file_name)\nurllib2.urlopen('url_to_file')\nrequests.get(url)\nwget.download('url', file_name)\n\nNote: urlopen and urlretrieve perform relatively badly when downloading large files (size > 500 MB). requests.get stores the file in memory until the download is complete.","Q_Score":1032,"Tags":"python,http,urllib","A_Id":39573536,"CreationDate":"2008-08-22T15:34:00.000","Title":"How to download a file over HTTP?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"What's the best way to specify a proxy with username and password for an http connection in python?","AnswerCount":6,"Available Count":1,"Score":1.0,"is_accepted":false,"ViewCount":95434,"Q_Id":34079,"Users Score":15,"Answer":"Set an environment variable named http_proxy like this: http:\/\/username:password@proxy_url:port","Q_Score":58,"Tags":"python,http,proxy","A_Id":3942980,"CreationDate":"2008-08-29T06:55:00.000","Title":"How to specify an authenticated proxy for a python http connection?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"How can I retrieve the page title of a webpage (title html tag) using 
Python?","AnswerCount":12,"Available Count":1,"Score":0.0333209931,"is_accepted":false,"ViewCount":103114,"Q_Id":51233,"Users Score":2,"Answer":"soup.title.string actually returns a unicode string.\nTo convert that into normal string, you need to do\nstring=string.encode('ascii','ignore')","Q_Score":86,"Tags":"python,html","A_Id":17123979,"CreationDate":"2008-09-09T04:38:00.000","Title":"How can I retrieve the page title of a webpage using Python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"When I call socket.getsockname() on a socket object, it returns a tuple of my machine's internal IP and the port. However, I would like to retrieve my external IP. What's the cheapest, most efficient manner of doing this?","AnswerCount":9,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":19360,"Q_Id":58294,"Users Score":9,"Answer":"This isn't possible without cooperation from an external server, because there could be any number of NATs between you and the other computer. If it's a custom protocol, you could ask the other system to report what address it's connected to.","Q_Score":10,"Tags":"python,sockets","A_Id":58296,"CreationDate":"2008-09-12T04:21:00.000","Title":"How do I get the external IP of a socket in Python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"When I call socket.getsockname() on a socket object, it returns a tuple of my machine's internal IP and the port. However, I would like to retrieve my external IP. 
What's the cheapest, most efficient manner of doing this?","AnswerCount":9,"Available Count":2,"Score":0.0444152037,"is_accepted":false,"ViewCount":19360,"Q_Id":58294,"Users Score":2,"Answer":"import socket\ns = socket.socket(socket.AF_INET, socket.SOCK_STREAM)\ns.connect((\"msn.com\", 80))\ns.getsockname()","Q_Score":10,"Tags":"python,sockets","A_Id":256358,"CreationDate":"2008-09-12T04:21:00.000","Title":"How do I get the external IP of a socket in Python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Been scouring the net for something like firewatir but for python. I'm trying to automate firefox on linux. Any suggestions?","AnswerCount":8,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":21896,"Q_Id":60152,"Users Score":0,"Answer":"The language of choice for Firefox is JavaScript. Unless you have a specific requirement that requires Python, I would advise you to use that.","Q_Score":12,"Tags":"python,linux,firefox,ubuntu,automation","A_Id":60218,"CreationDate":"2008-09-12T23:28:00.000","Title":"Automate firefox with python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Been scouring the net for something like firewatir but for python. I'm trying to automate firefox on linux. 
Any suggestions?","AnswerCount":8,"Available Count":2,"Score":0.024994793,"is_accepted":false,"ViewCount":21896,"Q_Id":60152,"Users Score":1,"Answer":"I would suggest you to use Selenium instead of Mechanize\/Twill because Mechanize would fail while handling Javascript.","Q_Score":12,"Tags":"python,linux,firefox,ubuntu,automation","A_Id":7610441,"CreationDate":"2008-09-12T23:28:00.000","Title":"Automate firefox with python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm looking for a good server\/client protocol supported in Python for making data requests\/file transfers between one server and many clients. Security is also an issue - so secure login would be a plus. I've been looking into XML-RPC, but it looks to be a pretty old (and possibly unused these days?) protocol.","AnswerCount":11,"Available Count":2,"Score":0.0181798149,"is_accepted":false,"ViewCount":8856,"Q_Id":64426,"Users Score":1,"Answer":"There is no need to use HTTP (indeed, HTTP is not good for RPC in general in some respects), and no need to use a standards-based protocol if you're talking about a python client talking to a python server.\nUse a Python-specific RPC library such as Pyro, or what Twisted provides (Twisted.spread).","Q_Score":9,"Tags":"python,client","A_Id":256833,"CreationDate":"2008-09-15T16:27:00.000","Title":"Best Python supported server\/client protocol?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm looking for a good server\/client protocol supported in Python for making data requests\/file transfers between one server and many clients. Security is also an issue - so secure login would be a plus. 
I've been looking into XML-RPC, but it looks to be a pretty old (and possibly unused these days?) protocol.","AnswerCount":11,"Available Count":2,"Score":0.072599319,"is_accepted":false,"ViewCount":8856,"Q_Id":64426,"Users Score":4,"Answer":"I suggest you look at 1. XMLRPC 2. JSONRPC 3. SOAP 4. REST\/ATOM\nXMLRPC is a valid choice. Don't worry that it is too old. That is not a problem. It is so simple that little has needed changing since the original specification. The pro is that in every programming language I know there is a library for a client to be written in. Certainly for Python. I made it work with mod_python and had no problem at all.\nThe big problem with it is its verbosity. For simple values there is a lot of XML overhead. You can gzip it of course, but then you lose some debugging ability with tools like Fiddler.\nMy personal preference is JSONRPC. It has all of the XMLRPC advantages and it is very compact. Further, JavaScript clients can \"eval\" it so no parsing is necessary. Most of them are built for version 1.0 of the standard. I have seen diverse attempts to improve on it, called 1.1, 1.2 and 2.0, but they are not built one on top of another and, to my knowledge, are not widely supported yet. 2.0 looks the best, but I would still stick with 1.0 for now (October 2008).\nThe third candidate would be REST\/ATOM. REST is a principle, and ATOM is how you convey the bulk of the data when you need to, for POST and PUT requests and GET responses.\nFor a very nice implementation of it, look at GData, Google's API. Real, real nice.\nSOAP is old, and lots and lots of libraries \/ languages support it. 
It is heavy and complicated, but if your primary clients are .NET or Java, it might be worth the bother.\nVisual Studio would import your WSDL file and create a wrapper, and to a C# programmer it would look like a local assembly indeed.\nThe nice thing about all this is that if you architect your solution right, existing libraries for Python would allow you to support more than one with almost no overhead. XMLRPC and JSONRPC are an especially good match.\nRegarding authentication: XMLRPC and JSONRPC don't bother defining one. It is independent of the serialization. So you can implement Basic Authentication, Digest Authentication or your own with any of those. I have seen a couple of examples of client-side Digest Authentication for Python, but have yet to see a server-based one. If you use Apache, you might not need one, using the mod_auth_digest Apache module instead. This depends on the nature of your application.\nTransport security: it is obviously SSL (HTTPS). I can't currently remember how XMLRPC deals with it, but with the JSONRPC implementation that I have it is trivial - you merely change http to https in your URLs to JSONRPC and it will go over an SSL-enabled transport.","Q_Score":9,"Tags":"python,client","A_Id":256826,"CreationDate":"2008-09-15T16:27:00.000","Title":"Best Python supported server\/client protocol?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"When trying to use libxml2 as myself I get an error saying the package cannot be found. If I run as a super user I am able to import fine.\nI have installed python25 and all libxml2 and libxml2-py25 related libraries via fink and own the entire path including the library. 
Any ideas why I'd still need to sudo?","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":225,"Q_Id":68541,"Users Score":0,"Answer":"I would suspect the permissions on the library. Can you do a strace or similar to find out the filenames it's looking for, and then check the permissions on them?","Q_Score":0,"Tags":"python,macos,libxml2","A_Id":70895,"CreationDate":"2008-09-16T01:27:00.000","Title":"libxml2-p25 on OS X 10.5 needs sudo?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"When trying to use libxml2 as myself I get an error saying the package cannot be found. If I run as a super user I am able to import fine.\nI have installed python25 and all libxml2 and libxml2-py25 related libraries via fink and own the entire path including the library. Any ideas why I'd still need to sudo?","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":225,"Q_Id":68541,"Users Score":0,"Answer":"The PATH environment variable was the mistake.","Q_Score":0,"Tags":"python,macos,libxml2","A_Id":77114,"CreationDate":"2008-09-16T01:27:00.000","Title":"libxml2-p25 on OS X 10.5 needs sudo?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"My university doesn't support the POST cgi method (I know, it's crazy), and I was hoping to be able to have a system where a user can have a username and password and log in securely. Is this even possible?\nIf it's not, how would you do it with POST? 
Just out of curiosity.\nCheers!","AnswerCount":6,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":2893,"Q_Id":69979,"Users Score":0,"Answer":"With a bit of JavaScript, you could have the client hash the entered password and a server-generated nonce, and use that in an HTTP GET.","Q_Score":1,"Tags":"python,authentication,cgi","A_Id":70003,"CreationDate":"2008-09-16T07:07:00.000","Title":"Can I implement a web user authentication system in python without POST?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"My university doesn't support the POST cgi method (I know, it's crazy), and I was hoping to be able to have a system where a user can have a username and password and log in securely. Is this even possible?\nIf it's not, how would you do it with POST? Just out of curiosity.\nCheers!","AnswerCount":6,"Available Count":3,"Score":1.2,"is_accepted":true,"ViewCount":2893,"Q_Id":69979,"Users Score":5,"Answer":"You can actually do it all with GET methods. However, you'll want to use a full challenge-response protocol for the logins. (You can hash on the client side using JavaScript. You just need to send out a unique challenge each time.) You'll also want to use SSL to ensure that no one can see the strings as they go across.\nIn some senses there's no real security difference between GET and POST requests, as they both go across in plaintext; in other senses and in practice... GET requests are a hell of a lot easier to intercept and are all over most people's logs and your web browser's history. :)\n(Or as suggested by the other posters, use a different method entirely, like HTTP auth, digest auth or some higher-level authentication scheme like AD, LDAP, Kerberos or shib. 
However, I kinda assumed that if you didn't have POST you wouldn't have these either.)","Q_Score":1,"Tags":"python,authentication,cgi","A_Id":69995,"CreationDate":"2008-09-16T07:07:00.000","Title":"Can I implement a web user authentication system in python without POST?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"My university doesn't support the POST cgi method (I know, it's crazy), and I was hoping to be able to have a system where a user can have a username and password and log in securely. Is this even possible?\nIf it's not, how would you do it with POST? Just out of curiosity.\nCheers!","AnswerCount":6,"Available Count":3,"Score":0.0333209931,"is_accepted":false,"ViewCount":2893,"Q_Id":69979,"Users Score":1,"Answer":"You could use HTTP Authentication, if supported.\nYou'd have to add SSL, as all methods - POST, GET and HTTP Auth (well, except Digest HTTP authentication) - send plaintext.\nGET is basically just like POST; it just has a limit on the amount of data you can send, which is usually a lot smaller than POST, and a semantic difference which makes GET not a good candidate from that point of view, even if technically they both can do it.\nAs for examples, what are you using? There are many choices in Python, like the cgi module or some framework like Django, CherryPy, and so on.","Q_Score":1,"Tags":"python,authentication,cgi","A_Id":69989,"CreationDate":"2008-09-16T07:07:00.000","Title":"Can I implement a web user authentication system in python without POST?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'd like to search for a given MAC address on my network, all from within a Python script. 
I already have a map of all the active IP addresses in the network but I cannot figure out how to glean the MAC address. Any ideas?","AnswerCount":8,"Available Count":4,"Score":0.024994793,"is_accepted":false,"ViewCount":16340,"Q_Id":85577,"Users Score":1,"Answer":"I don't think there is a built in way to get it from Python itself. \nMy question is, how are you getting the IP information from your network?\nTo get it from your local machine you could parse ifconfig (unix) or ipconfig (windows) with little difficulty.","Q_Score":9,"Tags":"python,network-programming","A_Id":85608,"CreationDate":"2008-09-17T17:23:00.000","Title":"Search for host with MAC-address using Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'd like to search for a given MAC address on my network, all from within a Python script. I already have a map of all the active IP addresses in the network but I cannot figure out how to glean the MAC address. 
Any ideas?","AnswerCount":8,"Available Count":4,"Score":0.0,"is_accepted":false,"ViewCount":16340,"Q_Id":85577,"Users Score":0,"Answer":"You would want to parse the output of 'arp', but the kernel ARP cache will only contain those IP address(es) if those hosts have communicated with the host where the Python script is running.\nifconfig can be used to display the MAC addresses of local interfaces, but not those on the LAN.","Q_Score":9,"Tags":"python,network-programming","A_Id":85641,"CreationDate":"2008-09-17T17:23:00.000","Title":"Search for host with MAC-address using Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'd like to search for a given MAC address on my network, all from within a Python script. I already have a map of all the active IP addresses in the network but I cannot figure out how to glean the MAC address. Any ideas?","AnswerCount":8,"Available Count":4,"Score":0.024994793,"is_accepted":false,"ViewCount":16340,"Q_Id":85577,"Users Score":1,"Answer":"It seems that there is not a native way of doing this with Python. Your best bet would be to parse the output of \"ipconfig \/all\" on Windows, or \"ifconfig\" on Linux. Consider using os.popen() with some regexps.","Q_Score":9,"Tags":"python,network-programming","A_Id":85634,"CreationDate":"2008-09-17T17:23:00.000","Title":"Search for host with MAC-address using Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'd like to search for a given MAC address on my network, all from within a Python script. I already have a map of all the active IP addresses in the network but I cannot figure out how to glean the MAC address. 
Any ideas?","AnswerCount":8,"Available Count":4,"Score":0.0,"is_accepted":false,"ViewCount":16340,"Q_Id":85577,"Users Score":0,"Answer":"Depends on your platform. If you're using *nix, you can use the 'arp' command to look up the mac address for a given IP (assuming IPv4) address. If that doesn't work, you could ping the address and then look, or if you have access to the raw network (using BPF or some other mechanism), you could send your own ARP packets (but that is probably overkill).","Q_Score":9,"Tags":"python,network-programming","A_Id":85620,"CreationDate":"2008-09-17T17:23:00.000","Title":"Search for host with MAC-address using Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"Basically, something similar to System.Xml.XmlWriter - A streaming XML Writer that doesn't incur much of a memory overhead. So that rules out xml.dom and xml.dom.minidom. Suggestions?","AnswerCount":6,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":2617,"Q_Id":93710,"Users Score":-4,"Answer":"xml.etree.cElementTree, included in the default distribution of CPython since 2.5. Lightning fast for both reading and writing XML.","Q_Score":16,"Tags":"python,xml,streaming","A_Id":93850,"CreationDate":"2008-09-18T15:42:00.000","Title":"What's the easiest non-memory intensive way to output XML from Python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"What I'm trying to do here is get the headers of a given URL so I can determine the MIME type. I want to be able to see if http:\/\/somedomain\/foo\/ will return an HTML document or a JPEG image for example. 
Thus, I need to figure out how to send a HEAD request so that I can read the MIME type without having to download the content. Does anyone know of an easy way of doing this?","AnswerCount":11,"Available Count":2,"Score":0.0181798149,"is_accepted":false,"ViewCount":72799,"Q_Id":107405,"Users Score":1,"Answer":"As an aside, when using httplib (at least on 2.5.2), trying to read the response of a HEAD request will block (on readline) and subsequently fail. If you do not issue a read on the response, you are unable to send another request on the same connection; you will need to open a new one, or accept a long delay between requests.","Q_Score":117,"Tags":"python,python-2.7,http,http-headers,content-type","A_Id":779985,"CreationDate":"2008-09-20T06:38:00.000","Title":"How do you send a HEAD HTTP request in Python 2?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"What I'm trying to do here is get the headers of a given URL so I can determine the MIME type. I want to be able to see if http:\/\/somedomain\/foo\/ will return an HTML document or a JPEG image for example. Thus, I need to figure out how to send a HEAD request so that I can read the MIME type without having to download the content. Does anyone know of an easy way of doing this?","AnswerCount":11,"Available Count":2,"Score":0.0181798149,"is_accepted":false,"ViewCount":72799,"Q_Id":107405,"Users Score":1,"Answer":"I have found that httplib is slightly faster than urllib2. I timed two programs - one using httplib and the other using urllib2 - sending HEAD requests to 10,000 URLs. The httplib one was faster by several minutes. 
httplib's total stats were: real 6m21.334s\n user 0m2.124s\n sys 0m16.372s\nAnd urllib2's total stats were: real 9m1.380s\n user 0m16.666s\n sys 0m28.565s\nDoes anybody else have input on this?","Q_Score":117,"Tags":"python,python-2.7,http,http-headers,content-type","A_Id":2630687,"CreationDate":"2008-09-20T06:38:00.000","Title":"How do you send a HEAD HTTP request in Python 2?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I need to upload some data to a server using HTTP PUT in python. From my brief reading of the urllib2 docs, it only does HTTP POST. Is there any way to do an HTTP PUT in python?","AnswerCount":14,"Available Count":1,"Score":1.0,"is_accepted":false,"ViewCount":174987,"Q_Id":111945,"Users Score":8,"Answer":"I needed to solve this problem too a while back so that I could act as a client for a RESTful API. I settled on httplib2 because it allowed me to send PUT and DELETE in addition to GET and POST. Httplib2 is not part of the standard library but you can easily get it from the cheese shop.","Q_Score":225,"Tags":"python,http,put","A_Id":114648,"CreationDate":"2008-09-21T20:11:00.000","Title":"Is there any way to do HTTP PUT in python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is there any python module for rendering a HTML page with javascript and get back a DOM object?\nI want to parse a page which generates almost all of its content using javascript.","AnswerCount":2,"Available Count":1,"Score":1.0,"is_accepted":false,"ViewCount":34652,"Q_Id":126131,"Users Score":8,"Answer":"The big complication here is emulating the full browser environment outside of a browser. 
You can use standalone JavaScript interpreters like Rhino and SpiderMonkey to run JavaScript code, but they don't provide a complete browser-like environment to fully render a web page.\nIf I needed to solve a problem like this, I would first look at how the JavaScript renders the page; it's quite possible it's fetching data via AJAX and using that to render the page. I could then use Python libraries like simplejson and httplib2 to directly fetch the data and use that, negating the need to access the DOM object. However, that's only one possible situation; I don't know the exact problem you are solving.\nOther options include the Selenium one mentioned by \u0141ukasz, some kind of embedded WebKit craziness, some kind of IE win32 scripting craziness or, finally, a pyxpcom-based solution (with added craziness). All of these have the drawback of requiring pretty much a fully running web browser for Python to play with, which might not be an option depending on your environment.","Q_Score":18,"Tags":"javascript,python,html","A_Id":126250,"CreationDate":"2008-09-24T09:05:00.000","Title":"Python library for rendering HTML and javascript","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I've added cookie support to SOAPpy by overriding HTTPTransport. I need functionality beyond that of SOAPpy, so I was planning on moving to ZSI, but I can't figure out how to put the Cookies on the ZSI posts made to the service. Without these cookies, the server will think it is an unauthorized request and it will fail.\nHow can I add cookies from a Python CookieJar to ZSI requests?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":526,"Q_Id":139212,"Users Score":0,"Answer":"Additionally, the Binding class also allows any header to be added. 
So I figured out that I can just add a \"Cookie\" header for each cookie I need to add. This worked well for the code generated by wsdl2py, just adding the cookies right after the binding is formed in the SOAP client class. Adding a parameter to the generated class to take in the cookies as a dictionary is easy and then they can easily be iterated through and added.","Q_Score":2,"Tags":"python,web-services,cookies,soappy,zsi","A_Id":148379,"CreationDate":"2008-09-26T12:45:00.000","Title":"Adding Cookie to ZSI Posts","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to redirect\/forward a Pylons request. The problem with using redirect_to is that form data gets dropped. I need to keep the POST form data intact as well as all request headers.\nIs there a simple way to do this?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1415,"Q_Id":153773,"Users Score":2,"Answer":"Receiving data from a POST depends on the web browser sending data along. When the web browser receives a redirect, it does not resend that data along. One solution would be to URL encode the data you want to keep and use that with a GET. 
In the worst case, you could always add the data you want to keep to the session and pass it that way.","Q_Score":4,"Tags":"python,post,request,header,pylons","A_Id":153822,"CreationDate":"2008-09-30T16:11:00.000","Title":"What is the preferred way to redirect a request in Pylons without losing form data?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've discovered that cElementTree is about 30 times faster than xml.dom.minidom and I'm rewriting my XML encoding\/decoding code. However, I need to output XML that contains CDATA sections and there doesn't seem to be a way to do that with ElementTree.\nCan it be done?","AnswerCount":15,"Available Count":1,"Score":0.0133325433,"is_accepted":false,"ViewCount":52719,"Q_Id":174890,"Users Score":1,"Answer":"The DOM has (at least in Level 2) an interface CDATASection, and an operation Document::createCDATASection. They are extension interfaces, supported only if an implementation supports the \"xml\" feature.\nfrom xml.dom import minidom\nmy_xmldoc = minidom.parse(xmlfile)\ncdata_node = my_xmldoc.createCDATASection(data)\nNow you have a CDATA node; add it wherever you want.","Q_Score":46,"Tags":"python,xml","A_Id":510324,"CreationDate":"2008-10-06T15:56:00.000","Title":"How to output CDATA using ElementTree","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"If I have no connection to internet, does that mean I can't start IDLE (which comes with python 3.0)?","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":945,"Q_Id":190115,"Users Score":0,"Answer":"Not really. 
You can download the latest version of Python 3.x suitable for whichever operating system you are using, and you can load IDLE without any internet.","Q_Score":1,"Tags":"python,python-3.x,python-idle","A_Id":56122726,"CreationDate":"2008-10-10T04:19:00.000","Title":"IDLE doesn't start in python 3.0","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Python has several ways to parse XML...\nI understand the very basics of parsing with SAX. It functions as a stream parser, with an event-driven API.\nI understand the DOM parser also. It reads the XML into memory and converts it to objects that can be accessed with Python.\nGenerally speaking, it was easy to choose between the two depending on what you needed to do, memory constraints, performance, etc.\n(Hopefully I'm correct so far.)\nSince Python 2.5, we also have ElementTree. How does this compare to DOM and SAX? Which is it more similar to? Why is it better than the previous parsers?","AnswerCount":4,"Available Count":1,"Score":1.0,"is_accepted":false,"ViewCount":33129,"Q_Id":192907,"Users Score":7,"Answer":"ElementTree's parse() is like DOM, whereas iterparse() is like SAX. 
In my opinion, ElementTree is better than DOM and SAX in that it provides an API that is easier to work with.","Q_Score":80,"Tags":"python,xml,dom,sax,elementtree","A_Id":192913,"CreationDate":"2008-10-10T20:22:00.000","Title":"XML parsing - ElementTree vs SAX and DOM","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am writing a program in Python that will act as a server and accept data from a client, is it a good idea to impose a hard limit as to the amount of data, if so why?\nMore info:\nSo certain chat programs limit the amount of text one can send per send (i.e. per time user presses send) so the question comes down to is there a legit reason for this and if yes, what is it?","AnswerCount":3,"Available Count":3,"Score":0.0665680765,"is_accepted":false,"ViewCount":1350,"Q_Id":203758,"Users Score":1,"Answer":"What is your question exactly? \nWhat happens when you do receive on a socket is that the currently available data in the socket buffer is immediately returned. If you give receive (or read, I guess) a huge buffer size, such as 40000, it'll likely never return that much data at once. If you give it a tiny buffer size like 100, then it'll return the 100 bytes it has immediately and still have more available. 
Either way, you're not imposing a limit on how much data the client is sending you.","Q_Score":2,"Tags":"python,sockets","A_Id":203769,"CreationDate":"2008-10-15T04:55:00.000","Title":"Receive socket size limits good?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am writing a program in Python that will act as a server and accept data from a client, is it a good idea to impose a hard limit as to the amount of data, if so why?\nMore info:\nSo certain chat programs limit the amount of text one can send per send (i.e. per time user presses send) so the question comes down to is there a legit reason for this and if yes, what is it?","AnswerCount":3,"Available Count":3,"Score":1.2,"is_accepted":true,"ViewCount":1350,"Q_Id":203758,"Users Score":2,"Answer":"Most likely you've seen code which protects against \"extra\" incoming data. This is often due to the possibility of buffer overruns, where the extra data being copied into memory overruns the pre-allocated array and overwrites executable code with attacker code. Code written in languages like C typically has a lot of length checking to prevent this type of attack. Functions such as gets, and strcpy are replaced with their safer counterparts like fgets and strncpy which have a length argument to prevent buffer overruns.\nIf you use a dynamic language like Python, your arrays resize so they won't overflow and clobber other memory, but you still have to be careful about sanitizing foreign data.\nChat programs likely limit the size of a message for reasons such as database field size. 
If 80% of your incoming messages are 40 characters or less, 90% are 60 characters or less, and 98% are 80 characters or less, why make your message text field allow 10k characters per message?","Q_Score":2,"Tags":"python,sockets","A_Id":203933,"CreationDate":"2008-10-15T04:55:00.000","Title":"Receive socket size limits good?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am writing a program in Python that will act as a server and accept data from a client, is it a good idea to impose a hard limit as to the amount of data, if so why?\nMore info:\nSo certain chat programs limit the amount of text one can send per send (i.e. per time user presses send) so the question comes down to is there a legit reason for this and if yes, what is it?","AnswerCount":3,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":1350,"Q_Id":203758,"Users Score":0,"Answer":"I don't know what your actual application is, however, setting a hard limit on the total amount of data that a client can send could be useful in reducing your exposure to denial of service attacks, e.g. client connects and sends 100MB of data which could load your application unacceptably.\nBut it really depends on what you application is. 
Are you after a per line limit or a total per connection limit or what?","Q_Score":2,"Tags":"python,sockets","A_Id":207096,"CreationDate":"2008-10-15T04:55:00.000","Title":"Receive socket size limits good?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"How can I get a list of the IP addresses or host names from a local network easily in Python?\nIt would be best if it was multi-platform, but it needs to work on Mac OS X first, then others follow.\nEdit: By local I mean all active addresses within a local network, such as 192.168.xxx.xxx.\nSo, if the IP address of my computer (within the local network) is 192.168.1.1, and I have three other connected computers, I would want it to return the IP addresses 192.168.1.2, 192.168.1.3, 192.168.1.4, and possibly their hostnames.","AnswerCount":11,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":129230,"Q_Id":207234,"Users Score":24,"Answer":"If by \"local\" you mean on the same network segment, then you have to perform the following steps:\n\nDetermine your own IP address\nDetermine your own netmask\nDetermine the network range\nScan all the addresses (except the lowest, which is your network address and the highest, which is your broadcast address).\nUse your DNS's reverse lookup to determine the hostname for IP addresses which respond to your scan.\n\nOr you can just let Python execute nmap externally and pipe the results back into your program.","Q_Score":46,"Tags":"python,networking","A_Id":207246,"CreationDate":"2008-10-16T02:32:00.000","Title":"List of IP addresses\/hostnames from local network in Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web 
Development":0},{"Question":"I have noticed that my particular instance of Trac is not running quickly and has big lags. This is at the very onset of a project, so not much is in Trac (except for plugins and code loaded into SVN).\nSetup Info: This is via a SELinux system hosted by WebFaction. It is behind Apache, and connections are over SSL. Currently the .htpasswd file is what I use to control access.\nAre there any recommend ways to improve the performance of Trac?","AnswerCount":4,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":2997,"Q_Id":213838,"Users Score":5,"Answer":"It's hard to say without knowing more about your setup, but one easy win is to make sure that Trac is running in something like mod_python, which keeps the Python runtime in memory. Otherwise, every HTTP request will cause Python to run, import all the modules, and then finally handle the request. Using mod_python (or FastCGI, whichever you prefer) will eliminate that loading and skip straight to the good stuff.\nAlso, as your Trac database grows and you get more people using the site, you'll probably outgrow the default SQLite database. At that point, you should think about migrating the database to PostgreSQL or MySQL, because they'll be able to handle concurrent requests much faster.","Q_Score":3,"Tags":"python,performance,trac","A_Id":214162,"CreationDate":"2008-10-17T21:02:00.000","Title":"How to improve Trac's performance","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have noticed that my particular instance of Trac is not running quickly and has big lags. This is at the very onset of a project, so not much is in Trac (except for plugins and code loaded into SVN).\nSetup Info: This is via a SELinux system hosted by WebFaction. It is behind Apache, and connections are over SSL. 
Currently the .htpasswd file is what I use to control access.\nAre there any recommend ways to improve the performance of Trac?","AnswerCount":4,"Available Count":2,"Score":0.1488850336,"is_accepted":false,"ViewCount":2997,"Q_Id":213838,"Users Score":3,"Answer":"We've had the best luck with FastCGI. Another critical factor was to only use https for authentication but use http for all other traffic -- I was really surprised how much that made a difference.","Q_Score":3,"Tags":"python,performance,trac","A_Id":215084,"CreationDate":"2008-10-17T21:02:00.000","Title":"How to improve Trac's performance","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am interested in making a Google Talk client using Python and would like to use the Twisted libraries Words module. I have looked at the examples, but they don't work with the current implementation of Google Talk.\nHas anybody had any luck with this? Would you mind documenting a brief tutorial?\nAs a simple task, I'd like to create a client\/bot that tracks the Online time of my various Google Talk accounts so that I can get an aggregate number. 
I figure I could friend the bot in each account and then use the XMPP presence information to keep track of the times that I can then aggregate.\nThanks.","AnswerCount":4,"Available Count":1,"Score":-0.0996679946,"is_accepted":false,"ViewCount":8406,"Q_Id":227279,"Users Score":-2,"Answer":"As the Twisted libs seem to be out of date, you have two choices:\nImplement your own XMPP-handler or look for another library.\nI would suggest working with the raw XML; XMPP is not that complicated and you are bound to learn something.","Q_Score":17,"Tags":"python,twisted,xmpp,google-talk","A_Id":228877,"CreationDate":"2008-10-22T19:48:00.000","Title":"How do you create a simple Google Talk Client using the Twisted Words Python library?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have written a script that goes through a bunch of files and snips out a portion of the files for further processing. The script creates a new directory and creates new files for each snip that is taken out. I have to now evaluate each of the files that were created to see if it is what I needed. The script also creates an html index file with links to each of the snips. So I can click the hyperlink to see the file, make a note in a spreadsheet to indicate if the file is correct or not and then use the back button in the browser to take me back to the index list. \nI was sitting here wondering if I could somehow create a delete button in the browser next to the hyperlink. My thought is I would click the hyperlink, make a judgment about the file and if it is not one I want to keep then when I get back to the main page I just press the delete button and it is gone from the directory. \nDoes anyone have any idea if this is possible. 
I am writing this in python but clearly the issue is is there a way to create an htm file with a delete button-I would just use Python to write the commands for the deletion button.","AnswerCount":4,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":1222,"Q_Id":256021,"Users Score":0,"Answer":"You would have to write the web page in Python. There are many Python web frameworks out there (e.g. Django) that are easy to work with. You could convert your entire scripting framework to a web application that has a worker thread going and crawling through html pages, saving them to a particular location, indexing them for you to see and providing a delete button that calls the system's delete function on the particular file.","Q_Score":2,"Tags":"python,web-applications,browser","A_Id":256028,"CreationDate":"2008-11-01T19:52:00.000","Title":"Can I Use Python to Make a Delete Button in a 'web page'","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have written a script that goes through a bunch of files and snips out a portion of the files for further processing. The script creates a new directory and creates new files for each snip that is taken out. I have to now evaluate each of the files that were created to see if it is what I needed. The script also creates an html index file with links to each of the snips. So I can click the hyperlink to see the file, make a note in a spreadsheet to indicate if the file is correct or not and then use the back button in the browser to take me back to the index list. \nI was sitting here wondering if I could somehow create a delete button in the browser next to the hyperlink. 
My thought is I would click the hyperlink, make a judgment about the file and if it is not one I want to keep then when I get back to the main page I just press the delete button and it is gone from the directory. \nDoes anyone have any idea if this is possible. I am writing this in python but clearly the issue is is there a way to create an htm file with a delete button-I would just use Python to write the commands for the deletion button.","AnswerCount":4,"Available Count":2,"Score":0.049958375,"is_accepted":false,"ViewCount":1222,"Q_Id":256021,"Users Score":1,"Answer":"You could make this even simpler by making it all happen in one main page. Instead of having a list of hyperlinks, just have the main page have one frame that loads one of the autocreated pages in it. Put a couple of buttons at the bottom - a \"Keep this page\" and a \"Delete this page.\" When you click either button, the main page refreshes, this time with the next autocreated page in the frame.\nYou could make this as a cgi script in your favorite scripting language. You can't just do this in html because an html page only does stuff client-side, and you can only delete files server-side. You will probably need as cgi args the page to show in the frame, and the last page you viewed if the button click was a \"delete\".","Q_Score":2,"Tags":"python,web-applications,browser","A_Id":256040,"CreationDate":"2008-11-01T19:52:00.000","Title":"Can I Use Python to Make a Delete Button in a 'web page'","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am writing a scraper that downloads all the image files from a HTML page and saves them to a specific folder. 
all the images are the part of the HTML page.","AnswerCount":7,"Available Count":2,"Score":0.0855049882,"is_accepted":false,"ViewCount":97893,"Q_Id":257409,"Users Score":3,"Answer":"Use htmllib to extract all img tags (override do_img), then use urllib2 to download all the images.","Q_Score":47,"Tags":"python,screen-scraping","A_Id":257413,"CreationDate":"2008-11-02T21:31:00.000","Title":"Download image file from the HTML page source using python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am writing a scraper that downloads all the image files from a HTML page and saves them to a specific folder. all the images are the part of the HTML page.","AnswerCount":7,"Available Count":2,"Score":1.0,"is_accepted":false,"ViewCount":97893,"Q_Id":257409,"Users Score":8,"Answer":"You have to download the page and parse the HTML document, find your images with a regex, and download them. You can use urllib2 for downloading and Beautiful Soup for parsing the HTML file.","Q_Score":47,"Tags":"python,screen-scraping","A_Id":257412,"CreationDate":"2008-11-02T21:31:00.000","Title":"Download image file from the HTML page source using python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm conducting experiments regarding e-mail spam. One of these experiments require sending mail thru Tor. 
Since I'm using Python and smtplib for my experiments, I'm looking for a way to use the Tor proxy (or other method) to perform that mail sending.\nIdeas how this can be done?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":2405,"Q_Id":266849,"Users Score":1,"Answer":"Because of abuse by spammers, many Tor egress nodes decline to emit port 25 (SMTP) traffic, so you may have problems.","Q_Score":2,"Tags":"python,smtp,tor","A_Id":275164,"CreationDate":"2008-11-05T21:50:00.000","Title":"Using Python's smtplib with Tor","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am running my HTTPServer in a separate thread (using the threading module which has no way to stop threads...) and want to stop serving requests when the main thread also shuts down.\nThe Python documentation states that BaseHTTPServer.HTTPServer is a subclass of SocketServer.TCPServer, which supports a shutdown method, but it is missing in HTTPServer.\nThe whole BaseHTTPServer module has very little documentation :(","AnswerCount":11,"Available Count":1,"Score":1.0,"is_accepted":false,"ViewCount":87640,"Q_Id":268629,"Users Score":15,"Answer":"I think you can use [serverName].socket.close()","Q_Score":58,"Tags":"python,http,basehttpserver","A_Id":4020093,"CreationDate":"2008-11-06T13:10:00.000","Title":"How to stop BaseHTTPServer.serve_forever() in a BaseHTTPRequestHandler subclass?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I develop a client-server style, database based system and I need to devise a way to stress \/ load test the system. 
Customers inevitably want to know such things as:\n\u2022 How many clients can a server support?\n\u2022 How many concurrent searches can a server support?\n\u2022 How much data can we store in the database?\n\u2022 Etc.\nKey to all these questions is response time. We need to be able to measure how response time and performance degrades as new load is introduced so that we could for example, produce some kind of pretty graph we could throw at clients to give them an idea what kind of performance to expect with a given hardware configuration.\nRight now we just put out fingers in the air and make educated guesses based on what we already know about the system from experience. As the product is put under more demanding conditions, this is proving to be inadequate for our needs going forward though.\nI've been given the task of devising a method to get such answers in a meaningful way. I realise that this is not a question that anyone can answer definitively but I'm looking for suggestions about how people have gone about doing such work on their own systems.\nOne thing to note is that we have full access to our client API via the Python language (courtesy of SWIG) which is a lot easier to work with than C++ for this kind of work.\nSo there we go, I throw this to the floor: really interested to see what ideas you guys can come up with!","AnswerCount":5,"Available Count":2,"Score":0.1973753202,"is_accepted":false,"ViewCount":10487,"Q_Id":271825,"Users Score":5,"Answer":"For performance you are looking at two things: latency (the responsiveness of the application) and throughput (how many ops per interval). For latency you need to have an acceptable benchmark. For throughput you need to have a minimum acceptable throughput.\nThese are your starting points. To tell a client how many xyz's you can do per interval, you are going to need to know the hardware and software configuration. Knowing the production hardware is important for getting accurate figures. 
If you do not know the hardware configuration then you need to devise a way to map your figures from the test hardware to the eventual production hardware.\nWithout knowledge of the hardware, you can really only observe trends in performance over time rather than absolutes.\nKnowing the software configuration is equally important. Do you have a clustered server configuration, is it load balanced, is there anything else running on the server? Can you scale your software, or do you have to scale the hardware to meet demand?\nTo know how many clients you can support you need to understand what a standard set of operations is. A quick test is to remove the client, write a stub client, and then spin up as many of these as you can. Have each one connect to the server. You will eventually reach the server connection resource limit. Without connection pooling or better hardware you can't get higher than this. Often you will hit an architectural issue before that point, but in either case you have an upper bound.\nTake this information and design a script that your client can enact. You need to map how long your script takes to perform the action with respect to how long it will take the expected user to do it. Start increasing your numbers as mentioned above until you hit the point where the increase in clients causes a greater decrease in performance. \nThere are many ways to stress test but the key is understanding expected load. Ask your client about their expectations. What is the expected demand per interval? From there you can work out upper loads.\nYou can do a soak test with many clients operating continuously for many hours or days. You can try to connect as many clients as you can as fast as you can to see how well your server handles high demand (effectively a DoS attack). 
\nConcurrent searches should be done through your standard behaviour searches acting on behalf of the client, or write a script to establish a semaphore that waits on many threads, then release them all at once. This is fun and punishes your database. When performing searches you need to take into account any caching layers that may exist. You need to test both with and without caching (in scenarios where everyone makes unique search requests).\nDatabase storage is based on physical space; you can determine row size from the field lengths and expected data population. Extrapolate this out statistically or create a data generation script (useful for your load testing scenarios and should be an asset to your organisation) and then map the generated data to business objects. Your clients will care about how many \"business objects\" they can store while you will care about how much raw data can be stored.\nOther things to consider: What is the expected availability? What about how long it takes to bring a server online? 99.9% availability is not good if it takes two days to bring it back online the one time it does go down. On the flip side, a lower availability is more acceptable if it takes 5 seconds to reboot and you have a failover.","Q_Score":13,"Tags":"python,database,client-server,load-testing,stress-testing","A_Id":271918,"CreationDate":"2008-11-07T11:35:00.000","Title":"How should I stress test \/ load test a client server application?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I develop a client-server style, database based system and I need to devise a way to stress \/ load test the system. 
Customers inevitably want to know such things as:\n\u2022 How many clients can a server support?\n\u2022 How many concurrent searches can a server support?\n\u2022 How much data can we store in the database?\n\u2022 Etc.\nKey to all these questions is response time. We need to be able to measure how response time and performance degrade as new load is introduced so that we can, for example, produce some kind of pretty graph we could throw at clients to give them an idea what kind of performance to expect with a given hardware configuration.\nRight now we just put our fingers in the air and make educated guesses based on what we already know about the system from experience. As the product is put under more demanding conditions, this is proving to be inadequate for our needs going forward though.\nI've been given the task of devising a method to get such answers in a meaningful way. I realise that this is not a question that anyone can answer definitively but I'm looking for suggestions about how people have gone about doing such work on their own systems.\nOne thing to note is that we have full access to our client API via the Python language (courtesy of SWIG) which is a lot easier to work with than C++ for this kind of work.\nSo there we go, I throw this to the floor: really interested to see what ideas you guys can come up with!","AnswerCount":5,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":10487,"Q_Id":271825,"Users Score":0,"Answer":"If you have the budget, LoadRunner would be perfect for this.","Q_Score":13,"Tags":"python,database,client-server,load-testing,stress-testing","A_Id":271891,"CreationDate":"2008-11-07T11:35:00.000","Title":"How should I stress test \/ load test a client server application?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Let's say 
I wanted to make a python script interface with a site like Twitter.\nWhat would I use to do that? I'm used to using curl\/wget from bash, but Python seems to be much nicer to use. What's the equivalent?\n(This isn't Python run from a webserver, but run locally via the command line)","AnswerCount":5,"Available Count":1,"Score":0.0798297691,"is_accepted":false,"ViewCount":839,"Q_Id":285226,"Users Score":2,"Answer":"Python has a very nice httplib module as well as a urllib module which together will probably accomplish most of what you need (at least with regards to wget functionality).","Q_Score":6,"Tags":"python,web-services,twitter","A_Id":285252,"CreationDate":"2008-11-12T20:28:00.000","Title":"What Python tools can I use to interface with a website's API?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I need to connect to an Exchange mailbox in a Python script, without using any profile setup on the local machine (including using Outlook). If I use win32com to create a MAPI.Session I could logon (with the Logon() method) with an existing profile, but I want to just provide a username & password.\nIs this possible? If so, could someone provide example code? I would prefer if it only used the standard library and the pywin32 package. Unfortunately, enabling IMAP access for the Exchange server (and then using imaplib) is not possible.\nIn case it is necessary: all the script will be doing is connecting to the mailbox, and running through the messages in the Inbox, retrieving the contents. I can handle writing the code for that, if I can get a connection in the first place!\nTo clarify regarding Outlook: Outlook will be installed on the local machine, but it does not have any accounts setup (i.e. 
all the appropriate libraries will be available, but I need to operate independently from anything set up inside of Outlook).","AnswerCount":4,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":107535,"Q_Id":288546,"Users Score":1,"Answer":"I'm pretty sure this is going to be impossible without using Outlook and a MAPI profile. If you can sweet talk your mail admin into enabling IMAP on the Exchange server it would make your life a lot easier.","Q_Score":26,"Tags":"python,email,connection,exchange-server,pywin32","A_Id":288569,"CreationDate":"2008-11-13T22:19:00.000","Title":"Connect to Exchange mailbox with Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"We have developers with knowledge of these languages - Ruby, Python, .Net or Java. We are developing an application which will mainly handle XML documents. Most of the work is to convert predefined XML files into database tables, providing mapping between XML documents through database, creating reports from database etc. Which language will be the easiest and fastest to work with?\n(It is a web-app)","AnswerCount":9,"Available Count":1,"Score":0.0444152037,"is_accepted":false,"ViewCount":18193,"Q_Id":301493,"Users Score":2,"Answer":"Either C# or VB.NET using LINQ to XML. LINQ to XML is very powerful and easy to implement.","Q_Score":21,"Tags":"java,.net,python,xml,ruby","A_Id":301538,"CreationDate":"2008-11-19T10:35:00.000","Title":"Which language is easiest and fastest to work with XML content?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Is there a way to find all nodes in an XML tree using cElementTree? 
The findall method works only for specified tags.","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":1525,"Q_Id":304216,"Users Score":1,"Answer":"Have you looked at node.getiterator()?","Q_Score":2,"Tags":"python,xml,search,celementtree","A_Id":304221,"CreationDate":"2008-11-20T03:06:00.000","Title":"Find all nodes from an XML using cElementTree","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Comparable to cacti or mrtg.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1942,"Q_Id":310759,"Users Score":0,"Answer":"Or you can start building your own solution (like me); you will be surprised how much you can do with a few lines of code using, for instance, CherryPy for the web server, pysnmp, and the Python rrd module.","Q_Score":1,"Tags":"python,django,pylons,snmp,turbogears","A_Id":541516,"CreationDate":"2008-11-22T02:26:00.000","Title":"Does anyone know of a python based web ui for snmp monitoring?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I need to run a simple request\/response python module under an\nexisting system with windows\/apache\/FastCGI.\nAll the FastCGI wrappers for python I tried work for Linux only\n(they use socket.fromfd() and other such shticks).\nIs there a wrapper that runs under windows?","AnswerCount":3,"Available Count":1,"Score":0.1325487884,"is_accepted":false,"ViewCount":3212,"Q_Id":312928,"Users Score":2,"Answer":"You might find it easier to ditch FastCGI altogether and just run a python webserver on a localhost port. 
Then just use mod_rewrite to map the apache urls to the internal webserver.\n(I started offering FastCGI at my hosting company and to my surprise, nearly everyone ditched it in favor of just running their own web server on the ports I provided them.)","Q_Score":6,"Tags":"python,windows,apache,fastcgi","A_Id":318517,"CreationDate":"2008-11-23T20:39:00.000","Title":"Python as FastCGI under windows and apache","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"What's the best way to validate that an IP entered by the user is valid? It comes in as a string.","AnswerCount":11,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":305361,"Q_Id":319279,"Users Score":0,"Answer":"I only needed to parse IP v4 addresses. My solution based on Chills strategy follows:\n\ndef getIP():\n valid = False\n while not valid :\n octets = raw_input( \"Remote Machine IP Address:\" ).strip().split(\".\")\n try: valid=len( filter( lambda(item):0<=int(item)<256, octets) ) == 4\n except: valid = False\n return \".\".join( octets )","Q_Score":175,"Tags":"python,validation,networking,ip-address","A_Id":17214916,"CreationDate":"2008-11-25T23:40:00.000","Title":"How to validate IP address in Python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have an object that can build itself from an XML string, and write itself out to an XML string. I'd like to write a unit test to test round tripping through XML, but I'm having trouble comparing the two XML versions. Whitespace and attribute order seem to be the issues. Any suggestions for how to do this? 
This is in Python, and I'm using ElementTree (not that that really matters here since I'm just dealing with XML in strings at this level).","AnswerCount":10,"Available Count":2,"Score":0.0599281035,"is_accepted":false,"ViewCount":17177,"Q_Id":321795,"Users Score":3,"Answer":"Why are you examining the XML data at all?\nThe way to test object serialization is to create an instance of the object, serialize it, deserialize it into a new object, and compare the two objects. When you make a change that breaks serialization or deserialization, this test will fail.\nThe only thing checking the XML data is going to find for you is if your serializer is emitting a superset of what the deserializer requires, and the deserializer silently ignores stuff it doesn't expect.\nOf course, if something else is going to be consuming the serialized data, that's another matter. But in that case, you ought to be thinking about establishing a schema for the XML and validating it.","Q_Score":41,"Tags":"python,xml,elementtree","A_Id":322088,"CreationDate":"2008-11-26T19:09:00.000","Title":"Comparing XML in a unit test in Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have an object that can build itself from an XML string, and write itself out to an XML string. I'd like to write a unit test to test round tripping through XML, but I'm having trouble comparing the two XML versions. Whitespace and attribute order seem to be the issues. Any suggestions for how to do this? 
This is in Python, and I'm using ElementTree (not that that really matters here since I'm just dealing with XML in strings at this level).","AnswerCount":10,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":17177,"Q_Id":321795,"Users Score":0,"Answer":"The Java component dbUnit does a lot of XML comparisons, so you might find it useful to look at their approach (especially to find any gotchas that they may have already addressed).","Q_Score":41,"Tags":"python,xml,elementtree","A_Id":322600,"CreationDate":"2008-11-26T19:09:00.000","Title":"Comparing XML in a unit test in Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I need to check whether a page is being redirected or not without actually downloading the content. I just need the final URL. What's the best way of doing this in Python?\nThanks!","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":3318,"Q_Id":331855,"Users Score":1,"Answer":"When you open the URL with urllib2, and you're redirected, you get a status 30x for redirection. Check the info to see the location to which you're redirected. You don't need to read the page to read the info() that's part of the response.","Q_Score":5,"Tags":"python,http,http-headers","A_Id":331871,"CreationDate":"2008-12-01T19:10:00.000","Title":"How to determine if a page is being redirected","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"We use a number of different web services in our company, wiki (moinmoin), bugtracker (internally), requestracker (customer connection), subversion. Is there a way to parse the wikipages so that if I write \"... 
in Bug1234 you could ...\" Bug1234 would be rendered as a link to http:\/\/mybugtracker\/bug1234","AnswerCount":3,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":714,"Q_Id":343769,"Users Score":3,"Answer":"Check out the InterWiki page in MoinMoin (most wikis have them). We use Trac, for example, and you can set up different link paths to point to your different web resources. So in our Trac you can go [[SSGWiki:Some Topic]] and it will point to another internal wiki.","Q_Score":2,"Tags":"python,wiki,moinmoin","A_Id":343926,"CreationDate":"2008-12-05T13:08:00.000","Title":"How to use InterWiki links in moinmoin?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have a directory full (~103, 104) of XML files from which I need to extract the contents of several fields. \nI've tested different xml parsers, and since I don't need to validate the contents (expensive) I was thinking of simply using xml.parsers.expat (the fastest one) to go through the files, one by one to extract the data. \n\nIs there a more efficient way? (simple text matching doesn't work)\nDo I need to issue a new ParserCreate() for each new file (or string) or can I reuse the same one for every file?\nAny caveats?\n\nThanks!","AnswerCount":4,"Available Count":3,"Score":0.049958375,"is_accepted":false,"ViewCount":572,"Q_Id":344559,"Users Score":1,"Answer":"If you know that the XML files are generated using the ever-same algorithm, it might be more efficient to not do any XML parsing at all. E.g. if you know that the data is in lines 3, 4, and 5, you might read through the file line-by-line, and then use regular expressions.\nOf course, that approach would fail if the files are not machine-generated, or originate from different generators, or if the generator changes over time. 
However, I'm optimistic that it would be more efficient.\nWhether or not you recycle the parser objects is largely irrelevant. Many more objects will get created, so a single parser object doesn't really count much.","Q_Score":3,"Tags":"python,xml,performance,large-files,expat-parser","A_Id":344641,"CreationDate":"2008-12-05T17:15:00.000","Title":"What is the most efficient way of extracting information from a large number of xml files in python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a directory full (~103, 104) of XML files from which I need to extract the contents of several fields. \nI've tested different xml parsers, and since I don't need to validate the contents (expensive) I was thinking of simply using xml.parsers.expat (the fastest one) to go through the files, one by one to extract the data. \n\nIs there a more efficient way? (simple text matching doesn't work)\nDo I need to issue a new ParserCreate() for each new file (or string) or can I reuse the same one for every file?\nAny caveats?\n\nThanks!","AnswerCount":4,"Available Count":3,"Score":0.049958375,"is_accepted":false,"ViewCount":572,"Q_Id":344559,"Users Score":1,"Answer":"One thing you didn't indicate is whether or not you're reading the XML into a DOM of some kind. I'm guessing that you're probably not, but on the off chance you are, don't. Use xml.sax instead. 
Using SAX instead of DOM will get you a significant performance boost.","Q_Score":3,"Tags":"python,xml,performance,large-files,expat-parser","A_Id":345650,"CreationDate":"2008-12-05T17:15:00.000","Title":"What is the most efficient way of extracting information from a large number of xml files in python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a directory full (~103, 104) of XML files from which I need to extract the contents of several fields. \nI've tested different xml parsers, and since I don't need to validate the contents (expensive) I was thinking of simply using xml.parsers.expat (the fastest one) to go through the files, one by one to extract the data. \n\nIs there a more efficient way? (simple text matching doesn't work)\nDo I need to issue a new ParserCreate() for each new file (or string) or can I reuse the same one for every file?\nAny caveats?\n\nThanks!","AnswerCount":4,"Available Count":3,"Score":1.2,"is_accepted":true,"ViewCount":572,"Q_Id":344559,"Users Score":3,"Answer":"The quickest way would be to match strings (with, e.g., regular expressions) instead of parsing XML - depending on your XMLs this could actually work.\nBut the most important thing is this: instead of thinking through several options, just implement them and time them on a small set. This will take roughly the same amount of time, and will give you real numbers to drive you forward.\nEDIT:\n\nAre the files on a local drive or network drive? 
Network I\/O will kill you here.\nThe problem parallelizes trivially - you can split the work among several computers (or several processes on a multicore computer).","Q_Score":3,"Tags":"python,xml,performance,large-files,expat-parser","A_Id":344694,"CreationDate":"2008-12-05T17:15:00.000","Title":"What is the most efficient way of extracting information from a large number of xml files in python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm using urllib2 to read in a page. I need to do a quick regex on the source and pull out a few variables but urllib2 presents as a file object rather than a string.\nI'm new to python so I'm struggling to see how I use a file object to do this. Is there a quick way to convert this into a string?","AnswerCount":3,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":48579,"Q_Id":346230,"Users Score":77,"Answer":"You can use Python in interactive mode to search for solutions.\nIf f is your object, you can enter dir(f) to see all methods and attributes. There's one called read. Enter help(f.read) and it tells you that f.read() is the way to retrieve a string from a file object.","Q_Score":31,"Tags":"python,file,urllib2","A_Id":346237,"CreationDate":"2008-12-06T12:41:00.000","Title":"Read file object as string in python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"How can I receive and send email in python? 
A 'mail server' of sorts.\nI am looking into making an app that listens to see if it receives an email addressed to foo@bar.domain.com, and sends an email to the sender.\nNow, am I able to do this all in python, would it be best to use 3rd party libraries?","AnswerCount":9,"Available Count":4,"Score":0.0444152037,"is_accepted":false,"ViewCount":51409,"Q_Id":348392,"Users Score":2,"Answer":"Depending on the amount of mail you are sending you might want to look into using a real mail server like postfix or sendmail (*nix systems). Both of those programs have the ability to send a received mail to a program based on the email address.","Q_Score":43,"Tags":"python,email","A_Id":348579,"CreationDate":"2008-12-08T00:12:00.000","Title":"Receive and send emails in python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"How can I receive and send email in python? A 'mail server' of sorts.\nI am looking into making an app that listens to see if it receives an email addressed to foo@bar.domain.com, and sends an email to the sender.\nNow, am I able to do this all in python, would it be best to use 3rd party libraries?","AnswerCount":9,"Available Count":4,"Score":0.0886555158,"is_accepted":false,"ViewCount":51409,"Q_Id":348392,"Users Score":4,"Answer":"poplib and smtplib will be your friends when developing your app.","Q_Score":43,"Tags":"python,email","A_Id":348403,"CreationDate":"2008-12-08T00:12:00.000","Title":"Receive and send emails in python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"How can I receive and send email in python? 
A 'mail server' of sorts.\nI am looking into making an app that listens to see if it receives an email addressed to foo@bar.domain.com, and sends an email to the sender.\nNow, am I able to do this all in python, would it be best to use 3rd party libraries?","AnswerCount":9,"Available Count":4,"Score":1.0,"is_accepted":false,"ViewCount":51409,"Q_Id":348392,"Users Score":12,"Answer":"I do not think it would be a good idea to write a real mail server in Python. This is certainly possible (see mcrute's and Manuel Ceron's posts for details) but it is a lot of work when you think of everything that a real mail server must handle (queuing, retransmission, dealing with spam, etc).\nYou should explain in more detail what you need. If you just want to react to incoming email, I would suggest configuring the mail server to call a program when it receives the email. This program could do what it wants (updating a database, creating a file, talking to another Python program).\nTo call an arbitrary program from the mail server, you have several choices:\n\nFor sendmail and Postfix, a ~\/.forward containing \"|\/path\/to\/program\"\nIf you use procmail, a recipe action of |path\/to\/program\nAnd certainly many others","Q_Score":43,"Tags":"python,email","A_Id":349352,"CreationDate":"2008-12-08T00:12:00.000","Title":"Receive and send emails in python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"How can I receive and send email in python? 
A 'mail server' of sorts.\nI am looking into making an app that listens to see if it receives an email addressed to foo@bar.domain.com, and sends an email to the sender.\nNow, am I able to do this all in python, would it be best to use 3rd party libraries?","AnswerCount":9,"Available Count":4,"Score":1.0,"is_accepted":false,"ViewCount":51409,"Q_Id":348392,"Users Score":7,"Answer":"Python has an SMTPD module that will be helpful to you for writing a server. You'll probably also want the SMTP module to do the re-send. Both modules are in the standard library at least since version 2.3.","Q_Score":43,"Tags":"python,email","A_Id":348423,"CreationDate":"2008-12-08T00:12:00.000","Title":"Receive and send emails in python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Edit: How to return\/serve a file from a python controller (back end) over a web server, with the file_name? as suggested by @JV","AnswerCount":3,"Available Count":1,"Score":0.1325487884,"is_accepted":false,"ViewCount":14590,"Q_Id":352340,"Users Score":2,"Answer":"You can either pass back a reference to the file itself i.e. the full path to the file. 
Then you can open the file or otherwise manipulate it.\nOr, the more normal case is to pass back the file handle and use the standard read\/write operations on the file handle.\nIt is not recommended to pass the actual data, as files can be arbitrarily large and the program could run out of memory.\nIn your case, you probably want to return a tuple containing the open file handle, the file name and any other metadata you are interested in.","Q_Score":1,"Tags":"python,file,mime-types,download","A_Id":352385,"CreationDate":"2008-12-09T10:34:00.000","Title":"Return file from python module","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"There is a socket related function call in my code, that function is from another module thus out of my control, the problem is that it blocks for hours occasionally, which is totally unacceptable. How can I limit the function execution time from my code? I guess the solution must utilize another thread.","AnswerCount":13,"Available Count":1,"Score":0.0767717131,"is_accepted":false,"ViewCount":100904,"Q_Id":366682,"Users Score":5,"Answer":"The only \"safe\" way to do this, in any language, is to use a secondary process to do that timeout-thing, otherwise you need to build your code in such a way that it will time out safely by itself, for instance by checking the time elapsed in a loop or similar. If changing the method isn't an option, a thread will not suffice.\nWhy? Because you're risking leaving things in a bad state when you do. If the thread is simply killed mid-method, locks being held, etc. 
will just be held, and cannot be released.\nSo look at the process way, do not look at the thread way.","Q_Score":90,"Tags":"python,multithreading","A_Id":366754,"CreationDate":"2008-12-14T16:20:00.000","Title":"How to limit execution time of a function call?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm using python and I need to map locations like \"Bloomington, IN\" to GPS coordinates so I can measure distances between them.\nWhat Geocoding libraries\/APIs do you recommend? Solutions in other languages are also welcome.","AnswerCount":9,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":13754,"Q_Id":373383,"Users Score":0,"Answer":"You can take a better look at the Geopy module. It is well worth using, as it contains Google Maps and Yahoo Maps geocoders with which you can implement geocoding.","Q_Score":23,"Tags":"python,api,rest,geocoding","A_Id":2229732,"CreationDate":"2008-12-17T01:19:00.000","Title":"Geocoding libraries","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"We're developing a Python web service and a client web site in parallel. When we make an HTTP request from the client to the service, one call consistently raises a socket.error in socket.py, in read:\n(104, 'Connection reset by peer')\nWhen I listen in with wireshark, the \"good\" and \"bad\" responses look very similar:\n\nBecause of the size of the OAuth header, the request is split into two packets. The service responds to both with ACK\nThe service sends the response, one packet per header (HTTP\/1.0 200 OK, then the Date header, etc.). 
The client responds to each with ACK.\n(Good request) the server sends a FIN, ACK. The client responds with a FIN, ACK. The server responds ACK.\n(Bad request) the server sends a RST, ACK, the client doesn't send a TCP response, the socket.error is raised on the client side.\n\nBoth the web service and the client are running on a Gentoo Linux x86-64 box running glibc-2.6.1. We're using Python 2.5.2 inside the same virtual_env.\nThe client is a Django 1.0.2 app that is calling httplib2 0.4.0 to make requests. We're signing requests with the OAuth signing algorithm, with the OAuth token always set to an empty string.\nThe service is running Werkzeug 0.3.1, which is using Python's wsgiref.simple_server. I ran the WSGI app through wsgiref.validator with no issues.\nIt seems like this should be easy to debug, but when I trace through a good request on the service side, it looks just like the bad request, in the socket._socketobject.close() function, turning delegate methods into dummy methods. When the send or sendto (can't remember which) method is switched off, the FIN or RST is sent, and the client starts processing.\n\"Connection reset by peer\" seems to place blame on the service, but I don't trust httplib2 either. Can the client be at fault?\n** Further debugging - Looks like server on Linux **\nI have a MacBook, so I tried running the service on one and the client website on the other. The Linux client calls the OS X server without the bug (FIN ACK). The OS X client calls the Linux service with the bug (RST ACK, and a (54, 'Connection reset by peer')). So, it looks like it's the service running on Linux. Is it x86_64? A bad glibc? wsgiref? Still looking...\n** Further testing - wsgiref looks flaky **\nWe've gone to production with Apache and mod_wsgi, and the connection resets have gone away. See my answer below, but my advice is to log the connection reset and retry. 
This will let your server run OK in development mode, and solidly in production.","AnswerCount":4,"Available Count":3,"Score":0.0996679946,"is_accepted":false,"ViewCount":140801,"Q_Id":383738,"Users Score":2,"Answer":"I had the same issue, though with an upload of a very large file using a python-requests client posting to an nginx+uwsgi backend.\nWhat ended up being the cause was that the backend had a cap on the max file size for uploads lower than what the client was trying to send.\nThe error never showed up in our uwsgi logs since this limit was actually one imposed by nginx.\nUpping the limit in nginx removed the error.","Q_Score":39,"Tags":"python,sockets,wsgi,httplib2,werkzeug","A_Id":52826181,"CreationDate":"2008-12-20T21:04:00.000","Title":"104, 'Connection reset by peer' socket error, or When does closing a socket result in a RST rather than FIN?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"We're developing a Python web service and a client web site in parallel. When we make an HTTP request from the client to the service, one call consistently raises a socket.error in socket.py, in read:\n(104, 'Connection reset by peer')\nWhen I listen in with wireshark, the \"good\" and \"bad\" responses look very similar:\n\nBecause of the size of the OAuth header, the request is split into two packets. The service responds to both with ACK\nThe service sends the response, one packet per header (HTTP\/1.0 200 OK, then the Date header, etc.). The client responds to each with ACK.\n(Good request) the server sends a FIN, ACK. The client responds with a FIN, ACK. 
The server responds ACK.\n(Bad request) the server sends a RST, ACK, the client doesn't send a TCP response, the socket.error is raised on the client side.\n\nBoth the web service and the client are running on a Gentoo Linux x86-64 box running glibc-2.6.1. We're using Python 2.5.2 inside the same virtual_env.\nThe client is a Django 1.0.2 app that is calling httplib2 0.4.0 to make requests. We're signing requests with the OAuth signing algorithm, with the OAuth token always set to an empty string.\nThe service is running Werkzeug 0.3.1, which is using Python's wsgiref.simple_server. I ran the WSGI app through wsgiref.validator with no issues.\nIt seems like this should be easy to debug, but when I trace through a good request on the service side, it looks just like the bad request, in the socket._socketobject.close() function, turning delegate methods into dummy methods. When the send or sendto (can't remember which) method is switched off, the FIN or RST is sent, and the client starts processing.\n\"Connection reset by peer\" seems to place blame on the service, but I don't trust httplib2 either. Can the client be at fault?\n** Further debugging - Looks like server on Linux **\nI have a MacBook, so I tried running the service on one and the client website on the other. The Linux client calls the OS X server without the bug (FIN ACK). The OS X client calls the Linux service with the bug (RST ACK, and a (54, 'Connection reset by peer')). So, it looks like it's the service running on Linux. Is it x86_64? A bad glibc? wsgiref? Still looking...\n** Further testing - wsgiref looks flaky **\nWe've gone to production with Apache and mod_wsgi, and the connection resets have gone away. See my answer below, but my advice is to log the connection reset and retry. 
This will let your server run OK in development mode, and solidly in production.","AnswerCount":4,"Available Count":3,"Score":1.0,"is_accepted":false,"ViewCount":140801,"Q_Id":383738,"Users Score":11,"Answer":"Don't use wsgiref for production. Use Apache and mod_wsgi, or something else.\nWe continue to see these connection resets, sometimes frequently, with wsgiref (the backend used by the werkzeug test server, and possibly others like the Django test server). Our solution was to log the error, retry the call in a loop, and give up after ten failures. httplib2 tries twice, but we needed a few more. They seem to come in bunches as well - adding a 1 second sleep might clear the issue.\nWe've never seen a connection reset when running through Apache and mod_wsgi. I don't know what they do differently, (maybe they just mask them), but they don't appear.\nWhen we asked the local dev community for help, someone confirmed that they see a lot of connection resets with wsgiref that go away on the production server. There's a bug there, but it is going to be hard to find it.","Q_Score":39,"Tags":"python,sockets,wsgi,httplib2,werkzeug","A_Id":481952,"CreationDate":"2008-12-20T21:04:00.000","Title":"104, 'Connection reset by peer' socket error, or When does closing a socket result in a RST rather than FIN?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"We're developing a Python web service and a client web site in parallel. When we make an HTTP request from the client to the service, one call consistently raises a socket.error in socket.py, in read:\n(104, 'Connection reset by peer')\nWhen I listen in with wireshark, the \"good\" and \"bad\" responses look very similar:\n\nBecause of the size of the OAuth header, the request is split into two packets. 
The service responds to both with ACK\nThe service sends the response, one packet per header (HTTP\/1.0 200 OK, then the Date header, etc.). The client responds to each with ACK.\n(Good request) the server sends a FIN, ACK. The client responds with a FIN, ACK. The server responds ACK.\n(Bad request) the server sends a RST, ACK, the client doesn't send a TCP response, the socket.error is raised on the client side.\n\nBoth the web service and the client are running on a Gentoo Linux x86-64 box running glibc-2.6.1. We're using Python 2.5.2 inside the same virtual_env.\nThe client is a Django 1.0.2 app that is calling httplib2 0.4.0 to make requests. We're signing requests with the OAuth signing algorithm, with the OAuth token always set to an empty string.\nThe service is running Werkzeug 0.3.1, which is using Python's wsgiref.simple_server. I ran the WSGI app through wsgiref.validator with no issues.\nIt seems like this should be easy to debug, but when I trace through a good request on the service side, it looks just like the bad request, in the socket._socketobject.close() function, turning delegate methods into dummy methods. When the send or sendto (can't remember which) method is switched off, the FIN or RST is sent, and the client starts processing.\n\"Connection reset by peer\" seems to place blame on the service, but I don't trust httplib2 either. Can the client be at fault?\n** Further debugging - Looks like server on Linux **\nI have a MacBook, so I tried running the service on one and the client website on the other. The Linux client calls the OS X server without the bug (FIN ACK). The OS X client calls the Linux service with the bug (RST ACK, and a (54, 'Connection reset by peer')). So, it looks like it's the service running on Linux. Is it x86_64? A bad glibc? wsgiref? Still looking...\n** Further testing - wsgiref looks flaky **\nWe've gone to production with Apache and mod_wsgi, and the connection resets have gone away. 
See my answer below, but my advice is to log the connection reset and retry. This will let your server run OK in development mode, and solidly in production.","AnswerCount":4,"Available Count":3,"Score":0.1488850336,"is_accepted":false,"ViewCount":140801,"Q_Id":383738,"Users Score":3,"Answer":"Normally, you'd get an RST if you do a close which doesn't linger (i.e. in which data can be discarded by the stack if it hasn't been sent and ACK'd) and a normal FIN if you allow the close to linger (i.e. the close waits for the data in transit to be ACK'd).\nPerhaps all you need to do is set your socket to linger so that you remove the race condition between a non lingering close done on the socket and the ACKs arriving?","Q_Score":39,"Tags":"python,sockets,wsgi,httplib2,werkzeug","A_Id":384415,"CreationDate":"2008-12-20T21:04:00.000","Title":"104, 'Connection reset by peer' socket error, or When does closing a socket result in a RST rather than FIN?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Is there any way to send ARP packet on Windows without the use of another library such as winpcap?\nI have heard that Windows XP SP2 blocks raw ethernet sockets, but I have also heard that raw sockets are only blocked for administrators. 
Any clarification here?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":4899,"Q_Id":395846,"Users Score":0,"Answer":"You could use the OpenVPN tap to send arbitrary packets as if you were using raw sockets.","Q_Score":3,"Tags":"python,sockets,ethernet,arp","A_Id":503144,"CreationDate":"2008-12-28T04:52:00.000","Title":"How do I send an ARP packet through python on windows without needing winpcap?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using XML minidom (xml.dom.minidom) in Python, but any error in the XML will kill the parser.\nIs it possible to ignore them, like a browser for example?\nI am trying to write a browser in Python, but it just throws an exception if the tags aren't fully compatible.","AnswerCount":3,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":4564,"Q_Id":399980,"Users Score":3,"Answer":"It should be noted that while HTML looks like XML, it is not XML. XHTML is an XML form of HTML.","Q_Score":6,"Tags":"python,xml,minidom","A_Id":400669,"CreationDate":"2008-12-30T10:48:00.000","Title":"Ignoring XML errors in Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm working on a Python library that interfaces with a web service API. Like many web services I've encountered, this one requests limiting the rate of requests. I would like to provide an optional parameter, limit, to the class instantiation that, if provided, will hold outgoing requests until the number of seconds specified passes.\nI understand that the general scenario is the following: an instance of the class makes a request via a method. 
When it does, the method emits some signal that sets a lock variable somewhere, and begins a countdown timer for the number of seconds in limit. (In all likelihood, the lock is the countdown timer itself.) If another request is made within this time frame, it must be queued until the countdown timer reaches zero and the lock is disengaged; at this point, the oldest request on the queue is sent, and the countdown timer is reset and the lock is re-engaged.\nIs this a case for threading? Is there another approach I'm not seeing?\nShould the countdown timer and lock be instance variables, or should they belong to the class, such that all instances of the class hold requests?\nAlso, is this generally a bad idea to provide rate-limiting functionality within a library? I reason since, by default, the countdown is zero seconds, the library still allows developers to use the library and provide their own rate-limiting schemes. Given any developers using the service will need to rate-limit requests anyway, however, I figure that it would be a convenience for the library to provide a means of rate-limiting.\nRegardless of placing a rate-limiting scheme in the library or not, I'll want to write an application using the library, so suggested techniques will come in handy.","AnswerCount":6,"Available Count":3,"Score":0.0333209931,"is_accepted":false,"ViewCount":17294,"Q_Id":401215,"Users Score":1,"Answer":"So I am assuming something simple like\nimport time\ntime.sleep(2)\nwill not work for waiting 2 seconds between requests.","Q_Score":18,"Tags":"python,web-services,rate-limiting","A_Id":401826,"CreationDate":"2008-12-30T19:30:00.000","Title":"How to limit rate of requests to web services in Python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm working on a Python library that interfaces 
with a web service API. Like many web services I've encountered, this one requests limiting the rate of requests. I would like to provide an optional parameter, limit, to the class instantiation that, if provided, will hold outgoing requests until the number of seconds specified passes.\nI understand that the general scenario is the following: an instance of the class makes a request via a method. When it does, the method emits some signal that sets a lock variable somewhere, and begins a countdown timer for the number of seconds in limit. (In all likelihood, the lock is the countdown timer itself.) If another request is made within this time frame, it must be queued until the countdown timer reaches zero and the lock is disengaged; at this point, the oldest request on the queue is sent, and the countdown timer is reset and the lock is re-engaged.\nIs this a case for threading? Is there another approach I'm not seeing?\nShould the countdown timer and lock be instance variables, or should they belong to the class, such that all instances of the class hold requests?\nAlso, is this generally a bad idea to provide rate-limiting functionality within a library? I reason since, by default, the countdown is zero seconds, the library still allows developers to use the library and provide their own rate-limiting schemes. 
Given any developers using the service will need to rate-limit requests anyway, however, I figure that it would be a convenience for the library to provide a means of rate-limiting.\nRegardless of placing a rate-limiting scheme in the library or not, I'll want to write an application using the library, so suggested techniques will come in handy.","AnswerCount":6,"Available Count":3,"Score":0.0333209931,"is_accepted":false,"ViewCount":17294,"Q_Id":401215,"Users Score":1,"Answer":"Your rate limiting scheme should be heavily influenced by the calling conventions of the underlying code (synchronous or async), as well as what scope (thread, process, machine, cluster?) this rate-limiting will operate at.\nI would suggest keeping all the variables within the instance, so you can easily implement multiple periods\/rates of control.\nLastly, it sounds like you want to be a middleware component. Don't try to be an application and introduce threads on your own. Just block\/sleep if you are synchronous and use the async dispatching framework if you are being called by one of them.","Q_Score":18,"Tags":"python,web-services,rate-limiting","A_Id":401332,"CreationDate":"2008-12-30T19:30:00.000","Title":"How to limit rate of requests to web services in Python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm working on a Python library that interfaces with a web service API. Like many web services I've encountered, this one requests limiting the rate of requests. I would like to provide an optional parameter, limit, to the class instantiation that, if provided, will hold outgoing requests until the number of seconds specified passes.\nI understand that the general scenario is the following: an instance of the class makes a request via a method. 
When it does, the method emits some signal that sets a lock variable somewhere, and begins a countdown timer for the number of seconds in limit. (In all likelihood, the lock is the countdown timer itself.) If another request is made within this time frame, it must be queued until the countdown timer reaches zero and the lock is disengaged; at this point, the oldest request on the queue is sent, and the countdown timer is reset and the lock is re-engaged.\nIs this a case for threading? Is there another approach I'm not seeing?\nShould the countdown timer and lock be instance variables, or should they belong to the class, such that all instances of the class hold requests?\nAlso, is this generally a bad idea to provide rate-limiting functionality within a library? I reason since, by default, the countdown is zero seconds, the library still allows developers to use the library and provide their own rate-limiting schemes. Given any developers using the service will need to rate-limit requests anyway, however, I figure that it would be a convenience for the library to provide a means of rate-limiting.\nRegardless of placing a rate-limiting scheme in the library or not, I'll want to write an application using the library, so suggested techniques will come in handy.","AnswerCount":6,"Available Count":3,"Score":0.0665680765,"is_accepted":false,"ViewCount":17294,"Q_Id":401215,"Users Score":2,"Answer":"Queuing may be overly complicated. A simpler solution is to give your class a variable for the time the service was last called. Whenever the service is called (!1), set waitTime to delay - Now + lastcalltime. delay should be equal to the minimum allowable time between requests. If this number is positive, sleep for that long before making the call (!2). The disadvantage\/advantage of this approach is that it treats the web service requests as being synchronous. The advantage is that it is absurdly simple and easy to implement. 
\n\n(!1): Should happen right after receiving a response from the service, inside the wrapper (probably at the bottom of the wrapper).\n(!2): Should happen when the python wrapper around the web service is called, at the top of the wrapper.\n\nS.Lott's solution is more elegant, of course.","Q_Score":18,"Tags":"python,web-services,rate-limiting","A_Id":401390,"CreationDate":"2008-12-30T19:30:00.000","Title":"How to limit rate of requests to web services in Python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":1},{"Question":"I would like to write a program that will find bus stop times and update my personal webpage accordingly.\nIf I were to do this manually I would \n\nVisit www.calgarytransit.com\nEnter a stop number. ie) 9510\nClick the button \"next bus\"\n\nThe results may look like the following:\n\n10:16p Route 154\n 10:46p Route 154\n 11:32p Route 154\n\nOnce I've grabbed the time and routes then I will update my webpage accordingly. \nI have no idea where to start. I know diddly squat about web programming but can write some C and Python. 
What are some topics\/libraries I could look into?","AnswerCount":8,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":24102,"Q_Id":419260,"Users Score":0,"Answer":"As long as the layout of the web page you're trying to 'scrape' doesn't regularly change, you should be able to parse the HTML with any modern-day programming language.","Q_Score":3,"Tags":"python,c,text,webpage","A_Id":419273,"CreationDate":"2009-01-07T05:14:00.000","Title":"Grabbing text from a webpage","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I would like to write a program that will find bus stop times and update my personal webpage accordingly.\nIf I were to do this manually I would \n\nVisit www.calgarytransit.com\nEnter a stop number. ie) 9510\nClick the button \"next bus\"\n\nThe results may look like the following:\n\n10:16p Route 154\n 10:46p Route 154\n 11:32p Route 154\n\nOnce I've grabbed the time and routes then I will update my webpage accordingly. \nI have no idea where to start. I know diddly squat about web programming but can write some C and Python. What are some topics\/libraries I could look into?","AnswerCount":8,"Available Count":2,"Score":0.024994793,"is_accepted":false,"ViewCount":24102,"Q_Id":419260,"Users Score":1,"Answer":"That site doesn't offer an API for you to be able to get the appropriate data that you need. 
In that case you'll need to parse the actual HTML page returned by, for example, a cURL request.","Q_Score":3,"Tags":"python,c,text,webpage","A_Id":419271,"CreationDate":"2009-01-07T05:14:00.000","Title":"Grabbing text from a webpage","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"How to get the text of selected item from a drop down box element in html forms? (using python)\nHow can I store the value to a variable, when I select one item from the drop down box using mouse? (ie. without using a submit button)\nThis is for an application which I am doing in app engine which only supports Python.","AnswerCount":4,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":13168,"Q_Id":419908,"Users Score":0,"Answer":"The problem with using onchange is that not all users are using a mouse. If you have a combo-box and change the value with the keyboard, you'd never be able to get past the first value without the form submitting.\n~Cyrix","Q_Score":1,"Tags":"javascript,python,html,google-app-engine,drop-down-menu","A_Id":7284763,"CreationDate":"2009-01-07T11:05:00.000","Title":"Getting selected value from drop down box in a html form without submit","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have a set of word documents that contains a lot of non-embedded images in them. The URLs that the images point to no longer exist. I would like to programmatically change the domain name of the URL to something else. 
How can I go about doing this in Java or Python?","AnswerCount":4,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1351,"Q_Id":428308,"Users Score":0,"Answer":"You want to do this in Java or Python. Try OpenOffice.\nIn OpenOffice, you can insert Java or Python code as a \"macro\".\nI'm sure there will be a way to change the image URLs.","Q_Score":1,"Tags":"java,python,image,ms-word","A_Id":430717,"CreationDate":"2009-01-09T14:46:00.000","Title":"How to programmatically change urls of images in word documents","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a python application that relies on a file that is downloaded by a client from a website.\nThe website is not under my control and has no API to check for a \"latest version\" of the file.\nIs there a simple way to access the file (in python) via a URL and check its date (or size) without having to download it to the client's machine each time?\nupdate: Thanks to those who mentioned the \"last-modified\" date. This is the correct parameter to look at.\nI guess I didn't state the question well enough. How do I do this from a python script? 
I want the application to check the file and then download it if (last-modified date < current file date).","AnswerCount":6,"Available Count":2,"Score":1.0,"is_accepted":false,"ViewCount":12198,"Q_Id":428895,"Users Score":6,"Answer":"Take into account that 'last-modified' may not be present:\n\n>>> from urllib import urlopen\n>>> f=urlopen('http:\/\/google.com\/')\n>>> i=f.info()\n>>> i.keys()\n['set-cookie', 'expires', 'server', 'connection', 'cache-control', 'date', 'content-type']\n>>> i.getdate('date')\n(2009, 1, 10, 16, 17, 8, 0, 1, 0)\n>>> i.getheader('date')\n'Sat, 10 Jan 2009 16:17:08 GMT'\n>>> i.getdate('last-modified')\n>>>\n\nNow you can compare:\n\nif (i.getdate('last-modified') or i.getheader('date')) > current_file_date:\n open('file', 'w').write(f.read())","Q_Score":3,"Tags":"python,http","A_Id":431411,"CreationDate":"2009-01-09T17:11:00.000","Title":"How can I get the created date of a file on the web (with Python)?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a python application that relies on a file that is downloaded by a client from a website.\nThe website is not under my control and has no API to check for a \"latest version\" of the file.\nIs there a simple way to access the file (in python) via a URL and check its date (or size) without having to download it to the client's machine each time?\nupdate: Thanks to those who mentioned the \"last-modified\" date. This is the correct parameter to look at.\nI guess I didn't state the question well enough. How do I do this from a python script? I want the application to check the file and then download it if (last-modified date < current file date).","AnswerCount":6,"Available Count":2,"Score":1.0,"is_accepted":false,"ViewCount":12198,"Q_Id":428895,"Users Score":7,"Answer":"There is no reliable way to do this. 
For all you know, the file can be created on the fly by the web server, and the question \"how old is this file\" is not meaningful. The webserver may choose to provide a Last-Modified header, but it could tell you whatever it wants.","Q_Score":3,"Tags":"python,http","A_Id":428951,"CreationDate":"2009-01-09T17:11:00.000","Title":"How can I get the created date of a file on the web (with Python)?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I need to poll a web service, in this case twitter's API, and I'm wondering what the conventional wisdom is on this topic. I'm not sure whether this is important, but I've always found feedback useful in the past.\nA couple scenarios I've come up with:\n\nThe querying process starts every X seconds, eg a cron job runs a python script\nA process continually loops and queries at each iteration, eg ... well, here is where I enter unfamiliar territory. Do I just run a python script that doesn't end?\n\nThanks for your advice.\nps - regarding the particulars of twitter: I know that it sends emails for following and direct messages, but sometimes one might want the flexibility of parsing @replies. In those cases, I believe polling is as good as it gets.\npps - twitter limits bots to 100 requests per 60 minutes. I don't know if this also limits web scraping or rss feed reading. Anyone know how easy or hard it is to be whitelisted?\nThanks again.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":5380,"Q_Id":430226,"Users Score":0,"Answer":"You should have a page that is like a Ping or Heartbeat page. Then you have another process that \"tickles\" or hits that page; usually you can do this in the control panel of your web host, or use a cron job if you have local access. 
Then this script can keep statistics of how often it has polled in a database or some data store, and then you poll the service as often as you really need to, of course limiting it to whatever the provider's limit is. You definitely don't want (and certainly don't want to rely on) a Python script that \"doesn't end.\" :)","Q_Score":5,"Tags":"python,twitter,polling","A_Id":430245,"CreationDate":"2009-01-10T00:10:00.000","Title":"Best way to poll a web service (eg, for a twitter app)","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I was just wondering what network libraries there are out there for Python for building a TCP\/IP server. I know that Twisted might jump to mind but the documentation seems scarce, sloppy, and scattered to me. \nAlso, would using Twisted even have a benefit over rolling my own server with select.select()?","AnswerCount":5,"Available Count":2,"Score":0.0399786803,"is_accepted":false,"ViewCount":12949,"Q_Id":441849,"Users Score":1,"Answer":"Just adding an answer to re-iterate other posters - it'll be worth it to use Twisted. There's no reason to write yet another TCP server that'll end up not working as well as one using Twisted would. The only reason would be if writing your own is much faster, developer-wise, but if you just bite the bullet and learn Twisted now, your future projects will benefit greatly. 
And, as others have said, you'll be able to do much more complex stuff if you use Twisted from the start.","Q_Score":12,"Tags":"python,networking,twisted","A_Id":442079,"CreationDate":"2009-01-14T03:51:00.000","Title":"Good Python networking libraries for building a TCP server?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I was just wondering what network libraries there are out there for Python for building a TCP\/IP server. I know that Twisted might jump to mind but the documentation seems scarce, sloppy, and scattered to me. \nAlso, would using Twisted even have a benefit over rolling my own server with select.select()?","AnswerCount":5,"Available Count":2,"Score":1.0,"is_accepted":false,"ViewCount":12949,"Q_Id":441849,"Users Score":6,"Answer":"The standard library includes SocketServer and related modules which might be sufficient for your needs. 
This is a good middle ground between a complex framework like Twisted, and rolling your own select() loop.","Q_Score":12,"Tags":"python,networking,twisted","A_Id":441863,"CreationDate":"2009-01-14T03:51:00.000","Title":"Good Python networking libraries for building a TCP server?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm a hobbyist (and fairly new) programmer who has written several useful (to me) scripts in python to handle various system automation tasks that involve copying, renaming, and downloading files amongst other sundry activities.\nI'd like to create a web page served from one of my systems that would merely present a few buttons which would allow me to initiate these scripts remotely.\nThe problem is that I don't know where to start investigating how to do this. Let's say I have a script called:\nfile_arranger.py\nWhat do I need to do to have a webpage execute that script? This isn't meant for public consumption, so anything lightweight would be great. For bonus points, what do I need to look into to provide the web user with the output from such scripts?\nedit: The first answer made me realize I forgot to include that this is a Win2k3 system.","AnswerCount":9,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":38798,"Q_Id":448837,"Users Score":0,"Answer":"When setting this up, please be careful to restrict access to the scripts that take some action on your web server. It is not sufficient to place them in a directory where you just don't publish the URL, because sooner or later somebody will find them.\nAt the very least, put these scripts in a location that is password protected. 
You don't want just anybody out there on the internet being able to run your scripts.","Q_Score":25,"Tags":"python,windows,web-services,cgi","A_Id":449199,"CreationDate":"2009-01-15T22:58:00.000","Title":"How do I create a webpage with buttons that invoke various Python scripts on the system serving the webpage?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm a hobbyist (and fairly new) programmer who has written several useful (to me) scripts in python to handle various system automation tasks that involve copying, renaming, and downloading files amongst other sundry activities.\nI'd like to create a web page served from one of my systems that would merely present a few buttons which would allow me to initiate these scripts remotely.\nThe problem is that I don't know where to start investigating how to do this. Let's say I have a script called:\nfile_arranger.py\nWhat do I need to do to have a webpage execute that script? This isn't meant for public consumption, so anything lightweight would be great. For bonus points, what do I need to look into to provide the web user with the output from such scripts?\nedit: The first answer made me realize I forgot to include that this is a Win2k3 system.","AnswerCount":9,"Available Count":2,"Score":0.022218565,"is_accepted":false,"ViewCount":38798,"Q_Id":448837,"Users Score":1,"Answer":"A simple cgi script (or set of scripts) is all you need to get started. The other answers have covered how to do this so I won't repeat it; instead, I will stress that using plain text will get you a long way. 
Just output the header (print(\"Content-type: text\/plain\\n\") plus print adds its own newline to give you the needed blank line) and then run your normal program.\nThis way, any normal output from your script gets sent to the browser and you don't have to worry about HTML, escaping, frameworks, anything. \"Do the simplest thing that could possibly work.\"\nThis is especially appropriate for non-interactive private administrative tasks like you describe, and lets you use identical programs from a shell with a minimum of fuss. Your driver, the page with the buttons, can be a static HTML file with single-button forms. Or even a list of links.\nTo advance from there, look at the logging module (for example, sending INFO messages to the browser but not the command line, or easily categorizing messages by using different loggers, by configuring your handlers), and then start to consider template engines and frameworks.\nDon't output your own HTML and skip to using one of the many existing libraries\u2014it'll save a ton of headache even spending a bit of extra time to learn the library. Or at the very least encapsulate your output by effectively writing your own mini-engine.","Q_Score":25,"Tags":"python,windows,web-services,cgi","A_Id":449062,"CreationDate":"2009-01-15T22:58:00.000","Title":"How do I create a webpage with buttons that invoke various Python scripts on the system serving the webpage?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm new to web services and as an introduction I'm playing around with the Twitter API using the Twisted framework in python. 
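The plain-text CGI recipe above, sketched end to end; `run_task` is a hypothetical stand-in for one of the admin scripts (such as file_arranger.py), and the output lines are made up:

```python
def run_task():
    """Hypothetical stand-in for a script such as file_arranger.py."""
    return ["renamed 3 files", "downloaded 1 file"]

def cgi_main():
    # The Content-type header must be followed by a blank line;
    # joining with "\n" puts that blank line in via the empty string.
    lines = ["Content-type: text/plain", ""]
    lines.extend(run_task())
    return "\n".join(lines)

print(cgi_main())
```

Because the body is plain text, the same `run_task` can be called from a shell with no changes, which is the "simplest thing that could possibly work" point made above.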
I've read up on the different formats they offer, but it's still not clear to me which one I should use in my fairly simple project.\nSpecifically the practical difference between using JSON or XML is something I'd like guidance on. All I'm doing is requesting the public timeline and caching it locally.\nThanks.","AnswerCount":3,"Available Count":2,"Score":0.2605204458,"is_accepted":false,"ViewCount":8204,"Q_Id":453158,"Users Score":4,"Answer":"RSS and Atom are XML formats.\nJSON is a string which can be evaluated as Javascript code.","Q_Score":6,"Tags":"python,xml,json,twitter,twisted","A_Id":453160,"CreationDate":"2009-01-17T11:19:00.000","Title":"What is the practical difference between xml, json, rss and atom when interfacing with Twitter?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm new to web services and as an introduction I'm playing around with the Twitter API using the Twisted framework in python. I've read up on the different formats they offer, but it's still not clear to me which one I should use in my fairly simple project.\nSpecifically the practical difference between using JSON or XML is something I'd like guidance on. All I'm doing is requesting the public timeline and caching it locally.\nThanks.","AnswerCount":3,"Available Count":2,"Score":0.0665680765,"is_accepted":false,"ViewCount":8204,"Q_Id":453158,"Users Score":1,"Answer":"I would say the amount of data being sent over the wire is one factor. XML data stream will be bigger than JSON for the same data. But you can use whatever you know more\/have more experience. 
\nI would recommend JSON, as it's more \"pythonic\" than XML.","Q_Score":6,"Tags":"python,xml,json,twitter,twisted","A_Id":453164,"CreationDate":"2009-01-17T11:19:00.000","Title":"What is the practical difference between xml, json, rss and atom when interfacing with Twitter?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a \"manager\" process on a node, and several worker processes. The manager is the actual server who holds all of the connections to the clients. The manager accepts all incoming packets and puts them into a queue, and then the worker processes pull the packets out of the queue, process them, and generate a result. They send the result back to the manager (by putting them into another queue which is read by the manager), but here is where I get stuck: how do I send the result to a specific socket? When dealing with the processing of the packets on a single process, it's easy, because when you receive a packet you can reply to it by just grabbing the \"transport\" object in-context. But how would I do this with the method I'm using?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":829,"Q_Id":460068,"Users Score":3,"Answer":"It sounds like you might need to keep a reference to the transport (or protocol) along with the bytes that just came in on that protocol in your 'event' object. That way responses that came in on a connection go out on the same connection. \nIf things don't need to be processed serially perhaps you should think about setting up functors that can handle the data in parallel to remove the need for queueing. 
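A stdlib-only illustration of the wire-size point made in the JSON-versus-XML answer above; the status fields are made up, not Twitter's actual schema:

```python
import json

# A made-up status object standing in for one public-timeline entry.
status = {"id": 12345, "user": "example", "text": "hello world"}

as_json = json.dumps(status)
as_xml = ("<status><id>12345</id><user>example</user>"
          "<text>hello world</text></status>")

# XML repeats every field name in a closing tag, so it is longer than
# JSON for the same data; JSON also loads straight back into a dict.
assert json.loads(as_json) == status
print(len(as_json), len(as_xml))
```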
Just keep in mind that you will need to protect critical sections of your code.\nEdit:\nJudging from your other question about evaluating your server design it would seem that processing in parallel may not be possible for your situation, so my first suggestion stands.","Q_Score":2,"Tags":"python,sockets,twisted,multiprocess","A_Id":460245,"CreationDate":"2009-01-20T03:43:00.000","Title":"Python\/Twisted - Sending to a specific socket object?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"This seems like such a trivial problem, but I can't seem to pin how I want to do it. Basically, I want to be able to produce a figure from a socket server that at any time can give the number of packets received in the last minute. How would I do that?\nI was thinking of maybe summing a dictionary that uses the current second as a key, and when receiving a packet it increments that value by one, as well as setting the second+1 key above it to 0, but this just seems sloppy. Any ideas?","AnswerCount":3,"Available Count":3,"Score":1.2,"is_accepted":true,"ViewCount":303,"Q_Id":464314,"Users Score":1,"Answer":"When you say the last minute, do you mean the exact last seconds or the last full minute from x:00 to x:59? The latter will be easier to implement and would probably give accurate results. You have one prev variable holding the value of the hits for the previous minute. Then you have a current value that increments every time there is a new hit. You return the value of prev to the users. At the change of the minute you swap prev with current and reset current.\nIf you want higher analysis you could split the minute in 2 to 6 slices. You need a variable or list entry for every slice. Let's say you have 6 slices of 10 seconds. You also have an index variable pointing to the current slice (0..5). 
For every hit you increment a temp variable. When the slice is over, you replace the value of the indexed variable with the value of temp, reset temp and move the index forward. You return the sum of the slice variables to the users.","Q_Score":2,"Tags":"python","A_Id":464347,"CreationDate":"2009-01-21T07:14:00.000","Title":"Python - Hits per minute implementation?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"This seems like such a trivial problem, but I can't seem to pin how I want to do it. Basically, I want to be able to produce a figure from a socket server that at any time can give the number of packets received in the last minute. How would I do that?\nI was thinking of maybe summing a dictionary that uses the current second as a key, and when receiving a packet it increments that value by one, as well as setting the second+1 key above it to 0, but this just seems sloppy. Any ideas?","AnswerCount":3,"Available Count":3,"Score":0.0665680765,"is_accepted":false,"ViewCount":303,"Q_Id":464314,"Users Score":1,"Answer":"For what it's worth, your implementation above won't work if you don't receive a packet every second, as the next second entry won't necessarily be reset to 0.\nEither way, afaik the \"correct\" way to do this, ala logs analysis, is to keep a limited record of all the queries you receive. So just chuck the query, time received etc. into a database, and then simple database queries will give you the use over a minute, or any minute in the past. 
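The slice scheme described in the answer above, written out as a small class; `end_slice()` is called explicitly here instead of from a real 10-second timer so the sketch stays deterministic:

```python
class SlidingMinuteCounter:
    """Approximate hits-in-the-last-minute using fixed slices.
    A real server would call end_slice() from a 10-second timer."""

    def __init__(self, slices=6):
        self.slices = [0] * slices   # one closed slice per 10 seconds
        self.index = 0               # which slice gets overwritten next
        self.current = 0             # the 'temp' variable: open slice

    def hit(self):
        self.current += 1

    def end_slice(self):
        # Replace the oldest stored slice with the one just finished.
        self.slices[self.index] = self.current
        self.current = 0
        self.index = (self.index + 1) % len(self.slices)

    def last_minute(self):
        return sum(self.slices)
```

After six further empty slices, old hits rotate out of the sum, which is exactly the approximation the answer describes.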
Not sure whether this is too heavyweight for you, though.","Q_Score":2,"Tags":"python","A_Id":464329,"CreationDate":"2009-01-21T07:14:00.000","Title":"Python - Hits per minute implementation?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"This seems like such a trivial problem, but I can't seem to pin how I want to do it. Basically, I want to be able to produce a figure from a socket server that at any time can give the number of packets received in the last minute. How would I do that?\nI was thinking of maybe summing a dictionary that uses the current second as a key, and when receiving a packet it increments that value by one, as well as setting the second+1 key above it to 0, but this just seems sloppy. Any ideas?","AnswerCount":3,"Available Count":3,"Score":0.1973753202,"is_accepted":false,"ViewCount":303,"Q_Id":464314,"Users Score":3,"Answer":"A common pattern for solving this in other languages is to let the thing being measured simply increment an integer. Then you leave it to the listening client to determine intervals and frequencies.\nSo you basically do not let the socket server know about stuff like \"minutes\", because that's a feature the observer calculates. Then you can also support multiple listeners with different interval resolution.\nI suppose you want some kind of ring-buffer structure to do the rolling logging.","Q_Score":2,"Tags":"python","A_Id":464322,"CreationDate":"2009-01-21T07:14:00.000","Title":"Python - Hits per minute implementation?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"while learning some basic programming with python, i found web.py. 
I\ngot stuck with a stupid problem:\nI wrote a simple console app with a main loop that processes items\nfrom a queue in separate threads. My goal is to use web.py to add\nitems to my queue and report status of the queue via web request. I\ngot this running as a module but can't integrate it into my\nmain app.\nMy problem is when I start the http server with app.run() it blocks my\nmain loop.\nI also tried to start it with thread.start_new_thread but it still\nblocks.\nIs there an easy way to run web.py's integrated http server in the\nbackground within my app?\nIn the likely event that I am a victim of a fundamental\nmisunderstanding, any attempt to clarify my error in reasoning would\nhelp ;.) (please bear with me, I am a beginner :-)","AnswerCount":4,"Available Count":1,"Score":0.049958375,"is_accepted":false,"ViewCount":7054,"Q_Id":500935,"Users Score":1,"Answer":"Wouldn't it be simpler to re-write your main-loop code to be a function that you call over and over again, and then call that from the function that you pass to runsimple...\nIt's guaranteed not to fully satisfy your requirements, but if you're in a rush, it might be easiest.","Q_Score":17,"Tags":"python,multithreading,web-services,web.py","A_Id":501570,"CreationDate":"2009-02-01T14:47:00.000","Title":"Using web.py as non blocking http-server","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I've written several scripts that make use of the gdata API, and they all (obviously) have my API key and client ID in plain-text. 
How am I supposed to distribute these?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":306,"Q_Id":513806,"Users Score":0,"Answer":"If we assume that you want clients to use their own keys, I'd recommend putting them in a configuration file which defaults to an (invalid) sentinel value.\nIf, on the other hand, you want the script to use your key, the best you can do is obfuscate it. After all, if your program can read it, then an attacker (with a debugger) can read it too.","Q_Score":3,"Tags":"python,gdata-api","A_Id":513838,"CreationDate":"2009-02-04T23:06:00.000","Title":"How to distribute script using gdata-python-client?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'd like to write a python library to wrap a REST-style API offered by a particular Web service. Does anyone know of any good learning resources for such work, preferably aimed at intermediate Python programmers?\nI'd like a good article on the subject, but I'd settle for nice, clear code examples.\nCLARIFICATION: What I'm looking to do is write a Python client to interact with a Web service -- something to construct HTTP requests and parse XML\/JSON responses, all wrapped up in Python objects.","AnswerCount":5,"Available Count":2,"Score":0.0798297691,"is_accepted":false,"ViewCount":15114,"Q_Id":517237,"Users Score":2,"Answer":"My favorite combination is httplib2 (or pycurl for performance) and simplejson. As REST is more \"a way of design\" than a real \"protocol\", there is not really a reusable thing (that I know of). On Ruby you have something like ActiveResource. And to be honest, even that would just expose some tables as a webservice, whereas the power of xml\/json is that they are more like \"views\" that can contain multiple objects optimized for your application. 
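The configuration-file-with-sentinel idea from the gdata answer above might look like this with the stdlib `configparser`; the section and option names are assumptions, not anything gdata defines:

```python
import configparser

SENTINEL = "PUT-YOUR-API-KEY-HERE"   # obviously-invalid placeholder

def load_api_key(text):
    """Read the api_key from config text, falling back to the sentinel.
    Section/option names here are illustrative assumptions."""
    parser = configparser.ConfigParser()
    parser.read_string(text)
    return parser.get("credentials", "api_key", fallback=SENTINEL)

def key_is_configured(key):
    return bool(key) and key != SENTINEL
```

The script can then refuse to run (with a helpful message) until the user replaces the sentinel with their own key.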
I hope this makes sense :-)","Q_Score":21,"Tags":"python,web-services,api,rest","A_Id":518161,"CreationDate":"2009-02-05T18:29:00.000","Title":"HOWTO: Write Python API wrapper?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'd like to write a python library to wrap a REST-style API offered by a particular Web service. Does anyone know of any good learning resources for such work, preferably aimed at intermediate Python programmers?\nI'd like a good article on the subject, but I'd settle for nice, clear code examples.\nCLARIFICATION: What I'm looking to do is write a Python client to interact with a Web service -- something to construct HTTP requests and parse XML\/JSON responses, all wrapped up in Python objects.","AnswerCount":5,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":15114,"Q_Id":517237,"Users Score":0,"Answer":"You should take a look at PyFacebook. 
This is a python wrapper for the Facebook API, and it's one of the most nicely done APIs I have ever used.","Q_Score":21,"Tags":"python,web-services,api,rest","A_Id":981474,"CreationDate":"2009-02-05T18:29:00.000","Title":"HOWTO: Write Python API wrapper?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Hi, I'm coding a tool that searches for dirs and files.\nI have it searching for dirs, but I need help making it search for files on websites.\nAny idea how this can be done in python?","AnswerCount":4,"Available Count":3,"Score":0.049958375,"is_accepted":false,"ViewCount":188,"Q_Id":520362,"Users Score":1,"Answer":"You cannot get a directory listing on a website.\nPedantically, HTTP has no notion of directory.\nPractically, WebDAV provides a directory listing verb, so you can use that if WebDAV is enabled.\nOtherwise, the closest thing you can do is similar to what recursive wget does: get a page, parse the HTML, look for hyperlinks (a\/@href in xpath), filter out hyperlinks that do not point to a URL below the current page, recurse into the remaining urls.\nYou can do further filtering, depending on your use case, such as removing the query part of the URL (anything after the first ?).\nWhen the server has a directory listing feature enabled, this gives you something usable. 
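The recursive-wget-style approach described above (collect `a/@href`, resolve each link, drop the query part, keep only URLs below the current page) can be sketched with the stdlib; the sample base URL is a placeholder:

```python
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkCollector(HTMLParser):
    """Collect href attributes from <a> tags."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def links_below(base_url, html):
    """Resolve each link against base_url and keep only those
    underneath it, stripping any query string (after the first ?)."""
    parser = LinkCollector()
    parser.feed(html)
    resolved = (urljoin(base_url, href).split("?", 1)[0]
                for href in parser.links)
    return sorted({url for url in resolved if url.startswith(base_url)})
```

Recursing into the returned URLs (fetching each and calling `links_below` again) gives the crawl the answer outlines.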
This also gives you something usable if the website has no directory listing but is organized in a sensible way.","Q_Score":0,"Tags":"python","A_Id":520423,"CreationDate":"2009-02-06T13:55:00.000","Title":"Search Files& Dirs on Website","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Hi im coding to code a Tool that searchs for Dirs and files.\nhave done so the tool searchs for dirs, but need help to make it search for files on websites.\nAny idea how it can be in python?","AnswerCount":4,"Available Count":3,"Score":0.049958375,"is_accepted":false,"ViewCount":188,"Q_Id":520362,"Users Score":1,"Answer":"You can only do this if you have permission to browse directories on the site and no default page exists.","Q_Score":0,"Tags":"python","A_Id":520397,"CreationDate":"2009-02-06T13:55:00.000","Title":"Search Files& Dirs on Website","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Hi im coding to code a Tool that searchs for Dirs and files.\nhave done so the tool searchs for dirs, but need help to make it search for files on websites.\nAny idea how it can be in python?","AnswerCount":4,"Available Count":3,"Score":0.049958375,"is_accepted":false,"ViewCount":188,"Q_Id":520362,"Users Score":1,"Answer":"Is this tool scanning the directories of your own website (in which the tool is running), or external sites?","Q_Score":0,"Tags":"python","A_Id":520373,"CreationDate":"2009-02-06T13:55:00.000","Title":"Search Files& Dirs on Website","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and 
DevOps":0,"Web Development":0},{"Question":"Does anyone know if there is some parameter available for programmatic search on yahoo allowing to restrict results so only links to files of specific type will be returned (like PDF for example)?\nIt's possible to do that in GUI, but how to make it happen through API?\nI'd very much appreciate a sample code in Python, but any other solutions might be helpful as well.","AnswerCount":3,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1551,"Q_Id":522781,"Users Score":0,"Answer":"Thank you.\nI found myself that something like this works OK (file type is the first argument, and query is the second):\nformat = sys.argv[1]\nquery = \" \".join(sys.argv[2:])\nsrch = create_search(\"Web\", app_id, query=query, format=format)","Q_Score":1,"Tags":"python,yahoo-api,yahoo-search","A_Id":526491,"CreationDate":"2009-02-07T00:27:00.000","Title":"how to search for specific file type with yahoo search API?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"How can I accept cookies in a python script?","AnswerCount":6,"Available Count":2,"Score":0.0333209931,"is_accepted":false,"ViewCount":12543,"Q_Id":525773,"Users Score":1,"Answer":"There's the cookielib library. 
You can also implement your own cookie storage and policies; the cookies are found in the Set-Cookie header of the response (Set-Cookie: name=value), and then you send them back to the server in one or more Cookie headers in the request (Cookie: name=value).","Q_Score":11,"Tags":"python,cookies","A_Id":525982,"CreationDate":"2009-02-08T14:09:00.000","Title":"Accept Cookies in Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"How can I accept cookies in a python script?","AnswerCount":6,"Available Count":2,"Score":0.0333209931,"is_accepted":false,"ViewCount":12543,"Q_Id":525773,"Users Score":1,"Answer":"I believe you mean having a Python script that tries to speak HTTP.\nI suggest you use a high-level library that handles cookies automatically.\npycurl, mechanize, twill - you choose.\nFor Nikhil Chelliah:\nI don't see what's not clear here.\nAccepting a cookie happens client-side. The server can set a cookie.","Q_Score":11,"Tags":"python,cookies","A_Id":525966,"CreationDate":"2009-02-08T14:09:00.000","Title":"Accept Cookies in Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I need a script to add quotes to the url strings in url.txt\nfrom http:\/\/www.site.com\/info.xx to \"http:\/\/www.site.com\/info.xx\"","AnswerCount":5,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":8426,"Q_Id":543199,"Users Score":0,"Answer":"Write one...\nPerl is my favourite scripting language... it appears you may prefer Python.\nJust read in the file and add \\\" before and after each line...\nThis is pretty easy in Perl.\nThis seems more like a request than a question... 
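The Set-Cookie/Cookie round trip described in the answer above, reduced to its bare mechanics; this is a deliberately simplified sketch, and real code should let cookielib (`http.cookiejar` on Python 3) handle paths, domains, and expiry:

```python
def parse_set_cookie(header_value):
    """Pull the name=value pair out of a Set-Cookie header value,
    ignoring attributes such as Path or Expires."""
    first_part = header_value.split(";", 1)[0]
    name, _, value = first_part.partition("=")
    return name.strip(), value.strip()

def cookie_header(jar):
    """Format stored cookies for the request's Cookie header."""
    return "; ".join(f"{name}={value}" for name, value in jar.items())
```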
Should this be on stackoverflow?","Q_Score":2,"Tags":"python,ruby,perl","A_Id":543218,"CreationDate":"2009-02-12T20:54:00.000","Title":"Add Quotes in url string from file","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm working on a simple web crawler in python and I want to make a simple queue class, but I'm not quite sure the best way to start. I want something that holds only unique items to process, so that the crawler will only crawl each page once per script run (simply to avoid infinite looping). Can anyone give me or point me to a simple queue example that I could run off of?","AnswerCount":5,"Available Count":1,"Score":0.0399786803,"is_accepted":false,"ViewCount":2069,"Q_Id":549536,"Users Score":1,"Answer":"Why not use a list if you need order (or even a heapq, as was formerly suggested by zacherates before a set was suggested instead) and also use a set to check for duplicates?","Q_Score":1,"Tags":"python,queue","A_Id":549555,"CreationDate":"2009-02-14T18:44:00.000","Title":"Simple unique non-priority queue system","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am using libcurl to DL a webpage, then I am scanning it for data and doing something with one of the links. However, once in a while the page is different than I expect, thus I extract bad data and pycurl throws an exception. 
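In Python (the question's tag), the read-and-wrap approach from the answer above is just as short; skipping already-quoted lines is an added assumption:

```python
def quote_lines(text):
    """Wrap every non-empty line of url.txt-style input in double
    quotes, leaving already-quoted lines alone."""
    out = []
    for line in text.splitlines():
        line = line.strip()
        if line and not (line.startswith('"') and line.endswith('"')):
            line = f'"{line}"'
        out.append(line)
    return "\n".join(out)
```

Feeding the file through is then `open("url.txt").read()` in, `quote_lines(...)` out.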
I tried finding the exception name for pycurl but had no luck.\nIs there a way I can get the traceback to execute a function so I can dump the file, so I can look at the file input and see where my code went wrong?","AnswerCount":3,"Available Count":1,"Score":0.1325487884,"is_accepted":false,"ViewCount":992,"Q_Id":550804,"Users Score":2,"Answer":"Can you catch all exceptions somewhere in the main block, use sys.exc_info() for callback information, and log that to your file? exc_info() returns not just the exception type, but also the call traceback, so there should be information about what went wrong.","Q_Score":2,"Tags":"python,error-handling,pycurl","A_Id":550815,"CreationDate":"2009-02-15T12:29:00.000","Title":"python runtime error, can dump a file?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want the shortest possible way of representing an integer in a URL. For example, 11234 can be shortened to '2be2' using hexadecimal. Since base64 is a 64-character encoding, it should be possible to represent an integer in base64 using even fewer characters than hexadecimal. The problem is I can't figure out the cleanest way to convert an integer to base64 (and back again) using Python.\nThe base64 module has methods for dealing with bytestrings - so maybe one solution would be to convert an integer to its binary representation as a Python string... but I'm not sure how to do that either.","AnswerCount":15,"Available Count":2,"Score":0.0532828229,"is_accepted":false,"ViewCount":34136,"Q_Id":561486,"Users Score":4,"Answer":"Base64 takes 4 bytes\/characters to encode 3 bytes and can only encode multiples of 3 bytes (and adds padding otherwise).\nSo representing 4 bytes (your average int) in Base64 would take 8 bytes. Encoding the same 4 bytes in hex would also take 8 bytes. 
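The catch-and-dump idea from the answer above, sketched with the `traceback` module; `process_page` is a hypothetical stand-in for the scraping step that sometimes meets a page it does not expect:

```python
import traceback

def process_page(body):
    """Hypothetical stand-in for the scraping step."""
    return body.split(":", 1)[1]      # IndexError on unexpected pages

def process_with_dump(body, dump_path):
    """Run the scraper; on any exception, dump the traceback together
    with the raw page so the bad input can be inspected later."""
    try:
        return process_page(body)
    except Exception:
        with open(dump_path, "w") as dump:
            dump.write(traceback.format_exc())
            dump.write("\n--- page body ---\n")
            dump.write(body)
        return None
```

`traceback.format_exc()` is the string form of what `sys.exc_info()` describes, so the dump file answers both "what went wrong" and "on which input".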
So you wouldn't gain anything for a single int.","Q_Score":68,"Tags":"python,url,base64","A_Id":561534,"CreationDate":"2009-02-18T15:25:00.000","Title":"How to convert an integer to the shortest url-safe string in Python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want the shortest possible way of representing an integer in a URL. For example, 11234 can be shortened to '2be2' using hexadecimal. Since base64 uses is a 64 character encoding, it should be possible to represent an integer in base64 using even less characters than hexadecimal. The problem is I can't figure out the cleanest way to convert an integer to base64 (and back again) using Python.\nThe base64 module has methods for dealing with bytestrings - so maybe one solution would be to convert an integer to its binary representation as a Python string... but I'm not sure how to do that either.","AnswerCount":15,"Available Count":2,"Score":0.0266603475,"is_accepted":false,"ViewCount":34136,"Q_Id":561486,"Users Score":2,"Answer":"If you are looking for a way to shorten the integer representation using base64, I think you need to look elsewhere. When you encode something with base64 it doesn't get shorter, in fact it gets longer. \nE.g. 11234 encoded with base64 would yield MTEyMzQ=\nWhen using base64 you have overlooked the fact that you are not converting just the digits (0-9) to a 64 character encoding. 
You are converting 3 bytes into 4 bytes, so you are guaranteed your base64-encoded string will be 33.33% longer.","Q_Score":68,"Tags":"python,url,base64","A_Id":561547,"CreationDate":"2009-02-18T15:25:00.000","Title":"How to convert an integer to the shortest url-safe string in Python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is it safe to use Python UUID module generated values in URL's of a webpage? Want to use those ID's as part of URL's. Are there any non-safe characters ever generated by Python UUID that shouldn't be in URL's?","AnswerCount":3,"Available Count":1,"Score":1.0,"is_accepted":false,"ViewCount":3987,"Q_Id":567324,"Users Score":9,"Answer":"It is good practice to always urlencode data that will be placed into URLs. Then you need not be concerned with the specifics of UUID or if it will change in the future.","Q_Score":4,"Tags":"python,url,uuid","A_Id":567347,"CreationDate":"2009-02-19T21:47:00.000","Title":"Is it safe to use Python UUID module generated values in URL's of a webpage?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using BeautifulStoneSoup to parse an XML document and change some attributes. I noticed that it automatically converts all XML tags to lowercase. For example, my source file has mixed-case elements, which BeautifulSoup converts to all-lowercase tags.\nThis appears to be causing problems since the program I am feeding my modified XML document to does not seem to accept the lowercase versions. 
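As the answers above note, base64-encoding the integer's *bytes* gains nothing over hex; what does shorten the string is treating the integer as a number written in base 64. A sketch with an assumed URL-safe alphabet:

```python
# 64 URL-safe characters, so this is effectively base64 without the
# byte-alignment and padding overhead discussed above.
ALPHABET = ("ABCDEFGHIJKLMNOPQRSTUVWXYZ"
            "abcdefghijklmnopqrstuvwxyz0123456789-_")

def int_to_str(n):
    if n == 0:
        return ALPHABET[0]
    digits = []
    while n:
        n, rem = divmod(n, len(ALPHABET))
        digits.append(ALPHABET[rem])
    return "".join(reversed(digits))

def str_to_int(s):
    n = 0
    for char in s:
        n = n * len(ALPHABET) + ALPHABET.index(char)
    return n
```

11234 needs four hex digits ('2be2') but only three base-64 digits, which is the saving the question was after.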
Is there a way to prevent this behavior in BeautifulSoup?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1662,"Q_Id":567999,"Users Score":4,"Answer":"No, that's not a built-in option. The source is pretty straightforward, though. It looks like you want to change the value of encodedName in Tag.__str__.","Q_Score":10,"Tags":"python,xml,beautifulsoup","A_Id":568081,"CreationDate":"2009-02-20T01:52:00.000","Title":"Preventing BeautifulSoup from converting my XML tags to lowercase","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am using Python to connect to an FTP server that contains a new list of data once every hour. I am only connecting once a day, and I only want to download the newest file in the directory. Is there a way to do this?","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":1672,"Q_Id":570433,"Users Score":1,"Answer":"Seems like any system that is automatically generating a file once an hour is likely to be using an automated naming scheme. Are you over thinking the problem by asking the server for the newest file instead of more easily parsing the file names? \nThis wouldn't work in all cases, and if the directory got large it might become time consuming to get the file listing. But it seems likely to work in most cases.","Q_Score":3,"Tags":"python,ftp","A_Id":571410,"CreationDate":"2009-02-20T17:10:00.000","Title":"How can I get the newest file from an FTP server?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a Python web client that uses urllib2. It is easy enough to add HTTP headers to my outgoing requests. 
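If the hourly FTP files follow a zero-padded naming scheme (an assumption, as the answer above stresses), picking the newest one from a listing reduces to `max()`:

```python
def newest_file(names):
    """Pick the newest file from a directory listing, assuming names
    embed a sortable timestamp such as data-YYYYMMDD-HHMM.csv (that
    scheme is an assumption -- check what the server actually uses).
    With zero padding, lexicographic order equals chronological order."""
    return max(names)

# The listing itself would come from the stdlib, roughly:
#   from ftplib import FTP          # hypothetical host and login
#   names = FTP("ftp.example.com", "user", "password").nlst()
```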
I just create a dictionary of the headers I want to add, and pass it to the Request initializer.\nHowever, other \"standard\" HTTP headers get added to the request as well as the custom ones I explicitly add. When I sniff the request using Wireshark, I see headers besides the ones I add myself. My question is how do I get access to these headers? I want to log every request (including the full set of HTTP headers), and can't figure out how.\nAny pointers?\nIn a nutshell: How do I get all the outgoing headers from an HTTP request created by urllib2?","AnswerCount":8,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":14482,"Q_Id":603856,"Users Score":0,"Answer":"See urllib2.py:do_request (line 1044 (1067)) and urllib2.py:do_open (line 1073)\n(line 293) self.addheaders = [('User-agent', client_version)] (only 'User-agent' added)","Q_Score":15,"Tags":"python,urllib2","A_Id":603916,"CreationDate":"2009-03-02T20:24:00.000","Title":"How do you get default headers in a urllib2 Request?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm behind a router, I need a simple command to discover my public ip (instead of googling what's my ip and clicking one the results)\nAre there any standard protocols for this? I've heard about STUN but I don't know how can I use it?\nP.S. I'm planning on writing a short python script to do it","AnswerCount":16,"Available Count":3,"Score":0.0374824318,"is_accepted":false,"ViewCount":19213,"Q_Id":613471,"Users Score":3,"Answer":"Your simplest way may be to ask some server on the outside of your network.\nOne thing to keep in mind is that different destinations may see a different address for you. The router may be multihomed. 
And really that's just where problems begin.","Q_Score":28,"Tags":"python,ip-address,tcp","A_Id":613477,"CreationDate":"2009-03-05T03:18:00.000","Title":"Discovering public IP programmatically","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm behind a router, I need a simple command to discover my public ip (instead of googling what's my ip and clicking one the results)\nAre there any standard protocols for this? I've heard about STUN but I don't know how can I use it?\nP.S. I'm planning on writing a short python script to do it","AnswerCount":16,"Available Count":3,"Score":0.012499349,"is_accepted":false,"ViewCount":19213,"Q_Id":613471,"Users Score":1,"Answer":"Here are a few public services that support IPv4 and IPv6:\n\ncurl http:\/\/icanhazip.com\ncurl http:\/\/www.trackip.net\/ip\ncurl https:\/\/ipapi.co\/ip\ncurl http:\/\/api6.ipify.org\ncurl http:\/\/www.cloudflare.com\/cdn-cgi\/trace\ncurl http:\/\/checkip.dns.he.net\n\nThe following seem to support only IPv4 at this time:\n\ncurl http:\/\/bot.whatismyipaddress.com\ncurl http:\/\/checkip.dyndns.org\ncurl http:\/\/ifconfig.me\ncurl http:\/\/ip-api.com\ncurl http:\/\/api.infoip.io\/ip\n\nIt's easy to make an HTTP call programmatically. 
So all should be relatively easy to use, and you can try multiple different URLs in case one fails.","Q_Score":28,"Tags":"python,ip-address,tcp","A_Id":60525518,"CreationDate":"2009-03-05T03:18:00.000","Title":"Discovering public IP programmatically","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm behind a router, I need a simple command to discover my public ip (instead of googling what's my ip and clicking one the results)\nAre there any standard protocols for this? I've heard about STUN but I don't know how can I use it?\nP.S. I'm planning on writing a short python script to do it","AnswerCount":16,"Available Count":3,"Score":0.0374824318,"is_accepted":false,"ViewCount":19213,"Q_Id":613471,"Users Score":3,"Answer":"If the network has an UpNp server running on the gateway you are able to talk to the gateway and ask it for your outside IP address.","Q_Score":28,"Tags":"python,ip-address,tcp","A_Id":613518,"CreationDate":"2009-03-05T03:18:00.000","Title":"Discovering public IP programmatically","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to get all the messages from my gmail inbox, but I am facing 2 problems.\n\nIt does not get all the emails, (as per the count in stat function)\nThe order of emails it get is random.\n\nI am unsure if its the problem with poplib or gmail pop server.\nWhat am I missing here?","AnswerCount":4,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":1096,"Q_Id":617892,"Users Score":2,"Answer":"You can also try imaplib module since GMail also provides access to email via IMAP 
protocol.","Q_Score":1,"Tags":"python,python-2.5,poplib","A_Id":628130,"CreationDate":"2009-03-06T06:55:00.000","Title":"Poplib not working correctly?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I need to download emails from the gmail inbox only using poplib.Unfortunately I do not see any option to select Inbox alone, and poplib gives me emails from sent items too.\nHow do I select emails only from inbox?\nI dont want to use any gmail specific libraries.","AnswerCount":3,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":2392,"Q_Id":625148,"Users Score":4,"Answer":"POP3 has no concept of 'folders'. If gmail is showing you both 'sent' as well as 'received' mail, then you really don't have any option but to receive all that email.\nPerhaps you would be better off using IMAP4 instead of POP3. Python has libraries that will work with gmail's IMAP4 server.","Q_Score":2,"Tags":"python,gmail,pop3,poplib","A_Id":625175,"CreationDate":"2009-03-09T05:49:00.000","Title":"Select mails from inbox alone via poplib","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"So I'm in the middle of web-based filesystem abstraction layer development. \nJust like file browser, except it has some extra features like freaky permissions etc. \nI would like users to be notified somehow about directory changes. \nSo, i.e. when someone uploads a new file via FTP, certain users should get a proper message. It is not required for the message to be extra detailed, I don't really need to show the exact resource changed. 
The parent directory name should be enough.\nWhat approach would you recommend?","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":2053,"Q_Id":649623,"Users Score":0,"Answer":"A simple approach would be to monitor\/check the last modification date of the working directory (using os.stat() for example). \nWhenever a file in a directory is modified, the working directory's (the directory the file is in) last modification date changes as well.\nAt least this works on the filesystems I am working on (ufs, ext3). I'm not sure if all filesystems do it this way.","Q_Score":3,"Tags":"python,file,filesystems,checksum","A_Id":649665,"CreationDate":"2009-03-16T08:22:00.000","Title":"Directory checksum with python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I need advice and how to got about setting up a simple service for my users. I would like to add a new feature where users can send and receive emails from their gmail account. I have seen this done several times and I know its possible.\nThere use to be a project for \"Libgmailer\" at sourceforge but I think it was abandoned. 
Is anyone aware of anything similar?\nI have found that Gmail has a Python API but my site is making use of PHP.\nI really need ideas on how to best go about this!\nThanks all for any input","AnswerCount":3,"Available Count":3,"Score":1.2,"is_accepted":true,"ViewCount":498,"Q_Id":656180,"Users Score":6,"Answer":"any library\/source that works with imap or pop will work.","Q_Score":1,"Tags":"php,python,email,gmail","A_Id":656198,"CreationDate":"2009-03-17T21:51:00.000","Title":"Implementation: How to retrieve and send emails for different Gmail accounts?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I need advice and how to got about setting up a simple service for my users. I would like to add a new feature where users can send and receive emails from their gmail account. I have seen this done several times and I know its possible.\nThere use to be a project for \"Libgmailer\" at sourceforge but I think it was abandoned. Is anyone aware of anything similar?\nI have found that Gmail has a Python API but my site is making use of PHP.\nI really need ideas on how to best go about this!\nThanks all for any input","AnswerCount":3,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":498,"Q_Id":656180,"Users Score":0,"Answer":"Just a thought, Gmail supports POP\/IMAP access. Could you do it using those protocols? 
It would mean asking your users to go into their gmail and enable it though.","Q_Score":1,"Tags":"php,python,email,gmail","A_Id":656205,"CreationDate":"2009-03-17T21:51:00.000","Title":"Implementation: How to retrieve and send emails for different Gmail accounts?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I need advice and how to got about setting up a simple service for my users. I would like to add a new feature where users can send and receive emails from their gmail account. I have seen this done several times and I know its possible.\nThere use to be a project for \"Libgmailer\" at sourceforge but I think it was abandoned. Is anyone aware of anything similar?\nI have found that Gmail has a Python API but my site is making use of PHP.\nI really need ideas on how to best go about this!\nThanks all for any input","AnswerCount":3,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":498,"Q_Id":656180,"Users Score":0,"Answer":"Well if Google didn't come up with anything personally I'd see if I could reverse engineer the Python API by implementing it and watching it with a packet sniffer. 
My guess is it's just accessing some web service, which should be pretty easy to mimic regardless of the language you're using.","Q_Score":1,"Tags":"php,python,email,gmail","A_Id":656194,"CreationDate":"2009-03-17T21:51:00.000","Title":"Implementation: How to retrieve and send emails for different Gmail accounts?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is there a way I can preserve the original order of attributes when processing XML with minidom?\nSay I have: \nwhen I modify this with minidom the attributes are rearranged alphabetically blue, green, and red. I'd like to preserve the original order.\nI am processing the file by looping through the elements returned by elements = doc.getElementsByTagName('color') and then I do assignments like this e.attributes[\"red\"].value = \"233\".","AnswerCount":9,"Available Count":1,"Score":0.022218565,"is_accepted":false,"ViewCount":8252,"Q_Id":662624,"Users Score":1,"Answer":"1. Customize your own 'Element.writexml' method.\nFrom 'minidom.py', copy Element's writexml code to your own file.\nRename it to writexml_nosort,\ndelete 'a_names.sort()' (Python 2.7)\nor change 'a_names = sorted(attrs.keys())' to 'a_names = attrs.keys()' (Python 3.4),\nthen replace the Element's method with your own:\nminidom.Element.writexml = writexml_nosort\n2. Define your preferred order:\nright_order = ['a', 'b', 'c', 'a1', 'b1']\n3. Adjust your element's _attrs:\nnode._attrs = OrderedDict([(k, node._attrs[k]) for k in right_order])","Q_Score":13,"Tags":"python,xml,minidom","A_Id":29696911,"CreationDate":"2009-03-19T15:23:00.000","Title":"Preserve order of attributes when modifying with minidom","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System 
Administration and DevOps":0,"Web Development":1},{"Question":"I could use some pseudo-code, or better, Python. I am trying to implement a rate-limiting queue for a Python IRC bot, and it partially works, but if someone triggers less messages than the limit (e.g., rate limit is 5 messages per 8 seconds, and the person triggers only 4), and the next trigger is over the 8 seconds (e.g., 16 seconds later), the bot sends the message, but the queue becomes full and the bot waits 8 seconds, even though it's not needed since the 8 second period has lapsed.","AnswerCount":10,"Available Count":1,"Score":0.0599281035,"is_accepted":false,"ViewCount":108156,"Q_Id":667508,"Users Score":3,"Answer":"One solution is to attach a timestamp to each queue item and to discard the item after 8 seconds have passed. You can perform this check each time the queue is added to.\nThis only works if you limit the queue size to 5 and discard any additions whilst the queue is full.","Q_Score":173,"Tags":"python,algorithm,message-queue","A_Id":667528,"CreationDate":"2009-03-20T19:02:00.000","Title":"What's a good rate limiting algorithm?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want my python application to be able to tell when the socket on the other side has been dropped. 
Is there a method for this?","AnswerCount":6,"Available Count":2,"Score":1.0,"is_accepted":false,"ViewCount":175519,"Q_Id":667640,"Users Score":47,"Answer":"Short answer:\n\nuse a non-blocking recv(), or a blocking recv() \/ select() with a very\n short timeout.\n\nLong answer:\nThe way to handle socket connections is to read or write as you need to, and be prepared to handle connection errors.\nTCP distinguishes between 3 forms of \"dropping\" a connection: timeout, reset, close.\nOf these, the timeout can not really be detected, TCP might only tell you the time has not expired yet. But even if it told you that, the time might still expire right after.\nAlso remember that using shutdown() either you or your peer (the other end of the connection) may close only the incoming byte stream, and keep the outgoing byte stream running, or close the outgoing stream and keep the incoming one running.\nSo strictly speaking, you want to check if the read stream is closed, or if the write stream is closed, or if both are closed.\nEven if the connection was \"dropped\", you should still be able to read any data that is still in the network buffer. Only after the buffer is empty will you receive a disconnect from recv().\nChecking if the connection was dropped is like asking \"what will I receive after reading all data that is currently buffered ?\" To find that out, you just have to read all data that is currently bufferred.\nI can see how \"reading all buffered data\", to get to the end of it, might be a problem for some people, that still think of recv() as a blocking function. 
With a blocking recv(), \"checking\" for a read when the buffer is already empty will block, which defeats the purpose of \"checking\".\nIn my opinion, any function that is documented to potentially block the entire process indefinitely is a design flaw, but I guess it is still there for historical reasons, from when using a socket just like a regular file descriptor was a cool idea.\nWhat you can do is:\n\nset the socket to non-blocking mode, but then you get a system-dependent error to indicate the receive buffer is empty, or the send buffer is full\nstick to blocking mode but set a very short socket timeout. This will allow you to \"ping\" or \"check\" the socket with recv(), pretty much what you want to do\nuse the select() call or the asyncore module with a very short timeout. Error reporting is still system-specific.\n\nFor the write part of the problem, keeping the read buffers empty pretty much covers it. You will discover a connection \"dropped\" after a non-blocking read attempt, and you may choose to stop sending anything after a read returns a closed channel.\nI guess the only way to be sure your sent data has reached the other end (and is not still in the send buffer) is either:\n\nreceive a proper response on the same socket for the exact message that you sent. Basically you are using the higher-level protocol to provide confirmation.\nperform a successful shutdown() and close() on the socket\n\nThe Python socket HOWTO says send() will return 0 bytes written if the channel is closed. You may use a non-blocking or a timeout socket.send(), and if it returns 0 you can no longer send data on that socket. 
But if it returns non-zero, you have already sent something; good luck with that :)\nI have not considered OOB (out-of-band) socket data here as a means to approach your problem, but I think OOB was not what you meant.","Q_Score":69,"Tags":"python,sockets","A_Id":15175067,"CreationDate":"2009-03-20T19:31:00.000","Title":"How to tell if a connection is dead in python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want my python application to be able to tell when the socket on the other side has been dropped. Is there a method for this?","AnswerCount":6,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":175519,"Q_Id":667640,"Users Score":41,"Answer":"It depends on what you mean by \"dropped\". For TCP sockets, if the other end closes the connection either through close() or the process terminating, you'll find out by reading an end of file, or getting a read error, usually the errno being set to whatever 'connection reset by peer' is by your operating system. For Python, you'll read a zero-length string, or a socket.error will be thrown when you try to read or write from the socket.","Q_Score":69,"Tags":"python,sockets","A_Id":667710,"CreationDate":"2009-03-20T19:31:00.000","Title":"How to tell if a connection is dead in python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a web.py server that responds to various user requests. One of these requests involves downloading and analyzing a series of web pages. \nIs there a simple way to setup an async \/ callback based url download mechanism in web.py? 
Low resource usage is particularly important as each user initiated request could result in download of multiple pages.\nThe flow would look like:\nUser request -> web.py -> Download 10 pages in parallel or asynchronously -> Analyze contents, return results\nI recognize that Twisted would be a nice way to do this, but I'm already in web.py so I'm particularly interested in something that can fit within web.py .","AnswerCount":10,"Available Count":3,"Score":0.0399786803,"is_accepted":false,"ViewCount":10221,"Q_Id":668257,"Users Score":2,"Answer":"I'd just build a service in twisted that did that concurrent fetch and analysis and access that from web.py as a simple http request.","Q_Score":9,"Tags":"python,asynchronous","A_Id":668772,"CreationDate":"2009-03-20T22:40:00.000","Title":"Python: simple async download of url content?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have a web.py server that responds to various user requests. One of these requests involves downloading and analyzing a series of web pages. \nIs there a simple way to setup an async \/ callback based url download mechanism in web.py? Low resource usage is particularly important as each user initiated request could result in download of multiple pages.\nThe flow would look like:\nUser request -> web.py -> Download 10 pages in parallel or asynchronously -> Analyze contents, return results\nI recognize that Twisted would be a nice way to do this, but I'm already in web.py so I'm particularly interested in something that can fit within web.py .","AnswerCount":10,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":10221,"Q_Id":668257,"Users Score":0,"Answer":"Actually you can integrate twisted with web.py. 
I'm not really sure how as I've only done it with django (used twisted with it).","Q_Score":9,"Tags":"python,asynchronous","A_Id":668723,"CreationDate":"2009-03-20T22:40:00.000","Title":"Python: simple async download of url content?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have a web.py server that responds to various user requests. One of these requests involves downloading and analyzing a series of web pages. \nIs there a simple way to setup an async \/ callback based url download mechanism in web.py? Low resource usage is particularly important as each user initiated request could result in download of multiple pages.\nThe flow would look like:\nUser request -> web.py -> Download 10 pages in parallel or asynchronously -> Analyze contents, return results\nI recognize that Twisted would be a nice way to do this, but I'm already in web.py so I'm particularly interested in something that can fit within web.py .","AnswerCount":10,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":10221,"Q_Id":668257,"Users Score":0,"Answer":"I'm not sure I'm understanding your question, so I'll give multiple partial answers to start with.\n\nIf your concern is that web.py is having to download data from somewhere and analyze the results before responding, and you fear the request may time out before the results are ready, you could use ajax to split the work up. Return immediately with a container page (to hold the results) and a bit of javascript to poll the sever for the results until the client has them all. Thus the client never waits for the server, though the user still has to wait for the results.\nIf your concern is tying up the server waiting for the client to get the results, I doubt if that will actually be a problem. 
Your networking layers should not require you to wait-on-write.\nIf you are worrying about the server waiting while the client downloads static content from elsewhere, either ajax or clever use of redirects should solve your problem","Q_Score":9,"Tags":"python,asynchronous","A_Id":668486,"CreationDate":"2009-03-20T22:40:00.000","Title":"Python: simple async download of url content?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm building a Python application that needs to communicate with an OAuth service provider. The SP requires me to specify a callback URL. Specifying localhost obviously won't work. I'm unable to set up a public facing server. Any ideas besides paying for server\/hosting? Is this even possible?","AnswerCount":7,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":19222,"Q_Id":670398,"Users Score":0,"Answer":"You could create two applications: one for deployment and the other for testing. \nAlternatively, you can also include an oauth_callback parameter when you request a request token. Some providers will redirect to the url specified by oauth_callback (e.g. Twitter, Google) but some will ignore this callback url and redirect to the one specified during configuration (e.g. Yahoo)","Q_Score":37,"Tags":"python,oauth","A_Id":3117885,"CreationDate":"2009-03-22T01:37:00.000","Title":"How do I develop against OAuth locally?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm building a Python application that needs to communicate with an OAuth service provider. The SP requires me to specify a callback URL. Specifying localhost obviously won't work. 
I'm unable to set up a public facing server. Any ideas besides paying for server\/hosting? Is this even possible?","AnswerCount":7,"Available Count":3,"Score":0.1418931938,"is_accepted":false,"ViewCount":19222,"Q_Id":670398,"Users Score":5,"Answer":"This was with the Facebook OAuth - I actually was able to specify 'http:\/\/127.0.0.1:8080' as the Site URL and the callback URL. It took several minutes for the changes to the Facebook app to propagate, but then it worked.","Q_Score":37,"Tags":"python,oauth","A_Id":7971246,"CreationDate":"2009-03-22T01:37:00.000","Title":"How do I develop against OAuth locally?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm building a Python application that needs to communicate with an OAuth service provider. The SP requires me to specify a callback URL. Specifying localhost obviously won't work. I'm unable to set up a public facing server. Any ideas besides paying for server\/hosting? 
Is this even possible?","AnswerCount":7,"Available Count":3,"Score":1.0,"is_accepted":false,"ViewCount":19222,"Q_Id":670398,"Users Score":10,"Answer":"If you are using a *nix-style system, create an alias like 127.0.0.1 mywebsite.dev in \/etc\/hosts (you need to have a line similar to the one above in that file). Then use http:\/\/mywebsite.dev\/callbackurl\/for\/app as the callback URL during local testing.","Q_Score":37,"Tags":"python,oauth","A_Id":12107449,"CreationDate":"2009-03-22T01:37:00.000","Title":"How do I develop against OAuth locally?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Very occasionally when making an HTTP request, I am waiting for an age for a response that never comes. What is the recommended way to cancel this request after a reasonable period of time?","AnswerCount":3,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":578,"Q_Id":683493,"Users Score":2,"Answer":"Set the HTTP request timeout.","Q_Score":1,"Tags":"python,httpwebrequest","A_Id":683519,"CreationDate":"2009-03-25T21:17:00.000","Title":"Timeout on a HTTP request in python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have been asked to quote a project where they want to see sent email using POP. I am pretty sure this is not possible, but I thought if it was.\nSo is it possible given a users POP email server details to access their sent mail?\nIf so any examples in Python or fetchmail?","AnswerCount":4,"Available Count":4,"Score":0.1488850336,"is_accepted":false,"ViewCount":3820,"Q_Id":690527,"Users Score":3,"Answer":"POP doesn't support sent email. 
POP is inbox-only; sent mail will be stored in IMAP, Exchange, or another proprietary system.","Q_Score":0,"Tags":"python,email,pop3,fetchmail","A_Id":690536,"CreationDate":"2009-03-27T16:45:00.000","Title":"Is it possible to Access a Users Sent Email over POP?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have been asked to quote a project where they want to see sent email using POP. I am pretty sure this is not possible, but I thought if it was.\nSo is it possible given a users POP email server details to access their sent mail?\nIf so any examples in Python or fetchmail?","AnswerCount":4,"Available Count":4,"Score":0.049958375,"is_accepted":false,"ViewCount":3820,"Q_Id":690527,"Users Score":1,"Answer":"The SMTP (mail-sending) server could forward a copy of all sent mail back to the sender; they could then access this over POP.","Q_Score":0,"Tags":"python,email,pop3,fetchmail","A_Id":690541,"CreationDate":"2009-03-27T16:45:00.000","Title":"Is it possible to Access a Users Sent Email over POP?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have been asked to quote a project where they want to see sent email using POP. 
I am pretty sure this is not possible, but I thought if it was.\nSo is it possible given a users POP email server details to access their sent mail?\nIf so any examples in Python or fetchmail?","AnswerCount":4,"Available Count":4,"Score":1.2,"is_accepted":true,"ViewCount":3820,"Q_Id":690527,"Users Score":5,"Answer":"POP3 only handles receiving email; sent mail is sent via SMTP in these situations, and may be sent via a different ISP to the receiver (say, when you host your own email server, but use your current ISP to send). As such, this isn't directly possible.\nIMAP could do it, as this offers server side email folders as well as having the server handle the interface to both send and receive SMTP traffic","Q_Score":0,"Tags":"python,email,pop3,fetchmail","A_Id":690542,"CreationDate":"2009-03-27T16:45:00.000","Title":"Is it possible to Access a Users Sent Email over POP?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have been asked to quote a project where they want to see sent email using POP. I am pretty sure this is not possible, but I thought if it was.\nSo is it possible given a users POP email server details to access their sent mail?\nIf so any examples in Python or fetchmail?","AnswerCount":4,"Available Count":4,"Score":0.049958375,"is_accepted":false,"ViewCount":3820,"Q_Id":690527,"Users Score":1,"Answer":"Emails are not sent using POP, but collected from a server using POP. 
They are sent using SMTP, and they don't hang around on the server once they're gone.\nYou might want to look into IMAP?","Q_Score":0,"Tags":"python,email,pop3,fetchmail","A_Id":690540,"CreationDate":"2009-03-27T16:45:00.000","Title":"Is it possible to Access a Users Sent Email over POP?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I stripped some tags that I thought were unnecessary from an XML file. Now when I try to parse it, my SAX parser throws an error and says my file is not well-formed. However, I know every start tag has an end tag. The file's opening tag has a link to an XML schema. Could this be causing the trouble? If so, then how do I fix it?\nEdit: I think I've found the problem. My character data contains \"<\" and \">\" characters, presumably from html tags. After being parsed, these are converted to \"<\" and \">\" characters, which seems to bother the SAX parser. Is there any way to prevent this from happening?","AnswerCount":4,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":2338,"Q_Id":708531,"Users Score":0,"Answer":"You could load it into Firefox, if you don't have an XML editor. Firefox shows you the error.","Q_Score":0,"Tags":"python,xml,sax","A_Id":715813,"CreationDate":"2009-04-02T06:35:00.000","Title":"Python SAX parser says XML file is not well-formed","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I stripped some tags that I thought were unnecessary from an XML file. Now when I try to parse it, my SAX parser throws an error and says my file is not well-formed. However, I know every start tag has an end tag. The file's opening tag has a link to an XML schema. 
Could this be causing the trouble? If so, then how do I fix it?\nEdit: I think I've found the problem. My character data contains \"&lt;\" and \"&gt;\" characters, presumably from html tags. After being parsed, these are converted to \"<\" and \">\" characters, which seems to bother the SAX parser. Is there any way to prevent this from happening?","AnswerCount":4,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":2338,"Q_Id":708531,"Users Score":0,"Answer":"I would second the recommendation to try parsing it with another XML parser. That should give an indication as to whether it's the document that's wrong, or the parser.\nAlso, the actual error message might be useful. One fairly common problem for example is that the xml declaration (if one is used, it's optional) must be the very first thing -- not even whitespace is allowed before it.","Q_Score":0,"Tags":"python,xml,sax","A_Id":711033,"CreationDate":"2009-04-02T06:35:00.000","Title":"Python SAX parser says XML file is not well-formed","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I stripped some tags that I thought were unnecessary from an XML file. Now when I try to parse it, my SAX parser throws an error and says my file is not well-formed. However, I know every start tag has an end tag. The file's opening tag has a link to an XML schema. Could this be causing the trouble? If so, then how do I fix it?\nEdit: I think I've found the problem. My character data contains \"&lt;\" and \"&gt;\" characters, presumably from html tags. After being parsed, these are converted to \"<\" and \">\" characters, which seems to bother the SAX parser. 
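The escaping issue described in this question (raw markup characters appearing in character data) is usually prevented by escaping text nodes before writing them into an XML document. A minimal sketch with the standard library's xml.sax.saxutils, addressing the symptom described rather than the asker's exact pipeline:

```python
from xml.sax.saxutils import escape, unescape

# Escape raw markup characters in character data before embedding it in XML,
# so the resulting document stays well-formed.
raw = 'a < b & "quotes"'
safe = escape(raw)            # '&' -> '&amp;', '<' -> '&lt;', '>' -> '&gt;'
print(safe)                   # a &lt; b &amp; "quotes"
assert unescape(safe) == raw  # round-trips back to the original text
```

Parsing the escaped form converts the entities back to the literal characters, which is the conversion the asker is seeing; the point is to escape again before re-serializing.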
Is there any way to prevent this from happening?","AnswerCount":4,"Available Count":3,"Score":0.0996679946,"is_accepted":false,"ViewCount":2338,"Q_Id":708531,"Users Score":2,"Answer":"I would suggest putting those tags back in and making sure it still works. Then, if you want to take them out, do it one at a time until it breaks.\nHowever, I question the wisdom of taking them out. If it's your XML file, you should understand it better. If it's a third-party XML file, you really shouldn't be fiddling with it (until you understand it better :-).","Q_Score":0,"Tags":"python,xml,sax","A_Id":708546,"CreationDate":"2009-04-02T06:35:00.000","Title":"Python SAX parser says XML file is not well-formed","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have seen many projects using simplejson module instead of json module from the Standard Library. Also, there are many different simplejson modules. Why would use these alternatives, instead of the one in the Standard Library?","AnswerCount":13,"Available Count":4,"Score":1.0,"is_accepted":false,"ViewCount":141384,"Q_Id":712791,"Users Score":6,"Answer":"Another reason projects use simplejson is that the builtin json did not originally include its C speedups, so the performance difference was noticeable.","Q_Score":405,"Tags":"python,json,simplejson","A_Id":714748,"CreationDate":"2009-04-03T06:56:00.000","Title":"What are the differences between json and simplejson Python modules?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have seen many projects using simplejson module instead of json module from the Standard Library. 
Also, there are many different simplejson modules. Why would you use these alternatives, instead of the one in the Standard Library?","AnswerCount":13,"Available Count":4,"Score":0.0307595242,"is_accepted":false,"ViewCount":141384,"Q_Id":712791,"Users Score":2,"Answer":"In python3, if you have a string of b'bytes', with json you have to .decode() the content before you can load it. simplejson takes care of this so you can just do simplejson.loads(byte_string).","Q_Score":405,"Tags":"python,json,simplejson","A_Id":38016773,"CreationDate":"2009-04-03T06:56:00.000","Title":"What are the differences between json and simplejson Python modules?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have seen many projects using simplejson module instead of json module from the Standard Library. Also, there are many different simplejson modules. Why would you use these alternatives, instead of the one in the Standard Library?","AnswerCount":13,"Available Count":4,"Score":0.0767717131,"is_accepted":false,"ViewCount":141384,"Q_Id":712791,"Users Score":5,"Answer":"The builtin json module got included in Python 2.6. Any projects that support versions of Python < 2.6 need to have a fallback. In many cases, that fallback is simplejson.","Q_Score":405,"Tags":"python,json,simplejson","A_Id":712795,"CreationDate":"2009-04-03T06:56:00.000","Title":"What are the differences between json and simplejson Python modules?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have seen many projects using simplejson module instead of json module from the Standard Library. Also, there are many different simplejson modules. 
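The fallback answers above refer to a common guarded-import idiom: prefer the stdlib json, and fall back to the third-party simplejson package on interpreters that lack it. A minimal sketch:

```python
# Common idiom for projects that had to run on Python < 2.6:
# use the stdlib json where available, else the simplejson package,
# which exposes a compatible loads/dumps API.
try:
    import json
except ImportError:
    import simplejson as json  # third-party fallback

data = json.loads('{"votes": 1}')
print(data["votes"])  # 1
```

Because the two modules share an API, the rest of the code can use the `json` name without caring which implementation was imported.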
Why would you use these alternatives, instead of the one in the Standard Library?","AnswerCount":13,"Available Count":4,"Score":0.0,"is_accepted":false,"ViewCount":141384,"Q_Id":712791,"Users Score":0,"Answer":"I came across this question as I was looking to install simplejson for Python 2.6. I needed to use the 'object_pairs_hook' of json.load() in order to load a json file as an OrderedDict. Being familiar with more recent versions of Python I didn't realize that the json module for Python 2.6 doesn't include the 'object_pairs_hook' so I had to install simplejson for this purpose. From personal experience this is why I use simplejson as opposed to the standard json module.","Q_Score":405,"Tags":"python,json,simplejson","A_Id":31269030,"CreationDate":"2009-04-03T06:56:00.000","Title":"What are the differences between json and simplejson Python modules?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is there a list somewhere of recommendations of different Python-based REST frameworks for use on the serverside to write your own RESTful APIs? Preferably with pros and cons.\nPlease feel free to add recommendations here. 
:)","AnswerCount":16,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":241535,"Q_Id":713847,"Users Score":0,"Answer":"I strongly recommend TurboGears or Bottle:\nTurboGears:\n\nless verbose than django\nmore flexible, less HTML-oriented \nbut: less famous\n\nBottle:\n\nvery fast\nvery easy to learn\nbut: minimalistic and not mature","Q_Score":321,"Tags":"python,web-services,rest,frameworks","A_Id":1722910,"CreationDate":"2009-04-03T13:13:00.000","Title":"Recommendations of Python REST (web services) framework?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is there a list somewhere of recommendations of different Python-based REST frameworks for use on the serverside to write your own RESTful APIs? Preferably with pros and cons.\nPlease feel free to add recommendations here. :)","AnswerCount":16,"Available Count":2,"Score":1.0,"is_accepted":false,"ViewCount":241535,"Q_Id":713847,"Users Score":8,"Answer":"I don't see any reason to use Django just to expose a REST api, there are lighter and more flexible solutions. Django carries a lot of other things to the table, that are not always needed. For sure not needed if you only want to expose some code as a REST service. \nMy personal experience, fwiw, is that once you have a one-size-fits-all framework, you'll start to use its ORM, its plugins, etc. just because it's easy, and in no time you end up having a dependency that is very hard to get rid of.\nChoosing a web framework is a tough decision, and I would avoid picking a full stack solution just to expose a REST api. 
\nNow, if you really need\/want to use Django, then Piston is a nice REST framework for django apps.\nThat being said, CherryPy looks really nice too, but seems more RPC than REST.\nLooking at the samples (I never used it), probably web.py is the best and cleanest if you only need REST.","Q_Score":321,"Tags":"python,web-services,rest,frameworks","A_Id":6897383,"CreationDate":"2009-04-03T13:13:00.000","Title":"Recommendations of Python REST (web services) framework?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a web application and I would like to enable real time SMS notifications to the users of the applications. \nNote: I currently cannot use the Twitter API because I live in West Africa, and Twitter doesn't send SMS to my country.\nAlso email2sms is not an option because the mobile operators don't allow that in my country.","AnswerCount":7,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":2976,"Q_Id":716946,"Users Score":0,"Answer":"I don't have any knowledge in this area. But I think you'll have to talk to the mobile operators, and see if they have any API for sending SMS messages. \nYou'll probably have to pay them, or have some scheme for customers to pay them. Alternatively there might be some 3rd party that implements this functionality.","Q_Score":1,"Tags":"python,sms,notifications","A_Id":716953,"CreationDate":"2009-04-04T11:47:00.000","Title":"How do I enable SMS notifications in my web apps?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have a web application and I would like to enable real time SMS notifications to the users of the applications. 
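The REST framework answers above all sit on top of WSGI, so a framework-free baseline is worth seeing; this is a minimal sketch of a REST-style WSGI callable (the `/items` route and payload are illustrative, not from any answer):

```python
import json

def app(environ, start_response):
    # Tiny REST-style WSGI callable: dispatch on HTTP method + path and
    # return JSON. Runs under any WSGI server (wsgiref, CherryPy, ...).
    method = environ["REQUEST_METHOD"]
    path = environ["PATH_INFO"]
    if method == "GET" and path == "/items":
        body = json.dumps(["spam", "eggs"]).encode()
        start_response("200 OK", [("Content-Type", "application/json")])
        return [body]
    start_response("404 Not Found", [("Content-Type", "text/plain")])
    return [b"not found"]
```

Frameworks like Bottle or web.py mostly add nicer routing and request objects around exactly this calling convention.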
\nNote: I currently cannot use the Twitter API because I live in West Africa, and Twitter doesn't send SMS to my country.\nAlso email2sms is not an option because the mobile operators don't allow that in my country.","AnswerCount":7,"Available Count":2,"Score":0.057080742,"is_accepted":false,"ViewCount":2976,"Q_Id":716946,"Users Score":2,"Answer":"The easiest way to accomplish this is by using a third party API. Some I know that work well are:\n\nrestSms.me\nTwilio.com\nClicatell.com\n\nI have used all of them and they easiest\/cheapest one to implement was restSms.me\nHope that helps.","Q_Score":1,"Tags":"python,sms,notifications","A_Id":5414483,"CreationDate":"2009-04-04T11:47:00.000","Title":"How do I enable SMS notifications in my web apps?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Problems\n\nhow to make an Ajax buttons (upward and downward arrows) such that the number can increase or decrease\nhow to save the action af an user to an variable NumberOfVotesOfQuestionID\n\nI am not sure whether I should use database or not for the variable. However, I know that there is an easier way too to save the number of votes.\nHow can you solve those problems?\n[edit]\nThe server-side programming language is Python.","AnswerCount":4,"Available Count":1,"Score":0.1488850336,"is_accepted":false,"ViewCount":7516,"Q_Id":719194,"Users Score":3,"Answer":"You create the buttons, which can be links or images or whatever. Now hook a JavaScript function up to each button's click event. On clicking, the function fires and\n\nSends a request to the server code that says, more or less, +1 or -1.\nServer code takes over. This will vary wildly depending on what framework you use (or don't) and a bunch of other things.\nCode connects to the database and runs a query to +1 or -1 the score. 
How this happens will vary wildly depending on your database design, but it'll be something like UPDATE posts SET score=score+1 WHERE score_id={{insert id here}};.\nDepending on what the database says, the server returns a success code or a failure code as the AJAX request response.\nResponse gets sent to AJAX, asynchronously.\nThe JS response function updates the score if it's a success code, displays an error if it's a failure.\n\nYou can store the score in a variable, but this is complicated and depends on how well you know the semantics of your code's runtime environment. It eventually needs to be pushed to persistent storage anyway, so using the database 100% is a good initial solution. When the time for optimizing performance comes, there is enough software in the world for caching database queries to make you feel woozy, so it's not that big a deal.","Q_Score":32,"Tags":"javascript,python,html,ajax","A_Id":719293,"CreationDate":"2009-04-05T16:07:00.000","Title":"How can you make a vote-up-down button like in Stackoverflow?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have a python script that runs continuously. It outputs 2 lines of info every 30 seconds. I'd like to be able to view this output on the web. In particular, I'd like the site to auto-update (add the new output at the top of the page\/site every 30 seconds without having to refresh the page).\nI understand I can do this with javascript but is there a python only based solution? Even if there is, is javascript the way to go? 
I'm more than willing to learn javascript if needed but if not, I'd like to stay focused on python.\nSorry for the basic question but I'm still clueless when it comes to web programming.\nThx!","AnswerCount":9,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":16918,"Q_Id":731470,"Users Score":0,"Answer":"Is this for a real webapp? Or is this a convenience thing for you to view output in the browser? If it's more so for convenience, you could consider using mod_python.\nmod_python is an extension for the apache webserver that embeds a python interpreter in the web server (so the script runs server side). It would easily let you do this sort of thing locally or for your own convenience. Then you could just run the script with mod python and have the handler post your results. You could probably easily implement the refreshing too, but I would not know off the top of my head how to do this.\nHope this helps... check out mod_python. It's not too bad once you get everything configured.","Q_Score":7,"Tags":"javascript,python","A_Id":731629,"CreationDate":"2009-04-08T19:31:00.000","Title":"What's easiest way to get Python script output on the web?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have a python script that runs continuously. It outputs 2 lines of info every 30 seconds. I'd like to be able to view this output on the web. In particular, I'd like the site to auto-update (add the new output at the top of the page\/site every 30 seconds without having to refresh the page).\nI understand I can do this with javascript but is there a python only based solution? Even if there is, is javascript the way to go? 
I'm more than willing to learn javascript if needed but if not, I'd like to stay focused on python.\nSorry for the basic question but I'm still clueless when it comes to web programming.\nThx!","AnswerCount":9,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":16918,"Q_Id":731470,"Users Score":0,"Answer":"JavaScript is the primary way to add this sort of interactivity to a website. You can make the back-end Python, but the client will have to use JavaScript AJAX calls to update the page. Python doesn't run in the browser, so you're out of luck if you want to use just Python.\n(It's also possible to use Flash or Java applets, but that's a pretty heavyweight solution for what seems like a small problem.)","Q_Score":7,"Tags":"javascript,python","A_Id":731476,"CreationDate":"2009-04-08T19:31:00.000","Title":"What's easiest way to get Python script output on the web?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have a python script that runs continuously. It outputs 2 lines of info every 30 seconds. I'd like to be able to view this output on the web. In particular, I'd like the site to auto-update (add the new output at the top of the page\/site every 30 seconds without having to refresh the page).\nI understand I can do this with javascript but is there a python only based solution? Even if there is, is javascript the way to go? I'm more than willing to learn javascript if needed but if not, I'd like to stay focused on python.\nSorry for the basic question but I'm still clueless when it comes to web programming.\nThx!","AnswerCount":9,"Available Count":3,"Score":0.022218565,"is_accepted":false,"ViewCount":16918,"Q_Id":731470,"Users Score":1,"Answer":"You need Javascript in one way or another for your 30 second refresh. 
Alternatively, you could set a meta tag refresh for every 30 seconds to redirect to the current page, but the Javascript route will prevent page flicker.","Q_Score":7,"Tags":"javascript,python","A_Id":731477,"CreationDate":"2009-04-08T19:31:00.000","Title":"What's easiest way to get Python script output on the web?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm looking for a well-supported multithreaded Python HTTP server that supports chunked encoding replies. (I.e. \"Transfer-Encoding: chunked\" on responses). What's the best HTTP server base to start with for this purpose?","AnswerCount":6,"Available Count":1,"Score":0.0665680765,"is_accepted":false,"ViewCount":6143,"Q_Id":732222,"Users Score":2,"Answer":"Twisted supports chunked transfer and it does so transparently. i.e., if your request handler does not specify a response length, twisted will automatically switch to chunked transfer and it will generate one chunk per call to Request.write.","Q_Score":7,"Tags":"python,http,chunked-encoding","A_Id":9326192,"CreationDate":"2009-04-08T22:58:00.000","Title":"Python HTTP server that supports chunked encoding?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Trying to understand S3...How do you limit access to a file you upload to S3? For example, from a web application, each user has files they can upload, but how do you limit access so only that user has access to that file? 
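Chunked transfer encoding, which the Twisted answer above says is applied transparently, frames each write as a hexadecimal size line, the payload, and a CRLF, with a zero-length chunk terminating the body. A minimal framing sketch, not tied to any particular server:

```python
def encode_chunk(data: bytes) -> bytes:
    # One HTTP/1.1 chunk: hex size, CRLF, payload, CRLF.
    return b"%X\r\n%s\r\n" % (len(data), data)

def last_chunk() -> bytes:
    # A zero-sized chunk plus the trailing CRLF ends the chunked body.
    return b"0\r\n\r\n"

body = encode_chunk(b"hello ") + encode_chunk(b"world") + last_chunk()
print(body)  # b'6\r\nhello \r\n5\r\nworld\r\n0\r\n\r\n'
```

A handler that emits one `encode_chunk` per write, after sending a `Transfer-Encoding: chunked` header, is essentially what Twisted does for you per call to `Request.write`.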
It seems like the query string authentication requires an expiration date and that won't work for me, is there another way to do this?","AnswerCount":4,"Available Count":3,"Score":0.049958375,"is_accepted":false,"ViewCount":4305,"Q_Id":765964,"Users Score":1,"Answer":"You will have to build the whole access logic to S3 in your applications","Q_Score":10,"Tags":"python,django,amazon-web-services,amazon-s3","A_Id":766030,"CreationDate":"2009-04-19T19:51:00.000","Title":"Amazon S3 permissions","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Trying to understand S3...How do you limit access to a file you upload to S3? For example, from a web application, each user has files they can upload, but how do you limit access so only that user has access to that file? It seems like the query string authentication requires an expiration date and that won't work for me, is there another way to do this?","AnswerCount":4,"Available Count":3,"Score":1.0,"is_accepted":false,"ViewCount":4305,"Q_Id":765964,"Users Score":8,"Answer":"Have the user hit your server\nHave the server set up a query-string authentication with a short expiration (minutes, hours?)\nHave your server redirect to #2","Q_Score":10,"Tags":"python,django,amazon-web-services,amazon-s3","A_Id":768090,"CreationDate":"2009-04-19T19:51:00.000","Title":"Amazon S3 permissions","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Trying to understand S3...How do you limit access to a file you upload to S3? For example, from a web application, each user has files they can upload, but how do you limit access so only that user has access to that file? 
It seems like the query string authentication requires an expiration date and that won't work for me, is there another way to do this?","AnswerCount":4,"Available Count":3,"Score":1.0,"is_accepted":false,"ViewCount":4305,"Q_Id":765964,"Users Score":14,"Answer":"There are various ways to control access to the S3 objects:\n\nUse the query string auth - but as you noted this does require an expiration date. You could make it far in the future, which has been good enough for most things I have done.\nUse the S3 ACLS - but this requires the user to have an AWS account and authenticate with AWS to access the S3 object. This is probably not what you are looking for.\nYou proxy the access to the S3 object through your application, which implements your access control logic. This will bring all the bandwidth through your box.\nYou can set up an EC2 instance with your proxy logic - this keeps the bandwidth closer to S3 and can reduce latency in certain situations. The difference between this and #3 could be minimal, but depends your particular situation.","Q_Score":10,"Tags":"python,django,amazon-web-services,amazon-s3","A_Id":768050,"CreationDate":"2009-04-19T19:51:00.000","Title":"Amazon S3 permissions","Data Science and Machine Learning":0,"Database and SQL":1,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"For fun, I've been toying around with writing a load balancer in python and have been trying to figure the best (correct?) way to test if a port is available and the remote host is still there.\nI'm finding that, once connected, it becomes difficult to tell when the remote host goes down. 
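The query-string authentication in option 1 above (legacy S3 "signature version 2") signs a string describing the request with your secret key. A rough sketch of just the signing step; the exact StringToSign layout (verb, headers, expiry, resource path) is defined by AWS and is simplified here for illustration:

```python
import base64
import hashlib
import hmac

def sign_s3_request(secret_key: str, string_to_sign: str) -> str:
    # Legacy S3 query-string auth: Signature = Base64(HMAC-SHA1(secret, StringToSign)).
    digest = hmac.new(secret_key.encode(), string_to_sign.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()

# Illustrative only; a real StringToSign includes the HTTP verb, the Expires
# timestamp, and the bucket/key path, and the signature goes into the URL.
sig = sign_s3_request("SECRET", "GET\n\n\n1234567890\n/bucket/key")
print(len(sig))  # 28 base64 characters for a 20-byte SHA-1 digest
```

Because the expiry timestamp is part of the signed string, picking a far-future value (as the answer suggests) simply produces a long-lived URL.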
I've turned keep alive on, but can't get it to recognize a downed connection sooner than a minute (I realize polling more often than a minute might be overkill, but let's say I wanted to), even after setting the various TCP_KEEPALIVE options to their lowest.\nWhen I use nonblocking sockets, I've noticed that a recv() will return an error (\"resource temporarily unavailable\") when it reads from a live socket, but returns \"\" when reading from a dead one (send and recv of 0 bytes, which might be the cause?). That seems like an odd way to test whether it's connected, though, and makes it impossible to tell whether the connection died after sending some data.\nAside from connecting\/disconnecting for every check, is there something I can do? Can I manually send a tcp keepalive, or can I establish a lower level connection that will let me test the connectivity without sending real data the remote server would potentially process?","AnswerCount":5,"Available Count":4,"Score":1.2,"is_accepted":true,"ViewCount":3463,"Q_Id":771399,"Users Score":2,"Answer":"I'd recommend not leaving your (single) test socket connected - make a new connection each time you need to poll. Every load balancer \/ server availability system I've ever seen uses this method instead of a persistent connection.\nIf the remote server hasn't responded within a reasonable amount of time (e.g. 10s) mark it as \"down\". Use timers and signals rather than function response codes to handle that timeout.","Q_Score":3,"Tags":"python,tcp,monitoring,port","A_Id":771422,"CreationDate":"2009-04-21T07:11:00.000","Title":"Monitoring a tcp port","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"For fun, I've been toying around with writing a load balancer in python and have been trying to figure the best (correct?) 
way to test if a port is available and the remote host is still there.\nI'm finding that, once connected, it becomes difficult to tell when the remote host goes down. I've turned keep alive on, but can't get it to recognize a downed connection sooner than a minute (I realize polling more often than a minute might be overkill, but lets say I wanted to), even after setting the various TCP_KEEPALIVE options to their lowest.\nWhen I use nonblocking sockets, I've noticed that a recv() will return an error (\"resource temporarily unavailable\") when it reads from a live socket, but returns \"\" when reading from a dead one (send and recv of 0 bytes, which might be the cause?). That seems like an odd way to test for it connected, though, and makes it impossible to tell if the connected died but after sending some data.\nAside from connecting\/disconnecting for every check, is there something I can do? Can I manually send a tcp keepalive, or can I establish a lower level connection that will let me test the connectivity without sending real data the remote server would potentially process?","AnswerCount":5,"Available Count":4,"Score":0.0,"is_accepted":false,"ViewCount":3463,"Q_Id":771399,"Users Score":0,"Answer":"It is theoretically possible to spam a keepalive packet. But to set it to very low intervals, you may need to dig into raw sockets. Also, your host may ignore it if its coming in too fast.\nThe best way to check if a host is alive in a TCP connection is to send data, and wait for an ACK packet. 
If the ACK packet arrives, the SEND function will return non-zero.","Q_Score":3,"Tags":"python,tcp,monitoring,port","A_Id":771438,"CreationDate":"2009-04-21T07:11:00.000","Title":"Monitoring a tcp port","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"For fun, I've been toying around with writing a load balancer in python and have been trying to figure the best (correct?) way to test if a port is available and the remote host is still there.\nI'm finding that, once connected, it becomes difficult to tell when the remote host goes down. I've turned keep alive on, but can't get it to recognize a downed connection sooner than a minute (I realize polling more often than a minute might be overkill, but lets say I wanted to), even after setting the various TCP_KEEPALIVE options to their lowest.\nWhen I use nonblocking sockets, I've noticed that a recv() will return an error (\"resource temporarily unavailable\") when it reads from a live socket, but returns \"\" when reading from a dead one (send and recv of 0 bytes, which might be the cause?). That seems like an odd way to test for it connected, though, and makes it impossible to tell if the connected died but after sending some data.\nAside from connecting\/disconnecting for every check, is there something I can do? Can I manually send a tcp keepalive, or can I establish a lower level connection that will let me test the connectivity without sending real data the remote server would potentially process?","AnswerCount":5,"Available Count":4,"Score":0.0399786803,"is_accepted":false,"ViewCount":3463,"Q_Id":771399,"Users Score":1,"Answer":"\"it becomes difficult to tell when the remote host goes down\"\nCorrect. This is a feature of TCP. The whole point of TCP is to have an enduring connection between ports. 
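The connect-per-poll approach from the accepted answer can be sketched with a fresh connection and a timeout for every check (the function name and default timeout are illustrative):

```python
import socket

def port_alive(host: str, port: int, timeout: float = 10.0) -> bool:
    # Open a brand-new connection for every poll instead of trusting
    # keepalive on a long-lived socket; a refused or timed-out connect
    # marks the backend "down".
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

This only proves the port accepts connections, not that the application behind it is healthy; application-level checks need a real request.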
Theoretically an application can drop and reconnect to the port through TCP (the socket libraries don't provide a lot of support for this, but it's part of the TCP protocol).","Q_Score":3,"Tags":"python,tcp,monitoring,port","A_Id":773207,"CreationDate":"2009-04-21T07:11:00.000","Title":"Monitoring a tcp port","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"For fun, I've been toying around with writing a load balancer in python and have been trying to figure the best (correct?) way to test if a port is available and the remote host is still there.\nI'm finding that, once connected, it becomes difficult to tell when the remote host goes down. I've turned keep alive on, but can't get it to recognize a downed connection sooner than a minute (I realize polling more often than a minute might be overkill, but lets say I wanted to), even after setting the various TCP_KEEPALIVE options to their lowest.\nWhen I use nonblocking sockets, I've noticed that a recv() will return an error (\"resource temporarily unavailable\") when it reads from a live socket, but returns \"\" when reading from a dead one (send and recv of 0 bytes, which might be the cause?). That seems like an odd way to test for it connected, though, and makes it impossible to tell if the connected died but after sending some data.\nAside from connecting\/disconnecting for every check, is there something I can do? Can I manually send a tcp keepalive, or can I establish a lower level connection that will let me test the connectivity without sending real data the remote server would potentially process?","AnswerCount":5,"Available Count":4,"Score":0.0,"is_accepted":false,"ViewCount":3463,"Q_Id":771399,"Users Score":0,"Answer":"ping was invented for that purpose\nalso you might be able to send malformed TCP packets to your destination. 
For example, in the TCP headers there is a flag for acknowledging end of transmission, the FIN message. If you send a message with ACK and FIN the remote host should complain with a return packet and you'll be able to evaluate round trip time.","Q_Score":3,"Tags":"python,tcp,monitoring,port","A_Id":771415,"CreationDate":"2009-04-21T07:11:00.000","Title":"Monitoring a tcp port","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Hi I have a small comment shoutbox type cgi process running on a server and currently when someone leaves a comment I simply format that comment into html i.e\n

<p>$title<\/p>\n<p>
$comment<\/p>\nand store in a flat file.\nWould it be faster and acceptably low in LOC to reimplement the storage in xml or json, in a simple spec of my own or stick with the simple html route?. \nI don't want to use relational database for this.","AnswerCount":3,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":331,"Q_Id":777090,"Users Score":3,"Answer":"If a flat file is fast enough, then go with that, since it's very simple and accessible. Storing as XML and JSON but still using a flat file probably is very comparable in performance.\nYou might want to consider (ignore this if you just left it out of your question) sanitizing\/filtering the text, so that users can't break your HTML by e.g. entering \"<\/p>\" in the comment text.","Q_Score":1,"Tags":"python,xml,json","A_Id":777119,"CreationDate":"2009-04-22T12:56:00.000","Title":"fastest way to store comment data python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Other than basic python syntax, what other key areas should I learn to get a website live?\nIs there a web.config in the python world?\nWhich libraries handle things like authentication? or is that all done manually via session cookies and database tables?\nAre there any web specific libraries?\nEdit: sorry!\nI am well versed in asp.net, I want to branch out and learn Python, hence this question (sorry, terrible start to this question I know).","AnswerCount":5,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":177,"Q_Id":777924,"Users Score":0,"Answer":"Oh, golly.\nLook, this is gonna be real hard to answer because, read as you wrote it, you're missing a lot of steps. Like, you need a web server, a design, some HTML, and so on.\nAre you building from the ground up? 
Asking about Python makes me suspect you may be using something like Zope.","Q_Score":0,"Tags":"python","A_Id":777952,"CreationDate":"2009-04-22T15:46:00.000","Title":"Other than basic python syntax, what other key areas should I learn to get a website live?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am writing a rather simple http web server in python3. The web server needs to be simple - only basic reading from config files, etc. I am using only standard libraries and for now it works rather ok. \nThere is only one requirement for this project, which I can't implement on my own - virtual hosts. I need to have at least two virtual hosts, defined in config files. The problem is that I can't find a way to implement them in python. Does anyone have any guides, articles, maybe a simple implementation of how this can be done?\nI would be grateful for any help.","AnswerCount":2,"Available Count":1,"Score":1.0,"is_accepted":false,"ViewCount":3802,"Q_Id":781466,"Users Score":10,"Answer":"Virtual hosts work by obeying the Host: header in the HTTP request.\nJust read the headers of the request, and take action based on the value of the Host: header","Q_Score":3,"Tags":"python,http,python-3.x,virtualhost","A_Id":781474,"CreationDate":"2009-04-23T12:20:00.000","Title":"Python3 Http Web Server: virtual hosts","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Of course an HTML page can be parsed using any number of python parsers, but I'm surprised that there don't seem to be any public parsing scripts to extract meaningful content (excluding sidebars, navigation, etc.) from a given HTML doc.
\nI'm guessing it's something like collecting DIV and P elements and then checking them for a minimum amount of text content, but I'm sure a solid implementation would include plenty of things that I haven't thought of.","AnswerCount":5,"Available Count":1,"Score":0.0399786803,"is_accepted":false,"ViewCount":3581,"Q_Id":796490,"Users Score":1,"Answer":"What is meaningful and what is not depends on the semantics of the page. If the semantics are crappy, your code won't \"guess\" what is meaningful. I use readability, which you linked in the comment, and I see that on many pages I try to read it does not provide any result, let alone a decent one.\nIf someone puts the content in a table, you're doomed. Try readability on a phpbb forum and you'll see what I mean.\nIf you want to do it, go with a regexp on

<p><\/p>, or parse the DOM.","Q_Score":8,"Tags":"python,html,parsing,semantics,html-content-extraction","A_Id":796530,"CreationDate":"2009-04-28T06:40:00.000","Title":"python method to extract content (excluding navigation) from an HTML page","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"If I have some xml containing things like the following mediawiki markup:\n\n\" ...collected in the 12th century, of which [[Alexander the Great]] was the\n hero, and in which he was represented,\n somewhat like the British [[King\n Arthur|Arthur]]\"\n\nwhat would be the appropriate arguments to something like:\nre.findall([[__?__]], article_entry)\nI am stumbling a bit on escaping the double square brackets, and getting the proper link for text like: [[Alexander of Paris|poet named Alexander]]","AnswerCount":4,"Available Count":1,"Score":0.049958375,"is_accepted":false,"ViewCount":1132,"Q_Id":809837,"Users Score":1,"Answer":"RegExp: \w+( \w+)+(?=]])\ninput\n[[Alexander of Paris|poet named Alexander]]\noutput\npoet named Alexander\ninput\n[[Alexander of Paris]]\noutput\nAlexander of Paris","Q_Score":3,"Tags":"python,regex,mediawiki","A_Id":809900,"CreationDate":"2009-05-01T01:11:00.000","Title":"Python regex for finding contents of MediaWiki markup links","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"i'm trying to save an image from a website using selenium server & python client.\ni know the image's URL, but i can't find the code to save it, either when it's the document itself, or when it's embedded in the current browser session.\nthe workaround i found so far is to save the page's screenshot (there are 2
selenium methods for doing just that), but i want the original image.\ni don't mind fiddling with the clicking menu options etc. but i couldn't find how.\nthanks","AnswerCount":5,"Available Count":2,"Score":0.1194272985,"is_accepted":false,"ViewCount":9916,"Q_Id":816704,"Users Score":3,"Answer":"To do this the way you want (to actually capture the content sent down to the browser) you'd need to modify Selenium RC's proxy code (see ProxyHandler.java) and store the files locally on the disk in parallel to sending the response back to the browser.","Q_Score":9,"Tags":"python,selenium","A_Id":827891,"CreationDate":"2009-05-03T09:51:00.000","Title":"save an image with selenium & firefox","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"i'm trying to save an image from a website using selenium server & python client.\ni know the image's URL, but i can't find the code to save it, either when it's the document itself, or when it's embedded in the current browser session.\nthe workaround i found so far is to save the page's screenshot (there are 2 selenium methods for doing just that), but i want the original image.\ni don't mind fiddling with the clicking menu options etc. but i couldn't find how.\nthanks
Hope this helps.","Q_Score":9,"Tags":"python,selenium","A_Id":816777,"CreationDate":"2009-05-03T09:51:00.000","Title":"save an image with selenium & firefox","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Given an ip address (say 192.168.0.1), how do I check if it's in a network (say 192.168.0.0\/24) in Python?\nAre there general tools in Python for ip address manipulation? Stuff like host lookups, ip address to int, network address with netmask to int and so on? Hopefully in the standard Python library for 2.5.","AnswerCount":28,"Available Count":1,"Score":0.0142847425,"is_accepted":false,"ViewCount":174240,"Q_Id":819355,"Users Score":2,"Answer":"#This works properly without the weird byte by byte handling\nimport socket\nimport struct\n\ndef addressInNetwork(ip,net):\n '''Is an address in a network'''\n # Convert addresses to host order, so shifts actually make sense\n ip = struct.unpack('>L',socket.inet_aton(ip))[0]\n netaddr,bits = net.split('\/')\n netaddr = struct.unpack('>L',socket.inet_aton(netaddr))[0]\n # Must shift left an all ones value, \/32 = zero shift, \/0 = 32 shift left\n netmask = (0xffffffff << (32-int(bits))) & 0xffffffff\n # There's no need to mask the network address, as long as it's a proper network address\n return (ip & netmask) == netaddr","Q_Score":123,"Tags":"python,networking,ip-address,cidr","A_Id":10053031,"CreationDate":"2009-05-04T08:59:00.000","Title":"How can I check if an ip is in a network in Python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm using wx.FileDialog in a wxPython 2.8.8.0 application, under Xubuntu 8.10..
My problem is that this dialog isn't network-aware, so I can't browse Samba shares.\nI see that this problem plagues other applications too (Firefox, Audacious...) so I'd like to ask where I could find information on how to make it work.\nIs that dialog supposed to be already network-aware? Am I missing something? Some library maybe? Or should I write my own implementation?\nMany thanks!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":276,"Q_Id":825724,"Users Score":0,"Answer":"Robin Dunn himself told me that\n\nIt's using the \"native\" GTK file\n dialog, just like the other apps, so \n there isn't anything that wx can do\n about it.\n\nSo as a workaround I ended up installing gvfs-fuse and browsing\nthe network through $HOME\/.gvfs. A bit clunky but it works.","Q_Score":1,"Tags":"linux,ubuntu,wxpython","A_Id":876524,"CreationDate":"2009-05-05T16:22:00.000","Title":"Network-aware wx.FileDialog","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Many python libraries, even recently written ones, use httplib2 or the socket interface to perform networking tasks.\nThose are obviously easier to code on than Twisted due to their blocking nature, but I think this is a drawback when integrating them with other code, especially GUI code. If you want scalability, concurrency or GUI integration while avoiding multithreading, Twisted is then a natural choice.\nSo I would be interested in opinions on those matters:\n\nShould new networking code (with the exception of small command line tools) be written with Twisted?\nWould you mix Twisted, httplib2 or socket code in the same project?\nIs Twisted pythonic for most libraries (it is more complex than alternatives, introduces a dependency on a non-standard package...)?\n\nEdit: please let me phrase this in another way.
Do you feel writing new library code with Twisted may add a barrier to its adoption? Twisted has obvious benefits (especially portability and scalability as stated by gimel), but the fact that it is not a core python library may be considered by some as a drawback.","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1969,"Q_Id":846950,"Users Score":0,"Answer":"Should new networking code (with the exception of small command line tools) be written with Twisted?\n\n\nMaybe. It really depends. Sometimes it's just easy enough to wrap the blocking calls in their own thread. Twisted is good for large scale network code.\n\nWould you mix Twisted, httplib2 or socket code in the same project?\n\n\nSure. But just remember that Twisted is single threaded, and that any blocking call in Twisted will block the entire engine.\n\nIs Twisted pythonic for most libraries (it is more complex than alternatives, introduces a dependency on a non-standard package...)?\n\n\nThere are many Twisted zealots that will say it belongs in the Python standard library.
But many people can implement decent networking code with asyncore\/asynchat.","Q_Score":6,"Tags":"python,networking,sockets,twisted,httplib2","A_Id":847014,"CreationDate":"2009-05-11T06:30:00.000","Title":"Is Twisted an httplib2\/socket replacement?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I want to communicate using flex client with GAE, I am able to communicate using XML from GAE to Flex but how should I post from flex3 to python code present on App Engine.\nCan anyone give me a hint about how to send login information from Flex to python \nAny ideas suggest me some examples.....please provide me some help \nRegards,\nRadhika","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1782,"Q_Id":854353,"Users Score":0,"Answer":"Do an HTTP post from Flex to your AppEngine app using the URLRequest class.","Q_Score":0,"Tags":"python,apache-flex,google-app-engine","A_Id":854403,"CreationDate":"2009-05-12T19:11:00.000","Title":"How to establish communication between flex and python code build on Google App Engine","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"Nice to meet you.\nA socket makes a program in Python by Linux (the transmission of a message) ⇒ Windows (the reception), but the following errors occur and cannot connect now.\nLinux, Windows are network connection together, and there is the authority to cut.\nsocket.error: (111, 'Connection refused')\nCould you help me!?","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":619,"Q_Id":881332,"Users Score":2,"Answer":"111 means the listener is down\/not accepting connections -
restart the Windows app that should be listening for connections, or disconnect any already-bound clients.","Q_Score":0,"Tags":"python,sockets","A_Id":881349,"CreationDate":"2009-05-19T07:00:00.000","Title":"Error occurs when I connect with socket in Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Building an smtp client in python, which can send mail and also show that mail has been received through any mail service, for example gmail.","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":769,"Q_Id":892196,"Users Score":0,"Answer":"Depends what you mean by \"received\". It's possible to verify \"delivery\" of a message to a server but there is no 100% reliable guarantee it actually ended up in a mailbox. smtplib will throw an exception on certain conditions (like the remote end reporting user not found) but just as often the remote end will accept the mail and then either filter it or send a bounce notice at a later time.","Q_Score":0,"Tags":"python,smtp","A_Id":892264,"CreationDate":"2009-05-21T10:04:00.000","Title":"How would one build an smtp client in python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to create a simple online poll application. I have created a backend in python that handles vote tracking, poll display, results display and admin setup. However, if I wanted a third party to be able to embed the poll in their website, what would be the recommended way of doing so? I would love to be able to provide a little javascript to drop into the third party's web page, but I can't use javascript because it would require cross-domain access.
What approach would provide an easy general solution for third parties?","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":552,"Q_Id":903104,"Users Score":1,"Answer":"Make your app into a Google Gadget, Open Social gadget, or other kind of gadget -- these are all designed to be embeddable into third-party pages with as little fuss as possible.","Q_Score":1,"Tags":"javascript,python,cross-domain","A_Id":903112,"CreationDate":"2009-05-24T04:53:00.000","Title":"How to embed a Poll in a Web Page","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm trying to write a script which can automatically download gameplay videos. The webpages look like dota.sgamer.com\/Video\/Detail\/402 and www.wfbrood.com\/movie\/spl2009\/movie_38214.html; they have an flv player embedded in the flash plugin.\nIs there any library to help me find out the exact flv urls? Or any other ideas to get it?\nMany thanks for your replies","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":382,"Q_Id":905403,"Users Score":0,"Answer":"If the embed player makes use of some variable where the flv path is set then you can download it; if not..
I doubt you'll find something to do it \"automatically\" since every site makes its own player and identifies the file by id, not by path, which makes it hard to know where the flv file is.","Q_Score":1,"Tags":"python,download,flv","A_Id":905451,"CreationDate":"2009-05-25T05:07:00.000","Title":"Is there any library to find out urls of embedded flvs in a webpage?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have this-\n\nen.wikipedia.org\/w\/api.php?action=login&lgname=user&lgpassword=password\n\nBut it doesn't work because it is a get request. What would the post request version of this be?\nCheers!","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":276,"Q_Id":909929,"Users Score":0,"Answer":"Since your sample is in PHP, use $_REQUEST; this holds the contents of both $_GET and $_POST.","Q_Score":0,"Tags":"python,forms,post,get","A_Id":909975,"CreationDate":"2009-05-26T10:11:00.000","Title":"Changing a get request to a post in python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Hey i have a webpage for searching a database. i would like to be able to implement cookies using python to store what a user searches for and provide them with a recently searched field when they return. is there a way to implement this using the python Cookie library??","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":1329,"Q_Id":920278,"Users Score":1,"Answer":"Usually, we do the following.\n\nUse a framework.\nEstablish a session. Ideally, ask for a username of some kind.
If you don't want to ask for names or anything, you can try to use the browser's IP address as the key for the session (this can turn into a nightmare, but you can try it.)\nUsing the session identification (username or IP address), save the searches in a database on your server.\nWhen the person logs in again, retrieve their query information from your local database.\n\nMoral of the story. Don't trust the cookie to have anything in it but session identification. And even then, it will get hijacked either on purpose or accidentally.\n\nIntentional hijacking is the way one person poses as another.\nAccidental hijacking occurs when multiple people share the same IP address (because they share the same computer).","Q_Score":0,"Tags":"python,cookies","A_Id":920727,"CreationDate":"2009-05-28T10:57:00.000","Title":"Using cookies with python to store searches","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I know there is ftplib for ftp, shutil for local files, what about NFS? I know urllib2 can get files via HTTP\/HTTPS\/FTP\/FTPS, but it can't put files.\nIf there is a uniform library that automatically detects the protocol (FTP\/NFS\/LOCAL) with URI and deals with file transfer (get\/put) transparently, it's even better, does it exist?","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":1426,"Q_Id":925716,"Users Score":1,"Answer":"Have a look at KDE IOSlaves.
They can manage all the protocols you describe, plus a few others (samba, ssh, ...).\nYou can instantiate IOSlaves through PyKDE or, if that dependency is too big, you can probably manage the ioslave from python with the subprocess module.","Q_Score":3,"Tags":"python,file,ftp,networking,nfs","A_Id":926044,"CreationDate":"2009-05-29T12:22:00.000","Title":"Is there a uniform python library to transfer files using different protocols","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to automate an app using python. I need help to send keyboard commands through python. I am using a powerBook G4.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":665,"Q_Id":939746,"Users Score":0,"Answer":"To the best of my knowledge, python does not contain the ability to simulate keystrokes. You can however use python to call a program which has the functionality that you need for OS X. You could also write said program using Objective C most likely.\nOr you could save yourself the pain and use Automator. Perhaps if you posted more details about what you were automating, I could add something further.","Q_Score":1,"Tags":"python","A_Id":947222,"CreationDate":"2009-06-02T14:02:00.000","Title":"How i can send the commands from keyboards using python. I am trying to automate mac app (GUI)","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Goal: simple browser app, for navigating files on a web server, in a tree view.\nBackground: Building a web site as a learning experience, w\/ Apache, mod_python, Python code. (No mod_wsgi yet.)\nWhat tools should I learn to write the browser tree?
I see JavaScript, Ajax, neither of which I know. Learn them? Grab a JS example from the web and rework? Can such a thing be built in raw HTML? In Python I'm an advanced beginner, but I realize that's server side.\nIf you were going to build such a toy from scratch, what would you use? What would be the totally easy, cheesy way, the intermediate way, the fully professional way?\nNo Django yet please -- This is an exercise in learning web programming nuts and bolts.","AnswerCount":4,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":552,"Q_Id":941638,"Users Score":0,"Answer":"Set the \"Indexes\" option on the directory in the apache config.\nTo learn how to build webapps in python, learn django.","Q_Score":2,"Tags":"javascript,python,html,web-applications","A_Id":943612,"CreationDate":"2009-06-02T20:11:00.000","Title":"How to construct a web file browser?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Goal: simple browser app, for navigating files on a web server, in a tree view.\nBackground: Building a web site as a learning experience, w\/ Apache, mod_python, Python code. (No mod_wsgi yet.)\nWhat tools should I learn to write the browser tree? I see JavaScript, Ajax, neither of which I know. Learn them? Grab a JS example from the web and rework? Can such a thing be built in raw HTML? In Python I'm an advanced beginner, but I realize that's server side.\nIf you were going to build such a toy from scratch, what would you use?
What would be the totally easy, cheesy way, the intermediate way, the fully professional way?\nNo Django yet please -- This is an exercise in learning web programming nuts and bolts.","AnswerCount":4,"Available Count":2,"Score":0.049958375,"is_accepted":false,"ViewCount":552,"Q_Id":941638,"Users Score":1,"Answer":"If you want to make an interactive browser, you have to learn JS and ajax. \nIf you want to build only a browser based on links, python would be enough.","Q_Score":2,"Tags":"javascript,python,html,web-applications","A_Id":941664,"CreationDate":"2009-06-02T20:11:00.000","Title":"How to construct a web file browser?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I use python 2.4.1 on Linux, and a python package written inside the company I work in, for establishing a connection between 2 hosts for test purposes.\nUpon establishing the connection the side defined as the client side failed when calling socket.connect with the correct parameters (I checked) with the error code 111. After searching the web for what this error means, I learned that it means that the connection was actively refused.\nBut the code in the package for establishing the connection is supposed to deal with it, only it knows 10061 as the error code for this same error: The connection is refused.\nCould it be that there are identical error codes for the same logical errors? Could it be that 111 is a system error of the Linux OS, as 10061 is python's or even another OS?
Even so, isn't the entire concept of error codes to unify the logical errors with the same codes?\nShould I simply add the 111 error code to the handling condition?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":867,"Q_Id":961465,"Users Score":6,"Answer":"It appears Python is exposing the error code from the OS - the interpretation of the code is OS-dependent.\n111 is ECONNREFUSED on many Linux systems, and on Cygwin.\n146 is ECONNREFUSED on Solaris.\n10061 is WSAECONNREFUSED in winerror.h - it's the Windows Socket API's version of ECONNREFUSED.\nNo doubt on other systems, it's different again.\nThe correct way to handle this is to use symbolic comparisons based on the OS's definition of ECONNREFUSED; that's the way you do it in C, for example. In other words, have a constant called ECONNREFUSED that has the value of ECONNREFUSED for that platform, in a platform-specific library (which will be necessary to link to the OS's socket primitives in any case), and compare error codes with the ECONNREFUSED constant, rather than magic numbers.\nI don't know what Python's standard approach to OS error codes is. I suspect it's not terribly well thought out.","Q_Score":2,"Tags":"python,error-handling,sockets","A_Id":961484,"CreationDate":"2009-06-07T08:08:00.000","Title":"Identical Error Codes","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a task to download Gbs of data from a website. The data is in the form of .gz files, each file being 45mb in size.\nThe easy way to get the files is to use \"wget -r -np -A files url\". This will download data in a recursive format and mirror the website.
The download rate is very high, 4mb\/sec.\nBut, just to play around I was also using python to build my urlparser.\nDownloading via Python's urlretrieve is damn slow, possibly 4 times as slow as wget. The download rate is 500kb\/sec. I use HTMLParser for parsing the href tags.\nI am not sure why this is happening. Are there any settings for this?\nThanks","AnswerCount":10,"Available Count":5,"Score":0.0,"is_accepted":false,"ViewCount":37225,"Q_Id":974741,"Users Score":0,"Answer":"There shouldn't be a difference really. All urlretrieve does is make a simple HTTP GET request. Have you taken out your data processing code and done a straight throughput comparison of wget vs. pure python?","Q_Score":9,"Tags":"python,urllib2,wget","A_Id":975759,"CreationDate":"2009-06-10T10:18:00.000","Title":"wget Vs urlretrieve of python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have a task to download Gbs of data from a website. The data is in the form of .gz files, each file being 45mb in size.\nThe easy way to get the files is to use \"wget -r -np -A files url\". This will download data in a recursive format and mirror the website. The download rate is very high, 4mb\/sec.\nBut, just to play around I was also using python to build my urlparser.\nDownloading via Python's urlretrieve is damn slow, possibly 4 times as slow as wget. The download rate is 500kb\/sec. I use HTMLParser for parsing the href tags.\nI am not sure why this is happening. Are there any settings for this?\nThanks","AnswerCount":10,"Available Count":5,"Score":0.0,"is_accepted":false,"ViewCount":37225,"Q_Id":974741,"Users Score":0,"Answer":"Please show us some code. I'm pretty sure that it has to do with the code and not with urlretrieve.
\nI've worked with it in the past and never had any speed related issues.","Q_Score":9,"Tags":"python,urllib2,wget","A_Id":976135,"CreationDate":"2009-06-10T10:18:00.000","Title":"wget Vs urlretrieve of python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have a task to download Gbs of data from a website. The data is in the form of .gz files, each file being 45mb in size.\nThe easy way to get the files is to use \"wget -r -np -A files url\". This will download data in a recursive format and mirror the website. The download rate is very high, 4mb\/sec.\nBut, just to play around I was also using python to build my urlparser.\nDownloading via Python's urlretrieve is damn slow, possibly 4 times as slow as wget. The download rate is 500kb\/sec. I use HTMLParser for parsing the href tags.\nI am not sure why this is happening. Are there any settings for this?\nThanks","AnswerCount":10,"Available Count":5,"Score":0.0,"is_accepted":false,"ViewCount":37225,"Q_Id":974741,"Users Score":0,"Answer":"You can use wget -k to engage relative links in all urls.","Q_Score":9,"Tags":"python,urllib2,wget","A_Id":2350655,"CreationDate":"2009-06-10T10:18:00.000","Title":"wget Vs urlretrieve of python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have a task to download Gbs of data from a website. The data is in the form of .gz files, each file being 45mb in size.\nThe easy way to get the files is to use \"wget -r -np -A files url\". This will download data in a recursive format and mirror the website.
The download rate is very high, 4mb\/sec.\nBut, just to play around I was also using python to build my urlparser.\nDownloading via Python's urlretrieve is damn slow, possibly 4 times as slow as wget. The download rate is 500kb\/sec. I use HTMLParser for parsing the href tags.\nI am not sure why this is happening. Are there any settings for this?\nThanks","AnswerCount":10,"Available Count":5,"Score":0.0199973338,"is_accepted":false,"ViewCount":37225,"Q_Id":974741,"Users Score":1,"Answer":"Since python suggests using urllib2 instead of urllib, I ran a test between urllib2.urlopen and wget.\nThe result is, it takes nearly the same time for both of them to download the same file. Sometimes, urllib2 performs even better. \nThe advantage of wget lies in a dynamic progress bar to show the percent finished and the current download speed when transferring.\nThe file size in my test is 5MB. I haven't used any cache module in python and I am not aware of how wget works when downloading a big file.","Q_Score":9,"Tags":"python,urllib2,wget","A_Id":7782898,"CreationDate":"2009-06-10T10:18:00.000","Title":"wget Vs urlretrieve of python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have a task to download Gbs of data from a website. The data is in the form of .gz files, each file being 45mb in size.\nThe easy way to get the files is to use \"wget -r -np -A files url\". This will download data in a recursive format and mirror the website. The download rate is very high, 4mb\/sec.\nBut, just to play around I was also using python to build my urlparser.\nDownloading via Python's urlretrieve is damn slow, possibly 4 times as slow as wget. The download rate is 500kb\/sec. I use HTMLParser for parsing the href tags.\nI am not sure why this is happening.
Are there any settings for this?\nThanks","AnswerCount":10,"Available Count":5,"Score":0.0199973338,"is_accepted":false,"ViewCount":37225,"Q_Id":974741,"Users Score":1,"Answer":"Maybe you can use wget to download and then inspect the data in Python?","Q_Score":9,"Tags":"python,urllib2,wget","A_Id":974809,"CreationDate":"2009-06-10T10:18:00.000","Title":"wget Vs urlretrieve of python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I was wondering when dealing with a web service API that returns XML, whether it's better (faster) to just call the external service each time and parse the XML (using ElementTree) for display on your site or to save the records into the database (after parsing it once or however many times you need to each day) and make database calls instead for that same information.","AnswerCount":9,"Available Count":9,"Score":0.0665680765,"is_accepted":false,"ViewCount":856,"Q_Id":978581,"Users Score":3,"Answer":"Consuming the web services is more efficient because there are a lot more things you can do to scale your web services and web server (via caching, etc.). By consuming the middle layer, you also have the option to change the returned data format (e.g. you can decide to use JSON rather than XML). Scaling a database is much harder (involving replication, etc.) 
so in general, reduce hits on the DB if you can.","Q_Score":1,"Tags":"python,mysql,xml,django,parsing","A_Id":978590,"CreationDate":"2009-06-10T23:09:00.000","Title":"Is it more efficient to parse external XML or to hit the database?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I was wondering when dealing with a web service API that returns XML, whether it's better (faster) to just call the external service each time and parse the XML (using ElementTree) for display on your site or to save the records into the database (after parsing it once or however many times you need to each day) and make database calls instead for that same information.","AnswerCount":9,"Available Count":9,"Score":0.0,"is_accepted":false,"ViewCount":856,"Q_Id":978581,"Users Score":0,"Answer":"It varies from case to case; you'll have to measure (or at least make an educated guess).\nYou'll have to consider several things.\nWeb service\n\nit might hit a database itself\nit can be cached\nit will introduce network latency and might be unreliable\nor it could be on a local network and faster than accessing even local disk\n\nDB\n\nmight be slow since it needs to access disk (although databases have internal caches, those are usually not targeted)\nshould be reliable\n\nTechnology itself doesn't mean much in terms of speed - in one case the database parses SQL, in the other an XML parser parses XML, and the database is usually accessed via a socket as well, so you have both parsing and network in either case.\nCaching data in your application, if applicable, is probably a good idea.","Q_Score":1,"Tags":"python,mysql,xml,django,parsing","A_Id":978717,"CreationDate":"2009-06-10T23:09:00.000","Title":"Is it more efficient to parse external XML or to hit the database?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and 
Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I was wondering when dealing with a web service API that returns XML, whether it's better (faster) to just call the external service each time and parse the XML (using ElementTree) for display on your site or to save the records into the database (after parsing it once or however many times you need to each day) and make database calls instead for that same information.","AnswerCount":9,"Available Count":9,"Score":0.022218565,"is_accepted":false,"ViewCount":856,"Q_Id":978581,"Users Score":1,"Answer":"There is not enough information to be able to say for sure in the general case. Why don't you do some tests and find out? Since it sounds like you are using Python, you will probably want to use the timeit module.\nSome things that could affect the result:\n\nPerformance of the web service you are using\nReliability of the web service you are using\nDistance between servers\nAmount of data being returned\n\nI would guess that if it is cacheable, a cached version of the data will be faster, but that does not necessarily mean using a local RDBMS; it might mean something like memcached or an in-memory cache in your application.","Q_Score":1,"Tags":"python,mysql,xml,django,parsing","A_Id":978603,"CreationDate":"2009-06-10T23:09:00.000","Title":"Is it more efficient to parse external XML or to hit the database?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I was wondering when dealing with a web service API that returns XML, whether it's better (faster) to just call the external service each time and parse the XML (using ElementTree) for display on your site or to save the records into the database (after parsing it once or 
however many times you need to each day) and make database calls instead for that same information.","AnswerCount":9,"Available Count":9,"Score":0.0,"is_accepted":false,"ViewCount":856,"Q_Id":978581,"Users Score":0,"Answer":"As a few people have said, it depends, and you should test it.\nOften external services are slow, and caching them locally (in a database in memory, e.g., with memcached) is faster. But perhaps not.\nFortunately, it's cheap and easy to test.","Q_Score":1,"Tags":"python,mysql,xml,django,parsing","A_Id":978773,"CreationDate":"2009-06-10T23:09:00.000","Title":"Is it more efficient to parse external XML or to hit the database?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I was wondering when dealing with a web service API that returns XML, whether it's better (faster) to just call the external service each time and parse the XML (using ElementTree) for display on your site or to save the records into the database (after parsing it once or however many times you need to each day) and make database calls instead for that same information.","AnswerCount":9,"Available Count":9,"Score":-0.022218565,"is_accepted":false,"ViewCount":856,"Q_Id":978581,"Users Score":-1,"Answer":"It sounds like you essentially want to cache results, and are wondering if it's worth it. But if so, I would NOT use a database (I assume you are thinking of a relational DB): RDBMSs are not good for caching; even though many use them. You don't need persistence nor ACID.\nIf choice was between Oracle\/MySQL and external web service, I would start with just using service.\nInstead, consider real caching systems; local or not (memcache, simple in-memory caches etc).\nOr if you must use a DB, use key\/value store, BDB works well. 
Store response message in its serialized form (XML), try to fetch from cache, if not, from service, parse. Or if there's a convenient and more compact serialization, store and fetch that.","Q_Score":1,"Tags":"python,mysql,xml,django,parsing","A_Id":979028,"CreationDate":"2009-06-10T23:09:00.000","Title":"Is it more efficient to parse external XML or to hit the database?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I was wondering when dealing with a web service API that returns XML, whether it's better (faster) to just call the external service each time and parse the XML (using ElementTree) for display on your site or to save the records into the database (after parsing it once or however many times you need to each day) and make database calls instead for that same information.","AnswerCount":9,"Available Count":9,"Score":0.0,"is_accepted":false,"ViewCount":856,"Q_Id":978581,"Users Score":0,"Answer":"Test definitely. As a rule of thumb, XML is good for communicating between apps, but once you have the data inside of your app, everything should go into a database table. This may not apply in all cases, but 95% of the time it has for me. Anytime I ever tried to store data any other way (ex. 
XML in a content management system) I ended up wishing I had just used good old sprocs and SQL Server.","Q_Score":1,"Tags":"python,mysql,xml,django,parsing","A_Id":978864,"CreationDate":"2009-06-10T23:09:00.000","Title":"Is it more efficient to parse external XML or to hit the database?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I was wondering when dealing with a web service API that returns XML, whether it's better (faster) to just call the external service each time and parse the XML (using ElementTree) for display on your site or to save the records into the database (after parsing it once or however many times you need to each day) and make database calls instead for that same information.","AnswerCount":9,"Available Count":9,"Score":1.2,"is_accepted":true,"ViewCount":856,"Q_Id":978581,"Users Score":4,"Answer":"Everyone is being very polite in answering this question: \"it depends\"... \"you should test\"... and so forth.\nTrue, the question does not go into great detail about the application and network topologies involved, but if the question is even being asked, then it's likely a) the DB is \"local\" to the application (on the same subnet, or the same machine, or in memory), and b) the webservice is not. After all, the OP uses the phrases \"external service\" and \"display on your own site.\" The phrase \"parsing it once or however many times you need to each day\" also suggests a set of data that doesn't exactly change every second.\nThe classic SOA myth is that the network is always available; going a step further, I'd say it's a myth that the network is always available with low latency. Unless your own internal systems are crap, sending an HTTP query across the Internet will always be slower than a query to a local DB or DB cluster. 
There are any number of reasons for this: number of hops to the remote server, outage or degradation issues that you can't control on the remote end, and the internal processing time for the remote web service application to analyze your request, hit its own persistence backend (aka DB), and return a result.\nFire up your app. Measure latency and response times to your DB. Now do the same for a remote web service. Unless your DB is also across the Internet, you'll notice a huge difference.\nIt's not at all hard for a competent technologist to scale a DB, or for you to bypass the DB entirely by caching with memcached and other paradigms; the latency between servers sitting near each other in the datacentre is monumentally less than between machines over the Internet (and more secure, to boot). Even if achieving this scale requires some thought, it's under your control, unlike a remote web service whose scaling and latency are totally opaque to you. I, for one, would not be too happy with the idea that the availability and responsiveness of my site are based on someone else entirely.\nFinally, what happens if the remote web service is unavailable? Imagine a world where every request to your site involves a request over the Internet to some other site. What happens if that other site is unavailable? Do your users watch a spinning cursor of death for several hours? Do they enjoy an Error 500 while your site borks on this unexpected external dependency? 
\nIf you find yourself adopting an architecture whose fundamental features depend on a remote Internet call for every request, think very carefully about your application before deciding if you can live with the consequences.","Q_Score":1,"Tags":"python,mysql,xml,django,parsing","A_Id":978857,"CreationDate":"2009-06-10T23:09:00.000","Title":"Is it more efficient to parse external XML or to hit the database?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I was wondering when dealing with a web service API that returns XML, whether it's better (faster) to just call the external service each time and parse the XML (using ElementTree) for display on your site or to save the records into the database (after parsing it once or however many times you need to each day) and make database calls instead for that same information.","AnswerCount":9,"Available Count":9,"Score":0.022218565,"is_accepted":false,"ViewCount":856,"Q_Id":978581,"Users Score":1,"Answer":"It depends - who is calling the web service? Is the web service called every time the user hits the page? If that's the case I'd recommend introducing a caching layer of some sort - many web service API's throttle the amount of hits you can make per hour.\nWhether you choose to parse the cached XML on the fly or call the data from a database probably won't matter (unless we are talking enterprise scaling here). 
Personally, I'd much rather make a simple SQL call than write a DOM Parser (which is much more prone to exceptional scenarios).","Q_Score":1,"Tags":"python,mysql,xml,django,parsing","A_Id":978607,"CreationDate":"2009-06-10T23:09:00.000","Title":"Is it more efficient to parse external XML or to hit the database?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I was wondering when dealing with a web service API that returns XML, whether it's better (faster) to just call the external service each time and parse the XML (using ElementTree) for display on your site or to save the records into the database (after parsing it once or however many times you need to each day) and make database calls instead for that same information.","AnswerCount":9,"Available Count":9,"Score":1.0,"is_accepted":false,"ViewCount":856,"Q_Id":978581,"Users Score":6,"Answer":"First off -- measure. Don't just assume that one is better or worse than the other.\nSecond, if you really don't want to measure, I'd guess the database is a bit faster (assuming the database is relatively local compared to the web service). Network latency usually is more than parse time unless we're talking a really complex database or really complex XML.","Q_Score":1,"Tags":"python,mysql,xml,django,parsing","A_Id":978593,"CreationDate":"2009-06-10T23:09:00.000","Title":"Is it more efficient to parse external XML or to hit the database?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I tried using the ssl module in Python 2.6 but I was told that it wasn't available. 
After installing OpenSSL, I recompiled 2.6 but the problem persists.\nAny suggestions?","AnswerCount":3,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":7646,"Q_Id":979551,"Users Score":4,"Answer":"Did you install the OpenSSL development libraries? I had to install openssl-devel on CentOS, for example. On Ubuntu, sudo apt-get build-dep python2.5 did the trick (even for Python 2.6).","Q_Score":2,"Tags":"python,ssl,openssl","A_Id":979598,"CreationDate":"2009-06-11T05:56:00.000","Title":"Adding SSL support to Python 2.6","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I tried using the ssl module in Python 2.6 but I was told that it wasn't available. After installing OpenSSL, I recompiled 2.6 but the problem persists.\nAny suggestions?","AnswerCount":3,"Available Count":2,"Score":-0.0665680765,"is_accepted":false,"ViewCount":7646,"Q_Id":979551,"Users Score":-1,"Answer":"Use the binaries provided by python.org or by your OS distributor. It's a lot easier than building it yourself, and all the features are usually compiled in.\nIf you really need to build it yourself, you'll need to provide more information here about what build options you provided, what your environment is like, and perhaps provide some logs.","Q_Score":2,"Tags":"python,ssl,openssl","A_Id":996622,"CreationDate":"2009-06-11T05:56:00.000","Title":"Adding SSL support to Python 2.6","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a Pythonic HTTP server that is supposed to determine client's IP. How do I do that in Python? 
Is there any way to get the request headers and extract the IP from there?\nPS: I'm using WebPy.","AnswerCount":2,"Available Count":1,"Score":0.2913126125,"is_accepted":false,"ViewCount":1464,"Q_Id":979599,"Users Score":3,"Answer":"web.env.get('REMOTE_ADDR')","Q_Score":3,"Tags":"python,http,header,request,ip","A_Id":979637,"CreationDate":"2009-06-11T06:16:00.000","Title":"Extracting IP from request in Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have installed lxml which was built using a standalone version of libxml2. The reason for this was that lxml needed a later version of libxml2 than what was currently installed.\nWhen I use the lxml module, how do I tell it (Python) where to find the correct version of the libxml2 shared library?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":2097,"Q_Id":985155,"Users Score":5,"Answer":"Assuming you're talking about a .so file, it's not up to Python to find it -- it's up to the operating system's dynamic library loader. For Linux, for example, LD_LIBRARY_PATH is the environment variable you need to set.","Q_Score":5,"Tags":"python","A_Id":985176,"CreationDate":"2009-06-12T05:28:00.000","Title":"How to specify native library search path for python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a django application hosted on webfaction which now has a static\/private ip.\nOur network in the office is obviously behind a firewall and the AD server is running behind this firewall. 
From inside the network I can authenticate using python-ldap with the AD's internal IP address and the port 389 and all works well.\nWhen I move this to the hosted web server, I change to the IP address and port that have been opened up on our firewall. For simplicity, the port we opened up is 389; however, the requests to authenticate always time out. When logged into Webfaction, running Python from the shell and querying the IP address, I get Webfaction's general IP address rather than my static IP.\nIs this what's happening when I try to auth in Django? Does the request come from the underlying IP address that Python is running on rather than the static IP that my firewall is expecting?\nI'm fairly clueless about all this networking and port mapping, so any help would be much appreciated! \nHope that makes sense?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":2571,"Q_Id":990459,"Users Score":0,"Answer":"There are quite a few components between your hosted Django application and your internal AD. You will need to test each to see if everything in the pathways between them is correct.\nSo your AD server is sitting behind your firewall. Your firewall has IP \"a.b.c.d\" and all traffic to the firewall IP on port 389 is forwarded to the AD server. I would recommend that you change this to a higher, more random port on your firewall, btw. Fewer scans there.\nWith the shell access you can test to see if you can reach your network. 
Have your firewall admin check the firewall logs while you try one of the following (or something similar with python) :\n\ncheck the route to your firewall (this might not work if webfaction blocks this, otherwise you will see a list of hosts along which your traffic will pass - if there is a firewall on the route somewhere you will see that your connection is lost there as this is dropped by default on most firewalls): \ntracert a.b.c.d\ndo a telnet to your firewall ip on port 389 (the telnet test will allow your firewall admin to see the connection attempts coming in on port 389 in his log. If those do arrive, that means that external comm should work fine):\ntelnet a.b.c.d 389 \n\nSimilarly, you need to check that your AD server receives these requests (check your logs) and as well can respond to them. Perhaps your AD server is not set up to talk to the firewall ?","Q_Score":0,"Tags":"python,active-directory,ldap,webserver","A_Id":991550,"CreationDate":"2009-06-13T10:09:00.000","Title":"Python LDAP Authentication from remote web server","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I want to use mechanize with python to get all the links of the page, and then open the links.How can I do it?","AnswerCount":2,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":8097,"Q_Id":1011975,"Users Score":2,"Answer":"The Browser object in mechanize has a links method that will retrieve all the links on the page.","Q_Score":3,"Tags":"python,mechanize","A_Id":1012022,"CreationDate":"2009-06-18T10:32:00.000","Title":"How to get links on a webpage using mechanize and open those links","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and 
DevOps":0,"Web Development":1},{"Question":"I'm trying to use python to sftp a file, and the code works great in the interactive shell -- even pasting it in all at once.\nWhen I try to import the file (just to compile it), the code hangs with no exceptions or obvious errors. \nHow do I get the code to compile, or does someone have working code that accomplishes sftp by some other method?\nThis code hangs right at the ssh.connect() statement:\n\n\"\"\" ProblemDemo.py\n Chopped down from the paramiko demo file.\n\n This code works in the shell but hangs when I try to import it!\n\"\"\"\nfrom time import sleep\nimport os\n\nimport paramiko\n\n\nsOutputFilename = \"redacted.htm\" #-- The payload file\n\nhostname = \"redacted.com\"\n####-- WARNING! Embedded passwords! Remove ASAP.\nsUsername = \"redacted\"\nsPassword = \"redacted\"\nsTargetDir = \"redacted\"\n\n#-- Get host key, if we know one.\nhostkeytype = None\nhostkey = None\nhost_keys = {}\ntry:\n host_keys = paramiko.util.load_host_keys(os.path.expanduser('~\/.ssh\/known_hosts'))\nexcept IOError:\n try:\n # try ~\/ssh\/ too, because windows can't have a folder named ~\/.ssh\/\n host_keys = paramiko.util.load_host_keys(os.path.expanduser('~\/ssh\/known_hosts'))\n except IOError:\n print '*** Unable to open host keys file'\n host_keys = {}\n\nif host_keys.has_key(hostname):\n hostkeytype = host_keys[hostname].keys()[0]\n hostkey = host_keys[hostname][hostkeytype]\n print 'Using host key of type %s' % hostkeytype\n\n\nssh = paramiko.Transport((hostname, 22))\n\nssh.connect(username=sUsername, password=sPassword, hostkey=hostkey)\n\nsftp = paramiko.SFTPClient.from_transport(ssh)\n\nsftp.chdir (sTargetDir)\n\nsftp.put (sOutputFilename, sOutputFilename)\n\nssh.close()","AnswerCount":3,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":6518,"Q_Id":1013064,"Users Score":0,"Answer":"Weirdness aside, I was just using import to compile the code. 
Turning the script into a function seems like an unnecessary complication for this kind of application.\nSearched for alternate means to compile and found:\n\nimport py_compile\npy_compile.compile(\"ProblemDemo.py\")\n\nThis generated a pyc file that works as intended.\nSo the lesson learned is that import is not a robust way to compile python scripts.","Q_Score":3,"Tags":"python,shell,compilation,sftp","A_Id":1013366,"CreationDate":"2009-06-18T14:45:00.000","Title":"Why does this python code hang on import\/compile but work in the shell?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have formatted text (with newlines, tabs, etc.) coming in from a Telnet connection. I have a python script that manages the Telnet connection and embeds the Telnet response in XML that then gets passed through an XSLT transform. How do I pass that XML through the transform without losing the original formatting? I have access to the transformation script and the python script but not the transform invocation itself.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":223,"Q_Id":1015816,"Users Score":0,"Answer":"Data stored in XML comes out the same way it goes in. So if you store the text in an element, no whitespace and newlines are lost unless you tamper with the data in the XSLT. \nEnclosing the text in CDATA is unnecessary unless there is some formatting that is invalid in XML (pointy brackets, ampersands, quotes) and you don't want to XML-escape the text under any circumstances. 
This is up to you, but in any case XML-escaping is completely transparent when the XML is handled with an XML-aware tool chain.\nTo answer your question more specifically, you need to show some input, the essential part of the transformation, and some output.","Q_Score":0,"Tags":"python,xslt","A_Id":1016919,"CreationDate":"2009-06-19T00:02:00.000","Title":"Passing Formatted Text Through XSLT","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to build some statistics for an email group I participate. Is there any Python API to access the email data on a GoogleGroup?\nAlso, I know some statistics are available on the group's main page. I'm looking for something more complex than what is shown there.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1163,"Q_Id":1017794,"Users Score":3,"Answer":"There isn't an API that I know of, however you can access the XML feed and manipulate it as required.","Q_Score":6,"Tags":"python,google-groups","A_Id":1017810,"CreationDate":"2009-06-19T12:48:00.000","Title":"Is there an API to access a Google Group data?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am tired of clicking \"File\" and then \"Save Page As\" in Firefox when I want to save some websites.\nIs there any script to do this in Python? I would like to save the pictures and css files so that when I read it offline, it looks normal.","AnswerCount":5,"Available Count":1,"Score":0.0399786803,"is_accepted":false,"ViewCount":2633,"Q_Id":1035825,"Users Score":1,"Answer":"Like Cobbal stated, this is largely what wget is designed to do. 
I believe there's some flags\/arguments that you can set to make it download the entire page, CSS + all. I suggest just alias-ing into something more convenient to type, or tossing it into a quick script.","Q_Score":3,"Tags":"python","A_Id":1035855,"CreationDate":"2009-06-23T23:40:00.000","Title":"Any Python Script to Save Websites Like Firefox?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am trying to write a Python-based Web Bot that can read and interpret an HTML page, then execute an onClick function and receive the resulting new HTML page. I can already read the HTML page and I can determine the functions to be called by the onClick command, but I have no idea how to execute those functions or how to receive the resulting HTML code.\nAny ideas?","AnswerCount":7,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":6042,"Q_Id":1036660,"Users Score":0,"Answer":"Well obviously python won't interpret the JS for you (though there may be modules out there that can). I suppose you need to convert the JS instructions to equivalent transformations in Python.\nI suppose ElementTree or BeautifulSoup would be good starting points to interpret the HTML structure.","Q_Score":3,"Tags":"python,html,bots","A_Id":1036758,"CreationDate":"2009-06-24T06:15:00.000","Title":"Python Web-based Bot","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am trying to write a Python-based Web Bot that can read and interpret an HTML page, then execute an onClick function and receive the resulting new HTML page. 
I can already read the HTML page and I can determine the functions to be called by the onClick command, but I have no idea how to execute those functions or how to receive the resulting HTML code.\nAny ideas?","AnswerCount":7,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":6042,"Q_Id":1036660,"Users Score":0,"Answer":"Why don't you just sniff what gets sent after the onclick event and replicate that with your bot?","Q_Score":3,"Tags":"python,html,bots","A_Id":5873989,"CreationDate":"2009-06-24T06:15:00.000","Title":"Python Web-based Bot","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"What's the Fastest way to get a large number of files (relatively small 10-50kB) from Amazon S3 from Python? (In the order of 200,000 - million files). \nAt the moment I am using boto to generate Signed URLs, and using PyCURL to get the files one by one. \nWould some type of concurrency help? PyCurl.CurlMulti object? \nI am open to all suggestions. Thanks!","AnswerCount":6,"Available Count":2,"Score":0.0665680765,"is_accepted":false,"ViewCount":3744,"Q_Id":1051275,"Users Score":2,"Answer":"I don't know anything about python, but in general you would want to break the task down into smaller chunks so that they can be run concurrently. 
You could break it down by file type, or alphabetical or something, and then run a separate script for each portion of the break down.","Q_Score":3,"Tags":"python,curl,amazon-s3,amazon-web-services,boto","A_Id":1051338,"CreationDate":"2009-06-26T21:02:00.000","Title":"Downloading a Large Number of Files from S3","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"What's the Fastest way to get a large number of files (relatively small 10-50kB) from Amazon S3 from Python? (In the order of 200,000 - million files). \nAt the moment I am using boto to generate Signed URLs, and using PyCURL to get the files one by one. \nWould some type of concurrency help? PyCurl.CurlMulti object? \nI am open to all suggestions. Thanks!","AnswerCount":6,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":3744,"Q_Id":1051275,"Users Score":0,"Answer":"I've been using txaws with twisted for S3 work, though what you'd probably want is just to get the authenticated URL and use twisted.web.client.DownloadPage (by default will happily go from stream to file without much interaction).\nTwisted makes it easy to run at whatever concurrency you want. For something on the order of 200,000, I'd probably make a generator and use a cooperator to set my concurrency and just let the generator generate every required download request.\nIf you're not familiar with twisted, you'll find the model takes a bit of time to get used to, but it's oh so worth it. In this case, I'd expect it to take minimal CPU and memory overhead, but you'd have to worry about file descriptors. 
It's quite easy to mix in perspective broker and farm the work out to multiple machines should you find yourself needing more file descriptors or if you have multiple connections over which you'd like it to pull down.","Q_Score":3,"Tags":"python,curl,amazon-s3,amazon-web-services,boto","A_Id":1051408,"CreationDate":"2009-06-26T21:02:00.000","Title":"Downloading a Large Number of Files from S3","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I need to write a script that connects to a bunch of sites on our corporate intranet over HTTPS and verifies that their SSL certificates are valid; that they are not expired, that they are issued for the correct address, etc. We use our own internal corporate Certificate Authority for these sites, so we have the public key of the CA to verify the certificates against.\nPython by default just accepts and uses SSL certificates when using HTTPS, so even if a certificate is invalid, Python libraries such as urllib2 and Twisted will just happily use the certificate.\nHow do I verify a certificate in Python?","AnswerCount":11,"Available Count":1,"Score":-0.0181798149,"is_accepted":false,"ViewCount":206011,"Q_Id":1087227,"Users Score":-1,"Answer":"I was having the same problem but wanted to minimize 3rd party dependencies (because this one-off script was to be executed by many users). My solution was to wrap a curl call and make sure that the exit code was 0. 
Worked like a charm.","Q_Score":87,"Tags":"python","A_Id":20517707,"CreationDate":"2009-07-06T14:17:00.000","Title":"Validate SSL certificates with Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have my web page in python, I am able to get the IP address of the user, who will be accessing our web page, we want to get the mac address of the user's PC, is it possible in python, we are using Linux PC, we want to get it on Linux.","AnswerCount":4,"Available Count":1,"Score":0.049958375,"is_accepted":false,"ViewCount":7015,"Q_Id":1092379,"Users Score":1,"Answer":"All you can access is what the user sends to you.\nMAC address is not part of that data.","Q_Score":2,"Tags":"python,python-3.x","A_Id":1092392,"CreationDate":"2009-07-07T13:35:00.000","Title":"want to get mac address of remote PC","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am writing an application to test a network driver for handling corrupted data. And I thought of sending this data using raw socket, so it will not be corrected by the sending machine's TCP-IP stack.\nI am writing this application solely on Linux. I have code examples of using raw sockets in system-calls, but I would really like to keep my test as dynamic as possible, and write most if not all of it in Python.\nI have googled the web a bit for explanations and examples of the usage of raw sockets in python, but haven't found anything really enlightening. 
Just a very old code example that demonstrates the idea, but by no means works.\nFrom what I gathered, Raw Socket usage in Python is nearly identical in semantics to UNIX's raw socket, but without the structs that define the packet structure.\nI was wondering if it would even be better not to write the raw socket part of the test in Python, but in C with system-calls, and call it from the main Python code?","AnswerCount":8,"Available Count":1,"Score":0.049958375,"is_accepted":false,"ViewCount":109072,"Q_Id":1117958,"Users Score":2,"Answer":"Eventually the best solution for this case was to write the entire thing in C, because it's not a big application, so it would've incurred a greater penalty to write such a small thing in more than one language.\nAfter much toying with both the C and Python raw sockets, I eventually preferred the C raw sockets. Raw sockets require bit-level modifications of groups smaller than 8 bits when writing the packet headers, sometimes only 4 bits or fewer. Python provides no assistance for this, whereas Linux C has a full API for it.\nBut I definitely believe that if only this little bit of header initialization were handled conveniently in Python, I would never have used C here.","Q_Score":46,"Tags":"python,sockets,raw-sockets","A_Id":1186810,"CreationDate":"2009-07-13T06:36:00.000","Title":"How Do I Use Raw Socket in Python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"To be more specific, I'm using python and making a pool of HTTPConnection (httplib) and was wondering if there is a limit on the number of concurrent HTTP connections on a Windows server.","AnswerCount":2,"Available Count":1,"Score":0.2913126125,"is_accepted":false,"ViewCount":4587,"Q_Id":1121951,"Users Score":3,"Answer":"AFAIK, the number of internet sockets (necessary to make TCP\/IP
connections) is naturally limited on every machine, but it's pretty high. 1000 simultaneous connections shouldn't be a problem for the client machine, as each socket uses only a little memory. If you start receiving data through all these channels, this might change, though. I've heard of test setups that created a couple of thousand connections simultaneously from a single client.\nThe story is usually different for the server, when it does heavy lifting for each incoming connection (like forking off a worker process etc.). 1000 incoming connections will impact its performance, and coming from the same client they can easily be taken for a DoS attack. I hope you're in charge of both the client and the server... or is it the same machine?","Q_Score":4,"Tags":"python","A_Id":1122107,"CreationDate":"2009-07-13T20:48:00.000","Title":"What is the maximum simultaneous HTTP connections allowed on one machine (windows server 2008) using python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I can't run firefox from a sudoed python script that drops privileges to a normal user. If I write\n\n$ sudo python\n>>> import os\n>>> import pwd, grp\n>>> uid = pwd.getpwnam('norby')[2]\n>>> gid = grp.getgrnam('norby')[2]\n>>> os.setegid(gid)\n>>> os.seteuid(uid)\n>>> import webbrowser\n>>> webbrowser.get('firefox').open('www.google.it')\nTrue\n>>> # It returns true but doesn't work\n>>> from subprocess import Popen,PIPE\n>>> p = Popen('firefox www.google.it', shell=True,stdout=PIPE,stderr=PIPE)\n>>> # Doesn't execute the command\n>>> You shouldn't really run Iceweasel through sudo WITHOUT the -H option.\nContinuing as if you used the -H option.\nNo protocol specified\nError: cannot open display: :0\n\n\nI think that this is not a python problem, but a firefox\/iceweasel\/debian configuration problem.
Maybe firefox reads only the UID and not the EUID, and doesn't execute the process because the UID equals 0. What do you think?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":903,"Q_Id":1139835,"Users Score":1,"Answer":"This could be your environment. Changing the permissions will still leave environment variables like $HOME pointing at the root user's directory, which will be inaccessible. It may be worth trying to alter these variables by changing os.environ before launching the browser. There may also be other variables worth checking.","Q_Score":0,"Tags":"python,browser,debian,uid","A_Id":1140199,"CreationDate":"2009-07-16T19:38:00.000","Title":"Python fails to execute firefox webbrowser from a root executed script with privileges drop","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I\u2019m looking for a quick way to get an HTTP response code from a URL (i.e. 200, 404, etc). I\u2019m not sure which library to use.","AnswerCount":8,"Available Count":1,"Score":0.0748596907,"is_accepted":false,"ViewCount":153420,"Q_Id":1140661,"Users Score":3,"Answer":"The urllib2.HTTPError exception does not contain a getcode() method. Use the code attribute instead.","Q_Score":90,"Tags":"python","A_Id":1491225,"CreationDate":"2009-07-16T22:27:00.000","Title":"What\u2019s the best way to get an HTTP response code from a URL?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a large xml file (40 Gb) that I need to split into smaller chunks.
I am working with limited space, so is there a way to delete lines from the original file as I write them to new files?\nThanks!","AnswerCount":7,"Available Count":3,"Score":-0.0285636566,"is_accepted":false,"ViewCount":1833,"Q_Id":1145286,"Users Score":-1,"Answer":"It's time to buy a new hard drive!\nYou can make a backup before trying all the other answers so you don't lose data :)","Q_Score":8,"Tags":"python,file","A_Id":1148604,"CreationDate":"2009-07-17T19:41:00.000","Title":"Change python file in place","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a large xml file (40 Gb) that I need to split into smaller chunks. I am working with limited space, so is there a way to delete lines from the original file as I write them to new files?\nThanks!","AnswerCount":7,"Available Count":3,"Score":0.0285636566,"is_accepted":false,"ViewCount":1833,"Q_Id":1145286,"Users Score":1,"Answer":"I'm pretty sure there is, as I've even been able to edit\/read from the source files of scripts I've run, but the biggest problem would probably be all the shifting that would be done if you started at the beginning of the file.
On the other hand, if you go through the file and record all the starting positions of the lines, you could then go in reverse order of position to copy the lines out; once that's done, you could go back, take the new files, one at a time, and (if they're small enough), use readlines() to generate a list, reverse the order of the list, then seek to the beginning of the file and overwrite the lines in their old order with the lines in their new one.\n(You would truncate the file after reading the first block of lines from the end by using the truncate() method, which truncates all data past the current file position if used without any arguments besides that of the file object, assuming you're using one of the classes or a subclass of one of the classes from the io package to read your file. You'd just have to make sure that the current file position ends up at the beginning of the last line to be written to a new file.)\nEDIT: Based on your comment about having to make the separations at the proper closing tags, you'll probably also have to develop an algorithm to detect such tags (perhaps using the peek method), possibly using a regular expression.","Q_Score":8,"Tags":"python,file","A_Id":1145329,"CreationDate":"2009-07-17T19:41:00.000","Title":"Change python file in place","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a large xml file (40 Gb) that I need to split into smaller chunks. 
I am working with limited space, so is there a way to delete lines from the original file as I write them to new files?\nThanks!","AnswerCount":7,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":1833,"Q_Id":1145286,"Users Score":0,"Answer":"If time is not a major factor (or wear and tear on your disk drive):\n\nOpen handle to file\nRead up to the size of your partition \/ logical break point (due to the xml)\nSave the rest of your file to disk (not sure how python handles this as far as directly overwriting file or memory usage)\nWrite the partition to disk\ngoto 1\n\nIf Python does not give you this level of control, you may need to dive into C.","Q_Score":8,"Tags":"python,file","A_Id":1145341,"CreationDate":"2009-07-17T19:41:00.000","Title":"Change python file in place","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to extract some data from various HTML pages using a python program. Unfortunately, some of these pages contain user-entered data which occasionally has \"slight\" errors - namely tag mismatching.\nIs there a good way to have python's xml.dom try to correct errors or something of the sort? 
Alternatively, is there a better way to extract data from HTML pages which may contain errors?","AnswerCount":4,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":778,"Q_Id":1147090,"Users Score":0,"Answer":"If jython is acceptable to you, tagsoup is very good at parsing junk - if it is, I found the jdom libraries far easier to use than other xml alternatives.\nThis is a snippet from a demo mockup to do with screen scraping from tfl's journey planner:\n\n private Document getRoutePage(HashMap params) throws Exception {\n String uri = \"http:\/\/journeyplanner.tfl.gov.uk\/bcl\/XSLT_TRIP_REQUEST2\";\n HttpWrapper hw = new HttpWrapper();\n String page = hw.urlEncPost(uri, params);\n SAXBuilder builder = new SAXBuilder(\"org.ccil.cowan.tagsoup.Parser\");\n Reader pageReader = new StringReader(page);\n return builder.build(pageReader);\n }","Q_Score":0,"Tags":"python,xml,dom,expat-parser","A_Id":1149208,"CreationDate":"2009-07-18T09:24:00.000","Title":"Python xml.dom and bad XML","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a dilemma where I want to create an application that manipulates google contacts information. 
The problem comes down to the fact that Python only supports version 1.0 of the api whilst Java supports 3.0.\nI also want it to be web-based so I'm having a look at google app engine, but it seems that only the python version of app engine supports the import of gdata apis whilst java does not.\nSo it's either web-based and version 1.0 of the api, or non-web-based and version 3.0 of the api.\nI actually need version 3.0 to get access to the extra fields provided by google contacts.\nSo my question is, is there a way to get access to the gdata api under Google App Engine using Java?\nIf not, is there an ETA on when version 3.0 of the gdata api will be released for python?\nCheers.","AnswerCount":3,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":802,"Q_Id":1148165,"Users Score":0,"Answer":"I'm having a look into the google data api protocol which seems to solve the problem.","Q_Score":0,"Tags":"java,python,google-app-engine,gdata-api","A_Id":1149886,"CreationDate":"2009-07-18T18:08:00.000","Title":"Possible to access gdata api when using Java App Engine?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm writing a web-app that uses several 3rd party web APIs, and I want to keep track of the low level requests and responses for ad-hoc analysis. So I'm looking for a recipe that will get Python's urllib2 to log all bytes transferred via HTTP. Maybe a sub-classed Handler?","AnswerCount":2,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":3899,"Q_Id":1170744,"Users Score":2,"Answer":"This looks pretty tricky to do.
There are no hooks in urllib2, urllib, or httplib (which this builds on) for intercepting either input or output data.\nThe only thing that occurs to me, other than switching tactics to use an external tool (of which there are many, and most people use such things), would be to write a subclass of socket.socket in your own new module (say, \"capture_socket\") and then insert that into httplib using \"import capture_socket; import httplib; httplib.socket = capture_socket\". You'd have to copy all the necessary references (anything of the form \"socket.foo\" that is used in httplib) into your own module, but then you could override things like recv() and sendall() in your subclass to do what you like with the data.\nComplications would likely arise if you were using SSL, and I'm not sure whether this would be sufficient or if you'd also have to make your own socket._fileobject as well. It appears doable though, and perusing the source in httplib.py and socket.py in the standard library would tell you more.","Q_Score":19,"Tags":"python,http,logging,urllib2","A_Id":1844608,"CreationDate":"2009-07-23T09:56:00.000","Title":"How do I get urllib2 to log ALL transferred bytes","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"This only needs to work on a single subnet and is not for malicious use. \nI have a load testing tool written in Python that basically blasts HTTP requests at a URL. I need to run performance tests against an IP-based load balancer, so the requests must come from a range of IP's. Most commercial performance tools provide this functionality, but I want to build it into my own.\nThe tool uses Python's urllib2 for transport. 
Is it possible to send HTTP requests with spoofed IP addresses for the packets making up the request?","AnswerCount":5,"Available Count":2,"Score":1.0,"is_accepted":false,"ViewCount":37293,"Q_Id":1180878,"Users Score":7,"Answer":"Quick note, as I just learned this yesterday:\nI think you've implied you know this already, but any responses to an HTTP request go to the IP address that shows up in the header. So if you want to see those responses, you need to have control of the router and have it set up so that the spoofed IPs are all routed back to the IP you are using to view the responses.","Q_Score":27,"Tags":"python,http,networking,sockets,urllib2","A_Id":1180897,"CreationDate":"2009-07-25T01:11:00.000","Title":"Spoofing the origination IP address of an HTTP request","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"This only needs to work on a single subnet and is not for malicious use. \nI have a load testing tool written in Python that basically blasts HTTP requests at a URL. I need to run performance tests against an IP-based load balancer, so the requests must come from a range of IP's. Most commercial performance tools provide this functionality, but I want to build it into my own.\nThe tool uses Python's urllib2 for transport. Is it possible to send HTTP requests with spoofed IP addresses for the packets making up the request?","AnswerCount":5,"Available Count":2,"Score":0.0399786803,"is_accepted":false,"ViewCount":37293,"Q_Id":1180878,"Users Score":1,"Answer":"I suggest seeing if you can configure your load balancer to make its decision based on the X-Forwarded-For header, rather than the source IP of the packet containing the HTTP request.
I know that most of the significant commercial load balancers have this capability.\nIf you can't do that, then I suggest that you probably need to configure a linux box with a whole heap of secondary IP's - don't bother configuring static routes on the LB, just make your linux box the default gateway of the LB device.","Q_Score":27,"Tags":"python,http,networking,sockets,urllib2","A_Id":1186102,"CreationDate":"2009-07-25T01:11:00.000","Title":"Spoofing the origination IP address of an HTTP request","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I wonder what is the best way to handle parallel SSH connections in python.\nI need to open several SSH connections to keep in background and to feed commands in interactive or timed batch way.\nIs this possible to do it with the paramiko libraries? It would be nice not to spawn a different SSH process for each connection.\nThanks.","AnswerCount":6,"Available Count":4,"Score":0.0333209931,"is_accepted":false,"ViewCount":7048,"Q_Id":1185855,"Users Score":1,"Answer":"You can simply use subprocess.Popen for that purpose, without any problems.\nHowever, you might want to simply install cronjobs on the remote machines. :-)","Q_Score":3,"Tags":"python,ssh,parallel-processing","A_Id":1185871,"CreationDate":"2009-07-26T23:19:00.000","Title":"Parallel SSH in Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I wonder what is the best way to handle parallel SSH connections in python.\nI need to open several SSH connections to keep in background and to feed commands in interactive or timed batch way.\nIs this possible to do it with the paramiko libraries? 
It would be nice not to spawn a different SSH process for each connection.\nThanks.","AnswerCount":6,"Available Count":4,"Score":0.0333209931,"is_accepted":false,"ViewCount":7048,"Q_Id":1185855,"Users Score":1,"Answer":"Reading the paramiko API docs, it looks like it is possible to open one ssh connection, and multiplex as many ssh tunnels on top of that as are wished. Common ssh clients (openssh) often do things like this automatically behind the scene if there is already a connection open.","Q_Score":3,"Tags":"python,ssh,parallel-processing","A_Id":1185880,"CreationDate":"2009-07-26T23:19:00.000","Title":"Parallel SSH in Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I wonder what is the best way to handle parallel SSH connections in python.\nI need to open several SSH connections to keep in background and to feed commands in interactive or timed batch way.\nIs this possible to do it with the paramiko libraries? It would be nice not to spawn a different SSH process for each connection.\nThanks.","AnswerCount":6,"Available Count":4,"Score":0.0996679946,"is_accepted":false,"ViewCount":7048,"Q_Id":1185855,"Users Score":3,"Answer":"Yes, you can do this with paramiko.\nIf you're connecting to one server, you can run multiple channels through a single connection. If you're connecting to multiple servers, you can start multiple connections in separate threads. No need to manage multiple processes, although you could substitute the multiprocessing module for the threading module and have the same effect.\nI haven't looked into twisted conch in a while, but it looks like it getting updates again, which is nice. I couldn't give you a good feature comparison between the two, but I find paramiko is easier to get going. 
It takes a little more effort to get into twisted, but it could be well worth it if you're doing other network programming.","Q_Score":3,"Tags":"python,ssh,parallel-processing","A_Id":1188586,"CreationDate":"2009-07-26T23:19:00.000","Title":"Parallel SSH in Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I wonder what is the best way to handle parallel SSH connections in python.\nI need to open several SSH connections to keep in background and to feed commands in interactive or timed batch way.\nIs this possible to do it with the paramiko libraries? It would be nice not to spawn a different SSH process for each connection.\nThanks.","AnswerCount":6,"Available Count":4,"Score":-0.0333209931,"is_accepted":false,"ViewCount":7048,"Q_Id":1185855,"Users Score":-1,"Answer":"This might not be relevant to your question. But there are tools like pssh, clusterssh etc. that can parallely spawn connections. 
You can couple Expect with pssh to control them too.","Q_Score":3,"Tags":"python,ssh,parallel-processing","A_Id":1516547,"CreationDate":"2009-07-26T23:19:00.000","Title":"Parallel SSH in Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I've only used XML RPC and I haven't really delved into SOAP but I'm trying to find a good comprehensive guide, with real world examples or even a walkthrough of some minimal REST application.\nI'm most comfortable with Python\/PHP.","AnswerCount":3,"Available Count":1,"Score":0.0665680765,"is_accepted":false,"ViewCount":499,"Q_Id":1186839,"Users Score":1,"Answer":"I like the examples in the Richardson & Ruby book, \"RESTful Web Services\" from O'Reilly.","Q_Score":0,"Tags":"php,python,xml,rest,soap","A_Id":1186876,"CreationDate":"2009-07-27T07:11:00.000","Title":"Real world guide on using and\/or setting up REST web services?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a web service that accepts passed in params using http POST but in a specific order, eg (name,password,data). I have tried to use httplib but all the Python http POST libraries seem to take a dictionary, which is an unordered data structure. Any thoughts on how to http POST params in order for Python?\nThanks!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":345,"Q_Id":1188737,"Users Score":2,"Answer":"Why would you need a specific order in the POST parameters in the first place? 
As far as I know there are no requirements that POST parameter order is preserved by web servers.\nEvery language I have used, has used a dictionary type object to hold these parameters as they are inherently key\/value pairs.","Q_Score":2,"Tags":"python,http","A_Id":1188759,"CreationDate":"2009-07-27T15:11:00.000","Title":"Python POST ordered params","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I need to port some code that relies heavily on lxml from a CPython application to IronPython.\nlxml is very Pythonic and I would like to keep using it under IronPython, but it depends on libxslt and libxml2, which are C extensions.\nDoes anyone know of a workaround to allow lxml under IronPython or a version of lxml that doesn't have those C-extension dependencies?","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":2349,"Q_Id":1200726,"Users Score":1,"Answer":"Something which you might have already considered: \nAn alternative is to first port the lxml library to IPy and then your code (depending on the code size). 
You might have to write some C# wrappers for the native C calls to the C extensions -- I'm not sure what issues, if any, are involved in this with regards to IPy.\nOr if the code which you are porting is small, as compared to lxml, then maybe you can just remove the lxml dependency and use the .NET XML libraries.","Q_Score":6,"Tags":".net,xml,ironpython,python,lxml","A_Id":1211395,"CreationDate":"2009-07-29T14:36:00.000","Title":"How to get lxml working under IronPython?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to raise an exception on the Server Side of an SimpleXMLRPCServer; however, all attempts get a \"Fault 1\" exception on the client side.\nRPC_Server.AbortTest()\n File \"C:\\Python25\\lib\\xmlrpclib.py\", line 1147, in call\n return self.__send(self.__name, args)\n File \"C:\\Python25\\lib\\xmlrpclib.py\", line 1437, in __request\n verbose=self.__verbose\n File \"C:\\Python25\\lib\\xmlrpclib.py\", line 1201, in request\n return self._parse_response(h.getfile(), sock)\n File \"C:\\Python25\\lib\\xmlrpclib.py\", line 1340, in _parse_response\n return u.close()\n File \"C:\\Python25\\lib\\xmlrpclib.py\", line 787, in close\n raise Fault(**self._stack[0])\nxmlrpclib.Fault: :Test Aborted by a RPC\nrequest\">","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":746,"Q_Id":1201507,"Users Score":1,"Answer":"Yes, this is what happens when you raise an exception on the server side. Are you expecting the SimpleXMLRPCServer to return the exception to the client?\nYou can only use objects that can be marshalled through XML. 
This includes\n\nboolean : The True and False constants\nintegers : Pass in directly\nfloating-point numbers : Pass in directly\nstrings : Pass in directly\narrays : Any Python sequence type containing conformable elements. Arrays are returned as lists\nstructures : A Python dictionary. Keys must be strings, values may be any conformable type. Objects of user-defined classes can be passed in; only their __dict__ attribute is transmitted.\ndates : in seconds since the epoch (pass in an instance of the DateTime class) or a datetime.datetime instance.\nbinary data : pass in an instance of the Binary wrapper class","Q_Score":0,"Tags":"python,exception,simplexmlrpcserver","A_Id":1202742,"CreationDate":"2009-07-29T16:34:00.000","Title":"Sending an exception on the SimpleXMLRPCServer","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"How can I get the current Windows' browser proxy setting, as well as set them to a value?\nI know I can do this by looking in the registry at Software\\Microsoft\\Windows\\CurrentVersion\\Internet Settings\\ProxyServer, but I'm looking, if it is possible, to do this without messing directly with the registry.","AnswerCount":3,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":12290,"Q_Id":1201771,"Users Score":3,"Answer":"urllib module automatically retrieves settings from registry when no proxies are specified as a parameter or in the environment variables\n\nIn a Windows environment, if no proxy\n environment variables are set, proxy\n settings are obtained from the\n registry\u2019s Internet Settings section.\n\nSee the documentation of urllib module referenced in the earlier post.\nTo set the proxy I assume you'll need to use the pywin32 module and modify the registry 
directly.","Q_Score":2,"Tags":"python,windows,proxy,registry","A_Id":1205881,"CreationDate":"2009-07-29T17:14:00.000","Title":"How to set proxy in Windows with Python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to download an mp3 file to a user's machine without his\/her consent while they are listening to the song. So, next time they visit that web page they would not have to download the same mp3, but play it back from the local file. This will save some bandwidth for me and for them. It's something Pandora used to do, but I really don't know how.\nAny ideas?","AnswerCount":3,"Available Count":2,"Score":0.1325487884,"is_accepted":false,"ViewCount":170,"Q_Id":1211363,"Users Score":2,"Answer":"Don't do this.\nMost files are cached anyway.\nBut if you really want to add this (because users asked for it), make it optional (default off).","Q_Score":0,"Tags":"python,django,web-applications","A_Id":1211434,"CreationDate":"2009-07-31T08:47:00.000","Title":"downloading files to users machine?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am trying to download an mp3 file to a user's machine without his\/her consent while they are listening to the song. So, next time they visit that web page they would not have to download the same mp3, but play it back from the local file. This will save some bandwidth for me and for them. It's something Pandora used to do, but I really don't know how.\nAny ideas?","AnswerCount":3,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":170,"Q_Id":1211363,"Users Score":4,"Answer":"You can't forcefully download files to a user without his consent.
If that were possible, you can only imagine what a severe security flaw that would be.\nYou can do one of two things:\n\ncount on the browser to cache the media file\nserve the media via some 3rd party plugin (Flash, for example)","Q_Score":0,"Tags":"python,django,web-applications","A_Id":1211370,"CreationDate":"2009-07-31T08:47:00.000","Title":"downloading files to users machine?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Is there a way to limit the amount of data downloaded by python's urllib2 module? Sometimes I encounter broken sites with a sort of \/dev\/random as a page, and it turns out that they use up all the memory on a server.","AnswerCount":1,"Available Count":1,"Score":0.537049567,"is_accepted":false,"ViewCount":869,"Q_Id":1224910,"Users Score":3,"Answer":"urllib2.urlopen returns a file-like object, and you can (at least in theory) .read(N) from such an object to limit the amount of data returned to N bytes at most.\nThis approach is not entirely fool-proof, because an actively-hostile site may go to quite some lengths to fool a reasonably trusting receiver, like urllib2's default opener; in this case, you'll need to implement and install your own opener that knows how to guard itself against such attacks (for example, getting no more than a MB at a time from the open socket, etc.).","Q_Score":3,"Tags":"python,urllib2","A_Id":1224950,"CreationDate":"2009-08-03T22:20:00.000","Title":"limit downloaded page size","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"When I try to automatically download a file from some webpage using Python, \nI get a Webpage Dialog window (I use IE).
The window has two buttons, 'Continue' and 'Cancel'. I cannot figure out how to click on the Continue button. The problem is \nthat I don't know how to control a Webpage Dialog with Python. I tried to use \nwinGuiAuto to find the controls of the window, but it fails to recognize any Button type \ncontrols... Any ideas?\nSasha\nA clarification of my question:\nMy purpose is to download stock data from a certain web site. I need to perform it for many stocks so I need python to do it for me in a repetitive way. This specific site exports the data by letting me download it as an Excel file by clicking a link. However, after clicking the link I get a Web Page dialog box asking me if I am sure that I want to download this file. This Web page dialog is my problem - it is not an HTML page and it is not a regular windows dialog box. It is something else and I cannot figure out how to control it with python. It has two buttons and I need to click on one of them (i.e. Continue). It seems like it is a special kind of window implemented in IE. It is distinguished by its title which looks like this: Webpage Dialog -- Download blalblabla. If I click Continue manually it opens a regular windows dialog box (open, save, cancel) which I know how to handle with the winGuiAuto library. I tried to use this library for the Webpage Dialog window with no luck. I also tried to recognize the buttons with the AutoIt Info tool - no luck either. In fact, maybe these are not buttons, but actually links; however I cannot see the links and there is no source code visible... What I need is someone to tell me what this Web page Dialog box is and how to control it with Python. That was my question.","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":2761,"Q_Id":1225686,"Users Score":0,"Answer":"You can't, and you don't want to. When you ask a question, try explaining what you are trying to achieve, and not just the task immediately before you. You are likely barking up the wrong tree.
There is some other way of doing what you are trying to do.","Q_Score":0,"Tags":"python,dialog,webpage","A_Id":1226061,"CreationDate":"2009-08-04T04:03:00.000","Title":"How to control Webpage dialog with python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a pretty intensive chat socket server written in Twisted Python, I start it using internet.TCPServer with a factory and that factory references to a protocol object that handles all communications with the client.\nHow should I make sure a protocol instance completely destroys itself once a client has disconnected?\nI've got a function named connectionLost that is fired up once a client disconnects and I try stopping all activity right there but I suspect some reactor stuff (like twisted.words instances) keep running for obsolete protocol instances.\nWhat would be the best approach to handle this?\nThanks!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":721,"Q_Id":1234292,"Users Score":0,"Answer":"ok, for sorting out this issue I have set a __del__ method in the protocol class and I am now logging protocol instances that have not been garbage collected within 1 minute from the time the client has disconnected. 
\nIf anybody has any better solution I'll still be glad to hear about it but so far I have already fixed a few potential memory leaks using this log.\nThanks!","Q_Score":4,"Tags":"python,sockets,twisted,twisted.words","A_Id":1236382,"CreationDate":"2009-08-05T16:23:00.000","Title":"In Twisted Python - Make sure a protocol instance would be completely deallocated","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm trying to find a Python library that would take an audio file (e.g. .ogg, .wav) and convert it into mp3 for playback on a webpage. \nAlso, any thoughts on setting its quality for playback would be great.\nThank you.","AnswerCount":6,"Available Count":2,"Score":0.0665680765,"is_accepted":false,"ViewCount":51200,"Q_Id":1246131,"Users Score":2,"Answer":"You may use ctypes module to call functions directly from dynamic libraries. It doesn't require you to install external Python libs and it has better performance than command line tools, but it's usually harder to implement (plus of course you need to provide external library).","Q_Score":21,"Tags":"python,audio,compression","A_Id":1334217,"CreationDate":"2009-08-07T17:51:00.000","Title":"Python library for converting files to MP3 and setting their quality","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm trying to find a Python library that would take an audio file (e.g. .ogg, .wav) and convert it into mp3 for playback on a webpage. 
\nAlso, any thoughts on setting its quality for playback would be great.\nThank you.","AnswerCount":6,"Available Count":2,"Score":0.0333209931,"is_accepted":false,"ViewCount":51200,"Q_Id":1246131,"Users Score":1,"Answer":"Another option to avoid installing Python modules for this simple task would be to just exec \"lame\" or other command line encoder from the Python script (with the popen module.)","Q_Score":21,"Tags":"python,audio,compression","A_Id":1246816,"CreationDate":"2009-08-07T17:51:00.000","Title":"Python library for converting files to MP3 and setting their quality","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am considering programming the network related features of my application in Python instead of the C\/C++ API. The intended use of networking is to pass text messages between two instances of my application, similar to a game passing player positions as often as possible over the network.\nAlthough the python socket modules seems sufficient and mature, I want to check if there are limitations of the python module which can be a problem at a later stage of the development.\nWhat do you think of the python socket module :\n\nIs it reliable and fast enough for production quality software ?\nAre there any known limitations which can be a problem if my app. 
needs more complex networking other than regular client-server messaging ?\n\nThanks in advance,\nPaul","AnswerCount":4,"Available Count":2,"Score":0.1488850336,"is_accepted":false,"ViewCount":1015,"Q_Id":1253905,"Users Score":3,"Answer":"Python is a mature language that can do almost anything that you can do in C\/C++ (even direct memory access if you really want to hurt yourself).\nYou'll find that you can write beautiful code in it in a very short time, that this code is readable from the start and that it will stay readable (you will still know what it does even after returning one year later).\nThe drawback of Python is that your code will be somewhat slow. \"Somewhat\" as in \"might be too slow for certain cases\". So the usual approach is to write as much as possible in Python because it will make your app maintainable. Eventually, you might run into speed issues. That would be the time to consider rewriting a part of your app in C.\nThe main advantages of this approach are:\n\nYou already have a running application. Translating the code from Python to C is much simpler than writing it from scratch.\nYou already have a running application. After the translation of a small part of Python to C, you just have to test that small part and you can use the rest of the app (that didn't change) to do it.\nYou don't pay a price upfront. If Python is fast enough for you, you'll never have to do the optional optimization.\nPython is much, much more powerful than C.
Every line of Python can do the same as 100 or even 1000 lines of C.","Q_Score":4,"Tags":"python,network-programming","A_Id":1254288,"CreationDate":"2009-08-10T09:26:00.000","Title":"Suggestion Needed - Networking in Python - A good idea?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am considering programming the network related features of my application in Python instead of the C\/C++ API. The intended use of networking is to pass text messages between two instances of my application, similar to a game passing player positions as often as possible over the network.\nAlthough the python socket modules seems sufficient and mature, I want to check if there are limitations of the python module which can be a problem at a later stage of the development.\nWhat do you think of the python socket module :\n\nIs it reliable and fast enough for production quality software ?\nAre there any known limitations which can be a problem if my app. 
needs more complex networking other than regular client-server messaging ?\n\nThanks in advance,\nPaul","AnswerCount":4,"Available Count":2,"Score":0.049958375,"is_accepted":false,"ViewCount":1015,"Q_Id":1253905,"Users Score":1,"Answer":"To answer #1, I know that among other things, EVE Online (the MMO) uses a variant of Python for their server code.","Q_Score":4,"Tags":"python,network-programming","A_Id":1253945,"CreationDate":"2009-08-10T09:26:00.000","Title":"Suggestion Needed - Networking in Python - A good idea?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"What is the best way to map a network share to a windows drive using Python?\nThis share also requires a username and password.","AnswerCount":7,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":68179,"Q_Id":1271317,"Users Score":0,"Answer":"I had trouble getting this line to work: \nwin32wnet.WNetAddConnection2(win32netcon.RESOURCETYPE_DISK, drive, networkPath, None, user, password)\nBut was successful with this:\nwin32wnet.WNetAddConnection2(1, 'Z:', r'\\UNCpath\\share', None, 'login', 'password')","Q_Score":33,"Tags":"python,windows,mapping,drive","A_Id":20201066,"CreationDate":"2009-08-13T11:09:00.000","Title":"What is the best way to map windows drives using Python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"What would be the best method to restrict access to my XMLRPC server by IP address? I see the class CGIScript in web\/twcgi.py has a render method that is accessing the request... but I am not sure how to gain access to this request in my server. 
I saw an example where someone patched twcgi.py to set environment variables and then in the server access the environment variables... but I figure there has to be a better solution.\nThanks.","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":3265,"Q_Id":1273297,"Users Score":0,"Answer":"I'd use a firewall on windows, or iptables on linux.","Q_Score":2,"Tags":"python,twisted","A_Id":1273455,"CreationDate":"2009-08-13T17:03:00.000","Title":"Python Twisted: restricting access by IP address","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to scrape a page on YouTube with python which has a lot of ajax in it.\nI have to call the JavaScript each time to get the info, but I'm not really sure how to go about it. I'm using the urllib2 module to open URLs. Any help would be appreciated.","AnswerCount":5,"Available Count":1,"Score":0.0798297691,"is_accepted":false,"ViewCount":4364,"Q_Id":1281075,"Users Score":2,"Answer":"Here is how I would do it: Install Firebug on Firefox, then turn the Net panel on in Firebug and click on the desired link on YouTube. Now see what happens and what pages are requested. Find the ones that are responsible for the AJAX part of the page. Now you can use urllib or Mechanize to fetch the link. If you CAN pull the same content this way, then you have what you are looking for; just parse the content. If you CAN'T pull the content this way, then that would suggest that the requested page might be looking at user login credentials, session info or other header fields such as HTTP_REFERER ... etc. Then you might want to look at something more extensive like scrapy. I would suggest that you always follow the simple path first. Good luck and happy \"responsible\" scraping!
:)","Q_Score":2,"Tags":"python,ajax,screen-scraping","A_Id":3134226,"CreationDate":"2009-08-15T03:34:00.000","Title":"Scraping Ajax - Using python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Suppose I need to perform a set of procedures on a particular website:\nsay, fill some forms, click the submit button, send the data back to the server, receive the response, again do something based on the response and send the data back to the server of the website.\nI know there is a webbrowser module in python, but I want to do this without invoking any web browser. It has to be a pure script.\nIs there a module available in python which can help me do that?\nThanks","AnswerCount":15,"Available Count":3,"Score":1.0,"is_accepted":false,"ViewCount":108234,"Q_Id":1292817,"Users Score":19,"Answer":"selenium will do exactly what you want and it handles javascript","Q_Score":29,"Tags":"python,browser-automation","A_Id":3486971,"CreationDate":"2009-08-18T09:23:00.000","Title":"How to automate browsing using python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Suppose I need to perform a set of procedures on a particular website:\nsay, fill some forms, click the submit button, send the data back to the server, receive the response, again do something based on the response and send the data back to the server of the website.\nI know there is a webbrowser module in python, but I want to do this without invoking any web browser.
It has to be a pure script.\nIs there a module available in python which can help me do that?\nThanks","AnswerCount":15,"Available Count":3,"Score":0.0133325433,"is_accepted":false,"ViewCount":108234,"Q_Id":1292817,"Users Score":1,"Answer":"The best solution that I have found (and am currently implementing) is:\n- scripts in python using selenium webdriver\n- the PhantomJS headless browser (if firefox is used you will have a GUI and it will be slower)","Q_Score":29,"Tags":"python,browser-automation","A_Id":20679640,"CreationDate":"2009-08-18T09:23:00.000","Title":"How to automate browsing using python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Suppose I need to perform a set of procedures on a particular website:\nsay, fill some forms, click the submit button, send the data back to the server, receive the response, again do something based on the response and send the data back to the server of the website.\nI know there is a webbrowser module in python, but I want to do this without invoking any web browser. It has to be a pure script.\nIs there a module available in python which can help me do that?\nThanks","AnswerCount":15,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":108234,"Q_Id":1292817,"Users Score":0,"Answer":"httplib2 + beautifulsoup\nUse firefox + firebug + httpreplay to see what the javascript passes to and from the browser from the website.
Using httplib2 you can essentially do the same via post and get","Q_Score":29,"Tags":"python,browser-automation","A_Id":3988708,"CreationDate":"2009-08-18T09:23:00.000","Title":"How to automate browsing using python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm developing an FTP client in Python ftplib. How do I add proxies support to it (most FTP apps I have seen seem to have it)? I'm especially thinking about SOCKS proxies, but also other types... FTP, HTTP (is it even possible to use HTTP proxies with FTP program?)\nAny ideas how to do it?","AnswerCount":6,"Available Count":1,"Score":0.0665680765,"is_accepted":false,"ViewCount":18202,"Q_Id":1293518,"Users Score":2,"Answer":"Standard module ftplib doesn't support proxies. It seems the only solution is to write your own customized version of the ftplib.","Q_Score":9,"Tags":"python,proxy,ftp,ftplib","A_Id":1293579,"CreationDate":"2009-08-18T12:28:00.000","Title":"Proxies in Python FTP application","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Long story short, I created a new gmail account, and linked several other accounts to it (each with 1000s of messages), which I am importing. 
All imported messages arrive as unread, but I need them to appear as read.\nI have a little experience with python, but I've only used mail and imaplib modules for sending mail, not processing accounts.\nIs there a way to bulk process all items in an inbox, and simply mark messages older than a specified date as read?","AnswerCount":4,"Available Count":2,"Score":0.049958375,"is_accepted":false,"ViewCount":5833,"Q_Id":1296446,"Users Score":1,"Answer":"Just go to the Gmail web interface, do an advanced search by date, then select all and mark as read.","Q_Score":5,"Tags":"python,email,gmail,imap,pop3","A_Id":1296476,"CreationDate":"2009-08-18T20:52:00.000","Title":"Parse Gmail with Python and mark all older than date as \"read\"","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Long story short, I created a new gmail account, and linked several other accounts to it (each with 1000s of messages), which I am importing. All imported messages arrive as unread, but I need them to appear as read.\nI have a little experience with python, but I've only used mail and imaplib modules for sending mail, not processing accounts.\nIs there a way to bulk process all items in an inbox, and simply mark messages older than a specified date as read?","AnswerCount":4,"Available Count":2,"Score":0.049958375,"is_accepted":false,"ViewCount":5833,"Q_Id":1296446,"Users Score":1,"Answer":"Rather than try to parse our HTML why not just use the IMAP interface? 
Hook it up to a standard mail client and then just sort by date and mark whichever ones you want as read.","Q_Score":5,"Tags":"python,email,gmail,imap,pop3","A_Id":1296465,"CreationDate":"2009-08-18T20:52:00.000","Title":"Parse Gmail with Python and mark all older than date as \"read\"","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have setup the logging module for my new python script. I have two handlers, one sending stuff to a file, and one for email alerts. The SMTPHandler is setup to mail anything at the ERROR level or above.\nEverything works great, unless the SMTP connection fails. If the SMTP server does not respond or authentication fails (it requires SMTP auth), then the whole script dies.\nI am fairly new to python, so I am trying to figure out how to capture the exception that the SMTPHandler is raising so that any problems sending the log message via email won't bring down my entire script. Since I am also writing errors to a log file, if the SMTP alert fails, I just want to keep going, not halt anything.\nIf I need a \"try:\" statement, would it go around the logging.handlers.SMTPHandler setup, or around the individual calls to my_logger.error()?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1574,"Q_Id":1304593,"Users Score":0,"Answer":"You probably need to do both. To figure this out, I suggest to install a local mail server and use that. 
This way, you can shut it down while your script runs and note down the error message.\nTo keep the code maintainable, you should extend SMTPHandler in such a way that you can handle the exceptions in a single place (instead of wrapping every logger call with try-except).","Q_Score":1,"Tags":"python,logging,handler","A_Id":1304622,"CreationDate":"2009-08-20T07:45:00.000","Title":"Python logging SMTPHandler - handling offline SMTP server","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I need to make a cURL request to a https URL, but I have to go through a proxy as well. Is there some problem with doing this? I have been having so much trouble doing this with curl and php, that I tried doing it with urllib2 in Python, only to find that urllib2 cannot POST to https when going through a proxy. I haven't been able to find any documentation to this effect with cURL, but I was wondering if anyone knew if this was an issue?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":9009,"Q_Id":1308760,"Users Score":0,"Answer":"No problem since the proxy server supports the CONNECT method.","Q_Score":2,"Tags":"php,python,curl,https,urllib2","A_Id":1308768,"CreationDate":"2009-08-20T20:56:00.000","Title":"cURL: https through a proxy","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I need to simulate multiple embedded server devices that are typically used for motor control. In real life, there can be multiple servers on the network and our desktop software acts as a client to all the motor servers simultaneously.
We have a half-dozen of these motor control servers on hand for basic testing, but it's getting expensive to test bigger systems with the real hardware. I'd like to build a simulator that can look like many servers on the network to test our client software.\nHow can I build a simulator that will look like it has many IP addresses on the same port without physically having many NIC's. For example, the client software will try to connect to servers 192.168.10.1 thru 192.168.10.50 on port 1111. The simulator will accept all of those connections and run simulations as if it were moving physical motors and send back simulated data on those socket connections.\nCan I use a router to map all of those addresses to a single testing server, or ideally, is there a way to use localhost to 'spoof' those IP addresses? The client software is written in .Net, but Python ideas would be welcomed as well.","AnswerCount":3,"Available Count":2,"Score":0.1325487884,"is_accepted":false,"ViewCount":9577,"Q_Id":1308879,"Users Score":2,"Answer":"Normally you just listen on 0.0.0.0. This is an alias for all IP addresses.","Q_Score":6,"Tags":".net,python,networking,sockets","A_Id":1308897,"CreationDate":"2009-08-20T21:17:00.000","Title":"Simulate multiple IP addresses for testing","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I need to simulate multiple embedded server devices that are typically used for motor control. In real life, there can be multiple servers on the network and our desktop software acts as a client to all the motor servers simultaneously. We have a half-dozen of these motor control servers on hand for basic testing, but it's getting expensive to test bigger systems with the real hardware. 
I'd like to build a simulator that can look like many servers on the network to test our client software.\nHow can I build a simulator that will look like it has many IP addresses on the same port without physically having many NIC's. For example, the client software will try to connect to servers 192.168.10.1 thru 192.168.10.50 on port 1111. The simulator will accept all of those connections and run simulations as if it were moving physical motors and send back simulated data on those socket connections.\nCan I use a router to map all of those addresses to a single testing server, or ideally, is there a way to use localhost to 'spoof' those IP addresses? The client software is written in .Net, but Python ideas would be welcomed as well.","AnswerCount":3,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":9577,"Q_Id":1308879,"Users Score":5,"Answer":"A. Consider using Bonjour (zeroconf) for service discovery\nB. You can assign 1 or more IP addresses to the same NIC:\nOn XP, Start -> Control Panel -> Network Connections and select properties on your NIC (usually 'Local Area Connection').\nScroll down to Internet Protocol (TCP\/IP), select it and click on [Properties].\nIf you are using DHCP, you will need to get a static, base IP, from your IT.\n Otherwise, click on [Advanced] and under 'IP Addresses' click [Add..]\n Enter the IP information for the additional IP you want to add.\nRepeat for each additional IP address.\nC.
Consider using VMWare, as you can configure multiple systems and virtual IPs within a single, logical, network of \"computers\".\n-- sky","Q_Score":6,"Tags":".net,python,networking,sockets","A_Id":1309096,"CreationDate":"2009-08-20T21:17:00.000","Title":"Simulate multiple IP addresses for testing","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm in the process of writing a python script to act as a \"glue\" between an application and some external devices. The script itself is quite straightforward and has three distinct processes:\n\nRequest data (from a socket connection, via UDP)\nReceive response (from a socket connection, via UDP)\nProcess response and make data available to 3rd party application\n\nHowever, this will be done repetitively, and for several (+\/-200 different) devices. So once it's reached device #200, it would start requesting data from device #001 again. My main concern here is not to bog down the processor whilst executing the script. \nUPDATE:\nI am using three threads to do the above, one thread for each of the above processes. The request\/response is asynchronous as each response contains everything I need to be able to process it (including the sender's details).\nIs there any way to allow the script to run in the background and consume as little system resources as possible while doing its thing? This will be running on a windows 2003 machine.\nAny advice would be appreciated.","AnswerCount":2,"Available Count":1,"Score":0.4621171573,"is_accepted":false,"ViewCount":738,"Q_Id":1352760,"Users Score":5,"Answer":"If you are using blocking I\/O to your devices, then the script won't consume any processor while waiting for the data.
How much processor time you use depends on what sort of computation you are doing with the data.","Q_Score":0,"Tags":"python,performance,process,background","A_Id":1352777,"CreationDate":"2009-08-30T00:58:00.000","Title":"Python script performance as a background process","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":1,"Web Development":0},{"Question":"How can I download files from a website using wildcards in Python? I have a site that I need to download files from periodically. The problem is the filenames change each time. A portion of the filename stays the same though. How can I use a wildcard to specify the unknown portion of the filename in a URL?","AnswerCount":2,"Available Count":1,"Score":1.0,"is_accepted":false,"ViewCount":1403,"Q_Id":1359090,"Users Score":7,"Answer":"If the filename changes, there must still be a link to the file somewhere (otherwise nobody would ever guess the filename). A typical approach is to get the HTML page that contains a link to the file, search through that looking for the link target, and then send a second request to get the actual file you're after.\nWeb servers do not generally implement such a \"wildcard\" facility as you describe, so you must use other techniques.","Q_Score":1,"Tags":"python","A_Id":1359101,"CreationDate":"2009-08-31T19:46:00.000","Title":"Wildcard Downloads with Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a python script that uses threads and makes lots of HTTP requests. I think what's happening is that while a HTTP request (using urllib2) is reading, it's blocking and not responding to CtrlC to stop the program. 
Is there any way around this?","AnswerCount":12,"Available Count":7,"Score":1.0,"is_accepted":false,"ViewCount":234195,"Q_Id":1364173,"Users Score":10,"Answer":"On Mac press Ctrl+\\ to quit a python process attached to a terminal.","Q_Score":142,"Tags":"python","A_Id":48303184,"CreationDate":"2009-09-01T19:17:00.000","Title":"Stopping python using ctrl+c","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a python script that uses threads and makes lots of HTTP requests. I think what's happening is that while a HTTP request (using urllib2) is reading, it's blocking and not responding to CtrlC to stop the program. Is there any way around this?","AnswerCount":12,"Available Count":7,"Score":1.0,"is_accepted":false,"ViewCount":234195,"Q_Id":1364173,"Users Score":24,"Answer":"This post is old but I recently ran into the same problem of Ctrl+C not terminating Python scripts on Linux. I used Ctrl+\\ (SIGQUIT).","Q_Score":142,"Tags":"python","A_Id":40704008,"CreationDate":"2009-09-01T19:17:00.000","Title":"Stopping python using ctrl+c","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a python script that uses threads and makes lots of HTTP requests. I think what's happening is that while a HTTP request (using urllib2) is reading, it's blocking and not responding to CtrlC to stop the program. 
Is there any way around this?","AnswerCount":12,"Available Count":7,"Score":0.049958375,"is_accepted":false,"ViewCount":234195,"Q_Id":1364173,"Users Score":3,"Answer":"On a mac \/ in Terminal: \n\nShow Inspector (right click within the terminal window or Shell >Show Inspector)\nclick the Settings icon above \"running processes\"\nchoose from the list of options under \"Signal Process Group\" (Kill, terminate, interrupt, etc).","Q_Score":142,"Tags":"python","A_Id":42792308,"CreationDate":"2009-09-01T19:17:00.000","Title":"Stopping python using ctrl+c","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a python script that uses threads and makes lots of HTTP requests. I think what's happening is that while a HTTP request (using urllib2) is reading, it's blocking and not responding to CtrlC to stop the program. Is there any way around this?","AnswerCount":12,"Available Count":7,"Score":0.0166651236,"is_accepted":false,"ViewCount":234195,"Q_Id":1364173,"Users Score":1,"Answer":"Forcing the program to close using Alt+F4 (shuts down current program)\nSpamming the X button on CMD for e.x.\nTaskmanager (first Windows+R and then \"taskmgr\") and then end the task.\n\nThose may help.","Q_Score":142,"Tags":"python","A_Id":52672359,"CreationDate":"2009-09-01T19:17:00.000","Title":"Stopping python using ctrl+c","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a python script that uses threads and makes lots of HTTP requests. I think what's happening is that while a HTTP request (using urllib2) is reading, it's blocking and not responding to CtrlC to stop the program. 
Is there any way around this?","AnswerCount":12,"Available Count":7,"Score":1.0,"is_accepted":false,"ViewCount":234195,"Q_Id":1364173,"Users Score":57,"Answer":"If it is running in the Python shell use Ctrl + Z, otherwise locate the python process and kill it.","Q_Score":142,"Tags":"python","A_Id":1364179,"CreationDate":"2009-09-01T19:17:00.000","Title":"Stopping python using ctrl+c","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a python script that uses threads and makes lots of HTTP requests. I think what's happening is that while a HTTP request (using urllib2) is reading, it's blocking and not responding to CtrlC to stop the program. Is there any way around this?","AnswerCount":12,"Available Count":7,"Score":0.0166651236,"is_accepted":false,"ViewCount":234195,"Q_Id":1364173,"Users Score":1,"Answer":"For the record, what killed the process on my Raspberry 3B+ (running raspbian) was Ctrl+'. On my French AZERTY keyboard, the touch ' is also number 4.","Q_Score":142,"Tags":"python","A_Id":54316333,"CreationDate":"2009-09-01T19:17:00.000","Title":"Stopping python using ctrl+c","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a python script that uses threads and makes lots of HTTP requests. I think what's happening is that while a HTTP request (using urllib2) is reading, it's blocking and not responding to CtrlC to stop the program. Is there any way around this?","AnswerCount":12,"Available Count":7,"Score":1.2,"is_accepted":true,"ViewCount":234195,"Q_Id":1364173,"Users Score":206,"Answer":"On Windows, the only sure way is to use CtrlBreak. 
Stops every python script instantly!\n(Note that on some keyboards, \"Break\" is labeled as \"Pause\".)","Q_Score":142,"Tags":"python","A_Id":1364199,"CreationDate":"2009-09-01T19:17:00.000","Title":"Stopping python using ctrl+c","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to play with inter-process communication and since I could not figure out how to use named pipes under Windows I thought I'll use network sockets. Everything happens locally. The server is able to launch slaves in a separate process and listens on some port. The slaves do their work and submit the result to the master. How do I figure out which port is available? I assume I cannot listen on port 80 or 21?\nI'm using Python, if that cuts the choices down.","AnswerCount":5,"Available Count":2,"Score":1.0,"is_accepted":false,"ViewCount":154753,"Q_Id":1365265,"Users Score":47,"Answer":"Bind the socket to port 0. A random free port from 1024 to 65535 will be selected. You may retrieve the selected port with getsockname() right after bind().","Q_Score":189,"Tags":"python,sockets,ipc,port","A_Id":1365281,"CreationDate":"2009-09-02T00:07:00.000","Title":"On localhost, how do I pick a free port number?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm trying to play with inter-process communication and since I could not figure out how to use named pipes under Windows I thought I'll use network sockets. Everything happens locally. The server is able to launch slaves in a separate process and listens on some port. The slaves do their work and submit the result to the master. How do I figure out which port is available? 
I assume I cannot listen on port 80 or 21?\nI'm using Python, if that cuts the choices down.","AnswerCount":5,"Available Count":2,"Score":0.1586485043,"is_accepted":false,"ViewCount":154753,"Q_Id":1365265,"Users Score":4,"Answer":"You can listen on whatever port you want; generally, user applications should listen to ports 1024 and above (through 65535). The main thing if you have a variable number of listeners is to allocate a range to your app - say 20000-21000, and CATCH EXCEPTIONS. That is how you will know if a port is unusable (used by another process, in other words) on your computer. \nHowever, in your case, you shouldn't have a problem using a single hard-coded port for your listener, as long as you print an error message if the bind fails.\nNote also that most of your sockets (for the slaves) do not need to be explicitly bound to specific port numbers - only sockets that wait for incoming connections (like your master here) will need to be made a listener and bound to a port. If a port is not specified for a socket before it is used, the OS will assign a useable port to the socket. When the master wants to respond to a slave that sends it data, the address of the sender is accessible when the listener receives data.\nI presume you will be using UDP for this?","Q_Score":189,"Tags":"python,sockets,ipc,port","A_Id":1365283,"CreationDate":"2009-09-02T00:07:00.000","Title":"On localhost, how do I pick a free port number?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm trying to use Twisted in a sort of spidering program that manages multiple client connections. I'd like to maintain of a pool of about 5 clients working at one time. 
The functionality of each client is to connect to a specified IRC server that it gets from a list, enter a specific channel, and then save the list of the users in that channel to a database.\nThe problem I'm having is more architectural than anything. I'm fairly new to Twisted and I don't know what options are available for managing multiple clients. I'm assuming the easiest way is to simply have each ClientCreator instance die off once it's completed its work and have a central loop that can check to see if there's room to add a new client. I would think this isn't a particularly unusual problem so I'm hoping to glean some information from other peoples' experiences.","AnswerCount":3,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":4725,"Q_Id":1365737,"Users Score":4,"Answer":"The best option is really just to do the obvious thing here. Don't have a loop, or a repeating timed call; just have handlers that do the right thing.\nKeep a central connection-management object around, and make event-handling methods feed it the information it needs to keep going. When it starts, make 5 outgoing connections. Keep track of how many are in progress, maintain a list with them in it. When a connection succeeds (in connectionMade) update the list to remember the connection's new state. When a connection completes (in connectionLost) tell the connection manager; its response should be to remove that connection and make a new connection somewhere else. 
In the middle, it should be fairly obvious how to fire off a request for the names you need and stuff them into a database (waiting for the database insert to complete before dropping your IRC connection, most likely, by waiting for the Deferred to come back from adbapi).","Q_Score":6,"Tags":"python,twisted","A_Id":1408498,"CreationDate":"2009-09-02T03:45:00.000","Title":"Managing multiple Twisted client connections","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I want to download a list of web pages. I know wget can do this. However downloading every URL in every five minutes and save them to a folder seems beyond the capability of wget.\nDoes anyone knows some tools either in java or python or Perl which accomplishes the task?\nThanks in advance.","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":2545,"Q_Id":1367189,"Users Score":5,"Answer":"Write a bash script that uses wget and put it in your crontab to run every 5 minutes. (*\/5 * * * *)\nIf you need to keep a history of all these web pages, set a variable at the beginning of your script with the current unixtime and append it to the output filenames.","Q_Score":1,"Tags":"python,download,webpage,wget,web-crawler","A_Id":1367209,"CreationDate":"2009-09-02T11:39:00.000","Title":"How to download a webpage in every five minutes?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I would like to read a website asynchronously, which isnt possible with urllib as far as I know. 
Now I tried reading with plain sockets, but HTTP is giving me hell.\nI run into all kinds of funky encodings, for example transfer-encoding: chunked, have to parse all that stuff manually, and I feel like I'm coding C, not Python, at the moment. \nIsn't there a nicer way, like urllib, asynchronously? I don't really feel like re-implementing the whole HTTP specification, when it's all been done before.\nTwisted isn't an option currently.\nGreetings, \nTom","AnswerCount":4,"Available Count":1,"Score":0.049958375,"is_accepted":false,"ViewCount":2352,"Q_Id":1367453,"Users Score":1,"Answer":"The furthest I got was using the modified asynchttp that codeape suggested. I have tried to use both asyncore\/asynchat and asynchttp, with lots of pain. It took me far too long to try to fix all the bugs in it (there's a method handle_read, nearly copied from asyncore, only badly indented, that was giving me headaches with chunked encoding). Also, asyncore and asynchat are best not used, according to some hints I got on Google.\nI have settled on Twisted, but that's obviously out of the question for you.\nIt might also depend on what you are trying to do with your application and why you want async requests, whether threads are an option or not, and whether you're doing GUI programming or something else, so if you could share some more information, that's always good. 
If not, I'd vote for the threaded version suggested above; it offers much more readability and maintainability.","Q_Score":7,"Tags":"python,web-services,sockets","A_Id":1372289,"CreationDate":"2009-09-02T12:39:00.000","Title":"Reading a website with asyncore","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to understand crc32 to generate a unique url for a web page.\nIf we use crc32, what is the maximum number of urls that can be used so that we can avoid duplicates?\nWhat could be the approximate string length to keep the checksum within 2^32?\nWhen I tried a UUID for a url and converted the uuid bytes to base 64, I could reduce it to 22 chars long. I wonder if I can reduce it still further.\nMostly I want to convert the url (maximum 1024 chars) to a shortened id.","AnswerCount":6,"Available Count":5,"Score":1.2,"is_accepted":true,"ViewCount":3162,"Q_Id":1401218,"Users Score":7,"Answer":"There is no such number as the \"maximum number of urls can be used so that we can avoid duplicates\" for CRC32.\nThe problem is that CRC32 can produce duplicates, and it's not a function of how many values you throw at it, it's a function of what those values look like.\nSo you might have a collision on the second url, if you're unlucky.\nYou should not base your algorithm on producing a unique hash; instead produce a unique value for each url manually.","Q_Score":5,"Tags":"c#,python,url,crc32,short-url","A_Id":1401231,"CreationDate":"2009-09-09T18:16:00.000","Title":"CRC32 to make short URL for web","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to understand crc32 to generate the unique url for web page.\nIf we 
use the crc32, what is the maximum number of urls can be used so that we can avoid duplicates?\nWhat could be the approximative string length to keep the checksum to be 2^32?\nWhen I tried UUID for an url and convert the uuid bytes to base 64, I could reduce to 22 chars long. I wonder I can reduce still further.\nMostly I want to convert the url (maximum 1024 chars) to shorted id.","AnswerCount":6,"Available Count":5,"Score":0.1325487884,"is_accepted":false,"ViewCount":3162,"Q_Id":1401218,"Users Score":4,"Answer":"If you're already storing the full URL in a database table, an integer ID is pretty short, and can be made shorter by converting it to base 16, 64, or 85. If you can use a UUID, you can use an integer, and you may as well, since it's shorter and I don't see what benefit a UUID would provide in your lookup table.","Q_Score":5,"Tags":"c#,python,url,crc32,short-url","A_Id":1401237,"CreationDate":"2009-09-09T18:16:00.000","Title":"CRC32 to make short URL for web","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to understand crc32 to generate the unique url for web page.\nIf we use the crc32, what is the maximum number of urls can be used so that we can avoid duplicates?\nWhat could be the approximative string length to keep the checksum to be 2^32?\nWhen I tried UUID for an url and convert the uuid bytes to base 64, I could reduce to 22 chars long. I wonder I can reduce still further.\nMostly I want to convert the url (maximum 1024 chars) to shorted id.","AnswerCount":6,"Available Count":5,"Score":0.0333209931,"is_accepted":false,"ViewCount":3162,"Q_Id":1401218,"Users Score":1,"Answer":"CRC32 means cyclic redundancy check with 32 bits where any arbitrary amount of bits is summed up to a 32 bit check sum. 
And checksum functions are not injective; that means multiple input values can have the same output value. So you cannot invert the function.","Q_Score":5,"Tags":"c#,python,url,crc32,short-url","A_Id":1401243,"CreationDate":"2009-09-09T18:16:00.000","Title":"CRC32 to make short URL for web","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to understand crc32 to generate the unique url for web page.\nIf we use the crc32, what is the maximum number of urls can be used so that we can avoid duplicates?\nWhat could be the approximative string length to keep the checksum to be 2^32?\nWhen I tried UUID for an url and convert the uuid bytes to base 64, I could reduce to 22 chars long. I wonder I can reduce still further.\nMostly I want to convert the url (maximum 1024 chars) to shorted id.","AnswerCount":6,"Available Count":5,"Score":0.0,"is_accepted":false,"ViewCount":3162,"Q_Id":1401218,"Users Score":0,"Answer":"No, even if you use md5, or any other checksum, the URL CAN BE a duplicate; it all depends on your luck.\nSo don't make a unique url based on those checksums.","Q_Score":5,"Tags":"c#,python,url,crc32,short-url","A_Id":1401286,"CreationDate":"2009-09-09T18:16:00.000","Title":"CRC32 to make short URL for web","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to understand crc32 to generate the unique url for web page.\nIf we use the crc32, what is the maximum number of urls can be used so that we can avoid duplicates?\nWhat could be the approximative string length to keep the checksum to be 2^32?\nWhen I tried UUID for an url and convert the uuid bytes to base 64, I could reduce to 22 chars 
long. I wonder I can reduce still further.\nMostly I want to convert the url (maximum 1024 chars) to shorted id.","AnswerCount":6,"Available Count":5,"Score":0.0665680765,"is_accepted":false,"ViewCount":3162,"Q_Id":1401218,"Users Score":2,"Answer":"The right way to make a short URL is to store the full one in the database and publish something that maps to the row index. A compact way is to use the Base64 of the row ID, for example. Or you could use a UID for the primary key and show that.\nDo not use a checksum, because it's too small and very likely to conflict. A cryptographic hash is larger and less likely, but it's still not the right way to go.","Q_Score":5,"Tags":"c#,python,url,crc32,short-url","A_Id":1401331,"CreationDate":"2009-09-09T18:16:00.000","Title":"CRC32 to make short URL for web","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Okay. So I have about 250,000 high resolution images. What I want to do is go through all of them and find ones that are corrupted. If you know what 4scrape is, then you know the nature of the images I.\nCorrupted, to me, is the image is loaded into Firefox and it says\n\nThe image \u201csuch and such image\u201d cannot be displayed, because it contains errors.\n\nNow, I could select all of my 250,000 images (~150gb) and drag-n-drop them into Firefox. That would be bad though, because I don't think Mozilla designed Firefox to open 250,000 tabs. No, I need a way to programmatically check whether an image is corrupted.\nDoes anyone know a PHP or Python library which can do something along these lines? 
Or an existing piece of software for Windows?\nI have already removed obviously corrupted images (such as ones that are 0 bytes) but I'm about 99.9% sure that there are more diseased images floating around in my throng of a collection.","AnswerCount":5,"Available Count":1,"Score":0.1194272985,"is_accepted":false,"ViewCount":23397,"Q_Id":1401527,"Users Score":3,"Answer":"If your exact requirement is that it shows correctly in Firefox, you may have a difficult time - the only way to be sure would be to link to the exact same image-loading source code as Firefox.\nBasic image corruption (the file is incomplete) can be detected simply by trying to open the file using any number of image libraries.\nHowever, many images can fail to display simply because they stretch a part of the file format that the particular viewer you are using can't handle (GIF in particular has a lot of these edge cases, but you can find JPEG and the rare PNG file that can only be displayed in specific viewers). There are also some ugly JPEG edge cases where the file appears to be uncorrupted in viewer X, but in reality the file has been cut short and is only displaying correctly because very little information has been lost (Firefox can show some cut-off JPEGs correctly [you get a grey bottom], but others result in Firefox seeming to load them half way and then displaying the error message instead of the partial image).","Q_Score":20,"Tags":"php,python,image","A_Id":1401566,"CreationDate":"2009-09-09T19:15:00.000","Title":"How do I programmatically check whether an image (PNG, JPEG, or GIF) is corrupted?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"How can I find out the http request my python cgi received? 
I need different behaviors for HEAD and GET.\nThanks!","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":6773,"Q_Id":1417715,"Users Score":0,"Answer":"Why do you need to distinguish between GET and HEAD?\nNormally you shouldn't distinguish and should treat a HEAD request just like a GET. This is because a HEAD request is meant to return the exact same headers as a GET. The only difference is there will be no response content. Just because there is no response content though doesn't mean you no longer have to return a valid Content-Length header, or other headers, which are dependent on the response content.\nIn mod_wsgi, which various people are pointing you at, it will actually deliberately change the request method from HEAD to GET in certain cases to guard against people who wrongly treat HEAD differently. The specific case where this is done is where an Apache output filter is registered. The reason that it is done in this case is because the output filter may expect to see the response content and from that generate additional response headers. If you were to decide not to bother to generate any response content for a HEAD request, you will deprive the output filter of the content and the headers they add may then not agree with what would be returned from a GET request. The end result of this is that you can stuff up caches and the operation of the browser.\nThe same can apply equally for CGI scripts behind Apache as output filters can still be added in that case as well. 
For CGI scripts, though, there is nothing in place to protect against users being stupid and doing things differently for a HEAD request.","Q_Score":12,"Tags":"python,http,httpwebrequest,cgi","A_Id":1420886,"CreationDate":"2009-09-13T13:12:00.000","Title":"Detecting the http request type (GET, HEAD, etc) from a python cgi","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to access a SOAP API using Suds. The SOAP API documentation states that I have to provide three cookies with some login data. How can I accomplish this?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1581,"Q_Id":1417902,"Users Score":4,"Answer":"Set a \"Cookie\" HTTP request header having the required name\/value pairs. This is how cookie values are usually transmitted in HTTP-based systems. You can add multiple key\/value pairs in the same http header.\nSingle cookie:\n\nCookie: name1=value1\n\nMultiple cookies (separated by semicolons):\n\nCookie: name1=value1; name2=value2","Q_Score":2,"Tags":"python,soap,cookies,suds","A_Id":1417916,"CreationDate":"2009-09-13T14:44:00.000","Title":"Sending cookies in a SOAP request using Suds","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using Selenium RC to automate some browser operations but I want the browser to be invisible. Is this possible? How? What about Selenium Grid? Can I hide the Selenium RC window also?","AnswerCount":12,"Available Count":4,"Score":0.0665680765,"is_accepted":false,"ViewCount":90474,"Q_Id":1418082,"Users Score":4,"Answer":"If you're on Windows, one option is to run the tests under a different user account. 
This means the browser and java server will not be visible to your own account.","Q_Score":93,"Tags":"python,selenium,selenium-rc","A_Id":1750751,"CreationDate":"2009-09-13T16:07:00.000","Title":"Is it possible to hide the browser in Selenium RC?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using Selenium RC to automate some browser operations but I want the browser to be invisible. Is this possible? How? What about Selenium Grid? Can I hide the Selenium RC window also?","AnswerCount":12,"Available Count":4,"Score":0.049958375,"is_accepted":false,"ViewCount":90474,"Q_Id":1418082,"Users Score":3,"Answer":"This is how I run my tests with maven on a linux desktop (Ubuntu). I got fed up not being able to work with the firefox webdriver always taking focus.\nI installed xvfb \n\nxvfb-run -a mvn clean install\n\nThats it","Q_Score":93,"Tags":"python,selenium,selenium-rc","A_Id":11261393,"CreationDate":"2009-09-13T16:07:00.000","Title":"Is it possible to hide the browser in Selenium RC?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using Selenium RC to automate some browser operations but I want the browser to be invisible. Is this possible? How? What about Selenium Grid? Can I hide the Selenium RC window also?","AnswerCount":12,"Available Count":4,"Score":0.0,"is_accepted":false,"ViewCount":90474,"Q_Id":1418082,"Users Score":0,"Answer":"On MacOSX, I haven't been able to hide the browser window, but at least I figured out how to move it to a different display so it doesn't disrupt my workflow so much. 
While Firefox is running tests, just control-click its icon in the dock, select Options, and Assign to Display 2.","Q_Score":93,"Tags":"python,selenium,selenium-rc","A_Id":24662478,"CreationDate":"2009-09-13T16:07:00.000","Title":"Is it possible to hide the browser in Selenium RC?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using Selenium RC to automate some browser operations but I want the browser to be invisible. Is this possible? How? What about Selenium Grid? Can I hide the Selenium RC window also?","AnswerCount":12,"Available Count":4,"Score":0.0,"is_accepted":false,"ViewCount":90474,"Q_Id":1418082,"Users Score":0,"Answer":"Using headless Chrome would be your best bet, or you could post directly to the site to interact with it, which would save a lot of compute power for other things\/processes. I use this when testing out web automation bots that search for shoes on multiple sites using cpu heavy elements, the more power you save, and the simpler your program is, the easier it is to run multiple processes at a time with muhc greater speed and reliability.","Q_Score":93,"Tags":"python,selenium,selenium-rc","A_Id":55484939,"CreationDate":"2009-09-13T16:07:00.000","Title":"Is it possible to hide the browser in Selenium RC?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a python program which starts up a PHP script using the subprocess.Popen() function. 
The PHP script needs to communicate back-and-forth with Python, and I am trying to find an easy but robust way to manage the message sending\/receiving.\nI have already written a working protocol using basic sockets, but it doesn't feel very robust - I don't have any logic to handle dropped messages, and I don't even fully understand how sockets work, which leaves me uncertain about what else could go wrong.\nAre there any generic libraries or IPC frameworks which are easier than raw sockets?\n\nATM I need something which supports Python and PHP, but in the future I may want to be able to use C, Perl and Ruby also.\nI am looking for something robust, i.e. when the server or client crashes, the other party needs to be able to recover gracefully.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":2322,"Q_Id":1424593,"Users Score":0,"Answer":"You could look at shared memory or named pipes, but I think there are two more likely options, assuming at least one of these languages is being used for a webapp:\nA. Use your database's atomicity. In Python, begin a transaction, put a message into a table, and end the transaction. From PHP, begin a transaction, take a message out of the table or mark it \"read\", and end the transaction. Make your PHP and\/or Python self-aware enough not to post the same messages twice. Voila; reliable (and scalable) IPC, using existing web architecture.\nB. Make your webserver (assuming a webapp) capable of running both PHP and Python, locking down any internal processes to just localhost access, and then call them using XML-RPC or SOAP from your other language using standard libraries. 
This is also scalable, as you can change your URLs and security lock-downs later.","Q_Score":1,"Tags":"php,python,ipc","A_Id":1424687,"CreationDate":"2009-09-15T00:47:00.000","Title":"Easy, Robust IPC between Python and PHP","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to simulate MyApp that imports a module (ResourceX) which requires a resource that is not available at the time and will not work. \nA solution for this is to make and import a mock module of ResourceX (named ResourceXSimulated) and divert it to MyApp as ResourceX. I want to do this in order to avoid breaking a lot of code and getting all kinds of exceptions from MyApp.\nI am using Python and it should be something like:\n\"Import ResourceXSimulated as ResourceX\"\n\"ResourceX.getData()\", actually calls ResourceXSimulated.getData()\nLooking forward to finding out if Python supports this kind of redirection.\nCheers.\nADDITIONAL INFO: I have access to the source files.\nUPDATE: I am thinking of adding as little code as possible to MyApp regarding using the fake module and adding this code near the import statements.","AnswerCount":5,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":256,"Q_Id":1443173,"Users Score":0,"Answer":"Yes, Python can do that, and so long as the methods exposed in the ResourceXSimulated module \"look and smell\" like those of the original module, the application should not see much of any difference (other than, I'm assuming, bogus data fillers, different response times and such).","Q_Score":1,"Tags":"python,testing,mocking,module,monkeypatching","A_Id":1443195,"CreationDate":"2009-09-18T08:12:00.000","Title":"Is it possible to divert a module in python? 
(ResourceX diverted to ResourceXSimulated)","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to simulate MyApp that imports a module (ResourceX) which requires a resource that is not available at the time and will not work. \nA solution for this is to make and import a mock module of ResourceX (named ResourceXSimulated) and divert it to MyApp as ResourceX. I want to do this in order to avoid breaking a lot of code and getting all kinds of exceptions from MyApp.\nI am using Python and it should be something like:\n\"Import ResourceXSimulated as ResourceX\"\n\"ResourceX.getData()\", actually calls ResourceXSimulated.getData()\nLooking forward to finding out if Python supports this kind of redirection.\nCheers.\nADDITIONAL INFO: I have access to the source files.\nUPDATE: I am thinking of adding as little code as possible to MyApp regarding using the fake module and adding this code near the import statements.","AnswerCount":5,"Available Count":2,"Score":0.0399786803,"is_accepted":false,"ViewCount":256,"Q_Id":1443173,"Users Score":1,"Answer":"Yes, it's possible. Some starters:\nYou can \"divert\" modules by manipulating sys.modules. It keeps a list of imported modules, and there you can make your module appear under the same name as the original one. You must do this manipulation before any module imports the module you want to fake, though.\nYou can also make a package with a different name, but in that package actually use the original module name, for your completely different module. This works well as long as the original module isn't installed.\nIn none of these cases can you use both modules at the same time. For that you need to monkey-patch the original module.\nAnd of course: It's perfectly possible to just call the new module with the old name. 
But it might be confusing.","Q_Score":1,"Tags":"python,testing,mocking,module,monkeypatching","A_Id":1443281,"CreationDate":"2009-09-18T08:12:00.000","Title":"Is it possible to divert a module in python? (ResourceX diverted to ResourceXSimulated)","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to stream binary data using Python. I do not have any idea how to achieve it. I created a Python socket program using SOCK_DGRAM. The problem with SOCK_STREAM is that it does not work over the internet, as our ISP doesn't allow a TCP server socket. \nI want to transmit screenshots periodically to a remote computer.\nI have an idea of maintaining a queue of binary data and having two threads write and read synchronously. \nI do not want to use VNC.\nHow do I do it?\nI wrote a server socket and client socket using SOCK_STREAM; it was working on localhost but did not work over the internet even when the respective IPs were used. We also tried running a Tomcat web server on one PC and tried accessing it from another PC over the internet, and it was not working.","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":1737,"Q_Id":1451349,"Users Score":2,"Answer":"There are two problems here.\nFirst problem: you will need to be able to address the remote party. This is related to what you referred to as \"does not work over Internet as most ISP don't allow TCP server socket\". 
It is usually difficult because the other party could be placed behind a NAT or a firewall.\nAs for the second problem, actually transmitting the data once you can make a TCP connection, a Python socket would work if you can address the remote party.","Q_Score":1,"Tags":"python,sockets","A_Id":1451356,"CreationDate":"2009-09-20T16:01:00.000","Title":"How to stream binary data in python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to stream binary data using Python. I do not have any idea how to achieve it. I created a Python socket program using SOCK_DGRAM. The problem with SOCK_STREAM is that it does not work over the internet, as our ISP doesn't allow a TCP server socket. \nI want to transmit screenshots periodically to a remote computer.\nI have an idea of maintaining a queue of binary data and having two threads write and read synchronously. \nI do not want to use VNC.\nHow do I do it?\nI wrote a server socket and client socket using SOCK_STREAM; it was working on localhost but did not work over the internet even when the respective IPs were used. We also tried running a Tomcat web server on one PC and tried accessing it from another PC over the internet, and it was not working.","AnswerCount":2,"Available Count":2,"Score":0.2913126125,"is_accepted":false,"ViewCount":1737,"Q_Id":1451349,"Users Score":3,"Answer":"SOCK_STREAM is the correct way to stream data.\nWhat you're saying about ISPs makes very little sense; they don't control whether or not your machine listens on a certain port on an interface. 
Perhaps you're talking about firewall\/addressing issues?\nIf you insist on using UDP (and you shouldn't, because you'll have to worry about packets arriving out of order or not arriving at all) then you'll need to first use socket.bind and then socket.recvfrom in a loop to read data and keep track of open connections. It'll be hard work to do correctly.","Q_Score":1,"Tags":"python,sockets","A_Id":1451365,"CreationDate":"2009-09-20T16:01:00.000","Title":"How to stream binary data in python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am sending packets from one pc to other. I am using python socket socket.socket(socket.AF_INET, socket.SOCK_DGRAM ). Do we need to take care of order in which packets are received ?\nIn ISO-OSI model layers below transport layer handle all packets communication. Do all ISO-OSI layers present in the program ? Or some of them present in operating system ? \nOn localhost I get all packets in order. \nWill it make any difference over internet ?","AnswerCount":2,"Available Count":2,"Score":0.4621171573,"is_accepted":false,"ViewCount":2274,"Q_Id":1458087,"Users Score":5,"Answer":"SOCK_DGRAM means you want to send packets by UDP -- no order guarantee, no guarantee of reception, no guarantee of lack of repetition. SOCK_STREAM would imply TCP -- no packet boundary guarantee, but (unless the connection's dropped;-) guarantee of order, reception, and no duplication. TCP\/IP, the networking model that won the heart and soul of every live practitioner and made the Internet happen, is not compliant with ISO\/OSI -- a standard designed at the drafting table that never really won in the real world.\nThe Internet as she lives and breathes is TCP\/IP all the way. 
Don't rely on tests done on a low-latency local network as in ANY way representative of what will happen out there in the real world. Welcome to the real world, BTW, and, good luck (you'll need some!-).","Q_Score":2,"Tags":"python,sockets","A_Id":1458109,"CreationDate":"2009-09-22T04:23:00.000","Title":"Python socket programming and ISO-OSI model","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am sending packets from one pc to other. I am using python socket socket.socket(socket.AF_INET, socket.SOCK_DGRAM ). Do we need to take care of order in which packets are received ?\nIn ISO-OSI model layers below transport layer handle all packets communication. Do all ISO-OSI layers present in the program ? Or some of them present in operating system ? \nOn localhost I get all packets in order. \nWill it make any difference over internet ?","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":2274,"Q_Id":1458087,"Users Score":4,"Answer":"To answer your immediate question, if you're using SOCK_STREAM, then you're actually using TCP, which is an implementation of the transport layer which does take care of packet ordering and integrity for you. So it sounds like that's what you want to use. SOCK_DGRAM is actually UDP, which doesn't take care of any integrity for you.\n\nDo we need to take care of order in which packets are received ? In ISO-OSI model layers below transport layer handle all packets communication. 
Do all ISO-OSI layers present in the program ?\n\nJust to clear this up, in the ISO-OSI model, all the layers below the transport layer handle sending of a single packet from one computer to the other, and don't \"understand\" the concept of packet ordering (it doesn't apply to them).\nIn this model, there is another layer (the session layer, above the transport layer) which is responsible for defining the session behavior. It is this layer which decides whether to have things put in place to prevent reordering, to ensure integrity, and so on.\nIn the modern world, the ISO-OSI model is more of an idealistic template, rather than an actual model. TCP\/IP is the actual implementation which is used almost everywhere.\nIn TCP\/IP, the transport layer is the one that has the role of defining whether there is any session behavior or not.","Q_Score":2,"Tags":"python,sockets","A_Id":1458734,"CreationDate":"2009-09-22T04:23:00.000","Title":"Python socket programming and ISO-OSI model","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a file contained in a key in my S3 bucket. I want to create a new key, which will contain the same file. 
Is it possible to do without downloading that file?\nI'm looking for a solution in Python (and preferably boto library).","AnswerCount":6,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":9405,"Q_Id":1464961,"Users Score":0,"Answer":"Note that the 'copy' method on the Key object has a \"preserve_acl\" parameter (False by default) that will copy the source's ACL to the destination object.","Q_Score":17,"Tags":"python,amazon-s3,boto","A_Id":7366501,"CreationDate":"2009-09-23T09:34:00.000","Title":"How to clone a key in Amazon S3 using Python (and boto)?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a file contained in a key in my S3 bucket. I want to create a new key, which will contain the same file. Is it possible to do without downloading that file?\nI'm looking for a solution in Python (and preferably boto library).","AnswerCount":6,"Available Count":3,"Score":0.1325487884,"is_accepted":false,"ViewCount":9405,"Q_Id":1464961,"Users Score":4,"Answer":"Browsing through boto's source code I found that the Key object has a \"copy\" method. Thanks for your suggestion about CopyObject operation.","Q_Score":17,"Tags":"python,amazon-s3,boto","A_Id":1466148,"CreationDate":"2009-09-23T09:34:00.000","Title":"How to clone a key in Amazon S3 using Python (and boto)?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a file contained in a key in my S3 bucket. I want to create a new key, which will contain the same file. 
Is it possible to do without downloading that file?\nI'm looking for a solution in Python (and preferably boto library).","AnswerCount":6,"Available Count":3,"Score":0.0665680765,"is_accepted":false,"ViewCount":9405,"Q_Id":1464961,"Users Score":2,"Answer":"S3 allows object by object copy.\nThe CopyObject operation creates a copy of an object when you specify the key and bucket of a source object and the key and bucket of a target destination. \nNot sure if boto has a compact implementation.","Q_Score":17,"Tags":"python,amazon-s3,boto","A_Id":1465978,"CreationDate":"2009-09-23T09:34:00.000","Title":"How to clone a key in Amazon S3 using Python (and boto)?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"What steps would be necessary, and what kind of maintenance would be expected if I wanted to contribute a module to the Python standard API? For example I have a module that encapsulates automated update functionality similar to Java's JNLP.","AnswerCount":2,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":129,"Q_Id":1465302,"Users Score":2,"Answer":"First, look at modules on pypi. Download several that are related to what you're doing so you can see exactly what the state of the art is. \nFor example, look at easy_install for an example of something like what you're proposing.\nAfter looking at other modules, write yours to look like theirs.\nThen publish information on your blog.\nWhen people show an interest, post it to SourceForge or something similar. 
This will allow you to get started slowly.\nWhen people start using it, you'll know exactly what kind of maintenance you need to do.\nThen, when demand ramps up, you can create the PyPI information required to publish it on PyPI.\nFinally, when it becomes so popular that people demand it be added to Python as a standard part of the library, many other folks will be involved in helping you mature your offering.","Q_Score":4,"Tags":"python,api","A_Id":1465505,"CreationDate":"2009-09-23T10:56:00.000","Title":"What is involved in adding to the standard Python API?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is it possible to extract the type of object or class name from a message received on a UDP socket in Python using metaclasses\/reflection?\nThe scenario is like this:\n\nReceive a UDP buffer on a socket.\nThe UDP buffer is a serialized binary string (a message). But the type of message is not known at this time. So it can't be de-serialized into the appropriate message.\nNow, my question is: can I know the classname of the serialized binary string (received as a UDP buffer) so that I can de-serialize it into the appropriate message and process it further?\n\nThanks in Advance.","AnswerCount":3,"Available Count":2,"Score":0.1325487884,"is_accepted":false,"ViewCount":642,"Q_Id":1487582,"Users Score":2,"Answer":"What you receive from the UDP socket is a byte string -- that's all the \"type of object or class name\" that's actually there. If the byte string was built as a serialized object (e.g. via pickle, or maybe marshal etc.) then you can deserialize it back to an object (using e.g. pickle.loads) and then introspect to your heart's content. 
But most byte strings were built otherwise and will raise exceptions when you try loads on them;-).\nEdit: the OP's edit mentions the string is \"a serialized object\" but still doesn't say what serialization approach produced it, and that makes all the difference. pickle (and, for a much narrower range of types, marshal) place enough information on the strings they produce (via the .dumps functions of the modules) that their respective loads functions can deserialize back to the appropriate type; but other approaches (e.g., struct.pack) do not place such metadata in the strings they produce, so it's not feasible to deserialize without other, \"out of band\" so to speak, indications about the format in use. So, O.P., how was that serialized string of bytes produced in the first place...?","Q_Score":1,"Tags":"python,sockets,udp","A_Id":1487619,"CreationDate":"2009-09-28T15:08:00.000","Title":"Type of object from udp buffer in python using metaclasses\/reflection","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is it possible to extract the type of object or class name from a message received on a UDP socket in Python using metaclasses\/reflection?\nThe scenario is like this:\n\nReceive a UDP buffer on a socket.\nThe UDP buffer is a serialized binary string (a message). But the type of message is not known at this time. So it can't be de-serialized into the appropriate message.\nNow, my question is: can I know the classname of the serialized binary string (received as a UDP buffer) so that I can de-serialize it into the appropriate message and process it further?\n\nThanks in Advance.","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":642,"Q_Id":1487582,"Users Score":0,"Answer":"Updated answer after updated question:\n\"But the type of message is not known at this time. 
So it can't be de-serialized into the appropriate message.\"\nWhat you get is a sequence of bytes. How that sequence of bytes should be interpreted is a question of how the protocol looks. Only you know what protocol you use. So if you don't know the type of message, then there is nothing you can do about it. If you are to receive a stream of data and interpret it, you must know what that data means; otherwise you can't interpret it.\nIt's as simple as that.\n\"Now, my question is: can I know the classname of the serialized binary string\"\nYes. The classname is \"str\", as for all strings. (Unless you use Python 3, in which case you would not get a str but a bytes object). The data inside that str has no classname. It's just binary data. It means whatever the sender wants it to mean.\nAgain, I need to stress that you should not try to make this into a generic question. Explain exactly what you are trying to do, not generically, but specifically.","Q_Score":1,"Tags":"python,sockets,udp","A_Id":1487602,"CreationDate":"2009-09-28T15:08:00.000","Title":"Type of object from udp buffer in python using metaclasses\/reflection","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've developed a chat server using the Twisted framework in Python. It works fine with a Telnet client. But when I use my Flash client, problems appear... \n(the Flash client works fine with my old PHP chat server; I rewrote the server in Python to gain performance) \nThe connection is established between the Flash client and the Twisted server: XMLSocket.onConnect returns TRUE. So it's not a problem of permission with the policy file. \nI'm not able to send any message from the Flash client with the XMLSocket function send(); nothing is received on the server side. 
I tried to end those messages with '\\n' or '\\n\\0' or '\\0' without success.\nDo you have any clue?","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":927,"Q_Id":1489931,"Users Score":0,"Answer":"I found out that the default line delimiter used by Twisted is '\\r\\n'. It can be overridden in your subclass with:\nLineOnlyReceiver.delimiter = '\\n'","Q_Score":1,"Tags":"python,flash,twisted","A_Id":1490530,"CreationDate":"2009-09-29T00:02:00.000","Title":"Chat server with Twisted framework in python can't receive data from flash client","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've developed a chat server using the Twisted framework in Python. It works fine with a Telnet client. But when I use my Flash client, problems appear... \n(the Flash client works fine with my old PHP chat server; I rewrote the server in Python to gain performance) \nThe connection is established between the Flash client and the Twisted server: XMLSocket.onConnect returns TRUE. So it's not a problem of permission with the policy file. \nI'm not able to send any message from the Flash client with the XMLSocket function send(); nothing is received on the server side. I tried to end those messages with '\\n' or '\\n\\0' or '\\0' without success.\nDo you have any clue?","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":927,"Q_Id":1489931,"Users Score":1,"Answer":"Changing LineOnlyReceiver.delimiter is a pretty bad idea, since that changes the delimiter for all instances of LineOnlyReceiver (unless they've changed it themselves on a subclass or on the instance). 
If you ever happen to use any such code, it will probably break.\nYou should change delimiter by setting it on your LineOnlyReceiver subclass, since it's your subclass that has this requirement.","Q_Score":1,"Tags":"python,flash,twisted","A_Id":1729776,"CreationDate":"2009-09-29T00:02:00.000","Title":"Chat server with Twisted framework in python can't receive data from flash client","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to create XML using the ElementTree object structure in python. It all works very well except when it comes to processing instructions. I can create a PI easily using the factory function ProcessingInstruction(), but it doesn't get added into the elementtree. I can add it manually, but I can't figure out how to add it above the root element where PI's are normally placed. Anyone know how to do this? I know of plenty of alternative methods of doing it, but it seems that this must be built in somewhere that I just can't find.","AnswerCount":5,"Available Count":1,"Score":0.0798297691,"is_accepted":false,"ViewCount":4332,"Q_Id":1489949,"Users Score":2,"Answer":"Yeah, I don't believe it's possible, sorry. ElementTree provides a simpler interface to (non-namespaced) element-centric XML processing than DOM, but the price for that is that it doesn't support the whole XML infoset.\nThere is no apparent way to represent the content that lives outside the root element (comments, PIs, the doctype and the XML declaration), and these are also discarded at parse time. 
(Aside: this appears to include any default attributes specified in the DTD internal subset, which makes ElementTree, strictly speaking, a non-compliant XML processor.)\nYou can probably work around it by subclassing or monkey-patching the Python native ElementTree implementation's write() method to call _write on your extra PIs before _writeing the _root, but it could be a bit fragile.\nIf you need support for the full XML infoset, it's probably best to stick with DOM.","Q_Score":6,"Tags":"python,xml,elementtree","A_Id":1490057,"CreationDate":"2009-09-29T00:09:00.000","Title":"ElementTree in Python 2.6.2 Processing Instructions support?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Having great luck working with single-source feed parsing in Universal Feed Parser, but now I need to run multiple feeds through it and generate chronologically interleaved output (not RSS). Seems like I'll need to iterate through URLs and stuff every entry into a list of dictionaries, then sort that by the entry timestamps and take a slice off the top. That seems do-able, but pretty expensive resource-wise (I'll cache it aggressively for that reason). \nJust wondering if there's an easier way - an existing library that works with feedparser to do simple aggregation, for example. Sample code? Gotchas or warnings? Thanks.","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":1280,"Q_Id":1496067,"Users Score":1,"Answer":"There is already a suggestion to store the data in a database, e.g. 
bsddb.btopen() or any RDBMS.\nTake a look at heapq.merge() and bisect.insort(), or use one of the B-tree implementations if you'd like to merge the data in memory.","Q_Score":0,"Tags":"python,django","A_Id":1496616,"CreationDate":"2009-09-30T04:04:00.000","Title":"Aggregating multiple feeds with Universal Feed Parser","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I've got a form, returned by a Python mechanize Browser and obtained via the forms() method. How can I perform an XPath search inside the form node, that is, among the descendant nodes of the HTML form node? TIA\nUpd:\nHow do I save the HTML code of the form?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":596,"Q_Id":1509404,"Users Score":1,"Answer":"By parsing the browser contents with lxml, which has XPath support.","Q_Score":0,"Tags":"python,mechanize","A_Id":1509434,"CreationDate":"2009-10-02T13:10:00.000","Title":"How to search XPath inside Python ClientForm object?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am making a community for web-comic artists who will be able to sync their existing websites to this site.\nHowever, I am debating which CMS I should use: Drupal or Wordpress.\nI have heard great things about Drupal, where it is really aimed at social networking. I actually got to play a little bit in the back end of Drupal and it seemed quite complicated to me, but I am not going to give up on fully understanding how Drupal works.\nAs for Wordpress, I am very familiar with the Framework. 
I have the ability to extend it to do what I want, but I am hesitating because I think the framework is not built for communities (I think it may slow down in the future).\nI also have an unrelated question as well: Should I go with a Python CMS? \nI have heard great things about Python and how much better it is compared to PHP.\nYour advice is appreciated.","AnswerCount":6,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":3472,"Q_Id":1513062,"Users Score":9,"Answer":"Difficult decision. Normally I would say 'definitely Drupal' without hesitation, as Drupal was built as a system for community sites from the beginning, whereas Wordpress still shows its heritage as a blogging solution, at least that's what I hear quite often. But then, I've been working with Drupal all the time recently and haven't had a closer look at Wordpress for quite a while.\nThat said, Drupal has grown into a pretty complex system over the years, so there is quite a learning curve for newcomers. Given that you are already familiar with Wordpress, it might be more efficient for you to go with that, provided it can do all that you need.\nSo I would recommend Drupal, but you should probably get some opinions from people experienced with Wordpress concerning the possibility of turning it into a community site first.\n\nAs for the Python vs. PHP CMS question, I'd say that the quality of a CMS is a function of the ability of its developers, the maturity of the system, the surrounding 'ecosystem', etc., and not of the particular language used to build it. (And discussions about the quality of one established language vs. another? 
Well - let's just not go there ;)","Q_Score":2,"Tags":"python,wordpress,drupal,content-management-system,social-networking","A_Id":1513657,"CreationDate":"2009-10-03T07:17:00.000","Title":"Drupal or Wordpress CMS as a Social Network?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"There is a Python mechanize object with a form with almost all values set, but not yet submitted. Now I want to fetch another page using cookies from mechanize instance, but without resetting the page, forms and so on, e.g. so that the values remain set (I just need to get body string of another page, nothing else). So is there a way to:\n\nTell mechanize not to reset the page (perhaps, through UserAgentBase)?\nMake urllib2 use mechanize's cookie jar? NB: urllib2.HTTPCookieProcessor(self.br._ua_handlers[\"_cookies\"].cookiejar) doesn't work\nAny other way to pass cookie to urllib?","AnswerCount":3,"Available Count":1,"Score":0.1325487884,"is_accepted":false,"ViewCount":1699,"Q_Id":1513823,"Users Score":2,"Answer":"Some wild ideas:\n\nFetch the second page before filling in the form? \nOr fetch the new page and then goBack()? Although maybe that will reset the values.","Q_Score":3,"Tags":"python,mechanize","A_Id":1513899,"CreationDate":"2009-10-03T14:08:00.000","Title":"How to get a http page using mechanize cookies?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Can someone please tell how to write a Non-Blocking server code using the socket library alone.Thanks","AnswerCount":4,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":2625,"Q_Id":1515686,"Users Score":2,"Answer":"Why socket alone? 
It's so much simpler to use another standard library module, asyncore -- and if you can't, at the very least select!\nIf you're constrained by your homework's condition to only use socket, then I hope you can at least add threading (or multiprocessing), otherwise you're seriously out of luck -- you can make sockets with timeout, but juggling timing-out sockets without the needed help from any of the other obvious standard library modules (to support either async or threaded serving) is a serious mess indeed-y...;-).","Q_Score":0,"Tags":"python,sockets,nonblocking","A_Id":1515698,"CreationDate":"2009-10-04T05:35:00.000","Title":"Non Blocking Server in Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I was trying to find out how I can go about verifying a self-signed certificate by a server in python. I could not find much data in google. I also want to make sure that the server url \nThanks in advance for any insights.","AnswerCount":4,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":6958,"Q_Id":1519074,"Users Score":0,"Answer":"It is impossible to verify a self-signed certificate because of its very nature: it is self-signed. 
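The select-based fallback suggested in the answer above can be sketched with nothing but the standard library's socket and select modules. This is a hypothetical minimal echo server, not the asker's homework solution; the port and buffer size are illustrative assumptions:

```python
# Minimal non-blocking echo server using only socket + select.
import select
import socket

def serve(host="127.0.0.1", port=9123):
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind((host, port))
    server.listen(5)
    server.setblocking(False)

    sockets = [server]
    while sockets:
        # Wait up to 1s for any socket to become readable.
        readable, _, _ = select.select(sockets, [], [], 1.0)
        for sock in readable:
            if sock is server:
                conn, _addr = sock.accept()   # new client connection
                conn.setblocking(False)
                sockets.append(conn)
            else:
                data = sock.recv(4096)
                if data:
                    sock.sendall(data)        # echo back (small payloads)
                else:                         # client closed the connection
                    sockets.remove(sock)
                    sock.close()
```

Note that `asyncore`, the module the answer recommends first, was removed from the standard library in Python 3.12, so `select` (or its higher-level wrapper `selectors`) is the durable choice today.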
\nYou have to sign a certificate by some other trusted third party's certificate to be able to verify anything, and after this you can add that third party's certificate to the list of your trusted CAs and then you will be able to verify certificates signed by that certificate\/CA.\nIf you want recommendations about how to do this in Python, you should provide the name of the SSL library you are using, since there is a choice of SSL libraries for Python.","Q_Score":9,"Tags":"python,ssl","A_Id":1520341,"CreationDate":"2009-10-05T09:37:00.000","Title":"Verifying peer in SSL using python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm new to Python and reading someone else's code:\nshould urllib.urlopen() be followed by urllib.close()? Otherwise, one would leak connections, correct?","AnswerCount":5,"Available Count":2,"Score":1.0,"is_accepted":false,"ViewCount":53885,"Q_Id":1522636,"Users Score":6,"Answer":"Strictly speaking, this is true. But in practice, once (if) urllib goes out of scope, the connection will be closed by the automatic garbage collector.","Q_Score":73,"Tags":"python,urllib","A_Id":1522662,"CreationDate":"2009-10-05T21:59:00.000","Title":"should I call close() after urllib.urlopen()?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm new to Python and reading someone else's code:\nshould urllib.urlopen() be followed by urllib.close()? 
Otherwise, one would leak connections, correct?","AnswerCount":5,"Available Count":2,"Score":0.0399786803,"is_accepted":false,"ViewCount":53885,"Q_Id":1522636,"Users Score":1,"Answer":"You basically do need to explicitly close your connection when using IronPython. The automatic closing on going out of scope relies on the garbage collection. I ran into a situation where the garbage collection did not run for so long that Windows ran out of sockets. I was polling a webserver at high frequency (i.e. as high as IronPython and the connection would allow, ~7Hz). I could see the \"established connections\" (i.e. sockets in use) go up and up on PerfMon. The solution was to call gc.collect() after every call to urlopen.","Q_Score":73,"Tags":"python,urllib","A_Id":55125414,"CreationDate":"2009-10-05T21:59:00.000","Title":"should I call close() after urllib.urlopen()?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'd like to know if the following situation and scripts are at all possible: \nI'm looking to have a photo-gallery (Javascript) webpage that will display in order of the latest added to the Dropbox folder (PHP or Python?). \nThat is, when someone adds a picture to the Dropbox folder, there is a script on the webpage that will check the Dropbox folder and then embed those images onto the webpage via the newest added and the webpage will automatically be updated. \nIs it at all possible to link to a Dropbox folder via a webpage? If so, how would I best go about using scripts to automate the process of updating the webpage with new content? 
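As a present-day footnote to the answers above: in Python 3 the object returned by `urllib.request.urlopen` is a context manager, so the connection is released deterministically instead of whenever the garbage collector happens to run. A minimal sketch:

```python
import urllib.request

def fetch(url):
    # The response is closed on exit from the with-block,
    # so no cleanup is left to the garbage collector.
    with urllib.request.urlopen(url) as resp:
        return resp.read()
```

This sidesteps both the "rely on GC" answer and the IronPython workaround of forcing `gc.collect()` after every call.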
\nAny and all help is very appreciated, thanks!","AnswerCount":3,"Available Count":1,"Score":0.1325487884,"is_accepted":false,"ViewCount":4092,"Q_Id":1522951,"Users Score":2,"Answer":"If you can install the Dropbox client on the webserver, then it would be simple to let it sync your folder and then iterate over the contents of the folder with a programming language (PHP, Python, .NET, etc.) and produce the gallery page. This could be done every time the page is requested, or as a scheduled job which recreates a static page. This is all dependent on you having access to install the client on your server.","Q_Score":2,"Tags":"php,python,html,dropbox","A_Id":2074899,"CreationDate":"2009-10-05T23:43:00.000","Title":"Update a gallery webpage via Dropbox?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm supposed to display images at certain times of the day on the webpage. Please can anyone tell me how to go about it?","AnswerCount":4,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":328,"Q_Id":1524713,"Users Score":0,"Answer":"You could make a Date object in javascript. 
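The sync-then-iterate approach from the Dropbox answer above might be sketched in Python like this; the extension whitelist and the idea of sorting by modification time are my illustrative assumptions, not part of the answer:

```python
import os

# File types the hypothetical gallery would display.
IMAGE_EXTS = {".jpg", ".jpeg", ".png", ".gif"}

def newest_images(folder):
    """Return paths of image files in `folder`, newest first."""
    paths = [
        os.path.join(folder, name)
        for name in os.listdir(folder)
        if os.path.splitext(name)[1].lower() in IMAGE_EXTS
    ]
    return sorted(paths, key=os.path.getmtime, reverse=True)
```

A scheduled job (cron, for instance) could feed this list into a template to regenerate the static gallery page.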
Check the current time, and depending on the time, set the img src to whatever image you want for that time of day :) or hide the image through myimg.style.visibility = \"hidden\" if you don't want to display an image at that moment.","Q_Score":1,"Tags":"python,django","A_Id":1524724,"CreationDate":"2009-10-06T10:15:00.000","Title":"How do I display images at different times on webpage","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm supposed to display images at certain times of the day on the webpage. Please can anyone tell me how to go about it?","AnswerCount":4,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":328,"Q_Id":1524713,"Users Score":0,"Answer":"If you need to change the image before a page refresh, you could use a jQuery AJAX call to get the correct image. jQuery has some interval functionality which would allow this.","Q_Score":1,"Tags":"python,django","A_Id":1524812,"CreationDate":"2009-10-06T10:15:00.000","Title":"How do I display images at different times on webpage","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Anyone know of a good feed parser for python 3.1?\nI was using feedparser for 2.5, but it doesn't seem to be ported to 3.1 yet, and it's apparently more complicated than just running 2to3.py on it.\nAny help?","AnswerCount":4,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":3122,"Q_Id":1527230,"Users Score":0,"Answer":"Start porting feedparser to Python 3.1.","Q_Score":8,"Tags":"python,rss,python-3.x,feed","A_Id":1568128,"CreationDate":"2009-10-06T18:21:00.000","Title":"Python 3.1 RSS Parser?","Data Science and Machine Learning":0,"Database and 
SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a server that has to respond to HTTP and XML-RPC requests. Right now I have an instance of SimpleXMLRPCServer, and an instance of BaseHTTPServer.HTTPServer with a custom request handler, running on different ports. I'd like to run both services on a single port. \nI think it should be possible to modify the CGIXMLRPCRequestHandler class to also serve custom HTTP requests on some paths, or alternately, to use multiple request handlers based on what path is requested. I'm not really sure what the cleanest way to do this would be, though.","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":849,"Q_Id":1540011,"Users Score":0,"Answer":"Is there a reason not to run a real webserver out front with url rewrites to the two ports you are usign now? It's going to make life much easier in the long run","Q_Score":0,"Tags":"python,http,xml-rpc","A_Id":1540053,"CreationDate":"2009-10-08T19:45:00.000","Title":"Python HTTP server with XML-RPC","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a server that has to respond to HTTP and XML-RPC requests. Right now I have an instance of SimpleXMLRPCServer, and an instance of BaseHTTPServer.HTTPServer with a custom request handler, running on different ports. I'd like to run both services on a single port. \nI think it should be possible to modify the CGIXMLRPCRequestHandler class to also serve custom HTTP requests on some paths, or alternately, to use multiple request handlers based on what path is requested. 
I'm not really sure what the cleanest way to do this would be, though.","AnswerCount":3,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":849,"Q_Id":1540011,"Users Score":0,"Answer":"Use SimpleXMLRPCDispatcher class directly from your own request handler.","Q_Score":0,"Tags":"python,http,xml-rpc","A_Id":1543370,"CreationDate":"2009-10-08T19:45:00.000","Title":"Python HTTP server with XML-RPC","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"If yes are there any frameworks\/Tutorials\/tips\/etc recommended? \nN00b at Python but I have tons of PHP experience and wanted to expand my skill set.\nI know Python is great at server side execution, just wanted to know about client side as well.","AnswerCount":8,"Available Count":3,"Score":1.0,"is_accepted":false,"ViewCount":54388,"Q_Id":1540214,"Users Score":7,"Answer":"Silverlight can run IronPython, so you can make Silverlight applications. Which is client-side.","Q_Score":68,"Tags":"python,client-side","A_Id":1540379,"CreationDate":"2009-10-08T20:27:00.000","Title":"Can Python be used for client side web development?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"If yes are there any frameworks\/Tutorials\/tips\/etc recommended? \nN00b at Python but I have tons of PHP experience and wanted to expand my skill set.\nI know Python is great at server side execution, just wanted to know about client side as well.","AnswerCount":8,"Available Count":3,"Score":-0.024994793,"is_accepted":false,"ViewCount":54388,"Q_Id":1540214,"Users Score":-1,"Answer":"No. 
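The accepted suggestion above (use the SimpleXMLRPCDispatcher class directly from your own request handler) might look roughly like this with Python 3 module names. The `add` function and the decision to treat every POST as XML-RPC are my illustrative assumptions:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
from xmlrpc.server import SimpleXMLRPCDispatcher

# One dispatcher instance shared by all requests.
dispatcher = SimpleXMLRPCDispatcher(allow_none=True, encoding="utf-8")
dispatcher.register_function(lambda a, b: a + b, "add")

class HybridHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Ordinary HTTP content on GET.
        body = b"plain HTTP content"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def do_POST(self):
        # XML-RPC on POST: hand the raw request body to the dispatcher.
        length = int(self.headers["Content-Length"])
        request = self.rfile.read(length)
        response = dispatcher._marshaled_dispatch(request)
        self.send_response(200)
        self.send_header("Content-Type", "text/xml")
        self.send_header("Content-Length", str(len(response)))
        self.end_headers()
        self.wfile.write(response)
```

Note that `_marshaled_dispatch` carries a leading underscore; it is what the stock `SimpleXMLRPCRequestHandler` itself calls, but it is not a guaranteed public API. Dispatching on `self.path` instead of the HTTP method would let GET and POST coexist on the same URL space.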
Browsers don't run Python.","Q_Score":68,"Tags":"python,client-side","A_Id":1540233,"CreationDate":"2009-10-08T20:27:00.000","Title":"Can Python be used for client side web development?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"If yes are there any frameworks\/Tutorials\/tips\/etc recommended? \nN00b at Python but I have tons of PHP experience and wanted to expand my skill set.\nI know Python is great at server side execution, just wanted to know about client side as well.","AnswerCount":8,"Available Count":3,"Score":0.0748596907,"is_accepted":false,"ViewCount":54388,"Q_Id":1540214,"Users Score":3,"Answer":"On Windows, any language that registers for the Windows Scripting Host can run in IE. At least the ActiveState version of Python could do that; I seem to recall that has been superseded by a more official version these days.\nBut that solution requires the user to install a python interpreter and run some script or .reg file to put the correct \"magic\" into the registry for the hooks to work.","Q_Score":68,"Tags":"python,client-side","A_Id":7437506,"CreationDate":"2009-10-08T20:27:00.000","Title":"Can Python be used for client side web development?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I need to implement a small test utility which consumes extremely simple SOAP XML (HTTP POST) messages. This is a protocol which I have to support, and it's not my design decision to use SOAP (just trying to prevent those \"why do you use protocol X?\" answers) \nI'd like to use stuff that's already in the basic python 2.6.x installation. What's the easiest way to do that? 
The sole SOAP message is really simple, I'd rather not use any enterprisey tools like WSDL class generation if possible. \nI already implemented the same functionality earlier in Ruby with just a plain HTTPServlet::AbstractServlet and the REXML parser. Worked fine.\nI thought I could build a similar solution in Python with BaseHTTPServer, BaseHTTPRequestHandler and the ElementTree parser, but it's not obvious to me how I can read the contents of my incoming SOAP POST message. The documentation is not that great IMHO.","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":515,"Q_Id":1547520,"Users Score":1,"Answer":"I wrote something like this in Boo, using a .Net HTTPListener, because I too had to implement someone else's defined WSDL.\nThe WSDL I was given used document\/literal form (you'll need to make some adjustments to this information if your WSDL uses rpc\/encoded). I wrapped the HTTPListener in a class that allowed client code to register callbacks by SOAP action, and then gave that class a Start method that would kick off the HTTPListener. 
You should be able to do something very similar in Python, with a do_POST() method on a BaseHTTPRequestHandler to:\n\nextract the SOAP action from the HTTP headers\nuse elementtree to extract the SOAP header and SOAP body from the POSTed HTTP request\ncall the defined callback for the SOAP action, sending these extracted values\nreturn the response text given by the callback in a corresponding SOAP envelope; if the callback raises an exception, catch it and re-wrap it as a SOAP fault\n\nThen you just implement a callback per SOAP action, which gets the XML content passed to it, parses this with elementtree, performs the desired action (or a mock action if this is a tester), and constructs the necessary response XML (I was not too proud to just create this explicitly using string interpolation, but you could use elementtree to create this by serializing a Python response object).\nIt will help if you can get some real SOAP sample messages in order to help you not tear out your hair, especially in the part where you create the necessary response XML.","Q_Score":1,"Tags":"python,http,soap","A_Id":1547642,"CreationDate":"2009-10-10T09:49:00.000","Title":"A minimalist, non-enterprisey approach for a SOAP server in Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have two websites in php and python.\nWhen a user sends a request to the server I need php\/python to send an HTTP POST request to a remote server. I want to reply to the user immediately without waiting for a response from the remote server.\nIs it possible to continue running a php\/python script after sending a response to the user? 
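A minimal sketch of the SOAP-serving steps described above, using only the standard library (note that the Python handler hook is spelled do_POST() on BaseHTTPRequestHandler). The `callbacks` registry and the response template are my assumptions, not part of the original answer:

```python
import xml.etree.ElementTree as ET
from http.server import BaseHTTPRequestHandler

# ElementTree prefixes tags with the namespace in Clark notation.
SOAP_NS = "{http://schemas.xmlsoap.org/soap/envelope/}"

RESPONSE_TEMPLATE = (
    '<?xml version="1.0"?>'
    '<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">'
    "<soap:Body>{body}</soap:Body></soap:Envelope>"
)

# SOAPAction header value -> callable(body_element) -> response XML string.
callbacks = {}

class SoapHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the POSTed envelope and parse it with ElementTree.
        length = int(self.headers["Content-Length"])
        envelope = ET.fromstring(self.rfile.read(length))
        body = envelope.find(SOAP_NS + "Body")
        # Dispatch on the SOAPAction header (quotes stripped).
        action = self.headers.get("SOAPAction", "").strip('"')
        result = callbacks[action](body[0])
        payload = RESPONSE_TEMPLATE.format(body=result).encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/xml")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)
```

Wrapping the callback call in try/except and substituting a SOAP Fault body on failure, as the answer suggests, is a straightforward extension.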
In that case I'll first reply to the user and only then send the HTTP POST request to the remote server.\nIs it possible to create a non-blocking HTTP client in php\/python without handling the response at all?\nA solution that will have the same logic in php and python is preferable for me.\nThanks","AnswerCount":6,"Available Count":1,"Score":0.0333209931,"is_accepted":false,"ViewCount":12400,"Q_Id":1555517,"Users Score":1,"Answer":"What you need to do is have the PHP script execute another script that does the server call and then sends the user the request.","Q_Score":14,"Tags":"php,python,nonblocking","A_Id":1555614,"CreationDate":"2009-10-12T16:22:00.000","Title":"sending a non-blocking HTTP POST request","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I currently have some Ruby code used to scrape some websites. 
I was using Ruby because at the time I was using Ruby on Rails for a site, and it just made sense.\nNow I'm trying to port this over to Google App Engine, and keep getting stuck.\nI've ported Python Mechanize to work with Google App Engine, but it doesn't support DOM inspection with XPATH.\nI've tried the built-in ElementTree, but it choked on the first HTML blob I gave it when it ran into '&mdash'.\nDo I keep trying to hack ElementTree in there, or do I try to use something else?\nthanks,\nMark","AnswerCount":5,"Available Count":2,"Score":1.0,"is_accepted":false,"ViewCount":1959,"Q_Id":1563165,"Users Score":11,"Answer":"Beautiful Soup.","Q_Score":2,"Tags":"python,google-app-engine,xpath,beautifulsoup,mechanize","A_Id":1563177,"CreationDate":"2009-10-13T21:58:00.000","Title":"What pure Python library should I use to scrape a website?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I currently have some Ruby code used to scrape some websites. 
I was using Ruby because at the time I was using Ruby on Rails for a site, and it just made sense.\nNow I'm trying to port this over to Google App Engine, and keep getting stuck.\nI've ported Python Mechanize to work with Google App Engine, but it doesn't support DOM inspection with XPATH.\nI've tried the built-in ElementTree, but it choked on the first HTML blob I gave it when it ran into '&mdash'.\nDo I keep trying to hack ElementTree in there, or do I try to use something else?\nthanks,\nMark","AnswerCount":5,"Available Count":2,"Score":1.0,"is_accepted":false,"ViewCount":1959,"Q_Id":1563165,"Users Score":6,"Answer":"lxml -- 100x better than elementtree","Q_Score":2,"Tags":"python,google-app-engine,xpath,beautifulsoup,mechanize","A_Id":1563301,"CreationDate":"2009-10-13T21:58:00.000","Title":"What pure Python library should I use to scrape a website?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Standard libraries (xmlrpclib+SimpleXMLRPCServer in Python 2 and xmlrpc.server in Python 3) report all errors (including usage errors) as python exceptions which is not suitable for public services: exception strings are often not easy understandable without python knowledge and might expose some sensitive information. It's not hard to fix this, but I prefer to avoid reinventing the wheel. Is there a third party library with better error reporting? 
I'm interested in good fault messages for all usage errors and hiding internals when reporting internal errors (this is better done with logging).\nxmlrpclib already have the constants for such errors: NOT_WELLFORMED_ERROR, UNSUPPORTED_ENCODING, INVALID_ENCODING_CHAR, INVALID_XMLRPC, METHOD_NOT_FOUND, INVALID_METHOD_PARAMS, INTERNAL_ERROR.","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":1435,"Q_Id":1571598,"Users Score":1,"Answer":"I don't think you have a library specific problem. When using any library or framework you typically want to trap all errors, log them somewhere, and throw up \"Oops, we're having problems. You may want to contact us at x@x.com with error number 100 and tell us what you did.\" So wrap your failable entry points in try\/catches, create a generic logger and off you go...","Q_Score":1,"Tags":"python,xml-rpc","A_Id":1608160,"CreationDate":"2009-10-15T10:50:00.000","Title":"XML-RPC server with better error reporting","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is there a python library which implements a standalone TCP stack?\nI can't use the usual python socket library because I'm receiving a stream of packets over a socket (they are being tunneled to me over this socket). When I receive a TCP SYN packet addressed to a particular port, I'd like to accept the connection (send a syn-ack) and then get the data sent by the other end (ack'ing appropriately).\nI was hoping there was some sort of TCP stack already written which I could utilize. Any ideas? 
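One hedged sketch of the "trap everything, log it, return something generic" advice above, reusing the INTERNAL_ERROR constant from xmlrpc.client that the question mentions (the subclass and the fault message are my own):

```python
import logging
from xmlrpc.client import Fault, INTERNAL_ERROR
from xmlrpc.server import SimpleXMLRPCServer

class SafeXMLRPCServer(SimpleXMLRPCServer):
    def _dispatch(self, method, params):
        try:
            return super()._dispatch(method, params)
        except Fault:
            raise  # deliberate faults pass through unchanged
        except Exception:
            # Full traceback stays in the server log; the client
            # only ever sees a generic fault.
            logging.exception("error while dispatching %s", method)
            raise Fault(INTERNAL_ERROR, "internal server error")
```

The stock request handler already marshals a raised Fault into a proper `<fault>` response, so this one override hides Python internals from remote callers.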
I've used lwip in the past for a C project -- something along those lines in python would be perfect.","AnswerCount":5,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":8496,"Q_Id":1581087,"Users Score":0,"Answer":"I know this isn't directly Python related, but if you are looking to do heavy network processing, you should consider Erlang instead of Python. Just a suggestion really... you can always take a shot at doing this with Twisted... if you feel adventurous (and have lots of time on your side) ;-)","Q_Score":8,"Tags":"python,tcp,network-programming,network-protocols,raw-sockets","A_Id":1581097,"CreationDate":"2009-10-17T00:50:00.000","Title":"Python TCP stack implementation","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm writing a simple crawler in Python using the threading and Queue modules. I fetch a page, check links and put them into a queue, when a certain thread has finished processing page, it grabs the next one from the queue. I'm using an array for the pages I've already visited to filter the links I add to the queue, but if there are more than one threads and they get the same links on different pages, they put duplicate links to the queue. 
So how can I find out whether some url is already in the queue to avoid putting it there again?","AnswerCount":13,"Available Count":5,"Score":0.0153834017,"is_accepted":false,"ViewCount":12544,"Q_Id":1581895,"Users Score":1,"Answer":"The way I solved this (actually I did this in Scala, not Python) was to use both a Set and a Queue, only adding links to the queue (and set) if they did not already exist in the set.\nBoth the set and queue were encapsulated in a single thread, exposing only a queue-like interface to the consumer threads.\nEdit: someone else suggested SQLite and that is also something I am considering, if the set of visited URLs needs to grow large. (Currently each crawl is only a few hundred pages so it easily fits in memory.) But the database is something that can also be encapsulated within the set itself, so the consumer threads need not be aware of it.","Q_Score":17,"Tags":"python,multithreading,queue","A_Id":1581902,"CreationDate":"2009-10-17T10:21:00.000","Title":"How check if a task is already in python Queue?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm writing a simple crawler in Python using the threading and Queue modules. I fetch a page, check links and put them into a queue, when a certain thread has finished processing page, it grabs the next one from the queue. I'm using an array for the pages I've already visited to filter the links I add to the queue, but if there are more than one threads and they get the same links on different pages, they put duplicate links to the queue. 
So how can I find out whether some url is already in the queue to avoid putting it there again?","AnswerCount":13,"Available Count":5,"Score":0.0153834017,"is_accepted":false,"ViewCount":12544,"Q_Id":1581895,"Users Score":1,"Answer":"SQLite is so simple to use and would fit perfectly... just a suggestion.","Q_Score":17,"Tags":"python,multithreading,queue","A_Id":1581903,"CreationDate":"2009-10-17T10:21:00.000","Title":"How check if a task is already in python Queue?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm writing a simple crawler in Python using the threading and Queue modules. I fetch a page, check links and put them into a queue, when a certain thread has finished processing page, it grabs the next one from the queue. I'm using an array for the pages I've already visited to filter the links I add to the queue, but if there are more than one threads and they get the same links on different pages, they put duplicate links to the queue. So how can I find out whether some url is already in the queue to avoid putting it there again?","AnswerCount":13,"Available Count":5,"Score":-0.0461211021,"is_accepted":false,"ViewCount":12544,"Q_Id":1581895,"Users Score":-3,"Answer":"Also, instead of a set you might try using a dictionary. 
Operations on sets tend to get rather slow when they're big, whereas a dictionary lookup is nice and quick.\nMy 2c.","Q_Score":17,"Tags":"python,multithreading,queue","A_Id":1581908,"CreationDate":"2009-10-17T10:21:00.000","Title":"How check if a task is already in python Queue?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm writing a simple crawler in Python using the threading and Queue modules. I fetch a page, check links and put them into a queue, when a certain thread has finished processing page, it grabs the next one from the queue. I'm using an array for the pages I've already visited to filter the links I add to the queue, but if there are more than one threads and they get the same links on different pages, they put duplicate links to the queue. So how can I find out whether some url is already in the queue to avoid putting it there again?","AnswerCount":13,"Available Count":5,"Score":0.0153834017,"is_accepted":false,"ViewCount":12544,"Q_Id":1581895,"Users Score":1,"Answer":"Why only use the array (ideally, a dictionary would be even better) to filter things you've already visited? Add things to your array\/dictionary as soon as you queue them up, and only add them to the queue if they're not already in the array\/dict. 
Then you have 3 simple separate things:\n\nLinks not yet seen (neither in queue nor array\/dict)\nLinks scheduled to be visited (in both queue and array\/dict)\nLinks already visited (in array\/dict, not in queue)","Q_Score":17,"Tags":"python,multithreading,queue","A_Id":1581920,"CreationDate":"2009-10-17T10:21:00.000","Title":"How check if a task is already in python Queue?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm writing a simple crawler in Python using the threading and Queue modules. I fetch a page, check links and put them into a queue, when a certain thread has finished processing page, it grabs the next one from the queue. I'm using an array for the pages I've already visited to filter the links I add to the queue, but if there are more than one threads and they get the same links on different pages, they put duplicate links to the queue. So how can I find out whether some url is already in the queue to avoid putting it there again?","AnswerCount":13,"Available Count":5,"Score":0.0,"is_accepted":false,"ViewCount":12544,"Q_Id":1581895,"Users Score":0,"Answer":"instead of \"array of pages already visited\" make an \"array of pages already added to the queue\"","Q_Score":17,"Tags":"python,multithreading,queue","A_Id":1582421,"CreationDate":"2009-10-17T10:21:00.000","Title":"How check if a task is already in python Queue?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have snapshots of multiple webpages taken at 2 times. What is a reliable method to determine which webpages have been modified? 
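The set-plus-queue encapsulation described in the crawler answers above might be sketched like this (the class name is mine); the set remembers every URL ever enqueued, and a lock keeps the check-and-add step atomic across worker threads:

```python
import queue
import threading

class UniqueQueue:
    """A queue that silently drops URLs it has already seen."""

    def __init__(self):
        self._queue = queue.Queue()
        self._seen = set()
        self._lock = threading.Lock()

    def put(self, url):
        with self._lock:
            if url in self._seen:   # already queued or visited
                return False
            self._seen.add(url)
        self._queue.put(url)
        return True

    def get(self, timeout=None):
        return self._queue.get(timeout=timeout)
```

Worker threads only ever see the queue-like interface, matching the Scala answer's design; swapping the in-memory set for SQLite, as another answer suggests, would be an internal change invisible to the consumers.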
\nI can't rely on something like an RSS feed, and I need to ignore minor noise like date text.\nIdeally I am looking for a Python solution, but an intuitive algorithm would also be great.\nThanks!","AnswerCount":4,"Available Count":1,"Score":-0.049958375,"is_accepted":false,"ViewCount":2946,"Q_Id":1587902,"Users Score":-1,"Answer":"just take snapshots of the files with MD5 or SHA1...if the values differ the next time you check, then they are modified.","Q_Score":6,"Tags":"python,diff,webpage,snapshot","A_Id":1588461,"CreationDate":"2009-10-19T10:13:00.000","Title":"how to determine if webpage has been modified","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm looking for a way to prevent multiple hosts from issuing simultaneous commands to a Python XMLRPC listener. The listener is responsible for running scripts to perform tasks on that system that would fail if multiple users tried to issue these commands at the same time. 
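The hash-the-snapshot idea above can work, but only with a normalization pass for the "minor noise like date text" the question wants to ignore; otherwise every timestamp flips the hash. A sketch, where the date regex is purely illustrative:

```python
import hashlib
import re

def fingerprint(html):
    # Strip a known noise pattern before hashing. The ISO-date
    # regex here is an assumed example of "minor noise", not a
    # general-purpose solution.
    normalized = re.sub(r"\d{4}-\d{2}-\d{2}", "", html)
    return hashlib.sha1(normalized.encode("utf-8")).hexdigest()

def changed(old_html, new_html):
    return fingerprint(old_html) != fingerprint(new_html)
```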
Is there a way I can block all incoming requests until the single instance has completed?","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":5976,"Q_Id":1589150,"Users Score":0,"Answer":"There are several choices:\n\nUse a single-process, single-thread server like SimpleXMLRPCServer to process requests sequentially.\nUse threading.Lock() in a threaded server.\nUse some external locking mechanism (like the lockfile module or the GET_LOCK() function in MySQL) in a multiprocess server.","Q_Score":7,"Tags":"python,xml-rpc","A_Id":1590010,"CreationDate":"2009-10-19T14:54:00.000","Title":"Python XMLRPC with concurrent requests","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm looking for a way to prevent multiple hosts from issuing simultaneous commands to a Python XMLRPC listener. The listener is responsible for running scripts to perform tasks on that system that would fail if multiple users tried to issue these commands at the same time. Is there a way I can block all incoming requests until the single instance has completed?","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":5976,"Q_Id":1589150,"Users Score":0,"Answer":"Can you have another communication channel? 
If yes, then have a \"call me back when it is my turn\" protocol running between the server and the clients.\nIn other words, each client would register its intention to issue requests to the server and the said server would \"callback\" the next-up client when it is ready.","Q_Score":7,"Tags":"python,xml-rpc","A_Id":1589181,"CreationDate":"2009-10-19T14:54:00.000","Title":"Python XMLRPC with concurrent requests","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have an XML document that I would like to update after it already contains data.\nI thought about opening the XML file in \"a\" (append) mode. The problem is that the new data will be written after the root closing tag.\nHow can I delete the last line of a file, then start writing data from that point, and then close the root tag?\nOf course I could read the whole file and do some string manipulations, but I don't think that's the best idea..","AnswerCount":9,"Available Count":1,"Score":0.0444152037,"is_accepted":false,"ViewCount":153237,"Q_Id":1591579,"Users Score":2,"Answer":"To make this process more robust, you could consider using the SAX parser (that way you don't have to hold the whole file in memory), read & write till the end of tree and then start appending.","Q_Score":49,"Tags":"python,xml,io","A_Id":1591732,"CreationDate":"2009-10-19T22:52:00.000","Title":"How to update\/modify an XML file in python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"So, within a webapp.RequestHandler subclass I would use self.request.uri to get the request URI. But, I can't access this outside of a RequestHandler and so no go. 
Any ideas?\nI'm running Python and I'm new at it as well as GAE.","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":786,"Q_Id":1593483,"Users Score":2,"Answer":"You should generally be doing everything within some sort of RequestHandler or the equivalent in your non-WebApp framework. However, if you really insist on being stuck in the early 1990s and writing plain CGI scripts, the environment variables SERVER_NAME and PATH_INFO may be what you want; see a CGI reference for more info.","Q_Score":1,"Tags":"python,google-app-engine","A_Id":1593985,"CreationDate":"2009-10-20T09:33:00.000","Title":"Get the request uri outside of a RequestHandler in Google App Engine (Python)","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I simply want to create an automatic script that can run (preferably) on a web-server, and simply 'clicks' on an object of a web page. I am new to Python or whatever language this would be used for so I thought I would go here to ask where to start! This may seem like I want the script to scam advertisements or do something illegal, but it's simply to interact with another website.","AnswerCount":4,"Available Count":1,"Score":1.0,"is_accepted":false,"ViewCount":23157,"Q_Id":1597833,"Users Score":6,"Answer":"It doesn't have to be Python, I've seen it done in PHP and Perl, and you can probably do it in many other languages.\nThe general approach is:\n1) You give your app a URL and it makes an HTTP request to that URL. I think I have seen this done with php\/wget. Probably many other ways to do it.\n2) Scan the HTTP response for other URLs that you want to \"click\" (really, sending HTTP requests to them), and then send requests to those. 
Parsing the links usually requires some understanding of regular expressions (if you are not familiar with regular expressions, brush up on them - they're important stuff ;)).","Q_Score":17,"Tags":"python,bots","A_Id":1597878,"CreationDate":"2009-10-20T23:18:00.000","Title":"Where do I start with a web bot?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have an XML document \"abc.xml\": \nI need to write a function replace(name, newvalue) which can replace the value node with tag 'name' with the new value and write it back to the disk. Is this possible in python? How should I do this?","AnswerCount":2,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":2424,"Q_Id":1602919,"Users Score":2,"Answer":"Sure it is possible. \nThe xml.etree.ElementTree module will help you with parsing XML, finding tags and replacing values.\nIf you know a little bit more about the XML file you want to change, you can probably make the task a bit easier than if you need to write a generic function that will handle any XML file.\nIf you are already familiar with DOM parsing, there's an xml.dom package to use instead of the ElementTree one.","Q_Score":0,"Tags":"python,xml,python-3.x","A_Id":1603011,"CreationDate":"2009-10-21T19:02:00.000","Title":"Setting value for a node in XML document in Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have been taking a few graduate classes with a professor I like a lot and she raves about SAS all of the time. 
I \"grew up\" learning stats using SPSS, and with their recent decisions to integrate their stats engine with R and Python, I find it difficult to muster up the desire to learn anything else. I am not that strong in Python, but I can get by with most tasks that I want to accomplish.\nAdmittedly, I do see the upside to SAS, but I have learned to do some pretty cool things combining SPSS and Python, like grabbing data from the web and analyzing it real-time. Plus, I really like that I can use the GUI to generate the base for my code before I add my final modifications. In SAS, it looks like I would have to program everything by hand (ignoring Enterprise Guide).\nMy question is this. Can you grab data from the web and parse it into SAS datasets? This is a deal-breaker for me. What about interfacing with API's like Google Analytics, Twitter, etc? Are there external IDE's that you can use to write and execute SAS programs? \nAny help will be greatly appreciated.\nBrock","AnswerCount":2,"Available Count":1,"Score":0.4621171573,"is_accepted":false,"ViewCount":2215,"Q_Id":1628372,"Users Score":5,"Answer":"yes. sas 9.2 can interact with soap and restful apis. i haven't had much success with twitter. i have had some success with google spreadsheets (in sas 9.1.3) and i've seen code to pull google analytics (in sas 9.2).\nas with python and r, you can write the code in any text editor, but you'll need to have sas to actually execute it. 
lately, i've been bouncing between eclipse, pspad, and sas's enhanced editor for writing code, but i always have to submit in sas.","Q_Score":1,"Tags":"python,statistics,sas","A_Id":1628669,"CreationDate":"2009-10-27T01:56:00.000","Title":"SAS and Web Data","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Ive been working on a feature of my application to implement a leaderboard - basically stack rank users according to their score. Im currently tracking the score on an individual basis. My thought is that this leaderboard should be relative instead of absolute i.e. instead of having the top 10 highest scoring users across the site, its a top 10 among a user's friend network. This seems better because everyone has a chance to be #1 in their network and there is a form of friendly competition for those that are interested in this sort of thing. Im already storing the score for each user so the challenge is how to compute the rank of that score in real time in an efficient way. Im using Google App Engine so there are some benefits and limitations (e.g., IN [array]) queries perform a sub-query for every element of the array and also are limited to 30 elements per statement\nFor example\n1st Jack 100 \n2nd John 50 \nHere are the approaches I came up with but they all seem to be inefficient and I thought that this community could come up with something more elegant. 
My sense is that any solution will likely be done with a cron and that I will store a daily rank and list order to optimize read operations but it would be cool if there is something more lightweight and real time\n\nPull the list of all users of the site ordered by score.\nFor each user pick their friends out of that list and create new rankings.\nStore the rank and list order.\nUpdate daily.\nCons - If I get a lot of users this will take forever\n\n2a. For each user pick their friends and for each friend pick score.\nSort that list.\nStore the rank and list order.\nUpdate daily.\nRecord the last position of each user so that the pre-existing list can be used for re-ordering for the next update in order to make it more efficient (may save sorting time)\n2b. Same as above except only compute the rank and list order for people whose profiles have been viewed in the last day\nCons - rank is only up to date for the 2nd person that views the profile","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1929,"Q_Id":1628562,"Users Score":4,"Answer":"If writes are very rare compared to reads (a key assumption in most key-value stores, and not just in those;-), then you might prefer to take a time hit when you need to update scores (a write) rather than to get the relative leaderboards (a read). 
Specifically, when a user's score changes, queue up tasks for each of their friends to update their \"relative leaderboards\" and keep those leaderboards as list attributes (which do keep order!-) suitably sorted (yep, the latter's a denormalization -- it's often necessary to denormalize, i.e., duplicate information appropriately, to exploit key-value stores at their best!-).\nOf course you'll also update the relative leaderboards when a friendship (user to user connection) disappears or appears, but those should (I imagine) be even rarer than score updates;-).\nIf writes are pretty frequent, since you don't need perfectly precise up-to-the-second info (i.e., it's not financials\/accounting stuff;-), you still have many viable approaches to try.\nE.g., big score changes (rarer) might trigger the relative-leaderboards recomputes, while smaller ones (more frequent) get stashed away and only applied once in a while \"when you get around to it\". It's hard to be more specific without ballpark numbers about frequency of updates of various magnitude, typical network-friendship cluster sizes, etc, etc. I know, like everybody else, you want a perfect approach that applies no matter how different the sizes and frequencies in question... but, you just won't find one!-)","Q_Score":0,"Tags":"python,google-app-engine,leaderboard","A_Id":1628703,"CreationDate":"2009-10-27T03:07:00.000","Title":"Real time update of relative leaderboard for each user among friends","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Basically, what I'm trying to do is simply make a small script that finds the most recent post in a forum and pulls some text or an image out of it. I have this working in python, using the htmllib module and some regex. 
But, the script still isn't very convenient as is; it would be much nicer if I could somehow put it into an HTML document. It appears that simply embedding Python scripts is not possible, so I'm looking to see if there's a similar feature like python's htmllib that can be used to access some other webpage and extract some information from it. \n(Essentially, if I could get this script going in the form of an html document, I could just open one html document, rather than navigate to several different pages to get the information I want to check)\nI'm pretty sure that javascript doesn't have the functionality I need, but I was wondering about other languages such as jQuery, or even something like AJAX?","AnswerCount":3,"Available Count":1,"Score":0.0665680765,"is_accepted":false,"ViewCount":213,"Q_Id":1628564,"Users Score":1,"Answer":"There are two general approaches:\n\nModify your Python code so that it runs as a CGI (or WSGI or whatever) module and generate the page of interest by running some server side code.\nUse Javascript with jQuery to load the content of interest by running some client side code.\n\nThe difference between these two approaches is where the third party server sees the requests coming from. In the first case, it's from your web server. 
In the second case, it's from the browser of the user accessing your page.\nSome browsers may not handle loading content from third party servers very gracefully (that is, they might pop up warning boxes or something).","Q_Score":1,"Tags":"javascript,jquery,python","A_Id":1628598,"CreationDate":"2009-10-27T03:07:00.000","Title":"Is it possible access other webpages from within another page","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Instead of just using urllib does anyone know of the most efficient package for fast, multithreaded downloading of URLs that can operate through http proxies? I know of a few such as Twisted, Scrapy, libcurl etc. but I don't know enough about them to make a decision or even if they can use proxies.. Anyone know of the best one for my purposes? Thanks!","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":9003,"Q_Id":1628766,"Users Score":1,"Answer":"usually proxies filter websites categorically based on how the website was created. It is difficult to transmit data through proxies based on categories. 
E.g. YouTube is classified as audio\/video streaming, so it is blocked in some places, especially schools.\nIf you want to bypass proxies, get the data off a website and put it on your own genuine website, like a dot-com website that is registered to you.\nWhen you are making and registering the website, categorise it as anything you want.","Q_Score":1,"Tags":"python,proxy,multithreading,web-crawler,pool","A_Id":5803567,"CreationDate":"2009-10-27T04:23:00.000","Title":"Python Package For Multi-Threaded Spider w\/ Proxy Support?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm learning socket programming (in python) and I was wondering what the best\/typical way of encapsulating data is? My packets will be used to issue run, stop, configure, etc. commands on the receiving side. Is it helpful to use JSON or just straight text?","AnswerCount":4,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":916,"Q_Id":1633934,"Users Score":0,"Answer":"If you're developing something as a learning exercise you might find it best to go with a structured text (ie. human readable and human writable) format.\nAn example would be to use a fixed number of fields per command, fixed width text fields and\/or easily parsable field delimiters.\nGenerally text is less efficient in terms of packet size, but it does have the benefit that you can read it easily if you do a packet capture (eg. 
using wireshark) or if you want to use telnet to mimic a client.\nAnd if this is only a learning exercise then ease of debugging is a significant issue.","Q_Score":2,"Tags":"python,network-programming","A_Id":1634178,"CreationDate":"2009-10-27T22:03:00.000","Title":"Designing a simple network packet","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm learning socket programming (in python) and I was wondering what the best\/typical way of encapsulating data is? My packets will be used to issue run, stop, configure, etc. commands on the receiving side. Is it helpful to use JSON or just straight text?","AnswerCount":4,"Available Count":2,"Score":0.049958375,"is_accepted":false,"ViewCount":916,"Q_Id":1633934,"Users Score":1,"Answer":"I suggest plain text to begin with - it is easier to debug. The format that your text takes depends on what you're doing, how many commands, arguments, etc. Have you fleshed out how your commands will look? Once you figure out what that looks like it'll likely suggest a format all on its own.\nAre you using TCP or UDP? TCP is easy since it is a stream, but if you're using UDP keep in mind the maximum size of UDP packets and thus how big your message can be.","Q_Score":2,"Tags":"python,network-programming","A_Id":1635005,"CreationDate":"2009-10-27T22:03:00.000","Title":"Designing a simple network packet","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a file, which changes its content in a short time. But I'd like to read it before it is ready. The problem is that it is an XML file (log). 
So when you read it, it could be that not all tags are closed.\nI would like to know if there is a possibility to close all opened tags correctly, so that there are no problems showing it in the browser (with an XSLT stylesheet). This should be done using built-in features of Python.","AnswerCount":4,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":1809,"Q_Id":1644994,"Users Score":0,"Answer":"You could use BeautifulStoneSoup (the XML part of BeautifulSoup).\nwww.crummy.com\/software\/BeautifulSoup\nIt's not ideal, but it would circumvent the problem if you cannot fix the file's output...\nIt's basically a previously implemented version of what Denis said.\nYou can just join whatever you need into the soup and it will do its best to fix it.","Q_Score":5,"Tags":"python,xml","A_Id":1652871,"CreationDate":"2009-10-29T16:36:00.000","Title":"Close all opened xml tags","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have a file, which changes its content in a short time. But I'd like to read it before it is ready. The problem is that it is an XML file (log). So when you read it, it could be that not all tags are closed.\nI would like to know if there is a possibility to close all opened tags correctly, so that there are no problems showing it in the browser (with an XSLT stylesheet). This should be done using built-in features of Python.","AnswerCount":4,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":1809,"Q_Id":1644994,"Users Score":0,"Answer":"You can use any SAX parser by feeding it the data available so far. 
Use a SAX handler that just reconstructs the source XML, keep a stack of opened tags, and close them in reverse order at the end.","Q_Score":5,"Tags":"python,xml","A_Id":1645047,"CreationDate":"2009-10-29T16:36:00.000","Title":"Close all opened xml tags","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm trying to write a simple proxy server for some purpose. In it I use httplib to access a remote web server. But there's one problem: the web server returns TWO Set-Cookie headers in one response, and httplib mangles them together in httplib.HTTPResponse.getheaders(), effectively joining the cookies with a comma (which is strange, because getheaders returns a LIST, not a DICT, so I thought they wrote it with multiple headers of the same name). So, when I send this joined header back to the client, it confuses the client. How can I obtain the full list of headers in httplib (without just splitting the Set-Cookie header on commas)?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":2867,"Q_Id":1649401,"Users Score":4,"Answer":"HTTPResponse.getheaders() returns a list of combined headers (actually by calling dict.items()). The only place where incoming headers are stored untouched is HTTPResponse.msg.headers.","Q_Score":4,"Tags":"python,httplib","A_Id":1649579,"CreationDate":"2009-10-30T12:03:00.000","Title":"How to handle multiple Set-Cookie header in HTTP response","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want a fast way to grab a URL and parse it while streaming. Ideally this should be super fast. My language of choice is Python. 
I have an intuition that twisted can do this but I'm at a loss to find an example.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1573,"Q_Id":1659380,"Users Score":0,"Answer":"You only need to parse a single URL? Then don't worry. Use urllib2 to open the connection and pass the file handle into ElementTree.\nVariations you can try would be to use ElementTree's incremental parser or to use iterparse, but that depends on what your real requirements are. There's \"super fast\" but there's also \"fast enough.\"\nIt's only when you start having multiple simultaneous connections where you should look at Twisted or multithreading.","Q_Score":3,"Tags":"python,xml,twisted","A_Id":1682249,"CreationDate":"2009-11-02T04:06:00.000","Title":"How do I fetch an XML document and parse it with Python twisted?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am interested in your opinions on unittesting code that uses Corba to communicate with a server.\nWould you mock the Corba objects? In Python that's sort of a pain in the ass because all the methods of Corba objects are loaded dynamically. So you're basically stuck with \"mock anything\".\nThanks!\nNote:\nI believe I have not made myself clear enough, so I'll try to give a somewhat more concrete example:\nA web application needs to display a page containing data received from the server. It obtains the data by calling server_pagetable.getData() and then formats the data, converts them to the correct python types (because Corba does not have e.g. a date type etc.) 
and finally creates the HTML code to be displayed.\nAnd this is what I would like to test - the methods that receive the data and do all the transformations and finally create the HTML code.\nI believe the most straightforward decision is to mock the Corba objects as they essentially comprise both the networking and db functionality (which ought not to be tested in unit tests). \nIt's just that this is quite a lot of \"extra work\" to do - mocking all the Corba objects (there is a User object, a server session object, the pagetable object, an admin object etc.). Maybe it's just because I'm stuck with Corba and therefore I have to reflect the object hierarchy dictated by the server with mocks. On the other hand, it could be that there is some cool elegant solution to testing code using Corba that just did not cross my mind.","AnswerCount":3,"Available Count":3,"Score":1.2,"is_accepted":true,"ViewCount":625,"Q_Id":1660049,"Users Score":3,"Answer":"Don't try to unittest Corba. Assume that Corba works. Unittest your own code. This means:\n\nCreate a unit test which checks that you correctly set up Corba and that you can invoke a single method and read a property. If that works, all other methods and properties will work, too.\nAfter that, test that all the exposed objects work correctly. You don't need Corba for this.","Q_Score":2,"Tags":"python,unit-testing,mocking,corba","A_Id":1660185,"CreationDate":"2009-11-02T08:38:00.000","Title":"Unittesting Corba in Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am interested in your opinions on unittesting code that uses Corba to communicate with a server.\nWould you mock the Corba objects? In Python that's sort of a pain in the ass because all the methods of Corba objects are loaded dynamically. 
So you're basically stuck with \"mock anything\".\nThanks!\nNote:\nI believe I have not made myself clear enough, so I'll try to give a somewhat more concrete example:\nA web application needs to display a page containing data received from the server. It obtains the data by calling server_pagetable.getData() and then formats the data, converts them to the correct python types (because Corba does not have e.g. a date type etc.) and finally creates the HTML code to be displayed.\nAnd this is what I would like to test - the methods that receive the data and do all the transformations and finally create the HTML code.\nI believe the most straightforward decision is to mock the Corba objects as they essentially comprise both the networking and db functionality (which ought not to be tested in unit tests). \nIt's just that this is quite a lot of \"extra work\" to do - mocking all the Corba objects (there is a User object, a server session object, the pagetable object, an admin object etc.). Maybe it's just because I'm stuck with Corba and therefore I have to reflect the object hierarchy dictated by the server with mocks. On the other hand, it could be that there is some cool elegant solution to testing code using Corba that just did not cross my mind.","AnswerCount":3,"Available Count":3,"Score":0.0665680765,"is_accepted":false,"ViewCount":625,"Q_Id":1660049,"Users Score":1,"Answer":"I would set up a test server, and do live tests on that. Unittesting can be tricky with network stuff, so it's best to keep it as real as possible. 
Any mocking would be done on the test server, for instance if you need to communicate to three different servers, it could be set up with three different IP addresses to play the role of all three servers.","Q_Score":2,"Tags":"python,unit-testing,mocking,corba","A_Id":1660187,"CreationDate":"2009-11-02T08:38:00.000","Title":"Unittesting Corba in Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am interested in your opinions on unittesting code that uses Corba to communicate with a server.\nWould you mock the Corba objects? In Python that's sort of a pain in the ass because all the methods of Corba objects are loaded dynamically. So you're basically stuck with \"mock anything\".\nThanks!\nNote:\nI believe I have not made myself clear enough, so I'll try to give a somewhat more concrete example:\nA web application needs to display a page containing data received from the server. It obtains the data by calling server_pagetable.getData() and then formats the data, converts them to the correct python types (because Corba does not have e.g. a date type etc.) and finally creates the HTML code to be displayed.\nAnd this is what I would like to test - the methods that receive the data and do all the transformations and finally create the HTML code.\nI believe the most straightforward decision is to mock the Corba objects as they essentially comprise both the networking and db functionality (which ought not to be tested in unit tests). \nIt's just that this is quite a lot of \"extra work\" to do - mocking all the Corba objects (there is a User object, a server session object, the pagetable object, an admin object etc.). Maybe it's just because I'm stuck with Corba and therefore I have to reflect the object hierarchy dictated by the server with mocks. 
On the other hand, it could be that there is some cool elegant solution to testing code using Corba that just did not cross my mind.","AnswerCount":3,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":625,"Q_Id":1660049,"Users Score":0,"Answer":"I have got similar work to tackle but I probably will not write a test for implementation of CORBA objects or more specifically COM objects (implementation of CORBA). I have to write tests for work that uses these structures as oppose to the structures themselves (although I could land myself in that role too if I ask too many questions). In the end of the day, unittest is integration on a smaller scale so whenever I write tests I am always thinking of input and outputs rather than actual structures. From the way you have written your problem my concentration would be on the data of server_pagetable.getData() and the output HTML without caring too much about what happens inbetween (because that is the code you are testing, you don't want to define the code in the test but ensure that output is correct). If you want to test individual functions inbetween then I would get mock data (essentially still data, so you can generate mock data rather than mock class if possible). Mocks are only used when you don't have parts of the full code and those functions needs some input from those parts of the code but as you are not interested in them or don't have them you simplify the interaction with them. This is just my opinion.","Q_Score":2,"Tags":"python,unit-testing,mocking,corba","A_Id":51438774,"CreationDate":"2009-11-02T08:38:00.000","Title":"Unittesting Corba in Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Using SUDS SOAP client how do I specify web service URL. 
I can see clearly that the WSDL path is specified in the Client constructor, but what if I want to change the web service URL?","AnswerCount":3,"Available Count":1,"Score":0.0665680765,"is_accepted":false,"ViewCount":5956,"Q_Id":1670569,"Users Score":1,"Answer":"I think you have to create a new Client object for each different URL.","Q_Score":2,"Tags":"python,soap,suds","A_Id":1670775,"CreationDate":"2009-11-03T22:31:00.000","Title":"Changing web service url in SUDS library","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to write a Python lib that will implement the client side of a certain chat protocol. \nAfter I connect to the server,\nI start the main loop where I read from the server and handle received commands, and here I need to call a callback function (like on_message or on_file_received, etc). \nHow should I go about implementing this?\nShould I start a new thread for each callback function? 
As maybe some callbacks will take some time to return and I will time out.\nAlso,\nIf the main loop where I read from the server is in a thread, can I write to the socket from another thread (send messages to the server)?\nOr is there a better approach?\nThanks.","AnswerCount":3,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":481,"Q_Id":1670735,"Users Score":2,"Answer":"I would use the select module, or alternatively twisted; however, select is a bit more portable, and to my mind somewhat more pythonic.","Q_Score":0,"Tags":"python,multithreading,chat","A_Id":1671922,"CreationDate":"2009-11-03T23:05:00.000","Title":"python chat client lib","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a python (well, it's php now but we're rewriting) function that takes some parameters (A and B) and computes some results (finds the best path from A to B in a graph; the graph is read-only); in a typical scenario one call takes 0.1s to 0.9s to complete. This function is accessed by users as a simple REST web-service (GET bestpath.php?from=A&to=B). The current implementation is quite stupid - it's a simple php script+apache+mod_php+APC, every request needs to load all the data (over 12MB in php arrays), create all structures, compute a path and exit. I want to change it. \nI want a setup with N independent workers (X per server with Y servers), each worker is a python app running in a loop (getting request -> processing -> sending reply -> getting req...), each worker can process one request at a time. I need something that will act as a frontend: get requests from users, manage the queue of requests (with configurable timeout) and feed my workers with one request at a time. \nhow to approach this? can you propose some setup? nginx + fcgi or wsgi or something else? haproxy? 
as you can see I'm a newbie in python, reverse-proxy, etc. I just need a starting point about architecture (and data flow)\nbtw. workers are using read-only data so there is no need to maintain locking and communication between them","AnswerCount":7,"Available Count":4,"Score":0.0,"is_accepted":false,"ViewCount":2509,"Q_Id":1674696,"Users Score":0,"Answer":"Another option is a queue table in the database.\nThe worker processes run in a loop or off cron and poll the queue table for new jobs.","Q_Score":4,"Tags":"python,nginx,load-balancing,wsgi,reverse-proxy","A_Id":1718183,"CreationDate":"2009-11-04T15:51:00.000","Title":"how to process long-running requests in python workers?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a python (well, it's php now but we're rewriting) function that takes some parameters (A and B) and computes some results (finds best path from A to B in a graph, graph is read-only), in a typical scenario one call takes 0.1s to 0.9s to complete. This function is accessed by users as a simple REST web-service (GET bestpath.php?from=A&to=B). Current implementation is quite stupid - it's a simple php script+apache+mod_php+APC, every request needs to load all the data (over 12MB in php arrays), create all structures, compute a path and exit. I want to change it. \nI want a setup with N independent workers (X per server with Y servers), each worker is a python app running in a loop (getting request -> processing -> sending reply -> getting req...), each worker can process one request at a time. I need something that will act as a frontend: get requests from users, manage queue of requests (with configurable timeout) and feed my workers with one request at a time. \nhow to approach this? can you propose some setup? nginx + fcgi or wsgi or something else? haproxy? 
as you can see I'm a newbie in python, reverse-proxy, etc. I just need a starting point about architecture (and data flow)\nbtw. workers are using read-only data so there is no need to maintain locking and communication between them","AnswerCount":7,"Available Count":4,"Score":0.0,"is_accepted":false,"ViewCount":2509,"Q_Id":1674696,"Users Score":0,"Answer":"I think you can configure modwsgi\/Apache so it will have several \"hot\" Python interpreters\nin separate processes ready to go at all times and also reuse them for new accesses\n(and spawn a new one if they are all busy).\nIn this case you could load all the preprocessed data as module globals and they would\nonly get loaded once per process and get reused for each new access. In fact I'm not sure this isn't the default configuration\nfor modwsgi\/Apache.\nThe main problem here is that you might end up consuming\na lot of \"core\" memory (but that may not be a problem either).\nI think you can also configure modwsgi for single process\/multiple\nthread -- but in that case you may only be using one CPU because\nof the Python Global Interpreter Lock (the infamous GIL), I think.\nDon't be afraid to ask at the modwsgi mailing list -- they are very\nresponsive and friendly.","Q_Score":4,"Tags":"python,nginx,load-balancing,wsgi,reverse-proxy","A_Id":1675726,"CreationDate":"2009-11-04T15:51:00.000","Title":"how to process long-running requests in python workers?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a python (well, it's php now but we're rewriting) function that takes some parameters (A and B) and computes some results (finds best path from A to B in a graph, graph is read-only), in a typical scenario one call takes 0.1s to 0.9s to complete. 
This function is accessed by users as a simple REST web-service (GET bestpath.php?from=A&to=B). Current implementation is quite stupid - it's a simple php script+apache+mod_php+APC, every request needs to load all the data (over 12MB in php arrays), create all structures, compute a path and exit. I want to change it. \nI want a setup with N independent workers (X per server with Y servers), each worker is a python app running in a loop (getting request -> processing -> sending reply -> getting req...), each worker can process one request at a time. I need something that will act as a frontend: get requests from users, manage queue of requests (with configurable timeout) and feed my workers with one request at a time. \nhow to approach this? can you propose some setup? nginx + fcgi or wsgi or something else? haproxy? as you can see I'm a newbie in python, reverse-proxy, etc. I just need a starting point about architecture (and data flow)\nbtw. workers are using read-only data so there is no need to maintain locking and communication between them","AnswerCount":7,"Available Count":4,"Score":0.0285636566,"is_accepted":false,"ViewCount":2509,"Q_Id":1674696,"Users Score":1,"Answer":"The simplest solution in this case is to use the webserver to do all the heavy lifting. 
Why should you handle threads and\/or processes when the webserver will do all that for you?\nThe standard arrangement in deployments of Python is:\n\nThe webserver starts a number of processes each running a complete python interpreter and loading all your data into memory.\nHTTP request comes in and gets dispatched off to some process\nProcess does your calculation and returns the result directly to the webserver and user\nWhen you need to change your code or the graph data, you restart the webserver and go back to step 1.\n\nThis is the architecture used by Django and other popular web frameworks.","Q_Score":4,"Tags":"python,nginx,load-balancing,wsgi,reverse-proxy","A_Id":1682864,"CreationDate":"2009-11-04T15:51:00.000","Title":"how to process long-running requests in python workers?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a python (well, it's php now but we're rewriting) function that takes some parameters (A and B) and computes some results (finds best path from A to B in a graph, graph is read-only), in a typical scenario one call takes 0.1s to 0.9s to complete. This function is accessed by users as a simple REST web-service (GET bestpath.php?from=A&to=B). Current implementation is quite stupid - it's a simple php script+apache+mod_php+APC, every request needs to load all the data (over 12MB in php arrays), create all structures, compute a path and exit. I want to change it. \nI want a setup with N independent workers (X per server with Y servers), each worker is a python app running in a loop (getting request -> processing -> sending reply -> getting req...), each worker can process one request at a time. 
I need something that will act as a frontend: get requests from users, manage queue of requests (with configurable timeout) and feed my workers with one request at a time. \nhow to approach this? can you propose some setup? nginx + fcgi or wsgi or something else? haproxy? as you can see I'm a newbie in python, reverse-proxy, etc. I just need a starting point about architecture (and data flow)\nbtw. workers are using read-only data so there is no need to maintain locking and communication between them","AnswerCount":7,"Available Count":4,"Score":0.0,"is_accepted":false,"ViewCount":2509,"Q_Id":1674696,"Users Score":0,"Answer":"You could use nginx load balancer to proxy to PythonPaste paster (which serves WSGI, for example Pylons), that launches each request as a separate thread anyway.","Q_Score":4,"Tags":"python,nginx,load-balancing,wsgi,reverse-proxy","A_Id":1676102,"CreationDate":"2009-11-04T15:51:00.000","Title":"how to process long-running requests in python workers?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"How do I get started with XML-RPC with Joomla? I've been looking around for documentation and finding nothing...\nI'd like to connect to a Joomla server, (after enabling the Core Joomla XML-RPC plugin), and be able to do things like login and add an article, and tweak all the parameters of the article if possible.\nMy xml-rpc client implementation will be in python.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":2670,"Q_Id":1694205,"Users Score":3,"Answer":"The book \"Mastering Joomla 1.5 Extension and Framework Development\" has a nice explanation of that.\nJoomla has a few XML-RPC plugins that let you do a few things, like the blogger API interface. 
(plugins\/xmlrpc\/blogger.php)\nYou should create your own XML-RPC plugin to do the custom things you want.","Q_Score":3,"Tags":"python,joomla,xml-rpc","A_Id":1696183,"CreationDate":"2009-11-07T19:53:00.000","Title":"Joomla and XMLRPC","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Currently I'm making a crawler script; one problem is that\nsometimes when I open a webpage with PAMIE, the webpage can't open and hangs forever.\nIs there any method to close PAMIE's IE or win32com's IE,\nsuch as when the webpage doesn't respond or doesn't finish loading within 10 sec or so?\nThanks in advance","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":294,"Q_Id":1698362,"Users Score":0,"Answer":"I think what you are looking for is somewhere to set the timeout on your request. I would suggest looking into the documentation on PAMIE.","Q_Score":0,"Tags":"python,time,multithreading,pamie","A_Id":1698371,"CreationDate":"2009-11-08T23:40:00.000","Title":"win32com and PAMIE web page open timeout","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Currently I'm making a crawler script; one problem is that\nsometimes when I open a webpage with PAMIE, the webpage can't open and hangs forever.\nIs there any method to close PAMIE's IE or win32com's IE,\nsuch as when the webpage doesn't respond or doesn't finish loading within 10 sec or so?\nThanks in advance","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":294,"Q_Id":1698362,"Users Score":2,"Answer":"Just use, to initialize your PAMIE instance, PAMIE(timeOut=100) or whatever. 
The units of measure for timeOut are \"tenths of a second\" (!); the default is 3000 (300 seconds, i.e., 5 minutes); with 100 as I suggested, you'd time out after 10 seconds as you request.\n(You can pass the timeOut= parameter even when you're initializing with a URL, but in that case the timeout will only be active after the initial navigation).","Q_Score":0,"Tags":"python,time,multithreading,pamie","A_Id":1698422,"CreationDate":"2009-11-08T23:40:00.000","Title":"win32com and PAMIE web page open timeout","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am trying to use Python to write a client that connects to a custom http server that uses digest authentication. I can connect and pull the first request without problem. Using TCPDUMP (I am on Mac OS X--I am both a Mac and a Python noob) I can see the first request is actually two http requests, as you would expect if you are familiar with RFC2617. The first results in the 401 UNAUTHORIZED. The header information sent back from the server is correctly used to generate headers for a second request with some custom Authorization header values which yields a 200 OK response and the payload.\nEverything is great. My HTTPDigestAuthHandler opener is working, thanks to urllib2. \nIn the same program I attempt to request a second, different page from the same server. 
I expect, per the RFC, that the TCPDUMP will show only one request this time, using almost all the same Authorization Header information (nc should increment).\nInstead it starts from scratch and first gets the 401 and regenerates the information needed for a 200.\nIs it possible with urllib2 to have subsequent requests with digest authentication recycle the known Authorization Header values and only do one request?\n[Re-read that a couple times until it makes sense, I am not sure how to make it any more plain]\nGoogle has yielded surprisingly little so I guess not. I looked at the code for urllib2.py and it's really messy (comments like: \"This isn't a fabulous effort\"), so I wouldn't be shocked if this was a bug. I noticed that my Connection Header is Closed, and even if I set it to keepalive, it gets overwritten. That led me to keepalive.py but that didn't work for me either.\nPycurl won't work either.\nI can hand code the entire interaction, but I would like to piggy back on existing libraries where possible.\nIn summary, is it possible with urllib2 and digest authentication to get 2 pages from the same server with only 3 http requests executed (2 for the first page, 1 for the second)?\nIf you happen to have tried this before and already know it's not possible please let me know. If you have an alternative I am all ears.\nThanks in advance.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":1902,"Q_Id":1706644,"Users Score":1,"Answer":"Although it's not available out of the box, urllib2 is flexible enough to add it yourself. 
Subclass HTTPDigestAuthHandler, hack it (retry_http_digest_auth method I think) to remember authentication information and define an http_request(self, request) method to use it for all subsequent requests (add WWW-Authenticate header).","Q_Score":1,"Tags":"python,authentication,urllib2,digest","A_Id":1717241,"CreationDate":"2009-11-10T09:29:00.000","Title":"Client Digest Authentication Python with URLLIB2 will not remember Authorization Header Information","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Then how do I import that? I run everything in python 2.4, but one of my scripts imports xml.etree.ElementTree...which is only in Python 2.5","AnswerCount":5,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1923,"Q_Id":1713398,"Users Score":4,"Answer":"Then it fails.\nYou can't import a python 2.5 library while you're running python 2.4. It won't work.\nWhy can't you run python 2.5+?","Q_Score":0,"Tags":"python,linux,unix","A_Id":1713411,"CreationDate":"2009-11-11T06:20:00.000","Title":"What if one of my programs runs in python 2.4, but IMPORTS something that requires python 2.5?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is there any library to deserialize with python which is serialized with java?","AnswerCount":7,"Available Count":2,"Score":1.0,"is_accepted":false,"ViewCount":8371,"Q_Id":1714624,"Users Score":6,"Answer":"Java binary serialization is really designed to be used with Java. 
To do it in Python you'd have to have all the relevant Java classes available for inspection, and create Python objects appropriately - it would be pretty hideous and fragile.\nYou're better off using a cross-platform serialization format such as Thrift, Protocol Buffers, JSON or XML. If you can't change which serialization format is used in the Java code, I'd suggest writing new Java code which deserializes from the binary format and then reserializes to a cross-platform format.","Q_Score":13,"Tags":"java,python,serialization","A_Id":1714644,"CreationDate":"2009-11-11T11:33:00.000","Title":"Is there any library to deserialize with python which is serialized with java","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Is there any library to deserialize with python which is serialized with java?","AnswerCount":7,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":8371,"Q_Id":1714624,"Users Score":0,"Answer":"If you are using Java classes, then I don't even know what it would mean to deserialize a Java class in a Python environment. 
If you are only using simple primitives (ints, floats, strings), then it probably wouldn't be too hard to build a Python library that could deserialize the Java format.\nBut as others have said, there are better cross-platform solutions.","Q_Score":13,"Tags":"java,python,serialization","A_Id":1714862,"CreationDate":"2009-11-11T11:33:00.000","Title":"Is there any library to deserialize with python which is serialized with java","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I currently have a small Python script that I'm using to spawn multiple executables, (voice chat servers), and in the next version of the software, the servers have the ability to receive heartbeat signals on the UDP port. (There will be possibly thousands of servers on one machine, ranging from ports 7878 and up)\nMy problem is that these servers might (read: will) be running on the same machine as my Python script and I had planned on opening a UDP port, and just sending the heartbeat, waiting for the reply, and voila...I could restart servers when\/if they weren't responding by killing the task and re-loading the server.\nProblem is that I cannot open a UDP port that the server is already using. Is there a way around this? The project lead is implementing the heartbeat still, so I'm sure any suggestions in how the heartbeat system could be implemented would be welcome also. 
-- This is a pretty generic script though that might apply to other programs so my main focus is still communicating on that UDP port.","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":5210,"Q_Id":1722993,"Users Score":1,"Answer":"I'm pretty sure this is possible on Linux; I don't know about other UNIXes.\nThere are two ways to propagate a file descriptor from one process to another:\n\nWhen a process fork()s, the child inherits all the file descriptors of the parent.\nA process can send a file descriptor to another process over a \"UNIX Domain Socket\". See sendmsg() and recvmsg(). In Python, the _multiprocessing extension module will do this for you; see _multiprocessing.sendfd() and _multiprocessing.recvfd().\n\nI haven't experimented with multiple processes listening on UDP sockets. But for TCP, on Linux, if multiple processes all listen on a single TCP socket, one of them will be randomly chosen when a connection comes in. So I suspect Linux does something sensible when multiple processes are all listening on the same UDP socket.\nTry it and let us know!","Q_Score":0,"Tags":"python,udp,communication,daemon,ports","A_Id":1723643,"CreationDate":"2009-11-12T15:22:00.000","Title":"Multiple programs using the same UDP port? Possible?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I currently have a small Python script that I'm using to spawn multiple executables, (voice chat servers), and in the next version of the software, the servers have the ability to receive heartbeat signals on the UDP port. 
(There will be possibly thousands of servers on one machine, ranging from ports 7878 and up)\nMy problem is that these servers might (read: will) be running on the same machine as my Python script and I had planned on opening a UDP port, and just sending the heartbeat, waiting for the reply, and voila...I could restart servers when\/if they weren't responding by killing the task and re-loading the server.\nProblem is that I cannot open a UDP port that the server is already using. Is there a way around this? The project lead is implementing the heartbeat still, so I'm sure any suggestions in how the heartbeat system could be implemented would be welcome also. -- This is a pretty generic script though that might apply to other programs so my main focus is still communicating on that UDP port.","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":5210,"Q_Id":1722993,"Users Score":2,"Answer":"This isn't possible. What you'll have to do is have one UDP master program that handles all UDP communication over the one port, and communicates with your servers in another way (UDP on different ports, named pipes, ...)","Q_Score":0,"Tags":"python,udp,communication,daemon,ports","A_Id":1723017,"CreationDate":"2009-11-12T15:22:00.000","Title":"Multiple programs using the same UDP port? Possible?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"In short I'm creating a Flash based multiplayer game and I'm now starting to work on the server-side code. 
Well I'm the sole developer of the project so I'm seeking a high-level socket library that works well with games to speed up my development time.\nI was trying to use the Twisted Framework (for Python) but I'm having some personal issues with it so I'm looking for another solution.\nI'm open to either Java or a Python based library. The main thing is that the library is stable enough for multiplayer games and the library needs to be \"high-level\" (abstract) since I'm new to socket programming for games.\nI want to also note that I will be using the raw binary socket for my Flash game (Actionscript 3.0) since I assume it will be faster than the traditional Flash XML socket.","AnswerCount":4,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":8269,"Q_Id":1728266,"Users Score":0,"Answer":"High-level on one side and raw binary sockets on the other won't work. Sorry, but you'll need to go low-level on the server side too.\nEDIT: in response to the OP's comment. I am not aware of any \"high level\" interface of the nature that you are talking about for Java. And frankly I don't think it makes a lot of sense. If you are going to talk bytes over Socket streams you really do need to understand the standard JDK Socket \/ ServerSocket APIs; e.g. timeouts, keep-alive, etc.","Q_Score":10,"Tags":"java,python,sockets","A_Id":1728302,"CreationDate":"2009-11-13T09:55:00.000","Title":"Seeking a High-Level Library for Socket Programming (Java or Python)","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm not familiar with PowerBuilder but I have a task to create an automatic UI test application for PB. We've decided to do it in Python with the pywinauto and IAccessible libraries. 
The problem is that some UI elements, like newly added list records, cannot be accessed from it (even Inspect32 can't get them). \nAny ideas how to reach these elements and make them testable?","AnswerCount":4,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":3825,"Q_Id":1741023,"Users Score":2,"Answer":"I'm experimenting with code for a tool for automating PowerBuilder-based GUIs as well. From what I can see, your best bet would be to use the PowerBuilder Native Interface (PBNI), and call PowerScript code from within your NVO.\nIf you like, feel free to send me an email (see my profile for my email address), I'd be interested in exchanging ideas about how to do this.","Q_Score":6,"Tags":"python,testing,powerbuilder","A_Id":1741142,"CreationDate":"2009-11-16T09:23:00.000","Title":"How to make PowerBuilder UI testing application?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm not familiar with PowerBuilder but I have a task to create an automatic UI test application for PB. We've decided to do it in Python with the pywinauto and IAccessible libraries. The problem is that some UI elements, like newly added list records, cannot be accessed from it (even Inspect32 can't get them). \nAny ideas how to reach these elements and make them testable?","AnswerCount":4,"Available Count":2,"Score":0.049958375,"is_accepted":false,"ViewCount":3825,"Q_Id":1741023,"Users Score":1,"Answer":"I've seen in AutomatedQA support that they have a recipe recommending using MSAA and setting some properties on the controls. 
I do not know if it works.","Q_Score":6,"Tags":"python,testing,powerbuilder","A_Id":2328021,"CreationDate":"2009-11-16T09:23:00.000","Title":"How to make PowerBuilder UI testing application?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"How can I run a Python program so it outputs its STDOUT and inputs its STDIN to\/from a remote telnet client?\nAll the program does is print out text then wait for raw_input(), repeatedly. I want a remote user to use it without needing shell access. It can be single threaded\/single user.","AnswerCount":4,"Available Count":2,"Score":0.2449186624,"is_accepted":false,"ViewCount":953,"Q_Id":1758276,"Users Score":5,"Answer":"Make the Python script into the shell for that user. (Or if that doesn't work, wrap it up in a bash script or even an executable.)\n(You might have to put it in \/etc\/shells (or equiv.))","Q_Score":3,"Tags":"python,telnet","A_Id":1758310,"CreationDate":"2009-11-18T19:02:00.000","Title":"How can I run a Python program over telnet?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"How can I run a Python program so it outputs its STDOUT and inputs its STDIN to\/from a remote telnet client?\nAll the program does is print out text then wait for raw_input(), repeatedly. I want a remote user to use it without needing shell access. 
It can be single threaded\/single user.","AnswerCount":4,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":953,"Q_Id":1758276,"Users Score":0,"Answer":"You can just create a new Linux user and set their shell to your script.\nThen when they telnet in and enter the username\/password, the program runs instead of bash or whatever the default shell is.","Q_Score":3,"Tags":"python,telnet","A_Id":1760716,"CreationDate":"2009-11-18T19:02:00.000","Title":"How can I run a Python program over telnet?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"When I set the Firefox proxy with python webdriver, it doesn't wait until the page is fully downloaded; this doesn't happen when I don't set one. How can I change this behavior? Or how can I check that the page download is over?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1062,"Q_Id":1785607,"Users Score":1,"Answer":"The simplest thing to do is to poll the page looking for an element you know will be present once the download is complete. The Java webdriver bindings offer a \"Wait\" class for just this purpose, though there isn't (yet) an analogue for this in the python bindings.","Q_Score":1,"Tags":"python,firefox,proxy,webdriver","A_Id":1790122,"CreationDate":"2009-11-23T20:06:00.000","Title":"Python Webdriver doesn't wait until the page is downloaded in Firefox when used with proxy","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"For example, I want to join a prefix path to resource paths like \/js\/foo.js.\nI want the resulting path to be relative to the root of the server. 
In the above example if the prefix was \"media\" I would want the result to be \/media\/js\/foo.js.\nos.path.join does this really well, but how it joins paths is OS dependent. In this case I know I am targeting the web, not the local file system.\nIs there a best alternative when you are working with paths you know will be used in URLs? Will os.path.join work well enough? Should I just roll my own?","AnswerCount":14,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":128395,"Q_Id":1793261,"Users Score":74,"Answer":"Since, from the comments the OP posted, it seems he doesn't want to preserve \"absolute URLs\" in the join (which is one of the key jobs of urlparse.urljoin;-), I'd recommend avoiding that. os.path.join would also be bad, for exactly the same reason.\nSo, I'd use something like '\/'.join(s.strip('\/') for s in pieces) (if the leading \/ must also be ignored -- if the leading piece must be special-cased, that's also feasible of course;-).","Q_Score":147,"Tags":"python,url","A_Id":1794540,"CreationDate":"2009-11-24T22:06:00.000","Title":"How to join components of a path when you are constructing a URL in Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"The normal behavior of urllib\/urllib2 is that if an error code is sent in the header of the response (i.e. 404) an exception is raised. \nHow do you look for specific errors, i.e. (40x or 50x), and, based on the different errors, do different things? 
Also, how do you read the actual data being returned (HTML\/JSON, etc.)? (The data usually has error details, which is different from the HTTP error code.)","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":5703,"Q_Id":1803741,"Users Score":1,"Answer":"In urllib2 the HTTPError exception is also a valid HTTP response, so you can treat an HTTP error as an exceptional event or valid response. But in urllib you have to subclass URLopener and define http_error_ method[s] or redefine http_error_default to handle them all.","Q_Score":4,"Tags":"python,error-handling","A_Id":1803796,"CreationDate":"2009-11-26T13:43:00.000","Title":"Error codes returned by urllib\/urllib2 and the actual page","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Basically, I have a list of 30,000 URLs.\nThe script goes through the URLs and downloads them (with a 3 second delay in between).\nAnd then it stores the HTML in a database.\nAnd it loops and loops...\nWhy does it randomly get \"Killed.\"? I didn't touch anything.\nEdit: this happens on 3 of my Linux machines. \nThe machines are on a Rackspace cloud with 256 MB memory. Nothing else is running.","AnswerCount":5,"Available Count":2,"Score":0.0399786803,"is_accepted":false,"ViewCount":20694,"Q_Id":1811173,"Users Score":1,"Answer":"Is it possible that it's hitting an uncaught exception? Are you running this from a shell, or is it being run from cron or in some other automated way? 
If it's automated, the output may not be displayed anywhere.","Q_Score":17,"Tags":"python,mysql,url","A_Id":1811196,"CreationDate":"2009-11-28T00:47:00.000","Title":"Why does my python script randomly get killed?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Basically, i have a list of 30,000 URLs.\nThe script goes through the URLs and downloads them (with a 3 second delay in between).\nAnd then it stores the HTML in a database.\nAnd it loops and loops...\nWhy does it randomly get \"Killed.\"? I didn't touch anything.\nEdit: this happens on 3 of my linux machines. \nThe machines are on a Rackspace cloud with 256 MB memory. Nothing else is running.","AnswerCount":5,"Available Count":2,"Score":0.0399786803,"is_accepted":false,"ViewCount":20694,"Q_Id":1811173,"Users Score":1,"Answer":"Are you using some sort of queue manager or process manager of some sort ?\nI got apparently random killed messages when the batch queue manager I was using was sending SIGUSR2 when the time was up. \nOtherwise I strongly favor the out of memory option.","Q_Score":17,"Tags":"python,mysql,url","A_Id":1811350,"CreationDate":"2009-11-28T00:47:00.000","Title":"Why does my python script randomly get killed?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"when I can't delete FF cookies from webdriver. When I use the .delete_all_cookies method, it returns None. 
And when I try to get_cookies, I get the following error:\nwebdriver_common.exceptions.ErrorInResponseException: Error occurred when processing\npacket:Content-Length: 120\n{\"elementId\": \"null\", \"context\": \"{9b44672f-d547-43a8-a01e-a504e617cfc1}\", \"parameters\": [], \"commandName\": \"getCookie\"}\nresponse:Length: 266\n{\"commandName\":\"getCookie\",\"isError\":true,\"response\":{\"lineNumber\":576,\"message\":\"Component returned failure code: 0x80004005 (NS_ERROR_FAILURE) [nsIDOMLocation.host]\",\"name\":\"NS_ERROR_FAILURE\"},\"elementId\":\"null\",\"context\":\"{9b44672f-d547-43a8-a01e-a504e617cfc1} \"}\nHow can I fix it?\nUpdate:\nThis happens with a clean installation of webdriver with no modifications. The changes I've mentioned in another post were made later than this post being posted (I was trying to fix the issue myself).","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":2862,"Q_Id":1813044,"Users Score":0,"Answer":"Hmm, I actually haven't worked with Webdriver so this may be of no help at all... but in your other post you mention that you're experimenting with modifying the delete cookie webdriver js function. Did get_cookies fail before you were modifying the delete function? What happens when you get cookies before deleting them? I would guess that the modification you're making to the delete function in webdriver-read-only\\firefox\\src\\extension\\components\\firefoxDriver.js could break the delete function. Are you doing it just for debugging or do you actually want the browser itself to show a pop up when the driver tells it to delete cookies? It wouldn't surprise me if this modification broke something.\nMy real advice though would be actually to start using Selenium instead of Webdriver since it's being discontinued in its current incarnation, or morphed into Selenium. Selenium is more actively developed and has pretty active and responsive forums. 
It will continue to be developed and stable while the merge is happening, while I take it Webdriver might not have as many bugfixes going forward. I've had success using the Selenium commands that control cookies. They seem to be revamping their documentation and for some reason there isn't any link to the Python API, but if you download selenium rc, you can find the Python API doc in selenium-client-driver-python, you'll see there are a good 5 or so useful methods for controlling cookies, which you use in your own custom Python methods if you want to, say, delete all the cookies with a name matching a certain regexp. If for some reason you do want the browser to alert() some info about the deleted cookies too, you could do that by getting the cookie names\/values from the python method, and then passing them to selenium's getEval() statement which will execute arbitrary js you feed it (like \"alert()\"). ... If you do go the selenium route feel free to contact me if you get a blocker, I might be able to assist.","Q_Score":0,"Tags":"python,firefox,webdriver","A_Id":1814160,"CreationDate":"2009-11-28T16:59:00.000","Title":"How to delete Firefox cookies from webdriver in python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm writing a script which needs the browser that selenium is operating close and re-open, without losing its cookies.\nAny idea on how to go about it?\nBasically, it's a check to see that if the user opens and closes his browser, his cookies stay intact.","AnswerCount":2,"Available Count":2,"Score":0.2913126125,"is_accepted":false,"ViewCount":1370,"Q_Id":1818969,"Users Score":3,"Answer":"You should be able to use the stop and start commands. 
You will need to ensure that you are not clearing cookies between sessions, and depending on the browser you're launching you may also need to use the -browserSessionReuse command line option.","Q_Score":1,"Tags":"python,selenium","A_Id":1819059,"CreationDate":"2009-11-30T10:20:00.000","Title":"Close and open a new browser in Selenium","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm writing a script which needs the browser that selenium is operating close and re-open, without losing its cookies.\nAny idea on how to go about it?\nBasically, it's a check to see that if the user opens and closes his browser, his cookies stay intact.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":1370,"Q_Id":1818969,"Users Score":0,"Answer":"This is a feature of the browser and not your concern: If there is a bug in the browser, then there is little you can do. If you need to know whether a certain version of the browser works correctly, then define a manual test (write a document that explains the steps), do it once and record the result somewhere (like \"Browser XXX version YYY works\").\nWhen you know that a certain browser (version) works, then that's not going to change, so there is no need to repeat the test.","Q_Score":1,"Tags":"python,selenium","A_Id":1819042,"CreationDate":"2009-11-30T10:20:00.000","Title":"Close and open a new browser in Selenium","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"My Application has a lot of calculation being done in JavaScript according to how and when the user acts on the application. 
The project prints out valuable information (through console calls) as to how this calculation is going on, and so we can easily spot any NaNs creeping in.\nWe are planning to integrate Selenium (RC with python) to test our project, but if we could get the console output messages in the python test case, we can identify any NaNs or even any miscalculations.\nSo, is there a way that Selenium can absorb these outputs (preferably in a console-less environment)?\nIf not, I would like to know if I can divert the console calls, maybe by rebinding the console variable to something else, so that selenium can get that output and notify the python side. Or if not console, is there any other way that I can achieve this.\nI know selenium has commands like waitForElementPresent etc., but I don't want to show these intermediate calculations on the application, or is it the only way?\nAny help appreciated.\nThank you.","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":2109,"Q_Id":1819903,"Users Score":1,"Answer":"If you are purely testing that the JavaScript functions are performing the correct calculations with the given inputs, I would suggest separating your JavaScript from your page and using a JavaScript testing framework to test the functionality. Testing low-level code using Selenium is a lot of unnecessary overhead. If you're going against the fully rendered page, this would require your application to be running on a server, which should not be a dependency of testing raw JavaScript.\nWe recently converted our application from jsUnit to YUI Test and it has been promising so far. We run about 150 tests in both FireFox and IE in less than three minutes. Our testing still isn't ideal - we still test a lot of JavaScript the hard way using Selenium. 
However, moving some of the UI tests to YUI Test has saved us a lot of time in our Continuous Integration environment.","Q_Score":3,"Tags":"javascript,python,testing,selenium,selenium-rc","A_Id":1902514,"CreationDate":"2009-11-30T13:42:00.000","Title":"Javascript communication with Selenium (RC)","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"There is CherryPy. Are there any others?","AnswerCount":3,"Available Count":1,"Score":0.0665680765,"is_accepted":false,"ViewCount":507,"Q_Id":1835668,"Users Score":1,"Answer":"also:\nweb.py (webpy.org)\npaste (pythonpaste.org)","Q_Score":2,"Tags":"python,http","A_Id":1837070,"CreationDate":"2009-12-02T20:43:00.000","Title":"What Python-only HTTP\/1.1 web servers are available?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I want to automate interaction with a webpage. I've been using pycurl up til now but eventually the webpage will use javascript so I'm looking for alternatives . A typical interaction is \"open the page, search for some text, click on a link (which opens a form), fill out the form and submit\".\nWe're deploying on Google App engine, if that makes a difference. \nClarification: we're deploying the webpage on appengine. But the interaction is run on a separate machine. So selenium seems like it's the best choice.","AnswerCount":5,"Available Count":1,"Score":1.0,"is_accepted":false,"ViewCount":9387,"Q_Id":1836987,"Users Score":6,"Answer":"Twill and mechanize don't do Javascript, and Qt and Selenium can't run on App Engine ((1)), which only supports pure Python code. 
I do not know of any pure-Python Javascript interpreter, which is what you'd need to deploy a JS-supporting scraper on App Engine:-(.\nMaybe there's something in Java, which would at least allow you to deploy on (the Java version of) App Engine? App Engine app versions in Java and Python can use the same datastore, so you could keep some part of your app in Python... just not the part that needs to understand Javascript. Unfortunately I don't know enough about the Java \/ AE environment to suggest any specific package to try.\n((1)): to clarify, since there seems to be a misunderstanding that has gotten so far as to get me downvoted: if you run Selenium or other scrapers on a different computer, you can of course target a site deployed in App Engine (it doesn't matter how the website you're targeting is deployed, what programming language[s] it uses, etc, etc, as long as it's a website you can access [[real website: flash, &c, may likely be different]]). How I read the question is, the OP is looking for ways to have the scraping run as part of an App Engine app -- that is the problematic part, not where you (or somebody else;-) runs the site being scraped!","Q_Score":4,"Tags":"python,google-app-engine,pycurl","A_Id":1837397,"CreationDate":"2009-12-03T00:59:00.000","Title":"Automate interaction with a webpage in python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"When I view the source of page, I do not find the image src. but the image is displayed on the page. This image is generated by some server side code.\nI am using the selenium for testing. 
I want to download this image for verification\/comparison.\nHow to get that image using python?","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":163,"Q_Id":1838047,"Users Score":0,"Answer":"If you just want to download the image, there are two strategies you can try:\n\nUse something like Firebug or Chrome developer tools. Right-click the element in question, click \"inspect element\", and look at the CSS properties of the element. If you look around, you should find something like a background-image style or maybe just a normal tag. Then you'll have the URL to the image.\nUse something like Firebug or Chrome developer tools: look in the \"resources\" tab, and look for image files that show up.","Q_Score":0,"Tags":"python","A_Id":1870291,"CreationDate":"2009-12-03T06:26:00.000","Title":"How to get the URL Image which is displyed by script","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"When I view the source of page, I do not find the image src. but the image is displayed on the page. This image is generated by some server side code.\nI am using the selenium for testing. I want to download this image for verification\/comparison.\nHow to get that image using python?","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":163,"Q_Id":1838047,"Users Score":0,"Answer":"If you aren't seeing an actual image tag in the HTML, your next step would seem to be figuring out how it's being displayed. \nThe first place I'd suggest looking is in the .css files for this page. 
Images can actually be embedded using CSS, and this seems like the next likely option after being in the HTML code itself.\nIf it isn't in there, you may be dealing with some form of technique deliberately intended to prevent you from being able to download the image with a script. This may use obfuscated JavaScript or something similar and I wouldn't expect people to be able to give you an easy solution to bypass it (since it was carefully designed to resist exactly that).","Q_Score":0,"Tags":"python","A_Id":1838243,"CreationDate":"2009-12-03T06:26:00.000","Title":"How to get the URL Image which is displyed by script","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I want to learn how to use XMPP and to create a simple web application with real collaboration features.\nI am writing the application with Python(WSGI), and the application will require javascript enabled because I am going to use jQuery or Dojo.\nI have downloaded Openfire for the server and which lib to choose? SleekXMPP making trouble with tlslite module(python 2.5 and I need only python 2.6).\nWhat is your suggestion?","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1373,"Q_Id":1847120,"Users Score":0,"Answer":"I have found a lot of issues with Openfire and TLS are not with the xmpp lib :( -- SleekXMPP in the trunk has been converted to Python 3.0 and the branch is maintained for Python 2.5\nUnlike Julien, I would only go with Twisted Words if you really need the power of Twisted or if you are already using Twisted. 
IMO SleekXMPP offers the closest match to the current XEPs in use today.","Q_Score":3,"Tags":"javascript,python,xmpp,wsgi","A_Id":1881020,"CreationDate":"2009-12-04T14:02:00.000","Title":"Best XMPP Library for Python Web Application","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Are they just the same protocol or something different?\nI am just confused about it.\nActually, I want to call a web service written in C# with ASP.NET by Python. I have tried XMLRPC but it seems just did not work.\nSo what is the actually difference among them? \nThanks.","AnswerCount":4,"Available Count":3,"Score":0.1488850336,"is_accepted":false,"ViewCount":2232,"Q_Id":1847534,"Users Score":3,"Answer":"They are completely different protocols; you need to find out the protocol used by the web service you wish to consume and program to that. Web services is really just a concept; XML-RPC, SOAP and REST are actual technologies that implement this concept. These implementations are not interoperable (without some translation layer).\nAll these protocols enable basically the same sort of thing: calling into some remote application over the web. 
However the details of how they do this differ, they are not just different names for the same protocol.","Q_Score":3,"Tags":"c#,python,web-services,xml-rpc","A_Id":1847560,"CreationDate":"2009-12-04T15:06:00.000","Title":"Can anyone explain the difference between XMLRPC, SOAP and also the C# Web Service?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Are they just the same protocol or something different?\nI am just confused about it.\nActually, I want to call a web service written in C# with ASP.NET by Python. I have tried XMLRPC but it seems just did not work.\nSo what is the actually difference among them? \nThanks.","AnswerCount":4,"Available Count":3,"Score":1.2,"is_accepted":true,"ViewCount":2232,"Q_Id":1847534,"Users Score":5,"Answer":"All of them use the same transport protocol (HTTP).\nXMLRPC formats a traditional RPC call with XML for remote execution.\nSOAP wraps the call in a SOAP envelope (still XML, different formatting, oriented towards message based services rather than RPC style calls).\nIf you're using C#, your best bet is probably SOAP based Web Services (at least out of the options you listed).","Q_Score":3,"Tags":"c#,python,web-services,xml-rpc","A_Id":1847573,"CreationDate":"2009-12-04T15:06:00.000","Title":"Can anyone explain the difference between XMLRPC, SOAP and also the C# Web Service?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Are they just the same protocol or something different?\nI am just confused about it.\nActually, I want to call a web service written in C# with ASP.NET by Python. 
I have tried XMLRPC but it seems just did not work.\nSo what is the actually difference among them? \nThanks.","AnswerCount":4,"Available Count":3,"Score":0.049958375,"is_accepted":false,"ViewCount":2232,"Q_Id":1847534,"Users Score":1,"Answer":"xml-rpc: It's a mechanism to call remote procedures & functions across a network for distributed system integration. It uses an XML-based message document and HTTP as the transport protocol. Further, it only supports 6 basic data types as well as arrays for communication.\nSOAP: SOAP is also an XML-based protocol for information exchange, using HTTP as the transport protocol. However, it is more advanced than the XML-RPC protocol. It uses XML-formatted messages that help communicate complex data types across distributed applications, and hence is widely used nowadays.","Q_Score":3,"Tags":"c#,python,web-services,xml-rpc","A_Id":3913857,"CreationDate":"2009-12-04T15:06:00.000","Title":"Can anyone explain the difference between XMLRPC, SOAP and also the C# Web Service?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm writing a spider in Python to crawl a site. Trouble is, I need to examine about 2.5 million pages, so I could really use some help making it optimized for speed.\nWhat I need to do is examine the pages for a certain number, and if it is found to record the link to the page. The spider is very simple, it just needs to sort through a lot of pages.\nI'm completely new to Python, but have used Java and C++ before. I have yet to start coding it, so any recommendations on libraries or frameworks to include would be great. 
Any optimization tips are also greatly appreciated.","AnswerCount":6,"Available Count":3,"Score":0.0996679946,"is_accepted":false,"ViewCount":7091,"Q_Id":1853673,"Users Score":3,"Answer":"You waste a lot of time waiting for network requests when spidering, so you'll definitely want to make your requests in parallel. I would probably save the result data to disk and then have a second process looping over the files searching for the term. That phase could easily be distributed across multiple machines if you needed extra performance.","Q_Score":6,"Tags":"python,web-crawler","A_Id":1853865,"CreationDate":"2009-12-05T22:28:00.000","Title":"Writing a Faster Python Spider","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm writing a spider in Python to crawl a site. Trouble is, I need to examine about 2.5 million pages, so I could really use some help making it optimized for speed.\nWhat I need to do is examine the pages for a certain number, and if it is found to record the link to the page. The spider is very simple, it just needs to sort through a lot of pages.\nI'm completely new to Python, but have used Java and C++ before. I have yet to start coding it, so any recommendations on libraries or frameworks to include would be great. Any optimization tips are also greatly appreciated.","AnswerCount":6,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":7091,"Q_Id":1853673,"Users Score":0,"Answer":"What Adam said. I did this once to map out Xanga's network. The way I made it faster is by having a thread-safe set containing all usernames I had to look up. Then I had 5 or so threads making requests at the same time and processing them. 
You're going to spend way more time waiting for the page to DL than you will processing any of the text (most likely), so just find ways to increase the number of requests you can get at the same time.","Q_Score":6,"Tags":"python,web-crawler","A_Id":1854592,"CreationDate":"2009-12-05T22:28:00.000","Title":"Writing a Faster Python Spider","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm writing a spider in Python to crawl a site. Trouble is, I need to examine about 2.5 million pages, so I could really use some help making it optimized for speed.\nWhat I need to do is examine the pages for a certain number, and if it is found to record the link to the page. The spider is very simple, it just needs to sort through a lot of pages.\nI'm completely new to Python, but have used Java and C++ before. I have yet to start coding it, so any recommendations on libraries or frameworks to include would be great. Any optimization tips are also greatly appreciated.","AnswerCount":6,"Available Count":3,"Score":0.0996679946,"is_accepted":false,"ViewCount":7091,"Q_Id":1853673,"Users Score":3,"Answer":"Spidering somebody's site with millions of requests isn't very polite. Can you instead ask the webmaster for an archive of the site? Once you have that, it's a simple matter of text searching.","Q_Score":6,"Tags":"python,web-crawler","A_Id":1853689,"CreationDate":"2009-12-05T22:28:00.000","Title":"Writing a Faster Python Spider","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am a complete, total beginner in programming, although I do have knowledge of CSS and HTML.\nI would like to learn Python. 
I downloaded lots of source code but the amount of files and the complexity really confuses me. I don't know where to begin. Is there a particular order I should look for?\nThanks.\nEDIT: Sorry guys, I forgot to mention that I already have both the online tutorial and a couple of books handy. I basically I don't quite understand how to \"dismantle\" and understand complex source code, in order to grasp programming techniques and concepts. \nEDIT2: Thanks for the extremely quick comments, guys. I really appreciate it. This website is awesome.","AnswerCount":9,"Available Count":5,"Score":0.0,"is_accepted":false,"ViewCount":1248,"Q_Id":1854827,"Users Score":0,"Answer":"Maybe you have a project in mind that you want to code up? It's very hard to read what other people write, the best way to learn is to try something. Other people will have gone through the problems you will come across, and so why code is written the way it is may start to make sense. This is an excellent site to post questions, no matter how stupid you consider them.","Q_Score":2,"Tags":"python,coding-style,code-readability","A_Id":2628550,"CreationDate":"2009-12-06T09:04:00.000","Title":"How can a total, complete beginner read source code?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am a complete, total beginner in programming, although I do have knowledge of CSS and HTML.\nI would like to learn Python. I downloaded lots of source code but the amount of files and the complexity really confuses me. I don't know where to begin. Is there a particular order I should look for?\nThanks.\nEDIT: Sorry guys, I forgot to mention that I already have both the online tutorial and a couple of books handy. 
I basically I don't quite understand how to \"dismantle\" and understand complex source code, in order to grasp programming techniques and concepts. \nEDIT2: Thanks for the extremely quick comments, guys. I really appreciate it. This website is awesome.","AnswerCount":9,"Available Count":5,"Score":1.0,"is_accepted":false,"ViewCount":1248,"Q_Id":1854827,"Users Score":6,"Answer":"I would recommend you understand the basics. What are methods, classes, variables and so on. It would be important to understand the constructs you are seeing. If you don't understand those then it's just going to be a bunch of characters.","Q_Score":2,"Tags":"python,coding-style,code-readability","A_Id":1854832,"CreationDate":"2009-12-06T09:04:00.000","Title":"How can a total, complete beginner read source code?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am a complete, total beginner in programming, although I do have knowledge of CSS and HTML.\nI would like to learn Python. I downloaded lots of source code but the amount of files and the complexity really confuses me. I don't know where to begin. Is there a particular order I should look for?\nThanks.\nEDIT: Sorry guys, I forgot to mention that I already have both the online tutorial and a couple of books handy. I basically I don't quite understand how to \"dismantle\" and understand complex source code, in order to grasp programming techniques and concepts. \nEDIT2: Thanks for the extremely quick comments, guys. I really appreciate it. 
This website is awesome.","AnswerCount":9,"Available Count":5,"Score":0.0665680765,"is_accepted":false,"ViewCount":1248,"Q_Id":1854827,"Users Score":3,"Answer":"Donald Knuth suggests:\n\"It [is] basically the way you solve some kind of unknown puzzle -- make tables and charts and get a little more information here and make a hypothesis.\"\n(From \"Coders at Work\", Chapter 15)\nIn my opinion, the easiest way to understand a program is to study the data structures first. Write them down, memorize them. Only then, think about how they move through program-time.\nAs an aside, it is sort of a shame how few books there are on code reading. \"Coders at Work\" is probably the best so far. Ironically, \"Reading Code\" is one of the worst so far.","Q_Score":2,"Tags":"python,coding-style,code-readability","A_Id":1973070,"CreationDate":"2009-12-06T09:04:00.000","Title":"How can a total, complete beginner read source code?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am a complete, total beginner in programming, although I do have knowledge of CSS and HTML.\nI would like to learn Python. I downloaded lots of source code but the amount of files and the complexity really confuses me. I don't know where to begin. Is there a particular order I should look for?\nThanks.\nEDIT: Sorry guys, I forgot to mention that I already have both the online tutorial and a couple of books handy. I basically I don't quite understand how to \"dismantle\" and understand complex source code, in order to grasp programming techniques and concepts. \nEDIT2: Thanks for the extremely quick comments, guys. I really appreciate it. 
This website is awesome.","AnswerCount":9,"Available Count":5,"Score":0.0665680765,"is_accepted":false,"ViewCount":1248,"Q_Id":1854827,"Users Score":3,"Answer":"There is no magic way to learn anything without reading and writing code yourself. If you get stuck there are always folks in SO who will help you.","Q_Score":2,"Tags":"python,coding-style,code-readability","A_Id":1854841,"CreationDate":"2009-12-06T09:04:00.000","Title":"How can a total, complete beginner read source code?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am a complete, total beginner in programming, although I do have knowledge of CSS and HTML.\nI would like to learn Python. I downloaded lots of source code but the amount of files and the complexity really confuses me. I don't know where to begin. Is there a particular order I should look for?\nThanks.\nEDIT: Sorry guys, I forgot to mention that I already have both the online tutorial and a couple of books handy. I basically I don't quite understand how to \"dismantle\" and understand complex source code, in order to grasp programming techniques and concepts. \nEDIT2: Thanks for the extremely quick comments, guys. I really appreciate it. This website is awesome.","AnswerCount":9,"Available Count":5,"Score":0.0665680765,"is_accepted":false,"ViewCount":1248,"Q_Id":1854827,"Users Score":3,"Answer":"To understand source code in any language, you first need to learn the language. It's as simple as that!\nUsually, reading source code (as a sole activity) will hurt your head without giving much benefit in terms of learning the underlying language. You need a structured tour through carefully chosen small source code examples, such as a book or tutorial will give you.\nCheck Amazon out for books and Google for tutorials, try a few. 
The links offered by some of the other answers would also be a great starting point.","Q_Score":2,"Tags":"python,coding-style,code-readability","A_Id":1854836,"CreationDate":"2009-12-06T09:04:00.000","Title":"How can a total, complete beginner read source code?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm working on writing a Python client for Direct Connect P2P networks. Essentially, it works by connecting to a central server, and responding to other users who are searching for files.\nOccasionally, another client will ask us to connect to them, and they might begin downloading a file from us. This is a direct connection to the other client, and doesn't go through the central server.\nWhat is the best way to handle these connections to other clients? I'm currently using one Twisted reactor to connect to the server, but is it better have multiple reactors, one per client, with each one running in a different thread? Or would it be better to have a completely separate Python script that performs the connection to the client?\nIf there's some other solution that I don't know about, I'd love to hear it. I'm new to programming with Twisted, so I'm open to suggestions and other resources.\nThanks!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1176,"Q_Id":1856786,"Users Score":3,"Answer":"Without knowing all the details of the protocol, I would still recommend using a single reactor -- a reactor scales quite well (especially advanced ones such as PollReactor) and this way you will avoid the overhead connected with threads (that's how Twisted and other async systems get their fundamental performance boost, after all -- by avoiding such overhead). 
In practice, threads in Twisted are useful mainly when you need to interface to a library whose functions could block on you.","Q_Score":3,"Tags":"python,twisted,p2p","A_Id":1857145,"CreationDate":"2009-12-06T22:10:00.000","Title":"Proper way to implement a Direct Connect client in Twisted?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a Python script gathering info from some remote network devices. The output can be maybe 20 to 1000 lines of text. This then goes into excel on my local PC for now.\nNow access to this Linux device is convoluted, a citrix session to a remote windows server then ssh to the Linux device half way around the world. There is no ftp, scp, or anything like that, so I can't generate the excel on the Linux device and transfer it locally. The ONLY way to get the info is to copy\/paste from the ssh window into the local machine and post-process it\nMy question is what would be the best (from a user point of view as others will be using it) format to generate? 1.as it is now (spaces & tabs), 2.reformat as csv or as 3.convert to xml","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":720,"Q_Id":1870383,"Users Score":0,"Answer":"Reformat it as CSV. It's dead easy to do, is fairly human readable, and can be read by loads of pieces of spreadsheet software.","Q_Score":0,"Tags":"python,xml,excel,csv","A_Id":1870411,"CreationDate":"2009-12-08T22:36:00.000","Title":"Python to generate output ready for Excel","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm working on a small wave thingy where i need to load a wave based on an outside event. 
So I don't have a context to work with.\nI've been looking at the python api for a while but I can't figure out the correct way to get a wave object (that I can then call CreateBlip() on) when I just have the waveid.\nIs there something I've just overlooked? Or do I have to make a 'raw' json request instead of using the api?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":95,"Q_Id":1876908,"Users Score":0,"Answer":"At the moment the answer is that it can't be done. Hopefully it will be possible in a future version of the API.","Q_Score":1,"Tags":"python,google-wave","A_Id":1932852,"CreationDate":"2009-12-09T21:11:00.000","Title":"Loading a wave from waveid","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"What would be the best way of unpacking a python string into fields?\nI have data received from a tcp socket, it is packed as follows, I believe it will be in a string from the socket recv function\nIt has the following format\nuint8 - header\nuint8 - length\nuint32 - typeID\nuint16 -param1\nuint16 -param2\nuint16 -param3\nuint16 -param4\nchar[24] - name string\nuint32 - checksum\nuint8 - footer \n(I also need to unpack other packets with different formats to the above)\nHow do I unpack these?\nI am new to python, have done a bit of 'C'. 
If I was using 'C' I would probably use a structure, would this be the way to go with Python?\nRegards\nX","AnswerCount":5,"Available Count":1,"Score":0.0399786803,"is_accepted":false,"ViewCount":11515,"Q_Id":1879914,"Users Score":1,"Answer":"Is this the best way of doing this, or is there a better way?\nIt is likely that there will be strings with other formats which will require a different unpacking scheme.\nfield1 = struct.unpack('B',data[0])\nfield2 = struct.unpack('B',data[1])\nfield3 = struct.unpack('!I',data[2:6])\nfield4 = struct.unpack('!H',data[6:8])\nfield5 = struct.unpack('!H',data[8:10])\nfield6 = struct.unpack('!H',data[10:12])\nfield7 = struct.unpack('!H',data[12:14])\nfield8 = struct.unpack('24s',data[14:38])\nfield9 = struct.unpack('!I',data[38:42])\nfield10 = struct.unpack('B',data[42])\nRegards","Q_Score":5,"Tags":"python,string,unpack","A_Id":1880427,"CreationDate":"2009-12-10T09:50:00.000","Title":"Decoding packed data into a structure","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have built an XML-RPC interface in Python and I need to enforce some stricter typing. For example, passing string '10' instead of int 10. I can clean this up with some type casting and a little exception handling, but I am wondering if there is any other way of forcing type integrity such as something XML-RPC specific, a decorator, or something else.","AnswerCount":3,"Available Count":1,"Score":0.0665680765,"is_accepted":false,"ViewCount":220,"Q_Id":1925487,"Users Score":1,"Answer":"It's always going to be converted to a string anyway, so why do you care what's being passed in? 
If you use \"%s\" % number or even just str(number), then it doesn't matter whether number is a string or an int.","Q_Score":2,"Tags":"python,django","A_Id":1925617,"CreationDate":"2009-12-18T00:12:00.000","Title":"XML-RPC method parameter data typing in Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I need some code to get the address of the socket i just created (to filter out packets originating from localhost on a multicast network)\nthis:\nsocket.gethostbyname(socket.gethostname())\nworks on mac but it returns only the localhost IP in linux... is there anyway to get the LAN address\nthanks\n--edit--\nis it possible to get it from the socket settings itself, like, the OS has to select a LAN IP to send on... can i play on getsockopt(... IP_MULTICAST_IF...) i dont know exactly how to use this though...?\n--- edit ---\nSOLVED!\nsend_sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_LOOP, 0)\nputting this on the send socket eliminated packet echos to the host sending them, which eliminates the need for the program to know which IP the OS has selected to send.\nyay!","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":626,"Q_Id":1925974,"Users Score":0,"Answer":"quick answer - socket.getpeername() (provided that socket is a socket object, not a module)\n(playing around in python\/ipython\/idle\/... interactive shell is very helpful)\n.. 
or if I read your question carefully, maybe socket.getsockname() :)","Q_Score":0,"Tags":"python,sockets,ip-address","A_Id":1926048,"CreationDate":"2009-12-18T02:49:00.000","Title":"How to get the LAN IP that a socket is sending (linux)","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I've got a python web crawler and I want to distribute the download requests among many different proxy servers, probably running squid (though I'm open to alternatives). For example, it could work in a round-robin fashion, where request1 goes to proxy1, request2 to proxy2, and eventually looping back around. Any idea how to set this up?\nTo make it harder, I'd also like to be able to dynamically change the list of available proxies, bring some down, and add others.\nIf it matters, IP addresses are assigned dynamically.\nThanks :)","AnswerCount":3,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":16640,"Q_Id":1934088,"Users Score":6,"Answer":"Make your crawler have a list of proxies and with each HTTP request let it use the next proxy from the list in a round robin fashion. However, this will prevent you from using HTTP\/1.1 persistent connections. Modifying the proxy list will eventually result in a new proxy being used, or in a removed proxy no longer being used.\nOr have several connections open in parallel, one to each proxy, and distribute your crawling requests to each of the open connections. 
Dynamics may be implemented by having the connector register itself with the request dispatcher.","Q_Score":10,"Tags":"python,proxy,screen-scraping,web-crawler,squid","A_Id":1934198,"CreationDate":"2009-12-19T20:46:00.000","Title":"Rotating Proxies for web scraping","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've imported contacts from gmail by using the gdata api, \nand are there any APIs like that for hotmail\/live\/Aol ?","AnswerCount":3,"Available Count":1,"Score":0.1325487884,"is_accepted":false,"ViewCount":637,"Q_Id":1938945,"Users Score":2,"Answer":"There is a Windows Live Contact API for Hotmail\/Live mail.\nA Yahoo Contact API for Yahoo also exists, but to this date, no AOL contact API. \nI would suggest you try openinviter (openinviter.com) to import contacts. Unfortunately, you will not have OAuth capabilities, but it is the best class out there and works with 90+ different email providers. \nNote: it is written in php, but creating a wrapper won't be too hard.","Q_Score":0,"Tags":"python,api","A_Id":1939043,"CreationDate":"2009-12-21T08:50:00.000","Title":"Is there any libraries could import contacts from hotmail\/live\/aol account?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is it possible to do in place edit of XML document using xpath ?\nI'd prefer any python solution but Java would be fine too.","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":375,"Q_Id":1964583,"Users Score":1,"Answer":"Using XML to store data is probably not optimal, as you experience here. 
Editing XML is extremely costly.\nOne way of doing the editing is parsing the xml into a tree, then inserting stuff into that tree, and then rebuilding the xml file. \nEditing an xml file in place is also possible, but then you need some kind of search mechanism that finds the location you need to edit or insert into, and then write to the file from that point. Remember to also read the remaining data, because it will be overwritten. This is fine for inserting new tags or data, but editing existing data makes it even more complicated.\nMy own rule is to not use XML for storage, but to present data. So the storage facility, or some kind of middle man, needs to form xml files from the data it has.","Q_Score":2,"Tags":"python,xpath","A_Id":1964631,"CreationDate":"2009-12-26T23:08:00.000","Title":"edit in place using xpath","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm creating a python script which accepts a path to a remote file and an n number of threads. The file's size will be divided by the number of threads, and when each thread completes I want them to append the fetched data to a local file.\nHow do I manage it so that the order in which the threads were generated will append to the local file in order so that the bytes don't get scrambled?\nAlso, what if I'm to download several files simultaneously?","AnswerCount":4,"Available Count":1,"Score":0.049958375,"is_accepted":false,"ViewCount":3862,"Q_Id":1965213,"Users Score":1,"Answer":"You need to fetch completely separate parts of the file on each thread. Calculate the chunk start and end positions based on the number of threads. 
Each chunk must have no overlap, obviously.\nFor example, if the target file was 3000 bytes long and you want to fetch it using three threads:\n\nThread 1: fetches bytes 1 to 1000\nThread 2: fetches bytes 1001 to 2000\nThread 3: fetches bytes 2001 to 3000\n\nYou would pre-allocate an empty file of the original size, and write back to the respective positions within the file.","Q_Score":0,"Tags":"python,multithreading","A_Id":1965219,"CreationDate":"2009-12-27T04:56:00.000","Title":"File downloading using python with threads","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm looking at existing python code that heavily uses Paramiko to do SSH and FTP. I need to allow the same code to work with some hosts that do not support a secure connection and over which I have no control. \nIs there a quick and easy way to do it via Paramiko, or do I need to step back, create some abstraction that supports both paramiko and Python's FTP libraries, and refactor the code to use this abstraction?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":7051,"Q_Id":1977571,"Users Score":7,"Answer":"No, paramiko has no support for telnet or ftp -- you're indeed better off using a higher-level abstraction and implementing it twice, with paramiko and without it (with the ftplib and telnetlib modules of the Python standard library).","Q_Score":6,"Tags":"python,paramiko","A_Id":1978007,"CreationDate":"2009-12-29T23:30:00.000","Title":"Does Paramiko support non-secure telnet and ftp instead of just SSH and SFTP?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm using Web2Py and I want to import my program 
simply once per session... not every time the page is loaded. Is this possible? Such as \"import Client\" being used on the page but only importing it once per session..","AnswerCount":1,"Available Count":1,"Score":1.0,"is_accepted":false,"ViewCount":953,"Q_Id":1978426,"Users Score":6,"Answer":"In web2py your models and controllers are executed, not imported. They are executed every time a request arrives. If you press the button [compile] in admin, they will be bytecode compiled and some other optimizations are performed.\nIf your app (in models and controllers) does \"import somemodule\", then the import statement is executed at every request but \"somemodule\" is actually imported only the first time it is executed, as you asked.","Q_Score":4,"Tags":"python,web2py","A_Id":1980510,"CreationDate":"2009-12-30T04:27:00.000","Title":"Web2py Import Once per Session","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I want to develop a tool for my project using python. The requirements are:\n\nEmbed a web server to let the user get some static files, but the traffic is not very high.\nUser can configure the tool using http, I don't want a GUI page, I just need a RPC interface, like XML-RPC? or others?\nBesides the web server, the tool need some background job to do, so these jobs need to be done with the web server.\n\nSo, Which python web server is best choice? I am looking at CherryPy, If you have other recommendation, please write it here.","AnswerCount":5,"Available Count":4,"Score":0.0,"is_accepted":false,"ViewCount":1002,"Q_Id":1978791,"Users Score":0,"Answer":"Why don't you use open source build tools (continuous integration tools) like Cruise? 
Most of them come with a web server\/xml interface and sometimes with fancy reports as well.","Q_Score":0,"Tags":"python,cherrypy","A_Id":1978818,"CreationDate":"2009-12-30T06:56:00.000","Title":"Python web server?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to develop a tool for my project using python. The requirements are:\n\nEmbed a web server to let the user get some static files, but the traffic is not very high.\nUser can configure the tool using http, I don't want a GUI page, I just need a RPC interface, like XML-RPC? or others?\nBesides the web server, the tool need some background job to do, so these jobs need to be done with the web server.\n\nSo, Which python web server is best choice? I am looking at CherryPy, If you have other recommendation, please write it here.","AnswerCount":5,"Available Count":4,"Score":-0.1194272985,"is_accepted":false,"ViewCount":1002,"Q_Id":1978791,"Users Score":-3,"Answer":"This sounds like a fun project. So, why don't write your own HTTP server? Its not so complicated after all, HTTP is a well-known and easy to implement protocol and you'll gain a lot of new knowledge!\nCheck documentation or manual pages (whatever you prefer) of socket(), bind(), listen(), accept() and so on.","Q_Score":0,"Tags":"python,cherrypy","A_Id":1979792,"CreationDate":"2009-12-30T06:56:00.000","Title":"Python web server?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to develop a tool for my project using python. 
The requirements are:\n\nEmbed a web server to let the user get some static files, but the traffic is not very high.\nUser can configure the tool using http, I don't want a GUI page, I just need a RPC interface, like XML-RPC? or others?\nBesides the web server, the tool need some background job to do, so these jobs need to be done with the web server.\n\nSo, Which python web server is best choice? I am looking at CherryPy, If you have other recommendation, please write it here.","AnswerCount":5,"Available Count":4,"Score":0.0399786803,"is_accepted":false,"ViewCount":1002,"Q_Id":1978791,"Users Score":1,"Answer":"Use the WSGI Reference Implementation wsgiref already provided with Python\nUse REST protocols with JSON (not XML-RPC). It's simpler and faster than XML.\nBackground jobs are started with subprocess.","Q_Score":0,"Tags":"python,cherrypy","A_Id":1979714,"CreationDate":"2009-12-30T06:56:00.000","Title":"Python web server?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to develop a tool for my project using python. The requirements are:\n\nEmbed a web server to let the user get some static files, but the traffic is not very high.\nUser can configure the tool using http, I don't want a GUI page, I just need a RPC interface, like XML-RPC? or others?\nBesides the web server, the tool need some background job to do, so these jobs need to be done with the web server.\n\nSo, Which python web server is best choice? 
I am looking at CherryPy, If you have other recommendation, please write it here.","AnswerCount":5,"Available Count":4,"Score":1.2,"is_accepted":true,"ViewCount":1002,"Q_Id":1978791,"Users Score":3,"Answer":"what about the internal python webserver ?\njust type \"python web server\" in google, and host the 1st result...","Q_Score":0,"Tags":"python,cherrypy","A_Id":1979101,"CreationDate":"2009-12-30T06:56:00.000","Title":"Python web server?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am porting some Java code to Python and we would like to use Python 3 but I can't find LDAP module for Python 3 in Windows.\nThis is forcing us to use 2.6 version and it is bothersome as rest of the code is already in 3.0 format.","AnswerCount":4,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":16151,"Q_Id":1982442,"Users Score":-3,"Answer":"This answer is no longer accurate; see below for other answers.\nSorry to break this on you, but I don't think there is a python-ldap for Python 3 (yet)...\nThat's the reason why we should keep active development at Python 2.6 for now (as long as most crucial dependencies (libs) are not ported to 3.0).","Q_Score":12,"Tags":"python,ldap,python-3.x","A_Id":1982479,"CreationDate":"2009-12-30T20:56:00.000","Title":"Does Python 3 have LDAP module?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to upload a file from my computer to a file hoster like hotfile.com via a Python script. Because Hotfile is only offering a web-based upload service (no ftp).\nI need Python first to login with my username and password and after that to upload the file. 
When the file transfer is over, I need the Download and Delete links (which are generated right after the upload has finished). \nIs this even possible? If so, can anybody tell me what the script looks like, or even give me hints on how to build it?\nThanks","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":7457,"Q_Id":1993060,"Users Score":0,"Answer":"You mention they do not offer FTP, but I went to their site and found the following:\n\nHow to upload with FTP?\n ftp.hotfile.com user: your hotfile\n username pass: your hotfile password\n You can upload and make folders, but\n cant rename,move files\n\nTry it. If it works, using FTP from within Python will be a very simple task.","Q_Score":2,"Tags":"python,authentication,file-upload,automation","A_Id":1993139,"CreationDate":"2010-01-02T22:14:00.000","Title":"Upload file to a website via Python script","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a need to display some basic info about a facebook group on a website I am building. All I am really looking to show is the total number of members, and maybe a list of the few most recent people who joined. \nI would like to not have to login to FB to accomplish this, is there an API for groups that allows anonymous access? 
or do i have to go the screen scraping route?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":342,"Q_Id":2008816,"Users Score":1,"Answer":"Use the Python Facebook module on Google Code.","Q_Score":1,"Tags":"python,django,facebook","A_Id":2012448,"CreationDate":"2010-01-05T20:23:00.000","Title":"Python + Facebook, getting info about a group easily","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"How do I get the MAC address of a remote host on my LAN? I'm using Python and Linux.","AnswerCount":7,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":22526,"Q_Id":2010816,"Users Score":0,"Answer":"Many years ago, I was tasked with gathering various machine info from all machines on a corporate campus. One desired piece of info was the MAC address, which is difficult to get on a network that spanned multiple subnets. At the time, I used the Windows built-in \"nbtstat\" command.\nToday there is a Unix utility called \"nbtscan\" that provides similar info. If you do not wish to use an external tool, maybe there are NetBIOS libraries for python that could be used to gather the info for you?","Q_Score":6,"Tags":"python,linux,networking,mac-address","A_Id":2010975,"CreationDate":"2010-01-06T03:40:00.000","Title":"Get remote MAC address using Python and Linux","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm looking for a payment gateway company so we can avoid tiresome PCI-DSS certification and its associated expenses. I'll get this out the way now, I don't want Paypal. 
It does what I want but it's really not a company I want to trust with any sort of money.\nIt needs to support the following flow:\n\nUser performs actions on our site, generating an amount that needs to be paid.\nOur server contacts the gateway asynchronously (no hidden inputs) and tells it about the user, how much they need to pay. The gateway returns a URL and perhaps a tentative transaction ID.\nOur server stores the transaction ID and redirects the user to the URL provided by the gateway.\nThe user fills out their payment details on the remote server.\nWhen they have completed that, the gateway asynchronously contacts our server with the outcome, transaction id, etc and forwards them back to us (at a predestined URL).\nWe can show the user their order is complete\/failed\/etc. Fin.\n\nIf at all possible, UK or EU based and developer friendly.\nWe don't need any concept of a shopping basket as we have that all handled in our code already.\nWe have (or at least will have by launch) a proper merchant banking account - so cover services like Paypay aren't needed.\nIf their API covers Python (we're using Django) explicitly, all the better but I think I'm capable enough to decipher any other examples and transcode them into Python myself.","AnswerCount":5,"Available Count":3,"Score":0.0798297691,"is_accepted":false,"ViewCount":3656,"Q_Id":2022067,"Users Score":2,"Answer":"It sounds like you want something like Worldpay or even Google Checkout. But it all depends what your turnover is, because these sorts of providers (who host the payment page themselves), tend to take a percentage of every transaction, rather than a fixed monthly fee that you can get from elsewhere.\nThe other thing to consider is, if you have any way of taking orders over the phone, and the phone operators need to take customers' credit card details, then your whole internal network will need to be PCI compliant, too.\nIf you JUST need it for a website, then that makes it easier. 
If you have a low turnover, then check out the sites mentioned above. If you have a high turnover, then it may work out more cost effective in the long run to get PCI-DSS certified and still keep control of credit card transactions in-house, giving you more flexibility, and cheaper transaction costs.","Q_Score":9,"Tags":"python,payment-gateway,payment","A_Id":2023105,"CreationDate":"2010-01-07T17:01:00.000","Title":"Looking for a payment gateway","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm looking for a payment gateway company so we can avoid tiresome PCI-DSS certification and its associated expenses. I'll get this out the way now, I don't want Paypal. It does what I want but it's really not a company I want to trust with any sort of money.\nIt needs to support the following flow:\n\nUser performs actions on our site, generating an amount that needs to be paid.\nOur server contacts the gateway asynchronously (no hidden inputs) and tells it about the user, how much they need to pay. The gateway returns a URL and perhaps a tentative transaction ID.\nOur server stores the transaction ID and redirects the user to the URL provided by the gateway.\nThe user fills out their payment details on the remote server.\nWhen they have completed that, the gateway asynchronously contacts our server with the outcome, transaction id, etc and forwards them back to us (at a predestined URL).\nWe can show the user their order is complete\/failed\/etc. 
Fin.\n\nIf at all possible, UK or EU based and developer friendly.\nWe don't need any concept of a shopping basket as we have that all handled in our code already.\nWe have (or at least will have by launch) a proper merchant banking account - so cover services like Paypay aren't needed.\nIf their API covers Python (we're using Django) explicitly, all the better but I think I'm capable enough to decipher any other examples and transcode them into Python myself.","AnswerCount":5,"Available Count":3,"Score":1.2,"is_accepted":true,"ViewCount":3656,"Q_Id":2022067,"Users Score":4,"Answer":"You might want to take a look at Adyen (www.adyen.com). They are European and provide a whole lot of features and a very friendly interface. They don't charge a monthly or set up fee and seem to be reasonably priced per transaction.\nTheir hosted payments page can be completely customised which was an amazing improvement for us.","Q_Score":9,"Tags":"python,payment-gateway,payment","A_Id":2258716,"CreationDate":"2010-01-07T17:01:00.000","Title":"Looking for a payment gateway","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm looking for a payment gateway company so we can avoid tiresome PCI-DSS certification and its associated expenses. I'll get this out the way now, I don't want Paypal. It does what I want but it's really not a company I want to trust with any sort of money.\nIt needs to support the following flow:\n\nUser performs actions on our site, generating an amount that needs to be paid.\nOur server contacts the gateway asynchronously (no hidden inputs) and tells it about the user, how much they need to pay. 
The gateway returns a URL and perhaps a tentative transaction ID.\nOur server stores the transaction ID and redirects the user to the URL provided by the gateway.\nThe user fills out their payment details on the remote server.\nWhen they have completed that, the gateway asynchronously contacts our server with the outcome, transaction id, etc and forwards them back to us (at a predestined URL).\nWe can show the user their order is complete\/failed\/etc. Fin.\n\nIf at all possible, UK or EU based and developer friendly.\nWe don't need any concept of a shopping basket as we have that all handled in our code already.\nWe have (or at least will have by launch) a proper merchant banking account - so cover services like Paypay aren't needed.\nIf their API covers Python (we're using Django) explicitly, all the better but I think I'm capable enough to decipher any other examples and transcode them into Python myself.","AnswerCount":5,"Available Count":3,"Score":0.0798297691,"is_accepted":false,"ViewCount":3656,"Q_Id":2022067,"Users Score":2,"Answer":"I just finished something exactly like this using First Data Global Gateway (don't really want to provide a link, can find with Google). There's no Python API because their interface is nothing but http POST.\nYou have the choice of gathering credit card info yourself before posting the form to their server, as long as the connection is SSL and the referring URL is known to them (meaning it's your form but you can't store or process the data first).\nIn the FDGG gateway \"terminal interface\" you configure your URL endpoints for authorization accepted\/failed and it will POST transaction information.\nI can't say it was fun and their \"test\" mode was buggy but it works. 
Sorry, don't know if it's available in UK\/EU but it's misnamed if it isn't :)","Q_Score":9,"Tags":"python,payment-gateway,payment","A_Id":2023033,"CreationDate":"2010-01-07T17:01:00.000","Title":"Looking for a payment gateway","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am trying to make an HTTP request in Python 2.6.4, using the urllib module. Is there any way to set the request headers?\nI am sure that this is possible using urllib2, but I would prefer to use urllib since it seems simpler.","AnswerCount":3,"Available Count":1,"Score":0.1325487884,"is_accepted":false,"ViewCount":872,"Q_Id":2031745,"Users Score":2,"Answer":"There isn't any way to do that, which is precisely the reason urllib is deprecated in favour of urllib2. So just use urllib2 rather than writing new code to a deprecated interface.","Q_Score":0,"Tags":"python,http,urllib,python-2.6,python-2.x","A_Id":2031786,"CreationDate":"2010-01-09T00:27:00.000","Title":"Any way to set request headers when doing a request using urllib in Python 2.x?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Has anyone done a Python CLI to edit Firefox bookmarks ?\nMy worldview is that of Unix file trees; I want\n\nfind \/re\/ in given or all fields in given or all subtrees\ncd\nls with context\nmv this ..\/there\/\n\nWhether it uses bookmarks.html or places.sqlite is secondary -- whatever's easier.\nClarification added: I'd be happy to quit Firefox, edit bookmarks in the CLI, import the new database in Firefox.\nIn other words, database locking is a moot point; first let's see code for a rough cut CLI.\n(Why a text CLI and not a GUI ?\nCLIs are simpler
(for me), and one could easily program e.g.\nmv old-bookmarks to 2009\/same-structure\/.\nNonetheless links to a really good bookmarker GUI, for Firefox or anything else, would be useful too.)","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":2530,"Q_Id":2034373,"Users Score":0,"Answer":"I don't know about all the features you've mentioned but the \"Organize bookmarks\" option in the Bookmarks menu is pretty decent with respect to features.","Q_Score":1,"Tags":"python,firefox,command-line-interface,bookmarks","A_Id":2034590,"CreationDate":"2010-01-09T18:18:00.000","Title":"Python CLI to edit Firefox bookmarks?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have built a Python server to which various clients can connect, and I need to set a predefined series of messages from clients to the server - For example the client passes in a name to the server when it first connects. \nI was wondering what the best way to approach this is? How should I build a simple protocol for their communication? \nShould the messages start with a specific set of bytes to mark them out as part of this protocol, then contain some sort of message id? Any suggestions or further reading appreciated.","AnswerCount":4,"Available Count":1,"Score":0.049958375,"is_accepted":false,"ViewCount":2451,"Q_Id":2042133,"Users Score":1,"Answer":"Read some protocols, and try to find one that looks like what you need. Does it need to be message-oriented or stream-oriented? Does it need request order to be preserved, does it need requests to be paired with responses? Do you need message identifiers? Retries, back-off?
Is it an RPC protocol, a message queue protocol?","Q_Score":3,"Tags":"python,sockets,protocols","A_Id":2042187,"CreationDate":"2010-01-11T13:37:00.000","Title":"Python Sockets - Creating a message format","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Often times I want to automate http queries. I currently use Java(and commons http client), but would probably prefer a scripting based approach. Something really quick and simple. Where I can set a header, go to a page and not worry about setting up the entire OO lifecycle, setting each header, calling up an html parser... I am looking for a solution in ANY language, preferable scripting","AnswerCount":12,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":4483,"Q_Id":2043058,"Users Score":0,"Answer":"What about using PHP+Curl, or just bash?","Q_Score":8,"Tags":"python,ruby,perl,http,scripting","A_Id":2043069,"CreationDate":"2010-01-11T16:15:00.000","Title":"Scripting HTTP more effeciently","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I more or less know how to use select() to take a list of sockets, and only return the ones that are ready to read\/write something. The project I'm working on now has a class called 'user'. Each 'user' object contains its own socket. What I would like to do is pass a list of users to a select(), and get back a list of only the users where user.socket is ready to read\/write. Any thoughts on where to start on this?\nEdit: Changed switch() to select(). 
I need to proofread better.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":547,"Q_Id":2046727,"Users Score":2,"Answer":"You should have your User class implement a fileno(self) method which returns self.thesocket.fileno() -- that's the way to make select work on your own classes (sockets only on windows, arbitrary files on Unix-like systems). Not sure what switch is supposed to mean -- don't recognize it as a standard library (or built-in) Python concept...?","Q_Score":1,"Tags":"python,sockets","A_Id":2046760,"CreationDate":"2010-01-12T04:28:00.000","Title":"Creating waitable objects in Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am pretty sure the answer is no but of course there are cleverer guys than me!\nIs there a way to construct a lazy SAX based XML parser that can be stopped (e.g. raising an exception is a possible way of doing this) but also resumable ?\nI am looking for a possible solution for Python >= 2.6 with standard XML libraries. The \"lazy\" part is also trivial: I am really after the \"resumable\" property here.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1020,"Q_Id":2059455,"Users Score":0,"Answer":"Expat can be stopped and is resumable. AFAIK Python SAX parser uses Expat. Does the API really not expose the stopping stuff to the Python side??
\nEDIT: nope, looks like the parser stopping isn't available from Python...","Q_Score":1,"Tags":"python,xml,sax","A_Id":2059524,"CreationDate":"2010-01-13T19:07:00.000","Title":"Lazy SAX XML parser with stop\/resume","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Any function in that Graphviz which can do that?\nIf not, any other free software that can do that?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":2271,"Q_Id":2066259,"Users Score":0,"Answer":"Compute the complement yourself, then plot it.","Q_Score":2,"Tags":"python,graph,plot,graphviz,complement","A_Id":2066328,"CreationDate":"2010-01-14T17:49:00.000","Title":"How to draw complement of a network graph?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm writing a client\/server application in Python and I'm finding it necessary to get a new connection to the server for each request from the client. My server is just inheriting from TCPServer and I'm inheriting from BaseRequestHandler to do my processing. I'm not calling self.request.close() anywhere in the handler, but somehow the server seems to be hanging up on my client. What's up?","AnswerCount":4,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":9010,"Q_Id":2066810,"Users Score":0,"Answer":"You sure the client is not hanging up on the server? 
This is a bit too vague to really tell what is up, but generally a server that is accepting data from a client will quit the connection if the read returns no data.","Q_Score":10,"Tags":"python,sockets","A_Id":2066907,"CreationDate":"2010-01-14T19:09:00.000","Title":"Does the TCPServer + BaseRequestHandler in Python's SocketServer close the socket after each call to handle()?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm writing a client\/server application in Python and I'm finding it necessary to get a new connection to the server for each request from the client. My server is just inheriting from TCPServer and I'm inheriting from BaseRequestHandler to do my processing. I'm not calling self.request.close() anywhere in the handler, but somehow the server seems to be hanging up on my client. What's up?","AnswerCount":4,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":9010,"Q_Id":2066810,"Users Score":9,"Answer":"Okay, I read the code (on my Mac, SocketServer.py is at \/System\/Library\/Frameworks\/Python.framework\/Versions\/2.5\/lib\/python2.5\/).\nIndeed, TCPServer is closing the connection. In BaseServer.handle_request, process_request is called, which calls close_request.
In the TCPServer class, close_request calls self.request.close(), and self.request is just the socket used to handle the request.\nSo the answer to my question is \"Yes\".","Q_Score":10,"Tags":"python,sockets","A_Id":2072002,"CreationDate":"2010-01-14T19:09:00.000","Title":"Does the TCPServer + BaseRequestHandler in Python's SocketServer close the socket after each call to handle()?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'd like to grab daily sunrise\/sunset times from a web site. Is it possible to scrape web content with Python? What are the modules used? Is there any tutorial available?","AnswerCount":10,"Available Count":1,"Score":1.0,"is_accepted":false,"ViewCount":208635,"Q_Id":2081586,"Users Score":63,"Answer":"I'd really recommend Scrapy.\nQuote from a deleted answer:\n\n\nScrapy crawling is faster than mechanize because it uses asynchronous operations (on top of Twisted).\nScrapy has better and faster support for parsing (x)html on top of libxml2.\nScrapy is a mature framework with full unicode, handles redirections, gzipped responses, odd encodings, integrated http cache, etc.\nOnce you are into Scrapy, you can write a spider in less than 5 minutes that downloads images, creates thumbnails and exports the extracted data directly to csv or json.","Q_Score":188,"Tags":"python,web-scraping,screen-scraping","A_Id":8603040,"CreationDate":"2010-01-17T16:06:00.000","Title":"Web scraping with Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am new to network programming in Python. I wanted to know the maximum packet size we can transmit or receive on a Python socket?
And how can I find it out?","AnswerCount":3,"Available Count":1,"Score":0.0665680765,"is_accepted":false,"ViewCount":7528,"Q_Id":2091097,"Users Score":1,"Answer":"I don't think there are any Python-specific limits. UDP packets have a theoretical limit of circa 65kb and TCP no upper limit, but you'll have flow control problems if you use packets much more than a few kilobytes.","Q_Score":6,"Tags":"python,sockets,network-programming","A_Id":2091157,"CreationDate":"2010-01-19T04:35:00.000","Title":"What is the maximum packet size a python socket can handle?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I just downloaded Beautiful Soup and I've decided I'll make a small library (is that what they call them in Python?) that will return results of a movie given an IMDB movie search.\nMy question is, how exactly does this import thing work?\nFor example, I downloaded BeautifulSoup and all it is, is a .py file. Does that file have to be in the same folder as my python application (my project that will use the library)?","AnswerCount":6,"Available Count":1,"Score":-0.0333209931,"is_accepted":false,"ViewCount":217,"Q_Id":2095505,"Users Score":-1,"Answer":"Might not be relevant, but have you considered using imdbpy? Last time I used it it worked pretty well...","Q_Score":2,"Tags":"python,import","A_Id":2097024,"CreationDate":"2010-01-19T17:28:00.000","Title":"A few questions regarding Pythons 'import' feature","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":1},{"Question":"In my python application using mod_wsgi and cherrypy on top of Apache my response code gets changed to a 500 from a 403. I am explicitly setting this to 403.
\ni.e.\ncherrypy.response.status = 403\nI do not understand where and why the response code that the client receives is 500. Does anyone have any experience with this problem?","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":1737,"Q_Id":2106377,"Users Score":1,"Answer":"The HTTP 500 error is used for internal server errors. Something in the server or your application is likely throwing an exception, so no matter what you set the response code to be before this, CherryPy will send a 500 back.\nYou can look into whatever tools CherryPy includes for debugging or logging (I'm not familiar with them). You can also set breakpoints into your code and continue stepping into the CherryPy internals until it hits the error case.","Q_Score":3,"Tags":"python,apache,mod-wsgi,cherrypy","A_Id":2106456,"CreationDate":"2010-01-21T01:46:00.000","Title":"CherryPy changes my response code","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have been working with python for a while now. Recently I got into Sockets with Twisted which was good for learning Telnet, SSH, and Message Passing. I wanted to take an idea and implement it in a web fashion. A week of searching and all I can really do is create a resource that handles GET and POST all to itself. And this I am told is bad practice.\nSo the questions I have after one week:\n* Are other options like Tornado and Standard Python Sockets a better (or more popular) approach?\n* Should one really use separate resources in Twisted GET and POST operations?\n* What is a good resource to start in this area of Python Development?
\nMy background with languages are C, Java, HTML\/DHTML\/XHTML\/XML and my main systems (even home) are Linux.","AnswerCount":3,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":474,"Q_Id":2114847,"Users Score":1,"Answer":"I'd recommend against building your own web server and handling raw socket calls to build web applications; it makes much more sense to just write your web services as wsgi applications and use an existing web server, whether it's something like tornado or apache with mod_wsgi.","Q_Score":2,"Tags":"python,webserver,twisted,tornado","A_Id":2114986,"CreationDate":"2010-01-22T04:01:00.000","Title":"Python approach to Web Services and\/or handeling GET and POST","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have extensive experience with PHP cURL but for the last few months I've been coding primarily in Java, utilizing the HttpClient library.\nMy new project requires me to use Python, once again putting me at the crossroads of seemingly comparable libraries: pycurl and urllib2.\nPutting aside my previous experience with PHP cURL, what is the recommended library in Python? Is there a reason to use one but not the other? Which is the more popular option?","AnswerCount":4,"Available Count":2,"Score":0.049958375,"is_accepted":false,"ViewCount":4948,"Q_Id":2121945,"Users Score":1,"Answer":"Use urllib2. It's got very good documentation in python, while pycurl is mostly C documentation. 
If you hit a wall, switch to mechanize or pycurl.","Q_Score":2,"Tags":"python,urllib2,pycurl","A_Id":2122198,"CreationDate":"2010-01-23T03:20:00.000","Title":"Python: urllib2 or Pycurl?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have extensive experience with PHP cURL but for the last few months I've been coding primarily in Java, utilizing the HttpClient library.\nMy new project requires me to use Python, once again putting me at the crossroads of seemingly comparable libraries: pycurl and urllib2.\nPutting aside my previous experience with PHP cURL, what is the recommended library in Python? Is there a reason to use one but not the other? Which is the more popular option?","AnswerCount":4,"Available Count":2,"Score":0.1488850336,"is_accepted":false,"ViewCount":4948,"Q_Id":2121945,"Users Score":3,"Answer":"urllib2 is part of the standard library, pycurl isn't (so it requires a separate step of download\/install\/package etc). 
That alone, quite apart from any difference in intrinsic quality, is guaranteed to make urllib2 more popular (and can be a pretty good pragmatical reason to pick it -- convenience!-).","Q_Score":2,"Tags":"python,urllib2,pycurl","A_Id":2121967,"CreationDate":"2010-01-23T03:20:00.000","Title":"Python: urllib2 or Pycurl?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a big list of Twitter users stored in a database, almost 1000.\nI would like to use the Streaming API in order to stream tweets from these users, but I cannot find an appropriate way to do this.\nHelp would be very much appreciated.","AnswerCount":2,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":2823,"Q_Id":2123651,"Users Score":2,"Answer":"You can track 400 filter words and 5000 userids via streaming api. \nFilter words can be something apple, orange, ipad etc...\nAnd in order to track any user's timeline you need to get the user's twitter user id.","Q_Score":6,"Tags":"php,python,twitter","A_Id":8286513,"CreationDate":"2010-01-23T15:34:00.000","Title":"Streaming multiple tweets - from multiple users? - Twitter API","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am looking for a way to connect a frontend server (running Django) with a backend server.\nI want to avoid inventing my own protocol on top of a socket, so my plan was to use SimpleHTTPServer + JSON or XML.\nHowever, we also require some security (authentication + encryption) for the connection, which isn't quite as simple to implement.\nAny ideas for alternatives? What mechanisms would you use? 
I definitely want to avoid CORBA (we have used it before, and it's way too complex for what we need).","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":324,"Q_Id":2125149,"Users Score":1,"Answer":"Use a client side certificate for the connection. This is a good monetization technique to get more income for your client side app.","Q_Score":1,"Tags":"python,json,networking,ipc","A_Id":2125162,"CreationDate":"2010-01-23T23:18:00.000","Title":"Network IPC With Authentication (in Python)","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"What library should I use for network programming? Is sockets the best, or is there a higher level interface, that is standard?\nI need something that will be pretty cross platform (ie. Linux, Windows, Mac OS X), and it only needs to be able to connect to other Python programs using the same library.","AnswerCount":8,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":1164,"Q_Id":2128266,"Users Score":0,"Answer":"Socket is low level api, it is mapped directly to operating system interface.\nTwisted, Tornado ... are high level framework (of course they are built on socket because socket is low level).\nWhen it come to TCP\/IP programming, you should have some basic knowledge to make a decision about what you shoud use:\n\nWill you use well-known protocol like HTTP, FTP or create your own protocol? \nBlocking or non-blocking? Twisted, Tornado are non-blocking framework (basically like nodejs). 
\nOf course, socket can do everything because every other framework is based on it ;)","Q_Score":2,"Tags":"python,sockets,network-programming","A_Id":39160652,"CreationDate":"2010-01-24T18:51:00.000","Title":"Network programming in Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"What library should I use for network programming? Is sockets the best, or is there a higher level interface, that is standard?\nI need something that will be pretty cross platform (ie. Linux, Windows, Mac OS X), and it only needs to be able to connect to other Python programs using the same library.","AnswerCount":8,"Available Count":2,"Score":0.024994793,"is_accepted":false,"ViewCount":1164,"Q_Id":2128266,"Users Score":1,"Answer":"The socket module in the standard lib is in my opinion a good choice if you don't need high performance.\nIt is a very famous API that is known by almost every developer of almost every language. It's quite simple and there is a lot of information available on the internet. Moreover, it will be easier for other people to understand your code.\nI guess that an event-driven framework like Twisted has better performance but in basic cases standard sockets are enough.\nOf course, if you use a higher-level protocol (http, ftp...), you should use the corresponding implementation in the python standard library.","Q_Score":2,"Tags":"python,sockets,network-programming","A_Id":2128966,"CreationDate":"2010-01-24T18:51:00.000","Title":"Network programming in Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a python program with many threads.
I was thinking of creating a socket, bind it to localhost, and have the threads read\/write to this central location. However I do not want this socket open to the rest of the network, just connections from 127.0.0.1 should be accepted. How would I do this (in Python)? And is this a suitable design? Or is there something a little more elegant?","AnswerCount":6,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":15248,"Q_Id":2135595,"Users Score":0,"Answer":"On TCP\/IP networks 127.0.0.0\/8 is a non-routeable network, so you should not be able to send an IP datagram destined to 127.0.0.1 across a routed infrastructure. The router will just discard the datagram. However, it is possible to construct and send datagrams with a destination address of 127.0.0.1, so a host on the same network (IP sense of network) as your host could possibly get the datagram to your host's TCP\/IP stack. This is where your local firewall comes into play. Your local (host) firewall should have a rule that discards IP datagrams destined for 127.0.0.0\/8 coming into any interface other than lo0 (or the equivalent loopback interface). If your host either 1) has such firewall rules in place or 2) exists on its own network (or shared with only completely trusted hosts) and behind a well configured router, you can safely just bind to 127.0.0.1 and be fairly certain any datagrams you receive on the socket came from the local machine. The prior answers address how to open and bind to 127.0.0.1.","Q_Score":10,"Tags":"python,client-server","A_Id":2135937,"CreationDate":"2010-01-25T21:00:00.000","Title":"Creating a socket restricted to localhost connections only","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a python program with many threads.
I was thinking of creating a socket, bind it to localhost, and have the threads read\/write to this central location. However I do not want this socket open to the rest of the network, just connections from 127.0.0.1 should be accepted. How would I do this (in Python)? And is this a suitable design? Or is there something a little more elegant?","AnswerCount":6,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":15248,"Q_Id":2135595,"Users Score":0,"Answer":"If you do sock.bind(('127.0.0.1', port)) it will only listen on localhost, and not on other interfaces, so that's all you need.","Q_Score":10,"Tags":"python,client-server","A_Id":2135628,"CreationDate":"2010-01-25T21:00:00.000","Title":"Creating a socket restricted to localhost connections only","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've well developed Python Server having workflows, views, object - ORM\/OSV, etc...\nServer\/Client communication based on socket protocol, can be done by any of service \n1. XMLRPC Service\n2. Socket Service \nnow I want to develop a Fully Ajax based GUI web Client..\nI've web\/socket services to communicate with server.\nwhat I need is to select the technology, I've several options like,\n\nExtJS - CherryPy\nGWT\nExt-GWT\nCherryPy\nDjango + JQuery\nDjango + Extjs\n???\n???...","AnswerCount":5,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":2943,"Q_Id":2138868,"Users Score":0,"Answer":"How about Pylons + SQLAlchemy + ExtJS?
We use it and it works great!","Q_Score":5,"Tags":"python,django,gwt,extjs,webclient","A_Id":2140012,"CreationDate":"2010-01-26T10:51:00.000","Title":"Which technology is preferable to build a web based GUI Client?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I've well developed Python Server having workflows, views, object - ORM\/OSV, etc...\nServer\/Client communication based on socket protocol, can be done by any of service \n1. XMLRPC Service\n2. Socket Service \nnow I want to develop a Fully Ajax based GUI web Client..\nI've web\/socket services to communicate with server.\nwhat I need is to select the technology, I've several options like,\n\nExtJS - CherryPy\nGWT\nExt-GWT\nCherryPy\nDjango + JQuery\nDjango + Extjs\n???\n???...","AnswerCount":5,"Available Count":2,"Score":0.0399786803,"is_accepted":false,"ViewCount":2943,"Q_Id":2138868,"Users Score":1,"Answer":"I'm not sure I understood exactly on the server side, but I'm a big fan of Flex as a way to develop proper software for the browser, rather than the mess of trying to make HTML do things it was never made for. Partly an idealistic reasoning, but I also am still not impressed by the 'feel' of JS-based GUIs.\nFlex has good server-communication options... web-services, sockets, remote objects, etc.","Q_Score":5,"Tags":"python,django,gwt,extjs,webclient","A_Id":2138949,"CreationDate":"2010-01-26T10:51:00.000","Title":"Which technology is preferable to build a web based GUI Client?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I need to scrape a site with python.
I obtain the source html code with the urllib module, but I also need to scrape some html code that is generated by a javascript function (which is included in the html source). What this function does \"in\" the site is that when you press a button it outputs some html code. How can I \"press\" this button with python code? Can scrapy help me? I captured the POST request with firebug but when I try to pass it on the url I get a 403 error. Any suggestions?","AnswerCount":5,"Available Count":1,"Score":0.1586485043,"is_accepted":false,"ViewCount":17188,"Q_Id":2148493,"Users Score":4,"Answer":"I have had to do this before (in .NET) and you are basically going to have to host a browser, get it to click the button, and then interrogate the DOM (document object model) of the browser to get at the generated HTML.\nThis is definitely one of the downsides to web apps moving towards an Ajax\/Javascript approach to generating HTML client-side.","Q_Score":18,"Tags":"javascript,python,browser,screen-scraping","A_Id":2148595,"CreationDate":"2010-01-27T16:20:00.000","Title":"scrape html generated by javascript with python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"In Python's mechanize.Browser module, when you submit a form the browser instance goes to that page. For this one request, I don't want that; I want it just to stay on the page it's currently on and give me the response in another object (for looping purposes). Anyone know a quick way to do this?\nEDIT:\nHmm, so I have this kind of working with ClientForm.HTMLForm.click(), which returns a urllib2 request, but I need the cookies from mechanize's cookiejar to be used on my urllib2.urlopen request.
Is there a method in mechanize that will let me send a request just like urllib2 with the exception that cookies will be imported?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1070,"Q_Id":2152098,"Users Score":7,"Answer":"The answer to my immediate question in the headline is yes, with mechanize.Browser.open_novisit(). It works just like open(), but it doesn't change the state of the Browser instance -- that is, it will retrieve the page, and your Browser object will stay where it was.","Q_Score":3,"Tags":"python,screen-scraping,mechanize","A_Id":2167177,"CreationDate":"2010-01-28T03:24:00.000","Title":"Can I get my instance of mechanize.Browser to stay on the same page after calling b.form.submit()?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"How can I retrieve contacts from hotmail with python?\nIs there any example?","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1864,"Q_Id":2165517,"Users Score":0,"Answer":"use octazen, but you have to pay for it","Q_Score":2,"Tags":"python,hotmail","A_Id":2458380,"CreationDate":"2010-01-29T22:08:00.000","Title":"How do I retrieve Hotmail contacts with python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Does Python have screen scraping libraries that offer JavaScript support?\nI've been using pycurl for simple HTML requests, and Java's HtmlUnit for more complicated requests requiring JavaScript support.\nIdeally I would like to be able to do everything from Python, but I haven't come across any libraries that would allow me to do it. 
Do they exist?","AnswerCount":7,"Available Count":1,"Score":-0.057080742,"is_accepted":false,"ViewCount":10580,"Q_Id":2190502,"Users Score":-2,"Answer":"I have not found anything for this. I use a combination of beautifulsoup and custom routines...","Q_Score":14,"Tags":"python,screen-scraping,htmlunit,pycurl","A_Id":2190517,"CreationDate":"2010-02-03T08:11:00.000","Title":"Screen scraping with Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'd like to make sure that my message was delivered to a queue. \nTo do so I'm adding the mandatory param to the basic_publish.\nWhat else should I do to receive the basic.return message if my message wasn't successfully delivered?\nI can't use channel.wait() to listen for the basic.return because when my message is successfully delivered the wait() function hangs forever. (There is no timeout)\nOn the other hand. When I don't call channel.wait() the channel.returned_messages will remain empty, even if the message isn't delivered.\nI use py-amqplib version 0.6.\nAny solution is welcome.","AnswerCount":3,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1192,"Q_Id":2191695,"Users Score":1,"Answer":"It is currently impossible as the basic.return is sent asynchronously when a message is dropped in broker. When message was sent successfully no data is reported from server.\nSo pyAMQP can't listen for such messages.\nI've read few threads about this problem. Possible solution were:\n\nuse txAMQP, twisted version of amqp that handles basic.return\nuse pyAMQP with wait with timeout. 
(I'm not sure if that is currently possible)\nping server frequently with synchronous commands so that pyAMQP will able to pick basic.return messages when they arrive.\n\nBecause the level of support for pyAMQP and rabbitMQ in general is quite low, we decided not to use amqp broker at all.","Q_Score":2,"Tags":"python,rabbitmq,amqp,py-amqplib","A_Id":2240828,"CreationDate":"2010-02-03T12:08:00.000","Title":"How to use listen on basic.return in python client of AMQP","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"What is the point of '\/segment\/segment\/'.split('\/') returning ['', 'segment', 'segment', '']?\nNotice the empty elements. If you're splitting on a delimiter that happens to be at position one and at the very end of a string, what extra value does it give you to have the empty string returned from each end?","AnswerCount":8,"Available Count":2,"Score":1.0,"is_accepted":false,"ViewCount":93416,"Q_Id":2197451,"Users Score":9,"Answer":"Having x.split(y) always return a list of 1 + x.count(y) items is a precious regularity -- as @gnibbler's already pointed out it makes split and join exact inverses of each other (as they obviously should be), it also precisely maps the semantics of all kinds of delimiter-joined records (such as csv file lines [[net of quoting issues]], lines from \/etc\/group in Unix, and so on), it allows (as @Roman's answer mentioned) easy checks for (e.g.) absolute vs relative paths (in file paths and URLs), and so forth.\nAnother way to look at it is that you shouldn't wantonly toss information out of the window for no gain. What would be gained in making x.split(y) equivalent to x.strip(y).split(y)? 
Nothing, of course -- it's easy to use the second form when that's what you mean, but if the first form was arbitrarily deemed to mean the second one, you'd have lot of work to do when you do want the first one (which is far from rare, as the previous paragraph points out).\nBut really, thinking in terms of mathematical regularity is the simplest and most general way you can teach yourself to design passable APIs. To take a different example, it's very important that for any valid x and y x == x[:y] + x[y:] -- which immediately indicates why one extreme of a slicing should be excluded. The simpler the invariant assertion you can formulate, the likelier it is that the resulting semantics are what you need in real life uses -- part of the mystical fact that maths is very useful in dealing with the universe. \nTry formulating the invariant for a split dialect in which leading and trailing delimiters are special-cased... counter-example: string methods such as isspace are not maximally simple -- x.isspace() is equivalent to x and all(c in string.whitespace for c in x) -- that silly leading x and is why you so often find yourself coding not x or x.isspace(), to get back to the simplicity which should have been designed into the is... 
string methods (whereby an empty string \"is\" anything you want -- contrary to man-in-the-street horse-sense, maybe [[empty sets, like zero &c, have always confused most people;-)]], but fully conforming to obvious well-refined mathematical common-sense!-).","Q_Score":146,"Tags":"python,string,split","A_Id":2197605,"CreationDate":"2010-02-04T05:14:00.000","Title":"Why are empty strings returned in split() results?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"What is the point of '\/segment\/segment\/'.split('\/') returning ['', 'segment', 'segment', '']?\nNotice the empty elements. If you're splitting on a delimiter that happens to be at position one and at the very end of a string, what extra value does it give you to have the empty string returned from each end?","AnswerCount":8,"Available Count":2,"Score":1.0,"is_accepted":false,"ViewCount":93416,"Q_Id":2197451,"Users Score":6,"Answer":"Well, it lets you know there was a delimiter there. So, seeing 4 results lets you know you had 3 delimiters. This gives you the power to do whatever you want with this information, rather than having Python drop the empty elements, and then making you manually check for starting or ending delimiters if you need to know it.\nSimple example: Say you want to check for absolute vs. relative filenames. 
This way you can do it all with the split, without also having to check what the first character of your filename is.","Q_Score":146,"Tags":"python,string,split","A_Id":2197494,"CreationDate":"2010-02-04T05:14:00.000","Title":"Why are empty strings returned in split() results?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to verify that a video service is provided from an URL in python. I am asking does anyone know of any good libraries to use or a way to do this. I have not found much info for this on the web.\nThanks","AnswerCount":4,"Available Count":2,"Score":0.1488850336,"is_accepted":false,"ViewCount":4091,"Q_Id":2207110,"Users Score":3,"Answer":"If you do not want to use a library, as suggested by synack, you can open a socket connection to the given URL and send an RTSP DESCRIEBE request. That is actually quite simple since RTSP is text-based HTTP-like. You would need to parse the response for a meaningful result, e.g look for the presence of media streams.","Q_Score":4,"Tags":"python,url,video-streaming,rtsp","A_Id":2247454,"CreationDate":"2010-02-05T12:23:00.000","Title":"Verify RTSP service via URL","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to verify that a video service is provided from an URL in python. I am asking does anyone know of any good libraries to use or a way to do this. I have not found much info for this on the web.\nThanks","AnswerCount":4,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":4091,"Q_Id":2207110,"Users Score":0,"Answer":"I don't believe Live555 provides a python library. 
However, they do provide source code that can be compiled to build openRTSP. This is a simple command-line utility that will perform the entire RTSP handshake to connect to the server and begin streaming to the client. It also can provide statistic measurements (such as jitter, number of packets lost, etc.) that can be used to measure the quality of the streaming connection.","Q_Score":4,"Tags":"python,url,video-streaming,rtsp","A_Id":2421464,"CreationDate":"2010-02-05T12:23:00.000","Title":"Verify RTSP service via URL","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am building a Python web application and we will need a user identity verification solution... something to verify the users identity during account registration.\nI was wondering if anyone had any experience in integrating such a solution. What vendors\/products out there have worked well with you? Any tips?\nI don't have any experience in this matter so feel free to let me know if any additional information is required.\nThanks in advance!","AnswerCount":4,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":747,"Q_Id":2223790,"Users Score":0,"Answer":"You should have a look at WS-Trust.\nA implementation of that is Windows Identity Foundation. But I'm sure You'll find more.","Q_Score":5,"Tags":"python,identity,verification","A_Id":2223896,"CreationDate":"2010-02-08T18:12:00.000","Title":"Online identity verification solution","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am building a Python web application and we will need a user identity verification solution... 
something to verify the users identity during account registration.\nI was wondering if anyone had any experience in integrating such a solution. What vendors\/products out there have worked well with you? Any tips?\nI don't have any experience in this matter so feel free to let me know if any additional information is required.\nThanks in advance!","AnswerCount":4,"Available Count":2,"Score":0.049958375,"is_accepted":false,"ViewCount":747,"Q_Id":2223790,"Users Score":1,"Answer":"There are many different ways to implement a verification system, the concept is quite simple but actually building can be a hassle, especially if you are doing it from scratch. \nThe best way to approach this is to find a framework that handles the aspect of verification. Turbogears and Pylons are both capable of this rather than doing it yourself or using third party apps.\nPersonally I have worked on commercial projects using both frameworks and was able to sort out verification quite easily.\nUser verification utilizes specific concepts and low level technology such as: the internet's stateless characteristic, session handling, database design, etc...\nSo the point I am making is that it would be better if you rather got a good, stable framework that could do the dirty work for you.\nBy the way what framework are you thinking of using? That would help me give a more detailed answer.\nHope this helps?","Q_Score":5,"Tags":"python,identity,verification","A_Id":2224273,"CreationDate":"2010-02-08T18:12:00.000","Title":"Online identity verification solution","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm considering to use XML-RPC.NET to communicate with a Linux XML-RPC server written in Python. 
I have tried a sample application (MathApp) from Cook Computing's XML-RPC.NET but it took 30 seconds for the app to add two numbers within the same LAN as the server. \nI have also tried to run a simple client written in Python on Windows 7 to call the same server and it responded in 5 seconds. The machine has 4 GB of RAM with comparable processing power so this is not an issue. \nThen I tried to call the server from a Windows XP system with Java and PHP. Both responses were pretty fast, almost instantly. The server was responding quickly on localhost too, so I don't think the latency arises from the server. \nMy googling turned up some problems regarding Windows' use of IPv6 but our call to the server does include an IPv4 address (not hostname) in the same subnet. Anyway, I turned off IPv6 but nothing changed.\nAre there any more ways to check for possible causes of latency?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1326,"Q_Id":2235643,"Users Score":0,"Answer":"Run a packet capture on the client machine, check the network traffic timings versus the time the function is called.\nThis may help you determine where the latency is in your slow process, e.g. application start-up time, name resolution, etc.\nHow are you addressing the server from the client? By IP? By FQDN? 
Is the addressing method the same in each of the applications your using?\nIf you call the same remote procedure multiple times from the same slow application, does the time taken increase linearly?","Q_Score":1,"Tags":"c#,.net,python,windows-7,xml-rpc","A_Id":2235703,"CreationDate":"2010-02-10T09:23:00.000","Title":"Slow XML-RPC in Windows 7 with XML-RPC.NET","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'd like to tell urllib2.urlopen (or a custom opener) to use 127.0.0.1 (or ::1) to resolve addresses. I wouldn't change my \/etc\/resolv.conf, however.\nOne possible solution is to use a tool like dnspython to query addresses and httplib to build a custom url opener. I'd prefer telling urlopen to use a custom nameserver though. Any suggestions?","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":13644,"Q_Id":2236498,"Users Score":0,"Answer":"You will need to implement your own dns lookup client (or using dnspython as you said). The name lookup procedure in glibc is pretty complex to ensure compatibility with other non-dns name systems. There's for example no way to specify a particular DNS server in the glibc library at all.","Q_Score":17,"Tags":"python,dns,urllib2,dnspython,urlopen","A_Id":2237322,"CreationDate":"2010-02-10T11:46:00.000","Title":"Tell urllib2 to use custom DNS","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"We have a script which pulls some XML from a remote server. If this script is running on any server other than production, it works.\nUpload it to production however, and it fails. 
It is using cURL for the request but it doesn't matter how we do it - fopen, file_get_contents, sockets - it just times out. This also happens if I use a Python script to request the URL.\nThe same script, supplied with another URL to query, works - every time. Obviously it doesn't return the XML we're looking for but it DOES return SOMETHING - it CAN connect to the remote server. \nIf this URL is requested via the command line using, for example, curl or wget, again, data is returned. It's not the data we're looking for (in fact, it returns an empty root element) but something DOES come back.\nInterestingly, if we strip out query string elements from the URL (the full URL has 7 query string elements and runs to about 450 characters in total) the script will return the same empty XML response. Certain combinations of the query string will once again cause the script to time out.\nThis, as you can imagine, has me utterly baffled - it seems to work in every circumstance EXCEPT the one it needs to work in. We can get a response on our dev servers, we can get a response on the command line, we can get a response if we drop certain QS elements - we just can't get the response we want with the correct URL on the LIVE server.\nDoes anyone have any suggestions at all? I'm at my wits' end!","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":606,"Q_Id":2236864,"Users Score":1,"Answer":"Run Wireshark and see how far the request goes. 
Could be a firewall issue, a DNS resolution problem, among other things.\nAlso, try bumping your curl timeout to something much higher, like 300s, and see how it goes.","Q_Score":0,"Tags":"php,python,xml,apache,curl","A_Id":2236930,"CreationDate":"2010-02-10T12:54:00.000","Title":"PHP \/ cURL problem opening remote file","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am looking for a good library in python that will help me parse RSS feeds. Has anyone used feedparser? Any feedback?","AnswerCount":8,"Available Count":2,"Score":-0.024994793,"is_accepted":false,"ViewCount":23570,"Q_Id":2244836,"Users Score":-1,"Answer":"I Strongly recommend feedparser.","Q_Score":41,"Tags":"python,rss,feedparser","A_Id":2245462,"CreationDate":"2010-02-11T13:57:00.000","Title":"RSS feed parser library in Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am looking for a good library in python that will help me parse RSS feeds. Has anyone used feedparser? 
Any feedback?","AnswerCount":8,"Available Count":2,"Score":0.049958375,"is_accepted":false,"ViewCount":23570,"Q_Id":2244836,"Users Score":2,"Answer":"If you want an alternative, try xml.dom.minidom.\nLike \"Django is Python\", \"RSS is XML\".","Q_Score":41,"Tags":"python,rss,feedparser","A_Id":2245280,"CreationDate":"2010-02-11T13:57:00.000","Title":"RSS feed parser library in Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I browsed the python socket docs and google for two days but I did not find any answer. Yeah I am a network programming newbie :)\nI would like to implement some LAN chatting system with specific functions for our needs. I am at the very beginning. I was able to implement a client-server model where the client connects to the server (socket.SOCK_STREAM) and they are able to exchange messages. I want to step forward. I want the client to discover via a broadcast how many other clients are available on the LAN.\nI failed. Is it possible that a socket.SOCK_STREAM type socket could not be used for this task?\nIf so, what are my options? Using udp packets? How do I have to listen for broadcast messages\/packets?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":3935,"Q_Id":2247228,"Users Score":4,"Answer":"The broadcast is defined by the destination address.\nFor example if your own ip is 192.168.1.2, the broadcast address would be 192.168.1.255 (in most cases).\nIt is not related directly to python and will probably not be in its documentation. You are searching for general network knowledge, at a level much higher than sockets programming.\n*EDIT\nYes you are right, you cannot use SOCK_STREAM. SOCK_STREAM defines TCP communication. 
You should use UDP for broadcasting with socket.SOCK_DGRAM","Q_Score":6,"Tags":"python","A_Id":2247237,"CreationDate":"2010-02-11T19:45:00.000","Title":"stream socket send\/receive broadcast messages?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have edited about 100 html files locally, and now I want to push them to my live server, which I can only access via ftp.\nThe HTML files are in many different directories, but the directory structure on the remote machine is the same as on the local machine.\nHow can I recursively descend from my top-level directory ftp-ing all of the .html files to the corresponding directory\/filename on the remote machine?\nThanks!","AnswerCount":4,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":409,"Q_Id":2263782,"Users Score":0,"Answer":"umm, maybe by pressing F5 in mc for linux or total commander for windows?","Q_Score":0,"Tags":"python,networking,scripting,ftp","A_Id":2263804,"CreationDate":"2010-02-15T02:37:00.000","Title":"How to upload all .html files to a remote server using FTP and preserving file structure?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have edited about 100 html files locally, and now I want to push them to my live server, which I can only access via ftp.\nThe HTML files are in many different directories, but the directory structure on the remote machine is the same as on the local machine.\nHow can I recursively descend from my top-level directory ftp-ing all of the .html files to the corresponding directory\/filename on the remote machine?\nThanks!","AnswerCount":4,"Available 
Count":2,"Score":0.0,"is_accepted":false,"ViewCount":409,"Q_Id":2263782,"Users Score":0,"Answer":"if you have a mac, you can try cyberduck. It's good for syncing remote directory structures via ftp.","Q_Score":0,"Tags":"python,networking,scripting,ftp","A_Id":2299546,"CreationDate":"2010-02-15T02:37:00.000","Title":"How to upload all .html files to a remote server using FTP and preserving file structure?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Given this:\nIt%27s%20me%21\nUnencode it and turn it into regular text?","AnswerCount":3,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":5032,"Q_Id":2277302,"Users Score":11,"Answer":"Take a look at urllib.unquote and urllib.unquote_plus. That will address your problem. Technically though url \"encoding\" is the process of passing arguments into a url with the & and ? characters (e.g. 
www.foo.com?x=11&y=12).","Q_Score":11,"Tags":"python,url,encoding","A_Id":2277313,"CreationDate":"2010-02-16T23:50:00.000","Title":"How do I url unencode in Python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to use ms office communicator client apis, and i wan to use those in python is it possible to do ?","AnswerCount":3,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1868,"Q_Id":2286790,"Users Score":2,"Answer":">>> import win32com.client\n>>> msg = win32com.client.Dispatch('Communicator.UIAutomation')\n>>> msg.InstantMessage('user@domain.com')","Q_Score":2,"Tags":"python,api,office-communicator","A_Id":3754769,"CreationDate":"2010-02-18T06:48:00.000","Title":"How can we use ms office communicator client exposed APIs in python, is that possible?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm writing service in python that async ping domains. So it must be able to ping many ip's at the same time. I wrote it on epoll ioloop, but have problem with packets loss.\nWhen there are many simultaneous ICMP requests much part of replies on them didn't reach my servise. What may cause this situation and how i can make my service ping many hosts at the same time without packet loss?\nThanks)","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":788,"Q_Id":2299751,"Users Score":0,"Answer":"A problem you might be having is due to the fact that ICMP is layer 3 of the OSI model and does not use a port for communication. In short, ICMP isn't really designed for this. 
The desired behavior is still possible but perhaps the IP Stack you are using is getting in the way and if this is on a Windows system then 100% sure this is your problem. I would fire up Wireshark to make sure you are actually getting incoming packets, if this is the case then I would use libpcap to track in ICMP replies. If the problem is with sending then you'll have to use raw sockets and build your own ICMP packets.","Q_Score":1,"Tags":"python,ping,icmp","A_Id":2299927,"CreationDate":"2010-02-19T21:32:00.000","Title":"Problem with asyn icmp ping","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am writing a general Client-Server socket program where the client sends commands to the Server, which executes it and sends the result to the Client. \nHowever if there is an error while executing a command, I want to be able to inform the Client of an error. I know I could send the String \"ERROR\" or maybe something like -1 etc, but these could also be part of the command output. Is there any better way of sending an error or an exception over a socket.\nMy Server is in Java and Client is in Python","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":134,"Q_Id":2302761,"Users Score":0,"Answer":"Typically when doing client-server communication you need to establish some kind of protocol. One very simple protocol is to send the String \"COMMAND\" before you send any commands and the String \"ERROR\" before you send any errors. This doubles the number of Strings you have to send but gives more flexibility.\nThere are also a number of more sophisticated protocols already developed. Rather than sending Strings you could construct a Request object which you then serialize and send to the client. 
The client can then reconstruct the Request object and perform the request whether it's performing an error or running a command.","Q_Score":1,"Tags":"java,python,sockets","A_Id":2302787,"CreationDate":"2010-02-20T16:05:00.000","Title":"Pass error on socket","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am writing a general Client-Server socket program where the client sends commands to the Server, which executes it and sends the result to the Client. \nHowever if there is an error while executing a command, I want to be able to inform the Client of an error. I know I could send the String \"ERROR\" or maybe something like -1 etc, but these could also be part of the command output. Is there any better way of sending an error or an exception over a socket.\nMy Server is in Java and Client is in Python","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":134,"Q_Id":2302761,"Users Score":0,"Answer":"You're already (necessarily) establishing some format or protocol whereby strings are being sent back and forth -- either you're somehow terminating each string, or sending its length first, or the like. (TCP is intrinsically just a stream so without such a protocol there would be no way the recipient could possibly know when the command or output is finished!-)\nSo, whatever approach you're using to delimiting strings, just make it so the results sent back from server to client are two strings each and every time: one being the error description (empty if no error), the other being the commands's results (empty if no results). 
That's going to be trivial both to send and receive\/parse, and have minimal overhead (sending an empty string should be as simple as sending just a terminator or a length of 0).","Q_Score":1,"Tags":"java,python,sockets","A_Id":2302805,"CreationDate":"2010-02-20T16:05:00.000","Title":"Pass error on socket","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am having trouble exchanging my Oauth request token for an Access Token. My Python application successfully asks for a Request Token and then redirects to the Google login page asking to grant access to my website. When I grant access I retrieve a 200 status code but exchanging this authorized request token for an access token gives me a 'The token is invalid' message. \nThe Google Oauth documentation says: \"Google redirects with token and verifier regardless of whether the token has been authorized.\" so it seems that authorizing the request token fails but then I am not sure how I should get an authorized request token. Any suggestions?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1332,"Q_Id":2306984,"Users Score":1,"Answer":"When you're exchanging for the access token, the oauth_verifier parameter is required. 
If you don't provide that parameter, then google will tell you that the token is invalid.","Q_Score":2,"Tags":"python,oauth,google-api","A_Id":3110939,"CreationDate":"2010-02-21T18:50:00.000","Title":"Exchange Oauth Request Token for Access Token fails Google API","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Using Python, how does one parse\/access files with Linux-specific features, like \"~\/.mozilla\/firefox\/*.default\"? I've tried this, but it doesn't work.\nThanks","AnswerCount":4,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":7357,"Q_Id":2313053,"Users Score":2,"Answer":"It's important to remember:\n\nuse of the tilde ~ expands the home directory as per Poke's answer\nuse of the forward slash \/ is the separator for linux \/ *nix directories\nby default, *nix systems such as linux for example has a wild card globbing in the shell, for instance echo *.* will return back all files that match the asterisk dot asterisk (as per Will McCutcheon's answer!)","Q_Score":3,"Tags":"python,linux,path","A_Id":2313168,"CreationDate":"2010-02-22T18:17:00.000","Title":"Python: How to Access Linux Paths","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am coding a python (2.6) interface to a web service. 
I need to communicate via HTTP so that:\n\nCookies are handled automatically,\nThe requests are asynchronous,\nThe order in which the requests are sent is respected (the order in which the responses to these requests are received does not matter).\n\nI have tried what could be easily derived from the built-in libraries, facing different problems:\n\nUsing httplib and urllib2, the requests are synchronous unless I use threads, in which case the order is not guaranteed to be respected,\nUsing asyncore, there was no library to automatically deal with cookies sent by the web service.\n\nAfter some googling, it seems that there are many examples of Python scripts or libraries that match 2 out of the 3 criteria, but not all 3 of them. I am thinking of reading through the cookielib sources and adapting what I need of it to asyncore (or only to my application in an ad hoc manner), but it seems strange that nothing like this exists yet, as I guess I am not the only one interested. If anyone knows of pointers about this problem, it would be greatly appreciated.\nThank you.\nEdit to clarify:\nWhat I am doing is a local proxy that interfaces my IRC client with a webchat. It creates a socket that listens for IRC connections, then upon receiving one, it logs into the webchat via HTTP. I don't have access to the behaviour of the webchat, and it uses cookies for session IDs. When the client sends several IRC requests to my Python proxy, I have to forward them to the webchat's server via HTTP, with cookies. 
I also want to do this asynchronously (I don't want to wait for the http response before I send the next request), and currently what happens is that the order in which the http requests are sent is not the order in which the IRC commands were received.\nI hope this clarifies the question, and I will of course detail more if it doesn't.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1531,"Q_Id":2315151,"Users Score":2,"Answer":"Using httplib and urllib2, the\n requests are synchronous unless I use\n thread, in which case the order is not\n guaranteed to be respected\n\nHow would you know that the order has been respected unless you get your response back from the first connection before you send the response to the second connection? After all, you don't care what order the responses come in, so it's very possible that the responses come back in the order you expect but that your requests were processed in the wrong order!\nThe only way you can guarantee the ordering is by waiting for confirmation that the first request has successfully arrived (eg. you start receiving the response for it) before beginning the second request. 
You can do this by not launching the second thread until you reach the response handling part of the first thread.","Q_Score":1,"Tags":"python,http,asynchronous,cookies","A_Id":2317612,"CreationDate":"2010-02-22T23:43:00.000","Title":"Python: Asynchronous http requests sent in order with automatic handling of cookies?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"i have a string \n'''\n{\"session_key\":\"3.KbRiifBOxY_0ouPag6__.3600.1267063200-16423986\",\"uid\":164\n23386,\"expires\":12673200,\"secret\":\"sm7WM_rRtjzXeOT_jDoQ__\",\"sig\":\"6a6aeb66\n64a1679bbeed4282154b35\"}\n'''\nhow to get the value .\nthanks","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":115,"Q_Id":2330857,"Users Score":0,"Answer":"For a simple-to-code method, I suggest using ast.literal_eval() or eval() to create a dictionary from your string, and then accessing the fields as usual. The difference between the two functions above is that ast.literal_eval can only evaluate literals of basic types, and is therefore more secure if someone can give you a string that could contain \"bad\" code.","Q_Score":1,"Tags":"python","A_Id":2330884,"CreationDate":"2010-02-25T01:00:00.000","Title":"which is the best way to get the value of 'session_key','uid','expires'","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I found the quoted text in Programming Python 3rd edition by Mark Lutz from Chapter 16: Server-Side Scripting (page 987):\n\nForms also include a method option to specify the encoding style to be used to send data over a socket to the target server machine. 
Here, we use the post style, which contacts the server and then ships it a stream of user input data in a separate transmission. An alternative get style ships input information to the server in a single transmission step, by adding user inputs to the end of the URL used to invoke the script, usually after a ? character (more on this soon).\n\nI read this with some puzzlement. As far as I know post data is sent in the same transmission as a part of the same http header. I have never heard of this extra step for post data transmission.\nI quickly looked over the relevant HTTP rfc's and didn't notice any distinction in version 1.0 or 1.1. I also used wireshark for some analysis and didn't notice multiple transmissions for post.\nAm I missing something fundamental or is this an error in the text?","AnswerCount":3,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":372,"Q_Id":2339742,"Users Score":0,"Answer":"Yes, there is only one transmission of data between server and client.\nThe context of the passage was referring to communication between web server and cgi application. Server communication between the web server and the cgi application using POST happens in two separate transfers. 
The request for the Python script is sent by the server in a single transfer and then the POST data is sent separately over stdin (two transfers).\nWhereas with GET the input data is passed as env vars or command line args in one transfer.","Q_Score":2,"Tags":"python,http,post,cgi,get","A_Id":2790879,"CreationDate":"2010-02-26T05:51:00.000","Title":"HTTP POST Requests require multiple transmissions?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I found the quoted text in Programming Python 3rd edition by Mark Lutz from Chapter 16: Server-Side Scripting (page 987):\n\nForms also include a method option to specify the encoding style to be used to send data over a socket to the target server machine. Here, we use the post style, which contacts the server and then ships it a stream of user input data in a separate transmission. An alternative get style ships input information to the server in a single transmission step, by adding user inputs to the end of the URL used to invoke the script, usually after a ? character (more on this soon).\n\nI read this with some puzzlement. As far as I know post data is sent in the same transmission as a part of the same http header. I have never heard of this extra step for post data transmission.\nI quickly looked over the relevant HTTP rfc's and didn't notice any distinction in version 1.0 or 1.1. I also used wireshark for some analysis and didn't notice multiple transmissions for post.\nAm I missing something fundamental or is this an error in the text?","AnswerCount":3,"Available Count":2,"Score":0.0665680765,"is_accepted":false,"ViewCount":372,"Q_Id":2339742,"Users Score":1,"Answer":"A simple POST request happens in a single step, but when you are uploading a file, the form is posted in multiple parts. 
In that case, the content type application\/x-www-form-urlencoded is changed to multipart\/form-data.","Q_Score":2,"Tags":"python,http,post,cgi,get","A_Id":2339754,"CreationDate":"2010-02-26T05:51:00.000","Title":"HTTP POST Requests require multiple transmissions?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am developing a project that requires a single configuration file whose data is used by multiple modules.\nMy question is: what is the common approach to that? should i read the configuration file from each\nof my modules (files) or is there any other way to do it?\nI was thinking to have a module named config.py that reads the configuration files and whenever I need a config I do import config and then do something like config.data['teamsdir'] get the 'teamsdir' property (for example).\nresponse: opted for the conf.py approach then since it it is modular, flexible and simple\nI can just put the configuration data directly in the file, latter if i want to read from a json file a xml file or multiple sources i just change the conf.py and make sure the data is accessed the same way.\naccepted answer: chose \"Alex Martelli\" response because it was the most complete. voted up other answers because they where good and useful too.","AnswerCount":4,"Available Count":3,"Score":0.1973753202,"is_accepted":false,"ViewCount":6382,"Q_Id":2348927,"Users Score":4,"Answer":"The approach you describe is ok. 
If you want to add support for user config files, you can use execfile(os.path.expanduser(\"~\/.yourprogram\/config.py\")).","Q_Score":23,"Tags":"python,configuration-files","A_Id":2348941,"CreationDate":"2010-02-27T20:53:00.000","Title":"python single configuration file","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am developing a project that requires a single configuration file whose data is used by multiple modules.\nMy question is: what is the common approach to that? should i read the configuration file from each\nof my modules (files) or is there any other way to do it?\nI was thinking to have a module named config.py that reads the configuration files and whenever I need a config I do import config and then do something like config.data['teamsdir'] get the 'teamsdir' property (for example).\nresponse: opted for the conf.py approach then since it it is modular, flexible and simple\nI can just put the configuration data directly in the file, latter if i want to read from a json file a xml file or multiple sources i just change the conf.py and make sure the data is accessed the same way.\naccepted answer: chose \"Alex Martelli\" response because it was the most complete. 
voted up other answers because they where good and useful too.","AnswerCount":4,"Available Count":3,"Score":1.2,"is_accepted":true,"ViewCount":6382,"Q_Id":2348927,"Users Score":10,"Answer":"I like the approach of a single config.py module whose body (when first imported) parses one or more configuration-data files and sets its own \"global variables\" appropriately -- though I'd favor config.teamdata over the round-about config.data['teamdata'] approach.\nThis assumes configuration settings are read-only once loaded (except maybe in unit-testing scenarios, where the test-code will be doing its own artificial setting of config variables to properly exercise the code-under-test) -- it basically exploits the nature of a module as the simplest Pythonic form of \"singleton\" (when you don't need subclassing or other features supported only by classes and not by modules, of course).\n\"One or more\" configuration files (e.g. first one somewhere in \/etc for general default settings, then one under \/usr\/local for site-specific overrides thereof, then again possibly one in the user's home directory for user specific settings) is a common and useful pattern.","Q_Score":23,"Tags":"python,configuration-files","A_Id":2349182,"CreationDate":"2010-02-27T20:53:00.000","Title":"python single configuration file","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am developing a project that requires a single configuration file whose data is used by multiple modules.\nMy question is: what is the common approach to that? 
should i read the configuration file from each\nof my modules (files) or is there any other way to do it?\nI was thinking to have a module named config.py that reads the configuration files and whenever I need a config I do import config and then do something like config.data['teamsdir'] get the 'teamsdir' property (for example).\nresponse: opted for the conf.py approach then since it it is modular, flexible and simple\nI can just put the configuration data directly in the file, latter if i want to read from a json file a xml file or multiple sources i just change the conf.py and make sure the data is accessed the same way.\naccepted answer: chose \"Alex Martelli\" response because it was the most complete. voted up other answers because they where good and useful too.","AnswerCount":4,"Available Count":3,"Score":0.1488850336,"is_accepted":false,"ViewCount":6382,"Q_Id":2348927,"Users Score":3,"Answer":"One nice approach is to parse the config file(s) into a Python object when the application starts and pass this object around to all classes and modules requiring access to the configuration. \nThis may save a lot of time parsing the config.","Q_Score":23,"Tags":"python,configuration-files","A_Id":2349159,"CreationDate":"2010-02-27T20:53:00.000","Title":"python single configuration file","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I know a little of dom, and would like to learn about ElementTree. Python 2.6 has a somewhat older implementation of ElementTree, but still usable. However, it looks like it comes with two different classes: xml.etree.ElementTree and xml.etree.cElementTree. Would someone please be so kind to enlighten me with their differences? 
Thank you.","AnswerCount":5,"Available Count":2,"Score":0.1586485043,"is_accepted":false,"ViewCount":14861,"Q_Id":2351694,"Users Score":4,"Answer":"ElementTree is implemented in python while cElementTree is implemented in C. Thus cElementTree will be faster, but also not available where you don't have access to C, such as in Jython or IronPython or on Google App Engine.\nFunctionally, they should be equivalent.","Q_Score":24,"Tags":"python,xml","A_Id":2351710,"CreationDate":"2010-02-28T16:32:00.000","Title":"What are the Difference between cElementtree and ElementTree?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I know a little of dom, and would like to learn about ElementTree. Python 2.6 has a somewhat older implementation of ElementTree, but still usable. However, it looks like it comes with two different classes: xml.etree.ElementTree and xml.etree.cElementTree. Would someone please be so kind to enlighten me with their differences? 
Thank you.","AnswerCount":5,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":14861,"Q_Id":2351694,"Users Score":31,"Answer":"It is the same library (same API, same features) but ElementTree is implemented in Python and cElementTree is implemented in C.\nIf you can, use the C implementation because it is optimized for fast parsing and low memory use, and is 15-20 times faster than the Python implementation.\nUse the Python version if you are in a limited environment (C library loading not allowed).","Q_Score":24,"Tags":"python,xml","A_Id":2351707,"CreationDate":"2010-02-28T16:32:00.000","Title":"What are the Difference between cElementtree and ElementTree?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm currently trying to scrape a website that has fairly poorly-formatted HTML (often missing closing tags, no use of classes or ids so it's incredibly difficult to go straight to the element you want, etc.). I've been using BeautifulSoup with some success so far but every once and a while (though quite rarely), I run into a page where BeautifulSoup creates the HTML tree a bit differently from (for example) Firefox or Webkit. 
While this is understandable as the formatting of the HTML leaves this ambiguous, if I were able to get the same parse tree as Firefox or Webkit produces I would be able to parse things much more easily.\nThe problems are usually something like the site opens a tag twice and when BeautifulSoup sees the second tag, it immediately closes the first while Firefox and Webkit nest the tags.\nIs there a web scraping library for Python (or even any other language (I'm getting desperate)) that can reproduce the parse tree generated by Firefox or WebKit (or at least get closer than BeautifulSoup in cases of ambiguity).","AnswerCount":10,"Available Count":1,"Score":0.0199973338,"is_accepted":false,"ViewCount":4506,"Q_Id":2397295,"Users Score":1,"Answer":"Well, WebKit is open source so you could use its own parser (in the WebCore component), if any language is acceptable","Q_Score":9,"Tags":"python,firefox,webkit,web-scraping","A_Id":2397311,"CreationDate":"2010-03-07T18:07:00.000","Title":"Web scraping with Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am trying to make a simple IRC client in Python (as kind of a project while I learn the language).\nI have a loop that I use to receive and parse what the IRC server sends me, but if I use raw_input to input stuff, it stops the loop dead in its tracks until I input something (obviously).\nHow can I input something without the loop stopping?\n(I don't think I need to post the code, I just want to input something without the while 1: loop stopping.)\nI'm on Windows.","AnswerCount":14,"Available Count":1,"Score":0.0285636566,"is_accepted":false,"ViewCount":93857,"Q_Id":2408560,"Users Score":2,"Answer":"I'd do what Mickey Chan said, but I'd use unicurses instead of normal curses.\nUnicurses is universal (works on all or at least almost all 
operating systems)","Q_Score":73,"Tags":"python,windows,input","A_Id":53794715,"CreationDate":"2010-03-09T11:17:00.000","Title":"Non-blocking console input?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Say there are two empty Queues. Is there a way to get an item from the queue that gets it first?\nSo I have a queue of high anonymous proxies, queues of anonymous and transparent ones. Some threads may need only high anon. proxies, while others may accept both high anon. and just anon. proxies. That's why I can't put them all to a single queue.","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":119,"Q_Id":2411306,"Users Score":0,"Answer":"If I had this problem (and \"polling\", i.e. trying each queue alternately with short timeouts, was unacceptable -- it usually is, being very wasteful of CPU time etc), I would tackle it by designing a \"multiqueue\" object -- one with multiple condition variables, one per \"subqueue\" and an overall one. A put to any subqueue would signal that subqueue's specific condition variable as well as the overall one; a get from a specific subqueue would only wait on its specific condition variable, but there would also be a \"get from any subqueue\" which waits on the overall condition variable instead. (If more combinations than \"get from this specific subqueue\" or \"get from any subqueue\" need to be supported, just as many condition variables as combinations to support would be needed).\nIt would be much simpler to code if get and put were reduced to their bare bones (no timeouts, no no-waits, etc) and all subqueues used a single overall mutex (very small overhead wrt many mutexes, and much easier to code in a deadlock-free way;-). 
The subqueues could be exposed as \"simplified queue-like duckies\" to existing code which assumes it's dealing with a plain old queue (e.g. the multiqueue could support indexing to return proxy objects for the purpose).\nWith these assumptions, it wouldn't be much code, though it would be exceedingly tricky to write and inspect for correctness (alas, testing is of limited use when very subtle threading code is in play) -- I can't take the time for that right now, though I'd be glad to give it a try tonight (8 hours from now or so) if the assumptions are roughly correct and no other preferable answer has surfaced.","Q_Score":0,"Tags":"python,multithreading","A_Id":2411865,"CreationDate":"2010-03-09T18:01:00.000","Title":"How to get an item from a set of Queues?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Say there are two empty Queues. Is there a way to get an item from the queue that gets it first?\nSo I have a queue of high anonymous proxies, queues of anonymous and transparent ones. Some threads may need only high anon. proxies, while others may accept both high anon. and just anon. proxies. That's why I can't put them all to a single queue.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":119,"Q_Id":2411306,"Users Score":0,"Answer":"You could check both queues in turn, each time using a short timeout. That way you would most likely read from the first queue that receives data. 
However, this solution is prone to race conditions if you will be getting many items on a regular basis.\nIf that is the case, do you have a good reason for not just writing data to one queue?","Q_Score":0,"Tags":"python,multithreading","A_Id":2411355,"CreationDate":"2010-03-09T18:01:00.000","Title":"How to get an item from a set of Queues?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"A search for \"python\" and \"xml\" returns a variety of libraries for combining the two.\nThis list probably faulty:\n\nxml.dom\nxml.etree\nxml.sax\nxml.parsers.expat\nPyXML\nbeautifulsoup?\nHTMLParser\nhtmllib\nsgmllib\n\nBe nice if someone can offer a quick summary of when to use which, and why.","AnswerCount":4,"Available Count":3,"Score":0.1973753202,"is_accepted":false,"ViewCount":803,"Q_Id":2430423,"Users Score":4,"Answer":"I find xml.etree essentially sufficient for everything, except for BeautifulSoup if I ever need to parse broken XML (not a common problem, differently from broken HTML, which BeautifulSoup also helps with and is everywhere): it has reasonable support for reading entire XML docs in memory, navigating them, creating them, incrementally-parsing large docs. lxml supports the same interface, and is generally faster -- useful to push performance when you can afford to install third party Python extensions (e.g. on App Engine you can't -- but xml.etree is still there, so you can run exactly the same code). lxml also has more features, and offers BeautifulSoup too.\nThe other libs you mention mimic APIs designed for very different languages, and in general I see no reason to contort Python into those gyrations. 
If you have very specific needs such as support for XSLT, various kinds of validation, etc., it may be worth looking around for other libraries yet, but I haven't had such needs in a long time so I'm not current wrt the offerings for them.","Q_Score":8,"Tags":"python,xml","A_Id":2430575,"CreationDate":"2010-03-12T04:02:00.000","Title":"Which XML library for what purposes?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"A search for \"python\" and \"xml\" returns a variety of libraries for combining the two.\nThis list probably faulty:\n\nxml.dom\nxml.etree\nxml.sax\nxml.parsers.expat\nPyXML\nbeautifulsoup?\nHTMLParser\nhtmllib\nsgmllib\n\nBe nice if someone can offer a quick summary of when to use which, and why.","AnswerCount":4,"Available Count":3,"Score":1.2,"is_accepted":true,"ViewCount":803,"Q_Id":2430423,"Users Score":6,"Answer":"The DOM\/SAX divide is a basic one. It applies not just to Python since DOM and SAX are cross-language.\nDOM: read the whole document into memory and manipulate it.\nGood for:\n\ncomplex relationships across tags in the markup\nsmall intricate XML documents\nCautions:\n\n\nEasy to use excessive memory\n\n\nSAX: parse the document while you read it. Good for:\n\nLong documents or open ended streams\nplaces where memory is a constraint\nCautions:\n\n\nYou'll need to code a stateful parser, which can be tricky\n\n\nbeautifulsoup:\nGreat for HTML or not-quite-well-formed markup. Easy to use and fast. Good for screen scraping, etc. It can work with markup where the XML-based ones would just throw an error saying the markup is incorrect.\nMost of the rest I haven't used, but I don't think there are hard and fast rules about when to use which. 
Just your standard considerations: who is going to maintain the code, which APIs do you find most easy to use, how well do they work, etc.\nIn general, for basic needs, it's nice to use the standard library modules since they are \"standard\" and thus available and well known. However, if you need to dig deep into something, almost always there are newer nonstandard modules with superior functionality outside of the standard library.","Q_Score":8,"Tags":"python,xml","A_Id":2430541,"CreationDate":"2010-03-12T04:02:00.000","Title":"Which XML library for what purposes?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"A search for \"python\" and \"xml\" returns a variety of libraries for combining the two.\nThis list probably faulty:\n\nxml.dom\nxml.etree\nxml.sax\nxml.parsers.expat\nPyXML\nbeautifulsoup?\nHTMLParser\nhtmllib\nsgmllib\n\nBe nice if someone can offer a quick summary of when to use which, and why.","AnswerCount":4,"Available Count":3,"Score":0.049958375,"is_accepted":false,"ViewCount":803,"Q_Id":2430423,"Users Score":1,"Answer":"For many problems you can get by with the xml. It has the major advantage of being part of the standard library. This means that it is pre-installed on almost every system and that the interface will be static. It is not the best, or the fastest, but it is there.\nFor everything else there is lxml. Specically, lxml is best for parsing broken HTML, xHTML, or suspect feeds. It uses libxml2 and libxslt to handle XPath, XSLT, and EXSLT. The tutorial is clear and the interface is simplistically straight-forward. 
The rest of the libraries mentioned exist because lxml was not available in its current form.\nThis is my opinion.","Q_Score":8,"Tags":"python,xml","A_Id":2430695,"CreationDate":"2010-03-12T04:02:00.000","Title":"Which XML library for what purposes?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"We have server on Python and client + web service on Ruby. That works only if file from URL is less than 800 k. It seems like \"socket.puts data\" in a client works, but \"output = socket.gets\" - not. I think problem is in a Python part. For big files tests run \"Connection reset by peer\". Is it buffer size variable by default somewhere in a Python?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":183,"Q_Id":2435294,"Users Score":0,"Answer":"Could you add a little more information and code to your example?\nAre you thinking about sock.recv_into() which takes a buffer and buffer size as arguments? 
Alternately, are you hitting a timeout issue by failing to have a keepalive on the Ruby side?\nGuessing in advance of knowledge.","Q_Score":0,"Tags":"python,ruby,client,size","A_Id":2435731,"CreationDate":"2010-03-12T19:34:00.000","Title":"File size in Python server","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm have an action \/json that returns json from the server.\nUnfortunately in IE, the browser likes to cache this json.\nHow can I make it so that this action doesn't cache?","AnswerCount":4,"Available Count":1,"Score":0.049958375,"is_accepted":false,"ViewCount":1603,"Q_Id":2439987,"Users Score":1,"Answer":"The jQuery library has pretty nice ajax functions, and settings to control them. One of them is is called \"cache\" and it will automatically append a random number to the query that essentially forces the browser to not cache the page. This can be set along with the parameter \"dataType\", which can be set to \"json\" to make the ajax request get json data. I've been using this in my code and haven't had a problem with IE.\nHope this helps","Q_Score":2,"Tags":"python,internet-explorer,caching,pylons","A_Id":3907139,"CreationDate":"2010-03-13T20:54:00.000","Title":"Disable browser caching in pylons","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using urllib2 to interact with a website that sends back multiple Set-Cookie headers. 
However the response header dictionary only contains one - seems the duplicate keys are overriding each other.\nIs there a way to access duplicate headers with urllib2?","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":3778,"Q_Id":2454494,"Users Score":0,"Answer":"set-cookie is different though. From RFC 6265:\n\nOrigin servers SHOULD NOT fold multiple Set-Cookie header fields into\n a single header field. The usual mechanism for folding HTTP headers\n fields (i.e., as defined in [RFC2616]) might change the semantics of\n the Set-Cookie header field because the %x2C (\",\") character is used\n by Set-Cookie in a way that conflicts with such folding.\n\nIn theory then, this looks like a bug.","Q_Score":3,"Tags":"python,header,urllib2,setcookie","A_Id":39896162,"CreationDate":"2010-03-16T13:06:00.000","Title":"urllib2 multiple Set-Cookie headers in response","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"How would I download files (video) with Python using wget and save them locally? There will be a bunch of files, so how do I know that one file is downloaded so as to automatically start downloding another one?\nThanks.","AnswerCount":6,"Available Count":2,"Score":-1.0,"is_accepted":false,"ViewCount":92663,"Q_Id":2467609,"Users Score":-6,"Answer":"No reason to use python. 
Avoid writing a shell script in Python and go with something like bash or an equivalent.","Q_Score":32,"Tags":"python,linux","A_Id":2467717,"CreationDate":"2010-03-18T04:55:00.000","Title":"Using wget via Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"How would I download files (video) with Python using wget and save them locally? There will be a bunch of files, so how do I know that one file is downloaded so as to automatically start downloading another one?\nThanks.","AnswerCount":6,"Available Count":2,"Score":1.0,"is_accepted":false,"ViewCount":92663,"Q_Id":2467609,"Users Score":9,"Answer":"No reason to use os.system. Avoid writing a shell script in Python and go with something like urllib.urlretrieve or an equivalent.\nEdit... to answer the second part of your question, you can set up a thread pool using the standard library Queue class. Since you're doing a lot of downloading, the GIL shouldn't be a problem. Generate a list of the URLs you wish to download and feed them to your work queue. It will handle pushing requests to worker threads. 
\nI'm waiting for a database update to complete, so I put this together real quick.\n\n\n#!\/usr\/bin\/python\n\nimport sys\nimport threading\nimport urllib\nfrom Queue import Queue\nimport logging\n\nclass Downloader(threading.Thread):\n def __init__(self, queue):\n super(Downloader, self).__init__()\n self.queue = queue\n\n def run(self):\n while True:\n download_url, save_as = self.queue.get()\n # sentinel\n if not download_url:\n return\n try:\n urllib.urlretrieve(download_url, filename=save_as)\n except Exception, e:\n logging.warn(\"error downloading %s: %s\" % (download_url, e))\n\nif __name__ == '__main__':\n queue = Queue()\n threads = []\n for i in xrange(5):\n threads.append(Downloader(queue))\n threads[-1].start()\n\n for line in sys.stdin:\n url = line.strip()\n filename = url.split('\/')[-1]\n print \"Download %s as %s\" % (url, filename)\n queue.put((url, filename))\n\n # if we get here, stdin has gotten the ^D\n print \"Finishing current downloads\"\n for i in xrange(5):\n queue.put((None, None))","Q_Score":32,"Tags":"python,linux","A_Id":2467646,"CreationDate":"2010-03-18T04:55:00.000","Title":"Using wget via Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a python server that listens on a couple sockets. At startup, I try to connect to these sockets before listening, so I can be sure that nothing else is already using that port. This adds about three seconds to my server's startup (which is about .54 seconds without the test) and I'd like to trim it down. Since I'm only testing localhost, I think a timeout of about 50 milliseconds is more than ample for that. 
Unfortunately, the socket.setdefaulttimeout(50) method doesn't seem to work for some reason.\nHow can I trim this down?","AnswerCount":7,"Available Count":3,"Score":0.057080742,"is_accepted":false,"ViewCount":46722,"Q_Id":2470971,"Users Score":2,"Answer":"Are you on Linux? If so, perhaps your application could run netstat -lant (or netstat -lanu if you're using UDP) and see what ports are in use. This should be faster...","Q_Score":29,"Tags":"python,sockets","A_Id":2471078,"CreationDate":"2010-03-18T15:21:00.000","Title":"Fast way to test if a port is in use using Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a python server that listens on a couple sockets. At startup, I try to connect to these sockets before listening, so I can be sure that nothing else is already using that port. This adds about three seconds to my server's startup (which is about .54 seconds without the test) and I'd like to trim it down. Since I'm only testing localhost, I think a timeout of about 50 milliseconds is more than ample for that. Unfortunately, the socket.setdefaulttimeout(50) method doesn't seem to work for some reason.\nHow can I trim this down?","AnswerCount":7,"Available Count":3,"Score":0.057080742,"is_accepted":false,"ViewCount":46722,"Q_Id":2470971,"Users Score":2,"Answer":"Simon B's answer is the way to go - don't check anything, just try to bind and handle the error case if it's already in use.\nOtherwise you're in a race condition where some other app can grab the port in between your check that it's free and your subsequent attempt to bind to it. 
That means you still have to handle the possibility that your call to bind will fail, so checking in advance achieved nothing.","Q_Score":29,"Tags":"python,sockets","A_Id":2471762,"CreationDate":"2010-03-18T15:21:00.000","Title":"Fast way to test if a port is in use using Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a python server that listens on a couple sockets. At startup, I try to connect to these sockets before listening, so I can be sure that nothing else is already using that port. This adds about three seconds to my server's startup (which is about .54 seconds without the test) and I'd like to trim it down. Since I'm only testing localhost, I think a timeout of about 50 milliseconds is more than ample for that. Unfortunately, the socket.setdefaulttimeout(50) method doesn't seem to work for some reason.\nHow can I trim this down?","AnswerCount":7,"Available Count":3,"Score":1.2,"is_accepted":true,"ViewCount":46722,"Q_Id":2470971,"Users Score":14,"Answer":"How about just trying to bind to the port you want, and handle the error case if the port is occupied?\n(If the issue is that you might start the same service twice then don't look at open ports.)\nThis is also the reasonable way to avoid causing a race condition, as @eemz said in another answer.","Q_Score":29,"Tags":"python,sockets","A_Id":2471039,"CreationDate":"2010-03-18T15:21:00.000","Title":"Fast way to test if a port is in use using Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to write some kind of multi protocol bot (jabber\/irc) that would read messages from fifo file (one liners mostly) and then 
send them to irc channel and jabber contacts. So far, I managed to create two factories to connect to jabber and irc, and they seem to be working. \nHowever, I have a problem with reading the fifo file - I have no idea how to read it in a loop (open file, read line, close file, jump to open file and so on) outside of the reactor loop to get the data I need to send, and then get that data to the reactor loop for sending in both protocols. I've been looking for information on how to do it in the best way, but I'm totally lost in the dark. Any suggestion\/help would be highly appreciated.\nThanks in advance!","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":1766,"Q_Id":2476234,"Users Score":1,"Answer":"The fifo is the problem. Read from a socket instead. This will fit into the Twisted event-driven model much better. Trying to do things outside the control of the reactor is usually the wrong approach.\n---- update based on feedback that the fifo is an external constraint, not avoidable ----\nOK, the central issue is that you cannot write code in the main (and only) thread of your Twisted app that makes blocking read calls to a fifo. That will cause the whole app to stall if there is nothing to read. 
So you're either looking at reading the fifo asynchronously, creating a separate thread to read it, or splitting the app in two.\nThe last option is the simplest - modify the Twisted app so that it listens on a socket and write a separate little \"forwarder\" app that runs in a simple loop, reading the fifo and writing everything it hears to the socket.","Q_Score":4,"Tags":"python,twisted,xmpp,irc,fifo","A_Id":2476445,"CreationDate":"2010-03-19T09:45:00.000","Title":"Python (Twisted) - reading from fifo and sending read data to multiple protocols","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Im trying to write some kind of multi protocol bot (jabber\/irc) that would read messages from fifo file (one liners mostly) and then send them to irc channel and jabber contacts. So far, I managed to create two factories to connect to jabber and irc, and they seem to be working. \nHowever, I've problem with reading the fifo file - I have no idea how to read it in a loop (open file, read line, close file, jump to open file and so on) outside of reactor loop to get the data I need to send, and then get that data to reactor loop for sending in both protocols. I've been looking for information on how to do it in best way, but Im totally lost in the dark. Any suggestion\/help would be highly appreciated.\nThanks in advance!","AnswerCount":2,"Available Count":2,"Score":0.2913126125,"is_accepted":false,"ViewCount":1766,"Q_Id":2476234,"Users Score":3,"Answer":"You can read\/write on a file descriptor without blocking the reactor as you do with sockets, by the way doesn't sockets use file descriptors?\nIn your case create a class that implements twisted.internet.interfaces.IReadDescriptor and add to reactor using twisted.internet.interfaces.IReactorFDSet.addReader. 
For an example of IReadDescriptor implementation look at twisted.internet.tcp.Connection.\nI cannot be more specific because I never did it myself, but I hope this could be a starting point.","Q_Score":4,"Tags":"python,twisted,xmpp,irc,fifo","A_Id":2478970,"CreationDate":"2010-03-19T09:45:00.000","Title":"Python (Twisted) - reading from fifo and sending read data to multiple protocols","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have installed boto like so: python setup.py install; and then when I launch my python script (that imports modules from boto) on shell, an error like this shows up: ImportError: No module named boto.s3.connection\nHow to settle the matter?","AnswerCount":2,"Available Count":2,"Score":0.4621171573,"is_accepted":false,"ViewCount":3095,"Q_Id":2481417,"Users Score":5,"Answer":"I fixed the same problem on Ubuntu using apt-get install python-boto","Q_Score":1,"Tags":"python,boto","A_Id":6861411,"CreationDate":"2010-03-20T00:50:00.000","Title":"Problem importing modules from boto","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have installed boto like so: python setup.py install; and then when I launch my python script (that imports modules from boto) on shell, an error like this shows up: ImportError: No module named boto.s3.connection\nHow to settle the matter?","AnswerCount":2,"Available Count":2,"Score":0.1973753202,"is_accepted":false,"ViewCount":3095,"Q_Id":2481417,"Users Score":2,"Answer":"This can happen if the Python script does not use your default python executable. 
Check the shebang on the first line of the script (on *nix) or the .py file association (on Windows) and run that against setup.py instead.","Q_Score":1,"Tags":"python,boto","A_Id":2481420,"CreationDate":"2010-03-20T00:50:00.000","Title":"Problem importing modules from boto","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I need a recommendation for a pythonic library that can marshal Python objects to XML (let it be a file).\nI need to be able to read that XML later on with Java (JAXB) and unmarshal it.\nI know JAXB has some issues that make it not play nice with .NET XML libraries so a recommendation on something that actually works would be great.","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":475,"Q_Id":2492490,"Users Score":1,"Answer":"As Ignacio says, XML is XML. On the python side, I recommend using lxml, unless you have more specific needs that are better met by another library. If you are restricted to the standard library, look at ElementTree or cElementTree, which are also excellent, and which inspired (and are functionally mostly equivalent to) lxml.etree.\nEdit: On closer look, it seems you are not just looking for XML, but for XML representations of objects. For this, check out lxml.objectify, or Amara. I haven't tried using them for interoperability with Java, but they're worth a try. 
If you're just looking for a way to do data exchange, you might also try custom JSON objects.","Q_Score":2,"Tags":"java,python,xml,marshalling,interop","A_Id":2492599,"CreationDate":"2010-03-22T13:23:00.000","Title":"Python XML + Java XML interoperability","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"HI,\nI have a device that exposes a telnet interface which you can log into using a username and password and then manipulate the working of the device.\nI have to write a C program that hides the telnet aspect from the client and instead provides an interface for the user to control the device.\nWhat would be a good way to proceed. I tried writing a simple socket program but it stops at the login prompt. My guess is that i am not following the TCP protocol.\nHas anyone attempted this, is there an opensource library out there to do this?\nThanks\nAddition:\nEventually i wish to expose it through a web api\/webservice. The platform is linux.","AnswerCount":9,"Available Count":3,"Score":0.0665680765,"is_accepted":false,"ViewCount":18409,"Q_Id":2519598,"Users Score":3,"Answer":"While telnet is almost just a socket tied to a terminal it's not quite. I believe that there can be some control characters that get passed shortly after the connection is made. If your device is sending some unexpected control data then it may be confusing your program.\nIf you haven't already, go download wireshark (or tshark or tcpdump) and monitor your connection. Wireshark (formerly ethereal) is cross platform and pretty easy to use for simple stuff. 
Filter with tcp.port == 23","Q_Score":9,"Tags":"python,c,telnet","A_Id":2525924,"CreationDate":"2010-03-25T21:37:00.000","Title":"Writing a telnet client","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"HI,\nI have a device that exposes a telnet interface which you can log into using a username and password and then manipulate the working of the device.\nI have to write a C program that hides the telnet aspect from the client and instead provides an interface for the user to control the device.\nWhat would be a good way to proceed. I tried writing a simple socket program but it stops at the login prompt. My guess is that i am not following the TCP protocol.\nHas anyone attempted this, is there an opensource library out there to do this?\nThanks\nAddition:\nEventually i wish to expose it through a web api\/webservice. The platform is linux.","AnswerCount":9,"Available Count":3,"Score":0.0886555158,"is_accepted":false,"ViewCount":18409,"Q_Id":2519598,"Users Score":4,"Answer":"telnet's protocol is pretty straightforward... you just create a TCP connection, and send and receive ASCII data. That's pretty much it.\nSo all you really need to do is create a program that connects via TCP, then reads characters from the TCP socket and parses it to update the GUI, and\/or writes characters to the socket in response to the user manipulating controls in the GUI.\nHow you would implement that depends a lot on what software you are using to construct your interface. 
On the TCP side, a simple event loop around select() would be sufficient.","Q_Score":9,"Tags":"python,c,telnet","A_Id":2519786,"CreationDate":"2010-03-25T21:37:00.000","Title":"Writing a telnet client","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"HI,\nI have a device that exposes a telnet interface which you can log into using a username and password and then manipulate the working of the device.\nI have to write a C program that hides the telnet aspect from the client and instead provides an interface for the user to control the device.\nWhat would be a good way to proceed. I tried writing a simple socket program but it stops at the login prompt. My guess is that i am not following the TCP protocol.\nHas anyone attempted this, is there an opensource library out there to do this?\nThanks\nAddition:\nEventually i wish to expose it through a web api\/webservice. The platform is linux.","AnswerCount":9,"Available Count":3,"Score":0.022218565,"is_accepted":false,"ViewCount":18409,"Q_Id":2519598,"Users Score":1,"Answer":"Unless the application is trivial, a better starting point would be to figure out how you're going to create the GUI. This is a bigger question and will have more impact on your project than how exactly you telnet into the device. 
You mention C at first, but then start talking about Python, which makes me believe you are relatively flexible in the matter.\nOnce you are set on a language\/platform, then look for a telnet library -- you should find something reasonable already implemented.","Q_Score":9,"Tags":"python,c,telnet","A_Id":2521858,"CreationDate":"2010-03-25T21:37:00.000","Title":"Writing a telnet client","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"This whole topic is way out of my depth, so forgive my imprecise question, but I have two computers both connected to one LAN.\nWhat I want is to be able to communicate one string between the two, by running a python script on the first (the host) where the string will originate, and a second on the client computer to retrieve the string.\nWhat is the most efficient way for an inexperienced programmer like me to achieve this?","AnswerCount":3,"Available Count":2,"Score":-0.1325487884,"is_accepted":false,"ViewCount":996,"Q_Id":2534527,"Users Score":-2,"Answer":"File share and polling filesystem every minute. No joke. 
Of course, it depends on the requirements of your application and what lag is acceptable, but in practice using file shares is quite common.","Q_Score":3,"Tags":"python","A_Id":2534778,"CreationDate":"2010-03-28T20:55:00.000","Title":"Python inter-computer communication","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"This whole topic is way out of my depth, so forgive my imprecise question, but I have two computers both connected to one LAN.\nWhat I want is to be able to communicate one string between the two, by running a python script on the first (the host) where the string will originate, and a second on the client computer to retrieve the string.\nWhat is the most efficient way for an inexperienced programmer like me to achieve this?","AnswerCount":3,"Available Count":2,"Score":0.2605204458,"is_accepted":false,"ViewCount":996,"Q_Id":2534527,"Users Score":4,"Answer":"First, let's get the nomenclature straight. Usually the part that initiates the communication is the client; the part that is waiting for a connection is the server, which then will receive the data from the client and generate a response. From your question, the \"host\" is the client and the \"client\" seems to be the server.\nThen you have to decide how to transfer the data. You can use straight sockets, in which case you can use SocketServer, or you can rely on an existing protocol, like HTTP or XML-RPC, in which case you will find ready-to-use library packages with plenty of examples (e.g. 
xmlrpclib and SimpleXMLRPCServer)","Q_Score":3,"Tags":"python","A_Id":2534582,"CreationDate":"2010-03-28T20:55:00.000","Title":"Python inter-computer communication","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"How would I check if the remote host is up without having a port number? Is there any other way I could check other than using regular ping?\nThere is a possibility that the remote host might drop ping packets","AnswerCount":7,"Available Count":3,"Score":0.0285636566,"is_accepted":false,"ViewCount":46690,"Q_Id":2535055,"Users Score":1,"Answer":"Many firewalls are configured to drop ping packets without responding. In addition, some network adapters will respond to ICMP ping requests without input from the operating system network stack, which means the operating system might be down, but the host still responds to pings (usually you'll notice if you reboot the server, say, it'll start responding to pings some time before the OS actually comes up and other services start up).\nThe only way to be certain that a host is up is to actually try to connect to it via some well-known port (e.g. web server port 80).\nWhy do you need to know if the host is \"up\", maybe there's a better way to do it.","Q_Score":10,"Tags":"python,network-programming,network-protocols","A_Id":2535076,"CreationDate":"2010-03-28T23:40:00.000","Title":"Check if remote host is up in Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"How would I check if the remote host is up without having a port number? 
Is there any other way I could check other than using regular ping?\nThere is a possibility that the remote host might drop ping packets","AnswerCount":7,"Available Count":3,"Score":0.057080742,"is_accepted":false,"ViewCount":46690,"Q_Id":2535055,"Users Score":2,"Answer":"A protocol-level PING is best, i.e., connecting to the server and interacting with it in a way that doesn't do real work. That's because it is the only real way to be sure that the service is up. An ICMP ECHO (a.k.a. ping) would only tell you that the other end's network interface is up, and even then might be blocked; FWIW, I have seen machines where all user processes were bricked but which could still be pinged. In these days of application servers, even getting a network connection might not be enough; what if the hosted app is down or otherwise non-functional? As I said, talking sweet-nothings to the actual service that you are interested in is the best, surest approach.","Q_Score":10,"Tags":"python,network-programming,network-protocols","A_Id":2535139,"CreationDate":"2010-03-28T23:40:00.000","Title":"Check if remote host is up in Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"How would I check if the remote host is up without having a port number? 
Is there any other way I could check other than using regular ping?\nThere is a possibility that the remote host might drop ping packets","AnswerCount":7,"Available Count":3,"Score":0.0285636566,"is_accepted":false,"ViewCount":46690,"Q_Id":2535055,"Users Score":1,"Answer":"What about trying something that requires an RPC, like a 'tasklist' command, in conjunction with a ping?","Q_Score":10,"Tags":"python,network-programming,network-protocols","A_Id":17115260,"CreationDate":"2010-03-28T23:40:00.000","Title":"Check if remote host is up in Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"How to Write\/Read a file to\/from a network folder\/share using python? The application will run under Linux and the network folder\/share can be a Linux\/Windows system.\nAlso, how to check that the network folder\/share has enough space before writing a file?\nWhat things should I consider?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1813,"Q_Id":2542025,"Users Score":1,"Answer":"Mount the shares using Samba, check the free space on the share using df or os.statvfs and read\/write to it like any other folder.","Q_Score":1,"Tags":"python,network-shares","A_Id":2542026,"CreationDate":"2010-03-29T19:33:00.000","Title":"Read\/Write a file from\/to network folder\/share using Python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm using minidom to parse an xml file and it threw an error indicating that the data is not well formed. 
I figured out that some of the pages have characters like \u00e0\u00b9\u201e\u00e0\u00b8\u00ad\u00e0\u00b9\u20ac\u00e0\u00b8\u0178\u00e0\u00b8\u00a5 &, causing the parser to hiccup. Is there an easy way to clean the file before I start parsing it? Right now I'm using a regular expressing to throw away anything that isn't an alpha numeric character and the <\/> characters, but it isn't quite working.","AnswerCount":5,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":5927,"Q_Id":2545783,"Users Score":0,"Answer":"It looks like you're dealing with data which are saved with some kind of encoding \"as if\" they were ASCII. XML file should normally be UTF8, and SAX (the underlying parser used by minidom) should handle that, so it looks like something's wrong in that part of the processing chain. Instead of focusing on \"cleaning up\" I'd first try to make sure the encoding is correct and correctly recognized. Maybe a broken XML directive? Can you edit your Q to show the first few lines of the file, especially the \\r\\n\\r\\n, the server responds with HTTP\/1.0 200 Connection established\\r\\n\\r\\n and then (after the double line ends) you can communicate just as you would communicate with example.com port 1234 without the proxy (as I understand you already have the client-server communication part done).","Q_Score":3,"Tags":"python,proxy,tcp,sockets,socks","A_Id":2714593,"CreationDate":"2010-04-15T16:07:00.000","Title":"Python, implementing proxy support for a socket based application (not urllib2)","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm using urllib2 to open a url. Now I need the html file as a string. 
How do I do this?","AnswerCount":4,"Available Count":1,"Score":1.0,"is_accepted":false,"ViewCount":17378,"Q_Id":2647723,"Users Score":16,"Answer":"In Python 3, it should be changed to urllib.request.urlopen('http:\/\/www.example.com\/').read().decode('utf-8').","Q_Score":7,"Tags":"python,string,urllib2","A_Id":35367453,"CreationDate":"2010-04-15T17:48:00.000","Title":"urllib2 to string","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"what is the advantage of using a python virtualbox API instead of using XPCOM?","AnswerCount":3,"Available Count":1,"Score":0.3215127375,"is_accepted":false,"ViewCount":14030,"Q_Id":2652146,"Users Score":5,"Answer":"I would generally recommend against either one. If you need to use virtualization programmatically, take a look at libvirt, which gives you cross-platform and cross-hypervisor support, which lets you do kvm\/xen\/vz\/vmware later on.\nThat said, the SOAP api is using two extra abstraction layers (the client and server side of the HTTP transaction), which is pretty clearly then just calling the XPCOM interface.\nIf you need local host only support, use XPCOM. 
The extra indirection of libvirt\/SOAP doesn't help you.\nIf you need to access virtualbox on various hosts across multiple client machines, use SOAP or libvirt\nIf you want cross-platform support, or to run your code on Linux, use libvirt.","Q_Score":9,"Tags":"python,virtualbox,xpcom","A_Id":2655522,"CreationDate":"2010-04-16T10:26:00.000","Title":"What is the advantage of using Python Virtualbox API?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"PdfFileReader reads the content from a pdf file to create an object.\nI am querying the pdf from a cdn via urllib.urlopen(), this provides me a file-like object, which has no seek. PdfFileReader, however, uses seek.\nWhat is the simple way to create a PdfFileReader object from a pdf downloaded via url?\nNow, what can I do to avoid writing to disk and reading it again via file()?\nThanks in advance.","AnswerCount":3,"Available Count":1,"Score":0.0665680765,"is_accepted":false,"ViewCount":275,"Q_Id":2653079,"Users Score":1,"Answer":"I suspect you may be optimising prematurely here.\nMost modern systems will cache files in memory for a significant period of time before they flush them to disk, so if you write the data to a temporary file, read it back in, then close and delete the file you may find that there's no significant disc traffic (unless it really is 100MB).\nYou might want to look at using tempfile.TemporaryFile() which creates a temporary file that is automatically deleted when closed, or else tempfile.SpooledTemporaryFile() which explicitly holds it all in memory until it exceeds a particular size.","Q_Score":2,"Tags":"python,file,urllib,file-type","A_Id":2653447,"CreationDate":"2010-04-16T12:59:00.000","Title":"Inexpensive ways to add seek to a filetype object","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and 
Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have around 5 GB of html data which I want to process to find links to a set of websites and perform some additional filtering. Right now I use a simple regexp for each site and iterate over them, searching for matches. In my case links can be outside of \"a\" tags and be not well formed in many ways (like \"\n\" in the middle of a link), so I try to grab as many \"links\" as I can and check them later in other scripts (so no BeautifulSoup\lxml\etc). The problem is that my script is pretty slow, so I am thinking about any ways to speed it up. I am writing a set of tests to check different approaches, but hope to get some advice :)\nRight now I am thinking about getting all links without filtering first (maybe using a C module or standalone app, which doesn't use regexp but simple search to get the start and end of every link) and then using regexp to match the ones I need.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":156,"Q_Id":2662595,"Users Score":1,"Answer":"Ways out.\n\nParallelise\nProfile your code to see where the bottleneck is. The results are often surprising. 
\nUse a single regexp (concatenate using |) rather than multiple ones.","Q_Score":2,"Tags":"python,html,screen-scraping,hyperlink","A_Id":2663277,"CreationDate":"2010-04-18T14:46:00.000","Title":"Extract anything that looks like links from large amount of data in python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I currently have built a system that checks user IP, browser, and a random-string cookie to determine if he is an admin.\nIn the worst case, someone steals my cookie, uses the same browser I do, and masks his IP to appear as mine. Is there another layer of security I should add onto my script to make it more secure?\nEDIT: To clarify: my website accepts absolutely NO input from users. I'm just designing a back-end admin panel to make it easier to update database entries.","AnswerCount":4,"Available Count":2,"Score":0.049958375,"is_accepted":false,"ViewCount":263,"Q_Id":2670346,"Users Score":1,"Answer":"HTTPS is a must, but you also have to come to terms with the fact that no site can be 100% secure. The only other way for you to get a significant improvement in security is to have very short session timeouts and provide your users with hardware tokens, but even tokens can be stolen.","Q_Score":3,"Tags":"python,security","A_Id":2670489,"CreationDate":"2010-04-19T19:49:00.000","Title":"Web Security: Worst-Case Situation","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I currently have built a system that checks user IP, browser, and a random-string cookie to determine if he is an admin.\nIn the worst case, someone steals my cookie, uses the same browser I do, and masks his IP to appear as mine. 
Is there another layer of security I should add onto my script to make it more secure?\nEDIT: To clarify: my website accepts absolutely NO input from users. I'm just designing a back-end admin panel to make it easier to update database entries.","AnswerCount":4,"Available Count":2,"Score":0.049958375,"is_accepted":false,"ViewCount":263,"Q_Id":2670346,"Users Score":1,"Answer":"The one thing I miss besides everything that is mentioned is fixing \"all other security problems\". \n\nIf you have a SQL injection, your effort on the cookies is a waste of time.\nIf you have an XSRF vuln, your effort on the cookies is a waste of time.\nIf you have XSS, ....\nIf you have HPP, ...\nIf you have ...., ....\n\nYou get the point.\nIf you really want to cover everything, I suggest you get the vulnerability landscape clear and build an attack tree (Bruce Schneier).","Q_Score":3,"Tags":"python,security","A_Id":2670747,"CreationDate":"2010-04-19T19:49:00.000","Title":"Web Security: Worst-Case Situation","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"How to create simple web site with Python?\nI mean really simple, e.g., you see the text \"Hello World\", and there is a \"submit\" button, which on click will show an AJAX box \"submit successful\".\nI want to start developing some stuff with Python, and I don't know where to start.","AnswerCount":6,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":60467,"Q_Id":2681754,"Users Score":3,"Answer":"Why don't you try out the Google AppEngine stuff? They give you a local environment (that runs on your local system) for developing the application. 
They have nice, easy intro material for getting the site up and running - your \"hello, world\" example will be trivial to implement.\nFrom there on, you can either go with some other framework (using what you have learnt, as the vanilla AppEngine stuff is pretty standard for simple python web frameworks) or carry on with the other stuff Google provides (like hosting your app for you...)","Q_Score":22,"Tags":"python,html,web-applications","A_Id":2684119,"CreationDate":"2010-04-21T09:40:00.000","Title":"How to create simple web site with Python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I wish to get a list of connections to a manager. I can get last_accepted from the server's listener, but I want all connections. There HAS to be a method I am missing somewhere to return all connections to a server or manager.\nPlease help!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":712,"Q_Id":2686893,"Users Score":0,"Answer":"Looking at multiprocessing\/connection.py, the listener just doesn't seem to track all connections -- you could, however, subclass it and override accept to append accepted connections to a list.","Q_Score":0,"Tags":"python,multiprocessing","A_Id":2687986,"CreationDate":"2010-04-21T22:07:00.000","Title":"python multiprocessing server connections","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I know Twisted can do this well but what about just plain socket?\nHow'd you tell if you randomly lost your connection in socket? 
Like, if my internet was to go out for a second and come back on.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":1516,"Q_Id":2697989,"Users Score":0,"Answer":"If the internet comes and goes momentarily, you might not actually lose the TCP session. If you do, the socket API will throw some kind of exception, usually socket.timeout.","Q_Score":2,"Tags":"python,sockets","A_Id":2698024,"CreationDate":"2010-04-23T11:06:00.000","Title":"Socket Lose Connection","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I know Twisted can do this well but what about just plain socket?\nHow'd you tell if you randomly lost your connection in socket? Like, if my internet was to go out for a second and come back on.","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":1516,"Q_Id":2697989,"Users Score":1,"Answer":"I'm assuming you're talking about TCP.\nIf your internet connection is out for a second, you might not lose the TCP connection at all, it'll just retransmit and resume operation.\nThere are of course hundreds of other reasons you could lose the connection (e.g. a NAT gateway in between decided to throw out the connection silently. The other end gets hit by a nuke. Your router burns up. The guy at the other end yanks out his network cable, etc. etc.)\nHere's what you should do if you need to detect dead peers\/closed sockets etc.:\n\nRead from the socket or in any other way wait for events of incoming data on it. This allows you to detect when the connection was gracefully closed, or an error occurred on it (reading on it returns 0 or -1) - at least if the other end is still able to send a TCP FIN\/RST or ICMP packet to your host.\nWrite to the socket - e.g. send some heartbeats every N seconds. 
Just reading from the socket won't detect the problem when the other end fails silently. If that PC goes offline, it can obviously not tell you that it did - so you'll have to send it something and see if it responds.\nIf you don't want to write heartbeats every N seconds, you can at least turn on TCP keepalive - and you'll eventually get notified if the peer is dead. You still have to read from the socket, and keepalives are usually sent every 2 hours by default. That's still better than keeping dead sockets around for months though.","Q_Score":2,"Tags":"python,sockets","A_Id":2698055,"CreationDate":"2010-04-23T11:06:00.000","Title":"Socket Lose Connection","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am writing a python script that downloads a file given by a URL. Unfortunately the URL is in the form of a PHP script i.e. www.website.com\/generatefilename.php?file=5233\nIf you visit the link in a browser, you are prompted to download the actual file and extension. I need to send this link to the downloader, but I can't send the downloader the PHP link.\nHow would I get the full file name in a usable variable?","AnswerCount":2,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":171,"Q_Id":2705856,"Users Score":2,"Answer":"What you need to do is examine the Content-Disposition header sent by the PHP script. It will look something like:\nContent-Disposition: attachment; filename=theFilenameYouWant\nAs to how you actually examine that header, it depends on the python code you're currently using to fetch the URL. 
If you post some code, I'll be able to give a more detailed answer.","Q_Score":0,"Tags":"php,python,url,scripting","A_Id":2705877,"CreationDate":"2010-04-24T19:36:00.000","Title":"I want the actual file name that is returned by a PHP script","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"What are the best practices for extending an existing Python module \u2013 in this case, I want to extend the python-twitter package by adding new methods to the base API class.\nI've looked at tweepy, and I like that as well; I just find python-twitter easier to understand and extend with the functionality I want.\nI have the methods written already \u2013 I'm trying to figure out the most Pythonic and least disruptive way to add them into the python-twitter package module, without changing this module\u2019s core.","AnswerCount":6,"Available Count":1,"Score":1.0,"is_accepted":false,"ViewCount":33640,"Q_Id":2705964,"Users Score":6,"Answer":"Don't add them to the module. Subclass the classes you want to extend and use your subclasses in your own module, not changing the original stuff at all.","Q_Score":29,"Tags":"python,module,tweepy,python-module,python-twitter","A_Id":2705976,"CreationDate":"2010-04-24T20:12:00.000","Title":"How do I extend a python module? 
Adding new functionality to the `python-twitter` package","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"How can I get information about a user's PC connected to my socket?","AnswerCount":3,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":486,"Q_Id":2707599,"Users Score":3,"Answer":"A socket is a \"virtual\" channel established between two electronic devices through a network (a bunch of wires). The only information available about a remote host is what has been published on the network.\nThe basic information is that provided in the TCP\/IP headers, namely the remote IP address, the size of the receive buffer, and a bunch of useless flags. For any other information, you will have to query other services.\nA reverse DNS lookup will get you a name associated with the IP address. A traceroute will tell you what the path to the remote computer is (or at least to a machine acting as a gateway\/proxy to the remote host). A geolocation request can give you an approximate location of the remote computer. If the remote host is itself a server accessible on the internet through a registered domain name, a WHOIS request can give you the name of the person in charge of the domain. On a LAN (Local Area Network: a home or enterprise network), an ARP or RARP request will get you a MAC address and much more information (as much as the network administrator entered when they configured the network), possibly the exact location of the computer.\nThere is much more information available, but only if it was published. If you know what you are looking for and where to query that information, you can be very successful. 
If the remote host is quite hidden and uses some simple stealth techniques (an anonymous proxy), you will get nothing relevant.","Q_Score":0,"Tags":"python,sockets","A_Id":2707933,"CreationDate":"2010-04-25T08:17:00.000","Title":"Socket: Get user information","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I need to set timeout on python's socket recv method. How to do it?","AnswerCount":11,"Available Count":1,"Score":0.0906594778,"is_accepted":false,"ViewCount":301595,"Q_Id":2719017,"Users Score":5,"Answer":"You can use socket.settimeout() which accepts an integer argument representing the number of seconds. For example, socket.settimeout(1) will set the timeout to 1 second.","Q_Score":159,"Tags":"python,sockets,timeout","A_Id":53769737,"CreationDate":"2010-04-27T05:51:00.000","Title":"How to set timeout on python's socket recv method?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to use the htmllib module but it's been removed from Python 3.0. 
Does anyone know what's the replacement for this module?","AnswerCount":4,"Available Count":2,"Score":0.049958375,"is_accepted":false,"ViewCount":5120,"Q_Id":2730752,"Users Score":1,"Answer":"I believe lxml has been ported to Python 3.","Q_Score":11,"Tags":"python,python-3.x","A_Id":2734917,"CreationDate":"2010-04-28T15:14:00.000","Title":"Replacement for htmllib module in Python 3.0","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to use the htmllib module but it's been removed from Python 3.0. Does anyone know what's the replacement for this module?","AnswerCount":4,"Available Count":2,"Score":0.049958375,"is_accepted":false,"ViewCount":5120,"Q_Id":2730752,"Users Score":1,"Answer":"I heard Beautiful Soup is getting a port to 3.0.","Q_Score":11,"Tags":"python,python-3.x","A_Id":2732223,"CreationDate":"2010-04-28T15:14:00.000","Title":"Replacement for htmllib module in Python 3.0","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm almost afraid to post this question, there has to be an obvious answer I've overlooked, but here I go:\nContext: I am creating a blog for educational purposes (want to learn python and web.py). I've decided that my blog has posts, so I've created a Post class. I've also decided that posts can be created, read, updated, or deleted (so CRUD). So in my Post class, I've created methods that respond to POST, GET, PUT, and DELETE HTTP methods. So far so good. \nThe current problem I'm having is a conceptual one: I know that sending a PUT HTTP message (with an edited Post) to, e.g., \/post\/52 should update post with id 52 with the body contents of the HTTP message. 
\nWhat I do not know is how to conceptually correctly serve the (HTML) edit page.\nWill doing it like this: \/post\/52\/edit violate the idea of URI, as 'edit' is not a resource, but an action? \nOn the other side though, could it be considered a resource since all that URI will respond to is a GET method, that will only return an HTML page?\nSo my ultimate question is this: How do I serve an HTML page intended for user editing in a RESTful manner?","AnswerCount":4,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":267,"Q_Id":2750341,"Users Score":2,"Answer":"Instead of calling it \/post\/52\/edit, what if you called it \/post\/52\/editor?\nNow it is a resource. Dilemma averted.","Q_Score":4,"Tags":"python,rest,web.py","A_Id":2750368,"CreationDate":"2010-05-01T14:43:00.000","Title":"Is www.example.com\/post\/21\/edit a RESTful URI? I think I know the answer, but have another question","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm almost afraid to post this question, there has to be an obvious answer I've overlooked, but here I go:\nContext: I am creating a blog for educational purposes (want to learn python and web.py). I've decided that my blog has posts, so I've created a Post class. I've also decided that posts can be created, read, updated, or deleted (so CRUD). So in my Post class, I've created methods that respond to POST, GET, PUT, and DELETE HTTP methods. So far so good. \nThe current problem I'm having is a conceptual one: I know that sending a PUT HTTP message (with an edited Post) to, e.g., \/post\/52 should update post with id 52 with the body contents of the HTTP message. 
\nWhat I do not know is how to conceptually correctly serve the (HTML) edit page.\nWill doing it like this: \/post\/52\/edit violate the idea of URI, as 'edit' is not a resource, but an action? \nOn the other side though, could it be considered a resource since all that URI will respond to is a GET method, that will only return an HTML page?\nSo my ultimate question is this: How do I serve an HTML page intended for user editing in a RESTful manner?","AnswerCount":4,"Available Count":2,"Score":0.1973753202,"is_accepted":false,"ViewCount":267,"Q_Id":2750341,"Users Score":4,"Answer":"Another RESTful approach is to use the query string for modifiers: \/post\/52?edit=1\nAlso, don't get too hung up on the purity of the REST model. If your app doesn't fit neatly into the model, break the rules.","Q_Score":4,"Tags":"python,rest,web.py","A_Id":2750379,"CreationDate":"2010-05-01T14:43:00.000","Title":"Is www.example.com\/post\/21\/edit a RESTful URI? I think I know the answer, but have another question","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am trying to migrate a legacy mailing list to a new web forum software and was wondering if mailman has an export option or an API to get all lists, owners, members and membership types.","AnswerCount":3,"Available Count":1,"Score":0.2605204458,"is_accepted":false,"ViewCount":2988,"Q_Id":2756311,"Users Score":4,"Answer":"Probably too late, but the list_members LISTNAME command (executed from a shell) will give you all the members of a list.\nlist_admins LISTNAME will give you the owners.\nWhat do you mean by membership type? list_members does have an option to filter on digest vs non-digest members. 
I don't think there's a way to get the moderation flag without writing a script for use with withlist.","Q_Score":2,"Tags":"python,api,mailman","A_Id":3154975,"CreationDate":"2010-05-03T05:41:00.000","Title":"Does Mailman have an API or an export lists, users and owners option?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I've written a Python application that makes web requests using the urllib2 library after which it scrapes the data. I could deploy this as a web application which means all urllib2 requests go through my web-server. This leads to the danger of the server's IP being banned due to the high number of web requests for many users. The other option is to create a desktop application which I don't want to do. Is there any way I could deploy my application so that I can get my web-requests through the client side? 
One way was to use Jython to create an applet but I've read that Java applets can only make web-requests to the server they are deployed on, and the only way to circumvent this is to create a server-side proxy which leads us back to the problem of the server's IP getting banned.\nThis might sound like an impossible situation and I'll probably end up creating a desktop application but I thought I'd ask if anyone knew of an alternate solution.\nThanks.","AnswerCount":3,"Available Count":1,"Score":0.0665680765,"is_accepted":false,"ViewCount":1085,"Q_Id":2763274,"Users Score":1,"Answer":"You probably can use AJAX requests made from JavaScript that is a part of the client side.\n\nUse server \u2192 client communication to give commands and necessary data to make a request\n\u2026and use AJAX communication from client to 3rd party server then.","Q_Score":1,"Tags":"python,urllib2,urllib","A_Id":2763308,"CreationDate":"2010-05-04T06:33:00.000","Title":"making urllib request in Python from the client side","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'd like to extract the info string from an internet radio streamed over HTTP. By info string I mean the short note about the currently played song, band name etc.\nPreferably I'd like to do it in python. So far I've tried opening a socket but from there I got a bunch of binary data that I could not parse...\nThanks for any hints.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":2040,"Q_Id":2766787,"Users Score":1,"Answer":"Sounds like you might need some stepping stone projects before you're ready for this. There's no reason to use a low-level socket library for HTTP. 
There are great tools, both command line utilities and python standard library modules like urllib2, that can handle the low level TCP and HTTP specifics for you.\nDo you know the URL where your data resides? Have you tried something simple on the command line like using cURL to grab the raw HTML and then some basic tools like grep to hunt down the info you need? I assume here the metadata is actually available as HTML as opposed to being in a binary format read directly by the radio streamer (which presumably is in flash perhaps?).\nHard to give you any specifics because your question doesn't include any technical details about your data source.","Q_Score":0,"Tags":"python,http,streaming,metadata","A_Id":2792800,"CreationDate":"2010-05-04T15:44:00.000","Title":"Parse metadata from http live stream","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm looking into using Lua in a web project. I can't seem to find any way of directly parsing in pure python and running Lua code in Python.\nDoes anyone know how to do this?\nJoe","AnswerCount":3,"Available Count":2,"Score":0.1325487884,"is_accepted":false,"ViewCount":3999,"Q_Id":2767854,"Users Score":2,"Answer":"From your comments, it appears you are interested in a secure way of executing untrusted code.\nRedefining python builtins, as you suggested in a comment, is a horrible way to secure code.\nWhat you want is sandboxing, there are solutions for python, but I wouldn't recommend them. 
You would be much better off using Jython or IronPython, because the JVM and the .NET CLR were designed with sandboxing in mind.\nI personally believe that in most cases, if you need to execute untrusted code, then you are putting too much or not enough trust in your users.","Q_Score":2,"Tags":"python,lua,eval","A_Id":2768130,"CreationDate":"2010-05-04T18:14:00.000","Title":"Lua parser in python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm looking into using Lua in a web project. I can't seem to find any way of directly parsing in pure python and running Lua code in Python.\nDoes anyone know how to do this?\nJoe","AnswerCount":3,"Available Count":2,"Score":0.0665680765,"is_accepted":false,"ViewCount":3999,"Q_Id":2767854,"Users Score":1,"Answer":"@the_drow\nFrom Lua's web site: \n\nLua is a fast language engine with small footprint that you can embed\n easily into your application. Lua has a simple and well documented API\n that allows strong integration with code written in other languages.\n It is easy to extend Lua with libraries written in other languages. It\n is also easy to extend programs written in other languages with Lua.\n Lua has been used to extend programs written not only in C and C++, but also in Java, C#, Smalltalk, Fortran, Ada, Erlang, and even in\n other scripting languages, such as Perl and Ruby.\n\n@Joe Simpson\nCheck out Lunatic Python, it might have what you want. I know it's an old question, but other people might be looking for this answer, as well. 
It's a good question that deserves a good answer.","Q_Score":2,"Tags":"python,lua,eval","A_Id":18090375,"CreationDate":"2010-05-04T18:14:00.000","Title":"Lua parser in python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I need to be able to block the urls that are stored in a text file on the hard disk using Python. If the url the user tries to visit is in the file, it redirects them to another page instead. How is this done?","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":957,"Q_Id":2774006,"Users Score":1,"Answer":"Doing this at the machine level is a weak solution; it would be pretty easy for a technically inclined user to bypass.\nEven with a server side proxy it will be very easy to bypass unless you firewall normal http traffic; at a bare minimum, block ports 80 and 443.\nYou could program a proxy in python as Alex suggested, but this is a pretty common problem and there are plenty of off the shelf solutions.\nThat being said, I think that restricting web access will do nothing but aggravate your users.","Q_Score":1,"Tags":"python,windows,internet-explorer,url","A_Id":2774159,"CreationDate":"2010-05-05T14:19:00.000","Title":"Internet Explorer URL blocking with Python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"To get attributes using minidom in Python, one uses the \"attributes\" property. e.g. node.attributes[\"id\"].value \nSo if I have <a id=\"foo\"><\/a>, that should give me \"foo\". node.attributes[\"id\"] does not return the value of the named attribute, but an xml.dom.minidom.Attr instance. 
\nBut looking at the help for Attr, by doing help('xml.dom.minidom.Attr'), nowhere is this magic \"value\" property mentioned. I like to learn APIs by looking at the type hierarchy, instance methods etc. Where did this \"value\" property come from? Why is it not listed in the Attr class' page? The only data descriptors mentioned are isId, localName and schemaType. It's also not inherited from any superclasses. Since I'm new to Python, would some of the Python gurus enlighten me?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":10689,"Q_Id":2785703,"Users Score":0,"Answer":"Geez, never noticed that before. You're not kidding, node.value isn't mentioned anywhere. It is definitely being set in the code though under def __setitem__ in xml.dom.minidom.\nNot sure what to say other than that it looks like you'll have to use that.","Q_Score":3,"Tags":"python,xml,minidom","A_Id":2785722,"CreationDate":"2010-05-07T01:52:00.000","Title":"python xml.dom.minidom.Attr question","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm writing a (tabbed) application for Facebook that requires a background process to run on a server and, periodically, upload images to an album on this application's page. \nWhat I'm trying to do is create a script that will:\na) authenticate me with the app\nb) upload an image to a specific album\nAll of this entirely from the command line and completely with the new Graph API.\nMy problem right now is trying to locate the documentation that will allow me to get a token without a pop-up window of sorts. 
\nThoughts?","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":1483,"Q_Id":2791683,"Users Score":1,"Answer":"If you only need to authenticate as one user, you can get an access token with the offline_access permission that will last forever and just bake that into the script.","Q_Score":4,"Tags":"python,facebook,oauth","A_Id":7356440,"CreationDate":"2010-05-07T21:04:00.000","Title":"Can you authenticate Facebook Graph entirely from command line with Python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Does httplib.HTTPException have error codes? If so, how do I get at them from the exception instance? Any help is appreciated.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":3303,"Q_Id":2791946,"Users Score":5,"Answer":"The httplib module doesn't use exceptions to convey HTTP responses, just genuine errors (invalid HTTP responses, broken headers, invalid status codes, prematurely broken connections, etc.) Most of the httplib.HTTPException subclasses just have an associated message string (stored in the args attribute), if even that. httplib.HTTPException itself may have an \"errno\" value as the first entry in args (when raised through httplib.FakeSocket) but it's not an HTTP error code.\nThe HTTP response codes are conveyed through the httplib.HTTPConnection object, though; the getresponse method will (usually) return an HTTPResponse instance with a status attribute set to the HTTP response code, and a reason attribute set to the text version of it. This includes error codes like 404 and 500. 
I say \"usually\" because you (or a library you use) can override httplib.HTTPConnection.response_class to return something else.","Q_Score":2,"Tags":"python,http,exception,tcp","A_Id":2792030,"CreationDate":"2010-05-07T22:05:00.000","Title":"python httplib httpexception error codes","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm using threads and xmlrpclib in python at the same time. Periodically, I create a bunch of threads to complete a service on a remote server via xmlrpclib. The problem is that there are times that the remote server doesn't answer. This causes the thread to wait forever for a response which it never gets. Over time, the number of threads in this state increases and will reach the maximum number of allowed threads on the system (I'm using fedora). \nI tried to use socket.setdefaulttimeout(10); but the exception that is created by that will cause the server to become defunct. I used it at the server side but it seems that it doesn't work :\/\nAny idea how I can handle this issue?","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":110,"Q_Id":2806397,"Users Score":1,"Answer":"You are doing what I usually call (originally in Spanish xD) \"happy road programming\". You should implement your programs to handle undesired cases, not only the ones you want to happen.\nThe threads here are only showing an underlying mistake: your server can't handle a timeout, and the implementation is rigid in a way that adding a timeout causes the server to crash due to an unhandled exception.\nImplement it more robustly: it must be able to withstand an exception, servers can't die because of a misbehaving client. 
If you don't fix this kind of problem now, you may have similar issues later on.","Q_Score":0,"Tags":"python,multithreading,xml-rpc","A_Id":4199610,"CreationDate":"2010-05-10T20:54:00.000","Title":"too many threads due to synch communication","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm using threads and xmlrpclib in python at the same time. Periodically, I create a bunch of thread to complete a service on a remote server via xmlrpclib. The problem is that, there are times that the remote server doesn't answer. This causes the thread to wait forever for a response which it never gets. Over time, number of threads in this state increases and will reach the maximum number of allowed threads on the system (I'm using fedora). \nI tried to use socket.setdefaulttimeout(10); but the exception that is created by that will cause the server to defunct. I used it at server side but it seems that it doesn't work :\/\nAny idea how can I handle this issue?","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":110,"Q_Id":2806397,"Users Score":0,"Answer":"It seems like your real problem is that the server hangs on certain requests, and dies if the client closes the socket - the threads are just a side effect of the implementation. 
If I'm understanding what you're saying correctly, then the only way to fix this would be to fix the server to respond to all requests, or to be more robust with network failure, or (preferably) both.","Q_Score":0,"Tags":"python,multithreading,xml-rpc","A_Id":2806488,"CreationDate":"2010-05-10T20:54:00.000","Title":"too many threads due to synch communication","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have been creating an application using UDP for transmitting and receiving information. The problem I am running into is security. Right now I am using the IP\/socketid in determining what data belongs to whom.\nHowever, I have been reading about how people could simply spoof their IP, then just send data as a specific IP. So this seems to be the wrong way to do it (insecure). So how else am I suppose to identify what data belongs to what users? For instance you have 10 users connected, all have specific data. The server would need to match the user data to this data we received.\nThe only way I can see to do this is to use some sort of client\/server key system and encrypt the data. I am curious as to how other applications (or games, since that's what this application is) make sure their data is genuine. Also there is the fact that encryption takes much longer to process than unencrypted. Although I am not sure by how much it will affect performance.\nAny information would be appreciated. Thanks.","AnswerCount":6,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":4369,"Q_Id":2808092,"Users Score":0,"Answer":"If you absolutely need to verify that a particular user is a particular user then you need to use some form of encryption where the user signs their messages. 
This can be done pretty quickly because the user only needs to generate a hash of their message and then sign (encrypt) the hash.\nFor your game application you probably don't need to worry about this. Most ISPs won't allow their users to spoof IP addresses, so you only need to worry about users behind NAT, where you may have multiple users running from the same IP address. In this case, and the general one, you can fairly safely identify unique users based on a tuple containing IP address and UDP port.","Q_Score":8,"Tags":"python,security,encryption,cryptography,udp","A_Id":2808130,"CreationDate":"2010-05-11T04:33:00.000","Title":"UDP security and identifying incoming data","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have been creating an application using UDP for transmitting and receiving information. The problem I am running into is security. Right now I am using the IP\/socketid in determining what data belongs to whom.\nHowever, I have been reading about how people could simply spoof their IP, then just send data as a specific IP. So this seems to be the wrong way to do it (insecure). So how else am I suppose to identify what data belongs to what users? For instance you have 10 users connected, all have specific data. The server would need to match the user data to this data we received.\nThe only way I can see to do this is to use some sort of client\/server key system and encrypt the data. I am curious as to how other applications (or games, since that's what this application is) make sure their data is genuine. Also there is the fact that encryption takes much longer to process than unencrypted. Although I am not sure by how much it will affect performance.\nAny information would be appreciated. 
Thanks.","AnswerCount":6,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":4369,"Q_Id":2808092,"Users Score":0,"Answer":"I would look into the Garage Games networking library. It is written in C++ and uses UDP. It is designed for low latency and is considered one of the best for games. \nIf I remember correctly they would actually calculate the likely position of the player both on the client side and the server side. It would do this for many aspects to ensure integrity of the data. It also would do a crc check on the client software and compare against the server software to make sure they matched. \nI am not sure you can license it separately anymore so you may have to license the game engine (100 bucks). It would at least give you some insight on a proven approach to UDP for games. Another possibility is looking into the PyGame networking code. It may have already addressed the issues you are facing.","Q_Score":8,"Tags":"python,security,encryption,cryptography,udp","A_Id":7210998,"CreationDate":"2010-05-11T04:33:00.000","Title":"UDP security and identifying incoming data","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have been creating an application using UDP for transmitting and receiving information. The problem I am running into is security. Right now I am using the IP\/socketid in determining what data belongs to whom.\nHowever, I have been reading about how people could simply spoof their IP, then just send data as a specific IP. So this seems to be the wrong way to do it (insecure). So how else am I suppose to identify what data belongs to what users? For instance you have 10 users connected, all have specific data. 
The server would need to match the user data to this data we received.\nThe only way I can see to do this is to use some sort of client\/server key system and encrypt the data. I am curious as to how other applications (or games, since that's what this application is) make sure their data is genuine. Also there is the fact that encryption takes much longer to process than unencrypted. Although I am not sure by how much it will affect performance.\nAny information would be appreciated. Thanks.","AnswerCount":6,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":4369,"Q_Id":2808092,"Users Score":0,"Answer":"I'm breaking this down into four levels of security.\n\nExtremely Insecure - Anyone on the network can spoof a valid request\/response with generally available prior knowledge. (ie syslog)\nVery Insecure - Anyone on the network can spoof a valid request\/response only if they have at least read access to the wire. (Passive MITM) (ie http accessible forum with browser cookies)\nSomewhat Insecure - Anyone on the network can spoof a valid request\/response if they can read AND make changes to the wire (Active MITM) (ie https site with self-signed cert)\nSecure - Requests\/Responses cannot be spoofed even with full access to the wire. (ie https accessible ecommerce site)\n\nFor Internet games the very insecure solution might actually be acceptable (it would be my choice). It requires no crypto. Just a field in your app's UDP packet format with some kind of random, practically unguessable session identifier ferried around for the duration of the game.\nSomewhat insecure requires a little bit of crypto but none of the trust\/PKI\/PSK needed to prevent Active-MITM of the secure solution. 
With somewhat insecure, if the data payloads are not sensitive, you could use an integrity-only cipher with (TCP) TLS \/ (UDP) DTLS to reduce processing overhead and latency at the client and server.\nFor games UDP is a huge benefit because if there is packet loss you don't want the IP stack to waste time retransmitting stale state - you want to send new state. With UDP there are a number of clever schemes such as non-acknowledged frames (world details which don't matter so much if they're lost) and statistical methods of duplicating important state data to counter predictable levels of observed packet loss.\nAt the end of the day I would recommend going very insecure or somewhat insecure \/w DTLS integrity only.","Q_Score":8,"Tags":"python,security,encryption,cryptography,udp","A_Id":2815170,"CreationDate":"2010-05-11T04:33:00.000","Title":"UDP security and identifying incoming data","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"How can I distinguish between a broadcasted message and a direct message for my ip?\nI'm doing this in python.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":83,"Q_Id":2830326,"Users Score":0,"Answer":"Basically what you need to do is create a raw socket, receive a datagram, and examine the destination address in the header. If that address is a broadcast address for the network adapter the socket is bound to, then you're golden.\nI don't know how to do this in Python, so I suggest looking for examples of raw sockets and go from there. Bear in mind, you will need root access to use raw sockets, and you had better be real careful if you plan on sending using a raw socket.\nAs you might imagine, this will not be a fun thing to do. 
I suggest trying to find a way to avoid doing this.","Q_Score":0,"Tags":"python","A_Id":2830485,"CreationDate":"2010-05-13T21:09:00.000","Title":"Distinguishing between broadcasted messages and direct messages","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Was looking to write a little web crawler in python. I was starting to investigate writing it as a multithreaded script, one pool of threads downloading and one pool processing results. Due to the GIL would it actually do simultaneous downloading? How does the GIL affect a web crawler? Would each thread pick some data off the socket, then move on to the next thread, let it pick some data off the socket, etc..? \nBasically I'm asking is doing a multi-threaded crawler in python really going to buy me much performance vs single threaded?\nthanks!","AnswerCount":5,"Available Count":2,"Score":1.0,"is_accepted":false,"ViewCount":5361,"Q_Id":2830880,"Users Score":8,"Answer":"The GIL is not held by the Python interpreter when doing network operations. If you are doing work that is network-bound (like a crawler), you can safely ignore the effects of the GIL.\nOn the other hand, you may want to measure your performance if you create lots of threads doing processing (after downloading). Limiting the number of threads there will reduce the effects of the GIL on your performance.","Q_Score":10,"Tags":"python,multithreading,gil","A_Id":2830905,"CreationDate":"2010-05-13T23:02:00.000","Title":"Does a multithreaded crawler in Python really speed things up?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Was looking to write a little web crawler in python. 
I was starting to investigate writing it as a multithreaded script, one pool of threads downloading and one pool processing results. Due to the GIL would it actually do simultaneous downloading? How does the GIL affect a web crawler? Would each thread pick some data off the socket, then move on to the next thread, let it pick some data off the socket, etc..? \nBasically I'm asking is doing a multi-threaded crawler in python really going to buy me much performance vs single threaded?\nthanks!","AnswerCount":5,"Available Count":2,"Score":0.0399786803,"is_accepted":false,"ViewCount":5361,"Q_Id":2830880,"Users Score":1,"Answer":"Another consideration: if you're scraping a single website and the server places limits on the frequency of requests you can send from your IP address, adding multiple threads may make no difference.","Q_Score":10,"Tags":"python,multithreading,gil","A_Id":2830933,"CreationDate":"2010-05-13T23:02:00.000","Title":"Does a multithreaded crawler in Python really speed things up?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Recently I needed to generate a huge HTML page containing a report with a several-thousand-row table. And, obviously, I did not want to build the whole HTML (or the underlying tree) in memory. 
As a result, I built the page with good old string interpolation, but I do not like the solution.\nThus, I wonder whether there are Python templating engines that can yield resulting page content by parts.\nUPD 1: I am not interested in listing all available frameworks and templating engines.\nI am interested in templating solutions that I can use separately from any framework and which can yield content by portions instead of building the whole result in memory.\nI understand the usability enhancements from partial content loading with client scripting, but that is out of the scope of my current question. Say, I want to generate a huge HTML\/XML and stream it into a local file.","AnswerCount":5,"Available Count":2,"Score":0.0399786803,"is_accepted":false,"ViewCount":1055,"Q_Id":2832915,"Users Score":1,"Answer":"You don't need a streaming templating engine - I do this all the time, and long before you run into anything vaguely heavy server-side, the browser will start to choke. Rendering a 10000 row table will peg the CPU for several seconds in pretty much any browser; scrolling it will be bothersomely choppy in chrome, and the browser mem usage will rise regardless of browser.\nWhat you can do (and I've previously implemented, even though in retrospect it turns out not to be necessary) is use client-side xslt. Printing the xslt processing instruction and the opening and closing tag using strings is easy and fairly safe; then you can stream each individual row as a standalone xml element using whatever xml writer technique you prefer.\nHowever - you really don't need this, and likely never will - if ever your html generator gets too slow, the browser will be an order of magnitude more problematic.\nSo, unless you benchmarked this and have determined you really have a problem, don't waste your time. 
If you do have a problem, you can solve it without fundamentally changing the method - in-memory generation can work just fine.","Q_Score":4,"Tags":"python,templates","A_Id":2897474,"CreationDate":"2010-05-14T09:02:00.000","Title":"Python templates for huge HTML\/XML","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Recently I needed to generate a huge HTML page containing a report with a several-thousand-row table. And, obviously, I did not want to build the whole HTML (or the underlying tree) in memory. As a result, I built the page with good old string interpolation, but I do not like the solution.\nThus, I wonder whether there are Python templating engines that can yield resulting page content by parts.\nUPD 1: I am not interested in listing all available frameworks and templating engines.\nI am interested in templating solutions that I can use separately from any framework and which can yield content by portions instead of building the whole result in memory.\nI understand the usability enhancements from partial content loading with client scripting, but that is out of the scope of my current question. Say, I want to generate a huge HTML\/XML and stream it into a local file.","AnswerCount":5,"Available Count":2,"Score":0.0798297691,"is_accepted":false,"ViewCount":1055,"Q_Id":2832915,"Users Score":2,"Answer":"It'd be more user-friendly (assuming they have javascript enabled) to build the table via javascript by using e.g. a jQuery plugin which allows automatic loading of contents as soon as you scroll down. 
Then only a few rows are loaded initially and when the user scrolls down more rows are loaded on demand.\nIf that's not a solution, you could use three templates: one for everything before the rows, one for everything after the rows and a third one for the rows.\nThen you first send the before-rows template, then generate the rows and send them immediately, then the after-rows template. Then you will have only one block\/row in memory instead of the whole table.","Q_Score":4,"Tags":"python,templates","A_Id":2832958,"CreationDate":"2010-05-14T09:02:00.000","Title":"Python templates for huge HTML\/XML","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm trying to develop an app using Django 1.1 on Webfaction. I'd like to get the IP address of the incoming request, but when I use request.META['REMOTE_ADDR'] it returns 127.0.0.1. There seem to be a number of different ways of getting the address, such as using \nHTTP_X_FORWARDED_FOR or plugging in some middleware called SetRemoteAddrFromForwardedFor. Just wondering what the best approach was?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":748,"Q_Id":2840329,"Users Score":1,"Answer":"I use the middleware because this way I don't have to change the app's code. \nIf I want to migrate my app to other hosting servers, I only need to modify the middleware without affecting other parts. 
\nSecurity is not an issue because on WebFaction you can trust what comes in from the front end server.","Q_Score":0,"Tags":"python,django","A_Id":2840883,"CreationDate":"2010-05-15T13:46:00.000","Title":"Django: What's the correct way to get the requesting IP address?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Sometimes I have to send a message to a specific IP and sometimes I have to broadcast the message to all the IP's in my network. At the other end I have to distinguish between a broadcast and a normal one, but recvfrom() just returns the address the message came from;\nthere is no difference between them. Can anyone help me distinguish them?\nUDP is the protocol.","AnswerCount":1,"Available Count":1,"Score":0.761594156,"is_accepted":false,"ViewCount":116,"Q_Id":2848098,"Users Score":5,"Answer":"I don't think it's possible with Python's socket module. UDP is a very minimalistic protocol, and the only way to distinguish between a broadcast and a non-broadcast UDP packet is by looking at the destination address. However, you cannot inspect that part of the packet with the BSD socket API (if I remember it correctly), and the socket module exposes the BSD socket API only. Your best bet would probably be to use the first byte of the message to denote whether it is a broadcast or a unicast message.","Q_Score":2,"Tags":"python","A_Id":2848539,"CreationDate":"2010-05-17T10:00:00.000","Title":"How to identify a broadcasted message?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a long document in XML from which I need to produce static HTML pages (for distribution via CD). 
I know (to varying degrees) JavaScript, PHP and Python. The current options I've considered are listed here:\n\nI'm not ruling out JavaScript, so one option would be to use ajax to dynamically load the XML content into HTML pages. Edit: I'd use jQuery for this option.\nLearn some basic XSLT and produce HTML to the correct spec this way.\nProduce the site with PHP (for example) and then generate a static site.\nWrite a script (in Python for example) to convert the XML into HTML. This is similar to the XSLT option but without having to learn XSLT.\n\nUseful information:\n\nThe XML will likely change at some point, so I'd like to be able to easily regenerate the site.\nI'll have to produce some kind of menu for jumping around the document (so I'll need to produce some kind of index of the content).\n\nI'd like to know if anyone has any better ideas that I haven't thought of. If not, I'd like you to tell me which of my options seems the most sensible. I think I know what I'm going to do, but I'd like a second opinion. Thanks.","AnswerCount":4,"Available Count":3,"Score":0.049958375,"is_accepted":false,"ViewCount":1365,"Q_Id":2850534,"Users Score":1,"Answer":"I would go with the PHP option. 
The reason being is that when the XML changes your site content \"should\" automatically change without you having to touch your PHP code.\nCreating a Python script to generate lots of static pages just seems like a bad idea to me and with javascript you will have your cross-browser headaches (unless you are using a framework maybe).\nUse the server side languages for these kind of tasks, it is what they were made for.","Q_Score":3,"Tags":"python,html,xml,ajax,xslt","A_Id":2850582,"CreationDate":"2010-05-17T15:47:00.000","Title":"Producing a static HTML site from XML content","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have a long document in XML from which I need to produce static HTML pages (for distribution via CD). I know (to varying degrees) JavaScript, PHP and Python. The current options I've considered are listed here:\n\nI'm not ruling out JavaScript, so one option would be to use ajax to dynamically load the XML content into HTML pages. Edit: I'd use jQuery for this option.\nLearn some basic XSLT and produce HTML to the correct spec this way.\nProduce the site with PHP (for example) and then generate a static site.\nWrite a script (in Python for example) to convert the XML into HTML. This is similar to the XSLT option but without having to learn XSLT.\n\nUseful information:\n\nThe XML will likely change at some point, so I'd like to be able to easily regenerate the site.\nI'll have to produce some kind of menu for jumping around the document (so I'll need to produce some kind of index of the content).\n\nI'd like to know if anyone has any better ideas that I haven't thought of. If not, I'd like you to tell me which of my options seems the most sensible. I think I know what I'm going to do, but I'd like a second opinion. 
Thanks.","AnswerCount":4,"Available Count":3,"Score":0.0996679946,"is_accepted":false,"ViewCount":1365,"Q_Id":2850534,"Users Score":2,"Answer":"I would go with the XSLT option, controlled via parameters to generate different pages from the same XML source if needed. It's really the tool made for XML transformations.","Q_Score":3,"Tags":"python,html,xml,ajax,xslt","A_Id":2850603,"CreationDate":"2010-05-17T15:47:00.000","Title":"Producing a static HTML site from XML content","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have a long document in XML from which I need to produce static HTML pages (for distribution via CD). I know (to varying degrees) JavaScript, PHP and Python. The current options I've considered are listed here:\n\nI'm not ruling out JavaScript, so one option would be to use ajax to dynamically load the XML content into HTML pages. Edit: I'd use jQuery for this option.\nLearn some basic XSLT and produce HTML to the correct spec this way.\nProduce the site with PHP (for example) and then generate a static site.\nWrite a script (in Python for example) to convert the XML into HTML. This is similar to the XSLT option but without having to learn XSLT.\n\nUseful information:\n\nThe XML will likely change at some point, so I'd like to be able to easily regenerate the site.\nI'll have to produce some kind of menu for jumping around the document (so I'll need to produce some kind of index of the content).\n\nI'd like to know if anyone has any better ideas that I haven't thought of. If not, I'd like you to tell me which of my options seems the most sensible. I think I know what I'm going to do, but I'd like a second opinion. 
Thanks.","AnswerCount":4,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":1365,"Q_Id":2850534,"Users Score":0,"Answer":"Go with what you are most comfortable with.\nIf it's straightforward you could use (for example) php to generate a page and then use a command line script (in python or php) to create cached files for you.","Q_Score":3,"Tags":"python,html,xml,ajax,xslt","A_Id":2850635,"CreationDate":"2010-05-17T15:47:00.000","Title":"Producing a static HTML site from XML content","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I recently downloaded and installed feedparser with python,\nI tried to run it but Netbeans shouts on import:\nImportError: No module named feedparser\nrestarted the Netbeans, still no go.","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1032,"Q_Id":2852301,"Users Score":1,"Answer":"Netbeans by default uses Jython, \nif you go to Tools>Python Platforms and see that Jython is the default. \nSwitch it to Python and so the installed libraries would work. \nIf you already have a project, you should right click on it, choose Python and on the platform choose Python instead of Jython.","Q_Score":0,"Tags":"python,netbeans,feedparser","A_Id":2856391,"CreationDate":"2010-05-17T19:59:00.000","Title":"adding the feedparser module to python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am programming web interface which communicates with cisco switches via telnet. I want to make such system which will be storing one telnet connection per switch and every script (web interface, cron jobs, etc.) will have access to it. 
This is needed to make a single query queue for each device and prevent huge cisco processor load caused by several concurrent telnet connections. \nHow can I do this? \nupdated\nThe option with a connection-handling daemon is good and will work in the best way. Sharing a telnet connection object between scripts may be difficult to implement and debug. But this option is interesting because the interface is used only by a couple of operators and cron jobs.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1334,"Q_Id":2856356,"Users Score":1,"Answer":"The usual way would be to have a process running in the background that keeps hold of the persistent telnet connections and commands queued to go down them.\nThen have the front-end scripts connect to it (eg. via a unix socket) to queue commands and get the results asynchronously.\nBut this might be overkill. How many people are you expecting to be using a switch interface concurrently? A lightweight alternative for just the web part of it would be to keep the telnet connection object in the web scripts, and configure the web server\/gateway to only launch one instance of your webapp at once.","Q_Score":0,"Tags":"python,django,telnet,telnetlib","A_Id":2856596,"CreationDate":"2010-05-18T10:30:00.000","Title":"Python (Django). Store telnet connection","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is there a function or method I could call in Python \nThat would tell me if the data is RSS or HTML?","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":81,"Q_Id":2882549,"Users Score":0,"Answer":"Filetypes should generally be determined out-of-band. eg. if you are fetching the file from a web server, the place to look would be the Content-Type header of the HTTP response. 
If you're fetching a local file, the filesystem would have a way of determining filetype\u2014on Windows that'd be looking at the file extension.\nIf none of that is available, you'd have to resort to content sniffing. This is never wholly reliable, and RSS is particularly annoying because there are multiple incompatible versions of it, but about the best you could do would probably be:\n\nAttempt to parse the content with an XML parser. If it fails, the content isn't well-formed XML so can't be RSS.\nLook at the document.documentElement.namespaceURI. If it's http:\/\/www.w3.org\/1999\/xhtml, you've got XHTML. If it's http:\/\/www.w3.org\/1999\/02\/22-rdf-syntax-ns#, you've got RSS (of one flavour).\nIf the document.documentElement.tagName is rss, you've got RSS (of a slightly different flavour).\n\nIf the file couldn't be parsed as XML, it could well be HTML (or some tag-soup approximation of it). It's conceivable it might also be broken RSS. In that case most feed tools would reject it. 
If you need to still detect this case you'd be reduced to looking for strings like \n[...]\nWhile this is perfectly legal XML code, and it's even recommended to use the header, I'd like to get rid of it as one of the programs I'm working with has problems here.\nI can't seem to find the appropriate option in xml.dom.minidom, so I wondered if there are other packages which do allow to neglect the header.\nCheers,\nNico","AnswerCount":8,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":9270,"Q_Id":2933262,"Users Score":0,"Answer":"If you're set on using minidom, just scan back in the file and remove the first line after writing all the XML you need.","Q_Score":12,"Tags":"python,xml","A_Id":2933332,"CreationDate":"2010-05-29T00:28:00.000","Title":"How to write an XML file without header in Python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"when using Python's stock XML tools such as xml.dom.minidom for XML writing, a file would always start off like\n\n[...]\nWhile this is perfectly legal XML code, and it's even recommended to use the header, I'd like to get rid of it as one of the programs I'm working with has problems here.\nI can't seem to find the appropriate option in xml.dom.minidom, so I wondered if there are other packages which do allow to neglect the header.\nCheers,\nNico","AnswerCount":8,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":9270,"Q_Id":2933262,"Users Score":0,"Answer":"Purists may not like to hear this, but I have found using an XML parser to generate XML to be overkill. Just generate it directly as strings. This also lets you generate files larger than you can keep in memory, which you can't do with DOM. 
Reading XML is another story.","Q_Score":12,"Tags":"python,xml","A_Id":2933289,"CreationDate":"2010-05-29T00:28:00.000","Title":"How to write an XML file without header in Python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is it possible for my python web app to provide an option for the user to automatically send jobs to the locally connected printer? Or will the user always have to use the browser to manually print out everything?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1545,"Q_Id":2936384,"Users Score":0,"Answer":"If your Python webapp is running inside a browser on the client machine, I don't see any way other than manual printing for the user.\nSome workarounds you might want to investigate:\n\nif your web app is installed on the client machine, you will be able to connect directly to the printer, as you have access to the underlying OS system.\nyou could potentially create a plugin that can be installed in the browser that does this for him, but I have no clue as to how this works technically.\nWhat is it that you want to print?
You could generate a PDF that contains everything the user needs to print, in one go.","Q_Score":0,"Tags":"python,web-applications,printing","A_Id":2936475,"CreationDate":"2010-05-29T19:48:00.000","Title":"python web script send job to printer","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Is there any library or way by which I can convert my XML records to YAML format?","AnswerCount":4,"Available Count":1,"Score":0.1488850336,"is_accepted":false,"ViewCount":6025,"Q_Id":2943862,"Users Score":3,"Answer":"The difference between XML and YAML is significant enough to warrant a redesign of the schema you are using to store your data. You should write a script to parse your XML records and output YAML formatted data.\nThere are some methods out there to convert any generic XML into YAML, but the results are far less usable than a method designed specifically for your schema.","Q_Score":10,"Tags":"python,xml,tags,yaml","A_Id":2955385,"CreationDate":"2010-05-31T13:37:00.000","Title":"is there anything exist to convert xml -> yaml directly?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Let's say you run a third-party program on your computer which creates a process named example.exe.\nHow do I determine whether this process is running, and how many windows does it have open?
How do I intercept network communication between these windows and the server?\nMy goal is to create an app which will monitor network traffic between example.exe and its home server in order to analyze the data and save it to a database, and finally simulate user interaction to get more relevant data","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":620,"Q_Id":2945074,"Users Score":0,"Answer":"You could use wireshark from wireshark.org to sniff the network traffic (or any other packet sniffer).","Q_Score":2,"Tags":"python,networking,communication","A_Id":2945291,"CreationDate":"2010-05-31T17:30:00.000","Title":"python intercepting communication","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I wonder if it is better to add an element by opening the file, searching for a 'good place' and adding a string which contains XML code.\nOr should I use some library... I have no idea. I know how I can get nodes and properties from XML through, for example, lxml, but what's the simplest and best way to add?","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":4733,"Q_Id":2977779,"Users Score":1,"Answer":"The safest way to add nodes to an XML document is to load it into a DOM, add the nodes programmatically and write it out again. There are several Python XML libraries.
I have used minidom, but I have no reason to recommend it specifically over the others.","Q_Score":4,"Tags":"python,xml","A_Id":2977799,"CreationDate":"2010-06-04T21:01:00.000","Title":"add xml node to xml file with python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm building my startup and I'm thinking ahead for shared use of services.\nSo far I want to allow people who have a user account on one app to be able to use the same user account on another app. This means I will have to build an authentication server.\nI would like some opinions on how to allow an app to talk to the authentication server. Should I use curl? Should I use Python's http libs? All the code will be in Python.\nAll it's going to do is ask the authentication server if the person is allowed to use that app and the auth server will return a JSON user object. All authorization (roles and resources) will be app independent, so this app will not have to handle that.\nSorry if this seems a bit newbish; this is the first time I have separated authentication from the actual application.","AnswerCount":3,"Available Count":2,"Score":0.0665680765,"is_accepted":false,"ViewCount":308,"Q_Id":2986317,"Users Score":1,"Answer":"Assuming you plan to write your own auth client code, it isn't event-driven, and you don't need to validate an https certificate, I would suggest using python's built-in urllib2 to call the auth server. This will minimize dependencies, which ought to make deployment and upgrades easier.\nThat being said, there are more than a few existing auth-related protocols and libraries in the world, some of which might save you some time and security worries over writing code from scratch. 
For example, if you make your auth server speak OpenID, many off-the-shelf applications and servers (including Apache) will have auth client plugins already made for you.","Q_Score":0,"Tags":"python,authentication,rest","A_Id":2986411,"CreationDate":"2010-06-06T23:14:00.000","Title":"Talking to an Authentication Server","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm building my startup and I'm thinking ahead for shared use of services.\nSo far I want to allow people who have a user account on one app to be able to use the same user account on another app. This means I will have to build an authentication server.\nI would like some opinions on how to allow an app to talk to the authentication server. Should I use curl? Should I use Python's http libs? All the code will be in Python.\nAll it's going to do is ask the authentication server if the person is allowed to use that app and the auth server will return a JSON user object. All authorization (roles and resources) will be app independent, so this app will not have to handle that.\nSorry if this seems a bit newbish; this is the first time I have separated authentication from the actual application.","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":308,"Q_Id":2986317,"Users Score":0,"Answer":"Your question isn't really a programming problem so much as it is an architecture problem. What I would recommend for your specific situation is to set up an LDAP server for authentication, authorization, and accounting (AAA). Then have your applications use that (every language has modules and libraries for LDAP).
It is a reliable, secure, proven, and well-known way of handling such things.\nEven if you strictly want to enforce HTTP-based authentication it is easy enough to slap an authentication server in front of your LDAP and call it a day. There's even existing code to do just that so you won't have to re-invent the wheel.","Q_Score":0,"Tags":"python,authentication,rest","A_Id":2986610,"CreationDate":"2010-06-06T23:14:00.000","Title":"Talking to an Authentication Server","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am trying to write some unit tests for a small web service written with Cherrypy and I am wondering what's the best way to figure out that the server has started, so I don't get a connection refused error if I try to connect to the service too early?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":250,"Q_Id":2988636,"Users Score":4,"Answer":"I got it figured out:\ncherrypy.engine.start(); cherrypy.server.wait()\nit's the way to go.\nOtherwise, I think you can get away with some tricks with\ncherrypy.server.bus.states","Q_Score":4,"Tags":"python,cherrypy","A_Id":2989432,"CreationDate":"2010-06-07T10:20:00.000","Title":"cherrypy when to know that the server has started","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm writing an application that sends files over the network, and I want to develop a custom protocol so as not to limit myself in terms of feature richness (http wouldn't be appropriate; the nearest thing is the bittorrent protocol, maybe).\nI've tried twisted, and I've built a good app, but there's a bug in twisted that makes my GUI block, so I have to switch to another
framework\/strategy.\nWhat do you suggest? Would using raw sockets with the gtk mainloop (there are select-like functions in the toolkit) be too difficult?\nIs it viable to run two mainloops in different threads?\nAsking for suggestions","AnswerCount":4,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":1158,"Q_Id":2991852,"Users Score":1,"Answer":"Disclaimer: I have little experience with network applications.\nThat being said, raw sockets aren't terribly difficult to wrap your head around\/use, especially if you're not too worried about optimization. That takes more thought, of course. But using GTK and raw sockets should be fairly straightforward. Especially since you've used the twisted framework, which, IIRC, just abstracts some of the more nitty-gritty details of socket management.","Q_Score":1,"Tags":"python,networking","A_Id":2991942,"CreationDate":"2010-06-07T17:47:00.000","Title":"networking application and GUI in python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm writing an application that sends files over the network, and I want to develop a custom protocol so as not to limit myself in terms of feature richness (http wouldn't be appropriate; the nearest thing is the bittorrent protocol, maybe).\nI've tried twisted, and I've built a good app, but there's a bug in twisted that makes my GUI block, so I have to switch to another framework\/strategy.\nWhat do you suggest? Would using raw sockets with the gtk mainloop (there are select-like functions in the toolkit) be too difficult?\nIs it viable to run two mainloops in different threads?\nAsking for suggestions","AnswerCount":4,"Available Count":2,"Score":0.049958375,"is_accepted":false,"ViewCount":1158,"Q_Id":2991852,"Users Score":1,"Answer":"Two threads: one for the GUI, one for sending\/receiving data.
Tkinter would be a perfectly fine toolkit for this. You don't need twisted or any other external libraries or toolkits -- what comes out of the box is sufficient to get the job done.","Q_Score":1,"Tags":"python,networking","A_Id":2991935,"CreationDate":"2010-06-07T17:47:00.000","Title":"networking application and GUI in python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm at the moment working on a web page where the users who visit it should have the possibility to create an event in my web page's name. There is a Page on Facebook for the web page which should be the owner of the user created event. Is this possible? All users are authenticated using Facebook Connect, but since the event won't be created in their name I don't know if that's so much of help. The Python SDK will be used since the event shall be implemented server side.\n\/ D","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":700,"Q_Id":3005640,"Users Score":0,"Answer":"This is possible, using the access token provided for your page you can publish to this as you would with a user. If you want to post FROM the USER than you need to use the current user's access token, if you want to post FROM the PAGE then using the access token from the page you can publish to that","Q_Score":1,"Tags":"python,django,facebook","A_Id":6583766,"CreationDate":"2010-06-09T12:12:00.000","Title":"Create event for another owner using Facebook Graph API","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am looking for a way of programmatically testing a script written with the asyncore Python module. 
My test consists of launching the script in question -- if a TCP listen socket is opened, the test passes. Otherwise, if the script dies before getting to that point, the test fails.\nThe purpose of this is knowing if a nightly build works (at least up to a point) or not. \nI was thinking the best way to test would be to launch the script in some kind of sandbox wrapper which waits for a socket request. I don't care about actually listening for anything on that port, just intercepting the request and using that as an indication that my test passed.\nI think it would be preferable to intercept the open socket request, rather than polling at set intervals (I hate polling!). But I'm a bit out of my depths as far as how exactly to do this.\nCan I do this with a shell script? Or perhaps I need to override the asyncore module at the Python level?\nThanks in advance,\n- B","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":838,"Q_Id":3014686,"Users Score":0,"Answer":"Another option is to mock the socket module before importing the asyncore module. 
Of course, then you have to make sure that the mock works properly first.","Q_Score":0,"Tags":"python,testing,sockets,wrapper","A_Id":3019494,"CreationDate":"2010-06-10T13:19:00.000","Title":"How can I build a wrapper to wait for listening on a port?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"how can i send an xml file on my system to an http server using python standard library??","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":5264,"Q_Id":3020979,"Users Score":1,"Answer":"You can achieve that through a standard http post request.","Q_Score":7,"Tags":"python,xml,http","A_Id":3021000,"CreationDate":"2010-06-11T07:35:00.000","Title":"send xml file to http using python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I need a simple graphics library that supports the following functionality:\n\nAbility to draw polygons (not just rectangles!) with RGBA colors (i.e., partially transparent),\nAbility to load bitmap images,\nAbility to read current color of pixel in a given coordinate.\n\nIdeally using JavaScript or Python.\nSeems like HTML 5 Canvas can handle #2 and #3 but not #1, whereas SVG can handle #1 and #2 but not #3. Am I missing something (about either of these two)? Or are there other alternatives?","AnswerCount":5,"Available Count":3,"Score":0.1194272985,"is_accepted":false,"ViewCount":1047,"Q_Id":3021514,"Users Score":3,"Answer":"PyGame can do all of those things. 
OTOH, I don't think it embeds into a GUI too well.","Q_Score":2,"Tags":"javascript,python,graphics,canvas,svg","A_Id":3022580,"CreationDate":"2010-06-11T09:00:00.000","Title":"Simple graphics API with transparency, polygons, reading image pixels?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I need a simple graphics library that supports the following functionality:\n\nAbility to draw polygons (not just rectangles!) with RGBA colors (i.e., partially transparent),\nAbility to load bitmap images,\nAbility to read current color of pixel in a given coordinate.\n\nIdeally using JavaScript or Python.\nSeems like HTML 5 Canvas can handle #2 and #3 but not #1, whereas SVG can handle #1 and #2 but not #3. Am I missing something (about either of these two)? Or are there other alternatives?","AnswerCount":5,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":1047,"Q_Id":3021514,"Users Score":0,"Answer":"I voted for PyGame, but I would also like to point out that the new QT graphics library seems quite capable. I have not used PyQT with QT4 yet, but I really like PyQT development with QT3.","Q_Score":2,"Tags":"javascript,python,graphics,canvas,svg","A_Id":3023182,"CreationDate":"2010-06-11T09:00:00.000","Title":"Simple graphics API with transparency, polygons, reading image pixels?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I need a simple graphics library that supports the following functionality:\n\nAbility to draw polygons (not just rectangles!) 
with RGBA colors (i.e., partially transparent),\nAbility to load bitmap images,\nAbility to read current color of pixel in a given coordinate.\n\nIdeally using JavaScript or Python.\nSeems like HTML 5 Canvas can handle #2 and #3 but not #1, whereas SVG can handle #1 and #2 but not #3. Am I missing something (about either of these two)? Or are there other alternatives?","AnswerCount":5,"Available Count":3,"Score":0.0798297691,"is_accepted":false,"ViewCount":1047,"Q_Id":3021514,"Users Score":2,"Answer":"I ended up going with Canvas. The \"secret\" of polygons is using paths. Thanks, \"tur1ng\"!","Q_Score":2,"Tags":"javascript,python,graphics,canvas,svg","A_Id":3027643,"CreationDate":"2010-06-11T09:00:00.000","Title":"Simple graphics API with transparency, polygons, reading image pixels?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm working a script that will upload videos to YouTube with different accounts. Is there a way to use HTTPS or SOCKS proxies to filter all the requests. My client doesn't want to leave any footprints for Google. The only way I found was to set the proxy environment variable beforehand but this seems cumbersome. Is there some way I'm missing?\nThanks :)","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1129,"Q_Id":3026881,"Users Score":0,"Answer":"Setting an environment variable (e.g. import os; os.environ['BLAH']='BLUH' once at the start of your program \"seems cumbersome\"?! What does count as \"non-cumbersome\" for you, pray?","Q_Score":0,"Tags":"python,api,youtube,gdata","A_Id":3027001,"CreationDate":"2010-06-11T23:34:00.000","Title":"How to use a Proxy with Youtube API? 
(Python)","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is there a way I can programmatically determine the status of a download in Chrome or Mozilla Firefox? I would like to know if the download was aborted or completed successfully.\nFor writing the code I'd be using either Perl, PHP or Python.\nPlease help.\nThank You.","AnswerCount":2,"Available Count":1,"Score":-0.1973753202,"is_accepted":false,"ViewCount":676,"Q_Id":3029824,"Users Score":-2,"Answer":"There are scripts out there that output the file in chunks, recording how many bytes they've echoed out, but those are completely unreliable and you can't accurately ascertain whether or not the user successfully received the complete file.\nThe short answer is no, really, unless you write your own download manager (in Java) that runs a callback to your server when the download completes.","Q_Score":0,"Tags":"php,python,perl,download","A_Id":3029877,"CreationDate":"2010-06-12T19:18:00.000","Title":"Programmatically determining the status of a file download","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have to send F2 key to telnet host. How do I send it using python...using getch() I found that the character < used for the F2 key but when sending >, its not working. I think there is a way to send special function keys but I am not able to find it. If somebody knows please help me. 
Thanks in advance","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":2908,"Q_Id":3035390,"Users Score":4,"Answer":"Extended keys (non-alphanumeric or symbol) are composed of a sequence of single characters, with the sequence depending on the terminal you have told the telnet server you are using. You will need to send all characters in the sequence in order to make it work. Here, using od -c <<< 'CtrlVF2' I was able to see a sequence of \\x1b0Q with the xterm terminal.","Q_Score":4,"Tags":"python,telnet","A_Id":3035415,"CreationDate":"2010-06-14T06:40:00.000","Title":"how to send F2 key to remote host using python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm looking to find a way to in real-time find the shortest path between nodes in a huge graph. It has hundreds of thousands of vertices and millions of edges. I know this question has been asked before and I guess the answer is to use a breadth-first search, but I'm more interested in to know what software you can use to implement it. For example, it would be totally perfect if it already exist a library (with python bindings!) for performing bfs in undirected graphs.","AnswerCount":7,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":24381,"Q_Id":3038661,"Users Score":0,"Answer":"Depending on what kind of additional information you have, A* may be extremely efficient. 
In particular, if given a node you can compute an estimate of the cost from that node to the goal, A* is optimally efficient.","Q_Score":16,"Tags":"python,graph,shortest-path,dijkstra,breadth-first-search","A_Id":3042109,"CreationDate":"2010-06-14T15:50:00.000","Title":"Efficiently finding the shortest path in large graphs","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a python script that accepts a file from the user and saves it.\nIs it possible to not upload the file immediately but to que it up and when the server has less load to upload it then.\nCan this be done by transferring the file to the browsers storage area or taking the file from the Harddrive and transferring to the User's RAM?","AnswerCount":1,"Available Count":1,"Score":0.537049567,"is_accepted":false,"ViewCount":100,"Q_Id":3040290,"Users Score":3,"Answer":"There is no reliable way to do what you're asking, because fundamentally, your server has no control over the user's browser, computer, or internet connection. 
If you don't care about reliability, you might try writing a bunch of javascript to trigger the upload at a scheduled time, but it just wouldn't work if the user closed his browser, navigated away from your web page, turned off his computer, walked away from his wifi signal, etc.\nIf your web site is really so heavily loaded that it buckles when lots of users upload files at once, it might be time to profile your code, use multiple servers, or perhaps use a separate upload server to accept files and then schedule transfer to your main server later.","Q_Score":0,"Tags":"python,file,architecture,file-upload","A_Id":3040346,"CreationDate":"2010-06-14T19:32:00.000","Title":"Python timed file upload","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Does anyone know of a memory efficient way to generate very large xml files (e.g. 100-500 MiB) in Python? \nI've been utilizing lxml, but memory usage is through the roof.","AnswerCount":4,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":4751,"Q_Id":3049188,"Users Score":2,"Answer":"The only sane way to generate so large an XML file is line by line, which means printing while running a state machine, and lots of testing.","Q_Score":11,"Tags":"python,xml,lxml","A_Id":3049245,"CreationDate":"2010-06-15T21:27:00.000","Title":"Generating very large XML files in Python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Does anyone know of a memory efficient way to generate very large xml files (e.g. 100-500 MiB) in Python? 
\nI've been utilizing lxml, but memory usage is through the roof.","AnswerCount":4,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":4751,"Q_Id":3049188,"Users Score":2,"Answer":"Obviously, you've got to avoid having to build the entire tree ( whether DOM or etree or whatever ) in memory. But the best way depends on the source of your data and how complicated and interlinked the structure of your output is.\nIf it's big because it's got thousands of instances of fairly independent items, then you can generate the outer wrapper, and then build trees for each item and then serialize each fragment to the output. \nIf the fragments aren't so independent, then you'll need to do some extra bookkeeping -- like maybe manage a database of generated ids & idrefs. \nI would break it into 2 or 3 parts: a sax event producer, an output serializer\neating sax events, and optionally, if it seems easier to work with some independent pieces as objects or trees, something to build those objects and then turn them into sax events for the serializer. \nMaybe you could just manage it all as direct text output, instead of dealing with sax events: that depends on how complicated it is. 
\nThis may also be a good place to use python generators as a way of streaming the output without having to build large structures in memory.","Q_Score":11,"Tags":"python,xml,lxml","A_Id":3050007,"CreationDate":"2010-06-15T21:27:00.000","Title":"Generating very large XML files in Python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Just wondering how those air ticket booking websites redirect the user to the airline booking website and then fill in (I suppose by doing a POST) the required information so that the users will land on the booking page with origin\/destination\/date selected?\nIs the technique used to open up a new browser window and do an AJAX POST from there?\nThanks.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":81,"Q_Id":3050477,"Users Score":0,"Answer":"It can work like this:\non the air ticket booking system you have an HTML form pointing at a certain airline booking website (via its action parameter). If the user submits data, the data lands on the airline booking website and that website processes the request.\nUsually people want to get back to the first site. This can be done by sending a return URL with the request data. Of course there must be an API on the airline booking site to handle such a URL.\nThis is a common mechanism when you do online payments, all kinds of reservations, etc.\nNot sure about your idea to use ajax calls. A simple HTML form is enough here.
Note also that making ajax calls between different domains can be recognized as an attempt to access a restricted URL.","Q_Score":0,"Tags":"javascript,python","A_Id":3051343,"CreationDate":"2010-06-16T03:07:00.000","Title":"redirection follow by post","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am able to establish the initial telnet session. But from this session I need to create a second. Basically I cannot telnet directly to the device I need to access. Interactively this is not an issue, but I am attempting to set up an automated test using python.\nDoes anyone know how to accomplish this?","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":797,"Q_Id":3054086,"Users Score":0,"Answer":"If you log in from A to B to C, do you need the console input from A to go to C?\nIf not, it is fairly straightforward, as you can execute commands on the second server to connect to the third.\nI do something like that using SSH, where I have paramiko and scripts installed on both A and B. A logs in to B and executes a command to start a python script on B which then connects to C and does whatever.","Q_Score":0,"Tags":"python,telnet,telnetlib","A_Id":3054195,"CreationDate":"2010-06-16T14:18:00.000","Title":"Using Python: How can I telnet into a server and then from that connection telnet into a second server?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am able to establish the initial telnet session. But from this session I need to create a second. Basically I cannot telnet directly to the device I need to access.
Interactively this is not an issue, but I am attempting to set up an automated test using python.\nDoes anyone know how to accomplish this?","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":797,"Q_Id":3054086,"Users Score":1,"Answer":"After establishing the first connection, just write the same telnet command you use manually to that connection.","Q_Score":0,"Tags":"python,telnet,telnetlib","A_Id":3054153,"CreationDate":"2010-06-16T14:18:00.000","Title":"Using Python: How can I telnet into a server and then from that connection telnet into a second server?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have two projects in Eclipse with Java and Python code, using Jython. Also I'm using PyDev. One project can import and use the xml module just fine, and the other gives the error ImportError: No module named xml. As far as I can tell, all the project properties are set identically. The working project was created from scratch and the other comes from code checked out of an svn repository and put into a new project.\nWhat could be the difference?\nedit- Same for os, btw.
It's just missing some path somewhere...","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":190,"Q_Id":3057382,"Users Score":2,"Answer":"Eclipse stores project data in files like\n\n.project\n.pydevproject\n.classpath\n\nWith checkin \/ checkout via svn it is possible to lose some of these files.\nCheck your dot-files.","Q_Score":2,"Tags":"java,python,eclipse,jython,pydev","A_Id":3119610,"CreationDate":"2010-06-16T21:38:00.000","Title":"Jython project in Eclipse can't find the xml module, but works in an identical project","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am trying to put together a bash or python script to play with the facebook graph API. Using the API looks simple, but I'm having trouble setting up curl in my bash script to call authorize and access_token. Does anyone have a working example?","AnswerCount":8,"Available Count":1,"Score":1.0,"is_accepted":false,"ViewCount":72749,"Q_Id":3058723,"Users Score":6,"Answer":"There IS a way to do it, I've found it, but it's a lot of work and will require you to spoof a browser 100% (and you'll likely be breaking their terms of service)\nSorry I can't provide all the details, but the gist of it:\n\nassuming you have a username\/password for a facebook account, go curl for the oauth\/authenticate... page. 
Extract any cookies returned in the \"Set-Cookie\" header and then follow any \"Location\" headers (compiling cookies along the way).\nscrape the login form, preserving all fields, and submit it (setting the referer and content-type headers, and inserting your email\/pass) same cookie collection from (1) required\nsame as (2) but now you're going to need to POST the approval form acquired after (2) was submitted, set the Referer header with the URL where the form was acquired.\nfollow the redirects until it sends you back to your site, and get the \"code\" parameter out of that URL\nExchange the code for an access_token at the oauth endpoint\n\nThe main gotchas are cookie management and redirects. Basically, you MUST mimic a browser 100%. I think it's hackery but there is a way, it's just really hard!","Q_Score":41,"Tags":"python,bash,facebook","A_Id":3381527,"CreationDate":"2010-06-17T03:41:00.000","Title":"Programmatically getting an access token for using the Facebook Graph API","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"The situation I'm in is this - there's a process that's writing to a file, sometimes the file is rather large say 400 - 500MB. I need to know when it's done writing. How can I determine this? If I look in the directory I'll see it there but it might not be done being written. Plus this needs to be done remotely - as in on the same internal LAN but not on the same computer and typically the process that wants to know when the file writing is done is running on a Linux box with the process that's writing the file and the file itself on a windows box. No, Samba isn't an option. 
xmlrpc communication to a service on that windows box is an option as well as using snmp to check if that's viable.\nIdeally\n\nWorks on either Linux or Windows - meaning the solution is OS independent. \nWorks for any type of file.\n\nGood enough:\n\nWorks just on windows but can be done through some library or whatever that can be accessed with Python.\nWorks only for PDF files.\n\nCurrent best idea is to periodically open the file in question from some process on the windows box and look at the last bytes checking for the PDF end tag and accounting for the eol differences because the file may have been created on Linux or Windows.","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":8480,"Q_Id":3070210,"Users Score":1,"Answer":"I ended up resolving it for our situation. As it turns out the process that was writing the files out had them opened exclusively so all we had to do was try opening them for read access - when denied they were in use.","Q_Score":8,"Tags":"python,windows,linux,pdf,file-io","A_Id":3073958,"CreationDate":"2010-06-18T13:59:00.000","Title":"Need a way to determine if a file is done being written to","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"The situation I'm in is this - there's a process that's writing to a file, sometimes the file is rather large say 400 - 500MB. I need to know when it's done writing. How can I determine this? If I look in the directory I'll see it there but it might not be done being written. Plus this needs to be done remotely - as in on the same internal LAN but not on the same computer and typically the process that wants to know when the file writing is done is running on a Linux box with the process that's writing the file and the file itself on a windows box. No, Samba isn't an option. 
xmlrpc communication to a service on that windows box is an option as well as using snmp to check if that's viable.\nIdeally\n\nWorks on either Linux or Windows - meaning the solution is OS independent. \nWorks for any type of file.\n\nGood enough:\n\nWorks just on windows but can be done through some library or whatever that can be accessed with Python.\nWorks only for PDF files.\n\nCurrent best idea is to periodically open the file in question from some process on the windows box and look at the last bytes checking for the PDF end tag and accounting for the eol differences because the file may have been created on Linux or Windows.","AnswerCount":2,"Available Count":2,"Score":1.0,"is_accepted":false,"ViewCount":8480,"Q_Id":3070210,"Users Score":8,"Answer":"There are probably many approaches you can take. I would try to open the file with write access. If that succeeds then no-one else is writing to that file.\nBuild a web service around this concept if you don't have direct access to the file between machines.","Q_Score":8,"Tags":"python,windows,linux,pdf,file-io","A_Id":3070749,"CreationDate":"2010-06-18T13:59:00.000","Title":"Need a way to determine if a file is done being written to","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm using PAMIE to control IE to automatically browse to a list of URLs. I want to find which URLs return IE's malware warning and which ones don't. I'm new to PAMIE, and PAMIE's documentation is non-existent or cryptic at best. How can I get a page's content from PAMIE so I can work with it in Python?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":256,"Q_Id":3073151,"Users Score":0,"Answer":"Browsing the CPamie.py file did the trick. 
Turns out, I didn't even need the page content - PAMIE's findText method lets you match any string on the page. Works great!","Q_Score":0,"Tags":"python,pamie","A_Id":3098190,"CreationDate":"2010-06-18T21:17:00.000","Title":"How do I get the page content from PAMIE?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am interested in making an HTTP Banner Grabber, but when i connect to a server on port 80 and i send something (e.g. \"HEAD \/ HTTP\/1.1\") recv doesn't return anything to me like when i do it in let's say netcat..\nHow would i go about this?\nThanks!","AnswerCount":2,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":2102,"Q_Id":3076263,"Users Score":2,"Answer":"Are you sending a \"\\r\\n\\r\\n\" to indicate the end of the request? If you're not, the server's still waiting for the rest of the request.","Q_Score":1,"Tags":"python,netcat","A_Id":3076282,"CreationDate":"2010-06-19T16:27:00.000","Title":"HTTP Banner Grabbing with Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"As the question says, what would be the difference between:\nx.getiterator() and x.iter(), where x is an ElementTree or an Element? 
Cause it seems to work for both, I have tried it.\nIf I am wrong somewhere, correct me please.","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1970,"Q_Id":3077010,"Users Score":0,"Answer":"getiterator is the ElementTree standard spelling for this method; iter is an equivalent lxml-only method that will stop your code from working in ElementTree if you need it, and appears to have no redeeming qualities whatsoever except saving you from typing 7 more characters for the method name;-).","Q_Score":5,"Tags":"python,lxml","A_Id":3077047,"CreationDate":"2010-06-19T19:53:00.000","Title":"What is the difference between getiterator() and iter() wrt to lxml","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is there a free open-source solution taking raw e-mail message (as a piece of text) and returning each header field, each attachment and the message body as separate fields?","AnswerCount":3,"Available Count":1,"Score":0.1325487884,"is_accepted":false,"ViewCount":873,"Q_Id":3078189,"Users Score":2,"Answer":"Yes... For each language you pointed out, I've used the one in Python myself. Try perusing the library documentation for your chosen library.\n(Note: You may be expecting a \"nice\", high-level library for this parsing... That's a tricky area, email has evolved and grown without much design, there are a lot of dark corners, and API's reflect that).","Q_Score":2,"Tags":"java,php,python,email,parsing","A_Id":3078197,"CreationDate":"2010-06-20T04:15:00.000","Title":"Is there an open-source eMail message (headers, attachments, etc.) 
parser?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm looking to create an application in Django which will allow for each client to point their domain to my server. At this point, I would want their domain to be accessed via https protocol and have a valid SSL connection. With OpenSSL, more specifically M2Crypto, can I do this right out the gate? Or, do I still need to purchase an SSL cert? Also, if the former is true (can do without purchase), does this mean I need to have a Python-based web server listening on 443 or does this all somehow still work with NGINX, et al?\nAny help is appreciated.","AnswerCount":2,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":703,"Q_Id":3078487,"Users Score":2,"Answer":"You will need a SSL cert, and let the web server handle the HTTPS.","Q_Score":1,"Tags":"python,ssl,openssl,m2crypto","A_Id":3078518,"CreationDate":"2010-06-20T07:17:00.000","Title":"OpenSSL for HTTPS without a certificate","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"How can I check if a specific ip address or proxy is alive or dead","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":4148,"Q_Id":3078704,"Users Score":0,"Answer":"An IP address corresponds to a device. You can't \"connect\" to a device in the general sense. You can connect to services on the device identified by ports. So, you find the ip address and port of the proxy server you're interested in and then try connecting to it using a simple socket.connect. If it connects fine, you can alteast be sure that something is running on that port of that ip address. 
Then you go ahead and use it and if things are not as you expect, you can make further decisions.","Q_Score":2,"Tags":"python,sockets,proxy","A_Id":3078730,"CreationDate":"2010-06-20T09:03:00.000","Title":"how to check if an ip address or proxy is working or not","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"How can I check if a specific ip address or proxy is alive or dead","AnswerCount":3,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":4148,"Q_Id":3078704,"Users Score":3,"Answer":"Because there may be any level of filtering or translation between you and the remote host, the only way to determine whether you can connect to a specific host is to actually try to connect. If the connection succeeds, then you can, else you can't.\nPinging isn't sufficient because ICMP ECHO requests may be blocked yet TCP connections might go through fine.","Q_Score":2,"Tags":"python,sockets,proxy","A_Id":3078719,"CreationDate":"2010-06-20T09:03:00.000","Title":"how to check if an ip address or proxy is working or not","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have to build a tag cloud out of a webpage\/feed. Once you get the word frequency table of tags, it's easy to build the tagcloud. But my doubt is how do I retrieve the tags\/keywords from the webpage\/feed? 
\nThis is what I'm doing now:\nGet the content -> strip HTML -> split them with \\s\\n\\t(space,newline,tab) -> Keyword list\nBut this does not work great.\nIs there a better way?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":226,"Q_Id":3083784,"Users Score":0,"Answer":"What you have is a rough 1st order approximation. I think if you then go back through the data and search for frequency of 2-word phrases, then 3-word phrases, up till the total number of words that can be considered a tag, you'll get a better representation of keyword frequency.\nYou can refine this rough search pattern by specifying certain words that can be contained as part of a phrase (pronouns etc.).","Q_Score":1,"Tags":"python,tags,visualization,keyword","A_Id":3250807,"CreationDate":"2010-06-21T10:17:00.000","Title":"How do I get tags\/keywords from a webpage\/feed?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"This follows my previous questions on using lxml and Python.\nI have a question, as to when I have a choice between using the methods provided by lxml.etree and where I can make use of XPath, what should I use? \nFor example, to get a list of all the X tags in an XML document, I could either iterate through it using the getiterator() of lxml.etree, or I could write the XPath expression: \/\/x.\nThere may be many more examples, this is just one. Question is, which should I use when I have a choice, and why?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":96,"Q_Id":3084627,"Users Score":1,"Answer":"XPath is usually preferable to an explicit iteration over elements. 
XPath is more succinct, and will likely be faster since it is implemented inside the XML engine.\nYou'd want to use an explicit iteration if there were complex criteria that couldn't be expressed easily (or at all) in XPath, or if you needed to visit all the nodes for some other processing anyway, or if you wanted to get rich debugging output.","Q_Score":1,"Tags":"python,lxml","A_Id":3084858,"CreationDate":"2010-06-21T12:34:00.000","Title":"Confused about using XPath or not","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm working with selenium.\nWhile trying to click a button it creates a pop up (alert) and doesn\u2019t return a page object.\nBecause of that I can\u2019t use \u201cclick\u201d alone as this method expects a page object and eventually fails because of a timeout.\nI can use \u201cchooseOkOnNextConfirmation()\u201d but this will click the pop up and I also want to verify that the pop up actually appeared.\nIs there any method that will click and verify this alert?","AnswerCount":3,"Available Count":1,"Score":0.0665680765,"is_accepted":false,"ViewCount":3169,"Q_Id":3084850,"Users Score":1,"Answer":"As far as I know, with alerts you always have to use\n\nselenium.get_confirmation()\n\nFrom the python doc:\nIf a confirmation is generated but you do not consume it with getConfirmation, the next Selenium action will fail.","Q_Score":2,"Tags":"python,selenium","A_Id":3103295,"CreationDate":"2010-06-21T13:01:00.000","Title":"How to click and verify the existence of a pop up (alert)","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"In Python, if you are developing a system service that communicates
with user applications through sockets, and you want to treat sockets connected by different users differently, how would you go about that?\nIf I know that all connecting sockets will be from localhost, is there a way to lookup through the OS (either on windows or linux) which user is making the connection request?","AnswerCount":3,"Available Count":1,"Score":0.0665680765,"is_accepted":false,"ViewCount":195,"Q_Id":3105705,"Users Score":1,"Answer":"Unfortunately, at this point in time the python libraries don't support the usual SCM_CREDENTIALS method of passing credentials along a Unix socket.\nYou'll need to use an \"ugly\" method as described in another answer to find it.","Q_Score":2,"Tags":"python,sockets","A_Id":3107066,"CreationDate":"2010-06-23T21:30:00.000","Title":"Determine user connecting a local socket with Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"Is it possible to filter all outgoing connections through a HTTPS or SOCKS proxy? I have a script that users various apis & calls scripts that use mechanize\/urllib. I'd like to filter every connection through a proxy, setting the proxy in my 'main' script (the one that calls all the apis). 
Is this possible?","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":798,"Q_Id":3115286,"Users Score":0,"Answer":"To use tor with mechanize I use tor+polipo.\nSet polipo to use the parent proxy socksParentProxy=localhost:9050 in its config file.\nThen use \nbrowser.set_proxies({\"http\": \"localhost:8118\"})\nwhere 8118 is your polipo port.\nSo you are using the polipo http proxy, which uses socks to reach tor.\nHope it helps :)","Q_Score":3,"Tags":"python,proxies","A_Id":8904814,"CreationDate":"2010-06-25T03:06:00.000","Title":"Python proxy question","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Are there any equivalents in objective-c to the following python urllib2 functions?\nRequest, urlopen, HTTPError, HTTPCookieProcessor\nAlso, how would I be able to do this and change the method from \"get\" to \"post\"?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":632,"Q_Id":3120430,"Users Score":1,"Answer":"NSMutableHTTPURLRequest, a category of NSMutableURLRequest, is how you set up an HTTP request. Using that class you will specify a method (GET or POST), headers and a url.\nNSURLConnection is how you open the connection. You will pass in a request and delegate, and the delegate will receive data, errors and messages related to the connection as they become available.\nNSHTTPCookieStorage is how you manage existing cookies. There are a number of related classes in the NSHTTPCookie family.\nWith urlopen, you open a connection and read from it. There is no direct equivalent to that unless you use something lower level like CFReadStreamCreateForHTTPRequest. 
In Objective-C everything is passive, where you are notified when events occur on the stream.","Q_Score":0,"Tags":"python,objective-c","A_Id":3120602,"CreationDate":"2010-06-25T18:23:00.000","Title":"Is there an Objective-C equivalent to Python urllib and urllib2?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm using XMPP in Python, and I can send messages, but how can I receive?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":3190,"Q_Id":3121518,"Users Score":0,"Answer":"Good post. I notice this code snippet is also in the logger example in xmpppy sourceforge website.\nI wonder if it is possible to reply to incoming messages. The code above only receives and the nickname resource ID does not indicate who the sender is (in terms of JID format, user@server) unless xmpppy can translate that appropriately. So how might one take the received message nd \"echo\" it back to the sender? Or is that not easily possible with the xmpppy library and need to find a different XMPP library?","Q_Score":1,"Tags":"python,xmpp,xmpppy","A_Id":3553285,"CreationDate":"2010-06-25T21:17:00.000","Title":"How can I get a response with XMPP client in Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a C program and a Python program on the same machine. The C program generates some data in nested structures. What form of IPC is the best way to get this data across to the python program? \nSerializing in C (especially nested structures) is a real bear, from what I hear, due to lack of serialization libraries. 
I am not very familiar with shared memory, but I assume the formatting of the C structures may not be very palatable to the python program when it comes to memory alignment and following pointers. The ctypes and struct libraries seem to be for non-nested structures only. So far, what I am thinking is: \nWrap all the data in the C program into some xml or json format, write it via socket to the python program and then let the python program interpret the xml\/json formatted data. Looks very cumbersome with lots of overheads. \nAny better ideas?","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":766,"Q_Id":3127467,"Users Score":2,"Answer":"I think you answered your own question. JSON is certainly a good choice. It's also not terribly difficult to do your own serialization in C.","Q_Score":1,"Tags":"python,c,sockets,serialization","A_Id":3127588,"CreationDate":"2010-06-27T13:26:00.000","Title":"Sending binary data over IPC from C to Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"If you are parsing html or xml (with python), and looking for certain tags, it can hurt performance to lower or uppercase an entire document so that your comparisons are accurate. What percentage (estimated) of xml and html docs use any upper case characters in their tags?","AnswerCount":3,"Available Count":3,"Score":0.3215127375,"is_accepted":false,"ViewCount":98,"Q_Id":3127984,"Users Score":5,"Answer":"XML (and XHTML) tags are case-sensitive ... 
so <tag> and <Tag> would be different elements.","Q_Score":1,"Tags":"python,html,xml","A_Id":3127997,"CreationDate":"2010-06-27T16:30:00.000","Title":"When matching html or xml tags, should one worry about casing?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"If you are parsing html or xml (with python), and looking for certain tags, it can hurt performance to lower or uppercase an entire document so that your comparisons are accurate. What percentage (estimated) of xml and html docs use any upper case characters in their tags?","AnswerCount":3,"Available Count":3,"Score":0.1325487884,"is_accepted":false,"ViewCount":98,"Q_Id":3127984,"Users Score":2,"Answer":"Only if you're using XHTML as this is case sensitive, whereas HTML is not so you can ignore case differences. Test for the doctype before worrying about checking for case.","Q_Score":1,"Tags":"python,html,xml","A_Id":3128006,"CreationDate":"2010-06-27T16:30:00.000","Title":"When matching html or xml tags, should one worry about casing?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"If you are parsing html or xml (with python), and looking for certain tags, it can hurt performance to lower or uppercase an entire document so that your comparisons are accurate. What percentage (estimated) of xml and html docs use any upper case characters in their tags?","AnswerCount":3,"Available Count":3,"Score":1.2,"is_accepted":true,"ViewCount":98,"Q_Id":3127984,"Users Score":1,"Answer":"I think you're overly concerned about performance. 
If you're talking about arbitrary web pages, 90% of them will be HTML, not XHTML, so you should do case-insensitive comparisons. Lowercasing a string is extremely fast, and should be less than 1% of the total time of your parser. If you're not sure, carefully time your parser on a document that's already all lowercase, with and without the lowercase conversions.\nEven a pure-Python implementation of lower() would be negligible compared to the rest of the parsing, but it's better than that - CPython implements lower() in C code, so it really is as fast as possible.\nRemember, premature optimization is the root of all evil. Make your program correct first, then make it fast.","Q_Score":1,"Tags":"python,html,xml","A_Id":3128078,"CreationDate":"2010-06-27T16:30:00.000","Title":"When matching html or xml tags, should one worry about casing?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"My Python application makes a lot of HTTP requests using the urllib2 module. This application might be used over very unreliable networks where latencies could be low and dropped packets and network timeouts might be very common. Is is possible to override a part of the urllib2 module so that each request is retried an X number of times before raising any exceptions? Has anyone seen something like this? \nCan i achieve this without modifying my whole application and just creating a wrapper over the urllib2 module. Thus any code making requests using this module automatically gets to use the retry functionality.\nThanks.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":6335,"Q_Id":3130923,"Users Score":0,"Answer":"Modifying parts of a library is never a good idea.\nYou can write wrappers around the methods you use to fetch data that would provide the desired behavior. 
Which would be trivial.\nYou can for example define methods with the same names as in urllib2 in your own module called myurllib2. Then just change the imports everywhere you use urllib2","Q_Score":2,"Tags":"python,urllib2,urllib","A_Id":3131037,"CreationDate":"2010-06-28T08:17:00.000","Title":"Make urllib retry multiple times","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to develop my first python web project. It have multiple tabs (like apple.com have Store, iPhone, iPad etc tabs) and when user click on any tab, the page is served from server.\nI want to make sure that the selected tab will have different background color when page is loaded.\nWhich is a best way to do it? JavaScript\/CSS\/Directly from server? and How?\nThanks.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":149,"Q_Id":3137167,"Users Score":1,"Answer":"I think the best way would be through CSS. You can handle it by adding the pseudoclass :active to the CSS.\nOther way is serving the page with a new class added to the tab, which will change the background color, but I would not recommend that.","Q_Score":0,"Tags":"python","A_Id":3137299,"CreationDate":"2010-06-29T00:56:00.000","Title":"Highlight selected Tab - Python webpage","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am already aware of tag based HTML parsing in Python using BeautifulSoup, htmllib etc. \nHowever, I want a powerful engine which can do complex tasks like read html tables, lists etc. and present these as simple to use objects within code. 
Does python have such powerful libraries?","AnswerCount":3,"Available Count":1,"Score":0.1325487884,"is_accepted":false,"ViewCount":921,"Q_Id":3167679,"Users Score":2,"Answer":"BeautifulSoup is a nice library and provides a good way to parse HTML with some handy ways to parse the data very easily. \nWhat you are trying to do, can easily be done using some simple regular expressions. You can write regular expressions to search for a particular pattern of data and extract the data you need.","Q_Score":4,"Tags":"python,html-parsing","A_Id":3167761,"CreationDate":"2010-07-02T17:00:00.000","Title":"Complex HTML parsing with Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Where?\nI'm trying google and any of the proxys I've tried worked...\nI'm trying urllib.open with it...\nI don't know if urllib need some special proxy type or something like that...\nThank you\nps: I need some proxies to ping a certain website and not got banned from my ip","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":448,"Q_Id":3169425,"Users Score":0,"Answer":"You probably don't even need to use a proxy. The urllib module knows how to contact web servers directly. \nYou may need to use a proxy if you're behind certain kinds of corporate firewalls, but in that case you can't just choose any proxy to use, you have to use the corporate proxy. 
In such a case, a list of open proxies on Google isn't going to help you.","Q_Score":0,"Tags":"python,proxy","A_Id":3169461,"CreationDate":"2010-07-02T22:19:00.000","Title":"Where can I get some proxy list good for use it with Python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Where?\nI'm trying google and none of the proxies I've tried worked...\nI'm trying urllib.open with it...\nI don't know if urllib needs some special proxy type or something like that...\nThank you\nps: I need some proxies to ping a certain website and not get banned from my IP","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":448,"Q_Id":3169425,"Users Score":0,"Answer":"Try setting up your own proxy and connecting to it...","Q_Score":0,"Tags":"python,proxy","A_Id":3169457,"CreationDate":"2010-07-02T22:19:00.000","Title":"Where can I get some proxy list good for use it with Python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am new to python and scrapy. \nI am running the scrapy-ctl.py from another python script using the \nsubprocess module. But I want to pass the 'start url' to the spider from \nthis script itself. Is it possible to pass start_urls (which are \ndetermined in the script from which scrapy-ctl is run) to the spider? 
\nI will be grateful for any suggestions or ideas regarding this....:) \nThanking in advance....","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":179,"Q_Id":3179979,"Users Score":2,"Answer":"You can override the start_requests() method in your spider to get the starting requests (which, by default, are generated using the urls in the start_urls attribute).","Q_Score":0,"Tags":"python,windows,web-crawler,scrapy","A_Id":3186698,"CreationDate":"2010-07-05T13:49:00.000","Title":"how to parse a string to spider from another script","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I want my Python script to access a URL through an IP specified in the script instead of through the default DNS for the domain. Basically I want the equivalent of adding an entry to my \/etc\/hosts file, but I want the change to apply only to my script instead of globally on the whole server. Any ideas?","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":911,"Q_Id":3183617,"Users Score":2,"Answer":"Whether this works or not will depend on whether the far end site is using HTTP\/1.1 name-based virtual hosting or not.\nIf they're not, you can simply replace the hostname part of the URL with their IP address, per @Greg's answer.\nIf they are, however, you have to ensure that the correct Host: header is sent as part of the HTTP request. Without that, a virtual hosting web server won't know which site's content to give you. Refer to your HTTP client API (Curl?) 
to see if you can add or change default request headers.","Q_Score":2,"Tags":"python,dns,urllib,hosts","A_Id":3184895,"CreationDate":"2010-07-06T05:08:00.000","Title":"Alternate host\/IP for python script","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want my Python script to access a URL through an IP specified in the script instead of through the default DNS for the domain. Basically I want the equivalent of adding an entry to my \/etc\/hosts file, but I want the change to apply only to my script instead of globally on the whole server. Any ideas?","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":911,"Q_Id":3183617,"Users Score":0,"Answer":"You can use an explicit IP number to connect to a specific machine by embedding that into the URL: http:\/\/127.0.0.1\/index.html is equivalent to http:\/\/localhost\/index.html \nThat said, it isn't a good idea to use IP numbers instead of DNS entries. IPs change a lot more often than DNS entries, meaning your script has a greater chance of breaking if you hard-code the address instead of letting it resolve normally.","Q_Score":2,"Tags":"python,dns,urllib,hosts","A_Id":3183666,"CreationDate":"2010-07-06T05:08:00.000","Title":"Alternate host\/IP for python script","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is it possible to write a firewall in python? Say it would block all traffic?","AnswerCount":6,"Available Count":4,"Score":0.0996679946,"is_accepted":false,"ViewCount":31230,"Q_Id":3189138,"Users Score":3,"Answer":"I'm sure it's probably possible, but ill-advised. 
As mcandre mentions, most OSes couple the low level networking capabilities you need for a firewall tightly into the kernel and thus this task is usually done in C\/C++ and integrates tightly with the kernel. The microkernel OSes (Mach et al) might be more amenable than linux. You may be able to mix some python and C, but I think the more interesting discussion here is going to be around \"why should I\"\/\"why shouldn't I\" implement a firewall in python as opposed to just is it technically possible.","Q_Score":13,"Tags":"python,firewall","A_Id":3189187,"CreationDate":"2010-07-06T18:34:00.000","Title":"Is it possible to write a firewall in python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is it possible to write a firewall in python? Say it would block all traffic?","AnswerCount":6,"Available Count":4,"Score":0.0665680765,"is_accepted":false,"ViewCount":31230,"Q_Id":3189138,"Users Score":2,"Answer":"\"Yes\" - that's usually the answer to \"is it possible...?\" questions.\nHow difficult and specific implementations are something else entirely. I suppose technically in a don't do this sort of way, if you were hell-bent on making a quick firewall in Python, you could use the socket libraries and open connections to and from yourself on every port. I have no clue how effective that would be, though it seems like it wouldn't be. Of course, if you're simply interested in rolling your own, and doing this as a learning experience, then cool, you have a long road ahead of you and plenty of education.\nOTOH, if you're actually worried about network security there are tons of other products out there that you can use, from iptables on *nix, to ZoneAlarm on windows. 
Plenty of them are both free and secure so there's really no reason to roll your own except on an \"I want to learn\" basis.","Q_Score":13,"Tags":"python,firewall","A_Id":3189232,"CreationDate":"2010-07-06T18:34:00.000","Title":"Is it possible to write a firewall in python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is it possible to write a firewall in python? Say it would block all traffic?","AnswerCount":6,"Available Count":4,"Score":0.1325487884,"is_accepted":false,"ViewCount":31230,"Q_Id":3189138,"Users Score":4,"Answer":"I'm sure in theory you could achieve what you want, but I believe in practice your idea is not doable (if you wonder why, it's because it's too hard to \"interface\" a high level language with the low level kernel).\nWhat you could do instead is write some Python tool that controls the firewall of the operating system so you could add rules, delete them, etc. (in a similar way to what iptables does in Linux).","Q_Score":13,"Tags":"python,firewall","A_Id":3189540,"CreationDate":"2010-07-06T18:34:00.000","Title":"Is it possible to write a firewall in python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is it possible to write a firewall in python? Say it would block all traffic?","AnswerCount":6,"Available Count":4,"Score":0.0996679946,"is_accepted":false,"ViewCount":31230,"Q_Id":3189138,"Users Score":3,"Answer":"Interesting thread. 
I stumbled on it looking for Python NFQUEUE examples.\nMy take is you could create a great firewall in python and use the kernel.\nE.g.\nAdd a Linux firewall rule through iptables that forwards SYN packets (the first in a flow) to NFQUEUE for the python FW to decide what to do.\nIf you like it, mark the TCP stream\/flow with a FW mark using NFQUEUE and then have an iptables rule that just allows all traffic streams with the mark.\nThis way you can have a powerful high-level python program deciding to allow or deny traffic, and the speed of the kernel to forward all other packets in the same flow.","Q_Score":13,"Tags":"python,firewall","A_Id":15045900,"CreationDate":"2010-07-06T18:34:00.000","Title":"Is it possible to write a firewall in python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I need some advice. What Python framework can I use to develop a SOAP web service? I know about SOAPpy and ZSI but those libraries aren't under active development. Is there something better?\nThanks.","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1428,"Q_Id":3195437,"Users Score":0,"Answer":"If a library is not under active development, then there are two possibilities: either it was abandoned, or it has no errors anymore.\nWhy are you looking for something else? 
Did you test these two?","Q_Score":0,"Tags":"python,soap","A_Id":3195467,"CreationDate":"2010-07-07T13:55:00.000","Title":"Python framework for SOAP web services","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am trying to open a page using urllib2 but I keep getting connection timed out errors.\nThe line which I am using is: \nf = urllib2.urlopen(url) \nThe exact error is:\nURLError: ","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":17707,"Q_Id":3197299,"Users Score":1,"Answer":"As a general strategy, open Wireshark and watch the traffic generated by urllib2.urlopen(url). You may be able to see where the error is coming from.","Q_Score":3,"Tags":"python,urllib2","A_Id":3410537,"CreationDate":"2010-07-07T17:30:00.000","Title":"urllib2 connection timed out error","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'd like to integrate a web site written in Python (using Pylons) with an existing SAML based authentication service. From reading about SAML, I believe that the IdP (which already exists in this scenario) will send an XML document (via browser post) to the Service Provider (which I am implementing). 
The Service Provider will need to parse this XML and verify the identity of the user.\nAre there any existing Python libraries that implement this functionality?\nThank you,","AnswerCount":1,"Available Count":1,"Score":-0.1973753202,"is_accepted":false,"ViewCount":3557,"Q_Id":3198104,"Users Score":-1,"Answer":"I know you are looking for a Python based solution but there are quite a few \"server\" based solutions that would potentially solve your problem as well and require few ongoing code maintenance issues. \nFor example, using the Apache or IIS Integration kits in conjunction with the PingFederate server from www.pingidentity.com would allow you to pretty quickly and easily support SAML 1.0, 1.1, 2.0, WS-Fed and OpenID for your SP Application.\nHope this helps","Q_Score":7,"Tags":"python,authentication,saml,single-sign-on","A_Id":3255786,"CreationDate":"2010-07-07T19:23:00.000","Title":"Implementing a SAML client in Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"To give a little background, I'm writing (or am going to write) a daemon in Python for scheduling tasks to run at user-specified dates. The scheduler daemon also needs to have a JSON-based HTTP web service interface (buzzword mania, I know) for adding tasks to the queue and monitoring the scheduler's status. The interface needs to receive requests while the daemon is running, so they either need to run in a separate thread or cooperatively multitask somehow. Ideally the web service interface should run in the same process as the daemon, too.\nI could think of a few ways to do it, but I'm wondering if there's some obvious module out there that's specifically tailored for this kind of thing. Any suggestions about what to use, or about the project in general are quite welcome. Thanks! 
:)","AnswerCount":4,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":400,"Q_Id":3201446,"Users Score":0,"Answer":"Don't reinvent the wheel!\nRun jobs via a cron script, and create a separate web interface using, for example, Django or Tornado.\nConnect them via a database. Even sqlite will do the job if you don't want to scale on more machines.","Q_Score":0,"Tags":"python,web-services","A_Id":3201519,"CreationDate":"2010-07-08T07:26:00.000","Title":"what's a good module for writing an http web service interface for a daemon?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"To give a little background, I'm writing (or am going to write) a daemon in Python for scheduling tasks to run at user-specified dates. The scheduler daemon also needs to have a JSON-based HTTP web service interface (buzzword mania, I know) for adding tasks to the queue and monitoring the scheduler's status. The interface needs to receive requests while the daemon is running, so they either need to run in a separate thread or cooperatively multitask somehow. Ideally the web service interface should run in the same process as the daemon, too.\nI could think of a few ways to do it, but I'm wondering if there's some obvious module out there that's specifically tailored for this kind of thing. Any suggestions about what to use, or about the project in general are quite welcome. Thanks! :)","AnswerCount":4,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":400,"Q_Id":3201446,"Users Score":0,"Answer":"I believe all kinds of Python web frameworks would be useful.\nYou can pick up one like CherryPy, which is small enough to integrate into your system. Also CherryPy includes a pure python WSGI server for production. 
\nAlso the performance may not be as good as Apache, but it's already very stable.","Q_Score":0,"Tags":"python,web-services","A_Id":3201631,"CreationDate":"2010-07-08T07:26:00.000","Title":"what's a good module for writing an http web service interface for a daemon?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have been using BeautifulSoup but as I understand it that library is no longer being maintained. So what should I use? I have heard about XPath, but what else is there?","AnswerCount":4,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":653,"Q_Id":3244335,"Users Score":0,"Answer":"Well, if you're not duty-bound to python, you could always use a TagSoup parser. It's a Java library, but it gives very good results. You could also just use Tidy to clean your input before trying to parse it.","Q_Score":3,"Tags":"python,parsing","A_Id":3244345,"CreationDate":"2010-07-14T08:05:00.000","Title":"No more BeautifulSoup","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"My application has an XML-based configuration. It also has an XSD file. Before my application starts, xmllint will check the configuration against the XSD file. \nWith the growth of my application, the configuration structure has changed a bit. Now I have to face this problem: When I provide a new version of my application to the customer, I have to upgrade the existing configuration. \nHow can this be done easily and cleverly?\nMy idea is to build a configuration object using python, and then read configuration v1 from the file and save it as v2. 
But if later the structure is changed again, I have to build another configuration object model.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":102,"Q_Id":3247516,"Users Score":1,"Answer":"For all configuration settings that remain the same between configurations, have your installation script copy those over from the old config file if it exists. For the rest, just have some defaults that the user can change if necessary, as usual for a config file. Unless I've misunderstood the question, it sounds like you're making a bigger deal out of this than it needs to be.\nBy the way, you'd really only need one \"updater\" script, because you could parametrize the XML tagging such that it goes through your new config file\/config layout file, and then just check the tags in the old file against that and copy the data from the ones that are present in the new file. I haven't worked with XSD files before, so I don't know the specifics of working with them, but I don't think it should be that difficult.","Q_Score":0,"Tags":"python,xml,configuration,xsd,upgrade","A_Id":3247629,"CreationDate":"2010-07-14T15:06:00.000","Title":"Approach to upgrade application configuration","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Some web pages, having their urls, have \"Download\" Text, which are hyperlinks.\nHow can I get the hyperlinks from the urls\/pages by python or ironpython?\nAnd can I download the files with these hyperlinks by python or ironpython?\nHow can I do that?\nAre there any C# tools?\nI am not a native English speaker, so sorry for my English.","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":210,"Q_Id":3261198,"Users Score":1,"Answer":"The easiest way would be to pass the HTML page into an XML\/HTML parser, and then 
call getElementsByTagName(\"A\") on the root node. Once you get that, iterate through the list and pull out the href parameter.","Q_Score":0,"Tags":"c#,python,ironpython","A_Id":3261217,"CreationDate":"2010-07-16T00:56:00.000","Title":"How can I download files from web pages?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"How do I check if the page has pending AJAX or HTTP GET\/POST requests? I use javascript and\/or python for this checking. \nWhat I wanted to do is execute a script if a page has finished all requests. onload doesn't work for me; if you used Firebug's Net panel, you would know. onload fires when the page is loaded but there is a possibility that there are still pending requests hanging around somewhere.\nThank you in advance.","AnswerCount":8,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":25834,"Q_Id":3262473,"Users Score":0,"Answer":"You would need to keep track of each XMLHttpRequest and monitor whether it completes or the asynchronous callback is executed.","Q_Score":17,"Tags":"javascript,python,html","A_Id":3262533,"CreationDate":"2010-07-16T06:38:00.000","Title":"Check Pending AJAX requests or HTTP GET\/POST request","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"How do I check if the page has pending AJAX or HTTP GET\/POST requests? I use javascript and\/or python for this checking. \nWhat I wanted to do is execute a script if a page has finished all requests. onload doesn't work for me; if you used Firebug's Net panel, you would know. 
onload fires when the page is loaded but there is a possibility that there are still pending requests hanging around somewhere.\nThank you in advance.","AnswerCount":8,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":25834,"Q_Id":3262473,"Users Score":4,"Answer":"I see you mention you are using Prototype.js. You can track active requests with Prototype by checking the Ajax.activeRequestCount value. You could check this using setTimeout or setInterval to make sure that any requests triggered on page load have completed (if that's what you're looking to do).","Q_Score":17,"Tags":"javascript,python,html","A_Id":3263704,"CreationDate":"2010-07-16T06:38:00.000","Title":"Check Pending AJAX requests or HTTP GET\/POST request","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm running complex tests that create many cookies for different sections of my web site.\nOccasionally I have to restart the browser in the middle of a long test and since the Selenium server doesn't modify the base Firefox profile, the cookies evaporate.\nIs there any way I can save all of the cookies to a Python variable before terminating the browser and restore them after starting a new browser instance?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1991,"Q_Id":3265062,"Users Score":0,"Answer":"Yes, sure. 
Look at getCookie, getCookieByName and createCookie methods.","Q_Score":2,"Tags":"python,cookies,selenium,selenium-rc","A_Id":3314427,"CreationDate":"2010-07-16T13:02:00.000","Title":"How to save and restore all cookies with Selenium RC?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I would like to invoke my chrome or firefox browser when a file that I specify is modified. How could I \"watch\" that file to do something when it gets modified?\nProgrammatically it seems the steps are.. basically set a never ending interval every second or so and cache the initial modification date, then compare the date every second, when it changes invoke X.","AnswerCount":7,"Available Count":1,"Score":0.0855049882,"is_accepted":false,"ViewCount":27121,"Q_Id":3274334,"Users Score":3,"Answer":"Install inotify-tools and write a simple shell script to watch a file.","Q_Score":14,"Tags":"python,linux","A_Id":3274680,"CreationDate":"2010-07-18T04:39:00.000","Title":"How can I \"watch\" a file for modification \/ change?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm new to python programming, and want to try to edit scripts in IDLE instead of the OSX command line. However, when I try to start it, it gives me the error \"Idle Subprocess didn't make a connection. Either Idle can't start a subprocess or personal firewall software is blocking the connection.\" I don't have a firewall configured, so what could the problem be?","AnswerCount":2,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":13342,"Q_Id":3277946,"Users Score":2,"Answer":"You can try running IDLE with the \"-n\" option. 
From the IDLE help:\n\nRunning without a subprocess:\n\n If IDLE is started with the -n command line switch it will run in a\n single process and will not create the subprocess which runs the RPC\n Python execution server. This can be useful if Python cannot create\n the subprocess or the RPC socket interface on your platform. However,\n in this mode user code is not isolated from IDLE itself. Also, the\n environment is not restarted when Run\/Run Module (F5) is selected. If\n your code has been modified, you must reload() the affected modules and\n re-import any specific items (e.g. from foo import baz) if the changes\n are to take effect. For these reasons, it is preferable to run IDLE\n with the default subprocess if at all possible.","Q_Score":3,"Tags":"python,macos,subprocess","A_Id":3277996,"CreationDate":"2010-07-19T01:27:00.000","Title":"No IDLE Subprocess connection","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I want to make a python script that tests the bandwidth of a connection. I am thinking of downloading\/uploading a file of a known size using urllib2, and measuring the time it takes to perform this task. I would also like to measure the delay to a given IP address, such as is given by pinging the IP. Is this possible using urllib2?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1534,"Q_Id":3280391,"Users Score":0,"Answer":"You could download an empty file to measure the delay. 
You would measure more than only the network delay, but the difference shouldn't be too big, I expect.","Q_Score":4,"Tags":"python,urllib2,bandwidth","A_Id":3280448,"CreationDate":"2010-07-19T10:52:00.000","Title":"Bandwidth test, delay test using urllib2","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"On a linux box I've got a python script that's always started from a predefined user. It may take a while for it to finish so I want to allow other users to stop it from the web.\nUsing kill fails with Operation not permitted. \nCan I somehow modify my long running python script so that it'll receive a signal from another user? Obviously, that another user is the one that starts a web server.\nMaybe there's an entirely different way to approach this problem that I can't think of right now.","AnswerCount":4,"Available Count":3,"Score":0.049958375,"is_accepted":false,"ViewCount":332,"Q_Id":3281107,"Users Score":1,"Answer":"If you do not want to execute the kill command with the correct permissions, you can send any other signal to the other script. It is then the other script's responsibility to terminate. You cannot force it, unless you have the permissions to do so. \nThis can happen with a network connection, or a 'kill' file whose existence is checked by the other script, or anything else the script is able to listen to.","Q_Score":3,"Tags":"python,linux,signals,kill","A_Id":3281123,"CreationDate":"2010-07-19T12:46:00.000","Title":"terminate script of another user","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"On a linux box I've got a python script that's always started from a predefined user. 
It may take a while for it to finish so I want to allow other users to stop it from the web.\nUsing kill fails with Operation not permitted. \nCan I somehow modify my long running python script so that it'll receive a signal from another user? Obviously, that another user is the one that starts a web server.\nMaybe there's an entirely different way to approach this problem that I can't think of right now.","AnswerCount":4,"Available Count":3,"Score":0.049958375,"is_accepted":false,"ViewCount":332,"Q_Id":3281107,"Users Score":1,"Answer":"Off the top of my head, one solution would be threading the script and waiting for a kill signal via some form or another. Or rather than threading, you could have a file that the script checks every N times through a loop - then you just write a kill signal to that file (which of course has write permissions by the web user).\nI'm not terribly familiar with kill, other than killing my own scripts, so there may be a better solution.","Q_Score":3,"Tags":"python,linux,signals,kill","A_Id":3281132,"CreationDate":"2010-07-19T12:46:00.000","Title":"terminate script of another user","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"On a linux box I've got a python script that's always started from a predefined user. It may take a while for it to finish so I want to allow other users to stop it from the web.\nUsing kill fails with Operation not permitted. \nCan I somehow modify my long running python script so that it'll receive a signal from another user? 
Obviously, that another user is the one that starts a web server.\nMaybe there's an entirely different way to approach this problem that I can't think of right now.","AnswerCount":4,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":332,"Q_Id":3281107,"Users Score":0,"Answer":"You could use sudo to perform the kill command as root, but that is horrible practice.\nHow about having the long-running script check some condition every x seconds, for example the existence of a file like \/tmp\/stop-xyz.txt? If that file is found, the script terminates itself immediately. \n(Or any other means of inter-process communication - it doesn't matter.)","Q_Score":3,"Tags":"python,linux,signals,kill","A_Id":3281121,"CreationDate":"2010-07-19T12:46:00.000","Title":"terminate script of another user","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am working on an application, and my job is just to develop a sample Python interface for the application. The application can provide an XML-based document, and I can get the document via the HTTP GET method, but the problem is that the XML-based document is endless, which means there will be no end element. I know that the document should be handled by SAX, but how to deal with the endless problem? Any idea, sample code?","AnswerCount":7,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1933,"Q_Id":3284289,"Users Score":0,"Answer":"If the document is endless, why not add the end tag (of the main element) manually before opening it in the parser? 
I don't know Python, but why not add <\/endtag> to the string?","Q_Score":5,"Tags":"python,xml","A_Id":3284880,"CreationDate":"2010-07-19T19:24:00.000","Title":"python handle endless XML","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"This is my first question here, so I hope it will be done correctly ;)\nI've been assigned the task to give a web interface to some \"home made\" python script.\nThis script is used to check some web sites\/applications availability, via curl commands. A very important aspect of this script is that it gives its results in real-time, writing line by line to the standard output.\nBy giving a web interface to this script, the main goal is that the script can be easily used from anywhere, for example via a smartphone. So the web interface must be quite basic, and work \"plugin-free\".\nMy problem is that most solutions I thought of or found on the web (ajax, django, even a simple post) seem to need a full generation of the page before sending it to the browser, losing this important \"real-time\" aspect.\nAny idea on how to do this properly?\nThanks in advance.","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":3095,"Q_Id":3289584,"Users Score":0,"Answer":"Your task sounds interesting. :-) A scenario that just came to mind: You continuously scrape the resources with your home-brew scripts, and push the results into your persistent database and a caching system -- like Redis -- simultaneously. Your caching system\/layer serves as the primary data source when serving client requests. Redis, for example, is a high-performance key-value store that is capable of handling 100k connections per second. Though only the n latest (say 
50k entries) matter, the caching system will only hold these entries and lets you focus solely on developing the server-side API (handling connections, processing requests, reading from Redis) and the frontend. The communication between the frontend and the backend API could be driven by WebSocket connections -- a fairly new part of the HTML5 spec, but already supported by many recent browser versions. Alternatively, you could fall back on some asynchronous Flash socket solution. WebSockets basically allow for persistent connections between a client and a server; you can register event listeners that are called for every incoming data packet -- no endless polling or other workarounds.","Q_Score":8,"Tags":"python","A_Id":3289731,"CreationDate":"2010-07-20T11:50:00.000","Title":"Web-ifing a python command line script?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I need to use Python 2.4.4 to convert XML to and from a Python dictionary. All I need are the node names and values, I'm not worried about attributes because the XML I'm parsing doesn't have any. I can't use ElementTree because that isn't available for 2.4.4, and I can't use 3rd party libraries due to my work environment. What's the easiest way for me to do this? Are there any good snippets?\nAlso, if there isn't an easy way to do this, are there any alternative serialization formats that Python 2.4.4 has native support for?","AnswerCount":5,"Available Count":1,"Score":0.0399786803,"is_accepted":false,"ViewCount":24276,"Q_Id":3292973,"Users Score":1,"Answer":"Grey's link includes some solutions that look pretty robust. 
If you want to roll your own though, you could walk each xml.dom node's childNodes list recursively, terminating when a node has no child nodes left.","Q_Score":4,"Tags":"python,xml,serialization,xml-serialization,python-2.4","A_Id":3294357,"CreationDate":"2010-07-20T18:11:00.000","Title":"XML to\/from a Python dictionary","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"When I put my Mac to sleep while an interactive Python session with a selenium instance and corresponding browser is running, after waking up the browser (or selenium server?) is no longer responding to any commands from the Python shell.\nThis forces me to restart the browser, losing the state of my test.\nIs there a way to overcome this?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":429,"Q_Id":3293788,"Users Score":0,"Answer":"You might be able to make this work by setting ridiculously large timeout values on your Selenium commands. But, you may still run into a problem where MacOS X kills the network connection when it goes to sleep. 
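A minimal sketch of the childNodes recursion suggested above, using only the standard library's xml.dom.minidom (already present in Python 2.4; the function names here are made up for illustration):

```python
import xml.dom.minidom

def node_to_value(node):
    """Recursively convert an element: text for leaf elements,
    a dict of child-name -> value otherwise (repeated names become lists)."""
    children = [c for c in node.childNodes if c.nodeType == c.ELEMENT_NODE]
    if not children:
        # Leaf: concatenate the element's text nodes.
        return "".join(c.data for c in node.childNodes
                       if c.nodeType == c.TEXT_NODE).strip()
    result = {}
    for child in children:
        value = node_to_value(child)
        if child.tagName in result:
            # Repeated tag: promote the existing value to a list.
            existing = result[child.tagName]
            if not isinstance(existing, list):
                result[child.tagName] = [existing]
            result[child.tagName].append(value)
        else:
            result[child.tagName] = value
    return result

def xml_to_dict(text):
    doc = xml.dom.minidom.parseString(text)
    return {doc.documentElement.tagName: node_to_value(doc.documentElement)}
```

This ignores attributes, which matches the question's stated constraints; going the other way (dict to XML) is a similar recursion that emits `<key>value</key>` pairs.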
Once the connection is severed, your only real option would be to grab the test session ID and try to reconnect to it, provided Selenium hasn't timed the commands out yet.","Q_Score":2,"Tags":"python,macos,selenium,selenium-rc","A_Id":4579133,"CreationDate":"2010-07-20T19:50:00.000","Title":"How to resume a Selenium RC test after a computer sleep?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am developing an online browser game, based on google maps, with a Django backend, and I am getting close to the point where I need to make a decision on how to implement the (backend) timed events - i.e. NPC possession quantity raising (e.g. city population should grow based on some variables - city size, application speed).\nThe possible solutions I found are:\n\nPutting the queued actions in a table and processing them along with every request.\n\nProblems: huge overhead, harder to implement\n\n\nUsing cron or something similar\n\nProblem: this is an external tool, and I want as few external tools as possible.\n\n\n\nAny other solutions?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":770,"Q_Id":3294682,"Users Score":5,"Answer":"Running a scheduled task to perform updates in your game, at any interval, will give you a spike of heavy database use. If your game logic relies on all of those database values being up to date at the same time (which is very likely, if you're running an interval-based update), you'll have to have scheduled downtime for as long as that cronjob is running. When that time becomes longer, as your player base grows, this becomes extremely annoying.\nIf you're trying to reduce database overhead, you should store values with their last update time and growth rates, and only update those rows when the quantity or rate of growth changes. 
\nFor example, a stash of gold, that grows at 5 gold per minute, only updates when a player withdraws gold from it. When you need to know the current amount, it is calculated based on the last update time, the current time, the amount stored at the last update, and the rate of growth.\nData that changes over time, without requiring interaction, does not belong in the database. It belongs in the logic end of your game. When a player performs an activity you need to remember, or a calculation becomes too cumbersome to generate again, that's when you store it.","Q_Score":3,"Tags":"python,django,cron","A_Id":3294995,"CreationDate":"2010-07-20T21:50:00.000","Title":"Browser-based MMO best-practice","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I developed a system that consists of software and hardware interaction. Basically its a transaction system where the transaction details are encrypted on a PCI device then returned back to my web based system where it is stored in a DB then displayed using javascript\/extjs in the browser. How I do this now is the following:\nTransaction encoding process\n1.The user selects a transaction from a grid and presses \"encode\" button,extjs\/js then sends the string to PHP where it is formatted and inserted into requests[incoming_request]. 
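The lazy-update gold example above can be sketched like this; the class and field names are hypothetical, and growth is expressed per second rather than per minute to keep the arithmetic simple:

```python
import time

class GoldStash(object):
    """Gold that grows at `rate` per second; the stored amount is only
    rewritten when someone actually withdraws (illustrative names)."""

    def __init__(self, amount, rate, now=None):
        self.amount = amount            # amount at the last update
        self.rate = rate                # growth per second
        self.updated_at = now if now is not None else time.time()

    def current(self, now=None):
        """Derive the present amount; no database write needed."""
        now = now if now is not None else time.time()
        return self.amount + self.rate * (now - self.updated_at)

    def withdraw(self, qty, now=None):
        """Only here do we 'write back': amount and timestamp change."""
        now = now if now is not None else time.time()
        available = self.current(now)
        if qty > available:
            raise ValueError("not enough gold")
        self.amount = available - qty
        self.updated_at = now
        return qty
```

In a Django model the `amount`, `rate`, and `updated_at` fields would live on the row, and `current()` would be a property computed at read time, exactly as the answer describes.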
At this stage I start an extjs taskmanager to do interval checks on the requests[response] column for a result, and I display a \"please wait...\" message.\n2. I have created a python daemon service that monitors the requests table for any transactions to encode. The python daemon then picks up any requests[incoming_request], encodes the request, and stores the result in the requests[response] table.\n3. The extjs taskmanager then picks up the requests[response] for the transaction, displays it to the user, and then removes the \"please wait...\" message and terminates the taskmanager.\nNow my question is: Is there a better way of doing this encryption process by using 3rd party Messaging and Queuing middleware systems? If so please help.\nThank You!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":362,"Q_Id":3297110,"Users Score":0,"Answer":"I would change it this way:\n\nmake PHP block and wait until the Python daemon finishes processing the transaction\nincrease the timeout in the Ext.data.Connection() so it would wait until PHP responds\nremove the Ext.MessageBox and handle possible errors in the callback handler in Ext.data.Connection()\n\nI.e. 
instead of waiting for the transaction to complete in JavaScript (which requires several calls to the webserver) you are now waiting in PHP.\nThis is assuming you are using Ext.data.Connection() to call the PHP handler - if any other Ext object is used the principle is the same but the timeout setting \/ completion handling would differ.","Q_Score":1,"Tags":"php,javascript,python,ajax,extjs","A_Id":3309648,"CreationDate":"2010-07-21T07:34:00.000","Title":"I need help with messaging and queuing middleware systems for extjs","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"When would someone use httplib and when urllib?\nWhat are the differences?\nI think I ready urllib uses httplib, I am planning to make an app that will need to make http request and so far I only used httplib.HTTPConnection in python for requests, and reading about urllib I see I can use that for request too, so whats the benefit of one or the other?","AnswerCount":6,"Available Count":3,"Score":1.0,"is_accepted":false,"ViewCount":42140,"Q_Id":3305250,"Users Score":10,"Answer":"urllib\/urllib2 is built on top of httplib. 
It offers more features than writing to httplib directly.\nHowever, httplib gives you finer control over the underlying connections.","Q_Score":56,"Tags":"python,http,urllib,httplib","A_Id":3305508,"CreationDate":"2010-07-22T01:58:00.000","Title":"Python urllib vs httplib?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"When would someone use httplib and when urllib?\nWhat are the differences?\nI think I read that urllib uses httplib. I am planning to make an app that will need to make http requests, and so far I have only used httplib.HTTPConnection in python for requests; reading about urllib I see I can use that for requests too, so what's the benefit of one or the other?","AnswerCount":6,"Available Count":3,"Score":1.0,"is_accepted":false,"ViewCount":42140,"Q_Id":3305250,"Users Score":6,"Answer":"If you're dealing solely with http\/https and need access to HTTP-specific stuff, use httplib.\nFor all other cases, use urllib2.","Q_Score":56,"Tags":"python,http,urllib,httplib","A_Id":3305339,"CreationDate":"2010-07-22T01:58:00.000","Title":"Python urllib vs httplib?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"When would someone use httplib and when urllib?\nWhat are the differences?\nI think I read that urllib uses httplib. I am planning to make an app that will need to make http requests, and so far I have only used httplib.HTTPConnection in python for requests; reading about urllib I see I can use that for requests too, so what's the benefit of one or the other?","AnswerCount":6,"Available Count":3,"Score":1.2,"is_accepted":true,"ViewCount":42140,"Q_Id":3305250,"Users Score":46,"Answer":"urllib (particularly 
urllib2) handles many things by default or has appropriate libs to do so. For example, urllib2 will follow redirects automatically, and you can use a cookiejar to handle login scripts. These are all things you'd have to code yourself if you were using httplib.","Q_Score":56,"Tags":"python,http,urllib,httplib","A_Id":3305261,"CreationDate":"2010-07-22T01:58:00.000","Title":"Python urllib vs httplib?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using google app engine for fetching the feed url, but a few of the urls are 301 redirects; I want to get the final url which returns me the result.\nI am using the universal feed reader for parsing the url; is there any way or any function which can give me the final url?","AnswerCount":3,"Available Count":2,"Score":0.1973753202,"is_accepted":false,"ViewCount":1704,"Q_Id":3309695,"Users Score":3,"Answer":"It is not possible to get the 'final' URL by parsing; in order to resolve it, you would need to at least perform an HTTP HEAD operation","Q_Score":1,"Tags":"python,google-app-engine,feedparser","A_Id":3309766,"CreationDate":"2010-07-22T14:05:00.000","Title":"how to get final redirected url","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I am using google app engine for fetching the feed url, but a few of the urls are 301 redirects; I want to get the final url which returns me the result.\nI am using the universal feed reader for parsing the url; is there any way or any function which can give me the final url?","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":1704,"Q_Id":3309695,"Users Score":0,"Answer":"You can do this by handling redirects 
manually. When calling fetch, pass in follow_redirects=False. If your response object's HTTP status is a redirect code, either 301 or 302, grab the Location response header and fetch again, until the HTTP status is something else. Add a sanity check (perhaps 5 redirects max) to avoid redirect loops.","Q_Score":1,"Tags":"python,google-app-engine,feedparser","A_Id":3309853,"CreationDate":"2010-07-22T14:05:00.000","Title":"how to get final redirected url","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"The big mission: I am trying to get a few lines of summary of a webpage. i.e. I want to have a function that takes a URL and returns the most informative paragraph from that page. (Which would usually be the first paragraph of actual content text, in contrast to \"junk text\", like the navigation bar.)\nSo I managed to reduce an HTML page to a bunch of text by cutting out the tags, throwing out the <head> and all the scripts. But some of the text is still \"junk text\". I want to know where the actual paragraphs of text begin. (Ideally it should be human-language-agnostic, but if you have a solution only for English, that might help too.)\nHow can I figure out which of the text is \"junk text\" and which is actual content?\nUPDATE: I see some people have pointed me to use an HTML parsing library. I am using Beautiful Soup. 
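The manual redirect loop described in the last answer might look like this. To keep it independent of App Engine, `fetch` is an injected callable (an assumption for this sketch); on GAE it could wrap urlfetch.fetch(url, follow_redirects=False) and return the status code and headers:

```python
MAX_REDIRECTS = 5  # sanity limit from the answer

def resolve_final_url(fetch, url, max_redirects=MAX_REDIRECTS):
    """Follow 301/302 hops manually until a non-redirect status.

    `fetch` is any callable returning (status_code, headers_dict);
    on App Engine it could wrap urlfetch.fetch(url, follow_redirects=False).
    """
    for _ in range(max_redirects):
        status, headers = fetch(url)
        if status not in (301, 302):
            return url            # final, non-redirecting URL
        location = headers.get("Location")
        if not location:
            return url            # malformed redirect: stop here
        url = location
    raise RuntimeError("too many redirects for %s" % url)
```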
My problem isn't parsing HTML; I already got rid of all the HTML tags, I just have a bunch of text and I want to separate the content text from the junk text.","AnswerCount":4,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":2958,"Q_Id":3325817,"Users Score":2,"Answer":"A general solution to this is a non-trivial problem to solve.\nTo put this in context, a large part of Google's success with search has come from their ability to automatically discern some semantic meaning from arbitrary Web pages, namely figuring out where the \"content\" is.\nOne idea that springs to mind: if you can crawl many pages from the same site, you will be able to identify patterns. Menu markup will be largely the same between all pages. If you zero this out somehow (and it will need to be fairly \"fuzzy\"), what's left is the content.\nThe next step would be to identify the text and what constitutes a boundary. Ideally that would be some HTML paragraphs, but you won't get that lucky most of the time.\nA better approach might be to find the RSS feeds for the site and get the content that way, because that will be stripped down as-is. Ignore any AdSense (or similar) content and you should be able to get the text.\nOh, and absolutely throw out your regex code for this. This requires an HTML parser, absolutely without question.","Q_Score":2,"Tags":"python,html,text,screen-scraping","A_Id":3325874,"CreationDate":"2010-07-24T16:15:00.000","Title":"Python: Detecting the actual text paragraphs in a string","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm learning Python and would like to start a small project. It seems that making IRC bots is a popular project amongst beginners so I thought I would implement one. 
Obviously, there are core functionalities like being able to connect to a server and join a channel but what are some good functionalities that are usually included in the bots? Thanks for your ideas.","AnswerCount":5,"Available Count":4,"Score":0.0399786803,"is_accepted":false,"ViewCount":3546,"Q_Id":3328315,"Users Score":1,"Answer":"Again, this is an utterly personal suggestion, but I would really like to see eggdrop rewritten in Python.\nSuch a project could use Twisted to provide the base IRC interaction, but would then need to support add-on scripts.\nThis would be great for allowing easy IRC bot functionality to be built upon using python, instead of TCL, scripts.","Q_Score":4,"Tags":"python,irc","A_Id":3329252,"CreationDate":"2010-07-25T06:40:00.000","Title":"IRC bot functionalities","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm learning Python and would like to start a small project. It seems that making IRC bots is a popular project amongst beginners so I thought I would implement one. Obviously, there are core functionalities like being able to connect to a server and join a channel but what are some good functionalities that are usually included in the bots? Thanks for your ideas.","AnswerCount":5,"Available Count":4,"Score":0.0,"is_accepted":false,"ViewCount":3546,"Q_Id":3328315,"Users Score":0,"Answer":"That is very subjective and totally depends upon where the bot will be used. I'm sure others will have nice suggestions. But whatever you do, please do not query users arbitrarily. 
And do not spam the main chat periodically.","Q_Score":4,"Tags":"python,irc","A_Id":3328343,"CreationDate":"2010-07-25T06:40:00.000","Title":"IRC bot functionalities","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm learning Python and would like to start a small project. It seems that making IRC bots is a popular project amongst beginners so I thought I would implement one. Obviously, there are core functionalities like being able to connect to a server and join a channel but what are some good functionalities that are usually included in the bots? Thanks for your ideas.","AnswerCount":5,"Available Count":4,"Score":0.0399786803,"is_accepted":false,"ViewCount":3546,"Q_Id":3328315,"Users Score":1,"Answer":"I'm also in the process of writing a bot in node.js. Here are some of my goals\/functions:\n\nmap '@' command so the bot detects the last URI in message history and uses the w3 html validation service\nsetup a trivia game by invoking !ask, asks a question with 3 hints, have the ability to load custom questions based on category\nget the weather with weather [zip\/name]\nhook up jseval command to evaluate javascript, same for python and perl and haskell\nseen command that reports the last time the bot has \"seen\" a person online\ntranslate command to translate X language string to Y language string\nmap dict to a dictionary service\nmap wik to wiki service","Q_Score":4,"Tags":"python,irc","A_Id":3328322,"CreationDate":"2010-07-25T06:40:00.000","Title":"IRC bot functionalities","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm learning Python and would like to start a small project. 
It seems that making IRC bots is a popular project amongst beginners so I thought I would implement one. Obviously, there are core functionalities like being able to connect to a server and join a channel but what are some good functionalities that are usually included in the bots? Thanks for your ideas.","AnswerCount":5,"Available Count":4,"Score":0.0,"is_accepted":false,"ViewCount":3546,"Q_Id":3328315,"Users Score":0,"Answer":"Make a google search to get a library that implements IRC protocol for you. That way you only need to add the features, those are already something enough to bother you.\nCommon functions:\n\nConduct a search from a wiki or google\nNotify people on project\/issue updates\nLeave a message\nToy for spamming the channel\nPick a topic\nCategorize messages\nSearch from channel logs","Q_Score":4,"Tags":"python,irc","A_Id":3328376,"CreationDate":"2010-07-25T06:40:00.000","Title":"IRC bot functionalities","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Suppose I downloaded the HTML code, and I can parse it.\nHow do I get the \"best\" description of that website, if that website does not have meta-description tag?","AnswerCount":3,"Available Count":1,"Score":0.0665680765,"is_accepted":false,"ViewCount":519,"Q_Id":3332494,"Users Score":1,"Answer":"It's very hard to come up with a rule that works 100% of the time, obviously, but my suggestion as a starting point would be to look for the first

<h1> tag (or <h2>, <h3>, etc - the highest one you can find) then the bit of text after that can be used as the description. As long as the site is semantically marked-up, that should give you a good description (I guess you could also take the contents of the <h1> itself, but that's more like the \"title\").\nIt's interesting to note that Google (for example) uses a keyword-specific extract of the page contents to display as the description, rather than a static description. Not sure if that'll work for your situation, though.","Q_Score":6,"Tags":"python,html,string,templates,parsing","A_Id":3332528,"CreationDate":"2010-07-26T05:59:00.000","Title":"What's the best way to get a description of the website, in Python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Hi, I need to parse an xml file which is more than 1 MB in size. I know GAE can handle requests and responses up to 10 MB, but as we need to use the SAX parser API, and the GAE API has a limit of 1 MB, is there a way we can parse a file of more than 1 MB anyway?","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":485,"Q_Id":3332897,"Users Score":2,"Answer":"The 1MB limit doesn't apply to parsing; however, you can't fetch more than 1MB from URLfetch; you'll only get the first 1MB from the API.\nIt's probably not going to be possible to get the XML into your application using the URLfetch API. If the data is smaller than 10MB, you can arrange for an external process to POST it to your application and then process it. 
If it's between 10MB and 2GB, you'd need to use the Blobstore API to upload it, read it in to your application in 1MB chunks, and process the concatenation of those chunks.","Q_Score":1,"Tags":"python,google-app-engine,parsing","A_Id":3334425,"CreationDate":"2010-07-26T07:14:00.000","Title":"Google app engine parsing xml more then 1 mb","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"I use python ftplib to connect to a ftp server which is running on active mode; That means the server will connect my client machine on a random port when data is sent between us.\nConsidering security issue, Can I specify the client's data port (or port range) and let the server connect the certain port? \nMany Thanks for your response.","AnswerCount":2,"Available Count":1,"Score":-0.1973753202,"is_accepted":false,"ViewCount":1268,"Q_Id":3333929,"Users Score":-2,"Answer":"Since Python 3.3, ftplib functions that establish connections take a source_addr argument that allows you to do exactly this.","Q_Score":0,"Tags":"python,networking,ftplib","A_Id":68453582,"CreationDate":"2010-07-26T10:16:00.000","Title":"How can I specify the client's data port for a ftp server in active mode?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm using long polling for a chat with gevent. I'm using Event.wait() when waiting for new messages to be posted on the chat.\n\nI would like to handle the occasion a client disconnects with some functionality:\ne.g. Return \"client has disconnected\" as a message for other chat users\n\nIs this possible? 
=)","AnswerCount":3,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1701,"Q_Id":3334777,"Users Score":1,"Answer":"This depends on which WSGI server you use. AFAIK gevent.wsgi will not notify your handler in any way when the client closes the connection, because libevent-http does not do that. However, with gevent.pywsgi it should be possible. You'll probably need to start an additional greenlet to monitor the socket condition and somehow notify the greenlet that runs the handler, e.g. by killing it. I could be missing an easier way to do this though.","Q_Score":7,"Tags":"python,django,long-polling,gevent","A_Id":3338935,"CreationDate":"2010-07-26T12:31:00.000","Title":"Capturing event of client disconnecting! - Gevent\/Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"So I have a simple socket server on an android emulator. When I'm only sending data to it, it works just fine. But then if I want to echo that data back to the python script, it doesn't work at all. 
Here's the code that works:\nandroid:\n\n\n\ntry {\n serverSocket = new ServerSocket(port);\n } catch (IOException e1) {\n \/\/ TODO Auto-generated catch block\n e1.printStackTrace();\n }\n\n\n while (checkingForClients) {\n\n try {\n clientSocket = serverSocket.accept();\n\n out = new PrintWriter(clientSocket.getOutputStream(), true);\n in = new BufferedReader(new InputStreamReader(\n clientSocket.getInputStream()));\n\n line = null;\n while ((line = in.readLine()) != null) {\n Log.d(\"ServerActivity\", line);\n\n \/* THIS IS THE LINE THAT DOESN'T WORK*\/\n \/\/out.println(line);\n handler.post(new Runnable() {\n @Override\n public void run() {\n\n if(incomingData == null){\n Log.e(\"Socket Thingey\", \"Null Error\");\n }\n \/\/out.println(line);\n\n incomingData.setText(\"Testing\");\n\n incomingData.setText(line);\n\n\n }\n });\n\n }\n\n } catch (IOException e) {\n \/\/ TODO Auto-generated catch block\n e.printStackTrace();\n }\n\n\n\npython:\n\n\n\nimport socket \n\nhost = 'localhost' \nport = 5000\nsize = 1024 \ns = socket.socket(socket.AF_INET, socket.SOCK_STREAM) \ns.connect((host,port)) \ns.send('Hello, World!') \n# data = s.recv(size) THIS LINE CAUSES PROBLEMS\ns.close() \nprint 'Received:' , data\n\n\n\nSo there are 2 commented lines. Without those lines, it works perfectly. But if I add in s.recv(size) in python it just freezes and I assume waits for the received data. But the problem is that the android code never gets the sent data. 
So I have no idea what to do.\nKeep in mind I'm new to python and to sockets.","AnswerCount":4,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":2058,"Q_Id":3339971,"Users Score":0,"Answer":"The Android code is reading lines, so you probably need to send a \\n or possibly \\r\\n at the end of your Python send string.","Q_Score":0,"Tags":"java,python,android,sockets","A_Id":8259497,"CreationDate":"2010-07-27T00:39:00.000","Title":"python receiving from a socket","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I want to download an image file from potentially 5 sites.\nMeaning that if the image wasn't found in site#1, try site#2, etc.\nHow can I test if the file was downloaded?","AnswerCount":1,"Available Count":1,"Score":0.537049567,"is_accepted":false,"ViewCount":153,"Q_Id":3340152,"Users Score":3,"Answer":"You can call getcode() on the object you get back from urlopen().\ngetcode() gives you the HTTP status response from the server, so you can test to see if you got an HTTP 200 response, which would mean the download was successful.","Q_Score":1,"Tags":"python,urllib","A_Id":3340185,"CreationDate":"2010-07-27T01:28:00.000","Title":"When download an image, does urllib have a return code if it's successful or not?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Can I ask a few questions in one post to an XML-RPC server?\nIf yes, how can I do it in python and xmlrpclib?\nI'm using an XML-RPC server over a slow connection, so I would like to call a few functions at once, because each call costs me 700ms.","AnswerCount":2,"Available 
Count":1,"Score":0.0,"is_accepted":false,"ViewCount":231,"Q_Id":3343082,"Users Score":0,"Answer":"Whether or not multicall support makes any difference to you depends on where the 700ms is going.\nHow did you measure your 700ms?\nRun a packet capture of a query and analyse the results. It should be possible to roughly infer the round-trip time, bandwidth constraints, and whether the cost is in the application layer of the server or even the name resolution of your client machine.","Q_Score":0,"Tags":"python,soap,xml-rpc,xmlrpclib","A_Id":3343497,"CreationDate":"2010-07-27T11:25:00.000","Title":"Does XML-RPC in general allows to call few functions at once?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"how and with which python library is it possible to make an httprequest (https) with a user:password or a token?\nbasically the equivalent to curl -u user:pwd https:\/\/www.mysite.com\/\nthank you","AnswerCount":4,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":2459,"Q_Id":3355822,"Users Score":0,"Answer":"class urllib2.HTTPSHandler \nA class to handle opening of HTTPS URLs.\n21.6.7. HTTPPasswordMgr Objects\nThese methods are available on HTTPPasswordMgr and HTTPPasswordMgrWithDefaultRealm objects.\nHTTPPasswordMgr.add_password(realm, uri, user, passwd) \nuri can be either a single URI, or a sequence of URIs. realm, user and passwd must be strings. This causes (user, passwd) to be used as authentication tokens when authentication for realm and a super-URI of any of the given URIs is given.\nHTTPPasswordMgr.find_user_password(realm, authuri) \nGet user\/password for given realm and URI, if any. 
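Worth noting for the XML-RPC question above: xmlrpclib ships a MultiCall helper that batches several calls into a single system.multicall request, provided the server supports it. A self-contained sketch (Python 3 module names; in the Python 2 of the question, use xmlrpclib and SimpleXMLRPCServer instead):

```python
import threading
from xmlrpc.client import ServerProxy, MultiCall
from xmlrpc.server import SimpleXMLRPCServer

def start_demo_server():
    """Tiny local server so the batching can be demonstrated end to end."""
    server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
    server.register_multicall_functions()   # enables system.multicall
    server.register_function(lambda a, b: a + b, "add")
    server.register_function(lambda a, b: a * b, "mul")
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

server = start_demo_server()
port = server.server_address[1]

proxy = ServerProxy("http://127.0.0.1:%d" % port)
batch = MultiCall(proxy)    # queues calls locally...
batch.add(2, 3)
batch.mul(4, 5)
results = list(batch())     # ...then sends them in ONE HTTP request
server.shutdown()
```

With one round trip instead of two, the 700ms-per-call connection cost from the question is paid only once per batch.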
This method will return (None, None) if there is no matching user\/password.\nFor HTTPPasswordMgrWithDefaultRealm objects, the realm None will be searched if the given realm has no matching user\/password.","Q_Score":7,"Tags":"python,authentication,httprequest,token","A_Id":3355925,"CreationDate":"2010-07-28T17:51:00.000","Title":"python http request with token","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to scrape and submit information to websites that heavily rely on Javascript to do most of their actions. The website won't even work when I disable Javascript in my browser.\nI've searched for some solutions on Google and SO, and there was someone who suggested I should reverse engineer the Javascript, but I have no idea how to do that. \nSo far I've been using Mechanize, and it works on websites that don't require Javascript.\nIs there any way to access websites that use Javascript by using urllib2 or something similar? \nI'm also willing to learn Javascript, if that's what it takes.","AnswerCount":6,"Available Count":1,"Score":1.0,"is_accepted":false,"ViewCount":23351,"Q_Id":3362859,"Users Score":6,"Answer":"I would actually suggest using Selenium. It's mainly designed for testing web applications from a \"user perspective\"; however, it is basically a \"Firefox\" driver. I've actually used it for this purpose ... although I was scraping a dynamic AJAX webpage. 
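As a complement to the HTTPPasswordMgr documentation quoted above: what curl -u user:pwd actually sends is an Authorization: Basic header containing base64(user:pwd), and building that header by hand is often the quickest equivalent (a bearer token works the same way with a different header value). The URL in the comment below is just the one from the question:

```python
import base64

def basic_auth_header(user, password):
    """Build the Authorization header value `curl -u user:password` sends."""
    token = base64.b64encode(("%s:%s" % (user, password)).encode("utf-8"))
    return "Basic " + token.decode("ascii")

# Attaching it to a request (Python 3 spelling; use urllib2.Request in Python 2):
# from urllib.request import Request, urlopen
# req = Request("https://www.mysite.com/",
#               headers={"Authorization": basic_auth_header("user", "pwd")})
# resp = urlopen(req)
```

For token auth, replace the header value with e.g. "Bearer <token>"; the HTTPPasswordMgr/HTTPBasicAuthHandler machinery from the quoted docs is the fuller mechanism when the server issues 401 challenges with realms.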
As long as the Javascript form has a recognizable \"Anchor Text\" that Selenium can \"click\", everything should sort itself out.\nHope that helps","Q_Score":17,"Tags":"javascript,python,screen-scraping","A_Id":3364608,"CreationDate":"2010-07-29T13:18:00.000","Title":"Scraping websites with Javascript enabled?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Is it possible to control a web browser like Firefox using Python? \nI would want to do things like \n\nlaunch the browser\nforce clicks on URLs\ntake screenshots \n\netc.","AnswerCount":6,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":50784,"Q_Id":3369073,"Users Score":0,"Answer":"It depends on what you actually want to achieve. If you need to do some automatic stuff without user intervention, you can just use the underlying engine of the browser, like Gecko or WebKit, without loading the browser itself. There are ready Python bindings to these engines available.\nBrowsers themselves do not provide this kind of API to outside processes. For Firefox, you would need to inject some browser-side code into chrome, either as an extension or a plugin.","Q_Score":27,"Tags":"python,browser,webbrowser-control","A_Id":3370550,"CreationDate":"2010-07-30T06:16:00.000","Title":"Controlling Browser using Python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is it possible to control a web browser like Firefox using Python? 
\nI would want to do things like \n\nlaunch the browser\nforce clicks on URLs\ntake screenshots \n\netc.","AnswerCount":6,"Available Count":2,"Score":0.0333209931,"is_accepted":false,"ViewCount":50784,"Q_Id":3369073,"Users Score":1,"Answer":"A great way to control a browser in Python is to use PyQt4.QtWebKit.","Q_Score":27,"Tags":"python,browser,webbrowser-control","A_Id":3370579,"CreationDate":"2010-07-30T06:16:00.000","Title":"Controlling Browser using Python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Got a situation where I'm going to be parsing websites. Each site has to have its own \"parser\" and possibly its own way of dealing with cookies\/etc..\nI'm trying to get in my head which would be a better choice.\nChoice I:\n I can create a multiprocessing function, where the (masterspawn) app gets an input url, and in turn it spawns a process\/function within the masterspawn app that then handles all the setup\/fetching\/parsing of the page\/URL.\nThis approach would have one master app running, and it in turn creates multiple instances of the internal function.. Should be fast, yes\/no?\nChoice II:\n I could create a \"Twisted\" kind of server, that would essentially do the same thing as Choice I. The difference being that using \"Twisted\" would also impose some overhead. I'm trying to evaluate Twisted, with regards to it being a \"Server\" but I don't need it to perform the fetching of the url.\nChoice III:\n I could use scrapy. I'm inclined not to go this route as I don't want\/need to use the overhead that scrapy appears to have.
As i stated, each of the targeted URLs needs its own parse function, as well as dealing with the cookies...\nMy goal is to basically have the \"architected\" solution spread across multiple boxes, where each client box interfaces with a master server that allocates the urls to be parsed.\nthanks for any comments on this..\n-tom","AnswerCount":2,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":815,"Q_Id":3374943,"Users Score":2,"Answer":"There are two dimensions to this question: concurrency and distribution. \nConcurrency: either Twisted or multiprocessing will do the job of concurrently handling fetching\/parsing jobs. I'm not sure though where your premise of the \"Twisted overhead\" comes from. On the contrary, the multiprocessing path would incur much more overhead, since a (relatively heavy-weight) OS-process would have to be spawned. Twisteds' way of handling concurrency is much more light-weight. \nDistribution: multiprocessing won't distribute your fetch\/parse jobs to different boxes. Twisted can do this, eg. using the AMP protocol building facilities. \nI cannot comment on scrapy, never having used it.","Q_Score":1,"Tags":"python,twisted,multiprocessing","A_Id":3379621,"CreationDate":"2010-07-30T20:02:00.000","Title":"question comparing multiprocessing vs twisted","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"i developed an aplication built on twitter api , but i get erorrs like a mesage that i've parsed and deleted to be parsed again at the next execution , could that be because i left the twitter connection opened or is just a fault of the twitter API. 
I also tried to delete all direct messages because it seemed too full for me, but instead the API has just reset the count of my messages; the messages haven't been deleted:((","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":105,"Q_Id":3385990,"Users Score":2,"Answer":"Twitter's API is over HTTP, which is a stateless protocol. You don't really need to close the connection, since connections are made and closed for each request.","Q_Score":1,"Tags":"python,twitter","A_Id":3388478,"CreationDate":"2010-08-02T07:59:00.000","Title":"twitter connection needs to be closed?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Do you know how I could get one of the IPv6 addresses of one of my interfaces in python2.6? I tried something with the socket module which led me nowhere.\nThanks.","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":3133,"Q_Id":3388911,"Users Score":0,"Answer":"You could just simply run 'ifconfig' with a subprocess.* call and parse the output.","Q_Score":6,"Tags":"python,linux,ipv6","A_Id":3388966,"CreationDate":"2010-08-02T14:53:00.000","Title":"How to get the IPv6 address of an interface under linux","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to verify if all my page links are valid, and also something similar to see if all the pages have a specified link like contact.
I use python unit testing and selenium IDE to record actions that need to be tested.\nSo my question is: can I verify the links in a loop, or do I need to try every link on my own?\nI tried to do this with __iter__ but it didn't get anywhere close; there may be a reason that I'm poor at OOP, but I still think that there must be another way of testing links than clicking them and recording one by one.","AnswerCount":4,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":412,"Q_Id":3397850,"Users Score":0,"Answer":"What exactly is \"testing links\"?\nIf it means they lead to non-4xx URIs, I'm afraid you must visit them.\nAs for the existence of given links (like \"Contact\"), you may look for them using XPath.","Q_Score":2,"Tags":"python,testing,black-box","A_Id":3397887,"CreationDate":"2010-08-03T15:05:00.000","Title":"how can i verify all links on a page as a black-box tester","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm trying to verify if all my page links are valid, and also something similar to see if all the pages have a specified link like contact.
I use python unit testing and selenium IDE to record actions that need to be tested.\nSo my question is: can I verify the links in a loop, or do I need to try every link on my own?\nI tried to do this with __iter__ but it didn't get anywhere close; there may be a reason that I'm poor at OOP, but I still think that there must be another way of testing links than clicking them and recording one by one.","AnswerCount":4,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":412,"Q_Id":3397850,"Users Score":0,"Answer":"You could (as yet another alternative) use BeautifulSoup to parse the links on your page and try to retrieve them via urllib2.","Q_Score":2,"Tags":"python,testing,black-box","A_Id":3399490,"CreationDate":"2010-08-03T15:05:00.000","Title":"how can i verify all links on a page as a black-box tester","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"For the past 10 hours I've been trying to accomplish this:\nTranslation of my blocking httpclient using the standard lib...\nInto a twisted nonblocking\/async version of it. \n10 hours later... scouring through their APIs -- it appears no one has EVER needed to be able to do that. Nice framework, but seems ...a bit overwhelming to just set a socket to a different interface.\nCan any python gurus shed some light on this and\/or send me in the right direction? Or any docs that I could have missed? THANKS!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":314,"Q_Id":3399185,"Users Score":0,"Answer":"Well, it doesn't look like you've missed anything. client.getPage doesn't directly support setting the bind address. I'm just guessing here but I would suspect it's one of those cases where it just never occurred to the original developer that someone would want to specify the bind address.
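At the plain-socket level, choosing the source address of an outgoing connection is just a bind() before connect(), which is essentially what Twisted's bindAddress argument to reactor.connectTCP wraps. A self-contained sketch against a local listener:

```python
import socket

# a local listener so the sketch runs without touching the network
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)

# bind the *client* socket first to pick its source address,
# then connect as usual
cli = socket.socket()
cli.bind(("127.0.0.1", 0))      # port 0 = let the OS pick a free port
cli.connect(srv.getsockname())

source_ip = cli.getsockname()[0]
print(source_ip)                # the address the client socket was bound to

cli.close()
srv.close()
```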
\nEven though there isn't built-in support for doing this, it should be pretty easy to do. The way you specify binding addresses for outgoing connections in twisted is by passing the bind address to the reactor.connectXXX() functions. Fortunately, the code for getPage() is really simple. I'd suggest three things:\n\nCopy the code for getPage() and its associated helper function into your project\nModify them to pass through the bind address\nCreate a patch to fix this oversight and send it to the Twisted folks :)","Q_Score":0,"Tags":"python,twisted.web","A_Id":3400337,"CreationDate":"2010-08-03T17:48:00.000","Title":"Overloading twisted.client.getPage to set the client socket's bindaddress !","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Which library\/module is the best to use for downloading large 500MB+ files in terms of speed, memory, and CPU? I was also contemplating using pycurl.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":2537,"Q_Id":3402271,"Users Score":0,"Answer":"At sizes of 500MB+ one has to worry about data integrity, and HTTP is not designed with data integrity in mind.\nI'd rather use python bindings for rsync (if they exist) or even bittorrent, which was initially implemented in python.
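If plain HTTP is used anyway, part of that integrity concern can be recovered by checksumming while streaming in chunks and comparing against a published digest. A sketch using only the standard library (a BytesIO stands in for the remote response object):

```python
import hashlib
import io

def copy_with_digest(src, dst, chunk_size=64 * 1024):
    """Copy src to dst in fixed-size chunks, returning the SHA-256 of the data."""
    digest = hashlib.sha256()
    while True:
        chunk = src.read(chunk_size)
        if not chunk:
            break
        digest.update(chunk)
        dst.write(chunk)
    return digest.hexdigest()

payload = b"x" * (200 * 1024)            # stand-in for a large response body
out = io.BytesIO()
checksum = copy_with_digest(io.BytesIO(payload), out)
print(checksum == hashlib.sha256(payload).hexdigest())
```

In real use, `src` would be the response object from an HTTP library and `dst` a file opened in binary mode.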
Both rsync and bittorrent address the data integrity issue.","Q_Score":1,"Tags":"python,curl,urllib2","A_Id":3402359,"CreationDate":"2010-08-04T02:56:00.000","Title":"best way to download large files with python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm scraping a html page, then using xml.dom.minidom.parseString() to create a dom object.\nhowever, the html page has a '&'. I can use cgi.escape to convert this into & but it also converts all my html <> tags into <> which makes parseString() unhappy.\nhow do i go about this? i would rather not just hack it and straight replace the \"&\"s\nthanks","AnswerCount":4,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":576,"Q_Id":3403168,"Users Score":0,"Answer":"You shouldn't use an XML parser to parse data that isn't XML. Find an HTML parser instead, you'll be happier in the long run. The standard library has a few (HTMLParser and htmllib), and BeautifulSoup is a well-loved third-party package.","Q_Score":1,"Tags":"python,escaping,html-entities","A_Id":3405525,"CreationDate":"2010-08-04T06:40:00.000","Title":"need to selectively escape html entities (&)","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am trying to create a python script that opens a single page at a time, however python + mozilla make it so everytime I do this, it opens up a new tab. I want it to keep just a single window open so that it can loop forever without crashing due to too many windows or tabs. 
It will be going to about 6-7 websites and the current code imports time and webbrowser.\nwebbrowser.open('url')\ntime.sleep(100)\nwebbrowser.open('next url') \n# but here it will open a new tab, when I just want it to change the page.\nAny information would be appreciated,\nThank you.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":2001,"Q_Id":3408891,"Users Score":1,"Answer":"In Firefox, if you go to about:config and set browser.link.open_newwindow to \"1\", that will cause a clicked link that would open in a new window or tab to stay in the current tab. I'm not sure if this applies to calls from 3rd-party apps, but it might be worth a try.\nOf course, this will now apply to everything you do in Firefox (though ctrl + click will still open links in a new tab).","Q_Score":0,"Tags":"python,browser","A_Id":3408987,"CreationDate":"2010-08-04T19:01:00.000","Title":"How do I edit the url in python and open a new page without having a new window or tab opened?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I would like to open a new tab in my web browser using python's webbrowser. However, now my browser is brought to the top and I am directly moved to the opened tab. I haven't found any information about this in the documentation, but maybe there is some hidden API.
Can I open this tab in the possible most unobtrusive way, which means:\n\nnot bringing browser to the top if it's minimzed,\nnot moving me the opened tab (especially if I am at the moment working in other tab - my process is working in the background and it would be very annoying to have suddenly my work interrupted by a new tab)?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":392,"Q_Id":3417756,"Users Score":0,"Answer":"On WinXP, at least, it appears that this is not possible (from my tests with IE).\nFrom what I can see, webbrowser is a fairly simple convenience module that creates (probably ) a subprocess-style call to the browser executable. \nIf you want that sort of granularity you'll have to see if your browser accepts command line arguments to that effect, or exposes that control in some other way.","Q_Score":3,"Tags":"python,tabs,python-webbrowser","A_Id":3418619,"CreationDate":"2010-08-05T18:09:00.000","Title":"python: open unfocused tab with webbrowser","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am now using python base64 module to decode a base64 coded XML file, what I did was to find each of the data (there are thousands of them as for exmaple in \"ABC....\", the \"ABC...\" was the base64 encoded data) and add it to a string, lets say s, then I use base64.b64decode(s) to get the result, I am not sure of the result of the decoding, was it a string, or bytes? In addition, how should convert such decoded data from the so-called \"network byte order\" to a \"host byte order\"? 
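To illustrate the two points at issue here: the decode result is bytes, and the struct module's '!' format prefix interprets those bytes in network (big-endian) order, yielding an ordinary host-order Python int:

```python
import base64
import struct

decoded = base64.b64decode("AAAAKg==")   # 4 bytes: 00 00 00 2a
assert isinstance(decoded, bytes)

# '!' means network (big-endian) byte order; the unpacked result
# is already a normal Python integer in host order
(value,) = struct.unpack("!I", decoded)
print(value)   # 42
```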
Thanks!","AnswerCount":3,"Available Count":1,"Score":0.1325487884,"is_accepted":false,"ViewCount":7855,"Q_Id":3422457,"Users Score":2,"Answer":"Each base64 encoded string should be decoded separately - you can't concatenate encoded strings (and get a correct decoding).\nThe result of the decode is a string, of byte-buffer - in Python, they're equivalent.\nRegarding the network\/host order - sequences of bytes, have no such 'order' (or endianity) - it only matters when interpreting these bytes as words \/ ints of larger width (i.e. more than 8 bits).","Q_Score":0,"Tags":"python,base64,byte","A_Id":3422530,"CreationDate":"2010-08-06T09:20:00.000","Title":"Python base64 data decode and byte order convert","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a requirement to create a Python application that accepts dial up connections over ISDN from client software and relays messages from this connection to a website application running on a LAMP webserver.\nDo we have some modules or support for this kind of implementation in python?\nPlease suggest.\nThanks in advance.","AnswerCount":1,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":974,"Q_Id":3464996,"Users Score":1,"Answer":"You should have system hardware and software that handles establishing ISDN links, that's not something you should be trying to reimplement yourself.\nYou need to consult the documentation for that hardware and software, and the documentation for the client software, to determine how that connection can be made available to your application, and what communications protocol the client will be using over the ISDN link.\n(If you're really lucky, the client actually uses PPP to establish a TCP\/IP 
connection.)","Q_Score":0,"Tags":"python,dial-up,isdn","A_Id":3465106,"CreationDate":"2010-08-12T05:36:00.000","Title":"ISDN dial up connection with python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have used 3 languages for Web Scraping - Ruby, PHP and Python and honestly none of them seems to perfect for the task. \nRuby has an excellent mechanize and XML parsing library but the spreadsheet support is very poor. \nPHP has excellent spreadsheet and HTML parsing library but it does not have an equivalent of WWW:Mechanize.\nPython has a very poor Mechanize library. I had many problems with it and still unable to solve them. Its spreadsheet library also is more or less decent since it unable to create XLSX files.\nIs there anything which is just perfect for webscraping. \nPS: I am working on windows platform.","AnswerCount":4,"Available Count":1,"Score":0.049958375,"is_accepted":false,"ViewCount":2443,"Q_Id":3468028,"Users Score":1,"Answer":"Short answer is no.\nThe problem is that HTML is a large family of formats - and only the more recent variants are consistent (and XML based). 
If you're going to use PHP then I would recommend using the DOM parser as this can handle a lot of html which does not qualify as well-formed XML.\nReading between the lines of your post - you seem to be:\n1) capturing content from the web with a requirement for complex interaction management\n2) parsing the data into a consistent machine readable format\n3) writing the data to a spreadsheet\nWhich is certainly 3 seperate problems - if no one language meets all 3 requirements then why not use the best tool for the job and just worry about an suitable interim format\/medium for the data?\nC.","Q_Score":7,"Tags":"php,python,ruby,web-scraping","A_Id":3469962,"CreationDate":"2010-08-12T13:18:00.000","Title":"Is there any language which is just \"perfect\" for web scraping?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Hey all, I have a site that looks up info for the end user, is written in Python, and requires several urlopen commands. As a result it takes a bit for a page to load. I was wondering if there was a way to make it faster? Is there an easy Python way to cache or a way to make the urlopen scripts fun last? \nThe urlopens access the Amazon API to get prices, so the site needs to be somewhat up to date. The only option I can think of is to make a script to make a mySQL db and run it ever now and then, but that would be a nuisance.\nThanks!","AnswerCount":5,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":805,"Q_Id":3468248,"Users Score":0,"Answer":"How often do the price(s) change? 
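If the prices change only every so often, a small in-process cache with a time-to-live may be enough; a sketch (the TimedCache class and the fake fetch function are illustrative, not from any library):

```python
import time

class TimedCache:
    """Cache fetch results for ttl seconds before fetching again."""
    def __init__(self, fetch, ttl=3600.0):
        self.fetch = fetch
        self.ttl = ttl
        self._store = {}

    def get(self, key):
        hit = self._store.get(key)
        if hit is not None and time.monotonic() - hit[0] < self.ttl:
            return hit[1]                      # fresh enough: serve from cache
        value = self.fetch(key)                # stale or missing: refetch
        self._store[key] = (time.monotonic(), value)
        return value

calls = []
cache = TimedCache(lambda url: calls.append(url) or "price-data", ttl=60)
cache.get("http://example.com/item")
cache.get("http://example.com/item")   # served from cache, no second fetch
print(len(calls))                      # 1
```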
If they're pretty constant (say once a day, or every hour or so), just go ahead and write a cron script (or equivalent) that retrieves the values and stores it in a database or text file or whatever it is you need.\nI don't know if you can check the timestamp data from the Amazon API - if they report that sort of thing.","Q_Score":3,"Tags":"python,sql,caching,urlopen","A_Id":3468315,"CreationDate":"2010-08-12T13:40:00.000","Title":"Caching options in Python or speeding up urlopen","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"As the title suggests, I'm working on a site written in python and it makes several calls to the urllib2 module to read websites. I then parse them with BeautifulSoup. \nAs I have to read 5-10 sites, the page takes a while to load. \nI'm just wondering if there's a way to read the sites all at once? Or anytricks to make it faster, like should I close the urllib2.urlopen after each read, or keep it open?\nAdded: also, if I were to just switch over to php, would that be faster for fetching and Parsi g HTML and XML files from other sites? I just want it to load faster, as opposed to the ~20 seconds it currently takes","AnswerCount":9,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":32925,"Q_Id":3472515,"Users Score":0,"Answer":"How about using pycurl? 
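For the "read them all at once" part of the question, the standard library's thread pool is one option; the fetch function below is a stand-in so the sketch stays offline, but in real use it would call urllib's urlopen and read the body:

```python
from concurrent.futures import ThreadPoolExecutor

def fetch(url):
    # stand-in for a real network read, e.g. urlopen(url).read()
    return "page for %s" % url

urls = ["http://site%d.example" % i for i in range(5, 10)]

# fetches run concurrently; map preserves the input order
with ThreadPoolExecutor(max_workers=5) as pool:
    pages = dict(zip(urls, pool.map(fetch, urls)))

print(len(pages))   # 5
```

Because the work is I/O-bound, threads overlap the waiting, so the wall-clock time approaches that of the single slowest request rather than the sum.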
\nYou can install it with\n$ sudo apt-get install python-pycurl","Q_Score":15,"Tags":"python,http,concurrency,urllib2","A_Id":3472568,"CreationDate":"2010-08-12T22:26:00.000","Title":"Python urllib2.urlopen() is slow, need a better way to read several urls","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm working on a site, colorurl.com, and I need users to be able to type in colorurl.com\/00ff00 (or some variation of that), and see the correct page. However, with the naked domain issue, users who type in colorurl.com\/somepath will instead be redirected to www.colorurl.com\/.\nIs there a way to detect this in python, and then redirect the user to where they meant to go (with the www. added)?\nEDIT:\nClarification: In my webhost's configuration I have colorurl.com forward to www.colorurl.com. They do not support keeping the path (1and1). I have to detect the previous path and redirect users to it.\n\nUser goes to colorurl.com\/path\nUser is redirected to www.colorurl.com\nApp needs to detect what the path was.\nApp sends user to www.colorurl.com\/path","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":830,"Q_Id":3482152,"Users Score":0,"Answer":"You need to use a third-party site to do the redirection to www.*; many registrars offer this service.
Godaddy's service (which is even free with domain registration) forwards foo.com\/bar to www.foo.com\/bar; I can't speak to the capabilities of the others but it seems to me that any one that doesn't behave this way is broken.","Q_Score":4,"Tags":"python,google-app-engine,redirect","A_Id":3483631,"CreationDate":"2010-08-14T05:27:00.000","Title":"Google App Engine - Naked Domain Path Redirect in Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":1},{"Question":"Is there a way to monitor server ports using SNMP (I'm using net-snmp-python to check this with python). \nSo far I've checked pretty simple with \"nc\" command, however I want to see if I can do this with SNMP. \nThank you for your answers and patience.","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":9078,"Q_Id":3485203,"Users Score":0,"Answer":"You might try running nmap against the ports you want to check, but that won't necessarily give you an indication that the server process on the other side of an open port is alive.","Q_Score":3,"Tags":"python,networking,snmp","A_Id":3486005,"CreationDate":"2010-08-14T21:34:00.000","Title":"Check ports with SNMP (net-snmp)","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is there a way to monitor server ports using SNMP (I'm using net-snmp-python to check this with python). \nSo far I've checked pretty simple with \"nc\" command, however I want to see if I can do this with SNMP. 
\nThank you for your answers and patience.","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":9078,"Q_Id":3485203,"Users Score":0,"Answer":"It's hard to see where SNMP might fit in.\nThe best way to monitor would be to use a protocol specific client (i.e., run a simple query v.s. MySQL, retrieve a test file using FTP, etc.)\nIf that doesn't work, you can open a TCP or UDP socket to the ports and see if anyone is listening.","Q_Score":3,"Tags":"python,networking,snmp","A_Id":3485524,"CreationDate":"2010-08-14T21:34:00.000","Title":"Check ports with SNMP (net-snmp)","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to send it and forget it. The http rest service call I'm making takes a few seconds to respond. The goal is to avoid waiting those few seconds before more code can execute.\nI'd rather not use python threads\nI'll use twisted async calls if I must and ignore the response.","AnswerCount":3,"Available Count":2,"Score":0.0665680765,"is_accepted":false,"ViewCount":2585,"Q_Id":3486372,"Users Score":1,"Answer":"You are going to have to implement that asynchronously as HTTP protocol states you have a request and a reply. \nAnother option would be to work directly with the socket, bypassing any pre-built module. 
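A sketch of that raw-socket approach: send the request bytes and close without ever reading the reply. A local listener stands in for the real service so the example runs offline:

```python
import socket
import threading

def build_request(host, path="/"):
    # a bare-bones HTTP/1.0 request; the reply is deliberately never read
    return ("GET %s HTTP/1.0\r\nHost: %s\r\n\r\n" % (path, host)).encode("ascii")

# stand-in server, just so there is something to connect to
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
received = []

def accept_once():
    conn, _ = srv.accept()
    received.append(conn.recv(1024))
    conn.close()

t = threading.Thread(target=accept_once)
t.start()

cli = socket.create_connection(srv.getsockname())
cli.sendall(build_request("127.0.0.1"))
cli.close()            # drop the connection without waiting for a response

t.join()
srv.close()
print(received[0].startswith(b"GET / HTTP/1.0"))
```

Note this really does violate the request/reply expectation of HTTP, exactly as the answer warns; some servers may log the aborted connection as an error.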
This would allow you to violate protocol and write your own bit that ignores any responses, in essence dropping the connection after it has made the request.","Q_Score":6,"Tags":"python,http","A_Id":3486383,"CreationDate":"2010-08-15T05:49:00.000","Title":"How can I make an http request without getting back an http response in Python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to send it and forget it. The http rest service call I'm making takes a few seconds to respond. The goal is to avoid waiting those few seconds before more code can execute.\nI'd rather not use python threads\nI'll use twisted async calls if I must and ignore the response.","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":2585,"Q_Id":3486372,"Users Score":0,"Answer":"HTTP implies a request and a reply for that request. Go with an async approach.","Q_Score":6,"Tags":"python,http","A_Id":3486378,"CreationDate":"2010-08-15T05:49:00.000","Title":"How can I make an http request without getting back an http response in Python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Here is what I would like to do, and I want to know how some people with experience in this field do this:\nWith three POST requests I get from the http server:\n\nwidgets and layout\nand then app logic (minimal)\ndata\n\nOr maybe it's better to combine the first two or all three. I'm thinking of using pyqt. I think I can load .ui files. I can parse json data. I just think it would be rather dangerous to pass code over a network to be executed on the client. 
If someone can hijack the connection, or can change the apps setting to access a bogus server, that is nasty.\nI want to do it this way because it keeps all the clients up-to-date. It's sort of like a webapp but simpler because of Qt. Essentially the \"thin\" app is just a minimal compiled python file that loads data from a server.\nHow can I do this without introducing security issues on the client? Is https good enough? Is there a way to get pyqt to run in a sandbox of sorts?\nPS. I'm not stuck on Qt or python. I do like the concept though. I don't really want to use Java - server or client side.","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1647,"Q_Id":3517841,"Users Score":1,"Answer":"Your desire to send \"app logic\" from the server to the client without sending \"code\" is inherently self-contradictory, though you may not realize that yet -- even if the \"logic\" you're sending is in some simplified ad-hoc \"language\" (which you don't even think of as a language;-), to all intents and purposes your Python code will be interpreting that language and thereby execute that code. You may \"sandbox\" things to some extent, but in the end, that's what you're doing.\nTo avoid hijackings and other tricks, instead, use HTTPS and validate the server's cert in your client: that will protect you from all the problems you're worrying about (if somebody can edit the app enough to defeat the HTTPS cert validation, they can edit it enough to make it run whatever code they want, without any need to send that code from a server;-).\nOnce you're using https, having the server send Python modules (in source form if you need to support multiple Python versions on the clients, else bytecode is fine) and the client thereby save them to disk and import \/ reload them, will be just fine. 
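The save-then-import step can look like the following sketch; the module source is inlined here, whereas in the real flow it would arrive over the validated HTTPS channel:

```python
import importlib.util
import os
import tempfile

# pretend this source arrived over a cert-validated HTTPS connection
plugin_source = "def greet(name):\n    return 'hello ' + name\n"

# save it to disk, then import it like any local plugin module
path = os.path.join(tempfile.mkdtemp(), "remote_plugin.py")
with open(path, "w") as fh:
    fh.write(plugin_source)

spec = importlib.util.spec_from_file_location("remote_plugin", path)
module = importlib.util.module_from_spec(spec)
spec.loader.exec_module(module)

print(module.greet("client"))   # hello client
```

Re-fetching the file and re-running the load gives the "reload" half of the scheme.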
You'll basically be doing a variant of the classic \"plugins architecture\" where the \"plugins\" are in fact being sent from the server (instead of being found on disk in a given location).","Q_Score":0,"Tags":"python,qt,networking,pyqt,thin","A_Id":3517886,"CreationDate":"2010-08-19T00:33:00.000","Title":"how to implement thin client app with pyqt","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":1,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have written a small HTTP server and everything is working fine locally, but I am not able to connect to the server from any other computer, including other computers on the network. I'm not sure if it is a server problem, or if I just need to make some adjustments to Windows. I turned the firewall off, so that can't be the probelm.\nI am using Python 2.6 on Windows 7.","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":213,"Q_Id":3522641,"Users Score":2,"Answer":"Without any code sample I can only assume that your server is listening on some private interface like localhost\/127.0.0.1 and not something that is connected to the rest of your network.","Q_Score":0,"Tags":"python,windows,http","A_Id":3522672,"CreationDate":"2010-08-19T14:13:00.000","Title":"Python Server Help","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have written a small HTTP server and everything is working fine locally, but I am not able to connect to the server from any other computer, including other computers on the network. I'm not sure if it is a server problem, or if I just need to make some adjustments to Windows. 
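To make that "private interface" point concrete with the standard library's http.server: the first element of the address tuple decides which interface the server listens on. Port 0 here just asks the OS for a free port so the sketch can run anywhere:

```python
from http.server import HTTPServer, BaseHTTPRequestHandler

class Hello(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"hello")

# "" (like "0.0.0.0") accepts connections from other machines;
# "127.0.0.1" would accept connections from this machine only
server = HTTPServer(("", 0), Hello)
port = server.server_port
server.server_close()   # not serving here; the sketch only shows the bind
print(port > 0)
```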
I turned the firewall off, so that can't be the problem.\nI am using Python 2.6 on Windows 7.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":213,"Q_Id":3522641,"Users Score":0,"Answer":"Some things to check:\n\nCan you connect to the server via your machine's IP instead of localhost? I.e. if your machine is 1.2.3.4 in the network and the server is listening on port 8080, can you see it by opening a browser to http:\/\/1.2.3.4:8080 on the same machine?\nCan you do (1) from another machine? (just a sanity check...)\nDo other servers work throughout the network? I.e. if you run a simple FTP server (like Filezilla server) on the machine, can you FTP to it from other machines?\nCan you ping one machine from another?\nDo you still have firewalls running? (i.e. default Windows firewall)","Q_Score":0,"Tags":"python,windows,http","A_Id":3522734,"CreationDate":"2010-08-19T14:13:00.000","Title":"Python Server Help","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've developed a webmail client for any mail server.\nI want to implement message conversion for it \u2014 for example same emails fwd\/reply\/reply2all should be shown together like gmail does... 
\nMy question is: what's the key to find those emails which are either reply\/fwd or related to the original mail....","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":1161,"Q_Id":3530851,"Users Score":3,"Answer":"The In-Reply-To header of the child should have the value of the Message-Id header of the parent(s).","Q_Score":0,"Tags":"python,imap,pop3,imaplib,poplib","A_Id":3566252,"CreationDate":"2010-08-20T12:34:00.000","Title":"How to maintain mail conversion (reply \/ forward \/ reply to all like gmail) of email using Python pop\/imap lib?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've develop webmail client for any mail server.\nI want to implement message conversion for it \u2014 for example same emails fwd\/reply\/reply2all should be shown together like gmail does... 
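The In-Reply-To \/ Message-Id relationship from the accepted answer above can be illustrated with Python's standard-library email package. This is a toy sketch with made-up message IDs, not production threading logic:

```python
from email.message import EmailMessage

# Toy messages with made-up IDs: a reply carries the parent's
# Message-Id in its In-Reply-To header.
parent = EmailMessage()
parent["Message-Id"] = "<abc@example.com>"
parent["Subject"] = "hello"

reply = EmailMessage()
reply["Message-Id"] = "<def@example.com>"
reply["In-Reply-To"] = "<abc@example.com>"
reply["Subject"] = "Re: hello"

# Group messages into conversations keyed by the root Message-Id:
# a message with no In-Reply-To starts its own thread.
threads = {}
for msg in (parent, reply):
    key = str(msg["In-Reply-To"] or msg["Message-Id"])
    threads.setdefault(key, []).append(msg)

print(len(threads["<abc@example.com>"]))  # 2: parent and reply share a thread
```

Real threading also has to follow References headers and cope with missing IDs, which is why some clients (as the other answer notes) fall back to chaining on the subject line.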
\nMy question is: what's the key to find those emails which are either reply\/fwd or related to the original mail....","AnswerCount":2,"Available Count":2,"Score":0.1973753202,"is_accepted":false,"ViewCount":1161,"Q_Id":3530851,"Users Score":2,"Answer":"Google just seems to chain messages based on the subject line (so does Apple Mail by the way.)","Q_Score":0,"Tags":"python,imap,pop3,imaplib,poplib","A_Id":3530868,"CreationDate":"2010-08-20T12:34:00.000","Title":"How to maintain mail conversion (reply \/ forward \/ reply to all like gmail) of email using Python pop\/imap lib?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Does python have a full fledged email library with things for pop, smtp, pop3 with ssl, mime?\nI want to create a web mail interface that pulls emails from email servers, and then shows the emails, along with attachments, can display the sender, subject, etc. (handles all the encoding issues etc).\nIt's one thing to be available in the libraries and another for them to be production ready. I'm hoping someone who has used them to pull emails w\/attachments etc. in a production environment can comment on this.","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":513,"Q_Id":3538430,"Users Score":2,"Answer":"It has all the components you need, in a more modular and flexible arrangement than you appear to envisage -- the standard library's email package deals with the message once you have received it, and separate modules each deal with means of sending and receiving, such as pop, smtp, imap. SSL is an option for each of them (if the counterpart, e.g. 
mail server, supports it, of course), being basically just \"a different kind of socket\".\nHave you looked at the rich online docs for all of these standard library modules?","Q_Score":0,"Tags":"python,email,smtp,mime,pop3","A_Id":3538453,"CreationDate":"2010-08-21T18:16:00.000","Title":"Does python have a robust pop3, smtp, mime library where I could build a webmail interface?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"i want to do router configuration using python , but dont want to use any application level protocol to configure it . Is it possible to deal it on a hardware level ? Please do tell if the question is vague or if it needs more explanation , then I would put more details on as to what I have my doubt in","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":616,"Q_Id":3555485,"Users Score":1,"Answer":"There is a package named roscraco that configure and extract information from some consumer level routers. It's available on PyPi.","Q_Score":1,"Tags":"python","A_Id":5845120,"CreationDate":"2010-08-24T10:30:00.000","Title":"is it possible to write python scripts which can do router configuration without telnetting into the router?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"i want to do router configuration using python , but dont want to use any application level protocol to configure it . Is it possible to deal it on a hardware level ? 
Please do tell if the question is vague or needs more explanation, and I will add more details about what my doubt is","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":616,"Q_Id":3555485,"Users Score":1,"Answer":"The title of your question by itself makes some sense.\nThe body of your question doesn't make sense.\n\nis it possible to write python scripts which can do router configuration without telnetting into the router?\nYes, depending on the platform. You may be able to use a variety of other methods to configure the router that do not include telnet. E.g. xml-rpc, ssh + interactive, scp config file or fragments, snmp to induce upload config file, etc.\n\nIs it possible to deal with it on a hardware level?\nYou're in the realms of nanotech microscopy and seriously invalidating the warranty on your router.","Q_Score":1,"Tags":"python","A_Id":3555720,"CreationDate":"2010-08-24T10:30:00.000","Title":"is it possible to write python scripts which can do router configuration without telnetting into the router?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm using the urllib2.urlopen method to open a URL and fetch the markup of a webpage. Some of these sites redirect me using 301\/302 redirects. I would like to know the final URL that I've been redirected to. 
How can I get this?","AnswerCount":4,"Available Count":2,"Score":0.049958375,"is_accepted":false,"ViewCount":34914,"Q_Id":3556266,"Users Score":1,"Answer":"e.g.: \nurllib2.urlopen('ORIGINAL LINK').geturl()\nurllib2.urlopen(urllib2.Request('ORIGINAL LINK')).geturl()","Q_Score":22,"Tags":"python,urllib2","A_Id":31354580,"CreationDate":"2010-08-24T12:12:00.000","Title":"How can I get the final redirect URL when using urllib2.urlopen?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm using the urllib2.urlopen method to open a URL and fetch the markup of a webpage. Some of these sites redirect me using the 301\/302 redirects. I would like to know the final URL that I've been redirected to. How can I get this?","AnswerCount":4,"Available Count":2,"Score":0.1973753202,"is_accepted":false,"ViewCount":34914,"Q_Id":3556266,"Users Score":4,"Answer":"The return value of urllib2.urlopen has a geturl() method which should return the actual (i.e. last redirect) url.","Q_Score":22,"Tags":"python,urllib2","A_Id":3556295,"CreationDate":"2010-08-24T12:12:00.000","Title":"How can I get the final redirect URL when using urllib2.urlopen?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have tried this probably 6 or 7 different ways, such as using various attribute values, XPath, id pattern matching (it always matches \":\\w\\w\"), etc. as locators, and nothing has worked. 
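A runnable sketch of the geturl() approach from these answers. The question uses Python 2's urllib2; the same method exists on the response object in Python 3's urllib.request, shown here against a throwaway local server so no real site is needed:

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Throwaway local server: "/" answers 302 -> "/final", "/final" answers 200.
class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/":
            self.send_response(302)
            self.send_header("Location", "/final")
            self.end_headers()
        else:
            body = b"ok"
            self.send_response(200)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo quiet

server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

response = urllib.request.urlopen("http://127.0.0.1:%d/" % server.server_port)
final_url = response.geturl()  # URL after the 302 was followed
print(final_url.endswith("/final"))  # True
server.shutdown()
```

With Python 2's urllib2 the call is identical in spirit: urllib2.urlopen(url).geturl() returns the URL after the last redirect.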
If anyone can give me a tested, confirmed-working locator string for this button, I'd be much obliged.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":964,"Q_Id":3561993,"Users Score":0,"Answer":"If you want to emulate a click on the button, just go to #compose.","Q_Score":1,"Tags":"c#,java,python,gmail,selenium-rc","A_Id":3562629,"CreationDate":"2010-08-25T00:18:00.000","Title":"How to access Gmail's \"Send\" button using Selenium RC for Java or C# or Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to encode and store, and decode arguments in Python and getting lost somewhere along the way. Here are my steps:\n1) I use google toolkit's gtm_stringByEscapingForURLArgument to convert an NSString properly for passing into HTTP arguments. \n2) On my server (python), I store these string arguments as something like u'1234567890-\/:;()$&@\".,?!\\'[]{}#%^*+=_\\\\|~<>\\u20ac\\xa3\\xa5\\u2022.,?!\\'' (note that these are the standard keys on an iphone keypad in the \"123\" view and the \"#+=\" view, the \\u and \\x chars in there being some monetary prefixes like pound, yen, etc)\n3) I call urllib.quote(myString,'') on that stored value, presumably to %-escape them for transport to the client so the client can unpercent escape them.\nThe result is that I am getting an exception when I try to log the result of % escaping. Is there some crucial step I am overlooking that needs to be applied to the stored value with the \\u and \\x format in order to properly convert it for sending over http?\nUpdate: The suggestion marked as the answer below worked for me. I am providing some updates to address the comments below to be complete, though.\nThe exception I received cited an issue with \\u20ac. 
I don't know if it was a problem with that specifically, rather than the fact that it was the first unicode character in the string.\nThat \\u20ac char is the unicode for the 'euro' symbol. I basically found I'd have issues with it unless I used the urllib2 quote method.","AnswerCount":3,"Available Count":1,"Score":0.1325487884,"is_accepted":false,"ViewCount":83104,"Q_Id":3563126,"Users Score":2,"Answer":"You are out of your luck with stdlib, urllib.quote doesn't work with unicode. If you are using django you can use django.utils.http.urlquote which works properly with unicode","Q_Score":48,"Tags":"python,url-encoding","A_Id":3563366,"CreationDate":"2010-08-25T05:42:00.000","Title":"URL encoding\/decoding with Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm not sure how to find this information, I have found a few tutorials so far about using Python with selenium but none have so much as touched on this.. I am able to run some basic test scripts through python that automate selenium but it just shows the browser window for a few seconds and then closes it.. I need to get the browser output into a string \/ variable (ideally) or at least save it to a file so that python can do other things on it (parse it, etc).. I would appreciate if anyone can point me towards resources on how to do this. Thanks","AnswerCount":3,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":4318,"Q_Id":3571233,"Users Score":2,"Answer":"There's a Selenium.getHtmlSource() method in Java, most likely it is also available in Python. 
It returns the source of the current page as string, so you can do whatever you want with it","Q_Score":3,"Tags":"python,selenium,browser-automation","A_Id":3573288,"CreationDate":"2010-08-26T00:22:00.000","Title":"Selenium with Python, how do I get the page output after running a script?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"A sever I can't influence sends very broken XML.\nSpecifically, a Unicode WHITE STAR would get encoded as UTF-8 (E2 98 86) and then translated using a Latin-1 to HTML entity table. What I get is â 98 86 (9 bytes) in a file that's declared as utf-8 with no DTD.\nI couldn't configure W3C tidy in a way that doesn't garble this irreversibly. I only found how to make lxml skip it silently. SAX uses Expat, which cannot recover after encountering this. I'd like to avoid BeautifulSoup for speed reasons.\nWhat else is there?","AnswerCount":2,"Available Count":1,"Score":0.1973753202,"is_accepted":false,"ViewCount":1950,"Q_Id":3577652,"Users Score":2,"Answer":"BeautifulSoup is your best bet in this case. 
I suggest profiling before ruling out BeautifulSoup altogether.","Q_Score":4,"Tags":"python,xml","A_Id":3577694,"CreationDate":"2010-08-26T17:18:00.000","Title":"How to parse broken XML in Python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is it possible to create REST web services that return JSON or XML, using Python?\nCould you give me some recommendations?\nThank you.","AnswerCount":5,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":12155,"Q_Id":3577994,"Users Score":0,"Answer":"Sure, you can use any web framework you like, just set the content-type header to the mime type you need. For generating JSON I recommend the simplejson module (renamed to json and included in the standard library since 2.6), for handling XML the lxml library is very nice.","Q_Score":2,"Tags":"python,web-services,rest","A_Id":3578028,"CreationDate":"2010-08-26T17:57:00.000","Title":"Creating REST Web Services with Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"They didn't mention this in python documentation. And recently I'm testing a website simply refreshing the site using urllib2.urlopen() to extract certain content, I notice sometimes when I update the site urllib2.urlopen() seems not get the newly added content. 
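The "any framework plus the right content-type header" advice from the REST answer above can be sketched framework-free with the stdlib's wsgiref; the app and variable names here are made up for illustration:

```python
import json
from wsgiref.util import setup_testing_defaults

# Minimal WSGI app returning JSON with the right Content-Type header,
# in the spirit of the REST answer above (framework-agnostic sketch).
def app(environ, start_response):
    body = json.dumps({"status": "ok"}).encode("utf-8")
    start_response("200 OK", [("Content-Type", "application/json"),
                              ("Content-Length", str(len(body)))])
    return [body]

# Exercise the app without a real server, using a fake start_response.
environ = {}
setup_testing_defaults(environ)
captured = {}

def start_response(status, headers):
    captured["status"] = status
    captured["headers"] = dict(headers)

result = b"".join(app(environ, start_response))
print(captured["status"])                   # 200 OK
print(captured["headers"]["Content-Type"])  # application/json
print(json.loads(result))                   # {'status': 'ok'}
```

Any WSGI server (or a framework such as the ones the other answers mention) can host an app like this; only the header line changes if you serve XML instead.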
So I wonder it does cache stuff somewhere, right?","AnswerCount":5,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":12290,"Q_Id":3586295,"Users Score":0,"Answer":"If you make changes and test the behaviour from the browser and from urllib, it is easy to make a stupid mistake.\nIn the browser you are logged in, but in urllib.urlopen your app can always be redirected to the same login page, so if you just look at the page size or the top of your common layout, you could think that your changes have no effect.","Q_Score":13,"Tags":"python,urllib2,urlopen","A_Id":38239971,"CreationDate":"2010-08-27T16:34:00.000","Title":"Does urllib2.urlopen() cache stuff?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"They didn't mention this in python documentation. And recently I'm testing a website simply refreshing the site using urllib2.urlopen() to extract certain content, I notice sometimes when I update the site urllib2.urlopen() seems not get the newly added content. So I wonder it does cache stuff somewhere, right?","AnswerCount":5,"Available Count":3,"Score":1.2,"is_accepted":true,"ViewCount":12290,"Q_Id":3586295,"Users Score":10,"Answer":"So I wonder it does cache stuff somewhere, right? \n\nIt doesn't. \nIf you don't see new data, this could have many reasons. Most bigger web services use server-side caching for performance reasons, for example using caching proxies like Varnish and Squid or application-level caching.\nIf the problem is caused by server-side caching, usually there's no way to force the server to give you the latest data.\n\nFor caching proxies like squid, things are different. 
Usually, squid adds some additional headers to the HTTP response (response().info().headers).\nIf you see a header field called X-Cache or X-Cache-Lookup, this means that you aren't connected to the remote server directly, but through a transparent proxy.\nIf you have something like: X-Cache: HIT from proxy.domain.tld, this means that the response you got is cached. The opposite is X-Cache MISS from proxy.domain.tld, which means that the response is fresh.","Q_Score":13,"Tags":"python,urllib2,urlopen","A_Id":3586796,"CreationDate":"2010-08-27T16:34:00.000","Title":"Does urllib2.urlopen() cache stuff?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"They didn't mention this in python documentation. And recently I'm testing a website simply refreshing the site using urllib2.urlopen() to extract certain content, I notice sometimes when I update the site urllib2.urlopen() seems not get the newly added content. So I wonder it does cache stuff somewhere, right?","AnswerCount":5,"Available Count":3,"Score":-0.0798297691,"is_accepted":false,"ViewCount":12290,"Q_Id":3586295,"Users Score":-2,"Answer":"I find it hard to believe that urllib2 does not do caching, because in my case, upon restart of the program the data is refreshed. If the program is not restarted, the data appears to be cached forever. 
Also retrieving the same data from Firefox never returns stale data.","Q_Score":13,"Tags":"python,urllib2,urlopen","A_Id":3936916,"CreationDate":"2010-08-27T16:34:00.000","Title":"Does urllib2.urlopen() cache stuff?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm writing a game which requires users to log in to their accounts in order to be able to play. What's the best way of transmitting passwords from client to server and storing them?\nI'm using Python and Twisted, if that's of any relevance.","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":447,"Q_Id":3595835,"Users Score":1,"Answer":"The best way is to authenticate via SSL\/TLS. The best way of storing passwords is to store them hashed with some complex hash like sha1(sha1(password)+salt) with salt.","Q_Score":0,"Tags":"python,security,passwords,network-programming","A_Id":3595865,"CreationDate":"2010-08-29T17:38:00.000","Title":"Handling Password Authentication over a Network","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is there any approach to generating an editor for an XML file based on an XSD schema? (It should be a Java or Python web-based editor.)","AnswerCount":3,"Available Count":1,"Score":0.0665680765,"is_accepted":false,"ViewCount":4417,"Q_Id":3599569,"Users Score":1,"Answer":"Funny, I'm concerning myself with something similar. I'm building an editor (not really WYSIWYG, but it abstracts the DOM away) for the XMLs Civilization 4 (strategy game) uses to store just about everything. 
I thought about it for quite a while and built two prototypes (in Python), one of which looks promising, so I will extend it in the future. Note that Civ 4 XMLs are little more than a buzzword-compliant database (just the kind of data you'd better store in JSON\/YAML and the like, mostly key-value pairs with a few sublists of key-value pairs - no recursive data structures).\nMy first approach was based on the fact that there are mostly key-value pairs, which doesn't fit documents that exploit the full power of XML (recursive data structures, etc). My new design is more sophisticated - up to now, I have only built a (still buggy) validator factory this way, but I'm looking forward to extending it, e.g. for schema-sensitive editing widgets. The basic idea is to walk the XSD's DOM, recognize the expected content (a list of other nodes, text of a specific format, etc), build in turn (recursively) validators for these, and then build a higher-order validator that applies all the previously generated validators in the right order. It probably takes some exposure to functional programming to get comfortable with the idea. For the editing part (btw, I use PyQt), I plan to generate a Label-LineEdit pair for tags which contain text and a heading (Label) for tags that contain other elements, possibly indenting the subelements and\/or providing folding. Again, recursion is the key to building these.\nQt allows us to attach a validator to a text input widget, so this part is easy once we can generate a validator for e.g. a tag containing an \"int\". For tags containing other tags, something similar to the above is possible: generate a validator for each subelement and chain them. The only part that needs to change is how we get the content. 
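The "build validators recursively, then combine them with a higher-order validator" idea described above can be sketched like this (a toy stand-in, not the author's actual code):

```python
# Leaf validator: does the text parse as an int?
def int_validator(text):
    try:
        int(text)
        return True
    except ValueError:
        return False

# Higher-order validator: checks a list of values against the
# child validators, position by position, mirroring an XSD sequence.
def sequence_validator(child_validators):
    def validate(values):
        if len(values) != len(child_validators):
            return False
        return all(v(x) for v, x in zip(child_validators, values))
    return validate

# Chain two generated leaf validators into one sequence validator.
validate_pair = sequence_validator([int_validator, int_validator])
print(validate_pair(["10", "20"]))   # True
print(validate_pair(["10", "abc"]))  # False
```

In the real design the child validators would themselves be built recursively while walking the XSD's DOM, so nested sequences compose the same way.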
Ignoring comments, attributes, processing instructions, etc, this should still be relatively simple - for a \"tag: content\" pair, generate \"content\" and feed it to your DOM parser; for elements with subelements, generate a representation of the children and put it between \"...\". Attributes could be implemented as key-value pairs too, only with an extra flag.","Q_Score":4,"Tags":"java,python,xml,xsd","A_Id":3599767,"CreationDate":"2010-08-30T10:33:00.000","Title":"Automatic editor of XML (based on XSD scheme)","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm trying to develop some scripts for iTunes in python and actually i'm having quite a hard time getting the API information. \nI'm using the win32com.client module but i would really need to get all the specifications, methods and details.\nThere are a few examples but I need some extra data......\nthanks!!!","AnswerCount":4,"Available Count":1,"Score":-0.049958375,"is_accepted":false,"ViewCount":9348,"Q_Id":3602728,"Users Score":-1,"Answer":"Run dir(my_com_client) to get a list of available methods.","Q_Score":5,"Tags":"python,itunes,win32com","A_Id":3602834,"CreationDate":"2010-08-30T17:27:00.000","Title":"iTunes API for python scripting","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm learning to use the Queue module, and am a bit confused about how a queue consumer thread can be made to know that the queue is complete. Ideally I'd like to use get() from within the consumer thread and have it throw an exception if the queue has been marked \"done\". 
Is there a better way to communicate this than by appending a sentinel value to mark the last item in the queue?","AnswerCount":5,"Available Count":3,"Score":0.0798297691,"is_accepted":false,"ViewCount":10185,"Q_Id":3605188,"Users Score":2,"Answer":"A Queue is a FIFO (first in, first out) register, so remember that the consumer can be faster than the producer. When a consumer thread detects that the queue is empty, it normally takes one of the following actions:\n\nSwitch to the next thread.\nSleep for some ms and then check the queue again.\nWait on an event (like a new message in the queue).\n\nIf you want the consumer thread to terminate after the job is complete, put a sentinel value in the queue to terminate the task.","Q_Score":14,"Tags":"python,multithreading,queue","A_Id":3607187,"CreationDate":"2010-08-31T00:21:00.000","Title":"Communicating end of Queue","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm learning to use the Queue module, and am a bit confused about how a queue consumer thread can be made to know that the queue is complete. Ideally I'd like to use get() from within the consumer thread and have it throw an exception if the queue has been marked \"done\". Is there a better way to communicate this than by appending a sentinel value to mark the last item in the queue?","AnswerCount":5,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":10185,"Q_Id":3605188,"Users Score":0,"Answer":"The best practice way of doing this would be to have the queue itself notify a client that it has reached the 'done' state. The client can then take any action that is appropriate.\nWhat you have suggested, checking the queue to see if it is done periodically, would be highly undesirable. 
Polling is an antipattern in multithreaded programming; you should always be using notifications.\nEDIT:\nSo you're saying that the queue itself knows that it's 'done' based on some criteria and needs to notify the clients of that fact. I think you are correct, and the best way to do this is by throwing when a client calls get() and the queue is in the done state. If you're throwing, this would negate the need for a sentinel value on the client side. Internally the queue can detect that it is 'done' in any way it pleases, e.g. the queue is empty, its state was set to done, etc. I don't see any need for a sentinel value.","Q_Score":14,"Tags":"python,multithreading,queue","A_Id":3605282,"CreationDate":"2010-08-31T00:21:00.000","Title":"Communicating end of Queue","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm learning to use the Queue module, and am a bit confused about how a queue consumer thread can be made to know that the queue is complete. Ideally I'd like to use get() from within the consumer thread and have it throw an exception if the queue has been marked \"done\". Is there a better way to communicate this than by appending a sentinel value to mark the last item in the queue?","AnswerCount":5,"Available Count":3,"Score":1.0,"is_accepted":false,"ViewCount":10185,"Q_Id":3605188,"Users Score":8,"Answer":"A sentinel is a natural way to shut down a queue, but there are a couple of things to watch out for.\nFirst, remember that you may have more than one consumer, so you need to send a sentinel once for each running consumer, and guarantee that each consumer will only consume one sentinel, to ensure that each consumer receives its shutdown sentinel.\nSecond, remember that Queue defines an interface, and that when possible, code should behave regardless of the underlying Queue. 
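A minimal runnable sketch of the one-sentinel-per-consumer pattern discussed in these answers (the worker function and its "work" are made up for illustration; the question's Python 2 Queue module is spelled queue in Python 3):

```python
import threading
import queue  # the stdlib Queue module (spelled `queue` in Python 3)

SENTINEL = object()  # unique marker; cannot collide with real items

def consumer(q, results):
    # Keep consuming until we see our shutdown sentinel.
    while True:
        item = q.get()
        if item is SENTINEL:
            break
        results.append(item * 2)  # placeholder "work"

q = queue.Queue()
results = []
workers = [threading.Thread(target=consumer, args=(q, results)) for _ in range(2)]
for w in workers:
    w.start()
for item in [1, 2, 3]:
    q.put(item)
for _ in workers:      # one sentinel per consumer, as the answer advises
    q.put(SENTINEL)
for w in workers:
    w.join()
print(sorted(results))  # [2, 4, 6]
```

Because the queue is FIFO, the sentinels are enqueued after all real items, so every item is processed before any consumer shuts down; with a PriorityQueue you would instead need sentinels that sort last.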
You might have a PriorityQueue, or you might have some other class that exposes the same interface and returns values in some other order.\nUnfortunately, it's hard to deal with both of these. To deal with the general case of different queues, a consumer that's shutting down must continue to consume values after receiving its shutdown sentinel until the queue is empty. That means that it may consume another thread's sentinel. This is a weakness of the Queue interface: it should have a Queue.shutdown call to cause an exception to be thrown by all consumers, but that's missing.\nSo, in practice:\n\nif you're sure you're only ever using a regular Queue, simply send one sentinel per thread.\nif you may be using a PriorityQueue, ensure that the sentinel has the lowest priority.","Q_Score":14,"Tags":"python,multithreading,queue","A_Id":3605258,"CreationDate":"2010-08-31T00:21:00.000","Title":"Communicating end of Queue","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using python for web programming and javascript heavily. Currently, i am using NetBeans \nbut i am looking for another IDE. NetBeans is not very good while programming with python and javascript. 
Any suggestion?","AnswerCount":7,"Available Count":4,"Score":0.0,"is_accepted":false,"ViewCount":6379,"Q_Id":3608409,"Users Score":0,"Answer":"It's not quite an IDE, but on Mac OS X I'm using TextMate; it has many extensions which make it very powerful.","Q_Score":6,"Tags":"javascript,python,ide","A_Id":3608498,"CreationDate":"2010-08-31T11:12:00.000","Title":"IDE Suggestion for python and javascript","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using python for web programming and javascript heavily. Currently, i am using NetBeans \nbut i am looking for another IDE. NetBeans is not very good while programming with python and javascript. Any suggestion?","AnswerCount":7,"Available Count":4,"Score":0.0285636566,"is_accepted":false,"ViewCount":6379,"Q_Id":3608409,"Users Score":1,"Answer":"PyCharm (and other IDEs on the IDEA platform) is a brilliant IDE for python, js, xml, css and other languages in the webdev stack.","Q_Score":6,"Tags":"javascript,python,ide","A_Id":3608554,"CreationDate":"2010-08-31T11:12:00.000","Title":"IDE Suggestion for python and javascript","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using python for web programming and javascript heavily. Currently, i am using NetBeans \nbut i am looking for another IDE. NetBeans is not very good while programming with python and javascript. 
Any suggestion?","AnswerCount":7,"Available Count":4,"Score":0.0285636566,"is_accepted":false,"ViewCount":6379,"Q_Id":3608409,"Users Score":1,"Answer":"I use Eclipse with the Pydev (Python) and Aptana (Javascript) plugins.","Q_Score":6,"Tags":"javascript,python,ide","A_Id":3608748,"CreationDate":"2010-08-31T11:12:00.000","Title":"IDE Suggestion for python and javascript","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using python for web programming and javascript heavily. Currently, i am using NetBeans \nbut i am looking for another IDE. NetBeans is not very good while programming with python and javascript. Any suggestion?","AnswerCount":7,"Available Count":4,"Score":0.0,"is_accepted":false,"ViewCount":6379,"Q_Id":3608409,"Users Score":0,"Answer":"For web programming I used Espresso; it only works on Mac, but it is quite good, and this one is an IDE.\nI don't think the rest classify as IDEs.\nFor Python I use Sublime Text 2 because it can be customized and has a great GUI feel.\nI used to use Notepad++ but don't really suggest it.\nI think if you are asking for efficiency, use vim.","Q_Score":6,"Tags":"javascript,python,ide","A_Id":28404847,"CreationDate":"2010-08-31T11:12:00.000","Title":"IDE Suggestion for python and javascript","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to move email from one Gmail mailbox to another. Just curious: will the UID of each email change when moved to the new mailbox?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":5836,"Q_Id":3615561,"Users Score":4,"Answer":"Yes, of course the UID is changed when you do a move operation.\nThe new UID for that 
mail will be the next UID from the destination folder\n(i.e. if the last mail UID of the destination folder is 9332, \nthen the UID of the moved email will be 9333).\nNote: the UID is changed, but the Message-Id will not be changed by any operation on that mail.","Q_Score":0,"Tags":"python,imap,imaplib","A_Id":3636059,"CreationDate":"2010-09-01T06:37:00.000","Title":"About IMAP UID with imaplib","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am scripting in Python for some web automation. I know I cannot automate captchas, but here is what I want to do:\nI want to automate everything I can up to the captcha. When I open the page (using urllib2) and parse it to find that it contains a captcha, I want to open the captcha using Tkinter. Now I know that I will have to save the image to my hard drive first, then open it, but there is an issue before that. The captcha image that is on screen is not directly in the source anywhere. There is a variable in the source, inside some javascript, that points to another page that has the link to the image, BUT if you load that middle page, the captcha picture for that link changes, so the image associated with that javascript variable is no longer valid. It may be impossible to gather the image using this method, so please enlighten me if you have any ideas on this. \nNow if I use Firebug to load the page, there is a \"GET\" that is a direct link to the current captcha image that I am seeing, and I'm wondering if there is any way to make Python or urllib2 see the \"GET\"s that are going on when a page is loaded, because if that were possible, this would be simple. 
\nPlease let me know if you have any suggestions.","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":1432,"Q_Id":3623077,"Users Score":2,"Answer":"Of course the captcha's served by a page which will serve a new one each time (if it was repeated, then once it was solved for one fake userid, a spammer could automatically make a million!). I think you need some \"screenshot\" functionality to capture the image you want -- there is no cross-platform way to invoke such functionality, but each platform (or desktop manager in the case of Linux, BSD, etc) tends to have one. Or, you could automate the browser (e.g. via SeleniumRC) to \"screenshot\" (e.g. \"print to PDF\") things at the right time. (I believe what you're seeing in firebug may be misleading you because it is \"showing a snapshot\"... just at the html source or DOM level rather than at a screen\/bitmap level).","Q_Score":0,"Tags":"python,web-applications,firebug,tkinter,urllib2","A_Id":3623274,"CreationDate":"2010-09-02T00:38:00.000","Title":"Is there a way to save a captcha image and view it later in python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have a form which when submitted by a user redirects to a thank you page and the file chosen for download begins to download.\nHow can I save this file using python? I can use python's urllib.urlopen to open the url to post to but the html returned is the thank you page, which I suspected it would be. 
Is there a solution that allows me to grab the contents of the file being served for download from the website and save that locally?\nThanks in advance for any help.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":383,"Q_Id":3628454,"Users Score":2,"Answer":"If you're getting back a thank-you page, the URL to the file is likely to be in there somewhere. Look for meta refresh or JavaScript redirects. Ctrl+F'ing the page for the file name might also help.\nSome sites may have extra protection in place, so if you can't figure it out, post a link to the site, just in case someone can be bothered to look.","Q_Score":1,"Tags":"python,html,download,urllib","A_Id":3628497,"CreationDate":"2010-09-02T15:07:00.000","Title":"Python - How do I save a file delivered from html?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"How can I remove \/ inspect \/ modify handlers configured for my loggers using the fileConfig() function?\nFor removing, there is the Logger.removeHandler(hdlr) method, but how do I get the handler in the first place if it was configured from a file?","AnswerCount":5,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":46204,"Q_Id":3630774,"Users Score":78,"Answer":"logger.handlers contains a list with all handlers of a logger.","Q_Score":70,"Tags":"python,logging","A_Id":3630800,"CreationDate":"2010-09-02T20:05:00.000","Title":"logging remove \/ inspect \/ modify handlers configured by fileConfig()","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am attempting to use the tweepy API to make a Twitter function, and I have two issues.\nI have little experience with the terminal and 
Python in general.\n1) It installed properly with Python 2.6, however I can't use it or install it with Python 3.1. When I attempt to install the module in 3.1 it gives me an error that there is no module setuptools. Originally I thought that perhaps I was unable to use tweepy module with 3.1, however in the readme it says \"Python 3 branch (3.1)\", which I assume means it is compatible. When I searched for the setuptools module, which I figured I could load into the new version, there was only modules for up to Python 2.7. How would I install the Tweepy api properly on Python 3.1?\n2) My default Python when run from terminal is 2.6.1 and I would like to make it 3.1 so I don't have to type python3.1.","AnswerCount":2,"Available Count":1,"Score":-0.0996679946,"is_accepted":false,"ViewCount":1565,"Q_Id":3631828,"Users Score":-1,"Answer":"Update: The comments below have some solid points against this technique. \n2) What OS are you running? Generally, there is a symlink somewhere in your system, which points from 'python' to 'pythonx.x', where x.x is the version number preferred by your operating system. 
On Linux, there is a symlink \/usr\/bin\/python, which points to (on Ubuntu 10.04) \/usr\/bin\/python2.6 on a standard installation.\nJust manually change the current link to point to the python3.1 binary, and you are fine.","Q_Score":0,"Tags":"python,tweepy","A_Id":3631888,"CreationDate":"2010-09-02T22:39:00.000","Title":"Python defaults and using tweepy api","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm using Python to parse an auction site.\nIf I use browser to open this site, it will go to a loading page, then jump to the search result page automatically.\nIf I use urllib2 to open the webpage, the read() method only return the loading page.\nIs there any python package could wait until all contents are loaded then read() method return all results?\nThanks.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":862,"Q_Id":3637681,"Users Score":0,"Answer":"How does the search page work? If it loads anything using Ajax, you could do some basic reverse engineering and find the URLs involved using Firebug's Net panel or Wireshark and then use urllib2 to load those.\nIf it's more complicated than that, you could simulate the actions JS performs manually without loading and interpreting JavaScript. 
It all depends on how the search page works.\nLastly, I know there are ways to run scripting on pages without a browser, since that's what some functional testing suites do, but my guess is that this could be the most complicated approach.","Q_Score":0,"Tags":"javascript,python","A_Id":3637740,"CreationDate":"2010-09-03T16:26:00.000","Title":"How to parse a web page that uses javascript to load .html in Python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm crawling an SNS with a crawler written in Python.\nIt worked for a long time, but a few days ago, the web pages my servers got back were ERROR 403 FORBIDDEN.\nI tried changing the cookie, the browser, and the account, but all failed.\nIt seems that the forbidden servers are in the same network segment.\nWhat can I do? Steal someone else's IP?\nThanks a lot","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":2076,"Q_Id":3648525,"Users Score":1,"Answer":"Looks like you've been blacklisted at the router level in that subnet, perhaps because you (or somebody else in the subnet) was violating terms of use, robots.txt, the max crawling frequency as specified in a site map, or something like that.\nThe solution is not technical, but social: contact the webmaster, be properly apologetic, learn what exactly you (or one of your associates) had done wrong, convincingly promise to never do it again, and apologize again until they remove the blacklisting. 
If you can give that webmaster any reason why they should want to let you crawl that site (e.g., your crawling feeds a search engine that will bring them traffic, or something like this), so much the better!-)","Q_Score":0,"Tags":"python,web-crawler,http-status-code-403","A_Id":3648748,"CreationDate":"2010-09-06T01:19:00.000","Title":"how to crawl a 403 forbidden SNS","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have to migrate data to OpenERP through XMLRPC by using TerminatOOOR.\nI send a name with value \"Rotule right Aur\u00e9lia\".\nIn Python the name with be encoded with value : 'Rotule right Aur\\xc3\\xa9lia '\nBut in TerminatOOOR (xmlrpc client) the data is encoded with value 'Rotule middle Aur\\357\\277\\275lia'\nSo in the server side, the data value is not decoded correctly and I get bad data. \nThe terminateOOOR is a ruby plugin for Kettle ( Java product) and I guess it should encode data by utf-8.\nI just don't know why it happens like this.\nAny help?","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":1981,"Q_Id":3651031,"Users Score":1,"Answer":"This issue comes from Kettle.\nMy program is using Kettle to get an Excel file, get the active sheet and transfer the data in that sheet to TerminateOOOR for further handling.\nAt the phase of reading data from Excel file, Kettle can not recognize the encoding then it gives bad data to TerminateOOOR. \nMy work around solution is manually exporting excel to csv before giving data to TerminateOOOR. 
By doing this, I don't use the feature of mapping an Excel column name to a variable name (used by Kettle).","Q_Score":1,"Tags":"python,ruby,unicode,xml-rpc","A_Id":3698942,"CreationDate":"2010-09-06T11:23:00.000","Title":"Handling unicode data in XMLRPC","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have substantial PHP experience, although I realize that PHP probably isn't the best language for a large-scale web crawler because a process can't run indefinitely. What languages do people suggest?","AnswerCount":7,"Available Count":4,"Score":0.0285636566,"is_accepted":false,"ViewCount":6849,"Q_Id":3664016,"Users Score":1,"Answer":"You could consider using a combination of Python and PyGtkMozEmbed or PyWebKitGtk plus JavaScript to create your spider.\nThe spidering could be done in JavaScript after the page and all other scripts have loaded.\nYou'd have one of the few web spiders that supports JavaScript, and you might pick up some hidden stuff the others don't see :)","Q_Score":3,"Tags":"php,c++,python,web-crawler","A_Id":3664065,"CreationDate":"2010-09-08T01:27:00.000","Title":"What languages are good for writing a web crawler?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have substantial PHP experience, although I realize that PHP probably isn't the best language for a large-scale web crawler because a process can't run indefinitely. 
What languages do people suggest?","AnswerCount":7,"Available Count":4,"Score":-0.0855049882,"is_accepted":false,"ViewCount":6849,"Q_Id":3664016,"Users Score":-3,"Answer":"C# and C++ are probably the best two languages for this, it's just a matter of which you know better and which is faster (C# is probably easier).\nI wouldn't recommend Python, Javascript, or PHP. They will usually be slower in text processing compared to a C-family language. If you're looking to crawl any significant chunk of the web, you'll need all the speed you can get.\nI've used C# and the HtmlAgilityPack to do so before, it works relatively well and is pretty easy to pick up. The ability to use a lot of the same commands to work with HTML as you would XML makes it nice (I had experience working with XML in C#).\nYou might want to test the speed of available C# HTML parsing libraries vs C++ parsing libraries. I know in my app, I was running through 60-70 fairly messy pages a second and pulling a good bit of data out of each (but that was a site with a pretty constant layout).\nEdit: I notice you mentioned accessing a database. Both C++ and C# have libraries to work with most common database systems, from SQLite (which would be great for a quick crawler on a few sites) to midrange engines like MySQL and MSSQL up to the bigger DB engines (I've never used Oracle or DB2 from either language, but it's possible).","Q_Score":3,"Tags":"php,c++,python,web-crawler","A_Id":3664086,"CreationDate":"2010-09-08T01:27:00.000","Title":"What languages are good for writing a web crawler?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have substantial PHP experience, although I realize that PHP probably isn't the best language for a large-scale web crawler because a process can't run indefinitely. 
What languages do people suggest?","AnswerCount":7,"Available Count":4,"Score":1.0,"is_accepted":false,"ViewCount":6849,"Q_Id":3664016,"Users Score":6,"Answer":"Any language you can easily use with a good network library and support for parsing the formats you want to crawl. Those are really the only qualifications.","Q_Score":3,"Tags":"php,c++,python,web-crawler","A_Id":3664054,"CreationDate":"2010-09-08T01:27:00.000","Title":"What languages are good for writing a web crawler?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have substantial PHP experience, although I realize that PHP probably isn't the best language for a large-scale web crawler because a process can't run indefinitely. What languages do people suggest?","AnswerCount":7,"Available Count":4,"Score":1.2,"is_accepted":true,"ViewCount":6849,"Q_Id":3664016,"Users Score":0,"Answer":"C++ - if you know what you're doing. 
You will not need a web server and a web application, because a web crawler is just a client, after all.","Q_Score":3,"Tags":"php,c++,python,web-crawler","A_Id":3664049,"CreationDate":"2010-09-08T01:27:00.000","Title":"What languages are good for writing a web crawler?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm trying to use the SMTP class from Python 2.6.4 to send SMTP email from a WinXP VMware machine.\nAfter the send method is called, I always get this error:\nsocket.error: [Errno 10061] No connection could be made because the target machine actively refused it.\nA few things I noticed:\n\nThe same code works on the physical WinXP machine, with the user in or not in the domain, connected to the same SMTP server.\nIf I use the SMTP server which is set up in the same VM machine, then it works.\n\nAny help is appreciated!","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":828,"Q_Id":3664438,"Users Score":2,"Answer":"The phrase \"...because the target machine actively refused it\" usually means there's a firewall that drops any unauthorized connections. 
Is there a firewall service on the SMTP server that's blocking the WinXP VM's IP address?\nOr, more likely: Is the SMTP server not configured to accept relays from the WinXP VM's IP address?","Q_Score":1,"Tags":"python,email,smtp,vmware","A_Id":3668622,"CreationDate":"2010-09-08T03:23:00.000","Title":"Python smtp connection is always failed in a VMware Windows machine","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am developing an email client in Python.\nIs it possible to check if an email contains an attachement just from the e-mail header without downloading the whole E-Mail?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":2147,"Q_Id":3676344,"Users Score":5,"Answer":"\"attachment\" is quite a broad term. Is an image for HTML message an attachment? \nIn general, you can try analyzing content-type header. If it's multipart\/mixed, most likely the message contains an attachment.","Q_Score":4,"Tags":"python,email,imap,imaplib","A_Id":3676393,"CreationDate":"2010-09-09T12:05:00.000","Title":"Is it possible to check if an email contains an attachement just from the e-mail header?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Can anyone point me towards tutorials for using the Python API in Ntop (other than that Luca Deris paper)?\nIn web interfaces there is about > online documentation > python engine but I think this link has an error. 
Does anyone have access to that document to re-post online for me?","AnswerCount":1,"Available Count":1,"Score":0.537049567,"is_accepted":false,"ViewCount":1189,"Q_Id":3686080,"Users Score":3,"Answer":"If you have ntop installed you can look at the example files in \/usr\/share\/ntop\/python (that's where they're at in the Ubuntu package version, at least).\nIf you have epydoc installed you can run make from within the \/usr\/share\/ntop\/python\/docs directory to generate the documentation. Once you do that the About > Online Documentation > Python ntop Engine > Python API link will work correctly (it seems like a bug that it requires work on the part of the user to fix that link).","Q_Score":0,"Tags":"python","A_Id":7302974,"CreationDate":"2010-09-10T15:46:00.000","Title":"Ntop Python API","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is there anyway I can parse a website by just viewing the content as displayed to the user in his browser? That is, instead of downloading \"page.htm\"l and starting to parse the whole page with all the HTML\/javascript tags, I will be able to retrieve the version as displayed to users in their browsers. 
I would like to \"crawl\" websites and rank them according to keywords popularity (viewing the HTML source version is problematic for that purpose).\nThanks!\nJoel","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":82,"Q_Id":3690560,"Users Score":0,"Answer":"You could get the source and strip the tags out, leaving only non-tag text, which works for almost all pages, except those where JavaScript-generated content is essential.","Q_Score":0,"Tags":"python,html","A_Id":3690576,"CreationDate":"2010-09-11T10:09:00.000","Title":"Counting content only in HTML page","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Is there anyway I can parse a website by just viewing the content as displayed to the user in his browser? That is, instead of downloading \"page.htm\"l and starting to parse the whole page with all the HTML\/javascript tags, I will be able to retrieve the version as displayed to users in their browsers. I would like to \"crawl\" websites and rank them according to keywords popularity (viewing the HTML source version is problematic for that purpose).\nThanks!\nJoel","AnswerCount":3,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":82,"Q_Id":3690560,"Users Score":0,"Answer":"A browser also downloads the page.html and then renders it. You should work the same way. 
Use a html parser like lxml.html or BeautifulSoup, using those you can ask for only the text enclosed within tags (and arguments you do like, like title and alt attributes).","Q_Score":0,"Tags":"python,html","A_Id":3690865,"CreationDate":"2010-09-11T10:09:00.000","Title":"Counting content only in HTML page","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm moving some tests from Selenium to the WebDriver. My problem is that I can't find an equivalent for selenium.wait_for_condition. Do the Python bindings have this at the moment, or is it still planned?","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":3806,"Q_Id":3694508,"Users Score":0,"Answer":"The Java binding include a Wait class. This class repeatedly checks for a condition (with sleeps between) until a timeout is reached. If you can detect the completion of your Javascript using the normal API, you can take the same approach.","Q_Score":8,"Tags":"python,selenium,webdriver","A_Id":3743112,"CreationDate":"2010-09-12T10:37:00.000","Title":"selenium.wait_for_condition equivalent in Python bindings for WebDriver","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"How can we call the CLI executables commands using Python\nFor example i have 3 linux servers which are at the remote location and i want to execute some commands on those servers like finding the version of the operating system or executing any other commands. So how can we do this in Python. I know this is done through some sort of web service (SOAP or REST) or API but i am not sure....... 
So could you all please guide me.","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":170,"Q_Id":3699268,"Users Score":0,"Answer":"Depends on how you want to design your software.\nYou could do stand-alone scripts as servers listening for requests on specific ports,\nor you could use a webserver which runs python scripts so you just have to access a URL.\nREST is one option to implement the latter.\nYou should then look for frameworks for REST development with python, or if it\u2019s simple logic with not so many possible requests can do it on your own as a web-script.","Q_Score":1,"Tags":"python,django,web-services,api,soap","A_Id":3699299,"CreationDate":"2010-09-13T09:41:00.000","Title":"How can we call the CLI executables commands using Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am working on a project that requires me to collect a large list of URLs to websites about certain topics. I would like to write a script that will use google to search specific terms, then save the URLs from the results to a file. How would I go about doing this? I have used a module called xgoogle, but it always returned no results.\nI am using Python 2.6 on Windows 7.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":402,"Q_Id":3732595,"Users Score":0,"Answer":"Make sure that you change the User-Agent of urllib2. The default one tends to get blocked by Google. 
Make sure that you obey the terms of use of the search engine that you're scripting.","Q_Score":0,"Tags":"python,windows,search,hyperlink","A_Id":3732721,"CreationDate":"2010-09-17T04:04:00.000","Title":"Search Crawling \"Bot\"?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am going to write a TCP server; the client sends me XML messages. I am wondering if the condition below will happen and how to avoid it:\n1) client sends <\/cmd>\n2) server is busy doing something\n3) client sends <\/cmd>\n4) server does a recv() and puts the string into a buffer\nWill the buffer be filled with <\/cmd><\/cmd> or even worse <\/cmd>[1]->[2]->...->[Step N]. 
The master program knows the step (state) it is currently at.\nI want to stream this in real time to a website (in the local area network) so that when my colleagues open, say, http:\/\/thecomputer:8000, they can see a real time rendering of the current state of our workflow with any relevant details.\nI've tought about writing the state of the script to an StringIO object (streaming to it) and use Javascript to refresh the browser, but I honestly have no idea how to actually do this.\nAny advice?","AnswerCount":5,"Available Count":1,"Score":0.0399786803,"is_accepted":false,"ViewCount":880,"Q_Id":3739296,"Users Score":1,"Answer":"You could have the python script write an xml file that you get with an ajax request in your web page, and get the status info from that.","Q_Score":4,"Tags":"javascript,python,streaming,real-time","A_Id":3739311,"CreationDate":"2010-09-17T21:47:00.000","Title":"Streaming the state of a Python script to a website","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to see if I can access an online API, but for that, I need to have Internet access.\nHow can I see if there's a connection available and active using Python?","AnswerCount":21,"Available Count":2,"Score":1.0,"is_accepted":false,"ViewCount":229860,"Q_Id":3764291,"Users Score":10,"Answer":"You can just try to download data, and if connection fail you will know that somethings with connection isn't fine.\nBasically you can't check if computer is connected to internet. There can be many reasons for failure, like wrong DNS configuration, firewalls, NAT. 
So even if you run some tests, you have no guarantee that you will have a connection to your API until you try.","Q_Score":162,"Tags":"python,networking","A_Id":3764315,"CreationDate":"2010-09-21T20:39:00.000","Title":"How can I see if there's an available and active network connection in Python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to see if I can access an online API, but for that, I need to have Internet access.\nHow can I see if there's a connection available and active using Python?","AnswerCount":21,"Available Count":2,"Score":0.047583087,"is_accepted":false,"ViewCount":229860,"Q_Id":3764291,"Users Score":5,"Answer":"Try the operation you were attempting to do anyway. If it fails, Python should throw an exception to let you know.\nTrying some trivial operation first to detect a connection would introduce a race condition. What if the internet connection is valid when you test but goes down before you need to do actual work?","Q_Score":162,"Tags":"python,networking","A_Id":3764759,"CreationDate":"2010-09-21T20:39:00.000","Title":"How can I see if there's an available and active network connection in Python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am currently working on exposing data from a legacy system over the web. I have a (legacy) server application that sends and receives data over UDP. The software uses UDP to send sequential updates to a given set of variables in (near) real time (updates every 5-10 ms). 
thus, I do not need to capture all UDP data -- it is sufficient that the latest update is retrieved.\nIn order to expose this data over the web, I am considering building a lightweight web server that reads\/write UDP data and exposes this data over HTTP.\nAs I am experienced with Python, I am considering to use it.\nThe question is the following: how can I (continuously) read data from UDP and send snapshots of it over TCP\/HTTP on-demand with Python? So basically, I am trying to build a kind of \"UDP2HTTP\" adapter to interface with the legacy app so that I wouldn't need to touch the legacy code.\nA solution that is WSGI compliant would be much preferred. Of course any tips are very welcome and MUCH appreciated!","AnswerCount":3,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":4771,"Q_Id":3768019,"Users Score":4,"Answer":"The software uses UDP to send sequential updates to a given set of variables in (near) real-time (updates every 5-10 ms). thus, I do not need to capture all UDP data -- it is sufficient that the latest update is retrieved\n\nWhat you must do is this.\nStep 1.\nBuild a Python app that collects the UDP data and caches it into a file. Create the file using XML, CSV or JSON notation.\nThis runs independently as some kind of daemon. This is your listener or collector. \nWrite the file to a directory from which it can be trivially downloaded by Apache or some other web server. Choose names and directory paths wisely and you're done.\nDone.\nIf you want fancier results, you can do more. You don't need to, since you're already done.\nStep 2. \nBuild a web application that allows someone to request this data being accumulated by the UDP listener or collector.\nUse a web framework like Django for this. Write as little as possible. Django can serve flat files created by your listener.\nYou're done. Again.\nSome folks think relational databases are important. If so, you can do this. 
Even though you're already done.\nStep 3.\nModify your data collection to create a database that the Django ORM can query. This requires some learning and some adjusting to get a tidy, simple ORM model.\nThen write your final Django application to serve the UDP data being collected by your listener and loaded into your Django database.","Q_Score":6,"Tags":"python,wsgi","A_Id":3768227,"CreationDate":"2010-09-22T09:44:00.000","Title":"How to serve data from UDP stream over HTTP in Python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I am trying to use python's imaplib to create an email and send it to a mailbox with specific name, e.g. INBOX. Anyone has some great suggestion :).","AnswerCount":4,"Available Count":1,"Score":-1.0,"is_accepted":false,"ViewCount":18674,"Q_Id":3769701,"Users Score":-6,"Answer":"No idea how they do it but doesn't Microsoft Outlook let you move an email from a local folder to a remote IMAP folder?","Q_Score":5,"Tags":"python,imaplib","A_Id":3787209,"CreationDate":"2010-09-22T13:28:00.000","Title":"How to create an email and send it to specific mailbox with imaplib","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"how do i run a python program that is received by a client from server without writing it into a new python file?","AnswerCount":4,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":387,"Q_Id":3798067,"Users Score":0,"Answer":"Dcolish's answer is good. 
I'm not sure the idea of executing code that comes in on a network interface is good in itself, though - you will need to take care to verify that you can trust the sending party, especially if this interface is going to be exposed to the Internet or really any production network.","Q_Score":0,"Tags":"python","A_Id":3805092,"CreationDate":"2010-09-26T13:49:00.000","Title":"network programming in python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm using web2py for an intranet site and need to get current login windows user id in my controller. Whether any function is available?","AnswerCount":3,"Available Count":1,"Score":0.0665680765,"is_accepted":false,"ViewCount":996,"Q_Id":3798606,"Users Score":1,"Answer":"If you mean you need code at the server to know the windows id of the current browser user, web2py isn't going to be able to tell you that. Windows authentication has nothing to do with web protocols.","Q_Score":3,"Tags":"python,windows,web2py","A_Id":3798630,"CreationDate":"2010-09-26T16:04:00.000","Title":"How to get windows user id in web2py for an intranet application?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I changed my domain from abc.com to xyz.com. 
After that my facebook authentication is not working.\nIt is throwing a key error KeyError: 'access_token'. I am using Python as my language.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":186,"Q_Id":3806082,"Users Score":0,"Answer":"You probably need to update the domain in the Facebook settings\/API key which allows you access.","Q_Score":0,"Tags":"python,facebook","A_Id":3806101,"CreationDate":"2010-09-27T17:08:00.000","Title":"My facebook authentication is not working?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Is there any Python function that validates e-mail addresses, aware of IDN domains?\nFor instance, user@example.com should be as correct as user@z\u00e4\u00e4z.de or user@\u7d0d\u8c46.ac.jp\nThanks.","AnswerCount":3,"Available Count":1,"Score":0.0665680765,"is_accepted":false,"ViewCount":548,"Q_Id":3806393,"Users Score":1,"Answer":"It is very difficult to validate an e-mail address because the syntax is so flexible. 
The best strategy is to send a test e-mail to the entered address.","Q_Score":0,"Tags":"python","A_Id":3806787,"CreationDate":"2010-09-27T17:43:00.000","Title":"Function to validate an E-mail (IDN aware)","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I've been searching on this but can't seem to find an exact answer (most get into more complicated things like multithreading, etc), I just want to do something like a Try, Except statement where if the process doesn't finish within X number of seconds it will throw an exception.\nEDIT: The reason for this is that I am using a website testing software (selenium) with a configuration that sometimes causes it to hang. It doesn't throw an error, doesn't timeout or do anything so I have no way of catching it. I am wondering what the best way is to determine that this has occured so I can move on in my application, so I was thinking if I could do something like, \"if this hasn't finished by X seconds... move on\".","AnswerCount":3,"Available Count":1,"Score":0.1325487884,"is_accepted":false,"ViewCount":6285,"Q_Id":3810869,"Users Score":2,"Answer":"You can't do it without some sort of multithreading or multiprocessing, even if that's hidden under some layers of abstraction, unless that \"process\" you're running is specifically designed for asynchronicity and calls-back to a known function once in a while.\nIf you describe what that process actually is, it will be easier to provide real solutions. I don't think that you appreciate the power of Python where it comes to implementations that are succinct while being complete. 
This may take just a few lines of code to implement, even if using multithreading\/multiprocessing.","Q_Score":4,"Tags":"python,error-handling,timeout","A_Id":3810878,"CreationDate":"2010-09-28T08:15:00.000","Title":"Python, Timeout of Try, Except Statement after X number of seconds?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"There is the URL of page on the Internet. I need to get a screenshot of this page (no matter in which browser). \nI need a script (PHP, Python (even Django framework)) that receives the URL (string) and output screenshot-file at the exit (file gif, png, jpg).\nUPD:\nI need dynamically create a page where opposite to URL will be placed screenshot of the page with the same URL.","AnswerCount":6,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":31489,"Q_Id":3811674,"Users Score":0,"Answer":"If you are family with Python, you can use PyQt4. This library supports to get screenshot from a url.","Q_Score":7,"Tags":"php,python,django,url,screenshot","A_Id":21106018,"CreationDate":"2010-09-28T10:08:00.000","Title":"Convert URL to screenshot (script)","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"How can I know if a node that is being accessed using TCP socket is alive or if the connection was interrupted and other errors?\nThanks!","AnswerCount":1,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":240,"Q_Id":3813451,"Users Score":2,"Answer":"You can't. 
Any intermediate nodes can drop your packets or the reply packets from the remote node.","Q_Score":1,"Tags":"python,sockets,system,distributed","A_Id":3813510,"CreationDate":"2010-09-28T13:51:00.000","Title":"Python socket programming","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I wish to set the namespace prefix in xml.etree. I found register_namespace(prefix, url) on the Web but this threw \"unknown attribute\". I have also tried nsmap=NSMAP but this also fails. I'd be grateful for example syntax that shows how to add specified namespace prefixes","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1128,"Q_Id":3814365,"Users Score":1,"Answer":"register_namespace was only introduced in lxml 2.3 (still beta)\nI believe you can provide an nsmap parameter (dictionary with prefix-uri mappings) when creating an element, but I don't think you can change it for an existing element. (there is an .nsmap property on the element, but changing that doesn't seem to work. There is also a .prefix property on the element, but that's read-only)","Q_Score":3,"Tags":"python,xml.etree","A_Id":3814788,"CreationDate":"2010-09-28T15:28:00.000","Title":"how to set namespace prefixes in xml.etree","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"i wrote a py script to fetch page from web,it just read write permission enough,so my question is when we need execute permission?","AnswerCount":3,"Available Count":3,"Score":1.0,"is_accepted":false,"ViewCount":6642,"Q_Id":3822336,"Users Score":6,"Answer":"Read\/write is enough if you want to run it by typing python file.py. 
If you want to run it directly as if it were a compiled program, e.g. .\/file.py, then you need execute permission (and the appropriate hash-bang line at the top).","Q_Score":4,"Tags":"python,permissions,chmod","A_Id":3822354,"CreationDate":"2010-09-29T14:01:00.000","Title":"when we need chmod +x file.py","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"i wrote a py script to fetch page from web,it just read write permission enough,so my question is when we need execute permission?","AnswerCount":3,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":6642,"Q_Id":3822336,"Users Score":0,"Answer":"If you want to be able to run it directly with $ file.py then you'll need the execute bit set. Otherwise you can run it with $ python file.py.","Q_Score":4,"Tags":"python,permissions,chmod","A_Id":3822352,"CreationDate":"2010-09-29T14:01:00.000","Title":"when we need chmod +x file.py","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"i wrote a py script to fetch page from web,it just read write permission enough,so my question is when we need execute permission?","AnswerCount":3,"Available Count":3,"Score":1.2,"is_accepted":true,"ViewCount":6642,"Q_Id":3822336,"Users Score":5,"Answer":"It's required to do so if you need to run the script in this way: .\/file.py. 
Keep in mind though, you need to put the path of python at the very top of the script: #!\/usr\/bin\/python.\nBut wait, you need to make sure you have the proper path, to do that execute: which python.","Q_Score":4,"Tags":"python,permissions,chmod","A_Id":3822410,"CreationDate":"2010-09-29T14:01:00.000","Title":"when we need chmod +x file.py","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"The situation is that I have a small datacenter, with each server running python instances. It's not your usual distributed worker setup, as each server has a specific role with an appropriate long-running process.\nI'm looking for good ways to implement the the cross-server communication. REST seems like overkill. XML-RPC seems nice, but I haven't played with it yet. What other libraries should I be looking at to get this done?\nRequirements:\nComputation servers crunch numbers in the background. Other servers would like to occasionally ask them for values, based upon their calculation sets. I know this seems pretty well aligned with a REST mentality, but I'm curious about other options.","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":196,"Q_Id":3823420,"Users Score":1,"Answer":"It wasn't obvious from your question but if getting answers back synchronously doesn't matter to you (i.e., you are just asking for work to be performed) you might want to consider just using a job queue. It's generally the easiest way to communicate between hosts. If you don't mind depending on AWS using SQS is super simple. If you can't depend on AWS then you might want to try something like RabbitMQ. 
Many times problems that we think need to be communicated synchronously are really just queues in disguise.","Q_Score":0,"Tags":"python,network-protocols","A_Id":3824612,"CreationDate":"2010-09-29T15:57:00.000","Title":"What are good specs\/libraries for closed network communication in python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"When I call selenium.get_text(\"foo\") on a certain element it returns back a different value depending on what browser I am working in due to the way each browser handles newlines.\nExample:\nAn elements string is \"hello[newline]how are you today?[newline]Very well, thank you.\"\nWhen selenium gets this back from IE it gets the string \"hello\\nhow are you today?\\nVery well, thank you.\"\nWhen selenium gets this back from Firefox it gets the string \"hello\\n how are you today?\\n Very well, thank you.\"\n(Notice that IE changes [newline] into '\\n' and Firefox changes it into '\\n ')\nIs there anyway using selenium\/python that I can easily strip out this discrepancy?\nI thought about using .replace(\"\\n \", \"\\n\"), but that would cause issues if there was an intended space after a newline (for whatever reason).\nAny ideas?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1614,"Q_Id":3824734,"Users Score":0,"Answer":"I ended up just doing a check of what browser I was running and then returning the string with the '\\n ' replaced with '\\n' if the browser was firefox.","Q_Score":1,"Tags":"python,selenium","A_Id":3825411,"CreationDate":"2010-09-29T18:31:00.000","Title":"Selenium and Python: remove \\n from returned selenium.get_text()","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System 
Administration and DevOps":0,"Web Development":0},{"Question":"I have a script that runs continuously when invoked and every 5 minutes checks my gmail inbox. To get it to run every 5 minutes I am using the time.sleep() function. However, I would like the user to be able to end the script at any time by pressing q, which it seems can't be done when using time.sleep(). Any suggestions on how I can do this?\nAli","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":5470,"Q_Id":3836620,"Users Score":0,"Answer":"If you really wanted to (and wanted to waste a lot of resources), you could cut your loop into 200 ms chunks. So sleep 200 ms, check input, repeat until five minutes elapse, and then check your inbox. I wouldn't recommend it, though.\nWhile it's sleeping, the process is blocked and won't receive input until the sleep ends.\nOh, as an added note, if you hit the key while it's sleeping, it should still go into the buffer, so it'll get pulled out when the sleep ends and input is finally read, IIRC.","Q_Score":4,"Tags":"python,continuous","A_Id":3836650,"CreationDate":"2010-10-01T04:51:00.000","Title":"Continous loop and exiting in python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I just need to write a simple python CGI script to parse the contents of a POST request containing JSON. 
This is only test code so that I can test a client application until the actual server is ready (written by someone else).\nI can read the cgi.FieldStorage() and dump the keys() but the request body containing the JSON is nowhere to be found.\nI can also dump the os.environ() which provides lots of info except that I do not see a variable containing the request body.\nAny input appreciated.\nChris","AnswerCount":2,"Available Count":1,"Score":1.0,"is_accepted":false,"ViewCount":8362,"Q_Id":3836828,"Users Score":8,"Answer":"Notice that if you call cgi.FieldStorage() earlier in your code, you can't get the body data from stdin afterwards, because stdin can only be read once.","Q_Score":11,"Tags":"python,parsing,cgi,request","A_Id":39910366,"CreationDate":"2010-10-01T05:49:00.000","Title":"How to parse the \"request body\" using python CGI?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a list of xml examples I would like to turn into schemas (xsd files). Exactly what the trang tool does (http:\/\/www.thaiopensource.com\/relaxng\/trang.html). 
I don't like calling trang from my script (i.e doing os.system('java -jar trang...')) - is there a python package I can use instead?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":387,"Q_Id":3849632,"Users Score":0,"Answer":"If you are running Jython (http:\/\/jython.org\/) then you could import trang and run it internally.","Q_Score":3,"Tags":"python,xml,xsd","A_Id":5188517,"CreationDate":"2010-10-03T11:59:00.000","Title":"Python: Is there a way to generate xsd files based on xml examples","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a server with two separate Ethernet connections. When I bind a socket in python it defaults to one of the two networks. How do I pull a multicast stream from the second network in Python? I have tried calling bind using the server's IP address on the second network, but that hasn't worked.","AnswerCount":4,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":9358,"Q_Id":3859090,"Users Score":0,"Answer":"I figured it out. It turns out that the piece I was missing was adding the interface to the mreq structure that is used in adding membership to a multicast group.","Q_Score":6,"Tags":"python,sockets,networking","A_Id":3873419,"CreationDate":"2010-10-04T21:04:00.000","Title":"Choosing multicast network interface in Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"How can I call shutdown() in a SocketServer after receiving a certain message \"exit\"? 
As I know, the call to serve_forever() will block the server.\nThanks!","AnswerCount":2,"Available Count":1,"Score":0.3799489623,"is_accepted":false,"ViewCount":3881,"Q_Id":3863281,"Users Score":4,"Answer":"No the serve_forever is checking a flag on a regular basis (by default 0.5 sec). Calling shutdown will raise this flag and cause the serve_forever to end.","Q_Score":6,"Tags":"python,sockets,socketserver","A_Id":3863539,"CreationDate":"2010-10-05T11:40:00.000","Title":"Python SocketServer","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Im using scrapy to crawl a news website on a daily basis. How do i restrict scrapy from scraping already scraped URLs. Also is there any clear documentation or examples on SgmlLinkExtractor.","AnswerCount":5,"Available Count":2,"Score":0.0399786803,"is_accepted":false,"ViewCount":9648,"Q_Id":3871613,"Users Score":1,"Answer":"I think jama22's answer is a little incomplete. \nIn the snippet if self.FILTER_VISITED in x.meta:, you can see that you require FILTER_VISITED in your Request instance in order for that request to be ignored. This is to ensure that you can differentiate between links that you want to traverse and move around and item links that well, you don't want to see again.","Q_Score":15,"Tags":"python,web-crawler,scrapy","A_Id":8830983,"CreationDate":"2010-10-06T10:38:00.000","Title":"Scrapy - how to identify already scraped urls","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Im using scrapy to crawl a news website on a daily basis. How do i restrict scrapy from scraping already scraped URLs. 
Also is there any clear documentation or examples on SgmlLinkExtractor.","AnswerCount":5,"Available Count":2,"Score":0.0399786803,"is_accepted":false,"ViewCount":9648,"Q_Id":3871613,"Users Score":1,"Answer":"Scrapy can automatically filter URLs that have already been scraped, can't it? Note, however, that different URLs pointing to the same page, such as \"www.xxx.com\/home\/\" and \"www.xxx.com\/home\/index.html\", will not be filtered.","Q_Score":15,"Tags":"python,web-crawler,scrapy","A_Id":13578588,"CreationDate":"2010-10-06T10:38:00.000","Title":"Scrapy - how to identify already scraped urls","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am running my code on multiple VPSes (with more than one IP, which are set up as aliases to the network interfaces) and I am trying to figure out a way such that my code acquires the IP addresses from the network interfaces on the fly and binds to them. Any ideas on how to do it in Python without adding a 3rd party library?\nEdit I know about socket.gethostbyaddr(socket.gethostname()) and about the 3rd party package netifaces, but I am looking for something more elegant from the standard library ... 
and parsing the output of the ifconfig command is not something elegant :)","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":118,"Q_Id":3881951,"Users Score":0,"Answer":"The IP addresses are assigned to your VPSes, no possibility to change them on the fly.\nYou have to open a SSH tunnel to or install a proxy on your VPSes.\nI think a SSH tunnel would be the best way how to do it, and then use it as SOCKS5 proxy from Python.","Q_Score":1,"Tags":"python","A_Id":3882193,"CreationDate":"2010-10-07T13:10:00.000","Title":"figuring out how to get all of the public ips of a machine","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to show .ppt (PowerPoint) files uploaded by my user on my website. I could do this by converting them into Flash files, then showing the Flash files on the web page. But I don't want to use Flash to do this. I want to show it, like google docs shows, without using Flash.\nI've already solved the problem for .pdf files by converting them into images using ImageMagick, but now I have trouble with .ppt files.","AnswerCount":3,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1773,"Q_Id":3882249,"Users Score":0,"Answer":"Now i found a solution to showing .ppt file on my website without using the flash \nthe solution is:\njust convert the .ppt file to .pdf files using any language or using software(e.g. 
open office) and then use ImageMagick to convert that .pdf into an image and show it on your web page.\nOnce again, thanks to you all for answering my question.","Q_Score":2,"Tags":"java,c++,python,powerpoint,google-docs","A_Id":3899697,"CreationDate":"2010-10-07T13:45:00.000","Title":"How google docs shows my .PPT files without using a flash viewer?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"On my linux machine, 1 of 3 network interfaces may be actually connected to the internet. I need to get the IP address of the currently connected interface, keeping in mind that my other 2 interfaces may be assigned IP addresses, just not be connected.\nI can just ping a website through each of my interfaces to determine which one has connectivity, but I'd like to get this faster than waiting for a ping timeout. And I'd like to not have to rely on an external website being up.\nUpdate:\nAll my interfaces may have IP addresses and gateways. This is for an embedded device. So we allow the user to choose between say eth0 and eth1. But if there's no connection on the interface that the user tells us to use, we fall back to say eth2 which (in theory) will always work.\nSo what I need to do is first check if the user's selection is connected and if so return that IP. Otherwise I need to get the IP of eth2. 
I can get the IPs of the interfaces just fine, it's just determining which one is actually connected.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":2443,"Q_Id":3885160,"Users Score":0,"Answer":"If the default gateway for the system is reliable, then grab that from the output from route -n the line that contains \" UG \" (note the spaces) will also contain the IP of the gateway and interface name of the active interface.","Q_Score":1,"Tags":"python,linux,networking,ip-address","A_Id":3885255,"CreationDate":"2010-10-07T19:24:00.000","Title":"Determine IP address of CONNECTED interface (linux) in python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"Using just python, is it possible to possible to use a USB flash drive to serve files locally to a browser, and save information off the online web?\nIdeally I would only need python.\nWhere would I start?","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":195,"Q_Id":3885519,"Users Score":0,"Answer":"This doesn't seem much different then serving files from a local hard drive. You could map the thumbdrive to always be something not currently used on your machine (like U:).","Q_Score":0,"Tags":"python,web-services,usb","A_Id":3885544,"CreationDate":"2010-10-07T20:13:00.000","Title":"Is it possible to use a USB flash drive to serve files locally to a browser?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have client for web interface to long running process. I'd like to have output from that process to be displayed as it comes. 
Works great with urllib.urlopen(), but it doesn't have a timeout parameter. On the other hand, with urllib2.urlopen() the output is buffered. Is there an easy way to disable that buffering?","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1600,"Q_Id":3888812,"Users Score":0,"Answer":"A quick hack that occurred to me is to use urllib.urlopen() with threading.Timer() to emulate a timeout. But that's only a quick and dirty hack.","Q_Score":1,"Tags":"python,urllib2,urllib,buffering,urlopen","A_Id":3888827,"CreationDate":"2010-10-08T08:20:00.000","Title":"unbuffered urllib2.urlopen","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"Is it possible to stream my webcam from my local machine that's connected to the internet to show up on my website without using any media server or something similar?","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":807,"Q_Id":3890271,"Users Score":1,"Answer":"You could do it with some kind of Java applet or Flash\/Silverlight application; just look at sites like \"chat roulette\".","Q_Score":0,"Tags":"python,webcam","A_Id":3890377,"CreationDate":"2010-10-08T12:03:00.000","Title":"How to stream my webcam through my site?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"OK, so I'm using websockets to let JavaScript talk to Python and that works very well, BUT the data I need to send often has several parts, like an array (username, time, text). But how could I send it? 
I originally thought to encode each one in base64 or urlencode, then use a character like | which those encoding methods will never use, and then split the information. Unfortunately I can't find a method which both Python and JavaScript can do.\nSo the question: is there an encoding method which both can do, OR is there a different, better way I can send the data? I haven't really done anything like this before. (I have made AJAX requests, and I send that data URL-encoded.) Also I'm not sending miles of text, about 100 bytes at a time if that.\nThank you!\nedit\nMost comments point to JSON, so: what's the best converter to use for JavaScript, since JavaScript seemingly can't convert a string to JSON, or the other way round?\nFinished\nWell, JavaScript does have a native way to convert an object to a string; it's just hidden from the world. JSON.stringify(obj, [replacer], [space]) to convert it to a string and JSON.parse(string, [reviver]) to convert it back","AnswerCount":2,"Available Count":1,"Score":1.0,"is_accepted":false,"ViewCount":1908,"Q_Id":3890390,"Users Score":7,"Answer":"JSON is definitely the way to go. It has a very small overhead and is capable of storing almost any kind of data. I am not a Python expert, but I am sure that there is some kind of en\/decoder available.","Q_Score":1,"Tags":"javascript,python,sockets,encoding","A_Id":3890407,"CreationDate":"2010-10-08T12:22:00.000","Title":"Python to javascript communication","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":1},{"Question":"I need to expose an RS232 connection to clients via a network socket. I plan to write, in Python, a TCP socket server which will listen on some port, allow clients to connect, and manage and control requests and replies to and from the RS232 port. 
\nMy question is: how do I synchronize the clients? Each client will send some string to the serial port, and after the reply comes back I need to return the result to that client. Only then do I need to process the next request.\nHow do I synchronize access to the serial port?","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":1048,"Q_Id":3900403,"Users Score":1,"Answer":"The simplest way is to accept a connection, handle the request, and close the connection. This way, your program handles only one request at a time.\nAn alternative is to use locking or semaphores, to prevent multiple clients from accessing the RS232 port simultaneously.","Q_Score":1,"Tags":"python,sockets,serial-port","A_Id":3900446,"CreationDate":"2010-10-10T13:03:00.000","Title":"Writing a TCP to RS232 driver","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Maybe this is a noob question, but I'm receiving some data over TCP and when I look at the string I get the following:\n\\x00\\r\\xeb\\x00\\x00\\x00\\x00\\x01t\\x00\nWhat is that \\r character, and what does the t in \\x01t mean?\nI've tried Googling, but I'm not sure what to Google for...\nthanks.","AnswerCount":3,"Available Count":1,"Score":1.0,"is_accepted":false,"ViewCount":819,"Q_Id":3906903,"Users Score":9,"Answer":"\\r is a carriage return (0x0d), and the t is a literal t (0x74): printable bytes are shown as themselves rather than as \\x escapes.","Q_Score":0,"Tags":"python,networking,character,bits","A_Id":3906938,"CreationDate":"2010-10-11T14:00:00.000","Title":"Non-binary(hex) characters in string received over TCP with Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Let's say there is a server on the 
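The locking alternative from that answer can be sketched like this (the `write_fn`/`read_fn` callables are hypothetical stand-ins for a pyserial port's write/readline; the lock serializes each request/reply cycle):

```python
import threading

port_lock = threading.Lock()  # one lock guarding the single RS232 port

def query_serial(request, write_fn, read_fn):
    """Send one request and return its reply, atomically.

    Holding the lock across the whole write+read means other client
    threads cannot interleave their traffic with ours.
    """
    with port_lock:
        write_fn(request)
        return read_fn()

# Demo with a fake "port" that echoes the last request:
sent = []
reply = query_serial(b"PING", sent.append, lambda: b"PONG:" + sent[-1])
print(reply)  # b'PONG:PING'
```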
internet that one can send a piece of code to for evaluation. At some point the server takes all code that has been submitted and starts running and evaluating it. However, at some point it will definitely bump into \"os.system('rm -rf *')\" sent by some evil programmer. Apart from \"rm -rf\" you could expect people to try using the server to send spam or DoS someone, or fool around with \"while True: pass\" kind of things.\nIs there a way to cope with such unfriendly\/untrusted code? In particular I'm interested in a solution for Python. However, if you have info for any other language, please share.","AnswerCount":7,"Available Count":4,"Score":0.057080742,"is_accepted":false,"ViewCount":5900,"Q_Id":3910223,"Users Score":2,"Answer":"It's impossible to provide an absolute solution for this because the definition of 'bad' is pretty hard to nail down.\nIs opening and writing to a file bad or good? What if that file is \/dev\/ram?\nYou can profile signatures of behavior, or you can try to block anything that might be bad, but you'll never win. JavaScript is a pretty good example of this: people run arbitrary JavaScript code all the time on their computers -- it's supposed to be sandboxed, but all sorts of security problems and edge conditions crop up.\nI'm not saying don't try; you'll learn a lot from the process.\nMany companies have spent millions (Intel just spent billions on McAfee) trying to understand how to detect 'bad code' -- and every day machines running McAfee anti-virus get infected with viruses. Python code isn't any less dangerous than C. 
You can run system calls, bind to C libraries, etc.","Q_Score":8,"Tags":"python,trusted-vs-untrusted","A_Id":3910435,"CreationDate":"2010-10-11T21:53:00.000","Title":"sandbox to execute possibly unfriendly python code","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Let's say there is a server on the internet that one can send a piece of code to for evaluation. At some point server takes all code that has been submitted, and starts running and evaluating it. However, at some point it will definitely bump into \"os.system('rm -rf *')\" sent by some evil programmer. Apart from \"rm -rf\" you could expect people try using the server to send spam or dos someone, or fool around with \"while True: pass\" kind of things.\nIs there a way to coop with such unfriendly\/untrusted code? In particular I'm interested in a solution for python. However if you have info for any other language, please share.","AnswerCount":7,"Available Count":4,"Score":0.0,"is_accepted":false,"ViewCount":5900,"Q_Id":3910223,"Users Score":0,"Answer":"I think a fix like this is going to be really hard and it reminds me of a lecture I attended about the benefits of programming in a virtual environment. \nIf you're doing it virtually its cool if they bugger it. It wont solve a while True: pass but rm -rf \/ won't matter.","Q_Score":8,"Tags":"python,trusted-vs-untrusted","A_Id":3910850,"CreationDate":"2010-10-11T21:53:00.000","Title":"sandbox to execute possibly unfriendly python code","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Let's say there is a server on the internet that one can send a piece of code to for evaluation. 
At some point server takes all code that has been submitted, and starts running and evaluating it. However, at some point it will definitely bump into \"os.system('rm -rf *')\" sent by some evil programmer. Apart from \"rm -rf\" you could expect people try using the server to send spam or dos someone, or fool around with \"while True: pass\" kind of things.\nIs there a way to coop with such unfriendly\/untrusted code? In particular I'm interested in a solution for python. However if you have info for any other language, please share.","AnswerCount":7,"Available Count":4,"Score":0.057080742,"is_accepted":false,"ViewCount":5900,"Q_Id":3910223,"Users Score":2,"Answer":"I would seriously consider virtualizing the environment to run this stuff, so that exploits in whatever mechanism you implement can be firewalled one more time by the configuration of the virtual machine.\nNumber of users and what kind of code you expect to test\/run would have considerable influence on choices btw. If they aren't expected to link to files or databases, or run computationally intensive tasks, and you have very low pressure, you could be almost fine by just preventing file access entirely and imposing a time limit on the process before it gets killed and the submission flagged as too expensive or malicious.\nIf the code you're supposed to test might be any arbitrary Django extension or page, then you're in for a lot of work probably.","Q_Score":8,"Tags":"python,trusted-vs-untrusted","A_Id":3910730,"CreationDate":"2010-10-11T21:53:00.000","Title":"sandbox to execute possibly unfriendly python code","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Let's say there is a server on the internet that one can send a piece of code to for evaluation. 
At some point server takes all code that has been submitted, and starts running and evaluating it. However, at some point it will definitely bump into \"os.system('rm -rf *')\" sent by some evil programmer. Apart from \"rm -rf\" you could expect people try using the server to send spam or dos someone, or fool around with \"while True: pass\" kind of things.\nIs there a way to coop with such unfriendly\/untrusted code? In particular I'm interested in a solution for python. However if you have info for any other language, please share.","AnswerCount":7,"Available Count":4,"Score":0.0,"is_accepted":false,"ViewCount":5900,"Q_Id":3910223,"Users Score":0,"Answer":"Unless I'm mistaken (and I very well might be), this is much of the reason behind the way Google changed Python for the App Engine. You run Python code on their server, but they've removed the ability to write to files. All data is saved in the \"nosql\" database. \nIt's not a direct answer to your question, but an example of how this problem has been dealt with in some circumstances.","Q_Score":8,"Tags":"python,trusted-vs-untrusted","A_Id":3910895,"CreationDate":"2010-10-11T21:53:00.000","Title":"sandbox to execute possibly unfriendly python code","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm using urllib.urlopen to read a file from a URL. What is the best way to get the filename? Do servers always return the Content-Disposition header?\nThanks.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":575,"Q_Id":3912910,"Users Score":1,"Answer":"It's an optional header, so no. 
See if it exists, and if not then fall back to checking the URL.","Q_Score":1,"Tags":"python,urllib,urlopen","A_Id":3912922,"CreationDate":"2010-10-12T08:49:00.000","Title":"Get filename when using urllib.urlopen","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is there any way in python for S60 (using the python 2.5.4 codebase) to track the amount of data transferred over the mobile device's internet connection?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":83,"Q_Id":3928685,"Users Score":2,"Answer":"Symbian C++ API has such a capability, so it is possible to write a python library for that, but if such already exists, that I do not know...\nBR\nSTeN","Q_Score":0,"Tags":"python,symbian,pys60","A_Id":4002429,"CreationDate":"2010-10-13T22:48:00.000","Title":"Measuring internet data transfers","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Disclaimer here: I'm really not a programmer. I'm eager to learn, but my experience is pretty much basic on c64 20 years ago and a couple of days of learning Python.\nI'm just starting out on a fairly large (for me as a beginner) screen scraping project. So far I have been using python with mechanize+lxml for my browsing\/parsing. Now I'm encountering some really javascript heavy pages that doesn't show a anything without javascript enabled, which means trouble for mechanize.\nFrom my searching I've kind come to the conclusion that I have a basically a few options:\n\nTrying to figure out what the javascript is doing a emulate that in my code (I don't quite know where to start with this. 
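The fallback logic that answer describes can be sketched as follows (Python 3 module names; `guess_filename` is a hypothetical helper, and real-world Content-Disposition values can be messier than this):

```python
import posixpath
from email.message import Message
from urllib.parse import unquote, urlsplit

def guess_filename(url, content_disposition=None):
    """Prefer the Content-Disposition filename, else use the URL path."""
    if content_disposition:
        # email.message.Message can parse the header's key=value params
        msg = Message()
        msg["Content-Disposition"] = content_disposition
        name = msg.get_param("filename", header="Content-Disposition")
        if name:
            return name
    # Fallback: last path segment of the URL, percent-escapes decoded
    return posixpath.basename(unquote(urlsplit(url).path))

print(guess_filename("http://example.com/dl", 'attachment; filename="data.csv"'))
print(guess_filename("http://example.com/files/report%20final.pdf"))
```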
;-))\nUsing pywin32 to control Internet Explorer or something similar, like using the WebKit browser from PyQt4, or even using telnet and MozRepl (this seems really hard)\nSwitching language to Perl, since WWW::Mechanize seems to be a lot more mature on Perl (add-ons and such for JavaScript). Don't know too much about this at all.\n\nIf anyone has some pointers here that would be great. I understand that I need to do a lot of trial and error, but it would be nice not to stray too far from the \"true\" answer, if there is such a thing.","AnswerCount":4,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":795,"Q_Id":3929005,"Users Score":0,"Answer":"For nonprogrammers, I recommend using IRobotSoft. It is visually oriented, with full JavaScript support. The shortcoming is that it runs only on Windows. The good thing is that you can become an expert at the software just by trial and error.","Q_Score":2,"Tags":"python,screen-scraping","A_Id":3992151,"CreationDate":"2010-10-13T23:58:00.000","Title":"Options for handling javascript heavy pages while screen scraping","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am trying to make an app for authenticating users with their Facebook account in Python. The app opens the Facebook login page in a web browser. After the user logs in, Facebook redirects to their dummy success page. At that moment I need to capture that redirect URL in my app. I am not able to catch that URL.\nI am opening the FB login page using webbrowser.open. How can I catch the redirect URL after opening the web browser?\nAny suggestions will be very helpful.\nThanks,\nTara Singh
You might want to look at that.\n-Roozbeh","Q_Score":2,"Tags":"python,facebook","A_Id":4663777,"CreationDate":"2010-10-14T04:38:00.000","Title":"Catching the Access Token sent by Facebook after successful authentication","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Do you think is technically possible to take a screeshot of a website programmatically?\nI would like to craft a scheduled Python task that crawls a list of websites taking an homepage screenshot of them.\nDo you think is technically feasible or do you know third party website that offer a service like that (Input: url --> Output: screenshot) ?\nAny suggestion?","AnswerCount":5,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":2523,"Q_Id":3940098,"Users Score":0,"Answer":"It's certainly technically possible.\nYou would probably have to render the HTML directly onto an image file (or more likely, onto an in-memory bitmap that's written to an image file once completed).\nI don't know any libraries to do this for you (apart from a modified WebKit, perhaps)... but there's certainly websites that do this.\nOf course, this is a bit more involved than just opening the page in a browser on a machine and taking a screenshot programatically, but the result would likely be better if you don't care about the result from a specific browser.","Q_Score":6,"Tags":"python,google-app-engine,screenshot","A_Id":3940121,"CreationDate":"2010-10-15T06:53:00.000","Title":"Is it technically possible to take a screenshot of a website programmatically?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm unfamiliar with the new oauth system. 
I wanted to crawl the status updates of my friends, and their friends' (if permissions allow), with my specified account credentials using the python-twitter API. \nWith the new OAuth authentication, does it mean that I have to first register an application with Twitter before I can use the API?","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":206,"Q_Id":3940774,"Users Score":1,"Answer":"Yes, that's right. You need to register the application and \"grant access\" to connect it with your Twitter ID if you want to, for example, post something on your Twitter wall. Also see \"connections\" in your Twitter ID.","Q_Score":1,"Tags":"python,oauth,twitter","A_Id":3940860,"CreationDate":"2010-10-15T08:53:00.000","Title":"noob question regarding twitter oauth","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm unfamiliar with the new oauth system. I wanted to crawl the status updates of my friends, and their friends' (if permissions allow) with my specified account credentials using the python-twitter api. 
\nWith the new OAuth authentication, does it mean that I have to first register an application with Twitter before I can use the API?","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":206,"Q_Id":3940774,"Users Score":0,"Answer":"To use the API you must register your application, or use GET methods to post to Twitter through the web interface.","Q_Score":1,"Tags":"python,oauth,twitter","A_Id":3940840,"CreationDate":"2010-10-15T08:53:00.000","Title":"noob question regarding twitter oauth","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I can traverse generic tags easily with BS, but I don't know how to find specific tags. For example, how can I find all occurrences of
? Is this possible with BS?","AnswerCount":2,"Available Count":1,"Score":1.0,"is_accepted":false,"ViewCount":62029,"Q_Id":3945750,"Users Score":9,"Answer":"with bs4 things have changed a little. so the code should look like this\nsoup = BeautifulSoup(htmlstring,'lxml')\nsoup.find_all('div', {'style':\"width=300px;\"})","Q_Score":29,"Tags":"python,beautifulsoup","A_Id":45193575,"CreationDate":"2010-10-15T20:11:00.000","Title":"Find a specific tag with BeautifulSoup","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I always found links in html source stored in such format, the question is how do I change such links back to what it's normally like?\nThanks a lot!","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":797,"Q_Id":3949739,"Users Score":0,"Answer":"urllib.unquote() on its own may still cause problems by throwing the exception:\nUnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position n: ordinal not in range(128)\nIn that case try:\nprint urllib.unquote(\"Ober%C3%B6sterreich.txt\").decode(\"utf8\")","Q_Score":0,"Tags":"python,html","A_Id":5693726,"CreationDate":"2010-10-16T16:33:00.000","Title":"How to turn an encoded link such as \"http%3A%2F%2Fexample.com%2Fwhatever\" into \"http:\/\/example.com\/whatever\" in python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have a Python application that, to be brief, receives data from a remote server, processes it, responds to the server, and occasionally saves the processed data to disk. The problem I've encountered is that there is a lot of data to write, and the save process can take upwards of half a minute. 
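In Python 3 the decode dance from that unquote answer is unnecessary, since `urllib.parse.unquote` decodes percent-escapes straight to text (a small sketch):

```python
from urllib.parse import unquote

encoded = "http%3A%2F%2Fexample.com%2Fwhatever"
print(unquote(encoded))                    # http://example.com/whatever

# Non-ASCII escapes decode directly (UTF-8 by default), so no trailing
# .decode("utf8") is needed as it was with Python 2's urllib.unquote:
print(unquote("Ober%C3%B6sterreich.txt"))  # Oberösterreich.txt
```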
This is apparently a blocking operation, so the network IO is stalled during this time. I'd like to be able to make the save operation take place in the background, so to speak, so that the application can continue to communicate with the server reasonably quickly.\nI know that I probably need some kind of threading module to accomplish this, but I can't tell what the differences are between thread, threading, multiprocessing, and the various other options. Does anybody know what I'm looking for?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1332,"Q_Id":3950607,"Users Score":7,"Answer":"Since you're I\/O bound, use the threading module. \nYou should almost never need to use thread; it's a low-level interface, and the threading module is a high-level wrapper around thread.\nThe multiprocessing module is different from the threading module: multiprocessing uses multiple subprocesses to execute a task, and it just happens to use the same interface as threading to reduce the learning curve. multiprocessing is typically used when you have a CPU-bound calculation and need to avoid the GIL (Global Interpreter Lock) on a multicore CPU.\nA somewhat more esoteric alternative to multi-threading is asynchronous I\/O using the asyncore module. Other options include Stackless Python and Twisted.","Q_Score":6,"Tags":"python,multithreading,io,blocking,nonblocking","A_Id":3950630,"CreationDate":"2010-10-16T20:20:00.000","Title":"What threading module should I use to prevent disk IO from blocking network IO?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"So I have 16 GB worth of XML files to process (about 700 files total), and I already have a functional PHP script to do that (with XMLReader) but it's taking forever. 
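The accepted advice (I/O bound, so use threading) might look like this in practice: a queue-fed writer thread does the slow disk writes while the caller returns to its network loop immediately (a sketch, not the poster's actual code):

```python
import os
import queue
import tempfile
import threading

save_queue = queue.Queue()

def saver():
    """Writer thread: drain the queue, writing each payload to disk."""
    while True:
        path, data = save_queue.get()
        if path is None:              # sentinel tells the worker to exit
            save_queue.task_done()
            return
        with open(path, "wb") as f:
            f.write(data)
        save_queue.task_done()

threading.Thread(target=saver, daemon=True).start()

# The network loop just enqueues and moves on; the write happens off-thread.
path = os.path.join(tempfile.mkdtemp(), "out.bin")
save_queue.put((path, b"processed data"))
save_queue.put((None, None))
save_queue.join()                     # block here only for the demo
print(open(path, "rb").read())        # b'processed data'
```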
I was wondering if parsing in Python would be faster (Python being the only other language I'm proficient in, I'm sure something in C would be faster).","AnswerCount":3,"Available Count":2,"Score":0.1325487884,"is_accepted":false,"ViewCount":1714,"Q_Id":3953563,"Users Score":2,"Answer":"I think both of them rely on wrappers for fast C libraries (mostly libxml2), so there shouldn't be too much difference in parsing per se.\nYou could test whether there are differences caused by overhead; beyond that, it depends on what you are going to do with that XML. Parsing it for what?","Q_Score":2,"Tags":"php,python,xml","A_Id":3953576,"CreationDate":"2010-10-17T14:07:00.000","Title":"Is XML parsing in PHP as fast as Python or other alternatives?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"So I have 16 GB worth of XML files to process (about 700 files total), and I already have a functional PHP script to do that (with XMLReader) but it's taking forever. 
I was wondering if parsing in Python would be faster (Python being the only other language I'm proficient in, I'm sure something in C would be faster).","AnswerCount":3,"Available Count":2,"Score":0.1325487884,"is_accepted":false,"ViewCount":1714,"Q_Id":3953563,"Users Score":2,"Answer":"There are actually three distinct performance problems here:\n\nThe time it takes to parse a file, which depends on the size of individual files.\nThe time it takes to handle the files and directories in the filesystem, if there are a lot of them.\nWriting the data into your databases.\n\nWhere you should look for performance improvements depends on which one of these is the biggest bottleneck.\nMy guess is that the last one is the biggest problem, because writes are almost always the slowest: writes can't be cached, they require writing to disk, and if the data is sorted it can take considerable time to find the right spot to write it.\nYou presume that the bottleneck is the first alternative, the XML parsing. If that is the case, changing language is not the first thing to do. Instead you should see if there's some sort of SAX parser for your language. SAX parsing is much faster and more memory-efficient than DOM parsing.","Q_Score":2,"Tags":"php,python,xml","A_Id":3953874,"CreationDate":"2010-10-17T14:07:00.000","Title":"Is XML parsing in PHP as fast as Python or other alternatives?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"China's Great Firewall has blocked Google App Engine's HTTPS port. 
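The SAX-style point generalizes: in Python, `xml.etree`'s `iterparse` gives the same streaming memory profile without SAX's callback boilerplate (a sketch on a toy document):

```python
import io
import xml.etree.ElementTree as ET

doc = b"<items><item>a</item><item>b</item><item>c</item></items>"

count = 0
# iterparse yields elements as they complete instead of building the
# whole tree first, so memory stays flat even for multi-GB files.
for _event, elem in ET.iterparse(io.BytesIO(doc), events=("end",)):
    if elem.tag == "item":
        count += 1
        elem.clear()  # drop the element's payload once handled
print(count)  # 3
```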
So I want to simulate a Secure Socket Layer by javascript and python to protect my users information will not be capture by those ISP and GFW.\nMy plan: \n\nShake hands:\n\nBrowser request server, server generate a encrypt key k1, and decrypt key k2, send k1 to browser.\nBrowser generate a encrypt key k3, and decrypt key k4, send k3 to server.\n\nBrowse:\n\nDuring the session, browser encrypt data with k1 and send to server, server decrypt with k2. server encrypt data with k3 and response to browser, browser decrypt with k4.\nPlease figure out my mistake.\nIf it's right, my question is \n\nhow to generate a key pair in\njavascript and python, are there\nsome libraries?\nhow to encrypt and decrypt data in\njavascript and python , are there\nsome libraries?","AnswerCount":4,"Available Count":3,"Score":0.049958375,"is_accepted":false,"ViewCount":2770,"Q_Id":3977274,"Users Score":1,"Answer":"You can't stop the men in the middle from trapping your packets\/messages, especially if they don't really care if you find out. What you can do is encrypt your messages so that trapping them does not enable them to read what you're sending and receiving. In theory that's fine, but in practice you can't do modern crypto by hand even with the keys: you need to transfer some software too, and that's where it gets much more awkward.\nYou want to have the client's side of the crypto software locally, or at least enough to be able to check whether a digital signature of the crypto software is correct. Digital signatures are very difficult to forge. Deliver signed code, check its signature, and if the signature validates against a public key that you trust (alas, you'll have to transfer that out of band) then you know that the code (plus any CA certificates \u2013 trust roots \u2013 sent along with it) can be trusted to work as desired. 
The packets can then go over plain HTTP; they'll either get to where they're meant to or be intercepted, but either way nobody but the intended recipient will be able to read them. The only advantage of SSL is that it builds virtually all of this stuff for you and makes it easy.\nI have no idea how practical it is to do this all in Javascript. Obviously it can do it \u2013 it's a Turing-complete language, it has access to all the requisite syscalls \u2013 but it could be stupidly expensive. It might be easier to think in terms of using GPG\u2026\n(Hiding the fact from the government that you are communicating at all is a different problem entirely.)","Q_Score":5,"Tags":"javascript,python,ssl","A_Id":3978603,"CreationDate":"2010-10-20T11:29:00.000","Title":"Encryption: simulate SSL in javascript and python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Because of china Great Firewall has blocked google appengine's https port. So I want to simulate a Secure Socket Layer by javascript and python to protect my users information will not be capture by those ISP and GFW.\nMy plan: \n\nShake hands:\n\nBrowser request server, server generate a encrypt key k1, and decrypt key k2, send k1 to browser.\nBrowser generate a encrypt key k3, and decrypt key k4, send k3 to server.\n\nBrowse:\n\nDuring the session, browser encrypt data with k1 and send to server, server decrypt with k2. 
server encrypt data with k3 and response to browser, browser decrypt with k4.\nPlease figure out my mistake.\nIf it's right, my question is \n\nhow to generate a key pair in\njavascript and python, are there\nsome libraries?\nhow to encrypt and decrypt data in\njavascript and python , are there\nsome libraries?","AnswerCount":4,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":2770,"Q_Id":3977274,"Users Score":0,"Answer":"There's a big problem, if security really is a big concern: Your algorithm is going to be transfered unsecured. Can you trust the client at all? Can the client trust the server at all?","Q_Score":5,"Tags":"javascript,python,ssl","A_Id":3977301,"CreationDate":"2010-10-20T11:29:00.000","Title":"Encryption: simulate SSL in javascript and python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Because of china Great Firewall has blocked google appengine's https port. So I want to simulate a Secure Socket Layer by javascript and python to protect my users information will not be capture by those ISP and GFW.\nMy plan: \n\nShake hands:\n\nBrowser request server, server generate a encrypt key k1, and decrypt key k2, send k1 to browser.\nBrowser generate a encrypt key k3, and decrypt key k4, send k3 to server.\n\nBrowse:\n\nDuring the session, browser encrypt data with k1 and send to server, server decrypt with k2. 
server encrypt data with k3 and response to browser, browser decrypt with k4.\nPlease figure out my mistake.\nIf it's right, my question is \n\nhow to generate a key pair in\njavascript and python, are there\nsome libraries?\nhow to encrypt and decrypt data in\njavascript and python , are there\nsome libraries?","AnswerCount":4,"Available Count":3,"Score":0.0996679946,"is_accepted":false,"ViewCount":2770,"Q_Id":3977274,"Users Score":2,"Answer":"You have a fundamental problem in that a JavaScript implementation of SSL would have no built-in root certificates to establish trust, which makes it impossible to prevent a man-in-the-middle attack. Any certificates you deliver from your site, including a root certificate, could be intercepted and replaced by a spy.\nNote that this is a fundamental limitation, not a peculiarity of the way SSL works. All cryptographic security relies on establishing a shared secret. The root certificates deployed with mainstream browsers provide the entry points to a trust network established by certifying authorities (CAs) that enable you to establish the shared secret with a known third party. These certificates are not, AFAIK, directly accessible to JavaScript code. They are only used to establish secure (e.g., https) connections.","Q_Score":5,"Tags":"javascript,python,ssl","A_Id":3977325,"CreationDate":"2010-10-20T11:29:00.000","Title":"Encryption: simulate SSL in javascript and python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm creating a desktop application that requires authorization from a remote server before performing certain actions locally. \nWhat's the best way to have my desktop application notified when the server approves the request for authorization? 
Authorization takes 20 seconds on average, 5 seconds minimum, with a 120 second timeout.\nI considered polling the server every 3 seconds or so, but this would be hard to scale when I deploy the application more widely, and it seems inelegant.\nI have full control over the design of the server and client API. The server is using web.py on Ubuntu 10.10, Python 2.6.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":116,"Q_Id":3978739,"Users Score":0,"Answer":"Does the remote end block while it does the authentication? If so, you can use a simple select to block till it returns.\nAnother way I can think of is to pass a callback URL to the authentication server, asking it to call that URL when it's done so that your client app can proceed. Something like a webhook.","Q_Score":0,"Tags":"python,authentication,authorization,polling,web.py","A_Id":3978891,"CreationDate":"2010-10-20T14:08:00.000","Title":"How can my desktop application be notified of a state change on a remote server?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have a list of product names in Chinese. 
I want to translate these into English. I have tried the Google AJAX Language API, but the translation quality is not good; it would be great if someone could give me some advice or point me towards a better choice.\nThank you.","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":4408,"Q_Id":3979092,"Users Score":2,"Answer":"I think Google is probably one of the best web-based automatic translation services.","Q_Score":4,"Tags":"python,translate","A_Id":3979445,"CreationDate":"2010-10-20T14:44:00.000","Title":"Is there a translation API service for Chinese to English?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm working on a script currently that needs to pull information down from a specific user's wall. The only problem is that it requires authentication, and the script needs to be able to run without any human interference. Unfortunately all I can find thus far tells me that I need to register an application, and then do the whole FB Connect dance to pull off what I want. The problem is that this requires browser interaction, which I'm trying to avoid.\nI figured I could probably just use httplib2 and log in that way. I got that to work, only to find that with that method I still don't get an \"access_token\" in any retrievable way. If I could get that token without launching a browser, I'd be completely set. Surely people are crawling feeds and such without using FB Connect, right? Is it just not possible, and is that why I'm hitting so many road blocks? I'm open to any suggestions you all might have.","AnswerCount":3,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":4595,"Q_Id":4000896,"Users Score":5,"Answer":"What you are trying to do is not possible. You are going to have to use a browser to get an access token one way or another. 
You cannot collect username and passwords (a big violation of Facebook's TOS). If you need a script that runs without user interaction you will still need to use a browser to authenticate, but once you have the user's token you can use it without their direct interaction. You must request the \"offline_access\" permission to gain an access token that does not expire. You can save this token and then use it for however long you need.","Q_Score":4,"Tags":"python,facebook","A_Id":4000963,"CreationDate":"2010-10-22T20:55:00.000","Title":"Logging into Facebook without a Browser","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am solving a problem of transferring images from a camera in a loop from a client (a robot with camera) to a server (PC).\nI am trying to come up with ideas how to maximize the transfer speed so I can get the best possible FPS (that is because I want to create a live video stream out of the transferred images). Disregarding the physical limitations of WIFI stick on the robot, what would you suggest?\nSo far I have decided:\n\nto use YUV colorspace instead of RGB\nto use UDP protocol instead of TCP\/IP\n\nIs there anything else I could do to get the maximum fps possible?","AnswerCount":2,"Available Count":2,"Score":0.1973753202,"is_accepted":false,"ViewCount":375,"Q_Id":4013046,"Users Score":2,"Answer":"Compress the difference between successive images. Add some checksum. 
Provide some way for the receiver to request full image data for the case where things get out of synch.\nThere are probably a host of protocols doing that already.\nSo, search for live video stream protocols.\nCheers & hth.,","Q_Score":0,"Tags":"c#,c++,python,algorithm,performance","A_Id":4013104,"CreationDate":"2010-10-25T08:49:00.000","Title":"How to speed up transfer of images from client to server","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am solving a problem of transferring images from a camera in a loop from a client (a robot with camera) to a server (PC).\nI am trying to come up with ideas how to maximize the transfer speed so I can get the best possible FPS (that is because I want to create a live video stream out of the transferred images). Disregarding the physical limitations of WIFI stick on the robot, what would you suggest?\nSo far I have decided:\n\nto use YUV colorspace instead of RGB\nto use UDP protocol instead of TCP\/IP\n\nIs there anything else I could do to get the maximum fps possible?","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":375,"Q_Id":4013046,"Users Score":4,"Answer":"This might be quite a bit of work but if your client can handle the computations in real time you could use the same method that video encoders use. Send a key frame every say 5 frames and in between only send the information that changed not the whole frame. 
I don't know the details of how this is done, but try Googling p-frames or video compression.","Q_Score":0,"Tags":"c#,c++,python,algorithm,performance","A_Id":4013112,"CreationDate":"2010-10-25T08:49:00.000","Title":"How to speed up transfer of images from client to server","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I wish to control my computer (and usb devices attached to the computer) at home with any computer that is connected to the internet. The computer at home must have a program installed that receives commands from any other computer that is connected to the internet. I thought it would be best if I do this with a web interface as it would not be necessary to install software on that computer. For obvious reasons it would require log in details.\nExtra details: The main part of the project is actually a device that I will develop that connects to the computer's usb port. Sorry if it was a bit vague in my original question. This device will perform simple functions such as turning lights on etc. At first I will just attempt to switch the lights remotely using the internet. Later on I will add commands that can control certain aspects of the computer such as the music player. I think doing a full remote desktop connection to control my device is therefore not quite necessary. Does anybody know of any open source projects that can perform these functions?\nSo basically the problem is sending encrypted commands from a web interface to my computer at home. What would be the best method to achieve this and what programming languages should I use? 
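Back on the image-transfer question: the delta-plus-checksum idea from the two answers above can be sketched in a few lines of Python. This is a toy illustration only, not a real codec; `frame_delta` and `pack` are hypothetical names, and the CRC32 framing stands in for the checksum the first answer mentions.

```python
import zlib

def frame_delta(prev: bytes, curr: bytes) -> bytes:
    # Byte-wise XOR of two equal-length frames: unchanged pixels become
    # zero bytes, which a generic compressor squeezes very well.
    return bytes(a ^ b for a, b in zip(prev, curr))

def pack(delta: bytes) -> bytes:
    # Compress the delta and prepend a CRC32 so the receiver can detect
    # corruption and ask for a full key frame to resynchronise.
    payload = zlib.compress(delta)
    return zlib.crc32(payload).to_bytes(4, 'big') + payload

prev = bytes(100)           # previous frame: 100 zero bytes
curr = bytes(99) + b'\x01'  # current frame: one "pixel" changed
packet = pack(frame_delta(prev, curr))
```

On the receiving side you would verify the CRC, decompress, and XOR the delta back onto the previous frame; on a mismatch, request a key frame, exactly as the out-of-sync recovery above suggests.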
I know Java, Python and C quite well, but have very little experience with web applications, such as Javascript and PHP.\nI have looked at web chat examples as it is sort of similar concept to what I wish to achieve, except the text can be replaced with commands. Is this a viable solution or are there better alternatives?\nThank you","AnswerCount":6,"Available Count":3,"Score":0.0333209931,"is_accepted":false,"ViewCount":1907,"Q_Id":4014670,"Users Score":1,"Answer":"You can write a WEB APPLICATION. The encryption part is solved by simple HTTPS usage. On the server side (your home computer with USB devices attached to it) you should use Python (since you're quite experienced with it) and a Python Web Framework you want (I.E. Django).","Q_Score":3,"Tags":"java,php,javascript,python","A_Id":4014696,"CreationDate":"2010-10-25T12:48:00.000","Title":"Send commands between two computers over the internet","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I wish to control my computer (and usb devices attached to the computer) at home with any computer that is connected to the internet. The computer at home must have a program installed that receives commands from any other computer that is connected to the internet. I thought it would be best if I do this with a web interface as it would not be necessary to install software on that computer. For obvious reasons it would require log in details.\nExtra details: The main part of the project is actually a device that I will develop that connects to the computer's usb port. Sorry if it was a bit vague in my original question. This device will perform simple functions such as turning lights on etc. At first I will just attempt to switch the lights remotely using the internet. 
Later on I will add commands that can control certain aspects of the computer such as the music player. I think doing a full remote desktop connection to control my device is therefore not quite necessary. Does anybody know of any open source projects that can perform these functions?\nSo basically the problem is sending encrypted commands from a web interface to my computer at home. What would be the best method to achieve this and what programming languages should I use? I know Java, Python and C quite well, but have very little experience with web applications, such as Javascript and PHP.\nI have looked at web chat examples as it is sort of similar concept to what I wish to achieve, except the text can be replaced with commands. Is this a viable solution or are there better alternatives?\nThank you","AnswerCount":6,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":1907,"Q_Id":4014670,"Users Score":0,"Answer":"Well, I think that java can work well, in fact you have to deal with system calls to manage usb devices and things like that (and as far as I know, PHP is not the best language to do this). Also shouldn't be so hard to create a basic server\/client program, just use good encryption mechanism to not show commands around web.","Q_Score":3,"Tags":"java,php,javascript,python","A_Id":4014765,"CreationDate":"2010-10-25T12:48:00.000","Title":"Send commands between two computers over the internet","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I wish to control my computer (and usb devices attached to the computer) at home with any computer that is connected to the internet. The computer at home must have a program installed that receives commands from any other computer that is connected to the internet. 
I thought it would be best if I do this with a web interface as it would not be necessary to install software on that computer. For obvious reasons it would require log in details.\nExtra details: The main part of the project is actually a device that I will develop that connects to the computer's usb port. Sorry if it was a bit vague in my original question. This device will perform simple functions such as turning lights on etc. At first I will just attempt to switch the lights remotely using the internet. Later on I will add commands that can control certain aspects of the computer such as the music player. I think doing a full remote desktop connection to control my device is therefore not quite necessary. Does anybody know of any open source projects that can perform these functions?\nSo basically the problem is sending encrypted commands from a web interface to my computer at home. What would be the best method to achieve this and what programming languages should I use? I know Java, Python and C quite well, but have very little experience with web applications, such as Javascript and PHP.\nI have looked at web chat examples as it is sort of similar concept to what I wish to achieve, except the text can be replaced with commands. Is this a viable solution or are there better alternatives?\nThank you","AnswerCount":6,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":1907,"Q_Id":4014670,"Users Score":0,"Answer":"If you are looking for a solution you could use from any computer anywhere in the world without the need to install any software on the client PC, try logmein.com (http:\/\/secure.logmein.com).\nIt is free and reliable, works in any modern browser, and you don't have to remember IPs and hope they won't change, ...\nOr, if this is a \"for fun\" project, why not write a php script, open port 80 in your router so you can access your script from outside, and possibly dynamically link some domain to your IP (http:\/\/www.dyndns.com\/). 
In the script you would just log in and then, for example, type the commands into a text field in some form in your script. Let's just say you want to do some command prompt stuff, so you will basically remotely construct a *.bat file, for example. Then the script stores this as fromtheinternets.bat in a folder on your desktop that is being constantly monitored for changes. And when such a change is found you just run the bat file.\nInsecure? Yes (It could be made secureER)\nFun to write? Definitely\nPS: I am new here, hope it's not \"illegal\" to post links to actual services instead of wiki lists. This is by no means an advertisement, I am just a happy user. :)","Q_Score":3,"Tags":"java,php,javascript,python","A_Id":4015151,"CreationDate":"2010-10-25T12:48:00.000","Title":"Send commands between two computers over the internet","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have asked these questions before with no proper answer. I hope I'll get some response here. \nI'm developing an instant messenger in python and I'd like to handle video\/audio streaming with VLC. The basic idea right now is that in each IM client I'm running one VLC instance that acts as a server that streams to all the users I want, and another VLC instance that's a client and receives and displays all the streams that other users are sending to me. As you can see, it's kind of a P2P connection and I am having lots of problems.\nMy first problem was VLC can handle only one stream per port, but I solved this using VLM, the Videolan Manager, which allows multiple streams with one instance and on one port. 
\nMy second problem was this kind of P2P take has several drawbacks as if someone is behind NAT or a router, you have to do manual configurations to forward the packages from the router to your PC, and it also has another drawback, you can only forward to 1 PC, so you would be able to use the program in only one workstation. \nAlso, the streams were transported in HTTP protocol, which uses TCP and it's pretty slow. When I tried to do the same with RTSP, I wasn't able to get the stream outside my private LAN.\nSo, this P2P take is very unlikely to be implemented successfully by an amateur like me, as it has all the typical NAT traversal problems, things that I don't want to mess with as this is not a commercial application, just a school project I must finish in order to graduate as a technician. Finally, I've been recommended to a use a server in a well known IP and that would solve the problem, only one router configuration and let both ends of the conversations be clients. I have no idea how to implement this idea, please any help is useful. Thanks in advance. Sorry for any error, I am not a programming\/networking expert nor am I an english-speaking person.","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":224,"Q_Id":4015227,"Users Score":0,"Answer":"I think they were suggesting you run your program on a LAN which has no ports blocked.","Q_Score":0,"Tags":"python,streaming,p2p,vlc,instant-messaging","A_Id":4200613,"CreationDate":"2010-10-25T13:53:00.000","Title":"Problems with VLC and instant messaging","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Right now I'm base 64 encoding them and using data uris. The idea was that this will somehow lower the number of requests the browser needs to make. 
Does this bucket hold any water?\nWhat is the best way of serving images in general? DB, from FS, S3?\nI am most interested in python and java based answers, but all are welcome!","AnswerCount":4,"Available Count":1,"Score":0.049958375,"is_accepted":false,"ViewCount":547,"Q_Id":4046242,"Users Score":1,"Answer":"Data urls will definitely reduce the number of requests to the server, since the browser doesn't have to ask for the pixels in a separate request. But they are not supported in all browsers. You'll have to make the tradeoff.","Q_Score":3,"Tags":"java,javascript,python,image","A_Id":4046258,"CreationDate":"2010-10-28T18:54:00.000","Title":"What is the best way to serve small static images?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm writing a Python client+server that uses gevent.socket for communication. Are there any good ways of testing the socket-level operation of the code (for example, verifying that SSL connections with an invalid certificate will be rejected)? Or is it simplest to just spawn a real server?\nEdit: I don't believe that \"naive\" mocking will be sufficient to test the SSL components because of the complex interactions involved. Am I wrong in that? Or is there a better way to test SSL'd stuff?","AnswerCount":3,"Available Count":1,"Score":1.0,"is_accepted":false,"ViewCount":14758,"Q_Id":4047897,"Users Score":9,"Answer":"Mocking and stubbing are great, but sometimes you need to take it up to the next level of integration. Since spawning a server, even a fakeish one, can take some time, consider a separate test suite (call them integration tests) might be in order.\n\"Test it like you are going to use it\" is my guideline, and if you mock and stub so much that your test becomes trivial it's not that useful (though almost any test is better than none). 
If you are concerned about handling bad SSL certs, by all means make some bad ones and write a test fixture you can feed them to. If that means spawning a server, so be it. Maybe if that bugs you enough it will lead to a refactoring that will make it testable another way.","Q_Score":20,"Tags":"python,sockets,testing,gevent","A_Id":4048286,"CreationDate":"2010-10-28T23:16:00.000","Title":"Python: unit testing socket-based code?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm looking to parse a xml file using Python and I was wondering if there was any way of automating the task over manually walking through all xml nodes\/attributes using xml.dom.minidom library.\nEssentially what would be sweet is if I could load a xml schema for the xml file I am reading then have that automatically generate some kind of data struct\/set with all of the data within the xml.\nIn C# land this is possible via creating a strongly typed dataset class from a xml schema and then using this dataset to read the xml file in.\nIs there any equivalent in Python?","AnswerCount":3,"Available Count":2,"Score":-0.0665680765,"is_accepted":false,"ViewCount":931,"Q_Id":4054205,"Users Score":-1,"Answer":"hey dude - take beautifulSoup - it is a super library. 
Head over to the site scraperwiki.com;\nthey can help you!","Q_Score":1,"Tags":"c#,python,xml,dataset,schema","A_Id":4054231,"CreationDate":"2010-10-29T17:05:00.000","Title":"Python XML Parse (using schema to generate dataset)","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm looking to parse an xml file using Python and I was wondering if there was any way of automating the task over manually walking through all xml nodes\/attributes using the xml.dom.minidom library.\nEssentially what would be sweet is if I could load an xml schema for the xml file I am reading and then have that automatically generate some kind of data struct\/set with all of the data within the xml.\nIn C# land this is possible via creating a strongly typed dataset class from an xml schema and then using this dataset to read the xml file in.\nIs there any equivalent in Python?","AnswerCount":3,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":931,"Q_Id":4054205,"Users Score":0,"Answer":"You might take a look at lxml.objectify, particularly the E-factory. It's not really an equivalent to the ADO tools, but you may find it useful nonetheless.","Q_Score":1,"Tags":"c#,python,xml,dataset,schema","A_Id":4054752,"CreationDate":"2010-10-29T17:05:00.000","Title":"Python XML Parse (using schema to generate dataset)","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have written a program which sends more than 15 queries to Google in each iteration; total iterations is about 50. For testing I have to run this program several times. However, by doing that, after several times, Google blocks me. 
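Returning to the XML question above: short of a schema-generated dataset class, a small recursive walk with the standard library's xml.etree.ElementTree yields a plain nested structure. A minimal sketch, with `to_dict` as a hypothetical helper name; it ignores attributes, mixed content, and repeated sibling tags (which would need list handling).

```python
import xml.etree.ElementTree as ET

def to_dict(elem):
    # Leaf elements become their text; containers become {tag: value} dicts.
    children = list(elem)
    if not children:
        return elem.text
    return {child.tag: to_dict(child) for child in children}

doc = ET.fromstring('<order><id>42</id><item><sku>A1</sku></item></order>')
data = to_dict(doc)
# data == {'id': '42', 'item': {'sku': 'A1'}}
```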
Are there any ways I can fool Google, maybe by adding delays between each iteration? Also, I have heard that Google can actually learn the timesteps, so I need these delays to be random so Google cannot find a pattern in them to learn my behavior. They should also be short so the whole process doesn't take too long.\nDoes anyone know something, or can anyone provide me a piece of code in python?\nThanks","AnswerCount":5,"Available Count":2,"Score":0.0399786803,"is_accepted":false,"ViewCount":111313,"Q_Id":4054254,"Users Score":1,"Answer":"You can also try to use a few proxy servers to prevent a ban by IP address. urllib supports proxies via a special constructor parameter, and httplib can use a proxy too","Q_Score":67,"Tags":"python,delay","A_Id":4054980,"CreationDate":"2010-10-29T17:10:00.000","Title":"How to add random delays between the queries sent to Google to avoid getting blocked in python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have written a program which sends more than 15 queries to Google in each iteration; total iterations is about 50. For testing I have to run this program several times. However, by doing that, after several times, Google blocks me. Are there any ways I can fool Google, maybe by adding delays between each iteration? Also, I have heard that Google can actually learn the timesteps, so I need these delays to be random so Google cannot find a pattern in them to learn my behavior. 
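For the randomized-delay part of this question, a uniform jitter around a base interval is enough to avoid a fixed, learnable rhythm. A minimal sketch; `jittered_delay` is a hypothetical helper and the base/spread values are arbitrary.

```python
import random

def jittered_delay(base=2.0, spread=1.5):
    # A random gap in [base, base + spread] seconds: long enough to be
    # polite, random enough that there is no fixed pattern to learn.
    return base + random.uniform(0, spread)

delays = [jittered_delay() for _ in range(5)]
```

In the real loop you would call time.sleep(jittered_delay()) between batches of queries.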
The delays should also be short so the whole process doesn't take too long.\nDoes anyone know something, or can anyone provide me a piece of code in python?\nThanks","AnswerCount":5,"Available Count":2,"Score":0.1586485043,"is_accepted":false,"ViewCount":111313,"Q_Id":4054254,"Users Score":4,"Answer":"Since you're not testing Google's speed, figure out some way to simulate it when doing your testing (as @bstpierre suggested in his comment). This should solve your problem and factor its variable response times out at the same time.","Q_Score":67,"Tags":"python,delay","A_Id":4054614,"CreationDate":"2010-10-29T17:10:00.000","Title":"How to add random delays between the queries sent to Google to avoid getting blocked in python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Our client wants us to implement change history for website articles. What is the best way to do it?","AnswerCount":3,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":289,"Q_Id":4075309,"Users Score":0,"Answer":"I presume you're using a CMS. If not, use one. WordPress is a good start.\nIf you're developing from scratch, the usual method is to have two tables: one for page information (so title, menu position etc.) and then a page_content table, which has columns for page_id, content, and timestamp.\nAs you save a page, instead of updating an existing row you write a new record to the page_content table with the page's ID and the time of the save. 
That way, when displaying pages on your front-end you just select the latest record for that particular page ID, but you also have a history of that page by querying for all records by page_id, sorted by timestamp.","Q_Score":0,"Tags":"php,.net,python,ruby","A_Id":4076484,"CreationDate":"2010-11-02T06:11:00.000","Title":"What is the best way to store change history of website articles?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"Our client wants us to implement change history for website articles. What is the best way to do it?","AnswerCount":3,"Available Count":2,"Score":-0.0665680765,"is_accepted":false,"ViewCount":289,"Q_Id":4075309,"Users Score":-1,"Answer":"There is a wide variety of ways to do this as you alluded by tagging php, .net, python, and ruby. You missed a few off the top of my head perl and jsp. Each of these have their plusses and minuses and is really a question of what best suits your needs.\nPHP is probably the fastest reward for time spent.\nRuby, i'm assuming Ruby on Rails, is the automatic Buzz Word Bingo for the day.\n.Net, are you all microsoft every where and want easy integration with your exchange server and a nice outlook API?\npython? 
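The append-only page_content scheme in the accepted answer is easy to sketch with Python's built-in sqlite3; the table and column names follow the answer, but the schema and data are otherwise illustrative.

```python
import sqlite3

db = sqlite3.connect(':memory:')
db.execute('CREATE TABLE page_content (page_id INTEGER, content TEXT, ts INTEGER)')

# Every save inserts a new revision instead of updating in place.
for ts, text in enumerate(['first draft', 'edited copy', 'final copy'], start=1):
    db.execute('INSERT INTO page_content VALUES (?, ?, ?)', (1, text, ts))

# Front-end display: just the newest revision for the page.
latest = db.execute(
    'SELECT content FROM page_content WHERE page_id = ? '
    'ORDER BY ts DESC LIMIT 1', (1,)).fetchone()[0]

# Change history: all revisions of the page, oldest first.
history = [row[0] for row in db.execute(
    'SELECT content FROM page_content WHERE page_id = ? ORDER BY ts', (1,))]
```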
Do you like the scripted languages but you're too good for php and ruby.\nEach of these languages have their strong points and their draw backs and it's really a matter of what you know, how much you have to spend, and what is your timeframe.","Q_Score":0,"Tags":"php,.net,python,ruby","A_Id":4075372,"CreationDate":"2010-11-02T06:11:00.000","Title":"What is the best way to store change history of website articles?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I need something like rfc822.AddressList to parse, say, the content of the \"TO\" header field of an email into individual addresses. Since rfc822 is deprecated in favor of the email package, I looked for something similar there but couldn't find anything. Does anyone know what I'm supposed to use instead?\nThanks!","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1862,"Q_Id":4084608,"Users Score":6,"Answer":"Oh it's email.utils.getaddresses. 
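A quick illustration of that call, including the list wrapping it expects (the addresses are made up):

```python
from email.utils import getaddresses

to_header = 'Alice Example <alice@example.com>, bob@example.com'
# getaddresses takes a *list* of header values, not a bare string,
# and returns (display-name, address) pairs.
pairs = getaddresses([to_header])
# pairs == [('Alice Example', 'alice@example.com'), ('', 'bob@example.com')]
```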
Just make sure to call it with a list.","Q_Score":7,"Tags":"python,email,rfc822,email-parsing","A_Id":4084648,"CreationDate":"2010-11-03T06:12:00.000","Title":"Is there a non-deprecated equivalent of rfc822.AddressList?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"first of all, I'm sorry for my English\nI am doing some scripting in Python using Selenium RC.\nThe aim is to access to some website, and download some files\nI would like to know, at the end of the script, what files exactly have been downloaded\nAt that moment, I'm doing something a bit naive, which is checking the new files who appears in the download directory of Firefox, it's working well but if I launch severals clients in the same times, they can't detect which files they own etc...\nSo i was trying to find a solution to that problem, if it's possible to handle the download from Firefox to know exactly when a download occur, and what is downloaded, then I would be super fine, but so far, I haven't find anything about that\nThanks for your help","AnswerCount":4,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":1024,"Q_Id":4088703,"Users Score":0,"Answer":"In this case, Just create a new folder everytime and download your file there.\nMake sure the foldername is incremented if it already exits (Ex: folder1, folder2, Folder3.....)","Q_Score":3,"Tags":"python,firefox,selenium,download,firefox-addon","A_Id":13342386,"CreationDate":"2010-11-03T15:28:00.000","Title":"Is it possible to know what file is downloaded by Firefox with Selenium","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"first of all, I'm sorry for my English\nI 
am doing some scripting in Python using Selenium RC.\nThe aim is to access some website and download some files.\nI would like to know, at the end of the script, exactly what files have been downloaded.\nAt the moment, I'm doing something a bit naive, which is checking for new files that appear in the download directory of Firefox. It's working well, but if I launch several clients at the same time, they can't detect which files they own etc...\nSo I was trying to find a solution to that problem: if it's possible to handle the download from Firefox to know exactly when a download occurs, and what is downloaded, then I would be super fine, but so far I haven't found anything about that\nThanks for your help","AnswerCount":4,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":1024,"Q_Id":4088703,"Users Score":0,"Answer":"I haven't tried it myself, but I would consider setting up multiple Firefox profiles, each with a different download directory, and then telling my instances to use those profiles (or maybe programmatically setting profile values if you're using Selenium2 - I'm not sure if the download directory is possible to change or not). 
Then you can keep monitoring each directory and seeing what was downloaded for each session.","Q_Score":3,"Tags":"python,firefox,selenium,download,firefox-addon","A_Id":4698653,"CreationDate":"2010-11-03T15:28:00.000","Title":"Is it possible to know what file is downloaded by Firefox with Selenium","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"first of all, I'm sorry for my English\nI am doing some scripting in Python using Selenium RC.\nThe aim is to access to some website, and download some files\nI would like to know, at the end of the script, what files exactly have been downloaded\nAt that moment, I'm doing something a bit naive, which is checking the new files who appears in the download directory of Firefox, it's working well but if I launch severals clients in the same times, they can't detect which files they own etc...\nSo i was trying to find a solution to that problem, if it's possible to handle the download from Firefox to know exactly when a download occur, and what is downloaded, then I would be super fine, but so far, I haven't find anything about that\nThanks for your help","AnswerCount":4,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":1024,"Q_Id":4088703,"Users Score":0,"Answer":"If you are working with python-->Selenium RC why don't you just\ncreate a lastdownload.txt type of file, and put in the dates, filenames\nof the files you download.\nSo each time your script runs, it will check the fileserver, and your log file\nto see which files are new, which files you already have. (if same filename is used\nyou can check the lastupdatetime of headers, or even the filesize as a way to compare)\nThen you just download the new files... 
so this way you replicate a simple incremental mechanism with lookup on a txt file...","Q_Score":3,"Tags":"python,firefox,selenium,download,firefox-addon","A_Id":4118400,"CreationDate":"2010-11-03T15:28:00.000","Title":"Is it possible to know what file is downloaded by Firefox with Selenium","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm using pythonbrew to install Python 2.6.6 on Snow Leopard. It failed with a readline error, then a socket error. I installed readline from source, which made the installer happy on the next attempt, but the socket error remains:\n\ntest_socket\ntest test_socket failed -- Traceback (most recent call last):\n File \"\/Users\/gferguson\/python\/pythonbrew\/build\/Python-2.6.6\/Lib\/test\/test_socket.py\", line 483, in testSockName\n my_ip_addr = socket.gethostbyname(socket.gethostname())\ngaierror: [Errno 8] nodename nor servname provided, or not known\n\nDigging around with the system Python shows:\n\n>>> import socket\n>>> my_ip_addr = socket.gethostbyname(socket.gethostname())\nTraceback (most recent call last):\n File \"\", line 1, in \nsocket.gaierror: [Errno 8] nodename nor servname provided, or not known\n>>> socket.gethostname()\n'S1WSMA-JHAMI'\n>>> socket.gethostbyname('S1WSMA-JHAMI')\nTraceback (most recent call last):\n File \"\", line 1, in \nsocket.gaierror: [Errno 8] nodename nor servname provided, or not known\n>>> socket.gethostbyname('google.com')\n'74.125.227.20'\n\nI triangulated the problem with Ruby's IRB:\n\nIPSocket.getaddress(Socket.gethostname)\nSocketError: getaddrinfo: nodename nor servname provided, or not known\n\nSo, I'm not sure if this is a bug in the resolver not understanding the hostname, or if there's something weird in the machine's configuration, or if it's something weird in our network's DNS lookup, but whatever 
it is the installer isn't happy.\nI think it's a benign failure in the installer though, so I feel safe to force the test to succeed, but I'm not sure how to tell pythonbrew how to ignore that test value or specifically pass test_socket.\nI'm also seeing the following statuses but haven't figured out if they're significant yet:\n\n33 tests skipped:\n test_al test_bsddb test_bsddb3 test_cd test_cl test_codecmaps_cn\n test_codecmaps_hk test_codecmaps_jp test_codecmaps_kr\n test_codecmaps_tw test_curses test_dl test_epoll test_gdbm test_gl\n test_imageop test_imgfile test_largefile test_linuxaudiodev\n test_normalization test_ossaudiodev test_pep277 test_py3kwarn\n test_smtpnet test_socketserver test_startfile test_sunaudiodev\n test_timeout test_urllib2net test_urllibnet test_winreg\n test_winsound test_zipfile64\n1 skip unexpected on darwin:\n test_dl\n\nAnyone have experience getting Python 2.6.6 installed with pythonbrew on Snow Leopard?\n\nUpdate: I just tried the socket.gethostbyname(socket.gethostname()) command from Python installed on my MacBook Pro with Snow Leopard, and it successfully reported my IP back so it appears the problem is in the system config at work. I am going to ask at SO's sibling \"Apple\" site and see if anyone knows what it might be.","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":2411,"Q_Id":4090753,"Users Score":0,"Answer":"The solution was to --force pythonbrew to install in spite of the errors. \nI tested the socket responses using the built-in Python, Perl and Ruby, and they had the same problem resolving the localhost name. I tested using a current version of Ruby and Python on one of my Linux boxes, and the calls worked, so I was pretty sure it was something outside of that particular Mac's configuration. 
\nAfter forcing the install I tested the socket calls to other hosts and got the expected results and haven't had any problems doing other networking tasks so I think everything is fine.","Q_Score":0,"Tags":"python,macos,installation,osx-snow-leopard","A_Id":4161287,"CreationDate":"2010-11-03T19:15:00.000","Title":"Workaround for Pythonbrew failing because test_socket can't resolve?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I have exposed a simple RESTful JSON url via CherryPy (Python web framework). I have a second application (using Pylons) which needs to reach a URL exposed by CherryPy. Both are being served via localhost. Both URLs resolve just fine when using a browser directly.\nBut, when a DOJO script running from the initial Pylons request invokes the JSON url from CherryPy, it fails. I open LiveHeaders in Firefox and find that DOJO is first sending an HTTP \"OPTIONS\" request. CherryPy refuses the OPTIONS request with a 405, Method Not Allowed and it all stops.\nIf I drop this same page into the CherryPy application, all is well.\nWhat is the best way to resolve this on my localhost dev platform? .... and will this occur in Prod?","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":2060,"Q_Id":4107576,"Users Score":1,"Answer":"My guess would be you are serving these two apps locally via 2 different ports, which is making dojo try to execute a cross-domain XHR call.\nYou need to be able to serve the JSON URL from the same URL (protocol, hostname, & port) to make a successful XHR call. 
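The failing check in the pythonbrew question boils down to a single call; a small helper like this (my wrapper, not from the thread) makes it easy to probe a machine before deciding whether to --force the install:

```python
import socket

def can_resolve_own_hostname():
    """True if this machine's hostname resolves to an address - the exact
    check that made test_socket fail in the question above."""
    try:
        socket.gethostbyname(socket.gethostname())
        return True
    except socket.gaierror:
        return False
```

On a correctly configured machine this returns True; on the misconfigured Mac in the question it would return False, pointing at the system or DNS configuration rather than at Python itself.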
I do this by using nginx locally, and configuring it to serve the database requests from my Dojo application by forwarding them to CouchDB.","Q_Score":1,"Tags":"javascript,python,ajax,dojo","A_Id":5576563,"CreationDate":"2010-11-05T15:50:00.000","Title":"DOJO AJAX Request asking for OPTIONS","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am experiencing strange behavior with urllib2.urlopen() on Ubuntu 10.10. The first request to a url goes fast but the second takes a long time to connect. I think between 5 and 10 seconds. On windows this just works normal?\nDoes anybody have an idea what could cause this issue?\nThanks, Onno","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":427,"Q_Id":4110992,"Users Score":3,"Answer":"5 seconds sounds suspiciously like the DNS resolving timeout. \nA hunch, It's possible that it's cycling through the DNS servers in your \/etc\/resolv.conf and if one of them is broken, the default timeout is 5 seconds on linux, after which it will try the next one, looping back to the top when it's tried them all.\nIf you have multiple DNS servers listed in resolv.conf, try removing all but one. If this fixes it; then after that see why you're being assigned incorrect resolving servers.","Q_Score":1,"Tags":"python,ubuntu,urllib2,ubuntu-10.10","A_Id":4112300,"CreationDate":"2010-11-05T23:37:00.000","Title":"Strange urllib2.urlopen() behavior on Ubuntu 10.10","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I'm trying to parse some html in Python. There were some methods that actually worked before... 
but nowadays there's nothing I can actually use without workarounds.\n\nbeautifulsoup has problems after SGMLParser went away\nhtml5lib cannot parse half of what's \"out there\"\nlxml is trying to be \"too correct\" for typical html (attributes and tags cannot contain unknown namespaces, or an exception is thrown, which means almost no page with Facebook connect can be parsed)\n\nWhat other options are there these days? (if they support xpath, that would be great)","AnswerCount":5,"Available Count":2,"Score":0.1973753202,"is_accepted":false,"ViewCount":3978,"Q_Id":4114722,"Users Score":5,"Answer":"html5lib cannot parse half of what's \"out there\"\n\nThat sounds extremely implausible. html5lib uses exactly the same algorithm that's also implemented in recent versions of Firefox, Safari and Chrome. If that algorithm broke half the web, I think we would have heard. If you have particular problems with it, do file bugs.","Q_Score":15,"Tags":"python,html,parsing","A_Id":4115108,"CreationDate":"2010-11-06T19:17:00.000","Title":"Python html parsing that actually works","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm trying to parse some html in Python. There were some methods that actually worked before... but nowadays there's nothing I can actually use without workarounds.\n\nbeautifulsoup has problems after SGMLParser went away\nhtml5lib cannot parse half of what's \"out there\"\nlxml is trying to be \"too correct\" for typical html (attributes and tags cannot contain unknown namespaces, or an exception is thrown, which means almost no page with Facebook connect can be parsed)\n\nWhat other options are there these days? 
(if they support xpath, that would be great)","AnswerCount":5,"Available Count":2,"Score":0.0399786803,"is_accepted":false,"ViewCount":3978,"Q_Id":4114722,"Users Score":1,"Answer":"I think the problem is that most HTML is ill-formed. XHTML tried to fix that, but it never really caught on enough - especially as most browsers apply \"intelligent workarounds\" to ill-formed code.\nEven a few years ago I tried to parse HTML for a primitive spider-type app and found the problems too difficult. I suspect writing your own might be on the cards, although we can't be the only people with this problem!","Q_Score":15,"Tags":"python,html,parsing","A_Id":4114746,"CreationDate":"2010-11-06T19:17:00.000","Title":"Python html parsing that actually works","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I'm getting some content from the Twitter API, and I have a little problem: I sometimes get a tweet ending with a single backslash.\nMore precisely, I'm using simplejson to parse the Twitter stream.\nHow can I escape this backslash?\nFrom what I have read, such a raw string shouldn't exist...\nEven if I add one backslash (two, in fact) I still get an error, as I suspected (since I have an odd number of backslashes).\nAny ideas?\nI could just forget about these tweets, but I'm still curious about this.\nThanks : )","AnswerCount":4,"Available Count":1,"Score":0.049958375,"is_accepted":false,"ViewCount":1886,"Q_Id":4121751,"Users Score":1,"Answer":"Prefixing a string literal with r (which stands for \"raw\") tells Python not to interpret backslash escape sequences inside it. 
For example:\nprint r'\\b\\n\\\\'\nwill output\n\\b\\n\\\\\nHave I understood the question correctly?","Q_Score":1,"Tags":"python,string,escaping,backslash","A_Id":4121817,"CreationDate":"2010-11-08T06:38:00.000","Title":"[Python]How to deal with a string ending with one backslash?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"Hi, I am trying to secure a server function used for an Ajax request, so that the function cannot be accessed for any sort of malicious activity. I have done the following so far:\n\nI check whether a valid session is present when the function is called.\nI use POST rather than GET.\nI look for specific headers using request.is_xhr; otherwise I induce a redirect.\nI have compressed the JavaScript using Dojo ShrinkSafe (I am using Dojo).\n\nWhat else can and should be done here? I need your expert advice on this.\n(NB: I am using Flask and Dojo)","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":280,"Q_Id":4131327,"Users Score":2,"Answer":"No special security measures are required. Treat an AJAX request like any other client request.","Q_Score":1,"Tags":"python,ajax,dojo,flask","A_Id":4131349,"CreationDate":"2010-11-09T07:24:00.000","Title":"Handling and securing server functions in an ajax request..python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"For a while I've been using a package called \"gnosis-utils\", which provides an XML pickling service for Python. This class works reasonably well; however, it seems to have been neglected by its developer for the last four years. 
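A minimal runnable illustration of the raw-string point in the backslash answer above, in Python 3 syntax:

```python
# A raw string literal keeps the backslash as a literal character.
normal = '\n'     # one character: a newline
raw = r'\n'       # two characters: a backslash then 'n'
assert len(normal) == 1
assert len(raw) == 2
assert raw == '\\n'

# A Python string *value* can end in a single backslash at runtime; only
# the raw literal form r'...\' is rejected by the parser.
tweet = 'ends with a backslash ' + chr(92)
assert tweet.endswith('\\')
```

This is why the asker's observation that "such a raw string shouldn't exist" only applies to source-code literals, not to data received from an API.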
\nAt the time we originally selected Gnosis it was the only XML serialization tool for Python. The advantage of Gnosis was that it provided a set of classes whose function was very similar to the built-in Python XML pickler. It produced XML which Python developers found easy to read, but non-Python developers found confusing. \nNow that the project has grown we have a new requirement: we need to be able to exchange XML with our colleagues who prefer Java or .Net. These non-Python developers will not be using Python - they intend to produce XML directly, hence we need to simplify the format of the XML. \nSo, are there any alternatives to Gnosis? Our requirements:\n\nMust work on Python 2.4 \/ Windows x86 32bit\nOutput must be XML, as simple as possible\nAPI must resemble Pickle as closely as possible\nPerformance is not hugely important\n\nOf course we could simply adapt Gnosis, but we'd prefer to use a component which already provides the functions we require (assuming that it exists).","AnswerCount":2,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":2017,"Q_Id":4135836,"Users Score":0,"Answer":"So what you're looking for is a Python library that spits out arbitrary XML for your objects? You don't need to control the format, so you can't be bothered to actually write something that iterates over the relevant properties of your data and generates the XML using one of the existing tools?\nThis seems like a bad idea. Arbitrary XML serialization doesn't sound like a good way to move forward. Any format that includes all of pickle's features is going to be ugly, verbose, and very nasty to use. It will not be simple. It will not translate well into Java.\nWhat does your data look like?\nIf you tell us precisely what aspects of pickle you need (and why lxml.objectify doesn't fulfill them), we will be better able to help you.\nHave you considered using JSON for your serialization? 
It's easy to parse, natively supports python-like data structures, and has wide-reaching support. As an added bonus, it doesn't open your code to all kinds of evil exploits the way the native pickle module does.\nHonestly, you need to bite the bullet and define a format, and build a serializer using the standard XML tools, if you absolutely must use XML. Consider JSON.","Q_Score":4,"Tags":"python,xml,serialization,pickle","A_Id":4136375,"CreationDate":"2010-11-09T16:10:00.000","Title":"XML object serialization in python, are there any alternatives to Gnosis?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I would like to access any element in a web page. I know how to do that when I have a form (form = cgi.FieldStorage()), but not when I have, for example, a table.\nHow can I do that?\nThanks","AnswerCount":4,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":178,"Q_Id":4149598,"Users Score":0,"Answer":"You can access only data, posted by form (or as GET parameters).\nSo, you can extract data you need using JavaScript and post it through form","Q_Score":0,"Tags":"python,cgi,html-parsing","A_Id":4149742,"CreationDate":"2010-11-10T22:07:00.000","Title":"How can I access any element in a web page with Python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I need to pickle a scapy packet. Most of the time this works, but sometimes the pickler complains about a function object. As a rule of thumb: ARP packets pickle fine. 
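The JSON suggestion above in runnable form; `json` is in the standard library from Python 2.6, and on the asker's Python 2.4 the separate simplejson package exposes the same dumps/loads API:

```python
import json

# A nested, python-like structure of the kind pickle handles easily.
record = {"name": "widget", "tags": ["a", "b"], "meta": {"version": 2}}

# Pickle-like API: dumps/loads instead of custom XML generation.
text = json.dumps(record, sort_keys=True)
assert json.loads(text) == record

# The wire format is trivially readable by Java/.Net colleagues too.
assert text == '{"meta": {"version": 2}, "name": "widget", "tags": ["a", "b"]}'
```

Unlike pickle, loading JSON cannot execute arbitrary code, which addresses the security concern the answer raises.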
Some UDP packets are problematic.","AnswerCount":6,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":4367,"Q_Id":4156328,"Users Score":3,"Answer":"(This is more for reference, so no votes expected)\nThe Scapy list scapy.ml@secdev.org is well-monitored and tends to be very responsive. If you don't get answers here, try there as well.","Q_Score":12,"Tags":"python,pickle,scapy","A_Id":4157378,"CreationDate":"2010-11-11T15:57:00.000","Title":"How to pickle a scapy packet?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Using Python, how might one read a file's path from a remote server?\nThis is a bit more clear to me on my local PC.","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":36179,"Q_Id":4163456,"Users Score":0,"Answer":"use the os.path module to manipulate path string (you need to import os)\n\nthe current directory is os.path.abspath(os.curdir)\njoin 2 parts of a path with os.path.join(dirname, filename): this will take care of inserting the right path separator ('\\' or '\/', depending on the operating system) for building the path","Q_Score":10,"Tags":"python","A_Id":4164507,"CreationDate":"2010-11-12T09:58:00.000","Title":"Python - how to read path file\/folder from server","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have 2 pages, a static html page and a python script - hosted on [local] google app engine.\n\/html\/hello.html\ndefine as login: required\n\/broadcast\nwhich is a python script\nwhen I access hello.html for the first time I am redirected to login page, I sign in, and then redirected back to hello.html.\ninside hello.html - 
an AJAX call with jQuery is executed to load data from '\/broadcast'; this call errors, saying 'you're not logged in'!\nBUT - the same call to '\/broadcast' through the browser address field succeeds, as if I AM signed in!\nAs if the AJAX and browser callers have different cookies!?\nHELP, am I going bananas?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":90,"Q_Id":4163748,"Users Score":2,"Answer":"Stupid me... \nThe AJAX call was to localhost\/broadcast and the browser address field was 127.0.0.1\/broadcast...\nThe cookies for \"different\" domains ('127.0.0.1' != 'localhost') are not shared, of course...\nSo I haven't gone mad then...","Q_Score":0,"Tags":"javascript,python,ajax,google-app-engine,jquery","A_Id":4164056,"CreationDate":"2010-11-12T10:37:00.000","Title":"AJAX and browser GET calls appear to have different cookies","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am writing a python script to combine about 20+ RSS feeds. I would like to use a custom solution instead of feedjack or planetfeed. \nI use feedparser to parse the feeds and mysql to cache them. \nThe problem I am running into is determining which feeds have already been cached and which haven't.\nSome pseudo code for what I have tried:\n\ncreate a list of all feed items\nget the date of last item cached from db\ncheck which items in my list have a date greater than my item from the db and return this filtered list\nsort the returned filtered list by date the item was created\nadd new items to the db\n\nI feel like this would work, but my problem is that not all of the dates on the RSS feeds I am using are correct. Sometimes a publisher, for whatever reason, will have feed items with dates in the future. 
If this future date gets added to the db, then it will always be greater than the date of the items in my list. So, the comparison stops working and no new items get added to the db. I would like to come up with another solution and not rely on the publishers dates. \nHow would some of you pros do this? Assuming you have to combine multiple rss feeds, save them to a mysql db and then return them in ordered by date. I'm just looking for pseudo code to give me an idea of the best way to do this. \nThanks for your help.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1116,"Q_Id":4167863,"Users Score":1,"Answer":"Depending on how often the feeds are updated and how often you check, you could simply fix broken dates (if it's in the future, reset it to today), before adding them to the database.\nOther than that, you'd have to use some sort of ID\u2014I think RSS has an ID field on each item. If your feeds are kept in order, you can get the most recent cached ID, find that in the feed items list, and then add everything newer. 
If they're out of order, you'd have to check each one against your cache, and add it if it's missing.","Q_Score":1,"Tags":"python","A_Id":4168693,"CreationDate":"2010-11-12T18:29:00.000","Title":"best algorithm to combine multiple RSS feeds using Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am looking for tutorials and\/or examples of certain components of a social network web app that may include Python code examples of:\n\nuser account auto-gen function(database)\nfriend\/follow function (Twitter\/Facebook style)\nmessaging\/reply function (Twitter style)\nlive chat function (Facebook style)\nblog function\npublic forums (like Get Satisfaction or Stack Overflow)\nprofile page template auto-gen function\n\nI just want to start getting my head around how Python can be used to make these features. I am not looking for a solution like Pinax since it is built upon Django and I will be ultimately using Pylons or just straight up Python.","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":2075,"Q_Id":4173883,"Users Score":5,"Answer":"So you're not interested in a fixed solution but want to program it yourself, do I get that correctly? If not: Go with a fixed solution. This will be a lot of programming effort, and whatever you want to do afterwards, doing it in another framework than you intended will be a much smaller problem.\nBut if you're actually interested in the programming experience, and you haven't found any tutorials googling for, say \"messaging python tutorial\", then that's because these are large-scale projects,- if you describe a project of this size, you're so many miles above actual lines of code that the concrete programming language almost doesn't matter (or at least you don't get stuck with the details). 
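The ID-based merge with date clamping described in the RSS answer above might look like this; the (guid, published, title) tuple shape is a simplification of what feedparser actually returns:

```python
from datetime import datetime

def merge_new_items(feed_items, cached_ids, now=None):
    """Select uncached items, clamping publisher dates that lie in the
    future (the workaround suggested above) before sorting by date.
    Items are (guid, published_datetime, title) tuples."""
    now = now or datetime.utcnow()
    fresh = []
    for guid, published, title in feed_items:
        if guid in cached_ids:
            continue                      # already stored in the db
        if published > now:
            published = now               # distrust future-dated posts
        fresh.append((guid, published, title))
    fresh.sort(key=lambda item: item[1])  # oldest first, ready to insert
    return fresh
```

Because selection keys on the GUID rather than the cached maximum date, one future-dated item can no longer block every later comparison.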
So you need to break these things down into smaller components. \nFor example, the friend\/follow function: How to insert stuff into a table with a user id, how to keep a table of follow-relations, how to query for a user all texts from people she's following (of course there's also some infrastructural issues if you hit >100.000 people, but you get the idea ;). Then you can ask yourself, which is the part of this which I don't know how to do in Python? If your problem, on the other hand, is breaking down the problems into these subproblems, you need to start looking for help on that, but that's probably not language specific (so you might just want to start googling for \"architecture friend feed\" or whatever). Also, you could ask that here (beware, each bullet point makes for a huge question in itself ;). Finally, you could get into the Pinax code (don't know it but I assume it's open source) and see how they're doing it. You could try porting some of their stuff to Pylons, for example, so you don't have to reinvent their wheel, learn how they do it, end up in the framework you wanted and maybe even create something reusable by others.\nsorry for tl;dr, that's because I don't have a concrete URL to point you to!","Q_Score":3,"Tags":"python,social-networking,pylons,get-satisfaction","A_Id":4174212,"CreationDate":"2010-11-13T17:49:00.000","Title":"Where can I find Python code examples, or tutorials, of social networking style functions\/components?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I want to do a test load for a web page. 
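The friend/follow breakdown in the answer above (a follow-relation table plus a query for all texts from people a user follows) can be sketched with the stdlib sqlite3 module; the table and column names are illustrative:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE follows (follower INTEGER, followee INTEGER);
    CREATE TABLE posts   (author INTEGER, body TEXT);
""")

def follow(follower, followee):
    """Record that one user follows another."""
    db.execute("INSERT INTO follows VALUES (?, ?)", (follower, followee))

def feed_for(user):
    """All posts written by the people this user follows."""
    rows = db.execute(
        "SELECT p.body FROM posts p "
        "JOIN follows f ON f.followee = p.author "
        "WHERE f.follower = ?", (user,))
    return [body for (body,) in rows]

follow(1, 2)                                   # user 1 follows user 2
db.execute("INSERT INTO posts VALUES (2, 'hello')")
db.execute("INSERT INTO posts VALUES (3, 'ignored')")
assert feed_for(1) == ["hello"]
assert feed_for(3) == []
```

The same schema translates directly to the MySQL setup a Pylons application would use; only the connection object changes.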
I want to do it in python with multiple threads.\nFirst POST request would login user (set cookies).\nThen I need to know how many users doing the same POST request simultaneously can server take.\nSo I'm thinking about spawning threads in which requests would be made in loop.\nI have a couple of questions:\n1. Is it possible to run 1000 - 1500 requests at the same time CPU wise? I mean wouldn't it slow down the system so it's not reliable anymore?\n2. What about the bandwidth limitations? How good the channel should be for this test to be reliable?\nServer on which test site is hosted is Amazon EC2 script would be run from another server(Amazon too).\nThanks!","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":8721,"Q_Id":4179879,"Users Score":1,"Answer":"too many variables. 1000 at the same time... no. in the same second... possibly. bandwidth may well be the bottleneck. this is something best solved by experimentation.","Q_Score":4,"Tags":"python,multithreading,load-testing","A_Id":4180003,"CreationDate":"2010-11-14T21:46:00.000","Title":"Python script load testing web page","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a browser which sends utf-8 characters to my Python server, but when I retrieve it from the query string, the encoding that Python returns is ASCII. How can I convert the plain string to utf-8?\nNOTE: The string passed from the web is already UTF-8 encoded, I just want to make Python to treat it as UTF-8 not ASCII.","AnswerCount":12,"Available Count":1,"Score":0.0665680765,"is_accepted":false,"ViewCount":817441,"Q_Id":4182603,"Users Score":4,"Answer":"First, str in Python is represented in Unicode.\nSecond, UTF-8 is an encoding standard to encode Unicode string to bytes. There are many encoding standards out there (e.g. 
UTF-16, ASCII, SHIFT-JIS, etc.).\n\nWhen the client sends data to your server using UTF-8, it is sending a bunch of bytes, not str.\nYou received a str because the \"library\" or \"framework\" you are using has implicitly converted those bytes to str.\nUnder the hood there is just a bunch of bytes. Ask the \"library\" to give you the request content in bytes and handle the decoding yourself (if the library can't do that, it is doing black magic and you shouldn't use it).\n\nDecode UTF-8 encoded bytes to str: bs.decode('utf-8')\nEncode str to UTF-8 bytes: s.encode('utf-8')","Q_Score":230,"Tags":"python,python-2.7,unicode,utf-8","A_Id":63293431,"CreationDate":"2010-11-15T08:26:00.000","Title":"How to convert a string to utf-8 in Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have a Django app that presents a list of items that you can add comments to. \nWhat I basically want to do is something like Facebook does: when someone posts a comment on your item, you receive an e-mail. Then, when you reply to that e-mail, the reply should be posted as a comment reply on the website. \nWhat should I use to achieve this, using Python as much as possible? Maybe even Django?","AnswerCount":4,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":422,"Q_Id":4183158,"Users Score":0,"Answer":"You can, for example, write a script that imports comments from a mailbox (run from cron every 1-3 minutes, say).\nConnect to a special mailbox which collects replies from users (comments).\nEvery mail has its own header and title. 
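The decode/encode pair from the UTF-8 answer above, round-tripped in Python 3 (where str is Unicode text and bytes are the wire format):

```python
# What a UTF-8 client actually sends over the wire: bytes, not str.
payload = "caf\u00e9".encode("utf-8")
assert payload == b"caf\xc3\xa9"        # the accented char takes two bytes

# Handle the decoding yourself instead of trusting framework black magic.
text = payload.decode("utf-8")
assert text == "caf\u00e9"

# The round trip is lossless.
assert text.encode("utf-8") == payload
```

The same `.decode('utf-8')` / `.encode('utf-8')` calls exist on Python 2's str/unicode types, which is what the question was about.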
You really can find out which post the user is trying to comment on (by header or title), then import the Django environment and insert new records.","Q_Score":2,"Tags":"python,django,email","A_Id":4183238,"CreationDate":"2010-11-15T09:51:00.000","Title":"How to post a comment on e-mail reply?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have a Django app that presents a list of items that you can add comments to. \nWhat I basically want to do is something like Facebook does: when someone posts a comment on your item, you receive an e-mail. Then, when you reply to that e-mail, the reply should be posted as a comment reply on the website. \nWhat should I use to achieve this, using Python as much as possible? Maybe even Django?","AnswerCount":4,"Available Count":2,"Score":-0.049958375,"is_accepted":false,"ViewCount":422,"Q_Id":4183158,"Users Score":-1,"Answer":"I think a good way is how Google+ handles it: use a + in the email address, so it can be reply+id-or-hash-of-parent@domain.com, then you must write a worker that checks the POP server and","Q_Score":2,"Tags":"python,django,email","A_Id":18135014,"CreationDate":"2010-11-15T09:51:00.000","Title":"How to post a comment on e-mail reply?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I want to test my application's handling of timeouts when grabbing data via urllib2, and I want to have some way to force the request to time out. \nShort of finding a very, very slow internet connection, what method can I use? \nI seem to remember an interesting application\/suite for simulating these sorts of things. 
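The plus-addressing worker sketched in the reply-by-email answer above needs to recover the parent id from the To: address; a hedged sketch (the exact address format is my assumption, since the answer is truncated):

```python
import re

# Matches a Gmail-style plus address like reply+<token>@example.com.
PLUS_ADDRESS = re.compile(r"^reply\+(?P<token>[^@]+)@[\w.-]+$")

def parent_token(to_address):
    """Extract the comment/parent identifier a reply was sent to,
    or None if the address is not a reply address."""
    match = PLUS_ADDRESS.match(to_address)
    return match.group("token") if match else None

assert parent_token("reply+abc123@example.com") == "abc123"
assert parent_token("support@example.com") is None
```

The POP-polling worker would call this on each message's To: header, look up the parent comment by token, and insert the reply.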
Maybe someone knows the link?","AnswerCount":5,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":6193,"Q_Id":4188723,"Users Score":0,"Answer":"why not write a very simple CGI script in bash that just sleeps for the required timeout period?","Q_Score":6,"Tags":"python,urllib2","A_Id":4188773,"CreationDate":"2010-11-15T20:55:00.000","Title":"How can I force urllib2 to time out?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am looking to write a program that searches for the tags in an xml document and changes the string between the tags from localhost to manager. The tag might appear in the xml document multiple times, and the document does have a definite path. Would python or vbscript make the most sense for this problem? And can anyone provide a template so I can get started? That would be great. Thanks.","AnswerCount":5,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":1031,"Q_Id":4198416,"Users Score":0,"Answer":"I was able to get this to work by using the vbscript solutions provided. The reasons I hadn't committed to a Visual Basic script before was that I didn't think it was possible to execute this script remotely with PsExec. It turns out I solved this problem as well with the help of Server Fault. In case you are interested in how that works, cscript.exe is the command parameter of PsExec and the vbscript file serves as the argument of cscript. 
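Instead of an external CGI script, the same "server that never answers" trick for forcing a timeout can be done in-process (Python 3's urllib.request shown; Python 2's urllib2.urlopen takes the same timeout argument):

```python
import socket
import threading
import time
import urllib.error
import urllib.request

# A listener that accepts connections and then stays silent.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]

def silent():
    conn, _ = server.accept()
    time.sleep(2)        # hold the connection open, send nothing
    conn.close()

threading.Thread(target=silent, daemon=True).start()

try:
    urllib.request.urlopen("http://127.0.0.1:%d/" % port, timeout=0.3)
    timed_out = False
except (urllib.error.URLError, socket.timeout):
    timed_out = True

assert timed_out
```

This exercises the application's timeout handling deterministically, with no slow network connection required.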
Thanks for all the help, everyone!","Q_Score":0,"Tags":"python,xml,scripting,vbscript,batch-file","A_Id":4230800,"CreationDate":"2010-11-16T20:04:00.000","Title":"batch script or python program to edit string in xml tags","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm currently working on a site that makes several calls to big name online sellers like eBay and Amazon to fetch prices for certain items. The issue is, currently it takes a few seconds (as far as I can tell, this time is from making the calls) to load the results, which I'd like to be more instant (~10 seconds is too much in my opinion). \nI've already cached other information that I need to fetch, but that information is static. Is there a way that I can cache the prices but update them only when needed? The code is in Python and I store info in a MySQL database.\nI was thinking of somehow using cron or something along those lines to update it every so often, but it would be nice if there was a simpler and less intense approach to this problem.\nThanks!","AnswerCount":4,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":196,"Q_Id":4208989,"Users Score":0,"Answer":"How are you getting the price? If you are scraping the data from the normal HTML page using a tool such as BeautifulSoup, that may be slowing down the round-trip time. In this case, it might help to compute a fast checksum (such as MD5) from the page to see if it has changed, before parsing it. 
If you are using an API which gives a short XML version of the price, this is probably not an issue.","Q_Score":1,"Tags":"python,mysql,html,caching","A_Id":4210460,"CreationDate":"2010-11-17T20:46:00.000","Title":"Caching online prices fetched via API unless they change","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is there a generic\/automatic way in R or in python to parse xml files with its nodes and attributes, automatically generate mysql tables for storing that information and then populate those tables.","AnswerCount":4,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":2008,"Q_Id":4213696,"Users Score":0,"Answer":"We do something like this at work sometimes but not in python. In that case, each usage requires a custom program to be written. We only have a SAX parser available. Using an XML decoder to get a dictionary\/hash in a single step would help a lot.\nAt the very least you'd have to tell it which tags map to which tables and fields; no pre-existing lib can know that...","Q_Score":3,"Tags":"python,mysql,xml,r","A_Id":4214098,"CreationDate":"2010-11-18T10:15:00.000","Title":"Parsing an xml file and storing it into a database","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is there a generic\/automatic way in R or in python to parse xml files with its nodes and attributes, automatically generate mysql tables for storing that information and then populate those tables.","AnswerCount":4,"Available Count":3,"Score":0.049958375,"is_accepted":false,"ViewCount":2008,"Q_Id":4213696,"Users Score":1,"Answer":"There's the XML package for reading XML into R, and the RMySQL package for 
writing data from R into MySQL. \nBetween the two there's a lot of work. XML surpasses the scope of an RDBMS like MySQL, so something that could handle any XML thrown at it would be either ridiculously complex or trivially useless.","Q_Score":3,"Tags":"python,mysql,xml,r","A_Id":4214476,"CreationDate":"2010-11-18T10:15:00.000","Title":"Parsing an xml file and storing it into a database","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is there a generic\/automatic way in R or in python to parse xml files with its nodes and attributes, automatically generate mysql tables for storing that information and then populate those tables.","AnswerCount":4,"Available Count":3,"Score":0.1973753202,"is_accepted":false,"ViewCount":2008,"Q_Id":4213696,"Users Score":4,"Answer":"They're three separate operations: parsing, table creation, and data population. You can do all three with python, but there's nothing \"automatic\" about it. I don't think it's so easy.\nFor example, XML is hierarchical and SQL is relational, set-based. I don't think it's always so easy to get a good relational schema for every single XML stream you can encounter.","Q_Score":3,"Tags":"python,mysql,xml,r","A_Id":4213749,"CreationDate":"2010-11-18T10:15:00.000","Title":"Parsing an xml file and storing it into a database","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I wrote a program that needs to authenticate users using their Linux usernames and passwords. I think it should be done with PAM. I have tried searching Google for a PAM module for python3, but I did not find any. Are there ready-to-use PAM libraries, or should I make my own library? 
Does PAM usage involve special security risks that should be taken into account?\nI know that I can authenticate users with the python3 spwd class, but I don't want to use that, because then I have to run my program with root access.","AnswerCount":2,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":2466,"Q_Id":4216163,"Users Score":1,"Answer":"+che\nThe python pam module you linked to is not python3-compatible. There are three pam modules that I'm aware of {'pam', 'pypam', 'spypam'}, and none are py3-compatible.\nI've modified Chris AtLee's original pam package to work with python3, cleaning it up a bit before feeding it back to him","Q_Score":2,"Tags":"python,linux,authentication,python-3.x,pam","A_Id":8396713,"CreationDate":"2010-11-18T15:03:00.000","Title":"Authenticate user in linux with python 3","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am working on a project that requires me to create multiple threads to download a large remote file. I have done this already, but I cannot understand why it takes a longer amount of time to download the file with multiple threads compared to using just a single thread. I used my xampp localhost to carry out the elapsed-time test. I would like to know if this is normal behaviour or if it is because I have not tried downloading from a real server. \nThanks\nKennedy","AnswerCount":3,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":1871,"Q_Id":4219134,"Users Score":4,"Answer":"9 women can't combine to make a baby in one month. 
If you have 10 threads, they each have only 10% of the bandwidth of a single thread, and there is the additional overhead for context switching, etc.","Q_Score":0,"Tags":"python,multithreading,download,urllib2","A_Id":4219434,"CreationDate":"2010-11-18T20:15:00.000","Title":"Python\/Urllib2\/Threading: Single download thread faster than multiple download threads. Why?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am working on a project that requires me to create multiple threads to download a large remote file. I have done this already, but I cannot understand why it takes a longer amount of time to download the file with multiple threads compared to using just a single thread. I used my xampp localhost to carry out the elapsed-time test. I would like to know if this is normal behaviour or if it is because I have not tried downloading from a real server. \nThanks\nKennedy","AnswerCount":3,"Available Count":2,"Score":0.0665680765,"is_accepted":false,"ViewCount":1871,"Q_Id":4219134,"Users Score":1,"Answer":"Twisted uses non-blocking I\/O; that means if data is not available on the socket right now, it doesn't block the entire thread, so you can handle many socket connections waiting for I\/O in one thread simultaneously. 
But if you're doing something other than I\/O (parsing large amounts of data) you still block the thread.\nWhen you're using the stdlib's socket module it does blocking I\/O; that means when you call socket.read and data is not available at the moment \u2014 it will block the entire thread, so you need one thread per connection to handle concurrent downloads.\nThese are two approaches to concurrency:\n\nFork a new thread for each new connection (threading + socket from the stdlib).\nMultiplex I\/O and handle many connections in one thread (Twisted).","Q_Score":0,"Tags":"python,multithreading,download,urllib2","A_Id":4222497,"CreationDate":"2010-11-18T20:15:00.000","Title":"Python\/Urllib2\/Threading: Single download thread faster than multiple download threads. Why?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am looking for a python snippet to read an internet radio stream (.asx, .pls, etc) and save it to a file.\nThe final project is a cron'ed script that will record an hour or two of internet radio and then transfer it to my phone for playback during my commute. (3g is kind of spotty along my commute)\nAny snippets or pointers are welcome.","AnswerCount":6,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":16163,"Q_Id":4247248,"Users Score":3,"Answer":"I am aware this is a year old, but this is still a viable question, which I have recently been fiddling with.\nMost internet radio stations will give you an option of type of download; I choose the MP3 version, then read the info from a raw socket and write it to a file. The trick is figuring out how fast your download is compared to playing the song so you can create a balance on the read\/write size. 
This would be in your buffer def.\nNow that you have the file, it is fine to simply leave it on your drive (record), but most players will delete the already played chunk from the file and clear the file off the drive and RAM when streaming is stopped.\nI have used some code snippets from a file archive (without compression) app to handle a lot of the file handling, playing, buffering magic. It's very similar in how the process flows. If you write up some pseudo-code (which I highly recommend) you can see the similarities.","Q_Score":12,"Tags":"python,stream,audio-streaming,radio","A_Id":13279976,"CreationDate":"2010-11-22T15:47:00.000","Title":"Record streaming and saving internet radio in python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"For a research project, I am collecting tweets using Python-Twitter. However, when running our program nonstop on a single computer for a week we manage to collect only about 20 MB of data per week. I am only running this program on one machine so that we do not collect the same tweets twice.\nOur program runs a loop that calls getPublicTimeline() every 60 seconds. I tried to improve this by calling getUserTimeline() on some of the users that appeared in the public timeline. However, this consistently got me banned from collecting tweets at all for about half an hour each time. Even without the ban, it seemed that there was very little speed-up by adding this code.\nI know about Twitter's \"whitelisting\" that allows a user to submit more requests per hour. I applied for this about three weeks ago, and have not heard back since, so I am looking for alternatives that will allow our program to collect tweets more efficiently without going over the standard rate limit. Does anyone know of a faster way to collect public tweets from Twitter? 
We'd like to get about 100 MB per week.\nThanks.","AnswerCount":3,"Available Count":1,"Score":0.0665680765,"is_accepted":false,"ViewCount":5951,"Q_Id":4249684,"Users Score":1,"Answer":"I did a similar project analyzing data from tweets. If you're just going at this from a pure data collection\/analysis angle, you can just scrape any of the better sites that collect these tweets for various reasons. Many sites allow you to search by hashtag, so throw in a popular enough hashtag and you've got thousands of results. I just scraped a few of these sites for popular hashtags, collected these into a large list, queried that list against the site, and scraped all of the usable information from the results. Some sites also allow you to export the data directly, making this task even easier. You'll get a lot of garbage results that you'll probably need to filter (spam, foreign language, etc), but this was the quickest way that worked for our project. Twitter will probably not grant you whitelisted status, so I definitely wouldn't count on that.","Q_Score":5,"Tags":"python,twitter,python-twitter","A_Id":4250479,"CreationDate":"2010-11-22T20:02:00.000","Title":"How to Collect Tweets More Quickly Using Twitter API in Python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Question: Where is a good starting point for learning to write server applications?\n\nInfo:\nI'm looking into writing a distributed computing system to harvest the idle cycles of the couple hundred computers sitting idle around my college's campus. There are systems that come close, but don't quite meet all the requirements I need. (Most notably, all transactions have to be made through SSH because the network blocks everything else.) So I've decided to write my own application, 
partly to get exactly what I want, but also for experience.\nImportant features:\n\nWritten in python\nAll transactions made through ssh (this is solved through the simple use of pexpect)\nServer needs to be able to take potentially hundreds of hits. I'll optimize later, the point being simulation sessions.\n\nI feel like those aren't too ridiculous of things to try and accomplish. But with the last one I'm not certain where to even start. I've actually already accomplished the first 2 and written a program that will log into my server, and then print ls -l to a file locally. So that isn't hard, but how do I attach several clients asking the server for simulation data to crunch all at the same time? Obviously it feels like threading comes into play here, but more than that I'm sure.\nThis is where my problem is. Where does one even start researching how to write server applications? Am I even using the right wording? What information is there freely available on the internet and\/or what books are there on such? Again, specifically python, but a step in the right direction is one more than where I am now.\np.s. this seemed more fitting for stackoverflow than serverfault. Correct me if I am wrong.","AnswerCount":3,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":186,"Q_Id":4253557,"Users Score":0,"Answer":"Here's an approach.\n\nWrite an \"agent\" in Python. The agent is installed on the various computers. It does whatever processing you need locally. It uses urllib2 to make RESTful HTTP requests of the server. It either posts data or requests work to do or whatever is supposed to go on.\nWrite a \"server\" in Python. The server is installed on one computer. This is written using wsgiref and is a simple WSGI-based server that serves requests from the various agents scattered around campus.\n\nWhile this requires agent installation, it's very, very simple. It can be made very, very secure (use HTTP Digest Authentication). 
And the agent's privileges define the level of vulnerability. If the agent is running in an account with relatively few privileges, it's quite safe. The agent shouldn't run as root and the agent's account should not be allowed to su or sudo.","Q_Score":4,"Tags":"python","A_Id":4255361,"CreationDate":"2010-11-23T07:04:00.000","Title":"where to start programing a server application","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"We have a Rest API that requires client certificate authentication. The API is used by this collection of python scripts that a user can run. To make it so that the user doesn't have to enter their password for their client certificate every time they run one of the scripts, we've created this broker process in java that a user can startup and run in the background which holds the user's certificate password in memory (we just have the javax.net.ssl.keyStorePassword property set in the JVM). The scripts communicate with this process and the process just forwards the Rest API calls to the server (adding the certificate credentials). \nTo do the IPC between the scripts and the broker process we're just using a socket. The problem is that the socket opens up a security risk in that someone could use the Rest API using another person's certificate by communicating through the broker process port on the other person's machine. We've mitigated the risk somewhat by using java security to only allow connections to the port from localhost. I think though someone in theory could still do it by remotely connecting to the machine and then using the port. Is there a way to further limit the use of the port to the current windows user? 
Or maybe is there another form of IPC I could use that can do authorization using the current windows user?\nWe're using Java for the broker process just because everyone on our team is much more familiar with Java than python but it could be rewritten in python if that would help.\nEdit: Just remembered the other reason for using java for the broker process is that we are stuck with using python v2.6 and at this version https with client certificates doesn't appear to be supported (at least not without using a 3rd party library).","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":521,"Q_Id":4257038,"Users Score":0,"Answer":"The most simple approach is to use cookie-based access control. Have a file in the user's profile\/homedirectory which contains the cookie. Have the Java server generate and save the cookie, and have the Python client scripts send the cookie as the first piece of data on any TCP connection.\nThis is secure as long as an adversary cannot get the cookie, which then should be protected by file system ACLs.","Q_Score":0,"Tags":"java,python,windows,ipc,ssl-certificate","A_Id":4257229,"CreationDate":"2010-11-23T14:33:00.000","Title":"IPC on Windows between Java and Python secured to the current user","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"We have a Rest API that requires client certificate authentication. The API is used by this collection of python scripts that a user can run. To make it so that the user doesn't have to enter their password for their client certificate every time they run one of the scripts, we've created this broker process in java that a user can startup and run in the background which holds the user's certificate password in memory (we just have the javax.net.ssl.keyStorePassword property set in the JVM). 
The scripts communicate with this process and the process just forwards the Rest API calls to the server (adding the certificate credentials). \nTo do the IPC between the scripts and the broker process we're just using a socket. The problem is that the socket opens up a security risk in that someone could use the Rest API using another person's certificate by communicating through the broker process port on the other person's machine. We've mitigated the risk somewhat by using java security to only allow connections to the port from localhost. I think though someone in theory could still do it by remotely connecting to the machine and then using the port. Is there a way to further limit the use of the port to the current windows user? Or maybe is there another form of IPC I could use that can do authorization using the current windows user?\nWe're using Java for the broker process just because everyone on our team is much more familiar with Java than python but it could be rewritten in python if that would help.\nEdit: Just remembered the other reason for using java for the broker process is that we are stuck with using python v2.6 and at this version https with client certificates doesn't appear to be supported (at least not without using a 3rd party library).","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":521,"Q_Id":4257038,"Users Score":0,"Answer":"I think I've come up with a solution inspired by Martin's post above. When the broker process starts up I'll create a mini http server listening on the IPC port. Also during startup I'll write a file containing a randomly generated password (that's different every startup) to the user's home directory so that only the user can read the file (or an administrator, but I don't think I need to worry about that). Then I'll lock down the IPC port by requiring all http requests sent there to use the password. 
It's a bit Rube Goldberg-esque but I think it will work.","Q_Score":0,"Tags":"java,python,windows,ipc,ssl-certificate","A_Id":4271608,"CreationDate":"2010-11-23T14:33:00.000","Title":"IPC on Windows between Java and Python secured to the current user","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I'm using python with urllib2 & cookielib and such to open a url. This url sets one cookie in its header and two more in the page with some javascript. It then redirects to a different page.\nI can parse out all the relevant info for the cookies being set with the javascript, but I can't for the life of me figure out how to get them into the cookie-jar as cookies.\nEssentially, when I follow to the site being redirected to, those two cookies have to be accessible by that site.\nTo be very specific, I'm trying to log in to gomtv.net by using their \"log in with a Twitter account\" feature in python.\nAnyone?","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":255,"Q_Id":4258278,"Users Score":0,"Answer":"You can't set cookies for another domain - browsers will not allow it.","Q_Score":2,"Tags":"python,authentication,cookies,cookielib","A_Id":4258354,"CreationDate":"2010-11-23T16:25:00.000","Title":"How do I manually put cookies in a jar?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have some urls to parse, and they used some javascript to create them dynamically. So if I want to parse the resulting generated page with python... how can I do that?\nFirefox does that well with web developer... so I think it's possible ... 
but I don't know where to start...\nThx for help\nlo","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":212,"Q_Id":4264076,"Users Score":0,"Answer":"If you want the generated source you'll need a browser; I don't think you can do it with only python.","Q_Score":2,"Tags":"javascript,python,parsing,url,dynamically-generated","A_Id":4264223,"CreationDate":"2010-11-24T06:35:00.000","Title":"How to see generated source from an URL page with python script and not only source?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have some urls to parse, and they used some javascript to create them dynamically. So if I want to parse the resulting generated page with python... how can I do that?\nFirefox does that well with web developer... so I think it's possible ... but I don't know where to start...\nThx for help\nlo","AnswerCount":2,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":212,"Q_Id":4264076,"Users Score":2,"Answer":"I've done this by doing a POST of document.body.innerHTML, after the page is loaded, to a CGI script in Python.\nFor the parsing, BeautifulSoup is a good choice.","Q_Score":2,"Tags":"javascript,python,parsing,url,dynamically-generated","A_Id":4264239,"CreationDate":"2010-11-24T06:35:00.000","Title":"How to see generated source from an URL page with python script and not only source?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I know it's possible to open up specific URL's with python's webbrowser module. Is it possible to use strings as search queries with it, or another module? 
Say in an engine like Google or Yahoo?","AnswerCount":3,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":830,"Q_Id":4265580,"Users Score":1,"Answer":"Of course it's possible - they're just GET requests. So long as you format the URL properly with the query string correct and all (http:\/\/google.com\/search?q=query - look at the site to see what it needs to be), it'll work fine. It's just a URL.","Q_Score":0,"Tags":"python,browser,search-engine","A_Id":4266457,"CreationDate":"2010-11-24T10:09:00.000","Title":"Python Web Search","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is it possible to write a peer-to-peer chat application in Python? \nI am thinking of this from a hobbyist project point-of-view. Can two machines connect to each other directly without involving a server? I have always wondered this, but never actually seen it implemented anywhere so I am thinking there must be a catch somewhere.\nPS: I intend to learn Twisted, so if that is involved, it would be an added advantage!","AnswerCount":3,"Available Count":3,"Score":0.1973753202,"is_accepted":false,"ViewCount":2413,"Q_Id":4269287,"Users Score":3,"Answer":"Yes, each computer (as long as their on the same network) can establish a server instance with inbound and outbound POST\/GET.","Q_Score":2,"Tags":"python,twisted","A_Id":4269340,"CreationDate":"2010-11-24T16:47:00.000","Title":"Writing a P2P chat application in Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is it possible to write a peer-to-peer chat application in Python? \nI am thinking of this from a hobbyist project point-of-view. 
Can two machines connect to each other directly without involving a server? I have always wondered this, but never actually seen it implemented anywhere so I am thinking there must be a catch somewhere.\nPS: I intend to learn Twisted, so if that is involved, it would be an added advantage!","AnswerCount":3,"Available Count":3,"Score":0.0,"is_accepted":false,"ViewCount":2413,"Q_Id":4269287,"Users Score":0,"Answer":"I think I am way too late in putting in my two bits here; I accidentally stumbled upon this as I was also searching along similar lines. I think you can do this fairly easily using just sockets; however, as mentioned above, one of the machines would have to act like a server, to which the other will connect.\nI am not familiar with twisted, but I did achieve this using just sockets. But yes, even I am curious to know how you would achieve peer-to-peer chat communication if there are multiple clients connected to a server. Creating a chat room kind of app is easy, but I am having a hard time thinking about how to handle peer-to-peer connections.","Q_Score":2,"Tags":"python,twisted","A_Id":49536752,"CreationDate":"2010-11-24T16:47:00.000","Title":"Writing a P2P chat application in Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"Is it possible to write a peer-to-peer chat application in Python? \nI am thinking of this from a hobbyist project point-of-view. Can two machines connect to each other directly without involving a server? I have always wondered this, but never actually seen it implemented anywhere so I am thinking there must be a catch somewhere.\nPS: I intend to learn Twisted, so if that is involved, it would be an added advantage!","AnswerCount":3,"Available Count":3,"Score":1.2,"is_accepted":true,"ViewCount":2413,"Q_Id":4269287,"Users Score":5,"Answer":"Yes. 
You can do this pretty easily with Twisted. Just have one of the peers act like a server and the other one act like a client. In fact, the twisted tutorial will get you most of the way there.\nThe only problem you're likely to run into is firewalls. Most people run their home machines behind SNAT routers, which make it tougher to connect directly to them from outside. You can get around it with port forwarding though.","Q_Score":2,"Tags":"python,twisted","A_Id":4269328,"CreationDate":"2010-11-24T16:47:00.000","Title":"Writing a P2P chat application in Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I need to have a python client that can discover queues on a restarted RabbitMQ server exchange, and then start up clients to resume consuming messages from each queue. How can I discover queues from some RabbitMQ compatible python api\/library?","AnswerCount":8,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":70261,"Q_Id":4287941,"Users Score":27,"Answer":"As far as I know, there isn't any way of doing this. That's nothing to do with Python, but because AMQP doesn't define any method of queue discovery.\nIn any case, in AMQP it's clients (consumers) that declare queues: publishers publish messages to an exchange with a routing key, and consumers determine which queues those routing keys go to. 
So it does not make sense to talk about queues in the absence of consumers.","Q_Score":42,"Tags":"python,rabbitmq,amqp","A_Id":4288304,"CreationDate":"2010-11-26T19:06:00.000","Title":"How can I list or discover queues on a RabbitMQ exchange using python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I need to have a python client that can discover queues on a restarted RabbitMQ server exchange, and then start up clients to resume consuming messages from each queue. How can I discover queues from some RabbitMQ compatible python api\/library?","AnswerCount":8,"Available Count":2,"Score":0.049958375,"is_accepted":false,"ViewCount":70261,"Q_Id":4287941,"Users Score":2,"Answer":"Management features are due in a future version of AMQP. So for now you will have to wait for a new version that will come with that functionality.","Q_Score":42,"Tags":"python,rabbitmq,amqp","A_Id":4289172,"CreationDate":"2010-11-26T19:06:00.000","Title":"How can I list or discover queues on a RabbitMQ exchange using python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"My code was working correctly till yesterday and I was able to fetch tweets from GetSearch(), but now it is returning an empty list, though I checked that my credentials are correct.\nHas something changed recently?\nThank you","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":297,"Q_Id":4291319,"Users Score":0,"Answer":"They might have a limit of requests in a certain amount of time or they had a failure on the system. 
You can ask for new credentials to see if the problem was the first one, and try getting the tweets with them.","Q_Score":0,"Tags":"python-twitter","A_Id":4291535,"CreationDate":"2010-11-27T11:08:00.000","Title":"python-twitter GetSearch giving empty list","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":1,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"My goal: \nI want to host a folder of photos, but if at any time 100 files are being downloaded, I want to redirect a new downloader\/request to a 'waiting page' and give them a place in line and an approximate countdown clock until it's their turn to download their requested content. Then either redirect them directly to the content, or (ideally) give them a button (token, expiring serial number) they can click that will take them to the content when they are ready.\nI've seen sites do something similar to this, such as rapidshare, but I have not seen an open-source example of this type of setup. I would think it would be combining several technologies and modifying request headers?\nAny help\/ideas would be greatly appreciated!","AnswerCount":1,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":62,"Q_Id":4295823,"Users Score":0,"Answer":"The Twisted network engine is about the best answer for you. Have the downloader serve a maximum of 100 people; when the queue is full, direct newcomers to a holding loop where they wait x seconds, check whether the queue is still full, check that they have not expired, see who else is waiting and, if their ticket was there first, jump to the top of the download queue. 
When a TCP\/IP connection comes in on Twisted, the level of control over your clients is so great that you can do some mighty and powerful things in weird and wonderful ways; now imagine building this into a scalable and interactive Twisted HTTP server where you keep that level of control but can actually serve resources. \nThe simplest way to get away with it is probably a pool of tickets: when a download is complete, the downloader returns the ticket to the pool for someone else to take; if there are no tickets, wait your turn.","Q_Score":1,"Tags":"c#,php,python,apache,nginx","A_Id":4297198,"CreationDate":"2010-11-28T07:35:00.000","Title":"Controlling rate of downloads on a per request and\/or per resource basis (and providing a first-come-first-serve waiting system)","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have to upload a webpage on a cdn. Say test.html, test.css, image1.jpg etc. Now I am uploading all these files one by one, which I think is not efficient. So, is it possible to keep all these files in a folder and then upload this folder to the cdn? If yes, what parameters do I need to take care of? Would zipping the folder be helpful? I am using python.\nThanks in Advance","AnswerCount":2,"Available Count":1,"Score":0.0,"is_accepted":false,"ViewCount":1114,"Q_Id":4299324,"Users Score":0,"Answer":"I think you are trying to upload the static content of your website (not the user uploaded files) to a CDN via an FTP client or something similar. \nTo achieve bulk upload you may ZIP all such files on the local machine and upload them to your webserver. Unzip the files on the webserver and write a batch script which utilizes the CDN API to send the files to the CDN container. 
\nFor future new or modified files, write another batch script to grab all new\/modified files and send them to the CDN container via the CDN API.","Q_Score":0,"Tags":"python,cdn","A_Id":18712080,"CreationDate":"2010-11-28T21:55:00.000","Title":"How to upload multiple files on cdn?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I feel stuck here trying to change encodings with Python 2.5\nI have an XML response, which I encode to UTF-8: response.encode('utf-8'). That is fine, but the program which uses this info doesn't like this encoding and I have to convert it to another code page. A real example is that I use the ghostscript python module to embed pdfmark data in a PDF file - the end result has wrong characters in Acrobat.\nI've done numerous combinations with .encode() and .decode() between 'utf-8' and 'latin-1' and it drives me crazy as I can't output the correct result.\nIf I output the string to a file with .encode('utf-8') and then convert this file from UTF-8 to CP1252 (aka latin-1) with i.e. iconv.exe and embed the data, everything is fine.\nBasically can someone help me convert i.e. 
character \u00e1 which is UTF-8 encoded as hex: C3 A1 to latin-1 as hex: E1?\nThanks in advance","AnswerCount":4,"Available Count":2,"Score":1.2,"is_accepted":true,"ViewCount":144526,"Q_Id":4299802,"Users Score":23,"Answer":"Instead of .encode('utf-8'), use .encode('latin-1').","Q_Score":21,"Tags":"python,encoding","A_Id":4299809,"CreationDate":"2010-11-28T23:37:00.000","Title":"Python: convert string from UTF-8 to Latin-1","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I feel stuck here trying to change encodings with Python 2.5\nI have an XML response, which I encode to UTF-8: response.encode('utf-8'). That is fine, but the program which uses this info doesn't like this encoding and I have to convert it to another code page. A real example is that I use the ghostscript python module to embed pdfmark data in a PDF file - the end result has wrong characters in Acrobat.\nI've done numerous combinations with .encode() and .decode() between 'utf-8' and 'latin-1' and it drives me crazy as I can't output the correct result.\nIf I output the string to a file with .encode('utf-8') and then convert this file from UTF-8 to CP1252 (aka latin-1) with i.e. iconv.exe and embed the data, everything is fine.\nBasically can someone help me convert i.e. character \u00e1 which is UTF-8 encoded as hex: C3 A1 to latin-1 as hex: E1?\nThanks in advance","AnswerCount":4,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":144526,"Q_Id":4299802,"Users Score":0,"Answer":"If the previous answers do not solve your problem, check the source of the data that won't print\/convert properly.\nIn my case, I was using json.load on data incorrectly read from a file by not using encoding=\"utf-8\". 
Trying to de-\/encode the resulting string to latin-1 just does not help...","Q_Score":21,"Tags":"python,encoding","A_Id":32096180,"CreationDate":"2010-11-28T23:37:00.000","Title":"Python: convert string from UTF-8 to Latin-1","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am writing a script which will run on my server. Its purpose is to download the document. If any person hits the particular url he\/she should be able to download the document. I am using urllib.urlretrieve but it downloads the document on the server side, not on the client. How to download in python at the client side?","AnswerCount":4,"Available Count":3,"Score":1.2,"is_accepted":true,"ViewCount":322,"Q_Id":4311347,"Users Score":2,"Answer":"If the script runs on your server, its purpose is to serve a document, not to download it (the latter would be the urllib solution).\nDepending on your needs you can:\n\nSet up static file serving with e.g. Apache\nMake the script execute on a certain URL (e.g. with mod_wsgi), then the script should set the Content-Type (provides document type such as \"text\/plain\") and Content-Disposition (provides download filename) headers and send the document data\n\nAs your question is not more specific, this answer can't be either.","Q_Score":0,"Tags":"python","A_Id":4311727,"CreationDate":"2010-11-30T07:10:00.000","Title":"how to download in python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am writing a script which will run on my server. Its purpose is to download the document. If any person hits the particular url he\/she should be able to download the document. 
I am using urllib.urlretrieve but it downloads the document on the server side, not on the client. How to download in python at the client side?","AnswerCount":4,"Available Count":3,"Score":0.049958375,"is_accepted":false,"ViewCount":322,"Q_Id":4311347,"Users Score":1,"Answer":"If the document is on your server and your intention is that the user should be able to download this file, couldn't you just serve the url to that resource as a hyperlink in your HTML code? Sorry if I have been obtuse, but this seems the most logical step given your explanation.","Q_Score":0,"Tags":"python","A_Id":4311383,"CreationDate":"2010-11-30T07:10:00.000","Title":"how to download in python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am writing a script which will run on my server. Its purpose is to download the document. If any person hits the particular url he\/she should be able to download the document. I am using urllib.urlretrieve but it downloads the document on the server side, not on the client. How to download in python at the client side?","AnswerCount":4,"Available Count":3,"Score":0.049958375,"is_accepted":false,"ViewCount":322,"Q_Id":4311347,"Users Score":1,"Answer":"Set the appropriate Content-type header, then send the file contents.","Q_Score":0,"Tags":"python","A_Id":4311378,"CreationDate":"2010-11-30T07:10:00.000","Title":"how to download in python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":0},{"Question":"I have got a url in this form - http:\\\/\\\/en.wikipedia.org\\\/wiki\\\/The_Truman_Show. How can I make it a normal url? I have tried using urllib.unquote without much success. 
\nI can always use regular expressions or some simple string replace stuff. But I believe that there is a better way to handle this...","AnswerCount":3,"Available Count":1,"Score":1.0,"is_accepted":false,"ViewCount":8233,"Q_Id":4312197,"Users Score":11,"Answer":"urllib.unquote is for replacing %xx escape codes in URLs with the characters they represent. It won't be useful for this.\nYour \"simple string replace stuff\" is probably the best solution.","Q_Score":4,"Tags":"python,url","A_Id":4312223,"CreationDate":"2010-11-30T09:29:00.000","Title":"Python unescape URL","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am trying to do WSDL SOAP connection to our JIRA server using SOAPpy (Python SOAP Library).\nAll seems to be fine except when I try finding specific issues. Through the web browser looking up the bug ID actually redirects to a bug (with a different ID), however it is the bug in question just moved to a different project.\nAttempts to getIssue via the SOAPpy API results in an exception that the issue does not exist.\nAny way around this?\nThanks","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":281,"Q_Id":4320135,"Users Score":2,"Answer":"Yes, there's an existing bug on this I've seen. 
Use the JIRA issue id instead of the key to locate it, as a workaround.","Q_Score":3,"Tags":"python,soap,wsdl,jira","A_Id":4350861,"CreationDate":"2010-12-01T00:25:00.000","Title":"Python JIRA SOAPpy annoying redirect on findIssue","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I have an lxml object called item and it may have a child called item.brand, however it's possible that there is none as this is returned from an API. How can I check this in Python?","AnswerCount":1,"Available Count":1,"Score":1.2,"is_accepted":true,"ViewCount":706,"Q_Id":4339708,"Users Score":4,"Answer":"Try hasattr().","Q_Score":0,"Tags":"python","A_Id":4339741,"CreationDate":"2010-12-02T20:51:00.000","Title":"How can I see if a child exists in Python?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"My app opens a TCP socket and waits for data from other users on the network using the same application. At the same time, it can broadcast data to a specified host on the network. \nCurrently, I need to manually enter the IP of the destination host to be able to send data. I want to be able to find a list of all hosts running the application and have the user pick which host to broadcast data to.\nIs Bonjour\/ZeroConf the right route to go to accomplish this? (I'd like it to cross-platform OSX\/Win\/*Nix)","AnswerCount":4,"Available Count":1,"Score":0.0996679946,"is_accepted":false,"ViewCount":1210,"Q_Id":4343575,"Users Score":2,"Answer":"Zeroconf\/DNS-SD is an excellent idea in this case. 
It's provided by Bonjour on OS X and Windows (but must be installed separately or as part of an Apple product on Windows), and by Avahi on FOSS *nix.","Q_Score":2,"Tags":"python,networking,bonjour,zeroconf","A_Id":4343600,"CreationDate":"2010-12-03T08:07:00.000","Title":"Proper way to publish and find services on a LAN using Python","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":1,"Web Development":0},{"Question":"I want to use python urllib2 to simulate a login action, I use Fiddler to catch the packets and got that the login action is just an ajax request and the username and password is sent as json data, but I have no idea how to use urllib2 to send json data, help...","AnswerCount":4,"Available Count":1,"Score":1.0,"is_accepted":false,"ViewCount":32746,"Q_Id":4348061,"Users Score":21,"Answer":"For Python 3.x\nNote the following\n\nIn Python 3.x the urllib and urllib2 modules have been combined. The module is named urllib. 
So, remember that urllib in Python 2.x and urllib in Python 3.x are DIFFERENT modules.\nThe POST data for urllib.request.Request in Python 3 does NOT accept a string (str) -- you have to pass a bytes object (or an iterable of bytes)\n\nExample\npass json data with POST in Python 3.x\n\nimport urllib.request\nimport json\n\njson_dict = { 'name': 'some name', 'value': 'some value' }\n\n# convert json_dict to JSON\njson_data = json.dumps(json_dict)\n\n# convert str to bytes (ensure encoding is OK)\npost_data = json_data.encode('utf-8')\n\n# we should also set the JSON content type header\nheaders = {}\nheaders['Content-Type'] = 'application\/json'\n\n# now do the request for a url\nreq = urllib.request.Request(url, post_data, headers)\n\n# send the request\nres = urllib.request.urlopen(req)\n\n# res is a file-like object\n# ...\n\n\nFinally note that you can ONLY send a POST request if you have SOME data to send.\nIf you want to do an HTTP POST without sending any data, you should send an empty dict as data.\n\ndata_dict = {}\npost_data = json.dumps(data_dict).encode()\n\nreq = urllib.request.Request(url, post_data)\nres = urllib.request.urlopen(req)","Q_Score":16,"Tags":"python,json,urllib2","A_Id":7469725,"CreationDate":"2010-12-03T17:15:00.000","Title":"How to use python urllib2 to send json data for login","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":1,"System Administration and DevOps":0,"Web Development":0},{"Question":"I am using a video tag in my html. I am trying to handle the request on the server side using python BaseHTTPServer. 
I want to figure out what the request from the video tag looks like.","AnswerCount":2,"Available Count":2,"Score":0.0996679946,"is_accepted":false,"ViewCount":513,"Q_Id":4348707,"Users Score":1,"Answer":"It will be a simple GET request, just like any other resource embedded in an HTML document.\nIf you really want to examine exactly what browsers send, then use something like Charles or the Net tab of Firebug.","Q_Score":0,"Tags":"python,html","A_Id":4348712,"CreationDate":"2010-12-03T18:37:00.000","Title":"is Video tag in html a POST request or GET request?","Data Science and Machine Learning":0,"Database and SQL":0,"GUI and Desktop Applications":0,"Networking and APIs":1,"Other":0,"Python Basics and Environment":0,"System Administration and DevOps":0,"Web Development":1},{"Question":"I am using a video tag in my html. I am trying to handle the request on the server side using python BaseHTTPServer. I want to figure out what the request from the video tag looks like.","AnswerCount":2,"Available Count":2,"Score":0.0,"is_accepted":false,"ViewCount":513,"Q_Id":4348707,"Users Score":0,"Answer":"POST is usually reserved for form submissions because you are POSTing form information to the server. In this case you are just GETing the contents of a