Dataset schema (22 columns, in row order; ranges are the min–max values reported by the dataset viewer):
Web Development (int64, 0–1) | Data Science and Machine Learning (int64, 0–1) | Question (string, 28–6.1k chars) | is_accepted (bool, 2 classes) | Q_Id (int64, 337–51.9M) | Score (float64, -1–1.2) | Other (int64, 0–1) | Database and SQL (int64, 0–1) | Users Score (int64, -8–412) | Answer (string, 14–7k chars) | Python Basics and Environment (int64, 0–1) | ViewCount (int64, 13–1.34M) | System Administration and DevOps (int64, 0–1) | Q_Score (int64, 0–1.53k) | CreationDate (string, 23 chars) | Tags (string, 6–90 chars) | Title (string, 15–149 chars) | Networking and APIs (int64, 1–1) | Available Count (int64, 1–12) | AnswerCount (int64, 1–28) | A_Id (int64, 635–72.5M) | GUI and Desktop Applications (int64, 0–1)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 0 | I am about to build a piece of a project that will need to construct and post an XML document to a web service and I'd like to do it in Python, as a means to expand my skills in it.
Unfortunately, whilst I know the XML model fairly well in .NET, I'm uncertain what the pros and cons are of the XML models in Python.
Anyone have experience doing XML processing in Python? Where would you suggest I start? The XML files I'll be building will be fairly simple. | false | 337 | 1 | 0 | 0 | 6 | I wrote a SOAP server that receives XML requests and creates XML responses. (Unfortunately, it's not my project, so it's closed source, but that's another problem.)
It turned out for me that creating (SOAP) XML documents is fairly simple if you have a data structure that "fits" the schema.
I keep the envelope since the response envelope is (almost) the same as the request envelope. Then, since my data structure is a (possibly nested) dictionary, I create a string that turns this dictionary into <key>value</key> items.
This is a task that recursion makes simple, and I end up with the right structure. This is all done in Python code and is currently fast enough for production use.
You can also build lists (relatively) easily, although depending upon your client, you may hit problems unless you give length hints.
For me, this was much simpler, since a dictionary is a much easier way of working than some custom class. For the record, generating XML is much easier than parsing! | 0 | 9,979 | 0 | 82 | 2008-08-02T03:35:00.000 | python,xml | XML Processing in Python | 1 | 6 | 12 | 635 | 0
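The answer above describes the recursion only in prose; here is a minimal sketch of that idea. The `to_xml` helper name, the `<item>` wrapper for lists, and the escaping choice are mine, not the original author's:

```python
from xml.sax.saxutils import escape

def to_xml(data):
    # Recursively turn a (possibly nested) dictionary into <key>value</key> items.
    if isinstance(data, dict):
        return "".join("<%s>%s</%s>" % (key, to_xml(value), key)
                       for key, value in data.items())
    if isinstance(data, list):
        # Lists become repeated <item> elements; real SOAP clients may
        # additionally want arrayType/length hints, as noted above.
        return "".join("<item>%s</item>" % to_xml(value) for value in data)
    return escape(str(data))

# Example: wrap the generated body in a (simplified) envelope.
body = to_xml({"person": {"name": "Alice", "age": 30}})
print("<Envelope><Body>%s</Body></Envelope>" % body)
```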
0 | 0 | I am about to build a piece of a project that will need to construct and post an XML document to a web service and I'd like to do it in Python, as a means to expand my skills in it.
Unfortunately, whilst I know the XML model fairly well in .NET, I'm uncertain what the pros and cons are of the XML models in Python.
Anyone have experience doing XML processing in Python? Where would you suggest I start? The XML files I'll be building will be fairly simple. | false | 337 | 1 | 0 | 0 | 8 | I've used ElementTree for several projects and recommend it.
It's pythonic, comes 'in the box' with Python 2.5, including the C version, cElementTree (xml.etree.cElementTree), which is 20 times faster than the pure-Python version, and it is very easy to use.
lxml has some performance advantages, but they are uneven, so you should check the benchmarks for your use case first.
As I understand it, ElementTree code can easily be ported to lxml. | 0 | 9,979 | 0 | 82 | 2008-08-02T03:35:00.000 | python,xml | XML Processing in Python | 1 | 6 | 12 | 123,307 | 0
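A minimal example of building and serializing a document with ElementTree, as recommended above (element names and the output filename are placeholders):

```python
import xml.etree.ElementTree as ET

root = ET.Element("catalog")
book = ET.SubElement(root, "book", id="bk101")
ET.SubElement(book, "title").text = "XML Processing in Python"

# tostring serializes to a byte string; ElementTree.write saves to a file.
print(ET.tostring(root))
ET.ElementTree(root).write("catalog.xml")
```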
0 | 0 | I am about to build a piece of a project that will need to construct and post an XML document to a web service and I'd like to do it in Python, as a means to expand my skills in it.
Unfortunately, whilst I know the XML model fairly well in .NET, I'm uncertain what the pros and cons are of the XML models in Python.
Anyone have experience doing XML processing in Python? Where would you suggest I start? The XML files I'll be building will be fairly simple. | false | 337 | 1 | 0 | 0 | 8 | It depends a bit on how complicated the document needs to be.
I've used minidom a lot, but that's usually been just reading documents, making some simple transformations, and writing them back out. That worked well enough until I needed the ability to order element attributes (to satisfy an ancient application that doesn't parse XML properly). At that point I gave up and wrote the XML myself.
If you're only working on simple documents, then doing it yourself can be quicker and simpler than learning a framework. If you can conceivably write the XML by hand, then you can probably code it by hand as well (just remember to properly escape special characters, and encode non-ASCII text with the "xmlcharrefreplace" error handler, e.g. text.encode(codec, "xmlcharrefreplace")). Apart from these snafus, XML is regular enough that you don't need a special library to write it. If the document is too complicated to write by hand, then you should probably look into one of the frameworks already mentioned. At no point should you need to write a general XML writer. | 0 | 9,979 | 0 | 82 | 2008-08-02T03:35:00.000 | python,xml | XML Processing in Python | 1 | 6 | 12 | 202,259 | 0
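For the hand-rolled approach, the two gotchas mentioned above (escaping special characters and handling non-ASCII text) take only a few lines; a sketch, with the sample text invented for illustration:

```python
from xml.sax.saxutils import escape, quoteattr

title = u'Caf\xe9 menu: <specials> & "deals"'
# escape() handles &, < and >; quoteattr() additionally quotes attribute values.
xml = u"<doc title=%s>%s</doc>" % (quoteattr(title), escape(title))
# xmlcharrefreplace turns non-ASCII characters into numeric character references.
print(xml.encode("ascii", "xmlcharrefreplace"))
```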
0 | 0 | I am about to build a piece of a project that will need to construct and post an XML document to a web service and I'd like to do it in Python, as a means to expand my skills in it.
Unfortunately, whilst I know the XML model fairly well in .NET, I'm uncertain what the pros and cons are of the XML models in Python.
Anyone have experience doing XML processing in Python? Where would you suggest I start? The XML files I'll be building will be fairly simple. | false | 337 | 0.033321 | 0 | 0 | 2 | I assume that the .NET way of processing XML builds on some version of MSXML, and in that case I assume that using, for example, minidom would make you feel somewhat at home. However, if the processing you are doing is simple, any library will probably do.
I also prefer working with ElementTree when dealing with XML in Python because it is a very neat library. | 0 | 9,979 | 0 | 82 | 2008-08-02T03:35:00.000 | python,xml | XML Processing in Python | 1 | 6 | 12 | 69,772 | 0 |
0 | 0 | I am about to build a piece of a project that will need to construct and post an XML document to a web service and I'd like to do it in Python, as a means to expand my skills in it.
Unfortunately, whilst I know the XML model fairly well in .NET, I'm uncertain what the pros and cons are of the XML models in Python.
Anyone have experience doing XML processing in Python? Where would you suggest I start? The XML files I'll be building will be fairly simple. | false | 337 | 0.049958 | 0 | 0 | 3 | I strongly recommend the SAX (Simple API for XML) implementation in the Python standard library. It is fairly easy to set up, processes large XML through an event-driven API, as discussed by previous posters here, and has a low memory footprint, unlike validating DOM-style XML parsers. | 0 | 9,979 | 0 | 82 | 2008-08-02T03:35:00.000 | python,xml | XML Processing in Python | 1 | 6 | 12 | 13,832,269 | 0
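A minimal sketch of the event-driven style this answer recommends; the element name ("title") and the input filename are placeholders:

```python
import xml.sax

class TitleHandler(xml.sax.ContentHandler):
    # Collects the text of every <title> element as it streams past.
    def __init__(self):
        xml.sax.ContentHandler.__init__(self)
        self.in_title = False
        self.titles = []

    def startElement(self, name, attrs):
        self.in_title = (name == "title")

    def characters(self, content):
        if self.in_title:
            self.titles.append(content)

    def endElement(self, name):
        self.in_title = False

handler = TitleHandler()
xml.sax.parse("books.xml", handler)  # memory use stays flat, even for huge files
print(handler.titles)
```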
0 | 0 | I am about to build a piece of a project that will need to construct and post an XML document to a web service and I'd like to do it in Python, as a means to expand my skills in it.
Unfortunately, whilst I know the XML model fairly well in .NET, I'm uncertain what the pros and cons are of the XML models in Python.
Anyone have experience doing XML processing in Python? Where would you suggest I start? The XML files I'll be building will be fairly simple. | false | 337 | 1 | 0 | 0 | 8 | There are 3 major ways of dealing with XML, in general: dom, sax, and xpath. The dom model is good if you can afford to load your entire xml file into memory at once, and you don't mind dealing with data structures, and you are looking at much/most of the model. The sax model is great if you only care about a few tags, and/or you are dealing with big files and can process them sequentially. The xpath model is a little bit of each -- you can pick and choose paths to the data elements you need, but it requires more libraries to use.
If you want something straightforward that is packaged with Python, minidom is your answer, but it's pretty lame, and the documentation amounts to "here are the DOM docs, go figure it out". It's really annoying.
Personally, I like cElementTree, which is a faster (C-based) implementation of ElementTree, which in turn is a dom-like model.
I've used sax systems, and in many ways they're more "pythonic" in their feel, but I usually end up creating state-based systems to handle them, and that way lies madness (and bugs).
I say go with minidom if you like research, or ElementTree if you want good code that works well. | 0 | 9,979 | 0 | 82 | 2008-08-02T03:35:00.000 | python,xml | XML Processing in Python | 1 | 6 | 12 | 69,410 | 0 |
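A footnote to the cElementTree mention above: the common import idiom of that era (assuming Python 2.5+, where both implementations live under xml.etree) is:

```python
try:
    import xml.etree.cElementTree as ET  # fast C implementation
except ImportError:
    import xml.etree.ElementTree as ET  # pure-Python fallback

root = ET.parse("data.xml").getroot()  # filename is a placeholder
for child in root:
    print(child.tag)
```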
0 | 0 | What are the libraries that support XPath? Is there a full implementation? How is the library used? Where is its website? | false | 8,692 | 1 | 0 | 0 | 40 | Use LXML. LXML uses the full power of libxml2 and libxslt, but wraps them in more "Pythonic" bindings than the Python bindings that are native to those libraries. As such, it gets the full XPath 1.0 implementation. The native ElementTree supports a limited subset of XPath, although it may be good enough for your needs. | 0 | 338,237 | 0 | 245 | 2008-08-12T11:28:00.000 | python,xml,dom,xpath,python-2.x | How to use XPath in Python? | 1 | 1 | 11 | 1,732,475 | 0
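A short lxml XPath example (the element names here are illustrative):

```python
from lxml import etree

doc = etree.fromstring("<root><item id='1'>a</item><item id='2'>b</item></root>")
# Full XPath 1.0: predicates, attributes and functions all work.
print(doc.xpath("//item[@id='2']/text()"))  # ['b']
print(doc.xpath("count(//item)"))           # 2.0
```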
0 | 0 | I have a small utility that I use to download an MP3 file from a website on a schedule and then builds/updates a podcast XML file which I've added to iTunes.
The text processing that creates/updates the XML file is written in Python. However, I use wget inside a Windows .bat file to download the actual MP3 file. I would prefer to have the entire utility written in Python.
I struggled to find a way to actually download the file in Python, which is why I resorted to using wget.
So, how do I download the file using Python? | false | 22,676 | 1 | 0 | 0 | 20 | Following are the most commonly used calls for downloading files in python:
urllib.urlretrieve ('url_to_file', file_name)
urllib2.urlopen('url_to_file')
requests.get(url)
wget.download('url', file_name)
Note: urlopen and urlretrieve perform relatively poorly when downloading large files (> 500 MB). By default, requests.get stores the whole file in memory until the download is complete (see the streaming sketch below). | 0 | 1,341,778 | 0 | 1,032 | 2008-08-22T15:34:00.000 | python,http,urllib | How to download a file over HTTP? | 1 | 1 | 27 | 39,573,536 | 0
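To avoid the in-memory issue noted above for large files, requests supports streaming downloads; a sketch (the URL and filename are placeholders):

```python
import requests

response = requests.get("http://example.com/big.mp3", stream=True)
with open("big.mp3", "wb") as f:
    # Write the body in chunks instead of buffering it all in memory.
    for chunk in response.iter_content(chunk_size=8192):
        if chunk:
            f.write(chunk)
```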
0 | 0 | What's the best way to specify a proxy with username and password for an http connection in python? | false | 34,079 | 1 | 0 | 0 | 15 | Setting an environment var named http_proxy like this: http://username:password@proxy_url:port | 0 | 95,434 | 0 | 58 | 2008-08-29T06:55:00.000 | python,http,proxy | How to specify an authenticated proxy for a python http connection? | 1 | 1 | 6 | 3,942,980 | 0 |
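If you would rather configure the proxy in code than via the environment, urllib2 supports this directly; a Python 2 sketch (credentials and proxy host are placeholders):

```python
import urllib2

proxy = urllib2.ProxyHandler(
    {"http": "http://username:password@proxy_url:8080"})
opener = urllib2.build_opener(proxy)
urllib2.install_opener(opener)  # now applies to all urllib2.urlopen calls

print(urllib2.urlopen("http://example.com/").read())
```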
1 | 0 | How can I retrieve the page title of a webpage (title html tag) using Python? | false | 51,233 | 0.033321 | 0 | 0 | 2 | soup.title.string actually returns a unicode string.
To convert that into a normal byte string, you need to do
title = title.encode('ascii', 'ignore')  # note: 'ignore' silently drops any non-ASCII characters | 0 | 103,114 | 0 | 86 | 2008-09-09T04:38:00.000 | python,html | How can I retrieve the page title of a webpage using Python? | 1 | 1 | 12 | 17,123,979 | 0
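Putting it together, a minimal Python 2 sketch with urllib2 and BeautifulSoup 3 (the URL is a placeholder):

```python
import urllib2
from BeautifulSoup import BeautifulSoup  # BeautifulSoup 3; bs4 uses "from bs4 import BeautifulSoup"

html = urllib2.urlopen("http://example.com/").read()
soup = BeautifulSoup(html)
title = soup.title.string  # a unicode string
print(title.encode("ascii", "ignore"))
```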
0 | 0 | When I call socket.getsockname() on a socket object, it returns a tuple of my machine's internal IP and the port. However, I would like to retrieve my external IP. What's the cheapest, most efficient manner of doing this? | true | 58,294 | 1.2 | 0 | 0 | 9 | This isn't possible without cooperation from an external server, because there could be any number of NATs between you and the other computer. If it's a custom protocol, you could ask the other system to report what address it's connected to. | 0 | 19,360 | 0 | 10 | 2008-09-12T04:21:00.000 | python,sockets | How do I get the external IP of a socket in Python? | 1 | 2 | 9 | 58,296 | 0 |
0 | 0 | When I call socket.getsockname() on a socket object, it returns a tuple of my machine's internal IP and the port. However, I would like to retrieve my external IP. What's the cheapest, most efficient manner of doing this? | false | 58,294 | 0.044415 | 0 | 0 | 2 | import socket
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect(("msn.com", 80))
s.getsockname()[0]  # local address of the outbound interface; behind NAT this is still a private address, not the public IP | 0 | 19,360 | 0 | 10 | 2008-09-12T04:21:00.000 | python,sockets | How do I get the external IP of a socket in Python? | 1 | 2 | 9 | 256,358 | 0
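Consistent with the accepted answer above, getting the *public* address requires cooperation from something outside your NAT; one common approach is to query an echo service. The service URL below is one example, not the only option:

```python
import urllib2

# The remote service reports the address your traffic appears to come from.
external_ip = urllib2.urlopen("http://api.ipify.org").read().strip()
print(external_ip)
```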
0 | 0 | Been scouring the net for something like firewatir but for python. I'm trying to automate firefox on linux. Any suggestions? | false | 60,152 | 0 | 1 | 0 | 0 | The language of choice for Firefox is JavaScript. Unless you have a specific requirement for Python, I would advise you to use that. | 0 | 21,896 | 0 | 12 | 2008-09-12T23:28:00.000 | python,linux,firefox,ubuntu,automation | Automate firefox with python? | 1 | 2 | 8 | 60,218 | 0
0 | 0 | Been scouring the net for something like firewatir but for python. I'm trying to automate firefox on linux. Any suggestions? | false | 60,152 | 0.024995 | 1 | 0 | 1 | I would suggest using Selenium instead of Mechanize/Twill, because Mechanize fails when it comes to handling JavaScript. | 0 | 21,896 | 0 | 12 | 2008-09-12T23:28:00.000 | python,linux,firefox,ubuntu,automation | Automate firefox with python? | 1 | 2 | 8 | 7,610,441 | 0
0 | 0 | I'm looking for a good server/client protocol supported in Python for making data requests/file transfers between one server and many clients. Security is also an issue - so secure login would be a plus. I've been looking into XML-RPC, but it looks to be a pretty old (and possibly unused these days?) protocol. | false | 64,426 | 0.01818 | 0 | 0 | 1 | There is no need to use HTTP (indeed, HTTP is not good for RPC in general in some respects), and no need to use a standards-based protocol if you're talking about a python client talking to a python server.
Use a Python-specific RPC library such as Pyro, or what Twisted provides (Twisted.spread). | 0 | 8,856 | 0 | 9 | 2008-09-15T16:27:00.000 | python,client | Best Python supported server/client protocol? | 1 | 2 | 11 | 256,833 | 0 |
0 | 0 | I'm looking for a good server/client protocol supported in Python for making data requests/file transfers between one server and many clients. Security is also an issue - so secure login would be a plus. I've been looking into XML-RPC, but it looks to be a pretty old (and possibly unused these days?) protocol. | false | 64,426 | 0.072599 | 0 | 0 | 4 | I suggest you look at: 1. XMLRPC, 2. JSONRPC, 3. SOAP, 4. REST/ATOM.
XMLRPC is a valid choice. Don't worry that it is old; that is not a problem. It is so simple that little has needed changing since the original specification. The pro is that in every programming language I know there is a client library available, certainly for Python. I made it work with mod_python and had no problem at all.
The big problem with it is its verbosity: for simple values there is a lot of XML overhead. You can gzip it, of course, but then you lose some debugging ability with tools like Fiddler.
My personal preference is JSONRPC. It has all of the XMLRPC advantages and it is very compact. Further, JavaScript clients can "eval" it, so no parsing is necessary. Most implementations are built for version 1.0 of the standard. I have seen diverse attempts to improve on it, called 1.1, 1.2 and 2.0, but they are not built one on top of another and, to my knowledge, are not widely supported yet. 2.0 looks the best, but I would still stick with 1.0 for now (October 2008).
The third candidate would be REST/ATOM. REST is a principle, and ATOM is how you convey the bulk of the data when you need to, for POST and PUT requests and GET responses.
For a very nice implementation of it, look at GData, Google's API. Really nice.
SOAP is old, and lots of libraries/languages support it. It is heavy and complicated, but if your primary clients are .NET or Java, it might be worth the bother.
Visual Studio can import your WSDL file and create a wrapper, and to a C# programmer it looks just like a local assembly.
The nice thing about all this is that if you architect your solution right, existing libraries for Python will let you support more than one protocol with almost no overhead. XMLRPC and JSONRPC are an especially good match.
Regarding authentication: XMLRPC and JSONRPC don't bother defining one; it is independent of the serialization. So you can implement Basic Authentication, Digest Authentication or your own with any of those. I have seen a couple of examples of client-side Digest Authentication for Python, but have yet to see a server-side one. If you use Apache, you might not need one, using the mod_auth_digest Apache module instead. This depends on the nature of your application.
Transport security: it is obviously SSL (HTTPS). I can't currently remember how XMLRPC deals with it, but with the JSONRPC implementation that I have it is trivial: you merely change http to https in your JSONRPC URLs, and traffic goes over an SSL-enabled transport. | 0 | 8,856 | 0 | 9 | 2008-09-15T16:27:00.000 | python,client | Best Python supported server/client protocol? | 1 | 2 | 11 | 256,826 | 0
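For reference, the standard library makes the XMLRPC option from this list almost free; a minimal Python 2 server (with the client half shown as comments, since it is a separate process):

```python
from SimpleXMLRPCServer import SimpleXMLRPCServer

def add(a, b):
    return a + b

server = SimpleXMLRPCServer(("localhost", 8000))
server.register_function(add)
server.serve_forever()

# --- client side, in another process ---
# import xmlrpclib
# proxy = xmlrpclib.ServerProxy("http://localhost:8000/")
# print(proxy.add(2, 3))  # 5
```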
0 | 0 | When trying to use libxml2 as myself I get an error saying the package cannot be found. If I run as a super user I am able to import fine.
I have installed python25 and all libxml2 and libxml2-py25 related libraries via fink and own the entire path including the library. Any ideas why I'd still need to sudo? | false | 68,541 | 0 | 0 | 0 | 0 | I would suspect the permissions on the library. Can you do a strace or similar to find out the filenames it's looking for, and then check the permissions on them? | 0 | 225 | 0 | 0 | 2008-09-16T01:27:00.000 | python,macos,libxml2 | libxml2-p25 on OS X 10.5 needs sudo? | 1 | 2 | 3 | 70,895 | 0 |
0 | 0 | When trying to use libxml2 as myself I get an error saying the package cannot be found. If I run as a super user I am able to import fine.
I have installed python25 and all libxml2 and libxml2-py25 related libraries via fink and own the entire path including the library. Any ideas why I'd still need to sudo? | false | 68,541 | 0 | 0 | 0 | 0 | The PATH environment variable was the mistake. | 0 | 225 | 0 | 0 | 2008-09-16T01:27:00.000 | python,macos,libxml2 | libxml2-p25 on OS X 10.5 needs sudo? | 1 | 2 | 3 | 77,114 | 0 |
0 | 0 | My university doesn't support the POST cgi method (I know, it's crazy), and I was hoping to be able to have a system where a user can have a username and password and log in securely. Is this even possible?
If it's not, how would you do it with POST? Just out of curiosity.
Cheers! | false | 69,979 | 0 | 1 | 0 | 0 | With a bit of JavaScript, you could have the client hash the entered password and a server-generated nonce, and use that in an HTTP GET. | 0 | 2,893 | 0 | 1 | 2008-09-16T07:07:00.000 | python,authentication,cgi | Can I implement a web user authentication system in python without POST? | 1 | 3 | 6 | 70,003 | 0 |
0 | 0 | My university doesn't support the POST cgi method (I know, it's crazy), and I was hoping to be able to have a system where a user can have a username and password and log in securely. Is this even possible?
If it's not, how would you do it with POST? Just out of curiosity.
Cheers! | true | 69,979 | 1.2 | 1 | 0 | 5 | You can actually do it all with GET methods. However, you'll want to use a full challenge response protocol for the logins. (You can hash on the client side using javascript. You just need to send out a unique challenge each time.) You'll also want to use SSL to ensure that no one can see the strings as they go across.
In some senses there's no real security difference between GET and POST requests as they both go across in plaintext, in other senses and in practice... GET is are a hell of a lot easier to intercept and is all over most people's logs and your web browser's history. :)
(Or as suggested by the other posters, use a different method entirely like HTTP auth, digest auth or some higher level authentication scheme like AD, LDAP, kerberos or shib. However I kinda assumed that if you didn't have POST you wouldn't have these either.) | 0 | 2,893 | 0 | 1 | 2008-09-16T07:07:00.000 | python,authentication,cgi | Can I implement a web user authentication system in python without POST? | 1 | 3 | 6 | 69,995 | 0 |
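A sketch of the server side of the challenge-response check described above; the function names, the SHA-1 choice, and the shared-secret handling are invented for illustration:

```python
import binascii
import hashlib
import os

def make_nonce():
    # The server generates a fresh challenge for each login form it serves.
    return binascii.hexlify(os.urandom(16))

def check_login(stored_password_hash, nonce, client_response):
    # Client-side JavaScript computes sha1(stored_hash + nonce) and sends
    # that in the GET request, so the password itself never crosses the wire.
    expected = hashlib.sha1(stored_password_hash + nonce).hexdigest()
    return expected == client_response
```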
0 | 0 | My university doesn't support the POST cgi method (I know, it's crazy), and I was hoping to be able to have a system where a user can have a username and password and log in securely. Is this even possible?
If it's not, how would you do it with POST? Just out of curiosity.
Cheers! | false | 69,979 | 0.033321 | 1 | 0 | 1 | You could use HTTP Authentication, if supported.
You'd have to add SSL, since all methods (POST, GET, and HTTP Auth, except Digest HTTP authentication) send everything in plaintext.
GET is basically just like POST; it just has a limit on the amount of data you can send, which is usually a lot smaller than POST's, and a semantic difference that makes GET a poor candidate from that point of view, even if technically both can do it.
As for examples, what are you using? There are many choices in Python, like the cgi module or a framework like Django, CherryPy, and so on. | 0 | 2,893 | 0 | 1 | 2008-09-16T07:07:00.000 | python,authentication,cgi | Can I implement a web user authentication system in python without POST? | 1 | 3 | 6 | 69,989 | 0
0 | 0 | I'd like to search for a given MAC address on my network, all from within a Python script. I already have a map of all the active IP addresses in the network but I cannot figure out how to glean the MAC address. Any ideas? | false | 85,577 | 0.024995 | 0 | 0 | 1 | I don't think there is a built-in way to get it from Python itself.
My question is, how are you getting the IP information from your network?
To get it from your local machine you could parse ifconfig (unix) or ipconfig (windows) with little difficulty. | 0 | 16,340 | 1 | 9 | 2008-09-17T17:23:00.000 | python,network-programming | Search for host with MAC-address using Python | 1 | 4 | 8 | 85,608 | 0 |
0 | 0 | I'd like to search for a given MAC address on my network, all from within a Python script. I already have a map of all the active IP addresses in the network but I cannot figure out how to glean the MAC address. Any ideas? | false | 85,577 | 0 | 0 | 0 | 0 | You would want to parse the output of 'arp', but the kernel ARP cache will only contain those IP address(es) if those hosts have communicated with the host where the Python script is running.
ifconfig can be used to display the MAC addresses of local interfaces, but not those on the LAN. | 0 | 16,340 | 1 | 9 | 2008-09-17T17:23:00.000 | python,network-programming | Search for host with MAC-address using Python | 1 | 4 | 8 | 85,641 | 0 |
0 | 0 | I'd like to search for a given MAC address on my network, all from within a Python script. I already have a map of all the active IP addresses in the network but I cannot figure out how to glean the MAC address. Any ideas? | false | 85,577 | 0.024995 | 0 | 0 | 1 | It seems that there is not a native way of doing this with Python. Your best bet would be to parse the output of "ipconfig /all" on Windows, or "ifconfig" on Linux. Consider using os.popen() with some regexps. | 0 | 16,340 | 1 | 9 | 2008-09-17T17:23:00.000 | python,network-programming | Search for host with MAC-address using Python | 1 | 4 | 8 | 85,634 | 0 |
0 | 0 | I'd like to search for a given MAC address on my network, all from within a Python script. I already have a map of all the active IP addresses in the network but I cannot figure out how to glean the MAC address. Any ideas? | false | 85,577 | 0 | 0 | 0 | 0 | Depends on your platform. If you're using *nix, you can use the 'arp' command to look up the mac address for a given IP (assuming IPv4) address. If that doesn't work, you could ping the address and then look, or if you have access to the raw network (using BPF or some other mechanism), you could send your own ARP packets (but that is probably overkill). | 0 | 16,340 | 1 | 9 | 2008-09-17T17:23:00.000 | python,network-programming | Search for host with MAC-address using Python | 1 | 4 | 8 | 85,620 | 0 |
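A sketch of the arp-parsing approach suggested in these answers, for a Unix-like system; the exact output format varies by platform, so the regex may need adjusting:

```python
import re
import subprocess

def mac_for_ip(ip):
    # Ask the system ARP cache; this only works for hosts we've
    # communicated with recently, as noted above.
    output = subprocess.Popen(["arp", "-n", ip],
                              stdout=subprocess.PIPE).communicate()[0]
    match = re.search(r"(([0-9a-f]{2}:){5}[0-9a-f]{2})", output, re.I)
    return match.group(1) if match else None

print(mac_for_ip("192.168.1.2"))  # address is a placeholder
```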
0 | 0 | Basically, something similar to System.Xml.XmlWriter - A streaming XML Writer that doesn't incur much of a memory overhead. So that rules out xml.dom and xml.dom.minidom. Suggestions? | true | 93,710 | 1.2 | 0 | 0 | -4 | xml.etree.cElementTree, included in the default distribution of CPython since 2.5. Lightning fast for both reading and writing XML. | 0 | 2,617 | 0 | 16 | 2008-09-18T15:42:00.000 | python,xml,streaming | What's the easiest non-memory intensive way to output XML from Python? | 1 | 1 | 6 | 93,850 | 0 |
0 | 0 | What I'm trying to do here is get the headers of a given URL so I can determine the MIME type. I want to be able to see if http://somedomain/foo/ will return an HTML document or a JPEG image for example. Thus, I need to figure out how to send a HEAD request so that I can read the MIME type without having to download the content. Does anyone know of an easy way of doing this? | false | 107,405 | 0.01818 | 1 | 0 | 1 | As an aside, when using httplib (at least on 2.5.2), trying to read the response of a HEAD request will block (on readline) and subsequently fail. If you do not issue a read on the response, you are unable to send another request on the same connection; you will need to open a new one, or accept a long delay between requests. | 0 | 72,799 | 0 | 117 | 2008-09-20T06:38:00.000 | python,python-2.7,http,http-headers,content-type | How do you send a HEAD HTTP request in Python 2? | 1 | 2 | 11 | 779,985 | 0
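For reference, a basic httplib HEAD call looks like this (Python 2; host and path are placeholders). Per the aside above, reading the headers via getresponse() does not require calling read() on the body:

```python
import httplib

conn = httplib.HTTPConnection("example.com")
conn.request("HEAD", "/foo/")
response = conn.getresponse()
print(response.status)
print(response.getheader("content-type"))  # the MIME type, without downloading the body
conn.close()
```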
0 | 0 | What I'm trying to do here is get the headers of a given URL so I can determine the MIME type. I want to be able to see if http://somedomain/foo/ will return an HTML document or a JPEG image for example. Thus, I need to figure out how to send a HEAD request so that I can read the MIME type without having to download the content. Does anyone know of an easy way of doing this? | false | 107,405 | 0.01818 | 1 | 0 | 1 | I have found that httplib is slightly faster than urllib2. I timed two programs (one using httplib and the other using urllib2) sending HEAD requests to 10,000 URLs. The httplib one was faster by several minutes.
httplib's total stats:
real 6m21.334s
user 0m2.124s
sys 0m16.372s
urllib2's total stats:
real 9m1.380s
user 0m16.666s
sys 0m28.565s
Does anybody else have input on this? | 0 | 72,799 | 0 | 117 | 2008-09-20T06:38:00.000 | python,python-2.7,http,http-headers,content-type | How do you send a HEAD HTTP request in Python 2? | 1 | 2 | 11 | 2,630,687 | 0
0 | 0 | I need to upload some data to a server using HTTP PUT in python. From my brief reading of the urllib2 docs, it only does HTTP POST. Is there any way to do an HTTP PUT in python? | false | 111,945 | 1 | 0 | 0 | 8 | I needed to solve this problem too a while back so that I could act as a client for a RESTful API. I settled on httplib2 because it allowed me to send PUT and DELETE in addition to GET and POST. Httplib2 is not part of the standard library but you can easily get it from the cheese shop. | 0 | 174,987 | 0 | 225 | 2008-09-21T20:11:00.000 | python,http,put | Is there any way to do HTTP PUT in python | 1 | 1 | 14 | 114,648 | 0 |
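The httplib2 call described in this answer looks roughly like this (the URL, body, and content type are placeholders):

```python
import httplib2

h = httplib2.Http()
response, content = h.request("http://example.com/resource/1",
                              "PUT", body="some data",
                              headers={"content-type": "text/plain"})
print(response.status)
```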
1 | 0 | Is there any Python module for rendering an HTML page with JavaScript and getting back a DOM object?
I want to parse a page which generates almost all of its content using JavaScript. | false | 126,131 | 1 | 0 | 0 | 8 | The big complication here is emulating the full browser environment outside of a browser. You can use stand-alone JavaScript interpreters like Rhino and SpiderMonkey to run JavaScript code, but they don't provide a complete browser-like environment to fully render a web page.
If I needed to solve a problem like this I would first look at how the JavaScript is rendering the page; it's quite possible it's fetching data via AJAX and using that to render the page. I could then use Python libraries like simplejson and httplib2 to fetch the data directly and use that, negating the need to access the DOM object. However, that's only one possible situation; I don't know the exact problem you are solving.
Other options include the Selenium one mentioned by Łukasz, some kind of embedded WebKit craziness, some kind of IE win32 scripting craziness or, finally, a pyxpcom-based solution (with added craziness). All these have the drawback of requiring pretty much a fully running web browser for Python to play with, which might not be an option depending on your environment. | 0 | 34,652 | 0 | 18 | 2008-09-24T09:05:00.000 | javascript,python,html | Python library for rendering HTML and javascript | 1 | 1 | 2 | 126,250 | 0
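A sketch of the "fetch the AJAX data directly" route this answer suggests; the endpoint URL and the "items" key are invented for illustration:

```python
import urllib2
import simplejson  # on Python 2.6+, "import json" also works

raw = urllib2.urlopen("http://example.com/ajax/data.json").read()
data = simplejson.loads(raw)
print(data["items"])  # work with the decoded data instead of scraping the DOM
```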
0 | 0 | I've added cookie support to SOAPpy by overriding HTTPTransport. I need functionality beyond that of SOAPpy, so I was planning on moving to ZSI, but I can't figure out how to put the Cookies on the ZSI posts made to the service. Without these cookies, the server will think it is an unauthorized request and it will fail.
How can I add cookies from a Python CookieJar to ZSI requests? | false | 139,212 | 0 | 1 | 0 | 0 | Additionally, the Binding class also allows any header to be added. So I figured out that I can just add a "Cookie" header for each cookie I need to add. This worked well for the code generated by wsdl2py, just adding the cookies right after the binding is formed in the SOAP client class. Adding a parameter to the generated class to take in the cookies as a dictionary is easy and then they can easily be iterated through and added. | 0 | 526 | 0 | 2 | 2008-09-26T12:45:00.000 | python,web-services,cookies,soappy,zsi | Adding Cookie to ZSI Posts | 1 | 1 | 2 | 148,379 | 0 |
0 | 0 | I'm trying to redirect/forward a Pylons request. The problem with using redirect_to is that form data gets dropped. I need to keep the POST form data intact as well as all request headers.
Is there a simple way to do this? | true | 153,773 | 1.2 | 0 | 0 | 2 | Receiving data from a POST depends on the web browser sending the data along. When the web browser receives a redirect, it does not resend that data. One solution would be to URL-encode the data you want to keep and pass it with a GET. In the worst case, you could always add the data you want to keep to the session and pass it that way. | 0 | 1,415 | 0 | 4 | 2008-09-30T16:11:00.000 | python,post,request,header,pylons | What is the preferred way to redirect a request in Pylons without losing form data? | 1 | 1 | 1 | 153,822 | 0
0 | 0 | I've discovered that cElementTree is about 30 times faster than xml.dom.minidom and I'm rewriting my XML encoding/decoding code. However, I need to output XML that contains CDATA sections and there doesn't seem to be a way to do that with ElementTree.
Can it be done? | false | 174,890 | 0.013333 | 0 | 0 | 1 | The DOM has (atleast in level 2) an interface
DATASection, and an operation Document::createCDATASection. They are
extension interfaces, supported only if an implementation supports the
"xml" feature.
from xml.dom import minidom
my_xmldoc=minidom.parse(xmlfile)
my_xmldoc.createCDATASection(data)
now u have cadata node add it wherever u want.... | 0 | 52,719 | 0 | 46 | 2008-10-06T15:56:00.000 | python,xml | How to output CDATA using ElementTree | 1 | 1 | 15 | 510,324 | 0 |
0 | 0 | If I have no connection to the internet, does that mean I can't start IDLE (which comes with python 3.0)? | false | 190,115 | 0 | 0 | 0 | 0 | Not really. IDLE does not need an internet connection: you can download the latest version of Python 3.x for whichever operating system you are using, and then load IDLE without any internet access. | 1 | 945 | 0 | 1 | 2008-10-10T04:19:00.000 | python,python-3.x,python-idle | IDLE doesn't start in python 3.0 | 1 | 1 | 3 | 56,122,726 | 0
0 | 0 | Python has several ways to parse XML...
I understand the very basics of parsing with SAX. It functions as a stream parser, with an event-driven API.
I understand the DOM parser also. It reads the XML into memory and converts it to objects that can be accessed with Python.
Generally speaking, it was easy to choose between the two depending on what you needed to do, memory constraints, performance, etc.
(Hopefully I'm correct so far.)
Since Python 2.5, we also have ElementTree. How does this compare to DOM and SAX? Which is it more similar to? Why is it better than the previous parsers? | false | 192,907 | 1 | 0 | 0 | 7 | ElementTree's parse() is like DOM, whereas iterparse() is like SAX. In my opinion, ElementTree is better than DOM and SAX in that it provides an API that is easier to work with. | 0 | 33,129 | 0 | 80 | 2008-10-10T20:22:00.000 | python,xml,dom,sax,elementtree | XML parsing - ElementTree vs SAX and DOM | 1 | 1 | 4 | 192,913 | 0
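An iterparse sketch showing the SAX-like, constant-memory side of ElementTree that this answer mentions (the filename and tag are placeholders):

```python
import xml.etree.cElementTree as ET

for event, elem in ET.iterparse("big.xml"):  # default: fires on element end
    if elem.tag == "record":
        print(elem.findtext("name"))
        elem.clear()  # free the processed element; keeps memory flat on huge files
```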
0 | 0 | I am writing a program in Python that will act as a server and accept data from a client. Is it a good idea to impose a hard limit on the amount of data, and if so, why?
More info:
Certain chat programs limit the amount of text one can send per send (i.e. each time the user presses send), so the question comes down to: is there a legitimate reason for this, and if yes, what is it? | false | 203,758 | 0.066568 | 0 | 0 | 1 | What is your question exactly?
What happens when you do a receive on a socket is that the currently available data in the socket buffer is immediately returned. If you give receive (or read, I guess) a huge buffer size, such as 40000, it'll likely never return that much data at once. If you give it a tiny buffer size like 100, then it'll return the 100 bytes it has immediately and still have more available. Either way, you're not imposing a limit on how much data the client is sending you. | 1 | 1,350 | 0 | 2 | 2008-10-15T04:55:00.000 | python,sockets | Receive socket size limits good? | 1 | 3 | 3 | 203,769 | 0
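The standard pattern that follows from this answer: loop on recv() and accumulate until you have a complete message. The newline framing below is an illustrative choice, not part of the original answer:

```python
def recv_line(sock):
    # recv() returns whatever is buffered, up to the given size; a logical
    # message can arrive split across several calls, so we accumulate.
    chunks = []
    while True:
        data = sock.recv(4096)
        if not data:      # peer closed the connection
            break
        chunks.append(data)
        if "\n" in data:  # our framing delimiter
            break
    return "".join(chunks)
```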
0 | 0 | I am writing a program in Python that will act as a server and accept data from a client. Is it a good idea to impose a hard limit on the amount of data, and if so, why?
More info:
Certain chat programs limit the amount of text one can send per send (i.e. each time the user presses send), so the question comes down to: is there a legitimate reason for this, and if yes, what is it? | true | 203,758 | 1.2 | 0 | 0 | 2 | Most likely you've seen code which protects against "extra" incoming data. This is often due to the possibility of buffer overruns, where the extra data being copied into memory overruns the pre-allocated array and overwrites executable code with attacker code. Code written in languages like C typically has a lot of length checking to prevent this type of attack. Functions such as gets and strcpy are replaced with their safer counterparts, like fgets and strncpy, which take a length argument to prevent buffer overruns.
If you use a dynamic language like Python, your arrays resize so they won't overflow and clobber other memory, but you still have to be careful about sanitizing foreign data.
Chat programs likely limit the size of a message for reasons such as database field size. If 80% of your incoming messages are 40 characters or less, 90% are 60 characters or less, and 98% are 80 characters or less, why make your message text field allow 10k characters per message? | 1 | 1,350 | 0 | 2 | 2008-10-15T04:55:00.000 | python,sockets | Receive socket size limits good? | 1 | 3 | 3 | 203,933 | 0
0 | 0 | I am writing a program in Python that will act as a server and accept data from a client. Is it a good idea to impose a hard limit on the amount of data, and if so, why?
More info:
Certain chat programs limit the amount of text one can send per send (i.e. each time the user presses send), so the question comes down to: is there a legitimate reason for this, and if yes, what is it? | false | 203,758 | 0 | 0 | 0 | 0 | I don't know what your actual application is; however, setting a hard limit on the total amount of data that a client can send could be useful in reducing your exposure to denial-of-service attacks, e.g. a client connects and sends 100 MB of data, which could load your application unacceptably.
But it really depends on what your application is. Are you after a per-line limit, a total per-connection limit, or something else? | 1 | 1,350 | 0 | 2 | 2008-10-15T04:55:00.000 | python,sockets | Receive socket size limits good? | 1 | 3 | 3 | 207,096 | 0
0 | 0 | How can I get a list of the IP addresses or host names from a local network easily in Python?
It would be best if it was multi-platform, but it needs to work on Mac OS X first, then others follow.
Edit: By local I mean all active addresses within a local network, such as 192.168.xxx.xxx.
So, if the IP address of my computer (within the local network) is 192.168.1.1, and I have three other connected computers, I would want it to return the IP addresses 192.168.1.2, 192.168.1.3, 192.168.1.4, and possibly their hostnames. | true | 207,234 | 1.2 | 0 | 0 | 24 | If by "local" you mean on the same network segment, then you have to perform the following steps:
Determine your own IP address
Determine your own netmask
Determine the network range (a sketch of computing this from the address and netmask follows after this answer)
Scan all the addresses (except the lowest, which is your network address and the highest, which is your broadcast address).
Use your DNS's reverse lookup to determine the hostname for IP addresses which respond to your scan.
Or you can just let Python execute nmap externally and pipe the results back into your program. | 0 | 129,230 | 1 | 46 | 2008-10-16T02:32:00.000 | python,networking | List of IP addresses/hostnames from local network in Python | 1 | 1 | 11 | 207,246 | 0 |
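A sketch of steps 1–3 above: computing the scannable range from an address and netmask with only the standard library (this is Python 2 era, so no ipaddress module; the sample address and mask are placeholders):

```python
import socket
import struct

def network_range(ip, netmask):
    # Convert dotted-quad strings to 32-bit integers, mask, then enumerate.
    ip_int = struct.unpack("!I", socket.inet_aton(ip))[0]
    mask_int = struct.unpack("!I", socket.inet_aton(netmask))[0]
    network = ip_int & mask_int
    broadcast = network | (~mask_int & 0xFFFFFFFF)
    # Skip the network address and the broadcast address themselves.
    for host in range(network + 1, broadcast):
        yield socket.inet_ntoa(struct.pack("!I", host))

for address in network_range("192.168.1.1", "255.255.255.0"):
    print(address)  # 192.168.1.1 through 192.168.1.254 (includes our own address)
```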
0 | 0 | I have noticed that my particular instance of Trac is not running quickly and has big lags. This is at the very onset of a project, so not much is in Trac (except for plugins and code loaded into SVN).
Setup Info: This is via a SELinux system hosted by WebFaction. It is behind Apache, and connections are over SSL. Currently the .htpasswd file is what I use to control access.
Are there any recommend ways to improve the performance of Trac? | true | 213,838 | 1.2 | 1 | 0 | 5 | It's hard to say without knowing more about your setup, but one easy win is to make sure that Trac is running in something like mod_python, which keeps the Python runtime in memory. Otherwise, every HTTP request will cause Python to run, import all the modules, and then finally handle the request. Using mod_python (or FastCGI, whichever you prefer) will eliminate that loading and skip straight to the good stuff.
Also, as your Trac database grows and you get more people using the site, you'll probably outgrow the default SQLite database. At that point, you should think about migrating the database to PostgreSQL or MySQL, because they'll be able to handle concurrent requests much faster. | 0 | 2,997 | 0 | 3 | 2008-10-17T21:02:00.000 | python,performance,trac | How to improve Trac's performance | 1 | 2 | 4 | 214,162 | 0 |
0 | 0 | I have noticed that my particular instance of Trac is not running quickly and has big lags. This is at the very onset of a project, so not much is in Trac (except for plugins and code loaded into SVN).
Setup Info: This is via a SELinux system hosted by WebFaction. It is behind Apache, and connections are over SSL. Currently the .htpasswd file is what I use to control access.
Are there any recommend ways to improve the performance of Trac? | false | 213,838 | 0.148885 | 1 | 0 | 3 | We've had the best luck with FastCGI. Another critical factor was to only use https for authentication but use http for all other traffic -- I was really surprised how much that made a difference. | 0 | 2,997 | 0 | 3 | 2008-10-17T21:02:00.000 | python,performance,trac | How to improve Trac's performance | 1 | 2 | 4 | 215,084 | 0 |
0 | 0 | I am interested in making a Google Talk client using Python and would like to use the Twisted libraries Words module. I have looked at the examples, but they don't work with the current implementation of Google Talk.
Has anybody had any luck with this? Would you mind documenting a brief tutorial?
As a simple task, I'd like to create a client/bot that tracks the Online time of my various Google Talk accounts so that I can get an aggregate number. I figure I could friend the bot in each account and then use the XMPP presence information to keep track of the times that I can then aggregate.
Thanks. | false | 227,279 | -0.099668 | 0 | 0 | -2 | As the Twisted libs seem to be out of date, you have two choices:
Implement your own XMPP-handler or look for another library.
I would suggest working with the raw XML; XMPP is not that complicated and you are bound to learn something. | 1 | 8,406 | 0 | 17 | 2008-10-22T19:48:00.000 | python,twisted,xmpp,google-talk | How do you create a simple Google Talk Client using the Twisted Words Python library? | 1 | 1 | 4 | 228,877 | 0 |
1 | 0 | I have written a script that goes through a bunch of files and snips out a portion of the files for further processing. The script creates a new directory and creates new files for each snip that is taken out. I have to now evaluate each of the files that were created to see if it is what I needed. The script also creates an html index file with links to each of the snips. So I can click the hyperlink to see the file, make a note in a spreadsheet to indicate if the file is correct or not and then use the back button in the browser to take me back to the index list.
I was sitting here wondering if I could somehow create a delete button in the browser next to the hyperlink. My thought is I would click the hyperlink, make a judgment about the file and if it is not one I want to keep then when I get back to the main page I just press the delete button and it is gone from the directory.
Does anyone have any idea if this is possible? I am writing this in Python, but clearly the issue is whether there is a way to create an HTML file with a delete button; I would just use Python to write the commands for the deletion button. | false | 256,021 | 0 | 0 | 0 | 0 | You would have to write the web page in Python. There are many Python web frameworks out there (e.g. Django) that are easy to work with. You could convert your entire scripting framework to a web application that has a worker thread going and crawling through HTML pages, saving them to a particular location, indexing them for you to see, and providing a delete button that calls the system's delete function on the particular file. | 0 | 1,222 | 0 | 2 | 2008-11-01T19:52:00.000 | python,web-applications,browser | Can I Use Python to Make a Delete Button in a 'web page' | 1 | 2 | 4 | 256,028 | 0
1 | 0 | I have written a script that goes through a bunch of files and snips out a portion of the files for further processing. The script creates a new directory and creates new files for each snip that is taken out. I have to now evaluate each of the files that were created to see if it is what I needed. The script also creates an html index file with links to each of the snips. So I can click the hyperlink to see the file, make a note in a spreadsheet to indicate if the file is correct or not and then use the back button in the browser to take me back to the index list.
I was sitting here wondering if I could somehow create a delete button in the browser next to the hyperlink. My thought is I would click the hyperlink, make a judgment about the file and if it is not one I want to keep then when I get back to the main page I just press the delete button and it is gone from the directory.
Does anyone have any idea if this is possible. I am writing this in python but clearly the issue is is there a way to create an htm file with a delete button-I would just use Python to write the commands for the deletion button. | false | 256,021 | 0.049958 | 0 | 0 | 1 | You could make this even simpler by making it all happen in one main page. Instead of having a list of hyperlinks, just have the main page have one frame that loads one of the autocreated pages in it. Put a couple of buttons at the bottom - a "Keep this page" and a "Delete this page." When you click either button, the main page refreshes, this time with the next autocreated page in the frame.
You could write this as a CGI script in your favorite scripting language. You can't do this in HTML alone, because an HTML page only does things client-side, and you can only delete files server-side. You will probably need, as CGI args, the page to show in the frame and the last page you viewed (if the button click was a "delete"). A minimal sketch follows. | 0 | 1,222 | 0 | 2 | 2008-11-01T19:52:00.000 | python,web-applications,browser | Can I Use Python to Make a Delete Button in a 'web page' | 1 | 2 | 4 | 256,040 | 0
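A minimal CGI sketch of the delete action described above; the directory path and parameter names are invented for illustration, and a real script must validate the filename so it cannot delete arbitrary files:

```python
#!/usr/bin/env python
import cgi
import os

form = cgi.FieldStorage()
page = os.path.basename(form.getfirst("page", ""))  # basename() blocks ../ tricks
if form.getfirst("action") == "delete" and page:
    os.remove(os.path.join("/path/to/snips", page))

print("Content-Type: text/html\n")
print("<html><body>Deleted %s</body></html>" % cgi.escape(page))
```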
1 | 0 | I am writing a scraper that downloads all the image files from an HTML page and saves them to a specific folder. All the images are part of the HTML page. | false | 257,409 | 0.085505 | 0 | 0 | 3 | Use htmllib to extract all img tags (override do_img), then use urllib2 to download all the images. | 0 | 97,893 | 0 | 47 | 2008-11-02T21:31:00.000 | python,screen-scraping | Download image file from the HTML page source using python? | 1 | 2 | 7 | 257,413 | 0
1 | 0 | I am writing a scraper that downloads all the image files from an HTML page and saves them to a specific folder. All the images are part of the HTML page. | false | 257,409 | 1 | 0 | 0 | 8 | You have to download the page and parse the HTML document, find your images with a regex, and download them. You can use urllib2 for downloading and Beautiful Soup for parsing the HTML file. | 0 | 97,893 | 0 | 47 | 2008-11-02T21:31:00.000 | python,screen-scraping | Download image file from the HTML page source using python? | 1 | 2 | 7 | 257,412 | 0
0 | 0 | I'm conducting experiments regarding e-mail spam. One of these experiments requires sending mail through Tor. Since I'm using Python and smtplib for my experiments, I'm looking for a way to use the Tor proxy (or another method) to perform that mail sending.
Ideas how this can be done? | true | 266,849 | 1.2 | 1 | 0 | 1 | Because of abuse by spammers, many Tor egress nodes decline to emit port 25 (SMTP) traffic, so you may have problems. | 0 | 2,405 | 0 | 2 | 2008-11-05T21:50:00.000 | python,smtp,tor | Using Python's smtplib with Tor | 1 | 1 | 2 | 275,164 | 0 |
0 | 0 | I am running my HTTPServer in a separate thread (using the threading module which has no way to stop threads...) and want to stop serving requests when the main thread also shuts down.
The Python documentation states that BaseHTTPServer.HTTPServer is a subclass of SocketServer.TCPServer, which supports a shutdown method, but it is missing in HTTPServer.
The whole BaseHTTPServer module has very little documentation :( | false | 268,629 | 1 | 0 | 0 | 15 | I think you can use [serverName].socket.close() | 0 | 87,640 | 0 | 58 | 2008-11-06T13:10:00.000 | python,http,basehttpserver | How to stop BaseHTTPServer.serve_forever() in a BaseHTTPRequestHandler subclass? | 1 | 1 | 11 | 4,020,093 | 0 |
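A sketch of that approach. Closing the listening socket makes the blocked serve_forever() loop raise an exception in its thread, which is the usual hack on pre-2.6 Pythons where TCPServer has no shutdown(); the host, port, and daemon-thread choice here are assumptions:

```python
import threading
import BaseHTTPServer

server = BaseHTTPServer.HTTPServer(
    ("localhost", 8080), BaseHTTPServer.BaseHTTPRequestHandler)
thread = threading.Thread(target=server.serve_forever)
thread.setDaemon(True)  # don't keep the process alive when the main thread exits
thread.start()

# ... later, from the main thread:
server.socket.close()   # serve_forever errors out of its accept loop and stops
```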
0 | 0 | I develop a client-server style, database based system and I need to devise a way to stress / load test the system. Customers inevitably want to know such things as:
• How many clients can a server support?
• How many concurrent searches can a server support?
• How much data can we store in the database?
• Etc.
Key to all these questions is response time. We need to be able to measure how response time and performance degrades as new load is introduced so that we could for example, produce some kind of pretty graph we could throw at clients to give them an idea what kind of performance to expect with a given hardware configuration.
Right now we just put our fingers in the air and make educated guesses based on what we already know about the system from experience. As the product is put under more demanding conditions, this is proving to be inadequate for our needs going forward.
I've been given the task of devising a method to get such answers in a meaningful way. I realise that this is not a question that anyone can answer definitively but I'm looking for suggestions about how people have gone about doing such work on their own systems.
One thing to note is that we have full access to our client API via the Python language (courtesy of SWIG) which is a lot easier to work with than C++ for this kind of work.
So there we go, I throw this to the floor: really interested to see what ideas you guys can come up with! | false | 271,825 | 0.197375 | 1 | 0 | 5 | For performance you are looking at two things: latency (the responsiveness of the application) and throughput (how many ops per interval). For latency you need to have an acceptable benchmark. For throughput you need to have a minimum acceptable throughput.
These are you starting points. For telling a client how many xyz's you can do per interval then you are going to need to know the hardware and software configuration. Knowing the production hardware is important to getting accurate figures. If you do not know the hardware configuration then you need to devise a way to map your figures from the test hardware to the eventual production hardware.
Without knowledge of hardware then you can really only observe trends in performance over time rather than absolutes.
Knowing the software configuration is equally important. Do you have a clustered server configuration, is it load balanced, is there anything else running on the server? Can you scale your software or do you have to scale the hardware to meet demand.
To know how many clients you can support you need to understand what is a standard set of operations. A quick test is to remove the client and write a stub client and the spin up as many of these as you can. Have each one connect to the server. You will eventually reach the server connection resource limit. Without connection pooling or better hardware you can't get higher than this. Often you will hit a architectural issue before here but in either case you have an upper bounds.
Take this information and design a script that your client can enact. You need to map how long your script takes to perform the action with respect to how long it will take the expected user to do it. Start increasing your numbers as mentioned above to you hit the point where the increase in clients causes a greater decrease in performance.
There are many ways to stress test but the key is understanding expected load. Ask your client about their expectations. What is the expected demand per interval? From there you can work out upper loads.
You can do a soak test with many clients operating continously for many hours or days. You can try to connect as many clients as you can as fast you can to see how well your server handles high demand (also a DOS attack).
Concurrent searches should be done through your standard behaviour searches acting on behalf of the client or, write a script to establish a semaphore that waits on many threads, then you can release them all at once. This is fun and punishes your database. When performing searches you need to take into account any caching layers that may exist. You need to test both caching and without caching (in scenarios where everyone makes unique search requests).
Database storage is based on physical space; you can determine row size from the field lengths and expected data population. Extrapolate this out statistically or create a data generation script (useful for your load testing scenarios and should be an asset to your organisation) and then map the generated data to business objects. Your clients will care about how many "business objects" they can store while you will care about how much raw data can be stored.
Other things to consider: What is the expected availability? What about how long it takes to bring a server online. 99.9% availability is not good if it takes two days to bring back online the one time it does go down. On the flip side a lower availablility is more acceptable if it takes 5 seconds to reboot and you have a fall over. | 0 | 10,487 | 0 | 13 | 2008-11-07T11:35:00.000 | python,database,client-server,load-testing,stress-testing | How should I stress test / load test a client server application? | 1 | 2 | 5 | 271,918 | 0 |
0 | 0 | I develop a client-server style, database based system and I need to devise a way to stress / load test the system. Customers inevitably want to know such things as:
• How many clients can a server support?
• How many concurrent searches can a server support?
• How much data can we store in the database?
• Etc.
Key to all these questions is response time. We need to be able to measure how response time and performance degrades as new load is introduced so that we could for example, produce some kind of pretty graph we could throw at clients to give them an idea what kind of performance to expect with a given hardware configuration.
Right now we just put our fingers in the air and make educated guesses based on what we already know about the system from experience. As the product is put under more demanding conditions, this is proving to be inadequate for our needs going forward.
I've been given the task of devising a method to get such answers in a meaningful way. I realise that this is not a question that anyone can answer definitively but I'm looking for suggestions about how people have gone about doing such work on their own systems.
One thing to note is that we have full access to our client API via the Python language (courtesy of SWIG) which is a lot easier to work with than C++ for this kind of work.
So there we go, I throw this to the floor: really interested to see what ideas you guys can come up with! | false | 271,825 | 0 | 1 | 0 | 0 | If you have the budget, LoadRunner would be perfect for this. | 0 | 10,487 | 0 | 13 | 2008-11-07T11:35:00.000 | python,database,client-server,load-testing,stress-testing | How should I stress test / load test a client server application? | 1 | 2 | 5 | 271,891 | 0 |
0 | 0 | Let's say I wanted to make a python script interface with a site like Twitter.
What would I use to do that? I'm used to using curl/wget from bash, but Python seems to be much nicer to use. What's the equivalent?
(This isn't Python run from a webserver, but run locally via the command line) | false | 285,226 | 0.07983 | 1 | 0 | 2 | Python has a very nice httplib module as well as urllib/urllib2, which together will probably accomplish most of what you need (at least with regards to wget functionality). | 0 | 839 | 0 | 6 | 2008-11-12T20:28:00.000 | python,web-services,twitter | What Python tools can I use to interface with a website's API? | 1 | 1 | 5 | 285,252 | 0
0 | 0 | I need to connect to an Exchange mailbox in a Python script, without using any profile setup on the local machine (including using Outlook). If I use win32com to create a MAPI.Session I could logon (with the Logon() method) with an existing profile, but I want to just provide a username & password.
Is this possible? If so, could someone provide example code? I would prefer if it only used the standard library and the pywin32 package. Unfortunately, enabling IMAP access for the Exchange server (and then using imaplib) is not possible.
In case it is necessary: all the script will be doing is connecting to the mailbox, and running through the messages in the Inbox, retrieving the contents. I can handle writing the code for that, if I can get a connection in the first place!
To clarify regarding Outlook: Outlook will be installed on the local machine, but it does not have any accounts setup (i.e. all the appropriate libraries will be available, but I need to operate independently from anything setup inside of Outlook). | true | 288,546 | 1.2 | 1 | 0 | 1 | I'm pretty sure this is going to be impossible without using Outlook and a MAPI profile. If you can sweet talk your mail admin into enabling IMAP on the Exchange server it would make your life a lot easier. | 0 | 107,535 | 0 | 26 | 2008-11-13T22:19:00.000 | python,email,connection,exchange-server,pywin32 | Connect to Exchange mailbox with Python | 1 | 1 | 4 | 288,569 | 0 |
1 | 0 | We have developers with knowledge of these languages: Ruby, Python, .NET or Java. We are developing an application which will mainly handle XML documents. Most of the work is converting predefined XML files into database tables, mapping between XML documents via the database, creating reports from the database, etc. Which language will be the easiest and fastest to work with?
(It is a web-app) | false | 301,493 | 0.044415 | 1 | 0 | 2 | Either C# or VB.NET using LINQ to XML. LINQ to XML is very powerful and easy to implement. | 0 | 18,193 | 0 | 21 | 2008-11-19T10:35:00.000 | java,.net,python,xml,ruby | Which language is easiest and fastest to work with XML content? | 1 | 1 | 9 | 301,538 | 0
0 | 0 | Is there a way to find all nodes in an XML tree using cElementTree? The findall method works only for specified tags. | false | 304,216 | 0.099668 | 0 | 0 | 1 | Have you looked at node.getiterator()? | 0 | 1,525 | 0 | 2 | 2008-11-20T03:06:00.000 | python,xml,search,celementtree | Find all nodes from an XML using cElementTree | 1 | 1 | 2 | 304,221 | 0
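A short example of the suggestion above: getiterator() with no tag argument walks every element in the tree (on newer Pythons this is spelled iter()); the filename is a placeholder:

```python
import xml.etree.cElementTree as ET

tree = ET.parse("data.xml")
for node in tree.getiterator():  # every element, regardless of tag
    print(node.tag)
```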
1 | 0 | Comparable to cacti or mrtg. | false | 310,759 | 0 | 1 | 0 | 0 | Or you can start building your own solution (like me); you will be surprised how much you can do with a few lines of code using, for instance, CherryPy as the web server, pysnmp, and a Python RRD module. | 0 | 1,942 | 0 | 1 | 2008-11-22T02:26:00.000 | python,django,pylons,snmp,turbogears | Does anyone know of a python based web ui for snmp monitoring? | 1 | 1 | 2 | 541,516 | 0
0 | 0 | I need to run a simple request/response python module under an
existing system with windows/apache/FastCGI.
All the FastCGI wrappers for python I tried work for Linux only
(they use socket.fromfd() and other such shticks).
Is there a wrapper that runs under windows? | false | 312,928 | 0.132549 | 0 | 0 | 2 | You might find it easier to ditch FastCGI altogether and just run a python webserver on a localhost port. Then just use mod_rewrite to map the apache urls to the internal webserver.
(I started offering FastCGI at my hosting company and to my surprise, nearly everyone ditched it in favor of just running their own web server on the ports I provided them.) | 0 | 3,212 | 1 | 6 | 2008-11-23T20:39:00.000 | python,windows,apache,fastcgi | Python as FastCGI under windows and apache | 1 | 1 | 3 | 318,517 | 0 |
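A minimal sketch of that "own web server on a localhost port" idea using the standard library's wsgiref, with Apache then forwarding requests to the port (e.g. via mod_rewrite or mod_proxy); the port and response text are placeholders:

from wsgiref.simple_server import make_server

def app(environ, start_response):
    # Handle the request entirely in Python; Apache just forwards to this port
    start_response("200 OK", [("Content-Type", "text/plain")])
    return ["hello from the internal server\n"]

make_server("127.0.0.1", 8051, app).serve_forever()  # placeholder port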
0 | 0 | What's the best way to validate that an IP entered by the user is valid? It comes in as a string. | false | 319,279 | 0 | 0 | 0 | 0 | I only needed to parse IPv4 addresses. My solution, based on Chill's strategy, follows:
def getIP():
    valid = False
    while not valid:
        octets = raw_input("Remote Machine IP Address:").strip().split(".")
        try:
            # Valid only when there are exactly four octets, each in 0..255
            valid = len(filter(lambda item: 0 <= int(item) < 256, octets)) == 4
        except ValueError:  # a non-numeric octet
            valid = False
    return ".".join(octets) | 1 | 305,361 | 0 | 175 | 2008-11-25T23:40:00.000 | python,validation,networking,ip-address | How to validate IP address in Python? | 1 | 1 | 11 | 17,214,916 | 0
0 | 0 | I have an object that can build itself from an XML string, and write itself out to an XML string. I'd like to write a unit test to test round tripping through XML, but I'm having trouble comparing the two XML versions. Whitespace and attribute order seem to be the issues. Any suggestions for how to do this? This is in Python, and I'm using ElementTree (not that that really matters here since I'm just dealing with XML in strings at this level). | false | 321,795 | 0.059928 | 1 | 0 | 3 | Why are you examining the XML data at all?
The way to test object serialization is to create an instance of the object, serialize it, deserialize it into a new object, and compare the two objects. When you make a change that breaks serialization or deserialization, this test will fail.
The only thing checking the XML data is going to find for you is if your serializer is emitting a superset of what the deserializer requires, and the deserializer silently ignores stuff it doesn't expect.
Of course, if something else is going to be consuming the serialized data, that's another matter. But in that case, you ought to be thinking about establishing a schema for the XML and validating it. | 0 | 17,177 | 0 | 41 | 2008-11-26T19:09:00.000 | python,xml,elementtree | Comparing XML in a unit test in Python | 1 | 2 | 10 | 322,088 | 0 |
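A self-contained sketch of that style of test, with a toy Point class standing in for the real object; the important part is that assertEqual compares objects, never XML strings:

import unittest
import xml.etree.cElementTree as ET

class Point(object):
    # Toy object used only to illustrate the round-trip test
    def __init__(self, x, y):
        self.x, self.y = x, y
    def __eq__(self, other):
        return (self.x, self.y) == (other.x, other.y)
    def to_xml(self):
        return '<point x="%d" y="%d"/>' % (self.x, self.y)
    @classmethod
    def from_xml(cls, text):
        e = ET.fromstring(text)
        return cls(int(e.get("x")), int(e.get("y")))

class RoundTripTest(unittest.TestCase):
    def test_round_trip(self):
        original = Point(3, 4)
        restored = Point.from_xml(original.to_xml())
        self.assertEqual(original, restored)  # objects, not serialized text

if __name__ == "__main__":
    unittest.main()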
0 | 0 | I have an object that can build itself from an XML string, and write itself out to an XML string. I'd like to write a unit test to test round tripping through XML, but I'm having trouble comparing the two XML versions. Whitespace and attribute order seem to be the issues. Any suggestions for how to do this? This is in Python, and I'm using ElementTree (not that that really matters here since I'm just dealing with XML in strings at this level). | false | 321,795 | 0 | 1 | 0 | 0 | The Java component dbUnit does a lot of XML comparisons, so you might find it useful to look at their approach (especially to find any gotchas that they may have already addressed). | 0 | 17,177 | 0 | 41 | 2008-11-26T19:09:00.000 | python,xml,elementtree | Comparing XML in a unit test in Python | 1 | 2 | 10 | 322,600 | 0 |
0 | 0 | I need to check whether a page is being redirected or not without actually downloading the content. I just need the final URL. What's the best way of doing this in Python?
Thanks! | false | 331,855 | 0.099668 | 1 | 0 | 1 | When you open the URL with urllib2 and the server redirects you, any 30x redirect is followed automatically; the response's geturl() tells you the final location, and info() exposes the headers. You don't need to read the page body to inspect the info() that's part of the response. | 0 | 3,318 | 0 | 5 | 2008-12-01T19:10:00.000 | python,http,http-headers | How to determine if a page is being redirected | 1 | 1 | 2 | 331,871 | 0
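A minimal sketch along those lines: issuing a HEAD request (by overriding get_method) avoids downloading the body at all, and geturl() yields the final URL after any redirects. The URL is a placeholder:

import urllib2

class HeadRequest(urllib2.Request):
    # Ask the server for headers only, so no body is transferred
    def get_method(self):
        return "HEAD"

url = "http://example.com/page"  # placeholder
response = urllib2.urlopen(HeadRequest(url))
final_url = response.geturl()    # urllib2 has already followed any 30x hops
if final_url != url:
    print "redirected to", final_url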
1 | 0 | We use a number of different web services in our company: a wiki (MoinMoin), a bug tracker (internal), RequestTracker (customer connection), and Subversion. Is there a way to parse the wiki pages so that if I write "... in Bug1234 you could ..." Bug1234 would be rendered as a link to http://mybugtracker/bug1234? | true | 343,769 | 1.2 | 0 | 0 | 3 | Check out the InterWiki page in MoinMoin (most wikis have them). We use Trac, for example, and you can set up different link paths to point to your different web resources. So in our Trac you can write [[SSGWiki:Some Topic]] and it will point to another internal wiki. | 0 | 714 | 0 | 2 | 2008-12-05T13:08:00.000 | python,wiki,moinmoin | How to use InterWiki links in moinmoin? | 1 | 1 | 3 | 343,926 | 0
0 | 0 | I have a directory full (~10^3, 10^4) of XML files from which I need to extract the contents of several fields.
I've tested different xml parsers, and since I don't need to validate the contents (expensive) I was thinking of simply using xml.parsers.expat (the fastest one) to go through the files, one by one to extract the data.
Is there a more efficient way? (simple text matching doesn't work)
Do I need to issue a new ParserCreate() for each new file (or string) or can I reuse the same one for every file?
Any caveats?
Thanks! | false | 344,559 | 0.049958 | 1 | 0 | 1 | If you know that the XML files are generated using the same algorithm every time, it might be more efficient not to do any XML parsing at all. E.g. if you know that the data is in lines 3, 4, and 5, you might read through the file line by line, and then use regular expressions.
Of course, that approach would fail if the files are not machine-generated, or originate from different generators, or if the generator changes over time. However, I'm optimistic that it would be more efficient.
Whether or not you recycle the parser objects is largely irrelevant. Many more objects will get created, so a single parser object doesn't really count much. | 0 | 572 | 0 | 3 | 2008-12-05T17:15:00.000 | python,xml,performance,large-files,expat-parser | What is the most efficient way of extracting information from a large number of xml files in python? | 1 | 3 | 4 | 344,641 | 0 |
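On the ParserCreate() question specifically: as far as I know a pyexpat parser cannot be fed a second document once it has finished the first, so create one per file; they are cheap. A hedged field-extraction sketch (the tag names and glob pattern are placeholders):

import glob
import xml.parsers.expat

def extract(filename, wanted=("title", "date")):
    # Pull the text content of a few known elements out of one file
    results, state = {}, {"current": None}

    def start(name, attrs):
        state["current"] = name if name in wanted else None

    def chars(data):
        cur = state["current"]
        if cur:
            results[cur] = results.get(cur, "") + data

    p = xml.parsers.expat.ParserCreate()  # one parser per document
    p.StartElementHandler = start
    p.CharacterDataHandler = chars
    p.ParseFile(open(filename, "rb"))
    return results

for fn in glob.glob("*.xml"):  # placeholder pattern
    print fn, extract(fn)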
0 | 0 | I have a directory full (~10^3, 10^4) of XML files from which I need to extract the contents of several fields.
I've tested different xml parsers, and since I don't need to validate the contents (expensive) I was thinking of simply using xml.parsers.expat (the fastest one) to go through the files, one by one to extract the data.
Is there a more efficient way? (simple text matching doesn't work)
Do I need to issue a new ParserCreate() for each new file (or string) or can I reuse the same one for every file?
Any caveats?
Thanks! | false | 344,559 | 0.049958 | 1 | 0 | 1 | One thing you didn't indicate is whether or not you're reading the XML into a DOM of some kind. I'm guessing that you're probably not, but on the off chance you are, don't. Use xml.sax instead. Using SAX instead of DOM will get you a significant performance boost. | 0 | 572 | 0 | 3 | 2008-12-05T17:15:00.000 | python,xml,performance,large-files,expat-parser | What is the most efficient way of extracting information from a large number of xml files in python? | 1 | 3 | 4 | 345,650 | 0 |
0 | 0 | I have a directory full (~10^3, 10^4) of XML files from which I need to extract the contents of several fields.
I've tested different xml parsers, and since I don't need to validate the contents (expensive) I was thinking of simply using xml.parsers.expat (the fastest one) to go through the files, one by one to extract the data.
Is there a more efficient way? (simple text matching doesn't work)
Do I need to issue a new ParserCreate() for each new file (or string) or can I reuse the same one for every file?
Any caveats?
Thanks! | true | 344,559 | 1.2 | 1 | 0 | 3 | The quickest way would be to match strings (with, e.g., regular expressions) instead of parsing XML - depending on your XMLs this could actually work.
But the most important thing is this: instead of thinking through several options, just implement them and time them on a small set. This will take roughly the same amount of time, and will give you real numbers to drive you forward.
EDIT:
Are the files on a local drive or network drive? Network I/O will kill you here.
The problem parallelizes trivially - you can split the work among several computers (or several processes on a multicore computer). | 0 | 572 | 0 | 3 | 2008-12-05T17:15:00.000 | python,xml,performance,large-files,expat-parser | What is the most efficient way of extracting information from a large number of xml files in python? | 1 | 3 | 4 | 344,694 | 0 |
0 | 0 | I'm using urllib2 to read in a page. I need to do a quick regex on the source and pull out a few variables, but urllib2 returns a file-like object rather than a string.
I'm new to Python, so I'm struggling to see how to use a file object to do this. Is there a quick way to convert this into a string?
If f is your object, you can enter dir(f) to see all methods and attributes. There's one called read. Enter help(f.read) and it tells you that f.read() is the way to retrieve a string from a file object. | 1 | 48,579 | 0 | 31 | 2008-12-06T12:41:00.000 | python,file,urllib2 | Read file object as string in python | 1 | 1 | 3 | 346,237 | 0
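Putting it together, with a placeholder URL and pattern:

import re
import urllib2

source = urllib2.urlopen("http://example.com").read()  # file object -> string
match = re.search(r"<title>(.*?)</title>", source)     # placeholder pattern
if match:
    print match.group(1)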
0 | 0 | How can I receive and send email in python? A 'mail server' of sorts.
I am looking into making an app that listens to see if it receives an email addressed to foo@bar.domain.com, and sends an email to the sender.
Now, am I able to do this all in Python, or would it be best to use 3rd party libraries? | false | 348,392 | 0.044415 | 1 | 0 | 2 | Depending on the amount of mail you are sending, you might want to look into using a real mail server like postfix or sendmail (*nix systems). Both of those programs have the ability to send a received mail to a program based on the email address. | 0 | 51,409 | 0 | 43 | 2008-12-08T00:12:00.000 | python,email | Receive and send emails in python | 1 | 4 | 9 | 348,579 | 0
0 | 0 | How can I receive and send email in python? A 'mail server' of sorts.
I am looking into making an app that listens to see if it receives an email addressed to foo@bar.domain.com, and sends an email to the sender.
Now, am I able to do this all in Python, or would it be best to use 3rd party libraries? | false | 348,392 | 0.088656 | 1 | 0 | 4 | poplib and smtplib will be your friends when developing your app. | 0 | 51,409 | 0 | 43 | 2008-12-08T00:12:00.000 | python,email | Receive and send emails in python | 1 | 4 | 9 | 348,403 | 0
0 | 0 | How can I receive and send email in python? A 'mail server' of sorts.
I am looking into making an app that listens to see if it receives an email addressed to foo@bar.domain.com, and sends an email to the sender.
Now, am I able to do this all in Python, or would it be best to use 3rd party libraries? | false | 348,392 | 1 | 1 | 0 | 12 | I do not think it would be a good idea to write a real mail server in Python. This is certainly possible (see mcrute's and Manuel Ceron's posts for details), but it is a lot of work when you think of everything that a real mail server must handle (queuing, retransmission, dealing with spam, etc.).
You should explain in more detail what you need. If you just want to react to incoming email, I would suggest configuring the mail server to call a program when it receives the email. This program could do what it wants (updating a database, creating a file, talking to another Python program).
To call an arbitrary program from the mail server, you have several choices:
For sendmail and Postfix, a ~/.forward containing "|/path/to/program"
If you use procmail, a recipe action of |path/to/program
And certainly many others | 0 | 51,409 | 0 | 43 | 2008-12-08T00:12:00.000 | python,email | Receive and send emails in python | 1 | 4 | 9 | 349,352 | 0 |
0 | 0 | How can I receive and send email in python? A 'mail server' of sorts.
I am looking into making an app that listens to see if it receives an email addressed to foo@bar.domain.com, and sends an email to the sender.
Now, am I able to do this all in Python, or would it be best to use 3rd party libraries? | false | 348,392 | 1 | 1 | 0 | 7 | Python has an smtpd module that will be helpful to you for writing a server. You'll probably also want the smtplib module to do the re-send. Both modules are in the standard library at least since version 2.3. | 0 | 51,409 | 0 | 43 | 2008-12-08T00:12:00.000 | python,email | Receive and send emails in python | 1 | 4 | 9 | 348,423 | 0
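A minimal sketch tying those two modules together; the addresses, relay host, and port are placeholders:

import smtpd
import smtplib
import asyncore

class AutoReplyServer(smtpd.SMTPServer):
    def process_message(self, peer, mailfrom, rcpttos, data):
        # Reply to the sender of anything addressed to our magic mailbox
        if "foo@bar.domain.com" in rcpttos:
            out = smtplib.SMTP("localhost")  # placeholder relay
            out.sendmail("foo@bar.domain.com", [mailfrom],
                         "Subject: auto-reply\r\n\r\nGot your message.")
            out.quit()

AutoReplyServer(("0.0.0.0", 2525), None)  # placeholder port
asyncore.loop()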
1 | 0 | Edit: How do I return/serve a file from a Python controller (back end) over a web server, with the file_name, as suggested by @JV? | false | 352,340 | 0.132549 | 0 | 0 | 2 | You can either pass back a reference to the file itself, i.e. the full path to the file. Then you can open the file or otherwise manipulate it.
Or, the more normal case is to pass back the file handle, and, use the standard read/write operations on the file handle.
It is not recommended to pass the actual data, as files can be arbitrarily large and the program could run out of memory.
In your case, you probably want to return a tuple containing the open file handle, the file name and any other meta data you are interested in. | 0 | 14,590 | 0 | 1 | 2008-12-09T10:34:00.000 | python,file,mime-types,download | Return file from python module | 1 | 1 | 3 | 352,385 | 0 |
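A minimal sketch of that tuple-returning idea, using mimetypes to fill in the metadata (the path is a placeholder):

import os
import mimetypes

def open_for_download(path):
    # Return (file handle, file name, mime type, size) for the caller
    handle = open(path, "rb")
    mime, _ = mimetypes.guess_type(path)
    return handle, os.path.basename(path), mime, os.path.getsize(path)

handle, name, mime, size = open_for_download("/tmp/report.pdf")  # placeholder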
0 | 0 | There is a socket-related function call in my code. That function comes from another module and is thus out of my control; the problem is that it occasionally blocks for hours, which is totally unacceptable. How can I limit the function's execution time from my code? I guess the solution must utilize another thread. | false | 366,682 | 0.076772 | 0 | 0 | 5 | The only "safe" way to do this, in any language, is to use a secondary process to handle the timeout; otherwise you need to build your code in such a way that it will time out safely by itself, for instance by checking the time elapsed in a loop or similar. If changing the method isn't an option, a thread will not suffice.
Why? Because you're risking leaving things in a bad state when you do. If the thread is simply killed mid-method, any locks being held will stay held and cannot be released.
So look at the process way, do not look at the thread way. | 1 | 100,904 | 0 | 90 | 2008-12-14T16:20:00.000 | python,multithreading | How to limit execution time of a function call? | 1 | 1 | 13 | 366,754 | 0 |
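A minimal sketch of the process approach, assuming the multiprocessing module (Python 2.6+); the blocking call here is a stand-in for the real socket function:

import time
import multiprocessing

def blocking_call():  # stand-in for the function you can't control
    time.sleep(3600)

if __name__ == "__main__":
    proc = multiprocessing.Process(target=blocking_call)
    proc.start()
    proc.join(10)          # give it at most 10 seconds
    if proc.is_alive():
        proc.terminate()   # safe to kill: it's a separate process
        proc.join()
        print "timed out"

If you need a result back from the call, pass it through a multiprocessing.Queue before the join.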
0 | 0 | I'm using Python and I need to map locations like "Bloomington, IN" to GPS coordinates so I can measure distances between them.
What geocoding libraries/APIs do you recommend? Solutions in other languages are also welcome. | false | 373,383 | 0 | 1 | 0 | 0 | Have a look at the geopy module. It is well worth using, as it contains Google Maps and Yahoo Maps geocoders with which you can implement geocoding. | 0 | 13,754 | 0 | 23 | 2008-12-17T01:19:00.000 | python,api,rest,geocoding | Geocoding libraries | 1 | 1 | 9 | 2,229,732 | 0
1 | 0 | We're developing a Python web service and a client web site in parallel. When we make an HTTP request from the client to the service, one call consistently raises a socket.error in socket.py, in read:
(104, 'Connection reset by peer')
When I listen in with wireshark, the "good" and "bad" responses look very similar:
Because of the size of the OAuth header, the request is split into two packets. The service responds to both with ACK
The service sends the response, one packet per header (HTTP/1.0 200 OK, then the Date header, etc.). The client responds to each with ACK.
(Good request) the server sends a FIN, ACK. The client responds with a FIN, ACK. The server responds ACK.
(Bad request) the server sends a RST, ACK, the client doesn't send a TCP response, the socket.error is raised on the client side.
Both the web service and the client are running on a Gentoo Linux x86-64 box running glibc-2.6.1. We're using Python 2.5.2 inside the same virtual_env.
The client is a Django 1.0.2 app that is calling httplib2 0.4.0 to make requests. We're signing requests with the OAuth signing algorithm, with the OAuth token always set to an empty string.
The service is running Werkzeug 0.3.1, which is using Python's wsgiref.simple_server. I ran the WSGI app through wsgiref.validator with no issues.
It seems like this should be easy to debug, but when I trace through a good request on the service side, it looks just like the bad request, in the socket._socketobject.close() function, turning delegate methods into dummy methods. When the send or sendto (can't remember which) method is switched off, the FIN or RST is sent, and the client starts processing.
"Connection reset by peer" seems to place blame on the service, but I don't trust httplib2 either. Can the client be at fault?
** Further debugging - Looks like server on Linux **
I have a MacBook, so I tried running the service on one and the client website on the other. The Linux client calls the OS X server without the bug (FIN ACK). The OS X client calls the Linux service with the bug (RST ACK, and a (54, 'Connection reset by peer')). So, it looks like it's the service running on Linux. Is it x86_64? A bad glibc? wsgiref? Still looking...
** Further testing - wsgiref looks flaky **
We've gone to production with Apache and mod_wsgi, and the connection resets have gone away. See my answer below, but my advice is to log the connection reset and retry. This will let your server run OK in development mode, and solidly in production. | false | 383,738 | 0.099668 | 0 | 0 | 2 | I had the same issue, however, when doing an upload of a very large file using a python-requests client posting to an nginx+uwsgi backend.
What ended up being the cause was that the backend had a cap on the max file size for uploads, lower than what the client was trying to send.
The error never showed up in our uwsgi logs since this limit was actually one imposed by nginx.
Upping the limit in nginx removed the error. | 0 | 140,801 | 0 | 39 | 2008-12-20T21:04:00.000 | python,sockets,wsgi,httplib2,werkzeug | 104, 'Connection reset by peer' socket error, or When does closing a socket result in a RST rather than FIN? | 1 | 3 | 4 | 52,826,181 | 0 |
1 | 0 | We're developing a Python web service and a client web site in parallel. When we make an HTTP request from the client to the service, one call consistently raises a socket.error in socket.py, in read:
(104, 'Connection reset by peer')
When I listen in with wireshark, the "good" and "bad" responses look very similar:
Because of the size of the OAuth header, the request is split into two packets. The service responds to both with ACK
The service sends the response, one packet per header (HTTP/1.0 200 OK, then the Date header, etc.). The client responds to each with ACK.
(Good request) the server sends a FIN, ACK. The client responds with a FIN, ACK. The server responds ACK.
(Bad request) the server sends a RST, ACK, the client doesn't send a TCP response, the socket.error is raised on the client side.
Both the web service and the client are running on a Gentoo Linux x86-64 box running glibc-2.6.1. We're using Python 2.5.2 inside the same virtual_env.
The client is a Django 1.0.2 app that is calling httplib2 0.4.0 to make requests. We're signing requests with the OAuth signing algorithm, with the OAuth token always set to an empty string.
The service is running Werkzeug 0.3.1, which is using Python's wsgiref.simple_server. I ran the WSGI app through wsgiref.validator with no issues.
It seems like this should be easy to debug, but when I trace through a good request on the service side, it looks just like the bad request, in the socket._socketobject.close() function, turning delegate methods into dummy methods. When the send or sendto (can't remember which) method is switched off, the FIN or RST is sent, and the client starts processing.
"Connection reset by peer" seems to place blame on the service, but I don't trust httplib2 either. Can the client be at fault?
** Further debugging - Looks like server on Linux **
I have a MacBook, so I tried running the service on one and the client website on the other. The Linux client calls the OS X server without the bug (FIN ACK). The OS X client calls the Linux service with the bug (RST ACK, and a (54, 'Connection reset by peer')). So, it looks like it's the service running on Linux. Is it x86_64? A bad glibc? wsgiref? Still looking...
** Further testing - wsgiref looks flaky **
We've gone to production with Apache and mod_wsgi, and the connection resets have gone away. See my answer below, but my advice is to log the connection reset and retry. This will let your server run OK in development mode, and solidly in production. | false | 383,738 | 1 | 0 | 0 | 11 | Don't use wsgiref for production. Use Apache and mod_wsgi, or something else.
We continue to see these connection resets, sometimes frequently, with wsgiref (the backend used by the werkzeug test server, and possibly others like the Django test server). Our solution was to log the error, retry the call in a loop, and give up after ten failures. httplib2 tries twice, but we needed a few more. They seem to come in bunches as well - adding a 1 second sleep might clear the issue.
We've never seen a connection reset when running through Apache and mod_wsgi. I don't know what they do differently, (maybe they just mask them), but they don't appear.
When we asked the local dev community for help, someone confirmed that they see a lot of connection resets with wsgiref that go away on the production server. There's a bug there, but it is going to be hard to find it. | 0 | 140,801 | 0 | 39 | 2008-12-20T21:04:00.000 | python,sockets,wsgi,httplib2,werkzeug | 104, 'Connection reset by peer' socket error, or When does closing a socket result in a RST rather than FIN? | 1 | 3 | 4 | 481,952 | 0 |
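A hedged sketch of that log-and-retry wrapper; the retry count and pause are arbitrary choices, not values from our production code:

import time
import socket
import logging

def call_with_retries(func, retries=10, pause=1.0):
    for attempt in range(retries):
        try:
            return func()
        except socket.error, e:
            logging.warning("connection reset, attempt %d: %s", attempt + 1, e)
            if attempt == retries - 1:
                raise          # give up after the last attempt
            time.sleep(pause)  # resets seem to come in bunches; back off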
1 | 0 | We're developing a Python web service and a client web site in parallel. When we make an HTTP request from the client to the service, one call consistently raises a socket.error in socket.py, in read:
(104, 'Connection reset by peer')
When I listen in with wireshark, the "good" and "bad" responses look very similar:
Because of the size of the OAuth header, the request is split into two packets. The service responds to both with ACK
The service sends the response, one packet per header (HTTP/1.0 200 OK, then the Date header, etc.). The client responds to each with ACK.
(Good request) the server sends a FIN, ACK. The client responds with a FIN, ACK. The server responds ACK.
(Bad request) the server sends a RST, ACK, the client doesn't send a TCP response, the socket.error is raised on the client side.
Both the web service and the client are running on a Gentoo Linux x86-64 box running glibc-2.6.1. We're using Python 2.5.2 inside the same virtual_env.
The client is a Django 1.0.2 app that is calling httplib2 0.4.0 to make requests. We're signing requests with the OAuth signing algorithm, with the OAuth token always set to an empty string.
The service is running Werkzeug 0.3.1, which is using Python's wsgiref.simple_server. I ran the WSGI app through wsgiref.validator with no issues.
It seems like this should be easy to debug, but when I trace through a good request on the service side, it looks just like the bad request, in the socket._socketobject.close() function, turning delegate methods into dummy methods. When the send or sendto (can't remember which) method is switched off, the FIN or RST is sent, and the client starts processing.
"Connection reset by peer" seems to place blame on the service, but I don't trust httplib2 either. Can the client be at fault?
** Further debugging - Looks like server on Linux **
I have a MacBook, so I tried running the service on one and the client website on the other. The Linux client calls the OS X server without the bug (FIN ACK). The OS X client calls the Linux service with the bug (RST ACK, and a (54, 'Connection reset by peer')). So, it looks like it's the service running on Linux. Is it x86_64? A bad glibc? wsgiref? Still looking...
** Further testing - wsgiref looks flaky **
We've gone to production with Apache and mod_wsgi, and the connection resets have gone away. See my answer below, but my advice is to log the connection reset and retry. This will let your server run OK in development mode, and solidly in production. | false | 383,738 | 0.148885 | 0 | 0 | 3 | Normally, you'd get an RST if you do a close which doesn't linger (i.e. in which data can be discarded by the stack if it hasn't been sent and ACK'd) and a normal FIN if you allow the close to linger (i.e. the close waits for the data in transit to be ACK'd).
Perhaps all you need to do is set your socket to linger so that you remove the race condition between a non lingering close done on the socket and the ACKs arriving? | 0 | 140,801 | 0 | 39 | 2008-12-20T21:04:00.000 | python,sockets,wsgi,httplib2,werkzeug | 104, 'Connection reset by peer' socket error, or When does closing a socket result in a RST rather than FIN? | 1 | 3 | 4 | 384,415 | 0 |
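For reference, a minimal sketch of enabling lingering with setsockopt; the values are arbitrary (l_onoff=1 turns lingering on, l_linger=10 waits up to ten seconds on close for in-flight data to be ACK'd):

import socket
import struct

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER, struct.pack("ii", 1, 10))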
0 | 0 | Is there any way to send ARP packet on Windows without the use of another library such as winpcap?
I have heard that Windows XP SP2 blocks raw ethernet sockets, but I have also heard that raw sockets are only blocked for administrators. Any clarification here? | false | 395,846 | 0 | 0 | 0 | 0 | You could use the OpenVPN tap to send arbitrary packets as if you where using raw sockets. | 0 | 4,899 | 0 | 3 | 2008-12-28T04:52:00.000 | python,sockets,ethernet,arp | How do I send an ARP packet through python on windows without needing winpcap? | 1 | 1 | 2 | 503,144 | 0 |
0 | 0 | I am using XML minidom (xml.dom.minidom) in Python, but any error in the XML will kill the parser.
Is it possible to ignore them, like a browser for example?
I am trying to write a browser in Python, but it just throws an exception if the tags aren't fully compatible. | false | 399,980 | 0.197375 | 0 | 0 | 3 | It should be noted that while HTML looks like XML, it is not XML. XHTML is an XML form of HTML. | 0 | 4,564 | 0 | 6 | 2008-12-30T10:48:00.000 | python,xml,minidom | Ignoring XML errors in Python | 1 | 1 | 3 | 400,669 | 0
1 | 0 | I'm working on a Python library that interfaces with a web service API. Like many web services I've encountered, this one requests limiting the rate of requests. I would like to provide an optional parameter, limit, to the class instantiation that, if provided, will hold outgoing requests until the number of seconds specified passes.
I understand that the general scenario is the following: an instance of the class makes a request via a method. When it does, the method emits some signal that sets a lock variable somewhere, and begins a countdown timer for the number of seconds in limit. (In all likelihood, the lock is the countdown timer itself.) If another request is made within this time frame, it must be queued until the countdown timer reaches zero and the lock is disengaged; at this point, the oldest request on the queue is sent, and the countdown timer is reset and the lock is re-engaged.
Is this a case for threading? Is there another approach I'm not seeing?
Should the countdown timer and lock be instance variables, or should they belong to the class, such that all instances of the class hold requests?
Also, is this generally a bad idea to provide rate-limiting functionality within a library? I reason since, by default, the countdown is zero seconds, the library still allows developers to use the library and provide their own rate-limiting schemes. Given any developers using the service will need to rate-limit requests anyway, however, I figure that it would be a convenience for the library to provide a means of rate-limiting.
Regardless of placing a rate-limiting scheme in the library or not, I'll want to write an application using the library, so suggested techniques will come in handy. | false | 401,215 | 0.033321 | 0 | 0 | 1 | So I am assuming something simple like
import time
time.sleep(2)
will not work for waiting 2 seconds between requests | 1 | 17,294 | 0 | 18 | 2008-12-30T19:30:00.000 | python,web-services,rate-limiting | How to limit rate of requests to web services in Python? | 1 | 3 | 6 | 401,826 | 0 |
1 | 0 | I'm working on a Python library that interfaces with a web service API. Like many web services I've encountered, this one requests limiting the rate of requests. I would like to provide an optional parameter, limit, to the class instantiation that, if provided, will hold outgoing requests until the number of seconds specified passes.
I understand that the general scenario is the following: an instance of the class makes a request via a method. When it does, the method emits some signal that sets a lock variable somewhere, and begins a countdown timer for the number of seconds in limit. (In all likelihood, the lock is the countdown timer itself.) If another request is made within this time frame, it must be queued until the countdown timer reaches zero and the lock is disengaged; at this point, the oldest request on the queue is sent, and the countdown timer is reset and the lock is re-engaged.
Is this a case for threading? Is there another approach I'm not seeing?
Should the countdown timer and lock be instance variables, or should they belong to the class, such that all instances of the class hold requests?
Also, is this generally a bad idea to provide rate-limiting functionality within a library? I reason since, by default, the countdown is zero seconds, the library still allows developers to use the library and provide their own rate-limiting schemes. Given any developers using the service will need to rate-limit requests anyway, however, I figure that it would be a convenience for the library to provide a means of rate-limiting.
Regardless of placing a rate-limiting scheme in the library or not, I'll want to write an application using the library, so suggested techniques will come in handy. | false | 401,215 | 0.033321 | 0 | 0 | 1 | Your rate limiting scheme should be heavily influenced by the calling conventions of the underlying code (syncronous or async), as well as what scope (thread, process, machine, cluster?) this rate-limiting will operate at.
I would suggest keeping all the variables within the instance, so you can easily implement multiple periods/rates of control.
Lastly, it sounds like you want to be a middleware component. Don't try to be an application and introduce threads on your own. Just block/sleep if you are synchronous and use the async dispatching framework if you are being called by one of them. | 1 | 17,294 | 0 | 18 | 2008-12-30T19:30:00.000 | python,web-services,rate-limiting | How to limit rate of requests to web services in Python? | 1 | 3 | 6 | 401,332 | 0 |
1 | 0 | I'm working on a Python library that interfaces with a web service API. Like many web services I've encountered, this one requests limiting the rate of requests. I would like to provide an optional parameter, limit, to the class instantiation that, if provided, will hold outgoing requests until the number of seconds specified passes.
I understand that the general scenario is the following: an instance of the class makes a request via a method. When it does, the method emits some signal that sets a lock variable somewhere, and begins a countdown timer for the number of seconds in limit. (In all likelihood, the lock is the countdown timer itself.) If another request is made within this time frame, it must be queued until the countdown timer reaches zero and the lock is disengaged; at this point, the oldest request on the queue is sent, and the countdown timer is reset and the lock is re-engaged.
Is this a case for threading? Is there another approach I'm not seeing?
Should the countdown timer and lock be instance variables, or should they belong to the class, such that all instances of the class hold requests?
Also, is this generally a bad idea to provide rate-limiting functionality within a library? I reason since, by default, the countdown is zero seconds, the library still allows developers to use the library and provide their own rate-limiting schemes. Given any developers using the service will need to rate-limit requests anyway, however, I figure that it would be a convenience for the library to provide a means of rate-limiting.
Regardless of placing a rate-limiting scheme in the library or not, I'll want to write an application using the library, so suggested techniques will come in handy. | false | 401,215 | 0.066568 | 0 | 0 | 2 | Queuing may be overly complicated. A simpler solution is to give your class a variable for the time the service was last called. Whenever the service is called (!1), set waitTime to delay - Now + lastcalltime. delay should be equal to the minimum allowable time between requests. If this number is positive, sleep for that long before making the call (!2). The disadvantage of this approach is that it treats the web service requests as being synchronous; the advantage is that it is absurdly simple and easy to implement.
(!1): Should happen right after receiving a response from the service, inside the wrapper (probably at the bottom of the wrapper).
(!2): Should happen when the python wrapper around the web service is called, at the top of the wrapper.
S.Lott's solution is more elegant, of course. | 1 | 17,294 | 0 | 18 | 2008-12-30T19:30:00.000 | python,web-services,rate-limiting | How to limit rate of requests to web services in Python? | 1 | 3 | 6 | 401,390 | 0 |
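A minimal sketch of that scheme as a callable wrapper; the comments mark where the (!1) and (!2) steps above land, and the one-second delay is a placeholder:

import time

class RateLimited(object):
    # Block so that successive calls are at least `delay` seconds apart
    def __init__(self, func, delay=1.0):
        self.func, self.delay, self.last_call = func, delay, 0.0

    def __call__(self, *args, **kwargs):
        wait = self.delay - (time.time() - self.last_call)
        if wait > 0:
            time.sleep(wait)                  # (!2) top of the wrapper
        try:
            return self.func(*args, **kwargs)
        finally:
            self.last_call = time.time()      # (!1) after the response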
0 | 0 | I would like to write a program that will find bus stop times and update my personal webpage accordingly.
If I were to do this manually I would
Visit www.calgarytransit.com
Enter a stop number, e.g. 9510
Click the button "next bus"
The results may look like the following:
10:16p Route 154
10:46p Route 154
11:32p Route 154
Once I've grabbed the time and routes then I will update my webpage accordingly.
I have no idea where to start. I know diddly squat about web programming but can write some C and Python. What are some topics/libraries I could look into? | false | 419,260 | 0 | 0 | 0 | 0 | As long as the layout of the web page you're trying to 'scrape' doesn't regularly change, you should be able to parse the HTML with any modern-day programming language. | 0 | 24,102 | 0 | 3 | 2009-01-07T05:14:00.000 | python,c,text,webpage | Grabbing text from a webpage | 1 | 2 | 8 | 419,273 | 0
0 | 0 | I would like to write a program that will find bus stop times and update my personal webpage accordingly.
If I were to do this manually I would
Visit www.calgarytransit.com
Enter a stop number, e.g. 9510
Click the button "next bus"
The results may look like the following:
10:16p Route 154
10:46p Route 154
11:32p Route 154
Once I've grabbed the time and routes then I will update my webpage accordingly.
I have no idea where to start. I know diddly squat about web programming but can write some C and Python. What are some topics/libraries I could look into? | false | 419,260 | 0.024995 | 0 | 0 | 1 | That site doesn't offer an API for you to get the data that you need. In that case you'll need to parse the actual HTML page returned by, for example, a cURL request. | 0 | 24,102 | 0 | 3 | 2009-01-07T05:14:00.000 | python,c,text,webpage | Grabbing text from a webpage | 1 | 2 | 8 | 419,271 | 0
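A hedged starting point in Python, using only urllib and a regular expression; the URL, form field, and pattern are guesses at the site's layout, not verified values:

import re
import urllib

# Guessing at the form parameters; inspect the real page to confirm them
page = urllib.urlopen("http://www.calgarytransit.com/nextbus",
                      urllib.urlencode({"stop": "9510"})).read()

# Lines look like "10:16p Route 154"
for when, route in re.findall(r"(\d{1,2}:\d{2}[ap])\s+Route\s+(\d+)", page):
    print when, route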
1 | 0 | How do I get the text of the selected item from a drop-down box element in HTML forms? (using Python)
How can I store the value in a variable when I select one item from the drop-down box with the mouse? (i.e. without using a submit button)
This is for an application which I am doing in App Engine, which only supports Python. | false | 419,908 | 0 | 0 | 0 | 0 | The problem with using onchange is that not all users are using a mouse. If you have a combo-box and change the value with the keyboard, you'd never be able to get past the first value without the form submitting.
~Cyrix | 0 | 13,168 | 0 | 1 | 2009-01-07T11:05:00.000 | javascript,python,html,google-app-engine,drop-down-menu | Getting selected value from drop down box in a html form without submit | 1 | 1 | 4 | 7,284,763 | 0 |
0 | 0 | I have a set of Word documents that contain a lot of non-embedded images. The URLs that the images point to no longer exist. I would like to programmatically change the domain name of the URLs to something else. How can I go about doing this in Java or Python? | false | 428,308 | 0 | 0 | 0 | 0 | You want to do this in Java or Python. Try OpenOffice.
In OpenOffice, you can insert Java or Python code as a "macro".
I'm sure there is a way to change the image URLs from there. | 0 | 1,351 | 0 | 1 | 2009-01-09T14:46:00.000 | java,python,image,ms-word | How to programmatically change urls of images in word documents | 1 | 1 | 4 | 430,717 | 0
0 | 0 | I have a python application that relies on a file that is downloaded by a client from a website.
The website is not under my control and has no API to check for a "latest version" of the file.
Is there a simple way to access the file (in Python) via a URL and check its date (or size) without having to download it to the client's machine each time?
Update: Thanks to those who mentioned the "last-modified" date. This is the correct parameter to look at.
I guess I didn't state the question well enough. How do I do this from a Python script? I want the application to check the file and then download it if it is newer (last-modified date > current file date). | false | 428,895 | 1 | 0 | 0 | 6 | Take into account that 'last-modified' may not be present:
>>> from urllib import urlopen
>>> f=urlopen('http://google.com/')
>>> i=f.info()
>>> i.keys()
['set-cookie', 'expires', 'server', 'connection', 'cache-control', 'date', 'content-type']
>>> i.getdate('date')
(2009, 1, 10, 16, 17, 8, 0, 1, 0)
>>> i.getheader('date')
'Sat, 10 Jan 2009 16:17:08 GMT'
>>> i.getdate('last-modified')
>>>
Now you can compare:
# Use getdate() for both headers so we always compare date tuples, never strings
if (i.getdate('last-modified') or i.getdate('date')) > current_file_date:
    open('file', 'wb').write(f.read()) | 0 | 12,198 | 0 | 3 | 2009-01-09T17:11:00.000 | python,http | How can I get the created date of a file on the web (with Python)? | 1 | 2 | 6 | 431,411 | 0
0 | 0 | I have a python application that relies on a file that is downloaded by a client from a website.
The website is not under my control and has no API to check for a "latest version" of the file.
Is there a simple way to access the file (in Python) via a URL and check its date (or size) without having to download it to the client's machine each time?
Update: Thanks to those who mentioned the "last-modified" date. This is the correct parameter to look at.
I guess I didn't state the question well enough. How do I do this from a Python script? I want the application to check the file and then download it if it is newer (last-modified date > current file date). | false | 428,895 | 1 | 0 | 0 | 7 | There is no reliable way to do this. For all you know, the file can be created on the fly by the web server and the question "how old is this file" is not meaningful. The web server may choose to provide a Last-Modified header, but it could tell you whatever it wants.
0 | 0 | I need to poll a web service, in this case twitter's API, and I'm wondering what the conventional wisdom is on this topic. I'm not sure whether this is important, but I've always found feedback useful in the past.
A couple scenarios I've come up with:
The querying process starts every X seconds, eg a cron job runs a python script
A process continually loops and queries at each iteration, eg ... well, here is where I enter unfamiliar territory. Do I just run a python script that doesn't end?
Thanks for your advice.
ps - regarding the particulars of twitter: I know that it sends emails for following and direct messages, but sometimes one might want the flexibility of parsing @replies. In those cases, I believe polling is as good as it gets.
pps - twitter limits bots to 100 requests per 60 minutes. I don't know if this also limits web scraping or rss feed reading. Anyone know how easy or hard it is to be whitelisted?
Thanks again. | false | 430,226 | 0 | 1 | 0 | 0 | You should have a page that is like a ping or heartbeat page. Then you have another process that "tickles" or hits that page; usually you can do this in the control panel of your web host, or use cron if you have local access. This script can keep statistics of how often it has polled in a database or some other data store, and then you poll the service as often as you really need to, of course limiting it to whatever the provider's limit is. You definitely don't want to use (and certainly don't want to rely on) a Python script that "doesn't end." :) | 0 | 5,380 | 0 | 5 | 2009-01-10T00:10:00.000 | python,twitter,polling | Best way to poll a web service (eg, for a twitter app) | 1 | 1 | 2 | 430,245 | 0
0 | 0 | I was just wondering what network libraries there are out there for Python for building a TCP/IP server. I know that Twisted might jump to mind but the documentation seems scarce, sloppy, and scattered to me.
Also, would using Twisted even have a benefit over rolling my own server with select.select()? | false | 441,849 | 0.039979 | 0 | 0 | 1 | Just adding an answer to reiterate other posters: it'll be worth it to use Twisted. There's no reason to write yet another TCP server that'll end up not working as well as one built with Twisted would. The only reason would be if writing your own is much faster, developer-wise, but if you just bite the bullet and learn Twisted now, your future projects will benefit greatly. And, as others have said, you'll be able to do much more complex stuff if you use Twisted from the start. | 0 | 12,949 | 1 | 12 | 2009-01-14T03:51:00.000 | python,networking,twisted | Good Python networking libraries for building a TCP server? | 1 | 2 | 5 | 442,079 | 0
0 | 0 | I was just wondering what network libraries there are out there for Python for building a TCP/IP server. I know that Twisted might jump to mind but the documentation seems scarce, sloppy, and scattered to me.
Also, would using Twisted even have a benefit over rolling my own server with select.select()? | false | 441,849 | 1 | 0 | 0 | 6 | The standard library includes SocketServer and related modules which might be sufficient for your needs. This is a good middle ground between a complex framework like Twisted, and rolling your own select() loop. | 0 | 12,949 | 1 | 12 | 2009-01-14T03:51:00.000 | python,networking,twisted | Good Python networking libraries for building a TCP server? | 1 | 2 | 5 | 441,863 | 0 |
1 | 0 | I'm a hobbyist (and fairly new) programmer who has written several useful (to me) scripts in python to handle various system automation tasks that involve copying, renaming, and downloading files amongst other sundry activities.
I'd like to create a web page served from one of my systems that would merely present a few buttons which would allow me to initiate these scripts remotely.
The problem is that I don't know where to start investigating how to do this. Let's say I have a script called:
file_arranger.py
What do I need to do to have a webpage execute that script? This isn't meant for public consumption, so anything lightweight would be great. For bonus points, what do I need to look into to provide the web user with the output from such scripts?
edit: The first answer made me realize I forgot to include that this is a Win2k3 system. | false | 448,837 | 0 | 0 | 0 | 0 | When setting this up, please be careful to restrict access to the scripts that take some action on your web server. It is not sufficient to place them in a directory where you just don't publish the URL, because sooner or later somebody will find them.
At the very least, put these scripts in a location that is password protected. You don't want just anybody out there on the internet being able to run your scripts. | 0 | 38,798 | 0 | 25 | 2009-01-15T22:58:00.000 | python,windows,web-services,cgi | How do I create a webpage with buttons that invoke various Python scripts on the system serving the webpage? | 1 | 2 | 9 | 449,199 | 0 |
1 | 0 | I'm a hobbyist (and fairly new) programmer who has written several useful (to me) scripts in python to handle various system automation tasks that involve copying, renaming, and downloading files amongst other sundry activities.
I'd like to create a web page served from one of my systems that would merely present a few buttons which would allow me to initiate these scripts remotely.
The problem is that I don't know where to start investigating how to do this. Let's say I have a script called:
file_arranger.py
What do I need to do to have a webpage execute that script? This isn't meant for public consumption, so anything lightweight would be great. For bonus points, what do I need to look into to provide the web user with the output from such scripts?
edit: The first answer made me realize I forgot to include that this is a Win2k3 system. | false | 448,837 | 0.022219 | 0 | 0 | 1 | A simple cgi script (or set of scripts) is all you need to get started. The other answers have covered how to do this so I won't repeat it; instead, I will stress that using plain text will get you a long way. Just output the header (print("Content-type: text/plain\n") plus print adds its own newline to give you the needed blank line) and then run your normal program.
This way, any normal output from your script gets sent to the browser and you don't have to worry about HTML, escaping, frameworks, anything. "Do the simplest thing that could possibly work."
This is especially appropriate for non-interactive private administrative tasks like you describe, and lets you use identical programs from a shell with a minimum of fuss. Your driver, the page with the buttons, can be a static HTML file with single-button forms. Or even a list of links.
To advance from there, look at the logging module (for example, sending INFO messages to the browser but not the command line, or easily categorizing messages by using different loggers, by configuring your handlers), and then start to consider template engines and frameworks.
Don't output your own HTML; skip straight to using one of the many existing libraries. It'll save a ton of headache, even counting the bit of extra time spent learning the library. Or at the very least encapsulate your output by effectively writing your own mini-engine. | 0 | 38,798 | 0 | 25 | 2009-01-15T22:58:00.000 | python,windows,web-services,cgi | How do I create a webpage with buttons that invoke various Python scripts on the system serving the webpage? | 1 | 2 | 9 | 449,062 | 0
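A minimal sketch of one such plain-text CGI script; the interpreter line and script path are placeholders for your own setup:

#!/usr/bin/env python
import subprocess

print "Content-type: text/plain\n"  # header plus the required blank line

# Run the existing script and send whatever it prints to the browser
output = subprocess.Popen(["python", "file_arranger.py"],  # placeholder path
                          stdout=subprocess.PIPE).communicate()[0]
print output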
0 | 0 | I'm new to web services and as an introduction I'm playing around with the Twitter API using the Twisted framework in python. I've read up on the different formats they offer, but it's still not clear to me which one I should use in my fairly simple project.
Specifically the practical difference between using JSON or XML is something I'd like guidance on. All I'm doing is requesting the public timeline and caching it locally.
Thanks. | false | 453,158 | 0.26052 | 1 | 0 | 4 | RSS and Atom are XML formats.
JSON is a string which can be evaluated as Javascript code. | 0 | 8,204 | 0 | 6 | 2009-01-17T11:19:00.000 | python,xml,json,twitter,twisted | What is the practical difference between xml, json, rss and atom when interfacing with Twitter? | 1 | 2 | 3 | 453,160 | 0 |
0 | 0 | I'm new to web services and as an introduction I'm playing around with the Twitter API using the Twisted framework in python. I've read up on the different formats they offer, but it's still not clear to me which one I should use in my fairly simple project.
Specifically the practical difference between using JSON or XML is something I'd like guidance on. All I'm doing is requesting the public timeline and caching it locally.
Thanks. | false | 453,158 | 0.066568 | 1 | 0 | 1 | I would say the amount of data being sent over the wire is one factor. An XML data stream will be bigger than JSON for the same data. But you can use whatever you know better or have more experience with.
I would recommend JSON, as it's more "pythonic" than XML. | 0 | 8,204 | 0 | 6 | 2009-01-17T11:19:00.000 | python,xml,json,twitter,twisted | What is the practical difference between xml, json, rss and atom when interfacing with Twitter? | 1 | 2 | 3 | 453,164 | 0 |
0 | 0 | I have a "manager" process on a node, and several worker processes. The manager is the actual server who holds all of the connections to the clients. The manager accepts all incoming packets and puts them into a queue, and then the worker processes pull the packets out of the queue, process them, and generate a result. They send the result back to the manager (by putting them into another queue which is read by the manager), but here is where I get stuck: how do I send the result to a specific socket? When dealing with the processing of the packets on a single process, it's easy, because when you receive a packet you can reply to it by just grabbing the "transport" object in-context. But how would I do this with the method I'm using? | true | 460,068 | 1.2 | 0 | 0 | 3 | It sounds like you might need to keep a reference to the transport (or protocol) along with the bytes the just came in on that protocol in your 'event' object. That way responses that came in on a connection go out on the same connection.
If things don't need to be processed serially perhaps you should think about setting up functors that can handle the data in parallel to remove the need for queueing. Just keep in mind that you will need to protect critical sections of your code.
Edit:
Judging from your other question about evaluating your server design it would seem that processing in parallel may not be possible for your situation, so my first suggestion stands. | 0 | 829 | 1 | 2 | 2009-01-20T03:43:00.000 | python,sockets,twisted,multiprocess | Python/Twisted - Sending to a specific socket object? | 1 | 1 | 1 | 460,245 | 0 |
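A hedged sketch of keeping the transport with each queued packet in a Twisted protocol, so a worker's result can be written back to the right connection:

from twisted.internet.protocol import Protocol

class QueueingProtocol(Protocol):
    def __init__(self, queue):
        self.queue = queue

    def dataReceived(self, data):
        # Pair the payload with its transport; the manager later calls
        # transport.write(result) to answer on the same connection.
        self.queue.put((self.transport, data))

If the workers run in other threads, route the write back through the reactor, e.g. reactor.callFromThread(transport.write, result), rather than calling it directly.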
0 | 0 | This seems like such a trivial problem, but I can't seem to pin how I want to do it. Basically, I want to be able to produce a figure from a socket server that at any time can give the number of packets received in the last minute. How would I do that?
I was thinking of maybe summing a dictionary that uses the current second as a key, and when receiving a packet it increments that value by one, as well as setting the second+1 key above it to 0, but this just seems sloppy. Any ideas? | true | 464,314 | 1.2 | 0 | 0 | 1 | When you say the last minute, do you mean the exact last 60 seconds or the last full minute from x:00 to x:59? The latter will be easier to implement and would probably give accurate results. You have one prev variable holding the value of the hits for the previous minute. Then you have a current value that increments every time there is a new hit. You return the value of prev to the users. At the change of the minute you swap prev with current and reset current.
If you want higher analysis you could split the minute in 2 to 6 slices. You need a variable or list entry for every slice. Let's say you have 6 slices of 10 seconds. You also have an index variable pointing to the current slice (0..5). For every hit you increment a temp variable. When the slice is over, you replace the value of the indexed variable with the value of temp, reset temp and move the index forward. You return the sum of the slice variables to the users. | 0 | 303 | 0 | 2 | 2009-01-21T07:14:00.000 | python | Python - Hits per minute implementation? | 1 | 3 | 3 | 464,347 | 0 |
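A minimal sketch of the simpler prev/current variant from the answer above (whole-minute resolution):

import time

class HitCounter(object):
    def __init__(self):
        self.prev, self.current = 0, 0
        self.minute = int(time.time()) // 60

    def _roll(self):
        now = int(time.time()) // 60
        if now != self.minute:  # the minute changed: swap and reset
            self.prev = self.current if now == self.minute + 1 else 0
            self.current = 0
            self.minute = now

    def hit(self):
        self._roll()
        self.current += 1

    def last_minute(self):
        self._roll()
        return self.prev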
0 | 0 | This seems like such a trivial problem, but I can't seem to pin how I want to do it. Basically, I want to be able to produce a figure from a socket server that at any time can give the number of packets received in the last minute. How would I do that?
I was thinking of maybe summing a dictionary that uses the current second as a key, and when receiving a packet it increments that value by one, as well as setting the second+1 key above it to 0, but this just seems sloppy. Any ideas? | false | 464,314 | 0.066568 | 0 | 0 | 1 | For what it's worth, your implementation above won't work if you don't receive a packet every second, as the next second entry won't necessarily be reset to 0.
Either way, AFAIK the "correct" way to do this, à la log analysis, is to keep a limited record of all the queries you receive. So just chuck the query, time received, etc. into a database, and then simple database queries will give you the usage over a minute, or any minute in the past. Not sure whether this is too heavyweight for you, though.
0 | 0 | This seems like such a trivial problem, but I can't seem to pin how I want to do it. Basically, I want to be able to produce a figure from a socket server that at any time can give the number of packets received in the last minute. How would I do that?
I was thinking of maybe summing a dictionary that uses the current second as a key, and when receiving a packet it increments that value by one, as well as setting the second+1 key above it to 0, but this just seems sloppy. Any ideas? | false | 464,314 | 0.197375 | 0 | 0 | 3 | A common pattern for solving this in other languages is to let the thing being measured simply increment an integer. Then you leave it to the listening client to determine intervals and frequencies.
So you basically do not let the socket server know about stuff like "minutes", because that's a feature the observer calculates. Then you can also support multiple listeners with different interval resolution.
I suppose you want some kind of ring-buffer structure to do the rolling logging. | 0 | 303 | 0 | 2 | 2009-01-21T07:14:00.000 | python | Python - Hits per minute implementation? | 1 | 3 | 3 | 464,322 | 0 |
1 | 0 | While learning some basic programming with Python, I found web.py. I got stuck with a stupid problem:
I wrote a simple console app with a main loop that processes items from a queue in separate threads. My goal is to use web.py to add items to my queue and report the status of the queue via web request. I got this running as a module but can't integrate it into my main app.
My problem is that when I start the HTTP server with app.run() it blocks my main loop.
I also tried to start it with thread.start_new_thread, but it still blocks.
Is there an easy way to run web.py's integrated HTTP server in the background within my app?
In the likely event that I am a victim of a fundamental misunderstanding, any attempt to clarify my error in reasoning would help ;-) (please bear with me, I am a beginner :-) | false | 500,935 | 0.049958 | 0 | 0 | 1 | Wouldn't it be simpler to re-write your main-loop code to be a function that you call over and over again, and then call that from the function that you pass to runsimple...
It's not guaranteed to fully satisfy your requirements, but if you're in a rush, it might be the easiest route. | 0 | 7,054 | 0 | 17 | 2009-02-01T14:47:00.000 | python,multithreading,web-services,web.py | Using web.py as non blocking http-server | 1 | 1 | 4 | 501,570 | 0
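For the original background-server route, a hedged sketch that pushes web.py's server into a daemon thread via web.httpserver.runsimple, leaving the main loop in control; the URL map and port are placeholders:

import time
import threading
import web

urls = ("/status", "Status")

class Status:
    def GET(self):
        return "queue ok"  # report on the queue here

app = web.application(urls, globals())
server = threading.Thread(target=web.httpserver.runsimple,
                          args=(app.wsgifunc(), ("0.0.0.0", 8080)))
server.setDaemon(True)     # don't keep the process alive on exit
server.start()

while True:
    time.sleep(1)          # stand-in for the real queue-processing loop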
0 | 0 | I've written several scripts that make use of the gdata API, and they all (obviously) have my API key and client ID in plain text. How am I supposed to distribute these? | false | 513,806 | 0 | 0 | 0 | 0 | If we assume that you want clients to use their own keys, I'd recommend putting them in a configuration file which defaults to an (invalid) sentinel value.
If, on the other hand, you want the script to use your key, the best you can do is obfuscate it. After all, if your program can read it, then an attacker (with a debugger) can read it too. | 1 | 306 | 0 | 3 | 2009-02-04T23:06:00.000 | python,gdata-api | How to distribute script using gdata-python-client? | 1 | 1 | 2 | 513,838 | 0
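A minimal sketch of that configuration-file approach with ConfigParser; the section, option, and file names are placeholders:

import sys
import ConfigParser

SENTINEL = "PUT-YOUR-KEY-HERE"

config = ConfigParser.ConfigParser()
config.read("settings.ini")  # placeholder file name
try:
    key = config.get("gdata", "api_key")
except (ConfigParser.NoSectionError, ConfigParser.NoOptionError):
    key = SENTINEL
if key == SENTINEL:
    sys.exit("Please set your own API key in settings.ini")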
0 | 0 | I'd like to write a python library to wrap a REST-style API offered by a particular Web service. Does anyone know of any good learning resources for such work, preferably aimed at intermediate Python programmers?
I'd like a good article on the subject, but I'd settle for nice, clear code examples.
CLARIFICATION: What I'm looking to do is write a Python client to interact with a Web service -- something to construct HTTP requests and parse XML/JSON responses, all wrapped up in Python objects. | false | 517,237 | 0.07983 | 0 | 0 | 2 | My favorite combination is httplib2 (or pycurl for performance) and simplejson. As REST is more a design style than a real protocol, there is not really a reusable thing (that I know of). On Ruby you have something like ActiveResource. And to be honest, even that would just expose some tables as a webservice, whereas the power of xml/json is that they are more like "views" that can contain multiple objects optimized for your application. I hope this makes sense :-) | 1 | 15,114 | 0 | 21 | 2009-02-05T18:29:00.000 | python,web-services,api,rest | HOWTO: Write Python API wrapper? | 1 | 2 | 5 | 518,161 | 0
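A hedged skeleton of such a wrapper built on that combination; the base URL is a placeholder, not a real service:

import httplib2
import simplejson

class Client(object):
    BASE = "http://api.example.com"  # placeholder endpoint

    def __init__(self):
        self.http = httplib2.Http()

    def get(self, path):
        resp, content = self.http.request(self.BASE + path, "GET")
        if resp.status != 200:
            raise IOError("HTTP %d for %s" % (resp.status, path))
        return simplejson.loads(content)  # parsed JSON wrapped for the caller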