instruction (stringlengths 12-148) | input (stringlengths 16-30.2k) | output (stringlengths 11-27.3k)
---|---|---
What can you use Python generator functions for?
|
I'm starting to learn Python and I've come across generator functions, those that have a yield statement in them. I want to know what types of problems these functions are really good at solving.
|
Generators give you lazy evaluation. You use them by iterating over them, either explicitly with 'for' or implicitly by passing it to any function or construct that iterates. You can think of generators as returning multiple items, as if they return a list, but instead of returning them all at once they return them one-by-one, and the generator function is paused until the next item is requested.
Generators are good for calculating large sets of results (in particular calculations involving loops themselves) where you don't know if you are going to need all results, or where you don't want to allocate the memory for all results at the same time. Or for situations where the generator uses another generator, or consumes some other resource, and it's more convenient if that happened as late as possible.
Another use for generators (that is really the same) is to replace callbacks with iteration. In some situations you want a function to do a lot of work and occasionally report back to the caller. Traditionally you'd use a callback function for this. You pass this callback to the work-function and it would periodically call this callback. The generator approach is that the work-function (now a generator) knows nothing about the callback, and merely yields whenever it wants to report something. The caller, instead of writing a separate callback and passing that to the work-function, does all the reporting work in a little 'for' loop around the generator.
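Here is a tiny sketch of that contrast (the function names are made up for illustration):
def do_work(report):
    for i in range(3):
        # ... a chunk of work happens here ...
        report('finished chunk %d' % i)

def do_work_gen():
    for i in range(3):
        # ... the same chunk of work ...
        yield 'finished chunk %d' % i

def show(msg):
    print msg

do_work(show)              # callback style: the work-function calls us

for msg in do_work_gen():  # generator style: we drive the work-function
    print msg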
For example, say you wrote a 'filesystem search' program. You could perform the search in its entirety, collect the results and then display them one at a time. All of the results would have to be collected before you showed the first, and all of the results would be in memory at the same time. Or you could display the results while you find them, which would be more memory efficient and much friendlier towards the user. The latter could be done by passing the result-printing function to the filesystem-search function, or it could be done by just making the search function a generator and iterating over the result.
If you want to see an example of the latter two approaches, see os.path.walk() (the old filesystem-walking function with callback) and os.walk() (the new filesystem-walking generator.) Of course, if you really wanted to collect all results in a list, the generator approach is trivial to convert to the big-list approach:
big_list = list(the_generator)
|
How do I merge a 2D array in Python into one string with List Comprehension?
|
List comprehensions seem to be like the opaque block of granite that regular expressions are for me. I need pointers.
Say, I have a 2D list:
li = [[0,1,2],[3,4,5],[6,7,8]]
I would like to merge this either into one long list
li2 = [0,1,2,3,4,5,6,7,8]
or into a string with separators:
s = "0,1,2,3,4,5,6,7,8"
Really, I'd like to know how to do both.
|
Like so:
[ item for innerlist in outerlist for item in innerlist ]
Turning that directly into a string with separators:
','.join(str(item) for innerlist in outerlist for item in innerlist)
Yes, the order of 'for innerlist in outerlist' and 'for item in innerlist' is correct. Even though the "body" of the loop is at the start of the listcomp, the order of nested loops (and 'if' clauses) is still the same as when you would write the loop out:
for innerlist in outerlist:
    for item in innerlist:
        ...
|
How to generate all permutations of a list in Python
|
How do you generate all the permutations of a list in Python, independently of the type of elements in that list?
For example:
permutations([])
[]
permutations([1])
[1]
permutations([1, 2])
[1, 2]
[2, 1]
permutations([1, 2, 3])
[1, 2, 3]
[1, 3, 2]
[2, 1, 3]
[2, 3, 1]
[3, 1, 2]
[3, 2, 1]
EDIT:
Eliben pointed to a solution that's similar to mine although simpler, so I'm choosing it as the accepted answer, although Python 2.6+ has a builtin solution in the itertools module:
import itertools
itertools.permutations([1, 2, 3])
|
And in Python 2.6 onwards:
import itertools
itertools.permutations([1,2,3])
(Returned as an iterator. Use list(itertools.permutations(l)) to return it as a list.)
|
What is "thread local storage" in Python, and why do I need it?
|
In Python specifically, how do variables get shared between threads?
Although I have used threading.Thread before I never really understood or saw examples of how variables got shared. Are they shared between the main thread and the children or only among the children? When would I need to use thread local storage to avoid this sharing?
I have seen many warnings about synchronizing access to shared data among threads by using locks but I have yet to see a really good example of the problem.
Thanks in advance!
|
In Python, everything is shared, except for function-local variables (because each function call gets its own set of locals, and threads are always separate function calls.) And even then, only the variables themselves (the names that refer to objects) are local to the function; objects themselves are always global, and anything can refer to them.
The Thread object for a particular thread is not a special object in this regard. If you store the Thread object somewhere all threads can access (like a global variable), then all threads can access that one Thread object. If you want to atomically modify anything that you didn't just create in this very same thread (and didn't store anywhere another thread can get at it), you have to protect it with a lock. And all threads must of course share this very same lock, or it wouldn't be very effective.
If you want actual thread-local storage, that's where threading.local comes in. Attributes of threading.local are not shared between threads; each thread sees only the attributes it itself placed in there. If you're curious about its implementation, the source is in _threading_local.py in the standard library.
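A minimal sketch of threading.local in action (the attribute name x is arbitrary):
import threading

mydata = threading.local()

def worker(value):
    mydata.x = value  # each thread sets and sees only its own 'x'
    print '%s sees x = %r' % (threading.current_thread().name, mydata.x)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()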
|
Are locks unnecessary in multi-threaded Python code because of the GIL?
|
If you are relying on an implementation of Python that has a Global Interpreter Lock (i.e. CPython) and writing multithreaded code, do you really need locks at all?
If the GIL doesn't allow multiple instructions to be executed in parallel, wouldn't shared data be unnecessary to protect?
Sorry if this is a dumb question, but it is something I have always wondered about Python on multi-processor/core machines.
The same thing would apply to any other language implementation that has a GIL.
|
You will still need locks if you share state between threads. The GIL only protects the interpreter internally. You can still have inconsistent updates in your own code.
For example:
#!/usr/bin/env python
import threading

shared_balance = 0

class Deposit(threading.Thread):
    def run(self):
        global shared_balance
        for _ in xrange(1000000):
            balance = shared_balance
            balance += 100
            shared_balance = balance

class Withdraw(threading.Thread):
    def run(self):
        global shared_balance
        for _ in xrange(1000000):
            balance = shared_balance
            balance -= 100
            shared_balance = balance

threads = [Deposit(), Withdraw()]
for thread in threads:
    thread.start()
for thread in threads:
    thread.join()
print shared_balance
Here, your code can be interrupted between reading the shared state (balance = shared_balance) and writing the changed result back (shared_balance = balance), causing a lost update. The result is a random value for the shared state.
To make the updates consistent, run methods would need to lock the shared state around the read-modify-write sections (inside the loops) or have some way to detect when the shared state had changed since it was read.
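For instance, a hedged sketch of the locked variant (only the changed pieces are shown; the lock name is my own):
import threading

balance_lock = threading.Lock()

class Deposit(threading.Thread):
    def run(self):
        global shared_balance
        for _ in xrange(1000000):
            with balance_lock:  # serialize the whole read-modify-write
                shared_balance += 100

class Withdraw(threading.Thread):
    def run(self):
        global shared_balance
        for _ in xrange(1000000):
            with balance_lock:
                shared_balance -= 100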
|
Making a beta code for a public django site
|
I'm about to put a beta version of the site I'm working on up on the web. It needs to have a beta code to restrict access. The site is written in django.
I don't want to change the fundamental Auth system to accommodate a beta code, and I don't care particularly that the security of the beta code is iron-clad, just that it's a significant stumbling block.
How should I do this? It's a fairly large project, so adding code to every view is far from ideal.
That solution works well. The middleware class I ended up with is this:
from django.http import HttpResponseRedirect

class BetaMiddleware(object):
    """
    Require beta code session key in order to view any page.
    """
    def process_request(self, request):
        if request.path != '/beta/' and not request.session.get('in_beta'):
            return HttpResponseRedirect('%s?next=%s' % ('/beta/', request.path))
|
Start with this Django snippet, but modify it to check request.session['has_beta_access']. If they don't have it, then have it return a redirect to an "enter beta code" page that, when posted to with the right code, sets that session variable to True.
Making it a public beta then just consists of removing that middleware from your MIDDLEWARE_CLASSES setting.
|
How to bundle a Python application including dependencies?
|
I need to package my python application, its dependencies and python into a single MSI installer. Ideally, the end result should be:
Python is installed in the standard location
the package and its dependencies are installed in a separate directory (possibly site-packages)
the installation directory should contain Python uncompressed; a standalone executable is not required
|
Kind of a dup of this question about how to make a Python script into an executable.
It boils down to:
py2exe on windows, Freeze on Linux, and
py2app on Mac.
|
Unit Testing File Modifications
|
A common task in programs I've been working on lately is modifying a text file in some way. (Hey, I'm on Linux. Everything's a file. And I do large-scale system admin.)
But the file the code modifies may not exist on my desktop box. And I probably don't want to modify it if it IS on my desktop.
I've read about unit testing in Dive Into Python, and it's pretty clear what I want to do when testing an app that converts decimal to Roman Numerals (the example in Dive Into Python). The testing is nicely self-contained. You don't need to verify that the program PRINTS the right thing, you just need to verify that the functions are returning the right output to a given input.
In my case, however, we need to test that the program is modifying its environment correctly. Here's what I've come up with:
1) Create the "original" file in a standard location, perhaps /tmp.
2) Run the function that modifies the file, passing it the path to the file in /tmp.
3) Verify that the file in /tmp was changed correctly; pass/fail unit test accordingly.
This seems kludgy to me. (Gets even kludgier if you want to verify that backup copies of the file are created properly, etc.) Has anyone come up with a better way?
|
You're talking about testing too much at once. If you start trying to attack a testing problem by saying "Let's verify that it modifies its environment correctly", you're doomed to failure. Environments have dozens, maybe even millions of potential variations.
Instead, look at the pieces ("units") of your program. For example, are you going to have a function that determines where the files are that have to be written? What are the inputs to that function? Perhaps an environment variable, perhaps some values read from a config file? Test that function, and don't actually do anything that modifies the filesystem. Don't pass it "realistic" values, pass it values that are easy to verify against. Make a temporary directory, populate it with files in your test's setUp method.
Then test the code that writes the files. Just make sure it's writing the right file contents. Don't even write to a real filesystem! You don't need to make "fake" file objects for this, just use Python's handy StringIO module; it's a "real" implementation of the "file" interface, it's just not the one that your program is actually going to be writing to.
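For instance, a minimal sketch of that idea (write_config is a made-up unit under test):
from StringIO import StringIO  # io.StringIO on Python 3

def write_config(out, settings):
    # the unit under test: writes key=value lines to any file-like object
    for key, value in sorted(settings.items()):
        out.write('%s=%s\n' % (key, value))

fake = StringIO()
write_config(fake, {'retries': 3, 'host': 'example.com'})
assert fake.getvalue() == 'host=example.com\nretries=3\n'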
Ultimately you will have to test the final, everything-is-actually-hooked-up-for-real top-level function that passes the real environment variable and the real config file and puts everything together. But don't worry about that to get started. For one thing, you will start picking up tricks as you write individual tests for smaller functions and creating test mocks, fakes, and stubs will become second nature to you. For another: even if you can't quite figure out how to test that one function call, you will have a very high level of confidence that everything which it is calling works perfectly. Also, you'll notice that test-driven development forces you to make your APIs clearer and more flexible. For example: it's much easier to test something that calls an open() method on an object that came from somewhere abstract, than to test something that calls os.open on a string that you pass it. The open method is flexible; it can be faked, it can be implemented differently, but a string is a string and os.open doesn't give you any leeway to catch what methods are called on it.
You can also build testing tools to make repetitive tasks easy. For example, twisted provides facilities for creating temporary files for testing built right into its testing tool. It's not uncommon for testing tools or larger projects with their own test libraries to have functionality like this.
|
How do you send a HEAD HTTP request in Python 2?
|
What I'm trying to do here is get the headers of a given URL so I can determine the MIME type. I want to be able to see if http://somedomain/foo/ will return an HTML document or a JPEG image for example. Thus, I need to figure out how to send a HEAD request so that I can read the MIME type without having to download the content. Does anyone know of an easy way of doing this?
|
urllib2 can be used to perform a HEAD request. This is a little nicer than using httplib since urllib2 parses the URL for you instead of requiring you to split the URL into host name and path.
>>> import urllib2
>>> class HeadRequest(urllib2.Request):
... def get_method(self):
... return "HEAD"
...
>>> response = urllib2.urlopen(HeadRequest("http://google.com/index.html"))
Headers are available via response.info() as before. Interestingly, you can find the URL that you were redirected to:
>>> print response.geturl()
http://www.google.com.au/index.html
|
Disable output buffering
|
Is output buffering enabled by default in Python's interpreter for sys.stdout?
If the answer is positive, what are all the ways to disable it?
Suggestions so far:
Use the -u command line switch
Wrap sys.stdout in an object that flushes after every write
Set PYTHONUNBUFFERED env var
sys.stdout = os.fdopen(sys.stdout.fileno(), 'w', 0)
Is there any other way to set some global flag in sys/sys.stdout programmatically during execution?
|
From Magnus Lycka's answer on a mailing list:

You can skip buffering for a whole python process using "python -u" (or #!/usr/bin/env python -u etc.) or by setting the environment variable PYTHONUNBUFFERED.

You could also replace sys.stdout with some other stream-like wrapper which does a flush after every call.
class Unbuffered(object):
    def __init__(self, stream):
        self.stream = stream
    def write(self, data):
        self.stream.write(data)
        self.stream.flush()
    def __getattr__(self, attr):
        return getattr(self.stream, attr)

import sys
sys.stdout = Unbuffered(sys.stdout)
print 'Hello'
|
Union and Intersect in Django
|
class Tag(models.Model):
    name = models.CharField(maxlength=100)

class Blog(models.Model):
    name = models.CharField(maxlength=100)
    tags = models.ManyToManyField(Tag)
Simple models just to ask my question.
I wonder how I can query blogs using tags, in a few different ways.
Blog entries that are tagged with "tag1" or "tag2":
Blog.objects.filter(tags__in=[1, 2]).distinct()
Blog objects that are tagged with "tag1" and "tag2" : ?
Blog objects that are tagged with exactly "tag1" and "tag2" and nothing else : ??
Tag and Blog is just used for an example.
|
You could use Q objects for #1:
# Blogs who have either hockey or django tags.
from django.db.models import Q
Blog.objects.filter(
Q(tags__name__iexact='hockey') | Q(tags__name__iexact='django')
)
Unions and intersections, I believe, are a bit outside the scope of the Django ORM, but it's possible to do these. The following examples are from a Django application called django-tagging that provides the functionality. Line 346 of models.py:
For part two, you're looking for a union of two queries, basically
def get_union_by_model(self, queryset_or_model, tags):
    """
    Create a ``QuerySet`` containing instances of the specified
    model associated with *any* of the given list of tags.
    """
    tags = get_tag_list(tags)
    tag_count = len(tags)
    queryset, model = get_queryset_and_model(queryset_or_model)
    if not tag_count:
        return model._default_manager.none()
    model_table = qn(model._meta.db_table)
    # This query selects the ids of all objects which have any of
    # the given tags.
    query = """
    SELECT %(model_pk)s
    FROM %(model)s, %(tagged_item)s
    WHERE %(tagged_item)s.content_type_id = %(content_type_id)s
      AND %(tagged_item)s.tag_id IN (%(tag_id_placeholders)s)
      AND %(model_pk)s = %(tagged_item)s.object_id
    GROUP BY %(model_pk)s""" % {
        'model_pk': '%s.%s' % (model_table, qn(model._meta.pk.column)),
        'model': model_table,
        'tagged_item': qn(self.model._meta.db_table),
        'content_type_id': ContentType.objects.get_for_model(model).pk,
        'tag_id_placeholders': ','.join(['%s'] * tag_count),
    }
    cursor = connection.cursor()
    cursor.execute(query, [tag.pk for tag in tags])
    object_ids = [row[0] for row in cursor.fetchall()]
    if len(object_ids) > 0:
        return queryset.filter(pk__in=object_ids)
    else:
        return model._default_manager.none()
For part #3 I believe you're looking for an intersection. See line 307 of models.py
def get_intersection_by_model(self, queryset_or_model, tags):
    """
    Create a ``QuerySet`` containing instances of the specified
    model associated with *all* of the given list of tags.
    """
    tags = get_tag_list(tags)
    tag_count = len(tags)
    queryset, model = get_queryset_and_model(queryset_or_model)
    if not tag_count:
        return model._default_manager.none()
    model_table = qn(model._meta.db_table)
    # This query selects the ids of all objects which have all the
    # given tags.
    query = """
    SELECT %(model_pk)s
    FROM %(model)s, %(tagged_item)s
    WHERE %(tagged_item)s.content_type_id = %(content_type_id)s
      AND %(tagged_item)s.tag_id IN (%(tag_id_placeholders)s)
      AND %(model_pk)s = %(tagged_item)s.object_id
    GROUP BY %(model_pk)s
    HAVING COUNT(%(model_pk)s) = %(tag_count)s""" % {
        'model_pk': '%s.%s' % (model_table, qn(model._meta.pk.column)),
        'model': model_table,
        'tagged_item': qn(self.model._meta.db_table),
        'content_type_id': ContentType.objects.get_for_model(model).pk,
        'tag_id_placeholders': ','.join(['%s'] * tag_count),
        'tag_count': tag_count,
    }
    cursor = connection.cursor()
    cursor.execute(query, [tag.pk for tag in tags])
    object_ids = [row[0] for row in cursor.fetchall()]
    if len(object_ids) > 0:
        return queryset.filter(pk__in=object_ids)
    else:
        return model._default_manager.none()
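As an aside (not part of the quoted django-tagging code): for many cases the plain ORM gets you there too, since chaining filter() calls on a many-to-many relation adds a fresh join per call.
# 2) AND -- blogs tagged with both tag1 and tag2:
both = Blog.objects.filter(tags__name='tag1').filter(tags__name='tag2')

# 3) exactly tag1 and tag2 and nothing else -- one simple (unoptimized)
# way is to post-filter the AND result on the total tag count:
exact = [blog for blog in both if blog.tags.count() == 2]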
|
Delete all data for a kind in Google App Engine
|
I would like to wipe out all data for a specific kind in Google App Engine. What is the
best way to do this?
I wrote a delete script (hack), but since there is so much data, it
times out after a few hundred records.
|
I am currently deleting the entities by their key, and it seems to be faster.
import time

from google.appengine.ext import db
from google.appengine.ext import webapp

class bulkdelete(webapp.RequestHandler):
    def get(self):
        self.response.headers['Content-Type'] = 'text/plain'
        try:
            while True:
                q = db.GqlQuery("SELECT __key__ FROM MyModel")
                assert q.count()
                db.delete(q.fetch(200))
                time.sleep(0.5)
        except Exception, e:
            self.response.out.write(repr(e) + '\n')
from the terminal, I run curl -N http://...
|
Python Music Library?
|
I'm looking at writing a little drum machine in Python for fun. I've googled some and found the python pages on music and basic audio as well as a StackOverflow question on generating audio files, but what I'm looking for is a decent library for music creation. Has anyone on here tried to do something like this before? If so, what was your solution? What, either of the ones I've found, or something I haven't found, would be a decent library for audio manipulation?
Minimally, I'd like to be able to do something similar to Audacity's scope within python, but if anyone knows of a library that can do more... I'm all ears.
|
Take a close look at Csound. There are Python bindings that allow you to do pretty flexible digital synthesis. There are some pretty complete packages available, too.
See http://www.csounds.com/node/188 for a package.
See http://www.csounds.com/journal/issue6/pythonOpcodes.html for information on Python scripting within cSounds.
|
Is there something like 'autotest' for Python unittests?
|
Basically, growl notifications (or other callbacks) when tests break or pass. Does anything like this exist?
If not, it should be pretty easy to write. The easiest way would be to:
1) Run python-autotest myfile1.py myfile2.py etc.py
2) Check if files-to-be-monitored have been modified (possibly just if they've been saved).
3) Run any tests in those files.
4) If a test fails, but in the previous run it passed, generate a growl alert. Same for tests that go from failing to passing.
5) Wait, and repeat steps 2-4.
The problem I can see there is if the tests are in a different file. The simple solution would be to run all the tests after each save.. but with slower tests, this might take longer than the time between saves, and/or could use a lot of CPU power etc..
The best way to do it would be to actually see what bits of code have changed, if function abc() has changed, only run tests that interact with this.. While this would be great, I think it'd be extremely complex to implement?
To summarise:
Is there anything like the Ruby tool autotest (part of the ZenTest package), but for Python code?
How do you check which functions have changed between two revisions of a script?
Is it possible to determine which functions a command will call? (Somewhat like a reverse traceback)
|
I found autonose to be pretty unreliable but sniffer seems to work very well.
$ pip install sniffer
$ cd myproject
Then instead of running "nosetests", you run:
$ sniffer
Or instead of nosetests --verbose --with-doctest, you run:
$ sniffer -x--verbose -x--with-doctest
As described in the readme, it's a good idea to install one of the platform-specific filesystem-watching libraries, pyinotify, pywin32 or MacFSEvents (all installable via pip etc)
|
How to get instance variables in Python?
|
Is there a built-in method in Python to get an array of all a class' instance variables? For example, if I have this code:
class hi:
    def __init__(self):
        self.ii = "foo"
        self.kk = "bar"
Is there a way for me to do this:
>>> mystery_method(hi)
["ii", "kk"]
Thanks guys!
Edit: I originally had asked for class variables erroneously. Thanks to all who brought this to my attention!
|
Every object has a __dict__ attribute containing all its instance variables and their values.
Try this:
>>> hi_obj = hi()
>>> hi_obj.__dict__.keys()
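For the exact output in the question, the built-in vars() is equivalent shorthand:
>>> hi_obj = hi()
>>> vars(hi_obj)  # same as hi_obj.__dict__
{'ii': 'foo', 'kk': 'bar'}
>>> sorted(vars(hi_obj))
['ii', 'kk']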
|
Change the width of form elements created with ModelForm in Django
|
How can I change the width of a textarea form element if I used ModelForm to create it?
Here is my product class:
class ProductForm(ModelForm):
    long_desc = forms.CharField(widget=forms.Textarea)
    short_desc = forms.CharField(widget=forms.Textarea)

    class Meta:
        model = Product
And the template code...
{% for f in form %}
{{ f.name }}:{{ f }}
{% endfor %}
f is the actual form element...
|
The easiest way for your use case is to use CSS. It's a language meant for defining presentation. Look at the code generated by the form, take note of the ids of the fields that interest you, and change the appearance of those fields through CSS.
Example for long_desc field in your ProductForm (when your form does not have a custom prefix):
#id_long_desc {
    width: 300px;
    height: 200px;
}
Second approach is to pass the attrs keyword to your widget constructor.
class ProductForm(ModelForm):
    long_desc = forms.CharField(widget=forms.Textarea(attrs={'cols': 10, 'rows': 20}))
    short_desc = forms.CharField(widget=forms.Textarea)

    class Meta:
        model = Product
It's described in Django documentation.
The third approach is to leave the nice declarative interface of newforms for a while and set your widget attributes in a custom constructor.
class ProductForm(ModelForm):
    long_desc = forms.CharField(widget=forms.Textarea)
    short_desc = forms.CharField(widget=forms.Textarea)

    class Meta:
        model = Product

    # Edit by bryan
    def __init__(self, *args, **kwargs):
        super(ProductForm, self).__init__(*args, **kwargs)  # Call to ModelForm constructor
        self.fields['long_desc'].widget.attrs['cols'] = 10
        self.fields['long_desc'].widget.attrs['rows'] = 20
This approach has the following advantages:
You can define widget attributes for fields that are generated automatically from your model without redefining whole fields.
It doesn't depend on the prefix of your form.
|
Is there an easy way to request a URL in python and NOT follow redirects?
|
Looking at the source of urllib2 it looks like the easiest way to do it would be to subclass HTTPRedirectHandler and then use build_opener to override the default HTTPRedirectHandler, but this seems like a lot of (relatively complicated) work to do what seems like it should be pretty simple.
|
Here is the Requests way:
import requests
r = requests.get('http://github.com', allow_redirects=False)
print(r.status_code)
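If you're stuck with urllib2, the subclassing route the question mentions is only a handful of lines; here is one sketch (it hands back the 3xx response instead of following it):
import urllib, urllib2

class NoRedirectHandler(urllib2.HTTPRedirectHandler):
    def http_error_302(self, req, fp, code, msg, headers):
        # wrap the raw 3xx response so urlopen-style code can still use it
        result = urllib.addinfourl(fp, headers, req.get_full_url())
        result.code = code
        return result
    http_error_301 = http_error_303 = http_error_307 = http_error_302

opener = urllib2.build_opener(NoRedirectHandler())
response = opener.open('http://github.com')
print response.code  # e.g. 301; the Location header is in response.info()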
|
Dirty fields in django
|
In my app I need to save the changed values (old and new) when a model gets saved. Any examples or working code?
I need this for premoderation of content. For example, if user changes something in model, then administrator can see all changes in separate table and then decide to apply them or not.
|
I've found Armin's idea very useful. Here is my variation:

class DirtyFieldsMixin(object):
    def __init__(self, *args, **kwargs):
        super(DirtyFieldsMixin, self).__init__(*args, **kwargs)
        self._original_state = self._as_dict()
    def _as_dict(self):
        return dict([(f.name, getattr(self, f.name)) for f in self._meta.local_fields if not f.rel])
    def get_dirty_fields(self):
        new_state = self._as_dict()
        return dict([(key, value) for key, value in self._original_state.iteritems() if value != new_state[key]])
Edit: I've tested this BTW.
Sorry about the long lines. The difference is (aside from the names) it only caches local non-relation fields. In other words it doesn't cache a parent model's fields if present.
And there's one more thing: you need to reset the _original_state dict after saving. But I didn't want to overwrite the save() method, since most of the time we discard model instances after saving.
def save(self, *args, **kwargs):
    super(Klass, self).save(*args, **kwargs)
    self._original_state = self._as_dict()
|
Close a tkinter window?
|
How do I end a Tkinter program? Let's say I have this code:
from Tkinter import *

def quit():
    # code to exit

root = Tk()
Button(root, text="Quit", command=quit).pack()
root.mainloop()
How should I define the quit function?
|
We can use:
def quit():
    root.quit()

or

def quit():
    root.destroy()
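The difference: root.quit() just stops the mainloop (the window can linger, for example when running under IDLE), while root.destroy() tears down the window and all its widgets. For a simple one-window program, either will end it.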
|
How do I handle the window close event in Tkinter?
|
How do I handle the window close event (user clicking the 'X' button) in a Python Tkinter program?
|
Tkinter supports a mechanism called protocol handlers. Here, the term protocol refers to the interaction between the application and the window manager. The most commonly used protocol is called WM_DELETE_WINDOW, and is used to define what happens when the user explicitly closes a window using the window manager.
You can use the protocol method to install a handler for this protocol (the widget must be a Tk or Toplevel widget):
Here you have a concrete example:
import tkinter as tk
from tkinter import messagebox

root = tk.Tk()

def on_closing():
    if messagebox.askokcancel("Quit", "Do you want to quit?"):
        root.destroy()

root.protocol("WM_DELETE_WINDOW", on_closing)
root.mainloop()
|
What is a "callable" in Python?
|
Now that it's clear what a metaclass is, there is an associated concept that I use all the time without knowing what it really means.
I suppose everybody has at some point made a mistake with parentheses, resulting in an "object is not callable" exception. What's more, using __init__ and __new__ leads one to wonder what this bloody __call__ can be used for.
Could you give me some explanations, including examples with the magic method?
|
A callable is anything that can be called.
The built-in callable (PyCallable_Check in objects.c) checks if the argument is either:
an instance of a class with a __call__ method or
is of a type that has a non-null tp_call (C struct) member which indicates callability otherwise (such as functions, methods, etc.)
The method named __call__ is (according to the documentation)
Called when the instance is ''called'' as a function
Example
class Foo:
    def __call__(self):
        print 'called'

foo_instance = Foo()
foo_instance()  # this is calling the __call__ method
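And you can check callability with the callable() built-in itself:
>>> callable(Foo())  # instance of a class with __call__
True
>>> callable(len)    # built-in function
True
>>> callable(42)
False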
|
Is there any way to do HTTP PUT in python
|
I need to upload some data to a server using HTTP PUT in python. From my brief reading of the urllib2 docs, it only does HTTP POST. Is there any way to do an HTTP PUT in python?
|
I've used a variety of python HTTP libs in the past, and I've settled on 'Requests' as my favourite. Existing libs had pretty usable interfaces, but code can end up being a few lines too long for simple operations. A basic PUT in requests looks like:
>>> import requests
>>> payload = {'username': 'bob', 'email': 'bob@bob.com'}
>>> r = requests.put("http://somedomain.org/endpoint", data=payload)
You can then check the response status code with:
r.status_code
or the response with:
r.content
Requests has a lot of syntactic sugar and shortcuts that'll make your life easier.
|
Using Python's ftplib to get a directory listing, portably
|
You can use ftplib for full FTP support in Python. However the preferred way of getting a directory listing is:
# File: ftplib-example-1.py
import ftplib

ftp = ftplib.FTP("www.python.org")
ftp.login("anonymous", "ftplib-example-1")

data = []
ftp.dir(data.append)
ftp.quit()

for line in data:
    print "-", line
Which yields:
$ python ftplib-example-1.py
- total 34
- drwxrwxr-x 11 root 4127 512 Sep 14 14:18 .
- drwxrwxr-x 11 root 4127 512 Sep 14 14:18 ..
- drwxrwxr-x 2 root 4127 512 Sep 13 15:18 RCS
- lrwxrwxrwx 1 root bin 11 Jun 29 14:34 README -> welcome.msg
- drwxr-xr-x 3 root wheel 512 May 19 1998 bin
- drwxr-sr-x 3 root 1400 512 Jun 9 1997 dev
- drwxrwxr-- 2 root 4127 512 Feb 8 1998 dup
- drwxr-xr-x 3 root wheel 512 May 19 1998 etc
...
I guess the idea is to parse the results to get the directory listing. However this listing is directly dependent on the FTP server's way of formatting the list. It would be very messy to write code for this having to anticipate all the different ways FTP servers might format this list.
Is there a portable way to get an array filled with the directory listing?
(The array should only have the folder names.)
|
Try to use ftp.nlst(dir).
However, note that if the folder is empty, it might throw an error:
files = []

try:
    files = ftp.nlst()
except ftplib.error_perm, resp:
    if str(resp) == "550 No files found":
        print "No files in this directory"
    else:
        raise

for f in files:
    print f
|
python.array versus numpy.array
|
If you are creating a 1d array in Python is there any benefit to using the NumPy package?
|
It all depends on what you plan to do with the array. If all you're doing is creating arrays of simple data types and doing I/O, the array module will do just fine.
If, on the other hand, you want to do any kind of numerical calculations, the array module doesn't provide any help with that. NumPy (and SciPy) give you a wide variety of operations between arrays and special functions that are useful not only for scientific work but for things like advanced image manipulation or in general anything where you need to perform efficient calculations with large amounts of data.
Numpy is also much more flexible, e.g. it supports arrays of any type of Python objects, and is also able to interact "natively" with your own objects if they conform to the array interface.
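A small sketch of the difference (the sample values are arbitrary):
import array
import numpy

a = array.array('d', [1.0, 2.0, 3.0])  # compact typed storage, no math ops
b = numpy.array([1.0, 2.0, 3.0])

print b * 2 + 1  # elementwise arithmetic just works: [ 3.  5.  7.]
print array.array('d', (x * 2 + 1 for x in a))  # array.array needs an explicit loop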
|
py2exe - generate single executable file
|
I thought I heard that py2exe was able to do this, but I never figured it out. Has anyone successfully done this? Can I see your setup.py file, and what command line options you used?
Basically I'm thinking of it giving me a single executable file that does something like unzips itself to maybe /temp and runs.
|
The way to do this using py2exe is to use the bundle_files option in your setup.py file. For a single file you will want to set bundle_files to 1, compressed to True, and set the zipfile option to None. That way it creates one compressed file for easy distribution.
Here is a more complete description of the bundle_files option, quoted directly from the py2exe site:
Using "bundle_files" and "zipfile"
An easier (and better) way to create
single-file executables is to set
bundle_files to 1 or 2, and to set
zipfile to None. This approach does
not require extracting files to a
temporary location, which provides
much faster program startup.
Valid values for bundle_files are:
3 (default) don't bundle
2 bundle everything but the Python interpreter
1 bundle everything, including the Python interpreter
If zipfile is set to None, the files will be bundle
within the executable instead of library.zip.
Here is a sample setup.py:
from distutils.core import setup
import py2exe, sys, os

sys.argv.append('py2exe')

setup(
    options = {'py2exe': {'bundle_files': 1, 'compressed': True}},
    windows = [{'script': "single.py"}],
    zipfile = None,
)
|
Python - When to use file vs open
|
What's the difference between file and open in Python? When should I use which one? (Say I'm in 2.5)
|
You should always use open().
As the documentation states:
When opening a file, it's preferable to use open() instead of invoking this constructor directly. file is more suited to type testing (for example, writing "isinstance(f, file)").
Also, file() was removed in Python 3.0.
|
Is there a function in Python to split a string without ignoring the spaces?
|
Is there a function in Python to split a string without ignoring the spaces in the resulting list?
E.g:
s="This is the string I want to split".split()
gives me
>>> s
['This', 'is', 'the', 'string', 'I', 'want', 'to', 'split']
I want something like
['This',' ','is',' ', 'the',' ','string', ' ', .....]
|
>>> import re
>>> re.split(r"(\s+)", "This is the string I want to split")
['This', ' ', 'is', ' ', 'the', ' ', 'string', ' ', 'I', ' ', 'want', ' ', 'to', ' ', 'split']
Using the capturing parentheses in re.split() causes the function to return the separators as well.
|
Is there a function in python to split a word into a list?
|
Is there a function in python to split a word into a list of single letters? e.g:
s="Word to Split"
to get
wordlist = ['W', 'o', 'r', 'd', ' ', 't', 'o', ...]
|
>>> list("Word to Split")
['W', 'o', 'r', 'd', ' ', 't', 'o', ' ', 'S', 'p', 'l', 'i', 't']
|
Deploying Django: How do you do it?
|
I have tried following guides like this one but it just didn't work for me.
So my question is this: what is a good guide for deploying Django, and how do you deploy yours?
I keep hearing that Capistrano is pretty nifty to use, but I have no idea how to work it or what it does (apart from automating deployment of code), or even whether I want/need to use it or not.
|
mod_wsgi in combination with a virtualenv for all the dependencies, a mercurial checkout into the virtualenv and a fabric recipe to check out the changes on the server.
I wrote an article about my usual workflow: Deploying Python Web Applications. Hope that helps.
|
Class method differences in Python: bound, unbound and static
|
What is the difference between the following class methods?
Is it that one is static and the other is not?
class Test(object):
    def method_one(self):
        print "Called method_one"

    def method_two():
        print "Called method_two"

a_test = Test()
a_test.method_one()
a_test.method_two()
|
In Python, there is a distinction between bound and unbound methods.
Basically, a call to a member function (like method_one), a bound function
a_test.method_one()
is translated to
Test.method_one(a_test)
i.e. a call to an unbound method. Because of that, a call to your version of method_two will fail with a TypeError
>>> a_test = Test()
>>> a_test.method_two()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: method_two() takes no arguments (1 given)
You can change the behavior of a method using a decorator
class Test(object):
    def method_one(self):
        print "Called method_one"

    @staticmethod
    def method_two():
        print "Called method_two"
The decorator tells the built-in default metaclass type (the class of a class, cf. this question) to not create bound methods for method_two.
Now you can invoke the static method both on an instance and on the class directly:
>>> a_test = Test()
>>> a_test.method_one()
Called method_one
>>> a_test.method_two()
Called method_two
>>> Test.method_two()
Called method_two
|
Is a Python dictionary an example of a hash table?
|
One of the basic data structures in Python is the dictionary, which allows one to record "keys" for looking up "values" of any type. Is this implemented internally as a hash table? If not, what is it?
|
Yes, it is a hash mapping or hash table. You can read a description of python's dict implementation, as written by Tim Peters, here.
That's why you can't use something 'not hashable' as a dict key, like a list:
>>> a = {}
>>> b = ['some', 'list']
>>> hash(b)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: list objects are unhashable
>>> a[b] = 'some'
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: list objects are unhashable
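The flip side: anything immutable (and therefore hashable), like a tuple, works fine as a key:
>>> a = {}
>>> a[('some', 'tuple')] = 'fine'  # tuples are immutable, hence hashable
>>> a[('some', 'tuple')]
'fine'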
You can read more about hash tables or check how it has been implemented in python and why it is implemented that way.
|
How can I consume a WSDL (SOAP) web service in Python?
|
I want to use a WSDL SOAP based web service in Python. I have looked at the Dive Into Python code but the SOAPpy module does not work under Python 2.5.
I have tried using suds which works partly, but breaks with certain types (suds.TypeNotFound: Type not found: 'item').
I have also looked at Client but this does not appear to support WSDL.
And I have looked at ZSI but it looks very complex. Does anyone have any sample code for it?
The WSDL is https://ws.pingdom.com/soap/PingdomAPI.wsdl and works fine with the PHP 5 SOAP client.
|
I would recommend that you have a look at SUDS
"Suds is a lightweight SOAP python client for consuming Web Services."
|
How do I configure the ip address with CherryPy?
|
I'm using python and CherryPy to create a simple internal website that about 2 people use. I use the built in webserver with CherryPy.quickstart and never messed with the config files. I recently changed machines so I installed the latest Python and cherrypy and when I run the site I can access it from localhost:8080 but not through the IP or the windows machine name. It could be a machine configuration difference or a newer version of CherryPy or Python. Any ideas how I can bind to the correct IP address?
Edit: to make it clear, I currently don't have a config file at all.
|
server.socket_host: '0.0.0.0'
...would also work. That's IPv4 INADDR_ANY, which means, "listen on all interfaces".
In a config file, the syntax is:
[global]
server.socket_host: '0.0.0.0'
In code:
cherrypy.server.socket_host = '0.0.0.0'
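Since you're using quickstart without a config file, here is a sketch of doing it all in code (Root is a placeholder app class):
import cherrypy

class Root(object):
    @cherrypy.expose
    def index(self):
        return "Hello from every interface"

cherrypy.config.update({'server.socket_host': '0.0.0.0',
                        'server.socket_port': 8080})
cherrypy.quickstart(Root())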
|
Using Pylint with Django
|
I would very much like to integrate pylint into the build process for my python projects, but I have run into one show-stopper: one of the error types that I find extremely useful -- E1101: %s %r has no %r member -- constantly reports errors when using common Django fields, for example:
E1101:125:get_user_tags: Class 'Tag' has no 'objects' member
which is caused by this code:
def get_user_tags(username):
    """
    Gets all the tags that username has used.

    Returns a query set.
    """
    return Tag.objects.filter(  ## This line triggers the error.
        tagownership__users__username__exact=username).distinct()

# Here is the Tag class, models.Model is provided by Django:
class Tag(models.Model):
    """
    Model for user-defined strings that help categorize Events
    on a per-user basis.
    """
    name = models.CharField(max_length=500, null=False, unique=True)

    def __unicode__(self):
        return self.name
How can I tune Pylint to properly take fields such as objects into account? (I've also looked into the Django source, and I have been unable to find the implementation of objects, so I suspect it is not "just" a class field. On the other hand, I'm fairly new to python, so I may very well have overlooked something.)
Edit: The only way I've found to silence these warnings is by blocking all errors of the type (E1101), which is not an acceptable solution, since that is (in my opinion) an extremely useful error. If there is another way, without augmenting the pylint source, please point me to specifics :)
See here for a summary of the problems I've had with pychecker and pyflakes -- they've proven to be far too unstable for general use. (In pychecker's case, the crashes originated in the pychecker code -- not the source it was loading/invoking.)
|
I use the following: pylint --generated-members=objects
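If you'd rather not pass the flag on every run, the same option can live in your pylintrc (section placement per pylint's config format):
[TYPECHECK]
generated-members=objects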
|
How can I search a word in a Word 2007 .docx file?
|
I'd like to search a Word 2007 file (.docx) for a text string, e.g., "some special phrase" that could/would be found from a search within Word.
Is there a way from Python to see the text? I have no interest in formatting - I just want to classify documents as having or not having "some special phrase".
|
After reading your post above, I made a 100% native Python docx module to solve this specific problem.
# Import the module
from docx import *
# Open the .docx file
document = opendocx('A document.docx')
# Search returns true if found
search(document,'your search string')
The docx module is at https://python-docx.readthedocs.org/en/latest/
|
How do you create an osx application/dmg from a python package?
|
I want to create a Mac OS X application from a Python package and then put it in a disk image.
Because I load some resources out of the package, the package should not reside in a zip file.
The resulting disk image should display the background picture to "drag here -> applications" for installation.
|
I don't know the correct way to do it, but this manual method is the approach I've used for simple scripts, and it seems to have performed suitably.
I'll assume that whatever directory I'm in, the Python files for my program are in the relative src/ directory, and that the file I want to execute (which has the proper shebang and execute permissions) is named main.py.
$ mkdir -p MyApplication.app/Contents/MacOS
$ mv src/* MyApplication.app/Contents/MacOS
$ cd MyApplication.app/Contents/MacOS
$ mv main.py MyApplication
At this point we have an application bundle which, as far as I know, should work on any Mac OS system with Python installed (which I think it has by default). It doesn't have an icon or anything, that requires adding some more metadata to the package which is unnecessary for my purposes and I'm not familiar with.
To create the drag-and-drop installer is quite simple. Use Disk Utility to create a New Disk Image of approximately the size you require to store your application. Open it up, copy your application and an alias of /Applications to the drive, then use View Options to position them as you want.
The drag-and-drop message is just a background of the disk image, which you can also specify in View Options. I haven't done it before, but I'd assume that after you whip up an image in your editor of choice you could copy it over, set it as the background and then use chflags hidden to prevent it from cluttering up your nice window.
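If you'd rather script it than click through Disk Utility, hdiutil can do the same job (sizes and names here are placeholders):
$ hdiutil create -size 16m -fs HFS+ -volname "MyApplication" MyApplication.dmg
$ hdiutil attach MyApplication.dmg
$ cp -R MyApplication.app /Volumes/MyApplication/
$ ln -s /Applications /Volumes/MyApplication/Applications
$ hdiutil detach /Volumes/MyApplication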
I know these aren't the clearest, simplest or most detailed instructions out there, but I hope somebody may find them useful.
|
How to Retrieve name of current Windows User (AD or local) using Python?
|
How can I retrieve the name of the currently logged in user, using a python script? The function should work regardless of whether it is a domain/ad user or a local user.
|
Try this:
import os
print os.environ.get("USERNAME")
That should do the job.
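As an aside beyond the original answer, getpass.getuser() wraps the same lookup (it checks the LOGNAME, USER, LNAME and USERNAME environment variables, in that order) and reads a bit more portably:
import getpass
print getpass.getuser()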
|
How do I get a decimal value when using the division operator in Python?
|
For example, the standard division symbol '/' rounds to zero:
>>> 4 / 100
0
However, I want it to return 0.04. What do I use?
|
There are three options:
>>> 4 / float(100)
0.04
>>> 4 / 100.0
0.04
which is the same behavior as in C, C++, Java, etc., or
>>> from __future__ import division
>>> 4 / 100
0.04
You can also activate this behavior by passing the argument -Qnew to the Python interpreter:
$ python -Qnew
>>> 4 / 100
0.04
The second option will be the default in Python 3.0. If you want to have the old integer division, you have to use the // operator.
Edit: added section about -Qnew, thanks to ΤΖΩΤΖΙΟΥ!
|
How do I use timezones with a datetime object in python?
|
How do I properly represent a different timezone in my timezone? The below example only works because I know that EDT is one hour ahead of me, so I can uncomment the subtraction of myTimeZone()
import datetime, re
from datetime import tzinfo, timedelta

class myTimeZone(tzinfo):
    """docstring for myTimeZone"""
    def utcoffset(self, dt):
        return timedelta(hours=1)

def myDateHandler(aDateString):
    """u'Sat, 6 Sep 2008 21:16:33 EDT'"""
    _my_date_pattern = re.compile(r'\w+\,\s+(\d+)\s+(\w+)\s+(\d+)\s+(\d+)\:(\d+)\:(\d+)')
    day, month, year, hour, minute, second = _my_date_pattern.search(aDateString).groups()
    month = [
        'JAN', 'FEB', 'MAR',
        'APR', 'MAY', 'JUN',
        'JUL', 'AUG', 'SEP',
        'OCT', 'NOV', 'DEC'
    ].index(month.upper()) + 1
    dt = datetime.datetime(
        int(year), int(month), int(day),
        int(hour), int(minute), int(second)
    )
    # dt = dt - datetime.timedelta(hours=1)
    # dt = dt - dt.tzinfo.utcoffset(myTimeZone())
    return (dt.year, dt.month, dt.day, dt.hour, dt.minute, dt.second, 0, 0, 0)

def main():
    print myDateHandler("Sat, 6 Sep 2008 21:16:33 EDT")

if __name__ == '__main__':
    main()
|
I recommend babel and pytz when working with timezones. Keep your internal datetime objects naive and in UTC and convert to your timezone for formatting only. The reason why you probably want naive objects (objects without timezone information) is that many libraries and database adapters have no idea about timezones.
Babel
pytz
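A small sketch of that workflow with pytz, using the question's sample timestamp:
from datetime import datetime
import pytz

eastern = pytz.timezone('US/Eastern')
utc_dt = datetime(2008, 9, 7, 1, 16, 33, tzinfo=pytz.utc)  # internal value kept in UTC
print utc_dt.astimezone(eastern).strftime('%a, %d %b %Y %H:%M:%S %Z')
# Sat, 06 Sep 2008 21:16:33 EDT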
|
How can i parse a comma delimited string into a list (caveat)?
|
I need to be able to take a string like:
'''foo, bar, "one, two", three four'''
into:
['foo', 'bar', 'one, two', 'three four']
I have a feeling (with hints from #python) that the solution is going to involve the shlex module.
|
It depends how complicated you want to get... do you want to allow more than one type of quoting? How about escaped quotes?
Your syntax looks very much like the common CSV file format, which is supported by the Python standard library:
import csv
reader = csv.reader(['''foo, bar, "one, two", three four'''], skipinitialspace=True)
for r in reader:
    print r
Outputs:
['foo', 'bar', 'one, two', 'three four']
HTH!
|
How to start IDLE (Python editor) without using the shortcut on Windows Vista?
|
I'm trying to teach Komodo to fire up IDLE when I hit the right keystrokes. I can use the exact path of the shortcut in start menu in the Windows Explorer location bar to launch IDLE so I was hoping Komodo would be able to use it as well. But, giving this path to Komodo causes it to say that 1 is returned. This appears to be a failure as IDLE doesn't start up.
I thought I'd avoid the shortcut and just use the exact path. I go to the start menu, find the shortcut for IDLE, right click to look at the properties. The target is grayed out, but says "Python 2.5.2". The "Start in" is set to, "C:\Python25\". The "Open File Location" button is also grayed out.
How do I find out where this shortcut is really pointing? I have tried starting python.exe and pythonw.exe both in C:\Python25, but neither starts up IDLE.
|
There's a file called idle.py in your Python installation directory in Lib\idlelib\idle.py
If you run that file with Python, then IDLE should start.
c:\Python25\pythonw.exe c:\Python25\Lib\idlelib\idle.py
|
How do you use the ellipsis slicing syntax in Python?
|
This came up in Hidden features of Python, but I can't see good documentation or examples that explain how the feature works.
|
The ellipsis is used to slice higher-dimensional data structures.
It's designed to mean: at this point, insert as many full slices (:) as needed to extend the multi-dimensional slice to all dimensions.
Example:
>>> from numpy import arange
>>> a = arange(16).reshape(2,2,2,2)
Now, you have a 4-dimensional matrix of order 2x2x2x2. To select all first elements in the 4th dimension, you can use the ellipsis notation
>>> a[..., 0].flatten()
array([ 0, 2, 4, 6, 8, 10, 12, 14])
which is equivalent to
>>> a[:,:,:,0].flatten()
array([ 0, 2, 4, 6, 8, 10, 12, 14])
In your own implementations, you're free to ignore the contract mentioned above and use it for whatever you see fit.
|
Is there a way to convert indentation in Python code to braces?
|
I am a totally blind programmer who would like to learn Python. Unfortunately the fact that code blocks are represented with different levels of indentation is a major stumbling block. I was wondering if there were any tools available that would allow me to write code using braces or some other code block delimiter and then convert that format into a properly indented representation that the Python interpreter could use?
|
There's a solution to your problem that is distributed with Python itself: pindent.py. It's located in the Tools\Scripts directory of a Windows install (my path to it is C:\Python25\Tools\Scripts); it looks like you'd have to grab it from svn.python.org if you are running on Linux or OSX.
It adds comments when blocks are closed, or can properly indent code if comments are put in. Here's an example of the code outputted by pindent with the command:
pindent -c myfile.py
def foobar(a, b):
    if a == b:
        a = a+1
    elif a < b:
        b = b-1
        if b > a: a = a-1
        # end if
    else:
        print 'oops!'
    # end if
# end def foobar
Where the original myfile.py was:
def foobar(a, b):
    if a == b:
        a = a+1
    elif a < b:
        b = b-1
        if b > a: a = a-1
    else:
        print 'oops!'
You can also use pindent.py -d to insert the correct indentation based on comments (read the header of pindent.py for details); this should allow you to code in Python without worrying about indentation.
I'd be interested to learn what solution you end up using, if you require any further assistance, please comment on this post and I'll try to help.
|
Iron python, beautiful soup, win32 app
|
Does Beautiful Soup work with IronPython?
If so, with which version of IronPython?
How easy is it to distribute a Windows desktop app on .NET 2.0 using IronPython (mostly C# calling some Python code for parsing HTML)?
|
I was asking myself this same question and after struggling to follow advice here and elsewhere to get IronPython and BeautifulSoup to play nicely with my existing code I decided to go looking for an alternative native .NET solution. BeautifulSoup is a wonderful bit of code and at first it didn't look like there was anything comparable available for .NET, but then I found the HTML Agility Pack and if anything I think I've actually gained some maintainability over BeautifulSoup. It takes clean or crufty HTML and produces an elegant XML DOM from it that can be queried via XPath. With a couple of lines of code you can even get back a raw XDocument and then craft your queries in LINQ to XML. Honestly, if web scraping is your goal, this is about the cleanest solution you are likely to find.
Edit
Here is a simple (read: not robust at all) example that parses out the US House of Representatives holiday schedule:
using System;
using System.Collections.Generic;
using HtmlAgilityPack;

namespace GovParsingTest
{
    class Program
    {
        static void Main(string[] args)
        {
            HtmlWeb hw = new HtmlWeb();
            string url = @"http://www.house.gov/house/House_Calendar.shtml";
            HtmlDocument doc = hw.Load(url);

            HtmlNode docNode = doc.DocumentNode;
            HtmlNode div = docNode.SelectSingleNode("//div[@id='primary']");
            HtmlNodeCollection tableRows = div.SelectNodes(".//tr");

            foreach (HtmlNode row in tableRows)
            {
                HtmlNodeCollection cells = row.SelectNodes(".//td");
                HtmlNode dateNode = cells[0];
                HtmlNode eventNode = cells[1];

                while (eventNode.HasChildNodes)
                {
                    eventNode = eventNode.FirstChild;
                }

                Console.WriteLine(dateNode.InnerText);
                Console.WriteLine(eventNode.InnerText);
                Console.WriteLine();
            }

            //Console.WriteLine(div.InnerHtml);
            Console.ReadKey();
        }
    }
}
|
How do I uninstall python from OSX Leopard so that I can use the MacPorts version?
|
I want to use the macports version of python instead of the one that comes with Leopard.
|
Don't. Apple ships various system utilities that rely on the system Python (and particularly the Python "framework" build); removing it will cause you problems.
Instead, modify your PATH environment variable in your ~/.bash_profile to put /opt/local/bin first.
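For example, a line like this in ~/.bash_profile:
export PATH=/opt/local/bin:/opt/local/sbin:$PATH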
|
Adding code to __init__.py
|
I'm taking a look at how the model system in django works and I noticed something that I don't understand.
I know that you create an empty __init__.py file to specify that the current directory is a package. And that you can set some variable in __init__.py so that import * works properly.
But django adds a bunch of from ... import ... statements and defines a bunch of classes in __init__.py. Why? Doesn't this just make things look messy? Is there a reason that requires this code in __init__.py?
|
All imports in __init__.py are made available when you import the package (directory) that contains it.
Example:
./dir/__init__.py:
import something
./test.py:
import dir
# can now use dir.something
EDIT: forgot to mention, the code in __init__.py runs the first time you import any module from that directory. So it's normally a good place to put any package-level initialisation code.
EDIT2: dgrant pointed out a possible confusion in my example. In __init__.py import something can import any module, not necessarily from the package. For example, we can replace it with import datetime, then in our top level test.py both of these snippets will work:
import dir
print dir.datetime.datetime.now()
and
import dir.some_module_in_dir
print dir.datetime.datetime.now()
The bottom line is: all names assigned in __init__.py, be it imported modules, functions or classes, are automatically available in the package namespace whenever you import the package or a module in the package.
|
time length of an mp3 file
|
What is the simplest way to determine the length (in seconds) of a given mp3 file, without using outside libraries? (python source highly appreciated)
|
You can use pymad. It's an external library, but don't fall for the Not Invented Here trap. Any particular reason you don't want any external libraries?
import mad
mf = mad.MadFile("foo.mp3")
track_length_in_milliseconds = mf.total_time()
Spotted here.
--
If you really don't want to use an external library, have a look here and check out how he's done it. Warning: it's complicated.
|
Tabs versus spaces in Python programming
|
I have always used tabs for indentation when I do Python programming. But then I came across a question here on SO where someone pointed out that most Python programmers use spaces instead of tabs to minimize editor-to-editor mistakes.
How does that make a difference? Are there other reasons why one would use spaces instead of tabs for Python? Or is it simply not true?
Should I switch my editor to insert spaces instead of tabs right away or keep on going like I used to?
|
Tired of chasing after indentation typos (8 spaces? no, 7... oops, 9...) I switched my sources to 'tabs only'.
1 tab == 1 indent level, full stop
The point is: if you want to display the indentation as 4, 8 or pi / 12 character width, just change the settings in your text editor, don't mess with the code :p
(personally I use a 4-char-wide tab... but some would prefer 3 or 8 spaces, or even variable-width fonts!!)
|
Parse DICOM files in native Python
|
What is the simplest and most-pythonic way to parse a DICOM file?
A native Python implementation without the use of non-Python libraries would be much preferred. DICOM is the standard file format in digital medical imaging (look here for more information).
There are some C/C++ libraries that support reading (a subset) of DICOM files. Two or three of them even have Python bindings. A native Python parser would serve two purposes for me:
No need to build any external C/C++ libraries.
Learn about the DICOM file format.
|
I'm using pydicom heavily these days, and it rocks.
It's pretty easy to start playing with it:
import dicom
data = dicom.read_file("yourdicomfile.dcm")
To get the interesting stuff out of that "data" object, somehow resembling dcmdump output:
for key in data.dir():
value = getattr(data, key, '')
if type(value) is dicom.UID.UID or key == "PixelData":
continue
print "%s: %s" % (key, value)
I think a great way to learn more about the dicom format is to open similar files and write code to compare them according to various aspects: study description, window width and center, pixel representation and so on.
Have fun! :)
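As a starting point for that kind of comparison, here is a minimal sketch (the attribute names follow the standard DICOM dictionary; the file names are placeholders):
import dicom
a = dicom.read_file("file1.dcm")
b = dicom.read_file("file2.dcm")
# Compare a few interesting attributes side by side.
for key in ("StudyDescription", "WindowWidth", "WindowCenter", "PixelRepresentation"):
    print "%s: %s vs %s" % (key, getattr(a, key, None), getattr(b, key, None))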
|
Using **kwargs with SimpleXMLRPCServer in python
|
I have a class that I wish to expose as a remote service using pythons SimpleXMLRPCServer. The server startup looks like this:
server = SimpleXMLRPCServer((serverSettings.LISTEN_IP,serverSettings.LISTEN_PORT))
service = Service()
server.register_instance(service)
server.serve_forever()
I then have a ServiceRemote class that looks like this:
def __init__(self,ip,port):
self.rpcClient = xmlrpclib.Server('http://%s:%d' %(ip,port))
def __getattr__(self, name):
# forward all calls to the rpc client
return getattr(self.rpcClient, name)
So all calls on the ServiceRemote object will be forwarded to xmlrpclib.Server, which then forwards it to the remote server. The problem is a method in the service that takes named varargs:
@useDb
def select(self, db, fields, **kwargs):
pass
The @useDb decorator wraps the function, creating the db before the call and opening it, then closing it after the call is done before returning the result.
When I call this method, I get the error "call() got an unexpected keyword argument 'name'". So, is it possible to call methods taking variable named arguments remotely? Or will I have to create an override for each method variation I need.
Thanks for the responses. I changed my code around a bit, so the question is no longer an issue. However, now I know this for future reference if I do need to support named arguments with remote invocation. I think a combination of Thomas's and praptak's approaches would be good: turning kwargs into positional args on the client through xmlrpclib, and having a wrapper on the methods server-side to unpack the positional arguments.
|
You can't do this with plain xmlrpc since it has no notion of keyword arguments. However, you can superimpose this as a protocol on top of xmlrpc that would always pass a list as first argument, and a dictionary as a second, and then provide the proper support code so this becomes transparent for your usage, example below:
Server
from SimpleXMLRPCServer import SimpleXMLRPCServer
class Server(object):
def __init__(self, hostport):
self.server = SimpleXMLRPCServer(hostport)
def register_function(self, function, name=None):
def _function(args, kwargs):
return function(*args, **kwargs)
_function.__name__ = function.__name__
self.server.register_function(_function, name)
def serve_forever(self):
self.server.serve_forever()
#example usage
server = Server(('localhost', 8000))
def test(arg1, arg2):
print 'arg1: %s arg2: %s' % (arg1, arg2)
return 0
server.register_function(test)
server.serve_forever()
Client
import xmlrpclib
class ServerProxy(object):
def __init__(self, url):
self._xmlrpc_server_proxy = xmlrpclib.ServerProxy(url)
def __getattr__(self, name):
call_proxy = getattr(self._xmlrpc_server_proxy, name)
def _call(*args, **kwargs):
return call_proxy(args, kwargs)
return _call
#example usage
server = ServerProxy('http://localhost:8000')
server.test(1, 2)
server.test(arg2=2, arg1=1)
server.test(1, arg2=2)
server.test(*[1,2])
server.test(**{'arg1':1, 'arg2':2})
|
Fetch a Wikipedia article with Python
|
I try to fetch a Wikipedia article with Python's urllib:
f = urllib.urlopen("http://en.wikipedia.org/w/index.php?title=Albert_Einstein&printable=yes")
s = f.read()
f.close()
However instead of the html page I get the following response: Error - Wikimedia Foundation:
Request: GET http://en.wikipedia.org/w/index.php?title=Albert_Einstein&printable=yes, from 192.35.17.11 via knsq1.knams.wikimedia.org (squid/2.6.STABLE21) to ()
Error: ERR_ACCESS_DENIED, errno [No Error] at Tue, 23 Sep 2008 09:09:08 GMT
Wikipedia seems to block requests which are not from a standard browser.
Anybody know how to work around this?
|
You need to use urllib2, which supersedes urllib in the Python standard library, in order to change the user agent.
Straight from the examples
import urllib2
opener = urllib2.build_opener()
opener.addheaders = [('User-agent', 'Mozilla/5.0')]
infile = opener.open('http://en.wikipedia.org/w/index.php?title=Albert_Einstein&printable=yes')
page = infile.read()
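Equivalently, you can set the header per request with urllib2.Request rather than a custom opener:
import urllib2
req = urllib2.Request('http://en.wikipedia.org/w/index.php?title=Albert_Einstein&printable=yes',
                      headers={'User-agent': 'Mozilla/5.0'})
page = urllib2.urlopen(req).read()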
|
SVG rendering in a PyGame application
|
In a pyGame application, I would like to render resolution-free GUI widgets described in SVG.
What tool and/or library can I use to reach this goal ?
(I like the OCEMP GUI toolkit but it seems to be bitmap dependent for its rendering)
|
This is a complete example which combines hints by other people here.
It should render a file called test.svg from the current directory. It was tested on Ubuntu 10.10, python-cairo 1.8.8, python-pygame 1.9.1, python-rsvg 2.30.0.
#!/usr/bin/python
import array
import math
import cairo
import pygame
import rsvg
WIDTH = 512
HEIGHT = 512
data = array.array('c', chr(0) * WIDTH * HEIGHT * 4)
surface = cairo.ImageSurface.create_for_data(
data, cairo.FORMAT_ARGB32, WIDTH, HEIGHT, WIDTH * 4)
pygame.init()
window = pygame.display.set_mode((WIDTH, HEIGHT))
svg = rsvg.Handle(file="test.svg")
ctx = cairo.Context(surface)
svg.render_cairo(ctx)
screen = pygame.display.get_surface()
image = pygame.image.frombuffer(data.tostring(), (WIDTH, HEIGHT),"ARGB")
screen.blit(image, (0, 0))
pygame.display.flip()
clock = pygame.time.Clock()
while True:
clock.tick(15)
for event in pygame.event.get():
if event.type == pygame.QUIT:
raise SystemExit
|
Directory listing in Python
|
How do I get a list of all files (and directories) in a given directory in Python?
|
This is a way to traverse every file and directory in a directory tree:
import os
for dirname, dirnames, filenames in os.walk('.'):
# print path to all subdirectories first.
for subdirname in dirnames:
print(os.path.join(dirname, subdirname))
# print path to all filenames.
for filename in filenames:
print(os.path.join(dirname, filename))
# Advanced usage:
# editing the 'dirnames' list will stop os.walk() from recursing into those directories.
if '.git' in dirnames:
# don't go into any .git directories.
dirnames.remove('.git')
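If you only need the contents of a single directory rather than the whole tree, os.listdir is enough; a minimal sketch:
import os
for name in os.listdir('.'):
    path = os.path.join('.', name)
    if os.path.isdir(path):
        print 'directory:', path
    else:
        print 'file:', path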
|
Why does Python pep-8 strongly recommend spaces over tabs for indentation?
|
I see on Stack Overflow and PEP 8 that the recommendation is to use spaces only for indentation in Python programs. I can understand the need for consistent indentation and I have felt that pain.
Is there an underlying reason for spaces to be preferred? I would have thought that tabs were far easier to work with.
|
The answer was given right there in the PEP [ed: this passage has been edited out in 2013]. I quote:
The most popular way of indenting Python is with spaces only.
What other underlying reason do you need?
To put it less bluntly: Consider also the scope of the PEP as stated in the very first paragraph:
This document gives coding conventions for the Python code comprising the standard library in the main Python distribution.
The intention is to make all code that goes in the official Python distribution consistently formatted (I hope we can agree that this is universally a Good Thing™).
Since the decision between spaces and tabs for an individual programmer is a) really a matter of taste and b) easily dealt with by technical means (editors, conversion scripts, etc.), there is a clear way to end all discussion: choose one.
Guido was the one to choose. He didn't even have to give a reason, but he still did by referring to empirical data.
For all other purposes you can either take this PEP as a recommendation, or you can ignore it -- your choice, or your team's, or your team leader's.
But if I may give you one advice: don't mix'em ;-) [ed: Mixing tabs and spaces is no longer an option.]
|
How can I normalize a URL in python
|
I'd like to know how to normalize a URL in Python.
For example, If I have a url string like : "http://www.example.com/foo goo/bar.html"
I need a library in python that will transform the extra space (or any other non normalized character) to a proper URL.
|
Have a look at this module: werkzeug.utils (now in werkzeug.urls).
The function you are looking for is called "url_fix" and works like this:
>>> url_fix(u'http://de.wikipedia.org/wiki/Elf (Begriffsklärung)')
'http://de.wikipedia.org/wiki/Elf%20%28Begriffskl%C3%A4rung%29'
It's implemented in Werkzeug as follows:
import urllib
import urlparse
def url_fix(s, charset='utf-8'):
"""Sometimes you get an URL by a user that just isn't a real
URL because it contains unsafe characters like ' ' and so on. This
function can fix some of the problems in a similar way browsers
handle data entered by the user:
>>> url_fix(u'http://de.wikipedia.org/wiki/Elf (Begriffsklärung)')
'http://de.wikipedia.org/wiki/Elf%20%28Begriffskl%C3%A4rung%29'
:param charset: The target charset for the URL if the url was
given as unicode string.
"""
if isinstance(s, unicode):
s = s.encode(charset, 'ignore')
scheme, netloc, path, qs, anchor = urlparse.urlsplit(s)
path = urllib.quote(path, '/%')
qs = urllib.quote_plus(qs, ':&=')
return urlparse.urlunsplit((scheme, netloc, path, qs, anchor))
|
Accessing object memory address
|
When you call the object.__repr__() method in python you get something like this back: <__main__.Test object at 0x2aba1c0cf890>, is there any way to get a hold of the memory address if you overload __repr__(), other then calling super(Class, obj).__repr__() and regexing it out?
|
The Python manual has this to say about id():
Return the "identity" of an object. This is an integer (or long integer) which is guaranteed to be unique and constant for this object during its lifetime. Two objects with non-overlapping lifetimes may have the same id() value. (Implementation note: this is the address of the object.)
So in CPython, this will be the address of the object. No such guarantee for any other Python interpreter, though.
Note that if you're writing a C extension, you have full access to the internals of the Python interpreter, including access to the addresses of objects directly.
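For illustration, a minimal sketch that reads the address via id() instead of parsing it out of repr() (CPython only, per the note above):
class Test(object):
    pass

t = Test()
print repr(t)     # <__main__.Test object at 0x...>
print hex(id(t))  # the same address on CPython, no regexing needed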
|
How do I include a stacktrace in my Django 500.html page?
|
I'm running Django 1.0 and I'm close to deploying my app. As such, I'll be changing the DEBUG setting to False.
With that being said, I'd still like to include the stacktrace on my 500.html page when errors occur. By doing so, users can copy-and-paste the errors and easily email them to the developers.
Any thoughts on how best to approach this issue?
|
Automatically log your 500s, that way:
You know when they occur.
You don't need to rely on users sending you stacktraces.
Joel recommends even going so far as automatically creating tickets in your bug tracker when your application experiences a failure. Personally, I create a (private) RSS feed with the stacktraces, urls, etc. that the developers can subscribe to.
Showing stack traces to your users on the other hand could possibly leak information that malicious users could use to attack your site. Overly detailed error messages are one of the classic stepping stones to SQL injection attacks.
Edit (added code sample to capture traceback):
You can get the exception information from the sys.exc_info call, while the traceback module takes care of formatting it for display:
import traceback
import sys
try:
raise Exception("Message")
except:
type, value, tb = sys.exc_info()
print >> sys.stderr, type.__name__, ":", value
print >> sys.stderr, '\n'.join(traceback.format_tb(tb))
Prints:
Exception : Message
File "exception.py", line 5, in <module>
raise Exception("Message")
|
How do you translate this regular-expression idiom from Perl into Python?
|
I switched from Perl to Python about a year ago and haven't looked back. There is only one idiom that I've ever found I can do more easily in Perl than in Python:
if ($var =~ /foo(.+)/) {
# do something with $1
} elsif ($var =~ /bar(.+)/) {
# do something with $1
} elsif ($var =~ /baz(.+)/) {
# do something with $1
}
The corresponding Python code is not so elegant since the if statements keep getting nested:
m = re.search(r'foo(.+)', var)
if m:
# do something with m.group(1)
else:
m = re.search(r'bar(.+)', var)
if m:
# do something with m.group(1)
else:
m = re.search(r'baz(.+)', var)
if m:
# do something with m.group(1)
Does anyone have an elegant way to reproduce this pattern in Python? I've seen anonymous function dispatch tables used, but those seem kind of unwieldy to me for a small number of regular expressions...
|
Using named groups and a dispatch table:
r = re.compile(r'(?P<cmd>foo|bar|baz)(?P<data>.+)')
def do_foo(data):
...
def do_bar(data):
...
def do_baz(data):
...
dispatch = {
'foo': do_foo,
'bar': do_bar,
'baz': do_baz,
}
m = r.match(var)
if m:
dispatch[m.group('cmd')](m.group('data'))
With a little bit of introspection you can auto-generate the regexp and the dispatch table.
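A minimal sketch of that introspection idea, assuming the do_* naming convention used above:
import re

prefix = 'do_'
dispatch = dict((name[len(prefix):], func)
                for name, func in globals().items()
                if name.startswith(prefix) and callable(func))
r = re.compile(r'(?P<cmd>%s)(?P<data>.+)' % '|'.join(map(re.escape, dispatch)))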
|
How do I find the location of my Python site-packages directory?
|
How do I find the location of my site-packages directory?
|
>>> import site; site.getsitepackages()
['/usr/local/lib/python2.7/dist-packages', '/usr/lib/python2.7/dist-packages']
(or just first item with site.getsitepackages()[0])
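If site.getsitepackages() isn't available in your Python version, distutils offers a similar lookup:
from distutils.sysconfig import get_python_lib
print get_python_lib()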
|
How do I copy a file in python?
|
How do I copy a file in Python? I couldn't find anything under os.
|
shutil has many methods you can use. One of which is:
from shutil import copyfile
copyfile(src, dst)
Copy the contents of the file named src to a file named dst. The destination location must be writable; otherwise, an IOError exception will be raised. If dst already exists, it will be replaced. Special files such as character or block devices and pipes cannot be copied with this function. src and dst are path names given as strings.
|
What is the intended use of the DEFAULT section in config files used by ConfigParser?
|
I've used ConfigParser for quite a while for simple configs. One thing that's bugged me for a long time is the DEFAULT section. I'm not really sure what's an appropriate use. I've read the documentation, but I would really like to see some clever examples of its use and how it affects other sections in the file (something that really illustrates the kind of things that are possible).
|
I found an explanation here by googling for "windows ini" "default section". Summary: whatever you put in the [DEFAULT] section gets propagated to every other section. Using the example from the linked website, let's say I have a config file called test1.ini:
[host 1]
lh_server=192.168.0.1
vh_hosts = PloneSite1:8080
lh_root = PloneSite1
[host 2]
lh_server=192.168.0.1
vh_hosts = PloneSite2:8080
lh_root = PloneSite2
I can read this using ConfigParser:
>>> cp = ConfigParser.ConfigParser()
>>> cp.read('test1.ini')
['test1.ini']
>>> cp.get('host 1', 'lh_server')
'192.168.0.1'
But I notice that lh_server is the same in both sections; and, indeed, I realise that it will be the same for most hosts I might add. So I can do this, as test2.ini:
[DEFAULT]
lh_server=192.168.0.1
[host 1]
vh_hosts = PloneSite1:8080
lh_root = PloneSite1
[host 2]
vh_hosts = PloneSite2:8080
lh_root = PloneSite2
Despite the sections not having lh_server keys, I can still access them:
>>> cp.read('test2.ini')
['test2.ini']
>>> cp.get('host 1', 'lh_server')
'192.168.0.1'
Read the linked page for a further example of using variable substitution in the DEFAULT section to simplify the INI file even more.
|
extracting text from MS word files in python
|
For working with MS Word files in Python, there are the Python win32 extensions, which can be used on Windows. How do I do the same on Linux?
Is there any library?
|
Use the native Python docx module. Here's how to extract all the text from a doc:
import docx

document = docx.Document(filename)
docText = '\n\n'.join([
paragraph.text.encode('utf-8') for paragraph in document.paragraphs
])
print docText
See Python DocX site
Also check out Textract which pulls out tables etc.
Parsing XML with regexes invokes Cthulhu. Don't do it!
|
How do I modify a text file in Python?
|
I'm using Python, and would like to insert a string into a text file without deleting or copying the file. How can I do that?
|
Unfortunately there is no way to insert into the middle of a file without re-writing it. As previous posters have indicated, you can append to a file or overwrite part of it using seek but if you want to add stuff at the beginning or the middle, you'll have to rewrite it.
This is an operating system thing, not a Python thing. It is the same in all languages.
What I usually do is read from the file, make the modifications and write it out to a new file called myfile.txt.tmp or something like that. This is better than reading the whole file into memory because the file may be too large for that. Once the temporary file is completed, I rename it the same as the original file.
This is a good, safe way to do it because if the file write crashes or aborts for any reason, you still have your untouched original file.
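A minimal sketch of that read/modify/rename pattern (the function and file names are illustrative):
import os

def insert_before_line(path, line_no, text):
    tmp = path + '.tmp'
    src = open(path)
    dst = open(tmp, 'w')
    for i, line in enumerate(src):
        if i == line_no:
            dst.write(text + '\n')  # insert the new string at the given line
        dst.write(line)
    src.close()
    dst.close()
    os.rename(tmp, path)  # atomic replace on POSIX; the original is intact if we crash earlier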
|
Python library for rendering HTML and javascript
|
Is there any python module for rendering a HTML page with javascript and get back a DOM object?
I want to parse a page which generates almost all of its content using javascript.
|
The big complication here is emulating the full browser environment outside of a browser. You can use stand-alone JavaScript interpreters like Rhino and SpiderMonkey to run JavaScript code, but they don't provide a complete browser-like environment to fully render a web page.
If I needed to solve a problem like this I would first look at how the JavaScript is rendering the page; it's quite possible it's fetching data via AJAX and using that to render the page. I could then use Python libraries like simplejson and httplib2 to fetch the data directly and use that, negating the need to access the DOM object. However, that's only one possible situation; I don't know the exact problem you are solving.
Other options include the Selenium one mentioned by Łukasz, some kind of embedded WebKit craziness, some kind of IE win32 scripting craziness or, finally, a PyXPCOM-based solution (with added craziness). All of these have the drawback of requiring pretty much a fully running web browser for Python to play with, which might not be an option depending on your environment.
|
Iterate a list with indexes in Python
|
I could swear I've seen the function (or method) that takes a list like [3, 7, 19] and makes it into an iterable of tuples, like [(0,3), (1,7), (2,19)], to use instead of:
for i in range(len(name_of_list)):
name_of_list[i] = something
but I can't remember the name and googling "iterate list" gets nothing.
|
>>> a = [3,4,5,6]
>>> for i, val in enumerate(a):
... print i, val
...
0 3
1 4
2 5
3 6
>>>
|
Find out number of capture groups in Python regular expressions
|
Is there a way to determine how many capture groups there are in a given regular expression?
I would like to be able to do the follwing:
def groups(regexp, s):
""" Returns the first result of re.findall, or an empty default
>>> groups(r'(\d)(\d)(\d)', '123')
('1', '2', '3')
>>> groups(r'(\d)(\d)(\d)', 'abc')
('', '', '')
"""
import re
m = re.search(regexp, s)
if m:
return m.groups()
return ('',) * num_of_groups(regexp)
This allows me to do stuff like:
first, last, phone = groups(r'(\w+) (\w+) ([\d\-]+)', 'John Doe 555-3456')
However, I don't know how to implement num_of_groups. (Currently I just work around it.)
EDIT: Following the advice from rslite, I replaced re.findall with re.search.
sre_parse seems like the most robust and comprehensive solution, but requires tree traversal and appears to be a bit heavy.
MizardX's regular expression seems to cover all bases, so I'm going to go with that.
|
import re

def num_groups(regex):
    return re.compile(regex).groups
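Using the pattern from the question:
print num_groups(r'(\w+) (\w+) ([\d\-]+)')  # 3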
|
How to parse an ISO 8601-formatted date in Python?
|
I need to parse RFC 3339 strings like "2008-09-03T20:56:35.450686Z" into Python's datetime type.
I have found strptime in the Python standard library, but it is not very convenient.
What is the best way to do this?
|
The python-dateutil package can parse not only RFC 3339 datetime strings like the one in the question, but also other ISO 8601 date and time strings that don't comply with RFC 3339 (such as ones with no UTC offset, or ones that represent only a date).
>>> import dateutil.parser
>>> dateutil.parser.parse('2008-09-03T20:56:35.450686Z') # RFC 3339 format
datetime.datetime(2008, 9, 3, 20, 56, 35, 450686, tzinfo=tzutc())
>>> dateutil.parser.parse('2008-09-03T20:56:35.450686') # ISO 8601 extended format
datetime.datetime(2008, 9, 3, 20, 56, 35, 450686)
>>> dateutil.parser.parse('20080903T205635.450686') # ISO 8601 basic format
datetime.datetime(2008, 9, 3, 20, 56, 35, 450686)
>>> dateutil.parser.parse('20080903') # ISO 8601 basic format, date only
datetime.datetime(2008, 9, 3, 0, 0)
|
Should Python import statements always be at the top of a module?
|
PEP 08 states:
Imports are always put at the top of the file, just after any module comments and docstrings, and before module globals and constants.
However if the class/method/function that I am importing is only used in rare cases, surely it is more efficient to do the import when it is needed?
Isn't this:
class SomeClass(object):
def not_often_called(self):
from datetime import datetime
self.datetime = datetime.now()
more efficient than this?
from datetime import datetime
class SomeClass(object):
def not_often_called(self):
self.datetime = datetime.now()
|
Module importing is quite fast, but not instant. This means that:
Putting the imports at the top of the module is fine, because it's a trivial cost that's only paid once.
Putting the imports within a function will cause calls to that function to take longer.
So if you care about efficiency, put the imports at the top. Only move them into a function if your profiling shows that would help (you did profile to see where best to improve performance, right??)
The best reasons I've seen to perform lazy imports are:
Optional library support. If your code has multiple paths that use different libraries, don't break if an optional library is not installed.
In the __init__.py of a plugin, which might be imported but not actually used. Examples are Bazaar plugins, which use bzrlib's lazy-loading framework.
|
Using property() on classmethods
|
I have a class with two class methods (using the classmethod() function) for getting and setting what is essentially a static variable. I tried to use the property() function with these, but it results in an error. I was able to reproduce the error with the following in the interpreter:
>>> class foo(object):
... _var=5
... def getvar(cls):
... return cls._var
... getvar=classmethod(getvar)
... def setvar(cls,value):
... cls._var=value
... setvar=classmethod(setvar)
... var=property(getvar,setvar)
...
>>> f = foo()
>>> f.getvar()
5
>>> f.setvar(4)
>>> f.getvar()
4
>>> f.var
Traceback (most recent call last):
File "<stdin>", line 1, in ?
TypeError: 'classmethod' object is not callable
>>> f.var=5
Traceback (most recent call last):
File "<stdin>", line 1, in ?
TypeError: 'classmethod' object is not callable
Is it possible to use the property() function with classmethod decorated functions?
|
Reading the Python 2.2 release notes, I find the following.
The get method [of a property] won't be called when the property is accessed as a class attribute (C.x) instead of as an instance attribute (C().x). If you want to override the __get__ operation for properties when used as a class attribute, you can subclass property - it is a new-style type itself - to extend its __get__ method, or you can define a descriptor type from scratch by creating a new-style class that defines __get__, __set__ and __delete__ methods.
NOTE: The below method doesn't actually work for setters, only getters.
Therefore, I believe the prescribed solution is to create a ClassProperty as a subclass of property.
class ClassProperty(property):
def __get__(self, cls, owner):
return self.fget.__get__(None, owner)()
class foo(object):
_var=5
def getvar(cls):
return cls._var
getvar=classmethod(getvar)
def setvar(cls,value):
cls._var=value
setvar=classmethod(setvar)
var=ClassProperty(getvar,setvar)
assert foo.getvar() == 5
foo.setvar(4)
assert foo.getvar() == 4
assert foo.var == 4
foo.var = 3
assert foo.var == 3
However, the setters don't actually work:
foo.var = 4
assert foo.var == foo._var # raises AssertionError
foo._var is unchanged, you've simply overwritten the property with a new value.
You can also use ClassProperty as a decorator:
class Foo(object):
_var = 5
@ClassProperty
@classmethod
def var(cls):
return cls._var
@var.setter
@classmethod
def var(cls, value):
cls._var = value
assert Foo.var == 5
|
Generic Exception Handling in Python the "Right Way"
|
Sometimes I find myself in the situation where I want to execute several sequential commands like such:
try:
foo(a, b)
except Exception, e:
baz(e)
try:
bar(c, d)
except Exception, e:
baz(e)
...
This same pattern occurs when exceptions simply need to be ignored.
This feels redundant and the excessive syntax causes it to be surprisingly difficult to follow when reading code.
In C, I would have solved this type of problem easily with a macro, but unfortunately, this cannot be done in straight python.
Question: How can I best reduce the code footprint and increase code readability when coming across this pattern?
|
You could use the with statement if you have python 2.5
from __future__ import with_statement
import contextlib
@contextlib.contextmanager
def handler():
try:
yield
except Exception, e:
baz(e)
Your example now becomes:
with handler():
foo(a, b)
with handler():
bar(c, d)
|
How do you test that a Python function throws an exception?
|
How does one write a unittest that fails only if a function doesn't throw an expected exception?
|
Use TestCase.assertRaises (or TestCase.failUnlessRaises) from the unittest module, for example:
import unittest
import mymod
class MyTestCase(unittest.TestCase):
def test1(self):
self.assertRaises(SomeCoolException, mymod.myfunc)
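On Python 2.7 and later, assertRaises also works as a context manager, which reads well when the call takes arguments:
def test2(self):
    with self.assertRaises(SomeCoolException):
        mymod.myfunc()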
|
Is there an inverse function for time.gmtime() that parses a UTC tuple to seconds since the epoch?
|
Python's time module seems a little haphazard. For example, here is a list of functions in there, from the docstring:
time() -- return current time in seconds since the Epoch as a float
clock() -- return CPU time since process start as a float
sleep() -- delay for a number of seconds given as a float
gmtime() -- convert seconds since Epoch to UTC tuple
localtime() -- convert seconds since Epoch to local time tuple
asctime() -- convert time tuple to string
ctime() -- convert time in seconds to string
mktime() -- convert local time tuple to seconds since Epoch
strftime() -- convert time tuple to string according to format specification
strptime() -- parse string to time tuple according to format specification
tzset() -- change the local timezone
Looking at localtime() and its inverse mktime(), why is there no inverse for gmtime() ?
Bonus questions: what would you name the method ? How would you implement it ?
|
There is actually an inverse function, but for some bizarre reason, it's in the calendar module: calendar.timegm(). I listed the functions in this answer.
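A quick round trip shows that it really is the inverse:
import calendar
import time

now = int(time.time())
utc_tuple = time.gmtime(now)              # seconds since the Epoch -> UTC tuple
assert calendar.timegm(utc_tuple) == now  # ... and back again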
|
How do I efficiently filter computed values within a Python list comprehension?
|
The Python list comprehension syntax makes it easy to filter values within a comprehension. For example:
result = [x**2 for x in mylist if type(x) is int]
Will return a list of the squares of integers in mylist. However, what if the test involves some (costly) computation and you want to filter on the result? One option is:
result = [expensive(x) for x in mylist if expensive(x)]
This will result in a list of non-"false" expensive(x) values, however expensive() is called twice for each x. Is there a comprehension syntax that allows you to do this test while only calling expensive once per x?
|
Came up with my own answer after a minute of thought. It can be done with nested comprehensions:
result = [y for y in (expensive(x) for x in mylist) if y]
I guess that works, though I find nested comprehensions only marginally readable.
|
Python Date Comparisons
|
I would like to find out if a particular python datetime object is older than X hours or minutes. I am trying to do something similar to:
if (datetime.now() - self.timestamp) > 100:
# Where 100 is either seconds or minutes
This generates a type error.
What is the proper way to do date time comparison in python? I already looked at WorkingWithTime which is close but not exactly what I want. I assume I just want the datetime object represented in seconds so that I can do a normal int comparison.
Please post lists of datetime best practices.
|
Use the datetime.timedelta class:
>>> from datetime import datetime, timedelta
>>> then = datetime.now() - timedelta(hours = 2)
>>> now = datetime.now()
>>> (now - then) > timedelta(days = 1)
False
>>> (now - then) > timedelta(hours = 1)
True
Your example could be written as:
if (datetime.now() - self.timestamp) > timedelta(seconds = 100):
or
if (datetime.now() - self.timestamp) > timedelta(minutes = 100):
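If you do want the difference as a plain number of seconds, the timedelta carries it (total_seconds() needs Python 2.7+; the manual formula works everywhere):
delta = now - then  # continuing the session above: about two hours
print delta.days * 86400 + delta.seconds  # ~7200, works on any version
print delta.total_seconds()               # ~7200.0, Python 2.7+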
|
Request UAC elevation from within a Python script?
|
I want my Python script to copy files on Vista. When I run it from a normal cmd.exe window, no errors are generated, yet the files are NOT copied. If I run cmd.exe "as administator" and then run my script, it works fine.
This makes sense since User Account Control (UAC) normally prevents many file system actions.
Is there a way I can, from within a Python script, invoke a UAC elevation request (those dialogs that say something like "such and such app needs admin access, is this OK?")
If that's not possible, is there a way my script can at least detect that it is not elevated so it can fail gracefully?
|
It took me a little while to get dguaraglia's answer working, so in the interest of saving others time, here's what I did to implement this idea:
import os
import sys
import win32com.shell.shell as shell
ASADMIN = 'asadmin'
if sys.argv[-1] != ASADMIN:
script = os.path.abspath(sys.argv[0])
params = ' '.join([script] + sys.argv[1:] + [ASADMIN])
shell.ShellExecuteEx(lpVerb='runas', lpFile=sys.executable, lpParameters=params)
sys.exit(0)
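To address the fallback question (detecting that you are not elevated), a minimal sketch using ctypes; IsUserAnAdmin is a documented shell32 call, though deprecated in recent Windows versions:
import ctypes

def is_admin():
    try:
        return bool(ctypes.windll.shell32.IsUserAnAdmin())
    except AttributeError:
        return False  # not on Windows, or the call is unavailable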
|
Showing the stack trace from a running Python application
|
I have this Python application that gets stuck from time to time and I can't find out where.
Is there any way to signal Python interpreter to show you the exact code that's running?
Some kind of on-the-fly stacktrace?
Related questions:
Print current call stack from a method in Python code
Check what a running process is doing: print stack trace of an uninstrumented Python program
|
I have a module I use for situations like this - where a process will be running for a long time but gets stuck sometimes for unknown and irreproducible reasons. It's a bit hacky, and only works on Unix (it requires signals):
import code, traceback, signal
def debug(sig, frame):
"""Interrupt running process, and provide a python prompt for
interactive debugging."""
d={'_frame':frame} # Allow access to frame object.
d.update(frame.f_globals) # Unless shadowed by global
d.update(frame.f_locals)
i = code.InteractiveConsole(d)
message = "Signal received : entering python shell.\nTraceback:\n"
message += ''.join(traceback.format_stack(frame))
i.interact(message)
def listen():
signal.signal(signal.SIGUSR1, debug) # Register handler
To use, just call the listen() function at some point when your program starts up (You could even stick it in site.py to have all python programs use it), and let it run. At any point, send the process a SIGUSR1 signal, using kill, or in python:
os.kill(pid, signal.SIGUSR1)
This will cause the program to break to a Python console at the point it is currently at, showing you the stack trace and letting you manipulate the variables. Use control-d (EOF) to continue running (though note that you will probably interrupt any I/O etc. at the point you signal, so it isn't fully non-intrusive).
I've another script that does the same thing, except it communicates with the running process through a pipe (to allow for debugging backgrounded processes etc.). It's a bit large to post here, but I've added it as a Python cookbook recipe.
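If you only want the traceback printed, without dropping into an interactive console, a lighter variant of the same trick:
import signal
import traceback

def dump_stack(sig, frame):
    traceback.print_stack(frame)

signal.signal(signal.SIGUSR1, dump_stack)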
|
Is there a difference between `==` and `is` in Python?
|
My Google-fu has failed me.
In Python, are the following two tests for equality equivalent (ha!)?
n = 5
# Test one.
if n == 5:
print 'Yay!'
# Test two.
if n is 5:
print 'Yay!'
Does this hold true for objects where you would be comparing instances (a list say)?
Okay, so this kind of answers my question:
L = []
L.append(1)
if L == [1]:
print 'Yay!'
# Holds true, but...
if L is [1]:
print 'Yay!'
# Doesn't.
So == tests value where is tests to see if they are the same object?
|
is will return True if two variables point to the same object, == if the objects referred to by the variables are equal.
>>> a = [1, 2, 3]
>>> b = a
>>> b is a
True
>>> b == a
True
>>> b = a[:]
>>> b is a
False
>>> b == a
True
In your case, the second test only works because Python caches small integer objects, which is an implementation detail. For larger integers, this does not work:
>>> 1000 is 10**3
False
>>> 1000 == 10**3
True
The same holds true for string literals:
>>> "a" is "a"
True
>>> "aa" is "a" * 2
True
>>> x = "a"
>>> "aa" is x * 2
False
>>> "aa" is intern(x*2)
True
Please see this question as well.
|
Efficiently match multiple regexes in Python
|
Lexical analyzers are quite easy to write when you have regexes. Today I wanted to write a simple general analyzer in Python, and came up with:
import re
import sys
class Token(object):
""" A simple Token structure.
Contains the token type, value and position.
"""
def __init__(self, type, val, pos):
self.type = type
self.val = val
self.pos = pos
def __str__(self):
return '%s(%s) at %s' % (self.type, self.val, self.pos)
class LexerError(Exception):
""" Lexer error exception.
pos:
Position in the input line where the error occurred.
"""
def __init__(self, pos):
self.pos = pos
class Lexer(object):
""" A simple regex-based lexer/tokenizer.
See below for an example of usage.
"""
def __init__(self, rules, skip_whitespace=True):
""" Create a lexer.
rules:
A list of rules. Each rule is a `regex, type`
pair, where `regex` is the regular expression used
to recognize the token and `type` is the type
of the token to return when it's recognized.
skip_whitespace:
If True, whitespace (\s+) will be skipped and not
reported by the lexer. Otherwise, you have to
specify your rules for whitespace, or it will be
flagged as an error.
"""
self.rules = []
for regex, type in rules:
self.rules.append((re.compile(regex), type))
self.skip_whitespace = skip_whitespace
self.re_ws_skip = re.compile('\S')
def input(self, buf):
""" Initialize the lexer with a buffer as input.
"""
self.buf = buf
self.pos = 0
def token(self):
""" Return the next token (a Token object) found in the
input buffer. None is returned if the end of the
buffer was reached.
In case of a lexing error (the current chunk of the
buffer matches no rule), a LexerError is raised with
the position of the error.
"""
if self.pos >= len(self.buf):
return None
else:
if self.skip_whitespace:
m = self.re_ws_skip.search(self.buf[self.pos:])
if m:
self.pos += m.start()
else:
return None
for token_regex, token_type in self.rules:
m = token_regex.match(self.buf[self.pos:])
if m:
value = self.buf[self.pos + m.start():self.pos + m.end()]
tok = Token(token_type, value, self.pos)
self.pos += m.end()
return tok
# if we're here, no rule matched
raise LexerError(self.pos)
def tokens(self):
""" Returns an iterator to the tokens found in the buffer.
"""
while 1:
tok = self.token()
if tok is None: break
yield tok
if __name__ == '__main__':
rules = [
('\d+', 'NUMBER'),
('[a-zA-Z_]\w+', 'IDENTIFIER'),
('\+', 'PLUS'),
('\-', 'MINUS'),
('\*', 'MULTIPLY'),
('\/', 'DIVIDE'),
('\(', 'LP'),
('\)', 'RP'),
('=', 'EQUALS'),
]
lx = Lexer(rules, skip_whitespace=True)
lx.input('erw = _abc + 12*(R4-623902) ')
try:
for tok in lx.tokens():
print tok
except LexerError, err:
print 'LexerError at position', err.pos
It works just fine, but I'm a bit worried that it's too inefficient. Are there any regex tricks that will allow me to write it in a more efficient / elegant way ?
Specifically, is there a way to avoid looping over all the regex rules linearly to find one that fits?
|
I suggest using the re.Scanner class. It's not documented in the standard library, but it's well worth using. Here's an example:
import re
scanner = re.Scanner([
(r"-?[0-9]+\.[0-9]+([eE]-?[0-9]+)?", lambda scanner, token: float(token)),
(r"-?[0-9]+", lambda scanner, token: int(token)),
(r" +", lambda scanner, token: None),
])
>>> scanner.scan("0 -1 4.5 7.8e3")[0]
[0, -1, 4.5, 7800.0]
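Note that scan() returns a pair; the [0] above takes the token list, and the second element is whatever trailing input did not match any rule:
tokens, remainder = scanner.scan("0 -1 4.5 7.8e3 and some junk")
print tokens     # [0, -1, 4.5, 7800.0]
print remainder  # 'and some junk'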
|
Which is more preferable to use in Python: lambda functions or nested functions ('def')?
|
I mostly use lambda functions but sometimes use nested functions that seem to provide the same behavior.
Here are some trivial examples where they functionally do the same thing if either were found within another function:
Lambda function
>>> a = lambda x : 1 + x
>>> a(5)
6
Nested function
>>> def b(x): return 1 + x
>>> b(5)
6
Are there advantages to using one over the other? (Performance? Readability? Limitations? Consistency? etc.) Does it even matter? If it doesn't, does that violate the Pythonic principle: "There should be one-- and preferably only one --obvious way to do it"?
|
If you need to assign the lambda to a name, use a def instead. defs are just syntactic sugar for an assignment, so the result is the same, and they are a lot more flexible and readable.
lambdas can be used for use once, throw away functions which won't have a name.
However, this use case is very rare. You rarely need to pass around unnamed function objects.
The builtins map() and filter() need function objects, but list comprehensions and generator expressions are generally more readable than those functions and can cover all use cases, without the need of lambdas.
For the cases you really need a small function object, you should use the operator module functions, like operator.add instead of lambda x, y: x + y
If you still need some lambda not covered, you might consider writing a def, just to be more readable. If the function is more complex than the ones at operator module, a def is probably better.
So, real world good lambda use cases are very rare.
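For example, the operator module suggestion above in practice:
import operator

print reduce(operator.add, [1, 2, 3, 4])  # 10, instead of lambda x, y: x + y
print sorted([(2, 'b'), (1, 'a')], key=operator.itemgetter(0))  # [(1, 'a'), (2, 'b')]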
|
Display number with leading zeros
|
Given:
a = 1
b = 10
c = 100
I want to display a leading zero for all numbers with less than 2 digits, i.e.:
01
10
100
|
Here you are:
print "%02d" % (1,)
Basically % is like printf or sprintf.
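Applied to the question's values (the width is a minimum, so 100 prints unchanged); str.zfill is an alternative if you are starting from strings:
for n in (1, 10, 100):
    print "%02d" % n   # 01, 10, 100
print str(1).zfill(2)  # '01'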
|
Python Library Path
|
In Ruby the library path is provided in $:, in Perl it's in @INC - how do you get the list of paths that Python searches for modules when you do an import?
|
I think you're looking for sys.path
import sys
print (sys.path)
|
Should you always favor xrange() over range()?
|
Why or why not?
|
For performance, especially when you're iterating over a large range, xrange() is usually better. However, there are still a few cases why you might prefer range():
In python 3, range() does what xrange() used to do and xrange() does not exist. If you want to write code that will run on both Python 2 and Python 3, you can't use xrange().
range() can actually be faster in some cases - eg. if iterating over the same sequence multiple times. xrange() has to reconstruct the integer object every time, but range() will have real integer objects. (It will always perform worse in terms of memory however)
xrange() isn't usable in all cases where a real list is needed. For instance, it doesn't support slices, or any list methods.
[Edit] There are a couple of posts mentioning how range() will be upgraded by the 2to3 tool. For the record, here's the output of running the tool on some sample usages of range() and xrange()
RefactoringTool: Skipping implicit fixer: buffer
RefactoringTool: Skipping implicit fixer: idioms
RefactoringTool: Skipping implicit fixer: ws_comma
--- range_test.py (original)
+++ range_test.py (refactored)
@@ -1,7 +1,7 @@
for x in range(20):
- a=range(20)
+ a=list(range(20))
b=list(range(20))
c=[x for x in range(20)]
d=(x for x in range(20))
- e=xrange(20)
+ e=range(20)
As you can see, when used in a for loop or comprehension, or where already wrapped with list(), range is left unchanged.
|
How many bytes per element are there in a Python list (tuple)?
|
For example, how much memory is required to store a list of one million (32-bit) integers?
alist = range(1000000) # or list(range(1000000)) in Python 3.0
|
"It depends." Python allocates space for lists in such a way as to achieve amortized constant time for appending elements to the list.
In practice, what this means with the current implementation is... the list always has space allocated for a power-of-two number of elements. So range(1000000) will actually allocate a list big enough to hold 2^20 elements (~1.05 million).
This is only the space required to store the list structure itself (which is an array of pointers to the Python objects for each element). A 32-bit system will require 4 bytes per element, a 64-bit system will use 8 bytes per element.
Furthermore, you need space to store the actual elements. This varies widely. For small integers (-5 to 256 currently), no additional space is needed, but for larger numbers Python allocates a new object for each integer, which takes 10-100 bytes and tends to fragment memory.
Bottom line: it's complicated and Python lists are not a good way to store large homogeneous data structures. For that, use the array module or, if you need to do vectorized math, use NumPy.
PS- Tuples, unlike lists, are not designed to have elements progressively appended to them. I don't know how the allocator works, but don't even think about using it for large data structures :-)
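A small sketch of the array module alternative mentioned above, which stores the values themselves instead of pointers to integer objects:
from array import array

a = array('i', range(1000000))  # machine-level ints, one object total
print a.itemsize                # bytes per element (typically 4 for 'i')
print a.itemsize * len(a)       # payload size, ignoring a small fixed overhead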
|
Python: SWIG vs ctypes
|
In python, under what circumstances is SWIG a better choice than ctypes for calling entry points in shared libraries? Let's assume you don't already have the SWIG interface file(s). What are the performance metrics of the two?
|
I have rich experience using SWIG. SWIG claims that it is a rapid solution for wrapping things. But in real life...
Cons:
SWIG is developed to be general, for everyone and for 20+ languages. Generally this leads to drawbacks:
- it needs configuration (SWIG .i templates), which is sometimes tricky,
- some special cases go untreated (see Python properties further down),
- performance is lacking for some languages.
Python cons:
1) Code style inconsistency. C++ and Python have very different code styles (that is obvious, certainly), and SWIG's ability to make the target code more Pythonic is very limited. As an example, it is painful to create properties from getters and setters. See this q&a
2) Lack of a broad community. SWIG has some good documentation. But if you hit something that is not in the documentation, there is no information at all. Neither blogs nor googling help, so you have to dig heavily through the SWIG-generated code in such cases... That is terrible, I must say...
Pros:
In simple cases it is really rapid, easy and straightforward.
If you produce the SWIG interface files once, you can wrap this C++ code for ANY of the other 20+ languages (!!!).
One big concern about SWIG is performance. Since version 2.0.4, SWIG includes a '-builtin' flag which makes SWIG even faster than other automated ways of wrapping. At least some benchmarks show this.
When to USE SWIG?
So I concluded for myself that there are two cases when SWIG is good to use:
1) If one needs to rapidly wrap just several functions from some C++ library for end use.
2) If one needs to wrap C++ code for several languages, or if potentially there could be a time when one needs to distribute the code for several languages. Using SWIG is reliable in this case.
Live experience
Update:
A year and a half has passed since we converted our library with SWIG.
First we made a Python version. There were several moments when we experienced troubles with SWIG - that is true. But right now we have expanded our library to Java and .NET, so we have 3 languages with 1 SWIG. And I can say that SWIG rocks in terms of saving a LOT of time.
Update 2:
We have now been using SWIG for this library for two years. SWIG is integrated into our build system. Recently we had a major API change in the C++ library, and SWIG worked perfectly. The only thing we needed to do was add several %rename directives to the .i files so that our CppCamelStyleFunctions() now look_more_pythonish in Python. First I was concerned about problems that could arise, but nothing went wrong. It was amazing: just several edits and everything was distributed across 3 languages. Now I am confident that it was a good solution to use SWIG in our case.
Update 3:
We have been using SWIG for our library for over 3 years now. Major change: the Python part was totally rewritten in pure Python. The reason is that Python is now used for the majority of applications of our library. Even if the pure Python version works more slowly than the C++ wrapping, it is more convenient for users to work with pure Python and not struggle with native libraries.
SWIG is still used for .NET and Java versions.
The main question here is: "Would we use SWIG for Python if we were starting the project from the beginning?" We would! SWIG allowed us to rapidly distribute our product across many languages. It worked for a period of time, which gave us the opportunity to better understand our users' requirements.
|
Python web development - with or without a framework
|
I am planning on porting a PHP application over to Python. The application is mostly about data collection and processing. The main application runs as a stand alone command line application. There is a web interface to the application which is basically a very light weight reporting interface.
I did not use a framework in the PHP version, but being new to Python, I am wondering if it would be advantageous to use something like Django or at the very least Genshi. The caveat is I do not want my application distribution to be overwhelmed by the framework parts I would need to distribute with the application.
Is using only the cgi import in Python the best way to go in this circumstance? I would tend to think a framework is too much overhead, but perhaps I'm not thinking in a very "python" way about them. What suggestions do you have in this scenario?
|
The command-line Python, IMO, definitely comes first. Get that to work, since that's the core of what you're doing.
The issue is that using a web framework's ORM from a command line application isn't obvious. Django provides specific instructions for using their ORM from a command-line app. Those are annoying at first, but I think they're a life-saver in the long run. I use it heavily for giant uploads of customer-supplied files.
Don't use bare CGI. It's not impossible, but too many things can go wrong, and they've all been solved by the frameworks. Why reinvent something? Just use someone else's code.
Frameworks involve learning, but no real "overhead". They're not slow. They're code you don't have to write or debug.
Learn some Python.
Do the Django tutorial.
Start to build a web app.
a. Start a Django project. Build a small application in that project.
b. Build your new model using the Django ORM. Create a Django unit test for the model. Be sure that it works. You'll be able to use the default admin pages and do a lot of playing around. Just don't build the entire web site yet.
Get your command-line app to work using Django ORM. Essentially, you have to finesse the settings file for this app to work nicely. See the settings/configuration section.
Once you've got your command line and the default admin running, you can finish
the web app.
Here's the golden rule of frameworks: It's code you don't have to write, debug or maintain. Use them.
|
What is the difference between @staticmethod and @classmethod in Python?
|
What is the difference between a function decorated with @staticmethod and one decorated with @classmethod?
|
Maybe a bit of example code will help: Notice the difference in the call signatures of foo, class_foo and static_foo:
class A(object):
def foo(self,x):
print "executing foo(%s,%s)"%(self,x)
@classmethod
def class_foo(cls,x):
print "executing class_foo(%s,%s)"%(cls,x)
@staticmethod
def static_foo(x):
print "executing static_foo(%s)"%x
a=A()
Below is the usual way an object instance calls a method. The object instance, a, is implicitly passed as the first argument.
a.foo(1)
# executing foo(<__main__.A object at 0xb7dbef0c>,1)
With classmethods, the class of the object instance is implicitly passed as the first argument instead of self.
a.class_foo(1)
# executing class_foo(<class '__main__.A'>,1)
You can also call class_foo using the class. In fact, if you define something to be
a classmethod, it is probably because you intend to call it from the class rather than from a class instance. A.foo(1) would have raised a TypeError, but A.class_foo(1) works just fine:
A.class_foo(1)
# executing class_foo(<class '__main__.A'>,1)
One use people have found for class methods is to create inheritable alternative constructors.
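For illustration, a minimal sketch of such an alternative constructor (a hypothetical class, not from the question):
class Date(object):
    def __init__(self, year, month, day):
        self.year, self.month, self.day = year, month, day
    @classmethod
    def from_string(cls, s):
        year, month, day = map(int, s.split('-'))
        return cls(year, month, day)  # cls, not Date, so subclasses inherit it

d = Date.from_string('2008-09-03')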
With staticmethods, neither self (the object instance) nor cls (the class) is implicitly passed as the first argument. They behave like plain functions except that you can call them from an instance or the class:
a.static_foo(1)
# executing static_foo(1)
A.static_foo('hi')
# executing static_foo(hi)
Staticmethods are used to group functions which have some logical connection with a class to the class.
foo is just a function, but when you call a.foo you don't just get the function,
you get a "partially applied" version of the function with the object instance a bound as the first argument to the function. foo expects 2 arguments, while a.foo only expects 1 argument.
a is bound to foo. That is what is meant by the term "bound" below:
print(a.foo)
# <bound method A.foo of <__main__.A object at 0xb7d52f0c>>
With a.class_foo, a is not bound to class_foo, rather the class A is bound to class_foo.
print(a.class_foo)
# <bound method type.class_foo of <class '__main__.A'>>
Here, with a staticmethod, even though it is a method, a.static_foo just returns
a good 'ole function with no arguments bound. static_foo expects 1 argument, and
a.static_foo expects 1 argument too.
print(a.static_foo)
# <function static_foo at 0xb7d479cc>
And of course the same thing happens when you call static_foo with the class A instead.
print(A.static_foo)
# <function static_foo at 0xb7d479cc>
|
Get last n lines of a file with Python, similar to tail
|
I'm writing a log file viewer for a web application and for that I want to paginate through the lines of the log file. The items in the file are line based with the newest item on the bottom.
So I need a tail() method that can read n lines from the bottom and supports an offset. What I came up with looks like this:
def tail(f, n, offset=0):
"""Reads a n lines from f with an offset of offset lines."""
avg_line_length = 74
to_read = n + offset
while 1:
try:
f.seek(-(avg_line_length * to_read), 2)
except IOError:
# woops. apparently file is smaller than what we want
# to step back, go to the beginning instead
f.seek(0)
pos = f.tell()
lines = f.read().splitlines()
if len(lines) >= to_read or pos == 0:
return lines[-to_read:offset and -offset or None]
avg_line_length *= 1.3
Is this a reasonable approach? What is the recommended way to tail log files with offsets?
|
This may be quicker than yours. Makes no assumptions about line length. Backs through the file one block at a time till it's found the right number of '\n' characters.
def tail( f, lines=20 ):
total_lines_wanted = lines
BLOCK_SIZE = 1024
f.seek(0, 2)
block_end_byte = f.tell()
lines_to_go = total_lines_wanted
block_number = -1
blocks = [] # blocks of size BLOCK_SIZE, in reverse order starting
# from the end of the file
while lines_to_go > 0 and block_end_byte > 0:
if (block_end_byte - BLOCK_SIZE > 0):
# read the last block we haven't yet read
f.seek(block_number*BLOCK_SIZE, 2)
blocks.append(f.read(BLOCK_SIZE))
else:
# file too small, start from the beginning
f.seek(0,0)
# only read what was not read
blocks.append(f.read(block_end_byte))
lines_found = blocks[-1].count('\n')
lines_to_go -= lines_found
block_end_byte -= BLOCK_SIZE
block_number -= 1
all_read_text = ''.join(reversed(blocks))
return '\n'.join(all_read_text.splitlines()[-total_lines_wanted:])
I don't like tricky assumptions about line length when -- as a practical matter -- you can never know things like that.
Generally, this will locate the last 20 lines on the first or second pass through the loop. If your 74 character thing is actually accurate, you make the block size 2048 and you'll tail 20 lines almost immediately.
Also, I don't burn a lot of brain calories trying to finesse alignment with physical OS blocks. Using these high-level I/O packages, I doubt you'll see any performance consequence of trying to align on OS block boundaries. If you use lower-level I/O, then you might see a speedup.
|
Key Presses in Python
|
Is it possible to make it appear to a system that a key was pressed? For example, I need to make the A key be pressed thousands of times, and it is much too time consuming to do it manually. I would like to write something to do it for me, and the only thing I know well enough is Python.
A better way to put it, I need to emulate a key press, I.E. not capture a key press.
More Info (as requested):
I am running windows XP and need to send the keys to another application.
|
Install the pywin32 extensions. Then you can do the following:
import win32com.client as comclt
wsh= comclt.Dispatch("WScript.Shell")
wsh.AppActivate("Notepad") # select another application
wsh.SendKeys("a") # send the keys you want
Search for documentation of the WScript.Shell object (I believe installed by default in all Windows XP installations). You can start here, perhaps.
EDIT: Sending F11
import win32com.client as comctl
wsh = comctl.Dispatch("WScript.Shell")
# Google Chrome window title
wsh.AppActivate("icanhazip.com")
wsh.SendKeys("{F11}")
|
Is there something like Python's getattr() in C#?
|
Is there something like Python's getattr() in C#? I would like to create a window by reading a list which contains the names of controls to put on the window.
|
There is also Type.InvokeMember.
public static class ReflectionExt
{
public static object GetAttr(this object obj, string name)
{
Type type = obj.GetType();
BindingFlags flags = BindingFlags.Instance |
BindingFlags.Public |
BindingFlags.GetProperty;
return type.InvokeMember(name, flags, Type.DefaultBinder, obj, null);
}
}
Which could be used like:
object value = ReflectionExt.GetAttr(obj, "PropertyName");
or (as an extension method):
object value = obj.GetAttr("PropertyName");
|
How can I read the RGB value of a given pixel in Python?
|
If I open an image with open("image.jpg"), how can I get the RGB values of a pixel, if I have the coordinates of the pixel?
Then how can I do the reverse of this? Starting with a blank graphic, 'write' a pixel with a certain RGB value?
It would be so much better if I didn't have to download any additional libraries.
|
It's probably best to use the Python Image Library to do this which I'm afraid is a separate download.
The easiest way to do what you want is via the load() method on the Image object which returns a pixel access object which you can manipulate like an array:
from PIL import Image
im = Image.open("dead_parrot.jpg") #Can be many different formats.
pix = im.load()
print im.size #Get the width and height of the image for iterating over
print pix[x,y] #Get the RGBA Value of the a pixel of an image
pix[x,y] = value # Set the RGBA Value of the image (tuple)
Alternatively, look at ImageDraw which gives a much richer API for creating images.
|
Pure Python XSLT library
|
Is there an XSLT library that is pure Python?
Installing libxml2+libxslt or any similar C libraries is a problem on some of the platforms I need to support.
I really only need basic XSLT support, and speed is not a major issue.
|
Unfortunately there are no pure-python XSLT processors at the moment. If you need something that is more platform independent, you may want to use a Java-based XSLT processor like Saxon. 4Suite is working on a pure-python XPath parser, but it doesn't look like a pure XSLT processor will be out for some time. Perhaps it would be best to use some of Python's functional capabilities to try and approximate the existing stylesheet or look into the feasibility of using Java instead.
|
Is it feasible to compile Python to machine code?
|
How feasible would it be to compile Python (possibly via an intermediate C representation) into machine code?
Presumably it would need to link to a Python runtime library, and any parts of the Python standard library which were Python themselves would need to be compiled (and linked in) too.
Also, you would need to bundle the Python interpreter if you wanted to do dynamic evaluation of expressions, but perhaps a subset of Python that didn't allow this would still be useful.
Would it provide any speed and/or memory usage advantages? Presumably the startup time of the Python interpreter would be eliminated (although shared libraries would still need loading at startup).
|
As @Greg Hewgill says, there are good reasons why this is not always possible. However, certain kinds of code (like very algorithmic code) can be turned into "real" machine code.
There are several options:
Use Psyco, which emits machine code dynamically. You should choose carefully which methods/functions to convert, though.
Use Cython, which is a Python-like language that is compiled into a Python C extension.
Use PyPy, which has a translator from RPython (a restricted subset of Python that does not support some of the most "dynamic" features of Python) to C or LLVM. Note that PyPy is still highly experimental and not all extensions will be present.
After that, you can use one of the existing packages (freeze, Py2exe, PyInstaller) to put everything into one binary.
All in all: there is no general answer for your question. If you have Python code that is performance-critical, try to use as much builtin functionality as possible (or ask a "How do I make my Python code faster" question). If that doesn't help, try to identify the code and port it to C (or Cython) and use the extension.
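Going back to the Psyco option, a minimal sketch of binding a single hot function; Psyco is Python 2 only, and the function here is made up for illustration:
import psyco

def hot_loop(n):
    # A tight numeric loop: the kind of algorithmic code Psyco handles well.
    total = 0
    for i in xrange(n):
        total += i * i
    return total

psyco.bind(hot_loop)  # compile only this function to machine code
print hot_loop(10 ** 6)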
|
listing all functions in a python module
|
I have a python module installed on my system and I'd like to be able to see what functions/classes/methods are available in it.
I want to call the doc function on each one. In Ruby I can do something like ClassName.methods to get a list of all the methods available on that class. Is there something similar in Python?
eg. something like:
from somemodule import foo
print foo.methods # or whatever is the correct method to call
|
You can use dir(module) to see all available methods/attributes. Also check out PyDocs.
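To go one step further and print the documentation of each member, a sketch using the inspect module; somemodule is a placeholder for whatever module you are examining:
import inspect
import somemodule

# getmembers() with the callable predicate returns (name, object) pairs.
for name, member in inspect.getmembers(somemodule, callable):
    print name
    print inspect.getdoc(member)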
|
Authenticating against active directory using python + ldap
|
How do I authenticate against AD using Python + LDAP. I'm currently using the python-ldap library and all it is producing is tears.
I can't even bind to perform a simple query:
import sys
import ldap
Server = "ldap://my-ldap-server"
DN, Secret, un = sys.argv[1:4]
Base = "dc=mydomain,dc=co,dc=uk"
Scope = ldap.SCOPE_SUBTREE
Filter = "(&(objectClass=user)(sAMAccountName="+un+"))"
Attrs = ["displayName"]
l = ldap.initialize(Server)
l.protocol_version = 3
print l.simple_bind_s(DN, Secret)
r = l.search(Base, Scope, Filter, Attrs)
Type,user = l.result(r,60)
Name,Attrs = user[0]
if hasattr(Attrs, 'has_key') and Attrs.has_key('displayName'):
displayName = Attrs['displayName'][0]
print displayName
sys.exit()
Running this with the arguments myusername@mydomain.co.uk, password and username gives me one of two errors:
Invalid Credentials - When I mistype or intentionally use wrong credentials it fails to authenticate.
ldap.INVALID_CREDENTIALS: {'info': '80090308: LdapErr: DSID-0C090334, comment: AcceptSecurityContext error, data 52e, vece', 'desc': 'Invalid credentials'}
Or
ldap.OPERATIONS_ERROR: {'info': '00000000: LdapErr: DSID-0C090627, comment: In order to perform this operation a successful bind must be completed on the connection., data 0, vece', 'desc': 'Operations error'}
What am I missing out to bind properly?
I am getting the same errors on fedora and windows.
|
I was missing
l.set_option(ldap.OPT_REFERRALS, 0)
from the initialization. Setting it before the bind stops python-ldap from chasing the referrals that Active Directory returns, which is what produces the operations error.
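Putting that together with the question's code, a sketch of the working order of calls; the point is that the option is set on the connection before the bind (the credentials here are placeholders):
import ldap

l = ldap.initialize("ldap://my-ldap-server")
l.set_option(ldap.OPT_REFERRALS, 0)  # do not chase the referrals AD returns
l.protocol_version = 3
l.simple_bind_s("myusername@mydomain.co.uk", "password")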
|
How to list only top level directories in Python?
|
I want to be able to list only the directories inside some folder.
This means I don't want filenames listed, nor do I want additional sub-folders.
Let's see if an example helps. In the current directory we have:
>>> os.listdir(os.getcwd())
['cx_Oracle-doc', 'DLLs', 'Doc', 'include', 'Lib', 'libs', 'LICENSE.txt', 'mod_p
ython-wininst.log', 'NEWS.txt', 'pymssql-wininst.log', 'python.exe', 'pythonw.ex
e', 'README.txt', 'Removemod_python.exe', 'Removepymssql.exe', 'Scripts', 'tcl',
'Tools', 'w9xpopen.exe']
However, I don't want filenames listed. Nor do I want sub-folders such as \Lib\curses. Essentially what I want works with the following:
>>> for root, dirnames, filenames in os.walk('.'):
... print dirnames
... break
...
['cx_Oracle-doc', 'DLLs', 'Doc', 'include', 'Lib', 'libs', 'Scripts', 'tcl', 'Tools']
However, I'm wondering if there's a simpler way of achieving the same results. I get the impression that using os.walk only to return the top level is inefficient/too much.
|
os.walk('.').next()[1]
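If the .next() trick feels too opaque, an equivalent sketch built from os.listdir() and os.path.isdir():
import os

dirs = [name for name in os.listdir('.')
        if os.path.isdir(os.path.join('.', name))]
print dirs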
|
How do I find what is using memory in a Python process in a production system?
|
My production system occasionally exhibits a memory leak I have not been able to reproduce in a development environment. I've used a Python memory profiler (specifically, Heapy) with some success in the development environment, but it can't help me with things I can't reproduce, and I'm reluctant to instrument our production system with Heapy because it takes a while to do its thing and its threaded remote interface does not work well in our server.
What I think I want is a way to dump a snapshot of the production Python process (or at least gc.get_objects), and then analyze it offline to see where it is using memory. How do I get a core dump of a python process like this? Once I have one, how do I do something useful with it?
|
Using Python's gc garbage collector interface and sys.getsizeof() it's possible to dump all the python objects and their sizes. Here's the code I'm using in production to troubleshoot a memory leak:
import gc
import os
import sys
import cPickle
import psutil

def memory_dump():
    dump = open("memory.pickle", 'wb')
    for obj in gc.get_objects():
        i = id(obj)
        size = sys.getsizeof(obj, 0)
        # referrers = [id(o) for o in gc.get_referrers(obj) if hasattr(o, '__class__')]
        referents = [id(o) for o in gc.get_referents(obj) if hasattr(o, '__class__')]
        if hasattr(obj, '__class__'):
            cls = str(obj.__class__)
            cPickle.dump({'id': i, 'class': cls, 'size': size, 'referents': referents}, dump)

rss = psutil.Process(os.getpid()).get_memory_info().rss
# Dump variables if using more than 100MB of memory
if rss > 100 * 1024 * 1024:
    memory_dump()
    os.abort()
Note that I'm only saving data from objects that have a __class__ attribute because those are the only objects I care about. It should be possible to save the complete list of objects, but you will need to take care choosing other attributes. Also, I found that getting the referrers for each object was extremely slow so I opted to save only the referents. Anyway, after the crash, the resulting pickled data can be read back like this:
dump = open("memory.pickle", 'rb')
try:
    while True:
        obj = cPickle.load(dump)
        # ... examine obj['id'], obj['class'], obj['size'], obj['referents'] ...
except EOFError:
    pass
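Building on that loop, a sketch of one useful offline analysis: totalling the recorded sizes per class to see which type dominates the heap (the twenty-row cut-off is arbitrary):
import cPickle
from collections import defaultdict

totals = defaultdict(int)
dump = open("memory.pickle", 'rb')
try:
    while True:
        obj = cPickle.load(dump)
        totals[obj['class']] += obj['size']
except EOFError:
    pass

# Print the twenty classes with the largest total size.
for cls, size in sorted(totals.items(), key=lambda kv: -kv[1])[:20]:
    print cls, size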
|
How do I wrap a string in a file in Python?
|
How do I create a file-like object (same duck type as File) with the contents of a string?
|
Use the StringIO module. For example:
>>> from cStringIO import StringIO
>>> f = StringIO('foo')
>>> f.read()
'foo'
I use cStringIO (which is faster), but note that it doesn't accept Unicode strings that cannot be encoded as plain ASCII strings. (You can switch to StringIO by changing "from cStringIO" to "from StringIO".)
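As a usage example, a sketch of handing such an object to code that expects a file, here the csv module:
import csv
from cStringIO import StringIO

f = StringIO("a,b,c\n1,2,3\n")
for row in csv.reader(f):
    print row  # ['a', 'b', 'c'] then ['1', '2', '3']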
|
Is there an easy way to populate SlugField from CharField?
|
class Foo(models.Model):
title = models.CharField(max_length=20)
slug = models.SlugField()
Is there a built-in way to get the slug field to autopopulate based on the title? Ideally both in the Admin and outside of the Admin.
|
For the Admin in Django 1.0 and up, you'd need to use
prepopulated_fields = {'slug': ('title',), }
in your admin.py
Your key in the prepopulated_fields dictionary is the field you want filled, and the value is a tuple of fields you want concatenated.
Outside of admin, you can use the slugify function in your views. In templates, you can use the |slugify filter.
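Outside the admin you can also populate the field automatically by overriding save(); a sketch for the Foo model from the question:
from django.db import models
from django.template.defaultfilters import slugify

class Foo(models.Model):
    title = models.CharField(max_length=20)
    slug = models.SlugField()

    def save(self, *args, **kwargs):
        # Only fill the slug when it hasn't been set explicitly.
        if not self.slug:
            self.slug = slugify(self.title)
        super(Foo, self).save(*args, **kwargs)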
There is also this package which will take care of this automatically: https://pypi.python.org/pypi/django-autoslug
|
overloading __init__ in python
|
Let's say I have a class that has a member called data which is a list.
I want to be able to initialize the class with, for example, a filename (which contains data to initialize the list) or with an actual list.
What's your technique for doing this?
Do you just check the type by looking at __class__?
Is there some trick I might be missing?
I'm used to C++ where overloading by argument type is easy.
Thanks.
|
A much neater way to get 'alternate constructors' is to use classmethods. For instance:
>>> class MyData:
... def __init__(self, data):
... "Initialize MyData from a sequence"
... self.data = data
...
... @classmethod
... def fromfilename(cls, filename):
... "Initialize MyData from a file"
... data = open(filename).readlines()
... return cls(data)
...
... @classmethod
... def fromdict(cls, datadict):
... "Initialize MyData from a dict's items"
... return cls(datadict.items())
...
>>> MyData([1, 2, 3]).data
[1, 2, 3]
>>> MyData.fromfilename("/tmp/foobar").data
['foo\n', 'bar\n', 'baz\n']
>>> MyData.fromdict({"spam": "ham"}).data
[('spam', 'ham')]
The reason it's neater is that there is no doubt about what type is expected, and you aren't forced to guess at what the caller intended for you to do with the datatype it gave you. The problem with isinstance(x, basestring) is that there is no way for the caller to tell you, for instance, that even though the type is not a basestring, you should treat it as a string (and not another sequence.) And perhaps the caller would like to use the same type for different purposes, sometimes as a single item, and sometimes as a sequence of items. Being explicit takes all doubt away and leads to more robust and clearer code.
|
What limitations have closures in Python compared to language X closures?
|
Where X is any programming language (C#, Javascript, Lisp, Perl, Ruby, Scheme, etc) which supports some flavour of closures.
Some limitations are mentioned in the Closures in Python (compared to Ruby's closures), but the article is old and many limitations do not exist in modern Python any more.
Seeing a code example for a concrete limitation would be great.
Related questions:
Can you explain closures (as they relate to Python)?
What is a "Closure"?
How does a javascript closure work?
|
The most important limitation, currently, is that you cannot assign to an outer-scope variable. In other words, closures are read-only:
>>> def outer(x):
... def inner_reads():
... # Will return outer's 'x'.
... return x
... def inner_writes(y):
... # Will assign to a local 'x', not the outer 'x'
... x = y
... def inner_error(y):
... # Will produce an error: 'x' is local because of the assignment,
... # but we use it before it is assigned to.
... tmp = x
... x = y
... return tmp
... return inner_reads, inner_writes, inner_error
...
>>> inner_reads, inner_writes, inner_error = outer(5)
>>> inner_reads()
5
>>> inner_writes(10)
>>> inner_reads()
5
>>> inner_error(10)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 11, in inner_error
UnboundLocalError: local variable 'x' referenced before assignment
A name that gets assigned to in a local scope (a function) is always local, unless declared otherwise. While there is the 'global' declaration to declare a variable global even when it is assigned to, there is no such declaration for enclosed variables -- yet. In Python 3.0, there is (will be) the 'nonlocal' declaration that does just that.
You can work around this limitation in the meantime by using a mutable container type:
>>> def outer(x):
... x = [x]
... def inner_reads():
... # Will return outer's x's first (and only) element.
... return x[0]
... def inner_writes(y):
... # Will look up outer's x, then mutate it.
... x[0] = y
... def inner_error(y):
... # Will now work, because 'x' is not assigned to, just referenced.
... tmp = x[0]
... x[0] = y
... return tmp
... return inner_reads, inner_writes, inner_error
...
>>> inner_reads, inner_writes, inner_error = outer(5)
>>> inner_reads()
5
>>> inner_writes(10)
>>> inner_reads()
10
>>> inner_error(15)
10
>>> inner_reads()
15
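For completeness, a sketch of what the 'nonlocal' version will look like in Python 3.0; this does not run on current Python 2.x:
def outer(x):
    def inner_reads():
        return x
    def inner_writes(y):
        nonlocal x  # rebind outer's 'x' instead of creating a local
        x = y
    return inner_reads, inner_writes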
|