Column comparison in Django queries
I have the following model:

    class Car(models.Model):
        make = models.CharField(max_length=40)
        mileage_limit = models.IntegerField()
        mileage = models.IntegerField()

I want to select all cars where mileage is less than mileage_limit, so in SQL it would be something like:

    select * from car where mileage < mileage_limit;

Using a Q object in Django, I know I can compare columns with any value/object, e.g. if I wanted to get cars that have mileage less than 100,000 it would be something like:

    cars = Car.objects.filter(Q(mileage__lt=100000))

Instead of a fixed value I would like to use the column name (in my case it is mileage_limit). So I would like to be able to do something like:

    cars = Car.objects.filter(Q(mileage__lt=mileage_limit))

However, this results in an error, since it is expecting a value/object, not a column name. Is there a way to compare two columns using a Q object? I feel like it would be a very commonly used feature and there should be an easy way to do this; however, I couldn't find anything about it in the documentation.

Note: this is a simplified example, for which the use of a Q object might seem unnecessary. However, the real model has many more columns, and the real query is more complex; that's why I am using Q. Here in this question I just wanted to figure out specifically how to compare columns using Q.

EDIT: Apparently after the release of Django 1.1 it will be possible to do the following:

    cars = Car.objects.filter(mileage__lt=F('mileage_limit'))

I'm still not sure if F is supposed to work together with Q like this:

    cars = Car.objects.filter(Q(mileage__lt=F('mileage_limit')))
You can't do this right now without custom SQL. The Django devs are working on an F() function that would make it possible: #7210 - F() syntax, design feedback required.
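Until F() lands, QuerySet.extra() can push the comparison into raw SQL; a minimal sketch, assuming the default table/column names Django generates for the model above:

    # Hedged workaround: the where clause is passed through as raw SQL,
    # so the database compares the two columns directly.
    cars = Car.objects.extra(where=['mileage < mileage_limit'])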
What is the most "pythonic" way to iterate over a list in chunks?
I have a Python script which takes as input a list of integers, which I need to work with four integers at a time. Unfortunately, I don't have control of the input, or I'd have it passed in as a list of four-element tuples. Currently, I'm iterating over it this way:

    for i in xrange(0, len(ints), 4):
        # dummy op for example code
        foo += ints[i] * ints[i + 1] + ints[i + 2] * ints[i + 3]

It looks a lot like "C-think", though, which makes me suspect there's a more pythonic way of dealing with this situation. The list is discarded after iterating, so it needn't be preserved. Perhaps something like this would be better?

    while ints:
        foo += ints[0] * ints[1] + ints[2] * ints[3]
        ints[0:4] = []

Still doesn't quite "feel" right, though. :-/

Related question: How do you split a list into evenly sized chunks in Python?
    def chunker(seq, size):
        return (seq[pos:pos + size] for pos in xrange(0, len(seq), size))

Simple. Easy. Fast. Works with any sequence:

    text = "I am a very, very helpful text"

    for group in chunker(text, 7):
        print repr(group),
    # 'I am a ' 'very, v' 'ery hel' 'pful te' 'xt'

    print '|'.join(chunker(text, 10))
    # I am a ver|y, very he|lpful text

    animals = ['cat', 'dog', 'rabbit', 'duck', 'bird', 'cow', 'gnu', 'fish']

    for group in chunker(animals, 3):
        print group
    # ['cat', 'dog', 'rabbit']
    # ['duck', 'bird', 'cow']
    # ['gnu', 'fish']
For Python support, what company would be best to get hosting from?
I want to be able to run WSGI apps but my current hosting restricts it. Does anybody know a company that can accommodate my requirements?
My automatic response would be WebFaction. I haven't personally hosted with them, but they are primarily Python-oriented (founded by the guy who wrote CherryPy, for example, and as far as I know they were the first to roll out Python 3.0 support).
What is the fastest way to draw an image from discrete pixel values in Python?
I wish to draw an image based on computed pixel values, as a means to visualize some data. Essentially, I wish to take a 2-dimensional matrix of color triplets and render it. Do note that this is not image processing, since I'm not transforming an existing image nor doing any sort of whole-image transformations, and it's also not vector graphics as there is no pre-determined structure to the image I'm rendering - I'm probably going to be producing amorphous blobs of color one pixel at a time. I need to render images about 1kx1k pixels for now, but something scalable would be useful. The final target format is PNG or any other lossless format. I've been using PIL at the moment via ImageDraw's draw.point, and I was wondering, given the very specific and relatively basic features I require, is there any faster library available?
If you have numpy and scipy available (and if you are manipulating large arrays in Python, I would recommend them), then the scipy.misc.pilutil.toimage function is very handy. A simple example:

    import numpy as np
    import scipy.misc as smp

    # Create a 1024x1024x3 array of 8 bit unsigned integers
    data = np.zeros((1024, 1024, 3), dtype=np.uint8)

    data[512, 512] = [254, 0, 0]  # Makes the middle pixel red
    data[512, 513] = [0, 0, 255]  # Makes the next pixel blue

    img = smp.toimage(data)  # Create a PIL image
    img.show()               # View in default viewer

The nice thing is toimage copes with different data types very well, so a 2D array of floating point numbers gets sensibly converted to greyscale, etc. Documentation for scipy's misc utilities is at http://docs.scipy.org/doc/scipy/reference/misc.html.
Open document with default application in Python
I need to be able to open a document using its default application in Windows and Mac OS. Basically, I want to do the same thing that happens when you double click on the document icon in Explorer or Finder. What is the best way to do this in Python?
Use the subprocess module available on Python 2.4+, not os.system(), so you don't have to deal with shell escaping.

    import os
    import subprocess
    import sys

    if sys.platform.startswith('darwin'):
        subprocess.call(('open', filepath))
    elif os.name == 'nt':
        os.startfile(filepath)
    elif os.name == 'posix':
        subprocess.call(('xdg-open', filepath))

The double parentheses are because subprocess.call() wants a sequence as its first argument, so we're using a tuple here. On Linux systems with Gnome there is also a gnome-open command that does the same thing, but xdg-open is the freedesktop.org standard and works across Linux desktop environments.
How do I set permissions (attributes) on a file in a ZIP file using Python's zipfile module?
When I extract files from a ZIP file created with the Python zipfile module, all the files are read-only, not writable, etc. The file is being created and extracted under Linux with Python 2.5.2. As best I can tell, I need to set the ZipInfo.external_attr property for each file, but this doesn't seem to be documented anywhere I could find. Can anyone enlighten me?
This seems to work (thanks Evan, putting it here so the line is in context):

    buffer = "path/filename.zip"  # zip filename to write (or file-like object)
    name = "folder/data.txt"      # name of file inside zip
    bytes = "blah blah blah"      # contents of file inside zip

    zip = zipfile.ZipFile(buffer, "w", zipfile.ZIP_DEFLATED)
    info = zipfile.ZipInfo(name)
    info.external_attr = 0777 << 16L  # give full access to included file
    zip.writestr(info, bytes)
    zip.close()

I'd still like to see something that documents this... An additional resource I found was a note on the ZIP file format: http://www.pkware.com/documents/casestudies/APPNOTE.TXT
What is an alternative to execfile in Python 3.0?
It seems they removed from Python 3.0 all the easy ways to quickly load a script file - both execfile() and reload(). Is there an obvious alternative I'm missing?
You are just supposed to read the file and exec the code yourself. 2to3 currently replaces

    execfile("somefile.py", global_vars, local_vars)

with

    with open("somefile.py") as f:
        code = compile(f.read(), "somefile.py", 'exec')
        exec(code, global_vars, local_vars)

(The compile call isn't strictly needed, but it associates the filename with the code object, making debugging a little easier.)

See:

http://docs.python.org/release/2.7.3/library/functions.html#execfile
http://docs.python.org/release/3.2.3/library/functions.html#compile
http://docs.python.org/release/3.2.3/library/functions.html#exec
Python: Is there a way to determine the encoding of a text file?
I know there is something buried in here. But I was just wondering if there is an actual way built into Python to determine text file encoding? Thanks for your help :) Edit: As a side question, it can be ignored if you want but why is the type of encoding not put into the file so it could be detected easier?
Correctly detecting the encoding every time is impossible. (From the chardet FAQ:)

However, some encodings are optimized for specific languages, and languages are not random. Some character sequences pop up all the time, while other sequences make no sense. A person fluent in English who opens a newspaper and finds "txzqJv 2!dasd0a QqdKjvz" will instantly recognize that that isn't English (even though it is composed entirely of English letters). By studying lots of "typical" text, a computer algorithm can simulate this kind of fluency and make an educated guess about a text's language.

There is the chardet library that uses that study to try to detect encoding. chardet is a port of the auto-detection code in Mozilla.

You can also use UnicodeDammit. It will try the following methods:

- An encoding discovered in the document itself: for instance, in an XML declaration or (for HTML documents) an http-equiv META tag. If Beautiful Soup finds this kind of encoding within the document, it parses the document again from the beginning and gives the new encoding a try. The only exception is if you explicitly specified an encoding, and that encoding actually worked: then it will ignore any encoding it finds in the document.
- An encoding sniffed by looking at the first few bytes of the file. If an encoding is detected at this stage, it will be one of the UTF-* encodings, EBCDIC, or ASCII.
- An encoding sniffed by the chardet library, if you have it installed.
- UTF-8
- Windows-1252
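A minimal chardet usage sketch (the filename is a placeholder; detect() returns a dict with 'encoding' and 'confidence' keys):

    import chardet

    raw = open('somefile.txt', 'rb').read()  # read bytes, not text
    guess = chardet.detect(raw)              # e.g. {'encoding': 'utf-8', 'confidence': 0.99}
    text = raw.decode(guess['encoding'])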
Python: import the containing package
In a module residing inside a package, I have the need to use a function defined within the __init__.py of that package. How can I import the package within the module that resides within the package, so I can use that function? Importing __init__ inside the module will not import the package, but instead a module named __init__, leading to two copies of things with different names... Is there a pythonic way to do this?
Also, starting in Python 2.5, relative imports are possible. e.g.:

    from . import foo

Quoting from http://docs.python.org/tutorial/modules.html#intra-package-references:

Starting with Python 2.5, in addition to the implicit relative imports described above, you can write explicit relative imports with the from module import name form of import statement. These explicit relative imports use leading dots to indicate the current and parent packages involved in the relative import. From the surrounding module for example, you might use:

    from . import echo
    from .. import formats
    from ..filters import equalizer
Python Split String
Let's say we have Zaptoit:685158:zaptoit@hotmail.com. How do you split it so that only 685158:zaptoit@hotmail.com is left?
    >>> s = 'Zaptoit:685158:zaptoit@hotmail.com'
    >>> s.split(':', 1)[1]
    '685158:zaptoit@hotmail.com'
How to implement a python REPL that nicely handles asynchronous output?
I have a Python-based app that can accept a few commands in a simple read-eval-print-loop. I'm using raw_input('> ') to get the input. On Unix-based systems, I also import readline to make things behave a little better. All this is working fine. The problem is that there are asynchronous events coming in, and I'd like to print output as soon as they happen. Unfortunately, this makes things look ugly. The "> " string doesn't show up again after the output, and if the user is halfway through typing something, it chops their text in half. It should probably redraw the user's text-in-progress after printing something. This seems like it must be a solved problem. What's the proper way to do this? Also note that some of my users are Windows-based. TIA Edit: The accepted answer works under Unixy platforms (when the readline module is available), but if anyone knows how to make this work under Windows, it would be much appreciated!
Maybe something like this will do the trick:

    #!/usr/bin/env python2.6

    from __future__ import print_function

    import readline
    import threading

    PROMPT = '> '

    def interrupt():
        print()  # Don't want to end up on the same line the user is typing on.
        print('Interrupting cow -- moo!')
        print(PROMPT, readline.get_line_buffer(), sep='', end='')

    def cli():
        while True:
            cli = str(raw_input(PROMPT))

    if __name__ == '__main__':
        threading.Thread(target=cli).start()
        threading.Timer(2, interrupt).start()

I don't think that stdin is thread-safe, so you can end up losing characters to the interrupting thread (which the user will have to retype at the end of the interrupt). I exaggerated the interrupt delay with the two-second timer. The readline.get_line_buffer call won't display the characters that get lost, so it all turns out all right. Note that stdout itself isn't thread-safe, so if you've got multiple interrupting threads of execution, this can still end up looking gross.
IronPython Webframework
There seem to be many excellent web frameworks for Python. Has anyone used any of these (Pylons, Web2Py, Django) with IronPython?
Django has been run on IronPython before, but as a proof-of-concept. I know the IronPython team are interested in Django support as a metric for Python-compatibility. Somewhat related is the possibility to use IronPython with ASP.NET and ASP.NET MVC, which is probably more mature.
Duplicating model instances and their related objects in Django / Algorithm for recursively duplicating an object
I've models for Books, Chapters and Pages. They are all written by a User:

    from django.db import models

    class Book(models.Model):
        author = models.ForeignKey('auth.User')

    class Chapter(models.Model):
        author = models.ForeignKey('auth.User')
        book = models.ForeignKey(Book)

    class Page(models.Model):
        author = models.ForeignKey('auth.User')
        book = models.ForeignKey(Book)
        chapter = models.ForeignKey(Chapter)

What I'd like to do is duplicate an existing Book and update its User to someone else. The wrinkle is I would also like to duplicate all model instances related to the Book - all its Chapters and Pages as well! Things get really tricky when you look at a Page - not only will the new Pages need to have their author field updated but they will also need to point to the new Chapter objects! Does Django support an out-of-the-box way of doing this? What would a generic algorithm for duplicating a model look like?

Cheers, John

Update: The classes given above are just an example to illustrate the problem I'm having!
This no longer works in Django 1.3 as CollectedObjects was removed. See changeset 14507.

I posted my solution on Django Snippets. It's based heavily on the django.db.models.query.CollectedObjects code used for deleting objects:

    from django.db.models.query import CollectedObjects
    from django.db.models.fields.related import ForeignKey

    def duplicate(obj, value, field):
        """
        Duplicate all related objects of `obj` setting `field` to `value`.
        If one of the duplicate objects has an FK to another duplicate
        object update that as well. Return the duplicate copy of `obj`.
        """
        collected_objs = CollectedObjects()
        obj._collect_sub_objects(collected_objs)
        related_models = collected_objs.keys()
        root_obj = None

        # Traverse the related models in reverse deletion order.
        for model in reversed(related_models):
            # Find all FKs on `model` that point to a `related_model`.
            fks = []
            for f in model._meta.fields:
                if isinstance(f, ForeignKey) and f.rel.to in related_models:
                    fks.append(f)
            # Replace each `sub_obj` with a duplicate.
            sub_obj = collected_objs[model]
            for pk_val, obj in sub_obj.iteritems():
                for fk in fks:
                    fk_value = getattr(obj, "%s_id" % fk.name)
                    # If this FK has been duplicated then point to the duplicate.
                    if fk_value in collected_objs[fk.rel.to]:
                        dupe_obj = collected_objs[fk.rel.to][fk_value]
                        setattr(obj, fk.name, dupe_obj)
                # Duplicate the object and save it.
                obj.id = None
                setattr(obj, field, value)
                obj.save()
                if root_obj is None:
                    root_obj = obj

        return root_obj
How do I unload (reload) a Python module?
I have a long-running Python server and would like to be able to upgrade a service without restarting the server. What's the best way to do this?

    if foo.py has changed:
        unimport foo  <-- How do I do this?
        import foo
        myfoo = foo.Foo()
You can reload a module when it has already been imported by using the reload builtin function in Python 2:

    import foo

    while True:
        # Do some things.
        if is_changed(foo):
            foo = reload(foo)

In Python 3, reload was moved to the imp module. In 3.4, imp was deprecated in favor of importlib, and reload was added to the latter. When targeting 3 or later, either reference the appropriate module when calling reload or import it.

I think that this is what you want. Web servers like Django's development server use this so that you can see the effects of your code changes without restarting the server process itself.

To quote from the docs:

Python modules' code is recompiled and the module-level code reexecuted, defining a new set of objects which are bound to names in the module's dictionary. The init function of extension modules is not called a second time. As with all other objects in Python the old objects are only reclaimed after their reference counts drop to zero. The names in the module namespace are updated to point to any new or changed objects. Other references to the old objects (such as names external to the module) are not rebound to refer to the new objects and must be updated in each namespace where they occur if that is desired.

As you noted in your question, you'll have to reconstruct Foo objects if the Foo class resides in the foo module.
Is there a pure Python Lucene?
The ruby folks have Ferret. Someone know of any similar initiative for Python? We're using PyLucene at current, but I'd like to investigate moving to pure Python searching.
Whoosh is a new project which is similar to Lucene, but is pure Python.
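A minimal sketch of indexing and searching with Whoosh (the directory and field names are arbitrary choices for illustration):

    import os

    from whoosh.index import create_in
    from whoosh.fields import Schema, TEXT
    from whoosh.qparser import QueryParser

    schema = Schema(title=TEXT(stored=True), content=TEXT)
    if not os.path.exists("indexdir"):
        os.mkdir("indexdir")
    ix = create_in("indexdir", schema)

    writer = ix.writer()
    writer.add_document(title=u"First document", content=u"hello world")
    writer.commit()

    with ix.searcher() as searcher:
        query = QueryParser("content", ix.schema).parse(u"hello")
        for hit in searcher.search(query):
            print hit["title"]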
Is there a label/goto in Python?
Is there a goto or any equivalent in Python to be able to jump to a specific line of code?
No, Python does not support labels and goto, if that is what you're after. It's a (highly) structured programming language.
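What a goto usually expresses - "jump back and retry" or "bail out early" - maps onto loops and functions instead; a small hedged illustration (the function and prompt are made up):

    def read_positive_int(prompt):
        # "goto retry" becomes a loop:
        while True:
            text = raw_input(prompt)
            if text.isdigit():
                return int(text)  # "goto done" becomes a return
            print "not a number, try again"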
How do I stop a program when an exception is raised in Python?
I need to stop my program when an exception is raised in Python. How do I implement this?
    import sys

    try:
        print("stuff")
    except:
        sys.exit(0)  # sys.exit() raises SystemExit, terminating the program
unicode() vs. str.decode() for a utf8 encoded byte string (python 2.x)
Is there any reason to prefer unicode(somestring, 'utf8') as opposed to somestring.decode('utf8')? My only thought is that .decode() is a bound method so python may be able to resolve it more efficiently, but correct me if I'm wrong.
It's easy to benchmark it:

    >>> from timeit import Timer
    >>> ts = Timer("s.decode('utf-8')", "s = 'ééé'")
    >>> ts.timeit()
    8.9185450077056885
    >>> tu = Timer("unicode(s, 'utf-8')", "s = 'ééé'")
    >>> tu.timeit()
    2.7656929492950439
    >>>

Obviously, unicode() is faster. FWIW, I don't know where you get the impression that methods would be faster - it's quite the contrary.
Installing Python 3.0 on Cygwin
The Question What is the correct way to install Python 3.0 alongside Python 2.x using Cygwin? Notes I already have a working copy of Cygwin, and Python 2.x is installed within Cygwin (/lib/python2.x, not c:\python2.x). Also, I would like to be able to call python 3 separately (and only intentionally) by leaving python pointing to Python 2.x to preserve existing dependencies. I would like to use python30 or some alternative. Any pointers to guides on the subject would be much appreciated. I cannot seem to find one either at the cygwin site or python.org.
As of yesterday (Wed 25 July 2012), Python 3.2.3 is included in the standard Cygwin installer. Just run Cygwin's setup.exe again (download it from cygwin.com again if you need to), and you should be able to select and install it like any other package.

This will install as python3, leaving any existing 2.x install in place:

    $ python -V
    Python 2.6.8

    $ python3 -V
    Python 3.2.3

    $ ls -l $(which python) $(which python3)
    lrwxrwxrwx 1 me Domain Users 13 Jun 21 15:12 /usr/bin/python -> python2.6.exe
    lrwxrwxrwx 1 me root         14 Jul 26 10:56 /usr/bin/python3 -> python3.2m.exe
Using "with" statement for CSV files in Python
Is it possible to use the with statement directly with CSV files? It seems natural to be able to do something like this:

    import csv
    with csv.reader(open("myfile.csv")) as reader:
        # do things with reader

But csv.reader doesn't provide the __enter__ and __exit__ methods, so this doesn't work. I can however do it in two steps:

    import csv
    with open("myfile.csv") as f:
        reader = csv.reader(f)
        # do things with reader

Is this second way the ideal way to do it? Why wouldn't they make csv.reader directly compatible with the with statement?
The primary use of the with statement is exception-safe cleanup of an object used in the statement. with makes sure that files are closed, locks are released, contexts are restored, etc.

Does csv.reader have things to clean up in case of exception?

I'd go with:

    with open("myfile.csv") as f:
        for row in csv.reader(f):
            # process row

You don't need to submit a patch to use csv.reader and the with statement together:

    >>> import contextlib
    >>> help(contextlib.contextmanager)
    Help on function contextmanager in module contextlib:

    contextmanager(func)
        @contextmanager decorator.

        Typical usage:

            @contextmanager
            def some_generator(<arguments>):
                <setup>
                try:
                    yield <value>
                finally:
                    <cleanup>

        This makes this:

            with some_generator(<arguments>) as <variable>:
                <body>

        equivalent to this:

            <setup>
            try:
                <variable> = <value>
                <body>
            finally:
                <cleanup>

Here's a concrete example of how I've used it: curses_screen.
How can I subtract a day from a Python date?
I have a Python datetime.datetime object. What is the best way to subtract one day?
You can use a timedelta object:

    from datetime import datetime, timedelta

    d = datetime.today() - timedelta(days=1)  # days=1 subtracts one day
Driving Excel from Python in Windows
We have various spreadsheets that employ deliciously complicated macros and third party extensions to produce complicated models. I'm working on a project that involves slightly tweaking various inputs and seeing the results. Rather than doing this by hand or writing VBA, I'd like to see if I can write a python script to drive this. In other words, the python script will start up, load the excel sheet, and then interact with the sheet by making minor changes in some cells and seeing how they affect other cells. So, my question is twofold: What is the best library to use to drive excel from python in such fashion? Where's the best documentation/examples on using said library? Cheers, /YGA
For controlling Excel, use pywin32, like @igowen suggests.

Note that it is possible to use static dispatch. Use makepy.py from the pywin32 project to create a python module with the python wrappers. Using the generated wrappers simplifies development, since for instance ipython gives you tab completion and help during development.

Static dispatch example:

    x:> makepy.py "Microsoft Excel 11.0 Object Library"
    ...
    Generating...
    Importing module

    x:> ipython

    > from win32com.client import Dispatch
    > excel = Dispatch("Excel.Application")
    > wb = excel.Workbooks.Append()
    > range = wb.Sheets[0].Range("A1")
    > range.[Press Tab]
    range.Activate           range.Merge
    range.AddComment         range.NavigateArrow
    range.AdvancedFilter     range.NoteText
    ...
    range.GetOffset          range.__repr__
    range.GetResize          range.__setattr__
    range.GetValue           range.__str__
    range.Get_Default        range.__unicode__
    range.GoalSeek           range._get_good_object_
    range.Group              range._get_good_single_object_
    range.Insert             range._oleobj_
    range.InsertIndent       range._prop_map_get_
    range.Item               range._prop_map_put_
    range.Justify            range.coclass_clsid
    range.ListNames          range.__class__

    > range.Value = 32
    ...

Documentation links:

- The O'Reilly book Python Programming on Win32 has an Integrating with Excel chapter.
- Same book, free sample chapter Advanced Python and COM covers makepy in detail.
- Tutorials
- win32com documentation, I suggest you read this first.
Java "Virtual Machine" vs. Python "Interpreter" parlance?
It's seems rare to read of a Python "virtual machine" while in Java "virtual machine" is used all the time. Both interpret byte codes, why call one a virtual machine and the other an interpreter?
A virtual machine is a virtual computing environment with a specific set of atomic, well-defined instructions that are supported independent of any specific language, and it is generally thought of as a sandbox unto itself. The VM is analogous to the instruction set of a specific CPU and tends to work at a more fundamental level, with very basic building blocks of such instructions (or byte codes) that are independent of the next. An instruction executes deterministically based only on the current state of the virtual machine and does not depend on information elsewhere in the instruction stream at that point in time.

An interpreter, on the other hand, is more sophisticated in that it is tailored to parse a stream of some syntax that is of a specific language and of a specific grammar that must be decoded in the context of the surrounding tokens. You can't look at each byte or even each line in isolation and know exactly what to do next. The tokens in the language can't be taken in isolation like they can relative to the instructions (byte codes) of a VM.

A Java compiler converts Java language into a byte-code stream no differently than a C compiler converts C language programs into assembly code. An interpreter, on the other hand, doesn't really convert the program into any well-defined intermediate form; it just takes the program actions as part of the process of interpreting the source.

Another test of the difference between a VM and an interpreter is whether you think of it as being language-independent. What we know as the Java VM is not really Java-specific. You could make a compiler from other languages that results in byte codes that can be run on the JVM. On the other hand, I don't think we would really think of "compiling" some language other than Python into Python for interpretation by the Python interpreter.

Because of the sophistication of the interpretation process, it can be relatively slow - specifically, parsing and identifying the language tokens, etc., and understanding the context of the source to be able to undertake the execution process within the interpreter. To help accelerate such interpreted languages, this is where we can define intermediate forms of pre-parsed, pre-tokenized source code that is more readily directly interpreted. This sort of binary form is still interpreted at execution time; it is just starting from a much less human-readable form to improve performance. However, the logic executing that form is not a virtual machine, because those codes still can't be taken in isolation - the context of the surrounding tokens still matters; they are just now in a different, more computer-efficient form.
IronPython on ASP.NET MVC
Has anyone tried ASP.NET MVC using IronPython? Having done a lot of Python development recently, it would be nice to continue with the language as I go into a potential ASP.NET MVC project. I'm especially interested in exploiting the dynamic aspects of Python with .NET features such as LINQ and want to know if this will be possible. The other route that may be viable for certain dynamic programming would be C# 4.0 with its dynamic keyword. Thoughts, experiences?
Yes, there is an MVC example from the DLR team. You might also be interested in Spark.
Pre-populate an inline FormSet?
I'm working on an attendance entry form for a band. My idea is to have a section of the form to enter event information for a performance or rehearsal. Here's the model for the event table:

    class Event(models.Model):
        event_id = models.AutoField(primary_key=True)
        date = models.DateField()
        event_type = models.ForeignKey(EventType)
        description = models.TextField()

Then I'd like to have an inline FormSet that links the band members to the event and records whether they were present, absent, or excused:

    class Attendance(models.Model):
        attendance_id = models.AutoField(primary_key=True)
        event_id = models.ForeignKey(Event)
        member_id = models.ForeignKey(Member)
        attendance_type = models.ForeignKey(AttendanceType)
        comment = models.TextField(blank=True)

Now, what I'd like to do is to pre-populate this inline FormSet with entries for all the current members and default them to being present (around 60 members). Unfortunately, Django doesn't allow initial values in this case. Any suggestions?
So, you're not going to like the answer, partly because I'm not yet done writing the code and partly because it's a lot of work.

What you need to do, as I discovered when I ran into this myself, is:

1. Spend a lot of time reading through the formset and model-formset code to get a feel for how it all works (not helped by the fact that some of the functionality lives on the formset classes, and some of it lives in factory functions which spit them out). You will need this knowledge in the later steps.
2. Write your own formset class which subclasses from BaseInlineFormSet and accepts initial. The really tricky bit here is that you must override __init__(), and you must make sure that it calls up to BaseFormSet.__init__() rather than using the direct parent or grandparent __init__() (since those are BaseInlineFormSet and BaseModelFormSet, respectively, and neither of them can handle initial data).
3. Write your own subclass of the appropriate admin inline class (in my case it was TabularInline) and override its get_formset method to return the result of inlineformset_factory() using your custom formset class (a rough sketch of steps 2 and 3 follows this list).
4. On the actual ModelAdmin subclass for the model with the inline, override add_view and change_view, and replicate most of the code, but with one big change: build the initial data your formset will need, and pass it to your custom formset (which will be returned by your ModelAdmin's get_formsets() method).

I've had a few productive chats with Brian and Joseph about improving this for future Django releases; at the moment, the way the model formsets work just makes this more trouble than it's usually worth, but with a bit of API cleanup I think it could be made extremely easy.
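A very rough, untested sketch of steps 2 and 3, against 1.0-era formset internals (the class names are mine, and it deliberately elides the model-formset bookkeeping - queryset handling and so on - that the answer warns about):

    from django.contrib import admin
    from django.forms.formsets import BaseFormSet
    from django.forms.models import BaseInlineFormSet, inlineformset_factory

    class InitialInlineFormSet(BaseInlineFormSet):
        # Hypothetical: accept `initial` and route it straight to
        # BaseFormSet.__init__, skipping the parents that drop it.
        def __init__(self, data=None, files=None, instance=None,
                     save_as_new=False, prefix=None, initial=None):
            self.instance = instance
            self.save_as_new = save_as_new
            BaseFormSet.__init__(self, data, files, prefix=prefix,
                                 initial=initial)

    class AttendanceInline(admin.TabularInline):
        model = Attendance

        def get_formset(self, request, obj=None, **kwargs):
            # Swap in the custom formset class from step 2.
            return inlineformset_factory(Event, Attendance,
                                         formset=InitialInlineFormSet,
                                         **kwargs)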
Generic catch for python
I have some extremely weird behavior that seems to result in silent exceptions. How can I write a general try catch where I can debug all exceptions? Something along the lines of:

    try:
        # something that fails
    except e:
        print e

A bit more about the problem at hand in detail: I have a Django app that on my comp (Ubuntu Linux 8.10) works fine both through runserver and mod-python. On the deploy server (Ubuntu Linux 8.10) it works fine through runserver, but fails via apache in mod-python. I have reduced the cause down to a part of the app that uses Berkeley DB (bsddb.db), and secondary keys. The callback method for secondary keys uses pickle to format the keys. It fails when I call pickle on a single value. However, it only fails when I use cPickle, and using pickle on the same values outside the callback function also works. I just want to know why it fails with cPickle.
Exceptions are already printed by default before program termination. If you want to send the error somewhere else (not print it) you can do this:

    import sys
    import traceback

    try:
        something()
    except Exception as e:
        send_somewhere(traceback.format_exception(*sys.exc_info()))
        raise  # reraises the exception

Note that this format using the as keyword is for python > 2.6. The old way was:

    except Exception, e:
Python: Callbacks, Delegates, ... ? What is common?
Just want to know what's the common way to react to events in Python. There are several ways in other languages, like callback functions, delegates, listener structures and so on. Is there a common way? Which default language concepts or additional modules are there, and which can you recommend?
Personally I don't see a difference between callbacks, listeners, and delegates.

The observer pattern (a.k.a. listeners, a.k.a. "multiple callbacks") is easy to implement - just hold a list of observers, and add or remove callables from it. These callables can be functions, bound methods, or classes with the __call__ magic method. All you have to do is define the interface you expect from these - e.g. do they receive any parameters.

    class Foo(object):
        def __init__(self):
            self._bar_observers = []

        def add_bar_observer(self, observer):
            self._bar_observers.append(observer)

        def notify_bar(self, param):
            for observer in self._bar_observers:
                observer(param)

    def observer(param):
        print "observer(%s)" % param

    class Baz(object):
        def observer(self, param):
            print "Baz.observer(%s)" % param

    class CallableClass(object):
        def __call__(self, param):
            print "CallableClass.__call__(%s)" % param

    baz = Baz()

    foo = Foo()
    foo.add_bar_observer(observer)         # function
    foo.add_bar_observer(baz.observer)     # bound method
    foo.add_bar_observer(CallableClass())  # callable instance

    foo.notify_bar(3)
How to create python bytes object from long hex string?
I have a long sequence of hex digits in a string, such as 000000000000484240FA063DE5D0B744ADBED63A81FAEA390000C8428640A43D5005BD44 only much longer, several kilobytes. Is there a builtin way to convert this to a bytes object in python 2.6/3?
You can do this with the hex codec. ie:

    >>> s = '000000000000484240FA063DE5D0B744ADBED63A81FAEA390000C8428640A43D5005BD44'
    >>> s.decode('hex')
    '\x00\x00\x00\x00\x00\x00HB@\xfa\x06=\xe5\xd0\xb7D\xad\xbe\xd6:\x81\xfa\xea9\x00\x00\xc8B\x86@\xa4=P\x05\xbdD'
sqlalchemy, turning a list of IDs to a list of objects
I have a sequence of IDs I want to retrieve. It's simple:

    session.query(Record).filter(Record.id.in_(seq)).all()

Is there a better way to do it?
Your code is absolutely fine. IN is like a bunch of X=Y clauses joined with OR and is pretty fast in contemporary databases. However, if your list of IDs is long, you could make the query a bit more efficient by passing a sub-query returning the list of IDs.
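A hedged illustration of the sub-query variant (the Friend model and its columns are invented for the example; in_() accepts a Query and renders it as a subselect):

    # Build the ID list inside the database instead of shipping it from Python.
    subq = session.query(Friend.friend_id).filter(
        Friend.user_id == current_user_id)
    records = session.query(Record).filter(Record.id.in_(subq)).all()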
convert a string of bytes into an int (python)
How can I convert a string of bytes into an int in python? Say like this: 'y\xcc\xa6\xbb'

I came up with a clever/stupid way of doing it:

    sum(ord(c) << (i * 8) for i, c in enumerate('y\xcc\xa6\xbb'[::-1]))

I know there has to be something builtin or in the standard library that does this more simply... This is different from converting a string of hex digits, for which you can use int(xxx, 16), but instead I want to convert a string of actual byte values.

UPDATE: I kind of like James' answer a little better because it doesn't require importing another module, but Greg's method is faster:

    >>> from timeit import Timer
    >>> Timer('struct.unpack("<L", "y\xcc\xa6\xbb")[0]', 'import struct').timeit()
    0.36242198944091797
    >>> Timer("int('y\xcc\xa6\xbb'.encode('hex'), 16)").timeit()
    1.1432669162750244

My hacky method:

    >>> Timer("sum(ord(c) << (i * 8) for i, c in enumerate('y\xcc\xa6\xbb'[::-1]))").timeit()
    2.8819329738616943

FURTHER UPDATE: Someone asked in comments what's the problem with importing another module. Well, importing a module isn't necessarily cheap, take a look:

    >>> Timer("""import struct\nstruct.unpack(">L", "y\xcc\xa6\xbb")[0]""").timeit()
    0.98822188377380371

Including the cost of importing the module negates almost all of the advantage that this method has. I believe that this will only include the expense of importing it once for the entire benchmark run; look what happens when I force it to reload every time:

    >>> Timer("""reload(struct)\nstruct.unpack(">L", "y\xcc\xa6\xbb")[0]""", 'import struct').timeit()
    68.474128007888794

Needless to say, if you're doing a lot of executions of this method per import then this becomes proportionally less of an issue. It's also probably I/O cost rather than CPU, so it may depend on the capacity and load characteristics of the particular machine.
In Python 3.2 and later, use

    >>> int.from_bytes(b'y\xcc\xa6\xbb', byteorder='big')
    2043455163

or

    >>> int.from_bytes(b'y\xcc\xa6\xbb', byteorder='little')
    3148270713

according to the endianness of your byte-string. This also works for bytestring-integers of arbitrary length, and for two's-complement signed integers by specifying signed=True. See the docs for from_bytes.
Cleanest way to run/debug python programs in windows
Python for Windows by default comes with IDLE, which is the barest-bones IDE I've ever encountered. For editing files, I'll stick to emacs, thank you very much. However, I want to run programs in some other shell than the crappy windows command prompt, which can't be widened to more than 80 characters. IDLE lets me run programs in it if I open the file, then hit F5 (to go Run -> Run Module). I would rather just "run" the command, rather than going through the rigmarole of closing the emacs file, loading the IDLE file, etc. A scan of google and the IDLE docs doesn't seem to give much help about using IDLE's shell but not its IDE. Any advice from the stack overflow guys? Ideally I'd like either:

- advice on running programs using IDLE's shell
- advice on other ways to run python programs in windows outside of IDLE or "cmd".

Thanks, /YGA
For an interactive interpreter, nothing beats IPython. It's superb. It's also free and open source. On Windows, you'll want to install the readline library. Instructions for that are on the IPython installation documentation. Winpdb is my Python debugger of choice. It's free, open source, and cross platform (using wxWidgets for the GUI). I wrote a tutorial on how to use Winpdb to help get people started on using graphical debuggers.
GAE - How to live with no joins?
Example Problem:

Entities:

- User contains name and a list of friends (User references)
- Blog Post contains title, content, date and Writer (User)

Requirement: I want a page that displays the title and a link to the blog of the last 10 posts by a user's friends. I would also like the ability to keep paging back through older entries.

SQL Solution: So in sql land it would be something like:

    select * from blog_post where user_id in
        (select friend_id from user_friend where user_id = :userId)
    order by date

GAE solutions I can think of are:

- Load the user, loop through the list of friends and load their latest blog posts. Finally merge all the blog posts to find the latest 10 blog entries.
- In a blog post have a list of all users that have the writer as a friend. This would mean a simple read but would result in quota overload when adding a friend who has lots of blog posts.

I don't believe either of these solutions will scale. I'm sure others have hit this problem, but I've searched, watched google io videos, read others' code... What am I missing?
If you look at how the SQL solution you provided will be executed, it will go basically like this:

1. Fetch a list of friends for the current user
2. For each user in the list, start an index scan over recent posts
3. Merge-join all the scans from step 2, stopping when you've retrieved enough entries

You can carry out exactly the same procedure yourself in App Engine, by using the Query instances as iterators and doing a merge join over them (see the sketch below). You're right that this will not scale well to large numbers of friends, but it suffers from exactly the same issues the SQL implementation has, it just doesn't disguise them as well: fetching the latest 20 (for example) entries costs roughly O(n log n) work, where n is the number of friends.
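A compact sketch of step 3 in modern Python (heapq.merge only grew key/reverse in 3.5; on the GAE runtime of the day you would hand-roll the heap). The `date` attribute and the newest-first ordering of each per-friend query are assumptions:

    import heapq
    import itertools

    def latest_posts(per_friend_queries, limit=10):
        # Each element of per_friend_queries iterates one friend's posts,
        # newest first; merging keeps the combined stream newest first.
        merged = heapq.merge(*per_friend_queries,
                             key=lambda post: post.date, reverse=True)
        return list(itertools.islice(merged, limit))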
Django: how do you serve media / stylesheets and link to them within templates
Variations of this question have been asked, but I'm still unable to get my stylesheets to load correctly when my templates are rendered. I'm attempting to serve static media from the Django process during development - which is strongly discouraged in production, I'm aware. I'll post my configuration and my template, and hopefully someone can help me to understand where I'm going wrong.

Note that I did try to follow the example on the Django project website, however it doesn't mention how to refer to your stylesheets from a template. I've also tried many different variations of the same thing, so my code/settings may be a little off from what's described.

settings.py

    MEDIA_ROOT = 'D:/Dev Tools/django_projects/dso/media'
    MEDIA_URL = '/media/'
    ADMIN_MEDIA_PREFIX = '/media/'

urls.py

    from django.conf.urls.defaults import *
    from django.conf import settings
    from django.contrib import admin

    admin.autodiscover()

    urlpatterns = patterns('',
        (r'^admin/(.*)', admin.site.root),
        (r'^ovramt/$', 'dso.ovramt.views.index'),
    )

    if settings.DEBUG:
        urlpatterns += patterns('',
            (r'^media/(?P<path>.*)$', 'django.views.static.serve',
             {'document_root': settings.MEDIA_ROOT}),
        )

Within my template:

    <head>
        <title> {% block title %} DSO Template {% endblock %} </title>
        <meta http-equiv="Content-Type" content="text/html;charset=utf-8" >
        <link rel="stylesheet" type="text/css" href="../media/styles.css">
    </head>

I assure you, the files (templates/media) are in the correct directory on my file system. If there's any extra information I need to provide, please post a comment.

Edit: One of the problems I was having was the use of a '/' prepending my links. If the forward slash is prepended, the link is opened from the root of the site. If there is no forward slash, the link is opened at the current level. An example: www.example.com/application/ has a link "/app2/" and a link "app3/". app2 will open at www.example.com/app2/ and app3 will open at www.example.com/application/app3/. This was confusing me I think.
I just had to figure this out myself.

settings.py:

    MEDIA_ROOT = 'C:/Server/Projects/project_name/static/'
    MEDIA_URL = '/static/'
    ADMIN_MEDIA_PREFIX = '/media/'

urls.py:

    from django.conf import settings
    ...
    if settings.DEBUG:
        urlpatterns += patterns('',
            (r'^static/(?P<path>.*)$', 'django.views.static.serve',
             {'document_root': settings.MEDIA_ROOT}),
        )

template file:

    <link rel="stylesheet" type="text/css" href="/static/css/style.css" />

With the file located here: "C:/Server/Projects/project_name/static/css/style.css"
How can I check for Python version in a program that uses new language features?
If I have a Python script that requires at least a particular version of Python, what is the correct way to fail gracefully when an earlier version of Python is used to launch the script? How do I get control early enough to issue an error message and exit?

For example, I have a program that uses the ternary operator (new in 2.5) and "with" blocks (new in 2.6). I wrote a simple little interpreter-version checker routine which is the first thing the script would call... except it doesn't get that far. Instead, the script fails during python compilation, before my routines are even called. Thus the user of the script sees some very obscure syntax error tracebacks - which pretty much require an expert to deduce that it is simply the case of running the wrong version of Python.

I know how to check the version of Python. The issue is that some syntax is illegal in older versions of Python. Consider this program:

    import sys
    if sys.version_info < (2, 4):
        raise "must use python 2.5 or greater"
    else:
        # syntax error in 2.4, ok in 2.5
        x = 1 if True else 2
        print x

When run under 2.4, I want this result:

    $ ~/bin/python2.4 tern.py
    must use python 2.5 or greater

and not this result:

    $ ~/bin/python2.4 tern.py
      File "tern.py", line 5
        x = 1 if True else 2
             ^
    SyntaxError: invalid syntax

(Channeling for a coworker.)
You can test using eval:

    try:
        eval("1 if True else 2")
    except SyntaxError:
        # doesn't have ternary

Also, with is available in Python 2.5, just add from __future__ import with_statement.

EDIT: to get control early enough, you could split the code into different .py files and check compatibility in the main file before importing (e.g. in __init__.py in a package):

    # __init__.py

    # Check compatibility
    try:
        eval("1 if True else 2")
    except SyntaxError:
        raise ImportError("requires ternary support")

    # import from another module
    from impl import *
Possible values from sys.platform?
What are the possible return values from the following command?

    import sys
    print sys.platform

I know there are a lot of possibilities, so I'm mainly interested in the "main" ones (Windows, Linux, Mac OS).
    .---------------------.----------.
    | System              | Value    |
    |---------------------|----------|
    | Linux (2.x and 3.x) | linux2   |
    | Windows             | win32    |
    | Windows/Cygwin      | cygwin   |
    | Mac OS X            | darwin   |
    | OS/2                | os2      |
    | OS/2 EMX            | os2emx   |
    | RiscOS              | riscos   |
    | AtheOS              | atheos   |
    | FreeBSD 7           | freebsd7 |
    | FreeBSD 8           | freebsd8 |
    '---------------------'----------'
How can I deploy a Perl/Python/Ruby script without installing an interpreter?
I want to write a piece of software which is essentially a regex data scrubber. I am going to take a contact list in CSV and remove all non-word characters and such from the person's name. This project has Perl written all over it but my client base is largely non-technical and installing Perl on Windows would not be worth it for them. Any ideas on how I can use a Perl/Python/Ruby type language without all the headaches of getting the interpreter on their computer? Thought about web for a second but it would not work for business reasons.
You can get Windows executables in all three languages.

As usual with Perl, there's more than one way to do it:

- PAR Packer (free/open-source)
- perl2exe (shareware)
- PerlApp (part of the Perl Dev Kit from ActiveState, commercial)

Python:

- py2exe
- PyInstaller

Ruby:

- RubyScript2Exe
- OCRA
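For the Python route, a py2exe build is just a tiny setup script; a sketch (scrubber.py stands in for your actual script name):

    # setup.py - minimal py2exe configuration
    from distutils.core import setup
    import py2exe  # registers the "py2exe" command

    setup(console=['scrubber.py'])

Then running `python setup.py py2exe` drops a standalone scrubber.exe (plus support files) into a dist/ directory.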
Python's re module - saving state?
One of the biggest annoyances I find in Python is the inability of the re module to save its state without explicitly doing it in a match object. Often, one needs to parse lines, and if they match a certain regex, take out values from them by the same regex. I would like to write code like this:

    if re.match('foo (\w+) bar (\d+)', line):
        # do stuff with .group(1) and .group(2)
    elif re.match('baz whoo_(\d+)', line):
        # do stuff with .group(1)
    # etc.

But unfortunately it's impossible to get to the matched object of the previous call to re.match, so this is written like this:

    m = re.match('foo (\w+) bar (\d+)', line)
    if m:
        # do stuff with m.group(1) and m.group(2)
    else:
        m = re.match('baz whoo_(\d+)', line)
        if m:
            # do stuff with m.group(1)

Which is rather less convenient and gets really unwieldy as the list of elifs grows longer.

A hackish solution would be to wrap re.match and re.search in my own objects that keep state somewhere. Has anyone used this? Are you aware of semi-standard implementations (in large frameworks or something)? What other workarounds can you recommend? Or perhaps, am I just misusing the module and could achieve my needs in a cleaner way?

Thanks in advance
You might like this module which implements the wrapper you are looking for.
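The wrapper itself is only a few lines; a hedged sketch of the idea (class and method names are mine; `line` is the variable from the question):

    import re

    class Matcher(object):
        """Remember the last match so it can be queried after the test."""
        def __init__(self):
            self.m = None

        def match(self, pattern, line):
            self.m = re.match(pattern, line)
            return self.m is not None

        def group(self, *args):
            return self.m.group(*args)

    matcher = Matcher()
    if matcher.match(r'foo (\w+) bar (\d+)', line):
        pass  # use matcher.group(1) and matcher.group(2)
    elif matcher.match(r'baz whoo_(\d+)', line):
        pass  # use matcher.group(1)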
What is the difference between encode/decode?
I've never been sure that I understand the difference between str/unicode decode and encode. I know that str().decode() is for when you have a string of bytes that you know has a certain character encoding, given that encoding name it will return a unicode string. I know that unicode().encode() converts unicode chars into a string of bytes according to a given encoding name. But I don't understand what str().encode() and unicode().decode() are for. Can anyone explain, and possibly also correct anything else I've gotten wrong above? EDIT: Several answers give info on what .encode does on a string, but no-one seems to know what .decode does for unicode.
The decode method of unicode strings really doesn't have any applications at all (unless you have some non-text data in a unicode string for some reason - see below). It is mainly there for historical reasons, I think. In Python 3 it is completely gone.

unicode().decode() will perform an implicit encoding of s using the default (ascii) codec. Verify this like so:

    >>> s = u'ö'
    >>> s.decode()
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    UnicodeEncodeError: 'ascii' codec can't encode character u'\xf6' in position 0: ordinal not in range(128)

    >>> s.encode('ascii')
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    UnicodeEncodeError: 'ascii' codec can't encode character u'\xf6' in position 0: ordinal not in range(128)

The error messages are exactly the same.

For str().encode() it's the other way around - it attempts an implicit decoding of s with the default encoding:

    >>> s = 'ö'
    >>> s.decode('utf-8')
    u'\xf6'
    >>> s.encode()
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 0: ordinal not in range(128)

Used like this, str().encode() is also superfluous.

But there is another application of the latter method that is useful: there are encodings that have nothing to do with character sets, and thus can be applied to 8-bit strings in a meaningful way:

    >>> s.encode('zip')
    'x\x9c;\xbc\r\x00\x02>\x01z'

You are right, though: the ambiguous usage of "encoding" for both these applications is... awkward. Again, with separate byte and string types in Python 3, this is no longer an issue.
Django: Increment blog entry view count by one. Is this efficient?
I have the following code in my index view.

    latest_entry_list = Entry.objects.filter(is_published=True).order_by('-date_published')[:10]
    for entry in latest_entry_list:
        entry.views = entry.views + 1
        entry.save()

If there are ten (the limit) rows returned from the initial query, will the save issue 10 separate update calls to the database, or is Django "smart" enough to issue just one update call? Is there a more efficient method to achieve this result?
You can use F() objects for this. Here is how you import F:

    from django.db.models import F

New in Django 1.1. Calls to update can also use F() objects to update one field based on the value of another field in the model. This is especially useful for incrementing counters based upon their current value.

    Entry.objects.filter(is_published=True).update(views=F('views') + 1)

Although you can't do an update on a sliced query set...

edit: actually you can...

This can be done completely in the django ORM. You need two SQL queries:

1. Do your filter and collect a list of primary keys
2. Do an update on a non-sliced query set of items matching any of those primary keys.

Getting the non-sliced query set is the hard bit. I wondered about using in_bulk but that returns a dictionary, not a query set. One would usually use Q objects to do complex OR-type queries and that will work, but pk__in does the job much more simply.

    latest_entry_ids = Entry.objects.filter(is_published=True)\
                            .order_by('-date_published')\
                            .values_list('id', flat=True)[:10]
    non_sliced_query_set = Entry.objects.filter(pk__in=latest_entry_ids)
    n = non_sliced_query_set.update(views=F('views') + 1)
    print n or 0, 'items updated'

Due to the way that django executes queries lazily, this results in just 2 database hits, no matter how many items are updated.
What is __init__.py for?
What is __init__.py for in a Python source directory?
It's a part of a package. Here's the documentation.

The __init__.py files are required to make Python treat the directories as containing packages; this is done to prevent directories with a common name, such as string, from unintentionally hiding valid modules that occur later (deeper) on the module search path. In the simplest case, __init__.py can just be an empty file, but it can also execute initialization code for the package or set the __all__ variable, described later.
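For illustration, the kind of layout the docs have in mind (package and module names are placeholders):

    sound/                  # top-level package
        __init__.py         # makes "sound" importable as a package
        effects/            # subpackage
            __init__.py
            echo.py         # imported as sound.effects.echo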
How do I create a webpage with buttons that invoke various Python scripts on the system serving the webpage?
I'm a hobbyist (and fairly new) programmer who has written several useful (to me) scripts in python to handle various system automation tasks that involve copying, renaming, and downloading files amongst other sundry activities. I'd like to create a web page served from one of my systems that would merely present a few buttons which would allow me to initiate these scripts remotely. The problem is that I don't know where to start investigating how to do this. Let's say I have a script called: *file_arranger.py* What do I need to do to have a webpage execute that script? This isn't meant for public consumption, so anything lightweight would be great. For bonus points, what do I need to look into to provide the web user with the output from such scripts? edit: The first answer made me realize I forgot to include that this is a Win2k3 system.
This page on the python site has a good description and example of what you need to do to run a python CGI script. Start out with the simplest case first. Just make a short script that prints html.

    #!/usr/bin/python
    # on windows change to your path to the python exe

    print "Content-Type: text/html"    # HTML is following
    print                              # blank line, end of headers
    print "<TITLE>CGI script output</TITLE>"
    print "<H1>This is my first CGI script</H1>"
    print "Hello, world!"

When you try this the first time, the hardest part is usually figuring out where to put the script and how to make the web server recognize and run it. If you are using an apache web server, take a look at these configuration steps.

Once you have this simple script working, you will just need to add an html form and button tag and use the action property to point it to the scripts you want to run.
How do I determine the size of an object in Python?
In C, we can find the size of an int, char, etc. I want to know how to get the size of objects like a string, integer, etc. in Python.

Related question: How many bytes per element are there in a Python list (tuple)?

I am using an XML file which contains size fields that specify the size of a value. I must parse this XML and do my coding. When I want to change the value of a particular field, I will check the size field of that value. Here I want to compare whether the new value that I'm going to enter is of the same size as in the XML. I need to check the size of the new value. In the case of a string I can say it's the length. But in the case of int, float, etc. I am confused.
Just use the sys.getsizeof function defined in the sys module.

sys.getsizeof(object[, default]): Return the size of an object in bytes. The object can be any type of object. All built-in objects will return correct results, but this does not have to hold true for third-party extensions as it is implementation specific. The default argument allows to define a value which will be returned if the object type does not provide means to retrieve the size and would cause a TypeError. getsizeof calls the object's __sizeof__ method and adds an additional garbage collector overhead if the object is managed by the garbage collector.

Usage example, in python 3.0:

    >>> import sys
    >>> x = 2
    >>> sys.getsizeof(x)
    14
    >>> sys.getsizeof(sys.getsizeof)
    32
    >>> sys.getsizeof('this')
    38
    >>> sys.getsizeof('this also')
    48

If you are in python < 2.6 and don't have sys.getsizeof you can use this extensive module instead. Never used it though.
Making a method private in a python subclass
Is it possible to make a public method private in a subclass? I don't want other classes extending this one to be able to call some of the methods. Here is an example:

    class A:
        def __init__(self):
            # do something here

        def method(self):
            # some code here

    class B(A):
        def __init__(self):
            A.__init__(self)
            # additional initialization goes here

        def method(self):
            # this overrides the method (and possibly makes it private here)

From this point forward, I don't want any class that extends from B to be able to call method. Is this possible?

EDIT: a "logical" reason for this is that I don't want users to call methods in the wrong order.
Contrary to popular fashion on this subject, there are legitimate reasons to have a distinction between public, private, and protected members, whether you work in Python or a more traditional OOP environment. Many times, it comes to be that you develop auxiliary methods for a particularly long-winded task at some level of object specialization. Needless to say, you really don't want these methods inherited by any subclass because they make no sense in the specialized context and shouldn't even be visible; and yet they are visible, and they diminish the utility of things like tab completion, object navigators, and other system software, because everything at all different levels of abstraction gets flattened and thrown together. These programming aids are not trivial, mind you. They are only trivial if you're a student and enjoy doing the same thing a million times just because you're learning how.

Python historically developed in such a way that to implement the public/private distinction became increasingly difficult due to ideological inertia and compatibility concerns. That's the plain truth. It would be a real headache for everyone to change what they've been doing. Consequently, we now have a million Python fans out there, all of whom have read the same one or two original articles deciding unequivocally that the public/private distinction is "unpythonic". These people, for lack of critical thought or fairness to widespread, common practices, instantly use this occasion to accrete a predictable slew of apologetics - De Defensione Serpentis - which I suspect arises not from a rational selection of the via pythonis (the pythonic way) but from neglect of other languages, which they either choose not to use, are not skilled at using, or are not able to use because of work.

As someone already said, the best you can do in Python to produce an effect similar to private methods is to prepend the method name with __ (two underscores). On the other hand, the only thing this accomplishes, practically speaking, is the insertion of a transmogrified attribute name in the object's __dict__. For instance, say you have the following class definition:

    class Dog(object):
        def __bark(self):
            print 'woof'

If you run dir(Dog()), you'll see a strange member, called _Dog__bark. Indeed, the only reason this trick exists is to circumvent the problem I described before: namely, preventing inheritance, overloading, and replacement of super methods.

Hopefully there will be some standardized implementation of private methods in the future, when people realize that tissue need not have access to the methods by which the individual cell replicates DNA, and the conscious mind need not constantly figure out how to repair its tissues and internal organs.
Is it worth using Python's re.compile?
Is there any benefit in using compile for regular expressions in Python? h = re.compile('hello') h.match('hello world') vs re.match('hello', 'hello world')
I've had a lot of experience running a compiled regex 1000s of times versus compiling on-the-fly, and have not noticed any perceivable difference. Obviously, this is anecdotal, and certainly not a great argument against compiling, but I've found the difference to be negligible. EDIT: After a quick glance at the actual Python 2.5 library code, I see that Python internally compiles AND CACHES regexes whenever you use them anyway (including calls to re.match()), so you're really only changing WHEN the regex gets compiled, and shouldn't be saving much time at all - only the time it takes to check the cache (a key lookup on an internal dict type). From module re.py (comments are mine): def match(pattern, string, flags=0): return _compile(pattern, flags).match(string) def _compile(*key): # Does cache check at top of function cachekey = (type(key[0]),) + key p = _cache.get(cachekey) if p is not None: return p # ... # Does actual compilation on cache miss # ... # Caches compiled regex if len(_cache) >= _MAXCACHE: _cache.clear() _cache[cachekey] = p return p I still often pre-compile regular expressions, but only to bind them to a nice, reusable name, not for any expected performance gain.
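If you want to measure this on your own machine, the timeit module makes the comparison easy. A rough sketch (the module-level timeit.timeit helper needs Python 2.6+, and the absolute numbers will vary by machine); note that both variants still pay the cache-lookup cost described above:

import timeit

compiled = timeit.timeit("r.match('hello world')",
                         setup="import re; r = re.compile('hello')")
on_the_fly = timeit.timeit("re.match('hello', 'hello world')",
                           setup="import re")
print compiled, on_the_fly  # expect the same order of magnitude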
How can I install the Beautiful Soup module on the Mac?
I read this without finding the solution: http://docs.python.org/install/index.html
The "normal" way is to: Go to the Beautiful Soup web site, http://www.crummy.com/software/BeautifulSoup/ Download the package Unpack it In a Terminal window, cd to the resulting directory Type python setup.py install Another solution is to use easy_install. Go to http://peak.telecommunity.com/DevCenter/EasyInstall, install the package using the instructions on that page, and then type, in a Terminal window: easy_install BeautifulSoup4 # for older v3: # easy_install BeautifulSoup easy_install will take care of downloading, unpacking, building, and installing the package. The advantage to using easy_install is that it knows how to search for many different Python packages, because it queries the PyPI registry. Thus, once you have easy_install on your machine, you can install many, many different third-party packages with a single command at a shell.
Python object.__repr__(self) should be an expression?
I was looking at the builtin object methods in the Python documentation, and I was interested in the documentation for object.__repr__(self). Here's what it says: Called by the repr() built-in function and by string conversions (reverse quotes) to compute the "official" string representation of an object. If at all possible, this should look like a valid Python expression that could be used to recreate an object with the same value (given an appropriate environment). If this is not possible, a string of the form <...some useful description...> should be returned. The return value must be a string object. If a class defines __repr__() but not __str__(), then __repr__() is also used when an "informal" string representation of instances of that class is required. This is typically used for debugging, so it is important that the representation is information-rich and unambiguous. The most interesting part to me, was... If at all possible, this should look like a valid Python expression that could be used to recreate an object with the same value ... but I'm not sure exactly what this means. It says it should look like an expression which can be used to recreate the object, but does that mean it should just be an example of the sort of expression you could use, or should it be an actual expression, that can be executed (eval etc..) to recreate the object? Or... should it be just a rehashing of the actual expression which was used, for pure information purposes? In general I'm a bit confused as to exactly what I should be putting here.
>>> from datetime import date >>> >>> repr(date.today()) # calls date.today().__repr__() 'datetime.date(2009, 1, 16)' >>> eval(_) # _ is the output of the last command datetime.date(2009, 1, 16) The output is a string that can be parsed by the python interpreter and results in an equal object. If that's not possible, it should return a string in the form of <...some useful description...>.
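In practice, that means writing __repr__ so that eval(repr(obj)) round-trips whenever feasible. A minimal sketch with a made-up Point class:

class Point(object):
    def __init__(self, x, y):
        self.x = x
        self.y = y
    def __repr__(self):
        # %r applies repr() to each field, so nested values work too
        return 'Point(%r, %r)' % (self.x, self.y)

>>> p = Point(1, 2)
>>> p
Point(1, 2)
>>> eval(repr(p))
Point(1, 2)

When a faithful expression isn't possible (say, the object wraps an open file or socket), fall back to the <...some useful description...> form instead.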
How do I create a list of Python lambdas (in a list comprehension/for loop)?
I want to create a list of lambda objects from a list of constants in Python; for instance: listOfNumbers = [1,2,3,4,5] square = lambda x: x * x listOfLambdas = [lambda: square(i) for i in listOfNumbers] This will create a list of lambda objects, however, when I run them: for f in listOfLambdas: print f(), I would expect that it would print 1 4 9 16 25 Instead, it prints: 25 25 25 25 25 It seems as though the lambdas have all been given the wrong parameter. Have I done something wrong, and is there a way to fix it? I'm in Python 2.4 I think. EDIT: a bit more of trying things and such came up with this: listOfLambdas = [] for num in listOfNumbers: action = lambda: square(num) listOfLambdas.append(action) print action() Prints the expected squares from 1 to 25, but then using the earlier print statement: for f in listOfLambdas: print f(), still gives me all 25s. How did the existing lambda objects change between those two print calls? Related question: Why results of map() and list comprehension are different?
You have: listOfLambdas = [lambda: i*i for i in range(6)] for f in listOfLambdas: print f() Output: 25 25 25 25 25 25 You need currying! Aside from being delicious, use this default value "hack". listOfLambdas = [lambda i=i: i*i for i in range(6)] for f in listOfLambdas: print f() Output: 0 1 4 9 16 25 Note the i=i. That's where the magic happens.
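If the default-argument trick feels too magical, functools.partial gives you the same early binding more explicitly (though note it was added in Python 2.5, so it's out of reach if you really are on 2.4):

from functools import partial

square = lambda x: x * x
listOfLambdas = [partial(square, i) for i in range(6)]
for f in listOfLambdas:
    print f(),  # prints: 0 1 4 9 16 25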
Does python have an equivalent to Java Class.forName()?
I have the need to take a string argument and create an object of the class named in that string in Python. In Java, I would use Class.forName().newInstance(). Is there an equivalent in Python? Thanks for the responses. To answer those who want to know what I'm doing: I want to use a command line argument as the class name, and instantiate it. I'm actually programming in Jython and instantiating Java classes, hence the Java-ness of the question. getattr() works great. Thanks much.
Reflection in python is a lot easier and far more flexible than it is in Java. I recommend reading this tutorial. There's no direct function (that I know of) which takes a fully qualified class name and returns the class; however, you have all the pieces needed to build that, and you can connect them together. One bit of advice though: don't try to program in Java style when you're in python. If you can explain what it is that you're trying to do, maybe we can help you find a more pythonic way of doing it. Here's a function that does what you want: def get_class( kls ): parts = kls.split('.') module = ".".join(parts[:-1]) m = __import__( module ) for comp in parts[1:]: m = getattr(m, comp) return m You can use the return value of this function as if it were the class itself. Here's a usage example: >>> D = get_class("datetime.datetime") >>> D <type 'datetime.datetime'> >>> D.now() datetime.datetime(2009, 1, 17, 2, 15, 58, 883000) >>> a = D( 2010, 4, 22 ) >>> a datetime.datetime(2010, 4, 22, 0, 0) >>> How does that work? We're using __import__ to import the module that holds the class, which requires that we first extract the module name from the fully qualified name. Then we import the module: m = __import__( module ) In this case, m will only refer to the top-level module. For example, if your class lives in the foo.baz module, then m will be the module foo. We can easily obtain a reference to foo.baz using getattr( m, 'baz' ) To get from the top-level module to the class, we have to recursively use getattr on the parts of the class name. Say, for example, your class name is foo.baz.bar.Model; then we do this: m = __import__( "foo.baz.bar" ) #m is package foo m = getattr( m, "baz" ) #m is package baz m = getattr( m, "bar" ) #m is module bar m = getattr( m, "Model" ) #m is class Model This is what's happening in this loop: for comp in parts[1:]: m = getattr(m, comp) At the end of the loop, m will be a reference to the class. This means that m is actually the class itself; you can do, for instance: a = m() #instantiate a new instance of the class b = m( arg1, arg2 ) # pass arguments to the constructor
Advice regarding IPython + MacVim Workflow
I've just found IPython and I can report that I'm in deep love. And the affection was immediate. I think this affair will turn into something lasting, like the one I have with screen. Ipython and screen happen to be the best of friends too so it's a triangular drama. Purely platonic, mind you. The reason IPython hits the soft spots with me is very much because I generally like command prompts, and especially *nix-inspired prompts with inspiration from ksh, csh (yes, csh is a monster, but as a prompt it sports lots of really good features), bash and zsh. And IPython does sure feel like home for a *nix prompt rider. Mixing the system shell and python is also a really good idea. Plus, of course, IPython helps a lot when solving the Python Challenge riddles. Invaluable even. Now, I love Vim too. Since I learnt vi back in the days there's no turning back. And I'm on Mac when I have a choice. Now I'd like to glue together my IPython + MacVim workflow. What I've done so far is that I start Ipython using: ipython -e "open -a MacVim" Thus when I edit from IPython it starts MacVim with the file/module loaded. Could look a bit like so: In [4]: %run foo #This also "imports" foo anew hello world In [5]: edit foo Editing... done. Executing edited code... #This happens immediately hello world In [6]: %run foo hello SO World OK. I think this can be improved. Maybe there's a way to tie IPython into MacVim too? Please share your experiences. Of course if you use TextMate or some other fav editor I'm interested too. Maybe some of the lessons are general.
I use Linux, but I believe this tip can be used in OS X too. I use GNU Screen to send IPython commands from Vim as recommended by this tip. This is how I do it: First, you should open a terminal and start a screen session called 'ipython' or whatever you want, and then start IPython: $ screen -S ipython $ ipython Then you should put this in your .vimrc: autocmd FileType python map <F5> :w<CR>:!screen -x ipython -X stuff $'\%run %:p\n'<CR><CR> Then when you hit <F5>, it will tell Screen to execute the command '%run file' inside the 'ipython' session created previously, where file is your current buffer in Vim. You can tweak this to execute the command you want inside IPython from Vim. For example I use this: autocmd FileType python map <F5> :w<CR>:!screen -x ipython -X stuff $'\%reset\ny\n\%cd %:p:h\n\%run %:t\n'<CR><CR> This executes %reset (answering yes to the prompt), then changes to the directory where the current buffer in Vim is located, and then %runs the file. This is especially useful if you have %pdb active in IPython. Don't forget that you need an active Screen session called 'ipython' with IPython running inside. If you like Emacs, there is good support for IPython too.
Is there a fast way to generate a dict of the alphabet in Python?
I want to generate a dict with the letters of the alphabet as the keys, something like letter_count = {'a': 0, 'b': 0, 'c': 0} what would be a fast way of generating that dict, rather than me having to type it in? Thanks for your help. EDIT Thanks everyone for your solutions :) nosklo's solution is probably the shortest Also, thanks for reminding me about the Python string module.
I find this solution more elegant: import string d = dict.fromkeys(string.ascii_lowercase, 0)
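One caveat in case you adapt this later: dict.fromkeys stores the same value object under every key. That's harmless with an immutable value like 0, but with a mutable value all the keys end up sharing it:

>>> d = dict.fromkeys('abc', [])
>>> d['a'].append(1)
>>> d['b']  # every key shares the single list
[1]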
How do I re.search or re.match on a whole file without reading it all into memory?
I want to be able to run a regular expression on an entire file, but I'd like to be able to not have to read the whole file into memory at once as I may be working with rather large files in the future. Is there a way to do this? Thanks! Clarification: I cannot read line-by-line because a match can span multiple lines.
You can use mmap to map the file to memory. The file contents can then be accessed like a normal string: import re, mmap with open('/var/log/error.log', 'r+') as f: data = mmap.mmap(f.fileno(), 0) mo = re.search('error: (.*)', data) if mo: print "found error", mo.group(1) This also works for big files; the file content is internally loaded from disk as needed.
Pythonic macro syntax
I've been working on an alternative compiler front-end for Python where all syntax is parsed via macros. I'm finally to the point with its development that I can start work on a superset of the Python language where macros are an integral component. My problem is that I can't come up with a pythonic macro definition syntax. I've posted several examples in two different syntaxes in answers below. Can anyone come up with a better syntax? It doesn't have to build off the syntax I've proposed in any way -- I'm completely open here. Any comments, suggestions, etc would be helpful, as would alternative syntaxes showing the examples I've posted. A note about the macro structure, as seen in the examples I've posted: The use of MultiLine/MLMacro and Partial/PartialMacro tell the parser how the macro is applied. If it's multiline, the macro will match multiple line lists; generally used for constructs. If it's partial, the macro will match code in the middle of a list; generally used for operators.
After thinking about it a while a few days ago, and coming up with nothing worth posting, I came back to it now and came up with some syntax I rather like, because it nearly looks like python: macro PrintMacro: syntax: "print", OneOrMore(Var(), name='vars') return Printnl(vars, None) Make all the macro "keywords" look like creating python objects (Var() instead of simple Var) Pass the name of elements as a "keyword parameter" to items we want a name for. It should still be easy to find all the names in the parser, since this syntax definition needs to be interpreted anyway to fill the syntax variable of the resulting macro class. The internal syntax representation could also look the same: class PrintMacro(Macro): syntax = 'print', OneOrMore(Var(), name='vars') ... The internal syntax classes like OneOrMore would follow this pattern to allow subitems and an optional name: class MacroSyntaxElement(object): def __init__(self, *p, **kwargs): self.subelements = p self.name = kwargs.get('name') When the macro matches, you just collect all items that have a name and pass them as keyword parameters to the handler function: class Macro(): ... def parse(self, ...): syntaxtree = [] nameditems = {} # parse, however this is done # store all elements that have a name as # nameditems[name] = parsed_element self.handle(syntaxtree, **nameditems) The handler function would then be defined like this: class PrintMacro(Macro): ... def handle(self, syntaxtree, vars): return Printnl(vars, None) I added the syntaxtree as a first parameter that is always passed, so you wouldn't need to have any named items if you just want to do very basic stuff on the syntax tree. Also, if you don't like the decorators, why not add the macro type like a "base class"? IfMacro would then look like this: macro IfMacro(MultiLine): syntax: Group("if", Var(), ":", Var(), name='if_') ZeroOrMore("elif", Var(), ":", Var(), name='elifs') Optional("else", Var(name='elseBody')) return If( [(cond, Stmt(body)) for keyword, cond, colon, body in [if_] + elifs], None if elseBody is None else Stmt(elseBody) ) And in the internal representation: class IfMacro(MultiLineMacro): syntax = ( Group("if", Var(), ":", Var(), name='if_'), ZeroOrMore("elif", Var(), ":", Var(), name='elifs'), Optional("else", Var(name='elseBody')) ) def handle(self, syntaxtree, if_=None, elifs=None, elseBody=None): # Default parameters in case there is no such named item. # In this case this can only happen for 'elseBody'. return If( [(cond, Stmt(body)) for keyword, cond, body in [if_] + elifs], None if elseBody is None else Stmt(elseBody) ) I think this would give a quite flexible system. Main advantages: Easy to learn (looks like standard python) Easy to parse (parses like standard python) Optional items can be easily handled, since you can have a default parameter None in the handler Flexible use of named items: You don't need to name any items if you don't want, because the syntax tree is always passed in. You can name any subexpressions in a big macro definition, so it's easy to pick out specific stuff you're interested in Easily extensible if you want to add more features to the macro constructs. For example Several("abc", min=3, max=5, name="a"). I think this could also be used to add default values to optional elements like Optional("step", Var(), name="step", default=1).
I'm not sure about the quote/unquote syntax with "quote:" and "$", but some syntax for this is needed, since it makes life much easier if you don't have to manually write syntax trees. Probably it's a good idea to require (or just permit?) parentheses for "$", so that you can insert more complicated syntax parts, if you want. Like $(Stmt(a, b, c)). The ToMacro would look something like this: # macro definition macro ToMacro(Partial): syntax: Var(name='start'), "to", Var(name='end'), Optional("inclusive", name='inc'), Optional("step", Var(name='step')) if step is None: step = quote(1) if inc: return quote: xrange($(start), $(end)+1, $(step)) else: return quote: xrange($(start), $(end), $(step)) # resulting macro class class ToMacro(PartialMacro): syntax = Var(name='start'), "to", Var(name='end'), Optional("inclusive", name='inc'), Optional("step", Var(name='step')) def handle(syntaxtree, start=None, end=None, inc=None, step=None): if step is None: step = Number(1) if inc: return ['xrange', ['(', start, [end, '+', Number(1)], step, ')']] return ['xrange', ['(', start, end, step, ')']]
Putting separate python packages into same namespace?
I'm developing a python framework that would have "addons" written as separate packages. I.e.: import myframework from myframework.addons import foo, bar Now, what I'm trying to arrange is so that these addons can be distributed separately from the core framework and injected into the myframework.addons namespace. Currently my best solution to this is the following. An add-on would be deployed (most likely into {python_version}/site-packages/) like so: fooext/ fooext/__init__.py fooext/myframework/ fooext/myframework/__init__.py fooext/myframework/addons/ fooext/myframework/addons/__init__.py fooext/myframework/addons/foo.py The fooext/myframework/addons/__init__.py would have the pkgutil path extension code: import pkgutil __path__ = pkgutil.extend_path(__path__, __name__) The problem is that for this to work, the PYTHONPATH needs to have fooext/ in it, however the only thing it would have is the parent install directory (most likely, the above-mentioned site-packages). The solution to this is to have extra code in myframework/addons/__init__.py which would traverse sys.path and look for any modules with a myframework sub-package, in which case it adds it to sys.path and everything works. Another idea I had is to write the addon files directly to the myframework/addons/ install location, but then it would make the development and deployed namespaces differ. Is there a better way to accomplish this or perhaps a different approach to the above distribution problem altogether?
See namespace packages: http://www.python.org/dev/peps/pep-0382/ or in setuptools: http://peak.telecommunity.com/DevCenter/setuptools#namespace-packages
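As a sketch of what the setuptools route looks like (check the setuptools docs for the authoritative recipe), each separately distributed add-on declares the shared namespace in its __init__.py files and in its setup.py:

# fooext's myframework/__init__.py and myframework/addons/__init__.py
__import__('pkg_resources').declare_namespace(__name__)

# fooext's setup.py
from setuptools import setup

setup(
    name='fooext',
    packages=['myframework', 'myframework.addons'],
    namespace_packages=['myframework', 'myframework.addons'],
)

With that in place, fooext installs myframework/addons/foo.py into site-packages alongside the core framework, and from myframework.addons import foo works without any sys.path tricks.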
Python get proper line ending
Is there an easy way to get the type of line ending that the current operating system uses?
If you are operating on a file that you opened in text mode, then you are correct that line breaks all show up as '\n'. Otherwise, you are looking for os.linesep . From http://docs.python.org/library/os.html: os.linesep The string used to separate (or, rather, terminate) lines on the current platform. This may be a single character, such as '\n' for POSIX, or multiple characters, for example, '\r\n' for Windows. Do not use os.linesep as a line terminator when writing files opened in text mode (the default); use a single '\n' instead, on all platforms.
No module named MySQLdb
I am using Python version 2.5.4 and have installed MySQL version 5.0 and Django. Django is working fine with Python, but not MySQL. Can anyone help me to solve this issue? I am using it on Vista.
You need to use one of the following commands. Which one depends on what OS and software you have and use. easy_install mysql-python (mix os) pip install mysql-python (mix os) apt-get install python-mysqldb (Linux Ubuntu, ...) cd /usr/ports/databases/py-MySQLdb && make install clean (FreeBSD) yum install MySQL-python (Linux Fedora, CentOS ...) For Windows, see this answer: Install mysql-python (Windows)
Advice on Python/Django and message queues
I have an application in Django that needs to send a large number of emails to users in various use cases. I don't want to handle this synchronously within the application for obvious reasons. Does anyone have any recommendations for a message queuing server which integrates well with Python, or that they have used on a Django project? The rest of my stack is Apache, mod_python, MySQL.
In your specific case, where it's just an email queue, I would take the easy way out and use django-mailer. As a nice side bonus, there are other pluggable projects that are smart enough to take advantage of django-mailer when they see it in the stack. As for more general queue solutions, I haven't been able to try any of these yet, but here's a list of ones that look more interesting to me: pybeanstalk/beanstalkd python interface to gearman (which is probably much more interesting now with the release of the C version of gearman) memcacheQ stomp Celery
Break on exception in pydev
Is it possible to get the pydev debugger to break on exception?
This was added by the PyDev author, under Run > Manage Python Exception Breakpoints
JSON datetime between Python and JavaScript
I want to send a datetime.datetime object in serialized form from Python using JSON and de-serialize in JavaScript using JSON. What is the best way to do this?
You can add the 'default' parameter to json.dumps to handle this: date_handler = lambda obj: ( obj.isoformat() if isinstance(obj, datetime.datetime) or isinstance(obj, datetime.date) else None ) json.dumps(datetime.datetime.now(), default=date_handler) '"2010-04-20T20:08:21.634121"' Which is ISO 8601 format. A more comprehensive default handler function: def handler(obj): if hasattr(obj, 'isoformat'): return obj.isoformat() elif isinstance(obj, ...): return ... else: raise TypeError, 'Object of type %s with value of %s is not JSON serializable' % (type(obj), repr(obj)) Update: Added output of type as well as value. Update: Also handle date
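For the return trip on the Python side (a sketch of the same idea; JavaScript Date parsing of ISO 8601 strings varies by engine, so test your target browsers), strptime's %f directive (Python 2.6+) understands this format, provided the microseconds part is present:

import json
from datetime import datetime

# date_handler as defined above
s = json.dumps(datetime.now(), default=date_handler)
# s looks like '"2010-04-20T20:08:21.634121"'
d = datetime.strptime(json.loads(s), '%Y-%m-%dT%H:%M:%S.%f')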
Limiting floats to two decimal points
I want a to be rounded to 13.95. >>> a 13.949999999999999 >>> round(a, 2) 13.949999999999999 The round function does not work the way I expected.
You are running into the old problem with floating point numbers: not all numbers can be represented exactly. The command line is just showing you the full floating point form from memory. In floating point, your rounded version is the same number. Since computers are binary, they store floating point numbers as an integer divided by a power of two, so 13.95 will be represented in a similar fashion to 125650429603636838/(2**53). Double precision numbers have 53 bits (16 digits) of precision and regular floats have 24 bits (8 digits) of precision. Floating point in Python uses double precision to store the values. For example: >>> 125650429603636838/(2**53) 13.949999999999999 >>> 234042163/(2**24) 13.949999988079071 >>> a=13.946 >>> print(a) 13.946 >>> print("%.2f" % a) 13.95 >>> round(a,2) 13.949999999999999 >>> print("%.2f" % round(a,2)) 13.95 >>> print("{0:.2f}".format(a)) 13.95 >>> print("{0:.2f}".format(round(a,2))) 13.95 >>> print("{0:.15f}".format(round(a,2))) 13.949999999999999 If you are after only two decimal places, as with currency, then you have a couple of better choices: use integers and store values in cents, not dollars, then divide by 100 to convert to dollars; or use a fixed-point type like decimal.
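A quick sketch of the decimal route, which keeps exactly the digits you give it:

>>> from decimal import Decimal
>>> Decimal('13.95')
Decimal('13.95')
>>> Decimal('13.95') + Decimal('0.05')
Decimal('14.00')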
Is there any advantage in using a Python class?
I have a Python class full of static methods. What are the advantages and disadvantages of packaging these in a class rather than raw functions?
There are none. This is what modules are for: grouping related functions. Using a class full of static methods makes me cringe from Javaitis. The only time I would use a static function is if the function is an integral part of the class. (In fact, I'd probably want to use a class method anyway.)
Can't get Python to import from a different folder
I can't seem to get Python to import a module in a subfolder. I get the error when I try to create an instance of the class from the imported module, but the import itself succeeds. Here is my directory structure: Server -server.py -Models --user.py Here's the contents of server.py: from sys import path from os import getcwd path.append(getcwd() + "\\models") #Yes, i'm on windows print path import user u=user.User() #error on this line And user.py: class User(Entity): using_options(tablename='users') username = Field(String(15)) password = Field(String(64)) email = Field(String(50)) status = Field(Integer) created = Field(DateTime) The error is: AttributeError: 'module' object has no attribute 'User'
I believe you need to create a file called __init__.py in the Models directory so that python treats it as a module. Then you can do: from Models.user import User You can include code in the __init__.py (for instance initialization code that a few different classes need) or leave it blank. But it must be there.
Class factory in Python
I'm new to Python and need some advice implementing the scenario below. I have two classes for managing domains at two different registrars. Both have the same interface, e.g. class RegistrarA(object): def __init__(self, domain): self.domain = domain def lookup(self): ... def register(self, info): ... and class RegistrarB(object): def __init__(self, domain): self.domain = domain def lookup(self): ... def register(self, info): ... I would like to create a Domain class that, given a domain name, loads the correct registrar class based on the extension, e.g. com = Domain('test.com') #load RegistrarA com.lookup() biz = Domain('test.biz') #load RegistrarB biz.lookup() I know this can be accomplished using a factory function (see below), but is this the best way of doing it or is there a better way using OOP features? def factory(domain): if ...: return RegistrarA(domain) else: return RegistrarB(domain)
I think using a function is fine. The more interesting question is how do you determine which registrar to load? One option is to have an abstract base Registrar class which concrete implementations subclass, then iterate over its __subclasses__() calling an is_registrar_for() class method: class Registrar(object): def __init__(self, domain): self.domain = domain class RegistrarA(Registrar): @classmethod def is_registrar_for(cls, domain): return domain == 'foo.com' class RegistrarB(Registrar): @classmethod def is_registrar_for(cls, domain): return domain == 'bar.com' def Domain(domain): for cls in Registrar.__subclasses__(): if cls.is_registrar_for(domain): return cls(domain) raise ValueError print Domain('foo.com') print Domain('bar.com') This will let you transparently add new Registrars and delegate the decision of which domains each supports, to them.
AttributeError: 'module' object has no attribute 'model'
Can anyone help me please to solve this.. from django.db import models # Create your models here. class Poll(models.model): question = models.CharField(max_length=200) pub_date = models.DateTimeField('date published') class Choice(models.Model): poll = models.ForeignKey(Poll) choice = models.CharField(max_length=200) votes = models.IntegerField() Running: c:\projects\mysite>python manage.py sql polls Traceback (most recent call last): File "manage.py", line 11, in <module> execute_manager(settings) File "C:\Python25\Lib\site-packages\django\core\management\__init__.py", line 340, in execute_manager utility.execute() File "C:\Python25\Lib\site-packages\django\core\management\__init__.py", line 295, in execute self.fetch_command(subcommand).run_from_argv(self.argv) File "C:\Python25\Lib\site-packages\django\core\management\base.py", line 195, in run_from_argv self.execute(*args, **options.__dict__) File "C:\Python25\Lib\site-packages\django\core\management\base.py", line 221, in execute self.validate() File "C:\Python25\Lib\site-packages\django\core\management\base.py", line 249, in validate num_errors = get_validation_errors(s, app) File "C:\Python25\lib\site-packages\django\core\management\validation.py", line 28, in get_validation_errors for (app_name, error) in get_app_errors().items(): File "C:\Python25\lib\site-packages\django\db\models\loading.py", line 128, in get_app_errors self._populate() File "C:\Python25\lib\site-packages\django\db\models\loading.py", line 57, in _populate self.load_app(app_name, True) File "C:\Python25\lib\site-packages\django\db\models\loading.py", line 72, in load_app mod = __import__(app_name, {}, {}, ['models']) File "c:\projects\mysite\..\mysite\polls\models.py", line 4, in <module> class Poll(models.model): AttributeError: 'module' object has no attribute 'model'
It's called models.Model and not models.model (case sensitive). Fix your Poll model like this - class Poll(models.Model): question = models.CharField(max_length=200) pub_date = models.DateTimeField('date published') Hope that helps...
Extending python - to swig, not to swig or Cython
I found the bottleneck in my python code, played around with Psyco etc. Then decided to write a C/C++ extension for performance. With the help of swig you almost don't need to care about arguments etc. Everything works fine. Now my question: swig creates a quite large py-file which does a lot of checks and 'PySwigObject' wrapping before calling the actual .pyd or .so code. Does anyone of you have any experience as to whether there is some more performance to gain if you hand-write this file or let swig do it?
You should consider Boost.Python if you are not planning to generate bindings for other languages as well with swig. If you have a lot of functions and classes to bind, Py++ is a great tool that automatically generates the needed code to make the bindings. Pybindgen may also be an option, but it's a new project and less complete than Boost.Python. Edit: Maybe I need to be more explicit about pros and cons. Swig: pro: you can generate bindings for many scripting languages. cons: I don't like the way the parser works. I don't know if they have made progress, but two years ago the C++ parser was quite limited. Most of the time I had to copy/paste my .h headers, add some % characters, and give extra hints to the swig parser. I also needed to deal with the Python C-API from time to time for (not so) complicated type conversions. I'm not using it anymore. Boost.Python: pro: It's a very complete library. It allows you to do almost everything that is possible with the C-API, but in C++. I never had to write C-API code with this library. I also never encountered a bug due to the library. Code for bindings either works like a charm or refuses to compile. It's probably one of the best solutions currently available if you already have some C++ library to bind. But if you only have a small C function to rewrite, I would probably try Cython. cons: if you don't have a pre-compiled Boost.Python library you're going to use Bjam (a sort of make replacement). I really hate Bjam and its syntax. Python libraries created with B.P tend to become obese. It also takes a lot of time to compile them. Py++ (discontinued): it's Boost.Python made easy. Py++ uses a C++ parser to read your code and then generates Boost.Python code automatically. You also get great support from its author (no it's not me ;-) ). cons: only the problems due to Boost.Python itself. Update: As of 2014 this project now looks discontinued. Pybindgen: It generates the code dealing with the C-API. You can either describe functions and classes in a Python file, or let Pybindgen read your headers and generate bindings automatically (for this it uses pygccxml, a python library written by the author of Py++). cons: it's a young project, with a smaller team than Boost.Python. There are still some limitations: you cannot use multiple inheritance for your C++ classes; callbacks are not handled automatically (custom callback handling code can be written, though); and there is no translation of Python exceptions to C. It's definitely worth a good look. A new one: On 2009/01/20 the author of Py++ announced a new package for interfacing C/C++ code with python. It is based on ctypes. I haven't tried it yet, but I will! Note: this project looks discontinued, like Py++. CFFI: I did not know of the existence of this one until very recently, so for now I cannot give my opinion. It looks like you can define C functions in Python strings and call them directly from the same Python module. Cython: This is the method I'm currently using in my projects. Basically you write code in special .pyx files. Those files are compiled (translated) into C code, which in turn is compiled into CPython modules. Cython code can look like regular Python (and in fact pure Python files are valid .pyx Cython files), but you can also add more information, like variable types. This optional typing allows Cython to generate faster C code. Code in Cython files can call pure Python functions as well as C and C++ functions (and C++ methods).
It took me some time to get used to thinking in Cython, where the same code calls C and C++ functions, mixes Python and C variables, and so on. But it's a very powerful language, with an active (as of 2014) and friendly community.
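To give a flavor of the optional typing, here is a minimal made-up .pyx sketch; the untyped version would be plain valid Python, while the cdef declarations let Cython compile the loop down to straight C:

# square_sum.pyx
def square_sum(int n):
    cdef int i
    cdef long total = 0
    for i in range(n):
        total += i * i
    return total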
What is the best real time plotting widget for wxPython?
I would like to show a real time graph with one or two curves and up to 50 samples per second using Python and wxPython. The widget should support both Win32 and Linux platforms. Any hints are welcome. Edited to add: I don't need to update the display at 50 fps, but I do need to show up to 50 samples of data on both curves, with a reasonable update rate for the display (5..10 fps should be okay). Edited to add: I have used matplotlib in a project with good success. I have since settled on wx.lib.plot for other projects, which I found to be simpler: somewhat easier to use and consuming fewer CPU cycles. As wx.lib comes as part of the standard wxPython distribution, it is particularly easy to use.
Here's a sample of a dynamic plotter with wxPython and matplotlib. While not 50 FPS, it draws smoothly and quickly enough for most real-time data views: http://eli.thegreenplace.net/2008/08/01/matplotlib-with-wxpython-guis/ Here's just the code paste: http://paste.pocoo.org/show/100358/
Is there an easily available implementation of erf() for Python?
I can implement the error function, erf, myself, but I'd prefer not to. Is there a python package with no external dependencies that contains an implementation of this function? I have found this: http://pylab.sourceforge.net/packages/included_functions.html but it seems to be part of some much larger package (and it's not even clear which one!). I'm sorry if this is a naive question - I'm totally new to Python.
Since v2.7, the standard math module contains an erf function. This should be the easiest way: http://docs.python.org/2/library/math.html#math.erf
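Usage is a one-liner:

>>> from math import erf
>>> erf(1.0)
0.8427007929497149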
Is there a way to detach matplotlib plots so that the computation can continue?
After these instructions in the Python interpreter one gets a window with a plot: from matplotlib.pyplot import * plot([1,2,3]) show() # other code Unfortunately, I don't know how to continue to interactively explore the figure created by show() while the program does further calculations. Is it possible at all? Sometimes calculations are long and it would help if they would proceed during examination of intermediate results.
Use matplotlib's calls that won't block: Using draw(): from matplotlib.pyplot import plot, draw, show plot([1,2,3]) draw() print 'continue computation' # at the end call show to ensure window won't close. show() Using interactive mode: from matplotlib.pyplot import plot, ion, show ion() # enables interactive mode plot([1,2,3]) # result shows immediatelly (implicit draw()) print 'continue computation' # at the end call show to ensure window won't close. show()
How do I install an .egg file without easy_install in Windows?
I have Python 2.6 and I want to install the easy_install module. The problem is that the only available installation package of easy_install for Python 2.6 is an .egg file! What should I do?
You could try this script. #!python """Bootstrap setuptools installation If you want to use setuptools in your package's setup.py, just include this file in the same directory with it, and add this to the top of your setup.py::     from ez_setup import use_setuptools     use_setuptools() If you want to require a specific version of setuptools, set a download mirror, or use an alternate download directory, you can do so by supplying the appropriate options to ``use_setuptools()``. This file can also be run as a script to install or upgrade setuptools. """ import sys DEFAULT_VERSION = "0.6c11" DEFAULT_URL     = "http://pypi.python.org/packages/%s/s/setuptools/" % sys.version[:3] md5_data = {     'setuptools-0.6b1-py2.3.egg': '8822caf901250d848b996b7f25c6e6ca',     'setuptools-0.6b1-py2.4.egg': 'b79a8a403e4502fbb85ee3f1941735cb',     'setuptools-0.6b2-py2.3.egg': '5657759d8a6d8fc44070a9d07272d99b',     'setuptools-0.6b2-py2.4.egg': '4996a8d169d2be661fa32a6e52e4f82a',     'setuptools-0.6b3-py2.3.egg': 'bb31c0fc7399a63579975cad9f5a0618',     'setuptools-0.6b3-py2.4.egg': '38a8c6b3d6ecd22247f179f7da669fac',     'setuptools-0.6b4-py2.3.egg': '62045a24ed4e1ebc77fe039aa4e6f7e5',     'setuptools-0.6b4-py2.4.egg': '4cb2a185d228dacffb2d17f103b3b1c4',     'setuptools-0.6c1-py2.3.egg': 'b3f2b5539d65cb7f74ad79127f1a908c',     'setuptools-0.6c1-py2.4.egg': 'b45adeda0667d2d2ffe14009364f2a4b',     'setuptools-0.6c10-py2.3.egg': 'ce1e2ab5d3a0256456d9fc13800a7090',     'setuptools-0.6c10-py2.4.egg': '57d6d9d6e9b80772c59a53a8433a5dd4',     'setuptools-0.6c10-py2.5.egg': 'de46ac8b1c97c895572e5e8596aeb8c7',     'setuptools-0.6c10-py2.6.egg': '58ea40aef06da02ce641495523a0b7f5',     'setuptools-0.6c11-py2.3.egg': '2baeac6e13d414a9d28e7ba5b5a596de',     'setuptools-0.6c11-py2.4.egg': 'bd639f9b0eac4c42497034dec2ec0c2b',     'setuptools-0.6c11-py2.5.egg': '64c94f3bf7a72a13ec83e0b24f2749b2',     'setuptools-0.6c11-py2.6.egg': 'bfa92100bd772d5a213eedd356d64086',     'setuptools-0.6c2-py2.3.egg': 'f0064bf6aa2b7d0f3ba0b43f20817c27',     'setuptools-0.6c2-py2.4.egg': '616192eec35f47e8ea16cd6a122b7277',     'setuptools-0.6c3-py2.3.egg': 'f181fa125dfe85a259c9cd6f1d7b78fa',     'setuptools-0.6c3-py2.4.egg': 'e0ed74682c998bfb73bf803a50e7b71e',     'setuptools-0.6c3-py2.5.egg': 'abef16fdd61955514841c7c6bd98965e',     'setuptools-0.6c4-py2.3.egg': 'b0b9131acab32022bfac7f44c5d7971f',     'setuptools-0.6c4-py2.4.egg': '2a1f9656d4fbf3c97bf946c0a124e6e2',     'setuptools-0.6c4-py2.5.egg': '8f5a052e32cdb9c72bcf4b5526f28afc',     'setuptools-0.6c5-py2.3.egg': 'ee9fd80965da04f2f3e6b3576e9d8167',     'setuptools-0.6c5-py2.4.egg': 'afe2adf1c01701ee841761f5bcd8aa64',     'setuptools-0.6c5-py2.5.egg': 'a8d3f61494ccaa8714dfed37bccd3d5d',     'setuptools-0.6c6-py2.3.egg': '35686b78116a668847237b69d549ec20',     'setuptools-0.6c6-py2.4.egg': '3c56af57be3225019260a644430065ab',     'setuptools-0.6c6-py2.5.egg': 'b2f8a7520709a5b34f80946de5f02f53',     'setuptools-0.6c7-py2.3.egg': '209fdf9adc3a615e5115b725658e13e2',     'setuptools-0.6c7-py2.4.egg': '5a8f954807d46a0fb67cf1f26c55a82e',     'setuptools-0.6c7-py2.5.egg': '45d2ad28f9750e7434111fde831e8372',     'setuptools-0.6c8-py2.3.egg': '50759d29b349db8cfd807ba8303f1902',     'setuptools-0.6c8-py2.4.egg': 'cba38d74f7d483c06e9daa6070cce6de',     'setuptools-0.6c8-py2.5.egg': '1721747ee329dc150590a58b3e1ac95b',     'setuptools-0.6c9-py2.3.egg': 'a83c4020414807b496e4cfbe08507c03',     'setuptools-0.6c9-py2.4.egg': '260a2be2e5388d66bdaee06abec6342a',     'setuptools-0.6c9-py2.5.egg': 
'fe67c3e5a17b12c0e7c541b7ea43a8e6',     'setuptools-0.6c9-py2.6.egg': 'ca37b1ff16fa2ede6e19383e7b59245a', } import sys, os try: from hashlib import md5 except ImportError: from md5 import md5 def _validate_md5(egg_name, data):     if egg_name in md5_data:         digest = md5(data).hexdigest()         if digest != md5_data[egg_name]:             print >>sys.stderr, (                 "md5 validation of %s failed!  (Possible download problem?)"                 % egg_name             )             sys.exit(2)     return data def use_setuptools(     version=DEFAULT_VERSION, download_base=DEFAULT_URL, to_dir=os.curdir,     download_delay=15 ):     """Automatically find/download setuptools and make it available on sys.path     `version` should be a valid setuptools version number that is available     as an egg for download under the `download_base` URL (which should end with     a '/').  `to_dir` is the directory where setuptools will be downloaded, if     it is not already available.  If `download_delay` is specified, it should     be the number of seconds that will be paused before initiating a download,     should one be required.  If an older version of setuptools is installed,     this routine will print a message to ``sys.stderr`` and raise SystemExit in     an attempt to abort the calling script.     """     was_imported = 'pkg_resources' in sys.modules or 'setuptools' in sys.modules     def do_download():         egg = download_setuptools(version, download_base, to_dir, download_delay)         sys.path.insert(0, egg)         import setuptools; setuptools.bootstrap_install_from = egg     try:         import pkg_resources     except ImportError:         return do_download()            try:         pkg_resources.require("setuptools>="+version); return     except pkg_resources.VersionConflict, e:         if was_imported:             print >>sys.stderr, (             "The required version of setuptools (>=%s) is not available, and\n"             "can't be installed while this script is running. Please install\n"             " a more recent version first, using 'easy_install -U setuptools'."             "\n\n(Currently using %r)"             ) % (version, e.args[0])             sys.exit(2)     except pkg_resources.DistributionNotFound:         pass     del pkg_resources, sys.modules['pkg_resources']    # reload ok     return do_download() def download_setuptools(     version=DEFAULT_VERSION, download_base=DEFAULT_URL, to_dir=os.curdir,     delay = 15 ):     """Download setuptools from a specified location and return its filename     `version` should be a valid setuptools version number that is available     as an egg for download under the `download_base` URL (which should end     with a '/'). `to_dir` is the directory where the egg will be downloaded.     `delay` is the number of seconds to pause before an actual download attempt.     """     import urllib2, shutil     egg_name = "setuptools-%s-py%s.egg" % (version,sys.version[:3])     url = download_base + egg_name     saveto = os.path.join(to_dir, egg_name)     src = dst = None     if not os.path.exists(saveto):  # Avoid repeated downloads         try:             from distutils import log             if delay:                 log.warn(""" --------------------------------------------------------------------------- This script requires setuptools version %s to run (even to display help).  I will attempt to download it for you (from %s), but you may need to enable firewall access for this script first. 
I will start the download in %d seconds. (Note: if this machine does not have network access, please obtain the file    %s and place it in this directory before rerunning this script.) ---------------------------------------------------------------------------""",                     version, download_base, delay, url                 ); from time import sleep; sleep(delay)             log.warn("Downloading %s", url)             src = urllib2.urlopen(url)             # Read/write all in one block, so we don't create a corrupt file             # if the download is interrupted.             data = _validate_md5(egg_name, src.read())             dst = open(saveto,"wb"); dst.write(data)         finally:             if src: src.close()             if dst: dst.close()     return os.path.realpath(saveto) def main(argv, version=DEFAULT_VERSION):     """Install or upgrade setuptools and EasyInstall"""     try:         import setuptools     except ImportError:         egg = None         try:             egg = download_setuptools(version, delay=0)             sys.path.insert(0,egg)             from setuptools.command.easy_install import main             return main(list(argv)+[egg])   # we're done here         finally:             if egg and os.path.exists(egg):                 os.unlink(egg)     else:         if setuptools.__version__ == '0.0.1':             print >>sys.stderr, (             "You have an obsolete version of setuptools installed.  Please\n"             "remove it from your system entirely before rerunning this script."             )             sys.exit(2)     req = "setuptools>="+version     import pkg_resources     try:         pkg_resources.require(req)     except pkg_resources.VersionConflict:         try:             from setuptools.command.easy_install import main         except ImportError:             from easy_install import main         main(list(argv)+[download_setuptools(delay=0)])         sys.exit(0) # try to force an exit     else:         if argv:             from setuptools.command.easy_install import main             main(argv)         else:             print "Setuptools version",version,"or greater has been installed."             print '(Run "ez_setup.py -U setuptools" to reinstall or upgrade.)' def update_md5(filenames):     """Update our built-in md5 registry"""     import re     for name in filenames:         base = os.path.basename(name)         f = open(name,'rb')         md5_data[base] = md5(f.read()).hexdigest()         f.close()     data = ["    %r: %r,\n" % it for it in md5_data.items()]     data.sort()     repl = "".join(data)     import inspect     srcfile = inspect.getsourcefile(sys.modules[__name__])     f = open(srcfile, 'rb'); src = f.read(); f.close()     match = re.search("\nmd5_data = {\n([^}]+)}", src)     if not match:         print >>sys.stderr, "Internal error!"         sys.exit(2)     src = src[:match.start(1)] + repl + src[match.end(1):]     f = open(srcfile,'w')     f.write(src)     f.close() if __name__=='__main__':     if len(sys.argv)>2 and sys.argv[1]=='--md5update':         update_md5(sys.argv[2:])     else:         main(sys.argv[1:])
Adding folders to a zip file using python
I want to create a zip file. Add a folder to the zip file and then add a bunch of files to that folder. So I want to end up with a zip file with a single folder with files in. I don't know if it's bad practice to have folders in zip files or something, but google gives me nothing on the subject. I started out with this: def addFolderToZip(myZipFile,folder): folder = folder.encode('ascii') #convert path to ascii for ZipFile Method for file in glob.glob(folder+"/*"): if os.path.isfile(file): print file myZipFile.write(file, os.path.basename(file), zipfile.ZIP_DEFLATED) elif os.path.isdir(file): addFolderToZip(myZipFile,file) def createZipFile(filename,files,folders): curTime=strftime("__%Y_%m_%d", time.localtime()) filename=filename+curTime; print filename zipFilename=utils.getFileName("files", filename+".zip") myZipFile = zipfile.ZipFile( zipFilename, "w" ) # Open the zip file for writing for file in files: file = file.encode('ascii') #convert path to ascii for ZipFile Method if os.path.isfile(file): (filepath, filename) = os.path.split(file) myZipFile.write( file, filename, zipfile.ZIP_DEFLATED ) for folder in folders: addFolderToZip(myZipFile,folder) myZipFile.close() return (1,zipFilename) (success,filename)=createZipFile(planName,files,folders); Taken from: http://mail.python.org/pipermail/python-list/2006-August/396166.html Which gets rid of all folders and puts all files in the target folder (and its subfolders) into a single zip file. I couldn't get it to add an entire folder. If I feed the path to a folder in myZipFile.write, I get IOError: [Errno 13] Permission denied: '..\packed\bin' Any help is very welcome. Related question: How do I zip the contents of a folder using python (version 2.5)?
You can also use shutil (the make_archive function was added in Python 2.7): import shutil shutil.make_archive("desired_zipfile_name_no_extension", "zip", "name_of_the_folder_you_want_to_zip") This will put the whole folder in the zip; make_archive appends the .zip extension to the first argument itself.
Standard way to embed version into python package?
Is there a standard way to associate a version string with a python package in such a way that I could do the following? import foo print foo.version I would imagine there's some way to retrieve that data without any extra hardcoding, since minor/major strings are specified in setup.py already. An alternative solution that I found was to have import __version__ in my foo/__init__.py and then have __version__.py generated by setup.py.
Here is how I do this. Advantages of the following method: It provides a __version__ attribute. It provides the standard metadata version. Therefore it will be detected by pkg_resources or other tools that parse the package metadata (EGG-INFO and/or PKG-INFO, PEP 0345). It doesn't import your package (or anything else) when building your package; importing at build time can cause problems in some situations. (See the comments below about what problems this can cause.) There is only one place that the version number is written down, so there is only one place to change it when the version number changes, and there is less chance of inconsistent versions. Here is how it works: the "one canonical place" to store the version number is a .py file, named "_version.py", which is in your Python package, for example in myniftyapp/_version.py. This file is a Python module, but your setup.py doesn't import it! (That would defeat feature 3.) Instead your setup.py knows that the contents of this file are very simple, something like: __version__ = "3.6.5" And so your setup.py opens the file and parses it, with code like: import re VERSIONFILE="myniftyapp/_version.py" verstrline = open(VERSIONFILE, "rt").read() VSRE = r"^__version__ = ['\"]([^'\"]*)['\"]" mo = re.search(VSRE, verstrline, re.M) if mo: verstr = mo.group(1) else: raise RuntimeError("Unable to find version string in %s." % (VERSIONFILE,)) Then your setup.py passes that string as the value of the "version" argument to setup(), thus satisfying feature 2. To satisfy feature 1, you can have your package (at run-time, not at setup time!) import the _version file from myniftyapp/__init__.py like this: from _version import __version__ Here is an example of this technique that I've been using for years. The code in that example is a bit more complicated, but the simplified example that I wrote into this comment should be a complete implementation. Here is example code of importing the version. If you see anything wrong with this approach, please let me know: zooko at zooko dot com. If you don't see anything wrong with this approach then use it! Because the more packages come with their version numbers in the expected places the better!
How do you run your own code alongside Tkinter's event loop?
My little brother is just getting into programming, and for his Science Fair project, he's doing a simulation of a flock of birds in the sky. He's gotten most of his code written, and it works nicely, but the birds need to move every moment. Tkinter, however, hogs the time for its own event loop, and so his code won't run. Doing root.mainloop() runs, runs, and keeps running, and the only thing it runs is the event handlers. Is there a way to have his code run alongside the mainloop (without multithreading, it's confusing and this should be kept simple), and if so, what is it? Right now, he came up with an ugly hack, tying his move() function to <b1-motion>, so that as long as he holds the button down and wiggles the mouse, it works. But there's got to be a better way.
Use the after method on the Tk object: from tkinter import * root = Tk() def task(): print("hello") root.after(2000, task) # reschedule event in 2 seconds root.after(2000, task) root.mainloop() Here's the declaration and documentation for the after method: def after(self, ms, func=None, *args): """Call function once after given time. MS specifies the time in milliseconds. FUNC gives the function which shall be called. Additional parameters are given as parameters to the function call. Return identifier to cancel scheduling with after_cancel."""
BeautifulSoup - modifying all links in a piece of HTML?
I need to be able to modify every single link in an HTML document. I know that I need to use the SoupStrainer but I'm not 100% positive on how to implement it. If someone could direct me to a good resource or provide a code example, it'd be very much appreciated. Thanks.
Maybe something like this would work? (I don't have a Python interpreter in front of me, unfortunately) from BeautifulSoup import BeautifulSoup soup = BeautifulSoup('<p>Blah blah blah <a href="http://google.com">Google</a></p>') for a in soup.findAll('a'): a['href'] = a['href'].replace("google", "mysite") result = str(soup)
Simulating a 'local static' variable in python
Consider the following code: def CalcSomething(a): if CalcSomething._cache.has_key(a): return CalcSomething._cache[a] CalcSomething._cache[a] = ReallyCalc(a) return CalcSomething._cache[a] CalcSomething._cache = { } This is the easiest way I can think of for simulating a 'local static' variable in python. What bothers me is that CalcSomething._cache is mentioned outside the function's definition, but the alternative would be something like this: if not hasattr(CalcSomething, "_cache"): setattr(CalcSomething, "_cache", { } ) inside the function's definition, which is really cumbersome. Is there a more elegant way? [EDIT] Just to clarify, this question is not about local function caches, as the example above might suggest. Here is another short example where a 'static local' might be handy: def ParseString(s): return ParseString._parser.parse(s) # Create a Parser object once, which will be used for all parsings. # Assuming a Parser object is heavy on resources, for the sake of this example. ParseString._parser = Parser()
Turn it into a callable object (since that's what it really is.) class CalcSomething(object): def __init__(self): self._cache = {} def __call__(self, a): if a not in self._cache: self._cache[a] = self.reallyCalc(a) return self._cache[a] def reallyCalc(self, a): return # a real answer calcSomething = CalcSomething() Now you can use calcSomething as if it were a function. But it remains tidy and self-contained.
Can I get the matrix determinant using Numpy?
I read in the manual of Numpy that there is a function det(M) that can calculate the determinant. However, I can't find the det() method in Numpy. By the way, I use Python 2.5. There should be no compatibility problems with Numpy.
You can use numpy.linalg.det to compute the determinant of an array: In [1]: import numpy In [2]: M = [[1, 2], [3, 4]] In [3]: numpy.linalg.det(M) Out[3]: -2.0000000000000004
How can I type-check variables in Python?
I have a Python function that takes a numeric argument that must be an integer in order for it behave correctly. What is the preferred way of verifying this in Python? My first reaction is to do something like this: def isInteger(n): return int(n) == n But I can't help thinking that this is 1) expensive 2) ugly and 3) subject to the tender mercies of machine epsilon. Does Python provide any native means of type checking variables? Or is this considered to be a violation of the language's dynamically typed design? EDIT: since a number of people have asked - the application in question works with IPv4 prefixes, sourcing data from flat text files. If any input is parsed into a float, that record should be viewed as malformed and ignored.
isinstance(n, int) If you need to know whether it's definitely an actual int and not a subclass of int (generally you shouldn't need to do this): type(n) is int this: return int(n) == n isn't such a good idea, as cross-type comparisons can be true - notably int(3.0)==3.0
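One wrinkle worth knowing for input validation like yours: bool is a subclass of int, so isinstance(True, int) returns True. If booleans should also count as malformed records, a sketch of the stricter check:

def is_plain_int(n):
    # bool subclasses int, so rule it out explicitly
    return isinstance(n, int) and not isinstance(n, bool)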
Python - Doing absolute imports from a subfolder
Basically I'm asking the same question as this guy: http://stackoverflow.com/questions/72852/how-to-do-relative-imports-in-python But no one gave him a correct answer. Given that you are inside a subfolder and you want to go up a directory and then into ANOTHER subfolder, doing what they suggested does not work (as the OP pointed out in his comments to their answers). I know that you can do this by using sys.path, but I would prefer a cleaner method.

Example:

App
    __init__.py
    Package_A
        __init__.py
        Module_A.py
    Package_B
        __init__.py
        Module_B.py

How would I import Module_A into Module_B?
main.py
setup.py
app/ ->
    __init__.py
    package_a/ ->
        __init__.py
        module_a.py
    package_b/ ->
        __init__.py
        module_b.py

1. You run python main.py.
2. main.py does: import app.package_a.module_a
3. module_a.py does: import app.package_b.module_b

Alternatively 2 or 3 could use: from app.package_a import module_a

That will work as long as you have app in your PYTHONPATH. main.py could be anywhere then. So you write a setup.py to copy (install) the whole app package and subpackages to the target system's python folders, and main.py to the target system's script folders.
Using DPAPI with Python?
Is there a way to use the DPAPI (Data Protection Application Programming Interface) on Windows XP with Python? I would prefer to use an existing module if there is one that can do it. Unfortunately I haven't been able to find a way with Google or Stack Overflow.

EDIT: I've taken the example code pointed to by "dF" and tweaked it into a standalone library which can be simply used at a high level to crypt and decrypt using DPAPI in user mode. Simply call dpapi.cryptData(text_to_encrypt), which returns an encrypted string, or the reverse decryptData(encrypted_data_string), which returns the plain text. Here's the library:

# DPAPI access library
# This file uses code originally created by Crusher Joe:
# http://article.gmane.org/gmane.comp.python.ctypes/420

from ctypes import *
from ctypes.wintypes import DWORD

LocalFree = windll.kernel32.LocalFree
memcpy = cdll.msvcrt.memcpy
CryptProtectData = windll.crypt32.CryptProtectData
CryptUnprotectData = windll.crypt32.CryptUnprotectData
CRYPTPROTECT_UI_FORBIDDEN = 0x01
extraEntropy = "cl;ad13 \0al;323kjd #(adl;k$#ajsd"

class DATA_BLOB(Structure):
    _fields_ = [("cbData", DWORD), ("pbData", POINTER(c_char))]

def getData(blobOut):
    cbData = int(blobOut.cbData)
    pbData = blobOut.pbData
    buffer = c_buffer(cbData)
    memcpy(buffer, pbData, cbData)
    LocalFree(pbData)
    return buffer.raw

def Win32CryptProtectData(plainText, entropy):
    bufferIn = c_buffer(plainText, len(plainText))
    blobIn = DATA_BLOB(len(plainText), bufferIn)
    bufferEntropy = c_buffer(entropy, len(entropy))
    blobEntropy = DATA_BLOB(len(entropy), bufferEntropy)
    blobOut = DATA_BLOB()
    if CryptProtectData(byref(blobIn), u"python_data", byref(blobEntropy),
                        None, None, CRYPTPROTECT_UI_FORBIDDEN, byref(blobOut)):
        return getData(blobOut)
    else:
        return ""

def Win32CryptUnprotectData(cipherText, entropy):
    bufferIn = c_buffer(cipherText, len(cipherText))
    blobIn = DATA_BLOB(len(cipherText), bufferIn)
    bufferEntropy = c_buffer(entropy, len(entropy))
    blobEntropy = DATA_BLOB(len(entropy), bufferEntropy)
    blobOut = DATA_BLOB()
    if CryptUnprotectData(byref(blobIn), None, byref(blobEntropy), None, None,
                          CRYPTPROTECT_UI_FORBIDDEN, byref(blobOut)):
        return getData(blobOut)
    else:
        return ""

def cryptData(text):
    return Win32CryptProtectData(text, extraEntropy)

def decryptData(cipher_text):
    return Win32CryptUnprotectData(cipher_text, extraEntropy)
I have been using CryptProtectData and CryptUnprotectData through ctypes, with the code from http://article.gmane.org/gmane.comp.python.ctypes/420 and it has been working well.
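For completeness, a round-trip sketch using the library from the question, assuming it was saved as dpapi.py on the module path:

import dpapi

secret = "my secret text"
blob = dpapi.cryptData(secret)           # opaque ciphertext, tied to the current user
assert dpapi.decryptData(blob) == secret # decrypts only under the same user account
print "round trip OK"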
Parallel processing from a command queue on Linux (bash, python, ruby... whatever)
I have a list/queue of 200 commands that I need to run in a shell on a Linux server. I only want to have a maximum of 10 processes running (from the queue) at once. Some processes will take a few seconds to complete, other processes will take much longer. When a process finishes I want the next command to be "popped" from the queue and executed. Does anyone have code to solve this problem? Further elaboration: There are 200 pieces of work that need to be done, in a queue of some sort. I want to have at most 10 pieces of work going on at once. When a thread finishes a piece of work it should ask the queue for the next piece of work. If there's no more work in the queue, the thread should die. When all the threads have died it means all the work has been done. The actual problem I'm trying to solve is using imapsync to synchronize 200 mailboxes from an old mail server to a new mail server. Some users have large mailboxes and take a long time to sync, others have very small mailboxes and sync quickly.
On the shell, xargs can be used to queue parallel command processing. For example, for having always 3 sleeps in parallel, sleeping for 1 second each, and executing 10 sleeps in total, do

echo {1..10} | xargs -d ' ' -n1 -P3 sh -c 'sleep 1s' _

and it would sleep for 4 seconds in total. If you have a list of names, and want to pass the names to commands executed, again executing 3 commands in parallel, do

cat names | xargs -n1 -P3 process_name

This would execute the command process_name alice, process_name bob, and so on.
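If you'd rather stay in Python, the same pattern is a worker pool draining a command queue. A minimal sketch using the standard library (Python 2.6+); the command list is a placeholder for your real imapsync invocations:

import subprocess
from multiprocessing.pool import ThreadPool

commands = ["imapsync ... user%d" % i for i in range(200)]  # placeholder commands

def run(cmd):
    # each worker thread blocks on one subprocess at a time
    return subprocess.call(cmd, shell=True)

pool = ThreadPool(10)               # at most 10 commands in flight
results = pool.map(run, commands)   # blocks until the queue is drained
pool.close()
pool.join()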
How are POST and GET variables handled in Python?
In PHP you can just use $_POST for POST and $_GET for GET (Query string) variables. What's the equivalent in Python?
Suppose you're posting an HTML form with this:

<input type="text" name="username">

If using raw cgi:

import cgi
form = cgi.FieldStorage()
print form["username"]

If using Django, Pylons, Flask or Pyramid:

print request.GET['username']   # for GET form method
print request.POST['username']  # for POST form method

Using Turbogears or Cherrypy:

from cherrypy import request
print request.params['username']

Web.py:

form = web.input()
print form.username

Werkzeug:

print request.form['username']

If using Cherrypy or Turbogears, you can also define your handler function taking a parameter directly:

def index(self, username):
    print username

Google App Engine:

class SomeHandler(webapp2.RequestHandler):
    def post(self):
        name = self.request.get('username')  # this will get the value from the field named username
        self.response.write(name)            # this will write on the document

So you really will have to choose one of those frameworks.
Combining two sorted lists in Python
I have two lists of objects. Each list is already sorted by a property of the object that is of the datetime type. I would like to combine the two lists into one sorted list. Is the best way just to do a sort or is there a smarter way to do this in Python?
People seem to be overcomplicating this. Just combine the two lists, then sort them:

>>> l1 = [1, 3, 4, 7]
>>> l2 = [0, 2, 5, 6, 8, 9]
>>> l1.extend(l2)
>>> sorted(l1)
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]

..or shorter (and without modifying l1):

>>> sorted(l1 + l2)
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]

..easy! Plus, it's using only two built-in functions, so assuming the lists are of a reasonable size, it should be quicker than implementing the sorting/merging in a loop. More importantly, the above is much less code, and very readable. If your lists are large (over a few hundred thousand, I would guess), it may be quicker to use an alternative/custom sorting method, but there are likely other optimisations to be made first (e.g. not storing millions of datetime objects).

Using timeit.Timer().repeat() (which repeats the functions 1,000,000 times), I loosely benchmarked it against ghoseb's solution, and sorted(l1 + l2) is substantially quicker:

merge_sorted_lists took..
[9.7439379692077637, 9.8844599723815918, 9.552299976348877]

sorted(l1+l2) took..
[2.860386848449707, 2.7589840888977051, 2.7682540416717529]
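If you do want the linear-time merge that the "already sorted" property allows, the standard library has one: heapq.merge (Python 2.6+), which also works lazily on iterators. Note it only gained a key argument in Python 3.5, so for objects sorted by an attribute you'd still use sorted() as above on older versions:

import heapq

l1 = [1, 3, 4, 7]
l2 = [0, 2, 5, 6, 8, 9]
merged = list(heapq.merge(l1, l2))  # O(n) merge of already-sorted inputs
print merged  # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]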
Python regular expressions - how to capture multiple groups from a wildcard expression?
I have a Python regular expression that contains a group which can occur zero or many times - but when I retrieve the list of groups afterwards, only the last one is present. Example:

re.search("(\w)*", "abcdefg").groups()

This returns the tuple ('g',). I need it to return ('a','b','c','d','e','f','g'). Is that possible? How can I do it?
re.findall(r"\w","abcdefg")
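For illustration, the difference in output: a repeated group only keeps its last capture, while findall returns every match:

>>> import re
>>> re.search(r"(\w)*", "abcdefg").groups()
('g',)
>>> re.findall(r"\w", "abcdefg")
['a', 'b', 'c', 'd', 'e', 'f', 'g']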
Python code to pick out all possible combinations from a list?
I have a list with 15 numbers in, and I need to write some code that produces all 32,768 combinations of those numbers. I've found some code (by googling) that apparently does what I'm looking for, but I found the code fairly opaque and am wary of using it. Plus I have a feeling there must be a more elegant solution. The only thing that occurs to me would be to just loop through the decimal integers 1-32768 and convert those to binary, and use the binary representation as a filter to pick out the appropriate numbers. Does anyone know of a better way? Using map(), maybe?
This answer missed one aspect: the OP asked for ALL combinations... not just combinations of length "r". So you'd either have to loop through all lengths "L":

import itertools

stuff = [1, 2, 3]
for L in range(0, len(stuff)+1):
    for subset in itertools.combinations(stuff, L):
        print(subset)

Or -- if you want to get snazzy (or bend the brain of whoever reads your code after you) -- you can generate the chain of "combinations()" generators, and iterate through that:

from itertools import chain, combinations

def all_subsets(ss):
    return chain(*map(lambda x: combinations(ss, x), range(0, len(ss)+1)))

for subset in all_subsets(stuff):
    print(subset)
Writing unit tests in Django / Python
I've not used unit tests before other than a quick introduction in a Uni course. I'm currently writing an application though and would like to teach myself TDD in the process. The problem is, I've no idea what to test or really how. I'm writing a Django application, and so far have only created the models (and customised the admin application). This is how I've written the skeletons of my tests so far:

class ModelTests(TestCase):
    fixtures = ['initial_data.json',]

    def setUp(self):
        pass

    def testSSA(self):
        ssa = SSA.objects.create(name="sdfsdf", cost_center=1111, street_num=8,
                                 street_name="dfsdfsf Street", suburb="sdfsdfsdf",
                                 post_code=3333)

    def testResident(self):
        pass

    def testSSA_Client(self):
        pass

I planned to write a function to test each model within the ModelTests class. Is this a good way of writing tests? Also, what exactly should I be testing for? That creating a model with all of the fields completed works? That a half-complete model fails? That any special cases are tested (like a null and is_required=False)? I trust the ORM, which as far as I'm aware is heavily tested, so I shouldn't need to test all of the methods, should I? What do I need to test for a web application written in Django/Python? Some examples would be nice.
Is a function to test each model within the ModelTests class a good way of writing tests?

No.

What exactly should I be testing for? That creating a model with all of the fields completed works? That a half-complete model fails? That any special cases are tested (like a null and is_required=False)? I trust the ORM, which as far as I'm aware is heavily tested, so I shouldn't need to test all of the methods, should I?

Not much of that. You might test validation rules, but that isn't meaningful until you've defined some Form objects. Then you have something to test -- does the form enforce all the rules. You'll need at least one TestCase class for each form. A function will be a scenario -- different combinations of inputs that are allowed or not allowed.

For each Model class, you'll need at least one TestCase class definition. TestCases are cheap, define lots of them. Your model embodies your "business entity" definitions. Your models will have methods that implement business rules. Your methods will do things like summarize, filter, calculate, aggregate, reduce, all kinds of things. You'll have functions for each of these features of a model class.

You're not testing Django. You're testing how your business rules actually work in Django.

Later, when you have more stuff in your application (forms, views, urls, etc.) you'll want to use the Django unittest client to exercise each method for each url. Again, one TestCase per url.
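For instance, a sketch of testing a business rule on a model; the Account model and its is_overdrawn() method are hypothetical stand-ins, not from the question:

from django.test import TestCase

class AccountRuleTests(TestCase):
    def test_overdraft_flag(self):
        # business rule: balance below zero marks the account overdrawn
        acct = Account.objects.create(owner="alice", balance=-10)
        self.assertTrue(acct.is_overdrawn())

    def test_positive_balance(self):
        acct = Account.objects.create(owner="bob", balance=5)
        self.assertFalse(acct.is_overdrawn())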
What is a simple way to generate keywords from a text?
I suppose I could take a text and remove high-frequency English words from it. By keywords, I mean that I want to extract the words that are most characteristic of the content of the text (tags). It doesn't have to be perfect; a good approximation is fine for my needs. Has anyone done anything like that? Do you know of a Perl or Python library that does that? Lingua::EN::Tagger is exactly what I asked for; however, I need a library that can work for French text too.
The name for the "high frequency English words" is stop words and there are many lists available. I'm not aware of any python or perl libraries, but you could encode your stop word list in a binary tree or hash (or you could use python's frozenset), then as you read each word from the input text, check if it is in your 'stop list' and filter it out. Note that after you remove the stop words, you'll need to do some stemming to normalize the resulting text (remove plurals, -ings, -eds), then remove all the duplicate "keywords".
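A minimal sketch of that pipeline in Python (the stop-word list here is a tiny placeholder; real lists run to hundreds of words, and the stemming step described above is omitted):

import re
from collections import defaultdict

STOP_WORDS = frozenset(["the", "a", "an", "and", "or", "of", "to", "in", "is", "it"])

def keywords(text, top_n=10):
    counts = defaultdict(int)
    for word in re.findall(r"[a-z']+", text.lower()):
        if word not in STOP_WORDS:
            counts[word] += 1
    # most frequent non-stop words first
    return sorted(counts, key=counts.get, reverse=True)[:top_n]

print keywords("The quick brown fox jumps over the lazy dog and the quick cat")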
How can I hide the console window in a PyQt app running on Windows?
Surely this is possible? I have been hunting through PyQt tutorials and documentation but cannot find the answer to it. Probably I just need to phrase my search query differently. [Edit] Thanks PEZ for the answer - more details including use of the .pyw extension in Python Programming on Win32 chapter 20
I think you should be able to run your app with pythonw.exe.
Converting string into datetime
Short and simple. I've got a huge list of date-times like this as strings: Jun 1 2005 1:33PM Aug 28 1999 12:00AM I'm going to be shoving these back into proper datetime fields in a database so I need to magic them into real datetime objects. Any help (even if it's just a kick in the right direction) would be appreciated. Edit: This is going through Django's ORM so I can't use SQL to do the conversion on insert.
from datetime import datetime

date_object = datetime.strptime('Jun 1 2005 1:33PM', '%b %d %Y %I:%M%p')

Link to the Python documentation for strptime, and a link for the strftime format mask. This is also a really nice reference for strftime.
Why do I have to specify my own class when using super(), and is there a way to get around it?
When using Python's super() to do method chaining, you have to explicitly specify your own class, for example: class MyDecorator(Decorator): def decorate(self): super(MyDecorator, self).decorate() I have to specify the name of my class MyDecorator as an argument to super(). This is not DRY. When I rename my class now I will have to rename it twice. Why is this implemented this way? And is there a way to weasel out of having to write the name of the class twice(or more)?
Your wishes come true: just use Python 3.0. In it you just use super() and it does super(ThisClass, self). Documentation here. Code sample from the documentation:

class C(B):
    def method(self, arg):
        super().method(arg)
        # This does the same thing as:
        # super(C, self).method(arg)
How can I return system information in Python?
Using Python, how can information such as CPU usage, memory usage (free, used, etc), process count, etc be returned in a generic manner so that the same code can be run on Linux, Windows, BSD, etc? Alternatively, how could this information be returned on all the above systems with the code specific to that OS being run only if that OS is indeed the operating environment?
Regarding cross-platform: your best bet is probably to write platform-specific code, and then import it conditionally. e.g.

import sys

if sys.platform == 'win32':
    import win32_sysinfo as sysinfo
elif sys.platform == 'darwin':
    import mac_sysinfo as sysinfo
elif 'linux' in sys.platform:
    import linux_sysinfo as sysinfo
# etc

print 'Memory available:', sysinfo.memory_available()

For specific resources, as Anthony points out you can access /proc under Linux. For Windows, you could have a poke around at the Microsoft Script Repository. I'm not sure where to get that kind of information on Macs, but I can think of a great website where you could ask :-)
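As a concrete example of what the hypothetical linux_sysinfo module above might contain, the free-memory figure can be scraped from /proc/meminfo, whose lines follow the usual "MemFree: 123456 kB" layout:

def memory_available():
    # /proc/meminfo lists values in kB, e.g. "MemFree:   123456 kB"
    for line in open('/proc/meminfo'):
        if line.startswith('MemFree:'):
            return int(line.split()[1]) * 1024  # convert kB to bytes
    return None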
Implementing a "rules engine" in Python
I'm writing a log collection / analysis application in Python and I need to write a "rules engine" to match and act on log messages. It needs to feature:

- Regular expression matching for the message itself
- Arithmetic comparisons for message severity/priority
- Boolean operators

An example rule I envision would probably be something like:

(message ~ "program\\[\d+\\]: message" and severity >= high) or (severity >= critical)

I'm thinking about using PyParsing or similar to actually parse the rules and construct the parse tree. The current (not yet implemented) design I have in mind is to have classes for each rule type, and construct and chain them together according to the parse tree. Then each rule would have a "matches" method that could take a message object and return whether or not it matches the rule. Very quickly, something like:

class RegexRule(Rule):
    def __init__(self, regex):
        self.regex = regex
    def match(self, message):
        return self.regex.match(message.contents)

class SeverityRule(Rule):
    def __init__(self, operator, severity):
        self.operator = operator
        self.severity = severity
    def match(self, message):
        if self.operator == ">=":
            return message.severity >= self.severity
        # more conditions here...

class BooleanAndRule(Rule):
    def __init__(self, rule1, rule2):
        self.rule1 = rule1
        self.rule2 = rule2
    def match(self, message):
        return self.rule1.match(message) and self.rule2.match(message)

These rule classes would then be chained together according to the parse tree of the message, and the match() method called on the top rule, which would cascade down until all the rules were evaluated. I'm just wondering if this is a reasonable approach, or if my design and ideas are totally out of whack? Unfortunately I never had the chance to take a compiler design course or anything like that in University, so I'm pretty much coming up with this stuff of my own accord. Could someone with some experience in these kinds of things please chime in and evaluate the idea?

EDIT: Some good answers so far, here's a bit of clarification. The aim of the program is to collect log messages from servers on the network and store them in the database. Apart from the collection of log messages, the collector will define a set of rules that will either match or ignore messages depending on the conditions and flag an alert if necessary. I can't see the rules being of more than a moderate complexity, and they will be applied in a chain (list) until either a matching alert or ignore rule is hit. However, this part isn't quite as relevant to the question. As far as the syntax being close to Python syntax, yes that is true, however I think it would be difficult to filter the Python down to the point where the user couldn't inadvertently do some crazy stuff with the rules that was not intended.
Do not invent yet another rules language. Either use Python or use some other existing, already debugged and working language like BPEL.

Just write your rules in Python, import them and execute them. Life is simpler, far easier to debug, and you've actually solved the actual log-reading problem without creating another problem.

Imagine this scenario. Your program breaks. It's now either the rule parsing, the rule execution, or the rule itself. You must debug all three. If you wrote the rule in Python, it would be the rule, and that would be that.

"I think it would be difficult to filter the Python down to the point where the user couldn't inadvertently do some crazy stuff with the rules that was not intended."

This is largely the "I want to write a compiler" argument.

1) You're the primary user. You'll write, debug and maintain the rules. Are there really armies of crazy programmers who will be doing crazy things? Really? If there is any potential crazy user, talk to them. Teach them. Don't fight against them by inventing a new language (which you will then have to maintain and debug forever).

2) It's just log processing. There's no real cost to the craziness. No one is going to subvert the world economic system with faulty log handling. Don't turn a small task of a few dozen lines of Python into a 1000-line interpreter that interprets a few dozen lines of some rule language. Just write the few dozen lines of Python as quickly and clearly as you can and move on to the next project.
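To make "just write the rules in Python" concrete, one possible shape for it, as a sketch only; the severity constants and the message attributes (contents, severity) are illustrative, not from the question:

import re

HIGH, CRITICAL = 2, 3   # illustrative severity scale

def program_message_rule(msg):
    # mirrors: (message ~ "program\[\d+\]: message" and severity >= high)
    #          or (severity >= critical)
    return ((re.search(r"program\[\d+\]: message", msg.contents) is not None
             and msg.severity >= HIGH)
            or msg.severity >= CRITICAL)

RULES = [program_message_rule]   # the rule chain, applied in order

def first_match(message):
    for rule in RULES:
        if rule(message):
            return rule
    return None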
In what situation should the built-in 'operator' module be used in python?
I'm speaking of this module: http://docs.python.org/library/operator.html From the article: The operator module exports a set of functions implemented in C corresponding to the intrinsic operators of Python. For example, operator.add(x, y) is equivalent to the expression x+y. The function names are those used for special class methods; variants without leading and trailing __ are also provided for convenience. I'm not sure I understand the benefit or purpose of this module.
Possibly the most popular usage is operator.itemgetter. Given a list lst of tuples, you can sort by the ith element by:

lst.sort(key=operator.itemgetter(i))

Certainly, you could do the same thing without operator by defining your own key function, but the operator module makes it slightly neater.

As to the rest, Python allows a functional style of programming, and so it can come up -- for instance, Greg's reduce example. You might argue: "Why do I need operator.add when I can just do: add = lambda x, y: x+y?" The answers are:

- operator.add is (I think) slightly faster.
- It makes the code easier to understand for you, or another person later, looking at it. They don't need to look for the definition of add, because they know what the operator module does.
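Since the reduce example referred to above isn't shown here, a quick illustration of both uses (Python 2, where reduce is a builtin):

import operator

print reduce(operator.add, [1, 2, 3, 4])                 # 10, i.e. ((1+2)+3)+4
print sorted([(2, 'b'), (1, 'a')], key=operator.itemgetter(0))  # [(1, 'a'), (2, 'b')]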
Ruby equivalent of Python's "dir"?
In Python we can "dir" a module, like this:

>>> import re
>>> dir(re)

And it lists all functions in the module. Is there a similar way to do this in Ruby?
As far as I know there's no exact equivalent, but you get somewhere with object.methods.sort
How can I listen for 'usb device inserted' events in Linux, in Python?
I'd like to write a Python script for Amarok in Linux to automatically copy the stackoverflow podcast to my player. When I plug in the player, it would mount the drive, copy any pending podcasts, and eject the player. How can I listen for the "plugged in" event? I have looked through hald but couldn't find a good example.
Update: As said in comments, Hal is not supported in recent distributions; the standard now is udev. Here is a small example that makes use of the glib loop and udev; I keep the Hal version for historical reasons.

This is basically the example in the pyudev documentation, adapted to work with older versions, and with the glib loop. Notice that the filter should be customized for your specific needs:

import glib
from pyudev import Context, Monitor

try:
    from pyudev.glib import MonitorObserver

    def device_event(observer, device):
        print 'event {0} on device {1}'.format(device.action, device)
except:
    from pyudev.glib import GUDevMonitorObserver as MonitorObserver

    def device_event(observer, action, device):
        print 'event {0} on device {1}'.format(action, device)

context = Context()
monitor = Monitor.from_netlink(context)

monitor.filter_by(subsystem='usb')
observer = MonitorObserver(monitor)

observer.connect('device-event', device_event)
monitor.start()

glib.MainLoop().run()

Old version with Hal and d-bus:

You can use D-Bus bindings and listen to DeviceAdded and DeviceRemoved signals. You will have to check the capabilities of the added device in order to select the storage devices only. Here is a small example; you can remove the comments and try it.

import dbus
import gobject

class DeviceAddedListener:
    def __init__(self):

You need to connect to Hal Manager using the System Bus.

        self.bus = dbus.SystemBus()
        self.hal_manager_obj = self.bus.get_object(
            "org.freedesktop.Hal",
            "/org/freedesktop/Hal/Manager")
        self.hal_manager = dbus.Interface(self.hal_manager_obj,
                                          "org.freedesktop.Hal.Manager")

And you need to connect a listener to the signals you are interested in, in this case DeviceAdded.

        self.hal_manager.connect_to_signal("DeviceAdded", self._filter)

I'm using a filter based on capabilities. It will accept any volume and will call do_something with it. You can read the Hal documentation to find more suitable queries for your needs, or more information about the properties of the Hal devices.

    def _filter(self, udi):
        device_obj = self.bus.get_object("org.freedesktop.Hal", udi)
        device = dbus.Interface(device_obj, "org.freedesktop.Hal.Device")

        if device.QueryCapability("volume"):
            return self.do_something(device)

Example function that shows some information about the volume:

    def do_something(self, volume):
        device_file = volume.GetProperty("block.device")
        label = volume.GetProperty("volume.label")
        fstype = volume.GetProperty("volume.fstype")
        mounted = volume.GetProperty("volume.is_mounted")
        mount_point = volume.GetProperty("volume.mount_point")
        try:
            size = volume.GetProperty("volume.size")
        except:
            size = 0

        print "New storage device detected:"
        print "  device_file: %s" % device_file
        print "  label: %s" % label
        print "  fstype: %s" % fstype
        if mounted:
            print "  mount_point: %s" % mount_point
        else:
            print "  not mounted"
        print "  size: %s (%.2fGB)" % (size, float(size) / 1024**3)

if __name__ == '__main__':
    from dbus.mainloop.glib import DBusGMainLoop
    DBusGMainLoop(set_as_default=True)
    loop = gobject.MainLoop()
    DeviceAddedListener()
    loop.run()
Why does 1+++2 = 3 in python?
I am from a C background and I just started learning Python... while trying some programs, I got this doubt: how does Python evaluate the expression 1+++2? No matter how many '+' signs I put in between, it prints 3 as the answer. Also, for 1--2 it prints 3, and for 1---2 it prints -1. Can anyone please explain this behavior? Regards, Sunil
Your expression is the same as:

1+(+(+2))

Any numeric expression can be preceded by - to make it negative, or + to do nothing (the option is present for symmetry). With negative signs:

1-(-(2)) = 1-(-2) = 1+2 = 3

and

1-(-(-2)) = 1-(2) = -1

I see you clarified your question to say that you come from a C background. In Python, there are no increment operators like ++ and -- in C, which was probably the source of your confusion. To increment or decrement a variable i or j in Python, use this style:

i += 1
j -= 1
Jython and python modules
I've just started using the PythonInterpreter from within my Java classes, and it works great! However, if I try to include Python modules (re, HTMLParser, etc.), I'm receiving the following exception (for re):

Exception in thread "main" Traceback (innermost last):
  File "", line 1, in ?
ImportError: no module named re

How could I make the classes from the Jython jar "see" the modules Python has available?
If you embed Jython and want to use Python modules somewhere, you can set the path (sys.path) in your Java code:

public void init() {
    interp = new PythonInterpreter(null, new PySystemState());
    PySystemState sys = Py.getSystemState();
    sys.path.append(new PyString(rootPath));
    sys.path.append(new PyString(modulesDir));
}

Py is in org.python.core. rootPath and modulesDir point wherever you want; let rootPath point to where you located the standard Jython lib. Have a look at src/org/python/util/PyServlet.java in the Jython source code for an example.
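Alternatively, the same path tweak can be done from the Python side once the interpreter is up; a sketch, with the Lib location being whatever matches your install (both paths below are placeholders):

# executed via interp.exec(...) from Java, or put in a bootstrap script
import sys
sys.path.append('/opt/jython/Lib')          # standard-library modules (re, etc.)
sys.path.append('/path/to/your/modules')    # your own Python modules
import re  # should now resolve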
Why compile Python code?
Why would you compile a Python script? You can run them directly from the .py file and it works fine, so is there a performance advantage or something? I also notice that some files in my application get compiled into .pyc while others do not, why is this?
It's compiled to bytecode, which can be loaded and used much, much faster. The reason some files aren't compiled is that the main script, which you invoke with python main.py, is recompiled every time you run the script. All imported scripts will be compiled and stored on the disk. Important addition by Ben Blank: It's worth noting that while running a compiled script has a faster startup time (as it doesn't need to be compiled), it doesn't run any faster.
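If you want the compilation to happen ahead of time rather than as a side effect of import, the standard library can do it explicitly (python -m compileall . from the shell does the same for a whole tree):

import py_compile, compileall

py_compile.compile('main.py')   # writes main.pyc next to main.py
compileall.compile_dir('.')     # recursively compiles every .py in the tree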
Any way to override the and operator in Python?
I tried overriding __and__, but that is for the & operator, not and - the one that I want. Can I override and?
No, you can't override and and or. With the behavior that these have in Python (i.e. short-circuiting) they are more like control flow tools than operators, and overriding them would be more like overriding if than + or -. You can influence the truth value of your objects (i.e. whether they evaluate as true or false) by overriding __nonzero__ (or __bool__ in Python 3).
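A short illustration of the truth-value hook (Python 2 spelling):

class Bag(object):
    def __init__(self, items):
        self.items = items
    def __nonzero__(self):            # __bool__ in Python 3
        return len(self.items) > 0

print Bag([]) and "full" or "empty"   # prints "empty": and/or see the empty Bag as false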
Customizing an Admin form in Django while also using autodiscover
I want to modify a few tiny details of Django's built-in django.contrib.auth module. Specifically, I want a different form that makes username an email field (and email an alternate email address. (I'd rather not modify auth any more than necessary -- a simple form change seems to be all that's needed.) When I use autodiscover with a customized ModelAdmin for auth I wind up conflicting with auth's own admin interface and get an "already registered" error. It looks like I have to create my own admin site, enumerating all of my Models. It's only 18 classes, but it seems like a DRY problem -- every change requires both adding to the Model and adding to the customized admin site. Or, should I write my own version of "autodiscover with exclusions" to essentially import all the admin modules except auth?
None of the above. Just use admin.site.unregister(). Here's how I recently added filtering Users on is_active in the admin (n.b. is_active filtering is now on the User model by default in Django core; it still works here as an example), all DRY as can be:

from django.contrib import admin
from django.contrib.auth.admin import UserAdmin
from django.contrib.auth.models import User

class MyUserAdmin(UserAdmin):
    list_filter = UserAdmin.list_filter + ('is_active',)

admin.site.unregister(User)
admin.site.register(User, MyUserAdmin)
Pros and cons of IronPython and IronPython Studio
We are ready in our company to move everything to Python instead of C#. We are a consulting company and we usually write small projects in C#; we don't do huge projects, and our work is based more on complex mathematical models than on complex software structures. So we believe IronPython is a good platform for us, because it provides standard GUI functionality on Windows and access to all of the .NET libraries. I know IronPython Studio is not complete, and in fact I had a hard time adding my references, but I was wondering if someone could list some of the pros and cons of this migration for us, considering that Python code is easier for our clients to read and we usually deliver a proof-of-concept prototype instead of fully functional code; our clients usually go ahead and implement the application themselves.
My company, Resolver Systems, develops what is probably the biggest application written in IronPython yet. (It's called Resolver One, and it's a Pythonic spreadsheet.) We are also hosting the Ironclad project (to run CPython extensions under IronPython) and that is going well (we plan to release a beta of Resolver One & numpy soon). The reason we chose IronPython was the .NET integration - our clients want 100% integration on Windows and the easiest way to do that right now is .NET. We design our GUI (without behaviour) in Visual Studio, compile it into a DLL and subclass it from IronPython to add behaviour. We have found that IronPython is faster in some cases and slower in others. However, the IronPython team is very responsive; whenever we report a regression they fix it and usually backport it to the bugfix release. If you worry about performance, you can always implement a critical part in C# (we haven't had to do that yet). If you have experience with C#, then IronPython will be natural for you, and easier than C#, especially for prototypes. Regarding IronPython Studio, we don't use it. Each of us has his editor of choice (TextPad, Emacs, Vim & Wing), and everything works fine.
Way to have compiled python files in a separate folder?
Is it possible to have Python save the .pyc files to a separate folder location that is in sys.path?

/code
    foo.py
    foo.pyc
    bar.py
    bar.pyc

To:

/code
    foo.py
    bar.py

/code_compiled
    foo.pyc
    bar.pyc

I would like this because I feel it'd be more organized. Thanks for any help you can give me.
There is PEP 304: Controlling Generation of Bytecode Files. Its status is Withdrawn and the corresponding patch was rejected. Therefore there is no direct way to do it. If you don't need the source code, then you may just delete the *.py files; *.pyc files can be used as-is or packed in an egg.
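A sketch of the pyc-only approach this suggests, with illustrative paths; note Python 2 will happily import a bare .pyc, but only if code_compiled is on sys.path and the .py files in /code no longer shadow it (delete the sources, or put code_compiled earlier on the path):

import compileall, os, shutil

compileall.compile_dir('code')          # writes foo.pyc beside foo.py

if not os.path.isdir('code_compiled'):
    os.mkdir('code_compiled')
for name in os.listdir('code'):
    if name.endswith('.pyc'):
        shutil.move(os.path.join('code', name),
                    os.path.join('code_compiled', name))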
Usage of __slots__?
What is the purpose of __slots__ in Python — especially with respect to when would I want to use it and when not?
Quoting Jacob Hallen:

The proper use of __slots__ is to save space in objects. Instead of having a dynamic dict that allows adding attributes to objects at anytime, there is a static structure which does not allow additions after creation. This saves the overhead of one dict for every object that uses slots. While this is sometimes a useful optimization, it would be completely unnecessary if the Python interpreter was dynamic enough so that it would only require the dict when there actually were additions to the object.

Unfortunately there is a side effect to slots. They change the behavior of the objects that have slots in a way that can be abused by control freaks and static typing weenies. This is bad, because the control freaks should be abusing the metaclasses and the static typing weenies should be abusing decorators, since in Python, there should be only one obvious way of doing something.

Making CPython smart enough to handle saving space without __slots__ is a major undertaking, which is probably why it is not on the list of changes for P3k (yet).
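A quick illustration of both effects, the space saving and the "no additions after creation" restriction:

class Point(object):
    __slots__ = ('x', 'y')      # no per-instance __dict__ is created
    def __init__(self, x, y):
        self.x = x
        self.y = y

p = Point(1, 2)
p.x = 10    # fine: 'x' is a declared slot
p.z = 3     # raises AttributeError: 'Point' object has no attribute 'z'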