TFS Webservice Documentation
We use a lot of Python to do much of our deployment and it would be handy to connect to our TFS server to get information on iteration paths, tickets etc. I can see the web service but am unable to find any documentation. Just wondering if anyone knew of anything?
The web services are not documented by Microsoft as they are not an officially supported route to talk to TFS. The officially supported route is to use their .NET API. For your sort of application, the course of action I usually recommend is to create your own web service shim that lives on the TFS server (or another server) and uses their API to talk to the server, but allows you to present the data in a nice way to your application. Their object model simplifies the interactions a great deal (depending on what you want to do), so it actually means less code overall - but better tested and testable code - and you can also work around things such as the NTLM auth used by the TFS web services. Hope that helps, Martin.
Can you monkey patch methods on core types in python?
Ruby can add methods to the Number class and other core types to get effects like: 1.should_equal(1) But it seems like python cannot do this. Is this true? And if so, why? Does it have something to do with the fact that type can't be modified? Update: Rather than talking about different definitions of monkey patching, I would like to just focus on the example above. I have already concluded that it cannot be done as a few of you have answered. But I would like a more detailed explanation of why it cannot be done, and maybe what feature, if available in python, would allow this. To answer some of you: The reason I might want to do this is simply aesthetics/readability. item.price.should_equal(19.99) reads more like English and clearly indicates which is the tested value and which is the expected value, as opposed to: should_equal(item.price, 19.99) This concept is what Rspec and some other Ruby frameworks are based on.
No, you cannot. In Python, all data (classes, methods, functions, etc) defined in C extension modules (including builtins) are immutable. This is because C modules are shared between multiple interpreters in the same process, so monkeypatching them would also affect unrelated interpreters in the same process. However, classes defined in Python code may be monkeypatched because they are local to that interpreter.
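To make the distinction concrete, here is a minimal sketch (the Price class and the should_equal name are purely illustrative):

    # Classes defined in Python code can be patched freely:
    class Price(object):
        def __init__(self, value):
            self.value = value

    def should_equal(self, other):
        assert self.value == other, "%r != %r" % (self.value, other)

    Price.should_equal = should_equal   # works: Price is defined in Python
    Price(19.99).should_equal(19.99)

    # Built-in types implemented in C cannot be patched:
    try:
        float.should_equal = should_equal
    except TypeError, e:
        print e   # can't set attributes of built-in/extension type 'float'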
XML parsing - ElementTree vs SAX and DOM
Python has several ways to parse XML... I understand the very basics of parsing with SAX. It functions as a stream parser, with an event-driven API. I understand the DOM parser also. It reads the XML into memory and converts it to objects that can be accessed with Python. Generally speaking, it was easy to choose between the two depending on what you needed to do, memory constraints, performance, etc. (hopefully I'm correct so far). Since Python 2.5, we also have ElementTree. How does this compare to DOM and SAX? Which is it more similar to? Why is it better than the previous parsers?
ElementTree is much easier to use, because it represents an XML tree (basically) as a structure of lists, and attributes are represented as dictionaries. ElementTree needs much less memory for XML trees than DOM (and thus is faster), and the parsing overhead via iterparse is comparable to SAX. Additionally, iterparse returns partial structures, and you can keep memory usage constant during parsing by discarding the structures as soon as you process them. ElementTree, as included in Python 2.5, has only a small feature set compared to full-blown XML libraries, but it's enough for many applications. If you need a validating parser or complete XPath support, lxml is the way to go. For a long time, it used to be quite unstable, but I haven't had any problems with it since 2.1. ElementTree deviates from DOM, in which nodes have access to their parent and siblings. Handling actual documents rather than data stores is also a bit cumbersome, because text nodes aren't treated as actual nodes. In the XML snippet <a>This is <b>a</b> test</a> the string "test" will be the so-called tail of element b. In general, I recommend ElementTree as the default for all XML processing with Python, and DOM or SAX as the solutions for specific problems.
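For example, the constant-memory iterparse pattern looks roughly like this (a sketch; the 'record' tag and the handle function are hypothetical):

    import xml.etree.cElementTree as ET

    def handle(elem):
        pass  # process one record here

    for event, elem in ET.iterparse('big.xml'):
        if elem.tag == 'record':
            handle(elem)
            elem.clear()  # discard the processed subtree so memory stays constant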
Standalone Python applications in Linux
How can I distribute a standalone Python application in Linux? I think I can take for granted the presence of a recent Python interpreter in any modern distribution. The problem is dealing with those libraries that do not belong to the standard library, i.e. wxPython, scipy, python cryptographic toolkit, reportlab, and so on. Is there a working Linux counterpart to, say, py2exe (which, by the way, I have never tried)? Is there a free, opensource one?
Create a deb (for everything Debian-derived) and an rpm (for Fedora/SuSE). Add the right dependencies to the packaging and you can be reasonably sure that it will work.
What is the best project structure for a Python application?
Imagine that you want to develop a non-trivial end-user desktop (not web) application in Python. What is the best way to structure the project's folder hierarchy? Desirable features are ease of maintenance, IDE-friendliness, suitability for source control branching/merging, and easy generation of install packages. In particular: Where do you put the source? Where do you put application startup scripts? Where do you put the IDE project cruft? Where do you put the unit/acceptance tests? Where do you put non-Python data such as config files? Where do you put non-Python sources such as C++ for pyd/so binary extension modules?
Doesn't too much matter. Whatever makes you happy will work. There aren't a lot of silly rules because Python projects can be simple. /scripts or /bin for that kind of command-line interface stuff /tests for your tests /lib for your C-language libraries /doc for most documentation /apidoc for the Epydoc-generated API docs. And the top-level directory can contain README's, Config's and whatnot. The hard choice is whether or not to use a /src tree. Python doesn't have a distinction between /src, /lib, and /bin like Java or C has. Since a top-level /src directory is seen by some as meaningless, your top-level directory can be the top-level architecture of your application. /foo /bar /baz I recommend putting all of this under the "name-of-my-product" directory. So, if you're writing an application named quux, the directory that contains all this stuff is named /quux. Another project's PYTHONPATH, then, can include /path/to/quux/foo to reuse the QUUX.foo module. In my case, since I use Komodo Edit, my IDE cruft is a single .KPF file. I actually put that in the top-level /quux directory, and omit adding it to SVN.
What are good rules of thumb for Python imports?
I am a little confused by the multitude of ways in which you can import modules in Python. import X import X as Y from A import B I have been reading up about scoping and namespaces, but I would like some practical advice on what is the best strategy, under which circumstances and why. Should imports happen at a module level or a method/function level? In the __init__.py or in the module code itself? My question is not really answered by "Python packages - import by class, not file" although it is obviously related.
In production code in our company, we try to follow the following rules. We place imports at the beginning of the file, right after the main file's docstring, e.g.: """ Registry related functionality. """ import wx # ... Now, if we import a class that is one of few in the imported module, we import the name directly, so that in the code we only have to use the last part, e.g.: from RegistryController import RegistryController from ui.windows.lists import ListCtrl, DynamicListCtrl There are modules, however, that contain dozens of classes, e.g. a list of all possible exceptions. Then we import the module itself and reference it in the code: from main.core import Exceptions # ... raise Exceptions.FileNotFound() We use the import X as Y form as rarely as possible, because it makes searching for usage of a particular module or class difficult. Sometimes, however, you have to use it if you wish to import two classes that have the same name, but exist in different modules, e.g.: from Queue import Queue from main.core.MessageQueue import Queue as MessageQueue As a general rule, we don't do imports inside methods -- they simply make code slower and less readable. Some may find this a good way to easily resolve the cyclic imports problem, but a better solution is code reorganization.
Python Decimal
Does anyone know of a faster decimal implementation in Python? As the example below demonstrates, the standard python decimal is ~100 times slower than float. from timeit import Timer def run(val, the_class): test = the_class(1) for c in xrange(10000): d = the_class(val) d + test d - test d * test d / test d ** test str(d) abs(d) if __name__ == "__main__": a = Timer("run(123.345, float)", "from decimal_benchmark import run") print "FLOAT", a.timeit(1) a = Timer("run('123.345', Decimal)", "from decimal_benchmark import run; from decimal import Decimal") print "DECIMAL", a.timeit(1) FLOAT 0.040635041427 DECIMAL 3.39666790146 Thanks, Maksim
You can try cdecimal: from cdecimal import Decimal
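For example (a sketch assuming the third-party cdecimal package is installed; it is designed as a drop-in replacement for the standard decimal module):

    from cdecimal import Decimal

    d = Decimal('123.345')
    print d + Decimal(1)   # same API as decimal.Decimal, but implemented in C

    # Or substitute it globally so existing 'import decimal' code picks it up:
    import sys, cdecimal
    sys.modules['decimal'] = cdecimal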
In production, Apache + mod_wsgi or Nginx + mod_wsgi?
What to use for a medium to large python WSGI application, Apache + mod_wsgi or Nginx + mod_wsgi? Which combination will need more memory and CPU time? Which one is faster? Which is known for being more stable than the other? I am also thinking of using CherryPy's WSGI server, but I hear it's not very suitable for a very high-load application. What do you know about this? Note: I didn't use any Python Web Framework, I just wrote the whole thing from scratch. Note 2: Other suggestions are also welcome.
For nginx/mod_wsgi, ensure you read: http://blog.dscpl.com.au/2009/05/blocking-requests-and-nginx-version-of.html Because nginx is an event-driven system underneath, it has behavioural characteristics which are detrimental to blocking applications such as WSGI-based applications. The worst-case scenario is that with a multiprocess nginx configuration, you can see user requests blocked even though some nginx worker processes may be idle. Apache/mod_wsgi doesn't have this issue as Apache processes will only accept requests when they have the resources to actually handle them. Apache/mod_wsgi will thus give more predictable and reliable behaviour.
What is the time complexity of popping elements from list in Python?
I wonder what the time complexity of the pop method of list objects is in Python (in CPython particularly). Also, does the value of N for list.pop(N) affect the complexity?
Yes, it is O(1) to pop the last element of a Python list, and O(N) to pop an arbitrary element (since the whole rest of the list has to be shifted). Here's a great article on how Python lists are stored and manipulated: http://effbot.org/zone/python-list.htm
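A quick, unscientific way to see this for yourself (a sketch; requires Python 2.6+ for the timeit.timeit convenience function):

    from timeit import timeit

    # pop() from the end is O(1); pop(0) shifts every remaining element, O(N)
    print timeit('L.pop()', 'L = range(100000)', number=1000)
    print timeit('L.pop(0)', 'L = range(100000)', number=1000)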
How to check if a string in Python is in ASCII?
I want to check whether a string is in ASCII or not. I am aware of ord(), however when I try ord('é'), I get TypeError: ord() expected a character, but string of length 2 found. I understood it is caused by the way I built Python (as explained in ord()'s documentation). Is there another way to check?
I think you are not asking the right question -- a string in Python has no property corresponding to 'ascii', utf-8, or any other encoding. The source of your string (whether you read it from a file, input from a keyboard, etc.) may have encoded a unicode string in ascii to produce your string, but that's where you need to go for an answer. Perhaps the question you can ask is: "Is this string the result of encoding a unicode string in ascii?" -- This you can answer by trying: try: mystring.decode('ascii') except UnicodeDecodeError: print "it was not an ascii-encoded unicode string" else: print "It may have been an ascii-encoded unicode string"
How to check if OS is Vista in Python?
How, in the simplest possible way, can I distinguish between Windows XP and Windows Vista, using Python and pywin32 or wxPython? Essentially, I need a function that, when called, will return True iff the current OS is Vista: >>> isWindowsVista() True
Python has the lovely 'platform' module to help you out. >>> import platform >>> platform.win32_ver() ('XP', '5.1.2600', 'SP2', 'Multiprocessor Free') >>> platform.system() 'Windows' >>> platform.version() '5.1.2600' >>> platform.release() 'XP' NOTE: As mentioned in the comments, proper values may not be returned when using older versions of Python.
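Putting that together, a minimal sketch of the requested function (assuming platform.release() reports 'Vista' on Vista, which depends on the Python version as noted above):

    import platform

    def isWindowsVista():
        return platform.system() == 'Windows' and platform.release() == 'Vista'

    print isWindowsVista()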
Can you list the keyword arguments a Python function receives?
I have a dict, which I need to pass key/values as keyword arguments.. For example.. d_args = {'kw1': 'value1', 'kw2': 'value2'} example(**d_args) This works fine, but if there are values in the d_args dict that are not accepted by the example function, it obviously dies.. Say, if the example function is defined as def example(kw2): This is a problem since I don't control either the generation of the d_args, or the example function.. They both come from external modules, and example only accepts some of the keyword arguments from the dict.. Ideally I would just do parsed_kwargs = feedparser.parse(the_url) valid_kwargs = get_valid_kwargs(parsed_kwargs, valid_for = PyRSS2Gen.RSS2) PyRSS2Gen.RSS2(**valid_kwargs) I will probably just filter the dict, from a list of valid keyword arguments, but I was wondering: Is there a way to programmatically list the keyword arguments that a specific function takes?
A little nicer than inspecting the code object directly and working out the variables is to use the inspect module. >>> import inspect >>> def func(a,b,c=42, *args, **kwargs): pass >>> inspect.getargspec(func) (['a', 'b', 'c'], 'args', 'kwargs', (42,)) If you want to know if it's callable with a particular set of args, you need the args without a default already specified. These can be got by: def getRequiredArgs(func): args, varargs, varkw, defaults = inspect.getargspec(func) if defaults: args = args[:-len(defaults)] return args # *args and **kwargs are not required, so ignore them. Then a function to tell what you are missing from your particular dict is: def missingArgs(func, argdict): return set(getRequiredArgs(func)).difference(argdict) Similarly, to check for invalid args, use: def invalidArgs(func, argdict): args, varargs, varkw, defaults = inspect.getargspec(func) if varkw: return set() # All accepted return set(argdict) - set(args) And so a full test if it is callable is: def isCallableWithArgs(func, argdict): return not missingArgs(func, argdict) and not invalidArgs(func, argdict) (This is good only as far as python's arg parsing. Any runtime checks for invalid values in kwargs obviously can't be detected.)
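For the original problem, you can then filter the dict before making the call (a sketch reusing invalidArgs from above; example and d_args are the names from the question):

    def callWithValidKwargs(func, argdict):
        bad = invalidArgs(func, argdict)
        good = dict((k, v) for k, v in argdict.items() if k not in bad)
        return func(**good)

    d_args = {'kw1': 'value1', 'kw2': 'value2'}
    def example(kw2):
        return kw2

    print callWithValidKwargs(example, d_args)   # prints 'value2'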
Docstrings for data?
Is there a way to describe the module's data in a similar way that a docstring describes a module or a function? class MyClass(object): def my_function(): """This docstring works!""" return True my_list = [] """This docstring does not work!"""
To my knowledge, it is not possible to assign docstrings to module data members. PEP 224 suggests this feature, but the PEP was rejected. I suggest you document the data members of a module in the module's docstring: # module.py: """About the module. module.data: contains the word "spam" """ data = "spam"
Can I pickle a python dictionary into a sqlite3 text field?
Any gotchas I should be aware of? Can I store it in a text field, or do I need to use a blob? (I'm not overly familiar with either pickle or sqlite, so I wanted to make sure I'm barking up the right tree with some of my high-level design ideas.)
I needed to achieve the same thing too. It turns out it caused me quite a headache before I finally figured out, thanks to this post, how to actually make it work in a binary format. To insert/update: pdata = cPickle.dumps(data, cPickle.HIGHEST_PROTOCOL) curr.execute("insert into table (data) values (:data)", {'data': sqlite3.Binary(pdata)}) You must specify the second argument to dumps to force a binary pickling. Also note the sqlite3.Binary to make it fit in the BLOB field, and that the named :data placeholder needs its parameters passed as a mapping. To retrieve data: curr.execute("select data from table limit 1") for row in curr: data = cPickle.loads(str(row['data'])) When retrieving a BLOB field, sqlite3 gets a 'buffer' python type, that needs to be stringified using str before being passed to the loads method.
Get last answer
In many symbolic math systems, such as Matlab or Mathematica, you can use a variable like Ans or % to retrieve the last computed value. Is there a similar facility in the Python shell?
Underscore. >>> 5+5 10 >>> _ 10 >>> _ + 5 15 >>> _ 15
Shortest Sudoku Solver in Python - How does it work?
I was playing around with my own Sudoku solver and was looking for some pointers to good and fast design when I came across this: def r(a):i=a.find('0');~i or exit(a);[m in[(i-j)%9*(i/9^j/9)*(i/27^j/27|i%9/3^j%9/3)or a[j]for j in range(81)]or r(a[:i]+m+a[i+1:])for m in'%d'%5**18] from sys import*;r(argv[1]) My own implementation solves Sudokus the same way I solve them in my head but how does this cryptic algorithm work? http://scottkirkwood.blogspot.com/2006/07/shortest-sudoku-solver-in-python.html
Well, you can make things a little easier by fixing up the syntax: def r(a): i = a.find('0') ~i or exit(a) [m in[(i-j)%9*(i/9^j/9)*(i/27^j/27|i%9/3^j%9/3)or a[j]for j in range(81)] or r(a[:i]+m+a[i+1:])for m in'%d'%5**18] from sys import * r(argv[1]) Cleaning up a little: from sys import exit, argv def r(a): i = a.find('0') if i == -1: exit(a) for m in '%d' % 5**18: m in[(i-j)%9*(i/9^j/9)*(i/27^j/27|i%9/3^j%9/3) or a[j] for j in range(81)] or r(a[:i]+m+a[i+1:]) r(argv[1]) Okay, so this script expects a command-line argument, and calls the function r on it. If there are no zeros in that string, r exits and prints out its argument. (If another type of object is passed, None is equivalent to passing zero, and any other object is printed to sys.stderr and results in an exit code of 1. In particular, sys.exit("some error message") is a quick way to exit a program when an error occurs. See http://www.python.org/doc/2.5.2/lib/module-sys.html) I guess this means that zeros correspond to open spaces, and a puzzle with no zeros is solved. Then there's that nasty recursive expression. The loop is interesting: for m in'%d'%5**18 Why 5**18? It turns out that '%d'%5**18 evaluates to '3814697265625'. This is a string that has each digit 1-9 at least once, so maybe it's trying to place each of them. In fact, it looks like this is what r(a[:i]+m+a[i+1:]) is doing: recursively calling r, with the first blank filled in by a digit from that string. But this only happens if the earlier expression is false. Let's look at that: m in [(i-j)%9*(i/9^j/9)*(i/27^j/27|i%9/3^j%9/3) or a[j] for j in range(81)] So the placement is done only if m is not in that monster list. Each element is either a number (if the first expression is nonzero) or a character (if the first expression is zero). m is ruled out as a possible substitution if it appears as a character, which can only happen if the first expression is zero. When is the expression zero? It has three parts that are multiplied: (i-j)%9 which is zero if i and j are a multiple of 9 apart, i.e. the same column. (i/9^j/9) which is zero if i/9 == j/9, i.e. the same row. (i/27^j/27|i%9/3^j%9/3) which is zero if both of these are zero: i/27^j/27 which is zero if i/27 == j/27, i.e. the same block of three rows i%9/3^j%9/3 which is zero if i%9/3 == j%9/3, i.e. the same block of three columns If any of these three parts is zero, the entire expression is zero. In other words, if i and j share a row, column, or 3x3 block, then the value of j can't be used as a candidate for the blank at i. Aha! from sys import exit, argv def r(a): i = a.find('0') if i == -1: exit(a) for m in '3814697265625': okay = True for j in range(81): if (i-j)%9 == 0 or (i/9 == j/9) or (i/27 == j/27 and i%9/3 == j%9/3): if a[j] == m: okay = False break if okay: # At this point, m is not excluded by any row, column, or block, so let's place it and recurse r(a[:i]+m+a[i+1:]) r(argv[1]) Note that if none of the placements work out, r will return and back up to the point where something else can be chosen, so it's a basic depth-first algorithm. Not using any heuristics, it's not particularly efficient.
I took this puzzle from Wikipedia (http://en.wikipedia.org/wiki/Sudoku): $ time python sudoku.py 530070000600195000098000060800060003400803001700020006060000280000419005000080079 534678912672195348198342567859761423426853791713924856961537284287419635345286179 real 0m47.881s user 0m47.223s sys 0m0.137s Addendum: How I would rewrite it as a maintenance programmer (this version has about a 93x speedup :) import sys def same_row(i,j): return (i/9 == j/9) def same_col(i,j): return (i-j) % 9 == 0 def same_block(i,j): return (i/27 == j/27 and i%9/3 == j%9/3) def r(a): i = a.find('0') if i == -1: sys.exit(a) excluded_numbers = set() for j in range(81): if same_row(i,j) or same_col(i,j) or same_block(i,j): excluded_numbers.add(a[j]) for m in '123456789': if m not in excluded_numbers: # At this point, m is not excluded by any row, column, or block, so let's place it and recurse r(a[:i]+m+a[i+1:]) if __name__ == '__main__': if len(sys.argv) == 2 and len(sys.argv[1]) == 81: r(sys.argv[1]) else: print 'Usage: python sudoku.py puzzle' print ' where puzzle is an 81 character string representing the puzzle read left-to-right, top-to-bottom, and 0 is a blank'
Creating self-contained python applications
I'm trying to create a self-contained version of pisa (html to pdf converter, latest version), but I can't succeed due to several errors. I've tried py2exe, bb-freeze and cxfreeze. This has to be in windows, which makes my life a bit harder. I remember that a couple of months ago the author had a zip file containing the install, but now it's gone, leaving me only with the python dependent way. How would you work this out?
Check out pyinstaller, it makes standalone executables (as in one .EXE file, and that's it).
Does python support multiprocessor/multicore programming?
What is the difference between multiprocessor programming and multicore programming? Preferably, show examples in Python of how to write a small program for multiprocessor and multicore programming.
There is no such thing as "multiprocessor" or "multicore" programming. The distinction between "multiprocessor" and "multicore" computers is probably not relevant to you as an application programmer; it has to do with subtleties of how the cores share access to memory. In order to take advantage of a multicore (or multiprocessor) computer, you need a program written in such a way that it can be run in parallel, and a runtime that will allow the program to actually be executed in parallel on multiple cores (and an operating system, although any operating system you can run on your PC will do this). This is really parallel programming, although there are different approaches to parallel programming. The ones that are relevant to Python are multiprocessing and multithreading. In languages like C, C++, Java, and C#, you can write parallel programs by executing multiple threads. The global interpreter lock in the CPython and PyPy runtimes precludes this option, but only for those runtimes. (In my personal opinion, multithreading is dangerous and tricky and it is generally a good thing that Python encourages you not to consider it as a way to get a performance advantage.) If you want to write a parallel program which can run on multiple cores in Python, you have a few different options: Write a multithreaded program using the threading module and run it in the IronPython or Jython runtime. Use the processing module (now included in Python 2.6 as the multiprocessing module) to run your code in multiple processes at once. Use the subprocess module to run multiple python interpreters and communicate between them. Use Twisted and Ampoule. This has the advantage of not just running your code across different processes, but (if you don't share access to things like files) potentially across different computers as well. No matter which of these options you choose, you will need to understand how to split the work that your program is doing up into chunks that make sense to separate. Since I'm not sure what kind of programs you are thinking of writing, it would be difficult to provide a useful example.
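As a bare-bones starting point for the second option, here is a minimal sketch using the multiprocessing module (Python 2.6+):

    from multiprocessing import Pool

    def square(x):
        return x * x

    if __name__ == '__main__':
        pool = Pool(processes=4)            # roughly one worker per core
        print pool.map(square, range(10))   # the work is split across processes
        pool.close()
        pool.join()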
How do I execute a program from python? os.system fails due to spaces in path
I have a python script that needs to execute an external program, but for some reason fails. If I have the following script: import os; os.system("C:\\Temp\\a b c\\Notepad.exe"); raw_input(); Then it fails with the following error: 'C:\Temp\a' is not recognized as an internal or external command, operable program or batch file. If I escape the program with quotes: import os; os.system('"C:\\Temp\\a b c\\Notepad.exe"'); raw_input(); Then it works. However, if I add a parameter, it stops working again: import os; os.system('"C:\\Temp\\a b c\\Notepad.exe" "C:\\test.txt"'); raw_input(); What is the right way to execute a program and wait for it to complete? I do not need to read output from it, as it is a visual program that does a job and then just exits, but I need to wait for it to complete. Also note, moving the program to a non-spaced path is not an option either. Edit This does not work either: import os; os.system("'C:\\Temp\\a b c\\Notepad.exe'"); raw_input(); Note the swapped single/double quotes. With or without a parameter to notepad here, it fails with the error message The filename, directory name, or volume label syntax is incorrect.
subprocess.call will avoid problems with having to deal with quoting conventions of various shells. It accepts a list, rather than a string, so arguments are more easily delimited. i.e. import subprocess subprocess.call(['C:\\Temp\\a b c\\Notepad.exe', 'C:\\test.txt'])
Checking for member existence in Python
I regularly want to check if an object has a member or not. An example is the creation of a singleton in a function. For that purpose, you can use hasattr like this: class Foo(object): @classmethod def singleton(self): if not hasattr(self, 'instance'): self.instance = Foo() return self.instance But you can also do this: class Foo(object): @classmethod def singleton(self): try: return self.instance except AttributeError: self.instance = Foo() return self.instance Is one method better than the other? Edit: Added the @classmethod ... But note that the question is not about how to make a singleton but how to check the presence of a member in an object. Edit: For that example, a typical usage would be: s = Foo.singleton() Then s is an object of type Foo, the same each time. And, typically, the method is called many times.
These are two different methodologies: №1 is LBYL (look before you leap) and №2 is EAFP (easier to ask forgiveness than permission). Pythonistas typically suggest that EAFP is better, with arguments in the style of "what if a process creates the file between the time you test for it and the time you try to create it yourself?". This argument does not apply here, but it's the general idea. Exceptions should not be treated as too exceptional. Performance-wise in your case -- since setting up exception managers (the try keyword) is very cheap in CPython while creating an exception (the raise keyword and internal exception creation) is what is relatively expensive -- using method №2 the exception would be raised only once; afterwards, you just use the attribute.
Is it possible to compile Python natively (beyond pyc byte code)?
I wonder if it is possible to create an executable module from a Python script. I need the best possible performance and the flexibility of a Python script, without needing to run in the Python environment. I would use this code to load user modules on demand to customize my application.
There's Pyrex, which compiles Python-like source to Python extension modules, and RPython, which allows you to compile Python (with some restrictions) to various backends like C, LLVM, .NET, etc. There's also Shed Skin, which translates Python to C++, but I can't say if it's any good. PyPy implements a JIT compiler which attempts to optimize runtime by translating pieces of what's running at runtime to machine code; if you write for the PyPy interpreter, that might be a feasible path. The same author that is working on the JIT in PyPy previously wrote psyco, which optimizes Python in the CPython interpreter.
How can I check the syntax of Python code in Emacs without actually executing it?
Python's IDLE has 'Check Module' (Alt-X) to check the syntax which can be called without needing to run the code. Is there an equivalent way to do this in Emacs instead of running and executing the code?
python -m py_compile script.py
Why do attribute references act like this with Python inheritance?
The following seems strange.. Basically, the somedata attribute seems shared between all the classes that inherited from the_base_class. class the_base_class: somedata = {} somedata['was_false_in_base'] = False class subclassthing(the_base_class): def __init__(self): print self.somedata first = subclassthing() {'was_false_in_base': False} first.somedata['was_false_in_base'] = True second = subclassthing() {'was_false_in_base': True} >>> del first >>> del second >>> third = subclassthing() {'was_false_in_base': True} Defining self.somedata in the __init__ function is obviously the correct way to get around this (so each instance has its own somedata dict) - but when is such behavior desirable?
You are right, somedata is shared between all instances of the class and its subclasses, because it is created at class definition time. The lines somedata = {} somedata['was_false_in_base'] = False are executed when the class is defined, i.e. when the interpreter encounters the class statement - not when the instance is created (think static initializer blocks in Java). If an attribute does not exist in a class instance, the class object is checked for the attribute. At class definition time, you can run arbitrary code, like this: import sys class Test(object): if sys.platform == "linux2": def hello(self): print "Hello Linux" else: def hello(self): print "Hello ~Linux" On a Linux system, Test().hello() will print Hello Linux, on all other systems the other string will be printed. In contrast, objects in __init__ are created at instantiation time and belong to the instance only (when they are assigned to self): class Test(object): def __init__(self): self.inst_var = [1, 2, 3] Objects defined on a class object rather than instance can be useful in many cases. For instance, you might want to cache instances of your class, so that instances with the same member values can be shared (assuming they are supposed to be immutable): class SomeClass(object): __instances__ = {} def __new__(cls, v1, v2, v3): try: return cls.__instances__[(v1, v2, v3)] except KeyError: return cls.__instances__.setdefault( (v1, v2, v3), object.__new__(cls, v1, v2, v3)) Mostly, I use data in class bodies in conjunction with metaclasses or generic factory methods.
Scrape a dynamic website
What is the best method to scrape a dynamic website where most of the content is generated by what appears to be ajax requests? I have previous experience with a Mechanize, BeautifulSoup, and python combo, but I am up for something new. --Edit-- For more detail: I'm trying to scrape the CNN primary database. There is a wealth of information there, but there doesn't appear to be an api.
This is a difficult problem because you either have to reverse-engineer the JavaScript on a per-site basis, or implement a JavaScript engine and run the scripts (which has its own difficulties and pitfalls). It's a heavyweight solution, but I've seen people doing this with Greasemonkey scripts - allow Firefox to render everything and run the JavaScript, and then scrape the elements. You can even initiate user actions on the page if needed.
Python: Difference between class and instance attributes
Is there any meaningful distinction between: class A(object): foo = 5 # some default value vs. class B(object): def __init__(self, foo=5): self.foo = foo If you're creating a lot of instances, is there any difference in performance or space requirements for the two styles? When you read the code, do you consider the meaning of the two styles to be significantly different?
Beyond performance considerations, there is a significant semantic difference. In the class attribute case, there is just one object referred to. In the instance-attribute-set-at-instantiation, there can be multiple objects referred to. For instance >>> class A: foo = [] >>> a, b = A(), A() >>> a.foo.append(5) >>> b.foo [5] >>> class A: ... def __init__(self): self.foo = [] >>> a, b = A(), A() >>> a.foo.append(5) >>> b.foo []
List of IP addresses/hostnames from local network in Python
How can I get a list of the IP addresses or host names from a local network easily in Python? It would be best if it was multi-platform, but it needs to work on Mac OS X first, then others follow. Edit: By local I mean all active addresses within a local network, such as 192.168.xxx.xxx. So, if the IP address of my computer (within the local network) is 192.168.1.1, and I have three other connected computers, I would want it to return the IP addresses 192.168.1.2, 192.168.1.3, 192.168.1.4, and possibly their hostnames.
Update: The script is now located on github. I wrote a small python script, that leverages scapy's arping().
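The core of such a script is very small (a sketch assuming scapy is installed and run with root privileges; depending on the scapy version the import may be from scapy import arping instead):

    from scapy.all import arping

    answered, unanswered = arping('192.168.1.0/24')
    for sent, received in answered:
        print received.psrc, received.hwsrc   # IP and MAC of each responding host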
How to enable MySQL client auto re-connect with MySQLdb?
I came across the C API way of doing the trick: my_bool reconnect = 1; mysql_options(&mysql, MYSQL_OPT_RECONNECT, &reconnect); but no luck with MySQLdb (python-mysql). Can anybody please give a clue? Thanks.
I solved this problem by creating a function that wraps the cursor.execute() method since that's what was throwing the MySQLdb.OperationalError exception. The other example above implies that it is the conn.cursor() method that throws this exception. import MySQLdb class DB: conn = None def connect(self): self.conn = MySQLdb.connect() def query(self, sql): try: cursor = self.conn.cursor() cursor.execute(sql) except (AttributeError, MySQLdb.OperationalError): self.connect() cursor = self.conn.cursor() cursor.execute(sql) return cursor db = DB() sql = "SELECT * FROM foo" cur = db.query(sql) # wait a long time for the Mysql connection to timeout cur = db.query(sql) # still works
Can I use Python as a Bash replacement?
I currently do my textfile manipulation through a bunch of badly remembered AWK, sed, Bash and a tiny bit of Perl. I've seen it mentioned in a few places that Python is good for this kind of thing. I know a little and I would like to know more. Is Python a good choice for this, and is there a good book or guide to learning how to use Python to replace shell scripting, AWK, sed and friends?
Any shell has several sets of features. The Essential Linux/Unix commands. All of these are available through the subprocess library. This isn't always the best first choice for doing all external commands. Look also at shutil for some commands that are separate Linux commands, but you could probably implement directly in your Python scripts. Another huge batch of Linux commands are in the os library; you can do these more simply in Python. And -- bonus! -- more quickly. Each separate Linux command in the shell (with a few exceptions) forks a subprocess. By using Python shutil and os modules, you don't fork a subprocess. The shell environment features. This includes stuff that sets a command's environment (current directory and environment variables and what-not). You can easily manage this from Python directly. The shell programming features. This is all the process status code checking, the various logic commands (if, while, for, etc.), the test command and all of its relatives. The function definition stuff. This is all much, much easier in Python. This is one of the huge victories in getting rid of bash and doing it in Python. Interaction features. This includes command history and what-not. You don't need this for writing shell scripts. This is only for human interaction, and not for script-writing. The shell file management features. This includes redirection and pipelines. This is trickier. Much of this can be done with subprocess. But some things that are easy in the shell are unpleasant in Python. Specifically stuff like (a | b; c ) | something >result. This runs two processes in parallel (with output of a as input to b), followed by a third process. The output from that sequence is run in parallel with something and the output is collected into a file named result. That's just complex to express in any other language. Specific programs (awk, sed, grep, etc.) can often be rewritten as Python modules. Don't go overboard. Replace what you need and evolve your "grep" module. Don't start out writing a Python module that replaces "grep". The best thing is that you can do this in steps. Replace AWK and PERL with Python. Leave everything else alone. Look at replacing GREP with Python. This can be a bit more complex, but your version of GREP can be tailored to your processing needs. Look at replacing FIND with Python loops that use os.walk. This is a big win because you don't spawn as many processes. Look at replacing common shell logic (loops, decisions, etc.) with Python scripts.
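For example, a tiny tailored grep/find replacement built on os.walk might look like this (a sketch; it naively assumes every file is readable text):

    import os, re

    def grep_tree(pattern, top='.'):
        """Print matching lines from every file under top, like grep -r."""
        regex = re.compile(pattern)
        for dirpath, dirnames, filenames in os.walk(top):
            for name in filenames:
                path = os.path.join(dirpath, name)
                for num, line in enumerate(open(path)):
                    if regex.search(line):
                        print '%s:%d:%s' % (path, num + 1, line.rstrip())

    grep_tree(r'TODO')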
Convert hex string to int in Python
How do I convert a hex string to an int in Python? I may have it as "0xffff" or just "ffff".
Without the 0x prefix, you need to specify the base explicitly, otherwise there's no way to tell: x = int("deadbeef", 16) With the 0x prefix, Python can distinguish hex and decimal automatically. >>> print int("0xdeadbeef", 0) 3735928559 >>> print int("10", 0) 10 (You must specify 0 as the base in order to invoke this prefix-guessing behavior; omitting the second parameter means to assume base-10. See the comments for more details.)
Map two lists into a dictionary in Python
Imagine that you have: keys = ('name', 'age', 'food') values = ('Monty', 42, 'spam') What is the simplest way to produce the following dictionary? dict = {'name' : 'Monty', 'age' : 42, 'food' : 'spam'} This code works, but I'm not really proud of it: dict = {} junk = map(lambda k, v: dict.update({k: v}), keys, values)
Like this: >>> keys = ['a', 'b', 'c'] >>> values = [1, 2, 3] >>> dictionary = dict(zip(keys, values)) >>> print dictionary {'a': 1, 'b': 2, 'c': 3} Voila :-) The pairwise dict constructor and zip function are awesomely useful: https://docs.python.org/2/library/functions.html#func-dict
Python: unsigned 32 bit bitwise arithmetic
Trying to answer to another post whose solution deals with IP addresses and netmasks, I got stuck with plain bitwise arithmetic. Is there a standard way, in Python, to carry on bitwise AND, OR, XOR, NOT operations assuming that the inputs are "32 bit" (maybe negative) integers or longs, and that the result must be a long in the range [0, 2**32]? In other words, I need a working Python counterpart to the C bitwise operations between unsigned longs. EDIT: the specific issue is this: >>> m = 0xFFFFFF00 # netmask 255.255.255.0 >>> ~m -4294967041L # wtf?! I want 255
You can use ctypes and its c_uint32: >>> import ctypes >>> m = 0xFFFFFF00 >>> ctypes.c_uint32(~m).value 255L So what I did here was casting ~m to a C 32-bit unsigned integer and retrieving its value back in Python format.
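A pure-Python alternative, if you would rather not depend on ctypes, is to mask the result back into 32 bits:

    >>> m = 0xFFFFFF00
    >>> ~m & 0xFFFFFFFF
    255L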
Python's __import__ doesn't work as expected
When using __import__ with a dotted name, something like somepackage.somemodule, the module returned isn't somemodule; whatever is returned seems to be mostly empty! What's going on here?
From the python docs on __import__: __import__( name[, globals[, locals[, fromlist[, level]]]]) ... When the name variable is of the form package.module, normally, the top-level package (the name up till the first dot) is returned, not the module named by name. However, when a non-empty fromlist argument is given, the module named by name is returned. This is done for compatibility with the bytecode generated for the different kinds of import statement; when using "import spam.ham.eggs", the top-level package spam must be placed in the importing namespace, but when using "from spam.ham import eggs", the spam.ham subpackage must be used to find the eggs variable. As a workaround for this behavior, use getattr() to extract the desired components. For example, you could define the following helper: def my_import(name): mod = __import__(name) components = name.split('.') for comp in components[1:]: mod = getattr(mod, comp) return mod To paraphrase: When you ask for somepackage.somemodule, __import__ returns somepackage.__init__.py, which is often empty. It will return somemodule if you provide fromlist (a list of the variable names inside somemodule you want, which are not actually returned) You can also, as I did, use the function they suggest. Note: I asked this question fully intending to answer it myself. There was a big bug in my code, and having misdiagnosed it, it took me a long time to figure it out, so I figured I'd help the SO community out and post the gotcha I ran into here.
Python Inverse of a Matrix
How do I get the inverse of a matrix in python? I've implemented it myself, but it's pure python, and I suspect there are faster modules out there to do it.
You should have a look at numpy if you do matrix manipulation. This is a module mainly written in C, which will be much faster than programming in pure python. Here is an example of how to invert a matrix, and do other matrix manipulation. from numpy import matrix from numpy import linalg A = matrix( [[1,2,3],[11,12,13],[21,22,23]]) # Creates a matrix. x = matrix( [[1],[2],[3]] ) # Creates a matrix (like a column vector). y = matrix( [[1,2,3]] ) # Creates a matrix (like a row vector). print A.T # Transpose of A. print A*x # Matrix multiplication of A and x. print A.I # Inverse of A. print linalg.solve(A, x) # Solve the linear equation system. You can also have a look at the array module, which is a much more efficient implementation of lists when you have to deal with only one data type.
Using SQLite in a Python program
I have created a Python module that creates and populates several SQLite tables. Now, I want to use it in a program but I don't really know how to call it properly. All the tutorials I've found are essentially "inline", i.e. they walk through using SQLite in a linear fashion rather than how to actually use it in production. What I'm trying to do is have a method check to see if the database is already created. If so, then I can use it. If not, an exception is raised and the program will create the database. (Or use if/else statements, whichever is better). I created a test script to see if my logic is correct but it's not working. When I create the try statement, it just creates a new database rather than checking if one already exists. The next time I run the script, I get an error that the table already exists, even if I tried catching the exception. (I haven't used try/except before but figured this is a good time to learn). Are there any good tutorials for using SQLite operationally or any suggestions on how to code this? I've looked through the pysqlite tutorial and others I found but they don't address this.
Don't make this more complex than it needs to be. The big, independent databases have complex setup and configuration requirements. SQLite is just a file you access with SQL, it's much simpler. Do the following. Add a table to your database for "Components" or "Versions" or "Configuration" or "Release" or something administrative like that. CREATE TABLE REVISION( RELEASE_NUMBER CHAR(20) ); In your application, connect to your database normally. Execute a simple query against the revision table. Here's what can happen. The query fails to execute: your database doesn't exist, so execute a series of CREATE statements to build it. The query succeeds but returns no rows or the release number is lower than expected: your database exists, but is out of date. You need to migrate from that release to the current release. Hopefully, you have a sequence of DROP, CREATE and ALTER statements to do this. The query succeeds, and the release number is the expected value. Do nothing more, your database is configured correctly.
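In Python that check looks roughly like this (a sketch with the standard sqlite3 module; the REVISION table is the one suggested above, and 'app.db' and the version string are illustrative):

    import sqlite3

    EXPECTED_RELEASE = '1.0'   # hypothetical current schema version

    conn = sqlite3.connect('app.db')
    try:
        row = conn.execute('SELECT RELEASE_NUMBER FROM REVISION').fetchone()
    except sqlite3.OperationalError:
        # The table is missing, so the schema doesn't exist yet: build it.
        conn.execute('CREATE TABLE REVISION (RELEASE_NUMBER CHAR(20))')
        conn.execute('INSERT INTO REVISION VALUES (?)', (EXPECTED_RELEASE,))
        # ... CREATE your application tables here ...
        conn.commit()
    else:
        if row is None or row[0] < EXPECTED_RELEASE:
            pass   # out of date: run your DROP/CREATE/ALTER migration here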
Binary search (bisection) in Python
Is there a library function that performs binary search on a list/tuple and return the position of the item if found and 'False' (-1, None, etc.) if not? I found the functions bisect_left/right in the bisect module, but they still return a position even if the item is not in the list. That's perfectly fine for their intended usage, but I just want to know if an item is in the list or not (don't want to insert anything). I thought of using bisect_left and then checking if the item at that position is equal to what I'm searching, but that seems cumbersome (and I also need to do bounds checking if the number can be larger than the largest number in my list). If there is a nicer method I'd like to know about it. Edit To clarify what I need this for: I'm aware that a dictionary would be very well suited for this, but I'm trying to keep the memory consumption as low as possible. My intended usage would be a sort of double-way look-up table. I have in the table a list of values and I need to be able to access the values based on their index. And also I want to be able to find the index of a particular value or None if the value is not in the list. Using a dictionary for this would be the fastest way, but would (approximately) double the memory requirements. I was asking this question thinking that I may have overlooked something in the Python libraries. It seems I'll have to write my own code, as Moe suggested.
from bisect import bisect_left def binary_search(a, x, lo=0, hi=None): # can't use a to specify default for hi hi = hi if hi is not None else len(a) # hi defaults to len(a) pos = bisect_left(a,x,lo,hi) # find insertion position return (pos if pos != hi and a[pos] == x else -1) # don't walk off the end
Opening a handle to a device in Python on Windows
I'm trying to use the giveio.sys driver which requires a "file" to be opened before you can access protected memory. I'm looking at a C example from WinAVR/AVRdude that uses the syntax: #define DRIVERNAME "\\\\.\\giveio" HANDLE h = CreateFile(DRIVERNAME, GENERIC_READ, 0, NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL); but this does not seem to work in Python - I just get a "The specified path is invalid" error, for both f = os.open("\\\\.\\giveio", os.O_RDONLY) and f = os.open("//./giveio", os.O_RDONLY) Why doesn't this do the same thing? Edited to hopefully reduce confusion of ideas (thanks Will). I did verify that the device driver is running via the batch files that come with AVRdude. Further edited to clarify SamB's bounty.
Solution: in python you have to use win32file.CreateFile() instead of open(). Thanks everyone for telling me what I was trying to do, it helped me find the answer!
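A minimal sketch of that call (assuming pywin32 is installed):

    import win32file

    h = win32file.CreateFile(
        r'\\.\giveio',                      # same DRIVERNAME as the C example
        win32file.GENERIC_READ,
        0,                                  # no sharing
        None,                               # default security attributes
        win32file.OPEN_EXISTING,
        win32file.FILE_ATTRIBUTE_NORMAL,
        None)                               # no template file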
Keyboard interruptable blocking queue in Python
It seems import Queue Queue.Queue().get(timeout=10) is keyboard interruptible (ctrl-c) whereas import Queue Queue.Queue().get() is not. I could always create a loop: import Queue q = Queue.Queue() while True: try: q.get(timeout=1000) except Queue.Empty: pass but this seems like a strange thing to do. So, is there a way of getting an indefinitely waiting but keyboard interruptible Queue.get()?
Queue objects have this behavior because they lock using Condition objects from the threading module. So your solution is really the only way to go. However, if you really want a Queue method that does this, you can monkeypatch the Queue class. For example: def interruptable_get(self): while True: try: return self.get(timeout=1000) except Queue.Empty: pass Queue.Queue.interruptable_get = interruptable_get This would let you say q.interruptable_get() instead of interruptable_get(q), although monkeypatching is generally discouraged by the Python community in cases such as these, since a regular function seems just as good.
Using django-rest-interface
I have a django application that I'd like to add some rest interfaces to. I've seen http://code.google.com/p/django-rest-interface/ but it seems to be pretty simplistic. For instance it doesn't seem to have a way of enforcing security. How would I go about limiting what people can view and manipulate through the rest interface? Normally I'd put this kind of logic in my views. Is this the right place or should I be moving some more logic down into the model? Alternatively is there a better library out there or do I need to roll my own?
I would look into using the django-piston application (http://bitbucket.org/jespern/django-piston/wiki/Home) if security is your main concern. I have used django-rest-interface in the past; it's reliable and, though simple, can be quite powerful. However, django-piston seems more flexible going forward.
A good multithreaded python webserver?
I am looking for a python webserver which is multithreaded instead of being multi-process (as in the case of mod_python for apache). I want it to be multithreaded because I want to have an in-memory object cache that will be used by various http threads. My webserver does a lot of expensive stuff and computes some large arrays which need to be cached in memory for future use to avoid recomputing. This is not possible in a multi-process web server environment. Storing this information in memcache is also not a good idea as the arrays are large and storing them in memcache would lead to deserialization of data coming from memcache, apart from the additional overhead of IPC. I implemented a simple webserver using BaseHttpServer; it gives good performance but it gets stuck after a few hours' time. I need a more mature webserver. Is it possible to configure apache to use mod_python under a thread model so that I can do some object caching?
CherryPy. Features, as listed from the website: A fast, HTTP/1.1-compliant, WSGI thread-pooled webserver. Typically, CherryPy itself takes only 1-2ms per page! Support for any other WSGI-enabled webserver or adapter, including Apache, IIS, lighttpd, mod_python, FastCGI, SCGI, and mod_wsgi Easy to run multiple HTTP servers (e.g. on multiple ports) at once A powerful configuration system for developers and deployers alike A flexible plugin system Built-in tools for caching, encoding, sessions, authorization, static content, and many more A native mod_python adapter A complete test suite Swappable and customizable...everything. Built-in profiling, coverage, and testing support.
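Getting its thread-pooled server running takes only a few lines (a sketch against the CherryPy 3.x API):

    import cherrypy

    # Module-level objects live in the single server process, so all
    # request threads share them -- e.g. your large cached arrays.
    cache = {}

    class Root(object):
        @cherrypy.expose
        def index(self):
            return 'cached items: %d' % len(cache)

    cherrypy.quickstart(Root())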
Would Python make a good substitute for the Windows command-line/batch scripts?
I've got some experience with Bash, which I don't mind, but now that I'm doing a lot of Windows development I'm needing to do basic stuff/write basic scripts using the Windows command-line language. For some reason said language really irritates me, so I was considering learning Python and using that instead. Is Python suitable for such things? Moving files around, creating scripts to do things like unzipping a backup and restoring a SQL database, etc.
Python is well suited for these tasks, and I would guess much easier to develop in and debug than Windows batch files. The question is, I think, how easy and painless it is to ensure that all the computers that you have to run these scripts on, have Python installed.
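The tasks you mention map directly onto the standard library (a sketch; the paths, server and file names are illustrative):

    import shutil, zipfile, subprocess

    # Move files around:
    shutil.move(r'C:\incoming\backup.zip', r'C:\restore\backup.zip')

    # Unzip a backup (extractall needs Python 2.6+):
    zf = zipfile.ZipFile(r'C:\restore\backup.zip')
    zf.extractall(r'C:\restore')
    zf.close()

    # Restoring the SQL database is then a call out to the command-line tool:
    subprocess.call(['sqlcmd', '-S', r'.\SQLEXPRESS',
                     '-i', r'C:\restore\restore.sql'])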
How do you convert YYYY-MM-DDTHH:mm:ss.000Z time format to MM/DD/YYYY time format in Python?
For example, I'm trying to convert 2008-09-26T01:51:42.000Z to 09/26/2008. What's the simplest way of accomplishing this?
The easiest way is to use dateutil.parser.parse() to parse the date string into a timezone aware datetime object, then use strftime() to get the format you want. import datetime, dateutil.parser d = dateutil.parser.parse('2008-09-26T01:51:42.000Z') print d.strftime('%m/%d/%Y') #==> '09/26/2008'
python module dlls
Is there a way to make a python module load a dll in my application directory rather than the version that came with the python installation, without making changes to the python installation (which would then require that I make an installer, and be careful I didn't break other apps for people by overwriting python modules and changing dll versions globally...)? Specifically I would like python to use my version of the sqlite3.dll, rather than the version that came with python (which is older and doesn't appear to have the fts3 module).
If you're talking about Python module DLLs, then simply modifying sys.path should be fine. However, if you're talking about DLLs linked against those DLLs; i.e. a libfoo.dll which a foo.pyd depends on, then you need to modify your PATH environment variable. I wrote about doing this for PyGTK a while ago, but in your case I think it should be as simple as: import os os.environ['PATH'] = 'my-app-dir' + ';' + os.environ['PATH'] That will insert my-app-dir at the head of your Windows path, which I believe also controls the load-order for DLLs. Keep in mind that you will need to do this before loading the DLL in question, i.e., before importing anything interesting. sqlite3 may be a bit of a special case, though, since it is distributed with Python; it's obviously kind of tricky to test this quickly, so I haven't checked sqlite3.dll specifically.
Can you add new statements to Python's syntax?
Can you add new statements (like print, raise, with) to Python's syntax? Say, to allow.. mystatement "Something" Or, new_if True: print "example" Not so much if you should, but rather if it's possible (short of modifying the python interpreters code)
You may find this useful - Python internals: adding a new statement to Python, quoted here: This article is an attempt to better understand how the front-end of Python works. Just reading documentation and source code may be a bit boring, so I'm taking a hands-on approach here: I'm going to add an until statement to Python. All the coding for this article was done against the cutting-edge Py3k branch in the Python Mercurial repository mirror. The until statement Some languages, like Ruby, have an until statement, which is the complement to while (until num == 0 is equivalent to while num != 0). In Ruby, I can write: num = 3 until num == 0 do puts num num -= 1 end And it will print: 3 2 1 So, I want to add a similar capability to Python. That is, being able to write: num = 3 until num == 0: print(num) num -= 1 A language-advocacy digression This article doesn't attempt to suggest the addition of an until statement to Python. Although I think such a statement would make some code clearer, and this article displays how easy it is to add, I completely respect Python's philosophy of minimalism. All I'm trying to do here, really, is gain some insight into the inner workings of Python. Modifying the grammar Python uses a custom parser generator named pgen. This is an LL(1) parser that converts Python source code into a parse tree. The input to the parser generator is the file Grammar/Grammar[1]. This is a simple text file that specifies the grammar of Python. [1]: From here on, references to files in the Python source are given relative to the root of the source tree, which is the directory where you run configure and make to build Python. Two modifications have to be made to the grammar file. The first is to add a definition for the until statement. I found where the while statement was defined (while_stmt), and added until_stmt below [2]: compound_stmt: if_stmt | while_stmt | until_stmt | for_stmt | try_stmt | with_stmt | funcdef | classdef | decorated if_stmt: 'if' test ':' suite ('elif' test ':' suite)* ['else' ':' suite] while_stmt: 'while' test ':' suite ['else' ':' suite] until_stmt: 'until' test ':' suite [2]: This demonstrates a common technique I use when modifying source code I'm not familiar with: work by similarity. This principle won't solve all your problems, but it can definitely ease the process. Since everything that has to be done for while also has to be done for until, it serves as a pretty good guideline. Note that I've decided to exclude the else clause from my definition of until, just to make it a little bit different (and because frankly I dislike the else clause of loops and don't think it fits well with the Zen of Python). The second change is to modify the rule for compound_stmt to include until_stmt, as you can see in the snippet above. It's right after while_stmt, again. When you run make after modifying Grammar/Grammar, notice that the pgen program is run to re-generate Include/graminit.h and Python/graminit.c, and then several files get re-compiled. Modifying the AST generation code After the Python parser has created a parse tree, this tree is converted into an AST, since ASTs are much simpler to work with in subsequent stages of the compilation process.
So, we're going to visit Parser/Python.asdl which defines the structure of Python's ASTs and add an AST node for our new until statement, again right below the while: | While(expr test, stmt* body, stmt* orelse) | Until(expr test, stmt* body) If you now run make, notice that before compiling a bunch of files, Parser/asdl_c.py is run to generate C code from the AST definition file. This (like Grammar/Grammar) is another example of the Python source-code using a mini-language (in other words, a DSL) to simplify programming. Also note that since Parser/asdl_c.py is a Python script, this is a kind of bootstrapping - to build Python from scratch, Python already has to be available. While Parser/asdl_c.py generated the code to manage our newly defined AST node (into the files Include/Python-ast.h and Python/Python-ast.c), we still have to write the code that converts a relevant parse-tree node into it by hand. This is done in the file Python/ast.c. There, a function named ast_for_stmt converts parse tree nodes for statements into AST nodes. Again, guided by our old friend while, we jump right into the big switch for handling compound statements and add a clause for until_stmt: case while_stmt: return ast_for_while_stmt(c, ch); case until_stmt: return ast_for_until_stmt(c, ch); Now we should implement ast_for_until_stmt. Here it is: static stmt_ty ast_for_until_stmt(struct compiling *c, const node *n) { /* until_stmt: 'until' test ':' suite */ REQ(n, until_stmt); if (NCH(n) == 4) { expr_ty expression; asdl_seq *suite_seq; expression = ast_for_expr(c, CHILD(n, 1)); if (!expression) return NULL; suite_seq = ast_for_suite(c, CHILD(n, 3)); if (!suite_seq) return NULL; return Until(expression, suite_seq, LINENO(n), n->n_col_offset, c->c_arena); } PyErr_Format(PyExc_SystemError, "wrong number of tokens for 'until' statement: %d", NCH(n)); return NULL; } Again, this was coded while closely looking at the equivalent ast_for_while_stmt, with the difference that for until I've decided not to support the else clause. As expected, the AST is created recursively, using other AST creating functions like ast_for_expr for the condition expression and ast_for_suite for the body of the until statement. Finally, a new node named Until is returned. Note that we access the parse-tree node n using some macros like NCH and CHILD. These are worth understanding - their code is in Include/node.h. Digression: AST composition I chose to create a new type of AST for the until statement, but actually this isn't necessary. I could've saved some work and implemented the new functionality using composition of existing AST nodes, since: until condition: # do stuff Is functionally equivalent to: while not condition: # do stuff Instead of creating the Until node in ast_for_until_stmt, I could have created a Not node with an While node as a child. Since the AST compiler already knows how to handle these nodes, the next steps of the process could be skipped. Compiling ASTs into bytecode The next step is compiling the AST into Python bytecode. The compilation has an intermediate result which is a CFG (Control Flow Graph), but since the same code handles it I will ignore this detail for now and leave it for another article. The code we will look at next is Python/compile.c. Following the lead of while, we find the function compiler_visit_stmt, which is responsible for compiling statements into bytecode. 
We add a clause for Until: case While_kind: return compiler_while(c, s); case Until_kind: return compiler_until(c, s); If you wonder what Until_kind is, it's a constant (actually a value of the _stmt_kind enumeration) automatically generated from the AST definition file into Include/Python-ast.h. Anyway, we call compiler_until which, of course, still doesn't exist. I'll get to it an a moment. If you're curious like me, you'll notice that compiler_visit_stmt is peculiar. No amount of grep-ping the source tree reveals where it is called. When this is the case, only one option remains - C macro-fu. Indeed, a short investigation leads us to the VISIT macro defined in Python/compile.c: #define VISIT(C, TYPE, V) {\ if (!compiler_visit_ ## TYPE((C), (V))) \ return 0; \ It's used to invoke compiler_visit_stmt in compiler_body. Back to our business, however... As promised, here's compiler_until: static int compiler_until(struct compiler *c, stmt_ty s) { basicblock *loop, *end, *anchor = NULL; int constant = expr_constant(s->v.Until.test); if (constant == 1) { return 1; } loop = compiler_new_block(c); end = compiler_new_block(c); if (constant == -1) { anchor = compiler_new_block(c); if (anchor == NULL) return 0; } if (loop == NULL || end == NULL) return 0; ADDOP_JREL(c, SETUP_LOOP, end); compiler_use_next_block(c, loop); if (!compiler_push_fblock(c, LOOP, loop)) return 0; if (constant == -1) { VISIT(c, expr, s->v.Until.test); ADDOP_JABS(c, POP_JUMP_IF_TRUE, anchor); } VISIT_SEQ(c, stmt, s->v.Until.body); ADDOP_JABS(c, JUMP_ABSOLUTE, loop); if (constant == -1) { compiler_use_next_block(c, anchor); ADDOP(c, POP_BLOCK); } compiler_pop_fblock(c, LOOP, loop); compiler_use_next_block(c, end); return 1; } I have a confession to make: this code wasn't written based on a deep understanding of Python bytecode. Like the rest of the article, it was done in imitation of the kin compiler_while function. By reading it carefully, however, keeping in mind that the Python VM is stack-based, and glancing into the documentation of the dis module, which has a list of Python bytecodes with descriptions, it's possible to understand what's going on. That's it, we're done... Aren't we? After making all the changes and running make, we can run the newly compiled Python and try our new until statement: >>> until num == 0: ... print(num) ... num -= 1 ... 3 2 1 Voila, it works! Let's see the bytecode created for the new statement by using the dis module as follows: import dis def myfoo(num): until num == 0: print(num) num -= 1 dis.dis(myfoo) Here's the result: 4 0 SETUP_LOOP 36 (to 39) >> 3 LOAD_FAST 0 (num) 6 LOAD_CONST 1 (0) 9 COMPARE_OP 2 (==) 12 POP_JUMP_IF_TRUE 38 5 15 LOAD_NAME 0 (print) 18 LOAD_FAST 0 (num) 21 CALL_FUNCTION 1 24 POP_TOP 6 25 LOAD_FAST 0 (num) 28 LOAD_CONST 2 (1) 31 INPLACE_SUBTRACT 32 STORE_FAST 0 (num) 35 JUMP_ABSOLUTE 3 >> 38 POP_BLOCK >> 39 LOAD_CONST 0 (None) 42 RETURN_VALUE The most interesting operation is number 12: if the condition is true, we jump to after the loop. This is correct semantics for until. If the jump isn't executed, the loop body keeps running until it jumps back to the condition at operation 35. Feeling good about my change, I then tried running the function (executing myfoo(3)) instead of showing its bytecode. The result was less than encouraging: Traceback (most recent call last): File "zy.py", line 9, in myfoo(3) File "zy.py", line 5, in myfoo print(num) SystemError: no locals when loading 'print' Whoa... this can't be good. So what went wrong? 
The case of the missing symbol table One of the steps the Python compiler performs when compiling the AST is create a symbol table for the code it compiles. The call to PySymtable_Build in PyAST_Compile calls into the symbol table module (Python/symtable.c), which walks the AST in a manner similar to the code generation functions. Having a symbol table for each scope helps the compiler figure out some key information, such as which variables are global and which are local to a scope. To fix the problem, we have to modify the symtable_visit_stmt function in Python/symtable.c, adding code for handling until statements, after the similar code for while statements [3]: case While_kind: VISIT(st, expr, s->v.While.test); VISIT_SEQ(st, stmt, s->v.While.body); if (s->v.While.orelse) VISIT_SEQ(st, stmt, s->v.While.orelse); break; case Until_kind: VISIT(st, expr, s->v.Until.test); VISIT_SEQ(st, stmt, s->v.Until.body); break; [3]: By the way, without this code there’s a compiler warning for Python/symtable.c. The compiler notices that the Until_kind enumeration value isn’t handled in the switch statement of symtable_visit_stmt and complains. It’s always important to check for compiler warnings! And now we really are done. Compiling the source after this change makes the execution of myfoo(3) work as expected. Conclusion In this article I've demonstrated how to add a new statement to Python. Albeit requiring quite a bit of tinkering in the code of the Python compiler, the change wasn't difficult to implement, because I used a similar and existing statement as a guideline. The Python compiler is a sophisticated chunk of software, and I don't claim being an expert in it. However, I am really interested in the internals of Python, and particularly its front-end. Therefore, I found this exercise a very useful companion to theoretical study of the compiler's principles and source code. It will serve as a base for future articles that will get deeper into the compiler. References I used a few excellent references for the construction of this article. Here they are, in no particular order: PEP 339: Design of the CPython compiler - probably the most important and comprehensive piece of official documentation for the Python compiler. Being very short, it painfully displays the scarcity of good documentation of the internals of Python. "Python Compiler Internals" - an article by Thomas Lee "Python: Design and Implementation" - a presentation by Guido van Rossum Python (2.5) Virtual Machine, A guided tour - a presentation by Peter Tröger original source
What is the purpose of the colon before a block in Python?
What is the purpose of the colon before a block in Python? Example: if n == 0: print "The end"
The colon is there to declare the start of an indented block. Technically, it's not necessary; you could just indent and de-indent when the block is done. However, based on the Python koan “explicit is better than implicit” (EIBTI), I believe that Guido deliberately made the colon obligatory, so any statement that should be followed by indented code ends in a colon. (It also allows one-liners if you continue after the colon, but this style is not in wide use.) It also makes the work of syntax-aware auto-indenting editors easier, which also counted in the decision. This question turns out to be a Python FAQ, and I found one of its answers by Guido here: Why are colons required for the if/while/def/class statements? The colon is required primarily to enhance readability (one of the results of the experimental ABC language). Consider this: if a == b print a versus if a == b: print a Notice how the second one is slightly easier to read. Notice further how a colon sets off the example in this FAQ answer; it’s a standard usage in English. Another minor reason is that the colon makes it easier for editors with syntax highlighting; they can look for colons to decide when indentation needs to be increased instead of having to do a more elaborate parsing of the program text.
Why Python decorators rather than closures?
I still haven't got my head around decorators in Python. I've already started using a lot of closures to do things like customize functions and classes in my coding. Eg. class Node : def __init__(self,val,children) : self.val = val self.children = children def makeRunner(f) : def run(node) : f(node) for x in node.children : run(x) return run tree=Node(1,[Node(2,[]),Node(3,[Node(4,[]),Node(5,[])])]) def pp(n) : print "%s," % n.val printTree = makeRunner(pp) printTree(tree) As far as I can see, decorators are just a different syntax for doing something similar. Instead of def pp(n) : print "%s," % n.val printTree = makeRunner(pp) I would write : @makeRunner def printTree(n) : print "%s," % n.val Is this all there is to decorators? Or is there a fundamental difference that I've missed?
While it is true that syntactically, decorators are just "sugar", that is not the best way to think about them. Decorators allow you to weave functionality into your existing code without actually modifying it. And they allow you to do it in a way that is declarative. This allows you to use decorators to do aspect-oriented programming (AOP). So you want to use a decorator when you have a cross-cutting concern that you want to encapsulate in one place. The quintessential example would probably be logging, where you want to log the entry or exit of a function, or both. Using a decorator is equivalent to applying advice (log this!) to a joinpoint (during method entry or exit). Method decoration is a concept like OOP or list comprehensions. As you point out, it is not always appropriate, and can be overused. But in the right place, it can be useful for making code more modular and decoupled.
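To make the cross-cutting-concern point concrete, here is a minimal sketch of a logging decorator (the function names and messages are illustrative, not from the question):

    import functools

    def logged(func):
        # Weave logging around func without touching its body.
        @functools.wraps(func)  # preserve the wrapped function's name and docstring
        def wrapper(*args, **kwargs):
            print "entering %s" % func.__name__
            result = func(*args, **kwargs)
            print "leaving %s" % func.__name__
            return result
        return wrapper

    @logged
    def process(item):
        print "working on %r" % item

    process("x")   # prints: entering process / working on 'x' / leaving process

The logging concern lives in one place, and every decorated function picks it up declaratively.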
Python embedded in CPP: how to get data back to CPP
While working on a C++ project, I was looking for a third party library for something that is not my core business. I found a really good library, doing exactly what's needed, but it is written in Python. I decided to experiment with embedding Python code in C++, using the Boost.Python library. The C++ code looks something like this: #include <string> #include <iostream> #include <boost/python.hpp> using namespace boost::python; int main(int, char **) { Py_Initialize(); try { object module((handle<>(borrowed(PyImport_AddModule("__main__"))))); object name_space = module.attr("__dict__"); object ignored = exec("from myModule import MyFunc\n" "MyFunc(\"some_arg\")\n", name_space); std::string res = extract<std::string>(name_space["result"]); } catch (error_already_set) { PyErr_Print(); } Py_Finalize(); return 0; } A (very) simplified version of the Python code looks like this: import thirdparty def MyFunc(some_arg): result = thirdparty.go() print result Now the problem is this: 'MyFunc' executes fine, and I can see the print of 'result'. What I cannot do is read 'result' back from the C++ code. The extract command never finds 'result' in any namespace. I tried defining 'result' as a global, and I even tried returning a tuple, but I cannot get it to work.
First of all, change your function to return the value; printing it will complicate things, since you want to get the value back. Suppose your MyModule.py looks like this: import thirdparty def MyFunc(some_arg): result = thirdparty.go() return result Now, to do what you want, you have to go beyond basic embedding, as the documentation says. Here is the full code to run your function (note that PyImport_Import takes the module name without the .py extension, and the argument tuple must be created with room for the argument): #include <Python.h> int main(int argc, char *argv[]) { PyObject *pName, *pModule, *pFunc; PyObject *pArgs, *pArg, *pResult; Py_Initialize(); pName = PyString_FromString("MyModule"); /* Error checking of pName left out as exercise */ pModule = PyImport_Import(pName); Py_DECREF(pName); if (pModule != NULL) { pFunc = PyObject_GetAttrString(pModule, "MyFunc"); /* pFunc is a new reference */ if (pFunc) { pArgs = PyTuple_New(1); pArg = PyString_FromString("some parameter"); /* pArg reference stolen here: */ PyTuple_SetItem(pArgs, 0, pArg); pResult = PyObject_CallObject(pFunc, pArgs); Py_DECREF(pArgs); if (pResult != NULL) { printf("Result of call: %s\n", PyString_AsString(pResult)); Py_DECREF(pResult); } else { Py_DECREF(pFunc); Py_DECREF(pModule); PyErr_Print(); fprintf(stderr,"Call failed\n"); return 1; } } else { if (PyErr_Occurred()) PyErr_Print(); fprintf(stderr, "Cannot find function"); } Py_XDECREF(pFunc); Py_DECREF(pModule); } else { PyErr_Print(); fprintf(stderr, "Failed to load module"); return 1; } Py_Finalize(); return 0; }
In Python, what does it mean if an object is subscriptable or not?
Which types of objects fall into the domain of "subscriptable"?
It basically means that the object implements the __getitem__() method. In other words, it describes objects that are "containers", meaning they contain other objects. This includes strings, lists, tuples, and dictionaries.
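For a toy illustration, any class you write becomes subscriptable the moment it implements __getitem__ (this example is hypothetical, not from the question):

    class Squares(object):
        def __getitem__(self, index):
            # called for obj[index]
            return index * index

    s = Squares()
    print s[4]       # 16 -- s is subscriptable
    print "abc"[1]   # 'b' -- strings are subscriptable too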
How can I determine the display idle time from Python in Windows, Linux, and MacOS?
I would like to know how long it's been since the user last hit a key or moved the mouse - not just in my application, but on the whole "computer" (i.e. display), in order to guess whether they're still at the computer and able to observe notifications that pop up on the screen. I'd like to do this purely from (Py)GTK+, but I am amenable to calling platform-specific functions. Ideally I'd like to call functions which have already been wrapped from Python, but if that's not possible, I'm not above a little bit of C or ctypes code, as long as I know what I'm actually looking for. On Windows I think the function I want is GetLastInputInfo, but that doesn't seem to be wrapped by pywin32; I hope I'm missing something.
Gajim does it this way on Windows, OS X and GNU/Linux (and other *nixes): a Python wrapper module (which also includes Windows idle-time detection code, using GetTickCount with ctypes); a ctypes-based module to get the X11 idle time (using XScreenSaverQueryInfo; it was a C module in old Gajim versions); and a C module to get the OS X idle time (using the HIDIdleTime system property). Those links are to the quite dated 0.12 version, so you may want to check the current source for possible further improvements and changes.
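For the Windows side specifically, here is a minimal ctypes sketch of the GetLastInputInfo approach mentioned in the question (Windows only; error handling omitted):

    import ctypes

    class LASTINPUTINFO(ctypes.Structure):
        _fields_ = [("cbSize", ctypes.c_uint),
                    ("dwTime", ctypes.c_uint)]

    def windows_idle_ms():
        info = LASTINPUTINFO()
        info.cbSize = ctypes.sizeof(LASTINPUTINFO)
        # dwTime is the tick count at the last input event
        ctypes.windll.user32.GetLastInputInfo(ctypes.byref(info))
        return ctypes.windll.kernel32.GetTickCount() - info.dwTime

Note that GetTickCount wraps around after about 49.7 days, so long-running processes need to account for that.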
Getting method parameter names in python
Given the python function: def aMethod(arg1, arg2): pass How can I extract the number and names of the arguments. Ie. given that I have a reference to func, I want the func.[something] to return ("arg1", "arg2") The usage scenario for this is that I have a decorator, and I wish to use the method arguments in the same order that they appear for the actual function as a key. Ie. how would the decorator look that printed "a,b" when I call aMethod("a","b")
Take a look at the inspect module - this will do the inspection of the various code object properties for you. >>> inspect.getargspec(aMethod) (['arg1', 'arg2'], None, None, None) The other results are the name of the *args and **kwargs variables, and the defaults provided. ie. >>> def foo(a,b,c=4, *arglist, **keywords): pass >>> inspect.getargspec(foo) (['a', 'b', 'c'], 'arglist', 'keywords', (4,))
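Building on that, here is a sketch of the decorator the question asks for, using getargspec to recover the parameter names (keyword arguments and defaults are ignored for brevity):

    import inspect

    def keyed(func):
        argnames = inspect.getargspec(func)[0]
        def wrapper(*args):
            # pair each positional value with its parameter name, in order
            pairs = zip(argnames, args)        # e.g. [('arg1', 'a'), ('arg2', 'b')]
            print ",".join(str(value) for name, value in pairs)
            return func(*args)
        return wrapper

    @keyed
    def aMethod(arg1, arg2):
        pass

    aMethod("a", "b")   # prints: a,b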
How Python web frameworks, WSGI and CGI fit together
I have a Bluehost account where I can run Python scripts as CGI. I guess it's the simplest CGI, because to run I have to define the following in .htaccess: Options +ExecCGI AddType text/html py AddHandler cgi-script .py Now, whenever I look up web programming with Python, I hear a lot about WSGI and how most frameworks use it. But I just don't understand how it all fits together, especially when my web server is given (Apache running at a host's machine) and not something I can really play with (except defining .htaccess commands). How are WSGI, CGI, and the frameworks all connected? What do I need to know, install, and do if I want to run a web framework (say web.py or CherryPy) on my basic CGI configuration? How to install WSGI support?
How WSGI, CGI, and the frameworks are all connected ? Apache listens on port 80. It gets an HTTP request. It parses the request to find a way to respond. Apache has a LOT of choices for responding. One way to respond is to use CGI to run a script. Another way to respond is to simply serve a file. In the case of CGI, Apache prepares an environment and invokes the script through the CGI protocol. This is a standard Unix Fork/Exec situation -- the CGI subprocess inherits an OS environment including the socket and stdout. The CGI subprocess writes a response, which goes back to Apache; Apache sends this response to the browser. CGI is primitive and annoying. Mostly because it forks a subprocess for every request, and subprocess must exit or close stdout and stderr to signify end of response. WSGI is an interface that is based on the CGI design pattern. It is not necessarily CGI -- it does not have to fork a subprocess for each request. It can be CGI, but it doesn't have to be. WSGI adds to the CGI design pattern in several important ways. It parses the HTTP Request Headers for you and adds these to the environment. It supplies any POST-oriented input as a file-like object in the environment. It also provides you a function that will formulate the response, saving you from a lot of formatting details. What do I need to know / install / do if I want to run a web framework (say web.py or cherrypy) on my basic CGI configuration ? Recall that forking a subprocess is expensive. There are two ways to work around this. Embedded mod_wsgi or mod_python embeds Python inside Apache; no process is forked. Apache runs the Django application directly. Daemon mod_wsgi or mod_fastcgi allows Apache to interact with a separate daemon (or "long-running process"), using the WSGI protocol. You start your long-running Django process, then you configure Apache's mod_fastcgi to communicate with this process. Note that mod_wsgi can work in either mode: embedded or daemon. When you read up on mod_fastcgi, you'll see that Django uses flup to create a WSGI-compatible interface from the information provided by mod_fastcgi. The pipeline works like this. Apache -> mod_fastcgi -> FLUP (via FastCGI protocol) -> Django (via WSGI protocol) Django has several "django.core.handlers" for the various interfaces. For mod_fastcgi, Django provides a manage.py runfcgi that integrates FLUP and the handler. For mod_wsgi, there's a core handler for this. How to install WSGI support ? Follow these instructions. http://code.google.com/p/modwsgi/wiki/IntegrationWithDjango For background see this http://docs.djangoproject.com/en/dev/howto/deployment/#howto-deployment-index
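To make the WSGI side concrete: a WSGI application is just a callable that takes the environment and a start_response function. The sketch below is a minimal application, and wsgiref.handlers.CGIHandler from the stdlib will run it under exactly the kind of CGI setup you described:

    def application(environ, start_response):
        # environ carries the parsed request; start_response sends status and headers
        start_response('200 OK', [('Content-Type', 'text/plain')])
        return ['Hello from WSGI\n']

    if __name__ == '__main__':
        from wsgiref.handlers import CGIHandler
        CGIHandler().run(application)

Any framework that exposes a WSGI application object can be served the same way, though under CGI you still pay the fork-per-request cost.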
Ensure a single instance of an application in Linux
I'm working on a GUI application in WxPython, and I am not sure how I can ensure that only one copy of my application is running at any given time on the machine. Due to the nature of the application, running more than once doesn't make any sense, and will fail quickly. Under Win32, I can simply make a named mutex and check that at startup. Unfortunately, I don't know of any facilities in Linux that can do this. I'm looking for something that will automatically be released should the application crash unexpectedly. I don't want to have to burden my users with having to manually delete lock files because I crashed.
The Right Thing is advisory locking using flock(LOCK_EX); in Python, this is found in the fcntl module. Unlike pidfiles, these locks are always automatically released when your process dies for any reason, have no race conditions relating to file deletion (the file doesn't need to be deleted to release the lock), and leave no chance of a different process inheriting the PID and thus appearing to validate a stale lock. If you want unclean-shutdown detection, you can write a marker (such as your PID, for traditionalists) into the file after grabbing the lock, and then truncate the file to 0-byte status before a clean shutdown (while the lock is being held); thus, if the lock is not held and the file is non-empty, an unclean shutdown is indicated. A minimal sketch follows.
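Here it is, assuming an arbitrary lock-file path:

    import fcntl

    lockfile = open("/tmp/myapp.lock", "w")
    try:
        # LOCK_NB makes this fail immediately instead of blocking
        fcntl.flock(lockfile, fcntl.LOCK_EX | fcntl.LOCK_NB)
    except IOError:
        print "another instance is already running"
        raise SystemExit(1)
    # ... run the application; keep lockfile referenced so it stays open.
    # The lock disappears automatically when the process exits or crashes.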
Is it possible to implement Python code-completion in TextMate?
PySmell seems like a good starting point. I think it should be possible: PySmell's idehelper.py does the majority of the complex stuff; it should just be a case of giving it the current line, offering up the completions (the bit I am not sure about) and then replacing the line with the selected one. >>> import idehelper >>> # The path is where my PYSMELLTAGS file is located: >>> PYSMELLDICT = idehelper.findPYSMELLDICT("/Users/dbr/Desktop/pysmell/") >>> options = idehelper.detectCompletionType("", "", 1, 2, "", PYSMELLDICT) >>> completions = idehelper.findCompletions("proc", PYSMELLDICT, options) >>> print completions [{'dup': '1', 'menu': 'pysmell.pysmell', 'kind': 'f', 'word': 'process', 'abbr': 'process(argList, excluded, output, verbose=False)'}] It'll never be perfect, but it would be extremely useful (even if just for completing the stdlib modules, which should never change, so you won't have to constantly regenerate the PYSMELLTAGS file whenever you add a function). Progressing! I have the utter basics of completion in place - it barely works, but it's close.. I ran python pysmells.py /System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/*.py -O /Library/Python/2.5/site-packages/pysmell/PYSMELLTAGS Place the following in a TextMate bundle script, set "input: entire document", "output: insert as text", "activation: key equivalent: alt+esc", "scope selector: source.python" #!/usr/bin/env python import os import sys from pysmell import idehelper CUR_WORD = os.environ.get("TM_CURRENT_WORD") cur_file = os.environ.get("TM_FILEPATH") orig_source = sys.stdin.read() line_no = int(os.environ.get("TM_LINE_NUMBER")) cur_col = int(os.environ.get("TM_LINE_INDEX")) # PYSMELLS is currently in site-packages/pysmell/ PYSMELLDICT = idehelper.findPYSMELLDICT("/Library/Python/2.5/site-packages/pysmell/blah") options = idehelper.detectCompletionType(cur_file, orig_source, line_no, cur_col, "", PYSMELLDICT) completions = idehelper.findCompletions(CUR_WORD, PYSMELLDICT, options) if len(completions) > 0: new_word = completions[0]['word'] new_word = new_word.replace(CUR_WORD, "", 1) # remove what user has already typed print new_word Then I made a new python document, typed "import urll" and hit alt+escape, and it completed it to "import urllib"! As I said, it's entirely a work-in-progress, so don't use it yet.. Last update: orestis has integrated this into the PySmell project's code! Any further fiddling will happen on github
EDIT: I've actually taken your code above and integrated it into a command. It will properly show a completion list for you to choose. You can grab it here: http://github.com/orestis/pysmell/tree/master (hit download and do python setup.py install). It's rough but it works; please report any errors on http://code.google.com/p/pysmell/ -- Hi, I'm the developer of PySmell. I also use a Mac, so if you can send me an email (contact info is in the source code) with your progress so far, I can try to integrate it :) Oh BTW it's called PySmell - no trailing 's' :)
Putting Copyright Symbol into a Python File
I need to include a copyright statement at the top of every Python source file I produce: # Copyright: © 2008 etc. However, when I then run such a file I get this message: SyntaxError: Non-ASCII character '\xa9' in file MyFile.py on line 3, but no encoding declared; see http://www.python.org/peps/pep-0263.html for details. Apparently Python isn't happy about the copyright symbol because it assumes the source file is all in ASCII. Either I need to make my first line be: # -*- coding: iso-8859-1 -*- to tell Python I'm using Latin encoding, or I can change the copyright statement to: # Copyright: \xa9 2008 etc. which just possibly doesn't have the same legal standing. Is there a more elegant solution?
The copyright symbol in ASCII is spelled (c) or "Copyright". See circular 61, Copyright Registration for Computer Programs. While it's true that the legal formalism (see Circular 1, Copyright Basics) is The symbol © (the letter C in a circle), or the word “Copyright,” or the abbreviation “Copr.”; and... And it's also true that To guarantee protection for a copyrighted work in all UCC member countries, the notice must consist of the symbol © (the word “Copyright” or the abbreviation is not acceptable) You can dig through circular 3 and 38a. This has, however, already been tested in court. It isn't an interesting issue. If you do a search for "(c) acceptable for c-in-a-circle", you'll find that lawyers all agree that (c) is an acceptable substitute. See Perle and Williams. See Scott on Information Technology Law.
ElementTree XPath - Select Element based on attribute
I am having trouble using the attribute XPath Selector in ElementTree, which I should be able to do according to the Documentation Here's some sample code XML <root> <target name="1"> <a></a> <b></b> </target> <target name="2"> <a></a> <b></b> </target> </root> Python def parse(document): root = et.parse(document) for target in root.findall("//target[@name='a']"): print target._children I am receiving the following Exception: expected path separator ([)
The syntax you're trying to use is new in ElementTree 1.3, which ships with Python 2.7 and later. If you have Python 2.6 or earlier, you still have ElementTree 1.2.6 or earlier, whose limited XPath support doesn't understand attribute predicates like [@name='a']. On the older version you can filter manually, as sketched below.
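A manual filter on 1.2 might look like this (note that in the sample XML the name attributes are "1" and "2", so the question's [@name='a'] predicate would match nothing anyway):

    import xml.etree.ElementTree as et   # bundled with Python 2.5+

    def parse(document):
        root = et.parse(document).getroot()
        for target in root.findall("target"):
            if target.get("name") == "1":
                print list(target)   # the element's children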
Sorting a tuple that contains tuples
I have the following tuple, which contains tuples: MY_TUPLE = ( ('A','Apple'), ('C','Carrot'), ('B','Banana'), ) I'd like to sort this tuple based upon the second value contained in inner-tuples (i.e., sort Apple, Carrot, Banana rather than A, B, C). Any thoughts?
from operator import itemgetter MY_SORTED_TUPLE = tuple(sorted(MY_TUPLE, key=itemgetter(1))) or without itemgetter: MY_SORTED_TUPLE = tuple(sorted(MY_TUPLE, key=lambda item: item[1]))
How to use 'super' in Python?
Can someone explain to me the difference between doing: class Child(SomeBaseClass): def __init__(self): super(Child, self).__init__() and this: class Child(SomeBaseClass): def __init__(self): SomeBaseClass.__init__(self) I've seen super being used quite a lot in classes with only single inheritance. I can see why you'd use it in multiple inheritance but am unclear as to what the advantages are of using it in this kind of situation.
The benefits of super() in single-inheritance are minimal -- mostly, you don't have to hard-code the name of the base class into every method that uses its parent methods. However, it's almost impossible to use multiple-inheritance without super(). This includes common idioms like mixins, interfaces, abstract classes, etc. This extends to code that later extends yours. If somebody later wanted to write a class that extended Child and a mixin, their code would not work properly.
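A small diamond-inheritance sketch shows the difference. With super(), each __init__ runs exactly once, in MRO order; with hard-coded base-class calls, Base.__init__ would run twice:

    class Base(object):
        def __init__(self):
            print "Base"
            super(Base, self).__init__()

    class Left(Base):
        def __init__(self):
            print "Left"
            super(Left, self).__init__()

    class Right(Base):
        def __init__(self):
            print "Right"
            super(Right, self).__init__()

    class Child(Left, Right):
        def __init__(self):
            print "Child"
            super(Child, self).__init__()

    Child()   # prints Child, Left, Right, Base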
Using Variables for Class Names in Python?
I want to know how to use variables for objects and function names in Python. In PHP, you can do this: $className = "MyClass"; $newObject = new $className(); How do you do this sort of thing in Python? Or, am I totally not appreciating some fundamental difference with Python, and if so, what is it?
Assuming that some_module has a class named "class_name": import some_module klass = getattr(some_module, "class_name") some_object = klass() I should note that you should be careful here: turning strings into code can be dangerous if the string came from the user, so you should keep security in mind in this situation. :) One other method (assuming that we still are using "class_name"): class_lookup = { 'class_name' : class_name } some_object = class_lookup['class_name']() #call the object once we've pulled it out of the dict The latter method is probably the most secure way of doing this, so it's probably what you should use if at all possible.
Elegant structured text file parsing
I need to parse a transcript of a live chat conversation. My first thought on seeing the file was to throw regular expressions at the problem but I was wondering what other approaches people have used. I put elegant in the title as i've previously found that this type of task has a danger of getting hard to maintain just relying on regular expressions. The transcripts are being generated by www.providesupport.com and emailed to an account, I then extract a plain text transcript attachment from the email. The reason for parsing the file is to extract the conversation text for later but also to identify visitors and operators names so that the information can be made available via a CRM. Here is an example of a transcript file: Chat Transcript Visitor: Random Website Visitor Operator: Milton Company: Initech Started: 16 Oct 2008 9:13:58 Finished: 16 Oct 2008 9:45:44 Random Website Visitor: Where do i get the cover sheet for the TPS report? * There are no operators available at the moment. If you would like to leave a message, please type it in the input field below and click "Send" button * Call accepted by operator Milton. Currently in room: Milton, Random Website Visitor. Milton: Y-- Excuse me. You-- I believe you have my stapler? Random Website Visitor: I really just need the cover sheet, okay? Milton: it's not okay because if they take my stapler then I'll, I'll, I'll set the building on fire... Random Website Visitor: oh i found it, thanks anyway. * Random Website Visitor is now off-line and may not reply. Currently in room: Milton. Milton: Well, Ok. But… that's the last straw. * Milton has left the conversation. Currently in room: room is empty. Visitor Details --------------- Your Name: Random Website Visitor Your Question: Where do i get the cover sheet for the TPS report? IP Address: 255.255.255.255 Host Name: 255.255.255.255 Referrer: Unknown Browser/OS: Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.2; .NET CLR 1.1.4322; InfoPath.1; .NET CLR 2.0.50727)
No; in fact, for the specific type of task you describe, I doubt there's a "cleaner" way to do it than regular expressions. It looks like your files have embedded line breaks, so typically what we'll do here is make the line your unit of decomposition, applying per-line regexes. Meanwhile, you create a small state machine and use regex matches to trigger transitions in that state machine. This way you know where you are in the file, and what types of character data you can expect. Also, consider using named capture groups and loading the regexes from an external file. That way if the format of your transcript changes, it's a simple matter of tweaking the regexes, rather than writing new parse-specific code. Here is a sketch of the idea.
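The states and patterns below are illustrative, not a complete parser for the format:

    import re

    FIELD_RE   = re.compile(r'^(?P<field>Visitor|Operator|Company|Started|Finished): (?P<value>.+)$')
    MESSAGE_RE = re.compile(r'^(?P<speaker>[^:*][^:]*): (?P<text>.+)$')

    def parse(lines):
        state, meta, messages = 'header', {}, []
        for line in lines:
            line = line.rstrip('\n')
            if state == 'header':
                m = FIELD_RE.match(line)
                if m:
                    meta[m.group('field')] = m.group('value')
                    if m.group('field') == 'Finished':
                        state = 'body'   # header block is done
            elif state == 'body':
                m = MESSAGE_RE.match(line)
                if m:
                    messages.append((m.group('speaker'), m.group('text')))
        return meta, messages

If the transcript format changes, only the two patterns (ideally loaded from a file) need to move.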
How do I perform query filtering in django templates
I need to perform a filtered query from within a django template, to get a set of objects equivalent to python code within a view: queryset = Modelclass.objects.filter(somekey=foo) In my template I would like to do {% for object in data.somekey_set.FILTER %} but I just can't seem to find out how to write FILTER.
You can't do this, which is by design. The Django framework authors intended a strict separation of presentation code from data logic. Filtering models is data logic, and outputting HTML is presentation logic. So you have several options. The easiest is to do the filtering, then pass the result to render_to_response. Or you could write a method in your model so that you can say {% for object in data.filtered_set %}. Finally, you could write your own template tag, although in this specific case I would advise against that.
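A sketch of the model-method option, with hypothetical model names throughout:

    from django.db import models

    class Category(models.Model):
        name = models.CharField(max_length=50)

        def filtered_set(self):
            # data logic lives in the model layer, not the template
            return self.item_set.filter(somekey='foo')

    class Item(models.Model):
        category = models.ForeignKey(Category)
        somekey = models.CharField(max_length=50)

In the template, {% for object in data.filtered_set %} then works, because Django templates call zero-argument methods automatically.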
Javascript style dot notation for dictionary keys unpythonic?
I've started to use constructs like these: class DictObj(object): def __init__(self): self.d = {} def __getattr__(self, m): return self.d.get(m, None) def __setattr__(self, m, v): super.__setattr__(self, m, v) Update: based on this thread, I've revised the DictObj implementation to: class dotdict(dict): def __getattr__(self, attr): return self.get(attr, None) __setattr__= dict.__setitem__ __delattr__= dict.__delitem__ class AutoEnum(object): def __init__(self): self.counter = 0 self.d = {} def __getattr__(self, c): if c not in self.d: self.d[c] = self.counter self.counter += 1 return self.d[c] where DictObj is a dictionary that can be accessed via dot notation: d = DictObj() d.something = 'one' I find it more aesthetically pleasing than d['something']. Note that accessing an undefined key returns None instead of raising an exception, which is also nice. Update: Smashery makes a good point, which mhawke expands on for an easier solution. I'm wondering if there are any undesirable side effects of using dict instead of defining a new dictionary; if not, I like mhawke's solution a lot. AutoEnum is an auto-incrementing Enum, used like this: CMD = AutoEnum() cmds = { "peek": CMD.PEEK, "look": CMD.PEEK, "help": CMD.HELP, "poke": CMD.POKE, "modify": CMD.POKE, } Both are working well for me, but I'm feeling unpythonic about them. Are these in fact bad constructs?
Your DictObj example is actually quite common. Object-style dot-notation access can be a win if you are dealing with ‘things that resemble objects’, ie. they have fixed property names containing only characters valid in Python identifiers. Stuff like database rows or form submissions can be usefully stored in this kind of object, making code a little more readable without the excess of ['item access']. The implementation is a bit limited - you don't get the nice constructor syntax of dict, len(), comparisons, 'in', iteration or nice reprs. You can of course implement those things yourself, but in the new-style-classes world you can get them for free by simply subclassing dict: class AttrDict(dict): __getattr__ = dict.__getitem__ __setattr__ = dict.__setitem__ __delattr__ = dict.__delitem__ To get the default-to-None behaviour, simply subclass Python 2.5's collections.defaultdict class instead of dict.
RFC 1123 Date Representation in Python?
Is there a fairly easy way to convert a datetime object into an RFC 1123 (HTTP/1.1) date/time string, i.e. a string with the format Sun, 06 Nov 1994 08:49:37 GMT Using strftime does not work, since the strings are locale-dependant. Do I have to build the string by hand?
You can use wsgiref.handlers.format_date_time from the stdlib, which does not rely on locale settings: from wsgiref.handlers import format_date_time from datetime import datetime from time import mktime now = datetime.now() stamp = mktime(now.timetuple()) print format_date_time(stamp) #--> Wed, 22 Oct 2008 10:52:40 GMT You can use email.utils.formatdate from the stdlib, which does not rely on locale settings either: from email.utils import formatdate from datetime import datetime from time import mktime now = datetime.now() stamp = mktime(now.timetuple()) print formatdate( timeval = stamp, localtime = False, usegmt = True ) #--> Wed, 22 Oct 2008 10:55:46 GMT If you can set the locale process-wide then you can do: import locale, datetime locale.setlocale(locale.LC_TIME, 'en_US') datetime.datetime.utcnow().strftime('%a, %d %b %Y %H:%M:%S GMT') If you don't want to set the locale process-wide you could use Babel date formatting (note the 24-hour HH pattern, not the 12-hour hh): from datetime import datetime from babel.dates import format_datetime now = datetime.utcnow() format = 'EEE, dd LLL yyyy HH:mm:ss' print format_datetime(now, format, locale='en') + ' GMT' A manual way to format it which is identical to wsgiref.handlers.format_date_time is: def httpdate(dt): """Return a string representation of a date according to RFC 1123 (HTTP/1.1). The supplied date must be in UTC. """ weekday = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"][dt.weekday()] month = ["Jan", "Feb", "Mar", "Apr", "May", "Jun", "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"][dt.month - 1] return "%s, %02d %s %04d %02d:%02d:%02d GMT" % (weekday, dt.day, month, dt.year, dt.hour, dt.minute, dt.second)
"Pretty" Continuous Integration for Python
This is a slightly.. vain question, but BuildBot's output isn't particularly nice to look at.. For example, compared to.. phpUnderControl Jenkins Hudson CruiseControl.rb ..and others, BuildBot looks rather.. archaic I'm currently playing with Hudson, but it is very Java-centric (although with this guide, I found it easier to set up than BuildBot, and it produced more info) Basically: are there any Continuous Integration systems aimed at Python, that produce lots of shiny graphs and the like? Update: Since this question was asked, the Jenkins project has replaced Hudson as the community version of the package. The original authors have moved to this project as well. Jenkins is now a standard package on Ubuntu/Debian, RedHat/Fedora/CentOS, and others. The following update is still essentially correct; only the starting point to do this with Jenkins is different. Update: After trying a few alternatives, I think I'll stick with Hudson. Integrity was nice and simple, but quite limited. I think Buildbot is better suited to having numerous build-slaves, rather than everything running on a single machine like I was using it. Setting Hudson up for a Python project was pretty simple: Download Hudson from http://hudson-ci.org/ Run it with java -jar hudson.war Open the web interface on the default address of http://localhost:8080 Go to Manage Hudson, Plugins, click "Update" or similar Install the Git plugin (I had to set the git path in the Hudson global preferences) Create a new project, enter the repository, SCM polling intervals and so on Install nosetests via easy_install if it's not already installed In a build step, add nosetests --with-xunit --verbose Check "Publish JUnit test result report" and set "Test report XMLs" to **/nosetests.xml That's all that's required. You can set up email notifications, and the plugins are worth a look. A few I'm currently using for Python projects: SLOCCount plugin to count lines of code (and graph it!) - you need to install sloccount separately Violations to parse the PyLint output (you can set up warning thresholds and graph the number of violations over each build) Cobertura can parse the coverage.py output. Nosetests can gather coverage while running your tests, using nosetests --with-coverage (this writes the output to **/coverage.xml)
You might want to check out Nose and the Xunit output plugin. You can have it run your unit tests, and coverage checks with this command: nosetests --with-xunit --enable-cover That'll be helpful if you want to go the Jenkins route, or if you want to use another CI server that has support for JUnit test reporting. Similarly you can capture the output of pylint using the violations plugin for Jenkins
Batch Renaming of Files in a Directory
Is there an easy way to rename a group of files already contained in a directory, using Python? Example: I have a directory full of *.doc files and I want to rename them in a consistent way. X.doc -> "new(X).doc" Y.doc -> "new(Y).doc"
Such renaming is quite easy, for example with os and glob modules: import glob, os def rename(dir, pattern, titlePattern): for pathAndFilename in glob.iglob(os.path.join(dir, pattern)): title, ext = os.path.splitext(os.path.basename(pathAndFilename)) os.rename(pathAndFilename, os.path.join(dir, titlePattern % title + ext)) You could then use it in your example like this: rename(r'c:\temp\xx', r'*.doc', r'new(%s)') The above example will convert all *.doc files in c:\temp\xx dir to new(%s).doc, where %s is the previous base name of the file (without extension).
Django Forms - How to Use Prefix Parameter
Say I have a form like: class GeneralForm(forms.Form): field1 = forms.IntegerField(required=False) field2 = forms.IntegerField(required=False) And I want to show it twice on a page within one form tag, each time with a different prefix, e.g.: rest of page ... <form ..> GeneralForm(data,prefix="form1").as_table() GeneralForm(data,prefix="form2").as_table() <input type="submit" /> </form> rest of page ... When the user submits this, how do I get the submitted form back into two separate forms to do validation, and redisplay it? This was the only documentation I could find, and it's sparse.
You process each form as you normally would, ensuring that you create instances which have the same prefixes as those used to generate the form initially. Here's a slightly awkward example using the form you've given, as I don't know what the exact use case is: def some_view(request): if request.method == 'POST': form1 = GeneralForm(request.POST, prefix='form1') form2 = GeneralForm(request.POST, prefix='form2') if all([form1.is_valid(), form2.is_valid()]): pass # Do stuff with the forms else: form1 = GeneralForm(prefix='form1') form2 = GeneralForm(prefix='form2') return render_to_response('some_template.html', { 'form1': form1, 'form2': form2, }) Here's some real-world sample code which demonstrates processing forms using the prefix: http://collingrady.wordpress.com/2008/02/18/editing-multiple-objects-in-django-with-newforms/
How does one put a link / url to the web-site's home page in Django?
In Django templates, is there a variable in the context (e.g. {{ BASE_URL }}, {{ ROOT_URL }}, or {{ MEDIA_URL }}) that one can use to link to the home url of a project? I.e. if Django is running in the root of a project, the variable (let's call it R) {{ R }} in a template would be /. If the root url is a sub-folder http://host/X/ the variable {{ R }} would be /X/ (or http://host/X/). It seems painfully simple, but I can't find an answer. :) Thank you!
You could give the URL configuration which you're using to handle the home page a name and use that: urls.py: from django.conf.urls.defaults import * urlpatterns = patterns('myproject.views', url(r'^$', 'index', name='index'), ) Templates: <a href="{% url index %}">... This note in the Django Book has some tips about deploying your applications to a subdirectory: http://www.djangobook.com/en/1.0/chapter20/#cn43
Python Disk-Based Dictionary
I was running some dynamic programming code (trying to brute-force disprove the Collatz conjecture =P) and I was using a dict to store the lengths of the chains I had already computed. Obviously, it ran out of memory at some point. Is there any easy way to use some variant of a dict which will page parts of itself out to disk when it runs out of room? Obviously it will be slower than an in-memory dict, and it will probably end up eating my hard drive space, but this could apply to other problems that are not so futile. I realized that a disk-based dictionary is pretty much a database, so I manually implemented one using sqlite3, but I didn't do it in any smart way and had it look up every element in the DB one at a time... it was about 300x slower. Is the smartest way to just create my own set of dicts, keeping only one in memory at a time, and paging them out in some efficient manner?
The 3rd party shove module is also worth taking a look at. It's very similar to shelve in that it is a simple dict-like object, however it can store to various backends (such as file, SVN, and S3), provides optional compression, and is even threadsafe. It's a very handy module from shove import Shove mem_store = Shove() file_store = Shove('file://mystore') file_store['key'] = value
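If you'd rather stay in the standard library, shelve gives you the same dict-on-disk idea; note that its keys must be strings, so convert your integers (the values here are illustrative):

    import shelve

    chains = shelve.open('collatz_chains')   # backed by a file on disk
    chains[str(27)] = 112                     # e.g. chain length starting at 27
    print chains[str(27)]
    chains.close()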
How do you create a simple Google Talk Client using the Twisted Words Python library?
I am interested in making a Google Talk client using Python and would like to use the Twisted libraries Words module. I have looked at the examples, but they don't work with the current implementation of Google Talk. Has anybody had any luck with this? Would you mind documenting a brief tutorial? As a simple task, I'd like to create a client/bot that tracks the Online time of my various Google Talk accounts so that I can get an aggregate number. I figure I could friend the bot in each account and then use the XMPP presence information to keep track of the times that I can then aggregate. Thanks.
wokkel is the future of twisted words. metajack wrote a really nice blog post on getting started. If you want a nice, functional sample project to start with, check out my whatsup bot.
ASCII value of a character in Python
How do I get the ASCII value of a character as an int in Python?
The function ord() gets the int value of a char, and in case you want to convert back after playing with the number, the function chr() does the trick. >>> ord('a') 97 >>> chr(97) 'a' >>> chr(ord('a') + 3) 'd' >>> There is also the unichr function, returning the Unicode character whose ordinal is the unichr argument: >>> unichr(97) u'a' >>> unichr(1234) u'\u04d2'
Open file, read it, process, and write back - shortest method in Python
I want to do some basic filtering on a file. Read it, do processing, write it back. I'm not looking for "golfing", but want the simplest and most elegant method to achieve this. I came up with: from __future__ import with_statement filename = "..." # or sys.argv... with open(filename) as f: new_txt = # ...some translation of f.read() open(filename, 'w').write(new_txt) The with statement makes things shorter since I don't have to explicitly open and close the file. Any other ideas ?
Actually an easier way using fileinput is to use the inplace parameter: import fileinput for line in fileinput.input (filenameToProcess, inplace=1): process (line) If you use the inplace parameter it will redirect stdout to your file, so that if you do a print it will write back to your file. This example adds line numbers to your file: import fileinput for line in fileinput.input ("b.txt",inplace=1): print "%d: %s" % (fileinput.lineno(),line),
Passing a list while retaining the original
So I'm teaching myself Python, and I'm having an issue with lists. I want to pass my function a list and pop items off it while retaining the original list. How do I make python "instance" the passed list rather that passing a pointer to the original one? Example: def burninate(b): c = [] for i in range(3): c.append(b.pop()) return c a = range(6) d = burninate(a) print a, d Output: [0, 1, 2] [5, 4, 3] Desired output: [0, 1, 2, 3, 4, 5] [5, 4, 3] Thanks!
As other answers have suggested, you can provide your function with a copy of the list. As an alternative, your function could take a copy of the argument: def burninate(b): c = [] b = list(b) for i in range(3): c.append(b.pop()) return c Basically, you need to be clear in your mind (and in your documentation) whether your function will change its arguments. In my opinion, functions that return computed values should not change their arguments, and functions that change their arguments should not return anything. See python's [].sort(), [].extend(), {}.update(), etc. for examples. Obviously there are exceptions (like .pop()). Also, depending on your particular case, you could rewrite the function to avoid using pop() or other functions that modify the argument, e.g.: def burninate(b): return b[:-4:-1] # return the last three elements in reverse order
What's win32con module in python? Where can I find it?
I'm building an open source project that uses python and c++ in Windows. I came to the following error message: ImportError: No module named win32con The same happened in a "prebuilt" code that it's working ( except in my computer :P ) I think this is kind of "popular" module in python because I've saw several messages in other forums but none that could help me. I have Python2.6, should I have that module already installed? Is that something of VC++? Thank you for the help. I got this url http://sourceforge.net/projects/pywin32/ but I'm not sure what to do with the executable :S
This module contains constants related to Win32 programming. It is not part of the Python 2.6 release, but should be part of the download of the pywin32 project. Edit: I imagine that the executable is an installation program, though the last time I downloaded pywin32 it was just a zip file.
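Once installed, the module is just a big bag of Win32 constants, for example:

    import win32con
    print win32con.WM_CLOSE      # 16 -- window-message constant
    print win32con.MB_OKCANCEL   # 1  -- MessageBox style flag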
Python debugger: Stepping into a function that you have called interactively
Python is quite cool, but unfortunately, its debugger is not as good as perl -d. One thing that I do very commonly when experimenting with code is to call a function from within the debugger, and step into that function, like so: # NOTE THAT THIS PROGRAM EXITS IMMEDIATELY WITHOUT CALLING FOO() ~> cat -n /tmp/show_perl.pl 1 #!/usr/local/bin/perl 2 3 sub foo { 4 print "hi\n"; 5 print "bye\n"; 6 } 7 8 exit 0; ~> perl -d /tmp/show_perl.pl Loading DB routines from perl5db.pl version 1.28 Editor support available. Enter h or `h h' for help, or `man perldebug' for more help. main::(/tmp/show_perl.pl:8): exit 0; # MAGIC HAPPENS HERE -- I AM STEPPING INTO A FUNCTION THAT I AM CALLING INTERACTIVELY DB<1> s foo() main::((eval 6)[/usr/local/lib/perl5/5.8.6/perl5db.pl:628]:3): 3: foo(); DB<<2>> s main::foo(/tmp/show_perl.pl:4): print "hi\n"; DB<<2>> n hi main::foo(/tmp/show_perl.pl:5): print "bye\n"; DB<<2>> n bye DB<2> n Debugged program terminated. Use q to quit or R to restart, use O inhibit_exit to avoid stopping after program termination, h q, h R or h O to get additional info. DB<2> q This is incredibly useful when trying to step through a function's handling of various different inputs to figure out why it fails. However, it does not seem to work in either pdb or pydb (I'd show an equivalent python example to the one above but it results in a large exception stack dump). So my question is twofold: Am I missing something? Is there a python debugger that would indeed let me do this? Obviously I could put the calls in the code myself, but I love working interactively, eg. not having to start from scratch when I want to try calling with a slightly different set of arguments.
And I've answered my own question! It's the "debug" command in pydb: ~> cat -n /tmp/test_python.py 1 #!/usr/local/bin/python 2 3 def foo(): 4 print "hi" 5 print "bye" 6 7 exit(0) 8 ~> pydb /tmp/test_python.py (/tmp/test_python.py:7): <module> 7 exit(0) (Pydb) debug foo() ENTERING RECURSIVE DEBUGGER ------------------------Call level 11 (/tmp/test_python.py:3): foo 3 def foo(): ((Pydb)) s (/tmp/test_python.py:4): foo 4 print "hi" ((Pydb)) s hi (/tmp/test_python.py:5): foo 5 print "bye" ((Pydb)) s bye ------------------------Return from level 11 (<type 'NoneType'>) ----------------------Return from level 10 (<type 'NoneType'>) LEAVING RECURSIVE DEBUGGER (/tmp/test_python.py:7): <module>
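For completeness, plain pdb can manage something similar with pdb.runcall, which starts the debugger at the first line of the function you name:

    import pdb
    import mymodule   # hypothetical module containing foo

    pdb.runcall(mymodule.foo)         # drops into the debugger inside foo
    pdb.runcall(mymodule.foo, 1, 2)   # extra arguments are passed through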
How do I iterate through a string in Python?
As an example, lets say I wanted to list the frequency of each letter of the alphabet in a string. What would be the easiest way to do it? This is an example of what I'm thinking of... the question is how to make allTheLetters equal to said letters without something like allTheLetters = "abcdefg...xyz". In many other languages I could just do letter++ and increment my way through the alphabet, but thus far I haven't come across a way to do that in python. def alphCount(text): lowerText = text.lower() for letter in allTheLetters: print letter + ":", lowertext.count(letter)
The question you've asked (how to iterate through the alphabet) is not the same question as the problem you're trying to solve (how to count the frequency of letters in a string). You can use string.lowercase, as other posters have suggested: import string allTheLetters = string.lowercase To do things the way you're "used to", treating letters as numbers, you can use the "ord" and "chr" functions. There's absolutely no reason to ever do exactly this, but maybe it comes closer to what you're actually trying to figure out: def getAllTheLetters(begin='a', end='z'): beginNum = ord(begin) endNum = ord(end) for number in xrange(beginNum, endNum+1): yield chr(number) You can tell it does the right thing because this code prints True: import string print ''.join(getAllTheLetters()) == string.lowercase But, to solve the problem you're actually trying to solve, you want to use a dictionary and collect the letters as you go: from collections import defaultdict def letterOccurrances(string): frequencies = defaultdict(lambda: 0) for character in string: frequencies[character.lower()] += 1 return frequencies Use like so: occs = letterOccurrances("Hello, world!") print occs['l'] print occs['h'] This will print '3' and '1' respectively. Note that this works for unicode as well: # -*- coding: utf-8 -*- occs = letterOccurrances(u"héĺĺó, ẃóŕĺd!") print occs[u'l'] print occs[u'ĺ'] If you were to try the other approach on unicode (incrementing through every character) you'd be waiting a long time; there are millions of unicode characters. To implement your original function (print the counts of each letter in alphabetical order) in terms of this: def alphCount(text): for character, count in sorted(letterOccurrances(text).iteritems()): print "%s: %s" % (character, count) alphCount("hello, world!")
SQLite parameter substitution problem
Using SQLite3 with Python 2.5, I'm trying to iterate through a list and pull the weight of an item from the database based on the item's name. I tried using the "?" parameter substitution suggested to prevent SQL injections but it doesn't work. For example, when I use: for item in self.inventory_names: self.cursor.execute("SELECT weight FROM Equipment WHERE name = ?", item) self.cursor.close() I get the error: sqlite3.ProgrammingError: Incorrect number of bindings supplied. The current statement uses 1, and there are 8 supplied. I believe this is somehow caused by the initial creation of the database; the module I made that actually creates the DB does have 8 bindings. cursor.execute("""CREATE TABLE Equipment (id INTEGER PRIMARY KEY, name TEXT, price INTEGER, weight REAL, info TEXT, ammo_cap INTEGER, availability_west TEXT, availability_east TEXT)""") However, when I use the less-secure "%s" substitution for each item name, it works just fine. Like so: for item in self.inventory_names: self.cursor.execute("SELECT weight FROM Equipment WHERE name = '%s'" % item) self.cursor.close() I can't figure out why it thinks I have 8 bindins when I'm only calling one. How can I fix it?
The Cursor.execute() method expects a sequence as its second parameter. You are supplying a string, which happens to be 8 characters long, so it is treated as a sequence of 8 one-character bindings. Use the following form instead: self.cursor.execute("SELECT weight FROM Equipment WHERE name = ?", [item]) Python library reference 13.13.3: sqlite3 Cursor Objects.
os.walk without digging into directories below
How do I limit os.walk to only return files in the directory I provide it? def _dir_list(self, dir_name, whitelist): outputList = [] for root, dirs, files in os.walk(dir_name): for f in files: if os.path.splitext(f)[1] in whitelist: outputList.append(os.path.join(root, f)) else: self._email_to_("ignore") return outputList
Don't use os.walk; use os.listdir and keep only the files. Example: import os root = "C:\\" for item in os.listdir(root): if os.path.isfile(os.path.join(root, item)): print item
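Applied to your _dir_list, a minimal sketch (keeping your whitelist check and _email_to_ call, and meant to drop into your class as before):

import os

def _dir_list(self, dir_name, whitelist):
    outputList = []
    for f in os.listdir(dir_name):  # top-level entries only, no recursion
        path = os.path.join(dir_name, f)
        if not os.path.isfile(path):
            continue  # skip subdirectories entirely
        if os.path.splitext(f)[1] in whitelist:
            outputList.append(path)
        else:
            self._email_to_("ignore")
    return outputList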
How to flush output of Python print?
How do I force Python's print function to output to the screen?
import sys sys.stdout.flush() Print by default prints to sys.stdout. References: http://docs.python.org/reference/simple_stmts.html#the-print-statement http://docs.python.org/library/sys.html http://docs.python.org/library/stdtypes.html#file-objects
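A minimal sketch of the pattern in a loop that produces output slowly:

import sys
import time

for i in range(5):
    print i,            # the trailing comma suppresses the newline
    sys.stdout.flush()  # push the buffered text out immediately
    time.sleep(1)

Alternatively, running the script with python -u turns off buffering for the whole process.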
Given a list of variable names in Python, how do I create a dictionary with the variable names as keys (to the variables' values)?
I have a list of variable names, like this: ['foo', 'bar', 'baz'] (I originally asked how to convert a list of variables. See Greg Hewgill's answer below.) How do I convert this to a dictionary where the keys are the variable names (as strings) and the values are the values of the variables? {'foo': foo, 'bar': bar, 'baz': baz} Now that I'm re-asking the question, I came up with: d = {} for name in list_of_variable_names: d[name] = eval(name) Can that be improved upon? Update, responding to the question (in a comment) of why I'd want to do this: I often find myself using the % operator on strings with a dictionary of names and values to interpolate. Often the names in the string are just the names of local variables. So (with the answer below) I can do something like this: message = '''Name: %(name)s ZIP: %(zip)s Dear %(name)s, ...''' % dict((x, locals()[x]) for x in ['name', 'zip'])
Forget filtering locals()! The dictionary you give to the formatting string is allowed to contain unused keys: >>> name = 'foo' >>> zip = 123 >>> unused = 'whoops!' >>> locals() {'name': 'foo', 'zip': 123, ... 'unused': 'whoops!', ...} >>> '%(name)s %(zip)i' % locals() 'foo 123'
What does the "yield" keyword do?
What is the use of the yield keyword in Python? What does it do? For example, I'm trying to understand this code [1]: def _get_child_candidates(self, distance, min_dist, max_dist): if self._leftchild and distance - max_dist < self._median: yield self._leftchild if self._rightchild and distance + max_dist >= self._median: yield self._rightchild And this is the caller: result, candidates = list(), [self] while candidates: node = candidates.pop() distance = node._get_dist(obj) if distance <= max_dist and distance >= min_dist: result.extend(node._values) candidates.extend(node._get_child_candidates(distance, min_dist, max_dist)) return result What happens when the method _get_child_candidates is called? A list is returned? A single element is returned? Is it called again? When will subsequent calls stop? [1] The code comes from Jochen Schulz (jrschulz), who made a great Python library for metric spaces. This is the link to the complete source: Module mspace.
To understand what yield does, you must understand what generators are. And before generators come iterables. Iterables When you create a list, you can read its items one by one. Reading its items one by one is called iteration: >>> mylist = [1, 2, 3] >>> for i in mylist: ... print(i) 1 2 3 mylist is an iterable. When you use a list comprehension, you create a list, and so an iterable: >>> mylist = [x*x for x in range(3)] >>> for i in mylist: ... print(i) 0 1 4 Everything you can use "for... in..." on is an iterable: lists, strings, files... These iterables are handy because you can read them as much as you wish, but you store all the values in memory and this is not always what you want when you have a lot of values. Generators Generators are iterators, but you can only iterate over them once. This is because they do not store all the values in memory: they generate the values on the fly: >>> mygenerator = (x*x for x in range(3)) >>> for i in mygenerator: ... print(i) 0 1 4 It is just the same except you used () instead of []. BUT, you cannot perform for i in mygenerator a second time since generators can only be used once: they calculate 0, then forget about it and calculate 1, and finally calculate 4, one by one. Yield Yield is a keyword that is used like return, except the function will return a generator. >>> def createGenerator(): ... mylist = range(3) ... for i in mylist: ... yield i*i ... >>> mygenerator = createGenerator() # create a generator >>> print(mygenerator) # mygenerator is an object! <generator object createGenerator at 0xb7555c34> >>> for i in mygenerator: ... print(i) 0 1 4 Here it's a useless example, but it's handy when you know your function will return a huge set of values that you will only need to read once. To master yield, you must understand that when you call the function, the code you have written in the function body does not run. The function only returns the generator object; this is a bit tricky :-) Then, your code will be run each time the for uses the generator. Now the hard part: The first time the for calls the generator object created from your function, it will run the code in your function from the beginning until it hits yield, then it'll return the first value of the loop. Then, each subsequent call will run the loop you have written in the function one more time, and return the next value, until there is no value left to return. The generator is considered empty once the function runs but does not hit yield anymore. That can be because the loop has come to an end, or because you no longer satisfy an "if/else".
Your code explained Generator: # Here you create the method of the node object that will return the generator def node._get_child_candidates(self, distance, min_dist, max_dist): # Here is the code that will be called each time you use the generator object: # If there is still a child of the node object on its left # AND if distance is ok, return the next child if self._leftchild and distance - max_dist < self._median: yield self._leftchild # If there is still a child of the node object on its right # AND if distance is ok, return the next child if self._rightchild and distance + max_dist >= self._median: yield self._rightchild # If the function arrives here, the generator will be considered empty # there are no more than two values: the left and the right children Caller: # Create an empty list and a list with the current object reference result, candidates = list(), [self] # Loop on candidates (they contain only one element at the beginning) while candidates: # Get the last candidate and remove it from the list node = candidates.pop() # Get the distance between obj and the candidate distance = node._get_dist(obj) # If distance is ok, then you can fill the result if distance <= max_dist and distance >= min_dist: result.extend(node._values) # Add the children of the candidate in the candidates list # so the loop will keep running until it has looked # at all the children of the children of the children, etc. of the candidate candidates.extend(node._get_child_candidates(distance, min_dist, max_dist)) return result This code contains several smart parts: The loop iterates on a list, but the list expands while the loop is being iterated :-) It's a concise way to go through all these nested data, even if it's a bit dangerous since you can end up with an infinite loop. In this case, candidates.extend(node._get_child_candidates(distance, min_dist, max_dist)) exhausts all the values of the generator, but while keeps creating new generator objects which will produce different values from the previous ones since it's not applied on the same node. The extend() method is a list object method that expects an iterable and adds its values to the list. Usually we pass a list to it: >>> a = [1, 2] >>> b = [3, 4] >>> a.extend(b) >>> print(a) [1, 2, 3, 4] But in your code it gets a generator, which is good because: You don't need to read the values twice. You may have a lot of children and you don't want them all stored in memory. And it works because Python does not care if the argument of a method is a list or not. Python expects iterables so it will work with strings, lists, tuples and generators! This is called duck typing and is one of the reasons why Python is so cool. But this is another story, for another question... You can stop here, or read on a little to see an advanced use of a generator: Controlling generator exhaustion >>> class Bank(): # let's create a bank, building ATMs ... crisis = False ... def create_atm(self): ... while not self.crisis: ... yield "$100" >>> hsbc = Bank() # when everything's ok the ATM gives you as much as you want >>> corner_street_atm = hsbc.create_atm() >>> print(corner_street_atm.next()) $100 >>> print(corner_street_atm.next()) $100 >>> print([corner_street_atm.next() for cash in range(5)]) ['$100', '$100', '$100', '$100', '$100'] >>> hsbc.crisis = True # crisis is coming, no more money!
>>> print(corner_street_atm.next()) <type 'exceptions.StopIteration'> >>> wall_street_atm = hsbc.create_atm() # it's even true for new ATMs >>> print(wall_street_atm.next()) <type 'exceptions.StopIteration'> >>> hsbc.crisis = False # trouble is, even post-crisis the ATM remains empty >>> print(corner_street_atm.next()) <type 'exceptions.StopIteration'> >>> brand_new_atm = hsbc.create_atm() # build a new one to get back in business >>> for cash in brand_new_atm: ... print cash $100 $100 $100 $100 $100 $100 $100 $100 $100 ... It can be useful for various things like controlling access to a resource. Itertools, your best friend The itertools module contains special functions to manipulate iterables. Ever wish to duplicate a generator? Chain two generators? Group values in a nested list with a one-liner? Map / Zip without creating another list? Then just import itertools. An example? Let's see the possible orders of arrival for a four-horse race: >>> horses = [1, 2, 3, 4] >>> races = itertools.permutations(horses) >>> print(races) <itertools.permutations object at 0xb754f1dc> >>> print(list(itertools.permutations(horses))) [(1, 2, 3, 4), (1, 2, 4, 3), (1, 3, 2, 4), (1, 3, 4, 2), (1, 4, 2, 3), (1, 4, 3, 2), (2, 1, 3, 4), (2, 1, 4, 3), (2, 3, 1, 4), (2, 3, 4, 1), (2, 4, 1, 3), (2, 4, 3, 1), (3, 1, 2, 4), (3, 1, 4, 2), (3, 2, 1, 4), (3, 2, 4, 1), (3, 4, 1, 2), (3, 4, 2, 1), (4, 1, 2, 3), (4, 1, 3, 2), (4, 2, 1, 3), (4, 2, 3, 1), (4, 3, 1, 2), (4, 3, 2, 1)] Understanding the inner mechanisms of iteration Iteration is a process involving iterables (implementing the __iter__() method) and iterators (implementing the __next__() method). Iterables are any objects you can get an iterator from. Iterators are objects that let you iterate on iterables. More about it in this article about how the for loop works.
Python inheritance - how to disable a function
In C++ you can disable a function in a parent class by declaring it as private in the child class. How can this be done in Python? I.e., how can I hide a parent's function from the child's public interface?
There really aren't any true "private" attributes or methods in Python. One thing you can do is simply override the method you don't want in the subclass, and raise an exception: >>> class Foo( object ): ... def foo( self ): ... print 'FOO!' ... >>> class Bar( Foo ): ... def foo( self ): ... raise AttributeError( "'Bar' object has no attribute 'foo'" ) ... >>> b = Bar() >>> b.foo() Traceback (most recent call last): File "<interactive input>", line 1, in <module> File "<interactive input>", line 3, in foo AttributeError: 'Bar' object has no attribute 'foo'
Will Django be a good choice for a permissions based web-app?
I've been exploring the details of Django for about a week now and like what I see. However, I've come upon some... negativity in relation to fine-grained control of permissions to the CRUD interface. What I'm writing is an Intranet client management web-app. The organisation is about 6 tiers, and I need to restrict access to client groups based on tiers. Continually expanding. I have a fairly good idea how I'm going to do this, but am not sure if I'll be able to integrate it well into the pre-built admin interface. I've done absolutely zero Django development, otherwise I'd probably have a better idea on whether this would work or not. I probably won't use Django if the generated admin interface is going to be useless to this project - but like I said, there is a heavy reliance on fine-grained custom permissions. Will Django let me build custom permissions/rules and integrate them seamlessly into the admin CRUD interface? Update One: I want to use the admin app to minimise the repetition of generating CRUD interfaces, so yes, I consider it a must-have. Update Two: I want to describe the permissions required for this project. A client can belong to one or many 'stores'. Full-time employees should only be able to edit clients at their store (even if they belong to another store). However, they should not be able to see/edit clients at another store. Casuals should only be able to view clients based on what store they are rostered to (or if the casual is logged in as the store user - more likely). Management above them need to be able to see all employees for the stores they manage, nothing more. Senior management should be able to edit ALL employees and grant permissions below themselves. After reading the Django documentation, it says you can't (automatically) set permissions for a subset of a group. Only the entire group. Is it easy enough to mock up your own permissions for this purpose?
If I read your updated requirements correctly, I don't think Django's existing auth system will be sufficient. It sounds like you need a full-on ACL system. This subject has come up a number of times. Try googling on django+acl. Random samplings ... There was a Summer of Code project a couple of years ago, but I'm not sure where they got to. See http://code.djangoproject.com/wiki/GenericAuthorization There is a fresh ticket at djangoproject.org that might be interesting: http://code.djangoproject.com/ticket/9444 There are some interesting code snippets on dumpz.org: http://dumpz.org/274/ models.py http://dumpz.org/273/ signals.py ... but there are zero docs. Good luck!
How do I restrict foreign key choices to related objects only in Django
I have a two-way foreign relation similar to the following: class Parent(models.Model): name = models.CharField(max_length=255) favoritechild = models.ForeignKey("Child", blank=True, null=True) class Child(models.Model): name = models.CharField(max_length=255) myparent = models.ForeignKey(Parent) How do I restrict the choices for Parent.favoritechild to only children whose parent is itself? I tried class Parent(models.Model): name = models.CharField(max_length=255) favoritechild = models.ForeignKey("Child", blank=True, null=True, limit_choices_to = {"myparent": "self"}) but that causes the admin interface to not list any children.
I just came across ForeignKey.limit_choices_to in the Django docs. Not sure yet how this works, but it might just be the right thing here. Update: ForeignKey.limit_choices_to lets you specify either a constant, a callable or a Q object to restrict the allowable choices for the key. A constant obviously is no use here, since it knows nothing about the objects involved. Using a callable (function or class method or any callable object) seems more promising. The problem remains how to access the necessary information from the HttpRequest object. Using thread local storage may be a solution. Second update: Here is what has worked for me: I created a middleware as described in the link above. It extracts one or more arguments from the request's GET part, such as "product=1", and stores this information in the thread locals. Next there is a class method in the model that reads the thread local variable and returns a list of ids to limit the choice of a foreign key field. @classmethod def _product_list(cls): """ return a list containing the one product_id contained in the request URL, or a query containing all valid product_ids if no id is present in the URL used to limit the choice of foreign key object to those related to the current product """ id = threadlocals.get_current_product() if id is not None: return [id] else: return Product.objects.all().values('pk').query It is important to return a query containing all possible ids if none was selected so that the normal admin pages work ok. The foreign key field is then declared as: product = models.ForeignKey(Product, limit_choices_to=dict(id__in=BaseModel._product_list)) The catch is that you have to provide the information to restrict the choices via the request. I don't see a way to access "self" here.
Lexical closures in Python
While I was investigating a problem I had with lexical closures in JavaScript code, I came across this problem in Python: flist = [] for i in xrange(3): def func(x): return x * i flist.append(func) for f in flist: print f(2) Note that this example deliberately avoids lambda. It prints "4 4 4", which is surprising. I'd expect "0 2 4". This equivalent Perl code does it right: my @flist = (); foreach my $i (0 .. 2) { push(@flist, sub {$i * $_[0]}); } foreach my $f (@flist) { print $f->(2), "\n"; } "0 2 4" is printed. Can you please explain the difference? Update: The problem is not with i being global. This displays the same behavior: flist = [] def outer(): for i in xrange(3): def inner(x): return x * i flist.append(inner) outer() #~ print i # commented because it causes an error for f in flist: print f(2) As the commented line shows, i is unknown at that point. Still, it prints "4 4 4".
The functions defined in the loop keep accessing the same variable i while its value changes. At the end of the loop, all the functions point to the same variable, which is holding the last value in the loop: the effect is what is reported in the example. In order to evaluate i and use its value, a common pattern is to set it as a parameter default: parameter defaults are evaluated when the def statement is executed, and thus the value of the loop variable is frozen. The following works as expected: flist = [] for i in xrange(3): def func(x, i=i): # the *value* of i is copied into func()'s environment return x * i flist.append(func) for f in flist: print f(2)
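Another common fix is a factory function, so that each closure gets its own enclosing scope with its own i; a sketch:

flist = []

def make_func(i):
    # each call creates a fresh scope; func closes over *this* i
    def func(x):
        return x * i
    return func

for i in xrange(3):
    flist.append(make_func(i))

for f in flist:
    print f(2)  # prints 0, 2, 4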
Environment Variables in Python on Linux
Python's access to environment variables does not accurately reflect the operating system's view of the process's environment. os.getenv and os.environ do not function as expected in particular cases. Is there a way to properly get the running process' environment? To demonstrate what I mean, take the two roughly equivalent programs (the first in C, the other in Python): #include <stdio.h> #include <stdlib.h> #include <unistd.h> int main(int argc, char *argv[]){ char *env; for(;;){ env = getenv("SOME_VARIABLE"); if(env) puts(env); sleep(5); } } import os import time while True: env = os.getenv("SOME_VARIABLE") if env is not None: print env time.sleep(5) Now, if we run the C program and attach to the running process with gdb and forcibly change the environment under the hood by doing something like this: (gdb) print setenv("SOME_VARIABLE", "my value", 1) [Switching to Thread -1208600896 (LWP 16163)] $1 = 0 (gdb) print (char *)getenv("SOME_VARIABLE") $2 = 0x8293126 "my value" then the aforementioned C program will start spewing out "my value" once every 5 seconds. The aforementioned Python program, however, will not. Is there a way to get the Python program to function like the C program in this case? (Yes, I realize this is a very obscure and potentially damaging action to perform on a running process) Also, I'm currently using Python 2.4; this may have been fixed in a later version of Python.
That's a very good question. It turns out that the os module initializes os.environ to the value of posix.environ, which is set on interpreter start-up. In other words, the standard library does not appear to provide access to the getenv function. That is a case where it would probably be safe to use ctypes on unix, since you would be calling an ultra-standard libc function.
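A rough sketch of that idea (assuming Linux with glibc; the library name differs on other unixes):

import ctypes
import time

libc = ctypes.CDLL("libc.so.6")        # glibc on most Linux systems
libc.getenv.restype = ctypes.c_char_p  # ctypes assumes int return values by default

while True:
    env = libc.getenv("SOME_VARIABLE")
    if env is not None:
        print env
    time.sleep(5)

This mirrors the C program in the question, so the gdb trick should now be visible from Python too.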
How can I use Python for large scale development?
I would be interested to learn about large scale development in Python, and especially in how you maintain a large code base. When you make incompatible changes to the signature of a method, how do you find all the places where that method is being called? In C++/Java the compiler will find it for you; how do you do it in Python? When you make changes deep inside the code, how do you find out what operations an instance provides, since you don't have a static type to look up? How do you handle/prevent typing errors (typos)? Are unit tests used as a substitute for static type checking? As you can guess I have almost only worked with statically typed languages (C++/Java), but I would like to try my hand at Python for larger programs. But I had a very bad experience, a long time ago, with the Clipper (dBase) language, which was also dynamically typed.
Don't use a screwdriver as a hammer Python is not a statically typed language, so don't try to use it that way. When you use a specific tool, you use it for what it has been built for. For Python, it means: Duck typing: no type checking. Only behavior matters. Therefore your code must be designed to use this feature. A good design means generic signatures, no dependencies between components, high abstraction levels... So if you change anything, you won't have to change the rest of the code. Python will not complain either; that's what it has been built for. Types are not an issue. Huge standard library. You do not need to change all your calls in the program if you use standard features you haven't coded yourself. And Python comes with batteries included. I keep discovering them every day. I had no idea of the number of modules I could use when I started, and tried to rewrite existing stuff like everybody else. It's OK, you can't get it all right from the beginning. You don't write Java, C++, Python, PHP, Erlang, whatever, the same way. There are good reasons why there is room for so many different languages: they do not do the same things. Unit tests are not a substitute Unit tests must be performed with any language. The most famous unit test library (JUnit) is from the Java world! This has nothing to do with types. You check behaviors, again. You avoid trouble with regression. You assure your customer you are on track. Python for large scale projects Languages, libraries and frameworks don't scale. Architectures do. If you design a solid architecture, if you are able to make it evolve quickly, then it will scale. Unit tests help, automatic code checks as well. But they are just safety nets. And small ones. Python is especially suitable for large projects because it enforces some good practices and has a lot of usual design patterns built in. But again, do not use it for what it is not designed for. E.g.: Python is not a technology for CPU-intensive tasks. In a huge project, you will most likely use several different technologies anyway, such as a DBMS and a templating language. Python is no exception. You will probably want to use C/C++ for the parts of your code that need to be fast. Or Java to fit in a Tomcat environment. Don't know, don't care. Python can play well with these. As a conclusion My answer may feel a bit rude, but don't get me wrong: this is a very good question. A lot of people come to Python with old habits. I screwed up trying to write Java in Python. You can, but you will never get the best of it. If you have played with / want to play with Python, it's great! It's a wonderful tool. But just a tool, really.
How do I convert any image to a 4-color paletted image using the Python Imaging Library?
I have a device that supports 4-color graphics (much like CGA in the old days). I wanted to use PIL to read the image and convert it using my 4-color palette (of red, green, yellow, black), but I can't figure out if it's even possible at all. I found some mailing list archive posts that seem to suggest other people have tried to do so and failed. A simple python example would be much appreciated! Bonus points if you add something that then converts the image to a byte string where each byte represents 4 pixels of data (with each two bits representing a color from 0 to 3)
First: your four colour palette (black, green, red, yellow) has no blue component. So, you have to accept that your output image will hardly approximate the input image, unless there is no blue component to start with. Try this code: import Image def estimate_color(c, bit, c_error): c_new= c - c_error if c_new > 127: c_bit= bit c_error= 255 - c_new else: c_bit= 0 c_error= -c_new return c_bit, c_error def image2cga(im): "Produce a sequence of CGA pixels from image im" im_width= im.size[0] for index, (r, g, b) in enumerate(im.getdata()): if index % im_width == 0: # start of a line r_error= g_error= 0 r_bit, r_error= estimate_color(r, 1, r_error) g_bit, g_error= estimate_color(g, 2, g_error) yield r_bit|g_bit def cvt2cga(imgfn): "Convert an RGB image to (K, R, G, Y) CGA image" inp_im= Image.open(imgfn) # assume it's RGB out_im= Image.new("P", inp_im.size, None) out_im.putpalette( ( 0, 0, 0, 255, 0, 0, 0, 255, 0, 255, 255, 0, ) ) out_im.putdata(list(image2cga(inp_im))) return out_im if __name__ == "__main__": import sys, os for imgfn in sys.argv[1:]: im= cvt2cga(imgfn) dirname, filename= os.path.split(imgfn) name, ext= os.path.splitext(filename) newpathname= os.path.join(dirname, "cga-%s.png" % name) im.save(newpathname) This creates a PNG palette image with only the first four palette entries set to your colours. This sample image: becomes It's trivial to take the output of image2cga (yields a sequence of 0-3 values) and pack every four values to a byte. If you need help about what the code does, please ask and I will explain. EDIT1: Do not reinvent the wheel Of course, turns out I was too enthusiastic and —as Thomas discovered— the Image.quantize method can take a palette image as argument and do the quantization with far better results than my ad-hoc method above: def cga_quantize(image): pal_image= Image.new("P", (1,1)) pal_image.putpalette( (0,0,0, 0,255,0, 255,0,0, 255,255,0) + (0,0,0)*252) return image.convert("RGB").quantize(palette=pal_image) EDIT1, cont: Pack the pixels into bytes For "added value", here follows code to produce the packed string (4 pixels per byte): import itertools as it # setup: create a map with tuples [(0,0,0,0)‥(3,3,3,3)] as keys # and values [chr(0)‥chr(255)], because PIL does not yet support # 4 colour palette images TUPLE2CHAR= {} # Assume (b7, b6) are pixel0, (b5, b4) are pixel1… # Call it "big endian" KEY_BUILDER= [ (0, 64, 128, 192), # pixel0 value used as index (0, 16, 32, 48), # pixel1 (0, 4, 8, 12), # pixel2 (0, 1, 2, 3), # pixel3 ] # For "little endian", uncomment the following line ## KEY_BUILDER.reverse() # python2.6 has itertools.product, but for compatibility purposes # let's do it verbosely: for ix0, px0 in enumerate(KEY_BUILDER[0]): for ix1, px1 in enumerate(KEY_BUILDER[1]): for ix2, px2 in enumerate(KEY_BUILDER[2]): for ix3, px3 in enumerate(KEY_BUILDER[3]): TUPLE2CHAR[ix0,ix1,ix2,ix3]= chr(px0+px1+px2+px3) # Another helper function, copied almost verbatim from itertools docs def grouper(n, iterable, padvalue=None): "grouper(3, 'abcdefg', 'x') --> ('a','b','c'), ('d','e','f'), ('g','x','x')" return it.izip(*[it.chain(iterable, it.repeat(padvalue, n-1))]*n) # now the functions def seq2str(seq): """Takes a sequence of [0..3] values and packs them into bytes using two bits per value""" return ''.join( TUPLE2CHAR[four_pixel] for four_pixel in grouper(4, seq, 0)) # and the image related function # Note that the following function is correct, # but is not useful for Windows 16 colour bitmaps, # which start at the *bottom* row… def image2str(img): 
return seq2str(img.getdata())
How to get file creation & modification date/times in Python?
I have a script that needs to do some stuff based on file creation & modification dates but has to run on Linux & Windows. What's the best cross-platform way to get file creation & modification date/times in Python?
You have a couple of choices. For one, you can use the os.path.getmtime and os.path.getctime functions: import os.path, time print "last modified: %s" % time.ctime(os.path.getmtime(file)) print "created: %s" % time.ctime(os.path.getctime(file)) Your other option is to use os.stat: import os, time (mode, ino, dev, nlink, uid, gid, size, atime, mtime, ctime) = os.stat(file) print "last modified: %s" % time.ctime(mtime) Note: ctime() does not refer to creation time on *nix systems, but rather the last time the inode data changed. (thanks to kojiro for making that fact more clear in the comments by providing a link to an interesting blog post)
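If you want a single cross-platform helper, a hedged sketch (on *nix it falls back to the modification time, since a true creation time simply isn't recorded):

import os
import platform

def creation_date(path):
    # On Windows st_ctime is the creation time; on *nix it is the
    # inode-change time, so mtime is the closest honest substitute.
    if platform.system() == "Windows":
        return os.path.getctime(path)
    return os.path.getmtime(path)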
Is there a reason Python strings don't have a string length method?
I know that Python has a len() function that is used to determine the size of a string, but I was wondering why it's not a method of the string object. Update Ok, I realized I was embarrassingly mistaken. __len__() is actually a method of a string object. It just seems weird to see object-oriented code in Python using the len function on string objects. Furthermore, it's also weird to see __len__ as the name instead of just len.
Strings do have a length method: __len__() The protocol in Python is to implement this method on objects which have a length and use the built-in len() function, which calls it for you, similar to the way you would implement __iter__() and use the built-in iter() function (or have the method called behind the scenes for you) on objects which are iterable. See Emulating container types for more information. Here's a good read on the subject of protocols in Python: Python and the Principle of Least Astonishment
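To see the protocol in action, a tiny sketch of a class that plugs into the built-in len():

class Playlist(object):
    def __init__(self, songs):
        self._songs = list(songs)

    def __len__(self):
        # len(p) calls this behind the scenes
        return len(self._songs)

p = Playlist(["a", "b", "c"])
print len(p)  # 3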
Multiple mouse pointers?
Is there a way to accept input from more than one mouse separately? I'm interested in making a multi-user application and I thought it would be great if I could have 2 or more users holding wireless mice, each interacting with the app individually with a separate mouse arrow. Is this something I should try to farm out to some other application/driver/os_magic, or is there a library I can use to accomplish this? Language isn't a HUGE deal, but C, C++, and Python are preferable. Thanks :) Edit: Found this multi-pointer toolkit for Linux (it's actually a multi-pointer X server): http://wearables.unisa.edu.au/mpx/
You could try the Microsoft Windows MultiPoint Software Development Kit 1.1 or the new Microsoft Windows MultiPoint Software Development Kit 1.5 and the main Microsoft Multipoint site
How to disable HTML encoding when using Context in django
In my Django application I am using a template to construct an email body; one of the parameters is a url, and note there are two parameters separated by an ampersand in the url. t = loader.get_template("sometemplate") c = Context({ 'foo': 'bar', 'url': 'http://127.0.0.1/test?a=1&b=2', }) print t.render(c) After rendering it produces: http://127.0.0.1/test?a=1&amp;b=2 Note the ampersand is HTML encoded as "&amp;". One way around the problem is to pass each parameter separately to my template and construct the url in the template; however, I'd like to avoid doing that. Is there a way to disable HTML encoding of context parameters or at the very least avoid encoding of ampersands?
To turn it off for a single variable, use mark_safe: from django.utils.safestring import mark_safe t = loader.get_template("sometemplate") c = Context({ 'foo': 'bar', 'url': mark_safe('http://127.0.0.1/test?a=1&b=2'), }) print t.render(c) Alternatively, to totally turn autoescaping off from your Python code, use the autoescape argument when initialising a Context: c = Context({ 'foo': 'bar', 'url': 'http://127.0.0.1/test?a=1&b=2', }, autoescape=False) The How to turn [Automatic HTML escaping] off section of the documentation covers some of the in-template options if you'd rather do it there.
python properties and inheritance
I have a base class with a property whose getter I want to override in the subclass. My first thought was something like: class Foo(object): def _get_age(self): return 11 age = property(_get_age) class Bar(Foo): def _get_age(self): return 44 This does not work (subclass bar.age returns 11). I found a solution with a lambda expression which works: age = property(lambda self: self._get_age()) So is this the right solution for using properties and overriding them in a subclass, or are there other preferred ways to do this?
I simply prefer to repeat the property(), just as you would repeat the @classmethod decorator when overriding a class method. While this seems very verbose, at least by Python standards, you may notice: 1) for read-only properties, property can be used as a decorator: class Foo(object): @property def age(self): return 11 class Bar(Foo): @property def age(self): return 44 2) in Python 2.6, properties grew a pair of methods setter and deleter which can be used to apply to general properties the shortcut already available for read-only ones: class C(object): @property def x(self): return self._x @x.setter def x(self, value): self._x = value
How do you log server errors on django sites
So, when playing with the development I can just set settings.DEBUG to True and if an error occurs I can see it nicely formatted, with a good stack trace and request information. But on a production site I'd rather use DEBUG=False and show visitors some standard error 500 page with information that I'm working on fixing this bug at this moment ;) At the same time I'd like to have some way of logging all that information (stack trace and request info) to a file on my server - so I can just output it to my console and watch errors scroll, email the log to me every hour or something like this. What logging solutions would you recommend for a Django site that would meet those simple requirements? I have the application running as an fcgi server and I'm using the Apache web server as a frontend (although thinking of going to lighttpd).
Well, when DEBUG = False, Django will automatically mail a full traceback of any error to each person listed in the ADMINS setting, which gets you notifications pretty much for free. If you'd like more fine-grained control, you can write and add to your settings a middleware class which defines a method named process_exception(), which will have access to the exception that was raised: http://docs.djangoproject.com/en/dev/topics/http/middleware/#process-exception Your process_exception() method can then perform whatever type of logging you'd like: writing to console, writing to a file, etc., etc. Edit: though it's a bit less useful, you can also listen for the got_request_exception signal, which will be sent whenever an exception is encountered during request processing: http://docs.djangoproject.com/en/dev/ref/signals/#got-request-exception This does not give you access to the exception object, however, so the middleware method is much easier to work with.
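A minimal sketch of such a middleware class (the log path is just a placeholder, and you would add the class to MIDDLEWARE_CLASSES in your settings):

import traceback

class ExceptionLoggingMiddleware(object):
    def process_exception(self, request, exception):
        # append the request path and the full traceback to a log file
        f = open('/var/log/myapp/errors.log', 'a')  # hypothetical path
        try:
            f.write('Error on %s\n' % request.path)
            f.write(traceback.format_exc())
        finally:
            f.close()
        # returning None lets Django's usual 500 handling proceed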
Pattern matching of lists in Python
I want to do some pattern matching on lists in Python. For example, in Haskell, I can do something like the following: fun (head : rest) = ... So when I pass in a list, head will be the first element, and rest will be the trailing elements. Likewise, in Python, I can automatically unpack tuples: (var1, var2) = func_that_returns_a_tuple() I want to do something similar with lists in Python. Right now, I have a function that returns a list, and a chunk of code that does the following: ls = my_func() (head, rest) = (ls[0], ls[1:]) I wondered if I could somehow do that in one line in Python, instead of two.
So far as I know there's no way to make it a one-liner in current Python without introducing another function, e.g.: split_list = lambda lst: (lst[0], lst[1:]) head, rest = split_list(my_func()) However, in Python 3.0 the specialized syntax used for variadic argument signatures and argument unpacking will become available for this type of general sequence unpacking as well, so in 3.0 you'll be able to write: head, *rest = my_func() See PEP 3132 for details.
Python list of lists, changes reflected across sublists unexpectedly
I needed to create a list of lists in Python, so I typed the following: myList = [[1] * 4] * 3 The list looked like this: [[1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1]] Then I changed one of the innermost values: myList[0][0] = 5 Now my list looks like this: [[5, 1, 1, 1], [5, 1, 1, 1], [5, 1, 1, 1]] which is not what I wanted or expected. Can someone please explain what's going on, and how to get around it?
When you write [x]*3 you get, essentially, the list [x, x, x]. That is, a list with 3 references to the same x. When you then modify this single x it is visible via all three references to it. To fix it, you need to make sure that you create a new list at each position. One way to do it is [[1]*4 for n in range(3)]
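You can see the aliasing directly by comparing object identities:

myList = [[1] * 4] * 3
print map(id, myList)  # three identical ids: one inner list, referenced three times

fixed = [[1] * 4 for n in range(3)]
print map(id, fixed)   # three distinct ids: independent inner lists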
Possible to integrate Google AppEngine and Google Code for continuous integration?
Anyone have any thoughts on how/if it is possible to integrate Google Code commits to cause a Google AppEngine deployment of the most recent code? I have a simple Google AppEngine project's source hosted on Google Code and would love it if, every time I committed to Subversion, AppEngine reflected the latest commit. I don't mind if things are broken on the live site since the project is for personal use mainly and for learning. Anyone have any thoughts on how to tie into the Subversion commit for the Code repository and/or how to kick off the deployment to AppEngine? Ideally the solution would not require anything manual from me nor any type of server/listener software on my machine.
Made By Sofa had a blog post about their workflow with Google App Engine. In the second-to-last paragraph they attach a Subversion hook so that when someone commits code it is automatically deployed to Google App Engine. It would take a little bit of tweaking (because it works on the server side, not the client) but you could do the same.
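If you'd rather roll your own, a rough sketch of a server-side post-commit hook; all paths here are hypothetical, and appcfg.py will prompt for credentials unless you arrange otherwise:

#!/usr/bin/env python
# hooks/post-commit -- Subversion passes the repository path and revision
import sys
import subprocess

repos, rev = sys.argv[1], sys.argv[2]
subprocess.call(['svn', 'export', '--force',
                 'file://' + repos + '/trunk',  # assuming a standard trunk layout
                 '/tmp/myapp-deploy'])          # hypothetical staging spot
subprocess.call(['appcfg.py', 'update', '/tmp/myapp-deploy'])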
Python lazy list
I would like to create my own collection that has all the attributes of a Python list and also knows how to save/load itself into/from a database. Also, I want to make the load implicit and lazy, as in it doesn't happen at the point of creation of the list, but waits until it's first used. Is there a single __xxx__ method I can override to load the list on first usage of any list property (such as __len__, __getitem__, __iter__, etc.) without having to override them all?
Not exactly. For emulating things other than lists, there's __getattribute__, but unfortunately Python doesn't consider operators like x[y] or x(y) to be exactly the same as x.__getitem__(y) or x.__call__(y). Operators like that are attributes of the class, not attributes of the instance, as you can see here: >>> class x(object): ... def __getattribute__(self, o): ... print o ... >>> x()[3] Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: 'x' object does not support indexing However, you can take advantage of Python's dynamic nature to effectively eliminate that distinction. If your main concern is to save yourself typing, and to produce less code that needs maintaining, you can do something like this: class override(object): def __init__(self, methodName): self.methodName = methodName def __get__(self, oself, cls): oself._load(self.methodName) return getattr(super(oself.__class__, oself), self.methodName) class LazyList(list): def _load(self, name): print 'Loading data for %s...' % (name,) for methodName in set(dir(list)) - set(dir(object)): locals()[methodName] = override(methodName) You probably don't want to use dir() in real life, but a suitable fixed list of strings could work as a substitute.
Single Table Inheritance in Django
Is there explicit support for Single Table Inheritance in Django? Last I heard, the feature was still under development and debate. Are there libraries/hacks I can use in the meantime to capture the basic behavior? I have a hierarchy that mixes different objects. The canonical example of a corporation structure with an Employee class, subclasses for types of employees, and a manager_id (parent_id) would be a good approximation of the problem I am solving. In my case, I would like to represent the idea that an employee can manage other employees while being managed by a different employee. There are not separate classes for Manager and Worker, which makes this hard to spread across tables. Subclasses would represent types of employees (programmers, accountants, sales, etc.) and would be independent of who supervises whom (OK, I guess it's no longer a typical corporation in some respect).
There are currently two forms of inheritance in Django - MTI (model table inheritance) and ABC (abstract base classes). I wrote a tutorial on what's going on under the hood. You can also reference the official docs on model inheritance.
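As a taste of the ABC flavour, a minimal sketch loosely based on your corporate example (field names are only illustrative):

from django.db import models

class Employee(models.Model):
    # common fields live here; no table is created for Employee itself
    name = models.CharField(max_length=100)

    class Meta:
        abstract = True

class Programmer(Employee):
    # gets its own table containing name plus its own fields
    manager = models.ForeignKey('self', null=True, blank=True)
    language = models.CharField(max_length=50)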
Python snippet to remove C and C++ comments
I'm looking for Python code that removes C and C++ comments from a string. (Assume the string contains an entire C source file.) I realize that I could .match() substrings with a Regex, but that doesn't solve nesting /*, or having a // inside a /* */. Ideally, I would prefer a non-naive implementation that properly handles awkward cases.
This handles C++-style comments, C-style comments, strings and simple nesting thereof. import re def comment_remover(text): def replacer(match): s = match.group(0) if s.startswith('/'): return " " # note: a space and not an empty string else: return s pattern = re.compile( r'//.*?$|/\*.*?\*/|\'(?:\\.|[^\\\'])*\'|"(?:\\.|[^\\"])*"', re.DOTALL | re.MULTILINE ) return re.sub(pattern, replacer, text) Strings need to be included, because comment markers inside them do not start a comment. Edit: re.sub didn't take any flags, so I had to compile the pattern first. Edit2: Added character literals, since they could contain quotes that would otherwise be recognized as string delimiters. Edit3: Fixed the case where a legal expression int/**/x=5; would become intx=5; which would not compile, by replacing the comment with a space rather than an empty string.
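Used like so, as a quick check of the tricky cases:

code = '''
int /* a block
comment */ x = 5; // a line comment
char *s = "not // a comment";
'''
print comment_remover(code)
# both comments collapse to a single space; the string literal survives intact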
SUDS - programmatic access to methods and types
I'm investigating SUDS as a SOAP client for Python. I want to inspect the methods available from a specified service, and the types required by a specified method. The aim is to generate a user interface, allowing users to select a method, then fill in values in a dynamically generated form. I can get some information on a particular method, but am unsure how to parse it: client = Client(url) method = client.sd.service.methods['MyMethod'] I am unable to programmatically figure out what object type I need to create to be able to call the service: obj = client.factory.create('?') res = client.service.MyMethod(obj, soapheaders=authen) Does anyone have some sample code?
Okay, so SUDS does quite a bit of magic. A suds.client.Client is built from a WSDL file: client = suds.client.Client("http://mssoapinterop.org/asmx/simple.asmx?WSDL") It downloads the WSDL and creates a definition in client.wsdl. When you call a method using SUDS via client.service.<method> it's actually doing a whole lot of recursive resolve magic behind the scenes against that interpreted WSDL. To discover the parameters and types for methods you'll need to introspect this object. For example: for method in client.wsdl.services[0].ports[0].methods.values(): print '%s(%s)' % (method.name, ', '.join('%s: %s' % (part.type, part.name) for part in method.soap.input.body.parts)) This should print something like: echoInteger((u'int', http://www.w3.org/2001/XMLSchema): inputInteger) echoFloatArray((u'ArrayOfFloat', http://soapinterop.org/): inputFloatArray) echoVoid() echoDecimal((u'decimal', http://www.w3.org/2001/XMLSchema): inputDecimal) echoStructArray((u'ArrayOfSOAPStruct', http://soapinterop.org/xsd): inputStructArray) echoIntegerArray((u'ArrayOfInt', http://soapinterop.org/): inputIntegerArray) echoBase64((u'base64Binary', http://www.w3.org/2001/XMLSchema): inputBase64) echoHexBinary((u'hexBinary', http://www.w3.org/2001/XMLSchema): inputHexBinary) echoBoolean((u'boolean', http://www.w3.org/2001/XMLSchema): inputBoolean) echoStringArray((u'ArrayOfString', http://soapinterop.org/): inputStringArray) echoStruct((u'SOAPStruct', http://soapinterop.org/xsd): inputStruct) echoDate((u'dateTime', http://www.w3.org/2001/XMLSchema): inputDate) echoFloat((u'float', http://www.w3.org/2001/XMLSchema): inputFloat) echoString((u'string', http://www.w3.org/2001/XMLSchema): inputString) So the first element of the part's type tuple is probably what you're after: >>> client.factory.create(u'ArrayOfInt') (ArrayOfInt){ _arrayType = "" _offset = "" _id = "" _href = "" _arrayType = "" } Update: For the Weather service it appears that the "parameters" are a part with an element, not a type: >>> client = suds.client.Client('http://www.webservicex.net/WeatherForecast.asmx?WSDL') >>> client.wsdl.services[0].ports[0].methods.values()[0].soap.input.body.parts[0].element (u'GetWeatherByZipCode', http://www.webservicex.net) >>> client.factory.create(u'GetWeatherByZipCode') (GetWeatherByZipCode){ ZipCode = None } But this is magic'd into the parameters of the method call (a la client.service.GetWeatherByZipCode("12345")). IIRC this is SOAP RPC binding style? I think there's enough information here to get you started. Hint: the Python command line interface is your friend!
OpenGL with Python
I am currently in a course that is using OpenGL and I have been using C for all the programs so far. I have Python installed on Fedora as well as OpenGL; however, the minute I call an OpenGL command in my Python code, I get a segmentation fault. I have no idea why this is. Just to avoid the "just use C" comments, here is why I want to use Python: There are a couple of reasons I want to switch from C to Python, but the main one is because we are about to start writing a raytracer and I would like to use classes to make it easier on me. Since I hate classes in C++ and structs in C seem a little crazy, I thought I would give Python a try at it. I have also been looking for a reason to use Python again as it has been a while. Thanks for any help.
You may also want to consider using Pyglet instead of PyOpenGL. It's a ctypes-wrapper around the native OpenGL libs on the local platform, along with windowing support (should handle most of the stuff you want to use GLUT for.) The pyglet-users list is pretty active and very helpful.
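A minimal Pyglet sketch that opens a window and issues raw OpenGL calls (assuming Pyglet 1.x):

import pyglet
from pyglet.gl import *

window = pyglet.window.Window(640, 480, caption='Raytracer preview')

@window.event
def on_draw():
    glClear(GL_COLOR_BUFFER_BIT)
    glBegin(GL_TRIANGLES)
    glColor3f(1.0, 0.0, 0.0)
    glVertex2f(0.0, 0.0)
    glVertex2f(640.0, 0.0)
    glVertex2f(320.0, 480.0)
    glEnd()

pyglet.app.run()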
Starting python debugger automatically on error
This is a question I have wondered about for quite some time, yet I have never found a suitable solution. If I run a script and I come across, let's say an IndexError, python prints the line, location and quick description of the error and exits. Is it possible to automatically start pdb when an error is encountered? I am not against having an extra import statement at the top of the file, nor a few extra lines of code.
python -m pdb myscript.py You'll need to enter 'c' (for Continue) when execution begins, but then it will run to the error point and give you control there.
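If you'd rather have the script itself drop into the debugger on any uncaught exception, a small sketch using sys.excepthook:

import sys
import traceback
import pdb

def info(type, value, tb):
    # print the traceback as usual, then start a post-mortem session
    traceback.print_exception(type, value, tb)
    pdb.post_mortem(tb)

sys.excepthook = info

With this at the top of the file, an IndexError will land you at a pdb prompt right where it was raised.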