How do I dump an entire Python process for later debugging inspection?
I have a Python application in a strange state. I don't want to do live debugging of the process. Can I dump it to a file and examine its state later? I know I've restored corefiles of C programs in gdb later, but I don't know how to examine a Python application in a useful way from gdb. (This is a variation on my question about debugging memleaks in a production system.)
There is no built-in way other than aborting (with os.abort(), causing a core dump if resource limits allow it) -- although you can certainly build your own 'dump' function that dumps relevant information about the data you care about. There are no ready-made tools for it.

As for handling the core file of a Python process, the Python source has a gdbinit file that contains useful macros. It's still a lot more painful than somehow getting into the process itself (with pdb or the interactive interpreter), but it makes life a little easier.
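A minimal sketch of the abort route, assuming a Unix-like system where core dumps go to the current directory (resource limits and the dump location vary by OS and configuration):

import os
import resource

# Raise the core-file size limit so the dump isn't truncated or suppressed
resource.setrlimit(resource.RLIMIT_CORE,
                   (resource.RLIM_INFINITY, resource.RLIM_INFINITY))

# ... once the process reaches the strange state ...
os.abort()  # raises SIGABRT, leaving a core file you can load into gdb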
Python: How to make a cross-module variable?
The __debug__ variable is handy in part because it affects every module. If I want to create another variable that works the same way, how would I do it? The variable (let's be original and call it 'foo') doesn't have to be truly global, in the sense that if I change foo in one module, it is updated in others. I'd be fine if I could set foo before importing other modules and then they would see the same value for it.
If you need a global cross-module variable, maybe a simple module-level global variable will suffice.

a.py:

var = 1

b.py:

import a
print a.var
import c
print a.var

c.py:

import a
a.var = 2

Test:

$ python b.py
# -> 1 2

Real-world example: Django's global_settings.py (though in Django apps settings are used by importing the object django.conf.settings).
Does Python have a bitfield type?
I need a compact representation of an array of booleans. Does Python have a built-in bitfield type, or will I need to find an alternate solution?
If you mainly want to be able to name your bit fields and easily manipulate them, e.g. to work with flags represented as single bits in a communications protocol, then you can use the standard Structure and Union features of ctypes, as described at How Do I Properly Declare a ctype Structure + Union in Python? - Stack Overflow.

For example, to work with the 4 least-significant bits of a byte individually, just name them from least to most significant in a LittleEndianStructure. You use a union to provide access to the same data as a byte or int so you can move the data in or out of the communication protocol. In this case that is done via the flags.asbyte field:

import ctypes

c_uint8 = ctypes.c_uint8

class Flags_bits(ctypes.LittleEndianStructure):
    _fields_ = [
        ("logout", c_uint8, 1),
        ("userswitch", c_uint8, 1),
        ("suspend", c_uint8, 1),
        ("idle", c_uint8, 1),
    ]

class Flags(ctypes.Union):
    _fields_ = [("b", Flags_bits),
                ("asbyte", c_uint8)]

flags = Flags()
flags.asbyte = 0xc

print(flags.b.idle)
print(flags.b.suspend)
print(flags.b.userswitch)
print(flags.b.logout)

The four bits (which I've printed here starting with the most significant, which seems more natural when printing) are 1, 1, 0, 0, i.e. 0xc in binary.
Drag and drop onto Python script in Windows Explorer
I would like to drag and drop my data file onto a Python script and have it process the file and generate output. The Python script accepts the name of the data file as a command-line parameter, but Windows Explorer doesn't allow the script to be a drop target. Is there some kind of configuration that needs to be done somewhere for this to work?
Sure. From a mindless technology article called "Make Python Scripts Droppable in Windows", you can add a drop handler by adding a registry key.

Here's a registry import file that you can use to do this. Copy the following into a .reg file and run it (make sure that your .py extensions are mapped to Python.File):

Windows Registry Editor Version 5.00

[HKEY_CLASSES_ROOT\Python.File\shellex\DropHandler]
@="{60254CA5-953B-11CF-8C96-00AA00B8708C}"

This makes Python scripts use the WSH drop handler, which is compatible with long filenames. To use the short-filename handler, replace the GUID with 86C86720-42A0-1069-A2E8-08002B30309D.

A comment in that post indicates that one can enable dropping on "no console Python files (.pyw)" or "compiled Python files (.pyc)" by using the Python.NoConFile and Python.CompiledFile classes.
Has anyone found a good set of python plugins for vim -- specifically module completion?
I'm looking for a suite of plugins that can help me finally switch over to vim full-time. Right now I'm using Komodo with some good success, but their vim bindings have enough little errors that I'm tired of it. What I do love in Komodo, though, is the code completion. So, here's what I'm looking for (ordered by importance):

1. Code completion, meaning: the ability to complete modules/functions/etc. in any module that's on the pythonpath, not just system modules. Bonus points for showing docstrings when completing.
2. Jump-to a class definition. I'm guessing CTAGS will do this, so how do you all manage automatically updating your tags files?
3. Project-type management for managing buffers: ideally the ability to grep for a filename in a directory structure to open it. Bonus for showing an index of class definitions while a buffer is open.
4. Bzr integration. Not super important, since most of it I can just drop to the shell to do.
Here you can find some info about this. It covers code completion and having a list of classes and functions in open files. I haven't got around to doing a full configuration for vim, since I don't use Python primarily, but I have the same interest in turning vim into a better Python IDE.

Edit: The original site is down, so I found it saved on the web archive.
Python PostgreSQL modules. Which is best?
I've seen a number of PostgreSQL modules for Python, like pygresql, pypgsql and psycopg. Most of them are Python DB API 2.0 compliant; some are not being actively developed anymore. Which module do you recommend? Why?
psycopg2 seems to be the most popular. I've never had any trouble with it. There's actually a pure Python interface for PostgreSQL too, called bpgsql. I wouldn't recommend it over psycopg2, but it's recently become capable enough to support Django and is useful if you can't compile C modules.
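For reference, a minimal DB API 2.0 session with psycopg2 might look like this (the connection parameters are placeholders):

import psycopg2

conn = psycopg2.connect(host='localhost', dbname='mydb',
                        user='me', password='secret')
cur = conn.cursor()
cur.execute("SELECT 1 + %s", (41,))  # DB API 2.0 parameter binding
print(cur.fetchone()[0])             # -> 42
cur.close()
conn.close()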
Calling C/C++ from python?
What would be the quickest way to construct a Python binding to a C or C++ library? (Using Windows, if this matters.)
I like ctypes a lot; swig always tended to give me problems. Also, ctypes has the advantage that you don't need to satisfy any compile-time dependency on Python, and your binding will work on any Python that has ctypes, not just the one it was compiled against.

Suppose you have a simple C++ example class you want to talk to in a file called foo.cpp:

#include <iostream>

class Foo{
    public:
        void bar(){
            std::cout << "Hello" << std::endl;
        }
};

Since ctypes can only talk to C functions, you need to provide those, declaring them as extern "C":

extern "C" {
    Foo* Foo_new(){ return new Foo(); }
    void Foo_bar(Foo* foo){ foo->bar(); }
}

Next you have to compile this to a shared library:

g++ -c -fPIC foo.cpp -o foo.o
g++ -shared -Wl,-soname,libfoo.so -o libfoo.so foo.o

And finally you have to write your Python wrapper (e.g. in fooWrapper.py):

from ctypes import cdll
lib = cdll.LoadLibrary('./libfoo.so')

class Foo(object):
    def __init__(self):
        self.obj = lib.Foo_new()

    def bar(self):
        lib.Foo_bar(self.obj)

Once you have that you can call it like:

f = Foo()
f.bar()  # and you will see "Hello" on the screen
Text difference algorithm
I need an algorithm that can compare two text files, highlight their differences, and (even better!) compute their difference in a meaningful way (i.e. two similar files should have a higher similarity score than two dissimilar files, with the word "similar" defined in the usual terms). It sounds easy to implement, but it's not. The implementation can be in C# or Python. Thanks.
I can recommend taking a look at Neil Fraser's code and articles:

google-diff-match-patch

Currently available in Java, JavaScript, C++ and Python. Regardless of language, each library features the same API and the same functionality. All versions also have comprehensive test harnesses.

Neil Fraser: Diff Strategies - for theory and implementation notes
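If a rough similarity score is all you need, the standard library's difflib already provides one; a minimal sketch:

import difflib

text1 = "the quick brown fox"
text2 = "the quick brown dog"

# ratio() returns a similarity measure in [0, 1]; 1.0 means identical
print(difflib.SequenceMatcher(None, text1, text2).ratio())

# unified_diff shows line-level differences between two texts
diff = difflib.unified_diff([text1], [text2], lineterm='')
print('\n'.join(diff))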
How does one do the equivalent of "import * from module" with Python's __import__ function?
Given a string with a module name, how do you import everything in the module as if you had called:

from module import *

i.e. given string S = "module", how does one get the equivalent of the following:

__import__(S, fromlist="*")

This doesn't seem to perform as expected (as it doesn't import anything). Thanks!
Please reconsider. The only thing worse than import * is magic import *.

If you really want to:

m = __import__(S)
try:
    attrlist = m.__all__
except AttributeError:
    attrlist = dir(m)
for attr in attrlist:
    globals()[attr] = getattr(m, attr)
Debug Pylons application through Eclipse
I have Eclipse set up with PyDev and love being able to debug my scripts/apps. I've just started playing around with Pylons and was wondering if there is a way to start up the paster server through Eclipse so I can debug my webapp?
Create a new launch configuration (Python Run).

Main tab:
- Use paster-script.py as the main module (you can find it in the Scripts sub-directory of your Python installation directory).
- Don't forget to add the root folder of your application to the PYTHONPATH zone.

Arguments tab:
- Set the base directory to the root folder as well.
- As Program Arguments, use "serve development.ini" (or whatever you use to debug your app).

Common tab:
- Check "allocate console" and "launch in background".
How do I manipulate bits in Python?
In C I could, for example, zero out bit #10 in a 32-bit unsigned value like so:

unsigned long value = 0xdeadbeef;
value &= ~(1 << 10);

How do I do that in Python?
Bitwise operations on Python ints work much like in C. The &, | and ^ operators in Python work just like in C. The ~ operator works as for a signed integer in C; that is, ~x computes -x-1. You have to be somewhat careful with left shifts, since Python integers aren't fixed-width. Use bit masks to obtain the low-order bits. For example, to do the equivalent of a left shift on a 32-bit integer, use (x << 5) & 0xffffffff.
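The direct translation of the C snippet from the question:

value = 0xdeadbeef
value &= ~(1 << 10)              # zero out bit #10, just as in C
print(hex(value & 0xffffffff))   # mask to 32 bits; Python ints are unbounded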
Character reading from file in Python
In a text file, there is a string "I don't like this". However, when I read it into a string, it becomes "I don\xe2\x80\x98t like this". I understand that \u2018 is the unicode representation of "'". I use

f1 = open(file1, "r")
text = f1.read()

to do the reading. Now, is it possible to read the string in such a way that, when it is read into the string, it is "I don't like this", instead of "I don\xe2\x80\x98t like this"?

Second edit: I have seen some people use mapping to solve this problem, but really, is there no built-in conversion that does this kind of ANSI to Unicode (and vice versa) conversion?
Ref: http://docs.python.org/howto/unicode

Reading Unicode from a file is therefore simple:

import codecs
f = codecs.open('unicode.rst', encoding='utf-8')
for line in f:
    print repr(line)

It's also possible to open files in update mode, allowing both reading and writing:

f = codecs.open('test', encoding='utf-8', mode='w+')
f.write(u'\u4500 blah blah blah\n')
f.seek(0)
print repr(f.readline()[:1])
f.close()

EDIT: I'm assuming that your intended goal is just to be able to read the file properly into a string in Python. If you're trying to convert to an ASCII string from Unicode, then there's really no direct way to do so, since the Unicode characters won't necessarily exist in ASCII.

If you're trying to convert to an ASCII string, try one of the following:

1. Replace the specific Unicode chars with ASCII equivalents, if you are only looking to handle a few special cases such as this particular example.
2. Use the unicodedata module's normalize() and the string.encode() method to convert as best you can to the next closest ASCII equivalent (Ref https://web.archive.org/web/20090228203858/http://techxplorer.com/2006/07/18/converting-unicode-to-ascii-using-python):

>>> teststr
u'I don\xe2\x80\x98t like this'
>>> unicodedata.normalize('NFKD', teststr).encode('ascii', 'ignore')
'I donat like this'
In Django is there a way to display choices as checkboxes?
In the admin interface and newforms there is the brilliant helper of being able to define choices. You can use code like this:

APPROVAL_CHOICES = (
    ('yes', 'Yes'),
    ('no', 'No'),
    ('cancelled', 'Cancelled'),
)

client_approved = models.CharField(choices=APPROVAL_CHOICES)

to create a drop-down box in your form and force the user to choose one of those options. I'm just wondering if there is a way to define a set of choices where multiple can be chosen using checkboxes? (It would also be nice to be able to say that the user can select a maximum number of them.) It seems like a feature that is probably implemented; I just can't seem to find it in the documentation.
In terms of the forms library, you would use the MultipleChoiceField field with a CheckboxSelectMultiple widget to do that. You could validate the number of choices which were made by writing a validation method for the field:

class MyForm(forms.Form):
    my_field = forms.MultipleChoiceField(choices=SOME_CHOICES,
                                         widget=forms.CheckboxSelectMultiple())

    def clean_my_field(self):
        if len(self.cleaned_data['my_field']) > 3:
            raise forms.ValidationError('Select no more than 3.')
        return self.cleaned_data['my_field']

To get this in the admin application, you'd need to customise a ModelForm and override the form used in the appropriate ModelAdmin.
Preserving signatures of decorated functions
Suppose I have written a decorator that does something very generic. For example, it might convert all arguments to a specific type, perform logging, implement memoization, etc. Here is an example:

def args_as_ints(f):
    def g(*args, **kwargs):
        args = [int(x) for x in args]
        kwargs = dict((k, int(v)) for k, v in kwargs.items())
        return f(*args, **kwargs)
    return g

@args_as_ints
def funny_function(x, y, z=3):
    """Computes x*y + 2*z"""
    return x*y + 2*z

>>> funny_function("3", 4.0, z="5")
22

Everything well so far. There is one problem, however. The decorated function does not retain the documentation of the original function:

>>> help(funny_function)
Help on function g in module __main__:

g(*args, **kwargs)

Fortunately, there is a workaround:

def args_as_ints(f):
    def g(*args, **kwargs):
        args = [int(x) for x in args]
        kwargs = dict((k, int(v)) for k, v in kwargs.items())
        return f(*args, **kwargs)
    g.__name__ = f.__name__
    g.__doc__ = f.__doc__
    return g

@args_as_ints
def funny_function(x, y, z=3):
    """Computes x*y + 2*z"""
    return x*y + 2*z

This time, the function name and documentation are correct:

>>> help(funny_function)
Help on function funny_function in module __main__:

funny_function(*args, **kwargs)
    Computes x*y + 2*z

But there is still a problem: the function signature is wrong. The information "*args, **kwargs" is next to useless.

What to do? I can think of two simple but flawed workarounds:

1 -- Include the correct signature in the docstring:

def funny_function(x, y, z=3):
    """funny_function(x, y, z=3) -- computes x*y + 2*z"""
    return x*y + 2*z

This is bad because of the duplication. The signature will still not be shown properly in automatically generated documentation. It's easy to update the function and forget about changing the docstring, or to make a typo. [And yes, I'm aware of the fact that the docstring already duplicates the function body. Please ignore this; funny_function is just a random example.]

2 -- Not use a decorator, or use a special-purpose decorator for every specific signature:

def funny_functions_decorator(f):
    def g(x, y, z=3):
        return f(int(x), int(y), z=int(z))
    g.__name__ = f.__name__
    g.__doc__ = f.__doc__
    return g

This works fine for a set of functions that have identical signatures, but it's useless in general. As I said in the beginning, I want to be able to use decorators entirely generically.

I'm looking for a solution that is fully general, and automatic. So the question is: is there a way to edit the decorated function signature after it has been created? Otherwise, can I write a decorator that extracts the function signature and uses that information instead of "*args, **kwargs" when constructing the decorated function? How do I extract that information? How should I construct the decorated function -- with exec? Any other approaches?
Install the decorator module:

$ pip install decorator

Adapt the definition of args_as_ints():

import decorator

@decorator.decorator
def args_as_ints(f, *args, **kwargs):
    args = [int(x) for x in args]
    kwargs = dict((k, int(v)) for k, v in kwargs.items())
    return f(*args, **kwargs)

@args_as_ints
def funny_function(x, y, z=3):
    """Computes x*y + 2*z"""
    return x*y + 2*z

print funny_function("3", 4.0, z="5")
# 22
help(funny_function)
# Help on function funny_function in module __main__:
#
# funny_function(x, y, z=3)
#     Computes x*y + 2*z

Python 3.4+

functools.wraps() from the stdlib preserves signatures since Python 3.4:

import functools

def args_as_ints(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        args = [int(x) for x in args]
        kwargs = dict((k, int(v)) for k, v in kwargs.items())
        return func(*args, **kwargs)
    return wrapper

@args_as_ints
def funny_function(x, y, z=3):
    """Computes x*y + 2*z"""
    return x*y + 2*z

print(funny_function("3", 4.0, z="5"))
# 22
help(funny_function)
# Help on function funny_function in module __main__:
#
# funny_function(x, y, z=3)
#     Computes x*y + 2*z

functools.wraps() has been available at least since Python 2.5, but it does not preserve the signature there:

help(funny_function)
# Help on function funny_function in module __main__:
#
# funny_function(*args, **kwargs)
#     Computes x*y + 2*z

Notice: *args, **kwargs instead of x, y, z=3.
Caching in urllib2?
Is there an easy way to cache things when using urllib2 that I am over-looking, or do I have to roll my own?
If you don't mind working at a slightly lower level, httplib2 (http://code.google.com/p/httplib2/) is an excellent HTTP library that includes caching functionality.
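A minimal sketch of httplib2's caching (the cache directory name is arbitrary):

import httplib2

h = httplib2.Http('.http_cache')  # passing a directory enables on-disk caching
response, content = h.request('http://example.com/')
# Repeat requests are served from the cache when the response headers allow it
print(response.status)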
Keeping GUIs responsive during long-running tasks
Keeping the GUI responsive while the application does some CPU-heavy processing is one of the challenges of effective GUI programming. Here's a good discussion of how to do this in wxPython. To summarize, there are 3 ways:

1. Use threads
2. Use wxYield
3. Chunk the work and do it in the IDLE event handler

Which method have you found to be the most effective? Techniques from other frameworks (like Qt, GTK or Windows API) are also welcome.
Threads. They're what I always go for, because you can use them in every framework you need. And once you're used to multi-threading and parallel processing in one language/framework, you're good on all frameworks.
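As an illustration in wxPython (a sketch only; long_task is made up), the worker thread does the heavy lifting and hands the result back to the GUI thread with wx.CallAfter, since widgets must only be touched from the main thread:

import threading
import wx

def long_task(frame):
    result = sum(range(10**7))  # stand-in for CPU-heavy work
    # wx.CallAfter schedules the status update on the GUI thread
    wx.CallAfter(frame.SetStatusText, "Done: %d" % result)

app = wx.App(False)
frame = wx.Frame(None, title="Worker demo")
frame.CreateStatusBar()
frame.Show()
threading.Thread(target=long_task, args=(frame,)).start()
app.MainLoop()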
BeautifulSoup's Python 3 compatibility
Does BeautifulSoup work with Python 3? If not, how soon will there be a port? Will there be a port at all? Google doesn't turn up anything for me (maybe it's 'cos I'm looking for the wrong thing?).
Beautiful Soup 4.x officially supports Python 3:

pip install beautifulsoup4
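A quick check that it works on Python 3 (note the package imports as bs4):

from bs4 import BeautifulSoup

soup = BeautifulSoup("<html><body><p>Hello</p></body></html>", "html.parser")
print(soup.p.text)  # -> Hello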
How do you break into the debugger from Python source code?
What do you insert into Python source code to have it break into pdb (when execution gets to that spot)?
import pdb; pdb.set_trace()

See Python: Coding in the Debugger for Beginners for this and more helpful hints.
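On Python 3.7 and later there is also the built-in breakpoint() function, which calls pdb.set_trace() by default (and can be redirected via the PYTHONBREAKPOINT environment variable):

breakpoint()  # Python 3.7+: drops into pdb at this spot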
How do I calculate the number of days between two dates using Python?
If I have two dates (ex. '8/18/2008' and '9/26/2008') what is the best way to get the difference measured in days?
If you have two date objects, you can just subtract them:

from datetime import date

d0 = date(2008, 8, 18)
d1 = date(2008, 9, 26)
delta = d1 - d0
print delta.days

The relevant section of the docs: https://docs.python.org/library/datetime.html
What's the canonical way to check for type in Python?
What is the best way to check whether a given object is of a given type? How about checking whether the object inherits from a given type? Let's say I have an object o. How do I check whether it's a str?
To check if the type of o is exactly str:

type(o) is str

To check if o is an instance of str or any subclass of str (this would be the "canonical" way):

isinstance(o, str)

The following also work, and can be useful in some cases:

issubclass(type(o), str)
type(o) in ([str] + str.__subclasses__())

See Built-in Functions in the Python Library Reference for relevant information.

One more note: in this case, you may actually want to use:

isinstance(o, basestring)

because this will also catch Unicode strings (unicode is not a subclass of str; both str and unicode are subclasses of basestring). Alternatively, isinstance accepts a tuple of classes. This will return True if o is an instance of any subclass of any of (str, unicode):

isinstance(o, (str, unicode))
How to iterate over a timespan after days, hours, weeks and months in Python?
How do I iterate over a timespan in steps of days, hours, weeks or months? Something like:

for date in foo(from_date, to_date, delta=HOURS):
    print date

where foo is a function returning an iterator. I've been looking at the calendar module, but that only works for one specific year or month, not between dates.
Use dateutil and its rrule implementation, like so:

from dateutil import rrule
from datetime import datetime, timedelta

now = datetime.now()
hundredDaysLater = now + timedelta(days=100)

for dt in rrule.rrule(rrule.MONTHLY, dtstart=now, until=hundredDaysLater):
    print dt

Output is:

2008-09-30 23:29:54
2008-10-30 23:29:54
2008-11-30 23:29:54
2008-12-30 23:29:54

Replace MONTHLY with any of YEARLY, MONTHLY, WEEKLY, DAILY, HOURLY, MINUTELY, or SECONDLY. Replace dtstart and until with whatever datetime objects you want.

This recipe has the advantage of working in all cases, including MONTHLY. The only caveat I could find is that if you pass a day number that doesn't exist for all months, it skips those months.
Python GUI Application redistribution
I need to develop a small-medium sized desktop GUI application, preferably with Python as a language of choice because of time constraints. What GUI library choices do I have which allow me to redistribute my application standalone, assuming that the users don't have a working Python installation and obviously don't have the GUI libraries I'm using either? Also, how would I go about packaging everything up in binaries of reasonable size for each target OS? (my main targets are Windows and Mac OS X) Addition: I've been looking at WxPython, but I've found plenty of horror stories of packaging it with cx_freeze and getting 30mb+ binaries, and no real advice on how to actually do the packaging and how trust-worthy it is.
This may help: http://stackoverflow.com/questions/49146/what-is-the-best-way-to-make-an-exe-file-from-a-python-program
How to avoid .pyc files?
Can I run the python interpreter without generating the compiled .pyc files?
From "What’s New in Python 2.6 - Interpreter Changes": Python can now be prevented from writing .pyc or .pyo files by supplying the -B switch to the Python interpreter, or by setting the PYTHONDONTWRITEBYTECODE environment variable before running the interpreter. This setting is available to Python programs as the sys.dont_write_bytecode variable, and Python code can change the value to modify the interpreter’s behaviour. Update 2010-11-27: Python 3.2 addresses the issue of cluttering source folders with .pyc files by introducing a special __pycache__ subfolder, see What's New in Python 3.2 - PYC Repository Directories.
Is timsort general-purpose or Python-specific?
Timsort is an adaptive, stable, natural mergesort. It has supernatural performance on many kinds of partially ordered arrays (less than lg(N!) comparisons needed, and as few as N-1), yet as fast as Python's previous highly tuned samplesort hybrid on random arrays. Have you seen timsort used outside of CPython? Does it make sense?
Yes, it makes quite a bit of sense to use timsort outside of CPython, in specific, or Python, in general. There is currently an effort underway to replace Java's "modified merge sort" with timsort, and the initial results are quite positive.
Python module for wiki markup
Is there a Python module for converting wiki markup to other languages (e.g. HTML)? A similar question was asked here, What's the easiest way to convert wiki markup to html, but no Python modules are mentioned. Just curious. :) Cheers.
mwlib provides ways of converting MediaWiki formatted text into HTML, PDF, DocBook and OpenOffice formats.
Python Dependency Injection Framework
Is there a framework equivalent to Guice (http://code.google.com/p/google-guice) for Python?
Spring Python is an offshoot of the Java-based Spring Framework and Spring Security, targeted for Python. This project currently contains the following features:

- Inversion of Control (dependency injection) - use either classic XML or the Python @Object decorator (similar to the Spring JavaConfig subproject) to wire things together. While the @Object format isn't identical to the Guice style (centralized wiring vs. wiring information in each class), it is a valuable way to wire your Python app.
- Aspect-Oriented Programming - apply interceptors in a horizontal programming paradigm (instead of vertical OOP inheritance) for things like transactions, security, and caching.
- DatabaseTemplate - reading from the database requires a monotonous cycle of opening cursors, reading rows, and closing cursors, along with exception handlers. With this template class, all you need is the SQL query and a row-handling function. Spring Python does the rest.
- Database Transactions - wrapping multiple database calls with transactions can make your code hard to read. This module provides multiple ways to define transactions without making things complicated.
- Security - plug in security interceptors to lock down access to your methods, utilizing both authentication and domain authorization.
- Remoting - it is easy to convert your local application into a distributed one. If you have already built your client and server pieces using the IoC container, then going from local to distributed is just a configuration change.
- Samples - to help demonstrate various features of Spring Python, some sample applications have been created:
  - PetClinic - Spring Framework's sample web app, rebuilt from the ground up using Python web containers including CherryPy. Go check it out for an example of how to use this framework. (NOTE: other Python web frameworks will be added to this list in the future.)
  - Spring Wiki - wikis are powerful ways to store and manage content, so we created a simple one as a demo!
  - Spring Bot - use Spring Python to build a tiny bot to manage the IRC channel of your open source project.
Get timer ticks in Python
I'm just trying to time a piece of code. The pseudocode looks like:

start = get_ticks()
do_long_code()
print "It took " + (get_ticks() - start) + " seconds."

How does this look in Python? More specifically, how do I get the number of ticks since midnight (or however Python organizes that timing)?
In the time module, there are two timing functions: time and clock. time gives you "wall" time, if this is what you care about.

However, the Python docs say that clock should be used for benchmarking. Note that clock behaves differently on different systems:

On MS Windows, it uses the Win32 function QueryPerformanceCounter(), with "resolution typically better than a microsecond". It has no special meaning; it's just a number (it starts counting the first time you call clock in your process).

# ms windows
t0 = time.clock()
do_something()
t = time.clock() - t0
# t is wall seconds elapsed (floating point)

On *nix, clock reports CPU time. Now, this is different, and most probably the value you want, since your program hardly ever is the only process requesting CPU time (even if you have no other processes, the kernel uses CPU time now and then). So, this number, which typically is smaller¹ than the wall time (i.e. time.time() - t0), is more meaningful when benchmarking code:

# linux
t0 = time.clock()
do_something()
t = time.clock() - t0
# t is CPU seconds elapsed (floating point)

Apart from all that, the timeit module has the Timer class that is supposed to use what's best for benchmarking from the available functionality.

¹ unless threading gets in the way
² Python ≥ 3.3: there are time.perf_counter() and time.process_time(); perf_counter is what the timeit module uses.
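For completeness, the footnoted Python 3.3+ functions replace clock entirely; a minimal example:

import time

def do_something():
    sum(range(10**6))  # stand-in workload

t0 = time.perf_counter()         # monotonic wall-clock timer for benchmarks
do_something()
print(time.perf_counter() - t0)  # elapsed wall seconds

t0 = time.process_time()         # CPU time of the current process
do_something()
print(time.process_time() - t0)  # elapsed CPU seconds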
Emacs and Python
I recently started learning Emacs. I went through the tutorial, read some introductory articles; so far, so good. Now I want to use it for Python development. From what I understand, there are two separate Python modes for Emacs: python-mode.el, which is part of the Python project, and python.el, which is part of Emacs 22. I read all the information I could find, but most of it seems fairly outdated and I'm still confused.

The questions:

- What is the difference between them?
- Which mode should I install and use?
- Are there other Emacs add-ons that are essential for Python development?

Relevant links:

- EmacsEditor @ wiki.python.org
- PythonMode @ emacswiki.org
If you are using GNU Emacs 21 or before, or XEmacs, use python-mode.el. The GNU Emacs 22 python.el won't work on them. On GNU Emacs 22, python.el does work, and ties in better with GNU Emacs's own symbol parsing and completion, ElDoc, etc. I use XEmacs myself, so I don't use it, and I have heard people complain that it didn't work very nicely in the past, but there are updates available that fix some of the issues (for instance, on the emacswiki page you link), and you would hope some were integrated upstream by now. If I were the GNU Emacs kind, I would use python.el until I found specific reasons not to.

The python-mode.el's single biggest problem as far as I've seen is that it doesn't quite understand triple-quoted strings. It treats them as single-quoted, meaning that a single quote inside a triple-quoted string will throw off the syntax highlighting: it'll think the string has ended there. You may also need to change your auto-mode-alist to turn on python-mode for .py files; I don't remember if that's still the case, but my init.el has been setting auto-mode-alist for many years now.

As for other addons, nothing I would consider 'essential'. XEmacs's func-menu is sometimes useful; it gives you a little function/class browser menu for the current file. I don't remember if GNU Emacs has anything similar. I have a rst-mode for reStructuredText editing, as that's used in some projects. Tying into whatever VC you use, if any, may be useful to you, but there is builtin support for most and easily downloaded .el files for the others.
Most pythonic way of counting matching elements in something iterable
I have an iterable of entries on which I would like to gather some simple statistics, say the count of all numbers divisible by two and the count of all numbers divisible by three.

My first alternative, while only iterating through the list once and avoiding the list expansion (and keeping the split loop refactoring in mind), looks rather bloated:

(alt 1)

r = xrange(1, 10)

twos = 0
threes = 0

for v in r:
    if v % 2 == 0:
        twos += 1
    if v % 3 == 0:
        threes += 1

print twos
print threes

This looks rather nice, but has the drawback of expanding the expression to a list:

(alt 2)

r = xrange(1, 10)

print len([1 for v in r if v % 2 == 0])
print len([1 for v in r if v % 3 == 0])

What I would really like is a function like this:

(alt 3)

def count(iterable):
    n = 0
    for i in iterable:
        n += 1
    return n

r = xrange(1, 10)

print count(1 for v in r if v % 2 == 0)
print count(1 for v in r if v % 3 == 0)

But this looks a lot like something that could be done without a function. The final variant is this:

(alt 4)

r = xrange(1, 10)

print sum(1 for v in r if v % 2 == 0)
print sum(1 for v in r if v % 3 == 0)

and while the smallest (and in my book probably the most elegant), it doesn't feel like it expresses the intent very well.

So, my question to you is: which alternative do you like best to gather these types of stats? Feel free to supply your own alternative if you have something better.

To clear up some confusion below:

- In reality my filter predicates are more complex than just this simple test.
- The objects I iterate over are larger and more complex than just numbers.
- My filter functions are more different and hard to parameterize into one predicate.
Having to iterate over the list multiple times isn't elegant, IMHO.

I'd probably create a function that allows doing:

twos, threes = countmatching(xrange(1, 10),
                             lambda a: a % 2 == 0,
                             lambda a: a % 3 == 0)

A starting point would be something like this:

def countmatching(iterable, *predicates):
    v = [0] * len(predicates)
    for e in iterable:
        for i, p in enumerate(predicates):
            if p(e):
                v[i] += 1
    return tuple(v)

Btw, "itertools recipes" has a recipe for doing much like your alt4:

def quantify(seq, pred=None):
    "Count how many times the predicate is true in the sequence"
    return sum(imap(pred, seq))
Accurate timestamping in Python
I've been building an error logging app recently and was after a way of accurately timestamping the incoming data. When I say accurately I mean each timestamp should be accurate relative to each other (no need to sync to an atomic clock or anything like that).

I've been using datetime.now() as a first stab, but this isn't perfect:

>>> for i in range(0, 1000):
...     datetime.datetime.now()
...
datetime.datetime(2008, 10, 1, 13, 17, 27, 562000)
datetime.datetime(2008, 10, 1, 13, 17, 27, 562000)
datetime.datetime(2008, 10, 1, 13, 17, 27, 562000)
datetime.datetime(2008, 10, 1, 13, 17, 27, 562000)
datetime.datetime(2008, 10, 1, 13, 17, 27, 578000)
datetime.datetime(2008, 10, 1, 13, 17, 27, 578000)
datetime.datetime(2008, 10, 1, 13, 17, 27, 578000)
datetime.datetime(2008, 10, 1, 13, 17, 27, 578000)
datetime.datetime(2008, 10, 1, 13, 17, 27, 578000)
datetime.datetime(2008, 10, 1, 13, 17, 27, 609000)
datetime.datetime(2008, 10, 1, 13, 17, 27, 609000)
datetime.datetime(2008, 10, 1, 13, 17, 27, 609000)
etc.

The changes between clock values for the first second of samples look like this:

uSecs    difference
562000
578000   16000
609000   31000
625000   16000
640000   15000
656000   16000
687000   31000
703000   16000
718000   15000
750000   32000
765000   15000
781000   16000
796000   15000
828000   32000
843000   15000
859000   16000
890000   31000
906000   16000
921000   15000
937000   16000
968000   31000
984000   16000

So it looks like the timer data is only updated every ~15-32 ms on my machine. The problem comes when we come to analyse the data, because sorting by something other than the timestamp and then sorting by timestamp again can leave the data in the wrong order (chronologically). It would be nice to have the timestamps accurate to the point that any call to the timestamp generator gives a unique timestamp.

I had been considering some methods involving a time.clock() call added to a starting datetime, but would appreciate a solution that would work accurately across threads on the same machine. Any suggestions would be very gratefully received.
time.clock() only measures wallclock time on Windows. On other systems, time.clock() actually measures CPU time. On those systems time.time() is more suitable for wallclock time, and it has as high a resolution as Python can manage -- which is as high as the OS can manage; usually using gettimeofday(3) (microsecond resolution) or ftime(3) (millisecond resolution). Other OS restrictions actually make the real resolution a lot higher than that.

datetime.datetime.now() uses time.time(), so time.time() directly won't be better.

For the record, if I use datetime.datetime.now() in a loop, I see about a 1/10000 second resolution. From looking at your data, you have much, much coarser resolution than that. I'm not sure if there's anything Python as such can do, although you may be able to convince the OS to do better through other means.

I seem to recall that on Windows, time.clock() is actually (slightly) more accurate than time.time(), but it measures wallclock time since the first call to time.clock(), so you have to remember to 'initialize' it first.
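One hedged sketch of a tie-breaking scheme for this: since the clock itself won't hand out unique values, pair each reading with a serial number (the helper name is made up):

import itertools
import threading
import time

_counter = itertools.count()
_lock = threading.Lock()

def unique_stamp():
    # Returns (wall_time, serial) tuples that sort in call order even
    # when successive calls read the same clock value; the lock keeps
    # the counter safe across threads.
    with _lock:
        return (time.time(), next(_counter))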
Python 2.5 dictionary 2 key sort
I have a dictionary of 200,000 items (the keys are strings and the values are integers). What is the best/most pythonic way to print the items sorted by descending value, then ascending key (i.e. a 2-key sort)?

a = {'keyC': 1, 'keyB': 2, 'keyA': 1}
b = a.items()
b.sort(key=lambda a: a[0])
b.sort(key=lambda a: a[1], reverse=True)
print b
# [('keyB', 2), ('keyA', 1), ('keyC', 1)]
You can't sort dictionaries. You have to sort the list of items. Previous versions were wrong.

When you have a numeric value, it's easy to sort in reverse order. These will do that. But this isn't general; it only works because the value is numeric:

a = {'key': 1, 'another': 2, 'key2': 1}

b = a.items()
b.sort(key=lambda a: (-a[1], a[0]))
print b

Here's an alternative, using an explicit function instead of a lambda, and cmp instead of the key option:

def valueKeyCmp(a, b):
    return cmp((-a[1], a[0]), (-b[1], b[0]))

b.sort(cmp=valueKeyCmp)
print b

The more general solution is actually two separate sorts, relying on the stability of Python's sort: the last sort decides the primary order, so sort by the secondary key first:

b.sort(key=lambda a: a[0])
b.sort(key=lambda a: a[1], reverse=True)
print b
Hiding a password in a (python) script
I have a Python script which creates an ODBC connection. The ODBC connection is generated with a connection string. In this connection string I have to include the username and password for the connection. Is there an easy way to obscure this password in the file (just so that nobody can read the password when I'm editing the file)?
Base64 encoding is in the standard library and will do to stop shoulder surfers:

>>> import base64
>>> print base64.b64encode("password")
cGFzc3dvcmQ=
>>> print base64.b64decode("cGFzc3dvcmQ=")
password
Python module dependency
Ok, I have two modules, each containing a class; the problem is that their classes reference each other.

Let's say, for example, I had a room module and a person module containing CRoom and CPerson. The CRoom class contains information about the room, and a CPerson list of everyone in the room. The CPerson class, however, sometimes needs to use the CRoom class for the room it's in, for example to find the door, or to see who else is in the room.

The problem is that with the two modules importing each other, I just get an import error on whichever is being imported second :(

In C++ I could solve this by only including the headers, and since in both cases the classes just have pointers to the other class, a forward declaration would suffice for the header, e.g.:

class CPerson;  // forward declare

class CRoom
{
    std::set<CPerson*> People;
    ...

Is there any way to do this in Python, other than placing both classes in the same module or something like that?

edit: added Python example showing the problem using the above classes

error:

Traceback (most recent call last):
  File "C:\Projects\python\test\main.py", line 1, in
    from room import CRoom
  File "C:\Projects\python\test\room.py", line 1, in
    from person import CPerson
  File "C:\Projects\python\test\person.py", line 1, in
    from room import CRoom
ImportError: cannot import name CRoom

room.py:

from person import CPerson

class CRoom:
    def __init__(Self):
        Self.People = {}
        Self.NextId = 0

    def AddPerson(Self, FirstName, SecondName, Gender):
        Id = Self.NextId
        Self.NextId += 1
        Person = CPerson(FirstName, SecondName, Gender, Id)
        Self.People[Id] = Person
        return Person

    def FindDoorAndLeave(Self, PersonId):
        del Self.People[PersonId]

person.py:

from room import CRoom

class CPerson:
    def __init__(Self, Room, FirstName, SecondName, Gender, Id):
        Self.Room = Room
        Self.FirstName = FirstName
        Self.SecondName = SecondName
        Self.Gender = Gender
        Self.Id = Id

    def Leave(Self):
        Self.Room.FindDoorAndLeave(Self.Id)
No need to import CRoom.

You don't use CRoom in person.py, so don't import it. Due to dynamic binding, Python doesn't need to "see all class definitions at compile time".

If you actually do use CRoom in person.py, then change from room import CRoom to import room and use the module-qualified form room.CRoom. See Effbot's Circular Imports for details.

Sidenote: you probably have an error in the Self.NextId += 1 line. It increments NextId of the instance, not NextId of the class. To increment the class's counter use CRoom.NextId += 1 or Self.__class__.NextId += 1.
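A minimal sketch of the module-qualified form, reusing the question's hypothetical classes (the EnterNew method is made up for illustration):

# person.py
import room   # module import instead of: from room import CRoom

class CPerson:
    def Leave(self):
        self.Room.FindDoorAndLeave(self.Id)

    def EnterNew(self):
        # room.CRoom is resolved lazily, at call time, after both
        # modules have finished loading, so the cycle no longer fails
        self.Room = room.CRoom()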
Getting MAC Address
I need a cross-platform method of determining the MAC address of a computer at run time. For Windows, the 'wmi' module can be used, and the only method under Linux I could find was to run ifconfig and run a regex across its output. I don't like using a package that only works on one OS, and parsing the output of another program doesn't seem very elegant, not to mention error-prone. Does anyone know a cross-platform (Windows and Linux) method to get the MAC address? If not, does anyone know any more elegant methods than those I listed above?
Python 2.5 includes a uuid implementation which (in at least one version) needs the MAC address. You can import the MAC-finding function into your own code easily:

from uuid import getnode as get_mac

mac = get_mac()

The return value is the MAC address as a 48-bit integer.
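To render that integer in the familiar colon-separated hex form:

from uuid import getnode as get_mac

mac = get_mac()
mac_str = ':'.join(('%012x' % mac)[i:i + 2] for i in range(0, 12, 2))
print(mac_str)  # e.g. '00:1a:2b:3c:4d:5e'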
What is the naming convention in Python for variable and function names?
Coming from a C# background, the naming convention for variables and method names is usually either camelCase or PascalCase:

// C# example
string thisIsMyVariable = "a";
public void ThisIsMyMethod()

In Python, I have seen the above but I have also seen underscores being used:

# python example
this_is_my_variable = 'a'
def this_is_my_function():

Is there a more preferable, definitive coding style for Python?
See Python PEP 8: Function and Variable Names:

Function names should be lowercase, with words separated by underscores as necessary to improve readability.

mixedCase is allowed only in contexts where that's already the prevailing style.

Variables: use the function naming rules - lowercase with words separated by underscores as necessary to improve readability.

Personally, I deviate from this because I also prefer mixedCase over lower_case for my own projects.
urllib2 file name
If I open a file using urllib2, like so:

remotefile = urllib2.urlopen('http://example.com/somefile.zip')

Is there an easy way to get the file name other than parsing the original URL?

EDIT: changed openfile to urlopen... not sure how that happened.

EDIT2: I ended up using:

filename = url.split('/')[-1].split('#')[0].split('?')[0]

Unless I'm mistaken, this should strip out all potential queries as well.
Did you mean urllib2.urlopen?

You could potentially lift the intended filename if the server was sending a Content-Disposition header, by checking remotefile.info()['Content-Disposition'], but as it is I think you'll just have to parse the URL.

You could use urlparse.urlsplit, but if you have any URLs like the second example, you'll end up having to pull the file name out yourself anyway:

>>> urlparse.urlsplit('http://example.com/somefile.zip')
('http', 'example.com', '/somefile.zip', '', '')
>>> urlparse.urlsplit('http://example.com/somedir/somefile.zip')
('http', 'example.com', '/somedir/somefile.zip', '', '')

Might as well just do this:

>>> 'http://example.com/somefile.zip'.split('/')[-1]
'somefile.zip'
>>> 'http://example.com/somedir/somefile.zip'.split('/')[-1]
'somefile.zip'
Python - How do I pass a string into subprocess.Popen (using the stdin argument)?
If I do the following:

import subprocess
from cStringIO import StringIO
subprocess.Popen(['grep', 'f'], stdout=subprocess.PIPE,
                 stdin=StringIO('one\ntwo\nthree\nfour\nfive\nsix\n')).communicate()[0]

I get:

Traceback (most recent call last):
  File "<stdin>", line 1, in ?
  File "/build/toolchain/mac32/python-2.4.3/lib/python2.4/subprocess.py", line 533, in __init__
    (p2cread, p2cwrite,
  File "/build/toolchain/mac32/python-2.4.3/lib/python2.4/subprocess.py", line 830, in _get_handles
    p2cread = stdin.fileno()
AttributeError: 'cStringIO.StringI' object has no attribute 'fileno'

Apparently a cStringIO.StringIO object doesn't quack close enough to a file duck to suit subprocess.Popen. How do I work around this?
From the Popen.communicate() documentation:

Note that if you want to send data to the process's stdin, you need to create the Popen object with stdin=PIPE. Similarly, to get anything other than None in the result tuple, you need to give stdout=PIPE and/or stderr=PIPE too.

Replacing os.popen*:

pipe = os.popen(cmd, 'w', bufsize)
# ==>
pipe = Popen(cmd, shell=True, bufsize=bufsize, stdin=PIPE).stdin

Warning: use communicate() rather than stdin.write(), stdout.read() or stderr.read() to avoid deadlocks due to any of the other OS pipe buffers filling up and blocking the child process.

So your example could be written as follows:

from subprocess import Popen, PIPE, STDOUT

p = Popen(['grep', 'f'], stdout=PIPE, stdin=PIPE, stderr=STDOUT)
grep_stdout = p.communicate(input=b'one\ntwo\nthree\nfour\nfive\nsix\n')[0]
print(grep_stdout.decode())
# -> four
# -> five
# ->
Can "list_display" in a Django ModelAdmin display attributes of ForeignKey fields?
I have a Person model that has a foreign key relationship to Book. Book has a number of fields, but I'm most concerned about author (a standard CharField). With that being said, in my PersonAdmin model, I'd like to display book.author using list_display. I've tried all of the obvious methods for doing so (see below), but nothing seems to work. Any suggestions?

class PersonAdmin(admin.ModelAdmin):
    list_display = ['book.author', ]
As another option, you can do lookups like:

class UserAdmin(admin.ModelAdmin):
    list_display = (..., 'get_author')

    def get_author(self, obj):
        return obj.book.author
    get_author.short_description = 'Author'
    get_author.admin_order_field = 'book__author'
How do I deploy a Python desktop application?
I have started on a personal python application that runs on the desktop. I am using wxPython as a GUI toolkit. Should there be a demand for this type of application, I would possibly like to commercialize it. I have no knowledge of deploying "real-life" Python applications, though I have used py2exe in the past with varied success. How would I obfuscate the code? Can I somehow deploy only the bytecode? An ideal solution would not jeopardize my intellectual property (source code), would not require a direct installation of Python (though I'm sure it will need to have some embedded interpreter), and would be cross-platform (Windows, Mac, and Linux). Does anyone know of any tools or resources in this area? Thanks.
You can distribute the compiled Python bytecode (.pyc files) instead of the source. You can't prevent decompilation in Python (or any other language, really). You could use an obfuscator like pyobfuscate to make it more annoying for competitors to decipher your decompiled source. As Alex Martelli says in this thread, if you want to keep your code a secret, you shouldn't run it on other people's machines. IIRC, the last time I used cx_Freeze it created a DLL for Windows that removed the necessity for a native Python installation. This is at least worth checking out.
Change Django Templates Based on User-Agent
I've made a Django site, but I've drunk the Kool-Aid and I want to make an iPhone version. After putting much thought into it, I've come up with two options:

1. Make a whole other site, like i.xxxx.com. Tie it into the same database using Django's sites framework.
2. Find some kind of middleware that reads the user-agent, and changes the template directories dynamically.

I'd really prefer option #2; however, I have some reservations, mainly because the Django documentation discourages changing settings on the fly. I found a snippet that would do what I'd like. My main issue is having it as seamless as possible; I'd like it to be automagic and transparent to the user.

Has anyone else come across the same issue? Would anyone care to share how they've tackled making iPhone versions of Django sites?

Update

I went with a combination of middleware and tweaking the template call.

For the middleware, I used minidetector. I like it because it detects a plethora of mobile user-agents. All I have to do is check request.mobile in my views.

For the template call tweak:

def check_mobile(request, template_name):
    if request.mobile:
        return 'mobile-%s' % template_name
    return template_name

I use this for any view that I know I have both versions of.

TODO:

- Figure out how to access request.mobile in an extended version of render_to_response so I don't have to use check_mobile('template_name.html')
- Use the previous to automagically fall back to the regular template if no mobile version exists.
Rather than changing the template directories dynamically, you could modify the request and add a value that lets your view know if the user is on an iPhone or not. Then wrap render_to_response (or whatever you are using for creating HttpResponse objects) to grab the iPhone version of the template instead of the standard HTML version if they are using an iPhone.
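A hedged sketch of that idea; the middleware class, the is_iphone flag, and my_render are all made-up names, and real user-agent detection should be more thorough (e.g. minidetector, as in the question's update):

# middleware sketch
class IPhoneDetectionMiddleware(object):
    def process_request(self, request):
        ua = request.META.get('HTTP_USER_AGENT', '')
        request.is_iphone = 'iPhone' in ua  # flag read later when rendering

# wrapper around render_to_response
from django.shortcuts import render_to_response

def my_render(request, template_name, context=None):
    if getattr(request, 'is_iphone', False):
        template_name = 'iphone/%s' % template_name  # assumed directory layout
    return render_to_response(template_name, context or {})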
How would I package and sell a Django app?
Currently I am hosting a Django app I developed myself for my clients, but I am now starting to look at selling it to people for them to host themselves. My question is this: how can I package up and sell a Django app, while protecting its code from pirating or theft? Distributing a bunch of .py files doesn't sound like a good idea, as the people I sell it to could just make copies of them and pass them on. I think for the purpose of this problem it would be safe to assume that everyone who buys this would be running the same (LAMP) setup.
Don't try to obfuscate or encrypt the code - it will never work.

I would suggest selling the Django application "as a service" - either host it for them, or sell them the code and support. Write up a contract that forbids them from redistributing it.

That said, if you were determined to obfuscate the code in some way - you can distribute Python applications entirely as .pyc (Python compiled byte-code). It's how Py2App works. It will still be redistributable, but it will be very difficult to edit the files - so you could add some basic licensing stuff and not have it foiled by a few #s.

As I said, I don't think you'll succeed in anti-piracy via encryption or obfuscation etc. Depending on your clients, a simple contract, and maybe some really basic checks, will go much further than some complicated decryption system (and make the experience of using your application better, instead of hopefully not any worse).
Detecting Mouse clicks in windows using python
How can I detect mouse clicks regardless of the window the mouse is in?

Preferably in Python, but if someone can explain it in any language, I might be able to figure it out.

I found this on Microsoft's site: http://msdn.microsoft.com/en-us/library/ms645533(VS.85).aspx

But I don't see how I can detect or pick up the notifications listed.

I tried using pygame's pygame.mouse.get_pos() function as follows:

import pygame
pygame.init()
while True:
    print pygame.mouse.get_pos()

This just returns 0,0. I'm not familiar with pygame; is something missing?

In any case I'd prefer a method without the need to install a 3rd party module (other than pywin32 http://sourceforge.net/projects/pywin32/ ).
The only way to detect mouse events outside your program is to install a Windows hook using SetWindowsHookEx. The pyHook module encapsulates the nitty-gritty details. Here's a sample that will print the location of every mouse click:

import pyHook
import pythoncom

def onclick(event):
    print event.Position
    return True

hm = pyHook.HookManager()
hm.SubscribeMouseAllButtonsDown(onclick)
hm.HookMouse()
pythoncom.PumpMessages()
hm.UnhookMouse()

You can check the example.py script that is installed with the module for more info about the event parameter.

pyHook might be tricky to use in a pure Python script, because it requires an active message pump. From the tutorial:

Any application that wishes to receive notifications of global input events must have a Windows message pump. The easiest way to get one of these is to use the PumpMessages method in the Win32 Extensions package for Python. [...] When run, this program just sits idle and waits for Windows events. If you are using a GUI toolkit (e.g. wxPython), this loop is unnecessary since the toolkit provides its own.
Python object attributes - methodology for access
Suppose I have a class with some attributes. How is it best (in the Pythonic OOP sense) to access these attributes? Just like obj.attr? Or perhaps write get accessors? What are the accepted naming styles for such things?

Edit: Can you elaborate on the best practices of naming attributes with a single or double leading underscore? I see in most modules that a single underscore is used.

If this question has already been asked (and I have a hunch it has, though searching didn't bring results), please point to it - and I will close this one.
With regards to the single and double leading underscores: both indicate the same concept of 'privateness'. That is to say, people will know the attribute (be it a method or a 'normal' data attribute or anything else) is not part of the public API of the object. People will know that to touch it directly is to invite disaster.

On top of that, the double leading underscore attributes (but not the single leading underscore attributes) are name-mangled to make accessing them by accident from subclasses or anywhere else outside the current class less likely. You can still access them, but not as trivially. For example:

>>> class ClassA:
...     def __init__(self):
...         self._single = "Single"
...         self.__double = "Double"
...     def getSingle(self):
...         return self._single
...     def getDouble(self):
...         return self.__double
...
>>> class ClassB(ClassA):
...     def getSingle_B(self):
...         return self._single
...     def getDouble_B(self):
...         return self.__double
...
>>> a = ClassA()
>>> b = ClassB()

You can now trivially access a._single and b._single and get the _single attribute created by ClassA:

>>> a._single, b._single
('Single', 'Single')
>>> a.getSingle(), b.getSingle(), b.getSingle_B()
('Single', 'Single', 'Single')

But trying to access the __double attribute on the a or b instance directly won't work:

>>> a.__double
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: ClassA instance has no attribute '__double'
>>> b.__double
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: ClassB instance has no attribute '__double'

And though methods defined in ClassA can get at it directly (when called on either instance):

>>> a.getDouble(), b.getDouble()
('Double', 'Double')

Methods defined on ClassB can not:

>>> b.getDouble_B()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 5, in getDouble_B
AttributeError: ClassB instance has no attribute '_ClassB__double'

And right in that error you get a hint about what's happening. The __double attribute name, when accessed inside a class, is being name-mangled to include the name of the class that it is being accessed in. When ClassA tries to access self.__double, it actually turns -- at compile time -- into an access of self._ClassA__double, and likewise for ClassB. (If a method in ClassB were to assign to __double, not included in the code for brevity, it would therefore not touch ClassA's __double but create a new attribute.) There is no other protection of this attribute, so you can still access it directly if you know the right name:

>>> a._ClassA__double, b._ClassA__double
('Double', 'Double')

So why is this a problem?

Well, it's a problem any time you want to inherit and change the behaviour of any code dealing with this attribute. You either have to reimplement everything that touches this double-underscore attribute directly, or you have to guess at the class name and mangle the name manually. The problem gets worse when this double-underscore attribute is actually a method: overriding the method or calling the method in a subclass means doing the name-mangling manually, or reimplementing all the code that calls the method to not use the double-underscore name. Not to mention accessing the attribute dynamically, with getattr(): you will have to manually mangle there, too.

On the other hand, because the attribute is only trivially rewritten, it offers only superficial 'protection'. Any piece of code can still get at the attribute by manually mangling, although that will make their code dependent on the name of your class, and efforts on your side to refactor your code or rename your class (while still keeping the same user-visible name, a common practice in Python) would needlessly break their code. They can also 'trick' Python into doing the name-mangling for them by naming their class the same as yours: notice how there is no module name included in the mangled attribute name. And lastly, the double-underscore attribute is still visible in all attribute lists and all forms of introspection that don't take care to skip attributes starting with a (single) underscore.

So, if you use double-underscore names, use them exceedingly sparingly, as they can turn out quite inconvenient, and never use them for methods or anything else a subclass may ever want to reimplement, override or access directly. And realize that double leading underscore name-mangling offers no real protection. In the end, using a single leading underscore wins you just as much and gives you less (potential, future) pain. Use a single leading underscore.
Sleeping in a batch file
When writing a batch file to automate something on a Windows box, I've needed to pause its execution for several seconds (usually in a test/wait loop, waiting for a process to start). At the time, the best solution I could find uses ping (I kid you not) to achieve the desired effect. I've found a better write-up of it here, which describes a callable "wait.bat", implemented as follows: @ping 127.0.0.1 -n 2 -w 1000 > nul @ping 127.0.0.1 -n %1% -w 1000 > nul You can then include calls to wait.bat in your own batch file, passing in the number of seconds to sleep. Apparently the Windows 2003 Resource Kit provides a Unix-like sleep command (at last!). In the meantime, for those of us still using Windows XP, Windows 2000 or (sadly) Windows NT, is there a better way? I modified the sleep.py script in the accepted answer, so that it defaults to one second if no arguments are passed on the command line: import time, sys time.sleep(float(sys.argv[1]) if len(sys.argv) > 1 else 1)
The timeout command is available from Vista onwards. c:\> timeout /? TIMEOUT [/T] timeout [/NOBREAK] Description: This utility accepts a timeout parameter to wait for the specified time period (in seconds) or until any key is pressed. It also accepts a parameter to ignore the key press. Parameter List: /T timeout Specifies the number of seconds to wait. Valid range is -1 to 99999 seconds. /NOBREAK Ignore key presses and wait specified time. /? Displays this help message. NOTE: A timeout value of -1 means to wait indefinitely for a key press. Examples: TIMEOUT /? TIMEOUT /T 10 TIMEOUT /T 300 /NOBREAK TIMEOUT /T -1 Note: Does not work with input redirection - trivial example: C:\>echo 1 | timeout /t 1 /nobreak ERROR: Input redirection is not supported, exiting the process immediately.
Finding local IP addresses using Python's stdlib
How can I find local IP addresses (i.e. 192.168.x.x or 10.0.x.x) in Python platform independently and using only the standard library?
I just found this and it seems a bit hackish, but they say they tried it on *nix and I tried it on Windows, and it worked. import socket s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM) s.connect(("gmail.com",80)) print(s.getsockname()[0]) s.close() This assumes you have internet access, and that there is no local proxy.
Finding a public facing IP address in Python?
How can I find the public facing IP for my network in Python?
This will fetch your remote IP address: import urllib ip = urllib.urlopen('http://automation.whatismyip.com/n09230945.asp').read() If you don't want to rely on someone else, then just upload something like this PHP script: <?php echo $_SERVER['REMOTE_ADDR']; ?> and change the URL in the Python code. Or, if you prefer ASP: <% Dim UserIPAddress UserIPAddress = Request.ServerVariables("REMOTE_ADDR") %> Note: I don't know ASP, but I figured it might be useful to have here so I googled.
Calling Python in PHP
I have a Python script I recently wrote that I call using the command line with some options. I now want a very thin web interface to call this script locally on my Mac. I don't want to go through the minor trouble of installing mod_python or mod_wsgi on my Mac, so I was just going to do a system() or popen() from PHP to call the Python script. Any better ideas? Thanks in advance!
Depending on what you are doing, system() or popen() may be perfect. Use system() if the Python script has no output, or if you want the Python script's output to go directly to the browser. Use popen() if you want to write data to the Python script's standard input, or read data from the Python script's standard output in PHP. popen() will only let you read or write, but not both. If you want both, check out proc_open(), but with two-way communication between programs you need to be careful to avoid deadlocks, where each program is waiting for the other to do something. If you want to pass user-supplied data to the Python script, then the big thing to be careful about is command injection. If you aren't careful, your user could send you data like "; evilcommand ;" and make your program execute arbitrary commands against your will. escapeshellarg() and escapeshellcmd() can help with this, but personally I like to remove everything that isn't a known-good character, using something like preg_replace('/[^a-zA-Z0-9]/', '', $str)
How do you get a directory listing sorted by creation date in python?
What is the best way to get a list of all files in a directory, sorted by date [created | modified], using python, on a windows machine?
I've done this in the past for a Python script to determine the last updated files in a directory: import glob import os search_dir = "/mydir/" # remove anything from the list that is not a file (directories, symlinks) # thanks to J.F. Sebastian for pointing out that the requirement was a list # of files (presumably not including directories) files = filter(os.path.isfile, glob.glob(search_dir + "*")) files.sort(key=lambda x: os.path.getmtime(x)) That should do what you're looking for based on file mtime. EDIT: Note that you can also use os.listdir() in place of glob.glob() if desired - the reason I used glob in my original code was that I wanted to use glob to search only for files with a particular set of file extensions, which glob() was better suited to. To use listdir, here's what it would look like: import os search_dir = "/mydir/" os.chdir(search_dir) files = filter(os.path.isfile, os.listdir(search_dir)) files = [os.path.join(search_dir, f) for f in files] # add path to each file files.sort(key=lambda x: os.path.getmtime(x))
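The question also mentions creation date. A minimal sketch for that case uses os.path.getctime() instead of getmtime(); note the caveat that on Windows ctime is the creation time, but on most Unix filesystems it is the time of the last metadata change:

import os

search_dir = "/mydir/"
files = [os.path.join(search_dir, f) for f in os.listdir(search_dir)]
files = filter(os.path.isfile, files)
# on Windows, getctime() is the creation time; on most Unixes it is
# the time of the last metadata change, not creation
files.sort(key=lambda x: os.path.getctime(x))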
Python - How do I convert "an OS-level handle to an open file" to a file object?
tempfile.mkstemp() returns: a tuple containing an OS-level handle to an open file (as would be returned by os.open()) and the absolute pathname of that file, in that order. How do I convert that OS-level handle to a file object? The documentation for os.open() states: To wrap a file descriptor in a "file object", use fdopen(). So I tried: >>> import tempfile >>> tup = tempfile.mkstemp() >>> import os >>> f = os.fdopen(tup[0]) >>> f.write('foo\n') Traceback (most recent call last): File "<stdin>", line 1, in ? IOError: [Errno 9] Bad file descriptor
You can use os.write(tup[0], "foo\n") to write to the handle. If you want to open the handle for writing, you need to add the "w" mode: f = os.fdopen(tup[0], "w") f.write("foo")
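Putting the pieces together, a minimal sketch of creating a temporary file with mkstemp() and wrapping its OS-level handle for writing:

import os
import tempfile

fd, path = tempfile.mkstemp()
f = os.fdopen(fd, "w")  # wrap the descriptor in a file object opened for writing
f.write("foo\n")
f.close()
os.remove(path)  # clean up the temporary file when done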
Python - How do I write a decorator that restores the cwd?
How do I write a decorator that restores the current working directory to what it was before the decorated function was called? In other words, if I use the decorator on a function that does an os.chdir(), the cwd will not be changed after the function is called.
The answer for a decorator has been given; it works at the function definition stage as requested. With Python 2.5+, you also have an option to do that at the function call stage using a context manager: from __future__ import with_statement # needed for 2.5 ≤ Python < 2.6 import contextlib, os @contextlib.contextmanager def remember_cwd(): curdir = os.getcwd() try: yield finally: os.chdir(curdir) which can be used if needed at function call time as: print "getcwd before:", os.getcwd() with remember_cwd(): walk_around_the_filesystem() print "getcwd after:", os.getcwd() It's a nice option to have. EDIT: I added error handling as suggested by codeape. Since my answer has been voted up, it's fair to offer a complete answer, all other issues aside.
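For reference, a minimal sketch of the decorator variant as well; this is not the exact code from the accepted answer, just one reasonable way to write it:

import os
from functools import wraps  # available from Python 2.5

def restore_cwd(func):
    @wraps(func)
    def wrapper(*args, **kwargs):
        curdir = os.getcwd()
        try:
            return func(*args, **kwargs)
        finally:
            os.chdir(curdir)  # always restore, even if func raises
    return wrapper

@restore_cwd
def walk_around_the_filesystem():
    os.chdir("/tmp")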
Django signals vs. overriding save method
I'm having trouble wrapping my head around this. Right now I have some models that look kind of like this: class Review(models.Model): ...fields... overall_score = models.FloatField(blank=True) class Score(models.Model): review = models.ForeignKey(Review) question = models.TextField() grade = models.IntegerField() A Review has several "scores", and the overall_score is the average of the scores. When a review or a score is saved, I need to recalculate the overall_score average. Right now I'm using an overridden save method. Would there be any benefits to using Django's signal dispatcher?
Save/delete signals are generally favourable in situations where you need to make changes which aren't completely specific to the model in question, or could be applied to models which have something in common, or could be configured for use across models. One common task in overridden save methods is automated generation of slugs from some text field in a model. That's an example of something which, if you needed to implement it for a number of models, would benefit from using a pre_save signal, where the signal handler could take the name of the slug field and the name of the field to generate the slug from. Once you have something like that in place, any enhanced functionality you put in place will also apply to all models - e.g. looking up the slug you're about to add for the type of model in question, to ensure uniqueness. Reusable applications often benefit from the use of signals - if the functionality they provide can be applied to any model, they generally (unless it's unavoidable) won't want users to have to directly modify their models in order to benefit from it. With django-mptt, for example, I used the pre_save signal to manage a set of fields which describe a tree structure for the model which is about to be created or updated and the pre_delete signal to remove tree structure details for the object being deleted and its entire sub-tree of objects before it and they are deleted. Due to the use of signals, users don't have to add or modify save or delete methods on their models to have this management done for them, they just have to let django-mptt know which models they want it to manage.
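As an illustration of the slug example above, a pre_save handler might look roughly like this; the model and field names are hypothetical placeholders, and this is a sketch rather than the actual django-mptt code:

from django.db.models.signals import pre_save
from django.template.defaultfilters import slugify

def set_slug(sender, instance, **kwargs):
    # hypothetical: assumes the model has 'title' and 'slug' fields
    if not instance.slug:
        instance.slug = slugify(instance.title)

pre_save.connect(set_slug, sender=Article)  # Article is a placeholder model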
Formatting a list of text into columns
I'm trying to output a list of string values into a 2 column format. The standard way of making a list of strings into "normal text" is by using the string.join method. However, it only takes 2 arguments so I can only make a single column using "\n". I thought trying to make a loop that would simply add a tab between columns would do it but the logic didn't work correctly. I found an ActiveState page that has a fairly complicated way of doing it but it's from 4 years ago. Is there an easy way to do it nowadays? Edit Here is the list that I want to use. skills_defs = ["ACM:Aircraft Mechanic", "BC:Body Combat", "BIO:Biology", "CBE:Combat Engineer", "CHM:Chemistry", "CMP:Computers", "CRM:Combat Rifeman", "CVE:Civil Engineer", "DIS:Disguise", "ELC:Electronics","EQ:Equestrian", "FO:Forward Observer", "FOR:Forage", "FRG:Forgery", "FRM:Farming", "FSH:Fishing", "GEO:Geology", "GS:Gunsmith", "HW:Heavy Weapons", "IF:Indirect Fire", "INS:Instruction", "INT:Interrogation", "JP:Jet Pilot", "LB:Longbow", "LAP:Light Aircraft Pilot", "LCG:Large Caliber Gun", "LNG:Language", "LP:Lockpick", "MC:Melee Combat", "MCY:Motorcycle", "MEC:Mechanic", "MED:Medical", "MET:Meterology", "MNE:Mining Engineer", "MTL:Metallurgy", "MTN:Mountaineering", "NWH:Nuclear Warhead", "PAR:Parachute", "PST:Pistol", "RCN:Recon", "RWP:Rotary Wing Pilot", "SBH:Small Boat Handling","SCD:Scuba Diving", "SCR:Scrounging", "SWM:Swimming", "TW:Thrown Weapon", "TVD:Tracked Vehicle Driver", "WVD:Wheeled Vehicle Driver"] I just want to output this list into a simple, 2 column format to reduce space. Ideally there should be a standard amount of space between the columns but I can work with it. ACM:Aircraft Mechanic BC:Body Combat BIO:Biology CBE:Combat Engineer CHM:Chemistry CMP:Computers CRM:Combat Rifeman CVE:Civil Engineer DIS:Disguise ELC:Electronics EQ:Equestrian FO:Forward Observer FOR:Forage FRG:Forgery FRM:Farming FSH:Fishing GEO:Geology GS:Gunsmith HW:Heavy Weapons IF:Indirect Fire INS:Instruction INT:Interrogation JP:Jet Pilot LB:Longbow LAP:Light Aircraft Pilot LCG:Large Caliber Gun LNG:Language LP:Lockpick MC:Melee Combat MCY:Motorcycle MEC:Mechanic MED:Medical MET:Meterology MNE:Mining Engineer MTL:Metallurgy MTN:Mountaineering NWH:Nuclear Warhead PAR:Parachute PST:Pistol RCN:Recon RWP:Rotary Wing Pilot SBH:Small Boat Handling SCD:Scuba Diving SCR:Scrounging SWM:Swimming TW:Thrown Weapon TVD:Tracked Vehicle Driver WVD:Wheeled Vehicle Driver
Two columns, separated by tabs, joined into lines. Look in itertools for iterator equivalents, to achieve a space-efficient solution. import string def fmtpairs(mylist): pairs = zip(mylist[::2],mylist[1::2]) return '\n'.join('\t'.join(i) for i in pairs) print fmtpairs(list(string.ascii_uppercase)) A B C D E F G H I J ... Oops... got caught by S.Lott (thank you). A more general solution handles any number of columns and odd-length lists. Slightly modified from S.Lott, using generators to save space. def fmtcols(mylist, cols): lines = ("\t".join(mylist[i:i+cols]) for i in xrange(0,len(mylist),cols)) return '\n'.join(lines)
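For the two-column output in the question, the call would then be something like this (column alignment depends on tab stops, so long entries may push columns around):

print fmtcols(skills_defs, 2)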
What is the best way to get all the divisors of a number?
Here's the very dumb way: def divisorGenerator(n): for i in xrange(1,n/2+1): if n%i == 0: yield i yield n The result I'd like to get is similar to this one, but I'd like a smarter algorithm (this one is much too slow and dumb :-) I can find prime factors and their multiplicity fast enough. I have a generator that generates factors in this way: (factor1, multiplicity1) (factor2, multiplicity2) (factor3, multiplicity3) and so on... i.e. the output of for i in factorGenerator(100): print i is: (2, 2) (5, 2) I don't know how useful this is for what I want to do (I coded it for other problems), anyway I'd like a smarter way to make for i in divisorGen(100): print i output this: 1 2 4 5 10 20 25 50 100 UPDATE: Many thanks to Greg Hewgill and his "smart way" :) Calculating all divisors of 100000000 took 0.01s with his way against the 39s that the dumb way took on my machine, very cool :D UPDATE 2: Stop saying this is a duplicate of this post. Calculating the number of divisors of a given number doesn't need to calculate all the divisors. It's a different problem; if you think it's not, then look for "Divisor function" on Wikipedia. Read the question and the answers before posting; if you do not understand the topic, just don't add unhelpful answers that have already been given.
Given your factorGenerator function, here is a divisorGen that should work: def divisorGen(n): factors = list(factorGenerator(n)) nfactors = len(factors) f = [0] * nfactors while True: yield reduce(lambda x, y: x*y, [factors[x][0]**f[x] for x in range(nfactors)], 1) i = 0 while True: f[i] += 1 if f[i] <= factors[i][1]: break f[i] = 0 i += 1 if i >= nfactors: return The overall efficiency of this algorithm will depend entirely on the efficiency of the factorGenerator.
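Note that the divisors are not yielded in increasing order; if you want sorted output like in the question, collect and sort them:

for d in sorted(divisorGen(100)):
    print d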
How do you develop against OpenID locally
I'm developing a website (in Django) that uses OpenID to authenticate users. As I'm currently only running on my local machine I can't authenticate using one of the OpenID providers on the web. So I figure I need to run a local OpenID server that simply lets me type in a username and then passes that back to my main app. Does such an OpenID dev server exist? Is this the best way to go about it?
The libraries at OpenID Enabled ship with examples that are sufficient to run a local test provider. Look in the examples/djopenid/ directory of the python-openid source distribution. Running that will give you an instance of this test provider.
How are you planning on handling the migration to Python 3?
I'm sure this is a subject that's on most python developers' minds considering that Python 3 is coming out soon. Some questions to get us going in the right direction: Will you have a python 2 and python 3 version to be maintained concurrently or will you simply have a python 3 version once it's finished? Have you already started or plan on starting soon? Or do you plan on waiting until the final version comes out to get into full swing?
Here's the general plan for Twisted. I was originally going to blog this, but then I thought: why blog about it when I could get points for it? Wait until somebody cares. Right now, nobody has Python 3. We're not going to spend a bunch of effort until at least one actual user has come forth and said "I need Python 3.0 support", and has a good reason for it aside from the fact that 3.0 looks shiny. Wait until our dependencies have migrated. A large system like Twisted has a number of dependencies. For starters, ours include: Zope Interface PyCrypto PyOpenSSL pywin32 PyGTK (though this dependency is sadly very light right now, by the time migration rolls around, I hope Twisted will have more GUI tools) pyasn1 PyPAM gmpy Some of these projects have their own array of dependencies so we'll have to wait for those as well. Wait until somebody cares enough to help. There are, charitably, 5 people who work on Twisted - and I say "charitably" because that's counting me, and I haven't committed in months. We have over 1000 open tickets right now, and it would be nice to actually fix some of those — fix bugs, add features, and generally make Twisted a better product in its own right — before spending time on getting it ported over to a substantially new version of the language. This potentially includes sponsors caring enough to pay for us to do it, but I hope that there will be an influx of volunteers who care about 3.0 support and want to help move the community forward. Follow Guido's advice. This means we will not change our API incompatibly, and we will follow the transitional development guidelines that Guido posted last year. That starts with having unit tests, and running the 2to3 conversion tool over the Twisted codebase. Report bugs against, and file patches for, the 2to3 tool. When we get to the point where we're actually using it, I anticipate that there will be a lot of problems with running 2to3 in the future. Running it over Twisted right now takes an extremely long time and (last I checked, which was quite a while ago) can't parse a few of the files in the Twisted repository, so the resulting output won't import. I think there will have to be a fair number of success stories from small projects and a lot of hammering on the tool before it will actually work for us. However, the Python development team has been very helpful in responding to our bug reports, and early responses to these problems have been encouraging, so I expect that all of these issues will be fixed in time. Maintain 2.x compatibility for several years. Right now, Twisted supports python 2.3 to 2.5. Currently, we're working on 2.6 support (which we'll obviously have to finish before 3.0!). Our plan is to revise our supported versions of Python based on the long-term supported versions of Ubuntu - release 8.04, which includes Python 2.5, will be supported until 2013. According to Guido's advice we will need to drop support for 2.5 in order to support 3.0, but I am hoping we can find a way around that (we are pretty creative with version-compatibility hacks). So, we are planning to support Python 2.5 until at least 2013. In two years, Ubuntu will release another long-term supported version of Ubuntu: if they still exist, and stay on schedule, that will be 10.04. Personally I am guessing that this will ship with Python 2.x, perhaps python 2.8, as /usr/bin/python, because there is a huge amount of Python software packaged with the distribution and it will take a long time to update it all.
So, five years from then, in 2015, we can start looking at dropping 2.x support. During this period, we will continue to follow Guido's advice about migration: running 2to3 over our 2.x codebase, and modifying the 2.x codebase to keep its tests passing in both versions. The upshot of this is that Python 3.x will not be a source language for Twisted until well after my 35th birthday — it will be a target runtime (and a set of guidelines and restrictions) for my python 2.x code. I expect to be writing programs in Python 2.x for the next ten years or so. So, that's the plan. I'm hoping that it ends up looking laughably conservative in a year or so; that the 3.x transition is easy as pie, and everyone rapidly upgrades. Other things could happen, too: the 2.x and 3.x branches could converge, someone might end up writing a 3to2, or another runtime (PyPy comes to mind) might allow for running 2.x and 3.x code in the same process directly, making our conversion process easier. For the time being, however, we're assuming that, for many years, we will have people with large codebases they're maintaining (or people writing new code who want to use other libraries which have not yet been migrated) who still want new features and bug fixes in Twisted. Pretty soon I expect we will also have bleeding-edge users that want to use Twisted on python 3. I'd like to provide all of those people with a positive experience for as long as possible.
How do I split a multi-line string into multiple lines?
I have a multi-line string literal that I want to do an operation on each line, like so: inputString = """Line 1 Line 2 Line 3""" I want to do something like the following: for line in inputString: doStuff()
inputString.splitlines() will give you a list of the lines; the splitlines() method is designed to split a multi-line string into a list with one element per line.
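So the loop from the question becomes:

for line in inputString.splitlines():
    doStuff(line)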
Speeding Up Python
This is really two questions, but they are so similar, and to keep it simple, I figured I'd just roll them together: Firstly: Given an established python project, what are some decent ways to speed it up beyond just plain in-code optimization? Secondly: When writing a program from scratch in python, what are some good ways to greatly improve performance? For the first question, imagine you are handed a decently written project and you need to improve performance, but you can't seem to get much of a gain through refactoring/optimization. What would you do to speed it up in this case short of rewriting it in something like C?
Regarding "Secondly: When writing a program from scratch in python, what are some good ways to greatly improve performance?" Remember the Jackson rules of optimization: Rule 1: Don't do it. Rule 2 (for experts only): Don't do it yet. And the Knuth rule: "Premature optimization is the root of all evil." The more useful rules are in the General Rules for Optimization. Don't optimize as you go. First get it right. Then get it fast. Optimizing a wrong program is still wrong. Remember the 80/20 rule. Always run "before" and "after" benchmarks. Otherwise, you won't know if you've found the 80%. Use the right algorithms and data structures. This rule should be first. Nothing matters as much as algorithm and data structure. Bottom Line You can't prevent or avoid the "optimize this program" effort. It's part of the job. You have to plan for it and do it carefully, just like the design, code and test activities.
Parsing and generating Microsoft Office 2007 files (.docx, .xlsx, .pptx)
I have a web project where I must import text and images from a user-supplied document, and one of the possible formats is Microsoft Office 2007. There's also a need to generate documents in this format. The server runs CentOS 5.2 and has PHP/Perl/Python installed. I can execute local binaries and shell scripts if I must. We use Apache 2.2 but will be switching over to Nginx once it goes live. What are my options? Anyone had experience with this?
The Office 2007 file formats are open and well documented. Roughly speaking, all of the new file formats ending in "x" are zip compressed XML documents. For example: To open a Word 2007 XML file Create a temporary folder in which to store the file and its parts. Save a Word 2007 document, containing text, pictures, and other elements, as a .docx file. Add a .zip extension to the end of the file name. Double-click the file. It will open in the ZIP application. You can see the parts that comprise the file. Extract the parts to the folder that you created previously. The other file formats are roughly similar. I don't know of any open source libraries for interacting with them as yet - but depending on your exact requirements, it doesn't look too difficult to read and write simple documents. Certainly it should be a lot easier than with the older formats. If you need to read the older formats, OpenOffice has an API and can read and write Office 2003 and older documents with more or less success.
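Because the formats are just zipped XML, you can already peek inside one with nothing but the standard library. A minimal sketch (the file name is hypothetical; word/document.xml is where Word keeps the main document body, per the published spec):

import zipfile
from xml.etree import ElementTree

docx = zipfile.ZipFile("example.docx")
xml_content = docx.read("word/document.xml")
tree = ElementTree.fromstring(xml_content)
print tree.tag  # the root element of the main document part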
Is there a way to prevent a SystemExit exception raised from sys.exit() from being caught?
The docs say that calling sys.exit() raises a SystemExit exception which can be caught in outer levels. I have a situation in which I want to definitively and unquestionably exit from inside a test case; however, the unittest module catches SystemExit and prevents the exit. This is normally great, but the specific situation I am trying to handle is one where our test framework has detected that it is configured to point to a non-test database. In this case I want to exit and prevent any further tests from being run. Of course since unittest traps the SystemExit and continues happily on its way, it is thwarting me. The only option I have thought of so far is using ctypes or something similar to call exit(3) directly but this seems like a pretty fugly hack for something that should be really simple.
You can call os._exit() to directly exit, without throwing an exception: import os os._exit(1) This bypasses all of the python shutdown logic, such as the atexit module, and will not run through the exception handling logic that you're trying to avoid in this situation. The argument is the exit code that will be returned by the process.
python dictionary update method
I have a list of tag strings. I am trying to initialize a dictionary with the key as the tag string and values as the array index. for i, ithTag in enumerate(tag): tagDict.update(ithTag=i) The above returns me {'ithTag': 608} 608 is the 608th index My problem is that while the i is being interpreted as a variable, Python is treating the "ithTag" as a string instead of a variable. I'm confused, it is kind of hard to google these kinds of specific questions. I hope I worded the title of this question correctly, Thanks!
You actually want to do this: for i, ithTag in enumerate(tag): tagDict[ithTag] = i The .update() method is used for updating a dictionary using another dictionary, not for changing a single key/value pair.
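Equivalently, you can build the whole mapping in a single expression:

tagDict = dict((t, i) for i, t in enumerate(tag))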
Is it possible to pass arguments into event bindings?
I haven't found an answer elsewhere and this doesn't appear to have been asked yet on SO. When creating an event binding in wxPython, is it possible to pass additional arguments to the event? For example, this is the normal way: b = wx.Button(self, 10, "Default Button", (20, 20)) self.Bind(wx.EVT_BUTTON, self.OnClick, b) def OnClick(self, event): self.log.write("Click! (%d)\n" % event.GetId()) But is it possible to have another argument passed to the method? Such that the method can tell if more than one widget is calling it but still return the same value? It would greatly reduce copy & pasting the same code but with different callers.
You can always use a lambda or another function to wrap up your method and pass another argument, not WX specific. b = wx.Button(self, 10, "Default Button", (20, 20)) self.Bind(wx.EVT_BUTTON, lambda event: self.OnClick(event, 'somevalue'), b) def OnClick(self, event, somearg): self.log.write("Click! (%d)\n" % event.GetId()) If you're out to reduce the amount of code to type, you might also try a little automatism like: class foo(whateverwxobject): def better_bind(self, type, instance, handler, *args, **kwargs): self.Bind(type, lambda event: handler(event, *args, **kwargs), instance) def __init__(self): self.better_bind(wx.EVT_BUTTON, b, self.OnClick, 'somevalue')
How to output CDATA using ElementTree
I've discovered that cElementTree is about 30 times faster than xml.dom.minidom and I'm rewriting my XML encoding/decoding code. However, I need to output XML that contains CDATA sections and there doesn't seem to be a way to do that with ElementTree. Can it be done?
After a bit of work, I found the answer myself. Looking at the ElementTree.py source code, I found there was special handling of XML comments and processing instructions. What they do is create a factory function for the special element type that uses a special (non-string) tag value to differentiate it from regular elements. def Comment(text=None): element = Element(Comment) element.text = text return element Then in the _write function of ElementTree that actually outputs the XML, there's special-case handling for comments: if tag is Comment: file.write("<!-- %s -->" % _escape_cdata(node.text, encoding)) In order to support CDATA sections, I created a factory function called CDATA, extended the ElementTree class, and changed the _write function to handle the CDATA elements. This still doesn't help if you want to parse an XML with CDATA sections and then output it again with the CDATA sections, but it at least allows you to create XMLs with CDATA sections programmatically, which is what I needed to do. The implementation seems to work with both ElementTree and cElementTree. import elementtree.ElementTree as etree #~ import cElementTree as etree def CDATA(text=None): element = etree.Element(CDATA) element.text = text return element class ElementTreeCDATA(etree.ElementTree): def _write(self, file, node, encoding, namespaces): if node.tag is CDATA: text = node.text.encode(encoding) file.write("\n<![CDATA[%s]]>\n" % text) else: etree.ElementTree._write(self, file, node, encoding, namespaces) if __name__ == "__main__": import sys text = """ <?xml version='1.0' encoding='utf-8'?> <text> This is just some sample text. </text> """ e = etree.Element("data") cdata = CDATA(text) e.append(cdata) et = ElementTreeCDATA(e) et.write(sys.stdout, "utf-8")
How do I convert a file's format from Unicode to ASCII using Python?
I use a 3rd party tool that outputs a file in Unicode format. However, I prefer it to be in ASCII. The tool does not have settings to change the file format. What is the best way to convert the entire file format using Python?
You can convert the file easily enough just using the unicode function, but you'll run into problems with Unicode characters without a straight ASCII equivalent. This blog recommends the unicodedata module, which seems to take care of roughly converting characters without direct corresponding ASCII values, e.g. >>> title = u"Klüft skräms inför på fédéral électoral große" is typically converted to Klft skrms infr p fdral lectoral groe which is pretty wrong. However, using the unicodedata module, the result can be much closer to the original text: >>> import unicodedata >>> unicodedata.normalize('NFKD', title).encode('ascii','ignore') 'Kluft skrams infor pa federal electoral groe'
Python List vs. Array - when to use?
If you are creating a 1d array, you can implement it as a List, or else use the 'array' module in the standard library. I have always used Lists for 1d arrays. What is the reason or circumstance where I would want to use the array module instead? Is it for performance and memory optimization, or am I missing something obvious?
Basically, Python lists are very flexible and can hold completely heterogeneous, arbitrary data, and they can be appended to very efficiently, in amortized constant time. If you need to shrink and grow your array time-efficiently and without hassle, they are the way to go. But they use a lot more space than C arrays. The array.array type, on the other hand, is just a thin wrapper on C arrays. It can hold only homogeneous data, all of the same type, and so it uses only sizeof(one object) * length bytes of memory. Mostly, you should use it when you need to expose a C array to an extension or a system call (for example, ioctl or fcntl). It's also a good way to represent a mutable string (array('B', bytes)) until that actually becomes available in Python 3.0. However, if you want to do math on a homogeneous array of numeric data, then you're much better off using NumPy, which can automatically vectorize operations on complex multi-dimensional arrays. To make a long story short: array.array is useful when you need a homogeneous C array of data for reasons other than doing math.
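A quick sketch of array.array for comparison; the type code fixes the element type up front:

from array import array

a = array('i', [1, 2, 3])  # 'i' means signed int; every element must be an int
a.append(4)
print a.itemsize           # bytes per element in the underlying C array
print a.tostring()         # the raw bytes, handy for I/O and system calls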
Finding the index of an item given a list containing it in Python
For a list ["foo", "bar", "baz"] and an item in the list "bar", what's the cleanest way to get its index (1) in Python?
>>> ["foo", "bar", "baz"].index("bar") 1 Reference: Data Structures > More on Lists
Alert boxes in Python?
Is it possible to produce an alert similar to JavaScript's alert("message") in Python, with an application running as a daemon? This will be run on Windows, most likely XP, but 2000 and Vista are also very real possibilities. Update: This is intended to run in the background and alert the user when certain conditions are met, I figure that the easiest way to alert the user would be to produce a pop-up, as it needs to be handled immediately, and other options such as just logging, or sending an email are not efficient enough.
what about this: import win32api win32api.MessageBox(0, 'hello', 'title') Additionally: win32api.MessageBox(0, 'hello', 'title', 0x00001000) will make the box appear on top of other windows, for urgent messages. See MessageBox function for other options.
Testing socket connection in Python
This question will expand on: http://stackoverflow.com/questions/68774/best-way-to-open-a-socket-in-python When opening a socket how can I test to see if it has been established, and that it did not timeout, or generally fail. Edit: I tried this: try: s.connect((address, '80')) except: alert('failed' + address, 'down') but the alert function is called even when that connection should have worked.
It seems that you're not catching the exception you want to catch out there :) If s is a socket.socket() object, then the right way to call .connect would be: import socket s = socket.socket() address = '127.0.0.1' port = 80 # port number is a number, not string try: s.connect((address, port)) # originally, it was # except Exception, e: # but this syntax is not supported anymore. except Exception as e: print("something's wrong with %s:%d. Exception is %s" % (address, port, e)) finally: s.close() Always try to see what kind of exception you're actually catching in a try-except block. You can check what types of exceptions in the socket module represent what kind of errors (timeout, unable to resolve address, etc) and make a separate except statement for each one of them - this way you'll be able to react differently to different kinds of problems.
Python: can I have a list with named indices?
In PHP I can name my array indices so that I may have something like: $shows = Array(0 => Array('id' => 1, 'name' => 'Sesaeme Street'), 1 => Array('id' => 2, 'name' => 'Dora The Explorer')); Is this possible in Python?
The PHP array using named indices is very similar to a Python dict: shows = [ {"id": 1, "name": "Sesaeme Street"}, {"id": 2, "name": "Dora The Explorer"}, ] See http://docs.python.org/tutorial/datastructures.html#dictionaries for more on this.
How to handle a broken pipe (SIGPIPE) in python?
I've written a simple multi-threaded game server in python that creates a new thread for each client connection. I'm finding that every now and then, the server will crash because of a broken-pipe/SIGPIPE error. I'm pretty sure it is happening when the program tries to send a response back to a client that is no longer present. What is a good way to deal with this? My preferred resolution would simply close the server-side connection to the client and move on, rather than exit the entire program. PS: This question/answer deals with the problem in a generic way; how specifically should I solve it?
Assuming that you are using the standard socket module, you should be catching the socket.error: (32, 'Broken pipe') exception (not IOError as others have suggested). This will be raised in the case that you've described, i.e. sending/writing to a socket for which the remote side has disconnected. import socket, errno, time # setup socket to listen for incoming connections s = socket.socket() s.bind(('localhost', 1234)) s.listen(1) remote, address = s.accept() print "Got connection from: ", address while 1: try: remote.send("message to peer\n") time.sleep(1) except socket.error, e: if isinstance(e.args, tuple): print "errno is %d" % e[0] if e[0] == errno.EPIPE: # remote peer disconnected print "Detected remote disconnect" else: # determine and handle different error pass else: print "socket error ", e remote.close() break except IOError, e: # Hmmm, Can IOError actually be raised by the socket module? print "Got IOError: ", e break Note that this exception will not always be raised on the first write to a closed socket - more usually the second write (unless the number of bytes written in the first write is larger than the socket's buffer size). You need to keep this in mind in case your application thinks that the remote end received the data from the first write when it may have already disconnected. You can reduce the incidence of this (but not entirely eliminate it) by using select.select() (or poll). Check for data ready to read from the peer before attempting a write. If select reports that there is data available to read from the peer socket, read it using socket.recv(). If this returns an empty string, the remote peer has closed the connection. Because there is still a race condition here, you'll still need to catch and handle the exception. Twisted is great for this sort of thing, however, it sounds like you've already written a fair bit of code.
How do I convert a list of ascii values to a string in python?
I've got a list in a Python program that contains a series of numbers, which are themselves ASCII values. How do I convert this into a "regular" string that I can echo to the screen?
You are probably looking for 'chr()': >>> L = [104, 101, 108, 108, 111, 44, 32, 119, 111, 114, 108, 100] >>> ''.join(chr(i) for i in L) 'hello, world'
In the Django admin interface, is there a way to duplicate an item?
Just wondering if there is an easy way to add the functionality to duplicate an existing listing in the admin interface? In data entry we have run into a situation where a lot of items share generic data with another item, and to save time it would be very nice to quickly duplicate an existing listing and only alter the changed data. Using a better model structure would be one way of reducing the duplication of the data, but there may be situation where the duplicated data needs to be changed on an individual basis in the future.
You can enable "save as" by just adding this to your ModelAdmin: save_as = True This replaces the "Save and add another" button with a "Save as" button. "Save as" means the object will be saved as a new object (with a new ID), rather than the old object.
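In context, that looks roughly like this (the app and model names are placeholders):

from django.contrib import admin
from myapp.models import Listing  # hypothetical app and model

class ListingAdmin(admin.ModelAdmin):
    save_as = True

admin.site.register(Listing, ListingAdmin)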
What is the difference between Python's re.search and re.match?
What is the difference between the search() and match() functions in the Python re module? I've read the documentation (current documentation), but I never seem to remember it. I keep having to look it up and re-learn it. I'm hoping that someone will answer it clearly with examples so that (perhaps) it will stick in my head. Or at least I'll have a better place to return with my question and it will take less time to re-learn it.
re.match is anchored at the beginning of the string. That has nothing to do with newlines, so it is not the same as using ^ in the pattern. As re.match documentation says: If zero or more characters at the beginning of string match the regular expression pattern, return a corresponding MatchObject instance. Return None if the string does not match the pattern; note that this is different from a zero-length match. Note: If you want to locate a match anywhere in string, use search() instead. re.search searches the entire string, as the documentation says: Scan through string looking for a location where the regular expression pattern produces a match, and return a corresponding MatchObject instance. Return None if no position in the string matches the pattern; note that this is different from finding a zero-length match at some point in the string. So if you need to match at the beginning of the string, or to match the entire string, use match. It is faster. Otherwise use search. The documentation has a specific section for match vs. search that also covers multiline strings: Python offers two different primitive operations based on regular expressions: match checks for a match only at the beginning of the string, while search checks for a match anywhere in the string (this is what Perl does by default). Note that match may differ from search even when using a regular expression beginning with '^': '^' matches only at the start of the string, or in MULTILINE mode also immediately following a newline. The “match” operation succeeds only if the pattern matches at the start of the string regardless of mode, or at the starting position given by the optional pos argument regardless of whether a newline precedes it. Now, enough talk. Time for some example code: # example code: string_with_newlines = """something someotherthing""" import re print re.match('some', string_with_newlines) # matches print re.match('someother', string_with_newlines) # won't match print re.match('^someother', string_with_newlines, re.MULTILINE) # also won't match print re.search('someother', string_with_newlines) # finds something print re.search('^someother', string_with_newlines, re.MULTILINE) # also finds something m = re.compile('thing$', re.MULTILINE) print m.match(string_with_newlines) # no match print m.match(string_with_newlines, pos=4) # matches print m.search(string_with_newlines) # also matches
Python style: multiple-line conditions in IFs
Sometimes I break long conditions in IFs to several lines. The most obvious way to do this is: if (cond1 == 'val1' and cond2 == 'val2' and cond3 == 'val3' and cond4 == 'val4'): do_something This isn't very appealing visually, because the action blends with the conditions. However, it is the natural way using correct Python indentation of 4 spaces. For the moment I'm using: if ( cond1 == 'val1' and cond2 == 'val2' and cond3 == 'val3' and cond4 == 'val4'): do_something But this isn't very pretty. :-) Can you recommend an alternative way?
You don't need to use 4 spaces on your second conditional line. Maybe use: if (cond1 == 'val1' and cond2 == 'val2' and cond3 == 'val3' and cond4 == 'val4'): do_something Also, don't forget the whitespace is more flexible than you might think: if ( cond1 == 'val1' and cond2 == 'val2' and cond3 == 'val3' and cond4 == 'val4' ): do_something if (cond1 == 'val1' and cond2 == 'val2' and cond3 == 'val3' and cond4 == 'val4'): do_something Both of those are fairly ugly though. Maybe lose the brackets (the Style Guide discourages this though)? if cond1 == 'val1' and cond2 == 'val2' and \ cond3 == 'val3' and cond4 == 'val4': do_something This at least gives you some differentiation. Or even: if cond1 == 'val1' and cond2 == 'val2' and \ cond3 == 'val3' and \ cond4 == 'val4': do_something I think I prefer: if cond1 == 'val1' and \ cond2 == 'val2' and \ cond3 == 'val3' and \ cond4 == 'val4': do_something Here's the Style Guide, which (since 2010) recommends using brackets.
What is the problem with reduce()?
There seems to be a lot of heated discussion on the net about the changes to the reduce() function in Python 3.0 and how it should be removed. I am having a little difficulty understanding why this is the case; I find it quite reasonable to use it in a variety of cases. If the contempt were simply subjective, I cannot imagine that such a large number of people would care about it. What am I missing? What is the problem with reduce()?
As Guido says in his The fate of reduce() in Python 3000 post: So now reduce(). This is actually the one I've always hated most, because, apart from a few examples involving + or *, almost every time I see a reduce() call with a non-trivial function argument, I need to grab pen and paper to diagram what's actually being fed into that function before I understand what the reduce() is supposed to do. So in my mind, the applicability of reduce() is pretty much limited to associative operators, and in all other cases it's better to write out the accumulation loop explicitly. There is an excellent example of a confusing reduce in the Functional Programming HOWTO article: Quick, what's the following code doing? total = reduce(lambda a, b: (0, a[1] + b[1]), items)[1] You can figure it out, but it takes time to disentangle the expression to figure out what's going on. Using a short nested def statements makes things a little bit better: def combine (a, b): return 0, a[1] + b[1] total = reduce(combine, items)[1] But it would be best of all if I had simply used a for loop: total = 0 for a, b in items: total += b Or the sum() built-in and a generator expression: total = sum(b for a,b in items) Many uses of reduce() are clearer when written as for loops.
How do I watch a file for changes using Python?
I have a log file being written by another process which I want to watch for changes. Each time a change occurs I'd like to read the new data in to do some processing on it. What's the best way to do this? I was hoping there'd be some sort of hook from the PyWin32 library. I've found the win32file.FindNextChangeNotification function but have no idea how to ask it to watch a specific file. If anyone's done anything like this I'd be really grateful to hear how... [Edit] I should have mentioned that I was after a solution that doesn't require polling. [Edit] Curses! It seems this doesn't work over a mapped network drive. I'm guessing Windows doesn't 'hear' any updates to the file the way it does on a local disk.
Did you try using Watchdog? Python API library and shell utilities to monitor file system events. Directory monitoring made easy with A cross-platform API. A shell tool to run commands in response to directory changes. Get started quickly with a simple example in Quickstart...
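A minimal sketch of watching one log file with Watchdog follows; the file and directory names are placeholders, so check the current Watchdog docs for API details:

import time
from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler

class LogChangeHandler(FileSystemEventHandler):
    def on_modified(self, event):
        if event.src_path.endswith("app.log"):  # hypothetical log file name
            print("log changed, read the new data here")

observer = Observer()
observer.schedule(LogChangeHandler(), path=".", recursive=False)
observer.start()
try:
    while True:
        time.sleep(1)
finally:
    observer.stop()
    observer.join()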
How can I use UUIDs in SQLAlchemy?
Is there a way to define a column (primary key) as a UUID in SQLAlchemy if using PostgreSQL (Postgres)?
I wrote this and the domain is gone but here's the guts.... Regardless of how my colleagues who really care about proper database design feel about UUIDs and GUIDs used for key fields, I often find I need to do it. I think it has some advantages over autoincrement that make it worth it. I've been refining a UUID column type for the past few months and I think I've finally got it solid. from sqlalchemy import types from sqlalchemy.dialects.mysql.base import MSBinary from sqlalchemy.schema import Column import uuid class UUID(types.TypeDecorator): impl = MSBinary def __init__(self): self.impl.length = 16 types.TypeDecorator.__init__(self,length=self.impl.length) def process_bind_param(self,value,dialect=None): if value and isinstance(value,uuid.UUID): return value.bytes elif value and not isinstance(value,uuid.UUID): raise ValueError,'value %s is not a valid uuid.UUID' % value else: return None def process_result_value(self,value,dialect=None): if value: return uuid.UUID(bytes=value) else: return None def is_mutable(self): return False id_column_name = "id" def id_column(): import uuid return Column(id_column_name,UUID(),primary_key=True,default=uuid.uuid4) # Usage my_table = Table('test', metadata, id_column(), Column('parent_id', UUID(), ForeignKey(table_parent.c.id))) I believe storing as binary (16 bytes) should end up being more efficient than the string representation (36 bytes?), and there seems to be some indication that indexing 16 byte blocks should be more efficient in MySQL than strings. I wouldn't expect it to be worse anyway. One disadvantage I've found is that at least in phpMyAdmin, you can't edit records because it implicitly tries to do some sort of character conversion for the "select * from table where id =..." and there's miscellaneous display issues. Other than that everything seems to work fine, and so I'm throwing it out there. Leave a comment if you see a glaring error with it. I welcome any suggestions for improving it. Unless I'm missing something the above solution will work if the underlying database has a UUID type. If it doesn't, you would likely get errors when the table is created. With the solution I came up with, I was targeting MS SQL Server originally and then went with MySQL in the end, so I think my solution is a little more flexible as it seems to work fine on MySQL and SQLite. Haven't bothered checking Postgres yet.
Is this the best way to get unique version of filename w/ Python?
Still 'diving in' to Python, and want to make sure I'm not overlooking something. I wrote a script that extracts files from several zip files, and saves the extracted files together in one directory. To prevent duplicate filenames from being over-written, I wrote this little function - and I'm just wondering if there is a better way to do this? Thanks! def unique_filename(file_name): counter = 1 file_name_parts = os.path.splitext(file_name) # returns ('/path/file', '.ext') while os.path.isfile(file_name): file_name = file_name_parts[0] + '_' + str(counter) + file_name_parts[1] counter += 1 return file_name I really do require the files to be in a single directory, and numbering duplicates is definitely acceptable in my case, so I'm not looking for a more robust method (tho' I suppose any pointers are welcome), but just to make sure that what this accomplishes is getting done the right way.
One issue is that there is a race condition in your above code, since there is a gap between testing for existence, and creating the file. There may be security implications to this (think about someone maliciously inserting a symlink to a sensitive file which they wouldn't be able to overwrite, but your program running with a higher privilege could). Attacks like these are why things like os.tempnam() are deprecated. To get around it, the best approach is to actually try to create the file in such a way that you'll get an exception if it fails, and on success, return the actually opened file object. This can be done with the lower level os.open functions, by passing both the os.O_CREAT and os.O_EXCL flags. Once opened, return the actual file (and optionally filename) you create. E.g., here's your code modified to use this approach (returning a (file, filename) tuple): def unique_file(file_name): counter = 1 file_name_parts = os.path.splitext(file_name) # returns ('/path/file', '.ext') while 1: try: fd = os.open(file_name, os.O_CREAT | os.O_EXCL | os.O_RDWR) return os.fdopen(fd), file_name except OSError: pass file_name = file_name_parts[0] + '_' + str(counter) + file_name_parts[1] counter += 1 [Edit] Actually, a better way, which will handle the above issues for you, is probably to use the tempfile module, though you may lose some control over the naming. Here's an example of using it (keeping a similar interface): def unique_file(file_name): dirname, filename = os.path.split(file_name) prefix, suffix = os.path.splitext(filename) fd, filename = tempfile.mkstemp(suffix, prefix+"_", dirname) return os.fdopen(fd), filename >>> f, filename=unique_file('/home/some_dir/foo.txt') >>> print filename /home/some_dir/foo_z8f_2Z.txt The only downside with this approach is that you will always get a filename with some random characters in it, as there's no attempt to create an unmodified file (/home/some_dir/foo.txt) first. You may also want to look at tempfile.TemporaryFile and NamedTemporaryFile, which will do the above and also automatically delete from disk when closed.
In Python, what is the difference between '/' and '//' when used for division?
Is there a benefit to using one over the other? They both seem to return the same results. >>> 6/3 2 >>> 6//3 2
In Python 3.0, 5 / 2 will return 2.5 and 5 // 2 will return 2. The former is floating point division, and the latter is floor division, sometimes also called integer division. In Python 2.2 or later in the 2.x line, there is no difference for integers unless you perform a from __future__ import division, which causes Python 2.x to adopt the behavior of 3.0 Regardless of the future import, 5.0 // 2 will return 2.0 since that's the floor division result of the operation. You can find a detailed description at https://docs.python.org/whatsnew/2.2.html#pep-238-changing-the-division-operator
How do I check out a file from perforce in python?
I would like to write some scripts in python that do some automated changes to source code. If the script determines it needs to change the file I would like to first check it out of perforce. I don't care about checking in because I will always want to build and test first.
Perforce has Python wrappers around their C/C++ tools, available in binary form for Windows, and source for other platforms: http://www.perforce.com/perforce/loadsupp.html#api You will find their documentation of the scripting API to be helpful: http://www.perforce.com/perforce/doc.current/manuals/p4script/p4script.pdf Use of the Python API is quite similar to the command-line client: PythonWin 2.5.1 (r251:54863, May 1 2007, 17:47:05) [MSC v.1310 32 bit (Intel)] on win32. Portions Copyright 1994-2006 Mark Hammond - see 'Help/About PythonWin' for further copyright information. >>> import P4 >>> p4 = P4.P4() >>> p4.connect() # connect to the default server, with the default clientspec >>> desc = {"Description": "My new changelist description", ... "Change": "new" ... } >>> p4.input = desc >>> p4.run("changelist", "-i") ['Change 2579505 created.'] >>> I'll verify it from the command line: P:\>p4 changelist -o 2579505 # A Perforce Change Specification. # # Change: The change number. 'new' on a new changelist. # Date: The date this specification was last modified. # Client: The client on which the changelist was created. Read-only. # User: The user who created the changelist. # Status: Either 'pending' or 'submitted'. Read-only. # Description: Comments about the changelist. Required. # Jobs: What opened jobs are to be closed by this changelist. # You may delete jobs from this list. (New changelists only.) # Files: What opened files from the default changelist are to be added # to this changelist. You may delete files from this list. # (New changelists only.) Change: 2579505 Date: 2008/10/08 13:57:02 Client: MYCOMPUTER-DT User: myusername Status: pending Description: My new changelist description
Delete Folder Contents in Python
How can I delete the contents of a local folder in Python? The current project is for Windows but I would like to see *nix also.
Updated to only delete files and to use the os.path.join() method suggested in the comments. If you also want to remove subdirectories, uncomment the elif statement. import os, shutil folder = '/path/to/folder' for the_file in os.listdir(folder): file_path = os.path.join(folder, the_file) try: if os.path.isfile(file_path): os.unlink(file_path) #elif os.path.isdir(file_path): shutil.rmtree(file_path) except Exception as e: print(e)
What is the best way to open a file for exclusive access in Python?
What is the most elegant way to solve this: open a file for reading, but only if it is not already opened for writing open a file for writing, but only if it is not already opened for reading or writing The built-in functions work like this >>> path = r"c:\scr.txt" >>> file1 = open(path, "w") >>> print file1 <open file 'c:\scr.txt', mode 'w' at 0x019F88D8> >>> file2 = open(path, "w") >>> print file2 <open file 'c:\scr.txt', mode 'w' at 0x02332188> >>> file1.write("111") >>> file2.write("222") >>> file1.close() scr.txt now contains '111'. >>> file2.close() scr.txt was overwritten and now contains '222' (on Windows, Python 2.4). The solution should work inside the same process (like in the example above) as well as when another process has opened the file. It is preferred, if a crashing program will not keep the lock open.
I don't think there is a fully cross-platform way. On Unix, the fcntl module will do this for you. However, on Windows (which I assume you are on, given the paths), you'll need to use the win32file module. Fortunately, there is a portable implementation (portalocker) using the platform-appropriate method at the Python Cookbook. To use it, open the file, and then call: portalocker.lock(file, flags) where flags are portalocker.LOCK_EX for exclusive write access, or LOCK_SH for shared, read access.
Splitting a semicolon-separated string to a dictionary, in Python
I have a string that looks like this: "Name1=Value1;Name2=Value2;Name3=Value3" Is there a built-in class/function in Python that will take that string and construct a dictionary, as though I had done this: dict = { "Name1": "Value1", "Name2": "Value2", "Name3": "Value3" } I have looked through the modules available but can't seem to find anything that matches. Thanks, I do know how to make the relevant code myself, but since such smallish solutions are usually mine-fields waiting to happen (ie. someone writes: Name1='Value1=2';) etc. then I usually prefer some pre-tested function. I'll do it myself then.
There's no builtin, but you can accomplish this fairly simply with a generator expression: s = "Name1=Value1;Name2=Value2;Name3=Value3" dict(item.split("=") for item in s.split(";")) [Edit] From your update you indicate you may need to handle quoting. This does complicate things, depending on what the exact format you are looking for is (what quote chars are accepted, what escape chars etc). You may want to look at the csv module to see if it can cover your format. Here's an example (note that the API is a little clunky for this example, as CSV is designed to iterate through a sequence of records, hence the .next() calls I'm making to just look at the first line; adjust to suit your needs): >>> s = "Name1='Value1=2';Name2=Value2;Name3=Value3" >>> dict(csv.reader([item], delimiter='=', quotechar="'").next() for item in csv.reader([s], delimiter=';', quotechar="'").next()) {'Name2': 'Value2', 'Name3': 'Value3', 'Name1': 'Value1=2'} Depending on the exact structure of your format, you may need to write your own simple parser however.
Configuration file with list of key-value pairs in python
I have a python script that analyzes a set of error messages and checks for each message if it matches a certain pattern (regular expression) in order to group these messages. For example "file x does not exist" and "file y does not exist" would match "file .* does not exist" and be accounted as two occurrences of "file not found" category. As the number of patterns and categories is growing, I'd like to put these couples "regular expression/display string" in a configuration file, basically a dictionary serialization of some sort. I would like this file to be editable by hand, so I'm discarding any form of binary serialization, and also I'd rather not resort to xml serialization to avoid problems with characters to escape (& <> and so on...). Do you have any idea of what could be a good way of accomplishing this? Update: thanks to Daren Thomas and Federico Ramponi, but I cannot have an external python file with possibly arbitrary code.
I sometimes just write a python module (i.e. file) called config.py or something with following contents: config = { 'name': 'hello', 'see?': 'world' } this can then be 'read' like so: from config import config config['name'] config['see?'] easy.
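If an importable Python file is ruled out (per the update), the standard library's ConfigParser gives a hand-editable file that cannot contain arbitrary code. A sketch, with the file and section names being illustrative:

# patterns.cfg looks like:
# [patterns]
# file .* does not exist = file not found
# connection to .* refused = connection refused

import ConfigParser

parser = ConfigParser.ConfigParser()
parser.optionxform = str  # keep keys case-sensitive; the default lowercases them
parser.read('patterns.cfg')
patterns = dict(parser.items('patterns'))  # maps regular expression -> display string

One caveat: keys are split on the first '=' or ':', so a regular expression containing those characters would need escaping or a different format.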
Counting array elements in Python
How can I count the number of elements in an array, because contrary to logic array.count(string) does not count all the elements in the array, it just searches for the number of occurrences of string.
The built-in function len() returns the number of elements in a list. Syntax: len(myArray) E.g.: myArray = [1, 2, 3] len(myArray) Output: 3
How do I capture an mp3 stream with python
What's the best way of capturing an mp3 stream coming off of http and saving it to disk with python? Thus far I've tried target = open(target_path, "w") conn = urllib.urlopen(stream_url) while True: target.write(conn.read(buf_size)) This gives me data but its garbled or wont play in mp3 players.
If you're on Windows, you might accidentally be doing CRLF conversions, corrupting the binary data. Try opening target in binary mode: target = open(target_path, "wb")
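Beyond the mode flag, note that the loop in the question never terminates: read() returns an empty string once the stream ends. A minimal sketch reusing the question's names (stream_url, buf_size, target_path):

import urllib

target = open(target_path, "wb")  # binary mode avoids CRLF corruption on Windows
conn = urllib.urlopen(stream_url)
while True:
    data = conn.read(buf_size)
    if not data:  # empty string signals end of stream
        break
    target.write(data)
target.close()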
How to make a python, command-line program autocomplete arbitrary things NOT interpreter
I am aware of how to setup autocompletion of python objects in the python interpreter (on unix). Google shows many hits for explanations on how to do this. Unfortunately, there are so many references to that it is difficult to find what I need to do, which is slightly different. I need to know how to enable, tab/auto completion of arbitrary items in a command-line program written in python. My specific use case is a command-line python program that needs to send emails. I want to be able to autocomplete email addresses (I have the addresses on disk) when the user types part of it (and optionally presses the TAB key). I do not need it to work on windows or mac, just linux.
Follow the cmd documentation and you'll be fine import cmd addresses = [ 'here@blubb.com', 'foo@bar.com', 'whatever@wherever.org', ] class MyCmd(cmd.Cmd): def do_send(self, line): pass def complete_send(self, text, line, start_index, end_index): if text: return [ address for address in addresses if address.startswith(text) ] else: return addresses if __name__ == '__main__': my_cmd = MyCmd() my_cmd.cmdloop() Output for tab -> tab -> send -> tab -> tab -> f -> tab (Cmd) help send (Cmd) send foo@bar.com here@blubb.com whatever@wherever.org (Cmd) send foo@bar.com (Cmd)
Reading/Writing MS Word files in Python
Is it possible to read and write Word (2003 and 2007) files in Python without using a COM object? I know that I can: f = open('c:\file.doc', "w") f.write(text) f.close() but Word will read it as an HTML file not a native .doc file.
See python-docx and its official documentation. Note that it works with the XML-based .docx format (Word 2007 and later) rather than the binary 2003-era .doc format. This has worked very well for me.
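A minimal round-trip sketch with python-docx; this follows its current API, which differs from early versions of the library:

from docx import Document

# write a new .docx file
document = Document()
document.add_paragraph('Hello from Python.')
document.save('hello.docx')

# read it back
document = Document('hello.docx')
for paragraph in document.paragraphs:
    print(paragraph.text)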
How can I, in python, iterate over multiple 2d lists at once, cleanly?
If I'm making a simple grid based game, for example, I might have a few 2d lists. One might be for terrain, another might be for objects, etc. Unfortunately, when I need to iterate over the lists and have the contents of a square in one list affect part of another list, I have to do something like this. for i in range(len(alist)): for j in range(len(alist[i])): if alist[i][j].isWhatever: blist[i][j].doSomething() Is there a nicer way to do something like this?
If anyone is interested in performance of the above solutions, here they are for 4000x4000 grids, from fastest to slowest: Brian: 1.08s (modified, with izip instead of zip) John: 2.33s DzinX: 2.36s ΤΖΩΤΖΙΟΥ: 2.41s (but object initialization took 62s) Eugene: 3.17s Robert: 4.56s Brian: 27.24s (original, with zip) EDIT: Added Brian's scores with izip modification and it won by a large amount! John's solution is also very fast, although it uses indices (I was really surprised to see this!), whereas Robert's and Brian's (with zip) are slower than the question creator's initial solution. So let's present Brian's winning function, as it is not shown in proper form anywhere in this thread: from itertools import izip for a_row,b_row in izip(alist, blist): for a_item, b_item in izip(a_row,b_row): if a_item.isWhatever: b_item.doSomething()
How to use Python to login to a webpage and retrieve cookies for later usage?
I want to download and parse webpage using python, but to access it I need a couple of cookies set. Therefore I need to login over https to the webpage first. The login moment involves sending two POST params (username, password) to /login.php. During the login request I want to retrieve the cookies from the response header and store them so I can use them in the request to download the webpage /data.php. How would I do this in python (preferably 2.6)? If possible I only want to use builtin modules.
import urllib, urllib2, cookielib username = 'myuser' password = 'mypassword' cj = cookielib.CookieJar() opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cj)) login_data = urllib.urlencode({'username' : username, 'password' : password}) opener.open('http://www.example.com/login.php', login_data) resp = opener.open('http://www.example.com/hiddenpage.php') print resp.read() resp.read() is the straight html of the page you want to open, and you can use opener to view any page using your session cookie.
How to break out of multiple loops in Python?
Given the following code (that doesn't work): while True: #snip: print out current state while True: ok = get_input("Is this ok? (y/n)") if ok == "y" or ok == "Y": break 2 #this doesn't work :( if ok == "n" or ok == "N": break #do more processing with menus and stuff Is there a way to make this work? Or do I have do one check to break out of the input loop, then another, more limited, check in the outside loop to break out all together if the user is satisfied? Edit-FYI: get_input is a short function I wrote that supports showing a prompt and default values and all that fanciness and returns stdin.readline().strip()
My first instinct would be to refactor the nested loop into a function and use return to break out.
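For example, using the question's get_input helper (a sketch, with the surrounding processing elided):

def user_is_satisfied():
    while True:
        ok = get_input("Is this ok? (y/n)")
        if ok in ("y", "Y"):
            return True
        if ok in ("n", "N"):
            return False

while True:
    # snip: print out current state
    if user_is_satisfied():
        break
    # do more processing with menus and stuff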
Google App Engine and 404 error
I've setup a static website on GAE using hints found elsewhere, but can't figure out how to return a 404 error. My app.yaml file looks like - url: (.*)/ static_files: static\1/index.html upload: static/index.html - url: / static_dir: static with all the static html/jpg files stored under the static directory. The above works for files that exist, but returns a null length file if they don't. The answer is probably to write a python script to return a 404 error, but how do you set things up to serve the static files that exist but run the script for files that don't? Here is the log from fetching a non-existent file (nosuch.html) on the development application server: ERROR 2008-11-25 20:08:34,084 dev_appserver.py] Error encountered reading file "/usr/home/ctuffli/www/tufflinet/static/nosuch.html": [Errno 2] No such file or directory: '/usr/home/ctuffli/www/tufflinet/static/nosuch.html' INFO 2008-11-25 20:08:34,088 dev_appserver.py] "GET /nosuch.html HTTP/1.1" 404 -
You need to register a catch-all script handler. Append this at the end of your app.yaml: - url: /.* script: main.py In main.py you will need to put this code: from google.appengine.ext import webapp from google.appengine.ext.webapp.util import run_wsgi_app class NotFoundPageHandler(webapp.RequestHandler): def get(self): self.error(404) self.response.out.write('<Your 404 error html page>') application = webapp.WSGIApplication([('/.*', NotFoundPageHandler)], debug=True) def main(): run_wsgi_app(application) if __name__ == "__main__": main() Replace <Your 404 error html page> with something meaningful. Or better use a template, you can read how to do that here. Please let me know if you have problems setting this up.
How can I quantify difference between two images?
Here's what I would like to do: I'm taking pictures with a webcam at regular intervals. Sort of like a time lapse thing. However, if nothing has really changed, that is, the picture pretty much looks the same, I don't want to store the latest snapshot. I imagine there's some way of quantifying the difference, and I would have to empirically determine a threshold. I'm looking for simplicity rather than perfection. I'm using python.
General idea Option 1: Load both images as arrays (scipy.misc.imread) and calculate an element-wise (pixel-by-pixel) difference. Calculate the norm of the difference. Option 2: Load both images. Calculate some feature vector for each of them (like a histogram). Calculate distance between feature vectors rather than images. However, there are some decisions to make first. Questions You should answer these questions first: Are images of the same shape and dimension? If not, you may need to resize or crop them. PIL library will help to do it in Python. If they are taken with the same settings and the same device, they are probably the same. Are images well-aligned? If not, you may want to run cross-correlation first, to find the best alignment first. SciPy has functions to do it. If the camera and the scene are still, the images are likely to be well-aligned. Is exposure of the images always the same? (Is lightness/contrast the same?) If not, you may want to normalize images. But be careful, in some situations this may do more wrong than good. For example, a single bright pixel on a dark background will make the normalized image very different. Is color information important? If you want to notice color changes, you will have a vector of color values per point, rather than a scalar value as in gray-scale image. You need more attention when writing such code. Are there distinct edges in the image? Are they likely to move? If yes, you can apply edge detection algorithm first (e.g. calculate gradient with Sobel or Prewitt transform, apply some threshold), then compare edges on the first image to edges on the second. Is there noise in the image? All sensors pollute the image with some amount of noise. Low-cost sensors have more noise. You may wish to apply some noise reduction before you compare images. Blur is the most simple (but not the best) approach here. What kind of changes do you want to notice? This may affect the choice of norm to use for the difference between images. Consider using Manhattan norm (the sum of the absolute values) or zero norm (the number of elements not equal to zero) to measure how much the image has changed. The former will tell you how much the image is off, the latter will tell only how many pixels differ. Example I assume your images are well-aligned, the same size and shape, possibly with different exposure. For simplicity, I convert them to grayscale even if they are color (RGB) images. You will need these imports: import sys from scipy.misc import imread from scipy.linalg import norm from scipy import sum, average Main function, read two images, convert to grayscale, compare and print results: def main(): file1, file2 = sys.argv[1:1+2] # read images as 2D arrays (convert to grayscale for simplicity) img1 = to_grayscale(imread(file1).astype(float)) img2 = to_grayscale(imread(file2).astype(float)) # compare n_m, n_0 = compare_images(img1, img2) print "Manhattan norm:", n_m, "/ per pixel:", n_m/img1.size print "Zero norm:", n_0, "/ per pixel:", n_0*1.0/img1.size How to compare. 
img1 and img2 are 2D SciPy arrays here: def compare_images(img1, img2): # normalize to compensate for exposure difference, this may be unnecessary # consider disabling it img1 = normalize(img1) img2 = normalize(img2) # calculate the difference and its norms diff = img1 - img2 # elementwise for scipy arrays m_norm = sum(abs(diff)) # Manhattan norm z_norm = norm(diff.ravel(), 0) # Zero norm return (m_norm, z_norm) If the file is a color image, imread returns a 3D array; average the RGB channels (the last array axis) to obtain intensity. No need to do it for grayscale images (e.g. .pgm): def to_grayscale(arr): "If arr is a color image (3D array), convert it to grayscale (2D array)." if len(arr.shape) == 3: return average(arr, -1) # average over the last axis (color channels) else: return arr Normalization is trivial; you may choose to normalize to [0,1] instead of [0,255]. arr is a SciPy array here, so all operations are element-wise: def normalize(arr): rng = arr.max()-arr.min() amin = arr.min() return (arr-amin)*255/rng Run the main function: if __name__ == "__main__": main() Now you can put this all in a script and run it against two images. If we compare an image to itself, there is no difference: $ python compare.py one.jpg one.jpg Manhattan norm: 0.0 / per pixel: 0.0 Zero norm: 0 / per pixel: 0.0 If we blur the image and compare to the original, there is some difference: $ python compare.py one.jpg one-blurred.jpg Manhattan norm: 92605183.67 / per pixel: 13.4210411116 Zero norm: 6900000 / per pixel: 1.0 P.S. Entire compare.py script. Update: relevant techniques As the question is about a video sequence, where frames are likely to be almost the same, and you look for something unusual, I'd like to mention some alternative approaches which may be relevant: background subtraction and segmentation (to detect foreground objects) sparse optical flow (to detect motion) comparing histograms or some other statistics instead of images I strongly recommend taking a look at the “Learning OpenCV” book, Chapters 9 (Image parts and segmentation) and 10 (Tracking and motion). The former teaches the background subtraction method, the latter gives some info on optical flow methods. All methods are implemented in the OpenCV library. If you use Python, I suggest using OpenCV ≥ 2.3 and its cv2 Python module. The simplest version of background subtraction: learn the average value μ and standard deviation σ for every pixel of the background compare current pixel values to the range of (μ-2σ, μ+2σ) or (μ-σ, μ+σ) More advanced versions may take into account time series for every pixel and handle non-static scenes (like moving trees or grass). The idea of optical flow is to take two or more frames, and assign a velocity vector to every pixel (dense optical flow) or to some of them (sparse optical flow). To estimate sparse optical flow, you may use the Lucas-Kanade method (it is also implemented in OpenCV). Obviously, if there is a lot of flow (a high average over the max values of the velocity field), then something is moving in the frame, and subsequent images are more different. Comparing histograms may help to detect sudden changes between consecutive frames. This approach was used in Courbon et al, 2010: Similarity of consecutive frames. The distance between two consecutive frames is measured. If it is too high, it means that the second frame is corrupted and thus the image is eliminated.
The Kullback–Leibler distance, or mutual entropy, on the histograms of the two frames: d(p, q) = Σ_i p(i) · log( p(i) / q(i) ), where p and q are the histograms of the frames, is used. The threshold is fixed at 0.2.
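To make the simplest background-subtraction scheme above concrete, here is a rough NumPy sketch; background_frames (a list of 2D grayscale arrays) and new_frame are hypothetical inputs:

import numpy as np

stack = np.array(background_frames, dtype=float)  # shape: (n_frames, height, width)
mu = stack.mean(axis=0)    # per-pixel mean of the background
sigma = stack.std(axis=0)  # per-pixel standard deviation
# flag pixels that fall outside the (mu - 2*sigma, mu + 2*sigma) range
foreground = np.abs(new_frame - mu) > 2 * sigma
if foreground.mean() > 0.01:  # e.g. more than 1% of pixels changed
    print "something is moving in the frame"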
Daemon Threads Explanation
In the Python documentation it says: A thread can be flagged as a "daemon thread". The significance of this flag is that the entire Python program exits when only daemon threads are left. The initial value is inherited from the creating thread. Does anyone have a clearer explanation of what that means or a practical example showing where you would want to set threads as daemonic? To clarify for me: so the only time you wouldn't set threads as daemonic is if you wanted them to continue running after the main thread exits?
Some threads do background tasks, like sending keepalive packets, or performing periodic garbage collection, or whatever. These are only useful when the main program is running, and it's okay to kill them off once the other, non-daemon, threads have exited. Without daemon threads, you'd have to keep track of them, and tell them to exit, before your program can completely quit. By setting them as daemon threads, you can let them run and forget about them, and when your program quits, any daemon threads are killed automatically.
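A small demonstration; the sleep loop stands in for whatever background work the thread does:

import threading
import time

def keepalive():
    while True:
        time.sleep(1)  # stand-in for sending a keepalive packet

t = threading.Thread(target=keepalive)
t.setDaemon(True)  # must be set before start()
t.start()
print "main thread done"  # the program exits here; the daemon thread is killed automatically

Without the setDaemon(True) line, the program would hang forever waiting for the infinite loop to finish.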
Getting the pattern back from a compiled re?
Assume I have created a compiled re: x = re.compile('^\d+$') Is there a way to extract the pattern string (^\d+$) back from the x?
You can get it back with x.pattern; see the Python documentation on Regular Expression Objects.
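A quick interactive check:

>>> import re
>>> x = re.compile('^\d+$')
>>> x.pattern
'^\\d+$'
>>> print x.pattern
^\d+$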
How to get a complete list of object's methods and attributes?
dir(re.compile(pattern)) does not return pattern as one of the list's elements. Namely it returns: ['__copy__', '__deepcopy__', 'findall', 'finditer', 'match', 'scanner', 'search', 'split', 'sub', 'subn'] According to the manual, it is supposed to contain the object's attributes' names, the names of its class's attributes, and recursively of the attributes of its class's base classes. It also says that the list is not necessarily complete. Is there a way to get the complete list? I always assumed that dir returns a complete list but apparently it does not... Also: is there a way to list only attributes? Or only methods? Edit: this is actually a bug in python -> supposedly it is fixed in the 3.0 branch (and perhaps also in 2.6)
For the complete list of attributes, the short answer is: no. The problem is that the attributes are actually defined as the arguments accepted by the getattr built-in function. As the user can reimplement __getattr__, suddenly allowing any kind of attribute, there is no possible generic way to generate that list. The dir function returns the keys in the __dict__ attribute, i.e. all the attributes accessible if the __getattr__ method is not reimplemented. For the second question, it does not really make sense. Actually, methods are callable attributes, nothing more. You could, though, filter for callable attributes and, using the inspect module, determine which are class methods, instance methods, or functions.
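A sketch of that filtering approach (the class here is purely illustrative):

import inspect

class Example(object):
    attribute = 42
    def method(self):
        pass

obj = Example()
methods = [name for name in dir(obj)
           if inspect.isroutine(getattr(obj, name))]
attributes = [name for name in dir(obj)
              if not callable(getattr(obj, name))]
print methods     # 'method' plus inherited special methods such as '__init__'
print attributes  # 'attribute' plus non-callable internals such as '__dict__'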
How to convert a file to utf-8 in Python?
I need to convert a bunch of files to utf-8 in Python, and I have trouble with the "converting the file" part. I'd like to do the equivalent of: iconv -t utf-8 $file > converted/$file # this is shell code Thanks!
You can use the codecs module, like this: import codecs BLOCKSIZE = 1048576 # or some other, desired size in bytes with codecs.open(sourceFileName, "r", "your-source-encoding") as sourceFile: with codecs.open(targetFileName, "w", "utf-8") as targetFile: while True: contents = sourceFile.read(BLOCKSIZE) if not contents: break targetFile.write(contents) EDIT: added BLOCKSIZE parameter to control file chunk size.
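To mirror the shell loop over a whole directory, the snippet can be wrapped in a function; the directory layout and source encoding below are assumptions:

import codecs
import os

BLOCKSIZE = 1048576

def convert_file(source_name, target_name, source_encoding):
    with codecs.open(source_name, "r", source_encoding) as source_file:
        with codecs.open(target_name, "w", "utf-8") as target_file:
            while True:
                contents = source_file.read(BLOCKSIZE)
                if not contents:
                    break
                target_file.write(contents)

# assumes a ./converted directory already exists, like the shell example
for name in os.listdir('.'):
    if os.path.isfile(name):
        convert_file(name, os.path.join('converted', name), 'latin-1')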
Converting XML to JSON using Python?
I've seen a fair share of ungainly XML->JSON code on the web, and having interacted with Stack's users for a bit, I'm convinced that this crowd can help more than the first few pages of Google results can. So, we're parsing a weather feed, and we need to populate weather widgets on a multitude of web sites. We're looking now into Python-based solutions. This public weather.com RSS feed is a good example of what we'd be parsing (our actual weather.com feed contains additional information because of a partnership w/them). In a nutshell, how should we convert XML to JSON using Python?
xmltodict (full disclosure: I wrote it) can help you convert your XML to a dict+list+string structure, following this "standard". It is Expat-based, so it's very fast and doesn't need to load the whole XML tree in memory. Once you have that data structure, you can serialize it to JSON: import xmltodict, json o = xmltodict.parse('<e> <a>text</a> <a>text</a> </e>') json.dumps(o) # '{"e": {"a": ["text", "text"]}}'
Is there a function in Python to print all the current properties and values of an object?
So what I'm looking for here is something like PHP's print_r function. This is so I can debug my scripts by seeing what's the state of the object in question.
You want vars() mixed with pprint: from pprint import pprint pprint(vars(your_object))
How do I log an exception at warning- or info-level with trace back using the python logging framework?
Using something like this: try: #Something... except BaseException, excep: logger = logging.getLogger("componet") logger.warning("something raised an exception: " + excep) logger.info("something raised an exception: " + excep) I would rather not have it on the error-level cause in my special case it is not an error. At least not in that python-process.
From the logging documentation: There are two keyword arguments in kwargs which are inspected: exc_info which, if it does not evaluate as false, causes exception information to be added to the logging message. If an exception tuple (in the format returned by sys.exc_info()) is provided, it is used; otherwise, sys.exc_info() is called to get the exception information. So do: logger.warning("something raised an exception: %s", excep, exc_info=True) Note the %s placeholder: concatenating a string and an exception object, as in the question, raises a TypeError.
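A self-contained demonstration (the logger name just mirrors the question):

import logging

logging.basicConfig()
logger = logging.getLogger("componet")

try:
    1 / 0
except Exception, excep:
    logger.warning("something raised an exception: %s", excep, exc_info=True)

This prints the message at WARNING level followed by the full traceback.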
In Django how do I notify a parent when a child is saved in a foreign key relationship?
I have the following two models: class Activity(models.Model): name = models.CharField(max_length=50, help_text='Some help.') entity = models.ForeignKey(CancellationEntity) ... class Cancellation(models.Model): activity = models.ForeignKey(Activity) date = models.DateField(default=datetime.now().date()) description = models.CharField(max_length=250) ... I would like the Activity model to be aware when a Cancellation related to it is saved (both inserted or updated). What is the best way to go about this? Thanks in advance.
What you want to look into is Django's signals (check out this page too), specifically the model signals, and more specifically the post_save signal. Signals are Django's version of a plugin/hook system. The post_save signal gets sent every time a model is saved, whether it's updated or created (and it'll let you know if it was created). This is how you'd use signals to get notified when an Activity has a Cancellation: from django.db.models.signals import post_save class Activity(models.Model): name = models.CharField(max_length=50, help_text='Some help.') entity = models.ForeignKey(CancellationEntity) @classmethod def cancellation_occurred(cls, sender, instance, created, raw, **kwargs): # instance is the Cancellation that was just saved; # its Activity is reachable through the foreign key activity = instance.activity # do something with activity ... class Cancellation(models.Model): activity = models.ForeignKey(Activity) date = models.DateField(default=datetime.now().date()) description = models.CharField(max_length=250) ... post_save.connect(Activity.cancellation_occurred, sender=Cancellation)